
6.1. Attacking Web Applications at the Source

Historically, network- and operating system-level vulnerabilities have been the sweet spot for attackers. These days, though, hardened firewalls, patched systems, and secure server configurations make those targets far less attractive than web applications. By their nature, web applications are designed for the convenience of the end user, and security is either overlooked or bolted on as an afterthought. Web developers also lack the real-world security experience of battle-tested firewall and network administrators, who have been targeted by attackers for years. With little or no security experience, developers remain unaware of the insecure coding practices that result in web application vulnerabilities. The solution is to test for these vulnerabilities before attackers find them.

The following are two of the most common testing approaches:


Black box

Via the user interface or any other external entry point, this approach pursues the attack vectors that provide the most unauthorized access to the application and/or the underlying systems.


White box

Via access to application source code only, this approach identifies insecure coding practices.

The ideal strategy combines identifying insecure code at the source and verifying whether the identified code is exploitable through the user interface. This approach illustrates both the impact and cause of web application vulnerabilities. Access to the application source allows the tester to view the application's "true" attack surface.

In general, an application's attack surface is any interface exposed by default. Attack surface in the context of a web application is any accessible page or file in the webroot, including all parameters accepted by that page or file.


When testing from the source code perspective, it's possible to identify every file, page, and page parameter available to attack. If you're testing solely through the user interface, you might miss a page parameter that is not part of the normal user experience and that provides privileged application access to those who know it exists. With access to the application source, you can more easily identify such back doors and remove them from the code base. In addition, the source answers questions about functionality not readily available through the user interface. For example, when fuzzing page parameters through the user interface, the application might respond with unanticipated behavior.

In the context of a web application, fuzzing entails systematically sending HTTP requests with header, cookie, and parameter values containing unexpected strings of varying character combinations, types, and lengths. Of interest are the HTTP response codes and the page content returned by the web application.
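
For illustration, here is a minimal parameter-fuzzing sketch in Python using the requests library. The target URL, parameter name, and payload list are hypothetical placeholders; a real fuzzer would also mutate headers, cookies, and POST bodies and would use far larger payload sets.

    import requests

    TARGET = "http://testapp.example/account"   # hypothetical target page
    PARAM = "id"                                # hypothetical parameter

    payloads = [
        "A" * 5000,                   # overly long string
        "'",                          # single quote: SQL error disclosure
        "../../etc/passwd",           # path traversal attempt
        "<script>alert(1)</script>",  # reflected cross-site scripting probe
    ]

    for payload in payloads:
        resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
        # The response code and page content are what we examine for
        # unanticipated behavior.
        print(resp.status_code, len(resp.text), repr(payload[:40]))

Each unusual response code, error message, or echoed payload is a lead worth investigating further, either through the interface or in the source.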


The tester can choose to spend time investigating application responses through the user interface, or dive straight into the source to reveal the actual code implementation. Both techniques might eventually yield the same result: a verified vulnerability. The latter of the two techniques has the added advantage of quickly finding not only the vulnerability but also its root cause.

On the other hand, access to a live instance of the application provides a means for verifying whether a piece of code is vulnerable, and more important, whether it is actually exploitable. This level of access provides other testing benefits as well. If you have access to only the application source, it can be difficult to know where to start looking for vulnerabilities. With access to the live application, you can build an initial map of the user experience. Typically, you do this by crawling through the web application using local proxy tools to log every request and response. With a map of the user experience defined by the request and response log, the tester can return to the source and more intelligently target specific areas of code. For example, the proxy logs contain URLs that often map to specific files and classes, providing a starting point for targeting the most relevant code.
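
As a sketch of this mapping step, the following Python snippet condenses a local proxy's request log into a list of paths and the parameter names seen for each. The log format (one "METHOD URL" entry per line) and the filename are assumptions; real proxy tools export richer formats.

    from collections import defaultdict
    from urllib.parse import urlparse, parse_qs

    surface = defaultdict(set)   # path -> parameter names observed

    with open("proxy_requests.log") as log:   # hypothetical log file
        for line in log:
            method, _, url = line.strip().partition(" ")
            parsed = urlparse(url)
            surface[parsed.path].update(parse_qs(parsed.query))

    # Each logged path usually maps to a file or class in the webroot,
    # giving a prioritized starting point for source review.
    for path, params in sorted(surface.items()):
        print(path, sorted(params))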

When testing with access to the application source, a disciplined approach is required. Large webroots can swamp even the most experienced testers. An initial test plan that first targets high-risk code helps to avoid incomplete results and missed vulnerabilities. The next section of this chapter outlines a repeatable and measurable approach to source code analysis that strives to accomplish the following goals, in the order shown:

  1. Identify as many potential vulnerabilities/exposures as possible in the allotted time period.

  2. Target high-risk vulnerabilities first.

  3. Confirm the vulnerability through exploitation.

Before delving into details of how to satisfy these objectives, it's important to understand the architecture and code commonly seen when testing web applications.

6.1.1. Scope of a Web Application

Depending on its architecture and size, a production web application can reside on a single server or span across many different servers and tiers, as shown in Figure 6-1. Ideally, a production web application's source is grouped logically into presentation, business, and data layers and is separated physically across tiers. Anyone with experience testing web application security knows this is rarely the case. Table 6-1 provides a brief description of the types of code commonly found at each tier.

Table 6-1. Typical web application architecture

Tier        | Code description                                                                                        | Example code
Client      | Client-side/mobile code                                                                                 | JavaScript, VBScript, ActiveX, Java applets
Frontend    | Hosts the user interface (UI)/presentation code; can also contain business logic and data access code  | ASP (VBScript), ASPX (C#/VB.NET), Java/JSP, PHP, Perl
Middle tier | Hosts code implementing a company's business logic and data access code                                | C, C++, C#, VB.NET, Java
Backend     | Hosts code for the retrieval and storage of application data; can also implement business logic rules  | T-SQL, PL/SQL, MySQL dialect


Figure 6-1. Typical web application architecture


6.1.2. Symptomatic Code Approach

Given the complexity, size, and custom nature of production web applications, the previously outlined testing objectives might seem daunting. However, a defined test plan driven by the identification of symptom code gives the tester a solid foundation for identifying initial high-risk code. As the tester becomes more familiar with the source, the initial test plan can be adjusted to target discovered instances of insecure code. From the tester's perspective, knowing the types of code that result in one or more security vulnerabilities is the key to finding the causes of those vulnerabilities. The symptomatic code approach relies on the tester understanding not just the common web application vulnerabilities but, more importantly, the insecure coding practices that cause them.

6.1.3. Symptom Code

As the name of the approach implies, insecure coding practices or techniques that result in web application vulnerabilities are called symptoms or, more specifically, symptom code. To avoid confusion, the terms symptom code and vulnerability are defined as follows:


Symptom code

Insecure code or coding practices that often lead to exposures or vulnerabilities in web applications. A symptom is not necessarily exploitable, and a single symptom can lead to one or more vulnerabilities.


Vulnerability

An exploitable symptom that allows an attacker to manipulate the application in a fashion that was not intended by the developer.

Table 6-2 provides a list of example symptoms and the potential vulnerabilities/attacks that stem from them. This list assumes the reader is already familiar with common web application vulnerabilities and attacks.

Table 6-2. Symptoms of common web application vulnerabilities/attacks

Symptom                         | Vulnerability/attack
Dynamic SQL                     | SQL injection
Dangerous functions             | Buffer overflows
Methods for executing commands  | Command injection
File I/O methods                | Arbitrary filesystem interaction (creation, deletion, modification, or reading of any file)
Writing inline request objects  | Cross-site scripting
Cookie access methods           | Broken access control
Hardcoded plain-text passwords  | Unauthorized access, information leakage

The presence of a symptom doesn't guarantee the code has a particular vulnerability. Once you identify a symptom, you need to analyze the surrounding code to determine whether it is used in an insecure manner. For example, the presence of file I/O methods in the application source doesn't necessarily mean arbitrary filesystem interaction is possible. However, if the code uses a path location from an external input source to access the local filesystem, it will likely result in an arbitrary filesystem interaction vulnerability. With access to a live instance of the application, you can further verify the exploitability of this vulnerability.
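
The contrast is easier to see in code. The following contrived Python functions both contain the file I/O symptom; the report directory and function names are hypothetical. In the first, the path comes straight from external input, so a value such as ../../etc/passwd escapes the intended directory. In the second, the surrounding code constrains the input, so the same symptom is far less likely to be exploitable.

    import os

    REPORT_DIR = "/var/www/app/reports"   # hypothetical report location

    def get_report_unsafe(user_supplied_name):
        # Symptom plus likely vulnerability: path built directly from
        # external input and used to read the filesystem.
        return open(os.path.join(REPORT_DIR, user_supplied_name)).read()

    def get_report_safer(user_supplied_name):
        # Same symptom (file I/O), but the surrounding code restricts the
        # resolved path to the report directory.
        candidate = os.path.realpath(os.path.join(REPORT_DIR, user_supplied_name))
        if not candidate.startswith(REPORT_DIR + os.sep):
            raise ValueError("path escapes report directory")
        return open(candidate).read()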

The strength of an experienced tester is knowledge of the symptom code and poor coding techniques that lead to application vulnerabilities. A skilled tester works from a defined set of insecure code instances, techniques, and conditions (similar to those shown in Table 6-2) that should be flagged at the beginning of a review. This list provides the tester with an initial test plan for quickly identifying easily exploited vulnerabilities; the tester can then concentrate on less common vulnerabilities specific to the current application.
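
A first pass over the webroot can be automated with a simple pattern scan. The sketch below is a minimal Python example keyed loosely to Table 6-2; the regular expressions are hypothetical starting points and would need tuning for the application's actual languages and frameworks.

    import os
    import re

    SYMPTOMS = {
        "dynamic SQL": re.compile(r'"(SELECT|INSERT|UPDATE|DELETE)[^"]*"\s*\+', re.I),
        "command execution": re.compile(r'\b(system|exec|popen|shell_exec)\s*\(', re.I),
        "file I/O": re.compile(r'\b(fopen|FileInputStream|file_get_contents)\s*\(', re.I),
        "inline request write": re.compile(r'(Response\.Write|echo|print).{0,40}(Request|\$_GET|\$_POST)', re.I),
    }

    def scan(webroot):
        for dirpath, _, filenames in os.walk(webroot):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    lines = open(path, errors="ignore").read().splitlines()
                except OSError:
                    continue
                for lineno, line in enumerate(lines, 1):
                    for label, pattern in SYMPTOMS.items():
                        if pattern.search(line):
                            print(f"{path}:{lineno} [{label}] {line.strip()[:80]}")

    scan("webroot")   # hypothetical source tree location

Each hit is only a symptom; the surrounding code still has to be reviewed, as described above, to decide whether it is a vulnerability.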

6.1.4. User-Controllable Input

Most web application vulnerabilities stem from poorly validated, user-controllable input: any data accepted into the application, regardless of method or source. Typically, the data is sent between client and server in either direction and is completely controllable by the user, regardless of where in the HTTP(S) request it appears (GET/POST parameters, headers, cookies, etc.). When testing from the source, we might consider identifying each potential user input and tracing its data path through the code. Once the application accepts the input data, it typically reassigns it to variables, carries it across multiple layers of code, and uses it in some transaction or database query. Eventually, the data might return to the user along a similar or alternative data path. The problem is that some paths lead to symptom code and others do not, and applications with a large number of inputs are likely to have many complex data paths, so tracing data paths from the point of input is inefficient. Given time-constrained testing windows, a more efficient approach is to target symptom code first and trace the paths of any related data back to sources of user-controllable input.
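
The following contrived Python example shows why tracing from the symptom outward works. Flask and sqlite3 stand in for whatever presentation and data layers the real application uses, and every name here is hypothetical.

    import sqlite3
    from flask import Flask, request

    app = Flask(__name__)

    # Data layer: the symptom (dynamic SQL) lives here.
    def find_orders(customer_id):
        query = "SELECT * FROM orders WHERE customer_id = '" + customer_id + "'"
        return sqlite3.connect("app.db").execute(query).fetchall()

    # Business layer: simply passes the value along.
    def orders_for(customer_id):
        return find_orders(customer_id)

    # Presentation layer: user-controllable input enters here.
    @app.route("/orders")
    def orders():
        return str(orders_for(request.args.get("customer_id", "")))

    # Tracing backward from find_orders() through orders_for() reaches
    # request.args, so the symptom is fed by user-controllable input and is
    # a likely SQL injection vulnerability.

Starting at find_orders() and working back to the request is one short trace; starting at every input and working forward would mean following every path each parameter can take.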
