Wednesday, November 25, 2009

Step 2: Creating Logical Groups Within the Application Structure





The first thing to do after crawling a site
is to look at the URL path structure of the Web application. In most cases, the
path structure reveals information about application functionality. For
example, let's consider the crawler results from http://www.example.com/, a few
URLs of which are:



http://www.example.com/index.asp
http://www.example.com/login/login.asp
http://www.example.com/login/secure/transaction.asp?id=AD678GWRT67344&user=bob&doit=add
http://www.example.com/download/download.cgi?file=tools.exe
http://www.example.com/download/download.cgi?file=setup.exe
http://www.example.com/scripts/search.cgi
http://www.example.com/scripts/mail.pl
http://www.example.com/admin/administrator.asp
http://www.example.com/news/index.asp
http://www.example.com/public/default.asp
http://www.example.com/servlet/showtrans?id=13445466
http://www.example.com/servlet/setcustinfo
http://www.example.com/contacts/
http://www.example.com/profile/
http://www.example.com/private/
http://www.example.com/logoff/logoff.asp



What can we infer from this result? Some things are apparent.
The /login/ section is some sort of entry point to the areas of the application
that are restricted to registered users. The /logoff/ section clears the user
session after the user has finished using the application. The /download/
section hosts resources made available to all users; files such as setup.exe
and tools.exe can be downloaded from the site via the download.cgi script.
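
Since the point of this step is to carve the crawl output into logical groups, the first pass can be automated. The following is a minimal Python sketch, not part of the original methodology, that buckets URLs by their first path segment; the URL list is just a subset of the example.com crawl shown above.

# Group crawled URLs by their first path segment to form logical sections.
from urllib.parse import urlparse
from collections import defaultdict

crawled = [
    "http://www.example.com/index.asp",
    "http://www.example.com/login/login.asp",
    "http://www.example.com/download/download.cgi?file=tools.exe",
    "http://www.example.com/scripts/search.cgi",
    "http://www.example.com/admin/administrator.asp",
    "http://www.example.com/servlet/showtrans?id=13445466",
    "http://www.example.com/private/",
]

groups = defaultdict(list)
for url in crawled:
    path = urlparse(url).path
    segments = [s for s in path.split("/") if s]
    # Use the first directory name as the group; top-level files go under "/".
    top = segments[0] if len(segments) > 1 or path.endswith("/") else "/"
    groups[top].append(url)

for directory in sorted(groups):
    print(directory)
    for u in groups[directory]:
        print("    " + u)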



Let's see what happens when we make a request for the
/private/ area. If we click this particular link in the browser, we get the
HTTP authentication dialog box shown in Figure 8-7.



Figure 8-7. Authentication dialog box for example.com




If we try sending the same HTTP request by using netcat, we
get this response:



nc www.example.com 80
HEAD /private/ HTTP/1.0
 
HTTP/1.1 401 Authorization Required
Date: Mon, 18 Mar 2002 09:40:24 GMT
Server: Microsoft-IIS/5.0
WWW-Authenticate: Basic realm="special directory"
Connection: close


The HTTP response code from the server is 401, which indicates
that the /private/ area is password protected. The authentication required here
is HTTP Basic authentication, in which the username and password string is sent
Base64 encoded. Now we know that in order to proceed further, we need valid
user credentials. The Web hacker notes to herself either to reuse some
previously cracked accounts here or to try to enter this page with an automated
brute-force script.
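
To see how little protects a Basic-authenticated area, here is a hedged Python sketch of the request such a brute-force script would repeat. The username and password are placeholders, not values taken from the text; a real script would simply loop this over a word list.

# Encode placeholder credentials as HTTP Basic and replay the request to /private/.
import base64
import http.client

user, password = "alice", "s3cret"   # hypothetical guesses
token = base64.b64encode(f"{user}:{password}".encode()).decode()

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("HEAD", "/private/", headers={"Authorization": "Basic " + token})
resp = conn.getresponse()

# A 401 means the guess failed; any other status is worth a closer look.
print(resp.status, resp.reason)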



Another interesting directory is the /scripts/ directory. It
has a few server-side scripts, such as mail.pl and search.cgi. By simply
looking at the script names, we infer that mail.pl probably is used to send
e-mail and that search.cgi probably is used to perform some keyword searches.
The /admin/ directory hosts a resource called administrator.asp, which appears
to be a Web site administration page available only to the site administrator
with proper credentials.



As we look at the collection of URLs as a whole, another
thought emerges. Grouping ASP files, CGI files, and Java servlets in one Web
application isn't commonly done. Their collective presence suggests that this
Web application is hosted by more than one Web server platform. The ASP pages
(and perhaps even the Perl and CGI pages) are hosted by a Windows server running
IIS, whereas the Java servlets are hosted by another server. Recall from
Chapter 6 that various areas of a Web application can be hosted on various
platforms and that these platforms are linked to function as a cohesive unit.
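
There is no definitive way to fingerprint the back end from a URL alone, but file extensions and well-known path prefixes make reasonable hints. The sketch below encodes the guesses made in this paragraph as a simple heuristic; the mapping is our own assumption, not a rule.

# Guess the hosting platform from extensions and path prefixes (heuristic only).
from urllib.parse import urlparse

PLATFORM_HINTS = {
    ".asp": "Windows / IIS (ASP)",
    ".cgi": "CGI script, possibly Perl, on almost any server",
    ".pl":  "Perl, often via a CGI or mod_perl setup",
}

def guess_platform(url):
    path = urlparse(url).path
    if path.startswith("/servlet/"):
        return "Java servlet container"
    for ext, hint in PLATFORM_HINTS.items():
        if path.endswith(ext):
            return hint
    return "unknown"

for url in ("http://www.example.com/login/login.asp",
            "http://www.example.com/scripts/mail.pl",
            "http://www.example.com/servlet/setcustinfo"):
    print(url, "->", guess_platform(url))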



There are no fixed rules when it comes to
extracting such information. Past experience and knowledge of commonly followed
Web application development practices are a great help when you're performing
this kind of analysis. In the following chapters, we discuss various attack
scenarios that further illustrate the concepts introduced here. Each link in a
Web application can harbor a potential security flaw that an attacker can
exploit. Our focus here is to associate crawled URLs with their likely
functional uses and, where possible, to identify areas of potential abuse; in
this way, we can attach such inferences to the Web resources found.
Figure 8-8 shows the Funnel Web Profiler's tree view of directories and files
found on http://www.foundstone.com/. What we need to do is tack our logical
inferences onto a list of this type.



Figure 8-8. Funnel tree
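
One way to attach these inferences to the crawl results, assuming nothing fancier than a plain dictionary, is sketched below; the notes are the guesses made earlier in this section, and the format is just one possibility.

# Annotate each discovered directory with the functional inference made for it.
inferences = {
    "/login/":    "entry point for registered users",
    "/logoff/":   "clears the user session",
    "/download/": "public downloads served by download.cgi",
    "/scripts/":  "server-side scripts: mail.pl, search.cgi",
    "/admin/":    "administration page, requires admin credentials",
    "/private/":  "HTTP Basic authentication (401); brute-force candidate",
}

for path, note in inferences.items():
    print(f"{path:<12} {note}")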




Step-2 Wrap-Up



We demonstrated how to look at URL strings
and Web resources to figure out their individual purposes in the Web
application. In Step 3 we describe how to compile all the information
discovered and reorganize it into a more usable format.



 




