
WEB APPLICATION PENETRATION TESTING

HTTP BASICS
A deeper analysis of the HTTP protocol
HTTP is a stateless protocol, and the web works in and around this
protocol. To understand the HTTP protocol and its derivatives, keeping
web application penetration testing in scope, we study this particular
protocol down to its roots.
To the hardworking people and to ninjas!


The HTTP Protocol

The Basics of HTTP Protocol in Security

HTTP Versions

HTTP Authentication

Security Centric HTTP Research Paper

OPENFIRE TECHNOLOGIES
Author: Shritam Bhowmick

22nd February, 2014

INTRODUCTION
This paper lays out, in brief, the required knowledge of the HTTP protocol as used
by web technologies, so that the security risks affecting the protocol can be
studied in later versions of the ongoing research documents. HTTP has been the
most widely adopted web protocol since 1996. The first version of HTTP was 0.9,
which quickly changed to 1.0 after the IETF (Internet Engineering Task Force)
recommended that protocol developers maintain strict, streamlined compression due
to the huge traffic flow the protocol has to go through. After version 1.0,
version 1.1 came out in 1997 as the proposed standard HTTP protocol in use, and
the year 2000 saw the dawn of web 2.0. HTTP version 1.1 is still prevalent today
(year 2014), and version 2.0 of the same protocol is under active development and
experimentation.
One can go through the RFCs at the IETF to study HTTP and other internet based
protocols and their massive progression. HTTP also plays an important role in
web applications: a web application has to be accessed via a text based protocol
such as HTTP itself, and it is therefore prone to security flaws. Most security
centric web application flaws come either from injection attacks at the server
side itself (from misconfiguration, unsanitized input and poor coding practices)
or from HTTP method tampering.
The paper contains a brief study of both version 1.0 and 1.1 of HTTP. HTTP 2.0 is
also added to this document for further study. HTTP/2.0 would be implemented
alongside the recent HTML5 standards, and hence the required knowledge of the
protocol is almost mandatory and recommended. In continuation of this document,
other document releases might follow, in conjunction with the topics currently
being researched. All the topics being researched are kept in series, in a
step-by-step format, covering legacy as well as generic security centric
problems. All the papers are concerned with web application flaws, web
application vulnerability research, web application vulnerability vectors, and
web application exploitation and penetration testing.
HTTP BASICS
What is HTTP?
HTTP is a stateless protocol. A protocol is a set of rules which has to be
followed to maintain a strict boundary of instructions for a networking system.
HTTP stands for Hypertext Transfer Protocol, which in itself suggests it is a
protocol built out of text commands and used to transfer data via this pre-set
of command instructions. The commands themselves are text based, which makes it
easy, from a networking point of view, to collaborate, distribute, interact and
communicate among hypertext networking systems. Statelessness of the HTTP
protocol means that each request/response pair is independent: the protocol
itself retains no state from one request to the next. It also means that if a
request is sent from a client to a server via a medium, this newly introduced
medium could tamper with the request methods and then send the newly formed
request on to the server. This has security concerns, which is what this paper
is all about: to know the HTTP protocol closely.
The World Wide Web largely depends on the HTTP protocol, as the protocol itself
is light, fast, efficient and dependent on the use of status codes, headers and
request methods. This use of headers, status codes and request methods makes
the HTTP protocol best suited for packet based communication within a network,
a group of networks or an isolated network. The study of HTTP is a must for any
web application security enthusiast, as knowledge of HTTP will be needed later
for HTTP tampering, HTTP based attacks, HTTP CR/LF attacks, and other HTTP
attacks. It is to be added that HTTP is an application level protocol and works
at the application layer of the OSI model. Find the references on HTTP below.
HTTP is said to be an asymmetric request-response client-server protocol. It is
also a pull based protocol, which means the client pulls information from the
server rather than the server pushing information to the client. To study the
HTTP protocol, we must have a basic idea of a URL, or Uniform Resource Locator.
There are URIs and URNs too; those come later.
References:
1.) http://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol
2.) http://www.w3.org/Protocols/HTTP/AsImplemented.html
Terminology:
Referenced from: http://www.w3.org/Protocols/rfc2616/rfc2616-sec1.html#sec1
Connection: A transport layer virtual circuit established between two programs
for the purpose of communication.
Message: The basic unit of HTTP communication, consisting of a structured
sequence of octets matching the syntax defined in section 4 (of the referenced
RFC) and transmitted via the connection.
Request: An HTTP request message.
Response: An HTTP response message.
Resource: A network data object or service that can be identified by a URI, as
defined in section 3.2. Resources may be available in multiple representations
(e.g. multiple languages, data formats, size, and resolutions) or vary in other
ways.
Entity: The information transferred as the payload of a request or response. An
entity consists of metainformation in the form of entity-header fields and
content in the form of an entity-body, as described in section 7.
Representation: An entity included with a response that is subject to content
negotiation, as described in section 12. There may exist multiple representations
associated with a particular response status.
Content negotiation: The mechanism for selecting the appropriate
representation when servicing a request, as described in section 12. The
representation of entities in any response can be negotiated (including error
responses).
Variant: A resource may have one, or more than one, representation(s)
associated with it at any given instant. Each of these representations is termed
a `variant'. Use of the term `variant' does not necessarily imply that the resource
is subject to content negotiation.
Client: A program that establishes connections for the purpose of sending
requests.
User agent: The client which initiates a request. These are often browsers,
editors, spiders (web-traversing robots), or other end user tools.
Server: An application program that accepts connections in order to service
requests by sending back responses. Any given program may be capable of
being both a client and a server; our use of these terms refers only to the role
being performed by the program for a particular connection, rather than to
the program's capabilities in general. Likewise, any server may act as an origin
server, proxy, gateway, or tunnel, switching behavior based on the nature of
each request.
Origin server: The server on which a given resource resides or is to be created.
Proxy: An intermediary program which acts as both a server and a client for the
purpose of making requests on behalf of other clients. Requests are serviced
internally or by passing them on, with possible translation, to other servers. A
proxy MUST implement both the client and server requirements of this
specification. A "transparent proxy" is a proxy that does not modify the request
or response beyond what is required for proxy authentication and
identification. A "non-transparent proxy" is a proxy that modifies the request or
response in order to provide some added service to the user agent, such as
group annotation services, media type transformation, protocol reduction, or
anonymity filtering. Except where either transparent or non-transparent
behavior is explicitly stated, the HTTP proxy requirements apply to both types of
proxies.
Gateway: A server which acts as an intermediary for some other server. Unlike a
proxy, a gateway receives requests as if it were the origin server for the
requested resource; the requesting client may not be aware that it is
communicating with a gateway.
Tunnel: An intermediary program which is acting as a blind relay between two
connections. Once active, a tunnel is not considered a party to the HTTP
communication, though the tunnel may have been initiated by an HTTP
request. The tunnel ceases to exist when both ends of the relayed connections
are closed.
Cache: A program's local store of response messages and the subsystem that
controls its message storage, retrieval, and deletion. A cache stores cacheable
responses in order to reduce the response time and network bandwidth
consumption on future, equivalent requests. Any client or server may include a
cache, though a cache cannot be used by a server that is acting as a tunnel.
Cacheable: A response is cacheable if a cache is allowed to store a copy of
the response message for use in answering subsequent requests. The rules for
determining the cacheability of HTTP responses are defined in section 13. Even
if a resource is cacheable, there may be additional constraints on whether a
cache can use the cached copy for a particular request.
First-hand: A response is first-hand if it comes directly and without unnecessary
delay from the origin server, perhaps via one or more proxies. A response is also
first-hand if its validity has just been checked directly with the origin server.
Explicit expiration time: The time at which the origin server intends that an entity
should no longer be returned by a cache without further validation.
Heuristic expiration time: An expiration time assigned by a cache when no
explicit expiration time is available.
Age: The age of a response is the time since it was sent by, or successfully
validated with, the origin server.
Freshness lifetime: The length of time between the generation of a response
and its expiration time.
Fresh: A response is fresh if its age has not yet exceeded its freshness lifetime.
Stale: A response is stale if its age has passed its freshness lifetime.
Semantically transparent: A cache behaves in a "semantically transparent"
manner, with respect to a particular response, when its use affects neither the
requesting client nor the origin server, except to improve performance. When a
cache is semantically transparent, the client receives exactly the same
response (except for hop-by-hop headers) that it would have received had its
request been handled directly by the origin server.
Validator: A protocol element (e.g., an entity tag or a Last-Modified time) that
is used to find out whether a cache entry is an equivalent copy of an entity.
Upstream/downstream: Upstream and downstream describe the flow of a
message: all messages flow from upstream to downstream.
Inbound/outbound: Inbound and outbound refer to the request and response
paths for messages: "inbound" means "traveling toward the origin server", and
"outbound" means "traveling toward the user agent"

What is a URL?
URL is short for Uniform Resource Locator. A generic URL has four parts to it:
1. Protocol
2. Hostname
3. Port
4. Path and file
To break down all four of the mentioned entities of a URL, let's take a look at
how an actual URL appears:
http://www.mytestsite.com:80/destination_path/index.htm
Here:
1. http is the protocol.
2. www.mytestsite.com is the host.
3. 80 is the port; if no port is mentioned, 80 is taken as the default.
4. destination_path/index.htm is the destination directory with a file within it.
Broadly, HTTP is the stateless protocol used by the client to communicate with,
or request a resource from, the server. HTTP works on the TCP suite, and 80 is
the default port number for a server acting as an HTTP server to listen on. The
hostname is www.mytestsite.com, which is a DNS host name; it could be an IP
address too, for example something like 192.168.78.11 on a local network. The
rest is the path and the filename of the requested resource on the server. The
server may have directories and sub-directories ending with the particular file
being requested; in this case it's index.htm. There could have been an index.htm
file directly under the document root directory of the server too; however, the
directory path is shown here for illustration, to convey that directories can be
included as well. HTTP is an application level protocol.
Application Level Protocols
There are different application level protocols via which clients and servers
talk to each other. These protocols are varied, and a few of them are listed
below for reference purposes only.
1. SMTP: Simple Mail Transfer Protocol.
2. GMTP: Group Mail Transfer Protocol.
3. FTP: File Transfer Protocol.
4. TFTP: Trivial File Transfer Protocol.
5. ARP: Address Resolution Protocol.
6. RARP: Reverse Address Resolution protocol.
7. Telnet: Remote Terminal Access Protocol.
8. ADC
9. AFP
10. BACnet
11. BitTorrent
12. BGP: Border Gateway Protocol.
13. BOOTP: Bootstrap Protocol.
14. CAMEL
15. Diameter: An Authentication, Authorization and Accounting Protocol.
16. DICT: Dictionary Protocol.
17. DNS: Domain Name System.
18. DSM-CC: Digital Storage Media Command and Control.
19. DNSP: Dynamic Host Control Protocol.
20. ED2K: A peer to peer protocol.
21. Finger: User profile Information Protocol.
22. Gnutella: Peer to peer file swapping protocol.
23. Gopher: Hierarchical Hyperlinkable protocol.
24. HTTP: Hypertext Transfer Protocol.
25. HTTPS: Hypertext Transfer Protocol Secure.
26. IMAP: Internet Message Access Protocol.
27. IRC: Internet Relay Chat.
28. ISUP: ISDN User Part.
29. LDAP: Lightweight Directory Access Protocol.
30. MIME: Multipurpose Internet Mail Extensions.
31. MSNP: Microsoft Notification Protocol.
32. MAP: Mobile Application Part.
33. NetBIOS: File sharing and Name Resolution Protocol for Windows.
34. NNTP: Network News Transfer Protocol.
35. NTCIP: National Transportation Communications for Intelligent
Transportation System Protocol.
36. POP3: Post Office Protocol Version 3.
37. RADIUS: An Authentication, Authorization and Accounting Protocol.
38. RDP: Remote Desktop Protocol.
39. Rlogin: A UNIX remote login protocol.
40. Rsync: File Transfer Protocol for backups, copying and mirroring.
41. RTP: Real-Time Transport Protocol.
42. RTSP: Real Time Streaming Protocol.
43. SIP: Session Initiation Protocol.
44. SISNAPI: Siebel Internet Session Network API.
45. SNMP: Simple Network Management Protocol.
46. SMB: Microsoft Server Message Block Protocol.
47. STUN: Session Traversal Utilities for NAT.
48. TUP: Telephone User Part.
49. TCAP: Transaction Capabilities Application Part.
50. WebDAV: Web Distributed Authoring and Versioning.
51. XMPP: Extensible Messaging and Presence Protocol.
52. SSH: Secure Shell.
53. SOAP: Simple Object Access Protocol.
54. DHCP: Dynamic Host Configuration Protocol.

This does not exhaust the application level protocols, however; application
protocols keep emerging, as new applications implement their own protocols, and
so the list keeps being updated. Application protocols take the support of the
underlying transport protocol in the OSI model. There are 7 layers in the OSI
model; the application and transport layers are two of them. In web application
exploitation we will generally be confined to, and work in and around, these two
layers. The application layer protocol generally varies with the programs or
software in use, because they require different environments and different
negotiations besides packet synchronization at the TCP/IP level, and many more
factors result in a varied protocol for certain software or programs. That being
said, we will now deal with the most used protocol, which is HTTP; it can easily
be used by a browser or any other client which supports the HTTP protocol. The
first of its kind was HTTP/0.9, later extended to version 1.1 for better
performance and added functionality!
References: http://www.slideshare.net/Eacademy4u/application-protocols





HTTP/1.0
HTTP/1.0 is the older HTTP standard, the successor to HTTP/0.9. HTTP/1.0 is
still in use alongside the more widely used HTTP/1.1. Compared to HTTP/0.9,
HTTP/1.0 brought major improvements. HTTP/1.0 is a hypertext based protocol
which can be exercised via telnet on a Windows or Linux machine; other operating
systems work too. Requests to a server for the availability of a certain
resource can be issued via telnet, and the commands end with a CR/LF. CR is
Carriage Return and LF is Line Feed. When a site doesn't support this protocol
because it's old, a forced response is bounced back to the telnet session.
Below is an example of communicating via a program called "telnet", which speaks
the telnet protocol. It can talk to a remote server based on the HTTP protocol,
and the default port shown is 80, because port 80 is the default HTTP server
port. Telnet can also be used to connect to other ports. Later we will see how
to achieve the same via a "Swiss army knife" tool called "netcat". It's dubbed
the Swiss army knife as it provides major additional functionality along with
listening on certain ports and communicating with remote ports as the situation
demands. We will further take this to Nmap (a network mapper) and, using its
internal NSE (Nmap Scripting Engine), determine the services running, detect the
web applications running, determine whether a web application firewall is
present, and automate a variety of penetration test tasks.
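
As an illustration (the exact banner and response will vary by server; the host
and IP reuse the earlier example values), a raw HTTP/1.0 exchange over telnet
might look like the following, where the request line is typed manually and
terminated by hitting Enter twice (CR/LF CR/LF):

root@coded:~# telnet www.mytestsite.com 80
Trying 192.168.78.11...
Connected to www.mytestsite.com.
Escape character is '^]'.
GET /destination_path/index.htm HTTP/1.0

HTTP/1.0 200 OK
Content-Type: text/html
Last-Modified: Sat, 22 Feb 2014 10:05:00 GMT

<html> ... the body of index.htm follows ... </html>
Connection closed by foreign host.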


In HTTP/1.0, each connection is closed after one request/response exchange. The
connection is closed by the server itself. HTTP/1.0 has the TCP slow-start
disadvantage; a reference on TCP slow start is documented in the reference
section of this document. HTTP/1.0 supports caching via its If-Modified-Since
header. It is not mandatory to specify the Host header in HTTP/1.0, but the
protocol allows adding one. Overall, HTTP/1.0 is faster than 0.9 and
comparatively slower than the newly introduced HTTP version 1.1.
References: TCP-Slow-start: http://en.wikipedia.org/wiki/Slow-start
HTTP/1.0 header fields are distinct and can be classified as the following:
HTTP_ACCEPT (or ACCEPT)
HTTP_USER_AGENT (or User-Agent)
Content-Type
Date
Expires
From
If-Modified-Since
Last-Modified
Location
Referer
Server
Each field describes a different function. Because field information is
information about the data being sent through the protocol, it is termed
meta-information; HTTP header fields are therefore meta-information. Anything
preceding the actual data (HTML, CSS or another form) constitutes the HTTP
header fields. The HTTP header fields are responsible for crypto-identification,
authentication and user cookies. The above header fields are supported by HTTP
version 1.0. A brief description of what these headers do is given in this
document.
HTTP_ACCEPT (or Accept) header specifies what MIME-types the client would
accept provided the client request a resource from a remote server. The MIME-
type has to be divided with type and subtype along with the Accept: header
specification. Each item on this specified header must be separated by a
comma. Examples could be: Accept: txt/xml, xml/css,

The HTTP_USER_AGENT (or User-Agent) header specifies the browser agent in use
on the client side to get resources from the server. The general representation
of the User-Agent header is of the format User-Agent:
Software/version_library/version. The User-Agent is used to trace back protocol
violation errors, for statistical purposes to record which client software is
in use, and for tailoring responses in accordance with the User-Agent to
attempt a more efficient browsing experience. The User-Agent matters in web
application security because servers parse and act on it in various formats,
which makes it vulnerable to User-Agent injections such as Cross Site Scripting
attacks and SQL Injection attacks. Malware analysts frequently use the
User-Agent field to study botnets, because malware traffic generally carries a
different User-Agent string which might look abnormal. The User-Agent header
could be left blank or omitted, but the RFCs say user agents should include it.
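
For illustration, a typical browser User-Agent header (the exact values vary
with the client software and version) might look like:

User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:27.0) Gecko/20100101 Firefox/27.0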

The Content-Type header field states the media type of the body sent by the
server to a client. The media types are in accordance with the RFCs. An example
specification for Content-Type could be: Content-Type: text/html;
charset=ISO-8859-4

The Date header field represents the HTTP-date, the time at which the HTTP
message originated. There are three standard formats for the date field
according to the RFC. All three formats remain supported in the more recent
HTTP version 1.1; the first and third formats give the full four-digit year,
while the second, commonly used format does not spell out the full year of the
message. The syntax is Date: HTTP-date, for example:
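Date: Tue, 15 Nov 1994 08:12:31 GMT
(this is the first, RFC 1123 style format; the value itself is illustrative)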

The Expires header field conveys that the content is to expire, or cease to be
valid, after a certain period of time. This date and time stamp is given by the
Expires header field. After this date and time, the content of the HTTP message
should not be served by any caching servers or proxies to a client, until and
unless the originating party updates the Expires field and lets the content be
accessed again, or extends the date and time stamp of validity. The format for
specifying the Expires header is: Expires: HTTP-date, for example:
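Expires: Thu, 01 Dec 1994 16:00:00 GMT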

The From header among the HTTP fields is used to specify an email address for
reference and to provide details of the human factor involved in making the
request. Robot spiders use this field to let servers know who is responsible
for requests that access particular resources on a remote server. The format
for using the From header is:
From: example_email_address@example_domain.com
This header is generally for reference purposes and should not be taken as an
access token for any authorization.

The If-Modified-Since header field requests a resource only if it has been
modified since the date and time mentioned in the header field. If the resource
has not been modified since the time and date stamp mentioned in the header,
the server responds with a 304, which states "Not Modified". Here 304 is a
status code; status codes will be discussed at a later stage in this document.
The point is to check whether the content has been modified, so that stale
content is not served to the client. The format is If-Modified-Since:
HTTP-date, for example:
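If-Modified-Since: Sat, 29 Oct 1994 19:43:31 GMT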

If-Modified-Since is a conditional HTTP header field.
The Last-Modified header field is an HTTP header which states the date and time
at which the content was last modified. If the content has not been modified
since the client's cached copy was obtained, the cache takes care of serving
it, so the content does not have to be transferred once again. This avoids
re-requesting unchanged content which the client already has, and thus saves
page rendering time and unnecessary data transfers. The format could be
Last-Modified: HTTP-date, for example:
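Last-Modified: Tue, 15 Nov 1994 12:45:26 GMT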

The Last-Modified header thus saves unnecessary data transfers, and this is
where caching comes into play in data packet transfer via the HTTP protocol.
The Location header field is an HTTP header sent by the server to let the
client know that the URI (Uniform Resource Identifier) the client provided has
moved to another location; the path of the new location is sent in this header
field (Location). The client then picks up the path and sends another request
to the server, including the specified path, to obtain the resource. One can
use curl with the -IL switches to inspect the Location header and see how it
works.


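A sketch of that exchange, captured with curl -IL (output trimmed to the
relevant headers; the exact target of the second redirect depends on the
country you query from):

root@coded:~# curl -IL http://google.com
HTTP/1.1 301 Moved Permanently
Location: http://www.google.com/

HTTP/1.1 302 Found
Location: http://www.google.co.in/

HTTP/1.1 200 OK
Content-Type: text/html; charset=ISO-8859-1
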
The exchange shows that http://google.com was requested from the Google server;
the server replied with a 301 status code, which means "moved permanently", and
added a Location header to its response, adding "www" to the originally
requested URI. The client picks up this path and queries the server again to
ask whether the resource is available, to which the server responds with a
redirect again, this time with a 302 status code, which means "moved
temporarily" (we'll look at status codes later in this document). The server
responds with a Location header again, the client picks up the path and sends
a 3rd query as a request for the resource, and this time the server responds
with a 200 OK status code, meaning the resource the client was looking for was
found on the server and could now be served.
The Referer HTTP header field is generally for the server's benefit. "Referer"
might look like a misspelling, and originally it was one (RFC 2616, section
14.36, ftp://ftp.isi.edu/in-notes/rfc2616.txt), but since it was misspelled in
the first place and the authors chose not to alter the RFC, it stayed official.
The Referer field serves maintenance, optimized caching and request logging.
The Referer field is asked for (rather, requested!) by the server-side web
application so that the client provides the server with the URI from which the
request originated. This could be for inspection purposes, for determining
where the traffic comes from, or for back-linking.
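
For instance, a client that followed a link from the example page used earlier
in this document would send:

Referer: http://www.mytestsite.com/destination_path/index.htm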

The Referer header is prone to spam, injections and scripting attacks from a
web application security aspect. References are attached below for further
reading.
References: http://resources.infosecinstitute.com/sql-injection-http-headers/
References for malware inspection purposes:
http://isc.sans.edu/diary/The+Importance+of+HTTP+Headers+When+Investigating+Malicious+Sites+/10279
The Server HTTP header field is a response header field sent by a server when
responding to a request. This field provides information on the web
technologies used on the server side to handle the requests sent by a client.
From a web application security aspect, the Server header provides
informational data such as the version number of the web technology in use
(the HTTP daemon, httpd) and more. A sample Server header in an HTTP exchange
between a client and a server would look like:
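Server: Apache/2.2.22 (Debian)
(the product and version strings shown are illustrative and depend entirely on
the server)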

That being said, we now move on to HTTP 1.1 (version 1.1), which is the current
official HTTP communication protocol. We have looked at how HTTP/1.0 works for
most clients and how the server responds to the clients' queries. We also saw
that the HTTP protocol itself is a pull protocol and not a push protocol; by
that it is meant that the client requests and pulls information from the
server, and the server responds to the client's requests. The server itself
does not push information to the client. Furthermore, we looked at some
HTTP/1.0 headers which remain in use in HTTP/1.1. This is because a vast number
of clients, and the internet itself, depend on these header fields. For
instance, the HTTP version and the GET/POST requests containing headers like
Server, User-Agent, Content-Type and many more are of prime importance.



HTTP/1.1
Welcome to HTTP/1.1, a further enhancement of the HTTP/1.0 protocol. Since we
will mostly be dealing with HTTP/1.1 security concerns from here on, we take a
much closer look at HTTP/1.1 than at HTTP/1.0.
The HTTP daemon is an HTTP server which accepts client requests and responds to
these requests based on the availability of the resource, the authorization for
the resource asked for, and the location of the resource being asked for. All
requests are answered with an appropriate status code, as we saw earlier with
HTTP/1.0.
The overall operation can be much more complicated when a tunnel, proxy or
other clients are involved in the process. As implied in the terminology
section of this document, the origin server is the server which holds the
resource asked for by the client. The client may be asking for the resource via
a forwarding agent, which could be a proxy. A proxy in this case is a mediator:
it accepts the requests made by the client, parses them and adds its own
attributes or rewrites the HTTP messages, and then forwards them to the server.
Or this might not happen, because if the proxy is supposed to forward the
request to a gateway, it may do so. A gateway is another mediator here, which
accepts the requests made by the proxy and then, without any modifications,
sends the message on to the others involved in the process. The last party
receiving the request, the one which finally makes the original request to the
origin server, is the party which gets the response back from the server, and
the server has to respond only to this party. The original client which made
the request is called the user agent, and the one the request was originally
meant for is the origin server. At times the connection or the request chain
may be denied to other parties, provided the rules require that they do not use
tunnels, do not modify the original request, or stay within range of the
neighboring user agent.
To keep it simple, one should know that the request chain in some way or other
involves a complex variety of mediators: proxies, tunnels, other clients, other
servers, etc. Hence the request could be modified at several points, which
matters where security is the point. We'll come to tunnels later. Basics first:
we will discuss HTTP/1.1 broadly.


The exact diagram and protocols don't matter as long as you know what is going
on and what the connection chain is meant to make clear. Here "peer0" could
consist of a bunch of other clients as well, finally reaching the server, which
is the origin server.
HTTP/1.1 is designed to accommodate the wide diversity of web technologies in
use and being deployed. This happens without unnecessary changes to the
protocol itself, because the protocol is maintained in such a way that it
negotiates the connection and data transfer to sustain the diversity in use.
If any errors occur at any point, HTTP should be able to diagnose the problem
and throw back the error via error codes, or in some form which can be
presented normally to the UA (user agent). From here on we'll look at
particular header fields as each topic comes up. The general HTTP header fields
mentioned above for HTTP/1.0 are common to both versions and remain valid
unless noted otherwise in this document. We look at the HTTP/1.1 protocol from
a security perspective and examine its access controls, the authentication
mechanisms provided, its caching, range requests, conditional requests,
semantics and content, message syntax and routing. We'll also look at the
encoding parameters for basic authentication, the Mutual Authentication
Protocol, the use of HTTP signatures, the way HTTP manages sessions, and beyond
the HTTP basics. The section beyond the basics is covered in this document for
informational purposes and should be studied for reference, and to know more
about the protocol, which falls within the scope of web application developers
as well as web application exploiters. That being said, we divide HTTP/1.1 into
a number of topic chunks to pick up and discuss!
HTTP/1.1 Authentication
The most relevant and broadly used authentication mechanisms in the
client-server model fall under three types:
1. Basic
2. Digest
3. NTLM
HTTP Access Authentication Framework: Before moving any further into the basic,
digest or NTLM authentication schemes, one should consider studying the basic
authentication framework which the HTTP protocol itself provides. This
framework chalks out the architectural layout of how authentication is achieved
over the HTTP protocol on a TCP/IP stack connection. This framework is the root
of all the other authentication schemes which follow.
First, HTTP authentication is challenge-response based, which means the server
provides the client with a challenge, which the client has to solve by picking
up the authentication tokens provided by the origin server in the headers of
its response to the client's request. The authentication framework of HTTP
relies on a 2-way credential exchange: the client sends a request first and
then receives a 401 Unauthorized from the server's end, with a "realm" value
and the scheme(s) on which the challenge depends and which the client has to
solve. If the client goes ahead and solves the scheme provided by the server,
the server verifies the credentials provided by the user agent (that is, the
client), matches the values against the server's internal databases, and, upon
identifying a valid user, allows the user agent to browse the resource. If that
fails, either the connection terminates, or the server keeps serving 401
Unauthorized until the connection itself times out or the client closes the
connection.
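
Schematically, with an illustrative host path and realm, the framework
round-trip looks like this:

Client:  GET /protected/index.htm HTTP/1.1   (no credentials yet)
Server:  HTTP/1.1 401 Unauthorized
         WWW-Authenticate: Basic realm="Restricted Area"
Client:  GET /protected/index.htm HTTP/1.1
         Authorization: Basic <base64 of username:password>
Server:  HTTP/1.1 200 OK if the credentials match, or 401 Unauthorized again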

When a browser prompts for these credentials, hitting "Cancel" closes the
connection from the user agent's (client's) side. This is how the HTTP
authentication architecture works. We will now move on to discuss the specifics
of basic, digest and NTLM authentication, which are common among HTTP
authentication schemes.
Basic HTTP Authentication: With basic HTTP authentication, the client sends the
username and the password as requested by the server, following the client's
request to access a particular resource residing on the web server. This
username and password is encoded (not encrypted!) as Base64 and sent to the
server, which is a security concern: if the connection is pure HTTP over TCP/IP
and not HTTPS (HTTP Secure), the username and the password can be decoded (not
decrypted!) to clear text by any malicious user/client/proxy sitting in-between
the client and the server (the origin server). The auth-scheme token used in
this authentication framework is always case-insensitive and does not require
case-sensitive validation. This token is only used to identify which of the
various authentication schemes the server and the client negotiate before
moving on to the actual authentication process. After this token is stated,
authentication parameters are set for the actual authentication process; they
consist of attribute: value pairs, with further parameters, if necessary,
separated by commas. For the origin server to issue an authentication challenge
to the requesting client (a user agent), the origin server must throw back a
401 status code, which is "Unauthorized" in terms of HTTP protocol status codes
(more on status codes later!). The 401 response should contain a
WWW-Authenticate HTTP header field to challenge the user agent. When the same
is done by a proxy, the proxy sends a 407 status code, which means "Proxy
Authentication Required" in HTTP status code terms; this happens when the proxy
itself sends the Proxy-Authenticate HTTP header along with the status code
mentioned. Now the client, or the original user agent requesting the protected
resource, must solve the challenge: look at the authentication scheme token,
choose whether it is a compatible scheme or not, and then look at the
attributes and parameters which follow the stated scheme. As said before, the
authentication scheme is identified by a case-insensitive string token in the
HTTP header.

There might be more than one WWW-Authenticate challenge to be solved, as the
attribute and value pairs may contain other challenges to solve in a comma
separated list. The client, or the user agent, takes care of this and solves
them. This is not confined to the WWW-Authenticate header; it applies to the
Proxy-Authenticate header as well. Below is a snap of how it would look if a
telnet client were used; one could inspect the same using any proxy such as
HttpWatch or Burp Suite. Here no extra WWW-Authenticate challenge parameters
were set by the server; it throws us a status code of 401 with a Basic realm.
So what is the Basic realm, and how does it come under the authentication
framework used by the HTTP protocol? We will come back to the Basic realm after
we have dealt with the authentication headers and the authentication
parameters.
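
A sketch of such a telnet session (the host, realm and the Apache-style reason
phrase are illustrative; the response is trimmed to the relevant headers):

root@coded:~# telnet www.mytestsite.com 80
Trying 192.168.78.11...
Connected to www.mytestsite.com.
Escape character is '^]'.
GET /protected/ HTTP/1.0

HTTP/1.1 401 Authorization Required
WWW-Authenticate: Basic realm="Restricted Area"
Content-Type: text/html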
Now we know that various authentication headers may be in use for the
authorization to be sent to the server, and hence the user agent has to decide
which authentication scheme to choose; basically, the strongest scheme offered
is chosen and then solved. But the various authentication schemes in place,
each with its own auth-scheme, have their own problems, which are security
centric to web applications, such as the following:

1. Online Dictionary attacks.
2. Man in the middle attacks.
3. Chosen plaintext attacks.
4. Precomputed Dictionary attacks.
5. Batch Bruteforce attacks.
6. Spoofing via 3rd party counterfeit servers.
References: http://tools.ietf.org/html/rfc2617#section-4.6
The digest authentication scheme requires the client to store passwords, which
could be fetched easily if the client were compromised. This is not covered
here, as this is the basic authentication section; we will look at the security
concerns of the digest authentication scheme in the digest authentication topic
and discuss the reasons it could be compromised.

Unlike the snap above, there could be more WWW-Authenticate or
Proxy-Authenticate HTTP headers, with different challenges which need to be
solved before the requested resource residing on the web server is successfully
obtained. The user agent, as discussed above, has to parse the response so as
to solve these challenges, separated by commas in a list of other auth-params,
after it has identified the auth-scheme, which includes the token.
So this goes like the following:
Auth-scheme = token {where the token could be a string or any value!}
Token = {parameters, strings}
Here the [Token] authentication parameter can be customized by the web
developer or the web framework itself in its own fashion, such as:
App_token = thisisthetokenValUechallenge
This token, or the auth-scheme, generally takes the form of nonstandard HTTP
header fields such as X-auth-token, etc. Common nonstandard HTTP request header
fields include the following:
X-Requested-With
DNT
X-Forwarded-For
X-Forwarded-Proto
Front-End-HTTPS
X-Wap-Profile
X-ATT-deviceid
Proxy-Connection
There are dozens of nonstandard HTTP response headers as well; we will look at
them later when we discuss the various frameworks which rely on HTTP headers
and use custom header fields, something that sometimes leads to information
disclosure, keeping web applications in mind. Some examples of such response
HTTP headers are:
Content-Type
Content-Range
ETag
P3P
Pragma
The customized authentication example we are specifically taking here is the
X-auth-token authentication token, which has to be reused later on each request
sent to the origin server.



That being said, different web frameworks might use different challenges with
different HTTP auth-param headers for solving their challenges. The
WWW-Authenticate header, however, states whether it is a BASIC authentication
level challenge, a DIGEST authentication level challenge or an NTLM challenge
which the client must solve.
We now come to the HTTP realm, which the HTTP header field uses for
authentication. HTTP authentication provides the realm to the client for a
specific purpose: the server maintains protected areas (protection spaces),
each with its own realm name. The name can be any string, and hence it is
enclosed in double quotes. The realm value is case-sensitive, in contrast to
the realm directive: the directive names a protected space saved on the server
side to which the client wants access, and that space is guarded by a realm
value. The realm directive might be case-insensitive, but the realm value has
to be case-sensitive, because there might be several other protected resources
placed in the same protected space, or in a different protected area, space,
page, group, etc.
The realm value might carry semantics specific to the authentication scheme
provided by the server. The origin server sets the realm value. A realm looks
like the following:
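
For instance (realm and nonce values borrowed from the example in RFC 2617):

WWW-Authenticate: Digest realm="testrealm@host.com",
                  nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093"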

This was for the digest authentication scheme; for the basic authentication
scheme it would look the same, but with a WWW-Authenticate: Basic HTTP header
in place.
References on realm directives are the following:
For HTTPD Apache: http://httpd.apache.org/docs/2.2/howto/auth.html
For SQUID: http://www.squid-cache.org/Doc/config/auth_param/
For Nginx: http://wiki.nginx.org/HttpAuthBasicModule
Now, how does the user agent (the client) go ahead and provide the credentials
to the origin server? The answer: it sends a special HTTP header field called
"Authorization" (not "authentication", as presented by the origin server with
the 401 status code). The Authorization HTTP header is sent by the user agent
(the client) to a server only after it receives a 401 (Unauthorized) or 407
(Proxy Authentication Required) response from a remote origin server. It is to
be noted that the client does not necessarily have to send the Authorization
header, only if it wishes to; failing which the server will endlessly (until
timeout) send the client a 401 status code with "Unauthorized" in its response.
The HTTP headers would look like the following when the user agent sends the
origin server an Authorization HTTP header:
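
A sketch of such a request, reusing the illustrative host from earlier and the
Base64 credential value discussed further below:

GET /protected/index.htm HTTP/1.1
Host: www.mytestsite.com
Authorization: Basic Y29kZWQzMjppYW10ZXN0Cg==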

We can see the client has now added an Authorization HTTP header along with the
other HTTP headers necessary for its authentication scheme, and the value for
this HTTP attribute (Authorization) has been Base64 encoded. This Base64
encoded value is basically the credentials we provided, in username:password
form, with a ":" in place between them. The server sends back a 401 response
yet again when the credentials provided are invalid and not applicable to the
requested protected resource, which sits in a protected area based on a realm
for that particular resource being requested by the user agent (the client).
Now how does one decode the Base64 encoding? The answer: simply use an online
Base64 decoder, or the Python or C libraries. A simple program written in C
could be useful if a user has no internet access; in practical scenarios one
could create one's own C program and compile it to run a decode check on an
encoded Base64 string:
#include <stdint.h>
#include <stdlib.h>

/* Table used to map 6-bit values to Base64 characters. */
static char encoding_table[] = {'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',
                                'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',
                                'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',
                                'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',
                                'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',
                                'o', 'p', 'q', 'r', 's', 't', 'u', 'v',
                                'w', 'x', 'y', 'z', '0', '1', '2', '3',
                                '4', '5', '6', '7', '8', '9', '+', '/'};
static char *decoding_table = NULL;
/* Number of '=' padding characters for each input length modulo 3. */
static int mod_table[] = {0, 2, 1};

/* Forward declaration so base64_decode() can build the table on demand. */
void build_decoding_table(void);

char *base64_encode(const unsigned char *data,
                    size_t input_length,
                    size_t *output_length) {

    *output_length = 4 * ((input_length + 2) / 3);

    char *encoded_data = malloc(*output_length);
    if (encoded_data == NULL) return NULL;

    /* Consume 3 input bytes at a time and emit 4 Base64 characters. */
    for (size_t i = 0, j = 0; i < input_length;) {

        uint32_t octet_a = i < input_length ? (unsigned char)data[i++] : 0;
        uint32_t octet_b = i < input_length ? (unsigned char)data[i++] : 0;
        uint32_t octet_c = i < input_length ? (unsigned char)data[i++] : 0;

        uint32_t triple = (octet_a << 0x10) + (octet_b << 0x08) + octet_c;

        encoded_data[j++] = encoding_table[(triple >> 3 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 2 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 1 * 6) & 0x3F];
        encoded_data[j++] = encoding_table[(triple >> 0 * 6) & 0x3F];
    }

    /* Overwrite the trailing characters with '=' padding as needed. */
    for (int i = 0; i < mod_table[input_length % 3]; i++)
        encoded_data[*output_length - 1 - i] = '=';

    return encoded_data;
}

unsigned char *base64_decode(const char *data,
                             size_t input_length,
                             size_t *output_length) {

    if (decoding_table == NULL) build_decoding_table();

    if (input_length % 4 != 0) return NULL;

    *output_length = input_length / 4 * 3;
    if (data[input_length - 1] == '=') (*output_length)--;
    if (data[input_length - 2] == '=') (*output_length)--;

    unsigned char *decoded_data = malloc(*output_length);
    if (decoded_data == NULL) return NULL;

    /* Consume 4 Base64 characters at a time and emit up to 3 bytes;
       '=' padding contributes zero bits (the "0 & i++" idiom advances i). */
    for (size_t i = 0, j = 0; i < input_length;) {

        uint32_t sextet_a = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
        uint32_t sextet_b = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
        uint32_t sextet_c = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];
        uint32_t sextet_d = data[i] == '=' ? 0 & i++ : decoding_table[(unsigned char)data[i++]];

        uint32_t triple = (sextet_a << 3 * 6)
                        + (sextet_b << 2 * 6)
                        + (sextet_c << 1 * 6)
                        + (sextet_d << 0 * 6);

        if (j < *output_length) decoded_data[j++] = (triple >> 2 * 8) & 0xFF;
        if (j < *output_length) decoded_data[j++] = (triple >> 1 * 8) & 0xFF;
        if (j < *output_length) decoded_data[j++] = (triple >> 0 * 8) & 0xFF;
    }

    return decoded_data;
}

/* Builds the reverse lookup table used by base64_decode(); zero-filled so
   characters outside the Base64 alphabet map to 0. */
void build_decoding_table(void) {

    decoding_table = calloc(256, 1);

    for (int i = 0; i < 64; i++)
        decoding_table[(unsigned char) encoding_table[i]] = i;
}

void base64_cleanup(void) {
    free(decoding_table);
}

However, using Python the program becomes a lot simpler and has fewer
dependencies; an example of such a Python (version 2) script could be:
#!/usr/bin/python

# Python 2: strings have a built-in 'base64' codec for encoding/decoding.
sample = "this is string example....wow!!!"
encoded = sample.encode('base64', 'strict')

print "Encoded String: " + encoded
print "Decoded String: " + encoded.decode('base64', 'strict')


Or, in a bash environment, use the following to get the output:
root@coded:~# echo QWxhZGRpbjpvcGVuIHNlc2FtZQ== | base64 --decode


Where QWxhZGRpbjpvcGVuIHNlc2FtZQ== is the Base64 encoded string (it decodes to
Aladdin:open sesame, the classic example from RFC 2617).
A great feature of realms is the notion of a protection space: if particular
user credentials were authorized for one realm, the same credentials hold valid
anywhere carrying an identical realm name, which means resources are shared
within the same realm, and the browser (the user agent, or the client) may go
ahead and not ask for credentials again when accessing the same realm. Keep in
mind that these credentials might be valid only for a certain amount of time
for that protected space, as set by the authentication scheme in use, the
authentication parameters, and the user preferences. Unless defined by the
authentication scheme itself, the scope is limited to that particular protected
space and does not extend beyond it.
Now, if the origin server does not accept the credentials sent by the user
agent, it must send a 401 (Unauthorized) with a WWW-Authenticate header back to
the client (the user agent). The same applies to the Proxy-Authenticate header:
if the proxy does not accept the credentials sent by the user agent, the proxy
server must send a 407 (Proxy Authentication Required) response back to the
user agent. This happens with a fresh authentication challenge provided to the
user agent in each response, and applies to both the proxy server and the
origin server.
Hence, overall, the basic authentication scheme is all about the user ID and
the password being sent to the origin server or a proxy server as Base64
encoded credentials separated by a colon: the left part of the colon represents
the user ID and the right side of the colon represents the password. That being
said, the credentials are only valid for a particular realm, whose value is set
by the origin server or the proxy server itself and which keeps the resource in
a protected space. This protected space has its own realm to which the client
needs access! The URI is sent by the client first, to which the server sends a
401 Unauthorized with WWW-Authenticate, the authentication scheme mentioned,
and its realm for the protected space; the client picks up these headers and
sends the credentials in the Authorization HTTP header field, encoded with
Base64 encoding, along with a Connection header stating whether the connection
is to be kept alive or closed at authentication (the connection is generally
kept alive). The same happens with a proxy server, except the proxy server
sends the Proxy-Authenticate HTTP header and accepts Proxy-Authorization in the
request to access the protected resource (along with the Referer, if present!).
When the Authorization header is sent to the origin server, it also states
which authentication scheme is used; for instance it would say Authorization:
Basic Y29kZWQzMjppYW10ZXN0Cg==
"Basic" states that the client has chosen basic authentication. The string
following it is the username:password value, Base64 encoded. To get the Base64
value, one can use the echo command on a Linux machine (I am using Kali Linux)
and quote the string to be encoded, with a pipe to filter it out to the base64
command. Note that echo appends a trailing newline unless invoked with -n,
which is why the encoded value above ends in Cg== (an encoded newline
character). The returned string is the Base64 encoded equivalent of the
username:password string.
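
For instance, with the credential pair behind the value above
(coded32:iamtest; the trailing Cg== is the newline appended by echo):

root@coded:~# echo coded32:iamtest | base64
Y29kZWQzMjppYW10ZXN0Cg==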


The header should look like the above. What remains is that the user agent
should remember that the credentials it just provided to the origin server,
together with the relative URI, are valid for the directories falling under
that URI. This means the client should assume that links deeper than the
request-URI it provided fall under the same protection space and hence belong
to the same realm, and that the same credentials may therefore be valid under
the directory previously specified by the request-URI. If the file being
requested ends at that particular URI and is mentioned in the URI, then it is
an absolute URI path which the client requested, and it was the last resource
which could be fetched via the credentials provided by the user agent (or the
client).
Digest Access HTTP Authentication: Digest access authentication arose as a need
over basic HTTP authentication due to security concerns. Because basic
authentication provides no encryption (only encoding of the plain text), it can
fall to MITM attacks. MITM is short for Man In The Middle, an attack which will
be discussed in other coming documents. For now we shall stick to what digest
access authentication is and how it works. Digest access authentication itself
provides no encryption either, but it was originally meant to cover up the
flaws of basic authentication over HTTP connections.
Digest access authentication is the same as basic access authentication, with
minor changes in the way the username and the password are verified by the
origin server. Digest access authentication uses the same challenge-response
paradigm as basic access authentication. The origin server provides a challenge
with a "nonce" value to the user agent, or client, which asks for the protected
resource. This nonce is detected and picked up by the client (the user agent),
which then sends the credentials in hashed format (generally an MD5 hash). The
origin server picks up the hash values, matches them, and verifies whether the
same hash is present in the server's database; if the hashed credentials served
by the user agent (the client) are valid, the credentials are accepted,
otherwise access to the resource asked for is not authorized. But this happens
in a deeper way. The hash is sent as a hash checksum of the username and the
password. The challenge-response consists of these checksums, and the response
is the MD5 checksum (generally, unless specified otherwise by the server itself
when first contact is made by the user agent or client!) of the username, the
password, the realm, the URI the client or user agent sent, the nonce, and the
HTTP method via which the resource is requested. The HTTP methods are something
we are yet to discuss; there are various supported methods such as GET, PUT,
DELETE, TRACE, OPTIONS, POST, etc.
If on the first client request the server does not find any credentials, the
server generates a 401 Unauthorized status code and sends it to the client,
along with headers such as ETag (used differently by different web
applications) hinting where the requested URI could be found and which
algorithms the server supports, letting the client choose the cipher it finds
itself comfortable with. Everything happens the same as in basic access
authentication, except that the nonce is added and checksums are sent by the
client instead of the plain text credentials merely being encoded. The
verification is also two-way, as the server verifies the stored checksums and
matches them with the one the client sent! The ETag headers mentioned are
generated by different web applications for different purposes: they could
serve information about the server itself, hint at the ciphers it supports,
point to a resource which has to be followed before the authentication takes
place, or serve any different purpose as suited by the application framework
the web application itself was built upon.
Now, how does this prevent the replay attacks which basic access authentication
suffered from? The answer: the origin server responds with a different nonce
value each time, from which the client accordingly calculates the MD5 hash
checksums, thereby generating a different credential digest every time it sends
the request to the server. The general calculation via which the client
computes the MD5 checksum is as follows:
md5(md5(username:realm:password):nonce:md5(httpMethod:uri))

This again could be implemented in Python, or a suitable language of choice, to
verify that different requests sent to the origin server differ when trying to
access the protected resource. No request is the same as the previous one,
because the nonce sent by the origin server mitigates the replay issues
associated with the basic access authentication scheme.
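
A minimal Python (version 2, matching the earlier script) sketch of this
computation, without the optional qop extension of RFC 2617; all the input
values shown below are illustrative:

#!/usr/bin/python
# Computes md5(md5(username:realm:password):nonce:md5(httpMethod:uri))
import hashlib

def md5_hex(data):
    return hashlib.md5(data).hexdigest()

def digest_response(username, realm, password, nonce, method, uri):
    ha1 = md5_hex("%s:%s:%s" % (username, realm, password))  # credentials hash
    ha2 = md5_hex("%s:%s" % (method, uri))                   # method/URI hash
    return md5_hex("%s:%s:%s" % (ha1, nonce, ha2))           # final digest

print digest_response("coded32", "testrealm@host.com", "iamtest",
                      "dcd98b7102dd2f0e8b11d0f600bfb0c093",
                      "GET", "/protected/index.htm")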
References: http://en.m.wikipedia.org/wiki/Cryptographic_nonce
The whole operation of digest access authentication between a user agent (or
client) and the origin server (or a proxy server) goes as follows. On the
original request the user agent or client does not send any credentials (the
server would drop any credentials anyway, because the response digest can only
be computed after the origin server sends the nonce value); the response from
the server's end would look like:
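
A sketch of such a response (realm, nonce and opaque values borrowed from the
RFC 2617 example; the date is illustrative):

HTTP/1.1 401 Unauthorized
Date: Sat, 22 Feb 2014 10:18:12 GMT
WWW-Authenticate: Digest realm="testrealm@host.com",
                  qop="auth,auth-int",
                  algorithm=MD5,
                  nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
                  opaque="5ccc069c403ebaf9f0171e9517f40e41"
Content-Type: text/html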


Upon getting the response, the client will take note of the WWW-Authenticate
header, grab the realm, take note of the URI, take note of the request method,
see the algorithm used to compute the challenge provided by the server, and
grab the nonce value, which is supposed to be generated anew by the origin
server on each request (no duplicates happen here!); it will then compute the
MD5 checksum of username:realm:password, combine this with the :nonce: value,
and merge in the MD5 of httpMethod:uri. Hence the value derived is:
md5(md5(username:realm:password):nonce:md5(httpMethod:uri))

This is sent to the origin server to access the protected resource. The origin
server accepts it if the credentials provided were valid. This way the
credentials are never stored on the server itself; only the MD5 checksums of
the credentials are stored, which are looked up and matched each time via the
combination of the credentials themselves along with the HTTP URI, the HTTP
method, and the realm in between the credentials.
References: http://technet.microsoft.com/en-us/library/cc780170%28v=ws.10%29.aspx
