The advantage of a circuit-level proxy is that it provides service for a wide variety of
different protocols. Most circuit-level proxy servers are also generic proxy servers; they
can be adapted to serve almost any protocol. Not every protocol can easily be handled
by a circuit-level proxy, however. Protocols like FTP, which communicate port data from
the client to the server, require some protocol-level intervention, and thus some
application-level knowledge. The disadvantage of a circuit-level proxy server is that it
provides very little control over what happens through the proxy. Like a packet filter, it
controls connections on the basis of their source and destination and can't easily
determine whether the commands going through it are safe or even in the expected
protocol. Circuit-level proxies are easily fooled by servers set up at the port numbers
assigned to other services.
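The "generic" quality of a circuit-level proxy is easy to see in code: the proxy simply
copies bytes between the client and the requested server without interpreting them. The
following minimal sketch (hypothetical ports and host names, not taken from the text)
shows why such a relay can serve almost any TCP-based protocol but cannot judge what is
passing through it:

    import socket
    import threading

    def pipe(src, dst):
        # Copy bytes one way until the sender closes; the relay never parses them.
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

    def relay(listen_port, server_host, server_port):
        # Accept a client connection and splice it to the real server.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", listen_port))
        listener.listen(5)
        while True:
            client, _ = listener.accept()
            server = socket.create_connection((server_host, server_port))
            # Two one-way pipes; the relay has no idea what protocol this is.
            threading.Thread(target=pipe, args=(client, server), daemon=True).start()
            threading.Thread(target=pipe, args=(server, client), daemon=True).start()

    # Hypothetical use: relay local port 8080 to a web server somewhere else.
    # relay(8080, "www.example.org", 80)

Because the relay only copies bytes, it cannot tell whether the traffic even belongs to
the protocol it expects, which is exactly the weakness described above; and a protocol
like FTP, which opens separate data connections, would need application-level help that
this sketch does not provide.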
Proxy-aware router
With this approach, nothing on the client’s end is modified, but a router intercepts the
connection and redirects it to the proxy server or proxies the request. This requires an
intelligent router in addition to the proxy software (although the routing and the proxying
can co-exist on the same machine).
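As a rough illustration of what the intercepting side looks like, the sketch below assumes
a Linux machine acting as the router, with a netfilter REDIRECT rule (not shown, and not
described in the text) already steering outbound connections to the proxy's listening
port. The proxy then recovers the destination the client actually wanted and can open its
own connection there; the port number and option constant are Linux-specific assumptions:

    import socket
    import struct

    SO_ORIGINAL_DST = 80    # netfilter option number; not exported by the socket module

    def original_destination(conn):
        # Recover the address the client was really trying to reach before the
        # REDIRECT rule sent the connection to us. The result is a sockaddr_in:
        # 2-byte family, 2-byte port, 4-byte IPv4 address, 8 bytes of padding.
        raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        port, packed_ip = struct.unpack("!2xH4s8x", raw)
        return socket.inet_ntoa(packed_ip), port

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.bind(("", 3128))      # the port the REDIRECT rule points at (illustrative)
    listener.listen(5)

    conn, client_addr = listener.accept()
    host, port = original_destination(conn)
    # The proxy would now connect to (host, port) itself, apply whatever checks
    # it enforces, and relay data; the client never configured anything.

From the client's point of view nothing has changed, which is exactly why this approach
needs both an intelligent router and proxy software that can find out where the
connection was originally headed.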
Even if software is available for your platforms, it may not be software your users want.
For example, dozens of FTP client programs are available for the Macintosh. Some of them
have really impressive graphical user interfaces. Others have other useful features; for
example, they allow you to automate transfers. You're out of luck if the particular client
you want to use, for whatever reason, doesn't support your particular proxy server
mechanism. In some cases you may be able to modify the client to support your proxy
server, but doing so requires that you have the source code for the client, as well as the
tools and the ability to recompile it. Few client programs come with support for any form
of proxying.
The happy exceptions to this rule are web browsers like Netscape, Internet Explorer, and
Lynx. Many of these programs support proxies of various sorts (typically SOCKS and
HTTP proxying). Most of these programs were written after firewalls and proxy systems
had become common on the internet; recognizing the environment they would be
working in, their authors chose to support proxying by design, right from the start.
Using application changes for proxying does not make proxying completely transparent
to users. The application software still needs to be configured to use the appropriate
proxy server, and to use it only for connections that actually need to be proxied. Most
applications provide some way of assisting the user with this problem and partially
automating the process, but misconfiguration of proxy software is still one of the most
common user problems at sites that use proxies.
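As a concrete, purely illustrative example of what that configuration amounts to inside an
application, the sketch below uses Python's standard urllib with an explicitly configured
HTTP proxy; the proxy host name is hypothetical, and in practice the same settings often
come from environment variables such as http_proxy and no_proxy so that internal
connections can bypass the proxy:

    import urllib.request

    # Hypothetical proxy on the bastion host; not a name from the text.
    proxy = urllib.request.ProxyHandler({
        "http": "http://proxy.example.com:3128",
        "https": "http://proxy.example.com:3128",
    })
    opener = urllib.request.build_opener(proxy)

    # An external request goes through the proxy.
    with opener.open("http://www.example.org/") as response:
        print(response.status)

    # Internal hosts would normally be excluded (for example via no_proxy) so
    # that connections to them are made directly.

Getting that exclusion list right is precisely the part users tend to get wrong.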
In some cases, sites will use the unchanged applications for internal connections and
the proxy-aware ones only to make external connections; users need to remember to
use the proxy-aware program in order to make external connections. Following
procedures they have become accustomed to using elsewhere, or procedures that are
written in books, may leave them mystified at apparently intermittent results as internal
connections succeed and external ones fail. (Using the proxy-aware applications
internally will work, but it can introduce unnecessary dependencies on the proxy server,
which is why most sites avoid it.)
Exactly how this is implemented varies from operating system to operating system.
Where dynamically linked libraries are available, you add a library; where they are not,
you have to replace the network drivers, which are a more fundamental part of the
operating system.
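Where the "add a library" route exists, the interception happens below the application's
notice. A rough modern analogue, using the third-party PySocks package (my choice of
example, not something named here), replaces the standard socket type so that unmodified
application code ends up talking to a SOCKS server:

    import socket
    import socks    # PySocks, a third-party SOCKS client library (assumed installed)

    # Point every new socket at a hypothetical SOCKS5 server on the bastion host.
    socks.set_default_proxy(socks.SOCKS5, "socks.example.com", 1080)
    socket.socket = socks.socksocket

    # From here on, code written with no knowledge of the proxy -- urllib,
    # ftplib, and so on -- creates sockets that are silently proxied.

Replacing network drivers achieves the same effect at a deeper layer of the system.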
In either case, there may be problems: applications that do unexpected things may go
around the proxying entirely or be disrupted by it.
Because the proxying is relatively transparent to the user, problems with it are usually
going to be mysteries to the user. The user interface for configuring this sort of proxying
is also usually designed for the experienced administrator, not the naive user, further
confusing the situation.
Let’s look at FTP. Imagine that Amalie Jones wants to retrieve a file from an anonymous
FTP server. Here is what she does:
1. Using any FTP client, she connects to your proxy server (which is probably
running on the bastion host – the gateway to the internet) instead of directly to
the anonymous FTP server.
2. At the username prompt, in addition to specifying the name she wants to use,
Amalie also specifies the name of the real server she wants to connect to. If she
wants to access the anonymous FTP server on ftp.greatcircle.com for example,
then instead of simply typing “anonymous” at the prompt generated by the proxy
server, she’ll type “anonymous@ftp.greatcircle.com”.
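The same procedure can be scripted. The short sketch below uses Python's standard ftplib
and a hypothetical proxy host name (bastion.example.com); the point is simply that the
client connects to the proxy, while the real destination rides along inside the username:

    from ftplib import FTP

    # Connect to the proxy on the bastion host (hypothetical name), not to the
    # real FTP server.
    ftp = FTP()
    ftp.connect("bastion.example.com", 21)

    # The user@host trick: the proxy peels off everything after the "@" and
    # opens its own connection to ftp.greatcircle.com on the user's behalf.
    ftp.login("anonymous@ftp.greatcircle.com", "amalie@example.com")

    print(ftp.nlst())   # list the top-level anonymous FTP directory
    ftp.quit()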
The main problem with using custom procedures, however, is that you have to teach
them to your users. If you have a small user base and one that is technically adept, it
may not be a problem. However, if you have 10,000 users spread across four
continents, it’s going to be a problem. On the one side, you have hundreds of books,
thousands of magazine articles, and tens of thousands of Usenet news postings, not to
mention whatever previous training or experience the users might have had, all of which
attempt to teach users the standard way to use basic internet services like FTP. On the
other side is your tiny voice, telling them how to use a procedure that is at odds with all
the other information they are getting. On top of that, your users will have to remember
the name of your gateway and the details of how to use it. In any organization of a
reasonable size, this approach can’t be relied upon.
A proxy-aware router of some sort (like the one shown in the figure below) is the
solution that’s easiest for the users; they don’t have to configure anything or learn
anything. All of the work is done by whatever device is intercepting the packets, and by
the administrator who configures it.
On the good side, this is the most transparent of the options. In general, it’s only
noticeable to the user when it doesn’t work (or when it does work, but the user is trying
to do something that the proxy system does not allow). From the user’s point of view, it
combines the advantages of packet filtering (you don’t have to worry about it, it’s
automatic) and proxying (the proxy can do caching, for instance).
From the administrator’s point of view, it combines the disadvantages of packet filtering
with those of proxying:
• It’s easy for accidents or hostile actions to make connections that don’t go
through the system.
• You need to be able to identify the protocol based on the packets in order to do
the redirection, so you can’t support protocols that don’t work with packet
filtering. But you also need to be able to make the actual connection from the
proxy server, so you can’t support protocols that don’t work with proxying.
• All internal hosts need to be able to translate all external hostnames into
addresses in order to try to connect to them.