
Index

1. Abstract
2. Introduction
3. Proxy Server
4. Squid Proxy Server
5. System Requirements
6. Squid Installation
7. Configuring Squid
8. Access Control
9. Client Machine Configuration
10. Using ACLs
11. Authentication Methods
12. Advanced Authentication
13. User Authentication
14. Transparent Proxy/Caching

Abstract
In this Linux project, our main task is to install and configure the Squid proxy server. For this project, we chose Ubuntu Linux as the operating system.
Linux has become synonymous with networking. It is used in both office and home environments as a file, print, e-mail, and application server, and it is increasingly being used as a proxy server. Ubuntu is a relatively new distribution in the Linux world, founded in 2004 and focused mainly on the needs of end users. It is the fastest-growing Linux distribution in the world today and is built upon one of the most stable and secure distributions, Debian. Ubuntu provides the latest version of the X Window System and its server. Ubuntu is a free distribution and is supported mainly by a helpful community.

A proxy server provides Internet access to multiple users at the same time by sharing a single Internet connection. A good proxy server also caches requests, which allows data to be served from local storage rather than fetched from the web, reducing both access time and bandwidth usage.
Squid is one such piece of software, supporting proxying and caching of HTTP, FTP, Gopher, and other protocols. It also supports SSL and access controls, caches DNS lookups, and maintains a full log of all requests. Squid is free software released under the GNU General Public License, which means that anyone who distributes Squid must make the source code available to others. Squid is the proxy server software for the web, supporting all the necessary protocols such as HTTP, HTTPS, and FTP. It reduces bandwidth utilization and improves web access times by caching and reusing frequently requested web pages. Hundreds of Internet Service Providers around the world use Squid to provide their users with the fastest possible web access. Its wide range of configuration options and extensive access controls also make it a good server accelerator. Squid is also available for Windows NT from LogiSense. Squid is a widely used proxy cache for Linux and UNIX platforms. This means that it stores requested Internet objects, such as data on a web or FTP server, on a machine that is closer to the requesting workstation than the origin server. It may be set up in multiple hierarchies to ensure optimal response times and low bandwidth usage, even in modes that are transparent to the end user. Additional software such as squidGuard may be used to filter web content.
The focus of this report is to give basic guidelines of setting up a squid proxy server and
ways of providing controlled access to users.

Introduction
Internet usage has increased drastically in the last few years. This growth demands ever larger bandwidth to keep browsing easy and fast. With limited bandwidth, however, we can still provide faster browsing by using proxy caches, and that is where the proxy server comes into the picture. A large number of proxy servers are available, some of them open source. These servers must be configured according to our requirements by choosing their architecture and cache replacement algorithm.
A proxy server is a computer system sitting between the client requesting a web
document and the target server (another computer system) serving the document. In its simplest
form, a proxy server facilitates communication between client and target server without
modifying requests or replies. When we initiate a request for a resource from the target server,
the proxy server hijacks our connection and represents itself as a client to the target server,
requesting the resource on our behalf. If a reply is received, the proxy server returns it to us,
giving the impression that we have communicated directly with the target server. In this manner the proxy hides the private network and its private IP addresses from the public network. In advanced forms, a proxy server can filter requests based on various rules and may allow communication only when requests can be validated against those rules. The rules are generally based on the IP address of the client or target server, the protocol, the content type of the web documents, and so on.

As seen in the figure, clients can't make direct requests to the web servers. To facilitate
communication between clients and web servers, we have connected them using a proxy server
which is acting as a medium of communication for clients and web servers.

Sometimes, a proxy server can modify requests or replies, or can even store the replies from the
target server locally for fulfilling the same request from the same or other clients at a later stage.
Storing the replies locally for use at a later time is known as caching. Caching is a popular technique used by proxy servers to save bandwidth, lighten the load on web servers, and improve the end user's browsing experience.

Proxy Server
A proxy server is a special kind of server which lies between client computers and the Internet.
The client computers connect to the Internet via the proxy server. A client requests a website by sending the HTTP request to the local proxy server; the proxy server then forwards the request on to the web, retrieves the result, and hands it back to the client.
The three main reasons for deploying a proxy server are as follows:

Content control: We can control web traffic using the proxy server.
Speed: The proxy server caches commonly visited sites, making the most of the available bandwidth.
Security: We can monitor what users are doing and implement various security features.

Proxy servers are mostly deployed to perform the following:

Reduce bandwidth usage


Enhance the user's browsing experience by reducing page load time, which in turn is achieved by caching web documents
Enforce network access policies
Monitor user traffic or report Internet usage for individual users or groups
Enhance user privacy by not exposing a user's machine directly to the Internet
Distribute load among different web servers to reduce the load on a single server
Empower a poorly performing web server
Filter requests or replies using an integrated virus/malware detection system
Load balance network traffic across multiple Internet connections
Relay traffic within a local area network

Squid Proxy Server


Squid is a Unix-based proxy server that caches Internet content closer to a requestor than its
original point of origin. Squid supports caching of many different kinds of Web objects,
including those accessed through HTTP and FTP. Caching frequently requested Web pages,
media files and other content accelerates response time and reduces bandwidth congestion.
A Squid proxy server is generally installed on a separate server from the web server that holds the original files. Squid works by tracking object use over the network. Squid will initially act as an
intermediary, simply passing the client's request on to the server and saving a copy of the
requested object. If the same client or multiple clients request the same object before it expires
from Squid's cache, Squid can then immediately serve it, accelerating the download and saving
bandwidth.
Internet Service Providers (ISPs) have used Squid proxy servers since the early 1990s to provide
faster download speeds and reduce latency, especially for delivering rich media and streaming
video. Website operators frequently will put a Squid proxy server as a content accelerator,
caching frequently viewed content and easing loads on Web servers. Content delivery networks
and media companies employ Squid proxy servers and deploy them throughout their networks to
improve the experience of viewers requesting programming, particularly for load balancing and
handling traffic spikes for popular content.
Squid is provided as free, open source software and can be used under the GNU General Public License (GPL) of the Free Software Foundation. Squid was originally designed to run on Unix-based systems but can also be run on Windows machines.

Squid was originally an outgrowth from the Harvest Project, an ARPA-funded open source
information gathering and storage tool. "Squid" was the code name used to differentiate the
project when development in the new direction was initially begun.
Squid acts as a proxy cache. It redirects object requests from clients (in this case, from Web
browsers) to the server. When the requested objects arrive from the server, it delivers the objects
to the client and keeps a copy of them in the hard disk cache. One of the advantages of caching is
that several clients requesting the same object can be served from the hard disk cache. This
enables clients to receive the data much faster than from the Internet. This procedure also
reduces the network traffic.
Along with the actual caching, Squid offers a wide range of features such as distributing the load
over intercommunicating hierarchies of proxy servers, defining strict access control lists for all
clients accessing the proxy, allowing or denying access to specific Web pages with the help of
other applications, and generating statistics about frequently-visited Web pages for the
assessment of the users' surfing habits. Squid is not a generic proxy. It normally proxies only
HTTP connections. It supports the protocols FTP, Gopher, SSL, and WAIS, but it does not
support other Internet protocols, such as Real Audio, news, or video conferencing. Because
Squid only supports the UDP protocol to provide communication between different caches, many
other multimedia programs are not supported.
A proxy server can simply manage traffic between a Web server and the clients that want to
communicate with it, without doing caching at all. Squid combines both capabilities as a server.

System Requirements

The most important thing is to determine the maximum network load the system must bear.
Therefore, it is important to pay more attention to the load peaks, because these might be more
than four times the day's average. When in doubt, it would be better to overestimate the system's
requirements, because having Squid working close to the limit of its capabilities could lead to a
severe loss in the quality of the service. The following sections point to the system factors in
order of significance.

Hard Disks
Speed plays an important role in the caching process, so this factor deserves special attention.
For hard disks, this parameter is described as random seek time, measured in milliseconds.
Because the data blocks that Squid reads from or writes to the hard disk tend to be rather small,
the seek time of the hard disk is more important than its data throughput. For the purposes of a
proxy, hard disks with high rotation speeds are probably the better choice, because they allow the
read-write head to be positioned in the required spot more quickly. One possibility to speed up
the system is to use a number of disks concurrently or to employ striping RAID arrays.

Size of the Disk Cache


In a small cache, the probability of a HIT (finding the requested object already located there) is
small, because the cache is easily filled and the less requested objects are replaced by newer
ones. If, for example, one GB is available for the cache and the users only surf ten MB per day, it
would take more than one hundred days to fill the cache.

The easiest way to determine the needed cache size is to consider the maximum transfer rate of
the connection. With a 1 Mbit/s connection, the maximum transfer rate is 125 KB/s. If all this
traffic ends up in the cache, in one hour it would add up to 450 MB and, assuming that all this
traffic is generated in only eight working hours, it would reach 3.6 GB in one day. Because the
connection is normally not used to its upper volume limit, it can be assumed that the total data
volume handled by the cache is approximately 2 GB. This is why 2 GB of disk space is required
in the example for Squid to keep one day's worth of browsed data cached.
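The sizing arithmetic above can be double-checked with a few lines of shell, using the figures from the example (a 1 Mbit/s link, i.e. 125 KB/s, used over eight working hours, with decimal units):

```shell
# Back-of-the-envelope cache sizing from the example in the text.
rate_kbps=125        # 1 Mbit/s = 125 KB/s
secs_per_hour=3600
hours=8              # eight working hours
mb_per_hour=$(( rate_kbps * secs_per_hour / 1000 ))   # KB -> MB
mb_per_day=$(( mb_per_hour * hours ))
echo "${mb_per_hour} MB/hour, ${mb_per_day} MB/day"
```

This reproduces the text's 450 MB per hour and 3.6 GB per eight-hour day; the 2 GB estimate then follows from assuming the link is not used at its upper limit.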

RAM

The amount of memory (RAM) required by Squid directly correlates to the number of objects in
the cache. Squid also stores cache object references and frequently requested objects in the main
memory to speed up retrieval of this data. Random access memory is much faster than a hard
disk.

In addition to that, there is other data that Squid needs to keep in memory, such as a table with all
the IP addresses handled, an exact domain name cache, the most frequently requested objects,
access control lists, buffers, and more.

It is very important to have sufficient memory for the Squid process, because system
performance is dramatically reduced if it must be swapped to disk. The cachemgr.cgi tool can be used to monitor Squid's cache memory usage.

CPU
Squid is not a program that requires intensive CPU usage. The load of the processor is only
increased while the contents of the cache are loaded or checked. Using a multiprocessor machine
does not increase the performance of the system. To increase efficiency, it is better to buy faster
disks or add more memory.

Squid Installation

We can install the Squid server in two ways: from the command line or in graphical mode.

From the command line, enter the following commands in a terminal to install the Squid server:

sudo apt-get update


sudo apt-get upgrade
sudo apt-get install squid

In graphical mode, we use the Synaptic Package Manager to install the Squid server.

Squid Configuration
The working and behavior of Squid is controlled by the details given in its configuration file, squid.conf, which is usually found in the /etc/squid directory. The configuration file is very long, running on for page after page, but the good point is that all the options are clearly listed with explanations.
The first thing to edit is http_port, which specifies the socket address on which Squid listens for client requests. By default this is set to 3128, but it can be changed to a user-defined value. Along with the port, one can also give the IP address of the machine on which Squid is running; for example:
http_port 192.168.0.1:8080
With the above declaration, Squid is bound to the IP address 192.168.0.1 and port 8080. Any port can be used, but make sure no other application is already listening on it. The request ports for other services can be set with similar configuration lines.
This section covers the easiest way to use Squid as an HTTP proxy, using only the client IP
address for authentication.
Edit the Squid configuration file and add the following lines:
/etc/squid3/squid.conf
acl client src 12.34.56.78 # Home IP
http_access allow client
Be sure to replace client with a name identifying the connecting computer, and 12.34.56.78 with your local IP address. The comment # Home IP isn't required, but comments can be used to help identify clients.

Once you've saved and exited the file, restart Squid:

sudo service squid3 restart

At this point you can configure your local browser or operating system's network settings to use your server as an HTTP proxy. How to do this will depend on your choice of OS and browser. Once you've made the change to your settings, test the connection by pointing your browser at a website that tells you your IP address (for example, by searching for "what is my IP").
Additional clients can be defined by adding new acl lines to /etc/squid3/squid.conf. Access to the
proxy is granted by adding the name defined by each acl to the http_access allow line.

Access Control
Through its access control features, access to the Internet can be controlled in terms of access during particular time intervals, caching, access to particular sites or groups of sites, and so on. Squid access control has two components: ACL elements and access lists. An access list actually allows or denies access to the service.
A few important types of ACL elements are listed below:
src : the client's (source) IP addresses
dst : the server's (destination) IP addresses
srcdomain : the client's (source) domain name
dstdomain : the server's (destination) domain name
time : time of day and day of week
url_regex : URL regular expression pattern matching
urlpath_regex : URL-path regular expression pattern matching; leaves out the protocol and hostname
proxy_auth : user authentication through external processes
maxconn : limit on the number of connections from a single client IP address
To apply the controls, one has to first define a set of ACLs and then apply rules to them. The format of an ACL statement is

acl acl_element_name type_of_acl_element values_to_acl


Note:
1. acl_element_name can be any user-defined name given to an ACL element.
2. No two ACL elements can have the same name.
3. Each ACL consists of a list of values. When checking for a match, the multiple values use OR logic; in other words, an ACL element is matched when any one of its values matches.
4. Not all ACL elements can be used with all types of access lists.
5. Values for one ACL element can be given on different lines, and Squid combines them into a single list.
A number of different access lists are available. The ones we are going to use here are listed below:
http_access : allows HTTP clients to access the HTTP port; this is the primary access control list
no_cache : defines which responses to requests are cached
An access list rule consists of the keyword allow or deny, which allows or denies the service to a particular ACL element or to a group of them.
Note:
1. The rules are checked in the order in which they are written, and checking terminates as soon as a rule is matched.
2. An access list can consist of multiple rules.
3. If none of the rules is matched, the default action is the opposite of the last rule in the list; thus it is good to be explicit with the default action.
4. All elements of an access entry are ANDed together and evaluated in the following manner:
http_access Action statement1 AND statement2 AND ...
http_access Action statement3
Multiple http_access statements are ORed, whereas the elements of a single access entry are ANDed together.
5. Remember that rules are always read from top to bottom.
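As a sketch of the matching logic in notes 3 and 4, the following toy shell code (plain shell standing in for Squid's evaluator, with made-up addresses and a hard-coded hour) ORs the values inside one ACL and ANDs the ACLs on one http_access line:

```shell
# Toy illustration (not Squid itself): values inside one ACL are ORed,
# while the ACL names on a single http_access line are ANDed.
matches_acl() {   # matches_acl NEEDLE "VALUE1 VALUE2 ..."
  needle=$1; shift
  for v in $1; do
    [ "$v" = "$needle" ] && return 0   # any single value matching is enough (OR)
  done
  return 1
}
# Imagine: acl mynet src 10.0.0.5 10.0.0.6
#          acl worktime time 10:00-16:00
#          http_access allow mynet worktime
client=10.0.0.5
hour=12
if matches_acl "$client" "10.0.0.5 10.0.0.6" && [ "$hour" -ge 10 ] && [ "$hour" -lt 16 ]; then
  decision=allow   # both ACLs matched (AND)
else
  decision=deny
fi
echo "$decision"
```

With the values above, both conditions hold and the decision is allow; change the client IP or the hour and the AND fails, giving deny.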

By default, Squid does not give any access to clients, and the access controls have to be modified for this purpose. One has to list out one's own rules to allow access. Scroll down in squid.conf and enter the following lines just above the http_access deny all line:
acl mynetwork src 192.168.0.1/255.255.255.0
http_access allow mynetwork
mynetwork is the ACL name, and the next line is the rule applicable to that ACL. 192.168.0.1 refers to the address of the network whose netmask is 255.255.255.0. mynetwork basically gives a name to a group of machines in the network, and the rule that follows allows access to those clients.
The above changes, along with http_port, are good enough to put Squid into gear. After the changes, Squid can be started with the following command:
service squid start
Note:
Squid can also be started automatically at boot time by enabling it in ntsysv or setup (System Service Menu). After each change to the configuration file, the running Squid process has to be stopped and, for the new configuration to take effect, Squid has to be started again. These two steps can be achieved with either of the following commands:
1. service squid restart
2. /etc/rc.d/init.d/squid restart

Client Machine Configuration

Since client requests will be placed at a particular port of the proxy server, the client machines have to be configured accordingly. It is assumed at this point that these machines are already connected to the LAN (with valid IP addresses) and are able to ping the Linux server.
For Internet Explorer:
1. Go to Tools -> Internet Options.
2. Select the Connections tab and click LAN Settings.
3. Check the Proxy Server box and enter the IP address of the proxy server and the port where requests are being handled (the http_port address).
For Netscape Navigator:
1. Go to Edit -> Preferences -> Advanced -> Proxies.
2. Select the Manual Proxy Configuration radio button.
3. Click the View button.
4. Enter the IP address of the proxy server and the port where requests are being handled (the http_port address).

Using Access Control

Multiple access controls and rules offer a very good and flexible way of controlling clients' access to the Internet. Examples of the most commonly used controls are given below; this should by no means be taken as the complete set of controls available.
1. Allowing selected machines to have access to the Internet
acl allowed_clients src 192.168.0.10 192.168.0.20 192.168.0.30
http_access allow allowed_clients
http_access deny !allowed_clients
This allows only the machines whose IPs are 192.168.0.10, 192.168.0.20, and 192.168.0.30 to have access to the Internet; the remaining (unlisted) IP addresses are denied the service.

2. Restricting access to particular hours only
acl allowed_clients src 192.168.0.1/255.255.255.0
acl regular_days time MTWHF 10:00-16:00
http_access allow allowed_clients regular_days
http_access deny allowed_clients
This allows all the clients in the network 192.168.0.1 to access the net from Monday to Friday between 10:00 am and 4:00 pm.

3. Different access times for different clients
acl host1 src 192.168.0.10
acl host2 src 192.168.0.20
acl host3 src 192.168.0.30
acl morning time 10:00-13:00
acl lunch time 13:30-14:30
acl evening time 15:00-18:00
http_access allow host1 morning
http_access allow host1 evening
http_access allow host2 lunch
http_access allow host3 evening
http_access deny all
The above rules will allow host1 access during both morning and evening hours, whereas host2 and host3 will be allowed access only during lunch and evening hours respectively.
Note:
All elements of an access entry are ANDed together and evaluated as
http_access Action statement1 AND statement2 AND ...
Multiple http_access statements are ORed, whereas the elements of a single access entry are ANDed together. For this reason, the rule
http_access allow host1 morning evening
would never have worked: the time condition (morning AND evening) can never be TRUE, and hence no action would have taken place.

4. Blocking sites
Squid can prevent access to a particular site, or to sites whose URLs contain a particular word. This can be implemented in the following way:
acl allowed_clients src 192.168.0.1/255.255.255.0
acl banned_sites url_regex abc.com
http_access deny banned_sites
http_access allow allowed_clients
The same can also be used to prevent access to sites containing a particular word, e.g. dummy or fake:
acl allowed_clients src 192.168.0.1/255.255.255.0
acl banned_sites url_regex dummy fake

http_access deny banned_sites


http_access allow allowed_clients
It is not practical to list all the banned words or site names in the configuration file itself; instead, they can be listed in a file (say banned.list in the /etc directory), and the ACL can pick up this information from that file to prevent access to the banned sites.
acl allowed_clients src 192.168.0.1/255.255.255.0
acl banned_sites url_regex "/etc/banned.list"
http_access deny banned_sites
http_access allow allowed_clients
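Building the word list is a one-liner; here is a minimal sketch. The text uses /etc/banned.list, but /tmp is used below so the sketch runs without root privileges:

```shell
# Sketch: build the word list that the url_regex ACL reads from a file,
# one pattern per line. (/tmp stands in for /etc in this example.)
banned=/tmp/banned.list
printf '%s\n' dummy fake > "$banned"
cat "$banned"
```

Each line of the file becomes one regular expression pattern for the banned_sites ACL.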

5. Optimizing use
Squid can limit the number of connections from a client machine, and this is possible through the maxconn ACL element. To use this option, the client_db feature should be enabled first.
acl mynetwork src 192.168.0.1/255.255.255.0
acl numconn maxconn 5
http_access deny mynetwork numconn
Note:
The maxconn ACL matches when the number of connections from a single client exceeds the specified value. This is the main reason why this ACL is used with http_access deny rules rather than allow rules.

6. Caching the data
Responses to requests are cached immediately, which is quite good for static pages. There is no need to cache cgi-bin or servlet output, and this can be prevented by using the no_cache ACL element.
acl cache_prevent1 url_regex cgi-bin \?
acl cache_prevent2 url_regex Servlet
no_cache deny cache_prevent1
no_cache deny cache_prevent2
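The intent of the two ACLs above can be mimicked with plain shell pattern matching (a toy check, not Squid's regex engine; the URLs are made up):

```shell
# Toy check mirroring the no_cache ACLs above: anything with cgi-bin,
# a '?', or Servlet in the URL is treated as uncacheable.
is_cacheable() {
  case $1 in
    *cgi-bin*|*\?*|*Servlet*) echo no ;;   # dynamic content: do not cache
    *) echo yes ;;                         # everything else: cacheable
  esac
}
is_cacheable "http://example.com/cgi-bin/search"
is_cacheable "http://example.com/page?id=1"
is_cacheable "http://example.com/static/logo.png"
```

The first two URLs come out uncacheable, the static image cacheable, which is exactly the split the no_cache rules aim for.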
7. Creating your own error messages
It is possible to show your own error message for a deny rule, and this is done with the deny_info option. By default, all the Squid error messages are placed in the /etc/squid/errors directory. The error directory can be changed with the error_directory option, and you can even customize the existing error messages.
acl allowed_clients src 192.168.0.1/255.255.255.0
acl banned_sites url_regex abc.com
http_access deny banned_sites
deny_info ERR_BANNED_SITE banned_sites
http_access allow allowed_clients
In the above example, a special message will be displayed whenever users try to access sites matching the banned patterns. The file named in the option, ERR_BANNED_SITE, must exist in the error directory mentioned above and should be in HTML format. The examples listed above are just a few of the options, facilities, and capabilities of ACLs.

Authentication Methods
In its default configuration, Squid allows any user access without any authentication. To authenticate users, i.e. to allow only valid users (from any machine in the network) to access the Internet, Squid provides an authentication process via an external program, for which a valid username and password are required. This is achieved using the proxy_auth ACL and authenticate_program, which force a user to supply a valid username and password before access is given. Several authentication programs are available which Squid can use:
1. LDAP : uses the Lightweight Directory Access Protocol
2. NCSA : uses an NCSA-style username and password file
3. SMB : uses an SMB server such as Samba or Windows NT
4. MSNT : uses a Windows NT authentication domain
5. PAM : uses Linux Pluggable Authentication Modules
6. getpwnam : uses the Linux password file
One needs to specify the authentication program being used, and this is done with the authenticate_program option. Make sure that the authentication program in question is installed and working.
The changes in the squid.conf file should then reflect this:
authenticate_program /usr/local/bin/pam_auth
acl pass proxy_auth REQUIRED
acl mynetwork src 192.168.0.1/255.255.255.0
http_access deny !mynetwork
http_access allow pass
http_access deny all
This uses the PAM authentication program, and all users need to authenticate before accessing the Internet.
Options like authenticate_ttl and authenticate_ip_ttl can also be used to change the behavior of the authentication process, e.g. the revalidation of usernames and passwords.

Advanced Authentication
We can start editing the configuration file by opening squid.conf in any text editor. The default port for Squid is 3128, but we can change it by editing the http_port line. To make the Squid server listen on TCP port 8080 instead of the default TCP port 3128, change the http_port line to: http_port 8080. We can also specify on which interface Squid listens for HTTP requests. When Squid is used on a firewall, it should have two network interfaces: one internal and one external. To make Squid listen only on the internal interface, simply put the IP address in front of the port number:
http_port 192.168.1.1:3128
We can change the visible_hostname line to give the Squid server a specific hostname. This hostname may be any name; here we chose the hostname TestName, so we edit the line as:
visible_hostname TestName
We can also configure Squid for security purposes, i.e. allow specific networks and block the rest, and we can configure a timetable for Internet use. All of this can be done by writing ACLs in the Squid configuration file.
For example, we can allow the internal network users by specifying the IP address of their network.

User Authentication
The following configuration allows for authenticated access to the Squid proxy service using
usernames and passwords.
1. You will need the htpasswd utility. If you've installed Apache on your server, you will already have it. Otherwise run:
sudo apt-get install apache2-utils
Create a file to store Squid users and passwords, and change its ownership:

sudo touch /etc/squid3/squid_passwd
sudo chown proxy /etc/squid3/squid_passwd

Create a username/password pair:

sudo htpasswd /etc/squid3/squid_passwd user1

Replace user1 with a username. You will be prompted to create a password for this user:

New password:
Re-type new password:
Adding password for user user1
You can repeat this step at any time to create new users.
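htpasswd is the usual tool, but as an illustrative alternative (not part of the original procedure) an entry in the same Apache-MD5 format can be produced with openssl; the user name and password below are placeholders:

```shell
# Hypothetical alternative to htpasswd: generate an Apache-MD5 ($apr1$)
# password hash with openssl and print the user:hash line that would be
# appended to the password file.
user=user1
hash=$(openssl passwd -apr1 secretpassword)
printf '%s:%s\n' "$user" "$hash"
```

The printed line has the same user:passwordhash shape that Squid's ncsa_auth helper expects in the password file.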
Edit the Squid configuration file and add the following lines:
/etc/squid3/squid.conf
auth_param basic program /usr/lib/squid3/ncsa_auth /etc/squid3/squid_passwd
acl ncsa_users proxy_auth REQUIRED
http_access allow ncsa_users
Once you've saved and exited the file, restart Squid:
sudo service squid3 restart
At this point, you can configure your local browser or operating system's network settings to use your server as an HTTP proxy. You will need to specify that the server requires authentication, and provide the username and password. How to do this will depend on your choice of OS and browser. Once you've made the settings change, test the connection by pointing your browser at a website that tells you your IP address.
To remove a user's access to the proxy, delete their entry in the /etc/squid3/squid_passwd file. Each user is represented in the file on a single line in the format user:passwordhash. After editing the file, restart Squid:

sudo service squid3 restart
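Since each user occupies one line, the removal can be scripted; here is a minimal sketch that works on a scratch copy of the file in /tmp (so as not to touch the real password file), with made-up hashes:

```shell
# Sketch: delete user1's entry from a (copy of the) Squid password file.
pw=/tmp/squid_passwd
printf 'user1:hash1\nuser2:hash2\n' > "$pw"   # two dummy user:passwordhash lines
sed -i '/^user1:/d' "$pw"                     # drop the line that starts with "user1:"
cat "$pw"
```

Only user2's line survives; run the same sed against /etc/squid3/squid_passwd (as root) for the real file, then restart Squid.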

Transparent Caching
When you implement disk caching in an Operating System Kernel, all applications automatically
see the benefit: the data caching happens without their knowledge. Since the Operating System
ensures that on-disk copies of data are always the same as the cached copies, the data that an
application reads is never out of date.
With web caching, however, there is a chance that the original data can change without the cache knowing. Squid uses refresh patterns to decide when cached objects are to be removed. If these rules are too aggressive, you could end up serving stale objects to clients. Even if these rules are perfect, an incorrectly configured origin server could cause Squid to return old objects. Because users could retrieve an out-of-date page, you should not implement caching without their knowledge.
Squid can be configured to act transparently. In this mode, clients do not configure their browsers to access the cache; instead, Squid transparently picks up the appropriate packets and caches the requests. This solves the biggest problem with caching: getting users to use the cache server. Users hardly ever know how to configure their browsers to use a cache, which means that support staff have to spend time with every user getting them to change their settings. Some users are worried about their privacy, or they think that (since it is a host between them and the Internet) the cache is slower, which is certainly not the case, as a few tests with the client program will show.
However, transparent caching isn't really transparent: the cache setup is transparent, but using the cache isn't. Users will notice a difference in error messages, and even the progress bars that browsers show can behave differently.

Configuring Transparent Proxy


Step 1. Make sure squid3 is installed correctly on the Ubuntu server.

Step 2. Configure the network interfaces with static IP addresses; in this case the proxy server uses two network cards:

sudo nano /etc/network/interfaces


auto eth0
iface eth0 inet static

address 192.168.1.10
netmask 255.255.255.0
network 192.168.1.0
broadcast 192.168.1.255
gateway 192.168.1.1

post-up iptables-restore < /etc/iptables.up.rules

auto eth1
iface eth1 inet static
address 192.168.2.10
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255

Step 3. Edit the file /etc/squid3/squid.conf and add the word transparent to the http_port 3128 line:

# NETWORK OPTIONS
#
#
http_port 3128 transparent
Then change the IP address in the acl localnet src option to match your network:

acl localnet src 192.168.2.0/24 # LAN IP address range

Save and exit.

Step 4. Edit /etc/sysctl.conf:

sudo nano /etc/sysctl.conf

Enable packet forwarding by adding (or uncommenting) the following lines:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Save and exit.

Step 5. Define the iptables rules for port forwarding by editing /etc/iptables.up.rules:

sudo nano /etc/iptables.up.rules

*nat
-A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.2.10:3128
-A PREROUTING -i eth1 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
-A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE
COMMIT

Step 6. Edit /etc/rc.local and add this line at the end of the file:
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o eth0 -j MASQUERADE

Step 7. Restart squid3 and network


sudo /etc/init.d/squid3 restart && sudo /etc/init.d/networking restart
On the client, set the IP address manually:
IP address: 192.168.2.11
Netmask: 255.255.255.0
Gateway: 192.168.2.10
DNS: 192.168.2.10 # or use Google DNS 8.8.8.8 and 8.8.4.4
