
School of Computer Science and Information Technology
University of Nottingham
Jubilee Campus
NOTTINGHAM NG8 1BB, UK

Computer Science Technical Report No. NOTTCS-TR-2005-1

Firewalls, Intrusion Detection Systems and Anti-Virus Scanners


Julie Greensmith and Uwe Aickelin

First released: February 2005

Copyright 2005 Julie Greensmith and Uwe Aickelin


Firewalls, Intrusion Detection and Anti-virus Scanners


Julie Greensmith
ASAP Group, University of Nottingham, UK
email: jqg@cs.nott.ac.uk
June 21, 2004

1 Introduction
While the sharing of resources and information in an interconnected communication network is essential, it is also necessary to impose access restrictions, because connected systems are vulnerable to misuse by other users through access violation attempts. A number of tools have been developed to reduce this vulnerability, including firewalls, intrusion detection systems and anti-virus software. The differences between these tools are not immediately obvious, but they do exist and play a core role in securing systems. This article will examine the process involved in using each of the tools and will highlight the differences between the tools themselves and their subsequent deployment throughout a network of computers.

2 Securing Networks
Security is needed throughout distributed systems (interconnected components forming a network) in order to build dependable and trusted computing platforms. During the design phase of a distributed system, security policies are developed which account for the measures taken to ensure both the confidentiality and the integrity of the system, where necessary. Confidentiality in this context refers to access constraints on users, and is there to protect the data. Integrity refers to the correct running of the system and of the data contained on the system. Additionally, the usability of the system must be preserved; this is tied in with preserving the integrity of the system so that it remains functional for its users. There are several ways in which a system can be compromised, as stated in [7].

Interception can occur when an unauthorised user gains access to a service or to a resource, such as the illegal copying of data after breaking into a restricted file system. Interruption can occur when files are corrupted or erased, for example as the result of a denial of service attack or the action of a computer virus. Modification involves an unauthorised user or program making changes to data or to system configuration, and can also include the modification of transmitted data, leading to a breakdown of trust between parties. Fabrication is where data or activities are generated which would not normally occur; an example would be the addition of entries to a password file in order to compromise a system.

To prevent such events from taking place within a system, a security policy must be put in place and the necessary measures taken. Such measures can include the encryption of data, correct authentication and authorisation of users with respect to data access and command execution, and the conscientious auditing of log files monitoring system activity.
From these descriptions it is evident that potential abusers of these systems can be both external and internal to the system. Many tools and techniques exist with the purpose of ensuring the confidentiality and integrity of a system. The use and deployment of the tools considered here (firewalls, intrusion detection systems and anti-virus scanners) depends upon where in the system they are placed and, indeed, on the architecture of the system itself. Therefore, I will briefly digress and discuss what is meant by a system within this context. The system in question is a network of interconnected computers and servers, forming a local area network. Such a network could be used, for example, by the Inland Revenue. A diagram of the connected components is shown in Figure 1. This local network additionally needs to be connected to the external world, i.e. the Internet. There are several security challenges that need to be addressed for this network. The data within the system must be protected: not all users within the local network need to have access to all files on the network or to the external Internet environment. Similarly, external entities may need to access the web server within the network, for instance to access a particular forum held on the web server. These functions must be available without compromising the integrity or confidentiality of the system, data or users. The level of security and the methods of ensuring it are defined in a security policy. The type of tool used and the way in which it is implemented depend on the contents of the policy. For example, the policy would define whether incoming telnet connections were permitted; if so, there are various constraints and configurations that should be applied to the system to enforce this.

3 Security Measures
3.1 Firewalls
Firewall systems are commonly implemented throughout computer networks. They act as a measure of control, enforcing the relevant components of the security policy. A firewall can consist of a number of different components, such as a router or a collection of host machines. However, the basic function of a firewall is to protect the integrity of the network under its control. There are different types of firewall that can be implemented, with the choice of firewall depending on the security policy and the level of deployment in the system.

Figure 1: A simple network structure

3.1.1 Packet Filtering Firewalls

Packet filtering firewalls work at the transport layer of the seven layer model [8]. This means that they are commonly deployed on routers and act as a bottleneck between the local network and the external Internet. As the name suggests, a packet filtering firewall examines each packet passing through it, comparing it against a set of criteria for what is permissible either in or out of the network. These criteria are defined by the security policy. There are two ways in which packet filters operate: either accept all packets except those which are specified, or deny all packets except those which are specified. The advantage of the accept method is that it gives legitimate users of the network greater flexibility. For example, a remote user connecting from a previously unseen IP address (e.g. in an Internet cafe) could log in to an office machine from a laptop while working out of the office. However, it also increases the vulnerability of the network, because not all attacks match rules which are already known: this is why the deny-by-default paradigm is more likely to be used. Denying all that is unknown gives greater security, but it can cause inconvenience to legitimate users; it should be deployed more often than it is, a shortfall usually due to a lack of understanding on the part of those responsible for configuring the firewall. Packet filters can examine the following attributes of a packet (a short sketch of a filter over these fields follows the list):

- Source IP address
- Destination IP address
- TCP/UDP source port
- TCP/UDP destination port
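As a purely illustrative sketch (the rule format, addresses, port numbers and helper names below are invented for this example and are not taken from any particular firewall product), a deny-by-default filter over these four fields might look as follows:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str    # source IP address
    dst_ip: str    # destination IP address
    src_port: int  # TCP/UDP source port
    dst_port: int  # TCP/UDP destination port

# Hypothetical allow-rules derived from a security policy; "*" matches anything.
# Everything not explicitly allowed is dropped (the deny-by-default paradigm).
ALLOW_RULES = [
    # external clients may reach the web server on port 80
    {"src_ip": "*", "dst_ip": "10.0.0.5", "src_port": "*", "dst_port": 80},
    # replies from the mail server back to any local host
    {"src_ip": "10.0.0.6", "dst_ip": "*", "src_port": 25, "dst_port": "*"},
]

def matches(rule: dict, pkt: Packet) -> bool:
    """A rule matches when every non-wildcard field equals the packet's field."""
    fields = {"src_ip": pkt.src_ip, "dst_ip": pkt.dst_ip,
              "src_port": pkt.src_port, "dst_port": pkt.dst_port}
    return all(rule[f] == "*" or rule[f] == v for f, v in fields.items())

def filter_packet(pkt: Packet) -> str:
    return "ACCEPT" if any(matches(r, pkt) for r in ALLOW_RULES) else "DROP"

# An inbound telnet attempt (destination port 23) matches no rule, so it is dropped.
print(filter_packet(Packet("203.0.113.7", "10.0.0.5", 40000, 23)))  # -> DROP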
If, in the example network, an external user was trying to connect to port 23 of a machine on the local network, then it is likely that the external user is trying to TELNET into that machine. This operation is unlikely to be authorised and therefore the firewall on the router would not permit the transmission of the packets into the network.

3.1.2 Circuit Level Gateways

The situation could arise where an external user (not from the local area network) wishes to access information on a file server behind at least one firewall. The security policy for the network would not permit a direct connection between the external user and the file server (shown as part of the LAN in Figure 1), as this could leave the network vulnerable to attack. The solution is for the two parties to create a tunnel between the two components, employing a method of encryption in the connection. The initial connection request is filtered (and is subject to acceptance based on the security policy) but the packets that follow are not, as the gateway acts as a relay between the two entities. It is therefore important to explicitly state the use of circuit level gateways in the policy, in order to avoid exploitation of the network.

3.1.3 Application Gateways

Application gateways, also known as proxies, are a commonly used firewall mechanism. It is feasible to want a particular component of a network, such as a publicly available interface (e.g. an online enquiries form), to be available to entities outside of the local network. While remote access to other components of the network may not be allowed, placing components in a demilitarized zone (DMZ, in between two firewalls) would allow access to the components needed by the external network. Restricting access to components via a DMZ, and through the use of a proxy server, allows external users to perform functions on, for example, a web server, but does not disclose the architectural details of the LAN. As with circuit level gateways, the proxy acts as a mediator between an external entity and a component behind the packet filtering firewall on the main router. However, unlike circuit gateways, application gateways can filter IP traffic. This is an advantage because it can disallow certain actions once a connection to the proxy has been made, e.g. it can prevent anonymous FTP log-in to the system. Proxies can also act as caches for local users accessing the Internet. This can be useful for restricting access to blacklisted web sites, for example in corporate LANs where common mail providers such as Hotmail and Yahoo cannot be accessed. Allowing employees to surf such sites is seen as a waste of resources, not to mention a breeding ground for viruses1. Again, the pre-defined security policy, if adequately prepared, would define the access permitted to each individual user of the network. Additionally, application gateways can perform packet logging for a post hoc inspection of the traffic going both in and out of the network. The disadvantage of using an application gateway is that it requires a multi-stage handshake to initialise a connection, which can slow down the performance of that application considerably compared with making a direct connection. Where restricted command execution is required, as in the case of FTP through a proxy, modified clients may need to be installed, which is extra work for both the system administrators and the users. Hence, the transparency of the service to the users is affected.

3.1.4 Other Points to Note

One feature of firewalls is that they should provide a high level of user transparency, meaning that the end user should be unaware of the action of the firewall, so that quality of service is maintained. Transparency is high for packet filtering firewalls, as the user is not always aware of the firewall until a transmission is denied. Application gateways have lower transparency, as they often require the users to use modified software clients in order to use the proxy's service, which can result in users attempting to bypass the system entirely.
1 More about this in a little while.

Recently, stateful multilayer inspection firewalls have been deployed, operating at the application layer, transport layer and network layer, which combine the packet filtering property with the packet sniffing capabilities of gateways. Stateful inspection can be used to prevent attacks such as the Loki or Smurf denial of service attacks, as the firewall would be aware that the original packet was not sent as a broadcast message from a machine on the network [6]. However, experience has shown that these systems are difficult to manage due to the complexity of the rules and the processes involved, rendering them less secure than their separate counterparts. With respect to the actual hardware required to implement firewalls, there are two types, namely bridging firewalls and firewall routers. Bridging firewalls are software firewalls that can be run on a standard machine, using a firewall such as iptables. Firewall routers are a specific piece of hardware designed to perform as both a router and a firewall, and have been implemented as the first line of defence in many networks. Bridging firewalls are becoming prominent due to their ease of configuration, ease of initial installation, good performance (little computational overhead) and their ability to be stealthy, so they are less likely to be attacked [15].
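To illustrate the stateful idea, the sketch below (a simplification; the flow-table representation, field names and example addresses are invented and do not reflect any particular product) only accepts inbound packets that belong to a connection initiated from inside the network:

```python
# Minimal sketch of stateful inspection: inbound traffic is only accepted if it
# is part of a flow that a local machine initiated.  Flow keys and the Packet
# type are invented for this example.
from typing import NamedTuple, Set, Tuple

class Packet(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    direction: str  # "out" (leaving the LAN) or "in" (arriving from outside)

Flow = Tuple[str, int, str, int]  # (local_ip, local_port, remote_ip, remote_port)
established: Set[Flow] = set()

def stateful_filter(pkt: Packet) -> str:
    if pkt.direction == "out":
        # Remember the flow so that replies can come back in.
        established.add((pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port))
        return "ACCEPT"
    # Inbound: accept only replies to flows we initiated; drop everything else,
    # including unsolicited "replies" of the kind seen in Smurf-style attacks.
    flow = (pkt.dst_ip, pkt.dst_port, pkt.src_ip, pkt.src_port)
    return "ACCEPT" if flow in established else "DROP"

stateful_filter(Packet("10.0.0.2", "198.51.100.9", 51000, 80, "out"))           # flow recorded
print(stateful_filter(Packet("198.51.100.9", "10.0.0.2", 80, 51000, "in")))     # ACCEPT (reply)
print(stateful_filter(Packet("203.0.113.66", "10.0.0.2", 80, 52000, "in")))     # DROP (unsolicited)
```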

3.2 Intrusion Detection Systems


As previously stated, the majority of traffic on the network is not malicious, and most users within a system do not set out to gain unauthorised access to information. However, the use of an intrusion detection system (IDS) is becoming increasingly commonplace, due both to the increasing complexity of attacks and to that of the computer systems themselves. As with any complex system, emergent properties can arise unexpectedly. In the case of such systems, unexpected interactions between the various components can give rise to vulnerabilities which can be exploited. Additionally, the use of a firewall may not prevent internal abuse by an otherwise legitimate user of the system (whether breaches of confidentiality or of system integrity). When defining what intrusion detection systems are, it perhaps makes more sense to describe what they are not. IDS are not a preventive measure. They will not stop intruders breaking into a system, nor will they prevent internal damage to a system. As the name clearly states, they are a detection system, implying that abuse of a system is reported as and when it happens. In essence, they are analogous to a burglar alarm in a house: such an alarm can trigger an immediate response (e.g. calling the police), can be used to alert the owner that unauthorised behaviour is taking place, or can simply cause annoyance to the neighbours. As with firewalls, different types of intrusion detection system exist. There are two different ways of classifying an IDS. The first is to classify based on the method of detection, in the form of either misuse detection or anomaly detection. An alternative is to classify based on the position of deployment within a network: IDS can be network based, host based or application based, depending on where they are deployed [9].

Irrespective of the specifics regarding implementation and deployment, IDS function in a generic way. Input data from a system is collected and processed into a manageable format. The data items are classified as a threat or as harmless. If a threat is detected, then a response is produced, usually in the form of an alert to the system administrator. A more detailed explanation of the process is as follows (a simplified code sketch of this pipeline is given below):

1. Data has to be captured, often in the form of IP packets.
2. The data are decoded and transformed into a uniform format, through the process of feature extraction.
3. The data are then analysed in a manner which is specific to the individual IDS, and classified as threatening or not.
4. Alerts are generated if and when a threatening pattern is encountered. However, precautions must be taken to stealth this part of the system, so that an intruder cannot spoof alerts (potentially leading to a denial of service attack). Various techniques are employed to correlate the results; this can be done using an automated system, or manually.

3.2.1 IDS Classification based on style of detection

Misuse Detection: This type of IDS can also be called a signature recognition system. Misuse detection systems rely on the accurate matching of system or network activity [19]. This method of detection matches behaviour against a list of already documented patterns, known as signatures. An example of this type of IDS is a system known as Snort [4]. Snort functions by means of software components that process information regarding network connections. Snort examines the network traffic at its position on the network in a passive manner: it sniffs the network. Examination of the headers and content of TCP packets is performed and matched against patterns contained in a signature database. If certain patterns of traffic are captured, then an alert is generated2. Because only already known signatures are used, the system will produce only a few false positives, or false alarms, where an alert is generated yet there is no actual attack. There is a relatively high maintenance cost in that the signature base has to be kept up to date, else potential attacks could go unnoticed. Additionally, this type of system can miss highly novel attacks for which a signature does not yet exist, giving a higher rate of false negatives (where a real attack is not detected) than would be desired. Missing an actual attack is probably worse than being inundated with false alarms, though this is debatable.
2 Let me pose a question: is this really an intrusion detection system, or is it a TCP pattern detection system? This depends immensely on how you define an intrusion.
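The following toy sketch walks through the capture, feature extraction, signature matching and alerting steps described above. It is not Snort's implementation; the packet representation and the signatures themselves are invented for illustration:

```python
# Toy misuse (signature-based) detector following the four-step pipeline:
# capture -> feature extraction -> analysis against a signature base -> alert.
# The packet records and signatures below are invented for illustration.
from typing import Dict, List

SIGNATURES: List[Dict] = [
    {"name": "telnet-attempt", "dst_port": 23, "payload_contains": b""},
    {"name": "fake-exploit-x", "dst_port": 80, "payload_contains": b"/cgi-bin/evil"},
]

def extract_features(raw_packet: Dict) -> Dict:
    """Step 2: reduce a decoded packet to the fields the signatures use."""
    return {"dst_port": raw_packet["dst_port"], "payload": raw_packet.get("payload", b"")}

def match(features: Dict) -> List[str]:
    """Step 3: compare the features against every signature in the base."""
    return [s["name"] for s in SIGNATURES
            if features["dst_port"] == s["dst_port"]
            and s["payload_contains"] in features["payload"]]

def run(captured: List[Dict]) -> None:
    """Steps 1 and 4: iterate over captured packets and raise alerts on matches."""
    for pkt in captured:
        for sig in match(extract_features(pkt)):
            print(f"ALERT: signature '{sig}' matched packet from {pkt['src_ip']}")

run([{"src_ip": "203.0.113.7", "dst_port": 80, "payload": b"GET /cgi-bin/evil HTTP/1.0"}])
```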

Snort is an open source IDS which implements a range of pattern matching algorithms over the input data and produces alerts based on matching the input against a signature base. For example, it is likely that multiple port-scans on a particular component would raise some sort of alarm. The advantage of the system being open source is that if a vulnerability is found, then it is likely to be posted on a user forum; the idea being that 1000 pairs of eyes are more likely to notice a vulnerability in the software than a select few hired experts. A recent example of this is a vulnerability found in the Snort program itself, in which an integer overflow was discovered in one of the stream processors responsible for the calculation of the segment size for re-assembly. This could lead to a buffer overflow which could turn into a denial of service attack on the system itself, or even remote command execution on the host running the program (for examples, see [18]).

Anomaly Detection: The goal of anomaly detection systems is to successfully classify user or network behaviour as normal or abnormal, based on a profile of information gathered during a training period. This is performed by taking into account the amount of background noise or user variation which is intrinsic to the system. The characterisation of what constitutes normal behaviour is certainly a non-trivial issue. There have been many approaches used to perform this classification, including statistical models, Markov chains, neural nets and ideas based on other modern AI techniques (including artificial immune systems [3]). Normal behaviour is profiled either from an individual user or from the network; variants from this are defined as anomalies and alerts are generated. For example, a user of the example network ordinarily runs word processing applications and Internet browsers. If this user suddenly gains super-user privileges, starts changing file permissions and sending broadcast SYN packets, then it is likely that the integrity of the system is being compromised. A corresponding alert would be generated and some form of action would be taken by the system administrator. An example of this type of IDS is the experimental artificial immune system developed by Somayaji et al. [5]. This IDS resides on a host machine and examines sequences of Unix system calls to construct a profile of normal behaviour over a training period. Once this period ends (approximately two weeks was used for the training period), the resulting picture of normal behaviour is used as the basis of the classification: if the observed behaviour deviates from the normal, then an anomaly is detected. This causes the generation of a warning message which is sent to the user. While anomaly detection is a relatively effective way of detecting novel attacks, such systems do not as yet feature in many commercially produced systems, partially due to the high rate of false positives. Still, it remains a promising area of research. One of the potential drawbacks with anomaly detection systems is the generation of false positives. This could occur if the user's behaviour suddenly changed; perhaps the user went on holiday! The change of behaviour caused by this would be sufficiently different from the normal profile that an excessive number of alerts could be generated. Additionally, user behaviour is dynamic, changing over time as the user's needs change. As a consequence of this increased number of alerts, not only does it become irritating to the administrator, but it also becomes more difficult to detect an actual attack. The number of false positives can be reduced using various methods, specific to the technique involved in the anomaly detection process. In the case of the system of Hofmeyr and Forrest [3], the number of false positives was reduced through using a richer representation of the network traffic and through the finer tuning of several system parameters [2].
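As a rough illustration of the profiling idea, in the spirit of the system-call approach described above (this is not the algorithm of [5]; the window length, threshold and traces are invented):

```python
# Sketch of sequence-based anomaly detection: record the short system-call
# sequences (n-grams) seen during training, then flag unseen sequences later.
from typing import List, Set, Tuple

WINDOW = 3  # length of the call sequences in the profile (arbitrary choice here)

def ngrams(calls: List[str]) -> Set[Tuple[str, ...]]:
    return {tuple(calls[i:i + WINDOW]) for i in range(len(calls) - WINDOW + 1)}

def train(traces: List[List[str]]) -> Set[Tuple[str, ...]]:
    """Build the 'normal' profile from traces gathered during the training period."""
    profile: Set[Tuple[str, ...]] = set()
    for trace in traces:
        profile |= ngrams(trace)
    return profile

def anomaly_score(profile: Set[Tuple[str, ...]], trace: List[str]) -> float:
    """Fraction of observed sequences that were never seen in training."""
    observed = ngrams(trace)
    return len(observed - profile) / max(len(observed), 1)

normal = train([["open", "read", "write", "close"],
                ["open", "read", "read", "write", "close"]])
score = anomaly_score(normal, ["open", "read", "exec", "socket", "write"])
if score > 0.5:  # the threshold is another tunable parameter
    print(f"ALERT: anomalous behaviour, score {score:.2f}")
```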

3.2.2 Classification through deployment

There are several places throughout a system where an IDS could be placed, including on switches, on routers, or even within programs themselves. Here are some specific details regarding where IDS are placed and how this affects their function.

Network Based: This type of IDS sniffs the traffic on the network by capturing packets of data (often IP data) and using them in the analysis. Data capture is performed at the network switch level, providing detection for traffic going in and out of multiple hosts. This method of deployment is popular for commercially available IDS [1], as such systems are relatively scalable and so can be used for large scale networks: only one system is used to detect attacks covering many hosts. Additionally, the sniffing device should operate on the network in a stealthy manner, making it difficult for malicious users to launch an attack on the IDS itself. The IDS should not interfere with the end-users of the system, thus providing a high degree of transparency. As this type of IDS is passive, i.e. does not have a direct effect on the system, it is relatively easy to apply to pre-existing networks without causing too much disruption. These products can provide a large amount of audit data, so attack patterns can be studied retrospectively and the security vulnerabilities of a system can be explored in a post hoc manner. However, it should be taken into account that if the network is subject to particularly large amounts of traffic, then it is difficult to detect an attack against large amounts of background noise. The use of a token bucket filter in this case would be preferable (a generic sketch follows below), but this could potentially slow down the network, reducing the degree of user transparency. It is also difficult to analyse the content of an IP packet if a method of encryption is used. This can be a problem especially if virtual private networks form part of the system, as once a connection has been established, the level of encryption used makes it difficult to detect suspicious behaviour. Additionally, the problem of packet fragmentation is often not overcome in this type of system, as it is difficult to piece together the fragmented packets in a way that captures the necessary information without greatly increasing the computational overheads. All of the above are non-trivial issues, which may have to be resolved before such systems can reach the effectiveness which they promise.
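A token bucket filter is a standard rate-limiting mechanism. Purely as an illustration (the rate and burst values are invented, and no particular IDS is implied), such a filter could cap the volume of traffic handed to the analysis engine:

```python
# Generic token bucket: tokens accumulate at a fixed rate up to a burst limit,
# and each packet consumes one token; packets arriving when the bucket is empty
# are not passed on to the (more expensive) analysis stage.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate = rate      # tokens added per second
        self.burst = burst    # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=500, burst=1000)  # invented values: 500 packets/s, burst of 1000

def feed_to_ids(packet) -> None:
    if bucket.allow():
        pass  # hand the packet to the detection engine
    # else: the packet is skipped (or merely logged) to keep the analyser from being swamped
```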

Host Based: There are examples of systems that use a bottom-up, decentralised deployment, distributed on a per-host basis. There are several advantages to using one of these types of detection system. Most prominently, the traffic and the impact of any disruption can be analysed with greater accuracy, with information about exactly what is happening within the system becoming integral to the alert generating process. Additionally, logs kept on the host machine record the outcome of an attack, which can assist in the development of countermeasures. The operation of such systems relies on the availability of system logs, which are used as an audit trail and are often generated at the kernel level of the system. Compressing the data contained within these logs is difficult, as it requires significant feature extraction of relevant information from a potentially data-rich source. In addition to the wealth of data provided, a further advantage is that host based systems can view encrypted malicious traffic that a network based system would not be able to examine in detail [16]. Such systems often use user profiling in a manner similar to anomaly detectors, for example the statistical profiling method described in [19]. This vantage point can also be used to detect processes which should not be behaving in a particular manner; in particular, it can detect Trojan horse programs (programs which perform malicious operations but pose as something non-threatening) based on the detection of unexpected behaviour. However, the major disadvantage with this type of system lies in its distributed nature. Scalability becomes a consideration: if a signature based system is implemented, then the signature database must be kept up to date on every machine in the network. The high maintenance cost of this means that the database is likely to become obsolete, and the machines would become increasingly vulnerable. An adaptive system, where each machine would adapt to the perils of the dynamic network environment, would reduce the maintenance of the network and avoid the users of the system being directly involved in the protection of the system as a whole. But combining transparency and autonomy in one system is difficult. The computational resources for host based systems are provided by the host machine. If the intrusion detection system consumes too many system resources and slows the system down to an unacceptable level, then the user may be inclined to switch it off. The same applies to excessive numbers of alerts caused by a system with too many false positives; the system would be rendered useless if the user turned it off. In theory it is possible to disable the system by using a denial of service attack in the form of alert flooding, either causing the system to crash or causing the user to deactivate it out of annoyance. Finally, another major disadvantage is that the information from host based systems cannot be used to detect attacks on the network itself, so systematic port scans could go undetected.

Application Based: Application based IDS are a subset of host based systems. These systems analyse the behaviour of applications running on a host machine. They are specifically used to detect unauthorised usage of an application within a system, and use the information generated in the application logs to detect unusual behaviour. They can also monitor communications which use encryption, since they run on the host machine. Unfortunately, such systems are relatively easy to attack through program exploits or denial of service, as they run within applications themselves, or are even embedded in an operating system. However, they are more effective when used in combination with other types of IDS.

3.3 Anti-virus Scanners


Anti-virus (AV) scanners are used in an attempt to protect systems directly from damage. AV scanners detect a specific type of unauthorised activity in the form of malicious mobile code, collectively known as malware. The behaviour of these malware agents varies considerably, as does the resultant effect on the system. A relatively benign but annoying virus might change a small feature of a program or system [11]; on the other hand, a maliciously designed Internet worm can bring the world of interconnected computers to a standstill within a matter of hours. In order to appreciate the role of AV scanners in context, it is necessary to explain the basic principles of what scanners have to protect against3.

3.3.1 Malware in a minute

Malicious code is essentially a computer program that modifies a system call or the functioning of a program without the consent of the user of the system. Due to the sheer number of unexpected bugs that can occur through program interactions, there are a number of holes that can be, and indeed have been, exploited. For example, the Network Associates virus glossary defines malware as programs that are intentionally designed to perform some unauthorised act [12]. As there are a plethora of ways in which such malicious code can be written and deployed, various classification systems exist. Computer viruses first emerged during the 1980s and their main transmission vector was shared floppy disks. The virus would often modify critical code within the boot sector of a machine, rendering it useless, or would cause programs to crash.
3 Bearing in mind that examining computer viruses and attacks would be worth an essay in itself!


The term virus was coined due to the similarity to biological viruses; neither has the capability to replicate on its own, and both rely on using other cells/files on a host in order to spread. The spread of viruses before computers were connected was relatively slow, because transmission was not over a scale free network: it had to take place via a physical floppy disk. Computer viruses in the conventional sense of the word are now less prevalent, for two reasons: people rarely use physical media (floppy disks, CDs) to exchange information, and booting from floppy disks is less common, with this option frequently disabled by default. However, the increasing interconnection of computers spawned a new transport vector for malware, and thus many of the most disruptive pieces of malware take the form of computer worms. A worm can be defined as malicious code, which may or may not be file infecting and may or may not require user intervention, but which propagates through a network. Worms are a serious problem for organisations large and small, and cost billions of pounds in wasted time and resources. There are several aspects to worm design and propagation. As with other malware, loose classification schemes exist for worms, based on how they install themselves on a host machine and also how they propagate through a network. In general, worms use Internet connectivity in the form of e-mail, Windows file sharing systems or direct TCP/IP connections. However, such definitions are not mutually exclusive; indeed, the Nimda worm utilised all three methods of disruption and propagation. In the last five years, proliferation of worms through e-mail based transmission has become a real problem. To illustrate this, the Virus Library [17] recorded one e-mail worm in 1998 and 44 in 2000, but by the first half of 2003, 192 had been reported. Such worms propagate through the network masquerading as an e-mail attachment, which the user downloads. The worm then replicates by sending itself to all addresses in the host machine's address book. This often leads to a mass duplication of the virus, slowing up both the host machine and mail servers. Worms such as Nimda and Sobig.F brought the Internet to a virtual standstill. The recovery after such events costs billions, in both financial terms and in people's time. This type of worm requires an element of social engineering, as the e-mail often contains a generic message enticing the user to open the virus-carrying attachment, with frequently disastrous consequences. Despite all the publicity and hype surrounding the dangers and damage caused by Internet worms, another persistent offender is the Trojan horse program. This type of malware does not focus on using networking to spread, but is a malicious program designed to cause damage to computer systems while masquerading as a benign program. It is defined by the SANS Institute [14] as a computer program that appears to have a useful function, but also has a hidden and potentially malicious function. For example, someone within an organisation receives an e-mail attachment which is a screen-saver, and this user decides to download the attachment and install it on their machine. However, this screen saver on execution infects the computer, causing malfunctions in a multitude of system processes. Unfortunately, the screen saver is amusing to the user, and so they forward this e-mail, complete with its virus-infected

attachment to a series of their colleagues. It is only once the symptoms of the virus are noted that a system administrator is alerted to any danger, and by this time it could be too late. Trojan horse programs often cause minor malfunctions in applications: decimal point errors in spreadsheets or formatting issues in word processing software. A more serious security hole created by Trojan horse programs is known as a backdoor: if the program is run on a machine, it can create remote access to the machine for the creator of the virus. The Trojan program can also convert a host machine into a zombie machine for the purposes of launching a distributed denial of service attack. This can greatly compromise data confidentiality and the integrity of a system.

3.3.2 Scanners: The Remedy

For an organisation to completely stop the proliferation of computer viruses, e-mail services could be restricted to management staff only and the downloading of attachments prohibited. However, this would be a severe impediment to modern business practices and is obviously no solution. The most favourable method of protecting against malware is the installation of an anti-virus scanner. This software examines processes at the application layer of the network, and can be run both at the level of the server (to detect viruses that could infect servers) and on the individual host machines. Anti-virus scanners are popular in the commercial sector, with a multitude of companies, including F-Secure [10] and Norton [13], providing several products for home and commercial use. Such scanners acting on user machines contain (as with misuse detectors) a signature base of pre-defined virus behaviour patterns, which can include information about what anomalies to examine in terms of system calls or the presence of files with certain extensions. There are two points at which an anti-virus scanner is run on a host machine: on the commencement of downloading an attachment, and when the computer is booted. Pattern matching that is efficient in terms of computational resources is required in order to provide any protection, as if the virus scanner were to slow down the system processes sufficiently, then the user could be tempted to turn off the software. On discovery of a virus, the user or administrator is informed, and the anti-virus vendors often provide a virtual antidote in the form of a patch to aid in fixing any damage caused by the virus. It is worth bearing in mind that the more publicised viruses of late have not caused insurmountable damage to individual machines themselves, but have often been used to create distributed denial of service attacks on large corporations, including the Microsoft web site. However, as with misuse IDS, the protection from such viruses is only effective providing that the signature base is kept up to date with the latest update. Updates can become so voluminous that individual users do not have the time, patience or (in the case of those still running dial-up connections) the resources to keep constantly updating virus definitions. There are several ways in which the signature bases can be updated across the network, but there are various ethical and practical considerations. Firstly, the most obvious method involves the user downloading updates and patches. The problem with this is that it is not scalable in large organisations, and it is difficult to get everyone to take responsibility for it. This method is more suitable for very small networks, where it would ensure a higher level of protection. An alternative method would involve the installation of new signatures and patches by a network administrator. Again, the AV scanner would only be effective if this was done on a regular basis, but this approach is likely to be more reliable, as updates would be performed as a matter of course and any potential problems caused by software interactions are likely to be noticed. The most effective form of updates would come directly and automatically from the vendor. However, there are obvious privacy issues that arise because of this: the vendor would need some form of access to the network. There are two foreseeable problems: certain individuals at the vendor end abusing the trust and using the addition of a patch to open up a backdoor, compromising the security of the network without the administrator being aware; and the addition of a patch or a new signature causing an unexpected error. On the face of it this does not seem like too much of a problem; however, if this occurred on a critical system such as an air traffic control system or a medical system, then the consequences could be disastrous. It is true that if the vendor was responsible for providing and applying the updates, virus incidents would be less common; however, it is still seen to be the responsibility of the individual user or their organisation. This is similar to the problem faced by the administrators of host based intrusion detection systems.
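To make the signature-base idea concrete, here is a deliberately naive sketch of on-demand file scanning (the byte signatures, quarantine location and alert text are invented placeholders; real scanners use far more sophisticated detection than raw substring matching):

```python
# Naive on-demand virus scan: look for known byte signatures in files and move
# anything that matches into a quarantine directory.  Signatures are placeholders.
from pathlib import Path
import shutil

SIGNATURES = {
    "example-worm-a": b"\xde\xad\xbe\xef\x01",   # invented byte pattern
    "example-trojan-b": b"EVIL_PAYLOAD_MARKER",  # invented byte pattern
}
QUARANTINE = Path("quarantine")

def scan_file(path: Path) -> list:
    data = path.read_bytes()
    return [name for name, sig in SIGNATURES.items() if sig in data]

def scan_and_quarantine(directory: Path) -> None:
    QUARANTINE.mkdir(exist_ok=True)
    for path in directory.rglob("*"):
        if not path.is_file():
            continue
        hits = scan_file(path)
        if hits:
            # Inform the user/administrator and isolate the file.
            print(f"ALERT: {path} matches {hits}; moving to quarantine")
            shutil.move(str(path), QUARANTINE / path.name)

# e.g. scan everything in a downloads folder before the attachments are opened:
# scan_and_quarantine(Path("downloads"))
```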

4 Comparatively Speaking
Superficially, IDS, firewalls and anti-virus scanners perform similar roles, though on closer inspection differences become apparent. When viewed in terms of deployment, the systems appear to have similarities. Firewalls can be implemented on a network at multiple layers, or, in the case of personal firewalls, can examine network connections on a host machine. This type of deployment can be seen both in IDS (the network IDS Snort and the host based Tripwire) and in anti-virus products (for use on mail servers as filters and on host machines checking system calls). Additionally, in all cases, a database of already known patterns of misbehaviour, represented as a rule set, can be used to specify what is permitted and what is not, as stated in the security policy. This is seen in many different types of firewall, in misuse detecting IDS and in most anti-virus scanners. It really does beg the question, then, why these systems are classified as different types of security measure when they essentially perform similar, if not the same, functions. The differences lie in two main aspects: what the component is looking for as a violation, and how the component responds to the detection of a violation.

Vigilance: Firewalls are implemented to prevent connections being made and packets being transmitted in violation of the rules laid out by the security policy. Intrusion detection systems look for anomalous system or network behaviour, through the examination of communication media and system calls, be it using a pre-defined pattern base or a profiling mechanism. Anti-virus scanners look for the presence of pre-defined files or the execution of system commands which are known to cause problems.

Response: The common response of a firewall is to deny a connection or to drop a packet which would be seen to be against the security policy, without exception. The action of an anti-virus product would involve quarantining infected or modified files and producing a notification message to the user, informing them of the problem. The same applies to anti-virus systems applied at a mail server: if a mail message contains a suspected virus, then the recipient is warned and the message is quarantined. More passively still, the common response from an intrusion detection system is to notify the administrator of the system of a suspected intrusion.

Of course there are exceptions to all of the above. The distinctions between the different components are not entirely clear cut: a well implemented intrusion detection system should, in theory, be able to detect the action of a computer virus too. The three components, when used in conjunction, form a spectrum of overlapping function. As each of the components is developed with different constraints in mind, the use of all three in combination, with due care and attention, provides a higher level of security than treating any one component as the security solution.

5 Summary
This article has concentrated on exploring and explaining three countermeasures which are used to improve the security of networked computers. The basic concept of a network and the need for an effective security policy were introduced. The different types of firewall, intrusion detection system and anti-virus scanner were then discussed, along with where they are deployed, their functions and respective behaviour, and some examples of the intrusions and attacks they are used to protect against. The differences between these systems are not as clear cut as might first be thought: indeed, there is some overlap in the functioning of all of these systems. Yet they are sufficiently different in their mechanism of action and their response to warrant being treated as separate components. The correct implementation, deployment and configuration of all of these systems form some of the most effective measures available in the battle for the defence of computer systems.

References
[1] Rebecca Bace and Peter Mell. Intrusion detection systems. NIST Special Publication on Intrusion Detection Systems.

[2] J Balthrop, F Esponda, S Forrest, and M Glickman. Coverage and generalization in an artificial immune system. Proceedings of GECCO, pages 3-10, 2002.

[3] S Hofmeyr and S Forrest. Immunity by design. Proceedings of GECCO, pages 1289-1296, 1999.

[4] Martin Roesch. Snort: Lightweight intrusion detection for networks. In Proceedings of the 13th Conference on Systems Administration, pages 229-238. USENIX Association, 1999.

[5] A Somayaji, S Forrest, S Hofmeyr, and T Longstaff. A sense of self for Unix processes. IEEE Symposium on Security and Privacy, pages 120-128, 1996.

[6] Stephen Northcutt and Judy Novak. Network Intrusion Detection. New Riders, 3rd edition, 2003.

[7] Andrew S Tanenbaum. Distributed Systems: Principles and Paradigms. Prentice Hall, 2002.

[8] Andrew S Tanenbaum. Computer Networks. Prentice Hall, 4th edition, 2003.

[9] H Venter and J Eloff. A taxonomy for information security technologies. Computers and Security, 22(4):299-307, 2003.

[10] www.fsecure.com/

[11] www.fsecure.com/v descs/nuclear.shtml

[12] www.nai.com/

[13] www.norton.com/

[14] www.sans.org

[15] www.securityfocus.com/infocus/1737

[16] www.tripwire.com

[17] www.viruslibrary.com/virusinfo

[18] www.whitehats.com

[19] Nong Ye, Xiangyang Li, Qiang Chen, Syed Masum Emran, and Mingming Xu. Probabilistic techniques for intrusion detection based on computer audit data. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 31(4):266-274, 2001.

