
Cryptographic Versus Trust-based Methods for MANET Routing Security

Abstract Mobile Ad-hoc Networks (MANETs) allow wireless nodes to form a network without requiring a fixed infrastructure. Early routing protocols for MANETs failed to take security issues into account. Subsequent proposals used strong cryptographic methods to secure the routing information. In the process, however, these protocols created new avenues for denial of service (DoS). Consequently, the trade-off between security strength and DoS vulnerability has emerged as an area requiring further investigation. It is believed that different trust methods can be used to develop protocols at various levels in this trade-off. To understand this trade-off, real-world testing that evaluates the cost of existing proposals is necessary. Without it, future protocol design is mere speculation. In this project, we give the first comparison of SAODV and TAODV, two MANET routing protocols, which address routing security through cryptographic and trust-based means respectively. We provide performance comparisons on actual resource-limited hardware. Finally, we discuss design decisions for future routing protocols.

Introduction
In traditional wireless networks, a base station or access point facilitates all communications between nodes on the network and communications with destinations outside the network. In contrast, MANETs allow for the formation of a network without requiring a fixed infrastructure. These networks only require that nodes have interoperable radio hardware and are using the same routing protocol to route traffic over the network. The lessened requirements for such networks, along with the ability to implement them using small, resource-limited devices, have made them increasingly popular in all types of application areas. For example, MANET-based sensor networks have been proposed to assist in collecting data on the battlefield. Since there is no fixed infrastructure, the nodes in the network forward traffic for one another in order to allow communication between nodes that are not within physical radio range. Nodes must also be able to change how they forward data over the network as individual nodes move around and acquire and lose neighbors, i.e., nodes within radio range. Routing protocols are used to determine how to forward the data as well as how to adapt to topology changes resulting from mobility.

Initial MANET routing protocols, such as AODV, were not designed to withstand malicious nodes within the network or outside attackers nearby with malicious intent. Subsequent protocols and protocol extensions have been proposed to address the issue of security. Many of these protocols seek to apply cryptographic methods to the existing protocols in order to secure the information in the routing packets. It was quickly discovered, however, that while such an approach does indeed prevent tampering with the routing information, it also makes for a very simple denial of service (DoS) attack. This attack is very effective in MANETs, as the devices often have limited battery power in addition to limited computational power. Consequently, this type of DoS attack allows an attacker to effectively shut down nodes or otherwise disrupt the network. The trade-off between strong cryptographic security and DoS has become increasingly important as MANET applications are developed which require a protocol with reasonable security and reasonable resistance to DoS, a kind of middle ground. It has been suggested that various trust mechanisms could be used to develop new protocols with unique security assurances at different levels in this trade-off. However, the arguments for this have been purely theoretical or simulation-based. Determining the actual span of this trade-off in real-world implementations is of utmost importance in directing future research and protocol design.

It is in this context that this paper considers two proposed protocol extensions to secure MANET routing. The first, SAODV, uses cryptographic methods to secure the routing information in the AODV protocol. The second, TAODV, uses trust metrics to allow for better routing decisions and to penalize uncooperative nodes. While some applications may be able to accept SAODV's vulnerability to DoS or TAODV's weak preventative security, most will require an intermediate protocol tailored to the specific point on the DoS/security trade-off that fits the application. The tailored protocols for these applications will also require performance that falls between that of SAODV and TAODV. Understanding how the SAODV and TAODV protocols (which are on the boundaries of the DoS/security trade-off) perform on real hardware, and to what extent there exists a performance gap, is a prerequisite for being able to develop the intermediate protocols. Such evaluation is not only required for developing intermediate protocols, but also for determining the direction for development of new trust metrics for ad-hoc networks. In this paper we provide the first performance evaluations for these protocols on real-world hardware.

2. Organization Profile
Company Profile
At Mindset IT Solutions, we go beyond providing software solutions. We work with our clients' technologies and business processes that shape their competitive advantages.

Founded in 2000, Mindset IT Solutions (P) Ltd. is a software and service provider that helps organizations deploy, manage, and support their business-critical software more effectively. Utilizing a combination of proprietary software, services, and specialized expertise, Mindset IT Solutions (P) Ltd. helps mid-to-large enterprises, software companies, and IT service providers improve consistency, speed, and transparency in service delivery at lower costs. Mindset IT Solutions (P) Ltd. helps companies avoid many of the delays, costs, and risks associated with the distribution and support of software on desktops, servers, and remote devices. Our automated solutions include rapid, touch-free deployments, ongoing software upgrades, fixes and security patches, technology asset inventory and tracking, software license optimization, application self-healing, and policy management.

About The People
As a team we have the prowess to have a clear vision and realize it too. As a statistical evaluation, the team has more than 40,000 hours of expertise in providing real-time solutions in the fields of Embedded Systems, Control Systems, Micro-Controllers, C-based Interfacing, Programmable Logic Controllers, VLSI Design and Implementation, Networking with C, C++, Java, client-server technologies in Java (J2EE/J2ME/J2SE/EJB), VB & VC++, Oracle, and operating system concepts with LINUX.

Our Vision
Dreaming a vision is possible and realizing it is our goal.
Our Mission
We have achieved this by creating and perfecting processes that are on par with global standards, and we deliver high-quality, high-value, reliable, and cost-effective IT products and services to clients around the world.
Clientele
Aray InfoTech
Inquirre Consultancy (U.S.A.)
K Square Consultancy Pvt Ltd (U.S.A.)
Opal Solutions
Texlab Solutions
Vertex Business Machines
JM InfoTech

Related Work

DES Encryption Standard


DES encrypts and decrypts data in 64-bit blocks, using a 64-bit key (although the effective key strength is only 56 bits, as explained below). It takes a 64-bit block of plaintext as input and outputs a 64-bit block of ciphertext. Since it always operates on blocks of equal size and uses both permutations and substitutions, DES is both a block cipher and a product cipher. DES has 16 rounds, meaning the main algorithm is repeated 16 times to produce the ciphertext. The number of rounds matters for security: with too few rounds, analytic attacks more efficient than brute-force key search are known, so the full 16 rounds provide a substantial margin of safety. The Data Encryption Standard (DES) was developed in the 1970s by the National Bureau of Standards with the help of the National Security Agency. Its purpose is to provide a standard method for protecting sensitive commercial and unclassified data. IBM created the first draft of the algorithm, calling it LUCIFER. DES officially became a federal standard in November 1976.

Algorithm

Figure 5: DES Block Diagram

Fundamentally, DES performs only two kinds of operations on its input: bit permutation and bit substitution. The key controls exactly how this process works. Performing these operations repeatedly and in a non-linear manner yields a result that cannot be used to recover the original without the key. Those familiar with chaos theory will see a great deal of similarity to what DES does: by applying relatively simple operations repeatedly, a system can achieve a state of near-total randomness.

DES works on 64 bits of data at a time. Each 64-bit block is iterated on from 1 to 16 times (16 is the DES standard). For each iteration a 48-bit subset of the 56-bit key is fed into the encryption block represented by the dashed rectangle above. Decryption is the inverse of the encryption process. The "F" module shown in the diagram is the heart of DES; it consists of several different transforms and non-linear substitutions.
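The round structure just described can be illustrated with a toy Feistel network in Java. The round function f below is an arbitrary stand-in for the real DES F module (which uses expansion, S-boxes, and permutations); the point is only that the Feistel structure makes decryption the exact inverse of encryption by running the round keys in reverse order.

```java
// Toy Feistel network illustrating the structure DES uses.
// The round function f is a stand-in, NOT the real DES "F" module.
public class ToyFeistel {
    static final int ROUNDS = 4;

    // Placeholder round function mixing a 32-bit half-block with a round key.
    static int f(int half, int key) {
        return Integer.rotateLeft(half ^ key, 3) + 0x9E3779B9;
    }

    // Encrypt a 64-bit block split into two 32-bit halves.
    static long encrypt(long block, int[] roundKeys) {
        int left = (int) (block >>> 32), right = (int) block;
        for (int i = 0; i < ROUNDS; i++) {
            int tmp = left ^ f(right, roundKeys[i]);
            left = right;  // L(i+1) = R(i)
            right = tmp;   // R(i+1) = L(i) XOR f(R(i), K(i))
        }
        return ((long) left << 32) | (right & 0xFFFFFFFFL);
    }

    // Decryption runs the same rounds with the keys in reverse order.
    static long decrypt(long block, int[] roundKeys) {
        int left = (int) (block >>> 32), right = (int) block;
        for (int i = ROUNDS - 1; i >= 0; i--) {
            int tmp = right ^ f(left, roundKeys[i]);
            right = left;
            left = tmp;
        }
        return ((long) left << 32) | (right & 0xFFFFFFFFL);
    }
}
```

Note that f itself need not be invertible; the XOR-and-swap structure guarantees invertibility, which is why the same hardware or code can both encrypt and decrypt.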

What is the Limited DES that Enigma Implements?


The limited DES mode available in the freeware version of Enigma modifies the DES standard in two ways. First, a 32-bit key is used instead of 56 bits [note: 32 bits, NOT 28 bits]. Second, the data is iterated on only 4 times instead of 16. These changes reduce the computational complexity of the algorithm by at least 2^26 times. Nevertheless, a naive user would still have to guess on average 2 billion times before the correct key was determined. However, with only 4 iterations over the F module, there are known attacks better than brute force which could be used for a more sophisticated attack.

How Secure is DES?

Users of Enigma will most likely be subject to ciphertext-only attacks, that is, attacks in which the cryptanalyst has access only to encrypted documents. Under such conditions there is no known method of attack better than randomly guessing keys. This discussion assumes you meet this condition. The limited DES version of Enigma has 2^32, or 4,294,967,296, possible keys. The full DES version of Enigma has 2^56, or 72,057,594,037,927,936, possible keys. To determine the time it will take to break a file protected by Enigma, multiply the number of keys by the time it takes your computer to try one key, times one half, because on average you will guess the key by the time you have tried half the keys. For comparison, I have done some rough (but conservative) calculations. Using brute force, a Mac LC-II can break into a file protected by the free version of Enigma in about 1 day of non-stop computing. It would take that same Mac almost a million years to break into the same file protected by the full DES version. Equivalent numbers for a single Cray supercomputer (an even rougher estimate) would be about 10 minutes versus 3,000 years.
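The arithmetic above is easy to reproduce. The sketch below computes the key-space sizes and expected guess counts quoted in this section; any keys-per-second rate plugged into expectedSeconds() is a hypothetical assumption about the attacker's hardware, not a measured figure.

```java
import java.math.BigInteger;

// Reproduces the back-of-the-envelope estimate above:
// expected brute-force work = (number of keys / 2) * time per key.
public class KeySpace {
    // Total number of keys for a key of the given bit length.
    static BigInteger keys(int bits) {
        return BigInteger.ONE.shiftLeft(bits);
    }

    // Expected guesses before hitting the right key (half the key space).
    static BigInteger expectedGuesses(int bits) {
        return keys(bits).shiftRight(1);
    }

    // Expected seconds, given a hypothetical guess rate in keys/second.
    static BigInteger expectedSeconds(int bits, long keysPerSecond) {
        return expectedGuesses(bits).divide(BigInteger.valueOf(keysPerSecond));
    }
}
```

For example, expectedGuesses(32) yields 2,147,483,648, matching the "2 billion guesses" figure quoted for the limited mode.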

Several different protocols have been proposed for ad-hoc routing. The earliest protocols, such as DSDV, DSR, and AODV, focused on problems that mobility presented to the accurate determination of routing information. DSDV is a proactive protocol requiring periodic updates of all the routing information. In contrast, DSR and AODV are reactive protocols, only used when new destinations are sought, a route breaks, or a route is no longer in use. As more applications were developed to take advantage of the unique properties of ad-hoc networks, it soon became obvious that security of routing information was an issue not addressed in the existing protocols. Documented threats include energy consumption attacks and black hole attacks. Deng et al. further discuss energy consumption and black hole attacks along with impersonation and routing information disclosure. Jakobsson et al. categorize attacks as manipulation of routing information and exhaustive power consumption, and provide detailed treatments of many characteristic attacks. While research has focused on lightweight security mechanisms, some proposed protocols use more expensive asymmetric cryptography. One proposal presents a multi-path protocol extension that uses threshold cryptography to implement the key management system; it requires some nodes to function as servers and an authority to initialize these servers. Zapata and Asokan propose SAODV, a secure version of AODV, which uses digital signatures and hash chains to secure the routing messages.

Pissinou et al. propose a trust-based version of AODV using static trust levels. The same authors then extend this protocol to thwart multiple colluding nodes. Neither of these addresses securing the trust exchanges, or the overhead involved. Li et al. introduce a trust-based variant of AODV in [12] that secures the trust information. However, their protocol requires an intrusion detection system in the network. Finally, Meka et al. propose a third trusted AODV with a simple method of evaluating trust even without source routing.

Our work in this paper considers the asymmetric-cryptography-based and trust-based extensions to AODV (SAODV and TAODV, respectively) and shows a real-world comparison of the performance of the two protocols. Our results suggest that new protocols can be developed which take advantage of the best features of both types of protocols, and which share aspects of each security model.

Protocol Overviews
Due to space considerations, the reader is referred to the original publications for descriptions of the AODV, SAODV, and TAODV protocols.

Experimental Setup
Since ad-hoc networking's most promising applications make use of small, resource-constrained devices that are significantly different from today's ever-faster desktop computers, special attention must be paid to the trade-off between strong cryptographic security and DoS. While theoretical analysis or simulation may give helpful hints on the relative efficiency of different approaches, only real-world implementation and performance testing can give a concrete picture of the actual width of this spectrum. Such measurements provide the necessary information to determine which protocols are suitable for specific applications. In addition, the results can then be used to guide the design of novel protocols better suited to particular deployment situations.

In order to get an understanding of the real-world performance of the AODV, SAODV, and TAODV protocols, we have implemented each of them on real hardware and measured their performance. In this section we detail the setup for the experiments used to acquire these measurements. We first describe the supporting hardware and software setup for our implementations. We then present details on the actual implementation of each of the three protocols. Finally, we detail the design of the experiments used to evaluate the protocols and explain why these tests are more relevant than other more common metrics.

Hardware
Our test nodes were Sharp Zaurus handhelds. Each Zaurus was equipped with a Linksys WCF11 CompactFlash card for wireless communication. The Zauruses ran OpenZaurus, an embedded version of Linux. In order to compile programs for the Zaurus we used a cross-compiler toolchain based on GCC v3.3.4. In addition, our code requires the OpenSSL libraries. For this purpose, OpenSSL was cross-compiled and statically linked into executables where necessary. All cross-compiling was performed on a desktop running Slackware Linux 11.0.

Implementation
Our AODV implementation is the result of previous projects in this area. The implementation is designed to run on the Linux operating system. As with many other AODV implementations for Linux, it separates functionality into a kernel module and a userspace daemon. The kernel module uses hooks in the netfilter interface to send packet headers from the wireless interface to the userspace daemon. The daemon then determines how to handle the packet. If the packet is a routing control packet, the daemon processes the packet in accordance with the AODV specification. If instead the packet is a data packet, the daemon determines whether or not a route exists to the necessary destination. If there is a suitable route, the packet is flagged and the kernel module queues it to be sent out. If no route exists, the daemon begins route discovery. Once a route is found, the daemon enters the route into the kernel's routing table. It then flags the packet (and any additional packets arriving during discovery) to be queued for transmission. The implementation is written completely in C.

In order to implement SAODV, it was necessary to have a library of cryptographic operations. We used OpenSSL for this purpose, and we developed a security library which wrapped much of OpenSSL's functionality into components appropriate for ad-hoc routing purposes. One particularly useful feature of the security library is that it allows easy use of several different OpenSSL contexts at once. For SAODV, this was useful as nodes must switch between signing, verifying, and hash chain operations rapidly to both send and receive routing messages. New data structures were added for SAODV's single signature extension and the necessary code was added to the message processing functions for RREQ, RREP, HELLO, and RERR messages. The design of the AODV implementation allowed SAODV functionality to be implemented while maintaining one binary with the ability to run both protocols.

Implementing TAODV required many additions similar to those involved in SAODV. New data structures were used for the NTT as well as the extended messages and the new R_ACK message. Similarly, message handling functions were updated to use the extensions and take the appropriate actions. One challenge in implementing TAODV was counting packets sent, forwarded, or received for a particular route. While it intuitively seems to be something that should be implemented in the kernel module that is already tied into the netfilter framework, this would require extra data exchange between the kernel module and the daemon. Since our implementation already passes packet headers to the daemon for route discovery initiation and flagging, it was simply necessary to place the counting mechanism in the daemon.

Keeping track of the additional routing information required significant extension of our AODV implementation. The original implementation does not support any multi-path entries in the routing table. Modifying it to support such a setup for TAODV would have required rewriting significant amounts of the base AODV code. Instead, we implemented a multi-path capable routing table for use exclusively by the TAODV protocol. When a node initially discovers a route, or changes the active route to a particular destination, it merely copies the necessary entry to the daemon's local routing table and marks it as having been altered so that it is updated in the kernel's routing table at the next sync. This simplified the implementation using only a negligible amount of extra memory.
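The sign-and-verify pattern that SAODV applies to routing control messages can be sketched in Java, the language of the rest of this project's environment. Our actual implementation uses C and OpenSSL; the "RREQ" byte layout below is a hypothetical stand-in for the real packet fields, and the algorithm choice is illustrative.

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of the sign/verify pattern SAODV applies to the non-mutable
// fields of routing control messages.
public class SsePattern {
    // Sign the non-mutable bytes of a routing message with the
    // originator's private key.
    static byte[] sign(byte[] message, PrivateKey key) {
        try {
            Signature s = Signature.getInstance("SHA1withRSA");
            s.initSign(key);
            s.update(message);
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    // Verify a received message against the claimed originator's public key.
    static boolean verify(byte[] message, byte[] sig, PublicKey key) {
        try {
            Signature s = Signature.getInstance("SHA1withRSA");
            s.initVerify(key);
            s.update(message);
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    // Convenience key-pair generation for the demonstration.
    static KeyPair newKeyPair() {
        try {
            return KeyPairGenerator.getInstance("RSA").generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Any intermediate node holding the originator's public key can verify but not forge the extension; the cost of exactly these signing and verification operations is what the per-packet measurements in the next section capture.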

Testing Setup
There were two performance factors we were interested in for the purposes of this comparison. The first is the per-packet processing overhead. It is important to note that only CPU time was measured; this overhead therefore reflects use of the processor by each protocol. In these tests we use AODV as a baseline. Thus, for SAODV we measure the time it takes to generate an SSE for RREQ, RREP, and HELLO messages. We also measure the time it takes for a node to verify an SSE for those same messages. For TAODV we measure how long it takes a node to generate or process and update RREP and R_ACK messages. Because some of the operations we measure have a runtime less than the resolution of our timer (10 ms, as per the Linux kernel), we perform a large number of operations back-to-back per measurement. We then make multiple measurements.
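The back-to-back measurement technique can be sketched as follows. Our actual measurements recorded CPU time in the C implementation; this Java sketch shows only the technique of amortizing a coarse timer over many repetitions.

```java
// When one operation finishes faster than the timer can resolve, time N
// back-to-back repetitions and divide the elapsed time by N.
public class PerOpTimer {
    interface Op {
        void run();
    }

    // Average nanoseconds per operation over `reps` repetitions.
    static double timePerOp(Op op, int reps) {
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) {
            op.run();
        }
        return (double) (System.nanoTime() - start) / reps;
    }
}
```

The number of repetitions is chosen so that the total elapsed time is many multiples of the timer resolution, making the quantization error negligible.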

Our second performance metric is round-trip time for route discovery. The justification for this metric lies in the fact that we are looking at securing the routing control packets. Once a route is established, data is forwarded with the same efficiency regardless of the routing protocol. Therefore, it is important to see how the per-packet overhead, along with the increased packet size, affects the time for route discovery. For this test, we measure the performance of AODV in addition to that of SAODV and TAODV. This is necessary because both AODV and TAODV will generate RREPs after fewer hops when the destination's neighbor responds, while SAODV requires that the destination itself responds. For our experiments, we used a five-node network consisting of one laptop and four Zauruses as illustrated in Figure 1. We used the network sniffer Ethereal [6] running on the laptop to measure the time elapsed from the sending of the RREQ to the receipt of the RREP. These individual measurements were also performed repeatedly as explained in Section 5.

Hardware Interface
Hard disk: 40 GB
RAM: 512 MB
Processor speed: 3.00 GHz
Processor: Pentium IV
LAN setup for 4 systems (minimum)

Software Interface
JDK 1.5
Java Swing

Software Description
Java: Java was conceived by James Gosling, Patrick Naughton, Chris Warth, Ed Frank, and Mike Sheridan at Sun Microsystems. It is a platform-independent programming language that extends its features wide over the network. The Java 2 version introduced a new component called Swing, a set of classes that provides more powerful and flexible components than are possible with the AWT.

- Swing is a lightweight package, as its components are not implemented by platform-specific code.
- Related classes are contained in javax.swing and its subpackages, such as javax.swing.tree.
- Components defined in Swing have more capabilities than those of the AWT.

What Is Java?
Java is two things: a programming language and a platform.

The Java Programming Language

Java is a high-level programming language that is all of the following: simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, and dynamic.

Java is also unusual in that each Java program is both compiled and interpreted. With a compiler, you translate a Java program into an intermediate language called Java byte codes--the platform-independent codes interpreted by the Java interpreter. With an interpreter, each Java byte code instruction is parsed and run on the computer. Compilation happens just once; interpretation occurs each time the program is executed. This figure illustrates how this works.

Java byte codes can be considered as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it's a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The Java VM can also be implemented in hardware.

Java byte codes help make "write once, run anywhere" possible. The Java program can be compiled into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.

The Java Platform
A platform is the hardware or software environment in which a program runs. The Java platform differs from most other platforms in that it's a software-only platform that runs on top of other, hardware-based platforms. Most other platforms are described as a combination of hardware and operating system.

The Java platform has two components:


The Java Virtual Machine (Java VM) The Java Application Programming Interface (Java API)

The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. The Java API is grouped into libraries (packages) of related components. The following figure depicts a Java program, such as an application or applet, that's running on the Java platform. As the figure shows, the Java API and Virtual Machine insulate the Java program from hardware dependencies.

As a platform-independent environment, Java can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time byte code compilers can bring Java's performance close to that of native code without threatening portability.

What Can Java Do?


Probably the most well-known Java programs are Java applets. An applet is a Java program that adheres to certain conventions that allow it to run within a Java-enabled browser.

However, Java is not just for writing cute, entertaining applets for the World Wide Web ("Web"). Java is a general-purpose, high-level programming language and a powerful software platform. Using the generous Java API, we can write many types of programs. The most common types of programs are probably applets and applications, where a Java application is a standalone program that runs directly on the Java platform.

How does the Java API support all of these kinds of programs? With packages of software components that provide a wide range of functionality. The core API is the API included in every full implementation of the Java platform. The core API gives you the following features:
The Essentials: Objects, strings, threads, numbers, input and output, data structures, system properties, date and time, and so on.
Applets: The set of conventions used by Java applets.
Networking: URLs, TCP and UDP sockets, and IP addresses.
Internationalization: Help for writing programs that can be localized for users worldwide. Programs can automatically adapt to specific locales and be displayed in the appropriate language.
Security: Both low-level and high-level, including electronic signatures, public/private key management, access control, and certificates.
Software components: Known as JavaBeans, these can plug into existing component architectures such as Microsoft's OLE/COM/ActiveX architecture, OpenDoc, and Netscape's LiveConnect.
Object serialization: Allows lightweight persistence and communication via Remote Method Invocation (RMI).
Java Database Connectivity (JDBC): Provides uniform access to a wide range of relational databases.

Java not only has a core API, but also standard extensions. The standard extensions define APIs for 3D, servers, collaboration, telephony, speech, animation, and more.

How Will Java Change My Life?


Java is likely to make your programs better and to require less effort than other languages. We believe that Java will help you do the following:

Get started quickly: Although Java is a powerful object-oriented language, it's easy to learn, especially for programmers already familiar with C or C++.

Write less code: Comparisons of program metrics (class counts, method counts, and so on) suggest that a program written in Java can be four times smaller than the same program in C++.

Write better code: The Java language encourages good coding practices, and its garbage collection helps you avoid memory leaks. Java's object orientation, its JavaBeans component architecture, and its wide-ranging, easily extendible API let you reuse other people's tested code and introduce fewer bugs.

Develop programs faster: Your development time may be as much as twice as fast as writing the same program in C++. Why? You write fewer lines of code in Java, and Java is a simpler programming language than C++.

Avoid platform dependencies with 100% Pure Java: You can keep your program portable by following the purity tips mentioned throughout this book and avoiding the use of libraries written in other languages.

Write once, run anywhere: Because 100% Pure Java programs are compiled into machine-independent byte codes, they run consistently on any Java platform.

Distribute software more easily: You can upgrade applets easily from a central server. Applets take advantage of the Java feature of allowing new classes to be loaded "on the fly," without recompiling the entire program.

We explore the java.net package, which provides support for networking. Its creators have called Java "programming for the Internet." These networking classes encapsulate the socket paradigm pioneered in the Berkeley Software Distribution (BSD) from the University of California at Berkeley.

Networking Basics

Ken Thompson and Dennis Ritchie developed UNIX in concert with the C language at Bell Telephone Laboratories, Murray Hill, New Jersey, in 1969. In 1978, Bill Joy was leading a project at Berkeley to add many new features to UNIX, such as virtual memory and full-screen display capabilities. By early 1984, just as Bill was leaving to found Sun Microsystems, he shipped 4.2BSD, commonly known as Berkeley UNIX. 4.2BSD came with a fast file system, reliable signals, interprocess communication, and, most important, networking. The networking support first found in 4.2 eventually became the de facto standard for the Internet. Berkeley's implementation of TCP/IP remains the primary standard for communications with the Internet. The socket paradigm for interprocess and network communication has also been widely adopted outside of Berkeley.
Socket Overview

A network socket is a lot like an electrical socket. Various plugs around the network have a standard way of delivering their payload. Anything that understands the standard protocol can plug in to the socket and communicate. Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets and sends them to an address across a network; it does not guarantee delivery of those packets to the destination. Transmission Control Protocol (TCP) is a higher-level protocol that manages to reliably transmit data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can be used directly to support fast, connectionless, unreliable transport of packets.

Client/Server

A server is anything that has some resource that can be shared. There are compute servers, which provide computing power; print servers, which manage a collection of printers; disk servers, which provide networked disk space; and web servers, which store web pages. A client is simply any other entity that wants to gain access to a particular server. In Berkeley sockets, the notion of a socket allows a single computer to serve many different clients at once, as well as serve many different types of information. This feat is managed by the introduction of a port, which is a numbered socket on a particular machine. A server process is said to listen to a port until a client connects to it. A server is allowed to accept multiple clients connected to the same port number, although each session is unique. To manage multiple client connections, a server process must be multithreaded or have some other means of multiplexing the simultaneous I/O.

Reserved Sockets

Once connected, a higher-level protocol ensues, which is dependent on which port you are using. TCP/IP reserves the lower 1,024 ports for specific protocols. Port number 21 is for FTP, 23 is for Telnet, 25 is for e-mail, 79 is for finger, 80 is for HTTP, 119 is for netnews, and the list goes on. It is up to each protocol to determine how a client should interact with the port.

Java and the Net

Java supports TCP/IP by extending the already established stream I/O interface. Java supports both the TCP and UDP protocol families. TCP is used for reliable stream-based I/O across the network. UDP supports a simpler, hence faster, point-to-point, datagram-oriented model.
InetAddress

The InetAddress class is used to encapsulate both the numerical IP address and the domain name for that address. We interact with this class by using the name of an IP host, which is more convenient and understandable than its IP address; the InetAddress class hides the number inside. As of Java 2, version 1.4, InetAddress can handle both IPv4 and IPv6 addresses.

Factory Methods

The InetAddress class has no visible constructors. To create an InetAddress object, we use one of the available factory methods. Factory methods are merely a convention whereby static methods in a class return an instance of that class. This is done in lieu of overloading a constructor with various parameter lists, since having unique method names makes the results much clearer. Three commonly used InetAddress factory methods are shown here:

static InetAddress getLocalHost( ) throws UnknownHostException
static InetAddress getByName(String hostName) throws UnknownHostException
static InetAddress[ ] getAllByName(String hostName) throws UnknownHostException

The getLocalHost( ) method simply returns the InetAddress object that represents the local host. The getByName( ) method returns an InetAddress for a host name passed to it. If these methods are unable to resolve the host name, they throw an UnknownHostException.

On the Internet, it is common for a single name to be used to represent several machines. In the world of web servers, this is one way to provide some degree of scaling. The getAllByName( ) factory method returns an array of InetAddress objects that represent all of the addresses that a particular name resolves to. It will also throw an UnknownHostException if it cannot resolve the name to at least one address. Java 2, version 1.4 also includes the factory method getByAddress( ), which takes an IP address and returns an InetAddress object. Either an IPv4 or an IPv6 address can be used.
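The factory methods above can be exercised with a short, self-contained sketch. This is illustrative only: the class name is invented, "localhost" is used so that getByName( ) needs no external DNS, and getLocalHost( ) is guarded because it can fail on machines with no host-name mapping.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class InetAddressDemo {
    public static void main(String[] args) throws UnknownHostException {
        // getByName() resolves a host name to a single InetAddress.
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println("host name: " + addr.getHostName());
        System.out.println("host address: " + addr.getHostAddress());

        // getAllByName() returns every address the name resolves to.
        InetAddress[] all = InetAddress.getAllByName("localhost");
        System.out.println("addresses found: " + (all.length >= 1));

        // getLocalHost() returns an InetAddress for this machine;
        // it throws UnknownHostException if the name cannot be resolved.
        try {
            InetAddress local = InetAddress.getLocalHost();
            System.out.println("local host resolved: " + (local != null));
        } catch (UnknownHostException e) {
            System.out.println("local host resolved: false");
        }
    }
}
```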
Instance Methods

The InetAddress class also has several other methods, which can be used on the objects returned by the methods just discussed. Here are some of the most commonly used:

boolean equals(Object other) - Returns true if this object has the same Internet address as other.
byte[ ] getAddress( ) - Returns a byte array that represents the object's Internet address in network byte order.
String getHostAddress( ) - Returns a string that represents the host address associated with the InetAddress object.
String getHostName( ) - Returns a string that represents the host name associated with the InetAddress object.
boolean isMulticastAddress( ) - Returns true if this Internet address is a multicast address. Otherwise, it returns false.
String toString( ) - Returns a string that lists the host name and the IP address for convenience.
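A brief sketch of these instance methods (the class name is invented; "224.0.0.1" is simply a well-known multicast literal, and numeric literals are parsed without any DNS lookup):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class InstanceMethodsDemo {
    public static void main(String[] args) throws UnknownHostException {
        InetAddress a = InetAddress.getByName("localhost");
        InetAddress b = InetAddress.getByName("localhost");

        // equals() compares the underlying Internet addresses.
        System.out.println("equal: " + a.equals(b));

        // getAddress() yields the raw address bytes in network byte order
        // (4 bytes for IPv4, 16 for IPv6).
        System.out.println("byte count ok: " + (a.getAddress().length >= 4));

        // A numeric literal is parsed directly, never resolved via DNS.
        InetAddress mc = InetAddress.getByName("224.0.0.1");
        System.out.println("multicast: " + mc.isMulticastAddress());

        // toString() combines host name and address as "name/address".
        System.out.println("toString has slash: " + a.toString().contains("/"));
    }
}
```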

Internet addresses are looked up in a series of hierarchically cached servers. That means that your local computer might know a particular name-to-IP-address mapping automatically, such as for itself and nearby servers. For other names, it may ask a local DNS server for IP address information. If that server doesn't have a particular address, it can go to a remote site and ask for it. This can continue all the way up to the root server, called InterNIC (internic.net).

TCP/IP Client Sockets

TCP/IP sockets are used to implement reliable, bidirectional, persistent, point-to-point, stream-based connections between hosts on the Internet. A socket can be used to connect Java's I/O system to other programs that may reside either on the local machine or on any other machine on the Internet. There are two kinds of TCP sockets in Java. One is for servers, and the other is for clients. The ServerSocket class is designed to be a listener, which waits for clients to connect before doing anything. The Socket class is designed to connect to server sockets and initiate protocol exchanges.
The creation of a Socket object implicitly establishes a connection between the client and server. There are no methods or constructors that explicitly expose the details of establishing that connection. Here are two constructors used to create client sockets:

Socket(String hostName, int port) - Creates a socket connecting the local host to the named host and port; can throw an UnknownHostException or an IOException.
Socket(InetAddress ipAddress, int port) - Creates a socket using a preexisting InetAddress object and a port; can throw an IOException.

A socket can be examined at any time for the address and port information associated with it, by use of the following methods:

InetAddress getInetAddress( ) - Returns the InetAddress associated with the Socket object.

int getPort( ) - Returns the remote port to which this Socket object is connected.
int getLocalPort( ) - Returns the local port to which this Socket object is bound.

Once the Socket object has been created, it can also be examined to gain access to the input and output streams associated with it. Each of these methods can throw an IOException if the socket has been invalidated by a loss of connection on the Net.

InputStream getInputStream( ) - Returns the InputStream associated with the invoking socket.
OutputStream getOutputStream( ) - Returns the OutputStream associated with the invoking socket.
TCP/IP Server Sockets

Java has a different socket class that must be used for creating server applications. The ServerSocket class is used to create servers that listen for either local or remote client programs to connect to them on published ports. ServerSockets are quite different from normal Sockets. When we create a ServerSocket, it registers itself with the system as having an interest in client connections. The constructors for ServerSocket reflect the port number on which we wish to accept connections and, optionally, how long we want the queue for that port to be. The queue length tells the system how many client connections it can leave pending before it should simply refuse connections. The default is 50. The constructors might throw an IOException under adverse conditions. Here are the constructors:

ServerSocket(int port) - Creates a server socket on the specified port with a queue length of 50.
ServerSocket(int port, int maxQueue) - Creates a server socket on the specified port with a maximum queue length of maxQueue.
ServerSocket(int port, int maxQueue, InetAddress localAddress) - Creates a server socket on the specified port with a maximum queue length of maxQueue. On a multihomed host, localAddress specifies the IP address to which this socket binds.

ServerSocket has a method called accept( ), which is a blocking call that will wait for a client to initiate communications and then return with a normal Socket that is then used for communication with the client.
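To tie Socket and ServerSocket together, here is a minimal, self-contained echo sketch. The class name, port choice, and message are illustrative; port 0 asks the operating system for any free port, so the example never collides with the reserved ports discussed earlier.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoDemo {
    public static void main(String[] args) throws Exception {
        // Port 0 lets the OS pick a free ephemeral port.
        ServerSocket server = new ServerSocket(0);

        // Server side: accept one client and echo one line back.
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                out.println("echo: " + in.readLine());
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        t.start();

        // Client side: connect, send a line, print the reply.
        try (Socket socket = new Socket("localhost", server.getLocalPort());
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());
        }
        t.join();
        server.close();
    }
}
```

Note that accept( ) blocks until the client's Socket constructor completes the connection, which is why the server must run on its own thread here.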

URL
The Web is a loose collection of higher-level protocols and file formats, all unified in a web browser. One of the most important aspects of the Web is that Tim Berners-Lee devised a scalable way to locate all of the resources of the Net. The Uniform Resource Locator (URL) is used to name anything and everything reliably. The URL provides a reasonably intelligible form to uniquely identify or address information on the Internet. URLs are ubiquitous; every browser uses them to identify information on the Web. Within Java's network class library, the URL class provides a simple, concise API to access information across the Internet using URLs.

Format

Two examples of URLs are http://www.osborne.com/ and http://www.osborne.com:80/index.htm.

A URL specification is based on four components. The first is the protocol to use, separated from the rest of the locator by a colon (:). Common protocols are http, ftp, gopher, and file, although these days almost everything is being done via HTTP. The second component is the host name or IP address of the host to use; this is delimited on the left by double slashes (//) and on the right by a slash (/) or, optionally, a colon (:). The third component, the port number, is an optional parameter, delimited on the left by a colon (:) and on the right by a slash (/). The fourth part is the actual file path. Most HTTP servers will append a file named index.html or index.htm to URLs that refer directly to a directory resource.

Java's URL class has several constructors, and each can throw a MalformedURLException. One commonly used form specifies the URL with a string that is identical to what is displayed in a browser:

URL(String urlSpecifier)

The next two forms of the constructor break the URL up into its component parts:

URL(String protocolName, String hostName, int port, String path)
URL(String protocolName, String hostName, String path)

Another frequently used constructor uses an existing URL as a reference context and then creates a new URL from that context:

URL(URL urlObj, String urlSpecifier)

The following method returns a URLConnection object associated with the invoking URL object; it may throw an IOException:

URLConnection openConnection( )
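Constructing a URL object does not open a network connection, so the four components described above can be pulled apart safely offline. The address is just the book's example; the class name is invented.

```java
import java.net.MalformedURLException;
import java.net.URL;

public class UrlDemo {
    public static void main(String[] args) throws MalformedURLException {
        URL u = new URL("http://www.osborne.com:80/index.htm");

        // Each accessor returns one of the four URL components:
        // protocol, host, optional port, and file path.
        System.out.println("protocol: " + u.getProtocol());
        System.out.println("host: " + u.getHost());
        System.out.println("port: " + u.getPort());
        System.out.println("file: " + u.getFile());
    }
}
```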

JDBC

In an effort to set an independent database standard API for Java, Sun Microsystems developed Java Database Connectivity, or JDBC. JDBC offers a generic SQL database access mechanism that provides a consistent interface to a variety of RDBMSs. This consistent interface is achieved through the use of plug-in database connectivity modules, or drivers. If a database vendor wishes to have JDBC support, he or she must provide the driver for each platform that the database and Java run on. To gain wider acceptance of JDBC, Sun based JDBC's framework on ODBC, which has widespread support on a variety of platforms. Basing JDBC on ODBC allows vendors to bring JDBC drivers to market much faster than developing a completely new connectivity solution.

JDBC Goals

Few software packages are designed without goals in mind, and JDBC's many goals drove the development of its API. These goals, in conjunction with early reviewer feedback, finalized the JDBC class library into a solid framework for building database applications in Java. The goals that were set for JDBC are important; they give some insight as to why certain classes and functionalities behave the way they do. The design goals for JDBC are as follows:
1. SQL Level API - The designers felt that their main goal was to define a SQL interface for Java. Although not the lowest database interface level possible, it is at a low enough level for higher-level tools and APIs to be created. Conversely, it is at a high enough level for application programmers to use it confidently. Attaining this goal allows future tool vendors to generate JDBC code and to hide many of JDBC's complexities from the end user.

2. SQL Conformance - SQL syntax varies as you move from database vendor to database vendor. In an effort to support a wide variety of vendors, JDBC allows any query statement to be passed through it to the underlying database driver. This allows the connectivity module to handle non-standard functionality in a manner that is suitable for its users.

3. JDBC must be implementable on top of common database interfaces - The JDBC SQL API must sit on top of other common SQL-level APIs. This goal allows JDBC to use existing ODBC-level drivers through a software interface that translates JDBC calls to ODBC and vice versa.

4. Provide a Java interface that is consistent with the rest of the Java system - Because of Java's acceptance in the user community thus far, the designers felt that they should not stray from the current design of the core Java system.

5. Keep it simple - This goal probably appears in all software design goal listings. JDBC is no exception. Sun felt that the design of JDBC should be very simple, allowing for only one method of completing a task per mechanism. Allowing duplicate functionality only serves to confuse the users of the API.

6. Use strong, static typing wherever possible - Strong typing allows more error checking to be done at compile time; also, fewer errors appear at runtime.

7. Keep the common cases simple - Because, more often than not, the usual SQL calls used by the programmer are simple SELECTs, INSERTs, DELETEs, and UPDATEs, these queries should be simple to perform with JDBC. However, more complex SQL statements should also be possible.
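The driver-based call pattern the goals above describe looks like this sketch. The connection URL, table name, and credentials are entirely hypothetical, and since no vendor driver is on the classpath here, the example deliberately catches the resulting SQLException rather than reaching the query.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) {
        // Hypothetical connection URL; a real one names a vendor driver,
        // e.g. "jdbc:mysql://host/db" once that driver is installed.
        String url = "jdbc:exampledb://localhost/testdb";
        try (Connection con = DriverManager.getConnection(url, "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM users")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " " + rs.getString("name"));
            }
        } catch (SQLException e) {
            // With no driver registered for this URL scheme, DriverManager
            // cannot connect and throws "No suitable driver found".
            System.out.println("no driver available for " + url);
        }
    }
}
```

The same code works unchanged against any RDBMS once the matching driver is loaded; that is the point of goal 3.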

DESIGN

SAODV DFD: Secure data transmission

[Data-flow diagram: nodes A, B, C, and D exchange encrypted routing and data traffic directly with one another.]

TAODV DFD: Trust-based data transmission

[Data-flow diagram: nodes A, B, C, and D communicate via the Trusted Center, which supplies routes to registered nodes.]

USE CASE

Node: SAODV/TAODV mode, register/unregister, send msg, clear msg, exit.

Trust Center: view status, view router, close.

CLASS DIAGRAM

Node classes:
NodeA - createWin(), actionPerformed(), main()
NodeTab - readAddr(), itemStateChanged(), actionPerformed(), sendRouteReq(), sendMsg(), encrypt(), decrypt(), taodvMsg()
MsgTab - super()
NodeAServer - run(), readAddr(), encrypt(), decrypt()
TAODVServer - run()

Trusted Center classes:
TC - createWin(), actionPerformed(), run(), main()
TCServer - run(), clearTable(), addRouter(), findRouter()

SEQUENCE DIAGRAM FOR SAODV

[Sequence diagram between Sender, Intermediate, and Receiver:]
1. The sender encrypts and sends the RREQ; the intermediate node forwards it.
2. The receiver encrypts and sends the route reply; the intermediate node adds its address to the route, encrypts it, and forwards it.
3. The sender encrypts and sends the message; the intermediate node forwards it.
4. The receiver sends an acknowledgement; the intermediate node forwards it.

SEQUENCE DIAGRAM FOR TAODV

[Sequence diagram between Sender, Trusted Center, and Receiver:]
1. The sender and the receiver each register with the Trusted Center and receive an acknowledgement.
2. The sender sends a route request to the Trusted Center and gets a route reply.
3. The sender sends the message to the receiver along that route and receives an acknowledgement.

IMPLEMENTATION MODULES

SAODV

SAODV has three modules: Sender, Intermediate node, and Receiver.

SAODV contains four nodes: nodeA, nodeB, nodeC, and nodeD. Each node behaves as a sender, a receiver, or an intermediate node.

Sender

The sender node is the one which wants to send a message to some other node. In SAODV, whenever a sender wants to send a message to some node, it first sends a route request to its neighbours and waits for a reply. This route request is encrypted before being sent. Once a neighbour sends the route reply, the sender has to decrypt the reply using the neighbour's key to view the route. After getting the route, the sender encrypts the message along with the route, forwards it to the receiver, and waits for an acknowledgement. Once the acknowledgement arrives, the sender knows that the message was delivered successfully.

Intermediate node

The role of the intermediate node is to exchange the route requests and messages between the sender and receiver. The route request the intermediate node gets from the sender is encrypted; the intermediate node has to decrypt it using the sender's key to learn the destination's name, then it forwards the request to the destination. In turn, it gets the route reply from the destination, encrypted by the destination node. The intermediate node decrypts the route reply, appends its own address to it, encrypts it again using its own key, and forwards it to the sender. When the sender sends the message to the intermediate node, the route is in encrypted form. The intermediate node decrypts the route, checks the next node to which it has to forward the message, and forwards it.

Receiver

The receiver node is the one which receives the message from the sender. The route request the receiver gets is encrypted by the sender. The receiver decrypts the route request and views it; then it encrypts the route reply and forwards it to the sender. Likewise, when the sender sends the message, it is in encrypted form. The receiver decrypts the message and views it, then sends an acknowledgement to the sender.
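The per-hop encryption described above can be sketched with the JDK's DES cipher. The key handling and the sample RREQ string are illustrative only; Base64 is used so the ciphertext can travel safely as text, the same fix the test-case table later records ("Base64 encoder & decoder to be used in DES").

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DesSketch {
    public static void main(String[] args) throws Exception {
        // In the real protocol each node holds its own key; here one
        // freshly generated key stands in for the sender's key.
        SecretKey key = KeyGenerator.getInstance("DES").generateKey();

        String rreq = "RREQ:nodeA->nodeD"; // illustrative route request

        // Encrypt, then Base64-encode so the raw bytes survive as text.
        Cipher cipher = Cipher.getInstance("DES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = cipher.doFinal(rreq.getBytes(StandardCharsets.UTF_8));
        String wire = Base64.getEncoder().encodeToString(ct);

        // The next hop decodes and decrypts with the sender's key.
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] pt = cipher.doFinal(Base64.getDecoder().decode(wire));
        String recovered = new String(pt, StandardCharsets.UTF_8);

        System.out.println("round trip ok: " + recovered.equals(rreq));
    }
}
```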

TAODV

TAODV contains four modules: Trusted Center, Sender, Intermediate node, and Receiver.

Trusted Center

The Trusted Center (TC) is a trusted node or router. The role of the TC is to store the routes available for all the registered nodes and to send a route to a node whenever the node requests one. When a node registers with the TC, the TC finds all the possible routes from the registered node to the other nodes and vice versa. If a node unregisters, the TC removes the routes which contain the unregistered node's address.
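The Trusted Center's bookkeeping, mirroring the addRouter() and findRouter() methods in the class diagram, can be sketched as a simple in-memory route table. The class, method signatures, node names, and routes here are illustrative stand-ins, not the project's actual code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A minimal stand-in for the TCServer route table.
public class TrustedCenterSketch {
    // Maps "source->destination" to the list of hops for that pair.
    private final Map<String, List<String>> routes = new HashMap<>();

    // addRouter(): record a route for a source/destination pair.
    public void addRouter(String src, String dst, List<String> hops) {
        routes.put(src + "->" + dst, hops);
    }

    // findRouter(): return the stored route, or null if none is known.
    public List<String> findRouter(String src, String dst) {
        return routes.get(src + "->" + dst);
    }

    // On unregister, drop every route that involves the departing node,
    // whether as an endpoint or as an intermediate hop.
    public void unregister(String node) {
        routes.entrySet().removeIf(
                e -> e.getKey().contains(node) || e.getValue().contains(node));
    }

    public static void main(String[] args) {
        TrustedCenterSketch tc = new TrustedCenterSketch();
        tc.addRouter("nodeA", "nodeD", List.of("nodeA", "nodeB", "nodeD"));
        System.out.println("route: " + tc.findRouter("nodeA", "nodeD"));
        tc.unregister("nodeB");
        System.out.println("after unregister: " + tc.findRouter("nodeA", "nodeD"));
    }
}
```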

Sender

In TAODV, everything happens on the basis of trust. Before a sender can send a message to a receiver, it first has to register with the TC. When the sender wants to send a message, it sends a route request to the TC, and the TC sends back the route reply. The sender then sends the message to the receiver using that route and waits for an acknowledgement. Once it gets the acknowledgement, it knows the message was delivered to the receiver.

Intermediate node

This node gets the message from the sender and forwards it to the receiver. It likewise gets the acknowledgement from the receiver for the sent message and forwards it to the sender. This node also has to be registered with the TC.

Receiver

The receiver is the one who gets the message from the sender. Once it gets a message, it sends an acknowledgement for it. The receiver must also be registered with the TC to receive messages.

Bar chart

In this project we show the comparison of the SAODV protocol with the TAODV protocol. In SAODV, since the route request and the message have to be encrypted and decrypted at each node, a message takes more time to be delivered to the receiver. In TAODV, because we use the TC, there is no need for encryption: route requests and messages are forwarded only to trusted nodes. Hence it takes much less time compared to SAODV. We show this comparison between the SAODV and TAODV protocols by drawing a bar chart, created using the org.jfreechart package.

6. System Testing

The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.

Types of Tests

6.1 Unit testing


Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit and before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

6.1.1 Functional test


Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.

6.1.2 System Test


System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

6.1.3 Performance Test

The performance test ensures that output is produced within the required time limits: it measures the time taken by the system to compile, to respond to users, and to service requests sent to the system to retrieve results.

6.2 Integration Testing


Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g., components in a software system or, one step up, software applications at the company level, interact without error.

Integration testing for server synchronization:
- Test the IP address used to communicate with the other nodes.
- Check the route status in the cache table after the status information is received by the node.
- The messages are displayed throughout the end of the application.

6.3 Acceptance Testing


User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Acceptance testing for Data Synchronization:

- The acknowledgements are received by the sender node after the packets are received by the destination node.
- The route add operation is done only when there is a route request in need.
- The status-of-nodes information is updated automatically in the cache updation process.

Test cases: Testing & Validation

SAODV

Module | Given input | Expected output | Actual output | Remark
Sender | NULL | "Cannot send empty message" dialog box | "Cannot send empty message" dialog box | Tested OK
Sender | Message | "Select the destination" dialog box | "Select the destination" dialog box | Tested OK
Sender | Message & destination address | Dialog box prompting for key | Dialog box prompting for key | Tested OK
Sender | Message, destination address & key | Encrypt RREQ & forward to neighbour | Bad parity exception | Base64 encoder & decoder to be used in DES
Sender | Message, destination address & key | Encrypt message and forward to neighbour | Message encrypted & forwarded to neighbour | Tested OK
Intermediate node | Key of sender | Forward message to destination | Message forwarded to destination | Tested OK
Receiver | Key of sender | Encrypt & forward route reply to sender | Route reply sent to sender | Tested OK
Receiver | Message from sender & key of sender | Decrypt message and send acknowledgement to sender | Message decrypted and acknowledgement sent to sender | Tested OK

TAODV

Module | Given input | Expected output | Actual output | Remark
TC | Registration request from a node | Register node, find routes for the node & send reply | Node registered and all possible routes found | Tested OK
TC | Route request from sender | Send the route as reply to sender | Route sent to sender | Tested OK
Sender | Destination name | Forward route request to TC | Route request sent | Tested OK
Sender | Destination name & message | Get route from TC and forward message to destination | Message forwarded to destination | Tested OK
Receiver | Message from sender | Display message & send ack to sender | Message received and ack sent to sender | Tested OK

SNAPS

[Screenshots: SAODV sender, intermediate node, and receiver windows during route discovery and message exchange; TAODV Trusted Center window; the SAODV vs. TAODV comparison graph.]

7. Conclusion

In this paper, we have compared the SAODV and TAODV protocols for securing ad-hoc network routing. We presented the results of implementing and evaluating both protocols on real resource-limited hardware. The expected difference between the two protocols was shown to be consistent with this real-world scenario. These experiments showed that there is significant room between the two protocols for a secure hybrid protocol to be developed which takes advantage of the strongest points of both. Future work needs to delve further into the extensive body of work on various trust metrics. This includes testing other trust metrics for use in ad-hoc routing, as well as developing the aforementioned hybrid protocols and testing their performance against the results presented in this paper. In addition, it is necessary to test the quality of the routing decisions produced by all of these protocols in a malicious environment.