1. INTRODUCTION
MABS stands for Multicast Authentication Based on Batch Signature. Conventional block-based multicast authentication schemes overlook the heterogeneity of receivers by letting the sender choose the block size, divide a multicast stream into blocks, associate each block with a signature, and spread the effect of the signature across all the packets in the block through hash graphs or coding algorithms. The correlation among packets makes them vulnerable to packet loss, which is inherent in the Internet and wireless networks. Moreover, the lack of Denial of Service (DoS) resilience renders most of them vulnerable to packet injection in hostile environments. In this paper, we propose a novel multicast authentication protocol, namely MABS, including two schemes. The basic scheme (MABS-B) eliminates the correlation among packets and thus provides perfect resilience to packet loss; it is also efficient in terms of latency, computation, and communication overhead due to an efficient cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously. We also present an enhanced scheme, MABS-E, which combines the basic scheme with a packet filtering mechanism to alleviate the DoS impact while preserving the perfect resilience to packet loss.
Traditionally, multicast authentication schemes overlook the heterogeneity of receivers by letting the sender choose the block size, divide a multicast stream into blocks, associate each block with a signature, and spread the effect of the signature across all the packets in the block. The correlation among packets makes them vulnerable to packet loss. Moreover, existing schemes lack Denial of Service (DoS) resilience. Furthermore, existing systems do not consider the efficiency of the receivers: while the multicast sender may be a powerful server, receivers have widely different capabilities and resources.
Our goal is to eliminate the correlation among packets and thereby provide perfect resilience to packet loss. We develop a system that is efficient in terms of latency, computation, and communication overhead, thanks to an efficient cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously. We also present a packet filtering mechanism that alleviates the DoS impact while preserving the perfect resilience to packet loss. Multicast itself is gaining popular applications such as real-time stock quotes, interactive games, video conferencing, live video broadcast, and video on demand.
1.2 SCOPE
Multicast is an efficient method to deliver multimedia content from a sender to a group of receivers and is gaining popular applications such as real-time stock quotes, interactive games, video conferencing, live video broadcast, and video on demand. Authentication is one of the critical topics in securing multicast in an environment attractive to malicious attacks. Basically, multicast authentication may provide the following security services:
Data integrity: Each receiver should be able to assure that received packets have not been modified during transmission.
Data origin authentication: Each receiver should be able to assure that each received packet comes from the real sender it claims.
Nonrepudiation: The sender of a packet should not be able to deny sending the packet to receivers in case there is a dispute between the sender and receivers.
All three services can be supported by an asymmetric key technique called signature. In an ideal case, the sender generates a signature for each packet with its private key, which is called signing, and each receiver checks the validity of the signature with the sender's public key, which is called verifying. If the verification succeeds, the receiver knows the packet is authentic.
1.3 OBJECTIVES
We propose a multicast authentication protocol called MABS (short for Multicast Authentication Based on Batch Signature). MABS includes two schemes. The basic scheme (called MABS-B hereafter) utilizes an efficient asymmetric cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously with one signature verification, to address the efficiency and packet loss problems in general environments. The enhanced scheme (called MABS-E hereafter) combines MABS-B with packet filtering to alleviate the DoS impact in hostile environments. MABS provides data integrity, origin authentication, and nonrepudiation, as previous asymmetric-key-based protocols do. In addition, we make the following contributions:
Our MABS can achieve perfect resilience to packet loss in lossy channels, in the sense that no matter how many packets are lost, the already-received packets can still be authenticated by receivers.
MABS-B is efficient in terms of latency, computation, and communication overhead. Though MABS-E is less efficient than MABS-B since it includes the DoS defense, its overhead is still at the same level as that of previous schemes.
We propose two new batch signature schemes based on BLS and DSA and show they are more efficient than the batch RSA signature scheme.
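The arithmetic behind batch verification can be illustrated with textbook batch RSA. The toy key below (n = 61 * 53) uses no padding or hashing, so this is a sketch of the multiplicative property only, not a secure implementation and not the BLS or DSA constructions proposed above:

```javascript
// Modular exponentiation for BigInt (JavaScript has no built-in modpow).
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Textbook toy RSA key: n = 61 * 53, public exponent e, private exponent d.
const n = 3233n, e = 17n, d = 2753n;

const sign = (m) => modPow(m, d, n); // one per-packet signature

// Batch verification: a single exponentiation checks all signatures at once,
// using the multiplicative property (s1*s2*...)^e = m1*m2*... (mod n).
function batchVerify(messages, signatures) {
  const sigProduct = signatures.reduce((a, s) => (a * s) % n, 1n);
  const msgProduct = messages.reduce((a, m) => (a * m) % n, 1n);
  return modPow(sigProduct, e, n) === msgProduct;
}

const msgs = [42n, 1234n, 9n];
const sigs = msgs.map(sign);
console.log(batchVerify(msgs, sigs)); // true: all packets authentic
sigs[1] = (sigs[1] + 1n) % n;         // forge one signature
console.log(batchVerify(msgs, sigs)); // false: the whole batch is rejected
```

Note the failure mode in the last line: one forged signature invalidates the entire batch, which is exactly the DoS vector MABS-E addresses.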
The basic scheme MABS-B targets the packet loss problem, which is inherent in the Internet and wireless networks. It has perfect resilience to packet loss, no matter whether it is random loss or burst loss. In some circumstances, however, an attacker can inject forged packets into a batch of packets to disrupt the batch signature verification, leading to DoS. A naive approach to defeat the DoS attack is to divide the batch into multiple smaller batches and perform batch verification over each smaller batch; this divide-and-conquer approach can be recursively carried out for each smaller batch, which means more signature verifications at each receiver. In the worst case, the attacker can inject forged packets at very high frequency so that each receiver stops the batch operation and reverts to basic per-packet signature verification, which may not be viable on resource-constrained receiver devices. We therefore design an enhanced scheme, MABS-E, which combines the basic scheme MABS-B with a packet filtering mechanism to tolerate packet injection. In particular, the sender attaches to each packet a mark, which is unique to the packet and cannot be spoofed. At each receiver, the multicast stream is classified into disjoint sets based on marks. Each set of packets comes from either the real sender or the attacker. The mark design ensures that a packet from the real sender never falls into any set of packets from the attacker, and vice versa.
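The naive divide-and-conquer response to a failed batch verification can be sketched as follows. Here batchVerify is an abstract predicate (any batch signature scheme could stand behind it), and the stub verifier with its "forged" flag is invented purely for illustration:

```javascript
// Recursively isolate authentic packets when a batch fails verification.
// batchVerify(packets) returns true iff every packet in the batch is authentic.
function divideAndConquer(packets, batchVerify) {
  if (packets.length === 0) return [];
  if (batchVerify(packets)) return packets; // whole batch is authentic
  if (packets.length === 1) return [];      // a single forged packet: discard
  const mid = Math.floor(packets.length / 2);
  return divideAndConquer(packets.slice(0, mid), batchVerify)
    .concat(divideAndConquer(packets.slice(mid), batchVerify));
}

// Stub verifier: a packet object carries a hypothetical "forged" flag.
const stubVerify = (batch) => batch.every((p) => !p.forged);

const stream = [
  { seq: 1 }, { seq: 2 }, { seq: 3, forged: true }, { seq: 4 },
];
console.log(divideAndConquer(stream, stubVerify).map((p) => p.seq)); // [1, 2, 4]
```

Each level of recursion costs extra signature verifications, which is why frequent injection can drive the receiver back toward per-packet verification, motivating MABS-E's filtering instead.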
2. TECHNOLOGIES USED
The CLR is described as the execution engine of DOTNET. It provides the environment within which programs run. Its most important features are:
Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such features.
The following features of the DOTNET framework are also worth describing. Managed code is code that targets DOTNET and contains certain extra information - metadata - to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some DOTNET languages use managed data by default, such as C#, Visual Basic DOTNET and JScript DOTNET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you're using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in DOTNET applications - data that doesn't get garbage collected but instead is looked after by unmanaged code.
The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.
The multi-language capability of the DOTNET Framework and Visual Studio DOTNET enables developers to use their existing programming skills to build all types of applications and XML Web services. The DOTNET framework supports new versions of Microsoft's old favorites Visual Basic and C++ (as Visual Basic DOTNET and Managed C++), but there are also a number of new additions to the family.
Visual Basic DOTNET has been updated to include many new and improved language features that make it a powerful object-oriented programming language. These features include inheritance, interfaces, and overloading, among others. Visual Basic now also supports structured exception handling, custom attributes, and multithreading. Visual Basic DOTNET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic DOTNET.
Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new DOTNET Framework. C# is Microsoft's new language. It's a C-style language that is essentially C++ for Rapid Application Development. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the DOTNET libraries as its own.
Microsoft Visual J# DOTNET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.
ActiveState has created Visual Perl and Visual Python, which enable DOTNET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio DOTNET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.
[Figure: the DOTNET Framework stack - Windows Forms, ASPDOTNET XML Web Services, Base Class Libraries, Common Language Runtime, Operating System]
C#DOTNET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the DOTNET Framework; it manages the execution of code and also makes the development process easier by providing services. C#DOTNET is a CLS-compliant language: any objects, classes, or components created in C#DOTNET can be used in any other CLS-compliant language, and we can use objects, classes, and components created in other CLS-compliant languages in C#DOTNET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.
CONSTRUCTORS AND DESTRUCTORS: Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#DOTNET, the Finalize procedure is available. The Finalize procedure is used to complete the tasks that must be performed when an object is destroyed; it is called automatically when an object is destroyed. In addition, the Finalize procedure can be called only from the class it belongs to or from derived classes.
GARBAGE COLLECTION: Garbage Collection is another new feature in C#DOTNET. The DOTNET Framework monitors allocated resources, such as objects and variables. In addition, the DOTNET Framework automatically releases memory for reuse by destroying objects that are no longer in use. In C#DOTNET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.
OVERLOADING: Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
MULTITHREADING: C#DOTNET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously. We can use multithreading to decrease the time taken by an application to respond to user interaction.
STRUCTURED EXCEPTION HANDLING: C#DOTNET supports structured exception handling, which enables us to detect and handle errors at runtime. In C#DOTNET, we use Try...Catch...Finally statements to create exception handlers. Using Try...Catch...Finally statements, we can create robust and effective exception handlers to improve the robustness of our application.
2.2 AJAX
Ajax is only a name given to a set of tools that existed previously. The main part is XMLHttpRequest, an object usable from JavaScript to communicate with the server. In Internet Explorer it was first an ActiveX object named XMLHTTP, before being generalized to all browsers under the name XMLHttpRequest once the Ajax technique became commonly used. The use of XMLHttpRequest in 2005 by Google, in Gmail and Google Maps, contributed to the success of this approach, but it is when the name Ajax was coined that the technology started to become really popular. Ajax can selectively modify a part of a page displayed by the browser and update it without the need to reload the whole document with all images, menus, etc. For example, fields of forms or choices of the user may be processed and the result displayed immediately in the same page. Ajax allows processing to be performed on the client computer (in JavaScript) with data taken from the server, which shares the processing power. Previously, processing of web pages was only server-side, using web services or PHP scripts, and the whole page was sent over the network, requiring data transfers that are now unnecessary. Ajax is a set of technologies, supported by a web browser, including these elements:
HTML and CSS for presentation. JavaScript (ECMAScript) for local processing, and DOM (Document Object Model) to access data inside the page or to access elements of an XML file read from the server (with the getElementsByTagName method, for example).
The XMLHttpRequest object is used to read or send data on the server asynchronously.
Optionally:
DOMParser may be used to parse XML data.
PHP or another scripting language may be used on the server.
XML and XSLT may be used to process the data if it is returned in XML form.
SOAP may be used to dialog with the server.
The word "asynchronous" means that the response of the server will be processed when available, without waiting and without freezing the display of the page. Dynamic HTML has the same purpose and is a set of standards - HTML, CSS, JavaScript - that allows changing the display of the page from user commands or from text typed by the user. Ajax is DHTML plus the XHR object to exchange data with the server.
Ajax uses a programming model with display and events. These events are user actions; interactivity is achieved with forms and buttons. DOM allows linking elements of the page with actions and also extracting data from XML files provided by the server. Event handlers call functions associated with elements of the web page. To get data from the server, XMLHttpRequest provides two methods:
Data furnished by the server will be found in the attributes of the XMLHttpRequest object:
Note that a new XMLHttpRequest object has to be created for each new file to load. We have to wait for the data to be available before processing it; for this purpose, the state of availability of the data is given by the readyState attribute of XMLHttpRequest.
The states of readyState are as follows (only the last one is really useful):
0: not initialized.
1: connection established.
2: request received.
3: answer in process.
4: finished.
The XMLHttpRequest object allows interacting with the server, thanks to its methods and attributes.
2.2.1 ATTRIBUTES
readyState: The code successively changes value from 0 to 4; 4 means "ready".
status: 200 is OK; 404 if the page is not found.
responseText: Holds the loaded data as a string of characters.
responseXml: Holds a loaded XML file; DOM's methods allow extracting data from it.
onreadystatechange: Property that takes a function as value, which is invoked when the readystatechange event is dispatched.
2.2.2 METHODS
open(mode, url, boolean): opens the connection.
mode: type of request, GET or POST.
url: the location of the file, with a path.
boolean: true (asynchronous) / false (synchronous).
Optionally, a login and a password may be added to the arguments.
send("string"): sends the request; the argument is null for a GET command.
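Putting the attributes and methods above together, a minimal asynchronous request looks like this (the URL "data.txt" and the element id "result" are hypothetical, for illustration only):

```javascript
// Sketch of the canonical XMLHttpRequest pattern described above.
function loadFile(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    // Act only on readyState 4 ("finished") with HTTP status 200 (OK).
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText); // responseText holds the loaded data
    }
  };
  xhr.open("GET", url, true); // true = asynchronous
  xhr.send(null);             // null body for a GET command
}

// Usage (in a browser page): update part of the page without reloading it.
// loadFile("data.txt", function (text) {
//   document.getElementById("result").textContent = text;
// });
```

Because the call is asynchronous, the page stays responsive while the server answers; the callback runs only when readyState reaches 4.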
A SQL-SERVER database consists of several types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO
TABLE: A table is a collection of data about a specific topic.
VIEWS OF TABLE: We can work with a table in two views:
1. Design View
2. Datasheet View
DESIGN VIEW: To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.
DATASHEET VIEW: To add, edit, or analyze the data itself, we work in the table's datasheet view mode.
QUERY: A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
REQUESTERS AND PROVIDERS: The purpose of a Web service is to provide some functionality on behalf of its owner - a person or organization, such as a business or an individual. The provider entity is the person or organization that provides an appropriate agent to implement a particular service. A requester entity is a person or organization that wishes to make use of a provider entity's Web service. It will use a requester agent to exchange messages with the provider entity's provider agent. (In most cases, the requester agent is the one to initiate this message exchange, though not always. Nonetheless, for consistency we still use the term "requester agent" for the agent that interacts with the provider agent, even in cases when the provider agent actually initiates the exchange.)
A word on terminology: Many documents use the term service provider to refer to the provider entity and/or provider agent. Similarly, they may use the term service requester to refer to the requester entity and/or requester agent. However, since these terms are ambiguous - sometimes referring to the agent and sometimes to the person or organization that owns the agent - this document prefers the terms requester entity, provider entity, requester agent and provider agent.
In order for this message exchange to be successful, the requester entity and the provider entity must first agree on both the semantics and the mechanics of the message exchange.
Service Description: The mechanics of the message exchange are documented in a Web service description (WSD). The WSD is a machine-processable specification of the Web service's interface, written in WSDL. It defines the message formats, datatypes, transport protocols, and transport serialization formats that should be used between the requester agent and the provider agent.
It also specifies one or more network locations at which a provider agent can be invoked, and may provide some information about the message exchange pattern that is expected. In essence, the service description represents an agreement governing the mechanics of interacting with that service.
SEMANTICS: The semantics of a Web service is the shared expectation about the behavior of the service, in particular in response to messages that are sent to it. In effect, this is the "contract" between the requester entity and the provider entity regarding the purpose and consequences of the interaction. Although this contract represents the overall agreement between the requester entity and the provider entity on how and why their respective agents will interact, it is not necessarily written or explicitly negotiated. It may be explicit or implicit, oral or written, machine processable or human oriented, and it may be a legal agreement or an informal (non-legal) agreement. While the service description represents a contract governing the mechanics of interacting with a particular service, the semantics represents a contract governing the meaning and purpose of that interaction. The dividing line between these two is not necessarily rigid. As more semantically rich languages are used to describe the mechanics of the interaction, more of the essential information may migrate from the informal semantics to the service description. As this migration occurs, more of the work required to achieve successful interaction can be automated.
Figure 2.2 The General Process of Engaging a Web Service
Concepts and Relationships: The formal core of the architecture is this enumeration of the concepts and relationships that are central to Web services' interoperability. The architecture is described in terms of a few simple elements: concepts, relationships and models. Concepts are often noun-like in that they identify things or properties that we expect to see in realizations of the architecture; similarly, relationships are normally linguistically verbs. As with any large-scale effort, it is often necessary to structure the architecture itself. We do this with the larger-scale meta-concept of model. A model is a coherent portion of the architecture that focuses on a particular theme or aspect of the architecture.
Concepts: A concept is expected to have some correspondence with any realizations of the architecture. For example, the message concept identifies a class of object (not to be confused with Objects and Classes as are found in Object Oriented Programming languages) that we expect to be able to identify in any Web services context. The precise form of a message may be different in different realizations, but the message concept tells us what to look for in a given concrete system rather than prescribing its precise form. Not all concepts will have a realization in terms of data objects or structures occurring in computers or communications devices; for example, the person or organization concept refers to people and human organizations. Other concepts are more abstract still; for example, message reliability denotes a property of the message transport service - a property that cannot be touched but nonetheless is important to Web services. Each concept is presented in a regular, stylized way consisting of a short definition, an enumeration of the relationships with other concepts, and a slightly longer explanatory description. For example, the concept of agent includes as relating concepts the fact that an agent is a computational resource, has an identifier and an owner. The description part of the agent concept explains in more detail why agents are important to the architecture.
Relationships: Relationships denote associations between concepts. Grammatically, relationships are verbs; or more accurately, predicates. A statement of a relationship typically takes the form: concept predicate concept. For example, in agent, we state that: "An agent is a computational resource." This statement makes an assertion, in this case about the nature of agents. Many such statements are descriptive; others are definitive: "A message has a message sender." Such a statement makes an assertion about valid instances of the architecture: we expect to be able to identify the message sender in any realization of the architecture. Conversely, any system for which we cannot identify the sender of a message is not conformant to the architecture. Even if a service is used anonymously, the sender has an identifier, but it is not possible to associate this identifier with an actual person or organization.
In this section, we describe some of those technologies that seem critical and the role they fill in relation to this architecture. This is a necessarily bottom-up perspective, since, in this section, we are looking at Web services from the perspective of the tools which can be used to design, build and deploy Web services.
The technologies that we consider here, in relation to the Architecture, are XML, SOAP, and WSDL. However, there are many other technologies that may be useful. (For example, see the list of Web services specifications compiled by Roger Cutler and Paul Denning.)
XML: XML solves a key technology requirement that appears in many places. By offering a standard, flexible and inherently extensible data format, XML significantly reduces the burden of deploying the many technologies needed to ensure the success of Web services. The important aspects of XML, for the purposes of this Architecture, are the core syntax itself, the concepts of the XML Infoset [XML Infoset], XML Schema and XML Namespaces. The XML Infoset is not a data format per se, but a formal set of information items and their associated properties that comprise an abstract description of an XML document [XML 1.0]. The XML Infoset specification provides for a consistent and rigorous set of definitions for use in other specifications that need to refer to the information in a well-formed XML document. Serialization of the XML Infoset definitions of information may be expressed using XML 1.0 [XML 1.0]. However, this is not an inherent requirement of the architecture. The flexibility in choice of serialization format(s) allows for broader interoperability between agents in the system. In the future, a binary encoding of the XML Infoset may be a suitable replacement for the textual serialization. Such a binary encoding may be more efficient and more suitable for machine-to-machine interactions.
SOAP: SOAP 1.2 provides a standard, extensible, composable framework for packaging and exchanging XML messages. In the context of this architecture, SOAP 1.2 also provides a convenient mechanism for referencing capabilities (typically by use of headers). [SOAP 1.2 Part 1] defines an XML-based messaging framework: a processing model and an extensibility model.
SOAP messages can be carried by a variety of network protocols; such as HTTP, SMTP, FTP, RMI/IIOP, or a proprietary messaging protocol.
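For concreteness, a minimal SOAP 1.2 message carrying a hypothetical request looks like the following. The service namespace, the GetQuote operation, and the Symbol element are invented for illustration; only the envelope structure and the env namespace come from the SOAP 1.2 specification:

```xml
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Header/>
  <env:Body>
    <m:GetQuote xmlns:m="http://example.org/stock">
      <m:Symbol>IBM</m:Symbol>
    </m:GetQuote>
  </env:Body>
</env:Envelope>
```

Whatever transport carries it (HTTP, SMTP, and so on), the receiver processes the same envelope, header, and body structure.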
[SOAP 1.2 Part 2] defines three optional components: a set of encoding rules for expressing instances of application-defined data types, a convention for representing remote procedure calls (RPC) and responses, and a set of rules for using SOAP with HTTP/1.1. While SOAP Version 1.2 [SOAP 1.2 Part 1] doesn't define "SOAP" as an acronym anymore, there are two expansions of the term that reflect the different ways in which the technology can be interpreted:
1. Service Oriented Architecture Protocol: In the general case, a SOAP message represents the information needed to invoke a service or reflect the results of a service invocation, and contains the information specified in the service interface definition.
2. Simple Object Access Protocol: When using the optional SOAP RPC Representation, a SOAP message represents a method invocation on a remote object, and the serialization in the argument list of that method that must be moved from the local environment to the remote environment.
WSDL: WSDL 2.0 is a language for describing Web services. WSDL describes Web services starting with the messages that are exchanged between the requester and provider agents. The messages themselves are described abstractly and then bound to a concrete network protocol and message format. Web service definitions can be mapped to any implementation language, platform, object model, or messaging system. Simple extensions to existing Internet infrastructure can implement Web services for interaction via browsers or directly within an application. The application could be implemented using COM, JMS, CORBA, COBOL, or any number of proprietary integration solutions. As long as both the sender and receiver agree on the service description (e.g. the WSDL file), the implementations behind the Web services can be anything.
3. SYSTEM ANALYSIS
3.1 INTRODUCTION
System analysis is the first stage according to the System Development Life Cycle model. System analysis is a process that starts with the analyst. Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. One aspect of analysis is defining the boundaries of the system and determining whether or not a candidate system should consider other related systems. During analysis, data is collected from the available files, decision points, and transactions handled by the present system. Logical system models and tools are used in analysis. Training, experience, and common sense are required for collection of the information needed to do the analysis.
Most multicast data is delivered over UDP, which does not provide any loss recovery support. In mobile environments, the situation is even worse. The instability of the wireless channel can cause packet loss very frequently. Moreover, the smaller data rate of the wireless channel increases the congestion possibility. This is not desirable for applications like real-time online streaming or stock quote delivery. End users of online streaming will start to complain if they experience constant service interruptions due to packet loss, and missing critical stock quotes can cause severe capital loss to service subscribers. Therefore, for applications where the quality of service is critical to end users, a multicast authentication protocol should provide a certain level of resilience to packet loss. Specifically, the impact of packet loss on the authenticity of the already-received packets should be as small as possible. Efficiency and packet loss resilience can hardly be supported simultaneously by conventional multicast schemes.
Another problem with the schemes above is that they are vulnerable to packet injection by malicious attackers. An attacker may compromise a multicast system by intentionally injecting forged packets to consume receivers' resources, leading to Denial of Service (DoS). Compared with the efficiency requirement and packet loss problems, the DoS attack is not common, but it is still important in hostile environments. In the literature, some schemes attempt to provide DoS resilience. However, they still have the packet loss problem because they are based on the same approach as previous schemes.
ECONOMICAL FEASIBILITY: This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY: This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands will be placed on the client. The developed system has modest requirements, as only minimal or no changes are required to implement it.
SOCIAL FEASIBILITY: This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but instead must accept it as a necessity. The level of acceptance by the users depends on the methods employed to educate them about the system and to make them familiar with it. Their confidence must be raised so that they can also offer constructive criticism, which is welcome, as they are the final users of the system.
4. SYSTEM DESIGN
4.1 MODULES
1. DATA ALLOCATION MODULE
The sender can send data to multiple users, choosing exactly which users send or receive the files, and the module maintains the file status. The sender sends a file to multiple users with one signature (private key) attached. A user enters the private key and then receives the file. Each and every file has one signature attached.
2. PACKET ALLOCATION
The sender splits the file into segments, or packets (for example, ten). Each packet is allocated some bytes; if any bytes are missing, packet loss has occurred. The sender chooses the block size, divides a multicast stream into blocks, associates each block with a signature, and spreads the effect of the signature across all the packets in the block.
Each receiver should be able to assure that each received packet comes from the real sender, as it claims. Efficiency also needs to be considered, especially on the receiver side. Each receiver checks the validity of the signature with the sender's public key. The key pair is generated by the batch BLS signature algorithm.
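To make the flow concrete, here is a minimal Python sketch of per-packet signing and batch verification. It uses an HMAC as a toy stand-in for the batch RSA/BLS signatures the report names, so the shared key, packet size, and function names are illustrative assumptions only; the point it shows is that packets stay uncorrelated, so losing one packet never invalidates another:

```python
import hashlib
import hmac

KEY = b"demo-shared-key"  # stand-in; MABS uses public-key signatures, not a shared key

def sign_packet(packet: bytes, key: bytes = KEY) -> bytes:
    """Toy per-packet 'signature' (an HMAC stand-in for batch RSA/BLS)."""
    return hmac.new(key, packet, hashlib.sha256).digest()

def split_into_packets(stream: bytes, size: int = 1024):
    """Divide a multicast stream into packets, each carrying its own signature."""
    return [(stream[i:i + size], sign_packet(stream[i:i + size]))
            for i in range(0, len(stream), size)]

def batch_verify(received, key: bytes = KEY) -> bool:
    """Verify whatever subset of packets arrived; a real batch signature would
    do this with a single cryptographic operation instead of a loop."""
    return all(hmac.compare_digest(sign_packet(p, key), s) for p, s in received)

stream = b"x" * 4096
packets = split_into_packets(stream)
assert batch_verify(packets)                           # the full batch verifies
assert batch_verify(packets[:2])                       # survivors still verify after loss
assert not batch_verify([(b"forged", packets[0][1])])  # an injected packet is rejected
```

Because every packet is independently verifiable, the surviving packets authenticate even when others are lost, which is the perfect loss resilience the scheme claims.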
Batch Description:
1. Batch RSA: RSA is a very popular cryptographic algorithm in many security protocols. Before the batch verification, the receiver must ensure all the messages are distinct; otherwise batch RSA is vulnerable to a forgery attack. The RSA algorithm itself is considered very secure.
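The batch RSA idea can be sketched as follows: each packet hash is signed individually, and the receiver checks the whole batch with a single exponentiation, since (∏σᵢ)ᵉ ≡ ∏h(mᵢ) (mod n). This is a toy illustration with a textbook-sized key; all parameters are assumptions, not the project's:

```python
import hashlib
import math

# Toy textbook RSA key (real deployments use 2048-bit moduli)
p, q = 61, 53
n = p * q            # 3233
e = 17
d = 2753             # d*e = 1 (mod (p-1)*(q-1))

def h(msg: bytes) -> int:
    """Hash a message to an integer in [1, n)."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n or 1

def sign(msg: bytes) -> int:
    return pow(h(msg), d, n)

def batch_verify(msgs, sigs) -> bool:
    """One modular exponentiation checks the whole batch."""
    lhs = pow(math.prod(sigs) % n, e, n)
    rhs = math.prod(h(m) for m in msgs) % n
    return lhs == rhs

msgs = [b"pkt-1", b"pkt-2", b"pkt-3"]
sigs = [sign(m) for m in msgs]
assert batch_verify(msgs, sigs)
assert not batch_verify(msgs, [sigs[0] + 1] + sigs[1:])  # tampering breaks the batch
# Swapping two signatures leaves the product unchanged, so the batch still
# passes even though each packet/signature pairing is wrong; this illustrates
# why the receiver needs extra screening, such as the distinctness requirement.
assert batch_verify(msgs, [sigs[1], sigs[0], sigs[2]])
```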
2. Batch BLS: The BLS signature scheme uses a cryptographic primitive called pairing. It consists of three phases: a key generation phase, a signing phase, and a verification phase.
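The three phases can be summarized in standard notation (a generic sketch of the BLS scheme, not taken from the report: e is a pairing, g a group generator, H a hash onto the group):

```latex
\textbf{Key generation:} choose a secret key $x \in \mathbb{Z}_p$ and publish $v = g^x$.

\textbf{Signing:} for a message $m$, output $\sigma = H(m)^x$.

\textbf{Verification:} accept iff $e(\sigma, g) = e(H(m), v)$,
since $e(H(m)^x, g) = e(H(m), g)^x = e(H(m), g^x)$.

\textbf{Batch verification (same signer):} accept $\sigma_1, \dots, \sigma_n$ iff
$e\!\left(\prod_{i=1}^{n}\sigma_i,\; g\right) = e\!\left(\prod_{i=1}^{n}H(m_i),\; v\right)$.
```

The batch check replaces n pairing verifications with a single pairing equation, which is what makes verifying any number of packets at once efficient.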
A literature survey is an important step in the software development process. Before developing the tool it is necessary to determine the time factor, the economy, and the company's strength. Once these are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support, which can be obtained from senior programmers, from books, or from websites. Before building the system, the above considerations were taken into account for developing the proposed system.
Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information: information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.
Automated prediction of trends and behaviours: Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data quickly. A typical example of a predictive problem is targeted marketing. Data mining uses data on past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events.
Automated discovery of previously unknown patterns: Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors.
Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.
Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi Square Automatic Interaction Detection (CHAID).
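As a sketch of the CART idea mentioned above, a single split can be chosen by minimizing Gini impurity. This is a toy illustration in a few lines of Python; the feature values and labels are made up:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels."""
    total = len(labels)
    return 1.0 - sum((c / total) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Return the threshold on one numeric feature that minimizes the
    weighted Gini impurity of the two resulting groups (the CART criterion)."""
    best_score, best_t = float("inf"), None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best_score:
            best_score, best_t = score, t
    return best_t

# hypothetical data: annual purchases vs. "responds to mailing"
xs = [5, 7, 8, 20, 22, 25]
ys = ["no", "no", "no", "yes", "yes", "yes"]
print(best_split(xs, ys))  # 8: this threshold separates the classes perfectly
```

A full tree repeats this split recursively on each resulting group, and the path from root to leaf yields the classification rule.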
Genetic algorithms: Optimization techniques that use processes such as genetic combination, mutation, and natural selection in a design based on the concepts of evolution.
Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k records most similar to it in a historical dataset (where k >= 1). Sometimes called the k-nearest neighbor technique.
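The k-nearest neighbor technique can be sketched in plain Python: classify a query record by majority vote among the k closest historical records. The training data here is a made-up illustration:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training records.
    `train` is a list of (feature_vector, label) pairs."""
    def dist(a, b):
        # Euclidean distance between two feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda rec: dist(rec[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((9, 8), "B")]
print(knn_classify(train, (1.5, 1.5), k=3))  # "A": all three nearest records are class A
```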
Rule induction: The extraction of useful if-then rules from data based on statistical significance.
The ideal starting point is a data warehouse containing a combination of internal data tracking all customer contact coupled with external market data about competitor activity. Background information on potential customers also provides an excellent basis for prospecting. This warehouse can be implemented in a variety of relational database systems: Sybase, Oracle, Redbrick, and so on, and should be optimized for flexible and fast data access.
1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all data manipulations can be performed. It also provides record viewing facilities.
3. When data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are provided as needed so that the user is never left in a maze. Thus the objective of input design is to create an input layout that is easy to follow.
The output of an information system should accomplish one or more of the following objectives: convey information about past activities, current status, or projections of the future; signal important events, opportunities, problems, or warnings; trigger an action; confirm an action.
The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.
[Data Flow Diagram: Admin and User each log in; if the account does not exist, a Create Account step runs. The Admin sends a file, and the system checks whether the user's mobile entry exists, marking the file as sent or not sent. The User checks the received file, gets the private key, and, if the check succeeds, downloads the file; otherwise the flow ends.]
[Use case diagram: actors Admin and User; use cases: create an account, login, search Admin/User.]
CLASS DIAGRAM
[Class Diagram:
- Login Account: attributes Name, ID, Password, Mobile, Admin/User; operations CreateAccount(), GenerateKey().
- Upload Files: attributes FileID, FileName, UserID, FileType, Filepath, FileStatus; operations SenttoUser(), ViewFileDetails().
- Multiple Users: attributes ID, FileNames, Users; operations MultipleSend(), Failure().
- File Received: attributes ID, FileNames, PrivateKey; operations filenames(), privatekey().]
SEQUENCE DIAGRAM
[Sequence Diagram: the Admin sends a file to multiple users through the database; the files are received with a private key and downloaded; an acknowledgment is returned, and an alert message is raised on failure.]
ACTIVITY DIAGRAM
[Activity Diagram: login; if the account does not exist, create an account. After login: file maintenance, received files, and file download; if the private key exists, the file is downloaded successfully.]
5. IMPLEMENTATION
1. Install Microsoft Visual Studio 2008 and Microsoft SQL Server Management Studio 2005, then run both.
2. Create a new database for the project and add it to the SQL Server. Create tables for registration, multiple users, sent files, and successful receives.
3. In Microsoft Visual Studio 2008, open the project by selecting the website option and choosing the path where the project file is saved.
4. When the project loads, go to Solution Explorer and open the web.config file. Set the connection string under the appSettings tag according to the user ID and password given while installing SQL Server.
5. Start debugging; the home page of the project opens.
6. Create an account as either User or Admin. The User is the receiver and the Admin is the sender, so there can be multiple users but only a single admin.
7. When a file is sent, it is divided into segments. The admin has a files-sent option and the user a files-received option; if the file does not reach the user, it appears under files not received (user) or files not sent (admin) on the respective home pages. After a successful receive, the user can download the file.
8. Because the client and server run on the same system, no real packet loss occurs, so the packet loss option must be selected deliberately to observe packet loss while the sender sends packets to the receiver.
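The connection string setting described above might look like the following web.config fragment (the server name, database name "MABS", user ID, and password are placeholders, not values from the project):

```xml
<configuration>
  <appSettings>
    <!-- placeholder values; use the user ID and password chosen during SQL Server setup -->
    <add key="ConnectionString"
         value="Data Source=localhost;Initial Catalog=MABS;User ID=sa;Password=yourpassword" />
  </appSettings>
</configuration>
```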
6. SYSTEM TESTING
Software testing is the process of executing a program or system with the intent of finding errors, or any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways; by contrast, software can fail in many bizarre ways, and detecting all of the different failure modes for software is generally infeasible. Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until upgrades or obsolescence. So once the software is shipped, the design defects (bugs) will be buried in and remain latent until activation.
Software bugs will almost always exist in any software module with moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only limited ability to manage complexity. It is also true that for any complex systems, design defects can never be completely ruled out.
Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and digital systems are not continuous, testing boundary values is not sufficient to guarantee correctness: all possible values would need to be tested and verified, but complete testing is infeasible. Exhaustively testing a simple program that adds just two 32-bit integer inputs (yielding 2^64 distinct test cases) would take hundreds of years, even at thousands of tests per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects, and human interactions are all possible input parameters.
A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case
that it didn't work for previously. But its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted, but the expense of doing so is often prohibitive. Regardless of the limitations, testing is an integral part of software development, broadly deployed in every phase of the software development cycle; typically, more than 50 percent of development time is spent in testing. Testing is usually performed for the following purposes: to discover errors, by trying to find every conceivable fault or weakness in a work product; to check the functionality of components, sub-assemblies, assemblies, and/or a finished product; and to ensure that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, each addressing a specific testing requirement.
Integration testing demonstrates that the combination of components is correct and consistent, and is specifically aimed at exposing the problems that arise from combining components.
Valid input: identified classes of valid input must be accepted.
Invalid input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identifying business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.
One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of specification itself is broad: it may contain any requirement including the structure, programming language, and programming style as part of the specification content. We may be reluctant to consider random testing as a testing technique, since test case selection is simple and straightforward: cases are randomly chosen. Studies indicate, however, that random testing is more cost-effective for many programs; some very subtle errors can be discovered at low cost, and it is not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
It is obvious that the more of the input space we have covered, the more problems we will find, and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to test the input space exhaustively. But as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing.

To make things worse, we can never be sure whether the specification is correct or complete. Due to limitations of the language used in specifications (usually natural language), ambiguity is often inevitable. Even with a formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered in limited words, and people can seldom specify clearly what they want; they can usually only tell whether a prototype is, or is not, what they want after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.

Research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured in the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of it. Partitioning is one of the common techniques: if we partition the input space and assume all input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class.
Domains can be exhaustively tested and covered by selecting representative value(s) in each domain. Boundary values are of special interest: experience shows that test cases exploring boundary conditions have a higher payoff than test cases that do not. Boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered, and good partitioning requires knowledge of the software structure. A good testing plan will contain not only black-box testing, but also white-box approaches, and combinations of the two.
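The partitioning and boundary-value ideas above can be illustrated with a tiny example. Assuming a hypothetical age field that accepts values 18 to 60, one representative per partition plus the values on each boundary edge give a compact test set:

```python
# Equivalence partitions for an input field accepting ages 18..60:
# below range (invalid), in range (valid), above range (invalid).
def accept_age(age: int) -> bool:
    return 18 <= age <= 60

# One representative per partition, plus the boundary values on each edge,
# where experience shows defects cluster.
cases = {
    10: False,   # representative of the "below range" partition
    17: False,   # lower boundary - 1
    18: True,    # lower boundary
    35: True,    # representative of the "valid" partition
    60: True,    # upper boundary
    61: False,   # upper boundary + 1
    99: False,   # representative of the "above range" partition
}
for age, expected in cases.items():
    assert accept_age(age) == expected
```

Seven cases cover three partitions and four boundary points, instead of testing every possible integer input.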
Test objectives: All field entries must work properly. Pages must be activated from the identified link. The entry screens, messages, and responses must not be delayed.
Features to be tested: Verify that the entries are of the correct format. No duplicate entries should be allowed. All links should take the user to the correct page.
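As a sketch of the "correct format" and "no duplicates" checks listed above, a small validation routine might look like this (the username rule and function name are assumptions for illustration, not the project's actual checks):

```python
import re

def validate_entry(username: str, existing: set) -> list:
    """Return a list of validation errors for a new data-entry record."""
    errors = []
    if not re.fullmatch(r"[A-Za-z0-9_]{3,20}", username):
        errors.append("username must be 3-20 letters, digits, or underscores")
    if username in existing:
        errors.append("duplicate entries are not allowed")
    return errors

existing = {"alice", "bob"}
assert validate_entry("carol", existing) == []                           # accepted
assert "duplicate entries are not allowed" in validate_entry("alice", existing)
assert validate_entry("x!", existing)                                    # bad format rejected
```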
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
7. SAMPLE SCREENS
8. FUTURE ENHANCEMENT
A future enhancement of the project is to increase the batch size: when congestion at the receiver is high, a whole batch can be sent instead of sending each packet individually, thus removing the packet delay.
9. BIBLIOGRAPHY
[2] T. Ballardie and J. Crowcroft, "Multicast-Specific Security Threats and Counter-Measures," Proc. Second Ann. Network and Distributed System Security Symp. (NDSS '95), pp. 2-16, Feb. 1995.
[3] P. Judge and M. Ammar, "Security Issues and Solutions in Multicast Content Distribution: A Survey," IEEE Network Magazine, vol. 17, no. 1, pp. 30-36, Jan./Feb. 2003.
[4] Y. Challal, H. Bettahar, and A. Bouabdallah, "Taxonomy of Multicast Data Origin Authentication: Issues and Solutions," IEEE Comm. Surveys & Tutorials, vol. 6, no. 3, pp. 34-57, Oct. 2004.
[5] Y. Zhou and Y. Fang, "BABRA: Batch-Based Broadcast Authentication in Wireless Sensor Networks," Proc. IEEE GLOBECOM, Nov. 2006.
SITES REFERRED
http://www.asp.net.com
http://www.dotnetspider.com/
http://www.dotnetspark.com
http://www.almaden.ibm.com/software/quest/Resources/
http://www.computer.org/publications/dlib
http://www.developerfusion.com/