

1. INTRODUCTION
MABS stands for Multicast Authentication Based on Batch Signature. Conventional block-based multicast authentication schemes overlook the heterogeneity of receivers by letting the sender choose the block size, divide a multicast stream into blocks, associate each block with a signature, and spread the effect of the signature across all the packets in the block through hash graphs or coding algorithms. The correlation among packets makes them vulnerable to packet loss, which is inherent in the Internet and in wireless networks. Moreover, the lack of Denial of Service (DoS) resilience renders most of them vulnerable to packet injection in hostile environments. In this project, we propose a novel multicast authentication protocol, namely MABS, including two schemes. The basic scheme (MABS-B) eliminates the correlation among packets and thus provides perfect resilience to packet loss, and it is also efficient in terms of latency, computation, and communication overhead due to an efficient cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously. We also present an enhanced scheme, MABS-E, which combines the basic scheme with a packet filtering mechanism to alleviate the DoS impact while preserving the perfect resilience to packet loss.

Traditionally, multicast authentication schemes manage the different capabilities of the receivers by letting the sender choose the block size, divide a multicast stream into blocks, connect each block with a signature, and spread the effect of the signature across all the packets in the block. The correlation among packets makes them vulnerable to packet loss. Moreover, the lack of Denial of Service (DoS) resilience leaves them open to packet injection. Furthermore, the existing system does not consider the efficiency of the receivers: while the multicast sender could be a powerful server, the receivers have widely different capabilities and resources.

Our goal is to eliminate the correlation among packets and provide perfect resilience to packet loss. We develop an efficient system with low latency, computation, and communication overhead by using an efficient cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously. We also present an enhanced scheme that adds a packet filtering mechanism to alleviate the DoS impact while preserving the perfect resilience to packet loss. Multicast itself is gaining popularity in applications such as real-time stock quotes, interactive games, video conferencing, live video broadcast, and video on demand.

1.1 DEFINITIONS, ACRONYMS AND ABBREVIATIONS


DoS : Denial of Service
TCP/IP : Transmission Control Protocol/Internet Protocol
CLR : Common Language Runtime
CLS : Common Language Specification
CTS : Common Type System
XML : Extensible Markup Language
HTTP : Hypertext Transfer Protocol
COBOL : Common Business Oriented Language
FORTRAN : Formula Translation
SQL : Structured Query Language
RSA : Rivest, Shamir and Adleman algorithm
BLS : Boneh-Lynn-Shacham signature
DSA : Digital Signature Algorithm

1.2 SCOPE
Multicast is an efficient method to deliver multimedia content from a sender to a group of receivers and is gaining popularity in applications such as real-time stock quotes, interactive games, video conferencing, live video broadcast, or video on demand. Authentication is one of the critical topics in securing multicast in an environment attractive to malicious attacks. Basically, multicast authentication may provide the following security services:
Data integrity: Each receiver should be able to assure that received packets have not been modified during transmission.
Data origin authentication: Each receiver should be able to assure that each received packet comes from the real sender as it claims.
Nonrepudiation: The sender of a packet should not be able to deny sending the packet to receivers in case there is a dispute between the sender and receivers.
All three services can be supported by an asymmetric key technique called signature. In an ideal case, the sender generates a signature for each packet with its private key, which is called signing, and each receiver checks the validity of the signature with the sender's public key, which is called verifying. If the verification succeeds, the receiver knows the packet is authentic.
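
To make the per-packet signing and verifying concrete, the following is a minimal C# sketch, assuming plain RSA from System.Security.Cryptography; the packet contents, key size, and hash choice are illustrative and not the project's exact code.

using System;
using System.Security.Cryptography;
using System.Text;

class PerPacketSigningSketch
{
    static void Main()
    {
        // Sender side: the private key signs each packet.
        using (RSACryptoServiceProvider senderKey = new RSACryptoServiceProvider(2048))
        {
            string publicKeyXml = senderKey.ToXmlString(false);   // public part, shared with receivers

            byte[] packet = Encoding.UTF8.GetBytes("multicast packet payload");
            byte[] signature = senderKey.SignData(packet, "SHA1");

            // Receiver side: only the public key is needed to verify.
            using (RSACryptoServiceProvider verifier = new RSACryptoServiceProvider())
            {
                verifier.FromXmlString(publicKeyXml);
                bool authentic = verifier.VerifyData(packet, "SHA1", signature);
                Console.WriteLine(authentic ? "packet authentic" : "packet rejected");
            }
        }
    }
}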

1.3 OBJECTIVES
We propose a multicast authentication protocol called MABS (short for Multicast Authentication Based on Batch Signature). MABS includes two schemes. The basic scheme (called MABS-B hereafter) utilizes an efficient asymmetric cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously with one signature verification, to address the efficiency and packet loss problems in general environments. The enhanced scheme (called MABS-E hereafter) combines MABS-B with packet filtering to alleviate the DoS impact in hostile environments. MABS provides data integrity, origin authentication, and nonrepudiation as previous asymmetric key based protocols do. In addition, we make the following contributions:

Our MABS can achieve perfect resilience to packet loss in lossy channels in the sense that no matter how many packets are lost, the already-received packets can still be authenticated by receivers.

MABS-B is efficient in terms of latency, computation, and communication overhead. Though MABS-E is less efficient than MABS-B since it includes the DoS defense, its overhead is still at the same level as previous schemes.

We propose two new batch signature schemes based on BLS and DSA and show they are more efficient than the batch RSA signature scheme.

The basic scheme MABS-B targets the packet loss problem, which is inherent in the Internet and in wireless networks. It has perfect resilience to packet loss, no matter whether it is random loss or burst loss. In some circumstances, however, an attacker can inject forged packets into a batch of packets to disrupt the batch signature verification, leading to DoS. A naive approach to defeat the DoS attack is to divide the batch into multiple smaller batches and perform batch verification over each smaller batch; this divide-and-conquer approach can be carried out recursively for each smaller batch, which means more signature verifications at each receiver. In the worst case, the attacker can inject forged packets at very high frequency and expect that each receiver stops the batch operation and falls back to basic per-packet signature verification, which may not be viable on resource-constrained receiver devices. The enhanced scheme MABS-E combines the basic scheme MABS-B with a packet filtering mechanism to tolerate packet injection. In particular, the sender attaches to each packet a mark, which is unique to the packet and cannot be spoofed. At each receiver, the multicast stream is classified into disjoint sets based on marks. Each set of packets comes from either the real sender or the attacker. The mark design ensures that a packet from the real sender never falls into any set of packets from the attacker, and vice versa.
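
As a rough illustration of the naive divide-and-conquer fallback described above (not of MABS-E's filtering itself), here is a hedged C# sketch; the Packet type and the batchVerify delegate are placeholders standing in for the real batch signature check.

using System;
using System.Collections.Generic;

class Packet
{
    public byte[] Data;
    public byte[] Signature;
}

static class DivideAndConquerVerify
{
    // Returns the packets that survive verification. One call to batchVerify
    // covers a whole sub-batch; the batch is split only when verification fails.
    public static List<Packet> Authenticate(List<Packet> batch,
                                            Func<List<Packet>, bool> batchVerify)
    {
        List<Packet> accepted = new List<Packet>();
        if (batch.Count == 0)
            return accepted;

        if (batchVerify(batch))              // whole batch is clean
        {
            accepted.AddRange(batch);
            return accepted;
        }
        if (batch.Count == 1)                // a single forged packet: drop it
            return accepted;

        int mid = batch.Count / 2;           // otherwise split and recurse
        accepted.AddRange(Authenticate(batch.GetRange(0, mid), batchVerify));
        accepted.AddRange(Authenticate(batch.GetRange(mid, batch.Count - mid), batchVerify));
        return accepted;
    }
}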

1.4 SYSTEM REQUIREMENTS


1.4.1 HARDWARE REQUIREMENTS

System Processor : Pentium IV 2.4 GHz
Hard Disk        : 40 GB
RAM              : 1 GB

1.4.2 SOFTWARE REQUIREMENTS

Operating System : Windows XP
Browser          : Mozilla Firefox
Coding Language  : ASP.NET with C#
Database         : SQL Server 2005

2. TECHNOLOGIES USED

2.1 FEATURES OF DOTNET


Microsoft DOTNET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The DOTNET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. There is no language barrier with DOTNET: there are numerous languages available to the developer, including Managed C++, C#, Visual Basic, and JScript. The DOTNET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communication protocols so that components created in different languages can easily interoperate. DOTNET is also the collective name given to various software components built upon the DOTNET platform. These will be both products (Visual Studio DOTNET and Windows DOTNET Server, for instance) and services (like Passport, DOTNET My Services, and so on).

2.1.1 THE DOTNET FRAMEWORK

The DOTNET Framework has two main parts:

1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.

The CLR is described as the execution engine of DOTNET. It provides the environment within which programs run. Its most important features are:
Conversion from a low-level, assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
Memory management, notably including garbage collection.
Checking and enforcing security restrictions on the running code.
Loading and executing programs, with version control and other such features.
The following feature of the DOTNET framework is also worth describing: managed code is the code that targets DOTNET and contains certain extra information - metadata - to describe itself. While both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.

With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some DOTNET languages use managed data by default, such as C#, Visual Basic DOTNET, and JScript DOTNET, whereas others, namely C++, do not. Targeting the CLR can, depending on the language you are using, impose certain constraints on the features available. As with managed and unmanaged code, one can have both managed and unmanaged data in DOTNET applications - data that does not get garbage collected but instead is looked after by unmanaged code.

2.1.2 COMMON TYPE SYSTEM


The CLR uses something called the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling. As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code does not attempt to access memory that has not been allocated to it.

2.1.3 COMMON LANGUAGE SPECIFICATION


The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.

2.1.4 CLASS LIBRARY AND LANGUAGES SUPPORTED BY DOTNET


DOTNET provides a single-rooted hierarchy of classes, containing over 7,000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.

The set of classes is pretty comprehensive, providing collections, file, screen, and network I/O, threading, and so on, as well as XML and database connectivity. The class library is subdivided into a number of sets (or namespaces), each providing distinct areas of functionality, with dependencies between the namespaces kept to a minimum.

The multi-language capability of the DOTNET Framework and Visual Studio DOTNET enables developers to use their existing programming skills to build all types of applications and XML Web services. The DOTNET framework supports new versions of Microsoft's old favorites Visual Basic and C++ (as Visual Basic DOTNET and Managed C++), but there are also a number of new additions to the family.

Visual Basic DOTNET is also CLS compliant, which means that any CLS-compliant language can use the classes, objects, and components you create in Visual Basic DOTNET.

Managed Extensions for C++ and attributed programming are just some of the enhancements made to the C++ language. Managed Extensions simplify the task of migrating existing C++ applications to the new DOTNET Framework. C# is Microsoft's new language. It is a C-style language that is essentially C++ for Rapid Application Development. Unlike other languages, its specification is just the grammar of the language. It has no standard library of its own, and instead has been designed with the intention of using the DOTNET libraries as its own.

Microsoft Visual J# DOTNET provides the easiest transition for Java-language developers into the world of XML Web Services and dramatically improves the interoperability of Java-language programs with existing software written in a variety of other programming languages.

ActiveState has created Visual Perl and Visual Python, which enable DOTNET-aware applications to be built in either Perl or Python. Both products can be integrated into the Visual Studio DOTNET environment. Visual Perl includes support for ActiveState's Perl Dev Kit.

Other languages for which DOTNET compilers are available include

FORTRAN
COBOL
Eiffel


ASPDOTNET / XML Web Services    Windows Forms
Base Class Libraries
Common Language Runtime
Operating System

Table 2.1 Dotnet Framework

C#DOTNET is also compliant with the CLS (Common Language Specification) and supports structured exception handling. The CLS is a set of rules and constructs that are supported by the CLR (Common Language Runtime). The CLR is the runtime environment provided by the DOTNET Framework; it manages the execution of the code and also makes the development process easier by providing services. C#DOTNET is a CLS-compliant language. Any objects, classes, or components created in C#DOTNET can be used in any other CLS-compliant language. In addition, we can use objects, classes, and components created in other CLS-compliant languages in C#DOTNET. The use of the CLS ensures complete interoperability among applications, regardless of the languages used to create them.

CONSTRUCTORS AND DESTRUCTORS: Constructors are used to initialize objects, whereas destructors are used to destroy them. In other words, destructors are used to release the resources allocated to the object. In C#DOTNET the Finalize method (the destructor) is available. The Finalize method is used to complete the tasks that must be performed when an object is destroyed. It is called automatically when an object is destroyed, and it can be invoked only from the class it belongs to or from derived classes.
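
A minimal C# sketch of a constructor and a destructor (finalizer) follows; the class and its messages are illustrative only.

using System;

class Connection
{
    private bool open;

    // Constructor: initializes the object when it is created.
    public Connection()
    {
        open = true;
        Console.WriteLine("connection opened");
    }

    // Destructor (finalizer): runs when the garbage collector destroys the object.
    ~Connection()
    {
        if (open)
            Console.WriteLine("connection closed by the finalizer");
    }
}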


GARBAGE COLLECTION: Garbage Collection is another new feature in C#DOTNET. The DOTNET Framework monitors allocated resources, such as objects and variables. In addition, the DOTNET Framework automatically releases memory for reuse by destroying objects that are no longer in use. In C#DOTNET, the garbage collector checks for the objects that are not currently in use by applications. When the garbage collector comes across an object that is marked for garbage collection, it releases the memory occupied by the object.

OVERLOADING: Overloading is another feature in C#. Overloading enables us to define multiple procedures with the same name, where each procedure has a different set of arguments. Besides using overloading for procedures, we can use it for constructors and properties in a class.
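
For example, a hedged sketch of overloading in C#: two Area methods share one name but take different argument lists (the class and method names are illustrative).

using System;

class Geometry
{
    // Area of a circle.
    public double Area(double radius)
    {
        return Math.PI * radius * radius;
    }

    // Area of a rectangle: same name, different argument list.
    public double Area(double width, double height)
    {
        return width * height;
    }
}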

MULTITHREADING: C#DOTNET also supports multithreading. An application that supports multithreading can handle multiple tasks simultaneously. We can use multithreading to decrease the time taken by an application to respond to user interaction.
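
A minimal sketch of multithreading with System.Threading, assuming a console application; the worker's task is illustrative.

using System;
using System.Threading;

class ThreadingSketch
{
    static void Main()
    {
        Thread worker = new Thread(delegate()
        {
            for (int i = 0; i < 3; i++)
            {
                Console.WriteLine("worker step {0}", i);
                Thread.Sleep(100);
            }
        });

        worker.Start();                  // runs concurrently with the main thread
        Console.WriteLine("main thread stays responsive while the worker runs");
        worker.Join();                   // wait for the worker to finish
    }
}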

STRUCTURED EXCEPTION HANDLING: C#DOTNET supports structured exception handling, which enables us to detect and remove errors at runtime. In C#DOTNET, we use try-catch-finally statements to create exception handlers. Using try-catch-finally statements, we can create robust and effective exception handlers to improve the performance of our application.
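
A hedged sketch of a try-catch-finally handler in C#; the file name is illustrative.

using System;
using System.IO;

class ExceptionHandlingSketch
{
    static void Main()
    {
        StreamReader reader = null;
        try
        {
            reader = new StreamReader("settings.txt");   // the file may not exist
            Console.WriteLine(reader.ReadLine());
        }
        catch (FileNotFoundException ex)
        {
            Console.WriteLine("could not find the file: " + ex.Message);
        }
        finally
        {
            if (reader != null)
                reader.Close();                          // always runs, error or not
        }
    }
}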


2.1.5 OBJECTIVES OF DOTNET FRAMEWORK


1. To provide a consistent object-oriented programming environment, whether object code is stored and executed locally, executed locally but distributed over the Internet, or executed remotely.
2. To provide a code-execution environment that minimizes software deployment conflicts and guarantees safe execution of code.
3. To eliminate performance problems.
There are different types of applications, such as Windows-based applications and Web-based applications.


2.2 AJAX
Ajax is only a name given to a set of tools that existed previously. The main component is XMLHttpRequest, an object usable in JavaScript that lets a page exchange data with the server. In Internet Explorer it was first implemented as an ActiveX object named XMLHTTP, before being generalized on all browsers under the name XMLHttpRequest once the Ajax technique became commonly used. The use of XMLHttpRequest in 2005 by Google, in Gmail and Google Maps, contributed to the success of this approach, and it was when the name Ajax itself was coined that the technology started to become so popular. Ajax can selectively modify a part of a page displayed by the browser and update it without the need to reload the whole document with all its images, menus, etc. For example, form fields and user choices may be processed and the result displayed immediately in the same page. Ajax allows processing to be performed on the client computer (in JavaScript) with data taken from the server, which shares the processing load. Previously, processing of a web page was only server-side, using web services or PHP scripts, and the whole page was sent over the network, requiring data transfers that are now unnecessary. Ajax is a set of technologies, supported by a web browser, including these elements:

HTML and CSS for presentation.
JavaScript (ECMAScript) for local processing, and DOM (Document Object Model) to access data inside the page or to access elements of an XML file read from the server (with the getElementsByTagName method, for example).
The XMLHttpRequest object, used to read or send data to the server asynchronously.

Optionally:
DOMParser may be used.
PHP or another scripting language may be used on the server.
XML and XSLT to process the data if returned in XML form.
SOAP may be used to dialog with the server.


The word "asynchronous" means that the response of the server will be processed when it is available, without waiting and without freezing the display of the page. Dynamic HTML has the same purpose and is a set of standards: HTML, CSS, JavaScript. It allows changing the display of the page from user commands or from text typed by the user. Ajax is DHTML plus the XHR (XMLHttpRequest) object to exchange data with the server.

Fig 2.1 Diagram of operation of Ajax


Ajax uses a programming model based on display and events. These events are user actions; interactivity is achieved with forms and buttons. DOM allows linking elements of the page with actions and also extracting data from the XML files provided by the server. Event handlers call functions associated with elements of the web page. To get data from the server, XMLHttpRequest provides two methods:

open: create a connection.
send: send a request to the server.

Data furnished by the server will be found in the attributes of the XMLHttpRequest object:

responseXml for an XML file, or responseText for plain text.

Note that a new XMLHttpRequest object has to be created for each new file to load. We have to wait for the data to be available before processing it, and for this purpose the state of availability of the data is given by the readyState attribute of XMLHttpRequest.

States of readyState (only the last one is really useful):
0: not initialized.
1: connection established.
2: request received.
3: answer in process.
4: finished.

Table 2.2 readyState values


The XMLHttpRequest object allows interacting with the server through its methods and attributes.

2.2.1 ATTRIBUTES
readyState         : The code successively changes value from 0 to 4, where 4 means "ready".
status             : 200 is OK; 404 if the page is not found.
responseText       : Holds loaded data as a string of characters.
responseXml        : Holds a loaded XML file; DOM methods allow data to be extracted.
onreadystatechange : Property that takes a function as value, invoked when the readystatechange event is dispatched.

Table 2.3 Attributes in AJAX

2.2.2 METHODS
open(mode, url, boolean) :
    mode: type of request, GET or POST.
    url: the location of the file, with a path.
    boolean: true (asynchronous) / false (synchronous).
    Optionally, a login and a password may be added to the arguments.
send("string") :
    Sends the request; null for a GET command.

Table 2.4 Methods in AJAX


2.3 FEATURES OF SQL-SERVER


The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services.

A SQL-SERVER database consists of several types of objects. They are:
1. TABLE
2. QUERY
3. FORM
4. REPORT
5. MACRO

TABLE: A table is a collection of data about a specific topic.
VIEWS OF TABLE: We can work with a table in two views:
1. Design View
2. Datasheet View


DESIGN VIEW: To build or modify the structure of a table, we work in the table design view. We can specify what kind of data the table will hold.

DATASHEET VIEW: To add, edit, or analyze the data itself, we work in the table's datasheet view mode.

QUERY: A query is a question that is asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you edit it) or a snapshot (which cannot be edited). Each time we run a query, we get the latest information in the dynaset. Access either displays the dynaset or snapshot for us to view, or performs an action on it, such as deleting or updating.
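
A hedged C# (ADO.NET) sketch of running such a query against SQL Server 2005; the connection string, table, and column names are illustrative and not the project's actual schema.

using System;
using System.Data.SqlClient;

class QuerySketch
{
    static void Main()
    {
        string connectionString =
            "Data Source=localhost;Initial Catalog=MABS;Integrated Security=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            SqlCommand command = new SqlCommand(
                "SELECT FileName, FileStatus FROM SentFiles WHERE UserId = @user", connection);
            command.Parameters.AddWithValue("@user", "receiver1");

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())        // each row is part of the "answer" to the query
                    Console.WriteLine("{0} : {1}", reader["FileName"], reader["FileStatus"]);
            }
        }
    }
}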

2.4 WEB SERVICES


A Web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards. AGENTS AND SERVICES: A Web service is an abstract notion that must be implemented by a concrete agent. The agent is the concrete piece of software or hardware that sends and receives messages, while the service is the resource characterized by the abstract set of functionality that is provided. To illustrate this distinction, you might implement a particular Web service using one agent one day (perhaps written in one programming language), and a different agent the next day (perhaps written in a different programming language) with the same functionality. Although the agent may have changed, the Web service remains the same.
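
A hedged sketch of how such a service could be exposed in this project's stack (an ASP.NET ASMX Web service in C#); the service name, namespace, and method are illustrative, and the WSDL description is generated by the framework.

using System.Web.Services;

[WebService(Namespace = "http://example.org/mabs/")]
public class FileStatusService : WebService
{
    // Exposed over SOAP/HTTP; other systems call it as prescribed by its WSDL.
    [WebMethod]
    public string GetFileStatus(string fileName)
    {
        // Illustrative logic only: a real agent would look the file up in the database.
        return "Received: " + fileName;
    }
}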


REQUESTERS AND PROVIDERS: The purpose of a Web service is to provide some functionality on behalf of its owner -- a person or organization, such as a business or an individual. The provider entity is the person or organization that provides an appropriate agent to implement a particular service. A requester entity is a person or organization that wishes to make use of a provider entity's Web service. It will use a requester agent to exchange messages with the provider entity's provider agent. (In most cases, the requester agent is the one to initiate this message exchange, though not always. Nonetheless, for consistency we still use the term "requester agent" for the agent that interacts with the provider agent, even in cases when the provider agent actually initiates the exchange.) A word on terminology: Many documents use the term service provider to refer to the provider entity and/or provider agent. Similarly, they may use the term service requester to refer to the requester entity and/or requester agent. However, since these terms are ambiguous -sometimes referring to the agent and sometimes to the person or organization that owns the agent -- this document prefers the terms requester entity, provider entity, requester agent and provider agent. In order for this message exchange to be successful, the requester entity and the provider entity must first agree on both the semantics and the mechanics of the message exchange. Service Description: The mechanics of the message exchange are documented in a Web service description (WSD). The WSD is a machine-processable specification of the Web service's interface, written in WSDL. It defines the message formats, datatypes, transport protocols, and transport serialization formats that should be used between the requester agent and the provider agent. It also specifies one or more network locations at which a provider agent can be invoked, and may provide some information about the message exchange pattern that is expected. In essence, the service description represents an agreement governing the mechanics of interacting with that service.


SEMANTICS: The semantics of a Web service is the shared expectation about the
behavior of the service, in particular in response to messages that are sent to it. In effect, this is the "contract" between the requester entity and the provider entity regarding the purpose and consequences of the interaction. Although this contract represents the overall agreement between the requester entity and the provider entity on how and why their respective agents will interact, it is not necessarily written or explicitly negotiated. It may be explicit or implicit, oral or written, machine processable or human oriented, and it may be a legal agreement or an informal (non-legal)agreement. While the service description represents a contract governing the mechanics of interacting with a particular service, the semantics represents a contract governing the meaning and purpose of that interaction. The dividing line between these two is not necessarily rigid. As more semantically rich languages are used to describe the mechanics of the interaction, more of the essential information may migrate from the informal semantics to the service description. As this migration occurs, more of the work required to achieve successful interaction can be automated.

2.4.1 OVERVIEW OF ENGAGING A WEB SERVICE


There are many ways that a requester entity might engage and use a Web service. In general, the following broad steps are required, as illustrated in Figure 2.2: (1) the requester and provider entities become known to each other (or at least one becomes known to the other); (2) the requester and provider entities somehow agree on the service description and semantics that will govern the interaction between the requester and provider agents; (3) the service description and semantics are realized by the requester and provider agents; and (4) the requester and provider agents exchange messages, thus performing some task on behalf of the requester and provider entities. (That is, the exchange of messages with the provider agent represents the concrete manifestation of interacting with the provider entity's Web service.)


Figure 2.2 The General Process of Engaging a Web Service Concepts and Relationships: The formal core of the architecture is this enumeration of the concepts and relationships that are central to Web services' interoperability. The architecture is described in terms of a few simple elements: concepts, relationships and models. Concepts are often noun-like in that they identify things or properties that we expect to see in realizations of the architecture, similarly relationships are normally linguistically verbs. As with any large-scale effort, it is often necessary to structure the architecture itself. We do this with the larger-scale meta-concept of model. A model is a coherent portion of the architecture that focuses on a particular theme or aspect of the architecture. Concepts: A concept is expected to have some correspondence with any realizations of the architecture. For example, the message concept identifies a class of object (not to be confused with Objects and Classes as are found in Object Oriented Programming languages) that we expect to be able to identify in any Web services context. The precise
form of a message may be different in different realizations, but the message concept tells us what to look for in a given concrete system rather than prescribing its precise form. Not all concepts will have a realization in terms of data objects or structures occurring in computers or communications devices; for example, the person or organization concept refers to people and human organizations. Other concepts are more abstract still; for example, message reliability denotes a property of the message transport service - a property that cannot be touched but nonetheless is important to Web services. Each concept is presented in a regular, stylized way consisting of a short definition, an enumeration of the relationships with other concepts, and a slightly longer explanatory description. For example, the concept of agent includes as related concepts the fact that an agent is a computational resource, has an identifier, and has an owner. The description part of the agent explains in more detail why agents are important to the architecture. Relationships: Relationships denote associations between concepts. Grammatically, relationships are verbs; or, more accurately, predicates. A statement of a relationship typically takes the form: concept predicate concept. For example, in agent, we state that: An agent is a computational resource. This statement makes an assertion, in this case about the nature of agents. Many such statements are descriptive; others are definitive: A message has a message sender. Such a statement makes an assertion about valid instances of the architecture: we expect to be able to identify the message sender in any realization of the architecture. Conversely, any system for which we cannot identify the sender of a message is not conformant to the architecture. Even if a service is used anonymously, the sender has an identifier, but it is not possible to associate this identifier with an actual person or organization.


2.4.2 WEB SERVICES TECHNOLOGIES


Web service architecture involves many layered and interrelated technologies. There are many ways to visualize these technologies, just as there are many ways to build and use Web services. Figure below provides one illustration of some of these technology families.

Figure 2.3 Web Services Architecture Stack

In this section we describe some of those technologies that seem critical and the role they fill in relation to this architecture. This is a necessarily bottom-up perspective, since, in this section, we are looking at Web services from the perspective of tools which can be used to design, build, and deploy Web services.


The technologies that we consider here, in relation to the Architecture, are XML, SOAP, and WSDL. However, there are many other technologies that may be useful. (For example, see the list of Web services specifications compiled by Roger Cutler and Paul Denning.) XML: XML solves a key technology requirement that appears in many places. By offering a standard, flexible and inherently extensible data format, XML significantly reduces the burden of deploying the many technologies needed to ensure the success of Web services. The important aspects of XML, for the purposes of this Architecture, are the core syntax itself, the concepts of the XML Infoset [XML Infoset], XML Schema and XML Namespaces. XML Infoset is not a data format per se, but a formal set of information items and their associated properties that comprise an abstract description of an XML document [XML 1.0]. The XML Infoset specification provides for a consistent and rigorous set of definitions for use in other specifications that need to refer to the information in a well-formed XML document. Serialization of the XML Infoset definitions of information may be expressed using XML 1.0 [XML 1.0]. However, this is not an inherent requirement of the architecture. The flexibility in choice of serialization format(s) allows for broader interoperability between agents in the system. In the future, a binary encoding of the XML Infoset may be a suitable replacement for the textual serialization. Such a binary encoding may be more efficient and more suitable for machine-to-machine interactions. SOAP: SOAP 1.2 provides a standard, extensible, composable framework for packaging and exchanging XML messages. In the context of this architecture, SOAP 1.2 also provides a convenient mechanism for referencing capabilities (typically by use of headers). [SOAP 1.2 Part 1] defines an XML-based messaging framework: a processing model and an extensibility model. SOAP messages can be carried by a variety of network protocols, such as HTTP, SMTP, FTP, RMI/IIOP, or a proprietary messaging protocol.


[SOAP 1.2 Part 2] defines three optional components: a set of encoding rules for expressing instances of application-defined data types, a convention for representing remote procedure calls (RPC) and responses, and a set of rules for using SOAP with HTTP/1.1. While SOAP Version 1.2 [SOAP 1.2 Part 1] doesn't define "SOAP" as an acronym anymore, there are two expansions of the term that reflect these different ways in which the technology can be interpreted: 1. Service Oriented Architecture Protocol: In the general case, a SOAP message represents the information needed to invoke a service or reflect the results of a service invocation, and contains the information specified in the service interface definition. 2. Simple Object Access Protocol: When using the optional SOAP RPC Representation, a SOAP message represents a method invocation on a remote object, and the serialization of the argument list of that method that must be moved from the local environment to the remote environment. WSDL: WSDL 2.0 is a language for describing Web services. WSDL describes Web services starting with the messages that are exchanged between the requester and provider agents. The messages themselves are described abstractly and then bound to a concrete network protocol and message format. Web service definitions can be mapped to any implementation language, platform, object model, or messaging system. Simple extensions to existing Internet infrastructure can implement Web services for interaction via browsers or directly within an application. The application could be implemented using COM, JMS, CORBA, COBOL, or any number of proprietary integration solutions. As long as both the sender and receiver agree on the service description (e.g., the WSDL file), the implementations behind the Web services can be anything.


3. SYSTEM ANALYSIS

3.1 INTRODUCTION
System analysis is the first stage according to the System Development Life Cycle model. It is a process that starts with the analyst. Analysis is a detailed study of the various operations performed by a system and their relationships within and outside the system. One aspect of analysis is defining the boundaries of the system and determining whether or not a candidate system should consider other related systems. During analysis, data is collected from the available files, decision points, and transactions handled by the present system. Logical system models and tools are used in analysis. Training, experience, and common sense are required for collection of the information needed to do the analysis.

3.2 PROBLEM DEFINITION


Some schemes in the literature follow the ideal approach of signing and verifying each packet individually, but reduce the computation overhead at the sender by using one-time signatures or k-time signatures. They are suitable for RSA, which is expensive on signing while cheap on verifying. For each packet, however, each receiver needs to perform one more verification on its one-time or k-time signature plus one ordinary signature verification. Moreover, the length of a one-time signature is too long (on the order of 1,000 bytes). Generally, the following issues in the real world challenge the design. First, efficiency needs to be considered, especially for receivers. Compared with the multicast sender, which could be a powerful server, receivers can have different capabilities and resources. The receiver heterogeneity requires that the multicast authentication protocol be able to execute not only on powerful desktop computers but also on resource-constrained mobile handsets. In particular, latency, computation, and communication overhead are major issues to be considered. Second, packet loss is inevitable. In the Internet, congestion at routers is a major reason causing packet loss. An overloaded router drops buffered packets according to its preset control policy. Though TCP provides a certain retransmission capability, multicast content is mainly transmitted
over UDP, which does not provide any loss recovery support. In mobile environments, the situation is even worse. The instability of the wireless channel can cause packet loss very frequently. Moreover, the smaller data rate of the wireless channel increases the congestion possibility. This is not desirable for applications like real-time online streaming or stock quote delivery. End users of online streaming will start to complain if they experience constant service interruptions due to packet loss, and missing critical stock quotes can cause severe capital loss to service subscribers. Therefore, for applications where the quality of service is critical to end users, a multicast authentication protocol should provide a certain level of resilience to packet loss. Specifically, the impact of packet loss on the authenticity of the already-received packets should be as small as possible. Efficiency and packet loss resilience can hardly be supported simultaneously by conventional multicast schemes.

Another problem with existing schemes is that they are vulnerable to packet injection by malicious attackers. An attacker may compromise a multicast system by intentionally injecting forged packets to consume receivers' resources, leading to Denial of Service (DoS). Compared with the efficiency requirement and packet loss problems, the DoS attack is not as common, but it is still important in hostile environments. In the literature, some schemes attempt to provide DoS resilience. However, they still have the packet loss problem because they are based on the same approach as previous schemes.

3.3 EXISTING SYSTEM


Traditionally, multicast authentication schemes manage the different capabilities of the receivers by letting the sender choose the block size, divide a multicast stream into blocks, connect each block with a signature, and spread the effect of the signature across all the packets in the block. The correlation among packets makes them vulnerable to packet loss. Moreover, the lack of Denial of Service (DoS) resilience leaves them open to packet injection. Furthermore, the existing system does not consider the efficiency of the receivers: while the multicast sender could be a powerful server, the receivers have widely different capabilities and resources.


3.4 PROPOSED SYSTEM

Our goal is to eliminate the correlation among packets and provide perfect resilience to packet loss. We develop an efficient system with low latency, computation, and communication overhead by using an efficient cryptographic primitive called batch signature, which supports the authentication of any number of packets simultaneously. We also present an enhanced scheme that adds a packet filtering mechanism to alleviate the DoS impact while preserving the perfect resilience to packet loss. Multicast itself is gaining popularity in applications such as real-time stock quotes, interactive games, video conferencing, live video broadcast, and video on demand.

3.5 FEASIBILITY STUDY


The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential. The three key considerations involved in the feasibility analysis are:

ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY: This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. The developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.


TECHNICAL FEASIBILITY: This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources; otherwise, high demands will be placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.

SOCIAL FEASIBILITY: The aspect of study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.


4. SYSTEM DESIGN
4.1 MODULES
1. DATA ALLOCATION MODULE

In this module, data is sent to multiple users. The sender chooses only the particular users who are to receive the files and maintains the file status. The sender sends a file to multiple users with one signature (private key) attached. The user enters the private key and then receives the file. Each and every file has one signature attached.

2. PACKET ALLOCATION

The sender splits the file into ten segments or packets. Each packet is allocated some bytes; if any bytes are missing, packet loss has occurred. The sender chooses the block size, divides a multicast stream into blocks, associates each block with a signature, and spreads the effect of the signature across all the packets in the block.
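
A hedged C# sketch of this packet allocation step: a file is split into a fixed number of segments (ten here, as above); the file path and segment count are illustrative.

using System;
using System.IO;

class PacketAllocationSketch
{
    // Splits the file's bytes into the requested number of packets.
    static byte[][] Split(string path, int segments)
    {
        byte[] data = File.ReadAllBytes(path);
        int size = (data.Length + segments - 1) / segments;     // bytes per packet (the last one may be shorter)
        byte[][] packets = new byte[segments][];

        for (int i = 0; i < segments; i++)
        {
            int offset = i * size;
            int length = Math.Max(0, Math.Min(size, data.Length - offset));
            packets[i] = new byte[length];
            Array.Copy(data, offset, packets[i], 0, length);
        }
        return packets;
    }

    static void Main()
    {
        byte[][] packets = Split("upload.dat", 10);              // file name is illustrative
        Console.WriteLine("created {0} packets", packets.Length);
    }
}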

3. DATA ORIGIN AUTHENTICATION

Each receiver should be able to assure that each received packet comes from the real sender as it claims. Efficiency needs to be considered, especially for receivers. Each receiver checks the validity of the signature with the sender's private key. The private key is generated using the batch BLS signature algorithm.


4.2 ALGORITHM / TECHNIQUE USED


Batch Signatures:
1. Batch RSA Signature
2. Batch BLS Signature

Batch Description:

1. Batch RSA Signature

RSA is a very popular cryptographic algorithm used in many security protocols. Before batch verification, the receiver must ensure that all the messages are distinct; otherwise, batch RSA is vulnerable to a forgery attack. The RSA algorithm itself is very secure.
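
The following is a hedged, conceptual C# sketch of the batch RSA verification idea (full-domain-hash style, without padding details), not the authors' exact implementation: with signatures s_i = H(m_i)^d mod n, the receiver checks that (the product of all s_i)^e mod n equals the product of all H(m_i) mod n, so one modular exponentiation covers the whole batch. The hash mapping is illustrative, and BigInteger (.NET 4 and later) is used only for clarity.

using System;
using System.Numerics;
using System.Security.Cryptography;

static class BatchRsaSketch
{
    // Maps a message to a non-negative integer modulo n via SHA-256.
    static BigInteger HashToInt(byte[] message, BigInteger n)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] digest = sha.ComputeHash(message);
            byte[] positive = new byte[digest.Length + 1];   // extra zero byte keeps the value non-negative
            Array.Copy(digest, positive, digest.Length);
            return new BigInteger(positive) % n;
        }
    }

    // Verifies a whole batch with a single modular exponentiation.
    public static bool Verify(byte[][] messages, BigInteger[] signatures,
                              BigInteger e, BigInteger n)
    {
        BigInteger sigProduct = BigInteger.One;
        BigInteger hashProduct = BigInteger.One;

        for (int i = 0; i < messages.Length; i++)
        {
            sigProduct = (sigProduct * signatures[i]) % n;
            hashProduct = (hashProduct * HashToInt(messages[i], n)) % n;
        }
        return BigInteger.ModPow(sigProduct, e, n) == hashProduct;
    }
}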

2. Batch BLS Signature

The BLS signature scheme uses a cryptographic primitive called pairing. The BLS signature scheme consists of three phases:
Key generation phase.
Signing phase.
Verification phase.


4.3 ARCHITECTURE DESIGN OF DATA MINING

A literature survey is the most important step in the software development process. Before developing the tool it is necessary to determine the time factor, the economy, and the company's strength. Once these things are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support. This support can be obtained from senior programmers, from books, or from websites. Before building the system, the above considerations are taken into account for developing the proposed system.

4.3.1 DATA MINING

Generally, data mining (sometimes called data or knowledge discovery) is the process of analyzing data from different perspectives and summarizing it into useful information - information that can be used to increase revenue, cut costs, or both. Data mining software is one of a number of analytical tools for analyzing data. It allows users to analyze data from many different dimensions or angles, categorize it, and summarize the relationships identified. Technically, data mining is the process of finding correlations or patterns among dozens of fields in large relational databases.

4.3.2 THE SCOPE OF DATA MINING


Data mining derives its name from the similarities between searching for valuable business information in a large database - for example, finding linked products in gigabytes of store scanner data - and mining a mountain for a vein of valuable ore. Both processes require either sifting through an immense amount of material, or intelligently probing it to find exactly where the value resides. Given databases of sufficient size and quality, data mining technology can generate new business opportunities by providing these capabilities:


Automated prediction of trends and behaviours: Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data quickly. A typical example of a predictive problem is targeted marketing. Data mining uses data on past promotional mailings to identify the targets most likely to maximize return on investment in future mailings. Other predictive problems include forecasting bankruptcy and other forms of default, and identifying segments of a population likely to respond similarly to given events.

Automated discovery of previously unknown patterns. Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of retail sales data to identify seemingly unrelated products that are often purchased together. Other pattern discovery problems include detecting fraudulent credit card transactions and identifying anomalous data that could represent data entry keying errors.

The most commonly used techniques in data mining are:

Artificial neural networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.

Decision trees: Tree-shaped structures that represent sets of decisions. These decisions generate rules for the classification of a dataset. Specific decision tree methods include Classification and Regression Trees (CART) and Chi Square Automatic Interaction Detection (CHAID).

Genetic algorithms: Optimization techniques that use process such as genetic combination, mutation, and natural selection in a design based on the concepts of evolution.

Nearest neighbor method: A technique that classifies each record in a dataset based on a combination of the classes of the k record(s) most similar to it in a historical dataset (where k >= 1). Sometimes called the k-nearest neighbor technique.


Rule induction: The extraction of useful if-then rules from data based on statistical significance.

4.3.3 ARCHITECTURE FOR DATA MINING


To best apply these advanced techniques, they must be fully integrated with a data warehouse as well as flexible interactive business analysis tools. Many data mining tools currently operate outside of the warehouse, requiring extra steps for extracting, importing, and analyzing the data. Furthermore, when new insights require operational implementation, integration with the warehouse simplifies the application of results from data mining. The resulting analytic data warehouse can be applied to improve business processes throughout the organization, in areas such as promotional campaign management, fraud detection, new product rollout, and so on. Figure 4.1 illustrates an architecture for advanced analysis in a large data warehouse.

Figure 4.1 - Integrated Data Mining Architecture

The ideal starting point is a data warehouse containing a combination of internal data tracking all customer contact coupled with external market data about competitor activity. Background information on potential customers also provides an excellent basis for prospecting. This warehouse can be implemented in a variety of relational database systems: Sybase, Oracle, Redbrick, and so on, and should be optimized for flexible and fast data access.


4.3.4 DATA MINING PRODUCTS


Data mining products are taking the industry by storm. The major database vendors have already taken steps to ensure that their platforms incorporate data mining techniques. Oracle's Data Mining Suite (Darwin) implements classification and regression trees, neural networks, k-nearest neighbors, regression analysis, and clustering algorithms. Microsoft's SQL Server also offers data mining functionality through the use of classification trees and clustering algorithms. If you're already working in a statistics environment, you're probably familiar with the data mining algorithm implementations offered by the advanced statistical packages SPSS, SAS, and S-Plus. Comprehensive data warehouses that integrate operational data with customer, supplier, and market information have resulted in an explosion of information. Competition requires timely and sophisticated analysis on an integrated view of the data. However, there is a growing gap between more powerful storage and retrieval systems and the users' ability to effectively analyze and act on the information they contain. Both relational and OLAP technologies have tremendous capabilities for navigating massive data warehouses, but brute force navigation of data is not enough. A new technological leap is needed to structure and prioritize information for specific end-user problems. The data mining tools can make this leap. Quantifiable business benefits have been proven through the integration of data mining with current information systems, and new products are on the horizon that will bring this integration to an even wider audience of users.


4.4 INPUT DESIGN AND OUTPUT DESIGN 4.4.1 INPUT DESIGN


The input design is the link between the information system and the user. It comprises developing specifications and procedures for data preparation, and those steps necessary to put transaction data into a usable form for processing. This can be achieved by instructing the computer to read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps, and keeping the process simple. The input is designed in such a way that it provides security and ease of use while retaining privacy. Input design considered the following things:
What data should be given as input?
How should the data be arranged or coded?
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors occur.

The objectives of input design include:

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the correct direction to the management for getting correct information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry to handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed in such a way that all the data manipulations can be performed. It also provides record viewing facilities.


3. When the data is entered, it will be checked for its validity. Data can be entered with the help of screens. Appropriate messages are provided as and when needed, so that the user is not left in a maze at any instant. Thus the objective of input design is to create an input layout that is easy to follow.

4.4.2 OUTPUT DESIGN


A quality output is one which meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design it is determined how the information is to be displayed for immediate need, and also the hard copy output. It is the most important and direct source of information for the user. Efficient and intelligent output design improves the system's relationship with the user and helps user decision-making.
1. Designing computer output should proceed in an organized, well thought out manner; the right output must be developed while ensuring that each output element is designed so that people will find the system easy and effective to use. When analysts design computer output, they should identify the specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives:
Convey information about past activities, current status, or projections of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.


4.5 DATA FLOW DIAGRAM AND UML DIAGRAMS

The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on these data, and the output data generated by the system.

DATA FLOW DIAGRAM

[Diagram: Admin and User log in, or create an account if one does not exist. The Admin uploads a file to multiple users, checks file sending, and sends the file together with a private key; the User views received file details, enters the private key, and downloads the file if the key matches.]

Fig 4.2 Dataflow Diagram for User and Admin


USE CASE DIAGRAM

[Diagram: the User and Admin actors share the use cases Create an Account, Login, Upload File to Multiple Users, Generate Private Key, Search Admin/User, Download File with Private Key, and File Received/Not Received.]

Fig 4.3 Use case Diagram for User and Admin


CLASS DIAGRAM

[Diagram: four classes. Login Account (Name, ID, Password, Mobile, Admin/User; CreateAccount(), GenerateKey()); Upload Files (FileID, FileName, UserID, FileType, FilePath, FileStatus; SentToUser(), ViewFileDetails()); Multiple Users (ID, FileNames, Users; MultipleSend(), Failure()); File Received (ID, FileNames, PrivateKey; FileNames(), PrivateKey()).]

Fig 4.4 Class Diagram for User and Admin


SEQUENCE DIAGRAM

[Diagram: the User and Admin each create an account; the Admin uploads files, which are stored in the database; the Admin sends a file to multiple users, who receive it with a private key; if the private key matches, the user downloads the file and an acknowledgement (or alert message) is returned.]

Fig 4.5 Sequence diagram for User and Admin


ACTIVITY DIAGRAM

[Diagram: after login (or account creation if the account does not exist), the Admin uploads a file to multiple users, checks file sending, and generates a private key; the User maintains received files and downloads a file only if the private key exists, otherwise the download fails.]

Fig 4.6 Activity Diagram for User and Admin


5. IMPLEMENTATION
Install Microsoft Visual Studio 2008 and Microsoft SQL Server Management Studio 2005.
Run Microsoft SQL Server Management Studio 2005 and Visual Studio 2008.
Create a new database for the project and add it to the SQL Server.
Create tables for register, multiple users, sent files, and successful receives.
Go to Microsoft Visual Studio 2008 and open the project by selecting the website option and choosing the path where the project file is saved.
When the project gets loaded, go to Solution Explorer and open the web.config file.
Set the connection string under the appSettings tag according to the user id and password given while installing SQL Server.
Start debugging, i.e., the homepage for the project opens.
Create an account according to the requirement, whether it is for a User or an Admin.
Register the User or Admin, where the User is the receiver and the Admin is the sender; therefore there can be multiple users but only a single admin.
When a file is sent, it gets divided into segments. The project has a "files sent" option under the Admin and a "files received" option under the User; if the file does not reach the user, it goes into "files not received" or "files not sent" under the User and Admin homepages respectively.
After a file is successfully received, the user can download the file.
As the client and server are on the same system we cannot observe any packet loss, so we purposely need to select the packet loss option in order to see packet loss while the sender sends packets to the receiver.
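
A hedged C# sketch of reading that connection string from web.config through ConfigurationManager; the appSettings key name and connection string values are illustrative.

using System.Configuration;
using System.Data.SqlClient;

public class DatabaseHelper
{
    // web.config (illustrative):
    //   <appSettings>
    //     <add key="ConnectionString"
    //          value="Data Source=.;Initial Catalog=MABS;User ID=sa;Password=your_password"/>
    //   </appSettings>
    public static SqlConnection OpenConnection()
    {
        string connectionString = ConfigurationManager.AppSettings["ConnectionString"];
        SqlConnection connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}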


6. SYSTEM TESTING
Software testing is the process of executing a program or system with the intent of finding errors; more broadly, it is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software is not unlike other physical processes where inputs are received and outputs are produced. Where software differs is in the manner in which it fails. Most physical systems fail in a fixed (and reasonably small) set of ways; by contrast, software can fail in many bizarre ways, and detecting all of the different failure modes of software is generally infeasible. Unlike most physical systems, most of the defects in software are design errors, not manufacturing defects. Software does not suffer from corrosion or wear and tear; generally it will not change until it is upgraded or becomes obsolete. So once the software is shipped, the design defects, or bugs, remain buried and latent until they are activated.

Software bugs will almost always exist in any software module of moderate size: not because programmers are careless or irresponsible, but because the complexity of software is generally intractable -- and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

Discovering the design defects in software is equally difficult, for the same reason of complexity. Because software and digital systems in general are not continuous, testing boundary values alone is not sufficient to guarantee correctness: all the possible values would need to be tested and verified, but such complete testing is infeasible. Exhaustively testing even a simple program that adds two 32-bit integer inputs (yielding 2^64 distinct test cases) would take tens of millions of years, even if tests were performed at a rate of thousands per second. Obviously, for a realistic software module, the complexity can be far beyond this example. If inputs from the real world are involved, the problem gets worse, because timing, unpredictable environmental effects and human interactions are all possible input parameters under consideration.
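A rough back-of-envelope calculation (a sketch only; the rate of ten thousand tests per second is an assumption) illustrates the scale:

    using System;

    class ExhaustiveTestingEstimate
    {
        static void Main()
        {
            double testCases = Math.Pow(2, 64);         // two independent 32-bit inputs
            double testsPerSecond = 10000;              // assumed testing rate
            double secondsPerYear = 365.25 * 24 * 3600;
            double years = testCases / (testsPerSecond * secondsPerYear);
            Console.WriteLine("Years required: {0:E2}", years);  // roughly 6 x 10^7 years
        }
    }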

A further complication has to do with the dynamic nature of programs. If a failure occurs during preliminary testing and the code is changed, the software may now work for a test case that it did not work for previously, but its behavior on pre-error test cases that it passed before can no longer be guaranteed. To account for this possibility, testing should be restarted; the expense of doing so is often prohibitive. Regardless of these limitations, testing is an integral part of software development and is broadly deployed in every phase of the software development cycle. Typically, more than 50 percent of the development time is spent in testing. Testing is usually performed for the following purposes: to discover errors, by trying to find every conceivable fault or weakness in a work product; to check the functionality of components, sub-assemblies, assemblies and/or a finished product; and to exercise the software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test, and each test type addresses a specific testing requirement.

6.1 UNIT TESTING


Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application and is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
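For illustration, a minimal unit test sketch in the MSTest style is shown below; the PrivateKeyValidator class, its Matches method, and the key values are hypothetical and are not taken from the project code.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // Hypothetical class under test: checks whether the key supplied by a
    // user matches the key generated for a file.
    public class PrivateKeyValidator
    {
        public bool Matches(string generatedKey, string suppliedKey)
        {
            return !string.IsNullOrEmpty(generatedKey) && generatedKey == suppliedKey;
        }
    }

    [TestClass]
    public class PrivateKeyValidatorTests
    {
        [TestMethod]
        public void Matches_ReturnsTrue_ForIdenticalKeys()
        {
            var validator = new PrivateKeyValidator();
            Assert.IsTrue(validator.Matches("ABC123", "ABC123"));
        }

        [TestMethod]
        public void Matches_ReturnsFalse_ForDifferentOrEmptyKeys()
        {
            var validator = new PrivateKeyValidator();
            Assert.IsFalse(validator.Matches("ABC123", "XYZ789"));
            Assert.IsFalse(validator.Matches("", "ABC123"));
        }
    }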

6.2 INTEGRATION TESTING


Integration tests are designed to test integrated software components to determine whether they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

6.3 FUNCTIONAL TEST


Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid Input: Identified classes of valid input must be accepted.
Invalid Input: Identified classes of invalid input must be rejected.
Functions: Identified functions must be exercised.
Output: Identified classes of application outputs must be exercised.
Systems/Procedures: Interfacing systems or procedures must be invoked.

Organization and preparation of functional tests are focused on requirements, key functions, or special test cases. In addition, systematic coverage of business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of the current tests is determined.

6.4 SYSTEM TEST


System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.


6.5 WHITE-BOX TESTING


White-box testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose, and it is used to test areas that cannot be reached from a black-box level. Contrary to black-box testing, the software is viewed as a white box, or glass box, because the structure and flow of the software under test are visible to the tester. Testing plans are made according to the details of the software implementation, such as programming language, logic, and style, and test cases are derived from the program structure. White-box testing is also called glass-box testing, logic-driven testing or design-based testing.

Many techniques are available in white-box testing, because the problem of intractability is eased by specific knowledge of and attention to the structure of the software under test. The intention of exhausting some aspect of the software is still strong in white-box testing, and some degree of exhaustion can be achieved, such as executing each line of code at least once (statement coverage), traversing every branch statement (branch coverage), or covering all the possible combinations of true and false condition predicates (multiple condition coverage). Control-flow testing, loop testing, and data-flow testing all map the corresponding flow structure of the software into a directed graph, and test cases are selected so that all the nodes or paths are covered or traversed at least once. By doing so we may discover unnecessary "dead" code -- code that is of no use or never gets executed at all -- which cannot be discovered by functional testing. In mutation testing, the original program code is perturbed and many mutated programs are created, each containing one fault; each faulty version of the program is called a mutant. Test data are selected based on their effectiveness in failing the mutants: the more mutants a test case can kill, the better the test case is considered. The problem with mutation testing is that it is too computationally expensive to use in practice.

The boundary between the black-box approach and the white-box approach is not clear-cut. Many of the testing strategies mentioned above may not be safely classified as black-box testing or white-box testing, and the same is true for transaction-flow testing, syntax testing, finite-state testing, and many other testing strategies not discussed in this text. One reason is that all the above techniques need some knowledge of the specification of the software under test. Another reason is that the idea of a specification itself is broad -- it may contain any requirement, including the structure, programming language, and programming style, as part of the specification content.

We may be reluctant to consider random testing a testing technique, since test case selection is simple and straightforward: the cases are randomly chosen. Studies indicate, however, that random testing is more cost-effective for many programs: some very subtle errors can be discovered at low cost, and it is not inferior in coverage to other carefully designed testing techniques. One can also obtain a reliability estimate using random testing results based on operational profiles. Effectively combining random testing with other testing techniques may yield more powerful and cost-effective testing strategies.
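As an illustrative sketch of branch coverage (the PacketChecker class and its values are made up for the example and are not part of the project), the two test cases below together exercise both outcomes of the single branch in a small helper method:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public static class PacketChecker
    {
        // Returns true only when every expected segment of a file has arrived.
        public static bool AllSegmentsReceived(int expected, int received)
        {
            if (received >= expected)   // the single branch under test
            {
                return true;
            }
            return false;
        }
    }

    [TestClass]
    public class PacketCheckerCoverageTests
    {
        // This test executes the condition and the "return true" statement,
        // but it never exercises the false outcome of the branch.
        [TestMethod]
        public void TrueOutcome_AllSegmentsArrived()
        {
            Assert.IsTrue(PacketChecker.AllSegmentsReceived(10, 10));
        }

        // Adding this test exercises the other outcome, so the two tests
        // together achieve full statement and branch coverage of the method.
        [TestMethod]
        public void FalseOutcome_SegmentsMissing()
        {
            Assert.IsFalse(PacketChecker.AllSegmentsReceived(10, 7));
        }
    }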

6.6 BLACK BOX TESTING


Black-box testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black-box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot see into it, and the test provides inputs and responds to outputs without considering how the software works. The black-box approach is a testing method in which test data are derived from the specified functional requirements without regard to the final program structure. It is also termed data-driven, input/output-driven, or requirements-based testing. Because only the functionality of the software module is of concern, black-box testing also mainly refers to functional testing -- a testing method that emphasizes executing the functions and examining their input and output data. The tester treats the software under test as a black box -- only the inputs, outputs and specification are visible -- and the functionality is determined by observing the outputs for corresponding inputs. In testing, various inputs are exercised and the outputs are compared against the specification to validate correctness. All test cases are derived from the specification; no implementation details of the code are considered.


It is obvious that the more we have covered in the input space, the more problems we will find and therefore the more confident we will be about the quality of the software. Ideally we would be tempted to exhaustively test the input space, but as stated above, exhaustively testing the combinations of valid inputs is impossible for most programs, let alone considering invalid inputs, timing, sequence, and resource variables. Combinatorial explosion is the major roadblock in functional testing. To make things worse, we can never be sure whether the specification is either correct or complete. Due to limitations of the language used in the specifications (usually natural language), ambiguity is often inevitable, and even if we use some type of formal or restricted language, we may still fail to write down all the possible cases in the specification. Sometimes the specification itself becomes an intractable problem: it is not possible to specify precisely every situation that can be encountered using limited words. People can also seldom specify clearly what they want; they usually can only tell whether a prototype is, or is not, what they want after it has been finished. Specification problems contribute approximately 30 percent of all bugs in software.

The research in black-box testing mainly focuses on how to maximize the effectiveness of testing with minimum cost, usually measured by the number of test cases. It is not possible to exhaust the input space, but it is possible to exhaustively test a subset of the input space. Partitioning is one of the common techniques: if we have partitioned the input space and assume all the input values in a partition are equivalent, then we only need to test one representative value in each partition to sufficiently cover the whole input space. Domain testing partitions the input domain into regions and considers the input values in each domain an equivalence class. Domains can be exhaustively tested and covered by selecting one or more representative values in each domain. Boundary values are of special interest: experience shows that test cases that explore boundary conditions have a higher payoff than test cases that do not, and boundary value analysis requires one or more boundary values to be selected as representative test cases. The difficulty with domain testing is that incorrect domain definitions in the specification cannot be efficiently discovered, and good partitioning requires knowledge of the software structure. A good testing plan will contain not only black-box testing but also white-box approaches, and combinations of the two.
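As a sketch only (the SegmentCountValidator class and its valid range of 1 to 64 are assumptions invented for the example), equivalence partitioning and boundary value analysis for such an input might select the following test values:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public static class SegmentCountValidator
    {
        // Hypothetical rule: a batch may contain between 1 and 64 segments.
        public static bool IsValid(int segmentCount)
        {
            return segmentCount >= 1 && segmentCount <= 64;
        }
    }

    [TestClass]
    public class SegmentCountBoundaryTests
    {
        [TestMethod]
        public void RepresentativeAndBoundaryValues()
        {
            // One representative value from each equivalence class.
            Assert.IsFalse(SegmentCountValidator.IsValid(-5));   // invalid: below the range
            Assert.IsTrue(SegmentCountValidator.IsValid(30));    // valid: inside the range
            Assert.IsFalse(SegmentCountValidator.IsValid(200));  // invalid: above the range

            // Boundary values on either side of each edge of the valid range.
            Assert.IsFalse(SegmentCountValidator.IsValid(0));
            Assert.IsTrue(SegmentCountValidator.IsValid(1));
            Assert.IsTrue(SegmentCountValidator.IsValid(64));
            Assert.IsFalse(SegmentCountValidator.IsValid(65));
        }
    }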


6.7 TEST STRATEGY AND APPROACH


Field testing will be performed manually and functional tests will be written in detail.

Test objectives:
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.

Features to be tested:
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.

6.8 INCREMENTAL TESTING


Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects. The task of the integration test is to check that components or software applications, e.g., components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.


6.9 ACCEPTANCE TESTING


User Acceptance Testing is a critical phase of any project and requires significant participation by the end user. It also ensures that the system meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects encountered.


7. SAMPLE SCREENS

Fig 7.1 Login Screen


Fig 7.2 Admin Homepage


Fig 7.3 Admin Packet Sending


Fig 7.4 User Inbox


Fig 7.5 User Private Key Generation


Fig 7.6 File Received at User


8. CONCLUSION AND FUTURE ENHANCEMENTS


To reduce the signature verification overhead in secure multimedia multicasting, block-based authentication schemes have been proposed. Unfortunately, most previous schemes suffer from problems such as vulnerability to packet loss and lack of resilience to denial of service (DoS) attacks. To overcome these problems, we develop a novel authentication scheme, MABS. We have demonstrated that MABS is perfectly resilient to packet loss due to the elimination of the correlation among packets and can effectively deal with DoS attacks. Moreover, we show that the use of batch signatures can achieve latency, computation, and communication overhead less than or comparable to that of the conventional schemes. Finally, we further develop two new batch signature schemes based on BLS and DSA, which are more efficient than the batch RSA signature scheme.
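To illustrate the batch-verification idea (a sketch only, using toy RSA numbers and .NET's System.Numerics.BigInteger rather than the project's cryptographic code, and omitting the safeguards such as randomized checks that a real implementation would require), a receiver holding n signatures can check them with a single exponentiation instead of n separate verifications:

    using System;
    using System.Linq;
    using System.Numerics;

    class BatchRsaVerificationSketch
    {
        // Batch verification: accept all n signatures at once if
        // (sigma_1 * ... * sigma_n)^e == h_1 * ... * h_n (mod N).
        static bool BatchVerify(BigInteger[] signatures, BigInteger[] digests,
                                BigInteger e, BigInteger N)
        {
            BigInteger sigProduct = signatures.Aggregate(BigInteger.One, (acc, s) => acc * s % N);
            BigInteger digestProduct = digests.Aggregate(BigInteger.One, (acc, h) => acc * h % N);
            return BigInteger.ModPow(sigProduct, e, N) == digestProduct;
        }

        static void Main()
        {
            // Toy RSA parameters for illustration only; real keys are 1024 bits or more.
            BigInteger N = 3233, e = 17, d = 2753;        // N = 61 * 53
            BigInteger[] digests = { 123, 456, 789 };     // stand-ins for packet digests

            // The sender signs each digest individually: sigma_i = h_i^d mod N.
            BigInteger[] signatures = digests.Select(h => BigInteger.ModPow(h, d, N)).ToArray();

            // The receiver authenticates the whole batch with one exponentiation.
            Console.WriteLine(BatchVerify(signatures, digests, e, N));   // prints True
        }
    }

A single modular exponentiation thus authenticates every digest in the batch, which is the saving that batch signature brings to each set of received packets.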

A future enhancement of the project is to increase the batch size: when congestion at the receiver is high, instead of sending each packet individually, the whole batch can be sent, thereby eliminating the packet delay.


9. BIBLIOGRAPHY

References Made From


[1] S.E. Deering, Multicast Routing in Internetworks and Extended LANs, Proc. ACM SIGCOMM Symp. Comm. Architectures and Protocols, pp. 55-64, Aug. 1988.

[2] T. Ballardie and J. Crowcroft, Multicast-Specific Security Threats and Counter-Measures, Proc. Second Ann. Network and Distributed System Security Symp. (NDSS 95), pp. 2-16, Feb. 1995.

[3] P. Judge and M. Ammar, Security Issues and Solutions in Multicast Content Distribution: A Survey, IEEE Network Magazine, vol. 17, no. 1, pp. 30-36, Jan./Feb. 2003.

[4] Y. Challal, H. Bettahar, and A. Bouabdallah, Taxonomy of Multicast Data Origin Authentication: Issues and Solutions, IEEE Comm. Surveys & Tutorials, vol. 6, no. 3, pp. 34-57, Oct. 2004.

[5] Y. Zhou and Y. Fang, BABRA: Batch-Based Broadcast Authentication in Wireless Sensor Networks, Proc. IEEE GLOBECOM, Nov. 2006.


SITES REFERRED
http://www.asp.net.com
http://www.dotnetspider.com/
http://www.dotnetspark.com
http://www.almaden.ibm.com/software/quest/Resources/
http://www.computer.org/publications/dlib
http://www.developerfusion.com/
