

The growth of Internet technologies is revolutionizing the way organizations do business with their partners and customers. Companies are focusing on the web to achieve more automation, efficient business processes, and global reach. In order to compete, companies should implement the right software and follow recent trends in technology. The latest development in using the web for conducting business has resulted in a new paradigm called web services. Web services are software components based on loosely coupled, distributed, and independent services operating via the web infrastructure. They are platform and language independent, which makes them suitable for access from heterogeneous environments. With the rapid introduction of web-services technologies, researchers have focused mainly on the functional and interfacing aspects of web services, which include HTTP and XML-based messaging. Web services communicate using open standards such as HTTP and XML-based protocols including SOAP, WSDL, and UDDI. WSDL is a document that describes the service's location on the web and the functionality the service provides. Information related to the web service is entered in a UDDI registry, which permits web service consumers to discover and locate the services they require. Using the information available in the UDDI registry, the client developer follows the instructions in the WSDL to construct SOAP messages for exchanging data with the service over HTTP. Therefore, we attempt to approximate this nonlinear relationship between QoS attributes and service quality with the help of several intelligent techniques. The most significant application of the developed models is that we can confidently predict the quality of a new web service (which is not in the training set) given its QoS attributes. In this context, we observed that there is no work reported addressing these aspects. Consider an application offered by two different web services.
Now, a user will choose the web service with the higher ranking as measured by its QoS attributes, which are essentially non-functional in nature. In this context, if one develops a classification model based on intelligent techniques to classify a given new web service, then the user can use this ranking to select a web service.

Web Services

A web service is any piece of software that makes itself available over the internet and uses a standardized XML messaging system. XML is used to encode all communications to a web service. For example, a client invokes a web service by sending an XML message, then waits for a corresponding XML response. Because all communication is in XML, web services are not tied to any one operating system or programming language: Java can talk with Perl; Windows applications can talk with Unix applications.

Web Services are self-contained, modular, distributed, dynamic applications that can be described, published, located, or invoked over the network to create products, processes, and supply chains. These applications can be local, distributed, or Web-based. Web services are built on top of open standards such as TCP/IP, HTTP, Java, HTML, and XML.

Web services are XML-based information exchange systems that use the Internet for direct application-to-application interaction. These systems can include programs, objects, messages, or documents.

A web service is a collection of open protocols and standards used for exchanging data between applications or systems. Software applications written in various programming languages and running on various platforms can use web services to exchange data over computer networks like the Internet in a manner similar to inter-process communication on a single computer. This interoperability (e.g., between Java and Python, or Windows and Linux applications) is due to the use of open standards.

To summarize, a complete web service is, therefore, any service that:

Is available over the Internet or private (intranet) networks
Uses a standardized XML messaging system
Is not tied to any one operating system or programming language
Is self-describing via a common XML grammar
Is discoverable via a simple find mechanism

Components of Web Services

The basic web services platform is XML + HTTP. All standard web services work using the following components:

SOAP (Simple Object Access Protocol)
UDDI (Universal Description, Discovery and Integration)
WSDL (Web Services Description Language)

You can build a Java-based Web Service on Solaris that is accessible from your Visual Basic program that runs on Windows. You can also use C# to build new Web Services on Windows that can be invoked from your Web application that is based on Java Server Pages (JSP) and runs on Linux.

Example: Consider a simple account-management and order-processing system. The accounting personnel use a client application built with Visual Basic or JSP to create new accounts and enter new customer orders. The processing logic for this system is written in Java and resides on a Solaris machine, which also interacts with a database to store the information. The steps in this process are as follows:

1. The client program bundles the account registration information into a SOAP message.
2. This SOAP message is sent to the Web Service as the body of an HTTP POST request.
3. The Web Service unpacks the SOAP request and converts it into a command that the application can understand. The application processes the information as required and responds with a new unique account number for that customer.
4. The Web Service packages the response into another SOAP message, which it sends back to the client program in response to its HTTP request.
5. The client program unpacks the SOAP message to obtain the results of the account registration process.
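The five steps above can be sketched end to end in a few lines. The sketch below is illustrative only: the namespace, the createAccount operation, and the account-number logic are hypothetical stand-ins, and the HTTP POST leg is omitted so the round trip runs in one process.

```python
# Sketch of the SOAP round trip described above, using only Python's
# standard library. "createAccount" and the namespaces are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "urn:example:accounts"  # hypothetical application namespace

def build_request(customer_name: str) -> str:
    """Step 1: bundle registration data into a SOAP envelope."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f'<soap:Body><app:createAccount xmlns:app="{APP_NS}">'
        f'<app:name>{customer_name}</app:name>'
        f'</app:createAccount></soap:Body></soap:Envelope>'
    )

def handle_request(envelope: str) -> str:
    """Steps 3-4: unpack the request and package a SOAP response."""
    root = ET.fromstring(envelope)
    name = root.find(f'.//{{{APP_NS}}}name').text
    account_no = f"ACCT-{abs(hash(name)) % 10000:04d}"  # stand-in for real logic
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f'<soap:Body><app:createAccountResponse xmlns:app="{APP_NS}">'
        f'<app:accountNumber>{account_no}</app:accountNumber>'
        f'</app:createAccountResponse></soap:Body></soap:Envelope>'
    )

def unpack_response(envelope: str) -> str:
    """Step 5: the client extracts the result from the SOAP response."""
    root = ET.fromstring(envelope)
    return root.find(f'.//{{{APP_NS}}}accountNumber').text

account = unpack_response(handle_request(build_request("Acme Corp")))
print(account)  # a new unique account number, such as ACCT-0421
```

In a real deployment, the string returned by build_request would travel as the body of an HTTP POST, and handle_request would run inside the service.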

Benefits of using Web Services

Exposing existing functionality on the network: A web service is a unit of managed code that can be remotely invoked using HTTP; that is, it can be activated using HTTP requests. Web services therefore allow you to expose the functionality of your existing code over the network. Once it is exposed on the network, other applications can use the functionality of your program.

Connecting different applications, i.e., interoperability: Web services allow different applications to talk to each other and share data and services among themselves. For example, a VB or .NET application can talk to Java web services and vice versa. Web services thus make applications platform- and technology-independent.

Standardized protocol: Web services use standardized industry protocols for communication. All four layers (Service Transport, XML Messaging, Service Description, and Service Discovery) use well-defined protocols in the web services protocol stack. This standardization of the protocol stack gives businesses many advantages, such as a wide range of choices, reduced costs due to competition, and increased quality.

Low cost of communication: Web services use SOAP over HTTP, so you can use your existing low-cost Internet connection to implement web services. This solution is much less costly than proprietary solutions such as EDI/B2B. Besides SOAP over HTTP, web services can also be implemented over other reliable transport mechanisms, such as FTP.

Web Services Behavioral Characteristics

Web services have special behavioral characteristics:

XML-based: Web services use XML at the data representation and data transport layers. Using XML eliminates any networking, operating system, or platform binding. Web-services-based applications are therefore highly interoperable at their core level.

Loosely coupled A consumer of a web service is not tied to that web service directly. The web service interface can change over time without compromising the client's ability to interact with the service. A tightly coupled system implies that the client and server logic are closely tied to one another, implying that if one interface changes, the other must also be updated. Adopting a loosely coupled architecture tends to make software systems more manageable and allows simpler integration between different systems.

Coarse-grained Object-oriented technologies such as Java expose their services through individual methods. An individual method is too fine an operation to provide any useful capability at a corporate level. Building a Java program from scratch requires the creation of several fine-grained methods that are then composed into a coarse-grained service that is consumed by either a client or another service. Businesses and the interfaces that they expose should be coarse-grained. Web services technology provides a natural way of defining coarse-grained services that access the right amount of business logic.

Ability to be synchronous or asynchronous Synchronicity refers to the binding of the client to the execution of the service. In synchronous invocations, the client blocks and waits for the service to complete its operation before continuing. Asynchronous operations allow a client to invoke a service and then execute other functions. Asynchronous clients retrieve their result at a later point in time, while synchronous clients receive their result when the service has completed. Asynchronous capability is a key factor in enabling loosely coupled systems.
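This distinction can be illustrated with a minimal sketch; slow_service below is a hypothetical stand-in for a remote web service call.

```python
# Synchronous vs. asynchronous invocation, using only the standard library.
import time
from concurrent.futures import ThreadPoolExecutor

def slow_service(x: int) -> int:
    """Hypothetical stand-in for a remote service call."""
    time.sleep(0.1)  # simulate network latency
    return x * 2

# Synchronous: the client blocks and waits for the service to complete.
result_sync = slow_service(21)

# Asynchronous: the client invokes the service, executes other functions,
# and retrieves the result at a later point in time.
with ThreadPoolExecutor() as pool:
    future = pool.submit(slow_service, 21)
    other_work = sum(range(1000))   # the client is not blocked here
    result_async = future.result()  # retrieve the result when needed

print(result_sync, result_async)  # 42 42
```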

Supports Remote Procedure Calls (RPCs) Web services allow clients to invoke procedures, functions, and methods on remote objects using an XML-based protocol. Remote procedures expose input and output parameters that a web service must support. Component development through Enterprise JavaBeans (EJBs) and .NET Components has increasingly become a part of architectures and enterprise deployments over the past couple of years. Both technologies are distributed and accessible through a variety of RPC mechanisms. A web service supports RPC by providing services of its own, equivalent to those of a traditional component, or by translating incoming invocations into an invocation of an EJB or a .NET component.

Supports document exchange One of the key advantages of XML is its generic way of representing not only data, but also complex documents. These documents can be simple, such as when representing a current address, or they can be complex, representing an entire book or RFQ. Web services support the transparent exchange of documents to facilitate business integration.

Web Services Architecture

There are two ways to view the web service architecture.

The first is to examine the individual roles of each web service actor. The second is to examine the emerging web service protocol stack.

1. Web Service Roles

There are three major roles within the web service architecture:

Service provider: This is the provider of the web service. The service provider implements the service and makes it available on the Internet.

Service requestor: This is any consumer of the web service. The requestor utilizes an existing web service by opening a network connection and sending an XML request.

Service registry: This is a logically centralized directory of services. The registry provides a central place where developers can publish new services or find existing ones. It therefore serves as a centralized clearinghouse for companies and their services.

2. Web Service Protocol Stack

A second option for viewing the web service architecture is to examine the emerging web service protocol stack. The stack is still evolving, but currently has four main layers.

Service transport This layer is responsible for transporting messages between applications. Currently, this layer includes the Hypertext Transfer Protocol (HTTP), the Simple Mail Transfer Protocol (SMTP), the File Transfer Protocol (FTP), and newer protocols such as the Blocks Extensible Exchange Protocol (BEEP).

XML messaging This layer is responsible for encoding messages in a common XML format so that messages can be understood at either end. Currently, this layer includes XML-RPC and SOAP.

Service description This layer is responsible for describing the public interface to a specific web service. Currently, service description is handled via the Web Service Description Language (WSDL).

Service discovery This layer is responsible for centralizing services into a common registry, and providing easy publish/find functionality. Currently, service discovery is handled via Universal Description, Discovery, and Integration (UDDI). As web services evolve, additional layers may be added, and additional technologies may be added to each layer.

The next chapter explains the various components of Web Services.

Service Transport

The bottom of the web service protocol stack is service transport. This layer is responsible for actually transporting XML messages between two computers.

Hypertext Transfer Protocol (HTTP)

Currently, HTTP is the most popular option for service transport. HTTP is simple, stable, and widely deployed. Furthermore, most firewalls allow HTTP traffic. This allows XML-RPC or SOAP messages to masquerade as HTTP messages. This is good if you want to easily integrate remote applications, but it does raise a number of security concerns.

Blocks Extensible Exchange Protocol (BEEP)

One promising alternative to HTTP is the Blocks Extensible Exchange Protocol (BEEP). BEEP is a new IETF framework of best practices for building new protocols. BEEP is layered directly on TCP and includes a number of built-in features, including an initial handshake protocol, authentication, security, and error handling. Using BEEP, one can create new protocols for a variety of applications, including instant messaging, file transfer, content syndication, and network management.

SOAP is not tied to any specific transport protocol. In fact, you can use SOAP via HTTP, SMTP, or FTP. One promising idea is therefore to use SOAP over BEEP.

Web Services Components

Over the past few years, a few primary technologies have emerged as worldwide standards that make up the core of today's web services technology: XML-RPC, SOAP, WSDL, and UDDI.

XML-RPC

XML-RPC is the simplest XML-based protocol for exchanging information between computers.

XML-RPC is a simple protocol that uses XML messages to perform RPCs. Requests are encoded in XML and sent via HTTP POST. XML responses are embedded in the body of the HTTP response. XML-RPC is platform-independent. XML-RPC allows diverse applications to communicate. A Java client can speak XML-RPC to a Perl server. XML-RPC is the easiest way to get started with web services.
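As a concrete illustration, Python's standard library ships with both an XML-RPC server and client, so a complete round trip fits in a few lines. Running the server and client in one process here is purely for demonstration; normally the server would be a separate application, possibly written in another language.

```python
# A complete XML-RPC round trip using Python's standard library.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an ephemeral port and expose a single "add" procedure.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# The client encodes the call as XML and sends it via HTTP POST; the XML
# response comes back embedded in the body of the HTTP response.
client = ServerProxy(f"http://127.0.0.1:{port}")
result = client.add(2, 3)
print(result)  # 5

server.shutdown()
```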

SOAP

SOAP is an XML-based protocol for exchanging information between computers.

SOAP is a communication protocol
SOAP is for communication between applications
SOAP is a format for sending messages
SOAP is designed to communicate via the Internet
SOAP is platform independent
SOAP is language independent
SOAP is simple and extensible
SOAP allows you to get around firewalls
SOAP is a W3C standard

WSDL

WSDL is an XML-based language for describing Web services and how to access them.

WSDL stands for Web Services Description Language
WSDL is an XML-based format for describing services in decentralized and distributed environments
WSDL is the standard format for describing a web service
A WSDL definition describes how to access a web service and what operations it will perform
WSDL is a language for describing how to interface with XML-based services
WSDL is used with UDDI, an XML-based worldwide business registry; WSDL is the language that UDDI uses
WSDL was developed jointly by Microsoft and IBM
WSDL is pronounced 'wiz-dull' and spelled out as 'W-S-D-L'

UDDI

UDDI is an XML-based standard for describing, publishing, and finding Web services.

UDDI stands for Universal Description, Discovery and Integration
UDDI is a specification for a distributed registry of web services
UDDI is a platform-independent, open framework
UDDI can communicate via SOAP, CORBA, or Java RMI
UDDI uses WSDL to describe interfaces to web services
UDDI is seen, with SOAP and WSDL, as one of the three foundation standards of web services

UDDI is an open industry initiative enabling businesses to discover each other and define how they interact over the Internet.

Web Services - Examples

Based on the web service architecture, we will create the following two components as part of our Web Services implementation:

Service provider or publisher: This is the provider of the web service. The service provider implements the service and makes it available on the Internet or intranet. We will write and publish a simple web Service using .NET SDK.

Service requestor or consumer: This is any consumer of the web service. The requestor utilizes an existing web service by opening a network connection and sending an XML request. We will also write two Web Service requestors: one Web-based consumer (ASP.NET application) and another Windows application-based consumer.

Following is our first Web Service example, which works as a service provider and exposes two methods (Add and SayHello) as Web Services to be used by applications. This is a standard template for a .NET Web Service: .NET Web Services use the .asmx extension, and a method exposed as a Web Service carries the WebMethod attribute. Save this file as FirstService.asmx in the IIS virtual directory (as explained in configuring IIS; for example, c:\MyWebServices).

To test a Web Service, it must be published. A Web Service can be published either on an intranet or the Internet. We will publish this Web Service on IIS running on a local machine. Let's start with configuring the IIS.

Open Start -> Settings -> Control Panel -> Administrative Tools -> Internet Services Manager. Expand and right-click on [Default Web Site]; select New -> Virtual Directory. The Virtual Directory Creation Wizard opens. Click Next. The "Virtual Directory Alias" screen opens. Type the virtual directory name (for example, MyWebServices) and click Next.

The "Web Site Content Directory" screen opens. Here, enter the directory path name for the virtual directory (for example, c:\MyWebServices) and click Next.

The "Access Permission" screen opens. Change the settings as per your requirements. Let's keep the default settings for this exercise. Click Next, and then click Finish. This completes the IIS configuration.

To test that IIS has been configured properly, copy an HTML file (for example, x.html) into the virtual directory (C:\MyWebServices) created above. Now, open Internet Explorer and type http://localhost/MyWebServices/x.html. It should open the x.html file. If it does not work, try replacing localhost with the IP address of your machine. If it still does not work, check whether IIS is running; you may need to reconfigure IIS and the virtual directory. To test our Web Service, copy FirstService.asmx into the IIS virtual directory created above (C:\MyWebServices). Open the Web Service in Internet Explorer (http://localhost/MyWebServices/FirstService.asmx). It should open your Web Service page. The page should have links to the two methods exposed as Web Services by our application. Congratulations; you have written your first Web Service!

Testing the Web Service

As we have just seen, writing Web Services is easy in the .NET Framework. Writing Web Service consumers is also easy, though a bit more involved. As said earlier, we will write two types of service consumers, one Web-based and one Windows application-based. Let's write our first Web Service consumer.

Web-Based Service Consumer

Write a Web-based consumer as given below. Call it WebApp.aspx. Note that it is an ASP.NET application. Save it in the virtual directory of the Web Service (c:\MyWebServices\WebApp.aspx). This application has two text fields that are used to get numbers from the user to be added. It has one button, Execute, that, when clicked, calls the Add and SayHello Web Services. After the consumer is created, we need to create a proxy for the Web Service to be consumed. This work is done automatically by Visual Studio .NET when a reference to the Web Service is added. Here are the steps to be followed:

Create a proxy for the Web Service to be consumed. The proxy is created using the wsdl utility supplied with the .NET SDK. This utility extracts information from the Web Service and creates a proxy, so the proxy is valid only for that particular Web Service; if you need to consume other Web Services, you need to create a proxy for each of them as well. (VS .NET creates a proxy automatically when the reference to the Web Service is added.) Running the wsdl utility creates FirstService.cs in the current directory. We then compile it to create FirstService.dll (the proxy) for the Web Service.

Put the compiled proxy in the bin directory of the virtual directory of the Web Service (c:\MyWebServices\bin). IIS looks for the proxy in this directory.

Create the service consumer, which we have already done. Note that I have instantiated an object of the Web Service proxy in the consumer. This proxy takes care of interacting with the service.

Type the URL of the consumer in IE to test it (for example, http://localhost/MyWebServices/WebApp.aspx).

Writing a Windows application-based Web Service consumer is the same as writing any other Windows application. The only work to be done is to create the proxy (which we have already done) and reference this proxy when compiling the application. Following is our Windows application that uses the Web Service. This application creates a Web Service object (the proxy, of course) and calls the SayHello and Add methods on it. Compile it using c:\>csc /r:FirstService.dll WinApp.cs. This creates WinApp.exe. Run it to test the application and the Web Service. Now, the question arises: how can I be sure that my application is actually calling the Web Service? It is simple to test. Stop your Web server so that the Web Service cannot be contacted, then run the WinApp application. It will fire a run-time exception. Now, start the Web server again. It should work.

Web Services Security

Security is critical to web services. However, neither the XML-RPC nor the SOAP specification makes any explicit security or authentication requirements. There are three specific security issues with Web Services:

Confidentiality
Authentication
Network Security

Confidentiality

If a client sends an XML request to a server, can we ensure that the communication remains confidential? The answer lies in the following:

XML-RPC and SOAP run primarily on top of HTTP. HTTP has support for Secure Sockets Layer (SSL).

Communication can be encrypted via the SSL. SSL is a proven technology and widely deployed.

But a single web service may consist of a chain of applications. For example, one large service might tie together the services of three other applications. In this case, SSL is not adequate; the messages need to be encrypted at each node along the service path, and each node represents a potential weak link in the chain. Currently, there is no agreed-upon solution to this issue, but one promising solution is the W3C XML Encryption standard. This standard provides a framework for encrypting and decrypting entire XML documents or just portions of an XML document.

Authentication

If a client connects to a web service, how do we identify the user? And is the user authorized to use the service? The following options can be considered, but there is no clear consensus on a strong authentication scheme:

HTTP includes built-in support for Basic and Digest authentication, and services can therefore be protected in much the same manner as HTML documents are currently protected.

SOAP Security Extensions: Digital Signature (SOAP-DSIG). DSIG leverages public key cryptography to digitally sign SOAP messages. This enables the client or server to validate the identity of the other party.

The Organization for the Advancement of Structured Information Standards (OASIS) is working on the Security Assertion Markup Language (SAML).
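As a small illustration of the first of these options, HTTP Basic authentication simply sends a base64-encoded user:password pair in the Authorization header. The credentials below are, of course, made up.

```python
# Building the Authorization header used by HTTP Basic authentication.
import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Return the header a client sends to a Basic-protected service."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("alice", "secret"))
# {'Authorization': 'Basic YWxpY2U6c2VjcmV0'}
```

Note that Basic authentication only encodes, not encrypts, the credentials, which is why it is normally combined with SSL as described above.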

Network Security

There is currently no easy answer to the problem of filtering web service traffic at the network level, and it has been the subject of much debate. For now, if you are truly intent on filtering out SOAP or XML-RPC messages, one possibility is to filter out all HTTP POST requests that set their content type to text/xml. Another alternative is to filter on the SOAPAction HTTP header. Firewall vendors are also currently developing tools explicitly designed to filter web service traffic.
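The two filtering heuristics just mentioned can be sketched as a simple classifier over the request method and headers; a real firewall rule would operate on raw traffic, but the logic is the same.

```python
# A sketch of the firewall heuristics described above: flag HTTP POST
# requests whose Content-Type is text/xml or that carry a SOAPAction header.
def looks_like_soap(method: str, headers: dict) -> bool:
    """Return True if the request looks like a SOAP/XML-RPC message."""
    headers = {k.lower(): v for k, v in headers.items()}
    if method.upper() != "POST":
        return False
    content_type = headers.get("content-type", "")
    return content_type.startswith("text/xml") or "soapaction" in headers

assert looks_like_soap("POST", {"Content-Type": "text/xml; charset=utf-8"})
assert looks_like_soap("POST", {"SOAPAction": '"urn:example#add"'})
assert not looks_like_soap("GET", {"Content-Type": "text/xml"})
```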

WSDL


WSDL Usage

WSDL is often used in combination with SOAP and XML Schema to provide web services over the Internet. A client program connecting to a web service can read the WSDL to determine what functions are available on the server. Any special datatypes used are embedded in the WSDL file in the form of XML Schema. The client can then use SOAP to actually call one of the functions listed in the WSDL.

History of WSDL

WSDL 1.1 was submitted as a W3C Note by Ariba, IBM, and Microsoft, describing services for the W3C XML Activity on XML Protocols, in March 2001. WSDL 1.1 has not been endorsed by the World Wide Web Consortium (W3C); however, the W3C released a draft of version 2.0 on May 11, 2005, which is intended to become a W3C Recommendation (an official standard) and thus be endorsed by the W3C.
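The client-side usage described above can be sketched with a short script that lists the operations a service offers. The WSDL below is a hypothetical, minimal WSDL 1.1 fragment modeled on the FirstService example; a real document would also contain messages, bindings, and a service element.

```python
# Reading operation names out of a WSDL document, as a client tool might do.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       name="FirstService">
  <portType name="FirstServicePortType">
    <operation name="Add"/>
    <operation name="SayHello"/>
  </portType>
</definitions>"""

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
root = ET.fromstring(WSDL)
operations = [op.get("name") for op in root.iter(f"{{{WSDL_NS}}}operation")]
print(operations)  # ['Add', 'SayHello']
```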

WSDL Elements

WSDL breaks down web services into three specific, identifiable elements that can be combined or reused once defined. The three major elements of WSDL that can be defined separately are:

Types
Operations
Binding

A WSDL document has various elements, but they are contained within these three main elements, which can be developed as separate documents and then combined or reused to form complete WSDL files. The following are the elements of a WSDL document. Within these elements are further sub-elements, or parts:

Definition: this element must be the root element of all WSDL documents. It defines the name of the web service, declares the multiple namespaces used throughout the remainder of the document, and contains all the service elements described here.

Data types: the data types - in the form of XML schemas or possibly some other mechanism - to be used in the messages

Message: an abstract definition of the data, in the form of a message presented either as an entire document or as arguments to be mapped to a method invocation.

Operation: the abstract definition of the operation for a message, such as naming a method, message queue, or business process, that will accept and process the message

Port type: an abstract set of operations mapped to one or more end points, defining the collection of operations for a binding; the collection of operations, because it is abstract, can be mapped to multiple transports through various bindings.

Binding: the concrete protocol and data formats for the operations and messages defined for a particular port type.

Port: a combination of a binding and a network address, providing the target address of the service communication.

Service: a collection of related end points encompassing the service definitions in the file; the services map the binding to the port and include any extensibility definitions.

In addition to these major elements, the WSDL specification also defines the following utility elements:

Documentation: this element is used to provide human-readable documentation and can be included inside any other WSDL element.

Import: this element is used to import other WSDL documents or XML Schemas.

NOTE: WSDL parts are usually generated automatically using Web services-aware tools.

UDDI

UDDI is an XML-based standard for describing, publishing, and finding Web services. UDDI stands for Universal Description, Discovery and Integration. In this chapter you will learn what UDDI is, and why and how to use it.


UDDI has two parts:

A registry of all a web service's metadata including a pointer to the WSDL description of a service

A set of WSDL port type definitions for manipulating and searching that registry

History of UDDI

UDDI 1.0 was originally announced by Microsoft, IBM, and Ariba in September 2000. Since the initial announcement, the UDDI initiative has grown to include more than 300 companies, including Dell, Fujitsu, HP, Hitachi, IBM, Intel, Microsoft, Oracle, SAP, and Sun.

In May 2001, Microsoft and IBM launched the first UDDI operator sites and turned the UDDI registry live.

In June 2001, UDDI announced Version 2.0. As of this writing, the Microsoft and IBM sites implement the 1.0 specification and plan 2.0 support in the near future.

Currently, UDDI is sponsored by OASIS.

Partner Interface Processes - PIPs

Partner Interface Processes (PIPs) are XML-based interfaces that enable two trading partners to exchange data. Dozens of PIPs already exist. A few are listed here:

PIP2A2: Enables a partner to query another for product information.
PIP3A2: Enables a partner to query the price and availability of specific products.
PIP3A3: Enables a partner to transfer the contents of an electronic shopping cart.
PIP3A4: Enables a partner to submit an electronic purchase order and receive acknowledgment of the order.
PIP3B4: Enables a partner to query the status of a specific shipment.

Private UDDI Registries

As an alternative to using the public federated network of UDDI registries available on the Internet, companies or industry groups may choose to implement their own private UDDI registries.

These exclusive services would be designed for the sole purpose of allowing members of the company or of the industry group to share and advertise services amongst themselves. However, whether the UDDI registry is part of the global federated network or a privately owned and operated registry, the one thing that ties it all together is a common web services API for publishing and locating businesses and services advertised within the UDDI registry.

UDDI Technical Architecture

The UDDI technical architecture consists of three parts:

UDDI data model: An XML Schema for describing businesses and web services. The data model is described in detail in the "UDDI Data Model" section.

UDDI API specification: A specification of APIs for searching and publishing UDDI data.

UDDI cloud services: Operator sites that provide implementations of the UDDI specification and synchronize all data on a scheduled basis.

The UDDI Business Registry (UBR), also known as the Public Cloud, is conceptually a single system built from multiple nodes that have their data synchronized through replication. The current cloud services provide a logically centralized, but physically distributed, directory. This means that data submitted to one root node will automatically be replicated across all the other root nodes. Currently, data replication occurs every 24 hours. UDDI cloud services are currently provided by Microsoft and IBM. Ariba had originally planned to offer an operator as well, but has since backed away from the commitment. Additional operators from other companies, including Hewlett-Packard, are planned for the near future.

It is also possible to set up private UDDI registries. For example, a large company may set up its own private UDDI registry for registering all internal web services. As these registries are not automatically synchronized with the root UDDI nodes, they are not considered part of the UDDI cloud.

UDDI Elements

A business or company can register three types of information in a UDDI registry. This information is contained in three elements of UDDI. These three elements are:

(1) White pages: This category contains:

Basic information about the company and its business. Basic contact information, including business name, address and contact phone number. Unique identifiers for the company, such as tax IDs. This information allows others to discover your web service based upon your business identification.

(2) Yellow pages: This category contains:

More details about the company, including descriptions of the kind of electronic capabilities the company can offer to anyone who wants to do business with it.

It uses commonly accepted industrial categorization schemes, industry codes, product codes, business identification codes and the like to make it easier for companies to search through the listings and find exactly what they want.

(3) Green pages: This category contains technical information about a web service. This is what allows someone to bind to a Web service after it's been found. This includes:

The various interfaces
The URL locations
Discovery information and similar data required to find and run the web service.

NOTE: UDDI is not restricted to describing web services based on SOAP. Rather, UDDI can be used to describe any service, from a single web page or email address all the way up to SOAP, CORBA, and Java RMI services.

SOAP

SOAP is an XML-based protocol for exchanging information between computers. SOAP is XML; that is, SOAP is an application of the XML specification. The following statements are all true of SOAP:

SOAP is an acronym for Simple Object Access Protocol.
SOAP is a communication protocol.
SOAP is designed to communicate via the Internet.
SOAP can extend HTTP for XML messaging.
SOAP provides data transport for web services.
SOAP can exchange complete documents or call a remote procedure.
SOAP can be used for broadcasting a message.
SOAP is platform and language independent.
SOAP is the XML way of defining what information gets sent and how.

Although SOAP can be used in a variety of messaging systems and can be delivered via a variety of transport protocols, the initial focus of SOAP is remote procedure calls transported via HTTP.

SOAP enables client applications to easily connect to remote services and invoke remote methods. Other frameworks, including CORBA, DCOM, and Java RMI, provide similar functionality to SOAP, but SOAP messages are written entirely in XML and are therefore uniquely platform- and language-independent.

SOAP Message Structure

A SOAP message is an ordinary XML document containing the following elements:

Envelope: (Mandatory) defines the start and the end of the message.

Header: (Optional) Contains any optional attributes of the message used in processing the message, either at an intermediary point or at the ultimate end point.

Body: (Mandatory) Contains the XML data comprising the message being sent.

Fault: (Optional) Provides information about errors that occurred while processing the message.
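As a sketch of the element structure just listed, a SOAP message can be assembled with any XML library. Here is a minimal example using Python's standard xml.etree.ElementTree; the payload element GetQuotation is an illustrative placeholder, not part of any real service:

```python
import xml.etree.ElementTree as ET

ENV = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("SOAP-ENV", ENV)

# Mandatory root Envelope element
envelope = ET.Element(f"{{{ENV}}}Envelope")
# Optional Header: if present, it must appear before the Body
header = ET.SubElement(envelope, f"{{{ENV}}}Header")
# Mandatory Body: carries the application-defined XML payload
body = ET.SubElement(envelope, f"{{{ENV}}}Body")
payload = ET.SubElement(body, "GetQuotation")  # illustrative payload element
payload.text = "QuoteID-42"

message = ET.tostring(envelope, encoding="unicode")
print(message)
```

The serialized message shows the required ordering: the Header as the first child of the Envelope, followed by the Body.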

SOAP Envelope Element

The SOAP envelope indicates the start and the end of the message so that the receiver knows when an entire message has been received. The SOAP envelope solves the problem of knowing when you are done receiving a message and are ready to process it. The SOAP envelope is therefore basically a packaging mechanism. The SOAP Envelope element can be explained as follows:

Every SOAP message has a root Envelope element.
The Envelope element is a mandatory part of a SOAP message.
Every Envelope element must contain exactly one Body element.
If an Envelope contains a Header element, it must contain no more than one, and it must appear as the first child of the Envelope, before the Body.

The envelope changes when SOAP versions change. The SOAP envelope is specified using the ENV namespace prefix and the Envelope element.

The optional SOAP encoding is also specified using a namespace name and the optional encodingStyle attribute, which can also point to an encoding style other than the SOAP one.

A v1.1-compliant SOAP processor will generate a fault when receiving a message containing the v1.2 envelope namespace.

A v1.2-compliant SOAP processor generates a VersionMismatch fault if it receives a message that does not include the v1.2 envelope namespace.

SOAP Header Element

The optional Header element offers a flexible framework for specifying additional application-level requirements. For example, the Header element can be used to specify a digital signature for password-protected services; likewise, it can be used to specify an account number for pay-per-use SOAP services. The SOAP Header element can be explained as follows:

Header elements are an optional part of SOAP messages.
Header elements can occur multiple times.
Headers are intended to add new features and functionality.
The SOAP header contains header entries defined in a namespace.
The header is encoded as the first immediate child element of the SOAP envelope.
When more than one header is defined, all immediate child elements of the SOAP header are interpreted as SOAP header blocks.

The SOAP Header element can have the following two attributes:

Actor attribute: The SOAP protocol defines a message path as a list of SOAP service nodes. Each of these intermediate nodes can perform some processing and then forward the message to the next node in the chain. By setting the Actor attribute, the client can specify the recipient of the SOAP header.

mustUnderstand attribute: Indicates whether a Header element is optional or mandatory. If set to true (i.e., 1), the recipient must understand and process the Header element according to its defined semantics, or return a fault.

SOAP Body Element

The SOAP body is a mandatory element which contains the application-defined XML data being exchanged in the SOAP message. The body must be contained within the envelope and must follow any headers that might be defined for the message. The body is defined as a child element of the envelope, and the semantics for the body are defined in the associated SOAP schema. Normally, the application also defines a schema to contain semantics associated with the request and response elements. For example, a Quotation service might be implemented using an EJB running in an application server; if so, the SOAP processor would be responsible for mapping the body information as parameters into and out of the EJB implementation of the GetQuotationResponse service. The SOAP processor could also map the body information to a .NET object, a CORBA object, a COBOL program, and so on.

SOAP Fault Element

When an error occurs during processing, the response to a SOAP message is a SOAP Fault element in the body of the message, and the fault is returned to the sender of the SOAP message. The SOAP fault mechanism returns specific information about the error, including a predefined code, a description, and the address of the SOAP processor that generated the fault.

A SOAP message can carry only one Fault block.
The Fault element is an optional part of a SOAP message.
For the HTTP binding, a successful response is linked to the 200 to 299 range of status codes; a SOAP fault is linked to the 500 to 599 range of status codes.
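The status-code mapping just described can be sketched as a small helper; this is an illustrative function, not part of any SOAP library:

```python
def soap_http_outcome(status: int) -> str:
    """Map an HTTP status code to the SOAP/HTTP binding outcome described above."""
    if 200 <= status <= 299:
        return "success"          # normal SOAP response in the body
    if 500 <= status <= 599:
        return "soap-fault"       # response body carries a Fault element
    return "transport-error"      # outside the ranges the binding defines

print(soap_http_outcome(200), soap_http_outcome(500))  # → success soap-fault
```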

The SOAP Fault element has the following sub-elements: faultcode (a code identifying the fault), faultstring (a human-readable description of the fault), faultactor (an indication of who caused the fault), and detail (application-specific error information related to the Body element).

SOAP Transport

SOAP is not tied to any one transport protocol. SOAP can be transported via SMTP, FTP, IBM's MQSeries, or Microsoft Message Queuing (MSMQ).

The SOAP specification includes details on HTTP only. HTTP remains the most popular SOAP transport protocol.

SOAP via HTTP

Quite logically, SOAP requests are sent via an HTTP request, and SOAP responses are returned within the content of the HTTP response. While SOAP requests can be sent via an HTTP GET, the specification includes details on HTTP POST only. Additionally, both HTTP requests and responses are required to set their content type to text/xml. The SOAP specification mandates that the client must provide a SOAPAction header, but the actual value of the SOAPAction header depends on the SOAP server implementation. For example, to access the AltaVista BabelFish Translation service, hosted by XMethods, you must specify the service's designated SOAPAction header. Even if the server does not require a full SOAPAction header, the client must specify an empty string ("") or a null value. In a request to the BabelFish service, note the content type and the SOAPAction header; the BabelFish method requires two String parameters, and the translation mode en_fr will translate from English to French. SOAP responses delivered via HTTP are required to follow the same HTTP status codes. For example, a status code of 200 OK indicates a successful response. A status code of 500 Internal Server Error indicates that there is a server error and that the SOAP response includes a Fault element.
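The request construction above can be sketched with Python's standard http.client. The host, path and SOAPAction value below are hypothetical placeholders (a real service publishes them in its WSDL); the sketch only assembles the request rather than sending it:

```python
import http.client  # would be used to actually POST the request

SOAP_BODY = """<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope
    xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <ns1:BabelFish xmlns:ns1="urn:xmethodsBabelFish">
      <translationmode>en_fr</translationmode>
      <sourcedata>Hello, world!</sourcedata>
    </ns1:BabelFish>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>"""

def build_request(host, action, body):
    """Assemble the HTTP headers the SOAP-over-HTTP binding requires."""
    return {
        "Host": host,
        "Content-Type": "text/xml; charset=utf-8",  # mandated content type
        "Content-Length": str(len(body)),
        "SOAPAction": action,                       # may be "" if the server allows it
    }

headers = build_request("services.example.com",
                        '"urn:xmethodsBabelFish#BabelFish"', SOAP_BODY)
print(headers["Content-Type"])
# To actually send:
# http.client.HTTPConnection("services.example.com").request("POST", "/soap", SOAP_BODY, headers)
```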

Back Propagation Neural Network (BPNN)

Introduction

This section is confined to providing two toolkits which can be used to build a back propagation network, and the explanations offered relate to the use of these toolkits, assuming that the user is already familiar with neural networks generally and back propagation in particular. This section addresses only one of the numerous neural network architectures, the back propagation neural network; for the remainder of the section, the terms neural net, network, or net are used as abbreviations for back propagation neural network.

How are back propagation neural networks used?

A back propagation neural network, conceptually, represents a memory of a set of logic. The net receives one or more inputs, usually in terms of probability estimates of some parameters, processes these inputs according to how the net is trained, and outputs one or more results, also in terms of probability estimates. The inputs and outputs of a net are therefore numbers ranging from 0 to 1, the probability of something being true or present. Zero can also be used to represent absolute false and 1 to represent absolute true, while all the values in between represent the fuzzy logic of partly true and partly false. An example is a net with two inputs and a single output that is trained to reproduce the XOR situation. Such a net will output false (0) if the two inputs are the same (0,0 or 1,1), and true (1) if the two inputs are different (0,1 or 1,0). An advantage of the net is that, if properly trained, it will handle ambiguous information (0.9, 0.8 may still produce an output close to 0).

The architecture of a back propagation neural network

The basic processing unit in a neural network is the neuron, as shown in the diagram to the right.

The neuron receives one or more numerical inputs, usually a number between 0 and 1 to represent the probability of a concept.

The neuron then applies a weight to each of the inputs and combines these in some mathematical function. A common function, which is used in back propagation, is y = sum(input_i × weight_i).

The result of the function is then transformed, usually using a logistic function, into an axonal output (output = 1 / (1 + exp(-y))), so that the output is also numerically between 0 and 1 and represents a probability. The nature of the logistic function is such that the axonal output tends towards the extremes of 0 or 1, thus avoiding an ambiguous output.

In summary, a neuron accepts one or more probability values and processes them into an output that is also a probability value, pushing the output towards true (1) or false (0).
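The single neuron just described can be sketched in a few lines of code; the inputs and weights below are arbitrary illustrative values:

```python
import math

def neuron(inputs, weights):
    """Weighted sum of the inputs squashed through the logistic function."""
    y = sum(i * w for i, w in zip(inputs, weights))  # y = sum(input_i * weight_i)
    return 1.0 / (1.0 + math.exp(-y))                # logistic output in (0, 1)

out = neuron([0.9, 0.2], [1.5, -0.8])
print(out)  # a probability-like value strictly between 0 and 1
```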

The back propagation neural network arranges groups of neurons in layers, as shown in the diagram to the right.

The input layer accepts input data and passes it to the middle layers. The data are usually numbers between 0 and 1 which represent the probability of some concept. In neural networks that accept measurements (such as results of a chemical test), the neurons in the input layer transform each measurement into a probability before passing it to the middle layer. The number of neurons in the input layer depends on the number of pieces of input information to be received by the network.

The output layer produces the output from the network. The output is usually numerically between 0 and 1. The outputs represent the probability of particular outcomes if the network is used as a prediction model, or weightings of decisions if the network is used as an action decision model.

One or more middle layers, each containing a number of neurons, translate the inputs to produce the outputs. The number of middle layers, and the number of neurons in each layer, depends on the complexity of the patterns the network has to handle. In the simplest back propagation network possible, the XOR model, as used in the default of the Back Propagation Applet in toolkit 1, a single middle layer containing two neurons is sufficient. In most biological systems, where the network is used to select amongst a limited number of options (in diagnosis or treatment), one or two middle layers each with 6 to 10 neurons often suffice. In more convoluted networks, such as those used for investment and business decisions, more layers and a greater number of neurons in each layer may be necessary. The final structure is often decided after trial and error in testing.

Creation, training, storage, and use of a neural network

Creation of a neural network

In a computer program, a neural net is an object which contains a number of layers. Each layer is an object that contains a number of neurons, and each neuron is an object that contains a number of coefficients (weights for each input). All these objects are created when a neural net is initially created, although at this stage the coefficients consist of random numbers, so the net has the correct structure but is unable to produce useful results.

Training a neural net

A net is trained by presenting it with a series of templates to memorize. These are usually in a block of text, where each row is a template and consists of the template input values (0 to 1) together with the template output values (0 to 1). The neural net takes each row in turn, calculates the outputs from the template input values, and matches these against the template outputs. Using the differences between the calculated and template outputs, the computer program changes all the coefficients in the net (thus back propagation) towards those that would produce the correct (template) outputs (thus memorizing the patterns in the templates).
With each template, the coefficients in the net improve a little, and by presenting the net repeatedly with coherent templates, the coefficients in the neural net are adjusted progressively, until the neural net is able to reproduce all the patterns in the templates. When this is achieved, training is completed and the neural net is now usable.

Back propagation neural network as a decision engine

The trained neural net itself is, of course, a decision engine. In some complex decision engines, a cluster of connected neural nets is used. The outputs of earlier neural nets are used as inputs to later ones, and depending on the needs of the user, an architecture of great complexity that simulates the biological brain can be created. However, neural nets require that the inputs are probability estimates (numbers from 0 to 1), and their outputs are likewise probabilities (numbers from 0 to 1). In a cluster of interconnected neural nets this presents no difficulty, as the output of one neural net can immediately be used as input to the net down the line. However, the nets at the initial input and final output interfaces still require probabilistic inputs and outputs, and the decision engine will work only if these are available and acceptable. In real life, however, the data available for decision making are only occasionally in the form of probability estimates or in clear-cut binary (no/yes) form. In many cases they are things such as temperature measurements, chemical concentrations, share prices, distances to targets, or even difficult conceptual measurements such as perceptions, preferences and attitudes. These measurements need to be transformed into probability estimates before they are presented to the neural net. The outputs are usually easier for a user to interpret. In diagnostic situations, they represent the probability of each alternative diagnosis; in action situations, they represent the preference weighting towards each of the action options. In some cases, however, further processing of the numerical output may also be necessary. A decision engine that is based on a neural network therefore has three components. In the middle are one or more interconnected neural networks.
There is an optional but commonly necessary input interface where real-life data are transformed into probabilities (0 to 1) before they are fed as input to the neural net, and there is an optional output interface where the outputs from the neural network (0 to 1) are translated into real-life decisions.

Purpose of the Project:

The main purpose of the project is to overcome some of the problems in the existing system, such as reliability, maintainability and accuracy.


Problems in the existing system:

Here, let the existing system be a Bayesian network, which has strengths as well as limitations:

Strengths:
Transparent representation of causal relationships between system variables
Use of a variety of input data
Representation of uncertainty
Visual decision support tool
Can handle missing observations
Structural and parameter learning
New evidence can be incorporated

Limitations:
Difficult to reach agreement on the BN structure with experts
Difficult to define the CPTs with expert opinion
Continuous data representation
Spatial and temporal dynamics
No feedback loops
Results are not accurate


Strengths and weaknesses of the existing system (Bayesian network)

1.3 Proposed Project:

This project helps in easily classifying the different web services available in the market, where classification depends on certain parameters, enabling a naive user to know the rating of a web service and use services accordingly. For this type of classification, the technique used is web service classification using a back propagation neural network.


Scope of the project:

Here the user is independent of the tool's internal functionality and depends only on the result. Results are totally dependent on the data sets; any errors in the sets are not the concern of the tool. No user is dependent on any other user. Accuracy would be up to 90%, not 100%.

CHAPTER 2 Software Requirement Specifications:

The requirements are broadly divided into two groups:

2.1 Functional Requirements
2.2 Non-Functional Requirements

Non-Functional Requirements:

Maintainability: All the modules must be clearly separated to allow different user interfaces to be developed in future. Through thoughtful and effective software engineering, all steps of the software development process will be well documented to ensure maintainability of the product throughout its lifetime. All development will be provided with good documentation.

Performance: The response time, utilization and throughput behaviour of the system. Care is taken to ensure a system with comparatively high performance.

Usability: The ease of use and of training the end users of the system. The system should have qualities such as learnability, efficiency, affect and control. The main aim of the project is to increase the scope for the page designer to design a page and to reduce the rework of the programmer.

Modifiability: The ease with which a software system can accommodate changes to its software. Our project is easily adaptable to changes, which helps the application withstand the needs of the users.

Portability: The ability of the system to run under different computing environments. The environment types can be either hardware or software, but are usually a combination of the two.

Reusability:

The extent to which an existing application can be reused in a new application. Our application can be reused a number of times without any technical difficulties.

2.3 Software Requirements:

Java 6
Apache Tomcat Server
JDBC-ODBC connection
Java Servlets
Java Swing
Java Applets
Java code editor
Operating system: Windows XP (32-bit only)

2.4 Hardware Requirements

PROCESSOR: Pentium IV, 1 GHz
HARD DISK: 40 GB
RAM: 1 GB
Internet connectivity

CHAPTER 3 Literature Survey

Over the past few years, businesses have interacted using ad hoc approaches which take advantage of the basic Internet infrastructure. Now, however, Web Services are an emerging technology that provides a systematic and extensible framework for application-to-application interaction, built on top of existing Web protocols and based on open Extensible Markup Language (XML) standards. Web Services combine the best aspects of component-based development and the Web. Like components, Web Services represent black-box functionality that can be reused without worrying about how the service is implemented. Unlike current component technologies, Web Services are not accessed via object-model-specific protocols, such as the Distributed Component Object Model (DCOM), Java Remote Method Invocation (RMI), or the Internet Inter-Orb Protocol (IIOP). Instead, Web Services are accessed via ubiquitous Web protocols (e.g., HTTP) and data formats (e.g., XML). Web Services are a new breed of Web application. They are self-contained, self-describing, modular applications that can be published, located, and invoked across the web. A Web Service performs functions, which can be anything from simple requests to complicated business processes. Once a Web Service is deployed, other applications (and other Web Services) can discover and invoke the deployed services. Web Service is a term used to define a set of technologies that exposes business functionality over the Web as a set of automated interfaces. These automated interfaces allow businesses to discover and bind to an interface at run time, supposedly minimizing the amount of static preparation that is needed by other integration technologies.

4.1 System Architecture:
A back propagation neural network uses a feed-forward topology, supervised learning, and the back propagation learning algorithm. This algorithm was responsible in large part for the re-emergence of neural networks in the mid-1980s. Back propagation is a general purpose learning algorithm. It is powerful but also expensive in terms of computational requirements for training. A back propagation network with a single hidden layer of processing elements can model any continuous function to any degree of accuracy (given enough processing elements in the hidden layer).

Fig- 4.1: Back Propagation Network

There are literally hundreds of variations of back propagation in the neural network literature, and all claim to be superior to basic back propagation in one way or another. Indeed, since back propagation is based on a relatively simple form of optimization known as gradient descent, mathematically astute observers soon proposed modifications using more powerful techniques such as conjugate gradient and Newton's method. However, basic back propagation is still the most widely used variant. Its two primary virtues are that it is simple and easy to understand, and it works for a wide range of problems. The basic back propagation algorithm consists of three steps.

The input pattern is presented to the input layer of the network. These inputs are propagated through the network until they reach the output units. This forward pass produces the actual or predicted output pattern.

Because back propagation is a supervised learning algorithm, the desired outputs are given as part of the training vector. The actual network outputs are subtracted from the desired outputs and an error signal is produced.

This error signal is then the basis for the back propagation step, whereby the errors are passed back through the neural network by computing the contribution of each hidden processing unit and deriving the corresponding adjustment needed to produce the correct output. The connection weights are then adjusted and the neural network has just learned from an experience.
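The three steps above can be sketched end to end. The following minimal illustration trains a small network on the XOR problem discussed earlier; the hidden-layer size, learning rate, iteration count and random seed are arbitrary choices for the sketch:

```python
import math
import random

random.seed(1)
sigmoid = lambda y: 1.0 / (1.0 + math.exp(-y))

# XOR training pairs: (inputs, target)
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

N_HIDDEN = 3
# w_ih[i][j]: weight from input i to hidden neuron j; the third "input" is a bias of 1
w_ih = [[random.uniform(-1, 1) for _ in range(N_HIDDEN)] for _ in range(3)]
w_ho = [random.uniform(-1, 1) for _ in range(N_HIDDEN + 1)]  # hidden + bias -> output

def forward(x):
    """Step 1: propagate an input pattern through to the output."""
    xs = x + [1]  # append bias input
    hidden = [sigmoid(sum(xs[i] * w_ih[i][j] for i in range(3)))
              for j in range(N_HIDDEN)]
    hs = hidden + [1]  # append bias unit
    out = sigmoid(sum(hs[j] * w_ho[j] for j in range(N_HIDDEN + 1)))
    return hs, out

LR = 0.5
for _ in range(20000):
    for x, target in DATA:
        hs, out = forward(x)
        # Step 2: error signal at the output: out(1-out)(target-out)
        err_out = out * (1 - out) * (target - out)
        # Step 3: back-propagate to hidden units and adjust the weights
        xs = x + [1]
        for j in range(N_HIDDEN):
            err_h = hs[j] * (1 - hs[j]) * err_out * w_ho[j]
            for i in range(3):
                w_ih[i][j] += LR * err_h * xs[i]
        for j in range(N_HIDDEN + 1):
            w_ho[j] += LR * err_out * hs[j]

for x, target in DATA:
    _, out = forward(x)
    print(x, round(out))  # expect the rounded outputs to reproduce XOR
```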


Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.

Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.

Real Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage. [2]


Minsky and Papert (1969) showed that there are many simple problems, such as the exclusive-or problem, which linear neural networks cannot solve. Note that the term "solve" means learn the desired associative links. The argument is that if such networks cannot solve such simple problems, how could they solve complex problems in vision, language, and motor control? The solution to this problem was to introduce hidden layers of nonlinear (e.g., sigmoid) units between the inputs and outputs, trained with algorithms such as back propagation.



The behavior of an ANN (Artificial Neural Network) depends on both the weights and the input-output function (transfer function) that is specified for the units. This function typically falls into one of three categories: Linear (or ramp): the output activity is proportional to the total weighted input.

Threshold: the output is set at one of two levels, depending on whether the total input is greater than or less than some threshold value.

Sigmoid: the output varies continuously but not linearly as the input changes. Sigmoid units bear a greater resemblance to real neurons than do linear or threshold units, but all three must be considered rough approximations.

Step function: Step(x) = 1, if x >= threshold; Step(x) = 0, if x < threshold

Sign function: Sign(x) = +1, if x >= 0; Sign(x) = -1, if x < 0

Sigmoid function: Sigmoid(x) = 1 / (1 + e^(-x))

Fig -2.10: Transfer Functions
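The three transfer functions above can be written directly in code; a small sketch (the threshold value is an arbitrary default):

```python
import math

def step(x, threshold=0.0):
    """1 at or above the threshold, 0 below it."""
    return 1 if x >= threshold else 0

def sign(x):
    """+1 for non-negative input, -1 for negative input."""
    return 1 if x >= 0 else -1

def sigmoid(x):
    """Smooth squashing into (0, 1); approaches a step for large |x|."""
    return 1.0 / (1.0 + math.exp(-x))

print(step(0.7), sign(-2), round(sigmoid(0), 2))  # → 1 -1 0.5
```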

To make a neural network that performs some specific task, we must choose how the units are connected to one another, and we must set the weights on the connections appropriately. The connections determine whether it is possible for one unit to influence another. The weights specify the strength of the influence. A three-layer network can be taught to perform a particular task by using the following procedure:

The network is presented with training examples, which consist of a pattern of activities for the input units together with the desired pattern of activities for the output units.

It is determined how closely the actual output of the network matches the desired output.

The weight of each connection can be changed so that the network produces a better approximation of the desired output.

3.1 The algorithm

Most people would consider the Back Propagation network to be the quintessential neural net. Actually, Back Propagation is the training or learning algorithm rather than the network itself. The network used is generally of the simple type shown in figure 1.1 in chapter 1 and in the examples up until now. These are called Feed-Forward Networks (we'll see why in chapter 7 on Hopfield Networks) or occasionally Multi-Layer Perceptrons (MLPs). The network operates in exactly the same way as the others we've seen (if you need to remind yourself, look at worked example 2.3). Now, let's consider what Back Propagation is and how to use it. A Back Propagation network learns by example. You give the algorithm examples of what you want the network to do and it changes the network's weights so that, when training is finished, it will give you the required output for a particular input. Back Propagation networks are ideal for simple pattern recognition and mapping tasks. As just mentioned, to train the network you need to give it examples of the output you want (called the Target) for a particular input, as shown in Figure 3.1.

Figure 3.1, a Back Propagation training set.




So, if we put in the first pattern to the network, we would like the output to be 0 1 as shown in figure 3.2 (a black pixel is represented by 1 and a white by 0 as in the previous examples). The input and its corresponding target are called a Training Pair. Figure 3.2, applying a training pair to a network.

(In figure 3.2, the four inputs are applied to the network; we would like the first output neuron to give 0 and the second to give 1.)


Tutorial question 3.1: Redraw the diagram in figure 3.2 to show the inputs and targets for the second pattern.

Once the network is trained, it will provide the desired output for any of the input patterns. Let's now look at how the training works. The network is first initialised by setting up all its weights to be small random numbers, say between -1 and +1. Next, the input pattern is applied and the output calculated (this is called the forward pass). The calculation gives an output which is completely different to what you want (the Target), since all the weights are random. We then calculate the Error of each neuron, which is essentially: Target - Actual Output (i.e., what you want minus what you actually get). This error is then used mathematically to change the weights in such a way that the error will get smaller. In other words, the Output of each neuron will get closer to its Target (this part is called the reverse pass). The process is repeated again and again until the error is minimal.

Let's do an example with an actual network to see how the process works. We'll just look at one connection initially, between a neuron in the output layer and one in the hidden layer, figure 3.3.

Figure 3.3, a single connection learning in a Back Propagation network.



The connection we're interested in is between neuron A (a hidden layer neuron) and neuron B (an output neuron) and has the weight WAB. The diagram also shows another connection, between neuron A and C, but we'll return to that later. The algorithm works like this:

1. First apply the inputs to the network and work out the output. Remember, this initial output could be anything, as the initial weights were random numbers.

2. Next work out the error for neuron B. The error is what you want minus what you actually get, in other words:

ErrorB = OutputB (1 - OutputB)(TargetB - OutputB)

The Output(1 - Output) term is necessary in the equation because of the sigmoid function; if we were only using a threshold neuron it would just be (Target - Output).

3. Change the weight. Let W+AB be the new (trained) weight and WAB be the initial weight:

W+AB = WAB + (ErrorB x OutputA)

Notice that it is the output of the connecting neuron (neuron A) we use (not B). We update all the weights in the output layer in this way.

4. Calculate the Errors for the hidden layer neurons. Unlike the output layer we can't calculate these directly (because we don't have a Target), so we Back Propagate them from the output layer (hence the name of the algorithm). This is done by taking the Errors from the output neurons and running them back through the weights to get the hidden layer errors. For example, if neuron A is connected as shown to B and C, then we take the errors from B and C to generate an error for A:

ErrorA = OutputA (1 - OutputA)(ErrorB WAB + ErrorC WAC)

Again, the factor Output(1 - Output) is present because of the sigmoid squashing function.

5. Having obtained the Error for the hidden layer neurons, now proceed as in stage 3 to change the hidden layer weights. By repeating this method we can train a network of any number of layers.

This may well have left some doubt in your mind about the operation, so let's clear that up by explicitly showing all the calculations for a full sized network with 2 inputs, 3 hidden layer neurons and 2 output neurons, as shown in figure 3.4. W+ represents the new, recalculated weight, whereas W (without the superscript) represents the old weight.
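A minimal numeric sketch of steps 2 and 3 for the single connection WAB; all starting values are arbitrary illustrative numbers, not taken from any real training run:

```python
# Illustrative values for the A -> B connection
out_A, out_B, target_B = 0.4, 0.9, 1.0
W_AB = 0.5

# Step 2: output-layer error with the sigmoid derivative term
error_B = out_B * (1 - out_B) * (target_B - out_B)

# Step 3: weight update, W+AB = WAB + (ErrorB x OutputA)
W_AB_new = W_AB + error_B * out_A
print(round(error_B, 4), round(W_AB_new, 4))  # → 0.009 0.5036
```

Note that the error shrinks as out_B approaches target_B, so the updates become smaller as training converges.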


Figure 3.4: All the calculations for a reverse pass of back propagation.












1. Calculate errors of output neurons:

   δα = outα (1 - outα)(Targetα - outα)
   δβ = outβ (1 - outβ)(Targetβ - outβ)

2. Change output layer weights:

   W+Aα = WAα + η δα outA        W+Aβ = WAβ + η δβ outA
   W+Bα = WBα + η δα outB        W+Bβ = WBβ + η δβ outB
   W+Cα = WCα + η δα outC        W+Cβ = WCβ + η δβ outC

3. Calculate (back-propagate) hidden layer errors:

   δA = outA (1 - outA)(δα WAα + δβ WAβ)
   δB = outB (1 - outB)(δα WBα + δβ WBβ)
   δC = outC (1 - outC)(δα WCα + δβ WCβ)

4. Change hidden layer weights:

   W+λA = WλA + η δA inλ        W+μA = WμA + η δA inμ
   W+λB = WλB + η δB inλ        W+μB = WμB + η δB inμ
   W+λC = WλC + η δC inλ        W+μC = WμC + η δC inμ

The constant η (called the learning rate, and nominally equal to one) is put in to speed up or slow down the learning if required.
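The reverse pass above can be made concrete in code. The following is a minimal Java sketch of one forward and one backward pass for the 2-3-2 network of Figure 3.4; the numeric inputs, targets, and starting weights are illustrative assumptions, and only the update rules come from the text.

```java
// One forward + backward pass for a 2-input, 3-hidden, 2-output network.
// Inputs, targets, and initial weights are made-up illustrative values.
public class ReversePass {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double eta = 1.0;                                       // learning rate, nominally 1
        double[] in = {0.35, 0.9};                              // inputs (lambda, mu)
        double[][] wIH = {{0.1, 0.8}, {0.4, 0.6}, {0.3, 0.9}};  // input-to-hidden weights
        double[][] wHO = {{0.3, 0.9, 0.5}, {0.7, 0.2, 0.8}};    // hidden-to-output weights
        double[] target = {0.5, 0.1};

        // Forward pass: hidden activities outA..outC, then the two outputs.
        double[] hid = new double[3];
        for (int h = 0; h < 3; h++)
            hid[h] = sigmoid(wIH[h][0] * in[0] + wIH[h][1] * in[1]);
        double[] out = new double[2];
        for (int o = 0; o < 2; o++)
            out[o] = sigmoid(wHO[o][0] * hid[0] + wHO[o][1] * hid[1]
                           + wHO[o][2] * hid[2]);

        // Step 1: output errors, delta = out(1 - out)(target - out).
        double[] dOut = new double[2];
        for (int o = 0; o < 2; o++)
            dOut[o] = out[o] * (1 - out[o]) * (target[o] - out[o]);

        // Step 3 (using the old weights): back-propagated hidden errors.
        double[] dHid = new double[3];
        for (int h = 0; h < 3; h++) {
            double s = 0;
            for (int o = 0; o < 2; o++) s += dOut[o] * wHO[o][h];
            dHid[h] = hid[h] * (1 - hid[h]) * s;
        }

        // Steps 2 and 4: W+ = W + eta * delta * (output of connecting neuron).
        for (int o = 0; o < 2; o++)
            for (int h = 0; h < 3; h++)
                wHO[o][h] += eta * dOut[o] * hid[h];
        for (int h = 0; h < 3; h++)
            for (int i = 0; i < 2; i++)
                wIH[h][i] += eta * dHid[h] * in[i];

        System.out.println("updated first output-layer weight = " + wHO[0][0]);
    }
}
```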


Minsky and Papert (1969) showed that there are many simple problems, such as the exclusive-or problem, which linear neural networks cannot solve. Note that the term "solve" means learn the desired associative links. The argument is that if such networks cannot solve such simple problems, how could they solve complex problems in vision, language, and motor control?


The back-propagation algorithm is an involved mathematical tool; however, execution of the training equations is based on iterative processes, and thus is easily implementable on a computer. [8]

Weight changes for the hidden-to-output weights follow the Widrow-Hoff learning rule.

Weight changes for the input-to-hidden weights also follow the Widrow-Hoff learning rule, but the error signal is obtained by "back-propagating" the error from the output units.

During the training session of the network, a pair of patterns (Xk, Tk) is presented, where Xk is the input pattern and Tk is the target or desired pattern. The Xk pattern causes output responses at each neuron in each layer and, hence, an output Ok at the output layer. At the output layer, the difference between the actual and target outputs yields an error signal. This error signal depends on the values of the weights of the neurons in each layer. This error is minimized, and during this process new values for the weights are obtained. The speed and accuracy of the learning process (that is, the process of updating the weights) also depends on a factor known as the learning rate.

Before starting the back propagation learning process, we need the following:

The set of training patterns (input and target)

A value for the learning rate

A criterion that terminates the algorithm

A methodology for updating weights

The nonlinearity function (usually the sigmoid)

Initial weight values (typically small random values)

Fig-5.1: Back Propagation Network

The process then starts by applying the first input pattern Xk and the corresponding target output Tk. The input causes a response in the neurons of the first layer, which in turn causes a response in the neurons of the next layer, and so on, until a response is obtained at the output layer. That response is then compared with the target response, and the difference (the error signal) is calculated. From the error difference at the output neurons, the algorithm computes the rate at which the error changes as the activity level of the neuron changes. So far, the calculations were computed forward (i.e., from the input layer to the output layer). Now, the algorithm steps back one layer from the output layer and recalculates the weights of the output layer (the weights between the last hidden layer and the neurons of the output layer) so that the output error is minimized. The algorithm next calculates the error output at the last hidden layer and computes new values for its weights (the weights between the last and next-to-last hidden layers). The algorithm continues calculating the error and computing new weight values, moving layer by layer backward, toward the input. When the input is reached and the weights do not change (i.e., when they have reached a steady state), the algorithm selects the next pair of input-target patterns and repeats the process. Although responses move in a forward direction, weights are calculated by moving backward, hence the name back propagation.
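The iterative scheme just described can be sketched as a loop: present a pattern, propagate forward, back-propagate the error, update the weights, and stop when they reach a steady state. The sketch below reduces this to a single sigmoid neuron and one training pair so the loop structure stands out; the pattern, learning rate, and stopping threshold are our assumptions.

```java
// Training-loop skeleton: forward pass, error signal, weight update,
// repeated until the output error falls below a tolerance (steady state).
public class TrainLoop {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    static double[] train(double[] x, double t, double eta, double tol) {
        double[] w = {0.1, -0.2};                           // small initial weights
        for (int epoch = 0; epoch < 100000; epoch++) {
            double o = sigmoid(w[0] * x[0] + w[1] * x[1]);  // forward pass
            double err = t - o;                             // error signal
            if (Math.abs(err) < tol) break;                 // steady state reached
            double delta = o * (1 - o) * err;               // sigmoid-scaled error
            for (int i = 0; i < w.length; i++)
                w[i] += eta * delta * x[i];                 // weight update
        }
        return w;
    }

    public static void main(String[] args) {
        double[] w = train(new double[]{1.0, 0.5}, 0.8, 0.5, 1e-3);
        System.out.println(sigmoid(w[0] + 0.5 * w[1]));     // close to the target 0.8
    }
}
```

A full implementation would loop over all (Xk, Tk) pairs and over all layers, but the stopping criterion and the forward/backward alternation are the same.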


The back-propagation algorithm consists of the following steps: [16], [17]

Each input is multiplied by a weight that either inhibits or excites the input, and the weighted sum of the inputs is then calculated.

First, it computes the total weighted input Xj, using the formula:

   Xj = Σi yi Wij

where yi is the activity level of the ith unit in the previous layer and Wij is the weight of the connection between the ith and the jth unit.

Then the weighted input Xj is passed through a sigmoid function that scales the output to between 0 and 1.

Next, the unit calculates the activity yj using some function of the total weighted input. Typically we use the sigmoid function:

   yj = 1 / (1 + e^(-Xj))
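The two computations above (the weighted sum and the sigmoid squashing) can be sketched in a few lines of Java; the method names are our own, not from the thesis.

```java
// A single unit's forward computation: Xj = sum_i yi*Wij, then sigmoid(Xj).
public class ForwardUnit {
    // Total weighted input to unit j, given previous-layer activities y
    // and the incoming weights w.
    static double weightedInput(double[] y, double[] w) {
        double x = 0;
        for (int i = 0; i < y.length; i++) x += y[i] * w[i];
        return x;
    }

    // Sigmoid squashing function, scaling output to the interval (0, 1).
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    public static void main(String[] args) {
        double[] y = {1.0, 0.5};
        double[] w = {0.2, -0.4};
        double xj = weightedInput(y, w);   // 0.2 - 0.2 = 0.0
        System.out.println(sigmoid(xj));   // sigmoid(0) = 0.5
    }
}
```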
Once the output is calculated, it is compared with the required output and the total Error E is computed.


Once the activities of all output units have been determined, the network computes the error E, which is defined by the expression:

   E = ½ Σj (yj - dj)²

where yj is the activity level of the jth unit in the top layer and dj is the desired output of the jth unit.

Now the error is propagated backwards.

1. Compute how fast the error changes as the activity of an output unit is changed. This error derivative (EA) is the difference between the actual and the desired activity.

   EAj = ∂E/∂yj = yj - dj


2. Compute how fast the error changes as the total input received by an output unit is changed. This quantity (EI) is the answer from step 1 multiplied by the rate at which the output of a unit changes as its total input is changed.

   EIj = ∂E/∂xj = (∂E/∂yj)(dyj/dxj) = EAj yj (1 - yj)


3. Compute how fast the error changes as a weight on the connection into an output unit is changed. This quantity (EW) is the answer from step 2 multiplied by the activity level of the unit from which the connection emanates.


   EWij = ∂E/∂Wij = (∂E/∂xj)(∂xj/∂Wij) = EIj yi



4. Compute how fast the error changes as the activity of a unit in the previous layer is changed. This crucial step allows back propagation to be applied to multi-layer networks. When the activity of a unit in the previous layer changes, it affects the activities of all the output units to which it is connected. So to compute the overall effect on the error, we add together all these separate effects on output units. But each effect is simple to calculate. It is the answer in step 2 multiplied by the weight on the connection to that output unit.

   EAi = ∂E/∂yi = Σj (∂E/∂xj)(∂xj/∂yi) = Σj EIj Wij


By using steps 2 and 4, we can convert the EAs of one layer of units into EAs for the previous layer. This procedure can be repeated to get the EAs for as many previous layers as desired. Once we know the EA of a unit, we can use steps 2 and 3 to compute the EWs on its incoming connections.
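Steps 1-4 translate directly into code. The sketch below mirrors the four quantities EA, EI, EW, and the previous-layer EA; the numeric values in main are illustrative only.

```java
// The four error derivatives of the back-propagation steps above.
public class ErrorDerivatives {
    // Step 1: EA_j = y_j - d_j (output activity minus desired output).
    static double ea(double y, double d) { return y - d; }

    // Step 2: EI_j = EA_j * y_j * (1 - y_j) (sigmoid slope factor).
    static double ei(double ea, double y) { return ea * y * (1 - y); }

    // Step 3: EW_ij = EI_j * y_i (activity of the unit the connection comes from).
    static double ew(double ei, double yi) { return ei * yi; }

    // Step 4: EA_i = sum_j EI_j * W_ij (effects summed over all output units).
    static double eaPrev(double[] eiOut, double[] wRow) {
        double s = 0;
        for (int j = 0; j < eiOut.length; j++) s += eiOut[j] * wRow[j];
        return s;
    }

    public static void main(String[] args) {
        double y = 0.8, d = 1.0, yi = 0.5;
        double eaj = ea(y, d);              // ≈ -0.2
        double eij = ei(eaj, y);            // ≈ -0.032
        System.out.println(ew(eij, yi));    // ≈ -0.016
    }
}
```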

Although widely used, the back propagation algorithm has not escaped criticism. The method of backwards-calculating weights does not seem to be biologically plausible; neurons do not seem to work backward to adjust the efficacy of their synaptic weights. Thus, the back-propagation learning algorithm is not viewed by many as a learning process that emulates the biological world, but as a method to design a network with learning.

Second, the algorithm uses a digital computer to calculate weights. When the final network is implemented in hardware, however, it has lost its plasticity. This loss is in contrast with the initial motivation to develop neural networks that emulate brain like networks and are adaptable (plastic) enough to learn new patterns. If changes are necessary, a computer calculates anew the weight values and updates them. This means that the neural network implementation still depends on a digital computer.

The algorithm suffers from extensive calculations and, hence, slow training speed. The time required to calculate the error derivatives and to update the weights on a given training exemplar is proportional to the size of the network, and the amount of computation is proportional to the number of weights. In large networks, increasing the number of training patterns causes the learning time to increase faster than the network grows. The computational speed inefficiency of this algorithm has triggered an effort to explore techniques that accelerate the learning time by at least a factor of 2. Even these accelerated techniques, however, do not make the back propagation learning algorithm suitable for many real-time applications. [20]

Despite its wide applicability, the error back-propagation algorithm cannot be applied to every neural network system that can be imagined. In particular, the algorithm requires that the activation functions of each of the neurons in the network be both continuous and differentiable. Several historically important neural network architectures use activation functions which do not satisfy this condition: these include the discontinuous linear threshold activation function of the original perceptron of Rosenblatt and the continuous but non-differentiable linear ramp activation function of the units in the brain-state-in-a-box model of Anderson et al.

NAME OF SIGNAL    DESCRIPTION OF SIGNAL
I                 an array of 4 elements, i.e. the input to the neural network
Wij               weights at the input layer
Si                sum of weighted inputs, from the synapse at the input layer
Oi                output of the first neuron, based on the sigmoid function
wjk               weights at the output layer
Sj                sum of weighted inputs, from the second neuron
Oj                output of the second neuron, based on the sigmoid function
t                 an array of 4 elements, i.e. the target for which the neural network has to be trained
deri, derj        derivatives of the outputs of the first and second neuron, respectively
deltaj            calculated errors (delta functions) obtained by comparing the output of the second neuron with the target value to be achieved
deltai            calculated errors (delta functions) obtained from the error generator at the input
errori            error from the error generator at the input
dwij              updated weight values at the input layer
dwjk              updated weight values at the output layer

Table 2: Signals for the back-propagation neural network


The error generator at the output calculates the error by subtracting the actual output from the desired one. This difference is then multiplied by the derivative of the neuron's output (derj1, derj2, derj3, derj4) to get a delta function, i.e. deltaj1, deltaj2, deltaj3, deltaj4.

   derj = oj (1 - oj)                  (5.8)
   deltaj = (t - oj) derj              (5.9)


Weight update entity updates the weight by using the delta function and the output of the neuron.

   dwij = wij + η deltai oi            (5.10)
   dwjk = wjk + η deltaj oj            (5.11)



Weight transfer unit transfers the updated weight values to the original weights which are then used in the neural network training.


   wij = dwij                          (5.12)
   wjk = dwjk                          (5.13)


The error generator at the input calculates the error by back-propagating the output-layer delta functions through the second-layer weights. This error is then multiplied by the derivative of the first neuron's output to get a delta function, i.e. deltai1, deltai2, deltai3, deltai4.

   deri = oi (1 - oi)                  (5.14)
   deltai = errori deri                (5.15)
   errori = Σk wjk deltaj              (5.16)
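The error-generator and weight-update equations (5.8)-(5.16) can be sketched as follows. The learning rate eta and the example values in main are our assumptions, since the text leaves the rate implicit in these equations.

```java
// Error generators and weight updates following equations (5.8)-(5.16).
public class ErrorGenerators {
    // (5.8) derj = oj(1 - oj); (5.9) deltaj = (t - oj) * derj, combined.
    static double deltaOut(double t, double oj) {
        return (t - oj) * oj * (1 - oj);
    }

    // (5.16) errori = sum_k wjk * deltaj: error back-propagated to the input layer.
    static double errorIn(double[] wjk, double[] deltaj) {
        double e = 0;
        for (int k = 0; k < wjk.length; k++) e += wjk[k] * deltaj[k];
        return e;
    }

    // (5.10)/(5.11) dw = w + eta * delta * o: the weight-update entity.
    static double updateWeight(double w, double eta, double delta, double o) {
        return w + eta * delta * o;
    }

    public static void main(String[] args) {
        double dj = deltaOut(1.0, 0.5);              // 0.125
        System.out.println(updateWeight(0.4, 1.0, dj, 0.5));
    }
}
```

The weight-transfer step, equations (5.12)/(5.13), is then a plain assignment of dw back onto w.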


5.1.1 Unit testing

Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.

5.1.2 Integration testing

Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.

5.1.3 Functional test

Functional tests provide systematic demonstrations that functions tested are available as specified by the business and technical requirements, system documentation, and user manuals. Functional testing is centered on the following items:

Valid input        : identified classes of valid input must be accepted.
Invalid input      : identified classes of invalid input must be rejected.
Functions          : identified functions must be exercised.
Output             : identified classes of application outputs must be exercised.
Systems/Procedures : interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.

5.1.4 System Test

System testing ensures that the entire integrated software system meets requirements. It tests a configuration to ensure known and predictable results. An example of system testing is the configuration-oriented system integration test. System testing is based on process descriptions and flows, emphasizing pre-driven process links and integration points.

5.1.5 White Box Testing

White Box Testing is testing in which the software tester has knowledge of the inner workings, structure, and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black-box level.

5.1.6 Black Box Testing

Black Box Testing is testing the software without any knowledge of the inner workings, structure, or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot see into it. The test provides inputs and responds to outputs without considering how the software works.

Compilation Test: It was a good idea to do our stress testing early on, because it gave us time to fix some of the unexpected deadlocks and stability problems that only occurred when components were exposed to very high transaction volumes.

Execution Test: The program was loaded and executed successfully. Because of good programming there were no execution errors, and the complete performance of the project was good.

Output Test: The successful output screens are placed in the output screens section above, with a brief explanation of each screen.


Screen 7.1: Homepage of the NeuroShell 2 tool.

Screen 7.2: Loading the new problem.

Screen 7.3: Filling in the number of fields and starting the import.

Screen 7.4: Saving the ASCII file format.

Screen 7.5: Defining inputs.

Screen 7.6: Defining by clicking on all variable types.

Screen 7.7: Selecting the Actual Output in the Variable Type Selection.

Screen 7.8: Selecting one of the variables as output.

Screen 7.9: Computing maxes and mins.

Screen 7.10: Showing max, min, and mean values of all inputs.

Screen 7.11: Test set extraction and setting an extraction method.

Screen 7.12: Beginning an extraction.

Screen 7.13: Extraction result with training and testing sets.

Screen 7.14: Selection of architecture.

Screen 7.15: Selecting the BPNN architecture.

Screen 7.16: Three-layered architecture.

Screen 7.17: Saving the changes made.

Screen 7.18: Back-propagation pattern and weight selection.

Screen 7.19: Starting the training set.

Screen 7.20: BPNN training completed.

Screen 7.21: Network processing.

Screen 7.22: Opening the .tst file.

Screen 7.23: Selecting an alternate pattern file.

Screen 7.24: Viewing the output file.

Screen 7.25: Starting the process.

Screen 7.26: DataGrid showing fold1.out.

Screen 7.27: Fold1.out before simulation.

Screen 7.28: Result set of all folds from 1 to 10.

This thesis describes the Java implementation of a supervised learning algorithm for artificial neural networks. The algorithm is the error back-propagation learning algorithm for a layered feed-forward network, and it has many successful applications in training multilayer neural networks. Implementation in Java provides a flexible, fast method with a high degree of parallelism for implementing the algorithm. The proposed neural network has two layers with four neurons each and sixteen synapses per layer. Different situations may need neural networks of different scales; such situations can be handled by combining a number of such unit neural networks. The different modules are: inputs, neuron, error generator at the input, error generator at the output, weight update unit, and weight transfer unit. At the synapse, inputs are multiplied by the corresponding weights, and the weighted inputs are summed together to get four outputs. After the synapse comes the neuron, which calculates the output in accordance with the sigmoid transfer function; its derivative is also calculated. The neuron is followed by a second synapse and neuron, because a two-layer network is considered in this thesis. The network is then followed by the error generator at the output, which compares the output of the neuron with the target signal for which the network has to be trained. Similarly, there is an error generator at the input, which updates the weights of the first layer taking into account the error propagated back from the output layer. Finally, a weight transfer unit is present to pass the values of the updated weights on to the actual weights.

Finally, having a result set in hand, we need to classify the web services based on the results obtained after training. Generally, the web services are classified into four categories: Platinum, Gold, Silver, and Bronze.

When the result for a given data set is above 90%, we classify the web service as Platinum.

Similarly, when the result is between 70% and 90%, the web service is categorized as Gold. Web services are classified as Silver when the result is between 60% and 70%, and when the result is below 60% the service is classified as Bronze.
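The four-way classification described above amounts to a simple threshold function; a minimal sketch follows, with the method name and the treatment of the exact boundary values (e.g. whether 90% itself is Gold) being our assumptions.

```java
// QoS-based ranking of a web service from its classification result (%).
public class ServiceRank {
    static String classify(double percent) {
        if (percent > 90) return "Platinum";   // above 90%
        if (percent >= 70) return "Gold";      // 70-90%
        if (percent >= 60) return "Silver";    // 60-70%
        return "Bronze";                       // below 60%
    }

    public static void main(String[] args) {
        System.out.println(classify(95));  // Platinum
        System.out.println(classify(75));  // Gold
    }
}
```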

An algorithm similar to the back-propagation algorithm may be developed to train networks whose neurons have discontinuous or non-differentiable activation functions. Such new algorithms should also have the capability to speed up the convergence of the back-propagation algorithm; modified forms of back propagation such as Quick BP, RPROP, SARPROP, and MGF-PROP can provide great help. As for the massively parallel architecture required for hardware implementation of the algorithm, this problem can be overcome by using analog computation elements such as multipliers and adders. A second solution is the use of fixed-point computation for limited-precision architectures, where a look-up table can be used for the squashing functions.