
SOAP, originally defined as Simple Object Access Protocol, is a protocol specification for exchanging structured information in the implementation of Web Services in computer networks. It relies on Extensible Markup Language (XML) for its message format, and usually relies on other Application Layer protocols, most notably Hypertext Transfer Protocol (HTTP) or Simple Mail Transfer Protocol (SMTP), for message negotiation and transmission.

Characteristics
SOAP can form the foundation layer of a web services protocol stack, providing a basic messaging framework upon which web services can be built. This XML-based protocol consists of three parts: an envelope, which defines what is in the message and how to process it; a set of encoding rules for expressing instances of application-defined datatypes; and a convention for representing procedure calls and responses. SOAP has three major characteristics: extensibility (security and WS-Routing are among the extensions under development), neutrality (SOAP can be used over any transport protocol such as HTTP, SMTP, TCP, or JMS) and independence (SOAP allows for any programming model).

As an example of how SOAP procedures can be used, a SOAP message could be sent to a web site that has web services enabled, such as a real-estate price database, with the parameters needed for a search. The site would then return an XML-formatted document with the resulting data, e.g., prices, location, features. With the data being returned in a standardized, machine-parsable format, it can then be integrated directly into a third-party web site or application.

The SOAP architecture consists of several layers of specifications for: message format, Message Exchange Patterns (MEP), underlying transport protocol bindings, message processing models, and protocol extensibility. SOAP is the successor of XML-RPC, though it borrows its transport and interaction neutrality and the envelope/header/body from elsewhere (probably from WDDX).
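As a hedged illustration of the procedure-call example above, the following Python sketch builds a SOAP 1.2 envelope and POSTs it over HTTP. The endpoint URL, operation name, and search parameters are hypothetical placeholders, not a real service.

```python
# Illustrative sketch only: endpoint, SOAPAction and message schema are invented.
import urllib.request

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <getPrices xmlns="http://example.com/realestate">
      <city>Springfield</city>
      <maxPrice>250000</maxPrice>
    </getPrices>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.com/realestate-service",                  # hypothetical endpoint
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=utf-8"},
)
with urllib.request.urlopen(req) as resp:
    # The reply is an XML document carrying the prices, locations, features, ...
    print(resp.read().decode("utf-8"))
```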

Specification
The SOAP specification defines the messaging framework which consists of:

The SOAP processing model, defining the rules for processing a SOAP message
The SOAP extensibility model, defining the concepts of SOAP features and SOAP modules
The SOAP underlying protocol binding framework, describing the rules for defining a binding to an underlying protocol that can be used for exchanging SOAP messages between SOAP nodes
The SOAP message construct, defining the structure of a SOAP message

Processing model

The SOAP processing model describes a distributed processing model, its participants, the SOAP nodes, and how a SOAP receiver processes a SOAP message. The following SOAP nodes are defined:
SOAP sender
A SOAP node that transmits a SOAP message.

SOAP receiver
A SOAP node that accepts a SOAP message.

SOAP message path
The set of SOAP nodes through which a single SOAP message passes.

Initial SOAP sender (Originator)
The SOAP sender that originates a SOAP message at the starting point of a SOAP message path.

SOAP intermediary
A SOAP intermediary is both a SOAP receiver and a SOAP sender and is targetable from within a SOAP message. It processes the SOAP header blocks targeted at it and acts to forward a SOAP message towards an ultimate SOAP receiver.

Ultimate SOAP receiver
The SOAP receiver that is a final destination of a SOAP message. It is responsible for processing the contents of the SOAP body and any SOAP header blocks targeted at it. In some circumstances, a SOAP message might not reach an ultimate SOAP receiver, for example because of a problem at a SOAP intermediary. An ultimate SOAP receiver cannot also be a SOAP intermediary for the same SOAP message.

Transport methods
Both SMTP and HTTP are valid application layer protocols used as transports for SOAP, but HTTP has gained wider acceptance as it works well with today's Internet infrastructure; specifically, HTTP works well with network firewalls. SOAP may also be used over HTTPS (which is the same protocol as HTTP at the application level, but uses an encrypted transport protocol underneath) with either simple or mutual authentication; this is the advocated WS-I method to provide web service security as stated in the WS-I Basic Profile 1.1. This is a major advantage over other distributed protocols like GIOP/IIOP or DCOM, which are normally filtered by firewalls. SOAP over AMQP is yet another possibility that some implementations support.[3] There is also the SOAP-over-UDP OASIS standard.

Message format
XML was chosen as the standard message format because of its widespread use by major corporations and open source development efforts. Additionally, a wide variety of freely available tools significantly eases the transition to a SOAP-based implementation. The somewhat lengthy syntax of XML can be both a benefit and a drawback. While it promotes readability for humans, facilitates error detection, and avoids interoperability problems such as byte-order (Endianness), it can slow processing speed and can be cumbersome. For example, CORBA, GIOP, ICE, and DCOM use much shorter, binary message formats. On the other hand, hardware appliances are available to accelerate processing of XML messages.[4][5] Binary XML is also being explored as a means for streamlining the throughput requirements of XML.

Advantages

SOAP is versatile enough to allow for the use of different transport protocols. The standard stacks use HTTP as a transport protocol, but other protocols such as JMS[6] and SMTP[7] are also usable. Since the SOAP model fits naturally into the HTTP request/response model, it can tunnel easily over existing firewalls and proxies without modifications to the SOAP protocol, and can use the existing infrastructure.

Disadvantages

Because of the verbose XML format, SOAP can be considerably slower than competing middleware technologies such as CORBA or ICE. This may not be an issue when only small messages are sent.[8] To improve performance for the special case of XML with embedded binary objects, the Message Transmission Optimization Mechanism was introduced. When relying on HTTP as a transport protocol and not using WS-Addressing or an ESB, the roles of the interacting parties are fixed. Only one party (the client) can use the services of the other. Developers must use polling instead of notification in these common cases.

Usenet is a worldwide distributed Internet discussion system. It was developed from the general-purpose UUCP architecture of the same name. Duke University graduate students Tom Truscott and Jim Ellis conceived the idea in 1979, and it was established in 1980.[1] Users read and post messages (called articles or posts, and collectively termed news) to one or more categories, known as newsgroups. Usenet resembles a bulletin board system (BBS) in many respects, and is the precursor to the various Internet forums that are widely used today. Usenet can be superficially regarded as a hybrid between email and web forums. Discussions are threaded, as with web forums and BBSes, when viewed with modern news reader software, though posts are stored on the server sequentially.

One notable difference between a BBS or web forum and Usenet is the absence of a central server and dedicated administrator. Usenet is distributed among a large, constantly changing conglomeration of servers that store and forward messages to one another in so-called news feeds. Individual users may read messages from and post messages to a local server operated by a commercial usenet provider, their Internet service provider, university, or employer.

Wide Area Information Servers or WAIS is a client-server text searching system that uses the ANSI standard "Z39.50 Information Retrieval Service Definition and Protocol Specifications for Library Applications" (Z39.50:1988) to search index databases on remote computers. It was developed in the late 1980s as a project of Thinking Machines, Apple Computer, Dow Jones, and KPMG Peat Marwick. WAIS did not adhere to either the standard or its OSI framework (adopting instead TCP/IP) but created a unique protocol inspired by Z39.50:1988.

WAIS and Gopher


Public WAIS is often used as a full-text search engine for individual Internet Gopher servers, supplementing the popular Veronica system, which only searches the menu titles of Gopher sites. WAIS and Gopher share the World Wide Web's client-server architecture and a certain amount of its functionality. The WAIS protocol is influenced largely by the Z39.50 protocol designed for networking library catalogs. It allows a text-based search, and retrieval following a search. Gopher provides a free text search mechanism, but principally uses menus. A menu is a list of titles, from which the user may pick one. While gopher space is a web containing many loops, the menu system gives the user the impression of a tree.[3] The W3 data model is similar to the gopher model, except that menus are generalized to hypertext documents. In both cases, simple file servers generate the menus or hypertext directly from the file structure of a server. The W3 hypertext model gives the program more power to communicate the options available to the reader, as it can include headings and various forms of list structure.[3]

A web portal is a website that brings information together from diverse sources in a uniform way. Usually, each information source gets its dedicated area on the page for displaying information (a portlet); often, the user can configure which ones to display. Apart from the standard search engine feature, web portals offer other services such as e-mail, news, stock prices, information, databases and entertainment. Portals provide a way for enterprises to provide a consistent look and feel with access control and procedures for multiple applications and databases, which otherwise would have been different entities altogether.

Classification
Web portals are sometimes classified as horizontal or vertical. A horizontal portal is used as a platform for several companies in the same economic sector or for the same type of manufacturers or distributors.[1] A vertical portal (also known as a "vortal") is a specialized entry point to a specific market or industry niche, subject area, or interest.[2] Some vertical portals are known as "vertical information portals" (VIPs). VIPs provide news, editorial content, digital publications, and e-commerce capabilities. In contrast to traditional vertical portals, VIPs also provide dynamic multimedia applications including social networking, video posting, and blogging.

Types of web portals


Personal portals

A personal portal is a site on the World Wide Web that typically provides personalized capabilities to its visitors, providing a pathway to other content. It is designed to use distributed applications and different numbers and types of middleware and hardware to provide services from a number of different sources. In addition, business portals are designed for sharing and collaboration in workplaces. A further business-driven requirement of portals is that the content be able to work on multiple platforms such as personal computers, personal digital assistants (PDAs), and cell phones/mobile phones. Information, news, and updates are examples of content that would be delivered through such a portal. Personal portals can be related to any specific topic, such as providing friend information on a social network or providing links to outside content that may help others beyond one's own reach of services. Portals are not limited to simply providing links. Information or content that is placed on the web may create a portal in the sense of a path to new knowledge and capabilities.
News portals

The traditional media rooms all around the world are fast adapting to new-age technologies. This marks the beginning of news portals run by media houses across the globe. These new media channels give them the opportunity to reach viewers in a shorter span of time than their print media counterparts. Examples of news web portals include:
Government web portals

At the end of the dot-com boom in the 1990s, many governments had already committed to creating portal sites for their citizens. These included primary portals to the governments as well as portals developed for specific audiences. Examples of government web portals include:

Saudi.gov.sa for Saudi Arabia.
australia.gov.au for Australia.
newzealand.govt.nz for New Zealand.
USA.gov for the United States (in English) and GobiernoUSA.gov (in Spanish).
Disability.gov for citizens with disabilities in the United States.
gov.uk for citizens and businesslink.gov.uk for businesses in the United Kingdom.
india.gov.in for India.
Europa (web portal) links to all EU agencies and institutions in addition to press releases and audiovisual content from press conferences.
The Health-EU portal gathers all relevant health topics from across Europe.
The National Resource Directory (NRD.gov) links to resources for United States Service Members, Veterans and their families.

Cultural portals

Cultural portals aggregate digitised cultural collections of galleries, libraries (see: library portal), archives and museums. This type of portal provides a point of access to invisible web cultural content that may not be indexed by standard search engines. Digitised collections can include books, artworks, photography, journals, newspapers, music, sound recordings, film, maps, diaries and letters, and archived websites, as well as the descriptive metadata associated with each type of cultural work. These portals are usually based around a specific national or regional grouping of institutions. Examples of cultural portals include:

DigitalNZ: a cultural portal led by the National Library of New Zealand focused on New Zealand digital content.
Europeana: a cultural portal for the European Union based in the National Library of the Netherlands and overseen by the Europeana Foundation.
Trove: a cultural portal led by the National Library of Australia focused on Australian content.
In development: the Digital Public Library of America.

Corporate web portals
Main article: Intranet portal

Corporate intranets became common during the 1990s. As intranets grew in size and complexity, webmasters were faced with increasing content and user management challenges. A consolidated view of company information was judged insufficient; users wanted personalization and customization. Webmasters, if skilled enough, were able to offer some capabilities, but for the most part ended up driving users away from using the intranet. Many companies began to offer tools to help webmasters manage their data, applications and information more easily, and through personalized views. Portal solutions can also include workflow management, collaboration between work groups, and policy-managed content publication. Most can allow internal and external access to specific corporate information using secure authentication or single sign-on. JSR168 Standards emerged around 2001. Java Specification Request (JSR) 168 standards allow the interoperability of portlets across different portal platforms. These standards allow portal developers, administrators and consumers to integrate standards-based portals and portlets across a variety of vendor solutions.

The concept of content aggregation seems to still be gaining momentum, and portal solutions will likely continue to evolve significantly over the next few years. The Gartner Group predicts generation 8 portals to expand on the Business Mashups concept of delivering a variety of information, tools, applications and access points through a single mechanism.[citation needed] With the increase in user-generated content, disparate data silos, and file formats, information architects and taxonomists will be required to allow users the ability to tag (classify) the data. This will ultimately cause a ripple effect where users will also be generating ad hoc navigation and information flows. Corporate portals also offer customers and employees self-service opportunities.
Stock portals

Also known as stock-share portals, stock market portals or stock exchange portals, these are web-based applications that facilitate the process of informing shareholders with substantial online data such as the latest price, asks/bids, the latest news, reports and announcements. Some stock portals use online gateways through a central depository system (CDS) for visitors to buy or sell their shares or manage their portfolios.
Search portals

Search portals aggregate results from several search engines into one page.
Tender portals

Tender portals are gateways for searching, modifying, submitting and archiving data on tenders, and for professional processing of continuous online tenders. With a tender portal, the complete tendering process (submitting proposals, assessment and administration) is done on the web. Electronic or online tendering is just carrying out the same traditional tendering process in an electronic form, using the Internet. Using online tendering, bidders can do any of the following:

Receive notification of the tenders.
Receive tender documents online.
Fill out the forms online.
Submit proposals and documents.
Submit bids online.

Hosted web portals

Hosted web portals gained popularity as a number of companies began offering them as a hosted service. The hosted portal market fundamentally changed the composition of portals. In many ways they served simply as a tool for publishing information instead of the loftier goals of integrating legacy applications or presenting correlated data from distributed databases. The early hosted portal companies such as Hyperoffice.com or the now defunct InternetPortal.com focused on collaboration and scheduling in addition to the distribution of corporate data. As hosted web portals have risen in popularity, their feature set has grown to include hosted databases, document management, email, discussion forums and more. Hosted portals automatically personalize the content generated from their modules to provide a personalized experience to their users. In this regard they have remained true to the original goals of the earlier corporate web portals. Emerging new classes of internet portals called cloud portals showcase the power of API (Application Programming Interface) rich software systems leveraging SOA (service-oriented architecture, web services, and custom data exchange) to accommodate machine-to-machine interaction, creating a more fluid user experience for connecting users spanning multiple domains during a given "session". Leading cloud portals like Nubifer Cloud Portal showcase what is possible using Enterprise Mashup and Web Service integration approaches to building cloud portals.
Domain-specific portals

A number of portals have come about that are specific to a particular domain, offering access to related companies and services; a prime example of this trend would be the growth in property portals that give access to services such as estate agents, removal firms, and solicitors that offer conveyancing. Along the same lines, industry-specific news and information portals have appeared, such as the clinical-trials-specific portal IFPMA Clinical Trials Portal.

Engineering aspects
The main concept is to present the user with a single web page that brings together or aggregates content from a number of other systems or servers. For portals that present application functionality to the user, the portal server is in reality the front piece of a server configuration that includes some connectivity to the application server. Service-Oriented Architecture (SOA) is one example of how a portal can be used to deliver application server content and functionality. The application server or architecture performs the actual functions of the application. This application server is in turn connected to database servers, and may be part of a clustered server environment. High-capacity portal configurations may include load balancing equipment. SOAP, an XML-based protocol, may be used for servers to communicate within this architecture. The server hosting the portal may only be a "pass through" for the user. By use of portlets, application functionality can be presented in any number of portal pages. For the most part, this architecture is transparent to the user. In such a scheme, security and capacity can be important features, and administrators need to ensure that only an authorized visitor or user can generate requests to the application server. If administration does not ensure this aspect, then the portal may inadvertently present vulnerabilities to various types of attacks.
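As a rough sketch of the aggregation idea (not any particular portal product), the following Python code fetches portlet fragments from several back-end servers in parallel and assembles them into a single page. All URLs and markup here are illustrative placeholders.

```python
# Minimal sketch of portal-side aggregation: the portal server fetches portlet
# content from several back-end services and assembles a single page.
# The URLs below are placeholders, not real services.
from concurrent.futures import ThreadPoolExecutor
import urllib.request

PORTLET_SOURCES = {
    "news":    "http://backend.example.com/news",
    "weather": "http://backend.example.com/weather",
    "stocks":  "http://backend.example.com/stocks",
}

def fetch(url: str) -> str:
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read().decode("utf-8")

def build_page() -> str:
    with ThreadPoolExecutor() as pool:
        fragments = dict(zip(PORTLET_SOURCES, pool.map(fetch, PORTLET_SOURCES.values())))
    # Each portlet gets its own region of the aggregated page.
    return "\n".join(f"<div class='portlet' id='{name}'>{html}</div>"
                     for name, html in fragments.items())
```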

Internet Protocol version 4 (IPv4) is the fourth version in the development of the Internet Protocol (IP) and the first version of the protocol to be widely deployed. Together with IPv6, it is at the core of standards-based internetworking methods of the Internet. IPv4 is still used to route most traffic across the Internet.[1] IPv4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition (RFC 760, January 1980). IPv4 is a connectionless protocol for use on packet-switched Link Layer networks (e.g., Ethernet). It operates on a best effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP).

Addressing
IPv4 uses 32-bit (four-byte) addresses, which limits the address space to 4,294,967,296 (2^32) addresses. Addresses were assigned to users, and the number of unassigned addresses decreased. IPv4 address exhaustion occurred on February 3, 2011. It had been significantly delayed by address changes such as classful network design, Classless Inter-Domain Routing, and network address translation (NAT). This limitation of IPv4 stimulated the development of IPv6 in the 1990s, which has been in commercial deployment since 2006. IPv4 reserves special address blocks for private networks (~18 million addresses) and multicast addresses (~270 million addresses).
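A short sketch with Python's standard ipaddress module illustrates the figures above: the 2^32 total space and the approximate sizes of the reserved private and multicast blocks.

```python
# Sketch: IPv4 address-space arithmetic with the standard ipaddress module.
import ipaddress

print(2 ** 32)                                               # 4294967296 total addresses

private_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
private_total = sum(ipaddress.ip_network(b).num_addresses for b in private_blocks)
print(private_total)                                         # 17891328, roughly 18 million

print(ipaddress.ip_network("224.0.0.0/4").num_addresses)     # 268435456 multicast, roughly 270 million
```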

Internet Protocol version 6 (IPv6) is the latest revision of the Internet Protocol (IP), the communications protocol that routes traffic across the Internet. It is intended to replace IPv4, which still carries the vast majority of Internet traffic as of 2013.[1] IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. Every device on the Internet, such as a computer or mobile telephone, must be assigned an IP address for identification and location addressing in order to communicate with other devices. With the ever-increasing number of new devices being connected to the Internet, the need arose for more addresses than IPv4 is able to accommodate. IPv6 uses a 128-bit address, allowing for 2^128, or approximately 3.4×10^38 addresses, or more than 7.9×10^28 times as many as IPv4, which uses 32-bit addresses. IPv4 allows for only approximately 4.3 billion addresses. The two protocols are not designed to be interoperable, complicating the transition to IPv6.

Technical definition
Decomposition of the IPv6 address representation into its binary form (figure caption)

IPv6, like the more commonly used IPv4 (as of 2013), is an Internet Layer protocol for packet-switched internetworking and provides end-to-end datagram transmission across multiple IP networks. It is described in Internet standard document RFC 2460, published in December 1998.[5] In addition to offering more addresses, IPv6 also implements features not present in IPv4. It simplifies aspects of address assignment (stateless address autoconfiguration), network renumbering and router announcements when changing network connectivity providers. It simplifies processing of packets by routers by placing the responsibility for packet fragmentation in the end points. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits, to facilitate an automatic mechanism for forming the host identifier from link-layer media addressing information (MAC address). Network security is also integrated into the design of the IPv6 architecture, including the option of IPsec. IPv6 does not implement interoperability features with IPv4, but essentially creates a parallel, independent network. Exchanging traffic between the two networks requires special translator gateways or other transition technologies, such as tunneling protocols like 6to4, 6in4, or Teredo.

Comparison to IPv4
On the Internet, data is transmitted in the form of network packets. IPv6 specifies a new packet format, designed to minimize packet header processing by routers.[5][21] Because the headers of IPv4 packets and IPv6 packets are significantly different, the two protocols are not interoperable. However, in most respects, IPv6 is a conservative extension of IPv4. Most transport and application-layer protocols need little or no change to operate over IPv6; exceptions are application protocols that embed internet-layer addresses, such as FTP and NTPv3, where the new address format may cause conflicts with existing protocol syntax.
Larger address space

The main advantage of IPv6 over IPv4 is its larger address space. The length of an IPv6 address is 128 bits, compared to 32 bits in IPv4.[5] The address space therefore has 2^128 or approximately 3.4×10^38 addresses. By comparison, this amounts to approximately 4.8×10^28 addresses for each of the seven billion people alive in 2011.[22] In addition, the IPv4 address space is poorly allocated, with approximately 14% of all available addresses utilized.[23] While these numbers are large, it was not the intent of the designers of the IPv6 address space to assure geographical saturation with usable addresses. Rather, the longer addresses simplify allocation of addresses, enable efficient route aggregation, and allow implementation of special addressing features. In IPv4, complex Classless Inter-Domain Routing (CIDR) methods were developed to make the best use of the small address space. The standard size of a subnet in IPv6 is 2^64 addresses, the square of the size of the entire IPv4 address space. Thus, actual address space utilization rates will be small in IPv6, but network management and routing efficiency are improved by the large subnet space and hierarchical route aggregation. Renumbering an existing network for a new connectivity provider with different routing prefixes is a major effort with IPv4.[24][25] With IPv6, however, changing the prefix announced by a few routers can in principle renumber an entire network, since the host identifiers (the least-significant 64 bits of an address) can be independently self-configured by a host.[26]
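The size comparison can be checked with a few lines of Python; the 2001:db8::/64 prefix below is the reserved documentation prefix, used here only as an example.

```python
# Sketch: comparing address-space sizes with Python integers and ipaddress.
import ipaddress

print(2 ** 32)                                    # IPv4: 4294967296 addresses
print(2 ** 128)                                   # IPv6: about 3.4e38 addresses

subnet = ipaddress.IPv6Network("2001:db8::/64")   # documentation prefix, standard /64 subnet
print(subnet.num_addresses)                       # 2**64 addresses per subnet
print(2 ** 64 == (2 ** 32) ** 2)                  # True: the square of the whole IPv4 space
```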
Multicasting

Multicasting, the transmission of a packet to multiple destinations in a single send operation, is part of the base specification in IPv6. In IPv4 this is an optional although commonly implemented feature.[27] IPv6 multicast addressing shares common features and protocols with IPv4 multicast, but also provides changes and improvements by eliminating the need for certain protocols. IPv6 does not implement traditional IP broadcast, i.e. the transmission of a packet to all hosts on the attached link using a special broadcast address, and therefore does not define broadcast addresses. In IPv6, the same result can be achieved by sending a packet to the link-local all-nodes multicast group at address ff02::1, which is analogous to IPv4 multicast to address 224.0.0.1. IPv6 also provides for new multicast implementations, including embedding rendezvous point addresses in an IPv6 multicast group address, which simplifies the deployment of inter-domain solutions.[28] In IPv4 it is very difficult for an organization to get even one globally routable multicast group assignment, and the implementation of inter-domain solutions is very arcane.[29] Unicast address assignments by a local Internet registry for IPv6 have at least a 64-bit routing prefix, yielding the smallest subnet size available in IPv6 (also 64 bits). With such an assignment it is possible to embed the unicast address prefix into the IPv6 multicast address format, while still providing a 32-bit block, the least significant bits of the address, or approximately 4.2 billion multicast group identifiers. Thus each user of an IPv6 subnet automatically has available a set of globally routable source-specific multicast groups for multicast applications.[30]
Stateless address autoconfiguration (SLAAC)
See also: IPv6 address

IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using the Neighbor Discovery Protocol via Internet Control Message Protocol version 6 (ICMPv6) router discovery messages. When first connected to a network, a host sends a link-local router solicitation multicast request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.[26] If IPv6 stateless address autoconfiguration is unsuitable for an application, a network may use stateful configuration with the Dynamic Host Configuration Protocol version 6 (DHCPv6) or hosts may be configured statically.

Routers present a special case of requirements for address configuration, as they often are sources for autoconfiguration information, such as router and prefix advertisements. Stateless configuration for routers can be achieved with a special router renumbering protocol.[31]
Network-layer security

Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread deployment first in IPv4, for which it was re-engineered. IPsec was a mandatory specification of the base IPv6 protocol suite,[5][32] but has since been made optional.[33]
Simplified processing by routers

In IPv6, the packet header and the process of packet forwarding have been simplified. Although IPv6 packet headers are at least twice the size of IPv4 packet headers, packet processing by routers is generally more efficient,[5][21] thereby extending the end-to-end principle of Internet design. Specifically:

The packet header in IPv6 is simpler than that used in IPv4, with many rarely used fields moved to separate optional header extensions.
IPv6 routers do not perform fragmentation. IPv6 hosts are required to either perform path MTU discovery, perform end-to-end fragmentation, or send packets no larger than the IPv6 default minimum MTU size of 1280 octets.
The IPv6 header is not protected by a checksum; integrity protection is assumed to be assured by both link-layer and higher-layer (TCP, UDP, etc.) error detection. UDP/IPv4 may actually have a checksum of 0, indicating no checksum; IPv6 requires UDP to have its own checksum. Therefore, IPv6 routers do not need to recompute a checksum when header fields (such as the time to live (TTL) or hop count) change. This improvement may have been made less necessary by the development of routers that perform checksum computation at link speed using dedicated hardware, but it is still relevant for software-based routers.
The TTL field of IPv4 has been renamed to Hop Limit, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.

Mobility

Unlike mobile IPv4, mobile IPv6 avoids triangular routing and is therefore as efficient as native IPv6. IPv6 routers may also allow entire subnets to move to a new router connection point without renumbering.[34]
Options extensibility

The IPv6 packet header has a fixed size (40 octets). Options are implemented as additional extension headers after the IPv6 header, which limits their size only by the size of an entire packet. The extension header mechanism makes the protocol extensible in that it allows future services for quality of service, security, mobility, and others to be added without redesign of the basic protocol.[5]

Jumbograms

IPv4 limits packets to 65,535 (2^16 - 1) octets of payload. An IPv6 node can optionally handle packets over this limit, referred to as jumbograms, which can be as large as 4,294,967,295 (2^32 - 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option header.[35]
Privacy

Like IPv4, IPv6 supports globally unique static IP addresses, which can be used to track a single device's Internet activity. Most devices are used by a single user, so a device's activity is often assumed to be equivalent to a user's activity. This causes privacy concerns in the same way that cookies can also track a user's navigation through sites. The privacy enhancements in IPv6 have been mostly developed in response to a misunderstanding.[36] Interfaces can have addresses based on the MAC address of the machine (the EUI-64 format), but this is not a requirement. Even when an address is not based on the MAC address though, the interface's address is (contrary to IPv4) usually global instead of local, which makes it much easier to identify a single user through the IP address. Privacy extensions for IPv6 have been defined to address these privacy concerns.[37] When privacy extensions are enabled, the operating system generates ephemeral IP addresses by concatenating a randomly generated host identifier with the assigned network prefix. These ephemeral addresses, instead of trackable static IP addresses, are used to communicate with remote hosts. The use of ephemeral addresses makes it difficult to accurately track a user's Internet activity by scanning activity streams for a single IPv6 address.[38] Privacy extensions are enabled by default in Windows, Mac OS X (since 10.7), and iOS (since version 4.3).[39] Some Linux distributions have enabled privacy extensions as well.[40] Privacy extensions do not protect the user from other forms of activity tracking, such as tracking cookies. Privacy extensions do little to protect the user from tracking if only one or two hosts are using a given network prefix, and the activity tracker is privy to this information. In this scenario, the network prefix is the unique identifier for tracking. Network prefix tracking is less of a concern if the user's ISP assigns a dynamic network prefix via DHCP.[41][42]
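The mechanism can be illustrated with a small, hedged Python sketch in the spirit of the privacy extensions described above: it combines an assumed /64 prefix (the reserved documentation prefix is used as a placeholder) with a randomly generated 64-bit interface identifier to form a temporary address.

```python
# Sketch only: forming an ephemeral IPv6 address from a network prefix and a
# random 64-bit interface identifier. The prefix is the documentation prefix.
import ipaddress
import secrets

prefix = ipaddress.IPv6Network("2001:db8:abcd:12::/64")   # assumed /64 prefix (placeholder)
iid = secrets.randbits(64)                                 # random 64-bit host identifier
temp_addr = ipaddress.IPv6Address(int(prefix.network_address) | iid)
print(temp_addr)                                           # e.g. 2001:db8:abcd:12:9f3a:...
```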

Simple Mail Transfer Protocol (SMTP) is an Internet standard for electronic mail (e-mail) transmission across Internet Protocol (IP) networks. SMTP was first defined by RFC 821 (1982, eventually declared STD 10),[1] and last updated by RFC 5321 (2008),[2] which includes the Extended SMTP (ESMTP) additions and is the protocol in widespread use today. SMTP uses TCP port 25. The protocol for new submissions (MSA) is effectively the same as SMTP, but it uses port 587 instead. SMTP connections secured by SSL are known by the shorthand SMTPS, though SMTPS is not a protocol in its own right.

While electronic mail servers and other mail transfer agents use SMTP to send and receive mail messages, user-level client mail applications typically use SMTP only for sending messages to a mail server for relaying. For receiving messages, client applications usually use either the Post Office Protocol (POP) or the Internet Message Access Protocol (IMAP), or a proprietary system (such as Microsoft Exchange or Lotus Notes/Domino), to access their mail box accounts on a mail server.

Protocol overview
SMTP is a connection-oriented, text-based protocol in which a mail sender communicates with a mail receiver by issuing command strings and supplying necessary data over a reliable ordered data stream channel, typically a Transmission Control Protocol (TCP) connection. An SMTP session consists of commands originated by an SMTP client (the initiating agent, sender, or transmitter) and corresponding responses from the SMTP server (the listening agent, or receiver) so that the session is opened, and session parameters are exchanged. A session may include zero or more SMTP transactions. An SMTP transaction consists of three command/reply sequences (see the example below). They are:

1. MAIL command, to establish the return address, a.k.a. Return-Path, 5321.From[citation needed], mfrom, or envelope sender. This is the address for bounce messages.
2. RCPT command, to establish a recipient of this message. This command can be issued multiple times, one for each recipient. These addresses are also part of the envelope.
3. DATA, to send the message text. This is the content of the message, as opposed to its envelope. It consists of a message header and a message body separated by an empty line. DATA is actually a group of commands, and the server replies twice: once to the DATA command proper, to acknowledge that it is ready to receive the text, and the second time after the end-of-data sequence, to either accept or reject the entire message.
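For illustration, the three command/reply sequences can be driven explicitly with Python's standard smtplib; the host name and addresses below are placeholders, and a real client would normally just call sendmail() instead of the low-level steps.

```python
# Hedged sketch of a single SMTP transaction: MAIL, RCPT and DATA issued
# explicitly. Host and addresses are placeholders.
import smtplib

message = (
    "From: alice@example.org\r\n"
    "To: bob@example.net\r\n"
    "Subject: Test\r\n"
    "\r\n"
    "Hello via SMTP.\r\n"
)

server = smtplib.SMTP("mail.example.net", 25)
server.ehlo()
print(server.mail("alice@example.org"))   # 1. MAIL FROM: envelope sender / return path
print(server.rcpt("bob@example.net"))     # 2. RCPT TO: may be repeated, one per recipient
print(server.data(message))               # 3. DATA: header + body; the server replies twice
server.quit()
```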

The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768. UDP uses a simple transmission model with a minimum of protocol mechanism.[1] It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user's program. As this is normally IP over unreliable media, there is no guarantee of delivery, ordering or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram. UDP is suitable for purposes where error checking and correction is either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.[2] If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP), which are designed for this purpose. A number of UDP's attributes make it especially suited for certain applications.

It is transaction-oriented, suitable for simple query-response protocols such as the Domain Name System or the Network Time Protocol.
It provides datagrams, suitable for modeling other protocols such as IP tunneling or Remote Procedure Call and the Network File System.
It is simple, suitable for bootstrapping or other purposes without a full protocol stack, such as DHCP and the Trivial File Transfer Protocol.
It is stateless, suitable for very large numbers of clients, such as in streaming media applications, for example IPTV.
The lack of retransmission delays makes it suitable for real-time applications such as Voice over IP, online games, and many protocols built on top of the Real Time Streaming Protocol.
It works well in unidirectional communication, and is suitable for broadcast information such as in many kinds of service discovery, and shared information such as broadcast time or the Routing Information Protocol.

UDP is a minimal message-oriented Transport Layer protocol that is documented in IETF RFC 768. UDP provides no guarantees to the upper layer protocol for message delivery and the UDP protocol layer retains no state of UDP messages once sent. For this reason, UDP is sometimes referred to as Unreliable Datagram Protocol.[4] UDP provides application multiplexing (via port numbers) and integrity verification (via checksum) of the header and payload.[5] If transmission reliability is desired, it must be implemented in the user's application.
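A minimal Python sketch shows the connectionless, datagram-oriented behaviour described above; the loopback address and port number are arbitrary examples.

```python
# Minimal UDP sketch: no handshake, no delivery guarantee, one datagram per send.
import socket

# Receiver: bind to a port and wait for one datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 50007))

# Sender: a datagram can be sent immediately, with no prior connection setup.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", ("127.0.0.1", 50007))

data, addr = recv.recvfrom(1024)    # each recvfrom returns one whole datagram
print(data, addr)

send.close()
recv.close()
```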

Email Protocols: IMAP, POP3, SMTP and HTTP


Basically, a protocol is a standard method used at each end of a communication channel in order to properly transmit information. In order to deal with your email, you must use a mail client to access a mail server. The mail client and mail server can exchange information with each other using a variety of protocols.

IMAP Protocol:
IMAP (Internet Message Access Protocol) is a standard protocol for accessing e-mail from your local server. IMAP is a client/server protocol in which e-mail is received and held for you by your Internet server. As this requires only a small data transfer, it works well even over a slow connection such as a modem. Only if you request to read a specific email message will it be downloaded from the server. You can also create and manipulate folders or mailboxes on the server, delete messages, etc.
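A hedged sketch with Python's standard imaplib shows this server-side model: the client lists unread messages and peeks only at headers while the mail itself stays on the server. Host and credentials are placeholders.

```python
# Sketch: IMAP keeps mail on the server; fetch only what you ask for.
import imaplib

imap = imaplib.IMAP4_SSL("imap.example.com")        # placeholder host
imap.login("user@example.com", "password")          # placeholder credentials
imap.select("INBOX")                                # mailboxes/folders live on the server

status, ids = imap.search(None, "UNSEEN")
for msg_id in ids[0].split():
    # Peek at headers only; the message body stays on the server until requested.
    status, header = imap.fetch(msg_id, "(BODY.PEEK[HEADER])")
    print(header[0][1].decode(errors="replace"))

imap.logout()
```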
see also IMAP.org

POP3 Protocol:
The POP3 (Post Office Protocol 3) protocol provides a simple, standardized way for users to access mailboxes and download messages to their computers. When using the POP protocol, all your e-mail messages will be downloaded from the mail server to your local computer. You can choose to leave copies of your e-mails on the server as well. The advantage is that once your messages are downloaded you can cut the internet connection and read your e-mail at your leisure without incurring further communication costs. On the other hand, you might have transferred a lot of messages (including spam or viruses) in which you are not at all interested at this point.
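By contrast, a small sketch with Python's standard poplib downloads each message in full, optionally leaving copies on the server. Host and credentials are placeholders.

```python
# Sketch: POP3 downloads whole messages; deletion on the server is optional.
import poplib

pop = poplib.POP3_SSL("pop.example.com")        # placeholder host
pop.user("user@example.com")                    # placeholder credentials
pop.pass_("password")

num_messages, total_bytes = pop.stat()
for i in range(1, num_messages + 1):
    response, lines, octets = pop.retr(i)       # download message i in full
    print(b"\r\n".join(lines).decode(errors="replace")[:200])
    # pop.dele(i)                               # uncomment to delete; omit to leave a copy on the server

pop.quit()
```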
see also POP3 Description (RFC)

SMTP Protocol:
The SMTP (Simple Mail Transfer Protocol) protocol is used by the Mail Transfer Agent (MTA) to deliver your e-mail to the recipient's mail server. The SMTP protocol can only be used to send emails, not to receive them. Depending on your network / ISP settings, you may only be able to use the SMTP protocol under certain conditions (see incoming and outgoing mail servers).
see also SMTP RFC

HTTP Protocol:
The HTTP protocol is not a protocol dedicated to email communications, but it can be used for accessing your mailbox. Also called web-based email, this protocol can be used to compose or retrieve emails from your account. Hotmail is a good example of using HTTP as an email protocol.

Protocol:
A protocol is a set of rules that defines the communication of data between two or more devices, using key elements such as syntax, semantics and timing. Let us discuss how mail transfer takes place:

SMTP-Simple Mail Transfer Protocol


SMTP is a TCP/IP protocol that is used to exchange electronic mail on the internet. This mail exchange can be between two users or within a single computer, and the messages may contain text, voice, video, graphics, etc. An email address is divided into two parts: 1. Local part: the address of the local user. 2. Domain name: the domain to which the mail has to be sent. Example: ZZZZ@XXXXX.com. Only with such an address can the destination be reached, so when a user wants to send an email, the SMTP protocol is actually used to deliver it.
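The split into local part and domain can be shown in a couple of lines of Python; the address is just the placeholder used above.

```python
# Tiny sketch: splitting an email address into its local part and domain name.
local_part, domain = "ZZZZ@XXXXX.com".rsplit("@", 1)
print(local_part)   # ZZZZ       -> identifies the mailbox on the destination system
print(domain)       # XXXXX.com  -> the domain the message has to be routed to
```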

SMTP has two components


1. User Agent (UA). 2. Mail Transfer Agent (MTA). The user agent actually creates the message, adds the additional details, and then envelopes the actual message. The function of the MTA is to deliver the envelope across multiple routers on the internet. It is not possible for a single MTA to transfer the data effectively to the destination, so SMTP uses multiple mail transfer agents to perform the task. This is how actual mail transfer takes place.

Additional/supplementary protocols
NOTE: The process described above explains how a mail transfer takes place in a simple way, i.e. with only ASCII characters, which does not support video or audio data. For advanced non-ASCII mail transfer we need additional protocols like: 1. MIME - Multipurpose Internet Mail Extensions (as the name itself suggests, it is an extension to SMTP and not a mail protocol in its own right).
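As a hedged illustration, Python's standard email package can build such a MIME message; the addresses are placeholders and the binary attachment is only sketched in a comment.

```python
# Sketch: MIME wraps non-ASCII and multi-part content so it can still travel
# over SMTP as plain text. Addresses are placeholders.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

msg = MIMEMultipart()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Report with attachment"

msg.attach(MIMEText("Résumé attached.", "plain", "utf-8"))   # non-ASCII text part
# Binary parts (audio, video, images) would be attached as MIMEAudio/MIMEImage
# objects and base64-encoded automatically.

print(msg.as_string()[:400])   # the MIME structure is encoded as plain text for SMTP
```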
