An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like a
private version of the Internet. With tunneling, companies can send private messages
through the public network, using the public network with special encryption/decryption
and other security safeguards to connect one part of their intranet to another.
Typically, larger enterprises allow users within their intranet to access the public Internet
through firewall servers that have the ability to screen messages in both directions so that
company security is maintained. When part of an intranet is made accessible to
customers, partners, suppliers, or others outside the company, that part becomes part of
an extranet.
Advantages of Intranet
1. Workforce productivity: Intranets can help users to locate and view
information faster and use applications relevant to their roles and responsibilities.
With the help of a web browser interface, users can access data held in any
database the organization wants to make available, anytime and - subject to
security provisions - from any workstation within the company, increasing
employees' ability to perform their jobs faster, more accurately, and with
confidence that they have the right information. It also helps to improve the
services provided to the users.
2. Time: With intranets, organizations can make more information available to
employees on a "pull" basis (i.e., employees can link to relevant information at a
time which suits them) rather than being deluged indiscriminately by emails.
3. Communication: Intranets can serve as powerful tools for communication within
an organization, vertically and horizontally. From a communications standpoint,
intranets are useful to communicate strategic initiatives that have a global reach
throughout the organization. The type of information that can easily be
conveyed is the purpose of the initiative and what the initiative is aiming to
achieve, who is driving the initiative, results achieved to date, and who to speak to
for more information. By providing this information on the intranet, staff have the
opportunity to keep up-to-date with the strategic focus of the organization. Some
examples of communication would be chat, email, and/or blogs. A great real-world
example of an intranet helping a company communicate is when
Nestle had a number of food processing plants in Scandinavia. Their central
support system had to deal with a number of queries every day (McGovern,
Gerry). When Nestle decided to invest in an intranet, they quickly realized the
savings. McGovern says the savings from the reduction in query calls was
substantially greater than the investment in the intranet.
4. Web publishing: Allows 'cumbersome' corporate knowledge to be maintained
and easily accessed throughout the company using hypermedia and Web
technologies. Employee manuals, benefits documents, company policies,
business standards, newsfeeds, and even training materials can be
accessed using common Internet standards (Acrobat files, Flash files, CGI
applications). Because each business unit can update the online copy of a
document, the most recent version is always available to employees using the
intranet.
5. Business operations and management: Intranets are also being used as a
platform for developing and deploying applications to support business operations
and decisions across the internetworked enterprise.
6. Cost-effective: Users can view information and data via a web browser rather than
maintaining physical documents such as procedure manuals, internal phone lists,
and requisition forms. This can potentially save the business money on printing,
document duplication, and document maintenance overhead, and it benefits the
environment as well. "PeopleSoft, a large software company, has derived significant cost
savings by shifting HR processes to the intranet". Gerry McGovern goes on to say
the manual cost of enrolling in benefits was found to be USD109.48 per
enrollment. "Shifting this process to the intranet reduced the cost per enrollment
to $21.79; a saving of 80 percent". PeopleSoft also saved some money when they
received requests for mailing address change. "For an individual to request a
change to their mailing address, the manual cost was USD17.77. The intranet
reduced this cost to USD4.87, a saving of 73 percent." PeopleSoft was just one of
the many companies that saved money by using an intranet. Another company
that saved a lot of money on expense reports was Cisco. "In 1996, Cisco
processed 54,000 reports and the amount of dollars processed was USD19
million"
7. Promote common corporate culture: Every user is viewing the same
information within the Intranet.
8. Enhance Collaboration: With information easily accessible by all authorised
users, teamwork is enabled.
9. Cross-platform Capability: Standards-compliant web browsers are available for
Windows, Mac, and UNIX.
10. Built for One Audience: Many companies dictate computer specifications,
which, in turn, may allow Intranet developers to write applications that only have
to work on one browser (no cross-browser compatibility issues).
11. Knowledge of your Audience: Being able to specifically address your "viewer"
is a great advantage. Since Intranets are user specific (requiring
database/network authentication prior to access), you know exactly who you are
interfacing with. So, you can personalize your Intranet based on role (job title,
department) or individual ("Congratulations Jane, on your 3rd year with our
company!").
12. Immediate Updates: When dealing with the public in any capacity,
laws/specifications/parameters can change. Because an Intranet can provide its
audience with "live" changes, users are never out of date, which can limit a
company's liability.
13. Supports a distributed computing architecture: The intranet can also be linked
to a company’s management information system, for example a timekeeping
system.
Disadvantages of Intranets
Management fears loss of control
Hidden or unknown complexity and costs
Management concerns
Potential for chaos
Security concerns
• Unauthorized access
• Abuse of access
• Denial of service
• Packet sniffing
Productivity concerns
• Overabundance of information
• Information overload lowers productivity
• Feeding the Intranet: Key personnel must be assigned and committed to feeding
Intranet consumers. The alternative is for your project to become the "yellow pages"
(a tool that is used as a last resort).
• Keep it current: Information that is current, relevant, informative, and useful to
the end-user is the only way to keep them coming back for more.
• Interact or "Listen": Allow your users to create content. Social networking must
be an integral part of any Intranet project, if a company is serious about providing
information to and receiving information from their employees.
• Feedback: Allow a specific forum for users to tell you what they want and what
they do not like.
Act on Feedback: The users of your Intranet are typically the employees of the company
with their finger on the pulse of your industry. Those that are in the trenches on a daily
basis will be able to tell "corporate" what trends are happening in the marketplace before
any news source. This two-way communication is critical for any successful Intranet.
Company executives must read the input and create responses based on the company's
direction. Otherwise, what is the point of any employee taking the time to respond? If an
employee submits an opinion or observation, they need to feel that they have been
heard. This can be accomplished in several ways:
• Require management to review intranet posts on a daily basis and respond to the
poster. Let them know that their post has already been addressed, is being
reviewed, or is being referred to a department head. This assures the poster that
their post has been read and is being acted upon accordingly. If they do not
receive feedback, they will discontinue posting.
• Broadcast feedback: The ideas that make it into the "this is a great idea" bucket,
should become "news-worthy". This makes the poster feel useful and encourages
others to follow.
• Log feedback by users: This information can be useful when considering an
applicant for promotion/transfer, etc. It will also let you know who is focused on
the company's benefit and not just "filling a position".
• Require executives to provide daily/weekly content: Everyone wants to hear from
the person(s) they are working for. The Executive Team needs to lead the way in
communicating the company's vision to their associates on a frequent basis (daily if
possible, and no less than weekly).
There is a defined “admin side” to the intranet, to which comparatively few people have
access. Information is reviewed before it’s published, and it’s often subject to workflow
and approvals. The intranet is structured just like a public Web site; it just
happens to be behind the firewall.
Intranet Architecture
Before discussing the architecture of Intranets, a few background concepts need to be
introduced.
Project/group information is intended for use within a specific group. It may be used to
communicate and share ideas, coordinate activities or manage the development and
approval of content that eventually will become formal. Project/Group information
generally is not listed in the enterprise-wide directories and may be protected by
passwords or other restrictions if general access might create problems.
Informal information begins to appear on the Intranet when authors and users discover
how easy it is to publish within the existing infrastructure. Informal information is not
necessarily the same thing as personal home pages. A personal folder or directory on an
Intranet server can serve as a repository for white papers, notes and concepts that may be
shared with others in the enterprise to further common interests, for the solicitation of
comments or for some other reason. Instead of making copies, the URL can be given to
the interested parties, and the latest version can be read and tracked as it changes. This
type of informal information can become a powerful stimulus for the collaborative
development of new concepts and ideas.
Content pages can take many forms. They may be static pages, like the ones you are
reading here, or they may be active pages where the page content is generated "on the
fly" from a database or other repository of information. Content pages generally are
owned by an individual. Over time expect the "form and sense" of content pages to
change as more experience is gained in the areas of non-linear documents (hyperlinking),
multimedia, modular content and integration of content and logic using applets.
Broker pages also come in more than one form, but all have the same function, to help
users find relevant information. Good broker pages serve an explicitly defined audience
or function. Many of the pages with which we already are familiar are broker pages. A
hyperlink broker page contains links to other pages, in context. It also may have a short
description of the content to which it is pointing to help the user evaluate the possibilities.
On the other hand, a search oriented broker page is not restricted to the author's scope,
but it also does not provide the same level of context to help the user formulate the
appropriate question.
Combination search and hyperlink broker pages are common today. Search engines
return the "hits" as a hyperlink broker page with weightings and first lines for context,
and hyperlink broker pages sometimes end in a specific category that is refined by
searching that defined space. It is unlikely that hyperlink broker pages ever will be
generated entirely by search engines and agents, because the context that an expert broker
provides often contains subjective or expert value in its own right. After all, not all
content is of equal quality or value for specific purposes, and even context sensitive word
searches cannot provide these qualitative assessments. As the amount of raw content
increases, we will continue to need reviewers to screen which competing content is most
useful, or the official source, for workers in our enterprise.
A special use of broker pages is for assisting with the management of web content. There
are several specific instances of these management pages. We call one instance the
"Enterprise Map" because collectively these broker pages form a hyperlinked map of all
the formal content in the organization. Other sets are used for project management,
functional management and to support content review cycles. The use of broker pages for
each of these management functions is discussed in more detail in the next section.
The Enterprise Map
A structured set of broker pages can be very useful for managing the life cycle of
published content. We call this the Enterprise Map, and while the primary audience for
this set of broker pages is management, we have discovered that end users frequently find
the Enterprise Map useful for browsing or to find content when their other broker pages
have failed them.
With the exception of the content pages at the bottom of the map, the Enterprise Map
pages consist only of links. Each page corresponds to an organization committed to the
creation and quality of a set of content pages. In today's organizations, commitments tend
to aggregate into a hierarchical pyramid, but the mapping technique also could be applied
to most any organizational model. The Enterprise Map also does not have to be based on
organization. It could be a logical map where the top level is the mission, the next level
the major focuses required to accomplish the mission, and so on, down to the content
level. Since most large organizations are starting from a pyramidal accountability
structure, that is the form of the example that follows.
Using the terminology from the previous chapter, the Enterprise Map begins with a top
page, owned by the CIO and /or CEO (with responsibility usually delegated to the Web
Administrator). This page consists of a link to the Map Page of each line of business and
major support organization in the enterprise. The Map pages at this next level are owned
by the publisher for each organization. The Publisher Pages, in turn, consist of links to
each of their Editor's Pages. The Editor's Pages may have additional pages or structure
below them created and maintained by the editor that help organize the content, but
ultimately these pages point to the formal content pages.
The Map provides a commitment (or accountability) view of all the formal content in the
enterprise. Management can start at their point in the map and follow the links to all the
content which supports the functions for which they are responsible. They also can look
at what other organizations provide and how well it integrates. Experience predicts that
when a Management Map is first implemented, and managers get involved, they are
shocked by the poor quality and incompleteness of the information for which they are
responsible. The reason is that they have never been able to easily browse all the
information and create multiple, contextual views of their own when the information was
on paper or in rigid electronic formats. The Intranet gives them this ability. Handled
properly, demonstrating this ability to managers is a great opportunity to show the
strengths of an Intranet for improving not just accessibility but information quality.
An Enterprise Map has several interesting characteristics. Once it is in place, authors and
editors can self publish, and the information automatically shows up in a logical
structure. Also, content categories and even editor level functions generally are not
affected by reorganizations, because major product lines and service areas generally are
not added or deleted. Most reorganizations shift responsibilities at higher levels in the
Map. This means that when a reorganization does occur, the Map can be adjusted
quickly, by the managers affected, by changing one or a few links. Content does not need
to be moved around. The result is a very low maintenance path to all the formal
enterprise content, without forcing publishing through a central authority that can quickly
become a bottleneck.
Shadow Maps
The Enterprise Map provides a management path to all the formally published content.
However, management also has a need to see work in progress, formal content that is not
yet completed. This is the realm of project and departmental information. A Shadow Map
can be constructed for this purpose. The Shadow Map works the same way as the
Enterprise Map, but it is not generally advertised and can be protected by passwords or
other access controls. The Shadow Map can be enhanced with a few additional Broker
Pages to assist with the management of content development.
A Shadow Map continues down to the author level. In this model, the author maintains an
Index Page that is divided into two sections, work commitments and work completed.
When the first draft of committed content is created, the author places it in his web
directory and links the item line on his Index Page to the file. As revisions are made, the
author places the latest version in the same directory with the same name so the Index
automatically points to the latest version. This does not preclude keeping back versions if
they are required. The previous version is copied and numbered as it is moved out of the
current version status. When the content completes review and goes into "production" the
author moves the item from the committed section to the completed section and redirects
the link to the permanent address of the published item. Note that this can work for
development of non-web content as well by configuring MIME types and having the
browser automatically start up the appropriate application on the client when the link is
activated.
A second Broker Page that can be added is a Project Page. This page is created by the
Project Manager and contains item lines for all the project deliverables. When the author
creates the first draft, she not only links the content file to her Index Page; she also
notifies the Project Manager of the location so the content can be linked to the
appropriate line item on the Project Page. Like the Index Page, as the content is revised
the Project Page always points to the most current version, without additional
maintenance.
In a matrix organization a third Broker Page can be created by the Functional Manager.
This page consists of links to the Index Page for each employee reporting to the
Functional Manager. This provides a quick path to the work, both in progress and
completed, of all her employees. Once again, after the structure is set up, it takes little
maintenance, with each person keeping his own information up to date.
Finally, Reviewer Pages can be created when the content is ready for review. Each
reviewer has a "Review Page," which consists of links to all the content in their review
queue. When the Editor (or whoever is responsible for managing the review process)
places the content into formal review, it is added to each reviewer's page. The reviewers
access their page when they are ready to do reviews, and by selecting a link can retrieve
and view the content. There are numerous ways the comments and comment resolution
could be handled using Internet technology. One is to funnel comments into a threaded-
discussion-group format. Automated email messages can be used to notify or remind
reviewers of deadlines and status.
The various Broker Pages discussed above are meant to create a model of the basic
management functions and how they can be structured. Whether or not the specific model
described here is used, the most effective process for managing Intranet content will use
Intranet tools and approaches.
When we first conceived of this model, there were no higher level tools to help create and
manage the pages for a process like this. Today several tools are emerging to help
manage functional sets of pages, and they can be configured to support these processes.
Some are message-based, others are centralized, shared-database models with Web front-
ends. Over time, we anticipate that a variety of vendors will offer improved tools, based
on Intranet paradigms, that are specifically tuned to support the distributed, message-
based management model. Whatever tools are chosen, the most effective are those that
help the functional managers use the Intranet to manage the development of the content
for which they are responsible, without requiring technical specialists in between. In the
beginning, many managers may find a simple static page implementation of this logical
structure more approachable than a more sophisticated automated tool.
General Brokering
Brokers are the main way users find information on an Intranet. A broker may serve
many functions. He may provide information to users in the context of specific processes,
providing structure for efficiency and consistency. He may screen large pools of content
for material relevant to a large number of employees so each one does not have to
duplicate the process. He may identify which information is considered official. Or, he
may provide interpretation of general information in the context of the organization.
Most knowledge worker jobs today involve some form of information brokering. In the
paper world the broker output often is formally sanctioned by the organization and may
be the worker's main responsibility. The same kinds of roles will evolve in the Intranet
world, and ideally the people in the role today will evolve into the electronic version of
their role. These types of formally managed broker pages can be treated as content in the
map structure described above.
Most organizations also have informal broker pages that spring up. An individual may
start the page for herself, and it gains a following, or she may identify an unfilled need
and consciously fill it. These pages can be a valuable way to identify and quickly meet
new requirements. However, until these pages are in a formal commitment (or
accountability) structure, there is no guarantee that the content is verified or that the
author will keep the content current.
Brokering Summary
Three distinct discovery paths need to be provided by the Intranet Infrastructure:
What has been missing are packages that integrate the functionality of the independent
tools, add routing and tracking, and provide the user with an interface that is easy to
configure. This appears to be changing with the appearance of companies like MKS,
Action Technologies, WebFlow and Netmosphere, which now offer web-enabled and
web-based products that support groupware, reviewer comments, routing, sign-off,
checkout-checkin and project management functionality in an open, web environment.
From a technical standpoint, there are a number of ways these interfaces can be created.
What is important is that access be provided to the authors (knowledge workers) in a way
that supports the distributed decision-making, enabling model rather than the centralized
expertise model. This means that authors who are relatively naive technically need to be
able to incorporate database managed data into their pages.
A number of tools are beginning to emerge that move in this direction. Most of the
database vendors and several other application vendors are pushing the use of their
databases to manage all the content in the web. The advantage is unique pages can be
generated automatically and easily. These are the tools that support the first model
identified above, automatic tailoring of page content. The disadvantage is that much of
the "distributed decision making" and "do for yourself" paradigm is violated. Experts are
still needed to manage and change the database schemas for innovation to occur.
A more promising approach combines a library of scripts (CGI, Java, Active-X, etc.)
residing on the hosting web-server with templates, wizards and "bots" incorporated into
WYSIWYG authoring packages (e.g., Microsoft's FrontPage). Another set of tools,
coming from the database side, automatically converts existing database schemas into
hyperlinked web pages that allow users to browse and access the data from their web
browser (e.g. Netscheme). When applications that merge these two functional approaches
begin to appear, very powerful packages will be available to content providers who need
to incorporate database information into their pages.
This approach satisfies both the "distributed decision making" and the "do for yourself"
paradigms. At the current time, these approaches do contain a "proprietary lock." The
authoring tool and web server extensions are tightly coupled and not interchangeable with
other packages. However, at this point the proprietary nature is not unduly restrictive.
First, the client remains independent of the authoring tool and server extensions. Second,
individual authors can choose to use different tools than those used by their peers, as long
as a server with their tool’s extension set is available. Third, this technology is still in the
early innovative stages, where a significant amount of knowledge needs to be gained.
This is the appropriate stage for non-standard solutions. As more knowledge is gained,
one hopes that the authoring tools will become increasingly independent either through
standardization of the script libraries or through standardization of object linking
technology.
The development of object linking standards and the availability of tools that conform to
these standards will increase the power of Intranet technology. These tools, in
conjunction with previously mentioned software that uses agents to discover and create
organized views of distributed objects, provide a promising base for supporting the
distributed decision making and implementation model. A major trend one can expect to
see is a move away from the use of database technology (or other structured technologies
like SGML) for integrating content enterprise-wide. Instead these tools will be used to
manage local content, and integration will take place as needed by linking the content
objects through Intranet standard pages.
All isolated tasks need to be collected together into a larger process.
The most important processes in a company are those that create value for a
customer. Processes can be relatively distinct, such as developing or selling products.
All of these processes must be handled in such a way that they are easy for
intranet users to perform.
Virtual Workgroups
An intranet should provide virtual workgroups so that users can work together. An
intranet can also bring together employees and partners who are geographically
isolated to work on common problems. By bringing people together, it allows them to
contribute their best to a single task. Central to the value of an intranet is the design
of virtual spaces that promote new forms of collaboration, yet this design often
receives too little attention.
Reflection of Intranet
An intranet is, in effect, a reflection of the company. By looking at a company's
intranet, people can form an impression of what the company is like. An intranet that
reflects the culture of its company will make employees feel more at home. For the
intranet to be successful, it must provide ways of empowering all employees.
HTTP Protocols
HTTP stands for Hypertext Transfer Protocol. It is a TCP/IP-based communication protocol which is used to deliver
virtually all files and other data, collectively called resources, on the World Wide Web. These resources could be HTML files,
image files, query results, or anything else.
A browser works as an HTTP client because it sends requests to an HTTP server, which is called a Web server. The Web
server then sends responses back to the client. The standard and default port for HTTP servers to listen on is 80, but it can be
changed to any other port, such as 8080.
There are three important things about HTTP of which you should be aware:
• HTTP is connectionless: After a request is made, the client disconnects from the server and waits for a response.
The server must re-establish the connection after it processes the request.
• HTTP is media independent: Any type of data can be sent by HTTP as long as both the client and server know
how to handle the data content. How content is handled is determined by the MIME specification.
• HTTP is stateless: This is a direct result of HTTP's being connectionless. The server and client are aware of each
other only during a request. Afterwards, each forgets the other. For this reason, neither the client nor the server can
retain information between different requests across web pages.
Like most network protocols, HTTP uses the client-server model: An HTTP client opens a connection and sends a request
message to an HTTP server; the server then returns a response message, usually containing the resource that was requested.
After delivering the response, the server closes the connection.
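This cycle (client opens a connection, sends a request, server returns a response and closes the connection) can be sketched end to end with Python's standard library. The sketch below is illustrative only: the page content is made up, and the server runs on a throwaway localhost port rather than a real Web server.

```python
# Minimal sketch of one HTTP request/response cycle, self-contained on localhost.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.0"   # server closes the connection after each response
    def do_GET(self):
        body = b"<p>hello</p>"
        self.send_response(200)                       # status line: HTTP/1.0 200 OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()                            # blank line ends the headers
        self.wfile.write(body)                        # message body
    def log_message(self, *args):                     # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)          # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: open a connection, send a request, read the response.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/index.html")
response = conn.getresponse()
print(response.status, response.reason)               # prints: 200 OK
print(response.read())                                # prints: b'<p>hello</p>'
server.shutdown()
```

Because the server runs HTTP/1.0, it closes the connection after delivering the response, exactly as the paragraph above describes.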
The request and response messages have a similar format, with the following structure: an initial line, zero or more header
lines, a blank line, and an optional message body.
Initial lines and headers should end in CRLF, though you should gracefully handle lines ending in just LF. (More exactly, CR
and LF here mean ASCII values 13 and 10.)
• GET is the most common HTTP method. Other methods could be POST, HEAD etc.
• The path is the part of the URL after the host name. This path is also called the request Uniform Resource Identifier
(URI). A URI is like a URL, but more general.
• The HTTP version always takes the form "HTTP/x.x", uppercase.
For example, a response's initial line (status line) might be:
HTTP/1.0 200 OK
Header Lines
Header lines provide information about the request or response, or about the object sent in the message body.
The header lines are in the usual text header format, which is: one line per header, of the form "Header-Name: value", ending
with CRLF. It's the same format used for email and news postings, defined in RFC 822.
• A header line should end in CRLF, but you should handle LF correctly.
• The header name is not case-sensitive.
• Any number of spaces or tabs may be between the ":" and the value.
• Header lines beginning with space or tab are actually part of the previous header line, folded into multiple lines for
easy reading.
An example header line:
User-agent: Mozilla/3.0Gold
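The header rules above (case-insensitive names, optional whitespace after the colon, continuation lines beginning with space or tab) can be captured in a small parser. This is a simplified sketch, not a full RFC 822 implementation, and the X-Note header in the example is invented for illustration:

```python
def parse_headers(raw):
    """Parse raw header text into a dict using the rules above:
    names are case-insensitive, whitespace after ':' is ignored, and
    lines beginning with space/tab continue the previous header."""
    headers = {}
    last = None
    for line in raw.split("\r\n"):
        if not line:                                   # blank line ends the headers
            break
        if line[0] in " \t" and last is not None:
            headers[last] += " " + line.strip()        # folded continuation line
        else:
            name, _, value = line.partition(":")
            last = name.strip().lower()                # case-insensitive name
            headers[last] = value.strip()              # drop spaces/tabs after ':'
    return headers

print(parse_headers("User-agent: Mozilla/3.0Gold\r\n"
                    "X-Note: part one\r\n"
                    "\tpart two\r\n"
                    "\r\n"))
```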
If an HTTP message includes a body, there are usually header lines in the message that describe the body. In particular:
• The Content-Type: header gives the MIME-type of the data in the body, such as text/html or image/gif.
• The Content-Length: header gives the number of bytes in the body.
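Putting these pieces together, a complete message with a body can be assembled as below. The function name, sample status line, and body are illustrative; the point is that Content-Length is computed from the body and a blank line separates the headers from it:

```python
def build_message(start_line, headers, body=b""):
    """Assemble an HTTP message: start line, header lines, blank line, body."""
    headers = dict(headers)
    if body:
        headers["Content-Length"] = str(len(body))     # byte count of the body
    head = start_line + "\r\n"
    head += "".join(f"{name}: {value}\r\n" for name, value in headers.items())
    head += "\r\n"                                     # blank line before the body
    return head.encode("ascii") + body

msg = build_message("HTTP/1.0 200 OK",
                    {"Content-Type": "text/html"},
                    b"<p>hello</p>")
```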
The request line and headers must all end with <CR><LF> (that is, a carriage return
followed by a line feed). The empty line must consist of only <CR><LF> and no other
whitespace. In the HTTP/1.1 protocol, all headers except Host are optional.
A request line containing only the path name is accepted by servers to maintain
compatibility with HTTP clients from before the HTTP/1.0 specification in RFC 1945.
9.3 GET
The GET method means retrieve whatever information (in the form of an entity) is
identified by the Request-URI. If the Request-URI refers to a data-producing process, it
is the produced data which shall be returned as the entity in the response and not the
source text of the process, unless that text happens to be the output of the process.
The semantics of the GET method change to a "conditional GET" if the request message
includes an If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match, or If-
Range header field. A conditional GET method requests that the entity be transferred
only under the circumstances described by the conditional header field(s). The
conditional GET method is intended to reduce unnecessary network usage by allowing
cached entities to be refreshed without requiring multiple requests or transferring data
already held by the client.
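A conditional GET amounts to adding one of the validator headers to an ordinary request. In this sketch the path, host name, and date are placeholder values; if the entity has not changed since the given date, the server answers 304 Not Modified and the client reuses its cached copy:

```python
def conditional_get(path, host, last_modified):
    """Build a conditional GET request: the entity is transferred only
    if it has been modified since `last_modified`."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "If-Modified-Since: {}\r\n"
            "\r\n").format(path, host, last_modified)

request = conditional_get("/index.html", "intranet.example.com",
                          "Sat, 29 Oct 1994 19:43:31 GMT")
```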
The semantics of the GET method change to a "partial GET" if the request message
includes a Range header field. The partial GET method is intended to reduce unnecessary
network usage by allowing partially-retrieved entities to be completed without
transferring data already held by the client.
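A conditional or partial GET differs from a plain GET only in the extra header fields it carries. The helper below is a hypothetical sketch that builds the request text; the function name and its parameters are assumptions for illustration:

```python
from email.utils import formatdate

def conditional_get(path, host, last_modified_ts=None, byte_range=None):
    """Build the text of a GET request, optionally conditional or partial.

    last_modified_ts: POSIX timestamp of the cached copy (If-Modified-Since).
    byte_range: (first, last) byte offsets still needed by the client (Range),
    e.g. (500, 999) to fetch only the second 500-byte chunk.
    """
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]
    if last_modified_ts is not None:
        # HTTP-date format, e.g. "Sun, 06 Nov 1994 08:49:37 GMT"
        lines.append("If-Modified-Since: " + formatdate(last_modified_ts, usegmt=True))
    if byte_range is not None:
        lines.append(f"Range: bytes={byte_range[0]}-{byte_range[1]}")
    return "\r\n".join(lines) + "\r\n\r\n"

print(conditional_get("/index.html", "www.example.com", byte_range=(500, 999)))
```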
9.4 HEAD
The HEAD method is identical to GET except that the server MUST NOT return a
message-body in the response. The metainformation contained in the HTTP headers in
response to a HEAD request SHOULD be identical to the information sent in response to
a GET request. This method can be used for obtaining metainformation about the entity
implied by the request without transferring the entity-body itself. This method is often
used for testing hypertext links for validity, accessibility, and recent modification.
The response to a HEAD request MAY be cacheable in the sense that the information
contained in the response MAY be used to update a previously cached entity from that
resource. If the new field values indicate that the cached entity differs from the current
entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or
Last-Modified), then the cache MUST treat the cache entry as stale.
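The staleness rule can be expressed as a small check. This is an illustrative sketch, not any cache library's API; header names are assumed to be lower-cased dict keys:

```python
def head_invalidates_cache(cached, head):
    """Return True if a HEAD response shows the cached entity is stale.

    A change in any of the validator fields named in the text
    (Content-Length, Content-MD5, ETag, Last-Modified) marks the
    cache entry as stale.
    """
    validators = ("content-length", "content-md5", "etag", "last-modified")
    return any(cached.get(v) != head.get(v) for v in validators)
```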
9.5 POST
The POST method is used to request that the origin server accept the entity enclosed in
the request as a new subordinate of the resource identified by the Request-URI in the
Request-Line. POST is designed to allow a uniform method to cover the following
functions:
• Annotation of existing resources;
• Posting a message to a bulletin board, newsgroup, mailing list, or similar group
of articles;
• Providing a block of data, such as the result of submitting a form, to a data-handling
process;
• Extending a database through an append operation.
The actual function performed by the POST method is determined by the server and is
usually dependent on the Request-URI. The posted entity is subordinate to that URI in
the same way that a file is subordinate to a directory containing it, a news article is
subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.
The action performed by the POST method might not result in a resource that can be
identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate
response status, depending on whether or not the response includes an entity that
describes the result.
If a resource has been created on the origin server, the response SHOULD be 201
(Created) and contain an entity which describes the status of the request and refers to the
new resource, and a Location header. Responses to this method are not cacheable, unless
the response includes appropriate Cache-Control or Expires header fields. However, the
303 response can be used to direct the user agent to retrieve a cacheable resource.
9.6 PUT
The PUT method requests that the enclosed entity be stored under the supplied Request-
URI. If the Request-URI refers to an already existing resource, the enclosed entity
SHOULD be considered as a modified version of the one residing on the origin server. If
the Request-URI does not point to an existing resource, and that URI is capable of being
defined as a new resource by the requesting user agent, the origin server can create the
resource with that URI. If a new resource is created, the origin server MUST inform the
user agent via the 201 (Created) response. If an existing resource is modified, either the
200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful
completion of the request. If the resource could not be created or modified with the
Request-URI, an appropriate error response SHOULD be given that reflects the nature of
the problem. The recipient of the entity MUST NOT ignore any Content-* (e.g. Content-
Range) headers that it does not understand or implement and MUST return a 501 (Not
Implemented) response in such cases.
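A server-side sketch of these PUT rules follows; the set of understood Content-* headers is an assumption for illustration, as is the function name:

```python
# Content-* headers this hypothetical server implements.
UNDERSTOOD = {"content-type", "content-length", "content-encoding"}

def put_status(headers, resource_exists):
    """Pick a status code for a PUT request, per the rules above.

    headers: lower-cased request header names. Any Content-* header the
    server does not implement forces 501 (Not Implemented); otherwise
    201 (Created) for a new resource or 200 (OK) for a modified one.
    """
    for name in headers:
        if name.startswith("content-") and name not in UNDERSTOOD:
            return 501   # MUST NOT ignore un-understood Content-* headers
    return 200 if resource_exists else 201
```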
If the request passes through a cache and the Request-URI identifies one or more
currently cached entities, those entries SHOULD be treated as stale. Responses to this
method are not cacheable.
The fundamental difference between the POST and PUT requests is reflected in the
different meaning of the Request-URI. The URI in a POST request identifies the resource
that will handle the enclosed entity. That resource might be a data-accepting process, a
gateway to some other protocol, or a separate entity that accepts annotations. In contrast,
the URI in a PUT request identifies the entity enclosed with the request -- the user agent
knows what URI is intended and the server MUST NOT attempt to apply the request to
some other resource. If the server desires that the request be applied to a different URI,
it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its
own decision regarding whether or not to redirect the request.
A single resource MAY be identified by many different URIs. For example, an article
might have a URI for identifying "the current version" which is separate from the URI
identifying each particular version. In this case, a PUT request on a general URI might
result in several other URIs being defined by the origin server.
Unless otherwise specified for a particular entity-header, the entity-headers in the PUT
request SHOULD be applied to the resource created or modified by the PUT.
9.7 DELETE
The DELETE method requests that the origin server delete the resource identified by the
Request-URI. This method MAY be overridden by human intervention (or other means)
on the origin server. The client cannot be guaranteed that the operation has been carried
out, even if the status code returned from the origin server indicates that the action has
been completed successfully. However, the server SHOULD NOT indicate success
unless, at the time the response is given, it intends to delete the resource or move it to an
inaccessible location.
If the request passes through a cache and the Request-URI identifies one or more
currently cached entities, those entries SHOULD be treated as stale. Responses to this
method are not cacheable.
9.8 TRACE
The TRACE method is used to invoke a remote, application-layer loopback of the
request message. The final recipient of the request SHOULD reflect the message received
back to the client as the entity-body of a 200 (OK) response. The final recipient is
either the origin server or the first proxy or gateway to receive a Max-Forwards value
of zero (0) in
the request (see section 14.31). A TRACE request MUST NOT include an entity.
TRACE allows the client to see what is being received at the other end of the request
chain and use that data for testing or diagnostic information. The value of the Via header
field (section 14.45) is of particular interest, since it acts as a trace of the request chain.
Use of the Max-Forwards header field allows the client to limit the length of the request
chain, which is useful for testing a chain of proxies forwarding messages in an infinite
loop.
If the request is valid, the response SHOULD contain the entire request message in the
entity-body, with a Content-Type of "message/http". Responses to this method MUST
NOT be cached.
9.9 CONNECT
This specification reserves the method name CONNECT for use with a proxy that can
dynamically switch to being a tunnel (e.g. SSL tunneling).
9.10 OPTIONS
The OPTIONS method represents a request for information about the communication
options available on the request/response chain identified by the Request-URI. This
method allows the client to determine the options and/or requirements associated with a
resource, or the capabilities of a server, without implying a resource action or initiating a
resource retrieval.
Responses to this method are not cacheable.
If the Request-URI is an asterisk ("*"), the OPTIONS request is intended to apply to the
server in general rather than to a specific resource. Since a server's communication
options typically depend on the resource, the "*" request is only useful as a "ping" or
"no-op" type of method; it does nothing beyond allowing the client to test the capabilities of
the server. For example, this can be used to test a proxy for HTTP/1.1 compliance (or
lack thereof).
If the Request-URI is not an asterisk, the OPTIONS request applies only to the options
that are available when communicating with that resource.
A 200 response SHOULD include any header fields that indicate optional features
implemented by the server and applicable to that resource (e.g., Allow), possibly
including extensions not defined by this specification. The response body, if any,
SHOULD also include information about the communication options. The format for
such a body is not defined by this specification, but might be defined by future
extensions to
HTTP. Content negotiation MAY be used to select the appropriate response format. If no
response body is included, the response MUST include a Content-Length field with a
field-value of "0".
The Max-Forwards request-header field MAY be used to target a specific proxy in the
request chain. When a proxy receives an OPTIONS request on an absoluteURI for which
request forwarding is permitted, the proxy MUST check for a Max-Forwards field. If the
Max-Forwards field-value is zero ("0"), the proxy MUST NOT forward the message;
instead, the proxy SHOULD respond with its own communication options. If the Max-
Forwards field-value is an integer greater than zero, the proxy MUST decrement the
field-value when it forwards the request. If no Max-Forwards field is present in the
request, then the forwarded request MUST NOT include a Max-Forwards field.
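The Max-Forwards handling above reduces to a few cases, sketched here as a hypothetical helper (headers are assumed lower-cased):

```python
def forward_options(headers):
    """Decide how a proxy handles Max-Forwards on an OPTIONS request.

    Returns ("respond", None) when the proxy must answer itself
    (Max-Forwards: 0), or ("forward", new_headers) with the field
    decremented, or left absent if it was not present in the request.
    """
    headers = dict(headers)              # copy; don't mutate the caller's dict
    if "max-forwards" not in headers:
        return ("forward", headers)      # MUST NOT add a Max-Forwards field
    value = int(headers["max-forwards"])
    if value == 0:
        return ("respond", None)         # MUST NOT forward the message
    headers["max-forwards"] = str(value - 1)   # MUST decrement when forwarding
    return ("forward", headers)
```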
Implementors should be aware that the software represents the user in their interactions
over the Internet, and should be careful to allow the user to be aware of any actions they
might take which may have an unexpected significance to themselves or others.
In particular, the convention has been established that the GET and HEAD methods
SHOULD NOT have the significance of taking an action other than retrieval. These
methods ought to be considered "safe". This allows user agents to represent other
methods, such as POST, PUT and DELETE, in a special way, so that the user is made
aware of the fact that a possibly unsafe action is being requested.
Naturally, it is not possible to ensure that the server does not generate side-effects as a
result of performing a GET request; in fact, some dynamic resources consider that a
feature. The important distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.
Methods can also have the property of "idempotence" in that (aside from error or
expiration issues) the side-effects of N > 0 identical requests is the same as for a single
request. The methods GET, HEAD, PUT and DELETE share this property. Also, the
methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently
idempotent.
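The difference can be sketched with a toy in-memory store (illustrative only): repeating an idempotent PUT leaves the store in the same state as a single PUT, while repeating a POST-style append does not:

```python
# A toy resource store; all names here are hypothetical.
store = {}

def put(uri, entity):
    """Idempotent: N > 0 identical calls have the same effect as one."""
    store[uri] = entity              # simply replaces the stored entity

def post(uri, entity):
    """Not idempotent: each identical call changes the stored state."""
    store.setdefault(uri, []).append(entity)   # appends a new subordinate
```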
However, it is possible that a sequence of several requests is non-idempotent, even if all
of the methods executed in that sequence are idempotent. (A sequence is idempotent if a
single execution of the entire sequence always yields a result that is not changed by a
reexecution of all, or part, of that sequence.) For example, a sequence is non-idempotent
if its result depends on a value that is later modified in the same sequence.
A sequence that never has side effects is idempotent, by definition (provided that no
concurrent operations are being executed on the same set of resources).
Status codes
The values of the numeric status codes for HTTP requests are as follows. The data
sections of Error, Forward, and Redirection response messages may be used to contain
human-readable diagnostic information.
Success 2xx
These codes indicate success. The body section, if present, is the object returned by the
request. It is in MIME format, and may only be in text/plain, text/html, or one of the
formats specified as acceptable in the request.
OK 200
The request was fulfilled.
CREATED 201
Following a POST command, this indicates success, but the textual part of the response
line indicates the URI by which the newly created document should be known.
Accepted 202
The request has been accepted for processing, but the processing has not been completed.
The request may or may not eventually be acted upon, as it may be disallowed when
processing actually takes place. There is no facility for status returns from asynchronous
operations such as this.
Partial Information 203
When received in the response to a GET command, this indicates that the returned
metainformation is not a definitive set of the object from a server with a copy of the
object, but is from a private overlaid web. This may include annotation information about
the object, for example.
No Response 204
The server has received the request but there is no information to send back, and the client
should stay in the same document view. This is mainly to allow input for scripts without
changing the document at the same time.
Error 4xx, 5xx
The 4xx codes are intended for cases in which the client seems to have erred, and the 5xx
codes for the cases in which the server is aware that the server has erred. It is impossible
to distinguish these cases in general, so the difference is only informational.
The body section may contain a document describing the error in human-readable form.
The document is in MIME format, and may only be in text/plain, text/html, or one of the
formats specified as acceptable in the request.
Unauthorized 401
The parameter to this message gives a specification of authorization schemes which are
acceptable. The client should retry the request with a suitable Authorization header.
PaymentRequired 402
The parameter to this message gives a specification of charging schemes acceptable. The
client may retry the request with a suitable ChargeTo header.
Forbidden 403
The request is for something forbidden. Authorization will not help.
Not found 404
The server has not found anything matching the URI given.
Internal Error 500
The server encountered an unexpected condition which prevented it from fulfilling the
request.
Service temporarily overloaded 502
The server cannot process the request due to a high load (whether HTTP servicing or
other requests). The implication is that this is a temporary condition which may be
alleviated at other times.
Gateway timeout 503
This is equivalent to Internal Error 500, but in the case of a server which is in turn
accessing some other service, this indicates that the response from the other service did
not return within a time that the gateway was prepared to wait. As, from the point of view
of the client and the HTTP transaction, the other service is hidden within the server, this
may be treated identically to Internal Error 500, but it has more diagnostic value.
Redirection 3xx
The codes in this section indicate action to be taken (normally automatically) by the
client in order to fulfill the request.
Moved 301
The data requested has been assigned a new URI; the change is permanent. (N.B. this is
an optimisation, which must, pragmatically, be included in this definition. Browsers with
link editing capability should automatically relink to the new reference, where possible.)
Found 302
The data requested actually resides under a different URL; however, the redirection may
be altered on occasion (when making links to these kinds of document, the browser
should default to using the URI of the redirection document, but have the option of
linking to the final document), as for "Forward".
Method 303
Method: <method> <url>
body-section
Note: This status code is to be specified in more detail. For the moment it is for
discussion only.
Like the found response, this suggests that the client go try another network address. In
this case, a different method may be used too, rather than GET.
The body-section contains the parameters to be used for the method. This allows a
document to be a pointer to a complex query operation.
Not Modified 304
If the client has done a conditional GET and access is allowed, but the document has not
been modified since the date and time specified in If-Modified-Since field, the server
responds with a 304 status code and does not send the document body to the client.
Response headers are as if the client had sent a HEAD request, but limited to only those
headers which make sense in this context. This means only headers that are relevant to
cache managers and which may have changed independently of the document's Last-
Modified date. Examples include Date, Server and Expires.
The purpose of this feature is to allow efficient updates of local cache information
(including relevant metainformation) without requiring the overhead of multiple HTTP
requests (e.g. a HEAD followed by a GET) and minimizing the transmittal of information
already known by the requesting client (usually a caching proxy).
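A minimal sketch of the server's 304 decision, using Python's standard HTTP-date parser; the function itself is illustrative, not taken from any framework:

```python
from email.utils import parsedate_to_datetime

def conditional_get_status(if_modified_since, last_modified):
    """Return 304 when the document is unchanged since the client's date.

    Both arguments are HTTP-date strings. 200 means the full body should
    be sent; 304 means headers only (as for a HEAD request), letting the
    client keep using its cached copy.
    """
    ims = parsedate_to_datetime(if_modified_since)
    lm = parsedate_to_datetime(last_modified)
    return 304 if lm <= ims else 200
```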
What is HTTP Persistent Connections?
HTTP persistent connections, also called HTTP keep-alive, or HTTP connection reuse, is
the idea of using the same TCP connection to send and receive multiple HTTP
requests/responses, as opposed to opening a new one for every single request/response
pair. Using persistent connections is very important for improving HTTP performance.
• Network friendly. Less network traffic, due to less setting up and tearing down
of TCP connections.
• Reduced latency on subsequent requests, due to avoidance of the initial TCP
handshake.
• Long-lasting connections allow TCP sufficient time to determine the
congestion state of the network, and thus to react appropriately.
The advantages are even more obvious with HTTPS or HTTP over SSL/TLS. There,
persistent connections may reduce the number of costly SSL/TLS handshakes needed to
establish security associations, in addition to the initial TCP connection setup.
In HTTP/1.1, persistent connections are the default behavior of any connection. That is,
unless otherwise indicated, the client SHOULD assume that the server will maintain a
persistent connection, even after error responses from the server. However, the protocol
provides means for a client and a server to signal the closing of a TCP connection.
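The default rules can be sketched as a small decision function (illustrative only; real servers also consider request bodies and protocol errors):

```python
def keep_alive(http_version, headers):
    """Decide whether to keep the TCP connection open after a response.

    HTTP/1.1 defaults to persistent unless "Connection: close" is sent;
    HTTP/1.0 defaults to closing unless "Connection: keep-alive" is sent.
    headers: lower-cased header names mapped to values.
    """
    connection = headers.get("connection", "").lower()
    if http_version == "HTTP/1.1":
        return connection != "close"     # persistent by default
    return connection == "keep-alive"    # HTTP/1.0: opt-in only
```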
SESSION STATE
Session state is a server-side tool for managing state. Every time your web app
posts back to the server, the server has to know how much of the
last web page needs to be "remembered" when the new information is sent to the
web page. The process of tracking the values of controls and variables is known as
state management.
When a page postback occurs, ASP.Net has many techniques for remembering state
information. Some of these state management methods are on the client
side and others are on the server side. Client-side methods for maintaining state
include query strings, cookies, hidden fields and view state.
Most client-side state management methods can be read by users and other programs,
meaning that user ids and passwords can be stolen. But session state sits on the
server, and the ability for other users to capture this information is reduced and in
some cases eliminated.
The importance of this method is that the server, especially in a web farm, can know if a
particular user is a new user or has already visited this web page. Imagine a web
farm, where you have multiple servers serving the same web page. How do the
servers recognize unique visitors? It is through the session id. Even if server one
gets the initial request, server two and server three can recognize user A as already
having a session in process.
Now the server can store session-specific information about the current user. Is there
highly sensitive information about the user that needs to be remembered,
such as credit card details or name, address and phone number? This information
can be kept out of the prying eyes of internet identity thieves with session state.
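A toy sketch of a shared session store follows. In a real web farm this would live in a state server or database rather than a Python dict, and all names here are hypothetical:

```python
import uuid

# Shared session store, keyed by session id; in a web farm this would be
# an out-of-process state server so every server sees the same data.
sessions = {}

def begin_session():
    """Create a session and return the id sent to the browser (e.g. in a cookie)."""
    sid = str(uuid.uuid4())
    sessions[sid] = {}
    return sid

def handle_request(server_name, sid, **data):
    """Any server in the farm can recognize and update the same session by id."""
    session = sessions.get(sid)
    if session is None:
        return f"{server_name}: new visitor"
    session.update(data)
    return f"{server_name}: returning visitor"
```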
TCP/IP MODEL
TCP/IP stands for Transmission Control Protocol/Internet Protocol, a widely
accepted and used communications protocol suite. TCP/IP has only four layers, which
roughly correspond to groups of the OSI model's layers. The Internet, many internal
business networks and some home networks use TCP/IP. TCP (Transmission Control
Protocol) is responsible for reliable delivery of data; IP (Internet Protocol) provides
addressing and routing information.
TCP/IP Layers
The four layers in TCP/IP are :
Application Layer
Transport Layer
Internet Layer
Network Interface Layer
Internet Layer
It implements routing of packets through the network. It determines the
optimal path a packet should take from the source to the destination. This layer also
handles congestion in the network and defines how to fragment a
packet into smaller packets to accommodate different media.
Transport Layer
The purpose of this layer is to provide a reliable mechanism for the exchange of data
between two processes in different computers. It ensures that data units are delivered
error-free, with no loss or duplication. It also provides
connection management: with this layer, multiple connections can be multiplexed
over a single channel.
Application Layer
Application layer interacts with application programs and is the highest level of TCP/IP
model. Application layer contains management functions to support distributed
applications. Examples of application layer are applications such as file transfer,
electronic mail, remote login etc.
The scenario is all too familiar: computer systems within an enterprise previously thought
to be isolated from the outside world become accessible through carelessness and back
doors. Your company develops a major new product in secret using its
intranet; hackers creep in and sell the details to the competition or blackmail the
enterprise.
Security has long been seen as a major sticking point in the adoption of Internet
technology in the enterprise. As networks have grown and connected to the Internet, the
spectre of the hacker has haunted managers responsible for both delivering information
within the enterprise and to its partners, and protecting it from unauthorised outsiders.
In fact, the security capabilities of the latest Internet and intranet technologies enable
companies to control the availability of information and the authenticity of that
information better than ever before. The increasing sophistication of both server and
client software means that this unprecedented level of security can be provided without
requiring users to undergo complex and bureaucratic procedures to gain legitimate access
to sites.
Firewalls
For intranet developers, restricting access to the site has been the primary security
concern. The simplest way to achieve this is to position the internal site where it cannot
be seen or accessed from the Internet at large: behind a firewall. At their simplest,
firewalls consist of software which blocks access to internal networks from the Internet.
While legitimate traffic such as email is allowed in to the mail server, programs such as
search engine spiders or FTP clients cannot access machines inside the safe boundary of
the firewall.
Firewalls also offer some protection to users venturing out from the network to the
Internet, acting as proxies to fetch web pages so that the name and IP number of
machines on the network are not revealed to web sites that they visit, preventing hackers
from learning details of the structure of the network.
While the basic firewall remains a fundamental of Internet and intranet security,
increasing levels of sophistication are required by many users as access to the corporate
intranet needs to be widened beyond those physically present on the same network.
Allowing users dial-up access behind the firewall violates basic security principles;
restricting them to the same access offered to the rest of the Internet in front of the
firewall denies them valuable services.
Intranets and extranets are often constructed using Web servers to deliver information to
users in a now-familiar form. Username/password authentication has long been used as a
mechanism for restricting access to web sites. But because these character strings are
themselves passed as clear text, capable of being intercepted and read with simple
network management tools, basic passwords do not adequately secure communications.
SSL has become fundamental to the spread of Internet commerce, and is being used for
an increasing range of transactions across the Internet. However, by default most SSL
implementations in web servers do not authenticate the client web browser. In its raw
form, therefore, SSL is best suited to the largely anonymous requirements of retailing.
One option for widening access is to set up a virtual private network (VPN) using the
Internet. A VPN uses software or hardware to encrypt all the traffic that travels over the
Internet between two predetermined end-points. This is an ideal solution where limited
access to an intranet is required, for example between two sites of the same company
requiring access to the same corporate information, or suppliers and customers
integrating their supply chains.
A potential weakness of VPN solutions is their relative inflexibility. VPNs work well for
creating fixed tunnels from one known point to another, but they are less well suited to
situations where access needs to be given on-the-fly to groups of people not necessarily
known at the outset, or who need to gain access from a variety of locations. VPN
technology at present works best for encrypting traffic between two known points that are
accepted as valid destinations for traffic: once a link has been established, the technology
is used to encrypt the information which is sent, not for establishing the validity of the
destination to which it is being sent.
As more flexible VPN access is required, the prime issue becomes that of authenticating
potential visitors to the site and the credentials that they present. Are they who they say
they are, or an impostor? With this capability it is possible to open up the system to
provide access to a wider range of partners, customers or suppliers.
Certification authorities
One solution is to use a digital certificate-based solution. Users are given access based
on their possession of certificates signed or authorised for access by or on behalf of the
server to which they wish to gain access. The certificate acts as evidence of their digital
identity. Certificates can also be combined with other access control mechanisms, such as
tokens (identification hardware carried by users) or only accepting visitors from certain
authenticated addresses.
At the moment this option is most easily achieved with a custom solution combined with
a certification authority (CA) server or external CA service, which can issue and revoke
certificates and authenticate any certificates presented in order to gain access. This can
involve a simple implementation of a public key infrastructure (PKI), a system which
establishes a hierarchy of authority for the issuance and authentication of certificates and
users presenting them.
The use of public-key based security systems requires considerable care in system design
and management. The security of the entire system is ultimately guaranteed by the
security of the key used for signing certificates at the top (commonly called the root) of
the public key infrastructure. Here specialized hardware can play a useful role.
Normally, all keys that are accessed by the server are held at some point in the main
memory of the server, where they are potentially vulnerable to attack (for example, in a
server core dump). A higher degree of protection is desirable for the most valuable keys.
A specialized hardware cryptographic module for storing and protecting the signing keys
provides an answer. The keys are stored in a strongly encrypted format. When loaded for
signing, the keys are decrypted and loaded into the memory of the secure cryptographic
module, which then performs all the signing operations on behalf of the server. The keys
are never revealed in their unencrypted form to the server, so even if an intruder manages
to access the network, the keys will remain safe. Security is further assisted by physical
design features of the module; tamper-resistant enclosures and advanced manufacturing
techniques protect the keys from physical attack.
Future of Intranet
Intranet trends follow closely on the heels of the latest Internet trends. The biggest
Internet buzzword right now is Web 2.0. Web 2.0 is all about social media and user-
generated content as opposed to the static, read-only nature of Web 1.0.
Many of the most trafficked Web sites are fueled by Web 2.0 principles. It explains the
explosion of blogs, the pre-eminence of Wikipedia and the tremendous popularity of
online social networking sites like MySpace, Facebook and LinkedIn.
Corporate intranets are getting an upgrade now that Net generation students are entering
the workplace. The Net Generation grew up in a world steeped in communications
technology. Many of them don't remember life before they had a MySpace account, and
they'd be lost without their cell phones.
Net Generation employees expect their employers to think and communicate the same
way they do. E-mail is just a start. They want to have their own company blogs and
subscribe to RSS (Really Simple Syndication) feeds from the blogs of their bosses and
coworkers. They want to help build a company Wiki and hook up with friends on a
company-wide social network.
Only recently have businesses woken up to the necessity of so-called intranet 2.0 to
attract and maintain talented young employees. According to a recent survey of chief
information officers, only 18 percent of American businesses host blogs on their intranet
and only 13 percent have launched corporate Wikis. However, 40 percent said they have
such programs in the development and testing stages [source: Prescient Digital].
Corporate intranets will take on increasing importance as more and more businesses turn
to Web-based applications to manage core business systems like SAP and PeopleSoft.
Companies are learning that on-demand Web services are cheaper to maintain and easier
to use than hosting software on their own systems. All of these Web-based applications
can be bundled into the corporate intranet where they can be accessed securely with one
network password.
Cost of Intranet
A corporate intranet can cost very little (from $3,000 to $4,000) if it is done with existing
hardware and free software that can be downloaded from the Internet. Most corporate
intranets however cost between $50,000 and $150,000 to get started. The corporation
must also budget for maintaining the intranet and this will usually cost more than what
was spent on start-up as it will involve salaries for new staff and possibly more hardware
and software as the intranet grows.
Internet Protocols
1. HTTP
2. TCP/IP
3. SMTP
4. NNTP
5. FTP
6. SOAP
7. UDP
SMTP
SMTP is short for Simple Mail Transfer Protocol and it is used to transfer e-mail
messages between computers. It is a text based protocol and in this, message text is
specified along with the recipients of the message. Simple Mail Transfer Protocol is a
'push' protocol and it cannot be used to 'pull' the messages from the server. A procedure
of queries and responses is used to send the message between the client and the server.
An end user's e-mail client or the relaying server's Mail Transport Agents can act as an
SMTP client, which initiates a TCP connection to port 25 of the server.
SMTP is used to send the message from the mail client to the mail server and an e-mail
client using the POP or IMAP is used to retrieve the message from the server.
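A hedged sketch using Python's standard library: it builds a message and shows where the SMTP "push" would happen. The addresses and mail server are placeholders, and the actual send is left commented out:

```python
from email.message import EmailMessage
import smtplib

# Build a message; the addresses below are placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Status report"
msg.set_content("SMTP pushes this text to the server; POP or IMAP pulls it back.")

# Sending would open a TCP connection to port 25 of the mail server:
# with smtplib.SMTP("mail.example.com", 25) as smtp:
#     smtp.send_message(msg)
```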
SMTP Functions
NNTP
NNTP (Network News Transfer Protocol) is the predominant protocol used by computer
clients and servers for managing the notes posted on Usenet newsgroups. NNTP replaced
the original Usenet protocol, UNIX-to-UNIX Copy Protocol (UUCP), some time ago.
NNTP servers manage the global network of collected Usenet newsgroups and include
the server at your Internet access provider. An NNTP client is included as part of
Netscape, Internet Explorer, Opera, and other Web browsers, or you may use a separate
client program called a newsreader.
FTP
FTP (File Transfer Protocol) is the generic term for a group of computer programs aimed
at facilitating the transfer of files or data from one computer to another. It originated at
the Massachusetts Institute of Technology (MIT) in the early 1970s, when mainframes,
dumb terminals and time-sharing were the standard.
Traditionally, when communication speeds were low (ranging from the then-standard
9.8 kbps to the "fast" 16.8 kbps, unlike today's broadband standards of 1 Mbps and
above), FTP was the method of choice for downloading large files from various
websites. Although FTP
programs have been improved and updated over time, the basic concepts and definitions
remain the same and are still in use today.
The primary objective in the formulation of File Transfer Protocols was to make file
transfers uncomplicated and to relieve the user of the burden of learning the details on
how the transfer is actually accomplished. The result of all these standards and rules can
be seen in today's web interactions, where pointing-and-clicking (with a mouse) initiates
a series of actions that the typical internet user does not see or even remotely understand.
Another point to bear in mind is that file transfer in FTP means exactly that: files are
copied or moved from a file server to a client computer's hard drive, and vice versa.
Files in an HTTP transfer, on the other hand, are merely viewed and can 'disappear'
when the browser is closed unless the user explicitly saves the data to the computer's
disk.
Another major difference between the two systems lies in the manner in which the data is
encoded and transmitted. FTP systems generally encode and transmit their data in binary
sets which allow for faster data transfer; HTTP systems encode their data in MIME
format, which is larger and more complex. Note that when attaching files to e-mails, the
size of the file is usually larger than the original because of the additional (typically
Base64) encoding involved.
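The size increase mentioned above is easy to demonstrate: Base64, the encoding commonly used for MIME e-mail attachments, maps every 3 input bytes to 4 output characters, so the encoded form is about a third larger than the original.

```python
import base64

original = b"x" * 3000                 # a 3000-byte "attachment"
encoded = base64.b64encode(original)   # base64, as used for MIME attachments

# 3 input bytes -> 4 output characters: 3000 bytes become 4000.
print(len(original), len(encoded))     # 3000 4000
```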
SOAP
SOAP (Simple Object Access Protocol) is a way for a program running in one kind of
operating system (such as Windows 2000) to communicate with a program in the same
or another kind of operating system (such as Linux) by using the World Wide Web's
Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the
mechanisms for information exchange. Since Web protocols are installed and available
for use by all major operating system platforms, HTTP and XML provide an already at-
hand solution to the problem of how programs running under different operating systems
in a network can communicate with each other. SOAP specifies exactly how to encode an
HTTP header and an XML file so that a program in one computer can call a program in
another computer and pass it information. It also specifies how the called program can
return a response.
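A sketch of what such a call looks like on the wire, using only the standard library. The method name `GetPrice` and its `Item` parameter are hypothetical, chosen purely for illustration; the envelope built here would be sent as the body of an HTTP POST request.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 envelope namespace
ET.register_namespace("soap", SOAP_NS)

def make_envelope(method, params):
    """Build a minimal SOAP 1.1 envelope for a remote call (illustrative only)."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    call = ET.SubElement(body, method)          # the remote procedure being called
    for name, value in params.items():          # its parameters as child elements
        ET.SubElement(call, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

xml_text = make_envelope("GetPrice", {"Item": "Apples"})
```

Because the request travels as ordinary XML inside an ordinary HTTP POST, either side can run any operating system, which is precisely the point made above.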
SOAP was developed by Microsoft, DevelopMentor, and Userland Software and has
been proposed as a standard interface to the Internet Engineering Task Force (IETF). It is
somewhat similar to the Internet Inter-ORB Protocol (IIOP), a protocol that is part of the
Common Object Request Broker Architecture (CORBA). Sun Microsystems' Remote
Method Invocation (RMI) is a similar client/server interprogram protocol between
programs written in Java.
An advantage of SOAP is that program calls are much more likely to get through firewall
servers, which screen out requests other than those for known applications (through the
designated port mechanism). Since HTTP requests are usually allowed through firewalls,
programs using SOAP stand a good chance of being able to communicate with programs
anywhere on the network.
UDP
UDP (User Datagram Protocol) is a communications protocol that offers a limited
amount of service when messages are exchanged between computers in a network that
uses the Internet Protocol (IP). UDP is an alternative to the Transmission Control
Protocol (TCP) and, together with IP, is sometimes referred to as UDP/IP. Like the
Transmission Control Protocol, UDP uses the Internet Protocol to actually get a data unit
(called a datagram) from one computer to another. Unlike TCP, however, UDP does not
provide the service of dividing a message into packets (datagrams) and reassembling it at
the other end. Specifically, UDP does not guarantee the order in which packets arrive.
This means that the application program that uses UDP must be able to make
sure that the entire message has arrived and is in the right order. Network applications
that want to save processing time because they have very small data units to exchange
(and therefore very little message reassembling to do) may prefer UDP to TCP. The
Trivial File Transfer Protocol (TFTP) uses UDP instead of TCP.
UDP provides two services not provided by the IP layer. It provides port numbers to help
distinguish different user requests and, optionally, a checksum capability to verify that
the data arrived intact.
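Both points above — the connectionless datagram exchange and the role of port numbers in distinguishing user requests — can be seen in a minimal sketch using two UDP sockets on the loopback interface:

```python
import socket

# Two UDP sockets on the loopback interface: one "server", one "client".
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = server.getsockname()[1]         # this port number identifies the service

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connection setup: each sendto() launches one self-contained datagram.
client.sendto(b"hello", ("127.0.0.1", port))

data, addr = server.recvfrom(1024)     # recvfrom() returns one whole datagram
print(data)                            # b'hello'

client.close()
server.close()
```

Note that nothing here acknowledges receipt or retransmits a lost datagram; as the text explains, an application needing those guarantees must either provide them itself or use TCP instead.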