
Internet and Internet Application

Introduction

It is a worldwide system which has the following characteristics:

• The Internet is a worldwide / global system of interconnected computer networks.
• The Internet uses the standard Internet protocol suite (TCP/IP).
• Every computer on the Internet is identified by a unique IP address.
• An IP address is a unique set of numbers (such as 110.22.33.114) which identifies a
computer's location on the network.
• A special computer, a DNS (Domain Name System) server, maps names to IP
addresses so that a user can locate a computer by name.
• For example, a DNS server will resolve the name www.tutorialspoint.com to a
particular IP address to uniquely identify the computer on which that website is
hosted.
• The Internet is accessible to every user all over the world.
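The name-to-IP lookup described above can be sketched in a few lines of Python using the standard library's resolver. This is an illustrative helper, not how DNS itself is implemented; real resolution involves caching resolvers and authoritative name servers.

```python
# A minimal sketch of resolving a hostname to an IP address via the
# system resolver, which in turn queries DNS.
import socket

def resolve(hostname: str) -> str:
    """Ask the system resolver for an IPv4 address for the given name."""
    return socket.gethostbyname(hostname)

# "localhost" resolves locally, without a network round trip:
print(resolve("localhost"))  # 127.0.0.1
# resolve("www.tutorialspoint.com") would return that site's public IP.
```

Calling `resolve` on a public hostname requires network access, which is why the runnable example uses "localhost".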

Evolution of the Internet


The structure and makeup of the Internet has adapted as the needs of its community have
changed. Today's Internet serves the largest and most diverse community of network users in
the computing world. A brief chronology and summary of significant components are
provided in this chapter to set the stage for understanding the challenges of interfacing the
Internet and the steps to build scalable internetworks.
Origins of the Internet

The Internet started as an experiment in the late 1960s by the Advanced Research Projects
Agency (ARPA, now called DARPA) of the U.S. Department of Defense. DARPA
experimented with the connection of computer networks by giving grants to multiple
universities and private companies to get them involved in the research.

In December 1969, the experimental network went online with the connection of a four-node
network connected via 56 Kbps circuits. This new technology proved to be highly reliable
and led to the creation of two similar military networks, MILNET in the U.S. and MINET in
Europe. Thousands of hosts and users subsequently connected their private networks
(universities and government) to the ARPANET, thus creating the initial "ARPA Internet."
ARPANET had an Acceptable Use Policy (AUP), which prohibited the use of the network for
commercial purposes. The ARPANET was decommissioned in 1990.

By 1985, the ARPANET was heavily used and congested. In response, the National Science
Foundation (NSF) initiated phase one development of the NSFNET. The NSFNET was
composed of multiple regional networks and peer networks (such as the NASA Science
Network) connected to a major backbone that constituted the core of the overall NSFNET.

In its earliest form, in 1986, the NSFNET created a three-tiered network architecture. The
architecture connected campuses and research organizations to regional networks, which in
turn connected to a main backbone linking six nationally funded super-computer centers. The
original links were 56 Kbps.

The links were upgraded in 1988 to faster T1 (1.544 Mbps) links as a result of the NSFNET
1987 competitive solicitation for a faster network service, awarded to Merit Network, Inc.
and its partners MCI, IBM, and the state of Michigan. The NSFNET T1 backbone connected
a total of 13 sites that included Merit, BARRNET, MIDnet, Westnet, NorthWestNet,
SESQUINET, SURAnet, NCAR (National Center for Atmospheric Research), and five NSF
supercomputer centers.

In 1990, Merit, IBM, and MCI started a new organization known as Advanced Network and
Services (ANS). Merit Network's Internet engineering group provided a policy routing
database and routing consultation and management services for the NSFNET, whereas ANS
operated the backbone routers and a Network Operation Center (NOC).

The history of the Internet begins with the development of electronic computers in the 1950s.
Initial concepts of packet networking originated in several computer science laboratories in
the United States, Great Britain, and France. The US Department of Defense awarded
contracts as early as the 1960s for packet network systems, including the development of the
ARPANET (which would become the first network to use the Internet Protocol). The first
message was sent over the ARPANET from computer science Professor Leonard Kleinrock's
laboratory at University of California, Los Angeles (UCLA) to the second network node at
Stanford Research Institute (SRI).

Packet switching networks such as ARPANET, Mark I at NPL in the UK, CYCLADES,
Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using
a variety of communications protocols. The ARPANET in particular led to the development
of protocols for internetworking, in which multiple separate networks could be joined into a
network of networks.

Access to the ARPANET was expanded in 1981 when the National Science Foundation
(NSF) funded the Computer Science Network (CSNET). In 1982, the Internet protocol suite
(TCP/IP) was introduced as the standard networking protocol on the ARPANET. In the early
1980s the NSF funded the establishment of national supercomputing centers at several
universities, and provided interconnectivity in 1986 with the NSFNET project, which also
created network access to the supercomputer sites in the United States from research and
education organizations. Commercial Internet service providers (ISPs) began to emerge in the
late 1980s. The ARPANET was decommissioned in 1990. Private connections to the Internet
by commercial entities became widespread quickly, and the NSFNET was decommissioned
in 1995, removing the last restrictions on the use of the Internet to carry commercial traffic.

Since the mid-1990s, the Internet has had a revolutionary impact on culture and commerce,
including the rise of near-instant communication by electronic mail, instant messaging, voice
over Internet Protocol (VoIP) telephone calls, two-way interactive video calls, and the World
Wide Web with its discussion forums, blogs, social networking, and online shopping sites.
The research and education community continues to develop and use advanced networks such
as NSF's very high speed Backbone Network Service (vBNS), Internet2, and National
LambdaRail. Increasing amounts of data are transmitted at higher and higher speeds over
fiber optic networks operating at 1-Gbit/s, 10-Gbit/s, or more. The Internet's takeover of the
global communication landscape was almost instant in historical terms: it only communicated
1% of the information flowing through two-way telecommunications networks in the year
1993, already 51% by 2000, and more than 97% of the telecommunicated information by
2007.[1] Today the Internet continues to grow, driven by ever greater amounts of online
information, commerce, entertainment, and social networking.

Working of the Internet:
The Internet is the network of networks around the world: a global network of computers. It
consists of millions of private, public, academic, business, and government networks. The
Internet connects millions of computers, which are called hosts. The communication protocol
used on the Internet is TCP/IP. The computers on the Internet are linked through different
communication media; the commonly used media are telephone lines, fiber optic cables,
microwave links, and satellites.
A large number of books, newspapers, magazines, encyclopedias, and other types of material
are available in electronic form on the Internet. We can find information or news about
almost anything in the world and access the latest information on any topic; in that sense, the
Internet is an ocean of knowledge. In addition to finding information, we can communicate
with other people around the world. Thanks to the Internet, our world has become a
"global village".

There is no particular organization that controls the Internet. Different networks of private
companies, government agencies, research organizations, universities, etc. are interconnected.
You can say that the Internet is a collection of millions of computers, all linked together.

A personal computer can be linked to the Internet using a phone-line modem, DSL, or a cable
modem. The modem is used to communicate with the server of an Internet Service Provider
(ISP). An ISP is a company that provides Internet connections to users. There are many ISP
companies in each country of the world, and a user has to get an Internet connection from an
ISP in order to connect to the Internet.

A computer in a business or university is usually connected to a LAN using a Network
Interface Card (or LAN card). The LAN of the university or business is connected to the
ISP's server using a high-speed line such as a T1 line. A T1 line can carry approximately
1.5 million bits per second, whereas a normal phone line using a modem can carry 30,000 to
50,000 bits per second.

The user's computer connects to the ISP's server, which in turn connects to a larger ISP. The
largest ISPs maintain fiber-optic lines, undersea cables, and satellite links. In this way, every
computer on the Internet is connected to every other computer on the Internet.
Use of Internet:

The Internet is today one of the most important parts of our daily life. There are a large
number of things that can be done using the Internet, and so it is very important. You could
say that as the Internet progresses, we progress in every sphere of life, as it not only makes
our tasks easier but also saves a lot of time. Today the Internet is used for different purposes
depending on the requirement. This article lists some of the best uses of the Internet.

The Internet has been one of the most useful technologies of modern times, helping us not
only in our daily lives but also in our personal and professional development. The Internet
helps us achieve this in several different ways.

For students and for educational purposes, the Internet is widely used to gather information,
to do research, or to add to their knowledge of any subject. Business professionals and
practitioners such as doctors also access the Internet to filter out the information they need.
The Internet is therefore the largest encyclopedia for everyone, in all age categories.

The Internet is also very useful for maintaining contact with friends and relatives who live
abroad. The easiest means of communication, such as Internet chat systems and email, are
the most common ways of keeping in touch with people around the world.

Not to forget, the Internet provides much of today's entertainment. Whether it is games,
networking, conferences, or online movies, songs, dramas, and quizzes, the Internet gives
users a great opportunity to drive the boredom from their lives.

The Internet is also used to obtain and upgrade software for project and documentation
work, as it enables the user to download a myriad of programs for a variety of purposes,
which is much easier than buying costly software CDs.

1. Communication

The easiest thing that can be done using the Internet is communicating with people living far
away from us with extreme ease. Earlier, communication used to be a daunting task, but all
that changed once the Internet came into the lives of common people. Now people can not
only chat but also hold video conferences. It has become extremely easy to contact loved
ones who are in some other part of the world. Communication is the most important gift the
Internet has given to the common man; email and social networking sites are prime examples.
It is one gift of the Internet that is cherished by everyone and has made our lives much easier.

2. Research

The next point is research. To do research, you used to have to go through hundreds of books as
well as references, which was one of the most difficult jobs to do. Since the Internet arrived,
everything is available just a click away: you simply search for the topic concerned and you get
hundreds of references that may be useful for your research. And since the Internet makes it easy
to publish your research, a large number of people can benefit from the work you have done.
Research is one activity that has gained enormously from the evolution of the Internet.

3. Education

The next point on this list is education. Yes, you read that right: education is one of the best
things the Internet can provide. There are numerous books, reference works, online help
centres, expert opinions, and other study-oriented material on the Internet that can make
learning much easier as well as a fun experience. There are many websites related to different
topics; you can visit them and gain as much knowledge as you wish. With the Internet used
for education, you no longer depend on someone else to come and teach you: there are many
tutorials available online from which you can learn a great many things very easily. There is
hardly a better use of the Internet than education, as education is the key to achieving
everything in life.

4. Financial Transaction

Next is the financial transaction, the term used when money is exchanged. With the use of
the Internet, financial transactions have become a lot easier. You no longer need to stand in a
queue at your bank's branch; instead, you can simply log in to the bank's website with the
credentials the bank has provided and carry out any financial transaction at will. With the
ability to transact easily over the Internet, you can also purchase or sell items with ease.
Online financial transactions can be considered one of the best uses of this resource.

5. Real Time Updates

Real-time updates take the fifth position here, in regard to news and other events that may be
going on in different parts of the world: with the Internet, we come to know about them
easily and without any difficulty. Various websites on the Internet provide real-time updates
in every field, be it business, sports, finance, politics, or entertainment. Many decisions are
taken based on real-time updates from various parts of the world, and this is where the
Internet is essential and helpful.

Overview of the World Wide Web (Web Server and Client)


The term WWW refers to the World Wide Web or simply the Web. The
World Wide Web consists of all the public Web sites connected to the Internet worldwide,
including the client devices (such as computers and cell phones) that access Web content.
The WWW is just one of many applications of the Internet and computer networks.

The World Wide Web (WWW, W3) is an information system of interlinked hypertext
documents that are accessed via the Internet. It has also commonly become known simply as
the Web. Individual document pages on the World Wide Web are called web pages and are
accessed with a software application running on the user's computer, commonly called a web
browser. Web pages may contain text, images, videos, and other multimedia components, as
well as web navigation features consisting of hyperlinks.

The "Web", short for "World Wide Web" (which gives us the acronym www), is the name
for one of the ways that the Internet lets people browse documents connected by hypertext
links.

The concept of the Web was perfected at CERN (Conseil Européen pour la Recherche
Nucléaire, the European Organization for Nuclear Research) in 1991 by a group of
researchers that included Tim Berners-Lee, the creator of the hyperlink, who is today
considered the father of the Web.

The principle of the Web is based on using hyperlinks to navigate between documents (called
"web pages") with a program called a browser. A web page is a simple text file written in a
markup language (called HTML) that encodes the layout of the document, graphical
elements, and links to other documents, all with the help of tags.
Besides the links which connect formatted documents to one another, the Web uses the HTTP
protocol to reach documents hosted on distant computers (called web servers, as opposed to
the client, represented by the browser). On the Internet, documents are identified by a unique
address, called a URL, which can be used to locate any resource on the Internet, no matter
which server hosts it.

Take, for example, the URL http://www.commentcamarche.net/www/www-intro.php3:

• http:// indicates that we want to browse the Web using the HTTP protocol, the default
protocol for browsing the Web. There are other protocols for other uses of the
Internet.
• www.commentcamarche.net corresponds to the address of the server that hosts the
web pages. By convention, web servers have a name that begins with www., to make
it clear that they are dedicated web servers and to make memorising the address
easier. This second part of the address is called the domain name. A website can be
hosted on several servers, each belonging to the same domain name:
www.commentcamarche.net, www2.commentcamarche.net,
intranet.commentcamarche.net, etc.
• /www/www-intro.php3 indicates where the document is located on the machine. In
this case, it is the file www-intro.php3, located in the directory www.
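The parts of the address described above can be pulled apart programmatically. A minimal sketch using Python's standard urllib.parse module:

```python
# Splitting a URL into the protocol, domain name, and document path
# described in the list above.
from urllib.parse import urlparse

url = "http://www.commentcamarche.net/www/www-intro.php3"
parts = urlparse(url)

print(parts.scheme)  # http  -> the protocol
print(parts.netloc)  # www.commentcamarche.net  -> the server / domain name
print(parts.path)    # /www/www-intro.php3  -> where the document is located
```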

Introduction to Search engine and Searching the Web

A search engine is a software system that is designed to search for information on the World
Wide Web. The search results are generally presented in a line of results often referred to as
search engine results pages (SERPs). The information may be a mix of web pages, images,
and other types of files. Some search engines also mine data available in databases or open
directories. Unlike web directories, which are maintained only by human editors, search
engines also maintain real-time information by running an algorithm on a web crawler.

1)Web Crawling
Matthew Gray's World Wide Web Wanderer (1993) was one of the first efforts to automate
the discovery of web pages. Gray's web crawler would download a web page, examine it for
links to other pages, and continue downloading the links it discovered until there were no
more links left to discover. This is how web crawlers, also called spiders, generally operate
today.
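The heart of such a spider, extracting the links a downloaded page contains, can be sketched with Python's standard library. The page content below is made up for illustration; a real crawler would add downloading, politeness delays, and URL normalization.

```python
# A minimal link extractor: the step a crawler performs on every page
# it downloads, so newly discovered links can be queued for later visits.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # Collect the href attribute of every <a> tag encountered.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/about.html">About</a> <a href="http://example.org/">Ext</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/about.html', 'http://example.org/']
```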

Because the Web is so large, search engines normally employ thousands of web crawlers that
meticulously scour the web day and night, downloading pages, looking for links to new
pages, and revisiting old pages that might have changed since they were visited last. Search
engines will often revisit pages based on their frequency of change in order to keep their
index fresh. This is necessary so search engine users can always find the most up-to-date
information on the Web.
Maintaining an accurate "snapshot" of the Web is challenging, not only because of the size
of the Web and its constantly changing content, but also because pages disappear at an
alarming rate (a problem commonly called linkrot). Brewster Kahle, founder of the Internet
Archive, estimates that web pages have an average life expectancy of only 100 days. And
some pages cannot be found by web crawling at all: pages that are not linked to by others,
pages that are password-protected, or pages generated dynamically when submitting a web
form. These pages reside in the deep Web, also called the hidden or invisible Web.

Some website owners don't want their pages indexed by search engines for any number of
reasons, so they use the Robots Exclusion Protocol (robots.txt) to tell web crawlers which
URLs are off-limits. Other website owners want to ensure certain web pages are indexed, so
they use the Sitemap Protocol, a method supported by all major search engines, to provide
the crawler with a list of URLs they want indexed. Sitemaps are especially useful in
providing the crawler with URLs it would be unable to find by crawling alone.
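Honouring robots.txt can be sketched with Python's standard urllib.robotparser. The rules below are invented for illustration; a real crawler would fetch them from the site's /robots.txt before requesting any other URL.

```python
# Checking the Robots Exclusion Protocol before crawling a URL.
from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = RobotFileParser()
rp.parse(rules)  # a crawler would normally read http://<site>/robots.txt

print(rp.can_fetch("MyCrawler", "http://example.com/private/page.html"))  # False
print(rp.can_fetch("MyCrawler", "http://example.com/public/page.html"))   # True
```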

Figure 1 below shows how a web crawler pulls from the Web and places downloaded web
resources into a local repository. The next section will examine how this repository of web
resources is then indexed and retrieved when you enter a query into a search engine.

Figure 1 - The Web is crawled and placed into a local repository where it is indexed and
retrieved when using a search engine.

2)Indexing and Ranking

When a web crawler has downloaded a web page, the search engine will index its content.
Often the stop words, words that occur very frequently like a, and, the, and to, are ignored.
Other words might be stemmed. Stemming is a technique that removes suffixes from a word
to improve the content of the index. For example, eating, eats, and eaten may all be stemmed
to eat so that a search for eat will match all its variants.
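The idea can be sketched with a tiny stop-word list and a deliberately naive suffix-stripping stemmer. Real engines use more careful algorithms, such as the Porter stemmer; the suffix rules here are invented to handle only the eat example above.

```python
# Naive stop-word removal and suffix-stripping stemming, as described above.
STOP_WORDS = {"a", "and", "the", "to"}

def naive_stem(word: str) -> str:
    # Try longer suffixes first; only strip if a reasonable stem remains.
    for suffix in ("ing", "en", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

words = ["the", "cat", "eats", "and", "eating"]
terms = [naive_stem(w) for w in words if w not in STOP_WORDS]
print(terms)  # ['cat', 'eat', 'eat']
```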

An example index (usually called an inverted index) will look something like this where the
number corresponds to a web page that contains the text:

cat > 2, 5
dog > 1, 5, 6
fish > 1, 2
bird > 4

So a query for dog would return pages 1, 5, and 6. A query for cat dog would return only
page 5, since it is the only page that contains both search terms. Some search engines provide
advanced search capabilities, so a query for cat OR dog AND NOT fish could be entered,
which would return pages 5 and 6.

The search engine also maintains multiple weights for each term. A weight might correspond
to any number of factors that determine how relevant the term is to its host web page. Term
frequency is one such weight; it measures how often a term appears in a web page. For
example, if someone searched the Web for pages about dogs, a web page containing the term
dog five times would likely be more relevant than a page containing dog just once. However,
term frequency is susceptible to spamming (or spamdexing), a technique some individuals
use to artificially manipulate a web page's ranking, so it is only one of many factors used.
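Term-frequency weighting can be sketched as a simple count. The pages here are made up, and a real engine would combine this signal with many others:

```python
# Ranking pages by how often a query term appears in them.
from collections import Counter

pages = {
    "page_a": "dog dog dog dog dog cat",
    "page_b": "dog bird fish",
}

def term_frequency(term, text):
    """Count occurrences of a term in a page's text."""
    return Counter(text.split())[term]

ranked = sorted(pages, key=lambda p: term_frequency("dog", pages[p]), reverse=True)
print(ranked)  # ['page_a', 'page_b']
```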

Another weight given to a web page is based on the context in which the term appears in the
page. If the term appears in a large, bold font or in the title of the page, it may be given more
weight than a term that appears in a regular font. A page might also be given more weight if
links pointing to the page use the term in their anchor text. In other words, a page that is
pointed to with the link text "see the dogs" is more likely to be about dogs, since the term
dogs appears in the link. This functionality has left search engines susceptible to a practice
known as Google bombing, where many individuals collude to produce the same anchor text
pointing to the same web page for humorous effect. A popular Google bomb once promoted
the White House website to the first result when searching Google for "miserable failure".
Google has since implemented an algorithmic solution capable of defusing most Google
bombs.

A final weight which most search engines will use is based on the web graph, the graph
which is created when viewing web pages as nodes and links as directed edges. Sergey Brin
and Larry Page were graduate students at Stanford University when they noted just how
important the web graph was in determining the relevancy of a web page. In 1998, they wrote
a research paper about how to measure the importance of a web page by examining a page’s
position in the web graph, in particular the page’s in-links (incoming links) and out-links
(outgoing links). Essentially, they viewed links like a citation. Good pages receive many
citations, and bad pages receive few. So pages that have in-links from many other pages are
probably more important and should rank higher than pages that few people link to. Weight
should also be given to pages based on who is pointing to them; an in-link from a highly cited
page is better than an in-link from a lowly cited page. Brin and Page named their ranking
algorithm PageRank, and it was instrumental in popularizing their new search engine called
Google. All search engines today take into account the web graph when ranking results.
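A sketch of the PageRank iteration on a tiny invented web graph. The damping factor 0.85 is the value used in Brin and Page's paper; the fixed iteration count is a simplification of their convergence test.

```python
# Simplified PageRank: rank flows along directed links from a page to
# the pages it links to, so heavily cited pages accumulate more rank.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new_rank[q] += share
            else:  # a dangling page spreads its rank evenly
                for q in pages:
                    new_rank[q] += damping * rank[p] / n
        rank = new_rank
    return rank

# A and C both link to B, so B collects the most rank:
graph = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # B
```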

Figure 2 shows an example of a web graph where web pages are nodes and links from one
page to another are directed edges. The size and color of the nodes indicate how much
PageRank the web pages have. Note that pages with high PageRank (red nodes) generally
have significantly more in-links than do pages with low PageRank (green nodes).

Figure 2 – Example web graph. Pages with higher PageRank are represented with larger, red nodes.
3)Rank Optimization

Search engines guard their weighting formulas as a trade secret since it differentiates their
service from other search engines, and they do not want content-producers (the public who
produces web pages) to “unfairly” manipulate their rankings. However, many companies rely
heavily on search engines for recommendations and customers, and their ranking on a search
engine results page (SERP) is very important. Most search engine users only examine the
first screen of results, and they view the first few results more often than the results at the
bottom of the page. This naturally pits content-producers in an adversarial role against search
engines since the producers have an economic incentive to rank highly in SERPs.
Competition for certain terms (e.g., Hawaii vacation and flight to New York) is particularly
fierce. Because of this, most search engines provide paid-inclusion or sponsored results
along with regular (organic) results. This allows companies to purchase space on a SERP for
certain terms.

An industry based on search engine optimization (SEO) thrives on improving its customers'
rankings by designing their pages to maximize the various weights discussed above and by
increasing the number and quality of incoming links. Black-hat SEOs may use a number of
questionable techniques, like spamdexing and link farms (artificially created web pages
designed to bolster the PageRank of a particular set of pages), to increase their rankings.
When detected, such behavior is often punished by search engines by removing the pages
from their index and embargoing the website for a period of time.

Vertical Search
Search engines like Google, Yahoo!, and Bing normally provide specialized types of web
search called vertical search. A few examples include:

1. Regular web search is the most popular type of search which searches the index
based on any type of web page. Other on-line textual resources like PDFs and
Microsoft Office formats are also available through regular web search.
2. News search will search only news-related websites. Typically the search results are
ordered based on age of the story.
3. Image search searches only images that were discovered when crawling the web.
Images are normally indexed by using the image’s filename and text surrounding the
image. Artificial intelligence techniques for trying to discover what is actually
pictured in the image are slowly emerging. For example, Google can now separate
images of faces and line drawings from other image types.
4. Video search searches the text accompanied by videos on the Web. Like image
search, there is heavy reliance on people to supply text which accurately describes the
video.

Other specialty searches include blog search, newsgroup search, scholarly literature search,
etc. Search engines also occasionally mix various types of search results together onto the
same SERP. Figure 3 below shows how Ask.com displays news and images along with
regular web search results when searching for harding. The blending of results from different
vertical search offerings is usually called universal search.
Figure 3 - Ask.com's universal search results.

Personalized Search
In order to provide the best possible set of search results for a searcher, many search engines
today are experimenting with techniques that take into account personal search behavior.
When searching for leopard, a user who often queries for technical information is more likely
to want to see results dealing with Leopard the operating system than leopard the animal.
Research has also shown that one third of all queries are repeat queries, and most of the time
an individual will click on the same result they clicked on before [14]. Therefore a search
engine should ideally present the previously selected result near the top of the SERP when it
recognizes that the user has entered the same query before.

Figure 4 below shows a screenshot of personalized search results via Google's SearchWiki
[15], an experiment in search personalization that Google rolled out in late 2008. The user
was able to promote results higher in the list, remove poor results from the list, and add
comments to specific results. The comment and removal functions are no longer available
today, but Google does allow users to star results that they like, and these starred results
appear prominently when the user later searches for the same content.
Figure 4 – Example of Google's SearchWiki.
As smartphones have become increasingly popular, search engines have started providing search
results based on the user's location. A location-aware search engine recognizes that when users
search for restaurants on their mobile devices, they likely want to find restaurants in their
vicinity.

List of search engines:

• Metasearch engines
• Geographically limited scope
• Semantic
• Accountancy
• Business
• Computers
• Enterprise
• Fashion
• Food/Recipes
• Genealogy
• Mobile/Handheld
• Job
• Legal
• Medical
• News
• People
• Real estate / property
• Television
Downloading Files:
The term downloading is distinguished from the related concept of streaming, which refers to
receiving data that is used nearly immediately as it arrives, while the transmission is still in
progress, and which may not be stored long-term. In a process described as downloading, by
contrast, the data is only usable once it has been received in its entirety.

Increasingly, websites that offer streaming media or media displayed in-browser, such as
YouTube, and which place restrictions on the ability of users to save these materials to their
computers after they have been received, say that downloading is not permitted. In this
context, download implies specifically "receive and save" instead of simply "receive".
However, it is also important to note that downloading is not the same as "transferring" (i.e.,
sending/receiving data between two storage devices would be a transferral of data, but
receiving data from the Internet would be considered a download).

Downloading is the transmission of a file from one computer system to another, usually
smaller computer system. From the Internet user's point-of-view, to download a file is to
request it from another computer (or from a Web page on another computer) and to receive it.

When you download a file, you transfer it from the Internet to your computer. The most commonly
downloaded files are programs, updates, or other kinds of files such as game demos, music and
video files, or documents. Downloading can also mean copying information from any source to a
computer or other device, such as copying your favorite songs to a portable music player.
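The "receive and save" sense of downloading can be sketched as a chunk-by-chunk copy. Here io.BytesIO stands in for both the network connection and the local file, so the example runs without a real server; a real download would read from a socket or HTTP response instead.

```python
# Downloading as a chunked copy: every byte is received and saved
# before the file is used, unlike streaming.
import io

def download(source, destination, chunk_size=4096):
    """Copy all bytes from a readable stream to a writable one."""
    total = 0
    while True:
        chunk = source.read(chunk_size)
        if not chunk:
            break
        destination.write(chunk)
        total += len(chunk)
    return total

remote = io.BytesIO(b"x" * 10000)  # stands in for a file on a server
local = io.BytesIO()               # stands in for a file on disk
print(download(remote, local))     # 10000
```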

To copy data (usually an entire file) from a main source to a peripheral device. The term is
often used to describe the process of copying a file from an online service or bulletin board
service (BBS) to one's own computer. Downloading can also refer to copying a file from a
network file server to a computer on the network.

In addition, the term is used to describe the process of loading a font into a laser printer. The
font is first copied from a disk to the printer's local memory. A font that has been downloaded
like this is called a soft font, to distinguish it from the hard fonts that are permanently in the
printer's memory. The opposite of download is upload, which means to copy a file from your
own computer to another computer.

Introduction to Web Browsers


A web browser is a software application that enables a user to display and interact with text, images, videos, music, and other information on a website. Text and images on a web page can contain hyperlinks to other web pages at the same or a different website. Web browsers allow a user to quickly and easily access information on many web pages at many websites by traversing these links. Web browsers format HTML information for display, so the appearance of a web page may differ between browsers.
In short, a web browser is application software for retrieving, presenting, and traversing information from one place to another on the Web.

- A web browser locates resources on the WWW (World Wide Web) by their URI (Uniform Resource Identifier).

- A web browser fetches data such as a web page, image, video, or other piece of content from the server and displays it accordingly.

- A web browser displays hyperlinks that let users navigate from one resource to another.

- In short, a web browser is the application software designed to let the user access and retrieve documents over the Internet.

Protocols and Standards

Web browsers communicate with web servers primarily using HTTP (Hypertext Transfer Protocol) to fetch web pages. HTTP allows web browsers to submit information to web servers as well as fetch web pages from them. Pages are identified by means of a URL (Uniform Resource Locator), which is treated as an address, beginning with "http://" for HTTP access.
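
The structure of a URL can be inspected with Python's standard library. This small sketch splits an example address into the scheme, host, and path that a browser uses to reach the server; the address itself is made up for illustration.

```python
# A URL is an address: scheme ("how"), host ("where"), path ("what").
from urllib.parse import urlparse

parts = urlparse("http://www.example.com/docs/index.html")
print(parts.scheme)  # http             -> protocol used to fetch the page
print(parts.netloc)  # www.example.com  -> the server to contact
print(parts.path)    # /docs/index.html -> the resource on that server
```
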

The file format for a web page is usually HTML (hyper-text markup language) and is identified
in the HTTP protocol. Most web browsers also support a variety of additional formats, such as
JPEG, PNG, and GIF image formats, and can be extended to support more through the use of
plugins. The combination of HTTP content type and URL protocol specification allows web
page designers to embed images, animations, video, sound, and streaming media into a web
page, or to make them accessible through the web page.
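
The interplay of HTTP and content types described above can be sketched with Python's standard library: a throwaway local server stands in for a real web server, and the client reads the Content-Type header to learn that the body should be rendered as HTML. The handler and page below are illustrative, not any particular real site.

```python
# Sketch: a client fetches a page over HTTP and inspects the
# Content-Type header, as a browser does before rendering.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Hello</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the example quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PageHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    content_type = resp.headers["Content-Type"]
    page = resp.read()
server.shutdown()

print(content_type)  # text/html
```
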

Popular Browsers

1)Firefox

Firefox is a very popular web browser. One of its strengths is that it is supported on many different operating systems. Firefox is also open source, which gives it a very large community of open-source developers. Firefox is known for its vast range of plugins/add-ons that let the user customize it in a variety of ways. Firefox is a product of the Mozilla Foundation. At the time of writing, the latest version was Firefox 3.

Some of Firefox’s most prominent features include tabbed browsing, a spell checker,
incremental find, live bookmarking, a download manager, and an integrated search system that
uses the user’s favorite search engine. As mentioned before, one of the best things about
Firefox is its vast amount of plugins/add-ons. Some of the most popular include NoScript
(script blocker), FoxyTunes (controls music players), Adblock Plus (ad blocker), StumbleUpon
(website discovery), DownThemAll! (download functions), and Web Developer (web tools).
2)Internet Explorer

Internet Explorer (IE, created by Microsoft) is a very prominent web browser for the Windows OS. IE is the most popular web browser, and it comes pre-installed on all Windows computers. At the time of writing, the latest version was IE7, with IE8 in beta. IE was designed to view a broad range of web pages and to provide certain features within the OS.

IE almost fully supports HTML 4.01, CSS Level 1, XML 1.0, and DOM Level 1. It has
introduced a number of proprietary extensions to many of the standards. This has resulted in a
number of web pages that can only be viewed properly using IE. It has been subject to many security vulnerabilities, just as Windows has. Much of the spyware, adware, and viruses across the Internet are made possible by exploitable bugs and flaws in the security architecture of IE. This is where drive-by downloads come into play (see the computer security lesson for more details).

3)Others

Safari (created by Apple) is a very popular web browser on Apple computers. Safari is also the native browser on the iPhone and iPod touch. Safari is available for Windows but has not gained a large share of Windows users. As of May 2008, Safari held 6.25% of market share among all web browsers.

Opera (created by the Opera Software company) is another fairly popular web browser. It
handles common Internet-related tasks. Opera also includes features such as tabbed browsing,
page zooming, mouse gestures, and an integrated download manager. Its security features
include phishing and malware protection, strong encryption when browsing secure web sites,
and the ability to easily delete private data such as cookies and browsing history. Opera runs on
Windows, OS X, and Linux.

The browser's main functionality


The main function of a browser is to present the web resource you choose, by requesting it
from the server and displaying it in the browser window. The resource is usually an HTML
document, but may also be a PDF, image, or some other type of content. The location of the
resource is specified by the user using a URI (Uniform Resource Identifier).
The way the browser interprets and displays HTML files is specified in the HTML and CSS
specifications. These specifications are maintained by the W3C (World Wide Web
Consortium) organization, which is the standards organization for the web. For years browsers
conformed to only a part of the specifications and developed their own extensions. That caused
serious compatibility issues for web authors. Today most browsers more or less conform to the specifications.
Browser user interfaces have a lot in common with each other. Among the common user
interface elements are:

• Address bar for inserting a URI
• Back and forward buttons
• Bookmarking options
• Refresh and stop buttons for refreshing or stopping the loading of current documents
• Home button that takes you to your home page

Strangely enough, the browser's user interface is not specified in any formal specification, it
just comes from good practices shaped over years of experience and by browsers imitating
each other. The HTML5 specification doesn't define UI elements a browser must have, but
lists some common elements. Among those are the address bar, status bar and tool bar. There
are, of course, features unique to a specific browser like Firefox's downloads manager.

The browser's main components are:

1. The user interface: this includes the address bar, back/forward button, bookmarking
menu, etc. Every part of the browser display except the window where you see the requested
page.
2. The browser engine: marshals actions between the UI and the rendering engine.
3. The rendering engine : responsible for displaying requested content. For example if
the requested content is HTML, the rendering engine parses HTML and CSS, and displays
the parsed content on the screen.
4. Networking: for network calls such as HTTP requests, using different implementations for different platforms behind a platform-independent interface.
5. UI backend: used for drawing basic widgets like combo boxes and windows. This
backend exposes a generic interface that is not platform specific. Underneath it uses operating
system user interface methods.
6. JavaScript interpreter. Used to parse and execute JavaScript code.
7. Data storage. This is a persistence layer. The browser may need to save all sorts of
data locally, such as cookies. Browsers also support storage mechanisms such as
localStorage, IndexedDB, WebSQL and FileSystem.
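
The rendering engine's first step, parsing HTML, can be illustrated with Python's standard-library parser. Real engines build a full DOM tree and then lay it out; this sketch only shows the tokenizing pass that turns markup into tags and text.

```python
# Sketch of the parsing step: walk the markup and collect the
# element tags and text a rendering engine would lay out.
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

parser = TagCollector()
parser.feed("<html><body><h1>Title</h1><p>Hello, world</p></body></html>")
print(parser.tags)  # ['html', 'body', 'h1', 'p']
print(parser.text)  # ['Title', 'Hello, world']
```
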

Working with E-mail:


E-mail (electronic mail) is the exchange of computer-stored messages by
telecommunication. (Some publications spell it email; we prefer the currently more
established spelling of e-mail.) E-mail messages are usually encoded in ASCII text. However,
you can also send non-text files, such as graphic images and sound files, as attachments sent
in binary streams. E-mail was one of the first uses of the Internet and is still the most popular
use. A large percentage of the total traffic over the Internet is e-mail. E-mail can also be
exchanged between online service provider users and in networks other than the Internet, both
public and private.
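
The point about non-text files travelling as attachments inside a text message can be sketched with Python's standard email package. The addresses, file name, and binary payload below are made up for illustration.

```python
# Sketch: a text e-mail with a binary attachment. The library
# base64-encodes the attachment so it fits in a text-based message.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"        # hypothetical addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Report attached"
msg.set_content("Hi Bob, the chart is attached.")

fake_image = b"\x89PNG...stand-in binary data"
msg.add_attachment(fake_image, maintype="image", subtype="png",
                   filename="chart.png")

# Once an attachment is added the message becomes multipart.
print(msg.get_content_type())  # multipart/mixed
```
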

E-mail can be distributed to lists of people as well as to individuals. A shared distribution list
can be managed by using an e-mail reflector. Some mailing lists allow you to subscribe by
sending a request to the mailing list administrator. A mailing list that is administered
automatically is called a list server.

E-mail is one of the protocols included with the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols. A popular protocol for sending e-mail is the Simple Mail Transfer Protocol (SMTP), and a popular protocol for receiving it is POP3. Both Netscape and Microsoft include an e-mail utility with their Web browsers.

How to Create an Email Account


Gmail has been increasing in popularity since it was first introduced in 2004. With the
decline of Yahoo!, AOL, and Hotmail, more and more people are moving to Google's
services. Creating a Gmail account is quick and easy, and also provides you access to other
Google products such as YouTube, Google Drive, and Google Plus.
1. Creating Your Account
Suppose you want to open an account on gmail.com. Follow the steps given below:

1. Open a web browser (Internet Explorer, Google Chrome, Mozilla Firefox, etc.).

2. Type www.gmail.com into the address bar and press Enter; the Gmail sign-in page will appear.

3. Click the "CREATE AN ACCOUNT" button. A registration form will open.

4. Fill in all the details; the user name is the desired user ID which you want to create.

5. After filling in all the details, click the "Next step" button.

6. Gmail will ask for a phone number for verification; enter your mobile number and click "Next".

7. Click the "Next step" button again and you will be taken to your inbox.

Congratulations, you have created your new Gmail ID.

Enjoy your new Gmail account. You're finished! Click on "Continue to Gmail" to access your
inbox, read your emails, and write new ones.

Use of Email
Email is one of the most important forms of communication in today's digital age. It's the way
that millions (if not billions) of people stay in touch with each other. Luckily, this form of
near-instant communication is completely free. Make a free email account today to start
sending and receiving email immediately.
Go to Gmail.com. The first step to creating an email account with Gmail, Google's free email
service, is to visit Gmail's main site. Type "gmail.com" into your browser's navigation bar, or,
alternatively, type "Gmail" into your search engine of choice and click the relevant result.

Email is used to transfer messages from one person to another. It is also used for:

1. Group discussion, by making groups in Hotmail, Yahoo, etc.
2. Staying in touch with users attached to the group.
3. Transmitting documents through attachments.
4. Group email to multiple users.
5. A convenient way of sending job applications.
6. An easy method of advertisement.
7. Receiving confirmation of a service.
8. Service subscription.
