
ISO/OSI Network Model

The standard model for networking protocols and distributed applications is the International Organization for Standardization's Open Systems Interconnection (ISO/OSI) model. It defines seven network layers.

Layer 1 - Physical
The physical layer defines the cable or physical medium itself, e.g., thinnet, thicknet, unshielded twisted pair (UTP). All media are functionally equivalent.

Layer 2 - Data Link
The data link layer defines the format of data on the network. A network data frame, aka packet, includes a checksum, source and destination addresses, and data. The largest packet that can be sent through a data link layer defines the Maximum Transmission Unit (MTU). The data link layer handles the physical and logical connections to the packet's destination, using a network interface. A host connected to an Ethernet would have an Ethernet interface to handle connections to the outside world, and a loopback interface to send packets to itself. Ethernet addresses a host using a unique, 48-bit address called its Ethernet address or Media Access Control (MAC) address. MAC addresses are usually represented as six colon-separated pairs of hex digits, e.g., 8:0:20:11:ac:85. This number is unique and is associated with a particular Ethernet device.

Layer 3 - Network
NFS (Network File System) uses the Internet Protocol (IP) as its network layer interface. IP is responsible for routing, directing datagrams from one network to another. The network layer may have to break large datagrams, larger than the MTU, into smaller packets, and the host receiving the packet will have to reassemble the fragmented datagram. IP identifies each host with a 32-bit IP address. IP addresses are written as four dot-separated decimal numbers between 0 and 255, e.g., 129.79.16.40. The leading 1 to 3 bytes of the IP address identify the network and the remaining bytes identify the host on that network. Even though IP packets are addressed using IP addresses, hardware addresses must be used to actually transport data from one host to another. The Address Resolution Protocol (ARP) is used to map an IP address to its hardware address.

Layer 4 - Transport
The transport layer subdivides the user buffer into network-buffer-sized datagrams and enforces the desired transmission control. Two transport protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), sit at the transport layer. Reliability and speed are the primary differences between these two protocols. TCP establishes connections between two hosts on the network through 'sockets', which are determined by the IP address and port number. TCP keeps track of the packet delivery order and the packets that must be resent. Maintaining this information for each connection makes TCP a stateful protocol. UDP, on the other hand, provides a low-overhead transmission service, but with less error checking. NFS is built on top of UDP because of its speed and statelessness. Statelessness simplifies crash recovery.

Layer 5 - Session
The session protocol defines the format of the data sent over the connections. NFS uses the Remote Procedure Call (RPC) for its session protocol. RPC may be built on either TCP or UDP. Login sessions use TCP, whereas NFS and broadcast use UDP.

Layer 6 - Presentation
External Data Representation (XDR) sits at the presentation level. It converts the local representation of data to its canonical form and vice versa. The canonical form uses a standard byte ordering and structure packing convention, independent of the host.

Layer 7 - Application
The application layer provides network services to the end users. Mail, ftp, telnet, DNS, NIS, and NFS are examples of network applications.
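As a small illustration of the transport-layer sockets described above, the hedged Python sketch below opens a TCP connection and prints the (IP address, port) pairs that identify each end of it. The host example.com and port 80 are placeholders for any reachable HTTP server.

```python
import socket

# Open a TCP connection; each endpoint of the socket is identified by an
# (IP address, port) pair. "example.com" and port 80 are placeholders.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    local_ip, local_port = conn.getsockname()    # our side of the connection
    remote_ip, remote_port = conn.getpeername()  # the server's side
    print(f"local socket : {local_ip}:{local_port}")
    print(f"remote socket: {remote_ip}:{remote_port}")
```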

IP address
What is an IP address?
Every device connected to the public Internet is assigned a unique number known as an Internet Protocol (IP) address. IP addresses consist of four numbers separated by periods (also called a 'dotted quad') and look something like 127.0.0.1. Since these numbers are usually assigned to internet service providers within region-based blocks, an IP address can often be used to identify the region or country from which a computer is connecting to the Internet, and so can sometimes be used to show the user's general location.

Because the numbers may be tedious to deal with, an IP address may also be assigned to a host name, which is sometimes easier to remember. Host names may be looked up to find IP addresses, and vice versa.

At one time ISPs issued one IP address to each user; these are called static IP addresses. Because there is a limited number of IP addresses and usage of the Internet has increased, ISPs now issue IP addresses dynamically out of a pool of IP addresses (using DHCP); these are referred to as dynamic IP addresses. This also limits the ability of the user to host websites, mail servers, ftp servers, etc. In addition to users connecting to the internet, with virtual hosting, a single machine can act like multiple machines (with multiple domain names and IP addresses).
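The lookup in both directions mentioned above (host name to IP address and back) can be sketched with Python's standard socket module; example.com is a placeholder host name.

```python
import socket

hostname = "example.com"            # placeholder host name

# Forward lookup: host name -> IP address
ip = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip}")

# Reverse lookup: IP address -> host name (may fail if no PTR record exists)
try:
    name, aliases, addresses = socket.gethostbyaddr(ip)
    print(f"{ip} maps back to {name}")
except socket.herror:
    print(f"no reverse DNS entry for {ip}")
```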

Significance of Subnet Mask


The subnet mask is the network address plus the bits reserved for identifying the subnetwork. It is called a mask because it can be used to identify the subnet to which an IP address belongs by performing a bitwise AND operation on the mask and the IP address. More simply, it is required to tell the computer how the network administrator has created the network. The subnet mask is actually a 32-bit number, the same as an IP address, and is composed of four octets. Each of the eight bits in an octet can be either 1 or 0. Unlike an IP address, a subnet mask consists of a number of ones followed by a number of zeros. For example, 11111111.11111111.11111111.00000000 is a valid subnet mask, whereas 11111111.11111111.11111111.01110000 is NOT valid.

Since all those 1s and 0s are hard to read and type, subnet masks are translated into one of two formats. One option is to just give the number of consecutive 1s; the other is to convert each octet from a binary number to a decimal number. Thus, 11111111.11111111.11111111.11000000 is a valid mask; it can also be written as /26 or as 255.255.255.192. As you can see, the mask is not restricted to 255.255.255.0. Each of the following subnet masks is used at some point within our network:
/8 : 255.0.0.0 - used for route summarisations
/16 : 255.255.0.0 - used for route summarisations
/20 : 255.255.240.0 - block size assigned to large buildings
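The bitwise AND described earlier, and the /26 notation above, can be checked with Python's ipaddress module, as in the small sketch below; the host address 192.168.1.77 is an arbitrary illustrative choice.

```python
import ipaddress

# An arbitrary host address with a /26 mask (255.255.255.192).
iface = ipaddress.ip_interface("192.168.1.77/26")

print(iface.network.netmask)   # 255.255.255.192
print(iface.network)           # 192.168.1.64/26 -- the subnet this host belongs to

# The same result by hand: bitwise AND of the address and the mask.
addr = int(ipaddress.ip_address("192.168.1.77"))
mask = int(ipaddress.ip_address("255.255.255.192"))
print(ipaddress.ip_address(addr & mask))   # 192.168.1.64
```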

PROXY SERVER
In computer networks, a proxy server is a server (a computer system or an application program) which services the requests of its clients by forwarding requests to other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server provides the resource by connecting to the specified server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server at all. In this case, it 'caches' the response to the first request to the remote server, saving the information so it can answer later requests as quickly as possible.
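The hedged sketch below shows a client sending its request through a proxy rather than directly to the origin server, using Python's standard library. The proxy address 127.0.0.1:3128 (3128 is the conventional Squid port) and the target URL are placeholders for whatever proxy and site are actually in use.

```python
import urllib.request

# Route HTTP requests through a proxy instead of contacting the origin
# server directly.  127.0.0.1:3128 is a placeholder proxy address.
proxy = urllib.request.ProxyHandler({"http": "http://127.0.0.1:3128"})
opener = urllib.request.build_opener(proxy)

with opener.open("http://example.com/", timeout=10) as response:
    print(response.status, response.reason)    # e.g. 200 OK
    body = response.read()
    print(len(body), "bytes received via the proxy")
```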

A proxy server that passes all requests and replies unmodified is usually called a gateway or sometimes a tunneling proxy. A proxy server can be placed on the user's local computer or at specific key points between the user and the destination servers or the Internet.

Caching proxy server
A proxy server can service requests without contacting the specified server, by retrieving content saved from a previous request made by the same client or even by other clients. This is called caching. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance. There are well-defined rules for caching. Some poorly implemented caching proxies have had downsides (e.g., an inability to use user authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).

Web proxy
A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web proxy is to serve as a web cache. Most proxy programs (e.g., Squid, NetCache) provide a means to deny access to certain URLs in a blacklist, thus providing content filtering. This is usually used in a corporate environment, though with the increasing use of Linux in small businesses and homes, this function is no longer confined to large corporations. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell phones and PDAs).

Content Filtering Web Proxy
A content filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy. Common methods used for content filtering include URL or DNS blacklists, URL regex filtering, MIME filtering, and content keyword filtering. Some products have been known to employ content analysis techniques to look for traits commonly used by certain types of content providers. A content filtering proxy will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.

Anonymizing proxy server
An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. These can easily be overridden by site administrators, and thus rendered useless in some cases. There are different varieties of anonymizers.

Access control
Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage by individuals.

Hostile proxy
Proxies can also be installed by online criminals in order to eavesdrop upon the data flow between the client machine and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should be changed if an unauthorized proxy is detected.

Intercepting proxy server
An intercepting proxy (also known as a "transparent proxy") combines a proxy server with a gateway. Connections made by client browsers through the gateway are redirected through the proxy without client-side configuration (or often knowledge). Intercepting proxies are commonly used in businesses to prevent avoidance of the acceptable use policy and to ease administrative burden, since no client browser configuration is required. It is often possible to detect the use of an intercepting proxy server by comparing the external IP address to the address seen by an external web server, or by examining the HTTP headers on the server side.

Transparent and non-transparent proxy server
The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy" (because the client does not need to configure a proxy and cannot directly detect that its requests are being proxied).

However, RFC 2616 (Hypertext Transfer Protocol -- HTTP/1.1) offers different definitions: "A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification". "A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering".

Forced proxy
The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting proxy" (because the user is forced to configure a proxy in order to access the Internet). Forced proxy operation is sometimes necessary due to issues with the interception of TCP connections and HTTP. For instance, interception of HTTP requests can affect the usability of a proxy cache and can greatly affect certain authentication mechanisms. This is primarily because the client thinks it is talking to a server, and so request headers required by a proxy cannot be distinguished from headers that may be required by an upstream server (especially authorization headers). Also, the HTTP specification prohibits caching of responses where the request contained an authorization header.

Open proxy server
Because proxies might be used for abuse, system administrators have developed a number of ways to refuse service to open proxies. Many IRC networks automatically test client systems for known types of open proxy. Likewise, an email server may be configured to automatically test e-mail senders for open proxies. Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and SORBS. The ethics of automatically testing clients for open proxies are controversial. Some experts, such as Vernon Schryver, consider such testing to be equivalent to an attacker portscanning the client host. [1] Others consider the client to have solicited the scan by connecting to a server whose terms of service include testing.
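The DNSBLs mentioned above are queried over ordinary DNS: the octets of the client's IP address are reversed and prefixed to the blocklist zone, and a successful lookup means the address is listed. The Python sketch below illustrates the mechanism; the zone name dnsbl.example is a placeholder, not a real blocklist.

```python
import socket

def is_listed(ip: str, zone: str = "dnsbl.example") -> bool:
    """Check an IPv4 address against a DNS-based blocklist (DNSBL).

    The query name is the reversed IP address prefixed to the zone,
    e.g. 4.3.2.1.dnsbl.example for 1.2.3.4.  A successful resolution
    means the address is listed; a lookup failure means it is not.
    """
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{zone}"
    try:
        socket.gethostbyname(query)
        return True           # the DNSBL returned an answer: listed
    except socket.gaierror:
        return False          # no record: not listed (or zone unreachable)

if __name__ == "__main__":
    print(is_listed("203.0.113.7"))   # 203.0.113.0/24 is documentation space
```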

Reverse proxy server
A reverse proxy is a proxy server that is installed in the neighborhood of one or more web servers. All traffic coming from the Internet with a destination of one of the web servers goes through the proxy server. There are several reasons for installing reverse proxy servers:
Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. See Secure Sockets Layer.
Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
Serve/cache static content: a reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
Compression: the proxy server can optimize and compress the content to speed up the load time.
Spoon feeding: the proxy reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
Security: the proxy server is an additional layer of defense and can protect against some OS- and web-server-specific attacks. However, it does not provide any protection against attacks on the web application or service itself, which is generally considered the larger threat.
Extranet publishing: a reverse proxy server facing the Internet can be used to communicate with a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.

HTTP
Hypertext Transfer Protocol (HTTP) is a communications protocol for the transfer of information on intranets and the World Wide Web. Its original purpose was to provide a way to publish and retrieve hypertext pages over the Internet.

HTTP development was coordinated by the World Wide Web Consortium (W3C) and the Internet Engineering Task Force (IETF), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use.

HTTP is a request/response standard between a client and a server. A client is the end user; the server is the web site. The client making an HTTP request - using a web browser, spider, or other end-user tool - is referred to as the user agent. The responding server - which stores or creates resources such as HTML files and images - is called the origin server. In between the user agent and origin server may be several intermediaries, such as proxies, gateways, and tunnels. HTTP is not constrained to using TCP/IP and its supporting layers, although this is its most popular application on the Internet. Indeed, HTTP can be "implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used."

Typically, an HTTP client initiates a request. It establishes a Transmission Control Protocol (TCP) connection to a particular port on a host (port 80 by default; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for the client to send a request message. Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own, the body of which is perhaps the requested file, an error message, or some other information. The reason that HTTP uses TCP and not UDP is that much data must be sent for a webpage, and TCP provides transmission control, presents the data in order, and provides error correction. See the difference between TCP and UDP. Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or, more specifically, Uniform Resource Locators (URLs)) using the http: or https: URI schemes.
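To make the request/response exchange just described concrete, the hedged sketch below sends a minimal HTTP/1.1 request over a raw TCP socket and prints the status line returned by the server; the host example.com is a placeholder for any public web server.

```python
import socket

HOST = "example.com"   # placeholder host; any public HTTP server on port 80 works

# HTTP/1.1 requires a Host header; "Connection: close" asks the server
# to end the TCP connection after one response.
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    f"Connection: close\r\n"
    f"\r\n"
).encode("ascii")

with socket.create_connection((HOST, 80), timeout=5) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

status_line = response.split(b"\r\n", 1)[0]
print(status_line.decode())   # e.g. "HTTP/1.1 200 OK"
```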

SCIRUS
Scirus is the most comprehensive science-specific search engine on the Internet. Driven by the latest search engine technology, Scirus searches over 450 million science-specific Web pages, enabling you to quickly:
Pinpoint scientific, scholarly, technical and medical data on the Web.
Find the latest reports, peer-reviewed articles, patents, preprints and journals that other search engines miss.
Offer unique functionalities designed for scientists and researchers.
Scirus has proved so successful at locating science-specific results on the Web that it was voted 'Best Specialty Search Engine' in the Search Engine Watch Awards in 2001 and 2002, and received the 'Best Directory or Search Engine Website' WebAward from the Web Marketing Association in 2004, 2005 and 2006.

Why Use Scirus?
Search engines all differ in the Web sites they cover and the way they classify those Web sites. Scirus, the search engine for science, focuses only on Web pages containing scientific content. Searching more than 450 million science-related pages, Scirus helps you quickly locate scientific information on the Web:
Filters out non-scientific sites. For example, if you search on REM, Google finds the rock group - Scirus finds information on sleep, among other things.
Finds peer-reviewed articles such as PDF and PostScript files, which are often invisible to other search engines.
Searches the most comprehensive combination of web information, preprint servers, digital archives, repositories and patent and journal databases.
Scirus goes deeper than the first two levels of a Web site, thereby revealing much more relevant information.

Pinpointing Scientific Information
Scirus has a wide range of special features to help you pinpoint the scientific information you need. With Scirus, you can:
Select to search in a range of subject areas, including health, life, physical and social sciences.
Narrow your search to a particular author, journal or article.
Restrict your results to a specified date range.
Find scientific conferences, abstracts and patents.
Refine, customize and save your searches.

How Does Scirus Rank Results?
Search results in Scirus are, by default, ranked according to relevance. It is also possible to rank results by date; you can do this by clicking the Rank by date link on the Results Page. Scirus uses an algorithm to calculate ranking by relevance. This ranking is determined by two basic values:
1. Words - the location and frequency of a search term within a result account for one half of the algorithm. This is known as static ranking.
2. Links - the number of links to a page accounts for the second half of the algorithm - the more often a page is referred to by other pages, the higher it is ranked. This is known as dynamic ranking.
Overall ranking is the weighted sum of the static and dynamic rank values. Scirus does not use metatags, as these are subject to ranking-tweaking by users.
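As a rough illustration of the weighted-sum ranking just described, the hedged sketch below combines a static (word-based) score and a dynamic (link-based) score for each page; the equal 0.5/0.5 weights and the scores are invented for illustration and are not Scirus's actual values.

```python
# Illustrative only: combine a static (word-based) score and a dynamic
# (link-based) score into an overall relevance rank, as a weighted sum.
W_STATIC, W_DYNAMIC = 0.5, 0.5   # assumed equal weights ("one half" each)

pages = {
    # page: (static_score, dynamic_score) -- fictitious values
    "pageA": (0.80, 0.30),
    "pageB": (0.55, 0.90),
    "pageC": (0.40, 0.40),
}

def overall_rank(static_score: float, dynamic_score: float) -> float:
    return W_STATIC * static_score + W_DYNAMIC * dynamic_score

ranked = sorted(pages, key=lambda p: overall_rank(*pages[p]), reverse=True)
print(ranked)   # pages ordered from most to least relevant
```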

MEDMINER
MedMiner: an Internet text-mining tool for biomedical information, with application to gene expression profiling.
The trend toward high-throughput techniques in molecular biology and the explosion of online scientific data threaten to overwhelm the ability of researchers to take full advantage of available information. This problem is particularly severe in the rapidly expanding area of gene expression experiments, for example, those carried out with cDNA microarrays or oligonucleotide chips. We present an Internet-based hypertext program, MedMiner, which filters and organizes large amounts of textual and structured information returned from public search engines like GeneCards and PubMed. We demonstrate the value of the approach for the analysis of gene expression data, but MedMiner can also be extended to other areas involving molecular genetic or pharmacological information. More generally still, MedMiner can be used to organize the information returned from any arbitrary PubMed search. MedMiner is a computerized tool that filters the literature and presents the most relevant portions in a well-organized way that facilitates understanding.

The result has been a considerable reduction in the time and effort required to survey the literature on genes and gene-gene relationships. The General Query option in MedMiner has proved similarly useful for any arbitrary PubMed search.

The second key component of MedMiner's procedure is text filtering. Text filtering systems translate user queries into relevance metrics that can be applied to large quantities of text automatically. Relevance metric research is an area of active investigation, but there are currently two widely used approaches. One applies combinations of keywords to identify relevant documents, paragraphs or sentences (4). The text filter might, for example, specify that an abstract is relevant if it contains a sentence with both the name of the gene and the word inhibits. A second approach uses word frequencies to determine relevance (4). A frequency-based filter might specify that an abstract is relevant if it contains words like gene, inhibit or inhibition significantly more frequently than does an average document. More sophisticated strategies for assessing relevance have also been proposed, including surface clue evaluation (1), shallow parsing (8), lexical and contextual analysis (5), semantic and discourse processing (2) and machine learning (11). However, the computational cost of applying any of these more sophisticated strategies to a large textual database is considerable; hence their use for Internet applications is problematical.

The third component of MedMiner is a carefully designed user interface. Because we are presenting large amounts of information, users must be able to navigate the material easily and modify their queries repeatedly to optimize results. The output is organized according to the relevance rule triggered, rather than being ordered arbitrarily or by date. MedMiner searches documents for relevant facts specific to a predetermined domain. Our own studies have been done in the context of the National Cancer Institute's (NCI) drug discovery program; hence, we have been examining correlations between drug activity and gene expression (13). For that reason, MedMiner incorporates tools for literature exploration of gene-drug, as well as gene-gene, relationships. It can easily be extended (without additional programming) to other pursuits (12) such as single nucleotide polymorphism (SNP) analysis, sequence analysis and proteomic profiling.
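The keyword-based relevance rule described above (a sentence mentioning both a gene name and a trigger word such as "inhibits") can be sketched in a few lines of Python. The gene names, trigger words, and sample abstract below are invented for illustration and are not MedMiner's actual rules.

```python
import re

# Hypothetical relevance rule: a sentence is relevant if it mentions both
# a gene of interest and an interaction verb.
GENES = {"tp53", "brca1"}                       # illustrative gene names
TRIGGERS = {"inhibits", "activates", "binds"}   # illustrative trigger words

def relevant_sentences(abstract: str) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", abstract)
    hits = []
    for sentence in sentences:
        words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
        if words & GENES and words & TRIGGERS:
            hits.append(sentence.strip())
    return hits

sample = "TP53 inhibits cell growth under stress. Unrelated sentence here."
print(relevant_sentences(sample))   # -> ['TP53 inhibits cell growth under stress.']
```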

HTML
HTML, an initialism of HyperText Markup Language, is the predominant markup language for web pages. It provides a means to describe the structure of text-based information in a document by denoting certain text as links, headings, paragraphs, lists, and so on, and to supplement that text with interactive forms, embedded images, and other objects. HTML is written in the form of tags, surrounded by angle brackets. HTML can also describe, to some degree, the appearance and semantics of a document, and can include embedded scripting language code (such as JavaScript) which can affect the behavior of Web browsers and other HTML processors. HTML is also often used to refer to content of the MIME type text/html or, even more broadly, as a generic term for HTML whether in its XML-descended form (such as XHTML 1.0 and later) or its form descended directly from SGML (such as HTML 4.01 and earlier). By convention, HTML format data files use a file extension of .html or .htm.

Publishing HTML with HTTP
The World Wide Web is composed primarily of HTML documents transmitted from a Web server to a Web browser using the Hypertext Transfer Protocol (HTTP). However, HTTP can be used to serve images, sound, and other content in addition to HTML. To allow the Web browser to know how to handle the document it received, an indication of the file format of the document must be transmitted along with the document. This vital metadata includes the MIME type (text/html for HTML 4.01 and earlier, application/xhtml+xml for XHTML 1.0 and later) and the character encoding (see Character encodings in HTML).

In modern browsers, the MIME type that is sent with the HTML document affects how the document is interpreted. A document sent with an XHTML MIME type, or served as application/xhtml+xml, is expected to be well-formed XML, and a syntax error causes the browser to fail to render the document. The same document sent with an HTML MIME type, or served as text/html, might be displayed successfully, since Web browsers are more lenient with HTML. However, XHTML parsed in this way is not considered either proper XHTML or HTML, but so-called tag soup. If the MIME type is not recognized as HTML, the Web browser should not attempt to render the document as HTML, even if the document is prefaced with a correct Document Type Declaration. Nevertheless, some Web browsers do examine the contents or URL of the document and attempt to infer the file type, despite this being forbidden by the HTTP 1.1 specification.
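The hedged sketch below shows the idea of publishing HTML over HTTP: a tiny Python web server that sends the Content-Type metadata (MIME type and character encoding) along with a small HTML body. The port number and page content are arbitrary choices for illustration.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGE = b"<!DOCTYPE html><html><body><h1>Hello, HTML over HTTP</h1></body></html>"

class HtmlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The MIME type and character encoding tell the browser how to
        # interpret the bytes that follow.
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(PAGE)))
        self.end_headers()
        self.wfile.write(PAGE)

if __name__ == "__main__":
    # Port 8080 is arbitrary; browse to http://127.0.0.1:8080/ to test.
    HTTPServer(("127.0.0.1", 8080), HtmlHandler).serve_forever()
```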
Working of a search engine
A search engine operates in the following order:
1. Web crawling
2. Indexing
3. Searching

Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link it sees. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the actual text that was indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of link rot, and Google's handling of it increases usability by satisfying the user's expectation that the search terms will be on the returned page, in keeping with the principle of least astonishment. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.

When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords.

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve.

Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the controversial practice of allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept money for placement in their results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads. The vast majority of search engines are run by private companies using proprietary algorithms and closed databases, though some are open source.
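A toy version of the indexing and searching steps described above can be sketched as an inverted index: each word maps to the set of pages containing it, and a multi-word query is answered by intersecting those sets (an implicit boolean AND). The pages below are invented for illustration.

```python
from collections import defaultdict

# Step 2 (indexing): build an inverted index mapping each word to the
# set of page identifiers that contain it.  The pages are fictitious.
pages = {
    "page1": "tcp provides reliable ordered delivery",
    "page2": "udp is a connectionless transport protocol",
    "page3": "tcp and udp sit at the transport layer",
}

index = defaultdict(set)
for page_id, text in pages.items():
    for word in text.lower().split():
        index[word].add(page_id)

# Step 3 (searching): a multi-word query is an implicit boolean AND,
# answered by intersecting the posting sets of the query terms.
def search(query: str) -> set[str]:
    terms = query.lower().split()
    postings = [index.get(term, set()) for term in terms]
    return set.intersection(*postings) if postings else set()

print(search("transport udp"))   # -> {'page2', 'page3'}
```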

IP
The Internet Protocol (IP) is a protocol used for communicating data across a packet-switched internetwork. IP is a network layer protocol in the Internet protocol suite and is encapsulated in a data link layer protocol (e.g., Ethernet). As a lower layer protocol, IP provides the service of unique global addressing amongst computers.

Packetization
Data from an upper layer protocol is encapsulated inside one or more packets/datagrams (the terms are basically synonymous in IP). No circuit setup is needed before a host tries to send packets to a host it has previously not communicated with (this is the point of a packet-switched network), thus IP is a connectionless protocol. This is quite unlike Public Switched Telephone Networks, which require the setup of a circuit before a phone call may go through (a connection-oriented protocol).
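As a small illustration of the connectionless behavior just described, the sketch below sends a UDP datagram without any prior connection setup; the destination 203.0.113.5:9999 is a placeholder taken from the documentation address range, not a real service.

```python
import socket

# UDP is connectionless: no handshake or circuit setup is required before
# sending a datagram to a host we have never communicated with.
DEST = ("203.0.113.5", 9999)   # placeholder address and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello, datagram", DEST)   # fire-and-forget; no delivery guarantee
sock.close()
```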

IP addressing and routing


Perhaps the most complex aspects of IP are IP addressing and routing. Addressing refers to how end hosts are assigned IP addresses and how subnetworks of IP host addresses are divided and grouped together. IP routing is performed by all hosts, but most importantly by internetwork routers, which typically use either interior gateway protocols (IGPs) or exterior gateway protocols (EGPs) to help make IP datagram forwarding decisions across IP-connected networks. In a typical encapsulation, user data is carried inside a UDP datagram, which is carried inside an IP packet, which in turn is carried inside a data link layer frame.

IP ADDRESS
An IP address (or Internet Protocol address) is a unique address that certain electronic devices use in order to identify and communicate with each other on a computer network utilizing the Internet Protocol (IP) standard; in simpler terms, a computer address. Any participating network device, including routers, switches, computers, infrastructure servers (e.g., NTP, DNS, DHCP, SNMP, etc.), printers, Internet fax machines, and some telephones, can have its own address that is unique within the scope of the specific network. Some IP addresses are intended to be unique within the scope of the global Internet, while others need to be unique only within the scope of an enterprise.
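The distinction just mentioned, between addresses meant to be globally unique and addresses that only need to be unique inside an enterprise, can be checked with Python's ipaddress module, as in the small sketch below; the sample addresses are arbitrary.

```python
import ipaddress

# Private (RFC 1918) addresses only need to be unique within an enterprise;
# other unicast addresses are intended to be unique on the global Internet.
samples = ["10.0.0.1", "192.168.1.40", "129.79.16.40", "127.0.0.1"]

for text in samples:
    addr = ipaddress.ip_address(text)
    scope = "private (enterprise scope)" if addr.is_private else "public (global scope)"
    if addr.is_loopback:
        scope = "loopback (this host only)"
    print(f"{text:>15}  {scope}")
```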

ETHICS AND LEGAL ISSUES

Computer ethics
Computer ethics is a branch of practical philosophy which deals with how computing professionals should make decisions regarding professional and social conduct. The term "computer ethics" was first coined by Walter Maner in the mid-1970s, but only since the 1990s has it started being integrated into professional development programs in academic settings. The conceptual foundations of computer ethics are investigated by information ethics, a branch of philosophical ethics established by Luciano Floridi. Computer ethics is a very important topic in computer applications. The importance of computer ethics increased through the 1990s. With the growth of the Internet, privacy issues as well as concerns regarding computing technologies such as spyware and web browser cookies have called into question ethical behavior in technology.

Identifying issues
Identifying ethical issues as they arise, as well as defining how to deal with them, has traditionally been problematic in computer ethics. Some have argued against the idea of computer ethics as a whole. However, Collins and Miller proposed a method of identifying issues in computer ethics in their Paramedic Ethics model. The model is a data-centered view of judging ethical issues, involving the gathering, analysis, negotiation, and judging of data about the issue. In solving problems relating to ethical issues, Davis proposed a unique problem-solving method. In Davis's model, the ethical problem is stated, facts are checked, and a list of options is generated by considering relevant factors relating to the problem. The actual action taken is influenced by specific ethical standards.

Some questions in computer ethics


There are a number of questions that are frequently discussed under the rubric of computer ethics. One set of issues deals with some of the new ethical dilemmas that have emerged, or taken on new form, with the rise of the internet. For example, there is a wide range of behaviors that fall under the heading of hacking, many of which have been variously defended and opposed by ethicists. There are now many ways to gain information about others that were not available, or easily available, before the rise of computers. Thus ethical issues about information storage and retrieval are now at the forefront. How should we protect private data in large databases? Questions about software piracy are also widely discussed, especially in light of file sharing programs such as Napster. Is it immoral or wrong to copy software, music, or movies? If so, why?

A second set of questions pertaining to the Internet that are becoming more widely discussed are questions relating to the values that some may wish to promote via the Internet. Some have claimed that the internet is a "democratic technology", or an e-democracy. But is it really? Does the Internet foster democracy? Should it? Does the digital divide raise ethical issues that society is morally obligated to ameliorate?

Ethical standards
One of the most definitive sets of ethical standards is the Association for Computing Machinery Code of Ethics. The code is a four-point standard governing ethical behavior among computing professionals. It covers the core set of computer ethics from professional responsibility to the consequences of technology in society.[2] Another computer ethics body is the British Computer Society[3], which has published a code of conduct and code of practice for computer professionals in the UK. The Uniform Computer Information Transactions Act (UCITA) defines ethical behavior from the standpoint of legality, specifically during the contracting process of computing. It defines how valid computing contracts are formed, and how issues such as breach of contract are defined and settled. However, legality does not completely encompass computer ethics. It is just one facet of the constantly expanding field of computer ethics.

INTERNET SECURITY
Internet security is the prevention of unauthorized access and/or damage to computer systems via the Internet. Most security measures involve data encryption and passwords. Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. A password is a secret word or phrase that gives a user access to a particular program or system (a small sketch of password handling follows the list below). Internet security professionals should be fluent in the four major aspects:

Penetration testing
Intrusion detection
Incident response
Legal / audit compliance
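The sketch below illustrates one common password measure mentioned above: storing only a salted, slowly-hashed form of the password rather than the password itself, using Python's standard library. The iteration count and salt length are typical illustrative choices, not a mandated standard.

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext password is never stored."""
    salt = salt or os.urandom(16)                     # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)     # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))   # True
print(verify_password("wrong guess", salt, digest))                     # False
```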

Details of routers
Network Address Translation (NAT) typically has the effect of preventing connections from being established inbound into a computer, whilst permitting connections out. For a small home network, software NAT can be used on the computer with the Internet connection, providing similar behaviour to a router and similar levels of security, but for a lower cost and lower complexity.

Firewalls
A firewall controls access by blocking all traffic except that passing through authorized ports on your computer, thus restricting unfettered access. A stateful firewall is a more secure form of firewall, and system administrators often combine a proxy firewall with a packet-filtering firewall to create a highly secure system. Most home users use a software firewall. These types of firewalls can create a log file that records all connection details (including connection attempts) to and from the PC.
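A packet-filtering firewall of the kind mentioned above applies an ordered list of allow/deny rules to each connection attempt. The minimal sketch below shows the idea; the rule set, ports, and test packets are invented for illustration and are not from any real firewall product.

```python
# Toy packet-filtering rules: the first matching rule wins, default is deny.
# (protocol, destination port, action) -- entries are illustrative only.
RULES = [
    ("tcp", 22,  "deny"),    # block inbound SSH
    ("tcp", 80,  "allow"),   # allow web traffic
    ("tcp", 443, "allow"),   # allow HTTPS
]

def decide(protocol: str, dst_port: int) -> str:
    for rule_proto, rule_port, action in RULES:
        if rule_proto == protocol and rule_port == dst_port:
            return action
    return "deny"            # default-deny policy

for packet in [("tcp", 443), ("tcp", 22), ("udp", 53)]:
    print(packet, "->", decide(*packet))
```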

Anti-virus
Some people or companies with malicious intentions write programs like computer viruses, worms, trojan horses and spyware. These programs are all characterised as unwanted software that installs itself on your computer through deception. Trojan horses are simply programs that conceal their true purpose or include a hidden functionality that a user would not want. Worms are characterised by having the ability to replicate themselves, and viruses are similar except that they achieve this by adding their code onto third-party software. Once a virus or worm has infected a computer, it will typically infect other programs (in the case of viruses) and other computers. Viruses also slow down system performance, cause strange system behavior, and in many cases do serious harm to computers, either as deliberate, malicious damage or as unintentional side effects.

In order to prevent damage by viruses and worms, users typically install antivirus software, which runs in the background on the computer, detecting any suspicious software and preventing it from running. Some malware that can be classified as trojans with a limited payload is not detected by most antivirus software and may require the use of other software designed to detect other classes of malware, including spyware.

Anti-spyware
There are several kinds of threats:
Spyware is software that runs on a computer without the explicit permission of its user. It often gathers private information from a user's computer and sends this data over the Internet back to the software manufacturer.
Adware is software that runs on a computer without the owner's consent, much like spyware. However, instead of taking information, it typically runs in the background and displays random or targeted pop-up advertisements. In many cases, this slows the computer down and may also cause software conflicts.

Browser choice
Internet Explorer is currently the most widely used web browser in the world, making it the prime target for phishing and many other possible attacks.

Upper layer protocol


In computer networking, the upper layer protocol (ULP) refers to the more abstract protocol when performing encapsulation. It contrasts with the lower layer protocol, which refers to the more specific protocol. In the Internet protocol suite, IP is the lower layer protocol for UDP and TCP; likewise, UDP and TCP are two upper layer protocols for IP. HTTP, described earlier, is in turn an upper layer protocol carried over TCP.

Features of web servers
In practice many web servers implement the following features also:
Authentication: optional authorization request (of user name and password) before allowing access to some or all kinds of resources.
Handling of static content (file content recorded in the server's filesystem(s)) and dynamic content by supporting one or more related interfaces (SSI, CGI, SCGI, FastCGI, JSP, PHP, ASP, ASP.NET, server APIs such as NSAPI, ISAPI, etc.).
HTTPS support (by SSL or TLS) to allow secure (encrypted) connections to the server on the standard port 443 instead of the usual port 80.
Content compression (e.g., by gzip encoding) to reduce the size of the responses (to lower bandwidth usage, etc.).
Virtual hosting to serve many web sites using one IP address.
Large file support to be able to serve files whose size is greater than 2 GB on a 32-bit OS.
Bandwidth throttling to limit the speed of responses in order to not saturate the network and to be able to serve more clients.
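As a small illustration of the content compression feature listed above, the sketch below gzip-compresses a response body with Python's standard library and compares sizes; the sample payload is arbitrary.

```python
import gzip

# A repetitive HTML-like payload compresses well; real responses vary.
body = b"<html><body>" + b"<p>hello world</p>" * 500 + b"</body></html>"

compressed = gzip.compress(body)
print(f"original:   {len(body)} bytes")
print(f"compressed: {len(compressed)} bytes")

# A server would send the compressed bytes with the response header
# "Content-Encoding: gzip"; the browser transparently decompresses them.
assert gzip.decompress(compressed) == body
```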
