
Transcript

Second IGF Meeting of the Dynamic Coalition on Core Internet Values

Sixth Annual Meeting of the Internet Governance Forum, 27-30 September 2011, United Nations Office at Nairobi, Nairobi, Kenya

September 28, 2011 - 14:30

Aims

The objective of the Dynamic Coalition on 'Core Internet Values' is to debate and find answers to fundamental questions such as: What is the Internet? What makes it what it is? What are its architectural principles? What are the core principles and values? And what is happening to the core values in the process of its evolution? What is it that needs to be preserved, and what changes are inevitable? The coalition would seek answers and define the Core Internet Principles and Values.

The Internet model is open, transparent, and collaborative, and relies on processes and products that are local, bottom-up, and accessible to users around the world. These principles and values are threatened when policy makers propose to regulate and control the Internet with an inadequate understanding of the core values.

What is it that must be preserved in the process of policy making by legislators who seek to regulate the Internet, and in the process of design changes by the business sector in pursuit of business-friendly models? What does the Internet community say as to what can't be changed? How could changes and improvements be brought about without compromising the core values? How would the different positions among stakeholders be reconciled to commit to the core Internet values?
About the Dynamic Coalition

The Dynamic Coalition continues its work along the lines of the discussions during the IGF 2009 Workshop (319) on Fundamentals: Core Internet Values. The first meeting of the Coalition was held at the IGF in Vilnius, chaired by Alejandro Pisanty, and the second IGF meeting at the IGF in Nairobi, with Dr. Vint Cerf moderating the proceedings as Chair and Alejandro Pisanty as Co-Chair. The second IGF meeting was held on September 28, 2011. A fair 'list' of principles and values is emerging from the discussions. The coalition would work to emphasize that these values need to be preserved. The Coalition continues its work outside the IGF.

Weblog: http://coreinternetvalues.org

Transcript

*** The following is the output of the real-time captioning taken during the Sixth Meeting of the IGF, in Nairobi, Kenya. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the session, but should not be treated as an authoritative record. ***

>> SIVASUBRAMAN MUTHUSAMY: Could everyone please come forward, occupy one of the front seats?

(Pause)

>> Test, test. I have a request to make a test. I hope that it's okay for everybody.

>> It seems that we have some echo. I don't know if we can solve that; it would be great if we can decrease the echo. Thank you.

>> Testing, 1, 2, 3. Testing, 1, 2, 3. That works for me. Thank you.

>> VINT CERF: That was a slow introduction. Thank you.

>> By the way, we have a lot of people online. They will be very happy. They are waiting to start as soon as you are ready.

>> ALEJANDRO PISANTY: This is Alejandro Pisanty from the National Autonomous University of Mexico and the Mexico chapter of the Internet Society. I am very thankful to Siva, Sivasubraman Muthusamy, who is sitting at the right from my side, from your left, at the end of the table, for the enormous work he put into creating this session and giving it continuity from last year's. The session will be moderated and chaired by Vint Cerf. I will be an acting chair until he picks up. This is the meeting of the Dynamic Coalition on Internet core values and principles. We were first set up a year ago in Vilnius.

The objective -- as you may remember, the Dynamic Coalitions were described by Nitin Desai, in his days as the chair of the Internet Governance Forum, as potential formations that would emerge from the Forum, spontaneously formed by people with similar interests. We have come together around the idea that there are some design principles of the Internet which extend well into the layers above, including some of the ways that the Internet is adopted in society, whatever field you look at: education, politics, health. It is important to keep an eye open, within the Internet Governance Forum's framework, on these very basic design principles: interoperability, the end-to-end principle, and a few others. One of the beauties of the thing is that there aren't that many; it is not a long list, but it's a very powerful list. And we have to work together with other Dynamic Coalitions that are forming that are concerned with very important things like Internet rights and principles at the more political or social level, in higher layers, or with freedom of expression, where again the technological support has to be available and has to be kept open and interoperable. On the other hand, there are principles that can or cannot be supported by the technological architecture of the Internet, or identity or proper management, that can interfere with the architecture if there is suddenly a legal order to design things in a specific way. The Dynamic Coalition would be working to keep an eye open, and to eventually produce some statements of warning or of support for things that could go either way.

Last year, we had a very lively meeting, and I'm sure this one will become lively, as people are free to enter the room as well as to participate remotely. We had some views which were quite different among the different participants. Maybe the most striking differences were the expressions of a woman engineer from Lebanon who was all for anonymity. She said one of the first values she wants to see preserved on the Internet as a core value is anonymity, and her argument, which is on the record of that session, is that anonymity is very important "because that is the only condition under which women in countries like mine" -- meaning Lebanon, and I'm sure many of the surrounding ones -- "can have access to sexual and reproductive health and conduct information. If we have to be identified, then we will just not be free enough to access this information, because it would create different reactions that we cannot predict."

On the other hand, we had a very young man from Lithuania, the host country of that meeting, coming into the discussion with the following statement. I believe he said that we need for this anonymity thing to end. We need for everybody who comes onto the Internet to use it to be fully identified, and the reason for that is that Lithuania, on becoming part of the European Union and a more active member of Europe, is going to become a country of culture. We want to have everybody become potentially a culture creator, and we want for us to be able to make a living with that. We have to be able to protect our intellectual property rights. And the only way, he thinks, he could do this is by making sure that everybody who makes a copy of anything on the Internet is properly identified, so you can follow up on that.

So you can see that the views of the values on the Internet are as various and diverse as the values people hold. Some of them can be enacted, some of the opposite values can be made compatible with certain architectural principles, and some of them may actually be incompatible among themselves; if a Government, for example, implements some of them, it may actually ruin the way the Internet operates, or at least the Internet may become -- old word -- fractionalized. Those are the issues we are trying to address.

With this introduction, I will tell you the people who are on the front panel: myself, Alejandro Pisanty, as I have mentioned.

Dr. Vint Cerf, who will be chairing it. As you know, Dr. Cerf was one of the key people at the start of the Internet, together with Bob Kahn, who we are honored to have in the room. He separated TCP from IP, and in taking that rib out of the original single being, he created, they created the great opportunity that the Internet has become. We have Sivasubraman Muthusamy, who is the chair of ISOC, the Internet Society, in Chennai, an active businessperson and civil society actor with great experience. And I'm honored also to have on this panel, to sit on this panel with, Scott Bradner, who is now the chief independent genius for the Internet at Harvard -- is that the correct job description? -- and a long-standing creator and supporter of the evolution of Internet standards at the IETF, the Internet Architecture Board, the steering group, amongst everything else that can be done for the Internet, and a very, very esteemed friend. So you can see we have the academic, the private, and the civil society sectors already sitting at the panel, and we see there are Government officials. So we have all four sectors present in the room, all four stakeholders. We hope to keep this multistakeholder, as we did for the work from last year. Vint, I will hand it over to you.

>> VINT CERF: Thank you very much, Alex. And welcome, everyone. I'd like to start out by observing that Scott Bradner's initials are SOB, and that might also be an important indication of some kind. Second, Alex didn't mention that he is Chairman of ISOC Mexico, so we have two ISOC chairs here. It's really a pleasure to discover that these institutions, which were started so long ago, continue to persist, grow, and be more effective.

So, the topic is core values. I think we could span a fairly broad discussion

starting with the technological values or principles that have helped make the Internet so persistent, and also able to evolve all the way up to and including the social and economic values that the Internet has engendered, in part in consequence of its origins, and the people who built it. But I thought I would start out, first of all, with some very important specific core values, and they are 32, 128, 16, 7 or 8, 13, and 42.

Now, it's an exercise for the reader to figure out what those numbers correspond to, but indeed, every one of them is important to the Internet; although 42 is a red herring, drawn from Douglas Adams' wonderful writing, So Long, and Thanks for All the Fish.

To go back in history, however, to the earliest notions of open networking: Bob Kahn started thinking about this while he was still at Bolt, Beranek and Newman, before coming to DARPA in 1972. And although I'm paraphrasing, Bob, and if you feel I've left something out, you should react, there were several things that I noted. One of them is that his notion of open architecture for networking started out with the assumption that each distinct network would have to stand on its own, and no internal changes would be required or even permitted to connect it to the Internet. So this really was intended to be a network of networks.

The second notion was that communications would be on a best-efforts basis. So if a packet didn't make it to the final destination, it would be retransmitted from the source. Part of the reason for that is that some networking technologies didn't have any place to store the packets in between, Ethernet being a good example; although Ethernet had not quite been invented at the point that Bob was writing these ideas down.

The third notion was that black boxes would be used to connect the networks. Later, these black boxes would be called gateways, and then later, routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various network failures. So a memoryless environment was attractive because of its resilience and robustness. And finally, among other important notions, was the idea that there would be no global control at the operational level; the system would be fully distributed.

In the prehistory of the Internet, work was done on the ARPANET. And out of that work came notions of layers of structure, with the lowest layers bearing packets and bearing bits, and the higher layers carrying more and more substantive information.
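The best-efforts idea described here -- the network may silently drop a packet, the gateways keep no per-flow memory, and the source simply retransmits -- can be sketched in a few lines. This is an illustrative sketch only, not any actual protocol implementation; the names `lossy_channel` and `send_reliably`, and the loss rate, are invented for the example.

```python
import random

def lossy_channel(packet, loss_rate=0.3):
    """A stand-in for a best-efforts network: it may silently drop a packet,
    and it keeps no state about the flows passing through it."""
    return packet if random.random() > loss_rate else None

def send_reliably(packet, max_tries=20):
    """End-to-end reliability lives at the source: if the packet does not
    arrive, the source retransmits it until it does."""
    for attempt in range(1, max_tries + 1):
        delivered = lossy_channel(packet)
        if delivered is not None:
            return delivered, attempt  # which attempt finally got through
    raise RuntimeError("gave up after max_tries attempts")

random.seed(7)  # deterministic demo
data, tries = send_reliably(b"hello")
print(data, tries)
```

The point of the sketch is that the middle (`lossy_channel`) stays trivially simple and memoryless; all the recovery logic sits at the edge, in `send_reliably`.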

Some people took layering to be a strict kind of thing, and the term "layer violation" was often bandied about. The notion of keeping the layers ignorant of what the other layers were doing had advantages. It meant you could remove or change or reimplement a layer without having any effect on the upper or lower layers, because the interfaces were kept stable.

Similarly, the notion of end-to-end allowed the network to be ignorant of the applications, or the meaning of the bits that were flowing in the packets; those bits would be interpreted only by software at the ends. There have been debates about these two ideas subsequently, and some arguments have been made for permeability. It's pretty clear that in some cases, let's even say at the routing layer, it might be nice to know what is going on with regard to the underlying transmission system, because you might decide that some path on the net is not appropriate for use because it's failing. If you don't know that, the routing system can't know it should switch to a different alternative path. One can make similar arguments at higher layers where there is, for example, a loss of capacity. If this is known to an application layer, the application might respond by changing the coding scheme for, let's say, video or audio.

This notion of layering and end-to-end treatment could be argued to be not necessarily absolute, but it has turned out to be a very powerful notion, because we have swept new transmission technologies into the Internet as they have come along, without having to modify the architecture. Frame Relay and X.25, and ATM and MPLS, became part of the tools for moving packets around. The basic Internet Protocol layer didn't have to change, except for adapting it by figuring out how to encapsulate an Internet packet into the lower-level transmission system.

Interoperability was a key notion in the system. The whole idea behind the Internet was that if you could build something that matched the Internet's architecture and technical specifications, then you should be able to connect to the rest of the Internet, if you could find someone who was willing to connect to you. This notion of organic growth has been fundamental to the Internet's ability to grow over time.
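The encapsulation point above -- new link technologies carry the Internet Protocol by wrapping its packets, never by changing them -- can be illustrated with a toy sketch. The "frame" formats below are invented for the example and bear no resemblance to real Frame Relay, ATM, or Ethernet encodings.

```python
def ip_packet(src, dst, payload):
    """A toy 'IP packet': the Internet layer's format never changes."""
    return {"src": src, "dst": dst, "payload": payload}

def ethernet_encapsulate(packet):
    """One link technology wraps the IP packet in its own framing."""
    return {"frame_type": "ethernet", "inner": packet}

def atm_encapsulate(packet):
    """A different link technology wraps the very same packet differently."""
    return {"frame_type": "atm-aal5", "inner": packet}

def decapsulate(frame):
    """Stripping the frame recovers the identical IP packet,
    whatever link happened to carry it."""
    return frame["inner"]

pkt = ip_packet("10.0.0.1", "10.0.0.2", b"data")
for wrap in (ethernet_encapsulate, atm_encapsulate):
    # the IP layer is untouched across link technologies
    assert decapsulate(wrap(pkt)) == pkt
```

The design choice this illustrates: because the IP packet passes through every wrapper unmodified, a new transmission technology only needs its own encapsulate/decapsulate pair, and nothing at or above the IP layer has to change.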

In the Internet Engineering Task Force, there are some other principles that have emerged. One of them, which, Scott, you might care to comment on, is that if you are going to do something, do it one way, not two ways or three ways or four ways. If you can get away with that, it's helpful, because you don't have to figure out which way the other party is choosing to do this particular function.

The IETF also underscored some other important principles. There is no membership in the IETF. You can't become a member of it. All you can do is show up and contribute. It's a meritocracy. If your ideas attract others, you may actually succeed in getting a standard out of the IETF process. If nobody considers your ideas to be particularly attractive, then you may not succeed. But the idea here is that it's the ideas that count. I think there is also a

wonderful quote from Dave Clark, who served as the Chairman of the Internet Architecture Board during its previous incarnation as the Internet Activities Board. Scott, I'm not sure I can get this exactly right, but it was something like: we don't believe in voting, something in -- .

>> SCOTT BRADNER: Kings or voting.

>> VINT CERF: We don't believe in kings or voting, we believe in rough consensus and running code. I would say that that principle continues to guide much of what the IETF does.

I have other observations to make, but I'm going to set them aside for the moment and turn to my fellow panellists, and ask them to make a few remarks from their point of view on what is important in Internet principles. Scott, can I ask you to take the -- .

>> SCOTT BRADNER: I'd like to sort of pop up a level, based on what Vint is talking about. The result of these principles that Vint just articulated was a

sort of a different, higher-level principle, which was the ability to innovate without permission: that you and I could agree on a new application, and deploy it, without having to get permission of the network to do so. This was the initial driver and is still, in the corporate environment, corporation to corporation, a very strong ability. It's less so within a corporation, because of firewalls, and within ISPs, some of which filter to residences; but still, for one business talking to another one, it's been key. It is what allowed the World Wide Web to come about, because when Tim Berners-Lee decided he was going to make things easier for physicists who didn't like to type, and allow pointing and clicking, he could put together a browser and put together a server, and distribute it to his friends, and they could start using it without getting any permissions from anybody. A very important thing.

Another piece that Vint alluded to is what is called the end-to-end argument, or end-to-end principle, from Saltzer, Reed and Clark at MIT, which can be paraphrased to say: render unto the ends what can best be done there. The network itself is agnostic to the traffic going over it. It doesn't try to do a better job for traffic that it thinks wants better service. It doesn't look into the traffic to see that it's voice and therefore should be accelerated or something. This is a principle which is constantly under attack. It doesn't make sense from the point of view of somebody who is focused on a particular application. Somebody from the telephony world wanting to use the Internet for telephony -- and the Internet is now the underlying connectivity for most of the world's communication, and by Internet here I'm drawing it broad, as in the Internet Protocol and the way it's used, not all of it the public Internet -- they look at the Internet Protocol and say, but it doesn't do a very good job with voice. It is not tuned for voice. It is not architected for voice.

Bob Braden -- he was a person on the Internet Activities Board, the Architecture Board that Vint mentioned -- said that optimization was not one of the goals of the Internet Protocol, and wasn't one of the goals of Internet standardization. Flexibility and the ability to create new things were. So we constantly in the IETF come under pressure from various folks who want to make the Internet better for some particular application at the expense of other applications, because their application is the most important one, at least to them.

The rough consensus and running code bit that Vint mentioned: there were two important parts of that. The IETF works on rough consensus, and this was mentioned earlier in another session I wasn't at, but it was paraphrased for me, that consensus in the original way that that term developed many centuries ago

was not that everybody agreed. It was that everybody had a chance to discuss, and even if there were a few people who disagreed, you could still move forward. Consensus in many standards bodies has come to mean unanimity, that everybody has to agree. And when you have it mean that, it means that the standard that you develop has to take into account all of the weirdnesses that any particular participant might want. So standards tend to be complicated, difficult to maintain, and difficult to understand. The IETF strictly believes in rough consensus, meaning that if some number of people really don't like the result, but they can't convince the majority, the vast majority, of the badness of the idea, then it will go forward.

Running code is about the standards process. The original standards process was a three-step standards process, where the middle step required that you actually had interoperable implementations of code before you could move forward; in the last few weeks that's actually been dropped to a two-stage process where the second stage requires that. The running code was not to prove that somebody is interested. It was to prove that the standard was clear.

So that if you implemented a standard, and I implemented a standard, both reading the standard without resorting to looking at other materials, and we could interoperate, that means the standard was clear enough. The requirement for running code was to ensure clear standards.

Vint also mentioned that the IETF has a tendency to do it one way. That's true at one level and not true at another level. It is true when we are talking about an approach to a problem where there is one architectural principle for that problem. It is not so true where there are multiple architectural principles. Take an example: the IETF developed an Internet voice protocol called SIP, the Session Initiation Protocol, that is an end-to-end principle protocol. At the same time, we also developed a core-centric, carrier-centric voice over IP protocol, called Megaco, that is a fundamentally different architecture. And different providers would use the different architectures in different environments, and they are both being used today in very different environments. They compete with each other at the result level, but not at the architecture and implementer level.

So where we do tend to try for a single solution is where it's just different variants of the same architecture, because then Vint is absolutely right: if you have more than one way to do it, it dilutes the picture. But having different architectures, different fundamental philosophies of how to approach something, trying to argue them into one bin can be completely counterproductive. You will get the entire Working Group spending most of its time fighting that sort of thing.

One other thing I want to bring up: we have constant pressure in the community against the Internet. Here I mean the Internet Vint described. We have folks who believe that it needs to be optimized for one application or another. Or folks, as Alejandro mentioned, who believe that attribution is required for anybody who actually uses the net -- an Internet driver's license, for example. Or people who believe that different applications should have their own Internets; governments should have a private Internet, for example. These are constant battles, and usually what is brought up as the rationale is protecting kids or fighting terrorism or something like that. But it's fitting into a different architectural business model of control.

A few years ago, one of the big U.S. telephone companies tried to get the FCC to require that Internet service providers architect their networks in such a way that all the traffic went through a common central set of switches. Their stated rationale was that this was the only way that the phone company could guarantee the quality of the connection: to go through the central point. And oh, by the way, this is basically wiretapping, which the Government was interested in. In reality, the fundamental reason they wanted to do it was because that was a common taxing point where they could collect money. The FCC didn't do that. In the U.S., the FCC, the Federal Communications Commission, has been pretty much hands off on the Internet. We have had almost no regulation there, letting a thousand flowers bloom, letting anybody innovate without having to get permission.

>> VINT CERF: Thank you very much, Scott. Alex, would you like to add anything to this? Otherwise I have a bunch of other bullets to shoot.

>> ALEJANDRO PISANTY: I will just make a very brief comment. One thing that impresses many of us who are latecomers to the use of the Internet -- although I must state that a couple of years ago, I discovered in some printouts I have kept from my quantum chemistry workshop of 1979 that I was actually using the ARPANET at that time to run things on computers in Berkeley, and a couple of other laboratories in the U.S., from Bloomington, Indiana, so maybe not such a newcomer -- but I started as a user and certainly not as an architect. One of the things that many of us find very impressive is how these architectural and design principles, first, are so fundamental, being very few, that they really enabled the growth of the net; and second, how they map well into some social and even political principles which are pretty sound, and I will not say universal, but helpful universally. And some people have pointed also to the fact that in the U.S., where much of this work was done, as well as in Europe, in the years in which this work was being done by you guys, it actually was so universal that two cultures that were almost opposite were able to shape it: a sort of more collectivist culture, and a traditional, I will say Yankee, in all respectful senses of the term, very strongly individualistic, self-reliant culture. Both were able to coexist. And that is a witness to the robustness of these principles, more so when we see that now the Internet is implemented in so many different political and cultural environments.

>> SCOTT BRADNER: Can I say something else?

>> VINT CERF: Certainly, Scott.

>> SCOTT BRADNER: I want to build on that a little. One of the powers of the Internet is the ability for people to innovate end-to-end without getting permission, but that is also one of the most basic threats of the net. Threats to society, in the sense of the social order: we have seen in the Arab Spring that the level of impact that the Internet had varied, but still it had an impact, of allowing individuals to communicate in ways that the state wouldn't necessarily want them to be able to communicate.

Many years ago, I was doing a series of tutorials for the Internet Society, in developing-country workshops. And a representative of a particular country came to me, as an instructor, and said that he really would like the Internet, but didn't like pornography. After a bunch of discussion, we concluded that no, he was really using pornography as a symbol: what he didn't like was information that would confuse the citizens. And that is a direct quote. That is fundamental.

One of the by-products of not having that control point that Vint mentioned, that Bob had in his principles, is that you don't have a control point; you don't have a way to filter what people can say to each other. Some countries try very hard, with different degrees of success.

But Larry Lessig, once of Stanford, now of Harvard, said that code is law. He meant that the design of the Internet, and the design of that kind of technology, impinges on the ability of state control. You can't make laws to tell it, to tell the net, to do things that it's not architected to do; which is why the telephone company in the U.S. wanted a requirement to rearchitect the Internet, because the Internet doesn't support the kind of controls that they believe they needed.

>> VINT CERF: Those are all very good points, Scott. A couple of other things might be useful. Back in the technical domain, I would call this notion design factorization. And to illustrate this notion, I would offer the observation that if you

read the protocol specification for the Internet Protocol, nowhere in that document will you see the word "routing," or at least I don't think there will be any mention of how that is done. The assumption is made that somehow, a packet with the right format that is handed in to the Internet will find its way to the destination; but the details of how that routing is done are distinct and separate from the basic Internet Protocol. The idea behind this is to allow, for example, the possibility of multiple alternative routing algorithms, and indeed, we have a number of them. So the point here is that by factoring things out, you offer significant flexibility.

Another interesting feature that was very deliberate in the Internet is that the Internet addressing space is nonnational in character. We didn't start out with the assumption that we should identify countries and then allocate some address space to each of them. Rather, we started out with the notion that every address in the network is reflective of the topology of the network and the way in which, or where, you connect to it.

Interestingly enough, despite the fact that that was an important principle and continues to be the case with regard to IP addresses, the actual use of the net, especially with the advent of the World Wide Web, has led to people creating tables that associate IP addresses with national locations, and in some cases even more refined identifiers, down to the city level. It turns out that their rationale for this has, as far as I can tell, not much or anything to do with control, or with identifying anything, other than using this as a clue for what kind of response should be offered to the party that is using the net.

So as an example, at Google, when you try to connect to www.google.com and the domain name lookup is done, our name server asks: where did this question come from? What is the IP address of the source? Do I have any idea what country that might be in? And it makes a guess, or it looks up in a table, and hopes that it's correct, and then it vectors the party to whichever version of Google is specific to that country. If you are here, using Internet addresses that are believed to be allocated to Kenya, the Web page that would come up is not Google.com, but Google.co.ke. So this is intended to be a friendly response, to try to offer, for example, language assistance. So in spite of the fact that the design principle is to be nonnational, people who saw application utility in having some mapping had to implement it themselves.

I think it would be useful to move to some nontechnical kinds of principles that have certainly been a powerful element of the Internet's evolution. The openness of the specifications turned out to be quite important. No one constrained access to the information about how to build the network. It was a very deliberate decision not to restrict access to the design or the specifications, and an effort was even made to produce reference implementations of the protocols and make them available.

Probably one of the most important decisions made in the time that Bob was at DARPA was to fund the implementation of the TCP/IP protocols for the UNIX

operating system. Initially, that work was done at Bolt Beranek and Newman, and later reimplemented by Bill Joy at the University of California, Berkeley. The Berkeley 4.2 release was the first of the UNIX implementations with TCP/IP in it. And during that time frame, in the early 1980s, this was a period when -- what did they call them? They weren't personal computers yet. They were workstations. The notion of a workstation with an Ethernet connection and a local area net running TCP/IP was enormously attractive to the academic community.

The consequence of making this software implementation of TCP/IP, plus the operating system, available freely was that it certainly induced a rapid uptake in the academic community. This notion of freely available implementations, the notion of source code, the notion of freely available specifications, continues to permeate the Internet environment and continues, I think, to stimulate its further growth. You can see the same sort of thing happening in my company. For example, the release of the Android operating system as source code, or the Chrome browser, or the Chrome operating system are all examples. And there are many others here in Africa. Ubuntu, which is one of the popular versions of UNIX, is widely used partly because it's freely available. I think the notion of stimulating the use of the network by making its tools and raw materials readily available has been an important part of its history, and it should remain that way.

The same thing can be said for application programming interfaces. When Scott

Bradner mentioned Tim Berners-Lee and the World Wide Web, it was the standardization of the protocols and interfaces to them that has allowed so many new applications to be built on top of the World Wide Web. Another example of this exposure of source code is dramatic. When the World

Wide Web was first released, one of the features of the browsers was that, if you wondered how a Web page got built, you could ask the browser to show you what the source of the Web page was. You could see the hypertext markup language. The side effect of showing people how this was actually accomplished is that they

learned how to make Web pages on their own.
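That openness is easy to picture: the hypertext markup behind an early page was short enough to read and copy wholesale. As a minimal sketch, the snippet below uses Python's standard-library parser to list the tags in such a page; the page content itself is a made-up example, not anything from the session.

```python
from html.parser import HTMLParser

# A made-up example of the kind of markup "view source" revealed to early users.
PAGE = (
    "<html><head><title>My First Page</title></head>"
    "<body><h1>Hello, Web!</h1>"
    '<a href="http://example.com">a link</a></body></html>'
)

class TagCollector(HTMLParser):
    """Collect the name of every start tag seen in the document."""
    def __init__(self):
        super().__init__()
        self.tags = []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

collector = TagCollector()
collector.feed(PAGE)
print(collector.tags)  # ['html', 'head', 'title', 'body', 'h1', 'a']
```

Seeing that the whole structure of a page was a handful of readable tags is exactly what let people copy and experiment.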

The notion of Web master emerged out of the freedom to see and copy what other people have done, and to experiment with new ways of implementing Web pages. Over time, programmes were developed to make it easier for people to create Web pages. But the important thing is that this openness notion permeated so many of the layers of the system.

I think we might want to move into some of the institutional consequences of, and principles in, the Internet world. Scott.

>> SCOTT BRADNER: I'd like to actually reflect on one of the points you made, about the World Wide Web and the way that worked. One of the things I mentioned earlier is that the net is no longer quite as transparent as it used to be. We have situations where there are firewalls in corporations and some ISPs, and some countries are getting in the way and things like that.

One of the things that's happened is that what used to be the IP layer of the Internet, the layer where everything could be innovated on, has moved up to the World Wide Web layer, port 80 -- or the secure port for that -- so you can now have new applications running on top of that. Where that is getting more and more important is with HTML5, the new version of HTML, which allows you to build things that look like applications within Web browsers. And where this may have the most effect is actually on smart phones. Many smart phones have some levels of control by the vendor as to what applications they will support. And with HTML5, which for example is being pushed by Apple, one of the ones that has very strong controls on what you can put onto the phone, you can build applications in HTML5 that would never get approved by the app store, and therefore have a whole new layer of innovation without having to worry about control.

>> VINT CERF: That is actually -- we are big fans of HTML5 at Google too. I wanted to note something about the ability of the system to evolve. One of the things that is interesting about the Internet is that it's not a fixed architecture which is trapped in time.

So over the 30-some-odd years that it's been in operation, or nearly 30 years, it has evolved. And for example, we have run out of IP version 4 address space,

and we have to implement a new version of the Internet protocols, which was standardized in 1996 with 128 bits of address space. What is important is they can run together in the same network. It is not like you have to throw a switch somewhere. In fact, even the World Wide Web, which is a very important platform for many applications, as Scott points out, can also invoke non-HTTP protocols. And so when you are talking Skype, or when you are talking Google Talk, or you are doing some kind of video interaction or some other application, you may very well be running multiple protocols at the same time, some within the World Wide Web environment and some below that level, right above UDP or IP or RTP or some of the other very low-level bearer protocols.

So this ability to invoke multiple protocols at the same time, inside and outside the World Wide Web environment, means that the network continues to be a place where innovation is possible, and I would not be surprised to find that there will be new applications and new protocols arising, sitting on top of IP, or sitting on top of UDP, or sitting on top of TCP, or possibly even others that come along. So that is a very important part of the evolution.

Another example of that is the Domain Name System, which was developed in the early 1980s, 1984, 1985, heavily invested in things encoded in ASCII. In the last several years it's been quite apparent -- it's been apparent for a long time -- that not all languages in the world can be written using characters that are drawn from the Latin character set. Now we have internationalized domain names, and even though they ultimately ended up being encoded in ASCII, in order to avoid having to change everything in the DNS to accommodate them, the point is that it's been possible to evolve to a much richer presentation of naming than would have been possible if we had stuck only with the ASCII coding.

This notion of being able to continue to evolve and exploit new ideas, exploit new kinds of transmission systems, is a very important part of the longevity of the Internet and its ability to accommodate new ideas.

So maybe we could push a little further up now into the institutional layer,

because you have seen institutions emerge out of the Internet experience. The most visible technical one is the Internet Engineering Task Force we have already talked about.

The Internet Society arose in part out of the belief that a society would emerge in consequence of people using the Internet. I think it's fair to say that we are seeing that. It is not one society. It is lots of societies. And that's okay. They can run in parallel on the network and use whatever applications seem to be best fitted to human interests. The Internet Corporation for Assigned Names and Numbers emerged out of this whole process, and the Internet Governance Forum emerged out of WSIS, and they have one feature in common. They believe in multistakeholder processes that are open and accessible to all who have something to say. I hope that we are able to preserve that principle. There was a very strong statement to that effect by Larry Strickling, the head of NTIA, in the ministerial sessions on Monday. The reason that is so important is that it is the vital interaction of all these interests that gives the Internet the opportunity to evolve new applications and new ways of serving the people who use it.

Since you have joined us here, would you like to offer some comments as well?

>> SIVASUBRAMAN MUTHUSAMY: My job in all this has been very easy. I

ask questions, and I've left the answers to come from you. So I'm comfortable.

>> VINT CERF: You want to ask some more questions?

>> SIVASUBRAMAN MUTHUSAMY: Some more. One question is, what can we do to preserve core values?

And are we doing enough to preserve the values?

Bits and pieces are happening in different parts of the world. In one country, it's legislation about filters. In another country, it's on surveillance. In some other country, it's on some other problem. But all this happens in complete isolation of what is being discussed here, and what can we do to prevent these values from being altered? So that is my question to you.

>> VINT CERF:

So, I'm not going to try to respond alone to that question.

Let me make a couple of observations. We have already seen the utility and value of some of these core notions. On the other hand, it is not the purpose, in my view, of the Internet to jam its principles down anybody's throat. The Internet is not required. It is a thing that's offered for people to use if they want to use it.

I think the freedom to use the Internet however you want to is a very important one. So I'm not sure that I would try to force everyone to behave the same way. I think, though, that we have to recognize that when the system gets to the scale that it is today, it can be used to do bad things as well as good things. And I think that we have to accept that as a society, we should be interested in protecting ourselves from bad actors. The question is, how can we do that? What means do we have to do that? In what Forum do we even talk about that?

The Internet Governance Forum is one place where we can and should talk

about the harms that could potentially occur, because the Internet is so open and freely usable. I guess I'd like to ask my two other colleagues here whether they have response to that important question. >> SCOTT BRADNER: I mentioned that one of the things about the Internet at

least in the U.S. has been a lack of regulation. This has been a puzzle. It just doesn't make any sense that something as

important as the Internet has gone, has succeeded to exist for as long as it has, without any significant levels of regulation. It's too important to the economic health and social health of the world to have that continue, at least in some minds.

The net has succeeded because of the flexibilities and principles that Vint and others have articulated, and it succeeded in arenas where the people in those arenas never imagined that it would succeed; particularly, telecommunications. The telephone companies did not believe that the net would ever work. The competitor to TCP/IP at the time it came up was X.25. And that made an assumption that somebody did one thing at a time. You put up a connection between you and one other thing.

Vint mentioned one of the things you get on the net is parallelism, multiple things going on at the same time. This has enabled a telephone company to be using IP in their backbone for a decade and not admitting it, maybe because they really didn't think that they wanted to say that. IBM in 1972 said, in an IBM user group, quote, "You cannot build a corporate data network out of TCP/IP." And the reason for that was definitional. By definition, a corporate data network had all of these quality-of-service and managerial controls, and TCP/IP had none of them.

The very organisations that fought the net because it wasn't optimal, because it didn't have controls, because it didn't do what they thought they needed to do -- it's done. It's taken over. The net has taken over IBM, has taken over the telephone companies. And they are in an environment that doesn't make sense to them regulation-wise.

We are at a precipice. We have been at it for a while, where Governments believe that the Internet is far too important to leave to the people who know what they are doing in a technical sense, and they need to impose some kind of controls. The President of France said that the Internet had no management, and it was a moral imperative to fix that. We are going to see more and more of that on the organizational level. Vint has talked about some of the organisations that have done wonderful things there. But we have to be continually vigilant in order to preserve these rights and these principles, because to folks like the guy who came to me and said that he wanted to control information that would confuse the citizens, the Internet doesn't make sense. At a societal level, where the aim of some societies is to control the society, it just simply doesn't make sense. When something doesn't make sense, they want to fight it.

>> VINT CERF: Alex, it looked like you had a question coming from the Twitter feed.

>> ALEJANDRO PISANTY: The back channel is active on several platforms. There is a question coming in from the Twitter feed, made by a colleague in Mexico. It asks me to ask Scott and Vint what they think of Microsoft's efforts to control the hardware used to access the Internet, and that refers to the UEFI stuff. We are reminded that UN rules do not allow ad hominem attacks, so we are advised to express opinions that can be grounded in fact.

>> SCOTT BRADNER: The current effort, the current thing that is probably being referred to, is the boot, the authoritative boot process. This actually comes from a patent that Dave Farber and a few other people had from a number of years ago. And there's a big organisation put together to commercialize it, a trusted computing forum or trusted computing environment. There was a big play on that a few years ago, where the aim was to say that you could have a platform, a computing platform, which content providers could actually trust. In theory, if I control the computer that is sitting in front of me, there is no theoretical way to have digital rights management that allows a content producer to ensure that I'm only using the content in a way that I've paid for. It's theoretically impossible to do, if I control the platform.

Trusted computing environment is hardware that allows the content owner to better control that environment. The chips to support that have been in most PCs for eight or nine years, maybe even longer. They were in Macs for a while, but Apple dropped them, not saying why; it's Apple's forte to not ever say what they are doing. And the latest thing is simply an incarnation of that. It is making use of functionality that has been there for a long time. The arguments in favor of it are very strong.

If you control that environment so that only software which has been

approved can be run, you get rid of all viruses, because the viruses aren't going to be approved. You get rid of all worms. You have an environment where the user doesn't have

to know how to protect themselves in order to protect themselves. And there is a lot of power to that. But the other side of it is that the computer owner can control what you can do. And the controversy that arose recently was whether

the PCs that were built to support this secure boot functionality in Microsoft could refuse to run Linux and other operating systems. Microsoft has assured the community that that is not their intent, and that they will ensure that that's not blocked. But it is that level of control that has been desired by the content community for a very long time. They have been fighting very hard over an environment that they have no possible way of controlling, and don't like it.

>> VINT CERF: Scott, I actually have a somewhat different interpretation of this, so this may be an interesting debate. There has been a focus of attention on protecting machines against the ingestion of malware, and the most vulnerable moment in a machine's life is when it boots the operating system. So the idea that the machine won't boot a piece of code that hasn't been digitally signed is a pretty powerful protection. It feels to me as if your observations made a fairly big leap from the ability to assure that the boot code hasn't been modified, to the assumption that somehow that would inhibit all forms of other operating systems or anything else. I think that one has to be a little careful about under what circumstances a chunk of boot code is signed, and by whom, and what the boot code is allowed to do.
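The gate being discussed can be sketched in a few lines. Real secure boot verifies an asymmetric signature in firmware against a vendor's public key; the HMAC and key below are stand-ins chosen only to make the check runnable -- boot proceeds only if the image still matches the tag it was signed with.

```python
import hmac
import hashlib

# Stand-in for the vendor's signing key. Real firmware checks an asymmetric
# signature against a baked-in public key; an HMAC illustrates the same gate.
VENDOR_KEY = b"hypothetical-vendor-key"

def sign_image(image: bytes) -> bytes:
    """Produce the tag the vendor ships alongside the boot image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def try_boot(image: bytes, tag: bytes) -> str:
    """Firmware-side check: refuse to run code whose tag does not verify."""
    if hmac.compare_digest(sign_image(image), tag):
        return "booting"
    return "refused: boot code unsigned or modified"

image = b"bootloader v1"
tag = sign_image(image)
print(try_boot(image, tag))                  # booting
print(try_boot(image + b" + implant", tag))  # refused: boot code unsigned or modified
```

Note how the sketch also shows Vint's point about who holds the power: whoever controls the signing key decides what the machine will boot.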

TCE was specifically designed, the Farber patent is specifically talking about sequential boot with signed blocks. station. hardware in it involves a set of functionality that includes for example remote at a So that a content owner can ask your PC whether it is running particular software, particular flavors of the software of generations and things like that, particular operating systems, and refuse to download content unless you are, for example. It is specifically built into the TCE functionality. Nobody is currently implementing

that.

Microsoft currently is talking about only the boot process.

But the chip that

supports that, supports all the way up the stack, to that nothing can run on the machine that you don't approve, that the machine, the machine manager and I'm carefully saying it's not machine owner, because there is a difference in concept of whether if you bought the machine, are you the owner, when somebody else is saying what can boot on it. There is a philosophical question of whether you own But certainly, Microsoft at the moment is only the machine under that condition. talking about the boot process. >> VINT CERF: It's not just Microsoft. I mean, this proposal to use strong

authentication of the, and validation of the boot sequence, is proposed for all machines and all chips. implement that. piece of malware. I think the correct formulation to get to your point is that whoever is able to sign the boot code is the party that has control over whether the machine will run that particular boot sequence. By the way, there is another little nuance to this. sequence is also digitally signed. sign that boot sequence? might or might not trust. I have a concern about time. okay with you, Alex. If there are people who would like to raise questions, either from the floor, or possibly online, Sebastien, have there been any online -- why don't we start with you. >> Thank you, Vint. We have a question online from Olivia. The Internet could So I'm going to suggest that we try to open this If you are going to update the The chip makers have been asked specifically to Again, the intent being to avoid having a machine boot up in

boot sequence, you also have to check to make sure that the proposed new boot So the issue here is who is the party that can If it turns out to be a particular manufacturer, maybe

Microsoft in this case, that would be different than some other party that you

up to interaction with the people who have joined us for this session, if that's

be anything from a free-for-all network, where everybody and anything is allowed, including criminal behavior, to the other extreme of content provider or Government controlling it, filtering it, listening to it through deep packet inspection. How can

we solve the challenge of finding the right comfort zone in between those two extremes? or other? Are there methods to look out for? Thank you. Alex or Scott? You start . First of all, it is clear that we don't want the Are there any early warning signs that we should watch out for, that will tell us we are going too far in one direction

>> VINT CERF:

>> SCOTT BRADNER: >> VINT CERF: extremes.

I start, okay.

It's also clear that, at least I would like to propose that we don't want

a network which is so open to abusive behavior that we, not only do we not feel safe, we are not safe, and that our privacy is eroded or lost, our security and confidentiality are eroded or lost, and they could be eroded or lost in both directions; even a network which is completely and totally transparent and controlled by the Government is not going to stop, that will lose all of our privacy and confidentiality. On the other side of the coin, if it's completely wide open, we already have worked examples of people penetrating machines, creating zombies and so on. There has to be some place in between. solution which is purely technical in nature. increasing the safety of the network. And it is my belief that there is no There are a variety of ways of

We implied in this talk some of them about

talking about the secure boot, but that will get us only so far. Then we have to deal with the fact that there are people who will use this facility to exercise abusive behavior, and maybe even attempt to cause harm to others or extract value from them. The only way to deal with that is to detect the problem, and then come to fairly broad common agreements, fairly widespread common agreements, that those behaviors aren't acceptable; and if they are detected, that there will be consequences. That still leads to a question of how you find the perpetrator. It leads to questions of reciprocity across It leads to legal agreements about coping with these This leads to questions of attribution. national boundaries. unacceptable behaviors. I think we are going to have to have discussions in the Internet Governance

Forum and possibly in other forums in order to establish norms that are acceptable on a fairly wide scale. In the absence of any decisions along those lines, I don't see how we will enact any protections that are worth anything at all. Finally, we can't stop people from doing bad things. because it's ethically wrong. values and other things. with the problem. And because we can't stop

them, the only other thing that we have to do is to tell them that they shouldn't, And that's the kind of educational thing that we This is ought to be teaching kids as they grow up, to value national values and family You wouldn't want other people to harm you. the golden rule all over again. We have all those three possible ways of dealing Scott.

Somehow we are going to have to work our way through to a

place that is largely, let's say roughly comfortable for everybody. >> SCOTT BRADNER: Vint said.

I want to add a little bit of flavor to some of the things

Deep packet inspection won't stop the bad guys, because if you remember in World War II, the U.S. employed Navajo Indians to speak code and the code they spoke was their native language. Even if you can intercept something, assuming that it's unencrypted, which is a bad assumption, then you can talk in a code which allows you to actually communicate, and many dissidents in many countries have found this out. So that deep packet inspection is not quite the killer of communication that some Governments might like, or some businesses might want. a risk. But it is still a definitely It is still definitely a threat to one's personal life and privacy.

The question of attribution that Vint brought up is actually a very powerful one. There is a great paper from Dave Clark and Susan Landou on attribution and the difficulty of it. In particular, attribution is being able to determine who sent you With the kind of attacks that we are something or who did something to you. attack. It almost always goes through one or two or four or seven or 25 middle men. Somebody hacks into a computer, a student computer at Harvard, and uses that as a stepping off point to another student computer at Harvard, to a student computer at MIT, to a student computer someplace else, to somebody's home

seeing today, the attack almost never comes from the party that is controlling the

computer, and finally attacks the Pentagon.

If the Pentagon says, we are under And that is probably not

attack, we are going to nuke who is attacking us, and they use the source IP address for that, they are going to nuke some grandma. what they have in mind. So there aren't any easy answers. Attribution or holding countries accountable for

what happens from them, there was, if you look at the, I think it's the Potomac Institute had a videoconference a couple months ago about that, where one of the proposals was exactly that, to hold countries accountable for any attacks, any cyber attacks that come from within the country. But that doesn't stop somebody from the U.S. breaking into a computer in Bulgaria and then using that to attack China. difficult. have some control. out of their territory. That doesn't really work in the modern Internet. >> ALEJANDRO PISANTY: Thank you, Vint. To add to this replies to Olivia's There is, attribution is very very We did that many years ago, with pirates, and where you could actually There was a doctrine of accountability, that countries were

held accountable from bandits that came out of their territory or pirates that came

question, the signs that something is going to go wrong, in what I understood of the question, are very much embodied, and pending what has been said by Vint and Scott, you know something is going to begin to be weird when you see a mix of responses to behavior problems on the net, that leans too heavily on technology and too little on the behavior that it actually wants to regulate, and where the technological solution creates more problems than is intended to solve or is just unachievable. is very important. The attribution problem as has been mentioned by Scott We have As Vint has said, no law actually prevents crime.

terrible laws, that one can kill and there is the possibility in many countries of being killed for killing and people continue to kill. So we have to go back to our basic social problems, and make sure that we do more with the Internet than against the Internet to solve them. >> SCOTT BRADNER: One other note, that one of the approaches that some So to keep track

Governments have worked on and law enforcement proponents frequently talk about is to require ISPs to record the activities of their users.

of every Web site you go to, every E-mail message you send, this is something that is technically possible to do in the Internet, but imagine this in the physical world. A Government that requires every letter to be opened and copied, and recorded, would that ever survive in the physical world? But that is something you can technically do quite easily in the Internet world, and there are many Governments that want to do that. >> SIVASUBRAMAN MUTHUSAMY: Yes. In line with what Scott said, if a If this

Government wants data to be retained, it can only go to ISP, and if it wants something filtered, it can go to another business, a certain company. business, businesses increase their resistance, or they team up better and try and convince Governments that this is not right, or this is against the values, then can this happen. information? >> VINT CERF: tackling? >> ALEJANDRO PISANTY: Yes, thank you. Especially after seeing other Thank you. Alex, I think that we have according to my clock So are there administrativia you believe we should be If the ISPs say no, how could Governments have all the

just about five minutes left.

Dynamic Coalitions not treat the organizational stuff in public and on the record, I think that we are particularly stressed to make sure that the path forward for the Dynamic Coalition is at least set out in a proposal in this session. Given the questions we have received, pretty broad nature of them, recognizing visually many of the participants physically present in the session, I think that we can safely put forward the following proposal, which is that, I will put forward my volunteer effort, and I'm counting on Siva's continuing volunteer effort, he is a man of incredible strength and initiative, and whoever else wants to volunteer to be a core group to move the Dynamic Coalition forward. What we need to do is to produce a report which you can easily do from this session. Siva and I can take that responsibility based on the transcript, put it . forward, put it up on a blog which we will announce, and make sure that it gets proper comment, so that if it's faithful rendition of the session ->> SCOTT BRADNER: It's accurate.

>> ALEJANDRO PISANTY: -- we have the accurate transcript, but we will have a summary that people can use without reading the whole transcript, and without the "umms" of the transcript. Coalition. Also try to get some continuity for this Dynamic What the Dynamic Coalition can actually do now as far as I can see

with the people here, is basically set up a very lightweight observatory, in which we keep track of the most visible at least and facilitate other people keeping track of initiatives, that either by being restrictive or by proposing things like filtering, blocking, and so forth, or by putting forward set of principles, sets of policies, digital agendas and so forth, may have an impingement, an impact on the, or new requirement on the evolution of the Internet's architecture, to make sure that we continue to have a dialogue in this conversation, continue to have a conversation with the private sector, with the technical community, with researchers in the academic community that are making sociological and political science research about these things, Government and civil society, and to promote the activity around this. I think that for now, we would not have an immediate pressing need of establishing membership rules for the Dynamic Coalition, bylaws to regulate the behavior in detail, and stuff like that, which has been found necessary in another Dynamic Coalition . >> SCOTT BRADNER: This is the Internet. It's the Internet. We do it the Internet way. When

>> ALEJANDRO PISANTY: there and solving it.

we have a problem to solve, for which the solution arose, we start seeing who is For now what we want to do is keep this, stress more the dynamic than the coalition side of Dynamic Coalition, and make sure that we can make it useful and valuable over the coming year. I would emphatically ask for comments on that. >> SCOTT BRADNER: >> VINT CERF: have in mind. the same topic. >> Sounds good to me.

I'm certainly happy to help craft whatever draft documents you I think we have run out of time. Sebastien.

I hope we get a lot of feedback from others who are interested in

As Sebastien, not as remote participant.

I think what I understood you suggest, it's a very good way. one point.

I would like to add

As we are the core Internet values coalition, I think we can show that the Internet is a good tool for the Dynamic Coalition to work with, and maybe we can also be the core of how a Dynamic Coalition could work. We certainly need some ideas, some tools, some people. But I am sure that we can pave the way for others, using the Internet with the right tools, for the future of the Dynamic Coalition. I think it's important for the future of the IGF itself: a Working Group, it's okay, but if a Dynamic Coalition could be the way to go from one IGF to the other, then let's try to do it. And I'm sure that with the people that are around the table and in this room, and connected, it will be possible. Thank you.

>> VINT CERF:

Thank you very much, Sebastien.

I think that at this point we have to call the session to a close, but thank you all very much for participating. We look forward to hearing more from you in the future, and of course seeing you the rest of the week. Thank you.

(Applause.)

(Session ends at 4:00.)
