

Published by: Crowdsourcing.org on Jan 11, 2013
Copyright: Attribution Non-commercial





Pierre Barbaroux
French Air Force Research Centre (CReA)
Management of Defence Organisations Research Team
EOAA/CReA, BA 701, F-13661 SALON AIR
Tel: +33 (0)4 90 17 83 30
pierre.barbaroux@inet.air.defense.gouv.fr

Abstract. This article presents qualitative research based on a historical case study of the ARPANET project. Re-considering the history of the ARPANET project as a vivid example of open innovation (OI), the article seeks to identify the capabilities the innovative organisation should hold in order to develop OI projects. In particular, it shows that benefiting from openness in innovation requires that the innovative organisation be capable of achieving (at least) the following tasks: leveraging complementarities between internal and external sources of innovation (selection capability), codifying, capitalising on and disseminating knowledge outcomes (knowledge management capability), and designing and aligning products and organisations (design capability).

Key words. Open innovation, capabilities, design, ARPANET.

1. Introduction

The Open Innovation (OI) paradigm introduced by Henry Chesbrough and his colleagues offers practical guidelines and theoretical principles for dealing with the various dimensions attached to the concept of openness in innovation (Chesbrough and Teece, 1996; Chesbrough, 2003; Chesbrough and Appleyard, 2007). According to the OI approach, the sources of innovation include not just in-house assets and capabilities; innovating also entails establishing connections with external resources which are often distributed and embodied in diverse organisations, technologies and communities.


As Nieto and Santamaria (2006, p. 367) explained, innovation "may not simply depend on skills that firms can find and exploit in-house, but on the effectiveness with which they can gain access to external sources of technological knowledge and skills". In this context, the capabilities required for opening up innovation and benefiting from external resources are critical. Although the literature on innovation management and the theory of the firm have long acknowledged the role played by capabilities in providing the firm with competitive advantage (Wernerfelt, 1984; Barney, 1991; Teece, Pisano and Shuen, 1997; Dosi, Faillo and Marengo, 2008), there is a gap in our understanding of how firms manage to reap the full benefits of openness in innovation management: how does the firm manage to identify internal and external sources of innovation, integrate them effectively, and foster the diffusion of innovation outcomes? To address this question, the article adopts an exploratory methodology and presents qualitative research based on a historical case. The case focuses on the development of the first computerised communication network by the U.S. Advanced Research Projects Agency (ARPA) in the late sixties: the ARPANET. The ARPANET project involved the participation of distinctive communities of scientists, R&D firms and telecommunication companies under the heading of a public establishment: the Information Processing Techniques Office (IPTO). By building on a historical case study, this contribution investigates the nature and logic of the capabilities deployed by the IPTO and its partners to create the ARPANET both as a network and as an organisation. The article begins by reviewing the literature on open approaches to innovation. The review emphasises the knowledge-intensive tasks the innovative firm must be able to perform in order to develop open innovations. It continues by presenting the research methodology adopted to document the history of the ARPANET project. Next, the case study findings are introduced. This section distinguishes two themes. The first theme focuses on how the IPTO designed a collaborative problem-solving organisational form to grapple with a number of technical problems and coordinate a variety of internal and external resources, communities and organisations. The second theme is concerned with the role played by the Network Working Group (NWG), a multidisciplinary community made up of representatives of users, developers and government agencies, in codifying the core concepts embedded in the network and facilitating their dissemination and implementation. Finally, the main implications of the findings are drawn and a typology of open innovation capabilities is introduced and discussed.


2. Conceptual background

To innovate, firms must be capable of managing a variety of resources, including money, technological artefacts, human skills, marketing knowledge and social capital (Dodgson, Gann and Salter, 2008, p. 97). According to open approaches to innovation, these resources are not easily accessible to the firm (Pénin, 2008). They are distributed inside and outside its own financial, technological, organisational and cognitive boundaries. In seeking to access these resources, combine them and develop new products and services, the innovative organisation must find ways to open up its R&D facilities and integrate external sources of innovation. In addition, the OI paradigm suggests that innovation outcomes are likely to be commercialised through a variety of proprietary and non-proprietary strategies (Chesbrough and Appleyard, 2007) which enable the firm to profit from innovation either by strengthening/weakening appropriation regimes or by shaping industry architectures (Pisano and Teece, 2007). Although it is not directly related to the process of invention itself (Arthur, 2007), the appropriation strategy adopted by the innovative firm remains decisive for its ability to profit from its efforts during the commercialisation phase. Basically, the combination of internal and external sources of innovation has been construed by Chesbrough (2003) as a strategic shift from a closed innovation model relying on internal R&D, vertical integration and control, to an open innovation model which, in fact, can take many different organisational forms and rely on many different strategies (e.g. collaboration, strategic alliances, technology licensing, industrial clusters, innovation networks, user integration…). The particular form and strategy adopted by the innovative firm reflect its response to the specific demands for innovation it has to deal with.
Despite important differences among them, these forms and strategies share a common view of the virtues attached to the concepts of collaboration, interaction and openness. In an increasingly interconnected and turbulent economy, innovation requires that the firm be able to interact and collaborate with others (including users, suppliers, rivals and so on) to create, absorb, combine and integrate a variety of in-sourced and out-sourced knowledge (Chesbrough and Teece, 1996). The foregoing does not lessen the need for nurturing in-house innovative assets, but it requires that the innovating firm be capable of dialogue with external partners so as to combine (Kogut and Zander, 1992) and absorb (Cohen and Levinthal, 1990) a collection of internally and externally distributed, often fragmented, pieces of knowledge. How does the firm manage to grapple with these (highly) demanding activities?


Investigating the relations between innovation and new technology, Dodgson, Gann and Salter (2005) suggested that the ability of the firm to engage in technological innovation resides in the combination of specific organisational attributes and capacities. The authors stated that "the competitive advantage to be derived by firms from innovation lie at the creative leadership in design and development linked with effective integration of other productive functions, the capacity to manage complexity, and in the ability to fully engage the users of innovation in the process of its realization" (Dodgson et al., 2005, p. 24). Managing innovation within knowledge-intensive environments therefore requires that the firm be capable of coordinating creative, productive and marketing resources, and of integrating in-house knowledge with external knowledge, e.g. from mobile workers, skilled users, component suppliers, rival firms and venture capitalists (Chesbrough, 2003). Consistent with the above argument, students of innovation in complex product systems (CoPS; Hobday, 1998; Prencipe, Davies and Hobday, 2003) have long established that the capacity of the firm to coordinate the participants involved in innovation projects (e.g. users, suppliers, universities, financial institutions) and integrate their respective contributions depends upon its ability to design an organisational form that best fits the architectural properties of the innovative product. Indeed, the particular organisational form (e.g. centralised, decentralised, network-centric, matrix, project-based) deployed by the firm to innovate is likely to shape how it encourages participation, integrates contributions and, at the same time, controls the resulting innovation outcomes (Sosa, Eppinger and Rowles, 2004).
By investing time and resources in designing product and organisational forms properly, the firm is likely to reduce coordination costs and strengthen its capacity to integrate dispersed knowledge assets and technologies (Ulrich, 1995; Sanchez and Mahoney, 1996; Langlois, 2002; Ethiraj, 2007). It should be noted that the above issues (and challenges) are not specific to open approaches to innovation. Innovation has long been considered by scholars and practitioners as a profit-oriented, knowledge-intensive activity supported by the combination of creative processes (e.g. design and R&D) that aim at facilitating the integration of new ideas into new marketable products and services. What is new with the open innovation perspective is that it refers to innovative contexts where knowledge is distributed inside and outside the boundaries of the firm. Since the sources of innovation are embedded in a variety of organisational units (e.g. individuals, communities, organisations), they cannot be integrated vertically by the innovative firm. Therefore, the open model regards innovation as a collective and interactive process involving a number of individuals and organisations which collaborate in accessing, creating and sharing resources and technologies (Pénin, 2008) that can be used in combination to support the invention and commercialisation of new products and services.

3. Research design

3.1. Methodology

This contribution adopts an exploratory methodology based on qualitative research from a single case (Eisenhardt, 1989). In particular, a historical case study approach has been developed which focuses on the ARPANET project. The latter has been selected because it provides a vivid example of what openness means for innovation management. The development of the ARPANET required a number of public agencies, research laboratories, R&D firms and telecommunication companies to collaborate under the heading of a public establishment: the U.S. Advanced Research Projects Agency (ARPA). The project exemplifies how innovation management depends on the ability of the corporate sponsor (i.e. ARPA) to encourage interactions among participants, both formally and informally, so as to solve a variety of technical problems, codify and disseminate knowledge, and promote the diffusion of innovation outcomes. By opting for a historical methodology, the objective is to deconstruct the innovation capabilities on which the ARPANET project relied, identify their main attributes and proceed to a synthetic analysis. This methodological position is neither unique nor singular. Weick (1993) adopted a similar perspective to identify the sources of resilience in organisations by re-analysing the history of the Mann Gulch disaster. Hargadon and Douglas (2001) also applied a historical methodology to examine how the social and institutional embeddedness of Thomas Edison's system of electric lighting (a radical innovation) influenced its adoption and further commercial exploitation and development. In the same vein, Scranton (2007) developed a historical case study of the U.S. military jet-propulsion industry during World War II and the early Cold War to elaborate on the concept of dynamic innovation. More recently, Lenfle (2009) explored the organisational logics supporting the Manhattan project and introduced a theoretical framework which reconsiders the concepts of explorative innovation project and management.

3.2. Data analysis

To document the case, a variety of data from books, academic publications, archives, interviews and documentation published by the various institutions, organisations and researchers that participated in the development of the ARPANET has been collected and analysed. The analysis of the data followed a three-step process of classification, mapping and synthesis (see Appendix). First, data sources were classified according to their form and content. This preliminary classification provided elementary associations between the various data sources and their descriptive, illustrative, comparative and/or analytic content. Second, connections among the previously classified data were established using three criteria: chronology, technology and actors. This allowed for a detailed mapping of the data according to their form, content and major topics. Third, the structured data were scrutinised by hand, resulting in the identification of the following themes: participants' skills and responsibilities, technical challenges and scientific problems, product and organisation design issues, and knowledge management processes (e.g. codification and dissemination). These themes shaped the way the presentation of the case study findings has been organised (see Appendix).

3.3. A brief history of the ARPANET

In 1958, the Advanced Research Projects Agency (ARPA) was created by the U.S. Department of Defense (DoD) to promote advanced applied research, notably in computer technology. With the intensification of the Cold War, it was expected that this emerging technological field could deliver decisive command and control (C²) capabilities to the U.S. military and maintain the technological superiority of the U.S. services over the Soviets. Indubitably, one of the most outstanding accomplishments of ARPA had been the development of the first wide-area computer network: the ARPANET (Kleinrock, 2008). The concept of social interactions enabled by computer-mediated networking appeared at the very beginning of the sixties.
In 1962, Joseph Licklider and David Clark, researchers at the Massachusetts Institute of Technology (MIT), envisioned in a memorandum entitled "On-Line Man-Computer Communication" a globally interconnected set of computers through which everyone could quickly access data and programs from any site (Licklider and Clark, 1962). Very rapidly, Licklider became the first head of the computer research program at ARPA, called the "Information Processing Techniques Office" (IPTO). While at the IPTO, he argued for the importance of the concept of networking in remote computer-mediated communication (Leiner et al., 1997). In 1964, Paul Baran at the RAND Corporation (Santa Monica) and Donald Davies at the National Physical Laboratory (Middlesex, UK) investigated the conditions for deploying highly reliable computer networks by exploring the concept of distributed communication (Baran, 1964; Davies and Barber, 1973). Two years later, working with Thomas Marill (Computer Corporation of America, Cambridge, Massachusetts) on Leonard Kleinrock's (UCLA) packet switching theory (Kleinrock, 1964), Lawrence Roberts (MIT) empirically demonstrated the feasibility of communications using packets rather than circuits (Roberts and Marill, 1966). The result of this experiment was the realisation that time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine. In late 1966, Roberts went to ARPA[1] to develop the computer network project. He published his "plan for the ARPANET" ("Multiple Computer Networks and Intercomputer Communication") at the Association for Computing Machinery (ACM) Symposium on Operating Systems Principles (Gatlinburg) in October 1967. The IPTO was responsible for a program entitled "Resource Sharing Computer Networks", whose objectives were to develop technologies and gain experience in interconnecting computers, and to improve applied research in computing and resource sharing (BBN 1981, II-2). In particular, this program aimed to fund and coordinate research efforts in distributed computer-mediated communication networks, and to select prime contractors for the nodes and the overall network design (BBN 1981, II-10). In July 1968, a competitive procurement was prepared and issued for the selection of a prime contractor. Bolt Beranek and Newman Inc. (BBN) was selected in January 1969 to design the Interface Message Processor (IMP; BBN 1981, II-11). The IMPs were critical since they enabled distinct hosts to communicate with each other over a telephone line. The BBN team consisted of seven computer scientists (including William Crowther and Robert Kahn) placed under the supervision of Frank Heart. In September 1969, after BBN finished the development of the IMP, Leonard Kleinrock's "Network Measurement Center" at UCLA was selected to be the first node on the ARPANET.
Kleinrock assembled a research team of computer science graduate students (including Charlie Kline, Steve Crocker, Jon Postel and Vint Cerf) to prepare for connection as the first node of the experimental network (XNET). One month later, Douglas Engelbart's project on the "Augmentation of Human Intellect" at the Stanford Research Institute (SRI) provided the second node. Two other nodes were selected to complete the network architecture: Glen Culler and Burton Fried at UC Santa Barbara (UCSB), and Bob Taylor and Ivan Sutherland at the University of Utah. Together, these first four host sites formed the XNET.
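The packet-switching principle that these early experiments validated, i.e. splitting a message into independently transported packets that are reassembled at the destination, can be illustrated with a minimal sketch. The function names and packet layout below are purely illustrative assumptions, not part of any historical implementation:

```python
import random

def packetize(message: bytes, size: int = 8) -> list[tuple[int, bytes]]:
    """Split a message into (sequence number, payload) packets."""
    count = (len(message) + size - 1) // size  # ceiling division
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Reorder packets by sequence number and rebuild the message."""
    return b"".join(payload for _, payload in sorted(packets))

packets = packetize(b"packets, not circuits")
random.shuffle(packets)  # packets may arrive out of order
assert reassemble(packets) == b"packets, not circuits"
```

Unlike circuit switching, no end-to-end connection is reserved: each packet can take its own route through the network, which is what allowed lines to be shared and the network to tolerate failures.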


[1] Lawrence Roberts became director of the IPTO in 1969 (Kleinrock, 2008, p. 11).


In December 1970, the initial ARPANET host-to-host protocol, called the Network Control Protocol (NCP), was completed by the Network Working Group (NWG). The NCP "was the first transport layer protocol of the ARPANET, later to be succeeded by the TCP" (Kleinrock, 2008, p. 13). Remote computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete host-to-host protocol and other network software. In the early seventies (1971/1972), network users could begin to develop applications, including commercial applications. In October 1972, a public demonstration of the ARPANET was organised in Washington during the first International Conference on Computer Communications (ICCC). Robert Kahn (BBN) installed a complete ARPANET node at the conference, with about forty active terminals permitting access to dozens of dispersed computers. Each host site was invited to participate in the demonstration. An air traffic control simulation involving many geographically distant interconnected sites was successfully run during the conference. The ARPANET demonstration lasted for three days and proved the feasibility of packet switching theory and the reliability of the network (Roberts, 1985). In 1974, a common language that would allow different packet networks to interconnect and communicate with each other was developed by a team of scientists at Stanford under the supervision of Vinton Cerf, working with Robert Kahn. This became known as the Transmission Control Protocol/Internet Protocol (TCP/IP). In 1977, the Transmission Control Protocol (TCP) "is used to interconnect three networks (ARPANET, PRNET and SATNET) in an intercontinental demonstration" (Kleinrock, 2008, p. 13). This gave birth to what would become the Internet, a connected set of networks using the TCP/IP standard.
Figure 1 summarises the history of the ARPANET project presented above, starting with the creation of the ARPA and ending with the final adoption of the TCP/IP standard in 1983.


1958 - The DoD creates ARPA
1962 - Online communication and data sharing (J. Licklider & D. Clark); J. Licklider becomes the first head of the IPTO
1964 - Distributed communication (P. Baran); packet switching theory (L. Kleinrock, T. Marill & L. Roberts)
1966 - L. Roberts goes to ARPA-IPTO
1968 - Prime contractor (PC) procurement issued in July
1969 - BBN is selected in January as PC for the IMP; XNET connected in September
1970 - The NWG completes the NCP
1972 - R. Kahn demonstrates the reliability of the ARPANET in October
1974 - R. Kahn and V. Cerf develop TCP/IP
1983 - The ARPANET adopts the TCP/IP standard

Figure 1. A brief history of the ARPANET.

4. Case study findings

The next sections are organised around four themes: organisation design, problem-solving, participants' roles and contributions, and codification of knowledge.

4.1. Organisation design

When the decision was made to develop the network, the IPTO and its partners began to decompose the whole product development process into discrete technical tasks. The contractual arrangements made by the IPTO privileged decentralisation, which in turn enabled theoretical experimentation and creative exploration. Fundamentally, the organisational solution adopted by the Office rested on two pillars: (i) decomposition and autonomy, and (ii) collaborative interactions. Decomposition allowed interdependent technical problems to be treated as if they were independent, while collaborative interactions fostered the establishment of a dominant design for the network. Rather than imposing central control over local developments, the IPTO promoted direct interactions among participants. As Leonard Kleinrock remembered,
“We dealt with BBN directly. When we had a problem with BBN, we complained to Larry [Lawrence Roberts] and he would step in and make sure things were fixed up. It was not a formal relationship that required all kinds of paperwork to go back and forth. It was peers, and researchers, and developers. It was a friendly and efficient environment in that sense.” (Interview with Judy O’Neill, Ibid.)

The objective of the IPTO was to reap the full benefits of interdisciplinary teams by encouraging informal dialogue and knowledge exchanges. Again, Leonard Kleinrock explained that
“the culture of those early days of the ARPANET community was one of open research, shared ideas and work, no overbearing control structure, and trust in the members of the community” (Kleinrock, 2008, p. 12).

Even when the telecommunication infrastructure required close attention, participants (i.e. BBN and telephone companies, like AT&T) interacted directly through informal contacts which enabled "effective maintenance with a minimum network disruption" (ARPANET Study Final Report, 1972). Consequently, the ARPANET technology could be considered "as a natural outcome of the progressive R&D atmosphere that was necessary for the development and implementation of the network concept" (ARPANET Study Final Report, 1972).

4.2. Identification of major technical and scientific problems

As a complex and technically demanding project, the development of the ARPANET required a proper definition and attribution of both technical and organisational authority and responsibility. The selection and division of tasks among participants were primarily determined by their respective expertise and capabilities regarding a number of key technical problems. Seven technical problems needed to be handled before connecting the first four sites of the network: topology, error control, host interfaces, switching node performance, remote control, routing and host protocol (BBN 1981, II-13-19). Each problem required interactions and active collaboration between representatives of the host sites (research centres at universities), R&D companies (NAC, BBN and Honeywell), and government agencies (IPTO, RML and DSS-W). Table 1 presents the seven technical problems and the solutions initially adopted to cope with them.
Table 1 lists each challenge with its formulation and the solution adopted.

Topology. Formulation: to design the network architecture (arranging M links among N nodes) subject to given constraints (e.g. maximum/average time delays, reliability) at the lowest cost. Solution: Leonard Kleinrock (NMC-UCLA) proposed a design algorithm, the Concave Branch Elimination (CBE); NAC submitted a competing solution called the Cut Saturation Method (CSM). The latter solution was adopted.

Error control. Formulation: to minimise errors in the transmission of data (packets) from one site to another. Solution: input data were compared with output data according to a cyclical procedure. If there was no error, a message was transmitted to the transmitting node; if there was an error, no message was sent to the transmitting node and the packet was retransmitted.

Host interface. Formulation: to allow a logical match between switching nodes and the host site, each using a distinctive language (the word lengths of the interface and the host computers vary). Solution: BBN implemented a dedicated hardware unit in each host machine, called a "special host interface", to enable dialogue between host sites and the IMP.

Switching node performance. Formulation: allowing remote computers to share resources and data requires switching nodes that are effective in terms of reliability and computing speed. Solution: BBN selected the Honeywell 316 computer for integration into the IMPs because it provided users with high capacity at modest cost.

Remote control. Formulation: the increasing number of computers in the network requires a technology for managing, updating and debugging remote computers. Solution: a Network Control Center (NCC) was established and software was developed which made it possible to examine or change the operating software in any node of the network from the NCC.

Routing. Formulation: how to streamline the decision process by which each node decides how to route information to reach any particular destination node? Solution: a distributed traffic routing algorithm was developed which estimated, on the basis of information from adjacent nodes, the globally proper path for each message to be transmitted. This enabled adaptation to varying traffic loads and to potential line and node failures.

Host protocol. Formulation: to design a common language, called the Host Protocol, so as to facilitate communications among heterogeneous machines. Solution: the NWG developed a program called the "Network Control Program" (NCP) whose function was to establish, cut and switch connections between host machines and the network, and to control information flows.

Table 1. Seven technical problems and solutions (adapted from BBN 1981, II-13-19).
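The distributed routing solution described in Table 1, in which each node revises its delay estimates from information supplied by adjacent nodes, can be sketched as a distance-vector style update. The data structures and names below are illustrative assumptions, not the original IMP algorithm:

```python
def update_routes(own_delays, neighbour_tables, link_delays):
    """One update round at a single node.

    own_delays:       {destination: current estimated delay from this node}
    neighbour_tables: {neighbour: {destination: neighbour's estimated delay}}
    link_delays:      {neighbour: measured delay of the direct link}
    Returns (new delay estimates, chosen next hop per destination).
    """
    new_delays, next_hops = {}, {}
    for dest, current in own_delays.items():
        best, hop = current, None
        for nb, table in neighbour_tables.items():
            # Delay via a neighbour = link delay + the neighbour's own estimate.
            candidate = link_delays[nb] + table.get(dest, float("inf"))
            if candidate < best:
                best, hop = candidate, nb
        new_delays[dest], next_hops[dest] = best, hop
    return new_delays, next_hops

# A node with one neighbour B (link delay 1) learns routes to B and C.
delays, hops = update_routes(
    own_delays={"B": float("inf"), "C": float("inf")},
    neighbour_tables={"B": {"B": 0, "C": 2}},
    link_delays={"B": 1},
)
assert delays == {"B": 1, "C": 3} and hops == {"B": "B", "C": "B"}
```

Because link delays are re-measured continuously, repeating such rounds at every node lets routes adapt to varying traffic loads and to line or node failures, as the table notes.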


By identifying the major technical problems, the IPTO made a first step towards selecting the individuals and organisations (i.e. designing an organisational form) capable of conceiving local solutions to the various domain-specific problems while integrating them properly.

4.3. Selection of participants

Basically, the ARPANET project required interactions among a range of heterogeneous communities of scientists, R&D firms and telecommunication companies under the heading of a public establishment (ARPA-IPTO). In addition, a number of government agencies assisted the IPTO within specific domains. Each participant involved in the project held particular skills and expertise across a number of scientific, managerial, engineering, industrial and organisational areas. The ability of the IPTO to coordinate this expertise was determinative for the success of the ARPANET project. Figure 2 presents a graphical overview of the organisational form deployed by the IPTO to develop the ARPANET. This graphical representation allows for identifying the major participants involved in the project and establishing connections between them. The next paragraphs present their respective skills, contributions and responsibilities.









[Figure 2 depicts the participants in the project: the NWG, university labs, government agencies, R&D companies and telecommunications companies.]

Figure 2. ARPANET organisational form.

IPTO and government agencies

The ARPANET project required a joint effort from many government agencies, all answering to the Information Processing Techniques Office (IPTO) at ARPA. These agencies offered critical support capabilities which enabled the IPTO to supervise the development of the network, control costs and play a leading role in the emergence of the modern IT industry. The effective coordination between government agencies (IPTO, DECCO, RML and DSS-W) facilitated the development of expertise in key organisational and technological domains which, in turn, enabled the implementation of effective governance mechanisms. As such, the capabilities offered by the IPTO and other government agencies reflected the internal resources committed by the U.S. Department of Defense (DoD) to the project. More precisely, regarding organisational and technical leadership, the key participant in the ARPANET project was the IPTO. Representing ARPA, this Office "made certain key architectural decisions (…) the IPTO set policy of the network, made decisions about who would join the network" (BBN 1981, III-26). As Leonard Kleinrock indicated,
"I think the IPTO was a prime mover for the United States in the advancement of computer technology through advanced thinking and … what I shall say… heroic funding of the things they thought were worthwhile. Their motto was, high risks, high payoff (…) It was one of the great experiments in science, I think. It completely changes the way things are going on now – commerce, government, industry, science, etc." (Interview with Judy O'Neill, Charles Babbage Institute, Center for the History of Information Processing, 1990, April 03).

A number of government agencies provided the IPTO with a bundle of management and technical support capabilities. Three government agencies played a critical role. A procurement agency, the Defense Supply Service-Washington (DSS-W), supported the IPTO by providing expertise in contractual negotiations and interactions with contractors. The scope of DSS-W expertise also included technical monitoring of component contractors. In addition, the Range Measurements Laboratory (RML) at Patrick Air Force Base (Florida) offered procurement and technical support capabilities which complemented those of the DSS-W. Finally, the Defense Commercial Communications Office (DECCO) monitored the contractual arrangements made by the IPTO with the telephone companies which provided the telecommunication infrastructure of the network. In particular, DECCO acquired wideband facilities which enabled the rapid growth of the network in the seventies.

Computer science community

In the mid-sixties, few research centres or companies had successfully connected remote computers for the purpose of experimenting with shared resources (BBN 1981, I-5). Only the Western Data Processing Center at UCLA and the Bell Laboratories had been capable of enabling similar computers to perform load sharing. In this context, "the most efficient way to develop the techniques needed for an effective network was thought to be involving the research talent (…) in prototype activity" (BBN 1981, II-2). However, when the ARPANET project was officially launched in 1969, the IPTO did not have many options for selecting the first nodes of the network. Lawrence Roberts (director of the IPTO) explained that,
“There, we had a whole collection of sites identified, probably at least fifteen, that we knew we were going to try to connect to … but many of those sites were uncooperative to begin with. So there was only a small set of sites where it was going to be compatible with things now. In addition, we had some particular requirements. We needed to have the testing started immediately. So Kleinrock at UCLA was a must. And probably the only one that was an absolute must. Because, he was going to do all of the testing of the network… to figure out what the theory was behind this. And make it work for us in the future. Then, secondly, Engelbart had the resource center. And we wanted to get that on line as soon as possible. And get that rolling. So that we would have the document center on line. And then at the University of Utah, they were very compatible and very supportive. And so that was effective. And the Santa Barbara was also the same type of very cooperative and supportive. And that was all local enough that we could do it without expensive communications” (Interview with Lawrence Roberts, Internet Archive, Caribiner Group, August 15, 1994, Tape Number Two).

Two research centres played an active role in the initial development of the experimental network (XNET): the Network Measurement Center (NMC) at UCLA and the Network Information Center (NIC) at Stanford. The NMC “had the responsibility for much of the analysis and simulation of the ARPANET performance, as well as direct measurements based on statistics gathered by the IMP program” (BBN 1981, III-39). Leonard Kleinrock at UCLA was involved in the definition, development and testing of all protocols and procedures. His competency was unanimously acknowledged within the computer science community. He also enjoyed personal relationships with Lawrence Roberts and a number of other key individuals involved in the project, in particular with scientists and engineers from the R&D companies (NAC and BBN). The NIC, under the direction of Douglas Engelbart at SRI (Stanford), was responsible for collecting data, codifying knowledge and developing the technological tools for the storage and dissemination of technical documentation. As Leonard Kleinrock explained,
“SRI was an important member of the community (…) they were the second node of the network (…) Documentation went through them, as did the network RFC notes” (Interview with Judy O’Neill, Ibid.).

The NIC had developed an on-line computer program to provide network users with electronic and hard-copy documentation. The latter included information about resources available at the various host sites, network protocols, languages and procedures, and a number of working papers and technical reports. More generally, the computer science community had been responsible for designing and implementing the hardware and software necessary to connect to the network and to access other hosts’ resources. The academic community therefore contributed primarily by offering high-level technological skills and user expertise, and by sharing critical component knowledge in a number of technological domains, including microcomputers and software programming. Subsequently, research host sites participated in the ARPANET project by offering user and developer capabilities. They articulated the preferences, needs and constraints characterising the (future) users of the network. These skills and expertise represented key external resources which the IPTO needed to access so as to solve the multiple scientific problems that came along with the exploration of computer-mediated communication technology.

R&D companies

During the initial stages of the project, what delineated the division of technical responsibilities among research centres, R&D companies and telephone companies was the specification of a key component of the network: the Interface Message Processor (IMP). Although the telephone companies (AT&T and General Telephone) provided the physical architecture of the network (e.g., circuits, data sets and lines), representing the most costly element of the project budget, individual host sites could not communicate without reliable interfaces. Remarkably, the IPTO designated two research-oriented companies as prime contractors for developing the IMP and configuring the network: BBN and NAC. Bolt Beranek and Newman Inc. (BBN) was responsible for the development of the interfaces (IMPs). BBN subcontracted part of the work on IMPs to Honeywell and Lockheed. These companies supplied the system hardware (the Honeywell 316 computer and the Lockheed SUE minicomputer) which served as the building blocks of the IMP core design concepts (ARPANET Study Final Report, 1972). The foregoing gave BBN a special position within the project. Although the IPTO participated in key managerial decision processes regarding the setting up of a network policy or the selection of host sites and prime contractors, it was BBN that provided day-to-day operation and maintenance.
BBN “carried out much of the day-by-day business of the network (…) without need for daily IPTO supervision” (BBN 1981, III-26). Assuming responsibility for maintaining orderly network operation, BBN implemented a Network Control Center (NCC) which assumed the central tasks of scheduling and monitoring network operations. As quoted in the ARPANET Study Final Report (1972), “the control center appears highly effective in the area of problem identification, diagnosis and restore action”. The foregoing stemmed from the healthy relationships between BBN, Honeywell and Lockheed, which shared responsibility for the maintenance of the network and conceived a reliable diagnosis system. In 1975, after the ARPANET became technologically mature, the day-to-day maintenance and regulation of the network was transferred from BBN to the Defense Communications Agency (DCA). Regarding network topology, the IPTO selected another R&D company, NAC. The Network Analysis Corporation (NAC) had been created in the late sixties by Howard Frank, Ivan Frisch (Berkeley) and Steve Carr (SRI) as a profit-oriented R&D company specialised in network optimisation. Leonard Kleinrock, who supervised the NMC at UCLA, remembered:
“In about 1971 or so Larry [Lawrence Roberts] was at my house, and I suggested that he meet Howie [Howard] Frank to assist in the topological design problem. So I put them together. And it was a click. Then Larry gave NAC the contract for doing the topological design of the ARPA network” (Interview with Judy O’Neill, Ibid.).

NAC was thus selected by the IPTO and awarded a contract for designing the network topology. This task involved the selection of host sites, the establishment of links between them, and the reconfiguration of the network whenever a new node joined. According to the ARPANET Study Final Report (1972), “along with BBN, NAC has a very important role in the planning and engineering of the ARPANET”. Besides topological optimisation tasks, NAC also developed software to assist the IPTO and the NMC in planning the evolution of the network topology. The R&D companies which participated in the project played a critical role in the development, exploitation and maintenance of the network. They offered essential external resources which were used in combination with the research centres’ and government agencies’ capabilities.

Telephone companies

The telecommunication infrastructure of the ARPANET was supplied by traditional telephone companies operating at local, national and international levels. DECCO negotiated with the relevant companies and managed to obtain the desired service at reduced costs. “In the case of a circuit from UCLA to RAND, for example, most likely the service would be procured from General Telephone, the dominant telephone company in the Los Angeles area” (BBN 1981, III-32). At the national and international levels, two companies played a central role: AT&T and Bell Systems, which utilised their Long Lines division, through which DECCO negotiated specific procurements. National telephone companies also supplied “the components necessary to make up the service from the regional telephone companies” (BBN 1981). Although the telephone companies supplied the telecommunication infrastructure and provided formal and informal assets to coordinate the local and the global levels, they did not participate in the innovation process per se. As Leonard Kleinrock argued,
“It has been said that the telephone industry, or the communications industry, had absolutely nothing to do with the development of the ARPA network (…) To first order, that was correct (…) IBM dropped out (…) AT&T was not involved as an organisation (…) It took them decades to come up to the technology that the data processing guys developed in the ARPANET” (Interview with Judy O’Neill, Ibid.).

The telephone companies provided external resources to the IPTO but did not contribute to creating new architectural and component knowledge (the exploration process). Rather, they participated in the exploitation of the network and supported its rapid growth and extension.

4.4. Codification of knowledge

In the early years of the ARPANET, the knowledge required to connect to the network was disseminated and shared through informal channels. Leonard Kleinrock indicated that
“The way the ARPANET was used was not easy, but it was used mainly as people migrated from one site to another (typically when they changed jobs). They wanted to use the software back at their old hosts, and they knew how to use it” (Interview with Judy O’Neill, Ibid.).

However, as the number of computers communicating over the network increased, general protocol issues grew in complexity and required standardisation. Basically, there was a need for a general agreement in order to minimise “the amount of implementation necessary for network-wide communication” (BBN 1981, III-58). After a period of experimentation, a layered approach to the specification of communication protocols was designed (BBN 1981, III-59), and it became standardised in January 1972. In addition, the Network Information Center (NIC at SRI) was facing increasing demands from users and developers for information on protocols, languages and standards. It was the goal of the Network Working Group (NWG) to specify technical protocols and codify the resulting host-to-host communication standards so as to facilitate their diffusion and use among the host organisations and communities.


The Network Working Group

Back in the summer of 1968, Elmer Shapiro at SRI had been designated by the IPTO to explore technical solutions for dealing with host-to-host communications (BBN 1981, III-45). After an initial meeting, Shapiro and his colleagues at SRI began to investigate theoretical issues in host-to-host protocols. A couple of months later, in February 1969, “the first meeting of host representatives and representatives from the NMC and NAC, along with the IMP contractor, was held at BBN (…) they called themselves the Network Working Group” (BBN 1981, III-45-46). In April 1969, the NWG consisted of Steve Carr (Utah), Jeff Rulifson and Bill Duvall (SRI), and Steve Crocker and Gerard Deloche (UCLA). Copies of the group’s notes had to be sent to six persons for validation: Robert Kahn (BBN), Lawrence Roberts (ARPA), Steve Carr (Utah), Jeff Rulifson (SRI), Ron Stoughton (UCSB) and Steve Crocker (UCLA). Since membership was never closed, the composition of the NWG evolved over time with the growth of the network. The NWG promoted open values and critical thinking among its members, and encouraged users and developers to participate in specifying problem-solving procedures and sharing best practices. The group emerged from informal meetings organised by the computer science community to review the research challenges which appeared essential to making the development of communication networks feasible. It was during those meetings that major decisions were made regarding the architecture of the network (e.g., distributed control, host-to-host protocol, routing algorithm). Steve Crocker at UCLA played a major role within the NWG. Together with a number of graduate students at the University of Utah, UCLA and SRI, he investigated how to specify and install the host-to-host protocol.
The objective of the NWG was to promote informal discussions between users and developers so as to refine any intuition, suggestion or criticism which could foster the development and utilisation of the network. Steve Crocker indicated that
“notes are encouraged to be timely rather than polished. Philosophical position without examples or other specifics, specific suggestions or implementation techniques without introductory or background explication, and explicit questions without any attempted answers are all acceptable” (RFC 3, 10, 24, 27 & 30, p. 1; Italics added).

The Request For Comments (RFC)

The NWG’s working notes began to circulate to most participants. Their editing gave birth to the major reference for ARPANET documentation, the “Request for Comments” (RFC). The first two notes produced by the NWG (RFC 1 and RFC 2) were concerned with Host-to-Host Software specification. They were written by Steve Crocker (UCLA) and Bill Duvall (SRI) respectively, in April 1969. The same month, Steve Crocker wrote the third note (RFC 3), which stated the documentation conventions and established the general philosophy of the NWG and the RFC series. As mentioned in the introduction of RFC 3,
“The Network Working Group (NWG) is concerned with the HOST software, the strategies for using the network, and initial experiments with the network. Documentation of the NWG’s effort is through notes such as this. Notes may be produced at any site by anybody and included in this series” (RFC 3, p. 1).

RFC 3 also defined the form that every note should take in order to be stored, indexed and disseminated easily. The initial NWG notes included the following information: serial number (assigned by Steve Crocker), author’s name and affiliation, date and title. The documentation conventions included in RFC 3 were revised and updated regularly (cf. RFC 10, 24, 27 and 30), in particular because the group’s composition evolved as the number of users increased. As the network developed, the NWG grew in size and the variety of RFCs became significant. There followed a need to categorise the RFC notes, identify the main topics under discussion and indicate whether the notes were current, obsolete or superseded. Authored by Peter Karp (a researcher at MITRE) in February 1971, RFC 100 provided the user community with the first Guide to NWG/RFC. As indicated by its author, the Guide “is an attempt to introduce order into the NWG/RFC series” (RFC 100, p. 1). It introduced nine categories to classify the notes, each category being divided into subcategories². “For each category the official document (if any), unresolved issues, and documents to be published are identified. For each subcategory, relevant NWG/RFCs are listed and a brief description of the topics addressed in each note is given” (RFC 100, p. 1). A given note could be associated with more than one category and subcategory³. Among the early contributors, Steve Crocker was the most prolific writer⁴. When RFC 100 was published in February 1971, six notes had already been classified as obsolete. They all belonged to the first ten RFCs published in 1969. In June and July 1984, Joyce Reynolds and Jon Postel co-authored RFC 901 and RFC 902, which established the ARPA-Internet protocols and policy. These two notes identified “the documents specifying the official protocols used in the Internet” (RFC 901, p. 1) and described “a policy-statement on how protocols become official standards for the ARPA-Internet and DARPA research community”⁵ (RFC 902, p. 1). In August 2003, the RFC series included 3,587 notes. The series continued long after the conversion from NCP to TCP standards in the early eighties, and the RFC documentation is still in use within the computer science and Internet communities. Figure 3 provides a graphical view of the contributions of the major participants involved in the project. Herein, participants are presented as critical resource suppliers for the IPTO. These resources are internally and externally distributed and hence must be coordinated and integrated effectively to deliver full R&D and problem-solving capabilities.

² The nine categories were labelled as follows: Administrative (A.), Host/IMP Protocol (B.), Host/Host Protocol (C.), Sub-System Level Protocol (D.), Measurement on Network (E.), Network Experience (F.), Site Documentation (G.), Accounting (H.) and Other (I.).

³ For instance, RFC 102 was classified simultaneously within the following categories and subcategories: A.5 (Administrative – Policies), B.1 (Host/IMP – General), B.2 (Host/IMP – Marking/Padding), C.1 (Host/Host – Protocol Proposals), C.4 (Host/Host – Flow Control), C.5 (Host/Host – Error Control) and C.6 (Host/Host – Interrupt).

⁴ Crocker authored, often with Jon Postel, 27 of the first 102 notes, more than a quarter of the total. Vinton Cerf was the second most prolific contributor, with 6 notes among the first 102.

[Figure 3 (schematic). Box labels: “Tests and simulation of network performance, definition of protocols and standards, codification of knowledge”; “Development of the IMP, optimisation of the network architecture, day-to-day operation and maintenance”; “Local and global telecommunication infrastructures (telephone lines, data, circuits, etc.)”; “Internal (ARPA) and external (US Air Force) technical, logistical and management support capabilities”; “Technical expertise, scientific problem-solving capability, codification of communication protocols & standards”.]

Figure 3. The contributions of major participants involved in the ARPANET.


⁵ The ARPA changed its name and became the Defense Advanced Research Projects Agency (DARPA) in 1972.


5. Implications

The case study findings provide evidence that engaging in innovation projects requires that the innovative organisation be capable of selecting, connecting and coordinating heterogeneous resources which are likely to be distributed and embodied in a variety of organisations and communities. This is rendered possible through proper organisation design and knowledge integration processes. Drawing on these findings, the next sections elaborate a typology of the capabilities the firm must hold and use in order to meet open innovation (OI) requirements. Presented in Table 2, the typology discriminates between three categories of capabilities: organisation design, knowledge management and adaptive governance.
Table 2. Open innovation capabilities: A typology.

Organisation design: The ability to select heterogeneous “resources” (individuals, technologies, communities, organisations) that are distributed internally and externally, to establish formal and informal linkages among them, and to harness the complementarities between them.

Knowledge management: The capacity to use social and technological intermediation artefacts that aim at managing the codification, storage, retrieval, communication and classification of knowledge.

Adaptive governance: The capacity to implement incentive systems, reporting structures and governance mechanisms so as to support formal and informal collaborations within distributed contexts, and to adapt them continuously along with the development of the project.

5.1. Organisation design

Our case study findings suggest that the ability of the IPTO to design proper organisational forms was decisive for the success of the ARPANET project. Designing organisations requires setting up and codifying the underlying structures, processes and goals which support both organisational and organising phenomena (Chiva and Alegre, 2007). Remarkably, the IPTO decided to deploy a network-centric organisation to support the development of the ARPANET technology. The decision made by the IPTO regarding organisation design was determined by the distributed nature of the skills and expertise required to design the network and by the architectural properties of the network itself. Indeed, the decomposition of the network architecture into specific components (e.g., the IMP and the Host-to-Host software) had a direct influence on the design of the organisational form (i.e., on the selection of participants and the establishment of linkages between them) that best fitted the development of the new technology. The division and allocation of R&D tasks by the IPTO led to the adoption of a community-based network, considered the fittest organisational form given the innovation project at hand. Therein, the selection of participants and the exploitation of complementarities between them were supported by a combination of formal and informal mechanisms which in turn fostered creative interactions and collective problem solving. From its inception, the ARPANET project had been nurtured by a mix of interpersonal contacts and contractual relations. Regarding formal relations, government agencies like DSS-W, RML and DECCO had a decisive impact on the ability of the IPTO to interact with participants by providing the Office with the capacity to handle the variety of contractual arrangements involved in the project. Regarding informal linkages, the representatives of host sites (research centres), R&D companies (NAC and BBN) and government agencies (including the IPTO itself) knew each other personally before engaging in the ARPANET project. They all belonged to the extended computer science community, had often been colleagues within the same research institutions (e.g., MIT) and had accumulated experience of collaborative work on prior research programs under the direction and financial support of the Federal government. Hence, the identification and exploitation of complementarities between them was facilitated by the cultural proximity of researchers and engineers who shared common values and norms (e.g., technical excellence, knowledge sharing, learning, reciprocity, trust).
This enabled the IPTO to seize innovation opportunities arising from “cultural compatibility” (Eng and Wong, 2006) and to capitalise on the resulting synergistic effects so as to establish formal relationships and control the innovation outcomes arising from multi-disciplinary interactions. Investigating the relation between distinctive types of innovation and specific kinds of knowledge interactions, Tödtling, Lehner and Kaufmann (2009) supported this assertion. While radical innovation is likely to “draw on new scientific knowledge, generated in universities and research organisations”, the authors claimed that “the exchange of this type of knowledge requires personal interactions” (Tödtling et al., 2009, p. 59). In the same vein, Love and Roper (2009) suggested that “the benefits of cross-functional teams arise from synergies from different sets of views, skills, and expertise that can arise only through physical interaction of, and particularly verbal communication among, specialized personnel” (Love and Roper, 2009, p. 194). Building on the ARPANET case, one could add that synergies are more likely to emerge when people share a common understanding of each other’s skills and expertise, the existence of a common set of values and norms shaping innovation practices positively.

5.2. Knowledge management

The knowledge management capability refers to the capacity of the innovative organisation to deploy intermediation artefacts and organisational forms that enable managing the codification, storage, retrieval, communication and classification of knowledge. This capability is critical because it guarantees that both the product architecture and its technical functionalities are unambiguously written, stored and accessed, which in turn facilitates adoption by increasing numbers of users. Furthermore, the diffusion of innovation outcomes tends to be extremely complicated without the assistance of technical systems and documentation. Intermediation artefacts are thus essential for new users to implement the technology and use it in a simple and effective way. Regarding documentation and codification tasks, the IPTO made an original decision: it supported the formation of a user-developer community, the Network Working Group (NWG). The NWG provided the IPTO with a formal organisational structure capable of coordinating distributed efforts, fostering the integration of component knowledge and promoting the adoption of the network beyond the computer science and military communities. Since the NWG transcended the boundaries of its members’ parent organisations, it offered critical resources to grapple with technical problems and codify the resulting protocols, languages and procedures.
Anticipating recent work on the role played by user-developer communities in open source software (OSS) development projects, the ARPANET exemplifies how the integration of users can be vital in ensuring that the technical specifications of the innovative product fit users’ expectations and in providing additional expertise to solve technical problems and codify the resulting solutions. Regarding technological artefacts, the ARPANET project provides what could be construed as the first example of the systematic utilisation of electronic communication devices to facilitate knowledge creation and diffusion (e.g., the RFC) and accompany the growth of user communities. Indeed, the first e-mail communication software had been invented by Ray Tomlinson (BBN) as a response to the need for a user-friendly communication and coordination device (Leiner et al. 1997, p. 103). Thanks to the availability of user-friendly communication technology and to the organisation of public meetings and conferences which demonstrated the reliability and applicability of the ARPANET, the level of technical skill required to use the network began to decrease. In turn, this enabled the IPTO to reach a larger set of users and strengthen the dialogue between expert users/developers and novices.

5.3. Adaptive governance

Regarding governance issues, the development of the ARPANET necessitated the implementation of a decentralised structure which combined formal authority and informal peer-based legitimacy. Among the multiple tasks for which governance mechanisms were needed, the codification, storage and dissemination of the languages, standards and protocols related to the utilisation of the ARPANET had been a matter of particular concern for the IPTO. This was because codifying the knowledge embodied in the network’s architecture and software components was essential for capitalising on local and global technical solutions and supporting the rapid growth of the network beyond the computer science community. Although the resulting informal and cooperative mode of management appeared to be aligned with the decentralised problem-solving organisational form adopted, its practical application did not come without tensions. BBN’s software development and, to some extent, maintenance practices were criticised by the network community because they did not prevent network failures and long-term inoperability (ARPANET Study Final Report, 1972). Despite remarkable achievements regarding hardware (e.g., the IMP) and software development (routing algorithm, host-to-host NCP protocol), there was pressure for a more effective (centralised and authority-based) coordination policy to avoid duplication of efforts and to characterise the network as an entity.
In other words, there was a need for cross-administrative and inter-organisational governance that would promote the coordination and cooperation necessary to achieve technical and commercial successes. This need became particularly significant with the growth of the network during the seventies. In 1971, two years after the official launch of the project, the director of the Center for Computer Sciences and Technology (CCST, National Bureau of Standards, U.S. Department of Commerce), Ruth M. Davis, published a note entitled “Comments and recommendations concerning the ARPANET network” (Davis, 1971). The CCST was reflecting upon “several alternatives which should immediately be considered with regard to the proper utilization or transfer of the resources and the assets of the ARPA network” (Davis, 1971). Among these alternatives, it had been envisaged to sell the ARPANET to a private company (e.g., a telephone company) or to create a public-oriented community (e.g., an association) responsible for operating and developing the network. The major lesson that can be learned from the ARPANET project regarding governance is that the IPTO proved capable of designing incentive systems, reporting structures and governance mechanisms appropriate to the various phases of the project development cycle. During the early phases of exploration of new knowledge, personal interactions, decentralisation of authority and community-based governance mechanisms were privileged by the IPTO. These modes of management fitted the requirements that came along with knowledge exploration dynamics. Then, once the major scientific problems had been solved and a dominant design for the network established, the project entered a new phase based on the exploitation of knowledge and the development of commercial applications. At this point, the IPTO adopted a more balanced mode of management. Indeed, the community-based mode of management that had proved effective during the early phases of the project was maintained only for dealing with the NWG’s knowledge-oriented activities; by contrast, the day-to-day maintenance, operation and development of the network was transferred from BBN, a key member of the NWG, to the Defense Communications Agency (DCA), which was given responsibility for integrating the ARPANET into other Defense-oriented communication systems and programmes, and for extending the potential applications of the network infrastructure.

6. Conclusion

This paper contended that adopting open, collaborative and interactive approaches to managing the development of innovative products and services requires that the innovative organisation hold specific capabilities.
Building on a historical case study of the ARPANET project, the article proposes a typology of the capabilities supporting innovation that includes three categories: organisation design, knowledge management and adaptive governance. This conceptualisation suggests that the innovative organisation must be capable of identifying and leveraging complementarities between heterogeneous resources that are distributed and embodied in a variety of organisations and communities (e.g., firms, research communities, public agencies). Within this framework, it is argued that opening up innovation necessitates that the firm be capable of designing and aligning product architecture, organisational forms and governance mechanisms. Finally, the paper confirms that the deployment of a user-developer community coupled with effective communication tools is likely to enhance collective problem-solving and foster the adoption and diffusion of innovative products and services. It is hoped that the assumptions put forward in this article and the historical case study methodology adopted will encourage further research into the capabilities supporting the adoption of open, collaborative and interactive approaches to innovation.

7. References on the ARPANET project

ARPANET Study Final Report (1972), prepared for DARPA by RCA Service Company.

Baran, P. (1964), “On distributed communication networks”, IEEE Transactions on Communications Systems, 12 (March), pp. 1-9.

Bolt Beranek and Newman, BBN (1981), A History of the ARPANET: The First Decade, DARPA Report n°4799, April 1, 1981 (prepared for DARPA by Bolt Beranek and Newman, Inc.).

Davies, D.W. and Barber, D.L.A. (1973), Communications Networks for Computers, John Wiley & Sons.

Davis, R.M. (1971), “Comments and recommendations concerning the ARPANET network”, Center for Computer Sciences and Technology, National Bureau of Standards, Department of Commerce.

Kleinrock, L. (1964), Communication Nets: Stochastic Message Flow and Delay, McGraw-Hill: New York.

Kleinrock, L. (2008), “History of the Internet and its flexible future”, IEEE Wireless Communications, February, pp. 8-18.

Leiner, B.M., Cerf, V.G., Clark, D.D., Kahn, R.E., Kleinrock, L., Lynch, D.C., Postel, J., Roberts, L.G. and Wolff, S. (1997), “The past and future history of the Internet”, Communications of the ACM, vol. 40, n°2, pp. 102-108.

Licklider, J. and Clark, W. (1962), “On-line man-computer communication”, Spring Joint Computer Conference, National Press, Palo Alto CA, May 1962, vol. 21, pp. 113-128.

Roberts, L. and Marill, T. (1966), “A cooperative network of time-sharing computers”, Computer Corporation of America, Technical Report n°11, June 1, 1966.


Roberts, L. (1985), The ARPANET & Computer Networks, NetExpress Inc. publications (http://ia311536.us.archive.org/0/items/TheArpanetAndComputerNetwork/MML.txt).

8. Academic References

Arthur, W.B. (2007), "The structure of invention", Research Policy, vol. 36, n°2, pp. 274-287.

Barney, J. (1991), "Firm resources and sustained competitive advantage", Journal of Management, vol. 17, pp. 99-120.

Chesbrough, H. and Teece, D. (1996), "When is virtual virtuous? Organising for innovation", Harvard Business Review, vol. 74, n°1, pp. 65-73.

Chesbrough, H. (2003), Open Innovation: The New Imperative for Creating and Profiting from Technology, Boston, MA: Harvard Business School Press.

Chesbrough, H. and Appleyard, M.M. (2007), "Open innovation and strategy", California Management Review, vol. 50, n°1, pp. 57-76.

Cohen, W.M. and Levinthal, D.A. (1990), "Absorptive capacity: A new perspective on learning and innovation", Administrative Science Quarterly, vol. 35, n°1, pp. 128-152.

Dodgson, M., Gann, D.M. and Salter, A. (2005), "Craft and code: Intensification of innovation and management of knowledge", in K. Green, M. Miozzo and P. Dewick (Eds), Technology, Knowledge and the Firm: Implications for Strategy and Industrial Change, Cheltenham: Edward Elgar, pp. 11-28.

Dodgson, M., Gann, D.M. and Salter, A. (2008), The Management of Technological Innovation: Strategy and Practice, New York: Oxford University Press.

Dosi, G., Faillo, M. and Marengo, L. (2008), "Organisational capabilities, patterns of knowledge accumulation and governance structures in business firms: An introduction", Organisation Studies, vol. 29, n°8-9, pp. 1165-1186.

Eisenhardt, K.M. (1989), "Building theory from case study research", Academy of Management Review, vol. 14, n°4, pp. 532-550.

Ethiraj, S. (2007), "Allocation of inventive effort in complex product systems", Strategic Management Journal, vol. 28, n°6, pp. 563-584.

Hargadon, A.B. and Douglas, Y. (2001), "When innovations meet institutions: Edison and the design of electric light", Administrative Science Quarterly, vol. 46, n°3, pp. 476-501.


Hobday, M. (1998), "Product complexity, innovation and industrial organization", Research Policy, vol. 26, n°6, pp. 689-710.

Kogut, B. and Zander, U. (1992), "Knowledge of the firm, combinative capabilities, and the replication of technology", Organisation Science, vol. 3, pp. 383-397.

Langlois, R.N. (2002), "Modularity in technology and organisation", Journal of Economic Behavior and Organisation, vol. 49, n°1, pp. 19-37.

Lenfle, S. (2009), "Exploration, project evaluation and design theory: A rereading of the Manhattan case", communication to the 11th IRNOP Conference, Berlin, October 11-13, 2009, 26 pages.

Nieto, M.J. and Santamaria, L. (2007), "The importance of collaborative networks for the novelty of product innovation", Technovation, vol. 27, n°6-7, pp. 367-377.

Pénin, J. (2008), "More open than open innovation? Rethinking the concept of openness in innovation studies", BETA Working Papers, n°2008-18, 21 pages.

Pisano, G.P. and Teece, D.J. (2007), "How to capture value from innovation: Shaping intellectual property and industry architecture", California Management Review, vol. 50, n°1, pp. 278-296.

Prencipe, A., Davies, A. and Hobday, M. (2003), The Business of System Integration, Oxford: Oxford University Press.

Sanchez, R. and Mahoney, J.T. (1996), "Modularity, flexibility, and knowledge management in product and organization design", Strategic Management Journal, vol. 17, pp. 63-76.

Scranton, P. (2007), "Turbulence and redesign: Dynamic innovation and the dilemmas of US military jet propulsion development", European Management Journal, vol. 25, n°3, pp. 235-248.

Sosa, M.E., Eppinger, S.D. and Rowles, C.M. (2004), "The misalignment of product architecture and organisational structure in complex product development", Management Science, vol. 50, n°12, pp. 1674-1689.

Teece, D.J., Pisano, G. and Shuen, A. (1997), "Dynamic capabilities and strategic management", Strategic Management Journal, vol. 18, n°7, pp. 509-533.

Weick, K.E. (1993), "The collapse of sensemaking in organisations: The Mann Gulch disaster", Administrative Science Quarterly, vol. 38, n°4, pp. 628-652.


Wernerfelt, B. (1984), "A resource-based view of the firm", Strategic Management Journal, vol. 5, n°2, pp. 171-180.


9. Appendix

9.1. Data sources

We build on three major sources of data to document how the ARPANET was developed:

1. The literature on the history of the ARPANET. The analysis of this literature provided us with a rich description of the computer and telecommunication industry as a socio-technical system made up of diverse public and private organisations and research-oriented groups of scientists that collaborated to develop the ARPANET. Among these sources, one was referred to extensively: Bolt Beranek and Newman Inc.'s A History of the ARPANET, prepared for DARPA in 1981. This document provides a detailed description of the ARPANET programme from its inception in the early sixties to 1981, together with a complete bibliography of the reports and publications produced by the key actors involved in the project.

2. Academic publications in two domains: (i) networking and computer science research, and (ii) innovation management. These sources enabled us to identify the key inventions in computer remote communication science and applied research (e.g. Leonard Kleinrock's packet switching theory) that triggered the development of specific components (e.g. the IMPs). They also provided critical information on how scholars in organisation and innovation theory built on the case of the ARPANET (e.g. Abbate 1999) to elaborate theoretical arguments about how innovation processes are organised and managed.



3. Open archives. These have been used extensively to gather (i) original drafts and documents (e.g. Lawrence Roberts' ARPA Resource Sharing Computer Network, published in 1968) and (ii) interviews with key actors involved in the project (all interviews are available online at the Charles Babbage Institute website: http://www.cbi.umn.edu/oh/index.phtml). This source provided detailed information about the capabilities and responsibilities of the participants involved in the project.

9.2. Data

The analysis of the data comprised three steps:

1. Classification of the data sources according to two criteria: their form and their content (Table 3).

2. Classification of the empirical evidence provided by our preliminary engagement with the data according to three topics: chronology, technology and actors (Table 4).

3. Manual analysis resulting in the identification of the following themes: history of the project, attributes of the project, actors involved in the project, and technological artefacts related to the project (Table 5).
Criteria: Form (components and definitions)
- Published articles: published in books, academic journals and conference proceedings.
- Technical monographs: focused on the technical specification of the architecture, interfaces and functionalities of the innovation product.
- Biographies/interviews: dedicated to/focused on the contribution of key actors involved in the project.
- Institutional publications: published by key public/government establishments, research laboratories and/or companies.
- Documentations: published by the participants involved in the project to inform insiders and outsiders (including archival records, notes, slides, briefs).

Criteria: Content (definitions)
- Descriptive: provided researchers with factual knowledge on specific technological, organisational, institutional and political attributes of the project.
- Provided researchers with information and knowledge explicitly and directly related to the dimensions of the research question.
- Provided researchers with evidence for comparisons regarding the technological, organisational, institutional and political attributes of the project.
- Provided researchers with elaborated information on specific technological, organisational, institutional and political attributes of the project.

Table 3. Classification of data sources
Topics and definitions:
- Chronology: related to the chronological history of the project and the identification of key events.
- Technology: related to the characteristics of the technology developed, embedded and/or transformed during the development phases.
- Actors: related to the identity, role and strategy of the various participants involved in the innovation project.

Table 4. Classification by topics
Themes and detailed items:
- History: When did the project begin, and why? Which events played a critical role in the project?
- Project attributes: Which attributes best characterise the project as a 'product' and as a 'project'?
- Actors: How to define the role played by the IPTO? What best characterises the role played by the user communities? How to define the role played by suppliers and prime contractors? Which attributes best depict the relations between participants in the project?
- Technology: How was technology used by the participants to communicate and manage knowledge?
Table 5. Thematic analysis

9.3. Organisation of the findings

Two main empirical results were isolated to characterise how the ARPANET was developed:

1. The first result focuses on organisation design issues, distinguishing between three sub-results: methodology, identification of problems and selection of participants.

2. The second result emphasises how the IPTO managed to codify and disseminate knowledge and foster the adoption of the ARPANET technology.

These results provided a framework for organising the presentation of the case study findings.

