
Statistical Analysis Packages

Hans-Jürgen Andreß, Universität zu Köln, Köln, Germany


© 2015 Elsevier Ltd. All rights reserved.

Abstract

This article outlines the main features of present-day statistical software by reviewing its development from the (late) 1950s to the present. It focuses on the management and analysis of social science data and concludes with a discussion of present-day software alternatives for social scientists.

Introduction

Since World War II, the social sciences have become increasingly empirical. This does not mean that before World War II the social sciences were only doing theory without any links to real-world problems, or that there were no empirical investigations of real-world phenomena. But it is certainly true that empirical studies analyzing quantitative data were clearly outnumbered; and the few that existed used a methodology that was much less statistically sophisticated than the quantitative analyses published nowadays in top social science journals. The main reason for this development is obvious: It is the availability of user-friendly software that allows social scientists to read, manipulate, and analyze complex data with a few keystrokes or some simple commands, even if they are no specialists in statistics or information science. And it is also true that "the most important changes in statistics in the last 50 years are driven by technology. More specifically, by the development and universal availability of fast computers and of devices to collect and store ever-increasing amounts of data" (De Leeuw, 2011; emphasis added).

Quantitative data can be analyzed with all kinds of tools: with paper and pencil, pocket calculators, office software, and specialized statistical software. This article focuses on statistical software and not on software that can also be used for statistics ("software for statistics"; e.g., a spreadsheet program). It does so by putting a special emphasis on social science data. While a statistical program simply needs a rectangular matrix of numbers to do its calculations, social science data have at least two characteristics that need special attention. First, the numbers are often not the result of quantitative measurement; rather, they indicate (code) certain characteristics of the unit of analysis (e.g., being a male or female respondent). Second, sometimes, for certain units, no number is available, either because the corresponding variable does not apply to that unit (e.g., the type of job for a respondent who is unemployed) or because the corresponding information is not available (e.g., when an employed respondent refused to provide information about his job). As a consequence, statistical software specifically designed for social science data must provide possibilities to complement the data matrix with additional information about the codes (labels) and missing data. Moreover, statistical computations have to control for the missing information. Finally, and this applies to all kinds of quantitative data, data sets often require a great deal of manipulation before they are ready for analysis. Therefore, any statistical software has to include some functionality for data manipulation, sometimes nearly as complex as data bank software that allows one to index, select, append, and merge all kinds of data sets. One should also note that nowadays statistical software is not confined to the analysis of numbers. The data may also include text, and thus statistical software also provides (often limited) possibilities for text analysis.

Due to the rapidly changing software market, it is not possible (and not useful) to give a complete overview of all available statistical software. Instead, this article outlines the main features of present-day statistical software by reviewing its development from the (late) 1950s to the present time (Section A Brief History of Statistical Analysis Packages). The article concludes with a discussion of our present-day alternatives (Section What Choices Do We Have Today?).

A Brief History of Statistical Analysis Packages

The following historical sketch describes the development of statistical analysis packages in three steps that roughly summarize the innovations of the 1960s/1970s, the 1980s, and the 1990s. To understand the historical legacy of the first packages (and the innovations of their followers), we start with a résumé of the tools that were available to social scientists before the advent of statistical analysis packages.

This historical sketch is closely linked to the development of information technology. From the very beginning, statistical applications were determined by the amount of computation necessary to draw conclusions from the data. In the early times of modern statistics, computing time was clearly a restriction. Salsburg (2002) estimated that Sir Ronald Fisher's computations for one table in his "Studies in Crop Variation. I" took about 185 h on his mechanical desktop calculator. Nowadays, with first the electronic (mainframe) computer and then the personal (micro) computer, computing time is increasingly less of an obstacle. Moreover, the development of programing tools and interfaces made this hardware more and more user-friendly. In addition, the refinement of graphical interfaces has revolutionized the display and exploration of data.

The Time Before Statistical Analysis Packages

Before the advent of statistical software on electronic computers, social scientists had to live with mechanical computers and tabulating machines. Larger data sets that could not be

376 International Encyclopedia of the Social & Behavioral Sciences, 2nd edition, Volume 23 http://dx.doi.org/10.1016/B978-0-08-097086-8.03023-3

handled manually had to be transferred to cards using specialized coding systems that represented the information in digital form by the presence or absence of holes in predefined positions. These punch cards are also called Hollerith or IBM cards, because Herman Hollerith invented both the storage medium and the processing technology (tabulating machines) to analyze the stored information. He founded the Tabulating Machine Company, which was one of four companies that merged to form the Computing Tabulating Recording Company, later renamed IBM. The Eleventh United States census in 1890 was the first census to use this technology. From then on, until the 1950s, this technology was used in all industrialized countries to process business, government, and scientific data. In the 1950s, magnetic tapes and new electronic input devices were introduced to store mass data. However, not until the 1960s was punch card technology gradually replaced by magnetic tapes, as more capable electronic computers became available.

The early times of (quantitative) social research were dominated by punch cards and tabulating machines. Since the punch card provided only 80 columns to store information, all kinds of coding systems were invented to store as much information as possible on one card. If one card did not suffice for the data of one unit of analysis, one had to use several cards for each unit, but then also to make sure that the whole card deck with all the information for the entire sample remained in correct order. Biographies and obituaries of older social scientists are full of anecdotes about how they fought with tabulating machines, multiple punch systems, and disordered card decks that fell down on the floor. It was not before the advent of the transistor-based (so-called second generation) electronic computer in the 1960s (e.g., the IBM 7090) that some larger pieces of software were developed that allowed nonspecialists to manage and analyze social science data.

One was the DATATEXT package (Armor and Couch, 1972), developed at the end of the 1960s at Harvard; the other was a package of statistical routines for biomedical research (BMD), which was developed at the end of the 1950s at the University of California in Los Angeles (Dixon and University of California, 1967). (For better readability, in this article all package names are written in capital letters.) Both were still carrying the footprints of the punch card times. DATATEXT provided, besides statistics, all kinds of routines to cope with messy card decks and multiple punch systems. BMD was simply a collection of programs, each one for a specific statistical method (e.g., analysis of variance), but all using the same command language and input-output system that expected the data to be prepared in advance in a rectangular data matrix. Program commands had to be written on punch cards using certain columns for command names and other columns for the command parameters. While DATATEXT used command keywords, BMD was controlled entirely by numbers.

The exclusive mode of processing and analyzing data was the so-called batch mode. Program commands were written on punched cards and, together with a card deck of data, were input into the computer. Depending on the power of the computer and the number of other active batch jobs on the computer, it took some time (sometimes even days) until the user would see an output of the program results. Very often, the output only showed an error message, because the user had misspelled a command or had not followed the fixed input scheme for the commands. Then the whole process had to be repeated, often several times, until all errors were eliminated. At that time, the statistical analyses were simple (compared to nowadays' level of sophistication), but the process was very time consuming, and not all social scientists (especially outside the US) had access to electronic computers. Not surprisingly, only a small and selective group of social scientists used this technology.

The Time of the Mainframes

The explosion in the use of computers began with the third generation of electronic computers using integrated circuits or microchips. The IBM System 360, first shipped in 1965, is an example of such a third generation computer. These were still large (mainframe) computers, both in terms of size and price. Hence, only larger organizations could afford them. In most cases, they were part of a computing center that offered computer time as well as input and output devices to a larger group of users. Input devices could still be card readers, but increasingly became computer terminals equipped with a keyboard and a screen. In principle, this allowed some kind of interaction between the user and the computer or its software. Several users could work concurrently with the computer, as operating systems included routines to share computing resources between users (so-called time-sharing systems).

This new generation of electronic computers was very consequential for all statistical analysis packages that existed so far. Designed for second generation computers and their operating systems, they were simply incompatible and could not be used anymore. As a collection of single statistical programs, BMD easily made the transition and, equipped with a new name, BMDP, used keywords (and not numbers) for the commands (University of California and Dixon, 1975). However, BMDP's target group was health scientists. Social scientists missed an alternative to DATATEXT, whose transition to the new generation of electronic computers came too late.

At that point, a new software package, the Statistical Package for the Social Sciences (SPSS), had already started its success (Nie et al., 1970). It was developed around 1968 by social scientists at the University of Chicago, and its fast diffusion in the market showed that its designers had built a product that satisfied the needs of its target group, the social scientists. One of the reasons for its success story was the program manual, which besides describing the commands provided applied introductions to all statistical techniques; and these introductions were easy to read even for those without a comprehensive statistical education (however, at the cost of not documenting the algorithms used for the statistical techniques). Wellman (1998) considered it one of sociology's most influential books because it allowed ordinary researchers to do their own statistical analyses. Another reason for the success of SPSS was its availability on different computers and operating systems. In the 1970s, SPSS dominated as the tool for social science data analysis, while BMDP remained a specialist tool for the statisticians.

Although written for a new generation of computers, both packages still had features reminiscent of the punch card era. A prominent example is that SPSS expected parameters for its

commands to start exactly in the 16th column of the command line. Although, as mentioned, user-machine interaction was possible, BMDP and SPSS expected a batch of commands (a "program"), which would then be executed without intervention by the user. The way designers of statistical analysis packages thought about their users at that time was still not much different from the former batch mode times. When intervention was not possible, and when program output was mostly provided by line printers, possibly in a distant computing center with quite some turnaround time, users chose the safe way of producing results. They rather asked for everything ("ALL BY ALL", "STATISTICS ALL") instead of asking only for the results that were important. Moreover, most of the data management functions of DATATEXT were lost. BMDP as well as SPSS expected a rectangular matrix, and the user had to use the supplied data generation commands for data management and cleaning.

All in all, statistical analysis packages introduced with the third generation of electronic computers were characterized, on the one hand, by providing easy access to the statistical tools of data analysis, but also, on the other hand, by the absence of any tools for data management. Moreover, they did not use the potential for user intervention and processed the commands in batch mode. Finally, a more technical point: the typical storage medium of that time was the magnetic tape, which could only be read and written sequentially. Random access memory was scarce and mostly processed sequentially (as if it were a magnetic tape). Certainly, sequential storage media are a severe restriction for complex data bank systems. Nevertheless, the data management gap was filled in the second half of the 1970s by two software products: SIR/DBMS and SAS.

The mnemonic SIR stands for a data bank management system (Robinson et al., 1980) that was quite popular in the social sciences because it provided, on the one hand, simple commands to manage a relational data bank and, on the other hand, the ability to output the contents of this data bank to any kind of statistical analysis package. For example, the German Socio-Economic Panel Study, a renowned panel study of the German resident population, was managed for quite a while with SIR/DBMS. However, as more and more statistical analysis packages included the most important data management procedures in their functionality, SIR/DBMS more or less became a niche product.

SAS was developed at almost the same time as SPSS by computational statisticians at North Carolina State University (SAS Institute, 2013). While at the beginning a statistical software for large agricultural data sets, it developed into a comprehensive statistical package with often more functionality than its competitors and with a strong emphasis on handling large and complex data. For a long time, it was available only on IBM mainframes, which clearly restricted its diffusion. Later on, when it was available on all kinds of machines, it became quite popular and a serious competitor for SPSS. In 1976, the inventors of SAS formed the SAS Institute Inc. and increasingly marketed SAS as a high-end software product mostly targeted at researchers in private business. The accompanying pricing policy made SAS difficult to afford for researchers in publicly funded academic organizations and hence restricted its popularity in this part of the market for statistical software.

All in all, the most important statistical software packages during the era of the mainframes were (in alphabetical order) BMDP, SAS, and SPSS. According to De Leeuw, the three competitors differed mainly in the type of clients they were targeting. "And of course health scientists, social scientists, and business clients all needed the standard repertoire of statistical techniques, but in addition some more specialized methods important in their field. Thus the packages diverged somewhat, although their basic components were very much the same" (De Leeuw, 2011). This does not mean that no other statistical analysis packages were available. On the contrary, important centers of statistics or survey research and individual researchers developed similar systems: for example, to name just a few, MINITAB from Penn State, OSIRIS from the Institute for Social Research at the University of Michigan, P-STAT from Princeton, GENSTAT from Rothamsted Experimental Station, or ALMO developed by Kurt Holm.

Some of these other packages also introduced important innovations to the design of statistical software. GLIM, for example, was notable for encouraging an interactive, iterative approach to statistical modeling, besides providing a unified framework for fitting a wide range of generalized linear models (the GLIM syntax could be summarized on two (!) sheets of paper). It also allowed users to define their own macros and hence provided an open programing environment compared to the closed shops of SPSS and its competitors. SYSTAT, to name another example, was a statistical analysis package that put a special emphasis on statistical graphics.

The Advent of the Microcomputer

Indeed, by the end of the mainframe era, the terrain was prepared for social scientists to do statistical analyses themselves. The software was there, its use was increasingly included in university teaching, and even the computers came a bit closer to the users with so-called minicomputers such as Digital Equipment's VAX or Hewlett-Packard's HP3000 that provided computer resources to smaller workgroups. Yet, it was IBM's personal (micro) computer that really democratized the use of statistical analysis packages. Indeed, with a PC on one's desk, everybody could manage and analyze large amounts of data. Besides bringing (rapidly increasing) storage and computing resources to the individual researcher, the main innovation of this fourth generation of computers was its user interface. From the very beginning, the user had total control over what was going on in the computer. But quite soon, operating systems for microcomputers provided graphical user interfaces (GUIs) with windows, icons, menus, and pointers that made user-machine interaction much easier.

All major statistical analysis packages made the transition to this new world of personal computing. Commands could be written and executed in a command window or chosen from a menu, and the results were shown in a separate output window that could be copied and pasted into other software (e.g., into word processors for report writing). From now on, statistical analysis packages could be used interactively. Commands and their results could be easily tested and revised if the results did not match the research interests of the user. Commands did not need to be remembered or taken from an accompanying printed manual, because the packages included

help menus, and manuals were stored electronically with the program. Nevertheless, for replicating the analysis and for computer-intensive tasks, the final commands had to be stored in a syntax file that could be executed again in a batch-mode-like fashion; a small reminiscence of the good old times, but useful and necessary. What also remained from the mainframe time was the massive output. Statistical analysis packages transferred from the mainframe era were still showing all numbers that could ever be of interest. Obviously, the interjection of GLIM to show only those numbers that the user needs for making the next decision was not heard.

During the microcomputer era, we also observed a second generation of statistical software that made use of the new technologies and did not have the baggage of the mainframe era. Some of them made heavy use of the new graphical capabilities and emphasized visualization and exploratory data analysis. JMP, a computer program developed in the 1980s by the JMP business unit of SAS Institute, was such an example. It was created to take advantage of the GUI introduced by Apple's Macintosh, but nowadays is also available for other operating systems. The most prominent feature of JMP is its ability to really look at multivariate data and to screen the data from different perspectives.

STATA is another example of second generation statistical software. It was created in 1985 by the UCLA graduate William Gould together with his colleagues at Computing Resource Center, a private company, which in 1993 moved to Texas and is now named StataCorp LP (Cox, 2005). STATA is also a general-purpose statistical analysis package and, at first sight, looks very similar to its relatives from the first generation (BMDP, SAS, and SPSS). It also had a command line interface and did not get a GUI until 2003. However, from the start, it emphasized computation speed, extensibility, and user-contributed code. Computation speed is of utmost importance if one wants to support the interactive use of the program. To this end, STATA reads and writes all the data in (fast) computer memory, while its relatives read and write the data (often sequentially) from slower storage devices. Within STATA one can also write macros and programs and, in doing so, extend the program functionality to one's own needs. Users can publish their programs on the Internet or on StataCorp's servers. STATA itself makes heavy use of the Internet, both to update the program and to download data and user code. More than its relatives (perhaps with the exception of SAS), it tries to tie in its users. One could even speak of the STATA community. It organizes user conferences, has a book series and a peer-reviewed journal, provides an e-mail list server for user discussions, and the very informative program documentation, both in terms of applications and technicalities, is available with the program and online. Nowadays, STATA is very popular in economics, sociology, and political science, with a strong focus in academia. It is especially attractive for its computing speed, extendibility, and price.

From Statistical Program Packages to Statistical Analysis Languages

In the early 1990s, another type, rather than a new generation, of statistical software came into play. These were not preprogrammed analysis packages, but rather programing languages with a strong focus on statistical analysis. All of them have the S language as a common ancestor, which starting from 1975 grew out of Bell Laboratories; hence, a rather old development and not a new generation (Chambers, 2008: pp. 475–478). According to De Leeuw, a statistician, "the statistical techniques [...] were considerably more up-to-date than techniques typically found in SPSS or SAS" (De Leeuw, 2011: p. 5). But it remained a tool for the specialists, although it was freely available for some time to academic institutions. Its commercial version S-PLUS also hardly diffused into the wider circle of applied researchers.

The followers of S in the 1990s were LISP-STAT and R. Both languages are open source products and available on the personal computer, which explains, at least in the case of R, their popularity in and outside the community of statisticians. LISP-STAT was developed by Tierney (2009) and provided statistical tools embedded in a Lisp interpreter (Lisp is a programing language that is often used in artificial intelligence research). Using a (general) programing language as the environment provides many opportunities for extending LISP-STAT. According to its Web site, it puts an emphasis on "providing a framework for exploring the use of dynamic graphical methods". But LISP-STAT's development ceased by the end of the twentieth century.

R was written as an alternative implementation of the S language (Ihaka and Gentleman, 1996). Due to its extendibility by user-written programs and its easy availability as an open source product, its dispersion in the market has increased dramatically in recent years. De Leeuw says: "R is many things to many people: a rapid prototyping environment for statistical techniques, a vehicle for computational statistics, an environment for routine statistical analysis, and a basis for teaching statistics at all levels. Or, going back to the origins of S, a convenient interpreter to wrap existing compiled code" (De Leeuw, 2011: p. 6). But the extendibility is also a problem. For example, it is quite possible that several user programs exist that address the same problem. Which one to choose? Sometimes the documentation for this user-written code is insufficient, and who tests whether the programing is reliable and efficient? If social scientists want to have the same basic functionality as their old statistical analysis packages, i.e., labeling variables and values, coding missing data, etc., they have to download several user-written programs to their base R installation. Which ones are the most useful can certainly be found somewhere on the Internet. But how does the applied social researcher find this information if it is not in textbooks on R, which, by the way, are mostly written by statisticians and for their audience?

What Choices Do We Have Today?

Without the innovations of information technology, the analysis of social science data would not have come so far. With the help of the computer, we collect and store data in virtually unlimited amounts; and it provides us with the computation power to do all kinds of sophisticated statistical analyses. For example, maximum likelihood as a method of estimation has been known for centuries, but it needed the electronic computer to apply it to real-world data. Moreover, classical parametric

procedures have been supplemented with computer-intensive, nonparametric methods such as resampling to compute standard errors and confidence intervals. Bayesian statistics have become popular in recent years due to the availability of Markov chain Monte Carlo methods, which allow sampling from probability distributions, a critical step in Bayesian statistics. Finally, the availability of computing power and GUIs has made the dream of really looking at data come true.

Besides the development of the hardware, the worldwide Internet of computers has revolutionized the acquisition, storage, and access to social science data. Whereas the classical statistical problem was to make the most out of limited data, today the challenge is to process large data samples. Only with the power of present-day computers do social scientists have the means to analyze these vast amounts of information.

With respect to software, our historical account has shown the alternatives that are available to social scientists today. Basically, there are three: (1) general-purpose programs like SPSS, SAS, or STATA (BMDP has become a niche product); (2) statistical analysis languages, most prominently R; or (3) other software that can also be used for statistics. As Brian Ripley, Professor of Applied Statistics at the University of Oxford, once said at a conference of the Royal Statistical Society: "Let's not kid ourselves: the most widely used piece of software for statistics is Excel" (Ripley, 2002). Naturally, statistical add-ons also exist for this third alternative (e.g., ANALYZE-IT, NUMXL, or SIGMAXL). Besides these main alternatives, there are many other programs, often designed for specific purposes; e.g., MPLUS for structural equation modeling, HLM for multilevel regression, and many others. Wikipedia is an excellent information source for all kinds of software and provides an up-to-date list and comparison of statistical packages. The Journal of Statistical Software, according to its Web site, publishes "articles, book reviews, code snippets, and software reviews on the subject of statistical software and algorithms". Software reviews can also be found in major statistical journals.

This wealth of software alternatives has pros and cons. One of the advantages is that users can compare. This does not only mean looking for the best alternative; it also implies that users can test the reliability of the available software. In most cases, social scientists will not have the expertise to check the accuracy of the underlying algorithms, but if alternative software does not reproduce the results of their favorite software, then possibly something is wrong and they should consult an expert. For example, regular tests of statistical software show that numerical accuracy cannot be taken for granted (see, e.g., Keeling and Pavur, 2007). One of the disadvantages is, of course, that the uninformed user is easily lost. This is especially true for open source products, as the example of R shows (see the discussion above). This does not mean that proprietary products are without problems, but with such a contested market for statistical software, every commercial provider has a high interest in providing a reliable product (naturally, the incentive is lower for providers of software that is not designed for statistics in the first place).

Our historical review has shown the rise and fall of various software products. As a final point, this raises the question of what makes a product successful in an obviously very dynamic market with significant competitors, some of them even providing their products for free. One precondition is, of course, being at the forefront of data management and data analysis. A second condition is user friendliness, which does not necessarily mean simplicity. Users accept that data analysis may be a complicated matter and therefore will accept some complexity, but the easier this complexity is to understand through the handling of the program, the more user-friendly the software is. Third, having a clear and solvent target group seems to be a precondition for commercial success. SAS, focusing primarily on private business, is a typical example of this strategy (SPSS, now owned by IBM, seems to go in the same direction of focusing on users in private business). Academia, on the other hand, is an unreliable and unfaithful clientele. First, academics know everything better and try to do it themselves. Second, academia, compared to private business, is rather poor. Hence, if a software provider is not able to tie in its users or has something to sell that others do not supply, it is quite probable that the software will not survive.

See also: Data Bases and Statistical Systems: Applied Social Research; Empirical Social Research, History of; Quantification in the History of the Social Sciences.

Bibliography

Armor, D.J., Couch, A.S., 1972. Data-Text Primer: An Introduction to Computerized Social Data Analysis. Free Press, New York.
Chambers, J.M., 2008. Software for Data Analysis: Programming with R. Springer-Verlag, New York.
Cox, N.J., 2005. A brief history of Stata on its 20th anniversary. The Stata Journal 5 (1), 2–18.
De Leeuw, J., 2011. Statistical software overview. In: Lovric, M. (Ed.), International Encyclopedia of Statistical Science. Springer, Berlin/Heidelberg, pp. 1470–1473.
Dixon, W.J., University of California, 1967. BMD: Biomedical Computer Programs. University of California, Los Angeles, Health Sciences Computing Facility.
Ihaka, R., Gentleman, R., 1996. R: a language for data analysis and graphics. Journal of Computational and Graphical Statistics 5 (3), 299–314.
Keeling, K.B., Pavur, R.J., 2007. A comparative study of the reliability of nine statistical software packages. Computational Statistics & Data Analysis 51 (8), 3811–3831.
Nie, N.H., Bent, D.H., Hull, C.H., 1970. SPSS: Statistical Package for the Social Sciences. McGraw-Hill, New York.
Ripley, B.D., 2002. Statistical Methods Need Software: A View of Statistical Computing. Presentation at the conference of the Royal Statistical Society, Plymouth, 3–6 September 2002. Retrieved 7.10.13 from: http://www.stats.ox.ac.uk/ripley/RSS2002.pdf.
Robinson, B.N., Anderson, G.D., Cohen, E., Gazdzik, W.F., 1980. SIR Scientific Information Retrieval. User's Manual, second ed. SIR Inc., Evanston, IL.
Salsburg, D., 2002. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century. Henry Holt and Company.
SAS Institute, 2013. SAS Company History. Retrieved 4.10.13 from: http://www.sas.com/company/about/history.html.
Tierney, L., 2009. LISP-STAT: An Object-Oriented Environment for Statistical Computing and Dynamic Graphics. Wiley, Hoboken, NJ.
University of California, Dixon, W.J., 1975. BMDP: Biomedical Computer Programs. University of California Press, Los Angeles.
Wellman, B., 1998. Doing it ourselves. In: Clawson, D. (Ed.), Required Reading: Sociology's Most Influential Books. University of Massachusetts Press, MA, pp. 71–78.
