Abstract
Growing interest in complex adaptive systems and nonlinear dynamical modeling has
attracted greater attention to computer simulation in the social sciences. It has also shifted the
focus from prediction to explanation and from “factors” (system attributes) to “actors” (agent-
based models). Despite growing popularity, simulation continues to invite skepticism, especially
when used as an exploratory tool for theoretical research rather than as a data-analytical
predictive tool. This skepticism reflects the continued use of the term “simulation,” whose meaning was
shaped by earlier techniques centered on predictive accuracy. Yet current usage increasingly refers to
highly abstract “thought experiments” that apply to an artificial world. Making these models fit
actuality would add complexity that undermines their usefulness as theoretical tools.
Computational social simulation involves the use of computer algorithms to model social
processes. These algorithms require greater precision and logical rigor than natural language, yet they can trace the implications of assumptions too complex for closed-form mathematical analysis.
The use of computer simulation by social scientists has increased in recent years, due to
growing interest in modeling nonlinear dynamical processes using graphical interfaces that allow
highly intuitive visual representations of the results. Social simulation has also undergone a fundamental shift in focus, from prediction to explanation and from system attributes (“factors”) to interacting agents (“actors”).
This article reviews three “waves” of innovation: dynamical systems, microsimulation, and
adaptive agent models (Gilbert and Troitzsch 1999). These three approaches will be summarized
as follows:
1. 1960’s: Dynamical systems models simulated whole populations, organizations, and economies; the approach was holistic and functionalist.
2. 1970’s: Microsimulation introduced the use of individuals as the units of analysis, but the individuals remained inert: their states were updated without interaction or adaptation.
3. 1980’s: Agent-based models made individuals autonomous, interdependent, and adaptive decision-makers.
Although these three “waves” overlap (e.g., economists continue to use holistic simulation of
productive factors, microsimulation was invented in 1957, and nascent agent-based models
appeared in the late 1960’s), most of the recent excitement in computational modeling in the
social and behavioral sciences has centered on agent-based simulation. So too has much of the
controversy.
1. Dynamical Systems
The first wave of computer simulations in social science occurred in the 1960’s (e.g., Cyert and
March 1963). These studies used computers to simulate dynamical systems, such as control and
feedback processes in organizations, industries, cities, and even global populations. The models
typically consist of sets of differential equations that describe changes in system attributes as a
function of other systemic changes. Applications included the flow of raw materials in a factory,
inventory control in a warehouse, urban traffic, military supply lines, demographic changes in a
world system, and ecological limits to growth (Forrester 1971; Meadows 1974).
Models of dynamical systems are functional and holistic (Gilbert and Troitzsch 1999).
Functional means that theoretical interest centers on system equilibrium or balance among
interdependent input-output units. Holistic means that the system is modeled as an irreducible
and indivisible entity. Although these models allow nested organizational levels, at any given
level, they assume that sets of related attributes (such as economic growth, suburban migration, and the like) behave as properties of an indivisible whole rather than as aggregate outcomes of individual choices.
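To make this holistic style concrete, the sketch below integrates a pair of coupled differential equations with simple Euler steps, in the spirit of the system-dynamics models described above. It is written in Python, and its variables and rate constants are invented for illustration rather than drawn from any published model.

    # A minimal sketch of a holistic system-dynamics model: two coupled
    # aggregate attributes evolve through differential equations integrated
    # with simple Euler steps. Variables and rate constants are invented.

    def simulate(steps=200, dt=0.1):
        output, pollution = 1.0, 0.1          # initial system attributes
        history = []
        for _ in range(steps):
            # Output grows but is damped as pollution accumulates;
            # pollution is generated by output and decays naturally.
            d_output = 0.05 * output * (1.0 - pollution)
            d_pollution = 0.02 * output - 0.01 * pollution
            output += d_output * dt
            pollution += d_pollution * dt
            history.append((output, pollution))
        return history

    for i, (y, p) in enumerate(simulate()[::50]):
        print(f"step {i * 50}: output={y:.2f}, pollution={p:.2f}")

Note that the model’s units of analysis are the system attributes themselves; no individual actor appears anywhere in the code.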
2. Microsimulation
The second wave of computational modeling in the social sciences developed in the late 1970’s.
Known as microsimulation, it represents the first step in the progression from “factors” to
“actors.” Microsimulation is a
strategy for modeling the interacting behavior of decision makers (such as individuals, families,
and firms) within a larger system. This modeling strategy utilizes data on representative samples
of decision makers, along with equations and algorithms representing behavioral processes, to
simulate the evolution through time of each decision maker, and hence of the entire population of decision makers.
For example, Caldwell and Keister use CORSIM, a large-scale dynamic microsimulation
model of the U.S. population of individuals and families, to integrate individual- and family-level processes in projecting the distribution of wealth. Earlier holistic approaches, by contrast, were forced to assume
population homogeneity. For example, the relationship between wealth and age may not be
uniform across ethnic or religious subcultures, geographic regions, or birth cohorts. Complex
interactions across subpopulations are lost when the focus is on the correlation among aggregate
factors. This makes it impossible to predict the effects of policy changes that affect only certain
groups.
Microsimulation solves this problem by making individuals, not populations, the units of
analysis. The database structure consists of individual records for a representative sample of a
population, with a set of attributes measured at multiple points in time. These individuals are
then “aged” by updating their attributes, based on a set of empirically derived state-transition
probabilities (e.g., an individual’s move from work to retirement at age 65). These probabilities
are then used to predict new states of each individual, and these are then aggregated to make
population projections.
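The aging loop can be sketched as follows. The sample, the attributes, and the transition probabilities below are invented for illustration; in an actual microsimulation they would be estimated from longitudinal survey data.

    import random

    # A minimal microsimulation sketch: each record in a representative
    # sample is "aged" one year at a time using state-transition
    # probabilities, then the records are aggregated into a projection.

    random.seed(1)
    sample = [{"age": random.randint(25, 70), "working": True}
              for _ in range(1000)]

    def p_retire(age):
        # Hypothetical transition probabilities from work to retirement.
        return 0.02 if age < 60 else 0.25 if age < 65 else 0.60

    def age_one_year(person):
        person["age"] += 1
        if person["working"] and random.random() < p_retire(person["age"]):
            person["working"] = False         # state transition: work -> retired

    for year in range(10):                    # project the sample 10 years ahead
        for person in sample:
            age_one_year(person)

    # Aggregate the individual records into a population projection.
    retired = sum(not p["working"] for p in sample)
    print(f"Projected share retired after 10 years: {retired / len(sample):.1%}")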
3. Agent-Based Models and Complex Adaptive Systems
The third wave, agent-based models, attracted widespread interest beginning in the 1980’s.
These models extend the microanalytical approach by deriving global states from individual
behaviors, not individual attributes. In microsimulation models, individuals are inert in two
ways: 1) the state of an individual is a numerical function of other traits, and 2) individuals do
not interact (Gilbert and Troitzsch 1999, p. 8). Agent-based models assume an individual has
intentions or goals and makes choices that affect other agents, whose choices in turn affect that agent. Models of complex adaptive systems typically share several assumptions:
1. Agents interact with little or no central authority or direction. Global patterns emerge from
the bottom up, determined not by a centralized authority but by local interactions among
autonomous decision-makers.
2. Decision-makers are adaptive rather than optimizing, with decisions based on heuristics,
not on calculations of the most efficient action (Holland 1995, p. 43). These heuristics
include norms, habits, protocols, rituals, conventions, customs, and routines. They evolve
at two levels, the individual and the population. Individual learning alters the probability distribution of rules competing for the individual agent’s attention, for example through reinforcement.
Population learning alters the frequency distribution of rules competing for reproduction
through processes of selection, imitation, and social influence (Latané 1996). Genetic
algorithms (Holland 1995) are widely used to model adaptation at the population level.
3. Agents are interdependent: the consequences of each agent’s decisions depend in part on the choices of others. When
strategically interdependent agents are also adaptive, the focal agent’s decisions influence
the behavior of other agents who in turn influence the focal agent, generating a “complex adaptive system.”
Thomas Schelling’s (1971) “neighborhood segregation” model was one of the earliest agent-
based social simulations using cellular automata, a method invented by von Neumann and Ulam
in the 1940’s. Consider a residential area that is highly segregated, such that the number of
neighbors with different cultural markers (such as ethnicity) is at a minimum. If the aggregate
pattern were assumed to reflect the attitudes of the constituent individuals, one might conclude
from the distribution that the population was highly parochial and intolerant of diversity. Yet
Schelling’s simulation shows that this need not be the case. His “tipping” model shows that
highly segregated neighborhoods can form even in a population that prefers diversity. The
aggregate pattern of segregation is an emergent property that does not reflect the underlying preferences of the individual residents.
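A minimal sketch of the tipping model follows. The grid size, vacancy rate, and 30 percent tolerance threshold are illustrative choices rather than Schelling’s exact parameters, but they suffice to illustrate the qualitative result: mild individual preferences generate stark aggregate segregation.

    import random

    # A minimal sketch of Schelling's tipping model: agents of two types
    # relocate to a random empty cell whenever fewer than 30% of their
    # occupied neighbors share their type.

    random.seed(2)
    N, THRESHOLD = 20, 0.3
    grid = [[random.choice("AB") if random.random() < 0.8 else None
             for _ in range(N)] for _ in range(N)]

    def share_alike(r, c):
        # Fraction of occupied neighbors (edges wrap) matching cell (r, c).
        nbrs = [grid[(r + dr) % N][(c + dc) % N]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        occupied = [n for n in nbrs if n is not None]
        if not occupied:
            return 1.0
        return sum(n == grid[r][c] for n in occupied) / len(occupied)

    def mean_alike():
        vals = [share_alike(r, c)
                for r in range(N) for c in range(N) if grid[r][c]]
        return sum(vals) / len(vals)

    print(f"Share of like neighbors before: {mean_alike():.0%}")
    for _ in range(50):                       # rounds of relocation
        movers = [(r, c) for r in range(N) for c in range(N)
                  if grid[r][c] and share_alike(r, c) < THRESHOLD]
        empties = [(r, c) for r in range(N) for c in range(N) if not grid[r][c]]
        random.shuffle(empties)
        for (r, c), (er, ec) in zip(movers, empties):
            grid[er][ec], grid[r][c] = grid[r][c], None
    print(f"Share of like neighbors after:  {mean_alike():.0%}")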
Another example of emergence is provided by Latané’s (1996) “social impact model.” Like
Schelling, Latané also studies a cellular world populated by agents who live on a two-
dimensional lattice. However, rather than moving, these agents adapt to those around them,
based on a rule to mimic one’s neighbors. From a random start, a population of mimics might be
expected to converge inexorably on a single profile, leading to the conclusion that cultural
diversity is imposed by factors that counteract the effects of conformist tendencies. However, the
surprising result was that “the system achieved stable diversity. The minority was able to
survive, contrary to the belief that social influence inexorably leads to uniformity” (Latané 1996,
p. 294). Other researchers, including Axelrod (1997), Carley (1991), and Kitts, Macy, and Flache
(1999) have used computational models that couple homophily (likes attract) and social
influence. If local similarity facilitates interaction, which in turn facilitates imitation, it is indeed
curious that the global outcome is diversity, not uniformity. However, this outcome depends on the locality of influence, which allows minority clusters to shield their members from outside pressure.
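The following sketch captures the spirit of such a mimicry rule, substituting a simple local-majority update for Latané’s social impact formula. Local clustering can allow a minority trait to survive indefinitely.

    import random

    # A minimal sketch in the spirit of Latané's mimicry rule: agents on a
    # lattice hold one of two traits and adopt the local majority trait
    # among their eight neighbors.

    random.seed(3)
    N = 30
    trait = [[random.choice([0, 1]) for _ in range(N)] for _ in range(N)]

    def local_majority(r, c):
        nbrs = [trait[(r + dr) % N][(c + dc) % N]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        ones = sum(nbrs)
        if ones > 4:
            return 1
        if ones < 4:
            return 0
        return trait[r][c]                    # ties leave the trait unchanged

    for _ in range(100):                      # synchronous rounds of imitation
        trait = [[local_majority(r, c) for c in range(N)] for r in range(N)]

    ones = sum(map(sum, trait))
    minority = min(ones, N * N - ones)
    print(f"Surviving minority share: {minority / (N * N):.0%}")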
Schelling and Latané modeled agents with identical and fixed behavioral rules. Many agent-
based models assume heterogeneous populations with the ability to learn new rules. The “genetic
algorithm,” invented by John Holland, is a simple and elegant way to model strategies that can evolve. Each strategy is encoded as a string of symbols, analogous to a chromosome composed of
multiple genes. The string’s instructions affect the agent’s reproductive fitness and hence the
probability that the strategy will propagate. Propagation occurs when two or more mated
strategies recombine. If different rules are each effective, but in different ways, recombination
allows them to create an entirely new strategy that may integrate the best abilities of each
“parent” and thus eventually displace the parent rules in the population of strategies.
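A minimal genetic-algorithm sketch appears below. The bit-string encoding, fitness-proportionate selection, crossover, and mutation follow the standard scheme; the toy fitness function (counting 1-bits) is purely illustrative.

    import random

    # A minimal genetic-algorithm sketch: bit-string "chromosomes" are
    # selected in proportion to fitness, recombined by single-point
    # crossover, and occasionally mutated.

    random.seed(4)
    LENGTH, POP, GENERATIONS = 20, 50, 60

    def fitness(genome):
        return sum(genome)                    # toy stand-in for fitness

    def crossover(a, b):
        point = random.randrange(1, LENGTH)   # single-point recombination
        return a[:point] + b[point:]

    def mutate(genome, rate=0.01):
        return [g ^ 1 if random.random() < rate else g for g in genome]

    population = [[random.randint(0, 1) for _ in range(LENGTH)]
                  for _ in range(POP)]
    for _ in range(GENERATIONS):
        weights = [fitness(g) for g in population]
        parents = random.choices(population, weights=weights, k=2 * POP)
        population = [mutate(crossover(parents[i], parents[i + 1]))
                      for i in range(0, 2 * POP, 2)]

    print("Best fitness:", max(fitness(g) for g in population), "of", LENGTH)

Crossover is what allows two partially effective “parent” rules to recombine into an offspring that integrates the strengths of each, as described above.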
Axelrod (1997) used the genetic algorithm to evolve strategies for the iterated “prisoner’s dilemma,” a game in which choices that are individually rational aggregate into
collective outcomes that everyone would prefer to avoid. The results showed that strategies
based on reciprocity (such as “TIT FOR TAT”) tend to be very successful. The secret of their
success is that they perform well against copies of themselves. Axelrod’s result has been
challenged by Nowak and Sigmund (1993), by Binmore (1998), and by Macy (1995). Working
independently, these researchers found that “TIT FOR TAT,” a strategy that teaches its partner a
lesson, can be supplanted by “PAVLOV” (a.k.a. “WIN-STAY, LOSE-SHIFT” and “TAT FOR
TIT”), a strategy that learns (Binmore 1998). The ability to learn appears to be at least as important as the ability to teach.
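A sketch of the two strategies in a noisy iterated prisoner’s dilemma follows. The payoff values are the standard ones, while the noise level and round count are illustrative choices; noise exposes the weakness discussed here, since a single error can lock two TIT FOR TAT players into mutual retaliation, while PAVLOV recovers.

    import random

    # A minimal sketch of TIT FOR TAT versus PAVLOV ("win-stay,
    # lose-shift") in a noisy iterated prisoner's dilemma.
    # Standard payoffs: T=5, R=3, P=1, S=0.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_last, other_last, my_payoff):
        return other_last or "C"              # copy the partner's last move

    def pavlov(my_last, other_last, my_payoff):
        if my_last is None:
            return "C"
        # Win-stay, lose-shift: repeat the last move after a good payoff.
        return my_last if my_payoff >= 3 else ("D" if my_last == "C" else "C")

    def play(strat_a, strat_b, rounds=200, noise=0.05):
        random.seed(5)
        a_last = b_last = None
        a_pay = b_pay = score_a = score_b = 0
        for _ in range(rounds):
            a = strat_a(a_last, b_last, a_pay)
            b = strat_b(b_last, a_last, b_pay)
            if random.random() < noise:       # trembling hand: move flips
                a = "D" if a == "C" else "C"
            a_pay, b_pay = PAYOFF[(a, b)]
            score_a, score_b = score_a + a_pay, score_b + b_pay
            a_last, b_last = a, b
        return score_a, score_b

    print("TFT vs TFT:      ", play(tit_for_tat, tit_for_tat))
    print("PAVLOV vs PAVLOV:", play(pavlov, pavlov))
    print("TFT vs PAVLOV:   ", play(tit_for_tat, pavlov))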
Learning involves adaptation at the level of the individual rather than the population.
Artificial neural networks are simple self-programmable devices that model agents who learn
through reinforcement. Like genetic algorithms, neural nets have a biological analog, in this
case, the nervous systems of living organisms. The device consists of a web of neuron-like units (or
neurodes) that fire when triggered by impulses of sufficient strength, and in turn stimulate other
units when fired. The magnitude of an impulse depends on the strength of the connection (or
"synapses") between the two neurodes. The network learns by modifying these path coefficients
in response to environmental feedback about its performance. Neural nets have been used to
study the evolution of religion (Bainbridge 1995), kin altruism (Parisi, Cecconi, and Cerini 1995), the
emergence of status inequality (Vakas-Duong and Reilly 1995), and group dynamics (Nowak and Vallacher 1997).
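A minimal sketch of such a reinforcement-driven unit follows. The single neuron-like unit and the toy target pattern are drastic simplifications of the networks used in the studies cited above, but they show the core mechanism: path coefficients adjusted in response to environmental feedback.

    import random

    # A minimal sketch of a learning unit: one neuron-like unit adjusts
    # its connection strengths ("synapses") in response to feedback,
    # here learning a simple OR pattern.

    random.seed(6)
    weights = [random.uniform(-1, 1) for _ in range(2)]
    bias = random.uniform(-1, 1)
    RATE = 0.1

    def fires(inputs):
        # The unit fires when the weighted impulse exceeds its threshold.
        return sum(w * x for w, x in zip(weights, inputs)) + bias > 0

    # Environmental feedback: the "correct" response to each stimulus.
    training = [((0, 0), False), ((0, 1), True), ((1, 0), True), ((1, 1), True)]

    for _ in range(50):                       # repeated rounds of feedback
        for inputs, target in training:
            error = int(target) - int(fires(inputs))
            # Strengthen or weaken each path coefficient toward the target.
            weights = [w + RATE * error * x for w, x in zip(weights, inputs)]
            bias += RATE * error

    print([(inputs, fires(inputs)) for inputs, _ in training])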
4. Criticisms of Simulation
Three criticisms are commonly leveled at simulation in the social sciences:

1. Simulations, like mathematical models, tell us about a highly stylized and abstract world,
not about the world in which we live. In particular, agent-based simulations rely on highly
simplified models of rule-based human behavior that neglect the complexity of human
cognition.
2. Like mathematical models, simulations cannot tell us anything that we do not already
know, at least implicitly, given the assumptions built into the model and the inability of a computer to do anything it has not been programmed to do.
3. Unlike mathematical models, simulations are numerical and therefore cannot establish deductive generalizations.
The defense against these criticisms centers on the principles of complexity and emergence.
4.1. Artificiality

How can simple agent-based models explain the rich complexity of social life? In complex
systems, very simple rules of interaction can produce highly complex global patterns. “Human
beings,” Simon contends (1998, p. 53), “viewed as behaving systems, are quite simple.” We
follow rules, in the form of norms, conventions, protocols, moral and social habits, and
heuristics. Although the rules may be quite simple, they can produce global patterns that may not
be at all obvious and are very difficult to understand. Hence, “the apparent complexity of our
behavior is largely a reflection of the complexity of the environment,” including the complexity
of other interacting agents. Artificial worlds allow us to explore the complexity of the social environment by removing the cognitive complexity of the agents themselves.
If Simon, Axelrod, and the complexity theorists are right, then the artificiality of agent-based
models is a virtue, not a vice. When simulation is used to make predictions or for training
personnel (e.g., flight simulators), the assumptions need to be highly realistic, which usually
means they will also be highly complicated (Axelrod 1997, p. 5). “But if the goal is to deepen
our understanding of some fundamental process,” Axelrod continues, “then simplicity of the
assumptions is important and realistic representation of all the details of a particular setting is
not.” As such, the purpose of these models is to generate hypotheses, not to test them (Prietula, Carley, and Gasser 1998).
Holland (1995, p. 146) offers a classic example of the problem of building theories that too
closely resemble actuality: Aristotle’s mistaken conclusion that all bodies come to rest, based on
observation of a world that happens to be infested with friction. Had these observations been
made in a frictionless world, Aristotle would have come to the same conclusion reached by
Newton and formulated as the principle that bodies in motion persist in that motion unless
perturbed. Newton avoided Aristotle’s error by studying what happens in an artificial world in
which friction had been assumed away. Ironically, “Aristotle’s model, though closer to everyday
observations, clouded studies of the natural world for almost two millennia” (Holland 1995, p. 146).
More generally, it is often necessary to step back from our world in order to see it more clearly.
Schelling’s segregation model, for example, was not designed to predict actuality or to resemble an observed city. Rather, the “residents” live in a highly abstract cellular
world without real streets, rivers, train tracks, zoning laws, red-lining, housing markets, etc. This
artificial world shows that all neighborhoods in which residents are able to move have an
underlying tendency towards segregation even when residents are moderately tolerant. This
hypothesis can then be tested in observed neighborhoods, but even if observations do not
confirm the predicted segregation, this does not detract from the value of the simulation in
suggesting empirical conditions that were not present in the model and that might account for the discrepancy.
In short, agent-based models “have a role similar to mathematical theory, shearing away
detail and illuminating crucial features in a rigorous context” (Holland 1995, p. 100).
Moreover, computational “thought experiments” are not constrained by physical actuality and can therefore explore possibilities that observation alone would never suggest.
4.2. Unwrapping

Unwrapping occurs “when the ‘solution’ is explicitly built into the program [such that] the
simulation reveals little that is new or unexpected” (Holland 1995, p. 137). Critics charge that all
computer simulations share this limitation, due to the inability of computers to act in any way they have not been programmed to act.
This criticism overlooks the distinction between the micro-level programming of the agents
and the macro-level patterns of interaction that emerge. These patterns are “built in” from the
outset, yet the exercise is valuable because the logical implications of behavioral assumptions are
not always readily apparent. Indeed, a properly designed computational experiment can yield
highly counter-intuitive and surprising results. This theoretical possibility is based on the
principle of emergence in complex systems. Biological examples of emergence include life, which
emerges from non-living organic compounds, and intelligence and consciousness, which emerge
from dense networks of very simple, switch-like neurons. Schelling’s (1971) neighborhood model
provides a compelling example from social life: segregation is an emergent property of social interaction that need not mirror the preferences of the individual residents.
4.3. Rationality and Adaptive Behavior
A third criticism is that simulation models, unlike mathematical models, are numerical, not
deductive, and therefore cannot be used to form generalizations. Worse still, simulations of
evolutionary processes can be highly sensitive to the basin of attraction in which the parameters
are arbitrarily initialized (Binmore 1998). However, game-theoretic mathematical models pay a
high price for the ability to generate deductive conclusions: multiple equilibria that preclude a
uniquely rational solution. Equilibrium selection requires constraints on the perfect rationality of
the agents. Simon appreciates the paradox: “Game theory’s most valuable contribution has been
to show that rationality is effectively undefinable when competitive actors have unlimited
computational capabilities for outguessing each other, but that problem does not arise as acutely
in a world, like the real world, of bounded rationality” (1998, p. 38). But bounded rationality
often makes analytical solutions mathematically intractable. “When the agents use adaptive
rather than optimizing strategies, deducing the consequences is often impossible: simulation
becomes necessary” (Axelrod 1997, p. 4). In sum, simulation requires sensitivity analysis and
tests of robustness, but properly implemented, it not only offers a solution to the problem of
equilibrium selection but can also tell us something about how evolutionary systems move from one equilibrium to another.
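A sketch of such a sensitivity analysis follows. The run_model function is a stand-in with invented dynamics, not any published model; the point is the protocol of sweeping a key parameter across many random seeds and checking the spread of outcomes.

    import random

    # A minimal sketch of sensitivity analysis: re-run a stochastic model
    # across many random seeds and a sweep of a key parameter, then
    # inspect how robust the outcome is.

    def run_model(noise, seed):
        rng = random.Random(seed)
        # Placeholder dynamics: cooperation erodes with noise, plus chance.
        return max(0.0, min(1.0, 0.8 - 2.0 * noise + rng.gauss(0, 0.05)))

    for noise in (0.0, 0.1, 0.2, 0.3):
        outcomes = [run_model(noise, seed) for seed in range(30)]
        mean = sum(outcomes) / len(outcomes)
        spread = max(outcomes) - min(outcomes)
        print(f"noise={noise:.1f}: mean outcome {mean:.2f}, spread {spread:.2f}")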
5. Conclusion
From a narrowly positivist perspective, agent-based artificial worlds might seem unrealistic,
unfalsifiable, and unreliable, especially when contrasted with earlier approaches to simulation
that were data driven, predictive, and highly realistic (e.g., flight simulators). However, agent-
based models may be essential for the study of open and complex adaptive systems, whether natural or social. Used in this way, simulation
does not test theory against observation. Rather, these thought experiments suggest possible solutions to otherwise perplexing empirical puzzles.
A key theme that runs through these puzzles is that it is often very difficult to recognize aggregate patterns as the emergent consequences of simple local rules. As Holland
concludes (1995, p. 195), “I do not think we will understand morphogenesis, or the emergence of
organizations like Adam Smith’s pin factory, or the richness of interactions in a tropical forest,” without the help of such models.
References
Axelrod, R. 1997. The Complexity of Cooperation: Agent-Based Models of Competition and
Collaboration. Princeton, NJ: Princeton University Press.

Bainbridge, W. 1995. “Neural Network Models of Religious Belief.” Sociological Perspectives
38:483-495.

Binmore, K. 1998. “Review of R. Axelrod’s The Complexity of Cooperation.” Journal of
Artificial Societies and Social Simulation 1(1).

Caldwell, S. 1997. Dynamic Microsimulation and the Corsim 3.0 Model. Ithaca, NY: Strategic
Forecasting.

Carley, K. 1991. “A Theory of Group Stability.” American Sociological Review 56:331-354.
Cyert, R. and J. G. March. 1963. A Behavioral Theory of the Firm. Englewood Cliffs, NJ:
Prentice-Hall.

Forrester, J. W. 1971. World Dynamics. Cambridge, MA: Wright-Allen Press.
Gilbert, N. and K. Troitzsch. 1999. Simulation for the Social Scientist. Buckingham: Open
University Press.
Holland, J. 1995. Hidden Order: How Adaptation Builds Complexity. Reading, MA: Perseus.
Kitts, J., M. Macy, and A. Flache. 1999. “Structural Learning: Attraction and Conformity in
Task-Oriented Groups.” Computational and Mathematical Organization Theory 5:129-145.
Latané, B. 1996. “Dynamic Social Impact: Robust Predictions from Simple Theory.” In R.
Hegselmann, U. Mueller, and K. Troitzsch, eds. Modeling and Simulation in the Social
Sciences from a Philosophy of Science Point of View, pp. 287-310. Dordrecht: Kluwer
Academic.
Macy, M. 1995. “Natural Selection and Social Learning in Prisoner's Dilemma: Co-adaptation
with Genetic Algorithms and Artificial Neural Networks.” Sociological Methods and
Research, 25:103-137.
Meadows, D. L., W.W. Behrens III, D. H. Meadows, R. F. Naill, J. Randers, and E. K. Zahn.
1974. The Dynamics of Growth in a Finite World. Cambridge, MA: MIT Press.
Nowak, A. and R. Vallacher. 1997. Computational Social Psychology: Cellular Automata and
Connectionist Models of Social Reasoning and Social Behavior. Mahwah, NJ: Lawrence
Erlbaum.
Nowak, M. and K. Sigmund. 1993. “A Strategy of Win-Stay, Lose-Shift that Outperforms
Tit-for-Tat in the Prisoner’s Dilemma Game.” Nature 364:56-58.

Parisi, D., F. Cecconi, and A. Cerini. 1995. “Kin-Directed Altruism and Attachment Behaviour
in an Evolving Population of Neural Networks.” In N. Gilbert and R. Conte, eds., Artificial
Societies: The Computer Simulation of Social Life. London: UCL Press.
Prietula, M., K. Carley, and L. Gasser, eds. 1998. Simulating Organizations: Computational
Models of Institutions and Groups. Cambridge, MA: MIT Press.

Schelling, T. 1971. “Dynamic Models of Segregation.” Journal of Mathematical Sociology
1:143-186.
Simon, H. 1998. The Sciences of the Artificial. Cambridge, MA: MIT Press.
Vakas-Duong, D. and K. Reilly. 1995. “A System of IAC Neural Networks as the Basis for
Self-Organization in a Sociological Dynamical System Simulation.” Behavioral Science
40:275-303.