
2.2.150: Social Simulation

Michael W. Macy

In N. Smelser and P. Baltes, eds., International Encyclopedia of the Social and Behavioral Sciences. Elsevier, 2002.

Abstract

Growing interest in complex adaptive systems and nonlinear dynamical modeling has

attracted greater attention to computer simulation in the social sciences. It has also shifted the

focus from prediction to explanation and from “factors” (system attributes) to “actors” (agent-

based models). Despite growing popularity, simulation continues to invite skepticism, especially

when used as an exploratory tool for theoretical research rather than as a data-analytical

predictive tool. This reflects continued use of the term “simulation” whose meaning was shaped

by earlier techniques centered on predictive accuracy. Yet current usage increasingly refers to

highly abstract “thought experiments” that apply to an artificial world. Making these models fit

actuality would add complexity that undermines their usefulness as theoretical tools.

Computational social simulation involves the use of computer algorithms to model social

processes. These algorithms require greater precision and logical rigor than natural language yet

lack the generality of closed-form mathematical equations.

The use of computer simulation by social scientists has increased in recent years, due to

growing interest in modeling nonlinear dynamical processes and to graphical interfaces that allow

highly intuitive visual representations of the results. Social simulation has also undergone a

qualitative change as the emphasis has shifted from prediction to exploration.

This article reviews three “waves” of innovation: dynamical systems, microsimulation, and

adaptive agent models (Gilbert and Troitzsch 1999). These three approaches can be summarized

as follows:

1. 1960’s: Dynamical systems simulations were empirically grounded, inductive, holistic,

and functionalist.

2. 1970’s: Microsimulation introduced the use of individuals as the units of analysis but

retained the earlier emphasis on empirically based macro-level prediction.

3. 1980’s: Adaptive agent models revolutionized computational social science by simulating

“actors” not “factors” (system attributes). These “bottom-up” models explored

interactions among purposive decision-makers.

Although these three “waves” overlap (e.g., economists continue to use holistic simulation of

productive factors, microsimulation was invented in 1957, and nascent agent-based models

appeared in the late 1960’s), most of the recent excitement in computational modeling in the

social and behavioral sciences has centered on agent-based simulation. So too has much of the

controversy.

1. Dynamical Systems

The first wave of computer simulations in social science occurred in the 1960’s (e.g., Cyert and

March 1963). These studies used computers to simulate dynamical systems, such as control and

feedback processes in organizations, industries, cities, and even global populations. The models

typically consist of sets of differential equations that describe changes in system attributes as a

function of other systemic changes. Applications included the flow of raw materials in a factory,

inventory control in a warehouse, urban traffic, military supply lines, demographic changes in a

world system, and ecological limits to growth (Forrester 1971; Meadows 1974).

Models of dynamical systems are functional and holistic (Gilbert and Troitzsch 1999).

Functional means that theoretical interest centers on system equilibrium or balance among

interdependent input-output units. Holistic means that the system is modeled as an irreducible

and indivisible entity. Although these models allow nested organizational levels, at any given

level, they assume that sets of related attributes (such as economic growth, suburban migration,

and traffic congestion) are causally linked at the aggregate level.
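
To make the holistic style concrete, here is a minimal sketch (in Python) of a Forrester-style stock-and-flow model: two aggregate attributes are coupled by differential equations and advanced with simple Euler integration. The variables and rate constants are purely illustrative assumptions, not drawn from any published model.

```python
# A minimal holistic dynamical-systems sketch: aggregate "stocks" linked by
# differential equations, integrated with Euler steps. All parameters are
# hypothetical.

def simulate(steps=200, dt=0.1):
    population, resources = 1.0, 10.0      # aggregate system attributes
    history = []
    for _ in range(steps):
        # growth is limited by the remaining resource stock (feedback loop)
        d_pop = 0.05 * population * resources / (1.0 + resources)
        d_res = -0.02 * population          # resources depleted by the population
        population += d_pop * dt
        resources = max(resources + d_res * dt, 0.0)
        history.append((population, resources))
    return history

for pop, res in simulate()[::50]:
    print(f"population={pop:.2f}  resources={res:.2f}")
```

Note that nothing in this model refers to individuals; only system-level attributes and their causal couplings appear, which is precisely the feature the later “waves” abandoned.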

2. Microsimulation

The second wave of computational modeling in the social sciences developed in the late 1970’s.

Known as microsimulation, it represents the first step in the progression from “factors” to

“actors.” In striking contrast to the earlier holistic approach, “microsimulation is a ‘bottom-up’

strategy for modeling the interacting behavior of decision makers (such as individuals, families

and firms) within a larger system. This modeling strategy utilizes data on representative samples

of decision makers, along with equations and algorithms representing behavioral processes, to

simulate the evolution through time of each decision maker, and hence of the entire population of

decision makers” (Caldwell 1997).

For example, Caldwell and Keister use CORSIM, a large-scale dynamic microsimulation

model of the U.S. population of individuals and families, to integrate individual and family level

wealth behavior with aggregate-level stratification outcomes. This microanalytic approach

allows researchers to avoid a serious limitation of the earlier generation of simulations: the

assumption of population homogeneity. For example, the relationship between wealth and age may not be

uniform across ethnic or religious subcultures, geographic regions, or birth cohorts. Complex

interactions across subpopulations are lost when the focus is on the correlation among aggregate

factors. This precludes prediction of the effects of policy changes that affect only certain

groups.

Microsimulation solves this problem by making individuals, not populations, the units of

analysis. The database structure consists of individual records for a representative sample of a

population, with a set of attributes measured at multiple points in time. These individuals are

then “aged” by updating their attributes, based on a set of empirically derived state-transition

probabilities (e.g., an individual’s move from work to retirement at age 65). These probabilities

are then used to predict new states of each individual, and these are then aggregated to make

population projections.
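
As a rough illustration of this aging procedure (not the CORSIM implementation), the following Python sketch advances a sample of individual records using a hypothetical table of work-to-retirement transition probabilities and then aggregates the results into a projection.

```python
import random

# Microsimulation "aging" sketch: each record in a representative sample is
# advanced one period using state-transition probabilities, then individual
# states are aggregated into a population projection. The transition table
# below is hypothetical.

RETIRE_PROB = {64: 0.1, 65: 0.6, 66: 0.3, 67: 0.4}   # P(work -> retired | age)

def age_population(records, years=5):
    for _ in range(years):
        for person in records:
            person["age"] += 1
            if person["state"] == "working":
                if random.random() < RETIRE_PROB.get(person["age"], 0.0):
                    person["state"] = "retired"
    return records

sample = [{"age": random.randint(60, 64), "state": "working"} for _ in range(1000)]
aged = age_population(sample)
share = sum(p["state"] == "retired" for p in aged) / len(aged)
print(f"projected share retired after 5 years: {share:.1%}")
```

A production model such as CORSIM conditions these probabilities on many more attributes (cohort, region, ethnicity), which is exactly what lets it capture the subpopulation interactions that aggregate models lose.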

3. Agent-Based Models and Complex Adaptive Systems

The third wave, agent-based models, attracted widespread interest beginning in the 1980’s.

These models extend the microanalytical approach by deriving global states from individual

behaviors, not individual attributes. In microsimulation models, individuals are inert in two

ways: 1) the state of an individual is a numerical function of other traits, and 2) individuals do

not interact (Gilbert and Troitzsch 1999, p. 8). Agent-based models assume an individual has

intentions or goals and makes choices that affect other agents, whose choices in turn affect that

individual. These models impose three key assumptions:

1. Agents interact with little or no central authority or direction. Global patterns emerge from

the bottom up, determined not by a centralized authority but by local interactions among

autonomous decision-makers.

2. Decision-makers are adaptive rather than optimizing, with decisions based on heuristics,

not on calculations of the most efficient action (Holland 1995, p. 43). These heuristics

include norms, habits, protocols, rituals, conventions, customs, and routines. They evolve

at two levels, the individual and the population. Individual learning alters the probability

distribution of rules competing for attention, through processes like reinforcement,

Bayesian updating, or the back-propagation of error in artificial neural networks (a minimal

sketch of such individual learning follows this list).

Population learning alters the frequency distribution of rules competing for reproduction

through processes of selection, imitation, and social influence (Latané 1996). Genetic

algorithms (Holland 1995) are widely used to model adaptation at the population level.

3. Decision-makers are strategically interdependent. Strategic interdependence means that the

5
consequences of each agent’s decisions depend in part on the choices of others. When

strategically interdependent agents are also adaptive, the focal agent’s decisions influence

the behavior of other agents who in turn influence the focal agent, generating a “complex

adaptive system” (Holland 1995, p. 10).
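
The following Python sketch illustrates the individual-level learning described in point 2: a Roth-Erev-style reinforcement scheme in which payoffs alter the probability distribution over competing rules. The rule names, payoffs, and learning scheme are illustrative assumptions, not drawn from any particular study.

```python
import random

# Reinforcement learning over competing rules: each rule carries a weight
# ("propensity"); rules are chosen with probability proportional to weight,
# and realized payoffs feed back into the weights. All values hypothetical.

class AdaptiveAgent:
    def __init__(self, rules):
        self.weights = {rule: 1.0 for rule in rules}

    def choose(self):
        rules = list(self.weights)
        return random.choices(rules, weights=[self.weights[r] for r in rules])[0]

    def reinforce(self, rule, payoff):
        # successful rules become more likely to win the competition for attention
        self.weights[rule] += payoff

agent = AdaptiveAgent(["cooperate", "defect"])
for _ in range(100):
    rule = agent.choose()
    payoff = 1.0 if rule == "cooperate" else 0.2   # hypothetical environment
    agent.reinforce(rule, payoff)
print(agent.weights)   # "cooperate" should come to dominate
```

Population-level learning would instead copy high-payoff rules across agents, as the genetic algorithm described later in this section does.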

Thomas Schelling’s (1971) “neighborhood segregation” model was one of the earliest agent-

based social simulations using cellular automata, a method invented by von Neumann and Ulam

in the 1940’s. Consider a residential area that is highly segregated, such that the number of

neighbors with different cultural markers (such as ethnicity) is at a minimum. If the aggregate

pattern were assumed to reflect the attitudes of the constituent individuals, one might conclude

from the distribution that the population was highly parochial and intolerant of diversity. Yet

Schelling’s simulation shows that this need not be the case. His “tipping” model shows that

highly segregated neighborhoods can form even in a population that prefers diversity. The

aggregate pattern of segregation is an emergent property that does not reflect the underlying

attitudes of the constituent individuals.
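
A minimal version of the tipping dynamic can be sketched in a few lines of Python. For brevity the sketch below uses a one-dimensional ring rather than Schelling’s two-dimensional grid; the threshold and neighborhood radius are illustrative assumptions, but the qualitative result is the same: agents who are satisfied with a mixed neighborhood still sort into segregated clusters.

```python
import random

# Schelling-style tipping sketch on a 1-D ring. Agents of two types move to
# a random vacant cell when fewer than THRESHOLD of their neighbors share
# their type. A 40% threshold means agents actively tolerate being in the
# local minority.

SIZE, THRESHOLD, RADIUS = 100, 0.4, 2

def unhappy(grid, i):
    nbrs = [grid[(i + d) % SIZE] for d in range(-RADIUS, RADIUS + 1) if d != 0]
    nbrs = [n for n in nbrs if n is not None]          # ignore vacant cells
    return bool(nbrs) and sum(n == grid[i] for n in nbrs) / len(nbrs) < THRESHOLD

grid = [random.choice(["A", "B", None]) for _ in range(SIZE)]
for _ in range(10000):
    i = random.randrange(SIZE)
    if grid[i] is None or not unhappy(grid, i):
        continue
    vacancies = [j for j in range(SIZE) if grid[j] is None]
    if vacancies:
        j = random.choice(vacancies)
        grid[j], grid[i] = grid[i], None               # move to a vacant cell

print("".join(c or "." for c in grid))   # long runs of A's and B's emerge
```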

Another example of emergence is provided by Latané’s (1996) “social impact model.” Like

Schelling, Latané studies a cellular world populated by agents who live on a two-

dimensional lattice. However, rather than moving, these agents adapt to those around them,

based on a rule to mimic one’s neighbors. From a random start, a population of mimics might be

expected to converge inexorably on a single profile, leading to the conclusion that cultural

diversity is imposed by factors that counteract the effects of conformist tendencies. However, the

surprising result was that “the system achieved stable diversity. The minority was able to

survive, contrary to the belief that social influence inexorably leads to uniformity” (Latané 1996,

p. 294). Other researchers, including Axelrod (1997), Carley (1991), and Kitts, Macy, and Flache

(1999), have used computational models that couple homophily (likes attract) and social

influence. If local similarity facilitates interaction, which in turn facilitates imitation, it is indeed

curious that the global outcome is diversity, not uniformity. However, this outcome depends on

interactions that are locally transitive, as in a spatially distributed population.
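
The flavor of these results can be reproduced with a very small model. The Python sketch below puts binary opinions on a torus and lets each agent conform to the local majority; the grid size and update rule are illustrative simplifications of Latané’s model, which also weights influence by strength and distance. Clustered minorities typically freeze in place rather than being absorbed.

```python
import random

# Local conformity on a 2-D torus: each agent adopts the opinion of the
# majority of its four lattice neighbors; ties leave the opinion unchanged.
# Parameters are illustrative, not Latané's exact specification.

N = 20
grid = [[random.choice([0, 1]) for _ in range(N)] for _ in range(N)]

def step(grid):
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            nbrs = (grid[(i - 1) % N][j] + grid[(i + 1) % N][j] +
                    grid[i][(j - 1) % N] + grid[i][(j + 1) % N])
            if nbrs > 2:
                new[i][j] = 1
            elif nbrs < 2:
                new[i][j] = 0
    return new

for _ in range(50):
    grid = step(grid)
share = sum(map(sum, grid)) / (N * N)
print(f"share holding opinion 1 after 50 sweeps: {share:.2f}")
# the run typically ends with stable clusters of both opinions, not uniformity
```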

Schelling and Latané modeled agents with identical and fixed behavioral rules. Many agent-

based models assume heterogeneous populations with the ability to learn new rules. The “genetic

algorithm,” invented by John Holland, is a simple and elegant way to model strategies that can

progressively improve performance by building on partial solutions. Each strategy consists of a

string of symbols that code behavioral instructions, analogous to a chromosome containing

multiple genes. The string’s instructions affect the agent’s reproductive fitness and hence the

probability that the strategy will propagate. Propagation occurs when two or more mated

strategies recombine. If different rules are each effective, but in different ways, recombination

allows them to create an entirely new strategy that may integrate the best abilities of each

“parent” and thus eventually displace the parent rules in the population of strategies.
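
A minimal genetic algorithm can be written in a few lines. In the sketch below, strategies are bit strings, fitness-proportionate selection chooses parents, and single-point crossover recombines them; the fitness function (counting 1s) is a standard toy problem assumed for illustration, not a social model.

```python
import random

# Genetic-algorithm sketch: symbol strings ("chromosomes"), fitness-biased
# mating, single-point crossover, and a little mutation. The fitness
# function is an illustrative toy, not a model of any social process.

LENGTH, POP, GENS = 20, 30, 40

def fitness(s):
    return s.count("1")

def crossover(a, b):
    cut = random.randrange(1, LENGTH)          # single-point recombination
    return a[:cut] + b[cut:]

def mutate(s, rate=0.01):
    return "".join(random.choice("01") if random.random() < rate else c for c in s)

pop = ["".join(random.choice("01") for _ in range(LENGTH)) for _ in range(POP)]
for _ in range(GENS):
    # fitness-proportionate selection of mating pairs
    parents = random.choices(pop, weights=[fitness(s) + 1 for s in pop], k=2 * POP)
    pop = [mutate(crossover(parents[2 * i], parents[2 * i + 1])) for i in range(POP)]

best = max(pop, key=fitness)
print(best, fitness(best))   # recombination assembles ever-fitter strings
```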

Axelrod (1997, pp. 14-29) used a genetic algorithm to study the evolution of cooperation in a

“prisoner’s dilemma,” a game in which choices that are individually rational aggregate into

collective outcomes that everyone would prefer to avoid. The results showed that strategies

based on reciprocity (such as “TIT FOR TAT”) tend to be very successful. The secret of their

success is that they perform well against copies of themselves. Axelrod’s result has been

challenged by Nowak and Sigmund (1993), by Binmore (1998), and by Macy (1995). Working

independently, these researchers found that “TIT FOR TAT,” a strategy that teaches its partner a

lesson, can be supplanted by “PAVLOV” (a.k.a. “WIN-STAY, LOSE-SHIFT” and “TAT FOR

TIT”), a strategy that learns (Binmore 1998). The ability to learn appears to be at least as

important for emergent social order as the ability to teach.
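
The contrast between the two strategies is easy to state in code. The following Python sketch plays a standard iterated prisoner’s dilemma (payoffs T=5, R=3, P=1, S=0) between TIT FOR TAT and PAVLOV; it is a bare illustration of the decision rules, not a replication of the tournament or evolutionary results cited above.

```python
# Iterated prisoner's dilemma: TIT FOR TAT repeats the partner's last move;
# PAVLOV (win-stay, lose-shift) repeats its own move after a good payoff
# (R or T) and switches after a bad one (P or S).

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_last, their_last, my_payoff):
    return their_last if their_last else "C"      # cooperate first

def pavlov(my_last, their_last, my_payoff):
    if my_last is None:
        return "C"
    return my_last if my_payoff >= 3 else ("D" if my_last == "C" else "C")

def play(s1, s2, rounds=100):
    m1 = m2 = p1 = p2 = None
    t1 = t2 = 0
    for _ in range(rounds):
        a, b = s1(m1, m2, p1), s2(m2, m1, p2)
        p1, p2 = PAYOFF[(a, b)]
        t1, t2 = t1 + p1, t2 + p2
        m1, m2 = a, b
    return t1, t2

print(play(tit_for_tat, pavlov))   # mutual cooperation throughout: (300, 300)
```

Against each other both strategies settle into cooperation; PAVLOV’s advantage, as the authors cited above show, appears against noisy or exploitable partners, which this sketch does not model.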

Learning involves adaptation at the level of the individual rather than the population.

Artificial neural networks are simple self-programmable devices that model agents who learn

through reinforcement. Like genetic algorithms, neural nets have a biological analog, in this

case, the nerve systems of living organisms. The device consists of a web of neuron-like units (or

neurodes) that fire when triggered by impulses of sufficient strength, and in turn stimulate other

units when fired. The magnitude of an impulse depends on the strength of the connection (or

"synapses") between the two neurodes. The network learns by modifying these path coefficients

in response to environmental feedback about its performance. Neural nets have been used to

study the evolution of religion (Bainbridge 1995), kin altruism (Parisi, Cecconi and Cerini), the

emergence of status inequality (Vakas-Duong and Reilly 1995), group dynamics (Nowak and

Vallacher 1997), and social deviance (Kitts, Macy, and Flache 1999).
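
A single neurode already exhibits the mechanism described above. The Python sketch below uses a classic perceptron-style update: the unit fires when its weighted input exceeds a threshold, and each synaptic weight is nudged whenever feedback signals an error. The task (fire only when both inputs are present) and all parameters are illustrative assumptions.

```python
import random

# A neurode that learns from feedback: it fires when the weighted sum of its
# inputs exceeds a threshold, and the connection weights ("synapses") are
# adjusted in proportion to the error signal. Task and parameters are toy
# choices for illustration.

def fire(weights, inputs, threshold=0.5):
    return sum(w * x for w, x in zip(weights, inputs)) > threshold

def train(examples, rate=0.1, epochs=100):
    weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - fire(weights, inputs)      # environmental feedback
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# hypothetical task: fire only when both input signals are present
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train(data)
print([fire(w, x) for x, _ in data])   # expected: [False, False, False, True]
```

Larger webs of such units, with weights updated by rules like back-propagation, generalize this mechanism to the richer models used in the studies cited above.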

4. Criticisms of Simulation

Agent-based simulations have been criticized as unrealistic, tautological, and unreliable:

1. Simulations, like mathematical models, tell us about a highly stylized and abstract world,

not about the world in which we live. In particular, agent-based simulations rely on highly

simplified models of rule-based human behavior that neglect the complexity of human

cognition.

2. Like mathematical models, simulations cannot tell us anything that we do not already

know, given the assumptions built into the model and the inability of a computer to do

anything other than execute the instructions given to it by the program.

3. Unlike mathematical models, simulations are numerical and therefore cannot establish

lawful regularities or generalizations.

The defense against these criticisms centers on the principles of complexity and emergence.

How can simple agent-based models explain the rich complexity of social life? In complex

systems, very simple rules of interaction can produce highly complex global patterns. “Human

beings,” Simon contends (1998, p. 53), “viewed as behaving systems, are quite simple.” We

follow rules, in the form of norms, conventions, protocols, moral and social habits, and

heuristics. Although the rules may be quite simple, they can produce global patterns that may not

be at all obvious and are very difficult to understand. Hence, “the apparent complexity of our

behavior is largely a reflection of the complexity of the environment,” including the complexity

of interactions among strategically interdependent, adaptive agents. The simulation of artificial

worlds allows us to explore the complexity of the social environment by removing the cognitive

complexity (and idiosyncrasy) of constituent individuals.

4.1. The Exploration of Artificial Worlds

If Simon, Axelrod, and the complexity theorists are right, then the artificiality of agent-based

models is a virtue, not a vice. When simulation is used to make predictions or for training

personnel (e.g., flight simulators), the assumptions need to be highly realistic, which usually

means they will also be highly complicated (Axelrod 1997, p. 5). “But if the goal is to deepen

our understanding of some fundamental process,” Axelrod continues, “then simplicity of the

assumptions is important and realistic representation of all the details of a particular setting is

not.” As such, the purpose of these models is to generate hypotheses, not to test them (Prietula,

Carley, and Gasser 1998, p. xv).

Holland (1995, p. 146) offers a classic example of the problem of building theories that too

closely resemble actuality: Aristotle’s mistaken conclusion that all bodies come to rest, based on

observation of a world that happens to be infested with friction. Had these observations been

made in a frictionless world, Aristotle would have reached the same conclusion as Newton,

formulated as the principle that bodies in motion persist in that motion unless

perturbed. Newton avoided Aristotle’s error by studying what happens in an artificial world in

which friction had been assumed away. Ironically, “Aristotle’s model, though closer to everyday

observations, clouded studies of the natural world for almost two millennia” (Holland 1995, p. 146).

More generally, it is often necessary to step back from our world in order to see it more clearly.

For example, Schelling’s neighborhood-segregation model made no effort to imitate

actuality or to resemble an observed city. Rather, the “residents” live in a highly abstract cellular

world without real streets, rivers, train tracks, zoning laws, red-lining, housing markets, etc. This

artificial world shows that all neighborhoods in which residents are able to move have an

underlying tendency towards segregation even when residents are moderately tolerant. This

hypothesis can then be tested in observed neighborhoods. Even if observations do not

confirm the predicted segregation, this does not detract from the value of the simulation,

which points to empirical conditions, absent from the model, that might account for the

discrepancy with the observed outcome.

In short, agent-based models “have a role similar to mathematical theory, shearing away

detail and illuminating crucial features in a rigorous context” (Holland 1995, p. 100).

Nevertheless, Holland reminds us that we must be careful. Unlike laboratory experiments,

computational “thought experiments” are not constrained by physical actuality and thus “can be

as fanciful as desired or accidentally permitted.”

4.2. Unwrapping and Emergence

A second criticism of agent-based modeling addresses a problem known as “unwrapping.”

Unwrapping occurs “when the ‘solution’ is explicitly built into the program [such that] the

simulation reveals little that is new or unexpected” (Holland 1995, p. 137). Critics charge that all

computer simulations share this limitation, due to the inability of computers to act in any way

other than how they were programmed to behave.

This criticism overlooks the distinction between the micro-level programming of the agents

and the macro-level patterns of interaction that emerge. These patterns are, in a strictly logical

sense, “built in” from the outset, yet the exercise is valuable because the logical implications of behavioral assumptions are

not always readily apparent. Indeed, a properly designed computational experiment can yield

highly counter-intuitive and surprising results. This theoretical possibility is based on the

principle of emergence in complex systems. Biological examples of emergence include life, which

emerges from non-living organic compounds, and intelligence and consciousness, which emerge

from dense networks of very simple, switch-like neurons. Schelling’s (1971) neighborhood model

provides a compelling example from social life: segregation is an emergent property of social

interaction that is not reducible to individual intolerance.

4.3. Rationality and Adaptive Behavior

A third criticism is that simulation models, unlike mathematical models, are numerical, not

deductive, and therefore cannot be used to form generalizations. Worse still, simulations of

evolutionary processes can be highly sensitive to the basin of attraction in which the parameters

are arbitrarily initialized (Binmore 1998). However, game-theoretic mathematical models pay a

high price for the ability to generate deductive conclusions: multiple equilibria that preclude a

uniquely rational solution. Equilibrium selection requires constraints on the perfect rationality of

the agents. Simon appreciates the paradox: “Game theory’s most valuable contribution has been

to show that rationality is effectively undefinable when competitive actors have unlimited

computational capabilities for outguessing each other, but that problem does not arise as acutely

in a world, like the real world, of bounded rationality” (1998, p. 38). But bounded rationality

often makes analytical solutions mathematically intractable. “When the agents use adaptive

rather than optimizing strategies, deducing the consequences is often impossible: simulation

becomes necessary” (Axelrod 1997, p. 4). In sum, simulation requires sensitivity analysis and

tests of robustness, but properly implemented, it not only offers a solution to the problem of

equilibrium selection but can also tell us something about how evolutionary systems move from

one equilibrium to another.

5. Conclusion

From a narrowly positivist perspective, agent-based artificial worlds might seem unrealistic,

unfalsifiable, and unreliable, especially when contrasted with earlier approaches to simulation

that were data driven, predictive, and highly realistic (e.g., flight simulators). However, agent-

based models may be essential for the study of open and complex adaptive systems, whether

social, psychological, or biological. When used as an exploratory tool, “bottom-up” simulation

does not test theory against observation. Rather, these thought experiments suggest possible

mechanisms that may generate puzzling empirical patterns.

A key theme that runs through these puzzles is that it is often very difficult to recognize

the underlying causal mechanisms using conventional data-analytic methods. As Holland

concludes (1995 p. 195), “I do not think we will understand morphogenesis, or the emergence of

organizations like Adam Smith’s pin factory, or the richness of interactions in a tropical forest,

without the help of such models.”

References

Axelrod, R. 1997. The Complexity of Cooperation. Princeton: Princeton University Press.

Bainbridge, W. 1995. “Neural Network Models of Religious Belief.” Sociological Perspectives.

Binmore, K. 1998. “Axelrod’s The Complexity of Cooperation.” Journal of Artificial

Societies and Social Simulation 1:1.

Caldwell, S. 1997. Dynamic Microsimulation and the CORSIM 3.0 Model. Ithaca, NY: Strategic

Forecasting.

Carley, K. 1991. “A Theory of Group Stability.” American Sociological Review 56:331-54.

Cyert, R. and J. G. March. 1963. A Behavioral Theory of the Firm. Englewood Cliffs, NJ:

Prentice-Hall.

Forrester, J. W. 1971. World Dynamics. Cambridge, MA: MIT Press.

Gilbert, N. and K. Troitzsch. 1999. Simulation for the Social Scientist. Buckingham: Open

University Press.

Holland, J. 1995. Hidden Order: How Adaptation Builds Complexity. Reading, MA: Perseus.

Kitts, J., M. Macy, and A. Flache. 1999. “Structural Learning: Attraction and Conformity in

Task-Oriented Groups.” Computational and Mathematical Organization Theory 5:129-45.

Latané, B. 1996. “Dynamic Social Impact: Robust Predictions from Simple Theory.” In R.

Hegselmann, U. Mueller, and K. Troitzsch, eds. Modeling and Simulation in the Social

Sciences from a Philosophy of Science Point of View, pp. 287-310. Dordrecht: Kluwer.

Macy, M. 1995. “Natural Selection and Social Learning in Prisoner's Dilemma: Co-adaptation

with Genetic Algorithms and Artificial Neural Networks.” Sociological Methods and

Research 25:103-137.

Meadows, D. L., W.W. Behrens III, D. H. Meadows, R. F. Naill, J. Randers, and E. K. Zahn.

1974. The Dynamics of Growth in a Finite World. Cambridge, MA: MIT Press.

Nowak, A. and R. Vallacher. 1997. “Computational Social Psychology: Cellular Automata and

Neural Network Models of Interpersonal Dynamics,” in S. Read and L. Miller (Eds.)

Connectionist Models of Social Reasoning and Social Behavior. Mahwah, NJ: Lawrence

Erlbaum.

Nowak, M. and K. Sigmund. 1993. “A Strategy of Win-Stay, Lose-Shift that Outperforms Tit-

For-Tat in the Prisoner’s Dilemma Game.” Nature 364:56-58.

Prietula, M., K. Carley, and L. Gasser. 1998. Simulating Organizations: Computational Models

of Institutions and Groups. Cambridge, MA: MIT Press.

Schelling, T. 1971. “Dynamic Models of Segregation.” Journal of Mathematical Sociology 1:

143-186.

Simon, H. 1998. The Sciences of the Artificial. Cambridge, MA: MIT Press.

Vakas-Duong, D. and K. Reilly. 1995. “A System of IAC Neural Networks as the Basis for Self-

Organization in a Sociological Dynamical System Simulation.” Behavioral Science

40:275-303.
