
Control Eng. Practice, Vol. 5, No. 4, pp. 519-526, 1997
Pergamon PII: S0967-0661(97)00032-4
Copyright 1997 Elsevier Science Ltd. Printed in Great Britain. All rights reserved. 0967-0661/97 $17.00 + 0.00

THE ETHICS OF MODELLING


M.J. Rabins* and C.E. Harris, Jr.**
*Mechanical Engineering Department, Texas A&M University, College Station, TX 77843-3143, USA
**Philosophy Department, Texas A&M University, College Station, TX 77843, USA

(Received January 1996; in final form November 1996)


Abstract: This paper considers the professional responsibilities and ethical issues associated with engineering modelling. The ethical issues are first delineated. Several engineering cases of faulty modelling (and their disastrous consequences) are next presented, with suggestions as to how greater sensitivity to ethical considerations might have prevented the problems. The issues are then extended to large-scale economic system modelling, with an example taken from the recent past of engineering-economic system modelling. Conclusions are drawn regarding the essential importance of the connections between modelling and ethics. Copyright 1997 Elsevier Science Ltd

Keywords: Catastrophe theory, conceptual representations, dynamic modelling, ethics, fuzzy models, modelling errors, professionalism, system models

1. BACKGROUND THEORY

This discussion is meant to stimulate thinking about the ethical dimension of modelling. This is not to suggest that all engineers are unethical, nor that being ethically sensitive is sufficient to make a good modeler. There is no substitute for professional expertise, and the authors would be the last to suggest that being ethical overcomes technical deficiencies. Rather, the goal is to give a new appreciation of the fact that ethical responsibility is not only a good thing in itself, but can also contribute to the kind of sensitivity to the consequences of various alternatives that any good modeler should have. Observations will be made about ethics and professionalism, and the ideas developed will be applied to some cases: first the theory, and then the application.

The authors would like to begin by noting something about the professional ethics of engineers. Most engineering codes give prominence--and usually preeminence--to the admonition to hold paramount the health, safety and welfare of the public. Like most of the statements in the codes, this is a very general claim that needs elaboration. One colleague has said that the principal ethical responsibility of engineers is to be aware of the consequences of his/her actions as a professional. He would probably be willing to add that these consequences have to do primarily with the health, safety and welfare of the public. The contention is that this sensitivity is a mind-set that the professional brings to the table when problem solving starts. Thus, an engineering design is less apt to have a negative impact on the public if the designer starts out with this consideration high on his/her list. Similarly, an economic model proposed for an employer or government is less apt to damage the company's profitability or the public's well-being if these ethical concerns are uppermost in one's mind.

In his recent book on mechanical design, Professor David Ullman (1992) writes about the need to evaluate the models upon which designs are based, particularly when human subjects are part of the evaluation.



He aptly uses a Gary Larson cartoon, shown in Figure 1, to illustrate this point. The human guinea pig strapped to the top of that experimental model of the first wheel is obviously being placed in harm's way, but at least he has given his informed consent to the risk. Note that his hands are not tied. However, this does not add much to the injunction in the codes to be concerned with the health, safety and welfare of the public. How is this bare-bones directive fleshed out? Some well-known and very basic ethical ideas, summarized in Figure 2, must be called upon.

Fig. 2. Morals and Modeling: Applicable Ethical Principles (the Golden Rule, informed consent, and utility/welfare).

The first one is the Golden Rule: do unto others as you would have them do unto you. Some version of the Golden Rule is present in most of the ethical and religious traditions--Jewish, Christian, Confucian, Buddhist, Hindu. The major idea in the Golden Rule is to place oneself in the position of those who experience the consequences of one's action, and to ask if one would be willing to accept those consequences. In other words, one must imagine oneself as the recipient of the consequences of the action. The application of this to modelling is probably clear, although some cases will be considered in more detail later.

The Gary Larson cartoon also suggests a second ethical idea: informed consent. Ethicists have identified the major ethical tradition in Western culture as the ethics of respect for persons: respecting the status of every human being as a moral agent. The essence of moral agency is being able to formulate and pursue one's own goals and purposes. Human beings are not things to be used for the ends of someone else. They should be allowed to pursue their own ends, as long as this pursuit does not interfere with a similar right of others to pursue their own ends.


This implies that people should know, insofar as possible, the consequences of the actions they consider, and that they should not be constrained in the choice of these courses of action. Now, of course, modelers do not always have the ability to assure that moral agency in this sense is respected; this is often the prerogative of managers and governmental officials. But they can ask themselves whether people, if properly informed, would be likely to give their consent to the risks imposed on them by certain technologies or programs. Note that the principle of informed consent has many of the same implications as the Golden Rule: one places oneself in the position of those subjected to the consequences of the models one proposes, and asks whether informed consent would be given to those consequences. If not, it might be time to go back to the drawing board.

A third ethical principle is the principle of utility. Economists are familiar with the concept of utility, but it originated in the ethical tradition--in eighteenth- and nineteenth-century English thought. The principle of utility might be what the framers of the engineering codes had in mind when they referred to the "welfare" of the public. At any rate, utility in ethics refers to overall well-being, especially those aspects of physical well-being that are necessary for a person to realize individual goals and purposes. Thus, it can also include health and safety. Looked at in this way, there is a certain convergence between the first two principles and the third: the first two emphasize the rights of individuals to pursue their own moral agency, and the third emphasizes the material conditions necessary for realizing this moral agency. However, the first two principles can also conflict with the third. This conflict comes about in the following way: in promoting those conditions that maximize utility overall, the rights and prerogatives of individuals are sometimes neglected; or, in respecting the rights of individuals, the overall utility is sometimes diminished.

Fig. 1. The role of human subjects in experimental model validation tests. (From Gary Larson.)

Many moral issues in society center on the conflict between the overall public good and the rights of individuals. These conflicts have deep roots in the Western ethical tradition. There is no royal road to reconciling this conflict, but modelers--and everyone else--should be aware of the potential conflict.


2. PRAGMATIC CONSIDERATIONS

So much for the short course in ethics. Before looking at some cases, the relationship of ethical failures to engineering or economic incompetence should be discussed. As each case is described, a natural response may be, "What does that have to do with ethics? Isn't it just an example of bad engineering or incompetent economic analysis?" If an event can be explained in terms of poor engineering design or perhaps bad management, one may be inclined to think there is no need to look for ethical failures as well. Furthermore, people often tend to look for explanations that are congruent with their own area of expertise: engineers look for design failures, economists look for bad economic analysis, managers look for poor management, and ethicists look for unethical conduct. Since most people are not ethicists, why bring in the ethical dimension?

This question can be answered by saying something about how a disaster is explained. Whether a type of explanation is appropriate depends on two things. First, there must be a violation of the principles appropriate to that particular area. To explain a disaster in terms of bad engineering, one must say that there were violations of principles of engineering design. To explain a disaster in terms of unethical conduct, one must say that there were violations of ethical principles. Note that a violation of engineering principles does not necessarily imply a violation of ethical principles (or vice versa). However, there are situations on the boundaries of design decision-making that cross over in a fuzzy sense (in both the informal literal sense and the formal sense of fuzzy-set theory) from being incorrect design decisions to being unethical decisions; a toy illustration is sketched below. Engineers are often confronted with such decisions when determining just how safe to make a product that must be safely usable by the public, and yet remain affordable.

The second requirement is that the violation of the appropriate principles must in some sense be the cause--or one of the links in the chain of causes--of the disaster. Thus, to explain a disaster in terms of bad engineering, there must be a violation of the principles of good engineering design, and this violation must be the cause--or at least a partial cause--of the disaster.
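As a minimal sketch of that fuzzy crossover, consider a membership function that grades how far a design counts as "adequately safe" given its factor of safety. The breakpoints (factors of 1.0 and 3.0) are invented for illustration; they come from neither the paper nor any design code.

```python
# Hypothetical sketch of the fuzzy crossover described above: the degree
# (0..1) to which a design counts as "adequately safe" as a function of
# its factor of safety. The breakpoints 1.0 and 3.0 are assumptions
# chosen for illustration, not values from the paper or any standard.

def adequately_safe(factor_of_safety: float) -> float:
    """Fuzzy membership in the set of adequately safe designs."""
    if factor_of_safety <= 1.0:   # at or below the nominal failure load
        return 0.0
    if factor_of_safety >= 3.0:   # comfortably inside code-style margins
        return 1.0
    # Linear ramp between the two breakpoints.
    return (factor_of_safety - 1.0) / 2.0

for fs in (0.9, 1.5, 2.4, 3.5):
    print(f"FS = {fs}: membership = {adequately_safe(fs):.2f}")
```

In this picture there is no sharp line where a thin design stops being merely suboptimal engineering and starts being an unethical imposition of risk; the transition is graded, which is exactly the boundary situation the text describes.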

It is easy to see that a disaster can be explained in more than one way. It could be explained, for example, in terms of bad engineering (if principles of engineering design were violated) and bad ethics (if principles of ethics were violated). But one still might be inclined to say, "Why not just stop with the observation that there was bad engineering or bad economic analysis? Why go on to say there was also unethical conduct? After all, if there had not been poor engineering or poor economic analysis, the unfortunate event might not have occurred." The point is this: if there had not been bad ethics, the bad engineering or bad economic analysis might not have occurred. While good ethics does not assure a desirable outcome, refusing to violate ethical standards is clearly another way to promote a desirable outcome. Thus, good ethics is in general a part of professionalism in engineering, economics or anywhere else. It is not just a cosmetic add-on, but integral to professional activity itself. This is why the ethical dimension of modelling (or any other professional activity) is so important. Note also that the fact that an engineer did something wrong from an engineering point of view does not mean that unethical actions were involved. The point is that heightened sensitivity to ethical behavior can enhance technical competence. Figure 3 is a graphical representation of this thought. Now it is appropriate to consider some applications.

Fig. 3. Tension and complementarity between engineering and ethics.

3. ENGINEERING CASES

A much-studied and very well-known case is the catastrophic failure of the Challenger Shuttle in 1986, which killed six astronauts and a teacher. The disaster has been attributed to the failure of a booster field-joint O-ring seal during the low-temperature launching of the shuttle. What is not well known is that the same O-ring had operated safely over a wide temperature range for many launches of the smaller Titan rocket. The trouble occurred when the static model for the seal of the Titan field joint was incorrectly applied to the dynamic situation in the much larger shuttle booster seal. Figure 4 shows the nature of the dynamic motions of the seal joint in the shuttle, which did not occur in the smaller Titan application.

Fig. 4. Modeling the Challenger booster O-ring joints: the evolution of the shuttle booster joint, 1970s-1986.

Where did the design engineers and their reviewers go wrong? Perhaps they did not sufficiently question the applicability of the smaller Titan model to the larger Shuttle. One might argue that this oversight involved a faulty engineering judgment regarding unmodeled dynamics, and this is certainly true. However, the question is whether the errors in professional judgment could have been avoided if there had been more ethical concern for the potential for harm in the modelling design approach. Hindsight is, of course, near perfect, but foresight can be improved by rigorous and severe questioning of the consequences of the controlling assumptions during the modelling phase. The authors believe sensitivity to the ethical dimensions of the problem could have promoted a more rigorous modelling procedure.

Consider the three ethical principles elaborated earlier. Using the Golden Rule, ask whether the modelers and the managers responsible for assessing the modelling would have been willing to fly in the Challenger themselves, or to allow their children to fly. After all, the engineers (unlike the astronauts) did know about the O-ring temperature problem before the Challenger flight.

Perhaps even more telling is the appeal to the standard of free and informed consent. People should be informed about unusual dangers they might be subjected to, and given the right to consent or not consent to those dangers. This canon was honored with respect to the dangers created by ice formation on the Challenger, due to the sub-freezing temperatures the night before the launch. The astronauts were informed of this problem and that it could cause serious difficulties. They decided to fly anyway, but at least they were able to make the decision themselves. The problem created by the O-rings was a different matter. The astronauts were never informed of this problem, even though it was known to be potentially life-threatening. Thus the ethical standard of free and informed consent was violated. One is inclined to think that if engineers and managers had known that the astronauts would be informed of the potential danger, they would also have been more likely to examine the problem more thoroughly, even before the astronauts were informed.

Incidentally, the violation of these first two ethical principles seems to have created major problems for the space flight program. A person in a very high-level managerial position in the program has said that a major problem in the aftermath of the Challenger disaster was rebuilding the confidence of the astronauts. He reports that they did not believe they were properly informed. Notice that this is an ethical criticism.

Finally, turn to the third ethical principle, that of utility. If the engineers and managers had wanted to ethically justify their rather slipshod handling of the O-ring problem, they might have appealed to utility. "After all," they might have argued, "the good of the country will be enhanced by the progress of the space flight program, and this requires that the Challenger fly. If we wait until we have 100% certainty of safety, no shuttle will ever leave the launch pad." Here is an example of how utility might seem to point in a different direction from the first two principles, but care is necessary. If a disaster occurred, the space program would be severely harmed, and this would have considerable negative utility. Furthermore, most ethicists believe that utility should usually be subordinated to protecting the rights of individuals when the two conflict. So the principle of utility could probably not justify disregard of the O-ring problem, or failure to warn the astronauts of the risks it could produce.

Another famous case is the 1981 collapse of a walkway across the atrium of the Hyatt Regency hotel in Kansas City during a tea-dance in the lobby. The collapse killed 114 people in the worst structural catastrophe in United States history. The modelling analysis of the original design was required by local building codes to provide a factor of safety of 3 to 4 in the materials selection and geometrical design. Analysis after the collapse indicated a modelled factor of safety of about 1. Worse still, the original design was modified during fabrication, as shown in Figure 5, to yield a doubling of the load on the hanger rods, resulting in a further halving of the safety factor. In court hearings after the accident, the steel designer and the construction contractor blamed each other for allowing the second error. Clearly, one or the other was not telling the truth. This is certainly a case of faulty modelling, as well as dishonesty. Further, whoever made that fatal fabrication design change did not understand the effects of the faulty model they used. But one can make a case that poor ethics also played a part in the disaster. If the engineers and contractors had put themselves in the position of those who would use the walkways--or stand under them--one wonders whether they would not have been more conscientious about checking their design and about analyzing the consequences of the decision to use two separate hanger rods instead of one continuous rod. Is this not the kind of thing professionals should constantly be doing? And is this not the kind of thing that ethical sensitivity would emphasize? In the end, the engineers and contractors involved lost their licenses to practice, and their lack of sensitivity to the potential consequences of their actions certainly was a factor.

Fig. 5. Modeling the Hyatt Regency walkway hanger rods: original detail vs. as built.
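The arithmetic of the case is worth making explicit. In the sketch below, only the ratios come from the case description (a required factor of 3-4, a modelled factor of about 1, and the as-built doubling of the rod load); the framing is an illustrative back-of-envelope calculation, not the court record.

```python
# Back-of-envelope sketch of the walkway safety factors reported above.
# Only the ratios come from the case description; everything else is
# an illustrative framing of the same numbers.

required_fs = 3.0    # lower end of the code-required factor of safety
modelled_fs = 1.0    # factor found for the original design, post-collapse

# The as-built change hung the lower walkway from the upper one, so the
# upper rod-to-beam connection carried roughly twice its design load.
load_multiplier = 2.0
as_built_fs = modelled_fs / load_multiplier

print(f"required: {required_fs:.1f}, original design: {modelled_fs:.1f}, "
      f"as built: {as_built_fs:.2f}")
# as built: 0.50 -- a connection expected to fail at about half the load
# it was required to carry with a 3x margin.
```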

Next, consider the less well-known case of the Therac-25, a product of Atomic Energy of Canada, Limited (Fish, 1994). The device was designed to produce high-energy electron beams to destroy tumors in living human tissue. The Therac-25 evolved from an earlier device, the Therac-20; the main difference between the two machines is the extensive use of software control in the Therac-25, compared with the limited range of software functionality in the Therac-20. There was a rather complex software error involving a faulty assumption about the correct timing of mechanical operations on the Therac-20, versus the incorrect equivalent software modelling on the Therac-25. As a result, six patients received massive radiation overdoses, several of them fatal, in separate accidents before the software was finally corrected. Unfortunately, this example of over-reliance on software to handle safety is not an isolated instance in industry. As computer models that protect safety become larger and more complex, attention must be focused on the obligations the system modeler must assume.

Now the question for the professional is: how can one avoid such tragedies in the future? Of course there is no way to avoid all human errors, but ethical sensitivity can certainly help. Did the designers place themselves in the position of the patients who would use the machines? Did they ask themselves, in as imaginatively vivid a way as they could, what protection they would want if they were being placed on the table and subjected to the electron beam? Did they ask whether they would have given informed consent if they, as patients, knew as much about the machine as the designers did? If they had concluded that they would not have given their informed consent, would they have been more likely to have come up with the safety provisions (later introduced) that would have avoided the deaths? Of course, no one can know for sure how these questions would be answered. All that is known is that ethical considerations can provide powerful motivations for asking them.
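The failure pattern at issue--software that assumes a mechanical step has completed rather than checking a hardware interlock--can be sketched in a few lines. This is a hypothetical illustration of the pattern only, not the actual Therac-25 code or its specific fault; all names, delays and states are invented.

```python
# Hypothetical sketch of the failure pattern: firing on a timing
# assumption instead of a hardware interlock check. Not the actual
# Therac-25 software; names and delays are invented for illustration.

import threading
import time

class BeamController:
    def __init__(self):
        self.filter_in_place = threading.Event()  # stands in for an interlock

    def move_filter(self):
        """Start the mechanical motion; it completes after a variable delay."""
        def motion():
            time.sleep(0.5)               # motion takes longer than assumed
            self.filter_in_place.set()
        threading.Thread(target=motion).start()

    def fire_unsafe(self):
        """FLAW: assumes a fixed delay is always enough for the filter."""
        time.sleep(0.1)                   # timing assumption, not a check
        print("beam on; filter in place:", self.filter_in_place.is_set())

    def fire_safe(self):
        """FIX: wait on the interlock itself, aborting on timeout."""
        if self.filter_in_place.wait(timeout=2.0):
            print("beam on; filter verified in place")
        else:
            print("abort: filter position never confirmed")

ctrl = BeamController()
ctrl.move_filter()
ctrl.fire_unsafe()   # prints False: the beam fires before the filter arrives
ctrl.fire_safe()     # blocks until the interlock is set, then fires
```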


The author of To Engineer is Human, Henry Petroski (1982), has coined a phrase, "computer-aided catastrophes," to cover situations like those just described, where computer models successfully applied in previous applications are erroneously applied to new situations. One specific example he cites is the collapse of the roof of a basketball arena in Hartford, Connecticut, under a heavy snow load, just hours after over 17,000 fans had left the arena. The failure was traced to the incorrect application of a finite-element stress analysis code to the new roof design, thereby violating some assumed limitations of the code. Clearly this was a case of limited knowledge of the design model utilized.

Unfortunately, there are numerous examples of large computer code models where software is expected to replace hardware and human judgement in making crucial control decisions. For example, what would have happened if the "Star Wars" Strategic Defense Initiative (SDI) had actually been called upon to destroy incoming missiles? Would the millions of lines of computer-coded models actually have worked the first time they were turned on? This is a frightening thought, although proponents of SDI have argued that it fulfilled its mission by bringing the Soviet defense industry and the Soviet government to their knees through economic overextension.

There are many other examples and cases that could be cited to point to the catastrophic results of faulty engineering or computer modelling and design. But by now, the case that faulty models can easily have catastrophic results has been established. Furthermore, these failed models can usually be traced to faulty judgement that might have been avoided with more ethical concern about the potential for harm of unmodeled failure modes. It stands to reason that greater ethical awareness can only increase the chances of anticipating and avoiding failures.

4. AN ECONOMIC MODELLING CASE

The engineering co-author of this paper contends that the foregoing is vitally relevant to the topic of economic modelling. To make that case, one must go back 25 years in his limited personal knowledge of large-scale economic modelling. One must also start with an assumption, namely that the models considered are used by industrial economic planners and government policy makers. It will further be assumed that when such models are used, faults in them can have results no less catastrophic than the faulty O-ring design model or the faulty walkway hanger rods described earlier, albeit less immediate, dramatic or obvious. Perhaps the humans affected by the models will not be as directly affected as in the cartoon shown earlier, but their economic welfare and mental (if not physical) health are certainly at stake.

What kind of large-scale economic modelling errors can be made, and how do they relate to the potential-harm problem? In the late 1960's and early 1970's a large-scale economic modelling approach was proposed by Dr. Jay Forrester, an electrical engineer at MIT. Dr. Forrester gained recognition as director of the U.S. Defense Department's Distant Early Warning (DEW) line in northern Canada, designed to protect the United States against sneak missile attacks over the north pole by the Soviets. Subsequently, he acquired even greater fame by inventing and developing the core memory for digital computers. With the income from the prime patents on that invention, he supported a major modelling initiative at MIT, resulting in a series of books and articles on large-scale dynamic systems modelling, including Urban Dynamics (Forrester, 1969) and World Dynamics (Forrester, 1971). The engineering co-author became enamored of these models, and even taught a course on the subject at Brooklyn Polytechnic University, using Forrester's books. He used the modelling approach proposed by Forrester to correctly analyze feedback control systems and oscillator hardware designs. It was very gratifying to teach modelling subject matter that worked well for engineering hardware systems, and apparently also worked well to explain possible approaches to overcoming urban blight, ecological breakdown in Brazilian rain forests, world-wide pollution, and the like. For a while in the 1970's there was even a world-dynamics society operating out of Rome ("The Club of Rome") with philanthropic industrial support.

-q
l

Rate /

Controlled Flow

Information \

'

Level

Fig. 6. A simple Forrester-type model. (Reprinted by permission from Urban Dynamics, by J.W. Forrester, the MIT Press, 1969.)


Fig. 7. A complex Forrester-type model of an urban system (sectors include underemployed population, migration, housing and jobs). (Reprinted by permission from Urban Dynamics, by J.W. Forrester, the MIT Press, 1969.)

There were, however, several flaws in the Forrester-type models that subsequent investigators noted. To understand these cautionary notes, see Figures 6 and 7, which show a simple and a more complicated Forrester-type model. First, the use of integrating models with positive or negative feedback in Forrester's models automatically assumed an eigenvector-type solution, with either exponentially increasing (unstable) or exponentially decreasing (stable) mathematical responses. By assuming different structural models (e.g., table look-up), entirely different types of mathematical responses would become possible; a sketch of this point follows below. Second, if the multiplying factors in the Forrester models are assumed to be constants, then unmodeled nonlinearities can enter into the actual system results, as compared with the predicted simulation results. Finally, if there are causal factors in the actual system that are overlooked, then unmodeled dynamics in the actual system will make the limited model results unrealistic and of limited use.

As experts on urban, ecological and world economic models examined the Forrester models more closely, more and more of these modelling problems came to light. Forrester's lack of broad perspective and unwillingness to listen to others outside of his field kept him from seeing the defects in his modelling techniques. He failed to see that great success and distinction in one area did not guarantee the same success in another, quite different area. A judgement failure blinded him to the defects of his own work, and misled many other people as well. Such judgements can be made more adequately and appropriately with increased sensitivity to the ethical issues presented at the outset of this paper.

Beyond the ethical issues, one other comment is in order. In the 1980's, micro-economic and macro-economic models gradually supplanted the Forrester models. The crucial questions still remain, however: what are the true causes and effects that must be modelled? What dynamics must be modelled, and what can be ignored? When do time-varying or nonlinear relations have to be included in the model, and when can they be effectively represented by linearized models with small excursions of the key dependent variables? When does precision in a model interfere with the accuracy of that model? L. Zadeh referred to this point when he wrote: "As the complexity of a system increases, our ability to make precise and yet significant statements about its behavior diminishes until a threshold is reached beyond which precision and significance (or relevance) become almost mutually exclusive characteristics." (Zadeh, 1973). In other words, how much precision in a model is too much? There are no easy answers to these questions. Current trends in automatic control system design and modelling call for robust models and methods that can accommodate large swings in all of the answers to these questions.
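To make the first flaw concrete, here is a minimal sketch of the level-rate loop of Figure 6. The variable names and numbers are illustrative assumptions; the point is only that a single loop with a constant feedback factor can produce nothing but exponential growth or decay.

```python
# Minimal sketch of the Fig. 6 structure: a level integrating a controlled
# flow whose rate is set by information feedback. With a constant factor k,
# a single loop of this kind can only produce exponential responses; the
# dynamics are built into the assumed structure, not learned from data.
# All names and numbers here are illustrative assumptions.

def simulate(level: float, k: float, dt: float = 1.0, steps: int = 10) -> list:
    """Euler integration of dL/dt = k * L."""
    history = [round(level, 1)]
    for _ in range(steps):
        rate = k * level        # information feedback sets the flow rate
        level += rate * dt      # the level integrates the controlled flow
        history.append(round(level, 1))
    return history

print(simulate(100.0, k=+0.10))  # positive feedback: exponential growth
print(simulate(100.0, k=-0.10))  # negative feedback: exponential decay
# No constant k yields oscillation, saturation or regime change; those
# require different structure, e.g. nonlinear or table look-up factors.
```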

5. CONCLUSIONS

The final topic is the issue of potential harm. Professionals have what is sometimes called an "implicit contract" with society. Society has provided professionals with tax-supported education, the right to regulate, accredit and enforce licensure and ethical codes, and a very attractive lifestyle and income. In return, professional codes of ethics say that the foremost obligation is to hold paramount the health, safety and welfare of society. This is the professional side of the contract.

If the professional activity of an engineer consists of developing large-scale economic models, then the engineer owes it to society to be concerned about the potential harm to society if the models prove to be faulty. That means testing the models on historical data before they enter the real world; a minimal sketch of such a test follows these conclusions. It means subjecting the models to critical review, particularly by experts in the field being modelled, and listening carefully to what those experts have to say. It means placing oneself in the position of those who will be subjected to the consequences of the model. It means making every effort to inform those upon whom the modeler is testing the models, so that they are fully aware of the assumptions and limitations of the models. It means asking questions about what course of action benefits the public, and sometimes balancing the public good against the rights of individuals. It means being honest about models, particularly with the modeler. And it means going to all necessary lengths to be objective.

Many of these requirements are mandated by discipline-specific professional codes of ethics, and codifying the ethics of a professional discipline is often one of the first steps in establishing wide acceptance of that discipline. The ethical dimension of professional practice is therefore an important aid in promoting the highest standards and in protecting the public.
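As a closing illustration of the historical-data test urged above, consider a minimal hold-out check: fit a model on the early part of a series, then compare its predictions against data it has never seen. Everything here (the series and the exponential model form) is an invented placeholder, not data from the paper.

```python
# Minimal sketch of testing a model on historical data before use.
# The series and the exponential model form are invented placeholders.

import math

history = [100.0, 109.0, 121.0, 131.0, 144.0, 158.0, 175.0, 190.0]
train, holdout = history[:5], history[5:]

# Fit a constant-growth (exponential) model to the training window only.
k = (math.log(train[-1]) - math.log(train[0])) / (len(train) - 1)

# Predict forward over the hold-out window and measure the error.
for i, actual in enumerate(holdout, start=1):
    predicted = train[-1] * math.exp(k * i)
    err = 100.0 * (predicted - actual) / actual
    print(f"step {i}: predicted {predicted:6.1f}, actual {actual:6.1f}, "
          f"error {err:+.1f}%")
# Large or growing errors here are the cue to revisit the model's
# structure before anyone relies on it.
```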

REFERENCES

Fish, B.D. (1994). The Therac-25 Case. Honors paper in PHIL/ENGR 482 (Engineering Ethics), Texas A&M University, Fall 1994.
Forrester, J.W. (1969). Urban Dynamics. MIT Press, Cambridge.
Forrester, J.W. (1971). World Dynamics. Wright-Allen Press, Inc., Cambridge.
Petroski, H. (1982). To Engineer is Human: The Role of Failure in Successful Design. St. Martin's Press, New York.
Ullman, D.E. (1992). The Mechanical Design Process. pp. 53-69, 226-250. McGraw-Hill, Inc., New York.
Zadeh, L.A. (1973). The Concept of a Linguistic Variable and its Application to Approximate Reasoning. Memorandum ERL-M411, Electronics Research Laboratory, University of California, Berkeley.
