design, and ecological safety (Flach, 1990; Rasmussen & Vicente, 1989; Wioland & Amalberti, 1996). This profusion of research and ideas contrasts with the fact that human factors research does not have much application within the industry. It is the goal of this chapter to better understand the reasons for this paradox.
Of course, that is not to say that nothing exists in the industry regarding human factors. First, the growing adoption by industry of cockpit resource management (CRM) courses is more than a success for psychosociologists. Second, human factors are intensively used by manufacturers and regulators. How can a manufacturer or an operator ignore the end user (the pilot) if it wants to please the customer and sell a product?
However, one must question how effectively human factors are used in design and operations, and try to comprehend why the human factors actually used are not the ones suggested by academics. Regardless of their success, CRM courses have encouraged a relative encapsulation of human factors theoretical concepts within flight operations. In the rest of the industry, there is still a belief that human factors relies much more on good sense and past experience than on theory: "We are all human, so we can all do human factors" (Federal Aviation Administration [FAA] human factors report; Abbott et al., 1996, p. 124).
One further step in this analysis should allow us to comprehend what model of the end user designers and operations staffs had (and/or still have) in mind. Depending on the nature of this end-user model, we suspect that the design process will differ, because design is oriented toward replacing end users' weaknesses.
In sum, speaking about automation means addressing a very sensitive question that goes beyond the technical and "objective" analysis of automation-induced problems. The problems raise ethical questions and reflect a persisting deficit in communication between human factors academia and industry.
Fortunately, this situation is not frozen. Several recent initiatives by regulatory authorities (the FAA Human Factors Team, the JAA Human Factors Steering Group, ICAO's International Human Factors Seminar) are casting new light on the problem. For safety
reasons, and also because of a growing tendency of managers and politicians to fear legal consequences and public disgrace if new crashes occur, authorities increasingly ask human factors academics and industry staff to be more open-minded and to bridge their disciplines. Note that bridging does not mean unidirectional effort. Industry should incorporate more academic human factors, but the academic realm should equally improve its technical knowledge, consider industry constraints, and weigh the feasibility and cost of suggested solutions.
Given this big picture, and because numerous (good) reports, books, and chapters have already been devoted to overviews of automation-induced problems in commercial and corporate aviation (see, e.g., Amalberti, 1994; Billings, 1997; Funk, Lyall, & Riley, 1995; Parasuraman & Mouloua, 1996; Sarter & Woods, 1992, 1995; Wiener, 1988; Wise et al., 1993), this chapter gives priority to identifying mismatches between industry and the theorists and to suggesting a critical assessment of human factors solutions, rather than simply listing drawbacks and presenting new solutions as miracles.
The chapter is divided into three sections. The first section describes the current status of aircraft automation, why it has been pushed in this direction, what end-user model triggered the choices, how far the results can be considered a success by the aviation technical community, and how drawbacks were explained and controlled. The second section starts from a brief description of automation drawbacks, then turns to the presentation of a causal model based on the mismatch between industry's and academia's models of the pilot. The third and last section focuses on solutions suggested by theorists to improve the man-machine interface or to assist industry in improving jobs. It also considers the relevance, feasibility, and limitations of these suggestions.
Automation in aviation has reached all the goals assigned to it in the 1960s. There is a
consensus that nobody should design a modern aircraft with much less automation than
the current glass-cockpit generation.
The global performance of systems has been multiplied by a significant factor: Cat
III landing conditions have been considerably and safely extended, lateral navigation
(LNAV) has been tremendously enhanced, computer-support systems aid the pilot in
troubleshooting failures, and so forth.
The economic benefits have been equally remarkable: fuel savings, enhanced reliability, ease of maintenance support, reduction in crew complement, and tremendous benefits in reduced training time. The duration of the transition course for type rating on modern glass cockpits is at least 1 week shorter than for the old generation, and cross-crew qualification among similar types of aircraft has improved considerably. For example, the transition course between the A320 and A340 requires an 11-day course instead of a 22-day regular course. Another benefit relies upon the zero-flight-time training concept, which should soon allow type rating pilots on a machine without any actual flight (flight being, of course, the most expensive part of training). However, some U.S. airlines have recently come to the conclusion that crews are not learning enough in transition and are considering making the transition course much longer (a fallout of the Cali accident investigation).
Automation has also contributed greatly to the current level of safety. Safety
statistics show that glass cockpits have an accident rate half that of the previous
generation of aircraft. The global level of safety now appears to be equal to that of the
nuclear industry or railways (in Western Europe). The only apparent concern is that this
(remarkable) level of safety has not improved since the 1970s. Of course, one should
also note that the current level of safety is not due only to automation, but to the overall
improvement of the system, including air traffic control (ATC), airports, and the use of simulators in training. But one also has to acknowledge that the current plateau of safety
is not solely a function of automation problems. There is growing evidence that increases
in traffic, in the complexity of organizations and management, and in the variety of
corporate and national cultures all play significant roles in the plateau and need more
consideration in future design and training.
Last, but not least, the regulation of aviation safety relies on the acceptance of a certain risk of accident, a compromise between safety costs and safety benefits; this level of accepted risk is close to the one observed at the present time. Therefore manufacturers and managers do not feel guilty and consider that they meet the targets (better efficiency, lower cost, with a high level of safety). Objectively, these people are correct. The reader will easily understand that this self-satisfaction does not leave much room for dialogue when criticisms of automation are expressed.
Failures are always possible outcomes of betting, just as errors are logical outcomes of dynamic cognition (see Reason, 1990).
Field experiments in several areas confirm this pattern of behaviour. For example, a
study considered the activity of fighter pilots on advanced combat aircraft (Amalberti &
Deblon, 1992). Because of rapid changes in the short-term situation, fighter pilots prefer
to act on their perception of the situation to maintain the short-term situation in safe
conditions, rather than delaying action until the situation has been evaluated to an optimal understanding.
When an abnormal situation arises, civil and fighter pilots consider very few hypotheses (one to three), all of which result from expectations developed during flight preparation or in-flight briefings (Amalberti & Deblon, 1992; Plat & Amalberti, in press). These prepared diagnoses are the only ones that the pilot can refer to under high time pressure. A response procedure is associated with each of these diagnoses and enables a rapid response. This art relies on anticipation and risk taking.
Because their resources are limited, pilots need to strike a balance between several
conflicting risks. For example, pilots balance an objective risk resulting from the flight
context (risk of accident) and a cognitive risk resulting from personal resource
management (risk of overload and deterioration of mental performance).
To keep resource management practical, the solution consists in accepting increased outside risk: simplifying situations, dealing with only a few hypotheses, and schematizing reality. To keep the outside risk within acceptable limits, the solution is to adjust the perception of reality as much as possible to fit the simplifications set up during mission preparation. This fit between simplification and reality is the outcome of in-flight anticipation.
However, because of the mental cost of in-flight anticipation, the pilot's final hurdle is to share resources between short-term behavior and long-term anticipation. The tuning between these two activities is accomplished by heuristics that again rely upon another sort of personal risk taking: as soon as the short-term situation is stabilized, pilots invest resources in anticipation and leave the short-term situation under the control of automatic behavior.
Hence the complete risk-management loop is multifold: Because of task complexity
and the need for resource management, pilots plan for risks in flight preparation by
simplifying the world. They control these risks by adjusting the flight situation to these
simplifications. To do so, they must anticipate. And to anticipate, they must cope with a
second risk, that of devoting in-flight resources to long-term anticipation to the detriment of short-term monitoring, navigation, and collision avoidance. Thus, pilots accept a continuous high load and a high level of preplanned risk to avoid transient overload and/or
uncontrolled risk.
Any breakdown in this fragile and active equilibrium can result in unprepared
situations, where pilots are poor performers.
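To make the structure of this loop concrete, here is a toy sketch in Python; the function name, the boolean "stable" flag, and every numeric split are invented for illustration and carry none of the model's empirical content.

```python
# Toy rendering of the risk-management loop described above.
# Names and numbers are invented; only the structure follows the text.

def allocate(short_term_stable: bool) -> dict:
    """Split limited cognitive resources between short-term control
    and long-term anticipation."""
    if short_term_stable:
        # Short term left to routines ("automatic behavior"): this is the
        # preplanned risk accepted in exchange for anticipation.
        return {"short_term_control": 0.3, "anticipation": 0.7}
    # Transient instability: resources pulled back to immediate control,
    # at the price of falling behind on anticipation.
    return {"short_term_control": 0.8, "anticipation": 0.2}

# A leg of flight seen as a sequence of moments, True = short-term situation
# stabilized. Anticipation keeps reality close to the simplifications prepared
# before flight, which in turn keeps the outside risk acceptable.
for stable in (True, True, False, True):
    print(stable, allocate(stable))
```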
Protections of Cognitive Betting: Metaknowledge, Margins, and Confidence. The findings just given sketch a general contextual control model that serves the subject in achieving a compromise between the cost-effectiveness of his performance and the maintenance of accepted risks at an acceptable level. This model comes with a set of control-mode parameters (Amalberti, 1996; Hollnagel, 1993). The first characteristic of the model lies in its margins. The art of the pilot is to plan actions that allow him to reach the desired level of performance, and no more, with a comfortable margin. Margins serve to free resources for monitoring the cognitive system, detecting errors, anticipating, and, of course, avoiding fatigue.
The tuning of the control mode depends both on the context and on metaknowledge and self-confidence. Metaknowledge allows the subject to keep the plan and the situation within (supposedly) known areas, and therefore to bet on reasonable outcomes. The central cue for control-mode reversion is the emerging feeling of difficulty triggered by unexpected contextual cues or changes in the rhythm of action, which leads to the activation of several heuristics to update the mental representation and to keep situation awareness (comprehension) under (subjectively) satisfactory control.
These modes and heuristics control and adapt the level of risk when actions are
decided.
A Model of Error Detection and Error Recovery. Controlling risk is not enough. Regardless of the level of control, errors will occur, and the operator should be aware of them. He develops a series of strategies and heuristics to detect and recover from errors (Rizzo, Ferrante, & Bagnara, 1994; Wioland & Amalberti, 1996).
Experimental results show that subjects detect over 70% of their own errors (Allwood, 1994; Rizzo et al., 1994); this percentage falls to 40% when subjects are asked to detect the errors of colleagues (Wioland & Doireau, 1995). This lower performance occurs because the observer is deprived of the memory of intentions and actions (the mental traces of execution), which are very effective cues for the detection of routine errors.
Jumpseat and video observations in civil and military aviation (for an overview, see Amalberti, Paris, Valot, & Wibaux, 1997) show that error rate and error detection interact in complex ways. The error rate is high at first, when task demands are low and the subjects are extremely relaxed, then converges toward a long plateau, and finally decreases significantly only when the subjects almost reach their maximum performance level. The detection rate is stable above 85% during the plateau, then decreases to 55% when the subjects approach their maximum performance, precisely at the moment when they are making the fewest errors.
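The figures just quoted can be condensed into a small summary lookup; the regime labels are shorthand added here, and only the percentages come from the studies cited above.

```python
# Summary of the error-detection figures reported above; regime labels are
# shorthand added for this sketch, percentages are those quoted in the text.

ERROR_DETECTION_RATES = {
    "own errors (experimental settings)": 0.70,
    "colleagues' errors (external observer)": 0.40,
    "own errors in flight, demand plateau": 0.85,          # stays above this level
    "own errors in flight, near maximum performance": 0.55,
}

for regime, rate in ERROR_DETECTION_RATES.items():
    print(f"{regime}: ~{rate:.0%} of errors detected")
```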
[Figure: the model of ecological safety sketched above. Task preparation (planning, construction of the mental representation, definition of intentions and of the expected level of performance) and task execution (human skills and routines) rest on cognitive betting: initial risk taking when reducing the complexity of the universe by taking options, and risk taking in dynamic resource management. Betting is kept under control by cognitive mode control, metaknowledge, and confidence, and by defenses (error expectation, error detection and recovery), with heuristics controlling multitask management and understanding; the whole loop serves to learn metaknowledge and improve confidence.]
Because operators are resource-limited, they have to accept risks in simplifying the universe (setting up a mental representation), using routines to maintain the job under
are not necessarily required, have not been carefully analyzed, and create the potential for a series of negative interactions. They often differ from crews' spontaneous behavior (which appears to be a heritage from the old generation of aircraft maneuvering), sometimes subtly and sometimes substantially, and therefore demand more pilot knowledge and more attention paid to comprehending how the system works and how to avoid making judgment errors.
[Fig. 7.2. Two panels plotting mental resources invested in updating the mental representation and improving situation awareness (understanding the situation / memory access) against mental resources invested in sharing multiple tasks and in error prevention, detection, and recovery (monitoring the performance). The left panel shows the ecological envelope of performance, bounded by warnings such as too many errors, too many subtasks, not enough time to test situation awareness, a long time to detect errors, and a growing number of surprises and noncomprehended items, with loss of control beyond them. The right panel shows the extension of performance with automation and assistances beyond the ecological envelope (points 1 and 2 discussed below).]
Left: the space of possibilities for reaching a given level of performance. The performance envelope is normally self-limited, to remain in safe conditions (situation under control), thanks to a series of cognitive signals warning the operator when approaching unstable boundaries. Note that the operator can lose control of the situation whatever the level of performance, but for different reasons.
Right: a perverse effect of automation. The envelope of cognitive performance has been artificially expanded by assistance and automated systems. This extension of the envelope of performance is an undebatable advantage for point 1. However, it is much more debatable for point 2, which represents a level of performance already attainable within the ecological envelope. The expansion of the performance envelope multiplies the solutions for reaching the same goal at the same level of performance and mechanically reduces the experience of each solution.
This is the case for many vertical profile modes on modern autopilots. For example, for the same descent order given by the crew, the flight path chosen by the autopilot is often subtly different from the crew's spontaneous manual procedure of descent, and moreover varies across different types of aircraft. Another example is provided by some modern autoflight systems that allow the pilot to control speed by different means, either by varying the airplane pitch attitude (speed-on-pitch) or by varying the engine thrust level (speed-on-thrust). Even though these same concepts can
be used successfully by most crews in manual flight, they are still confusing for many crews when using autoflight systems, probably because of subtle differences from the crews' spontaneous attitude when selecting and implementing the flight law in service (speed-on-thrust or speed-on-pitch).
Last, but not least, there is a mechanical effect on knowledge when expanding the number of solutions. The expansion of the performance envelope multiplies the solutions for reaching the same goal at the same level of performance and mechanically reduces the experience of each solution (and the training time devoted to each of them; see Fig. 7.2 and the next section).
Increasing the Time Needed to Establish Efficient Ecological Safety Defenses. The greater the complexity of assistances and automated systems, the greater the time needed to stabilize expertise and self-confidence. The self-confidence model can be described as a three-stage model (Amalberti, 1993; Lee & Moray, 1994), by analogy with the Anderson model of expertise acquisition (Anderson, 1985). The first stage is a cognitive stage and corresponds to the type-rating period. During this period, confidence is based on faith and often obeys an all-or-nothing law. Crews are under- or overconfident. The second stage of expertise is an associative stage and corresponds to the expansion of knowledge through experience. System-exploration behaviors (often termed playing) are directly related to the acquisition of confidence, because this exploration extends the pilot's knowledge beyond what he normally implements, to better understand system limits (and hence his own limits). Confidence is therefore based on the capacity to comprehend the relationship between system architecture and system behavior. Expertise then stabilizes (in general, far from the total possible knowledge that could be learned about the system). This confidence, which was reached in less than 400 hours on the previous generation of transport aircraft, is not stable until after 600 to 1,000 hours of flight time on glass-cockpit aircraft (roughly 2 years of experience on the type), due to the extreme expansion of system capacities and flight situations.
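As a rough illustration of the orders of magnitude quoted above, the three stages can be sketched as a simple classification by flight hours on type; the function, the 100-hour cutoff for the type-rating period, and the exact thresholds are assumptions made only for this sketch.

```python
# Illustrative only: maps hours on type to the stage of the confidence model
# discussed above. The 100-hour cutoff for the type-rating period is an
# assumption; the 400 h and 1,000 h figures are the orders of magnitude
# quoted in the text for the previous generation and for glass cockpits.

def confidence_stage(hours_on_type: float, glass_cockpit: bool) -> str:
    stabilization = 1000 if glass_cockpit else 400
    if hours_on_type < 100:
        return "cognitive stage: faith-based, all-or-nothing confidence"
    if hours_on_type < stabilization:
        return "associative stage: exploration ('playing'), confidence not yet stable"
    return "autonomous stage: expertise stabilized, retracted to an operative subset"

print(confidence_stage(800, glass_cockpit=True))    # still associative on a glass cockpit
print(confidence_stage(800, glass_cockpit=False))   # already autonomous on the old generation
```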
The drawback of the glass cockpit is not only a matter of training length. The final quality of expertise is also changing. With complex and multimode systems, human expertise cannot be systematic, and is therefore built upon local explorations. Even after 1,000 flight hours, pilots do not have a complete and homogeneous comprehension of systems; they become specialists in some subareas, such as some FMS functions, and can almost ignore the neighboring areas. When analyzing carefully what pilots think they know (Valot & Amalberti, 1992), it is clear that this heterogeneous structure of expertise is not mirrored accurately by metaknowledge. Because metaknowledge is at the core of the control of risk taking, it is not surprising that overconfidence and underconfidence are facilitated. Overconfidence occurs when the system is engaged in areas where the pilot discovers too late that he is beyond his depth; lack of confidence results in delayed decision making or refusal to make decisions. The A320 accidents at Habsheim and Bangalore occurred with pilots precisely at this second stage, transitioning onto the glass cockpit with a poor representation not only of the situation but also of their own capacities.
The third stage of expertise corresponds to the autonomous stage of Anderson's model. A retraction to an operative subset of this expertise for daily operations is the last stage of the confidence model. The margins generated by this retraction provide pilots with a good estimate of the level of risk to assign to their own know-how. Again, the heterogeneous structure of expertise provokes some paradoxical retractions. Pilots import from their explorations small personal pieces of procedures that they have found efficient and clever. The problem here is that these solutions are not part of standard training, nor shared by the other crew members. They result in a higher risk of mutual misunderstanding. Last, but not least, it is easy to understand that little practice in manual procedures does not contribute to confidence, even when these procedures are formally known. The less the practice, the greater the retraction phase. Pilots become increasingly hesitant to switch to a manual procedure, and tend to develop active avoidance of these situations whenever they feel uneasy.
These effects are increased by the masking effect of both the training philosophy and the electronic protections. Because of economic pressure and because of operator and manufacturer competition, training tends to be limited to just what is useful and to underestimate the complexity of systems. It is not uncommon for type rating to emphasize the value of the electronic protections in order to showcase the aircraft, and to exclude the learning of areas of technique that are not considered useful or are not used by the company, for example the vertical profile or some other FMS functions. This approach also favors overconfidence, the heterogeneous structure of expertise, and the personal, unshared discovery of systems. The experience of approaching cognitive boundaries increases the efficiency of the warning signals. But in most flying conditions with the glass cockpit, workload is reduced, complexity is hidden by systems, and human errors are much more complex and difficult to detect (feedback is postponed, errors occur at the strategic rather than the execution level), so that operators continue to feel at ease for too long. The cognitive signals associated with approaching boundaries are poorly activated until the sudden moment when all signals arrive brutally and overwhelm cognition (an explosion of workload, misunderstanding, and errors).
Blocking or Hiding Human Errors Could Result in Increased Risks. It is common sense to assume that the fewer the human errors, the better the safety. Many systems and safety policies have the effect of hiding or suppressing human errors, such as fly-by-wire, electronic flight protections, safety procedures, regulations, and defenses. When considered individually, these defenses and protections are extremely efficient. But their multiplication leaves pilots with poor experience of certain errors. We have already seen in the description of the cognitive model that pilots need the experience of error to set up efficient self-defenses and metaknowledge. Not surprisingly, the analysis of glass-cockpit incidents and accidents reveals that most (of the very rare) losses of control occur in situations where the pilot's experience of error has been extremely reduced in the recent past owing to increasing defenses (e.g., noncomprehension of flight protections: Air France B747-400 in Papeete, Tarom A310 in Paris) or in situations where errors are of an absolutely new style (e.g., crews reverting to automatic flight when lost in comprehension: China Airlines A300 in Nagoya, Tarom A310 in Bucharest). To sum up, although it is obvious that human error should not be encouraged, the efficiency of error-reduction techniques (whether they address design, training, or safety policies) is not infinite. Extreme protection against errors results in cognitive disinvestment by the human in the area concerned. Command authority can also be degraded by overly restrictive operator policies and procedures that "hamstring" the commander's authority (Billings, 1995). Cognition is therefore less protected, and room is left for rare but unrecovered errors. Moreover, when errors are blocked, new errors appear (Wiener, 1989).
design, and (e) the need for a different perspective in interpreting and using flight experience data through aviation-reporting systems.
Respect for Human Authority. Human factors is first interested in the role and position given to the end user in the system. Human factors cannot be restricted to a list of local ergonomics principles, such as shapes, colors, or information flow; it refers first to a philosophy of human-machine coupling that gives the central role to the human. For example, imagine a system in which the crew is a simple executor, with a perfect instrument panel allowing total comprehension, but without any possibility of deviating from orders, and with a permanent and close electronic cocoon overriding crew decisions. Such a system should no longer be considered as corresponding to a satisfactory human factors design. The crew cannot have a secondary role, because the members continue to be legally and psychologically responsible for the flight. They must keep the final authority over the system.
Fortunately, crews still have this authority with glass cockpits. But the tendency of technology is clearly to cut down this privilege. Today, some limited systems already override the crew decision in certain circumstances (flight or engine protections). Tomorrow, datalink could override crews in a more effective and frequent manner. After tomorrow, what will come? The goal of the human factors community is to improve the value associated with these systems, not to enter into a debate on the safety balance resulting from these solutions (as usual, we know how many accidents/incidents these systems have caused, but we do not know how many incidents/accidents they have contributed to avoiding). But how far can we go in this direction with crews remaining efficient and responsible in the flight loop and with a satisfactory ethics of human factors? We must acknowledge that human factors has no solutions for an infinite growth of system complexity. We must also acknowledge that the current solutions are not applicable to future complexity. Therefore, if we want the crew to remain in command, either we limit the increase in human-machine performance until research comes up with efficient results preserving the authority of humans, or we accept designing systems made, for example, with two envelopes. The first envelope is controlled by crews. Within this envelope, crews are fully trained to comprehend and operate the system. They share full responsibility, have full authority, and can be blamed for unacceptable consequences of their errors. Outside this first envelope, the system enters into full automatic management. Crews are passengers. Human factors are not concerned, except for the transition from one envelope to the other. Last, of course, ergonomics and human factors only tailor the first envelope.
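A minimal sketch of this two-envelope idea follows; the flight-state variables, the boundary test, and the numbers are hypothetical placeholders, the only point being the allocation of authority and the transition between envelopes.

```python
# Hypothetical sketch of the "two envelopes" proposal described above.
# The state variables, boundary values, and names are invented placeholders.

from dataclasses import dataclass

@dataclass
class FlightState:
    bank_angle_deg: float
    angle_of_attack_deg: float

def within_crew_envelope(state: FlightState) -> bool:
    # Placeholder boundary: inside this region crews are fully trained,
    # fully responsible, and hold final authority.
    return abs(state.bank_angle_deg) < 67.0 and state.angle_of_attack_deg < 15.0

def authority(state: FlightState) -> str:
    if within_crew_envelope(state):
        return "crew: full authority and responsibility"
    # Outside the first envelope the system enters full automatic management;
    # human factors would then only concern the transition back.
    return "automation: full automatic management"

print(authority(FlightState(bank_angle_deg=25.0, angle_of_attack_deg=5.0)))
print(authority(FlightState(bank_angle_deg=80.0, angle_of_attack_deg=22.0)))
```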
Respect for Human Ecology. There is an extensive literature on the design of human-machine interfaces fitting natural, ecological human behavior: for example, ecological interfaces (Flach, 1990; Rasmussen & Vicente, 1989), naturalistic decision making and its implications for design (Klein, Orasanu, Calderwood, & Zsambok, 1993), and ecological safety (Amalberti, 1996, and this chapter). Augmented error tolerance and error visibility are also direct outcomes of such theories. All these approaches plead for a respect for human naturalistic behavior and defenses. Any time a new system or an organization interacts negatively with these defenses, not only are human efficiency and safety reduced to the detriment of the final human-machine performance, but new and unpredicted errors also occur. But one must acknowledge that augmented error tolerance and error visibility are already part of the design process, even though they can still be improved.
The next human factors efforts should focus on fitting not only the surface of displays and commands, but also the inner logic of systems, to human ecological needs. The achievement of this phase calls for a much greater educational effort by design engineers and human factors specialists. How could a design fit the needs of a pilot's cognitive model if there is no shared and accepted cognitive model and no shared comprehension of technical difficulties between technical and life-science actors? This is the reason why reciprocal education (technical education for human factors actors, human factors education for technical actors) is the absolute first priority for the next 10 years.
Traceability of Choices. Human factors asks for better traceability of design choices. This ideally implies using a more human factors-oriented methodology to evaluate system design. Ideally, for each system evaluation, a human factors methodology should ask for the performance measurement of a sample of pilots, test pilots, and line pilots, interacting with the system in a series of realistic operational scenarios. The performance evaluation should in this case rely upon the quality of situation awareness, the workload, and the nature and cause of errors. However, this is pure theory, not reality. The reality is much more uncertain because of the lack of tools for measuring such human factors concepts, and the lack of standards for determining what is acceptable or unacceptable. For example, the certification team is greatly concerned with the consideration to be given to human errors. What does it mean when an error is made by a crew? Should the system be changed, should the error be considered nonsignificant, or should the crew be better trained? What meaning do classic inferential statistics on risk have when the accident rate is less than 1 in 1 million departures, and when most modern electronic system failures will arise only once in the life of an aircraft? Moreover, the existing human factors methodologies are often poorly compatible with the (economic and human) reality of a certification campaign. Urgent (pragmatic) human factors research is therefore required.
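A back-of-the-envelope computation illustrates why classic inferential statistics offer so little grip at these accident rates; the sketch below uses a plain exact Poisson confidence interval and is not meant to represent any certification methodology.

```python
# Illustrative: at a rate of about 1 accident per million departures, even a
# million departures yield roughly one event, and the exact Poisson 95%
# confidence interval on the underlying rate spans about two orders of magnitude.

from scipy.stats import chi2

def poisson_rate_ci(observed: int, exposure: float, conf: float = 0.95):
    """Exact (Garwood) confidence interval for a Poisson rate."""
    alpha = 1.0 - conf
    lower = 0.0 if observed == 0 else chi2.ppf(alpha / 2, 2 * observed) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return lower / exposure, upper / exposure

departures = 1_000_000
low, high = poisson_rate_ci(observed=1, exposure=departures)
print("observed rate : 1.00e-06 per departure")
print(f"95% interval  : {low:.2e} to {high:.2e} per departure")
# -> roughly 2.5e-08 to 5.6e-06: far too wide to judge a single design change.
```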
Need for an Early and Global Systemic Approach During Design. There is a growing need to consider, as early as possible in the design process, the impact of a new system on the global aviation system. A design not only results in a new tool; it always changes the job of operators. And because of the interactions within the aviation system, this change in operators' jobs is not restricted to direct end users, such as pilots, but also concerns the hierarchy, mechanics, dispatchers, air traffic controllers, and the authorities (regulations).
Such a global approach was underconsidered in the introduction of glass cockpits and resulted, for example, in several misfits between traffic guidance strategies and the capabilities of these new aircraft (multiple radar vector guidance, late changes of runway, etc.). The situation is now improving. However, some of the future changes, like the introduction of datalink or the emergence of airplane manufacturers in Asia, are so important and so new that adequate predictive models of how the aviation system will adapt still challenge human factors and the entire aviation community.
Need for a Different Perspective in Interpreting and Using Flight Experience Through Aviation-Reporting Systems. We have seen throughout this chapter that safety policies have long given priority to suppressing all identified human errors by all means (protections, automation, training). This attitude was of great value for enhancing safety while the global aviation system was not yet mature and the accident rate was over one accident per million departures. Nowadays the problem is different, with a change of paradigm. The global aviation system has become extremely safe, and the solutions that were efficient in reaching this level are losing their efficiency.
However, for the moment, the general trend is to continue optimizing the same solutions: asking for more incident reports, detecting more errors, suppressing more errors. The aviation-reporting systems are exploding under the amount of information to be stored and analyzed (over 40,000 files per year in the U.S. ASRS alone), and the suppression of errors often results in new errors occurring. There is an urgent need to reconsider the meaning of errors in a very safe environment and to reconsider the relationship between human error and accident. Such programs are in progress at the Civil Aviation Authority of the United Kingdom (CAA UK) and the French Direction Générale de l'Aviation Civile (DGAC) and could result in different ways of exploiting the data and in new preventive actions.
Barriers to Implementing Solutions. The FAA human factors report (Abbott et al., 1996) listed several generic barriers to implementing new human factors approaches in industry; among them were the cost-effectiveness of these solutions, the maturity of human factors solutions, turf protection, the lack of education, and industry's difficulty with human factors. These are effective barriers, but there are also strong indications that human factors will be given much more consideration in the near future. The need on the part of industry is obvious, with the growing complexity of systems and environment. Also, mentalities have changed and are much more open to new directions.
An incredible window of opportunity is open. The duration of this window will depend on the capacity of human factors specialists to educate industry and to propose viable and consistent solutions. To succeed, it is urgent to turn from a dominantly critical attitude to a constructive one. It is also important to avoid focusing on the lessons of the last war and instead to anticipate future problems, such as the coming of datalink and its cultural outcomes.
CONCLUSION
Several paradoxical and chronic handicaps have slowed down the consideration of human factors in the recent past of aviation. First, and fortunately, technology has proven its high efficiency, improving the performance and safety of the aviation system so much that a deep human factors revisitation of the fundamentals of design was long judged not useful. Second, human factors people themselves served the discipline poorly in the 1970s by presenting most solutions at a surface level. Whatever the value of ergonomics norms and standards, the fundamental knowledge to be considered in modern human factors is based on the results of task analysis and cognitive modeling, and the core question to solve is the final position of the human in command, not the size and color of displays. The European ergonomics school, with Rasmussen, Leplat, Reason, and de Keyser, and some U.S. scientists such as Woods or Hutchins, were early precursors in the 1980s of this change of human factors focus. But the change in orientation is taking much more time for industry, maybe because these modern human factors are less quantitative and more qualitative, and ask for a much deeper investment in psychology than ergonomics recipes did before. Third and last, many people in society think with a linear logic of progress and are greatly reluctant to consider that a successful solution could reach an apogee beyond which further optimization could lead to more drawbacks than
benefits. In other words, human factors problems could be much more sensitive tomorrow than today if the system continues to optimize on the same basis.
In contrast, the instrument is so powerful that it needs some taming. This phase
will only take place through experience, doubtless by changes in the entire aeronautic
system, in jobs, and in roles.
Let us not forget, as a conclusion, that these changes are far from being the only
ones likely to occur in the next twenty years. Another revolution, as important as that
brought about by automation, may well take place when data-link systems supporting
communications between onboard and ground computers control the aircraft's flight path
automatically. But this is another story...in a few years from now, which will certainly
require that a lot of studies be carried out in the field of human factors.
One could also say that the problems are largely exaggerated. It is simply true that
performance has been improved with automation, and that safety is remarkable, even
though better safety is always desirable.