An Early Start to Testing: How to Test Requirements

Abstract
We accept that testing the software is an integral part of building a system.
However, if the software is based on inaccurate requirements, then despite well
written code, the software will be unsatisfactory. The newspapers are full of stories
about catastrophic software failures. What the stories don't say is that most of the
defects can be traced back to wrong, missing, vague or incomplete requirements. We
have learnt the lesson of testing software. Now we have to learn to implement a
system of testing the requirements before building a software solution.

This paper describes a set of requirements tests that cover relevance, coherency, traceability, completeness and other qualities that successful
requirements must have. The tests have their starting point with the criterion that
each requirement has at least one quality measure. This measure is used to test
whether any given solution satisfies, or does not satisfy, the requirement.

Requirements seem to be ephemeral. They flit in and out of projects; they are
capricious, intractable, unpredictable and sometimes invisible. When gathering
requirements we are searching for all of the criteria for a system's success. We throw
out a net and try to capture all these criteria. Using Blitzing, Rapid Application
Development (RAD), Joint Application Development (JAD), Quality Function
Deployment (QFD), interviewing, apprenticing, data analysis and many other
techniques [6], we try to snare all of the requirements in our net.

The Quality Gateway


As soon as we have a single requirement in our net we can start testing. The
aim is to trap requirements-related defects as early as they can be identified. We
prevent incorrect requirements from being incorporated in the design and
implementation where they will be more difficult and expensive to find and correct. [5]

To pass through the quality gateway and be included in the requirements specification, a requirement must pass a number of tests. These tests are concerned
with ensuring that the requirements are accurate, and do not cause problems by
being unsuitable for the design and implementation stages later in the project.

I will discuss each of the following requirements tests in a stand-alone manner. Naturally, the tests are designed to be applied to each of the requirements in unison.

Make The Requirement Measurable


In his work on specifying the requirements for buildings, Christopher Alexander
[1] describes setting up a quality measure for each requirement.

"The idea is for each requirement to have a quality measure that makes it
possible to divide all solutions to the requirement into two classes: those for which we
agree that they fit the requirement and those for which we agree that they do not fit
the requirement."

In other words, if we specify a quality measure for a requirement, we mean that any solution that meets this measure will be acceptable. Of course it is also true to
say that any solution that does not meet the measure will not be acceptable.
The quality measures will be used to test the new system against the
requirements. The remainder of this paper describes how to arrive at a quality
measure that is acceptable to all the stakeholders.

Quantifiable Requirements
Consider a requirement that says "The system must respond quickly to
customer enquiries". First we need to find a property of this requirement that provides
us with a scale for measurement within the context. Let's say that we agree that we
will measure the response using minutes. To find the quality measure we ask: "under
what circumstances would the system fail to meet this requirement?" The
stakeholders review the context of the system and decide that they would consider it
a failure if a customer has to wait longer than three minutes for a response to his
enquiry. Thus "three minutes" becomes the quality measure for this requirement.

Any solution to the requirement is tested against the quality measure. If the
solution makes a customer wait for longer than three minutes then it does not fit the
requirement. So far so good: we have defined a quantifiable quality measure. But
specifying the quality measure is not always so straightforward. What about
requirements that do not have an obvious scale?

Non-quantifiable Requirements
Suppose a requirement is "The automated interfaces of the system must be
easy to learn". There is no obvious measurement scale for "easy to learn". However if
we investigate the meaning of the requirement within the particular context, we can
set communicable limits for measuring the requirement.

Again we can make use of the question: "What is considered a failure to meet
this requirement?" Perhaps the stakeholders agree that there will often be novice
users, and the stakeholders want novices to be productive within half an hour. We can
define the quality measure to say "a novice user must be able to learn to successfully
complete a customer order transaction within 30 minutes of first using the system".
This becomes a quality measure provided a group of experts within this context is
able to test whether the solution does or does not meet the requirement.
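
One way to make such a measure operational is to record the results of a usability trial and check them against the agreed limit. The sketch below is illustrative; the trial figures are invented.

LEARNING_LIMIT_MINUTES = 30

def failures(trial_minutes):
    # Return the trial results that exceed the agreed limit; the solution
    # fits the requirement only if this list is empty.
    return [minutes for minutes in trial_minutes if minutes > LEARNING_LIMIT_MINUTES]

# Hypothetical trial: minutes each novice needed to complete a customer
# order transaction on first use of the system.
trial = [18, 24, 34, 12, 27]
print(failures(trial))   # [34] -> the solution does not yet fit the requirement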

An attempt to define the quality measure for a requirement helps to rationalize fuzzy requirements. Something like "the system must provide good value" is an
example of a requirement that everyone would agree with, but each person has his
own meaning. By investigating the scale that must be used to measure "good value"
we identify the diverse meanings.

Sometimes by causing the stakeholders to think about the requirement we can define an agreed quality measure. In other cases we discover that there is no agreement on a quality measure. Then we replace the vague requirement with several requirements, each with its own quality measure.

Requirements Test 1

Does each requirement have a quality measure that can be used to test
whether any solution meets the requirement?

Keeping Track
Figure 1 is an example of how you can keep track of your knowledge about each
requirement.

Figure 1: This requirements micro spec makes your requirements knowledge visible. It
must be recorded so that it is easy for several people to compare and discuss
individual requirements and to look for duplicates and contradictions.
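
Because the figure is not reproduced here, the sketch below suggests the kind of record such a micro spec might hold. The field names are illustrative assumptions based on the attributes discussed in this paper, not a definitive layout.

from dataclasses import dataclass, field

@dataclass
class RequirementMicroSpec:
    requirement_id: int                 # unique identifier (see Requirements Test 9)
    description: str                    # the requirement in the stakeholders' words
    quality_measure: str                # the agreed fit criterion
    stakeholder_value: int = 0          # penalty + reward (see Stakeholder Value)
    event_use_cases: list[str] = field(default_factory=list)  # traceability links

req_101 = RequirementMicroSpec(
    requirement_id=101,
    description="The system must respond quickly to customer enquiries.",
    quality_measure="Response to a customer enquiry within three minutes.",
)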

By adding a quality measure to each requirement we have made the requirement visible. This is the first step to defining all the criteria for measuring the goodness of
the solution. Now let's look at other aspects of the requirement that we can test
before deciding to include it in the requirements specification.

Coherency and Consistency


When a poet writes a poem he intends that it should trigger off rich and diverse
visions for everyone who reads it. The requirements engineer has the opposite
intention: he would like each requirement to be understood in the same way by every
person who reads it. In practice many requirements specifications are more like
poetry, and are open to any interpretation that seems reasonable to the reader. This
subjectivity means that many systems are built to satisfy the wrong interpretation of
the requirement. The obvious solution to this problem is to specify the requirement in such a way that it can be understood in only one way.

For example, in a requirements specification that I assessed, I found the term "viewer" in many parts of the specification. My analysis identified six different
meanings for the term, depending on the context of its use. This kind of requirements
defect always causes problems during design and/or implementation. If you are lucky,
a developer will realize that there is inconsistency, but will have to re-investigate the
requirement. This almost always causes a ripple effect that extends to other parts of
the product. If you are not lucky, the designer will choose the meaning that makes
most sense to him and implement that one. Any stakeholder who does not agree with
that meaning then considers that the system does not meet the requirement.

Requirements Test 2

Does the specification contain a definition of the meaning of every essential subject matter term within the specification?

I point you in the direction of abstract data modeling principles [7] which
provide many guidelines for naming subject matter and for defining the meaning of
that subject matter. As a result of doing the necessary analysis, the term "viewer"
could be defined as follows:

Viewer

A person who lives in the area which receives transmission of television programmes
from our channel.

Relevant attributes are:

Viewer Name
Viewer Address
Viewer Age Range
Viewer Sex
Viewer Salary Range
Viewer Occupation Type
Viewer Socio-Economic Ranking

When the allowable values for each of the attributes are defined it provides data
that can be used to test the implementation.
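
For illustration, the defined term and its allowable values might be captured as data that the tests can use directly. The particular ranges and categories shown below are assumptions, not part of the actual definition.

VIEWER_ALLOWED_VALUES = {
    "Viewer Age Range": {"0-15", "16-25", "26-45", "46-65", "66+"},
    "Viewer Sex": {"female", "male"},
    "Viewer Socio-Economic Ranking": set(range(1, 11)),   # e.g. a 1-10 ranking
}

def violates_definition(viewer_record):
    # Return the attributes whose values fall outside the allowable values
    # defined for the term "viewer".
    return [attribute
            for attribute, allowed in VIEWER_ALLOWED_VALUES.items()
            if viewer_record.get(attribute) not in allowed]

print(violates_definition({
    "Viewer Age Range": "26-45",
    "Viewer Sex": "female",
    "Viewer Socio-Economic Ranking": 12,   # outside the defined 1-10 ranking
}))   # ['Viewer Socio-Economic Ranking']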

Defining the meaning of "viewer" has addressed one part of the coherency
problem. We also have to be sure that every use of the term "viewer" is consistent
with the meaning that has been defined.

Requirements Test 3

Is every reference to a defined term consistent with its definition?

Completeness
We want to be sure that the requirements specification contains all the
requirements that are known about. While we know that there will be evolutionary
changes and additions, we would like to restrict those changes to new requirements,
and not have to play "catch-up" with requirements that we should have known about
in the first place. Thus we want to avoid omitting requirements just because we did
not think of asking the right questions. If we have set a context [10, 11] for our
project, then we can test whether the context is accurate. We can also test whether
we have considered all the likely requirements within that context.

The context defines the problem that we are trying to solve. The context
includes all the requirements that we must eventually meet: it contains anything that
we have to build, or anything we have to change. Naturally if our software is going to
change the way people do their jobs, then those jobs must be within the context of
study. The most common defect is to limit the context to the part of the system that
will be eventually automated [3]. The result of this restricted view is that nobody
correctly understands the organization’s culture and way of working. Consequently
there is a misfit between the eventual computer system and the rest of the business
system and the people that it is intended to help.

Requirements Test 4

Is the context of the requirements wide enough to cover everything we need to understand?

Of course this is easy to say, but we still have to be able to test whether or not
the context is large enough to include the complete business system, not just the
software. ("Business" in this sense should be means not just a commercial business,
but whatever activity - scientific, engineering, artistic - the organization is doing.) We
do this test by observing the questions asked by the systems analysts: Are they
considering the parts of the system that will be external to the software? Are
questions being asked that relate to people or systems that are shown as being
outside the context? Are any of the interfaces around the boundary of the context
being changed?

Another test for completeness is to question whether we have captured all the
requirements that are currently known. The obstacle is that our source of
requirements is people. And every person views the world differently according to his
own job and his own idea of what is important, or what is wrong with the current
system. It helps to consider the types of requirements that we are searching for:

• Conscious Requirements

Problems that the new system must solve

• Unconscious Requirements

Already solved by the current system

• Undreamed of Requirements

Would be a requirement if we knew it was possible or could imagine it

Conscious requirements are easier to discover because they are uppermost in the stakeholders' minds. Unconscious requirements are more difficult to discover. If a problem is already satisfactorily solved by the current system then it is less likely to be mentioned as a requirement for a new system. Other unconscious requirements are often those relating to legal, governmental and cultural issues. Undreamed of
requirements are even more difficult to discover. These are the ones that surface after
the new system has been in use for a while. "I didn't know that it was possible, otherwise I would have asked for it."

Requirements Test 5

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements?

Requirements engineering experience with other systems helps to discover missing requirements. The idea is to compare your current requirements specification
with specifications for similar systems. For instance, suppose that a previous
specification has a requirement related to the risk of damage to property. It makes
sense to ask whether our current system has any requirements of that type, or
anything similar. It is quite possible, indeed quite probable, to discover unconscious
and undreamed of requirements by looking at other specifications.

We have distilled experience from many projects and built a generic requirements template [12] that can be used to test for missing requirement types. I
urge you to look through the template and use it to stimulate questions about
requirements that otherwise would have been missed. Similarly, you can build your
own template by distilling your own requirements specifications, and thus uncover
most of the questions that need to be asked.
Another aid in discovering unconscious and undreamed of requirements is to
build models and prototypes to show people different views of the requirements. Most
important of all is to remember that each stakeholder is an individual person. Human
communication skills are the best aid to complete requirements [2].

Requirements Test 5 (enlarged)

Have we asked the stakeholders about conscious, unconscious and undreamed of requirements? Can you show that a modelling effort has taken place to discover the unconscious requirements? Can you demonstrate that brainstorming or similar efforts have taken place to find the undreamed of requirements?

Relevance
When we cast out the requirements gathering net and encourage people to tell
us all their requirements, we take a risk. Along with all the requirements that are
relevant to our context we are likely to pick up impostors. These irrelevant
requirements are often the result of a stakeholder not understanding the goals of the
project. In this case people, especially if they have had bad experiences with another
system, are prone to include requirements "just in case we need it". Another reason
for irrelevancy is personal bias. If a stakeholder is particularly interested or affected
by a subject then he might think of it as a requirement even if it is irrelevant to this
system.

Requirements Test 6

Is every requirement in the specification relevant to this system?

To test for relevance, check the requirement against the stated goals for the
system. Does this requirement contribute to those goals? If we exclude this
requirement then will it prevent us from meeting those goals? Is the requirement
concerned with subject matter that is within the context of our study? Are there any
other requirements that are dependent on this requirement? Some irrelevant
requirements are not really requirements; instead, they are solutions.

Requirement or Solution?
When one of your stakeholders tells you he wants a graphic user interface and a
mouse, he is presenting you with a solution not a requirement. He has seen other
systems with graphic user interfaces, and he wants what he considers to be the most
up-to-date solution. Or perhaps he thinks that designing the system is part of his role.
Or maybe he has a real requirement that he has mentally solved by use of a graphic
interface. When solutions are mistaken for requirements then the real requirement is
often missed. Also the eventual solution is not as good as it could be because the
designer is not free to consider all possible ways of meeting the requirements.

Requirements Test 7

Does the specification contain solutions posturing as requirements?

It is not always easy to tell the difference between a requirement and a solution.
Sometimes there is a piece of technology within the context and the stakeholders
have stated that the new system must use this technology. Things like: "the new
system must be written in COBOL because that is the only language our programmers
know", "the new system must use the existing warehouse layout because we don't
want to make structural changes" are really requirements because they are genuine
constraints that exist within the context of the problem.

For each requirement ask "Why is this a requirement?" Is it there because of a genuine constraint? Is it there because it is needed? Or is it the solution to a perceived
problem? If the "requirement" includes a piece of technology, and it could be
implemented by another technology, then unless the specified technology is a
genuine constraint, the "requirement" is really a solution.

Stakeholder Value
There are two factors that affect the value that stakeholders place on a requirement: the grumpiness that is caused by bad performance, and the happiness that is caused by good performance. Failure to provide a perfect solution to some
requirements will produce mild annoyance. Failure to meet other requirements will
cause the whole system to be a failure. If we understand the value that the
stakeholders put on each requirement, we can use that information to determine
design priorities.

Requirements Test 8

Is the stakeholder value defined for each requirement?

Pardee [9] suggests that we use scales from 1 to 5 to specify the reward for
good performance and the penalty for bad performance. If a requirement is absolutely
vital to the success of the system then it has a penalty of 5 and a reward of 5. A
requirement that would be nice to have but is not really vital might have a penalty of
1 and a reward of 3. The overall value or importance that the stakeholders place on a
requirement is the sum of penalty and reward: in the first case a value of 10, in the second a value of 4.
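
For illustration, this scoring can be captured directly. The sketch below assumes Pardee's 1-to-5 scales as described above; the example requirements are invented.

def stakeholder_value(penalty, reward):
    # Both scores are on a 1-to-5 scale; the overall value is their sum.
    assert 1 <= penalty <= 5 and 1 <= reward <= 5
    return penalty + reward

requirements = [
    ("Respond to a customer enquiry within three minutes", stakeholder_value(5, 5)),
    ("Offer a choice of report colour schemes", stakeholder_value(1, 3)),
]

# Sort by value to inform prioritization and trade-off decisions during design.
for description, value in sorted(requirements, key=lambda pair: pair[1], reverse=True):
    print(value, description)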

The point of defining stakeholder value is to discover how the stakeholders really feel about the requirements. We can use this knowledge to make prioritization
and trade-off decisions when the time comes to design the system.

Traceability
We want to be able to prove that the system that we build meets each one of
the specified requirements. We need to identify each requirement so that we can
trace its progress through detailed analysis, design and eventual implementation.
Each stage of system development shapes, repartitions and organizes the
requirements to bring them closer to the form of the new system. To insure against
loss or corruption, we need to be able to map the original requirements to the solution
for testing purposes.

Requirements Test 9

Is each requirement uniquely identifiable?


In the micro spec in Figure 1 we see that each requirement must have a unique
identifier. We find the best way of doing this is simply to assign a number to each
requirement. The only significance of the number is that it is that requirement's
identifier. We have seen schemes where the requirements are numbered according to
type or value or whatever, but these make it difficult to manage changes. It is far
better to avoid hybrid numbering schemes and to use the number purely as an
identifier. Other facts about the requirement are then recorded as part of the
requirements micro spec.
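
As a small illustration of this choice, identifiers can be issued from a single plain sequence, with every classifying fact kept in the micro spec rather than encoded in the number. The sketch below is merely one way to do it.

import itertools

next_identifier = itertools.count(start=1)

def new_requirement_id():
    # The number carries no meaning other than being the requirement's
    # identifier; type, value and other facts belong in the micro spec.
    return next(next_identifier)

print(new_requirement_id())   # 1
print(new_requirement_id())   # 2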

Order In A Disorderly World


We have considered each requirement as a separately identifiable, measurable
entity. Now we need to consider the connections between requirements and to
understand the effect of one requirement on others. This means we need a way of
dealing with a large number of requirements and the complex connections between
them. Rather than trying to tackle everything simultaneously, we need a way of
dividing the requirements into manageable groups. Once that is done we can consider
the connections in two phases: the internal connections between the requirements in
each group; and then the connections between the groups. It reduces the complexity
of the task if our grouping of requirements is done in a way that minimizes the
connections between the groups.

Events or use cases provide us with a convenient way of grouping the requirements [8, 4, 11]. The event/use case is a happening that causes the system to
respond. The system's response is to satisfy all of the requirements that are
connected to that event/use case. In other words, if we could string together all the
requirements that respond to one event/use case, we would have a mini-system
responding to that event. By grouping the requirements that respond to an event/use
case, we arrive at groups with strong internal connections. Moreover, the events/use
cases within our context provide us with a very natural way of collecting our
requirements.

Figure 2: The event/use case provides a natural grouping for keeping track of the
relationships between requirements.

Figure 2 illustrates the relationships between requirements. The event/use case is a collection of all the requirements that respond to the same happening. The n-to-n relationship between Event/Use Case and Requirement indicates that while a number of Requirements are needed to fulfil one Event/Use Case, any Requirement could also contribute to other Events/Use Cases. The model also shows us that one Requirement might have more than one Potential Solution but it will only have one Chosen Solution.

The event/use case provides us with a number of small, minimally-connected systems. We can use the event/use case partitioning throughout the development of
the system. We can analyse the requirements for one event/use case, design the
solution for the event/use case and implement the solution. Each requirement has a
unique identifier. Each event/use case has a name and number. We keep track of
which requirements are connected to which events/use cases using a requirements
tool or spreadsheet. If there is a change to a requirement we can identify all the parts
of the system that are affected.
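
For illustration, this cross-reference between events/use cases and requirement identifiers can be kept as simple data (a spreadsheet export would serve equally well). The names and numbers below are invented.

EVENT_REQUIREMENTS = {
    "UC-01 Record customer enquiry": {101, 102, 107},
    "UC-02 Produce viewer report": {102, 115},
    "UC-03 Schedule transmission": {107, 120, 121},
}

def affected_events(requirement_id):
    # If a requirement changes, list every event/use case (and hence every
    # part of the system) where the change has an effect.
    return [event for event, requirement_ids in EVENT_REQUIREMENTS.items()
            if requirement_id in requirement_ids]

print(affected_events(107))   # ['UC-01 Record customer enquiry', 'UC-03 Schedule transmission']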

Requirements Test 10

Is each requirement tagged to all parts of the system where it is used? For
any change to requirements, can you identify all parts of the system where
this change has an effect?

Conclusions
The requirements specification must contain all the requirements that are to be satisfied by our system. The specification should objectively specify everything our
system must do and the conditions under which it must perform. Management of the
number and complexity of the requirements is one part of the task.

The most challenging aspect of requirements gathering is communicating with the people who are supplying the requirements. If we have a consistent way of recording requirements we make it possible for the stakeholders to participate in the requirements process. As soon as we make a requirement visible we can start testing it and asking the stakeholders detailed questions. We can apply a variety of tests to
ensure that each requirement is relevant, and that everyone has the same
understanding of its meaning. We can ask the stakeholders to define the relative
value of requirements. We can define a quality measure for each requirement, and we
can use that quality measure to test the eventual solutions.

Testing starts at the beginning of the project, not at the end of the coding. We
apply tests to assure the quality of the requirements. Then the later stages of the
project can concentrate on testing for good design and good code. The advantages of
this approach are that we minimize expensive rework by minimizing requirements-
related defects that could have been discovered, or prevented, early in the project's
life.

References
1. Christopher Alexander. Notes On The Synthesis Of Form. Harvard University Press, Massachusetts, 1964.

2. Donald Gause and Gerald Weinberg. Exploring Requirements. Dorset House, New York, 1989.

3. Michael Jackson. Software Requirements and Specifications. Addison-Wesley, London, 1995.

4. Ivar Jacobson. Object-Oriented Software Engineering. Addison-Wesley, 1992.

5. Capers Jones. Assessment and Control of Software Risks. Prentice Hall, 1994.

6. Neil Maiden and Gordon Rugg. Acre: selecting methods for requirements acquisition. Software Engineering Journal, May 1996.

7. Sally Shlaer and Steve Mellor. Object-Oriented Systems Analysis: Modeling the World in Data. Prentice Hall, New Jersey, 1988.

8. Steve McMenamin and John Palmer. Essential Systems Analysis. Yourdon Press, New York, 1984.

9. William J. Pardee. How To Satisfy & Delight Your Customer. Dorset House, New York, 1996.

10. James Robertson. On Setting the Context. The Atlantic Systems Guild, 1996.

11. James and Suzanne Robertson. Complete Systems Analysis: the Workbook, the Textbook, the Answers. Dorset House, New York, 1994.

12. James and Suzanne Robertson. Requirements Template. The Atlantic Systems Guild, London, 1996.
