
MODULE 1

08.801 SOFTWARE ENGINEERING AND PROJECT MANAGEMENT


QUESTION BANK
S8 CSE









Prepared By
Divya V
AP & HOD, CSE


Contents

1. Introduction to software engineering
2. Scope of software engineering
3. Software engineering: A layered technology
4. Software process models
5. Capability maturity model (CMM)
6. ISO 9000
7. Phases in Software development
8. Requirement analysis
9. Requirements elicitation for software
10. Analysis principles
11. Software prototyping
12. Specification


















Introduction to software engineering
In order to develop a software product, user needs and constraints must be determined and explicitly stated;
the product must be designed to accommodate implementers, users, and maintainers; the source code must be
carefully implemented and thoroughly tested; and supporting documents must be prepared. Software maintenance
tasks include analysis of change requests, redesign and modification of the source code, thorough testing of the
modified code, updating of documents to reflect the changes, and distribution of the modified work products to the
appropriate users. The need for systematic approaches to the development and maintenance of software products became
apparent in the 1960s. Much software developed at that time was subject to cost overruns, schedule slippage, lack of
reliability, inefficiency, and lack of user acceptance. As computer systems became larger and more complex, it became
apparent that the demand for computer software was growing faster than our ability to produce and maintain it.
Hence the need for software engineering.
Software engineering is a discipline whose aim is the production of fault-free software, delivered on time
and within budget, that satisfies the client's needs. Furthermore, the software must be easy to modify when the
user's needs change. Software engineering is the application of a systematic, disciplined, quantifiable approach to
the development, operation, and maintenance of software; that is, the application of engineering to software.
Scope of software engineering
The scope of software engineering is extremely broad. Some aspects of software engineering can be
categorized as mathematics or computer science; other aspects fall into the areas of economics, management, or
psychology. To display the wide-reaching realm of software engineering, we now examine five different aspects.
1. Historical Aspects:
Software engineering should use the philosophies and paradigms of established engineering disciplines to solve
what has been termed the software crisis, namely, that the quality of software generally was unacceptably low and that
deadlines and budgets were not being met. Despite many software success stories, an unacceptably large proportion
of software products is still being delivered late, over budget, and with residual faults. This can be summarized in
the following figure.


Only 35 percent of the projects were successfully completed, whereas 19 percent were canceled before completion
or were never implemented. The remaining 46 percent of the projects were completed and installed on the client's
computer. However, those projects were over budget, late, or had fewer features and less functionality than initially
specified. In other words, during 2006, just over one in three software development projects was successful; almost
half the projects displayed one or more symptoms of the software crisis.
It is clear that far too little software is delivered on time, within budget, fault free, and meeting its client's
needs. To achieve these goals, a software engineer has to acquire a broad range of skills, both technical and
managerial. These skills have to be applied not just to programming but to every step of software production, from
requirements to post delivery maintenance.
2. Economic Aspects:
A software organization currently using coding technique CT old discovers that new coding technique CT new would
result in code being produced in only nine-tenths of the time needed by CT old and, hence, at nine-tenths the cost.
Common sense seems to dictate that CT new is the appropriate technique to use. In fact, although common sense
certainly dictates that the faster technique is the technique of choice, the economics of software engineering may
imply the opposite.
One reason is the cost of introducing new technology into an organization. The fact that coding is 10 percent
faster when technique CT new is used may be less important than the costs incurred in introducing CT new into
the organization. It may be necessary to complete two or three projects before recouping the cost of training.
Also, while attending courses on CT new, software personnel are unable to do productive work. Even when they
return, a steep learning curve may be involved; it may take many months of practice with CT new before
software professionals become as proficient with CT new as they currently are with CT old. Therefore, initial
projects using CT new may take far longer to complete than if the organization had continued to use CT old. All
these costs need to be taken into account when deciding whether to change to CT new.
A second reason why the economics of software engineering may dictate that CT old be retained is the
maintenance consequence. Coding technique CT new indeed may be 10 percent faster than CT old, and the
resulting code may be of comparable quality from the viewpoint of satisfying the clients current needs. But the
use of technique CT new may result in code that is difficult to maintain, making the cost of CT new higher over
the life of the product. Of course, if the software developer is not responsible for any post delivery maintenance,
then, from the viewpoint of just that developer, CT new is a more attractive proposition. After all, the use of CT
new would cost 10 percent less. The client should insist that technique CT old be used and pay the higher initial
costs with the expectation that the total lifetime cost of the software will be lower. Unfortunately, often the sole
aim of both the client and the software provider is to produce code as quickly as possible. The long-term effects
of using a particular technique generally are ignored in the interests of short-term gain. Applying economic
principles to software engineering requires the client to choose techniques that reduce long-term costs.
This example deals with coding, which constitutes less than 10 percent of the software development effort. The
economic principles, however, apply to all other aspects of software production as well.
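The trade-off described above can be sketched numerically. In the following sketch, every figure other than the 10 percent coding saving (the training cost, the maintenance base, and the maintenance multiplier) is an illustrative assumption, not data from the text:

```python
# Hypothetical lifetime-cost comparison between an existing coding technique
# (CT old) and a faster replacement (CT new). All numbers below except the
# 10% coding saving are assumptions chosen only to illustrate the argument.

def lifetime_cost(coding_cost, intro_cost, maintenance_factor, maintenance_base):
    """Total cost = coding + one-time introduction cost + lifetime maintenance."""
    return coding_cost + intro_cost + maintenance_factor * maintenance_base

# CT old: 100 units of coding, no introduction cost, baseline maintenance.
ct_old = lifetime_cost(coding_cost=100, intro_cost=0,
                       maintenance_factor=1.0, maintenance_base=200)

# CT new: codes 10% faster, but assume 30 units of training/learning-curve
# cost and code that is 25% more expensive to maintain over the product life.
ct_new = lifetime_cost(coding_cost=90, intro_cost=30,
                       maintenance_factor=1.25, maintenance_base=200)

print(ct_old)  # 300.0
print(ct_new)  # 370.0 -> the "faster" technique costs more over the lifetime
```

Under these assumed figures, the nominally cheaper technique is the more expensive one once introduction and maintenance costs are counted, which is exactly the point of the example.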
3. Maintenance Aspects:

Maintenance is described within the context of the software life cycle.
Life-cycle model: A description of the steps performed when building a software product. There are many
different models. A life-cycle model is broken into a series of smaller steps called phases, so it is easier to
perform a sequence of smaller tasks. The number of phases varies from model to model.
Life cycle of a product: The actual series of steps performed on the software product, from concept
exploration through final retirement. Phases of the life cycle may not be carried out as specified.
Although there are many variations, a software product goes through six phases:
(1) Requirements phase
* Concept is explored and refined
* Client's requirements are elicited
(2) Analysis (specifications) phase
* Client's requirements are analyzed and presented in the form of the specification document (what
the product is supposed to do)
* A Software Project Management Plan is drawn up, describing the proposed development in detail
(3) Design phase
* Two consecutive processes of architectural design (product is broken down into modules) and
detailed design (each module is designed)
* Resulting in two design documents (how the product does it)
(4) Implementation phase
* Various components undergo coding and testing (unit testing)
* The components are combined and tested as a whole (integration)
* When the developers are satisfied that the product functions correctly, it is tested by the client
(acceptance testing)
* Ends when the product is accepted by the client and installed on the client's computer
(5) Post delivery maintenance
* All changes to product after delivery
* Includes corrective maintenance (software repair): removal of residual faults while leaving the
specifications unchanged
* Enhancements (software updates): changes to the specifications and the implementation of those
changes
* Two types of enhancements are perfective (changes the client thinks will improve the
effectiveness of the product, such as additional functionality or decreased response time) and adaptive (changes
made in response to changes in the environment, such as new hardware/operating system or new government
regulations)
(6) Retirement
* Product removed from service
* Functionality provided no longer of use to client


Classical and Modern Views of Maintenance
In the 1970s, software production was viewed as two distinct activities: development followed by maintenance
Described as the development-then-maintenance model
A temporal definition (an activity is classified depending on when it is performed)
The development-then-maintenance model is unrealistic today
Construction of a product can take a year or more, during which the client's requirements may change
Developers have to perform maintenance before the product is installed
Developers try to reuse parts of existing software products
More realistic:
Maintenance is the process that occurs when software undergoes modifications to code and associated
documentation due to a problem or the need for improvement or adaptation
An operational definition: irrespective of when the activity takes place
Definition adopted by the IEEE: post delivery maintenance is a subset of modern maintenance
The Importance of Post delivery Maintenance
* A software product is a model of the real world
* Real world is perpetually changing
* Software has to be maintained constantly
* How much time (money) is devoted to post delivery maintenance?
* Some 40 years ago, approximately two-thirds of total software costs
* Newer data show a larger proportion (70% to 80%)

Average cost percentages of the classical development phases have not changed much




4. Requirements, Analysis, and Design Aspects
Cost of correcting a fault increases steeply throughout the phases
The earlier one corrects a fault, the better
A fault in requirements may also appear in the specifications, the design, and the code
Studies have shown that between 60% and 70% of all faults detected in large projects are requirements,
analysis, or design faults. It is therefore important to improve requirements, analysis, and design techniques.
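The steep growth of fix costs across phases can be illustrated with a small sketch. The section gives no figures, so the multipliers below are assumptions in the spirit of commonly cited industry estimates, not data from the text:

```python
# Illustrative cost of fixing one fault depending on the phase in which it is
# detected. The multipliers are assumptions chosen only to show the shape of
# the escalation, not measured values.

relative_fix_cost = {
    "requirements": 1,
    "analysis": 2,
    "design": 5,
    "implementation": 10,
    "post-delivery maintenance": 100,
}

base_cost = 50  # assumed cost of a requirements-phase fix, in arbitrary units
for phase, factor in relative_fix_cost.items():
    print(f"{phase}: {base_cost * factor}")
```

Whatever the exact multipliers, the ordering is the point: a fault caught during requirements is far cheaper to remove than the same fault caught after delivery.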



5. Team Development Aspects
Most software is produced by a team of software engineers
Team development leads to interface problems among code components and communication problems
among team members
Unless the team is properly organized, an inordinate amount of time can be wasted in conferences between
team members
The scope of software engineering must include techniques for ensuring that teams are properly organized
and managed
Software engineering: A layered technology
Software engineering is performed by creative, knowledgeable people who should work within a defined
and mature software process. The IEEE [IEE93] has developed a more comprehensive definition when it states:
Software engineering is the application of a systematic, disciplined, quantifiable approach to the development,
operation, and maintenance of software; that is, the application of engineering to software.
Process, Methods, and Tools
Software engineering is a layered technology. Any engineering approach must rest on an organizational commitment
to quality. The bedrock that supports software engineering is a focus on quality.

The foundation for software engineering is the process layer. Software engineering process is the glue that
holds the technology layers together and enables rational and timely development of computer software. Process
defines a framework for a set of key process areas that must be established for effective delivery of software
engineering technology. The key process areas form the basis for management control of software projects and
establish the context in which technical methods are applied, work products are produced, milestones are
established, quality is ensured, and change is properly managed.

Software engineering methods provide the technical "how to's" for building software. Methods encompass
a broad array of tasks that include requirements analysis, design, program construction, testing, and maintenance.
Software engineering tools provide automated or semi-automated support for the process and the methods.
When tools are integrated so that information created by one tool can be used by another, a system for the support of
software development, called computer-aided software engineering, is established.
The Software Process
A software process can be characterized as shown in figure. A common process framework is established
by defining a small number of framework activities that are applicable to all software projects, regardless of their
size or complexity. A number of task sets (each a collection of software engineering work tasks), project milestones,
software work products and deliverables, and quality assurance points enable the framework activities to be adapted
to the characteristics of the software project and the requirements of the project team. Finally umbrella activities
such as software quality assurance, software configuration management, and measurement overlay the process
model. Umbrella activities are independent of any one framework activity and occur throughout the process.


Software Process Models
To solve actual problems in an industry setting, a software engineer or a team of engineers must
incorporate a development strategy that encompasses the process, methods, and tools layers and the generic phases.

This strategy is often referred to as a process model or a software engineering paradigm. A process model for
software engineering is chosen based on the nature of the project and application, the methods and tools to be used,
and the controls and deliverables that are required.
All software development can be characterized as a problem-solving loop in which four distinct stages are
encountered: status quo, problem definition, technical development, and solution integration.

Status quo represents the current state of affairs; problem definition identifies the specific problem to be
solved; technical development solves the problem through the application of some technology; and solution
integration delivers the results to those who requested the solution in the first place.
Linear Sequential Model
The figure given below illustrates the linear sequential model for software engineering. Sometimes called
the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential
approach to software development that begins at the system level and progresses through analysis, design, coding,
testing, and maintenance.

The principal stages of the waterfall model directly reflect the fundamental development activities:

1. System feasibility: The project begins with a feasibility study. In this phase, it is determined whether the
development of the product is technically feasible. Based on the study, a feasibility report is generated.
2. Requirements analysis and project planning: Both functional requirements and non-functional requirements are
specified in this phase. Functional requirements involve the services to be performed by the system. The non-functional
requirements involve the constraints and goals to be satisfied by the system. They are then defined in
detail and serve as a requirements specification document. A project plan is also prepared in this phase. The project
plan specifies a set of activities to be done for developing a good quality product.
3. System design: The systems design process establishes an overall system architecture. This phase identifies the
major components of the system based on the requirements specified in the requirements document. At the end of
this phase, a system design document is generated.
4. Detailed design: The detailed design process identifies the subcomponents and the relationships among different
components of the system. It also specifies the detailed architecture of each component. The algorithm used for
implementing each component and the input and output of each component are specified in this phase. At the end of this
phase, a detailed design document is generated.
5. Coding: During this stage, the software design is realized as a set of programs or program units. Unit testing is
also carried out in this phase. In unit test, each module is tested for its functionality.
6. Testing and Integration: The individual program units or programs are integrated and tested as a complete system
to ensure that the software requirements have been met and the modules are working correctly at integration also.
After testing, the software system is delivered to the customer.
7. Installation: The software system is installed on the customer's system and then run using real data.
Acceptance testing is done in this phase. Acceptance testing means testing whether the system runs properly
using real data.
8. Operation and maintenance: Normally (although not necessarily), this is the longest life cycle phase. The system
is installed and put into practical use. Maintenance involves correcting errors which were not discovered in earlier
stages of the life cycle (corrective maintenance), changing the settings to adapt to the existing environment,
improving the implementation of system units, and enhancing the system's services as new requirements are
discovered (adaptive maintenance).
In principle, the result of each phase is one or more documents that are approved (signed off). The
following phase should not start until the previous phase has finished. In practice, these stages overlap and feed
information to each other. During design, problems with requirements are identified. During coding, design
problems are found and so on. The software process is not a simple linear model but involves feedback from one
phase to another. Documents produced in each phase may then have to be modified to reflect the changes made.
During the final life cycle phase (operation and maintenance) the software is put into use. Errors and
omissions in the original software requirements are discovered. Program and design errors emerge and the need for
new functionality is identified. The system must therefore evolve to remain useful. Making these changes (software
maintenance) may involve repeating previous process stages.

There are a number of intermediate outputs that must be produced along the way to a successful product. The
following set of documents should be produced in each project:
Requirements document
System design document
Detailed design document
Test plan and test reports
Final code
Software manuals
Review reports
Limitations of the Waterfall Model:
a. The waterfall model assumes that the requirements of a system can be frozen before the design begins. For
new systems, determining the requirements is difficult, as the user does not even know the requirements.
b. Freezing the requirements usually requires choosing the hardware. A large project may require years to
complete. If the hardware is selected early, it is likely that the final software will use a hardware
technology on the verge of becoming obsolete.
c. The waterfall model stipulates that the requirements be completely specified before the rest of the
development can proceed. In some situations it might be desirable to first develop a part of the system
completely and then later enhance the system in phases.
d. It is a document driven process that requires formal documents at the end of each phase. This approach
tends to make the process documentation heavy and is not suitable for many applications, particularly
interactive applications.

Prototyping model
The goal of a prototyping-based development process is to counter the first limitation of the waterfall
model. The basic idea here is that instead of freezing the requirements before any design or coding can proceed, a
throwaway prototype is built to help understand the requirements. This prototype is developed based on the
currently known requirements. Development of the prototype obviously undergoes design, coding, and testing, but
each of these phases is not done very formally or thoroughly. By using this prototype, the client can get an actual
feel of the system, which can enable the client to better understand the requirements of the desired system. This
results in more stable requirements that change less frequently.
Prototyping is an attractive idea for complicated and large systems for which there is no manual process or
existing system to help determine the requirements. In such situations, letting the client play with the prototype
provides invaluable and intangible inputs that help determine the requirements for the system. It is also an effective
method of demonstrating the feasibility of a certain approach. The process model of the prototyping approach is
shown in Figure.



The development of the prototype typically starts when the preliminary version of the requirements
specification document has been developed. At this stage, there is a reasonable understanding of the system and its
needs, and of which needs are unclear or likely to change. After the prototype has been developed, the end users and
clients are given an opportunity to use and explore the prototype. Based on their experience, they provide feedback
to the developers regarding the prototype: what is correct, what needs to be modified, what is missing, what is not
needed, etc. Based on the feedback, the prototype is modified to incorporate some of the suggested changes that can
be done easily, and then the users and the clients are again allowed to use the system.
This cycle repeats until, in the judgment of the prototype developers and analysts, the benefit from further
changing the system and obtaining feedback is outweighed by the cost and time involved in making the changes and
obtaining the feedback. Based on the feedback, the initial requirements are modified to produce the final
requirements specification, which is then used to develop the production quality system.
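The stopping rule of this cycle — iterate until further changes cost more than the feedback they buy — can be sketched in a few lines. The benefit estimates and cost figure below are illustrative assumptions only:

```python
# Minimal sketch of the prototype feedback cycle: keep running feedback
# rounds while the estimated benefit of the next round exceeds its cost.
# The benefit list and per-round cost are assumed numbers, not real data.

def refine_prototype(benefits, cost_per_iteration):
    """Return how many feedback rounds are worth running."""
    iterations = 0
    for benefit in benefits:            # estimated benefit of each next round
        if benefit <= cost_per_iteration:
            break                       # stop: cost now outweighs the benefit
        iterations += 1
    return iterations

# Diminishing returns from successive rounds of client feedback:
print(refine_prototype([50, 30, 15, 8, 4], cost_per_iteration=10))  # 3
```

With diminishing returns, the cycle terminates naturally: once a round's expected benefit drops below its cost, the prototype is frozen and the final requirements specification is written.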
For prototyping for the purposes of requirement analysis to be feasible, its cost must be kept low.
Consequently, only those features are included in the prototype that will have a valuable return from the user
experience. Exception handling, recovery, and conformance to some standards and formats are typically not
included in prototypes.
In prototyping, as the prototype is to be discarded, there is no point in implementing those parts of the
requirements that are already well understood. Hence, the focus of the development is to include those features that
are not properly understood. The development approach is "quick and dirty," with the focus on quick
development rather than quality. Because the prototype is to be thrown away, only minimal documentation needs to
be produced during prototyping. For example, design documents, a test plan, and a test case specification are not
needed during the development of the prototype. Another important cost-cutting measure is to reduce testing.
Because testing consumes a major part of development expenditure during regular software development, this has a
considerable impact in reducing costs.
By using these types of cost-cutting methods, it is possible to keep the cost of the prototype to less than a
few percent of the total development cost. And the returns from this extra cost can be substantial. First, the
experience of developing the prototype will reduce the cost of the actual software development. Second, as
requirements will be more stable now due to the feedback from the prototype, there will be fewer changes in the
requirements. Consequently the costs incurred due to changes in the requirements will be substantially reduced.
Third, the quality of the final software is likely to be far superior, as the experience the engineers have obtained while
developing the prototype will enable them to create a better design, write better code, and do better testing. And
finally, developing a prototype mitigates many risks that exist in a project where requirements are not well known.
Overall, prototyping is well suited for projects where requirements are hard to determine and the
confidence in the stated requirements is low. In such projects where requirements are not properly understood in the
beginning, using the prototyping process model can be the most effective method for developing the software. It is
also an excellent technique for reducing some types of risks associated with a project.

Iterative Development/incremental model
The iterative development process model counters the third and fourth limitations of the waterfall model
and tries to combine the benefits of both prototyping and the waterfall model. The basic idea is that the software
should be developed in increments, each increment adding some functional capability to the system until the full
system is implemented.
In the first step of this model, a simple initial implementation is done for a subset of the overall problem.
This subset is one that contains some of the key aspects of the problem that are easy to understand and implement
and which form a useful and usable system. A project control list is created that contains, in order, all the tasks that
must be performed to obtain the final implementation. This project control list gives an idea of how far along the
project is at any given step from the final system. Each step consists of removing the next task from the list,
designing the implementation for the selected task, coding and testing the implementation, performing an analysis of
the partial system obtained after this step, and updating the list as a result of the analysis. These three phases are
called the design phase, implementation phase, and analysis phase. The process is iterated until the project control
list is empty, at which time the final implementation of the system will be available. The iterative enhancement
model is shown in Figure.


The project control list guides the iteration steps and keeps track of all tasks that must be done. Based on
the analysis, one of the tasks in the list can include redesign of defective components or redesign of the entire
system. However, redesign of the system will generally occur only in the initial steps. In the later steps, the design
would have stabilized and there is less chance of redesign. Each entry in the list is a task that should be performed in
one step of the iterative enhancement process and should be simple enough to be completely understood. Selecting
tasks in this manner will minimize the chances of error and reduce the redesign work. The design and
implementation phases of each step can be performed in a top-down manner or by using some other technique.
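The loop above — select a task, design and implement it, analyze the partial system, update the list — can be sketched as follows. The task names and the rule by which analysis adds a redesign task are illustrative assumptions, not part of the model's definition:

```python
# Sketch of the iterative enhancement loop driven by a project control list.
# Each step: remove the next task, design and implement it, then analyze the
# partial system and update the list with any newly discovered tasks.

def iterate(project_control_list, analyse):
    """Run steps until the project control list is empty; return tasks done."""
    completed = []
    while project_control_list:
        task = project_control_list.pop(0)        # select the next task
        # design phase and implementation phase for the selected task
        # (represented here simply by recording the task as completed)
        completed.append(task)
        # analysis phase: examine the partial system, possibly adding tasks
        project_control_list.extend(analyse(task))
    return completed

# Hypothetical analysis rule: one redesign task is discovered after "basic UI".
tasks = ["core data model", "basic UI", "reporting"]
done = iterate(tasks, lambda t: ["redesign navigation"] if t == "basic UI" else [])
print(done)  # ['core data model', 'basic UI', 'reporting', 'redesign navigation']
```

Note how the list both drives the iteration and records its state: when analysis finds a defect, a redesign task simply joins the queue, which matches the model's claim that redesign is just another entry in the project control list.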

Spiral Model
Here, the software process is represented as a spiral, rather than a sequence of activities with some
backtracking from one activity to another. Each loop in the spiral represents a phase of the software process. Thus,
the innermost loop might be concerned with system feasibility, the next loop with requirements definition, the next
loop with system design, and so on. The spiral model combines change avoidance with change tolerance. It assumes
that changes are a result of project risks and includes explicit risk management activities to reduce these risks.

Each loop in the spiral is split into six sectors:
1. Customer Communication: tasks required to establish effective communication between developer and
customer. Specific objectives for that phase of the project are defined.
2. Planning: Constraints on the process and the product are identified and a detailed management plan is drawn
up. Project risks are identified.
3. Risk analysis: For each of the identified project risks, a detailed analysis is carried out. Steps are taken to reduce
the risk. For example, if there is a risk that the requirements are inappropriate, a prototype system may be
developed. Alternative strategies, depending on these risks, may be planned.
4. Engineering: After risk evaluation, a development model for the system is chosen. This sector identifies the tasks
required to build one or more representations of the application. For example, throwaway prototyping may be the best
development approach if user interface risks are dominant. If safety risks are the main consideration,
development based on formal transformations may be the most appropriate process, and so on. If the main
identified risk is sub-system integration, the waterfall model may be the best development model to use.
5. Construction and release: Identifies the tasks required to construct, test, and install the product of the phase,
and to provide user support (e.g., documentation and training).
6. Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software
representations created during the engineering stage and implemented during the installation stage. The project
is reviewed and a decision made whether to continue with a further loop of the spiral. If it is decided to
continue, plans are drawn up for the next phase of the project.

The main difference between the spiral model and other software process models is its explicit recognition
of risk. A cycle of the spiral begins by elaborating objectives such as performance and functionality. Alternative
ways of achieving these objectives, and dealing with the constraints on each of them, are then enumerated. Each
alternative is assessed against each objective and sources of project risk are identified. The next step is to resolve
these risks by information-gathering activities such as more detailed analysis, prototyping, and simulation.
Once risks have been assessed, some development is carried out, followed by a planning activity for the
next phase of the process. Informally, risk simply means something that can go wrong. For example, if the intention
is to use a new programming language, a risk is that the available compilers are unreliable or do not produce
sufficiently efficient object code. Risks lead to proposed software changes and project problems such as schedule
and cost overrun, so risk minimization is a very important project management activity.

WINWIN Spiral Model
The spiral model discussed above suggests a framework activity that addresses customer communication.
The objective of this activity is to elicit project requirements from the customer. In an ideal context, the developer
simply asks the customer what is required and the customer provides sufficient detail to proceed. Unfortunately, this
rarely happens. In reality, the customer and the developer enter into a process of negotiation, where the customer
may be asked to balance functionality, performance, and other product or system characteristics against cost and
time to market.
The best negotiations strive for a "win-win" result. That is, the customer wins by getting the system or
product that satisfies the majority of the customer's needs, and the developer wins by working to realistic and
achievable budgets and deadlines.




WINWIN spiral model defines a set of negotiation activities at the beginning of each pass around the spiral.
Rather than a single customer communication activity, the following activities are defined:
1. Identification of the system's or subsystem's key stakeholders.
2. Determination of the stakeholders' win conditions.
3. Negotiation of the stakeholders' win conditions to reconcile them into a set of win-win conditions for all
concerned (including the software project team).
Successful completion of these initial steps achieves a win-win result, which becomes the key criterion for
proceeding to software and system definition.
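The three negotiation activities above can be sketched as simple data transformations. This is a minimal, hypothetical Python sketch; the WINWIN model prescribes a process, not code, and every function and field name here is invented for illustration:

```python
# Hypothetical sketch of the three WINWIN negotiation activities
# (illustrative only; the model defines a process, not an implementation).

def determine_win_conditions(stakeholders):
    # Activity 2: record each key stakeholder's win conditions.
    return {s["name"]: s["win_conditions"] for s in stakeholders}

def negotiate_win_win(win_conditions, conflicting):
    # Activity 3: reconcile all conditions into one win-win set. For this
    # sketch, conditions flagged as conflicting are simply dropped; in a
    # real negotiation they would be renegotiated, not discarded.
    agreed = set()
    for conditions in win_conditions.values():
        agreed |= conditions
    return agreed - conflicting

# Activity 1 (identifying stakeholders) is represented by this input data.
stakeholders = [
    {"name": "customer", "win_conditions": {"full feature set", "low cost"}},
    {"name": "dev team", "win_conditions": {"realistic schedule", "low cost"}},
]
```

A successful pass through `negotiate_win_win` would correspond to the win-win result that lets the project proceed to system definition.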
In addition to the emphasis placed on early negotiation, the WINWIN spiral model introduces three process
milestones, called anchor points that help to establish the completion of one cycle around the spiral and provide
decision milestones before the software project proceeds. In essence, the anchor points represent three different
views of progress as the project traverses the spiral. The first anchor point, life cycle objectives (LCO), defines a set
of objectives for each major software engineering activity. For example, as part of LCO, a set of objectives
establishes the definition of top-level system/product requirements. The second anchor point, life cycle architecture
(LCA), establishes objectives that must be met as the system and software architecture is defined. For example, as
part of LCA, the software project team must demonstrate that it has evaluated the applicability of off-the-shelf and
reusable software components and considered their impact on architectural decisions. Initial operational capability
(IOC) is the third anchor point and represents a set of objectives associated with the preparation of the software for
installation/distribution, site preparation prior to installation, and assistance required by all parties that will use or
support the software.
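Because the anchor points are ordered decision milestones, they can be thought of as gates that must be passed in sequence. A minimal sketch (the helper name and gate logic are assumptions made for illustration):

```python
# Hypothetical sketch: the three anchor points as ordered decision gates.
ANCHOR_POINTS = ["LCO", "LCA", "IOC"]

def may_attempt(target, completed):
    # A project may attempt an anchor point only after every earlier
    # anchor point has been passed.
    earlier = ANCHOR_POINTS[:ANCHOR_POINTS.index(target)]
    return all(point in completed for point in earlier)
```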

Capability Maturity Model (CMM)
A capability maturity model such as the SE-CMM describes the stages through which processes progress as
they are defined, implemented, and improved. The model provides a guide for selecting process improvement
strategies by determining the current capabilities of specific processes and identifying the issues most critical to
quality and process improvement within a particular domain, such as software engineering or systems engineering.
A capability maturity model (CMM) may take the form of a reference model to be used as a guide for developing
and improving a mature, defined process. A CMM may also be used to appraise the existence and
institutionalization of a defined process that implements the referenced practices.
The Process Maturity Framework
In many organizations, projects often run excessively late and cost double the planned budget. In such instances,
the organization frequently is not providing the infrastructure and support necessary to help projects avoid these
problems. Even in undisciplined organizations, however, some individual software projects produce excellent
results. When such projects succeed, it is generally through the heroic efforts of a dedicated team, rather than
through repeating the proven methods of an organization with a mature software process. In the absence of an
organization-wide software process, repeating results depends entirely on having the same individuals available for
the next project. Success that rests solely on the availability of specific individuals provides no basis for long-term
productivity and quality improvement throughout an organization. Continuous improvement can occur only through
focused and sustained effort towards building a process infrastructure of effective software engineering and
management practices.
Immature Versus Mature Software Organizations
Setting sensible goals for process improvement requires an understanding of the difference between
immature and mature software organizations. In an immature software organization, software processes are
generally improvised by practitioners and their management during the course of the project. Even if a software
process has been specified, it is not rigorously followed or enforced. The immature software organization is
reactionary, and managers are usually focused on solving immediate crises (better known as fire fighting). Schedules
and budgets are routinely exceeded because they are not based on realistic estimates. When hard deadlines are
imposed, product functionality and quality are often compromised to meet the schedule.
In an immature organization, there is no objective basis for judging product quality or for solving product
or process problems. Therefore, product quality is difficult to predict. Activities intended to enhance quality, such as
reviews and testing, are often curtailed or eliminated when projects fall behind schedule.
On the other hand, a mature software organization possesses an organization-wide ability for managing
software development and maintenance processes. The software process is accurately communicated to both
existing staff and new employees, and work activities are carried out according to the planned process. The
processes mandated are fit for use and consistent with the way the work actually gets done. These defined processes
are updated when necessary, and improvements are developed through controlled pilot-tests and/or cost benefit
analyses. Roles and responsibilities within the defined process are clear throughout the project and across the
organization.
In a mature organization, managers monitor the quality of the software products and customer satisfaction.
There is an objective, quantitative basis for judging product quality and analyzing problems with the product and
process. Schedules and budgets are based on historical performance and are realistic; the expected results for cost,
schedule, functionality, and quality of the product are usually achieved. In general, a disciplined process is
consistently followed because all of the participants understand the value of doing so, and the necessary
infrastructure exists to support the process.
Capitalizing on these observations about immature and mature software organizations requires construction
of a software process maturity framework. This framework describes an evolutionary path from ad hoc, chaotic
processes to mature, disciplined software processes. Without this framework, improvement programs may prove
ineffective because the necessary foundation for supporting successive improvements has not been established. The
software process maturity framework emerges from integrating the concepts of software process, software process
capability, software process performance, and software process maturity.
Fundamental Concepts Underlying Process Maturity
A software process can be defined as a set of activities, methods, practices, and transformations that people
use to develop and maintain software and the associated products (e.g., project plans, design documents, code, test
cases, and user manuals). As an organization matures, the software process becomes better defined and more
consistently implemented throughout the organization.
Software process capability describes the range of expected results that can be achieved by following a
software process. The software process capability of an organization provides one means of predicting the most
likely outcomes to be expected from the next software project the organization undertakes.
Software process performance represents the actual results achieved by following a software process. Thus,
software process performance focuses on the results achieved, while software process capability focuses on results
expected. Based on the attributes of a specific project and the context within which it is conducted, the actual
performance of the project may not reflect the full process capability of the organization; i.e., the capability of the
project is constrained by its environment. For instance, radical changes in the application or technology undertaken
may place a project's staff on a learning curve that causes their project's capability, and performance, to fall short of
the organization's full process capability.
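The distinction between capability (the range of expected results) and performance (the actual result) can be illustrated with a deliberately naive sketch, assuming capability is estimated from historical project results:

```python
def process_capability(past_results):
    # Capability: the range of results expected from the process, estimated
    # here (very naively) as the range of historical project results.
    return min(past_results), max(past_results)

def performance_within_capability(actual_result, past_results):
    # Performance is the result a project actually achieves; a constrained
    # project may fall outside the organization's full capability.
    low, high = process_capability(past_results)
    return low <= actual_result <= high
```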
Software process maturity is the extent to which a specific process is explicitly defined, managed,
measured, controlled, and effective. Maturity implies a potential for growth in capability and indicates both the
richness of an organization's software process and the consistency with which it is applied in projects throughout the
organization. The software process is well-understood throughout a mature organization, usually through
documentation and training, and the process is continually being monitored and improved by its users. The
capability of a mature software process is known. Software process maturity implies that the productivity and
quality resulting from an organization's software process can be improved over time through consistent gains in the
discipline achieved by using its software process. As a software organization gains in software process maturity, it
institutionalizes its software process via policies, standards, and organizational structures.
Institutionalization entails building an infrastructure and a corporate culture that supports the methods,
practices, and procedures of the business so that they endure after those who originally defined them have gone.
Overview of the Capability Maturity Model
The Capability Maturity Model for Software provides software organizations with guidance on how to gain
control of their processes for developing and maintaining software and how to evolve toward a culture of software
engineering and management excellence. The CMM was designed to guide software organizations in selecting
process improvement strategies by determining current process maturity and identifying the few issues most critical
to software quality and process improvement. By focusing on a limited set of activities and working aggressively to
achieve them, an organization can steadily improve its organization-wide software process to enable continuous and
lasting gains in software process capability.
The staged structure of the CMM is based on principles of product quality. These principles have been
adapted by the SEI into a maturity framework that establishes a project management and engineering foundation for
quantitative control of the software process, which is the basis for continuous process improvement. The maturity
framework describes five evolutionary stages in adopting quality practices.
The Five Levels of Software Process Maturity
Continuous process improvement is based on many small, evolutionary steps rather than revolutionary
innovations. The CMM provides a framework for organizing these evolutionary steps into five maturity levels that
lay successive foundations for continuous process improvement. These five maturity levels define an ordinal scale
for measuring the maturity of an organization's software process and for evaluating its software process capability.
The levels also help an organization prioritize its improvement efforts.
A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. Each
maturity level provides a layer in the foundation for continuous process improvement. Each level comprises a set of
process goals that, when satisfied, stabilize an important component of the software process. Achieving each level of
the maturity framework establishes a different component in the software process, resulting in an increase in the
process capability of the organization. Organizing the CMM into the five levels shown in Figure prioritizes
improvement actions for increasing software process maturity. The labeled arrows in Figure indicate the type of
process capability being institutionalized by the organization at each step of the maturity framework.


The following characterizations of the five maturity levels highlight the primary process changes made at each level:
1) Initial: The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are
defined, and success depends on individual effort.
2) Repeatable: Basic project management processes are established to track cost, schedule, and functionality. The
necessary process discipline is in place to repeat earlier successes on projects with similar applications.
3) Defined: The software process for both management and engineering activities is documented, standardized, and
integrated into a standard software process for the organization. All projects use an approved, tailored version of the
organization's standard software process for developing and maintaining software.
4) Managed: Detailed measures of the software process and product quality are collected. Both the software process
and products are quantitatively understood and controlled.
5) Optimizing: Continuous process improvement is enabled by quantitative feedback from the process and from
piloting innovative ideas and technologies.
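Since the five levels form an ordinal scale, they map naturally onto a small lookup; this sketch is illustrative (the level names come from the CMM, the helper function is an assumption):

```python
# The five maturity levels as an ordinal scale.
CMM_LEVELS = {1: "Initial", 2: "Repeatable", 3: "Defined",
              4: "Managed", 5: "Optimizing"}

def next_target(current_level):
    # Improvement proceeds one level at a time; there is no level above 5,
    # where the focus shifts to continuous process improvement itself.
    return min(current_level + 1, 5)
```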
Behavioral Characterization of the Maturity Levels
Maturity Levels 2 through 5 can be characterized through the activities performed by the organization to
establish or improve the software process, by activities performed on each project, and by the resulting process
capability across projects. A behavioral characterization of Level 1 is included to establish a base of comparison for
process improvements at higher maturity levels.
Level 1 - The Initial Level
At the Initial Level, the organization typically does not provide a stable environment for developing and
maintaining software. When an organization lacks sound management practices, the benefits of good software
engineering practices are undermined by ineffective planning and reaction-driven commitment systems. During a
crisis, projects typically abandon planned procedures and revert to coding and testing. Success depends entirely on
having an exceptional manager and a seasoned and effective software team. Occasionally, capable and forceful
software managers can withstand the pressures to take shortcuts in the software process; but when they leave the
project, their stabilizing influence leaves with them. Even a strong engineering process cannot overcome the
instability created by the absence of sound management practices. The software process capability of Level 1
organizations is unpredictable because the software process is constantly changed or modified as the work
progresses (i.e., the process is ad hoc). Schedules, budgets, functionality, and product quality are generally
unpredictable. Performance depends on the capabilities of individuals and varies with their innate skills, knowledge,
and motivations. There are few stable software processes in evidence, and performance can be predicted only by
individual rather than organizational capability.
Level 2 - The Repeatable Level
At the Repeatable Level, policies for managing a software project and procedures to implement those
policies are established. Planning and managing new projects is based on experience with similar projects. An
objective in achieving Level 2 is to institutionalize effective management processes for software projects, which
allow organizations to repeat successful practices developed on earlier projects, although the specific processes
implemented by the projects may differ. An effective process can be characterized as practiced, documented,
enforced, trained, measured, and able to improve. Projects in Level 2 organizations have installed basic software
management controls. Realistic project commitments are based on the results observed on previous projects and on
the requirements of the current project. The software managers for a project track software costs, schedules, and
functionality; problems in meeting commitments are identified when they arise. Software requirements and the work
products developed to satisfy them are baselined, and their integrity is controlled. Software project standards are
defined, and the organization ensures they are faithfully followed. The software project works with its
subcontractors, if any, to establish a strong customer-supplier relationship. The software process capability of Level
2 organizations can be summarized as disciplined because planning and tracking of the software project is stable and
earlier successes can be repeated. The project's process is under the effective control of a project management
system, following realistic plans based on the performance of previous projects.
Level 3 - The Defined Level
At the Defined Level, the standard process for developing and maintaining software across the organization
is documented, including both software engineering and management processes, and these processes are integrated
into a coherent whole. Processes established at Level 3 are used (and changed, as appropriate) to help the software
managers and technical staff perform more effectively. The organization exploits effective software engineering
practices when standardizing its software processes. There is a group that is responsible for the organization's
software process activities, e.g., a software engineering process group, or SEPG. An organization-wide training
program is implemented to ensure that the staff and managers have the knowledge and skills required to fulfill their
assigned roles.
Projects tailor the organization's standard software process to develop their own defined software process,
which accounts for the unique characteristics of the project. This tailored process is referred to in the CMM as the
project's defined software process. A defined software process contains a coherent, integrated set of well-defined
software engineering and management processes. A well-defined process can be characterized as including
readiness criteria, inputs, standards and procedures for performing the work, verification mechanisms (such as peer
reviews), outputs, and completion criteria. Because the software process is well defined, management has good
insight into technical progress on all projects. The software process capability of Level 3 organizations can be
summarized as standard and consistent because both software engineering and management activities are stable and
repeatable. Within established product lines, cost, schedule, and functionality are under control, and software quality
is tracked. This process capability is based on a common, organization-wide understanding of the activities, roles,
and responsibilities in a defined software process.
Level 4 - The Managed Level
At the Managed Level, the organization sets quantitative quality goals for both software products and
processes. Productivity and quality are measured for important software process activities across all projects as part
of an organizational measurement program. An organization-wide software process database is used to collect and
analyze the data available from the projects' defined software processes. Software processes are instrumented with
well-defined and consistent measurements at Level 4. These measurements establish the quantitative foundation for
evaluating the projects' software processes and products. Projects achieve control over their products and processes
by narrowing the variation in their process performance to fall within acceptable quantitative boundaries.
Meaningful variations in process performance can be distinguished from random variation (noise), particularly
within established product lines. The risks involved in moving up the learning curve of a new application domain are
known and carefully managed. The software process capability of Level 4 organizations can be summarized as
predictable because the process is measured and operates within measurable limits. This level of process capability
allows an organization to predict trends in process and product quality within the quantitative bounds of these limits.
When these limits are exceeded, action is taken to correct the situation. Software products are of predictably high
quality.
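The idea of keeping process performance within quantitative limits, and distinguishing meaningful variation from noise, can be illustrated with a simplified control-chart check (mean plus or minus k sigma). This is a sketch of the general statistical idea, not a prescription from the CMM:

```python
import statistics

def control_limits(samples, k=3):
    # Simplified control chart: results outside mean +/- k*sigma are treated
    # as meaningful variation rather than random noise.
    mean = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    return mean - k * sigma, mean + k * sigma

def needs_corrective_action(samples, new_result, k=3):
    # When the limits are exceeded, action is taken to correct the situation.
    low, high = control_limits(samples, k)
    return not (low <= new_result <= high)
```

For example, if weekly defect rates have historically hovered around 5 to 7, a sudden rate of 12 would fall outside the limits and trigger corrective action.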
Level 5 - The Optimizing Level
At the Optimizing Level, the entire organization is focused on continuous process improvement. The
organization has the means to identify weaknesses and strengthen the process proactively, with the goal of
preventing the occurrence of defects. Data on the effectiveness of the software process is used to perform cost
benefit analyses of new technologies and proposed changes to the organization's software process. Innovations that
exploit the best software engineering practices are identified and transferred throughout the organization. Software
project teams in Level 5 organizations analyze defects to determine their causes. Software processes are evaluated to
prevent known types of defects from recurring, and lessons learned are disseminated to other projects. The software
process capability of Level 5 organizations can be characterized as continuously improving because Level 5
organizations are continuously striving to improve the range of their process capability, thereby improving the
process performance of their projects. Improvement occurs both by incremental advancements in the existing
process and by innovations using new technologies and methods.
THE ISO 9000 QUALITY STANDARDS
A Quality assurance system may be defined as the organizational structure, responsibilities, procedures,
processes, and resources for implementing quality management. ISO 9000 describes quality assurance elements in
generic terms that can be applied to any business regardless of the products or services offered. To become
registered to one of the quality assurance system models contained in ISO 9000, a company's quality system and
operations are scrutinized by third party auditors for compliance to the standard and for effective operation. Upon
successful registration, a company is issued a certificate from a registration body represented by the auditors.
Semiannual surveillance audits ensure continued compliance to the standard.
ISO 9000 SERIES OF STANDARDS
The ISO 9000 Series of standards is generic in scope. The five standards of this series are described briefly in the
following paragraphs.
ISO 9000, "Quality Management and Quality Assurance Standards - Guidelines for Selection and Use,"
explains fundamental quality concepts, defines key terms, and provides guidelines for selecting, using, and tailoring
the ISO 9001, 9002, and 9003 standards. It is the road map for the entire series.
ISO 9001, "Quality Systems - Model for Quality Assurance in Design, Development, Production, Installation
and Servicing," is the most comprehensive standard in the series. It contains 20 elements covering the need for an
effective quality system, from the receipt of a contract through the design/development stage and finally the service
required after delivery.
ISO 9002, "Quality Systems - Model for Quality Assurance in Production, Installation and Servicing,"
addresses the prevention, detection, and correction of problems during production and installation. It is for the use of
organizations that are not involved in design. This standard addresses 19 of the 20 elements covered in the 9001
standard.
ISO 9003, "Quality Systems - Model for Quality Assurance in Final Inspection and Test," is the least
comprehensive of the standards, covering 16 of the 20 elements in 9001.
ISO 9004-1, "Quality Management and Quality System Elements - Guidelines," provides guidance for a supplier
to use in developing and implementing a quality system and in determining the extent to which each quality system
element is applicable.
ISO/QS 9000 Elements
1. Management Responsibility
Three major topics are addressed here: the quality policy, responsibility and authority, and
management review. The quality policy statement should be a short statement that defines the
organization's objectives for and commitment to quality. It should be written in easy-to-understand
language. Responsibility and authority must be defined for all personnel affecting quality.
2. The quality System
This element requires the establishment and maintenance of a documented quality system. It
describes the levels of the documentation, such as policies, procedures and work instruction.
3. Contract Review
A review of contracts or purchase orders is required by the standard. This review should answer
three questions. First, are the requirements of the contract clearly defined? Second, are there any unusual
quality requirements? And finally, does the organization have the capability to meet the requirements?
4. Design Control
The general requirement of design control is the establishment and maintenance of procedures
to control and verify that product design meets specified requirements and is aligned with the contract
review.
5. Document and Data Control
This element requires that procedures and a master list be established and maintained to control all
documents and data that affect the quality of a product or service.
6. Purchasing
The general requirement of this element is to establish and maintain documented procedures to
ensure that purchased materials or products will conform to specified requirements. To meet this
requirement, the procurement specifications must clearly describe the material, product or service being
ordered.
7. Control of Customer-Supplied Product
There are times when the customer may also supply the raw material or component parts used in
the production of a product. Because the organization does not own these items, it must take precautions to
ensure the identification and segregation of them from any similar organization-owned item.
8. Product Identification and Traceability
Where appropriate, methods shall be established for the identification of products during all stages
of production, delivery, and installation. This identification can range from the use of lot or
batch numbers on smaller, less critical items to the application of serial numbers on critical items and
finished goods.
9. Process Control
Controlling the processes used to produce a product or provide a service is the best way of
preventing problems and nonconformities.
10. Inspection and Testing
This element addresses three areas: receiving, in-process, and final inspection. The methods of test
and inspection must be documented and records maintained as stated in the quality plan.
11. Control of Inspection, Measuring and Test Equipment
Inspection and testing methods are of little value if the equipment used for these procedures is not accurate
and functioning properly. This element requires the control, calibration, and maintenance of all equipment
used to ensure product quality, whether the equipment belongs to the organization, is on loan from a
customer, or is owned by an employee.
12. Inspection and Test Status
A product's condition must be identified throughout the production, installation, and servicing of the
product. It also must relate to the written control plan. The status should indicate whether the product has
been (1) inspected and accepted, (2) inspected and rejected, (3) inspected and on hold for a decision as to
accept or reject, or (4) not yet inspected.
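The four statuses named in this element form a small, closed set, which can be sketched as follows (the helper function and its shipping rule are assumptions for illustration, not part of the standard):

```python
# Hypothetical sketch of element 12: the four inspection/test statuses.
INSPECTION_STATUSES = {
    1: "inspected and accepted",
    2: "inspected and rejected",
    3: "inspected, on hold pending an accept/reject decision",
    4: "not yet inspected",
}

def may_proceed_to_delivery(status_code):
    # Only an accepted product (status 1) may continue toward delivery.
    return status_code == 1
```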
13. Control of Nonconforming product
When nonconforming product is identified, it must be removed immediately from further
processing, clearly marked, and segregated in a manner to preclude any possible use until its disposition is
decided.

14. Corrective and Preventive Action
Documented procedures must be in place for implementing corrective and preventive actions.
Corrective action begins with the detection of any suspected nonconformance and ends in taking the
appropriate action to correct the deficiency and prevent its recurrence.
15. Handling, Storage, Packaging, Preservation and Delivery
These activities take place throughout the manufacturing process. Incoming material, in-process
material, and finished product must be handled in a manner that will ensure their protection from damage
and deterioration.
16. Control of quality Records
Quality records are used to demonstrate the achievement of required quality and verify the
effective and economical operation of the quality system.
17. Internal Quality Audits
The purpose of this element is to ensure that the quality system is working according to plan and
to provide opportunities for improvement. It is an important tool for the management review process.
Internal audits must be performed for all organization activities and should be conducted by personnel who
are independent of the activities being audited.
18. Training
The requirement for training is very general, leaving the decision of appropriate training to the
organization involved. Most programs include training in plant safety, the quality system, basic statistical
concepts and technical skills.
19. Servicing
This element simply requires procedures to be in place for performing after-delivery services on the
product and verifying, through documentation, that the servicing meets the contract's specified
requirements.
20. Statistical Techniques
Statistical techniques must be implemented whenever they are suitable and practical for the
improvement and/or control of process or product quality.
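As a small example of such a technique, a Pareto-style ranking of defect causes directs improvement effort to the most frequent causes first. This is an illustrative sketch; the standard does not mandate any particular technique:

```python
from collections import Counter

def pareto_ranking(defect_causes):
    # Rank defect causes by frequency so that improvement effort is spent
    # on the most common causes first (a basic statistical technique).
    return Counter(defect_causes).most_common()
```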
Phased Development Process:
A development process consists of various phases, each phase ending with a defined output. The phases are
performed in an order specified by the process model being followed. The main reason for having a phased process
is that it breaks the problem of developing software into successfully performing a set of phases, each handling a
different concern of software development. This ensures that the cost of development is lower than what it would
have been if the whole problem were tackled together. It also helps in proper checking for quality and progress at
some defined points during the development.
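The idea that each phase ends with a defined output, giving checkpoints for quality and progress, can be sketched as a simple pipeline (the phase names follow the discussion below; the code itself is a hypothetical illustration):

```python
# Hypothetical sketch: each phase ends with a defined work product, so
# quality and progress can be checked at each phase boundary.
PHASES = [
    ("requirement analysis", "SRS"),
    ("design", "design document"),
    ("coding", "source code"),
    ("testing", "test report"),
]

def run_process():
    outputs = []
    for phase_name, output in PHASES:
        # A real process performs the phase's work here; this sketch only
        # records the defined output that marks the end of each phase.
        outputs.append(output)
    return outputs
```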
Requirement Analysis: Requirement analysis is done in order to understand the problem the software system is to
solve. Why do we need requirement analysis? It helps in identifying what is needed from the system, not how the
system will achieve its goals. The developer has to satisfy the client's needs. The requirement phase ends with a
document describing all the requirements. The goal of this phase is to develop a SRS. The two major activities in
this phase: problem understanding or analysis and requirement specification.
Software design: The purpose of the design phase is to plan a solution to the problem specified by the requirements
document. It specifies how to satisfy the needs. The output of this phase is the design document. The design activity
is divided into two separate phases: system design and detailed design. System design, also called top-level
design, aims to identify the modules that should be in the system, the specifications of these modules, and how they
interact with each other to produce the desired results. During detailed design, the internal logic of each of the
modules specified in system design is decided. During this phase, further details of the data structures and
algorithmic design of each of the modules are specified.
Coding: The goal of the coding phase is to translate the design of the system into code in a given programming
language. The aim of this phase is to implement the design in the best possible manner. The coding phase affects
both testing and maintenance profoundly. Well-written code can reduce the testing and maintenance effort. Hence,
during coding the focus should be on developing programs that are easy to read and understand, and not simply on
developing programs that are easy to write. Simplicity and clarity should be strived for during the coding phase. An
important concept that helps the understandability of programs is structured programming.
Testing: Testing is the major quality control measure used during software development. Its basic function is to
detect errors in the software. Testing not only has to uncover errors introduced during coding, but also errors
introduced during the previous phases. Different levels of testing are used:
-The starting point of testing is unit testing. In this, a module is tested separately, often by the programmer
alongside the coding of the module. The purpose is to exercise the different parts of the
module code to detect coding errors.
-After this, the modules are gradually integrated into subsystems, which are then integrated to eventually form the
entire system. During integration of modules, integration testing is performed to detect design errors by focusing on
testing the interconnection between modules.
-After the system is put together, system testing is performed. Here the system is tested against the system
requirements to see if all the requirements are met and if the system performs as specified by the requirements.
- Finally, acceptance testing is performed to demonstrate the operation of the system to the client, on the client's
real-life data.
The testing process starts with a test plan that identifies all the testing related activities that must be performed and
specifies the schedule, allocates the resources, and specifies guidelines for testing. The test plan specifies conditions
that should be tested, different units to be tested, and the manner in which modules will be integrated. Then for
different test units, a test case specification document is produced which lists all the different test cases, together
with expected outputs. The final outputs of this phase are the error report and the test report.
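The unit-testing level described above can be sketched in code. A minimal sketch follows; the module under test (a hypothetical `apply_discount` function) and its expected values are invented for illustration, not taken from any particular system:

```python
# Unit test for a hypothetical pricing module (names and values are illustrative).

def apply_discount(price, rate):
    """Return price reduced by a fractional discount rate (e.g., 0.10 = 10%)."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

def test_apply_discount():
    # Normal case: 10% off 200.00 is 180.00.
    assert apply_discount(200.00, 0.10) == 180.00
    # Boundary cases: no discount and full discount.
    assert apply_discount(50.00, 0.0) == 50.00
    assert apply_discount(50.00, 1.0) == 0.00
    # Invalid input should be rejected, not silently accepted.
    try:
        apply_discount(50.00, 1.5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for rate > 1")

test_apply_discount()
print("all unit tests passed")
```

Each assertion corresponds to one test case in a test case specification document: the input, the action, and the expected output.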
REQUIREMENTS ANALYSIS
Requirements analysis is a software engineering task that bridges the gap between system level
requirements engineering and software design.
Requirements analysis allows the software engineer (sometimes called analyst in this role) to refine the
software allocation and build models of the data, functional, and behavioral domains that will be treated by software.
Requirements analysis provides the software designer with a representation of information, function, and behavior
that can be translated to data, architectural, interface, and component-level designs. Finally, the requirements
specification provides the developer and the customer with the means to assess quality, once the software is built.
Software requirements analysis may be divided into five areas of effort: (1) problem recognition, (2)
evaluation and synthesis, (3) modeling, (4) specification, and (5) review. Initially, the analyst studies the System
Specification (if one exists) and the Software Project Plan. It is important to understand software in a system context
and to review the software scope that was used to generate planning estimates. Next, communication for analysis
must be established so that problem recognition is ensured. The goal is recognition of the basic problem elements as
perceived by the customer/users.
Problem evaluation and solution synthesis is the next major area of effort for analysis. The analyst must
define all externally observable data objects, evaluate the flow and content of information, define and elaborate all
software functions, understand software behavior in the context of events that affect the system, establish system
interface characteristics, and uncover additional design constraints. Each of these tasks serves to describe the
problem so that an overall approach or solution may be synthesized. Once problems have been identified, the analyst
determines what information is to be produced by the new system and what data will be provided to the system.
Upon evaluating current problems and desired information (input and output), the analyst begins to synthesize one
or more solutions. To begin, the data objects, processing functions, and behavior of the system are defined in detail.
Once this information has been established, basic architectures for implementation are considered.
The process of evaluation and synthesis continues until both analyst and customer feel confident that
software can be adequately specified for subsequent development steps. Throughout evaluation and solution
synthesis, the analyst's primary focus is on "what," not "how." What data does the system produce and consume,
what functions must the system perform, what behaviors does the system exhibit, what interfaces are defined and
what constraints apply?
During the evaluation and solution synthesis activity, the analyst creates models of the system in an effort
to better understand data and control flow, functional processing, operational behavior, and information content. The
model serves as a foundation for software design and as the basis for the creation of specifications for the software.
REQUIREMENTS ELICITATION FOR SOFTWARE
Before requirements can be analyzed, modeled, or specified they must be gathered through an elicitation
process. A customer has a problem that may be amenable to a computer-based solution. A developer responds to the
customer's request for help. Communication has begun. The most commonly used requirements elicitation technique
is to conduct a meeting or interview. The first meeting between a software engineer (the analyst) and the customer
is often an awkward one. Neither person knows what to say or ask; both are worried that what they do say
will be misinterpreted; both are thinking about where it might lead (both likely have radically different expectations
here); both want to get the thing over with, but at the same time, both want it to be a success.
Informal Meeting
The analyst starts by asking three sets of context-free questions, that is, questions that will lead to a
basic understanding of the problem, the people who want a solution, the nature of the solution that is desired, and the
effectiveness of the first encounter itself. The first set of context-free questions focuses on the customer, the overall
goals, and the benefits. For example, the analyst might ask:
- Who is behind the request for this work?
- Who will use the solution?
- What will be the economic benefit of a successful solution?
- Is there another source for the solution that you need?
These questions help to identify all stakeholders who will have interest in the software to be built. In addition,
the questions identify the measurable benefit of a successful implementation and possible alternatives to custom
software development.
The next set of questions enables the analyst to gain a better understanding of the problem and the customer to
voice his or her perceptions about a solution:
- How would you characterize "good" output that would be generated by a successful solution?
- What problem(s) will this solution address?
- Can you show me (or describe) the environment in which the solution will be used?
- Will special performance issues or constraints affect the way the solution is approached?
The final set of questions focuses on the effectiveness of the meeting.
- Are you the right person to answer these questions?
- Are your answers "official"?
- Are my questions relevant to the problem that you have?
- Am I asking too many questions?
- Can anyone else provide additional information?
- Should I be asking you anything else?
These questions (and others) will help to "break the ice" and initiate the communication that is essential to
successful analysis. But a question and answer meeting format is not an approach that has been overwhelmingly
successful. In fact, the Q&A session should be used for the first encounter only and then replaced by a meeting
format that combines elements of problem solving, negotiation, and specification.
Facilitated Application Specification Techniques
Too often, customers and software engineers have an unconscious "us and them" mind-set. Rather than
working as a team to identify and refine requirements, each constituency defines its own "territory" and
communicates through a series of memos, formal position papers, documents, and question and answer sessions.
History has shown that this approach doesn't work very well. Misunderstandings abound, important information is
omitted, and a successful working relationship is never established.
It is with these problems in mind that a number of independent investigators have developed a team-
oriented approach to requirements gathering that is applied during early stages of analysis and specification. Called
facilitated application specification techniques (FAST), this approach encourages the creation of a joint team of
customers and developers who work together to identify the problem, propose elements of the solution, negotiate
different approaches and specify a preliminary set of solution requirements. FAST has been used predominantly by
the information systems community, but the technique offers potential for improved communication in applications
of all kinds.
Many different approaches to FAST have been proposed. Each makes use of a slightly different scenario,
but all apply some variation on the following basic guidelines:
- A meeting is conducted at a neutral site and attended by both software engineers and customers.
- Rules for preparation and participation are established.
- An agenda is suggested that is formal enough to cover all important points but informal enough to encourage the
free flow of ideas.
- A "facilitator" (a customer, a developer, or an outsider) controls the meeting.
- A "definition mechanism" (work sheets, flip charts, wall stickers, or an electronic bulletin board, chat room, or
virtual forum) is used.
- The goal is to identify the problem, propose elements of the solution, negotiate different approaches, and specify a
preliminary set of solution requirements in an atmosphere that is conducive to the accomplishment of the goal.
To better understand the flow of events as they occur in a typical FAST meeting, we present a brief
scenario that outlines the sequence of events that lead up to the meeting, occur during the meeting, and follow the
meeting. Initial meetings between the developer and customer occur and basic questions and answers help to
establish the scope of the problem and the overall perception of a solution. Out of these initial meetings, the
developer and customer write a one- or two-page "product request." A meeting place, time, and date for FAST are
selected and a facilitator is chosen. Attendees from both the development and customer/user organizations are
invited to attend. The product request is distributed to all attendees before the meeting date.
While reviewing the request in the days before the meeting, each FAST attendee is asked to make a list of
objects that are part of the environment that surrounds the system, other objects that are to be produced by the
system, and objects that are used by the system to perform its functions. In addition, each attendee is asked to make
another list of services (processes or functions) that manipulate or interact with the objects. Finally, lists of
constraints (e.g., cost, size, business rules) and performance criteria (e.g., speed, accuracy) are also developed. The
attendees are informed that the lists are not expected to be exhaustive but are expected to reflect each person's
perception of the system. In reality, considerably more information would be provided at this stage. But even with
additional information, ambiguity would be present, omissions would likely exist, and errors might occur.
The FAST team is composed of representatives from marketing, software and hardware engineering, and
manufacturing. An outside facilitator is to be used. Each person on the FAST team develops the lists of components,
services, constraints and performance criteria. As the FAST meeting begins, the first topic of discussion is the need
and justification for the new product; everyone should agree that the product is justified. Once agreement has been
established, each participant presents his or her lists for discussion. The lists can be pinned to the walls of the room
using large sheets of paper, stuck to the walls using adhesive backed sheets, or written on a wall board.
Alternatively, the lists may have been posted on an electronic bulletin board or posed in a chat room environment
for review prior to the meeting. Ideally, each list entry should be capable of being manipulated separately so that
lists can be combined, entries can be deleted and additions can be made. At this stage, critique and debate are strictly
prohibited.
After individual lists are presented in one topic area, a combined list is created by the group. The combined
list eliminates redundant entries, adds any new ideas that come up during the discussion, but does not delete
anything. After combined lists for all topic areas have been created, discussion, coordinated by the facilitator,
ensues. The combined list is shortened, lengthened, or reworded to properly reflect the product/system to be
developed. The objective is to develop a consensus list in each topic area (objects, services, constraints, and
performance). The lists are then set aside for later action. Once the consensus lists have been completed, the team is
divided into smaller sub teams; each works to develop mini-specifications for one or more entries on each of the
lists. Each mini-specification is an elaboration of the word or phrase contained on a list. Each sub team then presents
each of its mini-specs to all FAST attendees for discussion. Additions, deletions, and further elaboration are made.
In some cases, the development of mini-specs will uncover new objects, services, constraints, or performance
requirements that will be added to the original lists. During all discussions, the team may raise an issue that cannot
be resolved during the meeting. An issues list is maintained so that these ideas will be acted on later.
After the mini-specs are completed, each FAST attendee makes a list of validation criteria for the
product/system and presents his or her list to the team. A consensus list of validation criteria is then created. Finally,
one or more participants (or outsiders) are assigned the task of writing the complete draft specification using all
inputs from the FAST meeting. FAST is not a panacea for the problems encountered in early requirements
elicitation. But the team approach provides the benefits of many points of view, instantaneous discussion and
refinement, and is a concrete step toward the development of a specification.

Quality Function Deployment
Quality function deployment (QFD) is a quality management technique that translates the needs of the
customer into technical requirements for software. QFD concentrates on maximizing customer satisfaction from
the software engineering process. To accomplish this, QFD emphasizes an understanding of what is valuable to the
customer and then deploys these values throughout the engineering process. QFD identifies three types of
requirements:
Normal requirements: The objectives and goals that are stated for a product or system during meetings with the
customer. If these requirements are present, the customer is satisfied. Examples of normal requirements might be
requested types of graphical displays, specific system functions, and defined levels of performance.
Expected requirements: These requirements are implicit to the product or system and may be so fundamental that
the customer does not explicitly state them. Their absence will be a cause for significant dissatisfaction. Examples of
expected requirements are: ease of human/machine interaction, overall operational correctness and reliability, and
ease of software installation.
Exciting requirements: These features go beyond the customer's expectations and prove to be very satisfying when
present. For example, word processing software is requested with standard features. The delivered product contains
a number of page layout capabilities that are quite pleasing and unexpected.
In actuality, QFD spans the entire engineering process. However, many QFD concepts are applicable to the
requirements elicitation activity. We present an overview of only these concepts (adapted for computer software) in
the paragraphs that follow. In meetings with the customer, function deployment is used to determine the value of
each function that is required for the system. Information deployment identifies both the data objects and events that
the system must consume and produce. These are tied to the functions. Finally, task deployment examines the
behavior of the system or product within the context of its environment. Value analysis is conducted to determine
the relative priority of requirements determined during each of the three deployments. QFD uses customer
interviews and observation, surveys, and examination of historical data (e.g., problem reports) as raw data for the
requirements gathering activity.
These data are then translated into a table of requirements, called the customer voice table, that is
reviewed with the customer. A variety of diagrams, matrices, and evaluation methods are then used to extract
expected requirements and to attempt to derive exciting requirements.
Use-Cases
As requirements are gathered as part of informal meetings, FAST, or QFD, the software engineer (analyst)
can create a set of scenarios that identify a thread of usage for the system to be constructed. The scenarios, often
called use-cases, provide a description of how the system will be used.
To create a use-case, the analyst must first identify the different types of people (or devices) that use the
system or product. These actors actually represent roles that people (or devices) play as the system operates. Defined
somewhat more formally, an actor is anything that communicates with the system or product and that is external to
the system itself.
It is important to note that an actor and a user are not the same thing. A typical user may play a number of
different roles when using a system, whereas an actor represents a class of external entities (often, but not always,
people) that play just one role. As an example, consider a machine operator (a user) who interacts with the control
computer for a manufacturing cell that contains a number of robots and numerically controlled machines. After
careful review of requirements, the software for the control computer requires four different modes (roles) for
interaction: programming mode, test mode, monitoring mode, and troubleshooting mode. Therefore, four actors can
be defined: programmer, tester, monitor, and troubleshooter. In some cases, the machine operator can play all of
these roles. In others, different people may play the role of each actor.
Because requirements elicitation is an evolutionary activity, not all actors are identified during the first
iteration. It is possible to identify primary actors during the first iteration and secondary actors as more is learned
about the system. Primary actors interact to achieve required system function and derive the intended benefit from
the system. They work directly and frequently with the software. Secondary actors support the system so that
primary actors can do their work.
Once actors have been identified, use-cases can be developed. The use-case describes the manner in which
an actor interacts with the system. A number of questions should be answered by the use-case:
- What main tasks or functions are performed by the actor?
- What system information will the actor acquire, produce, or change?
- Will the actor have to inform the system about changes in the external environment?
- What information does the actor desire from the system?
- Does the actor wish to be informed about unexpected changes?
In general, a use-case is simply a written narrative that describes the role of an actor as interaction with the
system occurs. Each use-case provides an unambiguous scenario of interaction between an actor and the software. It
can also be used to specify timing requirements or other constraints for the scenario.
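The actor/use-case relationship described above lends itself to a simple data structure. The sketch below encodes the manufacturing-cell example from the text; the `Actor` and `UseCase` classes and the step wording are illustrative choices, not a prescribed use-case notation:

```python
from dataclasses import dataclass, field

@dataclass
class Actor:
    name: str       # a role, not a person: one user may map to several actors
    primary: bool   # primary actors derive the intended benefit; secondary actors support them

@dataclass
class UseCase:
    title: str
    actor: Actor
    steps: list = field(default_factory=list)   # the written narrative, one step per entry

# Two of the four actors from the manufacturing-cell example in the text.
programmer = Actor("programmer", primary=True)
monitor = Actor("monitor", primary=True)

start_monitoring = UseCase(
    title="Monitor cell status",
    actor=monitor,
    steps=[
        "Monitor selects monitoring mode on the control computer.",
        "System displays current status of all robots and NC machines.",
        "Monitor requests a detailed report for one machine.",
    ],
)

print(start_monitoring.actor.name, len(start_monitoring.steps))
```

Note that the same machine operator (the user) could appear behind both `programmer` and `monitor`: the actor captures the role, not the person.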
ANALYSIS PRINCIPLES
Over the past two decades, a large number of analysis modeling methods have been developed.
Investigators have identified analysis problems and their causes and have developed a variety of modeling notations
and corresponding sets of heuristics to overcome them. Each analysis method has a unique point of view. However,
all analysis methods are related by a set of operational principles:
1. The information domain of a problem must be represented and understood.
2. The functions that the software is to perform must be defined.
3. The behavior of the software (as a consequence of external events) must be represented.
4. The models that depict information, function and behavior must be partitioned in a manner that uncovers detail in
a layered (or hierarchical) fashion.
5. The analysis process should move from essential information toward implementation detail.
By applying these principles, the analyst approaches a problem systematically. The information domain is examined
so that function may be understood more completely. Models are used so that the characteristics of function and
behavior can be communicated in a compact fashion. Partitioning is applied to reduce complexity. Essential and
implementation views of the software are necessary to accommodate the logical constraints imposed by processing
requirements and the physical constraints imposed by other system elements. In addition to these operational
analysis principles, there is a further set of guiding principles for requirements engineering:
- Understand the problem before you begin to create the analysis model. There is a tendency to rush to a solution,
even before the problem is understood. This often leads to elegant software that solves the wrong problem!
- Develop prototypes that enable a user to understand how human/machine interaction will occur. Since the
perception of the quality of software is often based on the perception of the friendliness of the interface,
prototyping (and the iteration that results) is highly recommended.
- Record the origin of and the reason for every requirement. This is the first step in establishing traceability back to
the customer.
- Use multiple views of requirements. Building data, functional, and behavioral models provides the software
engineer with three different views. This reduces the likelihood that something will be missed and increases the
likelihood that inconsistency will be recognized.
- Rank requirements. Tight deadlines may preclude the implementation of every software requirement. If an
incremental process model is applied, those requirements to be delivered in the first increment must be identified.
- Work to eliminate ambiguity. Because most requirements are described in a natural language, the opportunity for
ambiguity abounds. The use of formal technical reviews is one way to uncover and eliminate ambiguity.
A software engineer who takes these principles to heart is more likely to develop a software specification
that will provide an excellent foundation for design.
The Information Domain
All software applications can be collectively called data processing. Interestingly, this term contains a key
to our understanding of software requirements. Software is built to process data, to transform data from one form to
another; that is, to accept input, manipulate it in some way, and produce output. The first operational analysis
principle requires an examination of the information domain and the creation of a data model. The information
domain contains three different views of the data and control as each is processed by a computer program: (1)
information content and relationships (the data model), (2) information flow, and (3) information structure. To fully
understand the information domain, each of these views should be considered.
Information content represents the individual data and control objects that constitute some larger collection
of information transformed by the software.
Information flow represents the manner in which data and control change as each move through a system.
Input objects are transformed to intermediate information (data and/or control), which is further transformed to
output. Along this transformation path (or paths), additional information may be introduced from an existing data
store (e.g., a disk file or memory buffer). The transformations applied to the data are functions or sub functions that
a program must perform. Data and control that move between two transformations (functions) define the interface
for each function.
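The input/intermediate/output view of information flow described above can be sketched as a chain of small transformations, where the data passed between the functions is exactly the interface for each function. The function names and data below are invented for illustration:

```python
# Sketch of information flow: input objects -> intermediate information -> output.
# Each function is one transformation; the data passed between them is its interface.

def read_input(raw):
    """Input transformation: parse raw text into data objects."""
    return [int(tok) for tok in raw.split()]

def compute(values, data_store):
    """Processing: combine input with information from an existing data store."""
    offset = data_store.get("offset", 0)     # additional information introduced along the path
    return [v + offset for v in values]

def format_output(values):
    """Output transformation: produce the externally visible result."""
    return ", ".join(str(v) for v in values)

store = {"offset": 10}                       # stands in for a disk file or memory buffer
result = format_output(compute(read_input("1 2 3"), store))
print(result)   # -> 11, 12, 13
```

Each intermediate value (`[1, 2, 3]`, then `[11, 12, 13]`) is the information content flowing across one interface.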
Information structure represents the internal organization of various data and control items. Are data or
control items to be organized as an n-dimensional table or as a hierarchical tree structure? Within the context of the
structure, what information is related to other information? Is all information contained within a single structure or
are distinct structures to be used? How does information in one information structure relate to information in another
structure? These questions and others are answered by an assessment of information structure. It should be noted
that data structure refers to the design and implementation of information structure within the software.

Modeling
We create functional models to gain a better understanding of the actual entity to be built. When the entity
is a physical thing (a building, a plane, a machine), we can build a model that is identical in form and shape but
smaller in scale. However, when the entity to be built is software, our model must take a different form. It must be
capable of representing the information that the software transforms, the functions (and subfunctions) that enable the
transformation to occur, and the behavior of the system as the transformation is taking place.
The second and third operational analysis principles require that we build models of function and behavior.
Functional models: Software transforms information, and in order to accomplish this, it must perform at least three
generic functions: input, processing, and output. When functional models of an application are created, the software
engineer focuses on problem specific functions. The functional model begins with a single context level model (i.e.,
the name of the software to be built). Over a series of iterations, more and more functional detail is provided, until a
thorough delineation of all system functionality is represented.
Behavioral models: Most software responds to events from the outside world. This stimulus/response characteristic
forms the basis of the behavioral model. A computer program always exists in some state, an externally observable
mode of behavior (e.g., waiting, computing, printing, polling) that is changed only when some event occurs. For
example, software will remain in the wait state until (1) an internal clock indicates that some time interval has
passed, (2) an external event (e.g., a mouse movement) causes an interrupt, or (3) an external system signals the
software to act in some manner. A behavioral model creates a representation of the states of the software and the
events that cause software to change state. Models created during requirements analysis serve a number of important
roles:
The model aids the analyst in understanding the information, function, and behavior of a system, thereby making
the requirements analysis task easier and more systematic.
The model becomes the focal point for review and, therefore, the key to a determination of completeness,
consistency, and accuracy of the specifications.
The model becomes the foundation for design, providing the designer with an essential representation of software
that can be "mapped" into an implementation context.
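The wait/compute behavior described above can be sketched as a tiny state machine. The states and events below follow the example in the text; the transition-table encoding is an illustrative choice, not a required notation:

```python
# Minimal behavioral model: states, events, and the transitions between them.

TRANSITIONS = {
    ("waiting", "clock_tick"):  "computing",   # internal clock interval has elapsed
    ("waiting", "mouse_event"): "computing",   # external event causes an interrupt
    ("computing", "done"):      "printing",
    ("printing", "finished"):   "waiting",
}

def next_state(state, event):
    """Return the new state; stay in the current state if the event is not handled."""
    return TRANSITIONS.get((state, event), state)

state = "waiting"
for event in ["clock_tick", "done", "finished"]:
    state = next_state(state, event)
print(state)   # back to "waiting" after one compute/print cycle
```

The table of (state, event) pairs is precisely the representation the behavioral model asks for: the states of the software and the events that cause it to change state.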
Partitioning
Problems are often too large and complex to be understood as a whole. For this reason, we tend to partition
(divide) such problems into parts that can be easily understood and establish interfaces between the parts so that
overall function can be accomplished. The fourth operational analysis principle suggests that the information,
functional, and behavioral domains of software can be partitioned.
In essence, partitioning decomposes a problem into its constituent parts. Conceptually, we establish a
hierarchical representation of function or information and then partition the uppermost element by (1) exposing
increasing detail by moving vertically in the hierarchy or (2) functionally decomposing the problem by moving
horizontally in the hierarchy. In fact, partitioning of information flow and system behavior will provide additional
insight into software requirements. As the problem is partitioned, interfaces between functions are derived. Data and
control items that move across an interface should be restricted to inputs required to perform the stated function and
outputs that are required by other functions or system elements.
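The vertical (increasing detail) and horizontal (peer functions) partitioning described above can be sketched as a nested function hierarchy. The decomposition below is invented for illustration; only the tree shape matters:

```python
# Sketch of partitioning a function hierarchy. Sibling keys are horizontal
# partitions; nesting deeper is vertical partitioning (increasing detail).

hierarchy = {
    "operate system": {                 # uppermost element
        "configure system": {},         # horizontal partition: a peer function
        "monitor sensors": {            # vertical partition: refined further below
            "poll sensor": {},
            "evaluate reading": {},
            "raise alarm": {},
        },
        "interact with user": {},
    }
}

def leaves(tree):
    """Collect the most detailed (leaf) functions from the hierarchy."""
    result = []
    for name, children in tree.items():
        if children:
            result.extend(leaves(children))
        else:
            result.append(name)
    return result

print(leaves(hierarchy))   # the most detailed functions, left to right
```

As the text notes, the interfaces between these functions fall out of the partitioning: the data crossing each boundary should be limited to what the stated function needs and what other elements require from it.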
Essential and Implementation Views
An essential view of software requirements presents the functions to be accomplished and information to be
processed without regard to implementation details. By focusing attention on the essence of the problem at early
stages of requirements engineering, we leave our options open to specify implementation details during later stages
of requirements specification and software design.
The implementation view of software requirements presents the real world manifestation of processing functions and
information structures. In some cases, a physical representation is developed as the first step in software design.
However, most computer-based systems are specified in a manner that dictates accommodation of certain
implementation details.
We have already noted that software requirements engineering should focus on what the software is to
accomplish, rather than on how processing will be implemented. However, the implementation view should not
necessarily be interpreted as a representation of how. Rather, an implementation model represents the current mode
of operation; that is, the existing or proposed allocation for all system elements. The essential model (of function or
data) is generic in the sense that realization of function is not explicitly indicated.
SOFTWARE PROTOTYPING
Analysis should be conducted regardless of the software engineering paradigm that is applied. However,
the form that analysis takes will vary. In some cases it is possible to apply operational analysis principles and derive
a model of software from which a design can be developed. In other situations, requirements elicitation (via FAST,
QFD, use-cases, or other "brainstorming" techniques) is conducted, analysis principles are applied, and a model of
the software to be built, called a prototype, is constructed for customer and developer assessment. Finally, some
circumstances require the construction of a prototype at the beginning of analysis, since the model is the only means
through which requirements can be effectively derived. The model then evolves into production software.
Selecting the Prototyping Approach.
The prototyping paradigm can be either close-ended or open-ended. The close-ended approach is often
called throwaway prototyping. Using this approach, a prototype serves solely as a rough demonstration of
requirements. It is then discarded, and the software is engineered using a different paradigm. An open-ended
approach, called evolutionary prototyping, uses the prototype as the first part of an analysis activity that will be
continued into design and construction. The prototype of the software is the first evolution of the finished system.
Before a close-ended or open-ended approach can be chosen, it is necessary to determine whether the
system to be built is amenable to prototyping. A number of prototyping candidacy factors can be defined:
application area, application complexity, customer characteristics, and project characteristics. In general, any
application that creates dynamic visual displays, interacts heavily with a user, or demands algorithms or
combinatorial processing that must be developed in an evolutionary fashion is a candidate for prototyping. However,
these application areas must be weighed against application complexity. If a candidate application (one that has the
characteristics noted) will require the development of tens of thousands of lines of code before any demonstrable
function can be performed, it is likely to be too complex for prototyping. If, however, the complexity can be
partitioned, it may still be possible to prototype portions of the software.
Because the customer must interact with the prototype in later steps, it is essential that (1) customer
resources be committed to the evaluation and refinement of the prototype and (2) the customer is capable of making
requirements decisions in a timely fashion. Finally, the nature of the development project will have a strong bearing
on the efficacy of prototyping. Is project management willing and able to work with the prototyping method? Are
prototyping tools available?
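The candidacy factors above can be read as a simple checklist: only when every factor holds is the system a good prototyping candidate. The sketch below codifies that reading; the question keys and the all-must-hold rule are illustrative assumptions, not a formal method.

```python
# A hedged sketch of the prototyping candidacy check described above.
# The factor names and the all-must-hold rule are illustrative.
CANDIDACY_QUESTIONS = {
    "application_area_suitable": "Dynamic displays, heavy user interaction, "
                                 "or evolutionary algorithms?",
    "complexity_manageable": "Can demonstrable function be shown without "
                             "tens of thousands of lines of code?",
    "customer_committed": "Are customer resources committed to evaluating "
                          "and refining the prototype?",
    "management_ready": "Is management willing to use prototyping, and are "
                        "prototyping tools available?",
}

def is_prototyping_candidate(answers):
    """answers maps each question key to True/False; every factor must hold."""
    return all(answers.get(key, False) for key in CANDIDACY_QUESTIONS)

print(is_prototyping_candidate({k: True for k in CANDIDACY_QUESTIONS}))
```

A single missing or negative answer, such as an uncommitted customer, is enough to rule prototyping out under this reading.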


Prototyping Methods and Tools
For software prototyping to be effective, a prototype must be developed rapidly so that the customer may
assess results and recommend changes. To conduct rapid prototyping, three generic classes of methods and tools are
available:
Fourth generation techniques: Fourth generation techniques (4GT) encompass a broad array of database query and
reporting languages, program and application generators, and other very high-level nonprocedural languages.
Because 4GT enable the software engineer to generate executable code quickly, they are ideal for rapid prototyping.
Reusable software components: Another approach to rapid prototyping is to assemble, rather than build, the
prototype by using a set of existing software components. Melding prototyping and program component reuse will
work only if a library system is developed so that components that do exist can be cataloged and then retrieved. It
should be noted that an existing software product can be used as a prototype for a "new, improved" competitive
product. In a way, this is a form of reusability for software prototyping.
Formal specification and prototyping environments: Over the past two decades, a number of formal specification
languages and tools have been developed as a replacement for natural language specification techniques. Today,
developers of these formal languages are in the process of developing interactive environments that (1) enable an
analyst to interactively create language-based specifications of a system or software, (2) invoke automated tools that
translate the language-based specifications into executable code, and (3) enable the customer to use the prototype
executable code to refine formal requirements.
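The first two classes of methods can be made concrete in a few lines. The sketch below combines a very high-level declarative query (the kind of facility 4GT provide) with a reusable off-the-shelf component (Python's built-in sqlite3 module) to stand up a working report prototype almost immediately; the table and data are hypothetical.

```python
import sqlite3

# Hypothetical data for a quick customer-report prototype.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("Acme", 120.0), ("Acme", 80.0), ("Bolt", 45.5)])

def totals_by_customer(conn):
    # One declarative query replaces what would otherwise be a hand-written
    # reporting module: the customer can assess a working "report" in minutes.
    cur = conn.execute(
        "SELECT customer, SUM(amount) FROM orders "
        "GROUP BY customer ORDER BY customer")
    return cur.fetchall()

for customer, total in totals_by_customer(conn):
    print(f"{customer}: {total:.2f}")
```

Whether such a prototype is thrown away or evolved, the point is the speed with which the customer can see and react to demonstrable function.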
SPECIFICATION
There is no doubt that the mode of specification has much to do with the quality of solution. Software
engineers who have been forced to work with incomplete, inconsistent, or misleading specifications have
experienced the frustration and confusion that invariably result. The quality, timeliness, and completeness of the
software suffer as a consequence.
Specification Principles
Specification, regardless of the mode through which we accomplish it, may be viewed as a representation
process. Requirements are represented in a manner that ultimately leads to successful software implementation.
1. Separate functionality from implementation.
2. Develop a model of the desired behavior of a system that encompasses data and the functional responses of a
system to various stimuli from the environment.
3. Establish the context in which software operates by specifying the manner in which other system components
interact with software.
4. Define the environment in which the system operates and indicate how a highly intertwined collection of agents
react to stimuli in the environment (changes to objects) produced by those agents.
5. Create a cognitive model rather than a design or implementation model. The cognitive model describes a system
as perceived by its user community.
6. Recognize that the specifications must be tolerant of incompleteness and augmentable. A specification is
always a model (an abstraction) of some real (or envisioned) situation that is normally quite complex. Hence, it
will be incomplete and will exist at many levels of detail.
7. Establish the content and structure of a specification in a way that will enable it to be amenable to change.
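Principle 1, separating functionality from implementation, can be illustrated with an abstract interface: the specification names what the system must do, while any number of implementations decide how. The names below are illustrative only.

```python
from abc import ABC, abstractmethod

# The "specification": what the system must do, with no commitment to how.
class TemperatureLog(ABC):
    @abstractmethod
    def record(self, celsius: float) -> None: ...

    @abstractmethod
    def average(self) -> float: ...

# One possible implementation; the specification above does not mandate it,
# and it could be replaced (e.g., by a database-backed log) without changing
# the specified functionality.
class InMemoryLog(TemperatureLog):
    def __init__(self):
        self._readings = []

    def record(self, celsius: float) -> None:
        self._readings.append(celsius)

    def average(self) -> float:
        return sum(self._readings) / len(self._readings)

log = InMemoryLog()
log.record(20.0)
log.record(22.0)
print(log.average())
```

Keeping the interface free of storage or algorithmic detail is exactly the separation the principle asks for.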
This list of basic specification principles provides a basis for representing software requirements. However,
principles must be translated into realization. In the next section we examine a set of guidelines for creating a
specification of requirements.
Representation
We have already seen that software requirements may be specified in a variety of ways. However, if requirements
are committed to paper or an electronic presentation medium (and they almost always should be!) a simple set of
guidelines is well worth following:
Representation format and content should be relevant to the problem.
A general outline for the contents of a Software Requirements Specification can be developed. However, the
representation forms contained within the specification are likely to vary with the application area.
Information contained within the specification should be nested. Representations should reveal layers of
information so that a reader can move to the level of detail required. Paragraph and diagram numbering schemes
should indicate the level of detail that is being presented. It is sometimes worthwhile to present the same information
at different levels of abstraction to aid in understanding.
Diagrams and other notational forms should be restricted in number and consistent in use. Confusing or
inconsistent notation, whether graphical or symbolic, degrades understanding and fosters errors.
Representations should be revisable. The content of a specification will change. Ideally, CASE tools should be
available to update all representations that are affected by each change. Investigators have conducted numerous
studies on human factors associated with specification. There appears to be little doubt that symbology and
arrangement affect understanding. However, software engineers appear to have individual preferences for specific
symbolic and diagrammatic forms. Familiarity often lies at the root of a person's preference, but other more tangible
factors such as spatial arrangement, easily recognizable patterns, and degree of formality often dictate an
individual's choice.
The Software Requirements Specification
The Software Requirements Specification is produced at the culmination of the analysis task. The function
and performance allocated to software as part of system engineering are refined by establishing a complete
information description, a detailed functional description, a representation of system behavior, an indication of
performance requirements and design constraints, appropriate validation criteria, and other information pertinent to
requirements. The National Bureau of Standards, IEEE (Standard No. 830-1984), and the U.S. Department of
Defense have all proposed candidate formats for software requirements specifications (as well as other software
engineering documentation).
The Introduction of the software requirements specification states the goals and objectives of the software,
describing it in the context of the computer-based system. Actually, the Introduction may be nothing more than the
software scope of the planning document.
The Information Description provides a detailed description of the problem that the software must solve.
Information content, flow, and structure are documented. Hardware, software, and human interfaces are described
for external system elements and internal software functions.
A description of each function required to solve the problem is presented in the Functional Description. A
processing narrative is provided for each function, design constraints are stated and justified, performance
characteristics are stated, and one or more diagrams are included to graphically represent the overall structure of the
software and interplay among software functions and other system elements.
The Behavioral Description section of the specification examines the operation of the software as a
consequence of external events and internally generated control characteristics. Validation Criteria is probably the
most important and, ironically, the most often neglected section of the Software Requirements Specification. How do
we recognize a successful implementation? What classes of tests must be conducted to validate function, performance, and constraints?
In many cases the Software Requirements Specification may be accompanied by an executable prototype
(which in some cases may replace the specification), a paper prototype or a Preliminary User's Manual. The
Preliminary User's Manual presents the software as a black box. That is, heavy emphasis is placed on user input and
the resultant output. The manual can serve as a valuable tool for uncovering problems at the human/machine
interface.
SPECIFICATION REVIEW
A review of the Software Requirements Specification (and/or prototype) is conducted by both the software
developer and the customer. Because the specification forms the foundation of the development phase, extreme care
should be taken in conducting the review.
The review is first conducted at a macroscopic level; that is, reviewers attempt to ensure that the
specification is complete, consistent, and accurate when the overall information, functional, and behavioral domains
are considered. However, to fully explore each of these domains, the review becomes more detailed, examining not
only broad descriptions but the way in which requirements are worded. For example, when specifications contain
vague terms (e.g., some, sometimes, often, usually, ordinarily, most, or mostly), the reviewer should flag the
statements for further clarification.
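A reviewer's scan for the vague terms listed above can even be partly mechanized. The sketch below flags statements containing those terms for clarification; it is a reviewer aid under the stated term list, not a substitute for human review.

```python
import re

# The vague terms named in the review guideline above.
VAGUE_TERMS = {"some", "sometimes", "often", "usually",
               "ordinarily", "most", "mostly"}

def flag_vague_statements(spec_text):
    """Return (line_number, term) pairs for statements needing clarification."""
    flagged = []
    for lineno, line in enumerate(spec_text.splitlines(), start=1):
        for word in re.findall(r"[a-z]+", line.lower()):
            if word in VAGUE_TERMS:
                flagged.append((lineno, word))
    return flagged

spec = ("The system shall usually respond quickly.\n"
        "Logs are archived daily.")
print(flag_vague_statements(spec))
```

Each flagged pair points the reviewer at a statement whose wording must be made precise before sign-off.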
Once the review is complete, the Software Requirements Specification is "signed off" by both the customer
and the developer. The specification becomes a "contract" for software development. Requests for changes in
requirements after the specification is finalized will not be eliminated. But the customer should note that each
after-the-fact change is an extension of software scope and therefore can increase cost and/or protract the schedule.
Even with the best review procedures in place, a number of common specification problems persist. The
specification is difficult to "test" in any meaningful way, and therefore inconsistency or omissions may pass
unnoticed. During the review, changes to the specification may be recommended. It can be extremely difficult to
assess the global impact of a change; that is, how a change in one function affects requirements for other functions.
Modern software engineering environments incorporate CASE tools that have been developed to help solve these
problems.
