1. Define software engineering. What are the characteristics of a software product?
The process that deals with the technical and management issues of software development
is called a software process. Clearly, many different types of activities need to be performed
to develop software. If the process is weak, the end product will undoubtedly suffer, but an
obsessive overreliance on process is also dangerous. We have embraced structured
programming languages (product) followed by structured analysis methods (process)
followed by data encapsulation (product) followed by the current emphasis on the Software
Engineering Institute's Software Development Capability Maturity Model (process). The
observations we can make on the artifacts of software and its development demonstrate a
fundamental duality between product and process. You can never derive or understand the
full artifact, its context, use, meaning, and worth if you view it as only a process or only a
product.
[Software engineering is] the establishment and use of sound engineering principles in order
to obtain economically software that is reliable and works efficiently on real machines.
The IEEE [IEE93] has developed a more comprehensive definition, which states: software
engineering is the application of a systematic, disciplined, quantifiable approach to the
development, operation, and maintenance of software; that is, the application of
engineering to software.
Software is (1) instructions (computer programs) that when executed provide desired
function and performance, (2) data structures that enable the programs to adequately
manipulate information, and (3) documents that describe the operation and use of the
programs.
Software Characteristics:
Software is a logical rather than a physical system element. Therefore, software has
characteristics that are considerably different than those of hardware:
Software costs are concentrated in engineering. This means that software projects cannot
be managed as if they were manufacturing projects.
2. What is the role of software architecture in a software system? What are the different
views of architecture? Describe each of them in detail. How are the disciplines of
classical architecture and software architecture similar? How do they differ?
Before discussing the role of software architecture in software systems, we first have to take a
firm look at what software architecture is. At the top level, architecture is a design of a
system which gives a very high-level view of the parts of the system and how they are
related to form the whole system. That is, architecture partitions the system into logical parts
such that each part can be comprehended independently, and then describes the system
in terms of these parts and the relationships between them. In fact, any complex system
can be partitioned into simple logical parts with the aid of software architecture. The
formal definition goes like this: "the software architecture of a system is the structure or structures
of the system, which comprise software elements, the externally visible properties of those
elements, and the relationships among them."
2. Reuse: Architecture descriptions can help software reuse. Reuse is considered one of
the main techniques by which productivity can be improved, thereby reducing the
cost of software. The architecture has to be chosen in a manner such that the
components which have to be reused can fit properly and together with other
components that may be developed, they provide the features that are needed.
Architecture also facilitates reuse among products that are similar and building
product families such that the common parts of these different but similar products
can be reused.
3. Construction and Evolution: As architecture partitions the system into parts, the
partitioning provided by the architecture can naturally be used for constructing the system,
which also requires that the system be broken into parts such that different teams (or
individuals) can separately work on different parts. It is clear from the definition that
the parts specified in an architecture are relatively independent (dependence comes
through the relationships). Not only does architecture guide the development, it also
establishes the constraints: the system should be constructed in a manner that preserves
the structures chosen during architecture creation. After delivery of the
product, a software system also evolves with time. During evolution, new
features are often added to the system. The architecture of the system can help in deciding
where to add the new features with minimum complexity and effort, and what the
impact of adding the new features might be on the rest of the system.
4. Analysis: It is highly desirable if some important properties about the behaviour of the
system can be determined before the system is actually built. This allows the
designers to consider alternatives and select the one that best suits the needs. It is
possible to predict the features or analyse the properties of the system being built from
its architecture. Such an analysis also helps in meeting the quality and reliability
goals of the software.
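The analysis role can be made concrete with a small sketch: given the "uses" relationships from a module view, one can check for cyclic dependencies before any code is written. The module names and the check itself are illustrative, not something prescribed by the text.

```python
# Hypothetical "uses" relationships from a module view
# (module names are invented for illustration).
uses = {
    "ui": ["logic"],
    "logic": ["storage"],
    "storage": [],
}

def has_cycle(graph):
    """Return True if the 'uses' graph contains a cyclic dependency."""
    visiting, done = set(), set()
    def visit(node):
        if node in done:
            return False
        if node in visiting:
            return True  # back edge: a module transitively uses itself
        visiting.add(node)
        if any(visit(m) for m in graph.get(node, [])):
            return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(visit(n) for n in graph)

print(has_cycle(uses))  # False: no module transitively depends on itself
```

Such a check is one example of determining a property of the system (absence of cyclic dependencies) purely from its architecture description.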
Different Views of Software Architecture:
A view generally describes a structure of the system. Many types of views have been
proposed, but most views generally belong to one of the following three types:
a) Module
b) Component and connector (C&C)
c) Allocation
Module: In a module view, the system is viewed as a collection of code units, each
implementing some part of the system functionality. That is, the main elements in this view
are modules. These views are code-based and do not explicitly represent any runtime
structure of the system. Examples of modules are packages, a class, a procedure, a method,
a collection of functions, and a collection of classes. The relationships between these
modules are also code-based and depend on how code of a module interacts with another
module. Examples of relationships in this view are "is a part of" (i.e., module B is a part of
module A), "uses or depends on" (a module A uses services of module B to perform its own
functions and correctness of module A depends on correctness of module B,) and
"generalization or specialization" (a module B is a generalization of a module A.)
Component and Connector (C&C) View:
In a C&C view, the system is viewed as a collection of runtime entities called components. That is,
a component is a unit which has an identity in the executing system. Objects (not classes), a
collection of objects, and a process are examples of components. While executing,
components need to interact with others to support the system services. Connectors provide
means for this interaction. Examples of connectors are pipes and sockets. Shared data also
act as connectors.
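As a loose illustration of components joined by a pipe-style connector (the component names and data below are invented for the example), each component can be modelled as a Python generator, with the stream handed from one to the next playing the role of the connector:

```python
# Two hypothetical components connected by a pipe-like connector.
def producer(lines):
    # Component 1: emits cleaned-up lines.
    for line in lines:
        yield line.strip()

def drop_empty(stream):
    # Component 2: filters out empty lines received over the "pipe".
    for line in stream:
        if line:
            yield line

raw = ["  hello ", "", " world "]
# The generator passed between the components acts as the connector.
result = list(drop_empty(producer(raw)))
print(result)  # ['hello', 'world']
```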
Allocation View:
An allocation view focuses on how the different software units are allocated to resources like
the hardware, file systems, and people. That is, an allocation view specifies the relationship
between software elements and elements of the environments in which the software system
is executed. They expose structural properties like which processes run on which processor,
and how the system files are organized on a file system.
3. Is it reasonable to assume that if software is easy to test, it will be easy to maintain?
No, it is not reasonable to assume that if software is easy to test, it will be easy to maintain.
Generally, the software produced is not easily maintainable because the development
process used for developing software does not have maintainability as a clear goal.
One possible reason for this is that the developers frequently develop the software, install it,
and then hand it over to a different set of people called maintainers.
Usually the maintainers don’t belong to the organization that developed the software.
In such a situation, clearly there is no incentive for the developers to develop maintainable
software, as they don’t have to put in the effort for maintenance.
This situation can be alleviated if the developers are made responsible for maintenance, at
least for a couple of years after the delivery of software.
Suppose that by putting extra effort in design and coding you increase the cost of these
phases by 15%, but you reduce the cost of maintenance by 5%. Will you decide to put in the
extra effort? Why?
The answer could be based on the typical distribution of effort across the phases.
There are some observations we can make from such data.
First is that design and coding consume only a small percentage of the development effort.
This is against the common naive notion that developing software is largely concerned with
writing programs and that programming is the major activity.
Second observation from the data is that testing consumes the most resources during
development. Underestimating the testing effort often causes the planners to allocate
insufficient resources for testing, which in turn, results in unreliable software or schedule
slippage.
In the life of software, the maintenance costs generally exceed the development costs.
Clearly, if we want to reduce the overall cost of software or achieve "global" optimality in
terms of cost rather than "local" optimality in terms of development cost only, the goal of
development should be to reduce the maintenance effort. That is, one of the important
objectives of the development project should be to produce software that is easy to
maintain and the process used should ensure this maintainability.
Both testing and maintenance depend heavily on the quality of design and code, and
these costs can be considerably reduced if the software is designed and coded to make
testing and maintenance easier. Hence, during the early phases of the development
process the prime issues should be "can it be easily tested" and "can it be easily modified".
So, it is reasonable to put extra effort in design and coding to reduce the cost of testing and
maintenance.
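A quick back-of-the-envelope check of the 15%-versus-5% trade-off, using assumed life-cycle proportions (the figures below are purely illustrative, not from the question's table):

```python
# Assumed life-cycle cost split, in arbitrary units (illustrative only):
design_coding_cost = 15.0   # design + coding
maintenance_cost = 60.0     # maintenance usually dominates life-cycle cost

extra_spent = 0.15 * design_coding_cost   # 15% more on design and coding
saved = 0.05 * maintenance_cost           # 5% less on maintenance

print(f"extra: {extra_spent}, saved: {saved}, net gain: {saved - extra_spent}")
# Even a small percentage saving on the (much larger) maintenance cost
# can outweigh a larger percentage increase in design and coding.
```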
4. Suppose a program for solving a problem costs X, and industrial level software for
solving that problem costs 10X. Where do you think this extra 9X cost is spent? Suggest
a possible breakdown of this extra cost with your reasons and justifications.
A program for solving a problem costs X, and an industrial level software for solving that
problem costs 10X. To find out where this extra 9X cost is spent, first of all we need to
understand the difference between a "problem solving program" and "industrial level
software".
An industrial level software is very different from a program in terms of quality (including
usability, reliability, robustness, portability, etc.). High quality requires heavy testing, which
consumes 30-50% of total development effort. A large amount of investment is made by the
company in terms of resources like manpower and money. The developer is a team of
several people, and for them bugs are not tolerable, the UI is very important, and they
prepare documentation.
However, in a simple program to solve a problem, quality (including usability, reliability,
robustness, portability, etc.) is not important. The developer is a single person (often the user
himself), and for him bugs are tolerable, the UI is not important, and he doesn't prepare
documentation. No investment is required.
Therefore, industrial strength software is very expensive primarily due to the fact that software
development is extremely labor-intensive.
Software costs more to maintain than to develop from scratch. The maintenance costs
for systems with a long life may be several times their development costs.
Roughly 60% of costs are development costs, and 40% are testing costs. Evolution costs
often exceed development costs in custom software.
Costs vary depending on the type and requirements of the system under development.
Ex:
1. Waterfall model: specification – 1.5X, design – 2.5X, development – 2X
2. Iterative model: specification – 1X, iterative development – 6X, system testing – 3X
3. Specification – 2X, development – 3X
To get an idea of the costs involved, let us consider the current state of practice in the
industry. Lines of code (LOC) or thousands of lines of code (KLOC) delivered is by far the
most commonly used measure of software size in the industry. As the main cost of producing
software is the manpower employed, the cost of developing software is generally measured
in terms of person-months of effort spent in development. And productivity is frequently
measured in the industry in terms of LOC (or KLOC) per person-month.
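These measures can be tied together in a tiny computation (all figures below are assumed for illustration, not actual industry data):

```python
# Assumed figures for a hypothetical project:
size_kloc = 20.0        # estimated software size in KLOC
productivity = 0.5      # KLOC per person-month for this team (assumed)
rate_per_pm = 10000.0   # cost of one person-month (assumed)

effort_pm = size_kloc / productivity  # person-months of effort
cost = effort_pm * rate_per_pm        # total manpower cost

print(effort_pm, cost)  # 40.0 400000.0
```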
So, software is very expensive.
In the early days, the cost of hardware used to dominate the system
cost. As the cost of hardware has lessened over the years and continues to decline, and as
the power of hardware doubles roughly every 2 years (Moore's law), enabling larger
software systems to be run on it, the cost of software has now become the dominant factor in
systems.
5. What is the relationship between a process model, process specification, and process
for a project? What are the key outputs in a development project that follows the
prototyping model? Write an ETVX specification for this process model?
6. Give a brief description of software prototyping and briefly discuss the various
prototyping techniques. With an example, illustrate the use of prototyping as a
method for problem analysis. Discuss its advantages and disadvantages.
The basic idea of the prototyping model is that instead of freezing the requirements before any
design or coding can proceed, a prototype, i.e., an incomplete version of the software, is
built to help understand the requirements. This prototype is developed based on the
currently known requirements. Development of the prototype obviously undergoes design,
coding, and testing, but each of these phases is not done very formally or thoroughly.
By using this prototype, the client can get an actual feel of the system, because the
interactions with the prototype enable the client to better understand the requirements
of the desired system. Using the prototyping model, we can overcome the limitations of the
waterfall model.
1. Identify basic requirements:
Determine basic requirements, including the input and output information desired. Details,
such as security, can typically be ignored.
2. Develop initial prototype:
An initial prototype is developed based on these currently known requirements, covering
only the key aspects of the system.
3. Review:
The customers, including end-users, examine the prototype and provide feedback on
additions or changes.
4. Revise and enhance the prototype:
Using the feedback, both the specifications and the prototype can be improved.
Negotiation about what is within the scope of the contract/product may be necessary. If
changes are introduced, then a repeat of steps #3 and #4 may be needed.
Prototyping techniques:
1. Throwaway prototyping
Also called close-ended prototyping. Throwaway or rapid prototyping refers to the creation
of a model that will eventually be discarded rather than becoming part of the final delivered
software. After preliminary requirements gathering is accomplished, a simple working model
of the system is constructed to visually show the users what their requirements may look like
when they are implemented into a finished system.
Throwaway prototyping involves creating a working model of various parts of the system at
a very early stage, after a relatively short investigation. The method used in building it is
usually quite informal, the most important factor being the speed with which the model is
provided. The model then becomes the starting point from which users can re-examine their
expectations and clarify their requirements. When this has been achieved, the prototype
model is 'thrown away', and the system is formally developed based on the identified
requirements.
2. Evolutionary prototyping
To minimize risk, the developer does not implement poorly understood features. The partial
system is sent to customer sites. As users work with the system, they detect opportunities for
new features and give requests for these features to developers. Developers then take these
enhancement requests along with their own and use sound configuration-management
practices to change the software-requirements specification, update the design, recode
and retest.
3. Incremental prototyping
The final product is built as separate prototypes. At the end the separate prototypes are
merged in an overall design.
4. Extreme prototyping
Extreme prototyping is used mainly for web application development. It breaks development
into three phases, each based on the preceding one: first a static prototype of the pages is
built (mainly in HTML); then the screens are programmed and made fully functional using a
simulated services layer; finally the services themselves are implemented.
The prototyping process in general:
The development of the prototype typically starts when the preliminary version of the
requirements specification document has been developed. At this stage, there is a
reasonable understanding of the system and its needs and which needs are unclear or likely
to change. After the prototype has been developed, the end users and clients are given an
opportunity to use the prototype and play with it. Based on their experience, they provide
feedback to the developers regarding the prototype: what is correct, what needs to be
modified, what is missing, what is not needed, etc. Based on the feedback, the prototype is
modified to incorporate some of the suggested changes that can be done easily, and then
the users and the clients are again allowed to use the system. This cycle repeats until, in the
judgment of the prototypers and analysts, the benefit from further changing the system and
obtaining feedback is outweighed by the cost and time involved in making the changes
and obtaining the feedback. Based on the feedback, the initial requirements are modified
to produce the final requirements specification, which is then used to develop the
production quality system.
Advantages of prototyping
Reduced time and costs: Prototyping can improve the quality of requirements and
specifications provided to developers. Because changes cost exponentially more to
implement as they are detected later in development, the early determination of what the
user really wants can result in faster and less expensive software.
Improved and increased user involvement: Prototyping requires user involvement and allows
them to see and interact with a prototype allowing them to provide better and more
complete feedback and specifications. The presence of the prototype being examined by
the user prevents many misunderstandings and miscommunications that occur when each
side believes the other understands what they said.
Disadvantages of prototyping
Insufficient analysis: The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered final
projects that are hard to maintain.
User confusion of prototype and finished system: Users can begin to think that a prototype,
intended to be thrown away, is actually a final system that merely needs to be finished or
polished. This can lead them to expect the prototype to accurately model the performance
of the final system when this is not the intent of the developers.
Developer misunderstanding of user objectives: Developers may assume that users share
their objectives (e.g. to deliver core functionality on time and within budget), without
understanding wider commercial issues.
Excessive development time of the prototype: A key property of prototyping is the fact that it
is supposed to be done quickly. If the developers lose sight of this fact, they very well may try
to develop a prototype that is too complex. When the prototype is thrown away the
precisely developed requirements that it provides may not yield a sufficient increase in
productivity to make up for the time spent developing the prototype.
Expense of implementing prototyping: The start-up costs for building a development team
focused on prototyping may be high.
7. Explain different process models along with their relative merits and demerits. Explain
four significant attributes that every software product should possess.
Waterfall Model
• Linear sequence of stages/phases
• Requirements – High-level design (HLD) – Detailed design (DD) – Coding – Testing – Deployment
• A phase starts only when the previous has completed; no feedback
• The phases partition the project, each addressing a separate concern
• Linear ordering implies each phase should have some output
• The output must be validated/ certified
• Outputs of earlier phases: work products
• Common outputs of a waterfall: SRS, project plan, design docs, test plan and reports,
final code, supporting docs
Waterfall Advantages
• Conceptually simple, cleanly divides the problem into distinct phases that can be
performed independently
• Natural approach for problem solving
• Easy to administer in a contractual setup – each phase is a milestone
Waterfall disadvantages
• Assumes that requirements can be specified and frozen early
• May fix hardware and other technologies too early
• Follows the "big bang" approach – all or nothing delivery; too risky
• Very document oriented, requiring docs at the end of each phase
Waterfall Usage
• Has been used widely
• Well suited for projects where requirements can be understood easily and
technology decisions are easy
• i.e. for familiar types of projects it may still be the best choice
Prototyping
• Prototyping addresses the requirement specification limitation of waterfall
• Instead of freezing requirements only by discussions, a prototype is built to
understand the requirements
• Helps alleviate the requirements risk
• A small waterfall model replaces the requirements stage
• Development of prototype
– Starts with initial requirements
– Only key features which need better understanding are included in prototype
– No point in including those features that are well understood
– Feedback from users taken to improve the understanding of the requirements
• Cost can be kept low
– Build only features needing clarification
– "quick and dirty" – quality not important; scripting etc. can be used
– Things like exception handling, recovery, and standards are omitted
– Cost can be a few % of the total
– Learning from building the prototype helps in building the final system, besides improving the requirements
Iterative Development
• Counters the ―all or nothing‖ drawback of the waterfall model
• Combines benefit of prototyping and waterfall
• Develop and deliver software in increments
• Each increment is complete in itself
• Can be viewed as a sequence of waterfalls
• Feedback from one iteration is used in the future iterations
• Products almost always follow it
• Used commonly in customized development also
– Businesses want quick response for SW
– Cannot afford the risk of all-or-nothing
• Newer approaches like XP, Agile- all rely on iterative development
Applicability: where response time is important, the risk of long projects cannot be taken,
and all requirements are not known up front
Timeboxing:
• Iterative is linear sequence of iterations
• Each iteration is a mini waterfall – decide the specifications, then plan the iteration
• Time boxing – fix iteration duration, then determine the specifications
• Divide iteration in a few equal stages
• Use pipelining concepts to execute iterations in parallel
• General iterative development – fixes the functionality for each iteration, then plans
and executes it
• In time-boxed iterations – fix the duration of the iteration and adjust the functionality to fit
it
• Completion time is fixed; the functionality to be delivered is flexible
• This itself is very useful in many situations
• Has predictable delivery times
• Overall product release and marketing can be better planned
• Makes time a non-negotiable parameter and helps focus attention on schedule
• Prevents requirements bloating
• Overall dev time is still unchanged
• What if we have multiple iterations executing in parallel?
• Can reduce the average completion time by exploiting parallelism
• For parallel execution, can borrow pipelining concepts from hardware
• This leads to Time-boxing Process Model
• Development is done iteratively in fixed duration time boxes
• Each time box divided in fixed stages
• Each stage performs a clearly defined task that can be done independently
• Each stage approximately equal in duration
• There is a dedicated team for each stage
• When one stage team finishes, it hands over the project to the next team
• With this type of time boxes, can use pipelining to reduce cycle time
• Like hardware pipelining – view each iteration as an instruction
• As stages have dedicated teams, simultaneous execution of different iterations is
possible
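The pipelining idea can be sketched numerically. Assume (purely for illustration) a 9-week time box split into three 3-week stages with a dedicated team per stage; successive iterations can then start every 3 weeks instead of every 9:

```python
stage_weeks = 3   # duration of each stage (assumed)
stages = 3        # stages per time box
iterations = 4    # number of time boxes considered

# With dedicated teams, iteration i can start as soon as the first-stage
# team is free, i.e. at i * stage_weeks; it finishes stages * stage_weeks later.
pipelined = [i * stage_weeks + stages * stage_weeks for i in range(iterations)]

# Without pipelining, each 9-week iteration runs only after the previous one.
serial = [(i + 1) * stages * stage_weeks for i in range(iterations)]

print(pipelined)  # [9, 12, 15, 18] -> a delivery every 3 weeks after the first
print(serial)     # [9, 18, 27, 36] -> a delivery every 9 weeks
```

The total work per iteration is unchanged; only the average time between deliveries shrinks, which matches the point that overall development time stays the same while cycle time improves.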
Summary of waterfall:
Strengths: simple; easy to execute; intuitive and logical; easy contractually.
Weaknesses: all or nothing – too risky; requirements frozen early; may choose outdated hardware/technology; disallows changes; no feedback from users; encourages requirement bloating.
Types of projects: well understood problems; short duration projects; automation of existing manual systems.
Summary of prototype:
Strengths: helps requirement elicitation; reduces risk; better and more stable final system.
Weaknesses: front heavy; possibly higher cost and schedule; encourages requirement bloating; disallows later change.
Types of projects: systems with novice users, or areas with requirement uncertainty; heavy reporting based systems can benefit from UI prototyping.
Summary of iterative:
Strengths: regular deliveries, leading to business benefit; can accommodate changes naturally; allows user feedback; avoids requirement bloating; naturally prioritizes requirements; allows reasonable exit points; reduces risks.
Weaknesses: overhead of planning each iteration; total cost may increase; system architecture and design may suffer; rework may increase.
Types of projects: for businesses where time is important; where the risk of long projects cannot be taken; where requirements are not known and evolve with time.
Summary of time-boxing:
At the top level, for a software product, these attributes can be defined as follows:
• Functionality. The capability to provide functions which meet stated and implied needs
when the software is used
• Reliability. The capability to maintain a specified level of performance under stated
conditions
• Usability. The capability to be understood, learned, and used
• Efficiency. The capability to provide appropriate performance relative to the amount of
resources used
8. What is the need for validating the requirements? Explain any requirement validation
techniques. Mention the six specific design process activities. Give explanation for two
of them.
The development of software starts with a requirements document, which is also used
to determine eventually whether or not the delivered software system is acceptable. It is
therefore important that the requirements specification contains no errors and specifies the
client's requirements correctly. Furthermore the longer an error remains undetected, the
greater the cost of correcting it. Hence, it is extremely desirable to detect errors in the
requirements before the design and development of the software begin. Due to the nature
of the requirement specification phase, there is a lot of room for misunderstanding and
committing errors, and it is quite possible that the requirements specification does not
accurately represent the client's needs. The basic objective of the requirements validation
activity is to ensure that the SRS reflects the actual requirements accurately and clearly. A
related objective is to check that the SRS document is itself of "good quality".
Sorry! couldn’t get answers for the last two parts of this question.
9. Differentiate between the following terms: Milestone and deliverable. Requirements
Definition and Specification, a software product and a software process.
(a) Milestone and Deliverable:
A milestone is a point some way through a project plan that indicates how far
the project has progressed. It is an important point in time such as "contract signed",
"project approved", etc., and usually has zero duration. A deliverable refers to a
tangible product that is produced signifying the reaching of the milestone. It is
something that people work on to finish the project – such as a completed
requirements document, or a test plan. A milestone has a symbolic purpose and is not
a physical creation (and therefore can represent things that are not tangible, such as
hitting the 3 month mark of the project). A deliverable, on the other hand, defines the
class of tangible (i.e. physical) products that the project produces on its path towards
achieving its ultimate goal. As a result, a project will have significantly fewer milestones
than deliverables.
The product is something tangible that you get after going through a process. After doing
some systems analysis work, the analyst will write a report. Doing the analysis is a process and
the report is a product of that phase. Each product can be used as part of carrying out the
next process. The analyst's report could be used as part of the design process. The resulting
design, which is a product, is then used in writing the programs, which is another process.
Thus there is no product that is not formed through a process.
10. Explain the spiral model. Discuss the features of a software project for which the spiral
model could be a preferred model? Justify your answer?
11. Describe the role of management on software development. Describe the major phases
in software development. Discuss the error distribution and cost of correcting the errors
during development.
Effective software management focuses on the four P’s: people, product, process, and
project. The order is not arbitrary. The manager who forgets that software engineering work is
an intensely human endeavor will never have success in project management. A manager
who fails to encourage comprehensive customer communication early in the evolution of a
project risks building an elegant solution for the wrong problem. The manager who pays little
attention to the process runs the risk of inserting competent technical methods and tools into
a vacuum. The manager who embarks without a solid project plan jeopardizes the success
of the product. The cultivation of motivated, highly skilled software people has been
discussed since the 1960s. In fact, the "people factor" is so important that the Software
Engineering Institute has developed a people management capability maturity model (PM-
CMM), "to enhance the readiness of software organizations to undertake increasingly
complex applications by helping to attract, grow, motivate, deploy, and retain the talent
needed to improve their software development capability". Management of people
includes recruiting, selection, performance management, training, compensation, career
development, organization and work design, and team/culture development. Organizations
that achieve high levels of maturity in the people management area have a higher
likelihood of implementing effective software engineering practices.
Before a project can be planned, product objectives and scope should be established,
alternative solutions should be considered, and technical and management constraints
should be identified. Without this information, it is impossible to define reasonable (and
accurate) estimates of the cost, an effective assessment of risk, a realistic breakdown of
project tasks, or a manageable project schedule that provides a meaningful indication of
progress.
As the process of software engineering proceeds, there are many reasons that software
projects get into trouble. The scale of many development efforts is large, leading to
complexity, confusion, and significant difficulties in coordinating team members. Uncertainty
is common, resulting in a continuing stream of changes that ratchets the project team.
Interoperability has become a key characteristic of many systems. To deal with these issues,
effective management is necessary.
It is a set of phases, each phase being a sequence of steps. The sequence of steps for a phase
defines the methodologies for that phase. We divide the development process into phases
because this helps to separate concerns and manage the complexity of development. The
major phases are:
Requirements analysis:
o Because software is always part of a larger system (or business), work begins by
establishing requirements for all system elements and then allocating some
subset of these requirements to software. This system view is essential when
software must interact with other elements such as hardware, people, and
databases. The requirements gathering process is intensified and focused
specifically on software. To understand the nature of the program(s) to be built,
the software engineer ("analyst") must understand the information domain for
the software, as well as required function, behavior, performance, and
interface. Requirements for both the system and the software are documented
and reviewed with the customer.
Design:
o Software design is actually a multistep process that focuses on four distinct
attributes of a program: data structure, software architecture, interface
representations, and procedural (algorithmic) detail. The design process
translates requirements into a representation of the software that can be
assessed for quality before coding begins. Like requirements, the design is
documented and becomes part of the software configuration
Coding:
o The design must be translated into a machine-readable form. The code generation step performs this task. If design is performed in a detailed manner, code generation can be accomplished mechanistically.
Testing:
o Once code has been generated, program testing begins. The testing process
focuses on the logical internals of the software, ensuring that all statements have
been tested, and on the functional externals; that is, conducting tests to
uncover errors and ensure that defined input will produce actual results that
agree with required results.
Support:
o Software will undoubtedly undergo change after it is delivered to the customer
(a possible exception is embedded software). Change will occur because errors
have been encountered, because the software must be adapted to
accommodate changes in its external environment (e.g., a change required
because of a new operating system or peripheral device), or because the
customer requires functional or performance enhancements. Software
support/maintenance reapplies each of the preceding phases to an existing
program rather than a new one.
The notion that programming is the central activity during software development is largely
due to programming being considered a difficult task and sometimes an "art." Another
consequence of this kind of thinking is the belief that errors largely occur during
programming, as it is the hardest activity in software development and offers many
opportunities for committing errors. It is now clear that errors can occur at any stage during
development. An example distribution of error occurrences by phase is:
Requirements 20%
Design 30%
Coding 50%
As we can see, errors occur throughout the development process. However, the cost of correcting errors of different phases is not the same; it depends on when the error is detected and corrected. The relative cost of correcting requirement errors rises steeply as a function of the phase in which they are detected.
Software Engineering is an engineering discipline that applies theories, methods and tools to
solve problems related to software production and maintenance.
Description:
The simplest process model is the waterfall model, which states that the phases are
organized in a linear order. The model was originally proposed by Royce, though variations of
the model have evolved depending on the nature of activities and the flow of control
between them. In this model, a project begins with feasibility analysis. Upon successfully
demonstrating the feasibility of a project, the requirements analysis and project planning
begins. The design starts after the requirements analysis is complete, and coding begins after
the design is complete. Once the programming is completed, the code is integrated and
testing is done. Upon successful completion of testing, the system is installed. After this, the
regular operation and maintenance of the system takes place.
Limitations:
1. It assumes that the requirements of a system can be frozen (i.e., baselined) before the
design begins. This is possible for systems designed to automate an existing manual
system. But for new systems, determining the requirements is difficult as the user does
not even know the requirements. Hence, having unchanging requirements is
unrealistic for such projects.
2. Freezing the requirements usually requires choosing the hardware (because it forms a
part of the requirements specification). A large project might take a few years to
complete. If the hardware is selected early, then due to the speed at which hardware
technology is changing, it is likely that the final software will use a hardware
technology on the verge of becoming obsolete. This is clearly not desirable for such
expensive software systems.
3. It follows the "big bang" approach—the entire software is delivered in one shot at the
end. This entails heavy risks, as the user does not know until the very end what they are
getting. Furthermore, if the project runs out of money in the middle, then there will be
no software. That is, it has the "all or nothing" value proposition.
Advantages:
1. Conceptually it is very simple and it cleanly divides the problem into distinct phases
that can be performed independently.
2. It is easy to administer in a contractual setup: each phase acts as a milestone.
Evolutionary Development Model:
For software products that have their feature sets redefined during development because of
user feedback and other factors, the traditional waterfall model is no longer appropriate.
Evolutionary Development Model (EVO) uses small, incremental product releases, frequent
delivery to users, and dynamic plans and processes. Although EVO is relatively simple in
concept, its implementation at HP has included both significant challenges and notable
benefits.
The EVO development model divides the development cycle into smaller, incremental
waterfall models in which users are able to get access to the product at the end of each
cycle. The users provide feedback on the product for the planning stage of the next cycle
and the development team responds, often by changing the product, plans, or process.
These incremental cycles are typically two to four weeks in duration and continue until the
product is shipped.
Benefits:
Successful use of EVO can benefit not only business results but marketing and internal
operations as well. From a business perspective, the biggest benefit of EVO is a significant
reduction in risk for software projects. This risk might be associated with any of the many ways
a software project can go awry, including missing scheduled deadlines, unusable products,
wrong feature sets, or poor quality. By breaking the project into smaller, more manageable
pieces and by increasing the visibility of the management team in the project, these risks
can be addressed and managed. Because some design issues are cheaper to resolve
through experimentation than through analysis, EVO can reduce costs by providing a
structured, disciplined avenue for experimentation. Finally, the inevitable change in
expectations when users begin using the software system is addressed by EVO’s early and
ongoing involvement of the user in the development process. This can result in a product
that better fits user needs and market requirements.
EVO allows the marketing department access to early deliveries, facilitating development of
documentation and demonstrations. Although this access must be given judiciously, in some
markets it is absolutely necessary to start the sales cycle well before product release. The
ability of developers to respond to market changes is increased in EVO because the
software is continuously evolving and the development team is thus better positioned to
change a feature set or release it earlier.
Short, frequent EVO cycles have some distinct advantages for internal processes and people
considerations. First, continuous process improvement becomes a more realistic possibility
with one-to-four-week cycles. Second, the opportunity to show their work to customers and
hear customer responses tends to increase the motivation of software developers and
consequently encourages a more customer-focused orientation. In traditional software
projects, that customer-response payoff may only come every few years and may be so
filtered by marketing and management that it is meaningless.
2. Design: Design activity begins with a set of requirements. Design is done before the
system is implemented. It is the intermediate language between requirements and
coding. Goal of design phase is to create a plan to satisfy the requirements and
perhaps it is the most critical activity during system development. Design also
determines the major characteristics of a system.
3. Implementation or Coding: The goal of the coding or programming activity is to
implement the design in the best possible manner. The coding activity affects both
testing and maintenance profoundly. The time spent in coding is a small percentage
of the total software cost, while testing and maintenance consume the major
percentage. Thus, it should be clear that the goal during coding should not be to
reduce the implementation cost, but the goal should be to reduce the cost of later
phases, even if it means that the cost of this phase has to increase. In other words, the
goal during this phase is not to simplify the job of the programmer. Rather, the goal
should be to simplify the job of the tester and the maintainer.
13. What are the objectives of software engineering? What is SRS? What are functional and
non-functional requirements in software engineering? The basic goal of the requirement
activity is to get a SRS that has some desirable properties, explain these desirable properties?
Develop methods and procedures for software development that can scale up for large
systems and that can be used to consistently produce high-quality software at low cost and
with a small cycle time.
1. Consistency
2. Low cost
3. High quality
4. Scalability
The basic approach that software engineering takes is to separate the development process from the developed product (i.e., the software). Software engineering focuses on the process in the belief that the quality of products developed using a process is influenced mainly by that process.
The design of proper software processes and their control is the primary goal of software engineering.
It is the focus on process for producing the products that distinguishes it from most other
computing disciplines.
SRS is a document that completely describes what the proposed software should do without
describing how the software will do it.
Basic goal of the requirements phase is to produce the SRS, which describes the complete
behavior of the proposed software.
1. An SRS establishes the basis for agreement between the client and the supplier on
what the software product will do.
Functional Requirements
Functional requirements specify which outputs should be produced from the given inputs.
They describe the relationship between the input and output of the system. For each
functional requirement, a detailed description of all the data inputs and their source, the
units of measure, and the range of valid inputs must be specified.
All the operations to be performed on the input data to obtain the output should be
specified. This includes specifying the validity checks on the input and output data,
parameters affected by the operation, and equations or other logical operations that must
be used to transform the inputs into corresponding outputs.
An important part of the specification is the system behavior in abnormal situations, like
invalid input or error during computation. The functional requirement must clearly state what
the system should do if such situations occur.
Behavior for situations where the input is valid but the normal operation cannot be
performed should also be specified. For example: an airline reservation system, where a
reservation cannot be made even for valid passengers if the airplane is fully booked.
Therefore, the system behavior for all foreseen inputs and all foreseen system states should
be specified.
Non-functional requirements
1. Performance Requirements
This part of an SRS specifies the performance constraints on the software system.
All the requirements relating to the performance characteristics of the system must be clearly
specified. There are two types of performance requirements: static and dynamic.
Static requirements are those that do not impose constraint on the execution characteristics
of the system. These include requirements like the number of simultaneous users to be
supported, and the number of files that the system has to process and their sizes. These are
also called capacity requirements of the system.
Dynamic requirements specify constraints on the execution behavior of the system. These
typically include response time and throughput constraints on the system. Response time is
the expected time for the completion of an operation under specified circumstances.
Throughput is the expected number of operations that can be performed in a unit time.
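As an illustration, the two dynamic performance measures can be estimated empirically. The sketch below is hypothetical: `operation` is a stand-in for whatever system operation is under test, and the averaging strategy is an assumption, not a prescribed method.

```python
import time

def operation():
    # hypothetical stand-in for the system operation under test
    sum(range(1000))

def measure(n_ops=1000):
    """Average response time and throughput over n_ops repetitions."""
    start = time.perf_counter()
    for _ in range(n_ops):
        operation()
    elapsed = time.perf_counter() - start
    response_time = elapsed / n_ops   # seconds per operation
    throughput = n_ops / elapsed      # operations per second
    return response_time, throughput
```

By construction the two measures are reciprocal for a single operation stream; in a real SRS they would instead be stated as constraints, e.g., "response time under 2 seconds for 95% of transactions at 30 transactions per second."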
2. Design Constraints
There are a number of factors in the client's environment that may restrict the choices of a
designer. Such factors include standards that must be followed, resource limits, operating
environment, reliability and security requirements, and policies that may have an impact on
the design of the system.
Standards Compliance: This specifies the requirements for the standards the system must
follow.
Hardware Limitations: The software may have to operate on some existing or predetermined
hardware, thus imposing restrictions on the design.
Reliability and Fault Tolerance: Fault tolerance requirements can place a major constraint on
how the system is to be designed. Recovery requirements are often an integral part here,
detailing what the system should do if some failure occurs to ensure certain properties.
Reliability requirements are very important for critical applications.
Security: Security requirements are particularly significant in defense systems and many
database systems.
All the interactions of the software with people, hardware, and other software should be
clearly specified. For the user interface, the characteristics of each user interface of the
software product should be specified. A preliminary user manual should be created with all
user commands, screen formats, an explanation of how the system will appear to the user,
and feedback and error messages.
For hardware interface requirements, the SRS should specify the logical characteristics of
each interface between the software product and the hardware components. If the
software is to execute on existing hardware or on predetermined hardware, all the
characteristics of the hardware, including memory restrictions, should be specified. In
addition, the current use and load characteristics of the hardware should be given.
A good SRS is:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
7. Modifiable
8. Traceable
1. An SRS is correct if every requirement included in the SRS represents something required in the final system. Correctness ensures that whatever is specified is indeed required of the system.
2. An SRS is complete if everything the software is supposed to do and the responses of the
software to all classes of input data are specified in the SRS. Completeness ensures that
everything is indeed specified.
3. An SRS is unambiguous if and only if every requirement stated has one and only one interpretation. Requirements are often written in natural language, which is inherently ambiguous.
4. An SRS is verifiable if and only if every stated requirement is verifiable, i.e., there exists some cost-effective process that can check whether the final software meets that requirement.
5. An SRS is consistent if there is no requirement that conflicts with another. Terminology can
cause inconsistencies; for example, different requirements may use different terms to refer
to the same object.
6. An SRS is ranked for importance and/or stability if for each requirement the importance
and the stability of the requirement are indicated. Stability of a requirement reflects the
chances of it changing in future.
7. An SRS is modifiable if its structure and style are such that any necessary change can be
made easily
8. An SRS is traceable if the origin of each of its requirements is clear and if it facilitates the referencing of each requirement in future development.
14. What is SRS? Explain the DFD? What is structured analysis? Write a SRS for the following: a)
Student registration system. b) Library automation system.
SRS
• The SRS establishes the basis of agreement between the user and the supplier.
• Users' needs have to be satisfied, but users may not understand software.
• Developers will develop the system, but may not know the problem domain.
• The SRS is the medium that bridges this communication gap and specifies user needs in a manner both parties can understand.
• The goal is not just to automate a manual system, but also to add value through IT.
• To satisfy the quality objective, one must begin with a high-quality SRS.
• Substantial savings result: extra effort spent during requirements saves many times that effort later.
Characteristics of an SRS
To properly satisfy the basic goals, an SRS should have certain properties and should contain
different types of requirements. In this section, we discuss some of the desirable
characteristics of an SRS and components of an SRS. A good SRS is:
1. Correct
2. Complete
3. Unambiguous
4. Verifiable
5. Consistent
6. Ranked for importance and/or stability
7. Modifiable
8. Traceable
Components of an SRS
Completeness of specifications is difficult to achieve and even more difficult to verify. Having
guidelines about what different things an SRS should specify will help in completely
specifying the requirements. Here we describe some of the system properties that an SRS
should specify. The basic issues an SRS must address are:
• Functionality
• Performance
• External interfaces
Conceptually, any SRS should have these components. If the traditional approach to requirements analysis is being followed, then the SRS might even have sections corresponding to these. However, functional requirements might be specified indirectly, for example by specifying the services on the objects or by specifying the use cases.
DFD
Data-flow based modeling, often referred to as the structured analysis technique, uses
function-based decomposition while modeling the problem. It focuses on the functions
performed in the problem domain and the data consumed and produced by these
functions. It is a top-down refinement approach, which was originally called structured
analysis and specification, and was proposed for producing the specifications. However, we
will limit our attention to the analysis aspect of the approach. Before we describe the
approach, let us describe the data flow diagram and the data dictionary, on which the technique relies heavily.
Data flow diagrams (also called data flow graphs) are commonly used during problem
analysis. Data flow diagrams (DFDs) are quite general and are not limited to problem
analysis for software requirements specification. They were in use long before the software
engineering discipline began. DFDs are very useful in understanding a system and can be
effectively used during analysis. A DFD shows the flow of data through a system. It views a
system as a function that transforms the inputs into desired outputs. Any complex system will
not perform this transformation in a "single step"; data will typically undergo a series of
transformations before it becomes the output. The DFD aims to capture the transformations
that take place within a system to the input data so that eventually the output data is
produced. The agent that performs the transformation of data from one state to another is
called a process (or a bubble). So, a DFD shows the movement of data through the different
transformations or processes in the system. The processes are shown by named circles and
data flows are represented by named arrows entering or leaving the bubbles. A rectangle
represents a source or sink and is a net originator or consumer of data. A source or a sink is
typically outside the main system of study.
• Views a system as a network of data transforms through which the data flows.
• Uses data flow diagrams (also called data flow graphs or DFDs) and functional decomposition in modelling.
• The structured system analysis and design (SSAD) methodology uses the DFD to organize information and guide analysis.
• The DFD captures how transformation occurs from input to output as data moves through the transforms.
• The focus is on what transforms happen; how they are done is not important.
DFD Conventions
• An OR relationship is represented by +.
• Work your way consistently from inputs to outputs, and identify a few high-level transforms that capture the full transformation.
• When the high-level transforms are defined, refine each transform with more detailed transformations.
• Never show control logic; if you find yourself thinking in terms of loops or decisions, stop and restart.
• Label each arrow and bubble; carefully identify the inputs and outputs of each transform.
Leveled DFDs
• Drawing the DFD as a leveled, top-down refinement process keeps the diagrams manageable and allows modeling of large and complex systems.
Data Dictionary
• The data dictionary precisely defines every data flow and data store appearing in the DFD; reviewing the DFD against it helps uncover errors such as missing processes or unlabeled flows.
http://www.scribd.com/doc/9321885/Online-University-Admission-System
SRS for Library Automation System -
http://www.scribd.com/doc/17337071/Srs-Library-Management-System
Note that the above two are detailed examples. In an exam you need not write this much explanation; the structure (point-wise) should remain the same.
15. We strive for the lowest possible coupling and the highest cohesion while designing software; why? Explain why maximizing cohesion and minimizing coupling leads to more maintainable systems. Which design attributes make a system more maintainable, and why?
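A small sketch of what low coupling and high cohesion look like in code (the functions and data format here are hypothetical, chosen only for illustration): each function has one well-defined purpose, and the functions interact only by passing data (data coupling), so either can be changed or tested in isolation.

```python
def parse_record(line):
    """Cohesive: does one thing - turn a CSV line into a (name, marks) pair."""
    name, marks = line.split(",")
    return name.strip(), int(marks)

def average_marks(lines):
    """Coupled to parse_record only through its return value (data coupling),
    not through shared globals or knowledge of its internals."""
    records = [parse_record(line) for line in lines]
    return sum(marks for _, marks in records) / len(records)
```

If the record format changes, only `parse_record` needs modification; `average_marks` is unaffected. This containment of change is exactly why low coupling and high cohesion make systems more maintainable.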
16. Briefly bring out the difference between verification and validation. What is COCOMO
model? Describe its approach to estimate person months. Explain in detail at least one
software cost estimation technique other than COCOMO.
Verification is the process of determining whether or not the products of a given phase
of software development fulfill the specifications established during the previous phase.
Software Verification:
Confirm that you “built it the right way”.
Provides objective evidence that the design outputs for a phase of the software
development lifecycle meet all of the specified requirements for that phase.
Looks for consistency, completeness, and correctness of the software and supporting
documentation as it is being developed.
Software Validation:
Confirm that you “built the right thing”.
Provides objective evidence that the software is appropriate for its intended use and
will be reliable and safe.
Ensures that all software requirements have been implemented correctly and
completely and are traceable to system requirements.
Estimation of person-months:
This model estimates the total effort in terms of person-months. The basic steps in this model are:
1. Obtain an initial estimate of the development effort from the estimate of the size in thousands of delivered lines of code (KDLOC), using Ei = a (KDLOC)^b.
2. Determine a set of 15 multiplying factors from the different attributes of the project.
3. Adjust the effort estimate by multiplying the initial estimate by all the multiplying factors.
There are 15 different attributes, called cost driver attributes, that determine the multiplying factors. These factors depend on product, computer, personnel, and technology attributes (called project attributes). These factors are:
• Product attributes
– Required software reliability
– Size of application database
– Complexity of the product
• Hardware attributes
– Run-time performance constraints
– Memory constraints
– Volatility of the virtual machine environment
– Required turnabout time
• Personnel attributes
– Analyst capability
– Software engineering capability
– Applications experience
– Virtual machine experience
– Programming language experience
• Project attributes
– Use of software tools
– Application of software engineering methods
– Required development schedule
Each of the 15 attributes receives a rating on a six-point scale that ranges from "very
low" to "extra high" (in importance or value).
The multiplying factors for all 15 cost drivers are multiplied to get the effort adjustment
factor (EAF).
The coefficient a and exponent b depend on the project type and are given in the following table:

Software project    a      b
Organic             3.2    1.05
Semi-detached       3.0    1.12
Embedded            2.8    1.20
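The intermediate COCOMO computation can be sketched as follows; the coefficient values are the standard ones for the three project types, while the size and EAF in the usage note are hypothetical inputs.

```python
# Intermediate COCOMO sketch: effort E = a * (KDLOC)^b * EAF, in person-months.
COEFFS = {
    "organic": (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (2.8, 1.20),
}

def cocomo_effort(kdloc, project_type="organic", eaf=1.0):
    """Estimated effort in person-months for a project of `kdloc`
    thousand delivered lines of code, adjusted by the EAF."""
    a, b = COEFFS[project_type]
    return a * (kdloc ** b) * eaf
```

For example, a 32 KDLOC organic project with EAF = 1.0 gives roughly 122 person-months; the same size as an embedded project would come out substantially higher because of the larger exponent.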
External inputs consist of all the data entering the system from external sources and
triggering the processing of data. Fields of a form are not usually counted individually but a
data entry form would be counted as one external input.
External outputs consist of all the data processed by the system and sent outside the
system. Data that is printed on a screen or sent to a printer including a report, an error
message, and a data file is counted as an external output.
External inquiries are input and output requests that require an immediate response
and that do not change the internal data of the system. The process of looking up a
telephone number would be counted as one external inquiry.
External interfaces consist of all the data that is shared with other software systems
outside the system. Examples include shared files, shared databases, and software libraries.
Internal files include the logical data and control files internal to the system. An internal
file could be a data file containing addresses. A data file containing addresses and
accounting information could be counted as two internal files.
When a function is identified for a given category, the function’s complexity must also
be rated as low, average, or high as shown in Table.
Each function count is multiplied by the weight associated with its complexity and all of the function counts are summed to obtain the count for the entire system, known as the unadjusted function points (UFP). This calculation is summarized by the following equation:

UFP = Σi Σj (wij × xij)

where wij is the weight for row i, column j, and xij is the function count in cell i, j.
1. Data communications
2. Distributed functions
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End user efficiency
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitates change
The ratings ci given to each of the 14 characteristics above are entered into the following formula to get the Value Adjustment Factor (VAF):

VAF = 0.65 + 0.01 × Σ ci

Finally, the UFP and VAF values are multiplied to produce the delivered (adjusted) function point count:

DFP = UFP × VAF
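Putting the pieces together, the whole function-point computation can be sketched as below. The weights shown are the standard average-complexity weights for the five categories; the counts and characteristic ratings used in any example would be hypothetical project data.

```python
# Average-complexity weights for the five function-point categories.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "external_interfaces": 7,
    "internal_files": 10,
}

def unadjusted_fp(counts):
    """UFP = sum of (function count x weight) over the categories."""
    return sum(counts[k] * AVG_WEIGHTS[k] for k in counts)

def adjusted_fp(ufp, gsc_ratings):
    """VAF = 0.65 + 0.01 * sum of the 14 characteristic ratings;
    delivered FP = UFP * VAF."""
    vaf = 0.65 + 0.01 * sum(gsc_ratings)
    return ufp * vaf
```

For instance, counts of 10 inputs, 5 outputs, 4 inquiries, 2 interfaces, and 3 internal files give a UFP of 125; if all 14 characteristics are rated 3, the VAF is 1.07 and the delivered count is 133.75 FP.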
17. Explain in details all the activities under risk management paradigm? Explain the
importance of project staffing and different staff structures along with their advantages.
Explain in detail the various management activities in a software engineering project.
b) Risk projection: Risk projection, also called risk estimation, attempts to rate each
risk in two ways—the likelihood or probability that the risk is real and the
consequences of the problems associated with the risk, should it occur. The
project planner, along with other managers and technical staff, performs four
risk projection activities: (1) establish a scale that
reflects the perceived likelihood of a risk, (2) delineate the consequences
of the risk, (3) estimate the impact of the risk on the project and the product,
and (4) note the overall accuracy of the risk projection so that there will be no
misunderstandings.
c) Risk refinement: During early stages of project planning, a risk may be stated
quite generally. As time passes and more is learned about the project and the
risk, it may be possible to refine the risk into a set of more detailed risks, each
somewhat easier to mitigate, monitor, and manage.
One way to do this is to represent the risk in condition-transition-
consequence (CTC) format. That is, the risk is stated in the following form:
Given that <condition> then there is concern that (possibly)
<consequence>.
For example: Given that all reusable software components must conform to specific design standards, and that some do not conform, then there is concern that (possibly) only a portion of the planned reusable modules will actually be integrated into the as-built system. This general condition can be refined in the following manner:
Subcondition 1. Certain reusable components were developed by a third
party with no knowledge of internal design standards.
Subcondition 2. The design standard for component interfaces has not
been solidified and may not conform to certain existing reusable components.
Subcondition 3. Certain reusable components have been implemented in
a language that is not supported on the target environment.
d) Risk mitigation, monitoring & management: All of the risk analysis activities presented to this point have a single goal: to assist the project team in developing a strategy for dealing with risk. An effective strategy must consider three issues:
• Risk avoidance
• Risk monitoring
• Risk management and contingency planning
Question ii) Explain the importance of project staffing and different staff structures
along with their advantages.
Answer: Once the effort is estimated, various schedules (or project duration) are
possible, depending on the number of resources (people) put on the project. For example,
for a project whose effort estimate is 56 person-months, a total schedule of 8 months is
possible with 7 people. A schedule of 7 months with 8 people is also possible, as is a schedule
of approximately 9 months with 6 people.
A schedule cannot be simply obtained from the overall effort estimate by deciding on
average staff size and then determining the total time requirement by dividing the total
effort by the average staff size. Brooks has pointed out that person and months (time) are
not interchangeable. According to Brooks, "... man and months are interchangeable only for
activities that require no communication among men, like sowing wheat or reaping cotton.
This is not even approximately true of software...."
Often, the staffing level is not changed continuously in a project and approximations
of the Rayleigh curve are used: assigning a few people at the start, having the peak team
during the coding phase, and then leaving a few people for integration and system testing.
For ease of scheduling, particularly for smaller projects, often the required people are
assigned together around the start of the project. This approach can lead to some people
being unoccupied at the start and toward the end. This slack time is often used for
supporting project activities like training and documentation.
Different team (staff) structures can be used to organize the people on a project:
• Democratic (egoless) team: all members are at the same level and decisions are made by consensus. Advantages: high morale, well suited to difficult problems where ideas from everyone help, and the project does not depend on a single person.
• Chief programmer team: a chief programmer makes all major technical decisions, supported by a backup programmer, a librarian, and other programmers. Advantages: clear authority and accountability, fast decision making, well suited to straightforward projects with tight schedules.
• Hierarchical (mixed) team: a project leader coordinates group leaders, each of whom leads a small democratic subgroup. Advantages: combines the benefits of the other two structures and scales to large projects.
18. Differentiate between the top-down approach and the bottom-up approach. Who should be involved in a requirements review? Draw a process model showing how a requirements review might be organized.
The top-down approach starts from the highest-level component of the hierarchy and
proceeds through to lower levels. By contrast, a bottom-up approach starts with the lowest-
level component of the hierarchy and proceeds through progressively higher levels to the
top-level component. A top-down design approach starts by identifying the major
components of the system, decomposing them into their lower-level components and
iterating until the desired level of detail is achieved. A bottom-up design approach starts
with designing the most basic or primitive components and proceeds to higher-level
components that use these lower-level components. Bottom-up methods work with layers of
abstraction. A top-down approach is suitable only if the specifications of the system are
clearly known and the system development is from scratch. However, if a system is to be built
from an existing system, a bottom-up approach is more suitable, as it starts from some
existing components. So, for example, if an iterative enhancement type of process is being
followed, in later iterations, the bottom-up approach could be more suitable.
The requirements review group should include the author of the requirements document, someone who understands the needs of the client, a member of the design team, and the person(s) responsible for maintaining the requirements document. It is also good practice to include some people not directly involved with product development, such as a software quality engineer.
A process model for a requirements review can be organized as a linear sequence of stages: plan the review, distribute the documents, prepare for the review, hold the review meeting, follow up on the actions raised, and revise the document.
19. Explain the use of design reviews in verifying a design. What is structure chart and how
are different types of modules represented in a structure chart? Illustrate with suitable
example. Which is the single attribute of software that allows a program to be intellectually
manageable and why, explain?
If the design is not specified in a formal, executable language, it cannot be processed through tools, and
other means for verification have to be used. The most common approach for verification is design review or
inspections. We discuss this approach here. The purpose of design reviews is to ensure that the design satisfies
the requirements and is of "good quality." If errors are made during the design process, they will ultimately
reflect themselves in the code and the final system. As the cost of removing faults caused by errors that occur
during design increases with the delay in detecting the errors, it is best if design errors are detected early, before
they manifest themselves in the system. Detecting errors in design is the purpose of design reviews.
For a function-oriented design, the design can be represented graphically by structure charts. The
structure of a program is made up of the modules of that program together with the interconnections between
modules. Every computer program has a structure, and given a program, its structure can be determined. The
structure chart of a program is a graphic representation of its structure. In a structure chart a module is
represented by a box with the module name written in the box. An arrow from module A to module B
represents that module A invokes module B. B is called the subordinate of A, and A is called the superordinate
of B. The arrow is labeled by the parameters received by B as input and the parameters returned by B as output,
with the direction of flow of the input and output parameters represented by small arrows. The parameters can
be shown to be data (unfilled circle at the tail of the label) or control (filled circle at the tail).
In a structure chart, the different types of modules are distinguished by the direction of the data flow attached to them: an input module obtains data from its subordinates and passes it up to its superordinate, an output module takes data from its superordinate and passes it down, a transform module accepts data, transforms it, and returns the result, and a coordinate module manages the flow of data among its subordinates.
As an example consider the structure of the following program:
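The original program is not reproduced here; as a hypothetical stand-in (all names invented), consider a small sort program whose structure chart has `main` as the superordinate of `read_nums` and `sort_nums`, with `swap` subordinate to `sort_nums`:

```python
# Structure chart of the program below (data parameters labeled on arrows):
#
#                  main
#                /      \
#        read_nums      sort_nums
#       (nums out)    (nums in, sorted nums out)
#                          |
#                         swap
#                    (nums, i, j in)
#
def swap(nums, i, j):
    """Subordinate of sort_nums: exchange two elements in place."""
    nums[i], nums[j] = nums[j], nums[i]

def sort_nums(nums):
    """Subordinate of main: bubble sort built on swap."""
    for end in range(len(nums) - 1, 0, -1):
        for i in range(end):
            if nums[i] > nums[i + 1]:
                swap(nums, i, i + 1)
    return nums

def read_nums():
    """Subordinate of main: supplies the input data (hard-coded here)."""
    return [3, 1, 2]

def main():
    nums = read_nums()      # nums flows up from read_nums to main
    return sort_nums(nums)  # nums flows down; the sorted list flows back up
```

Here `read_nums` is an input module, `sort_nums` is a transform module, and `main` is a coordinate module, matching the notation described above.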
Modularity is the single attribute of software that allows a program to be intellectually manageable.
Monolithic software (i.e., a large program composed of a single module) cannot be easily grasped by a reader.
The number of control paths, span of reference, number of variables, and overall complexity would make
understanding close to impossible. Modularity enhances design clarity, which in turn eases implementation, debugging,
testing, documentation, and maintenance of the software product. Modularity is where abstraction and
partitioning come together. For easily understandable and maintainable systems, modularity is clearly the basic
objective.
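As a small illustration of this point (the task and names are invented for the example), the same computation can be written as one monolithic block or partitioned into modules, each of which can be understood and tested in isolation:

```python
# Monolithic: one routine parses, filters, and averages in a single block;
# every concern must be held in mind at once to understand it.
def report_monolithic(raw):
    total, count = 0, 0
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        total += float(line)
        count += 1
    return total / count if count else 0.0

# Modular: each concern is a separately understandable unit.
def parse_values(raw):
    """Hide the comment/blank-line handling behind one abstraction."""
    lines = (s.strip() for s in raw.splitlines())
    return [float(s) for s in lines if s and not s.startswith("#")]

def average(values):
    """Keep the numeric policy (including the empty case) in one place."""
    return sum(values) / len(values) if values else 0.0

def report_modular(raw):
    return average(parse_values(raw))
```

The modular version partitions the problem along the abstractions `parse_values` and `average`, so a reader can grasp each module without tracing the whole program.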
20. What is the importance of design in the software engineering? If some existing modules
are to be re-used in building a new system, will you use a top-down or a bottom-up approach,
and why?