SUBMITTED TO:
SUBMITTED BY:
ANJANI KUNWAR
RA1803A10
10807973
B.TECH(CSE)-H
Q. 1 Suggest the most appropriate software process model that might be used as a
basis for managing the development of the following systems. Also mention the
reasons for your selection.
a) A university accounting system that replaces the existing system
b) A system to control anti-lock braking in a car
c) An interactive timetable system with a complex user interface, which must be stable and reliable
ANS:
a) Since this new system will be replacing an established system, eliciting requirements in
advance of design and implementation is quite feasible. The resulting system will also be
quite large. Both of these conditions lead one to conclude that a Waterfall-type process
model would be the most appropriate here. "Incremental" or "Spiral" are also defensible
choices. Another approach is to upgrade the existing system, reusing components of the old
one, though one cannot rely on being able to reuse components of an existing system if
reuse was not a consideration when it was developed.
c) An interactive timetable system with a complex user interface, which must be stable and
reliable. The stability/reliability requirement implies something like Waterfall, or at least
Incremental, but the presence of a user interface (which requires user feedback to be
developed) is problematic. The best approach would probably be to employ a throw-away
prototype to find requirements and then use a Waterfall or Incremental model.
Evolutionary Development is also defensible here. One problem with evolutionary
development is that the final system might not be as robust as it needs to be. One point in its
favour is that the user interface will need to evolve during development, as its requirements
are refined.
Q. 2 What are the advantages and disadvantages of developing a product in which
quality is "good enough"? That is, what happens when we emphasize
development speed over product quality?
ANS:
Regarding quantitative delivery, the statement of work, contract terms and regular
communications are all tools to establish expectations and measure results. Lacking these, were
you counting on good will alone to meet your needs?
Time, however, is almost always quantitative. When time is of the essence, it may be entirely
foolish to risk an unknown vendor. Test unknown vendors on easier tasks, and upgrade them to
favored status as earned; and, of course, pay your favored vendors according to their worth.
Once you've written clean, modular code to perform a task, you should strive to reuse it as much
as possible. Package it up in a class, library, or just a directory—whatever option you have
available given the language and environment you're working in. Presuming it's been tested and
debugged, you can just drop it into your next project. That's one more black box in your next
design you don't have to worry about.
The advantages of this should be obvious: you don’t waste time rewriting the same code over
and over again, you know the code being used works and has been debugged, and profits go up
accordingly.
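As a minimal sketch of this packaging idea (the module name and functions are hypothetical, for illustration only):

```python
# text_utils.py -- a small, reusable, debugged module that any
# project can import rather than rewriting the same logic.

def count_words(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

def unique_words(text):
    """Return the set of distinct lowercase words in text."""
    return {word.lower() for word in text.split()}
```

Once this lives in one place, every project that needs word handling imports it; a bug fixed here is fixed everywhere at once.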
By the way, "cut and paste" is NOT reusing code properly. Aside from being a sign of a true
hacker (in the poorest sense possible, perhaps "kludger" would be a better term), it doesn't make
good software engineering sense. What happens if you discover some obscure bug in that code?
Or make an upgrade? Are you going to find all the software you ever incorporated that code
into? It would be a complete waste of time.
This should also make clear another benefit of good design—assuming you publish the interface
to your class/routines, and stick to those interfaces, you can improve the code without breaking
things for everyone. For example, say you write a search class that basically provides grep-like
functionality. People use it in their various projects, and life is good. One day, you figure out
how to make the search faster by a factor of 5. If people are just using your published interfaces,
and you can improve the code without changing them, everyone’s software will suddenly see
search speed improvements with no work on their part. This assumes use of a technology such as
shared libraries. With static libraries or classes, a recompile would be necessary, but that is a
small price to pay for such an improvement.
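The search-class scenario above can be sketched as follows (class and method names are hypothetical; the point is that the published interface stays fixed while the implementation behind it is replaced):

```python
import re

class Searcher:
    """Grep-like search with a stable published interface.

    Callers rely only on search(lines, pattern); the implementation
    behind it can change without breaking them.
    """

    def search(self, lines, pattern):
        """Return the lines matching pattern (a regular expression)."""
        return [line for line in lines if re.search(pattern, line)]

class FastSearcher(Searcher):
    """Drop-in replacement with the same interface."""

    def search(self, lines, pattern):
        compiled = re.compile(pattern)  # compile once, not per line
        return [line for line in lines if compiled.search(line)]
```

Because both classes honour the same interface, swapping `Searcher` for `FastSearcher` speeds up every caller without changing a line of their code.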
Most software development is a chaotic activity, often characterized by the phrase "code and
fix". The software is written without much of an underlying plan, and the design of the system is
cobbled together from many short-term decisions. This actually works pretty well while the system
is small, but as the system grows it becomes increasingly difficult to add new features to the
system. Furthermore, bugs become increasingly prevalent and increasingly difficult to fix. A
typical sign of such a system is a long test phase after the system is "feature complete". Such a
long test phase plays havoc with schedules as testing and debugging is impossible to schedule.
The original movement to try to change this introduced the notion of methodology. These
methodologies impose a disciplined process upon software development with the aim of making
software development more predictable and more efficient. They do this by developing a detailed
process with a strong emphasis on planning inspired by other engineering disciplines - which is
why I like to refer to them as engineering methodologies (another widely used term for them is
plan-driven methodologies).
Engineering methodologies have been around for a long time. They've not been noticeable for
being terribly successful. They are even less noted for being popular. The most frequent criticism
of these methodologies is that they are bureaucratic. There's so much stuff to do to follow the
methodology that the whole pace of development slows down.
ANS:
2. Each increment builds on the one before it, and no one wants to build on a low-quality
foundation.
3. If the first increments are low in quality, customers and users may become concerned about
the competence of the team; unnecessary tension may develop, and follow-on communication
suffers.
Q. 4 Programs developed using the evolutionary model are difficult to maintain. Discuss the
major reasons for this.
ANS:
Evolutionary prototyping
Evolutionary Prototyping (also known as breadboard prototyping) is quite different from
Throwaway Prototyping. The main goal when using Evolutionary Prototyping is to build a very
robust prototype in a structured manner and constantly refine it. The reason for this is that the
evolutionary prototype, when built, forms the heart of the new system, on which the
improvements and further requirements will be built.
When developing a system using Evolutionary Prototyping, the system is continually refined and
rebuilt.
This technique allows the development team to add features, or make changes that couldn't be
conceived during the requirements and design phase.
For a system to be useful, it must evolve through use in its intended operational
environment. A product is never "done"; it is always maturing as the usage environment
changes. We often try to define a system using our most familiar frame of reference,
where we are now. We make assumptions about the way business will be conducted and
the technology base on which the business will be implemented. A plan is enacted to
develop the capability, and, sooner or later, something resembling the envisioned system
is delivered.
In Evolutionary Prototyping, developers can focus on the parts of the system that they
understand instead of trying to develop the whole system at once.
To minimize risk, the developer does not implement poorly understood features. The
partial system is sent to customer sites. As users work with the system, they detect
opportunities for new features and give requests for these features to developers.
Developers then take these enhancement requests along with their own and use sound
configuration-management practices to change the software-requirements specification,
update the design, recode and retest.
This continual rework is the main reason such programs are difficult to maintain: the design
degrades as change after change is grafted onto the prototype, documentation rarely keeps
pace with the code, and decisions made for prototyping speed become permanent parts of
the delivered system.
Q. 5 Discuss the problems of using natural language for defining user and system
requirements, and show, using small examples, how structuring natural language
into forms can help avoid some of these difficulties.
ANS:
The main problem with expressing requirements in natural language is the ambiguity that results.
Vague terms can be interpreted differently from different viewpoints. Further, programmers and
customers might use the same word to mean different things (e.g., “fast” is likely to refer to
response time for users, but efficiency for programmers). Some ways to mitigate these
problems:
• Use a standard format (or a form) for the requirements. This makes it easier to read them
and to understand the role of each sentence within a requirement specification
• Use consistent language. E.g., “must” implies a mandatory requirement, “should” a
desirable requirement
• Avoid use of computer jargon
Natural language is a more serious problem in the specification of system requirements, where
more detail and precision are required. Structuring the text into standard forms helps here. A
form would include standardized fields; this forces the writer to answer certain questions, and
helps the reader find the answers quickly. A standard form could include the following fields:
the function being specified, a description, the inputs and where they come from, the outputs
and where they go, pre- and post-conditions, and any side effects.
For example, the vague requirement "The system should respond quickly" could be structured
as: Function: catalogue query. Description: return the records matching a query. Inputs: query
string from the user terminal. Outputs: matching records, displayed on the terminal.
Pre-condition: the catalogue database is online. Post-condition: the results are displayed.
Constraint: response within 2 seconds for 95% of queries.
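One way to make such a form concrete in code (a sketch; all field names and the sample values are hypothetical) is a record type whose mandatory fields force the writer to answer each question:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A structured requirement form; every field must be filled in."""
    ident: str          # unique identifier
    function: str       # what the system must do
    inputs: str         # inputs and their source
    outputs: str        # outputs and their destination
    precondition: str   # what must hold before the function runs
    postcondition: str  # what holds after it completes
    priority: str       # "must" (mandatory) or "should" (desirable)

# Filling in the form; omitting any field raises an error at construction.
req = Requirement(
    ident="REQ-017",
    function="Compute the response time of a database query",
    inputs="Query text from the operator console",
    outputs="Elapsed time in milliseconds, shown on the console",
    precondition="The database connection is open",
    postcondition="The measured time is appended to the audit log",
    priority="must",
)
```

Because the form's fields are mandatory, an incomplete requirement simply cannot be recorded, which is exactly the discipline the structured form is meant to impose.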
ANS:
Functional Requirements
The official definition for a functional requirement specifies what the system should do:
"A requirement specifies a function that a system or component must be able to perform."
For example: "Display the heart rate, blood pressure and temperature of a patient connected to
the patient monitor."
Typical areas covered by functional requirements include:
Business Rules
Administrative functions
Authentication
External Interfaces
Certification Requirements
Reporting Requirements
Historical Data
Non-Functional Requirements
The official definition for a non-functional requirement specifies how the system should behave:
Non-functional requirements specify all the remaining requirements not covered by the
functional requirements. They specify criteria used to judge the operation of a system, rather
than specific behaviors. For example:
"Display of the patient's vital signs must respond to a change in the patient's status within 2
seconds."
Typical non-functional requirement categories include:
Scalability
Capacity
Availability
Reliability
Recoverability
Maintainability
Serviceability
Security
Regulatory
Manageability
Environmental
Data Integrity
Usability
Interoperability
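Unlike vague wording, a non-functional requirement stated with a number, such as the 2-second response limit above, is directly testable. A minimal sketch (the `update_display` function is a hypothetical stand-in for the real monitoring system):

```python
import time

RESPONSE_LIMIT_SECONDS = 2.0  # from the stated non-functional requirement

def update_display():
    """Hypothetical stand-in for refreshing the vital-signs display."""
    time.sleep(0.01)  # simulate a fast refresh

def meets_response_requirement(operation, limit=RESPONSE_LIMIT_SECONDS):
    """Time one call to operation and check it against the limit."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= limit
```

A requirement phrased as "must respond quickly" could not be checked this way; the measurable form can be verified automatically in a test suite.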