
Characteristics of Software

1. Software is developed or engineered; it is not manufactured in the classical sense.
2. Software does not wear out.
3. Most engineering disciplines are moving toward component-based assembly; most software is still custom built.

Software Application

1. System Software: device drivers, compilers, operating system components (software that serves other programs).
2. Real-time Software: software that monitors / analyzes / controls real-world events.
3. Business Software: MIS, payroll, accounts, etc.
4. Engineering and Scientific Software: CAD, simulation, numerical analysis.
5. Embedded Software: software residing within a product or system (firmware).
6. Personal Computer Software: office suites, DBMS, multimedia.
7. Artificial Intelligence Software: expert systems / knowledge-based systems.

System Development Life Cycle

System Development Life Cycle / Classic Life Cycle / Waterfall Model / Linear Sequential Model

1. Software requirements analysis
2. Design
3. Code generation
4. Testing
5. Support
Causes of failure of the above model:

1. Real Projects rarely follow the sequential flow that the model proposes.
2. It is often difficult for the customer to state all requirements explicitly.
3. The customer must have patience; a minor change in the system may force much of the development to be
reworked.

PROTOTYPING MODEL

Applicable when the customer has a legitimate need but is unclear about the details; develop a prototype as a
first step. The prototype can serve as "the first system".
THE RAD MODEL (Rapid Application Development)

1. Business modeling: what information drives the business process, what information is generated, who generates it, and where it goes.
2. Data modeling: E-R diagram and DFD preparation.
3. Process modeling: processing descriptions for data operations such as add, delete, edit, and retrieve.
4. Application generation: RAD uses 4GL techniques rather than 3GLs (reuse of existing code / components).
5. Testing and turnover.

DRAWBACKS OF RAD

1. For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD
teams.
2. RAD requires developers and customers who are committed to the rapid-fire activities needed to complete the
system in the abbreviated time frame.
3. Not all applications are appropriate for RAD. If a system cannot be properly modularized, building its
components is problematic; if high performance is required and must be achieved by tuning component interfaces,
the RAD approach may not work.
4. RAD is not appropriate when technical risks are high.
THE SPIRAL MODEL

1. Proposed by Boehm
2. A combination of (a) the iterative nature of prototyping and (b) the controlled, systematic aspects of the linear sequential model.
3. Provides the potential for rapid development of incremental versions of the software.
4. The software is developed in a series of incremental releases.

Working of Spiral Model:

The spiral model consists of a number of framework activities, called "task regions"; typically there are six.
1. Customer communication: tasks required to establish effective communication between customer and developers.
2. Planning: tasks required to define resources, timelines, and other project-related information.
3. Risk analysis: tasks required to assess both technical and management risks.
4. Engineering: tasks required to build one or more representations of the application.
5. Construction and release: tasks required to construct, test, install, and provide user support.
6. Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software.

Question:
1. Which of the software engineering paradigms presented in this unit do you think would be most effective? Why?
2. Provide five examples of software development projects that would be amenable to prototyping, and two or three
applications that would be more difficult to prototype.
3. Provide three examples of fourth generation techniques (4GT).
UNIT 2 (PROJECT MANAGEMENT CONCEPT)

4 P’s of Project management concept: PEOPLE, PROCESS, PROJECT AND PRODUCT

People: PM-CMM (People Management – Capability maturity model)


• Key practices defined by PM-CMM are recruiting, selection, performance management, training,
compensation, career development, organisation and work design.

Process: 1. A framework of activities that provides the plan for software development.
2. A small number of framework activities applicable to all projects.
3. Each framework activity is populated by task sets, milestones, work products, and quality assurance points.

Project: planned and controlled software projects are the only known way to manage complexity.

In a 1990 survey, 26% of software projects failed outright and 46% experienced cost and schedule overruns.
TESTING

Testing is a series of test cases that are intended to "demolish" the software that has been built.
It is a destructive process rather than a constructive one.

Testing Objectives

1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
• The objective is to design test cases that systematically uncover different classes of errors with a minimum
amount of time and effort.
• Testing cannot show the absence of errors and defects; it can only show that errors and defects are present.

Testing principles

All tests should be traceable to customer requirements: As we have seen, the objective of software testing is
to uncover errors. It follows that the most severe defects (from the customer’s point of view) are those that
cause the program to fail to meet its requirements.

Tests should be planned long before testing begins: Test planning can begin as soon as the requirements model is
complete. Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore,
all tests can be planned and designed before any code has been generated.

The Pareto principle applies to software testing: Stated simply, the Pareto principle implies that 80 percent of all
errors uncovered during testing will likely be traceable to 20 percent of all program components. The problem, of
course, is to isolate these suspect components and to thoroughly test them.

Testing should begin “in the small” and progress toward testing “in the large”: The first tests planned
and executed generally focus on individual components. As testing progresses, focus shifts in an attempt to find
errors in integrated clusters of components and ultimately in the entire system.

Exhaustive testing is not possible: The number of path permutations for even a moderately sized program is
exceptionally large. For this reason, it is impossible to execute every combination of paths during testing. It is
possible, however, to adequately cover program logic and to ensure that all conditions in the component-level
design have been exercised.
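
As a rough worked illustration (the figures are assumed for the sake of the example, not drawn from any particular program): if a module contains a loop that may execute up to 20 times and the loop body offers 5 distinct paths, the number of distinct path permutations is about 5^20, roughly 10^14. Even at one test per millisecond, executing them all would take on the order of 3,000 years, which is why exhaustive path testing is impractical and selective coverage of program logic is used instead.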

To be most effective, testing should be conducted by an independent third party. By most effective, we
mean testing that has the highest probability of finding errors (the primary objective of testing). The Software
engineer who created the system is not the best person to conduct all tests for the software.
Testability
Software testability is simply how easily [a computer program] can be tested.
Testability is used to mean how adequately a particular set of tests will cover the product.

Operability: “The better it works, the more efficiently it can be tested”.


• The system has few bugs (bugs add analysis and reporting overhead to the test process).
• No bugs block the execution of tests.
• The product evolves in functional stages (allows simultaneous development and testing).
Observability: ‘What you see is what you test’
• Distinct output is generated for each input.
• System states and variables are visible or queriable during execution.
• Past system states and variables are visible or queriable (e.g., transaction logs).
• All factors affecting the output are visible.
• Incorrect output is easily identified.
• Internal errors are automatically detected through self testing mechanisms.
• Internal errors are automatically reported.
• Source code is accessible.
Controllability: “The better we can control the Software, the more the testing can be automated and
optimized.”
• All possible outputs can be generated through some combination of input.
• All code is executable through some combination of input.
• Software and hardware states and variables can be controlled directly by the test engineer.
• Input and output formats are consistent and structured.
• Tests can be conveniently specified, automated, and reproduced.
Decomposability: “By controlling the scope of testing, we can more quickly isolate problems and perform
smarter testing.”
• The software system is built from independent modules.
• Software modules can be tested independently.
Simplicity: “The less there is to test, the more quickly we can test it.”
• Functional simplicity (e.g., the feature set is the minimum necessary to meet requirements).
• Structural simplicity (e.g., architecture is modularized to limit the propagation of faults).
• Code simplicity (e.g., a coding standard is adopted for ease of inspection and maintenance).
Stability: The fewer the changes, the fewer the disruptions to testing.
• Changes to the software are infrequent.
• Changes to the software are controlled.
• Changes to the software do not invalidate existing tests.
• The software recovers well from failures.
Understandability: ‘The more information we have, the smarter we will test.’
• The design is well understood.
• Dependencies between internal, external, and shared components are well understood.
• Changes to the design are communicated.
• Technical documentation is instantly accessible.
• Technical documentation is well organized.
• Technical documentation is specific and detailed.
• Technical documentation is accurate.

Attributes of a “Good” Test.


1. A good test has a high probability of finding an error.
2. A good test is not redundant.
3. A good test should be best of breed.
4. A good test should be neither too simple nor too complex.
Testing can be done in 2 ways

1. Knowing the specified function that a product has been designed to perform, tests can be conducted
that demonstrate each function is fully operational while at the same time searching for errors in
each function.
2. Knowing the internal working of a product, tests can be conducted to ensure that “all gears mesh”
that is internal operations are performed according to specifications and all internal components have
been adequately exercised.

The first approach is called "BLACK BOX" testing (external interface and functioning).
The second approach is called "WHITE BOX" testing (internal operations as per specification).

• Black box testing alludes to tests that are conducted at the software interface: that inputs are properly
accepted, outputs are correctly generated, and the integrity of external information (e.g., a database) is
maintained.
• White box testing of software is predicated on close examination of procedural detail.
a. Loops, nested loops, and conditions should all be exercised and should not fail in white box testing.

White Box Testing (Glass box):

a) Guarantee that all independent paths within a module have been exercised at least once.
b) Exercise all logical decisions on their true and false sides.
c) Execute all loops at their boundaries and within their operational bounds.
d) Exercise internal data structures to ensure their validity (see the sketch below).
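
As a minimal sketch of these four goals (the routine and test values below are invented for this note, not taken from any real system), the white box test cases exercise both sides of the decision and drive the loop at and within its bounds:

    def classify_scores(scores, passing_mark=40):
        """Return (pass_count, fail_count) for a list of scores."""
        passes = 0
        fails = 0
        for s in scores:                      # loop under test
            if s >= passing_mark:             # decision under test
                passes += 1
            else:
                fails += 1
        return passes, fails

    # White box test cases:
    assert classify_scores([]) == (0, 0)            # loop executed zero times (boundary)
    assert classify_scores([40]) == (1, 0)          # one iteration, decision true at its boundary
    assert classify_scores([39]) == (0, 1)          # one iteration, decision false
    assert classify_scores([10, 90, 40]) == (2, 1)  # several iterations, both branches taken

Together these cases cover every independent path through the small routine at least once.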

Reasons to conduct white box testing:

1. Logic errors and incorrect assumptions are inversely proportional to the probability that a program
path will be executed.
2. We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular
basis.
3. Typographical errors are random.

Basis path testing

1. Flow graph notation
2. Cyclomatic complexity (worked example below)
3. Deriving test cases
4. Graph matrices
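
A small worked example of the cyclomatic complexity step (the routine is hypothetical, written only to show the calculation): drawing the flow graph of the function below gives V(G) = E - N + 2, or equivalently V(G) = P + 1 where P is the number of predicate nodes; V(G) is an upper bound on the number of independent paths that basis path testing must exercise.

    def grade(mark):
        """Map a numeric mark to a letter grade (hypothetical example)."""
        if mark >= 75:        # predicate 1
            return "A"
        if mark >= 50:        # predicate 2
            return "B"
        return "C"

    # V(G) = P + 1 = 2 + 1 = 3, so three independent paths; one test case per basis path:
    assert grade(80) == "A"   # path: predicate 1 true
    assert grade(60) == "B"   # path: predicate 1 false, predicate 2 true
    assert grade(30) == "C"   # path: both predicates false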

Control structure testing

1. Condition testing
2. Data flow testing
3. Loop testing
a. Simple Loop
b. Nested Loop
c. Concatenated Loop
d. Unstructured Loop
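
As a sketch of simple-loop testing (the loop bound n = 10 and the helper below are assumed purely for illustration), the classic heuristic is to skip the loop entirely, make one pass, two passes, a typical number of passes, and then n - 1, n, and an attempted n + 1 passes:

    def sum_first(values, n=10):
        """Sum at most the first n elements of values (simple loop with bound n)."""
        total = 0
        for i in range(min(n, len(values))):
            total += values[i]
        return total

    data = list(range(1, 21))                 # 1, 2, ..., 20
    assert sum_first([]) == 0                 # skip the loop entirely (0 passes)
    assert sum_first([5]) == 5                # exactly one pass through the loop
    assert sum_first([5, 7]) == 12            # two passes
    assert sum_first(data[:4]) == 10          # m passes, where m < n (typical case)
    assert sum_first(data[:9]) == 45          # n - 1 passes
    assert sum_first(data[:10]) == 55         # n passes
    assert sum_first(data) == 55              # attempt n + 1 passes: the bound must still hold

Nested and concatenated loops repeat the same idea, starting with the innermost loop while holding the outer loops at their minimum values.
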
Black Box Testing: also called behavioural testing.
• It focuses on the functional requirements of the software.
• It is not an alternative to white box testing; rather, it is a complementary approach.
• It enables the software engineer to derive sets of input conditions that will fully exercise all functional
requirements for a program.

Black box testing finds errors in: 1. incorrect or missing functions, 2. interface errors, 3. errors in data structures
or external database access, 4. behaviour or performance errors, and 5. initialization and termination errors.

1. Graph-based testing methods: the software engineer begins by creating a graph – a collection of
nodes that represent objects; links that represent the relationships between objects; node weights that
describe the properties of a node (e.g., a specific data value or state behaviour); and link weights that
describe some characteristic of a link.

A number of behavioural testing methods make use of graphs:
a) Transaction flow modeling
b) Finite state modeling
c) Data flow modeling
d) Timing modeling.
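
A minimal finite state modeling sketch (the states, events, and transitions of this order object are invented for illustration): each node of the graph is a state, each link is an allowed transition, and test cases are chosen so that every link is traversed at least once, plus at least one illegal transition:

    # Allowed transitions of a hypothetical order object: state -> {event: next_state}
    TRANSITIONS = {
        "new":       {"submit": "placed"},
        "placed":    {"pay": "paid", "cancel": "cancelled"},
        "paid":      {"ship": "shipped"},
        "shipped":   {},
        "cancelled": {},
    }

    def apply(state, event):
        """Follow one link of the state graph, or reject an illegal transition."""
        try:
            return TRANSITIONS[state][event]
        except KeyError:
            raise ValueError("illegal transition: %s --%s-->" % (state, event))

    # Test cases covering every link (transition) in the graph at least once:
    assert apply("new", "submit") == "placed"
    assert apply("placed", "pay") == "paid"
    assert apply("placed", "cancel") == "cancelled"
    assert apply("paid", "ship") == "shipped"

    # And one negative case: an illegal link must be rejected.
    try:
        apply("shipped", "pay")
        assert False, "expected the illegal transition to be rejected"
    except ValueError:
        pass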

2. Equivalence partitioning: divide the input domain into classes of data from which test cases can be derived.
3. Boundary value analysis: select test cases at the edges (boundaries) of the equivalence classes (see the sketch below).
4. Comparison testing: when multiple implementations of the same specification are built, run them against the same test data and compare their outputs.
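
A small combined sketch of equivalence partitioning and boundary value analysis (the function is hypothetical; the valid range 1 to 10 echoes the scale factor example used later in these notes): the input domain splits into three equivalence classes, and boundary value analysis adds tests at and just beyond each edge of the valid class:

    def accept_scale_factor(value):
        """Accept an integer scale factor in the valid range 1..10 (hypothetical)."""
        return isinstance(value, int) and 1 <= value <= 10

    # Equivalence partitioning: one representative test per class.
    assert accept_scale_factor(-3) is False   # class 1: below the valid range
    assert accept_scale_factor(5) is True     # class 2: inside the valid range
    assert accept_scale_factor(42) is False   # class 3: above the valid range

    # Boundary value analysis: values at and just beyond each boundary.
    assert accept_scale_factor(0) is False
    assert accept_scale_factor(1) is True
    assert accept_scale_factor(10) is True
    assert accept_scale_factor(11) is False
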
Software Reengineering
Document restructuring:
1. Creating documentation from scratch is far too time consuming; it may be better to live with what exists.
2. Documentation must be updated, but only for those portions of the system currently being changed.
3. The system is business critical and must be fully redocumented.

Reverse engineering:
1. The term reverse engineering has its origins in the hardware world.
2. Reverse engineering for software is quite similar.
3. Reverse engineering for software is the process of analyzing a program in an effort to create a
representation of the program at a higher level of abstraction than source code.
4. Reverse engineering is a process of design recovery. Reverse engineering tools extract data,
architectural, and procedural design information from an existing program.

Code restructuring: The overall architecture of the old program is sound, but individual
modules were coded in a way that makes them difficult to understand, test, and maintain. In
such cases, the code within the suspect modules can be restructured.
To accomplish this activity, the source code is analyzed using a restructuring tool. Violations of
structured programming constructs are noted and code is then restructured (this can be done
automatically). The restructured code is reviewed and tested to ensure that no anomalies have been
introduced. Internal code documentation is updated.
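
A tiny before/after illustration of what such a restructuring pass aims at (both versions are invented; a real restructuring tool works on far larger modules): the first version relies on a flag variable and needless nesting, the second is the behaviour-preserving restructured form:

    # Before: hard to read, test, and maintain -- flag variable plus needless nesting.
    def has_negative_before(values):
        found = False
        i = 0
        while i < len(values):
            if not found:
                if values[i] < 0:
                    found = True
            i = i + 1
        return found

    # After: same behaviour, restructured into a simpler, easily testable form.
    def has_negative_after(values):
        return any(v < 0 for v in values)

    # Regression check that the restructuring introduced no anomaly.
    for sample in ([], [1, 2, 3], [1, -2, 3], [-1]):
        assert has_negative_before(sample) == has_negative_after(sample)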

Data restructuring: performed when the existing data architecture is weak; for example, a sales record structure
that originally distinguished only within-state and out-of-state sales must be restructured when export sales are
added.

Forward engineering: Forward engineering, also called renovation or reclamation [CHI90], not only
recovers design information from existing software, but uses this information to alter or reconstitute
the existing system in an effort to improve its overall quality. In most cases, reengineered software
reimplements the function of the existing system and also adds new functions and/or improves overall
performance.

REVERSE ENGINEERING

The abstraction level of a reverse engineering process and the tools used to effect it refers to the
sophistication of the design information that can be extracted from source code (the tools should be able to
recover design information such as relationships, data flow, etc.).
The core of reverse engineering is an activity called extract abstractions. The engineer must evaluate
the old program and from the (often undocumented) source code, extract a meaningful specification
of the processing that is performed, the user interface that is applied, and the program data structures
or database that is used.

1. Reverse Engineering to Understand Processing: analysis of the code at several levels of abstraction (system,
program, component, pattern, and statement), and of the relationships between the different systems, programs,
and components and the results they produce.

2. Reverse Engineering to Understand Data:

a) Internal data structures: identify local data structures, flags, and global data structures, and their
relationships to the overall program execution.

b) Database structure:
1. Build an initial object model: inspect the existing attributes and their values.
2. Determine candidate keys: identify attributes that can serve as candidate keys.
3. Refine the tentative classes: determine whether similar classes can be combined into one.
4. Define generalizations: factor the common features of similar classes into a parent class.
5. Discover associations: identify the relationships (associations) between classes.

3. Reverse Engineering User Interfaces: To fully understand an existing user interface (UI), the
structure and behavior of the interface must be specified. Merlo and his colleagues [MER93] suggest
three basic questions that must be answered as reverse engineering of the UI commences:

• What are the basic actions (e.g., keystrokes and mouse clicks) that the interface must
process?
• What is a compact description of the behavioral response of the system to these actions?
• What is meant by a “replacement,” or more precisely, what concept of equivalence of
interfaces is relevant here?

Example: An old UI requests that a user provide a scale factor (ranging from 1 to 10) to shrink or
magnify a graphical image. A reengineered GUI might use a slide-bar and mouse to accomplish the
same function.
THE STRUCTURE OF CLIENT/SERVER SYSTEMS

Hardware, software, database, and network technologies all contribute to distributed and cooperative
computer architectures.

A root system, sometimes a mainframe, serves as the repository for corporate data. The root system
is connected to servers (typically powerful workstations or PCs) that play a dual role. The servers
update and request corporate data maintained by the root system. They also maintain local
departmental systems and play a key role in networking user-level PCs via a local area network
(LAN).

In a c/s structure, the computer that resides above another computer in the hierarchy is called the server,
and the computer(s) at the level below are called the clients.
The client requests services, and the server provides them.

File servers: The client requests specific records from a file. The server transmits these records to
the client across the network.

Database servers: The client sends structured query language (SQL) requests to the server. These
are transmitted as messages across the network.
The server processes the SQL request, finds the requested information, and passes back only the results
to the client (see the sketch below).
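
A minimal sketch of this request/response pattern (the table, the data, and the use of an in-process SQLite database as a stand-in for the remote server are all assumptions made for illustration): the client sends an SQL string as a message and receives only the matching rows, never the underlying files:

    import sqlite3

    def database_server(sql):
        """Stand-in for the server side: run the SQL and return only the result rows."""
        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE customers (id INTEGER, name TEXT, state TEXT)")
        con.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                        [(1, "Asha", "MH"), (2, "Ravi", "GJ"), (3, "Meera", "MH")])
        rows = con.execute(sql).fetchall()    # the server does the searching
        con.close()
        return rows                           # only the matching rows cross the "network"

    def client_request(sql):
        """Client side: send the SQL request as a message and consume the results."""
        return database_server(sql)           # in a real system this call is a network message

    rows = client_request("SELECT name FROM customers WHERE state = 'MH'")
    assert rows == [("Asha",), ("Meera",)]

Contrast this with a file server, where the client would receive whole records or files and do the searching itself.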

Transaction servers: The client sends a request that invokes remote procedures at the server site.
The remote procedures are a set of SQL statements. A transaction occurs when a request results in
the execution of the remote procedure with the result transmitted back to the client.
Groupware servers: When the server provides a set of applications that enable communication
among clients (and the people using them) using text, images, bulletin boards, video, and other
representations, a groupware architecture exists.

Software Components for c/s Systems

User interaction/presentation subsystem: This subsystem implements all functions that are
typically associated with a graphical user interface.

Application subsystem: This subsystem implements the requirements defined by the application
within the context of the domain in which the application operates. For example, a business
application might produce a variety of printed reports based on numeric input, calculations, database
information, and other considerations. A groupware application might provide the facilities for
enabling bulletin board communication or e-mail. In both cases, the application software may be
partitioned so that some components reside on the client and others reside on the server.

Database management subsystem: This subsystem performs the data manipulation and
management required by an application. Data manipulation and management may be as simple as
the transfer of a record or as complex as the processing of sophisticated SQL transactions.

Middleware: In addition to these subsystems, another software building block, often called
middleware, exists in all c/s systems. Middleware comprises software components that exist on both
the client and the server and includes elements of network operating systems as well as specialized
application software that supports database-specific applications, object request broker standards,
groupware technologies, communication management, and other features that facilitate the
client/server connection. Orfali, Harkey, and Edwards have referred to middleware as “the nervous
system of a client/server system.”
The Distribution of Software Components

FAT Clients & FAT Servers

When most of the functionality associated with each of the three subsystems is allocated to the
server, a fat server design has been created. Conversely, when the client implements most of the
user interaction/presentation, application, and database components, a fat client design has been
created.
Fat clients are commonly encountered when file server and database server architectures are
implemented. In this case, the server provides data management support, but all application and GUI
software resides at the client. Fat servers are often designed when transaction and groupware
systems are implemented. The server provides application support required to respond to
transactions and communication from the clients. The client software focuses on GUI and
communication management.

Distributed presentation: In this rudimentary client/server approach, database logic and the
application logic remain on the server, typically a mainframe. The server also contains the logic for
preparing screen information, using software such as CICS. Special PC-based software is used to
convert character-based screen information transmitted from the server into a GUI presentation on a
PC.

Remote presentation: An extension of the distributed presentation approach, primary database and
application logic remain on the server, and data sent by the server is used by the client to prepare the
user presentation.

Distributed logic: The client is assigned all user presentation tasks and the processes associated
with data entry, such as field-level validation, server query formulation, and server update information
and requests. The server is assigned database management tasks and the processes for client
queries, server file updates, client version control, and enterprise-wide applications.
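
As a sketch of the client-side half of this split (the field names, validation rules, and query format are invented for illustration): the client handles field-level validation and formulates the query, while the server executes it and manages the database:

    def validate_order_fields(fields):
        """Client-side field-level validation, performed before anything is sent."""
        errors = []
        if not fields.get("customer_id", "").strip():
            errors.append("customer_id is required")
        qty = fields.get("quantity", "")
        if not qty.isdigit() or int(qty) < 1:
            errors.append("quantity must be a positive integer")
        return errors

    def formulate_server_query(fields):
        """Client-side query formulation; the server runs it against the database."""
        return ("INSERT INTO orders (customer_id, quantity) VALUES (?, ?)",
                (fields["customer_id"], int(fields["quantity"])))

    form = {"customer_id": "C-1042", "quantity": "3"}
    assert validate_order_fields(form) == []        # passes field-level validation
    query, params = formulate_server_query(form)    # ready to send to the server
    assert params == ("C-1042", 3)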

Remote data management: Applications on the server create a new data source by formatting data
that have been extracted from elsewhere (e.g., from a corporate level source). Applications allocated
to the client are used to exploit the new data that has been formatted by the server. Decision support
systems are included in this category.

Distributed databases: The data forming the database is spread across multiple servers and clients.
Therefore, the client must support data management software components as well as application and
GUI components.

Thin Client: A thin client is a so-called “network computer” that relegates all application processing to
a fat server. Thin clients (network computers) offer substantially lower per unit cost at little or no
significant performance loss when compared to desktop machines.
