Assignment 4
Part A
Ans
Software quality attributes are likely the most neglected category of overall project scope
on software projects. This is
due to a perfect storm of influences:
• We naturally think of requirements in terms of the functional capabilities of our system,
• The discipline of extracting and refining these quality attributes is a process requiring
the collaboration of all the
project stakeholders, and
• There is little guidance in the popular literature on how to break down this difficult
problem.
When we discuss the scope of a product, we almost invariably discuss it in terms of
capabilities, features, or how this
system needs to do the same things as that system that we are replacing. Whether this is
done as a list of features on
the back of a dinner napkin, a more detailed collection of use cases or other analysis
artifacts, or the elusive detailed set
of functional requirements, this is the traditional way of looking at things. This perspective
is critical, to be sure, but
focusing solely on this view is a key reason that many software projects end up
disappointing the client in the end.
Understanding the functionality of the system allows us to discern between a piece of tax
software and the latest first-person shooter game, but we need to go beyond this to
understand many of the distinctions between one tax
application and another. The two systems may have roughly the same features (based on
the current taxation business
rules), but very often, one package will be far more attractive to the end user. This
difference can and should be
expressed in advance, so that we can actually build these distinctions proactively, rather
than hoping for the best or
trying to retrofit them in after we discover their absence in our first attempts to integrate
the system. Many of these
distinctions are covered in the characteristics of software quality discussed here.
Specifying the quality of our system is a
challenge if we attempt to leap directly to
testable statements of different aspects of
quality. This is analogous to attempting to
craft testable detailed functional
requirements statements without having
applied the appropriate analysis
techniques to better understand our
system first. This article provides a series
of steps, each of which is focused and
straightforward. In combination, though,
we gain a procedural approach for
specifying our software quality over a
comprehensive set of characteristics.
Clarrus Consulting Group Inc.
Software Quality Attributes: Following All the Steps 1
Correctness as a Quality Attribute
It is interesting to note that functionality, which many teams consider the
sole focus of requirements issues, is merely one element in a broad
landscape of considerations for overall product quality. This sentiment
echoes the original taxonomy that I was exposed to for overall quality,
from the Rome Air Development Center (RADC), when I was involved in
avionics-based embedded systems development. Their overall taxonomy
consisted of thirteen items, correctness being one of them.
While correctness or functionality appears in a number of different
taxonomies, it is reasonable to leave this out of a practical taxonomy for
quality attributes, as it is generally addressed adequately through other
means on most projects.
Measurement of software quality factors
There are varied perspectives within the field on measurement. There are a great many measures that are
valued by some professionals—or in some contexts, that are decried as harmful by others. Some believe
that quantitative measures of software quality are essential. Others believe that contexts where quantitative
measures are useful are quite rare, and so prefer qualitative measures. Several leaders in the field
of software testing have written about the difficulty of measuring what we truly want to measure well.[8][9]
One example of a popular metric is the number of faults encountered in the software. Software that contains
few faults is considered by some to have higher quality than software that contains many faults. Questions
that can help determine the usefulness of this metric in a particular context include:
1. What constitutes "many faults"? Does this differ depending upon the purpose of the
software (e.g., blogging software vs. navigational software)? Does this take into account the size
and complexity of the software?
2. Does this account for the importance of the bugs (and the importance to the stakeholders
of the people those bugs bug)? Does one try to weight this metric by the severity of the fault, or the
incidence of users it affects? If so, how? And if not, how does one know that 100 faults discovered
is better or worse than 1,000?
3. If the count of faults being discovered is shrinking, how do I know what that means? For
example, does that mean that the product is now higher quality than it was before? Or that this is a
smaller/less ambitious change than before? Or that fewer tester-hours have gone into the project
than before? Or that this project was tested by less skilled testers than before? Or that the team
has discovered that reporting fewer faults is in their interest?
This last question points to a difficult problem: software quality metrics are in some
sense measures of human behavior, since humans create software.[8] If a team discovers that they will
benefit from a drop in the number of reported bugs, there is a strong tendency for the team to start reporting
fewer defects. That may mean that email begins to circumvent the bug tracking system, or that four or five
bugs get lumped into one bug report, or that testers learn not to report minor annoyances. The difficulty is
measuring what we mean to measure, without creating incentives for software programmers and testers to
consciously or unconsciously "game" the measurements.
Software quality factors cannot be measured directly because of their vague definitions. It is necessary to find
measurements, or metrics, which can be used to quantify them as non-functional requirements. For
example, reliability is a software quality factor, but cannot be evaluated in its own right. However, there are
related attributes of reliability which can indeed be measured. Some such attributes are mean time to
failure, rate of failure occurrence, and availability of the system. Similarly, an attribute of portability is the
number of target-dependent statements in a program.
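As a sketch of how such reliability attributes might be derived from failure logs (the durations below are invented for illustration):

```python
# Hypothetical failure log: uptime durations (hours) between successive failures,
# and repair durations (hours) after each failure.
uptimes = [120.0, 80.0, 200.0, 100.0]
repairs = [2.0, 1.0, 3.0, 2.0]

mttf = sum(uptimes) / len(uptimes)    # mean time to failure
mttr = sum(repairs) / len(repairs)    # mean time to repair
rocof = len(uptimes) / sum(uptimes)   # rate of failure occurrence (failures/hour)
# Availability: fraction of total time the system was actually up.
availability = sum(uptimes) / (sum(uptimes) + sum(repairs))

print(mttf)                     # 125.0
print(round(rocof, 4))          # 0.008
print(round(availability, 3))   # 0.984
```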
A scheme that could be used for evaluating software quality factors is given below. For every characteristic,
there are a set of questions which are relevant to that characteristic. Some type of scoring formula could be
developed based on the answers to these questions, from which a measurement of the characteristic can be
obtained.
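A minimal sketch of one such scoring scheme, where each question is answered yes/no and the score is the fraction of "yes" answers (the questions and answers here are invented for illustration):

```python
# Hypothetical yes/no answers to each characteristic's checklist questions.
answers = {
    "understandability": [True, True, False],     # e.g. descriptive names? comments? ...
    "conciseness": [True, True, True, False],     # e.g. code reachable? no redundancy? ...
}

def score(characteristic):
    """Fraction of questions answered 'yes', scaled to 0-100."""
    qs = answers[characteristic]
    return 100.0 * sum(qs) / len(qs)

print(round(score("understandability"), 1))  # 66.7
print(score("conciseness"))                  # 75.0
```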
Understandability
Are variable names descriptive of the physical or functional property represented? Do uniquely recognisable
functions contain adequate comments so that their purpose is clear? Are deviations from forward logical flow
adequately commented?
Completeness
Are all necessary components available? Does any process fail for lack of resources or programming? Are
all potential pathways through the code accounted for, including proper error handling?
Conciseness
Is all code reachable? Is any code redundant? How many statements within loops could be placed outside
the loop, thus reducing computation time? Are branch decisions too complex?
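The loop question above refers to hoisting loop-invariant work out of the loop body; a small before/after sketch (the computation is invented for illustration):

```python
import math

values = [1.0, 2.0, 3.0]

# Before: the invariant expression is recomputed on every iteration.
out = []
for v in values:
    scale = math.sqrt(2.0) / 2.0   # does not depend on v
    out.append(v * scale)

# After: the invariant is computed once, outside the loop.
scale = math.sqrt(2.0) / 2.0
out2 = [v * scale for v in values]

assert out == out2   # same result, less repeated computation
```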
Portability
Does the program depend upon system or library routines unique to a particular installation? Have machine-
dependent statements been flagged and commented? Has dependency on internal bit representation of
alphanumeric or special characters been avoided? How much effort would be required to transfer the
program from one hardware/software system or environment to another? Software portability refers to
support for, and operation in, different environments such as Windows, macOS, and Linux.
Consistency
Is one variable name used to represent different logical or physical entities in the program? Does the
program contain only one representation for any given physical or mathematical constant? Are functionally
similar arithmetic expressions similarly constructed? Is a consistent scheme used for indentation
and naming?
Maintainability
Has some memory capacity been reserved for future expansion? Is the design cohesive—i.e., does each
module have distinct, recognizable functionality? Does the software allow for a change in data structures
(object-oriented designs are more likely to allow for this)? If the code is procedure-based (rather than object-
oriented), is a change likely to require restructuring the main program, or just a module?
Testability
Are complex structures employed in the code? Does the detailed design contain clear pseudo-code? Is the
pseudo-code at a higher level of abstraction than the code? If tasking is used in concurrent designs, are
schemes available for providing adequate test cases?
Usability
Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are meaningful error messages
provided?
Reliability
Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero avoided? Is
exception handling provided? Reliability is the probability that the software performs its intended functions
correctly over a specified period of time under stated operating conditions; even then, the fault may lie in
the requirements document rather than in the code.
Efficiency
Have functions been optimized for speed? Have repeatedly used blocks of code been formed into
subroutines? Has the program been checked for memory leaks or overflow errors?
Security
Does the software protect itself and its data against unauthorized access and use? Does it allow its operator
to enforce security policies? Are security mechanisms appropriate, adequate and correctly implemented?
Can the software withstand attacks that can be anticipated in its intended environment?
Software quality metrics focus on the process, the project, and the product. By analyzing the metrics, the
organization can take corrective action to fix those areas in the process, project, or product
which are the cause of the software defects.
The de facto definition of software quality consists of two major attributes: intrinsic product
quality and user acceptability. A software quality metric encapsulates these two attributes,
addressing the mean time to failure and defect density within the software components, and finally assesses
user requirements and acceptability of the software. The intrinsic quality of a software product is generally
measured by the number of functional defects in the software, often referred to as bugs, or by testing the
software at run time for inherent vulnerability to determine the software "crash" scenarios. In
operational terms, the two metrics are often described as defect density (rate) and mean
time to failure (MTTF).
Although there are many measures of software quality, correctness, maintainability, integrity and usability
provide useful insight.
Correctness
A program must operate correctly. Correctness is the degree to which the software performs the required
functions accurately. One of the most common measures is defects per KLOC, where KLOC means thousands
(kilo) of lines of code. KLOC is a way of measuring the size of a computer program by counting the
number of lines of source code it has.
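For example, normalizing a defect count by program size (the counts below are invented):

```python
def defects_per_kloc(defect_count, lines_of_code):
    """Defects normalized per thousand lines of source code."""
    return defect_count / (lines_of_code / 1000.0)

# A hypothetical 25,000-line program with 50 known defects:
print(defects_per_kloc(50, 25_000))  # 2.0
```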
Maintainability
Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct
way of measuring this, an indirect measure must be used. MTTC (mean time to change) is one
such measure: when an error is found, how much time it takes to analyze the change, design the
modification, implement it, and test it.
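MTTC can be sketched as the average of the full analyze-design-implement-test cycle across changes (the hours below are invented):

```python
# Each change record: hours spent to analyze, design, implement, and test the fix.
changes = [
    {"analyze": 2.0, "design": 1.0, "implement": 4.0, "test": 1.0},
    {"analyze": 1.0, "design": 0.5, "implement": 2.0, "test": 0.5},
]

def mttc(records):
    """Mean time to change: average total hours per corrective change."""
    totals = [sum(r.values()) for r in records]
    return sum(totals) / len(totals)

print(mttc(changes))  # 6.0
```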
Integrity
This measures the system's ability to withstand attacks on its security. In order to measure integrity, two
additional parameters, threat and security, need to be defined. Threat is the probability that an attack of a
certain type will occur over a period of time; security is the probability that an attack of a certain type will be
repelled. Pressman's formula is: Integrity = Σ [1 − threat × (1 − security)], summed over attack types.
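Using Pressman's version of the formula, 1 − threat × (1 − security) per attack type (averaged here so the result stays a probability; the numbers are invented):

```python
# Pressman's integrity measure per attack type:
#   integrity = 1 - threat * (1 - security)
# threat   = probability an attack of this type occurs in a given period
# security = probability an attack of this type is repelled
attacks = [
    {"threat": 0.25, "security": 0.95},  # illustrative values
]

integrity = sum(1 - a["threat"] * (1 - a["security"]) for a in attacks) / len(attacks)
print(round(integrity, 4))  # 0.9875
```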
Usability
How usable is your software application? This important characteristic of an application is measured in
terms of sub-characteristics such as understandability, learnability, and operability.
In the context of software quality metrics, one of the popular standards that addresses the quality model,
external metrics, internal metrics, and the quality-in-use metrics for the software development process is ISO
9126.
Defect Removal Efficiency
Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the DRE is
low during analysis and design, it means you should spend time improving the way you conduct formal
technical reviews.
DRE = E / (E + D)
where E = number of errors found before delivery of the software and D = number of defects found after
delivery of the software.
The ideal value of DRE is 1, which means no defects were found after delivery. A low DRE score means you
need to re-examine your existing process. In essence, DRE is an indicator of the filtering ability of quality
control and quality assurance activities. It encourages the team to find as many defects as possible before
they are passed to the next activity or stage. Some of the metrics are listed here:
• Test Coverage = number of units (KLOC/FP) tested / total size of the system
• Number of tests per unit size = number of test cases per KLOC/FP
• Defects per size = defects detected / system size
• Cost to locate defect = cost of testing / number of defects located
• Defects detected in testing (ratio) = defects detected in testing / total system defects
• Defects detected in production = defects detected in production / system size
• Quality of Testing = no. of defects found during testing / (no. of defects found during testing + no. of acceptance defects found after delivery) × 100
• System complaints = number of third-party complaints / number of transactions processed
• Test Planning Productivity = no. of test cases designed / actual effort for design and documentation
• Test Execution Productivity = no. of test cycles executed / actual effort for testing
• Test efficiency = number of tests required / number of system errors
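As a quick sketch of the DRE formula above (the counts are invented):

```python
def dre(errors_before, defects_after):
    """Defect Removal Efficiency: E / (E + D)."""
    return errors_before / (errors_before + defects_after)

# 90 errors caught before delivery, 10 defects reported after delivery:
print(dre(90, 10))  # 0.9
```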
Measure — Metrics
1. Customer satisfaction index — Number of system enhancement requests per year; number of maintenance fix requests per year; user friendliness: call volume to customer service hotline; user friendliness: training time per new user; number of product recalls or fix releases (software vendors); number of production re-runs (in-house information systems groups)
2. Delivered defect quantities — Normalized per function point (or per LOC); at product delivery (first 3 months or first year of operation); ongoing (per year of operation); by level of severity; by category or cause, e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.
3. Responsiveness (turnaround time) to users — Turnaround time for defect fixes, by level of severity; time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery
7. Complexity of delivered product — McCabe's cyclomatic complexity counts across the system; Halstead's measure; Card's design complexity measures; predicted defects and maintenance costs, based on complexity measures
8. Test coverage — Breadth of functional coverage; percentage of paths, branches or conditions that were actually tested; percentage by criticality level: perceived level of risk of paths; the ratio of the number of detected faults to the number of predicted faults
9. Cost of defects — Business losses per defect that occurs during operation; business interruption costs; costs of work-arounds; lost sales and lost goodwill; litigation costs resulting from defects; annual maintenance cost (per function point); annual operating cost (per function point); measurable damage to your boss's career
10. Costs of quality activities — Costs of reviews, inspections and preventive measures; costs of test planning and preparation; costs of test execution, defect tracking, version and change control; costs of diagnostics, debugging and fixing; costs of tools and tool support; costs of test case library maintenance; costs of testing & QA education associated with the product; costs of monitoring and oversight by the QA organization (if separate from the development and test organizations)
11. Re-work — Re-work effort (hours, as a percentage of the original coding hours); re-worked LOC (source lines of code, as a percentage of the total delivered LOC); re-worked software components (as a percentage of the total delivered components)
12. Reliability — Availability (percentage of time a system is available, versus the time the system is needed to be available); mean time between failure (MTBF); mean time to repair (MTTR); reliability ratio (MTBF / MTTR); number of product recalls or fix releases; number of production re-runs as a ratio of production runs
Reference: http://geekswithblogs.net/srkprasad/archive/2003/11/04/394.aspx
Q3. Write a short note on the SEI Capability Maturity Model (CMM). How does it differ from ISO
9000?
Ans
SEI Capability Maturity Model
Emanuel R. Baker, Ph.D., and Frank J. Koch
In 1991 the Software Engineering Institute (SEI) at Carnegie Mellon University introduced the Capability
Maturity Model for Software (CMM). This event marked a major milestone in the evolution of software
process management because, for the first time, the software community had a comprehensive description
of how software organizations "mature" or improve, in their ability to develop software.
The CMM defines five levels of process capability, each of which represents an evolutionary plateau towards
a disciplined, measured, and continuously improving software process. At Level 3, for example,
the process belongs to the organization, not to individuals. At times of stress, the Level 3 practices are not
abandoned.
The second role of the CMM is to provide a consistent basis for conducting appraisals of software
processes. The CMM defines a scale for measuring process maturity, thus allowing an organization to
accurately compare its process capability to that of another organization. ISO is using the CMM in its efforts
to develop international standards for software process assessments.
The CMM's third key role is to serve as a blueprint for software process improvement. The CMM can help an
organization focus on the areas it must address in order to advance to the next level of maturity.
Today, leading software organizations are adopting the CMM as their core strategy for improving quality and
productivity.
Part B
Q1. As size is the main factor determining the cost of a project, an accurate size estimate can be used to
estimate the cost and schedule of the software project. Give your views in favour of and against
this statement. Also write in brief about contract management and human resource
engineering.
The ability to accurately estimate the time and/or cost taken for a project to come in to its
successful conclusion is a serious problem for software engineers. The use of a repeatable,
clearly defined and well understood software development process has, in recent years,
shown itself to be the most effective method of gaining useful historical data that can be used
for statistical estimation. In particular, the act of sampling more frequently, coupled with the
loosening of constraints between parts of a project, has allowed more accurate estimation
and more rapid development times.
There are many ways to estimate a software project, ranging from formal models
that can only be worked by a university professor through to a salesperson's hopeful
guess.
Here we present a simple, proven software project estimation method that produces
reasonably accurate results and has been used to estimate substantial fixed-price work.
This method is based on a quick but robust initial design on the premise that later
versions of the design will reduce the amount of work as developers come up with smart
ideas for saving work.
This method also calls for participation by a substantial number of people, increasing
buy-in and gaining accuracy. It also follows the principles of the Cardboard Checklist for
Planning, where relevant.
Overview
This estimation method has three steps:
1. Design the System.
2. Estimate each Part of the System.
3. Schedule the Work.
The outputs are:
1. A project schedule with allowances for schedule risk, internal dependencies and
known outages.
2. A list of other assumptions made during the estimation.
3. A list of the project's external dependencies.
4. A list of other risks to the project schedule.
The inputs are such documents as are available, including:
1. The statement of project scope.
2. User requirements - use cases, functional specs.
3. Technical requirements.
Of these, the only one that is absolutely required is the statement of project scope.
However, the more information available, the more accurate the final result will be.
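The three steps above can be sketched as a simple roll-up: estimate each part of the designed system, sum the parts, then add an allowance for schedule risk (the parts, hours, and 20% buffer below are invented for illustration):

```python
# Step 2: per-part estimates in person-hours, gathered from the team.
part_estimates = {
    "login screen": 40,
    "report engine": 120,
    "data import": 60,
}

risk_allowance = 0.20  # schedule-risk buffer; a project-specific judgment call

base = sum(part_estimates.values())      # raw total from the design's parts
total = base * (1 + risk_allowance)      # Step 3 input: buffered schedule total

print(base, round(total))  # 220 264
```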
The Human Resource Management Guide for Market Testing and Contracting Out is a
compilation of documents to assist APS agencies with the human resource management
(HRM) aspects of market testing and contracting out.
This Guide is designed to provide an overview for APS agencies on the management of
staff affected by the market testing and contracting out process. It is not intended to
replace legal or detailed advice for individual situations as they arise, but provides a
framework within which agencies can manage HRM aspects of the market testing and
contracting out process.
This version of the Guide updates the material originally prepared by the former Office of
Asset Sales and Commercial Support (OASACS) and the Australian Public Service
Commission (the Commission) drawing on information provided by the Australian Tax
Office (ATO), the Department of Employment, Workplace Relations (DEWR) and the
Department of Finance and Administration (Finance).
The Commission's publication Outsourcing—Human Resource Management Issues (June
2002) provides more detailed advice on the human resource aspects of outsourcing. This
publication is available on the Commission’s website at www.apsc.gov.au.
The reference material provided in this Guide may change as legislation and Public
Service procedures are amended. The Commission aims to provide links to updated
documentation through its website.
Whether you are a prime contractor, subcontractor or government agency that is letting out and managing contracts, xpdoffice™
contract management software is the tool that puts you in charge. Accessible from anywhere through an intuitive Web interface,
contract specialists can begin using it right from Day One.
• DCAA compliance
• Peachtree, QuickBooks synchronization
• Microsoft Project
• Universal web access
xpdcontracts™ contract management software can be your key within today's high-performance business environment to
maintaining relationships with customers and suppliers alike. xpdcontracts™ supports these customer and supplier relationships
effectively and efficiently. It compiles all contract data including personnel information, allocated time, complete description, and
task breakdown.
xpdcontracts offers a powerful array of features:
Budget creation
Easily develop your contract budgets from the contract level down to the job and task level, allocating every hour.
Work breakdown
Simple process for assigning jobs and tasks to employees and managers, and rolling all of that up to the contract level.
Schedule development
Create project schedules and establish key milestones.
Data sharing
Integrates automatically with major accounting systems (including Peachtree, QuickBooks, Great Plains and ERP systems), as
well as with project management tools like Microsoft Project. It also shares data with all open system databases and any
application that accepts or exports Microsoft Excel data.
Universal access
xpdcontracts operates as a stand-alone Web-based service, with no installation, integration, maintenance or on-site hosting
required.
Security
To prevent access from unauthorized internal or external sources, xpdcontracts uses SSL128 and offers user-selected permission
levels along with user IDs and passwords for the project team and for clients. xpdcontracts is hosted at a remote secure server
site. For redundancy, a backup site is mirrored in another facility at another location.
Cost effectiveness
User licenses on our software cost just pennies per day per user. You can add or cancel licenses at any time.
xpdcontracts is part of the xpdoffice™ suite of business productivity and PSA solutions from xpdientinc, a division of Scientific
Systems and Software International, a software and technical services firm in operation since 1985. xpdoffice™ applications work
seamlessly with each other and incorporate modules such as timesheet software and human resources management software that
bring increased efficiency to projects, human resource functions and other time-consuming business applications.
Q3. Describe function point analysis. How are function points used in the estimation of
cost and effort using decomposition techniques?
The purpose of this article is to provide an introduction to Function Point Analysis and its application in non-
traditional computing situations. Software engineers have been searching for a metric that is applicable for a
broad range of software environments. The metric should be technology independent and support the need
for estimating, project management, measuring quality and gathering requirements. Function Point Analysis
is rapidly becoming the measure of choice for these tasks.
Function Point Analysis has been proven as a reliable method for measuring the size of computer software.
In addition to measuring output, Function Point Analysis is extremely useful in estimating projects, managing
change of scope, measuring productivity, and communicating functional requirements.
There have been many misconceptions regarding the appropriateness of Function Point Analysis in
evaluating emerging environments such as real-time embedded code and Object Oriented programming.
Since function points express the resulting work-product in terms of functionality as seen from the user's
perspective, they are independent of the tools and technologies used to deliver it.
The following provides an introduction to Function Point Analysis and is followed by further discussion of
potential benefits.
Data Functions
• Internal Logical Files
• External Interface Files
Transactional Functions
• External Inputs
• External Outputs
• External Inquiries
Internal Logical Files - The first data function allows users to utilize data they are responsible for
maintaining. For example, a pilot may enter navigational data through a display in the cockpit prior to
departure. The data is stored in a file for use and can be modified during the mission. Therefore the pilot is
responsible for maintaining the file that contains the navigational information. Logical groupings of data in a
system, maintained by an end user, are referred to as Internal Logical Files (ILF).
External Interface Files - The second Data Function a system provides to an end user is also related to
logical groupings of data. In this case the user is not responsible for maintaining the data. The data resides
in another system and is maintained by another user or system. The user of the system being counted
requires this data for reference purposes only. For example, it may be necessary for a pilot to reference
position data from a satellite or ground-based facility during flight. The pilot does not have the responsibility
for updating data at these sites but must reference it during the flight. Groupings of data from another
system that are used only for reference purposes are defined as External Interface Files (EIF).
The remaining functions address the user's capability to access the data contained in ILFs and EIFs. This
capability includes maintaining, inquiring and outputting of data. These are referred to as Transactional
Functions.
External Input - The first Transactional Function allows a user to maintain Internal Logical Files (ILFs)
through the ability to add, change and delete the data. For example, a pilot can add, change and delete
navigational information prior to and during the mission. In this case the pilot is utilizing a transaction
referred to as an External Input (EI). An External Input gives the user the capability to maintain the data in
ILF's through adding, changing and deleting its contents.
External Output - The next Transactional Function gives the user the ability to produce outputs. For
example a pilot has the ability to separately display ground speed, true air speed and calibrated air speed.
The results displayed are derived using data that is maintained and data that is referenced. In function point
terminology the resulting display is called an External Output (EO).
External Inquiries - The final capability provided to users through a computerized system addresses the
requirement to select and display specific data from files. To accomplish this a user inputs selection
information that is used to retrieve data that meets the specific criteria. In this situation there is no
manipulation of the data. It is a direct retrieval of information contained on the files. For example if a pilot
displays terrain clearance data that was previously set, the resulting output is the direct retrieval of stored
information. These transactions are referred to as External Inquiries (EQ).
In addition to the five functional components described above there are two adjustment factors that need to
be considered in Function Point Analysis.
Functional Complexity - The first adjustment factor considers the Functional Complexity for each unique
function. Functional Complexity is determined based on the combination of data groupings and data
elements of a particular function. The number of data elements and unique groupings are counted and
compared to a complexity matrix that will rate the function as low, average or high complexity. Each of the
five functional components (ILF, EIF, EI, EO and EQ) has its own unique complexity matrix. The following is
the complexity matrix for External Outputs.
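As a sketch of how the five component counts feed a size estimate: each counted component is weighted by type and complexity using the standard IFPUG weights, the weighted counts are summed into an unadjusted function point (UFP) total, and a historical productivity rate converts that size into effort. The component counts and the 20 hours/FP rate below are invented for illustration:

```python
# Standard IFPUG weights per component type, as (low, average, high) complexity.
WEIGHTS = {
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "ILF": (7, 10, 15), "EIF": (5, 7, 10),
}

# Hypothetical counts of components at (low, average, high) complexity.
counts = {
    "EI": (2, 1, 0), "EO": (1, 2, 0), "EQ": (3, 0, 0),
    "ILF": (1, 1, 0), "EIF": (0, 1, 0),
}

# Unadjusted function points: sum of count * weight over all cells.
ufp = sum(
    n * w
    for comp, ws in WEIGHTS.items()
    for n, w in zip(counts[comp], ws)
)

hours_per_fp = 20  # assumed productivity rate from historical project data
effort_hours = ufp * hours_per_fp

print(ufp, effort_hours)  # 57 1140
```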