
CAP412

Software Project Management

Assignment 4

Part A

Q1. Discuss software quality factors and attributes.

Ans

Software quality attributes are likely the most neglected category of overall project scope
on software projects. This is
due to a perfect storm of influences:
• We naturally think of requirements in terms of the functional capabilities of our system,
• The discipline of extracting and refining these quality attributes is a process requiring
the collaboration of all the
project stakeholders, and
• There is little guidance in the popular literature on how to break down this difficult
problem.
When we discuss the scope of a product, we almost invariably discuss in terms of
capabilities, features, or how this
system needs to do the same things as that system that we are replacing. Whether this is
done as a list of features on
the back of a dinner napkin, a more detailed collection of use cases or other analysis
artifacts, or the elusive detailed set
of functional requirements, this is the traditional way of looking at things. This perspective
is critical, to be sure, but
focusing solely on this view is a key reason that many software projects end up
disappointing the client in the end.
Understanding the functionality of the system allows us to discern between a piece of tax
software and the latest first-person shooter game, but we need to go beyond this to
understand many of the distinctions between one tax
application and another. The two systems may have roughly the same features (based on
the current taxation business
rules), but very often, one package will be far more attractive to the end user. This
difference can and should be
expressed in advance, so that we can actually build these distinctions proactively, rather
than hoping for the best or
trying to retrofit them in after we discover their absence in our first attempts to integrate
the system. Many of these
distinctions are covered in the characteristics of software quality discussed here.
Specifying the quality of our system is a challenge if we attempt to leap directly to
testable statements of different aspects of quality. This is analogous to attempting to
craft testable detailed functional requirements statements without having applied the
appropriate analysis techniques to better understand our system first. This article
provides a series of steps, each of which is focused and straightforward. In combination,
though, we gain a procedural approach for specifying our software quality over a
comprehensive set of characteristics.
Correctness as a Quality Attribute
It is interesting to note that functionality, which many teams consider the
sole focus of requirements issues, is merely one element in a broad
landscape of considerations for overall product quality. This sentiment
echoes the original taxonomy that I was exposed to for overall quality,
from the Rome Air Development Center (RADC), when I was involved in
avionics-based embedded systems development. Their overall taxonomy
consisted of thirteen items, correctness being one of them.
While correctness or functionality appears in a number of different
taxonomies, it is reasonable to leave this out of a practical taxonomy for
quality attributes, as it is generally addressed adequately through other
means on most projects.
Measurement of software quality factors

There are varied perspectives within the field on measurement. There are a great many measures that are

valued by some professionals—or in some contexts, that are decried as harmful by others. Some believe

that quantitative measures of software quality are essential. Others believe that contexts where quantitative

measures are useful are quite rare, and so prefer qualitative measures. Several leaders in the field

of software testing have written about the difficulty of measuring what we truly want to measure well.[8][9]

One example of a popular metric is the number of faults encountered in the software. Software that contains

few faults is considered by some to have higher quality than software that contains many faults. Questions

that can help determine the usefulness of this metric in a particular context include:

1. What constitutes “many faults?” Does this differ depending upon the purpose of the

software (e.g., blogging software vs. navigational software)? Does this take into account the size

and complexity of the software?

2. Does this account for the importance of the bugs (and the importance to the stakeholders

of the people those bugs bug)? Does one try to weight this metric by the severity of the fault, or the

incidence of users it affects? If so, how? And if not, how does one know that 100 faults discovered

is better than 1000?

3. If the count of faults being discovered is shrinking, how do I know what that means? For

example, does that mean that the product is now higher quality than it was before? Or that this is a

smaller/less ambitious change than before? Or that fewer tester-hours have gone into the project

than before? Or that this project was tested by less skilled testers than before? Or that the team

has discovered that fewer faults reported is in their interest?


This last question points to an especially difficult one to manage. All software quality metrics are in some

sense measures of human behavior, since humans create software.[8] If a team discovers that they will

benefit from a drop in the number of reported bugs, there is a strong tendency for the team to start reporting

fewer defects. That may mean that email begins to circumvent the bug tracking system, or that four or five

bugs get lumped into one bug report, or that testers learn not to report minor annoyances. The difficulty is

measuring what we mean to measure, without creating incentives for software programmers and testers to

consciously or unconsciously “game” the measurements.

Software quality factors cannot be measured because of their vague definitions. It is necessary to find

measurements, or metrics, which can be used to quantify them as non-functional requirements. For

example, reliability is a software quality factor, but cannot be evaluated in its own right. However, there are

related attributes to reliability, which can indeed be measured. Some such attributes are mean time to

failure, rate of failure occurrence, and availability of the system. Similarly, an attribute of portability is the

number of target-dependent statements in a program.
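
As an illustration, these measurable reliability attributes can be computed directly from operational data. The sketch below is minimal and uses invented figures for the observation period, failure times and repair durations:

# Minimal sketch: deriving measurable reliability attributes from
# hypothetical operational data (all figures invented for illustration).

operating_hours = 1000.0                 # total observed operating time
failure_times = [120.0, 450.0, 800.0]    # hours at which failures occurred
repair_hours = [2.0, 5.0, 3.0]           # time to restore service per failure

n_failures = len(failure_times)
mttf = operating_hours / n_failures      # mean time to failure
rocof = n_failures / operating_hours     # rate of failure occurrence
downtime = sum(repair_hours)
availability = operating_hours / (operating_hours + downtime)

print(f"MTTF: {mttf:.1f} h, ROCOF: {rocof:.4f} failures/h, "
      f"availability: {availability:.3%}")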

A scheme that could be used for evaluating software quality factors is given below. For every characteristic,

there is a set of questions which are relevant to that characteristic. Some type of scoring formula could be

developed based on the answers to these questions, from which a measurement of the characteristic can be

obtained.
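
As an illustration, one possible realisation of such a scoring formula, assuming equally weighted yes/no answers (both assumptions are ours, not prescribed by the scheme):

# Sketch of a question-based scoring formula for one quality
# characteristic: score = fraction of questions answered "yes".

def characteristic_score(answers):
    """answers: dict mapping question text -> True/False."""
    return sum(answers.values()) / len(answers)

understandability = {
    "Are variable names descriptive?": True,
    "Do functions contain adequate comments?": False,
    "Are deviations from forward logical flow commented?": True,
}
print(f"Understandability: {characteristic_score(understandability):.0%}")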

Understandability

Are variable names descriptive of the physical or functional property represented? Do uniquely recognisable

functions contain adequate comments so that their purpose is clear? Are deviations from forward logical flow

adequately commented? Are all elements of an array functionally related?...

Completeness

Are all necessary components available? Does any process fail for lack of resources or programming? Are

all potential pathways through the code accounted for, including proper error handling?

Conciseness

Is all code reachable? Is any code redundant? How many statements within loops could be placed outside

the loop, thus reducing computation time? Are branch decisions too complex?

Portability

Does the program depend upon system or library routines unique to a particular installation? Have machine-dependent statements been flagged and commented? Has dependency on internal bit representation of alphanumeric or special characters been avoided? How much effort would be required to transfer the program from one hardware/software system or environment to another? Software portability refers to support for, and operation in, different environments such as Windows, macOS and Linux.

Consistency
Is one variable name used to represent different logical or physical entities in the program? Does the

program contain only one representation for any given physical or mathematical constant? Are functionally

similar arithmetic expressions similarly constructed? Is a consistent scheme used for indentation,

nomenclature, the color palette, fonts and other visual elements?

Maintainability

Has some memory capacity been reserved for future expansion? Is the design cohesive—i.e., does each

module have distinct, recognizable functionality? Does the software allow for a change in data structures

(object-oriented designs are more likely to allow for this)? If the code is procedure-based (rather than object-

oriented), is a change likely to require restructuring the main program, or just a module?

Testability

Are complex structures employed in the code? Does the detailed design contain clear pseudo-code? Is the

pseudo-code at a higher level of abstraction than the code? If tasking is used in concurrent designs, are

schemes available for providing adequate test cases?

Usability

Is a GUI used? Is there adequate on-line help? Is a user manual provided? Are meaningful error messages

provided?

Reliability

Are loop indexes range-tested? Is input data checked for range errors? Is divide-by-zero avoided? Is exception handling provided? Reliability is the probability that the software performs its intended functions correctly over a specified period of time under stated operating conditions; note, though, that apparent reliability failures can also stem from defects in the requirements document.

Efficiency

Have functions been optimized for speed? Have repeatedly used blocks of code been formed into

subroutines? Has the program been checked for memory leaks or overflow errors?

Security

Does the software protect itself and its data against unauthorized access and use? Does it allow its operator

to enforce security policies? Are security mechanisms appropriate, adequate and correctly implemented?

Can the software withstand attacks that can be anticipated in its intended environment?

Q2. Develop your own metrics for correctness, maintainability, integrity and usability of software. What is Statistical Quality Assurance (SQA)?

Software Quality Metrics
We best manage what we can measure. Measurement enables the organization to improve the software
process; assists in planning, tracking and controlling the software project; and assesses the quality of the
software thus produced. Software metrics are computed from measures of specific attributes of the process,
project and product. Metrics are analyzed and provide a dashboard to management on the overall health of
the process, project and product. Generally, the validation of the metrics is a continuous process spanning
multiple projects. The kind of metrics employed generally account for whether the quality requirements have
been achieved or are likely to be achieved during the software development process. As a quality assurance
process, a metric needs to be revalidated every time it is used. Two leading firms, IBM and Hewlett-Packard,
have placed a great deal of importance on software quality. IBM measures user satisfaction and software
acceptability in eight dimensions: capability or functionality, usability, performance, reliability, ability to be
installed, maintainability, documentation, and availability. For its software quality metrics, Hewlett-Packard
follows the five Juran quality parameters: functionality, usability, reliability, performance and serviceability.
In general, for most software quality assurance systems the common software metrics that are tracked for
improvement are source lines of code, cyclomatic complexity of the code, function point analysis, bugs per
line of code, code coverage, number of classes and interfaces, and cohesion and coupling between the
modules.

Common software metrics include:

• Bugs per line of code
• Code coverage
• Cohesion
• Coupling
• Cyclomatic complexity
• Function point analysis
• Number of classes and interfaces
• Number of lines of customer requirements
• Order of growth
• Source lines of code
• Robert Cecil Martin's software package metrics

Software Quality Metrics focus on the process, project and product. By analyzing the metrics, the
organization can take corrective action to fix those areas in the process, project or product which are the
cause of the software defects.

The de-facto definition of software quality consists of two major attributes: intrinsic product quality and
user acceptability. The software quality metric encapsulates these two attributes, addressing the mean time
to failure and defect density within the software components. Finally it assesses user requirements and
acceptability of the software. The intrinsic quality of a software product is generally measured by the
number of functional defects in the software, often referred to as bugs, or by testing the software at run
time for inherent vulnerability to determine the software "crash" scenarios. In operational terms, the two
metrics are often described as defect density (rate) and mean time to failure (MTTF).

Although there are many measures of software quality, correctness, maintainability, integrity and usability
provide useful insight.

Correctness

A program must operate correctly. Correctness is the degree to which the software performs the required
functions accurately. One of the most common measures is defects per KLOC, where KLOC means
thousands (kilo) of lines of code. KLOC measures the size of a computer program by counting the number
of lines of source code it has.
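
A minimal worked example (the counts are hypothetical):

# Defects per KLOC: normalise the defect count by program size.
defects_found = 25
lines_of_code = 12_500
defects_per_kloc = defects_found / (lines_of_code / 1000)
print(f"{defects_per_kloc:.1f} defects/KLOC")   # 2.0 defects/KLOC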

Maintainability

Maintainability is the ease with which a program can be corrected if an error occurs. Since there is no direct
way of measuring this, it must be measured indirectly. MTTC (mean time to change) is one such measure:
it captures, once an error is found, how much time it takes to analyze the change, design the modification,
implement it and test it.
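
A sketch of computing MTTC from change records, where each entry is the hypothetical end-to-end time (analysis, design, implementation and test) for one fix:

# Mean time to change (MTTC): average end-to-end time per corrective change.
change_hours = [16.0, 40.0, 8.0, 24.0]   # hypothetical effort per change
mttc = sum(change_hours) / len(change_hours)
print(f"MTTC: {mttc:.1f} hours")          # 22.0 hours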

Integrity

This measures the system's ability to withstand attacks on its security. In order to measure integrity, two
additional attributes, threat and security, need to be defined. Threat: the probability that an attack of a
certain type will occur over a period of time. Security: the probability that an attack of a certain type will be
repelled over a period of time. Then, over the attack types: Integrity = Σ [1 − threat × (1 − security)].
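
A worked example of this formula (the threat and security figures are hypothetical; averaging over attack types is our normalisation so the result stays between 0 and 1):

# Integrity per attack type: 1 - threat * (1 - security).
attacks = [
    {"threat": 0.25, "security": 0.95},  # hypothetical attack type A
    {"threat": 0.10, "security": 0.80},  # hypothetical attack type B
]
terms = [1 - a["threat"] * (1 - a["security"]) for a in attacks]
integrity = sum(terms) / len(terms)      # average across attack types
print(f"Integrity: {integrity:.4f}")     # 0.9838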

Usability

How usable is your software application? This important characteristic of your application is measured in
terms of the following:

• Physical/intellectual skill required to learn the system
• Time required to become moderately efficient in the system
• The net increase in productivity by use of the new system
• Subjective assessment (usually in the form of a questionnaire on the new system)
Standard for the Software Evaluation

In context of the Software Quality Metrics, one of the popular standards that addresses the quality model,
external metrics, internal metrics and the quality in use metrics for the software development process is ISO
9126.

Defect Removal Efficiency

Defect Removal Efficiency (DRE) is a measure of the efficacy of your SQA activities. For example, if the
DRE is low during analysis and design, it means you should spend time improving the way you conduct
formal technical reviews.

DRE = E / ( E + D )

Where E = No. of Errors found before delivery of the software and D = No. of Errors found after delivery of
the software.

The ideal value of DRE is 1, which means no defects are found after delivery. A low DRE means you need
to re-examine your existing process. In essence, DRE is an indicator of the filtering ability of quality control
and quality assurance activities; it encourages the team to find as many defects as possible before they are
passed on to the next activity or stage. Some of the metrics are listed here:

Test coverage = number of units (KLOC/FP) tested / total size of the system
Number of tests per unit size = number of test cases per KLOC/FP
Defects per size = defects detected / system size
Cost to locate defect = cost of testing / number of defects located
Defects detected in testing = defects detected in testing / total system defects
Defects detected in production = defects detected in production / system size
Quality of testing = no. of defects found during testing / (no. of defects found during testing + no. of acceptance defects found after delivery) × 100
System complaints = number of third-party complaints / number of transactions processed
Effort productivity:
Test planning productivity = no. of test cases designed / actual effort for design and documentation
Test execution productivity = no. of test cycles executed / actual effort for testing
Test efficiency = number of tests required / number of system errors
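
As an illustration, a minimal sketch computing DRE and a few of the metrics above from hypothetical project counts:

# DRE and related test metrics from hypothetical project counts.
errors_before_delivery = 90   # E: defects found before delivery
defects_after_delivery = 10   # D: defects found after delivery
dre = errors_before_delivery / (errors_before_delivery + defects_after_delivery)

system_size_kloc = 50.0
test_cases = 400
defects_detected = 90

test_density = test_cases / system_size_kloc     # test cases per KLOC
defect_density = defects_detected / system_size_kloc
quality_of_testing = dre * 100                   # same ratio, as a percentage

print(f"DRE: {dre:.2f}, {test_density:.0f} tests/KLOC, "
      f"{defect_density:.1f} defects/KLOC, quality of testing: {quality_of_testing:.0f}%")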

Measure and associated metrics:

1. Customer satisfaction index: Number of system enhancement requests per year; number of maintenance fix requests per year; user friendliness: call volume to customer service hotline; user friendliness: training time per new user; number of product recalls or fix releases (software vendors); number of production re-runs (in-house information systems groups).

2. Delivered defect quantities: Normalized per function point (or per LOC); at product delivery (first 3 months or first year of operation); ongoing (per year of operation); by level of severity; by category or cause, e.g. requirements defect, design defect, code defect, documentation/on-line help defect, defect introduced by fixes, etc.

3. Responsiveness (turnaround time) to users: Turnaround time for defect fixes, by level of severity; time for minor vs. major enhancements; actual vs. planned elapsed time (by customers) in the first year after product delivery.

7. Complexity of delivered product: McCabe's cyclomatic complexity counts across the system; Halstead's measure; Card's design complexity measures; predicted defects and maintenance costs, based on complexity measures.

8. Test coverage: Breadth of functional coverage; percentage of paths, branches or conditions that were actually tested; percentage by criticality level: perceived level of risk of paths; the ratio of the number of detected faults to the number of predicted faults.

9. Cost of defects: Business losses per defect that occurs during operation; business interruption costs; costs of work-arounds; lost sales and lost goodwill; litigation costs resulting from defects; annual maintenance cost (per function point); annual operating cost (per function point); measurable damage to your boss's career.

10. Costs of quality activities: Costs of reviews, inspections and preventive measures; costs of test planning and preparation; costs of test execution, defect tracking, version and change control; costs of diagnostics, debugging and fixing; costs of tools and tool support; costs of test case library maintenance; costs of testing & QA education associated with the product; costs of monitoring and oversight by the QA organization (if separate from the development and test organizations).

11. Re-work: Re-work effort (hours, as a percentage of the original coding hours); re-worked LOC (source lines of code, as a percentage of the total delivered LOC); re-worked software components (as a percentage of the total delivered components).

12. Reliability: Availability (percentage of time a system is available, versus the time the system is needed to be available); mean time between failure (MTBF); mean time to repair (MTTR); reliability ratio (MTBF / MTTR); number of product recalls or fix releases; number of production re-runs as a ratio of production runs.
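
As an example of the reliability measures in the last row, availability and the reliability ratio can be derived from MTBF and MTTR (the figures are hypothetical):

# Availability and reliability ratio from hypothetical MTBF/MTTR figures.
mtbf_hours = 500.0    # mean time between failures
mttr_hours = 4.0      # mean time to repair
availability = mtbf_hours / (mtbf_hours + mttr_hours)
reliability_ratio = mtbf_hours / mttr_hours
print(f"Availability: {availability:.3%}, reliability ratio: {reliability_ratio:.0f}")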
Reference

http://geekswithblogs.net/srkprasad/archive/2003/11/04/394.aspx


Q3. Write a short note on the SEI Capability Maturity Model (CMM). How does it differ from ISO 9000?
Ans
SEI Capability Maturity Model
Emanuel R. Baker, Ph.D., and Frank J. Koch

In 1991 the Software Engineering Institute (SEI) at Carnegie Mellon University introduced the Capability
Maturity Model for Software (CMM). This event marked a major milestone in the evolution of software
process management because, for the first time, the software community had a comprehensive description
of how software organizations "mature", or improve, in their ability to develop software.

The CMM defines five levels of process capability, each of which represents an evolutionary plateau towards
a disciplined, measured, and continuously improving software process.

The Initial Level


At the Initial level (Level 1) few, if any, organized processes exist. Each developer utilizes whatever methods
or techniques strike his or her fancy. The situation is sometimes described as chaotic and ad hoc. Software
quality is more a matter of chance, and is highly dependent on the capabilities of specific individuals within
the organization.

The Repeatable Level


To reach Level 2, a software development organization must put into place basic project management
practices. This includes the capability to estimate the size of the software to be produced, estimate
resources to execute the project, and track progress against these estimates. Also included is the
implementation of software configuration management and quality assurance practices, the capability to
effectively manage the requirements definition process, and the capability to manage subcontractors (if
applicable). This level is referred to as the Repeatable level; the organization has mastered tasks previously
learned. The organization is still highly dependent on individuals for the success of a project. In times of
stress, the organization tends to revert back to behaving as a Level 1 organization.

The Defined Level


Level 3 is characterized as the Defined level. At this level, the organization has defined and established the
software development and maintenance practices specific to the types of applications they produce. They
have put into place a set of standards and procedures to codify these practices, and the organization follows
them consistently. Training in these practices is provided. Peer reviews are performed as in-process
evaluations of product quality. Integrated project management exists. The organization is no longer highly
dependent on key individuals; the process belongs to the organization, not to individuals. At times of stress,
the Level 3 practices are not abandoned.

The Managed Level


At Level 3 and below, the primary focus is on product quality. At Level 4 and above, the primary focus shifts
to process quality (although some amount of attention is paid to process quality below Level 4). To reach
Level 4, the Managed level, the organization focuses on establishing a set of process measures and uses
them to initiate corrective actions. Once these measures have been established, the organization is ready to
begin to use them to implement continuous process improvement.

The Optimizing Level


At Level 5, the Optimizing level, these measures are not only being used to improve existing processes, but
also to evaluate candidate new processes. They are also being used as the basis for determining the
efficacy of introducing new technologies into the organization.
Using the CMM
How can the CMM help your organization? There are three key roles the CMM plays. First, the CMM helps
build an understanding of software process by describing the practices that contribute to a level of process
maturity.

The second role of the CMM is to provide a consistent basis for conducting appraisals of software
processes. The CMM defines a scale for measuring process maturity, thus allowing an organization to
accurately compare its process capability to that of another organization. ISO is using the CMM in its efforts
to develop international standards for software process assessments.

The CMM's third key role is to serve as a blueprint for software process improvement. The CMM can help an
organization focus on the areas it must address in order to advance to the next level of maturity.

Today, leading software organizations are adopting the CMM as their core strategy for improving quality and
productivity.

A Comparison of ISO 9001 and the Capability Maturity


Model for Software
The Capability Maturity Model for Software (CMM), developed by the Software Engineering Institute, and the ISO 9000
series of standards, developed by the International Organization for Standardization, share a common concern with
quality and process management. The two are driven by similar concerns and are intuitively correlated. The purpose of
this report is to contrast the CMM and ISO 9001, showing both their differences and their similarities. The results of the
analysis indicate that, although an ISO 9001-compliant organization would not necessarily satisfy all of the level 2 key
process areas, it would satisfy most of the level 2 goals and many level 3 goals. Because there are practices in the CMM
that are not addressed in ISO 9001, it is possible for a level 1 organization to receive ISO 9001 registration; similarly,
there are areas addressed by ISO 9001 that are not addressed in the CMM. A level 3 organization would have little
difficulty in obtaining ISO 9001 certification, and a level 2 organization would have significant advantages in obtaining
certification.

Part B

Q1. As size is the main factor determining the cost of a project, an accurate size estimate can be used to
estimate the cost and schedule of the software project. Give your views in favour of and against this
statement. Also write in brief about contract management and human resource management.

Cost estimation in software engineering
From Wikipedia, the free encyclopedia

The ability to accurately estimate the time and/or cost taken for a project to come to a
successful conclusion is a serious problem for software engineers. The use of a repeatable,
clearly defined and well understood software development process has, in recent years,
shown itself to be the most effective method of gaining useful historical data that can be used
for statistical estimation. In particular, the act of sampling more frequently, coupled with the
loosening of constraints between parts of a project, has allowed more accurate estimation
and more rapid development times.
There are many, many ways to estimate a software project, ranging from formal models
that can only be worked by a University professor through to a salesperson's hopeful
guess.
Here we present a simple, proven software project estimation method that produces
reasonably accurate results and has been used to estimate substantial fixed-price work.
This method is based on a quick but robust initial design on the premise that later
versions of the design will reduce the amount of work as developers come up with smart
ideas for saving work.
This method also calls for participation by a substantial number of people, increasing
buy-in and gaining accuracy. It also follows the principles of the Cardboard Checklist for
Planning, where relevant.
Overview
This estimation method has three steps:
1. Design the System.
2. Estimate each Part of the System.
3. Schedule the Work.
The outputs are:
1. A project schedule with allowances for schedule risk, internal dependencies and
known outages.
2. A list of other assumptions made during the estimation.
3. A list of the project's external dependencies.
4. A list of other risks to the project schedule.
The inputs are such documents as are available, including:
1. The statement of project scope.
2. User requirements - use cases, functional specs.
3. Technical requirements.
Of these, the only one that is absolutely required is the statement of project scope.
However, the more information available, the more accurate the final result will be.
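
As a sketch of steps 2 and 3, one common way to estimate each part and roll the estimates up is a three-point, PERT-style estimate with a contingency for schedule risk; this is our choice of technique rather than part of the method above, and all parts and figures are hypothetical:

# Three-point estimation per part, rolled up to a project total.
# Each part gets (optimistic, most likely, pessimistic) effort in person-days.
parts = {
    "user interface": (5, 8, 15),
    "business logic": (10, 15, 30),
    "database layer": (4, 6, 10),
}

def expected(o, m, p):
    return (o + 4 * m + p) / 6          # PERT expected value

total = sum(expected(*est) for est in parts.values())
risk_allowance = 0.15                   # 15% contingency for schedule risk
print(f"Expected effort: {total:.1f} person-days, "
      f"with risk allowance: {total * (1 + risk_allowance):.1f}")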


HUMAN RESOURCE MANAGEMENT GUIDE FOR


MARKET TESTING AND CONTRACTING OUT

The Human Resource Management Guide for Market Testing and Contracting Out is a
compilation of documents to assist APS agencies with the human resource management
(HRM) aspects of market testing and contracting out.
This Guide is designed to provide an overview for APS agencies on the management of
staff affected by the market testing and contracting out process. It is not intended to
replace legal or detailed advice for individual situations as they arise, but provides a
framework within which agencies can manage HRM aspects of the market testing and
contracting out process.
This version of the Guide updates the material originally prepared by the former Office of
Asset Sales and Commercial Support (OASACS) and the Australian Public Service
Commission (the Commission) drawing on information provided by the Australian Tax
Office (ATO), the Department of Employment and Workplace Relations (DEWR) and the
Department of Finance and Administration (Finance).
The Commission's publication Outsourcing—Human Resource Management Issues (June
2002) provides more detailed advice on the human resource aspects of outsourcing. This
publication is available on the Commission’s website at www.apsc.gov.au.
The reference material provided in this Guide may change as legislation and Public
Service procedures are amended. The Commission aims to provide links to updated
documentation through its website.

Contract Management Solution

Whether you are a prime contractor, subcontractor or government agency that is letting out and managing contracts, xpdoffice™
contract management software is the tool that puts you in charge. Accessible from anywhere through an intuitive Web interface,
contract specialists can begin using it right from Day One.

Set up and administer contracts.


xpdcontracts lets you create a contract framework and pull together all the loose ends. Establish a contract and its jobs and tasks,
assign job hours and billing rates based on labor categories, designate contract officers and technical representatives, assign
teams and managers, and more.

View and report contract data.


With xpdoffice™ contract management software, all contract details are at your fingertips. Screen views show all contract data, by
customer, by job and by task. And xpdcontracts lets you easily generate reports, including charts that illustrate completion status
and budget updates and breakdowns.

Meet DCAA requirements.


xpdcontracts is DCAA compliant, meeting the Defense Contract Audit Agency's CAS (Cost Accounting Standards) and FAR
(Federal Acquisition Regulations) requirements. xpdcontracts can perform direct and indirect cost segregation, track cumulative-
to-date direct costs by work breakdown structure and handle government per diem monitoring, among other capabilities.

Communicate with your team and customers.


For good communication, you need a contract plan that clearly spells out everyone's roles. xpdcontracts enables you to create
that plan in a straightforward, yet comprehensive manner. Once work starts, as you enter your hours worked, budget consumed,
and task and job completions, xpdoffice™ provides a real-time accounting of contract status. Reports are easily printed out or
emailed to team members or customers.

• DCAA compliance
• Peachtree, QuickBooks synchronization
• Microsoft Project
• Universal web access

xpdcontracts™ contract management software can be your key within today's high-performance business environment to
maintaining relationships with customers and suppliers alike. xpdcontracts™ supports these customer and supplier relationships
effectively and efficiently. It compiles all contract data including personnel information, allocated time, complete description, and
task breakdown.
xpdcontracts offers a powerful array of features:
Budget creation
Easily develop your contract budgets from the contract level down to the job and task level, allocating every hour.

Work breakdown
Simple process for assigning jobs and tasks to employees and managers, and rolling all of that up to the contract level.

Schedule development
Create project schedules and establish key milestones.

Contract-focused data capture


With a few clicks you can create records for new customers, or go into existing records and edit customer data. Again, a few clicks
are all that's needed to create a new contract record or add data to an existing one using xpd's contract management software.

Earned Value Management (EVM)


Assess a contract's Earned Value by using simple tools that let you roll up completion percentage estimates from the task and job
level to the overall contract level.

Data sharing
Integrates automatically with major accounting systems (including Peachtree, QuickBooks, Great Plains and ERP systems), as
well as with project management tools like Microsoft Project. It also shares data with all open system databases and any
application that accepts or exports Microsoft Excel data.

Universal access
xpdcontracts operates as a stand-alone Web-based service, with no installation, integration, maintenance or on-site hosting
required.

Security
To prevent access from unauthorized internal or external sources, xpdcontracts uses SSL128 and offers user-selected permission
levels along with user IDs and passwords for the project team and for clients. xpdcontracts is hosted at a remote secure server
site. For redundancy, a backup site is mirrored in another facility at another location.

Cost effectiveness
User licenses on our software cost just pennies per day per user. You can add or cancel licenses at any time.

xpdcontracts is part of the xpdoffice™ suite of business productivity and PSA solutions from xpdientinc, a division of Scientific
Systems and Software International, a software and technical services firm in operation since 1985. xpdoffice™ applications work
seamlessly with each other and incorporate modules such as timesheet software and human resources management software that
bring increased efficiency to projects, human resource functions and other time-consuming business applications.

To find out more about xpdcontracts or xpdoffice™, request a demo, send us an email, or call 888-777-4638, ext. 264.

Q2. Why is it important for a software development organization to obtain ISO 9126 certification?

Applying the ISO 9126 model to the evaluation of an e-learning system

Despite the widespread use of e-learning systems and the considerable investment in purchasing
or developing them in house, there is no consensus on a standard framework for evaluating
system quality. This paper proposes the ISO 9126 Quality Model as a useful tool for evaluating
such systems, particularly for teachers and educational administrators. The authors demonstrate
the validity of the model in a case study in which they apply it to a commonly available
e-learning system and show how it can be used to detect design flaws. It is proposed that the
metric would be applicable to other e-learning systems and could be used as the basis for a
comparison to inform purchase decisions.
Keywords: e-learning, ISO, ISO 9126, Blackboard, online learning
Introduction
Most universities and colleges use e-learning systems to support face to face learning in
the classroom or
to implement distance learning programmes. The growth of e-learning systems has
increased greatly in
recent years thanks to the demand by students for more flexible learning options and
economic pressures
on educational institutions, who see technology as a cost saving measure. Yet, there has
been
considerable criticism of the quality of the systems currently being used. Problems
include low
performance, poor usability, and poor customisability, which make it difficult to serve the
specific needs
of different learners. Furthermore, online education has often been criticised as not
supporting learner
centred education but replicating traditional face to face instruction (Vrasidas 2004).
Despite the widespread use of e-learning systems and the considerable investment in
purchasing or
developing them in house, there is no consensus on devising a standard framework for
evaluating system
quality in this area. The lack of an agreed e-learning system quality model is in stark
contrast to the
extensive work on software quality assurance in general (Crosby 1979; Garvin 1984; Juran
1988; Norman
& Pfleeger 2002).
This paper proposes the ISO 9126 Quality Model (ISO 1991) as a useful tool for evaluating
such systems.
The ISO 9126 model was developed by the International Organization for Standardisation
(ISO) and is
one of a large group of internationally recognised standards applicable across a wide
range of
applications. To date, ISO 9126 has not been applied extensively to the e-learning
environment.
Nevertheless, the authors believe that it has potential to provide a useful evaluation tool:
this belief
derives from the many years of industry experience that one of the researchers has had in
software quality
assurance. Perspectives from this domain could provide insights relevant to e-learning
educators. In this
paper we propose that the ISO 9126 model could be used as the basis for a comparison of
e-learning
systems to inform decisions regarding review of existing systems and the purchase of
new ones.
First of all, the paper examines the e-learning system literature and evaluates some of the
software quality
tools and frameworks that have been proposed. Secondly, we introduce the ISO 9126
Quality Model as a
basis for evaluating e-learning tools and explain the characteristics and sub-
characteristics of the model.
The main objective of our paper was to demonstrate how the model can be used to
evaluate an e-learning
system. With this in mind, we chose a commonly used system, Blackboard, as a basis for
our research
and adopted a case study approach. We applied the model to the system in the context of
an Information
Technology subject in an undergraduate programme. In this paper, we summarise the
results of the evaluation of the system: generally, our results show the model is a good
framework for assessing e-learning systems, although we do identify several possible
refinements to the model. Finally, we analyse the implications of using the ISO 9126
Quality Model to evaluate and improve e-learning systems.
The ISO 9126 model
The International Organization for Standardisation (ISO) was founded in 1946 in order to
facilitate
international trade, international coordination and unification of industrial standards by
providing a single
set of standards that would be recognised and respected (Praxiom Research Group). ISO
9126 was
originally developed in 1991 to provide a framework for evaluating software quality and
then refined
over a further ten year period (Abran et al. 2003). Many studies criticise ISO 9126 for not
prescribing
specific quality requirements, but instead defining a general framework for the evaluation
of software
quality (Valenti 2002). We believe that this is in fact one of its strengths as it is more
adaptable and can
be used across many systems, including e-learning systems. The original model defined
six product
characteristics (see Figure 1). These six characteristics are further subdivided into a
number of sub-characteristics (see Table 1).
Figure 1 (Source: ISO 1991). The six ISO 9126 quality characteristics, each posed as a question: functionality (are the required functions available in the software?); reliability (how reliable is the software?); efficiency (how efficient is the software?); usability (is the software easy to use?); maintainability (how easy is it to modify the software?); portability (how easy is it to transfer to another environment?).
Table 1: ISO 9126 characteristics and sub-characteristics (Source: ISO 1991; Abran 2003)

Functionality
  Suitability: Can the software perform the tasks required?
  Accurateness: Is the result as expected?
  Interoperability: Can the system interact with another system?
  Security: Does the software prevent unauthorised access?
Reliability
  Maturity: Have most of the faults in the software been eliminated over time?
  Fault tolerance: Is the software capable of handling errors?
  Recoverability: Can the software resume working and restore lost data after failure?
Usability
  Understandability: Does the user comprehend how to use the system easily?
  Learnability: Can the user learn to use the system easily?
  Operability: Can the user use the system without much effort?
  Attractiveness: Does the interface look good?
Efficiency
  Time behaviour: How quickly does the system respond?
  Resource utilisation: Does the system utilise resources efficiently?
Maintainability
  Analysability: Can faults be easily diagnosed?
  Changeability: Can the software be easily modified?
  Stability: Can the software continue functioning if changes are made?
  Testability: Can the software be tested easily?
Portability
  Adaptability: Can the software be moved to other environments?
  Installability: Can the software be installed easily?
  Conformance: Does the software comply with portability standards?
  Replaceability: Can the software easily replace other software?
All characteristics
  Compliance: Does the software comply with laws or regulations?
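
To use the model as a basis for comparing e-learning systems, the sub-characteristic questions can be scored and aggregated per characteristic. The sketch below assumes simple 1-5 ratings and equal weights; both are our assumptions, not part of ISO 9126:

# Comparing two systems on ISO 9126 characteristics using
# hypothetical 1-5 ratings per sub-characteristic.
ratings = {
    "System A": {"functionality": [4, 3, 4, 5], "usability": [3, 3, 2, 4]},
    "System B": {"functionality": [3, 4, 3, 4], "usability": [4, 5, 4, 4]},
}

for system, chars in ratings.items():
    scores = {c: sum(v) / len(v) for c, v in chars.items()}
    overall = sum(scores.values()) / len(scores)
    print(system, scores, f"overall {overall:.2f}")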

Q3. Describe function point analysis. How are function points used in the estimation of cost and effort using decomposition techniques?

An Introduction to Function Point Analysis


by Roger Heller

The purpose of this article is to provide an introduction to Function Point Analysis and its application in non-
traditional computing situations. Software engineers have been searching for a metric that is applicable for a
broad range of software environments. The metric should be technology independent and support the need
for estimating, project management, measuring quality and gathering requirements. Function Point Analysis
is rapidly becoming the measure of choice for these tasks.

Function Point Analysis has been proven as a reliable method for measuring the size of computer software.
In addition to measuring output, Function Point Analysis is extremely useful in estimating projects, managing
change of scope, measuring productivity, and communicating functional requirements.
There have been many misconceptions regarding the appropriateness of Function Point Analysis in
evaluating emerging environments such as real-time embedded code and object-oriented programming.
Since function points express the resulting work-product in terms of functionality as seen from the user's
perspective, the measure is independent of the tools and technologies used to deliver it.

The following provides an introduction to Function Point Analysis and is followed by further discussion of
potential benefits.

Introduction to Function Point Analysis


One of the initial design criteria for function points was to provide a mechanism that both software
developers and users could utilize to define functional requirements. It was determined that the best way to
gain an understanding of the users' needs was to approach their problem from the perspective of how they
view the results an automated system produces. Therefore, one of the primary goals of Function Point
Analysis is to evaluate a system's capabilities from a user's point of view. To achieve this goal, the analysis
is based upon the various ways users interact with computerized systems. From a user's perspective a
system assists them in doing their job by providing five (5) basic functions. Two of these address the data
requirements of an end user and are referred to as Data Functions. The remaining three address the user's
need to access data and are referred to as Transactional Functions.

The Five Components of Function Points

Data Functions

• Internal Logical Files


• External Interface Files

Transactional Functions

• External Inputs
• External Outputs
• External Inquiries

Internal Logical Files - The first data function allows users to utilize data they are responsible for
maintaining. For example, a pilot may enter navigational data through a display in the cockpit prior to
departure. The data is stored in a file for use and can be modified during the mission. Therefore the pilot is
responsible for maintaining the file that contains the navigational information. Logical groupings of data in a
system, maintained by an end user, are referred to as Internal Logical Files (ILF).

External Interface Files - The second Data Function a system provides an end user is also related to
logical groupings of data. In this case the user is not responsible for maintaining the data. The data resides
in another system and is maintained by another user or system. The user of the system being counted
requires this data for reference purposes only. For example, it may be necessary for a pilot to reference
position data from a satellite or ground-based facility during flight. The pilot does not have the responsibility
for updating data at these sites but must reference it during the flight. Groupings of data from another
system that are used only for reference purposes are defined as External Interface Files (EIF).

The remaining functions address the user's capability to access the data contained in ILFs and EIFs. This
capability includes maintaining, inquiring and outputting of data. These are referred to as Transactional
Functions.

External Input - The first Transactional Function allows a user to maintain Internal Logical Files (ILFs)
through the ability to add, change and delete the data. For example, a pilot can add, change and delete
navigational information prior to and during the mission. In this case the pilot is utilizing a transaction
referred to as an External Input (EI). An External Input gives the user the capability to maintain the data in
ILF's through adding, changing and deleting its contents.

External Output - The next Transactional Function gives the user the ability to produce outputs. For
example a pilot has the ability to separately display ground speed, true air speed and calibrated air speed.
The results displayed are derived using data that is maintained and data that is referenced. In function point
terminology the resulting display is called an External Output (EO).

External Inquiries - The final capability provided to users through a computerized system addresses the
requirement to select and display specific data from files. To accomplish this a user inputs selection
information that is used to retrieve data that meets the specific criteria. In this situation there is no
manipulation of the data. It is a direct retrieval of information contained on the files. For example if a pilot
displays terrain clearance data that was previously set, the resulting output is the direct retrieval of stored
information. These transactions are referred to as External Inquiries (EQ).

In addition to the five functional components described above there are two adjustment factors that need to
be considered in Function Point Analysis.

Functional Complexity - The first adjustment factor considers the Functional Complexity for each unique
function. Functional Complexity is determined based on the combination of data groupings and data
elements of a particular function. The number of data elements and unique groupings are counted and
compared to a complexity matrix that will rate the function as low, average or high complexity. Each of the
five functional components (ILF, EIF, EI, EO and EQ) has its own unique complexity matrix. The following is
the complexity matrix for External Outputs.
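
As a sketch, the commonly published IFPUG complexity matrix for External Outputs, the standard component weights, and a decomposition-style effort estimate can be coded as follows. The component counts and the hours-per-function-point productivity figure are hypothetical:

# Sketch: rating an External Output's complexity from its file types
# referenced (FTRs) and data element types (DETs), then estimating
# effort from the unadjusted function point (UFP) count.

def eo_complexity(ftrs, dets):
    """Commonly published IFPUG complexity matrix for External Outputs."""
    if ftrs <= 1:
        return "low" if dets <= 19 else "average"
    if ftrs <= 3:
        if dets <= 5:
            return "low"
        return "average" if dets <= 19 else "high"
    return "average" if dets <= 5 else "high"

# Standard IFPUG weights per component type: (low, average, high).
WEIGHTS = {"EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
           "ILF": (7, 10, 15), "EIF": (5, 7, 10)}
RANK = {"low": 0, "average": 1, "high": 2}

# Hypothetical decomposition of a system into counted components:
# (component type, complexity, number counted).
counts = [("ILF", "low", 4), ("EIF", "average", 2),
          ("EI", "average", 10), ("EO", eo_complexity(2, 12), 6),
          ("EQ", "low", 5)]

ufp = sum(WEIGHTS[t][RANK[c]] * n for t, c, n in counts)

# Effort estimate from historical productivity
# (hypothetical: 8 hours per function point).
hours_per_fp = 8.0
print(f"UFP: {ufp}, estimated effort: {ufp * hours_per_fp:.0f} hours")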
