
Zero-Defect Software Development
Published Jun 13 2000 02:53 PM in General Programming by Steve Pavlina (Dexterity Software)

As an independent game developer, I will work from six months to a couple of years to develop and release a new computer game. I can sell sequels or expansion packs, but I generally cannot sell upgrades as with other software. When I release a new game, I must make sure it is of very high quality, because my users won't think to look for an upgrade. Because of the short lifespan of most computer games and their dependence on transient technology, my opportunities to improve a released product based on customer feedback are minimal. To deal with these issues, I gradually adopted a system of Quality Assurance (QA) practices that allowed me to significantly increase product quality while simultaneously reducing development time.

Zero-Defect Software Development (ZDSD) is not to be taken as meaning "bug-free." It is the practice of keeping software in the highest-quality state throughout the entire development process. "Defects" are aspects of the evolving software that would not be suitable for the final product as-is. This broad definition includes bugs as well as unwanted deviations from the desired final outcome. Defects in the development of a computer game would include unpolished artwork, an unacceptably low frame rate on the target system, levels that aren't fun enough, or any number of unfinished features.

The basic tenet of ZDSD is this: maintain your product in what you believe to be a defect-free state throughout the development process. This sounds simple, but it is a rare practice. The most common approach is to delay major testing until the final QA phase of development, where defects are often discovered for the first time. Most bugs are not detected or fixed until long after their introduction, and the longer a defect remains, the harder it is to fix. On large software products, each stage of development that a defect survives increases the cost of fixing it by ten to fifty times. A defect introduced in the design phase can cost hundreds of times more to fix in the testing phase than it would if fixed immediately after its introduction.

By focusing on product quality throughout the development lifecycle, you will actually complete products faster than if you didn't pay attention to quality until the end of the project. The general rule of software quality is counter-intuitive: improving quality actually reduces development time. This is because you eliminate all the time spent fixing bugs and reworking code, which can account for as much as 50% of development costs on a large project. The typical programmer writes between eight and twenty lines of code a day; the rest of the day is usually spent on debugging. ZDSD shortens schedules by eliminating most debugging time. Extensive studies done at NASA, IBM, and elsewhere have shown that better QA leads to shorter schedules. An IBM study concluded that software projects that make quality a top priority typically have the shortest schedules, the highest productivity, and even the best sales.

Here are the ten basic rules of ZDSD:

1. Test your product every day as you develop it, and fix defects as soon as you find them. Apply the daily build and smoke test: at the end of every day you work on your project, build the current version of your software and test it for basic functionality. Microsoft enforces this policy religiously, using large teams to build each project on a daily basis. A programmer whose code breaks the build may be called in the middle of the night and must go back to work to fix the problem immediately. For independent game developers working on small projects, this is far easier. At the end of each day, test your program for at least ten minutes. Make a list of anything you would consider a "defect," and resolve to fix all defects before implementing any new features. Once you find a defect, fixing it becomes your number one priority, and you avoid writing any new code until the defect is 100% eliminated.

2. Review your code regularly. When most people think of QA, they think of testing, but testing is actually one of the least cost-effective strategies for finding bugs. The most rigorous testing will typically find less than 60% of all bugs in a program, and there are certain types of bugs that testing will rarely find. Studies conducted at many large software organizations have concluded that code inspections are far more cost-effective than testing. A NASA study found that code reading detected almost twice as many defects per hour as testing. Whenever you've added a few hundred lines of new code to your project, set aside an hour or two to read over your work and look for mistakes. One hour of code review is equivalent to two or more hours of methodical testing. As you gain experience, keep a list of the types of defects you find, and run down your list whenever reviewing new code. To find even more defects, have someone else read your code as well.

3. Rewrite poor-quality modules. When you discover an obscure new bug, do you ever pray, "Oh no! Please don't let it be in that module!"? We all have monster modules of legacy code that were written when we weren't such seasoned programmers as we are today. Don't fear them; rewrite them. Often a better approach will only become clear once an inferior solution has already been implemented. This was certainly true for John Carmack, who coded dozens of different approaches when writing the Quake engine before discovering one that satisfied him. Defects will not be distributed evenly across your code: you will typically find that 20% of your routines are responsible for 80% of your errors. In my programs it is normally the modules that interact with the hardware or with third-party drivers, especially DirectX, that are the most buggy. Raise your standards for those modules that seem to produce a never-ending supply of bugs, and take the time to rewrite them from scratch. You may find that other intermittent bugs disappear completely as a result.

4. Assume full responsibility for every bug. 95% of all software defects are caused by the programmer. Only 1% of defects are hardware errors, and the remaining 4% are caused by the compiler, the OS, or other software. Never dismiss a potential bug; find out the exact cause of any anomaly. When the Mars probe suffered serious software glitches during its mission, it was learned that the same glitch had occurred only once during testing on earth, but the engineers dismissed it as a temporary hardware hiccup. Unless your hardware drinks soda, it does not hiccup.

5. Handle change effectively. You will always think of great new features to add after you have started coding. Carefully consider how each change will impact your pre-existing code. Poor integration of unanticipated features is a major cause of defects.

6. Rewrite all prototyping code from scratch. Sometimes you may quickly prototype a new feature to see if it will be viable. Often this is done by sacrificing code quality in the name of rapid development. If you eventually decide to keep the feature, it is very tempting to simply tack some basic error checking onto the prototyping code. Don't fall into this trap. If you weren't originally writing the code with quality as a priority, scrap the prototyping code and re-implement the feature from scratch. Rapidly prototyped features that slip into the final product are a major source of bugs, because they are not subject to the same quality standards as the rest of the code.

7. Set QA objectives at the beginning of every project. Studies have shown that developers who set reasonable QA goals will usually achieve them. Decide in advance whether your product must be fast, small, feature-rich, intuitive, scalable, etc. Then prioritize those objectives. When designing the interface code for an upcoming game, I decided that my top three priorities were to make it beginner-intuitive, fast, and fun, in that order. Consequently, my game's interface isn't as graphically rich as those of other games, but it is easier to use and faster than any other game of its type. Whenever you have to make a design decision, keep your objectives in mind. If you do not set clear QA goals, then you are doomed to accept the results of random chance.

8. Don't rush debugging work. Fully 50% of all bug fixes are done incorrectly the first time, often introducing new bugs in the process. Never experiment by simply changing "x-1" to "x+1" to see if that will do the trick. Take the time to understand the source of the bug. Long ago, when I was a boy scout and had to put out a campfire, the Scoutmaster would sometimes test my thoroughness by asking me to put my hand in the ashes. I learned very quickly how to put out a fire so well that I had complete confidence it was 100% extinguished. When you find a defect, it means your code is on fire. As long as the defect remains, any new code you write will add fuel to that fire. Whenever you find a defect, drop everything to fix it, and don't move on until you are 100% confident that your fix is correct. If you don't take the time to do it right the first time, when will you find the time to do it over?

9. Treat the quality of your code as being just as important as the quality of your product. Rate your code on a scale of one to ten for overall quality. The first time I did this, I rated my 30,000-line project as a four. I rewrote the worst of the code until I reached an eight overall. It was one of the best investments of time I ever made, because I was then able to add new features at double my previous rate. The quality of your code is highly indicative of the quality of your product. You may find, as I have, that your best-selling products also receive your highest ratings for code quality.

10. Learn from every bug; each one represents a mistake that you made. Learn why you made each mistake, and see if you can change something about your development practices to eliminate it. Over the years I have adopted many simple coding practices that allow me to avoid common bugs that used to plague me. There are many types of bugs that I now never encounter, because my coding style makes it physically impossible for me to introduce them.
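Rule 1's end-of-day smoke test can be automated even on a one-person project. The sketch below is a minimal harness; the two check functions are hypothetical stand-ins for whatever basic-functionality checks your product needs (launch, save/load, frame rate, and so on):

```python
def check_program_starts():
    """Hypothetical check: does the game launch to the main menu?"""
    return True

def check_save_and_reload():
    """Hypothetical check: does a saved game reload with identical state?"""
    return True

def run_smoke_tests(checks):
    """Run every end-of-day check; any failure is a defect that must be
    fixed before new features are written."""
    return [check.__name__ for check in checks if not check()]

if __name__ == "__main__":
    defects = run_smoke_tests([check_program_starts, check_save_and_reload])
    if defects:
        print("Defect list (fix before writing new code):", defects)
    else:
        print("Smoke test passed.")
```

A real harness would drive the actual build produced that day; the point is simply that the defect list, not the feature list, drives the next day's work.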

Definition
Software Configuration Management (SCM) comprises the practices and procedures for administering source code, producing software development builds, controlling change, and managing software configurations.

Specifically, SCM ensures the integrity, reliability and reproducibility of developing software products from conception to release.

What is SCM?
SCM encapsulates the practices and procedures for administering source code, producing software development builds, controlling change, and managing software configurations. More specifically, SCM ensures the integrity, reliability and reproducibility of developing software products from planning to release.

Why Use SCM?


Using SCM greatly reduces your risk in developing software. No one wants surprises when trying to meet a tight software release deadline.

SCM Tool Administration


Source code is the intellectual property of any organization. Understanding and properly maintaining this valuable asset is SCM's highest priority.

Software Builds
Software building, otherwise known as integration, is the process of taking all the source code files that make up an application and compiling them into build artifacts such as binaries or executables. SCM ensures that the build process adheres to the following best practices:
- The process is fully automated
- The process is repeatable
- The process is reproducible
- The process is adhered to
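One way to check the "reproducible" requirement is to fingerprint the build inputs: identical sources must always map to an identical build identifier. A minimal sketch, with hypothetical file names and contents:

```python
import hashlib

def build_fingerprint(sources):
    """Hash source names and contents in a fixed order, so the same
    inputs always yield the same build identifier (reproducibility)."""
    digest = hashlib.sha256()
    for name in sorted(sources):        # fixed order makes it repeatable
        digest.update(name.encode())
        digest.update(sources[name])
    return digest.hexdigest()[:12]

# Same inputs, regardless of ordering, give the same fingerprint:
a = build_fingerprint({"main.c": b"int main(){}", "util.c": b"/* util */"})
b = build_fingerprint({"util.c": b"/* util */", "main.c": b"int main(){}"})
assert a == b
```

Recording such a fingerprint alongside each build artifact lets SCM verify later that a release can be rebuilt from exactly the same configuration.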

Software Build Engineering

Change Control
Change control is the ability to apply proper controls to the software development process, ensuring that only appropriate and approved changes are added to the application.


Software Project Management


Project Management is the act of managing a project: an approach to managing work within the constraints of cost, quality, and time. It is also a body of knowledge concerned with the principles, techniques, and tools used in initiating, planning, executing, controlling, and completing projects. In short, Project Management is a methodical approach to planning and guiding project processes from start to finish.

SPICE - Presentation Transcript

1. Software Quality Assurance: SPICE - Software Process Improvement and Capability dEtermination
Seminar: Oana FEIDI, Quality Manager, Continental Automotive

2. What is SPICE?
SPICE (Software Process Improvement & Capability dEtermination), also known as ISO/IEC 15504, is an international standard for software process assessments. It is mainly used in Europe and Australia by the automotive industry.
Goal: To provide assessment results that are repeatable, objective, and comparable.
Future: Automotive SPICE launched in April 2006; its usage will increase, driven mainly by HIS (Audi, BMW, Daimler Chrysler, Porsche, Volkswagen), Ford & Volvo, and Fiat. Each OEM has a different target level; if these targets are not met, suppliers are requested to improve their development processes.

In case of high risks or low capability levels, suppliers are excluded from sourcing.

3. The Goal of Process Assessment & Improvement
- The goal of an improvement is to change an organization's processes so that it achieves a higher ability to meet its business goals.
- Assessments deliver the input for any improvement by detecting strengths and weaknesses in the organization's processes.
- Assessments are also a tool used by customers to ascertain the ability of their suppliers to meet their needs.

Process Assessment -> Rating -> Improvement

4. SPICE Models Structure
The reference model architecture for this assessment model is 2-dimensional:
- Process dimension -> contains processes in groups
- Process Capability dimension -> allows the capability of each process to be measured independently

5. SPICE Models Structure
- Process dimension: characterized by a set of purpose statements which describe in measurable terms what has to be achieved in order to attain the defined purpose of the process.
- Process Capability dimension: characterizes the level of capability that an organizational unit has attained for a particular process, or may be used by the organization as a target to be attained. Capability levels represent measurable characteristics necessary to manage a process and improve its capability to perform.

6-7. Capability Dimension overview
- Level 1 (Performed process): Base practices of the process are performed ad hoc and poorly controlled. Work products of the process are identifiable.
- Level 2 (Managed process): Base practices of the process are planned and tracked. Work products conform to standards and requirements.
- Level 3 (Established process): The process is managed and performed using a defined process. Projects use a tailored version of the standard process.
- Level 4 (Predictable process): The process is performed consistently in practice within defined control limits. The quality of work products is quantitatively known.
- Level 5 (Optimizing process): The process performance is optimized to meet current and future business needs.

Capability Level 1 & 2
Level 1:
- The purpose of the process is generally achieved
- Work products prove implementation of base practices
- No documented process
- No planning or checks of performance of the process
- No quality requirements for work products are expressed

8-9. Capability Level 2
- The performance of the process is planned and checked
- The responsibility for developing the work products is assigned to a person or team
- Requirements for the work products are identified, documented and traced
- Work products are put under configuration management and quality assurance
- No documented or defined process

10. Capability Level 3
- A documented standard process with tailoring guidelines exists and is used in the project
- Historical process performance data is gathered
- Experience from the performance of the process is used for process improvement
- Resources and needed infrastructure for the performance of the process are identified and made available
- The process is not yet quantitatively understood or managed
- Process improvement is reactive

11. Process Dimension overview
- Primary Life Cycle Processes Category
  Acquisition, Supply
  Engineering: ENG.4 (Software requirements analysis), ENG.5 (Software design), ENG.6 (Software construction), ENG.7 (Software integration), ENG.8 (Software testing)
- Supporting Life Cycle Processes Category
  Support: SUP.1 (Quality assurance), SUP.2 (Verification), SUP.8 (Configuration management), SUP.10 (Change request management)
- Organizational Life Cycle Processes Category
  Management: MAN.3 (Project Management), MAN.5 (Risk Management)
  Process Improvement: PIM.3 (Process Improvement)
  Reuse: REU.2 (Reuse program management)

12. Process Attributes
- Level 1: Process performance
- Level 2: Performance management; Work product management
- Level 3: Process definition; Process deployment
- Level 4: Process measurement; Process control
- Level 5: Process innovation; Continuous optimization

13. Measuring capability levels
The fulfillment of a process attribute (PA) is measured along a scale from 0 to 100% in the following predefined stages:
- N (not achieved), 0-15%: There are no, or only very limited, indications of PA fulfillment.
- P (partially achieved), 16-50%: There are some indicators that the PA is implemented to the measured extent. In some aspects the process remains unpredictable, though.
- L (largely achieved), 51-85%: There is evidence that the PA is implemented to the measured extent in a useful and systematic way. Process performance might still show some weaknesses.
- F (fully achieved), 86-100%: There is evidence of complete and systematic PA execution to the measured extent. Process performance does not show any significant shortcomings in the analyzed processes.

14. SPICE Assessments
Planning -> Data collection -> Data analysis -> Process rating -> Report
- Assessment input: purpose, scope, constraints, qualified assessor, extended process definition
- Assessment process:
  Process model: process purpose, practices
  Assessment instrument: process indicators, process management indicators
- Assessment output: process capability level ratings, assessment record

15. SPICE Assessments results

16. Debate
Let's rate the base practices for ENG.8 (Software testing) in your organization.
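The N/P/L/F scale described in slide 13 amounts to a simple lookup. A sketch using the percentage boundaries given above (the function name is ours, not part of the standard):

```python
def rate_process_attribute(fulfillment_pct):
    """Map a process-attribute (PA) fulfillment percentage (0-100)
    to its ISO/IEC 15504 rating stage: N, P, L, or F."""
    if not 0 <= fulfillment_pct <= 100:
        raise ValueError("fulfillment must be between 0 and 100")
    if fulfillment_pct <= 15:
        return "N"   # not achieved
    if fulfillment_pct <= 50:
        return "P"   # partially achieved
    if fulfillment_pct <= 85:
        return "L"   # largely achieved
    return "F"       # fully achieved
```

For example, a PA judged 70% fulfilled would be rated "L" (largely achieved).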

Definition and Summary
Applying statistical process control (the use of control charts) to the management of software development efforts, in order to effect software process improvement.

Statistical Process Control (SPC) can be applied to software development processes. A process has one or more outputs, as depicted in the figure below. These outputs, in turn, have measurable attributes. SPC is based on the idea that these attributes have two sources of variation: natural (also known as common) causes and assignable (also known as special) causes. If the observed variability of the attributes of a process is within the range of variability from natural causes, the process is said to be under statistical control. The practitioner of SPC tracks the variability of the process to be controlled. When that variability exceeds the range to be expected from natural causes, one then identifies and corrects assignable causes.

SPC is a powerful tool for optimizing the amount of information needed to make management decisions. Statistical techniques provide an understanding of business baselines, insights for process improvements, and a way to communicate the value and results of processes with active and visible involvement. SPC provides real-time analysis to establish controllable process baselines; to learn, set, and dynamically improve process capabilities; and to focus the business on areas needing improvement. SPC moves decision making away from opinion. These benefits cannot be obtained immediately by all organizations: SPC requires defined processes and the discipline to follow them. It requires a climate in which personnel are not punished when problems are detected, as well as strong management commitment.

DESCRIPTION OF THE PRACTICE: SUMMARY DESCRIPTION



The key steps for implementing Statistical Process Control are:
- Identify defined processes
- Identify measurable attributes of the process
- Characterize the natural variation of the attributes
- Track process variation
- If the process is in control, continue to track
- If the process is not in control:
  - Identify the assignable cause
  - Remove the assignable cause
  - Return to tracking process variation
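The steps above can be sketched as a small tracking loop. The defect-density baseline below is invented for illustration, and three-sigma limits stand in for "natural variation":

```python
import statistics

def natural_limits(baseline, k=3.0):
    """Characterize natural variation as mean +/- k standard deviations
    of the baseline observations."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mean - k * sigma, mean + k * sigma

def track(samples, limits):
    """Return indices of samples outside the natural-variation band;
    each one flags a possible assignable cause to identify and remove."""
    lcl, ucl = limits
    return [i for i, x in enumerate(samples) if not lcl <= x <= ucl]

# Illustrative defect densities (defects/KLOC) from past inspections:
baseline = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.1, 3.7]
print(track([4.0, 4.3, 9.5], natural_limits(baseline)))  # flags index 2
```

The flagged inspection would then be investigated for an assignable cause before tracking resumes.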

DETAILED DESCRIPTION
Statistical Process Control (SPC) can be applied to software development processes. A process has one or more outputs, as depicted in Figure 1. These outputs, in turn, have measurable attributes. SPC is based on the idea that these attributes have two sources of variation: natural (also known as common) causes and assignable (also known as special) causes. If the observed variability of the attributes of a process is within the range of variability from natural causes, the process is said to be under statistical control. The practitioner of SPC tracks the variability of the process to be controlled. When that variability exceeds the range to be expected from natural causes, one then identifies and corrects assignable causes. Figure 2 depicts the steps in an implementation of SPC.

Figure 1: Statistical Process Control

Figure 2: How To Perform SPC

In practice, reports of SPC in software development and maintenance tend to concentrate on a few software processes. Specifically, SPC has been used to control software (formal) inspections, testing, maintenance, and personal process improvement. Control charts are the most common tools for determining whether a software process is under statistical control. A variety of types of control charts are used in SPC. Table 1, based on a survey [Radice 2000] of SPC usage in organizations attaining Level 4 or higher on the SEI CMM metric of process maturity, shows what types are most commonly used in applying SPC to software. The combination of an Upper Control Limit (UCL) and a Lower Control Limit (LCL) specify, on control charts, the variability due to natural causes. Table 2 shows the levels commonly used in setting control limits for software SPC. Table 3 shows the most common statistical techniques, other than control charts, used in software SPC. Some of these techniques are used in trial applications of SPC to explore the natural variability of processes. Some are used in techniques for eliminating assignable causes. Analysis of defects is the most common technique for eliminating assignable causes. Causal Analysis-related techniques, such as Pareto analysis, Ishikawa diagrams, the Nominal Group Technique (NGT), and brainstorming, are also frequently used for eliminating assignable causes.

Table 1: Usage of Control Charts
- Xbar-mR: 33.3%
- u-Chart: 23.3%
- Xbar: 13.3%
- c-Chart: 6.7%
- z-Chart: 6.7%
- Not clearly stated: 16.7%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 2: Location of UCL-LCL in Control Charts
- Three-sigma: 16%
- Two-sigma: 4%
- One-sigma: 8%
- Combination: 16%
- None/Not clear: 24%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Table 3: Usage of Other Statistical Techniques
- Run Charts: 22.8%
- Histograms: 21.1%
- Pareto Analysis: 21.1%
- Scatter Diagrams: 10.5%
- Regression Analysis: 7.0%
- Pie Charts: 3.5%
- Radar/Kiviat Charts: 3.5%
- Other: 10.5%

From Ron Radice's survey of 25 CMM Level 4 and Level 5 organizations [Radice 2000]

Control charts are a central technology for SPC. Figure 3 shows a sample control chart constructed from simulated data. It is an X-chart, in which the value of the attribute is graphed along with the control limits. In this case, the control limits are based on a priori knowledge of the distribution of the attribute when the process is under control, and are set at three sigma. For a normal distribution, only about 0.3% of samples would fall outside such limits by chance. This control chart indicates the process is out of control. If this chart were for real data, the next step would be to investigate the process to identify assignable causes and correct them, thereby bringing the process under control.
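The fraction of in-control samples expected outside the limits follows directly from the normal distribution; a quick check of the three-sigma case:

```python
from statistics import NormalDist

def false_alarm_rate(k_sigma):
    """Probability that a normally distributed, in-control sample falls
    outside k-sigma control limits purely by chance (both tails)."""
    return 2 * (1 - NormalDist().cdf(k_sigma))

# At three sigma this is about 0.0027, i.e. roughly 0.3% of samples.
print(round(false_alarm_rate(3.0), 4))
```

Tighter limits (e.g. two sigma) catch assignable causes sooner but raise this false-alarm rate, which is one reason surveyed organizations differ on where they place the limits.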

Figure 3: A Control Chart

Some have extended the focus of SPC in applying it to software processes. In manufacturing, the primary focus of control charts is to bring the process back into control. In software, the product is also a focus: when a software process exceeds the control limits, rework is typically performed on the product. In manufacturing, the cost of stopping a process is high. In software, the cost of stopping is lower, and few shutdown and startup activities are needed [Jalote and Saxena 2002].

SPC is one way of applying statistics to software engineering; other opportunities exist throughout the lifecycle. Table 4 shows, by lifecycle phase, some of these uses of statistics. The National Research Council recently sponsored the Panel on Statistical Methods in Software Engineering [NRC 1996]. The panel recommended a wide range of areas for applying statistics, from visualizing test and metric data to conducting controlled experiments to demonstrate new methodologies.

Table 4: Some Applications of Statistics in Software Engineering
- Requirements: Specify performance goals that can be measured statistically, e.g., no more than 50 total field faults and zero critical faults with 90% confidence.
- Design: Pareto analysis to identify fault-prone modules. Use of design of experiments in making design decisions empirically.
- Coding: Statistical control charts applied to inspections.
- Testing: Coverage metrics provide attributes. Design of experiments is useful in creating test suites. Statistical usage testing is based on a specified operational profile. Reliability models can be applied.

Based on [Dalal, et al. 1993]

Those applying SPC to industrial organizations have, in general, built process improvements on top of SPC. The focus of SPC is on removing variation caused by assignable causes; as defined here, SPC is not intended to lower process variation resulting from natural causes. Many corporations, however, have extended their SPC efforts with Six Sigma programs. Six Sigma provides continuous process improvement and attempts to reduce the natural variation in processes. Typically, Six Sigma programs use the Seven Tools of Quality (Table 5). The Shewhart Cycle (Figure 4) is a fundamental idea for continuous process improvement.

Table 5: The Seven Tools of Quality
- Check Sheet: To count occurrences of problems.
- Histogram: To identify central tendencies and any skewing to one side or the other.
- Pareto Chart: To identify the 20% of the modules which yield 80% of the issues.
- Cause and Effect Diagram: For identifying assignable causes.
- Scatter Diagram: For identifying correlation and suggesting causation.
- Control Chart: For identifying processes that are out of control.
- Graph: For visually displaying data, e.g., in a pie chart.
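The Pareto chart's "vital few" selection (the 20% of modules yielding 80% of the issues) can be sketched as a short function; the module names and defect counts below are invented for illustration:

```python
def vital_few(defects_by_module, share=0.8):
    """Return the smallest set of modules that together account for at
    least `share` of all defects, most defect-prone first."""
    total = sum(defects_by_module.values())
    chosen, running = [], 0
    for module, count in sorted(defects_by_module.items(),
                                key=lambda item: -item[1]):
        chosen.append(module)
        running += count
        if running >= share * total:
            break
    return chosen

# Invented counts: one driver module dominates the defect log.
print(vital_few({"directx_wrap": 80, "menu": 10, "audio": 6, "save": 4}))
```

The modules returned are the natural candidates for the rewrite-from-scratch treatment recommended earlier in this document.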
