The fundamentals of Statistical Process Control (though that was not what it was called at the time) and the associated tool of the Control Chart were developed by Dr Walter A Shewhart in the mid-1920s. His reasoning and approach were practical, sensible and positive; in order to keep them so, he deliberately avoided overdoing mathematical detail. In later years, significant mathematical attributes were assigned to Shewhart's thinking, with the result that this later work became better known than the pioneering application that Shewhart had worked up. The crucial difference between Shewhart's work and the inappropriately perceived purpose of SPC that emerged, which typically involved mathematical distortion and tampering, is that his developments were made in the context, and with the purpose, of process improvement, as opposed to mere process monitoring. That is, they could be described as helping to get the process into that satisfactory state which one might then be content to monitor. Note, however, that a true adherent to Deming's principles would probably never reach that situation, following instead the philosophy and aim of continuous improvement.
In the former case, we know what to expect in terms of variability; in the latter we do not. We may predict the future, with some chance of success, in the former case; we cannot do so in the latter.
So why SPC?
The plain fact is that when a process is within statistical control, its output is indiscernible from random variation: the kind of variation which one gets from tossing coins, throwing dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the numbers will go down; indeed, occasionally we shall get a number that is the highest or the lowest for some time. Of course we shall: how could it be otherwise? The question is: do these individual occurrences mean anything important? When the process is out of control, the answer will sometimes be yes. When the process is in control, the answer is no. So the main response to the question "Why SPC?" is this: it guides us to the type of action that is appropriate for trying to improve the functioning of a process. Should we react to individual results from the process (which is only sensible if such a result is signalled by a control chart as being due to a special cause), or should we instead be going for change to the process itself, guided by cumulated evidence from its output (which is only sensible if the process is in control)? Process improvement needs to be carried out in three chronological phases:

Phase 1: Stabilisation of the process by the identification and elimination of special causes.
Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes.
Phase 3: Monitoring the process to ensure the improvements are maintained, and incorporating additional improvements as the opportunity arises.

Control charts have an important part to play in each of these three Phases. Points beyond control limits (plus other agreed signals) indicate when special causes should be searched for. The control chart is therefore the prime diagnostic tool in Phase 1. All sorts of statistical tools can aid Phase 2, including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds, etc., and recalculated control limits will indicate what kind of success (particularly in terms of reduced variation) has been achieved. The control chart will also, as always, show when any further special causes should be attended to. Advocates of the British/European approach will consider themselves familiar with the use of the control chart in Phase 3. However, it is strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order to see how much more can be done even in this Phase than is normal practice in this part of the world.
Source: www.managers-net.com
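The basic control-chart signal described above, a point beyond the 3-sigma limits indicating a possible special cause, can be sketched in a few lines. This is a minimal illustration only: the measurements, function names, and the moving-range method of estimating sigma (with the standard constant d2 = 1.128 for subgroups of size 2) are not taken from the source quoted here.

```python
# Minimal individuals (X) control chart: estimate 3-sigma limits from the
# moving range and flag points beyond them as potential special causes.
# Data and names are illustrative.

def control_limits(data):
    """Return (lower, upper) 3-sigma limits for an individuals chart.

    Sigma is estimated as mean moving range / d2, with d2 = 1.128
    (the standard constant for moving ranges of size 2).
    """
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

def special_cause_points(data, lower, upper):
    """Indices of points beyond the control limits (the basic signal)."""
    return [i for i, x in enumerate(data) if x < lower or x > upper]

measurements = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 13.5, 10.1]
lo, hi = control_limits(measurements)
print(special_cause_points(measurements, lo, hi))
```

Only the point at index 8 (the value 13.5) is flagged; the rest of the run is indistinguishable from common-cause variation, which is exactly the distinction the passage draws.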
Background
A marked increase in the use of control charts occurred during World War II in the United States, to ensure the quality of munitions and other strategically important products. The use of SPC diminished somewhat after the war, though it was subsequently taken up with great effect in Japan and continues to the present day. (For more, see The History of Quality.) Many SPC techniques have been rediscovered by American firms in recent years, especially as a component of quality improvement initiatives like Six Sigma. The widespread use of control charting procedures has been greatly assisted by statistical software packages and ever-more sophisticated data collection systems. Over time, other process-monitoring tools have been developed, including:

Cumulative Sum (CUSUM) charts: the ordinate of each plotted point represents the algebraic sum of the previous ordinate and the most recent deviation from the target.
Exponentially Weighted Moving Average (EWMA) charts: each chart point represents the weighted average of the current and all previous subgroup values, giving more weight to recent process history and decreasing weights for older data.

More recently, others have advocated integrating SPC with Engineering Process Control (EPC) tools, which regularly change process inputs to improve performance.
Source: http://asq.org/learn-about-quality/statistical-process-control/overview/overview.html
History
Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W. Edwards Deming later applied SPC methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products. Deming was also instrumental in introducing SPC methods to Japanese industry after
the war had ended. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produces a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process (common causes of variation), while others display uncontrolled variation that is not present in the process causal system at all times (special causes of variation). In 1988, the Software Engineering Institute introduced the notion that SPC can be usefully applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI). This notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes has encountered much skepticism, and remains controversial today.
General
In mass-manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product: accepting or rejecting each article (or samples from a production lot) based on how well it met its design specifications. In contrast, Statistical Process Control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product. A main concept is that, for any measurable process characteristic, causes of variation can be separated into two distinct classes: 1) normal (sometimes also referred to as common or chance) causes of variation, and 2) assignable (sometimes also referred to as special) causes of variation. The idea is that most processes have many causes of variation; most of them are minor and can be ignored, and if we can identify the few dominant causes, then we can focus our resources on those. SPC allows us to detect when the few dominant causes of variation are present. If the dominant (assignable) causes of variation can be detected, potentially they can be identified and removed. Once they are removed, the process is said to be stable, which means that its resulting variation can be expected to stay within a known set of limits, at least until another assignable cause of variation is introduced. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal-filling machine may start putting more cereal into each box than specified.
If this change is allowed to continue unchecked, more and more product will be produced that falls outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.
By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem.
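The cereal-box example can be worked through as a small simulation: limits are set from the process while it is in control, and a slow upward drift (the worn cams and pulleys overfilling boxes) is eventually caught by the chart. The target of 500 g comes from the example; the sigma, drift rate, and seed are illustrative assumptions.

```python
# Sketch of the cereal-box example: limits come from an in-control
# baseline, then a slow upward drift in the mean fill eventually
# pushes a box past the upper control limit.
import random

random.seed(42)
TARGET = 500.0  # grams per box, from the example
SIGMA = 2.0     # assumed natural (common-cause) variation

# Limits are set while the process is in control.
baseline = [random.gauss(TARGET, SIGMA) for _ in range(50)]
centre = sum(baseline) / len(baseline)
ucl, lcl = centre + 3 * SIGMA, centre - 3 * SIGMA

# Simulate wear: the mean fill creeps up by 0.1 g per box produced.
signal_at = None
for i in range(200):
    fill = random.gauss(TARGET + 0.1 * i, SIGMA)
    if not (lcl <= fill <= ucl):
        signal_at = i  # first box flagged by the chart
        break

print(f"drift first signalled at box {signal_at}")
```

The chart catches the drift long before every box is out of tolerance, which is the point of monitoring the process rather than only inspecting finished product.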
Although it has most notably been applied to manufacturing, experts in the field have shown that statistical process control can be applied to virtually any process that needs improvement. The ideas behind statistical process control are still relatively new, but adherents to its strategies firmly believe that it can improve not just business processes but also governmental, organizational, and individual processes.
Unexpected inefficiencies will always find their way into the system. Because inefficiencies often have not one single cause but many causes feeding off each other, statistical analysis helps managers quickly get to the bottom of which aspect of the system is off.
Nonmanufacturing applications
We've already used book printing as an example of how statistical process control can help make a system run more efficiently, but manufacturing is by no means the only potential application for these strategies. The next obvious application is in the service industry, where statistical models can help managers identify inefficiencies in the production process. Of course, the idea of using statistical process control in the service industry does raise some significant questions, particularly with regard to human subjectivity. Since the value of products and services is ultimately determined qualitatively, one dissatisfied customer or a small number of dissatisfied customers can lead to the mistaken impression that a process is flawed when it in fact works exactly as its creators intended.
This problem can be avoided, however, by increasing the sample size. A few dissatisfied customers may indicate misperceptions, incorrect expectations, or mere crankiness on the part of the customers. But a thousand dissatisfied customers in a data pool of a few thousand customers indicates that something is indeed wrong with the process and that the data needs to be evaluated. That's when the company can begin looking at its data points from within the process itself to see if any of the data is outside its ideal range. In cases like these, human subjectivity can actually give useful information about the process and its flaws. For this reason, many companies have implemented sophisticated customer opinion surveys to determine the exact nature of any customer dissatisfaction. This only goes so far, however, as customers are of course not aware of the process itself and do not always know exactly why they are dissatisfied. In the end, the sources of customer dissatisfaction must be located within the process by those who are familiar with it and have access to the data. Outside business, statistical process control can also be used in governmental applications, though the rigidity of governments has so far kept such new methods from being implemented widely. In any event, one can imagine how a well-designed statistical process control system could be effective in helping governments reduce waste and inefficiency. In an age where austerity is a buzzword throughout the world, governments could greatly benefit from analytical models that help their systems run better.
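The reasoning above, that a handful of complaints may be noise while a large proportion of dissatisfied customers signals a real process problem, is what an attribute (p) chart formalizes. A rough sketch, assuming a hypothetical baseline dissatisfaction rate of 5%:

```python
# p-chart check: is the observed proportion of dissatisfied customers
# within ordinary sampling variation around a baseline rate p_bar, or
# does it signal a real process problem? Figures are illustrative.
import math

def p_chart_limits(p_bar, n):
    """3-sigma limits for a proportion observed in a sample of size n."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lower = max(0.0, p_bar - 3 * sigma)
    upper = min(1.0, p_bar + 3 * sigma)
    return lower, upper

def out_of_control(dissatisfied, n, p_bar=0.05):
    lower, upper = p_chart_limits(p_bar, n)
    p = dissatisfied / n
    return p < lower or p > upper

# 3 unhappy customers out of 40: within ordinary variation.
print(out_of_control(3, 40))
# 1000 unhappy out of 3000: far beyond the limits, a real problem.
print(out_of_control(1000, 3000))
```

Note how the same nominal complaint rate becomes a far stronger signal at the larger sample size, which is the passage's argument in statistical form.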
Personal applications
Although statistical process control was conceived with large-scale systems in mind, its fundamental principles are applicable to systems at all levels of scale. All that's needed is a large enough sample of data to minimize statistical aberrations. So, for example, if one wants to use statistical process control to regulate one's personal health, it would be important to think in the long term. Otherwise, the statistics might lead to supposed solutions that are actually unhealthy. In this scenario, the individual might create a health plan based on ideal ranges of various data points. This could be done based on current recommendations from health authorities. One could investigate how much of each significant vitamin and nutrient is needed for the body to run smoothly, and this data would go along with other points such as sleep time, exercise time, sexual activity, drug and alcohol use, relaxation time, and so on. This model would of course have to be flexible. Once the ideal ranges are set and implemented over a period of several weeks or months, the individual would keep track of all the data points daily, and after several weeks or months would take stock and evaluate how well the system is working. If he or she has been doing everything within the ranges put in place at the outset, then any health issues would need to be addressed by adjusting the levels. The problem with personal applications of statistical process control is that they can take too long. In the personal health system we've been outlining, the person would have to go several months before a reasonable amount of data could be accumulated, and then each stage of adjustment would require more months. For an individual, this requires an incredible level of
commitment that few would have the patience for. Meanwhile, the subjectivity issue is also at play here. For personal applications of statistical process control, emotions will always cloud the picture. With enough data, however, and an ability to view things as quantitatively as possible, subjectivity can be overcome. Then the only question that remains is whether this type of system is worth all the work. For anyone skeptical of popular claims about health, using statistical process control may be appealing because it lets one study one's own body and the health effects of various things. However, it probably requires skill with statistics and the will to stick with the system over long periods. As anyone can see, statistical process control will probably never catch on for personal use. Not everyone has a knack for statistics, and many people are more results-oriented than process-oriented and do not possess the big-picture view of how the two are intertwined. Yet for anyone for whom statistical process control makes sense, the possibilities are endless.
Four Considerations for Improving Quality Using Statistical Process Control (SPC)
1. Value / worth: How can quality be defined? Everyone has a different idea of what quality means. Some will say that quality means satisfying the customer, while others will say that quality is simply goodness; quality, in other words, differs from one person's mindset to another's. A workable definition is this: quality is the group of inherent characteristics of a product that fulfils the customer's needs. What is Statistical Process Control, and where can it be used? Statistical Process Control (SPC) is important for representing and understanding the performance of a process. The ability of the process can be measured by establishing requirements, and these requirements can be established with the help of statistical methods. With the help of a process capability study, we are able to decide whether a process is suited to the proposed specification or not. 2. Establishing SPC: If quality is improved in a system, the system will perform at a higher level and earn recognition accordingly. Many systems fail because they do not confirm that the product works properly until after the product is finished; such products have to be reprocessed and reworked to bring them into a good, conforming condition. This is extra work, and a huge amount of time and money is wasted on it. The best way to avoid this situation is prevention: errors can be identified early and the necessary corrective steps taken before they reach the finished product. If you are using a prevention system, then activities such as finding defects and fixing defects are not needed. In what way will SPC help? Statistical Process Control closely examines the process in order to avoid nonconformance. It not only helps to monitor the process but also correctly identifies faults before the delivery of the product.
Recurring problems are eliminated quickly.
3. Establishing performance standards: These define how good your product must be. Vague standards leave employees thinking that nonconformance is acceptable, and so they accept it. By comparing the process specification against the process capability, which is what SPC does, you are able to aim for a product with zero defects, and the frequency of nonconformance can be driven towards zero. 4. Quality and process depth: In almost every industry, a large amount of money is wasted because faults in the developing product are not corrected the first time. To reduce cost and to increase market share, eliminating nonconformance is the best choice. Using Statistical Process Control in an organization therefore helps to reduce faults and to prevent the loss of money and time that is otherwise wasted in manufacturing. The system has since been improved with modern techniques, so that its benefits are further increased. How to Effectively Use Statistical Process Control? Statistical process control is a set of methods for reducing variances and inefficiencies in a system. It relies on statistics to eliminate human error from the quality-control process, measuring everything in raw numbers rather than based on human response. So, in light of the fact that statistical process control relies almost exclusively on numbers, it is most useful in improving processes that are designed around fairly rigid protocols. Applications of Statistical Process Control: In manufacturing, for instance, one of the goals is often to develop a process for creating items that are infinitely replicable with as little variance as possible. There are many practical reasons for this. For one, most companies want their customers to feel secure in their expectations. For another, staying within certain statistical ranges is often required of products that are regulated or where precisely calibrated operation is essential.
For example, statistical process control is useful for ensuring that mass-produced foods stay in line with the nutritional information printed on the label. Of course, having the actual content of the food out of line with the label can lead to serious consequences if regulators find out. Another example is in the production of medical equipment, where manufacturing variances can mean the difference between life and death. But statistical process control can be useful outside manufacturing. Wherever there are rigidly controlled processes with the potential for inefficiencies, bottlenecks, and small deteriorations along a complex network of interacting parts, statistical process control is an efficient way to cut through the confusion and drill down to the source of the problem. Where the statistics are off, that's liable to be where the source of the problem lies. Using Statistical Process Control: Statistical process control is not any single method but rather a large group of methods that can be applied in any number of ways to an unlimited variety of processes. In general, however, establishing a system of statistical process control involves first putting together a process that
runs as well as possible, taking statistics based on the freshly created process, and then continuously monitoring the statistics. When variances become apparent, the sources of the variances are usually easily traced to unusual points in the data. Companies have come up with all sorts of sophisticated ways to monitor statistics. In the 21st century, the trajectory is toward digital statistical process control systems with as much automation as possible. In some cases, it is even possible to automate the diagnosis of the problem. But for the most part, the technology is still only sophisticated enough to call attention to anomalies that may lead to variances, and then it's left to people to diagnose the problem and take the appropriate steps to reduce the variances. Whether automated or completely human-controlled, most SPC systems rely on data maps and control charts that present the data in an organized way. In most cases, there are preset ranges within which each data point must fall, and the job of the person monitoring the charts is to watch for data points out of the preset ranges. Ultimately, this is what makes SPC a useful system; while it can be difficult to set up, it makes checking for problems in the process almost too easy. New to Statistical Process Control? Here are the Basics. Statistical process control began as a set of methods for companies to monitor the quality of their products and eliminate variances from item to item. The methods borrow ideas from the field of statistics and apply them to sophisticated and often complex processes that are difficult to monitor without an innovative monitoring system. It's used most often by companies whose large and complex manufacturing processes are cumbersome to track via old-fashioned methods, and it's also widely used by companies that need as little variance as possible in their manufactured products. And for adventurous statistics buffs, SPC can also be applied to many other areas of life.
How Statistical Process Control Works: Imagine, for example, a company that manufactures frozen burritos and ships them to grocery stores across the U.S. The company is required to print a detailed list of all the ingredients contained in the burritos, and it must also provide accurate information regarding the nutritional value of each item. Regulations allow for a small amount of variance from item to item, but each burrito must nevertheless fall into a very narrow range in terms of ingredients and nutritional value, not to mention other categories like size and weight. How does the company go about ensuring that every single burrito that is shipped out meets the required specs? It's rather simple, actually. The company has data points associated with every step of the manufacturing process, and it has required ranges for every data point. The finished burritos are regularly monitored for variances, and when some items begin coming out of the process outside the specs required by regulation, the quality control team examines the stats and searches for points that are outside the required ranges. When a problem is found in the data, it may relate to an aging piece of equipment, for example, or an improperly trained worker. In any case, locating the problem in the statistics leads the quality controllers straight to the source of the issue.
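The range-checking described in the burrito example can be sketched as follows. The step names and spec ranges are invented for illustration; the point is only that each monitored data point has a required range, and out-of-range points lead straight to the problem step.

```python
# Sketch of per-step range checking: every monitored data point has a
# required range, and out-of-range readings point at the problem step.
# Step names and ranges are hypothetical.

REQUIRED_RANGES = {
    "tortilla_weight_g": (55.0, 60.0),
    "filling_weight_g": (110.0, 120.0),
    "oven_temp_c": (180.0, 195.0),
    "final_weight_g": (170.0, 185.0),
}

def out_of_range_points(measurements):
    """Return the monitored points whose readings fall outside spec."""
    problems = []
    for name, value in measurements.items():
        low, high = REQUIRED_RANGES[name]
        if not (low <= value <= high):
            problems.append(name)
    return problems

batch = {
    "tortilla_weight_g": 57.2,
    "filling_weight_g": 104.0,  # under-filled: points to the filler
    "oven_temp_c": 188.0,
    "final_weight_g": 166.5,    # consequence of the under-fill
}
print(out_of_range_points(batch))
```

Here the under-filled batch is traced to the filling step rather than discovered only in the finished product, which is the efficiency gain the passage describes.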
The Benefits of Statistical Process Control: From the above example, one can already begin to see how incredibly useful SPC can be. Without such a system in place, correcting variances in products can be an exhausting process requiring far too many resources. Imagine if that company lacked a quality control system and suddenly discovered that the ratio of ingredients inside its burritos was off in a few respects. The only way to locate the problem would be to engage in a long survey of every step of the process, every worker assigned to that process, and every ingredient that goes into it. In such scenarios, any variance can mean a huge loss of productivity and hence of profits. In other words, statistical process control is far more efficient than quality control practices that rely completely on human observation. Plus, it also removes that flawed human element that is so prone to getting things wrong. If that food company were to take a guess at what was off in its process and get it wrong, it could lead to serious consequences. Using statistics greatly diminishes the potential for human error and thus protects companies against fines, lawsuits, lost business, and so on. How Statistical Process Control Reduces Human Error in Evaluating Processes: Statistical process control sounds like a complex term, and indeed it can become rather complicated once you get into it. But for beginners, it can be defined quite simply: statistical process control refers to a group of strategies for monitoring processes using methods originally set forth in the field of statistics. While it is typically used in manufacturing and other types of business, it can be applied to virtually any fairly rigid process. It can even be applied to everyday processes in people's lives. Controlling the Human Element: There are many reasons why statistical analysis is useful for evaluating processes, but the biggest factor in favor of statistical process control is its objectivity.
While other evaluation methods involve human judgment and hence are subject to flaws of subjectivity and simple human error, statistical process control looks at the actual results with no evaluation and no human judgment. The underlying assumption of statistical process control is that by quantifying elements of processes and examining them in raw numbers, the truth about where the processes do and don't work can be reached. But of course, it's important to keep in mind that not everything can be measured statistically. For instance, statistical process control can be used to make sure every item that comes off an assembly line falls within the range of certain specs, but what it cannot so easily do is quantify customer satisfaction with the items. Subjective human reactions to things are inherently unreliable as data, especially since people often don't know exactly why they do or do not like something. But with that being said, there are ways that statistical process control can be applied to human reactions to things, namely by taking a large sample size. So, for example, if you poll five people about why they do or do not like a product, you're liable to get a range of answers, some unexpected, and you probably won't get any actionable information out of it. But if you analyze the behavior of 5,000 people with regard to a product, this gives you very clear data about
human tendencies. Protecting Against Human Error: While human reactions to a problem don't necessarily apply directly to the process under analysis, what they can do is point toward parts of the process that could be improved. The failure of a product can stem from a variety of sources. Sometimes it boils down to poor planning or design, sometimes it relates to bad craftsmanship or shoddy materials, and sometimes it's from lack of quality control. Getting a sense of the human response to a product should point to which of these is most relevant. Setting aside how people respond to products, the essential benefit of statistical process control is that it is useful in virtually any type of manufacturing that follows a fairly rigid process. There can be some difficulty in pinpointing where in the process to take the statistical data, but once this is set, it becomes an incredibly efficient way to locate points in the process that have deteriorated or where things could be improved. If the data is read correctly, then human judgment doesn't even enter into it. The Business Uses of Statistical Process Control: Since the development of statistical process control in the first half of the 20th century, the methods that conventionally fall under the SPC designation have become increasingly sophisticated, and they're now being applied in countless ways in virtually all segments of business and manufacturing. In fact, if you were to do a survey of the most successful companies in the world, you would undoubtedly find SPC methods being used in some capacity by the vast majority of them. And the fact that SPC can be used in such a wide range of applications shows just how useful it is. For anyone new to statistical process control, it may be difficult to imagine exactly how these methods can be put into action, so let's look at a few real-world applications in which SPC is now being used.
Food manufacturing: In most developed countries, there are very strict regulations governing how food can be processed and marketed. In most places, makers of food products are required to print full ingredient lists along with detailed information about the nutritional value of the food. In order to stay true to what's printed on the label, companies must make sure that the products they make have very little variance from item to item. That's where statistical process control comes in. When items begin coming out of the manufacturing process with flaws or variances, the managers simply go to the statistics and look for data points that are out of line with expectations. More often than not, finding the source of the problem is as simple as locating the problematic statistic, tracing its cause, and making small, precise adjustments to the process. Medical supplies: Statistical process control has proved immensely useful in fields where life and death depend on items being manufactured to precision. Before statistical process control, the manufacturing of medical supplies required huge quality control teams to monitor all stages of the process and test every piece of equipment as it came out of the manufacturing process.
Today, while there is still extensive quality control that must be done, statistical process control has made monitoring for and eliminating variances far more efficient. What was once done by a full department in a company can now be done by one or a few individuals. Vehicle manufacturing: As with medical supplies, vehicles such as cars, trucks, and airplanes must be manufactured with every element within very specific ranges, or else the safety of the vehicle's operators and passengers is put at risk. Since Henry Ford pioneered many aspects of the modern assembly line in the early 20th century, the automobile industry has always been at the cutting edge of the world's manufacturing processes, and the industry's use of SPC keeps it at the cutting edge even to this day. The average consumer of motor vehicles doesn't realize just how complex today's cars are. A typical vehicle can have upwards of 10,000 parts, and for aircraft this figure might be multiplied a few times. Making sure all these thousands of parts are assembled well and run perfectly requires extensive statistical monitoring. Today, much of the process is automated, but human process control managers still play a large role. Useful Tools in Statistical Process Control: From a philosophical standpoint, it is obvious that statistical process control is a powerful method for eliminating inefficiencies and variances in a system. But for those who actually practice statistical process control, it's not about the philosophy so much as the nitty-gritty of setting, gathering, reading, and interpreting data. Much of this has been automated in recent years thanks to developing technologies, but actual humans are still deeply involved in every step of statistical process control. And when humans are involved in interpreting often complex data, it helps to have a few tools to make the information clearer.
Here are a few types of tools that are commonly used by professionals in the field of statistical process control:

Flow charts: A flow chart maps a process in all its complex parts from beginning to end. While they are not unique to statistical process control and in fact have little to do with the statistics themselves, flow charts are useful for giving quality control managers an overall picture of the process, which makes it easier to evaluate data discrepancies pointing to potential variances. They're also useful for giving a clear picture of where portions of the process fall within an overall timeline and in relation to other portions.

Run graphs: A run graph is simply a graph that displays data in terms of time. While run graphs can be useful for looking at a single point of data, in statistical process analysis they are often used to get a quick snapshot of the relationships between different data points. For example, if two data points almost always rise and fall at the same time, then this points to a likely relationship, whether direct or indirect, between what is measured by the two statistics. Run graphs are most useful for two variables and can become quite chaotic when more variables are introduced, but using them to interpret the relationships between three or more data points is not unheard of.

Designed experiments: A designed experiment is exactly what it sounds like. When a process begins creating results that are out of line with expectations, quality control managers may
undertake a set of experiments to pinpoint the source of the flaws. This may involve segmenting the process, taking some elements out, or adding elements in an attempt to attenuate the flaws for informational purposes.

Pareto charts: Pareto charts are based on the Pareto Principle, which states that not all causes of a phenomenon have the same impact. So, for example, if a product is flawed, there may be four different causes of the flaw, but it may be that one of the causes is the biggest culprit while the others would not be significant without the primary cause. Pareto charts try to make sense of this, mapping the relative impact of different factors in a system.

Control graphs: Control graphs are essentially maps of the frequency of variations over time. While a process is statistically stable, a control graph shows only random variation within the control limits. It's when instability occurs and variances start creeping in that a control graph becomes useful. These graphs are helpful for locating patterns of variance and identifying whether variances result from mere chance or are results of a flaw in the process.
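The computation underlying a Pareto chart, ranking causes by impact and accumulating their share of the total, can be sketched briefly. The defect categories and counts below are invented for illustration.

```python
# Minimal Pareto analysis: rank causes of defects by count and report
# their cumulative share, exposing the dominant cause. Counts are
# hypothetical.

def pareto(cause_counts):
    """Return (cause, count, cumulative_fraction) rows sorted by impact."""
    total = sum(cause_counts.values())
    ranked = sorted(cause_counts.items(), key=lambda kv: kv[1], reverse=True)
    rows, running = [], 0
    for cause, count in ranked:
        running += count
        rows.append((cause, count, running / total))
    return rows

defects = {
    "misaligned label": 12,
    "torn seal": 7,
    "underweight fill": 68,
    "dented box": 13,
}
for cause, count, cum in pareto(defects):
    print(f"{cause:20s} {count:3d}  {cum:.0%}")
```

With these figures a single cause accounts for roughly two-thirds of all defects, which is exactly the "biggest culprit" pattern the passage describes.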
Source: http://www.statisticalprocesscontrol.org/