
Statistical Process Control - SPC

The fundamentals of Statistical Process Control (though that was not what it was called at the time) and the associated tool of the Control Chart were developed by Dr Walter A Shewhart in the mid-1920s. His reasoning and approach were practical, sensible and positive. In order to keep them so, he deliberately avoided overdoing mathematical detail. In later years, significant mathematical attributes were assigned to Shewhart's thinking, with the result that this later work became better known than the pioneering application that Shewhart had worked up. The crucial difference between Shewhart's work and the inappropriately perceived purpose of SPC that emerged (typically involving mathematical distortion and tampering) is that his developments were conceived in the context, and for the purpose, of process improvement, as opposed to mere process monitoring. That is, they could be described as helping to get the process into a satisfactory state which one might then be content to monitor. Note, however, that a true adherent to Deming's principles would probably never reach that situation, following instead the philosophy and aim of continuous improvement.

Explanation and Illustration:

What do "in control" and "out of control" mean?


Suppose that we are recording, regularly over time, some measurements from a process. The measurements might be lengths of steel rods after a cutting operation, the lengths of time to service some machine, your weight as measured on the bathroom scales each morning, the percentage of defective (or non-conforming) items in batches from a supplier, measurements of Intelligence Quotient, or times between sending out invoices and receiving the payment, etc. A series of line graphs or histograms can be drawn to represent the data as a statistical distribution. It is a picture of the behaviour of the variation in the measurement that is being recorded. If a process is stable, the concept is that it is in statistical control. The point is that, if an outside influence impacts upon the process (e.g., a machine setting is altered or you go on a diet), then the data are, in effect, no longer all coming from the same source. It therefore follows that no single distribution could possibly serve to represent them. If the distribution changes unpredictably over time, the process is said to be out of control. As a scientist, Shewhart knew that there is always variation in anything that can be measured. The variation may be large, or it may be imperceptibly small, or it may be between these two extremes; but it is always there. What inspired Shewhart's development of the statistical control of processes was his observation that the variability which he saw in manufacturing processes often differed in behaviour from that which he saw in so-called natural processes, by which he seems to have meant such phenomena as molecular motions. Wheeler and Chambers combine and summarise these two important aspects as follows: "While every process displays variation, some processes display controlled variation, while others display uncontrolled variation." In particular, Shewhart often found controlled (stable) variation in natural processes and uncontrolled (unstable) variation in manufacturing processes. The difference is clear. In the former case, we know what to expect in terms of variability; in the latter we do not. We may predict the future, with some chance of success, in the former case; we cannot do so in the latter.
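To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from the source text; all numbers are invented) contrasting a series generated by one stable source with one disturbed partway through by an outside influence:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Controlled variation: every observation comes from the same stable
# source, so one distribution describes the whole series.
in_control = rng.normal(loc=50.0, scale=2.0, size=200)

# Uncontrolled variation: an outside influence (here, a sudden +6 shift
# standing in for an altered machine setting) changes the source partway
# through, so no single distribution can represent the data.
out_of_control = np.concatenate([
    rng.normal(loc=50.0, scale=2.0, size=100),
    rng.normal(loc=56.0, scale=2.0, size=100),
])

# The shift shows up as inflated overall spread relative to the stable series.
print(in_control.std(), out_of_control.std())
```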

Why are "in control" and "out of control" important?


Shewhart gave us a technical tool to help identify the two types of variation: the control chart (see Control Charts as the annex to this topic). What is important is the understanding of why correct identification of the two types of variation is so vital. There are at least three prime reasons. First, when there are irregular large deviations in output because of unexplained special causes, it is impossible to evaluate the effects of changes in design, training, purchasing policy, etc. which might be made to the system by management. The capability of a process is unknown whilst the process is out of statistical control. Second, when special causes have been eliminated, so that only common causes remain, improvement then has to depend upon management action, for such variation is due to the way that the processes and systems have been designed and built, and only management has authority and responsibility to work on systems and processes. As Myron Tribus, Director of the American Quality and Productivity Institute, has often said: "The people work in a system. The job of the manager is to work on the system, to improve it, continuously, with their help." Finally, something of great importance, but which has to be unknown to managers who do not have this understanding of variation, is that by (in effect) misinterpreting either type of cause as the other, and acting accordingly, they not only fail to improve matters, they literally make things worse. These implications, and consequently the whole concept of the statistical control of processes, had a profound and lasting impact on Dr Deming. Many aspects of his management philosophy emanate from considerations based on just these notions.

So why SPC?
The plain fact is that when a process is within statistical control, its output is indiscernible from random variation: the kind of variation which one gets from tossing coins, throwing dice, or shuffling cards. Whether or not the process is in control, the numbers will go up, the numbers will go down; indeed, occasionally we shall get a number that is the highest or the lowest for some time. Of course we shall: how could it be otherwise? The question is: do these individual occurrences mean anything important? When the process is out of control, the answer will sometimes be yes. When the process is in control, the answer is no. So the main response to the question "Why SPC?" is therefore this: it guides us to the type of action that is appropriate for trying to improve the functioning of a process. Should we react to individual results from the process (which is only sensible if such a result is signalled by a control chart as being due to a special cause), or should we instead be going for change to the process itself, guided by cumulated evidence from its output (which is only sensible if the process is in control)?

Process improvement needs to be carried out in three chronological phases:
Phase 1: Stabilisation of the process by the identification and elimination of special causes;
Phase 2: Active improvement efforts on the process itself, i.e. tackling common causes;
Phase 3: Monitoring the process to ensure the improvements are maintained, and incorporating additional improvements as the opportunity arises.

Control charts have an important part to play in each of these three phases. Points beyond control limits (plus other agreed signals) indicate when special causes should be searched for. The control chart is therefore the prime diagnostic tool in Phase 1 (a minimal computational sketch follows this passage). All sorts of statistical tools can aid Phase 2, including Pareto Analysis, Ishikawa Diagrams, flow-charts of various kinds, etc., and recalculated control limits will indicate what kind of success (particularly in terms of reduced variation) has been achieved. The control chart will also, as always, show when any further special causes should be attended to. Advocates of the British/European approach will consider themselves familiar with the use of the control chart in Phase 3. However, it is strongly recommended that they consider the use of a Japanese Control Chart (q.v.) in order to see how much more can be done even in this Phase than is normal practice in this part of the world.
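As an illustration of the Phase 1 diagnostic, here is a minimal sketch of one common way control limits are computed, for an individuals (X) chart using the average moving range. The data values are invented, and a real application would also apply the other agreed signals mentioned above:

```python
import numpy as np

def individuals_chart_limits(x):
    """Centre line and 3-sigma control limits for a Shewhart individuals
    chart. The constant 2.66 is 3/d2 with d2 = 1.128 for moving ranges
    of two consecutive points."""
    x = np.asarray(x, dtype=float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()  # average moving range
    return centre, centre - 2.66 * mr_bar, centre + 2.66 * mr_bar

# Phase 1 usage: flag points beyond the limits as candidate special causes.
data = [10.0, 10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0, 9.9, 11.5, 10.1, 10.0]
centre, lcl, ucl = individuals_chart_limits(data)
special = [v for v in data if v < lcl or v > ucl]
print(centre, lcl, ucl, special)  # the 11.5 observation falls above the UCL
```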
Source: www.managers-net.com

Statistical Process Control (SPC)


Statistical process control (SPC) procedures can help you monitor process behavior. Arguably the most successful SPC tool is the control chart, originally developed by Walter Shewhart in the early 1920s. A control chart helps you record data and lets you see when an unusual event, e.g., a very high or low observation compared with typical process performance, occurs. Control charts attempt to distinguish between two types of process variation:
Common cause variation, which is intrinsic to the process and will always be present.
Special cause variation, which stems from external sources and indicates that the process is out of statistical control.
Various tests can help determine when an out-of-control event has occurred. However, as more tests are employed, the probability of a false alarm also increases.
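To illustrate that false-alarm trade-off, here is a small sketch (my own illustration, not from the source; it treats the tests as independent, which is only an approximation for real rule sets) of how the chance of at least one false signal grows as more tests are applied to an in-control process:

```python
# Under normality, the basic "one point beyond 3 sigma" test falsely
# signals on roughly 0.27% of in-control points. Applying several tests
# at once compounds that risk (independence assumed for simplicity).
alpha_single = 0.0027

for k in (1, 2, 4, 8):
    at_least_one = 1 - (1 - alpha_single) ** k
    print(f"{k} test(s): per-point false-alarm probability ~ {at_least_one:.4f}")
```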

Background
A marked increase in the use of control charts occurred during World War II in the United States to ensure the quality of munitions and other strategically important products. The use of SPC diminished somewhat after the war, though it was subsequently taken up with great effect in Japan and continues to the present day. (For more, see The History of Quality.) Many SPC techniques have been rediscovered by American firms in recent years, especially as a component of quality improvement initiatives like Six Sigma. The widespread use of control charting procedures has been greatly assisted by statistical software packages and ever-more sophisticated data collection systems. Over time, other process-monitoring tools have been developed, including:
Cumulative Sum (CUSUM) charts: the ordinate of each plotted point represents the algebraic sum of the previous ordinate and the most recent deviation from the target.
Exponentially Weighted Moving Average (EWMA) charts: each chart point represents the weighted average of current and all previous subgroup values, giving more weight to recent process history and decreasing weights for older data.
More recently, others have advocated integrating SPC with Engineering Process Control (EPC) tools, which regularly change process inputs to improve performance.
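As a rough sketch of the two recursions just described (my own rendering of the standard definitions, with an invented series, target, and smoothing weight):

```python
import numpy as np

def cusum(x, target):
    """CUSUM ordinate: previous ordinate plus the most recent deviation
    from target, i.e. the running algebraic sum of deviations."""
    return np.cumsum(np.asarray(x, dtype=float) - target)

def ewma(x, lam=0.2):
    """EWMA ordinate: z[i] = lam * x[i] + (1 - lam) * z[i-1], so recent
    observations get geometrically more weight than older ones.
    (Starting z at the first observation is one common convention;
    starting at a target value is another.)"""
    x = np.asarray(x, dtype=float)
    z = np.empty_like(x)
    z[0] = x[0]
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    return z

series = [10.1, 9.9, 10.2, 10.4, 10.6, 10.8]
print(cusum(series, target=10.0))
print(ewma(series))
```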
Source: http://asq.org/learn-about-quality/statistical-process-control/overview/overview.html

Statistical process control


Statistical process control (SPC) is the application of statistical methods to the monitoring and control of a process to ensure that it operates at its full potential to produce conforming product. Under SPC, a process behaves predictably to produce as much conforming product as possible with the least possible waste. While SPC has been applied most frequently to controlling manufacturing lines, it applies equally well to any process with a measurable output. Key tools in SPC are control charts, a focus on continuous improvement, and designed experiments.

Much of the power of SPC lies in the ability to examine a process and the sources of variation in that process using tools that give weight to objective analysis over subjective opinions and that allow the strength of each source to be determined numerically. Variations in the process that may affect the quality of the end product or service can be detected and corrected, thus reducing waste as well as the likelihood that problems will be passed on to the customer. With its emphasis on early detection and prevention of problems, SPC has a distinct advantage over other quality methods, such as inspection, that apply resources to detecting and correcting problems after they have occurred.

In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product or service from end to end. This is partially due to a diminished likelihood that the final product will have to be reworked, but it may also result from using SPC data to identify bottlenecks, wait times, and other sources of delays within the process. Process cycle time reductions coupled with improvements in yield have made SPC a valuable tool from both a cost reduction and a customer satisfaction standpoint.

History
Statistical process control was pioneered by Walter A. Shewhart in the early 1920s. W. Edwards Deming later applied SPC methods in the United States during World War II, thereby successfully improving quality in the manufacture of munitions and other strategically important products. Deming was also instrumental in introducing SPC methods to Japanese industry after the war had ended. Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Dr. Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes seldom produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (for example, Brownian motion of particles). Dr. Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process (common causes of variation), while others display uncontrolled variation that is not present in the process causal system at all times (special causes of variation). In 1988, the Software Engineering Institute introduced the notion that SPC can usefully be applied to non-manufacturing processes, such as software engineering processes, in the Capability Maturity Model (CMM). This idea exists today within the Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI). The notion that SPC is a useful tool when applied to non-repetitive, knowledge-intensive processes such as engineering processes has encountered much skepticism, and remains controversial today.

General
In mass-manufacturing, the quality of the finished article was traditionally achieved through post-manufacturing inspection of the product: accepting or rejecting each article (or samples from a production lot) based on how well it met its design specifications. In contrast, Statistical Process Control uses statistical tools to observe the performance of the production process in order to predict significant deviations that may later result in rejected product. A main concept is that, for any measurable process characteristic, causes of variation can be separated into two distinct classes: 1) normal (sometimes also referred to as common or chance) causes of variation and 2) assignable (sometimes also referred to as special) causes of variation. The idea is that most processes have many causes of variation; most of them are minor and can be ignored, but if we can identify the few dominant causes, then we can focus our resources on those. SPC allows us to detect when the few dominant causes of variation are present. If the dominant (assignable) causes of variation can be detected, potentially they can be identified and removed. Once removed, the process is said to be stable, which means that its resulting variation can be expected to stay within a known set of limits, at least until another assignable cause of variation is introduced. For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of product, but some boxes will have slightly more than 500 grams, and some will have slightly less, in accordance with a distribution of net weights. If the production process, its inputs, or its environment changes (for example, the machines doing the manufacture begin to wear), this distribution can change. For example, as its cams and pulleys wear out, the cereal filling machine may start putting more cereal into each box than specified. If this change is allowed to continue unchecked, more and more of the product will fall outside the tolerances of the manufacturer or consumer, resulting in waste. While in this case the waste is in the form of "free" product for the consumer, typically waste consists of rework or scrap.
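A small numerical sketch of the cereal example (all figures invented for illustration) shows how a slow drift of this kind turns into out-of-tolerance product:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical filling line: 500 g target, 2 g process sigma, plus a slow
# upward drift standing in for cam and pulley wear.
n = 300
drift = np.linspace(0.0, 5.0, n)
weights = 500.0 + drift + rng.normal(0.0, 2.0, n)

# Fraction of boxes outside a hypothetical 500 +/- 5 g tolerance,
# early in the run versus after the wear has taken hold.
early, late = weights[:100], weights[-100:]
print((np.abs(early - 500.0) > 5.0).mean(), (np.abs(late - 500.0) > 5.0).mean())
```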

By observing at the right time what happened in the process that led to a change, the quality engineer or any member of the team responsible for the production line can troubleshoot the root cause of the variation that has crept into the process and correct the problem.

How to Use SPC


Statistical Process Control may be broadly broken down into three sets of activities: understanding the process, understanding the causes of variation, and elimination of the sources of special cause variation.

In understanding a process, the process is typically mapped out and monitored using control charts. Control charts are used to identify variation that may be due to special causes, and to free the user from concern over variation due to common causes. This is a continuous, ongoing activity. When a process is stable and does not trigger any of the detection rules for a control chart, a process capability analysis may also be performed to predict the ability of the current process to produce conforming (i.e. within specification) product in the future.

When excessive variation is identified by the control chart detection rules, or the process capability is found lacking, additional effort is exerted to determine the causes of that variance. The tools used include Ishikawa diagrams, designed experiments and Pareto charts. Designed experiments are critical to this phase of SPC, as they are the only means of objectively quantifying the relative importance of the many potential causes of variation. Once the causes of variation have been quantified, effort is spent in eliminating those causes that are both statistically and practically significant (i.e. a cause that has only a small but statistically significant effect may not be considered cost-effective to fix; however, a cause that is not statistically significant can never be considered practically significant). Generally, this includes development of standard work, error-proofing and training. Additional process changes may be required to reduce variation or align the process with the desired target, especially if there is a problem with process capability.

For digital SPC charts, so-called SPC rules usually come with some rule-specific logic that determines a "derived value" to be used as the basis for some (setting) correction. One example of such a derived value, for the common "N numbers in a row ranging up or down" rule, is: derived value = last value + average difference between the last N numbers, which in effect extends the run with the next value to be expected (see the sketch after this paragraph). Most SPC charts work best for numeric data with Gaussian assumptions. Recently, a new control chart, the real-time contrasts chart, was proposed to handle process data with complex characteristics, e.g. high-dimensional data, mixed numerical and categorical data, missing values, non-Gaussian distributions, and non-linear relationships.
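A minimal sketch of that derived-value rule as it might be coded (my own rendering of the formula in the text, not any specific product's implementation):

```python
def derived_value(values, n):
    """For the "N numbers in a row ranging up or down" rule: extend the
    run with its expected next value, i.e. the last value plus the
    average step over the last n observations (n must be at least 2)."""
    recent = values[-n:]
    steps = [b - a for a, b in zip(recent, recent[1:])]
    return recent[-1] + sum(steps) / len(steps)

# Example: five values trending upward; the derived value projects the trend.
print(derived_value([10.0, 10.2, 10.3, 10.5, 10.6], n=5))  # -> 10.75
```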
Source: www.wikipedia.org

Statistical Process Control


Statistical process control is a set of strategies used for discovering and correcting inefficiencies in processes. Although it has most notably been applied to manufacturing, experts in the field have shown that statistical process control can be applied to virtually any process that needs improvement. The ideas behind statistical process control are still relatively new, but adherents to its strategies firmly believe that it can improve not just business processes but also governmental, organizational, and individual processes.

What is statistical process control?


Statistical process control is a way of accounting for the virtually infinite variables that can go into the success or failure of any process. For example, consider a company that mass-produces large numbers of books. If some of the books begin coming out with imperfections such as bad binding, stuck pages, inconsistent text appearance, or poor alignment, it may be difficult to pinpoint exactly where in the process the error is occurring. There may be many opinions among the employees and management at the company, but these are based on subjective perception and thus inherently limited. Meanwhile, there may be some flaws in the book printing process that, from some people's perspective, aren't flaws at all. For instance, one person might think the ink is too light while another might think it's just right. Such qualitative evaluations are not without merit, but they are generally not the best tools for analysis when it comes to complex processes such as book printing.

Statistical process control is the opposite of qualitative evaluation. It measures success based on actual numbers that are set beforehand. In the book printing process, statistical process control would establish measurable ways to quantify the success of the finished product. But another key to statistical process control is that it does not just look at the end result. While measuring things like margin size and print shade can tell us whether a book has been produced according to specs, this does not necessarily tell us where exactly in the process the inefficiencies are occurring. A successful statistical process control model assigns ideal numbers to as many aspects of the process as possible. In the book printing business, for instance, there may be an ideal manufacturing room temperature that leads to an optimal appearance of the ink on the page, or perhaps there are optimal speeds at which the mechanical equipment must run in order to achieve the best results. Finding this data is one of the challenges of statistical process control, but once the numbers are set, this approach is extremely useful for locating inefficiencies in the system.

Philosophically speaking, the main purpose behind statistical process control is to deal with chaos, and indeed it is no coincidence that this method of monitoring processes came about at roughly the same time that chaos theory was in development. In any moderately complex system, there are simply too many factors for a human mind to keep track of, and the way all these variables interact with one another is nearly impossible to predict with any exactness. That's why statistical process control typically deals with ranges rather than exact figures. In the book manufacturing plant, a statistical process control model would not set exact data points that need to be achieved, because a set of exact points is impossible to reach even in the most well-run system. Even if everything at the manufacturing plant is set up to run perfectly, unexpected inefficiencies will always find their way into the system. Because inefficiencies often have not one single cause but many causes feeding off each other, statistical analysis helps managers quickly get to the bottom of what aspect of the system is off.

Advantages and disadvantages


The main advantages of statistical process control have already been outlined. It takes the human element out of the process and allows for a level of objectivity that cannot be achieved through other methods of evaluating processes. When there is data available to cover virtually every aspect of the process, as well as good information about the ideal ranges for all data points, it is easy to keep the process running smoothly.

The main disadvantage with evaluation using statistical process control is that it can become rigid and inflexible. When everything about a process is boiled down to numbers, there is a tendency to trust the numbers a little too much. What is needed, if the process is to work as it should while remaining flexible, is an individual or team to continually evaluate the numbers. Returning to the book printing example: there should be someone within the printing company who monitors the numbers as well as the results (i.e., the finished books) and works to locate the problem whenever there are flaws. This same person would be in charge of implementing changes within the ideal data ranges when the process is adjusted. So, in the long run, this need for human monitoring over the statistical process control can be perceived as a disadvantage in that it makes the system more complex than it otherwise would be and adds a layer of bureaucracy. But of course, in this case the layer of bureaucracy is one that monitors quality and actually encourages flexibility rather than hindering it. That's the way it's supposed to work, anyway, but the human element is only as effective as the humans running it.

When properly set up and well run, a statistical process control system can be amazingly powerful in making sure a system runs smoothly. When something in the system is off, checking for inefficiencies can be almost instantaneous, which makes it vastly more timely than human-run inspection processes. And when the data does not seem to shed light on the cause of the problem, this signals to the people running the system that there are factors that have not yet been accounted for. Of course, finding the unaccounted-for factors can pose significant challenges, but it usually comes down to a process of elimination, and this is one aspect of statistical process control where human input can actually be invaluable.

Nonmanufacturing applications
We've already used book printing as an example of how statistical process control can help make a system run more efficiently, but manufacturing is by no means the only potential application for these strategies. The next obvious application is in the service industry, where statistical models can help managers identify inefficiencies in the production process. Of course, the idea of using statistical process control in the service industry does raise some significant questions, particularly with regard to human subjectivity. Since the value of products and services is always qualitatively determined, one dissatisfied customer or a small number of dissatisfied customers can lead to a mistaken impression that a process is flawed when it in fact works exactly as its creators intended.

This problem can be avoided, however, by enlarging the data sample. A few dissatisfied customers may indicate misperceptions, incorrect expectations, or mere crankiness on the part of the customers. But a thousand dissatisfied customers in a data pool of a few thousand customers indicates that something is indeed wrong with the process and that the data needs to be evaluated. That's when the company can begin looking at the data points from within the process itself to see if any of the data is outside its ideal range. In cases like these, human subjectivity can actually give useful information about the process and its flaws. For this reason, many companies have implemented sophisticated customer opinion surveys to determine the exact nature of any customer dissatisfaction. This only goes so far, however, as customers are of course not aware of the process itself and do not always know exactly why they are dissatisfied. In the end, the sources of the customer dissatisfaction must be located within the process by those who are familiar with it and have access to the data.

Outside business, statistical process control can also be used in governmental applications, though the rigidity of governments has so far kept such new methods from being implemented widely. In any event, one can imagine how a well-designed statistical process control system could be effective in helping governments reduce waste and inefficiency. In an age where austerity is a buzzword throughout the world, governments could greatly benefit from analytical models that help their systems run better.

Personal applications
Although statistical process control was conceived with large-scale systems in mind, its fundamental principles are applicable to systems at all levels of scale. All that's needed is a large enough sample of data to minimize statistical aberrations. So, for example, if one wants to use statistical process control to regulate one's personal health, it would be important to think in the long term. Otherwise, the statistics might lead to supposed solutions that are actually unhealthy.

In this scenario, the individual might create a health plan based on ideal ranges of various data points. This could be done based on current recommendations from health authorities. One could investigate how much of each significant vitamin and nutrient is needed for the body to run smoothly, and this data would go along with other points such as sleep time, exercise time, sexual activity, drug and alcohol use, relaxation time, and so on. This model would of course have to be flexible. Once the ideal ranges are set and implemented, the individual would keep track of all the data points daily and, after several weeks or months, would take stock and evaluate how well the system is working. If he or she has been doing everything within the ranges put in place at the outset, then any health issues would need to be addressed by adjusting levels.

The problem with personal applications of statistical process control is that they can take too long. In the personal health system we've been outlining, the person would have to go several months before a reasonable amount of data could be accumulated, and then each stage of adjustment would require more months. For an individual, this requires an incredible level of commitment that few would have the patience for. Meanwhile, the subjectivity issue is also at play here. For personal applications of statistical process control, emotions will always cloud the picture. With enough data, however, and an ability to view things as quantitatively as possible, subjectivity can be overcome. Then the only question that remains is whether this type of system is worth all the work. For anyone skeptical of popular claims about health, using statistical process control may be appealing because it lets one study one's own body and the health effects of various things. However, one probably requires skill with statistics and the will to stick with the system over long periods. As anyone can see, statistical process control will probably never catch on widely for personal use. Not everyone has a knack for statistics, and many people are more results-oriented than process-oriented and do not possess the big-picture view of how the two are intertwined. Yet for anyone for whom statistical process control makes sense, the possibilities are endless.

Four Considerations for Improving Quality Using Statistical Process Control (SPC)
1. Value / worth: How can quality be defined? Everyone has a different idea about quality. Some will think that quality means satisfying the customer, while others will say that quality is goodness; quality thus differs with each person's mindset. A workable definition is this: quality is the group of inherent characteristics that fulfils requirements.

What is Statistical Process Control, and where can it be used? Statistical process control is important for representing and understanding the performance of a product. The ability of the process can be measured by establishing requirements, and these requirements can be established with the help of statistical methods. With the help of process capability, we are able to decide whether a method or process suits the proposed specification or not.

2. Establishing SPC: If quality is improved in a system, the system's standing will clearly rise and earn recognition. Many systems fail because they do not confirm that the product works properly once it is finished. Such a product has to be reprocessed, and it needs rework to bring it into good, conforming condition. This is extra work, and a great deal of time and money is wasted. The best way to avoid this situation is prevention: errors can be identified early and the necessary steps taken to correct them. If you are using a prevention system, then activities like finding and fixing defects are no longer needed.

In what way will SPC help? Statistical Process Control closely examines processes in order to avoid non-conformance. It not only helps to monitor the process but also correctly identifies faults before the delivery of the product, and recurring problems are eliminated quickly.

3. Establishing performance standards: These define how far your product is right. Badly set standards can leave employees thinking that non-conformance is inevitable, so that they come to accept it. By comparing process specification and capability, which is what SPC does, you are able to work towards a product with zero defects; with SPC, the frequency of non-conformance can be driven towards zero.

4. Quality and process depth: A great deal of money is wasted in almost every industry because faults in developing the product are not corrected the first time. To reduce cost and to increase market share, eliminating non-conformance is the best choice. Using Statistical Process Control in an organization will therefore help to reduce faults and prevent the loss of money and time wasted in manufacturing. The approach has since been refined, so that its benefits continue to increase.

How to Effectively Use Statistical Process Control
Statistical process control is a set of methods for reducing variances and inefficiencies in a system. It relies on statistics to eliminate human error from the quality-control process, measuring everything in raw numbers rather than based on human response. So, in light of the fact that statistical process control relies almost exclusively on numbers, it is most useful in improving processes that are designed around fairly rigid protocols.

Applications of Statistical Process Control
In manufacturing, for instance, one of the goals is often to develop a process for creating items that are infinitely replicable with as little variance as possible. There are many practical reasons for this. For one, most companies want their customers to feel secure in their expectations. For another, staying within certain statistical ranges is often required of products that are regulated or where precisely calibrated operation is essential. For example, statistical process control is useful for ensuring that mass-produced foods stay in line with the nutritional information printed on the label. Of course, not having the actual content of the food in line with the label can lead to serious consequences if regulators find out. Another example is in the production of medical equipment, where manufacturing variances can mean the difference between life and death. But statistical process control can be useful outside manufacturing. Wherever there are rigidly controlled processes with the potential for inefficiencies, bottlenecks, and small deteriorations along a complex network of interacting parts, statistical process control is an efficient way to cut through the confusion and drill down to the source of the problem. Where the statistics are off, that is liable to be where the source of the problem lies.

Using Statistical Process Control
Statistical process control is not any single method but rather a large group of methods that can be applied in any number of ways to an unlimited variety of processes. In general, however, establishing a system of statistical process control involves first putting together a process that runs as well as possible, taking statistics based on the freshly created process, and then continuously monitoring the statistics. When variances become apparent, their sources are usually easily traced to unusual points in the data. Companies have come up with all sorts of sophisticated ways to monitor statistics. In the 21st century, the trajectory is toward digital statistical process control systems with as much automation as possible. In some cases, it is even possible to automate the diagnosis of the problem. But for the most part, the technology is still only sophisticated enough to call attention to anomalies that may lead to variances, and then it's left to people to diagnose the problem and take the appropriate steps to reduce the variances. Whether automated or completely human-controlled, most SPC systems rely on data maps and control charts that present the data in an organized way. In most cases, there are preset ranges within which each data point must fall, and the job of the person monitoring the charts is to watch for data points out of the preset ranges. Ultimately, this is what makes SPC a useful system; while it can be difficult to set up, it makes checking for problems in the process almost too easy.

New to Statistical Process Control? Here Are the Basics
Statistical process control began as a set of methods for companies to monitor the quality of their products and eliminate variances from item to item. The methods borrow ideas from the field of statistics and apply them to sophisticated and often complex processes that are difficult to monitor without an innovative monitoring system. It's used most often by companies whose large and complex manufacturing processes are cumbersome to track via old-fashioned methods, and it's also widely used by companies that need as little variance as possible in their manufactured products. And for adventurous statistics buffs, SPC can also be applied to many other areas of life.

How Statistical Process Control Works
Imagine, for example, a company that manufactures frozen burritos and ships them to grocery stores across the U.S. The company is required to print a detailed list of all the ingredients contained in the burritos, and it also must provide accurate information regarding the nutritional value of each item. Regulations allow for a small amount of variance from item to item, but each burrito must nevertheless fall into a very narrow range in terms of ingredients and nutritional value, not to mention other categories like size and weight. How does the company go about ensuring that every single burrito shipped out meets the required specs? It's rather simple, actually. The company has data points associated with every step of the manufacturing process, and it has required ranges for every data point. The finished burritos are regularly monitored for variances, and when some items begin coming out of the process outside the specs required by regulation, the quality control team examines the stats and searches for points that are outside the required ranges. When a problem is found in the data, it may relate to an aging piece of equipment, for example, or an improperly trained worker. In any case, locating the problem in the statistics leads the quality controllers straight to the source of the issue (a toy sketch of such a range check follows below).
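The referenced sketch of the kind of range check described (the data point names and ranges are invented for illustration):

```python
# Hypothetical required ranges for a few monitored data points.
spec_ranges = {
    "weight_g": (140.0, 160.0),
    "sodium_mg": (430.0, 470.0),
    "fill_temp_c": (4.0, 8.0),
}

def out_of_spec(measurements):
    """Return the data points that fall outside their required ranges."""
    return {
        name: value
        for name, value in measurements.items()
        if not (spec_ranges[name][0] <= value <= spec_ranges[name][1])
    }

# The quality control team would start troubleshooting from these points.
print(out_of_spec({"weight_g": 152.3, "sodium_mg": 492.0, "fill_temp_c": 6.1}))
```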

The Benefits of Statistical Process Control
From the above example, one can already begin to see how incredibly useful SPC can be. Without such a system in place, correcting variances in products can be an exhausting process requiring far too many resources. Imagine if that company lacked a quality control system and suddenly discovered that the ratio of ingredients inside their burritos was off in a few respects. The only way to locate the problem would be to engage in a long survey of every step of the process, every worker assigned to that process, and every ingredient that goes into it. In such scenarios, any variance can mean a huge loss of productivity and hence of profits. In other words, statistical process control is far more efficient than quality control practices that rely completely on human observation. Plus, it also removes that flawed human element that is so prone to getting things wrong. If that food company were to take a guess at what was off in their process and get it wrong, it could lead to serious consequences. Using statistics greatly diminishes the potential for human error and thus protects companies against fines, lawsuits, lost business, and so on.

How Statistical Process Control Reduces Human Error in Evaluating Processes
Statistical process control sounds like a complex term, and indeed it can become rather complicated once you get into it. But for beginners, it can be defined quite simply: statistical process control refers to a group of strategies for monitoring processes using methods originally set forth in the field of statistics. While it is typically used in manufacturing and other types of business, it can be applied to virtually any fairly rigid process. It can even be applied to everyday processes in people's lives.

Controlling the Human Element
There are many reasons why statistical analysis is useful for evaluating processes, but the biggest factor in favor of statistical process control is its objectivity. While other evaluation methods involve human judgment and hence are subject to flaws of subjectivity and simple human error, statistical process control looks at the actual results with no evaluation and no human judgment. The underlying assumption of statistical process control is that by quantifying elements of processes and examining them in raw numbers, the truth about where the processes do and don't work can be reached. But of course, it's important to keep in mind that not everything can be measured statistically. For instance, statistical process control can be used to make sure every item that comes off an assembly line falls within the range of certain specs, but what it cannot so easily do is quantify customer satisfaction with the items. Subjective human reactions to things are inherently unreliable as data, especially since people often don't know exactly why they do or do not like something. With that being said, there are ways that statistical process control can be applied to human reactions to things, namely by taking a large sample size. So, for example, if you poll five people about why they do or do not like a product, you're liable to get a range of answers, some unexpected, and you probably won't get any actionable information out of it. But if you analyze the behavior of 5,000 people with regard to a product, this gives you very clear data about human tendencies.

Protecting Against Human Error
While human reactions to a problem don't necessarily apply directly to the process under analysis, what they can do is point toward parts of the process that could be improved. The failure of a product can stem from a variety of sources. Sometimes it boils down to poor planning or design, sometimes it relates to bad craftsmanship or shoddy materials, and sometimes it stems from a lack of quality control. Getting a sense of the human response to a product should point to which of these is most relevant. Setting aside how people respond to products, the essential benefit of statistical process control is that it is useful in virtually any type of manufacturing that follows a fairly rigid process. There can be some difficulty in pinpointing where in the process to take the statistical data, but once this is set, it becomes an incredibly efficient way to locate points in the process that have deteriorated or where things could be improved. If the data is read correctly, then human judgment doesn't even enter into it.

The Business Uses of Statistical Process Control
Since the development of statistical process control during the middle of the 20th century, the methods that conventionally fall under the SPC designation have become increasingly sophisticated, and they're now being applied in countless ways in virtually all segments of business and manufacturing. In fact, if you were to do a survey of the most successful companies in the world, you would undoubtedly find SPC methods being used in some capacity by the vast majority of them. And the fact that SPC can be used in such a wide range of applications shows just how useful it is. For anyone new to statistical process control, it may be difficult to imagine exactly how these methods can be put into action, so let's look at a few real-world applications in which SPC is now being used.

Food manufacturing: In most developed countries, there are very strict regulations governing how food can be processed and marketed. In most places, makers of food products are required to print full ingredient lists along with detailed information about the nutritional value of the food. In order to stay true to what's printed on the label, companies must make sure that the products they make have very little variance from item to item. That's where statistical process control comes in. When items begin coming out of the manufacturing process with flaws or variances, the managers simply go to the statistics and look for data points that are out of line with expectations. More often than not, finding the source of the problem is as simple as locating the problematic statistic, tracing its cause, and making small, precise adjustments to the process.

Medical supplies: Statistical process control has proved immensely useful in fields where life and death depend on items being manufactured to precision. Before statistical process control, the manufacturing of medical supplies required huge quality control teams to monitor all stages of the process and test every piece of equipment as it came out of the manufacturing process. Today, while there is still extensive quality control that must be done, statistical process control has made monitoring for and eliminating variances far more efficient. What was once done by a full department in a company can now be done by one or a few individuals.

Vehicle manufacturing: As with medical supplies, vehicles such as cars, trucks, and airplanes must be manufactured with every element within very specific ranges, or else the safety of the vehicles' operators and passengers is put at risk. Since Henry Ford pioneered many aspects of the modern assembly line in the early 20th century, the automobile industry has always been at the cutting edge of the world's manufacturing processes, and the industry's use of SPC keeps it at the cutting edge even to this day. The average consumer of motor vehicles doesn't realize just how complex today's cars are. A typical vehicle can have upwards of 10,000 parts, and for aircraft this figure might be multiplied a few times. Making sure all these thousands of parts are assembled well and run perfectly requires extensive statistical monitoring. Today, much of the process is automated, but human process control managers still play a large role.

Useful Tools in Statistical Process Control
From a philosophical standpoint, it is obvious that statistical process control is a powerful method for eliminating inefficiencies and variances in a system. But for those who actually practice statistical process control, it's not about the philosophy so much as the nitty-gritty of setting, gathering, reading, and interpreting data. Much of this has been automated in recent years thanks to developing technologies, but actual humans are still deeply involved in every step of statistical process control. And when humans are involved in interpreting often complex data, it helps to have a few tools to make the information clearer. Here are a few types of tools that are commonly used by professionals in the field of statistical process control:

Flow charts: A flow chart maps a process in all its complex parts from beginning to end. While they are not unique to statistical process control and in fact have little to do with the statistics themselves, flow charts are useful for giving quality control managers an overall picture of the process, which makes it easier to evaluate data discrepancies pointing to potential variances. They're also useful for giving a clear picture of where portions of the process fall within an overall timeline and in relation to other portions.

Run graphs: A run graph is simply a graph that displays data in terms of time. While run graphs can be useful for looking at a single point of data, in statistical process analysis they are often used to get a quick snapshot of the relationships between different data points. For example, if two data points almost always rise and fall at the same time, then this points to a likely relationship, whether direct or indirect, between what is measured by the two statistics. Run graphs are most useful for two variables and can become quite chaotic when more variables are introduced, but using them to interpret the relationships between three or more data points is not unheard of.

Designed experiments: A designed experiment is exactly what it sounds like. When a process begins creating results that are out of line with expectations, quality control managers may undertake a set of experiments to pinpoint the source of the flaws. This may involve segmenting the process, taking some elements out, or adding elements in an attempt to attenuate the flaws for informational purposes.

Pareto charts: Pareto charts are based on the Pareto Principle, which states that not all causes of a phenomenon have the same impact. So, for example, if a product is flawed, there may be four different causes of the flaw, but it may be that one of the causes is the biggest culprit while the others would not be significant without the primary cause. Pareto charts try to make sense of this, mapping the relative impact of different factors in a system (a minimal sketch follows this list).

Control graphs: Control graphs are essentially maps of the frequency of variations over time. If a process is statistically stable, a control graph shows nothing unusual. It's only when instability occurs and variances start creeping in that a control graph becomes useful. These graphs are helpful for locating patterns of variance and identifying whether variances result from mere chance or from a flaw in the process.
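The referenced Pareto-style sketch (defect categories and counts are invented), ranking causes so the dominant few stand out:

```python
from collections import Counter

# Hypothetical defect log for a printing line.
defects = Counter({"binding": 57, "alignment": 21, "ink": 9, "trim": 5})
total = sum(defects.values())

cumulative = 0
for cause, count in defects.most_common():  # largest impact first
    cumulative += count
    print(f"{cause:10s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```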
Source: http://www.statisticalprocesscontrol.org/
