SUBJECT: STATISTICAL QUALITY CONTROL (10ME668)
Dr. N Venkatesh, CEC


UNIT 1: INTRODUCTION

Quality
It is a relative word. It lies in the eyes of the perceiver. According to ISO 9000:2000, it is
defined as the degree to which a set of inherent characteristics fulfills the requirements.
Q = P/E, where P is performance and E is expectations.
The extent to which a product or service successfully meets expectations is illustrated in the diagram below. Quality is an expression of the gap between the standard expected and the standard provided. When the two coincide (there is no gap), quality is good or satisfactory; when there is a gap, there is cause for dissatisfaction and an opportunity for improvement.

The inherent characteristics of a product or service created to satisfy customer needs, expectations and requirements are its quality characteristics: physical and functional characteristics such as weight, shape, speed, capacity, reliability, portability and taste. Price and delivery, by contrast, are assigned characteristics, not inherent characteristics of a product or service; both are transient features, whereas the impact of quality is sustained long after the attraction or the pain of price and delivery has subsided. Quality characteristics that are defined in a specification are quality requirements, and hence any technical specification for a product or service that is intended to reflect customer expectations, needs and requirements is a quality requirement. Most people simply regard such requirements as product or service requirements, and calling them 'quality' requirements introduces potential for misunderstanding.

Quality Improvement
Quality Improvement is a formal approach to the analysis of performance and systematic
efforts to improve it. The ISO definition of quality improvement states that it is the actions
taken throughout the organization to increase the effectiveness of activities and processes to
provide added benefits to both the organization and its customers. In simple terms, quality
improvement is anything which causes a beneficial change in quality performance. There are
two basic ways of bringing about improvement in quality performance. One is by better
control and the other by raising standards. We don't have suitable words to define these two
concepts. Doing better what you already do is improvement but so is doing something new.
Juran uses the term control for maintaining standards and the term breakthrough for
achieving new standards. Imai uses the term Improvement when change is gradual and
Innovation when it is radical. Hammer uses the term Reengineering for the radical changes.
All beneficial change results in improvement whether gradual or radical so we really need a
word which means gradual change or incremental change. The Japanese have the word
Kaizen which means continuous improvement.

Quality control:
Quality control is the combination of all the devices and techniques used to control product quality at the most economical cost that still yields adequate customer satisfaction.

Dimensions of Quality
What are the dimensions of quality?
Before we discuss the dimensions of quality, we must discuss three aspects associated with the definition of quality: quality of design, quality of conformance, and quality of performance.


Quality of design concerns the set of conditions that the product or service must minimally possess to satisfy the requirements of the customer. Thus, the product or service must be designed so as to meet at least minimally the needs of the consumer. The design should also be simple and inexpensive enough to meet the customers' expectations of the product or service. Quality of design is influenced by many factors, such as product type, cost, profit policy, demand for the product, availability of parts and materials, and product reliability.

Quality of conformance is basically meeting, once the product is manufactured or while the service is delivered, the standards defined in the design phase. This phase is also concerned with quality control from raw material through to the finished product. Three broad aspects are covered in this definition: defect detection, defect root cause analysis, and defect prevention. Defect prevention deals with the means to deter the occurrence of defects and is usually achieved using statistical process control techniques. Defects may be detected by inspection, testing, or statistical analysis of data collected from the process. Subsequently, the root causes behind the presence of defects are investigated, and finally corrective actions are taken to prevent recurrence of the defect.

Quality of performance is how well the product functions or the service performs when put to use. It measures the degree to which the product or service satisfies the customer from the perspective of both quality of design and quality of conformance. Meeting customer expectations is the focus when we talk about quality of performance.
There are eight such dimensions of quality. These are:
1. Performance:
It involves the various operating characteristics of the product. For a television set, for
example, these characteristics will be the quality of the picture, sound and longevity of the
picture tube.
2. Features:
These are characteristics that are supplemental to the basic operating characteristics. In an
automobile, for example, a stereo CD player would be an additional feature.
3. Reliability:
Reliability of a product is the degree to which it can be depended upon to deliver its benefits over a long period of time.
It addresses the probability that the product will work without interruption or breakdown.
4. Conformance:
It is the degree to which the product conforms to pre- established specifications. All quality
products are expected to precisely meet the set standards.
5. Durability:
It measures the length of time that a product performs before a replacement becomes
necessary. The durability of home appliances such as a washing machine can range from 10
to 15 years.
6. Serviceability:
Serviceability refers to the promptness, courtesy, proficiency and ease in repair when the
product breaks down and is sent for repairs.


7. Aesthetics:
Aesthetic aspect of a product is comparatively subjective in nature and refers to its impact on
the human senses such as how it looks, feels, sounds, tastes and so on, depending upon the
type of product. Automobile companies make sure that in addition to functional quality, the
automobiles are also artistically attractive.
8. Perceived quality:
An equally important dimension of quality is the perception of the quality of the product in
the mind of the consumer. Honda cars, Sony Walkman and Rolex watches are perceived to be
high quality items by the consumers.

History of Quality Methodology


The roots of quality reach back into antiquity, especially into China, India, Greece and the Roman Empire, in the form of skilled craftsmanship.
Industrial Revolution (18th century): the need arose for consistent, mass-produced, interchangeable products, and inspection after manufacturing and separate quality departments appeared. The science of modern quality methodology was started by R. A. Fisher, who perfected scientific shortcuts for sifting through mountains of data to spot key cause-and-effect relationships and so speed up the development of crop-growing methods.
Statistical methods at Bell Laboratories: W. A. Shewhart transformed Fisher's methods into a quality control discipline for factories (inspiring W. E. Deming and J. M. Juran); control charts were developed by W. A. Shewhart; acceptance sampling methodology was developed by H. F. Dodge and H. G. Romig.
World War II: acceptance of statistical quality control concepts in manufacturing industries (more sophisticated weapons demanded more careful production and reliability); the American Society for Quality Control was formed (1946).
Quality in Japan: W. E. Deming was invited to Japan to give lectures; G. Taguchi developed the Taguchi method for scientific design of experiments; the Japanese Union of Scientists and Engineers (JUSE) established the Deming Prize (1951); the Quality Control Circle concept was introduced by K. Ishikawa (1960).
Quality awareness in the U.S. manufacturing industry during the 1980s: Total Quality Management; quality control started to be used as a management tool.
Malcolm Baldrige National Quality Award (1987). The International Organization for Standardization (ISO) 9000 series of standards: Western Europe began to use them in the 1980s; interest increased in US industry in the 1990s; today they are widely accepted, a necessary requirement for worldwide distribution of a product and a significant competitive advantage.
The transition is as shown below.

The summary is as follows:


Year - Event

1700-1900 - Quality was largely determined by the efforts of an individual craftsman.
1915-1919 - WWI: the British government began a supplier certification program.
1919 - The Technical Inspection Association was formed in England; this later became the Institute of Quality Assurance.
1924 - W. A. Shewhart introduced the concept of control charts in a Bell Laboratories technical memorandum.
1928 - Acceptance sampling techniques were developed and refined by H. F. Dodge and H. G. Romig at Bell Labs.
1931 - W. A. Shewhart published Economic Control of Quality of Manufactured Product, outlining statistical methods for use in production and control chart methods.
1932-1933 - The British textile industry began use of statistical techniques for product/process development.
1944 - Industrial Quality Control began publication.
1954 - E. S. Page introduced the CUSUM control chart.
1960 - The concept of the quality control circle was introduced in Japan by K. Ishikawa.
1960 - Zero defects (ZD) programs were introduced in US industries.
1970-1980 - Oil shock and further development of Company-Wide Quality Control.
1981-1990 - Malcolm Baldrige award, ISO, Motorola's Six Sigma.

Statistical Methods for Quality Control and Improvement


Three major areas:
Statistical process control (SPC)
Design of experiments (DOX)
Acceptance sampling

Statistical Process Control (SPC)


Statistical process control (SPC) is a method of quality control which uses statistical methods.
SPC is applied in order to monitor and control a process. Monitoring and controlling the
process ensures that it operates at its full potential.


The concepts of Statistical Process Control (SPC) were initially developed by Dr. Walter
Shewhart of Bell Laboratories in the 1920's, and were expanded upon by Dr. W. Edwards
Deming, who introduced SPC to Japanese industry after WWII. After early successful
adoption by Japanese firms, Statistical Process Control has now been incorporated by
organizations around the world as a primary tool to improve product quality by reducing
process variation.
Dr. Shewhart identified two sources of process variation: Chance variation that is inherent in
process, and stable over time, and Assignable, or Uncontrolled variation, which is unstable
over time - the result of specific events outside the system. Dr. Deming relabeled chance
variation as Common Cause variation, and assignable variation as Special Cause variation.
Based on experience with many types of process data, and supported by the laws of statistics
and probability, Dr. Shewhart devised control charts used to plot data over time and identify
both Common Cause variation and Special Cause variation.
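The idea behind a control chart can be sketched in a few lines of Python. This is an illustrative sketch, not Shewhart's full procedure: the measurements are made-up values, and the limits are estimated simply as the baseline mean plus or minus three standard deviations.

```python
# Baseline data gathered while the process is assumed stable
# (hypothetical measurements, for illustration only).
baseline = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 9.7, 10.1, 10.0, 9.9]

n = len(baseline)
mean = sum(baseline) / n
sigma = (sum((x - mean) ** 2 for x in baseline) / n) ** 0.5

ucl = mean + 3 * sigma  # upper control limit
lcl = mean - 3 * sigma  # lower control limit

# New observations are compared against the limits; points outside
# them suggest special-cause (assignable) variation.
new_points = [10.1, 13.5]
out_of_control = [x for x in new_points if not (lcl <= x <= ucl)]
print(out_of_control)  # [13.5]
```

A point inside the limits, like 10.1 here, is attributed to common-cause variation and left alone; reacting to it would be tampering.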

Design of experiments (DOE)


This branch of applied statistics deals with planning, conducting, analyzing and interpreting
controlled tests to evaluate the factors that control the value of a parameter or group of
parameters.
Design of experiments (DOE) is a systematic method to determine the relationship between
factors affecting a process and the output of that process. In other words, it is used to find
cause-and-effect relationships. This information is needed to manage process inputs in order
to optimize the output.
A strategically planned and executed experiment may provide a great deal of information about the effect of one or more factors on a response variable. Many experiments involve holding certain factors constant and altering the levels of another variable. This one-factor-at-a-time (OFAT) approach to process knowledge is, however, inefficient compared with changing factor levels simultaneously.
A well-performed experiment may provide answers to questions such as:
What are the key factors in a process?
At what settings would the process deliver acceptable performance?
What are the key main and interaction effects in the process?
What settings would bring about less variation in the output?
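The contrast with OFAT can be made concrete by enumerating a small full-factorial design. The factor names below are hypothetical, chosen only for illustration:

```python
from itertools import product

# Three hypothetical process factors, each at a low (-1) and high (+1) level.
factors = {
    "temperature": (-1, +1),
    "pressure": (-1, +1),
    "time": (-1, +1),
}

# A full-factorial design runs every combination of levels, which is what
# allows both main effects and interaction effects to be estimated.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

print(len(runs))  # 2**3 = 8 runs
```

An OFAT study of the same three factors would vary one factor while pinning the others, and so could never reveal, say, a temperature-pressure interaction.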

Acceptance sampling
Acceptance sampling is a major component of quality control and is useful when the cost of
testing is high compared to the cost of passing a defective item or when testing is destructive.
It is a compromise between doing 100% inspection and no inspection at all.
There are two types:
1. Outgoing inspection - follows production
2. Incoming inspection - before use in production
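The compromise can be quantified with a single-sampling plan: inspect a sample of n items and accept the lot if at most c defectives are found. The plan parameters and lot quality below are assumed values for illustration, not from the text:

```python
from math import comb

n, c = 50, 2   # sample size and acceptance number (assumed plan)
p = 0.02       # true fraction defective in the lot (assumed)

# Probability of accepting the lot: P(at most c defectives in the sample),
# computed from the binomial distribution.
p_accept = sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

print(round(p_accept, 3))  # 0.922
```

Plotting p_accept against p gives the plan's operating characteristic (OC) curve, which shows how sharply the plan discriminates between good and bad lots.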


Total Quality Management


It is defined as both a philosophy and a set of guiding principles that represent the foundation of a continuously improving organization.
TQM is a corporate business management philosophy which recognizes that customer needs and business goals are inseparable. It is appropriate within both industry and commerce.
TQM is an integrated organisational approach to delighting customers by meeting their expectations on a continuous basis, through everyone involved with the organisation working on continuous improvement in all products, services and processes, along with proper problem-solving methodology.

Quality Philosophy: contributions of the quality gurus

Deming's 14 points
1. Create constancy of purpose for continual improvement of product and service
Set the course today for better tomorrow
Preventive maintenance
Long term planning of resources
Take care not to get tangled in one while loosening the other
Stability with innovation
Minimization of variability and dispersion
2. Adopt the new philosophy for economic stability
Quality is more important than quantity
Higher quality at lower price
Mobilization of everyone is required
A paradigm shift is required
3. Cease dependence on inspection to achieve quality
Build quality during design/development stage through off line inspection and
in production through online inspection
Eliminate inspection of final product
Do right first time instead of do until right
Checking without considering how to improve is not useful
Fallacy of divided responsibility
4. End the practice of awarding business on price tag alone
It is not the fault of operator for faulty material
Price is not the only ultimate source
Stick to sole supplier
Loss of using lowest priced product is always high
5. Improve constantly and forever the system of production and service
Search for problems
Prevent rather than fire fight
Never get into bottleneck stage
Innovation must be applied to the whole system
6. Institute training on the job
Knowledge must be enhanced by all employees


Training is not non-productive
Experience is not the solution for everything
Theory of optimization: a win-win situation
Knowledge of statistical theory
Knowledge of psychology
7. Adopt and institute modern methods of supervision and leadership
Supervisors should be teachers rather than observers
Motivate by example rather than by fear
Counselors, not judges
Should be supportive, sympathetic, encouraging
Variety among people should be harnessed for improvement
8. Drive out fear
Encourage two way communication
Fear is a barrier for improvement
Fear is counterproductive
Stick to what you know becomes inevitable
9. Break down barriers between departments and individuals
Work as team
Destructive competition must be overcome by constructive cooperation
Contribution to the company as a whole
Everyone is customer to everyone
10. Eliminate the use of slogans, posters and exhortations
Slogans remain words, not facts
People are good; systems make them bad
Give proper training instead of slogans
11. Eliminate work standards and numerical quotas
Eliminate MBO (management by objectives)
Competent leadership develops productivity
MBO expects more than what can be done
Common (system) problems outnumber special problems roughly 85:15

12. Remove barriers that rob the hourly worker of the right to pride in work
Remove physical and mental obstacles
Barriers are MBO and performance appraisal
These increase internal destructive competition
Reduces risk taking
13. Institute vigorous program of education and retraining
Training and retraining must continue
Commitment to permanent employment
14. Define top management's permanent commitment to ever-improving quality and productivity


Deming's PDSA cycle

PLAN
DO
STUDY (CHECK)
ACT

Plan
Plan the route of action
Decision based on objectives, changes needed, performance measures, persons
responsible, availability of resources

Do
Involvement of everyone
Training, survey of customers, identification of core process
Small scale implementation of planned change
Study (Check)
Measuring and observing the effects, analysis of results and feedback
Deviations from the original plan should be evaluated
Act
Take corrective steps
Standardize the improvement

Joseph Juran: Juran's trilogy


Stage I: Quality planning
Identify customers (internal and external) and their needs
Translate needs into everyone's language
Optimize the process of production
Transfer the process into operation
Stage II: Quality control
Corrective action to control sporadic (special) problems, about 15% of problems
Aim to reduce chronic (common) waste, about 85% of problems
Compare actual performance to quality goals and act on the difference
Stage III: Quality improvement
A quality breakthrough is needed to improve to very high levels


Requires long-range planning, company-wide training, good coordination, top management commitment, etc.

Source: Dale Besterfield et al., Total Quality Management, Pearson Education, third edition, 2005.

Quality Costs
The value of quality must be based on its ability to contribute to profits. The efficiency of a business is measured in terms of the money it earns. The cost of quality is no different from other costs: it is the sum of the money the organization spends to ensure that customer requirements are met on a continual basis, plus the costs wasted through failing to achieve the desired level of quality.


Prevention Costs:
Systems development
Quality engineering
Quality training
Quality circles
Statistical process control
Supervision of prevention activities
Quality data gathering, analysis, and reporting
Quality improvement projects
Technical support provided to suppliers
Audits of the effectiveness of the quality system

Internal Failure Costs:
Net cost of scrap
Net cost of spoilage
Rework labor and overhead
Re-inspection of reworked products
Retesting of reworked products
Downtime caused by quality problems
Disposal of defective products
Analysis of the cause of defects in production
Re-entering data because of keying errors
Debugging software errors

Appraisal Costs:
Test and inspection of incoming materials
Test and inspection of in-process goods
Final product testing and inspection
Supplies used in testing and inspection
Supervision of testing and inspection activities
Depreciation of test equipment
Maintenance of test equipment
Plant utilities in the inspection area
Field testing and appraisal at customer site

External Failure Costs:
Cost of field servicing and handling complaints
Warranty repairs and replacements
Repairs and replacements beyond the warranty period
Product recalls
Liability arising from defective products
Returns and allowances arising from quality problems
Lost sales arising from a reputation for poor quality

The following graph gives the relationship between conformance and non-conformance costs.


Quality and Productivity


Productivity can be defined as the ratio of total output to total input (raw materials,
man-hours, capital cost, etc.). Quality is a measure of excellence and can be defined
as the overall performance (reliability, durability, serviceability, etc. ) as compared to
customer expectations.
There is a positive correlation between the two in any business environment.
Better quality sets the product apart and increases sales. It also results in lesser
defects, increases production efficiency, reduces replacement/repair costs and
increases overall customer satisfaction.
The decrease in defects and repair costs decreases the input cost and increases the
overall productivity.
The exception to this positive correlation would be in the case of more time and
resources spent in quality production, especially in manual production.

Legal aspects of quality


Anything illegal doesn't exist
Not following legal prescriptions considerably lowers the quality of the whole operation
Most seriously, not following legal prescriptions could endanger the position of the NSI in society and have consequences for the quality of other results
Legal Aspects
Unregulated entry with quality control means the absence of any legal barriers to new
entrants to a market, provided however that they comply with certain limited obligations
pertaining to the quality of their operations or services.
Unregulated entry
Unregulated entry, in the context of the urban bus transport sector, is synonymous with an
open market for that sector where any person can provide urban bus transport services
where he wants, when he wants. Consequently, restrictions on bus operators are either non-existent or minimal.
When such minimal restrictions do exist, they are generally in the nature of restrictions
imposed by laws of general application, such as laws regulating:
Unacceptable business practices (anti-competitive behavior, price gouging, etc.).
Contract of carriage of passengers.
Motor vehicle construction and use.
Motor vehicle registration and insurance.
Payment of fuel taxes and levies.
Movement of traffic on highways or city streets.


UNIT 2: MODELLING PROCESS QUALITY

Basic Statistics
Mean, Mode, Median, and Standard Deviation
Statistics is the practice of collecting and analyzing data. The analysis of statistics is
important for decision making in events where there are uncertainties.
Measures of Central Tendency:
A good way to begin analyzing data is to summarize the data into a single representative
value.
The three most common measures of central tendency are mean, median and mode.
The sample mean is the average: the sum of all observed outcomes from the sample divided by the total number of observations. We use x̄ as the symbol for the sample mean. In math terms,

x̄ = (x₁ + x₂ + … + xₙ)/n = (1/n) Σ xᵢ

where n is the sample size and the xᵢ are the observed values.
The mode of a set of data is the number with the highest frequency, the one that occurs the maximum number of times.

One problem with using the mean is that it often does not depict the typical outcome. If there is one outcome that is very far from the rest of the data, the mean will be strongly affected by it. Such an outcome is called an outlier.

An alternative measure is the median. The median is the middle score. If we have an even number of observations, we take the average of the two middle values. The median is better for describing the typical value and is often used for income and home prices.
Example
Suppose you randomly selected 10 house prices. You are interested in the typical house
price. In lakhs the prices are 2.7, 2.9, 3.1, 3.4, 3.7, 4.1, 4.3, 4.7, 4.7, 40.8
If we computed the mean, we would say that the average house price is 7.44 lakhs (744,000). Although this number is true, it does not reflect the price of available housing in central Mangaluru. A closer look at the data shows that the house valued at 40.8 lakhs skews the data. Instead, we use the median. Since there is an even number of outcomes, we take the average of the middle two: (3.7 + 4.1)/2 = 3.9. Therefore, the median house price is 3.9 lakhs (390,000), which better reflects what a house shopper should expect to pay.
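The computation in this example can be checked with a few lines of Python (prices in lakhs, as in the text):

```python
prices = [2.7, 2.9, 3.1, 3.4, 3.7, 4.1, 4.3, 4.7, 4.7, 40.8]

mean = sum(prices) / len(prices)

# Median: middle value of the sorted data, averaging the two
# middle values when the number of observations is even.
s = sorted(prices)
n = len(s)
median = (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

print(round(mean, 2))    # 7.44 -> 744,000: pulled up by the outlier
print(round(median, 2))  # 3.9  -> 390,000: the typical price
```

The single outlier (40.8) moves the mean by more than 3 lakhs while leaving the median untouched, which is exactly why the median is preferred here.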
An empirical relation between mean, mode and median is: Mean − Mode = 3 (Mean − Median).

Variance and Standard deviation


The mean, mode and median do a nice job of telling where the centre of the data set is, but often we are interested in more.


For example, a pharmaceutical engineer develops a new drug that regulates iron in the blood. Suppose she finds that the average iron content after taking the medication is at the optimal level. This does not by itself mean that the drug is effective: it is possible that half of the patients have dangerously low iron content while the other half have dangerously high content. Instead of being an effective regulator, the drug would be a deadly poison. What the engineer needs is a measure of how far the data are spread apart. This is what the variance and standard deviation provide.

Measures of Dispersion:
Dispersion gives information about how spread out the values are in the data set.
Common measures of dispersion are range, standard deviation
Range:
Definition: The range in a data set measures the difference between the smallest entry
value and the largest entry value.
Formula: Range = (largest entry value - smallest entry value)
Standard Deviation:
Definition: Standard deviation measures the variation or dispersion that exists about the mean.
A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data points are spread over a large range of values.

Illustration:
The owner of a restaurant is interested in how much people spend at the restaurant. He
examines 10 randomly selected receipts for parties of four and writes down the following
data.
44, 50, 38, 96, 42, 47, 40, 39, 46, 50
Mean = 49.2


Hence, σ² = Σ(x − x̄)²/n = 2599.6/10 = 259.96, and σ = √259.96 ≈ 16.12.
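Recomputing the sum of squared deviations for the receipt data verifies the result:

```python
data = [44, 50, 38, 96, 42, 47, 40, 39, 46, 50]

n = len(data)
mean = sum(data) / n                              # 49.2

# Population variance: average squared deviation from the mean.
variance = sum((x - mean) ** 2 for x in data) / n
std_dev = variance ** 0.5

print(round(variance, 2))  # 259.96
print(round(std_dev, 2))   # 16.12
```

Note how the single large receipt (96) contributes the bulk of the squared deviations, illustrating again how sensitive the variance is to outliers.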

Illustration 2:
2). The table shows the number of goals scored in a game of football shootout (each person gets 5 kicks at the goal) by students in a class.

Goals:     0  1  2  3  4  5
Frequency: 1  4  9  8  5  3

a). Calculate the mean and median.
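A short sketch of the grouped-data computation for part (a):

```python
goals = [0, 1, 2, 3, 4, 5]
freq  = [1, 4, 9, 8, 5, 3]

n = sum(freq)  # 30 students in the class

# Mean of grouped data: sum of (value * frequency) over total frequency.
mean = sum(g * f for g, f in zip(goals, freq)) / n

# Median: expand the table into the sorted raw scores, then average
# the 15th and 16th values (even number of observations).
raw = [g for g, f in zip(goals, freq) for _ in range(f)]
median = (raw[n // 2 - 1] + raw[n // 2]) / 2

print(mean)    # 2.7
print(median)  # 3.0
```

The expansion step makes the median easy to locate: the cumulative frequencies (1, 5, 14, 22, ...) show that both middle observations fall in the "3 goals" group.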

Illustration 3:


Dr. Deming's Funnel Experiment


The funnel experiment is a visual representation of a process. It shows that a process
in control delivers the best results if left alone. The funnel experiment shows the
adverse effects of tampering with a process through the four setting rules.
The experiment was devised by Dr. W. Edwards Deming. It is described in his famous book 'Out of the Crisis'.
The funnel experiment is a mechanical representation of many real world processes at
our places of work. The aim of the experiment is to demonstrate the losses caused by
tampering with these very same processes. The primary source of this tampering is
the use of Management by Results, reactions to every individual result.
In the experiment, a marble is dropped through a funnel, and allowed to drop on a
sheet of paper, which contains a target. The objective of the process is to get the
marble to come to a stop as close to the target as possible. The experiment uses
several methods to attempt to manipulate the funnel's location such that the spread
about the target is minimized. These methods are referred to as rules.
During the first setup (rule 1), the funnel is aligned above the target, and marbles are dropped from this location. No action is taken to move the funnel to improve performance. This rule serves as our baseline for comparison with the supposedly improved rules.
The results of rule 1 are a disappointment. The marble does not appear to behave
consistently. The marble rolls off in various directions for various distances.
Certainly there must be a better (smart) way to position the funnel to improve the
pattern.


During rule 2, we examine the previous result and take action to counteract the
motion of the marble. We correct for the error of the previous drop. If the marble
rolled 2 inches northeast, we position the funnel 2 inches to the southwest of where it
last was.
A common example is worker adjustments to machinery. A worker may be working
to make a unit of uniform weight. If the last item was 2 pounds underweight, increase
the setting for the amount of material in the next item by 2 pounds.
Other examples include taking action to change policies and production levels based upon last month's budget variances, profit margins, and output.

Rule 3 addresses a possible flaw in rule 2: rule 2 adjusted the funnel from its last position, rather than relative to the target. Under rule 3, if the marble rolled 2 inches northeast last time, we set the funnel 2 inches southwest of the target. Then, if the marble again rolls 2 inches northeast, it will stop on the target. The funnel is set at an equal and opposite distance from the target to compensate for the last error.
We see rule 3 at work in systems where two parties react to each other's actions. Their goal is to maintain parity. If one country increases its nuclear arsenal, the rival country increases its arsenal to maintain the perceived balance.
A common example provided in economics courses is agriculture. A drought occurs
one year causing a drop in crop output. Prices rise, causing farmers to plant more
crop next year. In the next year, there are surpluses, causing the price to drop.
Farmers plant less the next year, and the cycle continues.


Under rule 4, in an attempt to reduce the variability of the marble drops, we decide to let the marble fall where it wants to. We position the funnel over the last resting place of the marble, as that appears to be where the marble tends to stop.
A common example of Rule 4 is when we want to cut lumber to a uniform length.
We use the piece we just cut in order to measure the location of the next cut.
Other examples of Rule 4 include:
Brainstorming (without outside help)
Adjusting the starting time of the next meeting based upon the actual starting time of the last meeting
Benchmarking, in order to find examples to follow
A message is passed from one person to the next, who repeats it to another person, and so forth
The junior worker trains the next new worker, who then trains the next, and so forth

Tampering
Rules 2, 3, and 4 are all examples of process tampering. We take action ("don't just stand there - do something!") as a result of the most recent result.
Rule 2 leads to a uniform circular pattern, whose size is 40% bigger than the Rule 1
circle. This is because the error in distance from the funnel is independent from one
marble drop to the next. In positioning the funnel relative to the previous marble
drop, we add the error from the first drop (by repositioning the funnel) to the second
drop (the error in the marble).
The standard deviation of the sum of n independent, identically distributed random variables is √n times the standard deviation of an individual variable, so for n = 2 the combined standard deviation is about 1.4 times the original. Note: this statistical principle is a standard question that appears on every Certified Quality Engineer exam in some form or another.
The problems of Rule 2 are corrected with dead bands in automated feedback
mechanisms and better calibration programs. We wait for a certain error to build up
before taking action. But how is the dead band determined? A control chart provides
the answer. Plot the results on a control chart, and recalibrate (or give a feedback
signal) when a statistically significant change is detected. Program dead bands
approximate the control chart action.
Rules 3 and 4 tend to blow up. In rule 3, results swing back and forth with greater
and greater oscillations from the target. In rule 4, the funnel follows a drunken walk
off the edge of the table. In both cases, errors accumulate from one correction to
the next, and the marble (or system) heads off to infinity. Rules 3 and 4 represent
unstable systems, with over-corrections tending to occur.

Final words

Schemes to control the location of the funnel should be control chart based. In
addition, we may have to think outside of the box to fix this system.
If we lowered the height of the funnel, we would fundamentally reduce the variation
in the process.
If we added more layers of cloth or paper to cushion the marbles landing, then the
marble would roll less. The impact of these changes would be detected by the control
chart, and would prove whether or not an improvement did occur.

Central limit theorem


In probability theory, the central limit theorem (CLT) states that, given certain
conditions, the arithmetic mean of a sufficiently large number of iterates of
independent random variables, each with a well-defined expected value and well-
defined variance, will be approximately normally distributed, regardless of the
underlying distribution.
To illustrate what this means, suppose that a sample is obtained containing a large
number of observations, each observation being randomly generated in a way that
does not depend on the values of the other observations, and that the arithmetic
average of the observed values is computed. If this procedure is performed many
times, the central limit theorem says that the computed values of the average will be
distributed according to the normal distribution (commonly known as a "bell curve").
A simple example of this is that if one flips a fair coin many times, the distribution of
the total number of heads will be approximately normal, with mean equal to half the
total number of flips.
The central limit theorem has a number of variants. In its common form, the random
variables must be identically distributed. In variants, convergence of the mean to the
normal distribution also occurs for non-identical distributions or for non-independent
observations, given that they comply with certain conditions.

If the population from which samples are taken is NOT normal, the distribution of
SAMPLE AVERAGES will still tend toward normality, provided that the sample size, n, is at
least 4. The tendency improves as n increases.
The standardized normal statistic for the distribution of averages is Z = (x̄ - μ) / (σ/√n),
where σ/√n is the standard deviation of the sample averages.
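A quick simulation illustrates this tendency. This is a sketch: the exponential population, the sample size n = 30 and the repetition count are arbitrary choices for illustration:

```python
import random
import statistics

random.seed(1)

# Population: exponential with mean 1 -- strongly skewed, decidedly non-normal.
def sample_mean(n):
    return statistics.fmean(random.expovariate(1.0) for _ in range(n))

# Draw many sample averages; by the CLT their distribution is near-normal,
# centred on the population mean with standard deviation sigma / sqrt(n).
means = [sample_mean(30) for _ in range(5000)]

print(round(statistics.fmean(means), 2))   # near the population mean of 1
print(round(statistics.stdev(means), 2))   # near 1 / sqrt(30), about 0.18
```

Even though individual observations are heavily skewed, the averages cluster symmetrically around 1 with a spread shrunk by √n, as the Z statistic above assumes.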

Area under normal curve


Areas under portions of a normal distribution can be computed by using calculus. Since this
is a non-mathematical treatment of statistics, we will rely on computer programs and tables to
determine these areas.
Figure 1 shows a normal distribution with a mean of 50 and a standard deviation of 10. The
shaded area between 40 and 60 contains 68% of the distribution.
Figure 2 shows Normal distribution with a mean of 100 and standard deviation of 20. Here
also 68% of the area is within one standard deviation (20) of the mean (100).

Figure 1

Figure 2
Figure 3 shows a normal distribution with a mean of 75 and a standard deviation of
10. The shaded area contains 95% of the area and extends from 55.4 to 94.6.
For all normal distributions, 95% of the area is within 1.96 standard deviations of the
mean.
For quick approximations, it is sometimes useful to round off and use 2 rather than
1.96 as the number of standard deviations you need to extend from the mean so as to
include 95% of the area

Figure 3
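The percentages quoted for Figures 1-3 can be reproduced without tables using Python's statistics.NormalDist (a sketch using the mean of 50 and standard deviation of 10 from Figure 1):

```python
from statistics import NormalDist

d = NormalDist(mu=50, sigma=10)

# Area within one standard deviation of the mean (between 40 and 60).
within_1sd = d.cdf(60) - d.cdf(40)

# Area within 1.96 standard deviations (the 95% band of Figure 3).
within_196sd = d.cdf(50 + 1.96 * 10) - d.cdf(50 - 1.96 * 10)

print(round(within_1sd, 4))    # about 0.6827
print(round(within_196sd, 2))  # 0.95
```

The same two calls reproduce Figure 2's result as well, since the areas depend only on the number of standard deviations, not on the particular mean and sigma.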

Calculation of area under normal curve


A tutor sets a piece of English Literature coursework for the 50 students in his class.
We make the assumption that when the scores are presented on a histogram, the data
are found to be normally distributed. The mean score is 60 out of 100 and the standard
deviation (in other words, the variation in the scores) is 15 marks.
Having looked at the performance of the tutor's class, one student, Sharath, has asked
the tutor if, by scoring 70 out of 100, he has done well. Bearing in mind that the mean
score was 60 out of 100 and that Sharath scored 70, then at first sight it may appear
that since Sharath has scored 10 marks above the 'average' mark, he has achieved one
of the best marks.
However, this does not take into consideration the variation in scores amongst the 50
students (in other words, the standard deviation). After all, if the standard deviation is
15, then there is a reasonable amount of variation amongst the scores when compared
with the mean.
While Sharath has still scored much higher than the mean score, he has not
necessarily achieved one of the best marks in his class. The question arises: How well
did Sharath perform in his English Literature coursework compared to the other 50
students?
Secondly, the tutor has a dilemma. In the next academic year, he must choose which
of his students have performed well enough to be entered into an advanced English
Literature class. He decides to use the coursework scores as an indicator of the
performance of his students. As such, he feels that only those students that are in the
top 10% of the class should be entered into the advanced English Literature class. The
question arises: Which students came in the top 10% of the class?
As such, we can use the standard normal distribution and its related z-scores to
answer these questions much more easily.

Z Score
Z-scores are expressed in terms of standard deviations from their means. Resultantly,
these z-scores have a distribution with a mean of 0 and a standard deviation of 1. The
formula for calculating the standard score is:

z = (X - μ) / σ

where X is the raw score, μ is the mean and σ is the standard deviation. For Sharath,
z = (70 - 60)/15 ≈ 0.67, and the standard normal table gives an area of 0.7486 below
z = 0.67. So clearly we can see that Sharath did better than a large proportion of
students, with 74.86% of the class scoring lower than he did.
However, the key finding is that Sharath's score was not one of the best marks. It
wasn't even in the top 10% of scores in the class. How? Let us see.
A better way of phrasing second question would be to ask: What mark would a
student have to achieve to be in the top 10% of the class and qualify for the advanced
English Literature class?
If we refer to our frequency distribution below, we are interested in the area to the
right of the mean score of 60 that reflects the top 10% of marks. As a decimal, the top
10% of marks are those above the 90% point of the distribution (i.e., 100% - 90% = 10%,
or 1 - 0.9 = 0.1). In this case, we work in reverse: instead of converting a score into an
area, we look up the area 0.9000 in the body of the standard normal table and read off the
z-score that produces it. The closest tabled area is 0.8997, which corresponds to a
z-score of 1.28 (i.e., 1.2 from the row plus 0.08 from the column).
Score (X)    Mean    Standard Deviation (s)    z-score (Z)
    ?         60             15                  1.282

Solving z = (X - mean)/s for X gives X = 60 + 1.282 × 15 ≈ 79.2, so a student must
score about 79 out of 100 to be in the top 10% of the class.

Interpretation of Z score
A z-score less than 0 represents an element less than the mean.
A z-score greater than 0 represents an element greater than the mean.
A z-score equal to 0 represents an element equal to the mean.
A z-score equal to 1 represents an element that is 1 standard deviation greater than the
mean; a z-score equal to 2, 2 standard deviations greater than the mean; etc.
A z-score equal to -1 represents an element that is 1 standard deviation less than the
mean; a z-score equal to -2, 2 standard deviations less than the mean; etc.
If the number of elements in the set is large, about 68% of the elements have a z-score
between -1 and 1; about 95% have a z-score between -2 and 2; and about 99% have a
z-score between -3 and 3.

UNIT 3: METHODS AND PHILOSOPHY OF SPC


Definition:
Statistical process control (SPC) involves inspecting a random sample of the output from a
process and deciding whether the process is producing products with characteristics that fall
within a predetermined range. SPC answers the question of whether the process is
functioning properly or not. Statistical process control is a collection of tools that when used
together can result in process stability and variability reduction.

The seven major tools are


1. Check sheet / stratification
2. Pareto diagram
3. Cause and effect diagram
4. Graphs
5. Control charts
6. Histograms
7. Scatter diagram

Check sheets
Check sheets are a simple way of gathering data so that decisions can be based on facts,
rather than anecdotal evidence. Figure 4 shows a checklist used to determine the causes of
defects in a hypothetical assembly process. It indicates that "not-to-print" is the biggest cause
of defects, and hence, a good subject for improvement. Checklist items should be selected to
be mutually exclusive and to cover all reasonable categories. If too many checks are made in
the "other" category, a new set of categories is needed.

They could also be used to relate the number of defects to the day of the week to see if there
is any significant difference in the number of defects between workdays. Other possible
column or row entries could be production line, shift, product type, machine used, operator,
etc., depending on what factors are considered useful to examine. So long as each factor can
be considered mutually exclusive, the chart can provide useful data. An Ishikawa Diagram
may be helpful in selecting factors to consider. The data gathered in a checklist can be used
as input to a Pareto chart for ease of analysis.

Pareto Charts
Vilfredo Pareto was an economist who noted that a few people controlled most of a nation's
wealth. "Pareto's Law" has also been applied to many other areas, including defects, where a
few causes are responsible for most of the problems. Separating the "vital few" from the
"trivial many" can be done using a diagram known as a Pareto chart. Figure below shows the
data from the checklist shown in above Figure organized into a Pareto chart.
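Building the Pareto ordering from checklist tallies is just a sort plus a cumulative percentage. A minimal sketch follows; the categories and counts are hypothetical, not taken from the figure:

```python
# Hypothetical checklist tallies from an assembly process.
defects = {"not-to-print": 55, "solder bridge": 21, "missing part": 12,
           "wrong part": 8, "other": 4}

total = sum(defects.values())
running = 0
for cause, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    # The cumulative percentage column identifies the "vital few" causes.
    print(f"{cause:15s} {count:3d}  {100 * running / total:5.1f}%")
```

Reading down the cumulative column shows how quickly the first one or two causes account for most defects, which is the whole point of the chart.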


Stratification is simply the creation of a set of Pareto charts for the same data, using different
possible causative factors. For example, Figure below plots defects against three possible
sets of potential causes. The figure shows that there is no significant difference in defects
between production lines or shifts, but product type three has significantly more defects than
do the others. Finding the reason for this difference in number of defects could be
worthwhile.

Cause and Effect Diagram

Ishikawa diagrams are named after their inventor, Kaoru Ishikawa. They are also called
fishbone charts, after their appearance, or cause and effect diagrams after their function. Their
function is to identify the factors that are causing an undesired effect (e.g., defects) for
improvement action, or to identify the factors needed to bring about a desired result (e.g., a
winning proposal). The factors are identified by people familiar with the process involved. As
a starting point, major factors could be designated using the "four M's": Method, Manpower,
Material, and Machinery; or the "four P's": Policies, Procedures, People, and Plant. Factors
can be subdivided, if useful, and the identification of significant factors is often a prelude to
the statistical design of experiments.
Above figure is a partially completed Ishikawa diagram attempting to identify potential
causes of defects in a wave solder process.

Graphs
Graphs come in many types, each type usually fitting a specific purpose better. According to
the situation to analyze or the information to share, the choice of the most suitable type of
graph (and, beyond the type, the scale and other parameters of the graph) will highlight or
hide certain aspects.
The first type of graph, and maybe the most common, is the line chart: lines joining plots,
where each plot is the graphical depiction of a pair of coordinates, and those coordinates are
the translation of specific parameters to check, e.g. km or miles per hour (speed),
temperature over time, units per hour per day as above, etc.

A radar chart is a graphical method of displaying multivariate data in the form of a two-
dimensional chart of three or more quantitative variables represented on axes starting from the
same point. The relative position and angle of the axes is typically uninformative.
For example, the graph above depicts the performance of the company with respect to its
usage of budget. The total area covered by either the blue shape or the red shape gives a clear picture.

Similarly, we can plot the performance of two or more companies under the parameters
mentioned above and identify the best company based on the area covered.

Control Charts
Control charts are the most complicated of the seven basic tools of TQM, but are based on
simple principles. The charts are made by plotting in sequence the measured values of
samples taken from a process. For example, the mean length of a sample of rods from a
production line, the number of defects in a sample of a product, the miles per gallon of
automobiles tested sequentially in a model year, etc. These measurements are expected to
vary randomly about some mean with a known variance. From the mean and variance,
control limits can be established. Control limits are values that sample measurements are not
expected to exceed unless some special cause changes the process. A sample measurement
outside the control limits therefore indicates that the process is no longer stable, and is
usually reason for corrective action.
Other causes for corrective action are non-random behavior of the measurements within the
control limits. Control limits are established by statistical methods depending on whether the
measurements are of a parameter, attribute or rate.
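The basic mechanism can be sketched in a few lines. The measurements below are invented, and real charts estimate the limits from within-subgroup variation using tabulated constants (as shown later in these notes) rather than from a plain standard deviation:

```python
import statistics

# Phase I: baseline measurements from a period believed to be stable.
baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 9.7, 10.0, 10.3, 9.9,
            10.0, 10.2, 9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.8, 10.1]

centre = statistics.fmean(baseline)
sigma = statistics.stdev(baseline)
ucl = centre + 3 * sigma   # upper control limit
lcl = centre - 3 * sigma   # lower control limit

# Phase II: plot fresh samples against the fixed limits and flag exceedances.
new_samples = [10.0, 10.2, 9.9, 11.5]
flagged = [x for x in new_samples if not lcl <= x <= ucl]
print(flagged)   # the 11.5 point signals a possible special cause
```

The flagged point is the signal for corrective action described above; the in-limit points require none.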

Histograms
Histograms are another form of bar chart in which measurements are grouped into bins; in
this case each bin representing a range of values of some parameter. For example, in Figure
below, X could represent the length of a rod in inches. The figure shows that most rods
measure between 0.9 and 1.1 inches. If the target value is 1.0 inches, this could be good
news. However, the chart also shows a wide variance, with the measured values falling
between 0.5 and 1.5 inches. This wide a range is generally a most unsatisfactory situation.


Besides the central tendency and spread of the data, the shape of the histogram can also be of
interest.
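The grouping step can be sketched in a few lines; the rod-length data here are made up to match the 0.5 to 1.5 inch range described above:

```python
# Hypothetical rod lengths in inches.
measurements = [0.62, 0.85, 0.91, 0.95, 0.98, 1.00, 1.02, 1.04, 1.08, 1.13,
                0.97, 1.01, 0.99, 1.21, 1.35, 0.74, 1.05, 0.96, 1.44, 1.03]

# Bin edges covering the observed range; each bin is 0.2 inches wide.
edges = [0.5, 0.7, 0.9, 1.1, 1.3, 1.5]
counts = [0] * (len(edges) - 1)
for x in measurements:
    for i in range(len(counts)):
        if edges[i] <= x < edges[i + 1]:
            counts[i] += 1
            break

# Text rendering: one row per bin, bar length = frequency.
for i, c in enumerate(counts):
    print(f"{edges[i]:.1f}-{edges[i + 1]:.1f} | {'#' * c}")
```

The tall central bin and the long tails make both the central tendency and the wide spread visible at a glance.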

Scatter diagrams
Scatter diagrams are a graphical, rather than statistical, means of examining whether or not
two parameters are related to each other. It is simply the plotting of each point of data on a
chart with one parameter as the x-axis and the other as the y-axis. If the points form a narrow
"cloud" the parameters are closely related and one may be used as a predictor of the other. A
wide "cloud" indicates poor correlation. Figure below shows a plot of defect rate vs.
temperature with a strong positive correlation.

It should be noted that the slope of a line drawn through the center of the cloud is an artefact
of the scales used and hence not a measure of the strength of the correlation. Unfortunately,
the scales used also affect the width of the cloud, which is the indicator of correlation. When
there is a question on the strength of the correlation between the two parameters, a correlation
coefficient can be calculated. This will give a rigorous statistical measure of the correlation
ranging from -1.0 (perfect negative correlation), through zero (no correlation) to +1.0 (perfect
correlation).
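The coefficient can be computed directly from its definition; a sketch with invented temperature/defect-rate pairs echoing the figure:

```python
import statistics

# Hypothetical paired observations: temperature vs. defect rate.
temp = [20, 22, 24, 26, 28, 30, 32, 34]
defect = [1.0, 1.3, 1.2, 1.6, 1.8, 1.7, 2.1, 2.3]

mt, md = statistics.fmean(temp), statistics.fmean(defect)
# Sample covariance divided by the product of the sample standard deviations.
cov = sum((t - mt) * (d - md) for t, d in zip(temp, defect)) / (len(temp) - 1)
r = cov / (statistics.stdev(temp) * statistics.stdev(defect))
print(round(r, 2))   # strong positive correlation, close to +1
```

Unlike the visual width of the cloud, r is unaffected by the axis scales, which is why it is the rigorous measure.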

SOURCES OF VARIATION: COMMON AND ASSIGNABLE CAUSES


If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are
filled to exactly the same level. Some are filled slightly higher and some slightly lower.
Similarly, if you look at cakes in a bakery, you will notice that some are slightly larger than
others and some have more blueberries than others. These types of differences are completely
normal. No two products are exactly alike because of slight differences in materials, workers,
machines, tools, and other factors. These are called common, or random, causes of
variation. Common causes of variation are based on random causes that we cannot identify.
These types of variation are unavoidable and are due to slight differences in processing. An
important task in quality control is to find out the range of natural random variation in a
process.
The second type of variation that can be observed involves variations where the causes can be
precisely identified and eliminated. These are called assignable causes of variation.
Examples of this type of variation are poor quality in raw materials, an employee who needs
more training, or a machine in need of repair. In each of these examples the problem can be
identified and corrected. Also, if the problem is allowed to persist, it will continue to create a
problem in the quality of the product. In the example of the soft drink bottling operation,
bottles filled with 250 ml of liquid would signal a problem. The machine may need to be
readjusted. This would be an assignable cause of variation. We can assign the variation to a
particular cause (the machine needs to be readjusted) and we can correct the problem (readjust
the machine).
Chance and assignable causes of variation

Chance Causes:
(i) Consist of many individual causes.
(ii) Any one chance cause results in only a minute amount of variation. (However, many of the chance causes act simultaneously, so that the total amount of chance variation is substantial.)
(iii) Some typical chance causes of variation are: (a) slight variations in raw material (though within the specifications), (b) slight vibration of the machine, (c) lack of human perfection in reading instruments and setting controls.
(iv) As a practical matter, chance variation cannot economically be eliminated from a process.

Assignable Causes:
(i) Consist of one or just a few individual causes.
(ii) Any one assignable cause can result in a large amount of variation.
(iii) Some typical assignable causes of variation are: a batch of defective raw material, a faulty set-up, an untrained operator.
(iv) The presence of assignable variation can be detected, and action to eliminate the causes is usually justified.
Chance (or common) causes account for the uncontrollable, natural variation present in any
repetitive process. A process that is operating with only chance causes of variation is said to
be in statistical control, or simply "in control". The chance causes are an inherent part of the process.
Assignable (or special) causes are those whose effect can be detected and controlled.
Assignable causes are not part of the chance causes. A process that is operating in the
presence of assignable causes is said to be an out-of-control process.

Sources of assignable causes


Improperly adjusted or controlled machines, operator errors or defective raw materials etc.
See the following Figure for both chance and assignable causes of variation
Note: The statistical process control (SPC) is useful to detect the occurrence of assignable
causes of process shifts so that investigation of the process and corrective action may be
undertaken before producing many non-conforming items. The ultimate goal of statistical
process control is the elimination of variability in the process.

Examples of common-cause and special-cause variation


Process: Baking a loaf of bread
  Common cause: The oven's thermostat allows the temperature to drift up and down slightly.
  Special cause: Changing the oven's temperature or opening the oven door during baking can cause the temperature to fluctuate needlessly.

Process: Recording customer contact information
  Common cause: An experienced operator makes an occasional error.
  Special cause: An untrained operator new to the job makes numerous data-entry errors.

Process: Injection molding of plastic toys
  Common cause: Slight variations in the plastic from a supplier result in minor variations in product strength from batch to batch.
  Special cause: Changing to a less reliable plastic supplier leads to an immediate shift in the strength and consistency of the final product.

Statistical Basis of the Control Chart: Basic principles


A typical control chart contains three horizontal lines:
Center line (CL), Upper control limit (UCL) and Lower control limit (LCL).
See the following Figure
Center line: The CL represents the average value of the quality characteristic corresponding to
the in-control state. As long as the points plot within the control limits, the process is
assumed to be in control, and no action is necessary. A point plotting outside the control limits
is interpreted as evidence that the process is out of control, and corrective action is required to
find and eliminate the assignable cause(s). Even if all the points plot inside the control limits,
if they behave in a systematic or nonrandom manner, this could be an indication that the
process is out of control. If the process is in control, all the plotted points should have an
essentially random pattern.

How does a control chart work?

The control limits as pictured in the graph might be 0.001 probability limits. If so, and if
chance causes alone were present, the probability of a point falling above the upper limit
would be one out of a thousand, and similarly, a point falling below the lower limit would be
one out of a thousand. We would be searching for an assignable cause if a point would fall
outside these limits. Where we put these limits will determine the risk of undertaking such a
search when in reality there is no assignable cause for variation.
Since two out of a thousand is a very small risk, the 0.001 limits may be said to give practical
assurances that, if a point falls outside these limits, the variation was caused be an assignable
cause. It must be noted that two out of one thousand is a purely arbitrary number. There is no
reason why it could not have been set to one out a hundred or even larger. The decision
would depend on the amount of risk the management of the quality control program is willing
to take. In general (in the world of quality control) it is customary to use limits that
approximate the 0.002 standard.
Letting X denote the value of a process characteristic, if the system of chance causes
generates a variation in X that follows the normal distribution, the 0.001 probability limits
will be very close to the 3σ limits. From normal tables we glean that the area beyond 3σ in
one direction is 0.00135, or in both directions 0.0027. For normal distributions, therefore, the
3σ limits are the practical equivalent of 0.001 probability limits.
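These tail areas can be verified directly with Python's statistics.NormalDist:

```python
from statistics import NormalDist

z = NormalDist()                 # standard normal distribution

one_sided = 1 - z.cdf(3.0)       # area beyond +3 sigma on one side
two_sided = 2 * one_sided        # area outside the +/- 3 sigma limits

print(round(one_sided, 5))       # 0.00135
print(round(two_sided, 4))       # 0.0027
```

The 0.0027 figure is the false-alarm probability discussed later under Type I errors.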

Types of Process Variability


Stationary and uncorrelated - data vary around a fixed mean in a stable or predictable
manner
Stationary and auto-correlated - successive observations are dependent with tendency
to move in long runs on either side of mean
Non-stationary - process drifts without any sense of a stable or fixed mean

Objectives of control charts


To provide information for current decisions in regard to recently produced parts.
To provide information for current decisions in regard to the production process.
To provide information for current decisions in regard to product specifications and
inspection procedures.
To provide a method of instructing the operators and supervisors

Significance of the control chart


Important uses
Most processes do not operate in a state of statistical control.
Consequently, the routine and attentive use of control charts will identify assignable
causes. If these causes can be eliminated from the process, variability will be reduced
and the process will be improved.
The control chart only detects assignable causes. Management, operator, and
engineering action will be necessary to eliminate the assignable causes.
Out-of-control action plans (OCAPs) are an important aspect of successful control
chart usage

Popularity of control charts


Control charts are extremely popular right from 1930s till date. The reasons are listed below.
1) Control charts are a proven technique for improving productivity.
2) Control charts are effective in defect prevention.
3) Control charts prevent unnecessary process adjustment.
4) Control charts provide diagnostic information.
5) Control charts provide information about process capability.

Choice of Control Limits


Warning Limits on Control Charts
Warning limits (if used) are typically set at 2 standard deviations from the mean.
If one or more points fall between the warning limits and the control limits, or close to
the warning limits the process may not be operating properly.
Good thing: warning limits often increase the sensitivity of the control chart.
Bad thing: warning limits could result in an increased risk of false alarms.

99.7% of the Data


If approximately 99.7% of the data lies within 3σ of the mean (i.e., 99.7% of the
data should lie within the control limits), then 1 - 0.997 = 0.003, or 0.3%, of the data
can fall outside 3σ (or 0.3% of the data lies outside the control limits). (Actually, we
should use the more exact value 0.0027.)
Three-Sigma Limits
The use of 3-sigma limits generally gives good results in practice.
If the distribution of the quality characteristic is reasonably well approximated by the
normal distribution, then the use of 3-sigma limits is applicable.
These limits are often referred to as action limits.

Type I and Type II errors


A Type I error means concluding that the process is out of control when it is really in
control; a Type II error means concluding that the process is in control when it is really out
of control. With 3σ limits, 0.0027 is the probability of a Type I error, or a false alarm, in this situation.
A Type I error (or error of the first kind) is the incorrect rejection of an in-control process.
Usually a Type I error leads one to conclude that a supposed effect or relationship exists when
in fact it doesn't. The misinterpretation of a common-cause data point as being special-cause
variation is referred to as a Type I error. Examples of Type I errors include a test that shows a
patient to have a disease when in fact the patient does not have the disease, a fire alarm going
off indicating a fire when in fact there is no fire, or an experiment indicating that a medical
treatment should cure a disease when in fact it does not.
A Type II error (or error of the second kind) is the failure to detect an out-of-control process.
A Type II error makes the opposite mistake (i.e., it misinterprets special-cause variation as
common-cause variation). Examples of Type II errors would be a blood test failing to detect
the disease it was designed to detect, in a patient who really has the disease; a fire breaking
out and the fire alarm not ringing; or a clinical trial of a medical treatment failing to show
that the treatment works when really it does.

Types of control charts

Control charts fall into two categories: Variable and Attribute Control Charts.
Variable data are data that can be measured on a continuous scale such as a
thermometer, a weighing scale, or a tape rule.
Attribute data are data that are counted, for example, as good or defective, as
possessing or not possessing a particular characteristic.
Variables answer the question how much? and are measured in quantitative units,
for example weight, voltage or time.
Attributes answer the question how many? and are measured as a count, for
example the number of defects in a batch of products.

The type of control chart you use will depend on the type of data you are working with.
It is always preferable to use variable data.
Variable data will provide better information about the process than attribute data.
Additionally, variable data require fewer samples to draw meaningful conclusions.

Control charts for variables


Two types of charts are used to track variable data; one for averages and one for ranges.
These charts are commonly used together and are known as an X-bar & R Chart.
Raw data are not plotted on X-bar & R charts. Instead, samples of data are collected
in subgroups of 2 to 5 data points and the mean and the range of those samples are
plotted on the charts.
The X-bar Chart monitors the process center, or location.
The subgroup mean (all of the points in the subgroup added up and divided by the
number of points in the subgroup) is plotted on the X-bar Chart.

The R Chart monitors the process variation, or dispersion.


The subgroup range (highest point minus the lowest point in the subgroup) is plotted
on the R Chart.
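The subgroup statistics and the usual limit formulas (X-bar chart: grand average ± A2·R̄; R chart: D4·R̄ and D3·R̄) can be sketched as follows. The four subgroups are invented, and A2, D3, D4 are the standard tabulated constants for subgroups of five, the kind of factors Appendix 2 refers to; in practice at least 20 subgroups would be collected first:

```python
import statistics

# Four hypothetical subgroups of n = 5 measurements each (e.g., rod lengths).
subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.02],
    [5.00, 5.02, 5.01, 4.99, 5.00],
]

xbars = [statistics.fmean(g) for g in subgroups]    # points for the X-bar chart
ranges = [max(g) - min(g) for g in subgroups]       # points for the R chart

xbarbar = statistics.fmean(xbars)    # grand average: X-bar chart centre line
rbar = statistics.fmean(ranges)      # average range: R chart centre line

A2, D3, D4 = 0.577, 0.0, 2.114       # standard constants for subgroup size 5

xbar_limits = (xbarbar - A2 * rbar, xbarbar + A2 * rbar)
r_limits = (D3 * rbar, D4 * rbar)
print(round(xbarbar, 4), round(rbar, 4))
print(tuple(round(v, 3) for v in xbar_limits))
```

Basing the X-bar limits on the average range (within-subgroup variation) rather than on the overall spread is what lets the chart separate common-cause noise from shifts in the process centre.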

The Standard Deviation Chart (s-Chart)


Let xij, j = 1, 2, ..., n be the measurements in the ith sample (i = 1, 2, ..., k). The standard deviation si for the ith sample is given by

si = √[ Σj (xij - x̄i)² / (n - 1) ]

where x̄i is the mean of the ith sample. Then the mean of the sample standard deviations is given by

s̄ = (s1 + s2 + ... + sk) / k

Let us now decide the control limits for si. With the usual three-sigma approach these are UCL = B4·s̄, CL = s̄ and LCL = B3·s̄, where B3 and B4 are the tabulated control-chart constants for subgroup size n.

As the data points for each subgroup are plotted, the points are connected to the previous
point and the charts are interpreted to determine if one of the out-of-control patterns has
occurred.
Typically, only one of the charts will go out-of-control at any one time.
Remember to add a comment on a chart to indicate the action taken to correct an out-
of-control situation.

Control Charts for Discrete Data or attributes


c-Chart
Used when identifying the total count of defects per unit (c) that occurred during the
sampling period, the c-chart allows the practitioner to assign each sample more than one
defect. This chart is used when the number of samples of each sampling period is essentially
the same.
Example of c-Chart

u-Chart
Similar to a c-chart, the u-chart is used to track the total count of defects per unit (u) that
occur during the sampling period and can track a sample having more than one defect.
However, unlike a c-chart, a u-chart is used when the number of samples of each sampling
period may vary significantly.
Example of u-Chart

np-Chart
Use an np-chart when identifying the total count of defective units (the unit may have one or
more defects) with a constant sampling size.
Example of np-Chart

p-Chart
Used when each unit can be considered pass or fail, no matter the number of defects, a p-
chart shows the number of tracked failures (np) divided by the total number of units (n),
i.e., the fraction defective.
Example of p-Chart

Notice that no discrete control charts have corresponding range charts as with the variable
charts. The standard deviation is estimated from the parameter itself (p, u or c); therefore, a
range is not required.

When the mean fraction of defectives p of the population is not known.

Let us select m samples, each of size n. If there are Di defective items in the ith sample, then the fraction defective of the ith sample is p'i = Di/n, i = 1, 2, ..., m. The average of these individual sample fraction defectives is

p̄ = (p'1 + p'2 + ... + p'm) / m = (ΣDi) / (mn)
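With the standard three-sigma p-chart limits, p̄ ± 3·√(p̄(1 - p̄)/n), the calculation looks like this (the defect counts below are hypothetical):

```python
import math

# Hypothetical inspection results: defectives found in each of m samples of n units.
defectives = [4, 6, 3, 5, 7]
n = 100
m = len(defectives)

pbar = sum(defectives) / (m * n)            # average fraction defective
sigma_p = math.sqrt(pbar * (1 - pbar) / n)  # SD of a sample fraction defective

ucl = pbar + 3 * sigma_p
lcl = max(0.0, pbar - 3 * sigma_p)          # a fraction cannot be negative
print(round(pbar, 3), round(ucl, 4), round(lcl, 4))
```

As noted above, no separate range chart is needed: the standard deviation is derived from p̄ itself via the binomial formula.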

How to Select a Control Chart?


Although this section describes a plethora of control charts, there are simple questions a
practitioner can ask to find the appropriate chart for any given use. Figure 13 walks through
these questions and directs the user to the appropriate chart.
How to Select a Control Chart?

Dr. N Venkatesh, CEC


SQC 10ME668

A number of points may be taken into consideration when identifying the type of control
chart to use, such as:
Variables control charts (those that measure variation on a continuous scale) are more
sensitive to change than attribute control charts (those that measure variation on a
discrete scale).
Variables charts are useful for processes such as measuring tool wear.
Use an individuals chart when few measurements are available (e.g., when they are
infrequent or are particularly costly). These charts should be used when the natural
subgroup is not yet known.
A measure of defects per unit (rather than defective units) is found with u- and c-charts.
In a u-chart, the defects within the unit must be independent of one another, such as
with component failures on a printed circuit board or the number of defects on a
billing statement.
Use a u-chart for continuous items, such as fabric (e.g., defects per square meter of
cloth).
A c-chart is a useful alternative to a u-chart when there are a lot of possible defects on
a unit, but there is only a small chance of any one defect occurring (e.g., flaws in a
roll of material).
When charting proportions, p and np-charts are useful (e.g., compliance rates or
process yields).
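The selection guidance above can be condensed into a small helper; the function and its answer strings below are illustrative, not taken from the original Figure 13.

```python
def select_chart(data_type, counting=None, constant_sample_size=True):
    """Suggest a control chart from the decision points listed above.

    data_type: "continuous" (measurements) or "discrete" (counts)
    counting: for discrete data, "defectives" (pass/fail units) or
              "defects" (nonconformity counts per unit)
    constant_sample_size: True if every subgroup has the same size
    """
    if data_type == "continuous":
        return "X-bar & R (or s) chart"
    if counting == "defectives":
        return "np-chart" if constant_sample_size else "p-chart"
    # Counting defects (nonconformities) per unit or area of opportunity
    return "c-chart" if constant_sample_size else "u-chart"
```

For example, select_chart("discrete", "defects", constant_sample_size=False) suggests the u-chart, matching the fabric-defects case above.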

Setting up a Variables Control Chart


The five steps for setting up X-bar & R control charts are:
1. Collect and Calculate Subgroup Data
Collect (at least) 20 subgroups of data from the process.
2. Calculate the Centerlines and Control Limits
The formulas for calculating the centerlines and control limits are given in Appendix
1. The control chart factors you'll need for the limits can be found in Appendix 2.
Set the scales for the charts.
Add the centerlines and control limits.
3. Plot the Data
Plot on both the X-bar and the R Charts.
4. Interpret the Control Chart
If no points are outside the limits and there are no unusual patterns, the process is
stable.
If more than two points are outside of the limits, it is not stable.
If one point is outside the limits, drop it, recalculate the centerline and limits, and
replot the data. In this case, the process is stable if all points are within the control
limits.
5. Take Action
Use the calculated limits if the process is stable.
Improve the process if it is not stable.
You cannot use a control chart on an unstable process.


Consequences of misinterpreting the process


Blaming people for problems that they cannot control
Spending time and money looking for problems that do not exist
Spending time and money on unnecessary process adjustments
Taking action where no action is warranted
Asking for worker-related improvements when process improvements are needed first

Sample Size and Sampling Frequency


The sample size is an important feature of any empirical study in which the goal is to make
inferences about a population from a sample. In practice, the sample size used in a study is
determined based on the expense of data collection, and the need to have sufficient statistical
power.

There are many things that can dictate the size of your sample. Let's start by figuring out the
ideal sample size, the one that you would have if you lived in a perfect world. Then, we'll
look at how real-world issues can play a role in determining what that sample size actually
ends up being.

In general, a larger sample size is better. Why is this? Well, all research is interested in making
inferences about the population at large. The larger the sample size, the closer you are to
having everyone in the population in your study.

For example, what would happen if you decided to do your coffee study on just three people?
Maybe one of them is taking a drug that interacts with caffeine. As a result, when this person
drinks coffee, they don't really get any more energetic or productive.

In your study, one-third of the sample has no reaction to coffee due to that drug. But in the
actual population, maybe only two or three percent of people take this drug. Your study
makes it look like a lot of people don't react to coffee because of the drug. Your results are
not accurate.

Inaccuracy due to a difference in the sample and the population is called error in research. A
larger sample size reduces error. If, for example, you increased your sample size from three
people to three hundred people, it is less likely that one-third of your sample will be taking
the drug that makes them less sensitive to caffeine.

Sampling frequency indicates how often the samples need to be considered for studies. In
designing a control chart, both the sample size to be selected and the frequency of selection
must be specified. Larger samples make it easier to detect small shifts in the process. Current
practice tends to favor smaller, more frequent samples.

The Average Run Length


The Average Run Length is the number of points that, on average, will be plotted on a control
chart before an out of control condition is indicated (for example a point plotting outside the
control limits).
The run length is a random variable and is defined as the number of points plotted on the
chart until an out-of-control condition is signaled. The beginning point at which we count the
number of plotted points depends on whether we are finding the in-control run length or the
out-of-control run length. The in-control run length measures the number of plotted points
from the beginning of the monitoring period until an out-of-control signal, given that there
have been no changes in the process. We want the average in-control run length to be high.

The out-of-control run length measures the number of plotted points from the time of
a process change until an out-of-control signal is given. Its value depends upon the
size of the shift. We want the average out-of-control run length to be small.
Assuming that the statistic being plotted is independent over time (i.e., one plotted
point is independent of other plotted points), the run length follows a geometric
distribution.
The average run length (ARL) is the average number of points plotted on the chart
until an out-of-control condition is signaled. It is the expected value of the run length
distribution. It is related to the OC curve as follows:

ARL = 1 / (1 - Pa)

If the process is in control: ARL0 = 1/α

If the process is out of control: ARL1 = 1/(1 - β)

where α is the probability of a Type I error and β is the probability of a Type II error.

Consider a problem with control limits set at ±3 standard deviations from the mean.
The probability that a point plots beyond the control limits is 0.0027 (i.e., p = 0.0027).
Then the average run length is

ARL = 1 / 0.0027 ≈ 370
What does the ARL tell us?

The average run length gives us the length of time (or number of samples) that should
plot in control before a point plots outside the control limits.
For our problem, even if the process remains in control, an out-of-control signal could
be generated every 370 samples, on average.
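The 370-sample figure can be checked with a short calculation; the sketch below uses only the standard library and assumes the plotted points follow a normal distribution.

```python
from statistics import NormalDist

# Probability that a point on an in-control process plots beyond +/-3 sigma:
# two tails of the standard normal, ~0.0027
p = 2 * (1 - NormalDist().cdf(3))

# Average run length between false alarms, ~370 samples
arl = 1 / p
```

This is why, even for a stable process, a 3-sigma chart generates a false alarm roughly every 370 points.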

Rational Subgroups

The key to successful control charts is the formation of rational subgroups. Control charts
rely upon rational subgroups to estimate the short-term variation in the process. This short-term
variation is then used to predict the longer-term variation defined by the control limits,
which differentiate between common and special causes of variation.

A rational subgroup is simply a sample in which all of the items are produced under
conditions in which only random effects are responsible for the observed variation.
Subgroups or samples should be selected so that if assignable causes are present, the
chance for differences between subgroups will be maximized, while the chance for
differences due to these assignable causes within a subgroup will be minimized.

A rational subgroup has the following properties:

1. The observations within a subgroup are from a single, stable process. If subgroups
contain the elements of multiple process streams, or if other special causes occur
frequently within subgroups, then the within subgroup variation will be large relative
to the variation between subgroup averages. This large within subgroup variation
forces the control limits to be too far apart, resulting in a lack of sensitivity to process
shifts. Western Electric Run Test 7 (15 successive points within one sigma of center
line) is helpful in detecting this condition.
2. The subgroups are formed from observations taken in a time-ordered sequence. In
other words, subgroups cannot be randomly formed from a set of data (or a box of
parts); instead, the data comprising a subgroup must be a "snapshot" of the process
over a small window of time, and the order of the subgroups would show how those
snapshots vary in time (like a "movie"). The size of the "small window of time" is
determined on an individual process basis to minimize the chance of a special cause
occurring in the subgroup.
3. The observations within the subgroups are independent, implying that no observation
influences, or results from, another. If observations are dependent on one another, the
process has autocorrelation (also known as serial correlation). In many cases, the
autocorrelation causes the within subgroup variation to be unnaturally small and a
poor predictor of the between subgroup variation. The small within subgroup variation
forces the control limits to be too narrow, resulting in frequent out-of-control
conditions and leading to tampering.

Selection of Rational Subgroups


Select consecutive units of production.
Provides a snapshot of the process.
Effective at detecting process shifts.
Select a random sample over the entire sampling interval.
Can be effective at detecting if the mean has wandered out-of-control and then
back in-control.
Often used to make decisions about acceptance of product
Effective at detecting shifts to out-of-control state and back into control state
between samples
Care must be taken because we can often make any process appear to be in
statistical control just by stretching out the interval between observations in
the sample.


Analysis of Patterns on Control Charts

A control chart may indicate an out-of-control condition either when one or more points fall
beyond the upper and lower control limits or when the plotted points exhibit some
nonrandom pattern.

Run: A run is a sequence of observations of the same type. When we have 4 or more points in a
row increasing in magnitude or decreasing in magnitude, this arrangement of points is called a run.

Several criteria may be applied simultaneously to a control chart to determine whether the
process is out of control. The basic criterion is one or more points outside of the control limits.
The supplementary criteria are sometimes used to increase the sensitivity of the control charts
to a small process shift so that one may respond more quickly to the assignable cause. Some
sensitizing rules for Shewhart control charts are as follows:
1) One or more points plot outside the control limits.
2) Two out of three consecutive points beyond the 2-sigma warning limits but still
inside the control limits.
3) Four of five consecutive points beyond the 1-sigma limits.
4) A run of eight consecutive points on one side of the center.
5) Six points in a row steadily increasing or decreasing.
6) 15 points in a row in zone C (both above and below the center line).
7) 14 points in a row alternating up and down.
8) 8 points in a row in both sides of the center line with none in zone C.
9) An unusual or nonrandom pattern in the data.
10) One or more points near a warning or control limit.
Among the 10 rules, the first four are called the Western Electric Rules (1956).
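Two of the rules above are easy to express in code. The sketch below checks rule 1 (one or more points beyond the control limits) and rule 4 (a run of eight consecutive points on one side of the center line); the function names are illustrative.

```python
def beyond_limits(points, lcl, ucl):
    """Rule 1: one or more points plot outside the control limits."""
    return any(x < lcl or x > ucl for x in points)

def run_of_eight(points, center):
    """Rule 4: a run of eight consecutive points on one side of the center line."""
    run, side = 0, 0
    for x in points:
        # Classify the point as above (+1), below (-1), or on (0) the center line
        s = 1 if x > center else (-1 if x < center else 0)
        run = run + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if run >= 8:
            return True
    return False
```

The remaining rules (zone tests, alternating patterns) follow the same counting pattern over a sliding window.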


Limitations of control charts


There are very few limitations of control charts themselves; they are a simple, effective and
reasonably intuitive tool. However, there are a number of barriers that, whilst not
insurmountable, may affect the efficacy of applying control charts:
Aggregation of data: The more the data is aggregated, the harder it becomes to make meaningful
interpretations, because aggregation masks some of the useful variability that underpins how
control charts are constructed.
Lack of clear and consistent data collection standards and criteria: If these are not defined
early on, changes in data values can be due to data artefacts or a change in definition rather
than actual interventions.
Lack of skills and knowledge to construct the most appropriate control chart: There are a
number of different types of chart that can be used and there is some mathematical
knowledge required to construct these properly. However, by using this guide and appropriate
SPC software this issue can be easily addressed.

The control chart is most effective when integrated into a comprehensive SPC program. The
seven major SPC problem-solving tools should be used routinely to identify improvement
opportunities and to assist in reducing variability and eliminating waste.


UNIT 4: CONTROL CHARTS FOR VARIABLES

All natural processes are affected by intrinsic variation. In nature, no matter how hard we try,
there can never be two identical actions that generate exactly the same result. This simple
statement contains a deeper truth that is connected with change and entropy (a measure of
disorder in the environment). Change is a constant in nature that is not only necessary but
literally vital. There would be no life without change.

How does this intrinsic characteristic of nature affect the activities of an organization?

By definition, an organization (system) is a set of interdependent individuals and activities
that work together to achieve a well-defined goal. Each of the activities that make up the
organization and every individual are affected by variation, and this variation influences the
possibility of achieving the goal. Any study we make of variation is aimed at understanding it
and reducing it, whatever it concerns, be it improvement of production or monitoring of
incoming cash.

In order to have information that can be used to make the right decisions, we need a technical
tool and the right mindset. We find both of these within Statistical Process Control (SPC),
introduced by W. Shewhart in the first half of the 20th century. This period saw the birth of
the Quality movement, first as a philosophy for production management, and later as a
general approach for organizations. It would be a mistake to consider SPC as a technicality.
As the founding father of Quality, Dr. Deming used to say, SPC is not simply a technique but
a way of thinking.

Variation in the production process leads to quality defects and lack of product consistency.
Categories of variation
Within-piece variation
One portion of surface is rougher than another portion.
Piece-to-piece variation
Variation among pieces produced at the same time.
Time-to-time variation
Service given early would be different from that given later in the day.
Sources of Variation
Equipment
Tool wear, machine vibration
Material
Raw material quality
Environment
Temperature, pressure, humidity
Operator
Operator performance: physical & emotional condition


Variation in a process occurs due to Common or chance causes and Assignable causes
Chance causes - common cause
inherent to the process or random and not controllable
if only common cause present, the process is considered stable or in control
Assignable causes - special cause
variation due to outside influences
if present, the process is out of control

Control chart may be used to discover assignable causes

If you look at bottles of a soft drink in a grocery store, you will notice that no two bottles are
filled to exactly the same level. Some are filled slightly higher and some slightly lower.
These types of differences are completely normal. No two products are exactly alike because
of slight differences in materials, workers, machines, tools, and other factors. These are called
common, or random, causes of variation. Common causes of variation are based on random
causes that we cannot identify. These types of variation are unavoidable and are due to slight
differences in processing.
The second type of variation that can be observed involves variations where the causes can be
precisely identified and eliminated. These are called assignable causes of variation. Examples
of this type of variation are poor quality in raw materials, an employee who needs more
training, or a machine in need of repair. In each of these examples the problem can be
identified and corrected. Also, if the problem is allowed to persist, it will continue to create a
problem in the quality of the product. In the example of the soft drink bottling operation,
bottles filled with 15.6 ounces of liquid would signal a problem. The machine may need to be
readjusted. This would be an assignable cause of variation. We can assign the variation to a
particular cause (machine needs to be readjusted) and we can correct the problem.

How to develop a control chart?

I. Define the Problem


Use other quality tools to help determine the general problem that's occurring and the
process that's suspected of causing it.
II. Select a quality characteristic to be measured
Identify a characteristic to study - for example, part length or any other variable
affecting performance.

III. Choose a subgroup to be sampled


Choose homogeneous subgroups
Homogeneous subgroups are produced under the same conditions, by the same
machine, the same operator, the same mold, at approximately the same time.
Try to maximize the chance to detect differences between subgroups, while minimizing
the chance for differences within a subgroup.
IV. Collect the data


Generally, collect 20-25 subgroups (100 total samples) before calculating the control
limits.
Each time a subgroup of sample size n is taken, an average is calculated for the
subgroup and plotted on the control chart.
V. Determine center line
The centerline should be the population mean, μ.
Since μ is unknown, we use X-double bar, the grand average of the subgroup
averages.
VI. Determine control limits
The normal curve displays the distribution of the sample averages.
A control chart is a time-dependent pictorial representation of a normal curve.
Processes that are considered under control will have 99.73% of their graphed
averages fall within ±3σ of the centerline (a total spread of 6σ).

Control Charts for Variables

Our objectives for this section are to learn how to use control charts to monitor
continuous data. We want to learn the assumptions behind the charts, their
application, and their interpretation.

Since statistical control for continuous data depends on both the mean and the
variability, variables control charts are constructed to monitor each. The most
commonly used chart to monitor the mean is called the X chart. There are two
commonly used charts used to monitor the variability: the R chart and the s chart.

Procedure for using variables control charts:

1. Determine the variable to monitor.

2. At predetermined, even intervals, take samples of size n (usually n=4 or 5).

3. Compute X and R (or s) for each sample, and plot them on their respective control
charts. Use the following relationships:

X-bar = (sum of Xi, i = 1..n) / n,    R = Xmax - Xmin,    s = sqrt[ (sum of (Xi - X-bar)^2, i = 1..n) / (n - 1) ]


4. After collecting a sufficient number of samples, k (k>20), compute the control


limits for the charts (see the table on page 4 for the appropriate control limit
calculations). The following additional calculations will be necessary:

X-double bar = (sum of X-bar_j, j = 1..k) / k,    R-bar = (sum of R_j, j = 1..k) / k,    s-bar = (sum of s_j, j = 1..k) / k

5. If any points fall outside of the control limits, conclude that the process is out of
control, and begin a search for an assignable or special cause. When the special cause
is identified, remove that point and return to step 4 to re-evaluate the remaining
points.

6. If all the points are within limits, conclude that the process is in control, and use
the calculated limits for future monitoring of the process.

Because the limits of the X chart are based on the variability of the process, we will
first discuss the variability charts. I suggest that you first determine if the R chart (or s chart)
shows a lack of control. If so, you cannot draw conclusions from the X chart.
The R chart

The R chart is used to monitor process variability when sample sizes are small (n<10),
or to simplify the calculations made by process operators.
This chart is called the R chart because the statistic being plotted is the sample range.
Using the R chart, the estimate of the process standard deviation, σ, is R-bar/d2.
The s chart
The s chart is used to monitor process variability when sample sizes are large (n ≥ 10),
or when a computer is available to automate the calculations.
This chart is called the s chart because the statistic being plotted is the
sample standard deviation.
Using the s chart, the estimate of the process standard deviation, σ, is s-bar/c4.

The X Chart:

This chart is called the X chart because the statistic being plotted is the sample mean.
The reason for taking a sample is because we are not always sure of the process
distribution. By using the sample mean we can "invoke" the central limit theorem to
assume normality.


Limits for Variables Control Charts

Variability Measure    Chart    Standards (μ, σ)    Centerline and Control Limits
Range                  X-bar    Known               CL = μ;  limits = μ ± A·σ
Range                  X-bar    Not known           CL = X-double bar;  limits = X-double bar ± A2·R-bar
Std. deviation         X-bar    Known               CL = μ;  limits = μ ± A·σ
Std. deviation         X-bar    Not known           CL = X-double bar;  limits = X-double bar ± A3·s-bar
Range                  R        Known               CL = d2·σ;  LCL = D1·σ;  UCL = D2·σ
Range                  R        Not known           CL = R-bar;  LCL = D3·R-bar;  UCL = D4·R-bar
Std. deviation         s        Known               CL = c4·σ;  LCL = B5·σ;  UCL = B6·σ
Std. deviation         s        Not known           CL = s-bar;  LCL = B3·s-bar;  UCL = B4·s-bar

Dr. N Venkatesh, CEC


SQC 10ME668

Illustration

From Table above:


ΣX-bar = 50.09 (sum of the 10 subgroup averages)
ΣR = 1.15 (sum of the 10 subgroup ranges)
m = 10 (number of subgroups)
Thus;
X-Double bar = 50.09/10 = 5.009 cm
R-bar = 1.15/10 = 0.115 cm
UCLx-bar = X-double bar + A2·R-bar = 5.009 + (0.577)(0.115) = 5.075 cm
LCLx-bar = X-double bar - A2·R-bar = 5.009 - (0.577)(0.115) = 4.943 cm
UCLR = D4R-bar = (2.114)(0.115) = 0.243 cm
LCLR = D3R-bar = (0)(0.115) = 0 cm
For A2, D3, D4: see Table B, Appendix
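The limit calculations in this illustration can be reproduced in a few lines; A2 = 0.577, D3 = 0 and D4 = 2.114 are the standard control chart factors for subgroups of size n = 5.

```python
# X-bar & R control limits for the illustration above
m = 10                            # number of subgroups
sum_xbar, sum_r = 50.09, 1.15     # sums of subgroup averages and ranges (cm)
A2, D3, D4 = 0.577, 0.0, 2.114    # factors for subgroup size n = 5

xdbar = sum_xbar / m              # grand average: 5.009 cm
rbar = sum_r / m                  # average range: 0.115 cm

ucl_x = xdbar + A2 * rbar         # 5.075 cm
lcl_x = xdbar - A2 * rbar         # 4.943 cm
ucl_r = D4 * rbar                 # 0.243 cm
lcl_r = D3 * rbar                 # 0 cm
```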
[X-bar chart: subgroup averages (X-bar) vs. subgroup number, with UCL, CL and LCL marked]


The process is out of control


[R chart: subgroup ranges vs. subgroup number, with UCL, CL and LCL marked]

The process is under control


UNIT 5: PROCESS CAPABILITY

INTRODUCTION:
Process capability is the ability of the process to meet the design specifications for a service
or product. Nominal value is a target for design specifications. Tolerance is an allowance
above or below the nominal value.
Traditional capability rates are calculated when a product or service feature is measured
through a quantitative continuous variable, assuming the data follows a normal probability
distribution. A normal distribution features the measurement of a mean and a standard
deviation, making it possible to estimate the probability of an incident within any data set.
The most interesting values relate to the probability of data occurring outside of customer
specifications. These are data appearing below the lower specification limit (LSL) or above
the upper specification limit (USL). An ordinary mistake lies in using capability studies to
deal with categorical data, turning the data into rates or percentiles. In such cases,
determining specification limits becomes complex. For example, a billing process may
generate correct or incorrect invoices. These represent categorical variables, which by
definition carry an ideal USL of 100 percent error free processing, rendering the traditional
statistical measures (Cp, Cpk, Pp and Ppk) inapplicable to categorical variables.
When working with continuous variables, the traditional statistical measures are quite useful,
especially in manufacturing. The difference between capability rates (Cp and Cpk) and
performance rates (Pp and Ppk) is the method of estimating the statistical population standard
deviation. The difference between the centralized rates (Cp and Pp) and unilateral rates (Cpk
and Ppk) is the impact of the mean decentralization over process performance estimates.
The following example details the impact that the different forms of calculating capability
may have on the study results of a process. A company manufactures a product whose
acceptable dimensions, previously specified by the customer, range from 155 mm to 157 mm.
The first 10 parts made each day by a machine that manufactures the product and works
during one period only were collected as samples over a period of 28 days. Evaluation data
taken from these parts was used to make an Xbar-S control chart.


This chart presents only common cause variation and as such, leads to the conclusion that the
process is predictable. Calculation of process capability presents the results in Figure 2.
Figure 2: Process Capability of Dimension

Four cases of process capability


Calculating Cp
The Cp rate of capability is calculated from the formula:

Cp = (USL - LSL) / 6σ,  with σ estimated as s-bar/c4

where s-bar represents the mean of the standard deviations within each rational subgroup and
c4 represents a statistical coefficient of correction.
In this case, the formula considers the quantity of variation given by the standard deviation and
the acceptable gap allowed by the specification limits, regardless of the mean. The results reflect
the population's standard deviation, estimated from the mean of the standard deviations within
the subgroups as 0.413258, which generates a Cp of 0.81.
Rational Subgroups
A rational subgroup is a concept developed by Shewhart while he was defining control
charts. It consists of a sample in which the differences in the data within a subgroup are
minimized and the differences between groups are maximized. This allows a clearer
identification of how the process parameters change along a time continuum. In the example
above, the process used to collect the samples allows consideration of each daily collection as
a particular rational subgroup.

The Cpk capability rate is calculated by the formula:

Cpk = min[ (USL - mean) / 3σ, (mean - LSL) / 3σ ]

considering the same criterion of standard deviation (σ = s-bar/c4).
In this case, besides the variation in quantity, the process mean also affects the indicators.
Because the process is not perfectly centralized, the mean is closer to one of the limits and, as
a consequence, presents a higher possibility of not reaching the process capability targets. In
the example above, specification limits are defined as 155 mm and 157 mm. The mean
(155.74) is closer to one of them than to the other, leading to a Cpk factor (0.60) that is lower
than the Cp value (0.81). This implies that the LSL is more difficult to achieve than the USL.
Non-conformities exist at both ends of the histogram.
Estimating Pp
Similar to the Cp calculation, the performance Pp rate is found as follows:

Pp = (USL - LSL) / 6s

where s is the standard deviation of all the data.


The main difference between the Pp and Cp studies is that within a rational subgroup where
samples are produced practically at the same time, the standard deviation is lower. In the Pp
study, variation between subgroups enhances the s value along the time continuum, a process
which normally creates more conservative Pp estimates. The inclusion of between-group
variation in the calculation of Pp makes the result more conservative than the estimate of Cp.
With regard to centralization, Pp and Cp measures have the same limitation, where neither
considers process centralization (mean) problems. However, it is worth mentioning that Cp
and Pp estimates are only possible when upper and lower specification limits exist. Many
processes, especially in transactional or service areas, have only one specification limit,
which makes using Cp and Pp impossible (unless the process has a physical boundary [not a
specification] on the other side). In the example above, the populations standard deviation,
taken from the standard deviation of all data from all samples, is 0.436714 (overall), giving a
Pp of 0.76, which is lower than the obtained value for Cp.
Estimating Ppk
The difference between Cp and Pp lies in the method for calculating s, and whether or not the
existence of rational subgroups is considered. Calculating Ppk presents similarities with the
calculation of Cpk. The capability rate for Ppk is calculated using the formula:

Ppk = min[ (USL - mean) / 3s, (mean - LSL) / 3s ]

Once more it becomes clear that this estimate is able to diagnose decentralization problems,
aside from the quantity of process variation. Following the tendencies detected in Cpk, notice
that the Pp value (0.76) is higher than the Ppk value (0.56), due to the fact that the rate of
discordance with the LSL is higher. Because the calculation of the standard deviation is not
related to rational subgroups, the standard deviation is higher, resulting in a Ppk (0.56) lower
than the Cpk (0.60), which reveals a more negative performance projection.
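All four indices quoted in this example (Cp = 0.81, Cpk = 0.60, Pp = 0.76, Ppk = 0.56) follow from the same two formulas, differing only in which standard deviation estimate is plugged in; a minimal sketch using the figures quoted above:

```python
def cp(usl, lsl, sigma):
    """Centralized capability: specification width over the 6-sigma spread."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Unilateral capability: distance from the mean to the nearest limit."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

LSL, USL, MEAN = 155.0, 157.0, 155.74
SIGMA_WITHIN = 0.413258    # estimated from s-bar/c4 (rational subgroups)
SIGMA_OVERALL = 0.436714   # estimated from all data pooled

cp_val = cp(USL, LSL, SIGMA_WITHIN)           # ~0.81
cpk_val = cpk(USL, LSL, MEAN, SIGMA_WITHIN)   # ~0.60
pp_val = cp(USL, LSL, SIGMA_OVERALL)          # ~0.76
ppk_val = cpk(USL, LSL, MEAN, SIGMA_OVERALL)  # ~0.56
```

Swapping the within-subgroup estimate for the overall estimate is the only difference between the C and P families of indices.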


UNIT 6: CONTROL CHARTS FOR ATTRIBUTES


Our objectives for this section are to learn how to use control charts to monitor discrete data.
We want to learn the assumptions behind the charts, their application, and their interpretation.
Here we are counting either the number of nonconforming items, or the number of
nonconformities. If we are counting the number of nonconforming items (i.e., the number of
"bad" parts) in a sample, the charts used are referred to as binomial count charts. If instead
we are counting the number of nonconformities (for example, the number of flaws on the
surface), the charts are referred to as area of opportunity charts.

Type of Attribute Charts


p charts
This chart shows the fraction of nonconforming or defective product produced by a
manufacturing process.
It is also called the control chart for fraction nonconforming.
np charts
This chart shows the number of nonconforming items. Almost the same as the p chart.
c charts
This shows the number of defects or nonconformities produced by a manufacturing
process.
u charts
This chart shows the nonconformities per unit produced by a manufacturing process.
Defect: a single nonconforming quality characteristic.
Defective: an item having one or more defects.

Control Chart for Fraction Defective (p-Chart)


The fraction defective is defined as the ratio of the number of defectives in a population to
the total number of items in the population.
Suppose the production process is operating in a stable manner, such that the probability that
any item produced will not conform to specifications is p and that successive items produced
are independent. Then each item produced is a realization of a Bernoulli random variable with
parameter p. If a random sample of n items of product is selected, and if D is the number of
items of product that are defective, then D has a binomial distribution with parameters n and
p; that is, P{D = x} = nCx p^x (1 - p)^(n-x), x = 0, 1, ..., n. The mean and variance of the
random variable D are np and np(1 - p), respectively.
The sample fraction defective is defined as the ratio of the number of defective items in the
sample to the sample size n; that is, p' = D/n. The distribution of the random variable p' can be
obtained from the binomial distribution. The mean and variance of p' are p and p(1 - p)/n, respectively.
When the mean fraction of defectives p of the population is not known.
Let us select m samples, each of size n. If there are Di defective items in ith sample, then the
fraction defectives in the ith sample is p'i= Di/n, i=1,2,....,m.The average of these individual
sample fraction defectives is


p-bar = (sum of xi, i = 1..k) / (sum of ni, i = 1..k)
      = (the total number of defective items in all the samples taken) / (the total number of items sampled)
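With p-bar in hand, the 3-sigma limits of the p-chart follow as p-bar ± 3·sqrt(p-bar(1 - p-bar)/n). A minimal sketch for a constant sample size; the defective counts below are illustrative data, not from the text.

```python
import math

def p_chart_limits(defectives, n):
    """Centerline and 3-sigma limits for a p-chart with constant sample size n.

    defectives: list of defective counts Di, one per sample of size n
    """
    m = len(defectives)
    pbar = sum(defectives) / (m * n)               # average fraction defective
    width = 3 * math.sqrt(pbar * (1 - pbar) / n)   # 3-sigma half-width
    return pbar, max(0.0, pbar - width), pbar + width

# Illustrative data: 10 samples of 100 items each
cl, lcl, ucl = p_chart_limits([4, 2, 5, 3, 6, 2, 4, 3, 5, 6], 100)
```

A negative lower limit is clipped to zero, since a fraction defective cannot be negative.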

Control Chart for Number of Defectives (np-Chart)


It is also possible to base a control chart on the number of defectives rather than the fraction
defectives.
Illustration
The following refers to the number of defective knitwears in samples of size 180.

The process is out of control since two points are above the UCL. The point below the LCL is
not acted upon, as it indicates a better result than anticipated.
Control Chart for Defects (c-Chart)
Consider the occurrence of defects in an inspection of product(s). Suppose that defects occur
in this inspection according to the Poisson distribution; that is

P(x) = e^(-c) c^x / x!,  x = 0, 1, 2, ...

where x is the number of defects and c is both the mean and the variance of the Poisson
distribution.

When the mean number of defects c of the population is not known, let us select n samples.
If there are ci defects in the ith sample, then the average number of defects per sample is

c-bar = (sum of ci, i = 1..n) / n,  with UCL = c-bar + 3·sqrt(c-bar) and LCL = c-bar - 3·sqrt(c-bar).
Note: If this calculation yields a negative value of LCL then set LCL=0.
Illustration
The following dataset refers to the number of holes (defects) in knitwears.
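A minimal sketch of the c-chart calculation; the hole counts below are hypothetical stand-ins for the knitwear dataset.

```python
import math

def c_chart_limits(defect_counts):
    """c chart: defects per inspection unit under a Poisson model,
    where the mean and variance are both estimated by c-bar."""
    c_bar = sum(defect_counts) / len(defect_counts)
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))  # negative LCL is set to 0
    return c_bar, ucl, lcl

# Hypothetical counts of holes (defects) found in successive knitwears
c = [3, 5, 2, 4, 6, 1, 3, 4, 2, 5]
c_bar, ucl, lcl = c_chart_limits(c)
```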


Control Chart for Defects (u Chart)


The u chart is mathematically equivalent to the c chart.

u = c/n,   u-bar = (sum of c over all subgroups) / (sum of n over all subgroups)

UCL = u-bar + 3 sqrt(u-bar / n),   LCL = u-bar - 3 sqrt(u-bar / n)

In the case of the u chart, the UCL and LCL are calculated for each subgroup (here, each day) and plotted on the graph.

Illustration: One process has the following number of defects

Date    Subgroup    n    c    u    UCL    u-bar    LCL


30-Jan 1 110 120 1.091 1.51 1.20 0.89
31-Jan 2 82 94 1.146 1.56 1.20 0.84
01-Feb 3 96 89 0.927 1.54 1.20 0.87
02-Feb 4 115 162 1.409 1.51 1.20 0.89
03-Feb 5 108 150 1.389 1.52 1.20 0.88
04-Feb 6 56 82 1.464 1.64 1.20 0.76
28-Feb 26 101 105 1.040 1.53 1.20 0.87
01-Mar 27 122 143 1.172 1.50 1.20 0.90
02-Mar 28 105 132 1.257 1.52 1.20 0.88
03-Mar 29 98 100 1.020 1.53 1.20 0.87
04-Mar 30 48 60 1.250 1.67 1.20 0.73
Only partial data is given


For January 30:


u-bar = (sum of c) / (sum of n) = 3389 / 2823 = 1.20

u(Jan 30) = c/n = 120/110 = 1.09

UCL(Jan 30) = 1.20 + 3 sqrt(1.20/110) = 1.51

LCL(Jan 30) = 1.20 - 3 sqrt(1.20/110) = 0.89
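The January 30 numbers can be reproduced directly from the totals given in the text (3389 defects over 2823 units):

```python
import math

# Totals over all 30 subgroups, as given in the text
total_defects, total_units = 3389, 2823
u_bar = total_defects / total_units             # approx. 1.20

def u_limits(n):
    """Per-subgroup 3-sigma limits for the u chart."""
    half_width = 3 * math.sqrt(u_bar / n)
    return u_bar + half_width, max(0.0, u_bar - half_width)

# January 30 subgroup: n = 110 units, c = 120 defects
u_jan30 = 120 / 110                             # approx. 1.09
ucl, lcl = u_limits(110)                        # approx. 1.51 and 0.89
```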


Control Charts for Variables vs. Charts for Attributes

X-bar and R charts                         p, np, c and u charts
Used for variables                         Used for attributes
Accurate measurement is required           Accurate measurement is not required
Sensitive to variation                     Not sensitive to variation
Sample size is small                       Sample size is large
Based on the normal distribution           Based on the binomial or Poisson distribution

Advantages of attribute control charts


Allowing for quick summaries, that is, the engineer may simply classify products as
acceptable or unacceptable, based on various quality criteria.
Thus, attribute charts sometimes bypass the need for expensive, precise devices and
time-consuming measurement procedures.
More easily understood by managers unfamiliar with quality control procedures.

Advantages of variable control charts


More sensitive than attribute control charts.
Therefore, variable control charts may alert us to quality problems before any actual
"unacceptables" (as detected by the attribute chart) will occur.
Montgomery (1985) calls the variable control charts leading indicators of trouble that
will sound an alarm before the number of rejects (scrap) increases in the production
process.

What are defects and defectives?


Customers expect products and services to meet their specifications. When they don't, a
defect or defective is present.
Defects
A defect is any item or service that exhibits a departure from specifications. A defect
does not necessarily mean that the product or service cannot be used. A defect
indicates only that the product result is not entirely as intended.
Suppose service in a restaurant is being evaluated. If a waiter greets his table after 5
minutes, the customer can still order and enjoy a meal even though the promptness of
the greeting did not meet expectations. Therefore, this could be considered a defect
(late greeting) in the service.
Defectives
A defective is an item or service that is considered completely unacceptable for use.
Each item or service experience is either considered defective or not; there are only
two choices.


Before final shipment, a quality inspector evaluates auto supply parts and rates each
item as pass or fail to ensure that the company does not ship any parts that will be
unusable.

A comparison of defects and defectives


A defective item contains one or more defects. However, not all items with defects are
defective. It depends on the severity of the defect. New cars may have several defects, some
of which may not even be noticed by the customer. However, if the car contains a defect that
is measured and reported, the car (or part of the car) may be considered defective.
Consider the loan application process. In this case, the processing department is the customer.
You want to know how many defects they see. Your form has 36 entries. You sample 50
forms to estimate the defect rate. One application has 7 incorrect entries; there are 7 defects
present on this form. Another application has 4 incorrect entries; there are 4 defects present
on this form.
Overall, 18 forms have at least one defect, so 18 forms out of 50 are defective. Overall, there
were 62 total defects among 1,800 opportunities (36 opportunities per form x 50 forms).

Analyses for defects and defectives data


The type of statistical analysis that you use depends on whether you are evaluating defects or
defectives:
To evaluate defectives, you use analyses that are based on a binomial probability
model, such as a 1 Proportion test, a 2 Proportions test, a P chart, an NP chart, or a
binomial capability analysis. These analyses evaluate the proportion of defectives in
your process.
To evaluate defects, you use analyses that are based on a Poisson probability model,
such as a 1-Sample Poisson rate test, a 2-Sample Poisson rate test, a C chart, a U
chart, or a Poisson capability analysis. These analyses evaluate the rate of defects in
your process.

Note
The term "nonconformity" is sometimes used to signify a defect. The term
"nonconforming" is sometimes used to signify a defective.


Measure                                        Chart    Limits

proportion of nonconforming                      p      UCL = p-bar + 3 sqrt(p-bar(1 - p-bar)/n)
items per sample                                        LCL = max[0, p-bar - 3 sqrt(p-bar(1 - p-bar)/n)]

number of nonconforming                          np     UCL = n p-bar + 3 sqrt(n p-bar(1 - p-bar))
items per sample                                        LCL = max[0, n p-bar - 3 sqrt(n p-bar(1 - p-bar))]

number of nonconformities per                    c      UCL = c-bar + 3 sqrt(c-bar)
area of opportunity                                     LCL = max[0, c-bar - 3 sqrt(c-bar)]

number of nonconformities per                    u      UCL = u-bar + 3 sqrt(u-bar/a)
unit area of opportunity                                LCL = max[0, u-bar - 3 sqrt(u-bar/a)]

Control Chart Selection

Quality characteristic:
Variable data:
  n = 1: use x and MR charts
  n > 1 and n < 10: use x-bar and R charts
  n >= 10 (or computerized calculation): use x-bar and s charts
Attribute data, counting defectives:
  constant sample size: use p or np chart
  variable sample size: use p chart with variable sample size
Attribute data, counting defects:
  constant sampling unit: use c chart
  variable sampling unit: use u chart
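The selection logic above can be sketched as a small function. The argument names and return strings below are illustrative choices, not standard terminology from any library.

```python
def select_chart(data_type, n=1, constant_size=True):
    """Suggest a control chart following the selection logic above.
    data_type: 'variable', 'defective' (counting defectives), or
    'defect' (counting defects); n is the subgroup size for variables;
    constant_size says whether the sample/sampling unit is constant."""
    if data_type == "variable":
        if n == 1:
            return "x and MR chart"
        return "x-bar and s chart" if n >= 10 else "x-bar and R chart"
    if data_type == "defective":
        return "p or np chart" if constant_size else "p chart with variable sample size"
    if data_type == "defect":
        return "c chart" if constant_size else "u chart"
    raise ValueError("data_type must be 'variable', 'defective' or 'defect'")
```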


UNIT 7: LOT BY LOT ACCEPTANCE SAMPLING FOR ATTRIBUTES

Acceptance sampling is an inspection procedure used to determine whether to accept or


reject a specific quantity of material. As more firms initiate total quality management (TQM)
programs and work closely with suppliers to ensure high levels of quality, the need for
acceptance sampling will decrease. The TQM concept is that no defects should be passed
from a producer to a customer, whether the customer is an external or internal customer.
However, in reality, many firms must still rely on checking their materials inputs.
Lot Acceptance Sampling
An SQC technique, where a random sample is taken from a lot and, based on the results
of appraising the sample, the lot is either rejected or accepted
A procedure for sentencing incoming batches or lots of items without doing 100%
inspection
The most widely used sampling plans are given by Military Standard (MIL-STD-
105E)

The basic procedure is straightforward.


1. A random sample is taken from a large quantity of items and tested or measured relative to
the quality characteristic of interest.
2. If the sample passes the test, the entire quantity of items is accepted.
3. If the sample fails the test, either (a) the entire quantity of items is subjected to 100 percent
inspection and all defective items repaired or replaced or (b) the entire quantity is returned to
the supplier.

Acceptance Sampling Plan Decisions


Acceptance sampling involves both the producer (or supplier) of materials and the consumer
(or buyer). Consumers need acceptance sampling to limit the risk of rejecting good-quality
materials or accepting bad-quality materials. Consequently, the consumer, sometimes in
conjunction with the producer through contractual agreements, specifies the parameters of the
plan. Any company can be both a producer of goods purchased by another company and a
consumer of goods or raw materials supplied by another company.

Quality and Risk Decisions


Two levels of quality are considered in the design of an acceptance sampling plan. The first is
the acceptable quality level (AQL), or the quality level desired by the consumer. The
producer of the item strives to achieve the AQL, which typically is written into a contract or
purchase order. For example, a contract might call for a quality level not to exceed one
defective unit in 10,000, or an AQL of 0.0001. The producer's risk (α) is the risk that the
sampling plan will fail to verify an acceptable lot's quality and, thus, reject it, a type I error.
Most often the producer's risk is set at 0.05, or 5 percent. Although producers are interested
in low risk, they often have no control over the consumer's acceptance sampling plan.
Fortunately, the consumer also is interested in a low producer's risk because sending good
materials back to the producer (1) disrupts the consumer's production process and increases
the likelihood of shortages in materials, (2) adds unnecessarily to the lead time for finished
products or services, and (3) creates poor relations with the producer.

The second level of quality is the lot tolerance proportion defective (LTPD), or the worst
level of quality that the consumer can tolerate. The LTPD is a definition of bad quality that
the consumer would like to reject. Recognizing the high cost of defects, operations managers
have become more cautious about accepting materials of poor quality from suppliers. Thus,


sampling plans have lower LTPD values than in the past. The probability of accepting a lot
with LTPD quality is the consumer's risk (β), or the type II error of the plan. A common
value for the consumer's risk is 0.10, or 10 percent.

Need of acceptance sampling

Sampling Plans


All sampling plans are devised to provide specified producer's and consumer's risks.
However, it is in the consumer's best interest to keep the average number of items inspected
(ANI) to a minimum because that keeps the cost of inspection low. Sampling plans differ
with respect to ANI. Three often-used attribute sampling plans are the single-sampling plan,
the double-sampling plan, and the sequential-sampling plan. Analogous plans also have been
devised for variable measures of quality.
Based on the number of samples required for a decision. These include:
Single-sampling plans
Double-sampling plans
Multiple-sampling plans
Sequential-sampling plans
Single-, double-, multiple-, and sequential sampling plans can be designed to produce
equivalent results. Factors to consider include:
Administrative efficiency
Type of information produced by the plan
Average amount of inspection required by plan
Impact of the procedure on manufacturing flow

Single-Sampling Plan The single-sampling plan is a decision rule to accept or reject a lot
based on the results of one random sample from the lot. The procedure is to take a random
sample of size (n) and inspect each item. If the number of defects does not exceed a specified
acceptance number (c), the consumer accepts the entire lot. Any defects found in the sample
are either repaired or returned to the producer. If the number of defects in the sample is
greater than c, the consumer subjects the entire lot to 100 percent inspection or rejects the
entire lot and returns it to the producer. The single-sampling plan is easy to use but usually
results in a larger ANI than the other plans. After briefly describing the other sampling plans,
we focus our discussion on this plan.


Double-Sampling Plan In a double-sampling plan, management specifies two sample sizes
(n1 and n2) and two acceptance numbers (c1 and c2). If the quality of the lot is very good or
very bad, the consumer can make a decision to accept or reject the lot on the basis of the first
sample, which is smaller than in the single-sampling plan. To use the plan, the consumer
takes a random sample of size n1. If the number of defects is less than or equal to c1, the
consumer accepts the lot. If the number of defects is greater than c2, the consumer rejects the
lot. If the number of defects is between c1 and c2, the consumer takes a second sample of
size n2. If the combined number of defects in the two samples is less than or equal to c2, the
consumer accepts the lot. Otherwise, it is rejected. A double-sampling plan can significantly
reduce the costs of inspection relative to a single-sampling plan for lots with a very low or
very high proportion defective because a decision can be made after taking the first sample.
However, if the decision requires two samples, the sampling costs can be greater than those
for the single-sampling plan.
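The acceptance probability of such a plan can be sketched with the exact binomial distribution. The plan parameters and lot quality below are hypothetical, chosen only for illustration.

```python
from math import comb

def binom_pmf(d, n, p):
    """P{exactly d defectives in a sample of n from a lot with fraction p}."""
    return comb(n, d) * p**d * (1 - p) ** (n - d)

def accept_prob_double(n1, c1, n2, c2, p):
    """Probability of accepting the lot under a double-sampling plan:
    accept on sample 1 if d1 <= c1; if c1 < d1 <= c2, draw sample 2 and
    accept when d1 + d2 <= c2; reject outright if d1 > c2."""
    pa = sum(binom_pmf(d1, n1, p) for d1 in range(c1 + 1))
    for d1 in range(c1 + 1, c2 + 1):
        pa += binom_pmf(d1, n1, p) * sum(
            binom_pmf(d2, n2, p) for d2 in range(c2 - d1 + 1))
    return pa

# Hypothetical plan: n1 = 50, c1 = 1, n2 = 100, c2 = 3, lot 2% defective
pa = accept_prob_double(50, 1, 100, 3, 0.02)
```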


Sequential-Sampling Plan A further refinement of the double-sampling plan is the


sequential-sampling plan, in which the consumer randomly selects items from the lot and
inspects them one by one. Each time an item is inspected, a decision is made to (1) reject the
lot, (2) accept the lot, or (3) continue sampling, based on the cumulative results so far. The
analyst plots the total number of defectives against the cumulative sample size, and if the
number of defectives is less than a certain acceptance number (c1), the consumer accepts the
lot. If the number is greater than another acceptance number (c2), the consumer rejects the
lot. If the number is somewhere between the two, another item is inspected. Figure 7.1
illustrates a decision to reject a lot after examining the 40th unit. Such charts can be easily
designed with the help of statistical tables that specify the accept or reject cut-off values as a
function of the cumulative sample size.
Fig. 7.1 Sequential sampling plan

The ANI is generally lower for the sequential-sampling plan than for any other form of
acceptance sampling, resulting in lower inspection costs. For very low or very high values of
the proportion defective, sequential sampling provides a lower ANI than any comparable
sampling plan. However, if the proportion of defective units falls between the AQL and the
LTPD, a sequential-sampling plan could have a larger ANI than a comparable single- or
double-sampling plan (although that is unlikely). In general, the sequential-sampling plan
may reduce the ANI to 50 percent of that required by a comparable single-sampling plan and,
consequently, save substantial inspection costs.

Operating Characteristic Curves


Analysts create a graphic display of the performance of a sampling plan by plotting the
probability of accepting the lot for a range of proportions of defective units. This graph,
called an operating characteristic (OC) curve, describes how well a sampling plan
discriminates between good and bad lots. Undoubtedly, every manager wants a plan that
accepts lots with a quality level better than the AQL 100 percent of the time and accepts lots
with a quality level worse than the AQL 0 percent of the time. This ideal OC curve for a
single-sampling plan is shown in Figure 7.2. However, such performance can be achieved
only with 100 percent inspection.


Fig. 7.2 Operating characteristic curve

A typical OC curve for a single-sampling plan, plotted in red, shows the probability α of
rejecting a good lot (producer's risk) and the probability β of accepting a bad lot (consumer's
risk). Consequently, managers are left with choosing a sample size n and an acceptance
number c to achieve the level of performance specified by the AQL, α, LTPD, and β.

Specific points on OC curve


The poorest quality level for the supplier's process that a consumer would
consider to be acceptable as a process average is called the acceptable quality
level (AQL)
AQL is a property of the supplier's manufacturing process, not a
property of the sampling plan
The protection obtained for individual lots of poor quality is established by the
lot tolerance percent defective (LTPD)
Also called rejectable quality level (RQL) and the limiting quality level
(LQL)
LTPD is a level of lot quality specified by the consumer, not a
characteristic of the sampling plan
Sampling plans can be designed to have specified performance at the
AQL and the LTPD points

Drawing the OC Curve


The sampling distribution for the single-sampling plan is the binomial distribution because
each item inspected is either defective (a failure) or not (a success). The probability of
accepting the lot equals the probability of taking a sample of size n from a lot with a
proportion defective of p and finding c or fewer defective items. However, if n is greater than
20 and p is less than 0.05, the Poisson distribution can be used as an approximation to the
binomial to take advantage of tables prepared for the purpose of drawing OC curves (see
Table G). To draw the OC curve, look up the probability of accepting the lot for a range of
values of p. For each value of p,
1. Multiply p by the sample size n.


2. Find the value of np in the left column of the table.


3. Move to the right until you find the column for c.
4. Record the value for the probability of acceptance, Pa.
When p = AQL, the producer's risk, α, is 1 minus the probability of acceptance. When p =
LTPD, the consumer's risk, β, equals the probability of acceptance.

Constructing an OC Curve

The Noise King Muffler Shop, a high-volume installer of replacement exhaust muffler
systems, just received a shipment of 1,000 mufflers. The sampling plan for inspecting these
mufflers calls for a sample size n = 60 and an acceptance number c = 1. The contract with the
muffler manufacturer calls for an AQL of 1 defective muffler per 100 and an LTPD of 6
defective mufflers per 100. Calculate the OC curve for this plan, and determine the producer's
risk and the consumer's risk for the plan.
SOLUTION
c = 1
n = 60
Let p = 0.01. Then multiply n by p to get 60(0.01) = 0.60. Locate 0.60 in Table G. Move to
the right until you reach the column for c = 1. Read the probability of acceptance: 0.878.
Repeat this process for a range of p values. The following table contains the remaining values
for the OC curve.
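The table lookups can be reproduced with the exact binomial distribution instead of Table G; small differences from the Poisson-based table values are expected.

```python
from math import comb

def accept_prob(n, c, p):
    """Exact binomial probability of finding c or fewer defectives
    in a random sample of n items from a lot with fraction defective p."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

# OC curve points for the Noise King plan (n = 60, c = 1)
oc = {p: round(accept_prob(60, 1, p), 3)
      for p in (0.01, 0.02, 0.03, 0.04, 0.05, 0.06)}

alpha = 1 - accept_prob(60, 1, 0.01)  # exact approx. 0.121 (Poisson table: 0.122)
beta = accept_prob(60, 1, 0.06)       # exact approx. 0.118 (Poisson table: 0.126)
```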

DECISION POINT
Note that the plan provides a producer's risk of 12.2 percent and a consumer's risk of 12.6
percent. Both values are higher than the values usually acceptable for plans of this type (5
and 10 percent, respectively). Figure 7.3 shows the OC curve and the producer's and
consumer's risks. Management can adjust the risks by changing the sample size.


Fig. 7.3: The OC Curve for Single-Sampling Plan with n=60 and c = 1

Explaining Changes in the OC Curve


The above example raises the question: How can management change the sampling plan to
reduce the probability of rejecting good lots and accepting bad lots? To answer this question,
let us see how n and c affect the shape of the OC curve. In the Noise King example, a better
single-sampling plan would have a lower producer's risk and a lower consumer's risk.
Sample Size Effect What would happen if we increased the sample size to 80 and left the
acceptance level, c, unchanged at 1? We can use Table G.1 (pp. G.9-G.11). If the proportion
defective of the lot is p = 0.01 (the AQL), then np = 0.8 and the probability of acceptance of
the lot is only 0.809. Thus, the producer's risk is 0.191. Similarly, if p = 0.06 (the LTPD), the
probability of acceptance is 0.048. Other values of the producer's and consumer's risks are
shown in the following table:


Fig. 7.4: Effects of Increasing Sample Size While Holding Acceptance Number Constant

Fig. 7.5: Effects of Increasing Acceptance Number While Holding Sample Size Constant

The results are plotted in Figure 7.5. They demonstrate the following principle: Increasing c
while holding n constant decreases the producer's risk and increases the consumer's risk.
The producer of the mufflers would welcome an increase in the acceptance number because it
makes getting the lot accepted by the consumer easier. If the lot has only 1 percent defectives
(the AQL) with a sample size of 60, we would expect only 0.01(60) = 0.6 defect in the
sample. An increase in the acceptance number from one to two lowers the probability of
finding more than two defects and, consequently, lowers the producer's risk. However,
raising the acceptance number for a given sample size increases the risk of accepting a bad
lot. Suppose that the lot has 6 percent defectives (the LTPD). We would expect to have
0.06(60) = 3.6 defectives in the sample. An increase in the acceptance number from one to
two increases the probability of getting a sample with two or fewer defects and, therefore,
increases the consumer's risk. Thus, to improve Noise King's single-sampling acceptance
plan, management should increase the sample size, which reduces the consumer's risk, and
increase the acceptance number, which reduces the producer's risk. An improved
combination can be found by trial and error using Table G.
The following table shows that a sample size of 111 and an acceptance number of 3 are best.
This combination actually yields a producer's risk of 0.026 and a consumer's risk of 0.10 (not
shown). The risks are not exact because c and n must be integers.
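The trial-and-error search can be automated. Because this sketch uses the exact binomial rather than Poisson tables, the plan it returns may differ slightly from the table-based result of n = 111, c = 3.

```python
from math import comb

def accept_prob(n, c, p):
    """Exact binomial probability of c or fewer defectives in a sample of n."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

def find_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=500):
    """Brute-force search for the smallest-n single-sampling plan whose
    producer's risk at AQL is <= alpha and consumer's risk at LTPD is <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if (1 - accept_prob(n, c, aql) <= alpha
                    and accept_prob(n, c, ltpd) <= beta):
                return n, c
    return None

plan = find_plan(aql=0.01, ltpd=0.06)
```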


Average Outgoing Quality


We have shown how to choose the sample size and acceptance number for a single-sampling
plan, given the AQL, α, LTPD, and β parameters. To check whether the performance of the
plan is what we want, we can calculate the plan's average outgoing quality (AOQ), which is
the expected proportion of defects that the plan will allow to pass. We assume that all defective
items in the lot will be replaced with good items if the lot is rejected and that any defective
items in the sample will be replaced if the lot is accepted. This approach is called rectified
inspection. The equation for AOQ is

AOQ = Pa p (N - n) / N

where Pa is the probability of accepting the lot, p is the true proportion defective, N is the lot
size, and n is the sample size.

The analyst can calculate AOQ to estimate the performance of the plan over a range of
possible proportion defectives in order to judge whether the plan will provide an acceptable
degree of protection. The maximum value of the average outgoing quality over all possible
values of the proportion defective is called the average outgoing quality limit (AOQL). If
the AOQL seems too high, the parameters of the plan must be modified until an acceptable
AOQL is achieved.


Average Total Inspection (ATI)
Under rectified inspection, the average total inspection per lot is ATI = n + (1 - Pa)(N - n):
the sample of n items is always inspected, and with probability (1 - Pa) the remaining N - n
items are screened as well.

Calculating the AOQL


Suppose that Noise King is using rectified inspection for its single-sampling plan. Calculate
the average outgoing quality limit for a plan with n=110, c=3 and N = 1000. Use Table G to
estimate the probabilities of acceptance for values of the proportion defective from 0.01 to
0.08 in steps of 0.01.


Plot the OC curve as shown below

Step 3: Identify the largest AOQ value, which is the estimate of the AOQL. In this example,
the AOQL is 0.0155 at p = 0.03.
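The AOQL can be checked numerically, assuming the standard rectified-inspection formula AOQ = Pa p (N - n)/N and the exact binomial for Pa:

```python
from math import comb

def accept_prob(n, c, p):
    """Exact binomial probability of c or fewer defectives in a sample of n."""
    return sum(comb(n, d) * p**d * (1 - p) ** (n - d) for d in range(c + 1))

def aoq(p, n, c, N):
    """Average outgoing quality under rectified inspection."""
    return accept_prob(n, c, p) * p * (N - n) / N

# Noise King rectified-inspection plan: n = 110, c = 3, N = 1000
grid = [i / 100 for i in range(1, 9)]           # p = 0.01 ... 0.08
aoql = max(aoq(p, 110, 3, 1000) for p in grid)  # approx. 0.0155, near p = 0.03
```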

Illustration
An inspection station has been installed between two production processes. The feeder
process, when operating correctly, has an acceptable quality level of 3 percent. The
consuming process, which is expensive, has a specified lot tolerance proportion defective of 8
percent. The feeding process produces in batch sizes; if a batch is rejected by the inspector,
the entire batch must be checked and the defective items reworked. Consequently,
management wants no more than a 5 percent producers risk and, because of the expensive
process that follows, no more than a 10 percent chance of accepting a lot with 8 percent
defectives or worse.
a. Determine the appropriate sample size, n, and the acceptable number of defective
items in the sample, c.
b. Calculate values and draw the OC curve for this inspection station.
c. What is the probability that a lot with 5 percent defectives will be rejected?


UNIT 8: CUSUM AND EWMA CHART


Overview
The Shewhart control charts are relatively insensitive to small shifts in the process, say on the order of about 1.5σ
or less. In such cases, a very effective alternative is an advanced control chart: the cumulative sum (CUSUM) control chart.
Illustration: Yarn Strength Data
Consider the yarn strength (cN·tex⁻¹) data shown here. The first twenty of these observations were taken from a
normal distribution with mean μ = 10 cN·tex⁻¹ and standard deviation σ = 1 cN·tex⁻¹. The last ten observations were
taken from a normal distribution with mean μ = 11 cN·tex⁻¹ and standard deviation σ = 1 cN·tex⁻¹. The observations
are plotted on a basic control chart as shown in the next slide.

Illustration: Basic Control Chart


CUSUM: What is it?

The cumulative sum (CUSUM) of observations is defined as

Ci = (x1 - μ0) + (x2 - μ0) + ... + (xi - μ0)

where μ0 is the target value of the process mean.
When the process remains in control with mean at μ0, the cumulative sum is a random walk with mean zero.
When the mean shifts upward to a value μ1 such that μ1 > μ0, an upward or positive drift develops in the
cumulative sum.
When the mean shifts downward to a value μ1 such that μ1 < μ0, a downward or negative drift develops
in the CUSUM.
Illustration: CUSUM


Tabular CUSUM
The tabular CUSUM works by accumulating deviations from μ0 (the target value) that are above the target with one
statistic C+ and accumulating deviations from μ0 that are below the target with another statistic C-. These statistics
are called the upper CUSUM and lower CUSUM, respectively:

Ci+ = max[0, xi - (μ0 + K) + C(i-1)+]
Ci- = max[0, (μ0 - K) - xi + C(i-1)-]

with starting values C0+ = C0- = 0.

If the shift in the process mean value is expressed as δ = |μ1 - μ0| / σ, where μ1 denotes the new process mean value
and μ0 and σ indicate the old process mean value and the old process standard deviation, respectively, then
K = (δ/2)σ is one-half of the magnitude of the shift.

If either Ci+ or Ci- exceeds the decision interval H, the process is considered to be out of control. A reasonable value
for H is five times the process standard deviation, H = 5σ.
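A minimal sketch of the tabular CUSUM recursions. The observations below are hypothetical, echoing the yarn-strength setup (μ0 = 10, σ = 1, so K = 0.5 for a one-sigma shift and H = 5):

```python
def tabular_cusum(xs, mu0, k, h):
    """Run the upper/lower CUSUM statistics over the observations xs and
    return the 1-based indices where either statistic exceeds H."""
    c_plus = c_minus = 0.0
    signals = []
    for i, x in enumerate(xs, start=1):
        c_plus = max(0.0, x - (mu0 + k) + c_plus)    # accumulates above mu0 + K
        c_minus = max(0.0, (mu0 - k) - x + c_minus)  # accumulates below mu0 - K
        if c_plus > h or c_minus > h:
            signals.append(i)
    return signals

# Hypothetical data: in control at first, then shifted upward by about 1 sigma
data = [10.1, 9.8, 10.3, 9.9, 11.2, 11.4, 10.9, 11.3, 11.1, 11.5, 11.6, 11.2, 11.4]
signals = tabular_cusum(data, mu0=10.0, k=0.5, h=5.0)
```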
Illustration: Tabular CUSUM


Illustration: Tabular CUSUM

Illustration: CUSUM Status Chart


Concluding Remarks

Moving Average: What is it? [2]


Control Limits
If the moving average of span w at time i is Mi = (xi + xi-1 + ... + xi-w+1)/w, and μ0 denotes the target value of the
mean used as the center line of the control chart, then the three-sigma control limits for Mi are

UCL = μ0 + 3σ/√w,   LCL = μ0 - 3σ/√w

(for the initial periods i < w, w is replaced by i, so the limits are wider).

The control procedure consists of calculating the new moving average Mi as each observation xi becomes
available, plotting Mi on a control chart with the upper and lower limits given earlier, and concluding that the process
is out of control if Mi exceeds the control limits. In general, the magnitude of the shift of interest and w are inversely
related: smaller shifts are guarded against more effectively by longer-span moving averages, at the expense of
quick response to large shifts.
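The procedure can be sketched as follows, using the target (4.5 cN), standard deviation (0.5 cN), and span (5) from the illustration below; the observations themselves are hypothetical.

```python
import math

def ma_chart(xs, mu0, sigma, w):
    """Moving average M_i of span w with 3-sigma limits. For i < w the
    average of the first i observations is plotted and the limits use
    min(i, w), so the early limits are wider."""
    points = []
    for i in range(1, len(xs) + 1):
        span = min(i, w)
        m_i = sum(xs[i - span:i]) / span
        half = 3 * sigma / math.sqrt(span)
        points.append((m_i, mu0 + half, mu0 - half))
    return points

# Hypothetical yarn-strength observations (cN); target 4.5, sigma 0.5, span 5
xs = [4.6, 4.4, 4.7, 4.3, 4.5, 4.8, 4.2, 4.6]
pts = ma_chart(xs, mu0=4.5, sigma=0.5, w=5)
```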
Illustration
The observations xi of the strength of a cotton carded rotor yarn for the periods 1 ≤ i ≤ 30 are shown in the table. Let
us set up a moving average control chart of span 5 at time i. The target mean yarn strength is 4.5 cN and the standard
deviation of yarn strength is 0.5 cN.
Data


Calculations
The statistic Mi plotted on the moving average control chart is computed for periods i ≥ 5. For time periods i < 5, the
average of the observations for periods 1, 2, ..., i is plotted. The values of these moving averages are also shown in
the table.


Control Chart

Conclusion
Note that there is no point that exceeds the control limits. Also note that for the initial periods i<w the control limits
are wider than their final steady-state value. Moving averages that are less than w periods apart are highly
correlated, which often complicates interpreting patterns on the control chart. This is clearly seen in the control
chart.
Comparison Between the CUSUM Chart and the MA Chart
The MA control chart is more effective than the Shewhart control chart in detecting small process shifts. However, it
is not as effective against small shifts as the CUSUM chart. Nevertheless, the MA control chart is considered simpler
to implement in practice than the CUSUM chart.

Exponentially Weighted Moving Average (EWMA): What is it?

Suppose the individual observations are x1, x2, x3, ... The exponentially weighted moving average is defined as

zi = λ xi + (1 - λ) z(i-1)

where 0 < λ ≤ 1 is a constant and z0 = μ, where μ is the process mean.

Why is it called an Exponentially Weighted MA?


The control limits are

UCL = μ0 + Lσ √[ (λ/(2-λ)) (1 - (1-λ)^(2i)) ]
LCL = μ0 - Lσ √[ (λ/(2-λ)) (1 - (1-λ)^(2i)) ]

where L is the width of the control limits in standard deviations.


The choice of the parameters λ and L will be discussed shortly.
Choice of λ and L
The choices of λ and L are related to the average run length (ARL). The ARL is the average number of points that
must be plotted before a point indicates an out-of-control condition. So ARL = 1/p, where p stands for the probability
that any point exceeds the control limits.
As we know, for three-sigma limits, p = 0.0027, so ARL = 1/0.0027 = 370. This means that even if the process is in
control, an out-of-control signal will be generated every 370 samples, on the average.
In order to detect a small shift in the process mean, which is the goal behind setting up an EWMA control chart, the
parameters λ and L are selected to give a desired ARL performance.
The following table illustrates this.

Dr. N Venkatesh, CEC


SQC 10ME668

As a rule of thumb, λ should be small to detect smaller shifts in the process mean. It is generally found that
0.05 ≤ λ ≤ 0.25 works well in practice. It is also found that L = 3 (three-sigma control limits) works reasonably well,
particularly with higher values of λ. But when λ is small, say λ ≤ 0.1, a choice of L between 2.6 and 2.8 is
advantageous.
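The EWMA recursion and its time-varying limits can be sketched as below, using λ = 0.1 and L = 2.7 with the yarn-strength target (4.5 cN) and standard deviation (0.5 cN) from the illustration that follows; the observations are hypothetical.

```python
import math

def ewma_chart(xs, mu0, sigma, lam=0.1, L=2.7):
    """EWMA z_i = lam*x_i + (1 - lam)*z_{i-1} with z_0 = mu0, plus the
    time-varying L-sigma control limits for each period i."""
    z = mu0
    points = []
    for i, x in enumerate(xs, start=1):
        z = lam * x + (1 - lam) * z
        half = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
        points.append((z, mu0 + half, mu0 - half))
    return points

# Hypothetical yarn-strength observations (cN); mu0 = 4.5, sigma = 0.5
xs = [4.6, 4.4, 4.7, 4.3, 4.5, 4.8, 4.2, 4.6]
pts = ewma_chart(xs, mu0=4.5, sigma=0.5)
# As i grows, the limits approach 4.5 +/- 2.7*0.5*sqrt(0.1/1.9), about 4.81 and 4.19
```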
Illustration
Let us take our earlier example of yarn strength in connection with the MA control chart. Here, the process mean is
taken as μ = 4.5 cN and the process standard deviation is taken as σ = 0.5 cN. We choose λ = 0.1 and L = 2.7. We
would expect this choice to result in an in-control average run length of about 500, and an ARL of about 10.3 for
detecting a shift of one standard deviation in the mean. The observations of yarn strength, the EWMA values, and
the control limit values are shown in the following table.
Table


Graph

Conclusion
Note that there is no point that exceeds the control limits. We therefore conclude that the process is in control.

References
1. Grant, E. L. and Leavenworth, R. S., Statistical Quality Control, Tata McGraw Hill Education Private Limited,
New Delhi, 2000.
2. Montgomery, D. C., Introduction to Statistical Quality Control, John Wiley & Sons, Inc., Singapore, 2001.
3. Gupta, R. C., Statistical Quality Control, Khanna Publishers, New Delhi, 2005.
