Done By:
DJOUMETIO JEUGO DIVANE FE15A053
NJIKI MCBRYNE TOMVEN FE15A164
NESTOR ABIANGANG ABIAWUH FE15A151
Table of Contents
3.1 TRADITIONAL METHODS VS STATISTICAL PROCESS CONTROL .......................... 22
3.2 TYPES OF VARIATION ........................................................................................................... 22
3.3 ‘IN CONTROL’ AND ‘OUT OF CONTROL’ ........................................................................ 22
3.4 SPECIFICATION AND NATURAL TOLERANCE LIMITS ............................................... 23
3.5 NORMAL DISTRIBUTION:..................................................................................................... 26
3.6 CONTROL CHARTS ................................................................................................................. 27
REFERENCES........................................................................................................................................ 35
CHAPTER I: GENERAL INTRODUCTION
1.1 INTRODUCTION
The objectives of this topic are, first, to define quality as it relates to the manufacturing and service
sectors, to introduce the terminology related to quality, and to set up a framework for the design and
implementation of quality. Equally important will be the ability to identify the unique needs of the
customer, which will assist in maintaining and growing market share. A study of activity-based product costing
will be introduced along with the impact of quality improvement on various quality-related costs. We
should be able to interpret the relationships among quality, productivity, long-term growth, and
customer satisfaction.
1.2.1. Quality
Quality of a product or service refers to the degree to which the product or service is able to
satisfy (stated or implied) needs.
Quality is the degree to which a product/service conforms to its requirements.
Every product possesses a number of characteristics that are critical to quality (for the
user/consumer):
Length of mechanical components
Duty of batteries
Thickness of the coat of paint
Amount of material in a tube of toothpaste
Garvin (1984) defined quality as the ability of a product or service to meet or exceed
its intended use as required by the customer.
Most organizations find it difficult (and expensive) to provide the customer with products
whose quality characteristics are always identical from unit to unit.
Charts are a major component of quality control. They help to visualize calculations and
relationships between the processes and the measurements of their quality.
1.2.2 Quality Characteristics.
One or more elements define the intended quality level of a product or service.
These elements, known as quality characteristics, can be categorized in these groupings: Structural
characteristics include such elements as the length of a part, the strength of a beam, the viscosity of a
fluid, and so on; sensory characteristics include the taste of good food, the smell of a sweet fragrance,
and the beauty of a model, among others; time-oriented characteristics include such measures as a
warranty, reliability, and maintainability; and ethical characteristics include honesty, courtesy,
friendliness, and so on.
1.2.4 Defects
A defect is associated with a quality characteristic that does not meet certain standards. Furthermore,
the severity of one or more defects in a product or service may cause it to be unacceptable (or
defective). The modern term for defect is nonconformity, and the term for defective is nonconforming
item.
1.2.6 Quality Control
Quality control is the part of quality management focused on fulfilling quality requirements; that is, the
operational techniques and activities used to fulfil requirements for quality. It is the physical
verification that the product conforms to the planned arrangements, by inspection, measurement, etc.
1.2.8 Variability
No two products can ever be identical (e.g. in the diameter of a screw).
If the variation is large, the customer may perceive the unit to be undesirable and unacceptable.
Moreover, if the variation is large, the units may not be interchangeable (e.g. causing problems in the
assembly process).
Most common sources of variability:
Differences in materials.
Differences in the performance of the manufacturing equipment.
Differences in the way operators perform their tasks
Example: the diameter (D) of a hole machined in a workpiece cannot be identical in all the
products, as shown in Figure 1 below.
Figure 1: Variability in diameter
Figure 2: Quality of conformance
development, engineering, manufacturing, marketing, and servicing. A key advantage of such a team is
that it promotes cross-disciplinary flow of information in real time as it solves the problem. When
design changes are made, the feasibility of equipment and tools in meeting the new requirements must
be analyzed. It is thus essential for information to flow between design, engineering, and
manufacturing.
Figure 3: The Kano model, relating the basic needs, performance needs, and excitement needs of a product or service to customer satisfaction
• Specification: a set of conditions and requirements, of specific and limited application, that provide a
detailed description of the procedure, process, material, product, or service for use primarily in
procurement and manufacturing. Standards may be referenced or included in a specification.
• Standard: a prescribed set of conditions and requirements, of general or broad application,
established by authority or agreement, to be satisfied by a material, product, process, procedure,
convention, test method; and/or the physical, functional, performance, or conformance characteristic
thereof. A physical embodiment of a unit of measurement (for example, an object such as the standard
kilogram or an apparatus such as the cesium beam clock).
CHAPTER 2: STATISTICAL FOUNDATION AND METHODS OF QUALITY
CONTROL AND IMPROVEMENTS.
2.1: INTRODUCTION AND CHAPTER OBJECTIVES:
In this chapter we build a foundation for the statistical concepts and techniques used in quality control
and improvement. Statistics is a subtle science, and it plays an important role in quality programs. Only
a clear understanding of statistics will enable you to apply it properly. Statistical methods are often
misused, but a sound knowledge of statistical principles will help you formulate correct procedures in
different situations and will help you interpret the results properly. When we analyze a process, we often find it
necessary to study its characteristics individually. Breaking the process down allows us to determine
whether some identifiable cause has forced a deviation from the expected norm and whether a remedial
action needs to be taken. Thus, our objective in this chapter is to review different statistical concepts
and techniques along two major themes. The first deals with descriptive statistics, those that are used to
describe products or processes and their characteristic features, based on collected data. The second
theme is focused on inferential statistics, whereby conclusions on product or process parameters are
made through statistical analysis of data. Such inferences, for example, may be used to determine if
there has been a significant improvement in the quality level of a process, as measured by the
proportion of nonconforming product.
With respect to telecommunications, knowing the population, or obtaining a sample of the population of a
certain area, is very important; that is why we conduct surveys in the network planning process.
One could assure good quality if everything in the population were inspected and measured.
This is generally not done due to excessive cost. In some situations it may be impossible to
do, for example, if one needs to do destructive testing. The modern approach is to assure Quality by
concentrating on improving the Process itself, then to use sampling to audit or monitor the Process.
This is the basis of SPC. To make accurate inferences, the sample has to be representative. A
representative sample is one in which each and every member of the population has an equal and
mutually exclusive chance of being selected.
The study sample is the sample chosen from the study population.
Non-random samples have certain limitations. The larger group (target population) is difficult to
identify. This may not be a limitation when generalization of results is not intended. The results would
be valid for the sample itself (internal validity). They can, nevertheless, provide important clues for
further studies based on random samples. Another limitation of non-random samples is that statistical
inferences such as confidence intervals and tests of significance cannot be estimated from non-random
samples. However, in some situations, the investigator has to make crucial judgments. One should
remember that random samples are the means but representativeness is the goal. When non-random
samples are representative (compare the socio-demographic characteristics of the sample subjects with
the target population), generalization may be possible.
In order to select a simple random sample from a population, it is first necessary to identify all
individuals from whom the selection will be made. This is the sampling frame. In developing countries,
listings of all persons living in an area are not usually available. A census may not capture nomadic
population groups. Voters’ and taxpayers’ lists may be incomplete. Whether or not such deficiencies
are major barriers in random sampling depends on the particular research question being investigated.
To undertake a separate exercise of listing the population for the study may be time consuming and
tedious. Two-stage sampling may make the task feasible.
The usual method of selecting a simple random sample from a listing of individuals is to assign a
number to each individual and then select certain numbers by reference to random number tables which
are published in standard statistical textbooks. Random numbers can also be generated by statistical
software such as Epi Info, developed by WHO and CDC Atlanta.
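In code, the random-number-table step can be replaced by a pseudorandom generator. The sketch below uses Python's standard library; the frame size and sample size are illustrative, not from the text.

```python
import random

# Sampling frame: assign a number to each individual (here, IDs 1..500).
frame = list(range(1, 501))

random.seed(42)  # fixed seed so the draw is reproducible

# Draw a simple random sample of 25 individuals without replacement.
sample = random.sample(frame, k=25)

print(len(sample))       # 25
print(len(set(sample)))  # 25 -- all distinct: no individual selected twice
```

Every individual in the frame has the same chance of being selected, which is exactly the property the random number table provides.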
b) Systematic sampling
A simple method of random sampling is to select a systematic sample in which every nth person is
selected from a list or from other ordering. A systematic sample can be drawn from a queue of people
or from patients ordered according to the time of their attendance at a clinic. Thus, a sample can be
drawn without an initial listing of all the subjects. Because of this feasibility, a systematic sample may
have some advantage over a simple random sample.
To fulfill the statistical criteria for a random sample, a systematic sample should be drawn from
subjects who are randomly ordered. The starting point for selection should be randomly chosen. If
every fifth person from a register is being chosen, then a random procedure must be used to determine
whether the first, second, third, fourth, or fifth person should be chosen as the first member of the
sample.
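The procedure above (every nth person, with a randomly chosen starting point) can be sketched as follows; the patient register is hypothetical.

```python
import random

def systematic_sample(subjects, step):
    """Select every `step`-th subject, starting from a randomly chosen
    position among the first `step` subjects, as the text requires."""
    start = random.randrange(step)  # random start: one of positions 0..step-1
    return subjects[start::step]

random.seed(1)
register = [f"patient_{i}" for i in range(1, 101)]  # hypothetical register of 100
chosen = systematic_sample(register, step=5)
print(len(chosen))  # 100 / 5 = 20 subjects
```

Note that only the starting point is random; once it is fixed, the rest of the sample is determined by the ordering of the register.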
c) Multistage sampling
Sometimes, a strictly random sample may be difficult to obtain and it may be more feasible to draw the
required number of subjects in a series of stages. For example, suppose we wish to estimate the number
of CATSCAN examinations made of all patients entering a hospital in a given month in the state of
Maharashtra. It would be quite tedious to devise a scheme which would allow the total population of
patients to be directly sampled. However, it would be easier to list the districts of the state of
Maharashtra and randomly draw a sample of these districts. Within this sample of districts, all the
hospitals would then be listed by name, and a random sample of these can be drawn. Within each of
these hospitals, a sample of the patients entering in the given month could be chosen randomly for
observation and recording. Thus, by stages, we draw the required sample. If indicated, we can
introduce some element of stratification at some stage (urban/rural, gender, age).
It should be cautioned that multistage sampling should only be resorted to when difficulties in simple
random sampling are insurmountable. Those who take a simple random sample of 12 hospitals, and
within each of these hospitals select a random sample of 10 patients, may believe they have selected
120 patients randomly from all the 12 hospitals. In a statistical sense, they have in fact selected a sample
of 12 rather than 120.
d) Stratified sampling
If a condition is unevenly distributed in a population with respect to age, gender, or some other
variable, it may be prudent to choose a stratified random sampling method. For example, to obtain a
stratified random sample according to age, the study population can be divided into age groups such as
0–5, 6–10, 11–14, 15–20, 21–25, and so on, depending on the requirement. A different proportion of
each group can then be selected as a subsample either by simple random sampling or systematic
sampling. If the condition decreases with advancing age, then to include an adequate number of subjects
in the older age groups, one may sample a larger proportion from the older subsamples.
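A minimal sketch of stratified sampling with a different sampling fraction per stratum; the strata, their sizes, and the fractions below are illustrative assumptions, not values from the text.

```python
import random

# Hypothetical study population divided into age strata.
strata = {
    "0-5":   [f"s{i}" for i in range(200)],        # 200 subjects
    "6-10":  [f"s{i}" for i in range(200, 350)],   # 150 subjects
    "11-14": [f"s{i}" for i in range(350, 450)],   # 100 subjects
}

# Oversample the oldest stratum, where the condition is assumed rarer.
fractions = {"0-5": 0.10, "6-10": 0.10, "11-14": 0.20}

random.seed(7)
sample = {
    name: random.sample(members, k=round(len(members) * fractions[name]))
    for name, members in strata.items()
}
for name, chosen in sample.items():
    print(name, len(chosen))  # 20, 15, 20 subjects respectively
```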
e) Cluster sampling
In many surveys, studies may be carried out on large populations which may be geographically quite
dispersed. To obtain the required number of subjects for the study by a simple random sample method
will require large costs and will be cumbersome. In such cases, clusters may be identified (e.g.
households) and random samples of clusters will be included in the study; then, every member of the
cluster will also be part of the study. This introduces two types of variations in the data – between
clusters and within clusters – and this will have to be taken into account when analyzing data.
Cluster sampling may produce misleading results when the disease under study itself is distributed in a
clustered fashion in an area. For example, suppose we are studying malaria in a population. Malaria
incidence may be clustered in villages having stagnant water collections which may serve as a source
of mosquito breeding. In villages without such water stagnation, there will be fewer malaria cases. The
choice of a few villages in cluster sampling may therefore give erroneous results; the selected villages
may, by chance, be quite unrepresentative of the whole population.
2.2.3. Probability
Our discussion of the concepts of probability is intentionally brief. For an in-depth look at probability,
see the references at the end of the chapter. The probability of an event describes the chance of
occurrence of that event. A probability function is bounded between 0 and 1, with 0 representing the
definite non-occurrence of the event and 1 representing the certain occurrence of the event. The set of
all outcomes of an experiment is known as the sample space S.
A. Relative Frequency Definition of Probability
If each event in the sample space is equally likely to happen, the probability of an event A is given by
P(A) = nA / N
where:
P(A) = probability of event A
nA = number of occurrences of event A
N = size of the sample space.
This definition is associated with the relative frequency concept of probability. It is applicable to
situations where historical data on the outcome of interest are available. The probability associated with
the sample space is 1 [i.e., P(S) = 1].
Example 3.5 A certain man crimps several cables in one day. He buys a packet of connectors
containing 50 pieces. Inside this packet, 20 connectors were found to be nonconforming. If this man
picks a connector from the packet, what is the probability of it being nonconforming?
Solution
We define event A as getting a connector that is nonconforming. The sample space S
consists of 50 connectors (i.e., N = 50). The number of occurrences of event A (nA) is 20. Thus,
if the man is equally likely to choose any one of the 50 connectors,
P(A) = 20 / 50 = 0.4
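The arithmetic of Example 3.5 is a direct application of P(A) = nA / N:

```python
# Relative-frequency probability for Example 3.5:
# 50 connectors in the packet, 20 of them nonconforming.
N = 50    # size of the sample space
n_A = 20  # occurrences of event A (picking a nonconforming connector)

p_A = n_A / N
print(p_A)  # 0.4
```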
service time in the fast-food restaurant is no more than 3 minutes (min). Suppose we find that the
sample average service time (based on a sample of 500 people) is 3.5 min. We then need to determine
whether this observed average of 3.5 min is significantly greater than the claimed mean of 3 min. Such
procedures fall under the heading of inferential statistics. They help us draw conclusions about the
conditions of a process. They also help us determine whether a process has improved by comparing
conditions before and after changes. For example, suppose that the management of the fast-food
restaurant is interested in reducing the average time to serve a customer. They decide to add two people
to their service staff. Once this change is implemented, they sample 500 customers and find that the
average service time is 2.8 min. The question then is whether this decrease is a statistically significant
decrease or whether it is due to random variation inherent to sampling. Procedures that address such
problems are discussed later.
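The first service-time question can be sketched as a one-sample z-test using only the standard library. The sample standard deviation (1.2 min) is an assumed value for illustration; it is not given in the text.

```python
from math import sqrt
from statistics import NormalDist

# Is the observed sample mean of 3.5 min significantly greater than the
# claimed mean of 3 min, for a sample of 500 customers?
n, xbar, mu0 = 500, 3.5, 3.0
s = 1.2  # assumed sample standard deviation (illustrative)

z = (xbar - mu0) / (s / sqrt(n))       # standardized test statistic
p_value = 1 - NormalDist().cdf(z)      # one-sided: is the true mean > 3 min?

print(round(z, 2))     # 9.32
print(p_value < 0.05)  # True: the difference is statistically significant
```

With a sample this large, even a 0.5 min excess over the claim is far outside what random sampling variation alone would produce.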
Counting events usually costs less than measuring the corresponding continuous variables. The
discrete variable is merely classified as being, say, unacceptable or not; this can be done through a
go/no-go gage, which is faster and cheaper than finding exact measurements. However, the reduced
collection cost may be offset by the lack of detailed information in the data.
Sometimes, continuous characteristics are viewed as discrete to allow easier data collection and
reduced inspection costs. For example, the hub diameter in a tire is actually a continuous random
variable, but rather than precisely measuring the hub diameter numerically, a go/no-go gage is used to
quickly identify the characteristic as either acceptable or not. Hence, the acceptability of the hub
diameter is a discrete random variable. In this case, the goal is not to know the exact hub diameter but
rather to know whether it is within certain acceptable limits.
Accuracy and Precision: The accuracy of a data set or a measuring instrument refers to how closely the
observations center on a desired target value, such that, on average, the target value is realized;
precision refers to the degree of uniformity (low variability) of the observations.
Let's assume that the target thickness of a metal plate is 5.25 mm.
The population mean (μ) is found by adding all the data values in the population and dividing by the
size of the population (N). It is calculated as
μ = (X1 + X2 + ··· + XN) / N
B. Median
The median is the value in the middle, when the observations are ranked. If there are an even number
of observations, the simple average of the two middle numbers is chosen as the median. The median
has the property that 50% of the values are less than or equal to it.
C. Mode
The mode is the value that occurs most frequently in the data set. It denotes a "typical" value from the
process.
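The three measures of central tendency above can be computed directly with Python's standard library; the data set is illustrative.

```python
from statistics import mean, median, mode

# A small illustrative data set with an even number of observations.
data = [4, 8, 8, 5, 3, 10, 8, 7]

print(mean(data))    # 6.625
print(median(data))  # 7.5 -- average of the two middle values (7 and 8)
print(mode(data))    # 8  -- the value occurring most frequently
```

Because there are eight observations, the median is the simple average of the two middle numbers after ranking, exactly as described above.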
D. Trimmed Mean
The trimmed mean is a robust estimator of the central tendency of a set of observations. It is obtained
by calculating the mean of the observations that remain after a proportion of the high and low values
have been deleted. The α% trimmed mean, denoted by Γ(α), is the average of the observations that
remain after trimming (or deleting) the highest α% and the lowest α% of the observations. This is
a suitable measure when it is believed that existing outliers do not represent usual process
characteristics. Thus, analysts will sometimes trim extreme observations caused by a faulty
measurement process to obtain a better estimate of the population's central tendency.
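A minimal implementation of the trimmed mean, with one deliberately faulty measurement to show why it is robust; the data are illustrative.

```python
from statistics import mean

def trimmed_mean(data, alpha):
    """Mean of the observations that remain after deleting the top
    alpha-fraction and the bottom alpha-fraction of the sorted data."""
    xs = sorted(data)
    k = int(len(xs) * alpha)  # number of observations trimmed at each end
    return mean(xs[k:len(xs) - k]) if k else mean(xs)

# One faulty measurement (250) badly inflates the ordinary mean.
data = [9, 10, 11, 10, 12, 9, 11, 10, 13, 250]
print(mean(data))                # 34.5  -- distorted by the outlier
print(trimmed_mean(data, 0.10))  # 10.75 -- 10% trimmed at each end
```

Trimming one observation from each end removes the faulty value and restores an estimate close to the bulk of the data.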
exceed customer requirements and is vital in the manufacturing part of businesses. Some quality
control methods are:
Quality Assurance: this method covers activities such as development, design, production,
and servicing. Quality assurance can also cover areas of management, production,
inspection, materials, assembly, services, and other areas related to the quality of the product or
service.
Failure Testing: this method involves testing a product until it fails. The product can be placed under
different stresses such as humidity, vibration, temperature, etc. This method will expose the
weaknesses of the product in question.
Statistical Control: almost all manufacturing companies use statistical control. This process
involves randomly sampling and testing a portion of the output.
Company Quality: with management leading the quality improvement process and other
departments following, a successful product or service will emerge.
Total Quality Control: a measure used in cases where sales decrease despite the implementation of
statistical quality control techniques or quality improvements.
A quality control culture greatly depends upon the members of an organization. The corporate culture
of an organization will define what behaviors are appropriate for an organization. The culture will
determine which behaviors are good and add value towards company goals, and which are bad and
negate company goals. Corporate culture is important because it helps define risk that individuals of the
organization can take to help manage organizational risk overall. There is no such thing as a perfect
risk culture but there are ways to help promote a positive risk culture.
Individual decision making: The greatest accountability for a decision is when one person
makes that decision. Everything is on the line for that person. This causes the person who is
making the decision to analyze every detail.
Question everything: Members of a quality control culture should question everything. This
brings out different ways to do things so that the best idea can be selected.
Honesty: Honesty must be present in all levels of an organization. For example, admitting
when you do not know something instead of making something up to make yourself look
intelligent can save an organization down the road.
2.4: Quality Control Improvements
These are efforts taken to increase efficiency of actions and procedures with the purpose of achieving
additional benefits for the organization and its users. It is also the continuous study and improvement of
a process, system, or organization. A continuous process that identifies problems, examines solutions to
those problems, and regularly monitors the solution implemented for improvement.
It is aimed at improvement: measuring where you are and figuring out ways to make things better. It
specifically attempts to avoid blame and to create systems that prevent errors from happening.
Quality Improvement activities can be very helpful in improving how things work. Trying to find where
the “defect” in the system is, and figuring out new ways to do things can be challenging and fun. It’s a great
opportunity to “think outside the box.”
2.4.1 IMPROVEMENT TECHNIQUES
2.4.1.1 KAIZEN
This is a concept of continual improvement: making changes for the better on a continual basis.
The improvement involves people, processes, and products. Action is taken immediately on the shop
floor without hours of analysis, and results are evident within days. The goal of this process is to
expose waste by forcing production problems to surface so that they become visible for everyone to see.
Once identified, such problems are solved with worker consensus.
2.4.1.2 FADE MODEL
Another improvement technique is the FADE model (Focus, Analyze, Develop, Execute).
CHAPTER 3: STATISTICAL PROCESS CONTROL
These are methods that make it possible to control quality characteristics during production (on-line),
in order to maintain the process under control and to detect and correct possible abnormalities.
Understanding The Causes Of Variation
• Control charts are used to identify variation that may be due to special causes, and to free the user
from concern over variation due to common causes.
• Control charting is a continuous, ongoing activity.
• When a process is stable and does not trigger any of the detection rules for a control chart, a process
capability analysis may also be performed to predict the ability of the current process to produce
conforming product in the future.
• When excessive variation is identified by the control chart detection rules, or the process capability is
found lacking, additional effort is exerted to determine the causes of that variance. The tools used
include:
• Ishikawa diagrams
• Designed experiments
• Pareto charts
• Designed experiments are critical: they are the only means of objectively quantifying the relative
importance of the many potential causes of variation.
For every product quality characteristic (e.g. geometrical dimensions) we define the
specification limits (USL, LSL).
Figure 7: Specification limits for a product quality characteristic
A process operating with only chance causes of variation (no other assignable causes)
generally shows a random pattern (also described as white noise).
Typically it follows a normal distribution.
Example: a nonrandom pattern (a run) due to the presence of assignable causes (e.g. thermal
expansion, tool wear…) may indicate that the process mean has shifted slightly.
A run is defined as a sequence of consecutive points, all of which are on the same side of the
centre line.
Figure 9: shifting of mean due to the presence of assignable causes
Example: a nonrandom pattern due to the presence of two points related to assignable causes
(e.g. failures in the process).
It is customary to define the upper and lower natural tolerance limits, say UNTL and LNTL, as
3σ above and below the process mean.
To calculate the natural tolerance NT ≡ 6σ, we should know the standard deviation (σ) of the
population.
σ can be estimated by using the sample standard deviations (s) or the sample
ranges (R) of several samples extracted from the population.
Example: m samples are extracted from the population; each sample is made of n observations.
Standard deviation of the population (σ) can be estimated using (sj) or (Rj).
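The range-based estimate is σ̂ = R̄/d2, where d2 is a tabulated constant depending on the sample size n (d2 = 2.326 for n = 5). The m = 3 samples below are illustrative, not from the text.

```python
from statistics import mean

# Estimate sigma from the average sample range: sigma_hat = Rbar / d2.
samples = [
    [74.03, 74.00, 74.02, 73.99, 74.01],  # illustrative data: m = 3 samples
    [73.99, 74.02, 74.01, 74.00, 73.98],  # of n = 5 observations each
    [74.01, 74.03, 73.99, 74.00, 74.02],
]
d2 = 2.326  # tabulated constant for n = 5

ranges = [max(s) - min(s) for s in samples]  # Rj = max - min of each sample
r_bar = mean(ranges)                         # average range, Rbar
sigma_hat = r_bar / d2

print(round(sigma_hat, 4))  # estimated process standard deviation
```

The same data could instead be used with the pooled sample standard deviations (sj); the range method is simply quicker to compute by hand on the shop floor.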
3.6 CONTROL CHARTS
Control charts are practical tools to monitor the evolution of production processes.
In any production process a certain amount of natural variability will always exist (this is the
cumulative effect of small and unavoidable causes).
A process that is operating in the presence of chance causes of variation only is said to be in
statistical control.
Control charts are not designed to provide any information about the process conformity with
specification limits.
A process that is operating in the presence of assignable causes (sources of variability that are
not part of the chance causes) is said to be out of control. Three main sources of assignable
causes are:
Improperly adjusted or controlled machines (or failures);
Operator errors;
Defective raw materials.
In other terms, a process is out of control when it does not follow a random pattern and the
deviation can be unambiguously associated with one of the previous causes.
Control Chart Contains:
A center line (CL)
An upper control limit (UCL)
A lower control limit (LCL)
Basic Criteria
A point that plots within the control limits indicates that the process is in control → no action is
necessary.
A point that plots outside the control limits is evidence that the process is out of control.
Furthermore, in the presence of chance causes of variation only, the plotted points should exhibit a
random pattern.
A control chart basically consists of a display of ‘quality characteristics’ which are found from
samples taken from production at regular intervals. Typically the mean, together with a measure
of the variability (the range or the standard deviation) of measurements taken from the items
being produced may be the variables under consideration. The appearance of a typical control
chart is shown below in Figure 12.
The centre line on the figure represents the ideal value of the characteristic being measured.
Ideally, the points on the figure should hover around this value in a random manner indicating
that only chance variation is present in the production process.
The upper and lower limits are chosen in such a way that, so long as the process is in statistical
control, virtually all of the points plotted will fall between these two lines. Note that the points
are joined up so that it is easier to detect any trends present in the data. At this stage, a note of
caution is necessary.
The fact that a particular control chart shows all of its points lying between the upper and lower
limits does not necessarily imply that the production process is in control.
Even points lying within the upper and lower limits that do not exhibit random behavior can
indicate that a process is either out of control or going out of control.
Figure 12: A Typical Control Chart
It is also worth noting at this stage that there is a close connection between control charts and
hypothesis testing. If we formulate the hypotheses:
H0: the process is in control
H1: the process is not in control
Then a point lying within the upper and lower limits is telling us that we do not have the evidence to
reject the null hypothesis, and a point lying outside the upper and lower limits is telling us to reject the
null hypothesis. From previous comments made, you will realize that these statements are not an
absolute truth but an indicative truth.
Model of a Control Chart
Let w be a sample statistic with mean µw and standard deviation σw. Then:
UCL = µw + Lσw
CL = µw
LCL = µw − Lσw
where L is the distance of the control limits from the center line, in units of the standard deviation of
w; in general we use L = 3.
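The Shewhart model above, with the basic out-of-control criterion, can be sketched directly; the values of µw and σw below are illustrative.

```python
# Shewhart model: limits placed L standard deviations from the center line.
def control_limits(mu_w, sigma_w, L=3):
    """Return (LCL, CL, UCL) for a statistic w with mean mu_w and
    standard deviation sigma_w."""
    return mu_w - L * sigma_w, mu_w, mu_w + L * sigma_w

lcl, cl, ucl = control_limits(mu_w=10.0, sigma_w=0.5)  # illustrative values
print(lcl, cl, ucl)  # 8.5 10.0 11.5

# Basic criterion: a point plotting outside (LCL, UCL) signals out of control.
print(not (lcl <= 12.1 <= ucl))  # True -- this point would trigger an alarm
```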
There are Two Main Types of control charts:
For Variables (quality characteristics measured on a numerical scale; e.g.
geometrical dimensions, weights, …)
µ (mean) control chart
R (range) control charts
S² (sample variance) control charts
S (sample standard deviation) control charts
Xi (control charts for individual measurements)
For Attributes (quality characteristics that are counted rather than measured):
p (fraction nonconforming) control charts
The fraction nonconforming (p) is defined as the ratio of the number of defective items in a
population to the total number of items in that population.
If we want to consider the sample fraction nonconforming, p̂ = D/n, where D is the number of
nonconforming items in a sample of size n, the control limits are:
UCL = p + 3√(p(1 − p)/n)
CL = p
LCL = p − 3√(p(1 − p)/n)
Since p represents a probability, negative values of LCL are meaningless. If the calculated
LCL < 0, we simply set LCL = 0.
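The limits for the fraction nonconforming, including the clamping of a negative LCL to 0, can be sketched as follows; the values p = 0.05 and n = 50 are illustrative.

```python
from math import sqrt

def p_chart_limits(p, n, L=3):
    """Control limits for the sample fraction nonconforming.
    A negative LCL is meaningless for a proportion, so it is set to 0."""
    sigma = sqrt(p * (1 - p) / n)      # standard deviation of p-hat
    lcl = max(0.0, p - L * sigma)      # clamp: LCL cannot be negative
    return lcl, p, p + L * sigma

# Illustrative values: fraction nonconforming p = 0.05, samples of n = 50.
lcl, cl, ucl = p_chart_limits(p=0.05, n=50)
print(round(lcl, 4), cl, round(ucl, 4))
```

Here the unclamped LCL would be about −0.04, so the chart uses LCL = 0: with samples of 50, no count of nonconforming items can plot below the lower limit.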
Importance of Control Charts
The following reasons are given for the popularity of control charts in industries today.
They are simple to use. Production process operators do not need to be fluent in statistical
methods to be able to use control charts.
They are effective. Control charts help to keep a production process in control. This avoids
wastage and hence unnecessary costs.
They are diagnostic in the sense that as well as indicating when adjustments need to be made to
a process, they also indicate when adjustments do not need to be made.
They are well-proven. Control charts have a long and successful history. They were introduced
in the 1920s and have proved themselves over the years.
They provide information about the stability of a production process over time. This
information is of interest to production engineers, for example. A very stable process may
enable fewer measurements to be taken; this, of course, has cost implications.
REFERENCES