
MINOR PROJECT

A Study on Six Sigma Techniques and Their Application in the Reduction of Seat Rejection at BOSCH Ltd.

Submitted by
Suyog Gholap (107269), R. Rahul (107254), Chandra Shekhar L. (107266), Sudip Pal (107237), K. Seshi Kiran Reddy (107249)

Introduction to Six Sigma:


Sigma (σ) is a letter in the Greek alphabet that has become the statistical symbol and metric of process variation. The sigma scale of measure is directly related to such characteristics as defects per unit, parts-per-million defectives, and the probability of a failure. Six sigma is the level reached when the variation around the target is such that only 3.4 outputs out of one million are defects, under the assumption that the process average may drift over the long term by as much as 1.5 standard deviations. Six Sigma may be defined in several ways. Tomkins defines Six Sigma as a program aimed at the near-elimination of defects from every product, process and transaction. Harry (1998) defines Six Sigma as a strategic initiative to boost profitability, increase market share and improve customer satisfaction through statistical tools that can lead to breakthrough quantum gains in quality.

Six Sigma was launched by Motorola in 1987. It was the result of a series of changes in the quality area starting in the late 1970s, with ambitious ten-fold improvement drives. The top-level management, along with CEO Robert Galvin, developed a concept called Six Sigma. After some internal pilot implementations, Galvin, in 1987, formulated the goal of achieving Six Sigma capability by 1992 in a memo to all Motorola employees. The results in terms of reduction in process variation were on track: cost savings totaled US$13 billion and labor productivity improved by 204% over the period 1987 to 1997. In the wake of the successes at Motorola, some leading electronics companies such as IBM, DEC and Texas Instruments launched Six Sigma initiatives in the early 1990s. However, it was not until 1995, when GE and Allied Signal launched Six Sigma as strategic initiatives, that a rapid dissemination took place in non-electronic industries all over the world. In early 1997, the Samsung and LG Groups in Korea began to introduce Six Sigma within their companies. The results were remarkably good in those companies; for instance, Samsung SDI, a company under the Samsung Group, reported that the cost savings from Six Sigma projects totaled US$150 million. At present, the number of large companies applying Six Sigma in Korea is growing exponentially, with strong vertical deployment into many small- and medium-size enterprises as well.

Six Sigma tells us how good our products, services and processes really are through statistical measurement of the quality level. It is a management strategy, led by top-level management, to create quality innovation and total customer satisfaction. It is also a quality culture. It provides a means of doing things right the first time and of working smarter by using data and information. It also provides an atmosphere for solving many CTQ (critical-to-quality) problems through team efforts. A CTQ can be a process or product result characteristic that is critical to quality, or a critical reason behind a quality characteristic.
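The 3.4 defects-per-million figure quoted above follows from the normal model with a one-sided specification and the conventional 1.5σ long-term shift. As a rough illustration (added here, not part of the original report, and assuming SciPy is available), the short Python sketch below reproduces the commonly quoted sigma-level-to-ppm values:

    from scipy.stats import norm

    # Long-term defect rate implied by a short-term sigma level, assuming the
    # conventional 1.5-sigma shift of the process mean (one-sided specification).
    for sigma in range(1, 7):
        ppm = norm.sf(sigma - 1.5) * 1_000_000
        print(f"{sigma} sigma -> {ppm:,.1f} ppm defective")
    # 6 sigma -> about 3.4 ppm, matching the figure quoted above.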

Defect rate, PPM and DPMO:


The defect rate, denoted by p, is the ratio of the number of defective items which are out of specification to the total number of items processed (or inspected). Defect rate or fraction of defective items has been used in industry for a long time. The number of defective items out of one million inspected items is called the ppm (parts-per-million) defect rate. Sometimes a ppm defect rate cannot be properly used, in particular, in the cases of service work. In this case, a DPMO (defects per million opportunities) is often used. DPMO is the number of defective opportunities which do not meet the required specification out of one million possible opportunities.
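As a small illustration (not from the report), the defect rate, ppm and DPMO definitions above translate directly into arithmetic; the counts used below are made-up numbers chosen only to show the formulas:

    # Hypothetical counts, for illustration only.
    defective_items = 2
    items_inspected = 100
    defects_found = 3           # one item can carry more than one defect
    opportunities_per_item = 5  # defect opportunities per item

    p = defective_items / items_inspected               # defect rate
    ppm = p * 1_000_000                                  # parts-per-million defective
    dpmo = defects_found / (items_inspected * opportunities_per_item) * 1_000_000

    print(p, ppm, dpmo)   # 0.02, 20000.0, 6000.0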

Standard Deviation:
In probability theory and statistics, standard deviation is a measure of the variability or dispersion of a population, a data set, or a probability distribution. A low standard deviation indicates that the data points tend to be very close to the same value (the mean), while a high standard deviation indicates that the data are spread out over a large range of values. For example, the average height for adult men in the United States is about 70 inches, with a standard deviation of around 3 inches. This means that most men (about 68%, assuming a normal distribution) have a height within 3 inches of the mean (67 to 73 inches), while almost all men (about 95%) have a height within 6 inches of the mean (64 to 76 inches). If the standard deviation were zero, then all men would be exactly 70 inches tall. If the standard deviation were 20 inches, then men would have much more variable heights, with a typical range of about 50 to 90 inches.

Fig: A data set with a mean of 50 (shown in blue) and a standard deviation (σ) of 20.

Fig: A plot of a normal distribution (or bell curve). Each colored band has a width of one standard deviation. In addition to expressing the variability of a population, standard deviation is commonly used to measure confidence in statistical conclusions. For example, the margin of error in polling data is determined by calculating the expected standard deviation in the results if the same poll were to be conducted multiple times. (Typically the reported margin of error is about twice the standard deviation,

the radius of a 95% confidence interval.) In science, researchers commonly report the standard deviation of experimental data, and only effects that fall far outside the range of the standard deviation are considered statistically significant. Standard deviation is also important in finance, where the standard deviation of the rate of return on an investment is a measure of the risk. Consider a population consisting of the following eight values:

    2, 4, 4, 4, 5, 5, 7, 9

There are eight data points in total, with a mean (or average) value of 5:

    μ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5

To calculate the standard deviation, we compute the difference of each data point from the mean, and square the result:

    (2 − 5)² = 9, (4 − 5)² = 1, (4 − 5)² = 1, (4 − 5)² = 1, (5 − 5)² = 0, (5 − 5)² = 0, (7 − 5)² = 4, (9 − 5)² = 16

Next we average these values and take the square root, which gives the standard deviation:

    σ = √[(9 + 1 + 1 + 1 + 0 + 0 + 4 + 16) / 8] = √4 = 2

Therefore, the population above has a standard deviation of 2. Note that we are assuming that we are dealing with a complete population. If our 8 values were obtained by random sampling from some parent population, we might prefer to compute the sample standard deviation using a denominator of 7 instead of 8. The standard deviation of a discrete random variable is the root-mean-square (RMS) deviation of its values from the mean. If the random variable X takes on N values (which are real numbers) with equal

probability, then its standard deviation can be calculated as follows:

1. Find the mean, μ, of the values.
2. For each value xᵢ, calculate its deviation (xᵢ − μ) from the mean.
3. Calculate the squares of these deviations.
4. Find the mean of the squared deviations. This quantity is the variance σ².
5. Take the square root of the variance.

This calculation is described by the following formula:

    σ = √[ (1/N) · Σ (xᵢ − μ)² ],   with the sum taken over i = 1, ..., N,

where μ is the arithmetic mean of the values xᵢ, defined as:

    μ = (1/N) · Σ xᵢ.

If not all values have equal probability, but the probability of value xᵢ equals pᵢ, the standard deviation can be computed by:

    σ = √[ Σ pᵢ (xᵢ − μ)² / Σ pᵢ ],

where

    μ = Σ pᵢ xᵢ / Σ pᵢ,

and N′ is the number of non-zero weight elements (it appears in the corresponding unbiased sample estimate, where the divisor is adjusted by the factor (N′ − 1)/N′).

For example, suppose we wished to find the standard deviation of the data set consisting of the values 3, 7, 7, and 19.

Step 1: find the arithmetic mean (average) of 3, 7, 7, and 19:
    μ = (3 + 7 + 7 + 19) / 4 = 36 / 4 = 9

Step 2: find the deviation of each number from the mean:
    3 − 9 = −6;  7 − 9 = −2;  7 − 9 = −2;  19 − 9 = 10

Step 3: square each of the deviations, which amplifies large deviations and makes negative values positive:
    (−6)² = 36;  (−2)² = 4;  (−2)² = 4;  10² = 100

Step 4: find the mean of those squared deviations:
    (36 + 4 + 4 + 100) / 4 = 144 / 4 = 36

Step 5: take the non-negative square root of the quotient (converting squared units back to regular units):
    σ = √36 = 6

So, the standard deviation of the set is 6. This example also shows that, in general, the standard deviation is different from the mean absolute deviation (which is 5 in this example). Note that if the above data set represented only a sample from a greater population, a modified standard deviation would be calculated to estimate the population standard deviation, which would give 6.93 for this example.
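As a quick cross-check (added here for illustration, using only Python's standard library), the statistics module reproduces the numbers above for the data set 3, 7, 7, 19:

    import statistics

    data = [3, 7, 7, 19]
    mean = statistics.mean(data)                             # 9
    print(statistics.pstdev(data))                           # population standard deviation -> 6.0
    print(statistics.stdev(data))                            # sample standard deviation (n - 1 divisor) -> ~6.93
    print(statistics.mean([abs(x - mean) for x in data]))    # mean absolute deviation -> 5.0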

Rules for normally distributed data

Fig: For the normal distribution, dark blue is less than one standard deviation from the mean; this accounts for 68.27% of the set. Two standard deviations from the mean (medium and dark blue) account for 95.45%; three standard deviations (light, medium, and dark blue) account for 99.73%; and four standard deviations account for 99.994%. The two points of the curve which are one standard deviation from the mean are also the inflection points.

The central limit theorem says that the distribution of a sum of many independent, identically distributed random variables tends towards the normal distribution. If a data distribution is approximately normal, then about 68% of the values are within 1 standard deviation of the mean (mathematically, μ ± σ, where μ is the arithmetic mean), about 95% of the values are within two standard deviations (μ ± 2σ), and about 99.7% lie within three standard deviations (μ ± 3σ). This is known as the 68-95-99.7 rule, or the empirical rule. For various values of z, the percentage of values expected to lie in the symmetric interval (-z, z) is as follows:

z          percentage
1          68.2689492%
1.645      90%
1.960      95%
2          95.4499736%
2.576      99%
3          99.7300204%
3.2906     99.9%
4          99.993666%
5          99.9999426697%
6          99.9999998027%
7          99.9999999997440%
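The percentages in the table can be reproduced from the error function. The short sketch below (added for illustration, using only Python's standard library) computes the fraction of a normal population lying within ±z standard deviations of the mean:

    import math

    # P(|X - mu| < z * sigma) for a normal distribution equals erf(z / sqrt(2)).
    for z in (1, 1.645, 1.960, 2, 2.576, 3, 4, 5, 6):
        coverage = math.erf(z / math.sqrt(2))
        print(f"z = {z:<6} -> {coverage * 100:.7f}% within ±z standard deviations")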

The standard deviation of a data set is the same as that of a discrete random variable that can assume precisely the values from the data set, where the point mass for each value is proportional to its multiplicity in the data set. The term "standard deviation" was first used in writing by Karl Pearson in 1894, following his use of it in lectures. It replaced earlier names for the same idea: Gauss, for example, used "mean error". A useful property of the standard deviation is that, unlike the variance, it is expressed in the same units as the data.

Sigma quality level


Specification limits are the tolerances or performance ranges that customers demand of the products or processes they are purchasing. The figure below illustrates specification limits as the two major vertical lines, where LSL is the lower specification limit, USL is the upper specification limit and T is the target value. The sigma quality level (in short, sigma level) is the distance from the process mean (μ) to the closer specification limit, expressed in units of the standard deviation σ. In practice, we want the process mean to be kept at the target value. However, the process mean during one time period usually differs from that of another time period for various reasons; in other words, the process mean constantly shifts around the target value. To account for these typical shifts of the process mean, Motorola added a shift of 1.5σ to the process mean when computing a process sigma level. With this shift, a 6-sigma quality level corresponds to a 3.4 ppm defect rate.

Fig: Sigma quality levels of 6σ and 3σ.


Sigma level for discrete data:
Suppose two products out of 100 have a quality characteristic outside the specification limits. Then, in one million parts, 20,000 parts will be defective, so the sigma level will be between 3 and 4; more precisely, it comes to about 3.51. The broad classification of sigma levels is shown below.

Sigma level    PPM defectives
1              691,000
2              309,000
3              67,000
4              6,200
5              230
6              3.4

Product Definition:

Fig: DSLA nozzle assembly.

Fig: Injector assembly.

Fig: Body of DSLA-type nozzle, showing the main machining features: shoulder turning, step turning, dowel hole drilling, guide bore drilling, inlet hole drilling, seat profile grinding and pressure chamber machining.

Fig: Sack hole and seat surface; the seat as seen under the microscope.

DMAIC Process in Six Sigma methodology:

The most important methodology in Six Sigma management is perhaps the formalized improvement methodology characterized by the DMAIC (define-measure-analyze-improve-control) process. This DMAIC process works well as a breakthrough strategy, and Six Sigma companies everywhere apply it because it enables real improvements and real results.

Fig: Flow diagram of the DMAIC methodology adopted.

Literature survey -> Case study of manufacturing industry -> Identification of problem -> Industry data collection

Define: Identify specific problem; Define customer requirements; Set goals; SIPOC diagram.
Measure: Measurement system analysis; Data collection plan; Identify variation due to measurement system.
Analyze: Draw conclusions from data verification; Process capability analysis; Determine root causes; Map cause & effect diagram.
Improve: Create improvement ideas; Create solution statement; Implement improvement solutions.
Control: Monitor improvement progress; Make needed adjustments; Establish standard measures to maintain performance.

Improvement results -> Conclusions -> Scope of future work

DEFINE PHASE:


1. Why the project? (The business case): DSLA nozzle parts are hardened at UDA (hardening process) and, after subsequent chamfer grinding, they come to the UVA (high-precision internal grinding) machines for guide bore and seat grinding. The seat and guide bore surfaces are ground on UVA and the parts are then sent to inspection for seat visual checking. At the seat visual checking section the number of parts getting rejected is quite high: from Jan '08 to July '08, an average of 22,600 ppm (parts per million) were rejected due to the bad-seat problem (rejections due to other reasons are not included in the scope of the project). Due to these rejections, the first-pass yield and the type-wise fulfillment of parts decrease. Also, due to the added seat repair operation at UVA, machine utilization decreases while the associated defect cost increases. By successfully implementing the project we can save up to 150 TINR per month.

2. Initial project charter

In view of this scenario, an 8-member team was formed for the project.
Source of the project: seat repair monthly data available with the inspection dept.
Enclosures: rejection analysis of W174.
Characteristic of the project: Seat repair. Measure: Monthly PPM. Defect definition: Seat visually not OK.

3. SIPOC (Supplier-Input-Process-Output-Customer):
SIPOC is a Six Sigma tool. The acronym SIPOC stands for suppliers, inputs, process, outputs, and customers. A SIPOC is completed most easily by starting from the right ("Customers") and working towards the left. Suppliers to the UVA process are the company, TEF1, TEF2, PLP, and MSEB. Inputs to the UVA process are man, machine, electricity, drawings, H.T.-over parts, gauges, tooling, compressed air, JML, cutting oil, check list, instruction charts, program, etc. The process taking place at UVA is internal grinding of the seat surface. Outputs of the UVA process are seat-grinding-over parts, worn-out tooling, grinding muck, PMI chart and re-release chart. Customers of the UVA process are inspection, repair process, stores, scrap yard, Etamic check, honing and profile grinding. Using this data a SIPOC diagram is created.


Fig: SIPOC for UVA (internal grinding) process.

SUPPLIER: Company, Electricity, Maintenance, TEF1, Purchase
INPUT: Man, Machine, Electricity, Drawings, H.T.-over parts, Gauges, Tooling, Compressed air, JML, Cutting oil, Check list, Instruction charts, Program
PROCESS: UVA process (high-precision internal grinding)
OUTPUT: Seat-grinding-over parts, Worn-out tooling, Grinding muck, PMI chart, Re-release chart
CUSTOMER: Inspection, Repair process, Stores, Scrap yard, Etamic check, Honing, Profile grinding

Process flow: Soft stage operations -> Hardening -> UVA process (high-precision internal grinding) -> Seat visual inspection -> Profile grinding

4. CTQ (Critical to Quality) Identification:
A CTQ tree (critical-to-quality tree) is used to decompose broad customer requirements into more easily quantified requirements. The CTQ tree is often used in the Six Sigma methodology. CTQs are derived from customer needs. Customer delight may be an add-on while deriving critical-to-quality parameters; for cost considerations one may remain focused on customer needs at the initial stage.

CTQs (critical to quality) are the key measurable characteristics of a product or process whose performance standards or specification limits must be met in order to satisfy the customer. They align improvement or design efforts with customer requirements. CTQs represent the product or service characteristics that are defined by the customer (internal or external). They may include the upper and lower specification limits or any other factors related to the product or service. A CTQ usually must be interpreted from a qualitative customer statement into an actionable, quantitative business specification. In layman's terms, CTQs are what the customer expects of a product: the spoken needs of the customer. The customer may often express this in plain English, but it is up to us to convert these needs into measurable terms using tools such as DFMEA, etc. The requirements of the output of the process and measures of critical process issues are collected as CTQs. They have to be derived from customer/business requirements, risks, economics, and regulations. CTQs can be a combination of CTBs and CTCs, where CTB means critical to business and CTC means critical to customer.


The CTQ tree is generated because it:
- translates broad customer/business requirements into specific critical-to-quality (CTQ) requirements;
- helps the team move from the high-level Big Y to specific, measurable CTCs/CTBs (small y's);
- ensures that all aspects of the need are addressed.
A CTQ tree is generated when customer/business requirements are unspecific, or when the needs from the customer are complex and broad.

Steps to create a CTQ tree:

1. List customer/business needs.
2. Identify the major drivers for these needs (major meaning those which will ensure that the need is addressed).
3. Break each driver into greater detail.
4. Stop the breakdown at each level when you have reached sufficiently detailed information to measure whether or not you meet the customer/business need.

Fig: CTQ tree for UVA process. The goal "to reduce UVA process repair and scrap" branches into Repair (taper repair, guide bore repair, seat repair) and Scrap (guide bore scrap, seat scrap).


From the CTQ tree there are five elements of UVA process repair and scrap. To select the right CTQ for the project, a Pareto analysis was performed on the data gathered from Jan '08 to July '08.

Pareto Analysis:
The Pareto chart was introduced in the 1940s by Joseph M. Juran, who named it after the Italian economist and statistician Vilfredo Pareto (1848-1923). It is applied to distinguish the "vital few from the trivial many", as Juran formulated the purpose of the Pareto chart. It is closely related to the so-called 80/20 rule: 80% of the problems stem from 20% of the causes, or, in Six Sigma terms, 80% of the poor values in Y stem from 20% of the Xs. In the Six Sigma improvement methodology, the Pareto chart has two primary applications. One is for selecting appropriate improvement projects in the define phase; here it offers a very objective basis for selection, based on, for example, frequency of occurrence, cost saving and improvement potential in process performance. The other primary application is in the analyze phase, for identifying the vital few causes (Xs) that will yield the greatest improvement in Y if appropriate measures are taken.

A procedure to construct a Pareto chart is as follows:
1) Define the problem and process characteristics to use in the diagram.
2) Define the period of time for the diagram, for example weekly, daily, or per shift. Quality improvements over time can later be made from the information determined within this step.
3) Obtain the total number of times each characteristic occurred.
4) Rank the characteristics according to the totals from step 3.
5) Plot the number of occurrences of each characteristic in descending order in a bar graph, along with a cumulative percentage overlay.
6) Trivial columns can be lumped under one column designation; however, care must be exercised not to omit small but important items.
Fig: Pareto analysis of total rejections (Jan '08 to July '08).

Defect         Count   Percent   Cum %
Seat repair    10718    54.1      54.1
G.B. repair     5520    27.9      82.0
Taper bad       1613     8.1      90.2
G.B. scrap      1448     7.3      97.5
Seat scrap       500     2.5     100.0

From this Analysis we clearly see that Seat repair is the most critical of all rejections.
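For illustration (not from the report), the Pareto table above can be rebuilt from the raw counts with a few lines of Python; the dictionary below simply restates the counts already given:

    defects = {
        "Seat repair": 10718,
        "G.B. repair": 5520,
        "Taper bad": 1613,
        "G.B. scrap": 1448,
        "Seat scrap": 500,
    }

    total = sum(defects.values())
    cumulative = 0.0
    # Sort in descending order of count and accumulate percentages.
    for name, count in sorted(defects.items(), key=lambda kv: kv[1], reverse=True):
        percent = 100.0 * count / total
        cumulative += percent
        print(f"{name:<12} {count:>6} {percent:6.1f} {cumulative:6.1f}")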

Kano model of Quality:


The Kano model is a theory of product development and customer satisfaction developed in the 1980s by Professor Noriaki Kano, which classifies customer preferences into five categories: attractive, one-dimensional, must-be, indifferent, and less-the-better.

The Kano model offers some insight into the product attributes which are perceived to be important to customers. The purpose of the tool is to support product specification and discussion through better development team understanding. Kano's model focuses on differentiating product features, as opposed to focusing initially on customer needs. Kano also produced a methodology for mapping consumer responses to questionnaires onto his model.


As per the Kano model of quality, a CTQ specification table is generated giving the specifications of the rejections.

Fig: CTQ table

CTQ                Measure       Specification              Defect definition                 Kano status
G.B. repair        Monthly PPM   --                         G.B. size out of specification    Must-be
Seat repair        Monthly PPM   Seat damage / finish bad   Seat visually not OK              Less the better
Taper bad repair   Monthly PPM   --                         Taper out of specification        Less the better
G.B. scrap         Monthly PPM   --                         G.B. size out of specification    Less the better
Seat scrap         Monthly PPM   Seat damage                Seat visually not OK              Less the better

5. FINAL PROJECT CHARTER:


BOSCH Six Sigma project charter
Project: To reduce UVA process (high-precision internal grinding process) seat repair.
Dept: MFN2   Page: 1   Revision: 0   Date: 05/08/2008

1. Background & reasons for selecting the project: Average seat repair from Jan '08 to May '08 is 22,600 ppm.
2. Aim of the project: To reduce defect cost; to increase first-pass yield; to increase m/c utilization; to increase type-wise fulfillment; to reduce process seat repair from 22,600 to 11,300 ppm.
3. Sponsor: Mr. Klaassen Enno.
4. Team leader: Kulkarni Rahul G.
5. Team members: 1. Mr. Ganesh B., 2. Mr. Mishra Vipin, 3. Mr. Deshak Rahul, 4. Mr. Singhal K.P., 5. Mr. Jadhav Kailash, 6. Mr. Khiratkar Ajay, 7. Mr. Vidwans Devendra, 8. W174 associates, 9. W174 foremen, 10. W172 associates, 11. W172 foremen.
6. Characteristics of the output of the product/process & its measures: Characteristic: Seat repair; Measure: Monthly PPM; Defect identification: Seat visually not OK.
7. Source of the project: Monthly repair data available with inspection.
8. Benefits / cost impact: Process seat repair reduction from 22,600 ppm to 11,300 ppm (expected saving of 150 TINR per month).
9. Meeting frequency: Every Thursday at 03:30 pm.
10. Enclosures: Rejection analysis of W174.

MEASURE PHASE:

Collect baseline data on defects & possible causes

Develop a sampling strategy

Validate your measurement system using Gauge R & R.

Analyze patterns in data

Determine process capability

Fig: Approach to measure phase.

Creating a data collection plan: As per the approach specified, a plan for collecting the baseline data was created. It is given below.

Data Collection Plan

Action: Data collection from seat rejection.
Question to answer: Is the body seat visually OK?
What is measured: Seat defects.
Measure type / data type: Discrete data.
How measured: Visually.
How/where recorded: On the attached form, lot-wise.
Sampling: 100%.
Related conditions to record: --

Fig: Data collection plan


During the weekly project meeting it was decided to change the format for recording the parts checked at the seat visual section, as the existing format was outdated. With the help of the line foremen, a new format was developed through brainstorming. It is as follows:

New format developed for Seat visual section:


BOSCH Nashik plant
Header fields: Date, Shift, Name, Token No., Item No., Lot No., Type.
Quantity fields: Qty. Inspected, Qty. OK, Qty. Rejected.
Seat defect columns: Bad finish, Rings, Patches, No sack hole, Rubbing at sack hole, Unground seat, Scrap.

Segregation of defects observed at seat visual section:

Table: Day-wise segregation of seat defects over 20 working days (6/8/2008 to 2/9/2008). Totals for the period: 5,058 parts checked, segregated as bad finish (rough surface) 2,566; rings 2,150; patches 219; and the remaining defects (unground seat, no sack hole, and rubbing at the sack hole end due to burr) 122.

Pareto Analysis of Seat rejections:

Fig: Pareto chart of seat defect segregation.

Defect          Count   Percent   Cum %
Rough finish     2566    50.7      50.7
Rings            2150    42.5      93.3
Patches           219     4.3      97.6
Unground           79     1.6      99.1
Others             43     0.9     100.0

Measurement System Analysis:


A Measurement System Analysis, abbreviated MSA, is a specially designed experiment that seeks to identify the components of variation in a measurement. Just as processes that produce a product may vary, the process of obtaining measurements and data may have variation and produce defects. A Measurement Systems Analysis evaluates the test method, the measuring instruments, and the entire process of obtaining measurements to ensure the integrity of the data used for analysis (usually quality analysis) and to understand the implications of measurement error for decisions made about a product or process. MSA is an important element of the Six Sigma methodology and of other quality management systems. MSA analyzes the collection of equipment, operations, procedures, software and personnel that affects the assignment of a number to a measurement characteristic. A Measurement Systems Analysis considers the following:
- selecting the correct measurement and approach;
- assessing the measuring device;
- assessing procedures and operators;
- assessing any measurement interactions;
- calculating the measurement uncertainty of individual measurement devices and/or measurement systems.
Common tools and techniques of Measurement Systems Analysis include calibration studies, fixed-effect ANOVA, components of variance, attribute gage study, Gage R&R, ANOVA Gage R&R, destructive testing analysis and others. The tool selected is usually determined by the characteristics of the measurement system itself.

Accuracy and Precision:


In the fields of engineering, industry and statistics, accuracy is the degree of closeness of a measured or calculated quantity to its actual (true) value. Accuracy is closely related to precision, also called reproducibility or repeatability, which is the degree to which further measurements or calculations show the same or similar results. The results of a calculation or a measurement can be accurate but not precise, precise but not accurate, neither, or both. A measurement system or computational method is called valid if it is both accurate and precise. The related terms are bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability), respectively.


Fig: Accuracy indicates proximity to the true value, precision to the repeatability or reproducibility of the measurement.

Accuracy versus precision:


Accuracy is the degree of veracity, while precision is the degree of reproducibility. The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of the arrows to the bull's-eye at the target center: arrows that strike closer to the bull's-eye are considered more accurate, and the closer a system's measurements are to the accepted value, the more accurate the system is considered to be. To continue the analogy, if a large number of arrows are shot, precision is the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if the shot were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise, since they all struck close to the same spot, even if not necessarily near the bull's-eye. The measurements are precise, though not necessarily accurate.

Fig: High accuracy, but low precision

Fig: High precision, but low accuracy

However, it is not possible to reliably achieve accuracy in individual measurements without precision: if the arrows are not grouped close to one another, they cannot all be close to the bull's-eye. (Their average position might be an accurate estimate of the bull's-eye, but the individual arrows are inaccurate.) See also circular error probable for an application of precision to the science of ballistics.


Factors affecting MSA include:
- Equipment: measuring instrument, calibration, fixturing, etc.
- People: operators, training, education, skill, care.
- Process: test method, specification.
- Samples: materials, items to be tested (sometimes called "parts"), sampling plan, sample preparation, etc.
- Environment: temperature, humidity, conditioning, pre-conditioning.
- Management: training programs, metrology system, support of people, support of the quality management system, etc.

ANOVA Gauge Repeatability & Reproducibility: (GRR study)


ANOVA Gauge R&R (ANOVA gauge repeatability & reproducibility) is a Measurement Systems Analysis technique which uses an analysis of variance (ANOVA) model to assess a measurement system. The evaluation of a measurement system is not limited to gauges (or gages) but applies to all types of measuring instruments, test methods, and other measurement systems. ANOVA Gauge R&R measures the amount of variability induced in measurements that comes from the measurement system itself and compares it to the total variability observed, to determine the viability of the measurement system. Several components affect a measurement system, including:
- Measuring instruments: the gauge or instrument itself and all mounting blocks, supports, fixtures, load cells, etc. Ease of use, sloppiness among mating parts and "zero" blocks are examples of sources of variation in the measurement system.
- Operators (people): the ability and/or discipline of a person to follow the written or verbal instructions.
- Test methods: how to set up the device, how to fixture the parts, how to record the data, etc.
- Specification: the measurement is reported against a specification or a reference value. The range or the engineering tolerance does not affect the measurement, but it is an important factor affecting the viability of the measurement system.
- Parts (what is being measured): some items are easier to measure than others; a measurement system may be good for measuring steel block length but not for measuring rubber pieces.


There are two important aspects on a Gauge R&R:

1. Repeatability is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. A measurement may be said to be repeatable when this variation is smaller than some agreed limit. Repeatability conditions include:
- the same measurement procedure;
- the same observer;
- the same measuring instrument, used under the same conditions;
- the same location;
- repetition over a short period of time.

The repeatability coefficient is a precision measure which represents the value below which the absolute difference between two repeated test results may be expected to lie with a probability of 95%. The standard deviation under repeatability conditions is part of precision and accuracy.
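A minimal sketch (added here for illustration, assuming the usual normal-model formula rather than anything stated in the report) of how the repeatability coefficient relates to the repeatability standard deviation:

    import math

    def repeatability_coefficient(sd_within: float) -> float:
        """95% limit for the absolute difference between two repeated measurements
        under a normal model: 1.96 * sqrt(2) * sigma_within (about 2.77 * sigma_within)."""
        return 1.96 * math.sqrt(2) * sd_within

    print(repeatability_coefficient(0.5))   # e.g. sigma_within = 0.5 -> about 1.39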

2. Reproducibility is the variability induced by the operators. It is the variation induced when
different operators (or different laboratories) measure the same part. Reproducibility is one of the main principles of the scientific method, and refers to the ability of a test or experiment to be accurately reproduced, or replicated, by someone else working independently. The results of an experiment performed by a particular researcher or group of researchers are generally evaluated by other independent researchers who repeat the same experiment themselves, based on the original experimental description. Then they see if their experiment gives similar results to those reported by the original group. The result values are said to be commensurate if they are obtained (in distinct experimental trials) according to the same reproducible experimental description and procedure. Reproducibility is different from repeatability, which measures the success rate in successive experiments, possibly conducted by the same experimenters. Reproducibility relates to the agreement of test results with different operators, test apparatus, and laboratory locations. It is often reported as a standard deviation.

How to perform GR & R:


The Gauge R&R (GRR) is performed by measuring parts using the established measurement system. The goal is to capture as many sources of measurement variation as possible, so they can all be assessed and addressed. Please note that the purpose is not to "pass". A small variation reported on a GRR may be because an important source of error was missed during the study. To capture reproducibility errors, multiple operators are needed. Some (ASTM code) call for at least ten operators (or laboratories) but others use only 2 or 3 to measure the same parts. To capture repeatability errors, the same part is usually measured several times per operator. To capture


interactions of operators with parts (e.g. one part may be more difficult to measure than another), usually between 5 and 10 parts are measured. There are no universal criteria for the minimum requirements of the GRR matrix; it is up to the quality engineer to assess risks depending on how critical the measurement is and how costly the measurements are. The 30x2x2 layout (30 parts, 2 operators, 2 repetitions) is acceptable for some studies, although it has very few degrees of freedom for the operator component. Several methods of determining the sample size and degree of replication are available. In this project's GRR study, I, along with a quality over-checker, took 30 parts and checked the angle of each part twice. The recorded measurements were fed into standard Minitab software and the results obtained are as follows:

Measuring table 20249: Gage R&R = 18.82%, number of distinct categories = 8.
Measuring table 19389: Gage R&R = 13.23%, number of distinct categories = 10.

Acceptance criteria:
GRR < 10%: gauge is acceptable.
10% < GRR < 30%: gauge is conditionally acceptable.
GRR > 30%: gauge is unacceptable and must be replaced/modified.

Both measuring tables therefore fall in the conditionally acceptable band.
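The GRR figures above came from Minitab. As a rough, generic illustration of what the ANOVA Gauge R&R calculation does (not the study's actual data or software), the sketch below implements the standard crossed-ANOVA variance-component method for a parts x operators x replicates array; the 30 x 2 x 2 layout, the simulated measurements and the function name are assumptions made for the example:

    import numpy as np

    def anova_gauge_rr(x):
        """Crossed ANOVA Gauge R&R for x with shape (parts, operators, replicates).
        Returns (%GRR of study variation, number of distinct categories)."""
        p, o, r = x.shape
        grand = x.mean()
        part_means = x.mean(axis=(1, 2))
        oper_means = x.mean(axis=(0, 2))
        cell_means = x.mean(axis=2)

        ss_part = o * r * ((part_means - grand) ** 2).sum()
        ss_oper = p * r * ((oper_means - grand) ** 2).sum()
        ss_cell = r * ((cell_means - grand) ** 2).sum()
        ss_po = ss_cell - ss_part - ss_oper          # part*operator interaction
        ss_rep = ((x - grand) ** 2).sum() - ss_cell  # repeatability (equipment)

        ms_part = ss_part / (p - 1)
        ms_oper = ss_oper / (o - 1)
        ms_po = ss_po / ((p - 1) * (o - 1))
        ms_rep = ss_rep / (p * o * (r - 1))

        # Variance components (negative estimates are truncated to zero).
        var_rep = ms_rep
        var_po = max((ms_po - ms_rep) / r, 0.0)
        var_oper = max((ms_oper - ms_po) / (p * r), 0.0)
        var_part = max((ms_part - ms_po) / (o * r), 0.0)

        var_grr = var_rep + var_oper + var_po
        var_total = var_grr + var_part
        pct_grr = 100.0 * np.sqrt(var_grr / var_total)
        ndc = int(np.floor(1.41 * np.sqrt(var_part / var_grr))) if var_grr > 0 else 0
        return pct_grr, ndc

    # Simulated 30 parts x 2 operators x 2 repetitions, mimicking the study layout.
    rng = np.random.default_rng(0)
    part_effect = rng.normal(0.0, 1.0, size=(30, 1, 1))
    measurements = part_effect + rng.normal(0.0, 0.2, size=(30, 2, 2))
    print(anova_gauge_rr(measurements))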

Misconceptions about GR & R:


Need only one GRR per family of gauges: It is usual to say "there is an acceptable GRR for this caliper". This statement is false, as a GRR is for the measurement system, which includes the part, specification, operator and method. As an example, measuring a steel block with a caliper may be achieved with good precision, but the same caliper may not be suitable for measuring soft rubber parts that deform while being measured.

The GRR will not pass using parts, so it has to be done with standard weights and blocks: A GRR done in this way will assess the precision while measuring standard weights; the device might still not be suitable for measuring that specific type of part. If the part "changes" while being measured, this has to be counted as a measurement system error.

Need to report GRR results on PPAP documentation for everything that is measured: This is not necessarily a requirement. The quality engineer usually makes an educated assessment. If the characteristic is critical to safety, a valid GRR is required; if, instead, there is enough understanding that some particular part is easy to measure with acceptable precision, a formal GRR is not required. Customers may ask for additional GRRs during PPAP reviews.

Knowing that a GRR is not good and still using the measurement system: This does not make sense. It is like using bent calipers to get measurements: you get a number, but it does not mean anything.

Performing a GRR is very expensive: To perform a GRR, usually a number of parts (sometimes between 5 and 10) is required to be measured by at least 3 operators (some suggest ten or more) 2 to 3 times, so the measurement costs are those associated with the additional measurements. For simple devices this may not be very costly, and the result is a known measurement error that can be used to assess all subsequent measurements. The costs can be higher for destructive testing.

GRRs must be within 10% to pass: There are AIAG guidelines for GRR errors relative to the specification, and for what to report in a PPAP process. The final call is between the supplier and the customer, and it is a function of the criticality of the characteristic and the assessed measurement error. GRR is a tool that helps in making this assessment, but it does not give you the answer.

Process Capability Analysis: Process capability analysis was performed to find out the actual state of the process. Minitab was used to draw a process capability analysis chart for seat rejections measured over a month. As the data are discrete, the result is expressed in terms of PPM (defective parts per million opportunities). The Minitab output obtained for the analysis is shown below.
Fig: Capability analysis of seat visual process (binomial). P chart: P-bar = 0.022624, UCL = 0.026045, LCL = 0.019202. Summary statistics (using 95.0% confidence): % defective = 2.26 (lower CI 2.22, upper CI 2.30); target = 0.00; PPM defective = 22,624 (lower CI 22,217, upper CI 23,035); process Z = 2.0024 (lower CI 1.9947, upper CI 2.0100).
Fig 8: Process capability analysis of the seat visual process before implementing the DMAIC methodology. From the results, the PPM defective level is 22,624 (i.e. 22,624 defectives per million parts). The table below shows the sigma level corresponding to different PPM rejection levels.

Sigma level    PPM defectives
1              691,000
2              309,000
3              67,000
4              6,200
5              230
6              3.4

Fig: PPM defectives and sigma level comparison. By interpolating between the 3-sigma and 4-sigma levels, the sigma level of the seat visual process comes out to be about 3.5 sigma.
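For illustration (not part of the report), the interpolation above can be replaced by an exact conversion from the observed ppm level to a sigma level under the conventional 1.5σ shift; SciPy's inverse normal is assumed to be available:

    from scipy.stats import norm

    def sigma_level(ppm_defective: float, shift: float = 1.5) -> float:
        """Short-term sigma level implied by a long-term defect rate in ppm,
        using the conventional 1.5-sigma shift."""
        p = ppm_defective / 1_000_000.0
        return norm.ppf(1.0 - p) + shift

    print(sigma_level(22_624))   # about 3.5 for the observed seat rejection level
    print(sigma_level(3.4))      # about 6.0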


ANALYZE PHASE:
To analyze the defects and their generation, brainstorming was used. The suspected sources of variation were identified using a tree diagram.

Brainstorming: It is a group creativity technique designed to generate a large number of ideas for
the solution of a problem. The method was first popularized in the late 1930s by Alex Faickney Osborn in a book called Applied Imagination. Osborn proposed that groups could double their creative output with brainstorming. Four basic rules were followed in brainstorming. These were intended to reduce social inhibitions among group members, stimulate idea generation, and increase the overall creativity of the group.

1. To keep Focus on quantity: This rule was a means of enhancing divergent production, aiming
to facilitate problem solving through the maxim, quantity breeds quality. The assumption is that the greater the number of ideas generated, the greater the chance of producing a radical and effective solution.

2. To withhold criticism: In brainstorming, criticism of ideas generated was put 'on hold'. Instead,
participants focused on extending or adding to ideas, reserving criticism for a later 'critical stage' of the process. By suspending judgment, participants feel free to generate unusual ideas.

3. Welcome unusual ideas: To get a good and long list of ideas, unusual ideas were welcomed.
They can be generated by looking from new perspectives and suspending assumptions. These new ways of thinking may provide better solutions.

4. To combine and improve ideas: Good ideas may be combined to form a single better good
idea, as suggested by the slogan "1+1=3". It is believed to stimulate the building of ideas by a process of association.

Method: The method followed during brainstorming is as follows.

Set the problem: Before a brainstorming session, it is critical to define the problem. The problem must be clear, not too big, and captured in a specific question. If the problem is too big, the facilitator should break it into smaller components, each with its own question.

Create a background memo: The background memo is the invitation and informational letter for the participants, containing the session name, problem, time, date, and place. The problem is described in the form of a question, and some example ideas are given. The memo is sent to the participants well in advance, so that they can think about the problem beforehand.

Select participants: The facilitator composes the brainstorming panel, consisting of the participants and an idea collector. A group of 10 or fewer members is generally more productive. Many variations are possible, but the following composition is suggested: several core members of the project who have proved themselves; several guests from outside the project, with affinity to the problem; and one idea collector who records the suggested ideas.

Create a list of lead questions: During the brainstorming session the creativity may decrease. At this moment, the facilitator should stimulate creativity by suggesting a lead question to answer, such as "Can we combine these ideas?" or "How about looking at it from another perspective?". It is best to prepare a list of such leads before the session begins.

The process:
- Participants who have ideas but were unable to present them are encouraged to write down the ideas and present them later.
- The idea collector should number the ideas, so that the chairperson can use the number to encourage an idea generation goal, for example: "We have 44 ideas now, let's get it to 50!"
- The idea collector should repeat the idea in the words he or she has written, verbatim, to confirm that it expresses the meaning intended by the originator.
- When more participants are having ideas, the one with the most closely associated idea should have priority. This is to encourage elaboration on previous ideas.
- During a brainstorming session, managers and other superiors may be discouraged from attending, as this may inhibit and reduce the effect of the four basic rules, especially the generation of unusual ideas.

Evaluation Brainstorming is not just about generating ideas for others to evaluate and select. Usually the group itself will, in its final stage, evaluate the ideas and select one as the solution to the problem proposed to the group. The solution should not require resources or skills the members of the group do not have or cannot acquire. If acquiring additional resources or skills is necessary, that needs to be the first part of the solution. There must be a way to measure progress and success. The steps to carry out the solution must be clear to all, and amenable to being assigned to the members so that each will have an important role. There must be a common decision making process to enable a coordinated effort to proceed, and to reassign tasks as the project unfolds. There should be evaluations at milestones to decide whether the group is on track toward a final solution. There should be incentives to participation so that participants maintain their efforts.


Fig: Tree diagram created from the brainstorming session for input part parameters. The seat defects at UVA (rough finish, rings, patches, no sack hole, rubbing at sack hole, unground seat) feed UVA process repair and scrap, and the suspected input-part sources of variation identified were:
- Chamfer height variation and uneven chamfer band.
- Aqueous cleaning not OK (jet broken, pump pressure low).
- Guide to shaft TR not OK (TR not checked after TBT as per frequency; TR more than 100 microns; measured by gauge).
- Vibrations and chatter marks on the seat in the soft stage (on spinner and Retco machines).
- Roundness, straightness and guide bore to seat TR (no specification in the drawing).
- Parts without sack hole from the soft stage (sack hole drill breakage on Retco; possibility of failure of the 100% sack-hole-checking poka yoke on all 5 spinners; manual element).
- Type mix-up (P type in DSLA and vice versa; possible on all operations during lot change: 80% on Benzinger, 10% on ECM, remaining 10% elsewhere).
- Guide bore to shaft TR bad (not checked after TBT as per frequency; TR more than 100 microns).
- Seat TR with respect to guide bore more than 70 microns.
- Seat angle in the soft stage outside the specification of 58.8 (+/- 0.2).
- Chamfer mandrel angle in the hard stage more/less than specification.


Fig: Tree diagram created from the brainstorming session for machine-related parameters. The suspected machine-related sources of variation for UVA process repair (seat rejections) included: vibration; workhead spindle RPM and spindle height; female center; job clamping pressure; loading spring wear; loading/unloading alignment of the component; loading cylinder and cylinder swing; angle master; checking bench; machine parameters (spindles, spindle cooling, initial setting, setting parameters); new seat wheel and new wheel diameter; adaptor; dressing ring; coolant systems (grinding/dressing coolant, pressure, periodic replacement); tip-breakage-sensing poka yoke; grinding wheel, dressing depth of cut and dressing frequency; feed rate and grinding rate; and visual inspection microscopes. For each parameter the diagram also recorded the present condition or value and the checking frequency to be fixed.

From the two tree diagrams created above it is clear that there are 7 parameters related to the input parts and 23 machine-related parameters. To know the impact of each parameter on seat rejections, it was necessary to validate each parameter using statistical methods. In Six Sigma, the method used for root-cause validation is hypothesis testing.

Statistical hypothesis testing:


A statistical hypothesis test is a method of making statistical decisions using experimental data. It is sometimes called confirmatory data analysis. In frequentist probability, these decisions are almost always made using null-hypothesis tests; that is, tests that answer the question: assuming that the null hypothesis is true, what is the probability of observing a value for the test statistic that is at least as extreme as the value that was actually observed? A use of hypothesis testing is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.

Null hypothesis (H0) formally describes some aspect of the statistical behaviour of a set of data; this description is treated as valid unless the actual behaviour of the data contradicts this assumption. Thus, the null hypothesis is contrasted against another hypothesis. Statistical hypothesis testing is used to make a decision about whether the data contradicts the null hypothesis: this is called significance testing. A null hypothesis is never proven by such methods, as the absence of evidence against the null hypothesis does not establish it. In other words, one may either reject, or not reject the null hypothesis; one cannot accept it. Failing to reject it gives no strong reason to change


decisions predicated on its truth, but it also allows for the possibility of obtaining further data and then re-examining the same hypothesis. An alternative hypothesis is always set out for a particular significance test in conjunction with a null hypothesis. Although in some cases it may seem reasonable to consider the alternative hypothesis as simply the negation of the null hypothesis, this would be misleading. In fact, significance testing and statements about hypotheses always take place within the context of a set of assumptions (which may unfortunately be unstated). This provides a way of considering alternative hypotheses which are the negation of the null hypothesis within the context of the overall assumptions. However, not all alternative hypotheses are of this "negation type": the simplest cases are directional hypotheses. An important case arises in testing for differences across a number of different groups, where the null hypothesis may be "no difference across groups", with the alternative hypothesis being that the mean values for the groups would be in a certain pre-specified order. In the theory of statistical hypothesis testing, the triple of assumptions, null hypothesis and alternative hypothesis provides the basis for choosing an appropriate test statistic.

Example: one may want to compare the test scores of two random samples of men and women, and ask whether or not one group (population) has a true mean score different from the other. A null hypothesis would be that the mean score of the male population is the same as the mean score of the female population:

    H0: μ1 = μ2

where H0 is the null hypothesis, μ1 is the mean of population 1, and μ2 is the mean of population 2. Alternatively, the null hypothesis can postulate that the two samples are drawn from the same population, so that the variance and shape of the distributions are equal, as well as the means.

Formulation of the null hypothesis is a vital step in testing statistical significance. One can then establish the probability of observing the obtained data (or data more different from the prediction of the null hypothesis) if the null hypothesis is true. That probability is what is commonly called the "significance level" of the results. That is, in scientific experimental design, we may predict that a particular factor will produce an effect on our dependent variable: this is our alternative hypothesis. We then consider how often we would expect to observe our experimental results, or results even more extreme, if we were to take many samples from a population where there was no effect (i.e. we test against our null hypothesis). If we find that this happens rarely (up to, say, 5% of the time), we can conclude that our results support our experimental prediction: we reject our null hypothesis.
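As a small, self-contained illustration of the two-sample comparison described above (the scores below are made-up numbers, not project data), SciPy's t-test can be used to test H0: μ1 = μ2:

    from scipy import stats

    # Hypothetical example: test scores for two random samples (H0: mu1 == mu2).
    men   = [72, 68, 75, 80, 66, 71, 69, 74]
    women = [78, 74, 69, 77, 82, 73, 75, 79]

    t_stat, p_value = stats.ttest_ind(men, women, equal_var=False)  # Welch's t-test
    print(t_stat, p_value)
    if p_value <= 0.05:
        print("Reject H0 at the 5% level: the means differ significantly.")
    else:
        print("Fail to reject H0 at the 5% level.")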


P-value: In statistical hypothesis testing, the p-value is the probability of obtaining a result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. The fact that p-values are based on this assumption is crucial to their correct interpretation. The lower the p-value, the less likely the result under the null hypothesis, and so the more "significant" the result in the sense of statistical significance. One often uses p-values of 0.05 or 0.01, corresponding to a 5% or 1% chance of an outcome that extreme, given the null hypothesis. More technically, a p-value of an experiment is a random variable defined over the sample space of the experiment such that its distribution under the null hypothesis is uniform on the interval [0, 1]. Many p-values can be defined for the same experiment. Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level, often represented by the Greek letter α (alpha). If the level is 0.05, then results that are only 5% likely or less, given that the null hypothesis is true, are deemed extraordinary.

As an example, consider testing whether a coin is fair. We have: null hypothesis (H0): the coin is fair; observation (O): 14 heads out of 20 flips; and probability of the observation given H0: p(O | H0) = 0.0577 × 2 (two-tailed) = 0.1154 (expressed as a percentage, 11.54%). The calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis: the observed result of 14 heads out of 20 flips can be ascribed to chance alone, as it falls within the range of what would happen 95% of the time were this in fact the case. In this example, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from the expected outcome is just small enough to be reported as "not statistically significant at the 5% level". However, had a single extra head been obtained, the resulting p-value (two-tailed) would be 0.0414 (4.14%). In that case the null hypothesis (that the observed result of 15 heads out of 20 flips can be ascribed to chance alone) is rejected; such a finding would be described as "statistically significant at the 5% level".
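The coin-flip p-values quoted above can be checked with a few lines of Python (added here for illustration, standard library only):

    from math import comb

    # Two-tailed p-value for observing 14 or more heads (or, symmetrically,
    # 14 or more tails) in 20 flips of a fair coin.
    n, k = 20, 14
    p_one_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    print(round(p_one_tail, 4), round(2 * p_one_tail, 4))   # 0.0577, 0.1154

    # With one more head (15 of 20) the two-tailed p-value drops below 0.05:
    p15 = 2 * sum(comb(n, i) for i in range(15, n + 1)) / 2 ** n
    print(round(p15, 4))                                     # 0.0414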

Some common misunderstandings about p-values:

1. The p-value is not the probability that the null hypothesis is true. (This false conclusion is used to justify the "rule" of considering a result to be significant if its p-value is very small.) In fact, frequentist statistics does not, and cannot, attach probabilities to hypotheses. Comparison of Bayesian and classical approaches shows that a p-value can be very close to zero while the posterior probability of the null is very close to unity. This is the Jeffreys-Lindley paradox.

2. The p-value is not the probability that a finding is "merely a fluke." (Again, this conclusion arises from the "rule" that small p-values indicate significant differences.) As the calculation of a p-value is based on the assumption that a finding is the product of chance alone, it patently cannot also be used to gauge the probability of that assumption being true. This is subtly different from the real meaning, which is that the p-value is the chance that the null hypothesis explains the result: the result might not be "merely a fluke" and still be explicable by the null hypothesis with confidence equal to the p-value.

3. The p-value is not the probability of falsely rejecting the null hypothesis. This error is a version of the so-called prosecutor's fallacy.

4. The p-value is not the probability that a replicating experiment would not yield the same conclusion.

5. 1 - (p-value) is not the probability of the alternative hypothesis being true.

6. The significance level of the test is not determined by the p-value. The significance level of a test is a value that should be decided upon by the agent interpreting the data before the data are viewed, and it is compared against the p-value (or any other statistic) calculated after the test has been performed.

7. The p-value does not indicate the size or importance of the observed effect (compare with effect size).


Validation of all SSVs using Statistical testing: (Input part parameters)

For each suspected source of variation (SSV) the root cause and sub cause were identified, a trial was planned, and the outcome was evaluated, in most cases with a 2-proportions test. The trials on the input part parameters are summarised below.

Aqueous cleaning not ok (jet broken, pump pressure low): dirt left on the part causes it to locate incorrectly on the chamfer grinding machine, which was suspected to result in seat rejections. Trial (8-Nov-08): 275 parts were processed with aqueous cleaning and 25 parts without, on the same chamfer grinding and UVA machines, and compared with a 2-proportions test. Result: 0 bad parts among the 275 cleaned parts and 0 bad parts among the 25 uncleaned parts. Conclusion: the impact of aqueous cleaning on chamfer height variation is insignificant.

Chamfer height variation: part location in the UVA becomes improper when the chamfer height varies, which was suspected to cause seat rejections. Trial (15-Nov-08): 30 parts with chamfer height on the low side (-30 to -10), 60 parts within specification (-10 to +10) and 30 parts on the high side (+10 to +30) were processed on the UVA and compared with a 2-proportions test. Result: all parts came out ok; the chamfer height variation did not cause any defect on the UVA. Conclusion: the impact of chamfer height variation on seat rejections is insignificant.

Uneven chamfer band (guide-to-shaft TR not ok; the TR is not checked in the soft stage): a trial TR checking gauge was developed. Trial (3-Mar-09): 50 parts with TR greater than 85 and 50 normal parts were processed on the UVA and compared with a 2-proportions test. Result: 12 bad parts among the 50 TR-bad parts against 1 bad part among the 50 TR-ok parts. Conclusion: the impact of uneven chamfer band on seat rejections is significant.

Drill damage on the spinner and Retco machines (roundness, straightness and chatter marks on the seat in the soft stage; GB-to-seat TR not checked; vibration in the soft stage): drill damage produces vibration and deep lines on the seat. Trial (16-Dec-08 to 8-Jan-09): parts with chatter marks were sorted out, and 50 such parts were processed on the UVA along with 50 ok parts and compared with a 2-proportions test. Result: 49 bad parts among the 50 with chatter marks against 1 bad part among the 50 without. Conclusion: the impact of drill damage in the soft stage on seat rejections is significant.

Drill life: the drill form deteriorates with usage, so parts produced late in the tool life were suspected to have a rougher seat. Trial (16-Dec-08 to 8-Jan-09): one part from each spinner and Retco machine, at different stages of tool life, was sent to the FMR lab for seat form checking, and the tool life numbers were noted. Result: the seat Rz and Rmax values of all parts were within limits. Conclusion: the impact of drill life on seat rejections is insignificant.

No-sack-hole parts from the soft stage (poka yoke on the spinner and Retco machines switched off or failed for various reasons): at least 15 no-sack-hole parts, preferably DSLA normal shafts, were collected. Trial (13-Jan-09): one no-sack-hole part was put on UVA 20315 and its effect on seat rejections observed, with a 2-proportions comparison against normal parts. Result: the no-sack-hole part breaks the grinding wheel tip and the machine stops immediately; during redressing of the wheel, 50 parts came out bad. Conclusion: the impact of no-sack-hole parts on seat rejections is significant.

Part type mix up (a manual element is present, so mix up is possible on all operations, largely on the Benzinger and ECM operations; the elevator condition in the soft stage is poor): at least 15 mix-up parts were collected. Trial (20-Nov-08): one mixed-up part (a p-type part in a DSLA lot) was put on UVA 20315 and its effect on seat rejections observed, with a 2-proportions comparison against normal parts. Result: the p-type part breaks the adaptor and the grinding wheel, which results in 50 bad parts in 50, whereas with normal parts 0 bad in 50. Conclusion: the impact of type mix up on seat rejections is significant.

Seat angle more (angle not checked at the prescribed frequency; drill life over, drill resharpening due): Trial (21-Nov-08 to 28-Nov-08): 285 parts with seat angle on the high side were processed up to the seat visual check along with 300 angle-ok parts and compared with a 2-proportions test. Result: 3 bad parts among the 285 angle-more parts against 0 bad among the 300 angle-ok parts. Conclusion: the impact of seat angle more on seat rejections is insignificant.

Chamfer mandrel angle (more or less than specification, in the soft stage): 4 mandrels were given to the tool room for chamfer angle verification, and the angles were checked by the sine bar method and the microscope method (25-Nov-08 to 25-Dec-08). Result: no variation in the output, so no statistical test could be performed. Conclusion: the impact of chamfer mandrel angle on seat rejections is insignificant.
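The 2-proportions tests quoted above were run in Minitab by the project team. Purely as an illustrative sketch (not the project's actual analysis), a comparison such as the uneven chamfer band trial (12 bad in 50 against 1 bad in 50) can be checked with a pooled two-proportion z-test in Python:

    from math import erfc, sqrt

    def two_proportion_z_test(bad1, n1, bad2, n2):
        """Pooled two-proportion z-test; returns the z statistic and the
        two-sided p-value under the normal approximation."""
        p1, p2 = bad1 / n1, bad2 / n2
        pooled = (bad1 + bad2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = erfc(abs(z) / sqrt(2))  # two-sided tail of the standard normal
        return z, p_value

    # Uneven chamfer band trial: 12 bad in 50 TR-bad parts vs 1 bad in 50 TR-ok parts.
    z, p = two_proportion_z_test(12, 50, 1, 50)
    print(round(z, 2), round(p, 4))  # z ~ 3.27, p ~ 0.001, i.e. "Significant"

With counts this small an exact test (for example Fisher's exact test) is preferable; the normal approximation above is only indicative of how the quoted significant/insignificant conclusions are reached.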

Validation of all SSVs using Statistical testing: (Machine related parameters)

The machine related parameters were validated in the same way. For each suspected source of variation the actions taken, the trial, the result and the conclusion are summarised below; unless noted otherwise, the comparisons were made with a 2-proportions test.

Workhead vibration (levels earlier not known): the workhead vibration of every machine was checked with a vibratometer (13-Feb-09 to 16-Feb-09). Result: the vibration values of all machines are within 3 mm/sec and no variation in output was observed. Conclusion: the impact of workhead vibration on seat rejections is insignificant.

Workhead rpm (rated value 2150 rpm): 50 parts were processed at 2150 rpm and 50 parts at 1750 rpm (13-Feb-09 to 16-Feb-09). Result: at both rpm values all 50 parts came out visually ok. Conclusion: the impact of workhead rpm on seat rejections is insignificant.

Spindle height repeatability (requirement: repeatability below 20): 50 parts each were processed with a repeatability of 10 and of 20 (12-Mar-09). Result: at both repeatability levels all parts came out visually ok. Conclusion: the impact of spindle height repeatability on seat rejections is insignificant.

Female center grinding (grinding frequency not decided): 50 parts were processed before female center grinding and 50 parts after it, and the two groups were checked for a difference (12-Mar-09). Result: all parts came out ok both before and after female center grinding. Conclusion: the impact of female center grinding on seat rejections is insignificant.

Job clamping pressure (air supply to the job clamping chuck varied to different levels): the clamping pressure was varied between 4 bar and 5 bar and its effect on seat rejections observed (30-Jan-09). Result: at 5 bar 0 bad parts in 50; at 4 bar 29 bad parts in 50. Conclusion: the impact of job clamping pressure on seat rejections is significant.

Loading spring worn out (changing frequency once in two months): the loading spring was replaced with a broken one and its effect on seat rejections observed (30-Jan-09). Result: with an ok spring all 50 parts came out ok; with the broken spring 35 bad in 50. Conclusion: the impact of a broken loading spring on seat rejections is significant.

Loading alignment of the component (visual check while setting the machine): a trial was taken without checking the loading alignment of the component (30-Jan-09). Result: with and without the alignment check, all 50 parts came out visually ok. Conclusion: the impact of loading alignment of the component on seat rejections is insignificant.

Loading (air) cylinder, suspected air leakage: an electrical servo motor is used, so there is no problem of air leakage; no hypothesis test was performed (30-Jan-09). Conclusion: the impact of the air cylinder on seat rejections is insignificant.

Angle master showing a wrong reading (checking frequency to be reduced): a GRR study of the seat angle master was taken (30-Jan-09); no hypothesis test was performed. Result: the GRR was found to be ok. Conclusion: the impact of the angle master on seat rejections is insignificant.

Visual inspection microscope condition (frequent adjustment by associates; alignment for both eyes not verified): a scope condition study schedule and a workshop on microscope handling were planned, and associates' awareness about microscope adjustment was to be improved. In the trial (18-Dec-08), 50 parts were checked with a faulty microscope and then with an ok microscope. Result: with the faulty microscope 35 parts were judged bad, while with the ok microscope far fewer were. Conclusion: the impact of the microscope condition on seat rejections is significant.

Air supply for parts cleaning (no supply provided): 50 parts were processed with air cleaning and 50 without (30-Jan-09); parts are now to be checked with air cleaning. Result: one batch showed 10 bad parts in 50 and the other 22 bad parts in 50. Conclusion: the impact of air supply for parts cleaning on seat rejections is significant.

Grinding spindle rpm (specified value 60,000 rpm): 100 parts were processed at 60,000 rpm and 100 parts at 50,000 rpm (15-Jan-09 to 30-Jan-09). Result: 3 bad parts in 100 at 60,000 rpm against 1 bad part in 100 at 50,000 rpm. Conclusion: the impact of grinding spindle rpm on seat rejections is insignificant.

Spindle cooling system: the spindle cooling systems of all machines were checked with the maintenance people (30-Jan-09); no hypothesis test was performed. Result: the cooling systems of all machines were found to be working ok. Conclusion: the impact of the spindle cooling system on seat rejections is insignificant.

Initial setting (setting parameters): the initial setting parameters were deliberately disturbed and a trial taken (30-Jan-09). Result: with the initial setting ok, 0 bad in 50; with the setting disturbed, 25 bad in 50. Conclusion: the impact of the initial setting on seat rejections is significant.

New seat wheel setting (wheel form wear; the wheel height is to be set at 3.1 mm and positive cutting ensured after dressing): for the trial the new seat wheel height was set at 3.15 mm and its effect on seat rejections observed (30-Jan-09). Result: with the new seat wheel setting ok, 0 bad in 50; with the setting not ok, 30 bad in 50. Conclusion: the impact of the new seat wheel setting on seat rejections is significant.

Adaptor TR, checked on the checking bench (if the TR is out of specification the seat comes out bad): the adaptor TR is checked every time the machine is disturbed; 50 parts were processed with adaptor TR below 10 and 50 parts with TR above 10 (30-Jan-09). Result: 0 bad in 50 with TR below 10 and 0 bad in 50 with TR above 10. Conclusion: the impact of adaptor TR on seat rejections is insignificant.

Dressing ring worn out (a worn-out dressing ring damages the grinding wheel form, due to which the part comes out seat-bad; periodic replacement and TR check required): a worn-out ring was placed on the machine, the wheel was dressed with it and parts were taken for trial (30-Jan-09). Result: with the worn-out ring 45 bad in 50; with an ok ring 2 bad in 50. Conclusion: the impact of a worn-out dressing ring on seat rejections is significant.

Grinding and dressing coolant system (the dressing/grinding pressure varies between 3.5 and 4 bar): the pressure and temperature of the coolant system were checked; only checking was involved, as taking a trial was considered too dangerous. Result: the coolant system parameters are within limits. Conclusion: the impact of the coolant systems on seat rejections is insignificant.

Tip breakage sensing poka yoke (to be confirmed once per shift): the poka yoke was shifted to the backward position and its effect observed; 50 parts were taken with the poka yoke on the tip and 50 with it in the backward position (30-Jan-09). Result: with the poka yoke on the tip 1 bad in 50; with it not on the tip 16 bad in 50. Conclusion: the impact of the poka yoke on seat rejections is significant.

Dressing depth of cut (3 microns): the dressing depth of cut was varied and parts were taken with a depth of cut of 3 and of 2 (30-Jan-09). Result: 0 bad in 50 with a depth of cut of 3 and 0 bad in 50 with a depth of cut of 2. Conclusion: the impact of the dressing depth of cut on seat rejections is insignificant.

Dressing frequency (every 6 parts): 50 parts were processed with a dressing frequency of 8 parts and 50 parts with a frequency of 6 parts (30-Jan-09). Result: 0 bad in 50 at the 6-part frequency and 0 bad in 50 at the 8-part frequency. Conclusion: the impact of the dressing frequency on seat rejections is insignificant.

Feed rate (a manual knob is present): the feed rate was changed manually, and parts were processed at 100% and at 50% feed rate (30-Jan-09). Result: all 50 parts came out ok at 100% feed rate and all 50 parts came out ok at 50% feed rate. Conclusion: the impact of the feed rate on seat rejections is insignificant.

Lack of operator equalization (incorrect decisions at seat visual inspection due to the fear of parts being rejected from assembly; daily rejections at seat visual are checked for verification): 50 border-case parts were shown to the seat visual operators and then to the assembly operators. Result: due to continuous rejections from the assembly section, fear has set in among the visual operators. Conclusion: the impact of operator equalization on seat rejections is significant.

Ishikawa Diagram for Major defects:

Ishikawa diagrams (also called fishbone diagrams or cause-and-effect diagrams) show the causes of a certain event. They were proposed in the 1960s by Kaoru Ishikawa, who pioneered quality management processes in the Kawasaki shipyards and in the process became one of the founding fathers of modern management. The Ishikawa diagram is considered one of the seven basic tools of quality management, along with the histogram, Pareto chart, check sheet, control chart, flowchart and scatter diagram, and is known as a fishbone diagram because of its shape. Causes in the diagram are usually drawn from a standard set of categories, such as the 6 M's described below. Cause-and-effect diagrams can reveal key relationships among various variables, and the possible causes provide additional insight into process behavior.

Causes in a typical diagram are normally grouped into categories, the main ones being the 6 M's: Machine, Method, Materials, Maintenance, Man and Mother Nature (Environment). A more modern selection of categories is Equipment, Process, People, Materials, Environment and Management. Causes should be derived from brainstorming sessions and then sorted through affinity grouping to collect similar ideas together. These groups are then labeled as the categories of the fishbone; they will typically be one of the traditional categories mentioned above, but may be something unique to the application of this tool. Causes should be specific, measurable and controllable.

Most Ishikawa diagrams have a box at the right-hand side in which the effect to be examined is written. The main body of the diagram is a horizontal line from which stem the general causes, represented as "bones". These are drawn towards the left-hand side of the paper and are each labeled with the causes to be investigated, often brainstormed beforehand and based on the major categories listed above. Off each of the large bones there may be smaller bones highlighting more specific aspects of a certain cause, and sometimes there may be a third level of bones or more; these can be found using the '5 Whys' technique. When the most probable causes have been identified, they are written in the box along with the original effect. The more populated bones generally indicate the more influential factors, with the opposite applying to bones with fewer branches. Further analysis of the diagram can be achieved with a Pareto chart.
Fish bone diagram for the vital few defects: the effect analysed is "Rings formation on seat", with cause branches for Man, Machine, Material, Method and Environment.
Fig: Cause & effect diagram for the majority of defects
The five elements of the fishbone diagram generated during the brainstorming session are:
Man: Motivation is low among workmen due to lack of incentives. New operators working in the area. Negligence during the night shift. Lack of awareness among operators.

Machine: Frequent breakdowns, causing an increase in vibration level. Detection of defects is not effective. Coolant pressure varies abruptly. No poka yoke is present to detect drill breakage, which causes ring formation.

Material: Tool quality not up to the mark, drill life low. Drill breakage due to drill overuse. Incoming quality of parts not ok (part bend, which causes drill breakage).

Method: Checking frequency is low. Gauges are not calibrated on a daily basis. The elevator that lifts the part to the chuck gets jammed, causing part damage. Work instructions are outdated. Program corrections are complex during type change.

Environment: The machine is near an open window, which causes dirt accumulation on the part and damages the surface during grinding.

Bar chart: The ideas generated during the brainstorming session were verified by process experts, and the causes having a real impact on rejections were listed out. A bar chart analysis was then performed on these parameters to identify the causes with a significant contribution to rejections. The chart shows drill overuse as the largest contributor (45% of rejections), followed by the absence of a poka yoke to detect drill breakage (21%), gauges not calibrated on time (15%), coolant pressure variation and other causes.

Fig 11: Bar chart of causes and their percentage contribution to rejections

The chart clearly indicates that a system for early detection of drill breakage needs to be developed.
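A chart like Fig 11 can be sketched with matplotlib as shown below. The first three percentages are quoted above; the split between "Coolant pressure varies" and "Others" (11% and 8% in some order) is an assumption made here only for illustration.

    # Rough matplotlib sketch of a Pareto-style bar chart like Fig 11.
    import matplotlib.pyplot as plt

    causes = ["Drill overuse", "No poka yoke for\ndrill breakage",
              "Gauges not\ncalibrated on time", "Coolant pressure\nvaries", "Others"]
    contribution = [45, 21, 15, 11, 8]   # % of rejections (last two values assumed)
    cumulative = [sum(contribution[:i + 1]) for i in range(len(contribution))]

    fig, ax = plt.subplots(figsize=(8, 4))
    ax.bar(causes, contribution, color="steelblue")
    ax.set_ylabel("% of rejections")

    ax2 = ax.twinx()                      # cumulative line on a second y-axis
    ax2.plot(causes, cumulative, color="darkred", marker="o")
    ax2.set_ylabel("Cumulative %")
    ax2.set_ylim(0, 110)

    ax.set_title("Causes and their contribution to seat rejections")
    fig.tight_layout()
    plt.show()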


IMPROVE PHASE:
A) Detection of drill breakage on the machine: To reduce the rejections caused by drill breakage, a new laser sensor was installed on the machine and its feedback was wired into the machine's PLC logic. When the tip of the drill is ok, the laser falls on the drill and is deflected, allowing the machine to run continuously; when the tip is broken, the machine is stopped immediately. This Tip Breakage Sensor (TBS) check was arranged to overlap with part loading, so the change in cycle time due to the sensor installation is zero.

Fig 12: Tool breakage sensing Poka Yoke with OK drill mounted on machine
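The sensing logic itself lives in the machine's PLC. Purely as an illustration of the interlock behaviour described above (not the actual PLC program), the decision can be sketched as follows; the sensor reading and the returned actions are hypothetical placeholders.

    # Illustrative sketch of the tip-breakage-sensor (TBS) interlock, not the
    # actual PLC program. `beam_deflected_by_drill` is a hypothetical boolean
    # representing the laser reading taken during part loading, so the check
    # adds no extra cycle time.
    def tbs_interlock(beam_deflected_by_drill: bool) -> str:
        if beam_deflected_by_drill:
            # Drill tip present: the beam hits the drill, so the machine
            # is allowed to continue its cycle.
            return "RUN_CYCLE"
        # Drill tip broken: stop the machine before ring-forming damage
        # is ground into the seat.
        return "STOP_AND_ALARM"

    print(tbs_interlock(True))   # drill ok    -> RUN_CYCLE
    print(tbs_interlock(False))  # tip broken  -> STOP_AND_ALARM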


Fig 13: Tool breakage sensing Poka Yoke when the tip of the drill is broken

After successfully implementing this on one pilot machine, the poka yoke was deployed horizontally on all 8 machines.

B) Drill overuse by operators: A 5-why analysis showed that new drills were issued from stores on a monthly basis, so drill overuse was a common problem at the end of every month. It was decided to top up any drill shortage every Saturday so as to maintain the drill float on the line. Line foremen were given clear instructions about maintaining drill records; an accurate drill breakage/obsolescence record is now kept, and this point has been added to the surprise audit.

C) Gauges and microscopes not calibrated on time: A team of operators was formed to escalate the matter immediately when gauges are not calibrated. The calibration work was also divided equally among the quality people, who calibrate the gauges once every three days.

D) Coolant pressure varies: The complete hydraulic circuit was checked for leakage, and one flow control valve was found to be faulty (worn out). The team insisted on changing every valve of the circuit, and the complete hydraulic circuit connections were replaced with new ones. With this action the leakage stopped completely and the coolant pressure variation problem was eliminated.

E) Others: For all the other causes the following actions were taken. The window responsible for dirt accumulation was permanently closed and an exhaust fan was installed in its place. For new operators coming into the area, training sessions and supervision by skilled operators were made compulsory. Warning letters were issued for negligence by operators. New and updated work instructions were put up on the machine boards.

CONTROL PHASE:
This phase defines control plans specifying process monitoring and corrective actions. It ensures that the new process conditions are documented and monitored. All possible causes of the specific problems identified in the analysis phase were tackled in the control phase, and control solutions were prepared in sequence with the improvements explained above so that the problems do not recur.


The proposed control solutions are listed in the same sequence as the improvements described above.

A) Drill breakage poka yoke: A poka yoke monitoring sheet is maintained by the shop. A shop foreman checks daily that all poka yokes are working correctly and records this on a check sheet. A clear escalation model has been prepared for reporting poka yoke failures.

B) Drill overuse by operators: Since the drill quantity is topped up weekly, the drill stock is automatically verified for shortages every week. A record sheet is maintained to keep all drill records.

C) Gauge calibration: This issue was taken up seriously by the quality department, which has assigned a special audit team to ensure that gauges are calibrated on time.

D) Coolant pressure: A preventive maintenance programme has been prepared for all hydraulic circuits in the shop. Operators are authorised to stop a machine if a leakage is found on it.

E) Operator related issues: All operator related issues were taken to the workers' union, and with their consent it was decided to take strict action against operator negligence in the company.
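Monitoring in this phase relies on control charts such as the p-chart shown with the capability analysis in the Results section below. As a sketch only, the 3-sigma limits of a p-chart can be computed as follows; the subgroup size n is not stated in the report, so the value used here is an assumption chosen to roughly reproduce the limits shown in Figure 14.

    from math import sqrt

    def p_chart_limits(p_bar, n):
        """3-sigma control limits for a p-chart with average defect
        proportion p_bar and subgroup size n (LCL floored at zero)."""
        half_width = 3 * sqrt(p_bar * (1 - p_bar) / n)
        return max(0.0, p_bar - half_width), p_bar + half_width

    # p_bar = 0.01104 is the centre line reported in Figure 14; n = 17,000 is
    # an assumed daily subgroup size (not given in the report).
    lcl, ucl = p_chart_limits(p_bar=0.01104, n=17_000)
    print(round(lcl, 5), round(ucl, 5))  # ~0.00864 and ~0.01344, close to Figure 14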

RESULTS:
After completing the DMAIC methodology, the process capability analysis of the seat visual process was repeated to measure the improvement in sigma level. One month of data from the control phase was used for the analysis.
Capability Analysis of Seat Visual Process (Minitab binomial process capability output): the P chart of the defect proportion has centre line P = 0.01104 with UCL = 0.01344 and LCL = 0.00864; the output also includes the binomial plot, the cumulative % defective plot and the distribution of % defective. The summary statistics (using 95.0% confidence) are: % defective 1.10 (CI 1.08 to 1.13), target 0.00, PPM defective 11,039 (CI 10,754 to 11,330) and process Z 2.2890 (CI 2.2791 to 2.2989).

Figure 14: Process capability of the seat visual process after applying the DMAIC methodology
From the Minitab output it is clear that the PPM defect level has been reduced from 22,624 PPM to 11,039 PPM, and the sigma level has improved from 3.5 to 3.79.
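The sigma levels quoted here follow the common convention of converting the long-term PPM defect level to a normal Z value and adding the 1.5-sigma shift, so that 3.4 PPM corresponds to 6 sigma. As a quick cross-check only (not part of the original Minitab output), the conversion can be reproduced in Python:

    from statistics import NormalDist

    def sigma_level(ppm, shift=1.5):
        """Convert a long-term PPM defect level to the short-term sigma level
        using the conventional 1.5-sigma shift (3.4 PPM -> 6 sigma)."""
        z_long_term = NormalDist().inv_cdf(1 - ppm / 1_000_000)
        return z_long_term + shift

    print(round(sigma_level(22_624), 2))  # ~3.5  (before the project)
    print(round(sigma_level(11_039), 2))  # ~3.79 (after the DMAIC project)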
Rejections in PPM: 22,624 PPM before the project (sigma level 3.5) against 11,039 PPM after the DMAIC project (sigma level improved to 3.79).

Fig 15: Results showing the improvement in the sigma level of the process
A few more agreed recommendations are still to be implemented during the plant shutdown. The estimated savings from the project after the implementation of all recommendations are expected to be Rs 1,50,000 per annum.

CONCLUSIONS:
The immediate goal of Six Sigma is defect reduction. Reduced defects lead to yield improvement, and higher yields improve customer satisfaction; the ultimate goal is enhanced net income, and the money saved is often what gets the attention of senior executives. Six Sigma has a process focus and aims to highlight process improvement opportunities through systematic measurement, so that defect reduction translates into cost reduction. Six Sigma is a toolset rather than a stand-alone management system and can be used in conjunction with other comprehensive quality standards present in the industry. The application of the Six Sigma technique in this project shows that the company has taken a small step towards Six Sigma implementation on a company-wide basis. Once Six Sigma finds its rightful place in the minds of higher management, large gains can be expected from its application. It is clear that the Six Sigma methodology is highly beneficial for improving the performance of any manufacturing plant.



