

CHAPTER-1

INTRODUCTION

1.1. Introduction

The increase in customer demand for quality products (more precise tolerances and better surface roughness) has driven the metal cutting process. In the face of global competition, manufacturing organizations are working to improve product quality and performance at lower cost and in a shorter span of time, but practices aimed at lowering costs do not usually improve quality; there is a hierarchical relationship between cost and quality. Intense international competition has focused the attention of manufacturers on automation as a means to increase productivity and improve quality. To realize full automation in machining, computer numerically controlled (CNC) machine tools have been implemented during the past decades. CNC machine tools require less operator input, provide greater improvements in productivity and increase the quality of the machined part.

Among the several CNC industrial machining processes, milling is a fundamental machining operation, and end milling is the most common metal removal operation encountered. It is widely used in a variety of manufacturing industries, including the aerospace and automotive sectors, where quality is an important factor in the production of slots, pockets, and precision moulds and dies.



After fabrication, parts often require further machining to achieve the dimensional control needed for easy assembly and proper function. Surface roughness plays an imperative role in the manufacturing industry: the quality of the surface affects functional requirements and strongly influences the performance of milled components, since a good-quality milled surface significantly improves fatigue strength, corrosion resistance and creep life. Surface roughness also affects several functional attributes of parts, such as contact friction, wear, light reflection, heat transmission, and the ability to distribute and hold a lubricant or coating, or to resist fatigue. Therefore, the desired surface finish is usually specified and appropriate processes are selected to reach the required quality.

Several factors influence the final surface roughness in a CNC milling operation. The final surface roughness may be considered as the sum of two independent effects:

1. The ideal surface roughness, which is a result of the geometry of the tool and the feed, and

2. The natural surface roughness, which is a result of the irregularities in the cutting operation (Boothroyd and Knight, 1989).
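The ideal (geometric) component can be estimated from the tool geometry alone. A commonly quoted textbook approximation for a round-nosed tool, Ra ≈ f²/(32r), is sketched below; the feed and nose radius values are illustrative only and are not taken from this study.

```python
# Ideal (geometric) surface roughness estimate for a round-nosed tool.
# Standard textbook approximation: Ra_ideal ~= f^2 / (32 * r)
# f: feed per revolution (mm/rev), r: tool nose radius (mm).

def ideal_roughness_um(feed_mm_rev: float, nose_radius_mm: float) -> float:
    """Return the ideal centre-line-average roughness Ra in micrometres."""
    ra_mm = feed_mm_rev ** 2 / (32.0 * nose_radius_mm)
    return ra_mm * 1000.0  # convert mm -> micrometres

# Illustrative values: 0.2 mm/rev feed with a 0.8 mm nose radius.
print(round(ideal_roughness_um(0.2, 0.8), 2))  # 1.56
```

The natural component adds to this geometric floor, which is why measured roughness normally exceeds the ideal value.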


Factors such as spindle speed, feed rate, axial depth of cut, radial depth of cut and nose radius, which control the cutting operation, can be set up in advance. However, factors such as tool geometry, tool wear, chip loads and chip formation, or the material properties of both tool and work piece, are uncontrolled (Huynh and Fan, 1992). In addition, the occurrence of chatter or vibration of the machine tool, defects in the structure of the work material, tool wear and irregularities of chip formation all contribute to surface damage in practice during machining (Boothroyd and Knight, 1989). In order to obtain better surface roughness, the proper setting of cutting parameters is crucial before the process takes place. As a starting point for determining cutting parameters, technologists can use the hands-on data tables furnished in data handbooks. Lin (1994) suggested that a trial-and-error approach could be followed in order to obtain the optimal machining conditions for a particular operation. To achieve the desired surface finish, one should develop techniques to model, predict and optimize the surface finish of a product before milling, in order to evaluate the fitness of machining parameters such as nose radius, spindle speed, feed rate, axial depth of cut and radial depth of cut for keeping the desired surface finish and increasing product quality.


In this work, design of experiments techniques, namely orthogonal arrays in the Taguchi design, Response Surface Methodology and Artificial Neural Networks, have been implemented to develop models and analyses for better product quality.


Quality may be defined in many ways:
1. Fitness for use (Juran).
2. Conformance to requirements (Crosby).
3. The efficient production of the quality that the market expects (Deming).
4. Quality is what the customer says it is (Feigenbaum).
5. Quality is the minimum loss imparted by the product to society from the time the product is shipped (Byrne and Taguchi).
6. The totality of features and characteristics of a product or service that bear on its ability to satisfy the stated or implied needs of the customers (ASQC).


The design of experiments is an effective approach to optimizing the throughput of various manufacturing-related processes (Fidden, Kraft, Ruff and Derby, 1998). The modeling techniques are listed below.

Modeling Techniques
A. Conventional modeling techniques
1. Full factorial design.
2. Response surface methodology.
3. Regression modeling.
4. Bond graph.
5. FEM.

B. Non-conventional modeling techniques
1. Artificial Neural Networks.
2. Fuzzy Logic.
3. Different combinations of ANN, Genetic Algorithms and Fuzzy Logic, etc.

Of the above methods, orthogonal arrays in the Taguchi design, Response Surface Methodology and Artificial Neural Networks are used here to predict the surface roughness and material removal rate (MRR) in the end milling process.

1.2. Objective of the Project Work

A sincere attempt is made in this project work to study the surface roughness and material removal rate obtained in machining P20 mould steel on a CNC end milling machine with a coated carbide cutting tool. The feasibility of implementing design of experiments (DOE), Response Surface Methodology and Artificial Neural Networks in the milling process is analyzed and presented.

1.3. Organization of Thesis

The project work has been carried out on a CNC end milling machine with a coated carbide cutting tool, aiming at a reduction of variation in the quality of the product. The study is reported in seven chapters, as follows.

Chapter-1: Introduction.

Chapter-2: Deals with the literature survey on surface finish measurement, machining parameters and metal cutting, and also presents a brief literature review.

Chapter-3: Deals with design of experiments and Artificial Neural Networks.

Chapter-4: Deals with data collection.

Chapter-5: Deals with the methodology adopted, using the least squares technique to develop the response surface model and the Artificial Neural Network.

Chapter-6: Deals with results and discussion.

Chapter-7: Conclusions and future scope, drawn and highlighted on the basis of the experimental and analytical results.



CHAPTER-2

LITERATURE SURVEY

2.1. Introduction

A brief survey of the literature on surface texture is presented in this chapter, bringing out the machining parameters, metal cutting, types of surfaces and different surface roughness measurement techniques; a brief review of related work is also presented.

2.2. Introduction to Milling Machine

A milling machine is a machine tool that removes metal as the work is fed against a rotating multi-point cutter. The cutter rotates at high speed and, because of its multiple cutting edges, removes metal at a very fast rate. The machine can hold one or more cutters at a time, which is why the milling machine finds wide application in production work. It is superior to other machines as regards accuracy and surface finish, and is designed for machining a variety of tool room work.

The first milling machine came into existence in about 1770 and was of French origin. The milling cutter was first developed by Jacques de Vaucanson in 1782. The first successful plain milling machine was designed by Eli Whitney in 1818. Joseph R. Brown, a member of the Brown and Sharpe Company, invented the first universal milling machine in 1861.

2.3. Machining Parameters

The machining process depends upon several parameters, such as the work piece material, the cutting tool material, the rigidity of the machine, the rigidity of the work piece, the cutting speed, feed, axial depth of cut, radial depth of cut, chatter and tool wear. Those considered in this project work are described below:

2.3.1. Cutting Speed

The speed of a milling cutter is its peripheral linear speed resulting from rotation. It is normally expressed as a surface speed in m/min. The cutter speed can be derived as

N = (1000 * V) / (π * d) -------------- (2.1)

Where N = cutter speed in rpm,
V = cutting speed in m/min,
d = diameter of the tool holder or cutter in mm.
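Eq. (2.1) can be sketched as a short Python function; the cutting speed and cutter diameter below are illustrative values, not parameters from this study.

```python
import math

def spindle_speed_rpm(cutting_speed_m_min: float, diameter_mm: float) -> float:
    """Cutter speed N (rpm) from Eq. (2.1): N = 1000*V / (pi*d)."""
    return (1000.0 * cutting_speed_m_min) / (math.pi * diameter_mm)

# Illustrative: V = 150 m/min on a 12 mm end mill.
print(round(spindle_speed_rpm(150.0, 12.0)))  # 3979
```

The factor 1000 converts the cutting speed from m/min to mm/min so that the units match the cutter circumference π·d in mm.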

2.3.2. Feed

The feed rate in a milling machine is defined as the rate at which the work piece advances under the cutter. Its units are mm/min, mm/rev or mm/tooth. The feed in a milling machine is expressed by the following three different methods.

Feed per tooth (sz)

The feed per tooth is defined by the distance the work advances in the time between engagements of two successive teeth. It is expressed in mm/tooth.

Feed per cutter revolution (srev)

The feed per cutter revolution is the distance the work advances

in the time when the cutter turns through one complete revolution. It

is expressed in mm/revolution of the cutter.

Feed per minute (sm)

The feed per minute is defined by the distance the work

advances in one minute. It is expressed in mm/min.



The feed per tooth, the feed per cutter revolution, and the feed per minute are related by the following formula:

sm = N * srev = z * N * sz ------------------ (2.2)

Where z = number of teeth in the cutter,
N = the cutter speed in rpm.

The cutting speed and feed of a cutting tool are largely influenced by the following factors:

1. Material being machined.

2. Material of the cutting tool.

3. Geometry of the cutting tool.

4. Required degree of surface finish.

5. Rigidity of the machine tool being used.
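The feed relations of Eq. (2.2) can be sketched directly; the cutter speed, tooth count and feed per tooth below are illustrative values only.

```python
def feed_per_minute(n_rpm: float, z_teeth: int, feed_per_tooth_mm: float) -> float:
    """Table feed sm (mm/min) from Eq. (2.2): sm = z * N * sz."""
    return z_teeth * n_rpm * feed_per_tooth_mm

def feed_per_revolution(z_teeth: int, feed_per_tooth_mm: float) -> float:
    """Feed per cutter revolution srev (mm/rev): srev = z * sz."""
    return z_teeth * feed_per_tooth_mm

# Illustrative: a 4-tooth cutter at 2000 rpm with 0.05 mm/tooth.
print(feed_per_minute(2000, 4, 0.05))   # 400.0 mm/min
print(feed_per_revolution(4, 0.05))     # 0.2 mm/rev
```

Note that sm = N * srev and sm = z * N * sz give the same table feed, since srev = z * sz.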

2.3.3.(A) Axial Depth of cut

The axial depth of cut in milling is the thickness of material removed by one pass of the work under the cutter. It is the perpendicular distance between the original and the final surfaces of the work piece. It is expressed in mm.

2.3.3.(B) Radial depth of cut

The radial depth of cut is the depth of the tool along its radius in the work piece as it makes a cut. If the radial depth of cut is less than the tool radius, the tool is only partially engaged and is making a peripheral cut. If the radial depth of cut is equal to the tool diameter, the cutting tool is fully engaged and is making a slot cut.

Fig. 2.1. Axial and Radial depth of cut.

2.4. Surface Integrity

Surface integrity is the sum of all the elements that describe the conditions existing on or at the surface of a work piece. Surface integrity has two aspects. The first is surface topography, which describes the roughness, lay, or texture of the outermost layer of the work piece, i.e., its interface with the environment. The second is surface metallurgy, which describes the nature of the altered layers below the surface with respect to the base or matrix material. This term assesses the effect of the manufacturing process on the properties of the work piece material.

Surface: A surface is a boundary that separates an object from another object or substance.

2.5. Types of Surfaces

2.5.1. Nominal Surface

A nominal surface is a theoretical, geometrically perfect surface which does not exist in practice; it serves as the reference on which the irregularities of the real surface are superimposed.

2.5.2. Real Surface

A real surface is the actual boundary of an object. It deviates

from the nominal surface as a result of the manufacturing process

that created the surface. The deviation also depends on the properties,

composition, and structure of the material the object is made of.



Fig.2.2. Illustration of Typical surface roughness.

2.6. Surface Texture

Surface texture is the combination of fairly short wavelength

deviations of a surface from the nominal surface. Texture includes

roughness, waviness, and lay.

Fig.2.3. Surface Texture.



2.6.1. Roughness

Roughness includes the finest (shortest wavelength)

irregularities of a surface. Roughness generally results from a

particular production process or material condition.

2.6.2. Waviness

Waviness includes the more widely spaced (longer wavelength)

deviations of a surface from its nominal shape.

2.6.3. Lay

Lay refers to the predominant direction of the surface texture.

Ordinarily lay is determined by the particular production method and

geometry used.

Turning, milling, drilling, grinding and other machining

processes usually produce a surface that has lay: striations or peaks

and valleys in the direction that the tool was drawn across the

surface. The shape of the lay can take one of several forms as shown

below. Other processes produce surfaces with no characteristic

direction: sand casting, peening, and grit blasting. Sometimes these

surfaces are said to have a non-directional or particulate lay.

Several different types of lay are possible depending on the

manufacturing and machining processes.



Lay direction is an important consideration for the optical properties of a surface. A smooth finish will look rough if it has a strong lay; a rougher surface will look more uniform if it has no lay.

2.6.4. Flaws

Flaws are unintentional interruptions in the topography typical of a part surface.


Fig.2.4. Different types of Lays.

2.6.5. Roughness sampling length

The roughness sampling length is the sampling length within

which the roughness average is determined. This length is chosen, or

specified, to separate the profile irregularities which are designated as

roughness from those irregularities designated as waviness.

2.7. Surface Profiles

2.7.1. Profile

A profile is, mathematically, the line of intersection of a surface

with a sectioning plane which is (ordinarily) perpendicular to the

surface. It is a two-dimensional slice of the three-dimensional surface.

Almost always profiles are measured across the surface in a direction

perpendicular to the lay of the surface.

2.7.2. Roughness Profile

The roughness profile includes only the shortest wavelength

deviations of the measured profile from the nominal profile. The

roughness profile is the modified profile obtained by filtering a

measured profile to attenuate the longer wavelengths associated with

waviness and form error. Optionally, the roughness may also exclude

(by filtering) the very shortest wavelengths of the measured profile

which are considered noise or features smaller than those of interest.

Roughness is of significant interest in manufacturing because it

is the roughness of a surface (given reasonable waviness and form

error) that determines its friction in contact with another surface. The

roughness of a surface defines how that surface feels, how it looks,

how it behaves in contact with another surface, and how it behaves

for coating or sealing. For moving parts the roughness determines how

the surface will wear, how well it will retain lubricant, and how well it

will hold a load.

Fig.2.5. Roughness Profile.

2.8. Reference Mean Lines

2.8.1. Mean Line



A mean line is a reference line from which profile deviations are

measured. It is the zero level for a total or modified profile.

2.8.2. Least Squares Mean Line

A least squares mean line is a line through a profile such that

the sum of the squares of the deviations of the profile from the mean

line is minimized. In practice, this is done with a digitized profile.

Fig.2.6. Least Squares Mean Line.

The most common application of a least squares Mean line is to

"level" the raw traced profile. The traced profile is relative to the

straight line reference of the profiling instrument. Unless the

instrument is perfectly aligned with the part, that reference will be

tilted with respect to the measured surface. A least squares line fit

through the raw traced profile may be used as a reference line to

remove the misalignment.
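This leveling step can be sketched in a few lines: fit a least squares line through a raw traced profile and subtract it to remove tilt and offset. The simulated profile below (a small tilt plus fine sinusoidal irregularities) is illustrative, not instrument data.

```python
import numpy as np

# Levelling a raw traced profile with a least squares mean line (sketch).
x = np.linspace(0.0, 8.0, 200)               # positions along the trace (mm)
roughness = 0.002 * np.sin(40.0 * x)         # fine irregularities (mm)
z = 0.05 * x + 0.3 + roughness               # tilt + offset + roughness

slope, intercept = np.polyfit(x, z, 1)       # least squares mean line
leveled = z - (slope * x + intercept)        # deviations from the mean line

# After levelling, the deviations average to essentially zero:
print(abs(leveled.mean()) < 1e-9)  # True
```

The fitted slope recovers the instrument misalignment (0.05 here), and the residuals are the profile deviations from which roughness parameters are then computed.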

More sophisticated instruments give greater control over this

leveling process, either by providing for "relieving" or by providing



alternatives to the least squares mean line. This is because a least

squares mean line is distorted by flaws or unusually shaped profiles.

2.8.3. Center Line

The center line of a profile is the line drawn through a segment (usually a sample length) of the profile such that the total areas between the line and the profile are the same above and below the line. This concept is little used in modern instruments; it mainly served as a graphical method for drawing a mean line on the output of a profile recording instrument with no built-in parameter processing.

2.9. Profile Peaks and Valleys

2.9.1. Profile Height

The height of a profile at a particular point is the distance from

the profile to its mean line. Profile height is considered as positive

above the mean line and negative below the mean line.

2.9.2. Profile Peak

A profile peak is a region of the profile that lies above the mean line and intersects the mean line at each end. In Fig.2.7, each shaded region is a peak. The height of a peak is the maximum profile height within the region.

2.9.3. Profile Valley



Fig.2.7. Profile Peaks.

A profile valley, analogous to a profile peak, is a region of the profile that lies below the mean line and intersects it at each end. The depth of a valley is the depth of the lowest point within the valley.

Fig.2.8. Profile Valleys.

2.10. Factors Affecting Surface Quality

There are six major factors influencing surface quality in machining processes. They are:

1. Finishing characteristics of the work material.

2. Type and condition of the cutting tool.

3. Application of cutting fluid.

4. Method of chip removal.

5. Geometry of the cutting tool.

6. The cutting variables: feed, depth of cut and cutting speed.

2.11. Roughness Symbols

The most common technique is to indicate surface roughness by a basic symbol consisting of two legs of unequal length at approximately 60° to the line representing the surface under consideration.

1. Fig.1 shows the symbol used to indicate that the surface is machined, without indicating the grade of roughness or the process to be used.

2. If removal of material by machining is required, a bar is added to the basic symbol, as shown in Fig.2.

3. If removal of material is not permitted, a circle is added to the basic symbol, as shown in Fig.3.

4. Fig.4 indicates the various specifications of the surface roughness, placed relative to the symbol.

Fig.1 Fig.2

Fig.3 Fig.4

Fig.2.9. Roughness Representation.

Where a = roughness value, Ra,
b = production method, treatment or coating,
c = sampling length,
d = direction of lay,
e = machining allowance,
f = other roughness values (in brackets).

Table.2.1. Equivalent Surface Roughness Symbols

Ra (µm)    Roughness grade number
50         N12
25         N11
12.5       N10
6.3        N9
3.2        N8
1.6        N7
0.8        N6
0.4        N5
0.2        N4
0.1        N3
0.05       N2
0.025      N1
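Table 2.1 amounts to a threshold lookup, which can be sketched as below; the lookup returns the smallest grade whose Ra limit covers a measured value (a hypothetical helper, for illustration only).

```python
# Mapping a measured Ra value to a grade number per Table 2.1 (sketch).
# Thresholds are the Ra limits in micrometres.
GRADES = [(0.025, "N1"), (0.05, "N2"), (0.1, "N3"), (0.2, "N4"),
          (0.4, "N5"), (0.8, "N6"), (1.6, "N7"), (3.2, "N8"),
          (6.3, "N9"), (12.5, "N10"), (25.0, "N11"), (50.0, "N12")]

def roughness_grade(ra_um: float) -> str:
    """Return the smallest grade whose Ra limit covers the measured value."""
    for limit, grade in GRADES:
        if ra_um <= limit:
            return grade
    raise ValueError("Ra exceeds the N12 limit of 50 um")

print(roughness_grade(2.8))   # 2.8 um falls under the 3.2 um limit -> N8
```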

2.12. Surface Finish Parameters

Surface finish can be specified by many different parameters. Owing to the need for different parameters in a wide variety of machining operations, a large number of surface roughness parameters have been developed. Some of the popular parameters of surface finish specification are described below.

Roughness average (Ra)

This parameter is also known as the arithmetic mean roughness value, AA (arithmetic average) or CLA (center line average). Ra is universally recognized and is the most used international parameter of roughness. It is defined as

Ra = (1/L) ∫₀ᴸ |Y| dx ----------------------- (2.3)

Where Ra = the arithmetic average deviation from the mean line,
L = sampling length,
Y = the ordinate of the profile curve.

It is the arithmetic mean of the departures of the roughness profile from the mean line.

Root-mean-square (RMS) roughness (Rq)

This is the root mean square parameter corresponding to Ra:

Rq = [ (1/L) ∫₀ᴸ Y² dx ]^(1/2) --------------------- (2.4)

Maximum peak-to-valley roughness height (Ry or Rmax)

This is the distance between two lines parallel to the mean line that contact the extreme upper and lower points on the profile within the roughness sampling length.
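For a digitized profile, the integrals in Eqs. (2.3) and (2.4) reduce to discrete averages over the sampled heights. The sketch below computes Ra, Rq and the peak-to-valley height from a short illustrative set of sampled deviations (not measured data).

```python
# Discrete estimates of Ra, Rq and peak-to-valley height from a sampled
# roughness profile; y holds heights, referenced to their own mean line.

def roughness_params(y):
    n = len(y)
    mean = sum(y) / n
    dev = [v - mean for v in y]                 # deviations from the mean line
    ra = sum(abs(d) for d in dev) / n           # arithmetic average, Eq. (2.3)
    rq = (sum(d * d for d in dev) / n) ** 0.5   # root mean square, Eq. (2.4)
    ry = max(dev) - min(dev)                    # peak-to-valley height
    return ra, rq, ry

ra, rq, ry = roughness_params([0.2, -0.1, 0.4, -0.3, 0.1, -0.3])
print(round(ra, 4), round(rq, 4), round(ry, 1))  # 0.2333 0.2582 0.7
```

Note that Rq ≥ Ra always holds, since the RMS weights large deviations more heavily.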

2.13. Measurement of Surface Roughness

Measurement of roughness is made in two general ways. One is by comparison of the surface with a standard reference surface. The other is by the use of instruments that make direct measurements.

2.13.1. Method of Comparison

Surfaces are observed through a magnifying glass or microscope for visual comparison. Magnified surfaces are often photographed; however, caution must be exercised in examining photographs, which are so dependent on illumination and focus that they may not reveal the true depth pattern. Often sight-and-touch is the best method of comparison. Most comparative methods require skill and judgment.
and judgment.

2.13.2. Measurement by Instruments

The surface roughness may be measured using any one of the

following.

1. Straight edge.

2. Straight gauge.

3. Optical flat.

4. Tool maker's microscope.

5. Profilometer.

6. Profilograph.

7. Talysurf.

8. Surtronic machine.

In the present study, a Surtronic 3+ machine has been used to measure the surface roughness.

2.14. Literature Review

Abbas Fadhel Ibraheem et al. [5] investigated the effect of cutting speed, feed, and axial and radial depth of cut on the cutting force in machining of modified AISI P20 tool steel in the end milling process. They concluded that the higher the feed rate, the larger the cutting forces. They also developed a genetic network model to predict the cutting forces.

Muammer Nalbant et al. [6] used multiple regression analysis and artificial neural network models for predicting the surface roughness in turning of AISI 1030 steel. These techniques used a full factorial design and analysis of variance (ANOVA). According to them, surface roughness increases with increasing feed rate but decreases with increasing insert nose radius.

R.A. Ekanayake and P. Mathew [7] investigated the effect of cutting speed, feed and depth of cut on cutting forces with different inserts while milling AISI 1020 steel. According to them, tool offsets and run-outs significantly affect the cutting forces in high speed milling, where small cut sections are employed. This can cause uneven wear of the tool tips due to uneven chip loads.

Lajis et al. [8] developed a response surface model to predict the tool life in end milling of hardened AISI D2 steel. This technique used a central composite design in the design of experiments and ANOVA. The objective was to obtain the contribution percentages of the cutting parameters (cutting speed, feed and depth of cut) on the tool life.

Richard Dewes et al. [9] carried out a study on rapid machining of hardened AISI H13 and D2 moulds, dies and press tools. The primary objective was to assess the drilling and tapping of AISI D2 and H13 with carbide cutting tools, in terms of tool life, work piece quality, productivity and costs. The secondary aim was to assess the performance of a number of water-based dielectric fluids, intended primarily for EDM operations, against a standard soluble oil cutting fluid, in order to assess the feasibility of a duplex machining arrangement involving HSM and EDM on one machine tool.

Mohammad Reza Soleymani Yazdi and Saeed Zare Chavoshi [10] studied the effect of cutting parameters and cutting forces on rough and finish surface operations and the material removal rate (MRR) of AL6061 in a CNC face milling operation. The objective was to develop multiple regression analysis and artificial neural network models for predicting the surface roughness and material removal rate. According to them, in roughing operations the feed rate and depth of cut are the parameters with the most significant effect on Ra and MRR, which increase with increasing cutting forces.

Abou-El-Hossein et al. [11] predicted the cutting forces in an end milling operation of modified AISI P20 tool steel using response surface methodology (RSM) and Minitab software.

Khalid Hafiz et al. [12] developed a response surface model to predict the tool life in end milling of hardened AISI H13 tool steel. This technique used a central composite design in the design of experiments and ANOVA. The objective was to obtain the contribution percentages of the cutting parameters (cutting speed, feed and depth of cut) on the tool life.



Rahman et al. [13] (2001, 2002) compared the machinability of P20 mould steel (357 HB) under dry and wet milling conditions. They considered cutting speeds in the range 75–125 m/min and feeds ranging between 0.3 and 0.7 mm/tooth. They found the cutting forces in both processes to be similar, but with higher flank wear acceleration in dry milling. Furthermore, they observed a better surface finish with wet milling.

Liao and Lin [15] (2007) studied the milling of P20 steel with MQL lubrication. The cutting speeds were 200–500 m/min and the feeds 0.1–0.2 mm/tooth. The authors found that tool life is higher with MQL, owing to an oxide layer formed on the tool inserts that helped to lengthen the tool life.



CHAPTER-3

DESIGN OF EXPERIMENTS AND ARTIFICIAL NEURAL

NETWORKS

3.1. Introduction

Detailed design of experiments and Artificial Neural Network techniques are explained in this chapter.

3.2. Design of Experiments (DOE)

A Design of Experiment (DOE) is a structured, organized

method for determining the relationship between factors affecting a

process and the output of that process.

Other definitions:

1. Conducting and analyzing controlled tests to evaluate the factors that control the value of a parameter or group of parameters.

2. "Design of Experiments" (DOE) refers to experimental methods used to quantify indeterminate measurements of factors and interactions between factors statistically, through observation of forced changes made methodically as directed by mathematically systematic tables.

3.3. Design of Experiment Techniques

1. Factorial Design.

2. Response Surface methodology.

3. Mixture Design.

4. Taguchi Design.

Of the above methods, response surface methodology and the orthogonal array in the Taguchi design have been used in the present project work.

3.4. Orthogonal arrays

In order to reduce the total number of experiments, Sir Ronald Fisher developed the solution: orthogonal arrays. The orthogonal array can be thought of as a distillation mechanism through which the engineer's experiment passes (Ealey, 1998). The array allows the engineer to vary multiple variables at one time and obtain the effects which that set of variables has on the average and on the dispersion.

Taguchi designs experiments using specially constructed tables, known as "Orthogonal Arrays" (OA), to treat the design process such that quality is built into the product during the product design stage.



Orthogonal Arrays (OA) are a special set of Latin squares,

constructed by Taguchi to lay out the product design experiments.

An orthogonal array is a type of experimental design in which the columns for the independent variables are orthogonal to one another. Orthogonal arrays are employed to study the effect of several control factors and to investigate quality.

Orthogonal arrays are not unique to Taguchi. They were

discovered considerably earlier (Bendell, 1998). However Taguchi has

simplified their use by providing tabulated sets of standard orthogonal

arrays and corresponding linear graphs to fit specific projects (ASI,

1989; Taguchi and Kenishi, 1987).

Minimum number of experiments to be conducted

The design of experiments using the orthogonal array is, in most cases, efficient when compared to many other statistical designs. The minimum number of experiments required by the Taguchi method can be calculated from the degrees-of-freedom approach:

N_min = 1 + Σ (Li − 1) ---------- (3.1)

where Li is the number of levels of the i-th independent variable. For example, in the case of a study with 8 independent variables, one having 2 levels and the remaining 7 having 3 levels (the L18 orthogonal array), the minimum number of experiments required by the above equation is 16. Because of the balancing property of orthogonal arrays, the total number of experiments must be a multiple of 2 and 3; hence the number of experiments for this case is 18.
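The degrees-of-freedom count of Eq. (3.1) can be sketched as below, reproducing the L18 example from the text.

```python
# Minimum number of Taguchi experiments from the degrees-of-freedom rule,
# Eq. (3.1): N_min = 1 + sum over factors of (levels - 1).

def minimum_experiments(levels_per_factor):
    return 1 + sum(levels - 1 for levels in levels_per_factor)

# The L18 case from the text: one 2-level factor and seven 3-level factors.
levels = [2] + [3] * 7
print(minimum_experiments(levels))   # 1 + (2-1) + 7*(3-1) = 16
```

The chosen array must then have at least this many runs while preserving the balancing property, which is why the L18 array (18 runs) is used rather than 16.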

3.4.1. Application of Orthogonal Array

Taguchi's OA analysis is used to produce the best parameters for the optimum design process with the least number of experiments. OA is usually applied in the design of engineering products, test and quality development, and process development.

3.4.2. Advantages and Disadvantages of Orthogonal Array

Advantages:

1. Conclusions are valid over the entire region spanned by the control factors and their settings.

2. Large saving in experimental effort.

3. Analysis is easy.

Disadvantage: OA techniques are not applicable to processes involving factors that vary in time and cannot be quantified exactly.

3.5. Analysis of variance (ANOVA)



Analysis of variance (ANOVA) is a statistical method for determining the existence of differences among several population means. While the aim of ANOVA is to detect differences among several population means, the technique requires the analysis of different forms of variance associated with the random samples under study; hence the name analysis of variance.

The original ideas of analysis of variance were developed by the English statistician Sir Ronald A. Fisher in the early twentieth century. Much of the early work in this area dealt with agricultural experiments in which crops were given different treatments, such as being grown using different kinds of fertilizers. The researchers wanted to determine whether all treatments under study were equally effective or whether some treatments were better than others, "better" referring to treatments that would produce crops of greater average weight. This question is answered by the analysis of variance.
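The between-treatment versus within-treatment variance decomposition behind one-way ANOVA can be sketched in pure Python; the three "fertilizer" groups below are invented, illustrative data in the spirit of the agricultural example.

```python
# One-way ANOVA F statistic (sketch): do several treatment groups share a mean?

def f_statistic(groups):
    k = len(groups)                                  # number of treatments
    n = sum(len(g) for g in groups)                  # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-treatment and within-treatment sums of squares.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)                # mean square, k-1 dof
    ms_within = ss_within / (n - k)                  # mean square, n-k dof
    return ms_between / ms_within

# Illustrative crop-yield style data: three fertilizer treatments.
a = [20.1, 19.8, 20.4, 20.0]
b = [22.3, 22.0, 22.6, 22.1]
c = [20.2, 20.5, 19.9, 20.3]
print(f_statistic((a, b, c)) > 4.26)  # True: exceeds the 5% critical F(2, 9)
```

A large F indicates that the variation between treatment means is too big to be explained by the scatter within treatments, so the treatments are not all equally effective.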

3.6. Response Surface Methodology (RSM)

Response surface methodology is a widely adopted tool in the quality engineering field. It comprises regression surface fitting to obtain approximate responses, design of experiments to obtain minimum variances of the responses, and optimization using the approximated responses.



Response surface methodology (RSM) is a collection of mathematical and statistical techniques that are useful for modeling, analyzing and optimizing processes in which a response of interest is influenced by several variables and the objective is to optimize this response.

Originally, RSM was developed to model experimental responses (Box and Draper, 1987) and then migrated into the modeling of numerical experiments. The difference lies in the type of error generated by the response. In physical experiments, inaccuracy can be due, for example, to measurement errors, while in computer experiments numerical noise is a result of incomplete convergence of iterative processes, round-off errors or the discrete representation of continuous physical phenomena (Giunta et al., 1996; Van Campen et al., 1990; Toropov et al., 1996). In RSM the errors are assumed to be random.

The most extensive applications of RSM are in the particular

situations where several input variables potentially influence some

performance measure or quality characteristic is called the response.

The input variables are sometimes called independent variables, and

they are subject to control of the scientist or engineer. The field of

response surface methodology consists of experimental strategy for

exploring the space of the process or independent variables, empirical

statistical modeling to develop an appropriate approximating

relationship between the yield and the process variables, and



optimization methods for finding the values of the process variables

that produce desirable values of the response.

Generally, the structure of the relationship between the

response and the independent variables is unknown. The first step in

RSM is to find a suitable approximation to the true relationship. The

most common forms are low-order polynomials (first or second order).

An important aspect of RSM is the design of experiments (Box

and Draper, 1987), usually abbreviated as DOE. These strategies were

originally developed for the model fitting of physical experiments, but

can also be applied to numerical experiments. The objective of DOE is

the selection of the points where the response should be evaluated.

Most of the criteria for optimal design of experiments are

associated with the mathematical model of the process. Generally

these mathematical models are polynomials with an unknown

structure, so the corresponding experiments are designed separately for
each particular problem. The choice of the design of experiments can

have a large influence on the accuracy of the approximation and the

cost of constructing the response surface. The DOE methods are

1. Full factorial Design.

2. Central composite design.

3. D-optimal designs.

4. Taguchi's experimental design.

5. Latin hypercube design.



6. Audze-Eglais approach.

7. Van Keulen's approach.

In statistical modeling, to develop an appropriate approximating
model between the response Y and the independent variables {x1, x2,
..., xn}, the relationship is in general written in the form

Y = f(x1, x2, ..., xn) + ε ---------------------------- (3.2)

where the form of the true response function f is unknown and
perhaps very complicated, and ε is a term that represents other
sources of variability not accounted for in f. ε usually includes
effects such as measurement error on the response, background noise,
the effect of other variables and so on. Usually ε is treated as a
statistical error, often assumed to have a normal distribution with
mean zero and variance σ².


E(y) = E[f(x1, x2, ..., xn)] + E(ε) = f(x1, x2, ..., xn) ---------- (3.3)

The variables x1, x2, ..., xn in equation (3.3) are usually called the
natural variables, because they are expressed in the natural units of
measurement, such as degrees Celsius or pounds per square inch. In
much RSM work it is convenient to transform the natural variables to
coded variables x1, x2, ..., xn, which are usually defined to be
dimensionless with mean zero and the same standard deviation. In
terms of the coded variables the response function (3.3) will be written
as

Y = f(x1, x2, ..., xn) ------------------------ (3.4)



Because the form of the true response function is unknown, it
must be approximated. In fact, successful use of RSM is critically
dependent upon the experimenter's ability to develop a suitable
approximation for Y. Usually a low-order polynomial in some
relatively small region of the independent variable space is
appropriate. In many cases, either a first-order or a second-order
model is used.

The first order model is likely to be appropriate when the

experimenter is interested in approximating the true response surface

over a relatively small region of the independent variable space in a

location where there is little curvature in Y.

For the case of three variables, the first order model in terms of

coded variables is

Y = β0 + β1X1 + β2X2 + β3X3 ---------------- (3.5)

The form of the first order model in equation (3.5) is sometimes

called a main effects model, because it includes only the main effects

of the true variables x1, x2, and x3. If there is an interaction between

these variables, it can be added to the model easily as follows.

Y = β0 + β1X1 + β2X2 + β3X3 + β12X1X2 + β23X2X3 + β13X1X3 ----------- (3.6)

This is the first order model with interaction. Adding the

interaction term introduces curvature into the response function.

Often the curvature in the true response surface is strong enough that

the first order model (even with interaction term included) is

inadequate. A second order model will likely be required in these

situations. For the case of three variables the second order model is

written as

Y = β0 + β1X1 + β2X2 + β3X3 + β12X1X2 + β23X2X3 + β13X1X3 + β11X1² +
β22X2² + β33X3² -------- (3.7)

This model would likely be useful as an approximation to the true

response surface in a relatively small region.

The second order model is widely used in response surface methodology for

several reasons.

1. The second order model is very flexible. It can take on a wide

variety of functional forms, so it will often work well as an

approximation to the true response surface.

2. It is easy to estimate the parameters (the β's) in the second

order model. The method of least squares can be used for this

purpose.

3. There is considerable practical experience indicating that the

second order models work well in solving real response surface

problems.

In general the first order model is

Y = β0 + β1X1 + β2X2 + β3X3 + ... + βkXk ---------------------- (3.8)

The second order model is



Y = β0 + Σ βiXi + Σ βiiXi² + ΣΣ βijXiXj (i < j) -------------------- (3.9)

In some infrequent situations, approximating polynomials of

order greater than two are used. The general motivation for a

polynomial approximation for the true response function Y is based on

the Taylor series expansion around the point X10, X20, ..., Xk0.

Finally, let us note that there is a close connection between RSM and

linear regression analysis. For example consider the model,

Y = β0 + β1X1 + β2X2 + β3X3 + ... + βkXk + ε ----------------- (3.10)

The β's are a set of unknown parameters. To estimate their values,
data must be collected from the system being studied. Because, in
general, polynomial models are linear functions of the unknown β's,
the technique is referred to as linear regression analysis.
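The link between RSM models and linear regression can be sketched numerically. The following is a minimal least-squares fit of a first-order model of the form (3.8), using illustrative data (not from this work) that lie exactly on the plane y = 1 + 2x1 + 3x2:

```python
import numpy as np

# Illustrative data lying exactly on the plane y = 1 + 2*x1 + 3*x2.
x1 = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
x2 = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = 1 + 2 * x1 + 3 * x2

# Design matrix for the first-order model y = b0 + b1*x1 + b2*x2.
A = np.column_stack([np.ones_like(x1), x1, x2])

# Least-squares estimate of the unknown coefficients (the betas).
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta, 6))
```

With noisy data, the same call returns the least-squares estimates of the β's rather than the exact coefficients.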

3.6.1. Uses of RSM

1. To determine the factor levels that will simultaneously satisfy a
set of desired specifications.

2. To determine the optimum combination of factors that yields a
desired response and describes the response near the optimum.

3. To determine how a specific response is affected by changes in
the level of the factors over the specified levels of interest.

4. To achieve a quantitative understanding of the system behavior
over the region tested.

5. To predict product properties throughout the region, even at
factor combinations not actually run.

6. To find conditions for process stability (an insensitive operating
region).

3.6.2. Correlation and regression

Definition: Whenever two variables are so related that changes in one
variable are accompanied by changes in the other, the variables are said
to be correlated.

Scatter (or dot) diagram

To obtain a measure of the relationship between two variables, we plot
their corresponding values on a graph, taking one of the variables
along the X-axis and the other along the Y-axis. The resulting diagram,
showing a collection of dots, is called a scatter (or dot) diagram.

For the two variables under consideration, a scatter diagram shows
the location of points on a rectangular coordinate system. If all the
points in this scatter diagram seem to lie near a line, the correlation
is called linear. If all the points seem to lie near some curve, the
correlation is called non-linear. If there is no relationship indicated
between the variables, they are said to be independent or
uncorrelated.

If an increase (or decrease) in the values of one variable
corresponds to an increase (or decrease) in the other, the correlation
is said to be positive. If an increase (or decrease) in one variable
corresponds to a decrease (or increase) in the other, the correlation
is said to be negative. When only two variables are involved, it is called
simple correlation or simple regression; when more than two variables
are involved, it is called multiple correlation or multiple regression.

Correlation Coefficient

The numerical measure of the correlation is called the coefficient of
correlation and is defined by the relation

R² = 1 - (SSerror / SStotal) ---------------- (3.11)
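Equation (3.11) can be checked with a short numerical sketch; the observed and fitted values below are made-up illustrations:

```python
# R^2 = 1 - SS_error / SS_total, as in equation (3.11).
y_obs = [1.0, 2.0, 3.0, 4.0, 5.0]
y_fit = [1.1, 1.9, 3.2, 3.8, 5.0]

mean = sum(y_obs) / len(y_obs)
ss_total = sum((y - mean) ** 2 for y in y_obs)              # total sum of squares
ss_error = sum((y - f) ** 2 for y, f in zip(y_obs, y_fit))  # residual sum of squares
r2 = 1 - ss_error / ss_total
print(round(r2, 4))
```

A value near 1 indicates that the fitted model explains almost all of the variation in the observations.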

Regression

Regression is the measure of the average relationship between two or
more variables in terms of the original units of the data.

3.7. Artificial Neural Networks (ANN)

An Artificial Neural Network (ANN) is an information processing

paradigm that is inspired by the way biological nervous systems, such

as the brain, process information. The key element of this paradigm is

the novel structure of the information processing system. It is

composed of a large number of highly interconnected processing

elements (neurons) working in unison to solve specific problems.

ANNs, like people, learn by example. An ANN is configured for a



specific application, such as pattern recognition or data classification,

through a learning process. Learning in biological systems involves

adjustments to the synaptic connections that exist between the

neurons.

Neural network simulations appear to be a recent development.

However, this field was established before the advent of computers,

and has survived at least one major setback and several eras.

The first artificial neuron was produced in 1943 by the

neurophysiologist Warren McCulloch and the logician Walter Pitts. But

the technology available at that time did not allow them to do too

much.

Neural networks, with their remarkable ability to derive

meaning from complicated or imprecise data, can be used to extract

patterns and detect trends that are too complex to be noticed by either

humans or other computer techniques. A trained neural network can

be thought of as an "expert" in the category of information it has been

given to analyze.

Other advantages include:

1. Adaptive learning: An ability to learn how to do tasks based on

the data given for training or initial experience.



2. Self-Organization: An ANN can create its own organization or

representation of the information it receives during learning

time.

3. Real Time Operation: ANN computations may be carried out in

parallel, and special hardware devices are being designed and

manufactured which take advantage of this capability.

4. Fault Tolerance via Redundant Information Coding: Partial

destruction of a network leads to the corresponding degradation

of performance. However, some network capabilities may be

retained even with major network damage.

3.7.1. How the Human Brain Learns?

Much is still unknown about how the brain trains itself to

process information, so theories abound. The human brain consists

of approximately 10^11 neurons, and each neuron forms about 10,000
synapses. A typical neuron collects signals from others through a host

of fine structures called dendrites. The neuron sends out spikes of

electrical activity through a long, thin strand known as an axon, which

splits into thousands of branches. At the end of each branch, a

structure called a synapse converts the activity from the axon into
electrical effects that inhibit or excite activity in the connected

neurons. When a neuron receives excitatory input that is sufficiently

large compared with its inhibitory input, it sends a spike of electrical



activity down its axon. Learning occurs by changing the effectiveness

of the synapses so that the influence of one neuron on another

changes. The structure of the biological neuron is shown in Fig. 3.1
and the structure of the artificial neuron in Fig. 3.2.

Fig. 3.1: Biological neuron (dendrites, cell body/soma, axon hillock, axon and synapse).



Fig. 3.2: Artificial neuron (inputs I1, ..., Im with weights Wk1, ..., Wkm, bias bk, summing junction, activation/transfer function and output Ok).

3.7.2. Types of Neural Networks

1. Feed-Forward NN.

2. Recurrent NN.

3. Bi-directional Associative Memory.

4. Self-organizing Map.

5. Radial Basis Function.

6. Others.

3.7.3. Feed-forward Neural Networks

Feed-forward neural networks (FF networks) are the most

popular and most widely used models in many practical applications.

They are known by many different names, such as "multi-layer

perceptrons." Fig. 3.3 illustrates a one-hidden-layer FF network with
inputs x1, x2, ..., xn and output ŷ. Each arrow in the figure

symbolizes a parameter in the network. The network is divided into

layers. The input layer consists of just the inputs to the network. Then

follows a hidden layer, which consists of any number of neurons or

hidden units placed in parallel. Each neuron performs a weighted

summation of the inputs, which is then passed through a nonlinear
activation function σ, also called the neuron function.

Fig.3.3. A feed forward network with one hidden layer and one

output.

Mathematically the functionality of a hidden neuron is described by

σ( Σ_{i=1}^{n} wij xi + bj ) ---------------- (3.12)

where the weights {wij, bj} are symbolized by the arrows feeding into

the neuron.

The network output is formed by another weighted summation

of the outputs of the neurons in the hidden layer. This summation on

the output is called the output layer. In Figure 3.3 there is only one

output in the output layer since it is a single-output problem.

Generally, the number of output neurons equals the number of

outputs of the approximation problem. The neurons in the hidden

layer of the network in Figure 3.3 are similar in structure to those of

the perceptron, with the exception that their activation functions can

be any differentiable function. The output of this network is given by

ŷ = Σ_{j=1}^{nh} Wj σ( Σ_{i=1}^{n} wij xi + bj ) + B ------------------ (3.13)

Where n is the number of inputs and nh is the number of

neurons in the hidden layer. The variables {wij, bj, Wj, B} are the
parameters of the network model that are represented collectively by
the parameter vector θ. The size of the input and output layers are

defined by the number of inputs and outputs of the network and,

therefore, only the number of hidden neurons has to be specified

when the network is defined. The network in Figure 3.3 is sometimes

referred to as a three-layer network, counting input, hidden, and

output layers. However, since no processing takes place in the input

layer, it is also sometimes called a two-layer network. To avoid

confusion this network is called a one-hidden-layer FF network

throughout this documentation.

In training the network, its parameters are adjusted

incrementally until the training data satisfy the desired mapping as

well as possible; that is, until ŷ(θ) matches the desired output y as



closely as possible up to a maximum number of iterations. The

nonlinear activation function in the neuron is usually chosen to be a

smooth step function. The default is the standard sigmoid,

σ(x) = 1 / (1 + e^(-x)) ----------- (3.14)
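A minimal sketch of the forward pass of such a one-hidden-layer network; the network size and weight values are arbitrary illustrations, not parameters fitted in this work:

```python
import numpy as np

def sigmoid(x):
    # Standard sigmoid activation, equation (3.14).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, w2, b2):
    # Hidden layer: weighted sums plus biases, passed through the sigmoid,
    # as in equation (3.12).
    hidden = sigmoid(W1 @ x + b1)
    # Output layer: weighted summation of hidden outputs, equation (3.13).
    return w2 @ hidden + b2

x = np.array([0.5, -1.0])           # n = 2 inputs
W1 = np.array([[0.1, 0.4],          # nh = 3 hidden neurons
               [-0.3, 0.2],
               [0.6, -0.5]])
b1 = np.array([0.0, 0.1, -0.2])
w2 = np.array([0.7, -0.4, 0.2])
b2 = 0.05

y_hat = forward(x, W1, b1, w2, b2)
print(y_hat)
```

Only the number of hidden neurons has to be chosen when the network is defined; the input and output sizes follow from the problem.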

3.7.4. The Back Propagation Algorithm

There are several algorithms that can be used to determine the

weights and biases for the network. In this application, a back

propagation algorithm was chosen. This algorithm involves an iterative

process. The error of the neural networks is measured by the sum of

the squared errors between the prediction and the experimental data.

The weights and biases are changed incrementally after each iteration.

The back propagation algorithm (Rumelhart and McClelland, 1986) is

used in layered feed-forward ANNs. This means that the artificial

neurons are organized in layers, and send their signals forward, and

then the errors are propagated backwards. The network receives

inputs by neurons in the input layer, and the output of the network is

given by the neurons on an output layer. There may be one or more

intermediate hidden layers. The back propagation algorithm uses

supervised learning, which means that we provide the algorithm with

examples of the inputs and outputs we want the network to compute,

and then the error (difference between actual and expected results) is

calculated. The idea of the back propagation algorithm is to reduce

this error, until the ANN learns the training data. The training begins

with random weights, and the goal is to adjust them so that the error

will be minimal.

The net input of an artificial neuron in ANNs
implementing the back propagation algorithm is a weighted sum (the

sum of the inputs xi multiplied by their respective weights wij ):

netj = Σ_{i=1}^{N} xi wij --------------- (3.15)

where netj is the total or net input and N is the number of
inputs to the jth neuron in the hidden layer. Wij is the weight of the
connection from the ith neuron in the preceding layer to the jth neuron
in the hidden layer. A neuron in the network produces its output (Outj)

by processing the net input through an activation (Transfer) function

f, such as the logistic function given below:

Outj = f(netj) = 1 / (1 + e^(-netj)) ------------------- (3.16)

Now, the goal of the training process is to obtain a desired

output when certain inputs are given. Since the error is the difference

between the actual and the desired output, the error depends on the

weights, and we need to adjust the weights in order to minimize the

error. We can define the error function for the output of each neuron:

E = (1/2) ΣΣ (di - yi)² -------------- (3.17)

where di is the desired response (or target signal), yi are the

output units of the network, and the sums run over time and over the

output units. When the mean square error is minimized, the power of

the error (i.e. the power of the difference between the desired and the

actual ANN output) is minimized.

We take the square of the difference between the output and the

desired target because it will be always positive, and because it will be

greater if the difference is big, and lesser if the difference is small. The

error of the network will simply be the sum of the errors of all the

neurons in the output layer.

The back propagation algorithm now calculates how the error

depends on the output, inputs, and weights. After we find this, we can

adjust the weights using the gradient descent method:

Wij(new) = Wij(old) + ΔWij -------------------- (3.18)

ΔWij = -η (∂E/∂Wij) -------------------- (3.19)

where E is the mean square error, Outj is the output of the jth neuron
and η is the learning rate (step size) parameter.
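The update rules (3.15)-(3.19) can be sketched for the simplest possible case: a single logistic neuron trained by incremental gradient descent. The OR-style data, learning rate and epoch count below are illustrative assumptions, not settings used in this work:

```python
import math
import random

random.seed(0)

# Illustrative training data: a single logistic neuron learning an
# OR-like mapping (a minimal sketch, not the full multi-layer algorithm).
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights Wij
b = random.uniform(-0.5, 0.5)                      # bias
eta = 0.5                                          # learning rate

def neuron_out(x):
    net = sum(wi * xi for wi, xi in zip(w, x)) + b  # net input, eq (3.15)
    return 1.0 / (1.0 + math.exp(-net))             # logistic output, eq (3.16)

for _ in range(5000):
    for x, d in data:
        y = neuron_out(x)
        # Gradient of the squared error 0.5*(d - y)^2 for a logistic unit.
        delta = (d - y) * y * (1.0 - y)
        # Incremental updates in the spirit of eqs (3.18)-(3.19).
        for i in range(len(w)):
            w[i] += eta * delta * x[i]
        b += eta * delta

print(round(neuron_out([0.0, 0.0]), 2), round(neuron_out([1.0, 1.0]), 2))
```

Training begins with random weights and drives the outputs toward the desired targets, which is exactly the error-reduction idea described above.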

3.7.5. Application of Artificial Neural Networks



Artificial neural networks are excellent tools for modeling
complex manufacturing processes. The use of artificial neural

networks (ANN) has been well accepted in the areas of

Telecommunications,

Signal processing,

Prediction,

Process control,

Financial analysis, pattern recognition, etc.

CHAPTER-4

DATA COLLECTION

4.1. Introduction

Milling is a very important operation in which a multi-point cutting
tool removes metal from a flat workpiece. During the operation,
high temperatures and forces affect the life of the cutting tool. If the
cutting tool fails, it will lead to a poor surface finish. Surface
roughness is therefore a very important response for evaluating
cutting performance.

The experiments are planned using Taguchi's orthogonal array
in the design of experiments (DOE), which helps in reducing the
number of experiments. The experiments were conducted according to
the orthogonal array on a CNC end milling machine using coated carbide
cutting tools. The five cutting parameters selected for the present work
are nose radius (r, mm), cutting speed (V, m/min), feed rate (f,
mm/tooth), axial depth of cut (d, mm) and radial depth of cut (rd, mm).

Since the considered factors are multi-level variables and their
outcome effects are not linearly related, it was decided to use five-level
tests for cutting speed, feed rate, axial depth of cut and radial depth
of cut, and two levels for nose radius. The tool holder used for the
milling operation was a Kennametal tool holder BT40ER40080M. The CNC
milling machine and its specifications are given in Fig. 4.1 and Table
4.1 respectively.

Table-4.1: CNC milling machine specifications

Model: VMC 600 II
Table clamping area: 20 tools ATC standard
Maximum load on the table: 700 kg
Travel, X-axis: 600 mm
Travel, Y-axis: 510 mm
Travel, Z-axis: 510 mm
Spindle taper: BT-40
Spindle speeds: 8-8000 rpm
Power: 13 kW
Feed rates: 0-12 m/min
Rapid traverse, X/Y/Z-axis: 30 m/min
Accuracy: VDI/DGQ 3441
Positioning: 0.005 mm
Repeatability: 0.0025 mm
Power supply: 415 V, 50 Hz, 3-phase
CNC control system: SIEMENS 810D
Tool clamping: Pneumatic
Tool weight: Conformed for spindle
Make: Hardinge (Taiwan)
DNC port: RS 232
Baud rate: 9600

Fig.4.1. Vertical milling machine 600 II.

The workpiece used for the present investigation is P20 mould
steel, in the form of flat pieces of 100 mm x 100 mm x 10 mm; the
density of the material is 7.8 g/cc. The chemical composition of



the work material is given in Table 4.2. The different coated
carbide cutting tool inserts (TN450) used for the present work
are shown in Fig. 4.3 and Fig. 4.4.

Table 4.2. Chemical Composition of P20 mould steel

Composition Weight (%)

Carbon 0.35-0.45
Silicon 0.2-0.4
Manganese 1.3-1.6
Chromium 1.8-2.1
Molybdenum 0.15-0.25

Fig.4.2. P20 mould steel.

Fig.4.3. Tool insert of nose radius 0.8mm.



Fig.4.4. Tool insert of nose radius 1.2 mm.

The machining parameters used and their levels chosen are given in

Table 4.3.

Table 4.3: Machining Parameters and Their Levels

Control parameter (symbol, units): Level 1, Level 2, Level 3, Level 4, Level 5
Nose radius (R, mm): 0.8, 1.2, -, -, -
Cutting speed (V, m/min): 75, 80, 85, 90, 95
Feed (f, mm/tooth): 0.1, 0.125, 0.15, 0.175, 0.2
Axial depth of cut (d, mm): 0.5, 0.75, 1, 1.25, 1.5
Radial depth of cut (rd, mm): 0.3, 0.4, 0.5, 0.6, 0.7

Taguchi's L50 (2^1 x 5^11) orthogonal array is most suitable for
this experiment. With nose radius at two levels and cutting speed,
feed, axial depth of cut and radial depth of cut at five levels each,
a full factorial design for the five independent variables would
require 2 x 5 x 5 x 5 x 5 = 1250 runs. Using Taguchi's orthogonal
array, the number of experiments is reduced from 1250 to 50. This
design needs 50 runs (experiments) and has 49 degrees of freedom
(DOF). The L50 orthogonal array is presented in Table 4.4.

Table-4.4: L50 (21*511) orthogonal array.

L50(2^1*5^11)
S. NO Nose Cutting Cutting Axial depth Radial depth
Radius Speed Feed of cut of cut
1 1 1 1 1 1
2 1 1 2 2 2
3 1 1 3 3 3
4 1 1 4 4 4
5 1 1 5 5 5
6 1 2 1 2 3
7 1 2 2 3 4
8 1 2 3 4 5
9 1 2 4 5 1
10 1 2 5 1 2
11 1 3 1 3 5
12 1 3 2 4 1
13 1 3 3 5 2
14 1 3 4 1 3
15 1 3 5 2 4
16 1 4 1 4 2
17 1 4 2 5 3
18 1 4 3 1 4
19 1 4 4 2 5
20 1 4 5 3 1
21 1 5 1 5 4
22 1 5 2 1 5
23 1 5 3 2 1

24 1 5 4 3 2
25 1 5 5 4 3
26 2 1 1 1 4
27 2 1 2 2 5
28 2 1 3 3 1
29 2 1 4 4 2
30 2 1 5 5 3
31 2 2 1 2 1
32 2 2 2 3 2
33 2 2 3 4 3
34 2 2 4 5 4
35 2 2 5 1 5
36 2 3 1 3 3
37 2 3 2 4 4
38 2 3 3 5 5
39 2 3 4 1 1
40 2 3 5 2 2
41 2 4 1 4 5
42 2 4 2 5 1
43 2 4 3 1 2
44 2 4 4 2 3
45 2 4 5 3 4
46 2 5 1 5 2
47 2 5 2 1 3
48 2 5 3 2 4
49 2 5 4 3 5
50 2 5 5 4 1

In Table 4.4 above, the entries 1, 2, 3, 4 and 5 in the columns
represent the levels of the factor corresponding to the particular
variable presented in each column. For the above coded values of the
machining parameters, the actual setting values are presented in
Table 4.5.
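The mapping from the coded levels of Table 4.4 to the actual settings of Table 4.5 follows the levels listed in Table 4.3 and can be sketched as:

```python
# Factor levels from Table 4.3; coded level k maps to levels[factor][k-1].
levels = {
    "nose_radius":   [0.8, 1.2],
    "cutting_speed": [75, 80, 85, 90, 95],
    "feed":          [0.1, 0.125, 0.15, 0.175, 0.2],
    "axial_doc":     [0.5, 0.75, 1.0, 1.25, 1.5],
    "radial_doc":    [0.3, 0.4, 0.5, 0.6, 0.7],
}

def decode(run):
    # run: coded levels in the column order of Table 4.4.
    names = ["nose_radius", "cutting_speed", "feed", "axial_doc", "radial_doc"]
    return {n: levels[n][c - 1] for n, c in zip(names, run)}

# Run 6 of Table 4.4 is coded (1, 2, 1, 2, 3):
print(decode((1, 2, 1, 2, 3)))
```

Decoding run 6 reproduces row 6 of Table 4.5: nose radius 0.8 mm, cutting speed 80 m/min, feed 0.1 mm/tooth, axial depth 0.75 mm and radial depth 0.5 mm.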

Table-4.5: Actual Setting Values for the Coded Values:

S.No | Nose radius (mm) | Cutting speed (m/min) | Feed (mm/tooth) | Axial depth of cut (mm) | Radial depth of cut (mm)

1 0.8 75 0.1 0.5 0.3

2 0.8 75 0.125 0.75 0.4

3 0.8 75 0.15 1 0.5

4 0.8 75 0.175 1.25 0.6

5 0.8 75 0.2 1.5 0.7

6 0.8 80 0.1 0.75 0.5

7 0.8 80 0.125 1 0.6

8 0.8 80 0.15 1.25 0.7


9 0.8 80 0.175 1.5 0.3

10 0.8 80 0.2 0.5 0.4

11 0.8 85 0.1 1 0.7

12 0.8 85 0.125 1.25 0.3

13 0.8 85 0.15 1.5 0.4

14 0.8 85 0.175 0.5 0.5

15 0.8 85 0.2 0.75 0.6

16 0.8 90 0.1 1.25 0.4

17 0.8 90 0.125 1.5 0.5

18 0.8 90 0.15 0.5 0.6

19 0.8 90 0.175 0.75 0.7

20 0.8 90 0.2 1 0.3

21 0.8 95 0.1 1.5 0.6

22 0.8 95 0.125 0.5 0.7

23 0.8 95 0.15 0.75 0.3

24 0.8 95 0.175 1 0.4



25 0.8 95 0.2 1.25 0.5

26 1.2 75 0.1 0.5 0.6

27 1.2 75 0.125 0.75 0.7

28 1.2 75 0.15 1 0.3

29 1.2 75 0.175 1.25 0.4

30 1.2 75 0.2 1.5 0.5

31 1.2 80 0.1 0.75 0.3

32 1.2 80 0.125 1 0.4

33 1.2 80 0.15 1.25 0.5


34 1.2 80 0.175 1.5 0.6

35 1.2 80 0.2 0.5 0.7

36 1.2 85 0.1 1 0.5

37 1.2 85 0.125 1.25 0.6

38 1.2 85 0.15 1.5 0.7

39 1.2 85 0.175 0.5 0.3

40 1.2 85 0.2 0.75 0.4

41 1.2 90 0.1 1.25 0.7

42 1.2 90 0.125 1.5 0.3

43 1.2 90 0.15 0.5 0.4

44 1.2 90 0.175 0.75 0.5

45 1.2 90 0.2 1 0.6

46 1.2 95 0.1 1.5 0.4

47 1.2 95 0.125 0.5 0.5

48 1.2 95 0.15 0.75 0.6

49 1.2 95 0.175 1 0.7



50 1.2 95 0.2 1.25 0.3

The surface roughness was measured using a Surtronic 3+
stylus-type instrument manufactured by Taylor Hobson, shown in
Fig. 4.5. The specifications of the Surtronic 3+ instrument are
presented in Table 4.6.

Fig.4.5. Taylor Hobson Surtronic 3+ machine

Table-4.6: Specifications of the Surtronic 3+ surface roughness
instrument

Battery: Alkaline; 600 measurements of 4 mm measurement length
Traverse speed: 1 mm/sec
Measurement units: metric/inch
Cut-off values: 0.25 mm, 0.80 mm, 2.50 mm
Display: LCD matrix
Parameters: Ra, Rq, Rz
Calculation time: Less than reversal time or 2 sec, whichever is longer
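The Ra value reported by such a stylus instrument is the arithmetic mean deviation of the profile from its mean line; a minimal sketch with an illustrative profile (the heights below are made up, not measured data):

```python
# Illustrative profile heights (micrometres) sampled along the trace.
z = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4]

mean_line = sum(z) / len(z)
ra = sum(abs(zi - mean_line) for zi in z) / len(z)
print(round(ra, 3))
```

The instrument performs the equivalent averaging over the sampled profile within each cut-off length.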

The experimental results are presented in Table- 4.7.

Table 4.7: Experimental results for Ra and MRR.

S.NO  Ra (µm)  MRR (cm³/min)    S.NO  Ra (µm)  MRR (cm³/min)
1 0.94 23.39 26 0.98 46
2 1.16 58.73 27 1.2 97.4
3 1.12 116.6 28 1.68 69.06
4 0.86 197.8 29 1.06 136.3
5 0.66 298.8 30 0.52 230.8
6 0.82 63.13 31 1.14 37.07
7 1.44 121.7 32 2.48 83.54
8 0.7 206.6 33 1.74 154.9
9 0.92 127.7 34 1.48 253.4
10 1.28 66.4 35 1.72 109.2
11 1.08 117.9 36 0.52 88.5
12 1.3 81.54 37 0.92 161.3
13 1.48 158.4 38 0.76 257.7
14 1.44 76.1 39 0.64 45.33
15 1.54 152.1 40 0.96 105.3
16 0.56 94.13 41 0.8 156.3
17 0.46 174 42 0.5 103.8
18 0.42 81.3 43 1.54 56.18
19 0.58 161.3 44 1.27 120.6
20 0.5 109.1 45 1.32 214.1
21 0.46 173.4 46 0.87 119.3
22 1.1 81.97 47 1.1 61.35

23 0.86 65.33 48 0.78 128.6


24 0.48 137.6 49 1.14 227.3
25 0.74 241.4 50 0.87 143.7

CHAPTER-5

METHODOLOGY

5.1. Methodology

This chapter presents the methodology adopted to form the
response equations using the least squares technique and to develop
the response surface model and the ANN model for the data
presented in Chapter 4.

5.2. Least Square Technique

The main objective of many statistical investigations is to
make predictions, preferably on the basis of mathematical equations.
The method of least squares is used to predict the dependent
variable in terms of the independent variables.
The method of finding the equation of the line which best fits a
given set of paired data is called the method of least squares.

Case 1: Surface roughness

Ra = β0 + β1(R) + β2(V) + β3(f) + β4(d) + β5(rd) ------------------------------ (5.1)

ΣRa = nβ0 + β1ΣR + β2ΣV + β3Σf + β4Σd + β5Σrd ---------------------------- (5.2)

Σ(Ra·R) = β0ΣR + β1ΣR² + β2ΣRV + β3ΣRf + β4ΣRd + β5ΣR·rd ----------------- (5.3)

Σ(Ra·V) = β0ΣV + β1ΣVR + β2ΣV² + β3ΣVf + β4ΣVd + β5ΣV·rd ------------------ (5.4)

Σ(Ra·f) = β0Σf + β1ΣfR + β2ΣfV + β3Σf² + β4Σfd + β5Σf·rd ----------------------- (5.5)

Σ(Ra·d) = β0Σd + β1ΣRd + β2ΣVd + β3Σfd + β4Σd² + β5Σd·rd ------------------- (5.6)

Σ(Ra·rd) = β0Σrd + β1ΣR·rd + β2ΣV·rd + β3Σf·rd + β4Σd·rd + β5Σrd² --------- (5.7)

where n = number of trials and β0, β1, β2, β3, β4, β5 are the estimated
regression coefficients.

The coefficients can be estimated by solving the normal
equations. By substituting these coefficients in equation (5.1), we
get the first-order regression equation.
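Solving the normal equations (5.2)-(5.7) amounts to solving (AᵀA)β = Aᵀy for the design matrix A. A minimal sketch, with randomly generated stand-in data in place of the 50 measured trials:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in for the 50 trials: columns R, V, f, d, rd,
# drawn from the factor levels of Table 4.3.
X = np.column_stack([
    rng.choice([0.8, 1.2], 50),                       # nose radius R
    rng.choice([75, 80, 85, 90, 95], 50),             # cutting speed V
    rng.choice([0.1, 0.125, 0.15, 0.175, 0.2], 50),   # feed f
    rng.choice([0.5, 0.75, 1.0, 1.25, 1.5], 50),      # axial depth d
    rng.choice([0.3, 0.4, 0.5, 0.6, 0.7], 50),        # radial depth rd
])
y = rng.normal(1.0, 0.3, 50)            # stand-in response Ra

A = np.column_stack([np.ones(50), X])   # model of equation (5.1)
# Normal equations (A^T A) beta = A^T y, as in (5.2)-(5.7).
beta = np.linalg.solve(A.T @ A, A.T @ y)
print(beta.shape)
```

With the measured Ra values in place of the stand-in data, the same solve yields the regression coefficients of equation (5.1).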

Case 2: Material removal rate

MRR = β0 + β1(R) + β2(V) + β3(f) + β4(d) + β5(rd) ---------------------------- (5.8)

ΣMRR = nβ0 + β1ΣR + β2ΣV + β3Σf + β4Σd + β5Σrd ------------------------- (5.9)

Σ(MRR·R) = β0ΣR + β1ΣR² + β2ΣRV + β3ΣRf + β4ΣRd + β5ΣR·rd ------------- (5.10)

Σ(MRR·V) = β0ΣV + β1ΣVR + β2ΣV² + β3ΣVf + β4ΣVd + β5ΣV·rd ------------- (5.11)

Σ(MRR·f) = β0Σf + β1ΣfR + β2ΣfV + β3Σf² + β4Σfd + β5Σf·rd ------------------ (5.12)

Σ(MRR·d) = β0Σd + β1ΣRd + β2ΣVd + β3Σfd + β4Σd² + β5Σd·rd --------------- (5.13)

Σ(MRR·rd) = β0Σrd + β1ΣR·rd + β2ΣV·rd + β3Σf·rd + β4Σd·rd + β5Σrd² ----- (5.14)

where n = number of trials and β0, β1, β2, β3, β4, β5 are the estimated
regression coefficients.

The coefficients can be estimated by solving the normal
equations. By substituting these coefficients in equation (5.8), we
get the first-order regression equation.

5.3. Development of RSM model for Surface Roughness and MRR

The data collected from the experiments were used to build
mathematical models using response surface methodology. The
response surface methodology is a collection of mathematical and
statistical techniques used for modeling, analysis and optimization
of processes in which the response of interest is influenced by nose
radius (mm), cutting speed (m/min), feed (mm/tooth), axial depth of
cut (mm) and radial depth of cut (mm); the objective is to develop
mathematical models for surface roughness and MRR.

Case 1: RSM model for surface roughness

The proposed first order response surface representing the

surface roughness can be expressed as a function of cutting parameters

such as nose radius (mm), cutting speed (m/min), feed (mm/tooth),

axial depth of cut (mm) and radial depth of cut (mm). The relationship

between the surface roughness and machining parameters has been

expressed as follows.

Ra = β0 + β1(R) + β2(V) + β3(f) + β4(d) + β5(rd) ---------------------------------- (5.15)

From the observed data for surface roughness, the estimated

regression coefficients for average surface roughness in un-coded

units have been determined using the least squares technique.

The multiple regression coefficient of the first-order model was
found to be 0.217. This shows that the first-order model can explain
the variation to the extent of 21.7%.

The response function has been determined in un-coded units as

Ra = 2.28840 + 0.509R - 0.01866V + 0.836f - 0.2744d - 0.089rd -------- (5.16)

The second order response surface representing the surface

roughness can be expressed as a function of cutting parameters such as

nose radius (mm), cutting speed (m/min), feed (mm/tooth), axial

depth of cut (mm) and radial depth of cut (mm). The relationship

between the surface roughness and machining parameters has been

expressed as follows

Ra = β0 + β1(R) + β2(V) + β3(f) + β4(d) + β5(rd) + β6R² + β7V² + β8f² + β9d² + β10rd²
+ β11RV + β12Rf + β13Rd + β14R·rd + β15Vf + β16Vd + β17V·rd + β18d·rd ------------ (5.17)

From the observed data for surface roughness, the estimated

regression coefficients for average surface roughness in un-coded

units have been determined using the least squares technique.

The multiple regression coefficient of the second-order model
was found to be 0.304. This shows that the second-order model can
explain the variation to the extent of 30.4%.



The response function has been determined in un-coded units as

Ra = -11.7967 + 0.509R + 0.262569V + 25.2817f + 0.831886d - 0.089rd
- 0.00165429V² - 81.4857f² - 0.553143d² ------------------ (5.18)
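The two fitted response functions can be evaluated directly from the coefficients in (5.16) and (5.18). Taking trial 1 of Table 4.5 (R = 0.8, V = 75, f = 0.1, d = 0.5, rd = 0.3), for which the measured Ra was 0.94 µm:

```python
def ra_first_order(R, V, f, d, rd):
    # First-order response function, equation (5.16).
    return 2.28840 + 0.509*R - 0.01866*V + 0.836*f - 0.2744*d - 0.089*rd

def ra_second_order(R, V, f, d, rd):
    # Second-order response function, equation (5.18).
    return (-11.7967 + 0.509*R + 0.262569*V + 25.2817*f + 0.831886*d
            - 0.089*rd - 0.00165429*V*V - 81.4857*f*f - 0.553143*d*d)

# Trial 1 of Table 4.5: R=0.8 mm, V=75 m/min, f=0.1 mm/tooth, d=0.5 mm, rd=0.3 mm.
print(round(ra_first_order(0.8, 75, 0.1, 0.5, 0.3), 3))
print(round(ra_second_order(0.8, 75, 0.1, 0.5, 0.3), 3))
```

For this trial the second-order model predicts about 0.96 µm against the measured 0.94 µm, while the first-order model predicts about 1.22 µm, consistent with its much lower multiple regression coefficient.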

Results of the ANOVA for the surface roughness response function
are presented in Table 5.1. This analysis is carried out at a level
of significance of 5%, i.e., at a level of confidence of 95%.

Table-5.1. Analysis of variance of Ra

Source DF Seq SS Adj SS Adj MS F P

Regression 8 2.68075 2.68075 0.335094 2.24 0.044

Linear 5 1.91085 1.91085 0.382170 2.55 0.042

Square 3 0.76990 0.76990 0.256635 1.71 0.179

Residual Error 41 6.13970 6.13970 0.149749

Total 49 8.82046

From the analysis of Table 5.1, it is apparent that the calculated F value is greater than the tabulated F value (F0.05,8,41 = 2.176). Hence the second-order response function developed is quite adequate.
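The adequacy check above can be reproduced numerically. A sketch using SciPy (assuming `scipy` is available) recovers the tabulated critical value F(0.05, 8, 41) and the comparison against the regression F ratio from the ANOVA:

```python
from scipy import stats

alpha = 0.05
f_calc = 2.24        # regression F ratio from the ANOVA table
dfn, dfd = 8, 41     # regression and residual degrees of freedom

f_table = stats.f.ppf(1 - alpha, dfn, dfd)   # critical value, about 2.18
p_value = stats.f.sf(f_calc, dfn, dfd)       # upper-tail probability of f_calc

model_adequate = f_calc > f_table            # True: adequate at the 95% level
```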

Case 2: RSM model for MRR

The proposed first order response surface representing the MRR

can be expressed as function of cutting parameters such as nose

radius (mm), cutting speed (m/min), feed (mm/tooth), axial depth of

cut (mm) and radial depth of cut (mm). The relationship between the

MRR and machining parameters has been expressed as follows.



MRR = β0 + β1(R) + β2(v) + β3(f) + β4(d) + β5(rd)-------------------------------(5.19)

From the observed data for MRR, the regression coefficients for average MRR in un-coded units have been estimated using the least-squares technique.

The multiple regression coefficient of the first-order model was found to be 0.93. This shows that the first-order model can explain 93.0% of the variation.

The response function has been determined in un-coded units as

MRR = -275.586+2.08R+0.514940v+784.560f+123.377d-233.079rd

-----(5.20)

The second order response surface representing the MRR can be

expressed as function of cutting parameters such as nose radius

(mm), cutting speed (m/min), feed (mm/tooth), axial depth of cut (mm)

and radial depth of cut (mm). The relationship between the MRR and

machining parameters has been expressed as follows

MRR = β0 + β1(R) + β2(v) + β3(f) + β4(d) + β5(rd) + β6R² + β7v² + β8f² + β9d² + β10rd² + β11Rv + β12Rf + β13Rd + β14R(rd) + β15vf + β16vd + β17v(rd) + β18d(rd)---(5.21)

From the observed data for MRR, the regression coefficients for average MRR in un-coded units have been estimated using the least-squares technique.



The multiple regression coefficient of the second-order model was found to be 0.998. This shows that the second-order model can explain 99.8% of the variation.

The response function has been determined in un-coded units as

MRR=1135.67-21.1811v-4915.39f+166.731d-226.271rd +0.09638v*v

+3000.75f*f+35.0434d*d-166.407rd*rd+47.5774v*f-2.51327v*d

+2.29446v*rd +1408.57f*rd+226.457d*rd------------------ (5.22)

Results of ANOVA for the response function MRR are presented

in the table 5.2. This analysis is carried out for a level of significance

of 5% i.e., for a level of confidence of 95%.

Table 5.2.: Analysis of variance of MRR

Source DF Seq SS Adj SS Adj MS F P

Regression 13 202467 202467 15574.4 1466.90 0.000

Linear 4 188596 167027 41756.7 3932.91 0.000

Square 4 1006 1914 478.6 45.08 0.000

Interaction 5 12865 12865 2573.0 242.34 0.000

Residual Error 36 382 382 10.6

Total 49 202850

From the analysis of Table 5.2, it is apparent that the calculated F value is greater than the tabulated F value (F0.05,13,36 = 2.009). Hence the second-order response function developed is quite adequate.

5.4. Development of ANN model for Surface Roughness and MRR

One of the key issues when designing a particular neural

network is to calculate proper weights for neuronal activities. These

are obtained from the training process applied to the given neural

network. To that end, a training sample is provided, i.e. a sample of

observations consisting of inputs and their respective outputs. The

observations are fed to the network. In the training process the

algorithm is used to calculate neuronal weights, so that the squared

error between the calculated outputs and observed outputs from the

training set is minimized.

5.4.1. Designing of the Neural Network Architecture

A generalized feed-forward network is used for developing the ANN model. These networks are a generalization of the MLP (multilayer perceptron) in which connections can jump over one or more layers. The network has five inputs (nose radius, cutting speed, feed, axial depth of cut and radial depth of cut) and two outputs (surface roughness and MRR). The size of the hidden layer is one of the most important considerations when solving actual problems with a multilayer feed-forward network. Two hidden layers were adopted for the present model. Attempts have been made to study the network performance with different numbers of hidden neurons. A number of networks were constructed, each trained separately, and the best network was selected based on the accuracy of the predictions in the testing phase. The general network is taken to be 5-n-n-2,



which implies 5 neurons in the input layer, n neurons in each of the two hidden layers and two neurons in the output layer. Using the neural network package NeuroSolutions 4.0, different network configurations with different numbers of hidden neurons were trained and their performance checked against the mean square error; the network with the lowest mean square error among the candidates was selected. In this study the 5-8-8-2 network, which had the minimum mean square error, was selected.

The optimal neural network architecture 5-8-8-2 used in this study was designed using NeuroSolutions 4.0 and is shown in Fig.5.1. The network consists of one input layer, two hidden layers and one output layer. The input layer has five neurons, each hidden layer has eight neurons and the output layer has two neurons. Since the prediction of surface finish and MRR in terms of nose radius, cutting speed, feed, axial depth of cut and radial depth of cut was the main interest in this research, the neurons in the input layer correspond to the nose radius, cutting speed, feed, axial depth of cut and radial depth of cut, and the output layer corresponds to surface finish and MRR. The input, hidden and output layers apply a sigmoidal activation function:

Outj = f(netj) = 1/(1 + e^(-netj))

[Diagram: input layer (nose radius, cutting speed, feed, axial depth of cut, radial depth of cut) → hidden layer-1 → hidden layer-2 → output layer (surface roughness, MRR)]

Fig.5.1. Structure of Feed-Forward Neural Network
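The forward pass of this 5-8-8-2 sigmoidal network can be sketched in NumPy. The weights below are random placeholders standing in for the values NeuroSolutions learns during training, and the min-max input scaling is an assumption for illustration:

```python
import numpy as np

def sigmoid(net):
    # Out_j = f(net_j) = 1 / (1 + e^(-net_j))
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(1)
# Random placeholder weights/biases for layers of size 5 -> 8 -> 8 -> 2
W1, b1 = rng.standard_normal((8, 5)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((8, 8)), rng.standard_normal(8)
W3, b3 = rng.standard_normal((2, 8)), rng.standard_normal(2)

# One input pattern: nose radius, cutting speed, feed, axial and radial depth of cut
x = np.array([1.0, 85.0, 0.15, 1.0, 0.5])
lo = np.array([0.8, 75.0, 0.1, 0.5, 0.3])    # assumed lower bounds of the ranges
hi = np.array([1.2, 95.0, 0.2, 1.5, 0.7])    # assumed upper bounds
xs = (x - lo) / (hi - lo)                    # scale inputs to [0, 1]

h1 = sigmoid(W1 @ xs + b1)   # hidden layer 1, 8 neurons
h2 = sigmoid(W2 @ h1 + b2)   # hidden layer 2, 8 neurons
out = sigmoid(W3 @ h2 + b3)  # output layer: (surface roughness, MRR), each in (0, 1)
```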

5.4.2. Generation of Train and Test Data

In creating the ANN models, a data set of 50 observations is utilized. The data set consists of 50 analysis results and corresponds to combinations of the five most important process parameters affecting the surface roughness and MRR. Fourteen further data sets, which were not used in developing the model, were used for testing it. The surface roughness and MRR values corresponding to the training and test data were found by analyses done with the NeuroSolutions 4.0 software. The results

predicted by artificial neural networks for train and test data are given

in Table 5.3 and Table 5.4 and percentage deviation is calculated and

shown in Fig.5.2, Fig.5.3, Fig.5.6 and Fig.5.7. respectively.



Table-5.3: Comparison of Experimental and ANN Output of Train

Data for surface roughness and MRR.

S.No  Ra (µm)  MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)   S.No  Ra (µm)  MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)
1. 0.94 23.39 0.9395 23.704 26 0.98 46 0.9791 45.993
2. 1.16 58.73 1.1608 59.075 27 1.2 97.4 1.1998 97.5
3. 1.12 116.6 1.12 116.47 28 1.68 69.06 1.6799 68.824
4. 0.86 197.8 0.8599 198.12 29 1.06 136.3 1.0607 135.77
5. 0.66 298.8 0.6617 297.31 30 0.52 230.8 0.5199 230.9
6. 0.82 63.13 0.8212 62.538 31 1.14 37.07 1.1401 37.07
7. 1.44 121.7 1.4397 121.9 32 2.48 83.54 2.4794 83.622
8. 0.7 206.6 0.7006 206.62 33 1.74 154.9 1.7393 155.46
9. 0.92 127.7 0.9203 127.87 34 1.48 253.4 1.4794 252.87
10. 1.28 66.4 1.2805 65.519 35 1.72 109.2 1.7191 109.3
11. 1.08 117.9 1.0804 117.86 36 0.52 88.5 0.5183 87.755
12. 1.3 81.54 1.2999 81.645 37 0.92 161.3 0.9205 161.08
13. 1.48 158.4 1.48 158.23 38 0.76 257.7 0.759 258.24
14. 1.44 76.1 1.4397 76.962 39 0.64 45.33 0.6387 46.357
15. 1.54 152.1 1.5406 151.65 40 0.96 105.3 0.9601 105.44
16. 0.56 94.13 0.5586 94.07 41 0.8 156.3 0.7999 156.51
17. 0.46 174 0.4619 174.51 42 0.5 103.8 0.5034 104.16
18. 0.42 81.3 0.4203 81.159 43 1.54 56.18 1.5408 55.708
19. 0.58 161.3 0.579 161.3 44 1.27 120.6 1.2709 120.81
20. 0.5 109.1 0.4979 108.87 45 1.32 214.1 1.3202 214.07
21. 0.46 173.4 0.4667 173.52 46 0.87 119.3 0.867 118.69
22. 1.1 81.97 1.0997 81.962 47 1.1 61.35 1.1009 62
23. 0.86 65.33 0.8602 65.287 48 0.78 128.6 0.7801 128.3
24. 0.48 137.6 0.4777 137.58 49 1.14 227.3 1.1395 227.27
25. 0.74 241.4 0.7394 241.55 50 0.87 143.7 0.8699 143.56


Fig.5.2. % Deviation of surface roughness for ANN vs. Experiment No.


Fig.5.3. % Deviation of MRR for ANN vs. Experiment No.

5.4.3. Network Training

For calculation of weight variables, often referred to as network

training, the weights are given quasi-random, intelligently chosen

initial values. They are then iteratively updated until convergence using the gradient descent method. The gradient descent method updates the weights so as to minimize the mean square error (MSE) between the network prediction and the training data set, as shown below:

Wij(new) = Wij(old) + ΔWij -------------- (5.23)

ΔWij = -η Σt=1..k (∂E/∂Wij) outj ------------ (5.24)
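A minimal NumPy sketch of this update rule for a single sigmoid neuron; the full network applies the same rule to every layer through backpropagation. The training data here are synthetic, chosen so the target is exactly realizable:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

rng = np.random.default_rng(2)
X = rng.uniform(size=(20, 3))                # 20 training patterns, 3 inputs
t = sigmoid(X @ np.array([1.5, -2.0, 0.5]))  # targets from a known weight vector

def mse(w):
    return np.mean((t - sigmoid(X @ w)) ** 2)

w = np.zeros(3)      # initial weights (zeros here; quasi-random in practice)
eta = 0.5            # learning rate (step size)
mse_start = mse(w)

for _ in range(2000):
    out = sigmoid(X @ w)
    # dE/dw for the MSE of a sigmoid unit: -2/N * sum (t - out)*out*(1 - out)*x
    grad = -2.0 * ((t - out) * out * (1 - out)) @ X / len(X)
    w = w - eta * grad   # W_new = W_old + dW, with dW = -eta * dE/dW

mse_end = mse(w)
```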

Where E is the MSE and outj is the jth neuron output; η is the learning rate [step size, momentum] parameter, controlling the stability and rate of convergence of the network. A learning rate of [step size 1.0, momentum 1.0] was selected, and training took place on an Intel(R) Pentium(R) D 3.4 GHz PC for 65,000 iterations. The MSE obtained after training the network for 65,000 epochs with multiple training runs (three times) was 2.41E-06. Fig.5.4 and Fig.5.5 depict the average MSE with standard deviation boundaries for the three runs and the convergence of MSE with epochs. The comparison between the ANN model output and the experimental output for the training data sets is presented in Table 5.3. In order to judge the ability and efficiency of the model to predict the surface roughness and MRR values, the percentage deviation (φ) and the average percentage deviation (φ̄) were used, defined as

φi = ((Experimental − Predicted) / Experimental) × 100% -------- (4.10)

where φi = percentage deviation of a single sample datum, and

φ̄ = (Σi=1..n φi) / n

where φ̄ = average percentage deviation of all sample data and n = size of the sample data.
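These two measures translate directly into code; the numbers below are hypothetical check values, not thesis results:

```python
import numpy as np

def pct_deviation(experimental, predicted):
    # phi_i = ((Experimental - Predicted) / Experimental) * 100%
    experimental = np.asarray(experimental, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return (experimental - predicted) / experimental * 100.0

exp_vals = [100.0, 200.0, 50.0]      # hypothetical experimental values
pred_vals = [90.0, 210.0, 50.0]      # hypothetical predicted values

phi = pct_deviation(exp_vals, pred_vals)   # signed deviations: [10., -5., 0.]
phi_bar = np.abs(phi).mean()               # average (absolute) percentage deviation
```

The deviation plots show both the signed and the absolute deviation; the average here is taken over absolute values, which is the usual convention.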

Table 5.4: Comparison of Experimental and ANN output of Test

Data for surface roughness and MRR.

S.No  Exp Ra (µm)  Exp MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)
1. 1.96 73 1.992 56.34
2. 1.96 73 1.992 56.34
3. 0.6 203.5 0.728 277.9
4. 0.78 200.5 0.789 272.7
5. 0.64 134 0.641 134.6
6. 1.02 154 1.075 148.7
7. 1.16 107 1.01 141.8
8. 1.56 92 1.512 92.05
9. 0.68 179.2 0.687 221.5
10. 1.24 92 1.321 122.3
11. 0.94 104 0.955 102.9
12. 1.18 118 1.182 136.2
13. 1.28 84 1.3 91.33
14. 0.62 197 0.64 212.7

Fig.5.4. Average MSE with standard deviation.

Fig.5.5. Training MSE with epoch No. boundaries for 3 runs

5.4.4. Neural Network Testing

The ANN-predicted results are in good agreement with the experimental results, so the trained network can proceed to the testing stage. The testing data sets, which were never used in the training process, are therefore applied. The results predicted by the network were compared with the experimental values, and the percentage deviation has been calculated and is shown in Fig.5.6 and Fig.5.7 respectively.

Fig.5.6. % Deviation of surface roughness for ANN vs. Experiment No.


Fig.5.7. % Deviation of MRR for ANN vs. Experiment No.



CHAPTER-6

RESULTS ANALYSIS AND DISCUSSION

6.1. Introduction

The effect of nose radius (mm), cutting speed (m/min), feed

(mm/tooth), axial depth of cut (mm) and radial depth of cut (mm) on

surface roughness and MRR were studied in this chapter. The

experiments were conducted using L50 orthogonal array in the design

of experiments (DOE). The surface roughness of all the components

was measured. Based on the output results, the results and

discussion are presented in this chapter.

6.2. Response Surface Analysis

Case1: RSM analysis for surface roughness

The response surface model presented in equation 5.18 is plotted in

Fig 6.1. to Fig. 6.8 as contours. These response contours can help in

the prediction of the surface roughness at any zone of the

experimental domain.

Fig.6.1. Cutting speed vs. Nose radius

From Fig.6.1 it is observed that as the nose radius increases the surface roughness increases, and as the cutting speed increases the surface roughness decreases. The surface roughness decreases with increasing cutting speed because, at higher cutting speeds, cutting forces and the tendency towards built-up edge formation weaken due to the increase in temperature and the consequent decrease of frictional stress at the rake face.

Fig.6.2. Feed vs. Nose radius



Fig.6.2 shows that at lower levels of nose radius and feed rate the surface roughness decreases. As the feed increases, the material removal rate increases. The increase of feed rate increases heat generation and hence tool wear, which results in higher surface roughness. The increase in feed rate also increases chatter and produces incomplete machining at a faster traverse, which leads to higher surface roughness.

Fig.6.3. Axial depth of cut vs. Nose radius

From Fig.6.3 it is observed that as the nose radius increases the surface roughness increases, while as the axial depth of cut increases the surface roughness decreases. The reason is that at a low axial depth of cut the material cannot be removed fully, which leads to high surface roughness, whereas at a high depth of cut the material is cleared from the surface, producing a good surface finish.



Fig.6.4. Radial depth of cut vs. Nose radius

Fig.6.4 shows that as the nose radius and radial depth of cut increase, the surface roughness decreases. When the nose radius is high and the radial depth of cut increases, the MRR increases, and due to this the surface roughness decreases.

Fig.6.5. Feed vs. Cutting speed



From Fig.6.5 it is observed that as the cutting speed increases the surface roughness decreases, and as the feed increases the surface roughness increases.

Fig.6.6. Axial depth of cut vs. Cutting speed

Fig.6.6 shows that as the cutting speed and axial depth of cut increase, the surface roughness decreases.

Fig.6.7. Radial depth of cut vs. Cutting speed



From Fig.6.7 it is observed that as the cutting speed and radial depth of cut increase, the surface roughness decreases.

Fig.6.8. Axial depth of cut vs. Feed

Fig.6.8 shows that when the feed is constant and the axial depth of cut increases, the surface roughness decreases. When the feed increases with the axial depth of cut constant, the surface roughness increases up to 0.16 mm/tooth, after which it decreases.

Analyzing the residual plots for surface roughness prediction

model:

The regression model is used for determining the residuals of

each individual experimental run. The difference between the

measured values and predicted values are called residuals. The

residuals are calculated and ranked in ascending order. The normal

probabilities of residuals are shown in Fig.6.9. The normal probability



plot is used to verify the normality assumption. As shown in Fig.6.9, the data are spread roughly along a straight line; hence the data are considered normally distributed. Fig.6.10 shows the residuals against the observation order and is used to reveal correlation between the residuals: a tendency to have runs of positive and negative residuals would indicate the existence of such a correlation. The plots show that the residuals are distributed evenly between positive and negative along the run; hence the data can be said to be independent.
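These two visual checks can be backed by quick numerical indicators; the residuals below are synthetic stand-ins for the fifty experimental residuals:

```python
import numpy as np

rng = np.random.default_rng(3)
measured = rng.uniform(0.4, 2.5, size=50)             # hypothetical Ra values
predicted = measured + rng.normal(0.0, 0.1, size=50)  # hypothetical model fit
residuals = measured - predicted

mean_resid = residuals.mean()   # should sit near zero for an unbiased model
signs = np.sign(residuals)
# Count sign runs along the observation order: many short runs suggest
# no serial correlation between residuals (independence)
runs = 1 + int(np.count_nonzero(signs[1:] != signs[:-1]))
```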


Fig.6.9. Normal plot of Residuals.




Fig.6.10 Residuals vs. Observation order for Ra.


Fig.6.11 Residuals vs. Fits for Ra.



Table 6.1: Comparison of experimental and RSM output of train


data for Ra.

Order of data  Experimental Ra (µm)  RSM Ra (µm)   Order of data  Experimental Ra (µm)  RSM Ra (µm)
1 0.94 0.962 26 0.98 1.139
2 1.16 1.162 27 1.2 1.339
3 1.12 1.191 28 1.68 1.412
4 0.86 1.049 29 1.06 1.27
5 0.66 0.736 30 0.52 0.957
6 0.82 1.01 31 1.14 1.232
7 1.44 1.141 32 2.48 1.362
8 0.7 1.101 33 1.74 1.322
9 0.92 0.934 34 1.48 1.111
10 1.28 1.068 35 1.72 1.244
11 1.08 0.906 36 0.52 1.128
12 1.3 1.013 37 0.92 1.189
13 1.48 0.903 38 0.76 1.08
14 1.44 1.139 39 0.64 1.36
15 1.54 1.033 40 0.96 1.254
16 0.56 0.695 41 0.8 0.872
17 0.46 0.688 42 0.5 0.909
18 0.42 1.025 43 1.54 1.246
19 0.58 1.021 44 1.27 1.243
20 0.5 0.891 45 1.32 1.068
21 0.46 0.288 46 0.87 0.509
22 1.1 0.727 47 1.1 0.948
23 0.86 0.87 48 0.78 1.046
24 0.48 0.797 49 1.14 0.973
25 0.74 0.553 50 0.87 0.774

The results predicted by the RSM model were compared with the experimental values, and the percentage deviation and average percentage deviation have been calculated for surface roughness and are shown in Fig.6.12 and Fig.6.13.




Fig.6.12. % Deviation of surface roughness for train data vs. Experiment No.

Table 6.2: Comparison of experimental and RSM output of test data


for Ra.

S.No  R  V  f  d  rd  Exp Ra (µm)  RSM Ra (µm)
1 0.8 78 0.14 0.8 0.4 1.96 1.24442
2 0.8 78 0.14 0.8 0.4 1.96 1.24442
3 0.8 78 0.16 1.35 0.6 0.6 1.04679
4 0.8 87 0.14 1.35 0.6 0.78 0.93657
5 0.8 87 0.16 0.8 0.6 0.64 1.14984
6 0.8 87 0.16 1.35 0.4 1.02 0.97109
7 1.2 78 0.16 1.35 0.4 1.16 1.26819
8 1.2 78 0.16 0.8 0.6 1.56 1.44694
9 1.2 78 0.14 1.35 0.6 0.68 1.23367
10 1.2 87 0.16 0.8 0.4 1.24 1.37124
11 1.2 87 0.14 1.35 0.4 0.94 1.15797
12 1.2 87 0.14 0.8 0.6 1.18 1.33672
13 0.8 90 0.1 1.5 0.3 1.28 0.53186
14 0.8 95 0.1 1.5 0.7 0.62 0.27889

Fig.6.13. % Deviation of surface roughness for test data vs. Experiment No.

Case 2: RSM analysis for MRR

The response surface model presented in equation 5.22 is plotted in

Fig 6.14. to Fig. 6.19 as contours. These response contours can help

in the prediction of the MRR at any zone of the experimental domain.

Fig.6.14. Radial depth of cut Vs. Axial depth of cut.



Fig.6.14 shows that as the axial depth of cut and radial depth of cut increase, the MRR also increases. Generally, as the depth of cut increases the material removal rate increases.

Fig.6.15. Radial depth of cut vs. Cutting feed

From Fig.6.15 it is observed that as the cutting feed and radial depth of cut increase, the material removal rate increases.



Fig.6.16. Axial depth of cut vs. Cutting feed

Fig.6.16 shows that as the cutting feed and axial depth of cut increase, the material removal rate increases.

Fig.6.17. Radial depth of cut vs. Cutting speed



Fig.6.18. Axial depth of cut vs. Cutting speed

Fig.6.19. Cutting feed vs. Cutting speed.

From Fig.6.17 - 6.19 it is observed that as the cutting speed, axial depth of cut, radial depth of cut and cutting feed increase, the material removal rate increases. The reason is that at a high depth of cut the material is cleared from the surface, hence the MRR increases.

Analyzing the residual plots for MRR prediction model:

The regression model is used for determining the residuals of

each individual experimental run. The difference between the

measured values and predicted values are called residuals. The

residuals are calculated and ranked in ascending order. The normal

probabilities of residuals are shown in Fig.6.20. The normal probability plot is used to verify the normality assumption. As shown in Fig.6.20, the data are spread roughly along a straight line; hence the data are considered normally distributed. Fig.6.21 shows the residuals against the observation order and is used to reveal correlation between the residuals: a tendency to have runs of positive and negative residuals would indicate the existence of such a correlation. The plots show that the residuals are distributed evenly between positive and negative along the run; hence the data can be said to be independent.


Fig.6.20. Normal plot of Residuals for MRR.




Fig.6.21. Residuals vs. Observation order for MRR.

Fig.6.22. Residuals vs. Fits for MRR.



Table 6.3: Comparison of experimental and RSM output of train

data for MRR.

Order of data  Experimental MRR (cm³/min)  Predicted MRR (cm³/min)   Order of data  Experimental MRR (cm³/min)  Predicted MRR (cm³/min)
1 23.39 27.4 26 46 42.44
2 58.73 61.185 27 97.4 93.789
3 116.55 118.139 28 69.06 68.053
4 197.78 198.262 29 136.31 136.466
5 298.8 301.553 30 230.77 228.049
6 63.13 63.461 31 37.07 36.489
7 121.65 122.95 32 83.54 84.269
8 206.61 205.608 33 154.89 155.217
9 127.66 131.223 34 253.38 249.335
10 66.4 62.071 35 109.17 112.825
11 117.92 116.36 36 88.5 89.084
12 81.54 79.127 37 161.29 162.567
13 158.39 158.216 38 257.73 259.22
14 76.1 78.999 39 45.33 39.927
15 152.13 154.216 40 105.34 103.435
16 94.13 92.183 41 156.25 158.516
17 174.01 173.807 42 103.81 101.235
18 81.3 84.526 43 56.18 56.859
19 161.29 162.278 44 120.58 122.902
20 109.05 110.523 45 214.13 212.113
21 173.41 177.997 46 119.33 116.83
22 81.97 78.652 47 61.35 62.39
23 65.33 64.048 48 128.64 130.968
24 137.55 138.231 49 227.27 222.714
25 241.38 235.583 50 143.68 150.911
The results predicted by the RSM model were compared with the experimental values. The percentage deviation and average percentage deviation have been calculated for MRR and are shown in Fig.6.23 and Fig.6.24 for the train and test data respectively.




Fig.6.23. % Deviation MRR for train data vs. Experiment no.

Table 6.4: Comparison of experimental and RSM output of test data

for MRR.

S.No  R  V  f  d  rd  Exp MRR (cm³/min)  RSM MRR (cm³/min)
1 0.8 78 0.14 0.8 0.4 73 64.914
2 0.8 78 0.14 0.8 0.4 73 64.914
3 0.8 78 0.16 1.35 0.6 203.5 208.718
4 0.8 87 0.14 1.35 0.6 200.5 192.193
5 0.8 87 0.16 0.8 0.6 134 123.966
6 0.8 87 0.16 1.35 0.4 154 143.972

7 1.2 78 0.16 1.35 0.4 107 145.242


8 1.2 78 0.16 0.8 0.6 92 108.665
10 1.2 87 0.16 0.8 0.4 92 81.271
11 1.2 87 0.14 1.35 0.4 104 130.222
12 1.2 87 0.14 0.8 0.6 118 104.582
13 0.8 90 0.1 1.5 0.3 84 89.627
14 0.8 95 0.1 1.5 0.7 197 203.589

Fig.6.24. % Deviation MRR for test data vs. Experiment no.

6.3. Artificial Neural Network

ANNs are non-linear mapping systems and hence can be used to develop prediction models. NeuroSolutions v4 software has been used in the present study. The network selected is a multilayer perceptron (MLP), which consists of at least three layers. The activation function used is the sigmoidal function, which is a nonlinear function.

The best network structure is the one, which yields better

prediction results. The various possible network structures are to be

trained by using the best combination learning parameters. In the

present problem the number of neurons in the input layer is 5 i.e.,

nose radius, cutting speed, feed, axial depth of cut, radial depth of cut

and the number of neurons in the output layer is 2 i.e., surface

roughness and MRR. In the different network structures considered, the number of neurons in each hidden layer is varied from 1 to 8. All the possible combinations of different structures are trained for 65,000 epochs with multiple training (3 times). It is observed that the 5-8-8-2 network structure has the lowest MSE, and it is considered for further training.

Table.6.5. Performance of the network Ra and MRR.

Performance Ra MRR
MSE 2.00597E-06 0.189657695
NMSE 1.13711E-05 4.67483E-05
MAE 0.000871394 0.308762985
Min Abs Error 8.69751E-06 7.98035E-05
Max Abs Error 0.00669662 1.491375732
R 0.999994394 0.999977327

From the Table 6.5, the MSE value for surface roughness is

2.00597E-06 and for MRR is 0.189657695. The regression value for

surface roughness is 0.999994394 and MRR is 0.999977327. The

comparison between the ANN model output and the experimental output for the training and testing data sets is presented in Table 6.6 and Table 6.7.

Table-6.6. Comparison of experimental and ANN output of train data

for surface roughness and MRR.

S.No  R  V  f  d  rd  Ra (µm)  MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)
1 0.8 75 0.1 0.5 0.3 0.94 23.39 0.939 23.7
2 0.8 75 0.125 0.75 0.4 1.16 58.73 1.161 59.07
3 0.8 75 0.15 1 0.5 1.12 116.6 1.12 116.5
4 0.8 75 0.175 1.25 0.6 0.86 197.8 0.86 198.1
5 0.8 75 0.2 1.5 0.7 0.66 298.8 0.662 297.3
6 0.8 80 0.1 0.75 0.5 0.82 63.13 0.821 62.54
7 0.8 80 0.125 1 0.6 1.44 121.7 1.44 121.9
8 0.8 80 0.15 1.25 0.7 0.7 206.6 0.701 206.6
9 0.8 80 0.175 1.5 0.3 0.92 127.7 0.92 127.9
10 0.8 80 0.2 0.5 0.4 1.28 66.4 1.281 65.52

11 0.8 85 0.1 1 0.7 1.08 117.9 1.08 117.9


12 0.8 85 0.125 1.25 0.3 1.3 81.54 1.3 81.65
13 0.8 85 0.15 1.5 0.4 1.48 158.4 1.48 158.2
14 0.8 85 0.175 0.5 0.5 1.44 76.1 1.44 76.96
15 0.8 85 0.2 0.75 0.6 1.54 152.1 1.541 151.6
16 0.8 90 0.1 1.25 0.4 0.56 94.13 0.559 94.07
17 0.8 90 0.125 1.5 0.5 0.46 174 0.462 174.5
18 0.8 90 0.15 0.5 0.6 0.42 81.3 0.42 81.16
19 0.8 90 0.175 0.75 0.7 0.58 161.3 0.579 161.3
20 0.8 90 0.2 1 0.3 0.5 109.1 0.498 108.9
21 0.8 95 0.1 1.5 0.6 0.46 173.4 0.467 173.5
22 0.8 95 0.125 0.5 0.7 1.1 81.97 1.1 81.96
23 0.8 95 0.15 0.75 0.3 0.86 65.33 0.86 65.29
24 0.8 95 0.175 1 0.4 0.48 137.6 0.478 137.6
25 0.8 95 0.2 1.25 0.5 0.74 241.4 0.739 241.6
26 1.2 75 0.1 0.5 0.6 0.98 46 0.979 45.99
27 1.2 75 0.125 0.75 0.7 1.2 97.4 1.2 97.5
29 1.2 75 0.175 1.25 0.4 1.06 136.3 1.061 135.8
30 1.2 75 0.2 1.5 0.5 0.52 230.8 0.52 230.9
31 1.2 80 0.1 0.75 0.3 1.14 37.07 1.14 37.07
32 1.2 80 0.125 1 0.4 2.48 83.54 2.479 83.62
33 1.2 80 0.15 1.25 0.5 1.74 154.9 1.739 155.5
34 1.2 80 0.175 1.5 0.6 1.48 253.4 1.479 252.9
35 1.2 80 0.2 0.5 0.7 1.72 109.2 1.719 109.3
36 1.2 85 0.1 1 0.5 0.52 88.5 0.518 87.76
37 1.2 85 0.125 1.25 0.6 0.92 161.3 0.92 161.1
38 1.2 85 0.15 1.5 0.7 0.76 257.7 0.759 258.2
39 1.2 85 0.175 0.5 0.3 0.64 45.33 0.639 46.36
40 1.2 85 0.2 0.75 0.4 0.96 105.3 0.96 105.4
41 1.2 90 0.1 1.25 0.7 0.8 156.3 0.8 156.5
42 1.2 90 0.125 1.5 0.3 0.5 103.8 0.503 104.2
43 1.2 90 0.15 0.5 0.4 1.54 56.18 1.541 55.71
44 1.2 90 0.175 0.75 0.5 1.27 120.6 1.271 120.8
45 1.2 90 0.2 1 0.6 1.32 214.1 1.32 214.1
46 1.2 95 0.1 1.5 0.4 0.87 119.3 0.867 118.7
47 1.2 95 0.125 0.5 0.5 1.1 61.35 1.101 62
48 1.2 95 0.15 0.75 0.6 0.78 128.6 0.78 128.3
49 1.2 95 0.175 1 0.7 1.14 227.3 1.14 227.3
50 1.2 95 0.2 1.25 0.3 0.87 143.7 0.87 143.6

Table 6.7: Comparison of experimental and ANN output of test

data for surface roughness and MRR.

S.No  R  V  f  d  rd  Exp Ra (µm)  Exp MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)
1 0.8 78 0.14 0.8 0.4 1.96 73 1.992 56.34
2 0.8 78 0.14 0.8 0.4 1.96 73 1.992 56.34
3 0.8 78 0.16 1.35 0.6 0.6 203.5 0.728 277.9
4 0.8 87 0.14 1.35 0.6 0.78 200.5 0.789 272.7
5 0.8 87 0.16 0.8 0.6 0.64 134 0.641 134.6
6 0.8 87 0.16 1.35 0.4 1.02 154 1.075 148.7
7 1.2 78 0.16 1.35 0.4 1.16 107 1.01 141.8
8 1.2 78 0.16 0.8 0.6 1.56 92 1.512 92.05
9 1.2 78 0.14 1.35 0.6 0.68 179.2 0.687 221.5
10 1.2 87 0.16 0.8 0.4 1.24 92 1.321 122.3
11 1.2 87 0.14 1.35 0.4 0.94 104 0.955 102.9
12 1.2 87 0.14 0.8 0.6 1.18 118 1.182 136.2
13 0.8 90 0.1 1.5 0.3 1.28 84 1.3 91.33
14 0.8 95 0.1 1.5 0.7 0.62 197 0.64 212.7

6.3.1. Parametric analysis

Sensitivity tests were performed to study the various network parameters and to determine which variables affect the surface roughness and MRR.



Fig.6.25. Mean square error vs. Number of neurons.

From Fig.6.25, it is observed that as the number of neurons increases the mean square error remains nearly constant. The mean square error at 5-8-8-2 is 2.13126E-05.

Fig.6.26. Surface roughness vs. Number of neurons.

Fig.6.27. MRR vs. Number of neurons.

The variation of surface roughness and MRR against the

number of neurons is shown in Fig.6.26 and Fig.6.27.



Fig.6.28. Desired output and actual network output.

The comparison between the experimental outputs and the ANN outputs is shown in Fig.6.28. There is high correlation between the values predicted by the ANN model and the measured values from the experimental tests. The correlation coefficients for Ra and MRR were 0.999994394 and 0.999977327 respectively, which shows there is a strong correlation in modeling Ra and MRR.
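The performance measures of this kind can be recomputed from any desired/output pair. The sketch below uses the first six Ra training rows of Table 5.3 and assumes NeuroSolutions' NMSE is the MSE normalised by the variance of the desired signal:

```python
import numpy as np

# First six Ra training rows from Table 5.3: experimental vs. ANN output
desired = np.array([0.94, 1.16, 1.12, 0.86, 0.66, 0.82])
output  = np.array([0.9395, 1.1608, 1.12, 0.8599, 0.6617, 0.8212])

err = desired - output
mse = np.mean(err ** 2)                 # mean square error
nmse = mse / np.var(desired)            # assumed normalisation by variance
mae = np.mean(np.abs(err))              # mean absolute error
max_abs_err = np.max(np.abs(err))       # worst-case absolute error
r = np.corrcoef(desired, output)[0, 1]  # linear correlation coefficient R
```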

Table.6.8. Sensitivity analysis for Ra and MRR.

Sensitivity Ra MRR
R 1.611631274 51.30732727
V 0.023935925 3.643531799
f 10.01920319 1249.439209
d 0.3577438 142.9523315
rd 0.633553028 439.4009094
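Sensitivity about the mean, as reported in Table 6.8, is obtained by sweeping one input across its range while holding the others at their mean values and measuring the spread of the model output. A sketch with a hypothetical response function standing in for the trained network (the function and the ranges are illustrative assumptions):

```python
import numpy as np

lo = np.array([0.8, 75.0, 0.1, 0.5, 0.3])   # assumed lower bounds: R, v, f, d, rd
hi = np.array([1.2, 95.0, 0.2, 1.5, 0.7])   # assumed upper bounds
mean = (lo + hi) / 2.0

def model(x):
    # Hypothetical stand-in for the trained network's MRR output
    R, v, f, d, rd = x
    return 1000.0 * f * d * rd + 0.5 * v + 2.0 * R

sensitivity = []
for i in range(5):                     # sweep each input in turn
    outs = []
    for val in np.linspace(lo[i], hi[i], 25):
        x = mean.copy()
        x[i] = val                     # vary input i about the mean of the others
        outs.append(model(x))
    sensitivity.append(float(np.std(outs)))   # output spread due to input i
```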

Fig.6.29. Sensitivity about the mean.

From Table 6.8 and Fig.6.29, it is observed that the feed rate is the most significant parameter for surface roughness and MRR. The cutting speed is the least influencing factor on the surface roughness and MRR. The variation of surface roughness and MRR against the input parameters is shown in Fig.6.30 - 6.39.

Fig.6.30. Surface roughness vs. Nose radius.



Fig.6.31. MRR vs. Nose radius.

Fig.6.32. Surface roughness vs. Cutting speed.

Fig.6.33. MRR vs. cutting speed.




Fig.6.34. Surface roughness vs. Cutting Feed.

Fig.6.35. MRR vs. Cutting Feed.

Fig.6.36. Surface roughness vs. Axial depth of cut.



Fig.6.37. MRR vs. Axial depth of cut.

Fig 6.38. Surface roughness vs. Radial depth of cut.

Fig 6.39. MRR vs. Radial depth of cut.



6.4. Comparison of RSM and ANN with Experimental Values

Table-6.9: Comparison of RSM and ANN with experimental values for train data.

Exp.No  Ra (µm)  MRR (cm³/min)  ANN Ra (µm)  ANN MRR (cm³/min)  RSM MRR (cm³/min)  RSM Ra (µm)
1 0.94 23.39 0.9395 23.704 27.4 0.962
2 1.16 58.73 1.1608 59.075 61.185 1.162
3 1.12 116.55 1.12 116.47 118.139 1.191
4 0.86 197.78 0.8599 198.12 198.262 1.049
5 0.66 298.8 0.6617 297.31 301.553 0.736
6 0.82 63.13 0.8212 62.538 63.461 1.01
7 1.44 121.65 1.4397 121.9 122.95 1.141
8 0.7 206.61 0.7006 206.62 205.608 1.101
9 0.92 127.66 0.9203 127.87 131.223 0.934
10 1.28 66.4 1.2805 65.519 62.071 1.068
11 1.08 117.92 1.0804 117.86 116.36 0.906
12 1.3 81.54 1.2999 81.645 79.127 1.013
13 1.48 158.39 1.48 158.23 158.216 0.903
14 1.44 76.1 1.4397 76.962 78.999 1.139
15 1.54 152.13 1.5406 151.65 154.216 1.033
16 0.56 94.13 0.5586 94.07 92.183 0.695
17 0.46 174.01 0.4619 174.51 173.807 0.688
18 0.42 81.3 0.4203 81.159 84.526 1.025
19 0.58 161.29 0.579 161.3 162.278 1.021
20 0.5 109.05 0.4979 108.87 110.523 0.891
21 0.46 173.41 0.4667 173.52 177.997 0.288
22 1.1 81.97 1.0997 81.962 78.652 0.727
23 0.86 65.33 0.8602 65.287 64.048 0.87
24 0.48 137.55 0.4777 137.58 138.231 0.797
25 0.74 241.38 0.7394 241.55 235.583 0.553
26 0.98 46 0.9791 45.993 42.44 1.139
27 1.2 97.4 1.1998 97.5 93.789 1.339
28 1.68 69.06 1.6799 68.824 68.053 1.412
29 1.06 136.31 1.0607 135.77 136.466 1.27
30 0.52 230.77 0.5199 230.9 228.049 0.957
31 1.14 37.07 1.1401 37.07 36.489 1.232
32 2.48 83.54 2.4794 83.622 84.269 1.362
33 1.74 154.89 1.7393 155.46 155.217 1.322
34 1.48 253.38 1.4794 252.87 249.335 1.111
35 1.72 109.17 1.7191 109.3 112.825 1.244
36 0.52 88.5 0.5183 87.755 89.084 1.128
37 0.92 161.29 0.9205 161.08 162.567 1.189
38 0.76 257.73 0.759 258.24 259.22 1.08
39 0.64 45.33 0.6387 46.357 39.927 1.36
40 0.96 105.34 0.9601 105.44 103.435 1.254
41 0.8 156.25 0.7999 156.51 158.516 0.872
42 0.5 103.81 0.5034 104.16 101.235 0.909
43 1.54 56.18 1.5408 55.708 56.859 1.246
44 1.27 120.58 1.2709 120.81 122.902 1.243
45 1.32 214.13 1.3202 214.07 212.113 1.068
46 0.87 119.33 0.867 118.69 116.83 0.509
47 1.1 61.35 1.1009 62 62.39 0.948
48 0.78 128.64 0.7801 128.3 130.968 1.046

Fig.6.40. Surface roughness vs. Number of runs.

Fig.6.41. Material removal rate vs. Number of runs.

Fig.6.40 and Fig.6.41 compare the experimental surface roughness and MRR with the values predicted by the RSM and ANN models for the training data. From Fig.6.40, it is concluded that there is a high correlation between the values predicted by the ANN model and the values measured in the experimental tests. The fit is so close that the experimental and fitted curves are difficult to distinguish.
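The correlation described above can be checked numerically. The sketch below (an illustration only, not part of the original analysis) computes the Pearson correlation coefficient between the experimental and ANN-predicted Ra values for the first five training rows of Table-6.9; over all 48 rows the same computation gives the coefficient of about 0.999994 reported in the conclusions.

```python
import math

# First five training rows of Table-6.9: experimental vs. ANN-predicted Ra (um)
exp_ra = [0.94, 1.16, 1.12, 0.86, 0.66]
ann_ra = [0.9395, 1.1608, 1.1200, 0.8599, 0.6617]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(exp_ra, ann_ra)   # very close to 1 for these five rows
```

A coefficient this close to 1 is what makes the experimental and fitted curves in Fig.6.40 visually indistinguishable.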

Table-6.10: Comparison of RSM and ANN predictions with experimental values for test data.

Exp.No | Exp. Ra (μm) | Exp. MRR (cm³/min) | RSM Ra (μm) | RSM MRR (cm³/min) | ANN Ra (μm) | ANN MRR (cm³/min)
1 | 1.96 | 73 | 1.24442 | 64.914 | 1.99217 | 56.34389
2 | 1.96 | 73 | 1.24442 | 64.914 | 1.99217 | 56.34389
3 | 0.6 | 203.5 | 1.04679 | 208.718 | 0.7276 | 277.8615
4 | 0.78 | 200.5 | 0.93657 | 192.193 | 0.78921 | 272.6558
5 | 0.64 | 134 | 1.14984 | 123.966 | 0.64122 | 134.5998
6 | 1.02 | 154 | 0.97109 | 143.972 | 1.07519 | 148.6801
7 | 1.16 | 107 | 1.26819 | 145.242 | 1.01021 | 141.8212
8 | 1.56 | 92 | 1.44694 | 108.665 | 1.51246 | 92.04693
9 | 0.68 | 179.2 | 1.23367 | 197.898 | 0.68687 | 221.4779
10 | 1.24 | 92 | 1.37124 | 81.271 | 1.32132 | 122.3025
11 | 0.94 | 104 | 1.15797 | 130.222 | 0.95471 | 102.9099
12 | 1.18 | 118 | 1.33672 | 104.582 | 1.18208 | 136.2455
13 | 1.28 | 84 | 0.53186 | 89.627 | 1.29952 | 91.32626
14 | 0.62 | 197 | 0.27889 | 203.589 | 0.63967 | 212.7486

Fig.6.42. Surface roughness vs. Number of runs



Fig.6.43. Material removal rate vs. Number of runs.

Fig.6.42 and Fig.6.43 compare the experimental surface roughness and MRR with the values predicted by the RSM and ANN models for the test data. Again, there is a high correlation between the values predicted by the ANN model and the values measured in the experimental tests.
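The agreement on the test data can be quantified by the average absolute percentage deviation, the metric quoted in the conclusions of Chapter 7. A minimal sketch using three rows of Table-6.10 (the row selection is illustrative; run over all 14 test rows, the same computation underlies the test-set deviations quoted later):

```python
# Rows 1, 5 and 8 of Table-6.10: experimental vs. ANN-predicted Ra (um)
exp_ra = [1.96, 0.64, 1.56]
ann_ra = [1.99217, 0.64122, 1.51246]

# Absolute percentage deviation of each prediction from experiment
dev = [abs(e - p) / e * 100.0 for e, p in zip(exp_ra, ann_ra)]
avg_dev = sum(dev) / len(dev)   # a few percent for these rows
```

The same formula applied to the RSM column of Table-6.10 gives much larger deviations for Ra, consistent with the low regression value of the RSM roughness model.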

Chapter-7

CONCLUSIONS AND FUTURE SCOPE

Using Taguchi's orthogonal array in the design of experiments, the machining parameters influencing the surface roughness in the end milling of P20 mould steel have been modeled using RSM and artificial neural networks. Based on the experimental results and the values predicted by RSM and ANN, the following conclusions are drawn.



1. A relatively small number of experimental runs is possible using Taguchi's orthogonal array, which reduces the cost of experimentation.
2. Feed is the most dominant parameter for the surface roughness and MRR. Cutting speed shows the least effect on the surface roughness and MRR.


3. For achieving a good surface finish on P20 mould steel, low feed rates, higher cutting speeds and smaller depths of cut are preferred.


4. For achieving a higher MRR on P20 mould steel, higher feed rates, higher cutting speeds and larger depths of cut are preferred.


5. Using the design of experiments, a second-order model relating the machining parameters to the surface roughness and MRR in CNC end milling of P20 mould steel has been developed using response surface methodology. This technique is convenient for predicting the effect of different combinations of machining parameters.


6. The second-order response model can be used to predict the surface roughness of P20 mould steel at the 95% confidence level. The predicted and measured values agree closely, which indicates that the developed model can be effectively used to predict the surface roughness and MRR of P20 mould steel.


7. The verification test results revealed that the developed models for

surface roughness and MRR can be effectively used for predicting

the surface roughness and MRR in machining of P20 mould steel.


8. In response surface methodology, the regression values for surface roughness and MRR are 30.4% and 99.8% respectively, whereas in the artificial neural network they are 99.9994394% and 99.9977327%.



9. The ANN predicted values are fairly close to the experimental

values, which indicates that the developed model can be

effectively used to predict the surface roughness and MRR of P20

mould steel.

10. The ANN model predicts the surface roughness and MRR with average percentage deviations of 0.132642% and 0.333992% respectively on the training data set.

11. The ANN model predicts the surface roughness and MRR with average percentage deviations of 4.3785% and 17.45823% respectively on the test data set.

12. The correlation coefficient for surface roughness was

0.999994394, which shows there is a strong correlation in

modeling surface roughness.

13. The correlation coefficient for MRR was 0.999977327, which

shows there is a strong correlation in modeling MRR.

7.1. Future Scope


1. Further study could consider additional factors such as different work materials, tool inserts of different nose radii and shapes, cutting angles and lubricants, with more levels of the cutting conditions, and examine their effect on the surface roughness and MRR. In addition, artificial-intelligence techniques such as fuzzy logic, simulated annealing and genetic algorithms might be used to enhance the prediction system.


2. The accuracy of the developed model can be improved by considering more parameters and more levels.



3. Further study could also consider the effect of tool wear on the surface roughness and MRR.

Appendix-1
Table-1: Performance of the Artificial Neural Networks
(Neurons gives the hidden-layer topology; R is the regression coefficient.)

Neurons | MSE | R (Ra) | R (MRR) | Neurons | MSE | R (Ra) | R (MRR)
1 | 0.033038 | 0.233725 | 0.986103 | 1-7 | 0.022831 | 0.664336 | 0.952645
2 | 0.019866 | 0.6627707 | 0.952347 | 1-8 | 0.009858 | 0.8809176 | 0.975387
3 | 0.01403 | 0.7817769 | 0.989246 | 1-9 | 0.02265 | 0.6683379 | 0.952651
4 | 0.009995 | 0.8713732 | 0.9787 | 2-1 | 0.032414 | 0.3471266 | 0.970795
5 | 0.009916 | 0.8877117 | 0.971626 | 2-2 | 0.01988 | 0.6667433 | 0.987337
6 | 0.009837 | 0.8701608 | 0.985848 | 2-3 | 0.015407 | 0.7687341 | 0.984192
7 | 0.009846 | 0.8683374 | 0.985003 | 2-4 | 0.014221 | 0.7943555 | 0.979307
8 | 0.009883 | 0.8575771 | 0.990429 | 2-5 | 0.009683 | 0.884455 | 0.972598
9 | 0.009772 | 0.8839356 | 0.978565 | 2-6 | 0.00992 | 0.8759729 | 0.972643
10 | 0.00979 | 0.8713591 | 0.988519 | 2-7 | 0.013549 | 0.790255 | 0.989926
11 | 0.009746 | 0.8921185 | 0.973921 | 2-8 | 0.009977 | 0.8730167 | 0.977929
12 | 0.009673 | 0.8803883 | 0.985621 | 2-9 | 0.013543 | 0.7975788 | 0.985501
13 | 0.009664 | 0.8772668 | 0.984922 | 3-1 | 0.031202 | 0.3592534 | 0.978276
14 | 0.009733 | 0.8775836 | 0.985702 | 3-2 | 0.010352 | 0.8496116 | 0.98878
15 | 0.009711 | 0.8752388 | 0.986767 | 3-3 | 0.009999 | 0.884595 | 0.969011
16 | 0.009671 | 0.8681504 | 0.989501 | 3-4 | 0.099945 | 0.8675359 | 0.981467
1-1 | 0.033114 | 0.0235237 | 0.98506 | 3-5 | 0.010009 | 0.8998684 | 0.957631
1-2 | 0.032803 | 0.2436732 | 0.987 | 3-6 | 0.009999 | 0.8629162 | 0.983811
1-3 | 0.032142 | 0.291589 | 0.984515 | 3-7 | 0.009999 | 0.859532 | 0.986237
1-4 | 0.021731 | 0.6937948 | 0.949622 | 3-8 | 0.009955 | 0.8773816 | 0.976781
1-5 | 0.031342 | 0.3360228 | 0.983208 | 3-9 | 0.009989 | 0.8673539 | 0.984576
1-6 | 0.032094 | 0.287441 | 0.986069 | | | |
4-1 | 0.028771 | 0.5175 | 0.953083 | 6-5 | 0.009921 | 0.877203 | 0.97792
4-2 | 0.009955 | 0.893845 | 0.971502 | 6-6 | 0.098923 | 0.883022 | 0.975775
4-3 | 0.009994 | 0.866146 | 0.982887 | 6-7 | 0.009882 | 0.876454 | 0.981256
4-4 | 0.009974 | 0.879228 | 0.974061 | 6-8 | 0.00989 | 0.903568 | 0.959732
4-5 | 0.009929 | 0.886832 | 0.96957 | 6-9 | 0.009876 | 0.885504 | 0.973727
4-6 | 0.009964 | 0.883864 | 0.973458 | 7-1 | 0.024677 | 0.666398 | 0.930905
4-7 | 0.009916 | 0.887301 | 0.97173 | 7-2 | 0.009874 | 0.896285 | 0.975172
4-8 | 0.009962 | 0.894628 | 0.962418 | 7-3 | 0.009923 | 0.878274 | 0.981242
4-9 | 0.009948 | 0.883033 | 0.975373 | 7-4 | 0.009913 | 0.8841 | 0.975781
5-1 | 0.025377 | 0.621719 | 0.946309 | 7-5 | 0.009898 | 0.090038 | 0.967104
5-2 | 0.009932 | 0.892932 | 0.969951 | 7-6 | 0.009879 | 0.900312 | 0.963675
5-3 | 0.009971 | 0.884829 | 0.977772 | 7-7 | 0.009893 | 0.871855 | 0.98462
5-4 | 0.009907 | 0.870901 | 0.985271 | 7-8 | 0.0099 | 0.887766 | 0.974268
5-5 | 0.009949 | 0.880884 | 0.974109 | 7-9 | 0.00992 | 0.874579 | 0.981642
5-6 | 0.009899 | 0.882759 | 0.977748 | 8-1 | 0.024595 | 0.65584 | 0.937977
5-7 | 0.009938 | 0.884217 | 0.97334 | 8-2 | 0.009894 | 0.834454 | 0.982654
5-8 | 0.009934 | 0.868253 | 0.983971 | 8-3 | 0.009921 | 0.877825 | 0.982752
5-9 | 0.009894 | 0.883724 | 0.972599 | 8-4 | 0.009926 | 0.8707 | 0.984841
6-1 | 0.02478 | 0.665593 | 0.929908 | 8-5 | 0.009857 | 0.875505 | 0.987873
6-2 | 0.009962 | 0.874261 | 0.982734 | 8-6 | 0.009879 | 0.88107 | 0.980386
6-3 | 0.009941 | 0.884859 | 0.976411 | 8-7 | 0.009936 | 0.871812 | 0.98267
6-4 | 0.009783 | 0.902381 | 0.970306 | 8-8 | 2.41E-06 | 0.999994 | 0.999977
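Table-1 is used to select the network topology: the architecture with the smallest MSE wins. That selection can be sketched as below (the dictionary holds only a few entries transcribed from the table, as an illustration, not the full search):

```python
# A few (topology, MSE) entries transcribed from Table-1
mse = {
    "16":  0.009671,   # single hidden layer with 16 neurons
    "2-5": 0.009683,   # two hidden layers with 2 and 5 neurons
    "6-4": 0.009783,
    "8-5": 0.009857,
    "8-8": 2.41e-06,   # the architecture adopted in this study
}

best = min(mse, key=mse.get)   # topology with the lowest MSE: "8-8"
```

With the 8-8 topology the regression coefficients reach 0.999994 (Ra) and 0.999977 (MRR), which is why that network is the one used for the comparisons in Chapter 6.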

Appendix-2

Back Propagation Learning Algorithm



Step-1: Normalize the inputs and outputs with respect to their maximum values; neural networks work better when the inputs and outputs lie between 0 and 1. For each training pair, assume there are l inputs given by {I}_I and n outputs given by {T} in normalized form.

Step-2: Assume the number of neurons in the hidden layer, m, to lie between l < m < 2l.

Step-3: [V] represents the weights of the synapses connecting the input neurons to the hidden neurons, and [W] represents the weights of the synapses connecting the hidden neurons to the output neurons. Initialize the weights to small random values, usually from -1 to 1. For general problems, the sigmoid gain λ can be assumed as 1 and the threshold values can be taken as zero:

[V]^0 = [random weights]
[W]^0 = [random weights]
[ΔV]^0 = [ΔW]^0 = [0]

Step-4: From the training data, present one set of inputs and outputs. Present the pattern {I}_I as inputs to the input layer. Using a linear activation function, the output of the input layer is evaluated as {O}_I = {I}_I.

Step-5: Compute the inputs to the hidden layer by multiplying by the corresponding synaptic weights: {I}_H = [V]^T {O}_I.

Step-6: Let the hidden-layer units evaluate their outputs using the sigmoidal function: O_Hi = 1 / (1 + e^(-λ I_Hi)).

Step-7: Compute the inputs to the output layer by multiplying by the corresponding synaptic weights: {I}_O = [W]^T {O}_H.

Step-8: Let the output-layer units evaluate their outputs using the sigmoidal function: O_Oj = 1 / (1 + e^(-λ I_Oj)). This is the network output.

Step-9: Calculate the error, the difference between the network output and the desired output {T}, for the i-th training set as E_i = sqrt( Σ_j (T_j - O_Oj)² ) / n.

Step-10: Find {d} as d_k = (T_k - O_Ok) O_Ok (1 - O_Ok).

Step-11: Find the [Y] matrix as [Y] = {O}_H ⟨d⟩.

Step-12: Find [ΔW]^(t+1) = α[ΔW]^t + η[Y].

Step-13: Find {e} = [W]{d} and d*_i = e_i O_Hi (1 - O_Hi), then find the [X] matrix as [X] = {O}_I ⟨d*⟩.

Step-14: Find [ΔV]^(t+1) = α[ΔV]^t + η[X].

Step-15: Find [V]^(t+1) = [V]^t + [ΔV]^(t+1) and [W]^(t+1) = [W]^t + [ΔW]^(t+1).

Step-16: Find the error rate as error rate = Σ E^p / nset.

Step-17: Repeat Steps 4-16 until the error rate converges to less than the tolerance value.
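The steps above can be sketched as a small pure-Python training loop for the 5-input, 2-output network of this study. This is a minimal illustration, not the implementation actually used: the 8-neuron hidden layer, the learning rate and the sample pattern are assumed values, and the momentum term of Steps 12 and 14 is omitted for brevity.

```python
import math
import random

random.seed(1)
l, m, n = 5, 8, 2        # inputs, hidden neurons, outputs (hidden size assumed)
eta = 0.6                # learning rate (assumed value)

# Step-3: initialize weights to small random values in [-1, 1]
V = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(l)]
W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(m)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))   # lambda taken as 1 (Step-3)

def train_step(x, target):
    # Steps 4-8: forward pass (linear input layer, sigmoidal hidden/output layers)
    O_I = x
    O_H = [sigmoid(sum(O_I[i] * V[i][j] for i in range(l))) for j in range(m)]
    O_O = [sigmoid(sum(O_H[j] * W[j][k] for j in range(m))) for k in range(n)]
    # Step-10: output-layer delta
    d = [(target[k] - O_O[k]) * O_O[k] * (1 - O_O[k]) for k in range(n)]
    # Step-13: hidden-layer delta (uses the pre-update weights [W])
    d_star = [O_H[j] * (1 - O_H[j]) * sum(W[j][k] * d[k] for k in range(n))
              for j in range(m)]
    # Steps 11-15: weight updates (momentum term omitted for brevity)
    for j in range(m):
        for k in range(n):
            W[j][k] += eta * O_H[j] * d[k]
    for i in range(l):
        for j in range(m):
            V[i][j] += eta * O_I[i] * d_star[j]
    # Squared error for this pattern
    return 0.5 * sum((target[k] - O_O[k]) ** 2 for k in range(n))

x = [0.5, 0.4, 0.6, 0.3, 0.7]   # normalized cutting parameters (illustrative)
t = [0.46, 0.37]                # normalized Ra and MRR targets (illustrative)
errs = [train_step(x, t) for _ in range(200)]
# the pattern error shrinks as training proceeds
```

A full implementation would loop over all training pairs (Step-17) and track the aggregate error rate against the chosen tolerance.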



Network architecture: five inputs (nose radius, cutting speed, feed, axial depth of cut, radial depth of cut), one hidden layer, and two outputs (surface roughness, MRR).

Case-1: Surface roughness (worked numerical example following Steps 1-14)

Case-2: Material removal rate (worked numerical example following Steps 1-14)