
COMPUTER METHODS IN MATERIALS SCIENCE

Informatyka w Technologii Materiałów


Publishing House AKAPIT

Vol. 13, 2013, No. 2

Contents / Spis treści

Anthony R. Thornton, Thomas Weinhart, Vitaliy Ogarko, Stefan Luding
MULTI-SCALE METHODS FOR MULTI-COMPONENT GRANULAR MATERIALS .......... 197

Marta Serafin, Witold Cecot
NUMERICAL ASPECTS OF COMPUTATIONAL HOMOGENIZATION .......... 213

Liang Xia, Balaji Raghavan, Piotr Breitkopf, Weihong Zhang
A POD/PGD REDUCTION APPROACH FOR AN EFFICIENT PARAMETERIZATION OF DATA-DRIVEN MATERIAL MICROSTRUCTURE MODELS .......... 219

Marek Klimczak, Witold Cecot
LOCAL NUMERICAL HOMOGENIZATION IN MODELING OF HETEROGENEOUS VISCO-ELASTIC MATERIALS .......... 226

Balbina Wcisło, Jerzy Pamin
NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR LARGE STRAIN DAMAGE-PLASTICITY MODEL .......... 231

Andrzej Milenin, Piotr Kustra, Dorota J. Byrska-Wójcik
THE MULTI-SCALE NUMERICAL AND EXPERIMENTAL ANALYSIS OF COLD WIRE DRAWING FOR HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOY .......... 238

Piotr Gurgul, Marcin Sieniek, Maciej Paszyński, Łukasz Madej
THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING SPACE PROJECTION .......... 245

Wacław Kuś, Radosław Górski
PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT METHOD AND THE BIOINSPIRED ALGORITHM .......... 251

Krzysztof Muszka, Łukasz Madej
APPLICATION OF THE THREE DIMENSIONAL DIGITAL MATERIAL REPRESENTATION APPROACH TO MODEL MICROSTRUCTURE INHOMOGENEITY DURING PROCESSES INVOLVING STRAIN PATH CHANGES .......... 258

Ewa Majchrzak, Bohdan Mochnacki
IDENTIFICATION OF INTERFACE POSITION IN TWO-LAYERED DOMAIN USING GRADIENT METHOD COUPLED WITH THE BEM .......... 264


ISSN 1641-8581


Agnieszka Cebo-Rudnicka, Zbigniew Malinowski, Beata Hadała, Tadeusz Telejko
INFLUENCE OF THE SAMPLE GEOMETRY ON THE INVERSE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION ON THE AXIALLY SYMMETRICAL SAMPLE COOLED BY THE WATER SPRAY .......... 269

Michael Petrov, Pavel Petrov, Juergen Bast, Anatoly Sheypak
INVESTIGATION OF THE HEAT TRANSPORT DURING THE HOLLOW SPHERES PRODUCTION FROM THE TIN MELT .......... 276

Sławomir Świłło
AN EXPERIMENTAL STUDY OF MATERIAL FLOW AND SURFACE QUALITY USING IMAGE PROCESSING IN THE HYDRAULIC BULGE TEST .......... 283

Szymon Lechwar
SELECTION OF SIGNIFICANT VISUAL FEATURES FOR CLASSIFICATION OF SCALES USING BOOSTING TREES MODEL .......... 289

Stanisława Kluska-Nawarecka, Zenon Pirowski, Zora Jančíková, Milan Vrožina, Jiří David, Krzysztof Regulski, Dorota Wilk-Kołodziejczyk
A USER-INSPIRED KNOWLEDGE SYSTEM FOR THE NEEDS OF METAL PROCESSING INDUSTRY .......... 295

Stanisława Kluska-Nawarecka, Edward Nawarecki, Grzegorz Dobrowolski, Arkadiusz Haratym, Krzysztof Regulski
THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING TECHNOLOGICAL KNOWLEDGE ON METAL PROCESSING AND CASTING .......... 304

Jan Kusiak, Gabriel Rojek, Łukasz Sztangret, Piotr Jarosz
INDUSTRIAL PROCESS CONTROL WITH CASE-BASED REASONING APPROACH .......... 313

Krzysztof Regulski, Danuta Szeliga, Jacek Rońda, Andrzej Kuźniar, Rafał Puc
RULE-BASED SIMPLIFIED PROCEDURE FOR MODELING OF STRESS RELAXATION .......... 320

Sławomir Świłło
EXPERIMENTAL APPARATUS FOR SHEET METAL HEMMING ANALYSIS .......... 326

Sławomir Świłło, Piotr Czyżewski
AN EXPERIMENTAL AND NUMERICAL STUDY OF MATERIAL DEFORMATION OF A BLANKING PROCESS .......... 333

Piotr Lacki, Janina Adamus, Wojciech Więckowski, Julita Winowiecka
MODELLING OF STAMPING PROCESS OF TITANIUM TAILOR-WELDED BLANKS .......... 339


Andrzej Woźniakowski, Józef Deniszczyk, Omar Adjaoud, Benjamin P. Burton
FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE CdSe-CdS WURTZITE, ZINCBLENDE AND ROCK SALT STRUCTURES .......... 345

Andrzej Woźniakowski, Józef Deniszczyk
PHASE DIAGRAM CALCULATIONS FOR THE ZnSe-BeSe SYSTEM BY FIRST-PRINCIPLES BASED THERMODYNAMIC MONTE CARLO INTEGRATION .......... 351

Michal Gzyl, Andrzej Rosochowski, Andrzej Milenin, Lech Olejnik
MODELLING MICROSTRUCTURE EVOLUTION DURING EQUAL CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING CELLULAR AUTOMATA FINITE ELEMENT METHOD .......... 357

Bartek Wierzba
THE MIGRATION OF KIRKENDALL PLANE DURING DIFFUSION .......... 364

Onur Güvenç, Thomas Henke, Gottfried Laschet, Bernd Böttger, Markus Apel, Markus Bambach, Gerhard Hirt
MODELING OF STATIC RECRYSTALLIZATION KINETICS BY COUPLING CRYSTAL PLASTICITY FEM AND MULTIPHASE FIELD CALCULATIONS .......... 368


MULTI-SCALE METHODS FOR MULTI-COMPONENT GRANULAR MATERIALS


ANTHONY R. THORNTON*,1,2, THOMAS WEINHART1, VITALIY OGARKO1, STEFAN LUDING1

1 Multi-Scale Mechanics, Department of Mechanical Engineering, University of Twente, 7500 AE Enschede, The Netherlands
2 Mathematics of Computational Science, Department of Applied Mathematics, University of Twente, 7500 AE Enschede, The Netherlands

*Corresponding author: a.r.thornton@utwente.nl
Abstract

In this paper we review recent progress made in understanding granular chute flows using multi-scale modelling techniques. We introduce the discrete particle method (DPM) and explain how to construct continuum fields from discrete data in a way that is consistent with the macroscopic concepts of mass and momentum conservation. We present a novel advanced contact detection method that is capable of dealing with multiple distinct granular components with sizes ranging over orders of magnitude. We discuss how such advanced DPM simulations can be used to obtain closure relations for continuum frameworks (the mapping between the micro-scale and macro-scale variables and functions): the micro-macro transition. This enables the development of continuum models that contain information about the micro-structure of the granular material without the need for a priori assumptions. The micro-macro transition is illustrated with two granular chute/avalanche flow problems. The first is a shallow granular chute flow where the main unknown in the continuum models is the macro-friction coefficient at the base. We investigate how this depends on both the properties of the flow particles and the surface over which the flow is taking place. The second problem is that of gravity-driven segregation in poly-dispersed granular chute flows. In both problems we consider small steady-state periodic box DPM simulations to obtain the closure relations. Finally, we discuss the validity of such closure relations for complex dynamic problems that are a long way from the simple periodic box situation from which they were obtained. For simple situations the pre-computed closure relations hold; in more complicated situations new strategies are required, where macro-continuum and discrete micro-models are coupled with dynamic, two-way feedback between them.
Key words: coupled multiscale model, multi-component granular materials, Navier-Stokes equation, discrete particle simulations

1. INTRODUCTION

Granular materials are everywhere in nature and many industrial processes use materials in granular form, as they are easy to produce, process, transport and store. Many natural flows are comprised of granular materials, and common examples include rock slides that can contain many cubic kilometers of material. Granular materials are, after water, the second most widely manipulated substance on the

planet (de Gennes, 2008); however, the field is considerably behind the field of fluids, and currently no unified continuum description exists, i.e. there are no granular Navier-Stokes-style constitutive equations. However, simplified descriptions do exist for certain limiting scenarios: examples include rapid granular flows, where kinetic theory is valid (e.g., Jenkins & Savage, 1983; Lun et al., 1984), and shallow dense flows, where shallow-layer models are applicable

(e.g. Savage & Hutter, 1989; Gray, 2003; Bokhove & Thornton, 2012). For the case of quasi-static materials the situation is even more complicated and, here, more research on a continuum description is required. Flows in both nature and industry show highly complex behaviour, as they are influenced by many factors such as: poly-dispersity in size and density; variations in density; non-uniform shape; complex basal topography; surface contact properties; coexistence of static, steady and accelerating material; and flow obstacles and constrictions. Discrete particle methods (DPMs) are a very powerful computational tool that allows the simulation of individual particles with complex interactions, arbitrary shapes, in arbitrary geometries, by solving Newton's laws of motion for each particle (e.g. Weinhart et al., 2012). How to capture the elaborate interactions of sintering, complex (non-spherical) shape, breaking and cohesive particles in the contact model is an active area of research, and many steps forward have recently been made. DPM is a very powerful and flexible tool; however, it is computationally very expensive. With the recent increase in computational power it is now possible to simulate flows containing a few million particles; however, for 1 mm particles this would represent a flow of approximately 1 litre, which is many orders of magnitude smaller than the flows found in industrial and natural settings.
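The quoted one-litre figure is a simple geometric estimate; the packing fraction used below is an assumed typical value for a dense random packing, not a number taken from the paper:

```python
from math import pi

d = 1e-3            # particle diameter: 1 mm
n = 2e6             # "a few million" particles
phi = 0.64          # assumed packing fraction (dense random packing)

particle_volume = pi / 6 * d**3          # volume of one sphere
flow_volume = n * particle_volume / phi  # total volume occupied by the flow

print(f"{flow_volume * 1000:.2f} litres")  # prints "1.64 litres"
```

A cubic metre of the same material, by contrast, already requires over a billion particles, which is the scale gap the micro-macro transition is meant to bridge.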

Continuum methods are able to simulate the volume of real industrial or geophysical flows, but have to make averaging approximations, reducing the properties of a huge number of particles to a handful of averaged quantities. Once these averaged parameters have been tuned via experimental or historical data, these models can be surprisingly accurate; but a model tuned for one flow configuration often has no predictive power for another setup. Therefore, it is not possible in this fashion to create a unified model capable of describing a new scenario. DPM can be used to obtain the mapping between the microscopic and macroscopic parameters, allowing determination of the macroscopic data without the need for a priori knowledge. In simple situations, it is possible to pre-compute the relations between the particle and continuum descriptions (the micro-macro transition method); but in more complicated situations two-way coupled multi-scale modelling (CMSM) is required. For the micro-macro transition, discrete particle simulations are used to determine unknown functions or parameters in the continuum model as a function of microscopic particle parameters and other state variables; these mappings are referred to as closure relations (e.g. Thornton et al., 2012; 2013). For CMSM, continuum and micro-scale models are dynamically coupled with two-way feedback between the computational models. The coupling is done in selective regions in space and time, thus reducing computational expense and allowing simulation of complex granular flows. For problems


Fig. 1. Illustration of the modelling philosophy for the undertaken research. Solid lines indicate the main steps of the method and dashed lines the verification steps; (a) shows the idea for the micro-macro transition and (b) for two-way coupled multi-scale modelling (CMSM).


that contain only small complex regions, one can use a localised hybrid approach, where a particle method is applied in the complex region and is coupled through the boundary conditions to a continuum solver (Markesteijn, 2011). For large complex regions or globally complex problems, an iterative approach can be used, where a continuum solver is used everywhere, and small particle simulations are run each time the closure relations need to be evaluated, see e.g. (Weinan, 2007). The ultimate aim of this research would be to determine the unknowns (material/contact properties) in the contact law from a few standard experiments on individual particles. Our approach is illustrated in figure 1. The idea is to obtain the particle material properties from small (individual) particle experiments and use this information to determine the parameters of the contact model for DPM simulations. We then perform small-scale periodic box particle simulations and use this data to determine unknowns in the continuum models (i.e. to close the model). It is then expected that this closed continuum model is able to simulate the flow of the same particles in more complex and larger systems. The validity of this closed model will be investigated by comparing its results with both computationally expensive large-scale simulations and experiments. For situations where this one-way coupled micro-macro approach fails, we will use the computationally more expensive two-way coupled models to simulate the flow.

1.1. Outline

It is possible to apply CMSM or micro-macro methods to the completely general three-dimensional Cauchy mass and momentum equations and use the DPM to determine the unspecified constitutive relations for the stresses; however, we will not take this approach. We will focus on scenarios where simplifying approximations are made, which lead to continuum models (still containing undetermined quantities) that are valid only in certain limits.
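The iterative approach can be caricatured as a closure look-up that launches a small particle simulation only for states not seen before; the closure function, cache resolution and response surface below are all invented for illustration:

```python
# Sketch of the iterative micro-macro approach: a continuum solver runs
# everywhere, and a small DPM simulation is launched only when the closure
# relation is needed at a state not evaluated before. All names are
# illustrative, not from any real solver.
closure_cache = {}

def run_small_dpm(state):
    # Stand-in for an expensive periodic-box DPM simulation returning the
    # closure value (e.g. a macroscopic friction coefficient).
    h, froude = state
    return 0.4 + 0.1 * froude / (1.0 + h)   # fabricated response surface

def closure(h, froude, resolution=0.1):
    # Round the state so that nearby continuum queries reuse one DPM result.
    key = (round(h / resolution), round(froude / resolution))
    if key not in closure_cache:
        closure_cache[key] = run_small_dpm((h, froude))
    return closure_cache[key]

mu1 = closure(0.52, 1.31)
mu2 = closure(0.54, 1.33)   # falls in the same cache cell: no new DPM run
print(len(closure_cache))   # prints 1
```

The design point is exactly the one made in the text: the particle model is only exercised where and when the continuum model lacks information, which keeps the cost of the coupled computation bounded.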
In this paper we discuss the approach we are taking, review the current steps we have made and discuss the future directions and open issues with this approach. Firstly, we will consider shallow granular flows (of major importance to many areas of geophysics) and secondly, gravity-driven segregation of poly-dispersed granular material. For the second problem, the efficiency of DPM becomes an issue and a new algorithm will have to be considered.
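For nearly monodispersed particles, contact detection is usually handled with a linked-cell grid, sketched below with invented positions; with size ratios spanning orders of magnitude a single cell size breaks down, which is what motivates the advanced contact detection method discussed later in the paper:

```python
from collections import defaultdict

# Baseline linked-cell contact detection for roughly equal-sized particles:
# each particle is binned into a cell of size one diameter, so contact
# candidates are confined to the neighbouring cells. Positions are made up.
d = 1.0
particles = [(0.2, 0.3), (0.9, 0.4), (5.0, 5.0), (0.5, 1.1)]

cells = defaultdict(list)
for idx, (x, y) in enumerate(particles):
    cells[(int(x // d), int(y // d))].append(idx)

def candidate_pairs():
    pairs = set()
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for i in members:
                    for j in cells.get((cx + dx, cy + dy), []):
                        if i < j:
                            pairs.add((i, j))
    return pairs

print(sorted(candidate_pairs()))   # prints [(0, 1), (0, 3), (1, 3)]
```

The far-away particle never enters a candidate pair, so the cost stays near-linear in the particle number; the difficulty with wide size distributions is that no single cell size is simultaneously efficient for the largest and the smallest species.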

The outline for the rest of the paper is: section 2, an introduction to DPM; section 3, how to construct continuum fields from discrete particle data (how to perform the micro-macro transition); section 4, the micro-macro transition for shallow granular flows; section 5, DPM simulations with wide particle size distributions and collision detection; section 6, the micro-macro transition for segregating flows; and section 7, future prospects and conclusions.

2. INTRODUCTION TO DPM

In the discrete particle method, often called the discrete element method, Newton's laws are solved for both the translational and the rotational degrees of freedom. The problem is fully determined by specifying the forces and torques between two interacting particles. Here, we will briefly review three commonly used contact laws for almost spherical granular particles: linear spring-dashpot, Hertzian spring and plastic models.

Each particle i has diameter d_i, mass m_i, position r_i, velocity v_i and angular velocity ω_i. Since we are assuming that particles are almost spherical and only slightly soft, the contacts can be treated as occurring at a single point. The relative distance between two particles i and j is r_ij = |r_i − r_j|, the branch vector (the vector from the centre of particle i to the contact point) is b_ij = ((d_i − δ_ij^n)/2) n̂_ij, where the unit normal is n̂_ij = (r_i − r_j)/r_ij, the relative velocity is v_ij = v_i − v_j, and the overlap is

  δ_ij^n = max( 0, (d_i + d_j)/2 − r_ij ).

Two particles are in contact if their overlap is positive. The normal and tangential relative velocities at the contact point are given by

  v_ij^n = (v_ij · n̂_ij) n̂_ij,
  v_ij^t = v_ij − v_ij^n + ω_i × b_ij − ω_j × b_ji.

The total force on particle i is a combination of the normal and tangential contact forces f_ij^n and f_ij^t from each particle j that is in contact with particle i, and external forces, which for this investigation will be limited to gravity, m_i g. Different contact models exist for the normal, f_ij^n, and tangential, f_ij^t, forces. For each contact model, when the tangential-to-normal force ratio becomes larger than the contact friction coefficient, μ^c, the tangential force yields and the particles slide; we then truncate the magnitude of the tangential force as necessary to satisfy |f_ij^t| ≤ μ^c |f_ij^n|. We integrate the resulting force and torque relations in time using Velocity-Verlet and forward Euler (Allen & Tildesley, 1989) with a time step Δt = t_c/50, where t_c is the collision time, see e.g. (Luding, 2008):

  t_c = π / √( k^n/m_ij − (γ^n/(2 m_ij))² ),   (1)

with the reduced mass m_ij = m_i m_j/(m_i + m_j). For the spring-dashpot case (Cundall & Strack, 1979; Luding, 2008; Weinhart et al., 2012a) the normal, f_ij^{n(sd)}, and tangential, f_ij^{t(sd)}, forces are modelled with linear elastic and linear dissipative contributions, hence

  f_ij^{n(sd)} = k^n δ_ij^n n̂_ij − γ^n v_ij^n,   f_ij^{t(sd)} = −k^t δ_ij^t − γ^t v_ij^t,   (2)

with spring constants k^n, k^t and damping coefficients γ^n, γ^t. The elastic tangential displacement, δ_ij^t, is defined to be zero at the initial time of contact, and its rate of change is given by

  (d/dt) δ_ij^t = v_ij^t − r_ij^{−1} (δ_ij^t · v_ij) n̂_ij.   (3)

In equation (3), the first term is the relative tangential velocity at the contact point, and the second term ensures that δ_ij^t remains normal to n̂_ij, see (Weinhart et al., 2012a) for details. This model is designed to describe particles that are elastic, but dissipative, with a clearly defined coefficient of restitution, ε.

For the Hertzian case we modify the interaction force as

  f_ij^{n/t(Hertz)} = √(δ_ij^n / d) f_ij^{n/t(sd)},   (4)

see e.g. (Silbert et al., 2001). This model follows from the theory of elasticity and takes account of the full non-linear elastic response.

Finally, for the plastic case (designed to capture small plastic deformations) we modify the normal force using the (hysteretic) elastic-plastic form of Walton and Braun, e.g. (Luding, 2008; Walton & Braun, 1986). In the normal direction a different spring constant is taken for loading and unloading/reloading of the contact and no dash-pot is used, i.e.

  f_ij^{n(p)} = k_1^n δ_ij^n n̂_ij,               if k_2^n (δ_ij^n − δ_ij^e) ≥ k_1^n δ_ij^n,
  f_ij^{n(p)} = k_2^n (δ_ij^n − δ_ij^e) n̂_ij,    if k_1^n δ_ij^n > k_2^n (δ_ij^n − δ_ij^e) > 0,   (5a)
  f_ij^{n(p)} = 0,                               if k_2^n (δ_ij^n − δ_ij^e) ≤ 0,

  f_ij^{t(p)} = f_ij^{t(sd)} = −k^t δ_ij^t − γ^t v_ij^t,   (5b)

with δ_ij^e = δ_ij^max (1 − k_1^n/k_2^n), where δ_ij^max is the maximum overlap during the contact. Unlike (Luding, 2008; Walton & Braun, 1986), we take k_2^n to be constant, so that the normal coefficient of restitution is given by ε^n = √(k_1^n/k_2^n). However, during contacts the dissipation is smaller than in the spring-dashpot case, since oscillations on the unloading/reloading (k_2^n) branch do not dissipate energy. For a more detailed review of contact laws, in general, we refer the reader to (Luding, 2008).

3. THE MICRO-MACRO TRANSITION

For all multi-scale methods, one of the biggest challenges is how to obtain continuum fields from large amounts of discrete particle data. Here, we give a short overview, then review in more detail the approach we prefer. There are many papers in the literature on how to go from the discrete to the continuum: binning micro-scale fields into small volumes (Irving & Kirkwood, 1950; Schofield & Henderson, 1982; Luding, 2004; Luding et al., 2001), averaging along planes (Todd et al., 1995), or coarse-graining spatially and temporally (Babic, 1997; Shen & Atluri, 2004; Goldhirsch, 2010). Here, we use the coarse-graining approach described by Weinhart et al. (2012b), as this is still valid within one coarse-graining width of the boundary. The coarse-graining method has the following advantages over other methods: (i) the fields produced automatically satisfy the equations of continuum mechanics, even near the flow base; (ii) it is neither assumed that the particles are rigid nor that they are spherical; and (iii) the results are even valid for single particles, as no averaging over groups of particles is required. The only assumptions are that each particle pair has a single point of contact (i.e., the particle shapes are convex), the contact area can be replaced by a contact point (i.e., the particles are not too soft), and that collisions are not instantaneous.
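As a concrete check on the contact model of section 2, the sketch below integrates a single head-on spring-dashpot collision and compares the measured rebound speed with the analytic restitution implied by the collision time (1); all parameter values are invented for illustration, and a plain symplectic-Euler integration is used instead of the Velocity-Verlet scheme of the paper:

```python
import math

# Head-on collision of two equal spheres under the linear spring-dashpot
# law (2); every parameter value here is illustrative, not from the paper.
m = 1.0                    # particle mass
kn, gn = 1.0e4, 5.0        # normal spring constant and damping coefficient
mij = m * m / (m + m)      # reduced mass, here m/2

# Collision time (1) and the analytic restitution for this contact model
omega = math.sqrt(kn / mij - (gn / (2 * mij)) ** 2)
tc = math.pi / omega
eps_analytic = math.exp(-gn / (2 * mij) * tc)

# Integrate the overlap: mij * delta'' = -kn*delta - gn*delta'
delta, ddelta = 0.0, 1.0   # overlap and its rate at first touch (unit speed)
dt = tc / 5000
while True:
    a = -(kn * delta + gn * ddelta) / mij
    ddelta += a * dt
    delta += ddelta * dt
    if delta <= 0 and ddelta < 0:   # the contact has ended
        break

eps_measured = -ddelta     # rebound speed / approach speed
print(eps_measured, eps_analytic)  # the two values agree closely
```

Note the time step dt = tc/5000 is far finer than the Δt = tc/50 quoted in the text; the coarser step is adequate for Velocity-Verlet in production runs, while the crude integrator here needs more resolution.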

3.1. Notation and basic ideas

Vectorial and tensorial components are denoted by Greek letters in order to distinguish them from the Latin particle indices i, j. Bold vector notation will be used when convenient. Assume a system given by N_f flowing particles and N_b fixed basal particles, with N = N_f + N_b. Since we are interested in the flow, we will calculate macroscopic fields pertaining to the flowing particles only. From statistical mechanics, the microscopic mass density of the flow, ρ^mic, at a point r at time t is defined by

  ρ^mic(r, t) = Σ_{i=1}^{N_f} m_i δ(r − r_i(t)),   (6)

where δ(r) is the Dirac delta function and m_i is the mass of particle i. The following definition of the macroscopic density of the flow is used:

  ρ(r, t) = Σ_{i=1}^{N_f} m_i W(r − r_i(t)),   (7)

thus replacing the Dirac delta function in (6) by an integrable coarse-graining function W whose integral over space is unity. We will take the coarse-graining function to be a Gaussian

  W(r − r_i(t)) = 1/(√(2π) w)³ exp( −|r − r_i(t)|² / (2w²) ),   (8)

with width or variance w. Other choices of the coarse-graining function are possible, but the Gaussian has the advantage that it produces smooth fields and the required integrals can be analysed exactly. According to Goldhirsch (2010), the coarse-grained fields depend only weakly on the choice of function, and the width w is the key parameter. It is clear that as w → 0 the macroscopic density defined in (7) reduces to the microscopic one in (6). The coarse-graining can also be seen as a convolution integral between the micro and macro definitions, i.e.,

  ρ(r, t) = ∫_{R³} W(r − r′) ρ^mic(r′, t) dr′.   (9)

3.2. Mass balance

Next, we will consider how to obtain the other fields of interest: the momentum density vector and the stress tensor. As before, all macroscopic variables will be defined in a way compatible with the continuum conservation laws. The coarse-grained momentum density vector p(r, t) is defined by

  p(r, t) = Σ_{i=1}^{N_f} m_i v_i(t) W(r − r_i(t)),   (10)

where the v_i's are the velocities of the particles. The macroscopic velocity field V(r, t) is then defined as the ratio of the momentum and density fields:

  V(r, t) = p(r, t) / ρ(r, t).   (11)

It is straightforward to confirm that equations (7) and (10) satisfy exactly the continuity equation

  ∂ρ/∂t + ∂p_α/∂r_α = 0,   (12)

with the Einstein summation convention for Greek letters.

3.3. Momentum balance

Finally, we will consider the momentum conservation equation with the aim of establishing the macroscopic stress field. In general, the desired momentum balance equations are written as

  ∂p_α/∂t + ∂(ρ V_α V_β)/∂r_β = −∂σ_αβ/∂r_β + t_α + ρ g_α,   (13)

where σ is the stress tensor and g is the gravitational acceleration vector. Since we want to describe boundary stresses as well as internal stresses, the boundary interaction force density, or surface traction density, t, has been included, as described in detail in (Weinhart et al., 2012b). Expressions (10) and (11) for the momentum p and the velocity V have already been defined. The next step is to compute their temporal and spatial derivatives, respectively, and reach closure. Then, after some algebra, see (Weinhart et al., 2012b) for details, we arrive at the following expression for the stress:

  σ_αβ = − Σ_{i=1}^{N_f} Σ_{j=i+1}^{N_f} f_ijα r_ijβ ∫_0^1 W(r − r_i + s r_ij) ds
         − Σ_{i=1}^{N_f} Σ_{k=1}^{N_b} f_ikα b_ikβ ∫_0^1 W(r − r_i + s b_ik) ds
         − Σ_{i=1}^{N_f} m_i v′_iα v′_iβ W(r − r_i),   (14)

where v′_i denotes the fluctuation velocity of particle i. Equation (14) differs from the results of Goldhirsch (2010) by an additional term that accounts for the stress created by the presence of the base, as detailed by Weinhart et al. (2012b). The contribution to the stress from the interaction of two flow particles i, j is spatially distributed along the contact line from r_i to r_j, while the contribution from the interaction of particle i with a fixed particle k is distributed along the line from r_i to the contact point c_ik = r_i + b_ik. There, the boundary interaction force density

  t_α = Σ_{i=1}^{N_f} Σ_{k=1}^{N_b} f_ikα W(r − c_ik)   (15)

is active, measuring the force applied by the base to the flow. It has nonzero values only near the basal surface and can be introduced into continuum models as a boundary condition. The strength of this method is that the spatially coarse-grained fields by construction satisfy the mass and momentum balance equations exactly at any given time, irrespective of the choice of the coarse-graining function. Further details about the accuracy of the stress definition (14) are discussed by Weinhart et al. (2012b). The expression for the energy is not treated in this publication; we refer the interested reader to (Babic, 1997).
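To make the coarse-graining concrete, the following one-dimensional analogue of definitions (7) and (8), with invented particle positions and masses, checks that the integral of the coarse-grained density recovers the total mass independently of the width w:

```python
import math

# One-dimensional analogue of the coarse-grained density (7) with the
# Gaussian coarse-graining function (8); the particle data are made up.
w = 0.5                                   # coarse-graining width
positions = [0.0, 0.7, 1.1, 2.5]          # particle positions r_i
masses = [1.0, 1.0, 2.0, 1.5]             # particle masses m_i

def W(x):
    # 1D Gaussian with unit integral over space
    return math.exp(-x * x / (2 * w * w)) / (math.sqrt(2 * math.pi) * w)

def density(r):
    # Equation (7): superposition of smeared-out particle masses
    return sum(m * W(r - ri) for m, ri in zip(masses, positions))

# Integrating the density over space recovers the total mass, whatever
# width w is chosen -- here checked with a simple Riemann sum.
dx = 0.01
total = sum(density(-10 + k * dx) for k in range(int(20 / dx))) * dx
print(round(total, 3), sum(masses))   # prints 5.5 5.5
```

Shrinking w sharpens the field towards the sum of delta functions in (6) and widening it smooths the field, but the integral (and hence the mass balance) is unaffected, which is the "by construction" property stressed above.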

4. SHALLOW GRANULAR FLOWS

4.1. Background

Shallow-layer granular continuum models are often used to simulate geophysical mass flows, including snow avalanches (Cui et al., 2007), dense pyroclastic flows, debris flows (Denlinger & Iverson, 2001), block and ash flows (Dalby et al., 2008), and lahars (Williams et al., 2008). Such shallow-layer models involve approximations reducing the properties of a huge number of individual particles to a handful of averaged quantities. Originally these models were derived from the general continuum incompressible mass and momentum equations, using the long-wave approximation (Savage & Hutter, 1989; Gray, 2003; Bokhove & Thornton, 2012) for shallow variations in the flow height and basal topography. Despite the massive reduction in degrees of freedom, shallow-layer models tend to be surprisingly accurate, and are thus an effective tool in modelling geophysical flows. Moreover, they are now used as a geological risk assessment and hazard planning tool (Dalby et al., 2008). In addition to these geological applications, shallow granular equations have been applied to analyse small-scale laboratory chute flows containing obstacles (Gray, 2003), wedges (Hakonardottir & Hogg, 2005; Gray, 2007) and contractions (Vreman, 2007), showing good quantitative agreement between theory and experiment.

We will consider flow down a slope with inclination θ, with the x-axis downslope, the y-axis across the slope and the z-axis normal to the slope. In general, the free-surface and base locations are given by z = s(x, y) and z = b(x, y), respectively. Here, we will only consider flows over rough flat surfaces, where b can be taken as constant. The height of the flow is h = s − b and the velocity components are u = (u, v, w)^T. Depth-averaging the mass and momentum balance equations and retaining only leading- and first-order terms (in the ratio of height to length of the flow) yields the depth-averaged shallow granular equations (e.g. Gray, 2003), which are given by

  ∂h/∂t + ∂(hū)/∂x + ∂(hv̄)/∂y = 0,   (16a)

  ∂(hū)/∂t + ∂/∂x( α hū² + (K/2) g h² cos θ ) = S_x,   (16b)

where g is the gravitational acceleration, ū the depth-averaged velocity and the source term is given by

  S_x = g h cos θ ( tan θ − μ ū/√(ū² + v̄²) ).

Before these equations can be solved, closure relations need to be specified for three unknowns: the velocity shape factor, α, the ratio of the two diagonal stress components, K, and the friction, μ, between the granular material and the basal surface over which it flows. These closure relations can either be postulated (from theory or phenomenologically), or determined from experiments or DPM simulations. Our philosophy was to determine these unknown relations using small-scale periodic box DPM simulations similar to the ones used by Silbert et al. (2001). Below, some of the main findings are summarized.

Initially, we considered only the spring-dashpot contact model and looked at the closures across different basal surfaces (Weinhart et al., 2012a). The chute is periodic and of size 20d × 10d in the x- and y-directions, with inclination θ, see figure 2. In this case the flow particles are monodispersed. The base was created from particles, and its roughness was changed by modifying the ratio of the size of base and flow particles, λ.
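The role of the closures in (16) can be seen in even the crudest discretization. The sketch below performs one Lax-Friedrichs finite-volume step, with α = 1, K = 1, an invented grid, and a constant friction μ = tan θ, and checks that a uniform flow is then a discrete steady state, since the source term S_x vanishes:

```python
import math

# Minimal 1D finite-volume (Lax-Friedrichs) step for the depth-averaged
# system (16) with alpha = K = 1; grid size, time step and all parameter
# values are illustrative only.
g, theta = 9.81, math.radians(24.0)
mu = math.tan(theta)               # friction exactly balancing gravity
nx, dx, dt = 50, 0.1, 0.001

h = [1.0] * nx                     # flow height
q = [2.0] * nx                     # momentum h*u

def flux(hi, qi):
    u = qi / hi
    return qi, qi * u + 0.5 * g * hi * hi * math.cos(theta)

def step(h, q):
    hn, qn = h[:], q[:]
    for i in range(nx):
        l, r = (i - 1) % nx, (i + 1) % nx      # periodic neighbours
        fhl, fql = flux(h[l], q[l])
        fhr, fqr = flux(h[r], q[r])
        hn[i] = 0.5 * (h[l] + h[r]) - 0.5 * dt / dx * (fhr - fhl)
        u = q[i] / h[i]
        # 1D form of the source term S_x with constant friction mu
        s = g * h[i] * math.cos(theta) * (math.tan(theta) - mu * u / abs(u))
        qn[i] = 0.5 * (q[l] + q[r]) - 0.5 * dt / dx * (fqr - fql) + dt * s
    return hn, qn

h1, q1 = step(h, q)
print(max(abs(a - b) for a, b in zip(h1, h)))   # prints 0.0
```

With μ ≠ tan θ the same state accelerates or decelerates indefinitely, which is precisely the pathology of the constant-friction closure discussed in section 4.3 below; a realistic μ(h, F) allows steady states over a range of inclinations.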

form flow is only possible at a single inclination, , below which the flow arrests, and above which the flow accelerates indefinitely. Detailed experimental investigations (GDR MiDi, 2004; Pouliquen, 1999; Pouliquen & Forterre, 2002) for the flow over rough uniform beds showed that steady flow emerges at a range of inclinations, 1 < < 2, where 1 is the minimum angle required for flow, and 2 is the maximum angle at which steady uniform flow is possible. In (Pouliquen & Forterre, 2002), the measured height hstop () of stationary material left behind when a owing layer has been brought to rest, was fitted to:

hstop Ad

For the shallow layer theory presented in (16), K, is the ratio of two stress component K = xx/zz. First the K was found to be linear in the inclination angle and independent of (for all but the smooth base case of = 0):

tan 2 tan 1 2 tan tan 1

(18)

where d is the particle diameter and A a characteristic dimensionless scale over which the friction varies. They also observed that the Froude number F =

K fit 1
o

d1
d0
o

u / gh cos , scaled linear with this curve:

(17)

with d0 = 132 and d1 = 21.30 .

h 1 2 hstop

(19)

where and are two experimentally determined constants. From these relations you can show that the friction closure is given by:

h, F tan 1

tan 2 tan 1 h 1 Ad F

(20)

Fig. 2. DPM simulation for Nf/200 = 17:5, inclination = 24 and the diameter ratio of free and fixed particles, = 1, at time t = 2000; gravity direction g as indicated. The domain is periodic in x- and y-directions. In the z-direction, fixed particles (black) form a rough base while the surface is unconstrained. Colours indicate speed: increasing from blue via green to orange.

4.3.

Closure for

By far the most studies closure relation for shallow granular flows is the basal friction coefficient . In the early models a constant friction coefficient was assumed (Hungr & Morgenstern, 1984; Savage & Hutter, 1989), i.e. = tan , where is a fixed basal frictional angle. For these models, steady uni-

This experimentally determined law has previously been shown to hold from DPM simulations (e.g. Silbert et al., 2001). In (Thornton, 2012; Weinhart et al., 2012a) we investigate how the parameters A, 1, 2, and change as a function of the contact friction between bed and owing particles, the particle size ratio and even the type of contact law. The main conclusion were: The law (18) holds for the spring-dashpot, plastic and Hertzian contact models The properties of the basal particles have very little affect on the macroscopic friction, that is, only a weak effect on and. The geometric roughness is more important than the contract friction in the interaction law, c. The coefficient of restitution of the particles only affects, not the friction angles.


Full details of the values of A, δ1, δ2, β and γ as a function of both macro- and microscopic parameters can be found in (Thornton et al., 2013; Weinhart et al., 2012a).

4.4. Closure for α

Finally, we consider the closure for α, the velocity shape factor. This is the shape of the velocity profile with height and is defined as

α = (1/(h ū²)) ∫_b^s u² dz   (21)

where b and s denote the base and the free surface of the flow and ū is the depth-averaged velocity. This closure was obtained in two steps: firstly, it was observed that the vertical structure of the flow velocity contains three parts, see figure 3 for details. These parts were then fitted separately, and from these fits the shape factor was computed. The values of α as a function of height and angle θ are shown in figure 4.

Fig. 3. Flow velocity profiles for varying height for 4000 particles and λ = 1. We observe a Bagnold velocity profile, u/ū = (5/3)(1 − ((h − z)/h)^(3/2)), in the bulk, a linear profile near the surface and a convex profile near the base (z < b1 hstop(θ)/h) with b1 = 9.42.

Fig. 4. Shape factor α from simulations (markers), for the case λ = 1, and fit α(h, θ) (dotted lines). For comparison: plug flow α = 1, Bagnold profile α = 1.25 and linear profile α = 1.5.

4.5. Future directions

We have now established closure relations for shallow granular flows, and the natural question is the range of validity of the closure relations derived from these small, steady, periodic-box simulations. This closed continuum model has recently been implemented in an in-house discontinuous Galerkin finite element package, hpGEM (Pesch et al., 2007; van der Vegt et al.). A series of test cases is currently being investigated, including complicated features such as contractions and obstacles. The results of this closed model are compared with computationally expensive full-scale DPM simulations of the same scenarios. This comparison and verification step is represented by the dashed lines in figure 1. It is anticipated that this closed continuum model will work fine for simple flow scenarios; however, for complex flows containing particle-free regions and static materials it is likely to fail. For this situation, a fully two-way coupled code will have to be developed. More discussion of the problems associated with the development of such a code can be found in section 7.

5. COLLISION DETECTION

The performance of a DPM computation relies on several factors, including both the contact model and the contact detection algorithm. The detection of short-range pairwise interactions between particles is usually one of the most time-consuming tasks in DPM computations (Williams & O'Connor, 1999). Naive collision detection requires N² checks, where N is the number of particles, which becomes impractical even for relatively small systems.

The most commonly used method for contact detection of nearly mono-sized particles with short-range forces is the Linked-Cell method (Hockney & Eastwood, 1981; Allen & Tildesley, 1989). Due to its simplicity and high performance, it has been utilized since the beginning of particle simulations, and it is easily implemented in parallel codes (Form et al., 1993; Stadler et al., 1997). Nevertheless, the Linked-Cell method is unable to deal efficiently with particles of greatly varying sizes (Iwai et al., 1999), which will be the case in the next problem considered. This can be addressed effectively by a multilevel contact detection algorithm (Ogarko & Luding, 2012), which we review here. This advanced contact detection algorithm is already implemented in Mercury (Thornton et al.), the open-source code developed here, which is used for all the simulations in this paper. An extensive review of various approaches to contact detection is given in (Munjiza, 2004); the performance differences between them are studied in (Muth et al., 2007; Ogarko & Luding, 2012; Raschdorf & Kolonko, 2011).

5.1. Algorithm

The present algorithm is designed to determine all the pairs, in a set of N spherical particles in a d-dimensional Euclidean space, that overlap. Every particle is characterized by the position of its centre rp and its radius ap. For differently-sized spheres, amin and amax denote the minimum and the maximum particle radius, respectively, and ω = amin/amax is the extreme size ratio.

The algorithm is made up of two phases. In the first, mapping phase, all the particles are mapped into a hierarchical grid space (subsection 5.1.1). In the second, contact detection phase (subsection 5.1.2), for every particle in the system the potential contact partners are determined, and the geometrical intersection tests with them are made.

5.1.1. Mapping phase

The d-dimensional hierarchical grid is a set of L regular grids with different cell sizes. Every regular grid is associated with a hierarchy level h ∈ {1, 2, …, L}, where L is the integer number of hierarchy levels. Each level h has a different cell size sh, where the cells are d-dimensional cubes. Grids are ordered with increasing cell size, so that h = 1 corresponds to the grid with the smallest cell size, i.e., sh < sh+1. For a given number of levels and cell sizes, the hierarchical grid cells are defined by the following spatial mapping M of points r ∈ R^d to a cell at specified level h:

M : (r, h) → c = (⌊r1/sh⌋, …, ⌊rd/sh⌋, h)   (22)

where ⌊r⌋ denotes the floor function (the largest integer not greater than r). The first d components of the (d + 1)-dimensional vector c represent cell indices (integers), and the last one is the associated level of hierarchy; the latter is limited whereas the former are not. It must be noted that the cell size of each level can be set independently, in contrast to contact detection methods which use a tree structure for partitioning the domain (Ericson, 2004; Raschdorf & Kolonko, 2011; Thatcher, 2000), where the cell sizes are taken as double the size of the previous lower level of hierarchy, hence sh+1 = 2sh. The flexibility of independent sh allows one to select the optimal cell sizes, according to the particle size distribution, to improve the performance of the simulations; how to do this is explained in (Ogarko & Luding, 2012).

Using the mapping M, every particle p can be mapped to its cell:

cp = M(rp, h(p))   (23)

where h(p) is the level of insertion to which particle p is mapped. The level of insertion is the lowest level whose cells are big enough to contain the particle:

h(p) = min{h : sh ≥ 2ap, 1 ≤ h ≤ L}   (24)

In this way the diameter of particle p is smaller than or equal to the cell size at its level of insertion, and therefore the classical Linked-Cell method (Allen & Tildesley, 1989) can be used to detect the contacts among particles within the same level of hierarchy. Figure 5 illustrates a 2-dimensional two-level grid for the special case of a bi-disperse system with amin = 3/2, size ratio amax/amin = 8/3, and cell sizes s1 = 3 and s2 = 8. Since the system contains particles of only two different sizes, two hierarchy levels are sufficient here.
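The mapping phase (22)-(24) reduces to integer arithmetic. Below is a minimal sketch (our own, not the Mercury implementation) that reproduces the bi-disperse example of figure 5:

```python
import math

def map_to_cell(r, h, cell_sizes):
    """Spatial mapping M of eq. (22): point r (a tuple) at level h -> cell c."""
    s = cell_sizes[h - 1]                       # levels are 1-based
    return tuple(math.floor(x / s) for x in r) + (h,)

def level_of_insertion(radius, cell_sizes):
    """Eq. (24): the lowest level whose cell size holds the particle diameter."""
    for h, s in enumerate(cell_sizes, start=1):
        if s >= 2.0 * radius:
            return h
    raise ValueError("no level large enough for this particle")

# Bi-disperse example of figure 5: s1 = 2*a_min = 3, s2 = 2*a_max = 8 (a.u.)
cell_sizes = [3.0, 8.0]

# Particle B: radius 4, centre (10.3, 14.4) -> inserted at level 2, cell (1, 1, 2)
hB = level_of_insertion(4.0, cell_sizes)
print(map_to_cell((10.3, 14.4), hB, cell_sizes))   # (1, 1, 2)
```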

5.1.2. Contact detection phase

The contact detection is split into two steps, and the search is done by looping over all particles p and performing the two steps consecutively for each p.

The first step is the contact search at the level of insertion of p, h(p), using the classical Linked-Cell method (Allen & Tildesley, 1989). The search is done in the cell where p is mapped to, i.e., cp, and in its neighbouring (surrounding) cells. Only half of the surrounding cells are searched, to avoid testing the same particle pair twice.

The second step is the cross-level search. For a given particle p, one searches for potential contacts only at levels h lower than its level of insertion: 1 ≤ h < h(p). This implies that particle p will be checked only against smaller ones, thus avoiding double checks for the same pair of particles. The cross-level search for particle p (inserted at level h(p)) against a level h proceeds as follows:
1. Define the cells c^start and c^end at level h as

c^start := M(rp − α Σ_{i=1..d} ei, h),  c^end := M(rp + α Σ_{i=1..d} ei, h)   (25)

where the corners of a search box (a cube in 3D) are rp ± α Σ_{i=1..d} ei, with α = ap + 0.5 sh and ei the standard basis of R^d. Any particle q from level h, i.e., h(q) = h, with centre rq outside this box cannot be in contact with p, since the diameter of the largest particle at this level cannot exceed sh.
2. The search for potential contacts is performed in every cell c = (c1, …, cd; h) for which

ci^start ≤ ci ≤ ci^end for all i ∈ {1, …, d}, and c(d+1) = h < h(p)   (26)

where ci denotes the i-th component of the vector c. In other words, each particle which was mapped to one of these neighbouring cells is tested for contact with particle p. In figure 5, the level h = 1 cells where this search has to be performed (for particle B) are marked in grey.

To test two particles for contact, first the axis-aligned bounding boxes (AABBs) of the particles (Moore & Wilhelms, 1988) are tested for overlap. Then, for every particle pair which passed this test, the exact geometrical intersection test is applied: particles p and q collide only if ‖rp − rq‖ < ap + aq, where ‖·‖ is the Euclidean norm. Since the overlap test for AABBs is computationally cheaper than that for spheres, performing it first usually increases the performance.

Fig. 5. A 2-dimensional two-level grid for the special case of a bi-disperse system with cell sizes s1 = 2amin = 3 and s2 = 2amax = 8 (a.u.). The first-level grid is plotted with dashed lines, the second-level grid with solid lines. The radius of particle B is aB = 4 (a.u.) and its position is rB = (10.3, 14.4). Therefore, according to equations (23) and (24), particle B is mapped at the second level to the cell cB = (1, 1, 2). Correspondingly, particle A is mapped to the cell cA = (4, 2, 1). The cells where the cross-level search for particle B has to be performed, from (1, 3, 1) to (5, 6, 1), are marked in grey, and the small particles located in those cells are dark (green). Note that in the method of Iwai et al. (1999) the search region starts at cell (1, 2, 1), i.e., one more layer of cells (which also includes particle A).

5.2. Performance test

In this section we present numerical results on the performance of the algorithm when applied to bi-disperse particle systems, i.e., systems with two particle sizes, as will be considered for the segregation case in the next section. For such systems, the cell sizes of the two-level grid can simply be selected as the two diameters of the particle species. However, for some


situations this may not be as efficient as using the single-level Linked-Cell method, as we show below. How the algorithm performs for polydisperse systems, and how to select optimal cell sizes and the number of levels for such systems, is shown in (Ogarko & Luding, 2012). We use homogeneous and isotropic disordered systems of colliding elastic spherical particles in a unit cubical box with hard walls. The motion of the particles is governed by Newton's second law with a linear elastic contact force during overlap. For simplicity, every particle undergoes only translational motion (no rotation) and gravity is set to zero. For more details on the numerical procedure and the preparation of the initial configurations see (Ogarko & Luding, 2012). We consider a bi-disperse size distribution with the same volume of small and large particles. This distribution can be characterized by a single parameter, ω, the ratio between the small and large particle radii; in this convention 0 < ω ≤ 1. The considered systems have a volume fraction close to the jamming density: the volume fraction of systems with ω = 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2 is 0.631, 0.635, 0.642, 0.652, 0.665, 0.682, 0.703, 0.723, respectively. For the influence of the volume fraction on the performance of the algorithm, see (Ogarko & Luding, 2011).
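Before timing the algorithm it is worth checking its correctness. The following self-contained sketch is our own illustration (in 2D for brevity; Mercury itself is far more elaborate): it implements the two search steps of section 5.1, with an index-ordering filter standing in for the "half of the neighbouring cells" optimization, and verifies the result against an O(N²) brute-force check on a random bi-disperse sample.

```python
import math
import random

def cell_index(r, s):
    """First d components of the mapping (22) for a point r and cell size s."""
    return tuple(math.floor(x / s) for x in r)

def find_contacts(pos, rad, cell_sizes):
    """All overlapping pairs via the two-phase hierarchical-grid scheme of 5.1 (2D)."""
    # level of insertion, eq. (24)
    level = [next(h for h, s in enumerate(cell_sizes, 1) if s >= 2 * a) for a in rad]
    grid = {h: {} for h in range(1, len(cell_sizes) + 1)}   # level -> cell -> ids
    for p, r in enumerate(pos):
        grid[level[p]].setdefault(cell_index(r, cell_sizes[level[p] - 1]), []).append(p)

    def overlap(p, q):
        return math.dist(pos[p], pos[q]) < rad[p] + rad[q]

    pairs = set()
    for p, r in enumerate(pos):
        hp = level[p]
        # Step 1: same-level search over the cell of p and its neighbours;
        # the q > p filter ensures each pair is tested only once.
        cp = cell_index(r, cell_sizes[hp - 1])
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for q in grid[hp].get((cp[0] + dx, cp[1] + dy), []):
                    if q > p and overlap(p, q):
                        pairs.add((p, q))
        # Step 2: cross-level search at lower levels, eqs. (25)-(26).
        for h in range(1, hp):
            s = cell_sizes[h - 1]
            alpha = rad[p] + 0.5 * s
            lo = cell_index((r[0] - alpha, r[1] - alpha), s)
            hi = cell_index((r[0] + alpha, r[1] + alpha), s)
            for cx in range(lo[0], hi[0] + 1):
                for cy in range(lo[1], hi[1] + 1):
                    for q in grid[h].get((cx, cy), []):
                        if overlap(p, q):
                            pairs.add((min(p, q), max(p, q)))
    return pairs

# Random bi-disperse sample, cell sizes equal to the two particle diameters.
N = 300
random.seed(0)
pos = [(random.uniform(0, 40), random.uniform(0, 40)) for _ in range(N)]
rad = [random.choice((1.5, 4.0)) for _ in range(N)]
brute = {(p, q) for p in range(N) for q in range(p + 1, N)
         if math.dist(pos[p], pos[q]) < rad[p] + rad[q]}
assert find_contacts(pos, rad, [3.0, 8.0]) == brute
print(len(brute), "contacts found by both methods")
```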

Figure 6 shows the speed-up S of the two-level grid relative to the single-level grid (Linked-Cell method). For similar particle sizes, i.e., ω > 0.7, the use of the two-level grid slightly (within 40%) slows down the performance of the algorithm. This is due to the overhead associated with the cross-level tests. With increasing difference in particle size, i.e., decreasing ω, the speed-up increases. For ω < 0.7 the speed-up exceeds unity and the use of the two-level grid becomes advantageous. The maximum speed-up of 22 is achieved for the lowest considered value, ω = 0.2. A much higher speed-up is expected for ω < 0.2.

Fig. 6. The speed-up S of the two-level grid relative to the single-level grid (Linked-Cell method) for bi-disperse systems with different size ratios ω. The number of particles used is N = 128 000 for ω > 0.4, N = 256 000 for ω = 0.3 and N = 768 000 for ω = 0.2. Three independent runs were performed for every ω and the average CPU time values are used for the calculation of S.

6. MICRO-MACRO FOR SEGREGATING FLOWS

6.1. Background

Except for the very special case of all particles being identical in density and size, segregation effects can be observed in granular materials. In both natural and industrial situations, segregation plays an important, but poorly understood, role in the flow dynamics (Iverson, 2003; Shinbrot et al., 1999). There are many mechanisms for the segregation of dissimilar grains in granular flows; however, segregation due to size differences is often the most important (Drahun & Bridgewater, 1983). We will focus on dense granular chute flows where kinetic sieving (Middleton, 1970; Savage & Lun, 1988) is the dominant mechanism for particle-size segregation. The basic idea is: as the grains avalanche down-slope, the local void ratio fluctuates and small particles preferentially fall into the gaps that open up beneath them, as they are more likely to fit into the available space than the large ones. The small particles therefore migrate towards the bottom of the flow and lever the large particles upwards due to force imbalances. This was termed squeeze expulsion by Savage and Lun (1988).

The first model of kinetic sieving was developed by Savage and Lun (1988), using a statistical argument about the distribution of void spaces. Later, Gray and Thornton (2005) developed a similar model within a mixture-theory framework. Their derivation has two key assumptions: firstly, as the different particles percolate past each other there is a Darcy-style drag between the different constituents (i.e., the small and large particles) and, secondly, particles falling into void spaces do not support any of the bed weight. Since the number of voids available for small particles to fall into is greater than for large particles, it follows that a higher percentage of the small particles will be falling and, hence, not supporting any of the bed load.

In recent years, this segregation theory has been developed and extended in many directions, including the addition of a passive background fluid (Thornton et al., 2006), the effect of diffusive remixing (Gray & Chugunov, 2006), and the generalization to multi-component granular flows (Gray & Ancey, 2011). We will use the two-particle size segregation-remixing version derived by Gray and Chugunov (2006); however, it should be noted that Dolgunin and Ukolov (1995) were the first to suggest this form, using phenomenological arguments.

The bi-disperse segregation-remixing model contains two dimensionless parameters. These will, in general, depend on flow and particle properties such as the size ratio, material properties, shear rate, slope angle and particle roughness. One of the weaknesses of the model is that it is not able to predict the dependence of these two parameters on the particle and flow properties. Here we summarize the main results of (Thornton et al., 2012), where the ratio of these parameters was determined from DPM simulations.

The two-particle segregation-remixing equation (Gray & Chugunov, 2006) takes the form of a non-dimensional scalar conservation law for the small particle concentration φ as a function of the spatial coordinates x̂, ŷ and ẑ, and time t̂.

∂φ/∂t̂ + ∂(φ û)/∂x̂ + ∂(φ v̂)/∂ŷ + ∂(φ ŵ)/∂ẑ − ∂/∂ẑ [Sr φ(1 − φ)] = ∂/∂ẑ [Dr ∂φ/∂ẑ]   (27)

where Sr is a dimensionless measure of the segregation rate, whose form in the most general case is discussed in Thornton et al. (2006), and Dr is a dimensionless measure of the diffusive remixing. In (27), ∂ is used to indicate a partial derivative, and the hat a dimensionless variable; x̂ is the down-slope coordinate, ŷ the cross-slope coordinate and ẑ the base-normal coordinate. Furthermore, û, v̂ and ŵ are the dimensionless bulk velocity components in the x̂, ŷ and ẑ directions, respectively.

The conservation law (27) is derived under the assumption of uniform porosity and is often solved subject to the condition that there is no normal flux of particles through the base or the free surface of the flow. We limit our attention to small-scale DPM simulations, periodic in the x- and y-directions, and investigate the final steady states. Therefore, we are interested in a steady-state solution to (27), subject to the no-normal-flux boundary conditions at ẑ = 0 (the bottom) and ẑ = 1 (the top), that is independent of x̂ and ŷ. Gray and Chugunov (2006) showed that such a solution takes the form

φ = [1 − exp(−φ0 Ps)] exp((φ0 − ẑ)Ps) / {1 − exp(−(1 − φ0)Ps) + [1 − exp(−φ0 Ps)] exp((φ0 − ẑ)Ps)}   (28)

where Ps = Sr/Dr is the segregation Peclet number and φ0 is the mean concentration of small particles. This solution represents a balance between the last two terms of (27) and is related to the logistic equation. In general, Ps will be a function of the particle properties, and we will use DPM to investigate the dependence of Ps on the particle size ratio σ = ds/dl. It should be noted that σ has been defined such that it is consistent with the original theory of Savage and Lun (1988); however, with this definition only values between 0 and 1 are possible. Therefore, we will present the results in terms of σ⁻¹, which ranges from 1 to infinity.

6.2. The micro-macro transition

Figure 7 shows a series of images from the DPM simulations at different times and values of σ⁻¹. The simulations take place in a box which is periodic in x and y, 5ds wide and 83.3ds long, inclined at an angle of 25° to the horizontal. The base was created by adding fixed small particles randomly to a flat surface. The simulations are performed with 5000 flowing small particles, and the number of large particles is chosen such that the total volume of large and small particles is equal, i.e., φ0 = 0.5 (to within the volume of one large particle).

Figure 8 shows a fit of equation (28) to the small particle volume fraction for several cases. The fit is performed using non-linear regression as implemented in MATLAB. The fit is reasonable in all cases, especially considering that there is only one degree of freedom, Ps. From these plots it is possible to obtain Ps as a function of σ⁻¹, which was found to be given by

Ps = Pmax [1 − exp(−k(σ⁻¹ − 1))]   (29)

where k = 5.21 is the saturation constant and Pmax = 7.35.
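Equations (28) and (29) together give a prediction of the steady concentration profile for any size ratio. The short sketch below (our own illustration) evaluates both relations and numerically confirms a built-in property of (28): its depth average equals φ0.

```python
import math

K_SAT, P_MAX = 5.21, 7.35      # fitted constants of eq. (29)

def peclet(sigma_inv):
    """Segregation Peclet number Ps as a function of sigma^-1, eq. (29)."""
    return P_MAX * (1.0 - math.exp(-K_SAT * (sigma_inv - 1.0)))

def phi(z, ps, phi0=0.5):
    """Steady small-particle concentration profile of eq. (28), 0 <= z <= 1."""
    a = 1.0 - math.exp(-phi0 * ps)
    b = 1.0 - math.exp(-(1.0 - phi0) * ps)
    e = math.exp((phi0 - z) * ps)
    return a * e / (b + a * e)

ps = peclet(2.0)               # size ratio sigma^-1 = 2
# the depth average of the profile recovers the mean concentration phi0 = 0.5
mean = sum(phi((i + 0.5) / 1000, ps) for i in range(1000)) / 1000
print(round(ps, 2), round(mean, 3))   # 7.31 0.5
```

A second sanity check, not printed here, is that (28) satisfies the steady balance dφ/dẑ = −Ps φ(1 − φ), i.e. the logistic equation mentioned above.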


Fig. 7. A series of snapshots from the DPM simulations with large (orange) and small (blue) particles. The rows correspond to distinct particle size ratios and the columns to different times. Along the top row σ⁻¹ = 1.1, middle row σ⁻¹ = 1.5 and bottom row σ⁻¹ = 2; the left column is for t = 1, middle t = 5 and right t = 60.

Fig. 8. Plots of the small particle volume fraction φ as a function of the scaled depth ẑ. The black lines are the coarse-grained DPM simulation data and the blue lines are the fit to equation (28) produced with MATLAB's non-linear regression function. For the fit only Ps is used as a free parameter. Dotted lines show the 95% confidence intervals of the fit.
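The one-parameter fit behind figure 8 can be reproduced with any scalar least-squares minimiser (the paper used MATLAB's non-linear regression). The stdlib-only sketch below is our own illustration, with synthetic noisy data standing in for the coarse-grained DPM profiles; it recovers Ps by golden-section search:

```python
import math
import random

def phi(z, ps, phi0=0.5):
    """Steady profile of eq. (28)."""
    a = 1.0 - math.exp(-phi0 * ps)
    b = 1.0 - math.exp(-(1.0 - phi0) * ps)
    e = math.exp((phi0 - z) * ps)
    return a * e / (b + a * e)

def fit_ps(zs, data, lo=0.01, hi=50.0, iters=120):
    """Least-squares fit of the single parameter Ps by golden-section search."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    sse = lambda ps: sum((phi(z, ps) - d) ** 2 for z, d in zip(zs, data))
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - g * (b - a), a + g * (b - a)
        if sse(c) < sse(d):
            b = d
        else:
            a = c
    return 0.5 * (a + b)

# synthetic "measurement": profile at Ps = 5.0 plus small Gaussian noise
random.seed(42)
zs = [(i + 0.5) / 50 for i in range(50)]
data = [phi(z, 5.0) + random.gauss(0.0, 0.01) for z in zs]
print(round(fit_ps(zs, data), 2))
```

With only one free parameter the objective is well behaved, so even this simple bracketing search recovers Ps to within the noise level.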

6.3. Future directions

For the segregation case, the micro-macro transition has been shown to be useful in establishing the relations between the parameters that appear in the continuum descriptions and the material parameters of the particles. Additionally, in this case, we highlighted a discrepancy between the particle simulations and the theory, see figure 8, i.e. the inflection point near the base in the simulated concentration profiles. Further analysis of the simulation data has shown that this discrepancy arises because the particle diffusion is not constant with depth, as assumed by the model. Therefore, the model has to be improved to capture the full dynamics in these situations.

From a modelling point of view, one of the open topics at the moment is the derivation of segregating shallow-water models, see e.g. (Woodhouse et al., 2012), but this is beyond the scope of this review.

7. CONCLUSIONS AND FUTURE PERSPECTIVE

Here, we have shown that continuum parameters such as the macroscopic friction can be accurately




extracted from particle simulations. We have shown that the micro-macro transition can be achieved using small particle simulations, i.e., we can determine the closure relations for a continuum model as a function of the microscopic parameters. Here, this one-way coupling from micro- to macro-variables was achieved for steady uniform flows, but it can in principle be used to predict non-uniform, time-dependent flows, provided that the variations in time and space are small. Comparisons with large-scale experiments and large DPM simulations are needed to determine the range of parameters for which the steady uniform closure laws hold, as indicated in figure 1a. However, for strongly varying flows, such as arresting flows, avalanching flows, flow near boundaries or near abrupt phase changes (dead zones, shocks), no closure relations in functional form are known. Ideally, the full 3D granular flow rheology could be determined in the full parameter space and then introduced into a pure continuum solver. However, since the parameter space is just too wide, and situations can and will occur that are not covered by a systematic parameter study, other strategies and approaches can be thought of. For such interesting situations, where the rheology enters unknown regimes, or where changes are too strong and/or fast, a two-way coupling to a particle solver is a valid approach. If these complex regions are small, one can use a two-way boundary coupling, where a particle solver is used in the complex region and a continuum solver in the remaining domain, with an overlapping region where both solvers are used and where the two methods are coupled by using suitable boundary conditions (Markesteijn, 2011). Alternatively, if the complex regions are too large to be solved by particle simulations, one can use a continuum solver in which a small particle simulation is run each time the closure relations need to be evaluated (Weinan et al., 2007).
This particle simulation is two-way coupled to the continuum solution in the sense that it has to satisfy the parameters set by the continuum solution (such as height, depth-averaged velocity and depth-averaged velocity gradient and boundary conditions) and return the closure relations (such as friction and velocity shape factor). Both alternative strategies provide plenty of unsolved challenges in communicating between discrete and continuous "worlds" concerning nomenclature, parameters, boundary conditions and their respective control.

The next versions of both the in-house continuum solver hpGEM (van der Vegt et al.) and the DPM code Mercury (Thornton et al.) are designed such that they can be easily coupled, and hence can form the basis of a two-way coupled granular code.

Acknowledgements. The authors would like to thank the late Institute of Mechanics, Processes and Control, Twente (IMPACT) for the primary financial support of this work as part of the research programme "Superdispersed multiphase flows". The DPM simulations performed for this paper were undertaken in Mercury-DPM, which was initially developed within this IMPACT programme. It is primarily developed by T. Weinhart, A. R. Thornton and D. Krijgsman as a joint project between the Multi-Scale Mechanics (Mechanical Engineering) and the Mathematics of Computational Science (Applied Mathematics) groups at the University of Twente. We also thank the NWO VICI grant 10828, the DFG project SPP1482 B12 and the Technology Programme of the Ministry of Economic Affairs (STW MuST project 10120) for financial support.

REFERENCES
Allen, M.P., Tildesley, D.J., 1989, Computer simulations of liquids, Clarendon Press, Oxford.
Babic, B., 1997, Average balance equations for granular materials, Int. J. Eng. Science, 35 (5), 523-548.
Bokhove, O., Thornton, A.R., 2012, Shallow granular flows, in: Fernando, H.J., ed., Handbook of environmental fluid dynamics, CRC Press.
Cui, X., Gray, J.M.N.T., Johannesson, T., 2007, Deflecting dams and the formation of oblique shocks in snow avalanches at Flateyri, J. Geophys. Res., 112.
Cundall, P.A., Strack, O.D.L., 1979, A discrete numerical model for granular assemblies, Geotechnique, 29, 47-65.
Dalbey, K., Patra, A.K., Pitman, E.B., Bursik, M.I., Sheridan, M.F., 2008, Input uncertainty propagation methods and hazard mapping of geophysical mass flows, J. Geophys. Res., 113.
Denlinger, R.P., Iverson, R.M., 2001, Flow of variably fluidized granular masses across three-dimensional terrain, 2. Numerical predictions and experimental tests, J. Geophys. Res., 106 (B1), 533-566.
Dolgunin, V.N., Ukolov, A.A., 1995, Segregation modelling of particle rapid gravity flow, Powder Tech., 83 (2), 95-103.
Drahun, J.A., Bridgewater, J., 1983, The mechanisms of free surface segregation, Powder Tech., 36, 39-53.
Ericson, C., 2004, Real-time collision detection (The Morgan Kaufmann Series in Interactive 3-D Technology), Morgan Kaufmann Publishers Inc., San Francisco.
Form, W., Ito, N., Kohring, G.A., 1993, Vectorized and parallelized algorithms for multi-million particle MD-simulation, International Journal of Modern Physics C, 4 (6), 1085-1101.


GDR Midi, 2004, On dense granular flows, Eur. Phys. J. E, 14, 341-365.
de Gennes, P.G., 2008, From rice to snow, in: Nishina Memorial Lectures, vol. 746 of Lecture Notes in Physics, Springer, Berlin/Heidelberg, 297-318.
Silbert, L.E., Ertas, D., Grest, G.S., Halsey, T.C., Levine, D., Plimpton, S.J., 2001, Granular flow down an inclined plane: Bagnold scaling and rheology, Phys. Rev. E, 64, 051302.
Goldhirsch, I., 2010, Stress, stress asymmetry and couple stress: from discrete particles to continuous fields, Granular Matter, 12 (3), 239-252.
Gray, J.M.N.T., Ancey, C., 2011, Multi-component particle-size segregation in shallow granular avalanches, J. Fluid Mech., 678, 535-588.
Gray, J.M.N.T., Chugunov, V.A., 2006, Particle-size segregation and diffusive remixing in shallow granular avalanches, J. Fluid Mech., 569, 365-398.
Gray, J.M.N.T., Cui, X., 2007, Weak, strong and detached oblique shocks in gravity driven granular free-surface flows, J. Fluid Mech., 579, 113-136.
Gray, J.M.N.T., Tai, Y.C., Noelle, S., 2003, Shock waves, dead zones and particle-free regions in rapid granular free surface flows, J. Fluid Mech., 491, 161-181.
Gray, J.M.N.T., Thornton, A.R., 2005, A theory for particle size segregation in shallow granular free-surface flows, Proc. Royal Soc. A, 461, 1447-1473.
Hakonardottir, K.M., Hogg, A.J., 2005, Oblique shocks in rapid granular flows, Phys. Fluids.
Hockney, R.W., Eastwood, J.W., 1981, Computer simulation using particles, McGraw-Hill, New York.
Hungr, O., Morgenstern, N.R., 1984, Experiments on the flow behaviour of granular materials at high velocity in an open channel, Geotechnique, 34 (3).
Irving, J.H., Kirkwood, J.G., 1950, The statistical mechanical theory of transport processes. IV. The equations of hydrodynamics, The Journal of Chemical Physics.
Iverson, R.M., 2003, The debris-flow rheology myth, in: Rickenmann and Chen, eds, Debris-flow hazards mitigation: Mechanics, prediction and assessment, Millpress, 303-314.
Iwai, T., Hong, C.W., Greil, P., 1999, Fast particle pair detection algorithms for particle simulations, Int. J. Modern Physics C, 10 (5), 823-837.
Jenkins, J.T., Savage, S.B., 1983, A theory for the rapid flow of identical, smooth, nearly elastic, spherical particles, J. Fluid Mech., 130, 187-202.
Luding, S., 2004, Micro-macro models for anisotropic granular media, in: Vermeer, P.A., Ehlers, W., Herrmann, H.J., Ramm, E., eds, Micro-macro models for anisotropic granular media, Balkema A.A., Leiden, 195-206.
Luding, S., 2008, Cohesive, frictional powders: contact models for tension, Granular Matter, 10 (4), 235-246.
Luding, S., Laetzel, M., Volk, W., Diebels, S., Herrmann, H.J., 2001, From discrete element simulations to a continuum model, Comp. Meth. Appl. Mech. Engng., 191, 21-28.
Lun, C.K.K., Savage, S.B., Jeffrey, D.J., Chepurniy, N., 1984, Kinetic theories for granular flow: inelastic particles in Couette flow and slightly inelastic particles in a general flow field, J. Fluid Mech., 140, 223.
Markesteijn, A.P., 2011, Connecting molecular dynamics and computational fluid dynamics, Ph.D. Thesis, University of Delft.
Middleton, G.V., 1970, Experimental studies related to problems of flysch sedimentation, in: Lajoie, J., ed., Flysch Sedimentology in North America, Business and Economics Science Ltd, Toronto, 253-272.
Moore, M., Wilhelms, J., 1988, Collision detection and response for computer animation, Computer Graphics (SIGGRAPH '88 Proceedings), 22 (4), 289-298.
Munjiza, A., 2004, The combined finite-discrete element method, John Wiley & Sons Ltd.
Muth, B., Mueller, M.K., Eberhard, P., Luding, S., 2007, Collision detection and administration for many colliding bodies, Proc. DEM07, 1-18.
Ogarko, V., Luding, S., 2010, Data structures and algorithms for contact detection in numerical simulation of discrete particle systems, Proc. 6th World Congress on Particle Technology WCPT6, Nuremberg.
Ogarko, V., Luding, S., 2011, A study on the influence of the particle packing fraction on the performance of a multilevel contact detection algorithm, in: Onate, E., Owen, D.R.J., eds, II Int. Conf. on Particle-based Methods - Fundamentals and Applications, PARTICLES 2011, Barcelona, Spain, 1-7.
Ogarko, V., Luding, S., 2012, A fast multilevel algorithm for contact detection of arbitrarily polydisperse objects, Computer Physics Communications, 183 (4), 931-936.
Pesch, L., Bell, A., Sollie, W.E.H., Ambati, V.R., Bokhove, O., van der Vegt, J.J.W., 2007, hpGEM - a software framework for discontinuous Galerkin finite element methods, ACM Transactions on Mathematical Software, 33 (4).
Pouliquen, O., 1999, Scaling laws in granular flows down rough inclined planes, Phys. Fluids, 11 (3), 542-548.
Pouliquen, O., Forterre, Y., 2002, Friction law for dense granular flows: application to the motion of a mass down a rough inclined plane, J. Fluid Mech., 453, 131-151.
Raschdorf, S., Kolonko, M., 2011, A comparison of data structures for the simulation of polydisperse particle packings, Int. J. Num. Meth. Eng., 85, 625-639.
Savage, S.B., Hutter, K., 1989, The motion of a finite mass of material down a rough incline, J. Fluid Mech., 199, 177-215.
Savage, S.B., Lun, C.K.K., 1988, Particle size segregation in inclined chute flow of dry cohesionless granular material, J. Fluid Mech., 189, 311-335.
Schofield, P., Henderson, J.R., 1982, Statistical mechanics of inhomogeneous fluids, Proc. R. Soc., 379, 231-246.
Shen, S., Atluri, S.N., 2004, Atomic-level stress calculation and continuum molecular system equivalence, CMES, 6 (1), 91-104.
Shinbrot, T., Alexander, A., Muzzio, F.J., 1999, Spontaneous chaotic granular mixing, Nature, 397 (6721), 675-678.
Stadler, J., Mikulla, R., Trebin, H.R., 1997, IMD: A software package for molecular dynamics studies on parallel computers, Int. J. Modern Physics C, 8 (5), 1131-1140.
Thatcher, U., 2000, Loose octrees, in: DeLoura, M., ed., Game Programming Gems, Charles River Media.
Thornton, A.R., Weinhart, T., Krijgsman, D., Luding, S., Bokhove, O., MercuryDPM, http://www2.msm.ctw.utwente.nl/athornton/MD/.
Thornton, A.R., Gray, J.M.N.T., Hogg, A.J., 2006, A three-phase model of segregation in shallow granular free-surface flows, J. Fluid Mech., 550, 1-25.
Thornton, A.R., Weinhart, T., Luding, S., Bokhove, O., 2012, Modelling of particle size segregation: Calibration using the discrete particle method, Int. J. Mod. Phys. C, 23 (8), 1240014.
Thornton, A.R., Weinhart, T., Luding, S., Bokhove, O., 2013, Friction dependence of shallow granular flows from discrete particle simulations, submitted to Eur. Phys. Lett.
Todd, B.D., Evans, D.J., Daivis, P.J., 1995, Pressure tensor for inhomogeneous fluids, Phys. Rev. E, 52 (2).
van der Vegt, J.J.W., Thornton, A.R., Bokhove, O., hpGEM, http://www2.msm.ctw.utwente.nl/athornton/hpGEM/.
Vreman, A.W., Al-Tarazi, M., Kuipers, A.M., van Sint Annaland, M., Bokhove, O., 2007, Supercritical shallow granular flow through a contraction: experiment, theory and simulation, J. Fluid Mech., 578, 233-269.
Walton, O.R., Braun, R.L., 1986, Viscosity, granular temperature, and stress calculations for shearing assemblies of inelastic, frictional disks, Journal of Rheology, 30, 949.
Weinan, E., Engquist, B., Li, X., Ren, W., Vanden-Eijnden, E., 2007, Heterogeneous multiscale methods: a review, Communications in Computational Physics, 2 (3), 367-450.
Weinhart, T., Thornton, A.R., Luding, S., Bokhove, O., 2012a, Closure relations for shallow granular flows from particle simulations, Granular Matter, 14, 531-552.
Weinhart, T., Thornton, A.R., Luding, S., Bokhove, O., 2012b, From discrete particles to continuum fields near a boundary, Granular Matter, 14, 289-294.
Williams, J.R., O'Connor, R., 1999, Discrete element simulation and the contact problem, Archives of Computational Methods in Engineering, 6 (4), 279-304.
Williams, J.R., Stinton, A.J., Sheridan, M.F., 2008, Evaluation of the Titan 2D two-phase flow model using an actual event: Case study of the Vazcun Valley lahar, Journal of Volcanology and Geothermal Research, 177, 760-766.
Woodhouse, M., Thornton, A.R., Johnson, C., Kokelaar, P., Gray, J.M.N.T., 2012, Segregation induced fingering instabilities in granular avalanches, J. Fluid Mech., 709, 543-580.
MULTI-SCALE METHODS FOR MULTI-COMPONENT GRANULAR MATERIALS

Abstract

The paper presents the progress in understanding the flow of granular materials that has recently been achieved thanks to multi-scale modelling techniques. First, the discrete particle method (DPM) is discussed and it is explained how continuum fields should be constructed from the discrete data so that the model is consistent with the macroscopic conservation of mass and momentum. A new contact detection method is also presented, which can be applied to multi-component granular materials with particle sizes varying over orders of magnitude. It is shown how advanced DPM simulations can be used to obtain closure relations for the continuum model (the mapping of variables and functions between the micro and macro scales). This has enabled the development of continuum models that contain information about the microstructure of granular materials without the need for additional assumptions. The micro-macro transition is presented for two granular flow problems. The first is flow down a shallow chute, where the main unknown in the continuum model is the macroscopic friction coefficient; it is investigated how this coefficient depends on the properties of the flowing particles and of the surface over which they flow. The second problem analysed is segregation in the chute flow of polydisperse particles. In both problems, short steady-state intervals of the DPM simulation were considered in order to obtain realistic closure relations describing the continuum material properties. The paper also discusses the accuracy and validity of the described closure relations for complex dynamic problems, which are far from the previously discussed steady-state solutions for which the relations were obtained. For simple cases the application of the defined closure relations gave correct results; in more complicated situations new, more advanced solutions are needed, in which the macro continuum and the micro discrete model are coupled dynamically with feedback.
Received: September 20, 2012 Received in a revised form: November 4, 2012 Accepted: November 21, 2012

COMPUTER METHODS IN MATERIALS SCIENCE


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

NUMERICAL ASPECTS OF COMPUTATIONAL HOMOGENIZATION


MARTA SERAFIN*, WITOLD CECOT
Cracow University of Technology, ul. Warszawska 24, 31-155 Kraków
*Corresponding author: mserafin@L5.pk.edu.pl
Abstract

Computational homogenization enables replacement of a heterogeneous domain by an equivalent body with effective material parameters. The approach we use is based on a two-scale micro/macro analysis. In the micro-scale the heterogeneous properties are collected in so-called representative volume elements (RVE), which are small enough to satisfy the scale separation condition, but also large enough to contain all information about the material heterogeneity. In the macro-scale the material is assumed to be homogeneous, with the effective material parameters obtained from the RVE analysis. The coupling between both scales is provided at selected macro-level points, which are associated with independent RVEs. Then, approximation of the solution in the whole domain is performed. Even though such a homogenization significantly reduces the time of computation, the efficiency and accuracy of the analysis are still non-trivial issues. At the micro-level it is required to guarantee an accurate representation of the heterogeneity, and at both scales the optimal number of degrees of freedom should be used. The paper presents the application of one of the most efficient numerical techniques, i.e. automatic hp-adaptive FEM, which enables a user to obtain error-controlled results in a rather short time; the assessment of the homogenization error, which is crucial for determination of the parts of the body where homogenization cannot be used; and the hp-mixed FEM discretization details.

Key words: homogenization, representative volume element, adaptive finite element method, mixed FEM

1. INTRODUCTION

Even though numerical homogenization speeds up the solution of real-life problems for heterogeneous materials, the time of computation may still be very large, especially if nonlinearity is accounted for. Therefore, in this paper we discuss certain numerical aspects that should increase the efficiency of computational homogenization (Feyel, 2003; Gitman, 2006; Kouznetsova et al., 2004) without losing its accuracy. The numerical analysis is performed by the automatic hp-adaptive version of the FEM (Demkowicz et al., 2002). It is well known that the method gives the fastest convergence for linear problems. We have confirmed that one may expect the same situation

for inelastic problems (Serafin and Cecot, 2012) and that it may be used both at the micro- and macro-levels. Since stresses are of primary interest, we decided to use the mixed FEM, in which stresses are approximated directly rather than by derivatives of displacements. The stability of this approach is a difficult task (Arnold et al., 2007), but an efficient and stable hp-mixed FEM is possible if an appropriate weak formulation and shape functions are used. In order to increase the reliability of the results, the error of homogenization is estimated (Cecot et al., 2012). We propose assessment of the homogenization error by additional analyses in selected subdomains with boundary conditions determined by the homogenized solution. The residuum of the differential equation for the heterogeneous body may also be used as another criterion for detection of subdomains with a large discrepancy between the exact and homogenized solutions.

ISSN 1641-8581

2. ADAPTIVE FINITE ELEMENT METHOD

In this paper the automatic hp-adaptive finite element method, proposed by Demkowicz et al. (2002), is used for the numerical analysis. Its main advantage is exponential convergence, superior to h and p adaptation techniques, where only algebraic convergence may be obtained. This automatic mesh adaptation was successfully used for various linear problems. Its key point is an appropriate strategy of anisotropic h, p or hp mesh refinement. The strategy proposed by Demkowicz et al. (2002) is based on the interpolation error estimate, which is a good upper bound of the best approximation error; that in turn, for coercive problems, by Cea's lemma, is the upper bound of the actual approximation error. The aforementioned interpolation error is estimated making use of a fine mesh solution (h/2, p+1), denoted here for the sake of brevity by w, that serves as a substitute for the exact solution. Such an ''exact'' solution is interpolated locally on the possible new hp-refined meshes. The difference between w and its interpolant approximates the interpolation error, and the optimal anisotropic mesh refinement is determined in such a way that the reduction of the interpolation error per number of additional degrees of freedom is maximal. It means that for the coarse mesh the optimal (h, p or hp) refinement is determined by maximizing the following expression

max over M_hp,opt of  ( ||w - Pi_hp w||^2 - ||w - Pi_hp,opt w||^2 ) / ( N_hp,opt - N_hp )            (1)

with the additional assumption that the mesh is one-irregular, where w denotes the fine mesh solution; M_hp and M_hp,opt denote an arbitrary and the optimal new mesh, respectively; Pi_hp and Pi_hp,opt denote the H1 projection-based interpolants on the current and optimal meshes, respectively; N_hp,opt and N_hp are the numbers of degrees of freedom of the optimal and current meshes. The maximization is performed by a search over a suitable subset of all possible hp refinements for every coarse mesh element. Thus, the adaptation algorithm starts with the solution of the problem on the current (coarse) mesh (h, p). Then, the refinement in both h and p is performed and the optimal mesh is selected by maximization of the function defined by equation (1). For large problems the computation of the fine mesh solution may be time-consuming. However, an only partially converged solution, obtained e.g. by a fast two-grid solver, may be used to guide the optimal hp-refinement.

In this paper the convergence of the hp adaptation strategy for elastic-plastic problems is examined for two-scale modeling and some modifications of the algorithm are proposed. According to the literature (Barthold et al., 1998; Cecot, 2007; Gallimard et al., 1996), inelastic deformations should be accounted for in a special way in a-posteriori error estimates in order to obtain an appropriate stress approximation accuracy. An additional h-refinement is proposed along the elastic-plastic interface, which is a place of lower solution regularity. Such an algorithm is called here the modified (automatic) hp-refinement.

To verify the efficiency of the automatic hp-adaptive FEM for inelastic problems, an RVE with a cylinder-like inclusion was analyzed. A quarter of the domain was considered (figure 1). Both materials underwent elastic-plastic deformations. More precisely, a plane strain problem with the Mises yield condition, the normality rule with kinematic hardening and the boundary conditions specified in figure 1 is considered. The material parameters are collected in table 1. After additional h-refinements of the elements which contained both elastic and plastic parts, the elastic-plastic zone was successfully detected. A comparison of the meshes obtained by the original and the modified hp algorithm is shown in figure 2. The convergence history is presented in figure 3. On the basis of this and other examples, not presented here, one may conclude that the original automatic hp algorithm works well for inelastic problems, even though the elastic-plastic interface is not detected in an accurate way.

Fig. 1. RVE. Boundary conditions, elastic and plastic zones. No-penetration boundary conditions were assumed on the left and bottom sides. Zero and constant loading (220 MPa) were applied at the top and right edges.

Table 1. Material parameters.

  Material parameter            inclusion   matrix
  Young modulus (GPa)           300         100
  Poisson ratio                 0.3         0.3
  Yield strength (MPa)          300         200
  Hardening coefficient (GPa)   30          10
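The selection rule of equation (1) - pick, for each coarse element, the candidate refinement that maximizes the decrease of the (squared) interpolation error per added degree of freedom - can be sketched as follows. The candidate list and the interpolation-error values below are hypothetical stand-ins for the projection-based interpolation step, not data from the paper.

```python
# Sketch of the hp mesh-selection rule of equation (1): among candidate
# refinements of one coarse element, choose the one maximizing the drop of
# the squared interpolation error per additional degree of freedom.

def select_refinement(err_current, n_current, candidates):
    """candidates: list of (name, interp_error, n_dof) for possible
    h-, p- or hp-refinements of one coarse element."""
    best, best_rate = None, float("-inf")
    for name, err_new, n_new in candidates:
        if n_new <= n_current:
            continue  # a refinement must add degrees of freedom
        rate = (err_current**2 - err_new**2) / (n_new - n_current)
        if rate > best_rate:
            best, best_rate = name, rate
    return best, best_rate

# Hypothetical data for one element (coarse mesh: error 0.10, 4 dof)
cands = [("h", 0.04, 10), ("p", 0.05, 8), ("hp", 0.02, 16)]
choice, rate = select_refinement(0.10, 4, cands)
```

Here the p-refinement wins: the hp candidate reduces the error most, but at a disproportionate cost in degrees of freedom, which is exactly what the rate criterion penalizes.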

Fig. 2. RVE. Meshes after 40 steps of the original and 20 steps of the modified hp-refinement (NDOF = 21048 and NDOF = 54052; grey scale indicates order of approximation).

Fig. 3. RVE. Convergence of the error norm.

3. MIXED FINITE ELEMENT METHOD

The mixed method enables independent approximation of at least two fields. Such an approximation of stresses is useful in multiscale computations, where homogenization is based on the evaluation of stress in the micro-scale. However, stable mixed finite elements for solid mechanics are very difficult to construct. They have to provide symmetry of the stress tensor and continuity of only the traction across interelement boundaries, rather than of all the stress components. One may use a modified Hellinger-Reissner principle, in which the stress tensor symmetry is enforced in a weak way (Arnold et al., 2007; Qiu & Demkowicz, 2009). The problem has the following form: find σ ∈ H(div, Ω, M), u ∈ L2(Ω, V) and p ∈ L2(Ω, K) such that:

∫_Ω τ : C^-1 σ dΩ + ∫_Ω u · div τ dΩ + ∫_Ω p : τ dΩ = ∫_Γ (τ n) · û ds
∫_Ω v · div σ dΩ + ∫_Ω v · b dΩ = 0                                          (2)
∫_Ω q : σ dΩ = 0

for all τ ∈ H(div, Ω, M), v ∈ L2(Ω, V), q ∈ L2(Ω, K), where M is the space of second order, but not necessarily symmetric, tensors and K is the space of skew-symmetric tensors. An example of a tensor shape function that enables obtaining continuous tractions at every point of the element interfaces may have the following form

χ(x, y) = (1/det J) [ y,ξ   y,η ;  -x,ξ   -x,η ]            (3)

where ξ and η denote the coordinates of the master element, x and y are the coordinates of the physical element, and J stands for the Jacobi matrix of the transformation between those elements. One may also use the formula obtained by the Piola transformation, which gives the following stress shape function possessing the same properties

χ(x, y) = (1/det J) [ x,ξ   y,ξ ;  0   0 ]            (4)

The main difference between both formulas is in the definition of the degrees of freedom (tractions in the normal/tangent or x/y directions). A comparison of the traction component continuity of the approximations by equations (3) and (4) is shown in figure 4.

The RVE with a square-like inclusion in plane strain state was considered as a benchmark for the proposed mixed approximation. A quarter of the domain was taken into account (figure 5) with appropriate boundary conditions. Deformations were only in the elastic range. In this example the inclusion was much weaker than the matrix (Young modulus: 0.002 GPa and 200 GPa, respectively; Poisson ratio for both materials: 0.3). A simulation of tension was performed and in this way the effective material parameters were computed. The convergence of the effective Young modulus obtained by the classical FEM (displacement formulation) and the mixed method (displacement-stress formulation) is compared in figure 6. One may observe that, if we use both methods with a small number of degrees of freedom, we are able to evaluate the effective value with a good accuracy as an average of both solutions.

Fig. 4. Traction components continuity. The arrows along the common edge denote tractions evaluated for the stress fields of adjacent elements along this edge.

Fig. 5. RVE. Boundary conditions. No-penetration boundary conditions were assumed on the left and bottom sides. Zero and constant loading were applied at the top and right edges.

Fig. 6. Effective Young modulus.

4. MODELING ERROR ESTIMATION

Replacement of a heterogeneous body by a homogenized one with effective material parameters introduces an error, related to incomplete information about the microstructure. Thus, it may happen that homogenization should not be used for a certain part of the domain. A global explicit estimate using the homogenized (coarse) elasticity tensor and the actual fine-scale elasticity tensor was proposed by Zohdi et al. (1996). However, it is not able to capture the local error. A scale adaptation strategy developed by Temizer and Wriggers (2011) was used to account for the loss of accuracy in the finite deformation analysis of macrostructures. In that method the adaptation zones, which correspond to regions with high strain gradients, are identified based on a postprocessing step on the homogenized solution. Subsequently, for the critical zones, an exact microstructural representation is introduced, without intermediate models.

Here, other possibilities of modeling error estimation are presented, since this issue is essential for the reliability of the results. One is based on the solution of the heterogeneous problem in selected subdomains with boundary conditions assumed on the basis of the homogenized solution. Such an error estimate consists of a few steps:
1) compute the homogenized solution u0 in the domain Ω,
2) select a part of the body where the error should be estimated and consider the heterogeneous material in this subdomain,
3) solve the boundary value problem for the cut-off heterogeneous subdomain ΩA with boundary conditions resulting from the homogenized solution; obtain the solution uI and consider it in a smaller truncated part ΩB of the selected heterogeneous subdomain,
4) estimate the error between the solution uI and the homogenized one u0 in the subdomain ΩB.

Another possibility is based on the residuum, by analogy to the explicit residual error indicator for the FEM solution (Babuska & Miller, 1987; Babuska & Rheinboldt, 1978). The proposed algorithm of homogenization error estimation may be stated in the following way:
1) compute the effective material parameters for the homogenized domain,


2) solve the homogenized problem with the effective properties to obtain u0,
3) compute the residuum R of the equilibrium equation div σ + b = 0 for the heterogeneous body in each macro-scale finite element, where σ denotes the stress tensor and b stands for the body forces (in fact, R is a distribution; its norm, which we are interested in, may be bounded by the norms of the regular part of R and the jumps of the first derivatives of the solution),
4) compute the jumps of tractions at the interfaces of the finite elements (Je on Γe) and of the material components (Jm on Γm), where Γe denotes a common edge of adjacent elements and Γm stands for the material (m1, m2) interfaces,
5) wherever η = h||R||0 + h^(1/2)||Je||0 + h^(1/2)||Jm||0 is large, homogenization should not be used.

The L-shaped domain in plane strain state was considered to perform numerical tests. The metal matrix was reinforced by cylinder-like inclusions, distributed uniformly. The assumed boundary conditions, as well as the material distribution, are shown in figure 7.
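The subdomain-comparison estimate can be illustrated on a 1D two-phase bar -(a(x)u')' = 1, where the effective coefficient is known to be the harmonic mean of the phase coefficients. The finite-difference discretization and the material layout below are illustrative assumptions for a toy model, not the paper's 2D elastic setup.

```python
import numpy as np

# 1D illustration of the modeling error estimate: solve -(a(x) u')' = 1 on
# (0,1) with u(0)=u(1)=0 for a periodically alternating two-phase bar, then
# compare with the homogenized solution built from the effective coefficient.

def solve_fd(a, f=1.0):
    """Cell-wise constant coefficients a[0..n-1]; returns nodal u[0..n]."""
    n = len(a)
    h = 1.0 / n
    A = np.zeros((n - 1, n - 1))
    b = np.full(n - 1, f)
    for i in range(n - 1):               # balance at interior node i+1
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -a[i + 1] / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return u

n, period = 400, 20                      # 20 periods of a two-phase laminate
a = np.where((np.arange(n) % period) < period // 2, 1.0, 2.0)
u_fine = solve_fd(a)                     # heterogeneous ("exact") solution

a_eff = 1.0 / np.mean(1.0 / a)           # harmonic mean: 1D effective coefficient
x = np.linspace(0.0, 1.0, n + 1)
u_hom = x * (1.0 - x) / (2.0 * a_eff)    # exact solution of -a_eff u'' = 1

# step 4 of the estimate: relative L2-type error (here over the whole bar)
err = np.linalg.norm(u_fine - u_hom) / np.linalg.norm(u_hom)
```

The relative error stays small and shrinks as the number of periods grows; regions where the analogous discrepancy is large are exactly those that step 5 of the residual indicator flags as unsuitable for homogenization.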

Fig. 8. L-shaped domain. Boundary displacements for the cut-off heterogeneous subdomain ΩA resulting from the homogenized solution.

Fig. 9. L-shaped domain with homogeneous and heterogeneous subdomains. New material distribution after excluding, on the basis of the residual error estimate, two subdomains from homogenization, and the corresponding FE mesh (grey scale indicates order of approximation).

5. CONCLUSIONS

The paper presents the application of two efficient numerical techniques, i.e. automatic hp mesh adaptation and mixed approximation, to inelastic two-scale analysis. The prospects of both approaches were positively verified by the solution of selected numerical examples. In order to obtain reliable results, the error introduced by homogenization was estimated, giving information about the quality of the results. The numerical improvements of computational homogenization presented in this paper will be used in further applications of the approach in the modeling of elastic-plastic composites.

Acknowledgment. This research was supported by the National Science Center under grant 2011/01/B/ST6/07312.

REFERENCES
Arnold, D.N., Falk, R., Winther, R., 2007, Mixed finite element methods for linear elasticity with weakly imposed symmetry, Mathematics of Computations, 76, 1699-1723.
Babuska, I., Miller, A., 1987, A feedback finite element method with a posteriori error estimation. Part 1, Comp. Meth. Appl. Mech. Engng, 61, 1-40.
Babuska, I., Rheinboldt, W.C., 1978, Error estimates for adaptive finite element computations, Int. J. Num. Meth. Engng, 12, 1597-1615.

Fig. 7. L-shaped domain. Boundary conditions and material distribution.

A part of the domain (the reentrant corner of the L-shaped domain, figure 8) was selected and the estimation by subdomain solutions was used. The estimated relative error for the displacements is

||uI - u0||0,ΩB / ||u0||0,ΩB = 0.13%

The residual modeling error estimate was also used for this example. The residuum in each macro-element of the homogenized body was calculated and the error indicators were evaluated. The assumed admissible error enables selection of the subdomains which should not be homogenized (figure 9). Automatic hp refinements enable obtaining a mesh that captures all the details of the heterogeneity in the selected subdomains.


Barthold, F., Schmidt, M., Stein, E., 1998, Error indicators and mesh refinements for finite-element-computations of elastoplastic deformations, Computational Mechanics, 22, 225-238.
Cecot, W., 2007, Adaptive FEM analysis of selected elastic-visco-plastic problems, Comp. Meth. Appl. Mech. Engng, 196, 3859-3870.
Cecot, W., Serafin, M., Klimczak, M., 2012, Reliability of computational homogenization, International US-Poland Workshop: Multiscale Computational Modeling of Cementitious Materials, ISBN 978-83-7242-667-3, 183-194.
Demkowicz, L., Rachowicz, W., Devloo, Ph., 2002, A fully automatic hp-adaptivity, Journal of Scientific Computing, 17, 127-155.
Feyel, F., 2003, A multilevel finite element method (FE2) to describe the response of highly non-linear structures using generalized continua, Comput. Methods Appl. Mech. Engrg., 192, 3233-3244.
Gallimard, L., Ladeveze, P., Pelle, J.P., 1996, Error estimation and adaptivity in elastoplasticity, Int. J. Numer. Meth. Engng, 39, 189-217.
Gitman, I., 2006, Representative Volumes and Multi-scale Modelling of Quasi-brittle Materials, PhD thesis, Delft University of Technology.
Kouznetsova, V., Geers, M., Brekelmans, W., 2004, Size of a representative volume element in a second-order computational homogenization framework, International Journal for Multiscale Computational Engineering, 2, 575-598.
Qiu, W., Demkowicz, L., 2009, Mixed hp-finite element method for linear elasticity with weakly imposed symmetry, Comp. Meth. Appl. Mech. Engng, 198, 3682-3701.
Serafin, M., Cecot, W., 2012, Self hp-adaptive FEM for elastic-plastic problems, International Journal for Numerical Methods in Engineering (submitted).
Temizer, I., Wriggers, P., 2011, An adaptive multiscale resolution strategy for the finite deformation analysis of micro-heterogeneous structures, Comput. Methods Appl. Mech. Engrg., 200, 2639-2661.
Zohdi, T.I., Oden, J.T., Rodin, G.J., 1996, Hierarchical modeling of heterogeneous bodies, Comput. Methods Appl. Mech. Engrg., 138, 273-298.

NUMERICAL ASPECTS OF COMPUTATIONAL HOMOGENIZATION

Abstract

Computational homogenization allows a heterogeneous material to be replaced by a homogeneous medium with effective material parameters. The approach is based on an analysis in two scales, micro and macro. In the micro scale the heterogeneous material is considered within a so-called representative volume element (RVE), which is small enough to ensure the separation of scales and, at the same time, large enough to contain the information about all heterogeneities. In the macro scale a homogeneous material is assumed, with the effective material parameters obtained from the RVE analysis. The transfer of information between the scales takes place at selected macro-scale points, associated with independent RVEs. The solution is then approximated in the macro scale. In this way the computation time is reduced; however, the correctness of the obtained results has to be guaranteed. In the micro scale an accurate representation of the microstructure is necessary, and in both scales an optimal number of degrees of freedom. Two efficient numerical techniques were applied in this work, i.e. the hp-adaptive version of the finite element method, which allows reliable results to be obtained in a relatively short time, and a multi-field formulation providing a possibly accurate approximation of the stresses, which are the main goal of the computations. The paper also covers the possibilities of estimating the homogenization error, necessary for determination of the regions where homogenization should not be applied because of a too large error.
Received: September 20, 2012 Received in a revised form: October 31, 2012 Accepted: November 9, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

A POD/PGD REDUCTION APPROACH FOR AN EFFICIENT PARAMETERIZATION OF DATA-DRIVEN MATERIAL MICROSTRUCTURE MODELS
LIANG XIA1,2, BALAJI RAGHAVAN1, PIOTR BREITKOPF1*, WEIHONG ZHANG2

1 Laboratoire Roberval, UMR 7337 UTC-CNRS, UTC, Compiègne, France
2 Engineering Simulation and Aerospace Computing (ESAC), NPU, Xi'an, China
*Corresponding author: piotr.breikopf@utc.fr
Abstract

The general idea here is to produce a high quality representation of the indicator function of the different phases of the material, while adequately scaling the storage requirements for high resolution Digital Material Representation (DMR). To this end, we propose a three-stage reduction algorithm combining Proper Orthogonal Decomposition (POD) and Proper Generalized Decomposition (PGD): first, each snapshot pixel/voxel matrix is decomposed into a linear combination of tensor products of 1D basis vectors. Next, a common basis is determined for the entire set of microstructure snapshots. Finally, the analysis of the dimensionality of the resulting nonlinear space yields the minimal set of parameters needed in order to represent the microstructure with sufficient precision. We showcase this approach by constructing a low-dimensional model of a two-phase composite microstructure.

Key words: parameterization of microstructure, homogenization, voxel approaches, storage costs, material uncertainties

1. INTRODUCTION

The constant increase of computing power, coupled with ever-easier access to high-performance computing platforms, enables the computational investigation of realistic multi-scale problems. On the other hand, the progress in materials science allows us to control the material microstructure composition to an unprecedented extent. In order to accurately predict the performance of structures employing such new materials, it becomes essential to include the effects of the microstructure variation when modeling the structural behavior. The material microstructure varies on a much smaller length scale than the actual structural size. Typical examples include polycrystalline materials, functionally graded materials, porous media, and multiphase polymers. To perform a multi-scale analysis involving

these materials, one must first construct models of the microstructure variations to be used as input in the subsequent parameterized analysis. This analysis may be performed in order to answer questions such as: how does the microstructure affect the structural behavior? What particular microstructure yields the desired performance? How do the inherent material uncertainties propagate to the structural level? Given a set of 2D/3D geometrical instances (snapshots) of the Representative Volume Element (RVE) generated from a priori known information about the feasible microstructure topology, a reduced-order parameterized representation is formulated, which could be directly usable in multi-scale finite element procedures. This problem, similar to those encountered in computer vision, image processing and statistical data analysis, requires the right projection space in which the set of snapshots generates


a low-dimensional manifold that can then be fitted by a parametric hyper-surface. Basically, the parameterized representation of the material microstructure may be split into two major steps:
1. Low-dimensional model construction of the various material properties within the RVE,
2. Using this model as an input to the finite element analysis at the RVE level.
The work introduced in this paper utilizes the available sources of information about the variability of the microstructure to construct a set of possible realizations of the material internal structure and mechanical properties. This initial data may be provided by numerical models (Voronoi diagrams, cellular automata, etc.) or may be experimentally obtained by using imaging techniques: computer tomography (CT), magnetic resonance imaging (MRI), etc. (Ghosh & Dimiduk, 2011). This set of instances, called snapshots, is mapped into a low-dimensional continuous space that spans all the admissible variations permitted by the experimental data. By exploring this low-dimensional equivalent surrogate space, one is essentially sampling over the parameterized space of material topology variations that satisfy the (simulated) experimental data. This low-dimensional representation is subsequently used as an input model in the finite element analysis, either via a mesh or a voxel-based variant of the finite element model at the microstructural scale. The major advantage of this approach is a significant reduction in complexity due to the analysis in a low-dimensional space, and it results in a drastic reduction of the critical memory requirement. This representation may then be used in FE2-type multi-scale analysis (Feyel, 2003), in continuous optimization procedures (Sigmund & Torquato, 1997), or within a stochastic framework to include the effects of the input uncertainties at the material level in order to understand how they propagate and affect the performance of the structure (Velamur Asokan & Zabaras, 2006).
The literature reveals little investigation into developing parameterized material representations. Ganapathysubramanian and Zabaras (2007) used Principal Component Analysis (PCA) to build a data-driven representation of the property variations of random heterogeneous media. A general framework has been proposed by Raghavan et al. (2010) for POD-based shape representation, separating the space variables and the design variables in the shape optimization context. In this paper, an extension of this technique to material microstructures is made by separating the space variables. The overall goal is to linearly scale the storage requirements in order to cope with the ever-increasing resolution of microstructural snapshots. The paper is organized in the following manner: section 2 presents the general description of the overall problem. The POD and PGD-POD approaches are introduced in sections 3 and 4, respectively. Section 5 compares the reconstruction errors obtained using the two approaches, based on the approximation of a defined two-phase periodic microstructure. The paper ends with concluding comments and suggestions for future work.

2. PROBLEM DESCRIPTION

Consider a material sample defined by a real-valued continuous (or N x N discrete) density map s = s(x, y, v), defined over a square, periodic domain Ω = [0,1] x [0,1] and depending on a set of (possibly unknown) parameters (design variables) v ∈ R^p. The problem addressed is how to identify a smallest set of design variables given a set of M snapshots (instances, realizations, samples or images) of the microstructure. The snapshots are given by N x N matrices S_k, k = 1 ... M, such that

S_k(i, j) = s(x_n(i), y_n(j), v_k),   i, j = 1 ... N,   with x_n, y_n
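The snapshot sampling just described can be sketched as follows. The particular density map used here (a two-phase indicator with a circular inclusion of radius v) is a hypothetical stand-in for the actual microstructure data.

```python
import numpy as np

# Sketch of snapshot assembly: sample a (hypothetical) two-phase indicator
# s(x, y, v) on a regular N x N grid for M design-variable values v_k,
# producing the matrices S_k of the problem description.

def density(x, y, v):
    # hypothetical microstructure: circular inclusion of radius v at (0.5, 0.5)
    return ((x - 0.5) ** 2 + (y - 0.5) ** 2 <= v ** 2).astype(float)

N, M = 64, 8
xn = yn = (np.arange(N) + 0.5) / N                     # regular grid of points
X, Y = np.meshgrid(xn, yn, indexing="ij")
v_k = np.linspace(0.1, 0.4, M)                         # design variables v_k
snapshots = np.stack([density(X, Y, v) for v in v_k])  # shape (M, N, N), 0/1
```

Each `snapshots[k]` plays the role of one matrix S_k; flattening it into a column of length N^2 gives the snapshot vectors used by the POD model below.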
defining a regular grid of data points.

3. POD MODEL OF THE MICROSTRUCTURE

Consider a set of discrete 2D snapshots S_k, k = 1 ... M. Each snapshot matrix is stored in a column vector s_k of length N^2. The full set of snapshots is stored in an N^2 x M matrix [s_1 - s̄, ..., s_M - s̄], centered around the mean snapshot s̄ = (1/M) Σ_{k=1...M} s_k. The interpolation may be performed using standard 2D finite element shape functions φ(x, y):

s(x, y, v_k) = φ^T(x, y) s_k            (1)

The snapshot matrix may be decomposed by Singular Value Decomposition (SVD)

[s_1 - s̄, ..., s_M - s̄] = U D V^T            (2)

with the U and V matrices containing respectively the left and right singular vectors. Taking the (reasonable) assumption that M << N^2, we define a projection basis Φ = [Φ_1, ..., Φ_m] = U(1:N^2, 1:m) composed of the first m left singular vectors. An arbitrary centered snapshot is approached by

ŝ_k = s̄ + Φ α_k,   α_k = Φ^T (s_k - s̄)            (3)

The relative Frobenius norm of the reconstruction error of the whole matrix of snapshots is

||[s_1 - ŝ_1, ..., s_M - ŝ_M]||_F / ||[s_1 - s̄, ..., s_M - s̄]||_F = ( Σ_{i=m+1...M} σ_i^2 / Σ_{i=1...M} σ_i^2 )^(1/2)            (4)

so for the dropped modes i, i = m+1, ..., M, the error is given explicitly by the sum of the corresponding diagonal entries σ_i of D, squared. It is thus possible to build a mapping giving, for each microstructure instance generated by a set of design variables v ∈ R^p, a unique image α ∈ R^m. There are, however, two problems to be solved: an arbitrary α ∈ R^m does not necessarily correspond to an admissible v ∈ R^p, and the dimension p is not a priori known. Both problems are addressed here by building a manifold of admissible microstructures. This is done locally for a snapshot by analyzing the local dimensionality of the space spanned by the coefficient vectors α_k. The interpolation of the coefficients is then formulated as a minimization problem constrained by the manifolds of admissible shapes. Combining the above with the spatial interpolation functions, we express s = s(x, y, v) for an arbitrary value of the design variables, not belonging to the initial set, and at any point in Ω, possibly not on the grid. The bi-level representation allows us to separate the space variables x and y from the design variables v. Assuming that the basis vectors, defined at the grid points, are constant and that only the coefficients α depend on the design variables, we rewrite equation (1)

s(x, y, v) = s̄(x, y) + Σ_{i=1...m} φ_i(x, y) α_i(v)            (5)

with s̄(x, y) = φ^T(x, y) s̄ and φ_i(x, y) = φ^T(x, y) Φ_i. The storage requirements for this approach are m N^2 + m M, which may be a problem when the resolution of the grid increases. This is even more critical when extending the approach to 3D with the

ŝ(x, y, v) = s̄(x, y) + Σ_{i=1...m} φ_i(x, y) α_i(v)            (7)

The dimension of E_k is now m_x x m_y, decided by the dimensions of the two bases: m_x of Φ and m_y of Ψ; the process of basis extraction is given afterwards (equations 10-13). For an arbitrary point (x, y) of the snapshot, we can rewrite equation (6)

ŝ(x, y, v_k) = s̄(x, y) + φ^T(x) E_k ψ(y) = s̄(x, y) + Σ_{i=1...m_x} Σ_{j=1...m_y} E_{i,j}(v_k) φ_i(x) ψ_j(y)            (8)

where s̄(x, y) = φ^T(x) S̄ ψ(y), φ_i(x) = φ^T(x) Φ_i and ψ_j(y) = ψ^T(y) Ψ_j. For an arbitrary value of the design and

storage becoming m N 3 m M , which is definitely not scalable as for N = 1024 and M = 100 some 800GB (considering 8 byte floating point numbers) are necessary for the modes only and for N = 4096 and M = 1000, with ~50TB memory which goes well beyond the capacity of current workstations. The second problem concerns the sensitivity of the modes with respect to the order in which matrix terms are rearranged into a vector. Therefore, there is a clear need for an approach not requiring renumbering of the matrix entries and scaling better with increasing resolution. The proposed algorithm is presented in the next section. 2. PGD-POD MICROSTRUCTURAL MODEL WITH SEPARATED SPACE VARIABLES In the previous section, an interpolation form separating design variables from the space coordinates is proposed. In this section, a further separation is performed to the individual space dimensions x and y. Given an N N grid of sampling points and a matrix snapshot of the density map S k i , j s xn i , yn j , v k , i , j 1 N , the spatial interpolation of s x, y , v k is

x, y, v k T x S k y s

(6)

by means of the standard 1D finite element shape functions x and y . Instead of rearranging the matrix snapshots Sk into a column vector sk and performing SVD directly on the data set, the idea is to transform each snapshot Sk to matrix Ek of reduced dimensions in terms of two separate basis and .

k S E k T S

INFORMATYKA W TECHNOLOGII MATERIAW

space variables, the continuous model may now be expressed as

x, y, v s Ei , j vi x j y s
i 1 j 1

mx m y

(9)
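The POD compression of equations (2)-(4) takes only a few lines of numpy. The sketch below is illustrative: random matrices stand in for the vectorized microstructure snapshots and the sizes are arbitrary, but the final check is exact, since the relative reconstruction error of the truncated basis must match the singular-value formula (4).

```python
import numpy as np

# POD sketch for equations (2)-(4): center the snapshots, take the SVD,
# keep m modes, and compare the reconstruction error with the tail of
# the singular-value spectrum. Sizes here are illustrative only.
rng = np.random.default_rng(0)
N, M, m = 32, 20, 5                       # grid size, snapshot count, retained modes

snapshots = rng.random((N * N, M))        # columns are vectorized snapshots s_k
s_bar = snapshots.mean(axis=1, keepdims=True)
A = snapshots - s_bar                     # centered matrix [s_1 - s_bar, ...]

U, sig, Vt = np.linalg.svd(A, full_matrices=False)   # equation (2)
Phi = U[:, :m]                            # basis: first m left singular vectors

alpha = Phi.T @ A                         # coefficients alpha_k, equation (3)
A_hat = Phi @ alpha                       # rank-m reconstruction

# Equation (4): the relative Frobenius error equals the truncated tail of D.
err_direct = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
err_svd = np.sqrt((sig[m:] ** 2).sum() / (sig ** 2).sum())
print(err_direct, err_svd)                # the two values coincide
```

The agreement of the two error values is the Eckart-Young property of the truncated SVD, which is what makes the explicit error control in equation (4) possible.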

Such a 1D approximation in each direction thus transforms every N x N snapshot S_k into an m_x x m_y compressed matrix E_k. The process of extraction of the two separate bases Φ and Ψ is introduced in the following. We start with the truncated SVD decomposition of each individual snapshot

    S_k - S̄ = U_k D_k V_k^T                                                  (10)

where only the first m_k left and right singular vectors corresponding to the m_k largest singular values are calculated. Next, we arrange all sub-matrices U_k(1:N; 1:m_k) and V_k(1:N; 1:m_k) into two N x (m_1 + ... + m_M) matrices

    U* = [U_1(1:N; 1:m_1), ..., U_M(1:N; 1:m_M)]
    V* = [V_1(1:N; 1:m_1), ..., V_M(1:N; 1:m_M)]                              (11)

and we apply SVD separately to U* and V*

    U* = U_U D_U V_U^T   and   V* = U_V D_V V_V^T                             (12)

The two separate bases Φ and Ψ are composed of the first m_x and m_y left singular vectors from U_U and from U_V, respectively

    Φ = [φ_1 ... φ_{m_x}] = U_U(1:N; 1:m_x)   and
    Ψ = [ψ_1 ... ψ_{m_y}] = U_V(1:N; 1:m_y)                                   (13)

Thereafter, the matrices U_k and V_k in equation (10) may be approximated in terms of the two separate bases Φ and Ψ

    Û_k = Φ A_k,  A_k = Φ^T U_k   and   V̂_k = Ψ B_k,  B_k = Ψ^T V_k          (14)

Substituting the above into equation (10), we recover equation (7), where the matrix E_k = A_k D_k B_k^T is the only term depending on the design variables. Once the E_k are obtained, an approximation approach similar to that of Section 3 is followed. Transforming each E_k to a column vector e_k of length m_x m_y, SVD reduction is performed on the full m_x m_y x M matrix [e_1 ... e_M]. Then a new mapping is built between each microstructure v in R^p and the coefficients α in R^m calculated by SVD on [e_1 ... e_M]. The storage requirements for this approach are (m_x + m_y) N + m m_x m_y + m M, which is significantly less than m N^2 + m M. When extending this approach into 3D, the storage requirement would decrease drastically from m N^3 + m M to (m_x + m_y + m_z) N + m m_x m_y m_z + m M. The reconstruction error of all snapshots in this approach is calculated in a similar way as in equation (4). Note that two factors, m_x and m_y in the basis extraction and m in the SVD reduction of [e_1 ... e_M], influence the reconstruction error in this approach.

5. PARAMETERIZATION OF A TWO-PHASE RVE

In this section, a comparison of both proposed approaches is given for a commonly analyzed periodic two-phase microstructure pattern, shown in figure 1. Snapshots of such a pattern could be utilized to model various types of materials; a list is given in table 1. The periodic snapshot is defined by two parameters controlling the radii of two groups of circular inclusions. 500 snapshots of resolution 256x256 are randomly generated for a local approximation.

Table 1. List of possible material types.

    Material Type              Reference
    Porous Aluminum            Kouznetsova et al., 2001
    Reinforced Alloys          Ghosh et al., 2001
    Fiber Composites           Zeman and Sejnoha, 2001
    Quasi-brittle Materials    Nguyen et al., 2010
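A minimal numpy sketch of the basis extraction in equations (10)-(14), with synthetic random snapshots and illustrative sizes; the per-snapshot truncation m_k is simply fixed here, instead of being chosen from a projection-energy criterion as in Section 5:

```python
import numpy as np

# Sketch of the PGD-POD basis extraction, equations (10)-(14).
# All sizes are illustrative; random data stand in for the snapshots.
rng = np.random.default_rng(1)
N, M, mk, mx, my = 64, 10, 8, 6, 6

S = rng.random((M, N, N))                   # snapshot matrices S_k
S_bar = S.mean(axis=0)

Us, Ds, Vs = [], [], []
for k in range(M):                          # equation (10): truncated SVD per snapshot
    U, d, Vt = np.linalg.svd(S[k] - S_bar)
    Us.append(U[:, :mk]); Ds.append(d[:mk]); Vs.append(Vt[:mk].T)

U_star = np.hstack(Us)                      # equation (11): stack the sub-bases
V_star = np.hstack(Vs)

UU, _, _ = np.linalg.svd(U_star, full_matrices=False)   # equation (12)
UV, _, _ = np.linalg.svd(V_star, full_matrices=False)
Phi, Psi = UU[:, :mx], UV[:, :my]           # equation (13): the two separate bases

# Equation (14) and E_k = A_k D_k B_k^T, the only snapshot-dependent term.
E = [(Phi.T @ Us[k]) @ np.diag(Ds[k]) @ (Vs[k].T @ Psi) for k in range(M)]

# Storage counts from the text, for m modes kept in the final SVD on [e_1...e_M].
m = 5
pod_storage = m * N**2 + m * M
pgd_storage = (mx + my) * N + m * mx * my + m * M
print(pod_storage, pgd_storage)             # the separated form is far smaller
```

Even at this toy resolution the separated storage count is more than an order of magnitude below the plain POD count, and the gap widens rapidly as N grows.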

Fig. 1. Two-phase, two-parameter microstructure snapshots.

5.1. POD approach

SVD is performed directly on the data set. From figure 2, it can be seen that the coefficient vectors α form a set of two-dimensional manifolds rather than a cloud of points in 3D space, regardless of the particular triplet of modes used, clearly indicating that the design domain is parameterized by two parameters t_1, t_2, as is known a priori. This means that α_1 = α_1(t_1, t_2), α_2 = α_2(t_1, t_2), .... The surfaces formed by the α's could be interpreted as the set of all possible constraints (direct geometric constraints, technological constraints, etc., that are difficult to express mathematically). Therefore, a new microstructure snapshot can be generated parametrically in a reduced dimension by taking the surface coordinates t_1 and t_2 as design variables.

Fig. 2. 2D α-manifolds for the data set.

5.2. PGD-POD approach

By extracting basis vectors in both directions, each snapshot matrix S is first transformed to a reduced matrix E. For clearer visualization of the reconstruction error in the basis extraction process, we assume m_x = m_y, and the errors versus m_x and m_y are given in figure 3. Three curves correspond to cases in which different numbers of modes are retained after the SVD on each snapshot. The curve with data points marked using circles gives the error when retaining 100% projection energy, i.e., m_1 = m_2 = ... = m_M = 256. The curve with the data points marked using squares is the result of retaining 99.9% projection energy, and the average number of retained modes is about 147. When 99% projection energy is retained, the average number of modes is reduced to 122. Such a reduction makes the SVD in equations (13) and (14) somewhat more effective, as shown by the curve marked with triangles, but an original error of 1% is introduced at the same time (when m_x = m_y = 256).

Fig. 3. Reconstruction errors versus m_x x m_y in three cases.

Note that m_x does not have to equal m_y, especially when anisotropic materials are considered, and the RVE considered here is anisotropic. Figure 4 shows the reconstruction error versus m_x and m_y chosen independently; a further reduction in storage requirement may thus be achieved by choosing m_x and m_y independently. Only the case retaining 99.9% projection energy is considered for each snapshot SVD.

Fig. 4. Reconstruction errors versus m_x and m_y in the case of the square-marked curve in figure 3.

By setting m_x = m_y = 180, the data set of 256x256 matrices S is transformed to reduced 180x180 matrices E with an introduced error of 4%. In the next step, POD is performed on the reduced data set of the E's; the dimension of the data set is reduced from 256^2 x 500 to 180^2 x 500. The manifolds formed by the α's (see figure 5) are similar to those of the previous approach, which indicates that the microstructure has two parameters and also manifests the fact that PGD maintains the interrelationship among snapshots. The microstructure can be parameterized again by taking the surface coordinates t_1 and t_2 as design variables, now within a reduced storage requirement.

Fig. 5. 2D α-manifolds for the reduced data set.

5.3. Comparison of the reconstruction errors

A comparison of the reconstruction errors obtained using the two approaches is given in this section. Figure 6 plots the two error curves against the number of retained modes in both approaches. The curve marked with squares, the error of the POD approach, is calculated by equation (4). The curve marked with triangles, the error of the PGD-POD approach, is calculated similarly, with a presupposition of retaining m_x = m_y = 180 in the PGD process.

Fig. 6. The reconstruction errors of the two approaches.

Figure 6 shows that the two curves match each other, varying in a similar trend, except that the POD curve converges to zero while the PGD-POD curve converges to a value of 4% due to the reduction in the PGD process. If the expected reconstruction error is no less than 10%, then the numbers of modes needed for the two approaches are close to each other. Considering an error of 20%, we retain the first 5 modes for both approaches. The result of the reconstruction is shown in figure 7, where there is no obvious difference between the two reconstructed snapshots. Therefore, the PGD-POD approach can achieve a similar result with a much smaller storage requirement compared to the POD approach.

Fig. 7. Comparison of the reconstructed snapshots, from left to right: original snapshot, POD reconstruction and PGD-POD reconstruction.

6. CONCLUSIONS AND PERSPECTIVES

A three-stage model reduction scheme combining Proper Orthogonal Decomposition (POD) and Proper Generalized Decomposition (PGD) has been developed to build a reduced order model for the efficient parameterization of material microstructures. The proposed model maintains the high quality of the reconstructed microstructure snapshots with a significantly reduced storage requirement compared to the traditional POD model. With a reduced order model of this type, additional investigations may be conducted into the prediction and optimization of material properties using microstructures.

Acknowledgement. The authors acknowledge the support of OSEO in the scope of the FUI OASIS project F1012003Z, Labex MS2T and of the China Scholarship Council.

REFERENCES

Feyel, F., 2003, A multiscale finite element method (FE2) to describe the response of highly non-linear structures using generalized continua, Comput. Methods Appl. Mech. Eng., 192, 3233-3244.
Ganapathysubramanian, B., Zabaras, N., 2007, Modeling diffusion in random heterogeneous media: Data-driven models, stochastic collocation and the variational multiscale method, J. Comput. Phys., 226, 326-353.
Ghosh, S., Dimiduk, D., 2011, Computational Methods for Microstructure-Property Relationships, Springer, New York.
Ghosh, S., Lee, K., Raghavan, P., 2001, A multi-level computational model for multi-scale damage analysis in composite and porous materials, Int. J. Solids Struct., 38, 2335-2385.
Kouznetsova, V., Brekelmans, W.A.M., Baaijens, F.P.T., 2001, An approach to micro-macro modeling of heterogeneous materials, Comput. Mech., 27, 37-48.
Nguyen, V.P., Lloberas-Valls, O., Stroeven, M., Sluys, L.J., 2010, On the existence of representative volumes for softening quasi-brittle materials - a failure zone averaging scheme, Comput. Methods Appl. Mech. Eng., 199, 3028-3038.
Raghavan, B., Breitkopf, P., Villon, P., 2010, POD-morphing, an a posteriori grid parametrization method for shape optimization, Eur. J. Comput. Mech., 19, 671-697.
Sigmund, O., Torquato, S., 1997, Design of materials with extreme thermal expansion using a three-phase topology optimization method, J. Mech. Phys. Solids, 45, 1037-1067.
Velamur Asokan, B., Zabaras, N., 2006, A stochastic variational multiscale method for diffusion in heterogeneous random media, J. Comput. Phys., 218, 654-676.
Zeman, J., Sejnoha, M., 2001, Numerical evaluation of effective elastic properties of graphite fiber tow impregnated by polymer matrix, J. Mech. Phys. Solids, 49, 69-90.

PARAMETRYZACJA CYFROWEJ REPREZENTACJI MIKROSTRUKTURY MATERIAŁU POPRZEZ REDUKCJĘ POD/PGD

Streszczenie (translated from Polish): The general idea of this work is the automatic generation of a precise function representing the topology and geometry of the individual material phases, in order to obtain a computational model with a minimal number of parameters. To this end, we propose a three-stage image reduction algorithm combining the features of the POD and PGD decompositions. In the first stage, the image matrix of a representative volume element is decomposed into a linear combination of tensor products of one-dimensional basis vectors. Next, a common basis is built for the whole set of microstructure images. In the third stage, a dimensionality analysis of the resulting manifold yields the minimal set of parameters needed to represent the microstructure with adequate accuracy. As an example, we present the construction of a low-dimensional model of a two-phase composite microstructure.

Received: October 16, 2012
Received in a revised form: November 8, 2012
Accepted: November 23, 2012

Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

LOCAL NUMERICAL HOMOGENIZATION IN MODELING OF HETEROGENEOUS VISCO-ELASTIC MATERIALS

MAREK KLIMCZAK*, WITOLD CECOT
Cracow University of Technology, ul. Warszawska 24, 31-155 Kraków
*Corresponding author: mklimczak@L5.pk.edu.pl

Abstract

The main objective of this paper is to present the prospects of application of local numerical homogenization to visco-elastic problems. Local numerical homogenization is one of the computational homogenization methods, proposed by Jhurani in 2009 for linear problems. Its main advantage is that it can be used for modeling of heterogeneous materials with neither distinct scale separation nor a periodic microstructure. The main idea of the approach is to replace a group of many small finite elements with one macro element. The coarse element stiffness matrix is computed on the basis of the fine element matrices. In such a way one obtains a coarse mesh approximation of the time-consuming fine mesh solution. In this paper we use the Burgers model to describe inelastic deformations; however, any other constitutive equations may be applied. In the 1D case the Burgers model is interpreted as a combination of springs and dashpots and it is mainly used for bituminous materials (e.g. binders or asphalt mix). Because of rheological effects a transient analysis is necessary. Integration of local numerical homogenization with the Burgers model should improve modeling of heterogeneous visco-elastic materials. The approach we propose can reduce the computational cost of the analysis without deterioration of the modeling reliability. We present numerical results of 1D and 2D analysis for selected problems that provide a comparison between the brute force FEM approach and local numerical homogenization in application to modeling of heterogeneous visco-elastic materials, in order to validate the technique.

Key words: local numerical homogenization, visco-elasticity, Burgers model

1. INTRODUCTION

Most new materials are composites of different kinds. Before their implementation they are thoroughly tested. Numerical tests can significantly reduce the cost of the design process by eliminating some laboratory or in situ experiments. However, numerical modeling of heterogeneous materials is a challenging task, especially in the case of inelastic deformations and non-periodic material microstructure. Brute force FEM analysis, which accounts for all details of the material heterogeneity, is either highly time-consuming or even impossible. Therefore, various approaches to the evaluation of effective

material properties and composite response have been proposed (e.g. Geers et al., 2003; Mang et al., 2008). One of them is computational homogenization. It is used to bridge neighboring analysis scales through the concept of the representative volume element (RVE). This approach was developed, for example, by Geers and his collaborators (Geers et al., 2003). However, we make use of local numerical homogenization, which is not based on the RVE concept (Jhurani, 2009; Jhurani & Demkowicz, 2009; Klimczak & Cecot, 2011) and is presented briefly below.


ISSN 1641-8581


2. LOCAL NUMERICAL HOMOGENIZATION

Local numerical homogenization is one of the computational homogenization methods. It was proposed by Chetan Jhurani (Jhurani, 2009) for linear problems; mainly linear elasticity was discussed. Unlike other computational homogenization methods, local numerical homogenization is not based on the concept of the RVE. The main advantage of this method is that no separation-of-scales condition has to be fulfilled. It means that the ratio of the microscale characteristic dimension to the macroscale characteristic dimension does not have to be much smaller than unity. Moreover, periodicity of the material is not required. Therefore this method is suitable for modeling asphalt pavement structures, which are the subject of our interest. Local numerical homogenization is thoroughly presented in Jhurani's dissertation (Jhurani, 2009). We would like to give an overview of this method and its main steps in the context of linear problems.

The algorithm of the method consists of the following steps:
- assume trial effective material properties of the analyzed heterogeneous domain,
- solve the auxiliary coarse mesh problem (it is advised to use adaptive FEM, but it is not obligatory),
- refine the coarse mesh within its every element in order to match all the heterogeneities (the fine and coarse meshes are then naturally compatible),
- find the coarse mesh element effective matrices knowing the fine mesh element matrices,
- assemble the coarse mesh element effective matrices,
- solve the coarse mesh problem.

The core of the algorithm is the evaluation of the effective coarse mesh element matrices. Let us focus on a single coarse mesh element of the analyzed domain. Then we refine the mesh within this coarse mesh element to capture all details of the heterogeneity. For a stiffness matrix K in R^{N x N} and load vector f in R^N, the fine mesh local FEM equation is:

    K u = f                                                        (1)

Its solution is equal to:

    u = K^+ f + u_0                                                (2)

where K^+ denotes the Moore-Penrose pseudoinverse of K and u_0 is an arbitrary vector in the null space of K. Analyzing the same problem at the macroscale level, we use an effective stiffness matrix K̂ in R^{M x M} (M << N) and a coarse scale load vector defined in terms of f as f̂ = A^T f (A in R^{N x M} being a chosen interpolation operator for the respective element). The coarse scale solution is expressed in the following way:

    û = K̂^+ f̂ + û_0                                             (3)

where K̂^+ denotes the Moore-Penrose pseudoinverse of K̂, and û_0 is an arbitrary vector in the null space of K̂. The difference between (2) and (3) is equal to:

    u - A û = (K^+ - A K̂^+ A^T) f + (u_0 - A û_0)                (4)

Thus, we can express, up to a constant, the error e in R^N as:

    e = (K^+ - A K̂^+ A^T) f                                       (5)

Finally, minimization of the above expression (enhanced with a regularization term) with respect to K̂ (Jhurani, 2009) leads to the effective coarse mesh element stiffness matrix K̂. This routine needs to be repeated for every coarse mesh element. Then the coarse mesh problem can be solved in the standard way.

3. BURGERS MODEL

The visco-elastic Burgers model is commonly used for modeling of bituminous materials. Its 1D scheme is presented in figure 1. It is a material model which efficiently simulates all of the most important response characteristics of bituminous materials, i.e. the elastic, viscous, and visco-elastic ones. Additionally, it may be easily implemented numerically. The total strain increment Δε in the Burgers model is the sum of the elastic (Δε^E), visco-elastic (Δε^{VE}) and viscous (Δε^V) strain increments:

    Δε = Δε^E + Δε^{VE} + Δε^V                                     (6)

Fig. 1. Burgers model.
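The Burgers chain of figure 1, a Maxwell element in series with a Kelvin-Voigt element, can be advanced in time with a simple stress-driven update loop. The parameters below are assumed round numbers (not the data used in the tests of this paper), and the load program mimics a constant stress held for 15 s and then removed:

```python
import numpy as np

# Incremental 1D Burgers response: Maxwell spring E1 + dashpot eta1 in
# series with a Kelvin-Voigt element (E2, eta2). All constants are
# assumed, illustrative values, not the parameters used in the paper.
E1, eta1 = 50.0, 500.0     # Maxwell spring [MPa] and dashpot [MPa*s]
E2, eta2 = 20.0, 100.0     # Kelvin-Voigt spring and dashpot

sigma0, t_off = 0.3, 15.0  # applied stress [MPa], removal time [s]
dt, t_end = 0.1, 60.0

t = 0.0
eps_v = 0.0                # viscous (permanent) strain
eps_k = 0.0                # visco-elastic Kelvin strain
history = []
while t < t_end:
    sigma = sigma0 if t <= t_off else 0.0
    eps_v += sigma / eta1 * dt               # dashpot flows only under load
    # exact exponential update of the Kelvin element over one step
    eps_inf = sigma / E2
    eps_k = eps_inf + (eps_k - eps_inf) * np.exp(-E2 * dt / eta2)
    eps = sigma / E1 + eps_k + eps_v         # total strain: elastic + VE + V, cf. (6)
    history.append((t, eps))
    t += dt

print(history[-1][1], eps_v)
```

After unloading, the elastic strain vanishes immediately, the Kelvin strain recovers exponentially, and the viscous strain remains as a permanent deformation, which is the qualitative behaviour expected of a bituminous material.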

All of the above increments are presented in detail by Collop et al. (2003) for both the 1D and 3D case. The algorithm for time integration of the Burgers model is as follows:
- evaluate the elastic solution as a trial one for a time step t_i,
- calculate the inelastic strain increments,
- update the load vector considering the impact of the inelastic strain increments,
- solve the problem and calculate the total strain increment,
- if the difference between the updated total strain increment and the trial one is negligible, and the difference between the updated solution and the trial one is also negligible, go to the next time step; otherwise go to the first iteration step.

4. INTEGRATION OF LOCAL NUMERICAL HOMOGENIZATION WITH THE BURGERS MODEL

Modeling of heterogeneous visco-elastic materials also requires time-consuming transient analysis. In this section we present the prospects of local numerical homogenization in application of the visco-elastic Burgers model. The analysis becomes much more complex, as we have to homogenize at every time step.

The algorithm of the proposed approach for a known load history and known constituent characteristics is as follows (for each time step):
- solve the elastic problem using local numerical homogenization according to the routine presented in Section 2,
- consider each coarse mesh element to be an independent problem: refine the mesh within this element, assume boundary conditions on the basis of the elastic solution and solve this local problem at time t_i,
- update the coarse mesh load vector considering the inelastic contribution,
- assemble the coarse mesh element matrices and updated load vectors,
- solve the coarse mesh problem.

The whole routine then requires solving several local visco-elastic problems instead of the global problem.

5. NUMERICAL RESULTS

In this section we present preliminary results of 1D and 2D numerical tests for visco-elastic materials. The analyzed 1D domain is presented in figure 2. All material data (for the 'white' material) are the same as for the test performed by Woldekidan (2011). The 'black' material is characterized by two times weaker parameters. The cross-sectional area is equal to 50 cm2. The analysis period is equal to 60 s. The load P is equal to 1.5 kN for t <= 15 s; then it is removed. Results for an arbitrary time step t_i are presented in figure 3. The whole domain was discretized by 10 fine mesh elements and 5 coarse mesh elements. Thus, two fine mesh elements were homogenized into one coarse mesh element. The distribution of inclusions is periodic for the sake of simplicity.

The 2D analysis was carried out for the domain presented in figure 4. It is a 2 m by 4 m rectangle analyzed in the plane strain state. Its bottom edge is fixed; the left and right hand side edges can displace only in the vertical direction. A uniformly distributed tensile load (1 kN/m2) is applied to the upper edge. Material data were assumed in the same manner as for the 1D test. The Poisson ratio both for the inclusion and the matrix is equal to 0.3.
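To make the coarse-matrix identification step concrete, the error measure of equations (2)-(5) can be evaluated on a small 1D spring chain echoing the 1D test above (10 fine elements coarsened to 5 coarse elements, the weaker springs half as stiff). This is an illustrative sketch under stated assumptions: the candidate coarse matrix is taken as the Galerkin projection A^T K A, whereas the actual method obtains K̂ by minimizing the norm of e with an added regularization term.

```python
import numpy as np

# Illustrative 1D check of the error measure e = (K^+ - A K_hat^+ A^T) f
# from equations (2)-(5). 10 fine spring elements (stiffnesses alternating
# 2 and 1, echoing the 'white'/'black' materials) are coarsened to 5
# coarse elements. K_hat below is the Galerkin projection A^T K A, used
# only as a candidate; it is NOT the minimizer of ||e|| from the paper.
def spring_stiffness(k_vals):
    """Assemble the (singular, free-free) stiffness of a spring chain."""
    n = len(k_vals) + 1
    K = np.zeros((n, n))
    for i, k in enumerate(k_vals):
        K[i:i + 2, i:i + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    return K

K = spring_stiffness(np.array([2.0, 1.0] * 5))     # fine matrix, N = 11 dofs
N, Mc = K.shape[0], 6                              # 6 coarse nodes = 5 coarse elements

# Interpolation operator A (N x Mc): hat functions on the coarse nodes.
x_fine = np.linspace(0.0, 1.0, N)
x_coarse = np.linspace(0.0, 1.0, Mc)
A = np.array([[max(0.0, 1.0 - abs(xf - xc) * (Mc - 1)) for xc in x_coarse]
              for xf in x_fine])

K_hat = A.T @ K @ A                                # candidate effective coarse matrix
f = np.zeros(N); f[0], f[-1] = -1.0, 1.0           # self-equilibrated end loads
f_hat = A.T @ f                                    # coarse load vector, A^T f

# Equation (5), with Moore-Penrose pseudoinverses for the singular matrices.
e = np.linalg.pinv(K) @ f - A @ (np.linalg.pinv(K_hat) @ f_hat)
print(np.linalg.norm(e))                           # small relative to the fine response
```

Even this naive candidate makes the error norm a small fraction of the fine-scale response norm; the minimization of Jhurani (2009) would reduce it further.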


Fig. 2. Analyzed heterogeneous visco-elastic 1D domain.


Fig. 3. 1D example. Displacements at an arbitrary time t_i.

Fig. 4. Analyzed 2D domain with randomly distributed inclusions.

Vertical displacements of the upper edge are presented in figure 5.

Fig. 5. 2D example. Vertical displacements along the upper edge.


6. CONCLUSIONS

Summing up, we can conclude that for the tests presented in the paper, the integration of local numerical homogenization with visco-elastic material models:
- significantly reduced the computational cost of the numerical analysis,
- did not introduce significant additional error into the solution.
These results encourage us to perform further tests and to develop an effective algorithm for analyses of heterogeneous visco-elastic materials.

REFERENCES
Collop, A.C., Scarpas, A.T., Kasbergen, C., de Bondt, A., 2003, Development and finite element implementation of a stress dependent elasto-visco-plastic constitutive model with damage for asphalt, Transportation Research Record, 1832, 96-104.
Geers, M., Kouznetsova, V., Brekelmans, W., 2003, Multi-scale first-order and second-order computational homogenization of microstructures towards continua, International Journal for Multiscale Computational Engineering, 1, 371-386.
Jhurani, C.K., Demkowicz, L., 2009, ICES Reports 09-34, 09-36, The University of Texas at Austin (http://www.ices.utexas.edu/research/reports/).
Jhurani, C.K., 2009, Multiscale Modeling Using Goal-oriented Adaptivity and Numerical Homogenization, PhD thesis, The University of Texas, Austin.
Klimczak, M., Cecot, W., 2011, Local homogenization in modeling heterogeneous materials, Czasopismo Techniczne, R. 108, z. 1-B, Wydawnictwo Politechniki Krakowskiej, 87-94.
Mang, H.A., Aigner, E., Eberhardsteiner, J., Hackspiel, C., Hellmich, Ch., Hofstetter, K., Lackner, R., Pichler, B., Scheiner, St., Stürzenbecher, R., 2008, Computational Multiscale Analysis in Civil Engineering, Proc. Conf. The Eleventh East Asia-Pacific Conference on Structural Engineering and Construction, D & E Drawing and Editing Services Company, Taipei, 3-14.
Woldekidan, M.F., 2011, Response Modeling of Bitumen, Bituminous Mastic and Mortar, PhD thesis, Technische Universiteit Delft.

LOKALNA HOMOGENIZACJA NUMERYCZNA W MODELOWANIU NIEJEDNORODNYCH MATERIAŁÓW LEPKOSPRĘŻYSTYCH

Streszczenie (translated from Polish): The main aim of this paper is to present the possibilities of using local numerical homogenization for visco-elastic problems. Local numerical homogenization is one of the computational homogenization methods. It was proposed by C. Jhurani in 2009 for linear problems. Its main advantage is that it can be used to model heterogeneous materials that exhibit neither a distinct separation of scales nor a periodic microstructure. The main feature of this approach is that the homogenization is performed after the discretization of the analyzed domain. The key step of the algorithm is the replacement of a group of fine mesh elements with a single coarse mesh element. Finally, it is sufficient to solve the problem on the domain discretized with the coarse mesh instead of the fine one. In this paper we use the Burgers model to describe visco-elastic deformations; however, other constitutive equations describing the behavior of the material in time may be applied. In the 1D case the Burgers model is interpreted as a combination of springs and dashpots. It is used mainly to model the behavior of bituminous materials (e.g. asphalt binders or asphalt mixtures). Because of the rheology of the problem, a transient analysis is necessary. This considerably increases the computation time, since the problem must be solved at every time instant and the algorithm is iterative. The integration of local numerical homogenization with the Burgers model can improve the modeling of heterogeneous visco-elastic materials. The approach we propose can reduce the computation time without deteriorating the reliability of the modeling. We present the results of 1D and 2D computations for selected problems. They are compared with the results of the brute force approach, i.e. FEM computations performed with the material microstructure fully resolved. The comparisons show that the proposed approach can be successfully used for modeling heterogeneous visco-elastic materials, as it does not introduce significant additional error into the solution while reducing the computational cost.
Received: September 20, 2012 Received in a revised form: November 4, 2012 Accepted: November 21, 2012


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR LARGE STRAIN DAMAGE-PLASTICITY MODEL

BALBINA WCISŁO*, JERZY PAMIN
Institute for Computational Civil Engineering, Cracow University of Technology, Warszawska 24, 31-155 Cracow, Poland
*Corresponding author: bwcislo@L5.pk.edu.pl

Abstract

This paper deals with the phenomenon of strain localization in nonlinear and nonlocal material models. In particular, the description of the material includes not only nonlinear constitutive relations (damage, plasticity) but also geometrical nonlinearities (large strains). The strain localization in the analysed model has a twofold source: geometrical effects (necking) and softening due to damage of the material. To avoid pathological mesh sensitivity of numerical test results, gradient averaging is applied in the damage model. The material description is implemented within the finite element method and numerical simulations are performed for a uniaxial tensile bar benchmark. Selected results are presented for the standard and regularized continuum.

Key words: strain localization, large strain, damage, plasticity, gradient enhancement, AceGen package

1. INTRODUCTION

One of the features of materials with microstructure (e.g. concrete or composites) is strain localization, which is closely related to the softening of the material. Localization means that from some point the whole deformation concentrates in a narrow zone while the major part of the structure experiences unloading. The strain localization has a twofold source: geometrical effects (e.g. necking of metallic bars) or material instabilities (e.g. micro-cracking or non-associated plastic flow). Although the softening and the localized deformations are visible in the macroscopic material response, they have their physical origin in the evolution of the microstructure. The application of standard continuum models to these problems fails to provide an objective description of the phenomena. For a descending stress-strain relationship (e.g. due to damage of the material) the equilibrium equations lose their ellipticity in the post-peak regime. This leads to an ill-posed boundary value problem that entails a pathological mesh sensitivity of the numerical solution. The reason for the discretization dependence in computational tests is that the localization is simulated in the smallest possible material volume, which depends on the assumed mesh. The localization phenomenon associated with the softening response can be properly reproduced using enhanced continuum theories which have a non-local character and take into account higher deformation gradients in the constitutive description (Peerlings et al., 1996). The gradient averaging involves an internal length scale, which is an additional material parameter coming from the microstructure. The parameter is usually associated with the width of the localization band and is determined, for instance, by an average grain size. The paper includes the description of the material model which incorporates hyperelasticity, plasticity (with or without hardening) based on Auricchio and Taylor (1999) and gradient-enhanced damage. The analysis is performed with the assumption of large deformations and isothermal conditions. The implicit gradient model is incorporated, which is reflected in an additional partial differential equation to be solved. The paper presents the results of a computational test of localization in a tensile bar. The simulations are performed using the Mathematica-based package AceGen/AceFEM (Korelc, 2009). The application of the automatic code generator AceGen significantly simplifies the implementation of the elaborated models and, due to automatic computation of derivatives, allows one to avoid an explicit derivation of the tangent matrix for the Newton-Raphson procedure.

2. SHORT PRESENTATION OF THE MATERIAL MODEL

The following simulations are performed for a material model which involves hyperelasticity, plasticity and damage and takes into account large strains. However, damage is not directly coupled with plasticity; thus, depending on the assumed material parameters, the model can also reproduce hyperelasticity-plasticity or hyperelasticity-damage. The material description is developed with the assumption of isotropy and isothermal conditions and is based on the classical multiplicative split of the deformation gradient into its elastic and plastic parts: F = F^e F^p (Simo and Hughes, 1998). The free energy function in the presented model is assumed to be an isotropic function of the elastic left Cauchy-Green tensor b^e = F^e (F^e)^T, a scalar measure of plastic flow and a scalar damage parameter ω:

e e b be

(3)

Damage is understood in the described model as a degradation of the elastic free energy function in the form:

e ,d (1 ) e

(4)

where is a scalar damage variable which grows from zero for the intact material to one for a complete material destruction, and which is computed from the damage growth function. In the following numerical simulations the exponential model adopted from Mazars and Pijaudier-Cabot (1989) is applied:

( ) 1

0 1 exp( ( 0 )

(5)

where and are model parameters, is a history ~ , ), where ~ = variable calculated as max( 0 det(F) - 1 is a deformation measure which governs damage and 0 is a damage threshold. The damage condition takes the form:
~ , ) = ~ -0 Fd (

(6)

For Fd < 0 there is no growth of damage. The plastic part of the presented model is described in the effective stress space which means that it governs the behaviour of the undamaged skeleton of the material. Thus the following formulation takes into account the effective Kirchoff stress ten = /(1 - ). The plastic sor which is computed as: regime is defined through the yield function Fp which is an isotropic function of the effective Kirch and the plastic multiplier : hoff stress tensor
, ) = f( ) Fp(

COMPUTER METHODS IN MATERIALS SCIENCE

2 / 3 (y - q( )) 0

(7)

(1 ) e (b e ) p ( )

(1)

)is assumed to be the HuberThe function f(

The constitutive relations of hyperelasticity are expressed through the elastic part of the free energy function which is assumed in the following form (Simo and Hughes, 1998):

Mises-Hencky equivalent stress f 2 J 2 , which depends on the second invariant of the deviatoric part of the effective Kirchhoff stress tensor t : 1 J 2 t 2 . The function q represents the isotropic 2 linear hardening as: q( ) = -h, where h is a hardening modulus. The associative flow rule is assumed in the form:

1 1/3 e b ) 3 ( J be 1) ln( J be ) tr( J be 2 2 2 2 (2)

where Jbe is the determinant of the elastic left Cauchy-Green tensor and and are material parameters. The Kirchhoff stress tensor is related to the elastic left Cauchy-Green tensor be with the formula:

1 Nbe Lv be 2

(8)

232


where L_v b^e is the Lie derivative of b^e (Bonet and Wood, 2008) and N is the normal to the yield hypersurface.

3. GRADIENT-TYPE REGULARIZATION

A variety of approaches can be applied to protect numerical results from the pathological mesh-sensitivity observed for a standard continuum model reproducing the behaviour of materials exhibiting softening, see e.g. Areias et al. (2003). In this paper a gradient regularization is applied, which is not only a computationally convenient approach but is also motivated by micro-defect interactions. The introduction of the gradient enhancement into the material description requires the choice of a non-local parameter and the formulation of a corresponding averaging equation. In the literature different variables to be averaged are taken into account, e.g. the stored energy function (Steinmann, 1999) or the plastic strain measure (Żebro et al., 2008). In the described model, the local strain measure governing damage κ̃ is replaced with its non-local counterpart κ̄, which is specified by the averaging equation:

κ̄ − l² ∇²κ̄ = κ̃   (9)

with homogeneous natural boundary conditions. The parameter l appearing in equation (9) is a material-dependent length parameter called the internal length scale. The application of the gradient averaging to a material model including large strains is additionally difficult due to the distinction between the undeformed and deformed configurations. Thus, the averaging equation and the internal length scale l can be specified in the initial or the current configuration. Based on the results obtained by Steinmann (1999) and Wcisło et al. (2012), which show that the spatial averaging does not fully protect the numerical results from the dependence on the discretization, the material averaging is chosen for the following simulations.

4. NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION

In this section numerical examples of strain localization in the hyperelastic-plastic-damage model are presented. Firstly, the results for material softening due to damage are presented for the standard and the regularized continuum. In the second subsection the results for strain localization due to geometrical softening in plasticity are discussed. Finally, the complex model of hyperelasticity-plasticity coupled with gradient damage is considered. All model variants have been implemented in the Mathematica-based packages AceGen and AceFEM (Korelc, 2009). The former is an automatic code generator used for the preparation of finite element code, whereas the latter is a FEM engine.

4.1. Hyperelasticity coupled with damage

Fig. 1. Geometry and boundary conditions of a bar with imperfection.

The simulations of the material model including hyperelasticity coupled with local damage are performed for the tensile bar with imperfection presented in figure 1. The enforced displacement and the boundary conditions preserve the uniaxial stress state. The material parameters applied in the simulations are: E = 200 GPa, ν = 0.3, κ_0 = 0.011, α = 0.99, η = 1. In the central zone the damage threshold is assumed to be κ_0 = 0.01. The results of the computational test performed for two FE discretizations with linear interpolation, 20x2x2 and 40x4x4, are depicted in the first graph of figure 2. The computations for the finest mesh 80x8x8 fail under displacement control (snap-back occurs). We can observe in figures 2 and 3 that the results differ significantly for each discretization and that the zone of strain localization covers only the middle rows of elements.

The next test is performed for the same sample, but the hyperelastic model is coupled with gradient damage. The internal length parameter is assumed to be l = 3 mm. The diagram of the sum of reactions is presented in the second graph of figure 2. It can be noticed that the diagrams are close to one another, especially for the medium and the fine mesh. The deformed mesh with the damage variable distribution is presented in figure 4. The simulation confirms that for the gradient model the behaviour of the sample does not depend on the adopted mesh. The width of the damage zone, which is related to the internal length parameter, is similar for each discretization.
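As an illustration, the exponential damage growth law of equation (5) with the parameters used here (κ_0 = 0.01 in the weakened zone, α = 0.99, η = 1) can be sketched in a few lines of Python. This is a stand-alone sketch, not the authors' AceGen code, and it assumes the standard Mazars/Peerlings form of the exponential law:

```python
import math

def damage(kappa, kappa0=0.01, alpha=0.99, eta=1.0):
    """Exponential damage growth function of eq. (5):
    omega(kappa) = 1 - (kappa0/kappa)*(1 - alpha + alpha*exp(-eta*(kappa - kappa0))).
    Below the threshold kappa0 the material is intact (omega = 0)."""
    if kappa <= kappa0:
        return 0.0
    return 1.0 - (kappa0 / kappa) * (1.0 - alpha + alpha * math.exp(-eta * (kappa - kappa0)))

def update_history(kappa, kappa_tilde):
    """History variable: the maximum of the strain measure
    kappa_tilde = det(F) - 1 reached so far, cf. the damage condition (6)."""
    return max(kappa, kappa_tilde)

# monotonic tension: the history variable grows and omega tends towards 1
kappa = 0.01
for kappa_tilde in (0.02, 0.05, 0.2):
    kappa = update_history(kappa, kappa_tilde)
omega = damage(kappa)  # about 0.96 for kappa = 0.2
```

With α close to 1 almost the entire elastic energy can be degraded, while η controls how quickly ω approaches that limit after the threshold is passed.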

a)

b)

Fig. 2. Sum of reactions vs. displacement for a) the hyperelastic-damage model and b) the hyperelastic-gradient-damage model.

Fig. 3. Evolution of Green strain Exx for meshes 20x2x2 and 40x4x4 (hyperelastic-damage model).

Fig. 4. Deformed mesh and distribution of the damage variable for three discretizations (hyperelastic-gradient-damage model).
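The mesh-independent damage zone obtained above comes from the averaging equation (9). Its effect can be illustrated in one dimension: κ̄ − l²κ̄'' = κ̃ with zero-flux boundary conditions smears a local strain spike over a band whose width is governed by l. The sketch below is illustrative only (the paper solves the 3D problem with finite elements, not finite differences); it discretizes the 1D equation and solves the tridiagonal system with the Thomas algorithm:

```python
def gradient_average(kappa_local, dx, l):
    """Solve kappa_bar - l**2 * kappa_bar'' = kappa_local on a uniform 1D
    grid with zero-flux (homogeneous Neumann) boundary conditions,
    using the Thomas algorithm for the resulting tridiagonal system."""
    n = len(kappa_local)
    r = (l / dx) ** 2
    a = [-r] * n             # sub-diagonal
    b = [1.0 + 2.0 * r] * n  # main diagonal
    c = [-r] * n             # super-diagonal
    c[0] = -2.0 * r          # mirror ghost node at the left boundary
    a[-1] = -2.0 * r         # mirror ghost node at the right boundary
    d = list(kappa_local)
    for i in range(1, n):    # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n            # back substitution
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

# a local strain spike is smeared over a band whose width is set by l
kappa_tilde = [0.0] * 10 + [1.0] + [0.0] * 10
kappa_bar = gradient_average(kappa_tilde, dx=1.0, l=3.0)
```

The discrete operator reproduces constant fields exactly, so a homogeneous κ̃ is left unchanged, while the spike above is replaced by a smooth, lower peak spread over several nodes; this is the mechanism that prevents localization in a single row of elements.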

4.2. Hyperelasticity coupled with plasticity

Firstly, the test is performed for an ideal bar with a constant square cross-section along the length. The dimensions and the boundary conditions of the sample are the same as in the previous section and three meshes are taken into account: 20x2x2, 40x4x4 and 80x8x8 elements. The material parameters applied in the test are as follows: E = 200 GPa, ν = 0.3, σ_y = 300 MPa (perfect plasticity). The Huber-Mises-Hencky yield criterion is applied and the F-bar approach is incorporated to avoid locking (de Souza Neto et al., 2008).

The graph of the sum of reactions vs. end displacement is depicted in the first diagram of figure 5. It can be observed that in the plastic regime the diagram is descending although ideal plasticity is assumed. This is caused by taking into account the change of the cross-section during deformation. For all discretizations a loss of stability can be observed: for the coarse mesh the phenomenon occurs the earliest, whereas for the medium and the fine mesh the necking occurs at the same time. Figures 6 and 7 present the deformed sample with the final accumulated plastic strain distribution and the evolution of the Green strain Exx along the bar length, respectively. It can be noticed that the loss of stability manifests itself in multiple necking. For each discretization the number and the arrangement of the strain localization zones are different.



The test for the same sample, but with an imperfection assumed in the middle of the bar, is also performed. The imperfection is prescribed as a reduction of the yield stress to the value σ_y = 290 MPa. The diagram of the sum of reactions is presented in the second graph of figure 5. We can observe that the results depend on the adopted finite element mesh: the finer the discretization, the less stiff the model. In figure 8 it can be noticed that the strains localize in the middle part of the sample, where the imperfection is assumed.
a)

Fig. 7. Evolution of Green strain Exx along the bar for three meshes and the ideal bar (hyperelastic-plastic model).

b)

4.3. Hyperelasticity-plasticity coupled with gradient damage

Fig. 5. Sum of reactions vs. displacement for a) the ideal bar and b) the bar with imperfection (hyperelastic-plastic model).

Fig. 6. Final accumulated plastic strain for three meshes and the ideal bar (hyperelastic-plastic model).



It should also be mentioned that the same results are obtained for hyperelasticity-plasticity with hardening when the hardening modulus has a sufficiently small value, for example 0.5%E. It seems that, even in the absence of damage, regularization is necessary for large deformations with ideal plasticity or small hardening.

The last test is performed for the material model which includes both plasticity and gradient damage. The tested sample is the bar with imperfection, with the following material parameters: E = 200 GPa, ν = 0.3, κ_0 = 0.0002, α = 0.99, η = 1, σ_y = 300 MPa, h = 1%E. The imperfection is the same as in the previous subsection. Figures 9 and 10 present selected results of the simulation. The reaction diagrams are close for each discretization and show the plastic regime with hardening and the reduction of the reaction forces due to damage. In the analysed test the material softening is reproduced properly due to the gradient regularization, and geometrical softening does not occur because of the sufficiently large value of the hardening modulus.


Fig. 8. Final accumulated plastic strain for the imperfect bar and three discretizations (hyperelastic-plastic model).

Fig. 9. Sum of reactions vs. displacement for the hyperelasticity-plasticity-gradient-damage model.

Fig. 10. Deformed mesh and damage variable distribution for discretizations 20x2x2 and 40x4x4 (hyperelasticity-plasticity-gradient-damage model).

5. CONCLUSIONS

In the paper the problem of strain localization for a material model including geometrical and material nonlinearities, with the applied gradient regularization, has been outlined. The considered model is briefly described and selected numerical results are presented. The simulations are performed for different variants of the model and exhibit geometrical or material softening. The hyperelastic-damage model exhibits material softening, which can cause the mesh-sensitivity observed in the presented simulation results. The gradient averaging procedure, incorporating an internal length parameter, allows one to properly reproduce the material behaviour. The numerical tests reveal that for a model incorporating ideal plasticity in the large strain regime strain localization might occur. For a sample with imperfection one zone of large strains can be predicted, in contrast to the ideal sample, where multiple necks are formed. To protect the numerical results from a pathological mesh-dependence, the application of a regularization of the plastic part of the model should be considered in future work. Moreover, the work is planned to be extended towards thermo-mechanical coupling.

Acknowledgments. The authors acknowledge fruitful discussions on the research with Dr K. Kowalczyk-Gajewska from IFTR PAS, Warsaw, Poland. The research has been carried out within contract L-5/66/DS/2012 of Cracow University of Technology, financed by the Ministry of Science and Higher Education.

REFERENCES

Areias, P., Cesar de Sa, J., Conceicao, C., 2003, A gradient model for finite strain elastoplasticity coupled with damage, Finite Elements in Analysis and Design, 39, 1191-1235.
Auricchio, F., Taylor, R. L., 1999, A return-map algorithm for general associative isotropic elasto-plastic materials in large deformation regimes, Int. J. Plasticity, 15, 1359-1378.
Bonet, J., Wood, R. D., 2008, Nonlinear continuum mechanics for finite element analysis, Cambridge University Press, Cambridge.
de Souza Neto, E., Peric, D., Owen, D., 2008, Computational methods for plasticity. Theory and applications, John Wiley & Sons, Ltd, Chichester, UK.
Korelc, J., 2009, Automation of primal and sensitivity analysis of transient coupled problems, Computational Mechanics, 44, 631-649.
Mazars, J., Pijaudier-Cabot, G., 1989, Continuum damage theory - application to concrete, ASCE J. Eng. Mech., 115, 345-365.
Peerlings, R., de Borst, R., Brekelmans, W., de Vree, J., 1996, Gradient-enhanced damage for quasi-brittle materials, Int. J. Numer. Meth. Engng, 39, 3391-3403.
Simo, J. C., Hughes, T. J. R., 1998, Computational Inelasticity. Interdisciplinary Applied Mathematics Vol. 7, Springer-Verlag, New York.
Steinmann, P., 1999, Formulation and computation of geometrically non-linear gradient damage, Int. J. Numer. Meth. Engng, 46, 757-779.
Wcisło, B., Pamin, J., Kowalczyk-Gajewska, K., 2012, Gradient-enhanced model for large deformations of damaging elastic-plastic materials, Arch. Mech. (to be published).
Żebro, T., Kowalczyk-Gajewska, K., Pamin, J., 2008, A geometrically nonlinear model of scalar damage coupled to plasticity, Technical Transactions, 20/3-, 251-262.

NUMERICAL SIMULATIONS OF STRAIN LOCALIZATION FOR A DAMAGE MODEL COUPLED WITH PLASTICITY AT LARGE STRAINS

Summary

The article concerns the phenomenon of strain localization in nonlinear and nonlocal material models. In particular, the presented material description contains not only nonlinear constitutive relations (damage, plasticity) but also geometrical nonlinearities (large strains). Strain localization in the analysed model has a twofold source: geometrical effects (necking) and softening caused by damage of the material.

The application of standard continuum models does not lead to a correct simulation of the behaviour of materials with softening. This is caused by the loss of ellipticity of the equilibrium equations when the stress-strain relationship enters a descending path. In such a case the strains localize in the smallest possible material volume, which in a numerical simulation is determined by the finite element size. To avoid a pathological dependence of the numerical test results on the discretization, a suitable regularization must be applied. In this work gradient averaging is used, in which an internal length scale plays an essential role. It is an additional material parameter related to the microstructure, which can determine the width of the strain localization zone.

The article concisely presents the analysed elastic-plastic model coupled with damage at large strains and the applied gradient regularization. The model was implemented in the AceGen package in the Mathematica environment and tested using the AceFEM package. Selected results of bar tension simulations are presented for different variants of the adopted material description, in which strain localization related both to material softening and to geometrical softening can be observed.

Received: October 16, 2012
Received in a revised form: October 18, 2012
Accepted: October 26, 2012



THE MULTI-SCALE NUMERICAL AND EXPERIMENTAL ANALYSIS OF COLD WIRE DRAWING FOR HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOY
ANDRZEJ MILENIN*, PIOTR KUSTRA, DOROTA J. BYRSKA-WÓJCIK
AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland
*Corresponding author: milenin@agh.edu.pl
Abstract. The paper examines the problem of determining the drawing schedule for cold drawing of thin (less than 0.1 mm) wire from the hardly deformable magnesium alloy Ax30 with the aid of a multi-scale mathematical model. A special feature of the alloy Ax30 is its mechanism of fracture on the grain boundaries. It is experimentally proven that microscopic cracks occur during tension tests long before the complete fracture of the samples. The state of the metal which directly precedes the appearance of these microscopic cracks is proposed to be considered as optimal from the point of view of the restoration of plasticity by annealing. The simulation of this state in the wire drawing process, and the development on this basis of wire drawing regimes, is the purpose of the paper. The solution of the problem required the development of a fracture model of the alloy in the micro scale, the identification of the fracture model and its implementation in the FEM model of wire drawing. Two schedules of wire drawing are examined. The first of them, according to the results of simulation, allowed the appearance of microscopic cracks. The second regime was designed so that microscopic cracks would not appear during wire drawing. Experimental verification was carried out in laboratory conditions on a specially developed device. Annealing was carried out before each pass. The initial diameter of the billet was 0.1 mm. In the first regime it was possible to realize only 2-3 passes, after which fracture of the wire occurred. In this case cracks on the grain boundaries were observed on the surface of the wire. The second regime made it possible to carry out 7 passes without fracture; the obtained wire with a diameter of 0.075 mm did not contain surface defects, had high plastic characteristics and allowed further drawing. Thus, the validation of the developed multi-scale model was executed for two principally different conditions of deformation.
Key words: drawing process, multi-scale modeling, magnesium alloys

1. INTRODUCTION

This paper is devoted to the new magnesium alloys used in medicine as soluble implants (Heublein et al., 1999; Haferkamp et al., 2001; Thomann et al., 2009). Typically, these alloys contain lithium and calcium additions. The production of thin surgical threads for stitching tissues may be an example of the application of these alloys (Seitz et al., 2011; Milenin et al., 2010b). A feature of these alloys is their low technological ductility during cold forming. As shown in previous works (Kustra et al., 2009; Milenin et al., 2010a), the technological ductility of these alloys in cyclic processes based on a combination of cold deformation and annealing is significantly lower than for most known magnesium alloys. The reason for this fact, explained in (Milenin et al., 2011), is that in these alloys fractures appear on the grain boundaries during cold deformation long before the fracture of the sample in the macro scale. These microcracks considerably worsen the restoration of plasticity by annealing. A solution of the problem is proposed in the works (Milenin et al., 2010b; Milenin & Kustra, 2010). It is



based on drawing with a hot die. Studies show that this method is effective for wire diameters above 0.1 mm. Obtaining a thinner wire is difficult because of the strong sensitivity to the velocity of the drawing process. Another disadvantage is that a biocompatible lubricant cannot be used, which becomes important in medical applications. Thus the solution of the listed problems requires an in-depth study of the cold drawing process for these alloys. The aim of this work is to determine the parameters of the cold drawing of thin (less than 0.1 mm) wires by using multi-scale modeling of the wire drawing process and experimental verification of the results.

2. MECHANISM OF FRACTURE

The MgCa0.8 (0.8% Ca, 99.2% Mg) alloy and its modification Ax30 (0.8% Ca, 3.0% Al, 96.2% Mg) were selected as materials for the study. The technique of investigating the fracture mechanism is based on stretching a sample in the microscope's vacuum chamber. During the stretching process, changes of the microstructure and the nucleation of microcracks are monitored. The experiment is described in detail in the works (Milenin et al., 2010a; Milenin et al., 2011). The tests showed that these alloys crack mainly on grain boundaries. Porosity in the sample appears long before the moment of fracture in the macro scale. Examples of cracks in the Ax30 alloy are shown in figure 1 in the macro scale (figure 1a) and the micro scale (figure 1b). The porosity values in the stretched sample characterize the technological plasticity during multi-pass drawing. It is proved that if microcracks do not appear in the current pass, annealing allows the plasticity to be restored (Milenin et al., 2011). Otherwise, the effectiveness of annealing is much reduced and reaching large deformations in a multi-pass process is impossible. In figure 2 the values of porosity in the centre of the sample during tensile tests of the MgCa0.8 and Ax30 alloys are shown. The values for the Az80 alloy, used in mechanical engineering, are also shown for comparison.

The figures show that in these alloys microcracks appear much earlier than in the typical magnesium alloy Az80. Thus, the increase of porosity in the early stage of deformation is a fundamental difference between the considered alloys and known magnesium alloys. It follows from this that the drawing technology should be developed in such a way that in every pass the material does not develop microcracks. The multi-scale model of the wire drawing process was proposed to solve this problem in the works (Milenin et al., 2010a; Milenin et al., 2011). For micro-scale modeling of the fracture processes the boundary element method (BEM) was used, which allows easy simulation of the fracture of grain boundaries.

a)

b)

Fig. 1. Examples of microcracks during the tensile test of the Ax30 alloy: a) on the surface of the sample in the macro scale, b) in the micro scale

Fig. 2. The dependence of porosity on the total sample elongation




3. THE MULTI-SCALE MATHEMATICAL MODEL OF THE DRAWING PROCESS

The macro-scale numerical model of the drawing process is based on the finite element method (FEM) and is described in the paper (Milenin, 2005). The micro-scale model of deformation is based on the boundary element method. The macro-scale and micro-scale models are coupled in such a way that the results of the simulation at the macro scale, especially stresses and displacements, are the boundary conditions at the micro scale. At the micro scale the displacements, strains and stresses on the grain boundaries are computed, but the most important parameter in the micro-scale simulation is the damage parameter D, which is explained below. The digital representation of the microstructure in the micro-scale model in the proposed BEM code is considered as a two-dimensional representative volume element (RVE) which is divided into grains (figure 3). The model at the micro scale includes BE mesh generation based on images of a fragment of the real microstructure and the numerical solution at the micro-scale level.

The crystallographic orientation is accounted for in the developed program by a random parameter k, which reflects the change of elastic-plastic properties due to the various orientations of grains. The effective plastic modulus of the material for each grain is calculated as follows:

E_eff = k σ_s / Δε̄   (1)

where k is the random parameter, Δε̄ is the increment of the mean equivalent strain in the grain and σ_s is the yield stress of the material in the grain. The Saint-Venant-Levy-Mises theory is used for the relation between stresses and increments of strains for plastic deformation:

σ_ij − δ_ij σ_0 = (2 σ_s / (3 Δε̄)) Δε_ij   (2)

where δ_ij is the Kronecker delta, σ_0 the mean stress, and Δε_ij the increments of the strain components. The solution of the boundary problem is based on Kelvin's fundamental solution (Crouch & Starfield, 1983) for two-dimensional tasks and incompressible material. The solution of the boundary problem and the fracture criteria are described in detail in previous works (Milenin et al., 2010a; Milenin et al., 2011). The proposed criterion of crack initiation is based on the theory of L. M. Kachanov and Y. N. Rabotnov (Rabotnov, 1969). This theory was successfully used in (Diard et al., 2002) for modeling of grain boundary cracking in the case of the deformation of polycrystals. The model was modified to describe crack initiation at the grain boundary:

D = ∫ Ḋ dτ, 0 ≤ D ≤ 1   (3)

Ḋ = b_1 ( σ_eq / (E (1 − D)) )^{b_2} (1 − D)^{b_3}   (4)

σ_eq = √( σ_n² + b_0 S² )   (5)

a)

b)

Fig. 3. Photo of the microstructure (a) and the BE mesh (b)

where D is the damage parameter, E the Young modulus, σ_n the tensile (positive) component of the normal stress at the boundary between two grains, S the shear stress at the boundary between two grains, and b_0-b_3 empirical coefficients. According to equations (3)-(5), the damage parameter is computed at the micro scale for all boundary elements and depends on the material and stress


state. The value of the parameter D varies from 0 to 1. When the value of D reaches 1 for a boundary element, the fracture criterion is met. Crack initiation is allowed only for the internal boundaries in the developed model. The outer boundaries of the domains are assigned boundary conditions and, in spite of a possible fulfilment of the condition, they cannot be destroyed. The determination of the empirical parameters of the fracture model at the micro scale is based on inverse analysis of experimental data. The purpose of this analysis is to minimize the difference between the empirical and calculated moment of crack initiation and between the empirical and calculated porosity of the sample in the micro scale during deformation (Milenin et al., 2012). As a result of the processing of the experimental data shown in figure 2, the following coefficients of equations (4) and (5) for the alloy Ax30 were obtained: b_0 = 0.02, b_1 = 0.43, b_2 = 0.30, b_3 = -0.50.

4. THE MULTI-SCALE MODELING OF TWO VARIANTS OF THE DRAWING PROCESS

For the purpose of validation of the proposed technique, two variants of the wire drawing process were simulated. Diameters of the wires in variant 1: 0.1 → 0.0955 → 0.0912 → 0.087 → 0.0831 → 0.0794 → 0.0758 mm (elongation per pass 1.096). Variant 2: 0.162 → 0.147 → 0.135 → 0.123 → 0.112 mm (elongation per pass 1.20). The die angle in each pass was 5°. The drawing speed was 10 mm/s and was chosen in such a way that the annealing could be done in a furnace installed before the drawing device. All passes in each variant were geometrically similar, so the stress and strain results for all passes are close. For this reason, only the first pass of each variant was simulated. In figure 4 the results of simulation (triaxiality factor) of the first pass for variant 1 (figure 4a) and variant 2 (figure 4b) are shown. The presented data show that the stresses and strains in variants 1 and 2 are significantly different.

From the point of view of experience in the drawing of magnesium alloys, and based on the results of the simulation in the macro scale, variant 2 is preferred, because in this case the deformation is more homogeneous and the value of the tensile stresses is lower (Yoshida, 2004). However, this refers to alloys without a high propensity to microcracks in the early stages of deformation. Thus, if the experimental verification of the numerical simulation finds variant 1 preferable, this may be the proof of the theoretical conclusions about the major impact of microcracks on technological plasticity.
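As a hedged illustration of how the damage parameter D of equations (3)-(5) can accumulate at a single grain boundary, the sketch below performs an explicit integration with the identified Ax30 coefficients. The function names, the time stepping and the Young modulus value (~45 GPa for magnesium) are assumptions for the demonstration, not the authors' BEM code:

```python
def sigma_eq(sigma_n, s, b0=0.02):
    """Equivalent grain-boundary stress of eq. (5), combining the tensile
    (positive) normal component sigma_n and the shear component S."""
    return (max(sigma_n, 0.0) ** 2 + b0 * s ** 2) ** 0.5

def accumulate_damage(sigma_n, s, e_young, dt, steps,
                      b1=0.43, b2=0.30, b3=-0.50):
    """Explicit integration of the Kachanov-Rabotnov-type rate law (4),
    dD/dtau = b1 * (sigma_eq / (E*(1 - D)))**b2 * (1 - D)**b3,
    accumulated as in eq. (3) and capped at D = 1 (fracture of the element)."""
    d = 0.0
    for _ in range(steps):
        rate = (b1 * (sigma_eq(sigma_n, s) / (e_young * (1.0 - d))) ** b2
                * (1.0 - d) ** b3)
        d = min(1.0, d + rate * dt)
        if d >= 1.0:  # fracture criterion met for this boundary element
            break
    return d

# constant boundary stresses in MPa; E of magnesium taken as ~45 GPa (assumed)
d_final = accumulate_damage(sigma_n=120.0, s=40.0, e_young=45.0e3, dt=0.01, steps=500)
```

With b_3 negative, the rate grows as D approaches 1, so damage accelerates towards fracture; a boundary under compression without shear accumulates no damage at all.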

a)

b)

Fig. 4. Distribution of the triaxiality factor: a) for variant 1; b) for variant 2
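The per-pass elongation coefficients quoted for the two drawing schedules follow directly from the diameter sequences, since λ = (d_in/d_out)² under the assumption of volume constancy; a quick illustrative cross-check:

```python
def elongation(d_in, d_out):
    """Elongation coefficient of a drawing pass: the ratio of cross-section
    areas, lambda = (d_in / d_out)**2, assuming volume constancy."""
    return (d_in / d_out) ** 2

variant1 = [0.1, 0.0955, 0.0912, 0.087, 0.0831, 0.0794, 0.0758]  # diameters, mm
variant2 = [0.162, 0.147, 0.135, 0.123, 0.112]

lam1 = [elongation(a, b) for a, b in zip(variant1, variant1[1:])]
lam2 = [elongation(a, b) for a, b in zip(variant2, variant2[1:])]
# variant 1: each pass close to 1.096; variant 2: close to 1.20
```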



In figure 5 the distributions of the strain in the drawing direction and of the vertical stresses along the centre line of the deformation zone are shown. These parameters are used as boundary conditions for the micro-scale simulation of the microstructure deformation. The results of the simulation in the micro scale are shown in figure 6. As can be seen from the results (figure 6), in variant 1 cracks on the grain boundaries did not appear; the maximum value of the parameter D reached in the pass was 0.89. However, in variant 2 microcracks emerge (figure 6b). This suggests that in this case the restoration of ductility of the alloy after the pass will not be possible and the number of passes before the fracture of the wire will be smaller than in variant 1.


a)

b)

b)

Fig. 6. Results of simulation (effective strain in grains) in the micro scale for variant 1 (a) and for variant 2 (b)

c)


d)

Fig. 5. The distribution of strain in the drawing direction (b, d) and of vertical stresses along the centre line of the deformation zone (a, c) for variant 1 (a, b) and variant 2 (c, d)

5. THE EXPERIMENTAL VALIDATION OF RESULTS

The experimental validation of the results of the calculations was performed in the context described above. The Ax30 alloy was used. Sunflower oil was used as a lubricant and the drawing temperature was 30°C. The methodology of obtaining the billet by the hot-die wire drawing process is presented in the work (Milenin & Kustra, 2010). The surface of the workpiece did not contain defects observable under the optical microscope. In variant 2 only 4 passes were performed. Hairline fractures on the grain boundaries on the surface of the wire could be observed after pass 2 using an optical microscope. The received wires were fragile and crumbled; after 2 passes tying a knot was impossible. The developed network of cracks after pass 4 is shown in figure 7. Further attempts at annealing and drawing were unsuccessful.

Fig. 7. Network of cracks after pass 4; wire diameter 0.112 mm, variant 2

Much higher wire quality (figure 8) and mechanical properties allowing further drawing were achieved in variant 1. The study of the mechanical properties in an INSTRON machine showed that the tensile strength Rm of the wire for all passes does not differ significantly (diameter 0.0955 mm: Rm = 250.7 MPa; diameter 0.0758 mm: Rm = 252.9 MPa).

Fig. 8. Surface of the wire after drawing according to variant 1, wire diameter 0.0758 mm

6. CONCLUSIONS

1. The prediction of microcracks using the multi-scale model coincided with the results of the experiment. Based on the developed drawing schedule, a wire of diameter 0.0758 mm could be obtained from the Ax30 alloy by cold drawing.
2. It was shown that microcracks on grain boundaries influence the parameters of the drawing technology of thin wire from Mg-Ca alloys.

Acknowledgements. Financial support of the Ministry of Science and Higher Education, project no. 4131/B/T02/2009/37 and project no. 11.11.110.150, is gratefully acknowledged.

REFERENCES
Crouch, S.L., Starfield, A.M., 1983, Boundary Element Methods in Solid Mechanics, George Allen & Unwin, London-Boston-Sydney.
Diard, O., Leclercq, S., Rousselier, G., Cailletaud, G., 2002, Distribution of normal stress at grain boundaries in multicrystals: application to an intergranular damage modeling, Computational Materials Science, 18, 73-84.
Haferkamp, H., Kaese, V., Niemeyer, M., Phillip, K., Phan-Tan, T., Heublein, B., Rohde, R., 2001, Exploration of magnesium alloys as new material for implantation, Mat.-wiss. u. Werkstofftech., 32, 116-120.
Heublein, B., Rohde, R., Niemeyer, M., Kaese, V., Hartung, W., Rcken, C., Hausdorf, G., Haverich, A., 1999, Degradation of magnesium alloys: a new principle in cardiovascular implant technology, 11th Annual Symposium "Transcatheter Cardiovascular Therapeutics", New York.
Kustra, P., Milenin, A., Schaper, M., Grydin, O., 2009, Multiscale modeling and interpretation of tensile test of magnesium alloy in microchamber for the SEM, Computer Methods in Materials Science, 2, 207-214.
Milenin, A., 2005, Program komputerowy Drawing2d - narzędzie do analizy procesów technologicznych ciągnienia wielostopniowego, Hutnik-Wiadomości Hutnicze, 72, 100-104 (in Polish).
Milenin, A., Byrska, D.J., Grydin, O., Schaper, M., 2010a, The experimental research and the numerical modeling of the fracture phenomena in micro scale, Computer Methods in Materials Science, 2, 61-68.
Milenin, A., Kustra, P., 2010, Mathematical model of warm drawing process of magnesium alloys in heated dies, Steel Research International, 81, spec. ed., 1251-1254.
Milenin, A., Kustra, P., Seitz, J.-M., Bach, Fr.-W., Bormann, D., 2010b, Production of thin wires of magnesium alloys for surgical applications, Proc. Wire Ass. Int. Inc. Wire & Cable Technical Symposium, ed., Murray, M., Milwaukee, 61-70.
Milenin, A., Byrska, D.J., Grydin, O., 2011, The multi-scale physical and numerical modeling of fracture phenomena in the MgCa0.8 alloy, Computers and Structures, 89, 1038-1049.
Milenin, A., Byrska-Wójcik, D.J., Grydin, O., Schaper, M., 2012, The physical and numerical modeling of intergranular fracture in the Mg-Ca alloys during cold plastic deformation, Proc. 14th International Conference on Metal Forming, eds, Pietrzyk, M., Kusiak, J., Kraków, 863-866.
Rabotnov, Y.N., 1969, Creep Problems in Structural Members, North-Holland Publishing Company, Amsterdam-London.
Seitz, J.-M., Utermohlen, D., Wulf, E., Klose, C., Bach, F.-W., 2011, The manufacture of resorbable suture material from magnesium - drawing and stranding of thin wires, Advanced Engineering Materials, 13, 1087-1095.
Thomann, M., Krause, Ch., Bormann, D., von der Hoh, N., Windhagen, H., Meyer-Lindenberg, A., 2009, Comparison of the resorbable magnesium alloys LAE442 and MgCa0.8 concerning their mechanical properties, their progress of degradation and the bone-implant-contact after 12 months implantation duration in a rabbit model, Mat.-wiss. u. Werkstofftech., 40, 82-87.
Yoshida, K., 2004, Cold drawing of magnesium alloy wire and fabrication of microscrews, Steel Grips, 2, 199-202.

MULTI-SCALE NUMERICAL MODELING AND EXPERIMENTAL ANALYSIS OF THE COLD DRAWING PROCESS OF HARDLY DEFORMABLE BIOCOMPATIBLE MAGNESIUM ALLOYS

Streszczenie (translated from Polish)

The work is devoted to the development of a cold drawing process for thin (below 0.1 mm in diameter) wires of the hardly deformable biocompatible magnesium alloy Ax30 with the use of a multi-scale numerical model. A characteristic feature of the Ax30 alloy is its fracture mechanism along the grain boundaries. It was proven experimentally that during a tensile test microcracks appear long before the fracture of the specimen in the macro scale. The state of the metal directly preceding the appearance of microcracks is regarded as optimal with respect to the possibility of restoring ductility by annealing. The main goals of the work are the simulation of such a material state and the development of the drawing process on this basis. The solution of the presented problem required the development of a fracture model of the alloy in the micro scale, the identification of the fracture parameters, and the implementation of the micro-scale model into the FEM model of the drawing process. Two variants of the drawing process were investigated. The first of them, according to the calculation results, leads to the formation of microcracks. The second considered drawing schedule was chosen so that no microcracks appear in the drawn wire. The experimental verification of the calculation results was carried out under laboratory conditions in a tool developed specially for this purpose. Annealing was performed before each pass. The initial wire diameter was 0.1 mm. In the first case it was possible to perform 2-3 passes, after which cracks occurred in the material; cracks along the grain boundaries were then observed on the wire surface. The second drawing schedule allowed 7 passes to be performed without the appearance of cracks; a wire of diameter 0.075 mm was obtained, with a defect-free surface and a ductility allowing further drawing. Thus, the model was validated on two fundamentally different variants of the drawing process.
Received: September 22, 2012 Received in a revised form: October 29, 2012 Accepted: November 9, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING SPACE PROJECTION
PIOTR GURGUL*, MARCIN SIENIEK, MACIEJ PASZYŃSKI, ŁUKASZ MADEJ
AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krakow, Poland
*Corresponding author: pgurgul@agh.edu.pl
Abstract

The concept of H1 projections for the adaptive generation of a continuous approximation of an input 3D image in the finite element (FE) framework is described and utilized in this paper. Such an approximation, along with a corresponding FE mesh, can be interpreted and used as input data for FE solvers. In order to capture FE solution gradients properly, specific refined meshes have to be created. The projection operator is applied iteratively on a series of increasingly refined meshes, resulting in an improving fidelity of the approximation. An algorithm developed for linking image processing to the 3D FEM code is also presented within the paper. In particular, we compare the hp-adaptive algorithm with h-adaptivity, concluding that hp-adaptivity loses its exponential convergence for the three-dimensional approximation of non-continuous data. Finally, conclusions with the evaluation and discussion of the numerical results for an exemplary problem, together with the obtained convergence rates, are presented.

Key words: adaptive finite element method, space projections, digital material representation

1. INTRODUCTION

Space projections constitute an important tool, which can be used in diverse applications including finite element (FE) analysis (Demkowicz, 2004; Demkowicz & Buffa, 2004; Demkowicz et al., 2006). They might be used, for example, to create an approximation of a generic bitmap in the finite element space. Such bitmaps can represent e.g. the morphology of the digital material representation (DMR) during FE analysis of material behavior under deformation and exploitation conditions (Madej et al., 2011; Madej, 2010; Paszyński et al., 2005). Due to the crystallographic nature of polycrystalline materials, particular features are characterized by different properties that significantly influence material deformation. To properly capture the FE solution gradients which result from the mentioned material inhomogeneities, specific refined meshes have to be created. The projection operator can be applied iteratively on a series of increasingly refined meshes, resulting in an improving fidelity of the approximation. A proof of concept for a limited set of applications has been presented in the authors' earlier works (Gurgul et al., 2011; Sieniek et al., 2010; Gurgul et al., 2012).

The continuous data approximation, namely the H1 projection, is necessary in the case of non-continuous input data representing continuous phenomena. Examples include satellite images of the topography of a terrain, where a non-continuous bitmap represents rather continuous terrain, and input data obtained by various techniques representing the temperature distribution over a material, where the temperature is a rather continuous phenomenon. An exemplary application of the first case is flood modeling (Collier et al., 2011); the second case concerns the solution of time-dependent problems with input data representing initial conditions. When we solve non-stationary problems of the form $\partial u / \partial t - \Delta u = f$, where $u$ represents temperature, with the initial condition $u(x,0) = u_0$, where $u_0$ is represented by non-continuous input data, it is necessary to perform the H1 projection of $u_0$ to obtain the required regularity of $u$.

A number of adaptive algorithms for finite element mesh refinement are known. hp-adaptation is one of the most complex and accurate, as it results in exponential convergence with respect to the number of degrees of freedom (Demkowicz et al., 2006). The hp-adaptation process breaks selected finite elements into smaller ones and modifies the polynomial order of approximation locally. The h-adaptive algorithm restricts the mesh refinement process to breaking selected finite elements with a fixed polynomial order of approximation, and it results in algebraic convergence only. In this work we compare hp-adaptivity with h-adaptivity for the H1 projection of non-continuous data. We conclude that for the three-dimensional H1 projection of non-continuous data the hp-adaptivity loses its exponential convergence and thus h-adaptation is sufficient.

2. PROJECTION OPERATOR

An L2 projection onto the space V may be expressed as the following minimization problem: given an arbitrary function $f \in L^2(\Omega)$, find $u \in V$ such that $\| f - u \|_{L^2(\Omega)}$ is minimal.

Since $u = \sum_i a_i e_i$, where $e_i$ are the basis functions of V, we have to determine $a_i$, the coefficients of this linear combination. Given $f$, to find the minimum we differentiate the error functional with respect to the coefficients and compare the derivatives to zero:

$$\frac{\partial}{\partial a_i} \Big\| f - \sum_j a_j e_j \Big\|^2_{L^2(\Omega)} = 0 \quad (1)$$

This leads to a linear system:

$$\mathbf{M}\,\mathbf{U} = \mathbf{F} \quad (2)$$

where:

$$M_{ij} = (e_i, e_j)_{L^2(\Omega)} \quad (3)$$

$$F_i = (f, e_i)_{L^2(\Omega)} \quad (4)$$

An L2 projection onto the space V constitutes the solution of this system. However, the method above considers only the function itself in the minimization of the error. We can also include information about the derivatives, and thus minimize not only the error of the function's value but also of its gradients. This method is called the H1 projection and can be expressed very similarly to the L2 projection: given an arbitrary function $f \in H^1(\Omega)$, find $u \in V$ such that

$$\| f - u \|^2_{H^1(\Omega)} = \| f - u \|^2_{L^2(\Omega)} + \| \nabla f - \nabla u \|^2_{L^2(\Omega)} \quad (5)$$

is minimal. Thus the quantity

$$\| f - u \|^2_{L^2(\Omega)} + \| \nabla f - \nabla u \|^2_{L^2(\Omega)} \quad (6)$$

needs to be minimal. Since the material data is not continuous in our case, we need to approximate the partial derivatives in the gradient $\nabla f$ by finite differences. For a given $X = (x_1, x_2, x_3)$, we find the closest existing (integer) coordinates for $x_1, x_2, x_3$, produced by a rounding function $r(x)$. Then, we compute the approximation of $\nabla f$ from the differences of the input data at the neighbouring integer coordinates:

$$\frac{\partial f}{\partial x_k}(X) \approx f(\ldots, r(x_k)+1, \ldots) - f(\ldots, r(x_k), \ldots) \quad (7)$$
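The projection mechanism above can be illustrated with a minimal, self-contained sketch. The listing below is our own one-dimensional illustration, not the authors' 3D code: it assembles the system (2) for a mesh of linear elements, for both the L2 projection and the H1 variant, with the gradient of the non-continuous input approximated by a forward finite difference at the element scale, in the spirit of equation (7). All function names and the 1D setting are assumptions made for the example.

```python
import numpy as np

def fe_projection(f, n_el=32, alpha=0.0):
    """Project f onto continuous piecewise-linear FE functions on [0, 1].

    alpha = 0 assembles the L2 projection system (1)-(4);
    alpha = 1 adds the gradient terms of (5)-(6), with grad f approximated
    by a forward difference at the element scale, as in equation (7).
    """
    h = 1.0 / n_el
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    M = np.zeros((n_el + 1, n_el + 1))   # matrix M of equation (2)
    F = np.zeros(n_el + 1)               # right-hand side F of equation (2)
    # 2-point Gauss rule on the reference element [0, 1]
    gp = np.array([0.5 - 0.5 / np.sqrt(3.0), 0.5 + 0.5 / np.sqrt(3.0)])
    gw = np.array([0.5, 0.5])
    for e in range(n_el):
        for xi, w in zip(gp, gw):
            x = nodes[e] + xi * h
            phi = np.array([1.0 - xi, xi])          # linear shape functions
            dphi = np.array([-1.0 / h, 1.0 / h])    # their derivatives
            df = (f(x + h) - f(x)) / h              # finite-difference grad f
            for a in range(2):
                F[e + a] += w * h * (phi[a] * f(x) + alpha * dphi[a] * df)
                for b in range(2):
                    M[e + a, e + b] += w * h * (phi[a] * phi[b]
                                                + alpha * dphi[a] * dphi[b])
    return nodes, np.linalg.solve(M, F)  # nodal coefficients a_i

# Non-continuous "bitmap-like" input data: a unit step
step = lambda x: 1.0 if x > 0.5 else 0.0
xs, u_l2 = fe_projection(step, alpha=0.0)  # L2 projection
xs, u_h1 = fe_projection(step, alpha=1.0)  # H1 projection
```

The L2 projection oscillates near the discontinuity, while the H1 projection penalizes the gradient error as well and produces a smoother transition, which is the regularity needed by a non-stationary solver.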

3. ADAPTIVE ALGORITHM USED FOR SOLUTION

The quality of the approximation depends on the choice of the space V in which the approximation is performed. There is no efficient way to determine the precision of a given V a priori, and a workaround is to refine the space V iteratively, based on the relative error rate in each step. This is done using a sequence of nested spaces V_0 ⊂ V_1 ⊂ … ⊂ V_N, where V_0 corresponds to the initial mesh and V_N is the first mesh for which the desired precision is achieved.

Let:
- u_{V_t} be the solution in the space V_t,
- V_t^fine be the space corresponding to a mesh where all elements have been refined by one order with respect to V_t,
- V_t^opt be the space corresponding to a mesh where the element refinements have been optimally chosen by comparing V_t^fine and V_t,
- V_t^w be any space such that V_t ⊂ V_t^w ⊂ V_t^fine.

The major steps of the described algorithm are presented in figure 1: the minimization problem is solved in V_t and in V_t^fine; then, for each element of the coarse mesh and for each admissible intermediate space V_t^w, a projection of the fine solution onto V_t^w is computed over that element, the refinement with the best error decrease rate is selected, and all basis functions of the selected spaces, with supports on the corresponding elements, are added to V_t^opt. The algorithm of figure 1 is performed in iterations, starting from an initial space corresponding to a trivial mesh and producing V_{t+1} from V_t, until the stop condition (usually the desired precision) is met. These iterations are presented in figure 2.

Fig. 1. Choice of an optimal mesh for the following iteration of the adaptive algorithm (pseudocode listing).

Fig. 2. The main loop of the adaptive algorithm, approximating the input with the desired precision (pseudocode listing).

3.1. hp mesh refinements and their role in projection-based interpolation

The quality of the interpolation can be improved by the expansion of the interpolation basis. In FEM terms, this can be done by means of mesh adaptation. Two methods of adaptation are considered in the present work.

3.1.2. p-adaptation - increasing the polynomial approximation order

One approach is to increase the order of the basis functions on the elements where the error rate is higher than desired. More functions in the basis mean a smoother and more accurate solution, but also more computations and the use of high-order polynomials.

3.1.3. h-adaptation - refining the mesh

Another way is to split an element into smaller ones in order to obtain a finer mesh. This idea arose from the observation that the domain is usually non-uniform, and in order to approximate the solution fairly some places require more precise computations than others, where an acceptable solution can be achieved using a small number of elements. The crucial factor in achieving optimal results is to decide whether a given element should be split into two parts horizontally, into two parts vertically, into four parts (both horizontally and vertically on one side), into eight parts (both horizontally and vertically on both sides), or not split at all. That is why an automated algorithm was developed that decides after each iteration whether an element needs an h- or p-refinement or none. The refinement process is fairly simple in 1D, but the 2D and 3D cases enforce a few refinement rules that have to be followed.

3.1.4. Automated hp-adaptation algorithm

Neither the p- nor the h-adaptation guarantees that the error rate decreases in an exponential manner with the step number. This can be achieved by combining the two methods under some conditions, which are not necessarily satisfied in the present case. Still, in order to locate the most sensitive areas dynamically at each stage, and to improve the solution as much as possible, a self-adaptive algorithm can be applied. It decides whether a given element should be refined or is already properly refined for a satisfactory interpolation, in a manner analogous to the algorithm for finite element adaptivity described by Demkowicz et al. (2006).
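The adaptive loop of figures 1 and 2 can be sketched in a strongly simplified form. The code below is our illustrative reduction, not the authors' implementation: a 1D h-adaptive loop that, in each iteration, estimates the per-element error of a local projection of non-continuous data and bisects the elements responsible for the largest errors until the stop condition is met. A piecewise-constant local projection is used for brevity.

```python
import numpy as np

def element_error(f, a, b, n_q=64):
    """L2 error of the best constant on [a, b] (the local L2 projection)."""
    x = np.linspace(a, b, n_q)
    fx = f(x)
    c = fx.mean()                        # best constant approximation
    return np.sqrt(np.mean((fx - c) ** 2) * (b - a))

def h_adapt(f, tol=1e-2, max_iter=30):
    """Greedy h-adaptive loop: refine elements carrying most of the error."""
    mesh = np.linspace(0.0, 1.0, 5)      # trivial initial mesh (V_0)
    for _ in range(max_iter):
        errs = np.array([element_error(f, a, b)
                         for a, b in zip(mesh[:-1], mesh[1:])])
        total = np.sqrt((errs ** 2).sum())
        if total < tol:                  # stop condition of figure 2
            break
        # bisect the elements responsible for the largest errors
        worst = errs > 0.5 * errs.max()
        mids = 0.5 * (mesh[:-1] + mesh[1:])[worst]
        mesh = np.sort(np.concatenate([mesh, mids]))
    return mesh, total

# Non-continuous "material data" with a jump that never hits a mesh node
ramp = lambda x: np.where(x > 1.0 / 3.0, 1.0, 0.0)
mesh, err = h_adapt(ramp, tol=1e-2)
```

The refinement concentrates around the discontinuity: elements far from the jump are never split, while the element containing it is bisected repeatedly, mirroring the behaviour of the element-wise error comparison in figure 1.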

4. NUMERICAL RESULTS

The presented projection algorithm was tested on a three-dimensional example, with the hp-adaptive (see figure 3) and h-adaptive (see figure 4) algorithms. The example concerns the approximation of input data representing a ball-shaped distribution. This may represent the initial distribution of temperature over a ball-shaped material embedded in another material; such a temperature distribution may constitute the starting point for a non-stationary, time-dependent heat transfer simulation. The numerical results presented in table 1, obtained by the hp-adaptive solution, show that the algorithm utilized for the three-dimensional H1 projections does not deliver exponential convergence; it is therefore reasonable to replace it with its cheaper, h-adaptive counterpart, which delivers the similar convergence presented in table 2, with a simpler implementation and shorter execution time.

Table 1. Convergence rate for the problem of H1 projections of 3D balls with hp-adaptivity.

Iteration   Mesh size   Relative error in H1 norm
1           125         71.3
2           2197        66.3
3           5197        62.4
4           12093       63.9
5           22145       57.7
6           41411       51.03


Fig. 3. 3D balls problem: the mesh after the sixth iteration of the hp-adaptive algorithm and the solution over the mesh.


Fig. 4. 3D balls problem: the mesh after the sixth iteration of the h-adaptive algorithm and the solution over the mesh.

Table 2. Convergence rate for the problem of H1 projections for 3D balls with h-adaptivity.

Iteration   Mesh size   Relative error in H1 norm
1           125         72.1
2           729         68.3
3           4913        68.2
4           11745       62.1
5           32305       52.9
6           68257       49.44
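The algebraic (rather than exponential) character of the convergence reported in tables 1 and 2 can be checked directly from the tabulated data by fitting a power law err ≈ C·N^s to the mesh sizes and errors. The short sketch below is our own verification of this reading of the tables:

```python
import numpy as np

# Data from tables 1 and 2: mesh size N and relative error in the H1 norm
hp = {"N": [125, 2197, 5197, 12093, 22145, 41411],
      "err": [71.3, 66.3, 62.4, 63.9, 57.7, 51.03]}
h = {"N": [125, 729, 4913, 11745, 32305, 68257],
     "err": [72.1, 68.3, 68.2, 62.1, 52.9, 49.44]}

def algebraic_rate(N, err):
    """Least-squares slope s of log(err) vs log(N), i.e. err ~ C * N**s."""
    return np.polyfit(np.log(N), np.log(err), 1)[0]

rate_hp = algebraic_rate(**hp)
rate_h = algebraic_rate(**h)
```

Both fitted exponents are small and nearly equal, which is consistent with the paper's conclusion that the two variants converge at a similar, algebraic rate for this problem.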

5. CONCLUSIONS AND FUTURE WORK

This paper presents a way of incorporating the well-established H1 projection concept into an adaptive algorithm used to prepare material data. The described method allows for the generation of a smooth, continuous interpolation of given arbitrary data, together with an initial pre-adapted mesh suitable for further processing by a non-stationary FE solver. In this paper we compared three-dimensional hp-adaptivity with h-adaptivity, concluding that the hp-adaptive algorithm does not deliver exponential convergence in the case of the approximation of non-continuous data, and that it may be reasonable to use just the h-adaptation algorithm, with a uniform polynomial order of approximation, which is significantly easier to implement. It is desirable to experiment with more 3D images as well as with various input parameters (e.g. boundary conditions or image conversion algorithms). Besides, more sophisticated digital material representations should be investigated. The applicability of this methodology to non-stationary finite element method solvers will be tested in our future work. In particular, we plan to test the influence of the quality of the projection of the initial state on the stability of the subsequent non-stationary simulation.

Acknowledgements. The work of the first author was partly supported by the European Union by means of the European Social Fund, PO KL Priority IV: Higher Education and Research, Activity 4.1: Improvement and Development of Didactic Potential of the University and Increasing Number of Students of the Faculties Crucial for the National Economy Based on Knowledge, Subactivity 4.1.1: Improvement of the Didactic Potential of the AGH University of Science and Technology "Human Assets", No. UDA POKL.04.01.01-00-367/08-00. The work of the second author was supported by Polish National Science Center grant no. DEC-2011/03/N/ST6/01397. The work of the third author was supported by Polish National Science Center grant no. NN519447739. The work of the fourth author was supported by grant no. 820/N-Czechy 2010/0.

REFERENCES

Collier, N., Radwan, H., Dalcin, L., 2011, Time adaptivity in the diffusive wave approximation to the shallow water equations, accepted to Journal of Computational Science.
Demkowicz, L., Kurtz, J., Pardo, D., Paszyński, M., Zdunek, A., 2006, Computing with hp-Adaptive Finite Elements, CRC Press, Taylor and Francis.
Demkowicz, L., Buffa, A., 2004, H1, H(curl) and H(div) conforming projection-based interpolation in three dimensions, ICES Report 04-24, The University of Texas at Austin.
Demkowicz, L., 2004, Projection-based interpolation, ICES Report 04-03, The University of Texas at Austin.
Gurgul, P., Sieniek, M., Magiera, K., Skotniczny, M., 2011, Application of multi-agent paradigm to hp-adaptive projection-based interpolation operator, accepted to Journal of Computational Science.
Gurgul, P., Sieniek, M., Paszyński, M., Madej, Ł., Collier, N., 2012, Two dimensional hp-adaptive algorithm for continuous approximations of material data using space projections, accepted to Computer Science.
Madej, L., 2010, Development of the modeling strategy for the strain localization simulation based on the Digital Material Representation, DSc dissertation, AGH University Press, Krakow.
Madej, L., Rauch, L., Perzyński, K., Cybułka, P., 2011, Digital Material Representation as an efficient tool for strain inhomogeneities analysis at the micro scale level, Archives of Civil and Mechanical Engineering, 11, 661-679.
Paszyński, M., Romkes, A., Collister, E., Meiring, J., Demkowicz, L., Wilson, C.G., 2005, On the modeling of Step-and-Flash imprint lithography using molecular static models, ICES Report 05-38.
Perzyński, K., Major, Ł., Madej, Ł., Pietrzyk, M., 2011, Analysis of the stress concentration in the nanomultilayer coatings based on digital representation of the structure, Archives of Metallurgy and Materials, 56, 393-399.
Sieniek, M., Gurgul, P., Kołodziejczyk, P., Paszyński, M., 2010, Agent-based parallel system for numerical computations, Procedia Computer Science, 1, 1971-1981.

THREE-DIMENSIONAL ADAPTIVE ALGORITHM FOR CONTINUOUS APPROXIMATIONS OF MATERIAL DATA USING SPACE PROJECTIONS

Streszczenie (translated from Polish)

The aim of this article is to describe and demonstrate the practical use of the concept of the H1 projection for the adaptive generation of a continuous approximation of a 3D input image in a finite element basis. Such an approximation, together with the corresponding mesh, can be interpreted as a continuous representation of the input data for finite element method (FEM) solvers. The article presents the theoretical foundations of the projection mechanism, together with a comparison of the hp-adaptation and h-adaptation algorithms used for the iterative generation of successive approximations. A method of estimating and reducing the approximation error is also discussed. A computational example illustrating the described methods is presented.
Received: September 26, 2012 Received in a revised form: October 23, 2012 Accepted: October 26, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT METHOD AND THE BIOINSPIRED ALGORITHM
WACŁAW KUŚ*, RADOSŁAW GÓRSKI
Silesian University of Technology, Institute of Computational Mechanics and Engineering, Konarskiego 18A, 44-100 Gliwice, Poland
*Corresponding author: waclaw.kus@polsl.pl
Abstract

The problem of the identification of the size of a void in the micro scale on the basis of homogenized material parameters is studied in this work. A three-dimensional unit-cell model of a porous microstructure is modelled and analyzed by the boundary element method (BEM). The method is very accurate and, for the considered problem, requires the discretization of only the outer boundary of the models. The algorithm used for the identification is characterized by a hierarchical structure which allows for parallel computing on three different levels. The parallel algorithm is used for evolutionary computations. The solution of boundary value problems by the BEM and the determination of the effective material properties by the numerical homogenization method are also parallelized. The computation of the compliance matrix for a porous microstructure is shown. The matrix is used to formulate the objective function in the identification problem in which the size of a void is searched for. The scalability tests of the algorithm are performed using a server consisting of eight floating point units. As a result of using the hierarchical structure of the identification algorithm and the BEM, a significant computation speedup and a high accuracy are achieved.

Key words: parallel computing, bioinspired algorithms, identification, boundary element method, micromechanics, numerical homogenization

1. INTRODUCTION

Bioinspired algorithms are very efficient optimization tools for problems with single- and multi-modal objective functions (Michalewicz, 1996). The main drawback of these algorithms is the large number (hundreds or thousands) of objective function evaluations. The time needed for the evaluation of a single objective function depends on the boundary value problem, usually solved by numerical methods like the finite element method (FEM) or the boundary element method (BEM). The overall wall time of the identification can be shortened when parallel algorithms are used (Kuś & Burczyński, 2008).

Apart from analytical models and experimental testing, numerical simulations today play an important role in the prediction of the behaviour of new materials with a complex structure. A recent increase in computational power gives the possibility of studying different materials using a numerical homogenization approach. Since the direct modelling and analysis of most engineering structures made of heterogeneous materials is computationally very demanding, numerical homogenization methods can be applied instead. By using this technique, a complex microstructure may be represented, for instance, by means of a representative volume element (RVE) or a unit cell, and can be modelled and analyzed on two or more different scales.

ISSN 1641-8581

Because the main emphasis in this work is put on parallel bioinspired computations assisting numerical homogenization, analytical homogenization procedures are out of the scope of this paper and will not be discussed. Among the numerical homogenization methods, the FEM and the BEM are the most frequently used. The studies in the literature concern the homogenization of different materials with complex microstructures, for example composite materials, cellular materials, heterogeneous tissue scaffolds and others. Fang et al. (2005), for instance, have studied the homogenization of porous tissue scaffolds by the FEM and by two other approaches; they computed the effective mechanical properties for different scaffold materials and pore shapes. Düster et al. (2012) have shown a new approach to the numerical homogenization of heterogeneous and cellular materials using the finite cell method; an important feature of the method is the possibility of discretizing complicated microstructures in a fully automatic way. Chen and Liu (2005) have analysed composites reinforced by spherical particles or short fibres by the advanced BEM; difficulties in dealing with nearly-singular integrals during the modelling of composites with closely packed fillers have been resolved by new and improved techniques. Araujo et al. (2010) have modelled and analysed three-dimensional composite microstructures by the BEM and a parallel algorithm. Multiscale analysis coupling molecular statics and the BEM is presented by Burczyński et al. (2010a). Optimization of macro models analyzed by the FEM using parallel algorithms is shown by Kuś and Burczyński (2008). Identification of the material parameters of a bone by using multiscale modelling and a distributed parallel evolutionary algorithm is presented by Burczyński et al. (2010b). In the present paper the identification of voids in microstructures modelled by the BEM is presented.
In order to speed up the computations, the identification problem is solved by a parallel hierarchical algorithm. The developed system is built of three programs: an evolutionary algorithm (the optimization tool), a computational homogenization module (the evaluation of the objective function) and the BEM program (the boundary value problem solver). Three-dimensional unit-cell models (which play the role of an RVE) of microstructures with voids are considered. The numerical homogenization of an orthotropic material is shown

for which the homogenized properties are determined. The properties are used to formulate the objective function, depending on the quantities of the macro and micro models, in order to identify the size of a void in the unit cell model of the material. As a result, the parameters defining the voids in the material in the micro scale are determined on the basis of the orthotropic parameters in the macro scale.

2. COMPUTATIONAL HOMOGENIZATION BASED ON THE BEM

In this section, the idea of numerical homogenization is described within the framework of a linear elastic material characterized by a periodic microstructure containing voids. It is assumed that the material is macroscopically orthotropic and that a macro model of a structure made of this material is subjected to small deformations. The porous microstructure is modelled and analyzed by the BEM. The boundary integral equations for a general three-dimensional (3D) isotropic body are shown, and the stress-strain relationships for an orthotropic material are presented. As a result of the numerical homogenization by the BEM, the macroscopic homogenized properties of the material are determined on the basis of the analysis of the unit cell models in the micro scale.

First, consider a 3D body (a macro model) made of a homogeneous, isotropic and linear elastic material. The external boundary of the body is denoted by Γ. The body is statically loaded along the boundary by boundary tractions tj; the displacements of the body are denoted by uj. Assuming that body forces do not act on the body, the relation between the loading of the body and its displacements can be expressed by the boundary integral equation (known as the Somigliana identity) in the following form (Gao & Davies, 2002):

COMPUTER METHODS IN MATERIALS SCIENCE

cij x u j x Tij x,x u j x d x

U x , x t x d x
ij j

(1)

where x is a collocation point, for which the above integral equation is applied, x is a point on the external boundary , a constant cij depends on the position of the point x, Uij and Tij are fundamental solutions of elastostatics. The summation convention is used in the equation (the indices for a 3D problem are i,j = 1,2,3).


Numerical BEM equations are obtained after discretization of the boundary integral equation (1), which is successively applied for all collocation points. In the developed computer program the outer boundary of the body is divided into 8-node quadratic boundary elements. Along the external boundary the variations of coordinates, displacements and tractions are interpolated using quadratic shape functions. The resulting BEM equations can be expressed in the following matrix form:

$$\mathbf{H}\,\mathbf{u} = \mathbf{G}\,\mathbf{t} \quad (2)$$
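Once H and G are assembled, equation (2) is a dense linear system: with the tractions t prescribed on the whole boundary (as in the homogeneous static boundary conditions used for the homogenization tests below), the displacements u follow from a single solve. The sketch below is a toy illustration of that solution step only; the matrices are random stand-ins, not actual BEM boundary integrals.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                                    # number of boundary DOFs (toy size)
# Stand-in coefficient matrices; in the BEM they come from integrating the
# fundamental solutions and shape functions over the boundary elements.
H = np.eye(n) + 0.1 * rng.standard_normal((n, n))
G = 0.1 * rng.standard_normal((n, n))
t = rng.standard_normal(n)                # prescribed boundary tractions

u = np.linalg.solve(H, G @ t)             # displacements from H u = G t
```

In a real BEM code with mixed boundary conditions, the known and unknown quantities are first swapped column-wise between H and G before the solve; the pure-traction case shown here avoids that bookkeeping.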

where u and t are the displacement and traction vectors, respectively, and H and G are coefficient matrices dependent on the boundary integrals of the fundamental solutions and shape functions; their elements are integrated numerically using Gauss quadratures.

Consider now a heterogeneous material with a periodic microstructure containing voids in the form of rectangular prisms. A unit cell of this material (a micro model) representing its porous microstructure contains a single rectangular prism of arbitrary side lengths a1, a2, a3, as shown in figure 1. Because there are three mutually perpendicular symmetry planes with respect to the void, aligned along the x1, x2 and x3 axes, the material in the macro scale is referred to as orthotropic.

The mechanical properties of a linear elastic material are characterized by the compliance matrix S or by the stiffness matrix C. The means of obtaining the elements of the compliance matrix S for an orthotropic material using the numerical homogenization concept and the BEM are presented below. Using the engineering notation, the strain-stress relationships for an orthotropic material have the following form (Kollár & Springer, 2003):

$$\begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \\ \gamma_{23} \\ \gamma_{13} \\ \gamma_{12} \end{bmatrix} =
\begin{bmatrix}
S_{11} & S_{12} & S_{13} & 0 & 0 & 0 \\
S_{12} & S_{22} & S_{23} & 0 & 0 & 0 \\
S_{13} & S_{23} & S_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & S_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & S_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & S_{66}
\end{bmatrix}
\begin{bmatrix} \sigma_1 \\ \sigma_2 \\ \sigma_3 \\ \tau_{23} \\ \tau_{13} \\ \tau_{12} \end{bmatrix} \quad (3)$$

where the compliance matrix S has 12 nonzero elements, of which only 9 are independent; ε1, ε2, ε3 and γ23, γ13, γ12 are the engineering strains, and σ1, σ2, σ3 and τ23, τ13, τ12 are the engineering stresses. The walls of the unit cell in figure 1 align with the x1, x2 and x3 axes, in which the strains and stresses in equation (3) are defined. For the considered orthotropic material in the macro scale, the compliance matrix is specified in the coordinate system defined by these axes, which are perpendicular to the three symmetry planes. The compliance matrix S is symmetrical for an elastic material (Sij = Sji) and it is the inverse of the stiffness matrix (S = C⁻¹).
In order to compute the elements of the compliance matrix, 6 numerical tests are performed using the unit cell in figure 1, i.e. 3 tensile tests and 3 shear tests. In this work, homogeneous static boundary conditions are applied. The unit tractions are prescribed to the unit cell models. For instance, for the tensile test in the x1 direction only the traction in this direction (1 stress) is prescribed and the remaining are zero. When the first stress state is applied, then the resulting strains are obtained from the strain-stress relationships for an orthotropic material and the first column of the compliance matrix in equation (3) is determined. Repeating an analysis five more times for the remaining unit stress vectors allows determining all columns of the compliance matrix. In order to determine the homogenized macroscopic properties represented by this matrix, the

Fig. 1. A unit cell model of an orthotropic material

If a material has a non-regular and non-uniform microstructure, the representative volume elements (RVE) representing a microstructure of this material should be rather used than the unit cell models. More comprehensive definitions of the RVE can be found elsewhere, for instance in Kouznetsova (2002). In the RVE or the unit cell analysis, representative sections (volumes) of a material are analyzed in order to calculate the homogenized properties. The coupling of the macro and micro levels is based on the averaging theorems. Thus, the relation between strains and stresses is formulated in an av-

253

COMPUTER METHODS IN MATERIALS SCIENCE

INFORMATYKA W TECHNOLOGII MATERIAW

relation between strains and stresses is formulated in an average sense. The average strains are computed on the basis of displacements obtained from the BEM analysis by their integration over the boundaries of the models. These strains provide the relevant terms in the compliance matrix S and thus the effective properties. 3. FORMULATION OF IDENTIFICATION PROBLEM The identification problem consists in finding the side lengths a1, a2, a3 of the void in the unit cell model in figure 1 by minimization the following functional F dependent on the elements of the compliance matrix:

F = Σ_{i=1}^{n} |si − s̄i|		(4)

where si are the computed homogenized material properties, i.e. the elements of the compliance matrix, s = {S11, S22, S33, S12, S13, S23, S44, S55, S66}, s̄i are the reference homogenized material properties related to a macromodel (e.g. from experiments), and n is the number of independent material parameters for an orthotropic material (n = 9 in this case).

The identification problem is solved by the evolutionary algorithm, in which a population of chromosomes is processed in each iteration. The design variables (the side lengths a1, a2, a3) are coded in the genes of each chromosome, which is a potential solution of the problem. At the beginning, the initial population of chromosomes is generated randomly. Then the values of the objective function (fitness function) for all chromosomes are calculated. The fitness function defined by equation (4) is obtained by solving six boundary value problems with the use of the BEM and the homogenization procedure. In the next step, randomly chosen chromosomes and their genes are modified by using evolutionary operators. The new generation is created on the basis of the offspring population during the selection process. The loop of the algorithm is repeated until the end condition is fulfilled (expressed e.g. as a maximum number of iterations).

In order to improve the evolutionary process of the algorithm and speed up the computations, the island (also called distributed) version of the evolutionary algorithm is proposed in the present work. It uses a few subpopulations of chromosomes which evolve separately. The chromosomes can be exchanged between subpopulations during a migration phase. Another improvement concerns the evaluation of the fitness function. The developed algorithm uses a database containing information about evaluated chromosomes and their fitness function values. It prevents the evaluation of the fitness function for chromosomes which have the same genes; if this is the case, the value from the database is used. The procedure saves much time, because solving a boundary value problem is usually the most time-consuming operation during the evolutionary process.

Identification problems belong to a class of ill-defined problems and the uniqueness of the solution is not guaranteed. The same value of the objective function may be obtained for a different number and other parameters of voids.

4. PARALLELIZATION OF IDENTIFICATION ALGORITHM

The aim of parallelization of the identification algorithm is to obtain the results as fast as possible. Two factors are taken into account in the parallelization strategy: the wall time of computations and the memory consumption. The memory usage of the algorithm is important because the methods used in the paper increase memory requirements. The physical memory installed in a server should be taken into account during parallelization of the algorithm to prevent swapping memory to disk, which may lead to a much longer wall time of computations. In the considered identification process, solving a boundary value problem is the most time consuming task.

The parallelization of the identification algorithm can be performed on at least three levels, as shown in figure 2. On each level a different program of the developed system, consisting of three modules, is applied: on the first level the evolutionary algorithm, on the second the computational homogenization procedure, and on the third the BEM program for the solution of a boundary value problem (parallel system of equations solver - PSS). The parameter nLx is the number of threads used by a program on level x.

Fig. 2. A hierarchical parallel structure of the identification algorithm

The parallelization is hierarchical and the total number of parallel threads is equal to the product of the parameters nLx for all three levels. The parallelization of evolutionary algorithms is quite easy (Kuś & Burczyński, 2008). The efficiency of using a parallel algorithm is high, especially for problems for which the evaluation of a fitness function is long (from seconds to hours, or in some cases days). The maximum number of parallel threads nL1 is equal to the total number of chromosomes. The parallel evolutionary algorithms that use the floating point representation operate on small populations of chromosomes, for example 10, 20, 50. The parallelization increases the memory requirements for computations: the memory amount is the sum of the memory requirements for the evaluation of the fitness function for each chromosome, so the memory consumption is nL1 times larger than in the case of a sequential algorithm.

The parallelization on level 2 is related to the parallel computational homogenization. The homogenization procedure consists of boundary value problems solved in a parallel way and a sequential algorithm which computes the homogenized material properties. The maximum number of parallel tasks is 6 for a 3D problem. The homogenization procedure runs 6 BEM analyses, therefore the number of parallel tasks should be 1, 2, 3 or 6 in order to use all cores equally.

The boundary value problem is solved with the use of the BEM on the third level of parallelization. Several steps of the BEM algorithm can be parallelized; the most important is the parallelization of solving the system of equations. In the BEM full matrices are created, thus standard algorithms like those in LAPACK can be used to solve the system of algebraic equations. The parallel approach is realized with the use of the Intel MKL library.

5. NUMERICAL TESTS

The geometry of the considered RVE is presented in figure 1. The unit cell size is 1×1×1 mm. Constraints on the design variables, i.e. the side lengths a1, a2, a3, are imposed and each is within the range of 0.05 to 0.85 mm. For each test the traction prescribed on a wall of the unit cell is p = 1 MPa. The linear elastic material properties of the microstructure are as follows: Young's modulus E = 1.0 GPa and Poisson's ratio ν = 0.3. Each outer wall of the unit cell and each inner wall of the void is divided into 16 quadratic boundary elements, resulting in 192 elements for the whole model. The orthotropic properties of the reference material are shown in table 1. The elements of the reference compliance matrix were obtained for an actual void with the side lengths shown in table 2.

Table 1. Homogenized properties of the reference material

Material parameter | Value [GPa⁻¹]
S11 |  1.076
S12 | -0.318
S13 | -0.319
S22 |  1.050
S23 | -0.317
S33 |  1.057
S44 |  2.729
S55 |  2.780
S66 |  2.763

The parameters of the evolutionary algorithm are as follows: the number of genes is 3, the number of chromosomes is 20, the number of iterations is 50, the probability of simple crossover with Gaussian mutation is 90%, and the probability of uniform mutation is 10%. The results of identification and the errors with respect to the actual void are shown in table 2. The corresponding value of the objective function is F = 0.027 GPa⁻¹.

Table 2. Actual and found void side lengths

Void parameter | Actual | Found | Error %
a1 | 0.200 | 0.217 |  8.5
a2 | 0.400 | 0.346 | 13.5
a3 | 0.300 | 0.319 |  6.3

The tests were performed with the use of a Dell PowerEdge R515 server. The server contains two AMD Opteron 6272 processors, each with 16 cores (8 floating point units). In all tests, the parameters of the evolutionary algorithm are the same as in the previous example. The maximum value of the nL1 parameter is 18 (which corresponds to the maximum number of chromosomes evaluated in each iteration of the evolutionary algorithm). The times of identification for all tests are presented in table 3. The speedup is computed with reference to the results of test 1. The parallelization on level 3 (tests 1-6) is not efficient due to the partial parallelization of the BEM algorithm. The parallel evolutionary algorithm and the homogenization procedure are characterized by a similar efficiency (tests 7-16). The maximum number of parallel evolutionary algorithm threads is 18 and the homogenization allows 6 parallel threads, so the total number of cores which may be used in the computations is equal to 108. In the future the authors plan to use a cluster with more cores to check the scalability of the presented approach.
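As an illustration, the reference compliance matrix of table 1 can be assembled and the effective engineering constants of the homogenized material recovered from it. The sketch below is not part of the authors' code; the relations E_i = 1/S_ii, G_ij = 1/S_44..66 and ν_ij = −S_ij/S_ii are the standard orthotropic ones and are assumed here, not taken from the paper:

```python
import numpy as np

# Entries of the reference compliance matrix from table 1 (units: 1/GPa).
S11, S12, S13 = 1.076, -0.318, -0.319
S22, S23, S33 = 1.050, -0.317, 1.057
S44, S55, S66 = 2.729, 2.780, 2.763

# In the six numerical tests each unit stress state picks out one column of S
# (eps = S @ sigma with sigma a unit vector), so the columns found test by
# test assemble into the full 6x6 matrix of equation (3):
S = np.array([
    [S11, S12, S13, 0.0, 0.0, 0.0],
    [S12, S22, S23, 0.0, 0.0, 0.0],
    [S13, S23, S33, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, S44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, S55, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, S66],
])

C = np.linalg.inv(S)  # stiffness matrix, S = C^{-1}

# Effective engineering constants of the homogenized material [GPa].
E1, E2, E3 = 1.0 / S11, 1.0 / S22, 1.0 / S33
G23, G13, G12 = 1.0 / S44, 1.0 / S55, 1.0 / S66
nu12 = -S12 / S11  # Poisson's ratio from the off-diagonal compliance
```

With the table 1 values this gives, e.g., E1 ≈ 0.93 GPa and ν12 ≈ 0.30, i.e. the porous cell is more compliant than the E = 1.0 GPa, ν = 0.3 bulk material, as expected.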
Table 3. Times of identification for different numbers of tasks

Test | nL1 | nL2 | nL3 | Number of threads | Time [s] | Speedup
 1 |  1 | 1 |  1 |  1 | 21 220 | 1
 2 |  1 | 1 |  2 |  2 | 18 136 | 1.17
 3 |  1 | 1 |  4 |  4 | 17 983 | 1.18
 4 |  1 | 1 |  8 |  8 | 17 112 | 1.24
 5 |  1 | 1 | 16 | 16 | 16 449 | 1.29
 6 |  1 | 2 |  1 |  2 | 11 937 | 1.78
 7 |  2 | 1 |  1 |  2 | 12 039 | 1.76
 8 |  2 | 2 |  1 |  4 |  6 861 | 3.09
 9 |  1 | 6 |  1 |  6 |  5 975 | 3.55
10 |  6 | 1 |  1 |  6 |  5 802 | 3.66
11 |  2 | 6 |  1 | 12 |  3 426 | 6.19
12 | 16 | 1 |  1 | 16 |  3 213 | 6.60
13 |  8 | 2 |  1 | 16 |  2 806 | 7.56
15 |  3 | 6 |  1 | 18 |  2 549 | 8.32
16 |  6 | 3 |  1 | 18 |  2 539 | 8.36

6. CONCLUSIONS

The identification of the size of a pore in a micro scale model on the basis of parameters in a macro scale is considered in this work. A three-dimensional unit-cell model of a porous material is modelled and analyzed by the boundary element method (BEM). The main advantage of using the BEM in the analysis is its high accuracy and the fact that it requires discretization of only the outer boundary of the considered models. These advantages are valuable and can be exploited in more complex problems dealing, for instance, with numerical homogenization and optimization or identification. In order to solve the problem, a hierarchical parallelization of the algorithms was developed. In the numerical examples, parameters defining the geometry of a void were successfully identified. The results of numerical tests with wall time measurements for different numbers of cores are shown: the computation time is reduced from about 6 hours on one core to about 40 minutes with the parallel approach.

Acknowledgements. The scientific research has been financed by the Ministry of Science and Higher Education of Poland in the years 2010-2012.

REFERENCES

Araújo, F.C., d'Azevedo, E.F., Gray, L.J., 2010, Boundary-element parallel-computing algorithm for the microstructural analysis of general composites, Comput. Struct., 88, 773-784.
Burczyński, T., Mrozek, A., Górski, R., Kuś, W., 2010a, Molecular statics coupled with the subregion boundary element method in multiscale analysis, Int. J. Multiscale Comput. Eng., 8, 319-331.
Burczyński, T., Kuś, W., Brodacka, A., 2010b, Multiscale modeling of osseous tissues, J. Theor. Appl. Mech., 48, 855-870.
Chen, X.L., Liu, Y.J., 2005, An advanced 3D boundary element method for characterizations of composite materials, Eng. Anal. Bound. Elem., 29, 513-523.
Düster, A., Sehlhorst, H.G., Rank, E., 2012, Numerical homogenization of heterogeneous and cellular materials utilizing the finite cell method, Comput. Mech., 50, 413-431.
Fang, Z., Yan, C., Sun, W., Shokoufandeh, A., Regli, W., 2005, Homogenization of heterogeneous tissue scaffold: A comparison of mechanics, asymptotic homogenization, and finite element approach, ABBI, 2, 17-29.
Gao, X.W., Davies, T.G., 2002, Boundary element programming in mechanics, Cambridge University Press, Cambridge.
Kollár, L.P., Springer, G.S., 2003, Mechanics of composite structures, Cambridge University Press, Cambridge - New York.
Kouznetsova, V., 2002, Computational homogenization for the multi-scale analysis of multi-phase materials, PhD thesis, Technische Universiteit Eindhoven, Eindhoven.
Kuś, W., Burczyński, T., 2008, Parallel bioinspired algorithms in optimization of structures, in: Wyrzykowski, R., Dongarra, J., Karczewski, K., Wasniewski, J., eds, PPAM 2007, LNCS, 4967, 1285-1292.
Michalewicz, Z., 1996, Genetic algorithms + data structures = evolution programs, Springer-Verlag, Berlin.


PARALLEL IDENTIFICATION OF VOIDS IN A MICROSTRUCTURE USING THE BOUNDARY ELEMENT METHOD AND THE BIOINSPIRED ALGORITHM

Abstract

The paper presents the problem of identifying the size of a void in the micro scale on the basis of homogenized material parameters. A three-dimensional unit cell model of a porous microstructure is modelled and analyzed with the boundary element method (BEM). The method is very accurate and, for the considered problem, requires discretization of only the external boundary of the models. The applied identification algorithm has a hierarchical structure which allows the computations to be carried out in parallel on three different levels. A parallel algorithm was used for the evolutionary computations. The solution of the boundary value problems by the BEM and the determination of the effective material properties by numerical homogenization were also parallelized. The way of determining the compliance matrix of the porous microstructure is shown. The matrix is used to formulate the objective function in the identification problem, in which the size of the void is sought. Scalability tests of the algorithm were performed using a server containing eight floating-point units. As a result of applying the hierarchically structured algorithm and the BEM, considerable speedup and accuracy of the computations were obtained.
Received: October 11, 2012 Received in a revised form: October 22, 2012 Accepted: October 29, 2012





Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

APPLICATION OF THE THREE DIMENSIONAL DIGITAL MATERIAL REPRESENTATION APPROACH TO MODEL MICROSTRUCTURE INHOMOGENEITY DURING PROCESSES INVOLVING STRAIN PATH CHANGES
KRZYSZTOF MUSZKA*, ŁUKASZ MADEJ
AGH University of Science and Technology, Faculty of Metals Engineering and Industrial Computer Science, Mickiewicza 30, 30-059 Kraków, Poland
*Corresponding author: muszka@agh.edu.pl
Abstract

The present paper discusses the possibilities of application of the 3D Digital Materials Representation (DMR) approach in the light of multiscale modelling of materials subjected to complex strain paths. In some metal forming processes the material undergoes a complex loading history that introduces significant inhomogeneity of strain. High strain gradients, in turn, lead to high inhomogeneity of the microstructure and make the prediction of the final material properties especially complicated. Proper control of those parameters is very difficult and can be effectively optimised only if numerical tools are involved. The 3D Digital Materials Representation approach is presented and introduced in the present paper into a multiscale finite element model of two metal forming processes characterised by high microstructural gradients: cyclic torsion deformation and Accumulative Angular Drawing (AAD). Due to the combination of the multiscale finite element model with the DMR approach, detailed information on strain inhomogeneities was obtained for both investigated processes.

Key words: 3D digital material representation, multiscale modelling, strain path changes

1. INTRODUCTION

During manufacturing processes, metal may be subjected to complex strain path changes that introduce a high level of both deformation and microstructural inhomogeneity and make the prediction of material behaviour extremely difficult. Existing numerical tools are powerful and offer various possibilities; however, there are still limitations in the modelling of processes that are characterised by non-linear and non-symmetrical deformation modes. The effect of strain path changes on microstructure evolution and mechanical behaviour has been widely studied, both theoretically and experimentally (Davenport et al., 1999; Jorge-Badiola & Gutierrez, 2004). It has

been found that this processing parameter significantly retards recrystallization, precipitation and phase transformation kinetics during hot deformation of steels. Strain path changes applied during cold deformation also play an important role in the control of strain and microstructure inhomogeneity. High local strain accumulation leads to significant grain refinement and significantly improves the strength of the material, but in some cases (severe plastic deformation methods) decreases its ductility. Understanding of the strain path in the light of the aforementioned problems is therefore of paramount importance. Computer modelling needs to be involved in order to learn how to control the microstructure and



deformation inhomogeneity during complex loading processes. Due to the nonlinearity and lack of symmetry, simulation of deformation involving complex strain path changes requires 3D models to be created. As most of the microstructural phenomena during deformation take place at various scales, a multiscale modelling approach should also be considered. Proper representation of the microstructural features can be effectively achieved with the recently developed Digital Materials Representation (DMR) technique (Madej et al., 2011), where the microstructure is explicitly represented by a properly divided heterogeneous finite element mesh. In the present paper, the 3D Digital Materials Representation approach is presented and incorporated into a multiscale finite element model of two metal forming processes characterised by high microstructural gradients. The first case study involves cyclic torsion deformation of an FCC structure, whereas in the second case study the Accumulative Angular Drawing (AAD) process of a BCC structure is modelled.

2. EXPERIMENTAL INVESTIGATION

2.1. Forward/reverse torsion test

The effect of strain reversal on austenite subjected to strain path reversal was studied in torsion using a model alloy system with a chemical composition of 0.092C-30.3Ni-1.67Mn-1.51Mo-0.19Si (in wt.%). Since in Fe-30wt%Ni systems the austenite phase is stable down to room temperature and is characterised by a similar stacking fault energy and high temperature flow behaviour as low carbon steels, such alloys are widely used to model the austenite phase of those materials. The initial microstructure of the studied material, represented by an EBSD map, is shown in figure 1. Solid bar torsion specimens with a gauge length of 20 mm and a gauge diameter of 10 mm were machined out of the solution treated plate. The torsion test was carried out using a servo-hydraulic torsion rig at 840°C with a strain rate of 1/s. Two deformation routes were applied, both with the same equivalent total strain of 2. In the first case, 4 cycles of forward/reverse torsion with a strain of 0.25 per pass (8 passes in total) were applied. In the second case, 2 passes of deformation with a strain of 1 per pass and only one reversal were applied.

Fig. 1. Electron Backscatter Diffraction (EBSD) map of the initial austenite microstructure a); optical microstructures of the deformed samples using deformation routes 1 and 2 - b), c) respectively.

Deformed microstructures observed using optical microscopy are shown in figure 1b, c. It can be seen that in both cases the original shape of the austenite grains has been restored. In the case of the 2-pass deformation, however, the initial austenite microstructure has been subdivided into well-developed lamellar structures separated by high angle grain boundaries (Sun et al., 2011). The recorded flow curves are summarized in figure 2. In both cases the flow stress level upon reversal is lower, which suggests the occurrence of the so-called Bauschinger effect: due to rearrangement of the substructure upon reversal, the dislocation density in the reversed structure is lower. Additionally, based on the flow curves it can be seen that this effect has been multiplied in the 8-pass test.
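For orientation, the twist required to reach a given surface equivalent strain on this specimen can be estimated from the gauge geometry. The relations used below (surface shear strain γ = rθ/L and the von Mises conversion ε̄ = γ/√3) are standard torsion-testing assumptions, not values from the paper:

```python
import math

# Gauge geometry of the solid torsion specimen: length 20 mm, radius 5 mm.
L, r = 20.0, 5.0

def twist_for_strain(eps_eq):
    """Twist angle [rad] giving surface equivalent strain eps_eq,
    assuming gamma = r * theta / L and eps_eq = gamma / sqrt(3)."""
    return eps_eq * math.sqrt(3.0) * L / r

for eps in (0.25, 1.0):  # per-pass strains of the two deformation routes
    print(f"eps = {eps}: theta = {twist_for_strain(eps):.2f} rad")
```

Under these assumptions, route 1 corresponds to about 1.73 rad of twist per pass and route 2 to about 6.93 rad per pass.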

Fig. 2. Flow curves recorded during cyclic torsion deformation of Fe-30wt.%Ni.

The present study confirmed that the strain path effect represents one of the most important processing parameters characterising hot metal forming processes. The various austenite states resulting from different strain paths in steel are crucial, since they affect the subsequent phase transformations and thus their products, which in turn has an effect on the properties of the final materials. Computer modelling of such problems can put some new insight into understanding and optimisation of the processes carried out with strain path changes.

2.2. Accumulative Angular Drawing (AAD) process

In order to carry out the study of the AAD process, a special die set was designed such that an ordinary drawing bench could be used (Wielgus et al., 2010). A microalloyed steel (0.07C/1.37Mn/0.27Si/0.07Nb/0.009N), supplied as a wire rod with a homogeneous equiaxed ferrite microstructure and a mean grain size of 15 μm, was used in this study. The 6.5 mm diameter wire rods were drawn down to a diameter of 4 mm through a set of three dies (in two passes of drawing) with a total strain of 0.97. Although the AAD design allows various combinations of die positioning to be used, the present study concentrated on the stepped die positioning, in which the offset from the drawing line between the successive dies was equal to 15°.

Optical and electron microscopy observations have shown a high level of microstructure inhomogeneity, i.e. substantial grain refinement was achieved in the transverse section of the wires, in the areas near the surface. Grains were also elongated along the wire axis. The dependence of grain shape, size and distribution on the transverse cross section on the processing route is clearly seen in figure 3. The refinement of the microstructure is localised in the near-surface layers, however, with various intensities.

Fig. 3. Initial microstructures of the studied material taken in the longitudinal a) and transverse b) cross-section; Euler angle maps of the deformed wires taken near the surface c) and in the centre d) of the longitudinal cross section of the deformed wire.

The presented work confirmed that the strain path applied in the AAD process directly affects the microstructure and texture changes in the final product. It is a combined effect of: reduction of the area, strain accumulation in the outer part of the wire due to the bending/unbending process, and the desired shear deformation. Again, numerical modelling can be a valuable support to the experimental research on these effects.

3. NUMERICAL INVESTIGATION

The main aim of the present work was to study whether a combination of multiscale finite element modelling with the 3D DMR approach can be used to effectively model the complex deformation processes described in the previous chapter. Calculations were performed using the Abaqus Standard/Explicit package. In both cases, the material behaviour was described using an elasto-plastic model with combined isotropic-kinematic hardening (Lemaitre & Chaboche, 1990). The evolution law of this model consists of two main components: a nonlinear kinematic hardening component, which describes the translation of the yield surface in stress space through the backstress α:

α̇k = Ck (1/σ⁰)(σ − α) ε̇̄pl − γk αk ε̇̄pl,   α = Σ_{k=1}^{N} αk		(1)

where αk is the k-th backstress, N is the number of backstresses, σ⁰ is the equivalent stress defining the size of the yield surface, ε̇̄pl is the equivalent plastic strain rate, and Ck and γk are material parameters; and an isotropic hardening component describing the change of the equivalent stress defining the size of the yield surface as a function of plastic deformation:

σ⁰ = σ|₀ + Q∞(1 − e^(−b ε̄pl))		(2)

where σ|₀ is the yield stress at zero plastic strain and Q∞ and b are material parameters. The model is



based on two major model parameters: Ck (the initial kinematic hardening moduli) and γk (the rates at which the kinematic hardening moduli decrease with increasing plastic deformation). These parameters can be specified directly, calibrated based on half-cycle test data (unidirectional tension or compression), or obtained from test data for a stabilized cycle (when the stress-strain curve no longer changes shape from one cycle to the next). The parameters of the model have been identified using an inverse approach based on data from cyclic torsion tests performed on the studied materials at both deformation temperatures. An example of the comparison between the torque vs. twist angle data calculated using the calibrated hardening model and the data measured experimentally (cyclic torsion test) is presented in figure 5. It can be seen that the agreement between the model and the experiment is good, which proves the accuracy of the applied methodology.

Fig. 5. Comparison of the measured and calculated torque vs. twist angle.

3.1. Multiscale model of the cyclic torsion test

The multiscale model of the torsion test was designed as seen in figure 4. The submodelling technique was used to bridge the different scales. A global model of the gauge section was prepared and analysed using the Abaqus Standard code. Next, the submodel was generated using the DMR approach and the calculations were performed again. A unit cell (100 μm × 100 μm × 100 μm) with 37 grains was created to capture the effect of the process on the inhomogeneity of both strain and microstructure. The parameters of the combined material hardening model applied in the submodel were additionally diversified using the Gauss distribution function to reflect differences in the crystallographic orientations.

Fig. 4. Multiscale model of the cyclic torsion test.

Equivalent von Mises stress distributions in the global model and in the unit cell during the first forward/reverse cycle of the torsion test are presented in figures 6a, b. It can be seen that the application of the multiscale modelling approach and its combination with the 3D DMR approach resulted in much higher accuracy of the results compared to the simulation using only the global model (figure 6a). The global material response obtained from both models can be similar to some extent; however, the macro scale model neglects inhomogeneities occurring along microstructure features. Additionally, the 3D DMR approach properly captured not only inhomogeneities in the stress or strain state but also grain shape changes as an effect of strain reversal (figure 6c). It can be seen that the first pass of cyclic deformation caused grain rotation. Its original position was then restored after strain reversal and application of the second pass of deformation with the same strain level applied in the opposite torsion direction. A macro scale model is unable to provide such detailed results. Due to the presented advantages, the authors decided to apply the same approach to model the AAD process.
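The combined hardening law of equations (1) and (2) can be exercised in one dimension with a simple explicit, strain-driven integration. The sketch below uses a single backstress (N = 1) and purely illustrative parameter values, not the calibrated ones from the inverse analysis:

```python
import math

# Illustrative material constants (hypothetical, not the calibrated values).
E      = 200e3   # Young's modulus [MPa]
sig0   = 200.0   # yield stress at zero plastic strain [MPa]
Q_inf  = 50.0    # saturation value of isotropic hardening [MPa]
b      = 10.0    # isotropic saturation rate
C1     = 20e3    # kinematic hardening modulus [MPa]
gamma1 = 100.0   # kinematic recall parameter

def load_to(eps_end, n_sub=2000):
    """Explicit 1D integration of the combined model under monotonic strain."""
    sig = alpha = p = 0.0  # stress, backstress, accumulated plastic strain
    deps = eps_end / n_sub
    for _ in range(n_sub):
        sig_trial = sig + E * deps
        yld = sig0 + Q_inf * (1.0 - math.exp(-b * p))   # eq. (2)
        f = abs(sig_trial - alpha) - yld
        if f <= 0.0:
            sig = sig_trial                              # elastic step
        else:
            n = 1.0 if sig_trial - alpha > 0.0 else -1.0
            dp = f / (E + C1)                            # small-step estimate
            p += dp
            alpha += (C1 * n - gamma1 * alpha) * dp      # eq. (1) with N = 1
            sig = sig_trial - E * n * dp
    return sig, alpha, p

sig, alpha, p = load_to(0.01)  # pull to 1% strain
```

In this sketch the backstress saturates towards C1/γ1 and the flow stress grows by both the kinematic translation and the isotropic expansion of the yield surface; on load reversal the translated yield surface yields earlier, which is the Bauschinger effect discussed in section 2.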

Fig. 6. Von Mises stress distributions in the global model a) and in the submodel b); equivalent plastic strain distribution in the selected grain c).

3.2. Multiscale model of the AAD process

The multiscale model of the AAD process, due to its complexity, requires two steps of submodelling, as presented in figure 7. First, a global model with 42 000 eight-node hexahedral reduced integration elements with hourglass control (C3D8R) was realized using Abaqus Explicit. Drawing of a 300 mm long wire with an initial diameter of 6.5 mm was modelled. The tools were meshed with quad-dominated discrete rigid elements (R3D). Furthermore, the analysis was replicated on a smaller cylindrical region (10 mm long) subdivided from the global model using Abaqus Standard, and a much finer mesh was used. Finally, the second submodel was generated using the 3D DMR approach and the calculations were performed again, using Abaqus Standard. A set of 5 unit cells (100 μm × 100 μm × 100 μm) containing 37 grains each was created to capture the effect of the process on the inhomogeneity of both strain and microstructure. The obtained global equivalent plastic strain distributions on the surface and on the transverse cross section of the drawn wire after the first pass are presented in figure 8.

Fig. 7. Multiscale model of the AAD process.
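The total strain quoted for the two-pass drawing schedule in section 2.2 follows directly from the diameter reduction; a quick check using the standard logarithmic-strain definition:

```python
import math

# True (logarithmic) strain for drawing a wire from d0 = 6.5 mm to d = 4 mm:
# eps = ln(A0/A) = 2 * ln(d0/d), independent of how many passes are used.
d0, d = 6.5, 4.0
eps_total = 2.0 * math.log(d0 / d)
print(round(eps_total, 2))
```

which recovers the total strain of 0.97 given for the AAD schedule.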

Fig. 8. Examples of calculations: equivalent plastic strain in the drawn wire after the 1st pass of drawing - global model and unit cells attached at various positions of the wire's cross-section.
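The per-grain diversification of the hardening parameters mentioned for the 37-grain unit cells can be sketched as follows; the base values and the 5% spread are illustrative assumptions, not the values used by the authors:

```python
import random

random.seed(1)
N_GRAINS = 37                      # grains per unit cell, as in the submodels
C_BASE, GAMMA_BASE = 20e3, 100.0   # hypothetical base hardening parameters
SPREAD = 0.05                      # assumed 5% relative standard deviation

# One (C, gamma) pair per grain, drawn from a Gaussian around the base values
# to mimic the scatter caused by different crystallographic orientations.
grain_params = [
    (random.gauss(C_BASE, SPREAD * C_BASE),
     random.gauss(GAMMA_BASE, SPREAD * GAMMA_BASE))
    for _ in range(N_GRAINS)
]
```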



It can be noticed that the inhomogeneity of strain that is characteristic of this deformation process was properly captured by the applied model. Again, much more detailed information regarding strain localisation and inhomogeneities can be extracted from the submodels in comparison with the macro scale model predictions. The 3D DMR approach shows different levels of strain inhomogeneity, localisation and distortion across subsequent grains resulting from the AAD process. Higher strain accumulation near the wire surface was also predicted by the computer model (figure 8d, e, f). It can be seen that the application of the 3D DMR approach to the modelling of AAD can be an effective support for the experimental research.

5. CONCLUSIONS

Two complex loading cases with high local strain accumulation were simulated using a multiscale FEM model combined with the 3D Digital Materials Representation approach. Based on the presented modelling results it can be concluded that the applied modelling strategy was able to capture most of the important phenomena accompanying processes with complex deformation modes with reasonably good accuracy. Future research will focus on the application of a crystal plasticity model integrated with DMR, which will further extend the predictive capabilities of the proposed methodology.

Acknowledgements. Financial support from the Polish Ministry of Science and Higher Education (grant no. N N508583839) is gratefully acknowledged. FEM calculations were realised at the ACK AGH Cyfronet Computing Centre under grant no. MNiSW/IBM_BC_HS21/AGH/075/2010.

REFERENCES
Davenport, S.B., Higginson, R.L., Sellars, C.M., 1999, The effect of strain path on material behaviour during hot rolling of FCC metals, Philosophical Transactions of the Royal Society of London A, 357, 1645-1661.
Jorge-Badiola, D., Gutierrez, I., 2004, Study of the strain reversal effect on the recrystallization and strain-induced precipitation in a Nb-microalloyed steel, Acta Materialia, 52, 333-341.
Lemaitre, J., Chaboche, J.L., 1990, Mechanics of Solid Materials, Cambridge University Press.
Madej, L., Rauch, L., Perzynski, K., Cybulka, P., 2011, Digital Material Representation as an efficient tool for strain inhomogeneities analysis at the micro scale level, Archives of Civil and Mechanical Engineering, 11, 661-679.

Sun, L., Muszka, K., Wynne, B.P., Palmiere, E.J., 2011, The effect of strain path reversal on high-angle boundary formation by grain subdivision in a model austenitic steel, Scripta Materialia, 64, 280-283.
Wielgus, M., Majta, J., Łuksza, J., Paćko, P., 2010, Effect of strain path on mechanical properties of wire drawn products, Steel Research International, 81, 490-493.

APPLICATION OF THE THREE-DIMENSIONAL DIGITAL MATERIAL REPRESENTATION APPROACH TO THE MODELLING OF MICROSTRUCTURE INHOMOGENEITY IN PROCESSES CHARACTERIZED BY STRAIN PATH CHANGES

Summary

The paper presents the possibilities of using the three-dimensional Digital Material Representation for the multiscale modelling of materials deformed under changing strain path conditions. In metal forming processes the material is subjected to a complex deformation history, which is characterized by a large inhomogeneity of strain. A large strain gradient leads in turn to inhomogeneous microstructure development and makes the prediction of the final product properties particularly complicated. Proper control of these parameters is difficult and can be effectively optimized only when supported by numerical tools. The approach presented in this work has been applied to the modelling of two metal forming processes characterized by a changing strain path: cyclic deformation by torsion and the Angular Accumulative Drawing (AAD) process. It has been shown that combining a multiscale FEM model with the three-dimensional Digital Material Representation significantly improves the accuracy of the results obtained when modelling strain inhomogeneity in the considered metal forming processes.
Received: October 17, 2012 Received in a revised form: November 26, 2012 Accepted: December 11, 2012

IDENTIFICATION OF INTERFACE POSITION IN TWO-LAYERED DOMAIN USING GRADIENT METHOD COUPLED WITH THE BEM
EWA MAJCHRZAK1*, BOHDAN MOCHNACKI2

1 Institute of Computational Mechanics and Engineering, Silesian University of Technology, Konarskiego 18a, 44-100 Gliwice, Poland
2 Higher School of Labour Safety Management, Bankowa 8, 40-007 Katowice, Poland
*Corresponding author: ewa.majchrzak@polsl.pl

Abstract

A non-homogeneous domain composed of two sub-domains is considered, while the position of the internal interface is unknown. The additional information necessary to solve the identification problem results from the knowledge of the temperature field at a set of points X selected from the analyzed domain. From the practical point of view the points X should be located at the external surface of the system. The steady temperature field in the domain considered is described by two energy equations (Laplace equations), a continuity condition given on the contact surface and boundary conditions given on the external surface of the domain. To solve the inverse problem the gradient method is used. The sensitivity coefficients appearing in the final form of the equation, which allows one to find the solution using a certain iterative procedure, are determined by means of the implicit approach of shape sensitivity analysis. This approach is especially convenient in the case of boundary element method application (this method is used at the stage of numerical algorithm construction). In the final part of the paper examples of computations are shown.

Key words: heat transfer, inverse problem, gradient method, boundary element method

1. INTRODUCTION

The following boundary value problem is considered

(x, y) \in \Omega_e : \quad \lambda_e \left( \frac{\partial^2 T_e(x, y)}{\partial x^2} + \frac{\partial^2 T_e(x, y)}{\partial y^2} \right) = 0, \quad e = 1, 2 \qquad (1)

where the index e corresponds to the respective sub-domains, λ_e is the thermal conductivity and T, x, y denote the temperature and the spatial co-ordinates, respectively. Equation (1) is supplemented by typical boundary conditions, in particular

(x, y) \in \Gamma_{ex} : \quad -\lambda_1 \frac{\partial T_1(x, y)}{\partial n} = \alpha \left[ T_1(x, y) - T_a \right] \qquad (2)

where Γ_ex is the external surface of the domain marked in figure 1, T_a is the ambient temperature, α is the heat transfer coefficient and ∂T_1/∂n denotes the normal derivative. On the surface between the sub-domains the continuity of the heat flux and of the temperature field is assumed, this means

(x, y) \in \Gamma_c : \quad -\lambda_1 \frac{\partial T_1(x, y)}{\partial n} = -\lambda_2 \frac{\partial T_2(x, y)}{\partial n}, \quad T_1(x, y) = T_2(x, y) \qquad (3)

On the internal surface Γ_in (c.f. figure 1) the Dirichlet condition is taken into account

(x, y) \in \Gamma_{in} : \quad T_2(x, y) = T_b \qquad (4)

On the remaining parts of the boundary the no-flux condition is accepted. As is well known, when the thermophysical and geometrical parameters appearing in the mathematical model of the process considered are given, the direct problem is formulated and the temperature distribution in the domain can be found. The inverse problem considered here is based on the assumption that the temperature distribution at the boundary Γ_ex is known (e.g. from thermographs), while the position of Γ_c is unknown (Ciesielski & Mochnacki, 2012; Romero Mendez et al., 2010).

Fig. 1. Domain considered.

2. SOLUTION OF DIRECT PROBLEM BY MEANS OF THE BOUNDARY ELEMENT METHOD

The boundary integral equations corresponding to the Laplace equations (1) are the following (Brebbia & Dominguez, 1992; Majchrzak, 2001)

(\xi, \eta) \in \Gamma_e : \quad B(\xi, \eta) T_e(\xi, \eta) + \int_{\Gamma_e} T_e^*(\xi, \eta, x, y) \, q_e(x, y) \, \mathrm{d}\Gamma_e = \int_{\Gamma_e} q_e^*(\xi, \eta, x, y) \, T_e(x, y) \, \mathrm{d}\Gamma_e \qquad (5)

where B(ξ, η) ∈ (0, 1) is the coefficient connected with the local shape of the boundary, (ξ, η) is the observation point, q_e(x, y) = -λ_e ∂T_e(x, y)/∂n, n = [n_x, n_y] and T_e*(ξ, η, x, y) is the fundamental solution

T_e^*(\xi, \eta, x, y) = \frac{1}{2 \pi \lambda_e} \ln \frac{1}{r} \qquad (6)

where r is the distance between the points (ξ, η) and (x, y), and

q_e^*(\xi, \eta, x, y) = -\lambda_e \frac{\partial T_e^*(\xi, \eta, x, y)}{\partial n} = \frac{d}{2 \pi r^2} \qquad (7)

while

d = (x - \xi) \, n_x + (y - \eta) \, n_y \qquad (8)

In the numerical realization of the BEM the boundaries are divided into boundary elements and the integrals appearing in equations (5) are substituted by sums of integrals over these elements. After the mathematical manipulations one obtains two systems of algebraic equations (Majchrzak, 2001)

\mathbf{G}_e \, \mathbf{q}_e = \mathbf{H}_e \, \mathbf{T}_e \qquad (9)

Now, the following notation is introduced (c.f. figure 1): T_1^1, T_1^2, T_1^ex, q_1^1, q_1^2, q_1^ex are the vectors of the functions T and q at the boundaries Γ_1, Γ_2, Γ_ex of domain 1; T_c1, T_c2, q_c1, q_c2 are the vectors of the functions T and q on the contact surface Γ_c between sub-domains 1 and 2; T_2^3, T_2^4, T_2^in, q_2^3, q_2^4, q_2^in are the vectors of the functions T and q at the boundaries Γ_3, Γ_4, Γ_in of domain 2. Then one has, for sub-domain 1,

\begin{bmatrix} \mathbf{G}_1^1 & \mathbf{G}_1^{ex} & \mathbf{G}_1^2 & \mathbf{G}_{c1} \end{bmatrix} \begin{bmatrix} \mathbf{q}_1^1 \\ \mathbf{q}_1^{ex} \\ \mathbf{q}_1^2 \\ \mathbf{q}_{c1} \end{bmatrix} = \begin{bmatrix} \mathbf{H}_1^1 & \mathbf{H}_1^{ex} & \mathbf{H}_1^2 & \mathbf{H}_{c1} \end{bmatrix} \begin{bmatrix} \mathbf{T}_1^1 \\ \mathbf{T}_1^{ex} \\ \mathbf{T}_1^2 \\ \mathbf{T}_{c1} \end{bmatrix} \qquad (10)

and for sub-domain 2

\begin{bmatrix} \mathbf{G}_{c2} & \mathbf{G}_2^3 & \mathbf{G}_2^4 & \mathbf{G}_2^{in} \end{bmatrix} \begin{bmatrix} \mathbf{q}_{c2} \\ \mathbf{q}_2^3 \\ \mathbf{q}_2^4 \\ \mathbf{q}_2^{in} \end{bmatrix} = \begin{bmatrix} \mathbf{H}_{c2} & \mathbf{H}_2^3 & \mathbf{H}_2^4 & \mathbf{H}_2^{in} \end{bmatrix} \begin{bmatrix} \mathbf{T}_{c2} \\ \mathbf{T}_2^3 \\ \mathbf{T}_2^4 \\ \mathbf{T}_2^{in} \end{bmatrix} \qquad (11)

The continuity condition (3) written in the form

\mathbf{q}_{c1} = -\mathbf{q}_{c2} = \mathbf{q}, \quad \mathbf{T}_{c1} = \mathbf{T}_{c2} = \mathbf{T} \qquad (12)

allows one to couple equations (10) and (11). Taking into account the remaining boundary conditions (the no-flux condition on Γ_1, Γ_2, Γ_3, Γ_4, the Robin condition (2) on Γ_ex and the Dirichlet condition (4) on Γ_in), one finally obtains

\begin{bmatrix} \mathbf{H}_1^1 & \mathbf{H}_1^{ex} - \alpha \mathbf{G}_1^{ex} & \mathbf{H}_1^2 & \mathbf{H}_{c1} & -\mathbf{G}_{c1} & \mathbf{0} & \mathbf{0} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} & \mathbf{0} & -\mathbf{H}_{c2} & -\mathbf{G}_{c2} & -\mathbf{H}_2^3 & -\mathbf{H}_2^4 & \mathbf{G}_2^{in} \end{bmatrix} \begin{bmatrix} \mathbf{T}_1^1 \\ \mathbf{T}_1^{ex} \\ \mathbf{T}_1^2 \\ \mathbf{T} \\ \mathbf{q} \\ \mathbf{T}_2^3 \\ \mathbf{T}_2^4 \\ \mathbf{q}_2^{in} \end{bmatrix} = \begin{bmatrix} -\alpha \mathbf{G}_1^{ex} T_a \\ \mathbf{H}_2^{in} T_b \end{bmatrix} \qquad (13)

or

\mathbf{A} \mathbf{Y} = \mathbf{B} \qquad (14)

where A is the main matrix of the system of equations (13), Y is the vector of unknowns and B is the vector of free terms. The system of equations (14) allows one to find the missing boundary values. The knowledge of the nodal boundary temperatures and heat fluxes constitutes a basis for the computation of internal temperatures at an optional set of points selected from the domain considered.

3. SOLUTION OF INVERSE PROBLEM USING GRADIENT METHOD COUPLED WITH THE BEM

The inverse problem considered here is based on the assumption that the temperature distribution at the boundary Γ_ex is known, while the position of Γ_c is unknown. The surface Γ_c is defined by the set of points (x_n, y_n), n = 1, 2, ..., N. The aim of the investigations is to determine the values of the shape parameters b_1, b_2, ..., b_N which correspond to the co-ordinates y_n shown in figure 2.

The criterion which should be minimized is of the form (Kurpisz & Nowak, 1995; Burczyński, 2003)

S(b_1, \ldots, b_n, \ldots, b_N) = \sum_{i=1}^{M} \left( T_i - T_{di} \right)^2 \qquad (15)

where T_di, T_i are the temperatures known from the measurements and the calculated ones, respectively. In this paper the real measurements are substituted by the temperatures T_i obtained from the direct problem solution for an arbitrarily assumed position of the points (x_n, y_n).

Using the necessary condition of the optimum, one obtains

\frac{\partial S}{\partial b_n} = 2 \sum_{i=1}^{M} \left( T_i - T_{di} \right) \frac{\partial T_i}{\partial b_n} = 0, \quad n = 1, 2, \ldots, N \qquad (16)

The function T_i is expanded into a Taylor series taking into account the first derivatives

T_i = T_i^k + \sum_{j=1}^{N} \left. \frac{\partial T_i}{\partial b_j} \right|_{b_j = b_j^k} \left( b_j^{k+1} - b_j^k \right) \qquad (17)

where b_j^0 is the arbitrarily assumed value of the parameter b_j, while for k > 0 it results from the previous iteration. Introducing (17) into (16) one has

\sum_{i=1}^{M} \sum_{j=1}^{N} U_{i,j}^k \, U_{i,n}^k \left( b_j^{k+1} - b_j^k \right) = \sum_{i=1}^{M} \left( T_{di} - T_i^k \right) U_{i,n}^k, \quad n = 1, 2, \ldots, N \qquad (18)

where

U_{i,j}^k = \left. \frac{\partial T_i}{\partial b_j} \right|_{b_j = b_j^k} \qquad (19)

are the sensitivity coefficients. From the system of equations (18) the values b_j^{k+1} are calculated. To determine the sensitivity coefficients the methods of shape sensitivity analysis are used (Kleiber, 1997; Majchrzak et al., 2011; Freus et al., 2012). Here the implicit differentiation method is applied; it belongs to the discretized approach and is based on the differentiation of the algebraic boundary element matrix equations (14). So, the system of equations (14) should be differentiated with respect to the parameter b_j and then

\frac{\partial \mathbf{A}}{\partial b_j} \mathbf{Y} + \mathbf{A} \frac{\partial \mathbf{Y}}{\partial b_j} = \frac{\partial \mathbf{B}}{\partial b_j} \qquad (20)

or

\mathbf{A} \frac{\partial \mathbf{Y}}{\partial b_j} = \frac{\partial \mathbf{B}}{\partial b_j} - \frac{\partial \mathbf{A}}{\partial b_j} \mathbf{Y} \qquad (21)
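A compact numerical sketch of the implicit differentiation step may be helpful here. The snippet below is illustrative, not the authors' code: a small synthetic 2×2 system A(b)Y = B(b) stands in for the assembled BEM equations (14), its derivatives are formed analytically as in equation (20), and the sensitivity ∂Y/∂b follows from equation (21), verified against a central finite difference of the direct solution.

```python
import numpy as np

def system(b):
    # Hypothetical parameter-dependent stand-in for the BEM system (14): A(b) Y = B(b).
    A = np.array([[2.0 + b, 1.0],
                  [1.0, 3.0 + b * b]])
    B = np.array([1.0, 2.0 * b])
    return A, B

def system_derivative(b):
    # Analytical derivatives dA/db and dB/db, c.f. equation (20).
    dA = np.array([[1.0, 0.0],
                   [0.0, 2.0 * b]])
    dB = np.array([0.0, 2.0])
    return dA, dB

def solve_with_sensitivity(b):
    # Direct solution Y and sensitivity dY/db from equation (21):
    # A dY/db = dB/db - (dA/db) Y.
    A, B = system(b)
    Y = np.linalg.solve(A, B)
    dA, dB = system_derivative(b)
    dY = np.linalg.solve(A, dB - dA @ Y)
    return Y, dY

b = 1.5
Y, dY = solve_with_sensitivity(b)

# Verification by a central finite difference of the direct solution.
eps = 1e-6
Yp = np.linalg.solve(*system(b + eps))
Ym = np.linalg.solve(*system(b - eps))
dY_fd = (Yp - Ym) / (2.0 * eps)
print(np.max(np.abs(dY - dY_fd)))  # close to zero
```

The columns ∂Y/∂b_j assembled over all shape parameters provide the sensitivity coefficients U_{i,j} of equation (19).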

It should be pointed out that the derivatives of the boundary element matrices are calculated analytically (Majchrzak et al., 2011).

4. RESULTS OF COMPUTATIONS

The domain of dimensions 2L × L (L = 0.02 m) shown in figure 1 has been considered. At first, the direct problem described in section 1 has been solved. The following input data have been introduced: thermal conductivities λ1 = 0.1 W/(mK), λ2 = 0.2 W/(mK), heat transfer coefficient α = 10 W/(m²K), ambient temperature Ta = 20°C (c.f. condition (2)), boundary temperature Tb = 37°C (c.f. condition (4)). The shape of the internal surface Γc has been assumed in the form of a parabolic function (other shapes could equally be taken into account)
y(x) = \frac{0.8 L (x - L)^2 - y_p \, x (x - 2L)}{L^2} \qquad (22)

where (L, y_p) = (0.02 m, 0.012 m) is the tip of the parabola. The discretization of the boundaries using linear boundary elements is shown in figure 2.

Fig. 3. Temperature distribution in the domain considered
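Reading equation (22) as a parabola that takes the value 0.8L at x = 0 and x = 2L and passes through the tip (L, y_p), the assumed interface can be checked directly with the data of this example, L = 0.02 m and y_p = 0.012 m (the helper name below is ours, not the paper's):

```python
L = 0.02    # m, half of the domain width 2L
yp = 0.012  # m, y co-ordinate of the parabola tip

def interface_y(x):
    # Parabolic internal surface of equation (22):
    # y(x) = [0.8 L (x - L)^2 - yp x (x - 2L)] / L^2
    return (0.8 * L * (x - L) ** 2 - yp * x * (x - 2.0 * L)) / L ** 2

# End points (0.8 L = 0.016 m) and tip (yp = 0.012 m) of the interface.
print(interface_y(0.0), interface_y(L), interface_y(2.0 * L))
```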

Fig. 4. Temperature distribution at the external surface

Fig. 2. Discretization of boundaries

In figure 3 the temperature distribution in the domain considered is presented, while figure 4 illustrates the course of the temperature at the external surface. To solve the inverse problem, 29 shape sensitivity coefficients corresponding to the y co-ordinates of the nodes from 29 (= 68) to 47 (= 50) (figure 2) have been distinguished. The nodes 28 and 48 are fixed (c.f. equation (22)). So, 29 additional problems connected with the determination of the sensitivity functions have been formulated.
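Once the sensitivity coefficients are available, one iteration of equations (18) is a small linear solve for the parameter increments. The sketch below is illustrative only; the sensitivity matrix U and the temperature vectors are random stand-ins, not data from the paper, where U comes from the BEM sensitivity solutions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sensitivity matrix U^k (M measurement points, N shape parameters)
# and temperature vectors standing in for the quantities of equation (18).
M, N = 19, 5
U = rng.normal(size=(M, N))
T_meas = rng.normal(size=M)   # T_d, the "measured" temperatures
b_k = np.zeros(N)             # current shape parameters b^k
T_k = rng.normal(size=M)      # temperatures computed for b^k

# Equation (18) in matrix form: (U^T U) (b^{k+1} - b^k) = U^T (T_d - T^k).
delta_b = np.linalg.solve(U.T @ U, U.T @ (T_meas - T_k))
b_next = b_k + delta_b
print(b_next.shape)  # one new value per shape parameter
```

The direct problem is then re-solved for b^{k+1} and the loop repeats until the criterion (15) stops decreasing.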

Fig. 5. Results of identification, variant 1

The identification problem has been solved under the assumption that the temperatures at the nodes from 5 to 23 (figure 2) are known and the initial position of the internal boundary is described by function (22) with yp = 0.015 m or yp = 0.011 m (the different starting points allow one to observe the course of the iteration process). In figures 5 and 6 the results of the computations are shown. It is visible that


for exact input data the exact position of the boundary is obtained and the iteration process is convergent.

Fig. 6. Results of identification, variant 2

5. CONCLUSIONS

The algorithm proposed can be useful, among others, in medical practice (estimation of wound shape on the basis of the surface temperature distribution). It should be pointed out that both from the mathematical and the numerical points of view the problem is rather complicated, but taking into account the practical applications it seems that scientific research in this scope should be continued. The algorithm proposed allows one to identify complex shapes of the internal boundary (the co-ordinates y_n are estimated separately). In a similar way 3D problems can also be solved. In the future, detailed research on the convergence of the iterative procedure should also be done.

REFERENCES

Brebbia, C.A., Dominguez, J., 1992, Boundary elements, an introductory course, CMP, McGraw-Hill Book Company, London.
Burczyński, T., 2003, Sensitivity analysis, optimization and inverse problems, in: Boundary element advances in solid mechanics, eds, Beskos, D., Maier, G., Springer Verlag, Wien, New York, 245-307.
Ciesielski, M., Mochnacki, B., 2012, Numerical analysis of interactions between skin surface temperature and burn wound shape, Scientific Research of the Institute of Mathematics and Computer Science, 1, 15-22.
Freus, S., Freus, K., Majchrzak, E., Mochnacki, B., 2012, Identification of internal boundary position in two-layers domain on the basis of external surface temperature distribution, CD-ROM Proceedings of the 6th European Congress on Computational Methods in Applied Sciences and Engineering (ECCOMAS 2012), eds, Eberhardsteiner, J., Bohm, H.J., Rammerstorfer, F.G., Vienna University of Technology, Vienna, Austria.
Kleiber, M., 1997, Parameter sensitivity, J. Wiley & Sons Ltd., Chichester.
Kurpisz, K., Nowak, A.J., 1995, Inverse Thermal Problems, Computational Mechanics Publications, Southampton-Boston.
Majchrzak, E., 2001, Boundary element method in heat transfer, Publ. of the Czestochowa University of Technology, Czestochowa (in Polish).
Majchrzak, E., Freus, K., Freus, S., 2011, Shape sensitivity analysis. Implicit approach using boundary element method, Scientific Research of the Institute of Mathematics and Computer Science, 1, 151-162.
Romero Mendez, R., Jimenez-Lozano, J.N., Sen, M., Gonzalez, F.J., 2010, Analytical solution of a Pennes equation for burn-depth determination from infrared thermographs, Mathematical Medicine and Biology, 27, 21-38.

APPLICATION OF THE GRADIENT METHOD AND THE BEM TO THE IDENTIFICATION OF THE SHAPE OF THE BOUNDARY BETWEEN SUB-DOMAINS IN A TWO-LAYERED NON-HOMOGENEOUS SOLID DOMAIN

Summary

The paper considers a non-homogeneous solid domain composed of two sub-domains, while the position of the interface between them is not known. The additional information allowing one to solve the inverse problem formulated in this way is given by the temperature values at the points X distinguished in the domain considered. From the practical point of view the sensor locations should be placed on the external surface remaining in contact with the environment. The mathematical model of the process consists of a system of elliptic equations (Laplace equations), ideal contact conditions on the contact surface and conditions prescribed on the external surfaces. The solution of the problem has been obtained by the gradient method, and the sensitivity coefficients appearing in the resulting system have been determined using the implicit approach of shape sensitivity analysis, which is particularly effective in the case of the boundary element method application (this method has been used at the stage of numerical algorithm construction). The final part of the paper contains the results of numerical computations.

Received: October 2, 2012 Received in a revised form: October 22, 2012 Accepted: November 9, 2012

INFLUENCE OF THE SAMPLE GEOMETRY ON THE INVERSE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION ON THE AXIALLY SYMMETRICAL SAMPLE COOLED BY THE WATER SPRAY
AGNIESZKA CEBO-RUDNICKA*, ZBIGNIEW MALINOWSKI, BEATA HADAŁA, TADEUSZ TELEJKO Faculty of Metals Engineering and Industrial Computer Science, Department of Heat Engineering and Environment Protection, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Kraków, Poland *Corresponding author: cebo@agh.edu.pl
Abstract The paper presents the results of heat transfer coefficient determination during the water spray cooling process. To determine the boundary condition over the metal surface cooled by the water spray, the inverse heat conduction problem has been used. In the investigations an axially symmetrical sample has been used as the cooled object. Because of the specific setup of the sensor used in the investigations, two finite element models have been tested in the inverse determination of the heat transfer coefficient: the first one, which simplifies the sensor geometry to a cylinder, and the second one, which describes the real shape of the sensor. A comparison between two different models employed to determine the heat transfer coefficient over the cooled sample surface has also been presented. The boundary condition models differ in the description of the function employed to approximate the heat transfer coefficient distribution over the cooled surface in the time of cooling. Key words: water spray cooling, heat transfer coefficient, boundary inverse problem, finite element method

1. INTRODUCTION

In the metal industry water cooling is widely used to control the product temperature variation in the production process. Continuous casting lines are equipped with water spray secondary cooling zones. The main goal in this case is to ensure sufficient heat transfer from the ingot surface to achieve a proper solidification structure. Industrial hot rolling mills are equipped with systems for the controlled cooling of hot steel products. In the case of strip rolling mills the main cooling system is situated at the run-out table to ensure the required strip temperature before coiling (Tacke et al., 1985; Malinowski et al., 2012). The proper cooling rate affects the final mechanical properties of the products, which strongly depend on microstructure evolution processes. Numerical simulations can be used to determine the water flux which should be applied in order to ensure the desired product temperature. The heat transfer boundary condition in the case of water cooling is defined by the heat transfer coefficient (HTC). Due to the complex nature of the cooling process, the existing heat transfer models are not accurate enough in the case of the high temperature processes common in the metal industry. Also, direct measurements of the HTC by such methods as the mass transfer method or the transient method that uses liquid crystals to measure the surface temperature cannot be used in the case of steel industry processes (Mascarenhas & Mudawar, 2010;


Liu et al., 2012). For such processes the best way to determine the HTC is to formulate the boundary inverse heat conduction problem (IHCP). There, the HTC can be determined as a function of the cooling parameters and the product surface temperature. In the inverse algorithm various heat conduction models and boundary condition models can be implemented. In the paper the results of the inverse calculation of the HTC are presented. The calculations have been performed on the basis of temperature measurements at selected points inside the axially symmetrical sample cooled by the water spray. The experimental investigations have been conducted for two materials: Inconel and brass.

2. BOUNDARY INVERSE MODEL

The HTC on the cooled surface of the cylinder can be determined from the inverse solution to the heat transfer problem by minimizing the objective function defined as

\Phi(\mathbf{p}) = \sum_{m=1}^{N_t} \sum_{n=1}^{N_p} \left( T_{mn} - T_{mn}^{me} \right)^2 \qquad (1)

where: p = [p_1, p_2, ...] is the vector of the unknown parameters to be determined by minimizing the objective function, N_t is the number of the temperature sensors, N_p is the number of the temperature measurements performed by one sensor in the time of cooling, T_mn^me is the temperature measured by the sensor m at the time n, and T_mn is the sample temperature at the location of the sensor m at the time n calculated from the finite element solution to the heat conduction equation

c \rho \frac{\partial T}{\partial \tau} = \frac{1}{r} \frac{\partial}{\partial r} \left( \lambda r \frac{\partial T}{\partial r} \right) + \frac{\partial}{\partial z} \left( \lambda \frac{\partial T}{\partial z} \right) + q_v \qquad (2)

where T is the temperature, τ the time, r, z the cylindrical co-ordinates, q_v the internal heat source, λ the thermal conductivity, c the specific heat and ρ the density. In the finite element model employed to solve equation (2) linear shape functions have been used. A description of the model with linear shape functions has been presented in the paper of Gołdasz et al. (2009). The heat transfer boundary condition on the cooled surface of the metal cylinder has been expressed as a function of the surface radius and time

q(r, \tau) = h(r, \tau) \left[ T_s - T_a \right] \qquad (3)

where T_s is the cooled sample surface temperature, q the heat flux, T_a the cooling water temperature and h the heat transfer coefficient.

The variation of the heat transfer coefficient h at the metal surface in the time of cooling has been approximated by two HTC models. In the first model (model A) the average HTC over the cooled surface as a function of the time of cooling and of the average sample surface temperature has been determined. In the second model (model B) the HTC distribution over the cooled surface has been approximated by the witch of Agnesi type function with the expansion in time of the HTC parameters (Cebo-Rudnicka et al., 2012).

3. PROBLEM FORMULATION

In the present study the boundary condition over the surface of the metal sample cooled by the water spray has been sought. The sample has the form of a cylinder 20 mm in height and 20 mm in diameter. The top surface of the cylinder has been cooled by the water sprays. The cylinder has been completed with a flange 30 mm in diameter and 1 mm in thickness and has been placed in a cylindrical housing. The space between the cylinder and the housing has been filled with air, which allows one to reduce the heat losses to the surroundings. However, the flange, which joins the cylinder and the housing, causes the sample temperature field not to be perfectly one dimensional. In figure 1 the schematic illustration of the experimental setup, which consists of the cylindrical sensor, the flange and the housing, is presented. The cylinder with the flange, as well as the housing, have been made from the same material. To measure the temperature inside the cooled sample three fast response NiCr-NiAl thermocouples have been used. The thermocouples have been placed in the symmetry axis of the cylinder at distances of 2, 4 and 6 mm from the cooled surface.
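As a minimal sketch of how equations (1) and (3) are used (the function names and the numbers below are ours, for illustration only, not measured data):

```python
import numpy as np

def objective(T_calc, T_meas):
    # Least-squares mismatch of equation (1): sum over sensors (rows)
    # and time steps (columns) of the squared temperature differences.
    T_calc = np.asarray(T_calc, dtype=float)
    T_meas = np.asarray(T_meas, dtype=float)
    return float(np.sum((T_calc - T_meas) ** 2))

def boundary_flux(h, Ts, Ta):
    # Boundary condition of equation (3): q = h (Ts - Ta).
    return h * (Ts - Ta)

# Synthetic check: 2 sensors, 3 time steps, a uniform 1 degree error.
T_meas = np.array([[700.0, 650.0, 600.0],
                   [690.0, 655.0, 610.0]])
T_calc = T_meas + 1.0
print(objective(T_calc, T_meas))            # 6 readings -> 6.0
print(boundary_flux(20000.0, 300.0, 20.0))  # 20000 * 280 = 5.6e6 W/m2
```

In the actual identification the computed temperatures come from the finite element solution of equation (2), and the minimization is performed with respect to the parameters of the assumed HTC model.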

Fig. 1. Schematic illustration of the experimental setup employed for the determination of the heat transfer boundary condition.

The experimental tests have been performed for two materials, which differ substantially in thermal


conductivity. Inconel and brass samples have been selected for the study. The initial temperatures to which the materials were heated were 730°C for Inconel and 517°C for brass. The water spray pressure in both tests was 1 MPa and the water temperature was equal to 20°C. The water flux was 38.6 kg/(m²s) while cooling the Inconel sample and 1 kg/(m²s) while cooling the brass sample. The temperature measurements logged during the experimental tests have been used as input data in the inverse calculation of the HTC. Because of the sensor construction, two finite element models have been tested in the inverse determination of the heat transfer coefficient. The first finite element model simplifies the sample geometry to a perfect cylinder (the simplified model). In the case of the second model the cylindrical sample and the adapter ring (flange) have been described by the finite element mesh (the exact model). Further, two boundary condition models have been employed in equation (3) in order to determine the heat transfer coefficient on the sample surface.

4. RESULTS OF INVESTIGATIONS

The results of the inverse calculations have allowed one to determine the influence of the sample geometry description on the heat transfer coefficient identification. In figures 2 to 5 the comparison between the HTC variations in the cooling process calculated for the simplified and the exact description of the sample geometry in the finite element model is presented. The figures present the variations in the average values of the HTC (boundary condition model A) versus the time of cooling (figures 2 and 4) and versus the average sample surface temperature (figures 3 and 5). In the case of water spray cooling of the Inconel sample, the simplification of the sample geometry description to a perfect cylinder does not affect the average HTC for the mean sample surface temperature from 730°C down to about 250°C (figure 3).
This range of temperatures corresponds to the film and transition boiling regimes that take place on the sample surface during the water spray cooling process. In these regimes a vapor film is formed on the cooled surface and it limits the heat transfer between the cooled surface and the cooling water. Additionally, the low heat conductivity of Inconel causes the heat transfer to the flange to be low, so it does not influence the average HTC in these two boiling regimes. The heat transfer in the radial direction to the flange increases as the

surface temperature decreases. Below 250°C (figure 3) the heat transfer process changes to nucleate boiling, which results in a significant increase in the HTC values. Simultaneously, heat conduction in the radial direction becomes more significant. These two processes affect the inverse determination of the HTC. Therefore the simplification of the sample geometry in the finite element model to a perfect cylinder results in HTC values about 10 percent higher than those obtained with the real sample geometry description (with flange) in the finite element model of heat transfer (figures 2 and 3). The average difference between the calculated and measured temperatures has been equal to 7.95°C and has not decreased for the better definition of the sample geometry (table 1).
Table 1. The average difference between measured and calculated temperatures at the thermocouple locations.

Case of study | Inconel sample, average difference in temperatures, °C | Brass sample, average difference in temperatures, °C
Average HTC over the cooled surface calculated for the simplified definition of the sample geometry in the finite element model | 7.953 | 4.418
Average HTC over the cooled surface calculated for the exact definition of the sample geometry in the finite element model | 7.953 | 3.795
Radial distribution of HTC over the cooled surface calculated for the simplified definition of the sample geometry in the finite element model | 7.735 | 5.924
Radial distribution of HTC over the cooled surface calculated for the exact definition of the sample geometry in the finite element model | 7.735 | 3.752

The inverse calculations performed on the basis of the temperature measurements obtained for the spray cooling of the brass sample have indicated a significant influence of the sample geometry description on the average HTC values (figures 4 and 5). The thermal conductivity of brass is much higher than that of Inconel. In such a case the heat transfer to the flange is much more important and the exact description of the cooled sample geometry plays an important role in the HTC identification. Neglecting the flange in the definition of the sample geometry has caused that the calculated values of the HTC in the whole spray


cooling process have been greater by about 25 percent than the ones calculated using the model with the flange (the exact geometry model) (figures 4 and 5). Moreover, in the case of the brass sample the exact definition of the sample geometry (with flange) has led to a lower difference between the measured and calculated temperatures at the thermocouple locations (table 1).
Fig. 2. The comparison of the average HTC variations in the time of cooling obtained for the simplified and the exact definition of the sample geometry in the finite element model. Inconel sample.

Fig. 3. The average HTC variations as a function of the sample surface temperature obtained for the simplified and the exact definition of the sample geometry in the finite element model. Inconel sample.

Fig. 4. The comparison of the average HTC variations in the time of cooling obtained for the simplified and the exact definition of the sample geometry in the finite element model. Brass sample.

Fig. 5. The average HTC variations as a function of the sample surface temperature obtained for the simplified and the exact definition of the sample geometry in the finite element model. Brass sample.

The boundary condition model discussed above gives only the average HTC over the cooled sample surface. In practice, the HTC distribution over the cooled surface is expected. Such a possibility is given by the second boundary condition model. Due to the axial symmetry of the problem, only the radial variation of the HTC in the time of cooling has been modeled. The analysis has also been performed for two materials: Inconel and brass. Further, the simplified and the exact description of the sample geometry in the finite element model have been considered. The results of the inverse calculation of the HTC distributions as functions of the sample radius and the time of cooling are presented in figures 6 to 9 for the Inconel sample and in figures 10 and 11 for the brass sample.

The inverse solution for the HTC distribution along the cooled sample radius performed for Inconel with the simplified definition of the sample geometry in the finite element model has not indicated visible differences in the HTC along the radius of the cooled sample surface (figures 6 and 7). The exact description of the sample geometry (with flange) in the finite element model has resulted in about 10% lower values of the HTC (figure 8).



Fig. 6. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for the simplified definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 7. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for the simplified definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 8. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for the exact definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

Fig. 9. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for the exact definition of the sample geometry in the finite element model. Inconel sample, HTC model B.

A difference in the HTC distribution versus sample surface temperature for HTC model B has been observed only at the sample and flange connection (r = 10 mm in figure 9). It can be explained by the better description of the sample temperature near the flange by the exact geometry model. Implementation of the HTC model which allows for the distribution of the heat transfer coefficient resulted in solutions very similar to the average HTC model. It can be explained by the high water flux applied in the cooling of the inconel sample; in such a case the sample surface is cooled uniformly. In the case of cooling the brass sample, a diversification of HTC along the cooled surface radius has been observed both for the simplified and the exact description of the sample geometry in the finite element model (figures 10 and 11). Implementation of the exact definition of the sample geometry and of the HTC variation over the sample surface in the finite element model has allowed illustration of both the influence of the thermal conductivity of the sample material and the influence of the cylinder flange on the heat transfer between the cooled sample and the water spray. In the case of the simplified definition of the sample geometry as a perfect cylinder in the finite element model, the HTC values decrease along the radius of the sample. The greatest difference between the maximum HTC value in the cylinder axis and at the distance of 10 mm from the symmetry axis is equal to about 9 kW/(m2K) (figure 10). Implementation of the exact definition of the sample geometry results in a much higher diversification of HTC values. In this case the difference between the maximum values of HTC is equal to about 17 kW/(m2K) (figure 11). In both considered cases of cooling the metal samples, the exact definition of the sample geometry (with the flange) in the finite element model has resulted in lower average differences between the measured and calculated temperatures (table 1).
Fig. 10. HTC variation versus time of cooling for selected locations along the cooled surface radius calculated for the simplified definition of the sample geometry in the finite element model. Brass sample, HTC model B.

Fig. 11. HTC variation versus surface temperature for selected locations along the cooled surface radius calculated for the exact definition of the sample geometry in the finite element model. Brass sample, HTC model B.

5. CONCLUSIONS

The conducted analysis has allowed determination of the influence of the exact definition of the cooled sample geometry in the finite element model on the solution of the heat transfer problem. It has been shown that simplification of the sample geometry to a perfect cylinder in the finite element model results in about a 10 percent growth in the heat transfer coefficient determined by the inverse method in the case of a material characterized by low heat conductivity (inconel), and in about a 25 percent growth in the HTC value in the case of high conductivity materials (brass). Identification of HTC performed for the two boundary condition models has shown that allowing for the HTC distribution over the cooled surface results in a more accurate determination of the heat transfer boundary condition. The developed definition of the boundary condition is capable of identifying both constant and variable heat transfer coefficients over the cooled surface of the cylindrical sample.

Acknowledgements. The work has been financed by the Ministry of Science and Higher Education of Poland, Grant No. NR15 0020 10.



THE INFLUENCE OF THE AXISYMMETRIC SAMPLE GEOMETRY ON THE DETERMINATION OF THE HEAT TRANSFER COEFFICIENT DISTRIBUTION DURING WATER SPRAY COOLING

Summary

The paper presents the results of calculations of the heat transfer coefficient determined on the basis of experimental studies. The solution of the inverse boundary problem of heat conduction was used to determine the boundary condition on the surface of metal cooled by a water spray. The experimental studies were carried out for an axisymmetric sample. Due to the specific construction of the sensor used in the experiments, two finite element models describing the sample geometry were tested in the inverse method algorithm. The first model simplified the sample geometry to a plain cylinder, while the second described the real shape of the sample. Two models of the approximation of the boundary condition were also tested.
Received: September 17, 2012 Received in a revised form: October 24, 2012 Accepted: October 29, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

INVESTIGATION OF THE HEAT TRANSPORT DURING THE HOLLOW SPHERES PRODUCTION FROM THE TIN MELT
MICHAEL PETROV1,2*, PAVEL PETROV2, JUERGEN BAST1, ANATOLY SHEYPAK3
1 Technical University Mining Academy of Freiberg, Institute of Machine Elements, Design and Manufacturing, Leipzigerstr. 30, 09596 Freiberg, Germany
2 Moscow State University of Mechanical Engineering (MAMI), Department Car Body Making and Metal Forming, B. Semenovskaya street 38, 107023 Moscow, Russia
3 Moscow State Industrial University (MSIU), Department Electrical, Heating and Hydraulic Engineering and Energy Machines, Avtozavodskaya street 16, 115280 Moscow, Russia
*Corresponding author: petroffma@gmail.com
Abstract The present paper describes an energy-efficient way of producing the units (hollow spheres) of cellular structures for further application in lightweight constructions, realized through a metallurgical procedure. The metallurgical method relies on the physical properties of the materials used and on the boundary conditions of the process, without involving any organic core or the preparation of powders and slurries. Such small hollow spheres made from different materials can substantially reduce the weight of a construction part and can be used for acoustic and thermal insulation as well as for protection against vibrations. They can serve as unit cells of large parts, or alone, filled with an inert gas, e.g. as fusion targets. Pure tin shells were produced in the transient (thixotropic) state of the material at elevated temperatures (close to the melting point of pure tin), and several simulation steps were used to determine the preferable boundary conditions. One of them is the investigation of the temperature fields during the formation process. The heat transport from the tin melt into the semi-solid tin shell influences the nucleation process, so the solid wall should be formed before the gas starts to form the inner hollow space; otherwise the semi-solid shell is broken by the gas pressure or the inner hollow space does not occur. For these purposes the commercial CFD (Computational Fluid Dynamics) and FEM codes FLUENT and the SolidWorks Simulation package, respectively, were used. Finally, the obtained simulation results were verified against measurements on the laboratory stand and theoretical calculations. The investigation was completed by the determination of the whole temperature field on the side surface of the form nozzle, obtained from a thermogram captured with an infrared camera (IRC).
Key words: hollow spheres, spherical shells, tin melt, FEM, CFD, Solid Works, Fluent, thermography, heat transport

1. INTRODUCTION

Generally, any production route consists of one or several production, treatment and controlling operations, connected together through automation devices. For hollow sphere production the metallurgical technique illustrated in figure 1 was used because of its high effectiveness and low production costs. The properties of the product change through the microstructure development resulting from the thermal energy consumption during primary crystallization and further heat treatment under different regimes. The main controlling operations can also be implemented to obtain the spheres' outer geometry, strength/ductility and inner geometry. Once the products are sorted, the structure assembling occurs depending on the application case. The current paper is focused on the first two steps of this route (metal melting and hollow sphere/shell solidification).

Fig. 1. General process route of hollow spheres/shells production.

2. EXPERIMENTAL EQUIPMENT

The designed and manufactured laboratory stand for the production of metal hollow spheres and shells from the tin melt was investigated. The melting, dosing and control of the solidification process take place in a special heating unit, made as a sandwich-like assembly of copper and aluminum plates tempered by heating rods. It allows adjusting the temperature in a narrow range and tempering the nozzle with high precision. The whole metal melting process runs in three stages: 1) preparing the metallic melt (optionally: alloying, degassing, protective gas against oxidation); 2) switching on the heating devices (furnace, heating unit); 3) process initiation: the forming gas is fed into the tin melt through the gas needle and forms the spherical shells at the nozzle. A further description of the equipment can be found elsewhere (Petrov & Bast, 2010; Petrov, 2012).

3. NUMERICAL SIMULATION

Although a similar production technique was discussed by Kendall (1981) and Dorogotowcev & Merkulyev (1989), several differences and the resulting setups of the heat transport problems have not yet been numerically investigated. Also, many fundamental aspects of heat transport given by e.g. Baehr & Stephan (2006) have to be coupled to the manufacturing route and equipment.

3.1. Model preparation

To perform the numerical simulation, the CAD models of the heating devices, including the crucible, were optimized before meshing (fastener connections were closed, sharp corners were rounded and the fiber thermal insulation was modeled as a material with homogeneous properties). To obtain adequate results, different sizes of mesh elements were applied: the biggest of 0,9 mm for the nozzle and 22,2 mm for the heating furnace, with a common side ratio of the volume tetrahedral elements of 1,5. The simulated hollow sphere has a transient heating bridge, which connects the sphere with the main tin melt volume in the nozzle area, as shown in figure 2d. It is expected that an additional heat amount will be transferred into the spherical shell. It was assumed that the heating bridge elevates the temperature in the spherical shell, but the temperature difference stays the same. The lifetime of the bridge corresponds to the formation time of a single shell.
3.2. Temperature distribution in the system

During the simulation setup of the temperature distribution in the whole system (furnace, heating plates, nozzle), three main heat transport mechanisms (thermal conductivity, convection and radiation) were activated. The boundary conditions assigned to enable the simulation are presented in table 1. The simulation results delivered a homogeneous temperature distribution in the crucible, shown in figure 3.

Fig. 2. Prepared model for simulation (a), model of the crucible (b) with the tin melt volume (c) and hollow sphere of 3 mm in diameter and the wall thickness of 0,5 mm (d).

Table 1. Boundary conditions in the simulated system.

Element          Heat source
Furnace          initial temperature on the refractory lining
Heating plates   initial temperature on each heating rod or capacity per rod
Inductor         initial temperature on the refractory lining

Fig. 3. Temperature fields in the longitudinal cross section of the nozzle (a) and heating equipment (b).

3.3. Temperature distribution in the heating plates (mesolevel)

The simulation of the heat transfer between the fluids and inside the heating system represents the mesolevel of the system. The goal was to determine the expected temperature fields on the microlevel, i.e. at the nozzle. The heating unit is a sandwich-like assembly consisting of upper and lower blocks produced from pure copper and cast aluminum alloy. The gravure of the nozzle was milled in the upper and lower aluminum blocks, after which the plates were mounted together. Because this heating unit is placed in a space with a variable environmental temperature, the temperature distribution in the plates was calculated theoretically, then compared with the simulation results and validated with the help of the IRC, according to the route in figure 4.

Fig. 4. Construction stages of the heating plates.

The theoretical calculation was based on the total capacity P of the heating rods. Equation (1) connects the volume and thermal material properties of the plates (Petrov, 2012):

P = (mCu·cCu + mAl·cAl)·Δϑ/Δt    (1)

where cCu and cAl – specific heat capacities of copper and cast aluminum alloy; Δϑ – temperature difference; Δt – heating time; mCu and mAl – masses of the copper and aluminium plates. From this analysis it was stated that the thermal resistance (temperature drop or heat loss) due to the plate stacks and the heat losses due to convection and a small amount of radiation do not exceed 5%.

To perform the simulation, the mesh parameters were defined as follows: volume tetrahedral elements, the biggest element of 10,2 mm, side ratio of 1,5. The duration of the heating stage was taken as a measure of the numerical simulation accuracy. The results of the transient heat transport simulation were compared with the temperature on the heating rod placed in the upper copper plate, obtained with the help of a Ni-CrNi thermocouple (type K). As the heat energy travels from the heating rods to the colder parts, the main amount of it reaches the nozzle. It was stated that the amount of transported heat energy is constant over a period of time and sufficient to guarantee the continuity of the forming process. At the same time, the capacity of the heating rods also determines the heating time. Several in situ measurements were carried out to justify the proper choice of the boundary conditions. Including the additional heat transfer mechanism, namely convection, reduces the calculation error (compared to the measured value from table 2) to 8,4%.

Table 2. Heating time obtained for different simulation cases.

Heating stage, °C     Time, min:sec              Boundary conditions
from 0 up to 257      14:30 (experiment)
                      16:55 (simulation)         TR, HR + HC + T
                      15:30 (simulation)         TR, HR + HC + T + C

where TR – transient heating process, HR – heat radiation, HC – heat capacity, T – constant temperature, C – convection.
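Equation (1) balances the total capacity of the heating rods against the heat stored in the plate stack, so it can be solved for the heating time. The sketch below is an illustrative calculation with assumed masses and rod power (none of these numbers come from the paper):

```python
# Heating time from equation (1): P * dt = (m_cu*c_cu + m_al*c_al) * d_theta
C_CU = 385.0   # specific heat capacity of copper, J/(kg*K)
C_AL = 900.0   # specific heat capacity of cast aluminum alloy, J/(kg*K) (assumed)

def heating_time(m_cu, m_al, d_theta, power):
    """Seconds needed to raise the plate stack by d_theta kelvin
    with total rod capacity `power` in watts (losses neglected)."""
    stored_heat = (m_cu * C_CU + m_al * C_AL) * d_theta   # joules
    return stored_heat / power

# e.g. 4 kg of copper and 2 kg of aluminum heated from 20 to 257 C by 1.5 kW rods
t = heating_time(4.0, 2.0, 237.0, 1500.0)
print(round(t / 60.0, 1), "min")
```

The measured and simulated heating times in table 2 differ from such an estimate mainly through the neglected convection and radiation losses, which is exactly the effect the additional boundary conditions in the table account for.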

Measurements with the IRC have shown that the temperature difference between the heating plates and the melt at the nozzle orifice stays at 16°C, while the simulation and theoretical investigations yielded temperature differences of 16–25°C and 13–24°C respectively, where the greater value corresponds to the greater heat loss in the system.

3.4. Temperature distribution at the nozzle (microlevel)

An obtained temperature distribution for the mesolevel can also be applied at the microlevel. Because of a new fluid phase (the forming gas), a new problem had to be defined. The temperature can be influenced by the forming gas expansion due to the difference between the cross sections of the pipeline and the feeding needle. Rapid temperature drops of millisecond duration result in strong undercooling of the tin melt. This undesirable effect leads to process discontinuity due to metal solidification between two periods of hollow sphere formation. The simulation showed that even a test chamber temperature of 232°C (the melting point of pure tin), at investigated gas flow rates with an average value of 750 liters per hour, does not eliminate the undercooling effect at a distance of 1,5 mm from the needle top of the nozzle, as presented in figure 5a.

Moreover, once the temperature influence was clarified, the gas distribution in the tin melt was still unknown. To carry out the case study, a dedicated CFD simulation and a simple verification test were developed. The principle of the test is the periodic injection of gas (frequency of 1 Hz) into the tin melt and measurement of the height of the cone zone invaded by the gas. The aim of the test is to find the cone height from the simulation results which corresponds to the real cone height from the verification test under the same boundary conditions and at the same time point, as shown in figures 5b and 5c. The theoretical problem description can be found in Bohl (1991), Loycyanskiy (2003), Sheypak (2006) and other fundamental texts. Because the forming gas expands into a certain melt volume, the expected temperature in the cross section can be calculated from equation (2) as:

To,th = Ti·(po/pi)^((κ−1)/κ)    (2)

where κ = cp/cV – isentropic exponent, cp (cV) – isobaric (isochoric) specific heat capacity, po/pi – critical pressure ratio (po – pressure value after gas expansion, pi – pressure value in the pipeline); T – temperature (index i for inside, o for outside and th for theoretical). The true temperature due to the gas expansion was calculated from equation (3), with the velocity coefficient φ = 1/√(1+ζ) (ζ – drag coefficient), together with the energy equation:

To = Ti − φ²·(Ti − To,th)    (3)

Fig. 5. Phase and temperature distribution during the forming gas injection (a – numerical simulation of the temperature distribution after the gas expansion in the hot chamber, obtained in SolidWorks; b – gas distribution in the tin melt, obtained with a high speed camera, 100 fps; c – simulation of the gas distribution in the tin melt, obtained in FLUENT).

So for an environmental temperature (ET) of 17°C the temperature difference stays at 7°C for the experiment and 6°C for the numerical simulation. Taking the height of the involved cone from figure 5b, equal to 6,5 mm, two areas can be pointed out on the temperature–distance diagram shown in figure 6a: up to the distance of 6,5 mm and above this value. The expected temperature drop of up to 6–7°C at an initial ET of 17°C corresponds to a distance of 5,5 mm from the nozzle. For an initial ET of 232°C the same distance gives a temperature decrease of up to 45°C. The area above the distance of 6,5 mm is outside the gas distribution zone and was not included in the investigation.
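Equations (2) and (3) can be evaluated directly; the short sketch below (absolute temperatures in kelvin, illustrative pressures and a guessed velocity coefficient, not values from the paper) estimates the undercooling produced by the gas expansion:

```python
KAPPA = 1.4   # isentropic exponent, assumed for a diatomic forming gas

def t_out_theoretical(t_in, p_out, p_in, kappa=KAPPA):
    """Equation (2): T_o,th = T_i * (p_o/p_i)**((kappa-1)/kappa)."""
    return t_in * (p_out / p_in) ** ((kappa - 1.0) / kappa)

def t_out_true(t_in, t_out_th, phi):
    """Equation (3): T_o = T_i - phi**2 * (T_i - T_o,th)."""
    return t_in - phi ** 2 * (t_in - t_out_th)

t_i = 290.15                                   # 17 C environment, in kelvin
t_th = t_out_theoretical(t_i, 1.0e5, 1.3e5)    # expansion from 1.3 bar to 1 bar
t_o = t_out_true(t_i, t_th, phi=0.95)
print(round(t_i - t_o, 1))                     # undercooling in kelvin
```

With a velocity coefficient below one, the true temperature drop is smaller than the theoretical isentropic one, which is the role of the correction in equation (3).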

Fig. 6. Temperature fields: temperature–distance diagram for determining the undercooling effect (a); temperature distribution in the shell walls with different radius and thickness (b and c) and slope of the linear temperature distribution curves (d).

3.5. Temperature distribution in the sphere's wall (microlevel)

The temperature distribution in the spherical shell also belongs to the microlevel of the investigated system and can be calculated from the simple equation (4) with the sphere radius as the argument. Following equation (4), the thicker walls show a smaller deviation of the temperature from linearity in figure 6c than the thinner walls in figure 6b. The line for the thin wall has a greater slope than that for the thick wall, which means that the temperature gradient for thin walls under the same boundary conditions must be greater to transport the heat amount from the melt needed to activate the solidification process.

Tth = T(r) = Ti − (Ti − To)·(1 − ri/r)/(1 − ri/ro)    (4)

where T – temperature (index i for the inner sphere surface, o for the outer sphere surface and th for theoretical); r – arbitrary radius within the sphere wall.

The hollow spheres in figure 7 were previously meshed with volume tetrahedral elements with the biggest size of 1 mm and a side ratio of 1,5. After the simulation the temperatures obtained at the middle radius of the shell were compared with those calculated from equation (4).
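Equation (4) is straightforward to evaluate across the wall thickness; the sketch below tabulates the steady-state profile for an illustrative shell (radii and surface temperatures are assumptions, not data from the paper):

```python
def shell_temperature(r, r_i, r_o, t_i, t_o):
    """Equation (4): steady conduction through a spherical shell,
    T(r) = T_i - (T_i - T_o) * (1 - r_i/r) / (1 - r_i/r_o)."""
    return t_i - (t_i - t_o) * (1.0 - r_i / r) / (1.0 - r_i / r_o)

# Illustrative shell: inner radius 1.0 mm, outer 1.5 mm, 232 C inside, 220 C outside
for r in (1.0, 1.25, 1.5):
    print(r, round(shell_temperature(r, 1.0, 1.5, 232.0, 220.0), 2))
```

At the mid-wall radius the 1/r profile lies below the straight line connecting the two surface temperatures, which is the deviation from linearity discussed for figures 6b and 6c.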

Fig. 7. Simulation of the temperature fields in the spherical shell at the time points of 20 seconds (a), 200 seconds (b) and 2000 seconds (c).

Fig. 8. Temperature fields on the outer surface of the spherical shell after 1000 seconds (a); ISO-surfaces after 900 seconds of calculation time for a temperature distribution from 313°C and higher (b) and from 314°C and higher (c).

3.6. Solidification process

From the simulation results presented in figure 8 it is clearly seen that the solidification front moves from the top of the spherical shell to the nozzle orifice and that the temperature difference does not exceed 0,2°C. The ISO-surfaces also show that at the same time point the temperature is distributed through the spherical shell very fast, differing from the temperatures on the other surfaces by not more than 1°C, as presented in figures 8b and 8c. The necessary solidification rate (SR) of the solidification process can be calculated from equation (5):

SR = Δϑ/tcal    (5)

where tcal – calculation time; Δϑ – temperature difference between the outer and inner surfaces of the hollow sphere. From the balance of heat fluxes described by Gottstein (2004) the heat of solidification can be obtained on the basis of equation (6):

hS·dx/dt = λC·(dT/dx)C − λL·(dT/dx)L    (6)

where T – temperature, hS – heat of solidification; λC and λL – thermal conductivity of the crystal and the melt respectively; x – distance (wall thickness), tC – time of the crystal growth.

4. CONCLUSIONS AND OUTLOOKS

In the present paper the heat transport during hollow sphere production from the tin melt was investigated numerically, theoretically and experimentally. The results address the micro-, meso- and macrolevel numerical problems of the investigated system, represented by simulations of the temperature distribution in the heating system, the heating plates and the nozzle. Hollow spheres/shells can be produced directly from the tin melt if the boundary conditions are properly defined. The temperature fields in the sphere's wall change very quickly due to the small dimensions of the sphere and need to be investigated separately with finer finite elements, because the temperature change there during the whole processing time does not exceed 0,2°C. Figures 6b and 6c show the importance of the wall thickness in the heat transport problem. For thinner walls the temperature distribution shows a greater deviation from a linear characteristic than for thicker walls, where the temperature can be better described by a linear function. From equation (4) it follows that a non-uniform wall thickness of the shell


will cause different temperatures. Consequently, the cooling rate will differ and the microstructure development in the wall will vary. The optical surface quality of the produced microspheres also varies, from rough for small cooling rates to smooth and shiny for greater cooling rates. A problem of the undercooling effect due to the expansion of the non-preheated forming gas at the nozzle was also formulated and investigated. The undercooling can be predicted with the help of the temperature–distance diagram in figure 6a and calculated from equations (5) and (6). Both the information about the temperature distribution in the metallic melt and the gas distribution were used in the design and construction of the nozzle: nozzle placement, orifice diameter, temperature fields around the nozzle, etc. These essential parts of the work make it possible to increase the production capacity of the laboratory equipment. The minimal diameter can be obtained from the Young–Laplace equation for hollow microspheres up to 1 mm in diameter; for bigger microspheres, up to 3–4 mm in diameter, the technique given by Petrov (2012) can be applied. The results can be used for further microstructure prediction by a function with two arguments, namely the outer shell diameter and the wall thickness.

Acknowledgement. This paper was prepared in the scope of the state contract 16.740.11.0744, funded by the Ministry of Education and Science of the Russian Federation.

REFERENCES
Baehr, H.D., Stephan, K., 2006, Wärme- und Stoffübertragung, 5th edition, Springer Verlag, Berlin, Heidelberg, New York (in German).
Bohl, W., 1991, Technische Strömungslehre, 9th edition, Vogel Buchverlag, Würzburg (in German).
Dorogotowcev, W., Merkulyev, J., 1989, The methods of the hollow microspheres production, Physical Lebedev Institute Publishing, Moscow (in Russian).
Gottstein, G., 2004, Physical foundations of materials science, Springer Verlag, Berlin, Heidelberg, New York.
Kendall, J.M., 1981, Hydrodynamic performance of an annular liquid jet: production of spherical shells, Proceedings of the second international colloquium on drops and bubbles, eds, LeCroisette, D.H., NASA JPL, Pasadena, 79-87.
Loycyanskiy, L.G., 2003, Mechanics of fluids and gases, 7th edition, Moscow (in Russian).
Petrov, M., Bast, J., 2010, Entwicklung einer Anlage zur Hohlkugelherstellung, Scientific reports on resource issues, Proceedings of the 61. BHT (Berg- und Hüttenmännische Tagung), eds, Drebenstedt, C., TU Bergakademie Freiberg, Freiberg, Volume 3, 343-350 (in German).
Petrov, M., 2012, Untersuchungen zur Hohlkugel- und Schalenherstellung direkt aus der metallischen Schmelze zu ihrer Anwendung in Leichtbaukomponenten, PhD thesis, Freiberg (in German).
Sheypak, A.A., 2006, Hydraulics and hydraulic drive systems, 5th edition, MSIU Publishing, Moscow (in Russian).

THE PROBLEM OF HEAT TRANSPORT DURING THE PRODUCTION OF HOLLOW SPHERES FROM TIN

Summary

The paper presents one of the energy-efficient methods of producing the units of cellular structures (hollow spheres) used in lightweight constructions. The described metallurgical production process exploits the physical properties of the materials used and the boundary conditions of the process, without introducing powders or slurries. Hollow spheres of small diameter made of different materials can significantly reduce the weight of structural elements, serve as acoustic and thermal insulators, and protect against vibrations. They can be applied as unit cells of larger elements or as standalone elements filled with an inert gas. Shells of pure tin are produced at a temperature close to the melting point of the material (in the thixotropic state). A series of computer simulations was carried out to determine the proper boundary conditions of the production process. Among other things, the temperature distribution during the production process was investigated. Heat transport during the forming of tin shells, from the liquid to the semi-solid state, influences the nucleation process; therefore the solid walls should be formed before the gas starts shaping the interior of the shell. Otherwise, the semi-solid shell cracks under the gas pressure or does not form at all. The commercial code FLUENT and the SolidWorks package were used for the simulations. The simulation results were compared with the results of experiments and theoretical calculations. In addition, the investigation was supplemented by the determination of the temperature field on the surface of the forming nozzle from a thermogram recorded with an infrared camera (IRC).
Received: September 20, 2012 Received in a revised form: November 4, 2012 Accepted: November 21, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

AN EXPERIMENTAL STUDY OF MATERIAL FLOW AND SURFACE QUALITY USING IMAGE PROCESSING IN THE HYDRAULIC BULGE TEST
SŁAWOMIR ŚWIŁŁO
Faculty of Production Engineering, Warsaw University of Technology, Warsaw, Poland
*Corresponding author: s.swillo@wip.pw.edu.pl
Abstract The paper presents a method of surface shape and strain measurement applied to the determination of metal flow and product quality. Accurate determination of these characteristics in sheet metal forming operations is extremely important, especially in automotive applications. However, sheet metal forming is a very complex manufacturing process and its success depends on many factors, which involves a number of tests that should be carried out to find optimal, yet cost-effective solutions. In this study the author discusses investigations focused on a better understanding of the strain values and their distribution in a product, and on checking whether they exceed a certain limit resulting in the loss of stability. The hydraulic bulge test was identified as the method most applicable in these investigations, and both theoretical and experimental analyses were conducted.

First, the presented solution in the field of reconstruction of three-dimensional (3D) objects significantly expands the capabilities of the analysis compared with the solutions existing so far. Second, a new method for three-dimensional geometry and strain measurement based on the laser scanning technique is presented. Next, the image acquisition process and digital image correlation (DIC) are presented, used to recognize and analyze the objects captured by the camera. The 3D shape and strain analysis presented in this paper offers a valuable tool in the quality control of metal products, along with complete testing equipment for maximum strain calculation just before cracking. The computer measurement system is directly connected to the hydraulic bulge test apparatus, thus providing fast and accurate results for material testing and process analysis.
Key words: strain analysis, sheet metal forming, hydraulic bulge test

1. INTRODUCTION Methods for the measurement of surface shape or deformation (displacement) generate solutions that are currently and generally applied in various scientific fields. Especially, three methods are widely used by different authors. The first of them is the projection method described in detail by Swillo and Jaroszewicz (2001). The second method is based on the analysis of surface patterns (Swillo, 2001; Koga & Murakawa, 1996; Sirkis, 1990). The third method is laser scanning. In this group numerous solutions

are available, and their classification depends on the technique by which the displacements are measured. However, all the contemporary methods, whether for 3D reconstruction of objects or for measurement of deformations, are based on a system of image recording by means of a CCD camera. These techniques seem to be very useful in the field of metal forming because they are very effective when strain values have to be determined by the analysis of surface patterns. The method commonly used in studies of the kinematics of the sheet metal forming process is


ISSN 1641-8581


stretching of the sheet metal surface with a metal punch (figure 1a). In the present study, to test the plastic forming process, a method of bulging sheet metal discs, clamped at the edges, with fluid under pressure has been applied (figure 1b). In this operation a uniform biaxial stretching occurs, due to which the processed element assumes the shape of a spherical cap. The proposed method of strain measurement in the bulging process is based on numerical image processing and three-dimensional object reconstruction. In the last several years, various technical improvements in the methods of sheet metal pattern recognition took place, enabling analysis of the local state of strain in industrial sheet metal forming. At the same time, during laboratory tests, many authors have been evaluating sheet metal quality based on the forming limit curves (FLC), originally proposed by Marciniak (1961) and Marciniak & Kuczyński (1967).

Fig. 1. Schematics (section view) of the sheet metal forming: a) using metal punch, b) using hydro-bulging.

Currently, we can find, among others, methods that serve the determination of forming limit curves which, combined with calculations of the deformation occurring in the examined products, provide comprehensive information about the state of the material (for example: Swillo et al., 2000). This solution is based on the use of a metal punch operating on samples with different geometries (figure 1a). These methods are based either on correlation or on analysis of the geometry of regular coordination grids. As a complement to the existing solutions for automated strain measurement, studies are carried out to detect the crack onset. The grid method coupled with image processing, used for many years, has produced numerous solutions. All of them are based on the identification of grids applied to the surface of the sheet metal. The grids can


have a regular or stochastic shape. One of such solutions is a system that enables automatic measurement of lines perpendicular to each other and lying at a distance of 1-5 mm from each other, depending on the nature of the measurement. Studies on the design of systems for automatic strain analysis are carried out by the Vialux company (Feldmann & Schatz, 2006; Feldmann, 2007). The company has developed a system called AutoGrid, used for the analysis of deformation during the bulge test. The possibility of plotting the forming limit curves also enables predicting the chances for further plastic forming of the sheet metal. Owing to this characteristic of the deformation limit of any sheet material, the collected information allows us to determine whether the strain values in the areas of the largest product deformation approach the limit values. Another option is a system for the analysis of deformation based on the bulging test using a steel punch (Liewald & Schleich, 2010). The process is performed on a specially designed testing machine, where an appropriate optical system with two cameras records the run of the forming process. Image analysis of the deformed sample area is done by the commercial ARAMIS system made by GOM (Hijazi et al., 2004). The stand with this device is capable of performing fully automated control of the bulging process using a metal punch. The solution uses a digital image correlation technique based on stochastic grids, which, in the authors' opinion, are much more efficient as regards the accuracy of the obtained results than regular grids. It is believed that, by careful design of the process operations, most of the final product defects and limitations can be eliminated, minimized, or at least controlled. According to the current experimental investigations, however, the available information is not sufficient to predict the quality of the final products.
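The subset-matching idea behind digital image correlation on stochastic patterns can be illustrated with a minimal integer-pixel sketch. Everything below is an assumption for illustration (the synthetic speckle, function names and window sizes); production systems such as ARAMIS additionally use sub-pixel interpolation:

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, y, x, half=10, search=5):
    """Find the integer-pixel displacement of the subset centred at (y, x)
    by maximizing the correlation over a small search window."""
    tmpl = ref[y - half:y + half + 1, x - half:x + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y + dy - half:y + dy + half + 1,
                       x + dx - half:x + dx + half + 1]
            score = ncc(tmpl, cand)
            if score > best:
                best, best_uv = score, (dy, dx)
    return best_uv

# Synthetic stochastic speckle pattern and a rigid shift of (2, 3) pixels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(track_subset(ref, cur, 32, 32))  # (2, 3)
```

Repeating this search over a grid of subsets yields the displacement field from which strains are differentiated.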
Therefore, the goal is to develop a major parameter that can be used in the assessment of the quality of automotive parts after sheet metal forming.

2. EXPERIMENTAL APPARATUS

The solutions presented in this paper refer to three-dimensional cases. Using the example of the bulge test, a new possibility in the field of image processing techniques is demonstrated. These techniques seem to be very useful for metal forming analysis because they are very effective when strain values have to be determined by the analysis of a stochastic surface pattern. A specially designed experimental apparatus for bulging process analysis was assembled (figure 2a). The developed solution comprises a computerized, fully automatic, motorized test stand equipped with optical and vision systems to acquire the data. In the described study, tests of the plastic forming were performed using a method of bulging sheet metal discs (fixed at the edges) with fluid under pressure. In this operation a uniform biaxial stretching occurs, producing specimens with a spherical dome. In this example of the sheet metal forming process, numerous solutions are possible as regards the description of the process kinematics and the study of the test conditions under which the loss of stability occurs due to the absence of friction on the tool-die contact surface (Marciniak, 1961). The stand allows running two types of measurements: basic and complex. The first group includes recording the run of the plastic forming process in terms of pressure and displacement, while the second group includes measurements of the process kinematics and of the shape of the bulged samples. A full description of the bulging process should enable further materials research and the development of process control mechanisms. The elements of such control can include, among others, an option for automatic monitoring of the process run and the possibility of its interruption at a strictly determined stage of deformation.
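Since the bulged specimen is assumed axisymmetric, its 3D shape can be recovered by revolving a single measured meridian profile about the symmetry axis. The numpy sketch below is illustrative only, not the author's model; the spherical-cap profile and all dimensions are assumptions:

```python
import numpy as np

def revolve_profile(r, z, n_theta=90):
    """Revolve a measured meridian profile (radius r, height z) about the
    symmetry axis to obtain a 3D point cloud of the axisymmetric dome."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    # Outer product: every profile point swept through every angle.
    x = np.outer(r, np.cos(theta))
    y = np.outer(r, np.sin(theta))
    zz = np.repeat(z[:, None], n_theta, axis=1)
    return np.stack([x, y, zz], axis=-1).reshape(-1, 3)

# Illustrative profile: spherical cap of blank radius a = 25 mm, height h = 10 mm.
a, h = 25.0, 10.0
R = (a * a + h * h) / (2.0 * h)        # dome radius from cap geometry
r = np.linspace(0.0, a, 50)            # radial positions along the laser section
z = np.sqrt(R * R - r * r) - (R - h)   # cap height at each radius
cloud = revolve_profile(r, z)
print(cloud.shape)                     # (4500, 3)
print(round(cloud[:, 2].max(), 3))     # 10.0 at the pole
```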

The central measurement system of the test stand is an optical system, whose task is to allow a 3D reconstruction of the formed sheet metal (figure 2b). The mathematical model proposed to solve this problem is based on the author's own research (Swillo et al., 2012). Owing to some simplifications proposed in the measurement model, each laser-generated section is identified by one camera only, while the assumed axisymmetric shape of the examined object allows its 3D reconstruction. To obtain rapid (real-time), continuous (pixel-based) and highly accurate (sub-pixel) verification of the geometry of the distorted elements, it was necessary to use measurements based on vision control. The proposed method of outline reconstruction using laser light, applied in studies of the kinematics of the shaped objects, is based on the temporary outline found for the examined element subjected to deformation. In addition to the measured contour, a front view of the object (with a stochastic grid) is recorded, which enables the reconstruction of a 3D image of the measured sample and determination of the size of the deformation.

Fig. 2. Schematics of the experimental apparatus for hydro-bulging process: a) the testing stand, b) an optical system to control the bulging process.

3. ANALYTICAL MODELING OF THE BULGING PROCESS

The aim of the investigations of the sheet metal forming process is to deepen our knowledge about the factors (phenomena) that restrict this process, during which the material undergoes plastic deformation by drawing, extrusion or redrawing. In this process, the formed object is obtained by mapping its shape on a sheet of metal using a punch and a die (figure 1b). The deformation in the forming process

cannot reach arbitrarily large values, because some limiting phenomena will occur at a certain stage of the process, disturbing or even disrupting its further course. The main problems include strain localization, cracking, or curling of the sheet metal. The state of biaxial stretching occurring in many




forming operations makes the bulge test very useful in this analysis (mainly due to the very nature of the state of stress). The run of this process is usually considered in terms of the strain occurring in a perfectly flexible thin membrane. According to the membrane theory relevant to this case, the product of the yield stress and the actual thickness of the membrane (σ·g) is constant (Marciniak, 1961). The deformation occurring in the center of the dome follows the relationship given below:

ε₁ = ε₂ = ln(1 + h²/a²)    (1)

where: h is the actual dome height and a is the blank radius. In the pure biaxial case, where the bulge is a perfect bowl (figure 1), the internal pressure p is given by:

p = 2σg/R    (2)

where: R is the actual dome radius and g is the thickness at the top of the dome. Since we have uniform biaxial stretching, relationship (2) can be differentiated and, with g = g₀·exp(-2ε₁), the condition for the pressure maximum dp/dε₁ = 0 simplifies to the final form of:

(1/σ)·(dσ/dε₁) = 2 + (1/R)·(dR/dε₁)    (3)

This relationship allows us to determine, in a graphical manner, the strain value at which the pressure p reaches its maximum for a known hardening curve. Figure 3 shows the results of calculations, where the experimental results obtained on the stand for test forming with oil are compared with computations based on the membrane theory. The high consistency of the obtained results confirms that the theoretical solutions used to determine the polar limit strain values are correct theoretical assumptions for this area of metal forming technology.

Fig. 3. A comparison of experimental and analytical results of the bulging test: a) strain distribution, b) stress-strain hardening curve.

4. STRAIN MEASUREMENTS ON STOCHASTIC GRIDS

As a result of the performed calculations, the coordinates of the projections of the nodal points in a two-dimensional space were obtained. Using information from the second camera, the profile of the bulged sample was determined (according to the previous description) and, in accordance with the proposed 3D reconstruction algorithm, the third coordinate was specified. Strain measurements were carried out according to the described method of calculations for the main deformation directions and directional displacement gradient measurements (figure 4a). As an operation complementary to the strain calculation, micro-deformations in the zone of the crack onset during the bulging process were determined. Accurate measurement of the forming limit is one of the major issues in the plotting of forming limit curves. The image correlation method used for this purpose is a highly efficient tool for accurate measurement of these parameters. The method consists in adding up two values of the deformation. The first value is calculated as a result of the identification of the position (image) at which the strain localization occurs (figure 4a). The second value results from the determination of the strain that occurs in the crack onset zone (figure 4b). Figure 4c shows the comparison of the calculated strain localization and the vision inspection.

Fig. 4. Displacement and strain calculation: a) global strain calculation using DIC, b) vision inspection (crack localization), c) comparison of the local strain calculation (micro-strain results) and vision inspection.
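The membrane-theory relations for the bulge test can be evaluated numerically to locate the pressure maximum for a given hardening curve. The sketch below is illustrative only: the Hollomon hardening law sigma = K·eps^n and all constants are assumptions, not the DC04 data of the paper:

```python
import numpy as np

# Membrane-theory bulge relations; Hollomon hardening is an assumed example.
a, g0 = 25.0, 1.0          # blank radius and initial thickness, mm (assumed)
K, n = 500.0, 0.2          # hardening coefficient (MPa) and exponent (assumed)

h = np.linspace(0.5, 30.0, 600)          # dome heights to sweep, mm
eps = np.log(1.0 + (h / a) ** 2)         # biaxial strain at the pole
g = g0 * np.exp(-2.0 * eps)              # thinning from volume constancy
R = (a * a + h * h) / (2.0 * h)          # dome radius of the spherical cap
sigma = K * eps ** n                     # assumed hardening curve
p = 2.0 * sigma * g / R                  # membrane pressure at the pole

i = int(np.argmax(p))                    # pressure maximum: dp/deps = 0
print(round(eps[i], 3), round(p[i], 2))  # limit strain and maximum pressure
```

The strain at which the discretized p(eps) peaks plays the role of the graphically determined polar limit strain.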

Fig. 5. a) comparison of experimental results with the hardening curve, b) bulge samples.

Finally, based on the experimental calculations, the hardening curve for DC04 steel was plotted, and the results were compared with measurements obtained by uniaxial tensile testing and with the curve equation given in the literature (figure 5a). The large scatter of the experimental measurements is due to the lack of a more refined technique for taking precise strain measurements by the correlation method. The large number of images generated during the measurements is an obstacle to precise determination of the deformation history, which is a key factor in the calculation of plastic properties.

5. SUMMARY

The method proposed by the author consists in the identification of an outline of the examined axisymmetric object, extended to the identification of three-dimensional objects with the possibility of deformation measurements. The use of this approach in the study of the kinematics of the forming process required the development of a mathematical model, the introduction of a number of assumptions into the design of a test stand using this method, as well as the development of methods to process the images recorded during plastic forming. The method commonly used in the study of the kinematics of sheet metal forming is stretching of the sheet metal surface with a metal punch. In this study, the test method used for plastic forming was bulging sheet metal discs, fixed at the edges, with a fluid under pressure. In this operation, a biaxial, uniform stretching occurs, resulting in the formation of objects in the shape of a spherical cap. The described example of the sheet metal forming process allows obtaining a number of solutions as regards the description of the process kinematics and the study of the test conditions under which the loss of stability occurs due to the absence of friction on the tool-die contact surface.

Acknowledgements. Scientific work financed as a research project from funds for science in the years 2009-2011 (Project no. N N508 390637).
REFERENCES
Feldmann, P., Schatz, M., 2006, Effective evaluation of FLC-tests with the optical in-process strain analysis system AutoGrid, Proc. Conf. FLC, ed., Hora, P., Zurich, 69-73.



Feldmann, P., 2007, Application of strain analysis system AutoGrid for evaluation of formability tests and for strain analysis on deformed parts, Proc. Conf. International Deep Drawing Research Group, ed., Tisza, M., Győr, 483-490.
Hijazi, A., Yardi, N., Madhavan, V., 2004, Determination of forming limit curves using 3D digital image correlation and in-situ observation, Proc. Conf. SAMPE, Long Beach, 791-803.
Koga, N., Murakawa, M., 1996, Application of visioplasticity to experimental analysis of shearing phenomena, Proc. Conf. Advanced Technology of Plasticity, Vol. II, ed., Altan, T., Columbus, 571-574.
Liewald, M., Schleich, R., 2010, Development of an anisotropic failure criterion for characterising the influence of curvature on forming limits of aluminium sheet metal alloys, Int. Journal of Material Forming, 3, 1175-1178.
Marciniak, Z., Kuczyński, K., 1967, Limit strains in the processes of stretch-forming sheet metal, Int. Journal of Mechanical Sciences, 9, 609-612.
Marciniak, Z., 1961, Influence of the sign change of the load on the strain hardening curve of a copper test piece subject to torsion, Archives of Mechanics, 13, 743-752 (in Polish).
Sirkis, S., 1990, System response to automated grid methods, Opt. Eng., 29, 1485-1493.
Swillo, S., Jaroszewicz, L.R., 2001, Automatic shape measurement on base of static fiber-optic fringe projection method, Proc. Conf. Engineering Design & Automation, eds, Parsaei, H.R., Gen, M., Leep, H.R., Wong, J.P., Las Vegas, 476-481.
Swillo, S., 2001, Automatic of strain measurement by using image processing, Proc. Conf. Engineering Design & Automation, eds, Parsaei, H.R., Gen, M., Leep, H.R., Wong, J.P., Las Vegas, 272-277.
Świłło, S., Kocańda, A., Piela, A., 2000, Determination of the forming limit curve by using stereo image processing, Proc. Conf. Metal Forming 2000, eds, Pietrzyk, M., Kusiak, J., Majta, J., Hartley, P., Pillinger, I., Kraków, 545-550.
Świłło, S., Czyżewski, P., Lisok, J., 2012, An experimental study for hydro-bulging process using advanced computer technique, Proc. Conf. Metal Forming 2012, eds, Kusiak, J., Majta, J., Szeliga, D., Kraków, 1411-1414.

ANALIZA DOŚWIADCZALNA PŁYNIĘCIA MATERIAŁU I KONTROLA JAKOŚCI POWIERZCHNI W PROCESIE WYBRZUSZANIA Z WYKORZYSTANIEM OBRÓBKI OBRAZU

Streszczenie

W artykule przedstawiono metodę pomiaru geometrii i odkształceń w odniesieniu do pól przemieszczeń i końcowej jakości produktu. Dokładna charakterystyka tych wielkości w procesach kształtowania blach jest niezwykle ważna, szczególnie w przemyśle samochodowym. Jakkolwiek jednak jest to proces niezwykle trudny i uzależniony od wielu czynników. Dlatego wykorzystywanych jest wiele testów w celu określenia charakterystyki materiału. W przedstawionym opracowaniu autor koncentruje się na lepszym zrozumieniu rozkładu odkształcenia i jego koncentracji prowadzącej do utraty stateczności. Zaproponowana została metoda hydro-wybrzuszania, gdzie przedstawiono dwa rozwiązania: doświadczalne i analityczne. W pierwszej części przedstawiono rozwiązania w zakresie rekonstrukcji trójwymiarowej, które znacząco rozszerzają możliwości w stosunku do tradycyjnych metod optycznych analizy kształtu. Następnie zaprezentowano wyniki analizy obrazu z wykorzystaniem korelacji, wskazując na znaczne ułatwienia w rozwiązywaniu problemów wyznaczania odkształceń i pomiaru kształtu dla przedstawionego przykładu, jak również dla identyfikacji miejsc potencjalnych pęknięć. Dzięki sprzężeniu urządzenia testującego z układem komputerowym możliwe jest szybkie i precyzyjne przedstawienie końcowych wyników pomiarów własności materiału.
Received: October 15, 2012 Received in a revised form: November 21, 2012 Accepted: December 5, 2012






SELECTION OF SIGNIFICANT VISUAL FEATURES FOR CLASSIFICATION OF SCALES USING BOOSTING TREES MODEL
SZYMON LECHWAR

ArcelorMittal Poland, Hot Rolling Mill in Kraków, ul. Ujastek 1, 30-969 Kraków

*Corresponding author: szymon.lechwar@arcelormittal.com
Abstract

The subject of this paper is to design and implement an efficient model for the recognition of various kinds of scale at the Hot Rolling Mill (HRM) in Kraków. Subsequently, the model and its most important variables can be used to describe and distinguish different kinds of scale. Extensive knowledge regarding the causes of scale occurrence has already been gathered. Nevertheless, the real challenges nowadays seem to be the measurement of those phenomena, as well as reliable online classification. This paper describes the basics of the automatic surface inspection system (ASIS), which was used as a source of input data, as well as the method of interpretation of the data obtained from this system. The ASIS provides numerous features describing a single image, which is considered a defect. The objective of this paper was to supply information regarding the most important visual attributes, which will subsequently be used in building a reliable classifier for scale recognition. This was done using data mining techniques. The result was a set of measurement data stored in the online production database. However, some kinds of scale could not be recognized efficiently. The reason behind that was the lack of unique features which could distinguish them from the other defects. This problem will be solved in following studies by creating offline post-processing rules.

Key words: automatic surface inspection system, boosting trees, data mining, hot rolling mill

1. INTRODUCTION

In today's industry, global competition and rising customers' requirements make the production of high quality products increasingly important. At the same time, each plant puts strong emphasis on the automation of its processes and on maximum cost reduction. The combination of these factors often proves very difficult or even impossible to achieve with common production methods. The steel industry is no exception to that rule. Direct customers and subsequent treatment processes (e.g. cold rolling) require production of higher quality steel at reduced cost. One way to achieve this goal is the application of automatic surface inspection of rolled sheets (ASIS - Automatic Surface Inspection System). The purpose of the system is to take pictures of the produced material, to detect local variations in contrast on its surface and to classify individual irregularities. Each picture taken by the system is digital and converted into grey-scale pixels. In this way a map can be obtained, which supplies information about defective material in terms of various defects and pseudo-defects. This type of system brings a significant reduction of visual inspections performed by human inspectors.

To make it possible to build a reliable classifier of surface defects, ASIS needs to be taught. A person who is an expert in a given classification should create sets of defects that will be used to teach the software provided by the manufacturer. In an ordinary approach, the end user (an expert in the field) selects images that he believes belong to specific classes of defects and arranges them in the program supplied by the manufacturer. On this basis, the software creates models, examines the characteristics of the images and selects the rules by which it will be possible to classify newly emerging images. This approach leaves the user's knowledge about the visual characteristics of the images unused. In this approach it is impossible to distinguish specific types of defects using the results given by the ASIS. Furthermore, the user cannot build additional rules in third-party software to assist classification of similar defect classes. In this study it was decided to deal with this issue in a more detailed manner. The aim of the work was to find the visual characteristics of defects that best serve to build a model (Webb, 2002; Bakker et al., 2006). The research concentrated on scale defects produced at the hot rolling mill in Kraków. It was decided to manually select a set of reference defects, analyse the visual features of each defect class which influence the construction of a scale classification model, and decide which features could be used in future work to build a reliable scale classifier.

2. DISTRIBUTION OF SCALES AT THE HOT ROLLING MILL IN KRAKÓW

ASIS monitors the entire coil production at the hot rolling mill, returning as a result a map of irregularities in the contrast detected on the surface of the hot strip. The number of possible defects that might appear during production varies depending on the steel grade, strip thickness and technological mill settings. At most, about 30 different real defects can occur on the hot strip. Therefore, ASIS is trained to detect and classify all of them. This study focused on scale defects. Based on the ArcelorMittal internal defects catalogue (Breitschuh et al., 2007) and expert knowledge it was decided to select and distinguish 10 scale classes.
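The contrast-based detection step can be caricatured in a few lines: flag pixels whose grey level departs strongly from the local block mean. Everything here (block size, threshold, the synthetic strip image) is an assumption for illustration, not the actual ASIS algorithm:

```python
import numpy as np

def contrast_map(img, box=8, thr=30.0):
    """Flag pixels whose grey level deviates from the local box mean by more
    than thr grey levels - a crude stand-in for contrast-based inspection."""
    h, w = img.shape
    flags = np.zeros_like(img, dtype=bool)
    for i in range(0, h, box):
        for j in range(0, w, box):
            block = img[i:i + box, j:j + box]
            flags[i:i + box, j:j + box] = np.abs(block - block.mean()) > thr
    return flags

# Synthetic strip surface: uniform grey with noise and one dark scale mark.
rng = np.random.default_rng(1)
strip = 128.0 + rng.normal(0.0, 2.0, size=(64, 64))
strip[20:24, 40:44] -= 60.0           # simulated 4x4 defect spot
roi = contrast_map(strip)
print(int(roi.sum()))                 # 16: the simulated defect pixels
```

The flagged regions correspond to the regions of interest (ROI) for which the system then computes descriptive features.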
Different defects occurring on the production line of the hot rolling mill in Kraków were sorted in order to conduct the study. Defect images were taken by ASIS from the production line. From a pool of 26000 candidate images, some were isolated as real scale defects. This was followed by manual classification of the images based on expert knowledge and reference materials (Melfo et al., 2006; Sun et al., 2003; Sun et al., 2004). As a result, a set of 3300 scale defects was gathered and hand-classified. Not all scale classes will be presented in this paper. Classes

will be treated as reference data, based on which the data analysis will be carried out.

3. SELECTION OF THE MOST RELEVANT VISUAL FEATURES OF THE SCALE DEFECTS USING DATA MINING METHODS

The creation of a rich set of reference data provided an opportunity to examine the visual features of scales in detail. The images taken by ASIS were analysed by the manufacturer's software to find local variations in contrast. Such areas, called regions of interest (ROI), received a number of features that describe their visual characteristics numerically. Features which carried classification information from the currently implemented classifier were removed, as they supply unnecessary data at this stage of the study. In the end, the raw data consisted of 744 variables, which were passed to further analyses.

3.1. Preliminary data analysis


The first step in the data mining analysis (Hand et al., 2001; Han & Kamber, 2006) focused on data preparation and cleaning, which had been significantly reduced owing to the correctness of the manual classification of the reference data. The data do not contain any missing fields or repeated observations. Variables with variance below 10^-10 were removed, as they did not carry any valuable information. It was assumed that the reference data do not contain any unusual values or outliers. No transitions or transformations were carried out during the data cleaning stage. In order to seek the most important features describing the categorical variable "type of scale", the input data were analysed by two data mining (Statsoft, 2006) modules:
- decision trees C&RT (Classification and Regression Trees),
- variable selection and root cause analysis, which finds the best predictors for each dependent variable; interactions between predictors are not taken into account.
In total, 145 variables were selected for further analysis.

3.2. Choice of the best set of features describing the variable "type of scale"

It was decided to build a Boosting Trees model in order to supply information regarding the most important features to be used in the creation of a reliable classifier. Besides classification, the model defined the most relevant attributes, which are the subject of this study. The input data were divided into a learning sample and a validation sample with an 80% to 20% ratio. In order to find the most efficient model, different parameterisations were analysed. The first step covered testing of the model depending on the number of variables. Two variants were tested, i.e. without the use of redundant variables (those with correlation between variables exceeding 0.8) and with redundant variables. The first case (figure 1) shows that the best model can be obtained for 21 variables.

Fig. 1. Correctness of the Boosting Trees model depending on the number of variables without the use of redundant variables.

The second case (figure 2) gave much more promising results at 33 variables, with 94.26% correctness on the learning sample and 78.65% correctness on the test sample.

Fig. 2. Correctness of the Boosting Trees model depending on the number of variables with the use of redundant variables.

Parameterisation was continued with the use of the 33 variables, as they gave the best model at this point of the study. Changes of the a priori probability, the maximum number of nodes, the maximum number of levels and the minimum cardinality of a descendant node did not change the efficiency of the model. Only one parameter, the minimum cardinality of a node set to 123, gave a better model (figure 3).

Fig. 3. Correctness of the Boosting Trees model depending on the minimum cardinality of a node.

Finally, the best model reached 93.84% correctness on the learning sample and 80.34% correctness on the test sample. The remaining parameters were kept at their default values. Table 1 contains the gathered parameters of the model.

Table 1. Parameters of the best Boosting Trees model.
  Number of variables: 33
  Erase of redundant data: No
  Fast variable selection: No
  Minimum cardinality of node: 123
  Minimum cardinality of descendant node: 1
  Maximum number of levels: 10
  Maximum number of nodes: 13
  A priori probability: Equal
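The Statistica Boosting Trees implementation itself is not reproducible here, but the boosting mechanism it relies on can be sketched with AdaBoost over one-level trees (decision stumps). The toy data, the round count and the alpha-based feature importance are illustrative assumptions, not the paper's model:

```python
import numpy as np

def fit_stump(X, y, w):
    """Best decision stump (feature, threshold, polarity) under weights w."""
    best = (None, None, 1, 1.0)               # feature, threshold, polarity, error
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def adaboost(X, y, rounds=5):
    """AdaBoost with stumps; returns the ensemble and per-feature importance."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble, importance = [], np.zeros(X.shape[1])
    for _ in range(rounds):
        f, t, pol, err = fit_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        pred = np.where(pol * (X[:, f] - t) > 0, 1, -1)
        w = w * np.exp(-alpha * y * pred)     # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((f, t, pol, alpha))
        importance[f] += alpha                # crude variable-relevance measure
    return ensemble, importance

def predict(ensemble, X):
    score = sum(a * np.where(p * (X[:, f] - t) > 0, 1, -1)
                for f, t, p, a in ensemble)
    return np.where(score > 0, 1, -1)

# Toy binary data: feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(2)
X = rng.random((200, 2))
y = np.where(X[:, 0] > 0.5, 1, -1)
model, imp = adaboost(X, y)
acc = (predict(model, X) == y).mean()
print(acc, imp[0] > imp[1])  # 1.0 True
```

The accumulated alpha per feature plays a role analogous to the variable-importance ranking used to select the 33 attributes.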



The benefit from the construction of this model was the selection of 33 variables which describe the characteristics of the scale defects. Table 2 presents the collected variables, along with their significance and a short description. The first two items of table 2 describe the distribution of scale along the strip. It is most often associated with abnormal work of the descalers, because they remove only a portion of the scale covering the slab. Nevertheless, in this paper, proper work of the descalers is only a cause of defects that should be eliminated. The core of the work was to find visual features that could be used in the creation of an efficient scale classifier. Therefore, the position of defects along the strip will be ignored.

Table 2. Significance of visual features.

4. RESULTS - DISTINCTION OF SCALES THROUGH VISUAL FEATURES OF THEIR IMAGES

The most relevant features, isolated from the wide variety of attributes given by ASIS, were used to distinguish the scale classes. Subsequently, these features, along with sufficient logic, will support the automatic classifier (built by default within the supplier software). It is possible to create additional classification rules both in the C++ language and in T-SQL stored procedures (ASIS database).

One example of a feature that, together with the necessary logic, could be implemented as classification support is the horizontal to vertical difference of gradient range. It describes the numerical difference between the horizontal and vertical gradient ranges (in grey scale) of the defect. Figure 4 shows the decomposition of this feature for line scale, which originates from the first stand of the finishing train at the end of a rolling campaign.

Fig. 4. Horizontal to vertical difference of gradient range: Line scale.

Figure 5 shows the decomposition of the feature for single strip scale, a defect formed due to malfunction of the descalers. The feature could support the final classification decision between these two classes. However, straightforward use of the attribute to classify one of these scales, whenever it lies between -0.3 and 0.5 or between -0.3 and 0.1, is not possible.

Fig. 5. Horizontal to vertical difference of gradient range: Single strip scale.

The other type of visual feature, which, in contrast to the previous one, could be used in direct classification, is the maximum difference between horizontal and vertical dark segment length. The attribute informs the classification system what the biggest difference between the horizontal and vertical lengths is among all dark segments of the defect (segments composed of dark pixels). Figure 6 shows the decomposition of this feature for line scale. Figure 7 shows the decomposition for V scale, a defect originating from the finishing train, whose shape resembles the letter V.
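Features of this kind can back simple post-classification rules of the sort intended for the C++/T-SQL layer. A hedged sketch follows: the feature name, the threshold value and which class falls on which side of it are all assumptions for illustration, not the rules deployed at the mill:

```python
def classify_scale(features):
    """Illustrative post-classification rule separating line scale from
    V scale by the maximum difference between horizontal and vertical dark
    segment length. Feature name, threshold and class assignment are
    assumptions, not the production rule."""
    diff = features["max_dark_segment_hv_diff"]
    if diff > 0.2:
        return "line scale"   # assumed: elongated dark segments dominate
    return "V scale"          # assumed: near-symmetric dark segments

print(classify_scale({"max_dark_segment_hv_diff": 0.45}))  # line scale
print(classify_scale({"max_dark_segment_hv_diff": 0.05}))  # V scale
```

In practice such a rule would only fire as a tie-breaker after the main classifier, not as a stand-alone decision.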

and Transact SQL stored procedures. They will be used in the next classification process, called postclassification. This step will be assisted by knowledge described in this paper. Acknowledgements. The author would like to express his gratitude to Mr Witold Dymek, who supported this work at Hot Rolling Mill in Krakw and Professor Maciej Pietrzyk and Dr ukasz Rauch for their enthusiastic encouragement and useful critiques of this research. REFERENCES
Bakker, A., Hoyles, C., Kent, P., Noss, R., 2006, Improving Work Processes by Making the Invisible Visible, Journal of Education and Work, 19, 343-361. Breitschuh, W., Crowley, G., Dallemagne, L., Delglise, A., Diaz-Alvarez, J., Valcarcel, J.M., Di Fant, M., Fiori, S., Hemmerlin, M., Koschack, U. Schroyens K., 2007, ArcelorMittal internal defects catalogue. Han, J., Kamber, M., 2006, Data mining: concepts and techniques, University of Illinois, Urbana-Champaign. Hand, D., Mannila, H., Smyth, P., 2001, Principles of data mining, The MIT Press, Cambridge. Melfo, W., Dippenaar, R., Reid, M., 2006, In-Situ Study of Scale Formation under Finishing-Mill Operating Conditions, Proc. AISTech 2006, Association for Iron & Steel Technology, Cleveland, Ohio, II, 25-35. Sun, W., Tieu, A.K., Jiang, Z., Lu, C., 2004, High temperature oxide scale characteristics of low carbon steel in hot rolling, Journal of Materials Processing Technology, 155-156, 1307-1312. Sun, W., Tieu, A.K., Jiang, Z., Lu, C., Zhu, H., 2003, Surface characteristics of oxide scale in hot strip rolling, Journal of Materials Processing Technology, 140, 76-83. Statsoft, 2006, Elektroniczy Podrcznik Statystyki PL, Krakw (in Polish). Webb, A., 2002, Statistical Pattern Recognition (2nd Edition), John Wiley & Sons, Ltd., Chichester, England.

Fig. 6. Maximum difference between horizontal and vertical dark segment length (line scale).

Fig. 7. Difference between horizontal and vertical maximum dark segment length (V scale).

In this case a threshold could be set at 0.2. A rule of this kind might be used in the supporting logic to distinguish these two scale defects.

5. CONCLUSIONS AND PERSPECTIVES

In this paper, the scale defects occurring at the hot rolling mill in Kraków were divided into unique classes. The second part of the paper describes the process of creating a Boosting Trees model for scale classification. Along with the model, the 33 attributes most relevant to the model were selected. These numerical visual features were used to describe each scale class by decomposition of its values. Subsequently, the features can be used in building reliable classifiers for scale recognition. Only a part of the scale visual features that could be used in classifier building was presented in this paper. The next step in the study will be the implementation of a scale classifier with the use of the manufacturer's software. It will depend on manual selection of the scale defects and their assignment to the proper class. Nevertheless, the study assumes that the best possible classifier could be hard to obtain using the manufacturer's software alone. To improve its classification decisions, some additional rules have to be created. The rules can be written in the C++ programming language

DOBÓR NAJISTOTNIEJSZYCH ASPEKTÓW WIZYJNYCH ZJAWISKA WYSTĘPOWANIA ZGORZELINY Z UŻYCIEM WZMACNIANYCH DRZEW KLASYFIKACYJNYCH

Streszczenie

The subject of this research is the design and implementation of an effective model classifying all types of scale occurring at the hot rolling mill in Kraków. The model and its key variables can describe and distinguish the individual types of scale. The work addresses the measurement technique and the use of measurement data to build an optimal classifier of the defects of this phenomenon. The measurement data, concerning the visual aspects of individual strip regions, were provided by the automatic surface inspection system (ASIS), whose principles of operation are presented in the paper. The obtained measurement data were analysed with the use of feature selection methods, and the selected features served to build a classifier for scale-type surface defects. The classifier was implemented with the use of data mining methods, which, together with the obtained results, are described in detail in this article.
Received: October 27, 2012 Received in a revised form: November 11, 2012 Accepted: November 16, 2012


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

A USER-INSPIRED KNOWLEDGE SYSTEM FOR THE NEEDS OF METAL PROCESSING INDUSTRY


STANISŁAWA KLUSKA-NAWARECKA1*, ZENON PIROWSKI1, ZORA JANKOVÁ2, MILAN VROŽINA2, JIŘÍ DAVID2, KRZYSZTOF REGULSKI3, DOROTA WILK-KOŁODZIEJCZYK4
1 Foundry Research Institute, Zakopiańska 73, Cracow, Poland
2 Department of Automation and Computer Application of Metallurgy, VŠB - Technical University of Ostrava, 17. listopadu 15, Ostrava-Poruba, 708 33, Czech Republic
3 AGH University of Science and Technology, Mickiewicza 30, Cracow, Poland
4 Andrzej Frycz Modrzewski University, Herlinga-Grudzińskiego 1, Cracow, Poland
*Corresponding author: nawar@iod.krakow.pl
Abstract
This article describes the work related to the development of an information platform to render accessible the knowledge on casting technologies. The initial part presents the results of a survey on the preferences of potential users of the platform regarding the areas of the used knowledge and the functionalities provided by the system. The second part contains a presentation of selected modules of the knowledge base, with attention focussed on their functionalities targeted at the user needs. The guide facilitating the use of the platform is a "virtual handbook". The RoughCast system is used as a coupling link in the diagnosis of defects in castings, while the ontological module serves the purpose of knowledge integration when different sources of knowledge are used.
Key words: artificial intelligence, knowledge, distributed and heterogeneous sources, technology platforms, databases integration, ontologies

1. INTRODUCTION

Currently, the software market offers "knowledge" systems for computer-aided design and simulation processes (CAD/CAM), and also knowledge management tools and industrial information systems of the ERP/MRPII type. On the other hand, the area of technological decision support tools remains very poorly developed, i.e. expert systems, technology platforms sharing domain knowledge, and tools for the integration of knowledge from distributed and heterogeneous sources.

In recent years, the interest in expert systems supporting diagnosis and the technological decision-making process has been subject to some fluctuations. The observed disappointment in this class of systems was due to difficulties related to the collection and application of a sufficient number of rules in knowledge bases. Currently, the development of algorithms for automated knowledge acquisition has aroused new interest of science centres in inference systems, as described in (Adrian et al., 2012; Kluska-Nawarecka et al., 2011a; Kluska-Nawarecka et al., 2011c; Mrzygłód et al., 2007; Zygmunt et al., 2012). Studies are carried out on possibilities to apply modern knowledge engineering formalisms, as reported in (Janková et al., 2011; Kluska-Nawarecka et al., 2009; Spicka et al., 2010; Švec et al., 2010), including fuzzy logic, rough sets, decision tables and the use of multimedia techniques to render this knowledge accessible to users. With the current rush of knowledge and data, research on the methods of knowledge acquisition and integration is unavoidable, including ontologies enabling the modelling of domain knowledge and, ultimately, the creation of semantically structured systems.

The article outlines the future plans and gives selected results of work aimed at building a system platform with the task of creating and sharing knowledge from the area of foundry technologies.

2. ANALYSIS OF USER NEEDS

When the work was started on the development of a concept of the structure, and on the substantive content of the system with determination of the functionality of each of its modules, it was considered necessary to refer to the needs of potential users of the system. Therefore, the main objective of the first stage of the work was an interview with the industry and scientific communities in Poland and abroad dealing in some way with the casting practice, to determine the need for different types of functionality of the future system rendering the knowledge accessible. The interview was conducted in the form of questionnaires and discussions carried out with representatives of industry and research centres. Surveys covered a specific range of the system utility, namely the area rendering available the knowledge components.

The selected companies represented large, small and medium-sized enterprises (joint stock companies, limited liability companies); experts were selected from the circles of scientists cooperating with plants processing different types of metals. Respondents were asked to indicate the types of knowledge they believe are most commonly demanded by the manufacturing plants. In the questionnaire they were given the following options:
- literature knowledge about processes, descriptions of processes and applied technologies, handbooks including descriptions of the possible types of treatment,
- norms, certifications, standards (Polish Standards, ISO, etc.),
- visuals: pictures, computer simulations, photographs of castings and defects, and microstructures,
- reports and studies of own production, document templates, balance sheets, ready compilations,
- specifications of requirements and properties, and chemical compositions of materials; tables, databases,
- expert knowledge in case of defects or irregularities in the process, request for expert opinion,
- branch statistics, volume of production, etc.,
- market analysis, marketing data, studies, foresight, industry statistics, statistical yearbooks, data on production volume in a given sector, data from the Chief Statistical Office (CSO).

The results of the survey are presented in the form of a diagram in figure 1.

Fig. 1. User preferences on the types of shared knowledge.

It is clear that professionals reach most often for multimedia resources in the form of photographs and simulations, also for diagrams and visuals in the


form of charts and diagrams. The following were also identified as important: norms, standards and certifications; literature knowledge and handbooks; specifications of requirements and properties; chemical compositions of materials; tables and databases; market analysis and marketing studies; and expert knowledge when it is necessary to have an expertise performed. Industry statistics as well as reports and studies of own production were considered of minor importance. Probably the reason is the effective circulation of such information and needs already satisfied by the existing tools and management systems.

In order to clarify the need for different types of knowledge, the respondents were requested to assess the individual potential functionalities of the future information system operating in their companies. The proposed list of functional features is presented in table 1.
Table 1. List of potential functionalities of an information system.
- virtual handbook: descriptions of processes and applied technologies, handbooks including descriptions of possible treatments, access to publications;
- visuals (pictures, simulations): photographs of castings and defects, microstructures;
- electronic standards, certifications: Polish standards, ISO standards, etc.;
- databases, catalogues: specifications of requirements and properties, chemical compositions of materials, catalogues of materials;
- e-learning: virtual training in advanced manufacturing technologies, interactive courses in casting techniques;
- expert systems: discovering the causes of defects, detection of process irregularities;
- tools for classification: determination of defect types and of the class/grade of material from which the product should be made, etc.;
- reporting tools: reports based on production data, statements and reports on e.g. production volume, consumption of materials, costs, severity of defects.

The list of potential functionalities of the system was based on the conducted research and on the currently available capabilities of a system developed as a result of cooperation between the team of workers from the Foundry Research Institute and the Knowledge Engineering Team from the Department of Applied Informatics and Modelling at the Faculty of Metals Engineering and Industrial Computer Science, AGH University of Science and Technology, Cracow, Poland.

Definitely the highest rated was the proposed "virtual handbook", a platform to share collections of documents with descriptions of processes and technologies, and handbooks and publications in electronic form. Also highly rated were the visuals (photographs, simulations, pictures of castings and defects, and microstructures), as well as databases and catalogues, the specifications of requirements and properties, chemical compositions of materials, and catalogues of materials, all of them reflecting the most common needs. The responses obtained allowed establishing the following ranking of the remaining functionalities: 1. databases, catalogues; 2. expert systems; 3-4. classification tools and reporting tools; 5. electronic standards, certifications; 6. e-learning.

As a general conclusion from the survey and discussions held, it can be stated that such knowledge sharing is needed that, while giving the user an easy-to-handle interface, will also ensure a constant supply of current information and knowledge from the area of casting practice, derived not only from the literature but also from all other sources, such as databases or knowledge obtained algorithmically from process data. To achieve this, the above-mentioned sources will have to be integrated and made ready for processing (e.g. indexed for convenient search). At the same time, the need arises to design the knowledge base in such a way as to make it interactive, enabling the user to obtain, through a dialogue with the system, exactly the knowledge necessary for solving a problem.

3. KNOWLEDGE-SHARING PLATFORM

When the concept of a knowledge-sharing platform on casting technologies was created, it was assumed that the platform should include all major solutions developed in the course of previous works on computer-aided manufacturing processes. At the same time, it should be enriched with new modules and functionalities, targeted at meeting user preferences as regards the application of new trends and opportunities that arise from the development of methods and technologies based on computer science, as described in (David et al., 2011; Olejarczyk et al., 2010). Consequently, the platform has a multi-module structure, where individual modules can operate independently, and the results of their actions are subjected (if necessary) to a process of integration. The degree and manner of this integration depend mainly on the scenario of actions taken by the user (when using the system in an interactive mode). Among the modules already existing and available on the Internet, one can mention the Infocast system (including databases on publications, standards, and catalogues) and the Castexpert system, designed to serve as a tool for the diagnosis of casting defects assisted with knowledge presented in multimedia form. Below are outlined the results of the implementation of additional modules specific to the operation of the whole platform, which received most attention from the users.

3.1. Virtual handbook

The virtual handbook, whose schematic diagram is shown in figure 2, is a kind of link joining together the functionalities of the other modules of the platform. Using this tool, the user gets a general idea of the type of knowledge provided by the platform and, as a result, can find out which variant of the scenario will lead him to a solution of the problem. An example scenario of the use of the handbook is presented in table 2.

3.2. System for decision support and classification of defects in castings

The RoughCast system allows the use of approximate logic to enable a classification of casting defects according to international standards: Polish, Czech and French. The databases can be expanded in the future to include other standards. Approximate logic is a tool to model the uncertainty arising from incomplete knowledge resulting from the granularity of information. The main application of approximate logic is in classification,



Fig. 2. Specification of functional requirements for the knowledge module Virtual Handbook.


Table 2. An example of the use of the virtual handbook.
Name: Virtual handbook
Actors: End User, Expert, Knowledge Engineer
Shareholders/Stakeholders: Artificial Intelligence, Knowledge Engineering, Data Mining; Data Sources: Internet
Short description: Preparing a specialised Virtual Handbook.
Preliminary conditions: The user must have access to a computer and a specified topic of the handbook.
Final conditions: Handbook ready to display. Database updated and saved.
The main flow of events:
1. The user opens the Virtual Handbook interface (on-line).
2. The user writes in the subject.
3. The system analyses the subject:
a) finds data on the Internet and in the documents and databases in natural language,
b) collects statistical data,
c) searches alternative sources of knowledge (literature, source materials, expert knowledge, etc.).
4. Cataloguing of data:
a) taken from the Internet by means of Data Mining methods,
b) statistical data using rule induction algorithms,
c) data from sub-items 4.b and 3.c (alternative sources of knowledge) using decision tables.
5. Saving in XML files.
6. Algorithms of artificial intelligence prepare relevant data for display in the handbook (semantic analysis with the use of ontologies).
7. Displaying the appropriate page of the handbook.
Special requirements: Device with access to the Internet.

Table 3. Fragment of a decision table for defects in steel castings.

as with this logic it is possible to build models of approximation for a family of sets of elements whose membership in the sets is determined by the attributes. The conducted research allowed developing a methodology for the creation of decision tables representing the knowledge of casting defects. Using this methodology, a decision table was developed for selected defects in steel castings. A fragment of this table is presented in table 3.

Based on rough set theory, elementary sets can be determined in the table. Sets determined in this way represent the division of the universe in terms of the indiscernibility relations for the attribute distribution. The most important step is to determine the upper and lower approximations in the form of a pair of precise sets. An abstract class is the smallest unit in the calculation of rough sets. Depending on the query, the upper and lower approximations are calculated by summing up the respective elementary sets.

The system operates on decision tables; this structure of the data allows the use of an inference engine based on approximate logic. The system maintains a dialogue with the user, asking questions about the successive attributes. The user can select the answer (the required attribute) in an intuitive manner. Owing to this method of formulating queries, there is no need for the user to know the syntax and semantics of queries in the approximate logic. However, to make this dialogue possible without the need for the user to build a query in the language of logic, the system was equipped with a query interpreter with a semantics much narrower than the original Pawlak semantics. It has been assumed that the most convenient way to build queries in the case of casting defects is by selection from a list of attributes required for a given object (defect). The user chooses which characteristics (attributes) the selected defect has. This approach is consistent with the situation occurring every day, when the user has to deal with specific cases of defects, and not with hypothetical tuples. Queries set up in this way are limited to conjunctions of attributes, and therefore the query interpreter has been equipped with only this one logical operator.

The knowledge base created for the needs of the RoughCast system is the embodiment of an information system in the form of decision tables. It contains tabulated knowledge on the characteristics of defects in steel castings taken from Polish Standards, Czech studies, a French directory of casting defects, and a German textbook of defects.
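The approximation step described above can be illustrated with a short sketch; the decision table below is illustrative and not taken from table 3:

```python
from collections import defaultdict

def elementary_sets(table, attributes):
    """Group objects that are indiscernible on the given attributes."""
    groups = defaultdict(set)
    for obj, row in table.items():
        key = tuple(row[a] for a in attributes)
        groups[key].add(obj)
    return list(groups.values())

def approximations(table, attributes, target):
    """Lower and upper rough-set approximations of the set `target`."""
    lower, upper = set(), set()
    for eq in elementary_sets(table, attributes):
        if eq <= target:   # elementary set entirely inside the target
            lower |= eq
        if eq & target:    # elementary set overlaps the target
            upper |= eq
    return lower, upper

# Illustrative decision table: attribute values for four defects
table = {
    "d1": {"shape": "round", "surface": "rough"},
    "d2": {"shape": "round", "surface": "rough"},
    "d3": {"shape": "flat",  "surface": "smooth"},
    "d4": {"shape": "flat",  "surface": "rough"},
}
target = {"d1", "d3"}      # hypothetical defect class
low, up = approximations(table, ["shape", "surface"], target)
# d1 and d2 are indiscernible, so only d3 is certainly in the class,
# while d1, d2 and d3 are possibly in it
```

The same structure supports the conjunction-of-attributes queries mentioned above: a query simply selects the elementary sets whose attribute tuples match all chosen values.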
Using this formalism, it becomes possible to solve a number of difficulties arising from the granularity of foundry knowledge, in the form of indistinguishable descriptions created with attributes and inconsistent classifications from various sources (as in the case of standards for steel castings).

3.3. Cluster Analysis

General design requirements apply to the data mining module, whose main objective is to classify documents (articles) by thematic categories. The module was implemented based on the full-text clustering method, supported by the use of a thesaurus. The process of full-text cluster analysis was used to create the thematic categories (conceptual clustering) as a method of unsupervised learning. The aim was to design a system operating efficiently, which, based on the documents provided (in the correct format and in accordance with the established standards and norms), will carry out the task of clustering the documents by thematic categories based on a data mining method, namely cluster analysis. This module is fully compatible with the directly cooperating document repository systems and databases, and should be, to the greatest extent possible, open to subsequent modifications or development.

In the task of conceptual grouping, the training set is a collection of articles provided by a compatible system in the form of a knowledge base, while the task of the aggregation analysis is to split these objects into categories (aggregations) and construct a description of each category (aggregation centroids), enabling the efficient classification of new articles. As a result of this process, each article is included into one of the resulting aggregations. Each of the resulting aggregations has its centroid, which represents the concept associated with this aggregation.

The classification of text documents is a very complex problem. The main reason for the difficulty is the semantic complexity of natural language. The difficulties associated with the classification of documents written in natural language are related to, among others, polysemy, i.e. terms having many different meanings. For example, the term 'table' may refer to a piece of furniture, or it may mean a set of specifically arranged data, numbers, etc. The dimension of the feature space in document classification tasks, related to the number of possible words in a natural language (usually of the order of tens of thousands of words), is also a difficulty. On the other hand, representing documents using a selected (small) number of words will reduce the quality of classification.
Therefore, the first step of the task of conceptual clustering is to reduce the articles from the knowledge base to basic grammatical forms. The process of cluster analysis will compare the sets of articles with each other in search of common words, excluding irrelevant words (or, also, but, etc.). Articles will be grouped in clusters on the basis of the probability of adjustment determined by the number of occurrences of the common words, and on this basis the aggregation centroid will be created, as presented in figure 3.

To improve the quality of cluster formation and classification of new articles, these processes will be supported by a thesaurus (a structured set of keywords), which will also eliminate the problem of comparing sets that do not have the same keywords in the text and yet belong to the same category. Constraints can be formulated in OCL for the class Aggregation_Centroid, as exemplified below.
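The grouping by common-word counts outlined above can be sketched as follows; the stop-word list, the similarity score and the greedy threshold are illustrative choices, not the authors' implementation:

```python
import re
from collections import Counter

STOP_WORDS = {"or", "also", "but", "the", "a", "an", "and", "of", "in", "for"}

def bag_of_words(text):
    """Reduce a document to lowercase word counts, dropping irrelevant words."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

def common_word_score(doc_a, doc_b):
    """Number of shared word occurrences between two bag-of-words vectors."""
    return sum((doc_a & doc_b).values())

def cluster(texts, threshold=2):
    """Greedy clustering: attach a document to the first cluster whose
    centroid shares at least `threshold` word occurrences with it."""
    clusters = []                      # list of (centroid, member indices)
    for i, text in enumerate(texts):
        bag = bag_of_words(text)
        for centroid, members in clusters:
            if common_word_score(centroid, bag) >= threshold:
                centroid.update(bag)   # centroid accumulates member words
                members.append(i)
                break
        else:
            clusters.append((bag, [i]))
    return [members for _, members in clusters]

docs = [
    "casting defects in steel casting",
    "classification of casting defects",
    "neural networks for rolling mills",
]
# the first two documents share 'casting' and 'defects'; the third
# shares nothing with them and starts its own cluster
```

A thesaurus would plug in at the `bag_of_words` stage, mapping synonyms to a common descriptor so that documents without literally shared keywords can still be matched.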

context Aggregation_Centroid::Generate_Aggregation_Centroid() : Boolean
pre: self.Aggregation_Centroid_Path = ''
post: self.Aggregation_Centroid_Path <> ''

context Aggregation_Centroid::get_ID() : Integer
post: result = self.ID_Centroid

context Aggregation_Centroid::get_PATH() : String
post: result = self.Aggregation_Centroid_Path

Fig. 3. Diagram of classes in the module of document aggregation analysis.

Fig. 4. Use Case Diagram.


context Aggregation_Centroid
inv: self.ID_Centroid >= 1 and self.Aggregation_Centroid_Path <> ''

context Aggregation_Centroid
inv: Aggregation_Centroid.allInstances()->forAll(p1, p2 |
    p1 <> p2 implies p1.ID_Centroid <> p2.ID_Centroid)

context Aggregation_Centroid
inv: Aggregation_Centroid.allInstances()->forAll(p1, p2 |
    p1 <> p2 implies p1.Aggregation_Centroid_Path <> p2.Aggregation_Centroid_Path)
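For readers less familiar with OCL, the same invariants (positive unique identifier, non-empty unique path) can be mirrored in ordinary code; the class-level registry used to check uniqueness across instances is an illustrative device:

```python
class AggregationCentroid:
    """Mirror of the OCL invariants: ID >= 1, non-empty path, both unique."""
    _ids, _paths = set(), set()

    def __init__(self, id_centroid, path):
        if id_centroid < 1 or not path:
            raise ValueError("ID must be >= 1 and path non-empty")
        if id_centroid in self._ids or path in self._paths:
            raise ValueError("ID and path must be unique across instances")
        self.id_centroid = id_centroid
        self.aggregation_centroid_path = path
        AggregationCentroid._ids.add(id_centroid)
        AggregationCentroid._paths.add(path)

c1 = AggregationCentroid(1, "/centroids/casting")
try:
    AggregationCentroid(1, "/centroids/other")  # duplicate ID is rejected
except ValueError:
    pass
```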

The use case diagram is shown in figure 4. Table 4 gives a description of one possible use of the discussed algorithm.


Table 4. Description of the use case.
Name: Automatic clustering of documents
Actors: System
Short description: The system manages the service module of the operations of the database. It performs import operations on documents from a knowledge base and export operations on the resulting database to ontology classes. The system also manages the module of document clustering by the method of Data Mining. Automatic clustering of documents in terms of thematic fit.
Pre-conditions: Obtaining edited documents and information in the form of a knowledge base.
Post-conditions: Thematically grouped, forwarded database of articles.
Main flow of events:
1. Receiving the knowledge base.
2. The system performs cluster analysis for the received knowledge base.
3. The system creates a new database of articles grouped thematically based on the cluster analysis carried out.
4. The system sends the created database.
Alternative flow of events:
1. When a new article is downloaded, the cluster analysis classifies it into one of the topics.
2. When there is a limit to the number of articles classified to various topics, and all these articles are characterised by a high degree of fit with each other, then a new topic will be created to which these articles will be assigned.
Special requirements:
1. For the main flow of events to occur, the knowledge base must be delivered.
2. For an alternative flow of events to occur, the artificial intelligence system must initiate the delivery of a new article.

4. FINAL REMARKS

The article describes solutions developed for selected modules of a platform for sharing the knowledge of foundry technologies in the context of the preferences expressed by users. It seems that the studies carried out to create this context make an interesting contribution to the contents of this article, since the results of such surveys are not often disclosed in presentations of expert and decision-making systems. In selecting the modules of the platform described in the article, it was attempted, on the one hand, to show their diversity and, on the other, to expose the way in which they will be adapted to the declared user needs. It has been the intention of the creators and promoters of the platform to offer a system of an evolving nature, gradually enriched with new modules according to the emerging needs. Currently, work is underway on the implementation of modules for the automatic acquisition of knowledge from the Internet and for the analysis and classification of text documents, as presented in (Kluska-Nawarecka et al., 2011b).

Acknowledgements. Scientific work financed from the funds for scientific research as an international project, decision No. 820/N-Czechy/2010/0.

REFERENCES
Adrian, A., Mrzygłód, B., Durak, J., 2012, Model strukturyzacji wiedzy dla systemu wspomagania decyzji, Hutnik - Wiadomości Hutnicze, 1, 67-70 (in Polish).
David, J., Vrožina, M., Janková, Z., 2011, Determination of crystallizer service life on continuous steel casting by means of the knowledge system, Transactions on Circuits and Systems, 10, 359-369.
Janková, Z., Ružiak, I., Kopal, I., Jonšta, P., 2011, The Influence of Rubber Blend Aging and Sample on Heat Transport Phenomena, Defect and Diffusion Forum, 312-315, 183-186.
Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Dobrowolski, G., Nawarecki, E., 2009, Structuralization of knowledge about casting defects diagnosis based on rough sets theory, Computer Methods in Materials Science, 9, 341-346.
Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Regulski, K., 2011a, Practical aspects of knowledge integration using attribute tables generated from relational databases, Semantic Methods for Knowledge Management and Communication, eds, Katarzyniak, R., Chiu, T.F., Hong, C.F., Nguyen, N.T., Springer, Berlin, Heidelberg, 13-22.
Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Dziaduś-Rudnicka, J., Smolarek-Grzyb, A., 2011b, Acquisition of technology knowledge from online information sources, Archives of Foundry Engineering, 11, 107-112.
Kluska-Nawarecka, S., Wilk-Kołodziejczyk, D., Regulski, K., Dobrowolski, G., 2011c, Rough Sets Applied to the RoughCast System for Steel Castings, Intelligent Information and Database Systems, Proc. Third International Conference, ACIIDS 2011, eds, Nguyen, N.T., Kim, C.G., Janiak, A., Daegu, Korea, 52-61.
Mrzygłód, B., Adrian, A., Kluska-Nawarecka, S., Marcjan, R., 2007, Application of Bayesian network in the diagnosis of hot-dip galvanising process, Computer Methods in Materials Science, 7, 317-323.

Olejarczyk, I., Adrian, A., Adrian, H., Mrzygłód, B., 2010, Algorithm for controlling of quench hardening process of constructional steels, Archives of Metallurgy and Materials, 55, 171-179.
Spicka, I., Heger, M., Franz, J., 2010, The mathematical-physical models and the neural network exploitation for time prediction of cooling down low range specimen, Archives of Metallurgy and Materials, 55, 921-926.
Švec, P., Janková, Z., Melecký, J., Koštial, P., 2010, Implementation of neural networks for prediction of chemical composition of refining slag, Proc. Conf. Metal 2010, International Conference on Metallurgy and Materials, Tanger, 155-159.
Zygmunt, A., Koźlak, J., Nawarecki, E., 2012, Analiza otwartych źródeł internetowych z zastosowaniem metodologii sieci społecznych, Biały wywiad: otwarte źródła informacji - wokół teorii i praktyki, eds, Filipkowski, W., Mądrzejowski, W., C.H. Beck, Warszawa, 197-221 (in Polish).

SYSTEM UDOSTĘPNIANIA WIEDZY INSPIROWANY POTRZEBAMI UŻYTKOWNIKA Z ZAKRESU PRZEMYSŁU PRZETWÓRSTWA METALI

Streszczenie

The article concerns the work related to the implementation of an information platform for sharing knowledge in the field of foundry technologies. The initial part presents the results of a survey on the preferences of potential users of the platform regarding the areas of knowledge used and the functionalities provided by the system. The second part presents selected knowledge modules, with attention drawn to their functionalities targeted at user needs. A virtual handbook acts as a guide facilitating the use of the platform; the RoughCast system serves the diagnosis of casting defects, while the ontological module allows the integration of knowledge coming from different sources.

Received: October 16, 2012 Received in a revised form: November 28, 2012 Accepted: December 7, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING TECHNOLOGICAL KNOWLEDGE ON METAL PROCESSING AND CASTING
STANISŁAWA KLUSKA-NAWARECKA1, EDWARD NAWARECKI2, GRZEGORZ DOBROWOLSKI2, ARKADIUSZ HARATYM2, KRZYSZTOF REGULSKI2*
1 Foundry Research Institute, Zakopiańska 73, Cracow, Poland
2 AGH University of Science and Technology, Mickiewicza 30, 30-059 Cracow, Poland
*Corresponding author: regulski@tempus.metal.agh.edu.pl
Abstract

This paper presents the concept of a knowledge-sharing platform which uses an ontological model for integration purposes. The platform is expected to serve the needs of the metals processing industry, and its immediate purpose is to build an integrated knowledge base which will allow semantic search supported by a domain ontology. Semantic search will resolve the difficulties encountered in the class of Information Retrieval systems associated with polysemy and synonyms, and will make it possible to search for properties (relations), not just keywords. An open platform model using Semantic MediaWiki in conjunction with the authors' script parsing the domain ontology will be presented.
Key words: knowledge integration, knowledge base, decision support, semantic search, metal processing, casting

1. CONTEXT OF THE RESEARCH WORKS For many years, at the Foundry Research Institute in Cracow, construction tools for integrated knowledge bases have been developed (Dobrowolski et al., 2007; Grny et al., 2009; Kluska-Nawarecka et al., 2002; Nawarecki et al., 2012; KluskaNawarecka et al., 2005). The studies carried out at present are aimed at improving the information retrieval systems (IR systems) in such a way as to make the collection and sharing of documents easier and more functional from the users point of view. In 1997, the Institute launched a SINTE database, which is a bibliographic casting database containing abstracts of over 38,000 articles published in various casting journals (American, English, French, German, Czech, Slovenian, Russian, Ukrainian), proceedings of conferences, and R&D works written by

the staff of the Foundry Research Institute. Together with NORCAST, CASTSTOP and the CASTEXPERT diagnostic system, the SINTE database forms a part of the INFOCAST system, a decentralised decision-making information system intended to support casting technology both in industry and in scientific and research work (Marcjan et al., 2002). In such an extended knowledge base it becomes increasingly difficult to reach the information searched for. This is particularly true when the user does not know in advance what kind of resources will be of interest to him, that is, whether he is looking for publications on an indicated subject, for information on standards and certificates, or for knowledge in the form of rules or guidance on the characteristics of materials and their physico-chemical properties.


ISSN 1641-8581

INFORMATYKA W TECHNOLOGII MATERIAŁÓW

In extended IR systems, such as the SINTE database, there are internal tools for cataloguing the content, usually based on keywords. Categorising is done with the help of a structured thesaurus, which is a set of descriptors arranged in hierarchies. These descriptors allow us to describe the content of a document and make searching the database possible, as happens in library catalogues. However, this way of cataloguing the content has some important limitations. A search based on keywords does not allow the results to account for the natural semantics of a measure of distance between the query and a set of documents. It likewise gives no possibility of rating the responses (documents) in relation to the request. Problems associated with polysemy (variety of meanings depending on the context) or synonyms (when a word does not appear in a document, although the document is closely connected with that word) pushed researchers towards the semantic description of documents using models in description logic (Ciszewski & Kluska-Nawarecka, 2004; Mrzygłód & Regulski, 2012; Regulski et al., 2008).
Among the knowledge bases used at the Foundry Research Institute there is also the CASTSTOP system (Pocik, 1999). It allows the selection of cast materials based on their physico-chemical and technological properties. Additionally, the database offers the possibility to search for Polish counterparts of foreign standard alloy grades, such as grey cast iron, malleable cast iron, and cast alloys of Al, Cu, Zn and Mg. The total content of the database includes information on more than 1000 alloys. Knowing the technical requirements of the designed product, the user, operating via the system, can select the appropriate cast alloy meeting these requirements. However, also in this case the system has some limitations, namely it does not include knowledge of the material properties that can be obtained by thermal or mechanical treatment. Such information goes beyond the scope of standardised material properties, and thus cannot easily be compared between different national or foreign suppliers. An attempt to create an algorithm which will enable searching for a material by its physico-chemical or mechanical parameters, also taking into account a possible upgrading process, requires the use of a knowledge model of the treatment processes. The answer to this problem, also in this situation, can be semantic search based on an ontological model.
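The synonym limitation of keyword search mentioned above can be illustrated with a toy example (the documents and the thesaurus fragment below are invented for the illustration): a plain keyword search misses a document that uses a synonymous term, while a thesaurus-aware search expands the query first.

```python
# Toy illustration of the synonym problem: "hot tear" is a synonym of
# "hot crack", so a literal keyword match misses doc2.
docs = {
    "doc1": "hot crack formation in steel castings",
    "doc2": "hot tear formation after solidification",  # uses the synonym
}
# A structured thesaurus maps a descriptor to its equivalent terms.
thesaurus = {"hot crack": {"hot crack", "hot tear"}}

def keyword_search(term):
    # Plain keyword matching: only literal occurrences count.
    return sorted(d for d, text in docs.items() if term in text)

def thesaurus_search(term):
    # Expand the query with the descriptor's synonyms before matching.
    terms = thesaurus.get(term, {term})
    return sorted(d for d, text in docs.items()
                  if any(t in text for t in terms))

print(keyword_search("hot crack"))    # -> ['doc1']
print(thesaurus_search("hot crack"))  # -> ['doc1', 'doc2']
```

A semantic search based on an ontological model generalises this thesaurus lookup from synonym sets to arbitrary relations between concepts.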

2. THE PLATFORM FOR SEMANTIC INTEGRATION AND SHARING KNOWLEDGE ON METAL CASTING - CASTWIKI

The knowledge sharing platform should act as an intermediary between the user and heterogeneous sources of knowledge. Using appropriate knowledge formalisms, such as description logic-based ontologies, it aims at integrating resources in such a way that going through a variety of sources is done with a significant benefit to the user, and in a way transparent to him. The platform aims at knowledge sharing for non-routine tasks that are difficult to predict in the normal course of production. The solutions and implementations presented in the previous section comprise knowledge modules that can operate independently (as evidenced by experience), but using them to create an ontologically enriched, integrated information and decision-making system would provide additional functionalities and make it easier for the user to use those already existing. The platform is conceived as an ontological tool for the integration of various subsystems in a semantic network, where individual packets of information and data, as well as components of the knowledge transferred in the system, are described using metadata in accordance with the shared ontological model, so that they can be explicitly used (shared) by the individual modules and also remain ready for reuse by other computerised systems, as the Platform is an open system. Recent studies carried out at the Foundry Research Institute among experts from the world of industry led to the conclusion that the goal should be to create a platform for knowledge sharing that, by giving the user an easy-to-use interface, would provide him with a steady supply of current information and knowledge in the field of metal casting practice, coming not only from the literature, but also from all other sources, such as databases, or knowledge obtained algorithmically from process data.
To achieve this, these sources will need to be properly integrated and ready for processing (at least indexed for easier search). The system should allow the design of a knowledge base in such a way that it is interactive and makes the codification of expert technological knowledge possible. The task of the proposed system will be the semantic integration of the collected data, information and knowledge. The aim will be to provide the end user with transparent access to an integrated knowledge base built on


COMPUTER METHODS IN MATERIALS SCIENCE


heterogeneous resources. The integration tool will be ontologies.

3. ONTOLOGIES - MODELS IN DESCRIPTION LOGIC APPLICATION

Description Logic (DL) is a subset of First-Order Logic (FOL), which can be used to represent a domain in a formalised, structured and, at the same time, computer-processable manner. The basic elements of representation are unary predicates corresponding to sets of objects and binary predicates mapping relationships between objects. Description logic allows creating definite descriptions that depict the domain using concepts (unary predicates) and roles (binary predicates). For example, a steel casting with hot cracks in two places may be written in DL as:

Casting ⊓ ∃made.CastSteel ⊓ (≥ 2 defective) ⊓ ∃defective.HotCrack

Hence it follows that concepts can be built from atomic concepts (Casting, CastSteel, HotCrack), atomic roles (made = made of, defective = having a defect), and constructors. Using logic one can create definitions of concepts:

Casting ≡ CastSteel ⊓ Product ⊓ ∃cast.Mould

or axioms:

CastSteel ⊑ Alloy (every CastSteel is an Alloy)
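The semantics of such a concept description can be made concrete with a small sketch that checks concept membership over a handful of assertions. All individual names below (c1, s1, d1, d2) are invented for the illustration, and the set-based semantics is deliberately simplified:

```python
# Checking whether individual c1 satisfies the example concept
#   Casting AND (exists made.CastSteel) AND (at least 2 defective)
#   AND (exists defective.HotCrack)
concepts = {          # unary predicates: individual -> set of concept names
    "c1": {"Casting"},
    "s1": {"CastSteel"},
    "d1": {"HotCrack"},
    "d2": {"ColdShut"},
}
roles = {             # binary predicates: (individual, role) -> role fillers
    ("c1", "made"): ["s1"],
    ("c1", "defective"): ["d1", "d2"],
}

def exists(x, role, concept):
    """Existential restriction: some filler of the role belongs to the concept."""
    return any(concept in concepts.get(y, set())
               for y in roles.get((x, role), []))

def at_least(x, n, role):
    """Number restriction: x has at least n fillers of the role."""
    return len(roles.get((x, role), [])) >= n

is_instance = ("Casting" in concepts["c1"]
               and exists("c1", "made", "CastSteel")
               and at_least("c1", 2, "defective")
               and exists("c1", "defective", "HotCrack"))
print(is_instance)  # -> True
```

In a real DL reasoner, membership would be derived by inference over the terminology rather than by direct look-up, but the constructors combine in exactly this way.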

Description logic was created with ontologies in mind, and therefore a very important issue was to simultaneously create a language that would allow the symbolic language of logic to be written in the form of computer code. Such a language for description logic has proved to be OWL (Web Ontology Language). By creating a kind of shared formal language, ontologies allow integrating a wide variety of distributed sources of knowledge in a given field, overcoming the problem of differences in the systemic, syntactic and semantic areas which, so far, has been the biggest challenge for computer tools, often preventing a clear identification of a concept and, therefore, the finding of information related to this concept. An ontology is not a database schema, but a simplification can be used: ontology is to knowledge repositories what the entity diagram is to a database; it is a diagram, a model of a field of knowledge, understandable by both computers and humans. The Department of Applied Informatics and Modelling at the AGH Faculty of Metals Engineering and Industrial Computer Science has developed a domain ontology for different cast iron grades and the changes in their properties under the influence of treatment, presented in figure 1. Directly under the parent class there are 5 main categories: Treatment, Alloying_elements, Carbon_form, Properties, and Cast Iron. Altogether, they accumulate more than 90 general terms, on which the reasoning in the CastWiki Platform will be based.



Fig. 1. Symbolic representation, as a directed graph, of a fragment of the domain ontology in the field of cast iron.



3.1. OntoGRator System

At the Foundry Research Institute in Cracow, attempts to use ontologies for the integration of knowledge in the field of metal processing have been going on for several years in cooperation with the AGH Department of Computer Science (Dobrowolski et al., 2007; Regulski et al., 2008). The OntoGRator system developed at that time allowed describing, in a strict and formal manner, the area to which the integrated data were related, and also specifying the semantics of the integrated resources. Although the OntoGRator system solved the problems of a semantic description of the metal processing domain, it was not well received by its potential users. The lack of success was due to, among other things, the very complex structure of the system, which consisted of two main subsystems:
- OntoGRator Engine - an engine that integrates data from multiple heterogeneous sources, including information on the problem area contained in the domain ontology. This subsystem provides the integrated data in the form of a new ontology, expanded by the data from external sources, operating through the Jena API programming interface,
- OntoGRator Web - an application in J2EE technology presenting the integrated ontologies, available through the Jena API programming interface, in the form of web pages.
A user who was not a knowledge engineer might have great difficulties in understanding the operating principle of the system. Without basic knowledge of ontologies, the system became incomprehensible. Additionally, the very idea of the system assumed its ability to integrate structured knowledge resources, such as databases and documents from servers located on the Internet, to which the user had access (Adrian et al., 2007). However, this assumption turned out to be ahead of its time: the actual databases often did not provide any API, and the user had no access to them via a network interface. Much more functional has proved to be a system that provides the ability to insert the content directly, instead of placing the URL / URI of each resource.

3.2. CastWiki Platform
The example of Wikipedia (http://en.wikipedia.org/) shows that it is possible to create a system in which each user has the ability to edit and add content, while a high level of quality of the accumulated knowledge is maintained through supervision and control, and through the ability of other users to introduce their own amendments. Participation in editing Wikipedia is voluntary and unpaid, and millions of users around the world add new definitions and edit the existing ones every day. Wikipedia's success has inspired software developers to implement industrial systems operating on the same principle, but being the sole repository of a company. Systems operating in this way, known as content management systems (CMS) or idea management systems, inherit several advantages from Wikipedia:
- Wiki tools are a popular source of information and knowledge, which most internet users have already encountered and become familiar with,
- Wiki keeps the knowledge resources constantly updated through editing, and the discussion leading to the development of a final version of an entry is an integral part of the entry in Wiki,
- Wiki technology is as simple as possible; it requires minimal skills to edit and add new content, and is available to everyone,
- the Wiki structure provides a description of the concepts in natural language, and at the same time contains a unique URI, which is an effective way to identify concepts in the knowledge model.
Thus, a wiki-type tool successfully meets the most important demands of knowledge management: it allows the codification of knowledge and the recording of experience and of the results of the creation of new (experimental) knowledge through the free editing of entries; it supports discussion on specific concepts, giving the opportunity to generate the phenomenon of externalisation of knowledge; and, by maintaining the history of discussion on a given topic, it can also be personalised.
The ability to create a "stub article", which is only a draft definition allowing for extension, too short to serve as an encyclopaedic definition but still giving some information about the topic, is a key aspect here. It is precisely in this way, by first creating a short, incomplete and uncertain description, that we allow a discussion to be started on a given topic. Other


users can participate in determining the definition of the concept, adding fragments of the description. Such a scheme of action allows the collective creation of knowledge resources, a so-called shared conceptualisation. Creating the "stub articles" is a voluntary activity, the aim of which is to liberate the externalisation of knowledge by encouraging discussion on a given topic. However, Wikipedia as a public system is not an acceptable solution for companies that need to restrict access to their knowledge. Knowledge in industrial plants is valuable, but also highly specialised. That is why it was necessary to create a separate platform, using the Wiki standard, specifically for the needs of foundry plants. The MediaWiki software, the platform used by Wikipedia and made available under the GPL licence, was applied. The system called CastWiki, schematically presented in figure 2, is designed to provide a platform for the exchange of knowledge and for saving casting knowledge by specialists in the field of metal processing. The Wiki mechanism allows the inclusion of such types of content as: descriptions in natural language, graphic files, hypertext links to other concepts in CastWiki, and links to all resources available in the network (documents, catalogues, images, photographs, animations, databases) having their own URL.

In this way, CastWiki allows the integration of knowledge already stored in digital form as a component of other knowledge systems (e.g. INFOCAST, CastExpert+ etc.). The integration of these data and knowledge resources (as well as those added during the use of the sources) consists in describing the resources by terms (concepts) of the ontology, and then mapping their structure to the underlying ontology components. For each class in the ontology, a description in natural language can be added. Ontology editing tools such as Protégé permit the placement of descriptions in text form directly in the OWL description of the ontology. They also allow the user to place references in the form of a URL. This gives the possibility to transfer to the CastWiki knowledge base the unstructured knowledge that the definitions in natural language are, as well as photographs. Each concept (article) in CastWiki acquires its unique URI / IRI, which can also serve as a reference to the class description in the ontology.

Fig. 2. Description of classes in CastWiki.

A problem in Wiki-class systems is page duplication and redundancy of knowledge. Co-creation by multiple users leads to situations in which several articles under various entries exist for the same substantive term (e.g. L200HNM cast steel, which is also G200CrMoNi4-3-3 cast steel). In this situation, it is necessary to integrate the duplicate articles and create redirections from the individual entries to the integrated article. Ontology facilitates the analysis of overlapping terms through the rdfs:seeAlso property, which allows placing related terms directly in the ontological description, thus greatly facilitating the work of CastWiki editors. Another problem that is solved by this structure of the knowledge model is the problem of homonyms. Concepts with the same name require the creation of an additional article in CastWiki, which will be a list of words that share the same spelling and pronunciation, with a short note about the context of each word. Ontology also solves the problem of homonyms: the model itself cannot have two classes with the same name, which forces the



ontology engineer to extend the class name in such a way as to give the context in accordance with the namespace. Ontology also provides the ability to create a more structured knowledge base than the traditional wiki approach: Wikipedia does not allow the definition of relationships. A concept which is a non-autonomous object cannot have a representation in the form of a Wikipedia entry. Creating one's own CastWiki system is a way to avoid this limitation. Users gain the ability to create descriptions not only for classes and instances in the ontology, but also for relationships (object properties). For example, in the ontology there is a relationship preventive_means, for which the basic Wiki version has no place in a separate article. Therefore, knowledge about how to prevent a casting defect would have to be included in the description of a particular type of defect. CastWiki allows us to create an article integrated with the preventive_means relationship, so that the user can easily create a document that contains basic information about how to prevent defects, while collecting all the known knowledge resources on this subject (including the specification of the defects related in the article, or links to specific procedures to prevent defects). This form of knowledge collection provides the hypertext structure of the system and ease of navigation across the resources offered to the user, who needs no preliminary knowledge of the conceptual model of the system. At the same time, the user navigating across the ontology can easily find a definite description of a relationship, not just its member classes.

3.3. Semantic CastWiki

Semantic MediaWiki (SMW) is a complex semantic extension of the MediaWiki platform (a free Wiki-type solution released under an open-source licence and developed by the Wikimedia Foundation, which is the basis for most of its projects, such as Wikipedia, Wiktionary and Wikinews) with mechanisms that improve the extraction, search, valuation, marking and publishing of Wiki content. It also provides a platform for software development, which makes it one of the fastest growing projects of this type on the web. In the present work, this application provides the basic mechanisms for the implementation of the semantic features.

The semantic annotations developed for SMW are designed in such a way as to enable a faithful export of ontologies in the OWL DL format. It is worth mentioning that the SMW user interface does not require a formal interpretation of OWL DL, nor does it impose restrictions on expressiveness. OWL DL ontology structures can be divided into instances, which represent individual elements of a particular domain, classes, which are aggregates of instances with the same characteristics, and attributes describing logical relationships between instances. The way in which SMW represents knowledge was partly inspired by solutions such as the Web Ontology Language, which allows an easy transcription from one format to the other. From a technical point of view, the MediaWiki platform uses namespaces to group pages by content. This mechanism was also used in the clustering of ontology elements:
- OWL individuals - these elements are represented as regular articles. Pages of this type account for a significant portion of the data contained in the Wiki. Usually they are grouped in the main namespace, but can be stored in other spaces, too (People space, Image space).
- OWL classes - they have a counterpart in the basic mechanisms of MediaWiki as categories. The category system, which has been an integral part of the MediaWiki platform since 2004, quickly became the main tool for the classification of documents in Wikipedia and other Wikimedia Foundation projects. Category pages are grouped in the namespace of the same name. They can be organised hierarchically, in a similar way as in OWL ontologies.
- OWL properties - the relationships between ontology elements have no counterpart in the MediaWiki engine, and are supplied by the SMW extension. OWL distinguishes relationships between data (assigning a numerical value to an ontological element) and between objects (the relation of two ontological elements).
Semantic MediaWiki simplifies this division by aggregating all types of relationships in the namespace called Property. In order to easily browse the semantic annotations found on a Wiki page, a factbox is used. It allows users to view the most important facts about the subject. For those who support and complement the Wiki, it is also a tool to validate the correctness of the Wiki engine's "reasoning" with respect to the annotations introduced earlier. The information is displayed in two columns: the first, starting from the left, contains the attributes used on the page (e.g. the population), while the second one stores the values assigned to them (e.g. 340,000). Each attribute name is also a link to its page, where one can usually find basic information about this attribute (meaning, use). Depending on the design of the attribute, the fact table may include, for example, its value in different units of measurement. Next to each attribute there is a special magnifying glass icon, which is a link to a simple semantic search engine. For example, if a web page is labelled [[is an alloy::Iron]], the user will receive a list of all the pages that meet this requirement. Below the list there is a form in which one can specify any desired attribute-value pair. If the attribute value is numeric, the search engine can also provide pages with an approximation of this value. In the header of the fact table there is an eye icon, which allows quick browsing of all the semantic annotations, as presented in figure 3.
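In essence, such a factbox reduces to a two-column rendering of the attribute-value annotations found on a page. A minimal sketch, with annotations invented for the example:

```python
# Toy rendering of a factbox: attribute names in the left column,
# their assigned values in the right column.
annotations = {"population": "340,000", "has alloying element": "Nickel"}

def factbox(annotations):
    # Pad attribute names so the value column lines up.
    width = max(len(attr) for attr in annotations)
    return "\n".join(f"{attr.ljust(width)} | {value}"
                     for attr, value in annotations.items())

print(factbox(annotations))
```

The real SMW factbox additionally links each attribute name to its property page and attaches the search icons described above.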

3.4. Parsing of domain ontologies - implementation of the script

The recommended method of entering the ontology into the Wiki structure is by creating one's own Wiki parsing script. SMW initially provided such a possibility, but the multitude of ontological formats and a change in the approach of the authors of the application to the construction of semantic structures in Wiki led to the abandonment of this tool (in most of the recent releases it has been removed completely). The author's script, in turn, allows a relatively easy optimisation, depending on the user's needs. Taking into account the available libraries working with MediaWiki, it also gives the possibility of parsing most of the popular ontology formats. The script applied in SemaWiki to load the ontology was implemented using the Python Wikipediabot (pywikipedia) programming platform. It is a set of tools to automate the work on the pages of MediaWiki and other popular Wiki engines using a web crawler. From the point of view of the Wiki platform, the robot is a normal user with specific access rights.

Fig. 3. Fact table - a description of links.
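The class-parsing pass of such a robot can be sketched in pure Python, with triples represented as plain tuples; in the actual script the triples come from an rdflib graph and the resulting pages are written through pywikipedia (the class names below are invented for the example):

```python
# Turning rdfs:subClassOf triples into MediaWiki category page titles.
SUBCLASS = "rdfs:subClassOf"
OWL_THING = "owl:Thing"   # highest class in OWL; top-level parents point here

triples = [
    ("castiron:GreyCastIron", SUBCLASS, "castiron:CastIron"),
    ("castiron:CastIron", SUBCLASS, OWL_THING),
]

def category_pages(triples):
    """For each subject-object pair joined by rdfs:subClassOf, build a
    category page title; a parent equal to owl:Thing means the class sits
    at the top of the hierarchy, so no parent category is assigned."""
    def name(uri):
        # Extract the natural class name from the URI, as the text describes.
        return "Category:" + uri.split(":")[-1]
    pages = []
    for s, p, o in triples:
        if p != SUBCLASS:   # no reasoning: only directly stated pairs count
            continue
        parent = None if o == OWL_THING else name(o)
        pages.append((name(s), parent))
    return pages

print(category_pages(triples))
# -> [('Category:GreyCastIron', 'Category:CastIron'), ('Category:CastIron', None)]
```

Attributes are handled analogously by iterating over rdfs:subPropertyOf pairs.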


The first step is to import the necessary libraries that allow us to edit the ontology while preserving its logical graph. For this task, the rdflib package is used. Then the namespaces used in the parsed ontology are defined. After verifying that the robot has correctly loaded the ontology and logged on to the designated Wiki, the actual parsing begins. First in line is the class structure. In each iteration of the loop, the algorithm finds a subject-object pair joined by a verb such as rdfs:subClassOf. In this way, no class is neglected. It is worth noting that during the transcription of the ontology no reasoning is carried out, i.e. if a relationship is not defined directly, it will not be reflected in the ontology. Each resulting subject-object pair will be represented in CastWiki as a category page, so its name must begin with the keyword "Category". Additionally, to obtain the required transparency, the natural class name is extracted from each URI. The resulting pages are tested under four conditions. The first checks whether the parent category is not the class Thing (the highest class in OWL; all classes declared in any ontology are subordinate to the class Thing). If the condition is not met, the child class becomes the parent class. Entering attributes into the Wiki structure is done in the same way. The algorithm takes subject-object pairs combined by the relationship rdfs:subPropertyOf. Such a condition, however, does not guarantee the extraction of all attributes from the ontology. This is due to the fact that, in contrast to classes, attributes are not grouped under one and the same parent attribute.

3.5. Semantic search

With a ready-to-use ontology, the user can start adding pages. This process is not much different from filling a database with content in a simple Wiki. The easiest way is to enter the name of a specific term into the standard MediaWiki search engine. If it has not already been included in the base, the application will ask whether to create a page for the term searched for earlier. Simple search by attributes would, with the continuously expanding knowledge bases of the Wiki type, be insufficient in the long run. Therefore CastWiki also provides the ability to search based on formal queries. For this purpose, a special syntax has been designed, similar to the solutions used in the semantic tags themselves. For example, the query [[has alloying element::Nickel]] will generate all pages for which the attribute "has alloying element" assumes the value "Nickel". Of course, the introduced phrases can be much more advanced. The syntax allows creating queries based on the logic of sets, such as [[Category:Cast Iron]] [[has alloying element::!Nickel]], which will generate a list of all pages in the category "Cast Iron" that do not have the value "Nickel" for the attribute "has alloying element". In the case of attributes taking numerical values, there is the additional possibility of declaring search ranges ([[has content of C::>0.1%]] [[has content of C::<2%]]).

4. SUMMARY

The designed platform is a complete, functional tool that allows for the creation in an enterprise of new channels of communication and knowledge transfer. Such a system can be built at minimum cost - the cost is actually just the time. The platform provides employees with complete information about all the resources of knowledge that are available in the organisation; they can also easily share new resources and integrate the ones already existing but still not catalogued. The implementation of such a system can prevent employees from repeating the same job many times, give easy access to proven best practices that exist in the company, and facilitate the development and transfer of knowledge. A system shaped in this way has a huge advantage over dedicated systems with a ready knowledge base: it is above all much cheaper. CastWiki must be extended by the staff, which takes time, but it is cheap and easy to use. Every employee can participate in the development of the knowledge base, which allows taking full advantage of the knowledge accumulated in the company. At the same time, it is possible in the process of implementing CastWiki to fill a basic knowledge base with the resources accumulated previously, or with information from other purchased systems. The proposed tools - ontologies - can significantly affect the competitiveness of casting plants, support knowledge management and reduce the barriers of entry for companies wishing to expand their range of products. The implementation of integrated knowledge management systems, as well as decision support systems, requires long-term investments, especially of experts' time, but in the long run it could decide about the survival and competitiveness of an industrial plant.
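The attribute queries with negation and numeric ranges described in the section on semantic search amount to filtering pages by a conjunction of conditions. A simplified sketch, with page data invented for the example:

```python
# Pages carry attribute annotations; a query is a list of
# (attribute, operator, value) conditions combined as an intersection.
pages = {
    "EN-GJL-200": {"category": "Cast Iron", "has alloying element": "Nickel",
                   "has content of C": 3.3},
    "EN-GJS-400": {"category": "Cast Iron", "has alloying element": "Copper",
                   "has content of C": 3.6},
    "L200HNM":    {"category": "Cast Steel", "has alloying element": "Nickel",
                   "has content of C": 0.2},
}

def matches(attrs, attribute, op, value):
    actual = attrs.get(attribute)
    if actual is None:
        return False
    if op == "=": return actual == value
    if op == "!": return actual != value   # negation, as in [[attr::!value]]
    if op == ">": return actual > value    # numeric range, [[attr::>value]]
    if op == "<": return actual < value
    raise ValueError(op)

def ask(query):
    """Return the page titles satisfying every condition in the query."""
    return sorted(title for title, attrs in pages.items()
                  if all(matches(attrs, a, op, v) for a, op, v in query))

# [[category::Cast Iron]] [[has content of C::>0.1]] [[has content of C::<3.5]]
print(ask([("category", "=", "Cast Iron"),
           ("has content of C", ">", 0.1),
           ("has content of C", "<", 3.5)]))
# -> ['EN-GJL-200']
```

In CastWiki these conditions are not evaluated in memory but translated by the SMW engine into queries over the stored annotations; the set-logic semantics, however, is the same.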




The solutions proposed in this article greatly improve the search process and data distribution. Their use is justified by their relative simplicity, which does not require a lot of time to assimilate, mainly owing to the fact that they are based on commonly used technologies. A few years ago, the main problem associated with the implementation of similar techniques was little interest from serious investors. Today, many multinational companies driving the development of information technology (Google, Apple) and the most popular social networking sites (Facebook, last.fm) successfully use their own semantic solutions. Within the last few years, several ontology-based Wiki platforms have been created, which have been used as knowledge bases not only in IT-related companies. Among them, a foundry plant could find its place without any major obstacles. As regards conversion into semantic knowledge, information used in foundry practice is in no way different from other data.

Acknowledgement. Scientific work financed from funds for scientific research as an international project. Decision No. 820/N-Czechy/2010/0 and 0R0B0008 01.

REFERENCES
Adrian, A., Kluska-Nawarecka, S., Marcjan, R., 2007, The role of knowledge engineering in modernisation of new metal processing technologies, Archives of Foundry Engineering, Polish Academy of Sciences, 7(2), 169-172.
Ciszewski, S., Kluska-Nawarecka, S., 2004, Document driven ontological engineering with applications in casting defects diagnostic, Computer Methods in Materials Science, 4(1-2), 56-64.
Dobrowolski, G., Kluska-Nawarecka, S., Marcjan, R., Nawarecki, E., 2007, OntoGRator - an intelligent access to heterogeneous knowledge sources about casting technology, Computer Methods in Materials Science, 7(2), 324-328.
Górny, Z., Pysz, S., Kluska-Nawarecka, S., Regulski, K., 2009, Online expert system supporting casting processes in the area of diagnostics and decision-making adopted to new technologies, in: Innowacje w odlewnictwie, Cz. 3, ed. J. J. Sobczak, Instytut Odlewnictwa, Kraków, 251-261 (in Polish).
Kluska-Nawarecka, S., Dobrowolski, G., Marcjan, R., Nawarecki, E., 2002, From passive to active sources of data and knowledge: decentralised information-decision system to aid foundry technologies, Akademia Górniczo-Hutnicza, Katedra Informatyki, Kraków.
Kluska-Nawarecka, S., Pocik, H., Wójcik, T., Nawarecki, E., 2005, Information-decision system aiding scientists and engineers, in: Lecture Notes in Information Sciences, ISGI 2005: International CODATA Symposium on Generalization and Information, Berlin, Germany, September 14-16, 2005, ed. H. Kremers, CODATA, Berlin.
Nawarecki, E., Kluska-Nawarecka, S., Regulski, K., 2012, Multi-aspect character of the man-computer relationship in a diagnostic-advisory system, in: Human-Computer Systems Interaction: Backgrounds and Applications 2, Pt. 1, eds. Z. S. Hippe, J. L. Kulikowski, T. Mroczek, Advances in Intelligent and Soft Computing 98, Springer-Verlag, Berlin-Heidelberg, 85-102.
Marcjan, R., Nawarecki, E., Kluska-Nawarecka, S., Dobrowolski, G., 2002, Integration of the INFOCAST system databases by means of agent technology, in: INFOBAZY'2002 - Bazy danych dla nauki: III krajowa konferencja naukowa, Gdańsk, 24-26 czerwca 2002, CI TASK, Gdańsk, 83-88.
Mrzygłód, B., Regulski, K., 2012, Application of description logic in the modelling of knowledge about the production of machine parts, Hutnik - Wiadomości Hutnicze, 79(3), 148-151 (in Polish).
Pocik, H., 1999, Baza znormalizowanych gatunków stopów odlewniczych, in: INFOBAZY'99 - Bazy danych dla nauki, CI TASK, Gdańsk, 385-390 (in Polish).
Regulski, K., Marcjan, R., Kluska-Nawarecka, S., 2008, Knowledge management in casting industry processes, in: Problemy upravlenija bezopasnost'ju slozhnykh sistem: trudy 16 mezhdunarodnoj konferencii, Moskva, dekabr' 2008, eds. N. I. Arkhipova, V. V. Kul'ba, Rossijskij gosudarstvennyj gumanitarnyj universitet, Moskva, 285-289.

A PLATFORM FOR SEMANTIC INTEGRATION AND SHARING OF TECHNOLOGICAL KNOWLEDGE IN METAL PROCESSING AND FOUNDRY ENGINEERING

Abstract

The article presents the concept of a knowledge-sharing platform that uses an ontological model for integration purposes. The platform is to serve the metal processing industry by building an integrated knowledge base in which semantic search, supported by a domain ontology, will be possible. Semantic search will resolve the difficulties encountered in Information Retrieval systems related to polysemy and synonymy, and will also enable searching by properties (relations), not only by keywords. A model is presented that uses the open Semantic MediaWiki platform combined with the authors' own script for parsing the domain ontology.
Received: November 16, 2012 Received in a revised form: December 5, 2012 Accepted: December 12, 2012

COMPUTER METHODS IN MATERIALS SCIENCE
Informatyka w Technologii Materiałów
Publishing House AKAPIT
Vol. 13, 2013, No. 2
ISSN 1641-8581

INDUSTRIAL PROCESS CONTROL WITH CASE-BASED REASONING APPROACH


JAN KUSIAK1, GABRIEL ROJEK1*, ŁUKASZ SZTANGRET1, PIOTR JAROSZ2
1 AGH University of Science and Technology, Faculty of Metals Engineering and Industrial Computer Science, Department of Applied Computer Science and Modelling, al. A. Mickiewicza 30, 30-059 Kraków, Poland
2 AGH University of Science and Technology, Faculty of Non-Ferrous Metals, Department of Physical Chemistry and Metallurgy of Non-Ferrous Metals, al. A. Mickiewicza 30, 30-059 Kraków, Poland
*Corresponding author: rojek@agh.edu.pl
Abstract
The goal of the presented work is an attempt to design an industrial control system that uses production data registered in the past during the regular production cycle. The main idea of the system is the processing of the production data in order to find registered past cases of production that are similar to the present production period. If the found past production case fulfills the requirements of the given quality criterion, the registered control signals corresponding to that case are taken as the pattern for the current control. Such an approach is consistent with the core assumption of case-based reasoning, namely that similar problems have similar solutions. The paper presents preliminary results of the implementation of the CBR system for industrial control of the oxidizing roasting process of sulphide zinc concentrates.
Key words: industrial process control, case-based reasoning, oxidizing roasting process of sulphide zinc concentrates, multi-agent system

1. INTRODUCTION

Zinc is currently produced from sulfide concentrates in industry mainly through hydrometallurgical processes. The first stage of this technology is the transformation of metal sulfides into oxides, called the roasting process, which is carried out in fluidized-bed furnaces. As a result of roasting zinc sulfide concentrates in the fluidized-bed furnace, zinc oxide (ZnO) is obtained in two fractions: fine and coarse dust, with maximum sulphide sulfur contents of 0.6% and 0.4%, respectively. During the roasting process, the aim is to obtain a minimal content of sulphide sulfur in the composition of the product. Roasting of zinc sulphide concentrates also yields heat and gases, which are processed further in the sulfuric acid plant installation.

From the optimization point of view, the oxidizing roasting process is a nonlinear and multidimensional process. The oxidizing roasting process was modeled using artificial neural networks (ANN), as presented in (Sztangret et al., 2011). The goal of the artificial neural network is to generate the proper output signal that depends on the input signals and is close to the observed output of the modelled object. The results of modelling the oxidizing roasting process using artificial neural networks presented in (Sztangret et al., 2011) show the usefulness of this approach, especially when used together with evolutionary techniques in order to optimize industrial process control; however, due to the complex nature of the modeled process, it should be compared with and referred to other approaches to process control. As presented in (Rojek & Kusiak, 2012b), case-based reasoning (CBR) is one of the possible techniques that can be used in industrial control. The research presented here concerns the analysis and implementation of the CBR approach to control of the industrial oxidizing roasting process.

2. CASE-BASED REASONING

The main paradigm of case-based reasoning (CBR) is solving a current problem by reusing solutions of previous similar situations. A decision system based on the CBR approach uses a case base, which is a collection of experience items made and stored in the past, called past cases, or simply cases. Every time a new problem is solved, a past case relevant to the present problem is selected from the case base and adapted to the current situation. Every time a new problem is solved, the new experience is also retained in order to be available for future reasoning about future problem situations. Retaining the experience made enables incremental, sustained learning. From the general point of view, the CBR approach relies on experience gained in the past while solving concrete problem situations, instead of using any general knowledge of a problem domain, as presented in (Aamodt & Plaza, 1994). An example of an implementation of the CBR approach is the optimization of autoclave loading for heat treatment of composite materials, where airplane parts are treated in order to obtain the required properties (Aamodt & Plaza, 1994). This system uses relevant earlier situations in order to give advice for the current load.
Other application areas of the CBR approach include help-desk and customer service, recommender systems in e-commerce, knowledge and experience management, medical applications, applications in image processing, applications in law, technical diagnosis, design, planning and human entertainment (computer games, music), as presented in (Bergmann et al., 2009). The common core of technically different CBR systems is the CBR cycle. The CBR cycle is the algorithm shared by every CBR application and consists of four sequential processes (or phases) (Aamodt & Plaza, 1994):

1. Retrieve the most similar case or cases.
2. Reuse the information and knowledge in that case in order to solve the problem.
3. Revise the proposed solution.
4. Retain the parts of this experience likely to be useful for future problem solving.
The CBR cycle starts when there is a new problem to be solved. The main task of the first, retrieve process is to find the k nearest neighbours with respect to a specific similarity measure. The similarity measure can be the inverse Euclidean or Hamming distance, or can be modeled specifically according to knowledge of the domain. The similarity measure should induce a preference order over the case base with respect to the new, currently solved problem. This preference order should make it possible to select one or a small number of cases that are relevant to the new case. When one or several similar cases have been selected in the retrieve process, the solution contained in these cases is reused to solve the current problem, which takes place in the reuse process. This process can be very simple, when the solution is returned unchanged as the proposed solution for the current case; however, there are domains which require adaptation of the solution. There are two main ways to adapt retrieved past cases to the current problem: (1) transform the past case, or (2) reuse the past method that constructed the solution. In the revise process, the solution generated in the reuse process is evaluated and, in the case of an undesired evaluation, there is a possibility to repair the case solution using domain-specific knowledge. This phase can consist of two tasks: evaluation of the solution and fault repair. The evaluation task uses the results of applying the suggested solution in the real environment, which can happen by asking a teacher or performing the task in the real world. This task is usually performed outside the CBR system and makes it necessary to link the CBR system with the real-world domain that the solved problem concerns.
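Assuming cases are stored as simple records and similarity is the inverse Euclidean distance mentioned above, the four-phase cycle can be sketched as follows. This is an illustrative skeleton only; the function and field names are our assumptions, not part of any cited system:

```python
import math

def euclidean_similarity(a, b):
    """Inverse Euclidean distance, one of the similarity measures mentioned above."""
    return 1.0 / (1.0 + math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))))

def cbr_cycle(problem, case_base, k=3):
    # 1. Retrieve: the k cases most similar to the new problem
    retrieved = sorted(case_base,
                       key=lambda c: euclidean_similarity(problem, c["problem"]),
                       reverse=True)[:k]
    # 2. Reuse: simplest variant -- copy the best case's solution unchanged
    solution = retrieved[0]["solution"]
    # 3. Revise: evaluation normally happens outside the system (a teacher or
    #    the real world); here it is only marked as pending
    evaluation = None
    # 4. Retain: store the new experience for future problem solving
    case_base.append({"problem": problem, "solution": solution,
                      "evaluation": evaluation})
    return solution
```

In a real application the reuse step would adapt the solution and the revise step would fill in the evaluation before the case is retained.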
Fault repair involves detecting errors in the current solution and using failure explanations to modify the solution so that the errors no longer occur. The retain process of the CBR cycle concerns learning by retaining the current experience, which usually happens by simply adding the revised case to the case base. Thanks to this, the revised solution becomes available for reuse in future problem solving. As a result of the retain process, a CBR system gains new experience together


with regular solving of current problems. In some application domains, the continuous growth of the case base caused by the retain process leads to a continuous decrease in the efficiency of the retrieve phase.

3. DESIGN OF THE CBR SYSTEM FOR CONTROL OF AN INDUSTRIAL PROCESS

The implementation of the CBR approach to industrial process control is illustrated with the oxidizing roasting process of sulphide zinc concentrates as an example of a typical industrial process. The goal of control of this process is to achieve the minimal concentration of sulphide sulphur in the roasted products. All input signals of this process can be divided into three main groups: (1) independent signals: the chemical composition of the input zinc sulphide concentrate; (2) dependent signals: signals that are only measured but influence the nature of the process, e.g. the temperature inside the furnace; and (3) controllable signals: signals that can be set, e.g. the air pressure after the blower. The quality criterion is the minimal concentration of sulphide sulphur in the roasted products. This concentration is measured several times a working day (e.g. 5 or 7 times a day). All independent signals (concentrations of Zn, Pb, Fe and S in the input concentrate) are measured only once per day, whereas dependent signals are measured several times per minute. Controllable signals are set at the same frequency at which the dependent signals are measured.

3.1. A case in the domain of industrial control

The fundamental problem in designing a CBR system for any domain is defining what a case is. A case relates to one single problem solved by the CBR system. Considering the oxidizing roasting process of sulphide zinc concentrates, the solved problem can be stated as a question: how to control the process, knowing the independent signals (the chemical composition of the input concentrate), in order to obtain a minimal concentration of sulphide sulphur in the products. Because the chemical composition of the input concentrate is known only once per production day (at the beginning of the day), it is assumed that the whole day of production should be controlled in the same manner, using one single control function. This control function should take into consideration the values of the measured dependent signals and, on this basis, propose the values of the controllable signals. After the end of the production day, an average quality measure is known, which makes it possible to evaluate the production characterized by the measured values of the independent signals and the used control function (in the form of dependent and controllable signals).

The above discussion lets us define a case as the triple problem-solution-evaluation. The problem is specified by the measured independent signals (the chemical composition of the input concentrate). The solution is, in other words, the control function used for production characterized by the specified independent signals. The control function takes the values of the dependent signals as input and results in values of the controllable signals, so the control function can be described by the dependent and controllable signals registered during past production. The evaluation is represented by the average concentration of sulphide sulphur in the products made during the period of using the control function specified in the solution. Summing up, a case is a data structure that consists of:
- Problem: single values of the independent signals for the whole considered production day,
- Solution: the description of the used control function in the form of the values of dependent and controllable signals registered during the considered production day,
- Evaluation: the average quality measure in the form of the average concentration of sulphide sulphur in the products made during the considered production day.
From the general point of view, a case represents one day of production. Every CBR system needs knowledge represented in the case base in order to propose a solution for the current problem. In the domain of the presented industrial process, it is possible to use past data related to manual control performed in the past. Such a case base should enable the designed system to imitate the manual control, taking into account the quality results obtained during different production days.

3.2. The retrieve phase

In the domain of control of the presented industrial process, the main goal of the retrieve phase is to find a past case that concerns a problem similar to the current one and whose solution is evaluated as desirable. Previously solved problems similar to the current one are cases representing past production days with similar values of the measured independent signals (which means a similar composition of input materials). It is proposed first to choose a small number of past cases representing similar problems (using the k-nearest-neighbour algorithm) and then to select among them the one with the best evaluation, which can be done in two steps:
1. Choose a number of previous cases from the case base with the highest similarity rate, measured as the Euclidean distance between the values of the independent signals measured for the current problem and for the solved problems represented by the previous cases.
2. Select among the chosen cases the one that is evaluated with the most desirable average value of the quality measure.

3.3. The reuse phase

In the reuse phase, the solution represented by the past case selected in the previous phase should be applied to the current problem, that is, control of the industrial process. The past case contains a description of the solution in the form of the values of the dependent and controllable signals. It is assumed that the controllable signals are a function (named the control function) of the dependent signals. In order to reuse the solution represented by the selected case, the control function used in the selected case has to be approximated and then used in the control of the present production period, which can happen in two steps:
1. A model of the control function relevant for the selected past case is prepared using the values of the dependent and controllable signals.
2. The prepared model of the control function is used in solving the current problem.
Artificial neural networks can be used for modeling and applying the control function. In the first step, the artificial neural network is trained with the values of the dependent and controllable signals contained in the selected past case. In the second step, the trained network is used to predict the values of the controllable signals on the basis of the currently measured dependent signals. During the second step, all values of the dependent and controllable signals should be saved in order to be used during the retain phase.

3.4. The revise phase

The revise phase provides feedback on the solution applied in the reuse phase to the current problem being solved. In the domain of industrial control, the feedback takes the form of evaluations of the real products made during the current period of production. This evaluation has to be made outside the computer system and is usually equivalent to a quality measure (made by a human). In the case of industrial control of the oxidizing roasting process of sulphide zinc concentrates, the quality measurement is done after the production time, so no fault repair process is possible for the present production period.

3.5. The retain phase

The retain phase enables learning in the CBR cycle. This phase starts when the current problem has been solved and the evaluation of this solution is known. The current case already contains the description of the problem, the description of the applied solution in the form of the values of the dependent and controllable signals saved during the reuse phase, and the evaluation in the form of the average value of the quality measure. In the retain phase, the current case is simply added to the case base and becomes one of the past cases representing experience items concerning control of the industrial process, which makes the currently ended case available for reuse in future problem solving.

4. IMPLEMENTATION OF THE CBR SYSTEM

4.1. Control of the oxidizing roasting process

The analysis presented above, concerning the CBR system in the domain of industrial control of the oxidizing roasting process of sulphide zinc concentrates, is implemented using agent technology, which was presented among others in (Weiss, 1999; Wooldridge, 2001). The complete functioning of the CBR system is partitioned into individual agents. Two main types of agents function in the system:
- the Past Episode Agent, which represents one past case,
- the Control Agent, which performs the CBR operations of resolving the solution for control of the present production period.
Because one Past Episode Agent represents one past case, the number of Past Episode Agents is equal to the number of past cases contained in the case base. The Past Episode Agent can receive messages concerning the represented past case and answer such questions, providing information


concerning the description of the problem, solution or evaluation represented by the past case. The Control Agent performs the CBR operations that aim at control of the oxidizing roasting process. In the retrieve phase, this agent finds the one past case that is relevant for the current production period. The selection is made by agents communicating in the system: first, the 5 Past Episode Agents are selected that represent the production periods most similar with respect to the independent signals; second, from those five, the one is chosen that represents the best-evaluated production (as presented in subsection 3.2). In the reuse phase, the Control Agent uses an artificial neural network (as presented in subsection 3.3). This neural network is a multilayer perceptron composed of neurons with sigmoid activation functions. The neurons are located in 4 layers of 9, 13, 11 and 7 neurons. In the modeling step, supervised learning is used with the data represented by the case selected in the previous phase. The main problem we faced in implementing the software concerns the revise and retain phases, which require a real evaluation of the proposed solution. This evaluation would not be a problem if the implemented system really controlled the real process. The lack of evaluation, however, was not an obstacle to implementing the part of the system that resolves the solution for control of the present production period. Due to the mentioned problem with evaluating products made under the control of the presented system, the revise and retain phases are not implemented. The presented system is implemented in JAVA using JADE (a framework for agent systems), as in previous work concerning industrial control presented in (Rojek et al., 2011; Rojek & Kusiak, 2012b). Using agent technology allows many development problems that appear in the implementation of CBR systems to be overcome.
The CBR methodology relies on using the information contained in the case base, which is just a set of distributed cases concerning past situations. The number of past cases changes as new problems are successively resolved and stored in order to be used in the future. Such a construction of the case base can be simply transferred to multiple agents, each representing, as proposed, one case. This transformation maintains a natural decomposition of the problem through the use of agent technology.
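The two-step selection described in subsection 3.2, which the Control Agent performs by querying the Past Episode Agents, can be sketched as a plain function over an in-memory case base. The dictionary field names are illustrative assumptions; the evaluation is the average sulphide sulfur concentration, so lower is better:

```python
import math

def retrieve(current_signals, case_base, k=5):
    """Two-step retrieve phase:
    (1) pick the k past cases nearest to the current independent signals
        (Euclidean distance), then
    (2) among them, pick the case with the best (lowest) evaluation."""
    def distance(case):
        return math.sqrt(sum((a - b) ** 2
                             for a, b in zip(current_signals, case["independent"])))
    nearest = sorted(case_base, key=distance)[:k]
    # lower average sulphide sulfur concentration = better production day
    return min(nearest, key=lambda case: case["evaluation"])
```

With k = 5 this mirrors the five Past Episode Agents consulted above; the selected case's registered dependent and controllable signals would then feed the reuse phase.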

4.2. Combustion control of blast furnace stoves

The implementation of the CBR methodology for combustion control of blast furnace stoves presented in (Sun, 2008) seems analogous to the work presented above. The main problems of the application shown in (Sun, 2008) are related to the definition of the case representation, the case base and all phases of the implemented CBR cycle. In the retrieve phase, past cases similar to the current one are searched for with the same method as presented in subsection 3.2 (the k-nearest-neighbour algorithm and the Euclidean distance). The reuse phase is much simpler due to the fact that a case represents only one moment in time: the solution represented in the relevant case is taken directly as the final control decision. The work presented in section 3 treats a case as a whole day of production, which requires approximation methods (e.g. neural networks) in order to obtain momentary control decisions. The revise and retain phases are presented in (Sun, 2008) very briefly. It is assumed that cases are evaluated later and added to the case base for future problem solving, which is similar to the research presented in subsections 3.4-3.5.

5. ARTIFICIAL NEURAL NETWORK BASED CONTROL SYSTEM

Another approach to control of an industrial process uses a model and an optimization procedure (figure 1). Applying this approach to the considered oxidizing roasting process, an artificial neural network is used as a model for predicting the concentration of sulphide sulphur in the roasted ore. The elaborated ANN model is based on the architecture of the multilayer perceptron (MLP). The ANN first has to be trained to predict the concentration; a supervised learning method is used for this step. The dataset used for training contains records composed of the measurements of the roasting process and the resulting concentration. After the ANN has been trained, it can be used as the model employed by an optimization procedure.
As the optimization method, particle swarm optimization (PSO) is used. The goal of the optimization is to obtain the values of the control signals that provide the minimal concentration of sulphide sulphur in the roasted ore. The PSO method is inspired by the behaviour of swarms of birds, insects or fish shoals looking for food or shelter. Every member of the swarm searches in its neighbourhood but also follows the others, usually the best member of the swarm. In the algorithm based on this behaviour, the swarm is considered as a set of particles representing single solutions. Each particle is characterized by its own position and velocity. Particles move through the decision space and remember the best position they have ever had. A more detailed description of this method can be found in (Sztangret et al., 2009) and (Sztangret et al., 2010).
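The swarm behaviour just described can be sketched as a minimal PSO loop. This is a generic textbook variant with typical inertia and acceleration coefficients, not the authors' implementation:

```python
import random

def pso(objective, bounds, n_particles=20, n_iter=100):
    """Minimal particle swarm optimization sketch: each particle follows its own
    best position and the best position found by the whole swarm."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best position so far
    gbest = min(pbest, key=objective)          # best position of the whole swarm
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=objective)
    return gbest
```

In the control system of figure 1, the objective function would be the trained ANN model returning the predicted sulphide sulphur concentration for a given vector of control signals.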

Fig. 1. Scheme of the control system using an ANN model and an optimization method.

6. CONCLUSIONS

The case-based reasoning approach makes it possible to make decisions when no model of the implementation domain is known. A decision is made on the basis of previously made decisions, provided those decisions brought desirable results. The CBR system uses experience, that is, information about previously made decisions. The experience takes the form of a case base, which is used for currently solved problems. While solving problems, the CBR system simultaneously gains experience by adding current problems and their solutions to the case base. The research presented in this article shows that it is possible to apply the CBR methodology to the domain of industrial process control. Such an implementation involves many design decisions regarding the representation of a case, the construction of the case base and the development of the whole CBR cycle. All these decisions have to relate to the real industrial process that is controlled. Future work should be oriented towards the implementation of the CBR approach to real control of an industrial process. Only such an implementation will make it possible to obtain a real evaluation of the decisions made concerning the control of production. If the evaluation of the control performed by the CBR system is known, the revise and retain phases can be realized and it will finally be possible to close the CBR cycle. Such a CBR system will not only use its experience but also gain experience, which is presented as the main feature of case-based reasoning. From the general point of view, the operation of such a complete system can be analogous to the work of a human operator who gains and uses experience based on the decisions made.

Acknowledgment. The financial assistance of the MNiSzW (Działalność statutowa AGH nr 11.11.110.085) is acknowledged.

REFERENCES

Aamodt, A., Plaza, E., 1994, Case-based reasoning: foundational issues, methodological variations, and system approaches, AICom - Artificial Intelligence Communications, 7, 39-59.
Bergmann, R., Althoff, K. D., Minor, M., Reichle, M., Bach, K., 2009, Case-based reasoning: introduction and recent developments, Künstliche Intelligenz: Special Issue on Case-Based Reasoning, 23, 5-11.
Rojek, G., Kusiak, J., 2012a, Industrial control system based on data processing, Proc. Conf. ICAISC 2012, ed. Rutkowski, L., Zakopane, 502-510.
Rojek, G., Kusiak, J., 2012b, Case-based reasoning approach to control of industrial processes, accepted for Computer Methods in Materials Science.
Rojek, G., Sztangret, Ł., Kusiak, J., 2011, Agent-based information processing in a domain of the industrial process optimization, Computer Methods in Materials Science, 11, 297-302.
Sun, J., 2008, CBR application in combustion control of blast furnace stoves, Proc. Conf. IMECS 2008, ed. Ao, S. I., Hong Kong, vol. I, 25-28.
Sztangret, Ł., Stanisławczyk, A., Kusiak, J., 2009, Bio-inspired optimization strategies in control of copper flash smelting process, Computer Methods in Materials Science, 9, 400-408.
Sztangret, Ł., Szeliga, D., Kusiak, J., 2010, Analiza wrażliwości jako metoda wspomagająca optymalizację parametrów procesów metalurgicznych, Hutnik-Wiadomości Hutnicze, 12, 721-725 (in Polish).
Sztangret, Ł., Rauch, Ł., Kusiak, J., Jarosz, P., Małecki, S., 2011, Modelling of the oxidizing roasting process of sulphide zinc concentrates using the artificial neural networks, Computer Methods in Materials Science, 11, 122-127.
Weiss, G., 1999, Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence, MIT Press, Cambridge, USA.
Wooldridge, M., 2001, Introduction to Multiagent Systems, John Wiley & Sons, New York, USA.


CONTROL OF INDUSTRIAL PROCESSES WITH A CASE-BASED REASONING APPROACH

Abstract

The goal of the presented work is an attempt to design an industrial control system that, during the current production cycle, uses production data recorded in the past. The main assumption of the system is the processing of production data in order to find recorded past production cases that are similar to the current production period. If a production case found in the base of past cases fulfils the requirements of the given quality criterion, the recorded values of the control signals corresponding to the found case are regarded as the pattern for the current control. Such an approach is consistent with the basic assumption of case-based reasoning, namely that similar problems have similar solutions. The paper presents preliminary results of applying the CBR system to the control of an industrial process, namely the oxidizing roasting process of sulphide zinc concentrates.
Received: December 4, 2012 Received in a revised form: December 18, 2012 Accepted: December 28, 2012


RULE-BASED SIMPLIFIED PROCEDURE FOR MODELING OF STRESS RELAXATION


KRZYSZTOF REGULSKI1*, DANUTA SZELIGA1, JACEK ROŃDA1, ANDRZEJ KUŹNIAR2, RAFAŁ PUC2
1 AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Kraków, Poland
2 WSK "PZL-Rzeszów" S.A., Hetmańska 120, 35-045 Rzeszów, Poland
*Corresponding author: regulski@tempus.metal.agh.edu.pl
Abstract
A case study of a rule-based knowledge base for modeling stress relaxation in welded components is presented in the paper. The procedure of formulating a simplified decision support model is described, including the externalization of expert knowledge, the formalization of knowledge, the selection of variables and the identification of attribute domains. A decision tree was used to support the creation and visualization of the model. The result of the work is a set of rules constituting a simplified model for predicting the stress relaxation parameters of stress-relief annealing applied to welded components of the PZL-10W helicopter engine produced by WSK Rzeszów. The developed application can be implemented in one of the reasoning system shells.
Key words: rule-based system, knowledge base, decision support, stress-relief annealing, welding

1. CHARACTERISTICS OF THE RESEARCH WORK

The inference and the development of control models proceed in several steps and form an iterative process in which specialists perform the role of experts, not only at the stage when the model is defined, but also at the stages when the variables of a process are selected, a variable domain is determined, inference rules are proposed, and finally during model evaluation. This process also requires a collaboration of experts from various areas of expertise, not only from manufacturing, but also from knowledge engineering, computer science, etc. (Rutkowski, 2005). The aim of this research is to propose a set of control rules aimed at decreasing the stress in welded components on the basis of the technological knowledge of WSK Rzeszów specialists. An inference stress relaxation model is developed in collaboration with welding experts from WSK Rzeszów. The design of the model requires the determination of the model scope, the decision criteria and an appropriate selection of independent variables. Various materials, such as Inconel 625, Inconel 718 and Steel 410, are considered in the numerical simulation of the welding and heat treatment processes used in the manufacturing of a turbine engine.

2. SOURCES OF TECHNOLOGICAL KNOWLEDGE UTILIZED IN THE MODEL DEVELOPMENT

2.1. Literature review

The knowledge base for the assessment of weldment integrity can be based on the measurement


of residual stress and density of micro-cracks. Issues related to the relaxation of stress after welding have been discussed in numerous publications, e.g. by Tasak (2008) and Pilarczyk (1983). Stress-relief annealing leads to some stress relaxation and can also restore ductility in brittle zones; it is the most common method of removing internal stress. Thermal annealing reduces the yield strength at elevated temperatures, which results in the occurrence of plastic deformation in areas where the second invariant of the internal stresses exceeds the local yield limit. Since increasing the annealing temperature reduces the yield limit and the strength of the steel, a lower annealing temperature is usually preferred for stress-relieved elements that are required to have good strength properties; however, lowering the annealing temperature too much could lead to insufficient stress relaxation. The most important parameters of stress-relief annealing are: the alloy composition, the complexity of the shape and the size of the product, the heating rate, the annealing temperature and the cooling rate.

2.2. Knowledge base and experience of the executive team

The authors' preliminary knowledge in the area of rule system implementation was invaluable in the preparatory phase of the research project (Kluska-Nawarecka et al., 2007; Kluska-Nawarecka & Regulski, 2007; Kluska-Nawarecka et al., 2009; Mrzygłód & Regulski, 2012; Nawarecki et al., 2012; Szeliga et al., 2011; Szeliga, 2012). During the problem formulation, after a literature study and discussions with engineers from WSK Rzeszów, the major tasks were defined and the objectives were identified, together with the selection of decision-making criteria and appropriate variables, the so-called inference objects. The first step in developing a decision-making support system is to determine the inference object. There are several variants of such a study, based on:
- The prediction model: determining the quality of the residual stress after heat treatment on the basis of welding and annealing parameters and workpiece data.
- The decision support model: determination of heat treatment parameters on the basis of the expected properties of the material after the annealing process.
- The diagnostic model: determining the causes of defects in the heat treatment process.
- The decision-making control model: determination of heat treatment parameters on the basis of the workpiece data.

Finally, the authors followed the fourth option, i.e. the decision-making control model. The manufacturing knowledge base available at WSK Rzeszów exists in non-formal documentation, industrial practice and observations recorded by technologists; it can be collected and formally coded in the form of databases. Rules supporting the decision-making process will serve as the engine of the database.

2.3. Internal procedures and standards

Each of the engine elements has an exact specification in the manufacturing process output. The specifications are established according to materials treatment standards. Two of the standards, the American (AMS) and the international (ISO), are generally available, but internal specifications are confidential. Those confidential specifications were available for this study, but the knowledge base and the decision-making models derived from them remain the property of WSK Rzeszów. Consequently, the decision tables, decision trees and rules are presented here without confidential details. Heat treatment of a turbine engine case is carried out after welding. It may also be used during engine overhaul, when a case is repaired by welding. When the control procedure shows that weld cracks exceed the safety limits, a repaired element must be re-welded. A typical heat treatment of a case consists of vacuum annealing conducted according to the following procedure:
1. Securing the required level of vacuum. Some degradation of this vacuum is permissible during the process.
2. Heating the furnace to the required temperature.
3. Continuous heating of the chamber to the annealing temperature and maintaining this temperature for a specified period.
4. Cooling the furnace at a specific rate down to the minimum temperature and further cooling the case down to the ambient temperature.

Annealing of the single parts of a case and components is carried out in a furnace with a fixture maintaining the shape and dimensions of elements at risk of deformation due to thermal dilatation. The same device is used in the case maintenance procedure.
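For later rule checking, the four-step vacuum-annealing procedure can be captured as a simple data record. Below is a minimal Python sketch; every field name and example value is an assumption for illustration, not WSK Rzeszów's actual specification format:

```python
from dataclasses import dataclass

@dataclass
class AnnealingCycle:
    """One vacuum stress-relief annealing cycle (illustrative fields only)."""
    material_spec: str            # specification acronym, e.g. "Inc718a" (assumed)
    heating_rate_c_per_min: float
    annealing_temp_c: float
    hold_time_min: float
    cooling_rate_c_per_min: float
    argon_stops: int              # number of heating stops for a supply of argon
    fixture_required: bool        # strengthening fixation during annealing

def describe(cycle: AnnealingCycle) -> str:
    """Render the four-step procedure from the text as a checklist."""
    steps = [
        "1. Secure the required vacuum level.",
        f"2. Heat the furnace at {cycle.heating_rate_c_per_min} C/min.",
        f"3. Hold at {cycle.annealing_temp_c} C for {cycle.hold_time_min} min"
        + (f" with {cycle.argon_stops} argon stop(s)." if cycle.argon_stops else "."),
        f"4. Cool at {cycle.cooling_rate_c_per_min} C/min, then to ambient.",
    ]
    if cycle.fixture_required:
        steps.append("Use a shape-sustaining fixture throughout the cycle.")
    return "\n".join(steps)
```

Encoding the cycle as data, rather than free text, is what allows the decision rules discussed below to read and set its parameters.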


2.4. WSK base of knowledge

The information available from the production engineers appointed by WSK Rzeszów is a valuable source of knowledge for the development of the process control model. Non-formalized knowledge based on industrial experience is called by managers "tacit knowledge" or "know-how". Without knowledge of the practical aspects of decision-making in an industrial environment, and without familiarity with the most common dilemmas emerging in the technology, it is impossible to develop a control model and propose rules of inference. In the later stages of the work, other sources of knowledge play only auxiliary roles. The major task of the knowledge engineer is the externalization of the experience of the experts. To develop the control model of stress relaxation, the knowledge of the production engineers was codified in a cycle of interviews and model assessments. The cycle was repeated several times, which helped to avoid modelling errors and to identify the decision rules.

3. KNOWLEDGE CODING FOR RULE-BASED EXPERT SYSTEMS

Knowledge coding is the next step in the design process of the control model for welding. The objective of coding is to represent the production engineering knowledge in such a way that it can be implemented in the inference process. Moreover, the introduced formalism should be clearly understood by a production engineer, who can evaluate and verify the knowledge. Several methods were used to perform that process, by developing the following objects: decision trees, decision tables, and rules of inference.

3.1. Selection of inference parameters

The first step in the design of the decision-making system is the identification of parameters, i.e. the variables used in the rules for drawing conclusions. A number of variables can be proposed, regardless of whether they will be used in the final model. Variables can be selected according to the future application, determining which of them will be used in relations and which will be evaluated in the inference process. The domain of each parameter/variable is determined as an attribute field. In this model, some variables can appear both as a conclusion and as a rationale. For example, the variable "specification" can be the conclusion of the inference in the first phase and later appear as a condition in further rules. The production engineering experts selected the following variables for the decision-making process: the specification of materials, the necessity of fixture usage during stress-relief annealing, the heating rate, and the number of stops during the heating period for a supply of argon.

3.2. Decision making tables

One of the steps of the complete decision-making model is to answer the question related to the maintenance of a workpiece shape, i.e. whether fixture usage is necessary during heat treatment. This decision is important from the economical point of view, because the mass of a fixture, which reaches several kilograms, absorbs heat; therefore, to reach the same annealing temperature, the furnace must be heated longer than in the case of heat treatment without such a device. The base surface is often the outer cylindrical surface and a cylinder base. Some parts can be supported simultaneously on various surfaces. A fixture strengthening a workpiece should be used for flexible parts. A production engineer decides on which surfaces a fixture should be applied; usually, these are the outer surfaces of a workpiece. The diameters of a workpiece are considered together with the dimensional tolerances and the shape requirements, such as flatness and roundness. Dimensional and geometrical tolerances are controlled after heat treatment. Attachment and detachment of a strengthening fixture should be easy and unambiguous. To eliminate thermal deflection after heat treatment, various repair operations are used, such as straightening for bars and tubes or spinning for cylindrical parts. The final heat treatment process is marked by the acronym HT. When such operations are final and there is a high risk of deflection, a strengthening fixation must be used to keep the shape within strictly prescribed tolerances. The variables constituting the so-called decision table do not distinguish between the types of support surfaces, e.g. cylindrical or flat. Such information is already included, among others, in the variable


named "deformations". The decision table, shown as table 1, assigns a number of decision rules to the heat treatment process scenarios. It can be read as follows: in scenario 1, when the heat treatment is at the interoperation stage, the tolerances of the supporting surface are loose, and a deflection could exceed a tolerance, then a strengthening fixation must be applied. This rule can be reduced, as in this particular scenario the information on thermal deflections is redundant; therefore, this parameter could be omitted, because the decision is made on the basis of the information about the heat treatment stage and the appropriate tolerances. The rationale for each decision should be described for each manufacturing process scenario. This redundancy will be eliminated in the future process analysis.

Table 1. The decision table showing the requirement for a strengthening fixation during stress-relief annealing (source: own study). Possible scenarios 1-6:
- Treating stage: internal | final
- Tolerances of supporting surfaces: loose | tight | loose | tight
- Anticipated deformations: YES | NO | YES | YES | NO | YES
- Use of stress-relief annealing apparatus: NO | NO | YES | YES | NO | YES

3.3. Construction of decision trees

The process of knowledge acquisition from experts is much more laborious than it would appear from this study. However, the results of such an acquisition are gathered collectively in the decision-making table. For the sake of simplicity, the authors decided to omit a number of steps leading to the refinement of the relations and inference rules. The entire decision tree depicts the rules in a comprehensive manner. Since the whole tree exceeds one page and the information is confidential, the value of the parameter "specification" is represented only by the appropriate acronym or symbol (see figure 1). To illustrate the idea of the decision tree, only a small portion of the full model is presented here, for Steel 410 only.

Fig. 1. A part of the decision tree determining the heat treatment parameters on the basis of the workpiece (decision-making control model).

3.4. Rules of inference

Inference control rules can be generated on the basis of a decision tree. These rules are shortened to include only the conditions necessary for reaching a conclusion. For example, a few selected rules are the following:

IF material = Inconel 718 AND treating_stage = final
THEN specification = Inc718a

IF treating_stage = final AND tolerance = tight AND deformation = significant
THEN apparatus = yes

IF specification = Inc718a AND heating_speed = 6°C/min
THEN number_of_stops = 0

IF specification = 625 AND heating_speed = 8°C/min AND exploitation_treatment = TRUE
THEN number_of_stops = 2
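Such rules form a small forward-chaining knowledge base. Below is a minimal Python sketch of how a forward-inference engine evaluates them; the variable names mirror the rules above, while the engine itself is an illustrative assumption, not the ReBIT implementation used by the authors:

```python
# Rules as (conditions, conclusions) pairs over a dictionary of facts.
# Variable names follow section 3.4; values are plain strings/ints for clarity.
RULES = [
    ({"material": "Inconel 718", "treating_stage": "final"},
     {"specification": "Inc718a"}),
    ({"treating_stage": "final", "tolerance": "tight", "deformation": "significant"},
     {"apparatus": "yes"}),
    ({"specification": "Inc718a", "heating_speed": "6C/min"},
     {"number_of_stops": 0}),
    ({"specification": "625", "heating_speed": "8C/min", "exploitation_treatment": True},
     {"number_of_stops": 2}),
]

def infer(facts):
    """Fire rules until no new conclusions can be added (forward inference)."""
    facts = dict(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusions in RULES:
            if all(facts.get(k) == v for k, v in conditions.items()):
                for k, v in conclusions.items():
                    if facts.get(k) != v:
                        facts[k] = v
                        changed = True
    return facts

result = infer({"material": "Inconel 718",
                "treating_stage": "final",
                "heating_speed": "6C/min"})
```

Starting from the facts material, treating_stage and heating_speed, the first rule derives the specification, which in turn lets the third rule fire; this chaining of conclusions into further conditions is exactly what a forward-inference shell performs.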


Using this type of rules, a production engineer can decide about the indicated variables, such as: the specification of materials, the utilization of a strengthening fixture during annealing, the heating rate, and the number of breaks for the application of an argon dose. These rules are also ready to be implemented in one of the expert system shells.

3.5. Example of use case

Reading a decision tree or applying the rules of inference in a daily routine can be presented in the form of a dialogue with a user. The dialogue can proceed as follows:

MODEL: What kind of a material are the parts made of?
USER: Steel 410
MODEL: Is the treating stage final or internal?
USER: Final
MODEL-CONCLUSION: You should use the specification of parameters marked Spec410a.
MODEL: Is the tolerance of dimensions tight or loose?
USER: Loose
MODEL: Is the risk of deformation significant?
USER: Yes
MODEL-CONCLUSION: You should use a geometry-sustaining apparatus during stress-relief annealing.
MODEL-CONCLUSION: You should apply a heating rate of 6 degrees Celsius per minute.
MODEL-CONCLUSION: You should plan 1 stop for argon application.

4. APPLICATION OF KNOWLEDGE MODEL

The rules presented in the paper were implemented in the ReBIT system (Banet et al., 2011). ReBIT is a Business Rules Management System which combines the capabilities of a rule-based decision support system with the expressiveness available in algorithmic programming languages. The system acts as an expert system shell: it implements an inference engine and gives the opportunity to develop a knowledge base. The system was developed in the Department of Applied Computer Science of the Faculty of Management, AGH in Krakow. The research team decided to apply forward inference. The knowledge base consists of several dozen variables and tens of rules. The decision variables are constructed from: the specification (which is a set of 16 parameters of stress-relief annealing), the requirement for a strengthening fixation, the heating speed and the number of stops. The whole system with the developed knowledge base was successfully implemented in WSK Rzeszów.

5. SUMMARY

The task was performed through the following works:
- acquisition of the knowledge from the best practice of production engineering in WSK regarding, e.g., stress relaxation after welding of an engine case and its components,
- codification of the expert knowledge on the present-day process experience and the ways of deciding the heat treatment parameters for selected materials and components,
- codification and implementation of the manufacturing know-how in a decision table, a decision tree, and control rules of inference.

The final step of the expert system development consists of the derivation of rules for the control of a stress relaxation process. A set of such rules would be used in the future by process engineers to support decisions in the design of stress-relief annealing. The developed knowledge base is a functional model of stress relaxation control. The model was sent to WSK Rzeszów for evaluation. The major result of this paper is an attempt at the codification of previously informal expert knowledge and a proposition of inference rules. For further application, the presented scheme should be implemented in one of the inference systems.

Acknowledgements. This research was carried out within the research project of NCBiR titled SPAW, no. ZPB/33/63903/IT2/10, 2010-06-01.

REFERENCES

Banet, E., Baster, B., Duda, J., Gaweł, B., Jankowski, R., Jędrusik, S., Macioł, P., Macioł, A., Madej, Ł., Nowak, J., Paliński, A., Paradowska, W., Pilch, A., Puka, R., Rębiasz, B., Stawowy, A., Śliwa, Z., Wrona, R., 2011, Business rules management: perspectives for application in technology management, eds. Macioł, A., Stawowy, A., Kraków, AGH, ISBN 978-83-932904-0-6.
Kluska-Nawarecka, S., Marcjan, R., Adrian, A., 2007, The role of knowledge engineering in modernisation of new metal processing technologies, Archives of Foundry Engineering, 7, 169-172.
Kluska-Nawarecka, S., Regulski, K., 2007, Knowledge management in material technology support systems, Problems of Mechanical Engineering and Robotics, 36, 73-86 (in Polish).
Kluska-Nawarecka, S., Górny, Z., Pysz, S., Regulski, K., 2009, On-line expert system supporting casting processes in the area of diagnostics and decision-making adopted to new technologies, Innowacje w odlewnictwie, ed. Sobczak, J., vol. 3, Kraków, Instytut Odlewnictwa, 251-261 (in Polish).
Mrzygłód, B., Regulski, K., 2012, Application of description logic in the modelling of knowledge about the production of machine parts, Hutnik - Wiadomości Hutnicze, 79, 148-151 (in Polish).
Nawarecki, E., Kluska-Nawarecka, S., Regulski, K., 2012, Multi-aspect character of the man-computer relationship in a diagnostic-advisory system, Advances in Intelligent and Soft Computing.
Pilarczyk, J., 1983, Poradnik inżyniera - Spawalnictwo, Wydawnictwa Naukowo-Techniczne, Warszawa (in Polish).
Rutkowski, L., 2005, Methods and techniques of artificial intelligence, Wydawnictwo Naukowe PWN, Warszawa (in Polish).
Szeliga, D., Pietrzyk, M., Kuziak, R., Podvysotskyy, V., 2011, Rheological model of Cu based alloys accounting for the preheating prior to deformation, Archives of Civil and Mechanical Engineering, 11, 451-467.
Szeliga, D., 2012, Design of the continuous annealing process for multiphase steel strips, 21st International Conference on Metallurgy and Materials, Brno, CD ROM.
Tasak, E., 2008, Metalurgia spawania, Wydawnictwo Biuro Ekspertyz i Doradztwa Technicznego "Techmateks", Kraków (in Polish).

SIMPLIFIED PROCEDURE OF STRESS RELAXATION MODELLING BASED ON A RULE SYSTEM

Abstract

The article presents a case study of the creation of a rule-based knowledge base in the area of relaxation of stresses arising in welded elements. The procedure of developing a simplified decision-support model is described, covering the externalization of expert knowledge, its formalization, the selection of variables and the determination of attribute domains. A decision tree was also used as a tool supporting the creation and visualization of the model. The result of the work is a set of rules constituting a simplified model which, on the basis of the values of several variables defined by the user, determines the parameters of the stress-relief annealing process used at WSK Rzeszów in the production of components of PZL-10W helicopter engines. Such a model may in the future be implemented in inference systems.
Received: September 20, 2012
Received in a revised form: November 4, 2012
Accepted: November 21, 2012



Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

EXPERIMENTAL APPARATUS FOR SHEET METAL HEMMING ANALYSIS


SŁAWOMIR ŚWIŁŁO

Faculty of Production Engineering, Warsaw University of Technology, Warsaw, Poland
*Corresponding author: s.swillo@wip.pw.edu.pl
Abstract

A new portable system for the experimental investigation of the sheet metal hemming process was developed. The introduction explains the need to use machine vision systems to solve problems that occur in the hemming process. Then, a test stand designed for the practical implementation of a three-step hemming process is presented. Among the different vision systems available, a method using laser light scanning was selected for the reconstruction of the geometry of the examined hemmed sample. An optical system for studies of the measurement technique and a method of image analysis used in the described example of the plastic forming process are presented. In this process, the test sample image is first recorded and then analysed to obtain information about the outline of the deformed line. Further in the text, a new method proposed by the author for the reconstruction of the 3D outline of the hemmed sample is presented, along with a technique to calculate the value of strain on its surface. Finally, a portable measurement system for the quality control of hemmed surface edges is shown for industrial application.
Key words: strain analysis, sheet metal hemming, experimental analysis, vision based measurement

1. INTRODUCTION

The subject of the paper relates to the proposed complex solution for the quantitative and qualitative control of a three-step hemming process (figure 1). Hemming is a sheet metal forming process which consists of flanging, followed by pre-hemming and final hemming, as classified by Muderrisoglu et al. (1996). At the final stage of the process, the end part of the sheet, rolled over to the inside onto itself, forms an angle of 180 degrees with the remaining base part of the sheet. This process is applied in the final stage of car body production and is used for two purposes: (i) to join together two parts of the sheet metal, where one sheet fills a gap between the bent edges of another sheet, and (ii) as a finishing operation by which the raw sheet edge is hidden inside the shaped item. Yet, the mechanism of the hemming process is much more complex than might be judged from the description of a pure bending process. One example of the hemming process is the door hinge, where different deformation mechanisms are operating. It is a well-known fact that, when properly designed and performed, each stage of the hemming process can effectively eliminate or at least minimise the majority of defects caused by metal deformation. For this reason, the whole range of parameters governing the forming process should be very carefully monitored, remembering that it affects the final product quality, mainly in terms of gaps existing between the rolled-over edges of adjacent components, or folds and fissures, i.e. the roll-in, roll-out, warp and recoil described by Livatyali et al. (2000). Therefore, to minimise errors occurring when the process of hemming is designed, a comprehensive understanding of the process itself is necessary, to which hitherto not much attention has been paid. This can be achieved by improving the already existing techniques or by developing entirely new ones, to achieve better insight into the forming process. The ultimate goal is to determine experimentally the process limit parameters, the process kinematics, and the geometry and surface quality obtainable in the operation of sheet metal hemming, on the basis of research performed by Swillo et al. (2005 and 2006). Due to this, it will be possible to evaluate the results of an inspection through the comparison of model results with the experimental data generated by a specially developed vision system. The studies will enable a quick and accurate analysis of the hemming process for any geometric and material-related parameters found in the selected segments of a car body. In contrast to the time-consuming and less accurate methods of assessment based on an optical system, the proposed method using a vision system will allow an immediate analysis of the finished product.

2. EXPERIMENTAL APPARATUS

A special column-shaped device was designed and built to perform the hemming process; an option has also been provided for the quick setup of the device (figure 2a). The device consists of the following main parts: two columns fixed in the bottom plate, guide sleeves fixed in the top plate, and a forming tool. The forming elements include a die with an option allowing changes in the bending radius and a punch with an option allowing changes in the pre-hemming angle. The concept of the device assumes an easy replacement of the forming elements. Moreover, the use of punch guides in the device allows precise control of the punch travel, with the possibility of measuring the clearance between the die and the punch. This solution allows precise determination of the forming process parameters and has therefore been used in the studies of the numerical modelling of the hemming process. The material used in the hemming test was aluminium sheet (A1050). Figure 2b shows the measurement stand for the hemming tests, equipped with two systems: a vision system for recording the process run and data acquisition from the force sensor, and a displacement sensor. To study the hemming process (curved surface and curved edge), a special hydraulic press, applied previously in studies of the flat surface and straight edge by Świłło et al. (2011, 2012), was used. Due to its high stiffness, large working space, maximum load of 40 kN and low punch operating speed during forming, the press allowed obtaining forming conditions similar to those used during industrial hemming of sheet metal. The hemming test was carried out using a force measurement system in the form of an axial strain gauge mounted on the model press. Used in combination with the displacement sensor attached to the press, it enabled full control of the hemming force as a function of the punch position. The measurements were carried out in a Matlab/Simulink environment that allows for block construction of the performed measurement tasks based on image processing and data acquisition, as originally proposed by Higham and Higham (2005) and Chen et al. (1999).

Fig. 1. Schematics of the three-step hemming process: a) flanging, b) pre-hemming, c) final hemming.

Fig. 2. Experimental apparatus for the hemming process: a) schematic of the designed apparatus, b) hemming tool.
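The force-displacement logging described above can be sketched as follows. This is a Python analogue for illustration (the original measurements ran in Matlab/Simulink), and both calibration constants are assumed values, not those of the actual stand:

```python
import numpy as np

# Assumed calibration constants (illustrative only).
GAUGE_KN_PER_VOLT = 8.0   # axial strain-gauge sensitivity
LVDT_MM_PER_VOLT = 5.0    # displacement-sensor scale

def force_vs_position(v_force, v_disp, n_bins=50):
    """Return hemming force as a function of punch position.

    v_force, v_disp: raw sensor voltages sampled at the same instants.
    The samples are binned over the punch travel, giving one averaged
    force value per position bin.
    """
    force_kn = GAUGE_KN_PER_VOLT * np.asarray(v_force, dtype=float)
    pos_mm = LVDT_MM_PER_VOLT * np.asarray(v_disp, dtype=float)
    edges = np.linspace(pos_mm.min(), pos_mm.max(), n_bins + 1)
    idx = np.clip(np.digitize(pos_mm, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_force = np.array([force_kn[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(n_bins)])
    return centers, mean_force
```

Binning against position rather than time makes the force-travel curve directly comparable between tests run at different punch speeds.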

3. METHODOLOGY OF HEMMING ANALYSIS

Based on the industrial experience and numerous research results, it becomes obvious that the final quality in the hemming process depends on a complex interaction between the material properties, geometry and process parameters, as reported in many works on this subject. The studies discussed above, and a number of other related works (Livatyali et al., 2000; Graf & Hosford, 1994), clearly indicate the need to search for proper relationships between the product quality, determined by the presence or absence of such defects as cracks or folds in the sheets, and the geometry of the hemming process. It should be remembered that all of the above-indicated features directly affect the final evaluation of the product quality and functionality, and thus indirectly the quality and functionality of the whole car. The methods commonly used in the industry to assess the quality of such products are either visual methods or methods using simple devices such as a feeler gauge. The repeatability in such cases is unsatisfactory. This statement leads to the conclusion that it is necessary to develop a vision system based on an advanced method of measurement and control of all process parameters. Therefore, the aim of the project is to use all three main parameters of the process, that is: the deformation during forming, the change in the shaped product geometry, and the quality of the shaped surface. All these parameters can be used in a comprehensive assessment of the hemming process quality referred to the structural parts of a car body design. The use of these three parameters demands the further development and implementation of advanced measurement techniques allowing for their full identification. Thus, the final task of the running project was to design and manufacture a portable vision measurement system allowing inspection of the three chosen parameters.

Fig. 3. Vision-based measurement for hemming: a) schematics for the strain and geometry analysis, b) stationary vision system.

3.1. Geometry and deformation measurement
The measurement of deformation in a bent sample, where the bending angle is between 20 and 180 degrees, is a serious problem due to the high localisation of the non-linear distribution of deformation. In samples with a thickness of 1 mm, the maximum deformation will be concentrated in an area of the size of micrometres. Consequently, the selected method of measuring deformation should be characterised by both high resolution and high accuracy, with the ability to determine the deformation in different variants of the hemming process (the curved surface and the curved hemmed edge - 3D). Therefore, the strain measurement algorithm presented by the author in this study is based on a solution previously proposed by Swillo et al. (2005), the so-called ALM (Angle Line Method). This method


allows a continuous strain determination in the examined sample, where the discretisation of the measurements is imposed only by the image resolution. Strain measurement using this method involves the application of a simple pattern line onto the examined object in the area of the expected deformation; the line should be applied at a certain angle. Then, to identify the line, numerical image processing is used, allowing full automation of the strain determination technique. The advantage of this method over the traditional techniques using different mesh geometries lies in the discretisation of the measurement, which in the case of the proposed method has no major restrictions on account of the pattern geometry used. In addition, another advantage of ALM is its simplicity of use, involving a simple pattern which can be drawn with just any pen. Next, as an extension of this method, a new optical configuration has been proposed by Świłło et al. (2011), allowing the user to simultaneously calculate the geometry as well as the deformation from a single CCD camera. The proposed method is based on an angled laser line projected onto the examined element subjected to displacements. Figure 3 shows in detail the schematics of the stationary measurement as well as the real experimental equipment. An example of this method in its practical embodiment has been described in detail in

research performed by Świłło and Czyżewski (2011), Świłło et al. (2012) and Świłło (2012), where the strain was measured in the hemmed sample over an area of 0.6 mm, while the maximum deformation covered an area with a width not exceeding 50 µm. In addition, an experimental study of grid pattern limitations is demonstrated in figure 4a. For the specimen with the square pattern, the grid shape is unrecognizable, so the strain cannot be calculated (figure 4b). Next, a circle grid, in which several objects (ellipses) were recorded, was analyzed. A selected circle shows strong grid defects (cracking) that make the strain measurement in this case very difficult. In particular, the recognition of the circle shape by image processing could be unreliable under such deformations. Finally, a single line pattern with no visual defects, such as broken parts, could be easily recognized and analyzed in the case of hemming strain measurement (figure 4b).

3.2. Surface quality measurement

Another parameter, previously proposed by Swillo et al. (2006), is used to judge the surface quality in the hemming process evaluation. The common practice of assessing the hemming quality is based on human inspection of the exposed hemmed surfaces. This inconsistency in the quality

Fig. 4. Strain measurement for hemming: a) direct comparison of the results of three methods: circle, grid and ALM, b) grid pattern images for the three methods of strain measurement.

Fig. 5. Micro-crack formation measurement for the hemmed surface.


control methodology results in undesirable quality variation in the hemmed parts. Many researchers have used various non-optical measurement and image processing methods to study and describe fractures and cracking (Epstein, 1993). The first reported application of the vision method to fracture analysis was by McNeill et al. (1987). Since then, many variations of this approach have been developed and implemented to determine the crack length or the displacement field in the region around the crack (Livatyali et al., 2004).

In the hemming process, there is a large strain concentration at the edge of the hemmed surface, conferring considerable roughness to this surface. The size of the roughness is a function of the size of the deformation and changes gradually from smooth to very rough, eventually giving rise to the formation of local cracks. In a perfectly run hemming process, the surface remains smooth. In practice, the commonly used method of surface quality assessment is a visually adopted roughness reference level at which the product should be rejected. This method of elimination leads to a lack of regularity and repeatability in the process of product elimination. The lack of objectivity is a fundamental error and ultimately has an impact on the whole process of elimination. Therefore, it was necessary to propose an alternative route based on vision control. The proposed method of image analysis of the hemmed surface takes into account the deformation mechanism through a statistical analysis of micro-crack formation (figure 5). To find a quantitative criterion of the surface deformation, the cumulative length of micro-cracks is determined and referred to the number of these micro-cracks. The occurrence of a local maximum in the graph proves that the maximum number of micro-cracks has been formed and their fusion has started taking place. This means that there has been a localisation of deformation, which favours the fusion of the already existing micro-cracks rather than the formation of new ones. As the crack formation index, the average length of micro-cracks corresponding to the described maximum has been adopted.

Fig. 6. Methodology of the measurement using the hand-held vision system: a) schematics of the system, b) profile and strain measurement, c) surface inspection.
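The criterion above, the cumulative micro-crack length referred to the crack count, with its local maximum adopted as the crack formation index, can be sketched as follows. This is an illustrative Python implementation, not the author's code, and the input format is an assumption:

```python
import numpy as np

def crack_formation_index(crack_lengths_per_step):
    """Average micro-crack length per deformation step and its local maximum.

    crack_lengths_per_step: list of 1-D sequences, one per deformation step,
    each holding the measured lengths of the micro-cracks found in the image.
    Returns (avg_lengths, step_index, index_value); the local maximum of the
    average length marks the point where micro-crack fusion starts to
    dominate over the formation of new cracks.
    """
    avg = np.array([np.sum(l) / max(len(l), 1) for l in crack_lengths_per_step])
    # first interior local maximum of the average-length curve
    for i in range(1, len(avg) - 1):
        if avg[i] >= avg[i - 1] and avg[i] > avg[i + 1]:
            return avg, i, float(avg[i])
    # fall back to the global maximum if no interior peak exists
    return avg, int(np.argmax(avg)), float(np.max(avg))
```

Dividing the cumulative length by the number of cracks is what makes the peak visible: once cracks start to merge, their count drops faster than their total length grows, so the average length turns down after the maximum.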

4. PORTABLE MEASUREMENT SYSTEM

As a final result of the research investigation in the area of hemming process analysis, a portable, hand-held vision-based measurement system has been developed (figure 6a). The portable vision system is applicable to the analysis of the hemmed surface edge under production conditions. All three techniques described above have been implemented, i.e. surface inspection, geometry reconstruction and strain measurement. First, in the developed technique of scanning along the edge of the inspected part, manual capturing of the images takes place to provide information on the hemmed surface shape. For that reason, the laser line generator is located on the top of the portable system, rotated relative to the camera by a specifically selected angle. Figure 6b shows the result of the profile calculation for an arbitrarily chosen location within the hemmed surface. Second, using the portable vision system with data on the average length of micro-cracks and crack initiation conditions, we are able to analyze and characterize the hemming quality for any given material and


processing conditions. Figure 6c demonstrates the inspection technique for micro-crack evaluation based on the use of a co-axial illumination system. The third measurement takes place only when a simple pattern, i.e. an angled single line, is applied to the sheet surface in the region of the anticipated hemming deformation. To identify that pattern, the ALM solution improved by the author, based on digital image processing techniques, is used to provide a more reliable solution, more accurate strain measurements and full automation. To summarize, figure 6 demonstrates the portable vision measurement system capabilities: (a) surface quality characterization by micro-crack evaluation, (b) measurement of the surface geometry with profile, and (c) continuous strain measurement with the improved ALM.
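The micro-crack statistics used for the surface quality characterization can be sketched in a few lines. The Python fragment below is only an illustration with hypothetical crack-length data (not the author's software): it counts the micro-cracks detected at each deformation step, locates the local maximum of the count, and takes the mean micro-crack length at that step as the crack formation index.

```python
def crack_formation_index(crack_lengths_per_step):
    """For each deformation step, given the list of detected micro-crack
    lengths, return (counts, mean_lengths, index_of_peak_step).

    The local maximum of the crack count marks the step at which fusion
    of existing micro-cracks starts to dominate over nucleation."""
    counts = [len(step) for step in crack_lengths_per_step]
    mean_lengths = [sum(step) / len(step) if step else 0.0
                    for step in crack_lengths_per_step]
    # first interior step where the count stops growing (local maximum)
    peak = next((i for i in range(1, len(counts) - 1)
                 if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]),
                len(counts) - 1)
    return counts, mean_lengths, peak

# hypothetical data: micro-crack lengths (in pixels) at four punch steps
steps = [[3, 4], [3, 5, 6, 4], [4, 6, 7, 5, 6], [9, 11, 12]]
counts, means, peak = crack_formation_index(steps)
# the crack formation index is the mean micro-crack length at the peak
index_value = means[peak]
```

In this sketch the third step has the most cracks; afterwards the count drops while the mean length jumps, which is exactly the fusion signature described above.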

REFERENCES
Chen, K., Giblin, P., Irving, A., 1999, Mathematical exploration with MATLAB, Cambridge University Press.
Epstein, J.S., 1993, Experimental techniques in fracture, New York, VCH Publishers.
Graf, A., Hosford, W., 1994, The influence of strain-path changes on forming limit diagrams of Al 6111 T4, International Journal of Mechanical Sciences, 36/10, 897-910.
Higham, D.J., Higham, N.J., 2005, MATLAB guide, Society for Industrial and Applied Mathematics.
Livatyali, H., Müderrisoğlu, A., Ahmetoğlu, M.A., Akgerman, N., Kinzel, G.L., Altan, T., 2000, Improvement of hem quality by optimizing flanging and pre-hemming operations using computer aided die design, Journal of Materials Processing Technology, 98/1, 41-52.
Livatyali, H., Laxhuber, T., Altan, T., 2004, Experimental investigation of forming defects in flat surface-convex edge hemming, Journal of Materials Processing Technology, 146/1, 20-27.
McNeill, S.R., Peters, W.H., Sutton, M.A., 1987, Estimation of stress intensity factor by digital image correlation, Eng. Fract. Mech., 28/1, 101-112.
Müderrisoğlu, A.M., Murata, M., Ahmetoğlu, M.A., Kinzel, G., Altan, T., 1996, Bending, flanging and hemming of aluminum sheet - an experimental study, Journal of Materials Processing Technology, 59/1-2, 10-17.
Swillo, S.J., Hu, S.J., Iyer, K., Yao, J., Koç, M., Cai, W., 2005, Detection and characterization of surface cracking in sheet metal hemming using optical method, Transactions of the North American Manufacturing Research Institute of SME, 33, 49-55.
Swillo, S., Iyer, K., Hu, S.J., 2006, Angled Line Method for measuring continuously distributed strain in sheet bending, ASME Journal of Manufacturing Science and Engineering, 128, 651-658.
Świłło, S., Kocańda, A., Czyżewski, P., Kowalczyk, P., 2011, Hemming process evaluation by using computer aided measurement system and numerical analysis, Proc. Conf. Technology of Plasticity 2011, eds. Hirt, G., Tekkaya, A.E., Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, Aachen, 633-637.
Świłło, S., Czyżewski, P., 2011, Analiza procesu zawijania z wykorzystaniem pomiarów wizyjnych i obliczeń numerycznych (MES), Zeszyty Naukowe, 238, Oficyna Wydawnicza Politechniki Warszawskiej, 93-98 (in Polish).
Świłło, S., Czyżewski, P., Kowalczyk, P., 2012, An experimental study of deformation load for hemming process, Przegląd Mechaniczny, 5, 34-37 (in Polish).
Świłło, S., 2012, Hemming process strain measurement using Angled Line Method (ALM), Hutnik, 6, 443-446 (in Polish).

5. SUMMARY
In this paper the author presents several solutions that have been developed in the area of experimental analysis of the hemming process. Since the hemming deformation is concentrated in a small corner area, advanced vision-based methods were applied to measure the key parameters: strain, geometry and surface quality. As a result of using the surface quality evaluation method, the hemming quality could be analyzed and characterized for any given material and processing conditions. Next, the strain measurement method was successfully used to compute, under large deformation, the continuous (high increment resolution) strain distribution and the maximum strain (strain peak localization and value). Finally, geometry reconstruction was performed by the scanning laser method. As a final result of the research investigation, a specially designed portable vision-based measurement system has been developed to conduct all the experiments instead of the previously used stationary solutions. The ongoing hemming experimental investigation confirms that the surface strain distribution is a major factor in the problem of hemming limit diagram representation. By calculating the strain distribution more accurately using the new hand-held system, and by including the history of deformation, a new model of hemming limit diagram representation can be created.

Acknowledgements. Scientific work financed as a research project from funds for science in the years 2009-2011 (Project no. N N508 390737).

MEASURING APPARATUS FOR THE ANALYSIS OF THE HEMMING PROCESS

Streszczenie

A new, portable experimental system has been proposed for the analysis of the hemming process. The introduction presents the need for vision systems in the analysis of problems occurring in the hemming process. Next, the design of tooling for carrying out the three-stage hemming process is presented. Among the numerous vision-based measuring devices used for the measurement and 3D reconstruction of hemmed parts, a scanning method has been proposed by the author. The paper addresses the problems of the proposed measurement techniques and of the image processing procedure with reference to experimental examples. First, the recorded image of the specimen is analysed with respect to its geometry. Further on, details of the proposed new method of geometry reconstruction together with strain measurement are presented. Finally, a portable quality control system for industrial use is presented.
Received: October 28, 2012 Received in a revised form: December 4, 2012 Accepted: December 13, 2012


COMPUTER METHODS IN MATERIALS SCIENCE


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

AN EXPERIMENTAL AND NUMERICAL STUDY OF MATERIAL DEFORMATION OF A BLANKING PROCESS


SŁAWOMIR ŚWIŁŁO*, PIOTR CZYŻEWSKI
Faculty of Production Engineering, Warsaw University of Technology, ul. Narbutta 85, 02-524 Warszawa, Poland
*Corresponding author: s.swillo@wip.pw.edu.pl
Abstract

An experimental and numerical investigation is carried out in order to determine the material deformation in a blanking process. A highly localized, large strain distribution during the last stage of the process, i.e. complete material separation, influences the final surface quality of the product. The blanking process is commonly analysed by numerical simulation; however, due to the large plastic deformation of elements, remeshing procedures and other estimation techniques are required to simulate the last stage. To verify the final results and the theoretical model, other methods are required. The paper presents an implementation of experimental investigation in the field of displacement and strain measurement using the digital image correlation (DIC) technique. The authors present experimental results for a 1 mm thick specimen in the planar blanking process, where different clearances were used in a specially designed, fully automated apparatus. Finally, the experimental results were compared with the FEM simulation model, showing good agreement.

Key words: vision system, blanking process, correlation method, strain measurement, FEM

1. INTRODUCTION

Currently, the technology of making numerous electronic components and equipment, such as engine rotors or transformer cores, is based on the use of a punching process. The above mentioned components are assembled by packeting a group of cut-out elements. Hence, the quality of single components in a packet is very important for the overall quality of the electrical assembly operation. The limiting factor is an excessively large burr on the cutting edge, which causes inaccurate adhesion of the sheet metal in a packet and serious deterioration of the quality of electrical assemblies. Finding a solution to this problem is one of the key issues in this technology, and one of the methods currently applied is experimental analysis of the cutting process.

Experimental analysis of the blanking process is a very complex issue and for a long time it lacked a solution due to the occurrence of large and irregular deformations near the edges of the die and punch. For a long period of time, the method used for the analysis of displacements was visioplasticity (Sutton et al., 1986), which unfortunately gave less accurate results and required time-consuming calculations. The grid patterns were difficult to identify, and their blurred images, when subjected to image processing, did not allow satisfactory results to be obtained. Hence the need emerged to search for new solutions in the field of numerical analysis. An outcome of this search was the development of a method based on the Fourier transform, applied to the analysis of the displacement increase between the individual stages of a cutting process (Leung et al., 2004). By


ISSN 1641-8581


comparing the image of a certain stage of deformation with the image immediately following this stage, a distribution of displacement was obtained, which enabled the deformation size to be determined. However, certain conditions had to be satisfied to make such calculations possible. The goal was achieved by designing a special unit to carry out the punching process. With the punching process performed under static conditions, the authors were able to gain control of the image recording at a resolution relevant to the size of material displacement, since the proposed method required visualisation of even the smallest displacements of the material, possible to achieve only with a sufficiently large image resolution. This was due to the fact that, instead of the typical markers applied to the surface in the form of a mesh, the material texture was examined. Since that time, mainly due to the rapid development of various vision techniques and access to fast high-resolution cameras, many authors (Stegeman et al., 1999) have tried to solve this problem, getting results even when operating on millimetre samples. However, the results obtained in this way were mainly based on tests carried out with the aid of specially designed instruments, taking into account the conditions adequate for vision measurements but irrelevant to the real conditions under which processes of this type are performed. Hence, the submitted studies lacked any conclusions regarding the tool wear behaviour and the analysis of the crack formation tendency, both these issues being quite fundamental in control of the tool performance and monitoring of the process run.

The authors' proposal for the study of the cutting process relates to vision measurements taken under the real conditions of the punching operation. These conditions demand taking into account external factors such as vibration, adequate lighting and vision access to the area of material deformation. All these factors make the analysis of the deformation process of die-cut materials a great experimental challenge in the field of measurement techniques, for both experimental and numerical methods (Makich et al., 2008; Brokken et al., 1998; Hambli, 2001).

2. EXPERIMENTAL SET-UP

The schematic representation of the measurement stand is shown in figure 1. A specially designed illumination system, allowing for the small measurement area and the diversity of material structures, enables taking a sequence of images captured by the vision system with a specially chosen lens and a digital camera recording hundreds of photos per second. The gathered information is transferred to the computer memory and subjected to further numerical analysis, taking into consideration two stages of the process shown in figure 1a, i.e. plastic flow in the initial phase and crack formation next. For the tests an aluminium plate was used. From a sheet with overall dimensions of 100x80 millimetres and 1.5 millimetres thick, strips of 1.5x7x35 mm dimensions were cut out. Tests were carried out using a specially designed blanking apparatus, with several elements such as: a base, side walls, an upper connecting element, a bearing shell, and a sliding element with clamping of the upper cutting surface, where the sheet metal is pressed between the plate and the die. Initially, the blanking process was performed using only a hand holder. Currently, the process is carried out using a stepper motor, with precise punch location for each step of deformation (figure 1b). The final configuration of the experimental set-up is demonstrated in figure 1c, where all the systems, i.e. optical, illumination and vision, are presented.

Fig. 1. Schematic of the experimental set-up: a) blanking apparatus, b) vision system configuration.


Fig. 2. Specimen surface quality under different illumination and after surface machining.

To perform an analysis of the recorded images of the surface of the cut-out material, advanced numerical solutions of image processing based on digital image correlation have been used. However, a high specimen surface quality is required, where the material texture combined with its illumination is the major parameter, as shown in figure 2. Next, the optical magnification is an important parameter, since the DIC method is sensitive to the texture marks. To accurately measure the material flow on the surface of an object, texture patterns need to be related to small group areas. To meet this requirement, several solutions are applied, such as dividing the examined area into small sub-groups or using high resolutions. In the tests carried out, although a low-resolution camera was used (640x480 pixels), any possible inaccuracies in calculations were compensated by optical zoom and by limiting the analysis to a small area with the highest strain concentration (figure 3).

Fig. 3. Methodology of the planar blanking process: a) blanking apparatus, b) initial step, c) final step (just before cracking).

3. STRAIN MEASUREMENT

The measurement of deformation in a punched sample is a serious problem due to the high localisation of the non-linear strain distribution. Assuming that the die-cut sheet has a thickness of 1 mm, the maximum strain will be concentrated in an area not larger than tens of micrometers. Therefore, the strain measurement method should be characterised by high resolution and high accuracy. For this reason, a method for the strain measurement has been proposed that demands the use of combined advanced solutions in the field of machine vision based on the correlation of two images. The numerical process of comparing two images is performed for each pixel in the examined area, which enables high-accuracy determination of the measured parameters. Due to this, the discretisation of the measurement is imposed solely by the image resolution. The measurement conditions, however, require that the test area is adequately illuminated and that the numerical calculations refer to small displacements of the material. Figure 4 shows the results of the surface discretization for the sample before and after the process using a virtual grid pattern. An advantage of this method over the traditional techniques using different mesh geometries is the process of measurement discretisation, which in the case of the proposed solution is not restricted by the geometry of the pattern used. An additional advantage of the proposed solution is its simplicity, involving the use of a simple natural pattern resulting directly from the texture of the material.
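The correlation step itself can be illustrated with a minimal sketch (not the authors' implementation): the displacement of a small subset of the reference image is found by maximising the normalised cross-correlation over integer pixel shifts.

```python
import numpy as np

def subset_displacement(ref, cur, top, left, size=15, search=5):
    """Locate a (size x size) subset of the reference image `ref` in the
    current image `cur` by normalised cross-correlation over integer
    shifts up to +/- `search` pixels; returns the (dy, dx) displacement."""
    sub = ref[top:top + size, left:left + size].astype(float)
    sub = (sub - sub.mean()) / (sub.std() + 1e-12)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[top + dy:top + dy + size,
                      left + dx:left + dx + size].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-12)
            score = (sub * win).mean()   # normalised cross-correlation
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift

# synthetic check: a random texture shifted by (2, 1) pixels
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)
shift = subset_displacement(ref, cur, 20, 20)
```

A production DIC code would add sub-pixel interpolation and subset shape functions; the integer search above only shows the principle of matching the natural texture pattern.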


Fig. 4. Surface discretization using the digital image correlation method (virtual grid pattern applied to the real surface): a) initial stage, b) final stage.

For the kinematics calculation a method of grid analysis has been applied, taking into consideration the impact of the nearby surrounding (Swillo, 2001). In a mathematical formulation this means the use of a directional derivative of the gradient of an increment of displacement. The surrounding in relation to the grid is conceived as neighboring points, the assessed quantity of which depends upon the position of the analyzed node. Let us choose some point x_i^(n) from the neighborhood of the point x_i. On the basis of the directional derivative of the displacement increment vector we get:

    Δu_{i,j} v_j^(n) = Δ²u_i^(n) / Δs^(n),   for i, j = 1, 2, 3      (1)

where Δ²u_i^(n) is the second-order increment of the displacement vector, Δs^(n) is the distance between the points x_i^(n) and x_i in the given direction, and v_j^(n) expresses the directional cosines of the direction n. The upper bracketed index denotes the chosen direction. The displacement increments Δu_{i,j} are unknown in this equation. The solution of the equations is available when at least two directions are investigated and the least squares method is used.

Finally, the total logarithmic strain was calculated as presented in figure 5, showing a good agreement with the FEM results. In the numerical simulation a large grid pattern was used intentionally (similar to the virtual grid in figure 4) in order to verify the image processing procedure that has been performed based on the correlation method.

Fig. 5. Results of the true strain for the planar blanking process: a) experiment, b) FEM.

4. DISCUSSION OF THE RESULTS

Next, an additional experimental investigation was conducted for the planar blanking process to determine the influence of the clearance on the material fracture. Ten sets of experiments were conducted in the range of 0.035 mm to 0.485 mm of clearance (figure 6). During these experiments the clearance between the material and the punch was measured and the blanking process was recorded in the computer memory. The punch penetration for each case was obtained every time up to fracture. Figure 7 shows that the relation between the punch penetration and the clearance is most likely linear. That prediction could be successfully implemented in numerical calculations.
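The least-squares step behind equation (1) can be sketched numerically (a 2-D illustration with hypothetical data, not the author's code): each probed direction n contributes one row of direction cosines v_j^(n) to a linear system, and the unknown gradient components Δu_{i,j} are recovered with np.linalg.lstsq.

```python
import numpy as np

def displacement_gradient(directions, rhs):
    """Solve v_j^(n) * du_{i,j} = rhs_i^(n) for the 2x2 displacement-increment
    gradient du_{i,j}, given >= 2 unit direction vectors and the measured
    directional derivatives rhs[n] (least-squares solution)."""
    V = np.asarray(directions, dtype=float)   # shape (N, 2), one direction per row
    B = np.asarray(rhs, dtype=float)          # shape (N, 2)
    grad = np.empty((2, 2))
    for i in range(2):                        # solve for one row of du_{i,j} at a time
        grad[i], *_ = np.linalg.lstsq(V, B[:, i], rcond=None)
    return grad

# hypothetical check: a known gradient reproduced from three directions
true_grad = np.array([[0.10, 0.02],
                      [0.03, 0.08]])
dirs = [(1.0, 0.0), (0.0, 1.0), (np.sqrt(0.5), np.sqrt(0.5))]
rhs = [true_grad @ np.asarray(v) for v in dirs]   # simulated measurements
est = displacement_gradient(dirs, rhs)
```

With only two directions the system is exactly determined; a third (or more) direction overdetermines it and the least-squares fit averages out measurement noise.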


Fig. 6. Set of experimental results for different clearances (0.035-0.485 mm), monitored up to fracture.
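The near-linear clearance-penetration relation of figure 7 can be captured with a first-order least-squares fit; the data below are hypothetical placeholders (not the measured values), used only to show the fitting step.

```python
import numpy as np

# hypothetical (clearance [mm], punch penetration at fracture [mm]) pairs,
# synthesised around an assumed linear trend with a little noise
clearance = np.linspace(0.035, 0.485, 10)
rng = np.random.default_rng(1)
penetration = 0.30 + 0.5 * clearance + 0.005 * rng.standard_normal(10)

# first-order polynomial fit: penetration ~ a * clearance + b
a, b = np.polyfit(clearance, penetration, 1)

def predict_penetration(c):
    """Predict punch penetration at fracture for a given clearance [mm]."""
    return a * c + b
```

Such a fitted line is exactly the kind of prediction that could feed a numerical model, as suggested in the discussion above.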

5. SUMMARY

In the currently ongoing project, the deformation in a planar blanking process was monitored up to fracture. The DIC method was used to numerically control the deformation, and the FE method results were compared to the experimental ones. The proposed automatic vision system enables the realization of measurements and calculations in a quick and precise manner for the blanking process, even for small thicknesses (less than 1 mm). The experimental examples presented in this paper refer to the two-dimensional displacement analyses (according to equation (1)) and strain measurements using the grid method. The results indicate that there are ample possibilities in the field of experimental analysis of the material flow, and they are a valuable tool for verifying numerical methods.

Fig. 7. Influence of clearance on the material punch penetration.

Acknowledgements. Scientific work financed as a research project from funds for science in the years 2011-2013 (Project no. N N508 628140).

REFERENCES

Brokken, D., Brekelmans, W.A.M., Baaijens, F.P.T., 1998, Numerical modeling of the metal blanking process, Journal of Materials Processing Technology, 83, 192-199.
Hambli, R., 2001, Comparison between Lemaitre and Gurson damage models in crack growth simulation during blanking process, International Journal of Mechanical Sciences, 43, 2769-2790.
Leung, Y.C., Chan, L.C., Tang, C.Y., Lee, T.C., 2004, An effective process of strain measurement for severe and localized plastic deformation, International Journal of Machine Tools and Manufacture, 7-8, 669-676.
Makich, H., Carpentier, L., Monteil, G., Roizard, X., Chambert, J., 2008, Metrology of the burr amount - correlation with blanking operation parameters (blanked material - wear of the punch), Int. J. Mater. Form., 1, 1243-1246.
Stegeman, Y.W., Goijaerts, A.M., Brokken, D., Brekelmans, W.A.M., Govaert, L.E., Baaijens, F.P.T., 1999, An experimental and numerical study of a planar blanking process, J. Mat. Proc. Techn., 87, 266-276.
Sutton, M.A., Mingqi, Ch., Peters, W.H., Chao, Y.J., McNeill, S.R., 1986, Application of an optimized digital correlation method to planar deformation analysis, Image and Vision Computing, 3, 143-150.
Swillo, S., 2001, Automation of strain measurement by using image processing, Proc. Conf. Engineering Design and Automation 2001, eds. Parsaei, H.R., Gen, M., Leep, H.R., Wong, J.P., Las Vegas, Nevada, 272-277.

EXPERIMENTAL AND NUMERICAL ANALYSIS OF THE BLANKING PROCESS IN DEFORMATION MEASUREMENTS

Streszczenie

Experimental and numerical investigations were carried out in order to determine the magnitude of deformation in the blanking process. Large strain values, combined with their concentration in small areas and leading to material separation, have a great influence on the final product quality. Traditional methods of analysing these phenomena rely on numerical methods. However, due to the high strain concentration, it is recommended to verify these results with other experimental methods. The paper presents the possibility of applying numerical image processing with correlation to measurements of displacement and strain fields. Experimental results for the blanking process with different clearances are presented. The experimental measurements were compared with the FEM computer simulation.
Received: October 17, 2012 Received in a revised form: October 22, 2012 Accepted: November 5, 2012


COMPUTER METHODS IN MATERIALS SCIENCE


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

MODELLING OF STAMPING PROCESS OF TITANIUM TAILOR-WELDED BLANKS


PIOTR LACKI*, JANINA ADAMUS, WOJCIECH WIĘCKOWSKI, JULITA WINOWIECKA
Częstochowa University of Technology, ul. Dąbrowskiego 69, 42-201 Częstochowa, Poland
*Corresponding author: piotr@lacki.com.pl
Abstract

In the paper some numerical simulation results of sheet-titanium forming of tailor-welded blanks (TWB) are presented. Forming of spherical caps from uniform and welded blanks is analysed. Grade 2 and Grade 5 (Ti6Al4V) titanium sheets with a thickness of 0.8 mm are examined. A three-dimensional model of the forming process is built and numerical simulations are performed using the ADINA System v.8.6, based on the finite element method (FEM). The analysis of the mechanical properties and geometrical parameters of the weld and its adjacent zones is based on experimental studies. Drawability and the possibilities of plastic deformation are assessed based on a comparative analysis of the determined plastic strain distributions in the drawpiece material and the thickness changes of the cup wall. The preliminary experimental studies confirm the correctness of the assumptions made in the presented numerical model of the forming process. The results obtained in the numerical simulations show some difficulties occurring in the forming of welded blanks and provide important information about the process course. They might be useful in the design and optimization of the forming process.

Key words: TWB blanks, sheet-metal forming, FEM modelling, titanium sheet

1. INTRODUCTION

Tailor-Welded Blanks (TWB) are becoming more popular in industrial applications in those sectors where reduction of weight and manufacturing costs is important. They are of particular interest in the automotive and aircraft industries, where there is a growing demand for shell parts (drawpieces) meeting specific functional properties, which include low fuel consumption and sufficient strength of elements responsible for usage safety (Hyrcza-Michalska & Grosman, 2008; Sinke et al., 2010; Schubert et al., 2001). Reduction of production costs for elements made of TWB blanks results from the limitation of material usage and of the number of required forming operations, and consequently a decline in the demand for tools. Application of TWB blanks allows for achieving, in one operation, drawpieces characterized by mixed

strength and functional properties. It also allows for a reduction of discards from cutting and blanking, and a decrease in the number of parts needed to produce a component. It is estimated that the application of TWB blanks can reduce the number of required parts to 66% and reduce the weight by half (Qiu & Chen, 2007; Babu et al., 2010; Meinders et al., 2000). The application of welded blanks for products manufactured by stamping requires solving many problems, especially in the case of forming hard-to-deform sheets, such as alpha and beta titanium alloys. The presence of the weld (its geometric parameters), of different (generally lower) plasticity compared to the base material, and the heterogeneity of the stamped blank lead to a change in the material deformation scheme in comparison with the deformations that occur in a homogeneous material. This is due to weld dislocation, whose direction and magnitude depend on differences in the mechanical properties and thickness of the welded materials (Hyrcza-Michalska & Grosman, 2008; Babu et al., 2010; Kinsey et al., 2000). In order to evaluate the suitability of welded blanks for forming processes, it is necessary to carry out several studies, including numerical simulations of the process, that will allow for the prediction of sheet behaviour in consecutive stages of the forming process (Ananda et al., 2006; Qiu & Chen, 2007; Meinders et al., 2000; Babu et al., 2010; Rojek, 2007; Hyrcza-Michalska et al., 2010; Lisok & Piela, 2003; 2004; Więckowski et al., 2011; Zimniak & Piela, 2000). The increase in demand, among others in the aircraft industry, for structural elements with specific functional properties leads to a growth of interest in sheet-titanium forming. Generally, titanium Grade 2 sheets have good drawability; however, the produced drawpieces are characterized by low strength. On the other hand, titanium Grade 5 sheets have higher strength than titanium Grade 2 sheets, but they have a low propensity to plastic deformation, and this limits their application in forming processes (Adamus, 2010; 2009 a; 2009 b).

2. GOAL AND SCOPE OF THE WORK

The goal of the paper is the evaluation of changes in the deformation and displacement scheme of the TWB blank material in consecutive stages of the forming process, using numerical simulation, together with experimental verification of the changes in the wall thickness distribution in the drawpiece. In this study the numerical simulation of drawing a spherical cap from welded sheets made of titanium Grade 2 and Grade 5 of the same thickness was performed, in order to evaluate its drawability and formability in traditional stamping processes. Additionally, calculations for the uniform Grade 2 and Grade 5 sheets were performed. The experimental studies are designed to confirm the validity of the assumptions made in the numerical model of the process (figure 1a).

Grade 2 and Grade 5 materials were joined using electron beam welding (EBW) technology. EBW causes some changes in the material microstructure. Analysis of the joint microstructure shows the occurrence of 5 zones, from the left: base material Grade 5, heat affected zone (HAZ) in Grade 5, zone of joint penetration, heat affected zone in Grade 2, and base material Grade 2. The zone of microstructure changes has a width of less than 3 mm. The HAZ in Grade 2 is wider than the HAZ in Grade 5: its width is ~2282 μm, while the width of the HAZ in Grade 5 together with the zone of joint penetration is ~553 μm. Titanium Grade 5 has a globular, fine-grained structure. Grains of α phase with separation of β phase on the grain boundaries are visible. Higher magnification shows a change of the globular microstructure into a lamellar one at the transition of the HAZ in Grade 5 into the joint penetration zone. The microstructure of the border zone between the joint penetration and the HAZ in Grade 2 is more evolute than that of the border zone between the HAZ in Grade 5 and the joint penetration zone. The microstructure of the HAZ in Grade 2 shows big recrystallized grains of α phase. α phase grains with lenticular grains represent the microstructure of the base material Grade 2. A rectilinear shape of grain boundaries is typical for recrystallized grains. The microstructure of the electron beam welded joint is shown in figure 1b.

Fig. 1. a) Drawpiece obtained during experimental research, b) microstructure of electron beam welded joint.


3. NUMERICAL MODEL

A three-dimensional model of the stamping process was developed. The model comprises the material of the welded blank and the stamping tool consisting of a die, a punch and a blank-holder. The FEM geometry model is shown in figure 2.

count different material properties in the weld vicinity. In the presented model 5 zones were distinguished: weld zone (W), two heat affected zones (HAZ) located symmetrically on both sides of the weld (HAZ1, HAZ2) and two zones representing base materials (M1, M2) - figure 2.

Measurements of the zones were performed during observation of the weld cross-section structure. In the calculations a constant thickness of the weld and heat-affected zones, equal to the thickness of the welded blanks (0.8 mm), was assumed. Some important geometric parameters of the model are presented in table 1. In the analysed case the weld was located in the drawpiece centre.

A contact interaction between the tool and the blank material plays an important role in the forming process (Adamus, 2010; 2009a). In the numerical calculations a friction coefficient of 0.1 was set for the contact surfaces between the die, blank and blank-holder, whose working surfaces were lubricated, and a separate friction coefficient was set for the unlubricated contact surface between the punch and the deformed material (blank).

Calculations were performed using the ADINA System v. 8.6, based on FEM, which allows for a nonlinear description of material hardening and of the contact between the tool and the formed blanks. In the calculations all elements corresponding to the tool were assumed to be perfectly rigid, and for elements corresponding to the deformed sheet an isotropic elastic-plastic material model (bilinear plastic material model) was applied.

Mutual displacement of the tool elements was realised by immobilising the die and applying the displacement to the punch in the direction of the X axis. In the case of the blank-holder, its progressive motion was limited by a hold-down force Fd. A proper selection of the blank-holder force prevents wrinkling of the flange material (figure 3), and it also has a significant impact on the distribution of the drawpiece wall thickness. An optimal value of the blank-holder force was determined based on preliminary numerical simulations of the stamping process.

Fig. 2. A discrete model of the forming process of a spherical cup made of a TWB blank, with the specified 5-zone model of the welded blank.

Table 1. Parameters assumed in the FEM model for the stamping process of TWB blanks.

Parameter | Value
blank diameter dk | 60 mm
clearance between punch and die l = dm - ds | 2 mm
punch radius rs | 16 mm
die fillet radius rm | 4 mm
blank thickness g | 0.8 mm
weld width W | 1.9 mm
heat-affected zone width HAZ1 | 1.7 mm
heat-affected zone width HAZ2 | 1.0 mm
blank-holder force Fd | 3000 N
punch path hs | 20 mm

Discretization of the blank material (TWB blanks) for the stamping process, in the form of a disc with diameter dk = 60 mm, was performed using four-node shell elements of specified thickness. Modeling of the welded blank material required distinguishing the appropriate zones and taking their properties into account. The mechanical properties of the base material, heat-affected zones and weld zone, which are required for performing the calculations, were determined based on the uniaxial tensile test as well as on the basis of
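The bilinear elastic-plastic law used for the sheet can be sketched as follows. E and the yield stress are taken from table 2 for Grade 2 titanium; the tangent (hardening) modulus E_T is a hypothetical illustrative value, since the paper does not quote one.

```python
# Bilinear (elastic + linear hardening) stress-strain law, as used for the
# deformed sheet in the FEM model. E and the yield stress follow table 2
# (Grade 2); the tangent modulus E_T is a HYPOTHETICAL illustrative value.

def bilinear_stress(strain, E=110e3, yield_stress=236.8, E_T=1.2e3):
    """Return stress in MPa for a given total strain (E, E_T in MPa)."""
    eps_y = yield_stress / E          # strain at the yield point
    if strain <= eps_y:               # elastic branch: sigma = E * eps
        return E * strain
    # linear hardening branch: sigma = R0.2 + E_T * (eps - eps_y)
    return yield_stress + E_T * (strain - eps_y)
```

For example, at a strain of 0.001 the response is still elastic (110 MPa), while beyond the yield strain the stress grows with the much smaller tangent modulus.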


changes in the hardness distribution within the weld cross-section. Test specimens were prepared using the TIG welding method. The mechanical properties of the material in the weld zone were estimated based on the relationship between the hardness and strength of the material, assuming that the material yield stress is directly proportional to its hardness. The assumed mechanical properties are summarized in table 2.
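The hardness-to-strength estimate described above amounts to simple proportional scaling. A minimal sketch, in which the hardness readings are hypothetical placeholders (the paper does not tabulate HV values):

```python
def yield_from_hardness(r02_base, hv_base, hv_zone):
    """Estimate the yield stress of a weld/HAZ zone from hardness,
    assuming direct proportionality R0.2 ~ HV (as stated in the text)."""
    return r02_base * hv_zone / hv_base

# HYPOTHETICAL hardness readings for illustration only:
# base Grade 2 sheet ~145 HV, weld zone ~230 HV.
estimate = yield_from_hardness(236.8, 145.0, 230.0)
```

A harder zone thus maps to a proportionally higher estimated yield stress than the 236.8 MPa of the base Grade 2 sheet.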
Table 2. Experimentally determined material properties of Grade 2 and Grade 5 titanium, weld material, and HAZ material.

Material/zone | Tensile strength Rm [MPa] | Yield strength R0.2 [MPa] | Young's modulus E [GPa] | Poisson's ratio
M1-GRADE 2 | 316.6 | 236.8 | 110 | 0.37
M2-GRADE 5 | 1002.4 | 964.3 | 110 | 0.37
HAZ1 | 442.8 | 368.3 | 110 | 0.37
HAZ2 | 798.5 | 747.7 | 110 | 0.37
W | 518.5 | 375.0 | 110 | 0.37

4. RESULTS

Figure 3 shows the shape of the drawpiece obtained in the numerical simulation of the stamping process of the TWB blank. The numerical calculation results for the plastic strain [-] and the thinning of the drawpiece wall resulting from the stamping process are shown in figures 4-6. In the case of the stamping of the uniform Grade 2 blank it can be observed that the plastic strain distribution in the blank material is uniform and circular (figure 4a), and it is accompanied by uniform thinning of the drawpiece wall (figure 4b). In the case of the forming of the uniform Grade 5 blank, a concentration of plastic strains is seen at the pole of the cup, where considerable material thinning occurs (figures 5a and 5b). The numerical simulation of TWB blank forming shows that the weld moves in the direction of the Grade 5 material as the punch penetrates the deformed blank (figure 6).


Fig. 3. The drawpiece shape obtained in the numerical simulation of the stamping process of the welded blank: a) blank-holder force 1000 N, b) blank-holder force 3000 N.

Fig. 4. Numerical simulation results of stamping process of spherical cup made of Grade 2 blank at the punch penetration of 10 mm: a) plastic strain distribution [-], b) material thinning [mm].


Fig. 5. Numerical simulation results of the stamping process of a spherical cup made of a Grade 5 blank at a punch penetration depth of 10 mm: a) distribution of plastic strains [-], b) material thinning [mm].

Fig. 6. Numerical simulation results of the stamping process of a spherical cup made of the welded Grade 2 / Grade 5 blank at a punch penetration of 10 mm: a) distribution of plastic strain [-], b) material thinning [mm].

As a result of the weld displacement, plastic strains increase in the more deformable material and decrease in the less deformable material (figure 6a). It should also be noted that in the area near the drawpiece top (on the border between the more deformable material and the heat-affected zone) there is a local increase in strains and significant thinning of the drawpiece material (figures 6a and 6b). This might indicate a possibility of drawpiece weakening and a possible loss of material continuity in this area.

5. SUMMARY

The main goal of the study was to develop a numerical model of the stamping process of titanium TWB blanks. The performed FEM simulations of the process allow for analysis of the deformation introduced into the material during forming and for a drawability assessment of the welded blanks. In the future these studies will focus on a more accurate description of the material mechanical characteristics in the heat-affected zones and the weld, which will allow for further improvement of the model. The calculations confirmed the experimental results that stamping of titanium welded blanks characterized by different strength properties, using rigid tools, is much more difficult than stamping of uniform blanks. Comparison of the strain distribution in a drawpiece made of a homogeneous material with that found in a drawpiece made of TWB blanks shows that the presence of a weld with different strength properties introduces irregularity into the strain scheme in the deformed blank. It can be observed that formability is limited in the zone corresponding to the weld and that this zone moves in the direction of the less deformable material. The simulation results show the efficiency of applying numerical calculations to studying stamping processes of TWB blanks. The results provide important information about the process and may be useful for the design and optimization of the process run (selection of appropriate process parameters such as blank-holder force, lubrication conditions, etc.).

Acknowledgements. Financial support of Structural Funds in the Operational Programme - Innovative Economy (IE OP) financed from the European Regional Development Fund - Project "Modern material technologies in aerospace industry", No. POIG.01.01.02-00-015/08-00, is gratefully acknowledged.


REFERENCES
Adamus J., 2010, The analysis of forming titanium products by cold metalforming, Monografie nr 174, Wyd. Politechniki Częstochowskiej (in Polish).
Adamus J., 2009a, Stamping of the Titanium Sheets, Key Engineering Materials, http://www.scientific.net
Adamus J., 2009b, Theoretical and experimental analysis of the sheet-titanium forming process, Archives of Metallurgy and Materials, 54/3.
Anand D., Chen D.L., Bhole S.D., Andreychuk P., Boudreau G., 2006, Fatigue behavior of tailor (laser)-welded blanks for automotive applications, Materials Science and Engineering, A 420, 199-207.
Babu Veera K., Narayanan Ganesh R., Kumar Saravana G., 2010, An expert system for predicting the deep drawing behavior of tailor welded blanks, Expert Systems with Applications, 37.
Hyrcza-Michalska M., Grosman F., 2008, Formability of laser welded blanks, Proc. 17th International Scientific and Technical Conference "Design and technology of drawpieces and die stampings", Poznań: INOP (in Polish).
Hyrcza-Michalska M., Rojek J., Fruitos O., 2010, Numerical simulation of car body elements pressing applying tailor welded blanks - practical verification of results, Archives of Civil and Mechanical Engineering, 10/4.
Kinsey B., Liu Z., Cao J., 2004, A novel forming technology for tailor-welded blanks, Journal of Materials Processing Technology, 99.
Lisok J., Piela A., 2004, Model of welded joint in the metal charges used for testing pressformability, Archives of Civil and Mechanical Engineering, 4/3.
Lisok J., Piela A., 2003, Model złącza spawanego we wsadach do tłoczenia blach tailored blanks, Przegląd Spawalnictwa, 6 (in Polish).
Meinders T., van den Berg A., Huetink J., 2000, Deep drawing simulations of Tailored Blanks and experimental verification, Journal of Materials Processing Technology, 103.
Qiu X.G., Chen W.L., 2007, The study on numerical simulation of the laser tailor welded blanks stamping, Journal of Materials Processing Technology, 187-188.
Rojek J., 2007, Modelling and simulation of complex problems of nonlinear mechanics using the finite and discrete element methods, Prace IPPT - IFTR Reports 4 (in Polish).
Schubert E., Klassen M., Zerner C., Walz C., Sepold G., 2001, Light-weight structures produced by laser beam joining for future applications in automobile and aerospace industry, Journal of Materials Processing Technology, 115.
Sinke J., Iacono C., Zadpoor A.A., 2010, Tailor made blanks for the aerospace industry, Int. J. Mater. Form., 3/1.
Więckowski W., Lacki P., Adamus J., 2011, Numerical simulation of the sheet-metal forming process of tailor-welded blanks (TWBs), Rudy Metale, 56/11 (in Polish).
Zimniak Z., Piela A., 2000, Finite element analysis of a tailored blanks stamping process, Journal of Materials Processing Technology, 106.

MODELOWANIE PROCESU TŁOCZENIA SPAWANYCH BLACH TYTANOWYCH TYPU TWB

Streszczenie (translated from Polish): The paper presents the results of numerical simulations of the stamping process of tailor-welded titanium blanks (TWB). The forming of a spherical cup from a welded blank and from uniform materials was analysed. Grade 2 and Grade 5 titanium sheets of 0.8 mm thickness were investigated. The three-dimensional model of the stamping process and the numerical calculations were performed using the ADINA v. 8.6 program, based on the finite element method (FEM). The mechanical properties and geometric parameters of the weld and its adjacent zones were evaluated on the basis of experimental investigations. The drawability and the plastic-forming possibilities of the investigated materials were assessed through a comparative analysis of the determined plastic strain distributions in the drawpiece material and of the changes in drawpiece wall thickness. Preliminary experimental investigations conducted in parallel confirmed the validity of the assumptions adopted in the presented numerical model of the stamping process. The simulation results indicate the difficulties occurring during the forming of welded sheets and provide important information about the process run, and may therefore be useful at the stage of design and optimization of stamping processes.
Received: October 16, 2012 Received in a revised form: November 22, 2012 Accepted: November 9, 2012


COMPUTER METHODS IN MATERIALS SCIENCE


Informatyka w Technologii Materiałów
Publishing House AKAPIT

Vol. 13, 2013, No. 2

FIRST PRINCIPLES PHASE DIAGRAM CALCULATIONS FOR THE CdSe-CdS WURTZITE, ZINC-BLENDE AND ROCK SALT STRUCTURES
ANDRZEJ WOŹNIAKOWSKI1*, JÓZEF DENISZCZYK1, OMAR ADJAOUD2,4, BENJAMIN P. BURTON3
1 Institute of Materials Science, University of Silesia, Bankowa 12, 40-007 Katowice, Poland
2 GFZ German Research Centre for Geosciences, Section 3.3, Telegrafenberg, 14473 Potsdam, Germany
3 Ceramics Division, Materials Science and Engineering Laboratory, National Institute of Standards and Technology, Gaithersburg, Maryland 20899-8520, USA
4 Present address: Technische Universität Darmstadt, Fachbereich Material- und Geowissenschaften, Fachgebiet Materialmodellierung, Petersenstr. 32, D-64287 Darmstadt, Germany
*Corresponding author: andrzej.wozniakowski@us.edu.pl
Abstract The phase diagrams of CdSe1-xSx alloys were calculated for three different crystal structure types: wurtzite (B4); zinc-blende (B3); and rocksalt (B1). Ab initio calculations of supercell formation energies were fit to cluster expansion Hamiltonians, and Monte Carlo simulations were used to calculate finite temperature phase relations. The calculated phase diagrams have symmetric miscibility gaps for B3 and B4 structure types and a slightly asymmetric diagram for B1 structure. Excess vibrational contributions to the free energy were included, and with these, calculated consolute temperatures are: 270 K for B4; 300 K for B3; and 270 K for B1. Calculated consolute temperatures for all structures are in good quantitative agreement with experimental data. Key words: clamping, groove rolling, FEM

1. INTRODUCTION The cadmium chalcogenide CdSe1-xSx semiconducting alloy is characterized by a variable direct band gap which can be tuned by alloying, from 1.72 eV for CdSe to 2.44 eV for CdS. Because of excellent properties Cd(S,Se) is used in optoelectronic devices, photoconductors, gamma ray detectors, visible-light emitting diodes, lasers and solar cells (Xu et al., 2009; and references cited therein). CdSe1-xSx solid solutions have attracted great interest in recent years from both experimental and theoretical points of view (Xu et al., 2009; Mujica et al., 2003; Wei & Zhang, 2000; Banerjee et al., 2000;

Tolbert & Alivisatos, 1995; Hotje et al., 2003; Deligoz et al., 2006). It is known that CdS and CdSe occur at normal conditions in both the wurtzite and the metastable zinc-blende structures (Mujica et al., 2003; Madelung et al., 1982). Depending on the growth conditions, CdSe (CdS) can be synthesized in the B4 or in the metastable B3-type structure, either by molecular-beam epitaxy or by controlling the growth temperature (Wei & Zhang, 2000). The equilibrium zinc-blende structure is observed in CdS nanostructures (Banerjee et al., 2000). Under high pressure, both B3 and B4 structures convert to the denser rocksalt-structure phase (Mujica et al., 2003; Tolbert & Alivisatos, 1995; Hotje et al., 2003).

Recent measurements of formation enthalpies (ΔHf) for CdSxSe1-x B4-type solid solutions, reported by Xu et al. (2009), indicated that within experimental error ΔHf = 0 at 298 K. This may indicate that, at least above room temperature, CdS and CdSe form an ideal solution in the B4 structure type, despite differences in molar volume (Vmol,CdSe = 33.727 cm3/mol, Vmol,CdS = 29.934 cm3/mol; Davies, 1981) and anion radii (RCdSe = 1.91 Å, RCdS = 1.84 Å; Jug & Tikhomirov, 2006) (Xu et al., 2009). These measurements did not show the presence of a miscibility gap above 298 K, indicating that either: 1) the blocking temperature for Se/S diffusion is above TC; or 2) the consolute temperature for CdSxSe1-x in the B4 structure is below room temperature.

The T-x phase diagram of the CdSe-CdS system has been the subject of theoretical ab initio studies (Ouendadji et al., 2010; Breidi, 2011; Lukas et al., 2007). In both studies, only formation energies (at x = 0, 0.25, 0.5, 0.75 and 1.0) were considered, while excess vibrational free energy contributions were neglected. In Ouendadji et al. (2010) only the B3 structure was investigated, while in Breidi (2011) phase diagrams for both the B3 and B4 structures were determined. Both studies predict miscibility gaps. For the B3-type structure the consolute temperatures (TC) reported by Ouendadji et al. (2010) and Breidi (2011) are TC = 315 K and 228 K, respectively. Both predicted consolute temperatures differ from the critical temperature (TC = 298 K) reported by Xu et al. (2009).

The difference between the TC values calculated by Ouendadji et al. (2010) and Breidi (2011) originates from the different ab initio computational setups and the different choices of supercells for which formation energies were calculated (this difference indicates that at least one of these calculations, and probably both, is based on a set of formation energies that is too small to yield converged effective Hamiltonians). For the B4-type structure Breidi (2011) reports TC = 225 K, lower than the TC = 228 K obtained for the B3 structure. The aim of this study is to compare well-converged calculations of CdSe-CdS phase diagrams in all three crystal structure types: B1, B3 and B4. Both configurational and excess vibrational contributions to the free energy are considered. Sufficiently large sets of formation energies are used that one can have reasonable confidence that the calculated phase diagrams faithfully reflect density functional theory (DFT) energetics.
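The band-gap tuning mentioned in the introduction (1.72 eV for CdSe to 2.44 eV for CdS) can be sketched as a simple composition interpolation. The optional bowing parameter b is an assumption, set to zero by default because the paper does not quote one.

```python
def band_gap(x, eg_cdse=1.72, eg_cds=2.44, b=0.0):
    """Direct gap (eV) of CdSe(1-x)Sx interpolated between the end
    members quoted in the text. b is a HYPOTHETICAL bowing parameter
    (0 by default, i.e. a linear, Vegard-like interpolation)."""
    return (1.0 - x) * eg_cdse + x * eg_cds - b * x * (1.0 - x)
```

With b = 0 the gap varies linearly with x; a positive bowing parameter would pull intermediate compositions below the linear interpolation.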

2. COMPUTATIONAL DETAILS

Calculations of formation energies, defined as

E_f = E(CdS_xSe_1-x) - x E(CdS) - (1 - x) E(CdSe),

were performed using the Vienna Ab initio Simulation Package VASP (Kresse & Hafner, 1993, 1994; Kresse & Furthmüller, 1996a, 1996b), implementing Blöchl's projector augmented wave approach (Blöchl, 1994), with the generalized gradient approximation for the exchange and correlation potentials. Valence electron configurations for the pseudopotentials are: Cd = 4d10 5s2, Se = 4s2 4p4 and S = 3s2 3p4. All calculations were converged with respect to gamma-centered k-point sampling, and a plane-wave energy cutoff of 350 eV was used, which yields values that are converged to within a few meV per atom. Electronic degrees of freedom were optimized with a conjugate gradient algorithm. Both cell parameters and ionic positions were fully relaxed for each superstructure of the underlying B1, B3 and B4 crystal structures.

Based on the VASP results, the first-principles phase diagram calculations were performed with the use of the Alloy Theoretic Automated Toolkit (ATAT) software package (van de Walle & Ceder, 2002a; van de Walle et al., 2002; van de Walle & Asta, 2002). The VASP calculations were used to construct a cluster expansion (CE) Hamiltonian in the form of a polynomial in the occupation variables (Sanchez et al., 1984):

E(σ) = Σ_α m_α J_α ⟨ Π_{i∈α'} σ_i ⟩,
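The CE energy expression above can be illustrated on a toy one-dimensional chain with three cluster classes (empty, point, nearest-neighbour pair). The ECI values J below are illustrative numbers, not fitted coefficients from the paper.

```python
# Toy cluster expansion on a periodic 1-D chain: occupation variables
# sigma_i = +/-1 and three cluster classes (empty, point, nearest-
# neighbour pair). The ECI values J are ILLUSTRATIVE only.

def ce_energy(sigma, J_empty=0.0, J_point=0.05, J_pair=-0.02):
    """E per site = sum over cluster classes of m_a * J_a * <prod sigma>."""
    n = len(sigma)
    point_avg = sum(sigma) / n                       # <sigma_i>
    pair_avg = sum(sigma[i] * sigma[(i + 1) % n]     # <sigma_i sigma_i+1>
                   for i in range(n)) / n
    # multiplicities m_a: 1 empty, 1 point, 1 pair per site on a chain
    return J_empty + J_point * point_avg + J_pair * pair_avg
```

Evaluating it on a fully ordered chain versus an alternating chain shows how the correlation functions, not the raw configuration, enter the energy.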


where α is a cluster defined as a set of lattice sites, m_α denotes the number of clusters that are equivalent to α by symmetry, the summation is over all clusters that are not equivalent by a symmetry operation, and the average is taken over all clusters that are equivalent to α by symmetry. The Effective Cluster Interaction (ECI) coefficients J_α embody the information regarding the energetics of the alloy. In our investigations a well-converged cluster expansion required calculation of the formation energy for 30-50 ordered superstructures. The predictive power of the cluster expansion is controlled by the cross-validation score,

CVS^2 = (1/n) Σ_{i=1}^{n} (E_i - Ê_(i))^2,

where E_i is the ab initio calculated formation energy of superstructure i, while Ê_(i) represents the energy of superstructure i obtained from the CE with the use of the remaining (n - 1) structural energies.

The free energy contributed by lattice vibrations was introduced employing the coarse-graining formalism (van de Walle & Ceder, 2002b). For each superstructure the vibrational free energy was calculated within the quasi-harmonic approximation with the application of the bond-length-dependent transferable force constant approximation (van de Walle & Ceder, 2002b). The phase diagram calculations were performed with the use of Monte Carlo thermodynamic integration within the semi-grand ensemble (van de Walle & Asta, 2002). In this ensemble, for externally imposed temperature (T) and chemical potential (μ), the internal energy (E_i) and concentration (x_i) of the constituents of an alloy with a fixed number of atoms (N) are allowed to fluctuate. The thermodynamic potential (per atom) associated with the semi-grand-canonical ensemble can be defined in terms of the partition function of the system in the form presented in equation (1) (van de Walle & Asta, 2002):

φ(β, μ) = -(1/(βN)) ln Σ_i exp(-βN(E_i - μx_i)),   (1)

where the summation is over the different atomic configurations (alloy states) and β = 1/(k_B T) (k_B is Boltzmann's constant). In differential form (with variables T and μ) equation (1) can be rewritten in the form given by equation (2):

d(βφ) = (E - μx) dβ - βx dμ.   (2)

Using the differential form given by equation (2), the thermodynamic potential φ(β, μ) can be calculated through the thermodynamic integration described by equation (3) (van de Walle & Asta, 2002):

β1 φ(β1, μ1) = β0 φ(β0, μ0) + ∫_{(β0, μ0)}^{(β1, μ1)} (E - μx, -βx) · d(β, μ),   (3)

where E and x are the alloy's average internal energy (calculated with the use of the CE expansion) and the concentration of constituents, the averages being taken according to ⟨A⟩ = Σ_i A_i exp(-βN(E_i - μx_i)) / Σ_i exp(-βN(E_i - μx_i)).

The thermodynamic integration in (3), along a continuous path connecting the points (β0, μ0) and (β1, μ1) which does not encounter a phase transition, was performed using the Monte Carlo method. The starting point (β0, μ0) is taken in the limit of low temperature at a chemical potential stabilizing a given ground state of the system (here, the chemical potentials of the end members CdS and CdSe). A schematic diagram of the approach is: VASP calculations of formation enthalpies and vibrational free energies for a set of superstructures → fit a cluster expansion (CE = set of effective cluster interactions, ECI) → fit effective force constants to model excess vibrational contributions → Monte Carlo thermodynamic integration → phase diagram. The advantage of this approach is that it is based on parameter-free ab initio calculations and leads to high-quality effective Hamiltonians for multicomponent systems. The CE has the limitation that it only applies to a parent structure and its superstructures.

3. RESULTS AND DISCUSSION

Using the ab initio (VASP) method, calculations of the ground state energy were performed for the stoichiometric compounds CdSe and CdS and of the formation energies of many B1-, B3- or B4-based superstructures (36 B1, 36 B3 and 34 B4). All formation energies were positive, which implies that no intermediate ground state structures were predicted. The optimal number of superstructures was determined by minimizing the cross-validation score between the ab initio computations and the cluster expansion prediction. Figure 1 shows the dependence of the CVS on the number of calculated superstructures. Convergence of the CVS at values less than 1.5 meV/atom was reached for approximately 25 superstructures. Increasing the number of superstructures further results in fluctuations of the CVS with a standard deviation of the order of 0.1 meV/atom. The results presented in figure 1 strongly suggest convergence of the CE series.

Fig. 1. Dependence of the CVS vs the number of superstructures used in the fit of the CE for the B1, B3 and B4 structures.
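A toy illustration of semi-grand-canonical Metropolis sampling and the integration behind equations (1)-(3): a nearest-neighbour lattice alloy with an assumed unlike-pair penalty w (not the fitted CE of the paper) is sampled at several chemical potentials, and the potential difference follows from a trapezoidal integral of ⟨x⟩ over μ at fixed temperature, since equation (2) gives dφ = -x dμ at constant β.

```python
import math
import random

# Semi-grand-canonical Metropolis sketch on an 8x8 lattice alloy.
# The nearest-neighbour unlike-pair penalty w is an ASSUMED toy energy.

def run(mu, T=1.0, L=8, sweeps=400, w=0.5, seed=1):
    rng = random.Random(seed)
    s = [[0] * L for _ in range(L)]               # 0 = A atom, 1 = B atom

    def dE(i, j):                                  # energy change of a flip
        nn = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
              + s[i][(j + 1) % L] + s[i][(j - 1) % L])   # B neighbours
        old, new = s[i][j], 1 - s[i][j]
        unlike = lambda v: (4 - nn) if v else nn   # unlike bonds at the site
        # E = w * (unlike pairs) - mu * N_B  (semi-grand ensemble)
        return w * (unlike(new) - unlike(old)) - mu * (new - old)

    xs = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            d = dE(i, j)
            if d <= 0 or rng.random() < math.exp(-d / T):
                s[i][j] = 1 - s[i][j]
        if sweep >= sweeps // 2:                   # discard burn-in
            xs.append(sum(map(sum, s)) / (L * L))
    return sum(xs) / len(xs)                       # <x> at this (T, mu)

mus = [0.4 * k for k in range(6)]                  # chemical-potential path
x_of_mu = [run(m) for m in mus]
# phi(mu_n) - phi(mu_0) = -integral of <x> d(mu) at fixed T (trapezoid rule)
dphi = -sum(0.5 * (x_of_mu[k] + x_of_mu[k + 1]) * (mus[k + 1] - mus[k])
            for k in range(len(mus) - 1))
```

Sweeping μ upward drives ⟨x⟩ from roughly 0.5 toward 1, and the accumulated integral gives the change of the semi-grand potential along the path, in the spirit of equation (3).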

The ECI are plotted as functions of inter-atomic separation in figure 2. It is evident that with increasing distance the pair-ECI magnitudes decrease with oscillating sign. The 3-body ECI for the B1 structure type is very small. The low values of the cross-validation score and the decreasing magnitudes of the ECI justify truncation of the CE series and discarding the larger clusters.
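The leave-one-out cross-validation score defined in section 2 can be sketched for a toy linear fit; both the two-parameter model and the data points are illustrative, not the VASP energies of the paper.

```python
# Leave-one-out cross-validation score (CVS) for a toy linear model
# E ~ J0 + J1 * c, fitted by ordinary least squares.

def fit_two_param(xs, ys):
    """Least-squares fit of y = j0 + j1 * x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    det = n * sxx - sx * sx
    j1 = (n * sxy - sx * sy) / det
    j0 = (sy - j1 * sx) / n
    return j0, j1

def cvs(xs, ys):
    """CVS = sqrt((1/n) * sum_i (E_i - E_hat_(i))^2), where E_hat_(i) is
    predicted from a fit that excludes point i (cf. the text)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        j0, j1 = fit_two_param(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        total += (ys[i] - (j0 + j1 * xs[i])) ** 2
    return (total / n) ** 0.5

concentrations = [0.0, 0.25, 0.5, 0.75, 1.0]
energies = [0.0, 0.9, 1.3, 1.1, 0.1]   # illustrative formation energies
score = cvs(concentrations, energies)
```

For data that the model can reproduce exactly the score vanishes; a large score flags a truncated expansion with poor predictive power.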

Fig. 3. Formation energies Ef calculated by VASP (cross) and fitted by cluster expansion (CE) for B4 (figure a), B3 (figure b) and B1 (figure c) structures. Note the different scale used on vertical axis of figure c.

Fig. 2. Effective cluster interactions (ECI) as functions of interatomic distance (d/dnn) for the clusters taken into account in cluster expansion series for the B4 (figure a), B3 (figure b) and B1 (figure c) underlying crystal structures. The inter-atomic distance is expressed in units of the nearest neighbor distance (dnn).

Figure 3 presents the VASP-calculated supercell formation energies (Ef per cation) that were used to fit the ECI shown in figure 2. The differences between the VASP-calculated and CE-calculated energies are small, except for the end-member compounds CdSe and CdS in the B4 structure type; note that even these differences are an order of magnitude smaller than the values of Ef. Thus the results in figure 3 confirm the quality and predictive power of the CE.
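The link between the magnitude of a positive formation energy and the configurational consolute temperature can be illustrated with a symmetric regular-solution estimate, Ef(x) = Ω x(1 - x) and Tc = Ω/(2 kB); the 20 meV/atom input is an illustrative value, not a number from the paper.

```python
# Back-of-the-envelope regular-solution estimate of the consolute
# temperature from the formation energy at x = 0.5 (ILLUSTRATIVE input).

K_B_EV = 8.617333262e-5              # Boltzmann constant in eV/K

def consolute_temperature(e_f_half):
    """T_c (K) for a symmetric regular solution, given E_f(0.5) in eV/atom."""
    omega = 4.0 * e_f_half           # E_f(0.5) = Omega / 4
    return omega / (2.0 * K_B_EV)    # T_c = Omega / (2 k_B)

tc = consolute_temperature(0.020)    # ~20 meV/atom
```

In this simple picture doubling Ef(0.5) doubles the estimated Tc, consistent with the observation in the text that the roughly twofold larger B1 formation energies go with a higher configurational consolute temperature.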

Note that the formation energies for B1-based supercells are about twice as large as those for the wurtzite and zinc-blende structures. This correlates with the higher predicted consolute temperature when only the configurational part of the free energy is taken into account.

Figure 4 presents the calculated phase diagrams for the CdSe-CdS system in the B1, B3 and B4 crystal structures. Temperature-independent ECI were used to calculate the lower solvii in (a) and (b) and the upper curve in (c) (dashed lines). Temperature-dependent ECI, which imply the inclusion of excess vibrational contributions to the free energy, yield the results plotted as the upper solvii in (a) and (b) and the lower solvus in (c) (solid lines).

Fig. 4. Calculated phase diagram of the CdSxSe1-x alloy in B4 (figure a), B3 (figure b) and B1 (figure c). Dashed and solid solvii are the phase diagrams calculated with temperature-independent and temperature-dependent ECI, respectively. The result in (a) and (b), that T-independent ECI predict the lower TC rather than the higher, is atypical.

In most miscibility gap systems (Adjaoud et al., 2009; Burton & van de Walle, 2006) the inclusion of temperature-dependent ECI leads to a reduction in TC; thus the results in figures 4a and 4b are atypical. However, detailed model studies of the effect of lattice vibrations on the phase stabilities of substitutional alloys have shown (Garbulsky & Ceder, 1996) that the inclusion of vibrations in phase-diagram modeling of miscibility gap systems can increase the consolute temperature. Furthermore, investigations of the vibrational entropy change upon the order-disorder transition in the Pd3V system (van de Walle & Ceder, 2000) have shown that the relaxation of bonds can change the sign of vibrational entropy differences as compared to expectations based on the bond proportion model.

The main feature of the calculated phase diagrams is the consolute temperature (TC). For the B4 structure (figure 4a) the miscibility gap is symmetric, with critical point (xC, TC) = (0.50, 220 K) when only configurational degrees of freedom are taken into account, and (xC, TC) = (0.50, 270 K) when the temperature-dependent vibrational contribution is included. For the B3 structure (figure 4b) the shape of the phase diagram does not change significantly as compared to that of the B4 structure, but for the B3 structure we obtained higher consolute temperatures: 230 K and 300 K, respectively. For the B1 structure (figure 4c) the phase diagram obtained on the basis of temperature-independent ECI is almost symmetric (xC = 0.51). Inclusion of the vibrational contribution enhances the asymmetry (xC = 0.61) and reduces the critical temperature: from TC = 360 K, when only the formation energy is taken into account, to TC = 270 K with vibrational effects included.

4. CONCLUSIONS

Ab initio based CdSe-CdS phase diagrams for the wurtzite, zinc-blende and rock salt structure types were calculated by the CE method, both without and with excess vibrational free energy contributions (i.e. without and with T-dependent ECI, respectively). Miscibility gaps are predicted for all three systems. When only the configurational free energy is taken into account the calculated consolute temperatures are 220 K, 230 K and 360 K for the wurtzite, zinc-blende and rock salt structure types, respectively. Surprisingly, the inclusion of excess vibrational contributions to the free energy destabilizes the B3- and B4-based solid solutions, contrary to similar studies (Burton et al., 2006), and increases the consolute temperature by 30% and 23% for the wurtzite and zinc-blende structure types, respectively. For the rock-salt structure the inclusion of vibrations reduces the consolute temperature by 25%, similarly as reported by Adjaoud et al. (2009) for the TiC-ZrC system. Slightly above room temperature a complete solid solution is possible in the zinc-blende structure. The calculated consolute temperatures for the B1, B3 and B4 structure types compare well with the experimental critical temperature TC = 298 K reported by Xu et al. (2009).

REFERENCES

Adjaoud, O., Steinle-Neumann, G., Burton, B.P., van de Walle, A., 2009, First-principles phase diagram calculations for the HfC-TiC, ZrC-TiC, and HfC-ZrC solid solutions, Phys. Rev. B, 80, 134112-134119.
Banerjee, R., Jayakrishnan, R., Ayyub, P., 2000, Effect of the size-induced structural transformation on the band gap in CdS nanoparticles, J. Phys. Condens. Matter, 12, 10647-10654.
Blöchl, P.E., 1994, Projector augmented-wave method, Phys. Rev. B, 50, 17953-17979.
Burton, B.P., van de Walle, A., 2006, First principles phase diagram calculations for the system NaCl-KCl: The role of excess vibrational entropy, Chem. Geol., 225, 222-229.


Burton, B.P., van de Walle, A., Kattner, U., 2006, First principles phase diagram calculations for the wurtzite-structure systems AlN-GaN, GaN-InN, and AlN-InN, Journ. Appl. Phys., 100, 113528-113534.
Breidi, A., 2011, Temperature-pressure phase diagrams, structural and electronic properties of binary and pseudobinary semiconductors: an ab initio study, PhD thesis, l'Université Paul Verlaine, Metz.
Davies, P.K., 1981, Thermodynamics of solid solution formation, PhD thesis, Arizona State University, Arizona.
Deligoz, E., Colakoglu, K., Ciftci, Y., 2006, Elastic, electronic, and lattice dynamical properties of CdS, CdSe, and CdTe, Physica B, 373, 124-130.
Garbulsky, G.D., Ceder, G., 1996, Contribution of the vibrational free energy to phase stability in substitutional alloys: Methods and trends, Phys. Rev. B, 53, 8993-9001.
Hotje, U., Rose, C., Binnewies, M., 2003, Lattice constants and molar volume in the system ZnS, ZnSe, CdS, CdSe, Sol. State Sci., 5, 1259-1262.
Jug, K., Tikhomirov, V.A., 2006, Anion substitution in zinc chalcogenides, J. Comput. Chem., 27, 1088-1092.
Kresse, G., Hafner, J., 1993, Ab initio molecular dynamics for liquid metals, Phys. Rev. B, 47, 558-561.
Kresse, G., Hafner, J., 1994, Ab initio molecular-dynamics simulation of the liquid-metal-amorphous-semiconductor transition in germanium, Phys. Rev. B, 49, 14251-14269.
Kresse, G., Furthmüller, J., 1996a, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci., 6, 15-50.
Kresse, G., Furthmüller, J., 1996b, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B, 54, 11169-11186.
Lukas, H.L., Fries, S.G., Sundman, B., 2007, Computational Thermodynamics. The Calphad Method, Cambridge University Press, Cambridge.
Madelung, O., Schultz, M., Weiss, H., 1982, Landolt-Börnstein: Numerical Data and Functional Relationships in Science and Technology, group III, 17b, Springer-Verlag, Berlin.
Mujica, A., Rubio, A., Muñoz, A., Needs, R.J., 2003, High-pressure phases of group-IV, III-V and II-VI compounds, Rev. Mod. Phys., 75, 863-912.
Ouendadji, S., Ghemid, S., Meradji, H., El Haj Hassan, F., 2010, Density functional study of CdS1-xSex and CdS1-xTex alloys, Comput. Mater. Sci., 48, 206-211.
Sanchez, J.M., Ducastelle, F., Gratias, D., 1984, Generalized cluster description of multicomponent systems, Physica A, 128, 334-350.
Tolbert, S.H., Alivisatos, A.P., 1995, The wurtzite to rock salt structural transformation in CdSe nanocrystals under high pressure, J. Chem. Phys., 102, 4642-4656.
van de Walle, A., Ceder, G., 2000, First-principles computation of the vibrational entropy of ordered and disordered Pd3V, Phys. Rev. B, 61, 5972-5978.
van de Walle, A., Ceder, G., 2002a, Automating first-principles phase diagram calculations, J. Phase Equilib., 23, 348-359.
van de Walle, A., Asta, M., Ceder, G., 2002, The alloy theoretic automated toolkit: A user guide, Calphad, 26, 539-553.
van de Walle, A., Asta, M., 2002, Self-driven lattice-model Monte Carlo simulations of alloy thermodynamics, Modelling Simul. Mater. Sci. Eng., 10, 521-538.
van de Walle, A., Ceder, G., 2002b, The effect of lattice vibrations on substitutional alloy thermodynamics, Rev. Mod. Phys., 74, 11-45.
Wei, S.-H., Zhang, S.B., 2000, Structure stability and carrier localization in CdX (X = S, Se, Te) semiconductors, Phys. Rev. B, 62, 6944-6947.
Xu, F., Ma, X., Kauzlarich, S.M., Navrotsky, A., 2009, Enthalpies of formation of CdSxSe1-x solid solutions, J. Mater. Res., 24, 1368-1374.

OBLICZENIA Z PIERWSZYCH ZASAD DIAGRAMÓW FAZOWYCH DLA PSEUDOBINARNEGO SYSTEMU CdSe-CdS KRYSTALIZUJĄCEGO W SIECIACH WURCYTU, BLENDY CYNKOWEJ ORAZ SOLI KAMIENNEJ

Streszczenie (translated from Polish): Semiconducting Cd(Se,S) compounds and their alloys are characterized by a wide direct energy gap and may therefore be useful in optoelectronic devices, photoconductors, gamma-ray detectors, light-emitting diodes, lasers and solar cells. Because of their potential applications, semiconducting CdSe1-xSx alloys have in recent years been the subject of theoretical considerations and intensive experimental studies. Under normal conditions the Cd(Se,S) compounds crystallize in the hexagonal wurtzite structure (B4) and in the metastable, face-centred zinc-blende structure (B3). Under high pressure the B4 and B3 structures change their crystalline form and transform into the denser, face-centred rock-salt structure (B1). The aim of this work is to calculate the phase diagrams and to determine the critical mixing temperature for CdSe1-xSx alloys crystallizing in the B4, B3 and B1 structures. The phase diagrams were determined on the basis of thermodynamic potentials calculated by the Monte Carlo thermodynamic integration method. The calculations indicate the presence of miscibility gaps over the whole concentration range of CdSe1-xSx for all the considered crystal lattices. The results obtained for the B4 and B3 lattices are characterized by symmetric miscibility gaps, while for the B1 lattice the miscibility gap is slightly asymmetric. The determined critical mixing temperatures are 270 K, 300 K and 270 K for the B4, B3 and B1 lattices, respectively. Lattice vibration effects were taken into account in the calculations, and the results show good agreement with the experimental data available in the literature.
Received: October 29, 2012 Received in a revised form: December 3, 2012 Accepted: December 8, 2012


PHASE DIAGRAM CALCULATIONS FOR THE ZnSe-BeSe SYSTEM BY FIRST-PRINCIPLES BASED THERMODYNAMIC MONTE CARLO INTEGRATION
ANDRZEJ WOŹNIAKOWSKI, JÓZEF DENISZCZYK* Institute of Materials Science, University of Silesia, Bankowa 12, 40-007 Katowice, Poland *Corresponding author: jozef.deniszczyk@us.edu.pl
Abstract

The T-x phase diagram of the Zn1-xBexSe alloy is calculated by means of an ab initio method supplemented with the lattice Ising-like model cluster expansion approach and Monte Carlo thermodynamic computations. The presented results confirm the high quality of the mapping of the disordered alloy onto the lattice Hamiltonian. The calculated phase diagram shows an asymmetric miscibility gap with the upper critical solution temperature equal to 860 K (1020 K) when the lattice vibrations are included (excluded) in the free energy of the system. We have shown that below room temperature the miscibility of the ZnSe and BeSe phases is possible only in narrow ranges of concentration near x = 0 and x = 1. At elevated temperatures the two phases mix over a wider concentration range on the Zn-rich side of the phase diagram. Key words: clamping, groove rolling, FEM
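The miscibility gap described in the abstract can be caricatured by the simplest mean-field treatment, a symmetric regular solution. The interaction parameter omega below is an invented placeholder (chosen so that Tc lands near the paper's 860 K); the actual, asymmetric gap in the paper comes from the cluster-expansion Monte Carlo calculation, not from this toy model.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mixing_free_energy(x, T, omega):
    # Regular-solution free energy of mixing per mole:
    #   F = omega*x*(1-x) + R*T*(x*ln x + (1-x)*ln(1-x))
    return omega * x * (1 - x) + R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))

def upper_critical_temperature(omega):
    # The symmetric regular-solution miscibility gap closes at Tc = omega/(2R).
    return omega / (2.0 * R)

omega = 14300.0                     # J/mol, placeholder interaction parameter
tc = upper_critical_temperature(omega)
print(round(tc))                    # prints 860 (K) for this omega

# Below Tc the mixing free energy is concave around x = 0.5, so a chord
# between two compositions lies below the curve: the mixture decomposes.
T = 0.5 * tc
mid = mixing_free_energy(0.5, T, omega)
chord = 0.5 * (mixing_free_energy(0.4, T, omega) + mixing_free_energy(0.6, T, omega))
print(chord < mid)                  # prints True inside the miscibility gap
```

The asymmetry reported in the paper cannot appear in this symmetric model; it requires composition-dependent interactions such as those carried by the fitted ECI.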

1. INTRODUCTION

The alloying of semiconductors with the aim of tuning the band gap energy to the values expected for device applications has in the recent past been the subject of extensive experimental and theoretical research (Berghout et al., 2007; and references cited therein). The investigations were focused especially on the wide-band-gap II-VI semiconductors. The II-VI compound ZnSe is one of the promising materials for light emitting devices operating at short wavelengths (the green and blue range). However, dislocations, point defects and their diffusion cause the degradation observed in devices based on ZnSe structures. Knowing that the BeSe compound is characterized by strong covalent bonding (Vérié, 1997), the alloying of ZnSe with BeSe was proposed to increase the resistance of the ZnSe structure to defect generation (Plazaola et al., 2003).

Recent measurements have confirmed that the concentration of vacancies in the Zn1-xBexSe alloy decreases with increasing concentration of Be atoms (Plazaola et al., 2003). However, a Raman scattering study of lattice dynamics in the Zn1-xBexSe alloy has proved that the atomic distribution in the alloy is not uniform: ZnSe-rich and BeSe-rich regions form (Pagès et al., 2010). These findings indicate that phase separation occurs in the investigated samples, which may indicate the presence of an immiscibility region in the T-x phase diagram of the alloy. Due to the mismatch of unit cell volumes and the different elastic properties of the ZnSe and BeSe constituents, the preparation of the Zn1-xBexSe alloy may demand special conditions. Knowledge of the T-x phase diagram might be very helpful in the preparation of the Zn1-xBexSe alloy. The phase diagram of the Zn1-xBexSe alloy was the subject of a theoretical ab initio study with the use of the common tangent



method of phase diagram construction (Berghout et al., 2007). In that approach only the formation enthalpy was taken into account, neglecting the lattice vibration contribution to the free energy of the system. Furthermore, in the phase diagram calculations the formation enthalpy was considered for only a few concentrations. The present work aims to extend the study of the phase stability of the Zn1-xBexSe system to cover both the configurational formation enthalpy and the lattice vibration contributions to the total free energy of the system. With this aim, theoretical research was undertaken to determine the T-x phase diagram of the Zn1-xBexSe alloy by an ab initio method supplemented with Monte Carlo (MC) phase diagram calculations within the semi-grand-canonical ensemble. In the subsequent text, the computational details are presented in Sec. 2, the results are presented and discussed in Sec. 3, and Section 4 concludes.

2. COMPUTATIONAL DETAILS

Under normal conditions both end-member compounds, ZnSe and BeSe, crystallize in the zinc-blende (B3) type structure (Karzel et al., 1996; Luo et al., 1995). With this in mind, in our phase diagram calculations we assume the B3-type structure for the Zn1-xBexSe alloy over the entire concentration range. Formation energies,

ΔE = E(Zn1-xBexSe) - x E(BeSe) - (1-x) E(ZnSe),

were calculated for superstructures containing up to 20 atoms per unit cell. Total energy calculations were done using the Vienna Ab initio Simulation Package VASP (Kresse & Hafner, 1993, 1994; Kresse & Furthmüller, 1996a, 1996b) with ultrasoft Vanderbilt-type pseudopotentials (Vanderbilt, 1990) and the generalized gradient approximation (GGA) for exchange and correlation. The valence electron configurations for the pseudopotentials are Se: 4s2 4p4, Be: 2s2, Zn: 3d10 4s2. All calculations were converged with respect to gamma-centered k-point sampling, and a plane-wave energy cutoff of 350 eV was used, which yields values converged to within a few meV per atom. Electronic degrees of freedom were optimized with a conjugate gradient algorithm, which is recommended for difficult relaxation problems. Each superstructure was relaxed with respect to volume, supercell shape and atomic positions. Based on the VASP results, all remaining steps of the First Principles Phase Diagram (FPPD) calculations were performed with the use of the Alloy Theoretic Automated Toolkit (ATAT) software package (van de Walle & Ceder, 2002a; van de Walle et al., 2002; van de Walle & Asta, 2002).

In the first step the VASP calculations were used to construct the cluster expansion (CE) Hamiltonian. The cluster expansion (Sanchez et al., 1984) is a method to parameterize the energy of a material as a function of its configuration. The energy E (per atom) is represented as a polynomial in the occupation variables σ_i, given by equation (1):

E(σ) = Σ_α m_α J_α ⟨ Π_{i∈α'} σ_i ⟩   (1)

where α is a cluster defined as a set of lattice sites. The sum is taken over all clusters α that are not equivalent by a symmetry operation of the space group of the parent lattice, while the average is taken over all clusters α' that are equivalent to α by symmetry. The coefficients J_α of the CE expansion (1) embody the information regarding the energetics of the alloy and are called the Effective Cluster Interactions (ECI). The multiplicities m_α indicate the number of clusters that are equivalent by symmetry to α, divided by the number of lattice sites. A typical well-converged cluster expansion contains about 10-20 effective cluster interactions and requires the calculation of the energy of around 30-50 ordered structures (van de Walle et al., 2002; van der Ven et al., 1998; Garbulsky & Ceder, 1995; Ozoliņš et al., 1998). The predictive power of the cluster expansion defined by equation (1) is controlled by the cross-validation score defined by equation (2):

CV^2 = (1/n) Σ_{i=1}^{n} ( E_i - Ê_(i) )^2   (2)

where E_i is the ab initio calculated energy of superstructure i, while Ê_(i) represents the energy of superstructure i obtained from the CE (equation (1)) using the (n-1) other structural energies.

The part of the free energy contributed by lattice vibrations was taken into account employing the coarse-graining formalism. For each superstructure the vibrational free energy was calculated within the quasi-harmonic approximation. To reduce the computational time needed for obtaining the phonon densities of states for the set of superstructures involved in the cluster expansion procedure, the bond-length-dependent transferable force constant approximation was used. Within this approximation the nearest-neighbor force constant matrix as a function of bond length (volume) was calculated for the end-members ZnSe and BeSe of the Zn1-xBexSe series. For all remaining superstructures the force constant matrices were predicted using the relaxed bond lengths and the chemical identities of the bonds in each superstructure, employing the bond stiffness versus bond length functions evaluated for the end-member compounds. In this procedure, called the transferable force constant approach (van de Walle & Ceder, 2002b), the force constant matrix is approximated by the diagonal form described by equation (3) (Liu et al., 2007):

Φ_{i,j} = diag(s, b, b)   (3)

with only two independent terms: the stretching stiffness s and the isotropic bending stiffness b.

An alternative way to determine the vibrational free energy for each superstructure considered in the cluster expansion (1) is to apply the direct force method (Parlinski et al., 1997) or linear response theory (Giannozzi et al., 1991) individually for each superstructure. Because of the multiatomic composition and low crystal symmetry of the superstructures, both alternative approaches demand high computing power and are time consuming, and are not applicable in the phase diagram calculations.

The phase diagram calculations were performed with the use of Monte Carlo (MC) thermodynamic integration within the semi-grand-canonical ensemble (van de Walle & Asta, 2002). The Hamiltonian used in the MC integration was of the cluster expansion form given by equation (1), with expansion parameters fitted to the formation enthalpy and vibrational free energy calculated by parallel computing.

3. RESULTS AND DISCUSSION

The ab initio calculations were performed for the end-member compounds ZnSe and BeSe and 33 reference superstructures containing up to 20 atoms. All supercell energies are positive with respect to the end-member compounds; no intermediate ground states were found, which indicates a miscibility-gap system. In the fitting procedure described in Sec. 2 the best cross-validation score (CV = 0.0051) was obtained for 14 clusters in the expansion (1). The cluster coordinates and the corresponding effective cluster interactions are collected in table 1. It is evident that the largest ECI are introduced by the zero and point clusters. Among the multisite clusters the largest ECI belongs to the nearest neighbor cation sites. With increasing distance between the sites of a cluster, the values of the pair ECI fall off in an oscillatory manner.

Table 1. Cluster coordinates and corresponding effective cluster interactions of the clusters taken into account in the cluster expansion series defined by equation (1).

Index  Cluster coordinates                      di,j(a)   ECI, eV/cation
1      Zero cluster                             -         0.041703
2      Point cluster                            -         0.012845
3      (1, 1, 1) (1/2, 1, 3/2)                  2.835     0.006974
4      (1, 1, 1) (0, 1, 1)                      4.009     -0.003786
5      (1, 1, 1) (0, 1/2, 3/2)                  4.910     -0.002446
6      (1, 1, 1) (0, 0, 1)                      5.670     0.003586
7      (1, 1, 1) (-1/2, , 1)                    6.339     -0.001999
8      (1, 1, 1) (0, 0, 2)                      6.944     -0.001199
9      (1, 1, 1) (-1/2, 0, 3/2)                 7.501     -0.000661
10     (1, 1, 1) (-1, 1, 1)                     8.019     -0.001201
11     (1, 1, 1) (-1, 1/2, 3/2)                 8.505     -0.001169
12     (1, 1, 1) (-1/2, -1/2, 1)                8.505     0.000852
13     (1, 1, 1) (-1, 0, 1)                     8.965     -0.000586
14     (1, 1, 1) (1/2, 1, 3/2) (1/2, 1/2, 1)    2.835     -0.002262
(a) di,j is the distance of the longest pair within the cluster

Figure 1 shows the formation energies ΔE (per cation atom) calculated for all reference superstructures by the ab initio (VASP) method and using the cluster expansion (1) with the ECI given in table 1. The calculated and fitted energies do not coincide precisely, but besides the end-member compounds the residuals are at least one order of magnitude smaller than the values of ΔE itself. This result confirms the quality of the computational methodology and the correct predictive power of the CE expansion.

Fig. 1. Formation energies ΔE calculated by VASP (crosses) and fitted by the cluster expansion.

The bending and stretching bond stiffness versus bond length relationships for the Se-Zn and Se-Be nearest neighbor bonds are presented in figure 2. The bond stiffness constants (b and s), calculated based on the force constants determined ab initio within the small displacement approach for a sequence of volumes of the end-member compounds, show a linear dependence on bond length.

Fig. 2. Nearest neighbor stretching s (panels a, c) and bending b (panels b, d) force constants versus bond length for Se-Zn (left panels) and Be-Se (right panels) bonds. Crosses indicate ab initio data points and lines represent the linear fits used in the calculations of the vibrational free energy.

Figure 3 shows the phase diagram of the Zn1-xBexSe alloy calculated on the basis of temperature-independent ECI (fitted to the formation enthalpies) and temperature-dependent ECI (fitted to the sum of the formation enthalpies and vibrational free energies). The main feature of the calculated phase diagram is the presence of an asymmetric miscibility gap with an upper critical solution temperature (TC). With only the configurational degrees of freedom taken into account, our modelling yields the critical point (xC, TC) = (0.69, 1020 K). The shape of our phase diagram differs noticeably from that calculated by Berghout et al. (2007), where no asymmetry can be observed. The critical temperature obtained within the approach restricted to formation enthalpies (TC = 1020 K) is significantly lower than the one obtained by Berghout et al. (2007) (TC = 1324 K). Also the critical concentration (xC = 0.54) found by Berghout et al. (2007) is much lower than our result. The higher critical concentration resulting from our calculations can be attributed to the asymmetry of the phase diagram (figure 3). The contribution of the vibrational degrees of freedom to the free energy, when taken into account, modifies the phase diagram considerably (figure 3). The main effect of lattice vibrations is the shift of the critical point to a lower temperature (TC = 860 K). The critical concentration remains almost unchanged (xC = 0.65). The effect of vibrations on the shape of the phase diagram is negligible, although the recess visible on the phase diagram resulting from the formation enthalpies alone is slightly reduced.

Fig. 3. Calculated phase diagram based on the ECI fitted to the configurational formation enthalpy (dashed line) and on the temperature-dependent ECI fitted to the sum of the configurational and vibrational free energy (solid line). The arrows show the upper critical solution points.

The asymmetric shape of the calculated phase diagram might suggest the presence, on the Zn-rich left-hand side of the concentration range, of some additional structure with a ground-state configuration energy lower than that of the end-member compounds. However, our detailed search for structures with a negative ground-state formation enthalpy failed. The asymmetry of the phase diagram, indicating the enhanced miscibility of cation atoms on the Zn-rich side, can be related to the mismatch of the ionic radii of the cations (Zn 74 pm and Be 41 pm). It is well known that the miscibility gap between end-member compounds with ions of very different ionic radius is asymmetric, with reduced solubility on the side of the diagram corresponding to the smaller ion (Burton et al., 2006).
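The least-squares ECI fit of equation (1) and the leave-one-out cross-validation score of equation (2) can be sketched on a toy one-dimensional lattice. The chain length, cluster basis, configurations and "ab initio" energies below are all invented for illustration and have nothing to do with the paper's 33 ZnSe-BeSe superstructures.

```python
import numpy as np

# Toy 1D binary chain: occupation sigma_i = +1 or -1 on 8 periodic sites.
# Cluster basis: empty cluster, point cluster, nearest-neighbour pair.
def correlations(config):
    s = np.asarray(config, dtype=float)
    return np.array([1.0, s.mean(), np.mean(s * np.roll(s, 1))])

rng = np.random.default_rng(0)
configs = [rng.choice([-1.0, 1.0], size=8) for _ in range(20)]
X = np.array([correlations(c) for c in configs])

# Synthetic "ab initio" energies from known ECI plus small noise, standing
# in for the VASP energies of the reference superstructures.
true_eci = np.array([0.040, 0.013, 0.007])
E = X @ true_eci + 0.001 * rng.standard_normal(len(configs))

# Least-squares fit of the ECI in equation (1).
eci, *_ = np.linalg.lstsq(X, E, rcond=None)

# Leave-one-out cross-validation score, equation (2): each structure is
# predicted from a fit to the remaining n-1 energies.
n = len(E)
resid = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    eci_i, *_ = np.linalg.lstsq(X[mask], E[mask], rcond=None)
    resid[i] = E[i] - X[i] @ eci_i
cv = float(np.sqrt(np.mean(resid ** 2)))
print(np.round(eci, 3), round(cv, 4))
```

A small CV relative to the spread of the energies indicates that the expansion predicts structures it was not fitted to, which is the criterion used to select the 14-cluster expansion in the paper.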


CONCLUSIONS

Ab initio based Monte Carlo modelling of the x-T phase diagram of the Zn1-xBexSe alloy has shown that under normal conditions the solubility of Zn and Be cations is possible only on the Zn-rich side of the phase diagram. At elevated temperatures the two phases, ZnSe and BeSe, mix over a wider concentration range on the Zn-rich side of the phase diagram. The upper critical solution temperature of 860 K is predicted at a Be concentration of x = 0.65, above which the solubility of the cations is possible over the whole concentration range.

REFERENCES
Berghout, A., Zaoui, A., Hugel, J., Ferhat, M., 2007, First-principles study of the energy-gap composition dependence of Zn1-xBexSe ternary alloys, Phys. Rev. B, 75, 205112-205121.
Burton, B.P., van de Walle, A., Kattner, U., 2006, First principles phase diagram calculations for the wurtzite-structure systems AlN-GaN, GaN-InN, and AlN-InN, J. Appl. Phys., 100, 113528-113534.
Garbulsky, G.D., Ceder, G., 1995, Linear-programming method for obtaining effective cluster interactions in alloys from total-energy calculations: Application to the fcc Pd-V system, Phys. Rev. B, 51, 67-72.
Giannozzi, P., de Gironcoli, S., Pavone, P., Baroni, S., 1991, Ab initio calculation of phonon dispersions in semiconductors, Phys. Rev. B, 43, 7231-7242.
Karzel, H., Potzel, W., Köfferlein, M., Schiessl, W., Steiner, M., Hiller, U., Kalvius, G.M., Mitchell, D.W., Das, T.P., Blaha, P., Schwarz, K., Pasternak, M.P., 1996, Lattice dynamics and hyperfine interactions in ZnO and ZnSe at high external pressures, Phys. Rev. B, 53, 11425-11438.
Kresse, G., Hafner, J., 1993, Ab initio molecular dynamics for liquid metals, Phys. Rev. B, 47, 558-561.
Kresse, G., Hafner, J., 1994, Ab initio molecular-dynamics simulation of the liquid-metal-amorphous-semiconductor transition in germanium, Phys. Rev. B, 49, 14251-14269.
Kresse, G., Furthmüller, J., 1996a, Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set, Comput. Mater. Sci., 6, 15-50.
Kresse, G., Furthmüller, J., 1996b, Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set, Phys. Rev. B, 54, 11169-11186.
Liu, J.Z., Ghosh, G., van de Walle, A., Asta, M., 2007, Transferable force-constant modeling of vibrational thermodynamic properties in fcc-based Al-TM (TM = Ti, Zr, Hf) alloys, Phys. Rev. B, 75, 104117.
Luo, H., Ghandehari, K., Greene, R.G., Ruoff, A.L., Trail, S.S., DiSalvo, F.J., 1995, Phase transformation of BeSe and BeTe to the NiAs structure at high pressure, Phys. Rev. B, 52, 7058-7064.
Ozoliņš, V., Wolverton, C., Zunger, A., 1998, Cu-Au, Ag-Au, Cu-Ag and Ni-Au intermetallics: First-principles study of temperature-composition phase diagrams and structures, Phys. Rev. B, 57, 6427-6443.
Pagès, O., Postnikov, A.V., Chafi, A., Bormann, D., Simon, P., Glas, F., Firszt, F., Paszkowicz, W., Tournié, E., 2010, Non-random Be-to-Zn substitution in ZnBeSe alloys: Raman scattering and ab initio calculations, Eur. Phys. J. B, 73, 461-469.
Parlinski, K., Li, Z.Q., Kawazoe, Y., 1997, First-principles determination of the soft mode in cubic ZrO2, Phys. Rev. Lett., 78, 4063-4066.
Plazaola, F., Flyktman, J., Saarinen, K., Dobrzynski, L., Firszt, F., Łęgowski, S., Męczyńska, H., Paszkowicz, W., Reniewicz, H., 2003, Defect characterization of ZnBeSe solid solutions by means of positron annihilation and photoluminescence techniques, J. Appl. Phys., 94, 1647-1653.
Sanchez, J.M., Ducastelle, F., Gratias, D., 1984, Generalized cluster description of multicomponent systems, Physica A, 128, 334-350.
Vanderbilt, D., 1990, Soft self-consistent pseudopotentials in a generalized eigenvalue formalism, Phys. Rev. B, 41, 7892-7895.
van der Ven, A., Aydinol, M.K., Ceder, G., Kresse, G., Hafner, J., 1998, First-principles investigation of phase stability in LixCoO2, Phys. Rev. B, 58, 2975-2987.
van de Walle, A., Ceder, G., 2002a, Automating first-principles phase diagram calculations, J. Phase Equilib., 23, 348-359.
van de Walle, A., Asta, M., Ceder, G., 2002, The alloy theoretic automated toolkit: A user guide, Calphad, 26, 539-553.
van de Walle, A., Asta, M., 2002, Self-driven lattice-model Monte Carlo simulations of alloy thermodynamics, Modelling Simul. Mater. Sci. Eng., 10, 521-538.
van de Walle, A., Ceder, G., 2002b, The effect of lattice vibrations on substitutional alloy thermodynamics, Rev. Mod. Phys., 74, 11-45.
Vérié, C., 1997, Expected pronounced strengthening of II-VI lattices with beryllium chalcogenides, Mater. Sci. Eng. B, 43, 60-64.

FIRST-PRINCIPLES CALCULATIONS OF THE PHASE DIAGRAM OF THE ZnSe-BeSe SYSTEM BY THE MONTE CARLO THERMODYNAMIC INTEGRATION METHOD

Summary

The paper presents theoretical studies of the phase stability of the Zn1-xBexSe solid solution as a function of temperature and component concentration. The T-x phase diagram was determined from the thermodynamic potential calculated by the Monte Carlo thermodynamic integration method within the semi-grand-canonical ensemble. The thermodynamic calculations employed a lattice Hamiltonian in the form of a cluster expansion (Ising-type model). The energy coefficients of the cluster expansion were determined by fitting the lattice Hamiltonian to the formation enthalpies calculated by quantum first-principles methods for 33 Zn1-xBexSe superstructures over the whole concentration range. The thermodynamic calculations also include the lattice-vibration contribution to the free energy, determined for the 33 superstructures within the quasi-harmonic approximation. The calculated T-x phase diagram exhibits an asymmetric miscibility gap. Thermodynamic calculations based solely on the configurational part of the free energy give a phase diagram with an upper critical point at xC = 0.69 and TC = 1020 K. Including the lattice vibrations lowers the critical temperature to TC = 860 K (xC = 0.65). The presented studies show that below room temperature the miscibility of the ZnSe and BeSe phases is possible only in narrow concentration ranges (x close to 0 and x close to 1). At elevated temperatures, 400 K < T < TC, miscibility of the ZnSe and BeSe phases is also possible in Zn1-xBexSe solutions with an enriched zinc content.
Received: September 20, 2012 Received in a revised form: November 19, 2012 Accepted: November 30, 2012


MODELLING MICROSTRUCTURE EVOLUTION DURING EQUAL CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING CELLULAR AUTOMATA FINITE ELEMENT METHOD
MICHAŁ GZYL 1,*, ANDRZEJ ROSOCHOWSKI 1, ANDRZEJ MILENIN 2, LECH OLEJNIK 3

1 University of Strathclyde, 75 Montrose Street, Glasgow G1 1XJ, United Kingdom
2 AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Kraków, Poland
3 Warsaw University of Technology, ul. Narbutta 85, 02-524 Warszawa, Poland
*Corresponding author: michal.gzyl@gmail.com
Abstract

Equal channel angular pressing (ECAP) is one of the most popular methods of obtaining ultrafine grained (UFG) metals. However, only relatively short billets can be processed by ECAP due to force limitation. A solution to this problem could be the recently developed incremental variant of the process, the so-called I-ECAP. Since I-ECAP can deal with continuous billets, it can be widely used in industrial practice. Recently, many researchers have put effort into obtaining UFG magnesium alloys which, due to their low density, are very promising materials for weight and energy saving applications. It was reported that microstructure refinement during ECAP is controlled by dynamic recrystallization and that the final mean grain size depends mainly on the processing temperature. In this work, the cellular automata finite element (CAFE) method was used to investigate microstructure evolution during four passes of ECAP and its incremental variant I-ECAP. The cellular automata space dynamics is determined by transition rules, whose parameters are the strain, strain rate and temperature obtained from the FE simulation. An internal state variable model describes the total dislocation density evolution and transfers this information to the CA space. The developed CAFE model calculates the mean grain size and generates a digital microstructure prediction after processing, which could be useful for estimating the mechanical properties of the produced UFG metal. Fitting and verification of the model were done using the experimental results obtained from I-ECAP of an AZ31B magnesium alloy and data derived from the literature. The CAFE simulation results were verified for the temperature range 200-250 °C and strain rates of 0.01-0.5 s-1; good agreement with experimental data was achieved. Key words: severe plastic deformation, equal channel angular pressing, ultrafine grained metals, magnesium alloys, cellular automata finite element method
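The neighbour-driven transition rules mentioned in the abstract can be sketched as a minimal cellular automaton in which a recrystallized state spreads to von Neumann neighbours at each step. The lattice size, the single nucleus and the bare "any recrystallized neighbour" rule are illustrative simplifications, not the authors' full CAFE rule set.

```python
import numpy as np

def ca_step(state):
    # One synchronous transition: a cell keeps its recrystallized state (1)
    # or adopts it if any von Neumann neighbour was recrystallized in the
    # previous step; other cells (0) stay unchanged (fixed boundaries).
    nbr = np.zeros_like(state)
    nbr[1:, :] |= state[:-1, :]
    nbr[:-1, :] |= state[1:, :]
    nbr[:, 1:] |= state[:, :-1]
    nbr[:, :-1] |= state[:, 1:]
    return state | nbr

space = np.zeros((9, 9), dtype=np.int64)
space[4, 4] = 1                 # a single nucleus in the centre of the space
for _ in range(3):
    space = ca_step(space)
print(int(space.sum()))         # prints 25: a Manhattan-radius-3 diamond
```

In the full model the same update loop is driven by dislocation density and misorientation rather than by a bare neighbour check.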

1. INTRODUCTION 1.1. Cellular automata finite element (CAFE) method

The cellular automata (CA) technique is used in materials science to provide a digital representation of the material microstructure and to simulate its evolution during processing (Das, 2010; Madej et al., 2006; Svyetlichnyy, 2012). The material is represented as a lattice of finite cells, called the cellular automata space. The interactions between cells describe the dynamics of the simulated physical phenomenon. The mathematical description of the interactions is introduced by transition functions (transition rules). The current state of each cell is determined by the states of its neighbours and its own state in the previous step. The cellular automata finite element (CAFE) approach is



an example of a multi-scale modelling approach. It is a combination of micro scale modelling using cellular automata with finite element (FE) analysis, the most popular method of metal forming simulations. The advantages of using a multi-scale simulation approach are evident, especially when ultrafine grained materials are considered. By using the CAFE method not only the mean grain size but also the microstructure homogeneity and grain size distribution can be calculated.

1.2. Equal channel angular pressing and its incremental variant

In order to improve the mechanical properties of magnesium and other metals many thermomechanical processes have been proposed. Severe plastic deformation (SPD) processes, in which a very large strain is imposed on the material to refine its grain microstructure and improve its strength, are very promising. Equal channel angular pressing (ECAP) was developed by Segal (1995). In the ECAP process, a billet is pushed through a die with two channels which have the same cross section (circular or rectangular) and intersect at an angle that usually varies from 90° to 135°. Plastic strain is introduced into the metal by simple shear, which occurs at the channel intersection. Since the billet dimensions remain unchanged, the process can be repeated in order to accumulate a desired high strain. Due to force limitation, only a relatively small amount of material can be processed at each stage. That is the reason why ECAP is not widely used in industrial practice. Recently, incremental ECAP (I-ECAP) has been developed by Rosochowski and Olejnik (2007). In I-ECAP, the stages of material feeding and plastic deformation are separated, which reduces the feeding force dramatically. The tool configuration consists of a punch working in a reciprocating manner and a die leading and feeding the material in consecutive steps. When the punch moves away from the die, the billet is fed by a small increment. Then, when feeding stops and the billet is in a fixed position, the punch approaches and plastically deforms it. The mode of deformation is simple shear, the same as in classical ECAP.

1.3. Microstructure evolution during ECAP of Mg alloys

Figueiredo and Langdon (2010) presented a model which states that the mechanism of grain refinement during ECAP processing of the AZ31 magnesium alloy is dynamic recrystallization (DRX). This hypothesis is based on the occurrence of a bimodal microstructure after ECAP processing. They also introduced a critical grain size in order to explain that a homogeneous grain size distribution can be achieved only if the initial mean grain size is small enough. Otherwise, the newly formed recrystallized grains are not able to fully consume the initial coarse grains. This observation was made earlier by other researchers (Janeček et al., 2007; Lapovok et al., 2008; Ding et al., 2009). Therefore, the cellular automata transition rules usually used to describe DRX during hot forming of metals are applied in the presented model.

2. MODEL DESCRIPTION

2.1. Overview

In the present work, microstructure evolution during ECAP and its incremental variant is modelled using the cellular automata finite element (CAFE) technique. The cellular automata (CA) space plays the role of a digital material representation in the meso scale; artificial grains are represented by CA cells. Microstructure evolution is described by transition functions, whose parameters are the macro scale integration point variables obtained from the FE simulation: strain, strain rate and temperature (mapped from FE nodes). The internal state variable method (Pietrzyk, 2002), which treats dislocation density as a material variable, is used to describe the changes that occur in the micro scale during plastic deformation. The dislocation density in each cell controls the nucleation process. A random pentagonal neighbourhood and absorbing boundary conditions are used to define the CA space and its dynamics.

2.2. Dislocation density evolution

Evolution of dislocation density during hot metal forming is controlled by two competing processes: strain hardening and thermal softening caused by recovery or recrystallization. Increase in dislocation density is caused by storage of dislocations while decrease in dislocation density results from annihilation of dislocations. The change of dislocation densi-

358

INFORMATYKA W TECHNOLOGII MATERIAW

ty due to strain increment is described by the equation (1) (Sellars and Zhu, 2000):
d iCA d iCA d iCA A1 M Q2 CA Q exp 1 d iCA A2 iCA d i 1 exp b RT RT (1)

2.3.

Cellular automata space evolution

where: M Taylor factor, b Burgers vector, d effective strain increment, Q1 effective activation efenergy, R gas constant, T temperature,

fective strain rate, iCA 1 dislocation density at previous strain increment, Q2 apparent energy, A1, A2 fitting parameters. The presented approach is similar to the model developed by Mecking and Kocks (1981) where dislocation density evolution is also introduced as competition between dislocations storage and annihilation. Decrease in dislocation density is dependent on temperature as dislocation annihilation is characterized as thermally activated process (Madej et al., 2006). Parameters of equations (1) were derived from literature (table 1). Fitting coefficients were determined using Hooke-Jeeves optimization method. Dislocation density dependence on strain at 200C and strain rate equal to 0.01 s-1 based on these parameters, together with experimental results obtained for pure magnesium in similar conditions (Klimanek & Poetzsch, 2002; Mathis et al., 2004), are illustrated in figure 1.
Table 1. Parameters of the dislocation density evolution equation (1).
Q1, kJ/mol (Barnett, 2003): 147; A1: 3.85e3; A2: 30; Q2, kJ/mol (Das, 2010): 21; M (Chino, 2006): 2.38; b, nm (Chino, 2006): 0.32

Fig. 1. Dislocation density evolution at 200 °C and strain rate 0.01 s⁻¹.

Microstructure evolution during processing is modelled using transition rules, which describe how the state of each CA cell changes depending on its own state and the states of its neighbours after the previous time increment. The CA space dynamics can be described by the following rules: a cell becomes a nucleation site when its dislocation density exceeds a critical value; privileged areas for the dislocation density rise are grain boundaries and their vicinity (Galiyev et al., 2003); a cell changes its state to recrystallized when one of its neighbours is recrystallized; a grain grows as long as the virtual energy related to its growth potential is greater than zero; new grains cannot be consumed by other recrystallized grains. A critical value of dislocation density in a CA cell must be reached to create a new grain nucleus in this cell. The nucleation process is favoured where the level of stored energy is higher than in other areas. Following these arguments, equation (2) is used to calculate the critical dislocation density. It is a simplified form of the formula introduced by Roberts and Ahlblom (1978):

\rho_{crit} = \left( p_1 \gamma T^{p_2} \right)^{1/3}    (2)

where: \gamma - grain boundary energy, p_1, p_2 - fitting parameters (table 2).
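Assuming the reconstructed form of equation (2), rho_crit = (p1 * gamma * T**p2)**(1/3), the nucleation transition rule can be sketched per CA cell as below. The parameter values and the boolean grain-boundary mask are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rho_critical(gamma, T, p1, p2):
    """Critical dislocation density for nucleation, equation (2),
    in the reconstructed form rho_crit = (p1 * gamma * T**p2) ** (1/3)."""
    return (p1 * gamma * T ** p2) ** (1.0 / 3.0)

def find_nuclei(rho, on_grain_boundary, gamma, T, p1, p2):
    """Transition rule: a CA cell becomes a nucleation site when its stored
    dislocation density exceeds the critical value; nucleation is restricted
    to grain-boundary cells, the privileged areas for dislocation storage."""
    return (rho > rho_critical(gamma, T, p1, p2)) & on_grain_boundary
```

`find_nuclei` returns a boolean mask over the CA space; the marked cells are switched to the recrystallized state and seeded as new grains in the next time step.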

Grain boundary energy depends on the misorientation angle between neighbouring grains and is evaluated from the equation derived by Read and Shockley (1950) for low angle grain boundaries; for high angle grain boundaries it is kept constant. Misorientation between neighbouring grains is calculated using the method presented by Zhu et al. (2000). The orientation of each grain is described by three Euler angles: φ1, Φ, φ2 (Bunge notation). Wang and Huang (2003) showed the relation between crystallographic orientation and texture components in hcp metals. Since slip on the basal plane is the most favourable deformation mechanism for magnesium, dislocation accumulation is more probable for grains with orientations closer to basal than to prismatic or


pyramidal ones. Since processing at elevated temperatures is simulated, twinning is not taken into account; moreover, twinning was not revealed in the microscopic observations. At each time step, the total dislocation density increment is divided among a number N of randomly chosen cells, whose extrinsic dislocation density is increased. This parameter depends on the temperature and strain rate obtained from the FE simulation. Since grain boundaries are the privileged areas for nucleation, more new nuclei are expected to occur when there are more grains in the CA space. The evolution of N is given by equation (3):

N = p_3 \, \dot{\varepsilon}^{p_4} \, n_d^{p_5} \exp(p_6 T)    (3)

where: N - number of cells with increased extrinsic dislocation density, \dot{\varepsilon} - effective strain rate, n_d - number of grains in the CA space, p_3, p_4, p_5, p_6 - fitting parameters.

The grain growth rate is associated with the virtual energy assigned to each new nucleus. Since the mean DRXed grain size depends mostly on temperature, the grain growth energy is a function of temperature and misorientation angle. As a new grain grows, its energy is lowered; the process stops when the grain growth energy is equal to zero, at which point the grain has no potential for further expansion. The grain growth energy is introduced using the empirical equation (4):

E_{gg}^{grain} = p_7 \exp(p_8 T) \, (\theta / 10)^{0.7}    (4)

where: T - temperature, \theta - misorientation angle, p_7, p_8 - fitting parameters.

Table 2. Parameters of the CA space evolution equations (2)-(4).
p1: 4.0756e70; p2: -31.4176; p3: 0.6677e18; p4: 0.004261; p5: 1.06699; p6: -0.092; p7: 1.1463; p8: 0.094

3. EXPERIMENTAL PROCEDURE

Commercially extruded AZ31B magnesium rods with 17 mm diameter were machined using the EDM cutting technique in order to obtain bars with a square cross-section of 10 x 10 mm² and a length of 100 mm. A double-billet variant of I-ECAP was realised using a 1 MN hydraulic servo press (Rosochowski et al., 2008). The billets were fed using a screw jack whose action was synchronised with the reciprocating movement of the punch. The punch movement followed an externally generated sine waveform with a frequency of 0.5 Hz and an amplitude of 1.8 mm. The motor-driven screw jack was controlled using National Instruments hardware and software (LabVIEW). The feeding stroke was equal to 0.2 mm. A die with a 90° angle between channels was used to conduct four passes of I-ECAP, which resulted in a total strain of 4.6. Route BC was used, meaning that after each pass the billet was rotated by 90°. The AZ31B billets and dies were heated up to 250 °C using electric heaters. The temperature during processing was kept constant within ±2 °C, based on the readings obtained from a thermocouple located near the deformation zone.

4. RESULTS

FE simulations were run using the Abaqus/Explicit commercial software. CA microstructure evolution calculations and visualisation were performed using self-developed software. The CA space dimensions were 400 x 400 cells, which corresponded to a 100 x 100 μm² area of real material. The same conditions as in the I-ECAP experiment were used in the simulation. In order to investigate the effect of temperature and strain rate on the final mean grain size and microstructure homogeneity, additional experimental results obtained for different process parameters were derived from the literature (table 3). Simulations were performed for temperatures of 200, 225 and 250 °C and strain rates within the range 0.01-0.5 s⁻¹. The initial microstructure of the I-ECAP processed material was heterogeneous; coarse grains were surrounded by smaller ones (figure 2a), which could be attributed to DRX during hot extrusion of the supplied rods. The corresponding digital representation of the as-received material was obtained by uniform grain growth and simulation of DRX (figure 2b). The mean grain size obtained after I-ECAP processing was equal to 6.15 μm (figure 2c), which is similar to the results obtained by Suwas et al. (2007). Strain rate values during the two processes were significantly different, which could indicate that the processing temperature has a dominant effect on the final grain size. The simulated microstructure was similar to the real one and the predicted grain size was 6.3 μm (figure 2d).
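The cell-selection rule of equation (3) and the growth-energy assignment of equation (4) could be sketched as follows. This is a sketch under the reconstructed equation forms printed above; the array layout, random-number handling and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cells_to_reinforce(strain_rate, n_grains, p3, p4, p5, p6, T):
    """Equation (3): number N of randomly chosen CA cells whose extrinsic
    dislocation density is increased in the current time step."""
    return int(p3 * strain_rate ** p4 * n_grains ** p5 * np.exp(p6 * T))

def distribute_increment(rho, d_rho_total, n_cells):
    """Divide the total dislocation-density increment equally among
    n_cells randomly chosen CA cells (drawn without replacement)."""
    idx = rng.choice(rho.size, size=n_cells, replace=False)
    rho.flat[idx] += d_rho_total / n_cells
    return rho

def grain_growth_energy(theta, T, p7, p8):
    """Equation (4), reconstructed form: initial growth energy of a new
    grain as a function of temperature T and misorientation theta (deg);
    the grain stops growing once this energy has been spent."""
    return p7 * np.exp(p8 * T) * (theta / 10.0) ** 0.7
```

In a full CA step the increment from the internal variable model is spread over the N cells returned by `cells_to_reinforce`, and each newly nucleated grain receives the energy from `grain_growth_energy`, which is then decremented as the grain expands.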

Table 3. Simulation parameters.
I-ECAP (this work): initial temperature 250 °C, strain rate 0.5 s⁻¹, channel angle 90°
Ding et al. (2009): initial temperature 225 °C, strain rate 0.3 s⁻¹, channel angle 120°
Ding et al. (2009): initial temperature 200 °C, strain rate 0.3 s⁻¹, channel angle 120°
Jin et al. (2006): initial temperature 225 °C, strain rate 0.01 s⁻¹, channel angle 90°

Fig. 2. Microstructure images of as-received material (a) and after the 4th pass of I-ECAP at 250 °C (c); corresponding simulation results in (b) and (d), respectively.

Ding et al. (2009) conducted ECAP of the AZ31 magnesium alloy at 200 °C and 225 °C using a die with a 120° channel angle. The initial microstructure and its digital material representation are shown in figure 3. The initial mean grain size was 7.2 μm, the same as the average grain size of the CA representation. The microstructure obtained after 4 passes at 200 °C and the corresponding results of modelling the microstructure evolution are shown in figure 3 as well. After 4 passes at 200 °C the grain size was reduced to 1.8 μm, while the model predicted 1.7 μm. When processing at 225 °C, the microstructure was less homogeneous than at 200 °C. The mean grain size calculated using the developed model was equal to 2.2 μm, while the experimental result was 2.4 μm.

Fig. 3. Microstructure images of as-received material (a) and after the 4th pass of ECAP at 200 °C (c) obtained by Ding et al. (2009); corresponding simulation results in (b) and (d), respectively.

The initial mean grain size of the material processed by Jin et al. (2006) was equal to 15.6 μm; coarse grains and a few smaller grains were observed (figure 4a). A digital material representation was generated using non-uniform grain growth: 80% of the grains grew more slowly than the others, and the mean grain size was 15.8 μm. After the first pass at 225 °C, coarse grains were surrounded by colonies of very small grains (figure 4c). The interior of the coarse grains was not consumed during the DRX process. The mean grain size after the first pass was measured to be 4.1 μm, while the model predicted 4.25 μm.

Fig. 4. Microstructure images of as-received material (a) and after the 1st pass of ECAP at 225 °C (c) obtained by Jin et al. (2006); corresponding simulation results in (b) and (d), respectively.

Results obtained from the developed CAFE model are in good agreement with the experimental data (figure 5). The mean grain size after the first pass depends strongly on the initial microstructure. Significant grain refinement is observed after the first pass, but it is not sufficient to refine a microstructure dominated by coarse grains. Only grains smaller than ~15 μm can be fully recrystallized during the first pass of ECAP at 200 °C. As a result, a heterogeneous grain size distribution is obtained, and further deformation is needed to refine and homogenize the microstructure. Grain refinement is limited by temperature; it is


shown that a mean grain size as small as 2 μm can be obtained at 200 °C. Although a smaller grain size cannot be obtained at a given temperature, further processing leads to microstructure homogenization.

Fig. 5. Mean grain size obtained from experiments and CAFE simulations.

5. CONCLUSIONS

A multi-scale CAFE approach was developed in order to model microstructure evolution during equal channel angular pressing. Computations were performed for various temperatures and strain rates typical for the processing of magnesium alloys. Numerical results were verified using experimental data from conventional and incremental ECAP: the former were derived from the literature, the latter were obtained from the I-ECAP experiment. The model correctly predicted both the mean grain size after subsequent passes of ECAP/I-ECAP and the microstructure homogeneity. In particular, the heterogeneous grain size distribution after the first pass of ECAP for an initially coarse-grained microstructure was predicted, as well as its further homogenization. Future work will focus on modelling grain reorientation due to deformation by simple shear.

Acknowledgements. Financial support from Carpenter Technology Corporation is kindly acknowledged.

REFERENCES
Barnett, M.R., 2003, Recrystallization during and following hot working of magnesium alloy AZ31, Materials Science Forum, 419-422, 503-508. Chino, Y., Hoshika, T., Lee, J.-S., 2006, Mechanical properties of AZ31 Mg alloy recycled by severe deformation, J. Mater. Res., 21, 754-760. Das, S., 2010, Modeling mixed microstructures using a multilevel cellular automata finite element framework, Computational Materials Science, 47, 705-711.

Ding, S.X., Chang, C.P., Kao, P.W., 2009, Effects of Processing Parameters on the Grain Refinement of Magnesium Alloy by Equal-Channel Angular Extrusion, Metallurgical and Materials Transactions A, 40A, 415-425.
Figueiredo, R.B., Langdon, T.G., 2010, Grain refinement and mechanical behavior of a magnesium alloy processed by ECAP, Journal of Materials Science, 45, 4827-4836.
Galiyev, A., Kaibyshev, R., Sakai, T., 2003, Continuous Dynamic Recrystallization in Magnesium Alloy, Materials Science Forum, 419-422, 509-514.
Janeček, M., Popov, M., Krieger, M.G., Hellmig, R.J., Estrin, Y., 2007, Mechanical properties and microstructure of a Mg alloy AZ31 prepared by equal-channel angular pressing, Materials Science and Engineering A, 462, 116-120.
Jin, L., Lin, D., Mao, D., Zeng, X., Chen, B., Ding, W., 2006, Microstructure evolution of AZ31 Mg alloy during equal channel angular extrusion, Materials Science and Engineering A, 423, 247-252.
Klimanek, P., Poetzsch, A., 2002, Microstructure evolution under compressive plastic deformation of magnesium at different temperatures and strain rates, Materials Science and Engineering A, 324, 145-150.
Lapovok, R., Estrin, Y., Popov, M.V., Langdon, T.G., 2008, Enhanced Superplasticity in a Magnesium Alloy Processed by Equal-Channel Angular Pressing with a Back-Pressure, Advanced Engineering Materials, 10, 429-433.
Madej, L., Hodgson, P.D., Pietrzyk, M., 2006, Development of the Multi-scale Analysis Model to Simulate Strain Localization Occurring During Material Processing, Arch. Comput. Methods Eng., 16, 287-318.
Mathis, K., Nyilas, K., Axt, A., Dragomir-Cernatescu, I., Ungar, T., Lukac, P., 2004, The evolution of non-basal dislocations as a function of deformation temperature in pure magnesium determined by X-ray diffraction, Acta Materialia, 52, 2889-2894.
Mecking, H., Kocks, U.F., 1981, Kinetics of flow and strain-hardening, Acta Metallurgica, 29, 1865-1875.
Pietrzyk, M., 2002, Through-process modeling of microstructure evolution in hot forming of steels, Journal of Materials Processing Technology, 125-126, 53-62.
Read, W.T., Shockley, W., 1950, Dislocation Models of Crystal Grain Boundaries, Phys. Rev., 78, 275-289.
Roberts, W., Ahlblom, B., 1978, A nucleation criterion for dynamic recrystallization during hot working, Acta Metallurgica, 26, 801-813.
Rosochowski, A., Olejnik, L., 2007, FEM simulation of incremental shear, Proc. Conf. Esaform 2007, eds. Cueto, E., Chinesta, F., Zaragoza, Spain, 653-658.
Rosochowski, A., Olejnik, L., Richert, M., 2008, Double-Billet Incremental ECAP, Materials Science Forum, 584-586, 139-144.
Segal, V.M., 1995, Materials processing by simple shear, Materials Science and Engineering A, 197, 157-164.
Sellars, C.M., Zhu, Q., 2000, Microstructural modelling of aluminium alloys during thermomechanical processing, Materials Science and Engineering A, 280, 1-7.
Suwas, S., Gottstein, G., Kumar, R., 2007, Evolution of crystallographic texture during equal channel angular extrusion (ECAE) and its effects on secondary processing of magnesium, Materials Science and Engineering A, 471, 1-14.
Svyetlichnyy, D.S., 2012, Reorganization of cellular space during the modeling of the microstructure evolution by


frontal cellular automata, Computational Materials Science, 60, 153-162.
Wang, Y.N., Huang, J.C., 2003, Texture analysis in hexagonal materials, Materials Chemistry and Physics, 81, 11-26.
Zhu, G., Mao, W., Yu, Y., 2000, Calculation of misorientation distribution between recrystallized grains and deformed matrix, Scripta Materialia, 42, 37-41.

MODELLING OF MICROSTRUCTURE EVOLUTION DURING EQUAL CHANNEL ANGULAR PRESSING OF MAGNESIUM ALLOYS USING THE CAFE METHOD

Abstract

Equal channel angular pressing (ECAP) is one of the most popular methods of producing ultrafine-grained metals. However, because of the large forces needed to carry out the process, only relatively short billets can be extruded. A solution to this problem may be the incremental variant of the process, so-called I-ECAP. Since infinitely long workpieces can be processed by I-ECAP, it may find wide application in industrial practice. The mechanism of grain refinement during plastic working of magnesium alloys differs significantly from that of metals such as aluminium or copper and their alloys. Recent results indicate that the grain refinement mechanism during ECAP is governed by dynamic recrystallization, and the final mean grain size depends mainly on the process temperature. In this work, a coupled cellular automata finite element (CAFE) method is used to describe microstructure evolution during four passes of ECAP and its incremental variant, I-ECAP. The dynamics of changes in the cellular automata space is determined by transition rules whose parameters are the strain, strain rate and temperature obtained from finite element simulations. An internal variable model describes the growth of the total dislocation density and passes this information to the cellular automata space. The developed CAFE model calculates the mean grain size and generates a digital image of the microstructure, which can be useful in determining the mechanical properties of the obtained material. Fitting and verification of the model were performed using results of the conducted incremental ECAP process of the AZ31B magnesium alloy as well as literature data. The CAFE simulation results were verified for the temperature range 200-250 °C and strain rates of 0.01-0.5 s⁻¹; very good agreement with the experimental results was obtained.

Received: September 20, 2012
Received in a revised form: November 4, 2012
Accepted: November 21, 2012




THE MIGRATION OF KIRKENDALL PLANE DURING DIFFUSION


BARTEK WIERZBA Interdisciplinary Centre for Materials Modelling, FMSci&C, AGH University of Science and Technology, Al. Mickiewicza 30, 30-059 Krakw, Poland *Corresponding author: bwierzba@agh.edu.pl
Abstract

In this study some aspects of Kirkendall and lattice plane migration in binary diffusion couples are studied by means of numerical simulations using the bi-velocity method. The bi-velocity (Darken-type) method is based on the postulate of the unique transport of mass due to diffusion. The method deals with: 1) composition-dependent diffusivities, 2) different partial molar volumes of components, 3) the stress field during the diffusion process and 4) entropy production. It is shown that the method allows for calculation of the trajectory of the Kirkendall plane in binary diffusion couples.

Key words: Kirkendall plane, bi-velocity method, interdiffusion, trajectory

1. INTRODUCTION

In recent years, both experimental characterization and computer simulations have revealed many new phenomena that are not yet fully explained by existing theories and models. Examples include: (i) multiple Kirkendall planes (bifurcations), (ii) the stability of an individual Kirkendall plane, and (iii) the discontinuity of the Kirkendall velocity at moving interphase boundaries (van Dal et al., 2001; van Loo et al., 1990). The Kirkendall effect (Smigelskas & Kirkendall, 1947) always accompanies interdiffusion and manifests itself in many phenomena: the migration of inclusions inside the diffusion zone, the development of porosity, the generation of stress and the plastic deformation of the material. These diffusion-induced processes are of concern in a wide variety of structures including composite materials, coatings, weld junctions and thin-film electronic devices (Boettinger et al., 2007; Gusak, 2010). While Darken's treatment of diffusion has withstood the test of time, the efforts directed towards its implementation into physics and thermodynamics are far from being accepted. The reason may be attributed to the inherent experimental difficulties involved in the measurement of material velocities. The rationalization and formal description of the Kirkendall effect are by no means trivial. Despite intensive work in this field, a number of fundamental questions still remain to be answered. Is the Kirkendall plane unique? In other words, could inert particles placed at the initial contact interface migrate differently in the diffusion zone, so that two (or more) "Kirkendall planes" can be expected? (van Dal et al., 2001). Experiments confirm that fiducial markers may have different trajectories. However, the existing methods dealing with Kirkendall trajectories do not quantify the bifurcations, trifurcations, instability and discontinuity (van Loo et al., 1990). The bi-velocity method is a generalization of the Darken method of interdiffusion. It is based on a rigorous mathematical derivation of mass, momentum and energy conservation (Danielewski


ISSN 1641-8581


& Wierzba, 2010). It allows one to calculate the densities, the drift velocity, and the energy and entropy densities in multicomponent systems. The method can be used when the gradient of the mechano-chemical potential (from fairly well known thermodynamic properties) and the diffusivities as functions of concentration (from measured tracer diffusivities) are known. The method is limited to the axiatoric part of the stress tensor only, i.e., rotations of the system are neglected (rot υ = 0). The purpose of this paper is to use the bi-velocity method to calculate the Kirkendall trajectory (the position of the Kirkendall plane).

2. THE BI-VELOCITY METHOD (DANIELEWSKI & WIERZBA, 2010)

The core of the bi-velocity method is the mass balance equation:

\frac{\partial \rho_i}{\partial t} = -\frac{\partial}{\partial x}\left(J_i^{d} + J_i^{drift}\right), \quad i = 1, 2,    (1)

where \rho_i is the mass density and J_i^{d} and J_i^{drift} denote the diffusion and drift flux, respectively. The diffusion flux, J_i^{d} = \rho_i \upsilon_i^{d}, in the case when no external forces are considered, is given by the Nernst-Planck equation (Nernst, 1889; Planck, 1890):

J_i^{d} = c_i \upsilon_i^{d} = -c_i \frac{D_i}{RT} \frac{\partial}{\partial x}\left(\mu_i^{ch} + \Omega_i p\right), \quad i = 1, 2,    (2)

where \upsilon_i^{d} denotes the diffusion velocity, \mu_i^{ch} is the chemical potential, D_i the intrinsic diffusion coefficient, and \Omega_i and M_i are the partial molar volume and molecular mass of the i-th component, respectively; R and T are the gas constant and temperature, and p denotes the pressure field acting on the components. The density \rho_i is related to the concentration by \rho_i = M_i c_i.

To calculate the drift flux, the Volume Continuity Equation (VCE) is used. The differential form of the VCE follows:

\sum_{i=1}^{2} \frac{\Omega_i}{M_i} \rho_i \left(\upsilon_i^{d} + \upsilon^{drift}\right) = 0.    (3)

The drift velocity, \upsilon^{drift}, of the mixture can thus be defined:

\upsilon^{drift} = -\sum_{i=1}^{2} \frac{\Omega_i}{M_i} \rho_i \upsilon_i^{d}.    (4)

The pressure, p, generated during the diffusion process is described by the Cauchy stress tensor. Thus, the pressure evolution can be approximated from the dilatation of an ideal crystal:

\frac{dp}{dt} = -\frac{E}{3(1 - 2\nu)} \frac{\partial}{\partial x} \sum_{i=1}^{2} \frac{\Omega_i}{M_i} \rho_i \upsilon_i^{d},    (5)

where E and \nu are the Young modulus and Poisson ratio, respectively.

The bi-velocity method also allows one to calculate the energy and entropy balances. Assuming time-independent external forcing (\partial V^{ext}/\partial t = 0), the internal energy conservation law becomes:

\frac{\partial (\rho_i u_i)}{\partial t} = -\frac{\partial}{\partial x}\left[\rho_i u_i \left(\upsilon_i^{d} + \upsilon^{drift}\right)\right] - c_i \Omega_i \left(\upsilon_i^{d} + \upsilon^{drift}\right) \frac{\partial p}{\partial x},    (6)

where u_i denotes the internal energy of the i-th component. The overall internal energy, u, of the mixture can be calculated from the component counterparts: u = \sum_{i=1}^{2} \rho_i u_i. Finally, the entropy, s, when diffusion is not negligible, can be calculated from the partial Gibbs-Duhem relation,

T s_i = u_i - \mu_i^{ch} + \frac{\Omega_i}{M_i} p, \qquad s = \sum_{i=1}^{2} \rho_i s_i.

3. RESULTS

In this paper three different methods of calculating the position of the Kirkendall plane in binary A-B diffusion couples are presented and compared: 1) the "curve method" (Aloke, 2004; Gusak, 2010), 2) the "trajectory method" and 3) the "entropy approximation". The first two methods are based on the drift velocity and its integral; the last method assumes that the position of the Kirkendall plane is defined by the local maxima of the entropy distribution.

The "curve method". The Kirkendall velocity can be calculated from the drift velocity, equation (4). Assuming a diffusion process in a binary system and partial molar volumes that are constant and equal, \Omega_i = \Omega_j = \Omega for all i, j, the drift velocity can be rewritten in the following form:

\upsilon^{drift} = -\Omega \left(c_1 \upsilon_1^{d} + c_2 \upsilon_2^{d}\right)    (7)


From the Euler theorem, the overall molar volume equals the inverse of the overall concentration, \Omega = 1/c. Thus:

\upsilon^{drift} = -\left(N_1 \upsilon_1^{d} + N_2 \upsilon_2^{d}\right)    (8)

where the molar ratio N_i = c_i / c. Substituting equation (2), the drift velocity in a binary system, when the pressure field is neglected, can be rewritten in the following form:

\upsilon^{drift} = N_1 \frac{D_1}{RT} \frac{\partial \mu_1^{ch}}{\partial x} + N_2 \frac{D_2}{RT} \frac{\partial \mu_2^{ch}}{\partial x}    (9)

When ideality is assumed, i.e. \partial \mu_i^{ch}/\partial x = RT \, \partial \ln c_i / \partial x, the drift velocity takes its final form:

\upsilon^{drift} = \left(D_1 - D_2\right) \frac{\partial N_1}{\partial x}    (10)

In a diffusion-controlled interaction, the Kirkendall plane is the plane of initial contact moving with a parabolic time dependence; thus the velocity of the Kirkendall plane:

\upsilon_K = \upsilon^{drift}(x_K, t^*) = \frac{dx_K}{dt} = \frac{x_K}{2t^*}    (11)

where x_K is the position of the Kirkendall plane at time t = t*. The position(s) of the Kirkendall plane(s) can be found at the point(s) of intersection between the drift velocity curve and the straight line calculated from equation (11).

The "trajectory method". The position of the Kirkendall plane can also be calculated by following the marker during the diffusion process, equation (12):

x_K(t_2) = \int_{t_1}^{t_2} \upsilon^{drift}\left(x_K(t), t\right) dt    (12)

At each time step the new position of the Kirkendall plane is calculated.

The "entropy approximation". In this study it is postulated that the local maxima of the entropy distribution curve indicate the positions of the Kirkendall planes (the most favoured places) in the diffusion couple.

Figure 1 shows a comparison of the above methods of calculating the position of the Kirkendall plane. The data used to calculate the diffusion process are presented in table 1.

Fig. 1. a) The concentration profile in a binary A-B diffusion couple; comparison of the positions of the Kirkendall plane (vertical line) obtained by different calculation methods: b) velocity curve; c) trajectory and d) entropy bi-velocity method.
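The curve method (equations (10)-(11)) and the trajectory method (equation (12)) can be illustrated on an ideal error-function concentration profile with constant diffusivities. This is a simplified sketch, not the author's CADiff implementation; the diffusivity values below are illustrative.

```python
import math

# Illustrative values (cm^2/s and s); Dm is the effective diffusivity of
# the error-function profile, D1/D2 are the intrinsic coefficients.
D1, D2, Dm = 2.0e-10, 1.0e-10, 1.5e-10
T_END = 36000.0

def N1(x, t):
    """Molar fraction of component A: ideal error-function solution."""
    return 0.5 * math.erfc(x / (2.0 * math.sqrt(Dm * t)))

def v_drift(x, t, h=1e-5):
    """Equation (10): v_drift = (D1 - D2) * dN1/dx, by central difference."""
    return (D1 - D2) * (N1(x + h, t) - N1(x - h, t)) / (2.0 * h)

def curve_method(t, lo=-1e-2, hi=0.0, iters=60):
    """Equation (11): find x_K where v_drift(x, t) = x / (2 t), by bisection.
    For D1 > D2 the Kirkendall plane moves towards the A-rich (negative) side."""
    f = lambda x: v_drift(x, t) - x / (2.0 * t)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def trajectory_method(t, n=20000, t0=1.0):
    """Equation (12): follow the marker by integrating dx_K/dt = v_drift(x_K, t)."""
    x, tt, dt = 0.0, t0, (t - t0) / n
    for _ in range(n):
        x += v_drift(x, tt) * dt
        tt += dt
    return x
```

For these values the two methods agree to within a few per cent, and the plane position follows the parabolic x_K proportional to sqrt(t) behaviour implied by equation (11).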


Table 1. The data used in simulations of diffusion in a binary A-B diffusion couple.
T = 1273 K; t = 36000 s; thickness d = 0.02 cm
Diffusion coefficient of A [cm² s⁻¹]: N_A · 10⁻¹⁰ (composition dependent)
Diffusion coefficient of B [cm² s⁻¹]: 10⁻¹⁰

The presented simulation results were calculated using the bi-velocity method. Figure 1 shows a) the concentration profile, and a comparison of the calculated Kirkendall plane positions using different methods: b) the velocity curve, c) the trajectory method and d) the entropy bi-velocity method in a binary A-B diffusion couple. Figure 1 shows that the three presented methods give similar results when only one position of the Kirkendall plane is expected.

4. CONCLUSIONS

The bi-velocity method for interdiffusion allows a quantitative and qualitative description of the mass transport process in binary systems. The method is based on the postulate that each component's velocity \upsilon_i must be divided into two parts: \upsilon_i^{d}, the unique diffusion velocity, which depends on the diffusion potential gradient and is independent of the choice of the reference frame; and the drift velocity \upsilon^{drift}. The drift velocity, common for all components, allows calculation of the position and trajectory of the Kirkendall planes during the diffusion process. The method effectively deals with (1) composition-dependent diffusivities, (2) different partial molar volumes of components, (3) the stress field during the diffusion process and (4) entropy production. The model was applied to the modeling of the trajectory of the Kirkendall planes in binary diffusion couples. The examples presented in this work show that the entropy curve can be used to approximate the position of the Kirkendall markers when the diffusion coefficients are composition dependent.

Acknowledgments. This work has been supported by the National Science Centre (NCN) in Poland, decision number DEC-2011/03/B/ST8/05970. The software CADiff is available from the author.

REFERENCES

Aloke, P., 2004, The Kirkendall effect in solid state diffusion, Technische Universiteit Eindhoven, Eindhoven.
Boettinger, W.J., Guyer, J.E., Campbell, C.E., McFadden, G.B., 2007, Computation of the Kirkendall velocity and displacement fields in a one-dimensional binary diffusion couple with a moving interface, Proc. R. Soc. A, 463, 3347-3373.
Danielewski, M., Wierzba, B., 2010, Thermodynamically consistent bi-velocity mass transport phenomenology, Acta Mat., 58, 6717-6727.
Gusak, A.M., 2010, Diffusion-controlled Solid State Reactions in Alloys, Thin Films, and Nanosystems, Wiley-VCH Verlag GmbH & Co., Weinheim.
Nernst, W., 1889, Die elektromotorische Wirksamkeit der Ionen, Z. Phys. Chem., 4, 129-140 (in German).
Planck, M., 1890, Über die Potentialdifferenz zwischen zwei verdünnten Lösungen binärer Elektrolyte, Ann. Phys. Chem., 40, 561-576 (in German).
Smigelskas, A.D., Kirkendall, E., 1947, Zinc Diffusion in Alpha Brass, Trans. AIME, 171, 130-142.
van Dal, M.J.H., Gusak, A.M., Cserhati, C., Kodentsov, A.A., van Loo, F.J.J., 2001, Microstructural Stability of the Kirkendall Plane in Solid-State Diffusion, Phys. Rev. Lett., 86, 3352-3355.
van Loo, F.J.J., Pieraggi, B., Rapp, R.A., 1990, Interface migration and the Kirkendall effect in diffusion-driven phase transformations, Acta Metall. Mater., 38, 1769-1779.

CRITERIA FOR THE EVOLUTION OF THE KIRKENDALL PLANE DURING THE DIFFUSION PROCESS

Abstract

The paper presents the bi-velocity method, which allows the evolution of the Kirkendall plane during the diffusion process to be determined. The bi-velocity method is a generalization of the Darken method. It is based on the integral conservation laws of mass, momentum and energy. The algorithm for calculating the trajectory of the Kirkendall plane also allows the determination of: 1) the component concentrations, 2) the drift velocity, 3) the energy and 4) the entropy production in multicomponent condensed phases. It is shown that the method enables correct determination of the position of the Kirkendall plane during the diffusion process in binary systems.
Received: October 25, 2012
Received in a revised form: November 21, 2012
Accepted: December 12, 2012




MODELING OF STATIC RECRYSTALLIZATION KINETICS BY COUPLING CRYSTAL PLASTICITY FEM AND MULTIPHASE FIELD CALCULATIONS
ONUR GÜVENÇ1, THOMAS HENKE1, GOTTFRIED LASCHET2, BERND BÖTTGER2, MARKUS APEL2, MARKUS BAMBACH1*, GERHARD HIRT1
1 Institute of Metal Forming, RWTH Aachen University, Intzestrasse 10, D-52056 Aachen, Germany
2 ACCESS e.V., RWTH Aachen, Intzestrasse 5, D-52072 Aachen, Germany
*Corresponding author: bambach@ibf.rwth-aachen.de
Abstract In multi-step hot forming processes, static recrystallization (SRX), which occurs in interpass times, influences the microstructure evolution, the flow stress and the final product properties. Static recrystallization is often simply modeled based on Johnson-Mehl-Avrami-Kolmogorov (JMAK) equations which are linked to the visco-plastic flow behavior of the material. Such semi-empirical models are not able to predict the SRX grain microstructure. In this paper, an approach for the simulation of static recrystallization of austenitic grains is presented which is based on the coupling of a crystal plasticity method with a multiphase field approach. The microstructure is modeled by a representative volume element (RVE) of a homogeneous austenitic grain structure with periodic boundary conditions. The grain microstructure is generated via a Voronoi tessellation. The deformation of the RVE, considering the evolution of grain orientations and dislocation density, is calculated using a crystal plasticity finite element (CP-FEM) formulation, whose material parameters have been calibrated using experimental flow curves of the considered 25MoCrS4 steel. The deformed grain structure (dislocation density, orientation) is transferred to the FDM grid used in the multiphase field approach by a dedicated interpolation scheme. In the phase field calculation, driving forces for static recrystallization are calculated based on the mean energy per grain and the curvature of the grain boundaries. A simplified nucleation model at the grain level is used to initiate the recrystallization process. Under these assumptions, it is possible to approximate the SRX kinetics obtained from the stress relaxation test, but the grain morphology predicted by the 2d model still differs from experimental findings. Key words: static recrystallization, crystal plasticity FEM, multi-phase field method, hot forming, periodic microstructure modeling

1. INTRODUCTION Microstructural changes play a major role in hot working processes, not only because the microstructure defines force requirements for forming through the flow stress but also since the microstructure defines final product properties. Static recrystallization (SRX) is one of the most dominant mechanisms during inter-pass periods of hot rolling or forging processes and it is a common practice to model its kinetics using Johnson-Mehl-Avrami-Kolmogorov

(JMAK) type equations. However, commonly used models such as those proposed by Sellars (1990) lack spatial resolution. They disregard the effects of grain topology, misorientations and local accumulations of deformation. Operating on the macro level, they assume that the microstructure associated with a material point can be described by average values of grain size, dislocation density or even strain. Calculation strategies based on Monte Carlo Potts (Raabe, 1999), cellular automata (Gawad et al., 2008), vertex (Piekos et al., 2008) and multi-phase

field methods (Takaki & Tomita, 2010) were proposed as attempts to capture the heterogeneities at the micro level. Among those methods, the phase field method offers a promising approach for modeling static recrystallization after plastic deformation due to its implicit definition of the grain boundaries as a diffusive interface. This simplifies the simulation of interface migration, avoiding the complexity of handling their topology one by one. In addition, its theoretical foundation in irreversible thermodynamics allows for the implementation of models based on the minimization of the free energy functional of the polycrystalline microstructure (Steinbach, 2009). In order to take full advantage of the phase field approach in an SRX model, a representative initial (i.e. deformed) state of the microstructure is a necessary starting condition. In recent years, crystal plasticity finite element (CP-FEM) simulations have gained momentum and have now reached a level of high predictive quality. The possibility to implement grain-scale flow stress evolution models and to capture intra- and inter-grain crystalline interactions during deformation enables the generation of a representative deformed microstructure for a phase field simulation (Roters et al., 2010). However, the results of intricate models with spatial resolution are often not compared to experimental results. In this paper, the microstructural evolution of a commercial steel grade (25MoCrS4) during SRX after a hot uniaxial compression test is simulated by coupling CP-FEM calculations and phase field simulations. A 2d microstructure is generated via a Voronoi algorithm, used to set up a CP-FEM model with random grain orientations, and subjected to uniaxial compression. The results of the CP-FEM simulation are mapped onto the finite difference grid of the multi-phase field SRX simulation. Finally, the predicted values are critically compared with the results of a stress relaxation test.

2. EXPERIMENTAL ANALYSIS

2.1. Material

The steel grade 25MoCrS4, a case hardening steel for gearing applications in the automotive and aerospace industry, was selected as the application material. Its chemical composition is given in table 1.

Table 1. Chemical composition of 25MoCrS4 (1.7326) according to DIN 17210 (values in wt. %).
25MoCrS4: C 0.23-0.29; Mn 0.60-0.90; Si 0.15-0.40; Cr 0.40-0.50; Mo 0.40-0.50

2.2. Compression tests

The hardening response of the material was obtained through a set of compression tests at 1100 °C and at five different strain rates: 0.01, 0.1, 1, 10 and 100 s⁻¹. Exact procedures of sample preparation and experimental methodology are described elsewhere (Henke et al., 2011). Fig. 1 shows the deformation response of the material under uniaxial compression at 1100 °C.

2.3. Stress relaxation tests

The SRX kinetics of the material were examined by stress relaxation tests. The compression test specimens (without lubrication pockets) were deformed to a pre-strain below the critical strain for dynamic recrystallization (DRX) at the different strain rates. After the predefined strain level was reached, the cross-head of the servo-hydraulic testing machine was kept at constant height and the force response of the specimen was measured over time. The decrease of the reaction force and the respective stress values were then converted to recrystallized volume fractions (X_RX) according to the procedure described by Gronostajski et al. (1983). Once the time evolution of X_RX is known, the JMAK kinetics of SRX can be evaluated by determining the unknown parameters of the modified Avrami equation:

X_{RX} = 1 - \exp\left[ -\ln 2 \left( \frac{t}{t_{0.5}} \right)^{n} \right]   (1)

in which n is the Avrami exponent and t_{0.5} denotes the time required to reach 50% recrystallization. For the given values of X_RX(t) and t_{0.5}, an Avrami exponent of n = 0.56 was determined by regression for the test case (strain 0.2, strain rate 0.01 s⁻¹, T = 1100 °C). Light optical microscopy (LOM) was used before and after SRX in order to determine the grain size evolution and the nucleation site preference. For details of the sample preparation we refer to Xiong et al. (2011). LOM results show that the average grain sizes before and after SRX are 36 µm and 7 µm, respectively, for all considered strain rates.
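To illustrate how n and t_{0.5} can be extracted from measured X_RX(t) data, equation (1) can be linearized to ln(−ln(1 − X_RX)) = ln(ln 2) + n·ln(t/t_{0.5}) and fitted by least squares. The sketch below does this for a synthetic curve; the function name and the synthetic data are illustrative, not the measured relaxation curve.

```python
import numpy as np

def fit_avrami(t, x_rx):
    """Least-squares estimate of n and t_0.5 from the linearized
    form of eq. (1): ln(-ln(1 - X)) = ln(ln 2) + n*ln(t / t_0.5)."""
    y = np.log(-np.log(1.0 - x_rx))
    n, c = np.polyfit(np.log(t), y, 1)            # slope is n
    t05 = np.exp((np.log(np.log(2.0)) - c) / n)   # intercept gives t_0.5
    return n, t05

# synthetic curve with known parameters (illustrative values)
t = np.linspace(1.0, 500.0, 50)
x = 1.0 - np.exp(-np.log(2.0) * (t / 60.0) ** 0.56)
n, t05 = fit_avrami(t, x)
```

For noisy experimental data, points with X_RX very close to 0 or 1 should be excluded before the fit, since the double logarithm amplifies their scatter.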

In addition, nucleation sites of the SRX are not found inside the grains, but on the grain boundaries.

3. MICROSTRUCTURE MODEL

3.1. Deformation model

In order to simulate the plastic hardening behavior, the well-known phenomenological deformation law by Hutchinson (1976) is used within the simulation software DAMASK (Roters et al., 2012). The law is defined by

\dot{\gamma}^{\alpha} = \dot{\gamma}_{0} \left| \frac{\tau^{\alpha}}{\tau_{c}^{\alpha}} \right|^{1/m} \operatorname{sgn}\left(\tau^{\alpha}\right),   (2)

\dot{\tau}_{c}^{\alpha} = \sum_{\beta=1}^{n} h^{\alpha\beta} \left| \dot{\gamma}^{\beta} \right|,   (3)

where \alpha denotes the active slip system, \dot{\gamma}^{\alpha} is the shear rate at the active slip system, \dot{\gamma}_{0} is any convenient reference shear rate, \tau^{\alpha} and \tau_{c}^{\alpha} denote the stress state and the critical resolved shear stress on the active slip system, m characterizes the strain rate sensitivity and, finally, h^{\alpha\beta} is the function defining the incremental value of \tau_{c}^{\alpha} in terms of shear increments on a chosen slip system \beta, which can be calculated using the equation

h^{\alpha\beta} = h_{0} \left( 1 - \frac{\tau_{c}^{\beta}}{\tau_{\mathrm{sat}}} \right)^{a},   (4)

In equation (4), h_0, a and τ_sat are material parameters (Kalidindi et al., 1992). The parameter set can be calibrated with the compression test results. The macro-scale stress-strain curve can be converted to its micro-scale counterpart by the method proposed by Taylor (1938). If the material is assumed to be isotropic throughout the deformation process (i.e. Taylor factor M = 3), the initial and saturation values of the slip resistance can be determined (τ₀ = 8 MPa, τ_sat = 16 MPa) and the other model parameters can be calculated numerically (h₀ = 300 MPa, a = 2). Fig. 1 shows the comparison of experimental and numerical responses of the material. Note that each experiment has been repeated five times in order to take the experimental scatter into account.

3.2. Coupling

Three types of data have to be mapped from the CP-FEM output to the phase field simulation: the grain index, the mean grain orientation and the mean stored energy per grain. However, the transfer of data from an FEM mesh to an FDM grid requires a dedicated interpolation scheme. In order to transfer the grain index and the grain orientation data from the nodes to the grid, it is assumed that each grid point takes the index and orientation value of its nearest neighboring node, as shown in Fig. 2. In addition, the local orientations have to be averaged after the mapping to determine the mean grain orientations using circular statistics (Berens, 2009). The stored energy of the deformation can be calculated from the flow stress increase using the equations

\tau_{c} - \tau_{0} = \alpha G b \sqrt{\rho},   (5)

E_{d} = G b^{2} \rho,   (6)


Fig. 1. Comparison of CP-FEM and compression test results of 25MoCrS4 at 1100 °C at various strain rates.


E_{d} = \frac{\left( \tau_{c} - \tau_{0} \right)^{2}}{\alpha^{2} G}   (7)


In equations (5), (6) and (7), E_d is the stored energy due to deformation, α is a proportionality constant, G(T) is the temperature-dependent shear modulus, and τ_c and τ₀ are the final and initial values of the shear resistance (Taylor, 1934). Then, the nodal energies are mapped onto the grid points.
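Equation (7) is a one-liner in code. The sketch below is a direct transcription; the function name is an illustrative choice, and the example reuses the slip-resistance values of section 3.1 together with the constants G = 32.2 GPa and α = 0.54 quoted in section 4.2, all in SI units.

```python
def stored_energy(tau_c, tau_0, G, alpha):
    """Stored deformation energy per unit volume, eq. (7):
    E_d = (tau_c - tau_0)^2 / (alpha^2 * G)."""
    return (tau_c - tau_0) ** 2 / (alpha ** 2 * G)

# illustrative evaluation: tau_c = 16 MPa, tau_0 = 8 MPa,
# G = 32.2 GPa, alpha = 0.54 (values quoted in the text)
Ed = stored_energy(16e6, 8e6, 32.2e9, 0.54)  # J/m^3
```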

3.3. Recrystallization model

For an isothermal, heterogeneous system, grain evolution can be modeled by minimization of the free energy functional under the assumption of a double obstacle potential, which leads to the popular formulation of the multi-phase field method, expressed via equation (8) (Eiken et al., 2006):

\dot{\phi}_{i} = \sum_{j} m_{ij} \left[ \sigma_{ij} \left( \phi_{j} \nabla^{2} \phi_{i} - \phi_{i} \nabla^{2} \phi_{j} + \frac{\pi^{2}}{2 \eta^{2}} \left( \phi_{i} - \phi_{j} \right) \right) + \frac{\pi}{\eta} \sqrt{\phi_{i} \phi_{j}} \, \Delta G_{ij} \right]   (8)

Fig. 2. The index and orientation of a node (circle) are assigned to all grid points (squares) that lie closer to it than to any other node.
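The nearest-neighbour transfer of grain index and orientation shown in Fig. 2, and the circular averaging of orientations (Berens, 2009), might be sketched as follows; the function names and array layouts are assumptions, not part of the published implementation.

```python
import numpy as np

def map_nearest(node_xy, node_val, grid_xy):
    """Give each FDM grid point the value (e.g. grain index or
    orientation) of its nearest FEM node, as in Fig. 2."""
    d2 = ((grid_xy[:, None, :] - node_xy[None, :, :]) ** 2).sum(axis=-1)
    return node_val[np.argmin(d2, axis=1)]

def circular_mean(angles):
    """Mean of angles in radians via circular statistics: the
    arctangent of the averaged sine and cosine components."""
    return np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
```

For large meshes the brute-force distance matrix would be replaced by a k-d tree query (e.g. scipy.spatial.cKDTree), but the assignment rule is the same.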

In equation (8), η is the interface thickness, m_ij is the grain boundary mobility, σ_ij denotes the interfacial energy between adjacent grains, the gradient term in brackets is related to the grain boundary curvature, and ΔG_ij is the contribution of the stored energy (E_d). Equation (8) therefore models the combined effects of curvature and stored energy on the interface migration. In addition, the nucleation of new grains was assumed to take place at interfaces and triple junctions, with site saturation as the initial condition.

4. SIMULATION AND RESULTS

4.1. Deformation

Isothermal uniaxial compression at a constant strain rate (T = 1100 °C, strain rate 0.01 s⁻¹) under homogeneous boundary conditions was simulated with a periodic digital microstructure generated by a planar Voronoi tessellation of 25 randomly oriented grains. In order to avoid the occurrence of DRX, a maximum strain of 0.2 was imposed in the stress relaxation experiments, and the same pre-strain was used in the model. The evolution of slip resistance and misorientation is shown in Fig. 4.

4.2. Recrystallization

Fig. 3. The energy E_d of each grid point (GP) is determined by interpolating that of the nodes. The influence of each node is inversely proportional to its distance to the individual GP.

In the mapping, the interpolated values at the grid points are obtained as the weighted sum of the adjacent finite element mesh nodes. The weights are inversely proportional to the relative distance between grid point and finite element node, as illustrated in Fig. 3. Finally, the mean stored energy per grain is found by calculating the arithmetic mean over the grain area. Note that, at the moment, no dislocation density gradients inside the grains are taken into account.
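The inverse-distance weighting of Fig. 3 admits a compact sketch; the function name and the small regularization constant eps (to avoid division by zero when a grid point coincides with a node) are assumptions.

```python
import numpy as np

def idw_map(node_xy, node_energy, grid_xy, eps=1e-12):
    """Transfer nodal stored energies to FDM grid points with weights
    inversely proportional to the node-to-grid-point distance (Fig. 3)."""
    d = np.linalg.norm(grid_xy[:, None, :] - node_xy[None, :, :], axis=-1)
    w = 1.0 / (d + eps)                  # weight ~ 1 / distance
    w /= w.sum(axis=1, keepdims=True)    # normalize per grid point
    return w @ node_energy
```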

After the deformation simulation, the local stored energies were converted to mean energies per grain using equation (7) with G = 32.2 GPa and α = 0.54, and mapped onto the FDM grid. In the SRX simulation, the interface thickness η was taken to be 1.5 µm, the surface energy was set to 3×10⁻⁷ J cm⁻² and the mobility was assumed to be 5×10⁻³ cm⁴/(J s). Nucleation was restricted to the grain boundary interfaces and triple junctions, and only to sites where the stored energy exceeds 2.5×10⁻² J cm⁻³. With these values, the SRX kinetics and grain sizes of the phase field simulation are in good correlation with the experimental values, as seen in Fig. 5 and Table 2. However, the Avrami exponent calculated from the phase field model is larger than 1, in contrast to the exponent from the experiment. The evolution of the morphology of the deformed microstructure during the SRX was also predicted. It was found that, with the aforementioned parameter set, the recrystallized grain front is directional, which results in a cuboidal final microstructure, as shown in Fig. 6.
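To make the structure of equation (8) concrete, the sketch below advances a single 1-D interface between two grains (with φ_j = 1 − φ_i) by explicit Euler steps. All parameter values are illustrative, in grid units and arbitrary energy units, and are not the calibrated quantities quoted above.

```python
import numpy as np

# 1-D, two-grain sketch of eq. (8); illustrative parameters only
eta, m_ij, sig_ij, dG = 4.0, 1.0, 1.0, 0.2
dx, dt, nsteps = 1.0, 0.05, 200

x = np.arange(100.0)
phi = 0.5 * (1.0 + np.tanh((50.0 - x) / eta))  # grain i fills the left half
mean0 = phi.mean()

def lap(f):
    # periodic second difference
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

def step(phi):
    phj = 1.0 - phi
    curv = sig_ij * (phj * lap(phi) - phi * lap(phj)
                     + np.pi**2 / (2.0 * eta**2) * (phi - phj))
    drive = np.pi / eta * np.sqrt(np.clip(phi * phj, 0.0, None)) * dG
    # clipping emulates the bounds imposed by the double obstacle potential
    return np.clip(phi + dt * m_ij * (curv + drive), 0.0, 1.0)

for _ in range(nsteps):
    phi = step(phi)
```

With ΔG_ij > 0 the stored-energy term drives the interface into the unrecrystallized grain, so the fraction of grain i grows over time; the curvature part of eq. (8) vanishes for a planar equilibrium profile.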
Table 2. Comparison of mean grain sizes calculated by LOM and CPFEM-PF after recrystallization.

  Mean grain diameter after recrystallization / µm:
    LOM        7
    CPFEM-PF   8.2

Fig. 4. (a) Slip resistance τ_c and (b) misorientation at the end of deformation.


Fig. 5. X_RX kinetics of the CPFEM-PF simulation, the JMAK model and the stress relaxation experiment.

Fig. 6. Progression of recrystallization through time: (a) t = 0.1 s, growth of nuclei on the interfaces; (b) t = 1 s, their competitive growth towards non-recrystallized grains; (c) t = 100 s, the fully recrystallized microstructure.


5. DISCUSSION

5.1. CP-FEM and coupling

The microstructural deformation of 25MoCrS4 is successfully modeled by CP-FEM using a well-established phenomenological hardening law. Material parameters are calibrated using the experimental results at the macro-scale. Even though the model is phenomenological, the inter- and intra-grain scatter of the grain orientation and hardening is captured and mapped onto the finite difference grid effectively. However, due to the usage of the mean orientation and stored energy per grain in the phase field simulation, the scatter is unintentionally averaged during mapping, which results in a loss of resolution. This problem can be addressed in the future by accounting for dislocation gradients and abnormal sub-grain growth in order to improve the simple nucleation model at the grain level.

5.2. Static recrystallization

A dedicated mapping scheme was used to couple the multi-phase field model with a CPFEM deformation model whose deformation conditions correspond to experimental results. The mapping algorithm averages out local gradients and should be improved in the future. The nucleation mechanism of the 2-D model leads to a rather unrealistic grain shape when the model is adjusted to the experimentally obtained SRX kinetics. In spite of the good correlation between model and experiment, a 3-D model with an improved nucleation model at the grain level is necessary to predict the SRX kinetics and the final grain shape more accurately.

Acknowledgements. The authors gratefully acknowledge the financial support of the Deutsche Forschungsgemeinschaft (DFG) for the depicted research within the Cluster of Excellence "Integrative Production Technology for High Wage Countries".
REFERENCES
Berens, P., 2009, A MATLAB toolbox for circular statistics, Journal of Statistical Software, 31, 1-21.
Eiken, J., Böttger, B., Steinbach, I., 2006, Multiphase-field approach for multicomponent alloys with extrapolation scheme for numerical application, Physical Review E, 73, 1-9.
Gawad, J., Madej, W., Kuziak, R., Pietrzyk, M., 2008, Multiscale model of dynamic recrystallization in hot rolling, International Journal of Material Forming, 1, 69-72.
Gronostajski, J., Pulit, E., Ziemba, H., 1983, Recovery and recrystallization of Cu after hot deformation, Metal Science Journal, 17, 348-352.
Henke, T., Bambach, M., Hirt, G., 2011, Experimental uncertainties affecting the accuracy of stress-strain equations by the example of a Hensel-Spittel approach, 14th International ESAFORM Conference on Material Forming: ESAFORM 2011, Belfast, 71-77.
Hutchinson, J.W., 1976, Bounds and self-consistent estimates for creep of polycrystalline materials, Proceedings of the Royal Society of London A, 348, 101-127.
Kalidindi, S., Bronkhorst, C., Anand, L., 1992, Crystallographic texture evolution in bulk deformation processing of FCC metals, Journal of the Mechanics and Physics of Solids, 40, 537-569.
Piekos, K., Tarasiuk, J., Wierzbanowski, K., Bacroix, B., 2008, Use of stored energy distribution in stochastic vertex model, Materials Science Forum, 571-572, 231-236.
Raabe, D., 1999, Introduction of a scalable three-dimensional cellular automaton with a probabilistic switching rule for the discrete mesoscale simulation of recrystallization phenomena, Philosophical Magazine A, 79, 2339-2358.
Roters, F., Eisenlohr, P., Hantcherli, L., Tjahjanto, D., Bieler, T., Raabe, D., 2010, Overview of constitutive laws, kinematics, homogenization and multiscale methods in crystal plasticity finite-element modeling: theory, experiments, applications, Acta Materialia, 58, 1152-1211.

6. CONCLUSION

In this work, a coupled CP-FEM-phase field model based on the mean stored deformation energy and the grain boundary curvature is applied to predict SRX kinetics. The mean energy per grain serves as the driving force in the recrystallization simulation with the multi-phase field approach. In contrast to conventional JMAK-based statistical models, the applied model comprises only physical quantities, e.g. surface energy and interfacial mobility, which allow for a physical interpretation. Moreover, being a spatially resolved method, the phase field method takes the heterogeneity of the stored energy and boundary curvature into account.


The SRX kinetics and grain size calculated with the phase field method show a good correlation with those of the stress relaxation experiments. The morphological evolution of the microstructure is found to be directional, resulting in an unrealistic cuboidal final grain geometry. Due to the low experimental value of the Avrami exponent (n < 1), it was necessary in this model to fill all possible nucleation sites. The usage of a 2D model and the assumption of site saturation restrict the number of nucleation sites (i.e. the number of sites per interface) that is available. An extension to 3D would increase the number of possible nucleation sites per interface and make the predicted grain morphology more realistic.

A dedicated mapping scheme was used to couple the multi-phase field model with a CP-FEM deformation model whose deformation conditions correspond to the experimental ones. The mapping algorithm averages out local gradients and should be improved in the future. The nucleation mechanism of the 2-D model leads to a rather unrealistic grain shape when the model is adjusted to the experimentally obtained SRX kinetics. In spite of the good correlation between model and experiment, a 3-D model with an improved nucleation model at the grain level is necessary to predict the SRX kinetics and the final grain shape more accurately.

Roters, F., Eisenlohr, P., Kords, C., Tjahjanto, D., Diehl, M., Raabe, D., 2012, DAMASK: the Düsseldorf Advanced MAterial Simulation Kit for studying crystal plasticity using an FE based or a spectral numerical solver, Procedia IUTAM, 3, 3-10.
Sellars, C.M., 1990, Modelling microstructural development during hot rolling, Materials Science and Technology, 6, 1072-1081.
Steinbach, I., 2009, Phase-field models in materials science, Modelling and Simulation in Materials Science and Engineering, 17, 1-31.
Takaki, T., Tomita, Y., 2010, Static recrystallization simulations starting from predicted deformation microstructure by coupling multi-phase-field method and finite element method based on crystal plasticity, International Journal of Mechanical Sciences, 52, 320-328.
Taylor, G.I., 1934, The mechanism of plastic deformation of crystals. Part I. Theoretical, Proceedings of the Royal Society A, 145, 362-387.
Taylor, G.I., 1938, Plastic strain in metals, Journal of the Institute of Metals, 62, 307-324.
Xiong, W., Wietbrock, B., Saeed-Akbari, A., Bambach, M., 2011, Modeling the flow behavior of a high-manganese steel Fe-Mn23-C0.6 in consideration of dynamic recrystallization, Steel Research International, 82, 127-136.
Received: September 21, 2012 Received in a revised form: October 29, 2012 Accepted: November 3, 2012
