
CHAPTER 1

INTRODUCTION

1.1 THE TECHNOLOGY OF COMMINUTION

The word comminution is derived from the Latin comminuere, meaning 'to make small'. Making small particles
out of large particles is a surprisingly pervasive human technology. The breaking of rock, if not quite the oldest
profession, certainly has a pedigree stretching far back into pre-history, whether in building shelters, temples or
military roads, or in creating tools or weapons.

A modern industrial civilisation cannot exist without exploiting a wide range of comminution technologies, from the
coarse crushing of mined ore and quarry rock, to very fine grinding for the production of paint, pharmaceuticals,
ceramics, and other advanced materials. Rock cutting and blasting can also, without too much semantic risk, be
considered the first stage of comminution in mining and quarrying operations. Indeed, there is increasing evidence that
integrating the comminution stages of mining and mineral processing in an holistic way, rather than seeing them as
decoupled or even competitive elements of the production process, can produce substantial economic benefits; this is an
exciting field of current research.
Lest there be any doubt as to the importance of comminution in modern society, a U.S. National Materials Advisory Board
report in 1981 on approaches to improving the energy consumption of comminution processes estimated that 1.5% of all
electrical energy generated in the U.S.A. was consumed in such processes (including the energy required to produce the
steel media used in comminution). The report estimated that realistic improvements in the energy efficiency of
comminution, including aspects of classification and process control, could result in energy savings in the U.S.A.
exceeding 20 billion kWh per annum, or about 15% of Australia's entire annual consumption of electrical energy (in
1993/1994).
Nearly all minesite mineral processing operations, including the beneficiation of metalliferous and industrial minerals,
iron ore, coal, precious metals and diamonds, and the preparation of quarry rock, are major users of comminution
machinery

(mineral sands beneficiation is a notable exception for which nature has done the job already).
In the minesite context, the term 'comminution' encompasses the following unit operations:

Crushers
- Jaw crushers
- Gyratory crushers
- Cone crushers
- Rolls crushers
- High pressure grinding rolls
- Impact crushers

Tumbling mills
- Autogenous (AG) mills
- Semi-autogenous (SAG) mills
- Rod mills
- Ball mills

Stirred mills
- Tower mills
- Vertical pin mills
- Horizontal pin mills

Sizing processes
- Screens
- Sieve bends
- Hydrocyclones
- Other classifiers

(Although sizing processes are not in themselves size reduction devices, they are an integral part of any comminution
circuit, and contribute directly to circuit performance and energy utilisation efficiency.)
Comminution forms a correspondingly large proportion of any mineral processing plant's capital and operating cost.
Cohen (1983) estimated that 30-50% of total plant power draw, and up to 70% for hard ores, is consumed by
comminution. The proportion of total plant operating cost attributable to comminution (power plus steel plus labour) is
variable, depending as it does on the nature of the plant and the ore being treated. However, for a 'typical' metalliferous
concentrator quoted by Wills (1992) it was exactly 50%, and a similar figure can be inferred from the operating data for a
range of metalliferous concentrators given by Weiss (1985). For those operations in which comminution is the
predominant unit operation, such as quarries, or iron ore crushing and screening plants, the figure will clearly be much
higher. Capital cost figures also vary, but lie in the range 20-50% for most mixed-process plants.
The corollary of these statistics is that there is much to be gained from improving the practice of comminution.
Improvements can be of two kinds:
Fundamental changes in the technology, or the introduction of novel technology.


Incremental improvements in the technology, its application and operating practice.

The latter essentially implies optimising the performance of comminution machines, that is, ensuring that the installed
capital asset is exploited as efficiently as possible in an economic sense. The benefits of optimisation may be captured
as:

reduced unit operating costs ($/t treated),

increased throughput, and thus value production,

improved downstream process performance as a result of an improved feed size specification,

or some combination of these.


This monograph provides the information to assist the process engineer to realise the benefits of optimisation, through a
methodical and technically sound approach to understanding and analysing his or her comminution circuit. Optimisation
here implies the engineering process of adjusting machine and circuit variables to attain some improved operating
condition. The book does not discuss mathematical process optimisation procedures such as evolutionary operation
(EVOP) and simplex search techniques; these are covered elsewhere (e.g. Bacon 1967, Mular 1972).
1.2 SIMULATION AS AN OPTIMISATION TOOL

Inevitably, in view of its pedigree, there is an emphasis in the book on computer simulation as the principal optimising
tool. Simulation here implies the prediction of the steady state performance of a circuit, in terms of stream properties
such as mass flow, solids concentration and size distribution, as a function of material properties, machine specifications
and operating conditions. Dynamic simulation explores time dependencies for use in plant design and process control
system design, and is not considered here.
The great power of simulation as an optimisation, and indeed design, tool is its ability to explore many different scenarios
quickly and efficiently - the "what if?" questions. This enables the engineer to prescribe with confidence the condition for
optimum performance, in terms of maximising throughput or minimising product size for example, without the need for
expensive, difficult and often inconclusive plant-scale testwork. At the very least, simulation permits confirmatory plant
work to be efficiently designed, leading to reduced costs (e.g. minimising lost production) and better confidence in the
final result.
Circuit optimisation by simulation is not a trivial business. It requires engineering skill which, like any other skill, needs to
be learned through study and experience. Computer simulation is simply the vehicle for the exercise of engineering
judgement.
As is discussed in detail elsewhere in the book, the process model structure which provides the platform for the
simulation methodology seeks to decouple and separately estimate material (ore) properties and machine
characteristics. Each is described by parameters which must be estimated from real life. The practice of simulator-based optimisation therefore comprises the following steps:
1. Characterising the feed material in laboratory tests.
1. Characterising the feed material in laboratory tests.


2. Estimating machine parameters from plant surveys ('calibrating' the models).


3. Running simulations to explore ways of meeting the optimisation criteria through changes in flowsheet, machine
or operating conditions.
4. Testing and/or implementing the chosen conditions.
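The four steps can be caricatured in code. The sketch below is purely illustrative: every function, field and value is a hypothetical placeholder, not the API of JKSimMet or any other simulator.

```python
from dataclasses import dataclass

# Illustrative placeholders only - not the interface of any real simulator.
@dataclass
class Result:
    scenario: str
    throughput: float  # t/h

def characterise_ore(feed_samples):
    # Step 1: laboratory tests yield ore breakage parameters.
    return {"hardness": sum(feed_samples) / len(feed_samples)}

def fit_models(survey, ore):
    # Step 2: calibrate machine parameters against plant survey data.
    return {"rate": survey["base_rate"] / ore["hardness"]}

def simulate(ore, machine, scenario):
    # Step 3: steady state simulation of one operating scenario.
    name, speed = scenario
    return Result(name, machine["rate"] * speed)

scenarios = [("base", 1.0), ("faster mill", 1.1)]
ore = characterise_ore([1.8, 2.2])
machine = fit_models({"base_rate": 500.0}, ore)
best = max((simulate(ore, machine, s) for s in scenarios),
           key=lambda r: r.throughput)
# Step 4: 'best' identifies the condition to test and/or implement.
```

The point of the sketch is only the shape of the workflow: ore and machine parameters are estimated separately, and the simulator is then run across scenarios before anything is changed on the plant.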
Figures 1.1 and 1.2 illustrate the process in more detail. Figure 1.1 shows the procedure for estimating model parameters
from plant surveys, and Figure 1.2 the simulation procedure which uses these parameters for plant optimisation. Note
carefully the decision loops involved in judging the quality of data collected from the plant and in estimating model
parameters, before committing to the simulations.
This monograph provides most of the information required to plan and implement the approach shown in Figure 1.2. The
parameter estimation step (Figure 1.1) using a particular simulator will usually be covered in detail in the simulator
manual. However the treatment of each process model in the following chapters includes discussion of the interpretation
of the model parameters, with some comments on parameter estimation in Chapter 5. In practice, parameters are
sometimes chosen from a 'library' or from previous experience, rather than fitted to a particular dataset.


1.3 THIS BOOK AND HOW TO USE IT

The book can be read on three levels:

As a general introduction to the ideas and methodology underlying the practice of comminution circuit optimisation, including practical ideas which do not require the use of a computer simulation package.

As a more detailed review of specific unit processes, their principles, models, operational features and optimisation options.

As a companion to the more advanced optimisation methods using computer simulation packages.
The book is the outcome of over 30 years of research and consulting in
comminution processes by the staff and research students at the JKMRC. It
therefore unashamedly reflects the methodologies (and no doubt prejudices)
developed in the research, and honed by application in a very large number
of case studies in Australia and in many other parts of the world. An
important element of JKMRC work in that period has been the encapsulation
of much of this knowledge in what was the first commercial user-friendly, PC-based, dedicated mineral processing computer simulator, JKSimMet
(Wiseman et al 1991). JKSimMet is supported and marketed internationally
by the JKMRC Commercial Division, JKTech, and in 1996 celebrated 10 years
as a commercial product. Its main features are outlined in Appendix 4.
In some senses this monograph updates the book by the founding Director of
the JKMRC, Alban Lynch (1977). When Lynch and his co-workers published
their work, personal computers were still only a dream, and optimising
comminution circuits by modelling and simulating them on a computer was
an exotic activity for the privileged few, though Lynch's book did much to
introduce the methodology to the practising engineer. Twenty years later,
PCs have invaded our offices and homes, and circuit optimisation by process
simulation is a mature (though still developing) technology, widely if not
universally practiced. The present monograph reflects the advances of that
generation, in terms of models, process understanding, and the entire
methodology of circuit optimisation. It also reflects changes in circuit design
and operating practice, particularly the widespread use today of autogenous
and semiautogenous grinding, and the introduction of new technologies such
as tower mills and new forms of impact crusher.
Unlike most conventional textbooks, this book encourages the reader to
browse. There is some logic to the ordering of the chapters - the basics,
followed by a detailed description of the unit processes, and ending with
some ideas and examples of the


methodology. However the approach to the book will depend on the reader's
experience and objectives.

Someone new to the ideas of mathematical modelling as a prerequisite to optimisation should read Chapter 2 before going further.

Those who simply need a deeper understanding of a particular process should read the appropriate chapter, Nos 6-10 or 12, which have been designed to stand alone as far as possible.

Anyone seeking advice on surveying a comminution circuit is directed to Chapter 5, and if only a reminder of Bond's test or the JKMRC single particle breakage test procedure is required, the details are given in Chapter 4.

There is even some justification for starting with a review of the optimising
methodology in Chapter 13 (the last), since this gives examples of how to go
about the task, and so provides a basis on which to think about the particular
problem in hand; it can then be re-visited once the reader has familiarised
himself or herself with the principles presented in the earlier chapters.
There is no assumption in the book that the reader has a computer
simulation package available. However the fact is that simulation is
increasingly seen as a standard approach, and the book has been written
with that in mind. It will not be long before the casual browser is confronted
with its value as a routine tool, and the inevitability of its wider use.
A brief discussion of the contents of each chapter and appendix follows.
CHAPTER 2 gives a general background to the way comminution processes
are modelled mathematically. This chapter can be omitted in a first reading,
or if simulation is not to be used in the optimisation process.
CHAPTER 3 discusses some aspects of the measurement and description of
mineral liberation, and its practical use in the prediction of grinding
performance. Again, it can be omitted in a first reading, or if liberation is not
an issue for the circuit under consideration.
CHAPTER 4 is a detailed treatment of methods of assessing the breakage
characteristics of rocks in the context of comminution, including Bond's
methods and the JKMRC single particle breakage testing procedures.
Characterising the crushability or grindability of the feed material is an
essential element of any optimisation exercise.
CHAPTER 5 is a practical guide to surveying comminution circuits, including
some discussion of sources of error, sample size, the data that need to be
collected, and


appropriate sampling procedures for particular streams. It includes a limited
discussion of issues relating to data analysis, including mass balancing and
parameter estimation (these aspects are handled more fully elsewhere,
including in the manuals for the appropriate software).
CHAPTERS 6 - 10 deal in depth with the specific unit operations of crushing,
autogenous and semi-autogenous grinding, rod milling, ball milling and stirred
milling respectively. Each chapter includes a process description, discussion of
how the process is modelled, operational features, and comments on
optimisation options.
CHAPTER 11 describes new and powerful methods developed at the JKMRC for
predicting the power draw of crushers and tumbling mills. The management of
power draw is an essential element of any optimisation exercise, since as
noted earlier it comprises a significant component of plant operating cost.
CHAPTER 12 is a detailed description of sizing devices such as screens,
hydrocyclones and cone classifiers. Again, it includes a process description,
models, operational features and optimisation options.
CHAPTER 13 describes some strategies for approaching the optimisation
problem, with some practical examples to illustrate some of the issues and
methods involved.
APPENDIX 1 gives a short account of spline functions and how they are used to
represent data in JKSimMet.
APPENDIX 2 is a discussion of recent JKMRC research which has identified
some useful correlations between grinding performance and slurry rheology.
These trends should be borne in mind when interpreting grinding performance
data and considering optimisation strategies. However they are not yet
sufficiently quantified to enjoy a formal place in the process models of
grinding.
APPENDIX 3 reviews the more important laboratory techniques for particle size
analysis, with comments on the features of each, and the problems that can
be encountered in obtaining reliable results. It is important to emphasise that
careful size analysis forms the basis of all comminution circuit surveys and
optimisation studies.
APPENDIX 4 is a short specification for the steady state comminution circuit
simulator JKSimMet, and its associated mass balancing routine, JKMBal.
The REFERENCES have been chosen principally to support the statements in
the text. Taken together they form a useful bibliography in mineral processing
comminution and its modelling and optimisation. However the literature on
these topics is very large, and this selection is not intended to be
comprehensive.

CHAPTER 2

MODELS OF COMMINUTION PROCESSES

2.1 INTRODUCTION

The mechanisms which cause most rock breakage are those of nature
- the action of water and wind. However, for the most part these
processes are too slow to be of interest to mineral processors.
Speeding up the breakage process requires an intense application of
energy. Typically, several kilowatt hours of energy are applied to each tonne of
material in mineral comminution processes. This is a great deal of energy.
Dropping an ore particle 10 m generates only 1/37 of a kilowatt hour per
tonne. A single kilowatt hour per tonne of potential energy requires a lift to a
height of 367 m.
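These figures follow directly from the definition of gravitational potential energy (E = mgh); a quick check, taking g = 9.81 m/s² and 1 kWh = 3.6 MJ:

```python
g = 9.81        # m/s^2
kwh = 3.6e6     # joules per kilowatt hour
tonne = 1000.0  # kg

# Potential energy per tonne released by a 10 m drop, in kWh/t:
e_drop = tonne * g * 10.0 / kwh   # approximately 1/37 kWh/t

# Lift height corresponding to 1 kWh/t of potential energy:
h = kwh / (tonne * g)             # approximately 367 m
```

Both values agree with the text, underlining how large a few kilowatt hours per tonne is compared with everyday mechanical energies.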
The main reason for this large energy requirement is that the particle must be
heavily stressed before any substantial breakage occurs. This stress is mostly
stored as elastic energy and is lost when the particle fractures.
Industrial crushers are about 75% as efficient in energy utilisation as breaking
rocks one at a time in a laboratory single particle breakage device, such as a
pendulum or drop-weight tester (Morrell et al 1992). But even such ideal
devices do not use energy 'efficiently' in a fundamental sense. Calculations
based on theoretical considerations suggest that most industrial breakage
processes, especially grinding, are at best only a few percent efficient in terms
of the theoretical energy needed to create new surface (Austin et al 1984).
However, nobody has yet developed an industrial scale method for breaking
rocks at high throughput which does not require hitting or squeezing them.
Perhaps the most practical approach to date for fine size reduction is the high
pressure rolls crusher (Section 6.9.4) which retains elastic energy to some
extent, although localised breakage tends to relieve stress in adjacent
particles.
The efficiency of comminution is important because the cost of breakage will
be one of the factors determining whether low grade mineralisation
constitutes an orebody. For example, almost none of the porphyry deposits
(which provide most of the world's


copper production) would be economic without the low cost comminution
technology which has evolved in this century.
Useful models of comminution processes must therefore find a way of
representing the application of energy by a breakage machine (such as a
crusher or ball mill) to an ore. 'Useful' in this context means a model which can
be used in simulation to solve practical problems of comminution circuit
optimisation - the subject of this book.
The model therefore has to describe two elements of the problem:

The breakage properties of the rock - essentially the breakage which occurs as a result of the application of a given amount of specific energy.

The features of the comminution machine - the amount and nature of energy applied, and the transport of the rock through the machine.

2.2 A BRIEF HISTORY OF COMMINUTION MODELS

The modelling of comminution has historically been dependent on the
computational power available to perform the necessary calculations. Before
computers, all models related energy input to the degree of size reduction
expressed as a percent passing size - typically 50, 80 or 90% - or to the
proportion of final product generated.
In mathematical terms, consider the incremental energy dE required to
produce an incremental change, dx, in size (say the P80). The following discussion
follows Lynch (1977). It was always clear from even simple experiments that
more energy was required to achieve a similar relative degree of size
reduction as the product became finer. Therefore energy and breakage were
related by

dE = -K dx / x^n                                            (2.1)
Researchers in the second half of the nineteenth century applied some ideas
from physics to estimate n:

constant energy per unit mass for similar relative reduction (Kick 1883):

E = K ln(x1 / x2)                                           (2.2)

constant energy per unit of surface area generated (Rittinger 1867):

E = K (1/x2 - 1/x1)                                         (2.3)
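Integrating equation 2.1 with n = 1 gives Kick's law (2.2), and with n = 2 gives Rittinger's law (2.3). The contrast between the two can be illustrated numerically; a sketch only, with the constants K set to 1 purely for illustration:

```python
import math

def kick_energy(k, x1, x2):
    # Kick (1883): energy depends only on the reduction ratio x1/x2.
    return k * math.log(x1 / x2)

def rittinger_energy(k, x1, x2):
    # Rittinger (1867): energy proportional to new surface area created.
    return k * (1.0 / x2 - 1.0 / x1)

# Two successive 10:1 reductions, e.g. 100 -> 10 -> 1 (arbitrary size units):
e_kick_coarse = kick_energy(1.0, 100.0, 10.0)
e_kick_fine = kick_energy(1.0, 10.0, 1.0)       # identical to the coarse step

e_ritt_coarse = rittinger_energy(1.0, 100.0, 10.0)
e_ritt_fine = rittinger_energy(1.0, 10.0, 1.0)  # ten times the coarse step
```

Kick predicts the same energy for every 10:1 step, whereas Rittinger predicts the fine step costs ten times the coarse one - exactly the "more energy as the product becomes finer" observation that motivated equation 2.1.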


As particle size reduces, the larger flaws will tend to become external
particle surfaces along with many of the smaller ones. When an excess of
flaws is available, constant size reduction at constant energy input per unit
mass is reasonable.
The single particle breakage tests described in Chapter 4 confirm this effect
for most ore particles in the range 3-100mm, although some material
becomes distinctly 'softer' at larger particle sizes.
The overall effect of breakage is to reduce the notional internal area of flaws
relative to particle volume. Hence the energy required to achieve a certain
degree of size reduction will increase, as suggested by the Bond and
Rittinger equations which do NOT depend on geometric reduction but on
product fineness. Indeed the definition of the Bond Work Index is the energy
per unit mass to reduce a particle from 'infinite' size to 80% passing 100
microns. This is reasonable, as the last term in equation 2.4 simply
disappears as x1 becomes infinite.
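The vanishing feed-size term can be checked with the standard form of Bond's equation, W = Wi (10/√P80 - 10/√F80) with sizes in microns (quoted here from common usage, not reproduced from equation 2.4; the Work Index value is illustrative):

```python
import math

def bond_energy(wi, f80, p80):
    """Specific comminution energy (kWh/t) by Bond's law:
    W = Wi * (10/sqrt(P80) - 10/sqrt(F80)), sizes in microns."""
    return wi * (10.0 / math.sqrt(p80) - 10.0 / math.sqrt(f80))

wi = 14.0  # kWh/t, an illustrative Work Index

# Reducing from 'infinite' feed size to 80% passing 100 microns:
# the feed term 10/sqrt(F80) vanishes, leaving exactly Wi.
e_definition = bond_energy(wi, float("inf"), 100.0)

# A finite reduction for comparison, 10 mm feed to 100 micron product:
e_finite = bond_energy(wi, 1.0e4, 100.0)
```

With an infinite feed size the computed energy equals the Work Index itself, which is precisely the definition given in the text.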
One other piece of evidence strongly suggests that the flaws control
breakage. As the particles become finer still, there must be a size at which
they will contain no flaws at all and fracture under stress will be replaced by
plastic deformation. Some remarkable experimental work by Schonert (1979)
demonstrated this process for several materials at particles sizes less than
10 microns. As might be expected, this brittle/plastic transformation also
exhibits some loading rate dependence; thus, brittle fracture is more likely at
high loading rates (Inoue and Okya 1994).
The chief shortcoming of this 'ideal' fracture model is that we have no
satisfactory method for quantifying the flaw distribution at present, although
scanning electron microscope technology offers some possibilities.
Pure energy models provide a useful gross description of total breakage.
However, they do not consider particle transport, or the expenditure of
energy which does not result in breakage. Further, the underlying
assumption of all single point size measures is that the shape of the size
distribution remains relatively constant, regardless of breakage history. This
is usually true for rod and ball mills but is often in serious error for crushers,
autogenous mills and SAG mills. To try to overcome these deficiencies,
researchers have considered both breakage and transport at ever increasing
levels of complexity.
2.3 CLASSES OF COMMINUTION MODELS

It is fair to say that the development (and certainly the use) of comminution
models results from the evolution of the digital computer. The inversion of a
30 x 30 matrix -


even a symmetric matrix - requires a huge investment in time and
concentration without a digital computer.
The early modellers, such as Austin, Lynch and Whiten, were severely limited
by available computational power. This challenge did result in some elegant
and simple models which were also very useful. However, as computational
power per unit cost has doubled about every 18 months since perhaps 1960,
computational cost has largely ceased to be an issue, except for discrete
element models (DEM) and computational fluid dynamics approaches (CFD).
An unfortunate side effect is a tendency to increase model complexity. This
complexity reduces the utility of models as tools for understanding.
Comminution models can be divided into two main classes:

Those which consider a comminution device as a transform between a feed and product size distribution, and

Those which consider each element within the process.
The former are now in common use. The latter require huge computational
resources but will become practical as computer power per unit cost
continues to rise.
For want of better terminology these classes are referred to as Black Box and
Fundamental respectively. A black box model aims to predict the product size
distribution from an ore feed size distribution, breakage characterisation and
experience with similar devices, i.e. a data base, encapsulated in an
appropriate algorithm. It is phenomenological in the sense that it seeks to
represent the phenomenon of breakage, rather than the underlying physical
principles. The population balance model is the most widely used example of
this class.
A fundamental model considers directly the interactions of ore particles and
elements within the machine, largely on the basis of Newtonian mechanics;
they are also referred to as mechanistic. Adequate computer power for
fundamental modelling has only become affordable on the desk top since
about 1990, and such models are much less developed than the black box
variety.
2.4 FUNDAMENTAL MODELS

2.4.1 Model Principles


The objective of a fundamental model is to generate a relationship between
detailed physical conditions within a machine and its process outcome. In
practice this means considering a substantial number of elements within a
grinding mill, or flows within a classifier.


The constraint for this type of modelling is computational power. The main
centres for this endeavour initially were the Comminution Centre at the
University of Utah under the direction of J. A. Herbst and later R.P. King, and
the work of P. Radziszewski at the University of Quebec. Inoue and Okya
(1994, 1995) have subsequently contributed in this area. To make the
computation more manageable, these researchers considered selected zones
of each problem.
Mishra and Rajamani at Utah (1992, 1994a, 1994b) considered a ball mill as
a two-dimensional slice of circles. However, the 'circles' were provided with
the mass of equivalent spheres. Radziszewski et al (1989) reduced
computational demand by dividing the mill into zones of impact,
abrasion/attrition and little action, and then characterising each.
For either approach, the simple application of Newton's Laws of Motion very
quickly becomes quite complex. While steel balls (or rods) are approximately
perfectly elastic, the ore particles in between them are definitely not - if they
were, the mill would not produce any product. Mishra and Rajamani (1994a)
approximate ball behaviour using a spring and dashpot model as shown in
Figure 2.2. There is a considerable range of opinion amongst DEM
researchers about appropriate methods for modelling elastic/damped
interactions. Inoue and Okya (1994), for example, use a non-linear spring
with friction and hysteresis effects.
This model considers the motion of each ball in each dimension i (i.e. x, y in
2D or x, y, z in 3D) as a set of vectors.


[M]x''i + [C]x'i + [K]xi = {f}                              (2.5)

where x'' is the acceleration and x' is the velocity which result from the
application of a force f. The first term is Newton's Second Law, i.e. the
acceleration of the particle of mass M depends on the applied force. However this
acceleration is reduced by the absorption of energy through the damping [C]
and the stiffness [K] of the force loading system. The damping term might
correspond to energy absorbed in breakage or through fluid motion.
Considered along the line of motion, equation 2.5 does not require any
vector terms. In this case the ball motion can be analytically integrated with
respect to time. However, the more general case requires a numerical
solution for acceleration, velocity and position:
x'' = [x'(t + dt) - x'(t)] / dt                             (2.6)

which will work in one, two or three dimensions, or in practice use a small step
dt and approximate

x' ≈ [x(t + dt) - x(t - dt)] / 2dt                          (2.7)

and

x(t + dt) ≈ x(t) + x' dt                                    (2.8)

These numerical integrations become unstable as the time step increases.
Mishra and Rajamani (1994a) suggest dt < 2√(m/k) as a criterion for
convergence, where m is the smallest mass to be considered. In practice this
requires a very small time step. The interactions must be considered between each pair of balls which
contact, and ball/liner contacts. Therefore it becomes easy to understand why
a very powerful computer is necessary. Even a 2D slice of an industrial scale
mill will contain hundreds of balls. A 3D view contains many thousands.
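The time-stepping idea can be illustrated for a single one-dimensional spring-and-dashpot contact. This is a sketch only: the parameter values are arbitrary, and the update shown is a simple semi-implicit Euler step (one of several integration choices made by DEM codes), kept well inside the dt < 2√(m/k) stability criterion.

```python
import math

# One ball in a spring-and-dashpot contact, one dimension:
#   m*x'' = -(c*x' + k*x)  while in contact.
# All values are illustrative, not taken from any published mill model.
m = 0.1      # ball mass, kg
k = 1.0e6    # contact stiffness, N/m
c = 5.0      # damping coefficient, N.s/m

dt = 0.2 * 2.0 * math.sqrt(m / k)   # well inside dt < 2*sqrt(m/k)

x, v = 0.0, 1.0          # initial overlap (m) and approach velocity (m/s)
e0 = 0.5 * m * v * v     # initial kinetic energy, J
for _ in range(1000):
    a = -(c * v + k * x) / m   # acceleration from the force balance
    v += a * dt                # update velocity first ...
    x += v * dt                # ... then position (semi-implicit Euler)

energy = 0.5 * m * v * v + 0.5 * k * x * x
# energy < e0: the dashpot has absorbed energy - the analogue of breakage.
```

A full DEM simulation repeats this elementary calculation for every contacting pair of balls (and every ball/liner contact) at every time step, which is why the computational load grows so quickly with the number of balls.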
2.4.2 Testing of Fundamental Models
The basic equations are easy to program. A data structure which can manage
several thousand objects in time and space is no great challenge, even for a
personal computer. While the parameters for damping and stiffness can be
estimated from physical properties or testwork, they cannot yet be predicted
with any accuracy. Hence it is absolutely essential to compare model behaviour
with experimental reality as carefully as possible.

