
Digital Image Analysis of Rock
Fragmentation from Blasting

Ayman Bedair
B.Eng. (KSU), M.Eng. (McGill)

Department of Mining and Metallurgical Engineering
McGill University
Montréal
September, 1996

A thesis submitted to the Faculty of Graduate Studies and Research
in partial fulfillment of the requirements for the degree of
Doctor of Philosophy

© Ayman Bedair, 1996

National Library of Canada
Bibliothèque nationale du Canada

The author has granted an irrevocable non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of his/her thesis by any means and in any form or format, making this thesis available to interested persons.

The author retains ownership of the copyright in his/her thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without his/her permission.

ISBN 0-612-19709-3


To my parents

Abstract

A novel digital image analysis technique to measure the size of fragments on the surface of a muck-pile is presented in this thesis. The technique takes into consideration the physical characteristics of fragment representation and measurement problems. Using an adaptive smoothing filter prior to edge detection, each fragment on the surface is represented by a group of edge segments outlining its boundaries. These segments are then grouped to form continuous contours.

A multi-layer analysis of the digital image is then formulated, where fragments on the surface are grouped into three layers, each of which is categorized by global characteristics and is related to neighbouring layers by local characteristics. These local relationships between the layers are used to approximate the missing parts of the fragment contours.

An extensive analysis of the sieving process is used in building the relationship between the shape and the size of individual fragments. Using this relation, a new multivariable measure for each fragment is developed. These measures are used in estimating the size distribution of the muck-pile and are compared with existing measurement techniques. This comparison demonstrates the robustness of the technique developed in this thesis.

Résumé

This thesis proposes a new digital image analysis technique for measuring the size of fragments on the surface of a muck-pile. The technique takes into account the physical characteristics related to the representation and measurement of the fragments. After filtering the image of a pile, each fragment on the surface is represented by a series of segments delimiting its contours. These are then grouped to form continuous contours.

The image of the pile surface is then divided into three layers, the first consisting of all fully visible fragments. Each layer possesses global characteristics and is related to the others by local characteristics. The latter serve to establish the hidden contours of fragments located in the two lower layers.

A detailed analysis of the sieving process serves to establish a relation between the shape and the size of a fragment, making it possible to develop a multivariable measure to characterize it. This measure is finally used to establish the size distribution of the fragments in the pile. The results of this new technique compare favourably with those obtained by more expensive methods in common use, demonstrating its effectiveness and justifying its future application.

Acknowledgements

I would like to thank my supervisors: Laeeque Daneshmend, for his guidance and enthusiasm throughout this work and his consistent support over the past several years, and Carl Hendricks, for his valuable advice and suggestions on the practical aspects of fragment measurements. I would also like to thank Gregory Dudek for his comments and suggestions on image analysis, and Malcolm Scoble for his helpful input on mining issues.

I would like to thank both Roussos Dimitrakopoulos and Gregory Carayannis for their guidance and support during the first two years of this research. I would like to thank Raymond Langlois of the Mineral Processing laboratory and Mohammed Hijazi of the Department of Chemical Engineering for their help in setting up the laboratory experiments. I would also like to thank the staff of the Mineral Processing lab at Queen's University for lending several sieves which enabled me to widen the range of data used.

Thanks are also due to Mohammed Amjad for reading and discussing several parts of my thesis, Behram Kapadia and Osama Abu-Shihab for taking the time to proofread this thesis, and Ameen Maluf and Johann Legault for the French translation of the abstract. Furthermore, I wish to thank the McGill Centre for Intelligent Machines (CIM) for use of its computer facilities, and the Canadian Centre for Automation and Robotics in Mining (CCARM) at McGill University for providing me with the necessary equipment and the laboratory facilities.

Finally, I would like to express my warmest thanks to my parents for their care and support, both morally and financially, during the long period of my study. I am grateful to my wife Magda for her constant support, encouragement and patience.

This work has been partially funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) under a Strategic Grant, and by the Institute for Robotics and Intelligent Systems (IRIS) under project ISDE-1.

Contents

Chapter 1 Introduction
1.1 Motivation
1.2 Problem Statement
1.3 Objectives
1.4 Scope of Thesis
1.5 Thesis Organization

Chapter 2 Machine Vision and Mining Automation
2.1 Digital Image Concepts
2.2 Potential Computer Vision Applications in Mining
2.2.1 Texture and Rock Sorting
2.2.2 Automatically Guided Vehicles
2.3 Fragmentation Measurements and Modelling
2.3.1 Fragment Measurement on a Static System
2.3.2 Fragment Measurement on a Dynamic System
2.3.3 Rock Modelling
2.4 Conclusion

Chapter 3 Preprocessing
3.1 Description of Muck-Pile Digital Images
3.2 Image Smoothing
3.3 Edge Detection
3.4 Feature Extraction of a Muck-Pile
3.5 Conclusion

Chapter 4 Fragment Contours
4.1 Modelling of Fragment Contours
4.1.1 Contour Representation
4.1.2 Local Parameters
4.1.3 Curvature Estimation: The Selected Method
4.1.4 Global Parameters
4.2 Edge Map Enhancement
4.3 Corners and Junctions
4.3.1 The Adopted Method
4.4 Contour Completion
4.4.1 Interpolation
4.4.2 Shape Completion
4.4.3 The Adopted Method
4.5 Conclusion

Chapter 5 Sieve Analysis and Fragment Size
5.1 3-D Size Classification
5.2 Weighting Function Formulation
5.3 Weighting Function of Planar Objects
5.4 Conclusion

Chapter 6 Size Distribution
6.1 Sampling of a Muck-Pile
6.2 Virtual Sieving
6.3 Volume Based Size Distribution
6.3.1 Spherical Model
6.3.2 Ellipsoidal Model
6.3.3 Applicability of Sectioning Methods to Projected Images
6.3.4 From Virtual Sieving to Size Distribution
6.4 Conclusion

Chapter 7 Implementation and Experimentation
7.1 Muck-Pile Description
7.2 Fragment Measurement
7.2.1 Preprocessing
7.2.2 Image Analysis
7.2.3 Classification
7.3 Conclusion

Chapter 8 Comparative Evaluation of Virtual Sieving
8.1 Comparisons with Stereological Methods
8.2 Comparison with the Physical Sieving Method
8.3 Two Prior Methods for Size Distribution Estimation of Muck-Piles
8.3.1 Maerz's Method
8.3.2 Kemeny's Method
8.4 Comparison with Prior Methods
8.4.1 Using Artificial Data
8.4.2 Using Laboratory Images
8.5 Case Study
8.5.1 Intermediate Results
8.5.2 Overall Performance
8.6 Conclusions

Chapter 9 Conclusions
9.1 Original Contributions
9.2 Limitations
9.3 Recommendations for Future Work

Appendices
Appendix A Convex Hull Algorithm
Appendix B Curvature Estimation

List of Figures

1.1 Open-Pit mining process
1.2 A block diagram of the mining process
2.1 Static System
2.2 Schematic diagram of methods of fragment measurement for static systems
2.3 Dynamic System
3.1 Image of 3-D scene
3.2 Digital image of a muck pile
3.3 Speckle representation in the digital image
3.4 Geometric Filter
3.5 Four configurations of the 8-hull algorithm used to process the umbra
3.6 Four configurations of the 8-hull algorithm used to process the complement
3.7 1-D Geometric Filter Results
3.8 Geometric Filter Results
3.9 Geometric Filter Results
3.10 Geometric Filter Results
4.1 Curvature
4.2 Perpendicular distance between a point and a line
4.3 Directional growing of contour segment
4.4 Application of the contour completion algorithm on a partially occluded ellipse
4.5 Several degrees of overlap
4.6 Contour completion algorithm of an overlapped rock
4.7 Failure of the contour completion algorithm
5.1 Size description of the sphere and the pyramid
5.2 Pyramid vector analysis
5.3 Domain of the Weighting Function
5.4 Weighting Function versus grid size for spheres and other objects
5.5 Projected area of a cube
5.6 Weighting Function and diameter of equivalent sphere versus grid size
5.7 Error in changing dimensions
6.1 Proposed camera position
6.2 Cruz-Orive definition of the principal axes of prolate spheroid
7.1 A black box model of the blasting process
7.2 Lab environment camera setup
7.3 Image of lab rock pile
7.4 Overlapping rocks
7.5 Bisection of overlapping fragments
7.6 A black box model of the fragment classification process
7.7 Resulting edge map after edge linking
7.8 Junction analysis
7.9 Edge map of Figure 7.3
7.10 Layer classifications
7.11 Results of applying the contour completion algorithms to the second layer
7.12 Major and minor axes of a fragment
8.1 Simulated size frequency of the spherical model
8.2 Simulated size frequency of the ellipsoidal model λ = 3
8.3 Simulated size frequency of the ellipsoidal model λ = 5
8.4 Size frequency of spread rocks using Cruz-Orive's method
8.5 Laboratory test results
8.6 Weighting function of crushed rocks using the linear model
8.7 Size frequency and distribution of spread rocks using the Virtual Sieving method
8.8 Cumulative size distribution of spread rocks
8.9 Frequency of arbitrarily generated data using Maerz's and Kemeny's methods
8.10 Size frequency of spread rocks (no overlap)
8.11 Cumulative size distribution of spread rocks (no overlap)
8.12 Size frequency of overlapping rocks without contour completion
8.13 Cumulative size distribution of overlapping rocks without contour completion
8.14 Size frequency of overlapping rocks with contour completion
8.15 Cumulative size distribution of the estimated hidden parts of overlapping rocks
8.16 Muck-pile of open-pit mine
8.17 Manual tracing
8.18 Smoothing of the muck-pile image
8.19 Edge detection of the muck-pile image
8.20 Thinning and noise removal of the edge map of the muck-pile image
8.21 The result of applying the edge linking algorithm
8.22 First layer of the muck-pile
8.23 Second layer Type A of the muck-pile without contour completion
8.24 Second layer Type A of the muck-pile with contour completion
8.25 Second layer Type B of the muck-pile without contour completion
8.26 Second layer Type B of the muck-pile with contour completion
8.27 Scanned image size frequency and distribution


Chapter 1
Introduction
The open-pit mining process is generally made up of a sequence of unit operations including drilling, blasting, loading, hauling and crushing. Drilling and blasting, being the first unit operations, can have a major impact on the performance and cost of subsequent operations. The prime objective of these two operations is to obtain optimum fragmentation within safe and economical limits.

The output from the blasting process is dependent on many parameters such as rock composition, layer thickness, type of explosives, etc. As a result, a quick and accurate evaluation process is required to assess its effectiveness. In addition, this evaluation process can be used to monitor blasting, optimize the blast design and assess loading conditions for scoops and shovels. One of the key indicators of the effectiveness of a blast is the size of the resulting fragments.

To date, the most accurate method of measuring fragment size is sieving analysis. The drawback of this method is that it is a time consuming and labour intensive process. This has led many researchers to use blasting parameters and rock mass properties to predict the fragment size distribution. Among these approaches are jointing measurements, empirical formulae, etc. The disadvantage of the prediction methods is the lack of actual measurement of the fragments, which may result in inaccurate assessment. Clearly, there is a need for a more reliable and effective way of obtaining fragment size distribution than by sieving analysis, while providing more accurate results than the predictive methods.

1.1 Motivation

Most open-pit mining operations employ blasting for primary breakage of the ground. Inappropriate blasting techniques can result in excessive damage to the wall rock, decreasing stability and increasing water influx. In addition, they can result in over- and/or under-breakage of rocks. The presence of over-broken rocks can result in decreased wall stability and require additional excavation. In contrast, the presence of under-broken rocks may require secondary blasting and additional crushing.

Since blasting is a major cost factor, both cases (under- and over-breakage) create additional costs, reflected in increased operation and maintenance of the machinery. To establish optimum cost values, it is important that the combined performance of the controllable blast parameters be acknowledged, complying with the ultimate goals of the overall mining operation [116] [70]. This is usually accomplished with the definition of a set of conditions minimizing total production cost per ton of rock blasted.

The blasting process has been described in the literature as a nonlinear process in which several parameters, often difficult to evaluate, dictate the outcome. A set of twenty different parameters was listed by Atchison [7] as influencing rock fragmentation in blasting. Generally speaking, these parameters can be grouped into two categories: controllable parameters (explosives parameters) and uncontrollable parameters (rock parameters).
The controllable parameters, such as the size of the blast, the position and alignment of the holes, the charge distribution, and the delay pattern, have a great influence on fragmentation size and shape. Consequently, the key to blasting control would be a method to quantify fragmentation size quickly, safely, and accurately.

Figure 1.1: Open-Pit mining process

Figure 1.1 illustrates a typical open-pit mining process. Since the mining process is sequential in nature, it can be modelled as a set of cascaded black boxes, each representing a unit process (a common representation method used in process control [6]). This representation is shown in Figure 1.2. As shown in the figure, the blasting process has two sets of inputs, one of which is controllable and the other uncontrollable. One of the outcomes of this process is the fragments forming the muck-pile, which is an input to the next process. The next process is loading, followed by hauling. By monitoring the output of each process, e.g. the profile of the pile, the size of the fragments, etc., and using it to vary the controllable parameters, one can control (minimize) the cost of each process and consequently the cost of the overall mining process.
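The cascade-with-feedback view described above can be sketched in code. This is an illustrative toy model only, not the thesis's formulation: the `UnitProcess` class, its single `setting` parameter, and the breakage rule `feed / (1 + setting)` are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class UnitProcess:
    """One black box in the cascade: its output depends on the feed it
    receives and on a controllable setting (uncontrollable factors are
    held fixed in this toy model)."""
    name: str
    setting: float  # controllable parameter, e.g. specific charge

    def run(self, feed_size: float) -> float:
        # Invented breakage rule: a larger setting yields a smaller output size.
        return feed_size / (1.0 + self.setting)

def run_cascade(processes, feed_size):
    """Feed the output of each unit process into the next one."""
    sizes = [feed_size]
    for p in processes:
        sizes.append(p.run(sizes[-1]))
    return sizes

def update_settings(processes, measured, target):
    """Crude 'updating' feedback: if the final product is too coarse,
    raise the controllable setting of the first process (blasting)."""
    if measured > target:
        processes[0].setting *= 1.2
```

For example, `run_cascade([UnitProcess("blasting", 1.0), UnitProcess("crushing", 2.0)], 1200.0)` returns `[1200.0, 600.0, 200.0]`; if the target size were 100, the feedback step would then raise the blasting setting.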

[Figure 1.2 diagram: Controllable and Uncontrollable Parameters feed Blasting, whose output passes to Loading, then Hauling, then Mineral Processing, with an Updating feedback path back to the controllable parameters.]

Figure 1.2: A block diagram of the mining process

1.2 Problem Statement

Assessment of blasting performance has been pursued in many ways in recent years with the aim of providing a tool for blast optimization [116] [126] [70]. The optimum blast is characterized by the size distribution of the fragments (Nielsen [116]). The problem of fragment size distribution measurement can be decomposed into two subproblems: namely, sample measurement, using either what is visible on the surface of a muck-pile or during the loading process, and estimation of the overall size distribution.

The first subproblem, namely the sample measurement, deals with two major issues: what and how to automatically measure fragments. Depending on the type of sensors used in the data acquisition process (structured or unstructured light, or stereo), depth information can play a major role in these measurements. In the mining industry, the most commonly used sensors are TV cameras, interfaced with computers by digitizing boards known as frame grabbers. The output is a digital image referred to as an intensity image.

Using intensity images, many mining researchers implicitly use the projected area of fragments, identified by their contours, as a size descriptor. This is based on the assumption that fragments can be modelled as spheres, regardless of their actual shapes. Consequently, the diameters of such spheres are simply calculated from the projected areas.
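The equivalent-sphere assumption reduces to a single formula: if a fragment's projected area A is taken to be the circle πd²/4, then d = 2√(A/π). A minimal sketch (the function name is ours, not from the thesis):

```python
import math

def equivalent_sphere_diameter(projected_area: float) -> float:
    """Diameter of the sphere whose circular projection has the same
    area as the fragment's measured projected area: A = pi * d^2 / 4."""
    if projected_area < 0:
        raise ValueError("area must be non-negative")
    return 2.0 * math.sqrt(projected_area / math.pi)
```

A fragment whose projected area is π cm² is thus assigned a diameter of exactly 2 cm, whatever its true shape.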
The second issue in fragment measurement is the segmentation of the bounding contours, which is another challenging problem. In general, digitized images contain a great deal of redundancy. To overcome this problem, many researchers traced fragment contours manually. This method results in subjective measurements which are labour-intensive and very tedious to obtain. Alternatively, some researchers have resorted to classical edge-detection techniques which were developed for other applications, and hence do not cater to the specifics of the muck-pile image processing problem.

The solution to the second subproblem, namely the estimation of the overall size distribution, is highly dependent on the sampling method, the type of models used and the accuracy of the measurements obtained from the samples. In general, the fragment sizes are associated with their frequency of occurrence. The frequency of occurrence of sizes is determined either by number, when a number size distribution is considered, or by weight. The weight size distribution is obtained when the size frequency is measured on a weight basis.
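The two conventions can be illustrated with a short sketch. It is a generic illustration, not the thesis's estimator: sizes are binned by count for the number distribution, and by a weight proportional to d³ (assuming geometrically similar fragments of uniform density) for the weight distribution.

```python
def number_distribution(sizes, bins):
    """Fraction of fragments (by count) falling in each size bin."""
    counts = [0] * (len(bins) - 1)
    for s in sizes:
        for i in range(len(bins) - 1):
            if bins[i] <= s < bins[i + 1]:
                counts[i] += 1
                break
    n = len(sizes)
    return [c / n for c in counts]

def weight_distribution(sizes, bins):
    """Fraction of total weight per bin, weighting each fragment by d^3
    (weight proportional to volume for similar shapes and density)."""
    weights = [0.0] * (len(bins) - 1)
    for s in sizes:
        for i in range(len(bins) - 1):
            if bins[i] <= s < bins[i + 1]:
                weights[i] += s ** 3
                break
    total = sum(weights)
    return [w / total for w in weights]
```

For sizes [1, 1, 1, 2] and bins [0, 1.5, 3], the number distribution is [0.75, 0.25], while the weight distribution is [3/11, 8/11]: one large fragment dominates by weight even though small ones dominate by count.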

1.3 Objectives

This thesis is primarily concerned with the development of an automated algorithm to identify and measure rock fragments using intensity images of the surface of muck-piles. The proposed algorithm is to take into consideration:

- the unique characteristics of this problem, such as texture and overlapping,
- the minimization of human interaction, and
- the provision of an efficient and fast solution.

In addition, a new measure characterizing each fragment will be formulated, which will be used to estimate the size distribution of the fragments.

1.4 Scope of Thesis

Recently, the field of machine vision has enriched many areas of science with tools and algorithms that have made them capable of performing measurements that were not possible otherwise. This thesis is concerned with the utilization of machine vision techniques to segment the boundaries of rocks forming a pile. It also considers the estimation of the size distribution based on the behaviour of rocks during the classification (sieving) process.

One of the objectives of this study is to segment the bounding contour of individual fragments in order to perform the measurements. Due to the nature of rock fragments (heavily textured objects), a smoothing filter is needed to remove unwanted information. Following smoothing, an edge detection process is applied to the smoothed image. This results in an image containing curved segments of parts of the bounding contours.

Estimation of the missing part of the boundary of a rock resulting from overlapping can be a major factor in the accuracy of the overall measurements. Consequently, the challenging problem of overlapping fragments, as a source of error in measuring the occluded fragment, will be addressed.

The problem of determining the true size distribution of blast fragmentation from the surface of a pile of fragments has been studied by many mining researchers. In this thesis, an attempt is made to derive a reliable measure of fragments, based on three-dimensional space mapping. There are many factors that control the sieving process, among them the geometry of the object. Consequently, a new function (called the Weighting Function) will be introduced and used to link the geometry of the object and the sieves used. This can be viewed as a probability function of the passage of the object through the grid, based on the analytical logic defined by Jeffreys [78].
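As a rough illustration of a "probability of passage through the grid" (not the analytical Weighting Function derived in chapter 5), one can estimate by Monte Carlo the chance that a rectangular fragment, presented at a uniformly random in-plane orientation, fits a square mesh opening. The bounding-box pass test and the rectangular shape are assumptions of this sketch only.

```python
import math
import random

def passage_probability(a, b, g, trials=10000, seed=0):
    """Monte Carlo estimate of the chance that an a-by-b rectangular
    fragment, dropped at a uniformly random in-plane rotation, fits
    through a square mesh opening of side g (bounding-box test)."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(trials):
        t = rng.uniform(0.0, math.pi / 2.0)
        # axis-aligned bounding box of the rotated rectangle
        w = a * math.cos(t) + b * math.sin(t)
        h = a * math.sin(t) + b * math.cos(t)
        if max(w, h) <= g:
            passed += 1
    return passed / trials
```

The estimate behaves as expected at the extremes: a unit square always fits an opening of side 2 (probability 1), never fits an opening of side 0.9 (probability 0), and an elongated fragment over an intermediate opening passes only at some orientations, giving a probability strictly between 0 and 1.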
This model will be used as an interpretation of the sieving process and will be referred to as "Virtual Sieving". In addition, Virtual Sieving will be tested and compared with the existing techniques in this area.

1.5 Thesis Organization

In chapter 2 a literature survey of the applications of machine vision in mining automation is presented.

The first part of this study (chapters 3 and 4) considers the problem of fragment recognition and measurement from intensity images. Chapter 3 describes the methodology used in smoothing and edge detection to extract fragment contour segments. Chapter 4 addresses the problem of segment grouping and fragment identification. In addition, a method for reconstructing missing parts of overlapped fragments is presented.

The second part (chapters 5 and 6) considers the relation between fragment geometry and sieve analysis. Chapter 5 contains a detailed analysis of fragment size, and defines the concept of fragment measurement. It contains an analytical derivation of the Weighting Function, which will be used in chapter 6 to estimate the size distribution. Chapter 6 contains an overview of fragment size distribution and presents the link to Virtual Sieving.

The third part of the thesis (chapters 7 and 8) contains experimental results and a comparison of the Virtual Sieving method with other methods. Chapter 7 presents the result of implementing the image analysis algorithm in a laboratory environment. Chapter 8 contains the comparison of the results obtained using the Virtual Sieving method with other existing methods.

Finally, the conclusions of the research are presented in chapter 9. Some open problems, unresolved issues and possible directions for further research are also discussed.


Chapter 2
Machine Vision and Mining
Automation

Over the past decade, there has been a significant trend in the mining industry towards automation. In spite of being common to all industries, process automation faces some of its greatest challenges in mining under the special environmental conditions in both open-pits and underground mines.

One of the most important parts of the automation process is the sensing device used to acquire data. Some processes might require non-contact sensors either to replace tactile ones, or because non-contact sensors are the only tools available to automate such processes.

Sensing devices vary depending on the type of application under study. For example, there are some mining applications where ultrasonic devices have been used in the navigation system mounted on automatic guided vehicles. Laser technology has also been used in some mining applications to measure distance, as in the case of slope monitoring. TV cameras have also been used for identifying and locating landmarks and objects. Moreover, some applications, such as rock modelling, require combining more than one of these devices to obtain accurate information.
In general, there are three criteria that determine the selection of a specific sensor. The first is its flexibility, i.e. the ability of the device to handle a variety of situations and environmental conditions. The second is safety issues such as radiation. Finally, the cost of operating and maintaining such a sensor also plays a role in the selection process.

Associated with sensors are the algorithms used to interpret and analyse the information acquired. Since visual information is more comprehensive and easier to understand, much of the information gathered from the sensors is interpreted and presented using algorithms that simulate special purpose visual systems. These algorithms are grouped under what is called computer or machine vision.

This chapter starts with a general introduction to the digital image terminology which will be used throughout this thesis. This is followed by a review of some of the ongoing research in the mining industry in which machine vision plays a major role in the automation process. Finally, a literature survey of previous work on fragment measurement and estimation of size distributions from digital images is presented.

2.1 Digital Image Concepts

The visual images perceived in our everyday life are functions of four variables: the position of the light source(s) used to illuminate the scene, the position of the viewer, the reflectance of the surface(s), and the geometry of the objects in the scene [101] [90] [131]. Although human beings do not seem to have any problems inferring the world's structure from visual information, the complexity of the image information process makes the problem of computerized scene reconstruction a very difficult one.

Marr [101] defines the word "vision" as a process that produces, from images of the external world, a description that is useful to the viewer and not cluttered with irrelevant information. This requires processing the information on two levels: locally and globally.
On the local level, the first important difference between human vision and computer based image analysis lies in the way images are acquired. Different imaging sensors have been developed to acquire various types of measurements. Therefore one has to begin by understanding how these measurements are created. The most common sensor is the standard video camera. Solid-state cameras, often referred to as CCD cameras, use a chip with an array of individual "pixel" (picture element) sensors. Each detector functions as a photon counter, as electrons are raised to the conduction band in an isolated well. The signal read out from each line of detectors then produces an analog voltage.

The analog voltage produced by the camera, corresponding to the brightness at different points in the scene, is then digitized with analog to digital converters, which sample the signal and produce a number between 0 and 255 for an 8-bit quantizer that represents the brightness (intensity). The digitizing board, known as the frame grabber, stores this value in computer memory. Therefore a digital image can be described in terms of a two-dimensional intensity function I(x, y) of two discrete variables x = 1, ..., n and y = 1, ..., m, denoting the spatial coordinates. The value of I at any point is proportional to the brightness of the image at that point [90] [131].
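The quantization step and the discrete intensity function I(x, y) can be sketched as follows; the voltage range and function names are illustrative, not tied to any particular frame grabber.

```python
def quantize(voltage, v_max=1.0, levels=256):
    """Map an analog voltage in [0, v_max] to an 8-bit intensity 0..255,
    as a frame grabber's A/D converter does (values are clamped)."""
    v = min(max(voltage, 0.0), v_max)
    return min(int(v / v_max * levels), levels - 1)

def make_image(voltages):
    """Turn a 2-D grid of analog samples into the discrete intensity
    function I(x, y): a 2-D array of integers in 0..255."""
    return [[quantize(v) for v in row] for row in voltages]
```

For instance, `quantize(0.5)` gives 128, `quantize(1.0)` gives 255, and indexing the result of `make_image` at row y and column x plays the role of evaluating I(x, y).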
On the global level, machine vision is a combination of three computer oriented areas, namely image processing, pattern recognition and artificial intelligence. It focuses on the computer analysis of one or more images, taken with single- or multi-band sensors. The analysis recognizes and locates the position and orientation of objects, and provides a sufficiently detailed symbolic description or recognition of those imaged objects deemed to be of interest in the three dimensional environment.

In spite of this area's very limited capability when compared to human vision, it has proven able to provide excellent performance when the system is properly designed for a specific application. For example, machine vision has been used in astronomy and biology to perform certain operations such as classifying, counting and tracking.

2.2 Potential Computer Vision Applications in Mining

Recent advances in computer architecture, improved software reliability, and the availability of sophisticated image acquisition devices have opened a new frontier for novel computer vision applications. Using these advances in mining process automation can result in cost reduction of the overall mining process, an increase in productivity, and enhancement of the working environment [30].

This section contains an overview of some of the computer vision applications in mining. From the computer vision point of view, these applications are classified into two main groups, namely: Texture and Rock Sorting, and Automatically Guided Vehicles.

2.2.1 Texture and Rock Sorting

In underground mines, the harsh environment makes automation highly desirable. As a result, an increasing amount of research has been carried out in order to automate certain mining processes to improve productivity and reduce risks. There has been much research involving the use of machine vision to evaluate ore characteristics in situ, and to discriminate between the different ore types found on the rock face.

Bonifazi and Massacci [18] utilized digital image processing techniques to evaluate the characteristics of a stope in terms of the potential production of the faces. Iterations of these analyses from different faces were used to describe the ore characteristic distribution in the ore body. The analysis of the lithologic faces of a wall was carried out automatically, starting from a surveyed image in the ore body area, using either video images or photographs, and using different filtering and segmentation techniques to detect relationships between lithologic faces and colour levels.
Bonifazi and Massacci assumed that the face is the result of the intersection between an ore body and a surface. They represented it as a set of lithotypes, each of which was characterized by its own physical individuality, chemical composition and geometry. For each lithotype, the gray level distribution spectrum was then detected and a threshold value was chosen.
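The thesis does not say which rule Bonifazi and Massacci used to choose the threshold from each lithotype's gray-level spectrum. One common automatic choice, shown here only as an illustrative sketch, is Otsu's method, which picks the threshold maximizing the between-class variance of the histogram:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick a gray-level threshold by maximizing between-class variance
    (Otsu's method -- an illustrative choice, not necessarily the rule
    Bonifazi and Massacci actually applied)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # gray-level probabilities
    omega = np.cumsum(p)                     # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))       # first moment up to t
    mu_t = mu[-1]
    # between-class variance for every candidate threshold t
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# two synthetic "lithotypes": dark matrix around 60, bright ore around 180
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 10, 5000)]),
              0, 255).astype(np.uint8)
t = otsu_threshold(img)
```

For a well-separated bimodal spectrum such as this, the computed threshold falls between the two lithotype modes.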
For some types of ore bodies, recognition of the different ore types present in the face of a wall depends on colour information. Orteu et al. [123] utilized colour vision algorithms to discriminate between the different ore types found in a face, such as sylvanite, carnallite and salt. Using the information about the ore distribution, Orteu et al. adapted path planning algorithms for a computer controlled cutting boom. The goal of their research was to develop a system capable of recognizing the mineral distribution on the face and producing a face map to determine the optimal cutting trajectory to be performed in an automatic mode under control of the system. Their road-header was equipped with cameras, encoders, actuators and control equipment in order to perform the cutting operation automatically.
Nguyen and Cohen [114] proposed the use of a texture-segmentation based technique to extract the ore distribution map from visual data of the cutting face in an underground mine. They used Markov Random Fields to model the texture and region processes. In their work, the segmentation problem was then formulated as a Bayesian estimation procedure, which they decomposed into local decisions. The advantage of this method is that it allows the development of a highly parallel and fast segmentation algorithm. This proposed technique was tested in an underground potash mine and showed promising results.



One of the common problems in geological surveys is the determination of the geometric parameters of the joints. These can be used to estimate the kinematics of possible block movements and for the analysis of the stability of the rock mass. Direct measurements of the slope are often very difficult to perform, e.g., where the joints' out-cropping on rock walls is high and steep. Photogrammetric techniques have been developed as a tool for rock slope characterization and monitoring under difficult ground conditions.
Baratin et al. [9] proposed a method based on reconstructing the three dimensional terrain model of a rock slope. At least two stereoscopic pictures of the slope were required for the reconstruction of the digital terrain model. The latter was used to derive the geometric parameters of the joints. A good agreement between the calculated and the directly measured values was reported.

Franklin and Maerz [49] applied image processing algorithms to measure block sizes, spacing and orientation. Using a TV camera and a digitizing board to acquire the image, Franklin and Maerz applied the gradient operator to extract the edges. They then used directional filters to smooth the image. During the smoothing process, a measure of the roughness was extracted for each joint. Then, forming polygons by interpolation and extrapolation of edges, a measure of persistence was extracted.

2.2.2 Automatically Guided Vehicles

Most mining equipment is mobile, and has to navigate in a hostile environment. Vehicles that haul ore, waste, supplies, and personnel are the most mobile. These vehicles have different guidance requirements since they operate at various speeds in different environments with different tractive effort and effectors. However, they all require human operators on board to guide them. There are several motivating factors, depending on the particular type of vehicle, to remove the human operator from these machines. One way to replace the human operator is to use a machine vision system.
A study by Hurteau et al. [2] resulted in a system designed for underground vehicle guidance which uses two video cameras: one to evaluate the relative position of the vehicle from the optically reflecting surfaces (guidelines) as the vehicle moves forward, and the other for the reverse direction. The vehicle is expected to follow an optical line composed of a highly efficient retro-reflector installed on the roof of a haulage drift. Hurteau et al.'s system is composed of several functional modules, two of which are the guide-path module, used to detect the guide-path defined by the guideline, and the milestone module, used to detect and identify landmark signs.
The guide-path module used by Hurteau et al. is composed of three major components:

1. The optical line network

2. The optical line detector

3. The optical line software, which can recognize branching, end points, and milestones in an optical line network system.
The optical line detector consisted of a CCD video camera and a lighting source encapsulated in a single protection shield. The video signal of each camera was connected to a frame grabber. Through the grabber, the computer has access to any part of the video picture surface. Image enhancement was provided at the hardware level through a coaxial light port ring. The experiments were performed with a Wagner ST-5 diesel powered LHD already equipped with a Nautilus remote control system.
Takahashi et al. [151] proposed the use of a machine vision system mounted on an LHD in mining environments to increase the efficiency of the loading process. Their vision system consisted of a CCD camera, a laser pointer and a light source. The
system was used to recognize/locate and measure the distance between the LHD and
the muck-pile. This information was then used in planning the shovelling process.
Shaffer et al. [144] used a laser range sensor to determine the position and orientation of a mobile robot in a mine. The perception system detected line segments and corners, which represented the typical geometry of the mine walls and intersections found in a room-and-pillar type mine. Matching these features to a map of the mine, the system computed the sensor's position. The position estimate was refined by minimizing the error between the map and the sensed features.

2.3 Fragmentation Measurements and Modelling

The effectiveness of rock breaking with explosives is usually determined by many

criteria; among them, the fragment sizing, the diggability of the broken rock and the
stability of the new face created by the blast. One of the constraints that determines
the applicability of a fragment measurement technique is its speed. In other words,
fragment measurement must not slow the overall mining operation.
This section groups the work done in this area into three categories: fragment measurement on a static system, fragment measurement on a dynamic system, and fragment modelling.

2.3.1 Fragment Measurement on a Static System

By Static System, we mean that the digital images of fragments are acquired directly or indirectly from static muck-piles (see Figure 2.1). In the direct method, a TV camera is used to acquire the digital image directly from the surface of the muck-pile. On the other hand, the indirect method bases its measurement on information obtained from photographs of the muck-pile and always requires manual editing.

Figure 2.2 shows a schematic diagram of fragment measurement methods for the



static system. In either case, both methods require a careful consideration of the scale and orientation of the muck-pile relative to the camera; i.e., the camera must be calibrated relative to the muck-pile.
Figure 2.1: Static System
It is important to note that both the direct and indirect approaches, as currently developed, rely upon human intervention to decide on the outlines and contours of fragments. In purely manual approaches, fragments have to be traced by hand, which is very time-consuming [115]. In semi-automated (computer-assisted) approaches, image processing algorithms are used to perform edge detection processing, resulting in an image in which contours are defined [42] [96] [124] [46] [80] [21]. However, such computer-assisted techniques assume that the human operator will then select which of these contours actually corresponds to a rock fragment.
Whichever approach is chosen, direct or indirect, it results in a digital representation of fragment contours. From these contours, measurements of the fragments' size are obtained. Nie and Rustan [115] manually digitized each fragment, and used the exposed area as a fragment descriptor. Based on the assumption that fragments are uniform, Nyberg et al. [120] used the Sobel operator to obtain the fragments' contour, and computed what they called the typical diameter as "the distance taken at the same place on every fragment to represent each fragment". Maerz et al. [96] manually traced each fragment on transparencies and then scanned them using a CCD camera. Assuming the fragments are spherical, they used the area equivalent


Figure 2.2: Schematic diagram of methods of fragment measurement for static systems
diameter as a fragment descriptor. Farmer et al. [46] used area and elliptical parameters to describe fragments. Kemeny et al. [80] used a linear function to describe the fragment screen size from its major and minor axes. Paley et al. [124] traced the contour of each fragment with a cursor and used the minimum projected chord to obtain the size distribution.
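The simplest of these descriptors follow directly from a digitized contour. As a minimal sketch (assuming a fragment contour given as an array of (x, y) vertices), the exposed area can be computed with the shoelace formula, and the area-equivalent diameter used by Maerz et al. is then d = 2·sqrt(A/π):

```python
import numpy as np

def polygon_area(contour):
    """Exposed area of a digitized fragment contour (shoelace formula)."""
    x, y = contour[:, 0], contour[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def equivalent_diameter(contour):
    """Diameter of the circle with the same exposed area (the spherical
    assumption of Maerz et al.): d = 2 * sqrt(A / pi)."""
    return 2.0 * np.sqrt(polygon_area(contour) / np.pi)

# a 10 x 10 square "fragment": area 100, equivalent diameter about 11.28
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
```

The same contour array could equally feed the chord- or ellipse-based descriptors cited above.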
In both the direct and the indirect methods, human interpretation of fragment shape plays a key role, resulting in unmeasurable errors within the measurements obtained (Barry [10]). In addition, neither method addresses the problem of missing information resulting from the overlapping of fragments. Instead, these techniques tend to


Figure 2.3: Dynamic System


split two such fragments by a straight line or utilize an empirical function.

In the direct method, the texture of the fragment surface can be dissolved using a low pass filter; this may result in diluting parts of the contours, which would lead to inaccurate measurements [69].

There has been some research in fragment assessment in underground mines [69]. Nevertheless, the constraints on acquiring images are more restrictive due to the lack of proper lighting conditions and space. These researchers followed the footsteps of their colleagues, using the indirect methods for estimating fragment size in open-pit mines.

2.3.2 Fragment Measurement on a Dynamic System

The major difference between the static and dynamic systems lies in the method used for acquiring the digital image. As presented earlier, digital images of the static system are intensity images of static piles representing two dimensional projections of individual fragments. In the dynamic system, fragment movement is involved, e.g. on top of a moving conveyor belt (see Figure 2.3).

To obtain optimum distribution measurement of fragment size in a dynamic system, it is necessary to be able to monitor mineral degradation at various stages of the process because, in some cases, grades of minerals are determined by the percentage of fines. This is done either by sampling and screening using a mechanical sampler, which is expensive and difficult to maintain, or by stopping the production line to collect samples, which is always a costly process.
One of the first applications for a computer vision system to distinguish different rocks and ore placed on a conveyor belt was developed by Manana et al. [97]. They placed a line scanner perpendicular to the conveyor belt. Thus, while the camera performed the transversal sweep, the conveyor belt executed the longitudinal one. Lange [87] proposed a similar system to measure the size distribution by placing a camera on top of a conveyor belt to measure the chord length of individual rocks. A low pass filter was used to smooth the interior of the rock boundaries, followed by the application of the Laplacian operator to extract the boundaries. In the next stage, Lange [87] utilized a heuristic search to complete the missing parts of the contours. From these contours, the chord length was measured and used to estimate the size distribution.
Another system to measure the size distribution of rocks on a conveyor belt was proposed by Cheung and Ord [23]. Using active stereo vision, their system consisted of a TV camera, a frame grabber, and a light projector which was used to project a stripe of light on the surface of the conveyor belt. The image of the stripe was captured by a camera placed on top of the conveyor belt. The fragments were then isolated by tracing the stripe image, and a three-dimensional size distribution was obtained by recording the chord length of each stripe. From the deviation of the light stripe from the neutral position, Cheung and Ord were able to determine the surface of the rock fragment. As the light stripe was projected over the fragments on a moving conveyor belt, a sequence of pictures was captured by adjusting the time between the capture of each frame. In order to capture multiple stripes, so that more data could be analyzed, a total of eight frames had to be captured in an adjustable interval.
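Recovering height from the stripe's deviation is a structured-light triangulation. Cheung and Ord's exact geometry and calibration are not given in the text; the following is only a similar-triangles sketch under the simplifying assumptions that the camera looks straight down and the projector is inclined at a known angle from vertical (the function name and parameters are hypothetical):

```python
import math

def height_from_stripe_shift(shift_mm, projector_angle_deg):
    """Height of a fragment above the belt from the lateral stripe deviation.

    Assumes a downward-looking camera and a stripe projector inclined at
    `projector_angle_deg` from vertical, so a surface raised by h shifts
    the stripe sideways by h * tan(angle); hence h = shift / tan(angle).
    (A simplified model, not Cheung and Ord's calibrated geometry.)
    """
    return shift_mm / math.tan(math.radians(projector_angle_deg))

# with a 45-degree projector, a 20 mm sideways deviation implies 20 mm height
h = height_from_stripe_shift(20.0, 45.0)
```

In practice the image-plane shift would first be converted to millimetres on the belt plane using the camera calibration.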
One of the problems encountered by Cheung and Ord [23] was vibration. Since the camera sees only the top surface of the fragments on the conveyor belt, vibration resulted in segregation of most fine grains, so the observed size distribution had a larger mean size and a smaller standard deviation than the total distribution. Since the camera cannot detect fine fragments because of limited resolution, the Rosin-Rammler distribution [135] [2] was used to infer the percentage of fines from the distribution of the observed fragments.
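The Rosin-Rammler model gives the cumulative fraction passing size x as P(x) = 1 − exp(−(x/x_c)^n), where x_c is the characteristic size and n the uniformity index. A minimal sketch of how fines below the camera's resolution limit could be inferred (the parameter values here are illustrative only; in practice x_c and n would be fitted to the observed coarse sizes):

```python
import math

def rosin_rammler_passing(x, x_c, n):
    """Cumulative fraction passing size x for a Rosin-Rammler distribution:
    P(x) = 1 - exp(-(x / x_c)**n)."""
    return 1.0 - math.exp(-((x / x_c) ** n))

def fines_fraction(resolution, x_c, n):
    """Fraction of material finer than the camera's resolution limit --
    the part the vision system cannot see and must infer, as Cheung and
    Ord did.  x_c and n are assumed already fitted to the observed sizes."""
    return rosin_rammler_passing(resolution, x_c, n)

# e.g. with x_c = 50 mm and n = 1.2, material below a 5 mm resolution limit
f = fines_fraction(5.0, 50.0, 1.2)
```

By construction, 63.2% of the material passes the characteristic size x_c regardless of n.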
A recent study by researchers at the U.S. Bureau of Mines [59] resulted in an on-line dynamic system to measure the size distribution of crushed and broken taconite ore using digital image analysis techniques. The system was originally developed by Grannes [78] to measure the size distribution of slowly moving spherical taconite pellets.

In their study, Grannes and Zahl [59] proposed to perform measurements of ore pieces either:

• moving on a conveyor belt, including loading and discharge points, or

• being dumped into the primary crusher from the mine rail haulage system.

The similarity of these two scenarios is due to the fact that the dumping action of ore from a side-dump car can be viewed as ore sliding on a metal plate. Given the opportunity, individual rocks will fall to their lowest energy state with the shortest dimension perpendicular to the plane.
The Grannes and Zahl [59] system consisted of a CCD camera, a digitizing system, a 386 personal computer, a light source and image interpretation software. The image interpretation software combined an averaging smoothing filter, a first derivative edge detection mask (the Sobel operator) to outline the edges of the rock, and a measurement algorithm.
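The smoothing-plus-Sobel front end described above can be sketched in a few lines. This is only an illustration of the two standard masks, not Grannes and Zahl's code; the helper names are hypothetical and the filtering is done with a plain "valid" sliding-window cross-correlation to keep the sketch self-contained:

```python
import numpy as np

def filter2d(img, kernel):
    """Minimal 'valid' sliding-window cross-correlation (no SciPy needed)."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(img):
    """3x3 mean smoothing followed by Sobel gradient magnitude."""
    mean = np.ones((3, 3)) / 9.0
    gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy = gx.T
    smooth = filter2d(img.astype(float), mean)
    return np.hypot(filter2d(smooth, gx), filter2d(smooth, gy))

# a dark/bright step image: the edge response peaks near the boundary column
step = np.zeros((12, 12))
step[:, 6:] = 255.0
mag = sobel_edges(step)
```

Thresholding `mag` would give the closed-contour candidates that the measurement algorithm operates on.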
Considering only closed contours, each one was rotated in 20° increments between 0° and 80°. At each increment, maximum and minimum dimensions were measured. The minimum of all of these dimensions was considered the rock's screen size, and the corresponding dimension is the rock length. This was based on the assumptions that the pieces were resting in the lowest gravity state (lying flat) and that the shape factor did not vary significantly with size. The volumetric size frequency was determined from the surface area frequency (i.e. length × width of the visible rocks). They concluded that this will not be true if the shape factor changes with size.
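The rotate-and-measure step above can be sketched directly from the description: rotate the contour through the stated increments, take the bounding-box dimensions at each orientation, and keep the overall minimum as the screen size with its companion dimension as the length. This is a reading of Grannes and Zahl's procedure, not their code:

```python
import math

def screen_size(points, step_deg=20, max_deg=80):
    """Rotate a closed contour in 20-degree increments (0..80), measure the
    bounding-box dimensions at each orientation, and take the overall
    minimum as the screen size; the companion dimension at that
    orientation is taken as the rock length."""
    best = None
    for k in range(0, max_deg + 1, step_deg):
        a = math.radians(k)
        xs = [x * math.cos(a) - y * math.sin(a) for x, y in points]
        ys = [x * math.sin(a) + y * math.cos(a) for x, y in points]
        w, h = max(xs) - min(xs), max(ys) - min(ys)
        small, large = min(w, h), max(w, h)
        if best is None or small < best[0]:
            best = (small, large)
    return best  # (screen size, length)

# a 30 x 10 rectangle tilted by 10 degrees: one of the sampled rotations
# (80 degrees) brings it axis-aligned, giving screen size 10 and length 30
a10 = math.radians(10)
rect = [(0, 0), (30, 0), (30, 10), (0, 10)]
tilted = [(x * math.cos(a10) - y * math.sin(a10),
           x * math.sin(a10) + y * math.cos(a10)) for x, y in rect]
s, length = screen_size(tilted)
```

The coarse 20° sampling trades angular accuracy for speed, consistent with the on-line constraint the authors faced.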
The Grannes and Zahl [59] system was able to correctly identify and measure individual pieces in layered laboratory sample trays. The result was found to correlate linearly with the screen sieve methods. Kettunen et al. [81] presented the results of the prototype of the Grannes and Zahl [59] system, which was installed at the Minntac #1 crusher to measure the size distribution of ore as it was dumped into the crusher. The system performance was not degraded by wet or dusty ore or by the presence of snow in the ore. It did, however, tend to measure frozen chunks as a large rock.

In spite of being a commercial product, this system ignored several important issues in the size estimation of fragmented rocks; among them, the overlapping problem, contour analysis and fines estimation. In other words, the system trades off the accuracy of measurement against the speed required to perform the computation on-line.

2.3.3 Rock Modelling

In underground mines, many repetitive tasks are executed by human operators working in harsh environmental conditions. The rock breaker is an example of this type of process used in underground hard rock mining. The rock breaker is a four degree of freedom hydraulic manipulator equipped with a hammer and is permanently installed

in front of a grizzly grid. Its function is to break oversize rocks dumped on the grizzly by trucks or LHD vehicles. The rock breakers are manually controlled by an operator located in a cabin from where he maintains visual contact with the grizzly in the working area. The operators can be exposed to a high level of noise, dust, vibration and flying rock chips.

In an attempt to automate the rock breaker, Hurteau et al. [71] used a TV camera to detect and locate the rocks on the grizzly, and tactile sensors to obtain the third dimension (the height) by contacting the rock. By combining this information, a three-dimensional model of the rock was obtained. The only assumption Hurteau et al. [71] made is that rocks have uniform surfaces.
Cheung et al. [24] addressed the same problem using a laser range finder, where the coordinates of discrete points on the surface (x, y, z) were measured. Based on the assumption that rocks possess smooth surfaces, Cheung et al. decomposed the muck pile into regions, each corresponding to the surface of individual rocks. To obtain a three dimensional shape for each rock, super-quadric models were used to estimate the rock geometry.
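A super-quadric is usually represented by its inside-outside function: F(x, y, z) = ((|x/a1|^(2/e2) + |y/a2|^(2/e2))^(e2/e1) + |z/a3|^(2/e1)), with F < 1 inside the model, F = 1 on its surface, and F > 1 outside. A minimal sketch of evaluating this function (the fitting of the five parameters to range points by nonlinear least squares, which is the step Cheung et al. actually performed, is omitted):

```python
def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Super-quadric inside-outside function.

    F < 1 inside the model, F = 1 on the surface, F > 1 outside.
    (a1, a2, a3) are the semi-axes and (e1, e2) the shape exponents;
    fitting them to measured surface points is how a rock's geometry
    would be estimated."""
    xy = (abs(x / a1) ** (2.0 / e2) + abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + abs(z / a3) ** (2.0 / e1)

# with e1 = e2 = 1 the model reduces to an ellipsoid; the point (2, 0, 0)
# lies exactly on the surface of an ellipsoid with semi-axes (2, 1, 1)
f_surface = superquadric_f(2.0, 0.0, 0.0, 2.0, 1.0, 1.0, 1.0, 1.0)
```

Varying e1 and e2 between 0 and 2 morphs the shape from box-like through ellipsoidal to pinched forms, which is what makes the family flexible enough to approximate rocks.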
Choi et al. [26] developed the perception module of a system to collect rocks from another planet. They developed a rock sampling system including a robot arm, a range finder and a small terrain mock-up containing sand and small rocks. The goal of the rock sampling system was to identify, locate and pick rocks from the terrain. The process started by taking a range image of the scene and then extracting features from the image. These features were surface features, such as surface discontinuities, that were used to extract the object boundaries. Then the contours of the objects in the scene were extracted. Based on the concept of deformable contours, the set of points enclosed by the contour of an object was approximated by a super-quadric surface. The parameters of the surface that approximate each object were used to grasp it.


2.4 Conclusion

From this overview, it is evident that there are many potential applications of machine vision in the area of mining automation. However, industrial utilization does not appear to be substantial. It is believed that more research is needed in this area to achieve practical, usable solutions in actual mining conditions. This requires the utilization of recent achievements in both the instrumentation and algorithms of machine vision. In addition, a more precise and appropriate model of the process itself is required.



Chapter 3
Preprocessing
Rock segmentation from images of muck-piles plays an important role in the fragment measurement process. At this stage, individual rocks are extracted from the images for subsequent analysis and calculations. During the last two decades, many image segmentation techniques have been developed. They are based on one of the two basic properties of gray level values: similarity and discontinuity. Those utilizing the first property are termed region-based segmentation techniques (based on thresholding and region-oriented methods) [52] [138], while those utilizing the second property are referred to as boundary estimation (edge detection) techniques [34] [131]. Applications of the first category of techniques to images of rock piles can be found in [53] [54] [80], and of the second in [59] [12]. In this thesis, the second category will be used for rock segmentation.

In an ideal environment, i.e. with high contrast and noise free images, edge segments can be detected using gradient operators. The segments are then joined to form closed boundaries using an edge linking algorithm. However, images of natural scenes are usually noisy, containing objects with textured regions and fuzzy boundaries. Consequently, a smoothing process is needed to reduce the noise, homogenize the regions and enhance the boundaries.

Figure 3.1: Image of a 3-D scene

This chapter starts with a visual description of a muck-pile; in other words, an analysis of the muck-pile digital image, how it is formed, and a description of the constraints and problems associated with visual measurements. A review of smoothing filters, and in particular Crimmins' geometric filter, which will be used for smoothing images of muck-piles, is then presented, along with a review of edge detection methods. Finally, analyses of both Crimmins' filter and Canny's edge detector and the results of applying them to images of muck-piles are presented.

3.1 Description of Muck-Pile Digital Images


This section contains a brief description of the physical situation which produces muck-pile images [61] [77]. In the three dimensional world, an object is observed by an eye or camera from some point P in space. Let I(ρ) represent the light intensity approaching the point P from a direction ρ. Let a lens at P focus this light on a plane R, where R ⊂ ℝ². Using the plane coordinates, I(x, y) is the intensity of the light at a point on R with coordinates (x, y); I(x, y) is a mapping of I(ρ) given by the geometry of the imaging system.

The light reflected off the surfaces of various objects (in this case rocks) Oᵢ visible from P will strike R in various regions Rᵢ, where Rᵢ ⊂ R. When one object O₁ is

Figure 3.2: Digital image of a muck-pile

partially in front of another object O₂ as seen from P (see Figure 3.1), and some of the object O₂ appears as the background to the sides of O₁, then the open sets R₁ and R₂ will have a common boundary (the "edge" of object O₁ in the image defined on R), and one usually expects the image I(x, y) to be discontinuous along this boundary.

Other discontinuities in I will be caused by discontinuities in the surface orientation of visible objects (e.g. "edges" of a cube), discontinuities in the objects' albedo (i.e. surface markings) and discontinuities in the illumination (e.g. shadows). In reality, natural objects in general, and rocks in particular, are textured and not smooth, and surface marking occurs in misleading forms (see Figure 3.2). In addition, shadows are not true discontinuities, and the measurement of I always produces a corrupted, noisy approximation of the true image I.

In spite of this, the image I(x, y) can be modelled (on a certain scale and to a certain approximation) by a set of smooth functions fᵢ defined on a set of disjoint regions Rᵢ covering R, as will be presented later in this chapter.

3.2 Image Smoothing

Rock fragments do not possess smooth surfaces. This usually results in a "noisy" image. To be able to extract the boundaries of individual rocks, preprocessing of the image is needed to minimize high frequency intensity variations due to the texture.

The purpose of the smoothing process is to eliminate both weak edges and the texture as much as possible while preserving the boundaries of the rocks. In the literature, many filters have been developed to smooth noisy images. The problem associated with these filters is that many do not preserve image features such as the true edges. For example, in applying a low pass filter in the frequency domain, occlusion boundaries can be lost to the background in some cases.

In the spatial domain, smoothing can be achieved by applying an operation, such as averaging, through a mask to the image [56]. The gray level of the pixel at the centre of this mask is replaced by the gray level average of the pixels inside the mask. The coefficients are either equal, or decrease from the centre pixel to the outside. These masks do not take into consideration the change of image content. They will smooth the image but, at the same time, they also blur the sharpness of the objects' boundaries.
There are also adaptive smoothing methods, in which a versatile operator that adapts itself to the local topography of the image is used. Chin et al. [25] and Mastin [103] reviewed and evaluated some adaptive smoothing methods. In this section we will review some concepts which form the basis of many of these methods, including the newest approaches.

Graham [57] devised a method that blurs more in uniform regions of the picture
than in the busy parts. He used the second difference, computed on the nearest neighbours, as a local measure of the level of detail. This method replaces each point by a weighted average of zero or more of its neighbours, depending on whether the values of the second partials fall below or above a threshold, which is typically two percent of the range.
Lev et al. [89] proposed similar iterative weighted averaging methods. In particular, they proposed applying a weighted mask at each point, whose coefficients were based on an evaluation of the differences between the value at the centre point and the values of its neighbours. A similar approach was used by Wang et al. [157], in which the weighted coefficients are normalized gradient-inverses between the current point and each neighbour. Another method, by Davis and Rosenfeld [35], is based on selecting the neighbour points which have the values closest to the central point and replacing the latter by the average of these values.
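The gradient-inverse idea attributed to Wang et al. can be sketched in a few lines. This is a minimal reading of the scheme (one pass over a 3×3 neighbourhood; their exact normalization and iteration count may differ), with weights proportional to the inverse of the absolute difference from the centre pixel, so near-uniform neighbours dominate while pixels across a strong edge contribute little:

```python
import numpy as np

def gradient_inverse_smooth(img, eps=0.5):
    """One pass of gradient-inverse weighted averaging: each interior pixel
    becomes a weighted mean of its 3x3 neighbourhood, with weights
    1 / (|neighbour - centre| + eps).  eps avoids division by zero in
    perfectly flat regions."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            wts = 1.0 / (np.abs(patch - img[i, j]) + eps)
            out[i, j] = np.sum(wts * patch) / np.sum(wts)
    return out

# a hard 0/255 step: flat sides stay flat and the step itself stays sharp
step = np.zeros((5, 5))
step[:, 3:] = 255.0
smoothed = gradient_inverse_smooth(step)
```

Because the weights collapse across the 255-level jump, the step survives almost unchanged while in-region noise would be averaged away.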


Nagao and Matsuyama [109] developed a smoothing method which preserves sharp edges and region boundaries. Their algorithm is based on rotating an elongated bar mask around each pixel in the image, and detecting the position of the mask for which the variance of the gray level is minimum for each pixel. The average gray level of the mask at the selected position is used to replace the gray level at this point. This operation is iteratively repeated until the gray levels of almost all points in the picture do not change. The disadvantage of using such a filter is the amount of computation required for each pixel.
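The minimum-variance principle behind this filter is easy to sketch. The version below is a simplified variant in the same spirit: instead of Nagao and Matsuyama's rotated bar masks, it examines the four 3×3 quadrants of a 5×5 window (a Kuwahara-style choice made here only to keep the sketch short) and replaces each pixel with the mean of the least-variable quadrant:

```python
import numpy as np

def min_variance_smooth(img):
    """Edge-preserving smoothing in the spirit of Nagao and Matsuyama:
    around each pixel, examine several offset subregions and replace the
    pixel with the mean of the subregion of minimum gray level variance.
    This sketch uses the four 3x3 quadrants of a 5x5 window rather than
    the original rotated bar masks."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            quads = [img[i - 2:i + 1, j - 2:j + 1],  # upper-left
                     img[i - 2:i + 1, j:j + 3],      # upper-right
                     img[i:i + 3, j - 2:j + 1],      # lower-left
                     img[i:i + 3, j:j + 3]]          # lower-right
            best = min(quads, key=lambda q: q.var())
            out[i, j] = best.mean()
    return out

# a hard 0/255 step edge is preserved exactly: each side of the edge always
# finds a zero-variance quadrant entirely on its own side
step = np.zeros((9, 9))
step[:, 4:] = 255.0
out = min_variance_smooth(step)
```

Even this reduced variant makes the cost concern in the text concrete: four variances and means per pixel, per iteration.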
Yasuoka and Haralick [165] introduced a peak noise removal method based on local gray tone statistics extracted from the facet model, where each pixel in an image is statistically tested to determine whether it belongs to the same gray tone intensity surface as its neighbourhood pixels. If the gray tone is outside the 95 percent confidence interval estimated from the neighbourhood gray tones, it is judged to be a noise peak and its value is replaced by an average of the gray tone values of the

neighbourhood pixels. In order to estimate the local gray tone statistics, an assumption is made that the neighbourhood region is described by a linear or quadratic facet surface model. It is also shown that this method can be successfully applied to scan line noise removal by using a one dimensional (horizontal or vertical) neighbourhood.
Geman and Geman [55] proposed using simulated annealing, which is computationally expensive. They demonstrated the results of applying their algorithm on images with very few gray levels. Blake and Zisserman [17] proposed a different method which aims at overcoming the difficulty of local operator approaches by introducing weak continuity constraints to allow discontinuities in a piecewise continuous reconstruction of a noisy signal. The drawback of their method is the long computation time required.
The idea of casting adaptive smoothing in terms of nonlinear diffusion was recently addressed by Perona and Malik [129]. In their method, they allow the image to evolve over time via the diffusion equation [68]. The resultant image is diffused most where the gradient is smallest and least where the gradient is largest. As time passes, the image is smoothed within regions of uniform intensity, but not between regions. In addition, edges are enhanced due to the smoothing of regions on either side of an edge.
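A minimal discrete sketch of Perona-Malik diffusion follows, using the exponential conductance g = exp(−(|∇I|/κ)²) from their paper, four nearest-neighbour differences, and (for brevity) periodic borders via `np.roll`; the step size and iteration count are illustrative:

```python
import numpy as np

def perona_malik(img, iters=20, kappa=30.0, lam=0.2):
    """Anisotropic diffusion: the conductance g = exp(-(|grad I|/kappa)**2)
    is near 1 in flat regions (strong smoothing) and near 0 across strong
    edges (little smoothing), so regions are homogenized while boundaries
    survive.  Borders are treated as periodic for brevity."""
    u = img.astype(float)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(iters):
        # nearest-neighbour differences in the four directions
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# a hard 0/255 step survives (the 255-level jump gives g ~ 0 across it),
# while a small-amplitude checkerboard pattern is flattened
step = np.zeros((8, 8)); step[:, 4:] = 255.0
out = perona_malik(step, iters=10)
noisy = 128.0 + 10.0 * ((-1.0) ** np.indices((8, 8)).sum(axis=0))
flattened = perona_malik(noisy, iters=20)
```

The stability bound λ ≤ 1/4 for the four-neighbour scheme is respected by the default λ = 0.2.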
Saint-Marc et al. [139] proposed a smoothing filter which is based on iteratively convolving the signal to be smoothed with a very small averaging mask whose coefficients reflect, at each point, the degree of continuity of the signal. Convergence of their algorithm may take an extremely large number of iterations.
Crimmins [32] developed a nonlinear filter having the property of smoothing speckles while preserving features. It was first used to assist radar image interpretation. This filter was selected as the basis of image smoothing for preprocessing in this research. One of the motivations for using this filter was the great similarity between aerial images and images of rock piles. In addition, it is simple to implement.

Figure 3.3: Speckle representation in the digital image: (a) image of a rock, (b) 3-D representation of the surface of the rock, (c) a cross-sectional slice of the surface

In representing the intensity image in 3-D space, speckles appear as narrow winding walls and valleys. To illustrate this, Figure 3.3 (a) shows an image of a rock, (b) its 3-D graphical representation, and (c) a vertical slice of the surface. The body of the rock appears as a wide high plateau in (c). The geometric filter, through iterative repetition, gradually tears down the narrow walls and fills up the narrow valleys. It also tears down high plateaus, which are desired to be preserved. However, it reduces narrow walls and valleys faster than it reduces the even, wider

plateaus. In general, the wider any feature is, the more slowly it is reduced. Thus, only a few iterations are required to reduce the narrow speckle walls and valleys, and these few iterations have very little effect on the wider, even plateaus (hence rock shape is preserved).

3.3 Edge Detection

The edge detection process serves to simplify the analysis of images by drastically reducing the amount of data to be processed while, at the same time, preserving useful information about the boundaries. The underlying concept is that abrupt changes in intensity provide a sufficiently rich source of features that capture the key aspects for subsequent image analysis, yet at a considerably reduced size. The best known examples of such features are step edges, i.e. contours where the light intensity changes relatively abruptly from one level to another. Such edges are often associated with object boundaries, changes in surface orientation, or material properties [98] [99]. Edge images contain most of the relevant information in the original gray-level image in cases where the information is mostly contained in the changing surface material, in sharp changes in surface depth and/or orientation, and in surface texture, colour, or grayness.
Many methods for edge detection in noisy images have been proposed, such as the Roberts gradient, Sobel operator, Prewitt operator, facet model and Laplacian operators [134] [65] [62] [131]. Overviews of some schemes can be found in [34] [127] [153]. The more recent detection algorithms can be grouped into three main categories, namely: optimal operators, multiscale approaches and sequential contour tracing techniques. These groups validate the detection of local features by considering a more global context.

For optimal operators, a single edge is considered. The goal is to find the optimal filter (in terms of signal-to-noise ratio) for the detection of such an edge. Shanmugam et al. [145] defined an edge as a step discontinuity between regions of uniform intensity and showed that the ideal filter is given by a prolate spheroidal wave function. Marr and Hildreth [99], extending the work of Marr and Poggio [100], convolved the signal with a rotationally symmetric Laplacian of Gaussian mask and located zero-crossings of the resulting output. In their work, they mentioned that a multiple scale approach is necessary, pointing out the difficult problem of integration. Haralick [64] located edges at the zero-crossings of the second directional derivative in the direction of the gradient, where derivatives were computed by interpolating the data. In [63] Haralick et al. extended the facet model to the Topographic Primal Sketch. Canny [20] proposed solving the problem by deriving, using variational methods, an optimal operator which turns out to be well approximated by a Derivative of Gaussian mask.

Nalwa and Binford [112] proposed an edge detector which fits, at each point, a set of surfaces within a window and accepts the best surface, in the least squares sense, which has the fewest parameters.
For multiscale approaches, as noted by several authors, automatic adjustment of the size (or scale) parameter is difficult; hence using multiple scales should provide a reasonable answer. This idea is based on some physiological observations [101] for a few scales, but the integration of these discrete scales is an open problem. Instead of using discrete scales, Witkin [163] proposed a continuum of scales and showed that, at least in one dimension, the interpretation of the multiscale response made the important information explicit. In the case of more complex signals, the discretization of the formulation leads to the need for a large amount of memory allocation, as in edge focusing [14]; otherwise, heuristics need to be applied to establish a correspondence between scales. This was done with some success by Asada and Brady [5] for two dimensional curves in their Curvature Primal Sketch. In his paper [20], Canny defined a set of heuristic criteria for the integration of multiple size masks, and gave promising results for two scales.
More recent edge detectors, motivated by the need for more recognizable or more stable contour images, search instead for extremal points of the light intensity distribution, known as valleys and ridges, or build up a composite edge representation made up of a union of step edges, valleys and ridges [29] [130] [51] [128]. The composite edge images do not necessarily contain the subset of edges that are stable against changing illumination; they generally look better than the step edges alone, but that varies considerably depending on the specific object.
The third category of edge detection algorithms searches the image, or a filtered version of the image, for patterns of image intensity that may be edges [107] [102]. These algorithms combine edge detection with edge linking. The analysis of the patterns of image intensity can be very elaborate. These algorithms are usually used only in situations where it is necessary to find edges in images of poor quality.

The work in this thesis will follow this third category of edge detection techniques. Canny's filter was selected to extract features of the muck-pile. One of the problems of this filter is that it may displace the true location of the edge and may also fail to detect some edges. The edge linking algorithm used in this thesis will be described in detail in chapter 4.

3.4 Feature Extraction of a Muck-Pile

In this section, a proposal to utilize Crimmins' geometric filter for smoothing, followed by Canny's edge detector, to extract features of rocks forming a pile is presented, starting with the analysis of Crimmins' filter and its implementation. The theory of Canny's filter is then presented. Finally, the results of applying both filters to images of rock piles are presented.

Geometric filter designs, of which Crimmins' is an instance, are based on the use

Figure 3.4: Geometric Filter (a) Curve (b) Umbra of curve (c) Complement of umbra

Figure 3.5: Four configurations of the 8-hull algorithm used to process the umbra

of a complementary hulling technique. The 3-D graphical representation of the image, i.e. z = f(x, y), is used to construct a surface above the image, such that its height above any pixel is proportional to the value of that pixel (see Figure 3.3 (b)). This surface is then sliced by all vertical planes similar to the one shown in Figure 3.3 (c). The slicing is done in four configurations: C1, C2, C3 and C4, planes that are parallel to the yz-plane, the xz-plane, the plane y = x, and the plane y = −x, respectively.

The intersection of any of these vertical planes with the gray-level surface forms a curve. This curve is used to construct a binary image from the vertical planes, by defining a discrete grid composed of vertical lines which pass through pixels in the image (xy-plane) and horizontal lines at a height proportional to the intensity value of that pixel f(x, y) (Figure 3.4 (a)). The points on this vertical grid will be referred to as vertical pixels and have a value equal to one (1). The umbra of the curve consists of all vertical pixels in the binary image on or below the curve (Figure 3.4 (b)).

The complementary hulling algorithm consists of applying the 8-hull algorithm (Appendix A) twice, once to the binary image and once to its complement, and complementing the binary image back. Crimmins [32] adapted this iteratively, such that each iteration is composed of two steps. In the first step, he applied the complementary hulling algorithm to the umbra, i.e. using only four of the eight configurations to smooth the curve forming the top boundary of the umbra (see Figure 3.5). In the next step, he applied the remaining four configurations, shown in Figure 3.6, to smooth the bottom boundary of the complement (Figure 3.4 (c)). An iteration is completed by complementing back the image.

Figure 3.6: Four configurations of the 8-hull algorithm used to process the complement


To speed up the smoothing process, while applying an iterative step of the 8-hull algorithm to the umbra, instead of changing a 0 to 1 if any of the four neighbourhood configurations is present, these configurations are used separately and consecutively. In other words, a 0 is changed if the first of these four configurations is present; then the resulting image replaces the original image, then the second configuration is used, etc. Similarly, the four configurations of the complementary image are applied separately and consecutively to the complement. This results in a greater modification of the image at each iterative step, and hence fewer iterations are required. It also causes a greater difference between the reduction rates for wide and narrow features. Figure 3.7 shows the result of applying the geometric filter to one slice.

This procedure is performed on all C1 vertical grids simultaneously, and the resulting gray-level image replaces the original gray-level image. The same procedure is repeated on the C3 grids, then the C2 grids and finally the C4 grids. This completes one iterative step of the geometric filter.
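The per-slice behaviour described above can be sketched in one dimension. The following is an illustrative approximation, not the thesis implementation: each hull pass raises a sample that sits at least two levels below a neighbour (filling narrow valleys), and the complementary pass lowers a sample that sits at least two levels above a neighbour (tearing down narrow walls). The function name and the reduction to one dimension are assumptions made for illustration.

```python
import numpy as np

def geometric_filter_1d(signal, iterations=3):
    """1-D sketch of the complementary hulling idea.

    Each iteration first hulls the umbra (a sample at least two levels
    below a neighbour is raised by one, filling narrow valleys) and then
    hulls the complement (a sample at least two levels above a neighbour
    is lowered by one, tearing down narrow walls).  Each configuration
    is applied separately and consecutively, as described above.
    """
    f = np.asarray(signal, dtype=int).copy()
    for _ in range(iterations):
        for shift in (1, -1):                 # hull the umbra
            g = np.roll(f, shift)
            grow = g >= f + 2
            grow[0] = grow[-1] = False        # ignore wrap-around at the ends
            f[grow] += 1
        for shift in (1, -1):                 # hull the complement
            g = np.roll(f, shift)
            cut = g <= f - 2
            cut[0] = cut[-1] = False
            f[cut] -= 1
    return f
```

Narrow spikes lose about two levels per iteration, while the interior of a wide plateau is untouched for the first several iterations, consistent with the behaviour described above.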

Figure 3.7: 1-D Geometric Filter results, (a) the original image, (b) after one iteration, (c) after 10 iterations, (d) after 20 iterations
Laboratory experiments applying the geometric filter to images of rock piles concluded that 3 iterations are sufficient to smooth the image and to preserve most of the rock boundaries. Figure 3.8 presents the result of applying the filter to the image with different numbers of iterations. As can be seen from Figure 3.8 (d), over-smoothing can result in the destruction of many of the weak boundaries of the fragments. This will complicate the segmentation process of the individual fragments, and consequently the scene analysis process, which may result either in failure of the overall process or in extraction of false measurements.

Figure 3.8: Geometric Filter results, (a) after one iteration, (b) after 3 iterations, (c) after 5 iterations, (d) after 7 iterations

The next stage of feature extraction is edge detection. One of the popular edge detectors is Canny's edge detector [20]. It is based on a one dimensional continuous domain model of a step edge of amplitude h_E with additive Gaussian noise having standard deviation σ_n. It is assumed that edge detection is performed by convolving a continuous domain, one dimensional noisy edge signal f(x) with an anti-symmetric impulse response function h(x) bounded by [−w, w] (of zero amplitude outside the interval). An edge is marked at the local maximum of the convolved gradient f(x) ⊛ h(x). The impulse response h(x) is chosen to satisfy the following three criteria:

• Good detection: The amplitude signal-to-noise ratio (SNR) of the gradient is maximized to obtain a low probability of failure to mark real edge points and a low probability of falsely marking non-edge points. The SNR for the model is

    SNR = (h_E / σ_n) · |∫_{−w}^{0} h(x) dx| / ( ∫_{−w}^{w} h²(x) dx )^{1/2}

• Good localization: Edge points marked by the operator should be as close to the centre of the true edge as possible. The localization factor is defined as

    LOC = (h_E / σ_n) · |h'(0)| / ( ∫_{−w}^{w} [h'(x)]² dx )^{1/2}

where h'(x) is the derivative of h(x).

• Single response: There should be only a single response to a true edge. The distance between peaks of the gradient when only noise is present, denoted x_m, is set to some fraction k of the operator width factor w. Thus

    x_m = k w        (3.1)

Canny combined these three criteria by maximizing the product

    [ |∫_{−w}^{0} h(x) dx| / ( ∫_{−w}^{w} h²(x) dx )^{1/2} ] · [ |h'(0)| / ( ∫_{−w}^{w} [h'(x)]² dx )^{1/2} ]

subject to the constraint of equation (3.1). Due to the complexity of the formulation, no analytical solution has been found, but a variational approach has been developed.

In the discrete domain, the large size operators defined in the continuous domain can be obtained by sampling their continuous impulse functions over some w × w window. The window size should be chosen sufficiently large such that truncation of the impulse response function does not cause high frequency artifacts.

Figure 3.9: Canny's filter results (a) Original image (b) σ = 2 (c) σ = 3 (d) σ = 4

Figure 3.9 demonstrates the result of applying Canny's filter, before smoothing the image, using different window sizes. These results are satisfactory when the rocks are spread. Using a large σ can smooth the surface, but may result in displacement of the true edge (see Figure 3.9 (d)); this might create a problem in determining the boundaries of piled rocks.

Experimental results showed that, in a laboratory environment, the combination of 3 iterations of Crimmins' geometric filter followed by Canny's filter with σ = 3 can give an acceptable result. To demonstrate this, a laboratory environment image of a pile of rocks was used, as shown in Figure 3.10 (a). Figure 3.10 (b) presents the result of applying 3 iterations of Crimmins' filter, and Figures 3.10 (c) and (d) present the result of applying Canny's filter to the raw and smoothed images respectively.
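As a rough off-the-shelf sketch of this combination (an approximation, not the thesis code): a 3 × 3 median filter stands in for the iterations of Crimmins' geometric filter (both suppress narrow speckle walls and valleys while roughly preserving wide plateaus), and scikit-image's Canny detector, with σ = 3, stands in for the author's implementation. The function name is illustrative.

```python
import numpy as np
from scipy import ndimage
from skimage import feature

def extract_edge_map(gray, sigma=3.0, smooth_iterations=3):
    """Sketch of the chapter's pipeline: iterative speckle smoothing
    followed by Canny edge detection.

    A 3x3 median filter approximates the geometric filter; skimage's
    Canny approximates the thesis edge detector.  Returns a boolean
    edge map (True on edge pixels)."""
    img = np.asarray(gray, dtype=float)
    for _ in range(smooth_iterations):
        img = ndimage.median_filter(img, size=3)
    return feature.canny(img, sigma=sigma)
```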

3.5 Conclusion

Due to the complexity of images of rock-piles, the problem of rock segmentation was decomposed into two parts. The first part (pile feature extraction) was presented in this chapter. This part consisted of two main processes, namely smoothing and edge detection. For smoothing, the utilization of Crimmins' nonlinear filter was proposed to reduce the noise resulting from the texture of the rocks. Then Canny's edge detection algorithm was used to extract the discontinuities in the image representing parts of the boundaries of the rocks and surfaces within these rocks.

Figure 3.10: Result of Canny's edge detector before and after smoothing (a) Raw image of a pile of rocks, (b) Result of smoothing, (c) The result of applying Canny's filter on the raw image, (d) The result of applying Canny's filter on the smoothed image


Chapter 4
Fragment Contours
An object's edge is the result of a sudden change in colour, texture or the direction of lines; in other words, an edge signifies the end of a surface, a transition to another surface, or simply the end of the object. An edge stands out against another surface of some other colour, texture, etc., or simply a void. Contours have a similar nature, i.e. they form where there are sudden changes in some gradient: colour, shadow, parallel lines seen in perspective, or texture. The difference is that a contour is the one-dimensional interface between figure and background. There is no difference between the contours or outlines of a two-dimensional form and the edges of an object when both are projected through the lens onto a plane.
Applying the edge detection algorithm to rock images will result in images that contain unnecessary information such as short lines and small closed regions. This is a result of either texture or colour change of the surface of the rock and/or multiple surfaces of the same rock. Some of this information may provide false clues about the actual boundaries of the individual rocks. A smoothing process applied prior to edge detection provides a partial solution in reducing the unwanted information. On the other hand, smoothing may also result in joining the contours of two or more individual rocks.

In this chapter, the methodology which will be used to segment the muck-pile fragment contours is presented. The starting point is the representation of the contour and its parameters. Section 4.2 contains the enhancement algorithms which are applied to the edge map images to remove the unwanted information. Gaps can occur in a region boundary because the contrast between regions may not be enough to allow the edges along the boundary to be found by the edge detector; a simple recursive method will be presented to fill these gaps. The proposed analysis is based on a multi-layered image. This raises the issue of detecting and analysing junctions resulting from the overlapping of rocks. Finally, a contour completion algorithm will be presented to estimate the missing parts of overlapped rocks.

4.1 Modelling of Fragment Contours

Contours of fragmented rocks are characterized by many parameters. These parameters can be grouped into local and global parameters. The local parameters represent the elements of the contour geometry, in which points of the contour are related to their neighbouring points. These parameters include length, tangent orientation and curvature. The global parameters, on the other hand, characterize the region bounded by the contour, such as area and perimeter.

In this section, different methods of representing contour segments will be presented. This will be followed by methods of computing the local parameters. Finally, a number of algorithms for computing some of the global parameters of the contours will be presented.

4.1.1 Contour Representation

A contour may be represented as an ordered list of straight line edges, or by a curve as a mathematical model. There are several criteria for a good contour representation, such as simplicity, compactness and accuracy [ii]. The simplest representation is an ordered list of its edges. In a discrete form, digital curves can be represented either by a sequence of points {(x₀, y₀), …, (x_{n−1}, y_{n−1})} or by a string of integers, each one ranging from 0 to 7 depending on the direction of the next point of the curve sequence. The latter representation is called the chain code sequence [50].
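A minimal sketch of the chain code encoding follows; the Freeman numbering (0 = east, codes increasing counter-clockwise) is assumed here, since the thesis does not fix the direction numbering.

```python
# Map a step (dx, dy) between 8-connected points to a Freeman chain code
# 0..7: 0 = east, 2 = north, 4 = west, 6 = south (counter-clockwise).
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode an 8-connected point sequence as a list of chain codes."""
    return [DIRECTIONS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
```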
The above representations are as accurate as the location estimates for the edges, but are not the most compact ones. In addition, they may not provide an effective representation for subsequent image analysis. The accuracy of the contour representation is determined by the form of curve used to model the contour, by the performance of the curve fitting, and by the accuracy of the estimates of edge location.

Fitting appropriate curve models to the edges increases efficiency by providing a more appropriate and more compact representation for subsequent operations. In general, curves in a plane can be represented in three different ways: the explicit form y = f(x), the implicit form f(x, y) = 0, or the parametric form α(t) = (x(t), y(t)) for some parameter t. The parametric form of a curve uses two functions, x(t) and y(t), of a parameter t to specify the points along the curve from the starting point α(t₁) = (x(t₁), y(t₁)) to the end point α(t₂) = (x(t₂), y(t₂)).

In this thesis, both types of the curve representation (the point sequence and the
mathematical model) will be used. The point sequence representation will be used
in both local and global image analysis. The parametric representation will be used
in estimating the missing part of the contour in the contour completion algorithm as
will be shown later in this chapter.

4.1.2 Local Parameters

Measurement of local parameters plays an important role in image analysis. In this section, several methods of measurement of three local parameters will be presented: arc length, tangent orientation and curvature.

Determination of the length of a contour is frequently used in image analysis. Several methods have been proposed to measure the length of a discrete curve [108] [85] [41]. The length of a curve segment from point (x_i, y_i) up to point (x_j, y_j) can be approximated by the sum of the lengths of the individual segments between points:

    l(i, j) = Σ_{k=i}^{j−1} √( (x_{k+1} − x_k)² + (y_{k+1} − y_k)² )        (4.1)

This approximation will play a major role in noise reduction in the edge map images, as will be demonstrated later in this chapter. Other methods of length measurement using chain coding can be found in [41].
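Equation 4.1 translates directly into code; a sketch (the function name is illustrative):

```python
import math

def contour_length(points, i, j):
    """Arc length of a discrete curve from point i up to point j
    (equation 4.1): the sum of the straight-line segment lengths."""
    return sum(math.hypot(points[k + 1][0] - points[k][0],
                          points[k + 1][1] - points[k][1])
               for k in range(i, j))
```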
From the fundamental theorem of differential geometry of curves [150], the first derivative of a parametric form of a curve α at t, i.e. α'(t) = (x'(t), y'(t)), corresponds to the tangent vector of the curve at t. If the tangent vector is of unit length ‖α'(t)‖ = 1, then the parameter t measures the arc length along the curve [108] [149]. From this definition, the tangent at a point gives the best linear approximation to the curve in the neighbourhood of that point. One of the simplest ways to estimate the tangent is by determining the parameters of the straight line that best approximates the local curve points. Using a window w centred at (x_c, y_c) and estimating the parameters of a straight line, the slope of this line is the slope of the tangent of the curve at (x_c, y_c). The equation of the straight line is

    a x + b y + c = 0        (4.2)

where a = cos(θ + π/2), b = √(1 − a²), c = −ax − by, and θ is the angle that the line makes with the horizontal axis. The best line is found by trying all angles θ between −π/2 and π/2, assuming that the line must pass through at least one of the curve points in the window. The error associated with any particular line is the sum of the error values for all (x_i, y_i) that correspond to the curve points. If the points lie exactly on the line, then the sum is zero, and by choosing the line for which this sum is minimum, the best line is found.
The second derivative of the curve defines the curvature κ(t). Viewing the tangent as the best linear approximation to the curve α(t), the curvature κ(t) measures how rapidly the curve is deviating from this linear approximation; i.e. it is a function of the arc length and is equal to the inverse of the radius of a circle (κ(t) = 1/r) locally coinciding with the curve (see Figure 4.1). In mathematical form:

    κ(t) = [x'(t) y''(t) − y'(t) x''(t)] / [x'²(t) + y'²(t)]^{3/2}

Figure 4.1: Curvature

In the literature on the differential geometry of curves [150], three equivalent formulations of curvature are found. Two of them are based on the orientation of the tangent, where the curve is approximated either by a second or third order polynomial and the curvature is estimated from the second derivative of the curve. The third formulation is based on the inverse of the radius of the local touching circle (i.e. fitting a circle and finding its centre and its radius). In [4] [5] digital curvature is based on the orientation of the tangent (i.e. applying a linear differentiating filter to the estimated orientation), [92] [105] [106] use a polynomial approximation based scheme, and [154] fits a circular arc. A comparison between these methods can be found in [164]. In this section two of these methods will be reviewed, namely the polynomial and circle fitting methods.
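The closed-form curvature expression can be checked numerically on a sampled parametric curve; for a circle of radius r it should return 1/r everywhere. A finite-difference sketch (names illustrative):

```python
import numpy as np

def curvature(x, y, t):
    """Curvature of a parametric curve (x(t), y(t)) sampled at points t,
    using finite-difference derivatives:
        kappa = (x' y'' - y' x'') / (x'^2 + y'^2)^(3/2)
    """
    xp, yp = np.gradient(x, t), np.gradient(y, t)
    xpp, ypp = np.gradient(xp, t), np.gradient(yp, t)
    return (xp * ypp - yp * xpp) / (xp ** 2 + yp ** 2) ** 1.5
```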
Using a second order polynomial, Albano [2] used weighted least squares estimation to minimize the error between the given set of data and the curve f(x, y). Lee et al. [88] approximated a sequence of points by a third order polynomial of arc length. For a window w of n points, any point α_i = (x_i, y_i) within the neighbourhood of the centre point α_m of the window was expressed as a third order polynomial in t_i, where t_i is the normalized arc length computed using equation 4.1, such that the arc length within each neighbourhood ranges from −l to +l, where l < 1. The proposed approximation uses weighted least squares estimation to minimize the weighted fitting error, defined as

    e² = Σ_{m=0}^{w−1} w_m ( r_m − Σ_{i=0}^{3} a_i t_m^i )²

to estimate the coefficients of the polynomial, where the weights w_m follow a normal distribution N(0, σ). Once the fitting was done for the row and the column coordinates, the curvature value at a point, κ_m, was calculated from the fitted polynomial coefficients.

Landau [86] suggested an iterative algorithm for estimating the location of the centre of a circular arc and its radius. His algorithm is based on minimization of the error between a set of given points and the estimated arc (a special case of the general equation 4.3). Using vector notation, Landau [86] was able to express the minimization condition, and suggested an iterative algorithm to estimate the circular arc centre and radius. Thomas and Chan [154] proposed the use of area rather than length to estimate the centre and the radius of a circle. Since the minimization is performed on the area error, an estimation bias will result. Thomas and Chan [154] argued that the bias is small and approaches zero as the number of data points approaches infinity.

Dudek and Tsotsos [44] proposed the curvature-tuned smoothing method to measure curvature. Using the calculus of variations, they applied a set of smoothing functionals to extract multiple interpretations of the original data as a function of a priori assumptions of target curvature. A similar method will be used in the contour completion algorithm which will be presented later in this chapter.

4.1.3 Curvature Estimation: The Selected Method

In this thesis, the method of Nitzberg et al. [118] will be used to estimate the curvature by finding the centre and the radius of the best fitting circle for the arc.

The general form of the second order polynomial is:

    f(x, y) = a + bx + cy + dx² + fxy + gy² = 0        (4.3)

Setting f = 0 and d = g and completing the square results in the equation of the circle. The weighted least squares method is used to estimate the circle centre c = (−b/2d, −c/2d) and radius r = √(b² + c² − 4ad) / 2d [132]. Let α_i = (x_i, y_i), i = 1 … n, be the points on a curve; the square error e², for a given candidate centre c and radius r, is equal to the weighted sum of squared radial distances from the circle to each point, i.e.

    e² = Σ_i w_i ( ‖α_i − c‖ − r )²        (4.5)

where w_i is the error weighting, determined by the weighting function used.

Based on the assumption that the α_i are all close to the candidate circle, Nitzberg et al. [118] used the approximation

    Σ_i w_i ( ‖α_i − c‖² − r² )² = Σ_i w_i ( ‖α_i − c‖ + r )² ( ‖α_i − c‖ − r )²
                                 ≈ (2r)² Σ_i w_i ( ‖α_i − c‖ − r )²        (4.6)

Substituting in equation 4.5:

    e² ≈ (1 / 4r²) Σ_i w_i ( ‖α_i − c‖² − r² )²        (4.7)

Nitzberg et al. [118] then estimated the parameters a, b, c and d using equation 4.7. Appendix B contains a detailed description of their algorithm. With each fit, the curvature of only one point is computed using the estimated parameters, as follows:

    κ = 1/r = 2d / √(b² + c² − 4ad)        (4.8)

and the error term in distance units is given by:

    ε = √( e² / Σ_j w_j )        (4.9)

Both equations (4.8 and 4.9) will be used to detect discontinuities in curves extracted from the image.
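Appendix B is not reproduced here; as a sketch of the same idea, the minimization of equation 4.7 with unit weights is a linear least squares problem in the circle parameters (a Kåsa-style algebraic fit), from which κ = 1/r follows. Names are illustrative.

```python
import numpy as np

def fit_circle_curvature(points):
    """Algebraic circle fit (unit weights): minimize
    sum_i (||a_i - c||^2 - r^2)^2 by linear least squares, then return
    (curvature, centre, radius).  This is the same minimization as
    equation 4.7, up to the constant (2r)^2 factor."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # solve x^2 + y^2 + D x + E y + F = 0 for (D, E, F) in least squares
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = np.array([-D / 2.0, -E / 2.0])
    radius = np.sqrt(centre @ centre - F)
    return 1.0 / radius, centre, radius
```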

4.1.4 Global Parameters

The remaining part of this section contains an overview of some of the global parameters of a region bounded by a closed contour, such as the area, the centre of gravity and the principal axes.
As mentioned in section 2.1, the digital image is represented by an n × m matrix. The area of a region is defined as the number of pixels contained within its boundary. For example, for a binary image¹ the area is equal to the sum of the pixel values of the image:

    A = Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j)

The balance point of the binary image I (i.e. the centre of gravity) is (x_c, y_c), where:

    x_c = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} j · I(i, j)

    y_c = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} i · I(i, j)
The principal axes of a region are the eigenvectors of the covariance matrix. The two eigenvectors of the covariance matrix point in the directions of maximal region spread, subject to the corresponding eigenvalue. Thus the principal spread and direction of a region can be described by the largest eigenvalue and its corresponding eigenvector [131]. With the centre of gravity established, it is possible to define the scaled spatial central moments [66] for the rows and columns of the image,

    μ(2,0) = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j) [j − x_c]²

    μ(0,2) = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j) [i − y_c]²

and the row-column cross moment of inertia

    μ(1,1) = (1/A) Σ_{i=1}^{n} Σ_{j=1}^{m} I(i, j) [j − x_c][i − y_c]

The moment of inertia covariance matrix U can then be created as follows:

    U = [ μ(2,0)  μ(1,1) ]
        [ μ(1,1)  μ(0,2) ]        (4.11)

Performing a singular value decomposition of the covariance matrix results in the diagonal matrix

    U = Φ Λ Φᵀ        (4.12)

where the columns of Φ are the eigenvectors of U, and Λ contains the eigenvalues of U, λ_M and λ_N.

¹ A binary image is a matrix with its elements either zero or one; it is assumed that the background is set to zero (i.e. I(i, j) = 0) and the foreground is set to one (I(i, j) = 1).

Figure 4.2: Perpendicular distance between a point and a line

Let θ (the angle between the major axis and the horizontal axis) be defined as:

    θ = tan⁻¹[ μ(1,1) / (λ_M − μ(0,2)) ]        (4.13)

λ_M, λ_N and θ define an ellipse, whose major axis is λ_M and whose minor axis is λ_N. One way to compute the length of the axes is by rotating the region by −θ, then finding the coordinates of the bounding box. The other way is to use equation 4.2 and find the points on the contour having the greatest distance from the line on opposite sides. The axis length is the sum of these two distances.
The formula for the perpendicular distance between a line and a point (x_i, y_i) (see Figure 4.2) is

    d² = (a x_i + b y_i + c)² / (a² + b²)        (4.14)

Knowing the orientation angle θ and the centre of gravity (x_c, y_c), the length of the minor axis is computed. The same applies for the major axis, using (θ + π/2) instead.

4.2 Edge Map Enhancement

The output of Canny's filter is a binary image containing traces of rock edges (the edge map image). Usually, these edges are multiple pixels wide; as a result, a thinning algorithm is needed to reduce their thickness to facilitate the image analysis.

Numerous thinning algorithms have been proposed in the literature [125] [148] [147]. Most of them consider the following requirements:

• Connected regions must result in connected line structures
• These lines are 8-connected and should approximate the centre line of the edge
• Approximate end line locations should be maintained

In this thesis a common thinning approach will be used, in which each pixel in the image is examined within the context of its neighbourhood region. The thinning process is performed iteratively, such that in each iteration every image pixel is inspected within a 3 × 3 window, and single-pixel-thick boundaries that are not required to maintain connectivity or the position of a line are erased. When no changes are made in an iteration, the process is terminated.
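A classical instance of such an iterative 3 × 3 scheme is Zhang–Suen thinning, shown here as an illustrative stand-in (the thesis does not name its exact rule set):

```python
import numpy as np

def zhang_suen_thin(image):
    """Iterative 3x3 thinning (Zhang-Suen scheme): boundary pixels that
    are not needed for connectivity or line position are peeled off,
    two sub-iterations per pass, until no change occurs."""
    img = np.asarray(image, dtype=np.uint8).copy()

    def neighbours(r, c):
        # P2..P9, clockwise starting from the north neighbour
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if not img[r, c]:
                        continue
                    P = neighbours(r, c)
                    B = sum(P)                                   # nonzero neighbours
                    A = sum(P[k] == 0 and P[(k + 1) % 8] == 1    # 0->1 transitions
                            for k in range(8))
                    if step == 0:
                        cond = P[0] * P[2] * P[4] == 0 and P[2] * P[4] * P[6] == 0
                    else:
                        cond = P[0] * P[2] * P[6] == 0 and P[0] * P[4] * P[6] == 0
                    if 2 <= B <= 6 and A == 1 and cond:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
                changed = True
    return img.astype(bool)
```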
Grouping is a process that organizes the image into parts, each likely to come from a single object. This is usually done bottom-up using clues about the nature of the objects and the image, and does not depend on the characteristics of any single object model. The hypothesis that humans use grouping may be prompted by the introspection that when we look at even confusing images in which we cannot recognize specific objects, we see that image as a set of chunks of things, not as an unorganized collection of edges or of pixels of varying intensities.

A variety of clues indicate the relative likelihood that chunks of the image originated from a single source. The gestalt psychologists suggested several clues, such as proximity, symmetry, collinearity, and smooth continuation between separated parts. For example, in an image of line segments, two nearby lines are more likely to be grouped together by people than are two distant ones, and gestalt psychologists suggested that this is because they are more likely to come from a single object. Lowe [93] applied this view to computer vision. Other recently explored grouping clues include the relative orientation of chunks of edges [76] and the smoothness and continuity of edges [111] [146] [31].

An active contour model (SNAKE) proposed by Kass et. al. [79] demonstrated an

54

4. Fragment Contours
intcractivity with ahigher visual process for shape correction. It. resulted in smooth
and closed contours through energy minimization. The active contour model, howcvcr
has sorne problems namely: control, scalin;; <lI1d discontinuity.
The active contour model looks for maxima in intensity gradient magnitude; however, in complex images, neighbouring and stronger edges may trap the contour into a
false, unexpected boundary. Moreover, if an initial contour is placed too far from an
object boundary, or if there is insufficient gradient magnitude, the resulting contour
will shrink into a convex closed curve, even if the object is concave. In order to avoid
these cases, a spatially smoothed edge representation [79], a distance-to-edge map
[28], successive lengthening of an active contour [13] and an internal pressure force
[28] have been introduced. Unfortunately, even if these techniques were applied, the
edge-based active contour might be trapped by unexpected edges.


Figure 4.3: Directional growing of contour segment

Gaps between edges pose an additional problem in image analysis. They are
the result of low contrast between region boundaries. Consequently, an edge linking
algorithm is needed to connect the broken parts of the contours.


Edge linking algorithms can be used to fill gaps between contour segments to form
a closed contour. In his book, Pratt [131] categorized the edge linking methods into
three categories:

 curve fitting edge linking: classical curve fitting such as Bézier polynomial or
spline fitting [125] or the iterative endpoint fitting [125],

 heuristic edge linking methods [134] [113] [40] and

 Hough transform edge linking [43] [67] [48].
A fast, simple heuristic solution was developed in this research to fill small gaps
between contour segments. It is based on growing each contour segment at its end
points along each tangent line iteratively (see Figure 4.3). On a rectangular lattice
with 8-connected contours, diagonal portions of contours can cross over each other
without colliding at any grid point. To ensure collision, at each iteration a contour
grows by one point, and the five neighbours that agree with the growth trajectory are
checked as well.
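The growth step can be sketched as follows (a minimal sketch, assuming NumPy, a boolean edge map, and a tangent direction already estimated by a contour tracer; `grow_segment` is an illustrative name, the `own` set is seeded with just the endpoint for brevity, and the full 3 x 3 neighbourhood of each grown point is probed rather than exactly the five forward neighbours):

```python
import numpy as np

def grow_segment(edge_map, endpoint, tangent, max_steps=5):
    # Grow one contour endpoint along its tangent, one grid point per
    # iteration.  At each step the new point and the neighbours around it
    # are tested for a collision with another contour, so 8-connected
    # diagonals cannot cross without meeting at a grid point.
    h, w = edge_map.shape
    own = {tuple(endpoint)}          # pixels belonging to the growth itself
    grown = []
    for step in range(1, max_steps + 1):
        ny = int(round(endpoint[0] + step * tangent[0]))
        nx = int(round(endpoint[1] + step * tangent[1]))
        if not (0 <= ny < h and 0 <= nx < w):
            break
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                yy, xx = ny + dy, nx + dx
                if (0 <= yy < h and 0 <= xx < w
                        and edge_map[yy, xx] and (yy, xx) not in own):
                    return grown + [(ny, nx)], True   # gap bridged
        grown.append((ny, nx))
        own.add((ny, nx))
    return grown, False              # step budget exhausted, no collision
```

Growing both segments alternately, as in Figure 4.3, simply calls this routine once per endpoint per iteration.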

4.3  Corners and Junctions

Identification of corner points plays an important role in shape analysis [65] [131].
These are special features in an image, and are characterized by their curvature [16].
Kitchen and Rosenfeld [83] measured cornerness as the rate of change of gradient
direction along an edge multiplied by the gradient magnitude. Fang and Huang [45]
defined the corner-ness at any point as the magnitude of the gradient of θ (the gradient
direction). This quantity attains a local maximum at a corner point. At each pixel, the
product of the corner-ness and the edge-ness (the magnitude of the gradient of the
image) is computed, and the pixel is declared a corner point if the value is greater
than some threshold.
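The Kitchen-Rosenfeld measure has a closed form in the image derivatives, k = (Ixx Iy² - 2 Ixy Ix Iy + Iyy Ix²) / (Ix² + Iy²); a small NumPy sketch, using `np.gradient` finite differences rather than the derivative operators used elsewhere in this thesis:

```python
import numpy as np

def kr_cornerness(img):
    # Kitchen-Rosenfeld cornerness: rate of change of gradient direction
    # along the edge, multiplied by the gradient magnitude.
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)          # first derivatives (rows, cols)
    Ixy, Ixx = np.gradient(Ix)         # second derivatives of Ix
    Iyy, _ = np.gradient(Iy)           # second derivative of Iy
    denom = Ix**2 + Iy**2
    k = np.zeros_like(img)
    mask = denom > 1e-12               # avoid division in flat regions
    num = Ixx * Iy**2 - 2.0 * Ixy * Ix * Iy + Iyy * Ix**2
    k[mask] = num[mask] / denom[mask]
    return k
```

On a straight step edge the gradient direction is constant, so k vanishes; it responds only where the edge turns.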
Zuniga and Haralick's [168] algorithm for corner detection is based on a gray level
facet model [62]. They proposed three different methods for corner detection: (i)
incremental change along the tangent line, where the corner point is identified by
comparing the gradient direction change of two neighbouring edge points (along the
tangent line of the edge boundary) against a declared threshold; (ii) incremental
change along the contour line, which is similar to the previous one with the exception
that the neighbouring points lie along the contour line rather than the tangent;

(iii) instantaneous rate of change, in which case the corner point is identified if the
directional derivative of the gradient along the edge direction is greater than some
threshold, provided that this point is an edge point. Nagel [110] used a method based
on minimizing the squared difference between a second order Taylor series expansion
of grey level values from one frame to another. Nobel [119] showed how the Plessey
corner detector estimates image curvature and has proposed an image representation
that is based on the differential geometrical "topography" of the intensity surface.

Rangarajan et al. [133] have proposed an optimal gray tone corner detector, based on
Canny's optimal one-dimensional detector [20]. They formulate the problem as an
optimization problem, and solve it using variational calculus. The performance measure that has to be maximized is the ratio of the signal to noise ratio and the delocalization. They developed a mathematical model for a restricted case and classified
corners into 11 types (a mask was proposed for each type). A low threshold is used
to select candidate pixels for corners, which respond to any of the 11 masks. A
candidate pixel is declared to be a corner point if it does not have two neighbours
(in a 3 x 3 neighbourhood) with a similar gradient angle, provided that it is an edge
point.

4.3.1  The Adopted Method

In the implementation used in this thesis, corners and junctions are identified from
edge map images rather than intensity ones. Consequently, the accuracy of the analysis is highly dependent on the edge detector algorithm. By tracing the contour,
corners are identified as the points having high curvature. A junction is defined as a
point where two or more straight line edges meet. The location of junctions can easily
be detected while tracing the contours.

Junction analysis is the key point in segmenting fragments. The interpretation
process starts by searching for curve branching (what will be called the "Y" junctions) to extract complete fragment contours (closed contours) and to determine the
endpoints on the incomplete ones. Two criteria are used to identify the overlapping
fragments: the first is finding the best circular arc with minimum fitting error (Appendix B); the second is comparing the average intensity values on either side of
the junction and selecting the highest one.
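The first criterion can be sketched with an algebraic (Kåsa-style) least-squares circle fit whose RMS radial residual serves as the fitting error; this is an illustrative stand-in for the procedure of Appendix B, not a transcription of it:

```python
import numpy as np

def fit_circle(points):
    # Solve x^2 + y^2 + D x + E y + F = 0 for (D, E, F) in the
    # least-squares sense; centre is (-D/2, -E/2) and the radius is
    # sqrt(D^2/4 + E^2/4 - F).  The RMS radial residual is returned as
    # the fitting error used to accept or reject the arc hypothesis.
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    err = np.sqrt(np.mean((np.hypot(x - cx, y - cy) - r) ** 2))
    return (cx, cy), r, err
```

Fitting each candidate branch at a "Y" junction and keeping the branch with the smallest residual selects the arc most likely to belong to a single fragment.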

4.4  Contour Completion

To overcome the problem of overlapping rocks in muck-piles, we propose the reconstruction of the missing part of their contours. Contours to be partially reconstructed
(completed) are identified as the contour segments connecting two junctions. To reduce the search computation time, several cases will not be considered; among them
the bisection of two rocks, i.e. only connecting the end points of one segment at a
time will be considered.

This section contains a review of different contour completion algorithms. The
objective is to adapt one of these methods to estimate the missing part of muck-pile
fragments due to overlapping.

4.4.1  Interpolation

Cubic polynomial interpolation is a common approach to completing contours [137]
[125] [131] [162]. Lower degree polynomials provide insufficient flexibility in controlling the shape of the curve, while higher-degree polynomials introduce unwanted high
frequency components and also require more computation. No lower-degree representation allows a curve segment to interpolate (pass through) two specified endpoints
with specified derivatives at each endpoint.

Given a cubic polynomial with its four unknown coefficients, four known parameters
are used to solve for the unknowns. The four knowns can be the two endpoints
and the derivatives at the endpoints. A curve segment can be defined in terms of a
cubic polynomial as follows:

    a(t) = a3 t^3 + a2 t^2 + a1 t + a0        (4.15)

To deal with finite segments of the curve, the parameter t is restricted to the interval
[0,1]. With T = [t^3  t^2  t  1], and defining the matrix of coefficients of the two
polynomials as G (so that a curve point is a(t) = T M G), a curve segment a(t) is
defined by constraints on end points, tangent vectors, and continuity between curve
segments. The major types of curves described in [125] [47] are: Hermite, Bézier,
and B-splines.
The Bzier form of the cubic polynomial cllr'e segment indh;~ctly specifies the end

:.

tangent vector by specifying two intermediate points that are not on the curve. The

59

4. Fragment Contours
starting and ending tahgent vectors are determined by the vectors P I P2 and P3P4.
The endpoints are PI and P4 :
Plx Ply
P2x P2y

G=

P3x P3y
P4x P4y
    M = [ -1   3  -3   1
           3  -6   3   0
          -3   3   0   0
           1   0   0   0 ]
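The curve point a(t) = T M G can then be evaluated directly; a brief NumPy sketch with illustrative control points (at t = 0.5 the Bernstein weights are 1/8, 3/8, 3/8, 1/8):

```python
import numpy as np

# Cubic Bezier basis matrix M; a curve point is T(t) @ M @ G with
# T(t) = [t^3, t^2, t, 1] and G the four control points.
M = np.array([[-1.0,  3.0, -3.0, 1.0],
              [ 3.0, -6.0,  3.0, 0.0],
              [-3.0,  3.0,  0.0, 0.0],
              [ 1.0,  0.0,  0.0, 0.0]])

def bezier_point(G, t):
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ G

G = np.array([[0.0, 0.0],    # P1: start point
              [1.0, 2.0],    # P2: sets the start tangent 3(P2 - P1)
              [3.0, 2.0],    # P3: sets the end tangent 3(P4 - P3)
              [4.0, 0.0]])   # P4: end point
```

The segment interpolates P1 and P4 exactly while P2 and P3 only shape the tangents, which is what makes the Bézier form convenient for completion between two fixed junction endpoints.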

4.4.2  Shape Completion

One of the earlier techniques for contour completion was proposed by Ullmann [155].
He suggested that, in order for a shape completion to have properties of isotropy,
smoothness, and minimum curvature, it should be composed of the arcs of two circles,
one tangent to one edge, the other tangent to the second edge, and such that the two
arcs share a common tangent at their meeting point. Out of all such pairs of arcs,
the pair which minimizes the total curvature, defined as the integral ∫ (dα/ds)² ds
over the curve, where α is the slope of the curve, is chosen.

Ullmann went on to describe a locally connected network which computes these
contours. In the absence of such a network, a conventional approach must be employed
to compute the contour. Rutkowski [137] described several such methods.

4.4.3  The Adopted Method

Nitzberg and Mumford [117] proposed the use of a third order spline a(t) = (x(t), y(t))
which minimizes the sum of length and curvature squared. This method is similar
to the curvature-tuned smoothing method proposed by Dudek and Tsotsos [44] for
curvature measurement.
Given two end points (x(0), y(0)) and (x(1), y(1)) and tangents (x'(0), y'(0)) and
(x'(1), y'(1)), we can compute the spline connecting them as:

    [ x(t) y(t) ] = [ t^3  t^2  t  1 ] [  2  -2    ν    ν ] [ x(0)   y(0)  ]
                                       [ -3   3  -2ν   -ν ] [ x(1)   y(1)  ]        (4.16)
                                       [  0   0    ν    0 ] [ x'(0)  y'(0) ]
                                       [  1   0    0    0 ] [ x'(1)  y'(1) ]

where ν is the speed of the spline at both t = 0 and t = 1, which is varied to minimize
the integral of length and curvature squared. The derivatives needed for the curvature
follow from the same matrices:

    [ x'(t)  y'(t)  ] = [ 3t^2  2t  1  0 ] M G

    [ x''(t) y''(t) ] = [ 6t  2  0  0 ] M G

where M is the 4 x 4 basis matrix and G the matrix of end points and tangents of
equation (4.16).


Figure 4.4: Application of the contour completion algorithm on a partially
occluded ellipse: (a) part of an ellipse, (b) completing the missing part using
a fixed value of γ = 1, (c) the same algorithm taking into consideration the
convexity of the visible part

For this application, both parameters ν and γ were modified such that they were
selected based on the shape of the unoccluded part of the contour, by considering
that γ and ν are inversely related, i.e.

    ν = 1/γ        (4.17)

Assuming there is a straight line connecting the two end points ((x(0), y(0)) and
(x(1), y(1))), let d be the maximum orthogonal distance between the line and a
boundary point on the contour; then:

    γ = min { d, sqrt((x(1) - x(0))^2 + (y(1) - y(0))^2) }        (4.18)

The smaller the value of γ, the less deformed the constructed spline.

Figure 4.5: Several degrees of overlap: (a) a smooth surface rock, (b) and
(c) the rock in (a) overlapped by one rock and two rocks respectively, (d),
(e) and (f) the contours of (a), (b) and (c) respectively

To assess the performance of the modified algorithm, the contour completion algorithm was applied
to a part of a computer generated ellipse (Figure 4.4 (a)) using two different values
of γ: 1 and the computed value from equation 4.18. The results obtained using both
values of γ are shown in Figures 4.4 (b) and (c) respectively. Comparing these two
figures demonstrates the significant influence of the selected value of γ on the
deformation of the resulting curve. This is clearly shown in Figure 4.4 (c), where the
generated spline reasonably matches the hidden part of the ellipse.
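The adopted completion can be sketched as follows (a hedged reconstruction: γ follows equation 4.18, the spline speed is ν = 1/γ per equation 4.17, and the missing part is a cubic Hermite segment between the two junction endpoints; `complete_contour` is an illustrative name, and the unit tangents and visible boundary points are assumed to be supplied by the contour tracer):

```python
import numpy as np

def complete_contour(p0, p1, t0, t1, visible, n=50):
    # gamma: the smaller of the chord length and the maximum orthogonal
    # distance d from the visible contour to the chord (equation 4.18).
    p0, p1, t0, t1 = (np.asarray(v, dtype=float) for v in (p0, p1, t0, t1))
    chord = p1 - p0
    L = np.hypot(chord[0], chord[1])
    normal = np.array([-chord[1], chord[0]]) / L
    d = max(abs((np.asarray(v, dtype=float) - p0) @ normal) for v in visible)
    gamma = min(d, L)
    nu = 1.0 / max(gamma, 1e-9)        # equation 4.17, guarded against 0
    # Hermite basis with the tangent columns scaled by the speed nu
    M = np.array([[ 2.0, -2.0,       nu,   nu],
                  [-3.0,  3.0, -2.0*nu,  -nu],
                  [ 0.0,  0.0,       nu,  0.0],
                  [ 1.0,  0.0,      0.0,  0.0]])
    G = np.vstack([p0, p1, t0, t1])
    ts = np.linspace(0.0, 1.0, n)
    T = np.stack([ts**3, ts**2, ts, np.ones_like(ts)], axis=1)
    return T @ M @ G                   # n sample points of the completion
```

Because the tangent influence scales with ν, a nearly flat visible contour (small γ, large ν) bows the completion strongly, while a deep visible arc keeps it close to the chord.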
Another test was also conducted using overlapping rocks (Figure 4.5). In this test,
we varied the visible part of the rock shown in Figure 4.5 (a), overlapping it with one
rock and then with two rocks, as shown in (b) and (c). Figures (d), (e) and (f) present
their contours respectively. Figures 4.6 (a) and (b) present the contour of the visible


Figure 4.6: Contour completion algorithm of an overlapped rock: (a) and
(b) contour of the visible part of the overlapped rock shown in Figure 4.5 (e)
and (f); (c) and (d) the results of applying the contour completion algorithm
to (a) and (b) respectively

part of the overlapped rock shown in Figure 4.5 (e) and (f), with common boundaries
with the other rocks deleted. Figures 4.6 (c) and (d) present the result of applying the
contour completion algorithm. These results showed a great similarity to the actual
contour of the rock shown in Figure 4.5 (d).
Since the contour completion algorithm is highly dependent on clues given at
the endpoints, false information may result in failure of the algorithm to correctly
estimate the missing part of the contour. Furthermore, it might result in intersection
with the contour.

To demonstrate this, consider the curves shown in Figures 4.7 (a) and (c). Each


Figure 4.7: Failure of the contour completion algorithm: (a) and (c) contours of the visible part of overlapped groups of rocks, (b) and (d) the results
of applying the contour completion algorithm to (a) and (c) respectively
one of these curves is the result of failure of the edge detector to detect the internal
boundaries of groups of rocks, resulting in a complex contour represented by the
external boundary of the whole group. The results of applying the contour completion
algorithm are shown in Figures 4.7 (b) and (d) respectively. As can be seen from the
figures, the completion produced even more complex contours.

4.5  Conclusion

This chapter reviewed techniques for fragment image representation, measurement,
enhancement, junction analysis and contour completion. Relevant techniques were
suitably adapted and adopted for the application at hand.



Chapter 5
Sieve Analysis and Fragment Size
A method of form description is a procedure for selecting and presenting information
about the characteristic way in which an object occupies space [143]. One of the
characteristics of an object is its size. The size, being a measure/description of the
object, can take many forms, such as the volume, the area of a two dimensional
projection, or the perimeter. It depends on the information available, the quantity
needed and the tools available for measuring it. Irani and Callis [75] described the
size of an object as the representative dimension that best describes the degree of
comminution of the object. For example, the diameter of a spherically symmetric
object is a measure of its perimeter and thus represents its size.

In mining, one of the common ways to measure/describe fragments is to measure
their size by sieving, which is believed to be controlled by the fragments' weight and
dimension. Allen [3] describes sieves as:

"... referred to by their mesh size which is the number of wires per inch."

and the particle sieve size as:

"... the minimum square aperture through which the particle can pass."



In general, the sieving mechanism can be described as the fragments being classified according to a series of sieves with decreasing mesh openings. This classification
is performed by letting the population of grains work its way through the sieves. This
process will be named Transformation. To obtain the size distribution, one weighs
(or counts) the contents of each sieve, which is translated into Measurement. It is
important to note the relation between these two processes, i.e. the dependence of
the measurement process on the transformation process.
In this chapter, an extensive analysis of the sieving process will be carried out to
deduce the parameters controlling it, and to discuss the constraints involved in the
classification process. Since fragments possess a nondeterministic geometry (being
general objects in 3-dimensional space), the analysis will be performed on some
deterministic models and then generalized for the nondeterministic ones. This chapter
also considers the two dimensional projection of the fragments in space and the
compatibility of the parameters of the 3-dimensional models.

5.1  3-D Size Classification

The classification process, mentioned earlier, is a form of transformation in which

a three-dimensional fragment (object) is classified, according to constraints imposed


by a two-dimensional grid. This process has been described by Allen [3]:

"Fractionation by sieving is a function of two dimensions only, maximum
breadth, and maximum thickness for, unless the particles are excessively
elongated, the length does not hinder the passage of particles through the
sieve aperture (this definition applies to sieves having square apertures)."

In mathematical form, the classification process can be viewed as a mapping
from three dimensions to two dimensions (3-D → 2-D). This mapping cannot be
characterised by the volume (the amount of space the fragment (object) occupies)

only, since volume is a scalar value which does not provide any information about the
dimensions.

Figure 5.1: Size description of (a) the sphere, (b) the pyramid
Successful passage of the fragment through a grid of a specified size is highly
dependent on the shape parameters of the two-dimensional projection in a plane
represented by the grid. To demonstrate this, consider the case of a deterministic
object such as the sphere shown in Figure 5.1 (a). Assume that the sphere was held
stationary (ignoring the weight factor) and a set of square grids was used to test its
passage. The sphere will always pass if the size of the grid is larger than the largest
Euclidean distance between any two points on the perimeter of the perpendicular
projection of the sphere on a plane parallel to the grid (i.e. the sphere's diameter).
Because of the symmetry of the sphere [91], the orientation of the grid was never
considered. In selecting a less symmetrical model, such as the pyramid shown in
Figure 5.1 (b), the orientation of the grid with respect to the pyramid plays a major
factor in its passage. Assume that the pyramid is oriented according to the principal
frame x, y, z (i.e. its base is parallel to the xy-plane and one of its vertices is at the
origin (0,0,0)).
Using the vector representation of each vertex of the pyramid as shown in Figure
5.2, in which every vertex is represented as a directional vector from the origin, l
and h can be projected on the principal frame (see Figure 5.2), i.e.:

Figure 5.2: Pyramid vector analysis

    l = (d, 0, 0) + (0, w, 0)

    h = (a, 0, 0) + (0, b, 0) + (0, 0, m)

The parameter that governs the passage through the grid during the grid's vertical
movement (parallel to the xy-plane) is l. In other words, the grid size SM that
guarantees the pyramid's passage through the sieve has to be larger than l.

A smaller grid size may also allow the pyramid's passage in the same movement
direction with a change in the grid's planar orientation. The smallest grid size that
will allow its passage is:

    Sm = max(d, w)

For the horizontal movement, where the grid is oriented perpendicular to the xy-plane,
SM = max(m, l) and Sm = max(m, d, w). For any other directional movement, SM
and Sm are:

    SM = max(l, h, max(m, l)) = max(l, h)

and

    Sm = max(a, b, d, w, m)



From the above analysis, one can deduce that for any grid orientation, the pyramid
will always pass provided that

    SM = max(l, h)

and the minimum grid size that allows the pyramid to pass at some orientation is:

    Sm = min{max(d, w), max(m, d, w), max(a, b, d, w, m)}

The above illustrates that the classification process of a stationary object is highly
dependent on its shape.
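The bounds above can be collected into a small helper (illustrative only; the argument names follow the pyramid parameters of Figure 5.1 (b)):

```python
def pyramid_grid_bounds(l, h, m, d, w, a, b):
    # S_M: grid size guaranteeing passage at any orientation;
    # S_m: smallest grid size that can still pass the pyramid at some
    # orientation, following the directional analysis above.
    S_M = max(l, h)
    S_m = min(max(d, w), max(m, d, w), max(a, b, d, w, m))
    return S_M, S_m
```

Note that max(d, w) is always the binding term of S_m, since the other two candidates only add extra arguments to the max.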
We can apply the same analysis to irregularly shaped objects such as fragmented
rocks. Assume that a fragment was held stationary, and a set of square grids was used
to test its passage. Provided that all grids used were oriented in the same way and
moving in one direction, the fragment will always pass if the size of the grid is larger
than the largest Euclidean distance between any two points on the perimeter of the
two-dimensional projection orthogonal to the grid, regardless of the orientation of the
line connecting these two points. The distance between these two points represents
the length of the projection of the two dimensions into one dimension (Mi), provided
that the fragment is convex. This particular grid size SMi will be the upper bound
for the fragment passage, since any larger grid will have the same result.

Eliminating the larger grids and performing the passage test in descending sizes,
while changing the planar orientation of each grid (from 0° to 90°), smaller grids
may allow the passage of the fragment until the size of the grid is smaller than the
minimum one-dimensional projection of the two-dimensional projection of the
fragment (mi). Grids smaller than this size (Smi) will not allow the fragment to pass
at any orientation.


Figure 5.3: Domain of the Weighting Function W


Repeating the same procedure with different directions of grid movement will
result in:

    SM = max(SMi),    i = 1, 2, ...        (5.1)

and

    Sm = min(Smi),    i = 1, 2, ...        (5.2)

Between SM and Sm there is a gray area in which uncertainty about the fragment's
passage exists. There are two parameters controlling this interval, namely: the size
of the grid and its orientation. The latter parameter is highly dependent on the
roundness of the fragment's two dimensional projection (symmetry around the centre
of gravity). In this fashion, measurement of the object and its shape information are
preserved by means of the axes measured.
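For a digitized projection, SM and Sm can be approximated by a brute-force Feret analysis of the perimeter points; a sketch (assuming NumPy; the angular sampling density is a free parameter, not something specified here):

```python
import numpy as np

def feret_axes(points, n_angles=180):
    # S_M: largest Euclidean distance between two perimeter points (the
    # Major Axis); S_m: smallest one-dimensional projection width over
    # the sampled grid orientations (the Minor Axis).
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    S_M = np.sqrt((diffs ** 2).sum(-1)).max()
    S_m = np.inf
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = pts @ np.array([np.cos(theta), np.sin(theta)])
        S_m = min(S_m, proj.max() - proj.min())
    return S_M, S_m
```

For a unit square this gives the diagonal sqrt(2) and the side 1, matching the cube example of section 5.3.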

5.2  Weighting Function Formulation

To link the fragment passage and a grid size S, a special function (the "Weighting
Function" W) will be used. This Weighting Function will represent a measure that
characterizes the passage of a fragment at a given grid size.

Let the domain of the sieve function be the set of positive real numbers (S ∈ R+),


its range will be the set P. The set P contains three logical subsets, namely: passing
(P), possible passing (?P) and not passing (¬P). The "Weighting Function" W can
then be defined as:

Definition 1  The "Weighting Function" W is an intermediate function with domain
S and range [0,1].

In addition, it is a function of two parameters, namely: fragment orientation and the
grid size. In mathematical form:

    W : S → [0,1]        (5.3)

Since W is an intermediate function, its range can form a one to one mapping with
the subsets of the set P (see Figure 5.3). The idea of using W as an intermediate
function is based on the following argument:

If an object can pass through a specific grid size at only one orientation,
then it will pass through a larger grid size at the same orientation. Moreover, the probability that the same object will pass at any orientation for
the larger size will increase.
Then for a given grid size, one of the following cases is satisfied:

    W(S) = 1       →  P    iff  S ≥ SM
    W(S) ∈ ]0,1[   →  ?P   iff  Sm < S < SM        (5.4)
    W(S) = 0       →  ¬P   iff  S ≤ Sm

where SM and Sm are defined in equations (5.1) and (5.2), respectively.

In the first and third cases, W(S) remains constant at 1 and 0, respectively. In
the second case (W(S) ∈ ]0,1[), for a fixed grid size, the planar orientation of the grid
will be the only factor which affects the passage of the fragment. By considering the

Figure 5.4: Weighting Function W(t) versus grid size: (a) for a spherical
shape, (b) for any other object

vibratory effect of the grid [22], this factor becomes a function of time. Thus, W will
exhibit a probabilistic behaviour according to the next definition:

Definition 2  The Weighting Function Wo(Si, t) is normally distributed, approximates the probability of passage of a fragment for a particular grid size Si, and is
centred at (SM + Sm)/2 of the object:

    (5.5)

where

    α = -S + (SM + Sm)/2

and where t is the frequency of the orientation of this fragment at a particular instant.

The only restriction in applying the "Weighting Function" is that the measurement
must be obtained from the object projection and not from a cross section of the object.



Throughout this thesis, SM will be referred to as the "Major Axis", and Sm as the
"Minor Axis". The "Major" and "Minor" axes need not intersect at any point, nor
need they be orthogonal to each other. A detailed definition of both Major and Minor
axes will be presented in a later chapter.

The smaller the difference between the largest and smallest grid sizes (SM and
Sm), the closer the shape is to a sphere. This will result in the value of the
Weighting Function changing rapidly, as shown in Figure 5.4 (a). On the other hand, if the
difference is large, W will vary as the size of the grid varies (see Figure 5.4 (b)).

To simplify the computation of the Weighting Function, a linear model will be
used to replace the recursive model presented, i.e., W will be assumed to change
linearly between Sm and SM [11]. Consequently, the Weighting Function becomes:
    Wo(Si, t) =  1,                        Si ≥ SM
                 (Si - Sm)/(SM - Sm),      Sm < Si < SM        (5.6)
                 0,                        Si ≤ Sm
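Equation (5.6) translates directly into a small function (a sketch of the scalar case only):

```python
def weighting_function(S, S_M, S_m):
    # Linear model of equation (5.6): 1 above the Major Axis, 0 below
    # the Minor Axis, and a linear ramp in between.
    if S >= S_M:
        return 1.0
    if S <= S_m:
        return 0.0
    return (S - S_m) / (S_M - S_m)
```

For a near-spherical fragment (S_M close to S_m) the ramp is steep, as in Figure 5.4 (a); for an elongated one it spreads over a wide interval of grid sizes, as in Figure 5.4 (b).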

5.3  Weighting Function of Planar Objects

As mentioned earlier, the input to the fragment classification process is a digital image
of the muck-pile. These images represent a two dimensional projection of the pile.
In other words, the measurement has to be performed on only one projection rather
than multiple projections.

Many mining researchers assume that fragments always possess spherical shapes.
As a result, the diameter of the circle of equivalent area is always referred to as the
size of the fragment. Table 5.1 summarises different measures used to characterise
fragments.

The area as a stand-alone measure cannot provide enough description of the
dimensionality (shape parameters), which is an important factor in the classification

Researcher                 Measured parameters
Nyberg et al. [21]         Typical diameter
Maerz et al. [96]          Area
Nie and Rustan [115]       Maximum diameter and area
McDermott et al. [104]     Area
Grainger et al. [58]       Fragment length (longest dimension) and width (normal
                           to the length)
Paley et al. [124]         Minimum projected chord length
Farmer et al. [46]         Elliptical parameters (major and minor axes of the best
                           fitting ellipse)
Kemeny et al. [80]         Area and elliptical parameters
Doucet et al. [42]         Diameter (measure of the thinnest portion that crosses
                           approximately the centre of a fragment)
Schleifer et al. [142]     Area
Bedair et al. [11] [12]    Weighting Function

Table 5.1: Summary of fragment measurement methods

Figure 5.5: Projected area of a cube

process. In addition, the area is a highly error-sensitive measure. To demonstrate this,
consider the cube shown in Figure 5.5. Assume the cube was viewed from one side
only (i.e. the projected image is a square), and the length of the side is equal to
a units. The area of the exposed image is a² and the diameter of the circle of
equivalent area (de) is:

    de = 2 sqrt(a²/π) = 1.1284 a

Figure 5.6: Weighting Function W(t) and diameter of equivalent sphere
versus grid size

On the other hand, using the axes measurement:

    Sm = a

    SM = sqrt(2a²) = 1.4142 a

Applying the same analogy as the Weighting Function (section 5.2), one can conclude that the cube will pass through the grid at any orientation provided that the
grid size is equal to de. On the other hand, using the cube's axes will stretch this
value to the closed interval [Sm, SM], which is considered to be more logical than the
former measure, since the cube is less symmetric around its centroid in comparison
with the sphere. This is demonstrated graphically in Figure 5.6, where a is assumed
to be equal to 1 (note de ∈ [a, 1.4142a]).
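A quick numeric check of the interval argument (a = 1, values as in the text; a sketch only):

```python
import math

a = 1.0                                  # cube side length
d_e = 2.0 * math.sqrt(a**2 / math.pi)    # diameter of the equal-area circle
S_m = a                                  # Minor Axis of the square projection
S_M = math.sqrt(2.0 * a**2)              # Major Axis (the face diagonal)

# d_e (about 1.1284a) falls strictly inside [S_m, S_M]: the axes stretch
# the passage size into an interval instead of collapsing it to a single
# number.
assert S_m < d_e < S_M
```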

In viewing a muck-pile, partially occluded fragments are a common problem. Using
the exposed area of the fragment to compute the diameter of the equivalent sphere
is an inadvisable procedure. To prove this argument, we will use the same square as
in Figure 5.5. Assume a = 5; by continually reducing the length of two parallel
sides (simulating occlusion by a larger square), only SM will change; meanwhile, the
diameter of the equivalent circle will keep changing.
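The same argument can be simulated directly; a sketch that models the exposed region as an a x c rectangle while two parallel sides shrink (an illustrative set-up, not the thesis code):

```python
import numpy as np

a = 5.0                                   # full square side (as in the text)
true_d_e = 2.0 * np.sqrt(a * a / np.pi)   # equal-area diameter, unoccluded
true_S_M = np.sqrt(2.0) * a               # full diagonal (Major Axis)
for c in np.linspace(5.0, 3.0, 5):        # shrink two parallel sides: a x c
    d_e = 2.0 * np.sqrt(a * c / np.pi)    # area-based diameter drifts
    S_M = np.hypot(a, c)                  # diagonal of the exposed rectangle
    err_area = abs(d_e - true_d_e) / true_d_e
    err_axis = abs(S_M - true_S_M) / true_S_M
    assert err_axis <= err_area + 1e-12   # axis measure degrades more slowly
```

The relative error of the diagonal is 1 - sqrt((a² + c²)/(2a²)), which is never larger than the area-based error 1 - sqrt(c/a), since a² + c² ≥ 2ac.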


Figure 5.7: Error in changing dimensions

Figure 5.7 (a) shows the behaviour of the error of the diameter of the equivalent
circle resulting from the new area and the error of SM. In another simulation, the
hidden part was increased diagonally; in this case, both axes were preserved by
occlusion; meanwhile, the error of the equivalent diameter increased exponentially, as
demonstrated in Figure 5.7 (b).

5.4  Conclusion

In this chapter, a more appropriate measure for fragment size was presented. This
measure characterizes each fragment by dimensions (axes) and a scalar (the Weighting
Function) rather than a scalar only (area).

A special type of classification of an object in space (xyz-space), based on a
two dimensional criterion imposed by a grid (xy-plane) representing the mesh, was
presented. This classification requires object shape descriptor(s) to evaluate a special
function which was named the "Weighting Function".

Finally, it was demonstrated that the principal axes used in this measure are
less sensitive to shape variation resulting from overlapping in comparison with the
diameter of the equivalent circle (a commonly used measure).


Chapter 6
Size Distribution
One of the indicators of the effectiveness of a blast is the size of the fragments forming
a muck-pile. Many factors influence the size of the fragments and their distribution
in the broken material produced by the blasting, among them:

 The size of the blast.

 The explosive type and distribution.

 The in situ structure of the rock.

 The structure of the blast, including geometry.

Qualitative studies of fragmented rocks have mainly been concerned with factors
such as: the energy of fine grinding, the mechanics of production blasting, the stability
of rock slopes, loading and haulage, or the stability and flow of fragmented rock
and ore in mining operations. Several mathematical models for different types of
fragmentation and comminution have been proposed to represent the size distribution
that results from blasting processes, from mechanical comminution, and from other
fragmentation operations [166] [82] [16] [19].

From the literature, estimation of the size distribution of the blasted material
has been done by one of the following methods: traditional sieving analysis [1] [15],
visual estimation and boulder counting [60], the predictive method [33], and the
photographic method [94] [21] [42] [95] [58].

The photographic method is based on the measurement of some parameter from
photographs of the muck-pile, either manually or automatically. This is usually done
by dividing the image into regions, performing the measurement and interpreting
these measurements. This interpretation is a form of transformation of some two-dimensional size parameter of the individual blocks into a three-dimensional block
size distribution.
Estimation of size distribution is also a major research topic in many other fields,
such as biology and metallography [84] [36]. The problem is "to obtain the true particle
size distribution of grains or bodies embedded in a three dimensional volume from
measurements on a two-dimensional section or cut" [141]. For this type of problem,
closed form solutions based on geometric probabilities exist, and are part of a
discipline known as stereology [156] [158] [159].

Stereology deals with methods for three-dimensional representation when only
two-dimensional sections through solid bodies or their projections are available [158].
The aspects of stereology that are pertinent in the context of measuring fragments
are those relating the volume of particles to the size distribution of their sections [140].

This chapter starts by describing different sampling methods. This is followed by
the introduction of "Virtual Sieving", utilizing the Weighting Function developed in the
previous chapter as a representation of the size of fragments. This is followed by an
overview of some of the stereological solutions used to estimate the size distribution.
Finally, the formulation of Virtual Sieving to estimate the volumetric size distribution
of fragments from their projected images will be presented.

6.1 Sampling of a Muck-Pile

In many applications, estimates of population characteristics are based on examining
a small fraction of the whole population, and the size distribution of a muck-pile
resulting from a blast is no exception. Since measuring the size of all the fragments
in a pile is impractical (being a time and labour intensive process), sampling the pile
is essential.

In practice, the probability of obtaining a sample which accurately represents the
parent distribution is remote. In addition, the characteristics of the several samples
will vary. Based on the assumption that these samples are representative of the
parent distribution, the expected variation may be estimated from statistical analysis
[3]. In general, more samples will result in a closer match between measured sample
parameters and the true population parameters.

To reduce the error in estimating the population parameters, a reasonable sampling
strategy must be used, since selective sampling can result in a sampling bias (i.e.
a misleading sample parameter). The best result may be obtained by representative
sampling using the perfect sample; the difference in population between this sample
and the bulk may then be ascribed wholly to the expected difference on a statistical basis.
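The statistical argument above can be illustrated with a small numerical experiment; the lognormal population and the sample sizes below are purely illustrative and not drawn from any blast data.

```python
import numpy as np

# Illustration: the mean fragment size estimated from more samples tracks the
# population mean more closely. The lognormal "population" is synthetic.
rng = np.random.default_rng(42)
population = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)

errors = {}
for n in (10, 100, 10_000):
    # Repeatedly draw a sample of n fragments and record the estimate spread.
    estimates = [rng.choice(population, size=n).mean() for _ in range(200)]
    errors[n] = np.std(estimates)

assert errors[10] > errors[100] > errors[10_000]
```

The spread of the estimates shrinks roughly as one over the square root of the sample size, which is the statistical basis referred to in the text.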
Many mining researchers assume that the surface of a muck-pile is an ideal random
sample representing the pile. For that reason, many adopted the photographic method
[42], [94], in which fragments on the surface are used to estimate the size distribution
of a muck-pile. A fairly large range of techniques, including different types of
films and equipment, have been experimented with in both surface and underground
mining. Several of these techniques provide acceptable results.

The common mechanism is to take a photograph of the pile normal to a horizontal
surface. The accuracy of this estimate can be significantly influenced by the
photographic sampling procedure used. Indeed, since all subsequent operations rely
on photos of muck-pile surfaces, procedural errors may be compounded later.

Figure 6.1: Proposed camera position


Maerz et al. [95] proposed a different sampling strategy. This was based on the
argument that during the loading process, rocks are mixed, and consequently the
resulting surface is more representative of the pile. They photographed the material
in the back of the haulage truck.

A combination of both sampling strategies can result in a more effective sampling
method. This can be achieved by taking images/photographs of the surface of the
muck-pile during both the digging and dumping phases of the shovelling operation.
In other words, the camera can be mounted on the shovel and sampling is conducted
repetitively (see Figure 6.1). This method will have the following advantages:

Reduction of the gravitational segregation effect: samples are not limited to the surface of the pile only.
Elimination of human interaction.
More control over the number of samples collected.

The automatic sampling strategy proposed would likely improve the accuracy of
the estimated size distribution of the pile. This is evident since the shovel digging
operation will result in a continuous increase of the exposed area of the pile. The
remaining question is the actual implementation of such a strategy, which is beyond
the scope of this thesis.

6.2 Virtual Sieving

The definition of the size distribution (Irani and Callis [5]) is given as:

"the frequency of occurrence of particles of every size percent."

This is usually represented graphically in one of the following ways:

percentage distribution of particles by weight or volume in the range of sizes
included in the mass, or
cumulative distribution, additive from small to large or vice versa, with specified
increments of particle size.

From the sieving analysis point of view, these fractions are the amounts passing a
given screen size and retained on the next smaller one. Adopting a different convention,
such as using the percentage of numbers rather than weight or volume, i.e. the
number of rocks retained on a given screen with respect to the total number of rocks
in a given sample, can drastically change the measurement.
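The impact of the chosen convention can be sketched numerically; the fragment sizes and screen bins below are illustrative.

```python
import numpy as np

# The same fragments binned by count versus by volume give drastically
# different curves: many fines dominate by number, a few boulders by volume.
sizes = np.array([1.0] * 90 + [10.0] * 10)   # 90 fines, 10 boulders (arbitrary units)
volumes = sizes ** 3                          # volume scales with the cube of size
bins = [0.0, 5.0, 20.0]                       # two illustrative screen classes

count_frac = np.histogram(sizes, bins)[0] / len(sizes)
volume_frac = np.histogram(sizes, bins, weights=volumes)[0] / volumes.sum()

assert count_frac[0] == 0.9      # 90% of the fragments by number are fines...
assert volume_frac[0] < 0.01     # ...yet they hold under 1% of the volume
```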
Assuming a sample contains N fragments, the Weighting Function \(W_k(S_i, t)\) for
a specific grid size \(S_i\) for each individual fragment k can be computed as follows:

\[
W_k(S_i, t) = \frac{1}{\beta\sqrt{2\pi}} \sum_{x=0}^{S_i} e^{-\frac{(x-\mu_k)^2}{2t^2}} \tag{6.1}
\]

The total Weighting Function of the sample is the normalized sum of all Weighting
Functions at this particular grid size. In other words:

\[
W_T(S_i, t) = \frac{1}{N} \sum_{k=1}^{N} W_k(S_i, t) \tag{6.2}
\]


Substituting equation 6.1, equation 6.2 becomes:

\[
W_T(S_i, t) = \frac{1}{N} \frac{1}{\beta\sqrt{2\pi}} \sum_{k=1}^{N} \sum_{x=0}^{S_i} e^{-\frac{(x-\mu_k)^2}{2t^2}} \tag{6.3}
\]

This equation can be interpreted as follows:

For a given grid size, all rocks are measured and tested by calculating the
Weighting Function for each individual fragment, and the total Weighting
Function is assumed to be representative of the percentage of the rocks
that would pass through this grid size.

This interpretation matches, to a large extent, the sieving process mechanism. As
a result, by varying the grid size, the resulting weight can be considered a
simulation of the sieving process and will be referred to as "Virtual Sieving".
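As a sketch of the Virtual Sieving computation (equations 6.1-6.3), the code below evaluates a per-fragment weighting function and averages it over the sample at each grid size; the Gaussian form and the parameters beta and t are illustrative stand-ins for the weighting function defined in chapter 5.

```python
import numpy as np

def fragment_weight(size, grid_size, t=1.0, beta=1.0):
    # Per-fragment weighting W_k(S_i, t): a Gaussian centred on the fragment
    # size, accumulated up to the grid size (illustrative form only).
    x = np.arange(0, grid_size + 1)
    return np.exp(-((x - size) ** 2) / (2.0 * t ** 2)).sum() / (beta * np.sqrt(2 * np.pi))

def virtual_sieve(sizes, grid_sizes, t=1.0):
    # Total weighting W_T(S_i, t): normalized sum over the N fragments
    # (equation 6.2), evaluated at each virtual sieve (grid) size.
    return np.array([np.mean([fragment_weight(s, g, t) for s in sizes])
                     for g in grid_sizes])

sizes = [3.0, 5.0, 8.0, 12.0]      # measured fragment sizes (arbitrary units)
grids = [2, 4, 6, 10, 14]          # virtual sieve sizes, small to large
curve = virtual_sieve(sizes, grids)
assert np.all(np.diff(curve) >= 0)  # a larger sieve passes at least as much
```

Varying the grid size then traces out the sieving-like curve, which is the sense in which the measurement simulates mechanical sieving.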

6.3 Volume Based Size Distribution

The volumetric (true block) size distribution of blast fragmentation is considered an
important parameter in assessing the blast. It can be defined as a mapping of a
two-dimensional measure, obtained from a projected image of the surface of the muck-pile,
to a three-dimensional measure representing the volumetric measurement of individual
fragments, i.e. 2-D → 3-D. Similar problems existing in biology and metallography
[38] [156] [158] [159] [136] were studied and constrained solutions were found by
employing stereology.

In general, the process of mapping two-dimensional information to three dimensions
(2-D → 3-D) is a difficult one, particularly when the two-dimensional profile is the result
of an intersection between an object and a plane, as in the cases studied in stereology.
This is because the observed profile size of the object is a function of both the shape
of the object and the location and orientation of the sectioning plane. A set of
assumptions was used to outline the solution to this problem; among them is the
model used to characterize the object. The selection of these models was based
on the closest matching regular geometric shape (usually randomly oriented simple
convex objects such as spheres, ellipsoids, etc.).

This section contains an overview of some of the stereological solutions for two
geometrical models, namely spheres and ellipsoids. The applicability of these solutions
to projected images will also be presented. Finally, the adaptation of the Virtual
Sieving method to estimate the true size distribution from projected images will be
presented.

6.3.1 Spherical Model

There have been several procedures developed to determine particle size distributions
of spheres from their section size distributions [159] [156] [38] [8] [74]. Wicksell [160]
pioneered this by formulating the problem in order to solve a corpuscular problem in
anatomy. In 1958, Saltykov [159] presented one of the most significant procedures to
estimate the true particle size distribution from section profiles.

Using a spherical model, Saltykov [159] based his solution on the assumption of a
discrete distribution made up of m classes of equal width (Δ), such that:
\[
\Delta = \frac{d_{max}}{m} \tag{6.4}
\]

where \(d_{max}\) is the diameter of the largest spheres. As a result, the numerical density
of profiles \(N_a\) of any class i becomes:

\[
N_a(i) = \sum_{j=1}^{m} N_a(i,j) \tag{6.5}
\]

Using the same number of classes and width (m and Δ respectively) for the numerical
density of volume \(N_v\), he proposed a linear relation between the number of spheres
in class j, \(N_v(j)\) where \(j = 1, \ldots, m\), and the number of profiles that these spheres
contribute to profile class i, \(N_a(i,j)\), according to the following equation:

\[
\begin{bmatrix} N_a(1) \\ N_a(2) \\ \vdots \\ N_a(m) \end{bmatrix}
= \Delta
\begin{bmatrix}
k_{11} & k_{12} & \cdots & k_{1m} \\
 & k_{22} & \cdots & k_{2m} \\
 & & \ddots & \vdots \\
 & & & k_{mm}
\end{bmatrix}
\begin{bmatrix} N_v(1) \\ N_v(2) \\ \vdots \\ N_v(m) \end{bmatrix}
\tag{6.6}
\]

where \(k_{ij} = \sqrt{j^2 - (i-1)^2} - \sqrt{j^2 - i^2}\). \(N_v\) was then computed by pre-multiplying
\(N_a\) by the inverse of the K matrix:

\[
\begin{bmatrix} N_v(1) \\ N_v(2) \\ \vdots \\ N_v(m) \end{bmatrix}
= \frac{1}{\Delta}
\begin{bmatrix}
k_{11} & k_{12} & \cdots & k_{1m} \\
 & k_{22} & \cdots & k_{2m} \\
 & & \ddots & \vdots \\
 & & & k_{mm}
\end{bmatrix}^{-1}
\begin{bmatrix} N_a(1) \\ N_a(2) \\ \vdots \\ N_a(m) \end{bmatrix}
\tag{6.7}
\]
Application of such a method is based on the assumption that the measurements
from the section of the object resemble a sphere. An "equivalent" measure commonly
used is the area-equivalent diameter \(d_e\), defined as the diameter of a circle of area
equal to a measured cross-sectional area:

\[
d_e = 2\sqrt{\frac{A}{\pi}}
\]

where A is the area of a non-circular section.

Many of the size distribution methods used in mining employ such a technique or
a derivative. This ignores the fact that regular shapes are rarely present in fragmented
rocks. One of the problems associated with this method is its inability to preserve
shape information, which is a major factor in the sieving process.
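Saltykov's unfolding (equations 6.4-6.7) reduces to building the triangular K matrix and solving a linear system, as the following sketch shows; the profile counts used are synthetic.

```python
import numpy as np

def saltykov_matrix(m):
    # Upper-triangular K matrix of equation 6.6:
    # k_ij = sqrt(j^2 - (i-1)^2) - sqrt(j^2 - i^2) for j >= i, zero otherwise.
    K = np.zeros((m, m))
    for i in range(1, m + 1):
        for j in range(i, m + 1):
            K[i - 1, j - 1] = np.sqrt(j**2 - (i - 1)**2) - np.sqrt(j**2 - i**2)
    return K

def saltykov_unfold(n_a, delta):
    # Equation 6.7: N_v = (1/delta) * K^{-1} N_a, computed as a linear solve.
    K = saltykov_matrix(len(n_a))
    return np.linalg.solve(K, np.asarray(n_a, float)) / delta

# Round-trip check: profiles generated from a known N_v unfold back to it.
n_v_true = np.array([5.0, 3.0, 1.0])
delta = 2.0
n_a = delta * saltykov_matrix(3) @ n_v_true      # forward relation (6.6)
assert np.allclose(saltykov_unfold(n_a, delta), n_v_true)
```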


6.3.2 Ellipsoidal Model

Methods for estimating particle size distributions from section measurements using
ellipsoidal models have been intensively studied [37] [39] [36] [158]. Generally
speaking, particles modelled by ellipsoidal bodies can be grouped into three main
categories: constant shape parameter (e.g. ellipsoids with constant axial ratios), two
variable shape parameters (e.g. variable ellipsoids of revolution), and three variable
shape parameters (e.g. tri-axial ellipsoids).

In the category of constant shape, Wicksell [161] proposed a description by a single
size parameter, namely, the geometric mean of the principal axes. The particulate
phase was then described by a univariate size distribution, and the stereological problem
was reduced to identifying the set of profiles produced by random plane sections
through the aggregate of particles.

For the second and the third categories, particles were assumed to exhibit variations
about a given type of shape as well as size variation [158]. As a result, the
corresponding distribution functions will be bi- and tri-dimensional respectively. This
is based on the argument of Cruz-Orive [121] that the p-dimensional particle distribution
can be identified from the corresponding profile distribution only if the latter
has a dimension greater than or equal to p.

This restricted the solution of the third category to be nondeterministic for an
infinitesimally thin section, i.e. identifying a three-variate distribution describing
variable tri-axial ellipsoids from plane sections becomes indeterminate, since the profiles
can only be described by a bivariate distribution (e.g. that governing their major and
minor principal axes).
For the second category, namely two variable shape parameters, Cruz-Orive [122]
[159] used the following assumptions to estimate the size distribution:

Non-overlapping spheroids.
Figure 6.2: Cruz-Orive definition of the principal axes of a prolate spheroid

Spheroids are all of the same type, namely, either prolate¹ or oblate².
The spheroid centres are assumed to be uniformly scattered within the sample.
The spheroids are isotropically and independently oriented about their centres.
The size and shape of a randomly chosen spheroid are independent of its
position within the sample.

For either spheroid type, prolate or oblate, the major and minor principal semi-axes,
a and b (see Figure 6.2), were assumed to vary between 0 and B, where B is a
constant larger than or equal to the largest value of b for the prolate, i.e. \(b \in [0, B]\),
or of a for the oblate (\(a \in [0, B]\)). This range was divided into s classes (s > 1), of
equal width \(\Delta = \frac{B}{s}\). The shape component \(x^2 = 1 - (\frac{b}{a})^2\), in the range [0, 1], was also
divided into k classes of equal width \(\tau = \frac{1}{k}\). Thus, the domain of variation of \((b, x^2)\)
for the prolate, or of \((a, x^2)\) for the oblate, was divided into a grid comprising \(s \times k\)
classes, each class being represented by a rectangle of sides Δ and τ. Each spheroid
belongs to exactly one of such classes.


::

A spheroid belonging to the (i, j)th class is called the ij-spheroid; it must satisfy
the inequalities (i-1)~ < b( or a) ~ i~ and (j -l)r < r ~ jr, where i = 1,2, ... , s
and j = 1,2, ... , k. The :number of the ij spheroids per unit volume of specimen
s

is denoted by Nv( i, j), so that

.~

I:I: Nv( i, j) = N v, which is the overall numerical


i=lj=l

density of spheroids.

1Prolate spheroids are generated by ellipses revolving around their major principal a.'<S
'Oblate spheroids are generated by ellipses revolving around their minor plincipal axis


The elliptical profiles were also classified by means of the same s × k size-shape
grid used for the spheroids. Thus the ellipse number densities \(N_a\) are related to
the spheroid ones by pre-multiplication of the spheroid number densities \(N_v\) by a size
corrector matrix P (an upper triangular matrix of size s × s) and post-multiplication
by a shape corrector matrix Q (a lower triangular matrix of size k × k) as follows:

\[
\begin{bmatrix}
N_a(1,1) & \cdots & N_a(1,k) \\
\vdots & & \vdots \\
N_a(s,1) & \cdots & N_a(s,k)
\end{bmatrix}
= \Delta
\begin{bmatrix}
p_{1,1} & p_{1,2} & \cdots & p_{1,s} \\
 & p_{2,2} & \cdots & p_{2,s} \\
 & & \ddots & \vdots \\
 & & & p_{s,s}
\end{bmatrix}
\begin{bmatrix}
N_v(1,1) & \cdots & N_v(1,k) \\
\vdots & & \vdots \\
N_v(s,1) & \cdots & N_v(s,k)
\end{bmatrix}
\begin{bmatrix}
q_{1,1} & & \\
q_{2,1} & q_{2,2} & \\
\vdots & \vdots & \ddots \\
q_{k,1} & q_{k,2} & \cdots & q_{k,k}
\end{bmatrix}
\tag{6.8}
\]

where

\[
p_{i,j} =
\begin{cases}
\sqrt{i^2 - (i-1)^2} & i = j \\
\sqrt{j^2 - (i-1)^2} - \sqrt{j^2 - i^2} & j > i
\end{cases}
\tag{6.9}
\]

The elements of the Q matrix are functions of both k and the spheroid type. For
prolate spheroids,

\[
q_{i,j} =
\begin{cases}
\sqrt{t_i^2 - 1}\; f(t_i) & i = j \\
\sqrt{t_i^2 - 1}\; \left\{ f(t_j) - f(t_{j+1}) \right\} & i > j
\end{cases}
\tag{6.10}
\]

For the oblate spheroids,

\[
q_{i,j} =
\begin{cases}
\sqrt{t_i^2 - 1}\; \tilde{f}(t_i) & i = j \\
\sqrt{t_i^2 - 1}\; \left\{ \tilde{f}(t_j) - \tilde{f}(t_{j+1}) \right\} & i > j
\end{cases}
\tag{6.11}
\]

where \(f(t) = \frac{t}{t^2+1} + \tan^{-1}(t)\), \(\tilde{f}(t)\) is the same expression with \(\tan^{-1}\) replaced by
\(\tanh^{-1}\)³, and \(t_j = \sqrt{\frac{k-j+1}{j-1}}\).

The inverse relation to equation 6.8 gives the required frequencies in terms of the
known ones:

\[
\begin{bmatrix}
N_v(1,1) & \cdots & N_v(1,k) \\
\vdots & & \vdots \\
N_v(s,1) & \cdots & N_v(s,k)
\end{bmatrix}
= \frac{1}{\Delta}
\begin{bmatrix}
p_{1,1} & p_{1,2} & \cdots & p_{1,s} \\
 & p_{2,2} & \cdots & p_{2,s} \\
 & & \ddots & \vdots \\
 & & & p_{s,s}
\end{bmatrix}^{-1}
\begin{bmatrix}
N_a(1,1) & \cdots & N_a(1,k) \\
\vdots & & \vdots \\
N_a(s,1) & \cdots & N_a(s,k)
\end{bmatrix}
\begin{bmatrix}
q_{1,1} & & \\
q_{2,1} & q_{2,2} & \\
\vdots & \vdots & \ddots \\
q_{k,1} & q_{k,2} & \cdots & q_{k,k}
\end{bmatrix}^{-1}
\tag{6.12}
\]
In spite of not being a popular method for size distribution estimation in the mining
industry, the Cruz-Orive method preserves, to a certain extent, the shape information
embedded in the shape factor. The shortcoming of this method is its complexity
in presenting the resulting distribution, i.e. interpreting the three-dimensional size
distribution is not an easy task.
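The linear-algebra core of the Cruz-Orive unfolding (equations 6.8 and 6.12) can be sketched as follows; the triangular matrices here are random placeholders standing in for the actual size and shape correctors of equations 6.9-6.11.

```python
import numpy as np

# Illustrative shapes only: s size classes, k shape classes. P and Q are
# placeholder triangular matrices, not the Cruz-Orive corrector coefficients.
rng = np.random.default_rng(0)
s, k, delta = 4, 3, 1.0
P = np.triu(rng.uniform(0.5, 1.5, (s, s)))   # upper triangular size corrector
Q = np.tril(rng.uniform(0.5, 1.5, (k, k)))   # lower triangular shape corrector
N_v_true = rng.uniform(0.0, 5.0, (s, k))     # spheroid number densities

N_a = delta * P @ N_v_true @ Q                              # forward relation (6.8)
N_v = np.linalg.inv(P) @ N_a @ np.linalg.inv(Q) / delta     # inverse relation (6.12)
assert np.allclose(N_v, N_v_true)
```

The round trip confirms that, given the corrector matrices, recovering the spheroid densities is a pair of triangular inversions.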
³By definition \(\tanh^{-1}(t) = \frac{1}{2}\ln\left(\frac{1+t}{1-t}\right)\) for \(|t| \le 1\); in this case \(\tanh^{-1}(t) = \frac{1}{2}\ln\left(\frac{t+1}{t-1}\right)\).


6.3.3 Applicability of Sectioning Methods to Projected Images

Many researchers believe that the mathematics involved in the interpretation of
projected images has direct applicability to sectioned ones [156]. This is based on the
assumption that the profiles to be measured are of a specific type of intersection
of the particles with a sampling plane.

Rather than a random sectioning plane, the sampling will "of necessity" be the
surface of a rock pile, i.e. a "projected" profile, where the largest visible dimension
of the fragment, in the direction of projection, is revealed.

The main problems encountered here are:

Particle overlapping: particles in the second layer of the pile will be
partially obscured or overlapped by particles in the first layer, and
shadow effects of particles.

Generally speaking, in applying any of the above methods directly to the problem of
estimating a block size distribution of a fragmented rock pile from measurements made
over its surface, many of the underlying assumptions of these methods are violated.
As a result, many researchers build empirical functions to correct the measurements
obtained, particularly from overlapped rocks. In the remaining part of this chapter, a
new volumetric size distribution method will be introduced. This method addresses
the problem of overlapping rocks from a different perspective, by reconstructing the
missing part of the fragment to replace the empirical functions used.

6.3.4 From Virtual Sieving to Size Distribution

Following in the footsteps of Cruz-Orive [121], two parameters will be used to describe
the fragments, using a bivariate convex model for all fragments, namely prolate or
oblate ellipsoids, and modifying Cruz-Orive's [121] assumptions to cope with the
projected image of the rock-pile surface accordingly:

Non-overlapping spheroids: rather than using an empirical function to correct
overlapping for one case and generalize it for all cases, we shall attempt to
reconstruct the missing part of the fragment's contour resulting from overlap.
Spheroids are all of the same type, and sieves are imposed in the same viewing
direction.

Rather than using the bivariate distribution proposed by Cruz-Orive [121], the
weighting function defined in the previous chapter will be used. Assuming that fragments
are ellipsoids, the volume of fragment i can be computed directly using the
following equation:

\[
V_i = \frac{4\pi}{3} m_i^2 M_i, \qquad i = 1, \ldots, N \tag{6.13}
\]

where \(m_i = \frac{S_m}{2}\), \(M_i = \frac{S_M}{2}\), and \(S_m\) and \(S_M\) are the fragment's measured minor and
major axes respectively. The discrete distribution is then divided into n classes of
equal width \(\delta\) such that

\[
\delta = \frac{V_{max}}{n} \tag{6.14}
\]

where, for N fragments,

\[
V_{max} = \max_i V_i, \qquad i = 1, \ldots, N \tag{6.15}
\]

Using the weighting function's linear model (equation 5.6), and defining \(w_{ij}\) as the
weighting function of fragment i at class j, this results in an \(N \times n\) matrix. The
elements of the W matrix are then normalized according to the following equation:

\[
\bar{w}_{ij} = \frac{w_{ij}}{\sum_{l=1}^{n} w_{il}} \tag{6.16}
\]

The volumetric distribution can simply be computed using the following equation:

\[
\begin{bmatrix} F_1 & \cdots & F_n \end{bmatrix}
=
\begin{bmatrix} V_1 & \cdots & V_N \end{bmatrix}
\begin{bmatrix}
\bar{w}_{11} & \cdots & \bar{w}_{1n} \\
\vdots & & \vdots \\
\bar{w}_{N1} & \cdots & \bar{w}_{Nn}
\end{bmatrix}
\tag{6.17}
\]
This method convolves both shape parameters used to represent the fragments in
the estimation of the volumetric size distribution. Even though the discrete distribution
was divided into n classes of equal interval (equation 6.14), the method has
the flexibility of using intervals of variable width. Furthermore, the distribution obtained
is much simpler to interpret than the one obtained by the Cruz-Orive method.
Results of applying this method will be presented in chapter 8.
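A sketch of the computation in equations 6.13-6.17 follows; the triangular weighting function used here is a placeholder for the linear model of equation 5.6, and all axis values are illustrative.

```python
import numpy as np

def volumetric_distribution(minor_axes, major_axes, n_classes, weight_fn):
    # Ellipsoid volume per fragment (equation 6.13): V = (4*pi/3) * m^2 * M,
    # with m and M the minor and major semi-axes.
    m = np.asarray(minor_axes) / 2.0
    M = np.asarray(major_axes) / 2.0
    V = (4.0 * np.pi / 3.0) * m**2 * M
    # N x n weighting matrix, row-normalized as in equation 6.16.
    W = np.array([[weight_fn(v, j, n_classes) for j in range(n_classes)] for v in V])
    W = W / W.sum(axis=1, keepdims=True)
    # Volumetric distribution per class (equation 6.17): F = V^T W.
    return V @ W

def tri_weight(v, j, n, v_max=500.0):
    # Placeholder weighting: a triangular weight around the fragment's class
    # (the thesis uses the linear model of equation 5.6 instead).
    centre = v / (v_max / n)
    return max(0.0, 1.0 - abs(centre - j))

F = volumetric_distribution([2.0, 3.0, 5.0], [4.0, 6.0, 9.0], 8, tri_weight)
assert F.shape == (8,)
```

Because each row of the normalized matrix sums to one, the class totals conserve the overall measured volume.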

6.4 Conclusion

In this chapter, Virtual Sieving was presented as a direct application of the weighting
function. This powerful tool of fragment measurement was interpreted as a simulation
of sieve analysis, which provided a feasible measure of the size distribution by
the number of fragments retained in each sieve. Using spheroids as a geometric model,
which provides a more realistic representation of fragments, the volumetric (true) size
distribution was then derived. Performance evaluation of this method, as well as
comparison with the stereological and mining methods used in estimating the true size
distribution, will be presented in chapter 8.



Chapter 7

Implementation and Experimentation

Using the description of the surface mining process given in chapter 1 as a sequential
process, and by replacing the first closed-loop subsystem shown in Figure 1.2 by the
one shown in Figure 7.1, a significant impact on the subsequent processes, such as
digging conditions, enhancement of production quality control, as well as furthering
the automation of surface mining operations [126], can be achieved. The single-input,
single-output black box added to the new system configuration is the process of
fragment measurement. The input to the black box is the visual information acquired
by sensing a muck-pile, e.g. digital images of the pile. The output is a fragment
classification similar to the mechanical classification process (sieving) of fragments.

This chapter describes the process represented by the black box and the link between
the input and the output. It also discusses the constraints and the assumptions
made to establish this link. The starting point is a detailed description of the digital
images of muck-piles and an outline of their characteristics. This is followed by the
methods used in implementing the tools described in chapters 3, 4 and 5 which are used
to analyze these images. During the description of the implementation procedure,


Figure 7.1: A black box model of the blasting process
laboratory images will be used to demonstrate the results obtained.

7.1 Muck-Pile Description

The input to the black box is intensity images (where each pixel represents the gray
level value of a point on the sensed surface) of muck-piles. These images are characterized
by many properties, among them:

Surface Texture: each fragment possesses a textured surface; in addition, the
pile itself is not of uniform texture. This poses a problem since conventional
image processing boundary detection algorithms tend to be highly sensitive to
texture variations.
Multifaceted Fragments: rock fragments may have more than one face visible
in an image. This may result in an edge detection algorithm determining that
a fragment boundary exists at what is actually a face boundary.
View Location: the images acquired vary with the viewing angle, elevation
and distance.
Illumination: natural lighting can vary in intensity and angle of incidence.
This can have a very significant effect due to shadows and loss of contrast.
Figure 7.2: Lab environment camera setup

Figure 7.3: Image of lab rock pile

Environment: rain and surface moisture can dramatically affect the image
properties. Snow can obscure fragments.

All of these properties contribute to the quality of the digital image to be analyzed,
as will be shown later.

In a laboratory environment, a CCD camera was mounted on an adjustable-height
stand to view a laboratory setup pile from the top, as demonstrated in Figure 7.2.
Using several diffused light sources, Figure 7.3 demonstrates a digital image of a
laboratory environment rock pile.



Figure 7.4: Overlapping rocks: (a) Composite layers, (b) Contours of the
composite layers, (c) and (d) Rocks of the first layer, (e) and (f) Rocks of the
second layer, (g) The third layer rock
Images of the surface of muck-piles usually contain partially occluded rocks. Thus
the surface of the pile can be modelled as being divided into three layers (see section
3.1). This division is adopted to simplify the fragment segmentation and measurement
sub-problems. The three layers in question are the first (top) layer, second (middle)
layer, and background layer: L1, L2, and L3 respectively. In measuring the fragments,
only the fragments in the first and second layers will be considered. Figure 7.4
demonstrates a multi-layer image. The criterion used to classify fragments into one
of these layers is based on their contour properties such as continuity and convexity.
The layer classification methodology can be summarised as follows:

First Layer: A rock is in the first layer, R ∈ L1, if it is completely visible (i.e.
its contour is closed and convex).


Figure 7.5: Bisection of overlapping fragments: (a) intensity image of two
bisecting rocks, (b) contours of the bisecting rocks

For non-convex contours, one of the following cases applies:

- Two rocks both in the top layer, i.e. R1, R2 ∈ L1
- One rock in the top layer and one rock in the second layer, i.e. R1 ∈ L1
and R2 ∈ L2

Second Layer: A rock is in the second layer, R ∈ L2, if it is partially occluded
by one or more rocks in the first layer (Type A) or by another rock in the second
layer (Type B).
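The closed-and-convex test for first-layer membership can be sketched as follows; the polygonal contours are illustrative, and the full criterion in the text also relies on contour continuity.

```python
import numpy as np

def classify_contour(points, closed):
    # A rock whose contour is closed and convex is assigned to the first layer
    # (L1); anything else is a candidate for the overlap analysis (L2). The
    # convexity test checks that consecutive edge cross products share a sign.
    if not closed:
        return "L2-candidate"
    p = np.asarray(points, float)
    e1 = np.roll(p, -1, axis=0) - p                       # edge vectors
    e2 = np.roll(p, -2, axis=0) - np.roll(p, -1, axis=0)  # following edges
    cross = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
    return "L1" if np.all(cross >= 0) or np.all(cross <= 0) else "L2-candidate"

square = [(0, 0), (4, 0), (4, 4), (0, 4)]                 # closed, convex
notch = [(0, 0), (4, 0), (2, 1), (4, 4), (0, 4)]          # closed, non-convex
assert classify_contour(square, closed=True) == "L1"
assert classify_contour(notch, closed=True) == "L2-candidate"
```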
Based on these criteria, interpretation of the images of the muck-pile surface
will be analysed. In the analysis, several uncommon situations will not be considered;
among them, the bisection of one rock by another, i.e. when their contours meet
at four points (see Figure 7.5). Each contour segment connecting two junctions will
be treated independently. In other words, the bisected contour will be split into two
contours and the contour completion algorithm will be applied to each independently.
This strategy was adopted to reduce the computation (time and complexity) required
for the search algorithm to match the segments.



Figure 7.6: A black box model of the fragment classification process

7.2 Fragment Measurement

The fragment measurement process (described earlier as a black box) consists of three
main subprocesses, namely: preprocessing, analysis and size classification (see Figure
7.6). The theoretical aspects of these subprocesses were described in detail in chapters
3, 4 and 5 respectively. In this section we present their implementation. In
addition, for each of the implemented subprocesses, a laboratory environment image
will be used to demonstrate its results.

7.2.1 Preprocessing

Actual rock fragments do not possess smooth surfaces; this usually results in "noisy"
images. Prior to the application of contour extraction algorithms, smoothing of the
image is needed to reduce the image artifacts. In chapter 3 we showed that the
noise in a muck-pile image can be significantly reduced using the Crimmins filter [32].
Being a nonlinear filter, it has the property of smoothing the surface while preserving
boundary features.

The number of iterations used affects, to some extent, the contrast between the
boundaries, especially if the image contains small rocks. Consequently, the optimum
number of iterations varies from one image to another depending on the lighting
conditions and fragment colour and size. In the laboratory environment (where we
have control over illumination and rock size range), two or three iterations were found
to be sufficient to smooth fragments' surfaces and minimise noise adequately.
Following smoothing is the application of an edge detection algorithm. Canny's
filter [20] was selected to perform this process. Since this filter convolves the image
with a Gaussian smoothing filter, the optimum σ¹ is determined by
the contrast and resolution of the image. Experiments showed that setting σ to a
value of 3 can reduce the presence of unwanted edges.
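A sketch of this preprocessing chain follows; since the Crimmins geometric filter and the full Canny detector are not reproduced here, an iterated median filter and a thresholded Gaussian-gradient magnitude stand in for them (σ = 3 follows the value quoted in the text).

```python
import numpy as np
from scipy import ndimage

def preprocess(image, smoothing_iters=3, sigma=3.0, grad_thresh=0.2):
    # Stand-in for the Crimmins geometric filter: iterative 3x3 median
    # filtering suppresses speckle while roughly preserving boundaries.
    img = np.asarray(image, float)
    for _ in range(smoothing_iters):
        img = ndimage.median_filter(img, size=3)
    # Simplified stand-in for the Canny detector: Gaussian-gradient magnitude
    # (sigma = 3) followed by a fixed threshold. The full detector adds
    # non-maximum suppression and hysteresis thresholding.
    grad = ndimage.gaussian_gradient_magnitude(img, sigma=sigma)
    return grad > grad_thresh

rng = np.random.default_rng(1)
img = np.zeros((64, 64))
img[16:48, 16:48] = 4.0                      # a bright "fragment" on a dark field
noisy = img + 0.1 * rng.standard_normal(img.shape)
edges = preprocess(noisy)
assert edges.any() and not edges.all()       # edges appear near the boundary only
```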

7.2.2 Image Analysis

Fragment contours are difficult to discriminate from other edges resulting from fragment
features. In addition, the image preprocessing yields an image with numerous
unconnected edge segments. Hence a method of extracting these contours from the
pre-processed image is needed. The objective of this process is to identify individual
fragments from edge map images. The analysis process performs this task in several
stages.

Initially, the boundaries of individual fragments are not identified; hence, the term
"region" will be used to indicate either boundaries of individual fragments or boundaries
of surfaces of fragments. Also the term "noise" will be used to refer to unwanted
information such as short edge segments and small closed contours.

The first stage of the image analysis process is edge enhancement. This
consists of thinning, noise removal and edge linking algorithms. Applying the
thinning algorithm to the edge map image will result in an image containing one-pixel-wide
edge segments. This is an important process which prepares the image for
the local and regional analysis. As mentioned in chapter 4, the thinning process is
performed iteratively, in which each edge point is inspected within a 3 × 3 window
to maintain connectivity and the position of the edge.

¹σ is the spread (standard deviation) of the Gaussian and controls the degree of smoothing.
Short edges may provide a false indication of the presence of region boundaries.
As a result, they are considered noise and an algorithm is implemented to filter out
these edges. The short edge elimination algorithm is based on length measurement
of the edges (equation 4.1), where edges of a length below a predefined threshold (ℓ)
are removed from the edge map image. The setting of the threshold for this process varies
depending on the maximum and minimum edge lengths present in the image.

In the implementation, arcs of length 10 pixels (ℓ = 10) or less and closed
contours with contour length less than 20 pixels (ℓ = 20) are eliminated. One
possible way to automatically set the length threshold is to select a percentage
value of the longest arc length (for example, 10% of the longest arc).
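The short-edge elimination step can be sketched by labelling connected edge segments and discarding those below the length threshold; pixel count stands in here for the arc length of equation 4.1.

```python
import numpy as np
from scipy import ndimage

def prune_short_edges(edge_map, min_length=10):
    # Label 8-connected edge segments and drop those whose pixel count falls
    # below the threshold. For one-pixel-wide thinned edges, the pixel count
    # is a reasonable proxy for arc length.
    labels, n = ndimage.label(edge_map, structure=np.ones((3, 3)))
    sizes = ndimage.sum(edge_map, labels, index=np.arange(1, n + 1))
    keep_labels = np.flatnonzero(sizes >= min_length) + 1
    return edge_map & np.isin(labels, keep_labels)

edges = np.zeros((32, 32), dtype=bool)
edges[5, 2:25] = True          # a 23-pixel edge: kept
edges[20, 4:8] = True          # a 4-pixel stub: removed as noise
cleaned = prune_short_edges(edges, min_length=10)
assert cleaned.sum() == 23
```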
In the last part of the edge enhancement process, small gaps between edge segments
are filled. This is accomplished by extending each segment from its two end-points
along their tangents. The tangent direction at the end points is usually unreliable.
To estimate a more accurate direction, we move back along the segment by n
points and estimate the tangent direction at that point instead (in our implementation,
we used five pixels, i.e. n = 5).

The extension process is done iteratively, such that in each iteration each of the
unconnected segments grows by one point on each end (if both ends are unconnected;
otherwise, the unconnected end only). To ensure collision with other edge segments
(sometimes with itself), the five neighbours that agree with the growth trajectory
are checked in each iteration. To demonstrate this algorithm, we applied the edge
enhancement algorithm to the edge map image shown in Figure 3.10 (d); the result
is presented in Figure 7.7.

Figure 7.7: Resulting edge map after edge linking: (a) The resulting edge
map of the image shown in Figure 3.10 after deleting closed contours and
short edge segments, (b) The result of applying the gap filling algorithm

To avoid connecting mismatched edges, a threshold for the number of extension
points is used. Initially, the extended part of the edge is labelled. If an added point
collides with another extension line, the point is marked. Once the extension process
is terminated, a search for the marked points starts. The purpose of this search is to
determine the stray parts of the extension lines and eliminate them. To enhance the
new connected segments, one of the curve fitting interpolation algorithms described
in chapter 4 can be implemented using the end points and the point of intersection of
the tangent lines². In many situations the extensions may be parallel; this is a result
of the inaccuracy in the tangent estimation algorithm.
The resulting image contains closed separated regions, net-like regions, or both. To locate T-junctions (points where three curve segments meet), we trace the contour net. These junctions can be either boundary intersections, surface intersections, shadows and surfaces, or shadows and boundaries. We use the criterion of finding the best fitting circular arc, and the two lines with the least fitting error (Appendix B) are assumed to be continuous. Figure 7.8 demonstrates this algorithm.
Another criterion is also used to handle exceptions to the above technique. In cases where the junction occurs near a corner of a rock or at the boundary of the pile, the best fitting circular arc does not provide the proper selection. A window centred at the junction point is extracted and the average intensity values of the smoothed image of the three regions are compared. If the two lines with the least fitting error bound the minimum average intensity of the junction window (see Figure 7.8 (c)), the least-fitting-error criterion is ignored and the maximum average intensity is considered.

Once the junction's two line segments are selected, the third line is disconnected from the junction. Each edge segment is then traced; the direction of movement is always counter-clockwise. For each point, the tangent and the curvature are estimated

²This can be used if the extension lines are long.

as described in chapter 4 and Appendix B.

Figure 7.8: Junction analysis: (a) Edge map of three rocks, (b) Window of the junction edge map, (c) Window of the junction smoothed image


The first layer of the pile is identified as the closed contours of the resulting edge map. The remaining contours form the second layer. Small closed contours, which may result from either strong texture or a change of colour within the fragment surface, are considered noise and are eliminated. In this case, the threshold is area-based (equation 4.10) rather than length-based, i.e. setting a threshold a such that a closed region is deleted if its area is less than the threshold (A ≤ a). In the implementation, the area threshold is set to 50 pixels, i.e. a = 50. Similar to the arc length threshold, a can also be set to a percentage value of the largest bounded area.
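The area-based elimination amounts to a one-line filter; a minimal sketch, where the region dictionaries stand in for labelled closed contours and the helper names are ours:

```python
def remove_small_regions(regions, area_threshold=50):
    """Delete closed regions whose pixel area A <= area_threshold,
    treating them as texture/colour noise within a fragment."""
    return [r for r in regions if r['area'] > area_threshold]

def relative_area_threshold(regions, fraction=0.1):
    """Alternative: set the threshold as a fraction of the largest
    bounded area, analogous to the arc-length case."""
    return fraction * max(r['area'] for r in regions)

regions = [{'id': 1, 'area': 30}, {'id': 2, 'area': 500}, {'id': 3, 'area': 49}]
print([r['id'] for r in remove_small_regions(regions)])  # [2]
print(relative_area_threshold(regions))                  # 50.0
```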
During the tracing process, if the curvature of a point exceeds a predefined threshold (positive sign), this identifies a concavity of the contour. Hence, a number of points on both sides of the high-curvature point are eliminated. It is observed that the error term in distance units ε (equation 4.9) can effectively be utilized as a threshold for detecting discontinuity in a curve, provided the curvature κ, obtained by equation 4.8, is positive. Also, straight lines are considered surface intersections; consequently, they are eliminated.

Figure 7.9: Edge map of Figure 7.3
The fragments of the second layer are identified by their edge maps, each of which has two end points. As mentioned earlier, fragment bisection will not be considered. When this occurs, each part of the bisected fragment contour will be processed individually, i.e. the broken contour will be considered the contours of two fragments.
To compute the hidden part of a fragment of the second layer, the third-order spline α(t) (presented in chapter 4) is constructed linking the two end points. Given the two end points (x(0), y(0)) and (x(1), y(1)) and their tangents (x'(0), y'(0)) and (x'(1), y'(1)), α(t) = (x(t), y(t)) is computed iteratively using the modified algorithm of Nitzberg and Mumford [117] using equations 4.16, 4.17, and 4.18 respectively.
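The thesis computes α(t) with the modified Nitzberg–Mumford iteration; as a simpler illustration of how the same boundary data (two end points plus their tangents) pins down a smooth completion curve, a cubic Hermite interpolant can be sketched — a stand-in, not the author's algorithm:

```python
def hermite_curve(p0, p1, t0, t1, steps=10):
    """Cubic curve alpha(t) = (x(t), y(t)) matching the end points
    p0 = alpha(0), p1 = alpha(1) and tangents t0 = alpha'(0),
    t1 = alpha'(1), sampled at steps+1 parameter values."""
    pts = []
    for i in range(steps + 1):
        t = i / steps
        h00 = 2*t**3 - 3*t**2 + 1      # Hermite basis functions
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        x = h00*p0[0] + h10*t0[0] + h01*p1[0] + h11*t1[0]
        y = h00*p0[1] + h10*t0[1] + h01*p1[1] + h11*t1[1]
        pts.append((x, y))
    return pts

# End points on the x-axis with upward/downward tangents give an arc:
curve = hermite_curve((0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.0, -1.0))
print(curve[0], curve[-1])  # (0.0, 0.0) (1.0, 0.0)
```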
To demonstrate the image analysis process, Figure 7.9 presents the edge map of the image shown in Figure 7.3 after the edge enhancement process. Figure 7.10

Figure 7.10: Layer classifications of Figure 7.9: (a) Edge map of the first layer, (b) Edge map of the second layer type A, (c) Edge map of the second layer type B

Figure 7.11: Results of applying the contour completion algorithms to the second layer: (a) Type A, (b) Type B


presents the result of the layer classification process. Figure 7.10 (a) presents the first layer, and (b) and (c) the second layer, in which each fragment is represented by its closed contour, which includes the common boundaries with the first layer³ (this will be used in chapter 8 for comparison with other size distribution methods). Figure 7.11 presents the result of applying the contour completion algorithm to the fragments of the second layer (Figure 7.11 (a) is the result of the contour completion algorithm for Figure 7.10 (b), and Figure 7.11 (b) is that for Figure 7.10 (c)). The total number of rock fragments detected by the image analysis was 273 rocks, compared to 312 actually in the original image.

7.2.3 Classification

The last part of the fragment classification process is the measurement process. In this section we will only consider the measurement of individual fragments. The remaining part of the classification subprocess will be addressed in chapter 8.

Fragments resulting from a blast do not, in the general case, possess regular (e.g. ellipsoid) shapes. This is a result of the many factors that control the blasting process and the nature of the rocks (i.e. mineralogic composition, layer thickness, etc.). In addition, depending on its orientation, the fragment can pass through the grid in some cases whereas its passage is blocked in others.

Fragment geometry and orientation are the two major factors that control the sieving process. Though both are non-deterministic, many researchers have tried to use a unique model for the geometry, ignoring the orientation. The term size can have many meanings; for this particular problem, the size which will be used is defined as follows (see Figure 7.12):

Definition 3 Assuming a fragment contour forms a regular, convex, closed curve, then the fragment size can be characterized by the length of both its major and minor axes, where the major axis is defined as the longest Euclidean distance between two extreme points on the fragment contour, and the minor axis as the sum of the maximum orthogonal distances between points of the contour and the major axis on both its sides.

³The visible part of the overlapped fragments.

Figure 7.12: Major and minor axes of a fragment

Both major and minor axes will be used to compute the weighting function W(t), which is considered the symbolic model of a fragment. As described in chapter 6, these axes form a limit for the grid size dA in which the weighting function has a value different from 0 and 1, i.e. dA ∈ [b, a] ⇔ W(t) ∈ ]0, 1[.

Measuring the major and minor axes can be achieved as follows: for each fragment, compute the centre of gravity, then the moments, then the eigenvectors. The minor axis is computed using the orientation angle θ (equation 4.13) and the centre of gravity; the maximum and minimum points orthogonal to the principal axis can be found by traversing along the contour and computing the orthogonal distance using equation 4.14. Similarly, the major axis is computed in the same manner using θ + π/2 and the centre of gravity.
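The same measurement can be checked against a brute-force reading of Definition 3 (an O(N²) farthest-pair search instead of the moment/eigenvector shortcut; the contour is assumed to be a list of (x, y) points and the function name is ours):

```python
import math

def fragment_axes(contour):
    """Major axis: longest Euclidean distance between two contour
    points.  Minor axis: sum of the maximum orthogonal distances
    from the contour to the major-axis line, one per side."""
    # Major axis: brute-force farthest pair of contour points.
    best = (0.0, None, None)
    for i, p in enumerate(contour):
        for q in contour[i + 1:]:
            d = math.dist(p, q)
            if d > best[0]:
                best = (d, p, q)
    major, (x1, y1), (x2, y2) = best
    # Signed orthogonal distance of every point to the major-axis line.
    pos = neg = 0.0
    for (x, y) in contour:
        s = ((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / major
        pos = max(pos, s)
        neg = min(neg, s)
    return major, pos - neg

# Axis-aligned 4 x 2 rectangle outline:
rect = [(0, 0), (4, 0), (4, 2), (0, 2)]
major, minor = fragment_axes(rect)
print(round(major, 3), round(minor, 3))  # 4.472 3.578
```

For the rectangle, the major axis is the diagonal (√20) and the minor axis is the sum of the two corner distances to that diagonal (16/√20).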


The output of this process is a list of the principal axes of the fragments of the first and second layers. This list will then be used in the computation of the Weighting Function, as will be shown in the next chapter.

7.3 Conclusion

In this chapter the fragment measurement process was decomposed into three subprocesses, namely: preprocessing, image analysis, and measurement. The output of the overall process is the major and minor axes of all fragments on the surface of the pile. The problem of overlapping and multifaceted rocks was also addressed in this chapter, and a solution was presented.

The fragment measurement process introduced has the advantage of being nearly fully automatic, i.e. minimum human interaction is required to interpret the boundaries of the fragments.


Chapter 8
Comparative Evaluation of
Virtual Sieving

The Virtual Sieving method, a tool for quantitative analysis of fragment size, is evaluated in this chapter. This evaluation is initially based on a comparison of the performance and accuracy of this method with two stereological methods, namely: the Saltkov method and the Cruz-Orive method. For this comparison, a computer generated data set is used to simulate the ideal situation.

This chapter also presents an overview of two commonly used methods to estimate size distribution from muck-pile images in the mining industry. These methods are then compared to the Virtual Sieving method. The comparison is done by applying these methods on two different sets of data: the computer generated data and data obtained from laboratory experiments.

8.1 Comparisons with Stereological Methods

In order to evaluate the performance of the Virtual Sieving algorithm, and to compare it with existing stereological methods, a bottom-up analysis procedure was adapted.



An arbitrary set of data was generated to represent a collection of different sizes of spheres (and later on ellipsoids) using a normal density function. The generated data is shown in Figure 8.1 (a). The diameters of these spheres follow a normally distributed density function. The horizontal axis represents the diameters of the spheres and the vertical axis represents the frequency of occurrence of the spheres with the corresponding diameter.

Figure 8.1 (b) demonstrates a three dimensional plot of the generated data. Note that spheres are a special case of ellipsoids. The size frequency is presented with respect to the length of its major and minor axes (in this case, both major and minor axes are equal). From this Figure, the size frequency can be described as a plane, and its projections on both the {xz}-plane and the {yz}-plane are equal (these spheres were assumed to be spread, i.e. no overlapping, and viewed from the top).

Three methods were used to estimate the size frequency. In the first method, the cross-sectional area was calculated for each sphere, and the equivalent diameter was used to estimate the size frequency, using Saltkov's method [159] (equation 6.7). Figure 8.1 (c) shows the regenerated frequency of the spheres when Saltkov's method [159] is used. For the other two methods, namely the Cruz-Orive method [121] and the Virtual Sieving method, the diameter of the spheres was used twice (as major and minor axes) for the estimation. Figure 8.1 (d) shows the size frequency estimated using the Cruz-Orive method [121], equation 6.10¹. This results in a flat surface, which is expected since the shape factor of the sphere is equal to zero. Finally, the result shown in Figure 8.1 (e) is obtained using the Virtual Sieving method. In this case, the total weighting function is assumed to be equivalent to the distribution; consequently, the frequency is calculated as follows:

f(S) = Σᵢ δ(S − Sᵢ)

where δ(S) is the impulse function and the term δ(S − Sᵢ) is a vertical arrow of unity amplitude at S = Sᵢ.

¹The axis label "semi-axis" denotes the use of half of the axis (section 6.3.2).

Comparing Figures (c), (d) and (e), one can notice that Saltkov's method yielded a normally distributed size frequency of the spheres. The Cruz-Orive method did not show any result: since the shape factor x² for the sphere is equal to zero, the classification constraints fail. In other words, for a spheroid to belong to the (i, j) class, it must satisfy the inequalities

(i − 1)Δ < a ≤ iΔ
(j − 1)Γ < x² ≤ jΓ

On the other hand, the Virtual Sieving method provided a more reasonable approximation of the generated data in comparison with the other two methods, by demonstrating a normally distributed size frequency.


These algorithms were then applied to ellipsoids to generalize the case. A similar procedure was adopted to generate data representing a spread set of different sizes of ellipsoids. In this case, the minor axes of the ellipsoids were generated using the frequency of a normal distribution function. Assuming that the ellipsoids maintain a constant axes ratio, the major axes were assumed to be a scalar multiple of the minor ones. This scalar multiple results in an increase of the mean and the variance of the frequency. For demonstration purposes, two different values of the scalar multiplication factor λ were used. A plot of the frequency using λ = 3 is shown in Figure 8.2 (a) and that using λ = 5 is shown in Figure 8.3 (a). Figures 8.2 (b) and 8.3 (b) present the three dimensional representation of the function, where the x-axis represents the minor axis and the y-axis represents the major axis.
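The effect of the scalar multiple on the mean and variance can be verified numerically (a sketch with illustrative parameters — the thesis does not state the mean or variance it used):

```python
import random

random.seed(1)
lam = 3.0                                    # scalar multiple lambda (illustrative)
minor = [random.gauss(50, 10) for _ in range(1000)]
major = [lam * m for m in minor]             # constant axes ratio

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The mean scales by lambda and the variance by lambda squared:
print(round(mean(major) / mean(minor), 3))   # 3.0
print(round(var(major) / var(minor), 3))     # 9.0
```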
This arbitrary data representing ellipsoids was employed to evaluate the performance of the same three methods; the results are shown in Figures 8.2 and 8.3. The size frequency shown in Figures 8.2 (c) and 8.3 (c) was generated using the Saltkov


Figure 8.1: Simulated size frequency of the spherical model: (a) One dimensional generated frequency of the sphere diameters, (b) Two dimensional generated frequency of the sphere diameters, (c) Size frequency using the Saltkov method, (d) Size frequency using the Cruz-Orive method, (e) Size frequency using the Virtual Sieving method


Figure 8.2: Simulated size frequency of the ellipsoidal model λ = 3: (a) One dimensional generated frequency of the model's major and minor axes, (b) Two dimensional generated frequency of the model's axes, (c) Size frequency using the Saltkov method, (d) Size frequency using the Cruz-Orive method, (e) Size frequency using the Virtual Sieving method


Figure 8.3: Simulated size frequency of the ellipsoidal model λ = 5: (a) One dimensional generated frequency of the model's major and minor axes, (b) Two dimensional generated frequency of the model's major and minor axes, (c) Size frequency using the Saltkov method, (d) Size frequency using the Cruz-Orive method, (e) Size frequency using the Virtual Sieving method


method. The circle equivalent diameter was computed from the cross-section area A, defined as:

A = (π/4) · Sm · SM

where Sm and SM are the minor and major axes respectively. Figures 8.2 (d) and 8.3
(d) show the size frequency generated using the Cruz-Orive method for prolate ellipsoids. Finally, the size frequency shown in (e) was generated using the Virtual Sieving method.

Comparing the three plots of Figures 8.2 (c), (d) and (e), one can observe that the Saltkov method resulted in a normally distributed size frequency with mean ≈ 56 (as compared with Figure 8.2 (a)). The Cruz-Orive method shown in (d) also resulted in a normally distributed size frequency with mean ≈ 189, which is the same as the mean of the major axes of the generated data provided that the Figure is viewed from the xz-plane. On the other hand, the Virtual Sieving method resulted in a log-normal distribution of size frequency with mean ≈ 63.
The same comparison can also be applied to Figures 8.3 (c), (d) and (e): the Saltkov method starts to deform to a log-normal shape with mean ≈ 69; the Cruz-Orive method shows the same result as that of λ = 3, with a shift along the y-axis due to the change in the shape factor. Finally, the result using the Virtual Sieving method remains log-normal with a change in the mean to ≈ 91.

In conclusion, the Saltkov method showed the least accurate results among the tested methods. This can be seen clearly in the second part of the simulation, in which the use of elongated ellipsoids resulted in only a small shift of the mean value. This behaviour contradicts the logic of the sieving process. By contrast, in the three dimensional frequency representation resulting from the Cruz-Orive method, the two parameters, namely the major axis and the shape factor, were treated individually without any attempt to link them. From the simulation, it is clearly shown that changing λ resulted in shifting the frequency with respect to the shape factor only in

Figure 8.4: Size frequency of spread rocks using the Cruz-Orive method: (a) 3-D representation, (b) Size frequency with respect to the normalized major semi-axis, (c) Size frequency with respect to the normalized shape factor


one dimension. In other words, changing the shape factor and projecting the three dimensional frequency onto two planes (parallel to the {xz}-plane and the {yz}-plane) will result in two frequencies. One of these frequencies is the size frequency with respect to one of the spheroid's axes, either the major or the minor depending on the model selected. The second is the size frequency with respect to the shape factor, which does not provide useful information for the sieving process when viewed as a classification process. A demonstration of this is shown in Figure 8.4, in which (a) shows a three dimensional size frequency of a set of rocks, and (b) and (c) demonstrate its two dimensional projections.

In contrast, the Virtual Sieving method provided a more logical result from the sieving point of view. This result is more accurate than those obtained using the Saltkov method since it considers the shape parameters (i.e. the ellipsoid axes). In addition, the Virtual Sieving method provided a link between the ellipsoid parameters, which resulted in a more useful representation of the size frequency than the one obtained by the Cruz-Orive method.

8.2 Comparison with the Physical Sieving Method

To evaluate the accuracy of the Virtual Sieving method, the algorithm was also applied to actual data obtained from laboratory experiments and compared with the laboratory experimental results. To obtain the laboratory data, a number of rocks were crushed and sieved in the laboratory.

Generally speaking, the size and shape of rocks fragmented by blasting are largely influenced by the structural condition of the rock mass (i.e. pressure, crust movement, etc.). On the other hand, breakage of rocks during the crushing process is influenced more by the mineral structure and composition. In this section, it will be assumed that both processes (blasting and crushing) result in similar shapes.


Figure 8.5: Laboratory test results: (a) Rocks distribution by number of rocks passed for each grid, (b) Rocks distribution by weight

During sieving, the crushed rocks were manually reoriented to ensure their passage through the relevant sieves. The retained rocks were then weighed and counted. Figure 8.5 gives the results of this experiment. Figure 8.5 (a) shows the distribution of the rocks by number of rocks retained per grid, and Figure 8.5 (b) shows their distribution by weight.

The rocks were then mixed again and a number of images of these rocks were acquired. For these images, the rocks were spread such that they did not overlap. A total of 45 images of 4056 rocks were taken at random orientations. Each rock was then measured using the techniques of chapter 4, in which the area, major and minor axes were computed. The linear model of the Weighting Function as defined in chapter 5 was then applied on these measurements using equation 5.6 as follows:

WT(Si, t) = Σ_{k=1}^{T} Wk(Si, t)        (8.1)


Figure 8.6: Weighting function of crushed rocks using the linear model (Equation 6.3): (a) Weighting function of crushed rocks, (b) Error between the actual sieving results and the Virtual Sieving, (c) Error between the nonlinear and the linear models

where

Wk(Si, t) = 0,                          Sm ≥ Si
Wk(Si, t) = (Si − Sm) / (SM − Sm),      Sm < Si < SM        (8.2)
Wk(Si, t) = 1,                          SM ≤ Si
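The linear weighting model — zero below the minor axis, one above the major axis, linear in between — and its sum over all fragments can be sketched as follows (each fragment reduced to its axis pair (Sm, SM); the orientation argument t is omitted for brevity):

```python
def weight(s_i, s_m, s_M):
    """Linear-model weighting of one fragment for grid size s_i:
    0 if it cannot pass, 1 if it always passes, linear in between."""
    if s_m >= s_i:
        return 0.0
    if s_M <= s_i:
        return 1.0
    return (s_i - s_m) / (s_M - s_m)

def total_weight(s_i, fragments):
    """Sum the weighting function over all fragments (equation 8.1)."""
    return sum(weight(s_i, s_m, s_M) for s_m, s_M in fragments)

fragments = [(1.0, 3.0), (2.0, 6.0), (0.5, 1.5)]
print(total_weight(2.0, fragments))  # 0.5 + 0.0 + 1.0 = 1.5
```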

The results are shown in Figure 8.6. The distribution of the spread rocks using the linear model Weighting Function is shown in Figure 8.6 (a). Figure 8.6 (b) shows the error between the actual and Virtual Sieving, which is less than 15%. This error is


Figure 8.7: Size frequency and distribution of spread rocks using the Virtual Sieving method: (a) Size frequency, (b) Size distribution

acceptable since the sieving test was manual. Figure 8.6 (c) shows the error between the linear and the nonlinear models for the same set of spread rocks. This error is very small (i.e. in the range of 10⁻³); consequently, the linear model of the weighting function will be used, since it requires less computational time.

Figure 8.7 presents the size frequency and the cumulative size distribution of the spread rocks (i.e. no overlap) using the Virtual Sieving method. In comparison to the distribution obtained by physical sieving (shown in Figure 8.8), the technique shows excellent correlation.
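The cumulative size distribution plotted against the physical-sieving curve is simply the normalized running sum of the per-class frequency; a sketch with made-up counts:

```python
def cumulative_distribution(freq):
    """Normalized running sum of a per-grid-size frequency."""
    total = sum(freq)
    out, acc = [], 0.0
    for f in freq:
        acc += f
        out.append(acc / total)
    return out

freq = [5, 20, 40, 25, 10]            # counts per grid-size class
print(cumulative_distribution(freq))  # [0.05, 0.25, 0.65, 0.9, 1.0]
```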

8.3 Two Prior Methods for Size Distribution Estimation of Muck-Piles

Many mining researchers have studied the problem of estimating the size distribution from the surface of a muck-pile. In this section we will present the theory of two of the most commonly used methods, namely Maerz's and Kemeny's methods.

Figure 8.8: Cumulative size distribution of spread rocks

8.3.1 Maerz's Method

In their method, Maerz et al. [95] modelled fragments as approximate spheres. From the projected area A of each individual fragment, the diameter of a circle of equivalent area (dea = 2√(A/π)) was computed. The distribution of dea's was then divided into s classes (s = 10) of equal class width, Δ = deaM / s, where deaM is the maximum size of dea. The frequencies in each size class (di) were expressed as the number of blocks (N) of a particular diameter class (di) per unit area (A) of the fragment surface (Na(di)).

The true or three dimensional size distribution was then estimated by applying the following equation:

Nv(di) = f(di) · Na(di)        (8.3)

where f(di) is an empirical calibration function (for each class diameter there is a


different calibration factor). The significance of f is that it accounts for any systematic differences between the theoretical solution Nv(d) = Na(d) for a polydispersed system of spheres and the actual solution for fragmentation. The justification of the use of f was to account for a combination of the following three factors:

- The effect of overlapping of fragments.
- The effect of missing fines.
- The effect of the shape of the distribution.

The method requires a-priori knowledge of the calibration function (f), which was defined as a list of empirical calibration factors for each class.
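The geometric core of Maerz's method — equivalent-circle diameters binned into s equal-width classes — can be sketched as follows; the empirical calibration function f is site-specific and is therefore left out:

```python
import math

def equivalent_diameters(areas):
    """Diameter of the circle having the same projected area."""
    return [2.0 * math.sqrt(a / math.pi) for a in areas]

def class_counts(values, s=10):
    """Bin values into s equal-width classes up to the maximum value."""
    width = max(values) / s
    counts = [0] * s
    for v in values:
        counts[min(int(v / width), s - 1)] += 1
    return counts

# Circles of radius 1, 1, 2, 3, 5 have d_ea = 2, 2, 4, 6, 10:
areas = [math.pi * r ** 2 for r in (1, 1, 2, 3, 5)]
d_ea = [round(d, 6) for d in equivalent_diameters(areas)]
print(d_ea)                     # [2.0, 2.0, 4.0, 6.0, 10.0]
print(class_counts(d_ea, s=5))  # [0, 2, 1, 1, 1]
```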

8.3.2 Kemeny's Method

Kemeny et al. [80] combined two measured parameters in estimating the size distribution. These parameters are the projected area and the axes of the best fitting ellipse. In this method, the equivalent diameter (fragment screen size) was calculated for each fragment as follows:

di = 0.45 · Mi + 0.73 · mi        (8.4)

where Mi is the major axis of the best fitting ellipse of fragment i and mi is the minor axis. To estimate the volume, Kemeny et al. [80] multiplied the projected area Ai of each fragment by the equivalent diameter, i.e.

vi = Ai · di        (8.5)
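The equivalent screen size and volume estimate above are a two-line computation, sketched here with made-up measurements:

```python
def kemeny_size_and_volume(area, major, minor):
    """Equivalent screen size d = 0.45*M + 0.73*m and the volume
    estimate v = A*d for one fragment."""
    d = 0.45 * major + 0.73 * minor
    return d, area * d

d, v = kemeny_size_and_volume(area=12.0, major=6.0, minor=3.0)
print(round(d, 2), round(v, 2))  # 4.89 58.68
```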

The size distribution was then classified into k equal classes; then a probability matrix P was computed for the number of fragments, where the dimension of P is N × k. Using the midpoints of each class (ζj, j = 1, ..., k), the relative fragment size xi was computed over all classes and, consequently, pij was computed for each fragment as follows:
pi1 = α(xi/ζ1)
pin = 1 − [α(xi/ζn) + β(xi/ζn)],    n = 2, ..., k − 1        (8.6)
pik = β(xi/ζk)

where

α = −0.0525 + (0.9898 + 2.1581 x^3.1581)^(−1),    α = 1 iff α > 1,    α = 0 iff α < 0
β = 0.0401 + 20.8973 x^9.3084 e^(−4.7464x),       β = 1 iff β > 1,    β = 0 iff β < 0        (8.7)
The elements of the probability matrix were calculated by normalizing the elements of P as follows:

P̄ij = pij / Σ_{j=1}^{k} pij        (8.8)

The volumetric frequency was then computed as follows:

(8.9)

8.4 Comparison with Prior Methods

The two methods explained in the previous section are the most established and commonly used methods in the mining industry for estimating the size distribution of fragments from the surface of muck-piles. These two methods were applied to the generated data described in section 8.1, and to the experimental data of section 8.2. This section discusses the results of this application and compares them with the Virtual Sieving method described in this thesis.

8.4.1 Using Artificial Data

The two methods were first applied to the spherical data of section 8.1. The estimated size frequencies obtained using the two methods are given in Figures 8.9 (a) and (b) for the Maerz and Kemeny methods respectively. From these figures, one can notice the similarity between Maerz's solution and the Saltkov method (see Figure 8.1 (c)).

The two methods were then applied to the ellipsoid data of section 8.1 for two different values of λ, 3 and 5. The resulting size frequencies using the two methods for λ = 3 are given in Figures 8.9 (c) and (d). Figures 8.9 (e) and (f) show the same respective results when λ = 5. In both cases Maerz's method correlates with the Saltkov results, while Kemeny's method shows a nondeterministic behaviour. This is due to the calibration functions used in the Kemeny et al. [80] algorithm.

8.4.2 Using Laboratory Images

The two methods were also applied to the actual data of section 8.2. Three tests were conducted using the experimental data. In the first test, images of spread rocks were used. In the second and third tests, images of the overlapping rocks were considered. The results of these tests are presented in Figures 8.10, 8.12 and 8.14 respectively.

Figure 8.10 (a) shows the size frequency of the actual rocks. Figures (b) and (c)


Figure 8.9: Frequency of arbitrarily generated data using Maerz's and Kemeny's methods: (a) Size frequency using Maerz's method for spheres, (b) Size frequency using Kemeny's method for spheres, (c) Size frequency using Maerz's method for ellipsoids (λ = 3), (d) Size frequency using Kemeny's method for ellipsoids (λ = 3), (e) Size frequency using Maerz's method for ellipsoids (λ = 5), (f) Size frequency using Kemeny's method for ellipsoids (λ = 5)



present the size frequency obtained using Maerz's and Kemeny's methods respectively. In (d) the size frequency obtained using the Virtual Sieving method is shown. By comparing Figures (a), (b), (c) and (d), it can be clearly seen that the shape of the frequency given by the Virtual Sieving method is the closest to the size frequency obtained from the actual sieving process. In contrast, both Maerz's and Kemeny's methods show a log-normal frequency shape. In addition, the cumulative size distribution of the Virtual Sieving method appeared to be the closest to the actual cumulative size distribution, followed by Kemeny's solution and subsequently Maerz's, as shown in Figure 8.11.

In the second test overlapping was considered, such that only the visible parts of the rocks of the second layer were measured. Figure 8.12 (b), (c) and (d) show the results of size frequency obtained using Maerz's, Kemeny's and the Virtual Sieving methods respectively. From these Figures, it is clear that the Virtual Sieving method maintained the closest size frequency to the actual one in spite of the missing information due to overlapping of rocks. On the other hand, Maerz's and Kemeny's methods showed sensitivity to the variation of the exposed area. Figure 8.13 compares the cumulative size distributions of the actual data and those given by the three methods. From the Figure, it can be seen that missing information due to occlusion degrades the performance of Virtual Sieving, but it still gives the closest fit.

In the third test, contour completion algorithms were used to complete the missing parts of the rocks in the second layer caused by overlapping. Figure 8.14 (b), (c) and (d) show the results of using Maerz's, Kemeny's and the Virtual Sieving methods respectively. In this case, an improvement of the shape of the cumulative size distribution using the Virtual Sieving and Kemeny's methods was seen, as shown in Figure 8.15. This improvement resulted in a good match between the Virtual Sieving results and the actual results. Meanwhile, Maerz's distribution remains unchanged.

The reason that the size distribution obtained using the Virtual Sieving method is

the least affected when ignoring the hidden parts of overlapped rocks is because the method is based on axes measurement, which is less sensitive to variations of projected regions than the area-based methods (see section 5.3). A noticeable improvement to the cumulative size distribution is achieved using the contour completion algorithm in conjunction with Virtual Sieving.

8.5 Case Study

To evaluate the performance of the overall algorithm in an actual mining situation, a photograph of an open-pit muck-pile under natural lighting was scanned. Figures 8.16 and 8.17 show the digitized image of the muck-pile and its manually traced image respectively.

8.5.1 Intermediate Results

Using two iterations of the Crimmins filter resulted in a fairly smoothed image, as shown in Figure 8.18. Figure 8.19 shows the result of applying Canny's filter to the smoothed image using σ = 2.0.

The thinning algorithm described in Section 4.2 is then applied to the edge map
image (Figure 8.19). Short, unconnected edges are then eliminated provided that
their lengths do not exceed the defined threshold (in this case 10 pixels for arcs and
50 pixels for closed contours). The results of the thinning and noise removal are
shown in Figure 8.20.
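The short-edge elimination step can be sketched as follows. This is a minimal illustration, not the thesis's actual code: the segment representation (a list of point lists with a closed/open flag) and the function name are assumptions, while the thresholds (10 pixels for arcs, 50 pixels for closed contours) come from the text.

```python
def remove_short_edges(segments, min_arc=10, min_closed=50):
    """Noise-removal sketch: keep only thinned edge segments whose pixel
    count reaches the threshold for their type (open arc or closed contour).
    Each segment is a pair (points, is_closed)."""
    kept = []
    for points, is_closed in segments:
        limit = min_closed if is_closed else min_arc
        if len(points) >= limit:
            kept.append((points, is_closed))
    return kept

segments = [
    ([(i, 0) for i in range(5)], False),    # 5-pixel arc: eliminated
    ([(i, 0) for i in range(12)], False),   # 12-pixel arc: kept
    ([(i, 0) for i in range(30)], True),    # 30-pixel closed contour: eliminated
    ([(i, 0) for i in range(60)], True),    # 60-pixel closed contour: kept
]
kept = remove_short_edges(segments)
```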


The next step is to link the loose ends, using the edge linking algorithm. In this
step, the unconnected edges are iteratively extended along their tangents, using the
tangent of the fifth previous point, until all are connected. The resulting image is
shown in Figure 8.21.
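The tangent-extension idea above can be sketched like this. It is an illustrative stand-in, not the thesis's implementation: the chain representation, the `max_steps` safety limit, and the rounding to integer pixel coordinates are assumptions; the use of the fifth previous point for the tangent follows the text.

```python
def extend_along_tangent(chain, edge_set, max_steps=20):
    """Extend an open edge chain along its end tangent until it meets
    another edge pixel (sketch of the linking idea in the text).
    The tangent direction is taken from the fifth previous point."""
    (x0, y0), (x1, y1) = chain[-6], chain[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    ux, uy = dx / norm, dy / norm
    x, y = float(x1), float(y1)
    added = []
    for _ in range(max_steps):
        x, y = x + ux, y + uy
        p = (round(x), round(y))
        if p in edge_set:                 # met another edge: gap closed
            return chain + added + [p], True
        added.append(p)
    return chain, False                   # give up; caller may retry later

# a horizontal chain ending at (7, 10), with an isolated edge pixel at (11, 10)
chain = [(i, 10) for i in range(8)]
edge_set = {(11, 10)}
linked, ok = extend_along_tangent(chain, edge_set)
```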

The last stage of the image analysis is the layer classification. The first step in
this stage is to discriminate edge nets resulting from overlapping of rocks from simple
closed contours. The latter are elements of the first layer. The junction analysis
algorithm is then applied to the edge nets to isolate the remaining rocks of the first
layer from the second layer. The two criteria described in Chapter 4 were applied
to a clipping window centred at the junction point (the size of the window used is
w = 11). Once the junction analysis step is completed, rocks of the first layer are
isolated. Figure 8.22 shows the first layer of the muck-pile.
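Extracting the w × w clipping window around a junction (here w = 11, as in the text) might look like the sketch below. The 0/1 row-list representation of the edge map, the function name, and the zero padding at image borders are assumptions for illustration only.

```python
def clip_window(edge_map, cx, cy, w=11):
    """Extract the w x w clipping window centred on a junction point
    (cx, cy). edge_map is a list of rows of 0/1 values; pixels falling
    outside the image are padded with background (0)."""
    h = w // 2
    rows, cols = len(edge_map), len(edge_map[0])
    window = []
    for y in range(cy - h, cy + h + 1):
        row = []
        for x in range(cx - h, cx + h + 1):
            row.append(edge_map[y][x] if 0 <= y < rows and 0 <= x < cols else 0)
        window.append(row)
    return window

# a 20 x 20 edge map with a single junction pixel at (10, 10)
edge_map = [[0] * 20 for _ in range(20)]
edge_map[10][10] = 1
window = clip_window(edge_map, 10, 10)
```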


The second layer (the remaining edges) is further classified as Type A and Type
B depending on whether both endpoints were connected to a contour of the first
layer or not. Figures 8.23 and 8.25 show the second layer Type A and Type B
respectively prior to the application of the contour completion algorithm. Note that
edges that are terminated at the boundary of the image are eliminated. Using the
contour completion algorithms described in Chapter 4, Figures 8.24 and 8.26 present
the completed contours of the two types of the second layer (i.e. Types A and B).

Due to false edge information, the curve generated by the contour completion
algorithm might intersect with the edge map itself; this can be seen in Figures 8.24
and 8.26. As a result these contours are eliminated from the image (they will not be
considered in the size computation). In addition, due to smoothing, many rocks are
merged together to form one contour. To overcome this problem, each closed contour
is traced, and for each point the curvature is estimated using the Nitzberg method
described in Section 4.1.3 using σ = 2.0 and m = 7. The contour is disconnected at
points where the curvature exceeds a fixed threshold (κ > 0.1 and error > 40%) by
eliminating seven points inclusively (three points on each side). If the contour breaks
into an even number of segments, the end points of each segment are connected by
a straight line. Otherwise each endpoint is connected to the point on the contour
orthogonal to its tangent.
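The curvature-thresholding step can be illustrated with a simple sketch. The thesis uses Nitzberg's estimator (Section 4.1.3); the finite-difference turning-angle estimate below is only a stand-in for it, and the square test contour, threshold, and neighbourhood size are illustrative.

```python
import math

def high_curvature_points(contour, kappa_max=0.1, step=3):
    """Flag points on a closed contour whose estimated curvature exceeds
    kappa_max. Curvature is approximated as turning angle per unit arc
    length over a +/- step neighbourhood (a stand-in for Nitzberg's
    estimator used in the thesis)."""
    n = len(contour)
    flagged = []
    for i in range(n):
        xp, yp = contour[(i - step) % n]
        x, y = contour[i]
        xn, yn = contour[(i + step) % n]
        a1 = math.atan2(y - yp, x - xp)
        a2 = math.atan2(yn - y, xn - x)
        turn = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi
        ds = math.hypot(xn - xp, yn - yp) / 2.0
        if ds > 0 and abs(turn) / ds > kappa_max:
            flagged.append(i)
    return flagged

# a 10 x 10 pixel square: only points near the four corners are flagged
square = ([(i, 0) for i in range(10)] + [(10, j) for j in range(10)]
          + [(10 - i, 10) for i in range(10)] + [(0, 10 - j) for j in range(10)])
flags = high_curvature_points(square)
```

Each flagged index marks a place where the merged contour would be disconnected before re-closing the resulting segments, as described in the text.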


8.5.2 Overall Performance

The overall image processing algorithms performed well: they detected 353 rocks,
compared to 451 obtained by manual tracing. Most of the rock boundaries were
detected correctly with the exception of two rocks bounded by the white dashed
boxes of Figure 8.16. Many small rocks were also merged together or with larger rocks
as a result of the smoothing algorithm used.

The "Virtual Sieving" method was then used to measure the performance of the
overall technique by comparing the size frequency and cumulative distributions of
the muck-pile using the manually traced image and the one obtained from the image
analysis algorithms. Figures 8.27 (a) and (b) show the size frequency of the scanned
image obtained from the manual tracing method and the image analysis algorithm
respectively. One can notice the great similarity between the two frequencies. This
becomes more clear when the cumulative size distributions of both methods are plotted
on the same graph, as shown in Figure 8.27 (c). The slight difference between the
two distributions is a result of many factors. Among these is the human bias during
the manual tracing process, including the approximation in locating the boundaries of
the individual rocks. Also, the failure of the image analysis algorithm to detect some
of the boundaries, and the merging of the contours of more than one rock, affect the final
result. This demonstrates the effectiveness of the overall technique, which resulted in
a size distribution curve that closely follows the one obtained by manual tracing.

8.6 Conclusions

In this chapter we have presented a comparison of the Virtual Sieving method with the
existing methods for size distribution measurement used in stereology as well as in the
mining industry. The comparison was done using an artificial data set generated by
computer, as well as actual data obtained from laboratory experiments. This
chapter also compared the effect of fragment overlap on the methods commonly used
by the mining industry, as well as on the method of Virtual Sieving.

As a result of this comparison, the Virtual Sieving method appears to be more
accurate, as well as more robust, in the estimation of the cumulative size distribution for
all cases tested. In addition, it considered the shape parameters in its computation.
These parameters were either ignored or only partially used in other methods. Hence,
more accurate constraints for the classification process were utilized.


Figure 8.10: Size frequency of spread rocks (no overlap): (a) Frequency
of the actual rocks from sieving, (b) Size frequency result using Maerz's
method, (c) Size frequency result using Kemeny's method, (d) Size frequency
result using the Virtual Sieving method


Figure 8.11: Cumulative size distribution of spread rocks (no overlap)


Figure 8.12: Size frequency of overlapping rocks without contour completion:
(a) Frequency of the actual rocks from sieving, (b) Size frequency result
using Maerz's method, (c) Size frequency result using Kemeny's method, (d)
Size frequency result using the Virtual Sieving method


Figure 8.13: Cumulative size distribution of overlapping rocks without
contour completion


Figure 8.14: Size frequency of overlapping rocks with contour completion:
(a) Frequency of the actual rocks from sieving, (b) Size frequency result
using Maerz's method, (c) Size frequency result using Kemeny's method, (d)
Size frequency result using the Virtual Sieving method


Figure 8.15: Cumulative size distribution of overlapping rocks with contour
completion

Figure 8.16: Muck-pile of open-pit mine


Figure 8.17: Manual tracing


Figure 8.18: Smoothing of the muck-pile image


Figure 8.19: Edge detection of the muck-pile image


Figure 8.20: Thinning and noise removal of the edge map of the muck-pile
image


Figure 8.21: The result of applying the edge linking algorithm


Figure 8.22: First layer of the muck-pile


Figure 8.23: Second layer Type A of the muck-pile without contour completion


Figure 8.24: Second layer Type A of the muck-pile with contour completion


Figure 8.25: Second layer Type B of the muck-pile without contour completion

Figure 8.26: Second layer Type B of the muck-pile with contour completion


Figure 8.27: Scanned image size frequency and distribution: (a) Size frequency
from the manually traced image, (b) Size frequency from the automatic
image analysis, (c) Size distribution of the manually traced image and the
automatic image analysis



Chapter 9

Conclusions
This thesis addressed the problem of estimation of the size distribution of fragmented
rocks in a muck-pile. This problem was decomposed into two subproblems, namely:
analysis of the digital image of muck-piles that resulted from blasting, and the design
of an effective and efficient measure that correlated with the classical definition of
size measurement in mining.

Steps toward the solution of the first subproblem have been presented based on
computer vision techniques. Many problems associated with the analysis of the surface
of muck-piles were addressed in this study and solutions to these problems were
proposed. Among these are fragment identification and the overlapping problem. The
overall approach formulated for image analysis involves four main steps:

1. Fragment contour extraction from intensity images.

2. Identification of key points, such as junctions.

3. Linking matched ends as an estimate of the overlapped part of the fragments.

4. Measurement of the identified fragments.
For the second subproblem, the study concentrated on the analysis of the sieving
process to deduce the parameters controlling it. These parameters were used to
define a measure which was then linked to the estimation of the size distribution of
fragments.

This chapter first presents the original contributions of the thesis. This is followed
by a discussion of the limitations of the proposed solution. Finally, recommendations
for future work are presented.

9.1 Original Contributions

The major contributions of this research are:

1. Utilization of pre-processing techniques:
The combination of the Crimmins smoothing filter and Canny's edge detector,
resulting in a cleaner image which simplifies the fragment contour extraction
process.

2. Development of a simple recursive edge linking strategy:
Used to fill small gaps between contour segments.

3. Adaptation of two local criteria for the analysis of junctions:
These criteria are geometrical and intensity based and are used to facilitate the
layer classification process.

4. Development of a layer classification strategy:
Fragments of the muck-pile surface are classified into three layers depending
on the continuity of their contours. This is the first step to overcome the
overlapping problem.
5. Development of an adaptive technique for the contour completion algorithm:
Used to estimate the missing part of partially occluded fragments. Using direct
shape measurement, the new formulation controls the deformation of the curve
used in estimating the contour of the hidden part of the fragment.

6. Utilization of both principal axes of fragments as classification constraints:
This yields a flexible constraint that varies according to shape information
rather than a fixed one (such as the equivalent diameter of the projected area
typically used by other mining researchers).

7. Development of the Weighting Function:
Based on the fragment shape information, the Weighting Function is used to
quantify, in a probabilistic manner, the likelihood of a specific fragment passing
through various mesh sizes.

8. Development of the Virtual Sieving method:
Derived from the logic of the sieving process, the Virtual Sieving method is a
powerful tool for estimating the size distribution either by number or by volume.

9. Design of a performance evaluation strategy:
This strategy consists of two phases, simulation and experimentation.

10. Integration of the above modules into a robust method:
In comparison with other methods, the newly developed method is more robust
and accurate. It has the following advantages:

- A flexible model.
- A simple representation of the distribution.
- Fewer computations are required.
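The Weighting Function and Virtual Sieving contributions above can be sketched together. The linear ramp between a fragment's two principal axes is only an illustrative stand-in for the thesis's actual Weighting Function; averaging the weights over all fragments at each mesh size then gives a cumulative distribution by number.

```python
def passing_weight(minor_axis, major_axis, mesh_size):
    """Probability-like weight that a fragment passes a given mesh size.
    Illustrative stand-in: 0 below the minor axis, 1 above the major axis,
    linear in between (the thesis derives its own form)."""
    if mesh_size >= major_axis:
        return 1.0
    if mesh_size <= minor_axis:
        return 0.0
    return (mesh_size - minor_axis) / (major_axis - minor_axis)

def cumulative_by_number(fragments, mesh_sizes):
    """Cumulative size distribution by number: the average passing weight
    over all fragments at each mesh size."""
    n = len(fragments)
    return [sum(passing_weight(a, b, s) for a, b in fragments) / n
            for s in mesh_sizes]

fragments = [(2.0, 5.0), (3.0, 8.0), (1.0, 4.0)]   # (minor, major) axes, cm
dist = cumulative_by_number(fragments, [1, 3, 5, 8, 10])
```

The resulting curve is nondecreasing in mesh size, like the cumulative distributions plotted in Chapter 8.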


9.2 Limitations

The image analysis algorithm presented in this thesis showed encouraging results.
However, the success of this algorithm is highly dependent on the quality of the
image. Consequently, the algorithm suffers from the following limitations:

- In many images, boundaries of rocks are poorly defined, thus measurements
depend heavily on the reconstruction methods.
- The viewing angle was never addressed; it was always assumed that the camera
was orthogonal to the pile surface.
- Due to the lack of depth information, layer rescaling was not considered.
- The junction analysis algorithm considers only the local image information
(neighbouring pixels) to simplify and reduce the amount of computations.


- The layer classification strategy was based on a limited number of fragment
configurations. Furthermore, bisecting fragments and equivalents (i.e. disruption
of the contour at more than two locations) were never addressed in this study.
- The overall algorithm is configured as a collection of flexible modules (Figure
.6) which requires tuning of some of its individual parameters.

For size distribution estimation, fine correction was never addressed because of
the limited resolution of the camera. In the literature, there are many methods to
overcome this problem. One common method is to utilize the Rosin-Rammler
equation to infer the percentage of fines from the distribution. A second method is to
base the correction on the area occupied by the fines. A third method consists of
zooming in, performing the measurement, and subsequently correcting the measurement.
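The Rosin-Rammler route can be sketched as follows: fit the two parameters of the cumulative passing form P(x) = 1 - exp(-(x/xc)^n) from two coarse measurements by linearization, then extrapolate the passing fraction into the fines range. All numbers and names here are illustrative; the thesis does not prescribe this particular fitting procedure.

```python
import math

def fit_rosin_rammler(x1, p1, x2, p2):
    """Fit P(x) = 1 - exp(-(x / xc)**n) exactly through two measured
    (size, passing-fraction) points via the linearization
    ln(-ln(1 - P)) = n*ln(x) - n*ln(xc)."""
    y1 = math.log(-math.log(1.0 - p1))
    y2 = math.log(-math.log(1.0 - p2))
    n = (y2 - y1) / (math.log(x2) - math.log(x1))
    xc = x1 / math.exp(y1 / n)
    return n, xc

def passing(x, n, xc):
    """Cumulative fraction passing mesh size x."""
    return 1.0 - math.exp(-(x / xc) ** n)

# infer the fines fraction below 1 cm from two coarse measurements
n, xc = fit_rosin_rammler(5.0, 0.3, 20.0, 0.9)
fines = passing(1.0, n, xc)
```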


9.3 Recommendations for Future Work

This study has presented several opportunities for future research. Among them are:

- A different type of sensing methodology, such as colour images, stereo vision,
or laser range finders, is needed for more accurate results and to obtain depth
information.
- Three-dimensional modelling of fragments is also recommended to increase the
effectiveness of the Virtual Sieving algorithm.
- Bisecting rocks are an important issue, requiring the development of a search
strategy to group the unconnected parts of the contour.
- Improvement of the overall algorithms is also recommended to reduce the
number of parameters needed to segment fragments.
- A more general layer classification definition is required.
- The computational cost has not been studied in this thesis, but it is very
important for the practicality of the automation process. Therefore, it should be
given a high priority in future research.
- Fine correction is an important issue for blast assessment; consequently a fast,
effective and accurate method is required.
- Since we have successfully demonstrated the applicability of the proposed
Virtual Sieving algorithm, it is recommended that it be implemented in a mining
environment.


References
[1] J. Adams. Sieve size statistics from grain measurement. The Journal of Geology,
Vol. 85(No. 6):209 - 227, January-November 1977.

[2] A. Albano. Representation of digitized contours in terms of conic arcs and
straight-line segments. Computer Vision, Graphics, and Image Processing, Vol.
3(No. 1):23 - 33, March 1974.

[3] T. Allen. Particle Size Measurement. Chapman and Hall, New York, third
edition, 1981.

[4] I. Anderson and J. Bezdek. Curvature and tangential deflection of discrete arcs:
A theory based on the commutator of scatter matrix pairs and its application to
vertex detection in planar shape data. IEEE Transactions on Pattern Analysis
and Machine Intelligence, Vol. PAMI-6(No. 1):27 - 40, January 1984.

[5] H. Asada and M. Brady. The curvature primal sketch. IEEE Transactions on
Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 1):2 - 14, January
1986.

[6] K. Astrom and B. Wittenmark. Computer Controlled Systems: Theory and
Design. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1984.

[7] T. Atchison. Fragmentation principles. In E. Pfleider, editor, Surface Mining,
pages 355 - 372. The American Institute of Mining, Metallurgical and Petroleum
Engineers, Inc., New York, NY, 1968.


[8] G. Bach. Size distribution of particles derived from the size distribution of their
sections. In H. Elias, editor, Proceedings of the Second International Congress
for Stereology, pages 174 - 186, Chicago, April 1967.

[9] L. Baratin, F. Crasilla, and P. Paronuzzi. Image processing for determining
joint parameters in difficult rock slope conditions. In Proceedings, Close-Range
Photogrammetry Meets Machine Vision, volume Vol. 395, pages 878 - 885,
Zurich, Switzerland, 1990.

[10] B. Barry. Errors in Practical Measurement in Science, Engineering, and Technology.
John Wiley and Sons, Inc., New York, New York, 1978.

[11] A. Bedair, L. Daneshmend, C. Hendricks, and M. Scoble. Automated image
segmentation and measurement for rock fragmentation analysis. In Fourth Canadian
Symposium on Mining Automation, pages 50 - 56, Montréal, Québec,
October 1994.

[12] A. Bedair, L. Daneshmend, C. Hendricks, and M. Scoble. Robust computer
vision techniques for rock fragmentation and loading analysis. In Third Conference
on Computer Applications in the Mineral Industry, pages 664 - 61,
Montréal, Québec, October 1995.

[13] M. Berger and R. Mohr. Towards autonomy in active contour models. In
Proceedings of the 10th International Conference on Pattern Recognition, pages
847 - 851, Atlantic City, New Jersey, June 1990.

[14] F. Bergholm. Edge focusing. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. PAMI-9(No. 6):726 - 741, November 1987.

[15] O. Bergmann, J. Riggle, and F. Wu. Model rock blasting effect of explosives
properties and other variables on blasting results. International Journal of Rock
Mechanics and Mining Sciences & Geomechanics Abstracts, Vol. 11:586 - 612,
1973.
[16] J. Beusmans, D. Hoffman, and B. Bennett. Description of solid shape and its
inference from occluding contours. Journal of the Optical Society of America
A, Vol. 4(No. 7):1155 - 1167, July 1987.

[17] A. Blake and A. Zisserman. Visual Reconstruction. MIT Press, Cambridge,
MA., 1987.

[18] G. Bonifazi and P. Massacci. Ore deposit structure evaluation by image processing
of exploitation walls. In Proceedings of the 6th IFAC Symposium on Automation
in Mining, Mineral and Metal Processing, pages 39 - 43, Buenos Aires,
Argentina, September 1989.

[19] G. Buchan, K. Grewal, and A. Robson. Improved models of particle-size distribution:
An illustration of model comparison techniques. Soil Science Society
of America Journal, Vol. 57(No. 4):901 - 908, July-August 1993.

[20] J. Canny. A computational approach to edge detection. IEEE Transactions
on Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 6):679 - 698,
November 1986.

[21] O. Carlsson and L. Nyberg. A method for estimation of fragment size distribution
with automatic image processing. In Proceedings of the First International
Symposium on Rock Fragmentation by Blasting, pages 333 - 345, Lulea, Sweden,
August 1983.

[22] A. Carter. An experimental sieving machine. Journal of Testing and Evaluation,
Vol. 15:87 - 94, 1987.


[23] C. Cheung and A. Ord. An on-line fragment size analyser using image processing
techniques. In Proceedings of the Third International Symposium on Rock
Fragmentation by Blasting, pages 233 - 238, Brisbane, Australia, August 1990.

[24] W. Cheung, F. P. Ferrie, R. Dimitrakopoulos, and G. Carayannis. Computer
vision-based rock modelling. Computing Systems in Engineering, Vol. 3(No.
5):601 - 608, 1992.

[25] R. Chin and C. Yeh. Quantitative evaluation of some edge-preserving noise-smoothing
techniques. Computer Vision, Graphics, and Image Processing, Vol.
23(No. 1):67 - 91, July 1983.

[26] T. Choi, H. Delingette, M. DeLuise, Y. Hsin, M. Hebert, and K. Ikeuchi. A
perception and manipulation system for collecting rock samples. In Proceedings
of the Fourth Annual Space Operations, Applications, and Research Symposium,
Albuquerque, NM, June 1990.

[27] G. Clark. Principles of Rock Fragmentation. John Wiley and Sons, Inc., New
York, N.Y., 1987.

[28] L. Cohen and I. Cohen. A finite element method applied to new active
contour models and 3D reconstruction from cross sections. In Third International
Conference on Computer Vision, pages 587 - 591, Osaka, Japan, December
1990.

[29] M. Concetta-Morrone and D. Burr. Feature detection in human vision: A
phase-dependent energy model. Proceedings of the Royal Society of London,
Series B, Vol. 235(No. 1280):221 - 245, December 1988.

[30] P. Corke. Machine vision feedback control of mining machinery. In Third
International Symposium on Mine Mechanization and Automation, volume 1,
pages 5-1 - 5-11, Golden, Colorado, June 1995.


[31] I. Cox, J. Rehg, and S. Hingorani. A Bayesian multiple hypothesis
approach to contour grouping. In G. Sandini, editor, Proceedings of the Second
European Conference on Computer Vision ECCV'92, pages 2 - 86, Santa
Margherita Ligure, Italy, May 1992. Springer Verlag.

[32] T. Crimmins. Geometric filter for speckle reduction. Applied Optics, Vol. 24(No.
10):1438 - 1443, May 1985.

[33] C. Cunningham. The Kuz-Ram model for prediction of fragmentation from blasting.
In First International Symposium on Rock Fragmentation by Blasting, volume
Vol. 2, pages 439 - 452, Lulea, Sweden, 1983.

[34] L. Davis. A survey of edge detection techniques. Computer Vision, Graphics,
and Image Processing, Vol. 4(No. 3):248 - 270, September 1975.

[35] L. Davis and A. Rosenfeld. Noise clearing by iterated local averaging. IEEE
Transactions on Systems, Man, and Cybernetics, Vol. SMC-8(No. 9):705 - 710,
September 1978.

[36] R. DeHoff and P. Bousquet. Estimation of the size distribution of triaxial ellipsoidal
particles from the distribution of linear intercepts. Journal of Microscopy,
Vol. 92:119 - 135, October 1970.

[37] R. DeHoff and F. Rhines. Determination of number of particles per unit volume
from measurements made on random plane sections: The general cylinder and
the ellipsoid. Transactions of the Metallurgical Society of AIME, Vol. 221:975
- 982, October 1961.

[38] R. DeHoff and F. Rhines. Quantitative Microscopy. McGraw-Hill, Inc, New
York, 1968.


[39] R. DeHoff. The determination of the size distribution of ellipsoidal particles
from measurements made on random plane sections. Transactions of the Metallurgical
Society of AIME, Vol. 224:274 - 277, June 1962.

[40] M. Diamond, N. Narasimhamurthi, and S. Ganapathy. Optimization approaches
to the problem of edge linking with a focus of parallel processing.
In A. Bundy, editor, Proceedings of the Eighth International Joint Conference
on Artificial Intelligence, pages 1004 - 1009, Karlsruhe, Germany, September
1983.

[41] L. Dorst and A. Smeulders. Length estimators for digitized contours. Computer
Vision, Graphics, and Image Processing, Vol. 40(No. 3):311 - 333, December
1987.

[42] C. Doucet and Y. Lizotte. Rock fragmentation assessment by digital photography
analysis. Technical Report MRL 92-116 (TR), Mining Research Laboratories,
CANMET, Val d'Or, Quebec, November 1992.

[43] R. Duda and P. Hart. Use of Hough transformation to detect lines and curves in
pictures. Communications of the ACM, Vol. 15(No. 1):11 - 15, January 1972.

[44] G. Dudek and J. Tsotsos. Shape representation and recognition from curvature.
In Proceedings of the 1991 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pages 30 - 37, Lahaina, Maui, Hawaii, June
1991.

[45] J. Fang and T. Huang. A corner finding algorithm for image analysis and
registration. In Proceedings of the National Conference on Artificial Intelligence
AAAI-82, pages 46 - 49, Pittsburgh, Pennsylvania, August 1982.

[46] I. Farmer, J. Kemeny, and C. McDaniel. Analysis of rock fragmentation in bench
blasting using digital image processing after blasting by photographic method.
In Proceedings of the International Congress on Rock Mechanics, pages 1037 -
1042, Aachen, Germany, 1991.
[47] J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles
and Practice. Addison-Wesley Publishing Company, Inc., Reading, Massachusetts,
1990.

[48] G. Foresti, V. Muino, C. Regazzoni, and G. Vernazza. Grouping of rectilinear
segments by labelled Hough transform. CVGIP: Image Understanding, Vol.
59(No. 1):22 - 42, January 1994.

[49] J. Franklin and N. Maerz. Digital photo-analysis of rock jointing. In Proceedings
of the 39th Canadian Geotechnical Conference, 1986.

[50] H. Freeman. On the encoding of arbitrary geometric configurations. IRE Transactions
on Electronic Computers, Vol. EC-10:260 - 268, June 1961.

[51] W. Freeman and E. Adelson. The design and use of steerable filters. IEEE
Transactions on Pattern Analysis and Machine Intelligence, Vol. 13(No. 9):891
- 906, September 1991.

[52] K. Fu and J. Mui. A survey on image segmentation. Pattern Recognition, Vol.
13(No. 1):3 - 16, 1981.

[53] Q. Gao and A. Wong. Rock image segmentation. In Proceedings of Vision
Interface '89, pages 125 - 133, London, Ontario, June 1989.

[54] C. Garbay. Image structure representation and processing: A discussion of some
segmentation methods in cytology. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. PAMI-8(No. 2):140 - 146, March 1986.


[55] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the
Bayesian restoration of images. IEEE Transactions on Pattern Analysis and
Machine Intelligence, Vol. PAMI-6(No. 6):721 - 741, November 1984.

[56] R. Gonzalez and P. Wintz. Digital Image Processing. Addison-Wesley Publishing
Company, New York, 1987.

[57] R. Graham. Snow removal - a noise-striping process for picture signals. IRE
Transactions on Information Theory, Vol. IT-8(No. 2):129 - 144, February 1962.

[58] K.M. Grainger and G.G. Paine. The development and application of a photographic
fragmentation sizing assessment technique for blast analysis. In Proceedings
of the Third International Symposium on Rock Fragmentation by Blasting,
pages 255 - 258, Brisbane, Australia, August 1990.

[59] S. Grannes and R. Zahl. Development of a digital image based on-line product
size sensor for taconite mining. In Proceedings of the 10th WVU International
Mining Electrotechnology Conference, pages 102 - 109, July 1990.

[60] J. Grant and A. Dutton. Development of a fragmentation monitoring system
for evaluating open slope blast performance at Mount Isa Mines. In First International
Symposium on Rock Fragmentation by Blasting, volume Vol. 2, pages
637 - 652, Lulea, Sweden, 1983.

[61] R. Haralick and L. Shapiro. Computer and Robot Vision, volume Volume 1.
Addison-Wesley Publishing Company, Inc., Reading, Massachusetts, 1992.

[62] R. Haralick and L. Watson. A facet model for image data. Computer Vision,
Graphics, and Image Processing, Vol. 15(No. 2):113 - 129, February 1981.

[63] R. Haralick, L. Watson, and T. Laffey. The topographic primal sketch. International
Journal of Robotics Research, Vol. 2(No. 1):50 - 72, 1983.


[64] R. Haralick. Digital step edges from zero crossings of second directional derivatives.
IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol.
PAMI-6(No. 1):58 - 68, January 1984.

[65] R. Duda and P. Hart. Pattern Classification and Scene Analysis. John Wiley and
Sons, Inc., New York, 1973.

[66] M. Hu. Visual recognition by moment invariants. IRE Transactions on Information
Theory, Vol. IT-8(No. 2):179 - 187, February 1962.

[67] J. Huddleston and J. Ben-Arie. Grouping edgels into structural entities using
circular symmetry, the distributed Hough transform, and probabilistic non-accidentalness.
CVGIP: Image Understanding, Vol. 57(No. 2):227 - 242, March
1993.

[68] R. Hummel. Representation based on zero-crossings in scale-space. In Proceedings
of the 1986 IEEE Computer Society Conference on Computer Vision and
Pattern Recognition, pages 204 - 209, Miami Beach, Florida, June 1986.

[69] G. Hunter, C. McDermott, N. Miles, A. Singh, and M. Scoble. A review of
image analysis techniques for measuring blast fragmentation. Mining Science
and Technology, Vol. 11:19 - 36, 1990.

[70] G. Hunter, D. Sandy, and N. Miles. Optimization of blasting in a large open
pit mine. In Proceedings of the Third International Symposium on Rock Fragmentation
by Blasting, pages 21 - 30, Brisbane, Australia, August 1990.

[71] R. Hurteau, P. Corbeil, and A. Piche. Automatic positioning of a rockbreaker
using vision and a tactile sensor. In Proceedings, International Workshop on Sensorial
Integration for Industrial Robots: Architectures and Applications, pages
334 - 336, Zaragoza, Spain, November 1989.

[72] R. Hurteau, M. St-Amant, Y.Laperriere, G. Chevrette, and A. Piche. Optically


guided lhd: A demonstration prototype. In International Symposium on Mine
Mechanization and Automation, volume Vol. 1, pages 6-11 - 6-20, Colorado,

June 1991.
[73J S. Grannes . Determining size distribution of moving pellets by computer image processing. In R. Ramani, editor, Proceedings of the 19th Application of
Computers and Operations Research in the Mineral Industry, pages 746 - 753.

Society of Mining Engineers, Inc., April 1986.


[74] S. Saltikov . The determination of the size"distribution of particles in an opaque
material from a measurement of the size distribution of their sections. In
H. Elias, editor, Proceedings of the Second International Congress for Stere-

ology, pages 163 - 173, Chicago, April ]::67.

[75] R. Irani and C. Callis. Particle Size: Measurement, interpretation, and Application. Wiley, New York, 1963.

[76] D. Jacobs. Recognizing 3-d objects using 2-d images. PhD thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, 1992.
[77] R. Jain, R. Kasturi, and B. Schunck. Machine Vision. McGraw-Hill, Inc., New York, 1995.
[78] H. Jeffreys. Scientific Inference. Cambridge University Press, London, UK,
third edition, 1973.
[79] M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. In First International Conference on Computer Vision, pages 259 - 268, London, England, June 1987.


[80] J. Kemeny, A. Devgan, R. Hagaman, and X. Wu. Analysis of rock fragmentation using digital image processing. Journal of Geotechnical Engineering, Vol. 119:1144 - 1160, July 1993.
[81] B. Kettunen, P. Niles, and R. Bleifuss. Size distribution of mine run ore by image analysis. In Proceedings of the 65th Annual Meeting of the Minnesota Section, SME, pages 119 - 130, Duluth, Minnesota, 1992.


[82] R. King. Determination of the distribution of size of irregularly shaped particles from measurements on sections or projected areas. Powder Technology, Vol. 32(No. 1):87 - 100, May-June 1982.
[83] L. Kitchen and A. Rosenfeld. Gray level corner detection. Pattern Recognition

Letters, Vol. 1:95 - 102, 1982.

[84] L. Kubinova. Recent stereological methods for the measurement of leaf anatomical characteristics: Estimation of the number and size of stomata and mesophyll cells. Journal of Experimental Botany, Vol. 45(No. 270):119 - 127, January 1994.
[85] Z. Kulpa. Area and perimeter measurement of blobs in discrete binary pictures. Computer Vision, Graphics, and Image Processing, Vol. 6(No. 5):434 - 451, October 1977.
[86] U. Landau. Estimation of a circular arc center and its radius. Computer Vision, Graphics, and Image Processing, Vol. 38(No. 3):317 - 326, November 1987.


[87] T. Lange. Real-time measurement of the size distribution of rocks on a conveyor belt. In International Federation on Automatic Control: Workshop on Applied Measurements in Mineral and Metal Processing, Johannesburg, South Africa, October 1988.


[88] C. Lee, R. Haralick, and K. Deguchi. Estimation of curvature from sampled noisy data. In Proceedings of the 1993 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 536 - 541, New York, New York, June 1993.


[89] A. Lev, S. Zucker, and A. Rosenfeld. Iterative enhancement of noisy images. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-7(No. 6):435 - 442, June 1977.


[90] M. Levine. Vision in Man and Machine. McGraw-Hill, Inc., 1985.
[91] E. Lord and C. Wilson. The Mathematical Description of Shape and Form. Ellis
Horwood Limited, Chichester, England, 1984.

[92] D. Lowe. Organization of smooth image curves at multiple scales. In Proceedings of the Second International Conference on Computer Vision, pages 558 - 567, Tampa, FL, December 1988.


[93] D. Lowe. Perceptual Organization and Visual Recognition. Kluwer Academic Publishers, Hingham, Massachusetts, 1985.
[94] R. MacClachlan and A. Singh. Photographic determination of oversize particles in heaps of blasted rock. J. S. Afr. Inst. Min. Metall., Vol. 89(No. 5):147 - 152, 1989.
[95] N. Maerz, J. Franklin, and D. Coursen. Fragmentation measurement for experimental blasting in Virginia. In Society of Explosives Engineers Proceedings of the Third Mini-Symposium on Explosives and Blasting Research, pages 56 - 70, Miami, FL, February 1987.


[96] N. Maerz, J. Franklin, L. Rothenburg, and D. Coursen. Measurement of rock fragmentation by digital photoanalysis. In Proceedings of the Fifth International Congress on Rock Mechanics, pages 687 - 692, Montreal, Quebec, August 1987.

[97] R. Manana, J. J. Artieda, and J. C. Catalina. Ore sorting and artificial vision. In Proceedings of the First IFAC Symposium on Automation for Mineral Resource Development, pages 235 - 240, Queensland, Australia, 1985.

[98] D. Marr. Early processing of visual information. Philosophical Transactions of the Royal Society of London, Series B, Vol. 275:483 - 524, 1976.

[99] D. Marr and E. Hildreth. Theory of edge detection. Proceedings of the Royal Society of London, Series B, Vol. 207(No. 1167):187 - 217, February 1980.

[100] D. Marr and T. Poggio. A computational theory of human stereo vision. Proceedings of the Royal Society of London, Series B, Vol. 204(No. 1156):301 - 328,

May 1979.
[101] D. Marr. Vision. W. H. Freeman and Company, San Francisco, 1982.
[102] A. Martelli. An application of heuristic search methods to edge and contour detection. Communications of the ACM, Vol. 19(No. 2):73 - 83, February 1976.
[103] G. Mastin. Adaptive filters for digital image noise smoothing: An evaluation. Computer Vision, Graphics, and Image Processing, Vol. 31(No. 1):103 - 121, July 1985.
[104] C. McDermott, G. Hunter, and N. Miles. The application of image analysis to the measurement of blast fragmentation. Technical report, Nottingham University, Nottingham, UK, 1989.


[105] G. Medioni and Y. Yasumoto. Corner detection and curve representation using cubic b-splines. Computer Vision, Graphics, and Image Processing, Vol. 39(No. 3):267 - 278, September 1987.
[106] F. Mokhtarian and A. Mackworth. Scale-based description and recognition of planar curves and two-dimensional shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 1):34 - 43, January 1986.
[107] U. Montanari. On the optimal detection of curves in noisy pictures. Communications of the ACM, Vol. 14(No. 5):335 - 345, May 1971.
[108] P. Moran. Measuring the length of a curve. Biometrika, Vol. 53(No. 3 and 4):359 - 364, 1966.

[109] M. Nagao and T. Matsuyama. Edge preserving smoothing. Computer Vision, Graphics, and Image Processing, Vol. 9(No. 4):394 - 407, April 1979.
[110] H. Nagel. Displacement vectors derived from second-order intensity variations in image sequences. Computer Vision, Graphics, and Image Processing, Vol. 21(No. 1):85 - 117, January 1983.
[111] K. Nakayama and S. Shimojo. Da Vinci stereopsis: depth and subjective occluding contours from unpaired image points. Vision Research, Vol. 30(No. 11):1811 - 1825, 1990.
[112] V. Nalwa and T. Binford. On detecting edges. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-8(No. 6):699 - 714, November 1986.
[113] R. Nevatia. Locating object boundaries in textured environments. IEEE Transactions on Computers, Vol. C-25(No. 11):1170 - 1175, November 1976.


[114] H.H. Nguyen and P. Cohen. Automated recognition of ore distribution by textural segmentation. In Third International Symposium on Mine Mechanization and Automation, volume 1, pages 3-1 - 3-11, Golden, Colorado, June 1995.

[115] S-L. Nie and A. Rustan. Techniques and procedures in analyzing fragmentation after blasting by photographic method. In Proceedings of the Second International Symposium on Rock Fragmentation by Blasting, pages 102 - 113, Colorado, August 1987.


[116] K. Nielsen. Optimum fragmentation in underground mining. In R. Ramani, editor, 19th Application of Computers and Operations Research in the Mineral Industry, pages 46 - 53. Society of Mining Engineers, Inc., April 1986.

[117] M. Nitzberg and D. Mumford. The 2.1-d sketch. In Third International Conference on Computer Vision, pages 138 - 144, Osaka, Japan, December 1990.

[118] M. Nitzberg, D. Mumford, and T. Shiota. Filtering, Segmentation and Depth. Springer-Verlag, Berlin Heidelberg, Germany, 1993.
[119] A. Noble. Finding corners. Image and Vision Computing, Vol. 6(No. 2):121 - 128, 1988.
[120] L. Nyberg, O. Carlsson, and B. Schmidtbauer. Estimation of the size distribution of fragmented rock in ore mining through automatic image processing. In Proceedings of the IMEKO 9th World Congress, pages 293 - 302, May 1982.

[121] L. Cruz-Orive. Particle size-shape distribution: The general spheroid problem, I. Mathematical model. Journal of Microscopy, Vol. 107:235 - 253, August 1976.

[122] L. Cruz-Orive. Particle size-shape distribution: The general spheroid problem, II. Stochastic model and practical guide. Journal of Microscopy, Vol. 112:153 - 167, March 1978.


[123] J.J. Orteu, J. C. Catalina, and M. Devy. Perception for a roadheader in automatic selective cutting operation. In Proceedings of the 1992 IEEE International Conference on Robotics and Automation, pages 626 - 632, Nice, France, May 1992.
[124] N. Paley, G.J. Lyman, and A. Kavetsky. Optical blast fragmentation assessment. In Proceedings of the Third International Symposium on Rock Fragmentation by Blasting, pages 291 - 301, Brisbane, Australia, August 1990.

[125] T. Pavlidis. Algorithms for Graphics and Image Processing. Computer Science Press, 1982.
[126] J. Peck, C. Hendricks, and M. Scoble. Blast optimization through performance monitoring of drills and shovels. In M. Singhal and M. Vavra, editors, Proceedings of the Second International Symposium on Mine Planning and Equipment Selection, pages 159 - 166, Calgary, Alberta, November 1990.

[127] T. Peli and D. Malah. A study of edge detection algorithms. Computer Vision, Graphics, and Image Processing, Vol. 20(No. 1):1 - 21, September 1982.

[128] P. Perona. Steerable-scalable kernels for edge detection and junction analysis. In G. Sandini, editor, Proceedings of the Second European Conference on Computer Vision ECCV'92, pages 1 - 18, Santa Margherita Ligure, Italy, May 1992. Springer Verlag.
[129] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12(No. 7):629 - 639, July 1990.
[130] P. Perona and J. Malik. Detecting and localizing edges composed of steps, peaks and roofs. In Third International Conference on Computer Vision, pages 52 - 57, Osaka, Japan, December 1990.


[131] W. Pratt. Digital Image Processing. John Wiley and Sons, Inc., New York,
second edition, 1991.
[132] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. Numerical Recipes in C. Cambridge University Press, 1990.
[133] K. Rangarajan, M. Shah, and D. Van Brackle. Optimal corner detector. In Proceedings of the Second International Conference on Computer Vision, pages 90 - 94, Tampa, FL, December 1988.


[134] L. Roberts. Machine perception of three-dimensional solids. In J. Tippett, D. Berkowitz, L. Clapp, C. Koester, and A. Vanderburgh, editors, Optical and Electro-Optical Information Processing Technology, pages 159 - 197. The Massachusetts Institute of Technology Press, 1964.

[135] P. Rosin and E. Rammler. Laws governing the fineness of powdered coal. J. Inst. Fuel, Vol. 7:29 - 36, 1933.

[136] J.C. Russ. Image analysis of the microstructure of materials. In D. B. Williams, A. R. Pelton, and R. Gronsky, editors, Images of Materials, pages 338 - 373. Oxford University Press Inc., New York, 1991.
[137] W. Rutkowski. Shape completion. Computer Vision, Graphics, and Image Processing, Vol. 9(No. 1):89 - 101, January 1979.

[138] P. Sahoo, S. Soltani, K. Wong, and Y. Chen. A survey of thresholding techniques. Computer Vision, Graphics, and Image Processing, Vol. 41(No. 2):233 - 260, February 1988.
[139] P. Saint-Marc, J.S. Chen, and G. Medioni. Adaptive smoothing: A general
tool for early vision. IEEE Transactions on Pattern Analysis and Machine

Intelligence, Vol. 13(No. 6):514 - 529, June 1991.


[140] L. Santaló. Integral Geometry and Geometric Probability. Addison-Wesley Publishing Company, 1976.
[141] I. Saxl. Stereology of Objects with Internal Structure. Elsevier Science Publishers, Amsterdam, The Netherlands, 1989.
[142] J. Schleifer, R. Chavez, D. Leblin, and S. Grollier. Grain size distribution analysis for blasting by means of image processing. In J. Elbrond and X. Tang, editors, Proceedings of the International Symposium on the Application of Computers and Operations Research in the Mineral Industries, pages 361 - 367, Montreal, Quebec, November 1993.


[143] J. Serra. Image Analysis and Mathematical Morphology. Academic Press, Inc.,
New York, NY, 1982.

[144] G. Shaffer, A. Stentz, W. Whittaker, and K. Fitzpatrick. Position estimator for underground mine equipment. IEEE Transactions on Industry Applications, Vol. 28(No. 5):1131 - 1140, September-October 1992.
[145] K. Shanmugam, F. Dickey, and J. Green. An optimal frequency domain filter for edge detection in digital pictures. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-1(No. 1):37 - 49, January 1979.

[146] A. Shashua. Geometry and photometry in 3d visual recognition. PhD thesis, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 1992.
[147] J. Shu. One-pixel-wide edge detection. Pattern Recognition, Vol. 22(No. 6):665 - 673, 1989.
[148] R. Stefanelli and A. Rosenfeld. Some parallel thinning algorithms for digital pictures. Journal of the ACM, Vol. 18(No. 2):255 - 264, April 1971.


[149] H. Steinhaus. Length, shape and area. Colloquium Mathematicum, Vol. 3:1 - 13, 1954.
[150] D. Struik. Lectures on Classical Differential Geometry. Addison-Wesley Publishing Company, New York, 1961.
[151] H. Takahashi, H. Kamata, T. Masuyama, and S. Sarata. Autonomous shovelling of rocks by using image vision system on LHD. In Third International Symposium on Mine Mechanization and Automation, volume 1, pages 1-33 - 1-44, Golden, Colorado, June 1995.
[152] G. Tallis. Estimating the size distribution of spherical and elliptical bodies in conglomerates from plane sections. Biometrics, Vol. 26(No. 1):87 - 103, March 1970.

[153] H. Tan, S. Gelfand, and E. Delp. A comparative cost function approach to edge detection. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19(No. 6):1337 - 1349, November/December 1989.
[154] S. Thomas and Y. Chan. A simple approach for the estimation of circular arc center and its radius. Computer Vision, Graphics, and Image Processing, Vol. 45(No. 3):362 - 370, March 1989.
[155] S. Ullman. Filling-in the gaps: The shape of subjective contours and a model for their generation. Biological Cybernetics, Vol. 25:1 - 6, 1976.
[156] E. Underwood. Quantitative Stereology. Addison-Wesley Publishing Company,
New York, 1970.
[157] D. Wang, A. Vagnucci, and C. Li. Gradient inverse weighted smoothing scheme and the evaluation of its performance. Computer Vision, Graphics, and Image Processing, Vol. 15(No. 2):167 - 181, February 1981.


[158] E. Weibel. Stereological Methods, volume 1. Academic Press, London, UK, 1979.

[159] E. Weibel. Stereological Methods, volume 2. Academic Press, London, UK, 1980.
[160] S. Wicksell. The corpuscle problem: A mathematical study of a biometric problem. Biometrika, Vol. 17:84 - 99, 1925.
[161] S. Wicksell. The corpuscle problem: Second memoir, case of ellipsoidal corpuscles. Biometrika, Vol. 18:152 - 172, 1926.
[162] L. Williams and A. Hanson. Perceptual completion of occluded surfaces. In Proceedings of the 1994 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 104 - 112, Seattle, Washington, June 1994.
[163] A. Witkin. Scale-space filtering. In A. Bundy, editor, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 1019 - 1022, Karlsruhe, Germany, September 1983.


[164] M. Worring and A. Smeulders. Digital curvature estimation. CVGIP: Image Understanding, Vol. 58(No. 3):366 - 382, November 1993.

[165] Y. Yasuoka and R. Haralick. Peak noise removal by a facet model. Pattern Recognition, Vol. 16(No. 1):23 - 29, 1983.

[166] A. Yu and N. Standish. A study of particle size distribution. Powder Technology, Vol. 62(No. 2):101 - 118, August 1990.

[167] P. Yu and J. Gentry. Comparison of generic inversion algorithms for characterizing particle size. Powder Technology, Vol. 50(No. 1):79 - 89, March 1987.

[168] O. Zuniga and R. Haralick. Corner detection using the facet model. In Proceedings of the 1983 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 30 - 37, Washington, D.C., June 1983.


Appendix A
Convex Hull Algorithm

The convex hull of a set is the intersection of all the half-planes containing it. An approximation to this, called the 8-hull, is defined as the intersection of only those half-planes which contain the set and whose edges are either horizontal or vertical or lie in either of the 45° diagonal directions. The 8-hull of a set has at most eight sides.
For a binary image, the convex hull iterative algorithm works as follows: at each iteration, the value of a pixel is changed from 0 to 1 if its neighbouring pixels have ones arranged in any one of a fixed set of 3 x 3 configurations (shown as diagrams in the original).


The blank squares can be either zeros or ones. If enough iterations of this step are performed, eventually the 8-hull of the given set will be generated and it will be invariant under further iterations.
An iterative algorithm for smoothing the ragged edges of binary images of vehicles obtained by slicing gray-level radar images is referred to as the complementary hulling algorithm. One step of the 8-hull algorithm described above is applied to the set. Then one step of the 8-hull algorithm is applied to its complement. In other words, one step of the 8-hull algorithm is applied, then zeros and ones are interchanged, then another step of the 8-hull algorithm is applied, and finally, zeros and ones are interchanged again. This has the effect of gradually reducing the maximum curvature of the boundary of the set. More precisely, with few exceptions, the boundary of a set invariant under this algorithm can turn a maximum of 45° at any vertex.
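The 8-hull defined above (the intersection of the eight supporting half-planes with horizontal, vertical, or diagonal edges) can also be computed directly from that definition rather than by pixel iteration. The sketch below is an illustration of the definition, not the implementation used in this thesis; it assumes a NumPy binary array with at least one foreground pixel, and the function name `eight_hull` is ours.

```python
import numpy as np

def eight_hull(img):
    """8-hull of a binary image: the set of pixels lying inside all eight
    half-planes that contain the foreground and whose edges are horizontal,
    vertical, or at 45 degrees."""
    ys, xs = np.nonzero(img)
    # outward normals of the eight supporting half-planes
    dirs = [(0, 1), (0, -1), (1, 0), (-1, 0),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    Y, X = np.indices(img.shape)
    hull = np.ones(img.shape, dtype=bool)
    for dy, dx in dirs:
        bound = (dy * ys + dx * xs).max()   # tightest offset containing the set
        hull &= (dy * Y + dx * X) <= bound  # keep pixels inside this half-plane
    return hull.astype(img.dtype)
```

Because only eight half-planes are intersected, the result is a polygonal region with at most eight sides, consistent with the description above.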


Appendix B
Curvature Estimation
The Nitzberg et al. [118] method of estimating the curvature is based on the local circle touching method, i.e. estimating the parameters of the best fitting circle and finding its centre and its radius.

Let $Q_i$ be a point on a curve such that $Q_i = (x_i, y_i)$, where $i = 1, \ldots, n$. The square error $e^2$, for a given candidate centre $c$ and radius $r$, is equal to the sum of squared radial distances from the circle to each point, i.e.

$$e^2 = \sum_i w_i \left( \| Q_i - c \| - r \right)^2 \qquad (B.1)$$

where $w_i$ is the error weighting, determined by the weighting function used.


Based on the assumption that

are ail close to the candidate circle, Nitzberg et

Qi

al. [118J used the approxi!natioD

LWi(IIQi -

cJ12 -

r 2)2

= LWi(IIQi + cll- r)2(II

i -

(2r)2L w i(llQi -

cll- r)2

cll- r)2
(B.2)

Thus, equation B.2 becomeS:

180

.4.PPENDIX B.

CURUTtiRE ESTI.\i.UION

ISl

lB.3)

Nitzberg et al. [118] estimated the parameters $a$, $b$, $c$ and $d$ using equation B.3 by first constructing the design matrix $A$:

$$A = \begin{pmatrix} \| Q_1 \|^2 & x_1 & y_1 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ \| Q_n \|^2 & x_n & y_n & 1 \end{pmatrix}$$

where $n$ is the number of points used to fit the model. They defined the parameter vector $B$ as follows:

$$B = (a, b, c, d)^T$$

For the error weight matrix, Nitzberg et al. [118] smoothed the data using a fixed standard deviation $\sigma$ for the Gaussian weighting function and a fixed odd integer window size $m$ (the number of points to fit at a time). They then built a diagonal matrix $W$ of the weights $w_i$, $i = 1, \ldots, 2m + 1$, given by

$$w_i = e^{-(i - (m+1))^2 / 2\sigma^2}$$

so that $w_{m+1}$ is the peak of the sampled weighting function. Thus

$$W = \begin{pmatrix} w_1 & & & 0 \\ & w_2 & & \\ & & \ddots & \\ 0 & & & w_n \end{pmatrix}$$

The numerator of equation B.3 can be written as¹

$$\sum_{i=1}^{n} w_i \left( a \| Q_i \|^2 + b x_i + c y_i + d \right)^2 = B^T \left( A^T W A \right) B$$

where

$$A^T W A = \begin{pmatrix}
\sum w_i \| Q_i \|^4 & \sum w_i x_i \| Q_i \|^2 & \sum w_i y_i \| Q_i \|^2 & \sum w_i \| Q_i \|^2 \\
\sum w_i x_i \| Q_i \|^2 & \sum w_i x_i^2 & \sum w_i x_i y_i & \sum w_i x_i \\
\sum w_i y_i \| Q_i \|^2 & \sum w_i x_i y_i & \sum w_i y_i^2 & \sum w_i y_i \\
\sum w_i \| Q_i \|^2 & \sum w_i x_i & \sum w_i y_i & \sum w_i
\end{pmatrix}$$

with all sums taken over $i = 1, \ldots, n$. Also, the denominator $b^2 + c^2 - 4ad$ can be represented in matrix form as follows:

$$b^2 + c^2 - 4ad = B^T C B, \qquad C = \begin{pmatrix} 0 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ -2 & 0 & 0 & 0 \end{pmatrix}$$

¹The superscript $T$ denotes the transpose of the matrix.

Equation B.3 is transformed into:

$$e^2 = \frac{B^T A^T W A B}{B^T C B} \qquad (B.4)$$
Performing a singular value decomposition of the square matrix $A^T W A$ results in the diagonal matrix:

$$A^T W A = E \, \Lambda \, E^T \qquad (B.5)$$

where $E$ is an orthogonal matrix² and $\Lambda$ contains the eigenvalues of $A^T W A$. From linear algebra, the square root of $\Lambda$ (a diagonal matrix) is equal to the square roots of its elements. Equation B.5 can be rewritten as follows:

$$A^T W A = \left( E \, \Lambda^{1/2} E^T \right) \left( E \, \Lambda^{1/2} E^T \right) \qquad (B.6)$$

If an element $i$ of the diagonal of $\Lambda$ is zero at this point, then there has been an exact fit of all points, and the algorithm yields

$$B = (i\text{th column of } E)$$

Let

$$A_1 = E \, \Lambda^{1/2} E^T$$

where $A_1$ is a symmetric positive definite matrix. Substituting $B' = A_1 B$, equation B.4 can then be reduced to:

$$e^2 = \frac{B'^T B'}{B'^T \left( A_1^{-1} C A_1^{-1} \right) B'} \qquad (B.7)$$

²Orthogonal matrices are characterized by $E^T = E^{-1}$.

Performing a singular value decomposition of $A_1^{-1} C A_1^{-1}$:

$$A_1^{-1} C A_1^{-1} = E_2 \, \Lambda_2 \, E_2^T$$

$\Lambda_2$ has three positive entries and one negative entry. Let the $i$th entry be the largest positive one; then

$$B = A_1^{-1} \cdot (i\text{th column of } E_2)$$

The error term in distance units is given by the square root of the quotient in equation B.4. With each fit, the curvature of only one point, the middle point of the window ($m + 1$), is estimated:

$$\kappa = \frac{2a}{\sqrt{b^2 + c^2 - 4ad}}$$
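The fitting chain of this appendix (design matrix, Gaussian window weights, the symmetric square root $A_1$, and the decomposition of $A_1^{-1} C A_1^{-1}$) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not Nitzberg et al.'s code: the function name and the default `sigma` are ours, a symmetric eigendecomposition stands in for the SVD (equivalent here, since the matrices involved are symmetric), and near-zero eigenvalues are clamped instead of branching into the exact-fit special case.

```python
import numpy as np

def fit_circle_curvature(pts, sigma=2.0):
    """Weighted algebraic circle fit a(x^2+y^2) + bx + cy + d = 0:
    minimise B^T (A^T W A) B / (B^T C B) and return the curvature
    2|a| / sqrt(b^2 + c^2 - 4ad) of the fitted circle."""
    pts = np.asarray(pts, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    n = len(pts)
    # Gaussian weights peaking at the middle point of the window
    i = np.arange(n)
    w = np.exp(-((i - (n - 1) / 2.0) ** 2) / (2.0 * sigma ** 2))
    A = np.column_stack([x ** 2 + y ** 2, x, y, np.ones(n)])  # design matrix
    M = A.T @ (w[:, None] * A)                                # A^T W A
    C = np.zeros((4, 4))
    C[1, 1] = C[2, 2] = 1.0                                   # b^2 + c^2 ...
    C[0, 3] = C[3, 0] = -2.0                                  # ... - 4ad
    # A1 = E Lambda^(1/2) E^T, the symmetric square root of M
    lam, E = np.linalg.eigh(M)
    A1 = E @ np.diag(np.sqrt(np.maximum(lam, 1e-12))) @ E.T
    A1inv = np.linalg.inv(A1)
    S = A1inv @ C @ A1inv
    # minimising e^2 = maximising the Rayleigh quotient of S:
    # take the eigenvector of the largest (positive) eigenvalue
    lam2, E2 = np.linalg.eigh(S)
    B = A1inv @ E2[:, np.argmax(lam2)]
    a, b, c, d = B
    return 2.0 * abs(a) / np.sqrt(b ** 2 + c ** 2 - 4.0 * a * d)
```

The returned value is scale-invariant in $B$, so the arbitrary normalisation of the eigenvector does not affect the estimated curvature; for points sampled from a circle of radius $r$ it approaches $1/r$.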
