
Advanced Robotics Research Limited

Department of Cognitive Science and Artificial Intelligence

Robotic Vision: A Preliminary Mathematical Model for Object Recognition

Colin Jenkins, August 1990

Abstract: TBD but mention transputers, GR6, Marr, vision, Kalman filtering etc. Note that
Mark Orr will probably take over this project...

1. Introduction
The goal of this work is to propose and implement an object recognition methodology using
existing quantitative and qualitative techniques. Furthermore, no use is made of ad hoc methods;
our aim is to formulate a computational theory. See Marr '82, who expounds at great length
on the importance of a computational theory. Because of the complexity of object recognition
we state simplifications (Section 2) which constrain the task to a manageable size whilst
retaining the features pertinent to ARRL. Many of the initial simplifications will be withdrawn
in the future as development continues.

For our purposes, we define object recognition to be the task of identifying a data object by
means of comparing it with model objects, while simultaneously computing a geometrical
transformation which takes the model object to the data object. The result is a logical statement
which can be inserted into the logical world model.

As proposed by Marr and as adopted by ARRL, data objects will be composed of collections of
surfaces (initially planar) existing in a surface world model. A sister project, Data Fusion, is
responsible for constructing and maintaining this world model. Determining which surfaces belong
to data objects is initially performed by data fusion and subsequently augmented during
object recognition.

Object recognition in the context of GR5 is depicted in Figure 1.

2. Assumptions & Simplifications


. Object models will be constructed from planar surfaces.

. Object models are rigid (no moving parts).

. Object models are not hierarchical (in the short term).

. Surface connectivity within object models will be provided by the model maker.

. Data objects to which models are matched are non-hierarchical, transitively
connected collections of planar surfaces called clusters.

. Models will be matched to clusters, resulting (if successful) in an exact match
or one being a sub-component of the other.



. No provision is made to match similar but differently scaled objects.

. Recognition algorithms are combinatorial.

. Surface boundaries are not explicitly used during recognition.

. No attempt is made to recognise across clusters, i.e. a cluster is matched to an
object model on a one-to-one basis.

3. Objectives

We wish to implement a flexible set of tools which can be used to develop and evaluate object
recognition methodologies. Thus we do not implement an object recognition program; rather we
supply a toolkit for constructing such programs easily. Included in these tools must be a variety
of Man-Machine interfaces enabling fast, efficient debugging and evaluation of both the tools and
the resulting applications alike. Having implemented these tools we wish to construct and evaluate
various object recognition schemes. Finally, a computational theory and implementation will
emerge from analysis of, and experimentation with, these schemes.

Figure 1: Object Recognition - Context. (Blocks shown: Sensor Systems, Geometrical World
Models, Grid, Motion Planning, Object Recognition, Situation Recognition, Situations.)

4. Approach

Based largely around the numerical work of Oliver Faugeras and the ideas of Bob Fisher, we
decompose object recognition into the following tasks: matching, localization and verification.
During matching we compare surfaces of data objects against surfaces of model objects, resulting
in groups of potentially corresponding surfaces, which we call correspondence hypotheses.
Localization takes these hypotheses and attempts to compute a transform from model to data for
each in turn, resulting in transformation hypotheses. Verification applies these transforms and
compares the data object to the transformed model objects. This is our overall approach; there
are many ways to achieve each individual task and many ways to impose a control strategy,
which is why we choose to develop a toolkit. Justification of the overall approach comes from
the observation that it is a generalisation of many recent object recognition systems and that it
satisfies all of the criteria specific to ARRL. Figure 2 pictures the overall approach functionally.

Figure 2: Approach. (Data and model objects feed the matching, localization and verification
stages.)

These criteria are:-

. The approach must facilitate incremental recognition, i.e. the ability to update
identification and location using the current identification/location and new data
surfaces. This is in preference to having to recalculate using new and old surfaces.
Thus as we see more surfaces of an object we revise its identification and location
incrementally.

. The approach must facilitate localization of objects, even if they are unidentifiable.
This follows from the fact that all objects must be manipulable by the robot,
regardless of whether or not it knows what they are, and from the observation that
in the real world it is infeasible to assume that all objects can be identified.

Surface Worlds

One of our simplifications is that data clusters and object models have the same data structure
and as such both reside in surface worlds. A surface world is used for the following purposes:-

. To contain surface images loaded from RD8 (and other sensor systems).



. As a representation of the world: surface world model.

. To contain object models.

Each surface world is organised as a collection of planar surfaces, planar boundaries, planar
joints and clusters. Here we include models under the term clusters. Each of these features is
represented as a node in a network. Nodes refer to each other by name, each name being unique.
Figure 3 depicts the structure of a cluster.

Figure 3: Cluster Network. (C - Cluster node, S - Planar Surface node, B - Planar Boundary
node, J - Planar Joint node; a possible visualisation of the cluster is also shown.)

In fact the cluster will have twice as many nodes as this, since both sides of each surface are
visible. Note that clusters cannot be hierarchical at present. Section ?.? describes the
implementation of surface worlds in detail; this software provides generalised network searching
algorithms as well as surface world construction facilities and will henceforth be referred to as
swtool.

Input Imagery

At present, images are used from the results of RD8. Many images can reside in a single surface
world. Facilities are available for clustering, i.e. given a collection of surfaces, boundaries and
joints, determine which surfaces belong together as constituents of clusters. RD8 must provide
data files as specified in Section 6 which can be read and converted into a surface world.

Object Matching & Localization

We have made the decision to use a numerical technique - Kalman filtering - as a sound
mathematical basis for data fusion and object localization. We chose it because it is well tried and
tested, its mathematics are well known, it incorporates error measures (essential for us), it is
iterative (in that partial results can be formed) and because Faugeras used it! Evaluation has
confirmed our choice. As previously mentioned, object localization is the process of finding a
transform from a model object to a data cluster. A simple combinatorial algorithm has been
implemented. It is known that in order to find such a transform, comprising a rotation and
translation, we need 3 pairs of corresponding surfaces from the object model and data cluster. Our
approach, then, is to generate all such tuples of pairs (matching) and try to localize each in turn.
However, a pruning process discards any obviously inconsistent pairings before localization as
follows:-

. If any two of these surfaces are parallel or anti-parallel then prune.

. If none of the surfaces are physically adjacent then prune.

Here parallel means that two surfaces have the same normal and anti-parallel that they have
complementary normals. These two rules discard most of the potential pairings. The resulting
correspondence hypotheses are passed to a Kalman filter which attempts to extract a
transformation from each. A further pruning process is implicit in the Kalman filter, which
inherently rejects any hypotheses which are numerically incompatible. The result is a collection
of transformation hypotheses, each composed of an axis of rotation k, an angle of rotation about
this axis θ and a covariance matrix S. The covariance matrix indicates the degree of confidence
we have in the result.
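A minimal sketch of the (anti-)parallel test, assuming surface normals are held as lists of three
numbers; the GR5 code works on swtool surface nodes, and the adjacency rule is checked against
the PLANAR-JOINT nodes instead:

(defun dot3 (u v)
  (+ (* (first u) (first v))
     (* (second u) (second v))
     (* (third u) (third v))))

(defun parallel-or-anti-parallel-p (n1 n2 &optional (tolerance 1.0e-3))
  "True if unit normals N1 and N2 are parallel or anti-parallel to within TOLERANCE."
  (> (abs (dot3 n1 n2)) (- 1.0 tolerance)))

A pairing of three surface pairs would be pruned if any two of its normals satisfy this predicate,
or if none of the surfaces are physically adjacent.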

Software has been developed to implement this technique as a breadth-first generate-and-test
algorithm. It will localize a single data cluster given an object model.

Object Verification

Working on the results of localization, verification compares each transformation hypothesis


against the original data cluster. The result will be one of:-

. Data cluster is sub-object of model.

. Model object is sub-object of data cluster.

. Exact match.

. Failure.

Comparison is based on the so-called nearest neighbour standard filter (NNSF). Closely related
to the Kalman filter, this technique provides us with a sound basis for matching features, for
example testing a value against a threshold criterion. This replaces the ad hoc |x - y| < t. In this
instance we compare planar equations.

A simple algorithm has been implemented. For each surface in a given transformation hypothesis,
look for a matching surface in the original data cluster. If all surfaces can be found then
the verification succeeds. A slight change in control allows the sub-object possibilities mentioned
above.
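A minimal sketch of this loop, assuming the transformed model and the data cluster are held as
lists of surfaces and that a predicate surfaces-match-p exists (hypothetical; it would apply the
NNSF test described above to the two planar equations):

(defun verify-hypothesis (transformed-model-surfaces data-surfaces)
  "True if every transformed model surface has a matching data surface."
  (every (lambda (model-surface)
           (some (lambda (data-surface)
                   (surfaces-match-p model-surface data-surface))  ; hypothetical NNSF test
                 data-surfaces))
         transformed-model-surfaces))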

Control strategies across collections of data clusters and object models have not been
implemented. Simple combinatorial algorithms could be trivially generated above the existing
cluster-model algorithms, although the cumulative effects of combinatorial searches will soon place
an upper limit on the complexity of potential world models and model databases. A better
approach might be to extend the existing algorithms to have better and wider searching
mechanisms and to employ some sort of invocation methodology; see Fisher & Orr.

5. Theory

For an explanation of the nomenclature used in the following sections see Appendix I.

Note: A vector is assumed to be a column matrix when used in matrix calculations unless
explicitly transposed notationally.

Note: A lot of this theory involves some horrible-looking equations; however, their appearance
usually masks an underlying simplicity which is worth remembering at all times.

Planar Surface Equations

We define an infinite (unbounded) plane by the vector equation:-

n.x = d

where n is a unit vector normal to the surface, x a point on the surface and d the orthogonal
distance of the plane from the origin. Note that the planes represented by {n,d} and {-n,-d} are in
fact the same physical surface, and in the past efforts have been made to canonicalise these two
representations (so that Kalman filtering works). However, we make use of the distinction to model
'both sides' of a surface. All visible surface sides (henceforth called surfaces) are included in
the model. For example, consider the simple illustration below. This model has 4 surfaces and
one planar joint. The viewing angle will determine if this joint is concave or convex. All 4 surfaces
must be observed to completely verify the identity of the object. This might seem over-complicated;
however, consider textured or coloured surfaces and the reasoning becomes obvious.

Figure ?: All possible planes from n and d.

Thus we do not have to canonicalise {n,d} and {-n,-d} since they represent different surfaces.



Figure ?: An object modelled by 4 surfaces.

Planar Surface Transformations

The following equations transform a plane represented by {n,d} into a plane {n',d'} by applying
the rotation matrix R and translation t. Object localization and object modelling are founded on
these two very important equations:-

n' = Rn
d' = d + (Rn).t
   = d + n'.t
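As a minimal sketch of these two equations (not the GR5 implementation itself), the following
assumes R is held as a list of three rows and n, t as lists of three numbers; the helper names are
illustrative:

(defun mat*vec (r v)
  "Multiply the 3x3 matrix R (a list of rows) by the vector V."
  (mapcar (lambda (row) (apply #'+ (mapcar #'* row v))) r))

(defun transform-plane (r tr n d)
  "Apply rotation R and translation TR to the plane {N,D}; returns n' and d'."
  (let ((n-prime (mat*vec r n)))
    (values n-prime
            (+ d (apply #'+ (mapcar #'* n-prime tr))))))   ; d' = d + n'.t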

Rotation Representations

We represent rotation in two ways: firstly as a traditional 3x3 matrix R (as above), and secondly
by a vector r representing an axis and angle of rotation. r encapsulates the axis k and angle θ
by the equation r = kθ. Note that there are two numerically different, but functionally equivalent,
ways of representing a particular rotation; these are:-

kθ    and    -k(2π - θ)

We use r in our Kalman formulation since it is more compact than R, requiring only a 3x3
covariance matrix. We can transform points and planes using r directly or by converting r into R.
To rotate a point or position vector v into v' using r we employ the equation given in Altmann
1986, pp. 163:-

v' = S(r, v) = v cosθ + sinθ (k × v) + (1 - cosθ) (k.v) k

This is the conical transform, where:-

θ = |r|    and    k = r/θ

Note that there are other methods - see Appendix II. Adding translation t we have:-

v' = S(r, v) + t
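A minimal sketch of v' = S(r, v) + t, assuming r, v and t are held as lists of three numbers (the
function name is illustrative only):

(defun rotate-then-translate (r v tr)
  "Rotate point V by the axis/angle vector R (= k*theta), then translate by TR."
  (let* ((theta (sqrt (reduce #'+ (mapcar #'* r r))))
         (k (if (zerop theta)
                '(0.0 0.0 1.0)                        ; axis is irrelevant when theta = 0
                (mapcar (lambda (x) (/ x theta)) r)))
         (kdotv (reduce #'+ (mapcar #'* k v)))
         (kxv (list (- (* (second k) (third v)) (* (third k) (second v)))
                    (- (* (third k) (first v)) (* (first k) (third v)))
                    (- (* (first k) (second v)) (* (second k) (first v)))))
         (rotated (mapcar (lambda (vi ci ki)
                            (+ (* vi (cos theta))                   ; v cos(theta)
                               (* ci (sin theta))                   ; sin(theta) (k x v)
                               (* ki (- 1.0 (cos theta)) kdotv)))   ; (1 - cos(theta)) (k.v) k
                          v kxv k)))
    (mapcar #'+ rotated tr)))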



To convert r into R we refer again to Altmann (pp. ??) and use the equations:-

R = I + Z sinθ + (1 - cosθ) Z²

Where Z is the anti-symmetric matrix:-

Z = [  0    -k_z    k_y ]
    [  k_z    0    -k_x ]
    [ -k_y   k_x     0  ]

In computing Jacobians necessary for the Kalman formulation (see later), we need to differentiate
an applied rotation Rn with respect to r. We do this by defining a function K(R,n) which can
be obtained by a combination of the chain rule and Rodrigues' equation.

K(R, n) = ∂(Rn)/∂r

        = (g'(θ)/θ) [ r_x Hn    r_y Hn    r_z Hn ]

        + (h'(θ)/θ) [ r_x H²n   r_y H²n   r_z H²n ]

        + g(θ) [ e_x × n   e_y × n   e_z × n ]

        + h(θ) [ e_x × (r × n) + r × (e_x × n)   e_y × (r × n) + r × (e_y × n)   e_z × (r × n) + r × (e_z × n) ]

where e_x, e_y and e_z are the coordinate axis unit vectors and each bracketed term denotes a
column. From this we can see that K(R,n) is a 3x3 matrix. H is the anti-symmetric matrix, defined:-

H = [  0    -r_z    r_y ]
    [  r_z    0    -r_x ]
    [ -r_y   r_x     0  ]

and g, h are the functions:-

g(θ) = sinθ/θ        h(θ) = (1 - cosθ)/θ²

Elementary differentiation gives:-

g'(θ) = (cosθ - g(θ))/θ        h'(θ) = (g(θ) - 2h(θ))/θ

Note that when performing Kalman filtering we must decide on a canonical form for r, otherwise
two rotationally equivalent values of r will be interpreted differently. We chose to canonicalise
such that 0 <= θ <= π, which sometimes involves changing the sign of k. Note: this is very
important; the filter does not perform correctly if r is not canonicalised.
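A sketch of this canonicalisation, assuming r is held as a list of three numbers; when θ > π the
equivalent representation -k(2π - θ) is substituted:

(defun canonicalise-rotation (r)
  "Return a rotation vector equivalent to R with its angle in [0, pi]."
  (let ((theta (sqrt (reduce #'+ (mapcar #'* r r)))))
    (if (> theta pi)
        ;; r' = -k (2*pi - theta) = r * (1 - 2*pi/theta)
        (mapcar (lambda (x) (* x (- 1.0 (/ (* 2.0 pi) theta)))) r)
        r)))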

Kalman Formulation for Object Localization

The justification for using a Kalman filter was stated in Section 4; here we show how such a filter
can be implemented for solving object localization, that is, finding a transformation which maps
a model coordinate frame to a data object/cluster. We must express the transformation in the
form f(x, a) = 0, where x is an observation vector and a the state vector. To perform localization
our observations will be equations of the data and model object surfaces, and our state - the
result of the filter - will be the best rotation and translation satisfying these pairs of equations.
Note that three good pairs of equations are required to compute a translation and two for rotation,
good being defined as not parallel or anti-parallel. Thus we set x = [n^t, n'^t, d, d']^t,
a = [r^t, t^t]^t and use the standard transformation equations stated previously for f:

n' - Rn = 0
d' - d - (n'.t) = 0

We now compute the two Jacobians:

(∂f/∂a)(x, a)    and    (∂f/∂x)(x, a)

Firstly then, differentiating with respect to a, we have the 4x6 matrix:-

(∂f/∂a)(x, a) = [ -K(R,n)     0    ]
                [    0      -n'^t  ]

Then differentiating with respect to x we have the 4x8 matrix:-

(∂f/∂x)(x, a) = [ -R      I      0     0 ]
                [  0    -t^t    -1     1 ]
Covariance matrices for each observation pair and the initial state must be specified. Each
observation covariance matrix Λ_i has rank 8; since the model plane {n,d} and the data plane
{n',d'} are measured independently it has the block form:-

Λ_i = [ C(n,n)        0          C(n,d)       0       ]
      [    0       C(n',n')        0        C(n',d')  ]
      [ C(n,d)^t       0          σ²_d         0       ]
      [    0      C(n',d')^t       0         σ²_d'     ]

The initial state covariance matrix S_0 has rank 6 and form:-

S_0 = [ C(r,r)      C(r,t) ]
      [ C(r,t)^t    C(t,t) ]

The values for the covariances on data equations will come from RD8. The values for the
covariances on model equations can be set to 0.0, i.e. we assume that they are exactly correct. In
practice it is usual to set a very small figure, say 0.001, instead of 0.0; this is because of the limits
of the implementation. In the future, covariances on models could be utilised in a useful manner;
see ??. Note also that in practice RD8 does not supply covariances, which are presently guessed
by the recognition software - in load-rd8. Large values are needed for the initial state covariance
values, the actual value being fairly arbitrary; 100.0 usually suffices. It has been found in practice
that if a good estimate of the initial state a_0 is not available (probably the case), then a value of
a_0 = [0_r^t, 0_t^t]^t is better than a random guess. The reason for this is not known.
Again a value of 0.01 or 0.001 should be used in place of 0.0 to keep the implementation happy.



We can now state the equations for evaluating r and t given a sequence of observations {x_i, Λ_i}
and initial state {a_0, S_0}. The calculation is recursive, each new state {a_i, S_i} being a
combination of the last state {a_{i-1}, S_{i-1}} and the current observation x_i. From ?? we get:-

a_i = a_{i-1} - G_i f(x_i, a_{i-1})

S_i = [I - G_i (∂f/∂a)(x_i, a_{i-1})] S_{i-1}

Where G_i is the Kalman gain, defined as:-

G_i = S_{i-1} (∂f/∂a)(x_i, a_{i-1})^t [W_{x_i} + W_{a_i}]^{-1}

W_{x_i} and W_{a_i} are the weights of the current observation and the last estimate respectively:-

W_{x_i} = (∂f/∂x)(x_i, a_{i-1}) Λ_i (∂f/∂x)(x_i, a_{i-1})^t

W_{a_i} = (∂f/∂a)(x_i, a_{i-1}) S_{i-1} (∂f/∂a)(x_i, a_{i-1})^t

There are many references to verify these equations, for example see ??.
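As an illustration of the recursion, the following is a minimal scalar sketch, assuming a
one-dimensional state and observation and the measurement equation f(x,a) = x - a (so ∂f/∂x = 1
and ∂f/∂a = -1); the GR5 code itself works on the full vector and matrix forms:

(defun kalman-scalar-update (a s x lambda)
  "Combine the last estimate A (variance S) with the observation X (variance LAMBDA).
Returns the new estimate and its variance."
  (let* ((f (- x a))                        ; f(x_i, a_{i-1})
         (dfdx 1.0)
         (dfda -1.0)
         (wx (* dfdx lambda dfdx))          ; W_x
         (wa (* dfda s dfda))               ; W_a
         (g (/ (* s dfda) (+ wx wa)))       ; Kalman gain G
         (new-a (- a (* g f)))              ; a_i = a_{i-1} - G f
         (new-s (* (- 1.0 (* g dfda)) s)))  ; S_i = (I - G df/da) S_{i-1}
    (values new-a new-s)))

For example, (kalman-scalar-update 0.0 100.0 2.0 1.0) fuses a vague prior estimate 0.0 with an
observation 2.0 and returns approximately 1.98 with variance approximately 0.99.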

Applying Transformations

If the plane has no associated covariance matrix then we can simply use the equations ? and ?.
If however the plane has an associated covariance matrix then we must update this as well as
the normal and distance parameters. We can do this by using a Kalman filter similar to the one
described above, changing the definitions of x and a. Note that we only compute a_1 and S_1 since
we believe the observation (we would have computed the transformational part of the observation
using the filter described above). Our observation will be the existing plane equation/covariance
matrix and the transformation {r,t}. The initial state is arbitrary and the result is the new plane
equation/covariance matrix. The same definition of f(x,a) = 0 can be used together with the new
values for x and a: x = [n^t, d, r^t, t^t]^t, a = [n'^t, d']^t. We must therefore calculate the
Jacobians corresponding to this new formulation, which can then be used in the standard way.
We have:-

(∂f/∂x)(x, a) = [ -R     0    -K(R,n)     0   ]        (∂f/∂a)(x, a) = [   I     0 ]
                [  0    -1       0      -n'^t ]                         [ -t^t    1 ]

The observation covariance matrix will be 10x10 and the state covariance matrix 4x4. If we
adopt the notation C(i,j) to represent the covariance matrix of parameters i and j then we have:-

Λ = [ C(n,d)      0     ]        S_0 = [ C(n',d') ]
    [    0      C(r,t)  ]

Applying the filter gives {n',d',C(n',d')} from {n,d,C(n,d)}.

An Alternative Localization Formulation

Notice that because of the order of the parameters in x = [n^t, n'^t, d, d']^t our observation
covariance matrices become somewhat complicated and unnatural: unnatural because the covariance
matrix of a plane {n,d}, naturally a 4x4 matrix, is not held as a single block. A more natural
formulation, then, is to set x = [n'^t, d', n^t, d]^t, a remaining unchanged. We then have:-

(∂f/∂x)(x, a) = [   I     0    -R     0 ]        Λ_i = [ C(n',d')      0    ]
                [ -t^t    1     0    -1 ]              [     0       C(n,d) ]

This was not done in the first place because we used the nomenclature of Faugeras somewhat
blindly!
Iterated Kalman Filter

The iterated Kalman filter is a refinement of the above recursive formulation and can be used
when the initial estimate of the state is particularly bad (or 0), i.e. we have no idea what it should
be at all. The result a_i can be refined within each recursive step by recomputing it a number of
times, until successive estimates are within a threshold value ε_a of each other or until the
operation has been performed a set number of times n_0. Note that the covariance matrix S_{i-1}
and Λ_i are used unchanged throughout the calculation. Thus we perform:-

a_i = a_{i-1} - G_i f(x_i, a_{i-1})    until |a_i - a_{i-1}| <= ε_a or i = n_0

Note that this includes the recalculation of:-

G_i, W_{a_i}, W_{x_i}

Outlier Detection

During Kalman filtering we can detect those observations which do not lie within a specified
distance of f(x,a) = 0. In practice this means that we can eliminate those observations which are
statistically detected to be of no use. We make use of the nearest neighbour standard filter
(NNSF) and compute the generalised Mahalanobis distance d. It is known that d has a χ²
distribution and we can test it against standard tables. See Appendix III.

d = f(x_i, a_{i-1})^t Q_i^{-1} f(x_i, a_{i-1})

Where:-

Q_i = W_{x_i} + W_{a_i}

d has q degrees of freedom, where q = rank(Q). Object localization gives q = 4 (for either
formulation given). d can now be tested against a hypothesis value ε taken from the χ² tables for
4 degrees of freedom, rejecting observations where d >= ε. Note that Q_i^{-1} is calculated during
localization anyway, so outlier detection is a small overhead.

Matching Features

We make use of the Mahalanobis distance to match two features described by vectors or scalars
and having associated covariance matrices, for example matching planar surface equations
during object verification. In this instance we make use of the equations above and simply
formulate f(x,a) = 0 as x - a. It does not matter which vector is defined as x and similarly for a.
Additionally, there is no concept of current observation and last state, so the i suffixes can be
dropped. To avoid confusion we shall define our equations as matching vector v and vector v'.
We have then, adapted from Faugeras '86:-

f(x, a) = f(v', v) = (v' - v) = 0

d = f(v', v)^t Q^{-1} f(v', v)

Q = (∂f/∂v')(v', v) C(v') (∂f/∂v')(v', v)^t + (∂f/∂v)(v', v) C(v) (∂f/∂v)(v', v)^t

Given f(v', v) defined as v' - v = 0 we can easily write our Jacobians:-

(∂f/∂v')(v', v) = I    and    (∂f/∂v)(v', v) = -I

Substitution gives us:-

d = [v' - v]^t Q^{-1} [v' - v]

Q = C(v') + C(v)

Of course v' and v must have the same length and C(v') and C(v) must have the same rank. If
the length of v' and v is 1, i.e. they are both scalar quantities, then we can simplify to:-

d = (v' - v)² / (σ²_v' + σ²_v)



In both instances d is tested as described above.
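A sketch of the scalar test, with an illustrative function name; 3.84 is the 95% point of the χ²
distribution with one degree of freedom:

(defun scalar-match-p (v v-prime var-v var-v-prime &optional (epsilon 3.84))
  "True if scalar measurements V and V-PRIME, with variances VAR-V and
VAR-V-PRIME, are statistically compatible (d < EPSILON)."
  (< (/ (expt (- v-prime v) 2)
        (+ var-v var-v-prime))
     epsilon))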

Merging Measurements

We often wish to merge two measurements into a single quantity, for example merging two state
estimates obtained from independent observations. We again turn to the Kalman filter for the
solution. Obviously we formulate f(x,a) = 0 as x1 - x2 = 0, where x = x1 and a = x2 are the two
measurements we wish to merge. Again the Jacobians are I and -I for x and a respectively. If the
covariance matrices of x1 and x2 are C1 and C2 respectively then the merged result {x3, C3} can
be found by substituting into the standard Kalman equations:-

G  = C2 (-I) (C2 + C1)^{-1}
   = -C2 (C2 + C1)^{-1}

x3 = x2 - G (x1 - x2)

C3 = (I - G(-I)) C2
   = (I + G) C2
   = C2 + G C2

These equations, written in slightly different form, can be verified in Smith and Cheeseman ??.
Note that the apparent asymmetry in the equations is of no consequence; the same result is
formed for both formulations of x and a. We have once again dropped the suffix i since there is
no temporal dependency.
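A scalar sketch of the merge (the function name is illustrative), where the covariance matrices
C1 and C2 reduce to variances:

(defun merge-scalar (x1 c1 x2 c2)
  "Merge measurement X1 (variance C1) with X2 (variance C2)."
  (let* ((g (- (/ c2 (+ c2 c1))))          ; G = -C2 (C2 + C1)^-1
         (x3 (- x2 (* g (- x1 x2))))       ; x3 = x2 - G (x1 - x2)
         (c3 (+ c2 (* g c2))))             ; C3 = C2 + G C2
    (values x3 c3)))

For example, (merge-scalar 2.0 1.0 0.0 1.0) returns 1.0 with variance 0.5.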

6. Source Location

The source code for GR5: Object Recognition is resident in the directory ~crj. See Mark Orr for
the password. I am sure the code will move soon, so check with Dave Wheble as to whether or
not ~crj still exists. Anyway, let us assume that there is an object recognition root directory
which we will call '.'. The following subdirectories exist.

./GR5 Root for object recognition source


./RD8 RD8 executables. See MJO.
./backup Object recognition backup directory, saves on logout.
./bin Some useful executables.
./docs Documentation.
./lib Generic library source.
./tmp Scratch pad.

./bin

p Prints specified file to laser printer.


prgr5 Prints all relevant GR5 source code.
bugr5 Backup GR5 source to ./backup.
tidygr5 Delete old and redundant GR5 files; keep typing 'y'.
execgr5 Start object recognition.



Note that *gr5 scripts can be executed from any directory.

./GR5

GR5/images Images formed by RD8, including surface images.


GR5/models Collection of model object definitions.
GR5/test Rubbish.
GR5/src/clisp Object recognition source code.

GR5/models

base.lsp Base level definitions.


model-base.lsp Useful object definitions.
match-test.lsp Used for testing recognition.

GR5/src/clisp

or-utils.lsp General object recognition related utilities.
init.lsp Loaded by LISP on invocation.
swtool.lsp Tool for building surface worlds.
rd8.lsp Function for loading RD8 images into surface worlds.
modeltool.lsp Tool for building object models in surface worlds.
swmmi.lsp Menu based MMI into surface worlds.
swdisplay.lsp Functions for graphical display of surface worlds.
cluster.lsp Function for creating clusters in surface worlds.
covariance.lsp Functions for creating covariance matricies.
rotation.lsp Useful rotation utility functions.
kalman-utils.lsp Useful utility functions required by Kalman filtering.
gkalman.lsp Generalised Kalman filter.
kalman.lsp Kalman filter for localization.
cmatch.lsp Combinatorial matching algorithms.
clocalize.lsp Combinatorial localization algorithms.
cverify.lsp Combinatorial verification algorithms.
kalman-test.lsp Test program and MMI.
mmi.lsp Master MMI, executed at end of init.lsp file.
sysmmi.lsp Menu based MMI into implementation parameters.
setup.lsp Loaded at end of init.lsp
setup.p Loaded at end of init.lsp
./lib

lib/clisp Common LISP libraries.


lib/c C libraries (empty).
lib/pop11 Pop11 libraries (empty).

lib/clisp

utils.lsp Generally useful functions.


number.lsp Scalar function library.
vector.lsp Vector function library.
matrix.lsp Matrix function library.



pwmlib.lsp WIMP interface to pwm windows from LISP. Not complete.

Each of these libraries is used by object recognition.

./docs

gr5 This document.


gr5-seminar Overhead projector overview of GR5 (by MJO).
gr5-overview Single page picture of object recognition.
gr5-sensors Single page picture of input sensors to object recognition.

./RD8

label/lab-io.c Code to generate LISP file from RD8 data structures.

7. Invocation

Login as crj.
Execute window environment: sunview
then either (i)
Move to the LISP source: cd GR5/src/clisp
Execute the poplog environment: pwmtool clisp &
or (ii)
Execute the script: execgr5 &
See Mark Orr for the password.

8. Creating Surface Worlds

Software for creating and manipulating surface worlds resides in the file swmmi.lsp and is
commonly referred to as swtool: the surface world tool. Two examples of its use are in rd8.lsp and
modeltool.lsp.

A surface world is a collection of nodes. Each node has a unique name and nodes refer to each
other by name. Each surface world has a name, and a list of currently existing surface worlds
can be found in the variable $$surface-worlds. There is a notion of the current surface world,
which is referenced by the variable $$current-world. Although surface worlds are used to contain
planar surface structures, swtool is written so that the number of types of node and their
characteristics can be easily extended. The following node types are currently used:-

Name Description

PLANAR-SURFACE Node containing description of a planar surface.
PLANAR-BOUNDARY Node containing description of a planar boundary.
PLANAR-JOINT Node containing description of the connection
between two planar surfaces.
CLUSTER Node listing a collection of planar surfaces
representing a cluster.

The name of each node is a prefix character followed by numerals rendering it unique. Prefix
characters are S, B, J and C respectively.



Each node is described by a list of attributes. Each attribute has a name and a value. If the
attribute name is a node type, then its value will be a list of node names of the said type. For
example a cluster node will have an attribute PLANAR-SURFACE, the value of which is a list of
the surfaces making up the cluster. Similarly a planar-joint will have an attribute PLANAR-SURFACE
listing the two constituent surfaces of that joint. All nodes have the attribute type, defining the
type of the node. The implementation does not limit the number of attributes belonging to a
particular type of node.

Looking at a typical cluster then, it has a name and a type, and references a list of planar surfaces.
Each of these surfaces references the cluster it belongs to, the planar joint of which it is part (if any)
and the boundary nodes which define its shape. Additionally a surface has the following
attributes: equation of plane, centre of gravity, area and sometimes a symbolic name such as
RECTANGLE. Each planar joint references the two surfaces of which it is comprised and
additionally has two attributes: joint angle and a joint character which is CONCAVE or CONVEX.
See Fisher for a description of these characteristics. Each boundary node references the surface it
defines and has an attribute whose value is a list of 3D coordinates defining its shape.

swtool provides functions to create and manipulate nodes in a flexible manner. It also provides
algorithms for finding node neighbours, both locally and transitively. One use of this is to perform
clustering. Consider a network defined without clusters; simply a collection of inter-related
surfaces, joints and boundaries. We can perform a simple clustering algorithm by collecting all
groups of surfaces which are transitively connected by convex planar joints into clusters. A
clustering algorithm is provided in cluster.lsp, which uses a generic neighbourhood finding
algorithm parameterised to look for connected convex joints. Other clustering techniques can be
evaluated by using this function.
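The following is an illustrative sketch of that grouping idea, not the cluster.lsp implementation:
surfaces are assumed to be symbols and convex-neighbours an alist mapping each surface to the
surfaces it shares a convex planar joint with (a hypothetical data structure, simpler than swtool's
node network):

(defun cluster-surfaces (surfaces convex-neighbours)
  "Group SURFACES into clusters of transitively convex-connected surfaces."
  (let ((unseen (copy-list surfaces))
        (clusters '()))
    (loop while unseen do
      (let ((cluster '())
            (stack (list (pop unseen))))
        (loop while stack do
          (let ((s (pop stack)))
            (push s cluster)
            (dolist (nb (cdr (assoc s convex-neighbours)))
              (when (member nb unseen)
                (setf unseen (remove nb unseen))
                (push nb stack)))))
        (push cluster clusters)))
    clusters))

For example, (cluster-surfaces '(s1 s2 s3 s4) '((s1 s2) (s2 s1) (s3 s4) (s4 s3))) returns two
clusters, (s4 s3) and (s2 s1).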

A large number of functions are provided and unfortunately it is beyond the scope of this report
to describe each in detail. However each is described briefly in swtool.lsp. In addition, please
refer to the two previously mentioned exemplars.

9. Creating Object Models

base.lsp provides an exemplar for model building. Models are clusters and reside in surface
worlds. modeltool.lsp provides a collection of algorithms for defining and transforming surfaces
and clusters. Facilities exist to progressively abstract a planar surface - the base component of
a model - into objects. The first stage is to create a surface world for the objects to reside in, then
compose descriptions of the required objects, which can then be instantiated into the surface
world. Descriptions can be parameterised, enabling for example a cuboid (x,y,z) to be defined.
Basic surfaces are created with the function create-planar-surface and clusters with the function
create-cluster. The function obj-transform can be used to translate and rotate both individual
surfaces and clusters. Each function operates by supplying a list of keywords and associated
values. Some keywords are optional, resulting in default values being used.

create-planar-surface

Keywords:-

name description default value

world Name of world within which
the node will reside $$current-world
properties List of extra attributes nil
boundaries List of boundaries specified
as 3D coordinate lists nil
equation Planar equation Must be supplied
area Surface area Must be supplied
moment0 Centre of gravity Must be supplied
name Symbolic name PLANAR-SURFACE

Please see base.lsp for examples.

create-cluster

Keywords:-

name description default value

world As above.
components List of surface node names
forming the cluster Must be supplied
called Reference name for each
node Must be supplied
connectivity List of surface pairs which
are physically connected. Use
reference names. Must be supplied

The following example defines a model called cube, which takes 2 arguments: side, defining the
length of each side of the cube, having default value 1.0, and name, defining the symbolic name of
the cube, the default name being cube.

(defun cube (&key (side 1.0) (name 'cube))
  (let* ((d (/ side 2.0))
         (-d (- d)))
    (create-cluster
     :components
     `(,(-zsquare :side side :distance -d)
       ,(zsquare :side side :distance d)
       ,(-ysquare :side side :distance -d)
       ,(ysquare :side side :distance d)
       ,(-xsquare :side side :distance -d)
       ,(xsquare :side side :distance d))
     :called
     '(front back bottom top left right)
     :connectivity
     '((front top) (front right) (front left)
       (front bottom) (back top) (back right)
       (back left) (back bottom) (top right) (top left)
       (bottom right) (bottom left))
     :name name)))

Notice that the definition is a LISP function and as such can have local variables etc.



obj-transform

Keywords:-

name description default value

object Name of surface or cluster Must be supplied.


rotation Rotation specification No rotation
translation Translation specification No translation

Rotation can be supplied in one of 3 forms: see modeltool.lsp for details. Translation is supplied
as a vector. Again, use base.lsp as an exemplar.

10. Using the Menu MMI


Surface World MMI
This facility allows the user to traverse and inspect a surface world. Additionally a world, cluster
or individual surface can be projected onto a 2D window. Use of the MMI is mainly self evident,
the less intuitive features being described below.

To perform object localization two clusters must be marked: one as the data cluster and the other
as the model cluster. This can be done by selecting the mark as model and mark as data menu
items. Note that models can be matched to models for evaluation purposes.

The menu item links is used to enter a menu of node types referenced by the current node.

A control panel enabling projection parameters to be altered cannot be used until the menu MMI
is terminated. To reinvoke use (mmi).

Kalman Filter Evaluation MMI

Used to evaluate the Kalman filter implementation, the MMI operates as follows. Parameters are
selected for input to the filter. The filter is invoked (select go). Results are displayed indicating
its performance. The test algorithm operates as follows.

A number of infinite planes are created at random (minimum 3). A transformation is created at
random. The transformation is applied to each infinite plane. The resulting and original infinite
planes are corrupted with noise (possibly none). The transformation is corrupted with noise and
in a special case set to k = (0,0,0), θ = 0. The two sets of planes and the corrupted transformation
are then used to compute a new transformation - the result of the filter. Results and calculations
comparing this calculated transformation and the original are displayed.

System MMI
Not fully implemented; at present simply printing the value of global variables and performing
garbage collection.

Performing Localization

Mark two clusters as described above and select the localization option. A surface world called
verification-world is created, the contents of which is a number of clusters, each representing the
model cluster moved by one transformation hypothesis. Attributes rotation, translation and state
describe the applied transformation. Attributes model-cluster and data-cluster reference its
source clusters. An attribute verified will be set to (no value); see below.

Performing Verification

Having localized a cluster as above, each of the resulting clusters can be verified against the
original data cluster in turn by selecting this menu option. Inspect the attribute verify of each
cluster to establish the result of the verification.



Appendix I: Nomenclature

{n,d} Equation of plane with normal n and distance from origin d.

n Unit vector as described above.
d Scalar as described above.
d Mahalanobis distance.
R 3x3 matrix representing a 3D rotation about the coordinate axes.
t Vector representing a 3D translation.
r Axis/angle rotation representation where r = kθ.
k Unit vector representing the axis of rotation.
θ Angle of rotation.
S(r, v) Function rotating vector v using r.
Z 3x3 anti-symmetric matrix formed from the components of k.
H 3x3 anti-symmetric matrix formed from the components of r.
K(R,n) 3x3 matrix representing the differentiation of Rn with
respect to r.
x Observation vector.
a State vector.
Λ Covariance matrix for an observation x.
S Covariance matrix for a state a.
G Kalman gain matrix.
W Observation or state weight matrix.
Q Sum of observation weight and state weight matrices.
C(p,q) Covariance matrix for entities p and q.



Appendix II: Conversions from r to R
There are a number of ways to convert an axis/angle rotation represented by the vector r = kθ
into a standard 3x3 matrix rotation R.

i. Using Rodrigues' equation and the anti-symmetric matrix Z formed from the elements of k:

   R = I + Z sinθ + (1 - cosθ) Z²         Z = [  0    -k_z    k_y ]
                                              [  k_z    0    -k_x ]
                                              [ -k_y   k_x     0  ]

Sometimes written:-

   R = I + Z sinθ + 2 sin²(θ/2) Z²

ii. Using Rodrigues' equation and the anti-symmetric matrix H formed from the elements of r:

   R = I + (sinθ/θ) H + ((1 - cosθ)/θ²) H²        H = [  0    -r_z    r_y ]
                                                      [  r_z    0    -r_x ]
                                                      [ -r_y   r_x     0  ]

iii. By forming the matrix:-

   R = [ 1 - 2(k_y² + k_z²) sin²(θ/2)     -k_z sinθ + 2 k_x k_y sin²(θ/2)    k_y sinθ + 2 k_x k_z sin²(θ/2) ]
       [ k_z sinθ + 2 k_x k_y sin²(θ/2)    1 - 2(k_x² + k_z²) sin²(θ/2)     -k_x sinθ + 2 k_y k_z sin²(θ/2) ]
       [ -k_y sinθ + 2 k_x k_z sin²(θ/2)   k_x sinθ + 2 k_y k_z sin²(θ/2)    1 - 2(k_x² + k_y²) sin²(θ/2)   ]
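A sketch of conversion (i), assuming the axis k is a unit vector held as a list (kx ky kz); the
function name is illustrative and R is returned as a list of rows:

(defun axis-angle->matrix (k theta)
  "Build the 3x3 rotation matrix R = I + Z sin(theta) + (1 - cos(theta)) Z^2."
  (destructuring-bind (kx ky kz) k
    (let ((s (sin theta))
          (c (- 1.0 (cos theta))))     ; 1 - cos(theta) = 2 sin^2(theta/2)
      (list (list (- 1.0 (* c (+ (* ky ky) (* kz kz))))
                  (+ (* (- kz) s) (* c kx ky))
                  (+ (* ky s) (* c kx kz)))
            (list (+ (* kz s) (* c kx ky))
                  (- 1.0 (* c (+ (* kx kx) (* kz kz))))
                  (+ (* (- kx) s) (* c ky kz)))
            (list (+ (* (- ky) s) (* c kx kz))
                  (+ (* kx s) (* c ky kz))
                  (- 1.0 (* c (+ (* kx kx) (* ky ky)))))))))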

