
Design and Control of an Omnidirectional Mobile Robot

Cao Xu Wen
SRP student, Raffles Junior College

Abstract
In this paper, the design and control of an omnidirectional mobile robot with three “omniwheels” are discussed. First, a kinematic analysis of the robot is performed. Second, a multi-layered architecture is introduced which enables the robot control system to operate at various rates and at different levels of information abstraction. Third, the hybrid control method used in the autonomous navigation of the robot, which combines PID control and fuzzy logic, is presented. Finally, techniques employed in the robot's sensing, mapping, obstacle-avoidance and path-planning processes are presented together with relevant experimental results.

Index terms
omnidirectional robot, mechanical design, kinematic analysis, control methods, navigation

1. Introduction
The well-known omnidirectional transport systems can be separated into two basic categories [1]: orthogonal wheels (pairs of near-spherical wheels) and universal wheels (wheels with rollers). Universal wheels are used in this project [Figures 1(a) and 1(b)] as they are mass-manufactured and easier to obtain than orthogonal wheels. The small rolling cylinders along the periphery of the wheel allow it to roll in the direction perpendicular to the direction in which the wheel normally turns. This permits the robot to perform translation and rotation simultaneously. A mobile robot built on three omniwheels is an omnidirectional mobile robot with a degree of mobility of three and a degree of steerability of zero [2].
Due to their inherent agility benefits [3], omnidirectional robots have become a research interest in recent years. Many research groups are studying omnidirectional mobile robots [4-6], and they are the inspiration for the robot designed in this project. This paper discusses various issues concerning the software and hardware design, control and navigation of the omnidirectional robot.

2. Mechanical Design
The robot was conceived around the following very loose design criteria:
-- Survive in an unknown, static indoor environment.
-- Provide a robust and reliable platform for developing navigation and behavior software.
-- Be entertaining and aesthetic for the student community.

Figure 1(a): The omnidirectional robot base.
Figure 1(b): The omnidirectional robot with sensors.

2.1 Chassis
The chassis is constructed using the Lego® Mindstorms set. The Lego blocks
are both light and rigid, which perfectly suits a small robot. An equilateral
triangular base [see Figure 2] provides a foundation for an equal distribution
of weight to all three wheels. This is important because the control system
for locomotion depends on the wheels spinning at the same rotations per
minute (rpm). An equal distribution of weight to the wheel axles ensures
that each wheel overcomes the same minimal friction as the others. Another
advantage of a triangular base is the structural integrity gained from the
geometry of a triangle. Apart from the fact that the symmetry created greatly
simplifies calculations, the triangle's vertices are fixed, maintaining three
60-degree angles, and therefore create a stable structure.
Figure 2: A sketch of the robot chassis. The three rectangular wheel assemblies form a stable triangular base.

Base height (without mirrors)    9 cm
Robot height                    27 cm
Base radius                     21 cm
Wheel radius                    2.50 cm
Table 1: Physical specifications

2.2 Locomotion
The robot is driven by three Lego® Electric Technic mini-motors (maximum torque 7 Ncm, maximum speed
340 RPM [8]) and one Lego® Electric Technic motor (used to spin the mirrors on top). These DC motors are powered
by a lithium-polymer battery (11.7 V).

2.3 Sensors

Figure 3 (from left to right): wheel encoder, Sharp GP2D02 Infrared sensor, Sharp GP2D12 Infrared sensor,
bumper sensor (limit switch)

For this robot, optical encoders are employed to monitor wheel revolutions and compute the offset from a known
starting position. External sensors include bumper sensors (64 mm limit switches) and infrared sensors. To avoid possible
interference, two types of infrared sensor are used – the Sharp GP2D02 and the Sharp GP2D12 – both of which can detect
objects within a distance of 10 to 80 cm. Using a set of IR sensors and mirrors, an omnidirectional sensing system is
formed to construct a local map. This novel approach to mapping is examined in detail later [Section 6].

2.4 Microcontroller
The robot controller is the MIT-designed Handy Board. Based on the Motorola 68HC11 microprocessor, the Handy
Board includes 32K of battery-backed static RAM, outputs for four DC motors and inputs for a variety of sensors [7].
This small yet powerful microcontroller integrates all the sensors and motors of the omnidirectional robot. A detailed
pin-out can be found in Appendix A.

3. Kinematic Analysis
Figure 4 shows the kinematic diagram of the omnidirectional robot (top view). Assume the inertially fixed frame is
{O} and {M} denotes the moving Cartesian reference frame. The centre of mass of the robot is located at its
geometric centre, which is the origin of {M}. All three omniwheels are symmetrically placed, each aligned at a constant
angle βi (i denotes the wheel number) from the xm axis. The angle ψ gives the orientation of the robot with respect to
the x0 axis. Vi gives the linear velocity of the ith wheel and L is the distance from the centre of gravity of the
robot to the centre of each wheel.

Figure 4: Kinematic diagram (top view), showing the frames {O} and {M}, the wheel velocities V1, V2, V3, the distance L, the orientation ψ and the wheel angle β.

3.1 The Inverse Solution

As the robot is highly symmetrical, the inverse Jacobian matrix is easily calculated. Given the linear velocities ẋ, ẏ
and the angular velocity ψ̇ in the {O} reference frame, the linear velocity of each wheel can be obtained:

(1)

For my robot, β1 = π, β2 = 5π/3, β3 = π/3, and the velocity Vi = θ̇i·R, where θ̇i denotes the angular speed of the ith wheel
(in radians per second) and R is the radius of the wheel. Therefore the equation becomes:

(2)

3.2 The Forward Solution


Using Mathematica® to calculate the inverse of the Jacobian matrix in equation (2), the forward solution is obtained:

(3)
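Equations (1)-(3) appear as images in the original document. The relation sketched below is the standard kinematic model for a three-omniwheel robot, written in terms of the symbols defined above (βi, L, R, ψ); it is assumed here for illustration and should be checked against the original equations.

```python
import math

# Robot geometry from Section 3 and Table 1 (converted to metres).
BETA = (math.pi, 5 * math.pi / 3, math.pi / 3)  # wheel angles beta_1..3
R = 0.025   # wheel radius, 2.50 cm
L = 0.21    # base radius (centre-to-wheel distance), 21 cm

def inverse_kinematics(vx, vy, wpsi, psi=0.0):
    """Map robot velocities (vx, vy, wpsi) in {O} to wheel angular speeds
    (rad/s), using the standard omniwheel relation (assumed form of eq. (1)):
        V_i = -sin(psi + beta_i)*vx + cos(psi + beta_i)*vy + L*wpsi
    followed by theta_dot_i = V_i / R (equation (2))."""
    speeds = []
    for beta in BETA:
        v = -math.sin(psi + beta) * vx + math.cos(psi + beta) * vy + L * wpsi
        speeds.append(v / R)  # wheel rim speed -> wheel angular speed
    return speeds
```

Consistent with the symmetry noted in Section 3.1, a pure rotation (ẋ = ẏ = 0) commands all three wheels to the same speed L·ψ̇/R, and the three wheel speeds of a pure translation sum to zero.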

4. System Architecture

4.1 Development Environment
The software environment for the robot is an interactive p-code interpreter called "Interactive C" (IC), which compiles
a close-to-ANSI dialect of C. In this project, a reference library of subroutines for functions like pulse-width
modulation, shaft-encoder routines and IR detection, catering specifically to the omnidirectional robot, was written
(a detailed listing of programs can be found in Appendix B). The library files were mostly written in assembly
language (".asm") and compiled to ".icb" files using a cross-assembler named "AS11" that runs in MS-DOS [9]. The
converted S-Record file can then be loaded onto the Handy Board via a phone cable and executed by the Motorola
68HC11 microprocessor as machine code. All of the robot's software was developed and tested in the IC p-code
environment.

4.2 Robot Architectures


The earliest control architectures for autonomous robot systems consisted of simple Sense-Plan-Act (SPA) loops
[10], which only allow a unidirectional, linear flow of information. Most of the time, they lead to poor
manageability and complex control flow. Reactive architectures [11], on the other hand, emphasized real-time
response over deliberation. Subsumption [12] and its variants [13] were later proposed as a radical departure from SPA.
Subsumption decomposes tasks into layers of task-specific control programs which are sequenced
according to priority. It greatly improved robot performance in terms of processing rate. In my own research,
the architecture designed adopts some ideas from Subsumption while maintaining the ability to address high-level,
goal-oriented tasks.

4.3 The Three-layered Architecture


Figure 5: The system architecture. Perception (sensor interpretation), decision making and execution occur in each of the three layers: the strategic layer (user input, world model, path planning; weak voting power), the tactical layer (sensor fusion, local mapping, obstacle-avoidance) and the operational layer (PID control, reactive control; strong voting power).

The robot is structured as a three-layered control system, where perception, decision making and execution occur at
each layer of the system. Each layer extracts a different level of spatial information and processes it at a
different rate. The operational layer provides raw sensor-data processing, real-time reactive control and PID
feedback control. It has the strongest voting power, meaning it is prioritized over the other two layers. The tactical
layer extracts information from the IR sensors to build a local map and uses it to avoid visible obstacles; its voting
power is stronger than the strategic layer's. The strategic layer accepts user input and is responsible for long-term
planning and building a topological world map. It has the weakest voting power, which means the execution signals it
sends can be overridden by the two layers below. In the IC p-code environment, the rate at which the layers perform
system control can be manually adjusted using a process controller; each layer can be started, terminated or swapped
out at run-time. Under the default settings, the strategic layer operates at 0.5 Hz, the tactical layer at 5 Hz and the
operational layer at 20 Hz. This multi-rate system allows the robot to respond to urgent control demands in real time,
while maintaining the ability to handle complex tasks in a deliberative manner. Combined with the three-layered
structure, it brings robustness and fault tolerance to the architecture.
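The voting scheme described above can be sketched as a simple priority arbiter. The function and layer names below are illustrative, not the robot's actual Interactive C code.

```python
# Illustrative sketch of the Section 4.3 voting scheme: each layer runs at
# its own rate and may propose a motor command; the layer with the
# strongest voting power (operational > tactical > strategic) wins.
LAYER_PRIORITY = {"operational": 3, "tactical": 2, "strategic": 1}
LAYER_RATE_HZ = {"operational": 20.0, "tactical": 5.0, "strategic": 0.5}

def arbitrate(proposals):
    """proposals: dict mapping layer name -> command (None if silent).
    Returns the command of the highest-priority non-silent layer."""
    best, best_priority = None, 0
    for layer, command in proposals.items():
        if command is not None and LAYER_PRIORITY[layer] > best_priority:
            best, best_priority = command, LAYER_PRIORITY[layer]
    return best
```

With this arbitration, a strategic-layer plan drives the motors only when the two lower layers are silent, matching the override behaviour described in the text.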

5. Control methods
The distributed system architecture described in the previous section makes use of low-level control sub-systems,
which include PID control and fuzzy obstacle-avoidance. Due to their simplicity, PID control and fuzzy logic are
widely implemented in robotic applications [14][15]. In this project, PID control and reactive control are performed
mainly in the operational layer [Figure 5].

5.1 PID Control


Figure 6: PID control loops. The desired position (x, y, ψ) feeds a position-control PID; the resulting ideal velocities pass through the inverse matrix to three wheel-velocity PID loops (one per motor); the internal sensors feed the actual wheel speeds back, and the forward matrix recovers the actual (x, y, ψ).
The PID control sub-system designed for my robot consists of two PID loops: a velocity-control inner loop and a
position-control outer loop. The position controller reads the desired position and orientation generated by
the robot (or by user input) and computes the ideal linear and angular velocities. Using the inverse matrix [Section 3.1],
these values are converted to ideal wheel angular velocities. Hardware timer-interrupt generators in the Handy Board then
generate PWM signals that control the L293 H-bridges, which drive the main motors. The internal sensors (optical
encoders) feed the actual velocities of the motors (and, via the forward matrix [Section 3.2], the position of the robot)
back to the PID controllers, thus forming two closed loops – the velocity-control and position-control loops respectively.
This sub-system can operate without exact knowledge of the dynamics of the robot. The typical PID equation
used was:
V = Kp·e + Ki·∫e dt + Kd·(de/dt) (4)
where e denotes the error signal and V the output value.
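Equation (4) can be sketched as a minimal discrete-time controller. This is an illustrative implementation, not the robot's Interactive C routine; the gains are placeholders (the tuned values for this robot appear in Figure 7 and Table 2).

```python
# Minimal discrete-time version of the PID law in equation (4):
#   V = Kp*e + Ki*integral(e dt) + Kd*(de/dt)
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None  # no derivative on the first sample

    def update(self, error, dt):
        """Advance the controller by one sample of length dt seconds."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the robot this law runs twice over: the outer loop acts on the (x, y, ψ) position error, and the inner loop acts on each wheel's angular-velocity error.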
A series of experiments was performed to optimize the velocity profile of the rotational movement. For the inner loop,
the parameters in the PID equation were determined successively in a test [Figure 7] in which one of the motors was
commanded to a pre-defined velocity of 8.0 radians/second. First, only a proportional gain (Kp = 1.5) was implemented.
Adding an integral gain (Ki = 1.0) decreased the rise time and eliminated the steady-state error, but increased the
overshoot and the settling time. To overcome this, a derivative component (Kd = 0.1) was added, and the desired effect
was attained: low overshoot, low steady-state error, fast rise time and short settling time. A similar test was performed
to obtain the parameters for the position-control PID loop, and the results are shown in Figure 8.

Figure 7: Experimental determination of the three parameters Kp, Ki and Kd in the inner-loop PID equation. The desired velocity was set to 8.0 radians/second.

Figure 8: Position profile of the robot: position in the x-axis (upper left), position in the y-axis (upper right) and orientation ψ (bottom left). The robot is driven to stop at the position [x y ψ] = [3 2 π] from the origin [0 0 0].

                         profile_x       profile_y       profile_psi
Kp, Ki, Kd               5.0, 1.7, 1.0   5.0, 1.7, 1.0   1.0, 0.3, 0.2
Steady-State Error (%)   -1.0            1.5             0.3
Overshoot (%)            4.3             2.0             4.1
Rise Time (seconds)      4.5             4.1             2.2
Settling Time (seconds)  6.5             8.0             7.5
Table 2: Experimental results for the position-control PID loop

5.2 Fuzzy Logic and Reactive Control


The reactive control has the highest priority in this robot. It takes the raw data readings from the bumper sensors and IR
sensors and produces an appropriate motor command to avoid obstacles. Each input space is partitioned by fuzzy
sets [Figure 9]. Triangular and trapezoidal functions, which allow fast computation [16] – essential under real-time
conditions – are used to describe each fuzzy set. The final output of the fuzzy unit is given by a weighted average
over all rules [Table 3]. All inputs and outputs are expressed in the moving frame {M}.
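The two ingredients named above – triangular membership functions and a weighted average over the fired rules – can be sketched as follows. The set shapes and crisp output values are illustrative, not the exact ones in Figure 9 and Table 3.

```python
# Sketch of the fuzzy unit in Section 5.2 (illustrative values, not the
# robot's actual rule base).
def triangular(x, left, peak, right):
    """Degree of membership of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def defuzzify(rule_strengths, rule_outputs):
    """Crisp output as a weighted average over all fired rules."""
    total = sum(rule_strengths)
    if total == 0.0:
        return 0.0  # no rule fired
    return sum(s * o for s, o in zip(rule_strengths, rule_outputs)) / total
```

For example, an obstacle angle that is 25% "Negative Small" and 75% "Zero" blends the two corresponding motor commands in a 1:3 ratio, giving the smooth avoidance behaviour the table of rules aims for.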

Figure 9: Fuzzy sets. Left: angle to obstacle (rad), partitioned into NL, NM, NS, Z, PS, PM and PL over [-π, π]. Right: distance to obstacle (cm), partitioned into VC, C, F and VF over [0, 80].

Very Close Close Far Very Far
Negative Large Very High Right Small High Right Small Low Right Small Very Low Right Small
Negative Medium Very High Right Medium High Right Medium Low Right Medium Very Low Right Medium
Negative Small Very High Right Large High Right Large Low Right Large Very Low Right Large
Zero Very High Backward High Backward Low Backward Very Low Backward
Positive Small Very High Left Large High Left Large Low Left Large Very Low Left Large
Positive Medium Very High Left Medium High Left Medium Low Left Medium Very Low Left Medium
Positive Large Very High Left Small High Left Small Low Left Small Very Low Left Small
Table 3: Antecedents and conclusions. Rows and columns represent the angle and distance to the obstacle
respectively. Each cell can be regarded as a pre-defined motor command (in the moving frame {M}).

6. Navigation
An omnidirectional sensing system is set up using a set of mirrors and infrared sensors. The spatial information
scanned by the sensors is then integrated into metric maps (occupancy grids) [17].
6.1 Sensing and Mapping
The Sharp GP2D infrared sensors are accurate in the range of 10~80 cm [15]. It was found that the output from the
sensors is not linear, so a calibration process was performed to normalize the output values, and a power-regression
approximation was obtained:
D = 7810.6·R^(-1.3466) (5)
where D is the distance in centimeters and R the analog reading from the sensors.
In my robot, the IR sensors are stationary with respect to the reference frame {M} [Section 3], while a set of four
mirrors rotated by a motor reflects the IR rays such that the sensors can detect objects in every direction.
Moreover, the redundant sensors reduce random errors in the sensing process. The IR sensors are tilted at a small
angle from the horizontal [Figure 3] to avoid blocking of the IR light by the sensors themselves.
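The calibration of equation (5) can be applied directly when converting raw readings; a small sketch, with the out-of-range rejection added as a reasonable guard given the stated 10-80 cm accuracy band:

```python
# Convert a raw Sharp GP2D analog reading to a distance in centimetres
# using the power-regression fit of equation (5): D = 7810.6 * R^-1.3466.
# Readings mapping outside the sensor's reliable 10-80 cm band are
# rejected (a guard added here, not part of the original equation).
def ir_distance_cm(reading):
    if reading <= 0:
        return None  # invalid analog reading
    d = 7810.6 * reading ** -1.3466
    if d < 10.0 or d > 80.0:
        return None  # outside the sensor's accurate range
    return d
```

Because the fit has a negative exponent, larger analog readings correspond to closer objects.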

Figure 10: Top view of the sensor set-up. Four IR sensors are placed 90 degrees apart. A motor at the centre spins four mirrors (each 9.5 cm long, about 5 cm from the sensors) about the vertical z-axis through the centre of the robot, and the IR rays are reflected by the mirrors. When the motor turns through an angle σ (measured by the optical encoder), the angle γ of the detected object with respect to the centre of the robot can be approximated as 2σ, since the object distance is much greater than 5 cm.

In this project, spatial information is represented on 2D metric maps, which are easy to maintain. Each metric cell
(x, y) has a value attached, P(x,y), representing the probability that the cell is occupied:

           { 0.0  the cell is empty
P(x,y) =   { 0.5  unknown                  (6)
           { 1.0  the cell is occupied

Figure 11: Translating sensor data into a probability profile. The IR sensor measures a range value D (computed using equation (5)); the corresponding cell is given occupancy probability 1, the preceding cells are empty and have probability 0, and the subsequent cells remain unknown with probability 0.5.

More accurate descriptions are obtained at the expense of higher computational costs. For my robot, the sensing and
mapping process is mainly performed in the tactical layer [Section 4.3], which runs at 5 Hz. The IR sensors scan the
surroundings and feed the information to the tactical layer. All the sensory data are integrated into the map over time
using the following equation:

(7)
where P'(x,y) denotes the prior state of the metric cell, n(x,y) the number of times this cell has been updated, and
S(x,y) the new state of the cell (either 0, 0.5 or 1; equation (6)) detected by the sensors. Examples of local maps
are shown in Figure 12.
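Equation (7) appears as an image in the original document. One natural reading of the symbols it is described with (prior state P', update count n, new sensor state S) is an incremental running average; the sketch below assumes that form for illustration and may differ from the author's exact equation.

```python
# Hedged sketch of the map-update step around equation (7). The exact
# equation is not reproduced in this text; the incremental running
# average below is one plausible fit for the symbols described.
def update_cell(p_prior, n, s):
    """Fuse a new sensor state s (0.0, 0.5 or 1.0) into a cell with
    prior occupancy probability p_prior that has already been
    updated n times. Returns the new cell probability."""
    return (n * p_prior + s) / (n + 1)
```

Under this reading, repeated consistent observations pull a cell's probability steadily toward 0 or 1, while a single contradictory reading only nudges a well-observed cell, which matches the noise-averaging behaviour described in the text.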

Figure 12: Probability maps obtained after 10 seconds of the mapping process: a corner (first row) and a wall (second row).
Each cell has dimensions 10 cm × 10 cm; the robot is held stationary at the centre of the map.

6.2 Obstacle-Avoidance and path-planning


Aside from the low-level obstacle-avoidance employed in the operational layer [Section 4.3], the robot also makes
use of the occupancy grids to perform path-planning in the strategic layer, which runs at 0.5 Hz. The algorithm
proposed here is based on dynamic programming [18], a fast and reliable method for determining a robot path in a
known environment.
0.50 0.28 0.50 0.87 0.87 0.61 0.49 0.50 0.53 0.50
0.32 0.45 0.84 0.98 0.92 0.50 0.46 0.58 0.50 0.54
0.63 0.52 0.92 0.94 0.92 0.70 0.67 0.72 0.45 0.51
0.26 0.79 0.77 0.92 0.87 0.78 0.52 0.39 0.74 0.50
0.33 0.56 0.18 0.05 0.37 0.29 0.21 GOAL 0.37 0.50
0.32 0.22 0.39 0.33 0.28 0.33 0.56 0.21 0.13 0.28
0.05 0.10 0.58 0.65 0.50 0.31 0.28 0.27 0.06 0.48
0.28 0.19 0.67 0.84 0.72 0.47 0.17 0.18 0.30 0.39
ROBOT 0.17 0.76 0.93 0.80 0.63 0.57 0.49 0.45 0.47
0.59 0.68 0.70 0.50 0.72 0.73 0.66 0.50 0.61 0.50
Table 4: Example of a 2D metric map obtained by the robot. The dimensions of the cells are selected such that the robot
can fit into a single cell. Cells have values between 0 and 1, where 0 denotes empty and 1 occupied.

The omnidirectional robot is able to move from one cell to every adjacent cell. Each cell is initialized with an
infinite cost (except the one the robot is in). Dynamic programming, using a flood-fill method, computes the minimum
cost from the robot to every cell. The dynamic transfer function is:

Cost(x,y) = Min{ Cost(x,y), P(x,y) + Min{ Cost(x+a, y+b) } },  a, b = 0, +1, -1 (8)

where Cost(x,y) denotes the current minimum cost from the robot to cell (x,y), and P(x,y) the occupancy probability
of the cell. Once the minimum cost from ROBOT to GOAL is calculated, a route can be traced, which is the path
selected by the robot.
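The flood-fill relaxation of equation (8) can be sketched as follows. The grid and start cell in the usage below are illustrative, not the map of Table 4.

```python
# Sketch of the dynamic-programming planner of Section 6.2: repeatedly
# relax equation (8) over the whole grid until no cost changes.
def plan_costs(grid, start):
    """grid: 2D list of occupancy probabilities P(x, y).
    Returns the minimum travel cost from `start` to every cell, where
    entering cell (x, y) costs P(x, y) (equation (8))."""
    rows, cols = len(grid), len(grid[0])
    INF = float("inf")
    cost = [[INF] * cols for _ in range(rows)]
    cost[start[0]][start[1]] = 0.0
    changed = True
    while changed:                      # flood fill until stable
        changed = False
        for x in range(rows):
            for y in range(cols):
                for a in (-1, 0, 1):
                    for b in (-1, 0, 1):
                        nx, ny = x + a, y + b
                        if (a, b) != (0, 0) and 0 <= nx < rows and 0 <= ny < cols:
                            new = grid[x][y] + cost[nx][ny]  # equation (8)
                            if new < cost[x][y]:
                                cost[x][y] = new
                                changed = True
    return cost
```

Once the costs are computed, the route is traced from GOAL back to ROBOT by repeatedly stepping to the cheapest neighbouring cell, as in Table 5.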
0.50 0.28 0.50 0.87 0.87 0.61 0.49 0.50 0.53 0.50
0.32 0.45 0.84 0.98 0.92 0.50 0.46 0.58 0.50 0.54
0.63 0.52 0.92 0.94 0.92 0.70 0.67 0.72 0.45 0.51
0.26 0.79 0.77 0.92 0.87 0.78 0.52 0.39 0.74 0.50
0.33 0.56 0.18 0.05 0.37 0.29 0.61 GOAL 0.37 0.50
0.32 0.22 0.39 0.33 0.28 0.33 0.56 0.49 0.13 0.28
0.05 0.10 0.58 0.65 0.50 0.31 0.28 0.27 0.06 0.48
0.28 0.19 0.67 0.84 0.72 0.47 0.17 0.18 0.30 0.39
ROBOT 0.17 0.76 0.93 0.80 0.63 0.57 0.49 0.45 0.47
0.59 0.68 0.70 0.50 0.72 0.73 0.66 0.50 0.61 0.50
Table 5: The path selected by the robot (underlined).

The time complexity of this algorithm is O(mn), where m and n are the two dimensions of the metric array. It is
fast enough to complete in real time. However, a large amount of memory is required to cache all the
information, so it is important to keep the metric array small by increasing the area represented by each cell.

7. Conclusion and future work


This paper has presented the hardware and software design of an omnidirectional robot. The kinematic analysis and
control methods were developed and integrated into the navigation process. Techniques employed in the robot's sensing,
mapping, obstacle-avoidance and path-planning processes were also discussed.
Currently, my robot depends solely on wheel encoders for localization [Section 5.1]. However, the world model
relies crucially on the alignment of the robot with its map, and drift and slippage limit the robot's ability to
estimate its location within the global map. So far, the world model obtained is not very accurate. High-level
global mapping is still in progress, and a landmark method is being incorporated into the localization process.

Acknowledgment
I would like to thank Assoc. Prof. Marcelo H. Ang, Jr. for many helpful discussions. This work was conducted in the
Control and Mechatronics Lab 1, NUS.

References
[1] K. Watanabe. 1998. Control of an omnidirectional mobile robot. Second International Conference on Knowledge-Based Intelligent Electronic Systems, p51-60.
[2] G. Campion, G. Bastin and B. d'Andrea-Novel. 1996. Structural properties and classification of kinematic and dynamic models of wheeled mobile robots. IEEE Trans. on Robot. & Automat. Vol 12, p47-57.
[3] F. G. Pin and S. M. Killough. 1994. A new family of omnidirectional and holonomic wheeled platforms for mobile robots. IEEE Trans. on Robot. & Automat. Vol 10, p480-489.
[4] M. J. Jung, H. S. Kim and S. Kim. 2000. Omnidirectional Mobile Base OK-II. Proceedings of the IEEE International Conference on Robotics and Automation. Vol 4, p3449-3454.
[5] G. Witus. 2000. Mobility Potential of a Robotic 6-Wheeled Omnidirectional Drive Vehicle with Z-Axis and Tire Inflation Control. Proceedings of SPIE, p106-114.
[6] S. L. Dickerson and B. D. Lapin. 1991. Control of an Omnidirectional Robotic Vehicle with Mecanum Wheels. National Telesystems Conference Proceedings. Vol 1, p323-328.
[7] F. G. Martin. 2000. The Handy Board Technical Reference.
[8] http://www.motorcomp.com
[9] P. Spasov. 1999. Microcontroller Technology: The 68HC11. Prentice Hall, Upper Saddle River, N.J.
[10] T. Stentz. 1989. The NAVLAB System for Mobile Robot Navigation. PhD Thesis, Carnegie-Mellon University, PA.
[11] I. E. Paromtchik and U. M. Nassal. 1995. ECC95 European Control Conference, Rome.
[12] R. A. Brooks. 1986. A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, RA-2(1).
[13] R. C. Arkin and J. Murphy. 1990. Autonomous Navigation in a Manufacturing Environment. IEEE Transactions on Robotics and Automation, 6(4).
[14] K. J. Åström and T. Hägglund. 1995. PID Control: Theory, Design and Tuning. Instrument Society of America, Research Triangle Park, NC.
[15] P. Xian-Tu. 1990. Generating Rules for Fuzzy Logic Controllers by Functions. Fuzzy Sets and Systems, p83-89.
[16] B. Kosko. 1992. Neural Networks and Fuzzy Systems. Prentice-Hall Int. Editions.
[17] A. Elfes. 1992. Multi-source Spatial Data Fusion Using Bayesian Reasoning. Data Fusion in Robotics and Machine Intelligence, p136-161, Academic Press.
[18] T. H. Cormen, C. E. Leiserson and R. L. Rivest. 1998. Introduction to Algorithms. MIT Press, p339-349.

Appendix A – I/O port pinout

DC Motor Output port Memory address Reference library


Motor0 Motor0 The first four bits at $0e decide the state (on/off), the Motor.icb
Motor1 Motor1 last four decide the direction. The speed of each motor
Motor2 Motor2 can be controlled via $22, $23, $24, $25 (values 0
Motor3 Motor3 to 128)

Optical Encoder Input port Memory address Input port Memory address Reference
(direction) (incremental) library
Encoder0 Digital0 8th bit at $7000 Analog2 $1032 Encdr0.icb
Encoder1 Digital1 7th bit at $7000 Analog3 $1033 Encdr1.icb
Encoder2 Digital2 6th bit at $7000 Analog4 $1034 Encdr2.icb
Encoder3 Digital3 5th bit at $7000 Analog5 $1035 Encdr3.icb

Infra-red Sensor Input port Access path Reference library


IR0 Analog17 _exp_analog(1<<8) Libexpbd.icb
IR1 Analog16 _exp_analog(0<<8)
IR2 Analog18 _exp_analog(2<<8)
IR3 Digital7, T03(impulse) Pulse(1) Pulse.icb

Bumper Sensor Input port Access path Reference library


Bumper0 Analog21 _exp_analog(5<<8) Libexpbd.icb
Bumper1 Analog22 _exp_analog(6<<8)
Bumper2 Analog20 _exp_analog(4<<8)

Appendix B – Program File Listing

Library files Functions


Pcode_hb.s19 Basic firmware; includes memory management and math functions
Lib_hb.icb/.asm Access the analog ports from 0 to 7
Libexpbd.icb/.asm Access the analog ports from 16 to 32 (on the expansion board)
Motor.icb/.asm Control the direction and speed of the motors
Encdr0-3.icb/.asm Manage the optical encoders, sample rate 1000 Hz
Pulse.icb/.asm Manage the infrared sensor
Lock_process.icb/.asm Process controller

Program files Functions


Lego_test.ic Test rotation speed of Lego motors under different speed output value
Motor_test.ic Test routine for the motors
IR_test1.ic Calibration process for the infrared sensors
IR_test2.ic Mapping using a set of IR sensors and mirrors
Move.ic PID controller
S_layer.ic The strategic layer
T_layer.ic The tactical layer
O_layer.ic The operational layer
Architecture.ic The robot system architecture
Navigation.ic Specifies the goal of the navigation
