
DECLARATION



I do hereby declare that the work reported in this dissertation was carried out exclusively by me under the supervision of Mr. Duminda T. Wijesinghe. It describes the results of my own independent research except where due reference has been made in the text. No part of this dissertation has been submitted earlier or concurrently for the same or any other degree.

..........................
Date

..........................
Signature
(H.M.C.B. Mawilmada)

I endorse the declaration by the candidate.


..........................
Mr. Duminda T. Wijesinghe

Date: ..........................




ABSTRACT

Outdoor localization involves handling ambient environmental noise as well as adapting the system to sudden changes in parameters such as light level. Trilateration algorithms are used for localizing a robot in a limited area with high accuracy. Researchers have implemented trilateration using either ultrasonic sensors or computer vision (CV). Ultrasonic trilateration involves a higher error ratio due to variations in air velocity, moisture level and air density, while CV mainly lacks adjustability to changing light conditions; the higher the accuracy required of CV, the greater the cost of implementation.

This project combines both approaches to obtain higher accuracy and to make the algorithm viable at low cost under high ambient noise. Three stationary ultrasonic receivers were placed known distances apart, each marked with a red colour tag. The robot was fitted with a web camera for colour identification (to track the colour tags on the receivers) and an ultrasonic transmitter. An RF signal was initiated simultaneously with the ultrasonic burst, which was directed towards the receivers. The time taken for the ultrasonic burst to arrive after the RF signal was recorded at each of the three receivers separately, and the distance from each station was calculated.

Trilateration was used to estimate the position and Kalman filtering was used to filter out the noise. The system was implemented and tested in an open ground of about 1000 m², with minimal obstacle disturbance but high noise levels (light and sound). It achieved an accuracy of 40-50 cm at an 80% confidence level for a stationary robot; while moving at 1 m/s, an accuracy of 60 cm was achieved at an 80% confidence level, which is acceptable accuracy for localizing an outdoor robot. The higher accuracy of 40-50 cm obtained for stationary positions suggests the system is suitable for localizing slow-moving robots such as grass cutters, automated guided vehicles and combine harvesters.





ACKNOWLEDGEMENTS

At this important milestone of my hard work, I would like to express my gratitude to all those who were with me and helped me to finish this journey successfully.
First of all, I would like to express my very great appreciation to my project supervisor, Mr. Duminda T. Wijesinghe, for his wisdom and the enormous support and encouragement given to me throughout the three months to make this project successful. He was very kind and patient while suggesting the outlines of this project and clearing my doubts. I thank him for his overall support.
I am pleased to thank Auston Ceylon and all the staff members for their kind cooperation and help.
I would also like to thank Mr. Aruna Rubasinghe and Mr. Bhanu Watawana for sharing their experience and knowledge with me to make this project successful.
Last but not least, I am very grateful to my parents, who were behind me all the time, encouraging me and making all the preparations required to pass this important milestone of my life.














TABLE OF CONTENTS

DECLARATION ii
ABSTRACT iii
ACKNOWLEDGEMENTS iv
TABLE OF CONTENTS v
LIST OF FIGURES vii
CHAPTER
1. Introduction 1
1.1. Problem Statement 2
1.2. Literature Survey 2
1.3. Solution Overview 3
2. Methodology 5
2.1. Ultrasonic Sensor 5
2.1.1. Ultrasonic Sensor Coverage-Horizontal Plane 6
2.1.2. Ultrasonic Sensor Coverage-Vertical Plane 7
2.2. Sensor System 7
2.2.1. Ultrasonic Transmitter System 8
2.2.2. Ultrasonic Receiver System 8
2.2.3. Object Tracking System 9
2.3. Node Placement 9
2.4. Outdoor Localization System 9
2.5. Trilateration Algorithm 10
2.6. Improving Results with Kalman Filter 11
2.7. Discrete Kalman Filter for Radius Measurements 11




3. Implementation Of The System 12
3.1. Ultrasonic Receiver Unit 12
3.1.1. Ultrasonic Transducer 13
3.1.2. Ultrasonic Amplifier 13
3.1.3. Frequency Detector 15
3.1.4. Microcontroller 18
3.1.5. Power Supply 19
3.1.6. Embedded Algorithm 20
3.2. Ultrasonic Transmitter Unit 21
3.2.1. Power Supply 22
3.2.2. Microcontroller 22
3.2.3. Boost Converter 23
3.2.4. H-Bridge 25
3.2.5. XBee Module 26
3.2.6. Embedded Algorithm 27
3.3. Servo Controller Unit 28
3.3.1. Power Supply 28
3.3.2. Microcontroller 28
3.3.3. PC USB Interface 30
4. Results and Discussion 31
5. Conclusions 33

References 34
Appendices 35




LIST OF FIGURES

Figure 1: Robot fabricated ................................................................................................ 1
Figure 2: Beam pattern as in the manufacturer datasheet .................................................. 6
Figure 3: 2D view of the single ultrasonic sensor ............................................................. 7
Figure 4: Ultrasonic Transmitter ....................................................................................... 8
Figure 5: Placing the sensors ........................................................................................... 10
Figure 6: 3 circle method ................................................................................................ 10
Figure 7: Block diagram of the Node.............................................................................. 12
Figure 8: 400SR160 ultrasonic receiver .......................................................................... 13
Figure 9: Waveforms of received ultrasonic signal and amplified output ...................... 14
Figure 10: Block diagram of amplifier expansion .......................................................... 14
Figure 11: Amplifier Schematic ...................................................................................... 14
Figure 12: Frequency vs. sensitivity of ultrasonic transducer ........................................ 16
Figure 13: Schematic of frequency detector ................................................................... 17
Figure 14: Output of frequency detector w.r.t. input ultrasonic signal ........................... 17
Figure 15: Schematic of microcontroller unit ................................................................. 18
Figure 16: Schematic of power supply unit .................................................................... 19
Figure 17: Basic algorithmic flow diagram of ultrasonic receiver unit .......................... 20
Figure 18: Structure of ultrasonic transmitter unit .......................................................... 21
Figure 19: Schematic of microcontroller unit ................................................................. 22
Figure 20: Standard schematic of MC34063 based boost converter .............................. 23
Figure 21: SMPS Schematic ........................................................................................... 24
Figure 22: Waveform of ultrasonic transducer drive voltage for 8V supply .................. 24
Figure 23: Schematic of H-bridge section ...................................................................... 26
Figure 24: XBee module ................................................................................................. 26
Figure 25: Basic algorithmic flow diagram of ultrasonic transmitter unit ...................... 27
Figure 26: Schematic of Servo Controller ...................................................................... 29
Figure 27: Schematic of USB interface circuit ............................................................... 30
Figure 28: Occurrence graph .......................................................................................... 31


CHAPTER 1

1. INTRODUCTION

Robotic localization is a vast area of research, implemented at different scales. Much research is carried out throughout the world to find the optimum and most cost-effective method of implementing a localization system. In such projects, technologies like computer vision, GPS, Wi-Fi, GSM, RFID and ultrasonics are used for localization. The main concept of robotic localization is similar across technologies: a known stationary position is used as the base, and the relative position is calculated using the chosen technology. Outdoor localization involves handling ambient environmental noise as well as adapting the system to sudden changes in parameters such as light level. The motivation of this project is to address this issue and find an optimum trilateration algorithm for robotic localization. CV, Wi-Fi, GSM, RFID and ultrasonic localization all use trilateration algorithms. Trilateration algorithms are used for localizing a robot in a limited area with high accuracy. Researchers have implemented trilateration using either ultrasonic sensors or computer vision. Ultrasonic trilateration involves a higher error ratio due to variations in air velocity, moisture level and air density. CV mainly lacks adjustability to changing light conditions, and the higher the accuracy required of CV, the greater the cost of implementation. This project combines both approaches to obtain higher accuracy and to make the algorithm viable at low cost under high ambient noise.

Figure 1: Robot fabricated


1.1 Problem Statement
Outdoor robotic localization involves high costs and the use of advanced technologies. The main objective was to build and implement a low-cost, high-accuracy outdoor localization system consisting of ultrasonic transmitters and receivers. Using the minimum number of receiver base stations while delivering acceptable accuracy for the localized robot was also considered. To minimize power usage and the number of base stations, computer vision was used to aim the outgoing burst. Low-cost ultrasonic sensors usually work at a range of less than 4 m; improvements were made to increase the range.

1.2 Literature Survey
The active bat location system
The active bat location system consists of mobile or fixed wireless transmitters, a matrix of receiver elements, and a central controlling system. Receivers are placed in a square grid, 1.2 m apart, and are connected by a high-speed serial network in daisy-chain fashion. The central controller starts the activity of the transmitters by periodically broadcasting messages addressed to each of them in turn. A transmitter, upon hearing a message addressed to it, sends out an ultrasonic pulse. The receiver systems, which also listen to the initial RF signal, determine the time interval between receipt of the RF signal and receipt of the ultrasonic signal, and thereby estimate the distance to the transmitter. Given enough distance measurements, the system can determine the location of the transmitter. The system has an accuracy of 3 cm; though very accurate, it needs a sophisticated environment to deploy.

Active badge system
The active badge system uses infrared transmitters and receivers to pinpoint the location of an object. It is one of the first localization systems, but because infrared suffers from dead spots, its localization accuracy is low. The system can successfully track objects at room level.





The cricket localization system
The cricket localization system consists of two types of nodes: cricket beacons, which act as fixed reference points of the location system and are typically attached to the ceilings and walls of a building, and listeners, which are attached to fixed and mobile objects that need to determine their location. In contrast to the active bat system, cricket does not need a grid-like infrastructure, so it is easy to deploy in an indoor environment. It also uses a passive mobile design, with ultrasonic receivers in the listeners, giving the added advantages of privacy and power saving. The system has an accuracy of around 10 cm. This system also uses more beacons than our system.

BATSY system
The BATSY system also uses a combination of ultrasonics and RF for localization. The system has an accuracy of 3 cm.

IEEE 802.11 based localization systems
Many research groups have developed localization systems using IEEE 802.11, but those systems have a median estimation error of around 1-3 m. Their advantage is that they can use existing IEEE 802.11 nodes, so special anchor nodes are not needed. Stereo vision involves two images taken at the same time, some distance apart; it requires high-end equipment for high accuracy. Low-cost stereo vision algorithms are available, though their accuracy and outdoor compatibility are limited. Ultrasonic trilateration involves three or more ultrasonic receivers placed known distances apart, towards which an ultrasonic burst is sent. When the pulse is received, a calculation trilaterates the robot's location to pinpoint its exact position. The error may vary around 10-20 cm.

1.3 Solution Overview

The approach is to pinpoint the location using distances measured from at least three stationary ultrasonic receivers, using ultrasonic trilateration. Using the readings from CV, the transmitter was oriented towards the receiver. The robot was equipped with an ultrasonic transmitter and a web camera, which captures the colour tag and orients the transmitter towards the receivers' locations. Ultrasonic receivers were placed at stationary positions in the outdoor environment. The propagation time of the ultrasonic sound wave from the robot to the stationary positions was measured, the distance was calculated using the velocity of sound, and the heading angle was calculated from the direction of the transmitter.





CHAPTER 2

2. METHODOLOGY
This chapter describes the sensor type used in the project, the types of sensor systems used, and the different methods available to achieve the objective. It then describes the node attributes and node placement criteria used in the project. Finally, it describes the trilateration algorithm and the filtering methods used in the system to reduce the effect of noise.

2.1 Ultrasonic Sensor
Ultrasonic sensors are, without a doubt, among the most frequently used sensors in robotic and many other industrial applications. They are often used to detect the presence of objects in robotic applications, and can also be used to measure wind speed and direction, the fullness of a tank, and so on. Certain limitations of ultrasonic sensors have a major effect on the success of such applications and often become the reason for their failure.

Ultrasonic sensors work on the same principle as radar or sonar. The transmitter generates high-frequency signals, and the receiver waits for the echo or the directly transmitted signal and measures its arrival time to determine the distance. Most often, ultrasonic sensors are used to detect the presence of obstacles.

In this project, however, ultrasonic sensors were used from a different perspective: the transmitters and receivers are at distant locations, and the time taken by the ultrasonic signal to travel from transmitter to receiver was measured. 400ST160 ultrasonic transmitters and 400SR160 ultrasonic receivers were used. This type was chosen for its high sensitivity and sound pressure level, low cost, and excellent temperature and humidity durability.



2.1.1 Ultrasonic Sensor Coverage-Horizontal Plane

Ultrasonic sensor coverage is shown below according to the manufacturer datasheet.




Figure 2: Beam pattern as in the manufacturer datasheet


The center frequency of this sensor is 40 kHz. According to the datasheet it has a beam angle of 55 degrees. Beyond the datasheet information, tests were carried out to determine the achievable accuracy of a single ultrasonic sensor at different distances. The following figure shows the accuracy levels obtained at different distance levels.




Figure 3: 2D view of the single ultrasonic sensor


As shown in the figure above, a 60-degree beam angle could be used from a single ultrasonic sensor. Three ultrasonic transmitters were used to increase the total angle; the angle between two consecutive sensors is 30 degrees.

2.1.2 Ultrasonic Sensor Coverage-Vertical Plane
The ultrasonic beam pattern in the vertical plane is not much wider than in the horizontal plane. In this project all nodes were placed at the same height, and the height difference between a node and the user equipment was considered negligible.

2.2 Sensor System
The sensor system consists of three parts.
Ultrasonic transmitter system
Ultrasonic receiver system
Object tracking system



















Figure 4: Ultrasonic Transmitter

2.2.1 Ultrasonic Transmitter System

The transmitter sensor system consists of three ultrasonic transmitters mounted on a casing. As described in section 2.1, the maximum angle that a single ultrasonic sensor can cover is 55 degrees according to the datasheet, so using three ultrasonic sensors allows signals to be transmitted over a wider range. The three sensors are placed at equal angular spacing to achieve maximum signal coverage. The vertical extent of the signal coverage is not a major concern, as the receiver and the transmitter are placed at approximately the same level and the transmitters and receivers are directed towards each other.

2.2.2 Ultrasonic Receiver System

The receiver sensor system consists of three ultrasonic sensors mounted with equal angles between them. As the receiver sensor system is stationary, the transmitter must be directed towards it. Again, the vertical extent of the signal coverage is not a major concern here.
To obtain accurate location coordinates it is necessary to measure distance with high accuracy, which in turn requires good ultrasonic coverage of the outdoor environment. To obtain good coverage, the transmitter and receiver units need to be placed properly in the testing environment.



2.2.3 Object tracking system

A common web camera was used for object tracking, along with a high-power servo motor. The web camera identifies the colour tag on the receiver, and the center point of the tag is then computed and checked against a tolerance of 10 pixels. If it is out of tolerance, the PC sends a command to the servo to turn the camera left or right according to the tag's position, as sketched below.
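A minimal sketch of this decision logic, written in C to match the embedded code in the appendices; the frame width and center used here are assumptions (the appendix C# code uses slightly different thresholds), so all values are illustrative only:

/* Decide a servo command from the detected tag center X coordinate.
   Assumes a 640-pixel-wide frame, so the image center is at 320. */
char servo_command(int center_x)
{
    const int mid = 320;        /* assumed image center (pixels) */
    const int tolerance = 10;   /* tolerance stated above */
    if (center_x > mid + tolerance) return 'r';  /* turn camera right */
    if (center_x < mid - tolerance) return 'l';  /* turn camera left  */
    return 0;                                    /* within tolerance: no command */
}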

2.3 Node Placement

The localizing part of the project depends on the concept called trilateration. In contrast to the popular triangulation algorithm, it uses distances instead of angles. The concept behind the algorithm is the intersection of three circles (or four non-coplanar spheres for a three-dimensional coordinate system). Since a two-dimensional planar coordinate system is used for localization, the three-circle method is suitable.

2.4 Outdoor Localization System

The outdoor localization system consists of four subsystems:
Transmitter subsystem
Receiver subsystem
Servo/central controller
Software-installed PC

The servo controller is connected to the laptop placed on the robot; it is connected to the receiver subsystem and the PC via wired links. The transmitter subsystem is connected to the network via a wireless link. The details of each subsystem are described in chapter 3. The following figure shows the configuration of the outdoor localization system.












Figure 5: Placing the sensors


2.5 Trilateration Algorithm
Suppose the system has three circles with centers C1(x1, y1), C2(x2, y2) and C3(x3, y3) respectively, and that their distances l1, l2 and l3 from some unknown point (x, y) are known. The unknown point is located at the intersection of the three circles with C1, C2 and C3 as their centers and l1, l2 and l3 as their radii. The intersection point of these three circles, which is the unknown point, can be expressed as follows:

(x1 - x)² + (y1 - y)² = l1²
(x2 - x)² + (y2 - y)² = l2²
(x3 - x)² + (y3 - y)² = l3²

Figure 6: 3 circle method
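Subtracting the first equation from each of the other two cancels the x² and y² terms, leaving two linear equations in x and y that can be solved directly. The following C sketch illustrates this reduction; it is not the project code, and the function name and sample values are purely illustrative:

#include <stdio.h>

/* Solve for (x, y) given three anchor positions and measured radii.
   Subtracting circle 1 from circle i gives
   2(xi - x1)x + 2(yi - y1)y = l1^2 - li^2 + xi^2 - x1^2 + yi^2 - y1^2. */
int trilaterate(const double xc[3], const double yc[3], const double l[3],
                double *x, double *y)
{
    double a11 = 2.0 * (xc[1] - xc[0]), a12 = 2.0 * (yc[1] - yc[0]);
    double a21 = 2.0 * (xc[2] - xc[0]), a22 = 2.0 * (yc[2] - yc[0]);
    double b1 = l[0]*l[0] - l[1]*l[1]
              + xc[1]*xc[1] - xc[0]*xc[0] + yc[1]*yc[1] - yc[0]*yc[0];
    double b2 = l[0]*l[0] - l[2]*l[2]
              + xc[2]*xc[2] - xc[0]*xc[0] + yc[2]*yc[2] - yc[0]*yc[0];
    double det = a11 * a22 - a12 * a21;
    if (det == 0.0) return -1;            /* anchors must not be collinear */
    *x = (b1 * a22 - b2 * a12) / det;     /* Cramer's rule */
    *y = (a11 * b2 - a21 * b1) / det;
    return 0;
}

int main(void)
{
    double xc[3] = {0.0, 10.0, 0.0}, yc[3] = {0.0, 0.0, 10.0};
    double l[3]  = {5.0, 8.0623, 6.7082};   /* distances from the point (3, 4) */
    double x, y;
    if (trilaterate(xc, yc, l, &x, &y) == 0)
        printf("estimated position: (%.2f, %.2f)\n", x, y);
    return 0;
}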


2.6 Improving Results with Kalman Filter

To further improve the results, the system was programmed with a one-dimensional Kalman filter applied to the radii calculated from each of the receivers.


2.7 Discrete Kalman Filter for Radius Measurements

The Kalman filter addresses the problem of estimating the state of a discrete-time controlled process governed by the equation

x_k = A x_(k-1) + B u_k + w_(k-1)

where x_k is the state at step k, u_k the control input and w_(k-1) the process noise.
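A minimal sketch of such a one-dimensional Kalman filter applied to one receiver's radius, assuming A = 1 and B = 0 (the radius changes slowly between bursts); the noise covariances q and r are hand-tuned values, and all names here are illustrative rather than taken from the project code:

typedef struct {
    double x;   /* estimated radius (m) */
    double p;   /* estimate error covariance */
    double q;   /* process noise covariance (tuning value) */
    double r;   /* measurement noise covariance (tuning value) */
} kf1d_t;

/* One predict-then-update step for a new radius measurement z (m). */
double kf1d_step(kf1d_t *kf, double z)
{
    kf->p += kf->q;                        /* predict: x carries over (A = 1, B = 0) */
    double k = kf->p / (kf->p + kf->r);    /* Kalman gain */
    kf->x += k * (z - kf->x);              /* update estimate with the innovation */
    kf->p *= (1.0 - k);                    /* update error covariance */
    return kf->x;
}

With, for example, q = 0.001 and r = 0.05, the filter responds slowly to isolated outliers in the measured radius while still tracking gradual change.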





















CHAPTER 3

3. IMPLEMENTATION OF THE SYSTEM

This chapter describes how each subsystem of the project works; the task, circuitry and operation of each are explained.

3.1 Ultrasonic Receiver Unit

The ultrasonic receiver is a node device responsible for measuring the ultrasonic sound wave propagation time and calculating the distance between the ultrasonic transmitter and the receiver. The sensor network consists of three such node devices and the central controller, connected as a ring network using coaxial cables. Each receiver unit consumes about 40 mA.


Figure 7: Block diagram of the Node





3.1.1 Ultrasonic Transducer
The ultrasonic transducer used in this project is a 400SR160 ultrasonic receiving module. It has a 40 kHz center frequency with a 55° beam width. One or more such transducers are connected in parallel and mounted with some angle between them, in order to increase the total beam angle and hence the range of operation.


















Figure 8: 400SR160 ultrasonic receiver


3.1.2 Ultrasonic Amplifier
The output of the ultrasonic transducer is a signal with an amplitude of a few millivolts. This amplitude varies with the strength of the received ultrasonic sound wave, so this small signal must be amplified to a sufficient level before further processing. This is important for detecting ultrasonic signals emitted from a location a few meters away. The oscilloscope screen below shows the voltage signal generated by the ultrasonic transducer (in yellow) and the signal after amplification (in blue). The signal before amplification has a peak-to-peak value of about 16 mV.








Figure 9: Waveforms of received ultrasonic signal and amplified output


The amplifier section can be expanded as shown below.




Figure 10: Block diagram of amplifier expansion

Figure 11: Amplifier Schematic


The amplifier is based on the TL082CP operational amplifier manufactured by Texas Instruments. Several important factors must be considered when selecting an op amp for an application like amplifying a signal from an ultrasonic transducer.
Three main factors were considered when choosing the op amp for this design:
Op amp input bias current
Op amp gain bandwidth product
Number of op amps in a single package

As the signal received from the ultrasonic transducer is very small, loading effects can easily attenuate and distort the input. Therefore the input current required to bias the amplifier's internal circuitry needs to be extremely low. Unlike bipolar-input op amps, the TL082 has JFET inputs, which require only about 50 pA of input bias current, so the loading effect caused by the amplifier stage is negligible.

The op amp's gain bandwidth product (GBP) is also an important factor when designing the amplifier stage. The transducer's output is a 40 kHz signal, and the op amp should be able to amplify this 40 kHz signal without distortion. The TL082 has a gain bandwidth product of 4 MHz, so the maximum gain achievable by one op amp stage at 40 kHz is 4 MHz / 40 kHz = 100 times.

3.1.3 Frequency Detector
Normally, the amplified signal from the amplifier is sent through a band-pass filter and a peak detector followed by a comparator to obtain a constant DC voltage as a trigger signal to the microcontroller. This is done assuming the ultrasonic transducer is sensitive only to its resonance frequency of 40 kHz. Such methods are error-prone, and the amplified signal cannot be fed directly into the microcontroller, because the amplified signal may be something other than a 40 kHz ultrasonic signal: the transducer used to receive ultrasonic signals is also sensitive to other frequency ranges.
The ultrasonic transducer's resonance frequency is around 40 kHz, but it generates voltage signals for other sound frequencies as well. These signals are attenuated because the transducer is less sensitive to non-40 kHz signals; however, after amplification they can become significant and act as false triggers to the microcontroller, especially if the amplifier gain is sufficiently large. The following graph shows the transducer's sensitivity versus received sound frequency.
In this design the amplified output of the received signal is fed to a frequency detector circuit, in order to verify whether the signal amplified by the amplifier is actually a 40 kHz signal or some other noise signal.

Figure 12: Frequency vs. sensitivity of ultrasonic transducer

Frequency detection is done using the LM567 tone decoder IC by National Semiconductor. The LM567 tone and frequency decoder is a highly stable phase-locked loop with synchronous AM lock detection and power output circuitry.








Figure 13: Schematic of frequency detector

The LM567 has an active-low open-collector output. The circuit pulls its output from logic high to logic low whenever a frequency within its detection band is present at the self-biased input. The oscilloscope screen below shows the output of the LM567 (in blue) for an input of 40 kHz bursts (in yellow).





Figure 14: Output of frequency detector w.r.t. input ultrasonic signal




The center frequency of this detection band can be calculated, consistent with the component values used here, as

f0 ≈ 1 / (RV1 × C18)

From this relation, RV1 is calculated for f0 = 40 kHz and C18 = 4.7 nF as 5.32 kΩ. RV1 is replaced with a 10 kΩ multi-turn trim-pot in order to set the center frequency precisely to 40 kHz. Temperature instability in capacitors can cause frequency drift in the system, leading to malfunctions over time. Therefore a Mylar capacitor was chosen over a ceramic capacitor for C18, as Mylar capacitors have higher temperature stability than ceramic capacitors.

3.1.4 Microcontroller
A PIC12F683 microcontroller by Microchip Technology Inc. was chosen as the heart of the receiver subsystem. It is an 8-pin microcontroller with 2K flash memory operating at a 20 MHz clock speed. The main function of the microcontroller is to calculate the distance between the ultrasonic transmitter and the receiver by measuring the time difference between the sync pulse from the central controller and the ultrasonic signal from the transmitter. The calculated distance is stored in memory until the central controller requests it.



Figure 15: Schematic of microcontroller unit



3.1.5 Power Supply
The power supply to the system provides +9 V, -9 V, +5 V and ground. The ±9 V dual supply is used to operate the op amp, and the 5 V supply powers the frequency detector circuit and the microcontroller unit.




Figure 16: Schematic of power supply unit



3.1.6 Embedded Algorithm



Figure 17: Basic algorithmic flow diagram of ultrasonic receiver unit




If the issued command equals the broadcast address, the receiver subsystem starts the distance calculation process. It is time-synchronized with the transmitter unit using a synchronizing bit sent via the central controller. The time-synchronized receiver subsystem measures the propagation time of the ultrasonic wave, and the distance calculation is completed using the speed of sound in air. We used 330 m/s as the speed of sound, neglecting the effect of atmospheric temperature variation; in the trilateration, errors due to temperature variation largely cancel out. Calculated distances are stored to be sent later when the central controller makes a request. If the received command equals the device ID of a receiver subsystem, it sends the stored distance to the central controller immediately. All decoded commands are error-checked for accurate communication; if a single error occurs, the receiver discards the data.
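As an illustration of this calculation (the tick period is an assumption; the actual value depends on how Timer1 is clocked and prescaled in the receiver firmware):

#define SPEED_OF_SOUND 330.0f    /* m/s, as used above; temperature effect neglected */
#define TICK_SECONDS   0.4e-6f   /* assumed Timer1 tick (e.g. 20 MHz clock, 1:2 prescaler) */

/* Convert a measured propagation time, in Timer1 ticks, to distance in metres. */
float ticks_to_distance(unsigned int ticks)
{
    return SPEED_OF_SOUND * (float)ticks * TICK_SECONDS;
}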

3.2 Ultrasonic Transmitter Unit
Ultrasonic transmitter unit is the mobile device of the system.



Figure 18: Structure of ultrasonic transmitter unit



3.2.1 Power Supply
Power to the system is provided by the robot's 7000 mAh battery. This is regulated down to 5 V and 3.3 V to power the controller board and the XBee module, and the battery is also connected directly to the input of the boost converter, which is used to drive the ultrasonic transducers. The transmitter draws about 150 mA.
3.2.2 Microcontroller
A PIC16F876A microcontroller was chosen as the brain of the system. It is responsible for the following tasks:
1. Communicating with the central controller over RF using the XBee module.
2. Generating the ultrasonic burst at the central controller's request.
3. Controlling the boost converter to adjust the power of the ultrasonic burst.
4. Receiving calculated coordinates from the central controller and passing them through the output port.






Figure 19: Schematic of microcontroller unit



3.2.3 Boost Converter
The displacement of the ultrasonic transmitter transducer diaphragm is proportional to the voltage applied to it. As the applied voltage increases, the diaphragm displacement increases, raising the energy of the ultrasonic burst and allowing it to travel greater distances before decaying. The boost converter is used to generate four predefined voltages, up to 20 V, from the battery input.
These voltages are used to drive the ultrasonic transducers. When transmission over a greater distance is required, the output voltage of the boost converter is adjusted by the microcontroller as necessary. The boost converter is based on the MC34063A IC by ON Semiconductor.







Figure 20: Standard schematic of MC34063 based boost converter

The original boost converter design was modified into a voltage-controlled output so that the microcontroller can adjust the output voltage of the boost converter in real time. The modified boost converter is shown below in Figure 21.









Figure 21: SMPS Schematic

The modification allows the microcontroller to set the boost converter's output voltage to four different predefined voltages (8 V, 12 V, 15 V and 20 V). This is done by applying 5 V or 0 V to the terminals Vset_8, Vset_12, Vset_15 or Vset_20. When 5 V is applied to a terminal, it reverse-biases the diode connected to it and effectively removes the effect of the resistor attached to that terminal. When 0 V is applied to a terminal by the microcontroller, it forward-biases the diode connected to it, providing a ground path through the diode and its resistor, and hence forming a voltage divider between the output voltage and ground.
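Assuming the MC34063's standard 1.25 V feedback reference, with R10 as the upper divider resistor and Rset as the resistor grounded through the selected diode, the regulated output is approximately

Vout ≈ 1.25 V × (1 + R10 / Rset)

neglecting the diode's forward drop, which in practice shifts each preset voltage slightly. The component roles here are inferred from the description above rather than read from the schematic.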

The microcontroller can turn off the boost converter by applying 5 V to all four terminals, which turns off all four diodes. This connects the feedback pin (pin 5) of the MC34063 to Vout through R10, which acts as a pull-up resistor on the feedback pin. This sets the feedback pin above 1.25 V, so the MC34063A sets its duty cycle to zero, trying to reduce the feedback voltage to 1.25 V by decreasing Vout.

The boost converter is turned off while RF communication is active, in order to avoid electromagnetic interference from the boost converter's inductor coupling into the RF link; it is turned on only just before transmitting an ultrasonic burst. This also saves energy, since the boost converter wastes minimal power while turned off.

3.2.4 H-Bridge
The direction of the ultrasonic transmitter diaphragm's movement depends on the polarity of the applied voltage. The diaphragm can be moved in both forward and backward directions by applying an alternating voltage, and such movement increases the energy stored in the ultrasonic burst. Therefore an H-bridge circuit is used to supply an alternating voltage signal at 40 kHz to the transducers.

The output signal of the H-bridge for an 8 V supply is shown below; it has a peak-to-peak value of nearly 16 V.




Figure 22: Waveform of ultrasonic transducer drive voltage for 8V supply

The H-bridge circuit is based on the L293 IC by Texas Instruments. The direction pins of the L293 are driven with a 40 kHz signal and a 180° phase-shifted 40 kHz signal, and the ultrasonic burst is generated by applying logic 1 to the L293's enable pin.
The output of the H-bridge is connected to eight ultrasonic transducers in parallel. When logic 1 is applied to the enable pin, the H-bridge drives the transducers with twice the boost converter's output voltage, peak to peak.
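To make the drive scheme concrete, the sketch below bit-bangs the two L293 direction inputs in antiphase; the real firmware uses the PIC's CCP PWM module for one input and an inverted copy for the other, so the pin names and the software loop here are purely illustrative:

#define IN_A  PIN_C1   /* hypothetical L293 direction input 1 */
#define IN_B  PIN_C2   /* hypothetical L293 direction input 2 */

/* Drive the bridge inputs in antiphase at roughly 40 kHz (25 us period),
   so the transducer sees +V then -V: twice the supply, peak to peak. */
void drive_burst(int cycles)
{
    int i;
    for (i = 0; i < cycles; i++) {
        output_high(IN_A); output_low(IN_B);
        delay_us(12);                       /* ~half of the 25 us period */
        output_low(IN_A); output_high(IN_B);
        delay_us(13);
    }
}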








Figure 23: Schematic of H-bridge section



3.2.5 XBee MODULE

RF communication between the central controller and the transmitter unit is done using an XBee module. The RF module used in this project is a 2.4 GHz, 1 mW, Series 1 XBee module by Digi International, with an operating range of 100 m. It is based on the IEEE 802.15.4 specification and the ZigBee network standard.



Figure 24: XBee module


3.2.6 Embedded Algorithm




Figure 25: Basic algorithmic flow diagram of ultrasonic transmitter unit



Figure 25 shows the basic algorithmic flow chart of the transmitter subsystem. The communication channel between the transmitter subsystem and the central controller is a radio frequency link using XBee modules. The transmitter subsystem responds to two kinds of commands: when the central controller requests generation of the tracking signal, the transmitter subsystem emits a 100 ms ultrasonic burst into the air, and the power of the ultrasonic burst is also controlled according to the central controller's request.

3.3 Servo Controller Unit
This circuit was designed to communicate with the laptop placed on the robot and to orient the ultrasonic burst towards the receivers.

3.3.1 Power Supply
The central controller is powered from the mains supply. As the ultrasonic receiver units are powered from the central controller, a 24 V center-tapped transformer is used to step down the 230 V mains. A single 5 V supply drives the circuits in the central controller, while +12 V and -12 V DC are sent to the ultrasonic receiver units.

3.3.2 Microcontroller
A PIC16F876A is used as the microcontroller of the central controller. Its main task is to communicate with the ultrasonic receivers and the ultrasonic transmitter to synchronize them, and then to collect distance measurements from each receiver.
















Figure 26: Schematic of Servo Controller




3.3.3 PC USB Interface
The central controller is connected to the PC through a USB connection. For this purpose a separate PIC18F2550 microcontroller with USB capability is used, and the USB interface software is based on the PICkit 2 software. Data is transferred over serial communication at 9600 bps, with the circuit acting as a serial-to-USB converter.






Figure 27: Schematic of USB interface circuit




CHAPTER 4

4. RESULTS AND DISCUSSION
Measurements were taken at several stationary positions (50 positions) to estimate the root mean square error with extended Kalman filters. Average data and weighted-average data were analyzed, and the following figure shows the occurrence pattern and the mean error of the data.




Figure 28: Occurrence graph


RMSE = √( (1/N) Σ (d̂_i − d_i)² )

where d̂_i is the estimated distance, d_i the true distance at sample i, and N the number of samples.

The system was implemented and tested in an open ground of about 1000 m², with minimal obstacle disturbance but high noise levels (light and sound). It achieved an accuracy of 40-50 cm at an 80% confidence level for a stationary robot, and while moving at 1 m/s an accuracy of 60 cm was achieved at an 80% confidence level, which is acceptable accuracy for localizing an outdoor robot.
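For reference, the RMSE reported above can be computed with a short routine like the following C sketch; the array and function names are illustrative:

#include <math.h>

/* RMSE of estimated vs. true distances over n samples. */
double rmse(const double est[], const double truth[], int n)
{
    double sum = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        double e = est[i] - truth[i];
        sum += e * e;
    }
    return sqrt(sum / n);
}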
























CHAPTER 5

5. CONCLUSIONS

In this research and development project I worked on an ultrasonic sensor system to localize a robot in a predefined outdoor area using the minimum possible number of sensors.

The system proved capable of estimating a robot's position with acceptable accuracy using this minimum number of sensor nodes.

A higher accuracy of 40-50 cm was obtained for stationary positions, which suggests the system is suitable for localizing slow-moving robots such as grass cutters, automated guided vehicles and combine harvesters.

Wind strength at the site is also a key factor affecting accuracy. As sound waves propagate through moving air, strong winds can alter the path of the ultrasonic wave and delay the system's response. The system's accuracy therefore drops when it is deployed outdoors where strong winds occur.





REFERENCES


[i] N. B. Priyantha, A. Chakraborty and H. Balakrishnan, The Cricket Location-Support System, MIT Laboratory for Computer Science.
[ii] P. Bahl and V. Padmanabhan, RADAR: An In-Building RF-based User Location and Tracking System, in Proc. IEEE INFOCOM, Tel-Aviv, Israel, March 2000.
[iii] R. Want, A. Hopper, V. Falcao and J. Gibbons, The Active Badge Location System, ACM Transactions on Information Systems, Vol. 10, Issue 1, 1992.
[iv] R.W.M.C.M.S. Kapukotuwe, M.A.U.S. Malasinghe, D.M.S. Palipana, P. Wijenayaka and S.R. Munasinghe, Self Localized Field Robot.
[v] N. S. Kodippili and D. Dias, Integration of Fingerprinting and Trilateration Algorithms for Improved Indoor Localization Performance.
[vi] K. Chintalapudi, A. Padmanabha Iyer and V. N. Padmanabhan, Indoor Localization Without the Pain.
[vii] 400SR160/400ST160 ultrasonic transducers datasheet.
[viii] TL082 datasheet.
[ix] http://ww1.microchip.com/downloads/en/DeviceDoc/PICkit2PCAppSourceV261.zip : PICkit 2 source code.
[x] Z. Parisek, Z. Ruzsa and G. Gordos, Mathematical Algorithms of an Indoor Ultrasonic Localization System, Bay Zoltán Foundation for Applied Research, Institute for Applied Telecommunication Technologies.
[xi] M. Agrawal and K. Konolige, Real-time Localization in Outdoor Environments using Stereo Vision and Inexpensive GPS, SRI International.
[xii] G. Welch and G. Bishop, An Introduction to the Kalman Filter, University of North Carolina at Chapel Hill.
[xiii] G. Welch, B. Allen, A. Ilie and G. Bishop, Measurement Sample Time Optimization for Human Motion Tracking/Capture Systems, in Proc. Trends and Issues in Tracking for Virtual Environments, Workshop at IEEE Virtual Reality 2007.
[xiv] S. Se, D. Lowe and J. Little, Vision-based Mobile Robot Localization and Mapping using Scale-Invariant Features, IEEE ICRA, 2001.





APPENDICES










Receiver Code

#include <12F683.h>   // header lines assumed: the receiver uses a PIC12F683 (section 3.1.4)
#FUSES HS
#use delay(clock=20000000)
#use rs232(baud=9600,parity=N,xmit=PIN_A0,rcv=PIN_A1,bits=8)

#define led   pin_a2
#define pulse pin_a3

int16 time=0;
char answer;

void main()
{
   while(true){
      do{
         answer=getch();                        // wait for the sync character
      }while(answer!='T');
      set_timer1(0);                            // start timing at the sync instant
      while(input(pulse)&&get_timer1()<10000){} // wait for ultrasonic detection or timeout
      time=get_timer1();                        // propagation time in timer ticks
      output_high(led);
      printf("%ld\n",time);                     // report the tick count
      delay_ms(300);
      output_low(led);
   }
}



Transmitter Code

#include <16F876A.h>  // header lines assumed: the transmitter uses a PIC16F876A (section 3.2.2)
#FUSES HS
#use delay(clock=20000000)
#use rs232(baud=9600,parity=N,xmit=PIN_B7,rcv=PIN_B6,bits=8)

#define V40V   pin_A5
#define V30V   pin_E0
#define V20V   pin_E1
#define V10V   pin_E2
#define enable pin_c3

char answer;

void main()
{
   setup_timer_2(T2_DIV_BY_1,124,1);       // 40 kHz PWM period from the 20 MHz clock
   setup_timer_3(T3_DISABLED|T3_DIV_BY_1);
   setup_ccp1(CCP_PWM);
   set_pwm1_duty(62);                      // ~50% duty cycle
   output_low(v10v);                       // setting up the burst voltage (adjust Vref of SMPS)
   output_float(v20v);
   output_float(v30v);
   output_float(v40v);
   output_low(enable);
   while(true){
      do{
         answer=getch();                   // waits for key
      }while(answer!='T');
      output_high(enable);                 // send the burst
      delay_ms(10);
      output_low(enable);
   }
}




Servo Controller Code

#include <12F683.h> // default settings
#device adc=8
#FUSES HS //High speed Osc (> 4mhz)
#use delay(clock=20000000)
#use rs232(baud=9600,bits=8,parity=N,xmit=PIN_A0,rcv=PIN_A1,ERRORS)
#define PinServo0 PIN_A2 // servo pin defined
int PosServo0;
int i = 0;
char direction='a';
#int_TIMER1 // servo control timer interrupt
void TIMER1_isr(void)
{
output_high(PinServo0);
for (i=0;i<=PosServo0;i++)
delay_us(10);
output_low(PinServo0);
}

void main()
{
byte Sync = 0; // 0xFF
byte ServoNo = 0; // 1 - 255
setup_timer_1(T1_INTERNAL|T1_DIV_BY_2); // timer set to servo delay time
enable_interrupts(INT_TIMER1);
enable_interrupts(global);
PosServo0 = 150; // default servo pos
output_low(PinServo0);

while (TRUE)
{
if (kbhit()) // waits for the character to be sent from laptop
{
direction = getchar();



if(direction=='l'){ // turns servo left
PosServo0=PosServo0-3;
direction='a';

printf("%d",PosServo0);
}
else if(direction=='r'){ // turns servo right
PosServo0=PosServo0+3;
direction='a';
printf("%d",PosServo0);
}
else{}
}}}



Image Processing Code


namespace WindowsFormsApplication1
{
public partial class Form1 : Form
{
private FilterInfoCollection videoDevice;
VideoCaptureDevice videoSource; //to hold selected video device
Boolean isActivated = false;

int red, green, blue,tracker;
int mouseMoveX, mouseMoveY;
Bitmap proccedImage;
Rectangle[] rects;
int rectsLength = 0;

int detectedX, detectedY;//to hold the center

public Form1()
{
    InitializeComponent();
    serialPort.PortName = "COM1";
    serialPort.BaudRate = 9600;
    serialPort.DataBits = 8;
    serialPort.Parity = Parity.None;
    serialPort.StopBits = StopBits.One;
    serialPort.Open();
}

private void btnActivate_Click(object sender, EventArgs e)
{
    // create video capture device as videoSource using the videoDevice selected in the comboBox
    videoSource = new VideoCaptureDevice(videoDevice[cmbDevice.SelectedIndex].MonikerString);

    videoSource.NewFrame += new NewFrameEventHandler(videoSource_NewFrame);
    // start the video source
    videoSource.Start();

    isActivated = true;
    btnStart.Enabled = false;
    btnStop.Enabled = true;
}

void videoSource_NewFrame(object sender, NewFrameEventArgs eventArgs)
{
    Bitmap img = (Bitmap)eventArgs.Frame.Clone();
    // create mirror object
    Mirror mirrorObj = new Mirror(false, true);
    img = mirrorObj.Apply(img);

    processTheImage(ref img, ref proccedImage);



pbPreview.Image = img;
pbPreviewProceed.Image = proccedImage;
}

private void processTheImage(ref Bitmap image,ref Bitmap proImage)
{
int area;
Bitmap img = image;
//Create the filter to filter the image
ColorFiltering colorFilter = new ColorFiltering();
colorFilter.Red = new IntRange(red - 40, red + 40);//set red color
colorFilter.Green = new IntRange(green - 40, green + 40);//set green
colorFilter.Blue = new IntRange(blue - 40, blue + 40); //set blue

img = colorFilter.Apply(img);//apply the filter to the image
proImage = img;
    // now the image contains only the colour defined by the filter above; the rest is black
    IFilter grayscaleFilter = new GrayscaleBT709(); // create the grayscale filter
    img = grayscaleFilter.Apply(img); // apply to the image, because BlobCounter supports grayscale 8 bpp indexed images only

    BitmapData imageData = img.LockBits(new Rectangle(0, 0, img.Width, img.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format8bppIndexed); // convert to 8 bpp indexed image

    BlobCounter blobCounter = new BlobCounter(); // create counter object
    blobCounter.ProcessImage(imageData); // process locked image
    //blobCounter.ObjectsOrder = ObjectsOrder.Size; // arrange the blobs according to size
    Rectangle[] blobRects = blobCounter.GetObjectsRectangles(); // get the blobs' coordinates as an array of Rectangle
    img.UnlockBits(imageData); // release the imageData
    img.Dispose(); // release the temporary bitmap

    if (blobRects.Length != 0) // check whether there is at least one blob
    {
        rectsLength = blobRects.Length;
        rects = blobRects;
        recQuickSort(0, blobRects.Length - 1); // sort the array ascending by area
        // now the largest rectangle is the last one
        detectedX = getCenter(rects[rectsLength - 1]).X;
        detectedY = getCenter(rects[rectsLength - 1]).Y;

        if (detectedX > 380 && tracker == 1)
            serialPort.Write("r");
        if (detectedX < 280 && tracker == 1)
            serialPort.Write("l");
    }
}

private int getArea(Rectangle aRect)
{
return aRect.Width * aRect.Height;
}

public void recQuickSort(int left, int right)
{


if (right - left <= 0) // if size <= 1,
return; // already sorted
else // size is 2 or larger
{
long pivot = getArea(rects[right]); // rightmost item
// partition range
int partition = partitionIt(left, right, pivot);
recQuickSort(left, partition - 1); // sort left side
recQuickSort(partition + 1, right); // sort right side
}
} // end recQuickSort()

public int partitionIt(int left, int right, long pivot)
{
int leftPtr = left - 1;// left (after ++)
int rightPtr = right;// right-1 (after --)
while (true)
{// find bigger item
while (getArea(rects[++leftPtr]) < pivot)
; // (nop)
// find smaller item
while (getArea(rects[rightPtr]) > 0 &&
getArea(rects[--rightPtr]) > pivot)
; // (nop)
if (leftPtr >= rightPtr)// if pointers cross,
break;// partition done
else // not crossed, so
swap(leftPtr, rightPtr);// swap elements
} // end while(true)
swap(leftPtr, right);// restore pivot
return leftPtr;// return pivot location
} // end partitionIt()

public void swap(int dex1, int dex2)// swap two elements
{
Rectangle temp = rects[dex1];// A into temp
rects[dex1] = rects[dex2];// B into A
rects[dex2] = temp;// temp into B
} // end swap()


private void pbPreview_Paint(object sender, PaintEventArgs e)
{
    if (pbPreview.Image != null)
    {
        Pen rectPen0 = new Pen(Color.FromArgb(255, 0, 0), 2); // added: this pen was not defined in this scope in the original listing
        SolidBrush brushNormal = new SolidBrush(Color.FromArgb(0, 255, 0));
        SolidBrush brush0 = new SolidBrush(Color.FromArgb(255, 0, 0));
        Font font = new Font("Arial", 9);

        e.Graphics.DrawRectangle(rectPen0, rects[rectsLength - 1]);
        e.Graphics.DrawString((rectsLength - 1).ToString(), font, brush0,
            new Point(rects[rectsLength - 1].X, rects[rectsLength - 1].Y));

        lblX.Text = detectedX.ToString();
        lblY.Text = detectedY.ToString();
    }
}

private void Form1_FormClosed(object sender, FormClosedEventArgs e)
{
    if (isActivated == true) // check whether the device is activated
    {
        videoSource.SignalToStop(); // signal to stop
    }
}

private void btnStop_Click(object sender, EventArgs e)
{
    if (isActivated == true) // check whether the device is activated
    {
        videoSource.SignalToStop(); // signal to stop
        btnStart.Enabled = true;
        btnStop.Enabled = false;
    }
}
private void pbPreviewProceed_Paint(object sender, PaintEventArgs e)
{
    int relativeX, relativeY;
    int relativeWidth, relativeHeight;

    if (pbPreview.Image != null)
    {
        Pen rectPenNormal = new Pen(Color.FromArgb(0, 255, 0), 2); // for blobs
        Pen rectPen0 = new Pen(Color.FromArgb(255, 0, 0), 2); // for blobs
        Pen penCross = new Pen(Color.FromArgb(0, 0, 255), 2); // for cross

        SolidBrush brushNormal = new SolidBrush(Color.FromArgb(0, 255, 0));
        SolidBrush brush0 = new SolidBrush(Color.FromArgb(255, 0, 0));
        Font font = new Font("Arial", 9);

        // scale the rects for drawing on the resized picture box
        relativeX = Convert.ToInt32(rects[rectsLength - 1].X * 0.75);
        relativeY = Convert.ToInt32(rects[rectsLength - 1].Y * 0.75);
        relativeHeight = Convert.ToInt32(rects[rectsLength - 1].Height * 0.75);
        relativeWidth = Convert.ToInt32(rects[rectsLength - 1].Width * 0.75);

        e.Graphics.DrawRectangle(rectPen0, relativeX, relativeY, relativeWidth, relativeHeight);
        e.Graphics.DrawString((rectsLength - 1).ToString(), font, brush0, new Point(relativeX, relativeY));

        e.Graphics.DrawLine(penCross, 0, Convert.ToInt32(mouseMoveY * 0.75),
            pbPreviewProceed.Width, Convert.ToInt32(mouseMoveY * 0.75));
        e.Graphics.DrawLine(penCross, Convert.ToInt32(mouseMoveX * 0.75), 0,
            Convert.ToInt32(mouseMoveX * 0.75), pbPreviewProceed.Height);
    }
}

private Point getCenter(Rectangle refRectangel)
{
int centerX, centerY;

centerX = Convert.ToInt32(refRectangel.Width / 2) + refRectangel.X;
centerY = Convert.ToInt32(refRectangel.Height / 2) + refRectangel.Y;

return new Point(centerX, centerY);
}
