
A Wearable Stereo Camera System for Distance

Measurement towards Assistive Robot

Mohamed Elsharawy
Matriculation No. 306264

Supervisor: Prof. Dr.-Ing. Axel Gräser


Tutor: M.Sc. Qinyuan Fang
13th February, 2019

1/39
Overview
 1 Introduction
 2 Hardware
Selection criteria
Model Design
 3 Software
Theoretical Background
Method and Algorithms
Implementation
 4 Experiments and Analysis
 5 Summary and Future work

2/39
Introduction
Motivation
Project MobILe aims to give people with physical movement limitations the possibility to perform basic skills of their daily life, such as drinking and food intake, with the help of assistive robotics.
 Assistive Drinking Scenario
 The robot arm is supposed to bring a cup of drink to the mouth of the end-user autonomously.
 The precise distance between the user's mouth and the cup has to be measured in real time, especially at close range.

3/39
Introduction
 Current Solution from MobILe
 An RGB-D 435 camera mounted on the wrist of the robot
The depth sensor has a working range of 11-100 cm
 a very good solution
However,
an RGB-D camera is very expensive, and
this camera placement cannot measure the distance between the mouth and the cup.
From the viewpoint of functional safety, adding a redundant system for distance measurement is an advantage.

4/39
Introduction
 Main Work of this Project
An overhead, lightweight, low-cost wearable system
o a stereo camera built onto a helmet to observe the interaction scene beneath and to measure the distance between the cup and the mouth
Hardware: selection criteria and model design
Software:
Theoretical background
Board square size selection
Stereo calibration
Rectification
Experiments on distance measurement

5/39
Hardware: Selection criteria and model design

 Focal length (FL): each single camera has 2.1 mm (a smaller focal length gives a wider FOV at short range).

 Shutter type: only slow movements are expected from the user, so either a rolling or a global shutter is acceptable.

 Resolution: for faster image processing, 640 × 480 is a good option.

 Frame rate: for a normal PC, 30 fps is acceptable; for small computer boards, the limited resources can be a challenge.

6/39
Hardware: Selection Criteria & Model Design
ELP VGA USB Camera

 FL: 2.1 mm
 1-60 fps
 Pixel size: 6.0 µm × 6.0 µm
 Resolution: 640 × 480

7/39
Hardware: Selection Criteria & Model Design

 Hardware Synchronization

 A synchronous, not serial, capture of both camera images ensures correctly matched points.

 This is not easily possible with two independent single cameras.

• OpenCV offers two functions for a software workaround:

grab() captures the raw frames and stores them in an internal buffer;

retrieve() then decodes the buffered frames so they can be processed separately, as the program requires.
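A minimal sketch of this software synchronization with OpenCV's VideoCapture (the device indices and resolution are assumptions for illustration, not taken from the project code):

```cpp
#include <opencv2/opencv.hpp>

int main() {
    // Open both cameras (indices 0 and 1 are assumed; adjust to the actual devices).
    cv::VideoCapture capLeft(0), capRight(1);
    capLeft.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    capLeft.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    capRight.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    capRight.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    cv::Mat frameLeft, frameRight;
    while (true) {
        // grab() only latches the raw frames, so both grabs happen back to back
        // with minimal delay between the two cameras.
        capLeft.grab();
        capRight.grab();
        // retrieve() then decodes the buffered frames for further processing.
        capLeft.retrieve(frameLeft);
        capRight.retrieve(frameRight);

        cv::imshow("left", frameLeft);
        cv::imshow("right", frameRight);
        if (cv::waitKey(1) == 27) break;  // ESC quits
    }
    return 0;
}
```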

8/39
Hardware:
Selection criteria and model design

 Building of the Model

The stereo rig has to be built frontal-parallel, with both cameras on the same smooth plane.

This improves the results of stereo calibration and rectification.

 The next figures visualize this concept.

9/39
Hardware:
Selection criteria and model design

Fig.1 3D view of the holder

10/39
Hardware:
Selection criteria and model design

Fig. 2 3D view of the total structure

11/39
Hardware:
Selection criteria and model design

Fig. 3 Realization of the model

12/39
Software: Theoretical Background
Triangulation (Ideal Case)

• If the following are given:

the intrinsic matrices (camera calibration),
 R and T (stereo calibration),
row alignment between the 2 image planes (stereo rectification),
2 rays propagating from each camera center and passing through the corresponding image points,
then intersecting the rays yields the desired 3D point P.
See the next figure …

13/39
Software: Theoretical Background
(Triangulation: ideal case)
By similarity of triangles:

Z = f·T / (xl − xr)

X = xl·Z / f

Y = yl·Z / f

Fig. 4 Triangulation of a parallel frontal stereo camera

Fig. 5 Triangulation (front view)
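A short derivation of these relations, as a sketch under the ideal frontal-parallel, rectified assumption (focal length f, baseline T, and the left/right x-coordinates measured from the respective principal points):

```latex
% Pinhole projections of the 3D point P = (X, Y, Z) in the left and right cameras:
%   x_l = f X / Z,   x_r = f (X - T) / Z
\[
d = x_l - x_r = \frac{fT}{Z}
\quad\Longrightarrow\quad
Z = \frac{fT}{x_l - x_r},\qquad
X = \frac{x_l\,Z}{f},\qquad
Y = \frac{y_l\,Z}{f}
\]
```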


14/39
Software: Theoretical Background
(Camera Calibration)
Finding the extrinsic and intrinsic parameters as well as the distortion parameters of both cameras.
Extrinsics define the rotation and translation of the camera w.r.t. a world coordinate system.
Intrinsics define the mapping between camera coordinates and pixel coordinates in the image.
Distortion: radial and tangential (see the model sketch below).

Fig. 6 Radial and tangential distortion
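As a reference, the radial and tangential terms follow the standard lens-distortion model (the same form OpenCV uses; written here as a sketch with radial coefficients k1-k3, tangential coefficients p1, p2, and (x, y) as normalized camera coordinates):

```latex
\[
\begin{aligned}
x_{\text{dist}} &= x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2\,(r^2 + 2x^2)\\
y_{\text{dist}} &= y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2 p_2 x y,
\qquad r^2 = x^2 + y^2
\end{aligned}
\]
```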

15/39
Software: Theoretical Background
(Stereo Calibration)

 Purpose: make the camera images coplanar.

 Finds the rotation R and translation T between the 2 stereo cameras.
• Stereo rectification
Two types: Hartley's and Bouguet's.
Hartley's does not use calibrated cameras, so the 3D reconstruction is only determined up to
a similarity transform, if the extrinsics Rl, Rr, Tl, Tr are not known, or
a projective transform, if the intrinsics are not known either.

16/39
Software: Theoretical Background
(Stereo rectification)

Fig.7 Scaling and projective effects

17/39
Software: Theoretical Background
(Stereo rectification) : Bouguet's algorithm
• Purpose: make the images row-aligned
• Concept: if 2 axes of the 2 coordinate systems are aligned, the 3rd one is aligned as well

Fig. 7 Assumed random orientations of the camera systems. They are not rectified, but R and T are known.
18/39
Software: Theoretical Background
(Stereo rectification) : Bouguet's algorithm

Fig. 8 Apply one half of the rotation R to the right camera (rr) and the inverse half to the left camera (rl), so that both cameras become aligned along z.
19/39
Software: Theoretical Background
(Stereo rectification) : Bouguet's algorithm
Rrect = [e1 e2 e3]^T

Rr = Rrect · rr
Rl = Rrect · rl

Fig. 9 Apply Rrect to both cameras, so that their image rows become aligned as well.
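A sketch of how Rrect is typically constructed in Bouguet's method, taking the baseline direction from the translation vector T (this is the standard textbook formulation, not necessarily the exact project code):

```latex
\[
e_1 = \frac{T}{\lVert T \rVert},\qquad
e_2 = \frac{1}{\sqrt{T_x^2 + T_y^2}}\begin{bmatrix}-T_y\\ T_x\\ 0\end{bmatrix},\qquad
e_3 = e_1 \times e_2,\qquad
R_{\text{rect}} = \begin{bmatrix} e_1^{T}\\ e_2^{T}\\ e_3^{T}\end{bmatrix}
\]
```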

20/39
Software: Methods and Algorithms

Processing pipeline:

Synchronization → Chessboard selection → Stereo camera calibration → Stereo camera rectification → Distortion removal → Locating 3D points

 SW synchronization √

21/39
Software: Methods and Algorithms

Chessboard selection
Object points are collected for camera calibration using 2D checkerboards.
The objects are very near to the cameras, around 10 cm.
So, the squares of the calibration board have to be small enough for the whole board to be seen in the cameras' field of view.
 A square side of 1 cm is chosen for each of the 10 × 7 squares of the checkerboard.

Fig. 10 A checkerboard model
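A minimal sketch of the corner detection on such a board (a board of 10 × 7 squares has 9 × 6 inner corners; the exact pattern size and refinement parameters here are assumptions):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Detect the inner chessboard corners of one image and refine them to sub-pixel accuracy.
bool detectCorners(const cv::Mat& image, std::vector<cv::Point2f>& corners) {
    const cv::Size patternSize(9, 6);  // inner corners of a 10 x 7 squares board
    cv::Mat gray;
    cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);

    bool found = cv::findChessboardCorners(gray, patternSize, corners);
    if (found) {
        cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
    }
    return found;
}
```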

22/39
Software: Methods and Algorithms

Stereo camera calibration

Capture and save correct image pairs; number of image pairs used: 240.
Run the stereo calibration and save the calibration matrices.
Use the board points, the pair image points and the image size to generate an XML file with:
• the camera matrices, distortion coefficients, R and T
• the estimated error = 0.89

Fig. 11 A captured image pair
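A sketch of this step with OpenCV's stereoCalibrate(), assuming the board points and per-pair corner detections have already been collected from the 240 image pairs (variable names, flags and termination criteria are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// objectPoints: 3D board corners (z = 0, 1 cm spacing), repeated once per image pair.
// imagePointsL / imagePointsR: detected corners of each pair.
void calibrateStereoPair(const std::vector<std::vector<cv::Point3f>>& objectPoints,
                         const std::vector<std::vector<cv::Point2f>>& imagePointsL,
                         const std::vector<std::vector<cv::Point2f>>& imagePointsR,
                         cv::Size imageSize) {
    cv::Mat M1, D1, M2, D2, R, T, E, F;
    double rms = cv::stereoCalibrate(
        objectPoints, imagePointsL, imagePointsR,
        M1, D1, M2, D2, imageSize, R, T, E, F,
        cv::CALIB_SAME_FOCAL_LENGTH,
        cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 100, 1e-5));

    // Save everything needed later for rectification into an XML file.
    cv::FileStorage fs("stereo_calibration.xml", cv::FileStorage::WRITE);
    fs << "M1" << M1 << "D1" << D1 << "M2" << M2 << "D2" << D2
       << "R" << R << "T" << T << "rms" << rms;
}
```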


23/39
Software: Methods and Algorithms

Stereo camera rectification

Using the output data from the stereo calibration: M1, D1, M2, D2, R, T,
 stereoRectify() gives the matrices R1, R2, P1, P2, which are necessary to remap the input image points to their correct locations.
 R1, R2 are the rectification (rotation) matrices.
 P1, P2 are the new total camera matrices (intrinsic K · extrinsic [R T]).
 The new intrinsic matrices K are chosen based on the criterion of increasing the common FOV (see the sketch below).
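A sketch of the rectification and remapping step (the alpha parameter controls how much of the common FOV is kept; the value used here is an assumption):

```cpp
#include <opencv2/opencv.hpp>

void rectifyPair(const cv::Mat& M1, const cv::Mat& D1, const cv::Mat& M2, const cv::Mat& D2,
                 const cv::Mat& R, const cv::Mat& T, cv::Size imageSize,
                 const cv::Mat& rawLeft, const cv::Mat& rawRight,
                 cv::Mat& rectLeft, cv::Mat& rectRight, cv::Mat& Q) {
    cv::Mat R1, R2, P1, P2;
    cv::Rect validRoi1, validRoi2;
    cv::stereoRectify(M1, D1, M2, D2, imageSize, R, T, R1, R2, P1, P2, Q,
                      cv::CALIB_ZERO_DISPARITY, 0 /* alpha: keep only valid pixels */,
                      imageSize, &validRoi1, &validRoi2);

    // Precompute the undistortion + rectification maps, then remap both images.
    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(M1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(M2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
    cv::remap(rawLeft, rectLeft, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(rawRight, rectRight, map2x, map2y, cv::INTER_LINEAR);
}
```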

24/39
Software: Methods and Algorithms
Rectification & Calibration code Results

Fig. 12 Raw input pair (uncalibrated & unrectified). Notice the horizontal black line along the images; this is why we need rectification.

25/39
Software: Methods and Algorithms
Rectification & Calibration code Results

Fig. 13 Perfectly rectified image pair, with noise around the edges; this is why we need image cropping. Notice the horizontal black line along the images.

26/39
Software: Methods and Algorithms
Distortion Removal

 Define noise-free, similar rectangular zones in both camera images, crop the rest, and resize the images  the resolution is slightly reduced, but this does not affect the results (see the sketch below).

Fig. 14 Rectified image pair without the noise around the edges (after cropping). Notice the horizontal black line along the images.
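One way to realize this cropping is to intersect the valid ROIs returned by stereoRectify(); this is a sketch under that assumption, and the exact zone used in the project may differ:

```cpp
#include <opencv2/opencv.hpp>

// Crop both rectified images to the same noise-free rectangle; the resolution is slightly reduced.
void cropPair(const cv::Mat& rectLeft, const cv::Mat& rectRight,
              const cv::Rect& validRoi1, const cv::Rect& validRoi2,
              cv::Mat& outLeft, cv::Mat& outRight) {
    cv::Rect common = validRoi1 & validRoi2;   // intersection of both valid regions
    outLeft = rectLeft(common).clone();
    outRight = rectRight(common).clone();
}
```

Cropping both images with the same rectangle preserves the row alignment; only the principal point is shifted by the ROI offset.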

27/39
Software: Methods and Algorithms
Locating 3D points

 stereoRectify() from OpenCV also outputs a reprojection matrix Q.

Its elements correspond to the equations from the triangulation slide.
It calculates the actual X', Y', Z' based on the image point location (x, y) in the left image and the disparity between this point and its corresponding point in the other image.
It contains the values generated by stereoRectify(): cx, cy, f, b.
 The actual points can be calculated from the following equation:
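The equation referred to here is presumably the standard OpenCV reprojection through Q: for a pixel (x, y) in the left rectified image with disparity d, and with cx, cy, f and the baseline b packed into Q (sign conventions of the actual OpenCV matrix omitted for brevity):

```latex
\[
\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix}
= Q \begin{bmatrix} x \\ y \\ d \\ 1 \end{bmatrix},
\qquad
X' = \frac{X}{W} = \frac{(x - c_x)\,b}{d},\quad
Y' = \frac{Y}{W} = \frac{(y - c_y)\,b}{d},\quad
Z' = \frac{Z}{W} = \frac{f\,b}{d}
\]
```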

28/39
Experiments and Analysis
Measurement of distance between 2 points

 Purpose: check how large the error between the actual and the measured values is.
Method: HoughCircles() is used to determine the centers of the circles (see the sketch after the figure).
There are 5 experiments. The first 4 use a plane parallel to the camera pair.
 For each of the first 3, there is one fixed depth (13, 8, 10.5 cm) and the distance between the 2 red circles is incremented in steps of 1 cm.
 The 4th has a fixed distance between the circles and an incremented depth.
The 5th is the same as the 4th, but with the plane tilted towards the cameras.

Fig. 15 Right and left images for the parallel and tilted positions
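A sketch of the per-frame measurement used in these experiments, assuming rectified grayscale inputs: detect the circle centers with HoughCircles() in both images, form the disparity d = x_left − x_right for each matched center, reproject through Q, and take the Euclidean distance (all HoughCircles parameters below are illustrative and need tuning):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Detect circle centers in one grayscale rectified image.
std::vector<cv::Point2f> findCircleCenters(const cv::Mat& gray) {
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, 20, 100, 30, 5, 50);
    std::vector<cv::Point2f> centers;
    for (const auto& c : circles) centers.emplace_back(c[0], c[1]);
    return centers;
}

// Reproject a left-image point with its disparity to 3D using the Q matrix from stereoRectify().
cv::Point3f toPoint3D(const cv::Point2f& pLeft, double disparity, const cv::Mat& Q) {
    cv::Mat v = (cv::Mat_<double>(4, 1) << pLeft.x, pLeft.y, disparity, 1.0);
    cv::Mat p = Q * v;                      // homogeneous 3D point [X Y Z W]
    double w = p.at<double>(3);
    return cv::Point3f(p.at<double>(0) / w, p.at<double>(1) / w, p.at<double>(2) / w);
}

// Distance between two measured 3D points (e.g. the two red circle centers).
double distance3D(const cv::Point3f& a, const cv::Point3f& b) {
    cv::Point3f d = a - b;
    return std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
}
```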
29/39
Experiments and Analysis
Measurement of distance between 2 points

Experiment 1

30/39
Experiments and Analysis
Measurement of distance between 2 points

Experiment 2

31/39
Experiments and Analysis
Measurement of distance between 2 points

Experiment 3

32/39
Experiments and Analysis
Measurement of distance between 2 points

Experiment 4

33/39
Experiments and Analysis
Measurement of distance between 2 points

Experiment 5

34/39
Experiments and Analysis
Measurement of distance between 2 points

Observations:

• At a fixed depth,
 the error increases with the separation distance between the 2 points;
 the maximum recorded error is around 7 mm, for a separation distance of 11 cm.

• At a fixed distance between the two points,
 with increasing depth, the error stays almost constant at about 6 mm.

Error causes:
 the Hough method used for reading the circle centers,
 the setup of the experiments,
 the calibration and rectification of the cameras.

35/39
Summary

A lightweight, low-cost wearable stereo camera has been built.

Methods for synchronization, calibration and rectification have been developed and implemented.

Experiments for distance measurement have been conducted and have shown that, at a working distance of around 10 cm, the error is in the range of 0-7 mm.

36/39
Suggestions for Future work

Implementation of an online algorithm using Raspberry Pi or BeagleBone computer boards.

Alternatively, a faster option is a hardware implementation using VHDL.

Use of real feature detection, e.g. the face and the cup as traceable features, instead of the experimental algorithm of detecting circle centers.

Use of a wireless communication medium instead of cables to the PC.

Integration of the measurements with a robot system.

37/39
Thank You

38/39
Any Questions ?

39/39
