Abstract
Methods
Results
Coding:
An open-source C++ code base from Brown University
was used[1]. The code was modified to make it
compatible with the specific equipment used to
carry out the experiment. Visual Studio was used
to modify and run the code; MeshLab, which is
also an open-source program, was then used for
surface reconstruction from the point clouds.
1. Implement code for camera and projector
calibrations[1, derived from 3].
2. Implement the structured light scanner.
3. Save the captured images and load them into MeshLab.
4. Use MeshLab to reconstruct surfaces from the point
clouds and produce the final 3D model of the scanned object.
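The stripe patterns projected in step 2 are typically Gray-coded so that adjacent projector columns differ in only one bit. A minimal sketch of generating one bit-plane of such a pattern (an illustration only, not the Brown implementation; the function names here are our own):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Convert a binary column index to its Gray code. Consecutive columns
// then differ in exactly one bit, which makes decoding robust to
// misclassifying a single stripe boundary.
uint32_t binaryToGray(uint32_t n) { return n ^ (n >> 1); }

// Build one bit-plane of the projected pattern: a pixel is bright (255)
// where bit `plane` of the Gray-coded column index is set.
std::vector<uint8_t> grayStripePlane(int width, int plane) {
    std::vector<uint8_t> row(width);
    for (int x = 0; x < width; ++x)
        row[x] = ((binaryToGray(static_cast<uint32_t>(x)) >> plane) & 1) ? 255 : 0;
    return row;
}
```

A projector of width W needs about log2(W) such planes; projecting each plane both horizontally and vertically gives the pattern series used by the scanner.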
Background
3D images can be used in research to create
accurate models of specimens. Widely used
camera systems capture images in 2D, but
accurate measurements and models require the
specimens to be viewed as 3D images. One
solution to this problem is to use structured
light to take scans that can be interpreted as 3D
images. These scans use a series of horizontal and
vertical Gray-code patterns. Scans of an object can
be acquired using a programming language such as
C++. The code for the calibrations and scanning
was an open-source release from Brown
University[1], built on various C++ libraries
including OpenCV. The complication with the
release was that it was not compatible with current
operating systems, so modifications were
necessary to run the code. The scans produce
horizontal and vertical encodings of the
specimen or object being scanned. The individual
encodings are then decoded to create a depth
map. The depth map is used to construct a
point cloud, which can be saved as a .wrl file.
The models can be opened with a program such
as MeshLab, which is used to process and edit 3D
point clouds. In MeshLab these individual points
are reconstructed into a surface, which can be
meshed into a well-defined 3D image.
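The decoding step described above, recovering a projector coordinate for each camera pixel from the series of stripe images, can be sketched as follows (an illustration of Gray-code decoding under our own naming, not the Brown implementation itself):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Invert the Gray code: recover the binary projector column from the
// Gray-coded bits read off the captured stripe images.
uint32_t grayToBinary(uint32_t g) {
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        g ^= g >> shift;
    return g;
}

// For one camera pixel, bits[i] is 1 if the pixel was bright in
// bit-plane i (most significant plane first). The decoded column,
// combined with the row from the horizontal patterns and the
// camera/projector calibration, fixes the 3D point that ends up
// in the point cloud.
uint32_t decodeColumn(const std::vector<int>& bits) {
    uint32_t gray = 0;
    for (int b : bits) gray = (gray << 1) | (b & 1);
    return grayToBinary(gray);
}
```

For example, a pixel that reads bright in all three of three bit-planes has Gray code 111, which decodes to projector column 5.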
Measurement           Actual (mm)   Imaged (mm)   % Error
Top to First Ring          68           68.32        0.47
Top to Second Ring         74           73.54       -0.62
Across Top                 58           55.57       -4.19
Inner Rectangle            31           30.45       -1.77
Between Rings               7            7.15        2.14
Conclusion
Figure 4. Untouched scan image
Physical Setup:
The setup sits on an optical board and is
enclosed in a dark box made of painted acrylic.
The arrangement gives the camera a horizontal
view of the specimen. The camera and pico
projector (AAXA P4X) are mounted a fixed
distance from the platform on which the
specimen is placed.
In conclusion:
A structured light scanner was created using
C++ code and a camera-projector pair.
The system was calibrated using a checkerboard
pattern.
When calibration is performed properly, less
than 5% error (<0.5mm) can be achieved
between the actual object and the reconstructed
scan.
Calibration is the key issue in achieving a
reasonably accurate scan.
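The accuracy figures come from comparing caliper measurements of the physical object with the same distances measured on the reconstructed scan. A minimal sketch of that check, using values from the results:

```cpp
#include <cassert>
#include <cmath>

// Signed percent error between a distance measured on the physical
// object and the same distance measured on the reconstructed scan.
double percentError(double actualMm, double imagedMm) {
    return (imagedMm - actualMm) / actualMm * 100.0;
}
```

For example, the top-to-first-ring distance gives percentError(68.0, 68.32), roughly 0.47%, and a negative result (as for the across-top distance) means the scan came out smaller than the object.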
References
1. http://mesh.brown.edu/3DPGP-2009/index.html
2. http://code.google.com/p/structuredlight/downloads/detail?name=ThreePhase-1-win.zip&can=2&q=
3. Zhang S, Huang PS. Novel method for structured light system calibration. Opt. Eng. 2006;45(8):083601.
Acknowledgements
Figure 2. Setup on the optical board