
Abstract:

I. INTRODUCTION

In the first part of this assignment we are required to use single view metrology, which describes affine 3D properties of a scene that can be measured from a single perspective image. It is assumed that the scene contains parallel lines and planes. A vanishing point is the image of a point at infinity in a given direction, and the vanishing points of all directions parallel to a plane lie on that plane's vanishing line. Note that different planes have different vanishing lines.

Fig 1: Vanishing lines obtained from the two vanishing points v1 and v2
The vanishing line of a reference plane can be obtained from image measurements, together with a vanishing point for a reference direction that is not parallel to the plane. With these key concepts in place, three kinds of measurement become possible: first, distances between planes that are parallel to the reference plane; second, measurements on one of those planes, compared with measurements on any other parallel plane; and third, of prime concern, the camera's location expressed in terms of both the reference plane and the reference direction. Note that we are not interested in the camera's internal parameters.
II. GEOMETRY

For images of structured scenes we can assume that the vanishing line of a chosen reference plane, together with a vanishing point for a direction not parallel to that plane, can be obtained from image measurements. Unwanted effects such as radial distortion can be removed and are therefore ignored here. The methodology for detecting the required lines in the images and computing the vanishing points and vanishing lines is as follows.
Edge detection
Straight line segments are detected with a Canny edge detector at sub-pixel accuracy. The subsequent steps are edge linking, segmentation of the edge chains at points of high curvature, and finally straight-line fitting to the resulting chain segments by orthogonal regression. It is worth noting that occlusion is the prime reason why a single physical edge appears as several broken line segments in the image. Such broken segments can be merged by algorithms that again rely on orthogonal regression; merging improves the accuracy of both the location and the orientation of the fitted lines.
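To make this stage concrete, the following is a minimal sketch in Python with OpenCV and NumPy. The file name scene.jpg and the Canny and Hough parameters are illustrative assumptions, and HoughLinesP merely stands in for the edge-linking and chain-segmentation step described above; the final fit to the supporting edge pixels is done by orthogonal (total least squares) regression.

```python
import cv2
import numpy as np

def fit_line_orthogonal(points):
    """Fit a*x + b*y + c = 0 to 2D points by orthogonal (total least squares) regression."""
    centroid = points.mean(axis=0)
    # The principal axis of the centred point cloud gives the line direction.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])     # unit normal to the line
    return np.array([normal[0], normal[1], -normal.dot(centroid)])

img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)                          # edge map (sub-pixel refinement omitted)

# Rough straight segments stand in for the edge-linking / chain-segmentation stage.
segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=40, maxLineGap=5)

# Refit each segment to the nearby edge pixels by orthogonal regression.
ys, xs = np.nonzero(edges)
edge_pts = np.column_stack([xs, ys]).astype(float)
fitted = []
for x1, y1, x2, y2 in (segments[:, 0] if segments is not None else []):
    p, q = np.array([x1, y1], float), np.array([x2, y2], float)
    d = (q - p) / np.linalg.norm(q - p)
    t = (edge_pts - p) @ d                               # projection along the segment
    perp = np.linalg.norm(edge_pts - (p + np.outer(t, d)), axis=1)
    support = edge_pts[(t >= 0) & (t <= np.linalg.norm(q - p)) & (perp < 2.0)]
    if len(support) >= 2:
        fitted.append(fit_line_orthogonal(support))
```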
Estimating vanishing points and scene calibration
The vanishing points and vanishing lines can be estimated from the provided image alone, which removes the need to know the relative geometry between the camera and the viewed scene. It is also worth noting that vanishing points and lines may lie outside the physical image; this has no effect on the computation.
Computation of the Vanishing Point
All lines in the scene that are parallel to the reference direction have images that intersect in the same vanishing point. Two such lines therefore suffice to define the vanishing point; if more than two are available, an estimation algorithm such as the L&Z algorithm can be applied to obtain it.
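As a concrete illustration, here is a minimal sketch in Python/NumPy of this intersection computation, with lines in homogeneous form a*x + b*y + c = 0. The SVD branch is a simple least-squares stand-in for the maximum-likelihood estimator referenced above, not a reimplementation of it, and the example coordinates are made up.

```python
import numpy as np

def vanishing_point(lines):
    """Estimate the vanishing point of image lines given as homogeneous rows (a, b, c).

    Two lines: exact intersection via the cross product.
    More lines: least-squares point minimising |l_i . v| (a simple stand-in for ML).
    """
    L = np.asarray(lines, dtype=float)
    if len(L) == 2:
        v = np.cross(L[0], L[1])
    else:
        _, _, vt = np.linalg.svd(L)
        v = vt[-1]                                  # null-space direction of the stacked lines
    return v / v[2] if abs(v[2]) > 1e-12 else v     # leave points at infinity unscaled

# Example: images of two parallel scene lines meeting at a finite vanishing point.
l1 = np.cross([0.0, 0.0, 1.0], [4.0, 1.0, 1.0])     # line through (0, 0) and (4, 1)
l2 = np.cross([0.0, 2.0, 1.0], [4.0, 2.5, 1.0])     # line through (0, 2) and (4, 2.5)
print(vanishing_point([l1, l2]))                    # -> [16.  4.  1.]
```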
Computation of the Vanishing Line
The images of lines that are parallel to each other and to a plane intersect in a point on that plane's vanishing line. Two such sets of lines with different directions therefore suffice to determine the plane's vanishing line. If more than two orientations are available, a maximum-likelihood estimation algorithm gives the result.
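Given two such vanishing points, the plane's vanishing line is simply their join, computed with the same homogeneous-coordinate machinery; the example coordinates below are hypothetical.

```python
import numpy as np

def vanishing_line(v1, v2):
    """Vanishing line of a plane from two vanishing points of directions parallel to it."""
    l = np.cross(np.asarray(v1, float), np.asarray(v2, float))
    return l / np.linalg.norm(l[:2])                # normalise so (a, b) is a unit normal

# Example with two vanishing points taken as hypothetical image coordinates.
v1 = np.array([16.0, 4.0, 1.0])
v2 = np.array([-300.0, 10.0, 1.0])
print(vanishing_line(v1, v2))
```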

Part 2. STEREO DISPARITY MAP COMPUTATION

I. Disparity Map

Regions of uniform intensity - A substantial problem arises when nearly all pixels in a large region have the same intensity. Intensity-based minimization then fails, because the differences between candidate matches are negligible, which makes the resulting disparities arbitrary within the region. A propagation-based approach is a solution to this problem.
Maximum Disparity Threshold - The maximum disparity threshold corresponds to the minimum object distance and is a key criterion for differentiating between objects. An appropriate value keeps the algorithm from being confused by small intensity variations between objects in the image, so it should be chosen carefully.

Difference between SAD and SSD - SSD is the more sensitive of the two because the differences are squared, which also makes it more affected by noise. Squaring makes even the smallest errors significant while the larger ones are amplified further, so the matching cost deteriorates under noise. (A block-matching sketch contrasting the two costs follows this list.)
Effect of Window Size - The main effect of the window size on the disparity map is blurring. Increasing the window size handles noise better, but the price paid is a blurrier disparity map and hence reduced overall accuracy.
Laplacian of Gaussian - This operator is generally used to extract edge information. It performs well at edges, but in regions of uniform intensity the algorithm underperforms.
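The points above can be tied together with a minimal block-matching sketch, assuming a rectified grayscale pair. The window size, maximum disparity, and SAD/SSD switch are exactly the parameters discussed in the list; the brute-force loops are written for clarity rather than speed.

```python
import numpy as np

def disparity_map(left, right, window=5, max_disp=64, cost="ssd"):
    """Naive block-matching disparity for a rectified grayscale pair (left image as reference).

    `window` is the (odd) matching window size, `max_disp` the maximum disparity searched,
    and `cost` selects the SAD or SSD matching cost.
    """
    h, w = left.shape
    half = window // 2
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    disp = np.zeros((h, w), dtype=np.float32)

    for y in range(half, h - half):
        for x in range(half, w - half):
            patch_l = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            # Search only disparities that keep the right-image window inside the image.
            for d in range(0, min(max_disp, x - half) + 1):
                patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                diff = patch_l - patch_r
                c = np.sum(diff * diff) if cost == "ssd" else np.sum(np.abs(diff))
                if c < best_cost:
                    best_cost, best_d = c, d
            disp[y, x] = best_d
    return disp
```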

II. Rectification

Rectification reduces the work needed to match pixels. It is effectively another way of achieving perfectly aligned cameras, which is needed because physically aligning the cameras is impractical. The images obtained from the two cameras are rectified by a linear transformation, provided they are not geometrically distorted. There are three main approaches to rectification: I.) planar rectification, II.) cylindrical rectification and III.) polar rectification. Rectified images satisfy the following properties:
i.) All epipolar lines are parallel to the horizontal axis.
ii.) Corresponding points have identical vertical coordinates.

Fig 2. Image rectification
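A minimal sketch of the planar-rectification route using OpenCV is given below, assuming a calibrated pair; K1, K2 (intrinsics), d1, d2 (distortion coefficients), and R, T (relative pose of the right camera) are placeholders for calibration data obtained beforehand.

```python
import cv2

def rectify_pair(img_l, img_r, K1, d1, K2, d2, R, T):
    """Planar rectification of a calibrated stereo pair with OpenCV."""
    size = (img_l.shape[1], img_l.shape[0])                 # (width, height)
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, map1l, map2l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, map1r, map2r, cv2.INTER_LINEAR)
    return rect_l, rect_r, Q                                # Q reprojects disparity to 3D
```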

III. Matching

Stereo correspondence is subject to several constraints. A few of them are: I.) similarity of pixel intensity, which requires that the color and intensity of a point should not change with the viewpoint; II.) uniqueness, an important constraint, which demands that each pixel in the left image correspond to exactly one pixel in the right image; and III.) smoothness and continuity, which states that disparity varies smoothly over the surface of a physical object and is discontinuous only at object boundaries.

IV. Depth

Depth for a scene can be obtained in several ways. Two possible approaches are I.) using a laser range camera and II.) using a stereo image pair with triangulation. In the second approach the aim is to match pixels between the two images; corresponding pixels are separated horizontally by the disparity, from which depth is recovered by triangulation.
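For the triangulation route, depth follows directly from the disparity of a rectified pair via Z = f * B / d. The sketch below assumes the focal length in pixels and the baseline are known from calibration.

```python
import numpy as np

def depth_from_disparity(disp, focal_px, baseline):
    """Triangulated depth for a rectified pair: Z = f * B / d.

    `focal_px` is the focal length in pixels and `baseline` the camera separation,
    both assumed known from calibration. Zero disparities map to infinite depth.
    """
    disp = np.asarray(disp, dtype=np.float32)
    depth = np.full_like(disp, np.inf)
    valid = disp > 0
    depth[valid] = focal_px * baseline / disp[valid]
    return depth
```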

