
Calibration

For the computer to interpret the real-world coordinates of the peg, we have to convert the peg's position from pixels into centimeters. To do this, we need two fixed, distinct reference points for the computer to lock onto. I chose matte black for the points' color because it does not reflect much light and will not blend into other colors.
Then, we let the camera take a set of sample photos of the two points; in my work, this is one hundred samples. For each sample, we run the color and shape detection algorithm to pick up the points. The algorithm returns the pair's pixel coordinates and appends them to a list. Due to the varying light conditions, the returned values may not be consistent, so I take the most common coordinate tuple in the list as the result, since a large percentage of the tuples in the list agree with it.
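The majority-vote step above can be sketched as follows. This is a minimal illustration, not the author's actual script; the sample values and the function name `most_common_pair` are hypothetical.

```python
from collections import Counter

def most_common_pair(samples):
    """Return the most frequent (P1, P2) coordinate pair among the samples.

    Each sample is a tuple ((p1x, p1y), (p2x, p2y)) in pixels, as produced
    by the color/shape detection step. Counter.most_common(1) picks the
    value that the largest share of samples agree on, discarding outlier
    detections caused by varying light conditions.
    """
    counts = Counter(samples)
    pair, _ = counts.most_common(1)[0]
    return pair

# Hypothetical run: three of four detections agree, one is an outlier.
samples = [((10, 20), (110, 20)),
           ((10, 20), (110, 20)),
           ((11, 19), (109, 21)),
           ((10, 20), (110, 20))]
print(most_common_pair(samples))  # ((10, 20), (110, 20))
```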

(Figure: the two calibration points $P_1$ and $P_2$ as seen by the camera.)

Next, we calculate the distance between the two points in pixels; we also have to take into account the angle between the line connecting the points and the camera axis.

$$L = \sqrt{(P_{1X} - P_{2X})^2 + (P_{1Y} - P_{2Y})^2}$$
$$\alpha = \arctan\left(\frac{P_{2Y} - P_{1Y}}{P_{2X} - P_{1X}}\right)$$
In the real world, I deliberately placed the two points exactly 20 cm apart, so we can calculate the ratio between a length in pixels and in centimeters:
$$ratio = \frac{L}{20}$$
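The calibration quantities above can be computed in a few lines. This is a sketch rather than the author's script; `math.atan2` is used in place of a plain `atan` so the tilt angle is also defined when the two points are vertically aligned.

```python
import math

def calibrate(p1, p2, real_distance_cm=20.0):
    """Compute the pixel distance L, the camera-axis tilt alpha, and the
    pixel-per-centimeter ratio from the two calibration points.

    p1, p2: the two detected points in pixel coordinates.
    real_distance_cm: their known real-world separation (20 cm here).
    """
    L = math.hypot(p1[0] - p2[0], p1[1] - p2[1])       # pixel distance
    alpha = math.atan2(p2[1] - p1[1], p2[0] - p1[0])   # tilt of the P1-P2 line
    ratio = L / real_distance_cm                       # pixels per centimeter
    return L, alpha, ratio

# Hypothetical points 200 px apart on a level line:
L, alpha, ratio = calibrate((100, 200), (300, 200))
print(L, alpha, ratio)  # 200.0 0.0 10.0
```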
This still does not suffice, since we have to determine whether the object center lies to the left or to the right of the Y-axis (left takes a negative value and right takes a positive value).

(Figure: the four quadrants around the calibration point, with the object center at $(x_{center}, y_{center})$ and its offsets $a$ and $b$.)
 Quadrant I: $a > 0$ and $b > 0$
 Quadrant II: $a < 0$ and $b > 0$
 Quadrant III: $a < 0$ and $b < 0$
 Quadrant IV: $a > 0$ and $b < 0$

Denote $x_{center}$ and $y_{center}$ as the pixel coordinates of the object center:
$$a = x_{center} - P_{1X}$$
$$b = y_{center} - P_{1Y}$$
$$c = \sqrt{a^2 + b^2}$$
If $a > 0$ and $b < 0$, or $a < 0$ and $b < 0$ (3rd or 4th quadrant):
$$\beta = -\arctan\left(\frac{b}{a}\right)$$
If $a < 0$ and $b > 0$, or $a > 0$ and $b > 0$ (1st or 2nd quadrant):
$$\beta = 2\pi - \arctan\left(\frac{b}{a}\right)$$
$$\gamma = \beta + \alpha$$
The corrected (x, y) coordinates of the peg after transposition:
$$x_{center_t} = c\,\sin(\gamma)$$
$$y_{center_t} = c\,\cos(\gamma)$$
The real-world coordinates of the object center can be obtained by dividing the transposed coordinates by the ratio:
$$x_{center_{real}} = \frac{x_{center_t}}{ratio} \qquad y_{center_{real}} = \frac{y_{center_t}}{ratio}$$
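The whole pixel-to-real-world transposition can be sketched as one function. This is an illustrative implementation of the quadrant rule, not the author's script; it assumes the offset $a$ is nonzero (the division inside $\arctan(b/a)$ would otherwise fail).

```python
import math

def to_real_world(center, p1, alpha, ratio):
    """Transpose an object's pixel center into real-world centimeters.

    center: the object center (x, y) in pixels.
    p1:     the calibration point P1 in pixels.
    alpha:  the camera-axis tilt from the calibration step.
    ratio:  pixels per centimeter from the calibration step.
    """
    a = center[0] - p1[0]           # horizontal offset from P1
    b = center[1] - p1[1]           # vertical offset from P1
    c = math.hypot(a, b)            # distance from P1 in pixels
    if b < 0:                       # quadrants III and IV
        beta = -math.atan(b / a)
    else:                           # quadrants I and II
        beta = 2 * math.pi - math.atan(b / a)
    gamma = beta + alpha
    x_t = math.sin(gamma) * c       # transposed pixel coordinates
    y_t = math.cos(gamma) * c
    return x_t / ratio, y_t / ratio # scale pixels down to centimeters
```

For example, with $P_1 = (100, 200)$, an object center at $(130, 160)$ gives $a = 30$, $b = -40$, $c = 50$; with $\alpha = 0$ and a ratio of 10 px/cm this yields $(4.0, 3.0)$ cm.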

Since the objects used for the pick-and-place process have specific shapes (circle, rectangle, triangle, ...) and lie flat on the ground, it would be difficult for the jaw to grasp their edges without slipping. I added a raised surface on top of each object and colored it white so that the manipulator has an easier time gripping it. The challenge is that the raised surface may sit at an odd angle relative to the jaw, so I wrote a script that lets the computer determine the grabbing surface's angle and turn the jaw parallel to it.
The method calls for fitting a rotated rectangle to the specified object. Using OpenCV's cv2.minAreaRect, we get the following information:
 Center of the object (x and y coordinates)
 Dimensions of the object (width and height)
 Angle of rotation (always a negative value)
Denote the width and height as $w$ and $h$, the angle of rotation as $\theta$, and the perceived angle of the raised surface as $\theta_{real}$.
If $w < h$:
$$\theta_{real} = 90 - \theta$$
If $w > h$:
$$\theta_{real} = -\theta$$
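The angle correction can be written as a small helper that takes the values unpacked from cv2.minAreaRect. This sketch assumes the older OpenCV convention in which the reported angle lies in (-90, 0], which is what the rule above relies on; newer OpenCV versions report the angle differently.

```python
def surface_angle(width, height, theta):
    """Convert the rotation angle reported by cv2.minAreaRect, i.e.
    (center, (width, height), theta), into the perceived angle of the
    grabbing surface.

    Which edge the angle is measured against depends on whether the
    reported width is smaller than the height, hence the two branches.
    """
    if width < height:
        return 90 - theta
    return -theta

# e.g. a tall rectangle (w=40, h=100) reported at theta = -30 degrees:
print(surface_angle(40, 100, -30))   # 120
# the same rectangle reported with swapped sides (w=100, h=40):
print(surface_angle(100, 40, -30))   # 30
```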
