
MACHINE VISION

INTRODUCTION
MACHINE VISION: Acquisition of image data, followed by the processing and
interpretation of these data by a computer for some useful application such as
inspection, counting, etc.

USES OF ROBOT VISION
- Vision-based guidance of a robot arm
- Inspection for close dimensional tolerances
- Improved object recognition
- Improved part location capabilities

ROBOT VISION REQUIREMENTS
- Cheaper computational devices
- Increased speed
- Better algorithms

INTRODUCTION

MACHINE VISION
1) Captures images
2) Extracts useful information from these images

What is an image?
An image is a projection of a 3-D world onto a 2-D plane.
An image is captured at a given instant of time.
Therefore, one image or many images at regular intervals of time can be taken.

TYPES OF MACHINE VISION SYSTEM


2-D system

- Most commonly used system
- For measuring dimensions of parts
- Verifying the presence of components
- Checking features of flat or semi-flat surfaces

3-D system

- Not used as frequently
- Requires special lighting techniques
- Sometimes 2 cameras are required to obtain a stereoscopic view of the scene

TYPES OF MACHINE VISION SYSTEM


Binary system
The video signal is divided into white (1) or black (0) based on a threshold level.

Grey scale system
The brightness graduation is divided into 256 levels.

Machine Vision System

Hardware
[Block diagram: Lighting illuminates the scene; Camera -> A/D converter -> Frame grabber -> Computer (processor) with stored programs/algorithms, auxiliary storage, monitor and keyboard -> I/F -> Robot controller (task)]

Functions
1. Image acquisition & digitizing of image data
- Signal conversion: sampling, quantization, encoding
- Image storage
- Lighting

2. Image processing & analysis
a) Data reduction
b) Segmentation techniques
c) Feature extraction
d) Object recognition

3. Applications
- Inspection
- Material handling
- Safety monitoring

1. Image acquisition and digitization


Image acquisition
Image acquisition and digitization is accomplished using a video camera
and a digitizing system to store the image data for subsequent analysis.
[Figure: vidicon camera]


1. Image acquisition and digitization


Lighting
The scene captured by the vision camera must be well illuminated and illumination
must be constant over time.

There are 4 categories of lighting systems:


1) Front lighting
2) Back lighting

3) Side lighting
4) Structured lighting

1. Image acquisition and digitization


Lighting
1. Front lighting
The light source is placed on the same side as the camera.

Produces reflected light from the object, allowing inspection of surface features.

[Figure: front lighting - camera and light source on the same side of the object, with light-field and dark-field regions]

1. Image acquisition and digitization


Lighting
2. Back lighting
The light source is placed behind the object being viewed by the camera.

This creates a dark silhouette of the object that contrasts sharply with the light background.

Used to inspect part dimensions and distinguish part outlines.


[Figure: back lighting - light source and diffuser behind the object, camera viewing the object's shadow; example: silhouette of a tensile test specimen]

1. Image acquisition and digitization


Lighting
3. Side lighting
The light source is placed at the side of the surface to be illuminated.
Generally used for finding surface irregularities, flaws, and defects on surfaces.

[Figure: side lighting - light source at the side of the object, camera above]

1. Image acquisition and digitization


Lighting
4. Structured lighting
Makes use of patterns of light instead of diffused light.
2 sheets of light meet at a point.

[Figure: front and top views of the light pattern with no object present and with an object present]

When the object is in the vicinity of the light, a different pattern is formed.
This pattern is studied to extract information about the object.

1. Image acquisition and digitization


Analog to digital conversion
A-D conversion is done in 3 steps
1)Sampling

2)Quantization

3) Encoding

1. Sampling
A process in which the analog signal obtained by scanning a single line is
sampled at regular intervals to obtain a discrete-time analog signal.

THE GREATER THE NUMBER OF SAMPLING POINTS, THE GREATER THE NUMBER OF PIXELS.

[Figure: analog voltage signal vs. time, and the same signal sampled at regular intervals]

1. Image acquisition and digitization


1. Sampling (contd..)
Example: A vision system uses a vidicon tube. An analogue video signal is
generated for each of the 512 lines comprising the faceplate. The sampling
capability of the A-D converter is 100 nanoseconds; this is the cycle time
required to complete the A-D conversion process for 1 pixel. Using the American
standard of 33.33 milliseconds (1/30 sec) to scan the entire faceplate consisting
of 512 lines, determine the number of pixels that can be processed per line.
Sampling time (time to process a single pixel) = 100 × 10^-9 s
Time to scan the entire faceplate (512 lines) = 33.33 × 10^-3 s
Therefore, time to scan 1 line = 33.33 × 10^-3 / 512 = 65.1 × 10^-6 s/line
Hence, number of pixels per line = (65.1 × 10^-6) / (100 × 10^-9) = 651 pixels/line

Thus, the sampling rate determines the number of pixels horizontally, and the
number of scanning lines determines the number of pixels vertically.
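
A minimal Python sketch of the pixels-per-line arithmetic above, using the values given in the problem statement (100 ns per pixel, 1/30 s per frame, 512 lines):

    # Pixels per line for the vidicon example above
    sampling_time = 100e-9      # A-D conversion time per pixel: 100 ns
    frame_time = 1.0 / 30       # time to scan the entire faceplate: ~33.33 ms
    lines = 512                 # number of scan lines on the faceplate

    time_per_line = frame_time / lines                    # ~65.1 microseconds per line
    pixels_per_line = int(time_per_line / sampling_time)
    print(pixels_per_line)                                # -> 651 pixels/line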

1. Image acquisition and digitization


Analog to digital conversion
2. Quantization
Quantization is a process wherein the amplitude levels of the discrete voltage
signals are assigned a value corresponding to the grey scale used in the system.

The number of quantization levels depends on the bit storage capacity of the A-D converter:

Number of quantization levels = 2^n

If n = 8 bits, the converter allows us to quantize 2^8 = 256 different values.

1. Image acquisition and digitization


Analog to digital conversion
3. Encoding
Encoding is the process of converting each quantized amplitude level into a digital
code representing that level as a sequence of binary digits.


EXAMPLE OF QUANTIZATION AND ENCODING (0-5 V signal, 8-bit converter)

VOLTAGE RANGE (V)     BINARY NUMBER     GREY SCALE
0      - 0.0195       0000 0000         0
0.0195 - 0.0390       0000 0001         1
0.0390 - 0.0585       0000 0010         2
...                   ...               ...
4.9610 - 4.9805       1111 1110         254
4.9805 - 5.0000       1111 1111         255

Quantization assigns each sampled voltage to a voltage range (grey level);
encoding represents that level as a binary number.
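
A short Python sketch of quantization and encoding, assuming a 0-5 V signal and an 8-bit converter as in the table above (the function and variable names are illustrative):

    # Quantize a sampled voltage into one of 2**n grey levels, then encode it in binary
    def quantize_and_encode(voltage, v_max=5.0, n_bits=8):
        levels = 2 ** n_bits                            # 256 grey levels for n = 8
        step = v_max / levels                           # ~0.0195 V per level
        grey = min(int(voltage / step), levels - 1)     # quantization
        code = format(grey, "0{}b".format(n_bits))      # encoding as binary digits
        return grey, code

    print(quantize_and_encode(4.97))    # -> (254, '11111110')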

2. Image processing and analysis


Image processing is a procedure of extracting useful information
from the image captured and digitized in the previous steps
Steps in Image processing and analysis
- Data reduction: digital conversion, windowing
- Segmentation: thresholding, region growing, edge detection
- Feature extraction
- Object recognition

2. Image processing and analysis


Data reduction
Main objective of image data reduction is to reduce the volume of data.
Steps in data reduction
1) Digital conversion
2) Windowing

Digital conversion: Process of reducing the number of grey levels used by the
machine vision system.
Example: For an image digitized at 128 points per line and 128 lines, determine
(i) the total number of bits required to represent the grey-level values if an 8-bit
converter is used to indicate the various shades of grey, and
(ii) the reduction in data volume if only black and white values are digitized.
(A calculation sketch follows this slide.)
Windowing: Only a portion of the total image is used for image processing and
analysis.
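
A sketch of the data-volume arithmetic for the example above (128 x 128 pixels, 8-bit grey scale versus 1-bit black and white):

    # Data volume for a 128 x 128 image: 8 bits/pixel versus 1 bit/pixel
    pixels = 128 * 128              # 16,384 pixels
    bits_grey = pixels * 8          # 131,072 bits with an 8-bit converter
    bits_binary = pixels * 1        # 16,384 bits if only black/white is stored
    print(bits_grey, bits_binary, bits_grey // bits_binary)   # reduction factor of 8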

2. Image processing and analysis


Segmentation
Segmentation techniques are intended to define and separate regions of
interest having similar characteristics within the image.
Thresholding
Conversion of each pixel intensity level into a binary value, representing either
white or black.

It is done by comparing the intensity value at each pixel with a defined threshold value.
If the pixel value is greater than the threshold, it is given the binary value of
white, say 1. If it is less than the defined threshold, it is given the value of
black, say 0.

[Figure: thresholding of a grey-scale image into a binary image]
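
A minimal thresholding sketch in Python, assuming NumPy and a grey-scale image stored as a 2-D array of 0-255 intensities (the threshold value of 128 is illustrative):

    import numpy as np

    def threshold(image, t=128):
        # pixels brighter than the threshold become 1 (white), the rest 0 (black)
        return (image > t).astype(np.uint8)

    grey = np.array([[200, 90],
                     [40, 180]])
    print(threshold(grey))      # -> [[1 0]
                                #     [0 1]]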

2. Image processing and analysis


Segmentation
Region growing
Region growing is a process wherein grid elements possessing similar attributes
are grouped to form a region

Procedure (see the sketch after this list):
- A pixel on the object is identified and assigned the value 1.
- The adjacent pixels are checked for a match in attributes.
- Matching pixels are assigned 1 and non-matching pixels are assigned 0.
- These steps are repeated until the complete screen is covered, resulting in the
growth and identification of the region.
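
A Python sketch of this region-growing procedure, assuming NumPy and a grey-scale (or binary) image; starting from a seed pixel on the object, it repeatedly checks the 4 adjacent pixels for matching attributes (the tolerance parameter is an assumption):

    import numpy as np

    def grow_region(image, seed, tol=0):
        region = np.zeros(image.shape, dtype=np.uint8)
        target = int(image[seed])           # attribute value of the seed pixel
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if region[r, c]:
                continue
            region[r, c] = 1                # matching pixel is assigned 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                        and not region[nr, nc]
                        and abs(int(image[nr, nc]) - target) <= tol):
                    stack.append((nr, nc))  # adjacent pixel matches: keep growing
        return region                       # non-matching pixels remain 0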

2. Image processing and analysis


Segmentation
Region growing

[Figure: region growing - original image, assignment of values to pixels, simplified image]

Note that the procedure did not identify the hole. This can be resolved by
decreasing the distance between grid points

2. Image processing and analysis


Segmentation
Edge detection
Edge detection is concerned with determining the location of boundaries between
an object and its surroundings in an image. This is accomplished by identifying
the contrast in light intensity that exists between adjacent pixels at the borders of
the object.
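
A minimal Python sketch of contrast-based edge detection, assuming NumPy; a pixel is marked as an edge when the intensity difference with its right or lower neighbour exceeds a chosen contrast value (the value 50 is illustrative):

    import numpy as np

    def detect_edges(image, contrast=50):
        img = image.astype(int)
        gx = np.abs(np.diff(img, axis=1))   # intensity differences between horizontal neighbours
        gy = np.abs(np.diff(img, axis=0))   # intensity differences between vertical neighbours
        edges = np.zeros(img.shape, dtype=np.uint8)
        edges[:, :-1] |= (gx > contrast).astype(np.uint8)
        edges[:-1, :] |= (gy > contrast).astype(np.uint8)
        return edges                        # 1 marks a boundary pixel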

2. Image processing and analysis


Feature Extraction
Characterize an object in the image by means of the object's features.
Some features of an object include its area, length, width, diameter, perimeter,
centre of gravity, and aspect ratio.

Feature extraction methods are designed to determine these features based
on the area and boundaries of the object (using thresholding, edge detection,
and other segmentation techniques).
For example: the area of the object can be determined by counting the
number of white (or black) pixels that make up the object. Its length can be
found by measuring the distance (in terms of pixels) between the two extreme
opposite edges of the part.
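
A sketch of simple feature extraction on a thresholded (binary) image, assuming NumPy; the area is the count of object pixels, and the length and width come from the extreme opposite edges of the object:

    import numpy as np

    def extract_features(binary):
        rows, cols = np.nonzero(binary)         # coordinates of object (white) pixels
        area = rows.size                        # area = number of object pixels
        length = rows.max() - rows.min() + 1    # distance between extreme opposite edges
        width = cols.max() - cols.min() + 1
        centroid = (rows.mean(), cols.mean())   # centre of gravity
        return {"area": area, "length": length, "width": width,
                "centroid": centroid, "aspect_ratio": length / width}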

2. Image processing and analysis


Object Recognition
For any given application, the image must be interpreted based on the extracted
features. The objective in these tasks is to identify the object in the image by
comparing it with predefined models or standard values.
Template matching is the name given to various methods that attempt to
compare one or more features of an image with the corresponding features of a
model or template stored in computer memory.
The most basic template matching technique is one in which the image is
compared pixel by pixel with a corresponding computer model. Within certain
statistical tolerances, the computer determines whether the image matches
the template.
One of the technical difficulties with this method is the problem of aligning the
part in the same position and orientation in front of the camera to allow the
comparison to be made without complications in image processing.
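
A minimal sketch of pixel-by-pixel template matching, assuming NumPy and that the part has already been aligned with the stored model; the allowed fraction of mismatched pixels stands in for the statistical tolerance:

    import numpy as np

    def matches_template(image, template, tolerance=0.05):
        # fraction of pixels that differ between the image and the stored template
        mismatch = np.mean(image != template)
        return mismatch <= tolerance            # match within the allowed tolerance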

2. Image processing and analysis


Object Recognition
Feature Weighting
A technique in which several features (e.g., area, length, and perimeter) are
combined into a single measure by assigning a weight to each feature according
to its relative importance in identifying the object.

The score of the object in the image is compared with the score of an ideal
object residing in computer memory to achieve proper identification.
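
A Python sketch of feature weighting with assumed weights; the measured features are combined into a single score and compared with the score of the ideal object stored in memory:

    def weighted_score(features, weights):
        # combine several features into one measure using their relative importance
        return sum(weights[name] * value for name, value in features.items())

    weights = {"area": 0.5, "length": 0.3, "perimeter": 0.2}          # assumed weights
    ideal = weighted_score({"area": 400, "length": 30, "perimeter": 80}, weights)
    measured = weighted_score({"area": 395, "length": 31, "perimeter": 79}, weights)
    print(abs(measured - ideal) / ideal < 0.02)   # True -> identified as the ideal object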

3. Applications
Inspection

- Dimensional measurement: These applications involve determining the size of
certain dimensional features of parts.
- Dimensional gauging: Similar to the preceding, except that a gauging function
rather than a measurement is performed.
- Verification of the presence of components in an assembled product.
- Verification of hole location and number of holes in a part: Operationally,
this task is similar to dimensional measurement and verification of components.
- Detection of surface flaws and defects: Flaws and defects on the surface of
a part or material often reveal themselves as a change in reflected light.
- Detection of flaws in a printed label: The defect can be in the form of a
poorly located label or poorly printed text, numbering, or graphics on the label.

3. Applications
Visual guidance and control
Involves applications in which a vision system is teamed with a robot or similar
machine to control the movement of the machine.
Examples of these applications include seam tracking in continuous arc
welding, part positioning and/or reorientation, bin picking, collision avoidance,
machining operations, and assembly tasks.
Part identification
The applications are those in which the vision system is used to recognize and
perhaps distinguish parts or other objects so that some action can be taken.
The applications include part sorting, counting different types of parts flowing
along a conveyor, inventory monitoring, reading of 2-D bar codes, and character
recognition.
