1. Image processing:
Image processing is any form of signal processing for which the input is an
image, such as photographs or frames of video. The output of image processing can be
either an image or a set of characteristics or parameters related to the image. Most image-
processing techniques involve treating the image as a two-dimensional signal and applying
standard signal-processing techniques to it.
Image processing usually refers to digital image processing, but optical and
analog image processing are also possible.
In the initial stage, the input is a scene (visual information) and the output is a
corresponding digital image. In the secondary stage, both the input and the output are
images, where the output is an improved version of the input. And, in the final stage, the
input is still an image, but the output is a description of the contents of that image.
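As an illustration of these three stages, the sketch below (Python with NumPy) runs a toy pipeline: a synthetic image stands in for the acquired scene, a contrast stretch plays the role of image improvement, and a single statistic serves as the description; all three choices are illustrative, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 (acquisition): a real system would digitize a scene; here a
# synthetic 8-bit image stands in for the captured input.
image = rng.integers(60, 180, size=(8, 8)).astype(np.uint8)

# Stage 2: image in, improved image out -- a simple contrast stretch that
# remaps the observed intensity range onto the full [0, 255] range.
lo, hi = int(image.min()), int(image.max())
enhanced = ((image.astype(float) - lo) / (hi - lo) * 255).round().astype(np.uint8)

# Stage 3: image in, description out -- here just the fraction of pixels
# brighter than mid-gray, a stand-in for a real content description.
description = {"bright_fraction": float((enhanced > 127).mean())}
```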
2. Tasks:
Digital image processing allows the use of complex algorithms for image
processing, and hence can offer both more sophisticated performance at simple tasks, and
the implementation of methods which would be impossible by analog means. In
particular, digital image processing is the only practical technology for tasks such as
classification, feature extraction, pattern recognition, and multi-scale signal analysis.
2.1) Feature detection
Since many computer vision algorithms use feature detection as the initial
step, a very large number of feature detectors have been developed. These vary widely in
the kinds of features detected, their computational complexity, and their repeatability. At
an overview level, these feature detectors can be divided into the following groups:
2.1.1) Edges
Edges are points where there is a boundary between two image regions. In
practice, edges are usually defined as sets of points in the image which have a strong
gradient magnitude. These algorithms may place some constraints on the shape of an edge.
Locally, edges have a one-dimensional structure.
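A minimal sketch of this definition, using hand-rolled Sobel convolutions in NumPy (a common choice of gradient operator, though the text does not name one): the gradient magnitude is zero inside either region and large along the boundary of a vertical step edge.

```python
import numpy as np

def gradient_magnitude(img):
    """Gradient magnitude via Sobel convolutions (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()   # horizontal intensity change
            gy[i, j] = (patch * ky).sum()   # vertical intensity change
    return np.hypot(gx, gy)

# A vertical step edge between two image regions: points with strong
# gradient magnitude form the one-dimensional edge along the boundary.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
mag = gradient_magnitude(img)
```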
2.1.2) Corners
The terms corner and interest point are used somewhat interchangeably and
refer to point-like features in an image which have a local two-dimensional structure. The
name "corner" arose since early algorithms first performed edge detection and then
analyzed the edges to find rapid changes in direction. These algorithms were later
developed so that explicit edge detection was no longer required, for instance by looking
for high levels of curvature in the image. It was then noticed that corners were also being
detected on parts of the image which were not corners in the traditional sense, such as a
small bright spot on a dark background.
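One widely used corner measure that needs no explicit edge detection is the Harris response; the sketch below is an illustrative NumPy implementation (the 3x3 smoothing window and the constant k = 0.05 are conventional choices, not taken from the text). The response is large where the local gradient structure is genuinely two-dimensional.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner measure: positive where the local gradient structure
    is two-dimensional, negative along straight (one-dimensional) edges."""
    gy, gx = np.gradient(img.astype(float))

    def box3(a):
        # Crude 3x3 box smoothing with zero padding at the borders.
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    det = sxx * syy - sxy ** 2          # determinant of the structure tensor
    trace = sxx + syy
    return det - k * trace ** 2

# A bright square on a dark background: the response is positive near the
# square's corners and negative along its straight sides.
img = np.zeros((12, 12))
img[3:9, 3:9] = 1.0
r = harris_response(img)
```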
Color matching quantifies which colors and how much of each color exist in
a region of an image and uses this information to check if another image contains the same
colors in the same ratio. We can use color matching in applications that require the
comparison of color information to make decisions.
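A minimal sketch of this idea, assuming a coarse RGB quantization and histogram intersection as the comparison rule (both are illustrative choices, not prescribed by the text): two regions containing the same colors in the same ratio match perfectly, regardless of pixel arrangement.

```python
import numpy as np

def color_histogram(region, bins=4):
    """Normalized histogram over a coarse RGB quantization (bins**3 colors)."""
    q = (region.reshape(-1, 3) // (256 // bins)).astype(int)  # quantize channels
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]    # one bin per pixel
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def color_match_score(a, b):
    """Histogram intersection: 1.0 means the same colors in the same ratio."""
    return float(np.minimum(color_histogram(a), color_histogram(b)).sum())

red, blue = [200, 0, 0], [0, 0, 200]
# Same colors in the same ratio, arranged differently: a perfect match.
region1 = np.array([[red, blue], [blue, red]], dtype=np.uint8)
region2 = np.array([[blue, red], [red, blue]], dtype=np.uint8)
# Same colors in a different ratio: a weaker match.
region3 = np.array([[red, red], [red, blue]], dtype=np.uint8)
```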
The following figure shows the difference between color location and color
pattern matching. Figure (a) is the template image of a resistor that the algorithms search
for in the inspection images. Although color location, shown in Figure (b), finds the
resistors, the matches are not very accurate because they are limited to color information.
Color pattern matching uses color matching first to locate the objects and then uses pattern
matching to refine the locations, providing more accurate results, as shown in Figure (c).
Figure: Accurately locating the resistors using color pattern matching.
For analog signals, signal processing may involve the amplification and
filtering of audio signals for audio equipment or the modulation and demodulation of
signals for telecommunications. For digital signals, signal processing may involve digital
filtering and compression of digital signals. Different types of signal processing are shown
below:
• Analog signal processing — for signals that have not been digitized, as
in classical radio, telephone, radar, and television systems.
• Discrete signal processing — for signals that are defined only at
discrete points in time, and as such are sampled in time but not quantized in magnitude.
• Digital signal processing — for signals that have been digitized.
• Statistical signal processing — analyzing and extracting information
from signals based on their statistical properties.
• Audio signal processing — for electrical signals representing sound,
such as speech or music.
• Speech signal processing — for processing and interpreting spoken
words.
• Image processing — in digital cameras, computers, and various
imaging systems.
• Video processing — for interpreting moving pictures.
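As a small example of the digital filtering listed above, the sketch below applies a moving-average FIR filter (an illustrative choice of filter) to a noisy sampled signal:

```python
import numpy as np

def moving_average(signal, width=5):
    """A basic FIR digital filter: each output sample is the mean of `width`
    neighboring input samples, which attenuates fast fluctuations."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# A slow sine corrupted by high-frequency noise: after filtering, the
# output is closer to the clean signal than the noisy input was.
t = np.linspace(0, 1, 200)
clean = np.sin(2 * np.pi * 2 * t)
rng = np.random.default_rng(1)
noisy = clean + 0.3 * rng.standard_normal(200)
smooth = moving_average(noisy)
```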
3. Applications
Computer vision
Face detection
Feature detection
Medical image processing
Microscope image processing
Remote sensing
Morphological image processing
Computer vision is concerned with the theory for building artificial systems
that obtain information from images. The image data can take many forms, such as a video
sequence, views from multiple cameras, or multi-dimensional data from a medical scanner.
In face detection, candidate regions of the image are scanned and reduced
to a set of features, after which a classifier trained on example faces decides whether that
particular region of the image is a face or not.
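The scan-and-classify scheme can be sketched as follows; the classifier here is a crude stand-in (brightness and contrast thresholds), not a model trained on example faces, and the window size and step are illustrative.

```python
import numpy as np

def is_face(window):
    """Stand-in classifier: a real detector uses a model trained on example
    faces; here we merely flag bright, high-contrast windows."""
    return window.mean() > 100 and window.std() > 30

def detect_faces(image, size=4, step=2):
    """Slide a window over the image and keep regions the classifier accepts."""
    hits = []
    h, w = image.shape
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            if is_face(image[y:y + size, x:x + size]):
                hits.append((y, x))
    return hits

# A dark image with one bright, high-contrast "face-like" patch.
image = np.zeros((12, 12))
image[4:8, 4:8] = (np.indices((4, 4)).sum(axis=0) % 2) * 150 + 50
hits = detect_faces(image)
```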
A given natural image often contains many more background patterns than
face patterns. Indeed, the number of background patterns may be 1,000 to 100,000 times
larger than the number of face patterns. This means that if one desires a high face-detection
rate, combined with a low number of false detections in an image, one needs a very specific
classifier. Applications in this field often use the rough guideline that a classifier should
yield a 90% detection rate, combined with a false-positive rate on the order of 10⁻⁶.
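The arithmetic behind that guideline can be made concrete; the window count and face count below are illustrative assumptions, not figures from the text.

```python
# Illustrative numbers only: a detector scanning one million candidate
# windows per image at the guideline's 1e-6 false-positive rate produces,
# on average, a single false alarm per image.
windows_per_image = 1_000_000      # assumed sub-window count, not from the text
false_positive_rate = 1e-6
detection_rate = 0.90

expected_false_alarms = windows_per_image * false_positive_rate
faces_in_image = 3                 # assumed number of true faces in the scene
expected_hits = faces_in_image * detection_rate
```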
Microscope image processing is a broad term that covers the use of digital
image processing techniques to process, analyze and present images obtained from a