
IMAGE

AN OVERVIEW OF IMAGES

Introduction

The word ‘image’ usually means a real-world picture captured using a camera, while the
word ‘graphics’ usually denotes pictures drawn by an individual, either by hand or with a
computerised tool. Here the word ‘image’ will be used to mean both types of pictures
unless explicitly differentiated. When personal computers first became available in the
early ’80s, they could only display and process text. Each character was displayed on the
screen as a collection of dots whose positions were fixed relative to one another. Later, as
technology progressed, computers acquired the ability to display and process images. In
order to do so they had to control and manipulate each dot individually, and this called for
more sophisticated hardware and software. With the advent of color monitors, each dot
had associated with it not only its location but also color and brightness parameters, which
had to be stored, thus requiring more memory and storage.

Raster Images and Vector Graphics

Computer graphics fall into two main categories – raster images and vector graphics.
Adobe Photoshop and other paint and image editing programs generate raster images,
also called bitmap images. Bitmap images use a grid (the bitmap or raster) of small
squares known as pixels to represent images. Each pixel is assigned a specific location
and color value. For example, a bicycle tyre in a bitmap image is made up of a mosaic of
pixels in that location. When working with bitmap images you edit pixels rather than
objects or shapes.

A bitmap image is resolution dependent, i.e. it contains a fixed number of pixels to
represent its image data. As a result, a bitmap image can lose detail and appear jagged if
viewed at high magnification on screen. Bitmap images are the best choice for
representing fine gradations of shade and color, for example photographs.

Drawing programs like Adobe Illustrator create vector graphics made of lines and curves
defined by mathematical objects called vectors. Vectors describe graphics according to
their geometric characteristics. For example a bicycle tyre in a vector graphic is made up
of a mathematical definition of a circle drawn with a certain radius, set at a specific
location and filled with a specific color. You can move, resize or change the color of the
tyre without losing the quality of the graphic.
A vector graphic is resolution independent, i.e. it can be scaled to any size and printed on
any output device at any resolution without losing detail or clarity. As a result, vector
graphics are the best choice for graphics that must retain crisp lines when scaled to
various sizes, for example company logos.

Note : Because computer monitors represent images by displaying them on a grid, both
vector and bitmap images are displayed as pixels on-screen.

Image Size and Resolution

The number of pixels along the height and width of a bitmap image is known as the pixel
dimensions of the image. The display size of an image on-screen is determined by the
pixel dimensions of the image plus the size and setting of the monitor. The file size of an
image is proportional to its pixel dimensions.

A typical 13-inch monitor displays 640 pixels horizontally and 480 vertically. An image
with pixel dimensions of 640 by 480 would fill this small screen. On a larger monitor
with a 640 by 480 setting, the same image would still fill the screen, but each pixel would
appear larger. Changing the setting of this larger monitor to 800 by 600 would display the
image at a smaller size, occupying only part of the screen. When preparing an image for
online display, for example a Web page, pixel dimensions become an important
consideration. Because your image may be viewed on a 13-inch monitor, you will
probably want to limit your image to a maximum of 640 pixels by 480 pixels.

The number of pixels displayed per unit length in a printed image is known as the image
resolution, and is usually measured in dots per inch (dpi). An image with a high resolution
contains more, and therefore smaller, pixels than an image with a low resolution. For
example, a 1-inch by 1-inch image with a resolution of 72 dpi contains a total of (72 x 72)
or 5184 pixels. The same 1-inch by 1-inch image with a resolution of 300 dpi would
contain a total of 90,000 pixels.

Because there are more pixels per unit area, a higher resolution image usually produces
more detail and finer color transitions than a low resolution image. However, increasing
the resolution of an image created at a lower resolution merely spreads the original pixel
information across a greater number of pixels and rarely improves image quality.
The number of pixels or dots displayed per unit length of the monitor is called the
monitor resolution; it need not be the same as the resolution of the image being shown on
the screen. It is also measured in dots per inch (dpi), and typical values range from 72 dpi
to 96 dpi. Monitor resolution depends on the size of the monitor plus its pixel setting.
When an image is shown on the screen, image pixels are translated directly to monitor
pixels. This means that when the image resolution is higher than the monitor resolution,
the image appears larger on screen than its printed dimensions, and conversely, when the
image resolution is lower than the monitor resolution, the image appears smaller than its
printed size. For example, if a 1-inch by 1-inch image with a resolution of 144 dpi is
viewed on a 72 dpi monitor, the image appears in a 2-inch by 2-inch area on the screen.
This is because the monitor can display only 72 pixels per inch and it needs 2 inches to
display all 144 pixels of the image.

Thus an image which occupies the entire screen on a 15” monitor at a setting of 640 X
480 pixels, will still occupy the entire screen when viewed on a 20” monitor using the
same setting, because the total number of pixels remains the same. The pixels in the
second case will be farther apart from each other or their size may be slightly larger than
the pixels of the 15” monitor. However, when the monitor setting is changed to 832 X
624 pixels or 1024 X 768 pixels, then the image is seen to occupy only a part of the
screen. This is because the image is now still made up of 640 X 480 pixels but the
monitor has an additional number of pixels. Since the size of the monitor remains the
same, increasing the pixel count increases the monitor resolution. We therefore conclude
that the on-screen size of an image on a monitor depends both on the image resolution
and on the monitor resolution.
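
As a minimal illustration of this relationship, the following Python sketch (the function
name is illustrative, not from any library) computes the apparent on-screen size of an
image from its pixel dimensions and an assumed monitor resolution:

    def onscreen_size_inches(pixel_width, pixel_height, monitor_dpi):
        """Return the apparent on-screen size, in inches, of an image.

        Assumes the image is displayed pixel-for-pixel on a monitor
        with the given resolution in dots per inch."""
        return pixel_width / monitor_dpi, pixel_height / monitor_dpi

    # A 1-inch by 1-inch image scanned at 144 dpi has 144 x 144 pixels.
    # On a 72 dpi monitor it occupies a 2-inch by 2-inch area, as described above.
    print(onscreen_size_inches(144, 144, monitor_dpi=72))   # (2.0, 2.0)
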
File Size

The digital size of an image measured in kilobytes (KB), megabytes (MB) or gigabytes
(GB), is proportional to the pixel dimensions of the image. Images with more pixels may
produce more detail but they require more storage space and may be slower to edit and
print. For instance, a 1-inch by 1-inch 200-ppi image contains four times as many pixels
as a 1-inch by 1-inch 100-ppi image and so has four times the file size. Image resolution
thus becomes a compromise between image quality and file size. Software application
packages have their own restrictions for processing images of large size and resolution.
For example, Photoshop 5.0 supports a maximum file size of 2 GB and maximum pixel
dimensions of 30,000 by 30,000 pixels per image.

To determine the file size of a digital image, use this formula:


(pixel width x pixel height) x (bit depth ÷ 8)

The result will be the file size in bytes. Divide this by 1024 to determine the size in
kilobytes (and by 1024 again if you want the size in megabytes). For example, a 24-bit
RGB image that is 459 pixels wide and 612 pixels tall would have a file size of about
823 KB:
(459 x 612) x (24 ÷ 8) = 842,724 bytes; 842,724 ÷ 1024 ≈ 823 KB
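
A minimal Python sketch of the same calculation (the function name is illustrative):

    def image_file_size_bytes(width_px, height_px, bit_depth):
        """Uncompressed file size in bytes: total pixels times bytes per pixel."""
        return width_px * height_px * (bit_depth / 8)

    size = image_file_size_bytes(459, 612, 24)
    print(size)          # 842724.0 bytes
    print(size / 1024)   # roughly 823 kilobytes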

Image Processing

Image processing can be divided into three stages: input, editing and output. The input
stage is where images are fed into the system. This typically involves digitization of
analog images using scanners. If the images are already digital, they can simply be copied
to the storage area. After the input stage, the editing stage involves the use of image-editing
software to modify the images as required. Typical editing operations include cropping,
scaling, color adjustments, applying filters, etc. Finally, the output stage involves
displaying the edited images either separately or as part of a presentation. This is usually
done using the monitor but can also involve printers.

IMAGE PROCESSING : INPUT

Digitization

Our real world is analog in nature. The images that we generally see on paper, film etc.
are all continuous in nature and therefore are also analog. Since computers work with
discrete numbers, getting an image into a computer system involves discretization of the
analog images. This is known as digitization. Digitization can generally be done in two
ways. First, the image of a real-world object or individual can be taken with a
conventional camera, developed and printed on paper, and the print can then be digitized
using a scanner. Prints other than real-world images, such as paintings, drawings, line art
and motifs, can also be digitized by the same procedure. The scanner converts the paper
print into a computer file which can be copied to the system through an interface. The
second way of digitization involves taking the image of a real-world object with a digital
camera instead of a conventional camera. In this case the digital camera automatically
digitizes the image and stores it in a form which can simply be copied to the storage
media of the system. In both cases continuous or analog images are broken into discrete
dots before being stored in the system as binary data. Thus the concept of resolution also
applies to a scanner or digital camera, i.e. how many dots per inch we get after
digitization.

Scanner Technology

Scanners convert continuous tone photographic prints into digital images that can be
manipulated on a computer. Scanners work by reflecting light from the printed image
being scanned. The reflected light is then directed to a scanning head which typically
consists of an array of sensing devices. These devices can convert the intensity of light
falling on them into a value which is then stored inside the system as binary data.

All scanners consist of a light source which is made to fall on the image to be scanned.
When scanners were first introduced, many manufacturers used fluorescent bulbs as light
sources. While good enough for many purposes, fluorescent bulbs have two distinct
weaknesses: they rarely emit consistent white light for long, and while they're on they
emit heat which can distort the other optical components. For these reasons, most
manufacturers have moved to 'cold-cathode' bulbs that deliver whiter light and less heat.
Fluorescent bulbs are now found primarily on low-cost units and older models. The
image is placed face down on a glass plate and the light strikes the image from below the
glass plate. After getting reflected by the image, the light is made to fall on the sensor
elements through a series of mirrors. The amount of light reflected by the different parts
of the image is not equal but depends on the color shades of the image. The brighter parts
of the image reflect a larger amount of light, while the darker parts absorb most of the
incident light and reflect only a small portion of it. The reflected light then falls on a grid
of sensors which generate voltages proportional to the intensity of the light falling on
them. The higher intensity light reflected from the brighter areas of the image thus
generates higher voltages; conversely, the lower intensity light reflected from the darker
areas generates lower voltage levels.

The sensor itself is implemented using one of two different technologies:
Photo Multiplier Tube (PMT) is the sensor technology used by the high-end drum
scanners used by colour prepress companies. Expensive and difficult to operate, these
were the devices used to load images into a computer before the advent of desktop
scanning. Technicians would carefully mount originals on a glass cylinder which was
then rotated at a high speed around a sensor located in the centre. With PMT, the light
detected by the sensor is split into three beams which are passed through red, green and
blue filters and thence into the photomultiplier tubes - where the light energy is converted
into an electrical signal. Drum scanners are less susceptible to errors due to refraction or
focus than their flatbed counterparts. This, together with the fact that a PMT sensor is
more sophisticated than a CCD device, accounts for their superior performance. They're
expensive though and these days are generally used only for specialised high-end
applications.
Charge Coupled Device (CCD) technology is responsible for having made scanning a
desktop application and has been in use for a number of years in devices such as fax
machines and digital cameras. A charge-coupled device is a solid state electronic device
that converts light into an electric charge. A desktop scanner sensor typically has
thousands of CCD elements arranged in a long thin line. The scanner shines light through
red, green and blue filters and the reflected light is directed into the CCD array via a
system of mirrors and lenses.

The voltages generated by the sensors are then fed to an analog-to-digital converter
(ADC) which samples the values of the voltage levels and converts them into binary
numbers. Thus the original image is now represented by a sequence of bits which can be
stored into the magnetic or optical storage media of a computer.
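
As a rough sketch of what the ADC stage does (the numbers and names below are
illustrative, not taken from any particular scanner), an analog voltage in a known range
can be quantized into an n-bit integer code like this:

    def quantize(voltage, v_max=1.0, bits=8):
        """Map an analog voltage in [0, v_max] to an n-bit integer code."""
        levels = 2 ** bits                      # number of discrete steps
        code = int(voltage / v_max * (levels - 1))
        return max(0, min(levels - 1, code))    # clamp to the valid range

    print(quantize(0.8))   # bright area -> high code (204)
    print(quantize(0.1))   # dark area   -> low code  (25)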

Scanner Performance
Just as monitor resolution is a measure of the density of the pixels on the screen, the
density of the CCD elements in a scanner is related to the scanner resolution. Since the
CCD elements give rise to the dots in an image, the scanner resolution is directly related
to the image resolution itself. The greater the number of CCD elements in a given area,
the greater the density of pixels in the image and the better the quality of the digital
image created. However, it should be borne in mind that image quality depends not only
on the resolution but also on the quality of the optical and electronic devices inside the
scanner.

The CCD array is mounted on a movable platform inside the scanner called the scan
head. The scan head moves beneath the paper and collects light reflected from it.
Although the movement of the scan head may seem continuous, in reality it moves a
fraction of an inch at a time, taking a reading between each movement. The movement is
driven by a stepper motor, a device that rotates by a pre-defined amount each time an
electrical pulse is fed to it.

Scanner resolution can be classified into two categories. The optical resolution refers to
the actual number of sensor elements per inch on the scan head. Most scanners nowadays
have optical resolutions of the order of 600 dpi to 1200 dpi. If each sensor element creates
an image pixel, the image resolution should be exactly equal to the optical resolution of
the scanner. Scanners, however, are often rated with resolution values of 2400, 7200 or
9600 dpi. These are called interpolated resolutions and basically involve an integrated
circuit chip in the scanner generating new data: it takes the dots the scanner actually sees,
calculates where the dots in between would most likely fall, and 'guesses' the color of the
new dots by averaging the color of adjacent dots. The problem with this scheme is that
best guesses can never be truly accurate. Interpolated images will always seem too
smooth and slightly out of focus. Thus, for the best quality scans it is better to stick to the
optical resolution value.
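
The averaging described above can be sketched in a few lines of Python; this is only a toy
1-D illustration of interpolating between scanned dots, not the algorithm used by any
particular scanner:

    def interpolate_row(row):
        """Double the resolution of one scanned row by inserting, between
        every pair of real dots, a new dot that averages its neighbours."""
        out = []
        for a, b in zip(row, row[1:]):
            out.append(a)
            out.append((a + b) / 2)   # 'guessed' dot, never actually scanned
        out.append(row[-1])
        return out

    print(interpolate_row([10, 20, 40]))   # [10, 15.0, 20, 30.0, 40]
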
Color scanners have three sets of CCDs instead of one. White light reflected from the
image is passed through an optical beam splitter, such as a prism, which splits the light
into the three primary colors: red, green and blue. These three components are read by
three different sensor elements and the values are then combined to form a single pixel of
the colored image, containing the RGB intensity values. These values are called pixel
attributes and are used to display the appropriate colors on the monitor screen.

After resolution, the next important parameter for determining scanner performance is the
bit-depth. While resolution determines the fineness of the detail captured, the bit-depth
determines the total number of colors captured by the scanner. The term comes from the
total number of binary digits, or bits, used to represent each pixel of the image, which in
turn determines the number of color shades possible. If each pixel is represented by a
single bit, then that pixel can be either white or black (corresponding to 1 and 0). If each
pixel is represented by 4 bits, then we can have a total of 2^4 or 16 color shades. Modern
scanners have a bit-depth of 24, which means that they can represent a total of 2^24 or
about 16.7 million color shades. This is sufficient to represent photographic images in
their true colors and hence is called true color scanning. It should be noted that although
larger bit depths give better color gradations, they also increase the file size of the image.
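
A short Python sketch of this trade-off (illustrative only), tying bit-depth to both the
number of shades and the file-size formula given earlier:

    def shades(bit_depth):
        """Number of distinct values each pixel can take."""
        return 2 ** bit_depth

    for bits in (1, 4, 8, 24):
        pixels = 600 * 600                    # e.g. a 1-inch square scanned at 600 dpi
        size_kb = pixels * bits / 8 / 1024    # uncompressed size in kilobytes
        print(bits, shades(bits), round(size_kb, 1))
    # 1 bit  ->          2 shades,   43.9 KB
    # 24 bit -> 16,777,216 shades, 1054.7 KB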

Different types of images may be captured by the scanner using different bit-depths. Line
arts are made of predominantly black and white lines, so they can be captured using 1 or
2 bits. Halftones or grayscale photos can be captured using 8 or 16 bits while color
photographs usually require 24 bits. The scanner software gives the user flexibility to
choose the bit depth as well as resolution before starting the scan.

The third parameter of importance is the dynamic range of a scanner. This measures
how wide a range of tones the scanner can record and determines the largest contrasts in
the image. Dynamic range is measured on a scale from 0.0 (perfect white) to 4.0 (perfect
black), and the single number given for a particular scanner tells how much of that range
the unit can distinguish. Most colour flatbeds have difficulty perceiving the subtle
differences between the dark and light colours at either end of the range, and tend to have
a dynamic range of about 2.4. For greater dynamic range, the next step up is a top-quality
colour flatbed scanner with extra bit-depth and improved optics. These high-end units are
usually capable of a dynamic range between 2.8 and 3.2, and are well-suited to more
demanding tasks like standard colour prepress. For the ultimate in dynamic range, the
only alternative is a drum scanner. These units frequently have a dynamic range of 3.0 to
3.8, and deliver all the colour quality one could ask of a desktop scanner. Although they
are overkill for most projects, drum scanners do offer high quality in exchange for their
high price.
Image Types

Conventional images can be of various types, and when scanning them the scanner
allocates different bit-depths to each. Line art is the smallest of all the image types. Since
only black and white information is stored, the computer represents black with a 1 and
white with a 0. It takes only 1 bit of data to store each dot of a black and white scanned
image. Line art mode is most useful when scanning text or line drawings; photographs do
not scan well in line art mode. Examples of line art include sketches, pen-and-ink
portraits, architectural drawings, etc.

Greyscale images correspond to a black and white photograph and contain a variety of
shades of grey. Since humans can perceive about 256 shades of grey, these kinds of
images are usually represented digitally using 8 bits (2^8 = 256). While computers can
store and display greyscale images, imagesetters used for printing newspapers, books and
magazines can only print about 50 shades of grey. A technique called halftoning is used
to break down a continuous tone image into solid spots of different sizes that create the
illusion of a continuous tone image. On examining a halftone image closely it can be seen
that it is not really a continuous tone image but is made up of a pattern of different-sized
dots. Traditionally, light reflected from the original is filtered through a halftone screen or
contact sheet, which breaks up the image into a pattern of dots, producing a halftoned
negative on the film. Areas of the original that reflect more light create larger dots; areas
that reflect less light create smaller dots.
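
As a toy illustration of the idea (not how an imagesetter actually screens an image), each
grey value can be mapped to a round dot whose area covers a proportional fraction of its
halftone cell:

    import math

    def dot_radius(gray, cell=10):
        """Radius of a halftone dot for an 8-bit grey value (0 = black, 255 = white).

        Darker areas get larger dots: the dot area is chosen to cover the same
        fraction of the square cell as the required ink coverage."""
        coverage = 1 - gray / 255               # fraction of the cell to ink
        return math.sqrt(coverage * cell * cell / math.pi)

    print(round(dot_radius(0), 2))     # solid black -> radius ~5.64
    print(round(dot_radius(128), 2))   # mid grey    -> radius ~3.98
    print(round(dot_radius(255), 2))   # pure white  -> radius 0.0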

Photographic images are called continuous tone images and contain a large variety of
shades of various colors. Continuous tone images require 24 bits to store and thus have
the best quality and largest file sizes. Depending on the type of image scanned, the user
can select various scan modes to allocate different numbers of bits to the digital image
created. See the scanner interface for details.

Optical Character Recognition (OCR)

When a page of text is scanned into a PC, it is stored as an electronic file made up of tiny
dots, or pixels; it is not seen by the computer as text, but rather, as a ‘picture of text’.
Word processors are not capable of editing bitmap images. In order to turn the group of
pixels into editable words, the image must go through a complex process known as
Optical Character Recognition (OCR).

OCR research began in the late 1950s, and since then, the technology has been
continually developed and refined. In the 1970s and early 1980s, OCR software was still
very limited - it could only work with certain typefaces and sizes. These days, OCR
software is far more intelligent, and can recognise practically all typefaces as well as
severely degraded document images.

One of the earliest OCR techniques was something called matrix, or pattern, matching.
Most text is set in Times, Courier or Helvetica typefaces at point sizes between 10 and
14. OCR programs which use the pattern matching method have bitmaps stored for every
character of each of the different fonts and type sizes. By comparing this database of
stored bitmaps with the bitmaps of the scanned letters, the program attempts to recognise
the letters.
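
A minimal sketch of the matching step, assuming each glyph has already been isolated
and scaled to a small binary bitmap (the tiny templates below are invented purely for
illustration):

    def mismatch(a, b):
        """Count differing pixels between two equal-sized binary bitmaps."""
        return sum(p != q for row_a, row_b in zip(a, b) for p, q in zip(row_a, row_b))

    def recognise(glyph, templates):
        """Return the character whose stored bitmap differs least from the glyph."""
        return min(templates, key=lambda ch: mismatch(glyph, templates[ch]))

    # Tiny 3x3 'templates' standing in for stored character bitmaps.
    templates = {
        "I": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],
        "L": [[1, 0, 0], [1, 0, 0], [1, 1, 1]],
    }
    scanned = [[0, 1, 0], [0, 1, 0], [0, 1, 1]]    # a slightly degraded 'I'
    print(recognise(scanned, templates))           # 'I'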

Feature extraction was the next step in OCR’s development. This attempted to recognise
characters by identifying their universal features, the goal being to make OCR typeface-
independent. If all characters could be identified using rules defining the way that loops
and lines join each other, then individual letters could be identified regardless of their
typeface. For example: the letter 'a' is made from a circle, a line on the right side and an
arc over the middle. The arc over the middle is optional. So, if a scanned letter had these
'features' it would be correctly identified as the letter 'a' by the OCR program. No OCR
software ever recognises 100% of the scanned letters. Some OCR programs use the
matrix/pattern matching and/or feature extraction methods to recognise as many
characters as possible - and complement this by using spell checking on the hitherto
unrecognised letters.

Scanning Software Interface

A typical scanning software interface is shown above. Some of its parameters are
explained below :
(a) Type : Determines how the image is scanned and processed. The three basic image
types are drawings, halftones and photos. For all the image types, both black and white
as well as color options are available.
(b) Path : Determines the destination of the scanned image. The image may be saved as
a file on the hard disk, be exported to an application software like Photoshop or be
directly sent to a printer for printing.
(c) Brightness : The brightness control changes the lightness or darkness of a scanned
image. The settings range from 1 to 250. The default is 125.
(d) Contrast : The contrast control adjusts the range between the darkest and lightest
shades in the image. Higher contrast settings display black and white dots while lower
contrast settings display grey dots.
(e) Scaling : Changes the size of the scanned image with respect to the original size of
the image. The third button from the left toggles between uniform and non-uniform
scaling. Uniform scaling maintains the proportional relationship of the height and
width of the image while non-uniform scaling allows independent control of height
and width.
(f) Mirror button : This button (first from left) creates a mirror image of the image.
(g) Negative button : This button (second from left) converts the white areas of the
image to black and black areas to white. For color images the complementary color is
selected.
(h) Lock button : This button (fourth from left) is used to maintain a constant physical
image size by locking the height and width dimensions. This is useful when scanning
images to a specific size.
(i) Preview button : Creates a low-resolution scan of the image so that it may be
adjusted before taking a final scan.
(j) Zoom button : Rescans and enlarges the selected portion of an image to the
maximum size allowed by the Preview Area.
(k) Final button : Saves the scanned image to an image file on the hard disk. Several
file formats are provided in which the image can be saved.
(l) Color Adjustment : Available from the Tools menu, this tool is used to adjust the
color hue and saturation.
(m) Resolution : Available from the Custom menu, this tool is used to determine how
many dots per inch are generated for the final scan. This is an important factor for
fixing the quality and clarity of the image.

TWAIN Driver

TWAIN is a very important standard in image acquisition, developed by Hewlett-Packard,
Kodak, Aldus, Logitech and Caere, which specifies how image acquisition devices such as
scanners, digital cameras and other devices transfer data to software applications.
TWAIN allows software applications to work with image acquisition devices without
knowing anything about the device itself. If a device is TWAIN compliant and a software
application is TWAIN compliant, the two should work together regardless of whether or
not the software was bundled with the image acquisition device when it was purchased.
Source : http://www.pctechguide.com

It is possible to attach more than one TWAIN compliant image acquisition device to a PC
at the same time. Each of the devices will have its own separate TWAIN module. The
user is prompted to select a suitable TWAIN source, which launches the device's own
driver, all without leaving the main application. After scanning, the driver automatically
closes, leaving the scanned image open in the main application. This avoids unnecessary
quitting, launching, or saving of potentially large and possibly useless files.

Digital Camera

Apart from the scanner, which is used to digitize paper documents and film, the second
device used to digitize real-world images is the digital camera. Just like a conventional
camera, a digital camera has a lens through which light from real-world objects enters the
camera. But instead of falling on film and causing a chemical reaction, the light falls on a
CCD array. Depending on the intensity of the light falling on them, the CCD elements
generate proportional electrical signals and send them to an ADC for conversion into
digital numbers. Each digital number represents the amount of light falling on a specific
CCD element and is stored as an image pixel in the storage device. The collection of all
the pixel information can be used to recreate the image on the screen. As in scanners,
optical splitters are used to split white light into its primary color constituents, each of
which is stored separately. These are then recombined to generate a color digital image
on the screen.

Source : http://www.pctechguide.com

The more pixels generated per unit length of the image, the higher the resolution of the
camera and the better the picture quality. Unlike a scanner, a digital camera is usually not
attached to a computer via a cable. The camera has its own storage facility inside it, often
in the form of a floppy drive, which can save the image created onto a floppy disc.
Images, however, cannot be stored on floppy discs in their raw form, as they would take
up too much space. Instead they are compressed to reduce their file size and are usually
stored in the JPEG format. This is a lossy compression technique and results in a slight
loss of image quality. Earlier digital cameras had a resolution of about 640 X 480 pixels,
but modern high-end cameras have as many as 2048 X 1536 pixels in their CCD arrays.

Source : http://www.pctechguide.com
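
As a hedged illustration of the effect of JPEG compression, the following Python sketch
uses the Pillow library (assumed to be installed) to compare the raw size of a 640 X 480,
24-bit image with its size after JPEG encoding; the exact ratio depends entirely on the
image content and the quality setting:

    from io import BytesIO
    from PIL import Image

    # A synthetic 640 x 480, 24-bit RGB image (a flat color compresses extremely
    # well; real photographs compress less dramatically).
    img = Image.new("RGB", (640, 480), color=(200, 120, 80))

    raw_bytes = 640 * 480 * 3                 # uncompressed size, roughly 900 KB
    buffer = BytesIO()
    img.save(buffer, format="JPEG", quality=75)
    print(raw_bytes, len(buffer.getvalue()))  # the JPEG output is far smaller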

Most digital cameras have an LCD screen at the back, which serves two important
purposes: first, it can be used as a viewfinder for composing and adjusting the shot;
secondly, after the image has been stored on the floppy disc, it can be used for previewing
the image. Most digital cameras feature auto-focus lenses providing coverage equivalent
to a standard film camera. Aperture and shutter speed control are also fully automated,
with some cameras allowing manual adjustments. Motorised lenses provide zoom
capability by varying the focal length. A self-timer is a common feature, typically
providing a 10-second delay between the time the shutter is activated and when the
picture is taken, and all modern digital cameras have a built-in automatic flash with a
manual override option. The recent innovation of built-in microphones provides for
sound annotation in standard WAV format. After recording, this sound can be sent to an
external device for playback, or played back on headphones via an earphone socket.

Despite the trend towards removable storage, digital cameras still allow connection to a
PC for the purpose of image downloading. Transfer is usually via a conventional RS-232
serial cable at a maximum speed of 115Kbit/s, although some professional models offer a
fast SCSI connection. The release of Windows 98 in mid-1998 brought with it the
prospect of connection via the Universal Serial Bus, and digital cameras are now often
provided with both a serial cable and a USB cable. Most of the digital cameras however
also have some form of internal storage device like a floppy drive. Each floppy inserted
into the camera can hold about 15 to 25 images, depending on the amount of
compression. The floppy can then simply be taken out of the camera, inserted into a PC
and the files copied.

Digital cameras also have a software utility resident in a ROM chip inside them which
allows the user to toggle between a CAMERA mode (for taking pictures) and a PLAY
mode (for displaying pictures). In PLAY mode the user is presented with a menu offering
functions such as: displaying all the images on the floppy, selecting a particular image,
deleting images that are not required, write-protecting important images against deletion,
setting the date and time, displaying how much of the floppy disk space is free, and even
formatting a floppy in the drive.

Business users have more to gain from digital photography than home users. The
technology lets the user put a photo onto the computer monitor within minutes of
shooting, translating into a huge productivity enhancement and a valuable competitive
edge. Digitally captured photos are going into presentations, business letters, newsletters,
personnel ID badges, and Web- and print-based product catalogues.

Digital Images

Images which are already available in digital form need not be scanned. They can simply
be copied from the source into your applications, keeping in view the copyright
restrictions. Major sources of digital images include :

Clipart : This represents a gallery of ready-to-use images and accompanies many of the
popular software application packages, like MS-Office, MS-Frontpage etc. The images
are usually categorised under various headings making it convenient to search for them,
and can include both vector and raster images.

Photo-CD : Companies like Kodak have popularised the concept of Photo-CDs, which
contain collections of high-quality digital photographs. Each photo is usually available at
a number of different resolutions, which users can select as per their requirements.

Internet : The Internet, especially the World Wide Web, is a storehouse of millions of
web sites, each of which may contain a variety of digital images on varied topics. Search
engines can be used to find images based on subject matter and topics.

IMAGE PROCESSING : EDITING

Color Management and Device Profiles

A color management system (CMS) helps to reduce or eliminate color-matching
problems and makes color portable, reliable, and predictable. This section provides an
introduction to color management systems, explains why they are of increasing
importance in the desktop publishing industry, and defines some of the basic concepts
and components of a CMS.

What is a CMS?

The challenge of color publishing is to reproduce colors the eye sees on a series of
devices that have progressively diminishing color capabilities. Even the best
photographic film can capture only a small portion of the colors discernible to the human
eye. A computer monitor can display only a small fraction of those colors, and a printing
press can reproduce fewer colors still.

A color management system (CMS) is a collection of software tools designed to reconcile
the different color capabilities of scanners, monitors, printers, image-setters, and printing
presses to ensure consistent color throughout the print production process. Ideally, this
means that the colors displayed on your monitor accurately represent the colors of the
final output. It also means that different applications, monitors, and operating systems
will display colors consistently. A CMS maps colors from a device with a large color
gamut, such as a monitor, to a device with a smaller color gamut, such as a proofer or
printing press; consequently, all colors on the monitor represent colors that the output
device can reproduce.

Need for Color Management

Before desktop publishing, high-end pre-press operators used proprietary, or closed,
systems, where all devices were integrated and calibrated to known values in order to
work together.
Certain factors in the pre-press, printing, film, and video industries have made these
high-end proprietary solutions less viable. Desktop publishing has brought about the rise
of open production systems. The design and production workflow is no longer confined
to a closed system, but may be distributed across many different systems made up of
devices from different vendors.

Because each device reproduces color differently, the color you see at one stage of design
and production rarely matches what you see at another. In other words, color is device-
dependent—the color you see depends on the device producing it. A scanner interprets an
image as certain RGB values according to its particular specifications; a particular
monitor displays RGB colors according to the specifications of its phosphors; a color
desktop printer outputs in RGB or CMYK according to its own specifications. And, each
press produces printed output according to the specifications followed and the type of
inks used. Moreover, by their very natures, monitors and printing presses reproduce color
in completely different ways. A monitor uses the RGB color model whereas a printer
uses the CMYK color model.

Hence the need for an open color management system that can communicate color
reliably between different devices and operating systems. Open color management lets
you compensate for the differences in these devices and communicate color in a device-
independent manner.

Device Independent Color

As previously explained, color varies depending on the device that produces it. In a
sense, each device speaks its own color language, which it can't communicate well to
another device. What is needed is an interpreter.
To illustrate this, imagine four people in a room. Each person is assigned a task that
requires agreement among them all. One speaks Swahili, one speaks French, one speaks
Mandarin, and one uses sign language. For the group to communicate, they need an
interpreter who knows all four languages, as well as an agreed-upon neutral language. All
discussion must first go through the interpreter, who then translates it to the neutral
language that all can understand. Each will continue to use his or her own native
language, but will communicate with the others by using the neutral language.
A color management system works in much the same way, using a device-independent
color model as the neutral color language by which all color information is referenced.
The particular color model used is CIELAB, developed in 1976 by the Commission
Internationale de l'Eclairage (International Commission on Illumination, or CIE). The
CIE's standard for measuring color is based on how the human eye perceives it.

The ICC Color Management Model

The color-managed workflow is fairly straightforward and possesses two major
characteristics:
* Images are edited in a device-independent color space that is larger than the color
space of the output device, such as a computer monitor, a TV screen, film, or a four-
color press.
* Image files can be saved with profiles that contain information describing the
characteristics of the source and output color devices.
These two considerations make a color-managed workflow advantageous. The image
files become portable, since they can be re-purposed for output on widely differing
devices simply by tagging them with different output profiles.

In 1993, members of the computer and color publishing industry began working toward a
common approach to color management. They formed the International Color
Consortium (ICC) in order to establish color standards that would help users achieve
reliable and reproducible color throughout the entire reproduction process. They also
endorsed an open framework for developing color management systems.
An ICC color management system has three major components:

• A device-independent color space, also known as a Reference Color Space.
• Device profiles that define the color characteristics of a particular device.
• A Color Management Module (CMM) that interprets the device profiles and
carries out the instructions on what to do with different devices' color gamuts.

One of the first decisions made by the ICC was that color space transformations were the
responsibility of the operating system. Placing the responsibility there meant that color
management would not have to be replicated in each application while still being
available to all applications. Device profiles, which contain information on the color
behavior of the various peripherals, provide the data necessary to perform these
transformations.
The ICC chose the CIE color model as the device-independent color space for color
management. Since device-specific colors from any device can be mapped into a device-
independent color space, it is much easier to combine equipment from different vendors
into one system and maintain color specifications. Because it is well defined and
reproducible, the CIE color space (CIELAB) is an excellent language for communicating
color information between different systems.
Device Profiles

A color management system must have available to it the characteristics of each device in
the production process, namely their color "behaviors" and color gamuts. It gets this
information from files called device profiles. A device profile enables the CMS to
convert between a device's native color space and a device-independent reference color
space. Each device in the production system has its own device profile, either provided as
part of the CMS, available from the device's manufacturer, or included with third party
hardware, software, or both. The CMS uses these profiles to convert one device-
dependent color space into the device-independent reference color space and then to a
second device-dependent color space.
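
A minimal sketch of this idea, assuming (as simple matrix-based ICC profiles do) that
each device can be characterized by a 3x3 matrix taking its RGB values into the device-
independent CIE XYZ space; the matrices below are invented placeholders, not real
profiles:

    def apply(matrix, color):
        """Multiply a 3x3 matrix by a 3-component color vector."""
        return [sum(m * c for m, c in zip(row, color)) for row in matrix]

    def invert_diag(matrix):
        """Invert a diagonal matrix (enough for this toy example)."""
        return [[1 / matrix[i][i] if i == j else 0 for j in range(3)] for i in range(3)]

    # Hypothetical 'profiles': scanner RGB -> XYZ and monitor RGB -> XYZ.
    scanner_to_xyz = [[0.5, 0, 0], [0, 0.6, 0], [0, 0, 0.7]]
    monitor_to_xyz = [[0.4, 0, 0], [0, 0.5, 0], [0, 0, 0.9]]

    scanner_rgb = [0.8, 0.4, 0.2]
    xyz = apply(scanner_to_xyz, scanner_rgb)                 # into the reference space
    monitor_rgb = apply(invert_diag(monitor_to_xyz), xyz)    # out to the display
    print(monitor_rgb)   # [1.0, 0.48, 0.1555...]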

Source : http://www.adobe.com

Device profiles characterize a particular device by describing the characteristics of the
color space for that device in a particular state. Profiles can also be embedded within
image files. Embedded profiles allow for the automatic interpretation of color information
as the color image is transferred from one device to another.
Device profiles are divided into three classifications:

• Input profiles for devices such as scanners and digital cameras (also known as
source profiles).
• Display profiles for devices such as monitors and flat panel screens.
• Output profiles for devices such as printers, copiers, film recorders, and printing
presses (also known as destination profiles).

The Color Management Module (CMM)

The Color Management Module (CMM), sometimes called the Color Engine, is the part
of the CMS that maps one gamut to another. When colors consistent with one device's
gamut are displayed on a device with a different gamut, the CMM uses device profiles
and rendering intents to optimize the displayed colors between the two devices. The
CMM does this by mapping the out-of-gamut colors into the range of colors that can be
produced by the destination device. Each CMS has a default CMM, but may support
additional CMMs as well. Windows 98 and Windows 2000 use ICM 2.0, which was
developed by Microsoft. A CMM maps colors from one device's color space to another
according to an algorithm called a rendering intent.
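
The simplest possible gamut-mapping strategy is just to clip out-of-gamut values to the
destination device's range; real CMMs use far more sophisticated rendering intents, so
this Python sketch is only meant to make the idea concrete:

    def clip_to_gamut(color, lo=0.0, hi=1.0):
        """Crude 'gamut mapping': clamp each channel into the reproducible range."""
        return [max(lo, min(hi, c)) for c in color]

    # A very saturated monitor color that the destination device cannot reproduce.
    print(clip_to_gamut([1.2, -0.05, 0.4]))   # [1.0, 0.0, 0.4]
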
Basic Color Theory

The phenomenon of seeing color is dependent on a triad of factors: the nature of light, the
interaction of light and matter, and the physiology of human vision. Each factor plays a
vital part and the absence of any one would make seeing color impossible.
In broad terms, we see color when a light source that emits a particular distribution of
differently colored wavelengths of light strikes a colored object. The object reflects (or
transmits) that light in another particular distribution of colored wavelengths, which is
then received by the photoreceptors of the human eye. The photoreceptors are sensitive to
yet another particular distribution of wavelengths of light, which is sent as a stimulus to
the brain, causing us to perceive a particular color.

Light is electromagnetic
(EM) radiation, the fluctuations of electric and magnetic fields in nature. More simply,
light is energy and the phenomenon of color is a product of the interaction of energy and
matter. There are different types of EM radiation including gamma rays, x-rays, radio
waves, ultraviolet, and infrared. The whole array of these is known as the
electromagnetic spectrum, which runs in order of wavelength from longest (radio waves
that range from 1 millimeter to several kilometers) to shortest (gamma rays at less than
0.1 nanometers, or 1/10,000,000,000th of a meter). The human eye is only sensitive to
EM radiation at wavelengths that range roughly between 780 nanometers and 380
nanometers. This small segment is called the visible spectrum or visible light. This is
usually what we mean when we speak of "light" (though, properly speaking, all EM
radiation is light). Infrared lies just below red light; ultraviolet exists just above violet
light. Both are invisible to humans and other creatures (though some reptiles can see
infrared and some insects can see ultraviolet).

The visible spectrum contains numerous colors that are distinguished by wavelength and
amplitude; wavelength determines color and amplitude determines brightness. Of these
colors, the human eye can distinguish about 10,000. The combination of these light
waves produces white light, which is what we see from the Sun and from most artificial
light sources. A breakdown of the individual colors themselves is only visible under
certain circumstances. This occurs naturally in a rainbow; it also occurs when white light
is refracted through a prism.
The nature of light and the visible spectrum are only one part of what's needed for us to
see color. The second part of the triad has to do with the interaction of light and matter,
for when we see an object as blue or red or purple, what we're really seeing is a partial
reflection of light from that object. The color we see is what's left of the spectrum after
part of it is absorbed by the object.

Transmission takes place when light passes through an object without being essentially
changed; the object, in this case, is said to be transparent. Some alteration does take
place, however, according to the refractive index of the material through which the light
is transmitted. Refractive index (RI) is the ratio of the speed of light in a vacuum (i.e.,
space) to the speed of light in a given transparent material (e.g., air, glass, water). For
example, the RI of air is 1.0003. If light travels through space at 186,000 miles per
second, it travels through air at 185,944 miles per second—a very slight difference. By
comparison, the RI of water is 1.333 and the RI of glass will vary from 1.5 to 1.96—a
considerable slowing of light speed. The point where two substances of differing RI meet
is called the boundary surface. At this point, a beam of transmitted light (the incident
beam) changes direction according to the difference in refractive index and also the angle
at which it strikes the transparent object. This is called refraction. If light is only partly
transmitted by the object (the rest being absorbed), the object is translucent.

Source : http://www.adobe.com

When light strikes an opaque object (that is, an object that does not transmit light), the
object's surface plays an important role in determining whether the light is fully reflected,
fully diffused, or some of both. A smooth or glossy surface is one made up of particles of
equal, or nearly equal, refractive index. These surfaces reflect light at an intensity and
angle equal to the incident beam. Scattering, or diffusion, is another aspect of reflection.
When a substance contains particles of a different refractive index, a light beam striking
the substance will be scattered. The amount of light scattered depends on the difference
in the two refractive indices and also on the size of the particles. Most commonly, light
striking an opaque object will be both reflected and scattered. This happens when an
object is neither wholly glossy nor wholly rough.

Source : http://www.adobe.com
Finally, some or all of the light may be absorbed depending on the pigmentation of the
object. Pigments are natural colorants that absorb some or all wavelengths of light. What
we see as color are the wavelengths of light that are not absorbed.

Source : http://www.adobe.com

The third part of the color triad is human vision. The retina is the light-sensitive part of
the eye and its surface is composed of photoreceptors or nerve endings. These receive the
light and pass it along through the optic nerve as a stimulus to the brain. The
photoreceptors are of two types, rods and cones.

The major environmental variable concerns the kind of ambient light under which a
color is seen. This is directly related to the spectral power distributions discussed earlier.
What we see outdoors is illuminated by the sun. Light from artificial sources is rarely that
bright. Since luminance is an important factor in seeing color, the brightness of your
environment will have a lot to do with the color you see.

Color Models

Color models are used to classify colors and to qualify them according to such attributes
as hue, saturation, chroma, lightness, or brightness. They are further used for matching
colors and are valuable resources for anyone working with color in any medium: print,
video, or Web.

RGB Model

Red, green, and blue are the primary stimuli for human color perception and are the
primary additive colors. The relationship between the colors can be seen in the
illustration. The secondary colors of RGB (cyan, magenta, and yellow) are formed by the
mixture of two of the primaries and the exclusion of the third. Red and green combine to
make yellow, green and blue make cyan, and blue and red make magenta. The
combination of red, green, and blue at full intensity makes white. The RGB model is also
called the additive model. Additive colors are created by mixing spectral light in varying
combinations. The most common examples of this are television screens and computer
monitors, which produce colored pixels by firing red, green, and blue electron guns at
phosphors on the television or monitor screen.

Source : http://www.adobe.com
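
A short Python illustration of additive mixing, treating each primary as an 8-bit channel
value (purely illustrative):

    def add_mix(*colors):
        """Additively mix RGB colors, clamping each channel at full intensity (255)."""
        return tuple(min(255, sum(channel)) for channel in zip(*colors))

    RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)
    print(add_mix(RED, GREEN))         # (255, 255, 0)   -> yellow
    print(add_mix(GREEN, BLUE))        # (0, 255, 255)   -> cyan
    print(add_mix(RED, GREEN, BLUE))   # (255, 255, 255) -> white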

CMY(K) Model

Subtractive colors are seen when pigments in an object absorb certain wavelengths of
white light while reflecting the rest. We see examples of this all around us. Any colored
object, whether natural or man-made, absorbs some wavelengths of light and reflects or
transmits others; the wavelengths left in the reflected/transmitted light make up the color
we see. This is the nature of color print production and cyan, magenta, and yellow, as
used in four-color process printing, are considered to be the subtractive primaries.

Source : http://www.adobe.com
As the illustrations show, the colors created by the subtractive model of CMY don't look
exactly like the colors created in the additive model of RGB. Particularly, CMY cannot
reproduce the brightness of RGB colors. In addition, the CMY gamut is much smaller
than the RGB gamut. The CMY model used in printing lays down overlapping layers of
varying percentages of transparent cyan, magenta, and yellow inks. Light is transmitted
through the inks and reflects off the surface below them (called the substrate). The
percentages of CMY ink subtract inverse percentages of RGB from the reflected light so
that we see a particular color.
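
The relationship between the two models can be sketched with the usual normalized
conversion, in which each subtractive primary is the complement of an additive one (a
simplification that ignores real ink behaviour and gamut differences):

    def rgb_to_cmy(r, g, b):
        """Convert RGB in [0, 1] to CMY: each ink subtracts its complementary light."""
        return 1 - r, 1 - g, 1 - b

    def cmy_to_cmyk(c, m, y):
        """Pull the common grey component out into a separate black (K) ink."""
        k = min(c, m, y)
        if k == 1:                       # pure black: no colored ink needed
            return 0, 0, 0, 1
        return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

    print(rgb_to_cmy(1.0, 0.0, 0.0))     # red -> (0.0, 1.0, 1.0)
    print(cmy_to_cmyk(0.2, 0.5, 0.8))    # black extracted: k = 0.2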

Gamut Constraints

One problem that also needs to be addressed in discussing RGB and CMY is the issue of
gamut constraints. The whole range, or gamut, of human color perception is quite large.
However, when we look at the RGB and CMY color models, which are essentially
models of color production, we see that the gamut of colors we can reproduce is far
smaller than what we can actually see.
While not precise, the illustration below clearly shows this problem by superimposing
representative RGB and CMY gamuts over the CIE Chromaticity Diagram (representing
the whole gamut of human color perception).

Source : http://www.adobe.com

Both models fall short of reproducing all the colors we can see. Furthermore, they differ
to such an extent that there are many RGB colors that cannot be produced using
CMY(K), and similarly, there are some CMY colors that cannot be produced using RGB.

HSB Model

HSB and HLS are two variations of a very basic color model for defining colors in
desktop graphics programs that closely matches the way we perceive color.
Hue defines the color itself, for example red as distinct from blue or yellow. The values
for the hue axis run from 0–360°, beginning and ending with red and running through
green, blue and all intermediary colors such as greenish-blue, orange and purple. This
representation is known as the Color Wheel and forms the basis of representing colors in
terms of angles.
Source : http://www.adobe.com

Saturation indicates the degree to which the hue differs from a neutral gray. The values
run from 0%, which is no color saturation, to 100%, which is the fullest saturation of a
given hue at a given percentage of illumination.
Lightness indicates the level of illumination. The values run as percentages; 0% appears
black (no light) while 100% is full illumination, which washes out the color (it appears
white). In this respect, the lightness axis is similar to Munsell's value axis. Colors at
percentages less than 50% appear darker while colors at greater than 50% appear lighter.

A color solid (i.e., a three-dimensional representation) of the HLS model is not exactly
cylindrical since the area truncates towards the two ends of the lightness axis and is
widest in the middle range. Thus it forms an ellipsoid.
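
Python's standard colorsys module (which expresses HSV and HLS with all components
in the 0–1 range rather than in degrees and percentages) can be used to experiment with
these models; a small sketch:

    import colorsys

    # Hue 0.0 = red, saturation 1.0 = fully saturated, value/lightness as shown.
    print(colorsys.hsv_to_rgb(0.0, 1.0, 1.0))   # (1.0, 0.0, 0.0) -> pure red
    print(colorsys.hls_to_rgb(0.0, 0.5, 1.0))   # mid lightness   -> pure red
    print(colorsys.hls_to_rgb(0.0, 1.0, 1.0))   # full lightness washes out to white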

CIE LAB Model

CIE stands for Commission Internationale de l'Eclairage (International Commission on
Illumination). The commission was founded in 1913 as an autonomous international
board to provide a forum for the exchange of ideas and information and to set standards
for all things related to lighting. The CIE color model was developed to be completely
independent of any device or other means of emission or reproduction and is based as
closely as possible on how humans perceive color. According to the CIE, somewhere
between the optic nerve and the brain, retinal color stimuli are translated into distinctions
between light and dark, red and green, and blue and yellow. CIELAB indicates these
values with three axes: L*, a*, and b*.

The central vertical axis represents lightness (signified as L*) whose values run from 0
(black) to 100 (white).
The color axes are based on the fact that a color can't be both red and green, or both blue
and yellow, because these colors oppose each other. On each axis the values run from
positive to negative. On the a-a' axis, positive values indicate amounts of red while
negative values indicate amounts of green. On the b-b' axis, yellow is positive and blue is
negative. For both axes, zero is neutral gray.

Source : http://www.adobe.com
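
For reference, a hedged sketch of the standard CIE XYZ to L*a*b* conversion; the white
point values used below are the common D65 ones and are an assumption, not something
taken from this text:

    def xyz_to_lab(x, y, z, white=(95.047, 100.0, 108.883)):
        """Convert CIE XYZ to CIELAB relative to a reference white point."""
        def f(t):
            # Cube root above a small threshold, linear segment below it.
            return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

        fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), white))
        L = 116 * fy - 16          # lightness: 0 (black) to 100 (white)
        a = 500 * (fx - fy)        # positive = reddish, negative = greenish
        b = 200 * (fy - fz)        # positive = yellowish, negative = bluish
        return L, a, b

    print(xyz_to_lab(95.047, 100.0, 108.883))   # reference white -> (100.0, 0.0, 0.0)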

Calibration

Calibrating and characterizing your monitor is the first step in any color-managed
workflow. Calibration is the process of adjusting a device to a known set of conditions by
setting the monitor's gamma and a known white point. (In simplest terms, white point is
the balance between the red, green, and blue primaries which, combined in equal amounts
at full intensity, create white.)
Characterizing creates a monitor profile for use with a Color Management System
(CMS). Once the profile is created, it provides information to ICC-aware applications
about the monitor. An accurate monitor profile is critical to a color-managed workflow
since you will be making judgments based on the colors you see on your monitor.
Monitor calibration and characterization are best done with specialized software and
hardware. Adobe Gamma is a control panel utility for Mac OS and Windows. It is used to
calibrate and characterize your monitor, resulting in the creation of an ICC monitor
profile for use in Photoshop 5.x, InDesign, Illustrator, and all other ICC-aware
applications.

Using Adobe Gamma

With Adobe Gamma you can calibrate your monitor's contrast and brightness, gamma
(midtones), color balance, and white point. These settings are then used to characterize,
or create a profile for, your monitor.

Before you begin :


• Make sure your monitor has been turned on for at least half an hour so its display
has stabilized.
• Set the room lighting at the level you plan to maintain.
• Turn off any desktop patterns and change the background color on your monitor
to a light gray. This prevents the background color from interfering with your
color perception and helps you adjust the display to a neutral gray.

In Windows 95 or Windows 98, choose Start > Settings > Control Panel.
In the Control Panel, right-click the Adobe Gamma icon, and then choose Open from the
shortcut menu.
When accessing Adobe Gamma for the first time,
you will see the following window prompting you
to choose between the Step By Step utility using
the Adobe Gamma Wizard (Windows), or manual
setup using the Control Panel utility.

Step 1: Choose Your Initial Monitor Profile


The first window of the step-by-step setup will prompt you to confirm the monitor
profile. This is only a starting point. The calibration you're doing with Adobe Gamma
will further characterize the profile to match your monitor's particular characteristics. On
Windows these profiles are located in Windows\System\Color.

Step 2: Adjust Your Monitor's Brightness and Contrast


In this window of the step-by-step setup, you will adjust your monitor's brightness and
contrast.

Step 3: Confirm Your Monitor's Phosphors


In this window, you have the option to choose a pre-defined phosphor setting, or enter
custom phosphor values.

Step 4: Choose the Desired Gamma


Here you will adjust the brightness of the monitor's midtones by matching the brightness
of the center square to the pattern of horizontal lines.
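
Behind this adjustment is the usual gamma relationship between a pixel value and the
light the monitor emits; a hedged Python sketch (the 2.2 figure is a typical PC target, not
a value mandated by Adobe Gamma):

    def displayed_intensity(pixel, gamma=2.2):
        """Approximate relative luminance a CRT produces for a pixel value in [0, 1]."""
        return pixel ** gamma

    def gamma_correct(pixel, gamma=2.2):
        """Pre-compensate a pixel value so the displayed midtones look right."""
        return pixel ** (1 / gamma)

    print(round(displayed_intensity(0.5), 3))   # a 0.5 input displays as ~0.218
    print(round(gamma_correct(0.218), 3))       # corrected back to ~0.5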

Step 5: Choose the Hardware White Point


Next, you'll make sure that the white point setting matches the white point of your
monitor. The monitor's white point is the point at which equal combinations of red, green,
and blue light at full intensity create white.

Step 6: Choose the Adjusted White Point


This option, when available, is used to choose a working white point for monitor display
if that differs from the hardware white point. For example, if your hardware white point
is 6500K, but you want to edit an image at 5000K because that most closely represents
the environment in which it will normally be viewed, you can set your Adjusted White
Point to 5000K, and Adobe Gamma will change the monitor display accordingly.

Step 7: Compare Before and After Adjustments


At this stage you can compare the monitor's appearance before and after using Adobe
Gamma by simply selecting the "Before" and "After" options.

Step 8: Save Your New Monitor Profile


When you click Finish in the final setup window, a Save As dialog box appears. The file
name you select will become the ICC device profile that defines your monitor. You can
now use this profile in any application that is ICC-compliant.

Concepts related to Adobe Photoshop

Channels

Photoshop uses special grayscale channels to store an image's color information. If an
image has multiple layers, each layer has its own set of color channels. Color information
channels are created automatically when you open a new image. The image's color mode
(not its number of layers) determines the number of color channels created. For example,
an RGB image has four default channels: one for each of the red, green, and blue colors
plus a composite channel used for editing the image. You can create alpha channels to
store selections as 8-bit grayscale images. You use alpha channels to create and store
masks, which let you manipulate, isolate, and protect specific parts of an image. An
image can have up to 24 channels.
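
A minimal Pillow sketch of the same idea: split an RGB image into its grayscale colour channels and add an alpha channel that acts as a stored selection. The file names are placeholders.

    from PIL import Image

    im = Image.open("photo.jpg").convert("RGB")  # hypothetical source image

    # Each colour channel comes back as a separate 8-bit grayscale image,
    # much like the individual channels shown in Photoshop's Channels palette.
    r, g, b = im.split()
    r.save("channel_red.png")

    # An alpha channel is just another 8-bit grayscale image; an all-white
    # channel here means "everything selected".
    alpha = Image.new("L", im.size, 255)
    im.putalpha(alpha)
    im.save("photo_with_alpha.png")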

Masks

Masks let you isolate and protect areas of an image as you apply color changes, filters, or
other effects to the rest of the image. When you select part of an image, the area that is
not selected is "masked" or protected from editing. You can also use masks for complex
image editing such as gradually applying color or filter effects to an image. In addition,
masks let you save and reuse time-consuming selections as alpha channels. (Alpha
channels can be converted to selections and then used for image editing.) Because masks
are stored as 8-bit grayscale channels, you can refine and edit them using the full array of
painting and editing tools. In a mask, selected areas appear white, deselected areas appear
black, and partially selected areas appear as shades of gray.
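
As a sketch of how an 8-bit mask drives editing, the Pillow example below blends an edited version of an image with the original through a grayscale mask: white areas take the edited pixels, black areas are protected, and grays give a partial blend. All file names are hypothetical, and the mask is assumed to be the same size as the image.

    from PIL import Image, ImageFilter

    original = Image.open("photo.jpg").convert("RGB")
    edited = original.filter(ImageFilter.GaussianBlur(8))  # the "effect" to apply

    # 8-bit grayscale mask: white = selected (editable), black = protected.
    mask = Image.open("mask.png").convert("L")

    # Pixels come from `edited` where the mask is white and from `original`
    # where it is black; intermediate grays blend the two.
    result = Image.composite(edited, original, mask)
    result.save("masked_edit.png")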

Layers

Layers allow you to make changes to an image without altering your original image data.
For example, you might store photographs or elements of photographs on separate layers
and then combine them into one composite image. Think of layers as sheets of acetate
stacked one on top of the other. Where there is no image on a layer (that is, in places
where the layer is transparent), you can see through to the layers below. All layers in a
file have the same resolution, start with the same number of channels, and have the same
image mode (RGB, CMYK, or Grayscale). You can draw, edit, paste, and reposition
elements on one layer without disturbing the others. Until you combine, or merge, the
layers, each layer remains independent of the others in the image.
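
The acetate-sheet idea can be sketched with Pillow's alpha compositing: stack an RGBA layer over a background, and the background shows through wherever the layer is transparent. The sizes and colours below are purely illustrative.

    from PIL import Image, ImageDraw

    # Background "layer" and a transparent layer above it, same size and mode.
    background = Image.new("RGBA", (400, 300), (255, 255, 255, 255))
    layer = Image.new("RGBA", (400, 300), (0, 0, 0, 0))  # fully transparent

    draw = ImageDraw.Draw(layer)
    draw.ellipse((100, 75, 300, 225), fill=(200, 30, 30, 255))  # an opaque element

    # Composite the stack; untouched (transparent) areas of `layer` let the
    # background show through, like stacked sheets of acetate.
    flattened = Image.alpha_composite(background, layer)
    flattened.save("flattened.png")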

Indexed Color Mode

This mode uses at most 256 colors. When converting to indexed color, Photoshop builds
a color lookup table (CLUT), which stores and indexes the colors in the image. If a color
in the original image does not appear in the table, the program chooses the closest one or
simulates the color using available colors. By limiting the palette of colors, indexed color
can reduce file size while maintaining visual quality--for example, for a multimedia
animation application or a Web page.
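
A brief Pillow sketch of the conversion: reduce a 24-bit image to an 8-bit indexed image whose colour lookup table holds at most 256 entries. The file name is a placeholder.

    from PIL import Image

    im = Image.open("photo.jpg").convert("RGB")  # hypothetical 24-bit image

    # Build an adaptive 256-entry colour lookup table and remap every pixel to
    # its nearest palette entry ("P" is Pillow's indexed-colour mode).
    indexed = im.convert("P", palette=Image.ADAPTIVE, colors=256)
    indexed.save("indexed.png")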

Dither

To simulate colors not in the color table, you can dither the colors. Dithering mixes the
pixels of the available colors to simulate the missing colors. A higher amount dithers
more colors, but may increase file size. Dithering helps to reduce the blocky or banded
appearance of an image.
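
The effect is easiest to see in the extreme case where only two colours are available. The Pillow sketch below converts a grayscale image to pure black and white with and without Floyd-Steinberg dithering; the file name is a placeholder.

    from PIL import Image

    im = Image.open("photo.jpg").convert("L")  # hypothetical grayscale image

    # With only black and white available, dithering mixes the two to simulate
    # the missing grays; a hard threshold produces blocky, banded areas instead.
    hard = im.convert("1", dither=Image.NONE)
    dithered = im.convert("1", dither=Image.FLOYDSTEINBERG)

    hard.save("two_tone_hard.png")
    dithered.save("two_tone_dithered.png")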

Anti-alias

The Anti-alias option removes jagged edges from a pasted or placed selection by making
a subtle transition between the edges of the selection and its surrounding pixels. Turning
off this option produces a hard-edged transition between pixels--and therefore the
appearance of jagged edges--when vector artwork is rasterized.
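
One common way to get this effect when rasterizing shapes yourself is supersampling: draw at several times the final size and then scale down, so that edge pixels become the in-between shades that hide the jagged steps. A hedged Pillow sketch (the sizes are arbitrary):

    from PIL import Image, ImageDraw

    size, factor = 200, 4  # final size and supersampling factor

    # Draw the shape at 4x resolution with hard (aliased) edges...
    big = Image.new("RGB", (size * factor, size * factor), "white")
    ImageDraw.Draw(big).ellipse(
        (40, 40, size * factor - 40, size * factor - 40), fill="black")

    # ...then scale down; the resampling filter averages the edge pixels into
    # intermediate grays, giving the subtle transition described above.
    smooth = big.resize((size, size), Image.LANCZOS)
    smooth.save("antialiased_circle.png")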

Duotones

Duotones are used to increase the tonal range of a grayscale image. Although a grayscale
reproduction can display up to 256 levels of gray, a printing press can reproduce only
about 50 levels of gray per ink. This means that a grayscale image printed with only
black ink can look significantly coarser than the same image printed with two, three, or
four inks, each individual ink reproducing up to 50 levels of gray.
Sometimes duotones are printed using a black ink and a gray ink--the black for shadows
and the gray for midtones and highlights. More frequently, duotones are printed using a
colored ink for the highlight color. This technique produces an image with a slight tint to
it and significantly increases the image's dynamic range.
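
On screen, the tinted look can be roughly approximated by remapping a grayscale image onto two "ink" colours. The Pillow sketch below uses ImageOps.colorize for this; it only mimics the appearance, since real duotones are defined by per-ink curves at the printing stage, and the file name and tint colour are illustrative.

    from PIL import Image, ImageOps

    gray = Image.open("photo.jpg").convert("L")  # hypothetical grayscale image

    # Map shadows to black ink and highlights to a warm second "ink" colour.
    duotone_like = ImageOps.colorize(gray, black=(0, 0, 0), white=(255, 243, 214))
    duotone_like.save("duotone_like.png")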

Compression and File Formats

Compared to text files, image files occupy a lot of disk space. A 5-page DOC file may
typically occupy 500 KB of space, whereas a single uncompressed 800 by 600 image with
24-bit color depth takes up 800 * 600 * 24 bits (1,440,000 bytes), or about 1,400 KB, of
space.
Image compression helps to reduce both disk space requirements and the bit rate required
for processing. Image compression is achieved using software modules called CODECs. A
CODEC manipulates the image data to reduce its size for storage on disk; for displaying
the image, the CODEC decompresses the data again.
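
The arithmetic behind that estimate, as a quick sketch in Python:

    def uncompressed_size_kb(width, height, bits_per_pixel):
        """Raw size of a bitmap with no compression applied, in kilobytes."""
        total_bits = width * height * bits_per_pixel
        return total_bits / 8 / 1024  # bits -> bytes -> KB

    # An 800 x 600 image at 24 bits per pixel is roughly 1,400 KB (about 1.4 MB)
    # before any compression is applied.
    print(round(uncompressed_size_kb(800, 600, 24)))  # -> 1406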

Image compression can be classified into two categories : lossless and lossy.
In lossless compression the original image data is not changed permanently during
compression. After decompression therefore the original data can be retrieved exactly.
Text compression is a good example of lossless compression. Spreadsheets, word
processor files, database files, etc. usually contain repeated sequences of characters.
Compression techniques usually reduce repeated characters to a count value and thereby
save disk space. When decompressed the repeated characters are reinstated. There is no
loss of information in this approach. Similarly, images also contain information that is
repetitive in nature i.e. a series of pixels having the same color values, which allows
replacement by count values. Compression ratios are typically of the order of 2 to 5
times. An example of lossless compression is run-length encoding (RLE).
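
A minimal run-length encoding sketch, for illustration only (real RLE codecs, such as the ones used in BMP or TGA files, differ in detail): runs of identical values are collapsed into (count, value) pairs and restored exactly on decode.

    def rle_encode(pixels):
        """Collapse runs of identical values into (count, value) pairs."""
        encoded = []
        run_value, run_length = pixels[0], 1
        for value in pixels[1:]:
            if value == run_value:
                run_length += 1
            else:
                encoded.append((run_length, run_value))
                run_value, run_length = value, 1
        encoded.append((run_length, run_value))
        return encoded

    def rle_decode(encoded):
        """Reinstate the repeated values exactly -- no information is lost."""
        return [value for count, value in encoded for _ in range(count)]

    row = [255] * 12 + [0] * 3 + [255] * 5   # a scanline with long runs
    packed = rle_encode(row)                 # [(12, 255), (3, 0), (5, 255)]
    assert rle_decode(packed) == row         # lossless round trip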

While lossless compression is always desirable, images with very little redundancy do not
produce acceptable results with this technique. In lossy compression, parts of the original
data are discarded permanently to reduce file size. After decompression there is some
degradation of image quality. However, if done correctly, the quality loss may not be
immediately apparent because of the limitations of the human eye and also because of the
tendency of the human senses to bridge the gaps in information. An important
consideration is how much information can be lost before the human eye can tell the
difference. Compression ratios of the order of 10 to 50 times may be achieved by this
method. Some of the popular lossy compression methods include JPEG (for still image)
and MPEG (for motion video).

BMP : BMP is the standard Windows image format on DOS and Windows-compatible
computers. The BMP format supports RGB, indexed-color, grayscale, and Bitmap color
modes, and does not support alpha channels.

GIF : The Graphics Interchange Format (GIF) is the 8-bit file format commonly used to
display indexed-color graphics and images in hypertext markup language (HTML)
documents over the World Wide Web and other online services. The GIF format does not
support alpha channels but supports background transparency. The GIF format uses LZW
compression, which is a lossless compression method. However, because GIF files are
limited to 256 colors, optimizing an original 24-bit image as an 8-bit GIF can result in the
loss of color information.

JPEG : The Joint Photographic Experts Group (JPEG) format is commonly used to
display photographs and other continuous-tone images in hypertext markup language
(HTML) documents over the World Wide Web in 24-bit format. Unlike the GIF format,
JPEG retains all the color information in an RGB image but reduces file size by selectively
discarding data, which makes it a lossy format. A higher level of compression results in
lower image quality, and a lower level of compression results in better image quality.
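
A brief Pillow sketch of that quality/size trade-off (the file names are placeholders):

    from PIL import Image
    import os

    im = Image.open("photo.jpg").convert("RGB")  # hypothetical 24-bit image

    # Higher quality keeps more data (bigger file); lower quality discards more
    # data (smaller file, more visible compression artefacts).
    for quality in (90, 50, 10):
        name = f"photo_q{quality}.jpg"
        im.save(name, "JPEG", quality=quality)
        print(name, os.path.getsize(name), "bytes")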

IMAGE PROCESSING : OUTPUT

Printers

Laser Printer

The laser printer was introduced by Hewlett-Packard in 1984, based on technology
developed by Canon. It worked in a similar way to a photocopier, the difference being the
light source. With a photocopier a page is scanned with a bright light, while with a laser
printer the light source is, not surprisingly, a laser. After that the process is much the
same, with the light creating an electrostatic image of the page onto a charged
photoreceptor, which in turn attracts toner in the shape of an electrostatic charge. Laser
printers quickly became popular due to the high quality of their print and their relatively
low running costs.

Operation

1 : Charging Roll applies a uniform negative charge to the drum.
2 : Laser places the image on the drum by charging it positively at image points.
3 : Developer Roll applies toner, which is attracted by the positive charge.
4 : Transfer Roll attracts the toner off the drum and onto the paper.
5 : Toner is melted (fused) to the paper with heat from the Hot Roll.

Laser printers are usually monochrome devices, but as with most mono technologies,
laser printing can be adapted to colour. It does this by using cyan, magenta and yellow in
combination to produce the different printable colours. Four passes through the electro-
photographic process are performed, generally placing toners on the page one at a time or
building up the four-colour image on an intermediate transfer surface.

Inkjet Printer

Inkjet printing, like laser printing, is a non-impact method. Ink is emitted from nozzles as
they pass over a variety of possible media, and the operation of an inkjet printer is easy to
visualise: liquid ink in various colours being squirted at the paper to build up an image. A
print head scans the page in horizontal strips, using a motor assembly to move it from left
to right and back, as another motor assembly rolls the paper in vertical steps. A strip of
the image is printed, then the paper moves on, ready for the next strip. To speed things
up, the print head doesn't print just a single row of pixels in each pass, but a vertical
column of pixels at a time.

Most inkjets use thermal technology, whereby heat is used to fire ink onto the paper.
There are three main stages with this method. The ink emission is initiated by heating the
ink to create a bubble until the pressure forces it to burst and hit the paper.
The bubble then collapses as the element cools, and the resulting vacuum draws ink from
the reservoir to replace the ink that was ejected. This is the method favoured by Canon
and Hewlett-Packard.

Thermal technology imposes certain limitations on the printing process in that whatever
type of ink is used, it must be resistant to heat because the firing process is heat-based.
The use of heat in thermal printers creates a need for a cooling process as well, which
levies a small time overhead on the printing process.

Epson's proprietary piezo-electric technology uses a piezo crystal at the back of the ink
reservoir. It exploits the property of certain crystals that causes them to oscillate when
subjected to an electrical voltage. Whenever a dot is required, a current is applied to the
piezo element; the element flexes and in so doing forces a drop of ink out of the nozzle.

There are several advantages to the piezo method. The process allows more control over
the shape and size of ink droplet release. The tiny fluctuations in the crystal allow for
smaller droplet sizes and hence higher nozzle density. Also, unlike with thermal
technology, the ink does not have to be heated and cooled between each cycle. This saves
time, and the ink itself is tailored more for its absorption properties than its ability to
withstand high temperatures. This allows more freedom for developing new chemical
properties in inks.

The ink used in inkjet technology is water-based and this poses other problems. The
results from some of the earlier inkjet printers were prone to smudging and running, but
over the past few years there have been enormous improvements in ink chemistry. Oil-
based ink is not really a solution to the problem because it would impose a far higher
maintenance cost on the hardware. Printer manufacturers are making continual progress
in the development of water-resistant inks, but inkjet output is still less resistant to
moisture and smudging than laser output.
