
SRI MANAKULA VINAYAGAR ENGINEERING COLLEGE DEPT OF CSE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

GRAPHICS AND IMAGE PROCESSING


IV SEMESTER


CS T46 GRAPHICS AND IMAGE PROCESSING


UNIT I
Graphics Systems and Graphical User Interface: Pixel – Resolution – types of video display devices –
Graphical input devices – output devices – Hard copy devices – Direct screen interaction – Logical input
function – GKS User dialogue – Interactive picture construction techniques.
UNIT II
Geometric Display Primitives and Attributes: Geometric display primitives – Points – Lines and
Polygons – Point display method – Line drawing methods.
2D Transformations and Viewing: Transformations – types – matrix representation – Concatenation –
Scaling – Rotation – Translation – Shearing – Mirroring – Homogeneous coordinates.
Window to view port transformations: Windowing And Clipping: Point – Lines – Polygons - boundary
intersection methods.
UNIT III
Digital Image Fundamentals and Transforms: Nature of Image processing – related fields – Image
representations – Image types – Image processing operations – Applications of Image processing –
Imaging system – Image Acquisition – Image Sampling and Quantization – Image quality – Image
storage and file formats - Image processing operations - Image Transforms - need for Transforms –
Fourier Transforms and its properties – Introduction to Walsh, Hadamard, Discrete Cosine, Haar, Slant,
SVD, KL and Hotelling Transforms.
UNIT IV
Image Enhancement and Restoration: Image Quality and need for Enhancements – Point operations -
Histogram Techniques – Spatial filtering concepts – Frequency Domain Filtering – Image Smoothening –
Image Sharpening - Image degradation and Noise Models – Introduction to Restoration Techniques.
UNIT V
Image Compression: Compression Models and measures – coding types – Types of Redundancy -
Lossless compression algorithms – Lossy compression algorithms – Introduction to compression
standards.
Image Segmentation: Detection of Discontinuities – Edge Detection – Thresholding – Region Based
Segmentation.
TEXTBOOK
1. Donald D. Hearn, M. Pauline Baker and Warren Carithers, “Computer Graphics with OpenGL”,
Fourth Edition, Pearson Education, 2010.
2. S. Sridhar, “Digital Image Processing”, Oxford Press, First edition, 2011.
REFERENCES
1. Anil Jain K, “Fundamentals of Digital Image Processing”, Prentice-Hall of India, 1989.
2. Sid Ahmed, “Image Processing”, McGraw-Hill, 1995.
3. Gonzalez R. C and Woods R.E., “Digital Image Processing”, Pearson Education, Second edition,
2002.
4. Newman W.M. and Sproull R.F., "Principles of Interactive Computer Graphics", Tata McGraw-
Hill, Second edition, 2000.
5. Foley J.D., Van Dam A., Feiner S.K. and Hughes J.F., "Computer Graphics", Second edition,
Addison-Wesley, 1993.
WEBSITE
1. http://nptel.ac.in/courses/106106090/ for graphics
2. http://nptel.ac.in/courses/106105032/ for digital image processing


Department of Computer Science and Engineering

Subject Name: GRAPHICS & IMAGE PROCESSING Subject Code: CS T46

Prepared by:
Mrs. C. Kalpana, Asst.Prof /CSE
Mrs. P. Bhavani, Asst.Professor/CSE

Verified by: Approved by:

UNIT- I
Graphics Systems and Graphical User Interface: Pixel – Resolution – types of video display
devices – Graphical input devices – output devices – Hard copy devices – Direct screen interaction –
Logical input function – GKS User dialogue – Interactive picture construction techniques.

1.1 Introduction to Graphics


Definition
Graphics are visual presentations on some surface, such as a wall, canvas, computer screen
or paper. Graphics can be functional, artistic, realistic or imaginary. E.g.: photographs,
drawings, line art, graphs, diagrams, symbols, maps, geometric designs and engineering drawings.

APPLICATION OF COMPUTER GRAPHICS


Computer Graphics has numerous applications, some of which are listed below −
 Computer graphics user interfaces (GUIs) − A graphic, mouse-oriented paradigm
which allows the user to interact with a computer.
 Business presentation graphics − "A picture is worth a thousand words".


 Cartography − Drawing maps.


 Weather Maps − Real-time mapping, symbolic representations.
 Satellite Imaging − Geodesic images.
 Photo Enhancement − Sharpening blurred photos.
 Medical imaging − MRIs, CAT scans, etc. - Non-invasive internal examination.
 Engineering drawings − mechanical, electrical, civil, etc. - Replacing the blueprints of
the past.
 Typography − The use of character images in publishing - replacing the hard type of the
past.
 Architecture − Construction plans, exterior sketches - replacing the blueprints and hand
drawings of the past.
 Art − Computers provide a new medium for artists.
 Training − Flight simulators, computer aided instruction, etc.
 Entertainment − Movies and games.
 Simulation and modeling − Replacing physical modeling and enactments

GRAPHICS SOFTWARE PACKAGES


 Early graphics libraries:
 GKS (Graphical Kernel System)
 PHIGS
 OpenGL (Silicon Graphics)
 Java2D (Sun Microsystems)
 Java3D (Sun Microsystems)
 VRML (Silicon Graphics)
Graphics: Main Components
 Theory
 Analytical Geometry
 Vectors and Matrices
 Algorithms
 Eg: Line drawing, Filling etc.
 Implementation


 Programming (OpenGL)

1.2 PIXEL
Definition:
A pixel is generally thought of as the smallest complete sample of an image.
(or)
A pixel is a single point in a graphic image.
 Each such point (or information element) is not really a dot or a square; it is an abstract sample.
 Pixel density is measured in dpi (dots per inch) or ppi (pixels per inch).
The more pixels used to represent an image, the closer the result can resemble the original.
History
 The term Pixel is the Abbreviation for “Picture Element “.
 The word pixel was first published in 1965 by Frederic C. Billingsley to describe the
picture elements of video images from space probes to the moon & mars.
 The word Pix was actually coined in 1932 in a magazine as an abbreviation for the word
pictures in reference to movies.
 The earliest publication of the term "picture element" was in "Wireless World" magazine
in 1927.
Terminologies
Some of the terminologies related to Pixel are given below:
i) Resolution
ii) Sub pixel
iii) Megapixel
iv) Bits per pixel (bpp)
Resolution
The number of pixels in an image is called the Resolution.
Eg: a 640 by 480 (640 × 480) display,
i.e. 640 pixels from side to side and 480 pixels from top to bottom.
640 × 480 = 307,200 pixels, or about 0.3 megapixels.
Bits per Pixel
 The number of distinct colors that can be represented by a pixel depends on the number
of “bits per pixel” (bpp).
 The maximum number of colors a pixel can take can be found by taking two to the power
of the color depth.
Eg: 2^8 = 256 colors (8 bpp)
2^16 = 65,536 colors (16 bpp, "high color" or thousands of colors)
2^24 = 16,777,216 colors (24 bpp, "true color" or millions of colors)
48 bpp gives 2^48 colors (more than enough for all practical purposes; used in flatbed scanners)
 8 bpp (256 colors)
The palette of 256 colors is stored in the computer's video memory.
Examples: the animated startup logos of Windows 95 and Windows 98.
 16 bpp
The pixel value is divided into its RGB components: 5 bits for red, 6 bits for green and
5 bits for blue.
 24 bpp
Divided into its RGB components, with 8 bits each for red, green and blue (a short sketch of
these packings follows).
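
To make the arithmetic above concrete, the short C sketch below computes the number of distinct colors for a given bit depth and packs an RGB triple into the 16 bpp (5-6-5) and 24 bpp layouts just described. It is an illustrative sketch only; the function names are ours and not from any particular graphics library.

#include <stdio.h>
#include <stdint.h>

/* Number of distinct colors representable with bpp bits per pixel: 2^bpp. */
static unsigned long long colors_for_bpp(unsigned bpp) {
    return 1ULL << bpp;
}

/* Pack 8-bit R, G, B values into a 16 bpp pixel (5 bits red, 6 bits green, 5 bits blue). */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Pack 8-bit R, G, B values into a 24 bpp "true color" pixel (8 bits per channel). */
static uint32_t pack_rgb888(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
}

int main(void) {
    printf("8 bpp  -> %llu colors\n", colors_for_bpp(8));    /* 256 */
    printf("16 bpp -> %llu colors\n", colors_for_bpp(16));   /* 65,536 */
    printf("24 bpp -> %llu colors\n", colors_for_bpp(24));   /* 16,777,216 */

    /* A pure yellow pixel (full red and green, no blue) in both packings. */
    printf("yellow at 16 bpp: 0x%04X\n", pack_rgb565(255, 255, 0));
    printf("yellow at 24 bpp: 0x%06X\n", (unsigned)pack_rgb888(255, 255, 0));
    return 0;
}
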
Sub Pixel
 Many display and image acquisition systems are not capable of displaying (or) sensing
the different color channels at the same site.
 The above problem is generally resolved by using multiple sub pixels, each of which
handles a single color channel Red, Green (or) Blue.
Example,
i) LCDs typically divide each pixel horizontally into three sub pixels.
ii) Some LCD displays divide each pixel into four sub pixels: one red, one green
and two blue.

Mega pixel
A megapixel is 1 million pixels.
Eg: 2048 * 1536 pixels = 3.1 megapixel (3,145,728 pixels)


Several other terms derived from the idea of the pixel have been created for computer graphics
and image processing use, namely:
 Voxel - volume element
 Texel - texture element
 Surfel - surface element
1.3 Resolution
Resolution is defined as the total number of pixels per image.
Image resolution
 Image resolution describes the detail an image holds. The term applies to raster digital
images, film images, and other types of images. Higher resolution means more image
detail.
 Image resolution can be measured in various ways. Basically, resolution quantifies how
close lines can be to each other and still be visibly resolved.
 Resolution units can be tied to physical sizes (e.g. lines per mm, lines per inch), to the
overall size of a picture (lines per picture height, also known simply as lines, or TV
lines), or to angular subtense. Line pairs are often used instead of lines; a line pair
comprises a dark line and an adjacent light line.

Pixel resolution

 The term resolution is often used for a pixel count in digital imaging.
 When pixel counts are referred to as resolution, the convention is to describe the pixel
resolution with a set of two positive integers, where the first number is the
number of pixel columns (width) and the second is the number of pixel rows (height), for
example 640 by 480.

 Another convention is the number of megapixels, which can be calculated by multiplying pixel
columns by pixel rows and dividing by one million (a small sketch of this calculation follows
this list).

 Other conventions include describing pixels per length unit or pixels per area unit, such
as pixels per inch or per square inch.


 These pixel counts are not true resolutions, but they are widely referred to as such; they serve
as upper bounds on image resolution.
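
As a small illustration of the conventions above (pixel columns by rows, megapixels, and pixels per inch or per square inch), the following C sketch computes these quantities for an assumed 640 × 480 image printed over an assumed 4 × 3 inch area; the specific sizes are only example values.

#include <stdio.h>

int main(void) {
    /* Pixel resolution stated as columns (width) by rows (height). */
    int width  = 640;    /* pixel columns (example value) */
    int height = 480;    /* pixel rows (example value)    */

    long total_pixels = (long)width * height;     /* 307,200 pixels        */
    double megapixels = total_pixels / 1.0e6;     /* divide by one million */

    /* Pixels per inch when the image is spread over an assumed physical size. */
    double print_width_in  = 4.0;                 /* inches (assumed) */
    double print_height_in = 3.0;                 /* inches (assumed) */
    double ppi = width / print_width_in;          /* pixels per inch along the width */
    double pixels_per_sq_inch = total_pixels / (print_width_in * print_height_in);

    printf("%d x %d = %ld pixels (%.2f megapixels)\n", width, height, total_pixels, megapixels);
    printf("printed at %.0f x %.0f inches: %.0f ppi, %.0f pixels per square inch\n",
           print_width_in, print_height_in, ppi, pixels_per_sq_inch);
    return 0;
}
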

Spatial resolution
 The measure of how closely lines can be resolved in an image is called spatial resolution,
and it depends on properties of the system creating the image, not just the pixel resolution
in pixels per inch (ppi).
 For practical purposes the clarity of the image is decided by its spatial resolution, not the
number of pixels in an image. In effect, spatial resolution refers to the number of
independent pixel values per unit length.
Spectral resolution
Color images distinguish light of different spectra. Multi-spectral images resolve even
finer differences of spectrum or wavelength than is needed to reproduce color. That is, they can
have higher spectral resolution.
Display resolution

 The display resolution of a digital television or display device is the number of distinct
pixels in each dimension that can be displayed. It can be an ambiguous term especially as
the displayed resolution is controlled by all different factors in cathode ray tube (CRT)
and flat panel or projection displays using fixed picture-element (pixel) arrays.
 One use of the term “display resolution” applies to fixed-pixel-array displays such as
plasma display panels (PDPs), liquid crystal displays (LCDs), Digital Light Processing
(DLP) projectors, or similar technologies, and is simply the physical number of columns
and rows of pixels creating the display (e.g., 1920×1080).

 A consequence of having a fixed grid display is that, for multi-format video inputs, all
displays need a "scaling engine" (a digital video processor that includes a memory array)
to match the incoming picture format to the display.

 The term “display resolution” is usually used to mean pixel dimensions, the number of
pixels in each dimension (e.g., 1920×1080), which does not tell anything about the
resolution of the display on which the image is actually formed: resolution properly refers
to the pixel density, the number of pixels per unit distance or area, not total number of
pixels.

 In digital measurement, the display resolution would be given in pixels per inch. In
analog measurement, if the screen is 10 inches high, then the horizontal resolution is
measured across a square 10 inches wide. This is typically stated as "lines of horizontal
resolution, per picture height".

Computer Monitors
 Computer monitors have higher resolutions than most televisions.
 1024×768 (Extended Graphics Array) was the most common display resolution. When a
computer display resolution is set higher than the physical screen resolution, some video
drivers make the virtual screen scrollable over the physical screen, thus realizing a two-
dimensional virtual desktop with its viewport.
 Most LCD manufacturers do make note of the panel's native resolution as working in a
non-native resolution on LCDs will result in a poorer image, due to dropping of pixels to
make the image fit (when using DVI) or insufficient sampling of the analog signal (when
using VGA connector).
1.4 Video Display Devices
 The primary output device in a graphics system is a video monitor.
 The operation of most video monitors is based on the standard cathode-ray tube.
The video display devices discussed here are given below:
 Refresh cathode ray tubes
 Raster scan displays
 Random scan displays
 Color CRT monitors
 Direct view storage tubes
 Flat panel displays &
 Three dimensional viewing devices
Refresh Cathode Ray Tubes

 The electron gun emits a beam of electrons (cathode rays).


 The electron beam passes through focusing and deflection systems that direct it towards specified
positions on the phosphor-coated screen.

 When the beam hits the screen, the phosphor emits a small spot of light at each position contacted by
the electron beam.

 It redraws the picture by directing the electron beam back over the same screen points quickly.

Fig: Basic design of a magnetic deflection CRT.

Fig: Operation of an electron gun with an accelerating anode.


Working
 The primary components of an electron gun are heated metal cathode and a control grid.
 Heat is supplied to the cathode by directing a current through a coil of wire called the
filament inside the cylindrical cathode structure.
 This causes electrons to be "boiled off" the hot cathode surface.
 In the vacuum inside the CRT envelope, the free negatively charged electrons are then
accelerated toward the phosphor coating by a high positive voltage.
 The accelerating voltage can be generated with a positively charged metal coating on the
inside of the CRT near the phosphor screen.
Deflection of the Electron Beam


 Deflection of the electron beam can be controlled either with electric fields or with
magnetic fields.
 Cathode ray tubes constructed with magnetic deflection coils mounted on the outside of
the CRT envelope.
 Two pairs of coils are used.

Fig: Electrostatic deflection of the electron beam in a CRT.


 One pair is mounted on the top & bottom of the neck.
 Other pair is mounted on the opposite sides of the neck.
 The magnetic field produced by each pair of coils results in a transverse deflection force
that is perpendicular both to the direction of the magnetic field and to the direction of travel
of the electron beam.
Raster Scan Displays
The most common type of graphics monitor employing a CRT is the raster-scan display, based
on television technology.
Working Principle

 In a raster scan system, the electron beam is swept across the screen, one row at a time
from top to bottom. As the electron beam moves across each row, the beam intensity is
turned on and off to create a pattern of illuminated spots.

 Picture definition is stored in memory area called the Refresh Buffer or Frame Buffer.
This memory area holds the set of intensity values for all the screen points. Stored
intensity values are then retrieved from the refresh buffer and “painted” on the screen
one row (scan line) at a time as shown in the following illustration.


 Each screen point is referred to as a pixel (picture element) or pel. At the end of each
scan line, the electron beam returns to the left side of the screen to begin displaying the
next scan line.

Fig: A raster-scan system displays an object as a set of discrete points across each
scan line.
 Picture definition is stored in memory area called Refresh buffer (or) Frame buffer.
 The frame buffer holds the set of intensity values for all the screen points.


 The stored intensity values are then retrieved from the refresh buffer and painted on the
screen one row at a time.
 One row is also referred to as Scan Line.
 Each screen point is referred to as Pixel

Bit per Pixel


 Intensity range for pixel positions depend on the capability of the raster system.
 In black & white system each screen point is either on (or) off.
Only one bit per pixel is needed to control the intensity of screen positions.
1 – electron beam is turned on
0 – electron beam is turned off
 High-quality systems use up to 24 bits per pixel, so the frame buffer of such a system requires
several megabytes of storage (estimated in the sketch after this list).
 Bitmap - a frame buffer with one bit per pixel is commonly referred to as a bitmap.
 Pixmap - a frame buffer with multiple bits per pixel is commonly referred to as a pixmap.
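
The sketch below estimates the frame-buffer storage needed for a given screen resolution and bit depth, following the reasoning above (one bit per pixel for a bitmap, up to 24 bits per pixel for high-quality systems). The resolutions used are just example values.

#include <stdio.h>

/* Frame-buffer size in bytes for a display of width x height pixels at bpp bits per pixel. */
static long framebuffer_bytes(int width, int height, int bpp) {
    long bits = (long)width * height * bpp;
    return bits / 8;                      /* 8 bits per byte */
}

int main(void) {
    /* A bitmap: one bit per pixel. */
    printf("640 x 480 at 1 bpp   : %ld bytes\n", framebuffer_bytes(640, 480, 1));
    /* A pixmap with 8 bits per pixel (256 intensity levels or colors). */
    printf("640 x 480 at 8 bpp   : %ld bytes\n", framebuffer_bytes(640, 480, 8));
    /* A true-color frame buffer: 24 bits per pixel needs several megabytes at high resolutions. */
    printf("1920 x 1080 at 24 bpp: %.1f MB\n",
           framebuffer_bytes(1920, 1080, 24) / (1024.0 * 1024.0));
    return 0;
}
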
Refreshing
 Refreshing of raster-scan displays is carried out at the rate of 60 to 80 frames per second
(the timing sketch after this list gives a feel for these numbers).
 Refresh rates are described in units of cycles per second or Hertz.
1 cycle – 1 frame
 At the end of each scan line the electron beam returns to the left side of the screen to
begin displaying the next scan line.
 The return of the electron beam to the left of the screen is called the horizontal retrace.
 At the end of each frame the electron beam returns to the top left corner of the
screen.This is referred to as vertical retrace.
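
As a rough, illustrative calculation (the 60 Hz rate and 480 scan lines are assumed values, not figures from the text), the sketch below works out how much time is available per frame and per scan line, ignoring the horizontal and vertical retrace times just discussed.

#include <stdio.h>

int main(void) {
    double refresh_hz = 60.0;     /* frames per second (assumed) */
    int scan_lines    = 480;      /* visible scan lines per frame (assumed) */

    double frame_time_ms = 1000.0 / refresh_hz;                     /* time to draw one frame */
    double line_time_us  = (frame_time_ms * 1000.0) / scan_lines;   /* time per scan line     */

    printf("Frame time: %.2f ms per frame\n", frame_time_ms);       /* about 16.67 ms */
    printf("Line time : %.1f us per scan line (retrace ignored)\n", line_time_us);
    return 0;
}
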
Random – Scan Displays
 In a Random- scan display unit a CRT has the electron beam directed only to the parts of
the screen where a picture is to be drawn.


 Random-scan monitors draw a picture one line at a time and for this reason they are
referred to as vector displays, stroke-writing displays or calligraphic displays.
 The component lines of a picture can be drawn and refreshed by a random-scan system
in any specified order.
 A pen plotter operates in a similar way and is an example of a random-scan, hard-copy
device.
Refresh Rate
 Refresh rate depends on the number of lines to be displayed.
 A refresh buffer or refresh display file or display list or display program is used to store
picture definition. The line drawing commands are stored in the refresh buffer.
 To display a specified picture the system processes the set of drawing commands.
 Random scan displays are designed to draw all the components lines of a picture 30 to 60
times each second.
 Random scan system are designed for line drawing applications and cannot display
realistic shaded scenes.
 Random scan displays produce smooth line drawings because the CRT beam directly
follows the line path.


Color CRT Monitors


A CRT monitor displays color pictures by using a combination of phosphors that emit different
colored light.
Two basic techniques are used for producing color display with a CRT namely
i) Beam penetration method.
ii) Shadow mask method.
Beam Penetration Method
 This method is used with random scan monitors.
 Here two layers of phosphor namely red & green are coated onto the inside of the CRT
screen.
The displayed color depends on how far the electron beam penetrates into the phosphor layers.
 A beam of slow electrons excites only the outer
red layer.
 A beam of very fast electrons penetrates through the red layer and excites the
inner green layer.
 An intermediate beam speed produces orange & yellow color (combinations of
red and green lights)
 The speed of the electrons & hence the screen color at any point is controlled by
the beam acceleration voltage.

Shadow Mask Methods


 This method is commonly used in Raster scan systems.


A shadow mask CRT has three phosphor color dots at each pixel position.
One for red light, one for green light & one for blue light.
 This type of CRT has three electron guns, one for each color dot.
 This CRT also has a shadow mask grid just behind the phosphor coated screen.
 The three electron beams are deflected & focused as a group onto the shadow mask.
 The holes in the shadow mask are aligned with the phosphor dot patterns.
 When the beams pass through the holes in the shadow mask they activate a dot triangle.
 Another configuration for the electron guns is an in-line arrangement where the 3
electron beams are aligned to a single scan line.
 By varying the intensity of levels of the three electron beams various color variations are
obtained.

The color obtained depends on the relative amounts of light emitted by the red, green and blue phosphors.
 A white area is produced by activating all three electron beams with equal
intensity.
Color Combinations
Yellow – green & red beams
Magenta – blue & red beams
Cyan – blue & green beams
More sophisticated systems can set intermediate intensity levels for the electron beams, as
sketched below.
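
A minimal sketch of the color combinations listed above: treating the three electron-beam intensities as an RGB triple in the range 0 to 255, the code reports the resulting screen color for a few representative settings. The struct name and the 0-255 scale are our assumptions, used only for illustration.

#include <stdio.h>

/* Beam intensities for the red, green and blue phosphor dots (0 = off, 255 = full). */
struct BeamIntensity { unsigned char r, g, b; };

static const char *describe(struct BeamIntensity p) {
    if (p.r == 255 && p.g == 255 && p.b == 255) return "white";
    if (p.r == 255 && p.g == 255 && p.b == 0)   return "yellow  (red + green beams)";
    if (p.r == 255 && p.g == 0   && p.b == 255) return "magenta (red + blue beams)";
    if (p.r == 0   && p.g == 255 && p.b == 255) return "cyan    (green + blue beams)";
    return "an intermediate color (intermediate beam intensities)";
}

int main(void) {
    struct BeamIntensity samples[] = {
        {255, 255, 255}, {255, 255, 0}, {255, 0, 255}, {0, 255, 255}, {128, 64, 200}
    };
    for (int i = 0; i < 5; i++)
        printf("(%3u, %3u, %3u) -> %s\n",
               samples[i].r, samples[i].g, samples[i].b, describe(samples[i]));
    return 0;
}
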
Direct view storage tubes
An alternative method for maintaining a screen image is to store the picture information inside
the CRT instead of refreshing the screen.
Two electron guns are used in DVST
DVST stores the picture information as a charge distribution just behind the phosphor coated
screen.
The primary electron gun stores the picture pattern.
The flood gun maintains the picture display.
Advantages
i) No refreshing is needed.
ii) Very complex pictures can be displayed at very high resolutions without flicker.
Disadvantages


i) Selected parts of a picture cannot be erased.


ii) They ordinarily do not display color.

Flat Panel Display


Introduction
The term refers to a class of video devices that have reduced volume, weight & power
requirements.
 A significant feature of flat panel displays is that they are thinner than CRTs.
 Flat panel displays are available as pocket notepads.
Uses of flat panel displays are
 TV monitors
 Calculators
 Pocket video games
 Laptop computers
 Armrest viewing of movies on airplanes.
 Advertisement board in elevators
Categories
There are two major categories of flat panel displays
 Emissive displays
 Non emissive displays

Emissive displays
Convert electrical energy into light.
Examples
i) Plasma panels
ii) Thin film electroluminescent displays
iii) Light emitting diodes.
Plasma Panels
 Plasma panels are also called gas-discharge displays.
 Plasma panels are constructed by filling the region between two glass plates with a
mixture of gases that usually include Neon.


 A series of vertical plates is placed in one glass panel.


 A series of horizontal plates is built into the other glass panel.
 When firing voltages are applied to a pair of horizontal & vertical conductors the gas at
the intersection of the plates breaks down into glowing plasma of electrons & ions.
 Alternating current AC methods are used to provide faster application of the firing
voltages & thus brighter displays.
 One disadvantage of plasma panels was that they were monochromatic devices but
systems have been developed that are now capable of displaying color & grayscale.
Thin Film Electroluminescent Displays
 These displays are similar in construction to a plasma panel.
 The difference is that the region between the glass plates is filled with a phosphor, such
as Zinc Sulphide doped with manganese instead of a gas.
 When a high voltage is applied to a pair of crossing electrodes the phosphor becomes a
conductor in the area of intersection of the electrodes.
 The manganese atoms absorb the electrical energy and then release it as a spot of light.
 Electroluminescent displays require more power than plasma panels.
 Good color & gray scale displays are hard to achieve.
Light Emitting Diode (LED)
 A matrix of diodes is arranged to form the pixel positions in the display.
 Picture definition is stored in a refresh buffer.
 Information is read from the refresh buffer & converted to voltage levels that are applied
to the diodes to produce the light patterns in the display.
Non emissive displays
These displays use optical effects to convert sunlight from some other source into
graphics patterns.
Examples
Liquid crystal device.
Liquid Crystal Device (LCD’s)
 These displays are commonly used in small systems such as calculators & portable laptop
computers.
 The term liquid crystal refers to compounds that have a crystalline arrangement of
molecules yet flow like a liquid.


 Flat panel displays use nematic (thread like ) liquid crystal compounds.
 The liquid crystal material is sandwiched between two glass plates each containing a light
polarizer at right angles to the plate.
 Horizontal conductors are built into one glass plate & vertical conductors are built into
another glass plate.
 The intersection of the two conductors define a pixel position.
 Polarized light passing through the material is twisted so that it will pass through the
opposite polarizer.
 The light is then reflected back to the viewer.
 This type of flat panel device is referred to as a passive matrix LCD.
 Another method for constructing LCDs is to place a transistor at each pixel location
using thin-film transistor technology. These devices are called active-matrix LCDs.
Three Dimensional Viewing Devices
 Graphics monitors for the display of three dimensional scenes have been devised using a
technique that reflects a CRT image from a vibrating, flexible mirror.
 As the mirror vibrates it changes focal length.
 These vibrations are synchronized with the display of an object on a CRT so that each point on the
object is reflected from the mirror into a spatial position corresponding to the distance of
that point from a specified viewing position.
This allows a person to walk around an object or scene and view it from different sides.
Real – time example
 Genisco SpaceGraph system – used in medical applications
 For analyzing data from ultrasonography & CAT scan devices.
 In geological applications to analyze topographical & seismic data.

1.5 Graphical devices


Input Devices
 Various devices are available for data input on graphics workstations.
 Most systems have a keyboard & one or more additional devices specially designed for
interactive input.
The various input devices that are discussed in this section are
i) keyboards


ii) mouse
iii) trackball & space ball
iv) joysticks
v) data glove
vi) digitizers
vii) image scanners
viii) touch panels
ix) light pens
x) voice systems

Keyboards
 The keyboard is an efficient device for inputting such nongraphic data as picture labels
associated with a graphic display.
 Alphanumeric keyboard
 Keyboards are provided with features to facilitate entry of screen co-ordinates, menu
selections or graphics functions.
The common features on general purpose keyboards are
i) cursor-control keys
ii) function keys
(i) Cursor control keys:
Can be used to select displayed objects or co-ordinate positions by positioning the screen cursor.
(ii)Functional keys:
Allow users to enter frequently used operations in a single keystroke. Additionally a numeric
keypad is often included on the keyboard for fast entry of numeric data.
Mouse
 A mouse is a small hand held device used to position the screen cursor.
 Wheels or rollers on the bottom of the mouse can be used to record the amount &
direction of movement.
 Another method for detecting mouse movements is with an optical sensor.
 One, two or three buttons are usually included on the top of the mouse for signaling the
execution of some operation.
Z-Mouse


 Additional devices can be included in the basic mouse design to increase the number of
allowable input parameters.
 The z-mouse includes three buttons, a thumbwheel on the side, a trackball on the top & a
standard mouse ball underneath.
 With the z-mouse, one can pick up an object, rotate it, move it in any direction, or
navigate one's viewing position and orientation through a 3D scene.
 Application of z-mouse are
 virtual reality
 CAD
 Animation
 Trackball And Space ball
Trackball
 Trackball is a ball that can be rotated with the fingers or palm of the hand to produce
screen cursor movements.
 Potentiometers attached to the ball measure the amount & direction of rotation.
 They are often mounted on keyboard or z-mouse.
 Trackball is a two dimensional Positioning device
Space ball
 A space ball provides six degrees of freedom.
 A space ball does not actually move.
 Strain gauges measure the amount of pressure applied to the space ball to provide input
for spatial positioning and orientation as the ball is pushed or pulled in various directions.
 Space balls are used for three-dimensional positioning
 Applications where space balls are used are
i) 3D positioning
ii) Virtual reality systems
iii) Modeling
iv) Animation
v) CAD
Joysticks
A joystick consists of a small vertical lever, called the stick, mounted on a base.
It is used to steer the screen cursor around.


There are 2 types of joystick


i) movable stick joystick
ii) non-movable stick joystick
(i) Movable Stick Joystick
 The stick is actually placed in the center position.
 Screen cursor movement is achieved by moving the stick in any direction from the center.
 Potentiometers mounted at the base of the joystick measure the amount of
movement.
 Springs return the stick to the center position when it is released.
 In another type of movable joystick the stick is used to activate switches that cause the
screen cursor to move at a constant rate in the selected direction.
Non Movable Stick Joystick
These joysticks are also called isometric joysticks.
They have a non movable stick & pressure applied on the joystick is measured by strain gauges
& converted to the movement of the cursor in the specified direction.
Data Glove
 Data glove can be used to grasp a virtual object
 The glove has a series of sensors that detect hand & finger movements.
 Electromagnetic coupling between transmitting antennas and receiving antennas is used to
provide information about the position and orientation of the hand.
 Input from the glove can be used to position or manipulate objects in a virtual scene.
 A 2D projection of the scene can be viewed using video monitor.
 A 3D projection can be viewed with a headset.
Digitizers
 A digitizer is a common device for drawing, painting or interactively selecting co-
ordinating positions on an object.
 Digitizers are used to input co-ordinate values in either a 2D or 3D space.
 A digitizer is used to scan over a drawing or object & to input a set of discrete co-ordinate
positions.
 Graphics tablet is an example of digitizer.
 Many graphics tablets are constructed with a rectangular grid of wires embedded in the
tablet surface. Electromagnetic pulses are generated in sequence along the wires, and an
electric signal is induced in a wire coil in an activated stylus or hand cursor to record a
tablet position. Depending on the technology, either signal strength, coded pulses, or
phase shifts can be used to determine the position on the tablet.
Image Scanners
 An image scanner is used for storing drawings, graphs, color and black-and-white photos or
text for computer processing by passing an optical scanning mechanism over the information.
 The gradations of gray scale or color are then recorded and stored in an array.
 Transformations can be applied to rotate, scale or crop the picture to a particular screen
area.
 Various image processing methods can be applied to modify the array representation of the
picture. Scanners come in a variety of sizes and capabilities.
Touch Panels
 Touch panel allow displayed objects or screen positions to be selected with the touch of a
finger.
 A typical application of touch panel’s is for the selection of processing options that are
represented with graphical icons.
o Eg ATM center touch panel
 Touch input can be recorded using optical, electrical or acoustical methods.
Optical Touch Panels
 These panels employ a line of infrared light emitting diodes ( LED’s) along one vertical
edge & along one horizontal edge of the frame.
 The opposite vertical edge & horizontal edge contain light detectors.
 When the panel is touched these light detectors record which beams are interrupted.
 The two crossing beams that are interrupted identify the horizontal and vertical co-ordinates of
the selected screen position (a small sketch of this calculation follows this list).
 Positions can be selected with an accuracy of about ¼ inch.
 The LED’s operate at infrared frequencies so that the light is not visible to a user.
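
A hedged sketch of how an optical touch panel might convert the interrupted beams into a screen position: given the indices of the interrupted vertical and horizontal beams, it scales them to screen coordinates. The beam counts and the screen size are assumptions chosen only for illustration.

#include <stdio.h>

/* Assumed panel layout: LEDs along one vertical and one horizontal edge. */
#define VERTICAL_BEAMS   64   /* beams crossing the screen horizontally (give the x position) */
#define HORIZONTAL_BEAMS 48   /* beams crossing the screen vertically (give the y position)   */
#define SCREEN_WIDTH    640
#define SCREEN_HEIGHT   480

/* Map the pair of interrupted beams to a screen coordinate (beam centers, nearest pixel). */
static void beams_to_screen(int interrupted_x_beam, int interrupted_y_beam, int *x, int *y) {
    *x = (interrupted_x_beam * SCREEN_WIDTH  + SCREEN_WIDTH  / 2) / VERTICAL_BEAMS;
    *y = (interrupted_y_beam * SCREEN_HEIGHT + SCREEN_HEIGHT / 2) / HORIZONTAL_BEAMS;
}

int main(void) {
    int x, y;
    beams_to_screen(32, 12, &x, &y);   /* finger blocks beam 32 of 64 and beam 12 of 48 */
    printf("touch detected near screen position (%d, %d)\n", x, y);
    return 0;
}
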
Electrical Touch Panel
 These panels are constructed with two transparent plates separated by a small distance.
 One plate is coated with a conducting material & the other with resistive material.
 Touching the outer plate forces it into contact with the inner plate.


 This contact creates a voltage drop across the resistive plate that is converted into co-
ordinate values of the selected screen position.
Acoustical Touch Panel
 Here high-frequency sound waves are generated in the horizontal and vertical directions
across a glass plate.
 Touching the screen causes part of each wave to be reflected from the finger back to the
emitters.
 The screen position is calculated from a measurement of the time interval between the
transmission of each wave and its reflection back to the emitter (the sketch after this list
illustrates the timing calculation).
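
The acoustical method above boils down to a time-of-flight calculation. The sketch below converts measured round-trip times of the surface waves into distances from the emitters, assuming a wave speed; the speed and times used here are placeholder values, not figures from the text.

#include <stdio.h>

int main(void) {
    /* Assumed propagation speed of the surface wave across the glass plate (placeholder). */
    double wave_speed_m_per_s = 3000.0;

    /* Measured intervals between transmitting each wave and receiving its reflection. */
    double t_horizontal_s = 0.00008;   /* 80 microseconds (example) */
    double t_vertical_s   = 0.00006;   /* 60 microseconds (example) */

    /* The wave travels to the finger and back, so divide the round trip by two. */
    double x_m = wave_speed_m_per_s * t_horizontal_s / 2.0;
    double y_m = wave_speed_m_per_s * t_vertical_s   / 2.0;

    printf("touch at approximately x = %.3f m, y = %.3f m from the emitters\n", x_m, y_m);
    return 0;
}
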
Light Pen
 Light pens are pencil-shaped devices used to select screen positions by detecting the light
coming from points on the CRT screen.
 Light pens are sensitive to short burst of light emitted from the phosphor coating of the
CRT.
 Other light sources are not usually detected by a light pen.
 An activated light pen pointed at a spot on the screen as the electron beam lights up that
spot generates an electric pulse that causes the co-ordinate position of the electron beam
to be recorded.
 The recorded light pen co-ordinates can be used to position an object or to select a
processing option.
Disadvantages
i) Screen image obscured by hand & light pen
ii) Prolonged use causes arm fatigue
iii) Light pens require special implementation for some applications.
iv) Sometimes light pens give false readings due to background lighting in room.
Voice Systems
 Speech recognizers are used in some graphics workstations as input devices to accept
voice commands.
 The voice system input can be used to initiate graphics operations or to enter data.
 These systems operate by matching an input against a predefined dictionary of words and
phrases.
Output Devices


Speakers

The primary method of sound output in most computers today. Sound is translated from bits to
electrical signals in a sound card, which then channels the signals to the speakers.

MIDI

 MIDI (Musical Instrument Digital Interface), is an industry-standard protocol that


enables electronic musical instruments (synthesizers, drum machines), computers and
other electronic equipment (MIDI controllers, sound cards, samplers) to communicate
and synchronize with each other.
 Unlike analog devices, MIDI does not transmit an audio signal — it sends event
messages about pitch and intensity, control signals for parameters such as volume, vibrato
and panning, cues, and clock signals to set the tempo.

 MIDI files are typically created using computer-based sequencing software (or
sometimes a hardware-based MIDI instrument or workstation) that organizes MIDI
messages into one or more parallel "tracks" for independent recording and editing.

Microphone

A microphone is a device used to change sound into electric signals. Microphones are
used in telephones, tape recorders, hearing aids and many other devices.

Printer

A printer is a device that prints text or images, making a physical copy (usually with
some kind of ink on paper). Printers are classified into two types:

Impact and non-impact printers.

(i) Impact printers rely upon a mechanical impact to transfer ink to paper. Early printers
were more like electric typewriters, striking an ink ribbon against the paper with a lever with a
raised image of a letter on the end, or a rotating ball with the same.


(ii)Non-impact printers depend on other ways of creating an image. Plotters, covered


separately, can also be used as printers.

Laser Printer

 A laser printer directs a laser beam onto a rotating drum, covered by a photoconductor (a
material that conducts electricity when illuminated), carrying an electrostatic charge.
 The laser "erases" the charge in areas it strikes.

 Then a powdered ink ("toner"), charged the same polarity as the original surface charge
on the drum, is spread across the surface, and it is repelled from the areas that still carry
the original charge, but it is attracted to the discharged areas where the image was
"written" by the laser.

 The drum then rolls the image formed onto paper, which is then heated ("fused") to
make the toner stick to itself and the paper.

Inkjet Printer

An inkjet printer is a type of computer printer that reproduces a digital image by propelling
variably sized droplets of liquid ink onto a page. Inkjet printers are the most common
type of printer and range from small inexpensive consumer models to very large and expensive
professional machines.
Hard-Copy Devices
 Hard copy output for images can be obtained in several formats
 Users can put pictures on paper by directing graphics output to a printer or plotter.
 The quality of pictures obtained from a device depends on the dot size & dot per inch or
lines per inch.
 Smooth characters can be produced in printed text strings by high quality printers.
 These printers shift dot positions so that adjacent dots overlap
There are 2 major methods or types of printers namely


i) Impact devices &


ii) Non impact devices
(i) Impact Devices
 Impact printers press formed character faces against an inked ribbon onto paper
 A typeface is used to make the impression
Typefaces are normally mounted on bands, chains, drums or wheels.
Eg dot matrix printer
Dot Matrix Printer
 It is an example of impact printer
 Dot matrix printers have a dot matrix print head containing a rectangular array of
protruding pins
 The no. of pins depends on the quality of the printer
 Individual characters or graphic patterns are obtained by retracting certain pins so that the
remaining pins form the pattern to be printed
(ii)Non Impact Devices
Non impact plotters and printers use laser techniques, ink jet sprays, xerographic processes,
electrostatic methods & electro thermal methods to get images on the paper
Laser Printer
 In a laser device, a laser beam creates a charge distribution on a rotating drum coated
with a photoelectric material such as selenium.
 Toner is applied to the drum and then transferred to paper
 The quality of the printout depends upon the dots per inch (dpi)
Ink Jet Printer
 These printers produce output by squirting ink in horizontal rows across a roll of paper
wrapped on a drum.
 The electrically charged ink stream is deflected by an electric field to produce dot matrix
patterns.
Electrostatic Printer
 These printers place a negative charge on the paper one complete row at a time along the
length of the paper.
 The paper is then exposed to the toner
 The toner is positively charged and is attracted to negative charged areas on the paper.


Color Output On Impact And Non Impact Printers


 Limited color output can be obtained on impact printers by using different-colored ribbons.
 Various techniques are used by non impact printers to combine three color pigments (cyan,
magenta & yellow)
 Laser & xerographic devices deposit the three pigments on separate passes
 Ink jet methods shoot the three colors simultaneously on a single pass along each print line on the
paper
PEN PLOTTER
 A pen plotter has one or more pens mounted on a carriage, or crossbar, that spans a sheet
of paper. Pens with varying colors and widths are used to produce a variety of shadings and
line styles. Wet-ink, ball-point and felt-tip pens are all possible choices for use with a pen
plotter. Pen plotter paper can lie flat or be rolled onto a drum or belt.
 Crossbars can be either movable or stationary. The paper is held in position by using
clamps, vacuum or an electrostatic charge.

GRAPHICS PACKAGES
A set of libraries that provide programmatic access to some kind of 2D graphics
functions.
Types:
 GKS (Graphics Kernel System) – the first graphics package – accepted by ISO & ANSI
 PHIGS (Programmer's Hierarchical Interactive Graphics Standard) – accepted by ISO &
ANSI
 PHIGS+ (expanded package)
 Silicon Graphics GL (Graphics Library)
 OpenGL
 Pixar RenderMan interface
 PostScript interpreters
 Painting, drawing and design packages
1.6 Direct screen interaction Method

 Interaction on touch-sensitive screens is literally the most "direct" form of HCI, where information display
and control share one and the same surface.
 The zero displacement between input and output, control and feedback, hand action and eye gaze, makes
touch screens very intuitive to use, particularly for novice users. Not surprisingly, touch screens have been
widely and successfully used in public information kiosks, ticketing machines, bank teller machines and the
like.

 Being direct between control and display, touch screens also have special limitations. First, the user's
finger, hand and arm can obscure part of the screen.

 Second, the human finger as a pointing device has very low "resolution". It is difficult to point at targets
that are smaller than the finger width. As touch screen technology becomes more available at a lower price
and better quality, we expect its greater use in many different domains.

 We set out to explore touch screen interaction techniques that can handle pointing at individual pixel
levels. High precision interaction on touch screens is necessary and important in many situations including
dealing with geographical systems or high precision drawings.

 One area in particular is command and control where many characteristics of the touch screen mentioned
earlier are desirable, but where high accuracy techniques have to be developed in order to deal with
geographical information.

 For example, computer supported command and control systems used in military vehicles are constrained
by space limitations and rugged environments.

 Screen size is therefore limited. To interact with these systems--for example when deploying geographical
orders--users need to maintain an overview of an area of interest (which determines the zoom level) and yet
be able to point at precise locations.

Introduction

 In computer music, different strategies are possible to control sound processes. A first one consists in
using the computer's calculation power and flexibility in the design phase of an
instrument. Today, much research is conducted to create powerful digital musical instruments.
 To design them, a critical part of the work consists in the mapping between the gestural devices and
the sound processes to control. Those instruments tend to reproduce the "instrumental link" that is
intrinsic to acoustic instruments and that has often disappeared in electronic and digital
systems.


 A second strategy consists in using the computer for its powerful interaction through a graphical user
interface (GUI). Today's musical software essentially uses a mouse and a keyboard
with a conventional GUI: all sound parameters are controllable via graphical objects that generally
represent real objects like piano keyboards, faders, etc.

 Complete studio equipment and electronic instrument emulators are now integrated in the
computer. The GUIs tend to reproduce on the screen an interaction area close to the real one, like
front panels of electronic instruments. The aim of such interfaces is to give the user the impression of
real objects in front of him.

 Nevertheless, with a single mouse, the interaction process is poor: the gesture space (the place where
the mouse is) is separated from the interaction space (the screen) and only one object can be
manipulated at a time. This explains why many software programs are configured to use "external"
devices like MIDI controllers, software-specific control surfaces or alternative controllers. In this
case, the full system is similar to those of the first strategy; the graphical objects, which are designed
for interaction, are only used for visual feedback or not used at all.

 The system we introduce in this article enables the control of graphical objects in GUI’s like real
objects and rather follows the second strategy. This new powerful multimodal system, the Pointing
Fingers, performs a direct control on GUIs with a multi-touch touch screen-like device, designed for
musical control. The sections below present the interaction principle, describe the gesture device and the software
implementation, and give musical examples of what is possible with such a system.

A New approach in interactive systems

 The system is based on the combination of two crucial features: the superposition of the gesture
space and the visual feedback space, and the ability to have multiple simultaneous controls
when using a GUI.
 Some systems that have these two features already exist; one of them was developed to control
musical processes: the Audio Pad based on tangible interfaces in which the objects to manipulate are
real and interact with graphics. Our system is closer to current GUIs because the objects to
manipulate are the virtual graphical objects displayed on screen. This type of system provides the
most direct and intuitive interaction possible: our fingers are manipulating graphical objects as if they
were real objects.

 There are no material constraints on the objects: they can change in position, size, shape and
function. It is possible to display some information beside the objects to help the user. It is a very
efficient system to control virtual copies of real objects.

 Finally, interaction situations that are impossible in the real world can be implemented here, like
manipulating moving objects, as will be demonstrated in section 5. In interaction with a real
object, the object provides some haptic feedback: the contact with the object's shape, the force it needs
to be manipulated, the degrees of freedom it offers and the spatial limits of its displacement.

 This feedback is so important that the user could manipulate an object with the eyes closed. With our
system, the haptic feedback is reduced to the contact between fingers and screen. Sight and hearing
are fully used; sight permits locating the position of the objects on the screen and hearing can
reinforce sight when an object is manipulated, through the effect of manipulation on the sound.

 The GUI of our system is close to those using a mouse to control graphical objects; the difference
is that the objects need a bigger size, because a fingertip is bigger than a mouse pointer. The screen
area contains different interaction zones; each zone will have its own interaction mode and
connection to the sound process parameters. Different types of gestures are necessary to act in a zone:
selection gesture to select the chosen zone among several zones, modulation or continuous gesture to
modify the parameters that are associated with the zone, and decision gesture to stop the interaction.
For example, if the user wants to manipulate the graphical object “ fader”, he selects this fader with
one of his fingers, manipulates it, and then he lifts his finger off the screen area.

The pointing fingers system

We want a device that follows our requirements: having multi-touch and interacting directly
with the interface. Commercial touch screens fulfill the second point, but solutions have been developed in
different labs, as in the following examples: the Smart Skin system combines a prototype of multi-touch
surface with a video projection; vision-based finger tracking determines the fingers' positions through
video analysis;


The Gesture Device

 The device we introduce now is a first prototype we have made to perform multi-touches on a screen.
It consists of 2 semi-gloves (covering the thumb and the index) with two 3D position/orientation
sensors and two switches per hand. (This device is close to Mulder's Cyber Gloves and Polhemus
system, but is less expensive in hardware and simpler to implement.)
 This device can give the position of 4 digits (the thumb and the index of each hand) with
approximately 1 mm accuracy and the on/off values of the switches (an equivalent of the mouse click
button) localized at the extremity of the fingers; those switch buttons indicate if the fingertips are
physically touching the screen or not. All the data of the sensors are processed in the Max/MSP
environment.

 The Flock of Birds is a commercial device composed of a transmitter and several receivers, called
birds; the device communicates with the computer through a serial interface and a serial/USB
converter. We use the serial object of Max to receive the data.

 The switches are connected to the electronics of a USB joystick and we receive its data using the
insprock object. However, this device has some limitations.

 The Flock of Birds device introduces some latency: we have not measured it but we estimate it to be
approximately 30 ms with four sensors; this lag is too large to create really reactive instruments,
but is acceptable for our experiments and applications with modulation-like instruments. Another
problem is the choice of the screen: CRT screens are disturbed by magnetic fields, and some LCD
screens disturb the magnetic field of the sensor.

Converting the Data of the 3D Sensors

We have developed a specific C object for Max to transform the data of the birds and find the
fingertip coordinates in the screen base. The sensor gives the absolute position and orientation of the 4 birds
in space, relative to the transmitter; with this data, the object calculates the position of the tips using the
rotation matrix between each bird base and the transmitter base.

These coordinates are then rotated and translated to the screen base and rescaled in order to obtain the position
of the tips in pixels, which is the mouse coordinate unit. (A calibration procedure calculates the screen
position and size in the transmitter base and then determines the screen base.)
(Figure: spatial correspondence between gesture, graphical objects, visual feedback and sound feedback.)
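
A simplified sketch of the conversion described above: a fingertip position reported in the transmitter base is translated and rotated into the screen base, then rescaled to pixel coordinates. The 3x3 rotation matrix, screen origin, physical screen size and pixel resolution used here are placeholder values, not data from the Pointing Fingers system.

#include <stdio.h>

/* A 3D point and a 3x3 rotation matrix (row-major). */
typedef struct { double x, y, z; } Vec3;
typedef struct { double m[3][3]; } Mat3;

/* Rotate a vector by a 3x3 matrix. */
static Vec3 rotate(Mat3 r, Vec3 v) {
    Vec3 out = {
        r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
        r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
        r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z
    };
    return out;
}

int main(void) {
    /* Placeholder calibration data: screen base aligned with the transmitter base. */
    Mat3 screen_rotation = {{{1,0,0},{0,1,0},{0,0,1}}};
    Vec3 screen_origin   = {0.10, 0.05, 0.0};              /* meters, in the transmitter base (assumed) */
    double screen_width_m = 0.40, screen_height_m = 0.30;  /* physical screen size (assumed) */
    int res_x = 1024, res_y = 768;                          /* pixel resolution (assumed)     */

    /* Fingertip position reported by the sensor, in the transmitter base. */
    Vec3 tip = {0.30, 0.20, 0.0};

    /* Translate to the screen origin, rotate into the screen base, then rescale to pixels. */
    Vec3 local = {tip.x - screen_origin.x, tip.y - screen_origin.y, tip.z - screen_origin.z};
    Vec3 on_screen = rotate(screen_rotation, local);
    int px = (int)(on_screen.x / screen_width_m  * res_x);
    int py = (int)(on_screen.y / screen_height_m * res_y);

    printf("fingertip at pixel (%d, %d)\n", px, py);
    return 0;
}
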

Controlling graphical objects

 We now describe how the data of our gesture device, or of any equivalent device, are processed. Indeed, in
our approach, we try to build modular systems. So the control of the graphical object is completely
independent from the gesture device: we consider that any gesture device that can give us lists with
the point number, (X,Y) coordinates in the screen basis and the value of an on/off button can be used
instead of the Pointing Fingers.
 For this reason, we will call a pointer a point on the screen that is given by the gesture device. Our
gesture device gives 4 pointers simultaneously. We used the Max/MSP environment and we created a
specific Max object to manage the data for a given zone of the screen: multi-point interaction introduces
some confusion problems that did not exist with a single mouse. The object receives all data lists from all
pointers.

 The delimitations of the object's action zone are set by sending specific instructions to it; this Max
object then manages multiple points for the given zone (a small sketch after this list illustrates the idea).

 The outputs of this max object can be connected with many graphical objects, taking care of the
coherence between the visual effects of the interaction on the graphical object and the position of the
pointer on the screen.

 This implementation is simple but sufficient to perform numerous things. Firstly, lots of Max
graphical objects can be used with our Max object and can be manipulated simultaneously. Secondly,
many original graphical objects or interaction zones can be created and used with our system, and we
can imagine multipoint interaction zones using several units of our Max object.
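
To illustrate the zone mechanism described above, here is a small sketch (not the actual Max object, whose internals are not given in the text) that routes each of four pointers to the rectangular interaction zone it falls in and reports the pointer's on/off contact state. All names and coordinates are assumptions for illustration.

#include <stdio.h>

/* One pointer as delivered by the gesture device: index, screen coordinates, contact state. */
typedef struct { int id; int x, y; int touching; } Pointer;

/* A rectangular interaction zone on screen. */
typedef struct { const char *name; int x0, y0, x1, y1; } Zone;

/* Return 1 if the pointer lies inside the zone. */
static int in_zone(Pointer p, Zone z) {
    return p.x >= z.x0 && p.x <= z.x1 && p.y >= z.y0 && p.y <= z.y1;
}

int main(void) {
    Zone zones[] = {
        {"fader A", 0,   0, 200, 600},
        {"fader B", 220, 0, 420, 600},
        {"pad",     440, 0, 800, 600}
    };
    Pointer pointers[] = {           /* four pointers: thumb and index of each hand */
        {0, 100, 300, 1}, {1, 300, 150, 1}, {2, 500, 400, 0}, {3, 700, 50, 1}
    };

    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 3; j++)
            if (in_zone(pointers[i], zones[j]))
                printf("pointer %d is over %s (%s)\n", pointers[i].id, zones[j].name,
                       pointers[i].touching ? "touching" : "hovering");
    return 0;
}
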

Conclusion - perspectives

 The computer often seems to be a powerful creature inside a closed box. Its screen shows us
marvelous worlds, but interacting with a mouse is frustrating, especially when we want to perform
music.


 As the two examples show, our system will help to design musical instruments that benefit from the
advantages of the computer's universality and flexibility, through powerful control of Graphical
User Interfaces. Our work on this system has just established its basis; in the future, we will
develop new objects, implement other synthesis techniques and improve the system to provide a
complete environment for creating new digital musical instruments.

1.7 Logical input function


Graphical input functions can be set up to allow users to specify the following options:
 Which physical devices are to provide input within a particular logical classification (for
example, a tablet used as a stroke device).
 How the graphics program and devices are to interact (input mode). Either the program or
the devices can initiate data entry, or both can operate simultaneously.
 When the data are to be input and which device is to be used at that time to deliver a
particular input type to the specified data variables.
lnput Modes
 Functions to provide input can be structured to operate in various input modes, which
specify how the program and input devices interact. Input could be initiated by the
program, or the program and input devices both could be operating simultaneously, or
data input could be initiated by the devices.
 These three input modes are referred to as request mode, sample mode, and event mode.
 In request mode, the application program initiates data entry. lnput values are requested
and processing is suspended until the required values are received. This input mode
corresponds to typical input operation in a general programming language. The program
and the input devices operate alternately. Devices are put into a wait state until an input
request is made; then the program waits until the data are delivered.
 In sample mode, the application program and input devices operate independently. Input
devices may be operating at the same time that the program is processing other data. New
input values from the input devices are stored, replacing previously input data values.
When the program requires new data, it samples the current values from the input
devices.
 In event mode, the input devices initiate data input to the application program. The
program and the input devices again operate concurrently, but now the input devices

deliver data to an input queue. All input data are saved. When the program requires new
data, it goes to the data queue.
 Any number of devices can be operating at the same time in sample and event modes.
Some can be operating in sample mode, while others are operating in event mode. But
only one device at a time can be providing input in request mode.
 An input mode within a logical class for a particular physical device operating on a
specified workstation is declared with one of six input-class functions of the form
set . . . Mode (ws, deviceCode, inputMode, echoFlag)
where deviceCode is a positive integer and inputMode is assigned one of the values request,
sample, or event.
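As a rough illustration of this form, the short C sketch below puts a tablet into request mode as a
locator device and a keyboard into sample mode as a string device. The function names, mode
constants and device codes here are assumptions for illustration only, not the exact routines of any
particular package.

enum InputMode { REQUEST, SAMPLE, EVENT };
enum EchoFlag  { NOECHO, ECHO };

/* Assumed instances of the set . . . Mode pattern described above. */
void setLocatorMode(int ws, int deviceCode, enum InputMode mode, enum EchoFlag echo);
void setStringMode (int ws, int deviceCode, enum InputMode mode, enum EchoFlag echo);

void configureInput(int ws)
{
    int tablet   = 1;                               /* assumed device codes */
    int keyboard = 2;
    setLocatorMode(ws, tablet,   REQUEST, ECHO);    /* tablet as a locator  */
    setStringMode (ws, keyboard, SAMPLE,  NOECHO);  /* keyboard as a string */
}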

Request Mode
 Input commands used in this mode correspond to standard input functions in a high-level
programming language. When we ask for an input in request mode, other processing is
suspended until the input is received.
 After a device has been assigned to request mode, as discussed in the preceding section,
input requests can be made to that device using one of the six logical-class functions
represented by the following:
request . . . (ws, deviceCode, status, . . .)
 Values input with this function are the workstation code and the device code. Returned
values are assigned to parameter status and to the data parameters corresponding to the
requested logical class.
 A value of ok or none is returned in parameter status, according to the validity of the
input data. A value of none indicates that the input device was activated so as to produce
invalid data. For locator input, this could mean that the coordinates were out of range. For
pick input, the device could have been activated while not pointing at a structure. Or a
"break" button on the input device could have been pressed. A returned value of none can
be used as an end-of-data signal to terminate a programming sequence.

Locator and Stroke Input in Request Mode


 The request functions for these two logical input classes are:
requestLocator (ws, devCode, status, viewIndex, pt)
requestStroke (ws, devCode, nMax, status, viewIndex, n, pts)
 For locator input, pt is the world-coordinate position selected. For stroke input, pts is a
list of n coordinate positions, where parameter nMax gives the maximum number of points
that can go in the input list. Parameter viewIndex is assigned the two-dimensional view
index number.
 Determination of a world-coordinate position is a two-step process:
(1) The physical device selects a point in device coordinates (usually from the video-display
screen) and the inverse of the workstation transformation is performed to obtain the
corresponding point in normalized device co-ordinates.
(2) Then, the inverse of the window-to-viewport mapping is carried out to get to viewing
coordinates, then to world coordinates.
 Since two or more views may overlap on a device, the correct viewing transformation is
identified according to the view-transformation input priority number. By default, this is
the same as the view index number, and the lower the number, the higher the priority.
View index 0 has the highest priority. We can change the view priority relative to another
(reference) viewing transformation with
setViewTransformationInputPriority (ws, viewIndex, refViewIndex, priority)
where viewIndex identifies the viewing transformation whose priority is to be changed,
refViewIndex identifies the reference viewing transformation, and parameter priority is
assigned either the value lower or the value higher. For example, we can alter the priority of the
first four viewing transformations on workstation 1:
setViewTransformationInputPriority (1, 3, 1, higher)
setViewTransformationInputPriority (1, 0, 2, lower)
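To make the request-mode pattern concrete, the following minimal C sketch requests two locator
positions and joins them with a line. requestLocator follows the form given above; the WcPt2 type,
the status constants and the drawLine helper are assumptions introduced only for this illustration.

/* Sketch only: requestLocator follows the form above; WcPt2, the status
   values and drawLine are assumed helpers, not exact library calls. */
typedef struct { float x, y; } WcPt2;          /* world-coordinate point */
enum Status { STATUS_NONE, STATUS_OK };

void requestLocator(int ws, int devCode, enum Status *status,
                    int *viewIndex, WcPt2 *pt);
void drawLine(WcPt2 p1, WcPt2 p2);

void inputLineEndpoints(int ws, int devCode)
{
    enum Status status;
    int view;
    WcPt2 p1, p2;

    requestLocator(ws, devCode, &status, &view, &p1);  /* program waits here     */
    if (status != STATUS_OK) return;                   /* e.g. break button hit  */
    requestLocator(ws, devCode, &status, &view, &p2);
    if (status != STATUS_OK) return;
    drawLine(p1, p2);                                  /* both endpoints received */
}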

String Input in Request Mode


Here, the request input function is
requestString (ws, devCode, status, nChars, str)
Parameter str in this function is assigned an input string. The number of characters in the string is
given in parameter nChars.

Valuator Input in Request Mode
A numerical value is input in request mode with
requestValuator (ws, devCode, status, value)
Parameter value can be assigned any real-number value.
Choice Input in Request Mode
We make a menu selection with the following request function:
requestChoice (ws, devCode, status, itemNum)
Parameter itemNum is assigned a positive integer value corresponding to the menu item selected.
Pick Input in Request Mode
For this mode, we obtain a structure identifier number with the function
requestPick (ws, devCode, maxPathDepth, status, pathDepth, pickPath)
Parameter pickPath is a list of information identifying the primitive selected. This list contains the
structure name, the pick identifier for the primitive, and the element sequence number. Parameter
pathDepth is the number of levels returned in pickPath, and maxPathDepth is the specified
maximum path depth that can be included in pickPath.
Sample Mode
 Once sample mode has been set for one or more physical devices, data input begins
without waiting for program direction. If a joystick has been designated as a locator
device in sample mode, coordinate values for the current position of the activated joystick
are immediately stored. As the activated stick position changes, the stored values are
continually replaced with the coordinates of the current stick position.
 Sampling of the current values from a physical device in this mode begins when a sample
command is encountered in the application program.
 A locator device is sampled with one of the six logical-class functions represented by the
following:

sample . . . (ws, deviceCode, . . .)


 Some device classes have a status parameter in sample mode, and some do not. Other
input parameters are the same as in request mode.
 As an example of sample input, suppose we want to translate and rotate a selected object.
A final translation position for the object can be obtained with a locator, and the rotation
angle can be supplied by a valuator device, as demonstrated in the following statements.
sampleLocator (ws1, dev1, viewIndex, pt)
sampleValuator (ws2, dev2, angle)
Event Mode
 When an input device is placed in event mode, the program and device operate
simultaneously. Data input from the device is accumulated in an event queue, or input
queue.
 All input devices active in event mode can enter data (referred to as "events") into this
single-event queue, with each device entering data values as they are generated. At any
one time, the event queue can contain a mixture of data types, in the order they were
input. Data entered into the queue are identified according to logical class, workstation
number, and physical-device code.
 An application program can be directed to check the event queue for any input with the
function
awaitEvent (time, ws, deviceClass, deviceCode)
 Parameter time is used to set a maximum waiting time for the application program. If
the queue happens to be empty, processing is suspended until either the number of
seconds specified in time has elapsed or an input arrives. Should the waiting time run out
before data values are input, the parameter deviceClass is assigned the value none. When
time is given the value 0, the program checks the queue and immediately returns to
other processing if the queue is empty.
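The waiting-time behaviour of awaitEvent can be illustrated with the small C sketch below. Only
awaitEvent itself is taken from the description above; the class constants and the retrieval helper
getChoice are assumed names, since the exact event-retrieval routines vary between packages.

enum DevClass { CLASS_NONE, CLASS_LOCATOR, CLASS_CHOICE };

void awaitEvent(float time, int *ws, enum DevClass *deviceClass, int *deviceCode);
void getChoice (int *itemNum);   /* assumed: fetches the data of the queued choice event */

void eventLoop(void)
{
    int ws, deviceCode, itemNum;
    enum DevClass deviceClass;

    for (;;) {
        awaitEvent(5.0f, &ws, &deviceClass, &deviceCode);  /* wait at most 5 seconds */
        if (deviceClass == CLASS_NONE)
            break;                    /* timed out: queue stayed empty for 5 seconds */
        if (deviceClass == CLASS_CHOICE) {
            getChoice(&itemNum);      /* retrieve the menu selection from the queue  */
            /* ... process menu item itemNum ... */
        }
    }
}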
Concurrent Use of Input Modes

An example of the simultaneous use of input devices in different modes is given in the
following procedure. An object is dragged around the screen with a mouse. When a final position
has been selected, a button is pressed to terminate any further movement of the object. The
mouse positions are obtained in sample mode, and the button input is sent to the event queue.
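A minimal C sketch of that procedure is given below, assuming the sample-mode and event-mode
functions introduced earlier. sampleLocator and awaitEvent follow the forms already described;
setLocatorMode, setChoiceMode, moveObjectTo and the constants are assumed names used only for
illustration.

typedef struct { float x, y; } WcPt2;
enum InputMode { REQUEST, SAMPLE, EVENT };
enum EchoFlag  { NOECHO, ECHO };
enum DevClass  { CLASS_NONE, CLASS_LOCATOR, CLASS_CHOICE };

void setLocatorMode(int ws, int dev, enum InputMode m, enum EchoFlag e);
void setChoiceMode (int ws, int dev, enum InputMode m, enum EchoFlag e);
void sampleLocator (int ws, int dev, int *viewIndex, WcPt2 *pt);
void awaitEvent    (float time, int *ws, enum DevClass *cls, int *dev);
void moveObjectTo  (WcPt2 pt);

void dragObject(int ws, int mouseDev, int buttonDev)
{
    enum DevClass cls = CLASS_NONE;
    int evWs, evDev, view;
    WcPt2 pos;

    setLocatorMode(ws, mouseDev,  SAMPLE, NOECHO);  /* mouse positions sampled    */
    setChoiceMode (ws, buttonDev, EVENT,  NOECHO);  /* button goes to event queue */

    while (cls == CLASS_NONE) {
        sampleLocator(ws, mouseDev, &view, &pos);   /* current cursor position    */
        moveObjectTo(pos);                          /* drag the object with it    */
        awaitEvent(0.0f, &evWs, &cls, &evDev);      /* time 0: just check queue   */
    }
}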

1.8 User dialogue

 For a particular application, the user’s model serves as the basis for the design of the
dialogue.
 The user's model describes what the system is designed to accomplish and what graphics
operations are available. It states the types of objects that can be displayed and how the
objects can be manipulated.
 For example, if the graphics system is to be used as a tool for architectural design, the
model describes how the package can be used to construct and display views of buildings
by positioning walls, doors, windows, and other building components.
 Similarly, for a facility-layout system, objects could be defined as a set of furniture items
(tables, chairs, etc.), and the available operations would include those for positioning and
removing different pieces of furniture within the facility layout.
 A circuit-design program might use electrical or logic elements as objects, with
positioning operations available for adding or deleting elements within the overall circuit design.
 All information in the user dialogue is then presented in the language of the application.
In an architectural design package, this means that all interactions are described only in
architectural terms, without reference to particular data structures or other concepts that
may be unfamiliar to an architect.
Windows and Icons
 Visual representations are used both for objects to be manipulated in an application and
for the actions to be performed on the application objects.
 A window system provides a window-manager interface for the user and functions for
handling the display and manipulation of the windows.
 Common functions for the window system are opening and closing windows,
repositioning windows, resizing windows, and display routines that provide interior and
exterior clipping and other graphics functions.
 Typically, windows are displayed with sliders, buttons, and menu icons for selecting
various window options.
 Some general systems, such as X Windows and NeWS, are capable of supporting multiple
window managers so that different window styles can be accommodated, each with its

own window manager. The window managers can then be designed for particular
applications.
 Icons representing objects such as furniture items and circuit elements are often referred
to as application icons. The icons representing actions, such as rotate, magnify, scale,
clip, and paste, are called control icons, or command icons.
Accommodating Multiple Skill Levels
 Interactive graphical interfaces provide several methods for selecting actions.
 For example, options could be selected by pointing at an icon and clicking different
mouse buttons, or by accessing pull-down or pop-up menus, or by typing keyboard
commands. This allows a package to accommodate users that have different skill levels.
 For a less experienced user, an interface with a few easily understood operations and
detailed prompting is more effective than one with a large, comprehensive operation set.
 A simplified set of menus and options is easy to learn and remember, and the user can
concentrate on the application instead of on the details of the interface.
 Experienced users typically want speed; this means fewer prompts and more input from
the keyboard or with multiple mouse-button clicks.
 Actions are selected with function keys or with combinations of keyboard keys, since
experienced users will remember these shortcuts for commonly used actions.
 Similarly, help facilities can be designed on several levels so that beginners can carry on
detailed dialogue, while more experienced users can reduce or eliminate prompts and
messages
 Help facilities also include one or more tutorial applications.

Consistency

 An important design consideration in an interface is consistency. For example, a
particular icon shape should always have a single meaning, rather than serving to
represent different actions or objects depending on the context.
 Some other examples of consistency are always placing menus in the same relative positions
so that a user does not have to hunt for a particular option, and always using a particular
combination of keyboard keys for the same action.

 Using color coding so that the same color does not have different meanings in different
situations.
 Generally, a complicated, inconsistent model is difficult for a user to understand and to
work with in an effective way. The objects and operations provided should be designed to
form a minimal and consistent set so that the system is easy to learn.
Minimizing Memorization
 Operations in an interface should also be structured so that they are easy to understand
and to remember.
 Abbreviated command formats lead to confusion and reduce the effectiveness of the
package. One key or button used for all delete operations, for example, is easier
to remember than a number of different keys for different types of delete operations.
 Icons and window systems also aid in minimizing memorization.
 Different kinds of information can be separated into different windows, so that we do not
have to rely on memorization when different information displays overlap.
 We can simply retain the multiple information on the screen in different windows, and
switch back and forth between windows areas. Icons are used to reduce memorizing by
displaying easily recognizable shapes for various objects and actions.
Backup and Error handling
 Backup can be provided in many forms. A standard undo key or command is used to
cancel a single operation.
 Sometimes a system can be backed up through several operations, allowing us to reset
the system to some specified point.
 In a system with extensive backup capabilities, all inputs could be saved so that we can
back up and "replay" any part of a session.
 Sometimes operations cannot be undone. Once we have deleted the trash in the desktop
waste basket, for instance, we cannot recover the deleted files. In this case, the interface
would ask us to verify the delete operation before proceeding.
 Good diagnostics and error messages are designed to help determine the cause of an
error.
 Additionally, interfaces attempt to minimize error possibilities by anticipating certain
actions that could lead to an error. Examples of this are not allowing us to transform an
object position or to delete an object when no object has been selected, not allowing us to

select a line attribute if the selected object is not a line, and not allowing us to select the
paste operation if nothing is in the clipboard.
Feedback
 Interfaces are designed to carry on a continual interactive dialogue so that we are
informed of actions in progress at each step. This is particularly important when the
response time is high. Without feedback, we might begin to wonder what the system is
doing and whether the input should be given again.
 As each input is received, the system normally provides some type of response.
 An object is highlighted, an icon appears, or a message is displayed. This not only
informs us that the input has been received, but it also tells us what the system is doing.
 If processing cannot be completed within a few seconds, several feedback messages
might be displayed to keep us informed of the progress of the system. In some cases, this
could be a flashing message indicating that the system is still working on the input
request.
 With function keys, feedback can be given as an audible click or by lighting up the key
that has been pressed.
 Audio feedback has the advantage that it does not use up screen space, and we do not
need to take attention from the work area to receive the message. When messages are
displayed on the screen, a fixed message area can be used so that we always know where
to look for messages.
 In some cases, it may be advantageous to place feedback messages in the work area near
the cursor. Feedback can also be displayed in different colors to distinguish it from other
displayed objects.
 To speed system response, feedback techniques can be chosen to take advantage
of the operating characteristics of the type of devices in use.
 Special symbols are designed for different types of feedback
1.9 Interactive Picture construction techniques
An interactive construction technique is a method used to build up a picture interactively.
Several different picture construction techniques are available; the common ones are described below.
(i) Positioning – In this method we position different objects according to the
requirements of an application. We use various input devices, such as the mouse and

various other devices, to change the location of an object. In this method we decide
the position of each object.
Advantages:
 We can easily change the location of an object with the help of the mouse.
 We can easily see where an object will appear on the window.
 We can easily observe whether objects overlap one another.
Disadvantages
 We cannot get the exact position of an object.
 We cannot specify an accurate positioning point for an object.
(ii) Dragging - In this method we drag an object from one location to another: we select the
object at one location and drag it to a new location. Dragging is a convenient way to examine
how an object looks at different positions on the screen.
Advantages:
 We can easily drag an object from one location to another.
 We can easily preview how the final output will appear.
Disadvantages
 We are almost entirely restricted to the mouse.
 A floating-point position cannot be typed in to change the location of an object.

(iii) Constraints - In this method various rules (constraints) are applied while a picture is
constructed; a common example is forcing every line segment to be exactly horizontal or vertical.
Constraints are implemented according to the requirements of the user and may also govern drawing
tools such as erasers, pen shapes and other options (a small sketch of a line constraint follows this list).
 Advantages:
 We can modify a picture according to our requirement.
 We can perform various operation on picture modification.
 We can easily enforce rules during picture construction, so the user is kept within the
intended task.
Disadvantages
Sometimes a particular rule or constraint cannot be applied to a given application or object.
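As a small sketch of such a rule (the point type and the function name are assumptions for
illustration), the following C function snaps the free endpoint of a line so that the segment becomes
exactly horizontal or vertical, whichever is closer to the direction actually drawn:

#include <math.h>

typedef struct { float x, y; } WcPt2;

/* Horizontal/vertical constraint: force the segment from 'start' to 'end'
   to be exactly horizontal or vertical, whichever deviates less from the
   direction the user actually indicated. */
WcPt2 constrainHorizVert(WcPt2 start, WcPt2 end)
{
    WcPt2 constrained = end;
    if (fabsf(end.x - start.x) >= fabsf(end.y - start.y))
        constrained.y = start.y;    /* mostly horizontal: keep y fixed */
    else
        constrained.x = start.x;    /* mostly vertical: keep x fixed   */
    return constrained;
}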
(iv) Grids - In this method the drawing area is divided by a grid. The grid is constructed
according to the size of the image: a larger image uses more grid divisions, a smaller one fewer.
Objects are placed at the intersections of the grid rows and columns, and any position entered is
snapped to the nearest intersection. For example, an object coordinate of 4.6 is automatically
shifted to 5, while 4.3 is automatically shifted to 4 (see the sketch after this list).

Advantages:
 The grid provides a definite position for each object.
 Objects can be moved from one grid location to another.
 Objects are automatically shifted (snapped) to the nearest grid point.
Disadvantages:
 We cannot place an object at an arbitrary, off-grid location.
 We cannot obtain exact, unrounded coordinate values for an object.
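A minimal C sketch of the snapping rule described above (the Point2D type, the function name and
the gridSize parameter are assumptions): with a grid spacing of 1, a coordinate of 4.6 moves to 5
and 4.3 moves to 4.

#include <math.h>

typedef struct { float x, y; } Point2D;

/* Snap a point to the nearest grid intersection; gridSize is the spacing
   between grid lines.  With gridSize = 1.0, x = 4.6 snaps to 5.0 and
   x = 4.3 snaps to 4.0, matching the example above. */
Point2D snapToGrid(Point2D p, float gridSize)
{
    Point2D snapped;
    snapped.x = roundf(p.x / gridSize) * gridSize;
    snapped.y = roundf(p.y / gridSize) * gridSize;
    return snapped;
}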

(v) Gravity field - In this method a gravity field is created around a particular line. When the
cursor or the endpoint of a new line enters this field, it is pulled onto the existing line, so the two
lines appear to join exactly (a sketch follows this list).
Advantages:
 We can make shapes connect to one another without having to specify their exact meeting points.
 We can change the apparent arrangement of shapes without any change to the actual shapes.

Disadvantages:
 The displayed picture may differ slightly from the stored geometry.
 We cannot always read back the actual coordinates of an object.
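One plausible way to implement such a field is sketched below in C; all names and the gravity
radius are assumptions. If the cursor falls within the radius of an existing line segment, it is pulled
onto the nearest point of that segment, so a new line connects to it exactly.

#include <math.h>

typedef struct { float x, y; } Point2D;

/* If 'cursor' lies within 'radius' of the segment from a to b, return the
   nearest point on that segment (the cursor is captured by the gravity
   field); otherwise return the cursor unchanged. */
Point2D applyGravity(Point2D cursor, Point2D a, Point2D b, float radius)
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float lenSq = dx * dx + dy * dy;
    float t = (lenSq > 0.0f)
        ? ((cursor.x - a.x) * dx + (cursor.y - a.y) * dy) / lenSq
        : 0.0f;
    if (t < 0.0f) t = 0.0f;              /* clamp to the segment endpoints */
    if (t > 1.0f) t = 1.0f;
    Point2D nearest = { a.x + t * dx, a.y + t * dy };
    float ddx = cursor.x - nearest.x, ddy = cursor.y - nearest.y;
    if (ddx * ddx + ddy * ddy <= radius * radius)
        return nearest;                  /* inside the field: snap to line */
    return cursor;                       /* outside the field: leave as is */
}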
(vi) Rubber-Band Method.
 Straight lines can be constructed and positioned using rubber-band methods, which
stretch out a line from a starting position as the screen cursor is moved. Rubber-band
methods are used to construct and position other objects besides straight lines.
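A minimal rubber-band loop is sketched below in C. The helpers sampleLocator, buttonPressed,
drawLine and eraseLine are assumed to be supplied by the underlying graphics package (for example,
drawing in XOR mode so a line can be erased by redrawing it); they are not exact library calls.

typedef struct { float x, y; } Point2D;

/* Assumed helpers provided by the underlying graphics package: */
Point2D sampleLocator(void);              /* current cursor position          */
int     buttonPressed(void);              /* nonzero once the endpoint is set */
void    drawLine (Point2D a, Point2D b);
void    eraseLine(Point2D a, Point2D b);

/* Stretch a line from 'start' to the moving cursor until a button press
   fixes the final endpoint; the line is erased and redrawn as the cursor moves. */
Point2D rubberBandLine(Point2D start)
{
    Point2D current  = sampleLocator();
    Point2D previous = current;
    drawLine(start, current);

    while (!buttonPressed()) {
        current = sampleLocator();
        if (current.x != previous.x || current.y != previous.y) {
            eraseLine(start, previous);   /* remove the old stretch     */
            drawLine (start, current);    /* redraw to the new position */
            previous = current;
        }
    }
    return current;                       /* the confirmed endpoint     */
}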

(vii)Painting and Drawing


 Options for sketching, drawing, and painting come in a variety of forms. Straight
lines, polygons, and circles can be generated with standard methods.
 Curve drawing options can be provided using standard curve shapes, such as circular arcs
and splines, or with freehand sketching procedures.
 Splines are interactively constructed by specifying a set of discrete screen points that give
the general shape of the curve.
 Then the system fits the set of points with a polynomial curve. In freehand drawing,
curves are generated by following the path of a stylus on a graphics tablet or the path of
the screen cursor on a video monitor. Once a curve is displayed, the designer can alter the
curve shape by adjusting the positions of selected points along the curve path.

 Line widths, line styles, and other attribute options are also commonly found in painting
and drawing packages.

PONDICHERRY UNIVERSITY QUESTIONS


GRAPHICS AND IMAGE PROCESSING
1. Discuss about Video Display Devices. (APR 2012) (APRIL 2013) (NOV 2013)
2. List out and discuss the Output devices. (APR 2012) (NOV 2012)
3. Discuss the function of raster-scan and random-scan display. (NOV 2012) (NOV2018)
4. List out and discuss the input devices. (APR 2013) (NOV 2013) (APR 2014)
5. What are the interactive picture construction techniques and explain in detail? (NOV
2014) (MAY 2015)
6. Explain graphical input devices and output devices in graphics system. (MAY 2015)
(MAY2018)
7. Explain the architecture of Raster Graphics System with display process in detail (NOV
2014)
8. Describe the basic operation of CRT (APR 2015)(MAY2017)
9. Explain any four hard copy devices. (APR 2015)
10. Explain with neat diagrams the various types of video display devices(APR 2016)
11. Explain the interactive picture construction techniques (APR 2016)(APR 2017)
(APR2018)

12. Consider a raster scan system with the resolution of 1024*768 pixels and the color palette
calls for 65,536 colors. What is the minimum amount of video RAM that the computer
must have to support the above mentioned resolution and number of colors?(NOV2016)
13. Consider two raster systems with the resolutions of 640*480 and 1280*1024(NOV2016)
(a).How many pixels could be accessed per second in each of these systems by a display
controller that refreshes the screen at a rate of 60 frames per second?
(b).What is the access time per pixel in each system?
14. Explain and differentiate the functionality of LED and LCD. (NOV 2017)
15. Explain Rubber-band method, Zooming, Panning and Dragging. (NOV 2017)
16. Discuss input devices in detail. (NOV 2018)
