
Computer Graphics And Visualization 10CS65

Dept of CSE, SJBIT



VTU QUESTION BANK SOLUTION
UNIT-1
INTRODUCTION
1. Explain the concept of pinhole cameras which is an example of an imaging
system.(10 marks)(Dec/Jan 12)(Jun/Jul 13)

The geometry related to the mapping of a pinhole camera is illustrated in the figure. The figure
contains the following basic objects
A 3D orthogonal coordinate system with its origin at O. This is also where the camera aperture is
located. The three axes of the coordinate system are referred to as X1, X2, X3. Axis X3 is pointing in
the viewing direction of the camera and is referred to as the optical axis, principal axis, or principal
ray. The 3D plane which intersects with axes X1 and X2 is the front side of the camera, or principal
plane.
An image plane where the 3D world is projected through the aperture of the camera. The image plane
is parallel to axes X1 and X2 and is located at distance f from the origin O in the negative direction of
the X3 axis. A practical implementation of a pinhole camera implies that the image plane is located
such that it intersects the X3 axis at coordinate -f, where f > 0. f is also referred to as the focal
length of the pinhole camera.
A point R at the intersection of the optical axis and the image plane. This point is referred to as the
principal point or image center.
A point P somewhere in the world at coordinate (x_1, x_2, x_3) relative to the axes X1,X2,X3.
The projection line of point P into the camera. This is the green line which passes through point P and
the point O.
The projection of point P onto the image plane, denoted Q. This point is given by the intersection of
the projection line (green) and the image plane. In any practical situation we can assume that x_3 > 0
which means that the intersection point is well defined.
There is also a 2D coordinate system in the image plane, with origin at R and with axes Y1 and Y2

which are parallel to X1 and X2, respectively. The coordinates of point Q relative to this coordinate
system are (y_1, y_2).
The pinhole aperture of the camera, through which all projection lines must pass, is assumed to be
infinitely small, a point. In the literature this point in 3D space is referred to as the optical (or lens or
camera) center.
Next we want to understand how the coordinates (y_1, y_2) of point Q depend on the coordinates
(x_1, x_2, x_3) of point P. This can be done with the help of the following figure which shows the
same scene as the previous figure but now from above, looking down in the negative direction of the
X2 axis.
2. Derive the expression for angle of view. Also indicate the advantages and
disadvantages. (10 marks) (Dec/Jan 12)

In this figure we see two similar triangles, both having parts of the projection line (green) as their
hypotenuses. The catheti of the left triangle are -y_1 and f and the catheti of the right triangle are x_1
and x_3 . Since the two triangles are similar it follows that
\frac{-y_1}{f} = \frac{x_1}{x_3} or y_1 = -\frac{f \, x_1}{x_3}
A similar investigation, looking in the negative direction of the X1 axis gives
\frac{-y_2}{f} = \frac{x_2}{x_3} or y_2 = -\frac{f \, x_2}{x_3}
This can be summarized as
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix} = -\frac{f}{x_3} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
which is an expression that describes the relation between the 3D coordinates (x_1,x_2,x_3) of point
P and its image coordinates (y_1,y_2) given by point Q in the image plane
3. With an aid of a functional schematic, describe the graphics pipeline with major steps in the
imaging process (10 marks)(Jun/Jul 12) (Dec/Jan 13) (Jun/Jul 13)
Pipeline Architecture




E.g. An arithmetic pipeline
Terminologies:
Latency: the time taken from the first stage until the end result is produced.
Throughput: the number of outputs per given time.

Graphics Pipeline :






Process objects one at a time in the order they are generated by the application
All steps can be implemented in hardware on the graphics card
Vertex Processor
Much of the work in the pipeline is in converting object representations from one coordinate
system to another
Object coordinates
Camera (eye) coordinates
Screen coordinates
Every change of coordinates is equivalent to a matrix transformation
Vertex processor also computes vertex colors


Primitive Assembly
Vertices must be collected into geometric objects before clipping and rasterization can take place
Line segments
Polygons
Curves and surfaces

Clipping
Just as a real camera cannot see the whole world, the virtual camera can only see part of the world
or object space
Objects that are not within this volume are said to be clipped out of the scene


Rasterization :
If an object is not clipped out, the appropriate pixels in the frame buffer must be assigned colors
Rasterizer produces a set of fragments for each object
Fragments are potential pixels
Have a location in the frame buffer
Color and depth attributes

Vertex attributes are interpolated over objects by the rasterizer

Fragment Processor :
Fragments are processed to determine the color of the corresponding pixel in the frame buffer
Colors can be determined by texture mapping or interpolation of vertex colors
Fragments may be blocked by other fragments closer to the camera
Hidden-surface removal
4. Describe the working of an output device with an example.(5 marks) (Jun/Jul 12)
Graphics


Graphical output displayed on a screen.
A digital image is a numeric representation of an image stored on a computer. Digital images have no
physical size until they are displayed on a screen or printed on paper; until that point, they are just a
collection of numbers on the computer's hard drive that describe the individual elements of a picture
and how they are arranged. Some computers come with built-in graphics capability; others need a
device, called a graphics card or graphics adapter board, that has to be added. Unless a computer has
graphics capability built into the motherboard, that translation takes place on the graphics card.
Depending on whether the image resolution is fixed, it may be of vector or raster type. Without
qualification, the term "digital image" usually refers to raster images, also called bitmap images.
Raster images are composed of pixels and are suited to photo-realistic images. Vector images are
composed of lines and coordinates rather than dots and are better suited to line art, graphs or fonts.
To make a 3-D image, the graphics card first creates a wire frame out of straight lines. Then it
rasterizes the image (fills in the remaining pixels). It also adds lighting, texture and color.
Tactile
Haptic technology, or haptics, is a tactile feedback technology which takes advantage of the sense of
touch by applying forces, vibrations, or motions to the user. Several printers and wax jet printers have
the capability of producing raised line drawings. There are also handheld devices that use an array of
vibrating pins to present a tactile outline of the characters or text under the viewing window of the
device.
Audio

Speech output systems can be used to read screen text to computer users. Special software programs
called screen readers attempt to identify and interpret what is being displayed on the screen, and speech
synthesizers convert that data to vocalized sounds.
5. Name the different elements of graphics system and explain in detail.(8 marks)(Dec/Jan
13)
A user interacts with the graphics system with self-contained packages and input devices. E.g. A paint
editor.
This package or interface enables the user to create or modify images without having to write
programs. The interface consists of a set of functions (API) that resides in a graphics library


The application programmer uses the API functions and is shielded from the details of its
implementation.
The device driver is responsible to interpret the output of the API and converting it into a form
understood by the particular hardware.
Pen-plotter model:
This is a 2-D system which moves a pen to draw images in 2 orthogonal directions.
E.g. : LOGO language implements this system.
moveto(x,y) moves pen to (x,y) without tracing a line. lineto(x,y) moves pen to (x,y) by tracing a
line.
Alternate raster based 2-D model : Writes pixels directly to frame buffer
E.g. : write_pixel(x,y,color)
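As a rough sketch of the difference between the two models (moveto, lineto and write_pixel are the abstract functions named above, not real OpenGL calls; RED is a hypothetical colour constant):

/* pen-plotter model: trace the outline of a unit square */
moveto(0.0, 0.0);   /* position the pen without drawing */
lineto(1.0, 0.0);   /* trace the bottom edge            */
lineto(1.0, 1.0);
lineto(0.0, 1.0);
lineto(0.0, 0.0);

/* raster model: write pixels directly into the frame buffer */
for (int x = 100; x < 200; x++)
    write_pixel(x, 150, RED);   /* a horizontal run of red pixels (RED is hypothetical) */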
In order to obtain images of objects close to the real world, we need 3-D object model.
6. Describe the working of an output device with an example.( 5 marks) (Jun/Jul 12)


The cathode ray tube or CRT was invented by Karl Ferdinand Braun. It was the most common type of
display for many years. It was used in almost all computer monitors and televisions until LCD and
plasma screens started being used.
A cathode ray tube is an electron gun. The cathode is an electrode (a metal that can send out electrons
when heated). The cathode is inside a glass tube. Also inside the glass tube is an anode that attracts
electrons. This is used to pull the electrons toward the front of the glass tube, so the electrons shoot out
in one direction, like a ray gun. To better control the direction of the electrons, the air is taken out of
the tube, making a vacuum.
The electrons hit the front of the tube, where a phosphor screen is. The electrons make the phosphor
light up. The electrons can be aimed by creating a magnetic field. By carefully controlling which bits
of phosphor light up, a bright picture can be made on the front of the vacuum tube. Changing this
picture 30 times every second will make it look like the picture is moving. Because there is a vacuum
inside the tube (which has to be strong enough to hold out the air), and the tube must be glass for the
phosphor to be visible, the tube must be made of thick glass. For a large television, this vacuum tube can
be quite heavy.
6. Define computer graphics. Explain the application of computer graphics.(10 marks)
(Jun/Jul 13)(Dec/Jan 14)
Computer graphics is a sub-field of computer science which studies methods for digitally synthesizing
and manipulating visual content. Although the term often refers to the study of three-dimensional
computer graphics, it also encompasses two-dimensional graphics and image processing.
The following are also considered graphics applications
Paint programs: Allow you to create rough freehand drawings. The images are stored as bit maps and
can easily be edited. It is a graphics program that enables you to draw pictures on the display screen
which is represented as bit maps (bit-mapped graphics). In contrast, draw programs use vector graphics
(object-oriented images), which scale better.
Most paint programs provide the tools shown below in the form of icons. By selecting an icon, you can

perform functions associated with the tool. In addition to these tools, paint programs also provide easy
ways to draw common shapes such as straight lines, rectangles, circles, and ovals.
Sophisticated paint applications are often called image editing programs. These applications support
many of the features of draw programs, such as the ability to work with objects. Each object, however,
is represented as a bit map rather than as a vector image.
Illustration/design programs: Supports more advanced features than paint programs, particularly for
drawing curved lines. The images are usually stored in vector-based formats. Illustration/design
programs are often called draw programs. Presentation graphics software: Lets you create bar charts,
pie charts, graphics, and other types of images for slide shows and reports. The charts can be based on
data imported from spreadsheet applications.
A type of business software that enables users to create highly stylized images for slide shows and
reports. The software includes functions for creating various types of charts and graphs and for
inserting text in a variety of fonts. Most systems enable you to import data from a spreadsheet
application to create the charts and graphs. Presentation graphics is often called business graphics.
Animation software: Enables you to chain and sequence a series of images to simulate movement.
Each image is like a frame in a movie. It can be defined as a simulation of movement created by
displaying a series of pictures, or frames. A cartoon on television is one example of animation.
Animation on computers is one of the chief ingredients of multimedia presentations. There are many
software applications that enable you to create animations that you can display on a computer monitor.
There is a difference between animation and video. Whereas video takes continuous motion and breaks
it up into discrete frames, animation starts with independent pictures and puts them together to form
the illusion of continuous motion.
CAD software: Enables architects and engineers to draft designs. It is the acronym for computer-aided
design. A CAD system is a combination of hardware and software that enables engineers and architects
to design everything from furniture to airplanes. In addition to the software, CAD systems require a
high-quality graphics monitor; a mouse, light pen, or digitizing tablet for drawing; and a special printer
or plotter for printing design specifications.
CAD systems allow an engineer to view a design from any angle with the push of a button and to zoom
in or out for close-ups and long-distance views. In addition, the computer keeps track of design
dependencies so that when the engineer changes one value, all other values that depend on it are
automatically changed accordingly. Until the mid 1980s, all CAD systems were specially constructed
computers. Now, you can buy CAD software that runs on general-purpose workstations and personal

computers.
Desktop publishing: Provides a full set of word-processing features as well as fine control over
placement of text and graphics, so that you can create newsletters, advertisements, books, and other
types of documents. It means that high-quality printed documents can be produced using a personal
computer or workstation. A desktop publishing system allows you to use different typefaces,
specify various margins and justifications, and embed illustrations and graphs directly into the text.
The most powerful desktop publishing systems enable you to create illustrations; while less powerful
systems let you insert illustrations created by other programs.
7. Explain the different graphics architectures in detail, with the aid of functional
schematics.(10 Marks)(Dec/Jan 14)
A Graphics system has 5 main elements






Pixels and the Frame Buffer
The image produced is an array (raster) of picture elements (pixels), which are stored in the frame buffer.


Properties of the frame buffer:
Resolution - the number of pixels in the frame buffer.
Depth or precision - the number of bits used for each pixel.
E.g.: a 1-bit-deep frame buffer allows 2 colors; an 8-bit-deep frame buffer allows 256 colors.
A Frame buffer is implemented either with special types of memory chips or it can be a part of system
memory.
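As a rough worked example of these two properties: a frame buffer with a resolution of 1024 x 768 and a depth of 24 bits (true colour) occupies 1024 x 768 x 3 bytes, roughly 2.25 MB, whereas the same resolution at a depth of 8 bits (256 indexed colours) needs only about 0.75 MB.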
In simple systems the CPU does both normal and graphical processing.
Graphics processing takes specifications of graphical primitives from the application program and assigns
values to the pixels in the frame buffer. This is also known as rasterization or scan conversion.




UNIT-2
THE OPENGL

1. Write an OpenGL program for a 2d sierpinski gasket using mid-point of each triangle.( 10
marks) (Dec/Jan 12) (Dec/Jan 13)
#include <GL/glut.h>
/* a point data type */
typedef GLfloat point2[2];
/* initial triangle global variables */
point2 v[]={{-1.0, -0.58}, {1.0, -0.58},
{0.0, 1.15}};
int n; /* number of recursive steps */
int main(int argc, char **argv)
{
n=4;
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);
glutInitWindowSize(500, 500);
glutCreateWindow("2D Gasket");
glutDisplayFunc(display);
myinit();
glutMainLoop();
}

void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
divide_triangle(v[0], v[1], v[2], n);
glFlush();
}
void myinit()
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-2.0, 2.0, -2.0, 2.0);
glMatrixMode(GL_MODELVIEW);
glClearColor(1.0, 1.0, 1.0, 1.0);
glColor3f(0.0,0.0,0.0);
}
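The display callback above calls divide_triangle(), which is not shown in the listing. A minimal sketch of the missing triangle() and divide_triangle() helpers, following the usual recursive mid-point subdivision for the gasket, is given below (they must appear, or be prototyped, before display() in the file):

void triangle(point2 a, point2 b, point2 c)
{
/* draw one filled triangle from three 2D points */
glBegin(GL_TRIANGLES);
glVertex2fv(a);
glVertex2fv(b);
glVertex2fv(c);
glEnd();
}
void divide_triangle(point2 a, point2 b, point2 c, int m)
{
/* subdivide about the mid-points of the sides and recurse on the
   three corner triangles; draw when the recursion depth m reaches 0 */
point2 v0, v1, v2;
int j;
if(m>0)
{
for(j=0; j<2; j++) v0[j]=(a[j]+b[j])/2;
for(j=0; j<2; j++) v1[j]=(a[j]+c[j])/2;
for(j=0; j<2; j++) v2[j]=(b[j]+c[j])/2;
divide_triangle(a, v0, v1, m-1);
divide_triangle(c, v1, v2, m-1);
divide_triangle(b, v2, v0, m-1);
}
else
triangle(a, b, c);   /* draw the triangle at the end of the recursion */
}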
2. Briefly explain the orthographic viewing with OpenGL functions for 2d
and 3d viewing. Indicate the significance of projection plane and viewing point in this.(10
marks) (Dec/Jan 12) (Jun/Jul 12)
gluOrtho2D define a 2D orthographic projection matrix
C Specification
void gluOrtho2D( GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top);
Parameters
left, right
Specify the coordinates for the left and right vertical clipping planes.
bottom, top
Specify the coordinates for the bottom and top horizontal clipping planes.
Description
gluOrtho2D sets up a two-dimensional orthographic viewing region. This is equivalent to calling
glOrtho with near = -1 and far = 1 .
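A minimal sketch of how the projection is typically set up in the two cases (the clipping-volume limits below are arbitrary example values):

/* 2D viewing: objects inside the rectangle [-2,2] x [-2,2] are visible */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(-2.0, 2.0, -2.0, 2.0);
glMatrixMode(GL_MODELVIEW);

/* 3D viewing: an orthographic view volume with explicit near and far planes */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2.0, 2.0, -2.0, 2.0, -10.0, 10.0);   /* left, right, bottom, top, near, far */
glMatrixMode(GL_MODELVIEW);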

3. Explain any 2 control functions in openGL. (4 marks) (Jun/Jul 12) (Dec/Jan 13)
glutMainLoop
glutMainLoop enters the GLUT event processing loop.
Usage
void glutMainLoop(void);
Description
glutMainLoop enters the GLUT event processing loop. This routine should be called at most once in a
GLUT program. Once called, this routine will never return. It will call as necessary any callbacks that
have been registered.
4. Explain how indexed color mode is implemented in graphics system. (4 marks) (Dec/Jan
13) (Jun/Jul 13)

In computing, indexed color is a technique to manage digital images' colors in a limited fashion, in
order to save computer memory and file storage, while speeding up display refresh and file transfers. It
is a form of vector quantization compression.
When an image is encoded in this way, color information is not directly carried by the image pixel
data, but is stored in a separate piece of data called a palette: an array of color elements, in which every
element, a color, is indexed by its position within the array. The image pixels do not contain the full
specification of their color, but only an index into the palette. This technique is sometimes referred to as
pseudocolor or indirect color, as colors are addressed indirectly.
Perhaps the first device that supported palette colors was a random-access frame buffer, described in
1975 by Kajiya, Sutherland and Cheadle. This supported a palette of 256 36-bit RGB colors.
The palette itself stores a limited number of distinct colors; 4, 16 or 256 are the most common cases.
These limits are often imposed by the target architecture's display adapter hardware, so it is not a

coincidence that those numbers are exact powers of two (of the binary code): 2^2 = 4, 2^4 = 16 and 2^8 = 256.
While 256 values can be fit into a single 8-bit byte (and then a single indexed color pixel also
occupies a single byte), pixel indices with 16 (4-bit, a nibble) or fewer colors can be packed together
into a single byte (two nibbles per byte, if 16 colors are employed, or four 2-bit pixels per byte if using
4 colors). Sometimes, 1-bit (2-color) values can be used, and then up to eight pixels can be packed into
a single byte; such images are considered binary images (sometimes referred as a bitmap or bilevel
image) and not an indexed color image.
If simple video overlay is intended through a transparent color, one palette entry is specifically
reserved for this purpose, and it is discounted as an available color. Some machines, such as the MSX
series, had the transparent color reserved by hardware.
Indexed color images with palette sizes beyond 256 entries are rare. The practical limit is around 12-bit
per pixel, 4,096 different indices. To use indexed 16 bpp or more does not provide the benefits of the
indexed color images' nature, due to the color palette size in bytes being greater than the raw image
data itself. Also, useful direct RGB Highcolor modes can be used from 15 bpp and up.
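A minimal sketch of the lookup involved (plain C; the palette, image, WIDTH and HEIGHT names are hypothetical, not from any particular API):

unsigned char palette[256][3];       /* hypothetical palette: 256 RGB entries, 3 bytes each */
unsigned char image[HEIGHT][WIDTH];  /* hypothetical image: one palette index per pixel     */

/* expanding the pixel at (x, y) into its displayable RGB colour */
unsigned char idx = image[y][x];
unsigned char r = palette[idx][0];
unsigned char g = palette[idx][1];
unsigned char b = palette[idx][2];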
5. List and explain graphics functions.(10 marks) (Jun/Jul 13) (Dec/Jan 14)
Control Functions (interaction with windows):
glutInitDisplayMode - requests display properties for the window, e.g. RGB colour and single buffering; the properties are logically ORed together.
glutInitWindowSize - the window size, in pixels.
glutInitWindowPosition - the window position, measured from the top-left corner of the display.
glutCreateWindow - creates a window with a particular title.
Aspect ratio and viewports
Aspect ratio is the ratio of width to height of a particular object.
We may obtain undesirable output if the aspect ratio of the viewing rectangle (specified by glOrtho) is
not the same as the aspect ratio of the window (specified by glutInitWindowSize).
Viewport - a rectangular area of the display window, whose height and width can be adjusted to
match that of the clipping window, to avoid distortion of the images.
void glViewport(GLint x, GLint y, GLsizei w, GLsizei h);
The main, display and myinit functions
In our application, once the primitive is rendered onto the display and the application program ends,
the window may disappear from the display.
Event processing loop :
void glutMainLoop();

Graphics is sent to the screen through a function called display callback.
void glutDisplayFunc(function name)
The function myinit() is used to set the OpenGL state variables dealing with viewing and attributes.
Control Functions
glutInit(int *argc, char **argv) initializes GLUT and processes any command line arguments (for X,
this would be options like -display and -geometry). glutInit() should be called before any other GLUT
routine.
glutInitDisplayMode(unsigned int mode) specifies whether to use an RGBA or color-index color
model. You can also specify whether you want a single- or double-buffered window. (If you're
working in color-index mode, you'll want to load certain colors into the color map; use glutSetColor()
to do this.) If you want a window with double buffering, the RGBA color model, and a depth buffer,
you might call
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH).
glutInitWindowPosition(int x, int y) specifies the screen location for the upper-left corner of your
window.
glutInitWindowSize(int width, int height) specifies the size, in pixels, of your window.
int glutCreateWindow(char *string) creates a window with an OpenGL context. It returns a unique
identifier for the new window. Be warned: until glutMainLoop() is called, the window is not yet
displayed.
6. Write a typical main function that is common to most non-interactive applications and
explain each function call in it. (10 Marks) (Dec/Jan 14)
int main(int argc, char **argv)
{
n=4;                                        /* number of recursive subdivision steps             */
glutInit(&argc, argv);                      /* initialize GLUT and process command-line options  */
glutInitDisplayMode(GLUT_SINGLE|GLUT_RGB);  /* single-buffered window, RGB colour model          */
glutInitWindowSize(500, 500);               /* window size in pixels                             */
glutCreateWindow("2D Gasket");              /* create a window with the given title              */
glutDisplayFunc(display);                   /* register the display callback                     */
myinit();                                   /* set up viewing and attribute state                */
glutMainLoop();                             /* enter the GLUT event-processing loop              */

}
7. Explain Color Cube in brief. (03 Marks) (Dec/Jan 14)
The colour space for computer based applications is often visualised by a unit cube. Each colour (red,
green, blue) is assigned to one of the three orthogonal coordinate axes in 3D space. An example of such
a cube is shown below along with some key colours and their coordinates.


Along each axis of the colour cube the colours range from no contribution of that component to a
fully saturated colour.
The colour cube is solid: any point (colour) within the cube is specified by three numbers,
namely an (r, g, b) triple.
The diagonal line of the cube from black (0,0,0) to white (1,1,1) represents all the greys, that is,
all the red, green, and blue components are the same.
In practice different computer hardware/software combinations use different ranges for the
colours; common ones are 0-255 and 0-65535 for each component. This is simply a linear scaling of the
unit colour cube described here (see the example below).
This RGB colour space lies within our perceptual space, that is, the RGB cube is smaller and
represents fewer colours than we can see.
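For example, in OpenGL the same corner of the cube can be specified on either scale (both calls are part of the standard API):

glColor3f(1.0, 0.0, 0.0);   /* red, components given in the range [0, 1]   */
glColor3ub(255, 0, 0);      /* the same red, components given in [0, 255]  */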
UNIT - 3
INPUT AND INTERACTION

1. What are the various classes of logical input devices that are supported by openGL?
Explain the functionality of each of these classes.(10 marks) (Dec/Jan 12)
The six input classes and the logical input values they provide are:
LOCATOR
Returns a position (an x,y value) in World Coordinates and a Normalization Transformation number
corresponding to that used to map back from Normalized Device Coordinates to World Coordinates.
The NT used corresponds to that viewport with the highest Viewport Input Priority (set by calling
GSVPIP). Warning: If there is no viewport input priority set then NT 0 is used as default, in which case
the coordinates are returned in NDC. This may not be what is expected!
CALL GSVPIP(TNR, RTNR, RELPRI)
TNR
Transformation Number
RTNR
Reference Transformation Number
RELPRI
One of the values 'GHIGHR' or 'GLOWER' defined in the include file ENUM.INC, which is listed in
the appendix.
STROKE
Returns a sequence of (x,y) points in World Coordinates and a Normalization Transformation as for the
Locator.
VALUATOR
Returns a real value, for example, to control some sort of analogue device.
CHOICE
Returns a non-negative integer which represents a choice from a selection of several possibilities. This
could be implemented as a menu, for example.
STRING
Returns a string of characters from the keyboard.
PICK
Returns a segment name and a pick identifier of an object pointed at by the user. Thus, the application
does not have to use the locator to return a position, and then try to find out to which object the position
corresponds


2. Enlist the various features that a good interactive program should include. (5 marks)
(Dec/Jan 12) (Dec/Jan 13)


Interactive programming is the procedure of writing parts of a program while it is already active. This
focuses on the program text as the main interface for a running process, rather than an interactive
application, where the program is designed in development cycles and used thereafter (usually by a so-
called "user", in distinction to the "developer"). Consequently, here, the activity of writing a program
becomes part of the program itself.
It thus forms a specific instance of interactive computation as an extreme opposite to batch processing,
where neither writing the program nor its use happens in an interactive way. The principle of rapid
feedback in Extreme Programming is radicalized and becomes more explicit.

3. Suppose that the openGL window is 500 X 50 pixels and the clipping window is a unit
square with the origin at the lower left corner. Use simple XOR mode to draw erasable
lines.(10 marks) (Jun/Jul 12)
void MouseMove(int x,int y)
{

if(FLAG == 0){
X = x;
Y = winh - y;

Xn = x;
Yn = winh - y;


FLAG = 1;
}

else if(FLAG == 1){
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);

glBegin(GL_LINES);
glVertex2i(X,Y);
glVertex2i(Xn,Yn);
glEnd();
glFlush();/*Old line erased*/

glBegin(GL_LINES);
glVertex2i(X,Y);
glVertex2i(x, winh - y);
glEnd();
glFlush();

Xn = x;
Yn = winh - y;
}
}
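The fragment above assumes a few globals and a projection that matches the window in pixel units; a minimal sketch of that supporting setup (the names X, Y, Xn, Yn, FLAG and winh are exactly the ones the fragment uses; the window size is taken as 500 x 500 for illustration) might be:

int X, Y, Xn, Yn;   /* fixed endpoint and current rubber-band endpoint   */
int FLAG = 0;       /* 0: waiting for the first point, 1: rubber-banding */
int winh = 500;     /* window height, needed to flip the y coordinate    */

void myinit(void)
{
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0.0, 500.0, 0.0, 500.0);   /* world coordinates match pixel coordinates */
glMatrixMode(GL_MODELVIEW);
}

/* in main(): glutMotionFunc(MouseMove); registers the handler above */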
4. What is the functionality of display lists in modeling. Explain with an example (5 marks)
(Dec/Jan 12) (Jun/Jul 13)
Display lists may improve performance since you can use them to store OpenGL commands for later
execution. It is often a good idea to cache commands in a display list if you plan to redraw the same
geometry multiple times, or if you have a set of state changes that need to be applied multiple times.
Using display lists, you can define the geometry and/or state changes once and execute them multiple
times
A display list is a convenient and efficient way to name and organize a set of OpenGL commands. For
example, suppose you want to draw a torus and view it from different angles. The most efficient way to

do this would be to store the torus in a display list. Then whenever you want to change the view, you
would change the modelview matrix and execute the display list to draw the torus. Example illustrates
this.
Creating a Display List: torus.c
#include <GL/gl.h>
#include <GL/glu.h>
#include <stdio.h>
#include <math.h>
#include <GL/glut.h>
#include <stdlib.h>

GLuint theTorus;

/* Draw a torus */
static void torus(int numc, int numt)
{
int i, j, k;
double s, t, x, y, z, twopi;

twopi = 2 * (double)M_PI;
for (i = 0; i < numc; i++) {
glBegin(GL_QUAD_STRIP);
for (j = 0; j <= numt; j++) {
for (k = 1; k >= 0; k--) {
s = (i + k) % numc + 0.5;
t = j % numt;

x = (1+.1*cos(s*twopi/numc))*cos(t*twopi/numt);
y = (1+.1*cos(s*twopi/numc))*sin(t*twopi/numt);
z = .1 * sin(s * twopi / numc);
glVertex3f(x, y, z);
}
}
glEnd();
}
}
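The listing shows only the torus geometry; the part that actually creates the display list once and then executes it every frame (a hedged sketch in the spirit of the same red-book example) looks roughly like:

static void init(void)
{
theTorus = glGenLists(1);          /* reserve one display-list name       */
glNewList(theTorus, GL_COMPILE);   /* record the following commands       */
torus(8, 25);                      /* the geometry defined above          */
glEndList();
glClearColor(0.0, 0.0, 0.0, 0.0);
}

void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0, 1.0, 1.0);
glCallList(theTorus);              /* replay the recorded torus geometry  */
glFlush();
}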

5. Explain Picking operation in openGL with an example. (10 marks) (Jun/Jul 12)
The OpenGL API provides a mechanism for picking objects in a 3D scene. This tutorial will show you
how to detect which objects are below the mouse or in a square region of the OpenGL window. The
steps involved in detecting which objects are at the location where the mouse was clicked are:
1. Get the window coordinates of the mouse
2. Enter selection mode

3. Redefine the viewing volume so that only a small area of the window around the cursor is rendered
4. Render the scene, either using all primitives or only those relevant to the picking operation
5. Exit selection mode and identify the objects which were rendered on that small part of the screen.
In order to identify the rendered objects using the OpenGL API you must name all relevant objects in
your scene. The OpenGL API allows you to give names to primitives, or sets of primitives (objects).
When in selection mode, a special rendering mode provided by the OpenGL API, no objects are actually
rendered in the framebuffer. Instead the names of the objects (plus depth information) are collected in an
array. For unnamed objects, only depth information is collected.
Using the OpenGL terminology, a hit occurs whenever a primitive is rendered in selection mode. Hit
records are stored in the selection buffer. Upon exiting the selection mode OpenGL returns the selection
buffer with the set of hit records. Since OpenGL provides depth information for each hit the application
can then easily detect which object is closer to the user.
Introducing the Name Stack
As the title suggests, the names you assign to objects are stored in a stack. Actually you don't give
names to objects, as in a text string. Instead you number objects. Nevertheless, since in OpenGL the
term name is used, the tutorial will also use the term name instead of number.
When an object is rendered, if it intersects the new viewing volume, a hit record is created. The hit
record contains the names currently on the name stack plus the minimum and maximum depth for the
object. Note that a hit record is created even if the name stack is empty, in which case it only contains
depth information. If more objects are rendered before the name stack is altered or the application leaves
the selection mode, then the depth values stored on the hit record are altered accordingly.
A hit record is stored on the selection buffer only when the current contents of the name stack are altered
or when the application leaves the selection mode.
The rendering function for the selection mode therefore is responsible for the contents of the name stack
as well as the rendering of primitives.
OpenGL provides the following functions to manipulate the Name Stack:
void glInitNames(void);
This function creates an empty name stack. You are required to call this function to initialize the stack
prior to pushing names.
void glPushName(GLuint name);
Adds name to the top of the stack. The stack's maximum dimension is implementation dependent,
however according to the specs it must contain at least 64 names which should prove to be more than

enough for the vast majority of applications. Nevertheless if you want to be sure you may query the state
variable GL_NAME_STACK_DEPTH (use glGetIntegerv(GL_NAME_STACK_DEPTH)). Pushing
values onto the stack beyond its capacity causes an overflow error GL_STACK_OVERFLOW.
void glPopName();
Removes the name from top of the stack. Popping a value from an empty stack causes an underflow,
error GL_STACK_UNDERFLOW.
void glLoadName(GLuint name);
This function replaces the top of the stack with name. It is the same as calling
glPopName();
glPushName(name);
This function is basically a short cut for the above snippet of code. Loading a name on an empty stack
causes the error GL_INVALID_OPERATION.
Note: Calls to the above functions are ignored when not in selection mode. This means that you may
have a single rendering function with all the name stack functions inside it. When in the normal
rendering mode the functions are ignored and when in selection mode the hit records will be collected.
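Putting the steps together, a hedged sketch of a picking routine (the buffer size, object names and the drawObject1/drawObject2/processHits helpers are illustrative, not from the original program):

#define BUFSIZE 512

void pick(int x, int y)                 /* x, y: mouse position in window coordinates */
{
GLuint selectBuf[BUFSIZE];
GLint hits, viewport[4];

glGetIntegerv(GL_VIEWPORT, viewport);
glSelectBuffer(BUFSIZE, selectBuf);     /* where hit records will be stored     */
glRenderMode(GL_SELECT);                /* enter selection mode                 */

glInitNames();
glPushName(0);

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
/* restrict rendering to a 5 x 5 pixel region around the cursor */
gluPickMatrix((GLdouble)x, (GLdouble)(viewport[3] - y), 5.0, 5.0, viewport);
gluOrtho2D(-2.0, 2.0, -2.0, 2.0);       /* same projection used for normal drawing */

glLoadName(1); drawObject1();           /* name each object, then draw it (hypothetical routines) */
glLoadName(2); drawObject2();

glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);

hits = glRenderMode(GL_RENDER);         /* leave selection mode, get the hit count      */
processHits(hits, selectBuf);           /* hypothetical: walk the hit records (names + depths) */
}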
6. Explain logical classifications of Input devices with examples. (Dec/Jan 13) (Dec/Jan 14)
LOCATOR
Returns a position (an x,y value) in World Coordinates and a Normalization Transformation number
corresponding to that used to map back from Normalized Device Coordinates to World Coordinates.
The NT used corresponds to that viewport with the highest Viewport Input Priority (set by calling
GSVPIP). Warning: If there is no viewport input priority set then NT 0 is used as default, in which
case the coordinates are returned in NDC. This may not be what is expected!
CALL GSVPIP(TNR, RTNR, RELPRI)
TNR
Transformation Number
RTNR
Reference Transformation Number
RELPRI
One of the values 'GHIGHR' or 'GLOWER' defined in the include file ENUM.INC, which is listed in
the appendix.
STROKE
Returns a sequence of (x,y) points in World Coordinates and a Normalization Transformation as for

the Locator.
VALUATOR
Returns a real value, for example, to control some sort of analogue device.
CHOICE
Returns a non-negative integer which represents a choice from a selection of several possibilities. This
could be implemented as a menu, for example.
STRING
Returns a string of characters from the keyboard.
PICK
Returns a segment name and a pick identifier of an object pointed at by the user. Thus, the application
does not have to use the locator to return a position, and then try to find out to which object the
position corresponds
7. How are menus and submenus created in openGL? Explain with an example.(Dec/Jan 13)
//#include <windows.h>
#include <iostream>
#include <GL/glut.h>


/* process menu option 'op' */
void menu(int op) {

switch(op) {
case 'Q':
case 'q':
exit(0);
}
}

/* executed when a regular key is pressed */
void keyboardDown(unsigned char key, int x, int y) {

switch(key) {

case 'Q':
case 'q':
case 27: // ESC
exit(0);
}
}

/* executed when a regular key is released */
void keyboardUp(unsigned char key, int x, int y) {

}

/* executed when a special key is pressed */
void keyboardSpecialDown(int k, int x, int y) {

}

/* executed when a special key is released */
void keyboardSpecialUp(int k, int x, int y) {

}

/* reshaped window */
void reshape(int width, int height) {

GLfloat fieldOfView = 90.0f;
glViewport (0, 0, (GLsizei) width, (GLsizei) height);

glMatrixMode (GL_PROJECTION);
glLoadIdentity();
gluPerspective(fieldOfView, (GLfloat) width/(GLfloat) height, 0.1, 500.0);


glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
}

/* executed when button 'button' is put into state 'state' at screen position ('x', 'y') */
void mouseClick(int button, int state, int x, int y) {

}

/* executed when the mouse moves to position ('x', 'y') */
void mouseMotion(int x, int y) {

}

/* render the scene */
void draw() {

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

/* render the scene here */

glFlush();
glutSwapBuffers();
}

/* executed when program is idle */
void idle() {

}


/* initialize OpenGL settings */
void initGL(int width, int height) {

reshape(width, height);

glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepth(1.0f);

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
}

/* initialize GLUT settings, register callbacks, enter main loop */
int main(int argc, char** argv) {

glutInit(&argc, argv);

glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(800, 600);
glutInitWindowPosition(100, 100);
glutCreateWindow("Perspective's GLUT Template");

// register glut call backs
glutKeyboardFunc(keyboardDown);
glutKeyboardUpFunc(keyboardUp);
glutSpecialFunc(keyboardSpecialDown);
glutSpecialUpFunc(keyboardSpecialUp);
glutMouseFunc(mouseClick);
glutMotionFunc(mouseMotion);
glutReshapeFunc(reshape);
glutDisplayFunc(draw);
glutIdleFunc(idle);

glutIgnoreKeyRepeat(true); // ignore keys held down

// create a sub menu
int subMenu = glutCreateMenu(menu);
glutAddMenuEntry("Do nothing", 0);
glutAddMenuEntry("Really Quit", 'q');

// create main "right click" menu
glutCreateMenu(menu);
glutAddSubMenu("Sub Menu", subMenu);
glutAddMenuEntry("Quit", 'q');
glutAttachMenu(GLUT_RIGHT_BUTTON);
initGL(800, 600);
glutMainLoop();
return 0;
}

8. Name different types of graphics input devices. Explain the input modes in detail with
example.(10 marks) (Jun/Jul 13)
Keyboard
The most common and very popular input device is the keyboard. The keyboard helps in inputting data to
the computer. The layout of the keyboard is like that of a traditional typewriter, although there are some
additional keys provided for performing additional functions.
Keyboards come in two main sizes, 84 keys or 101/102 keys, but 104-key and 108-key keyboards are also
available for Windows and the Internet.
The keys are the following:
1. Typing keys - the letter keys (A-Z) and digit keys (0-9), which generally use the same layout as a typewriter.
2. Numeric keypad - used to enter numeric data or for cursor movement. It generally consists of a set of 17 keys laid out in the same configuration used by most adding machines and calculators.
3. Function keys - the twelve function keys are arranged in a row along the top of the keyboard. Each function key has a unique meaning and is used for some specific purpose.
4. Control keys - these keys provide cursor and screen control. They include the four directional arrow keys as well as Home, End, Insert, Delete, Page Up, Page Down, Control (Ctrl), Alternate (Alt) and Escape (Esc).
5. Special purpose keys - the keyboard also contains some special purpose keys such as Enter, Shift, Caps Lock, Num Lock, Space bar, Tab, and Print Screen.

Mouse
The mouse is the most popular pointing device and a very common cursor-control device. It is a small palm-sized
box with a round ball at its base which senses the movement of the mouse and sends corresponding signals
to the CPU when the buttons are pressed.
Generally, it has two buttons, called the left and right buttons, with a scroll wheel between them. A mouse can
be used to control the position of the cursor on screen, but it cannot be used to enter text into the computer.
ADVANTAGES
Easy to use
Not very expensive
Moves the cursor faster than the arrow keys of keyboard.


Joystick
A joystick is also a pointing device, used to move the cursor position on a monitor screen. It is a stick
having a spherical ball at both its lower and upper ends. The lower spherical ball moves in a socket. The
joystick can be moved in all four directions.
The function of a joystick is similar to that of a mouse. It is mainly used in Computer Aided Design
(CAD) and for playing computer games.

Light Pen
Light pen is a pointing device, which is similar to a pen. It is used to select a displayed menu item or
draw pictures on the monitor screen. It consists of a photocell and an optical system placed in a small
tube.
When light pen's tip is moved over the monitor screen and pen button is pressed, its photocell sensing
element, detects the screen location and sends the corresponding signal to the CPU.


Track Ball
A track ball is an input device that is mostly used in notebook or laptop computers instead of a mouse.
It is a ball which is half inserted into the device; by moving fingers on the ball, the pointer can be moved.
Since the whole device is not moved, a track ball requires less space than a mouse. Track balls come in
various shapes, such as a ball, a button or a square.

Scanner
Scanner is an input device, which works more like a photocopy machine. It is used when some
information is available on a paper and it is to be transferred to the hard disc of the computer for further
manipulation.
Scanner captures images from the source which are then converted into the digital form that can be
stored on the disc. These images can be edited before they are printed.


Digitizer
A digitizer is an input device which converts analog information into digital form. A digitizer can convert
a signal from a television camera into a series of numbers that can be stored in a computer, and these can
be used by the computer to create a picture of whatever the camera had been pointed at.
A digitizer is also known as a Tablet or Graphics Tablet because it converts graphics and pictorial data into
binary inputs. A graphics tablet used as a digitizer is suited to fine drawing work and image-manipulation
applications.

9. Write a program on rotating a color cube.(10 marks) (Jun/Jul 13)
#include<stdlib.h>
#include<GL/glut.h>

GLfloat vertices[][3] = {{-1.0,-1.0,-1.0},{1.0,-1.0,-1.0},{1.0,1.0,-1.0},{-1.0,1.0,-1.0},
{-1.0,-1.0,1.0},{1.0,-1.0,1.0},{1.0,1.0,1.0},{-1.0,1.0,1.0}};
GLfloat colors[][3] = {{0.0,0.0,0.0},{1.0,0.0,0.0},{1.0,1.0,0.0},{0.0,1.0,0.0},
{0.0,0.0,1.0},{1.0,0.0,1.0},{1.0,1.0,1.0},{0.0,1.0,1.0}}; /* one colour per vertex */
static GLfloat theta[]={0.0,0.0,0.0};
GLint axis =1;
void polygon(int a, int b,int c,int d)
{
//draw a polygon via list of vertices
glBegin(GL_POLYGON);
glColor3fv(colors[a]);
glVertex3fv(vertices[a]);
glColor3fv(colors[b]);
glVertex3fv(vertices[b]);
glColor3fv(colors[c]);
glVertex3fv(vertices[c]);
glColor3fv(colors[d]);
glVertex3fv(vertices[d]);
glEnd();
}
void colorcube(void)
{ //map vertices to faces
polygon(0,3,2,1);
polygon(2,3,7,6);
polygon(0,4,7,3);
polygon(1,2,6,5);
polygon(4,5,6,7);
polygon(0,1,5,4);
}

void display(void)
{
// display callback: clear frame buffer and Z buffer, rotate cube and draw, swap buffers
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glLoadIdentity();

glRotatef(theta[0],1.0,0.0,0.0);
glRotatef(theta[1],0.0,1.0,0.0);
glRotatef(theta[2],0.0,0.0,1.0);
colorcube();
glFlush();
glutSwapBuffers();
}
void spinCube()
{
// idle callback: spin cube 1 degree about the selected axis
theta[axis] +=1.0;
if(theta[axis]>360.0)
theta[axis]-= 360.0;
glutPostRedisplay();
}
void mouse(int btn,int state,int x,int y)
{
// mouse callback: select an axis about which to rotate
if(btn== GLUT_LEFT_BUTTON && state ==GLUT_DOWN) axis =0;
if(btn== GLUT_MIDDLE_BUTTON && state ==GLUT_DOWN) axis =1;
if(btn== GLUT_RIGHT_BUTTON && state ==GLUT_DOWN) axis =2;
}
void myReshape(int w,int h)
{
glViewport(0,0,w,h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-2.0,2.0,-2.0 , 2.0, -10.0,10.0);
glMatrixMode(GL_MODELVIEW);
}
int main(int argc,char** argv)
{

glutInit(&argc,argv);
//need both double buffering and Zbuffer
glutInitDisplayMode(GLUT_DOUBLE|GLUT_RGB|GLUT_DEPTH);
glutInitWindowSize(500,500);
glutCreateWindow("Rotating a color cube ");
glutReshapeFunc(myReshape);
glutDisplayFunc(display);
glutIdleFunc(spinCube);
glutMouseFunc(mouse);
glEnable(GL_DEPTH_TEST); //Enable hidden surface removal
glutMainLoop();
return 0;
}
10. What is double buffering? How does openGL support this? Discuss. (06 Marks) (Dec/Jan
14)
Double buffering is one of the most basic methods of updating the display. This is often the first
buffering technique adopted by new coders to combat flickering.
Double buffering uses a memory bitmap as a buffer to draw onto. The buffer is then drawn onto
screen. If the objects were drawn directly to screen, the display could be updated mid-draw leaving
some objects out. When the buffer, with all the objects already on it, is drawn onto screen, the new
image will be drawn over the old one (screen should not be cleared). If the display gets updated before
the drawing is complete, there may be a noticeable shear in the image, but all objects will be drawn.
Shearing can be avoided by using vsync.
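In OpenGL/GLUT terms, the double buffer is requested when the window is created and the two buffers are exchanged at the end of every redraw; a minimal sketch:

/* at start-up: ask for a back buffer as well as a front buffer */
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);

void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
/* draw the complete frame into the (invisible) back buffer here */
glutSwapBuffers();   /* exchange front and back buffers: the finished frame
                        appears all at once, so no partially drawn image is visible */
}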

UNIT - 4
GEOMETRIC OBJECTS AND TRANSFORMATIONS 1
1. Explain the complete procedure of converting a world object frame into camera or eye
frame, using the model view matrix. (10 marks) (Dec/Jan 12)
World Frame
The world frame is right-handed, like the body frame.
Origin

On a ground point
Vectors
X - front
Y - left
Z - up
Angles
φ (Phi), aviation: roll
θ (Theta), aviation: pitch / nick (helicopters)
ψ (Psi), aviation: yaw
Coordinate transformation
To transform from the world frame to the body frame, a rotation through these three angles (roll, pitch and yaw) is applied.
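In OpenGL the change from the world (object) frame to the camera (eye) frame is carried by the model-view matrix; a minimal sketch (the eye position, look-at point and up direction are arbitrary example values):

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
/* place the camera at (2,2,2) looking at the origin with +y as "up";
   gluLookAt builds the world-to-eye transformation and multiplies it
   into the current model-view matrix */
gluLookAt(2.0, 2.0, 2.0,    /* eye position  */
          0.0, 0.0, 0.0,    /* look-at point */
          0.0, 1.0, 0.0);   /* up direction  */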


2. With respect to modeling discuss vertex arrays. (10 marks) (Dec/Jan 12)
The vertex specification commands described in section 2.7 accept data in almost any format, but their
use requires many command executions to specify even simple geometry. Vertex data may also be
placed into arrays that are stored in the client's address space. Blocks of data in these arrays may then
be used to specify multiple geometric primitives through the execution of a single GL command. The
client may specify up to six arrays: one each to store edge flags, texture coordinates, colors, color
indices, normals, and vertices. The commands

void EdgeFlagPointer ( sizei stride, void *pointer ) ;
void TexCoordPointer ( int size, enum type, sizei stride, void *pointer ) ;
void ColorPointer ( int size, enum type, sizei stride, void *pointer ) ;
void IndexPointer ( enum type, sizei stride, void *pointer ) ;

void NormalPointer ( enum type, sizei stride, void *pointer ) ;
void VertexPointer ( int size, enum type, sizei stride, void *pointer ) ;
describe the locations and organizations of these arrays. For each command, type specifies the data
type of the values stored in the array. Because edge flags are always type boolean, EdgeFlagPointer
has no type argument. size, when present, indicates the number of values per vertex that are stored in
the array. Because normals are always specified with three values, NormalPointer has no size
argument. Likewise, because color indices and edge flags are always specified with a single value,
IndexPointer and EdgeFlagPointer also have no size argument. Table 2.4 indicates the allowable
values for size and type (when present). For type the values BYTE, SHORT, INT, FLOAT, and
DOUBLE indicate types byte, short, int, float, and double, respectively; and the values
UNSIGNED_BYTE, UNSIGNED_SHORT, and UNSIGNED_INT indicate types ubyte, ushort, and
uint, respectively. The error INVALID_VALUE is generated if size is specified with a value other than
that indicated in the table.
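As a short sketch of how these calls fit together (the data and counts are arbitrary example values):

GLfloat verts[] = {0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  1.0, 1.0, 0.0};
GLfloat cols[]  = {1.0, 0.0, 0.0,  0.0, 1.0, 0.0,  0.0, 0.0, 1.0};
GLubyte indices[] = {0, 1, 2};

glEnableClientState(GL_VERTEX_ARRAY);     /* tell GL which arrays are in use     */
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, verts);   /* 3 floats per vertex, tightly packed */
glColorPointer(3, GL_FLOAT, 0, cols);

/* a single call draws the primitive by dereferencing the index array */
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, indices);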
3. Explain modeling a color cube in detail (10 marks) (Jun/Jul 12)
The cube is a simple 3D object, and there are a number of ways to model it. CSG systems
regard it as a single primitive. Another way is to treat the cube as an object defined by eight vertices.
We start modeling the cube by assuming that the vertices of the cube are available through an
array of vertices, i.e.,
GLfloat vertices[8][3] =
{{-1.0, -1.0, -1.0}, {1.0, -1.0, -1.0}, {1.0, 1.0, -1.0}, {-1.0, 1.0, -1.0}, {-1.0, -1.0, 1.0},
{1.0, -1.0, 1.0}, {1.0, 1.0, 1.0}, {-1.0, 1.0, 1.0}};
We can also use an object-oriented form by defining a 3D point type as follows:
typedef GLfloat point3[3];
The vertices of the cube can then be defined as follows:
point3 vertices[8] =
{{-1.0, -1.0, -1.0}, {1.0, -1.0, -1.0}, {1.0, 1.0, -1.0}, {-1.0, 1.0, -1.0}, {-1.0, -1.0, 1.0},
{1.0, -1.0, 1.0}, {1.0, 1.0, 1.0}, {-1.0, 1.0, 1.0}};
We can use the list of points to specify the faces of the cube. For example one face is
glBegin (GL_POLYGON);
glVertex3fv (vertices [0]);
glVertex3fv (vertices [3]);
glVertex3fv (vertices [2]);

glVertex3fv (vertices [1]);
glEnd ();
Similarly we can define other five faces of the cube.
3.2 Inward and outward pointing faces
When we are defining the 3D polygon, we have to be careful about the order in which we
specify the vertices, because each polygon has two sides. The graphics system can display
either or both of them. From the camera's perspective, we need to distinguish between the
two faces of a polygon. The order in which the vertices are specified provides this
information. In the above example we used the order 0,3,2,1 for the first face. The order
2,1,0,3 would be the same, because the final vertex in a polygon definition is always linked
back to the first, but the order 0,1,2,3 is different.
We call a face outward facing if the vertices are traversed in a counter-clockwise order
when the face is viewed from the outside.
In our example, the order 0,3,2,1 specifies the outward face of the cube, whereas the order
0,1,2,3 specifies the back face of the same polygon.


4. Explain affine transformations(10 marks) (Jun/Jul 12) (Dec/Jan 12) (Dec/Jan 14)
The essential power of affine transformations is that we only need to transform the endpoints of a
segment, and every point on the segment is transformed, because lines map to lines. Hence, the
transformation must be linear. What linear means in this context is the following:
f(\alpha p + \beta q) = \alpha f(p) + \beta f(q)
What this equation means is that the mapping f, applied to a linear combination of p and q, is the same
as the linear combination of f applied to p and q. Thus, in order to transform every point on a line
segment, it is sufficient to transform the endpoints. In particular, to transform a line drawn from A to B,
it is sufficient to transform A and B and then draw the line between the transformed
endpoints. Fortunately, matrix multiplication has this property of being linear.
5. List and explain different frame coordinates in openGL.(10 marks) (Jun/Jul 13)

Cartesian coordinate system
The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two
perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the
lines.
Rectangular coordinates
In three dimensions, three perpendicular planes are chosen and the three coordinates of a point are the
signed distances to each of the planes. This can be generalized to create n coordinates for any point in
n-dimensional Euclidean space. Depending on the direction and order of the coordinate axis the system
may be a right-hand or a left-hand system.
Polar coordinate system
Another common coordinate system for the plane is the polar coordinate system. A point is chosen as
the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line
through the pole whose angle with the polar axis is θ (measured counterclockwise from the axis to the
line). Then there is a unique point on this line whose signed distance from the origin is r for a given
number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by
many pairs of coordinates. For example (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the
same point. The pole is represented by (0, θ) for any value of θ.
Cylindrical and spherical coordinate systems
There are two common methods for extending the polar coordinate system to three dimensions. In the
cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is
added to the r and θ polar coordinates. Spherical coordinates take this a step further by converting the
pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ).
Homogeneous coordinates
A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and
y/z are the Cartesian coordinates of the point. This introduces an "extra" coordinate since only two are
needed to specify a point on the plane, but this system is useful in that it represents any point on the
projective plane without the use of infinity. In general, a homogeneous coordinate system is one where
only the ratios of the coordinates are significant and not the actual values.
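For example, the homogeneous triples (2, 3, 1), (4, 6, 2) and (6, 9, 3) all represent the same Cartesian point (2, 3), since only the ratios x/z = 2 and y/z = 3 matter.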
6. Define and discuss with diagram translation, rotation and scaling.(10 marks) (Jun/Jul 13)
Rotation
For rotation by an angle \theta clockwise about the origin (note that this definition of clockwise depends
on the x axis pointing right and the y axis pointing up; in, for example, SVG, where the y axis points
down, the matrices below must be swapped), the functional form
is x' = x\cos\theta + y\sin\theta and y' = -x\sin\theta + y\cos\theta. Written in matrix form, this becomes:
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
Similarly, for a rotation counter-clockwise about the origin, the functional form
is x' = x\cos\theta - y\sin\theta and y' = x\sin\theta + y\cos\theta, and the matrix form is:
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}

Scaling
For scaling (that is, enlarging or shrinking), we have x' = s_x x and y' = s_y y. The matrix form is:
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
When s_x s_y = 1, the matrix is a squeeze mapping and preserves areas in the plane.
If s_x or s_y is greater than 1 in absolute value, the transformation stretches the figures in the
corresponding direction; if less than 1, it shrinks them in that direction. Negative values of s_x or s_y also
flip (mirror) the points in that direction.
Applying this sort of scaling n times is equivalent to applying a single scaling with factors s_x^n and s_y^n.
More generally, any symmetric matrix defines a scaling along two perpendicular axes
(the eigenvectors of the matrix) by equal or distinct factors (the eigenvalues corresponding to those
eigenvectors).
Shearing
For shear mapping (visually similar to slanting), there are two possibilities.
A shear parallel to the x axis has x' = x + ky and y' = y. Written in matrix form, this becomes:
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
A shear parallel to the y axis has x' = x and y' = y + kx, which has matrix form:
\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}
Reflection
To reflect a vector about a line that goes through the origin, let l = (l_x, l_y) be a vector in the direction of
the line. The transformation matrix is:
\mathbf{A} = \frac{1}{l_x^2 + l_y^2} \begin{pmatrix} l_x^2 - l_y^2 & 2 l_x l_y \\ 2 l_x l_y & l_y^2 - l_x^2 \end{pmatrix}
Orthogonal projection
To project a vector orthogonally onto a line that goes through the origin, let u = (u_x, u_y) be a vector in the
direction of the line. Then use the transformation matrix:
\mathbf{A} = \frac{1}{u_x^2 + u_y^2} \begin{pmatrix} u_x^2 & u_x u_y \\ u_x u_y & u_y^2 \end{pmatrix}
As with reflections, the orthogonal projection onto a line that does not pass through the origin is an
affine, not linear, transformation.
Parallel projections are also linear transformations and can be represented simply by a matrix. However,
perspective projections are not, and to represent these with a matrix, homogeneous coordinates must be
used.

7. Explain the mathematical entities point, scalar and vector with examples for each.(06
Marks) (Dec/Jan 14)
Vector
Magnitude
Direction
NO position
Can be added, scaled, rotated
CG vectors: 2, 3 or 4 dimensions


Points
Location in coordinate system
Cannot add or scale
Subtract 2 points = vector



Vector-Point Relationship
Diff. b/w 2 points = vector
v = Q − P
Sum of point and vector = point
v + P = Q
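For example (values chosen only for illustration): with points P = (1, 2, 3) and Q = (4, 6, 3), the difference v = Q − P = (3, 4, 0) is a vector of magnitude 5; adding that vector back to the point P gives the point Q, and multiplying it by the scalar 2 gives the vector (6, 8, 0).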

8. Explain Bilinear interpolation method of assigning colors to points inside a
quadrilateral.(04 Marks) (Dec/Jan 14)
Algorithm
Suppose that we want to find the value of the unknown function f at the point P = (x, y). It is assumed that we know the value of f at the four points Q_11 = (x_1, y_1), Q_12 = (x_1, y_2), Q_21 = (x_2, y_1), and Q_22 = (x_2, y_2).
We first do linear interpolation in the x-direction. This yields

f(x, y_1) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{11}) + \frac{x - x_1}{x_2 - x_1} f(Q_{21}),

f(x, y_2) \approx \frac{x_2 - x}{x_2 - x_1} f(Q_{12}) + \frac{x - x_1}{x_2 - x_1} f(Q_{22}).

We proceed by interpolating in the y-direction:

f(x, y) \approx \frac{y_2 - y}{y_2 - y_1} f(x, y_1) + \frac{y - y_1}{y_2 - y_1} f(x, y_2).

This gives us the desired estimate of f(x, y).
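A minimal C sketch of this computation (the function and variable names are illustrative, not from the original notes):

/* Bilinear interpolation of f at (x, y) given samples f11 = f(x1,y1), f21 = f(x2,y1),
   f12 = f(x1,y2), f22 = f(x2,y2); assumes x1 != x2 and y1 != y2. */
float bilinear(float x, float y,
               float x1, float x2, float y1, float y2,
               float f11, float f21, float f12, float f22)
{
    float fy1 = ((x2 - x) * f11 + (x - x1) * f21) / (x2 - x1); /* interpolate in x at y1 */
    float fy2 = ((x2 - x) * f12 + (x - x1) * f22) / (x2 - x1); /* interpolate in x at y2 */
    return ((y2 - y) * fy1 + (y - y1) * fy2) / (y2 - y1);      /* interpolate in y       */
}

This is the weighting scheme the question refers to for assigning a color to an interior point of a quadrilateral from the colors given at its four corners.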



UNIT - 5
GEOMETRIC OBJECTS AND TRANSFORMATIONS 2
1. Write the translation matrices 3D translation, rotation and scaling and explain.(6 marks)
(Dec/Jan 13) (Jun/Jul 12) (Dec/Jan 12) (Dec/Jan 14)
Translation
In homogeneous coordinates, a translation by (t_x, t_y, t_z) is represented by
\begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}.
Scaling
A scaling by factors (s_x, s_y, s_z) along the coordinate axes is represented by
\begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
Rotation
The matrix to rotate by an angle \theta about the axis defined by the unit vector (l, m, n) is
\begin{bmatrix}
ll(1-\cos \theta)+\cos\theta & ml(1-\cos\theta)-n\sin\theta & nl(1-\cos\theta)+m\sin\theta\\
lm(1-\cos\theta)+n\sin\theta & mm(1-\cos\theta)+\cos\theta & nm(1-\cos\theta)-l\sin\theta \\
ln(1-\cos\theta)-m\sin\theta & mn(1-\cos\theta)+l\sin\theta & nn(1-\cos\theta)+\cos\theta
\end{bmatrix}.
Reflection
To reflect a point through a plane ax + by + cz = 0 (which goes through the origin), one can use
\mathbf{A} = \mathbf{I}-2\mathbf{NN}^T , where \mathbf{I} is the 3x3 identity matrix and
\mathbf{N} is the three-dimensional unit vector for the surface normal of the plane. If the L2 norm of
a, b, and c is unity, the transformation matrix can be expressed as:
\mathbf{A} = \begin{bmatrix} 1 - 2 a^2 & - 2 a b & - 2 a c \\ - 2 a b & 1 - 2 b^2 & - 2 b c \\ - 2 a c &
- 2 b c & 1 - 2c^2 \end{bmatrix}
Composing and inverting transformations
One of the main motivations for using matrices to represent linear transformations is that
transformations can then be easily composed (combined) and inverted.
Composition is accomplished by matrix multiplication. If A and B are the matrices of two linear
transformations, then the effect of applying first A and then B to a vector x is given by:
\mathbf{B}(\mathbf{A} \vec{x} ) = (\mathbf{BA}) \vec{x}
(This is called the associative property.) In other words, the matrix of the combined transformation A
followed by B is simply the product of the individual matrices. Note that the multiplication is done in
the opposite order from the English sentence: the matrix of "A followed by B" is BA, not AB. A
consequence of the ability to compose transformations by multiplying their matrices is that
transformations can also be inverted by simply inverting their matrices. So, A^{-1} represents the
transformation that "undoes" A.
2. What are vertex arrays? Explain how vertex arrays can be used to model a color cube. (8
marks) (Dec/Jan 13) (Jun/Jul 12)
Instead of specifying individual vertex data in immediate mode (between glBegin() and glEnd() pairs), you can store vertex data in a set of arrays, including vertex positions, normals, texture coordinates and color information. You can then draw a selection of geometric primitives by dereferencing the array elements with array indices.
Take a look at the following code to draw a cube with immediate mode.
Each face needs 6 glVertex*() calls to make its 2 triangles; for example, the front face is built from the v0-v1-v2 and v2-v3-v0 triangles. A cube has 6 faces, so the total number of glVertex*() calls is 36. If you also specify normals, texture coordinates and colors for the corresponding vertices, the number of OpenGL function calls increases further.
The other thing to notice is that the vertex v0 is shared by 3 adjacent faces: the front, right and top faces. In immediate mode, you have to provide this shared vertex 6 times, twice for each face, as shown in the code.
glBegin(GL_TRIANGLES); // draw a cube with 12 triangles

// front face =================
glVertex3fv(v0); // v0-v1-v2
glVertex3fv(v1);
glVertex3fv(v2);

glVertex3fv(v2); // v2-v3-v0
glVertex3fv(v3);
glVertex3fv(v0);

// right face =================
glVertex3fv(v0); // v0-v3-v4
glVertex3fv(v3);
glVertex3fv(v4);

glVertex3fv(v4); // v4-v5-v0
glVertex3fv(v5);
glVertex3fv(v0);

// top face ===================
glVertex3fv(v0); // v0-v5-v6
glVertex3fv(v5);
glVertex3fv(v6);

glVertex3fv(v6); // v6-v1-v0
glVertex3fv(v1);
glVertex3fv(v0);

... // draw other 3 faces

glEnd();

Using vertex arrays reduces the number of function calls and the redundant specification of shared vertices, so it can increase rendering performance. Three OpenGL functions draw from vertex arrays: glDrawArrays(), glDrawElements() and glDrawRangeElements(). A sketch using glDrawElements() follows.
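The following is a minimal sketch (the vertex coordinates are illustrative and only the same three faces as above are indexed) of drawing the cube from a vertex array with glDrawElements():

GLfloat vertices[] = { 1, 1, 1,  -1, 1, 1,  -1,-1, 1,   1,-1, 1,    /* v0, v1, v2, v3 */
                       1,-1,-1,   1, 1,-1,  -1, 1,-1,  -1,-1,-1 }; /* v4, v5, v6, v7 */

GLubyte indices[] = { 0,1,2,  2,3,0,    /* front face                    */
                      0,3,4,  4,5,0,    /* right face                    */
                      0,5,6,  6,1,0 };  /* top face (other faces omitted) */

glEnableClientState(GL_VERTEX_ARRAY);                        /* activate vertex array use    */
glVertexPointer(3, GL_FLOAT, 0, vertices);                   /* 3 floats per vertex, packed  */
glDrawElements(GL_TRIANGLES, 18, GL_UNSIGNED_BYTE, indices); /* 6 triangles, v0 shared       */
glDisableClientState(GL_VERTEX_ARRAY);

Each shared vertex such as v0 now appears once in the array and is referenced by index instead of being respecified for every triangle.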
3. Write a short note on current transformation matrix.(8 marks) (Dec/Jan 12) (Jun/Jul 13)
The Current Transformation Matrix (CTM)
The current transformation matrix is the matrix that is applied to any vertex that is defined subsequent to its
setting. If we change the CTM, we change the state of the system. The CTM is part of the
pipeline shown in figure below. The CTM is a 4 X 4 Matrix and it can be altered by a set of functions
provided by the graphics package.

We can perform the following replacement operations:
Initialization: C ← I
Post-multiplication:
C ← CT
C ← CS
C ← CR
C ← CM
Setting:
C ← T
C ← S
C ← R
C ← M
where C is the CTM
I is an identity matrix
T is a translation matrix
S is a scaling matrix
R is a rotation matrix
M is an arbitrary matrix
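A minimal sketch of how the CTM is altered through the OpenGL matrix stack (the numeric values are illustrative):

glMatrixMode(GL_MODELVIEW);          /* select the stack holding the CTM       */
glLoadIdentity();                    /* C <- I                                  */
glTranslatef(1.0f, 0.0f, 0.0f);      /* C <- CT                                 */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  /* C <- CR (45 degrees about the z axis)   */
glScalef(2.0f, 2.0f, 1.0f);          /* C <- CS                                 */
/* any vertex specified from this point on is transformed by C = T R S */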
4. What is transformation? Explain affine transformation.(12 marks) (Jun/Jul 13)
An affine transformation is any transformation that preserves collinearity (i.e., all points lying on a line initially still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation). In this sense, affine indicates a special class of projective transformations that do not move any objects from the affine space to the plane at infinity or conversely. An affine transformation is also called an affinity.
Geometric contraction, expansion, dilation, reflection, rotation, shear, similarity transformations, spiral
similarities, and translation are all affine transformations, as are their combinations. In general, an affine
transformation is a composition of rotations, translations, dilations, and shears.
While an affine transformation preserves proportions on lines, it does not necessarily preserve angles or
lengths. Any triangle can be transformed into any other by an affine transformation, so all triangles are
affine and, in this sense, affine is a generalization of congruent and similar. A particular example
combining rotation and expansion is the rotation-enlargement transformation







Separating the equations,


This can be also written as


where


The scale factor is then defined by


and the rotation angle by


An affine transformation of R^n is a map F: R^n -> R^n of the form F(p) = Ap + q for all p in R^n, where A is a linear transformation of R^n. If det(A) > 0, the transformation is orientation-preserving; if det(A) < 0, it is orientation-reversing.
5. How does instance transformation help in generating a scene? Explain. (06 Marks)
(Dec/Jan 14)
Start with unique object (a symbol)
Each appearance of object in model is an instance
Must scale, orient, position
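A minimal sketch of applying an instance transformation in OpenGL; draw_symbol() is a hypothetical routine that draws the prototype object, and the parameters px, py, pz, angle, sx, sy, sz are placeholders:

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                        /* save the current model-view matrix        */
glTranslatef(px, py, pz);              /* position the instance                     */
glRotatef(angle, 0.0f, 1.0f, 0.0f);    /* orient it (rotation about the y axis)     */
glScalef(sx, sy, sz);                  /* size it                                   */
draw_symbol();                         /* draw the prototype in its own frame       */
glPopMatrix();                         /* restore the matrix for the next instance  */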



UNIT - 6
VIEWING
1. What is canonical view volume? Explain the mapping of a given view volume to the
canonical form.(10 marks) (Dec/Jan 12)
The canonical view volumes are a 2x2x1 prism for parallel projections and the truncated right regular
pyramid for perspective projections. It is much easier to clip to these volumes than to an arbitrary view
volume, so view volumes are often transformed to one of these before clipping. It is also possible to
transform a perspective view volume to the canonical parallel view volume, so the same algorithm can
be used for clipping perspective scenes as for parallel scenes.


The two canonical view volumes.
EXAMPLE: To transform a parallel view volume to a canonical parallel view volume: Translate the
front centre of the view volume to the origin, then scale it to 2x2x1. Given a cubic view volume from
(Umin, Vmin, F) to (Umax, Vmax, B):
T = ( -(Umin + Umax)/2 , -(Vmin + Vmax)/2 , -F )

S = ( 2/(Umax - Umin) , 2/(Vmax - Vmin) , 1/(F - B) )

2. Derive equations for perspective projection and describe the specifications of a
perspective camera view in openGL. (10 marks) (Dec/Jan 12)
This is a more realistic projection as approximately used by cameras. Again we project to some plane,
but this time with rays starting from some `location' (on the side of the plane opposite to the objects).
The simplest situation is when the location is at the origin and the plane we project to is given by z = d. The projection is then given by

(x, y, z) -> ( d x / z , d y / z , d ).

Unfortunately this is no longer a linear mapping (because of the division by z). In order to describe it again by a matrix we consider homogeneous coordinates, i.e. we extend the coordinates of points in R^3 by appending a fourth coordinate 1. Conversely, we associate to each vector in R^4 (with non-vanishing last coordinate) the point in R^3 which we get by dividing the first 3 coordinates by the fourth. More generally, we identify two vectors v, w in R^4 if they are collinear, i.e. there exists a number \lambda such that v = \lambda w. Up to this identification we can write this projection as

\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1/d & 0 \end{pmatrix}

Similar formulas hold for arbitrary projection planes (and location).
The projection onto the plane is given by:

or, equivalently, by

If we translate 0 and the plane to the projection becomes








For this is the orthographic projection from above.
Let the projection plane be given in normal form by , where is a unit normal
vector to the plane and is the signed distance to 0, i.e. is positive if points from 0 to the plane and
negative otherwise. The corresponding perspective projection is given by ,
where , i.e. , thus

In homogeneous coordinates this can be described by the matrix

Since we have not infinite large images but some finite rectangle we have to clip the image, and hence
we may clip the objects to some prism or pyramid.


3. Explain the following (10 marks) (Jun/Jul 12)
i. gluLookAt
void gluLookAt( GLdouble eyeX,
GLdouble eyeY,
GLdouble eyeZ,
GLdouble centerX,
GLdouble centerY,
GLdouble centerZ,
GLdouble upX,
GLdouble upY,
GLdouble upZ);

Parameters
eyeX, eyeY, eyeZ
Specifies the position of the eye point.
centerX, centerY, centerZ
Specifies the position of the reference point.
upX, upY, upZ
Specifies the direction of the up vector.
Description
gluLookAt creates a viewing matrix derived from an eye point, a reference point indicating the center
of the scene, and an UP vector.
The matrix maps the reference point to the negative z axis and the eye point to the origin. When a
typical projection matrix is used, the center of the scene therefore maps to the center of the viewport.
Similarly, the direction described by the UP vector projected onto the viewing plane is mapped to the
positive y axis so that it points upward in the viewport. The UP vector must not be parallel to the line
of sight from the eye point to the reference point.
ii. gluPerspective
void gluPerspective( GLdouble fovy,
GLdouble aspect,
GLdouble zNear,
GLdouble zFar);
Parameters

fovy
Specifies the field of view angle, in degrees, in the y direction.

aspect
Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the
ratio of x (width) to y (height).

zNear
Specifies the distance from the viewer to the near clipping plane (always positive).

zFar
Specifies the distance from the viewer to the far clipping plane (always positive).

Description

gluPerspective specifies a viewing frustum into the world coordinate system. In general, the aspect
ratio in gluPerspective should match the aspect ratio of the associated viewport. For example, aspect =
2.0 means the viewer's angle of view is twice as wide in x as it is in y. If the viewport is twice as wide
as it is tall, it displays the image without distortion.

The matrix generated by gluPerspective is multipled by the current matrix, just as if glMultMatrix
were called with the generated matrix. To load the perspective matrix onto the current matrix stack
instead, precede the call to gluPerspective with a call to glLoadIdentity.

Given f defined as f = cotangent(fovy / 2), the generated matrix is

f/aspect   0    0                                   0
0          f    0                                   0
0          0    (zFar + zNear)/(zNear - zFar)       (2 * zFar * zNear)/(zNear - zFar)
0          0   -1                                   0
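A minimal sketch of a typical camera setup using both calls (width and height are assumed to hold the window dimensions; all other values are illustrative):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (GLdouble)width / (GLdouble)height, 1.0, 100.0); /* fovy, aspect, near, far */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 2.0, 5.0,    /* eye position              */
          0.0, 0.0, 0.0,    /* reference (look-at) point */
          0.0, 1.0, 0.0);   /* up vector                 */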

4. Write a note on hidden surface removal.(10 marks) (Jun/Jul 12)
The elimination of parts of solid objects that are obscured by others is called hidden-surface removal.
(Hidden-line removal, which does the same job for objects represented as wireframe skeletons, is a bit
trickier.)
Methods can be categorized as:
Object Space Methods
These methods examine objects, faces, edges etc. to determine which are visible. The complexity
depends upon the number of faces, edges etc. in all the objects.
Image Space Methods
These methods examine each pixel in the image to determine which face of which object should be
displayed at that pixel. The complexity depends upon the number of faces and the number of pixels to
be considered.
Z Buffer
The easiest way to achieve hidden-surface removal is to use the depth buffer (sometimes called a z-
buffer). A depth buffer works by associating a depth, or distance from the viewpoint, with each pixel
on the window. Initially, the depth values for all pixels are set to the largest possible distance, and then
the objects in the scene are drawn in any order.
Graphical calculations in hardware or software convert each surface that's drawn to a set of pixels on
the window where the surface will appear if it isn't obscured by something else. In addition, the
distance from the eye is computed. With depth buffering enabled, before each pixel is drawn, a
comparison is done with the depth value already stored at the pixel.
If the new pixel is closer to the eye than what's there, the new pixel's colour and depth values replace
those that are currently written into the pixel. If the new pixel's depth is greater than what's currently
there, the new pixel would be obscured, and the colour and depth information for the incoming pixel is
discarded. Since information is discarded rather than used for drawing, hidden-surface removal can
increase your performance.
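A minimal sketch of enabling the depth buffer in an OpenGL/GLUT program (one common way, not the only one):

glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);  /* request a depth buffer       */
glEnable(GL_DEPTH_TEST);                                   /* turn on the depth test       */

/* in the display callback: */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);        /* reset color and depth values */
/* draw the scene in any order; closer fragments replace farther ones */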
5. Derive the perspective projection matrix.(8 marks) (Dec/Jan 13)
Perspective projection depends on the relative position of the eye and the viewplane. In the usual arrangement the eye lies on the z-axis and the viewplane is the xy plane. To determine the projection of a 3D point, connect the point and the eye by a straight line; the point where this line intersects the viewplane is the projected point.
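A worked sketch of this construction, assuming the eye at (0, 0, d) on the z-axis and the viewplane z = 0 (one arrangement consistent with the description above): by similar triangles,

x_p = \frac{d \, x}{d - z}, \qquad y_p = \frac{d \, y}{d - z},

which in homogeneous coordinates is produced by the matrix

M = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & -1/d & 1 \end{bmatrix},

since M (x, y, z, 1)^T = (x, y, 0, 1 - z/d)^T and dividing by the last component gives (x_p, y_p, 0, 1)^T.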


6. Explain glFrustrum API.(8 marks) (Dec/Jan 13)
void glFrustum( GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top,
GLdouble nearVal,
GLdouble farVal);
Parameters
left, right
Specify the coordinates for the left and right vertical clipping planes.
bottom, top
Specify the coordinates for the bottom and top horizontal clipping planes.
nearVal, farVal
Specify the distances to the near and far depth clipping planes. Both distances must be positive.
Description
glFrustum describes a perspective matrix that produces a perspective projection. The current matrix
(see glMatrixMode) is multiplied by this matrix and the result replaces the current matrix, as
if glMultMatrixwere called with the following matrix as its argument:
2*nearVal/(right - left)    0                           A     0
0                           2*nearVal/(top - bottom)    B     0
0                           0                           C     D
0                           0                          -1     0

A = (right + left) / (right - left)
B = (top + bottom) / (top - bottom)
C = -(farVal + nearVal) / (farVal - nearVal)
D = -(2 * farVal * nearVal) / (farVal - nearVal)
Typically, the matrix mode is GL_PROJECTION, and (left, bottom, -nearVal) and (right, top, -nearVal) specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, assuming that the eye is located at (0, 0, 0). -farVal specifies the location of the far clipping plane. Both nearVal and farVal must be positive. Use glPushMatrix and glPopMatrix to save and restore the current matrix stack.
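A minimal sketch (requires <math.h>) of building a symmetric frustum with glFrustum that is equivalent to gluPerspective(fovy, aspect, zNear, zFar); fovy (in degrees), aspect, zNear and zFar are assumed to be defined by the caller:

GLdouble half_h = zNear * tan(fovy * M_PI / 360.0);  /* half the height of the near plane */
GLdouble half_w = half_h * aspect;                   /* half the width of the near plane  */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-half_w, half_w, -half_h, half_h, zNear, zFar);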
7. Bring out the difference between object space and image space algorithm. (4 marks)
(Dec/Jan 13)
In 3D computer graphics, the image ultimately stored in the frame buffer is a two-dimensional array of pixels computed from three-dimensional scene data. This conversion takes place after many calculations such as hidden surface removal, shadow generation and Z-buffering. These calculations can be done in Image Space or Object
Space. Algorithms used in image space for hidden surface removal are much more efficient than object
space algorithms. But object space algorithms for hidden surface removal are much more functional than
image space algorithms for the same. The combination of these two algorithms gives the best output.
Image Space
The representation of graphics in the form of Raster or rectangular pixels has now become very popular.
Raster display is very flexible as they keep on refreshing the screen by taking the values stored in frame
buffer. Image space algorithms are simple and efficient as their data structure is very similar to that of
frame buffer. The most commonly used image space algorithm is Z buffer algorithm that is used to
define the values of z coordinate of the object.
Object Space
Object space algorithms have the advantage of retaining the relevant data, and because of this ability the interaction of the algorithm with the object becomes easier. The calculation done for the color is done only once. Object space algorithms also allow shadow generation to increase the depth of the 3-dimensional objects on the screen. These algorithms are incorporated in software, and it is difficult to implement them in hardware.
8. What are the two types of simple projection? List and explain the details. (10 marks)
(Jun/Jul 13)
Parallel projection
In parallel projection,the lines of sight from the object to the projection plane are parallel to each
other. Within parallel projection there is an ancillary category known as "pictorials". Pictorials show
an image of an object as viewed from a skew direction in order to reveal all three directions (axes) of space in one picture. Because pictorial projections innately contain this distortion, in the instrument drawing of pictorials some liberties may be taken for economy of effort and best effect.
Orthographic projection
The Orthographic projection is derived from the principles of descriptive geometry and is a two-
dimensional representation of a three-dimensional object. It is a parallel projection (the lines of
projection are parallel both in reality and in the projection plane). It is the projection type of choice for
working drawings.
Pictorials
Within parallel projection there is a subcategory known as Pictorials. Pictorials show an image of an
object as viewed from a skew direction in order to reveal all three directions (axes) of space in one
picture. Parallel projection pictorial instrument drawings are often used to approximate graphical
perspective projections, but there is attendant distortion in the approximation. Because pictorial
projections inherently have this distortion, in the instrument drawing of pictorials, great liberties may
then be taken for economy of effort and best effect. Parallel projection pictorials rely on the technique
of axonometric projection ("to measure along axes").
Axonometric projection
Axonometric projection is a type of parallel projection used to create a pictorial drawing of an object,
where the object is rotated along one or more of its axes relative to the plane of projection.
There are three main types of axonometric projection: isometric, dimetric, and trimetric projection.
The three axonometric views.
Isometric projection
In isometric pictorials (for protocols see isometric projection), the direction of viewing is such that the three axes of space appear equally foreshortened, and there is a common angle of 120° between them.
As the distortion caused by foreshortening is uniform the proportionality of all sides and lengths are
preserved, and the axes share a common scale. This enables measurements to be read or taken directly
from the drawing.
Dimetric projection
In dimetric pictorials (for protocols see dimetric projection), the direction of viewing is such that two
of the three axes of space appear equally foreshortened, of which the attendant scale and angles of
presentation are determined according to the angle of viewing; the scale of the third direction (vertical)
is determined separately. Approximations are common in dimetric drawings.
Trimetric projection
In trimetric pictorials (for protocols see trimetric projection), the direction of viewing is such that all
of the three axes of space appear unequally foreshortened. The scale along each of the three axes and
the angles among them are determined separately as dictated by the angle of viewing. Approximations
in Trimetric drawings are common.
Oblique projection
In oblique projections the parallel projection rays are not perpendicular to the viewing plane as with
orthographic projection, but strike the projection plane at an angle other than ninety degrees. In both
orthographic and oblique projection, parallel lines in space appear parallel on the projected image.
Because of its simplicity, oblique projection is used exclusively for pictorial purposes rather than for
formal, working drawings. In an oblique pictorial drawing, the displayed angles among the axes as
well as the foreshortening factors (scale) are arbitrary. The distortion created thereby is usually
attenuated by aligning one plane of the imaged object to be parallel with the plane of projection
thereby creating a true shape, full-size image of the chosen plane. Special types of oblique projections
are:


9. Derive matrix representation for prospective projection, with diagram if necessary.(10
marks) (Jun/Jul 13) (Dec/Jan 14)

In perspective projection, a 3D point in a truncated pyramid frustum (eye coordinates) is mapped to a
cube (NDC); the range of x-coordinate from [l, r] to [-1, 1], the y-coordinate from [b, t] to [-1, 1] and the
z-coordinate from [n, f] to [-1, 1].
Note that the eye coordinates are defined in the right-handed coordinate system, but NDC uses the left-
handed coordinate system. That is, the camera at the origin is looking along -Z axis in eye space, but it is
looking along +Z axis in NDC. Since glFrustum() accepts only positive values
of near and far distances, we need to negate them during the construction of GL_PROJECTION matrix.
In OpenGL, a 3D point in eye space is projected onto the near plane (projection plane). The following diagrams show how a point (x_e, y_e, z_e) in eye space is projected to (x_p, y_p, z_p) on the near plane.

Top View of Frustum

Side View of Frustum

From the top view of the frustum, the x-coordinate of eye space, x_e, is mapped to x_p, which is calculated by using the ratio of similar triangles:

x_p = \frac{n \cdot x_e}{-z_e}

From the side view of the frustum, y_p is calculated in a similar way:

y_p = \frac{n \cdot y_e}{-z_e}

Note that both x_p and y_p depend on z_e; they are inversely proportional to -z_e. In other words, they are both divided by -z_e. This is the first clue to constructing the GL_PROJECTION matrix. After the eye coordinates are transformed by multiplying by the GL_PROJECTION matrix, the clip coordinates are still homogeneous coordinates; they finally become normalized device coordinates (NDC) after division by the w-component of the clip coordinates. (See more details on OpenGL Transformation.)

Therefore, we can set the w-component of the clip coordinates to -z_e, and the 4th row of the GL_PROJECTION matrix becomes (0, 0, -1, 0).
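Carrying this derivation through for all four rows gives the standard result (a sketch of the matrix built by glFrustum(l, r, b, t, n, f), consistent with the glFrustum description quoted earlier in this document):

\begin{bmatrix} \frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\ 0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\ 0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{bmatrix}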


10. List the differences between perspective projection and parallel projection. (04 Marks)
(Dec/Jan 14)
Drawing is a visual art that has been used by man for self-expression throughout history. It uses
pencils, pens, colored pencils, charcoal, pastels, markers, and ink brushes to mark different types of
medium such as canvas, wood, plastic, and paper.
It involves the portrayal of objects on a flat surface such as the case in drawing on a piece of paper or a
canvas and involves several methods and materials. It is the most common and easiest way of
recreating objects and scenes on a two-dimensional medium.
To create a realistic reproduction of scenes and objects, drawing uses two types of projection: parallel
projection and perspective projection. What humans usually see is perspective projection. We see a
horizon wherein everything looks small, and we see bigger things when they are nearer to us.
Perspective projection is seeing things larger when they're up close and smaller at a distance. It is a three-dimensional projection of objects on a two-dimensional medium such as paper. It allows an artist to produce a visual reproduction of an object which resembles the real one.
The center of projection in a perspective projection is a point which is at a distance from the viewer or
artist. Objects located at this point appear smaller and will appear bigger when they are drawn closer to
the viewer. Perspective projection produces a more realistic and detailed representation of an object
allowing artists to create scenes that closely resemble the real thing. The other type of projection which
is also used aside from perspective projection is parallel projection.
Parallel projection, on the other hand, resembles seeing objects which are located far from the viewer
through a telescope. It works by making light rays entering the eyes parallel, thus, doing away with the
effect of depth in the drawing. Objects produced using parallel projection do not appear larger when
they are near or smaller when they are far. It is very useful in architecture. However, when
measurements are involved, perspective projection is best.
It provides an easier way of reproducing objects on any medium while having no definite center of
projection. When it is not possible to create perspective projection, especially in cases where its use
can cause flaws or distortions, parallel projection is used.

11. Explain the perspective projection and parallel projection along with their openGL
functions. (08 Marks) (Dec/Jan 14)
glOrtho multiply the current matrix with an orthographic matrix

C Specification
void glOrtho( GLdouble left,
GLdouble right,
GLdouble bottom,
GLdouble top,
GLdouble nearVal,
GLdouble farVal);
Parameters

left, right
Specify the coordinates for the left and right vertical clipping planes.

bottom, top
Specify the coordinates for the bottom and top horizontal clipping planes.

nearVal, farVal
Specify the distances to the nearer and farther depth clipping planes. These values are negative if the
plane is to be behind the viewer.

Description

glOrtho describes a transformation that produces a parallel projection. The current matrix (see
glMatrixMode) is multiplied by this matrix and the result replaces the current matrix, as if
glMultMatrix were called with the following matrix as its argument:

2/(right - left)    0                    0                          tx
0                   2/(top - bottom)     0                          ty
0                   0                   -2/(farVal - nearVal)       tz
0                   0                    0                          1

where

tx = -(right + left) / (right - left)
ty = -(top + bottom) / (top - bottom)
tz = -(farVal + nearVal) / (farVal - nearVal)
Typically, the matrix mode is GL_PROJECTION, and (left, bottom, -nearVal) and (right, top, -nearVal) specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, respectively, assuming that the eye is located at (0, 0, 0). -farVal specifies the location of the far clipping plane. Both nearVal and farVal can be either positive or negative. Use glPushMatrix and glPopMatrix to save and restore the current matrix stack.
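A minimal sketch of setting up a parallel (orthographic) projection with glOrtho; the 2 x 2 x 2 view volume shown is illustrative:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);  /* left, right, bottom, top, near, far */
glMatrixMode(GL_MODELVIEW);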
gluPerspective set up a perspective projection matrix

C Specification

void gluPerspective( GLdouble fovy,
GLdouble aspect,
GLdouble zNear,
GLdouble zFar);
Parameters

fovy
Specifies the field of view angle, in degrees, in the y direction.

aspect
Specifies the aspect ratio that determines the field of view in the x direction. The aspect ratio is the
ratio of x (width) to y (height).

zNear
Specifies the distance from the viewer to the near clipping plane (always positive).

zFar
Specifies the distance from the viewer to the far clipping plane (always positive).

Description

gluPerspective specifies a viewing frustum into the world coordinate system. In general, the aspect
ratio in gluPerspective should match the aspect ratio of the associated viewport. For example, aspect =
2.0 means the viewer's angle of view is twice as wide in x as it is in y. If the viewport is twice as wide
as it is tall, it displays the image without distortion.
The matrix generated by gluPerspective is multipled by the current matrix, just as if glMultMatrix were
called with the generated matrix. To load the perspective matrix onto the current matrix stack instead,
precede the call to gluPerspective with a call to glLoadIdentity.
Given f defined as f = cotangent(fovy / 2), the generated matrix is

f/aspect   0    0                                   0
0          f    0                                   0
0          0    (zFar + zNear)/(zNear - zFar)       (2 * zFar * zNear)/(zNear - zFar)
0          0   -1                                   0
UNIT - 7
LIGHTING AND SHADING
1. Explain phong lighting model. Indicate the advantages and disadvantages. (10 marks)
(Dec/Jan 12) (Dec/Jan 14)
The Phong reflection model (also called Phong illumination or Phong lighting) is an empirical model
of the local illumination of points on a surface. In 3D computer graphics, it is sometimes ambiguously
referred to as Phong shading, in particular if the model is used in combination with the interpolation
method of the same name and in the context of pixel shaders or other places where a lighting
calculation can be referred to as shading.

Phong reflection is an empirical model of local illumination. It describes the way a surface reflects
light as a combination of the diffuse reflection of rough surfaces with the specular reflection of shiny
surfaces. It is based on Bui Tuong Phong's informal observation that shiny surfaces have small intense
specular highlights, while dull surfaces have large highlights that fall off more gradually. The model
also includes an ambient term to account for the small amount of light that is scattered about the entire
scene.
For each light source in the scene, components i_s and i_d are defined as the intensities (often as RGB
values) of the specular and diffuse components of the light sources respectively. A single term i_a
controls the ambient lighting; it is sometimes computed as a sum of contributions from all light
sources.
For each material in the scene, the following parameters are defined:
k_s, which is a specular reflection constant, the ratio of reflection of the specular term of
incoming light,
k_d, which is a diffuse reflection constant, the ratio of reflection of the diffuse term of
incoming light (Lambertian reflectance),
k_a, which is an ambient reflection constant, the ratio of reflection of the ambient term present
in all points in the scene rendered, and
\alpha, which is a shininess constant for this material, which is larger for surfaces that are
smoother and more mirror-like. When this constant is large the specular highlight is small.
Furthermore, we have
\mathrm{lights}, which is the set of all light sources,
\hat{L}_m, which is the direction vector from the point on the surface toward each light source
(m specifies the light source),
\hat{N}, which is the normal at this point on the surface,
\hat{R}_m, which is the direction that a perfectly reflected ray of light would take from this
point on the surface, and
\hat{V}, which is the direction pointing towards the viewer (such as a virtual camera).
Then the Phong reflection model provides an equation for computing the illumination of each
surface point I_p:
I_p = k_a i_a + \sum_\mathrm{m \; \in \; lights} (k_d (\hat{L}_m \cdot \hat{N}) i_{m,d} +
k_s (\hat{R}_m \cdot \hat{V})^{\alpha}i_{m,s}).
where the direction vector \hat{R}_m is calculated as the reflection of \hat{L}_m on the
surface characterized by the surface normal \hat{N} using:
\hat{R}_m = 2(\hat{L}_m\cdot \hat{N})\hat{N} - \hat{L}_m
and the hats indicate that the vectors are normalized. The diffuse term is not affected by the
viewer direction (\hat{V}). The specular term is large only when the viewer direction
(\hat{V}) is aligned with the reflection direction \hat{R}_m. Their alignment is measured by
the \alpha power of the cosine of the angle between them. The cosine of the angle between the
normalized vectors \hat{R}_m and \hat{V} is equal to their dot product. When \alpha is large,
in the case of a nearly mirror-like reflection, the specular highlight will be small, because any
viewpoint not aligned with the reflection will have a cosine less than one which rapidly
approaches zero when raised to a high power.
Although the above formulation is the common way of presenting the Phong reflection model, each term should only be included if the term's dot product is positive. (Additionally, the specular term should only be included if the dot product of the diffuse term is positive.) When the color is represented as RGB values, as is often the case in computer graphics, this equation is typically modeled separately for the R, G and B intensities, allowing different reflection constants k_a, k_d and k_s for the different color channels.
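A minimal C sketch of evaluating this model for one light source and one color channel; all vectors are assumed to be unit length, and the clamping follows the note above about non-positive dot products:

#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* ka, kd, ks: material constants; alpha: shininess; ia, id, is: light intensities.
   L: direction to the light, N: surface normal, V: direction to the viewer.      */
float phong(float ka, float kd, float ks, float alpha,
            float ia, float id, float is, vec3 L, vec3 N, vec3 V)
{
    float LdotN = dot(L, N);
    float diffuse = 0.0f, specular = 0.0f;
    if (LdotN > 0.0f) {
        diffuse = kd * LdotN * id;
        /* reflection of L about N: R = 2(L.N)N - L */
        vec3 R = { 2.0f*LdotN*N.x - L.x, 2.0f*LdotN*N.y - L.y, 2.0f*LdotN*N.z - L.z };
        float RdotV = dot(R, V);
        if (RdotV > 0.0f)
            specular = ks * powf(RdotV, alpha) * is;
    }
    return ka * ia + diffuse + specular;
}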

2. What are the different methods available for shading a polygon? Briefly discuss any 2 of
them. ( 10 marks) (Dec/Jan 12) (Jun/Jul 13)
Constant (or Flat) Shading
Calculate normal (how?)
Assume L.N and R.V constant (light & viewer at infinity)
Calculate IRED, IGREEN, IBLUE using Phong reflection model
Use scan line conversion to fill polygon
Gouraud Shading
Gouraud shading attempts to smooth out the shading across the polygon facets
Begin by calculating the normal at each vertex
Phong Shading
Phong shading has a similar first step, in that vertex normals are calculated - typically as average of
normals of surrounding faces
3. Write a program segment using structures to represent meshes of quadrilaterala and
shade them. (10 marks) (Jun/Jul 12)

Objects created with polygon meshes must store different types of elements. These include vertices,
edges, faces, polygons and surfaces. In many applications, only vertices, edges and either faces or
polygons are stored. A renderer may support only 3-sided faces, so polygons must be constructed of
many of these, as shown in Figure 1. However, many renderers either support quads and higher-sided
polygons, or are able to convert polygons to triangles on the fly, making it unnecessary to store a mesh
in a triangulated form. Also, in certain applications like head modeling, it is desirable to be able to
create both 3- and 4-sided polygons.
A vertex is a position along with other information such as color, normal vector and texture
coordinates. An edge is a connection between two vertices. A face is a closed set of edges, in which a
triangle face has three edges, and a quad face has four edges. A polygon is a coplanar set of faces. In
systems that support multi-sided faces, polygons and faces are equivalent. However, most rendering
hardware supports only 3- or 4-sided faces, so polygons are represented as multiple faces.
Mathematically a polygonal mesh may be considered an unstructured grid, or undirected graph, with
additional properties of geometry, shape and topology.
Surfaces, more often called smoothing groups, are useful, but not required to group smooth regions.
Consider a cylinder with caps, such as a soda can. For smooth shading of the sides, all surface normals
must point horizontally away from the center, while the normals of the caps must point straight up and
down. Rendered as a single, Phong-shaded surface, the crease vertices would have incorrect normals.
Thus, some way of determining where to cease smoothing is needed to group smooth parts of a mesh,
just as polygons group 3-sided faces. As an alternative to providing surfaces/smoothing groups, a
mesh may contain other data for calculating the same data, such as a splitting angle (polygons with
normals above this threshold are either automatically treated as separate smoothing groups or some
technique such as splitting or chamfering is automatically applied to the edge between them).
Additionally, very high resolution meshes are less subject to issues that would require smoothing
groups, as their polygons are so small as to make the need irrelevant. Further, another alternative
exists in the possibility of simply detaching the surfaces themselves from the rest of the mesh.
Renderers do not attempt to smooth edges across noncontiguous polygons.
Mesh format may or may not define other useful data. Groups may be defined which define separate
elements of the mesh and are useful for determining separate sub-objects for skeletal animation or
separate actors for non-skeletal animation. Generally materials will be defined, allowing different
portions of the mesh to use different shaders when rendered. Most mesh formats also suppose some
form of UV coordinates which are a separate 2d representation of the mesh "unfolded" to show what
portion of a 2-dimensional texture map to apply to different polygons of the mesh.
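The question asks for a program segment; a minimal sketch (one possible layout, not prescribed by the text) of structures for a quadrilateral mesh with per-vertex normals, and a routine that submits it for shading:

typedef struct { GLfloat x, y, z; } Vertex;
typedef struct { GLfloat nx, ny, nz; } Normal;
typedef struct { int v[4]; } Quad;             /* indices of the 4 corner vertices */

typedef struct {
    int nvertices, nquads;
    Vertex *vertices;
    Normal *normals;                           /* one normal per vertex            */
    Quad   *quads;
} QuadMesh;

void draw_quad_mesh(const QuadMesh *m)
{
    glBegin(GL_QUADS);
    for (int i = 0; i < m->nquads; ++i) {
        for (int j = 0; j < 4; ++j) {
            int k = m->quads[i].v[j];
            glNormal3f(m->normals[k].nx, m->normals[k].ny, m->normals[k].nz); /* for lighting */
            glVertex3f(m->vertices[k].x, m->vertices[k].y, m->vertices[k].z);
        }
    }
    glEnd();
}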
4. Explain the different light sources.(5 marks) (Jun/Jul 12) (Jun/Jul 13) (Dec/Jan 14)
There are many sources of light. The most common light sources are thermal: a body at a given
temperature emits a characteristic spectrum of black-body radiation. A simple thermal source is
sunlight, the radiation emitted by the chromosphere of the Sun at around 6,000 Kelvin peaks in the
visible region of the electromagnetic spectrum when plotted in wavelength units and roughly 44% of
sunlight energy that reaches the ground is visible. Another example is incandescent light bulbs, which
emit only around 10% of their energy as visible light and the remainder as infrared. A common
thermal light source in history is the glowing solid particles in flames, but these also emit most of their
radiation in the infrared, and only a fraction in the visible spectrum. The peak of the blackbody
spectrum is in the deep infrared, at about 10 micrometer wavelength, for relatively cool objects like
human beings. As the temperature increases, the peak shifts to shorter wavelengths, producing first a
red glow, then a white one, and finally a blue-white colour as the peak moves out of the visible part of
the spectrum and into the ultraviolet. These colours can be seen when metal is heated to "red hot" or
"white hot". Blue-white thermal emission is not often seen, except in stars (the commonly seen pure-
blue colour in a gas flame or a welder's torch is in fact due to molecular emission, notably by CH
radicals (emitting a wavelength band around 425 nm, and is not seen in stars or pure thermal
radiation).
Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum
of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as
neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself; for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated,
as in a laser or a microwave maser.
Deceleration of a free charged particle, such as an electron, can produce visible radiation: cyclotron
radiation, synchrotron radiation, and bremsstrahlung radiation are all examples of this. Particles
moving through a medium faster than the speed of light in that medium can produce visible Cherenkov
radiation.
Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is
called bioluminescence. For example, fireflies produce light by this means, and boats moving through
water can disturb plankton which produce a glowing wake.
Certain substances produce light when they are illuminated by more energetic radiation, a process
known as fluorescence. Some substances emit light slowly after excitation by more energetic
radiation. This is known as phosphorescence.
Phosphorescent materials can also be excited by bombarding them with subatomic particles.
Cathodoluminescence is one example. This mechanism is used in cathode ray tube television sets and
computer monitors.
5. What are the various methods available for specifying material properties in Opengl. (10
marks)(Dec/Jan 13) (Dec/Jan 14)
GLfloat material_diffuse[] = { 1, 0, 0, 1 };
GLfloat material_specular[] = { 1, 1, 1, 1 };
GLfloat material_shininess[] = { 100 };
glMaterialfv(GL_FRONT, GL_DIFFUSE, material_diffuse);
glMaterialfv(GL_FRONT, GL_SPECULAR, material_specular);
glMaterialfv(GL_FRONT, GL_SHININESS, material_shininess);
A material can have separate diffuse color and specular color
Diffuse color is the "color" we tend to think of
Specular color is the color of highlights
Specular color tends to be white for plastics, same as diffuse color for metals
An OpenGL material also has a "shininess" - determines width of specular peak
Other fancy stuff settable with glMaterialfv, including separate color for front and back. Another
possibility - glColorMaterial - lets you vary material parameters across the surface by calling
glColor3f
6. Explain diffuse and specular reflections. (5 marks) (Jun/Jul 12)
Reflection off of smooth surfaces such as mirrors or a calm body of water leads to a type of reflection
known as specular reflection. Reflection off of rough surfaces such as clothing, paper, and the asphalt
roadway leads to a type of reflection known as diffuse reflection. Whether the surface is
microscopically rough or smooth has a tremendous impact upon the subsequent reflection of a beam of
light. The diagram below depicts two beams of light incident upon a rough and a smooth surface.

A light beam can be thought of as a bundle of individual light rays which are traveling parallel to each
other. Each individual light ray of the bundle follows the law of reflection. If the bundle of light rays is
incident upon a smooth surface, then the light rays reflect and remain concentrated in a bundle upon
leaving the surface. On the other hand, if the surface is microscopically rough, the light rays will reflect
and diffuse in many different directions.
7. Explain shading of the sphere model. (10 marks) (Dec/Jan 13)
Smooth shading is accomplished via linear interpolation and the method is known as Gouraud shading.
The shading model is set via a call to the function glShadeModel() with the appropriate argument. If no
shade model is set, smooth shading is used by default. In this lesson, we drew the same objects with both
flat and smooth shading. Below, we have the two spheres that we rendered. Notice that the smoothly
shaded sphere looks more spherical than the flatly-shaded one.

From the outside, we can look at the scene as we have rendered it. The spheres are centered on the plane z = -2 and the triangles are in the plane z = -0.2. The small red sphere shows the placement of the viewer at the origin. The small white sphere shows the position of the light source at (0.0, 1.0, -0.5). The bounding box is in gray and runs from -1 to 1 in the x and y directions and -3 to 3 in the z-direction. The viewer looks toward the negative z-axis.

To render our spheres, we use the glutSolidSphere() function. This function takes three parameters: the
radius, the slice count, and the stack count. The slice count determines polygonal boundaries along lines
of longitude and the stack count determines boundaries along lines of latitude. As we increase the
number of slices and stacks, our spheres look more and more spherical.

In flat shading mode, polygons are colored with the color of a single vertex. This vertex is the last vertex specified for all geometric primitives, except GL_POLYGON, which uses the first vertex. In our example code, the triangle that was rendered was colored blue, the color of the last specified vertex.
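A minimal sketch of the two spheres described above (the radius, slice and stack counts are illustrative):

glShadeModel(GL_FLAT);           /* flat shading: one color per facet                  */
glutSolidSphere(1.0, 16, 16);    /* radius, slices (longitude), stacks (latitude)      */

glShadeModel(GL_SMOOTH);         /* Gouraud shading: interpolate colors across facets  */
glutSolidSphere(1.0, 16, 16);    /* the same sphere now appears noticeably rounder     */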
8. Write a brief on global illumination.(4 marks) (Jun/Jul 13)
Global illumination is a general name for a group of algorithms used in 3D computer graphics that are
meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light
which comes directly from a light source (direct illumination), but also subsequent cases in which light
rays from the same source are reflected by other surfaces in the scene, whether reflective or not
(indirect illumination).
Theoretically reflections, refractions, and shadows are all examples of global illumination, because
when simulating them, one object affects the rendering of another object (as opposed to an object
being affected only by a direct light). In practice, however, only the simulation of diffuse inter-
reflection or caustics is called global illumination.
Images rendered using global illumination algorithms often appear more photorealistic than images
rendered using only direct illumination algorithms. However, such images are computationally more
expensive and consequently much slower to generate. One common approach is to compute the global
illumination of a scene and store that information with the geometry, e.g., radiosity. That stored data
can then be used to generate images from different viewpoints for generating walkthroughs of a scene
without having to go through expensive lighting calculations repeatedly.
Radiosity, ray tracing, beam tracing, cone tracing, path tracing, Metropolis light transport, ambient
occlusion, photon mapping, and image based lighting are examples of algorithms used in global
illumination, some of which may be used together to yield results that are not fast, but accurate.
These algorithms model diffuse inter-reflection which is a very important part of global illumination;
however most of these (excluding radiosity) also model specular reflection, which makes them more
accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene.
UNIT - 8
IMPLEMENTATION
1. Discuss Bresenham's rasterization algorithm. How is it advantageous when
compared to other existing methods? Describe. (10 marks) (Dec/Jan 12)
The common conventions will be used:
the top-left is (0,0) such that pixel coordinates increase in the right and down directions (e.g. that the
pixel at (7,4) is directly above the pixel at (7,5)), and
that the pixel centers have integer coordinates.
The endpoints of the line are the pixels at (x0, y0) and (x1, y1), where the first coordinate of the pair is
the column and the second is the row.
The algorithm will be initially presented only for the octant in which the segment goes down and to the right (x0 <= x1 and y0 <= y1), and its horizontal projection x_1 - x_0 is longer than the vertical projection y_1 - y_0 (the line has a negative slope whose absolute value is less than 1). In this octant, for each column x between x_0 and x_1, there is exactly one row y (computed by the algorithm) containing a
pixel of the line, while each row between y_0 and y_1 may contain multiple rasterized pixels.
Bresenham's algorithm chooses the integer y corresponding to the pixel center that is closest to the
ideal (fractional) y for the same x; on successive columns y can remain the same or increase by 1. The
general equation of the line through the endpoints is given by:
\frac{y - y_0}{y_1-y_0} = \frac{x-x_0}{x_1-x_0}.
Since we know the column, x, the pixel's row, y, is given by rounding this quantity to the nearest
integer:
y = \frac{y_1-y_0}{x_1-x_0} (x-x_0) + y_0.
The slope (y_1-y_0)/(x_1-x_0) depends on the endpoint coordinates only and can be precomputed, and
the ideal y for successive integer values of x can be computed starting from y_0 and repeatedly adding
the slope.
In practice, the algorithm can track, instead of possibly large y values, a small error value between -0.5 and 0.5: the vertical distance between the rounded and the exact y values for the current x. Each time x is increased, the error is increased by the slope; if it exceeds 0.5, the rasterization y is increased by 1 (the line continues on the next lower row of the raster) and the error is decremented by 1.0.
In the following pseudocode sample plot(x,y) plots a point and abs returns absolute value:
function line(x0, x1, y0, y1)
int deltax := x1 - x0
int deltay := y1 - y0
real error := 0
real deltaerr := abs (deltay / deltax) // Assume deltax != 0 (line is not vertical),
// note that this division needs to be done in a way that preserves the fractional part
int y := y0
for x from x0 to x1
plot(x,y)
error := error + deltaerr
if error >= 0.5 then
y := y + 1
error := error - 1.0
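An all-integer version for the same case is how Bresenham's algorithm is usually implemented, which is its main advantage over floating-point methods such as the DDA; a minimal C sketch, where set_pixel() is a hypothetical plotting routine:

void bresenham_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0;
    int dy = y1 - y0;
    int d  = 2 * dy - dx;          /* decision variable (scaled, integer-only error) */
    int y  = y0;

    for (int x = x0; x <= x1; ++x) {
        set_pixel(x, y);
        if (d > 0) {               /* error passed half a pixel: step in y */
            y += 1;
            d -= 2 * dx;
        }
        d += 2 * dy;
    }
}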
2. Explain any one of the line clipping algorithms. (7 marks) (Jun/Jul 12) (Dec/Jan 14)
The DDA starts by calculating the smaller of dy or dx for a unit increment of the other. A line is then
sampled at unit intervals in one coordinate and corresponding integer values nearest the line path are
determined for the other coordinate.
Considering a line with positive slope, if the slope is less than or equal to 1, we sample at unit x intervals (dx = 1) and compute successive y values as

y_{k+1} = y_k + m

Subscript k takes integer values starting from 0, for the 1st point and increases by 1 until endpoint is
reached. y value is rounded off to nearest integer to correspond to a screen pixel.
For lines with slope greater than 1, we reverse the roles of x and y, i.e. we sample at dy = 1 and calculate consecutive x values as

x_{k+1} = x_k + 1/m

Similar calculations are carried out to determine pixel positions along a line with negative slope. Thus, if the absolute value of the slope is less than 1 and the starting endpoint is at the left, we set dx = 1 and step through the columns from left to right. A minimal sketch of the algorithm follows.
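The following is a minimal C sketch of the DDA algorithm for any slope; set_pixel() is a hypothetical plotting routine and the rounding macro assumes non-negative screen coordinates:

#include <stdlib.h>

#define ROUND(v) ((int)((v) + 0.5))

void dda_line(int x0, int y0, int x1, int y1)
{
    int dx = x1 - x0, dy = y1 - y0;
    int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);  /* sample along the major axis */
    if (steps == 0) { set_pixel(x0, y0); return; }      /* degenerate: a single point  */
    float x_inc = (float)dx / steps;                    /* increment per unit step     */
    float y_inc = (float)dy / steps;
    float x = (float)x0, y = (float)y0;
    for (int i = 0; i <= steps; ++i) {
        set_pixel(ROUND(x), ROUND(y));
        x += x_inc;
        y += y_inc;
    }
}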

3. Explain how Rasterization is carried out? (6 marks) (Jun/Jul 12) (Dec/Jan 13)
The term "rasterisation" in general can be applied to any process by which vector information can be
converted into a raster format.
The process of rasterising 3D models onto a 2D plane for display on a computer screen is often carried
out by fixed function hardware within the graphics pipeline. This is because there is no motivation for
modifying the techniques for rasterisation used at render time and a special-purpose system allows for
high efficiency.
Basic approach
The most basic rasterization algorithm takes a 3D scene, described as polygons, and renders it onto a 2D
surface, usually a computer monitor. Polygons are themselves represented as collections of triangles.
Triangles are represented by 3 vertices in 3D-space. At a very basic level, rasterizers simply take a stream of vertices, transform them into corresponding 2-dimensional points on the viewer's monitor and fill in the transformed 2-dimensional triangles as appropriate.
Transformations
Transformations are usually performed by matrix multiplication. Quaternion math may also be used but that is outside the scope of this article. The main transformations are translation, scaling, rotation, and projection. A 3-dimensional vertex may be transformed by augmenting an extra variable (known as
a "homogeneous variable") and left multiplying the resulting 4-component vertex by a 4 x 4
transformation matrix.
A translation is simply the movement of a point from its original location to another location in 3-space by a constant offset. Translations can be represented by the following matrix:

\begin{bmatrix} 1 & 0 & 0 & X \\ 0 & 1 & 0 & Y \\ 0 & 0 & 1 & Z \\ 0 & 0 & 0 & 1 \end{bmatrix}

X, Y, and Z are the offsets in the 3 dimensions, respectively.
A scaling transformation is performed by multiplying the position of a vertex by a scalar value. This has the effect of scaling a vertex with respect to the origin. Scaling can be represented by the following matrix:

\begin{bmatrix} X & 0 & 0 & 0 \\ 0 & Y & 0 & 0 \\ 0 & 0 & Z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

X, Y, and Z are the values by which each of the 3 dimensions is multiplied. Asymmetric scaling can be accomplished by varying the values of X, Y, and Z.
Rotation matrices depend on the axis around which a point is to be rotated.
Rotation about the X-axis:

\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Rotation about the Y-axis:

\begin{bmatrix} \cos\theta & 0 & \sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

Rotation about the Z-axis:

\begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

\theta in each of these cases represents the angle of rotation.

4. What is the backward mapping problem in texture mapping? (5 marks) (Dec/Jan 12)
Backward mapping (also known as inverse mapping or screen order) is a technique used in texture mapping to create a 2D image from 3D data. Because a texel (the smallest graphical element in a texture map) does not correspond exactly to a screen pixel, the developer must apply a filter computation to
map the texels to the screen. Forward mapping steps through the texture map and computes the screen
location of each texel. In comparison, backward mapping steps through the screen pixels and computes a
pixel's color according to the texels that map to it.
Real time video effects systems use forward mapping. However, since many texels are likely to map to a
single pixel, performing the filter computation at each pixel for every frame is very expensive. Most
systems that don't have to produce real time effects use backward mapping.

5. What is area averaging in texture mapping? (5 marks) (Dec/Jan 12)
Each bitmap image of the mipmap set is a downsized duplicate of the main texture, but at a certain
reduced level of detail. Although the main texture would still be used when the view is sufficient to
render it in full detail, the renderer will switch to a suitable mipmap image (or in
fact, interpolate between the two nearest, if trilinear filtering is activated) when the texture is viewed
from a distance or at a small size. Rendering speed increases since the number of texture pixels
("texels") being processed can be much lower with the simple textures. Artifacts are reduced since the
mipmap images are effectively already anti-aliased, taking some of the burden off the real-time renderer.
Scaling down and up is made more efficient with mipmaps as well.
If the texture has a basic size of 256 by 256 pixels, then the associated mipmap set may contain a series of 8 images, each one-fourth the total area of the previous one: 128x128 pixels, 64x64, 32x32, 16x16, 8x8, 4x4, 2x2, 1x1 (a single pixel). If, for example, a scene is rendering this texture in a space of 40x40 pixels, then either a scaled up version of the 32x32 (without trilinear interpolation) or an interpolation of the 64x64 and the 32x32 mipmaps (with trilinear interpolation) would be used. The simplest way to generate these textures is by successive averaging; however, more sophisticated algorithms (perhaps based on signal processing and Fourier transforms) can also be used.

6. What is a big problem with the painters algorithm? (4 marks) (Dec/Jan 13)
The painter's algorithm, also known as a priority fill, is one of the simplest solutions to the visibility
problem in 3D computer graphics. When projecting a 3D scene onto a 2D plane, it is necessary at some
point to decide which polygons are visible, and which are hidden.
The name "painter's algorithm" refers to the technique employed by many painters of painting distant
parts of a scene before parts which are nearer thereby covering some areas of distant parts. The painter's
algorithm sorts all the polygons in a scene by their depth and then paints them in this order, farthest to
closest. It will paint over the parts that are normally not visible thus solving the visibility problem
Computer Graphics And Visualization 10CS65
Dept of CSE, SJBIT Page 74

at the cost of having painted invisible areas of distant objects.
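A sketch of the basic painter's algorithm in C is shown below; the Polygon type, the choice of depth key (here a single depth value per polygon) and the draw_polygon() routine are assumptions made only for illustration.

#include <stdlib.h>

typedef struct { double depth; /* ... vertices, color ... */ } Polygon;

extern void draw_polygon(const Polygon *p);     /* assumed rasterization routine */

/* qsort comparator: larger depth (farther from the viewer) sorts first. */
static int farther_first(const void *a, const void *b)
{
    double da = ((const Polygon *)a)->depth, db = ((const Polygon *)b)->depth;
    return (da < db) - (da > db);
}

void painters_algorithm(Polygon *polys, size_t n)
{
    qsort(polys, n, sizeof(Polygon), farther_first);
    for (size_t i = 0; i < n; i++)
        draw_polygon(&polys[i]);   /* nearer polygons simply paint over farther ones */
}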
7. Explain the Cohen-Sutherland line clipping algorithm in detail. (10 marks) (Jun/Jul 12)
(Dec/Jan 13) (Dec/Jan 14)
The algorithm includes, excludes or partially includes the line based on whether:
Both endpoints are in the viewport region (bitwise OR of endpoints == 0): trivial accept.
Both endpoints share at least one non-visible region which implies that the line does not cross the
visible region. (bitwise AND of endpoints != 0): trivial reject.
Both endpoints are in different regions: In case of this nontrivial situation the algorithm finds one of
the two points that is outside the viewport region (there will be at least one point outside). The
intersection of the outpoint and extended viewport border is then calculated (i.e. with the parametric
equation for the line) and this new point replaces the outpoint. The algorithm repeats until a trivial
accept or reject occurs.
The numbers in the figure below are called outcodes. An outcode is computed for each of the two
points in the line. The outcode will have four bits for two-dimensional clipping, or six bits in the
three-dimensional case. The first bit is set to 1 if the point is above the viewport. The bits in the 2D
outcode represent: Top, Bottom, Right, Left. For example the outcode 1010 represents a point that is
top-right of the viewport. Note that the outcodes for endpoints must be recalculated on each
iteration after the clipping occurs.
The outcodes of the nine regions around the clip window (bits: Top, Bottom, Right, Left) are:

1001 | 1000 | 1010
0001 | 0000 | 0010
0101 | 0100 | 0110
The Cohen-Sutherland algorithm can be used only on a rectangular clipping area. For other convex
polygon clipping windows, use the Cyrus-Beck algorithm.
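A small C sketch of the outcode computation and the trivial accept/reject tests used above; the clip-window limits are hypothetical, and the bit assignment follows the Top/Bottom/Right/Left convention of the figure.

#define TOP    8   /* 1000 */
#define BOTTOM 4   /* 0100 */
#define RIGHT  2   /* 0010 */
#define LEFT   1   /* 0001 */

/* Hypothetical clip-window limits. */
static const double xmin = 0.0, xmax = 1.0, ymin = 0.0, ymax = 1.0;

int outcode(double x, double y)
{
    int code = 0;
    if (y > ymax)      code |= TOP;
    else if (y < ymin) code |= BOTTOM;
    if (x > xmax)      code |= RIGHT;
    else if (x < xmin) code |= LEFT;
    return code;
}

/* For each line segment (p1, p2):
   trivially accept if (outcode(p1) | outcode(p2)) == 0,
   trivially reject if (outcode(p1) & outcode(p2)) != 0,
   otherwise clip one outside endpoint against the window edge and repeat. */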

8. Write short notes on: (20 marks) (Jun/Jul 13)


a. Library organization in OpenGL
Function classes within the OpenGL API:
1. Control: window system interactions.
2. Primitives: rendering.
3. Attribute: current color.
4. Viewing: camera settings.

5. Transformation: translation, rotation, scale.
6. Input: interaction.
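As a brief illustration, the following minimal GLUT program (in the spirit of the classic simple.c demo) touches each of these function classes; the window title, window size and the square being drawn are arbitrary choices for this sketch.

#include <GL/glut.h>   /* glut.h also pulls in gl.h and glu.h */

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 0.0, 0.0);          /* attribute: current color   */
    glBegin(GL_POLYGON);               /* primitive: render a square */
        glVertex2f(-0.5, -0.5);
        glVertex2f( 0.5, -0.5);
        glVertex2f( 0.5,  0.5);
        glVertex2f(-0.5,  0.5);
    glEnd();
    glFlush();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);                        /* control: window-system interaction */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutCreateWindow("simple");
    glMatrixMode(GL_PROJECTION);                  /* viewing: camera settings           */
    glLoadIdentity();
    gluOrtho2D(-1.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);                   /* transformation: model-view setup   */
    glLoadIdentity();
    glRotatef(45.0, 0.0, 0.0, 1.0);               /* rotate the square by 45 degrees    */
    glutDisplayFunc(display);                     /* input/interaction: event callbacks */
    glutMainLoop();
    return 0;
}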
Three-layered libraries: GL, GLU, GLUT.
Library organization: the application program typically calls GLUT for window-system interaction and
input, and GLU for higher-level utilities; both libraries are built on top of GL, which in turn talks to the
native window system (e.g. GLX under X Windows or WGL under Microsoft Windows) and drives the
frame buffer.
b. Mapping between coordinates.
Coordinate systems play an essential role in the graphics pipeline. They are not complicated; coordinates
are one of the first things we learn in school when we study geometry. However, learning a few things
about them now will make it easier to understand matrices.
In the previous chapter we mentioned that points and vectors (as used in CG) are represented with three
real numbers. But what do these numbers mean? Each number represents a signed distance from the
origin of a line to the position of the point on that line. For example, consider drawing a line and putting
a mark in the middle. We will call this mark the origin. This mark becomes our point of reference: the
position from which we will measure the distance to any other point. If a point lies to the right side of
the origin, we take the signed distance to be greater than zero (that is, positive something). On the other
hand, if it is on the left side of the origin, the value will be negative (negative something).
We assume the line goes to infinity on either side of the origin. Therefore, in theory, the distance
between two points on that line could be infinitely large. However, this presents a problem: in the world
of computers, there is a practical limit to the value you can represent for a number. Thankfully, this
maximum value is usually big enough to build most of the 3D scenes we will want to render; all the
values we deal with in the world of CG are bounded anyway. With that said, let's not worry too much
about this computational limitation for now.
Now that we have a line and an origin we add some additional marks at a regular interval (unit length)
on each side of the origin, effectively turning our line into a ruler. With the ruler established, we can
simply use it to measure the coordinate of a point from the origin ("coordinate" being another way of
saying the signed distance from the origin to the point). In computer graphics and mathematics, the ruler
defines what we call an axis.


9. Clip the polygon using the Sutherland-Hodgman algorithm. (06 Marks) (Dec/Jan 14)

The algorithm begins with an input list of all vertices in the subject polygon. Next, one side of the clip
polygon is extended infinitely in both directions, and the path of the subject polygon is traversed.
Vertices from the input list are inserted into an output list if they lie on the visible side of the extended
clip polygon line, and new vertices are added to the output list where the subject polygon path crosses
the extended clip polygon line.
This process is repeated iteratively for each clip polygon side, using the output list from one stage as the
input list for the next. Once all sides of the clip polygon have been processed, the final generated list of
vertices defines a new single polygon that is entirely visible. Note that if the subject polygon was
concave at vertices outside the clipping polygon, the new polygon may have coincident (i.e.
overlapping) edges; this is acceptable for rendering, but not for other applications such as computing
shadows.
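A C sketch of one clipping pass of the algorithm, clipping the subject polygon against a single edge of the clip polygon, is given below; the Point type, the counter-clockwise orientation assumption and the helper names are illustrative only, and the pass must be repeated for every clip-polygon edge as described above.

typedef struct { double x, y; } Point;

/* 1 if p lies on the visible (inside) side of clip edge a->b, assuming the
   clip polygon is given in counter-clockwise order. */
static int inside(Point p, Point a, Point b)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x) >= 0.0;
}

/* Intersection of segment p->q with the infinite line through a and b
   (only called when p and q lie on opposite sides of that line). */
static Point intersect(Point p, Point q, Point a, Point b)
{
    double a1 = b.y - a.y, b1 = a.x - b.x, c1 = a1 * a.x + b1 * a.y;
    double a2 = q.y - p.y, b2 = p.x - q.x, c2 = a2 * p.x + b2 * p.y;
    double det = a1 * b2 - a2 * b1;
    Point r = { (b2 * c1 - b1 * c2) / det, (a1 * c2 - a2 * c1) / det };
    return r;
}

/* One Sutherland-Hodgman pass: clip polygon in[0..n-1] against edge a->b,
   writing the surviving and newly created vertices to out[] and returning their count. */
int clip_against_edge(const Point *in, int n, Point a, Point b, Point *out)
{
    int m = 0;
    for (int i = 0; i < n; i++) {
        Point cur = in[i], prev = in[(i + n - 1) % n];
        int cur_in = inside(cur, a, b), prev_in = inside(prev, a, b);
        if (cur_in) {
            if (!prev_in) out[m++] = intersect(prev, cur, a, b);  /* entering the visible side */
            out[m++] = cur;
        } else if (prev_in) {
            out[m++] = intersect(prev, cur, a, b);                /* leaving the visible side  */
        }
    }
    return m;
}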
10. Write short notes on: (08 Marks) (Dec/Jan 14)
i. Z-buffer algorithm.
In computer graphics, z-buffering, also known as depth buffering, is the management of image depth
coordinates in three-dimensional (3D) graphics, usually done in hardware, sometimes in software. It is
one solution to the visibility problem, which is the problem of deciding which elements of a rendered
scene are visible, and which are hidden. The painter's algorithm is another common solution which,
though less efficient, can also handle non-opaque scene elements.
When an object is rendered, the depth of a generated pixel (z coordinate) is stored in a buffer (the z-
buffer or depth buffer). This buffer is usually arranged as a two-dimensional array (x-y) with one
element for each screen pixel. If another object of the scene must be rendered in the same pixel, the
method compares the two depths and overrides the current pixel if the object is closer to the observer.
The chosen depth is then saved to the z-buffer, replacing the old one. In the end, the z-buffer will allow
the method to correctly reproduce the usual depth perception: a close object hides a farther one. This is
called z-culling.
The granularity of a z-buffer has a great influence on the scene quality: a 16-bit z-buffer can result in
artifacts (called "z-fighting") when two objects are very close to each other. A 24-bit or 32-bit z-buffer
behaves much better, although the problem cannot be entirely eliminated without additional algorithms.
An 8-bit z-buffer is almost never used since it has too little precision.
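A minimal z-buffer sketch in C: the depth buffer is cleared to the farthest representable value, and each fragment produced by the rasterizer overwrites a pixel only if it is closer than what is already stored there. The buffer sizes, the fragment interface and the color type are assumptions made only for illustration.

#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static float        zbuf[HEIGHT][WIDTH];   /* depth (z) buffer */
static unsigned int fbuf[HEIGHT][WIDTH];   /* color buffer     */

void clear_buffers(void)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++) {
            zbuf[y][x] = FLT_MAX;   /* farthest possible depth */
            fbuf[y][x] = 0;         /* background color        */
        }
}

/* Called once for every fragment (x, y, z, color) produced by the rasterizer. */
void write_fragment(int x, int y, float z, unsigned int color)
{
    if (z < zbuf[y][x]) {      /* closer than the stored depth?  */
        zbuf[y][x] = z;        /* keep the new depth (z-culling) */
        fbuf[y][x] = color;    /* and the new color              */
    }
}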
