
Introduction to Computer Graphics

Computer graphics are graphics created using computers and, more generally, the representation and manipulation of image data by a computer with help from specialized software and hardware. The development of computer graphics has made computers easier to interact with, and better for understanding and interpreting many types of data. Developments in computer graphics have had a profound impact on many types of media and have revolutionized animation, movies, and the video game industry.

Definition: The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". Typically, the term computer graphics refers to several different things:

the representation and manipulation of image data by a computer;
the various technologies used to create and manipulate images;
the images so produced; and
the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content (see study of computer graphics).

Today, computers and computer-generated images touch many aspects of daily life. Computer imagery is found on television, in newspapers (for example, in weather reports), and in all kinds of medical investigation and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, "such graphs are used to illustrate papers, reports, theses", and other presentation material. Many powerful tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: 2D, 3D, and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have developed, such as information visualization and scientific visualization, the latter more concerned with "the visualization of three dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component".

The phrase "Computer Graphics" was coined in 1960 by William Fetter, a graphic designer for Boeing. The field of computer graphics developed with the emergence of computer graphics hardware. Early projects like the Whirlwind and SAGE projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device.

Applications

Computer graphics may be used in the following areas:

Computational biology
Computational physics
Computer-aided design
Computer simulation
Digital art
Education
Graphic design
Infographics
Information visualization
Rational drug design
Scientific visualization
Video games
Virtual reality
Web design

Subfields of computer graphics


A broad classification of major subfields in computer graphics might be:

1. Geometry: studies ways to represent and process surfaces
2. Animation: studies ways to represent and manipulate motion
3. Rendering: studies algorithms to reproduce light transport
4. Imaging: studies image acquisition and image editing

1. Geometry

Successive approximations of a surface computed using quadric error metrics.

The subfield of geometry studies the representation of three-dimensional objects in a discrete digital setting. Because the appearance of an object depends largely on its exterior, boundary representations are most commonly used. Two-dimensional surfaces are a good representation for most objects, though they may be non-manifold. Since surfaces are not finite, discrete digital approximations are used. Polygonal meshes (and to a lesser extent subdivision surfaces) are by far the most common representation, although point-based representations have become more popular recently (see for instance the Symposium on Point-Based Graphics). These representations are Lagrangian, meaning the spatial locations of the samples are independent. Recently, Eulerian surface descriptions (i.e., where spatial samples are fixed) such as level sets have been developed into a useful representation for deforming surfaces which undergo many topological changes.

Geometry Subfields

Implicit surface modeling - an older subfield which examines the use of algebraic surfaces, constructive solid geometry, etc., for surface representation.

Digital geometry processing - surface reconstruction, simplification, fairing, mesh repair, parameterization, remeshing, mesh generation, surface compression, and surface editing all fall under this heading.[6][7][8]

Discrete differential geometry - a nascent field which defines geometric quantities for the discrete surfaces used in computer graphics.[9]

Point-based graphics - a recent field which focuses on points as the fundamental representation of surfaces.

Subdivision surfaces

Out-of-core mesh processing - another recent field which focuses on mesh datasets that do not fit in main memory.
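A polygonal mesh of the kind described above is commonly stored as an indexed face set: one list of unique vertex positions, plus faces that index into it so that shared vertices are stored only once. A minimal sketch in C++ (the type and field names here are illustrative, not taken from any particular library):

```cpp
#include <array>
#include <vector>

struct Vec3 { float x, y, z; };

// Indexed triangle mesh: vertices are shared between faces,
// so each position is stored exactly once.
struct TriangleMesh {
    std::vector<Vec3> vertices;              // unique positions
    std::vector<std::array<int, 3> > faces;  // each face holds 3 indices into 'vertices'
};

// Build a unit square from two triangles sharing an edge:
// 4 stored vertices instead of 6.
TriangleMesh makeSquare() {
    TriangleMesh m;
    m.vertices = { {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0} };
    m.faces    = { {0,1,2}, {0,2,3} };
    return m;
}
```

Because faces reference indices rather than duplicating coordinates, operations such as simplification and remeshing can move a vertex once and have every incident face follow.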

2. Animation

The subfield of animation studies descriptions for surfaces (and other phenomena) that move or deform over time. Historically, most work in this field has focused on parametric and data-driven models, but recently physical simulation has become more popular as computers have become more powerful computationally.

Subfields

Performance capture Character animation Physical simulation (e.g. cloth modeling, animation of fluid dynamics, etc.)

3. Rendering

Indirect diffuse scattering simulated using path tracing and irradiance caching.

Rendering generates images from a model. Rendering may simulate light transport to create realistic images, or it may create images that have a particular artistic style in non-photorealistic rendering. The two basic operations in realistic rendering are transport (how much light passes from one place to another) and scattering (how surfaces interact with light). See Rendering (computer graphics) for more information.

Rendering is the process of generating an image from a model (or models in what collectively could be called a scene file) by means of computer programs. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The rendering program is usually built into the computer graphics software, though others are available as plug-ins or entirely separate programs. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Though the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built device able to assist a CPU in performing complex rendering calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation does not account for all lighting phenomena, but is a general lighting model for computer-generated imagery. "Rendering" is also used to describe the process of calculating effects in a video editing file to produce final video output.

Other subfields
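The rendering equation referred to above can be stated explicitly. In Kajiya's formulation, the outgoing radiance $L_o$ at a surface point $x$ in direction $\omega_o$ is the emitted radiance plus all incoming radiance $L_i$ scattered toward $\omega_o$ by the surface's BRDF $f_r$, integrated over the hemisphere $\Omega$ about the surface normal $n$:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

Realistic rendering algorithms such as path tracing are, in essence, numerical estimators of this integral.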

physically based rendering - concerned with generating images according to the laws of geometric optics

real time rendering - focuses on rendering for interactive applications, typically using specialized hardware like GPUs

non-photorealistic rendering

relighting - recent area concerned with quickly re-rendering scenes

4. Imaging

There are two types of imaging: two-dimensional and three-dimensional.

Two-dimensional

Raster graphic sprites (left) and masks (right)

2D computer graphics are the computer-based generation of digital images, mostly from two-dimensional models, such as 2D geometric models, text, and digital images, and by techniques specific to them. 2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies, such as typography, cartography, technical drawing, advertising, etc. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.

Three-dimensional

3D computer graphics, in contrast to 2D computer graphics, are graphics that use a three-dimensional representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer vector graphics in the wire frame model and 2D computer raster graphics in the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques. 3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences. A 3D model is the mathematical representation of any three-dimensional object. A model is not technically a graphic until it is visually displayed. Due to 3D printing, 3D models are not confined to virtual space. A model can be displayed visually as a two-dimensional image through a process called 3D rendering, or used in non-graphical computer simulations and calculations. Various 3D computer graphics software packages allow users to create 3D images.

Computer animation

Example of computer animation produced using motion capture.

Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low-bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (computer-generated imagery or computer-generated imaging), especially when used in films.

Virtual entities may contain and be controlled by assorted attributes, such as transform values (location, orientation, and scale) stored in an object's transformation matrix. Animation is the change of an attribute over time. Multiple methods of achieving animation exist; the rudimentary form is based on the creation and editing of keyframes, each storing a value at a given time, per attribute to be animated. The 2D/3D graphics software will interpolate between keyframes, creating an editable curve of a value mapped over time, resulting in animation. Other methods of animation include procedural and expression-based techniques: the former consolidates related elements of animated entities into sets of attributes, useful for creating particle effects and crowd simulations; the latter allows an evaluated result returned from a user-defined logical expression, coupled with mathematics, to automate animation in a predictable way (convenient for controlling bone behavior beyond what a hierarchy offers in skeletal system setup).

To create the illusion of movement, an image is displayed on the computer screen, then quickly replaced by a new image that is similar to the previous image but shifted slightly. This technique is identical to the way the illusion of movement is achieved in television and motion pictures.

Transport:
Transport describes how illumination in a scene gets from one place to another. Visibility is a major component of light transport.

Scattering:
Models of scattering and shading are used to describe the appearance of a surface. In graphics these problems are often studied within the context of rendering, since they can substantially affect the design of rendering algorithms. Shading can be broken down into two orthogonal issues, which are often studied independently:

1. scattering - how light interacts with the surface at a given point
2. shading - how material properties vary across the surface

The former problem refers to scattering, i.e., the relationship between incoming and outgoing illumination at a given point. Descriptions of scattering are usually given in terms of a bidirectional scattering distribution function, or BSDF. The latter issue addresses how different types of scattering are distributed across the surface (i.e., which scattering function applies where). Descriptions of this kind are typically expressed with a program called a shader. (Note that there is some confusion since the word "shader" is sometimes used for programs that describe local geometric variation.)
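As a concrete instance of a scattering description, the simplest BRDF is the Lambertian (ideal diffuse) model: the scattering function is the constant albedo/pi, so reflected radiance depends only on the cosine of the angle between the surface normal and the incoming light direction. A sketch under the assumption of unit vectors (one possible model, not the general BSDF):

```cpp
#include <algorithm>
#include <cmath>

// Lambertian (ideal diffuse) scattering: the BRDF is the constant
// albedo / pi, so reflected radiance = (albedo / pi) * incident
// radiance * cos(theta), with cos(theta) clamped to zero for light
// arriving from behind the surface.
float lambertian(float albedo, float incidentRadiance, float cosTheta) {
    const float pi = 3.14159265358979f;
    return (albedo / pi) * incidentRadiance * std::max(0.0f, cosTheta);
}
```

A shader, in the sense used above, decides which such scattering function (and with which albedo) applies at each point of the surface.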

Persistence:
The time it takes the light emitted from the screen to decay to one tenth of its original intensity is called persistence.

Resolution:
The maximum number of points that can be displayed without overlap on a CRT is called the resolution.

Aspect Ratio:
The ratio of vertical points to horizontal points necessary to produce equal-length lines in both directions of the screen is called the aspect ratio. Usually the aspect ratio is 3/4. An aspect ratio of 3/4 means that a vertical line plotted with three points has the same length as a horizontal line plotted with four points.

Refreshing:
Some method is needed for maintaining the picture on the screen. Refreshing of the screen is done by keeping the phosphor glowing so as to redraw the picture repeatedly, i.e., by quickly directing the electron beam back to the same points.

Refresh buffer/Frame buffer:


Picture definition is stored in a memory area called the refresh buffer or frame buffer. This memory area holds the set of intensity values for all the screen points.

Pixel:
Each screen point is referred to as a pixel or pel.

Bitmap:
On a black and white system with one bit per pixel, the frame buffer is commonly known as a Bitmap.

Programs:
Write a program to draw a line using inbuilt functions.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main()
{
    clrscr();
    int gd=0,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    int x0,x1,y0,y1;
    cout<<"enter the initial coordinates of the line x0,y0: ";
    cin>>x0>>y0;
    cout<<"enter the final coordinates of the line x1,y1: ";
    cin>>x1>>y1;
    line(x0,y0,x1,y1);
    getch();
    closegraph();
}
Output:

Write a program to draw a circle using inbuilt functions.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main()
{
    clrscr();
    int gd=0,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    int x0,y0,radius;
    cout<<"Enter coordinates of center of circle x0,y0: ";
    cin>>x0>>y0;
    cout<<"Enter radius of circle r: ";
    cin>>radius;
    circle(x0,y0,radius);
    getch();
    closegraph();
}
Output:

Write a program to draw a rectangle using inbuilt functions.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main()
{
    clrscr();
    int gd=0,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    int l,t,r,b;
    cout<<"Enter the top left corner points of rectangle: ";
    cin>>l>>t;
    cout<<"Enter the bottom right corner points of rectangle: ";
    cin>>r>>b;
    rectangle(l,t,r,b);
    getch();
    closegraph();
}
Output:

Write a program to draw an ellipse using inbuilt functions.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main()
{
    clrscr();
    int gd=0,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    int x,y,s,e,xr,yr;
    cout<<"enter the center coordinates of ellipse: ";
    cin>>x>>y;
    cout<<"enter the starting & ending angles of ellipse: ";
    cin>>s>>e;
    cout<<"enter the x and y radii of ellipse: ";
    cin>>xr>>yr;
    ellipse(x,y,s,e,xr,yr);
    getch();
    closegraph();
}
Output:

Write a program to draw a line using the DDA algorithm.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#include<math.h>
#include<dos.h>
void main()
{
    clrscr();
    int x1,y1,x2,y2,i,len,gdrive=DETECT,gmode;
    float incx,incy,x,y;
    initgraph(&gdrive,&gmode,"C:\\TC\\BGI");
    cout<<"enter the initial coordinates of line: ";
    cin>>x1>>y1;
    cout<<"enter the final coordinates of line: ";
    cin>>x2>>y2;
    len=abs(x2-x1);              // number of steps = larger of |dx|, |dy|
    if(abs(y2-y1)>len)
        len=abs(y2-y1);
    incx=(float)(x2-x1)/len;     // cast to float, else integer division truncates the increments
    incy=(float)(y2-y1)/len;
    x=x1+0.5;                    // +0.5 so truncation in putpixel rounds to the nearest pixel
    y=y1+0.5;
    for(i=1;i<=len;i++)
    {
        putpixel((int)x,(int)y,9);
        x=x+incx;
        y=y+incy;
        delay(100);
    }
    getch();
    closegraph();
}

Output:

Write a program to draw a line using Bresenham's algorithm.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
void main()
{
    clrscr();
    int x1,y1,x2,y2,dx,dy,ds,dt,d;
    int gd=DETECT,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    cout<<"enter the initial coordinates of line: ";
    cin>>x1>>y1;
    cout<<"enter the final coordinates of line: ";
    cin>>x2>>y2;
    putpixel(x1,y1,WHITE);
    putpixel(x2,y2,WHITE);
    dx=x2-x1;
    dy=y2-y1;
    dt=2*(dy-dx);    // increment when y is stepped
    ds=2*dy;         // increment when y is not stepped
    d=(2*dy)-dx;     // initial decision variable
    while(x1<x2)     // assumes x1<x2 and slope between 0 and 1
    {
        x1=x1+1;
        if(d<0)
        {
            d=d+ds;
        }
        else
        {
            y1=y1+1;
            d=d+dt;
        }
        putpixel(x1,y1,WHITE);
    }
    getch();
    closegraph();
}
Output:

Write a program to demonstrate 2D scaling of a triangle.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
int x1,y1,x2,y2,x3,y3;
void draw();
void scale();
void main()
{
    int gd=DETECT,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    cout<<"enter the values: ";
    cin>>x1>>y1>>x2>>y2>>x3>>y3;
    draw();
    scale();
}
void draw()
{
    line(x1,y1,x2,y2);
    line(x2,y2,x3,y3);
    line(x3,y3,x1,y1);
}
void scale()
{
    int x,y,a1,a2,a3,b1,b2,b3;
    int mx,my;
    cout<<"enter scaling factors: ";
    cin>>x>>y;
    // scale about the centroid so the triangle stays in place
    mx=(x1+x2+x3)/3;
    my=(y1+y2+y3)/3;
    a1=mx+(x1-mx)*x;
    b1=my+(y1-my)*y;
    a2=mx+(x2-mx)*x;
    b2=my+(y2-my)*y;
    a3=mx+(x3-mx)*x;
    b3=my+(y3-my)*y;
    line(a1,b1,a2,b2);
    line(a2,b2,a3,b3);
    line(a3,b3,a1,b1);
    draw();   // redraw the original triangle for comparison
    getch();
    closegraph();
}
Output:

Write a program to demonstrate the boundary fill algorithm.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#include<dos.h>
void fill(int x,int y,int bcolor,int fcolor)
{
    int curr;
    curr=getpixel(x,y);
    // recurse until the boundary colour or an already-filled pixel is reached
    if((curr!=bcolor)&&(curr!=fcolor))
    {
        delay(20);
        putpixel(x,y,fcolor);
        fill(x+1,y,bcolor,fcolor);
        fill(x,y+1,bcolor,fcolor);
        fill(x-1,y,bcolor,fcolor);
        fill(x,y-1,bcolor,fcolor);
    }
}
void main()
{
    int gd=0,gm,x1,y1,x2,y2,bcolor=1,fcolor=3;
    cout<<"enter endpoints of rectangle:"<<endl;
    cout<<"enter top left corner points of rectangle:"<<endl;
    cin>>x1>>y1;
    cout<<"enter bottom right corner points of rectangle:"<<endl;
    cin>>x2>>y2;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    setcolor(bcolor);
    rectangle(x1+150,y1+100,x2+150,y2+100);
    fill(x1+151,y1+101,bcolor,fcolor);
    getch();
    closegraph();
}

Output:

Write a program to demonstrate the flood fill algorithm.

#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#include<dos.h>
void fill(int,int,int,int);
void main()
{
    clrscr();
    int gd=DETECT,gm;
    initgraph(&gd,&gm,"C:\\TC\\BGI");
    int x,y,co,bco;
    cout<<"enter the x,y coordinates of pixel position:\n";
    cin>>x>>y;
    cout<<"enter the fill color and background color:\n";
    cin>>co>>bco;
    setcolor(8);
    rectangle(200,200,500,400);
    setfillstyle(1,bco);
    bar(201,201,499,399);
    fill(x,y,co,bco);
    getch();
    closegraph();
}
void fill(int x, int y, int co, int bco)
{
    // recurse over every pixel that still has the background colour
    if(getpixel(x,y)==bco)
    {
        putpixel(x,y,co);
        delay(10);
        fill(x+1,y,co,bco);
        fill(x-1,y,co,bco);
        fill(x,y+1,co,bco);
        fill(x,y-1,co,bco);
    }
}
Output: