
Computer Graphics

CSE 323
Name-ASHISH MITTAL
ADMISSION NO- 15SCSE101443
BATCH-B8

DIRECTIONS

Please provide short written answers to the questions in the space provided.
If you require extra space, you may staple additional pages to the back of
your assignment. Feel free to talk over the problems with your classmates,
but please answer the questions on your own.

NAME: ASHISH MITTAL
Problem 1.

The Phong shading model can be summarized by the following equation:


  
I_phong = k_e + k_a * I_a
          + SUM_i [ min(1, 1 / (a_0 + a_1*d_i + a_2*d_i^2)) * I_li * ( k_d * (N . L_i) + k_s * (V . R_i)^(n_s) ) ]

where the summation i is taken over all light sources.

The variables used in the Phong shading equation are summarized below:

I, a_0, a_1, a_2, d_i, k_e, k_a, k_d, k_s, n_s, I_a, I_li, L_i, R_i, N, V
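
As a minimal sketch, the equation above can be evaluated per point roughly as follows (assuming every direction vector N, V, L_i, R_i is unit length and points away from the surface point; the function and argument names are only illustrative):

import numpy as np

def phong_intensity(ke, ka, Ia, lights, kd, ks, ns, N, V, a0, a1, a2):
    """Evaluate the Phong model at one surface point.

    lights: list of (I_l, L, R, d) tuples -- source intensity, unit vector
    to the light, unit reflection vector, and distance to the light."""
    I = ke + ka * Ia                                        # emissive + ambient terms
    for I_l, L, R, d in lights:
        att = min(1.0, 1.0 / (a0 + a1 * d + a2 * d * d))    # distance attenuation
        diffuse  = kd * max(0.0, np.dot(N, L))              # k_d (N . L_i)
        specular = ks * max(0.0, np.dot(V, R)) ** ns        # k_s (V . R_i)^ns
        I += att * I_l * (diffuse + specular)
    return I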

(a) Which of the quantities above are affected if...


…the viewing direction changes?

ANS. V (and therefore I).

…the position of the ith light changes?

ANS. L_i, R_i, and d_i (and therefore I).

…the orientation of the surface changes?

ANS. N and R_i (and therefore I).
Problem 1 - continued.

(b) Blinn and Newell have suggested that, when V and L are assumed to be
constants, the computation of V  R can be simplified by associating with each
light source a fictitious light source that will generate specular reflections.
This second light source is located in a direction H halfway between L and V.
The specular component is then computed from (N . H)^(n_s) instead of from (V . R)^(n_s).

Under what circumstances might L and V be assumed to be constant?

ANS. L and V may be assumed constant when the light source and the viewer are both
far from the scene, effectively at infinity (a directional light and a distant or
orthographic viewer), so the direction to the light and to the eye is essentially
the same for every surface point. Treating V as constant means the change in the
specular term at sharp, grazing angles between the surface and the view is not
modelled exactly, so the highlight can appear too big and too intense at those
angles, and the highlight won't move correctly with the view.

How does the new equation using H simplify shading equations?

ANS. If L and V are constant, then H = (L + V)/|L + V| is also constant, so it can be
computed once per light source instead of recomputing the reflection vector R_i at
every surface point; only the dot product N . H has to be evaluated per point.
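
A small sketch of that idea (the names are illustrative): with L and V fixed, the halfway vector is precomputed once and reused for every shaded point.

import numpy as np

def halfway_vector(L, V):
    """Halfway vector H between the (unit) light and view directions."""
    H = L + V
    return H / np.linalg.norm(H)

# L and V assumed constant (directional light, distant viewer):
L = np.array([0.0, 0.0, 1.0])
V = np.array([0.0, 0.6, 0.8])
H = halfway_vector(L, V)              # computed once per light

def blinn_specular(N, ks, ns):
    """Specular term k_s (N . H)^ns, reusing the precomputed H."""
    return ks * max(0.0, np.dot(N, H)) ** ns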

(c) The ambient term in the Phong model is one way to guarantee that all
visible surfaces receive some light. Another possibility is to use the
"headlamp" method in which a point light source is positioned at the eye, but
no ambient term is used.

Are these two methods equivalent? If so, explain why. If not, describe a
scene in which the results would be clearly different.
ANS. THESE METHODS ARE NOT EQUIVALENT.

With a headlamp, the light received at each point depends on the surface orientation:
the diffuse term is proportional to N . L, with L pointing back toward the eye, and a
specular highlight appears where the surface faces the viewer. An ambient term, by
contrast, lights every visible surface by the same constant amount regardless of
orientation. On a flat surface the headlamp result depends on the angle the view
direction makes with the surface, so a floor seen at a grazing angle is nearly black
under the headlamp but evenly lit by an ambient term. Likewise, a sphere seen head-on
is bright at the centre and fades to black at its silhouette (where N becomes
perpendicular to the view direction) under the headlamp, while an ambient term shades
the whole sphere with one flat colour.
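
A quick numeric check of the sphere example (purely illustrative constants): the headlamp's diffuse contribution collapses near the silhouette, while an ambient term stays constant.

import numpy as np

kd, ka, I_light, I_ambient = 0.8, 0.2, 1.0, 1.0
V = np.array([0.0, 0.0, 1.0])        # headlamp: light direction == view direction

for label, N in [("sphere centre", np.array([0.0, 0.0, 1.0])),
                 ("near silhouette", np.array([0.0, 0.995, 0.1]))]:
    headlamp = kd * I_light * max(0.0, np.dot(N, V))   # diffuse only, no ambient
    ambient  = ka * I_ambient                          # constant everywhere
    print(f"{label:16s} headlamp={headlamp:.2f}  ambient={ambient:.2f}")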
Problem 1 - continued.

(d) Respond TRUE or FALSE to each of these statements and explain your
reasoning.

_____ The Phong model is a physical simulation of the behavior of real-world light.

ANS. FALSE. The Phong model is an empirical approximation; it is not derived from the
physics of light transport and does not, for example, conserve energy.

_____ For polished metal, the specular component ns would be large.

ANS. TRUE. A large exponent n_s gives a small, tight highlight, which is characteristic
of a polished, mirror-like surface.

_____ A rough surface with many tiny microfacets is likely to have a large diffuse
reflection coefficient.

ANS. TRUE. The randomly oriented microfacets scatter incoming light in many directions,
which is exactly the behavior the diffuse (Lambertian) term models.

(e) Describe the relationships between N, Li, and Ri that would result in a
point shaded with the Phong model appearing maximally bright.

ANS.

            N
      L     ^     R   V
       \    |    /   /
        \   |   /   /
         \  |  /   /
          \ | /   /
           \|/   /
    --------.--------
            P   <- point under consideration

It's important to know what these values actually are:

N = surface normal
L = unit vector from the point to the light
R = reflection of L about N (the mirror direction of the light)
V = unit vector from the point to the viewer

First, the diffuse reflection is given by the Lambertian reflection equation:

diffuse = Kd * (N dot L)

where Kd is the diffuse reflection coefficient. (N dot L) is the cosine of the angle
between N and L, so as that angle decreases, the resulting diffuse value gets higher.

Phong gave the specular reflectivity as:

diffuse + Ks * (R dot V)^n

which is:

Kd * (N dot L) + Ks * (R dot V)^n

where Kd is the diffuse coefficient and Ks is the specular coefficient. This is the
generally accepted Phong lighting equation (Ks is generally taken to be a constant,
although Phong originally defined it as a function W(i) of the angle of incidence).
As the angle between the view direction V and the reflected light direction R
decreases, the specular term increases.

So a point is shaded maximally bright when L_i is parallel to N (the light shines
straight along the normal, making N dot L_i = 1) and V is parallel to R_i (the viewer
looks straight along the reflection direction, making V dot R_i = 1); with L_i along
N, this means the light, the normal, the reflection vector, and the viewer all line
up.

(f) The equation above is not the only hallmark to Mister Phong's fame. We
also talked in class about the difference between two polygon shading
methods, one called Phong and one called Gouraud. Describe a scene where
the difference between Phong and Gouraud shading would be noticeable.
ANS. Phong shading is the more accurate of the two.

A scene where the difference is obvious: a coarsely tessellated surface (a few large
polygons) lit so that a specular highlight falls in the interior of one polygon.
Gouraud shading evaluates the lighting only at the vertices and interpolates the
resulting colours across the polygon, so the highlight is missed or smeared out;
Phong shading interpolates the normals and evaluates the lighting at every pixel, so
the highlight shows up correctly inside the polygon.

The same thing can be seen on a single flat polygon when V is computed properly per
point: the angle between the view direction and the (constant) normal changes across
the surface.

          \            N
           \           ^
       V    \          |
             \         |
      --------.--------.--------
              p1       p2

The view ray reaching p1 makes a sharper, more grazing angle with the surface than
the one reaching p2, even though N is the same at both points. Unless V (and the
lighting) is evaluated per point, the specular reflection over a flat surface comes
out the same at every point on it, which is effectively what per-vertex (Gouraud)
shading does.
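
A rough sketch of the difference along one edge of a triangle (hypothetical helper names, simple diffuse-plus-specular lighting): Gouraud lights the two endpoints and interpolates colours, Phong interpolates the normal and lights every sample.

import numpy as np

def shade(N, L, V, kd=0.6, ks=0.9, ns=50):
    """Simple Phong lighting at one point, given light and view unit vectors."""
    N = N / np.linalg.norm(N)
    R = 2.0 * np.dot(N, L) * N - L                 # reflect L about N
    return kd * max(0.0, np.dot(N, L)) + ks * max(0.0, np.dot(V, R)) ** ns

L = np.array([0.0, 0.0, 1.0])                      # light straight ahead
V = np.array([0.0, 0.0, 1.0])                      # viewer straight ahead
N0 = np.array([-0.5, 0.0, 0.87])                   # normal at one vertex
N1 = np.array([ 0.5, 0.0, 0.87])                   # normal at the other vertex

for t in np.linspace(0.0, 1.0, 5):                 # samples along the edge
    gouraud = (1 - t) * shade(N0, L, V) + t * shade(N1, L, V)   # interpolate colours
    phong   = shade((1 - t) * N0 + t * N1, L, V)                # interpolate normals
    print(f"t={t:.2f}  Gouraud={gouraud:.3f}  Phong={phong:.3f}")

At t = 0.5 the Phong value spikes (the interpolated normal points straight at the light, producing a highlight) while the Gouraud value stays at the dull vertex colour.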

Problem 2.

(a) The company you work for has just bought rights to a raytracing engine.
Unfortunately, you don’t have the source code, just a compiled library. You
have been asked to determine how rays are terminated. So, you call the
authors and find out that even they don't remember for sure. All they can tell you
is this: the termination criterion for tracing rays is either (a) rays are traced
to a maximum recursion depth of 5, or (b) rays are adaptively terminated
based on their contribution to a pixel color.

Describe a scene that can be used to determine which method is used. Be specific
about all relevant aspects of the scene and what you would look for in the resulting
image to determine which termination method is used.

ANS. A scene that distinguishes the two: place two parallel mirrors facing each other
(or a row of six or more highly reflective spheres) so that a ray can bounce many
times, and render it twice, once with the surfaces almost perfectly reflective and
once with the reflectivity turned well down. If rays are always cut off at a
recursion depth of 5, the highly reflective version shows exactly five
inter-reflections, and lowering the reflectivity does not change that count. If
termination is adaptive, the count follows the contribution: near-perfect mirrors
keep contributing and produce many more than five visible inter-reflections, while
dim mirrors cause rays to be dropped after only a couple of bounces.

Some background on why recursive rays matter at all:

Finding out the color of objects at any point on their surface is a complex problem.
The appearance of objects is essentially the result of light bouncing off of the
surfaces of objects, or traveling through them if they are transparent. Effects like
reflection and refraction are the result of light rays being refracted while
traveling through transparent surfaces, or being reflected off of the surface of a
shiny object. When we see an object, what we actually "see" are light rays reflected
by this object in the direction of our eyes. Many of the rays reflected off of the
surface of that object travel away from the eyes, though some of them may be
indirectly reflected back towards the eye by another object in the scene. The way
light is reflected by objects is a well-known and well-studied phenomenon, but this
is not a lesson on shading, so we won't get into details at this point. All we need
to know here is that to compute the color of an object at any given point on its
surface, we actually need to simulate the way light is reflected off of the surfaces
of objects. In computer graphics, this is the task of what we call a light transport
algorithm.
We could simulate the transport of light in the scene in a brute-force manner by just
applying the laws of optics and physics that describe the way light interacts with
matter; we would probably get an extremely accurate image. The problem with this
approach is that (to keep it short) it would take a very, very long time to get that
perfect result, even with the most powerful computers available today. This approach
is simply not practical.
"To truly simulate all effects of light accurately, you would basically
have to program the universe".
What light transport algorithms do is essentially find ways of simulating all sorts
of lighting scenarios in an "acceptable" amount of time while providing the best
visual accuracy possible. You can see a light transport algorithm as a strategy for
solving the light transport problem (and we all know that some strategies for solving
a given problem are better than others, and that a strategy that is good in one
scenario might not work well in another; light transport algorithms are exactly the
same, which is again why so many of them exist).
Computing the new path of a light ray that strikes the surface of an object is
something for which we have well-defined mathematical models. The main problem is
that the ray may be reflected back into the scene and strike the surface of another
object. Thus, if we want to simulate the trajectory of a light ray as it bounces from
surface to surface, what we need is a way to compute the intersection of rays with
surfaces. As you can guess, this is where ray-tracing comes into play. Ray-tracing is
ideally suited to computing the first visible surface from any given point in the
scene in any given direction. Rasterization, in contrast, is not well suited for this
task: it works well for computing the first visible surface from the camera's point
of view, but not from an arbitrary point in the scene. It can be done with
rasterization, but the process is very inefficient (readers interested in this
approach can look up the hemisphere or hemicube technique used in the radiosity
algorithm), even compared to ray-tracing, which itself has a high computational cost.

In conclusion, although these light transport algorithms involve computing the
visibility between surfaces, either technique, ray-tracing or rasterization, can be
used to do so; as explained above, ray-tracing is simply easier to use in this
particular context. For this reason, some of the most popular light transport
algorithms, such as path-tracing or bidirectional path-tracing, have become
inevitably associated with ray-tracing. In the end, this has created a confusing
situation in which ray-tracing is understood as a technique for creating
photo-realistic images. It is not. A photo-realistic computer-generated image is a
mix of a light transport algorithm and a technique to compute visibility between
surfaces. Thus, when you consider any given rendering program, the two questions you
may want to ask to understand its characteristics are: what light transport algorithm
does it use, and what technique does it use to compute visibility: rasterization,
ray-tracing, or a mix of the two?
As mentioned earlier, don't worry too much about light transport algorithms for now.
The point is that you should not confuse ray-tracing with the process of actually
finding out the color of objects in the scene. We insist on this point because
ray-tracing is often presented as a technique for producing photo-realistic images of
3D objects. It is not. The job of simulating global illumination effects belongs to
the light transport algorithm. For an in-depth introduction to light transport
algorithms, see the lesson "Introduction to Shading: The Rendering Equation and Light
Transport Algorithms" [link].
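
To make the two termination rules from the question concrete, here is a minimal, self-contained sketch of a recursive trace loop with both criteria; the constants, names, and the pretend "every ray hits a mirror" scene are illustrative, not the engine's real behavior.

MAX_DEPTH = 5             # rule (a): hard recursion limit
MIN_CONTRIBUTION = 0.01   # rule (b): adaptive cutoff on the ray's weight

def trace(depth, contribution, reflectivity, adaptive):
    """Count how many bounces survive under the two termination rules."""
    if not adaptive and depth >= MAX_DEPTH:
        return 0                          # rule (a): stop at a fixed depth
    if adaptive and contribution < MIN_CONTRIBUTION:
        return 0                          # rule (b): contribution too small to matter
    # pretend the ray hits a mirror and spawns one reflection ray
    return 1 + trace(depth + 1, contribution * reflectivity, reflectivity, adaptive)

for reflectivity in (0.95, 0.30):
    print(f"reflectivity {reflectivity}: "
          f"fixed depth -> {trace(0, 1.0, reflectivity, adaptive=False)} bounces, "
          f"adaptive -> {trace(0, 1.0, reflectivity, adaptive=True)} bounces")

With the fixed rule the bounce count is always 5; with the adaptive rule it is about 90 for near-perfect mirrors and only 4 for dull ones, which is exactly the difference the test scene is designed to expose.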
Problem 2 - continued.

(b) One of the features included in the raytracing engine your company
bought is a brand new algorithm for antialiasing by adaptive supersampling.

The normal implementation is to sample rays at the corner of every pixel, compare the
colors of each sample, and if the difference between neighboring sample colors is too
great, subdivide that region recursively and sample more times. (See the diagram
below, or Foley, et al., 15.10.4.)
[Diagram: two adjacent pixels of the image plane overlaid on the scene geometry. Rays
are cast at the pixel corners, labelled A, B, C, and D; corners A, B, and C hit the
scene geometry, while corner D misses it.]

However, in this new algorithm, we subdivide and supersample if neighboring rays
intersect different objects. In other words, note the light-grey pixel above. Three
of the four corner samples (A, B, and C) intersect the scene geometry. The fourth
corner (D) misses the geometry completely. So we choose to supersample this pixel
without ever comparing colors.

In what ways is this better than the traditional way? In what ways is it
worse?

ANS. Compared with the traditional color-threshold test, subdividing whenever
neighboring rays hit different objects is better at catching geometric edges: it
supersamples every pixel that straddles an object silhouette even when the two sides
happen to have nearly the same color, and comparing object identities is cheaper than
comparing colors. It is worse in that it wastes rays on boundaries between objects of
essentially identical appearance, and it never supersamples aliasing that occurs
within a single object, such as sharp shadow edges, texture detail, or specular
highlights, since all four corner rays intersect the same object there.

Some general background on anti-aliasing:

Anti-Aliasing
In order to solve the fundamental problem of representing continuous objects in
discrete systems, a variety of anti-aliasing techniques have evolved. The following
techniques are all methods for combating spatial aliasing.

Supersampling
Supersampling is the process of measuring an increased number of rays for every
pixel. This does not solve aliasing problems, but it does attempt to reduce the
effect they have on the final image. In the following example, nine rays are sent out
from a pixel. Six are blue and three are green, so the resulting color for the pixel
would be two-thirds blue and one-third green.

Adaptive Supersampling (Monte-Carlo Sampling)
Adaptive supersampling is an attempt to supersample in a more intelligent manner. It
starts by sending a fixed number of rays into a pixel and comparing their colors. If
the colors are similar, the program assumes that the pixel is looking at one object,
and the average of the rays is computed to be the color of that pixel. If the rays
are sufficiently different (as defined by some threshold value), then something
"interesting" is going on in that pixel, and further examination is required. In this
case, the pixel is subdivided into smaller regions, and each new region is treated as
a whole pixel. The process begins again, with the same pattern of rays being fired
into each new section, as sketched below. Once no further subdivision is necessary
(because all the sub-pixels are internally consistent), the average of the sub-pixel
colors is taken to determine the color of the pixel as a whole.
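
A minimal sketch of that recursive subdivision (the scene is reduced to a color-lookup function; the names and the threshold value are illustrative):

def color_at(x, y):
    """Stand-in for tracing a ray through image-plane point (x, y):
    green above the line y = x, blue below it."""
    return (0.0, 1.0, 0.0) if y > x else (0.0, 0.0, 1.0)

def too_different(colors, threshold=0.1):
    """True if any pair of corner colors differs by more than the threshold."""
    return any(max(abs(a - b) for a, b in zip(c1, c2)) > threshold
               for c1 in colors for c2 in colors)

def sample(x0, y0, x1, y1, depth=0, max_depth=3):
    """Adaptively supersample the square region [x0, x1] x [y0, y1]."""
    corners = [color_at(x0, y0), color_at(x1, y0), color_at(x0, y1), color_at(x1, y1)]
    if depth < max_depth and too_different(corners):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quads = [sample(x0, y0, xm, ym, depth + 1),    # recurse into the four quadrants
                 sample(xm, y0, x1, ym, depth + 1),
                 sample(x0, ym, xm, y1, depth + 1),
                 sample(xm, ym, x1, y1, depth + 1)]
        return tuple(sum(c[i] for c in quads) / 4 for i in range(3))
    return tuple(sum(c[i] for c in corners) / 4 for i in range(3))

print(sample(0.0, 0.0, 1.0, 1.0))   # a pixel straddling the edge gets subdivided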

Stochastic Ray Tracing
Unfortunately, adaptive supersampling still divides pixels into regular patterns of
rays, and suffers from the aliasing that can occur from regular pixel subdivision.
Stochastic (random) sampling sends a fixed number of rays into a pixel, but makes
sure they are randomly distributed (while still covering the area more or less
evenly). In addition, stochastic ray tracing attempts to solve the problem of
following incoming rays. On bumpy surfaces, an outgoing ray can be the product of
many incoming rays, and a ray tracer must decide which one to follow when it
recursively traces the ray back to the light source. Stochastic tracing randomly
picks an incoming ray to trace, and tries to optimize the choice by selecting rays
that lead to light sources and ignoring ones that do not.

Statistical Supersampling
All of these techniques attempt to provide a "good enough" estimate of the color of a
pixel. In order to decide what is "good enough", one sends a number of rays into a
pixel to obtain a larger picture of everything that the pixel "sees". A vast body of
statistical analysis can be used to calculate, on the basis of the distribution of
the rays and their number, the likelihood that a calculated color is actually the
true color for that pixel. For example, sending in 4 rays distributed in a certain
pattern might be calculated to be 90% accurate, using certain statistical equations.
Depending on the desired quality and speed of the renderer, one can specify what
percentages are "good enough" and what percentages require more analysis (by shooting
more rays into the pixel). This process is called statistical supersampling.
Problem 2 - continued.

(c) Your next job at your company is to go through old images in the archives
and decide how they were built. One such 3D model is shown below,
rendered from two different viewpoints. Describe the Constructive Solid
Geometry (CSG) operations that might have yielded this shape. Note that we are
looking for how CSG could have formed this shape, including the names of
the primitives used and what CSG operations were used to combine them.
We are not expecting you to generate the transformations (scale, rotate,
translate) that were used to move each primitive into position.
Problem 3.

(a) Respond TRUE or FALSE to each of these statements and explain your
reasoning.

_____ Mean filters can be used for smoothing an image as well as reducing noise.

TRUE. Averaging over a neighborhood both blurs the image and suppresses small random
variations (noise).

_____ When using a mean filter, increasing the size of the filter sharpens the image.

FALSE. A larger mean filter averages over a bigger neighborhood, which blurs the image
more, not less.

_____ A median filter does a better job of throwing out impulse and salt and pepper
noise than a mean filter.

TRUE. An isolated outlier pixel is simply discarded by the median, whereas a mean
filter smears a fraction of the outlier's value into every pixel of the neighborhood.

_____ A median filter is a type of convolution filter.

FALSE. A convolution filter computes a fixed weighted sum of the neighborhood; the
median is a non-linear rank operation and cannot be expressed as a convolution kernel.
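
A tiny illustration of the salt-and-pepper case (illustrative 1-D signal, 3-sample windows):

from statistics import mean, median

signal = [10, 10, 10, 255, 10, 10, 10]        # one "salt" impulse in a flat region

mean_filtered   = [round(mean(signal[i-1:i+2]))   for i in range(1, len(signal) - 1)]
median_filtered = [round(median(signal[i-1:i+2])) for i in range(1, len(signal) - 1)]

print("mean:  ", mean_filtered)     # the impulse is spread over three samples
print("median:", median_filtered)   # the impulse is removed entirely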
Problem 3 - continued.

(b) Convolution filtering can modify images in a variety of ways. Describe the
expected effect of filtering an image using the following convolution kernel. Justify
your answer.

 1 3  1
 3 16  3
 
  1 3  1

ANS.
Convolution is the process of adding each element of the image to its local
neighbors, weighted by the kernel. This is related to a form of mathematical
convolution. Note that the matrix operation being performed (convolution) is not
traditional matrix multiplication, despite being similarly denoted by *.

For example, if we have two three-by-three matrices, the first a kernel and the
second an image piece, convolution is the process of flipping both the rows and
columns of the kernel and then multiplying locally similar entries and summing. The
element at coordinates [2, 2] (that is, the central element) of the resulting image
would be a weighted combination of all the entries of the image matrix, with weights
given by the kernel. The other entries are similarly weighted: we position the center
of the kernel on each point of the image and compute a weighted sum.

The value of a given pixel in the output image is calculated by multiplying each
kernel value by the corresponding input image pixel value and summing the results.
This can be written directly in code:

def convolve(image, kernel):
    """Convolve a 2-D image (list of rows) with a 2-D kernel.
    Output pixels whose window would fall outside the image are skipped (cropped)."""
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2                       # kernel origin (its center)
    output = []
    for y in range(oy, len(image) - oy):            # each image row in the input image
        row = []
        for x in range(ox, len(image[0]) - ox):     # each pixel in the image row
            acc = 0.0                               # set accumulator to zero
            for ky in range(kh):                    # each kernel row
                for kx in range(kw):                # each element in the kernel row
                    # corresponding input pixel, found relative to the kernel's origin
                    acc += kernel[ky][kx] * image[y + ky - oy][x + kx - ox]
            row.append(acc)                         # set output pixel to accumulator
        output.append(row)
    return output
If the kernel is symmetric (as it is here), place the center (origin) of the kernel
on the current pixel; the kernel then overlaps the neighboring pixels as well.
Multiply each kernel element by the pixel value it overlaps and add up the results;
the sum is the new value for the pixel under the center of the kernel. (Because the
kernel is symmetric, flipping it for a true convolution changes nothing.)

(c) Some image I(i,j) is given as an array of greyscale values, each between 0 and 1.
If we filter this image with the convolution kernel in part (b), we get a new image
I'(i,j). What are the maximum and minimum values the filtered image I'(i,j) can take
on at a particular pixel?

ANS. The extremes occur when the pixels under the positive kernel entries are all 1
and those under the negative entries are all 0, or vice versa: the maximum is the sum
of the positive kernel weights and the minimum is the sum of the negative weights.
For the kernel in part (b) that gives a maximum of 3 + 16 + 3 = 22 and a minimum of
-(1 + 1 + 1 + 1 + 3 + 3) = -10, so the result must be rescaled or clamped to bring it
back into [0, 1].
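
A quick check of those extremes, using the kernel as reconstructed in part (b) (if the kernel's signs differ, the same reasoning applies with its own positive and negative sums):

kernel = [[-1,  3, -1],
          [-3, 16, -3],
          [-1,  3, -1]]

weights = [w for row in kernel for w in row]
print(sum(w for w in weights if w > 0))   # maximum: 22
print(sum(w for w in weights if w < 0))   # minimum: -10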

Edge Handling
Kernel convolution usually requires values from pixels outside of the image
boundaries. There are a variety of methods for handling image edges.
Extend
The nearest border pixels are conceptually extended as far as necessary to provide values for
the convolution. Corner pixels are extended in 90° wedges. Other edge pixels are extended in
lines.
Wrap
The image is conceptually wrapped (or tiled) and values are taken from the opposite edge or
corner.
Mirror
The image is conceptually mirrored at the edges. For example, attempting to read a pixel 3
units outside an edge reads one 3 units inside the edge instead.
Crop
Any pixel in the output image which would require values from beyond the edge is skipped. This
method can result in the output image being slightly smaller, with the edges having been
cropped.

Normalization
Normalization is defined as the division of each element in the kernel by the sum of
all kernel elements, so that the sum of the elements of a normalized kernel is one.
This will ensure the average pixel in the modified image is as bright as the average
pixel in the original image.
Problem 3 - continued.

(d) Why might the following convolution kernel be a good choice for
smoothing images taken from interlaced video?

1 1 1
1 
8 16 8
38  
1 1 1

ANS. The kernel is a smoothing (averaging) filter that is already normalized: its
nine weights sum to 38, which the leading 1/38 factor divides out, so filtering
preserves the average brightness of the image. (Normalization is defined as the
division of each element in the kernel by the sum of all kernel elements, so that the
sum of the elements of a normalized kernel is one; this ensures the average pixel in
the modified image is as bright as the average pixel in the original image.) In
addition, the large weights (8 and 16) sit on the pixel's own scanline while the rows
above and below get only weight 1, so most of the smoothing happens within a single
scanline; vertically adjacent lines, which in interlaced video belong to the other
field and were captured at a different instant, are mixed in only slightly.

(e) Suppose you had a digital photograph where the camera was moving
downward at the exact instant the picture was taken. That is, everything in
the image is blurred a little bit vertically, but not horizontally. Devise a
convolution operator to sharpen this image. Note that a general “sharpen”
filter would work, but for this special case, it can be done better.

ANS. Since the blur is purely vertical, sharpen only in the vertical direction. A
one-dimensional column kernel such as

-1
 3
-1

(for example; any center-heavy column kernel whose weights sum to 1 will do) boosts
vertical intensity differences and so counteracts the vertical smearing, while
leaving each row untouched. A general sharpening kernel would also sharpen
horizontally, amplifying horizontal noise and edges that were never blurred in the
first place, so the purely vertical operator does better for this special case.
