
Back-face Culling

Back-face culling is a method in computer graphics programming which determines whether
a polygon of a graphical object is visible; if it is not, the polygon is culled from the rendering
process, which increases efficiency by reducing the number of polygons that the hardware
has to draw. For example, in a city street scene, there is generally no need to draw the
polygons on the sides of the buildings facing away from the camera, since they are completely
occluded by the sides facing the camera. Back-face culling can be assumed to produce no
visible artifacts in a rendered scene if the scene contains only closed and opaque geometry.
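
As a rough illustration (my own sketch, not from any particular engine), the test reduces to
checking whether a triangle's normal points away from the camera:

    def is_back_facing(v0, v1, v2, camera_pos):
        # Face normal from the cross product of two edges (counter-clockwise winding assumed).
        e1 = [v1[i] - v0[i] for i in range(3)]
        e2 = [v2[i] - v0[i] for i in range(3)]
        normal = [e1[1]*e2[2] - e1[2]*e2[1],
                  e1[2]*e2[0] - e1[0]*e2[2],
                  e1[0]*e2[1] - e1[1]*e2[0]]
        # Vector from the triangle towards the camera.
        to_camera = [camera_pos[i] - v0[i] for i in range(3)]
        # If the normal points away from the camera, the face can be culled.
        dot = sum(normal[i] * to_camera[i] for i in range(3))
        return dot <= 0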

View Frustum
The word frustum refers to a solid shape that looks like a pyramid with the top cut off parallel
to the base. This is the shape of the region that can be seen and rendered by a perspective
camera.
A cross-section of the view frustum at a certain distance from the camera defines a rectangle in
world space that frames the visible area. It is sometimes useful to calculate the size of this
rectangle at a given distance, or find the distance where the rectangle is a given size. For
example, if a moving camera needs to keep an object (such as the player) completely in shot at
all times then it must not get so close that part of that object is cut off.

http://docs.unity3d.com/Manual/FrustumSizeAtDistance.html
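
Using the standard perspective-projection relationship (the same one the page above describes),
the size of that rectangle can be computed from the vertical field of view, and the calculation can
be inverted to find the distance for a given size:

    import math

    def frustum_height_at_distance(distance, vertical_fov_deg):
        # Height of the visible rectangle at the given distance from a perspective camera.
        return 2.0 * distance * math.tan(math.radians(vertical_fov_deg) * 0.5)

    def distance_for_frustum_height(height, vertical_fov_deg):
        # Inverse: how far the camera must be for the rectangle to have this height.
        return height * 0.5 / math.tan(math.radians(vertical_fov_deg) * 0.5)
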
Dolly zoom is the well-known visual effect where the camera simultaneously moves towards
a target object and zooms out from it. The result is that the object appears roughly the same
size, but all the other objects in the scene change perspective. Done subtly, dolly zoom has the
effect of highlighting the target object, since it is the only thing in the scene that isn't shifting
position in the image. Alternatively, the zoom can be deliberately performed quickly to create an
impression of disorientation.

Occlusion culling
Occlusion Culling is a feature that disables rendering of objects when they are not currently
seen by the camera because they are obscured (occluded) by other objects.
Occlusion culling is different from frustum culling. Frustum culling only disables the renderers for
objects that are outside the camera's viewing area, but does not disable anything hidden from
view by overdraw. The occlusion culling process will go through the scene using a virtual
camera to build a hierarchy of potentially visible sets of objects. This data is used at runtime by
each camera to identify what is visible and what is not. Equipped with this information, Unity will
ensure only visible objects get sent to be rendered. This reduces the number of draw calls and
increases the performance of the game.
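
The runtime use of that baked data can be pictured as a lookup from the camera's current cell to
a potentially visible set (a purely conceptual Python sketch with made-up names; Unity's real
data structures are more elaborate):

    # Baked offline: for each cell of the scene, the set of objects that could possibly
    # be seen from somewhere inside that cell.
    potentially_visible = {
        "cell_a": {"wall_1", "crate_3", "lamp_2"},
        "cell_b": {"wall_1", "door_7"},
    }

    def objects_to_render(camera_cell, frustum_visible_objects):
        # Combine frustum culling with the baked occlusion data: only draw objects that
        # are both inside the view frustum and in the current cell's potentially visible set.
        return frustum_visible_objects & potentially_visible.get(camera_cell, set())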

Setting up Occlusion Culling


In order to use Occlusion Culling, there is some manual setup involved. First, your level
geometry must be broken into sensibly sized pieces. It is also helpful to lay out your levels as
small, well-defined areas that are occluded from each other by large objects such as walls,
buildings, etc. The idea here is that each individual mesh will be turned on or off based on the
occlusion data. So if you have one object that contains all the furniture in your room, then either
all or none of the entire set of furniture will be culled. This doesn't make nearly as much sense as
making each piece of furniture its own mesh, so each can individually be culled based on the
camera's viewpoint.
You need to tag all scene objects that you want to take part in occlusion as Occluder
Static or Occludee Static in the Inspector. The fastest way to do this is to multi-select the objects
you want to be included in occlusion calculations, and mark them as Occluder Static and
Occludee Static.

http://docs.unity3d.com/Manual/OcclusionCulling.html

Rendering Process
Rendering is the process of generating an image from a 2D or 3D model (or models in what
collectively could be called a scene file) by means of computer programs. Also, the results of
such a model can be called a rendering. A scene file contains objects in a strictly defined
language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading

information as a description of the virtual scene. The data contained in the scene file is then
passed to a rendering program to be processed and output to a digital image or graphics
image file. The term "rendering" may be by analogy with an "artist's rendering" of a scene.
Though the technical details of rendering methods vary, the general challenges to overcome in
producing a 2D image from a 3D representation stored in a scene file are outlined as
the graphics pipeline along a rendering device, such as a GPU. A GPU is a purpose-built
device able to assist a CPU in performing complex rendering
calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the
rendering software should solve the rendering equation. The rendering equation doesn't account
for all lighting phenomena, but is a general lighting model for computer-generated imagery.
'Rendering' is also used to describe the process of calculating effects in a video editing program
to produce final video output.
https://en.wikipedia.org/wiki/Rendering_(computer_graphics)
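
For reference, the rendering equation mentioned above is usually written (in standard notation,
not quoted from the page) as:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where L_o is the radiance leaving point x in direction \omega_o, L_e is the emitted radiance,
f_r is the BRDF of the surface, L_i is the incoming radiance and n is the surface normal.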

Radiosity

Radiosity is a method of rendering based on a detailed analysis of light reflections off
diffuse surfaces. The images that result from a radiosity renderer are characterized by soft,
gradual shadows. In 3D computer graphics, radiosity is an application of the finite element
method to solving the rendering equation for scenes with surfaces that reflect light diffusely.
Unlike rendering methods that use Monte Carlo algorithms (such as path tracing), which handle
all types of light paths, typical radiosity only accounts for paths (represented by the code "LD*E")
which leave a light source and are reflected diffusely some number of times (possibly zero)
before hitting the eye. Radiosity is a global illumination algorithm in the sense that the
illumination arriving on a surface comes not just directly from the light sources, but also from
other surfaces reflecting light. Radiosity is viewpoint independent, which increases the
calculations involved, but makes them useful for all viewpoints.
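
In its classical discrete form (standard notation, not quoted from the notes), the radiosity B_i of
patch i satisfies:

    B_i = E_i + \rho_i \sum_j F_{ij} B_j

where E_i is the patch's own emission, \rho_i its diffuse reflectance and F_{ij} the purely
geometric form factor describing how much of the light leaving one patch reaches the other;
the resulting linear system is typically solved iteratively.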

Ray Tracing
In computer graphics, ray tracing is a technique for generating an image by tracing the path
of light through pixels in an image plane and simulating the effects of its encounters with
virtual objects.
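
As a minimal sketch of the idea (my own example), the classic ray-sphere intersection test that
a simple ray tracer performs for each pixel's ray looks like this:

    import math

    def ray_sphere_hit(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t >= 0.
        # 'direction' is assumed to be normalised.
        oc = [origin[i] - center[i] for i in range(3)]
        b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
        c = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0:
            return None                      # the ray misses the sphere
        t = (-b - math.sqrt(disc)) / 2.0     # nearest intersection distance along the ray
        return t if t >= 0 else None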

Lighting
Lighting is one of the most important elements in an environment. It helps the scene
enormously and adds more realism to the game. Lighting creates atmosphere; it can set the
scene and can also affect the mood of the game. The most basic rule of lighting is the
source: it is impossible to have lighting in a scene without a light source. Light sources can
be anything: lamps, torches, the sun, the moon, etc. Colour is a very important aspect of
lighting and definitely the most complex. Colour can shape the atmosphere and the emotions
associated with a scene.

Textures
Textures are hugely important in making 3D scenes look real. They are essentially small
images that are mapped onto the polygons of an object or area in a scene. Lots of
textures can take up a lot of memory, so it helps to manage their size with various
techniques.

Fogging
Fog is a visual effect that most game engines these days can handle. It comes in
very handy for fading out the world in the distance, so you do not see models and scene
geometry popping into view as they come into visual range by crossing the far
clipping plane.
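
A common way to implement distance fog (a sketch of the general idea, not any particular
engine's code) is to blend each pixel towards the fog colour based on its distance from the
camera:

    def apply_linear_fog(pixel_colour, fog_colour, distance, fog_start, fog_end):
        # 0 = no fog (closer than fog_start), 1 = fully fogged (at or beyond fog_end).
        factor = (distance - fog_start) / (fog_end - fog_start)
        factor = max(0.0, min(1.0, factor))
        # Linear blend between the scene colour and the fog colour.
        return tuple(p * (1.0 - factor) + f * factor
                     for p, f in zip(pixel_colour, fog_colour))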

Week 2

Anti-aliasing
This is a technique used to smooth otherwise jagged edges by blending the colour
of an edge with the colour of the pixels around it. There are different types of anti-aliasing:
SSAA - Supersampling anti-aliasing was the first type of anti-aliasing available. It is
mostly used on photorealistic images, but isn't common in games now due to how
much processing power it uses.
MSAA - Multisample anti-aliasing is a more common type of anti-aliasing in modern
video games. MSAA differs from SSAA in that it only smooths out the edges of
polygons, nothing else. This cuts down the processing power required compared to SSAA,
but doesn't solve pixelated textures.
Anti-aliasing has become less and less necessary as graphics get better and
monitor resolutions increase.
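
To make the supersampling idea concrete, SSAA amounts to rendering at a higher resolution
and averaging blocks of samples back down; a rough greyscale-only sketch (my own example):

    def downsample_2x(hires):
        # 'hires' is a 2D list of greyscale values rendered at twice the target resolution.
        # Each output pixel is the average of a 2x2 block, which smooths jagged edges.
        h, w = len(hires) // 2, len(hires[0]) // 2
        return [[(hires[2*y][2*x] + hires[2*y][2*x+1] +
                  hires[2*y+1][2*x] + hires[2*y+1][2*x+1]) / 4.0
                 for x in range(w)]
                for y in range(h)]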

Vertex Shader
In the field of computer graphics, a shader is a computer program that is used to do shading:
the production of appropriate levels of colour within an image, or, in the modern era, also to
produce special effects or do video post-processing.
Shaders calculate rendering effects on graphics hardware with a high degree of flexibility. Most
shaders are coded for a graphics processing unit (GPU), though this is not a strict requirement.
Shading languages are usually used to program the programmable GPU rendering pipeline,
which has mostly superseded the fixed-function pipeline that allowed only common geometry
transformation and pixel-shading functions; with shaders, customized effects can be used. The
position, hue, saturation, brightness, and contrast of all pixels, vertices, or textures used to
construct a final image can be altered on the fly, using algorithms defined in the shader, and can
be modified by external variables or textures introduced by the program calling the shader.

https://en.wikipedia.org/wiki/Shader

A vertex shader is a GPU component that is programmed using a specific assembly-like
language, much like pixel shaders, but is oriented towards the scene geometry. It has
many functions, such as adding cartoony silhouette edges to objects. Vertex shaders
calculate per-vertex shading effects in a 3D environment.
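
Conceptually (a plain Python sketch of what this GPU stage does, not real shader code), a
vertex shader transforms each vertex by the model-view-projection matrix before the geometry
is rasterised:

    def vertex_shader(position, mvp):
        # position is (x, y, z); mvp is a 4x4 model-view-projection matrix (row-major).
        x, y, z, w = position[0], position[1], position[2], 1.0
        out = [mvp[r][0]*x + mvp[r][1]*y + mvp[r][2]*z + mvp[r][3]*w for r in range(4)]
        # The GPU later divides by the w component to get normalised device coordinates.
        return out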

Pixel Shader

A pixel shader is also a GPU component, but it is programmed to operate on a per-pixel basis
and handles effects such as lighting and bump mapping.
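
Again as a conceptual Python sketch rather than real shader code, a simple per-pixel lighting
calculation (Lambert diffuse) run for every fragment could look like:

    def pixel_shader(base_colour, normal, light_dir, light_colour):
        # All vectors are assumed to be normalised. The dot product measures how directly
        # the surface at this pixel faces the light; clamp to zero for back-lit pixels.
        ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
        return tuple(c * lc * ndotl for c, lc in zip(base_colour, light_colour))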

Animation systems
Inverse kinematics-

This refers to the use of the kinematics equations of a robot to determine the joint
parameters that provide a desired position of the end-effector. Specification of the
movement of a robot so that the end-effector achieves a desired task is known
as motion planning.
The movement of a kinematic chain, whether it is a robot or an animated character, is
modeled by the kinematics equations of the chain. These equations define the configuration
of the chain in terms of its joint parameters. Forward kinematics uses the joint parameters to
compute the configuration of the chain, and inverse kinematics reverses this calculation to
determine the joint parameters that achieve a desired configuration.[2][3][4]
For example, inverse kinematics formulas allow calculation of the joint parameters that
position a robot arm to pick up a part. Similar formulas determine the positions of the
skeleton of an animated character that is to move in a particular way.
Inverse kinematics is important to game programming and 3D animation, where it is used to
connect game characters physically to the world, such as feet landing firmly on top of
terrain.
It is often easier for computer-based designers, artists and animators to define the spatial
configuration of an assembly or figure by moving parts, or arms and legs, rather than directly
manipulating joint angles. Therefore, inverse kinematics is used in computer-aided design
systems to animate assemblies and by computer-based artists and animators to position
figures and characters.

https://en.wikipedia.org/wiki/Inverse_kinematics
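
As a small worked example (my own, not from the article), the joint angles of a two-bone planar
arm reaching for a target can be computed analytically with the law of cosines:

    import math

    def two_bone_ik(target_x, target_y, len1, len2):
        # Returns (shoulder_angle, elbow_angle) so that the tip of the second bone
        # reaches the target, assuming the shoulder sits at the origin.
        dist = math.hypot(target_x, target_y)
        dist = min(dist, len1 + len2 - 1e-9)          # clamp unreachable targets
        # Law of cosines for the elbow bend angle.
        cos_elbow = (dist**2 - len1**2 - len2**2) / (2 * len1 * len2)
        elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
        # Shoulder angle: direction to the target minus the offset caused by the bent elbow.
        shoulder = math.atan2(target_y, target_x) - math.atan2(len2 * math.sin(elbow),
                                                               len1 + len2 * math.cos(elbow))
        return shoulder, elbow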

Forward kinematics
Forward kinematics refers to the use of the kinematics equations of a robot to compute the
position of the end-effector from specified values for the joint parameters. The kinematics
equations of the robot are used in robotics, computer games and animation.

The forward kinematic equations can be used as a method in 3D computer graphics for
animating models.
The essential concept of forward kinematic animation is that the positions of particular parts of
the model at a specified time are calculated from the position and orientation of the object,
together with any information on the joints of an articulated model. So for example if the object to
be animated is an arm with the shoulder remaining at a fixed location, the location of the tip of
the thumb would be calculated from the angles of
the shoulder, elbow, wrist, thumb and knuckle joints. Three of these joints (the shoulder, wrist and
the base of the thumb) have more than one degree of freedom, all of which must be taken into
account. If the model were an entire human figure, then the location of the shoulder would also
have to be calculated from other properties of the model.

https://en.wikipedia.org/wiki/Forward_kinematics
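
For comparison with the inverse case above, the forward calculation for the same two-bone
planar arm is direct (again an illustrative sketch of my own):

    import math

    def two_bone_fk(shoulder, elbow, len1, len2):
        # Accumulate positions joint by joint from the fixed shoulder at the origin.
        elbow_x = len1 * math.cos(shoulder)
        elbow_y = len1 * math.sin(shoulder)
        tip_x = elbow_x + len2 * math.cos(shoulder + elbow)
        tip_y = elbow_y + len2 * math.sin(shoulder + elbow)
        return (elbow_x, elbow_y), (tip_x, tip_y)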

Week 3

Particle Systems
A particle system is a technique in game physics and computer graphics that uses a large
number of very small sprites, 3D models, or other graphic objects to simulate certain kinds of
`fuzzy` phenomena, which are otherwise very hard to reproduce with conventional rendering
techniques.
Particles are small, simple images or meshes that are displayed and moved in great numbers
by a particle system. Each particle represents a small portion of a fluid or amorphous entity and
the effect of all the particles together creates the impression of the complete entity. Using a
smoke cloud as an example, each particle would have a small smoke texture resembling a tiny

cloud in its own right. When many of these mini-clouds are arranged together in an area of the
scene, the overall effect is of a larger, volume-filling cloud.

http://docs.unity3d.com/Manual/PartSysWhatIs.html
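
A minimal update loop (my own sketch, not Unity's particle system) shows the idea: each
particle is just a position, a velocity and a remaining lifetime, updated every frame and removed
when it expires:

    import random

    def spawn(origin):
        # Each particle is a small dictionary; a real system would also carry colour, size, etc.
        return {"pos": list(origin),
                "vel": [random.uniform(-1, 1), random.uniform(1, 3), random.uniform(-1, 1)],
                "life": random.uniform(1.0, 2.0)}

    def update(particles, dt, gravity=-9.81):
        for p in particles:
            p["vel"][1] += gravity * dt          # simple per-particle physics
            p["pos"] = [c + v * dt for c, v in zip(p["pos"], p["vel"])]
            p["life"] -= dt
        # Drop dead particles; the emitter would spawn replacements each frame.
        return [p for p in particles if p["life"] > 0]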

Week 4
Systems

Sound
Networking

Week 5
Physics
Video game physics involves the introduction of the laws of physics into a simulation or
game engine. Its purpose is to make the effects appear more real to the player. Simulation
physics includes the physics engine, program code that is used to simulate Newtonian
physics within the environment, and collision detection, used to solve the
problem of determining when any two or more physical objects in the environment cross each
other's path.
https://en.wikipedia.org/wiki/Game_physics

In video game physics, we want to animate objects on screen and give them realistic physical
behavior. This is achieved with physics-based procedural animation, which is animation
produced by numerical computations applied to the theoretical laws of physics.
Animations are produced by displaying a sequence of images in succession, with objects
moving slightly from one image to the next. When the images are displayed in quick
succession, the effect is an apparent smooth and continuous movement of the objects.
Thus, to animate the objects in a physics simulation, we need to update the physical state of
the objects (e.g. position and orientation), according to the laws of physics multiple times per
second, and redraw the screen after each update.
https://www.toptal.com/game/video-game-physics-part-i-an-introduction-to-rigid-body-dynamics
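
Concretely, one simple way to perform such an update step (a sketch using semi-implicit Euler
integration; the structure and names are my own) is:

    def physics_step(bodies, dt, gravity=(0.0, -9.81, 0.0)):
        # Each body holds position, velocity and mass; the only force here is gravity.
        for b in bodies:
            # Semi-implicit Euler: update velocity first, then position.
            b["vel"] = [v + g * dt for v, g in zip(b["vel"], gravity)]
            b["pos"] = [p + v * dt for p, v in zip(b["pos"], b["vel"])]

    # Called many times per second, followed by a redraw of the screen.
    bodies = [{"pos": [0.0, 10.0, 0.0], "vel": [1.0, 0.0, 0.0], "mass": 1.0}]
    for frame in range(60):
        physics_step(bodies, dt=1.0 / 60.0)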

Artificial intelligence
In video games, artificial intelligence is used to generate intelligent behaviours, primarily in
non-playable characters (NPCs), often simulating human-like intelligence. AI is so important in
video games because it is what keeps the game in motion. Friendly AI are put in place
so the player can interact with them, but also because they add life to the scene. Well-
programmed AI will react to your actions; e.g. GTA V has AI that will react to your actions,
such as calling the police if you attack someone. Some games now have very
intelligent enemy AI which will fight back when you trigger an event. In some games, such as
The Last of Us, the AI is very unpredictable and will even try to flank your cover.
AI works differently between games, as it depends on the type of game it is; e.g. a stealth game
like Metal Gear Solid has very good stealth mechanics, so the enemy AI is
programmed to go into stealth as well. This challenges the player by making them think more
about how to approach the level. AI is now more sophisticated, to the point where enemies can
dodge bullets and counter-attack in combat. They will also work together in some cases
when you are out of their line of sight, as they can team up.

Week 6
World navigation and pathfinding
The Navigation system allows you to create characters that can intelligently move in the game
world. The navigation system uses navigation meshes to reason about the environment. The
navigation meshes are created automatically from your Scene geometry. Dynamic obstacles
allow you to alter the navigation of the characters at runtime, and off-mesh links let you build
specific actions such as opening doors, or jumping down from a ledge. This section describes
Unity's navigation and pathfinding in detail.
The Navigation System allows you to create characters which can navigate the game world.
It gives your characters the ability to understand that they need to take the stairs to reach the
second floor, or to jump to get over a ditch. The Unity NavMesh system consists of the
following pieces:

NavMesh (short for Navigation Mesh) is a data structure which describes the
walkable surfaces of the game world and allows a path to be found from one walkable location
to another in the game world. The data structure is built, or baked, automatically from your level
geometry.

The NavMesh Agent component helps you to create characters which avoid each other
while moving towards their goal. Agents reason about the game world using the NavMesh,
and they know how to avoid each other as well as moving obstacles.
The Off-Mesh Link component allows you to incorporate navigation shortcuts which
cannot be represented using a walkable surface. For example, jumping over a ditch or a
fence, or opening a door before walking through it, can all be described as Off-Mesh Links.
The NavMesh Obstacle component allows you to describe moving obstacles the agents
should avoid while navigating the world. A barrel or a crate controlled by the physics system
is a good example of an obstacle. While the obstacle is moving the agents do their best to
avoid it, but once the obstacle becomes stationary it will carve a hole in the NavMesh so that
the agents can change their paths to steer around it, or if the stationary obstacle is blocking
the pathway, the agents can find a different route.
http://docs.unity3d.com/Manual/nav-NavigationSystem.html
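
Unity bakes and searches the NavMesh for you, but the underlying idea of pathfinding can be
illustrated with a generic A* search over a walkable grid (a conceptual sketch of my own, not
Unity's actual implementation):

    import heapq

    def astar(grid, start, goal):
        # grid[y][x] == 0 means walkable. Returns a list of (x, y) cells or None if unreachable.
        def h(a, b):
            return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan-distance heuristic
        frontier = [(h(start, goal), start)]
        came_from = {start: None}
        cost = {start: 0}
        while frontier:
            _, node = heapq.heappop(frontier)
            if node == goal:
                path = []
                while node is not None:                  # rebuild the path by walking parents
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            x, y = node
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                nx, ny = nxt
                if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                    new_cost = cost[node] + 1
                    if new_cost < cost.get(nxt, float("inf")):
                        cost[nxt] = new_cost
                        came_from[nxt] = node
                        heapq.heappush(frontier, (new_cost + h(nxt, goal), nxt))
        return None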

Week 7
Neural nets
In machine learning and cognitive science, artificial neural networks (ANNs) are a family
of models inspired by biological neural networks (the central nervous systems of animals, in
particular the brain) and are used to estimate or approximate functions that can depend on a
large number of inputs and are generally unknown.
For example, a neural network for handwriting recognition is defined by a set of input
neurons which may be activated by the pixels of an input image. After being weighted and
transformed by a function (determined by the network's designer), the activations of these
neurons are then passed on to other neurons. This process is repeated until finally, an
output neuron is activated. This determines which character was read.

https://en.wikipedia.org/wiki/Artificial_neural_network
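
To make the "weighted and transformed" step concrete, here is a tiny forward pass in Python
(an illustrative sketch with made-up weights, not a trained network):

    import math

    def layer(inputs, weights, biases):
        # Each output neuron sums its weighted inputs, adds a bias and applies a sigmoid.
        return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
                for ws, b in zip(weights, biases)]

    # Two inputs -> two hidden neurons -> one output neuron (weights chosen arbitrarily).
    hidden = layer([0.8, 0.2], weights=[[0.5, -0.4], [0.3, 0.9]], biases=[0.1, -0.2])
    output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
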
For games, neural networks offer some key advantages over more traditional AI techniques.
First, using a neural network may allow game developers to simplify the coding of complex
state machines or rules-based systems by relegating key decision-making processes to one

or more trained neural networks. Second, neural networks offer the potential for the game's
AI to adapt as the game is played. This is a rather intriguing possibility and is a very popular
subject in the game AI community at this time.
http://www.onlamp.com/pub/a/onlamp/2004/09/30/AIforGameDev.html

Week 8
Fuzzy logic
Fuzzy logic is a superset of conventional logic that has been extended to handle
the concept of partial-truth values between the Boolean dichotomy of true and
false. Fuzzy logic usually takes the form of a fuzzy reasoning system and its
components are fuzzy variables, fuzzy rules and a fuzzy inference engine.

http://www.academia.edu/6850195/The_use_of_Fuzzy_Logic_for_Artificial_Intelligence_in_Games
Fuzzy logic can be useful to game AI in several aspects. Among
other uses, it can be used for NPC decision making such as item or
weapon selection, for the control of unit movement similar to what
happens with control systems, for enabling an AI opponent to assess threats,
and for classification, for example by ranking players and NPCs in terms
of health or power using fuzzy variables.
http://www.academia.edu/6850195/The_use_of_Fuzzy_Logic_for_Artificial_Intelligence_in_Games
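
As a toy example of the idea (mine, not from the paper above), an NPC's health and the
distance to an enemy can be turned into fuzzy membership values and combined with a rule:

    def low_health(h):       # membership rises as health drops below 50
        return max(0.0, min(1.0, (50.0 - h) / 50.0))

    def enemy_near(d):       # membership rises as distance drops below 20 metres
        return max(0.0, min(1.0, (20.0 - d) / 20.0))

    def flee_desire(health, distance):
        # Fuzzy rule: IF health is low AND an enemy is near THEN flee.
        # 'AND' is taken as the minimum of the two membership values.
        return min(low_health(health), enemy_near(distance))

    print(flee_desire(health=20, distance=5))   # strong desire to flee, 0.6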

Week 9
Middleware
