
Digital Cinematography

Return of the Points


Reality is out there.


We perceive reality through some mediation, and our perception is always continuous. Anything that is a true or analogous recording of reality is known as analog. An image can also be made in a discrete way; pointillist painters like Seurat always knew that. In electronic terms, the presence or absence of a black dot corresponds to the presence or absence of an electric charge at a particular area in the picture frame. For convenience, the absence or presence of the charge can be referred to as 0 or 1. In mechanical-electrical devices this means switch-off or switch-on.

Information from the real world is carried over to the human sensory organs through some mode of communication.
Such communication packages are commonly known as signals.

A signal can be time-varying or space-varying, depending on what it represents. To remap the signal to the digital domain, we take samples at discrete points, at a regular time or space interval. In principle, this is the same as the scaling factor of an atlas, but an atlas map is analog (continuous) in nature, while digital mapping is discrete.
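The idea of taking samples at a regular interval can be sketched in a few lines of Python; the 2 Hz sine and the 0.05 s sample period below are assumptions chosen purely for illustration:

```python
import math

def f(t):
    # a 2 Hz sine wave stands in for the continuous "analog" signal
    return math.sin(2 * math.pi * 2 * t)

T = 0.05                                   # sample every 0.05 s (20 samples/s)
samples = [f(n * T) for n in range(20)]    # one discrete value per interval

print(len(samples))  # 20 discrete values now represent the continuum
```

Each entry of `samples` is the value of the signal at one discrete point; everything between two sample points is thrown away.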

Signals can be continuous like the real world, or they can be broken down into pieces for easy transport and copying (from one system to another).


The big question is how the sampling is done for any analog signal, especially image and video signals.


Sampling, the first step in translating an analog signal to the digital domain, is the process of remapping a signal that is continuous through time and/or space to a number of discrete values (the position and orientation of the signal in the domain, taken at a regular time and/or space interval).

That can be transcoded as the presence or absence of the signal in the domain (sometimes known as the matrix) at a particular point, and stored as an electric charge.


If the signal is not present at a particular point in the matrix, it will be a zero (no charge), while presence will be shown as a 1. The value of a signal at a particular point in the matrix depends on its position in the matrix and its direction vectors, much like the graphs of the equations we used to draw in school. The fidelity of the digital signal depends on how frequently samples are taken from its analog form.

Normally, an image or video signal can be closely reconstructed in the digital domain if the sampling rate is higher than twice the highest frequency in the signal.


Now what does that mean?


We all know how an aliased image looks, and anti-aliasing is not really the solution. Aliasing arises when the signal is discretely sampled at a rate insufficient to capture the changes in the signal (change over time or space). The only real solution is to meet the minimum sampling rate. For most practical purposes, a sampling rate of twice the highest frequency in the signal is enough (or simply twice that frequency, if the signal is uniform throughout).


What happens if we take samples at a rate lower than twice the signal's highest frequency?

We can see that the nature of the wave can partially, or completely, change if we do not follow this simple rule of sampling. The reconstruction of the signal is now false, and playing back this signal as an image or video will definitely present wrong information, in terms of the presence of the signal at a point, its value and its direction. All such wrong information is collectively known as aliasing.
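A quick numeric sketch of this failure, assuming an 8 Hz cosine and a rate of only 10 samples per second (below the required 16):

```python
import math

fs = 10.0                      # sampling rate: too low for an 8 Hz wave
f_true, f_alias = 8.0, 2.0     # the real frequency and its alias |8 - 10| = 2 Hz

# compare the samples the two waves would produce at this rate
diffs = [abs(math.cos(2 * math.pi * f_true * n / fs) -
             math.cos(2 * math.pi * f_alias * n / fs))
         for n in range(40)]

print(max(diffs))  # ~0: the 8 Hz wave is indistinguishable from a 2 Hz wave
```

Once sampled this way, no reconstruction can recover the true 8 Hz wave; the information is gone, and a false 2 Hz wave takes its place.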

Even analog has aliasing!


The Wagon-Wheel Effect
If the spokes of a rotating cart or car wheel pass a given position at a frequency just below the rate at which reality is being sampled (yes! Analog has a sampling rate too!), the wheel seems to be rotating backwards on the screen. For analog film, the sampling frequency is its frame rate (ie, 24 frames per second.)
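The effect can be sketched numerically; the 8-spoke wheel and the 43-degrees-per-frame rotation below are assumed values chosen for illustration:

```python
# With 8 identical spokes the wheel image repeats every 45 degrees, so the
# camera only ever sees the rotation modulo 45 per frame.
spoke_period = 360 / 8        # 45 deg: rotations that map spokes onto spokes
per_frame = 43.0              # true rotation between two successive frames

apparent = per_frame % spoke_period
if apparent > spoke_period / 2:
    # the nearest equivalent motion is a small backwards step
    apparent -= spoke_period

print(apparent)  # -2.0: the wheel appears to rotate slowly backwards
```

A true forward rotation of 43 degrees per frame is sampled as an apparent backward rotation of 2 degrees per frame: temporal aliasing.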


What will you do if your sampling rate is fixed?


One very common approach, which we have just seen, is known as smoothing: purposefully blurring the signal. In the human eye, the lens cannot discern a spatial variation of more than 60 cycles per degree. According to the Nyquist theorem, the number of photoreceptors should therefore be at least 120 per degree of the visual field. Recently, neurologists proved that the number is indeed around 120!

Smoothing is also known as anti-aliasing in the computer-graphics world.
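A minimal sketch of blur-then-sample in Python, using a made-up 1-D pattern at the highest frequency the original resolution can hold:

```python
# The fastest possible alternating pattern: one cycle every two values.
signal = [0, 1, 0, 1, 0, 1, 0, 1]

# Naive subsampling: keep every 2nd value. The pattern aliases away.
naive = signal[::2]

# Blur first: average each pair, then keep one value per pair.
blurred = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]

print(naive)    # [0, 0, 0, 0] -- the detail aliases to flat black
print(blurred)  # [0.5, 0.5, 0.5, 0.5] -- the average brightness survives
```

The blur throws away detail the lower sampling rate could never carry anyway, so what remains is sampled truthfully instead of aliasing.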


[Diagram: blur the signal, then take samples]


What is the highest frequency?


The highest frequency of any signal is also known as its bandwidth.

For a black-and-white film this means how many times the medium can switch between the building blocks of visual information (ie, the presence of a black dot on a white screen, or the reverse.)
For an analog PAL monochrome TV image, the bandwidth is practically 5.5 MHz. Modern HDTVs handle a bandwidth of 20 MHz or more.


How is the signal stored in Digital?


After sampling, an image can be stored as a discrete value of either 1 or 0 for a particular point in the frame, with another set of 1s and 0s determining the pixel's brightness. Depending on the bit depth of the pixel, that many values can be stored as luma information. A 1-bit picture can store 2¹ = 2 values per pixel: black or white. A 2-bit picture can store 2² = 4 different values, translating the image into black, white and two shades of grey in between. Accordingly, 8 bits correspond to 2⁸ = 256 different grey tones.
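The arithmetic can be sketched in Python; the helper names here are hypothetical, chosen for illustration:

```python
def levels(bits):
    # number of tones a pixel of this bit depth can distinguish
    return 2 ** bits          # 1 bit -> 2, 2 bits -> 4, 8 bits -> 256

def quantize(luma, bits):
    # map a continuous luma value in 0..1 to one of the discrete tones
    n = levels(bits)
    return min(int(luma * n), n - 1)   # tone index in 0..n-1

print(levels(1), levels(2), levels(8))  # 2 4 256
print(quantize(0.5, 8))                 # mid-grey stores as tone 128
print(quantize(0.5, 1))                 # with 1 bit it rounds to white (1)
```

At 1 bit, every luma value collapses to black or white; each extra bit doubles the number of grey tones available.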


If the image is color, instead of monochrome, there are three separate color-information channels.
For each such primary color channel there is a bit depth involved, determining the number of tonal variations for that primary. With an 8-bit color depth for each channel (RGB), the image has a total bit depth of 24. As we know, we can get other colors by mixing these three channels at different tonal levels. Peak white can be obtained by mixing all three channels at their highest tone (tonal peak.) Black can be obtained by mixing them at their tonal minimum (the same as the absence of any colour at that point in the image.) In between there are 254 shades of grey.
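Packing three 8-bit channels into one 24-bit pixel can be sketched as follows (`pack_rgb` is a hypothetical helper, not a real API):

```python
def pack_rgb(r, g, b):
    # three 8-bit channels side by side: 24 bits in total
    return (r << 16) | (g << 8) | b

white = pack_rgb(255, 255, 255)   # all channels at their tonal peak
black = pack_rgb(0, 0, 0)         # all channels at their tonal minimum
grey = pack_rgb(128, 128, 128)    # equal R=G=B gives one of the 254 greys

print(hex(white))  # 0xffffff
print(hex(black))  # 0x0
print(hex(grey))   # 0x808080
```

Any R=G=B triple between 1 and 254 lands on one of the in-between greys; unequal triples give the colors.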

How is the image processed?

After the image is acquired as a combination of 1s and 0s, it has to be handled in a way that lets it talk to the computer interface (ie, the OS running inside or outside the chip.) For that purpose, the binary numbers (voltage or lack of voltage) are coded into something the machine can read. A translator program helps the machine with this job; that program is called a codec. When the voltage information is retrieved as pixel information on the TV, the reverse translation runs. This whole translation process is loosely known as compression (compressing the information is actually just one part of the process.) There are lossy and lossless image and video compression methods.

Compression came to rule because the digital avatar of the analog signal was too heavy for the channels available. It was not technically possible in the 1980s to mass-produce VCRs or camcorders operating at 216 Mbps.

With the new HDTV systems at the end of the 1990s and into the new millennium, the bitrate increased manifold, so compression persisted.

Digital compression takes place by rejecting information

(a) that could be easily reconstructed

(b) that is considered nonessential

(c) the absence of which is not noticeable

Seen this way, even analog TV worked on the principle of compression.

The interlace principle in itself represents a 2:1 compression.


Progressive scanning requires a bandwidth twice as large.

As mentioned already, there are two types of compression:

Lossless

Lossy


Every signal has two parts:

(a) Entropy: an unpredictable part which cannot be erased or compressed

(b) Redundancy: a part that has a very high degree of repetitiveness, is highly predictable, and can easily be reconstructed from a simple initial indication

All compression methods function by rejecting as much redundancy as possible while preserving the entropy untouched. Practical compressors retain some residual redundancy, but ensure that no entropy is lost. If entropy is lost, the video signal becomes choppy (eg, image telephony.)
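Run-length encoding is a minimal example of this principle: a lossless sketch that discards redundancy (long repeats) while keeping the entropy (the values and run lengths) intact. The helper names are hypothetical:

```python
def rle_encode(data):
    # collapse each run of repeated values into [value, count]
    runs = []
    for x in data:
        if runs and runs[-1][0] == x:
            runs[-1][1] += 1     # extend the current run
        else:
            runs.append([x, 1])  # start a new run
    return runs

def rle_decode(runs):
    # expand the runs back to the original sequence
    return [x for x, n in runs for _ in range(n)]

scanline = [0] * 6 + [1] * 2 + [0] * 4   # a very redundant scanline
runs = rle_encode(scanline)

print(runs)                              # [[0, 6], [1, 2], [0, 4]]
print(rle_decode(runs) == scanline)      # True: nothing was lost
```

Twelve values shrink to three runs, yet decoding reproduces the scanline exactly; only the repetition was removed.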


Three Types of Video Redundancy


Spatial Redundancy: spatial compression, also known as intraframe compression. All of the removable redundancy is located inside one single frame.

Temporal Redundancy: temporal compression, also known as interframe compression. An extreme example is freeze frames.

Statistical Redundancy: elements that are regularly repeated, including the vertical and horizontal sync pulses.
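Temporal (interframe) compression can be sketched with frame differencing; the 4-pixel "frames" below are made-up values for illustration:

```python
frame1 = [10, 10, 20, 20]   # reference frame
frame2 = [10, 12, 20, 20]   # next frame: only one pixel changed

# store frame2 as its difference from frame1
diff = [b - a for a, b in zip(frame1, frame2)]
print(diff)                 # [0, 2, 0, 0] -- mostly zeros, cheap to store

# the decoder rebuilds frame2 exactly from frame1 and the difference
rebuilt = [a + d for a, d in zip(frame1, diff)]
print(rebuilt == frame2)    # True: the interframe step here is lossless
```

For a freeze frame the difference is all zeros, which is why freeze frames are the extreme case of temporal redundancy.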

