
1.DIGITAL CAMERA SENSORS -


A digital camera uses a sensor array of millions of tiny pixels to produce the final image. When you press your camera's shutter button and the exposure begins, each of these pixels has a "photosite" which is uncovered to collect
and store photons in a cavity. Once the exposure finishes, the camera closes each of these photosites, and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various
intensity levels, whose precision is determined by bit depth (0 - 255 for an 8-bit image).
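As a rough illustration of this counting-and-sorting step, the sketch below maps a photon count onto an 8-bit intensity level. The "full well" capacity of 50,000 photons is an invented example value, and a linear sensor response is assumed.

```python
# Rough sketch only: map a photon count to an 8-bit intensity level.
# The full-well capacity of 50,000 photons and the linear response are
# assumptions for illustration, not values for any particular camera.

def to_intensity_level(photon_count, full_well=50_000, bit_depth=8):
    levels = 2 ** bit_depth - 1                        # 255 for an 8-bit image
    level = round(photon_count / full_well * levels)   # linear mapping
    return min(max(level, 0), levels)                  # clip to the valid range

print(to_intensity_level(25_000))   # half-full cavity -> level 128
print(to_intensity_level(60_000))   # overflowing cavity -> clipped to 255
```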

Each cavity is unable to distinguish how much of each color has fallen in, so the above illustration would only be able to create grayscale images. To capture color images, each cavity has to have a filter placed over it which only allows
penetration of a particular color of light. Virtually all current digital cameras can only capture one of the three primary colors in each cavity, and so they discard roughly 2/3 of the incoming light. As a result, the camera has to
approximate the other two primary colors in order to have information about all three colors at every pixel. The most common type of color filter array is called a "Bayer array," shown below.
Color Filter Array

A Bayer array consists of alternating rows of red-green and green-blue filters. Notice how the Bayer array contains twice as many green as red or blue sensors. Each primary color does not receive an equal fraction of the total area
because the human eye is more sensitive to green light than both red and blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be accomplished if each color were treated
equally. This also explains why noise in the green channel is much less than for the other two primary colors (see "Understanding Image Noise" for an example).

Original Scene (shown at 200%) vs. What Your Camera Sees (through a Bayer array)

Note: Not all digital cameras use a Bayer array; however, it is by far the most common setup. The Foveon sensor used in Sigma's SD9 and SD10 captures all three colors at each pixel location. Sony cameras capture four colors in a
similar array: red, green, blue and emerald green.
BAYER DEMOSAICING
Bayer "demosaicing" is the process of translating this Bayer array of primary colors into a final image which contains full color information at each pixel. How is this possible if the camera is unable to directly measure full color? One
way of understanding this is to instead think of each 2x2 array of red, green and blue as a single full color cavity.


This would work fine, however most cameras take additional steps to extract even more image information from this color array. If the camera treated all of the colors in each 2x2 array as having landed in the same place, then it would
only be able to achieve half the resolution in both the horizontal and vertical directions. On the other hand, if a camera computed the color using several overlapping 2x2 arrays, then it could achieve a higher resolution than would be
possible with a single set of 2x2 arrays. The following combination of overlapping 2x2 arrays could be used to extract more image information.


Note how we did not calculate image information at the very edges of the array, since we
assumed the image continued on in each direction. If these were actually the edges of the cavity array, then calculations here would be less accurate, since there are no longer pixels on all sides. This is no problem, since information at
the very edges of an image can easily be cropped out for cameras with millions of pixels.
Other demosaicing algorithms exist which can extract slightly more resolution, produce images which are less noisy, or adapt to best approximate the image at each location.
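As a minimal sketch of the simplest approach described above (treating each 2x2 group of red, green, green and blue photosites as a single full color pixel), the code below assumes an RGGB Bayer layout and produces a half-resolution color image. Real demosaicing algorithms instead use the overlapping arrays and adaptive methods just mentioned.

```python
import numpy as np

# Minimal sketch: treat each RGGB 2x2 block of a Bayer mosaic as one full
# color pixel (half resolution in each direction). An RGGB layout is assumed;
# real demosaicing uses overlapping neighborhoods and adaptive interpolation.

def demosaic_2x2(bayer):
    """bayer: 2D array of raw sensor values arranged in an RGGB pattern."""
    r  = bayer[0::2, 0::2]            # red photosites
    g1 = bayer[0::2, 1::2]            # green photosites on the red rows
    g2 = bayer[1::2, 0::2]            # green photosites on the blue rows
    b  = bayer[1::2, 1::2]            # blue photosites
    g  = (g1 + g2) / 2.0              # average the two greens in each block
    return np.dstack([r, g, b])       # (H/2, W/2, 3) color image

raw = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 mosaic
print(demosaic_2x2(raw).shape)                   # (2, 2, 3)
```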

DEMOSAICING ARTIFACTS
Images with small-scale detail near the resolution limit of the digital sensor can sometimes trick the demosaicing algorithm—producing an unrealistic looking result. The most common artifact is moiré (pronounced "more-ay"), which
may appear as repeating patterns, color artifacts or pixels arranged in an unrealistic maze-like pattern:

Two separate photos are shown above—each at a different magnification. Note the appearance of moiré in all four bottom squares, in addition to the third square of the first photo (subtle). Both maze-like and color artifacts can be seen in
the third square of the downsized version. These artifacts depend on both the type of texture and software used to develop the digital camera's RAW file.
MICROLENS ARRAYS
You might wonder why the first diagram in this tutorial did not place each cavity directly next to each other. Real-world camera sensors do not actually have photosites which cover the entire surface of the sensor. In fact, they often
cover just half the total area in order to accommodate other electronics. The cavities are shown with little peaks between them to direct the photons to one cavity or the other. Digital cameras contain "microlenses" above each photosite to
enhance their light-gathering ability. These lenses are analogous to funnels which direct photons into the photosite where the photons would have otherwise been unused.

Well-designed microlenses can improve the photon signal at each photosite, and subsequently create images which have less noise for the same exposure time. Camera manufacturers have been able to use improvements in microlens
design to reduce or maintain noise in the latest high-resolution cameras, despite having smaller photosites due to squeezing more megapixels into the same sensor area.

2.CAMERA EXPOSURE -
A photograph's exposure determines how light or dark an image will appear when it's been captured by your camera. Believe it or not, this is determined by just three camera settings: aperture, ISO and shutter speed (the "exposure
triangle"). Mastering their use is an essential part of developing an intuition for photography.

UNDERSTANDING EXPOSURE

Achieving the correct exposure is a lot like collecting rain in a bucket. While the rate of rainfall is uncontrollable, three factors remain under your control: the bucket's width, the duration you leave it in the rain, and the quantity of rain you
want to collect. You just need to ensure you don't collect too little ("underexposed"), but that you also don't collect too much ("overexposed"). The key is that there's many different combinations of width, time and quantity that will
achieve this. For example, for the same quantity of water, you can get away with less time in the rain if you pick a bucket that's really wide. Alternatively, for the same duration left in the rain, a really narrow bucket can be used as long as
you plan on getting by with less water.
In photography, the exposure settings of aperture, shutter speed and ISO speed are analogous to the width, time and quantity discussed above. Furthermore, just as the rate of rainfall was beyond your control above, so too is natural light
for a photographer.

EXPOSURE TRIANGLE: APERTURE, ISO & SHUTTER SPEED

Each setting controls exposure differently:


Aperture: controls the area over which light can enter your camera
Shutter speed: controls the duration of the exposure
ISO speed: controls the sensitivity of your camera's sensor to a given amount of light
One can therefore use many combinations of the above three settings to achieve the same exposure. The key, however, is knowing which trade-offs to make, since each setting also influences other image properties. For example, aperture
affects depth of field, shutter speed affects motion blur and ISO speed affects image noise.
The next few sections will describe how each setting is specified, what it looks like, and how a given camera exposure mode affects their combination.
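The trade-offs between these three settings can be expressed in stops. The sketch below compares any combination of aperture, shutter speed and ISO against an arbitrary reference exposure (f/8, 1/100 s, ISO 100, chosen only for illustration); combinations that differ by roughly zero stops produce the same exposure.

```python
import math

# Sketch of the exposure triangle in stops, relative to an arbitrary
# reference (f/8, 1/100 s, ISO 100). Uses the usual relationships:
# exposure is proportional to time, to 1/(f-number)^2 and to ISO.

def exposure_stops(f_number, shutter_s, iso,
                   ref_f=8.0, ref_shutter=1/100, ref_iso=100):
    stops  = math.log2(shutter_s / ref_shutter)     # longer time -> more light
    stops += math.log2((ref_f / f_number) ** 2)     # wider aperture -> more light
    stops += math.log2(iso / ref_iso)               # higher ISO -> brighter image
    return stops

print(exposure_stops(8.0, 1/100, 100))   # 0.0: the reference itself
print(exposure_stops(5.6, 1/200, 100))   # ~0.0: wider aperture offsets shorter time
print(exposure_stops(8.0, 1/200, 200))   # ~0.0: higher ISO offsets shorter time
```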

SHUTTER SPEED
A camera's shutter determines when the camera sensor will be open or closed to incoming light from the camera lens. The shutter speed specifically refers to how long this light is permitted to enter the camera. "Shutter speed" and
"exposure time" refer to the same concept, where a faster shutter speed means a shorter exposure time.
By the Numbers. Shutter speed's influence on exposure is perhaps the simplest of the three camera settings: it correlates exactly 1:1 with the amount of light entering the camera. For example, when the exposure time doubles, the amount
of light entering the camera doubles. It's also the setting that has the widest range of possibilities:
Shutter Speed: Typical Examples

1 - 30+ seconds: Specialty night and low-light photos on a tripod

2 - 1/2 second: To add a silky look to flowing water; landscape photos on a tripod for enhanced depth of field

1/2 - 1/30 second: To add motion blur to the background of a moving subject; carefully taken hand-held photos with stabilization

1/50 - 1/100 second: Typical hand-held photos without substantial zoom

1/250 - 1/500 second: To freeze everyday sports/action subject movement; hand-held photos with substantial zoom (telephoto lens)

1/1000 - 1/4000 second: To freeze extremely fast, up-close subject motion

How it Appears. Shutter speed is a powerful tool for freezing or exaggerating the appearance of motion:

Slow Shutter Speed vs. Fast Shutter Speed

With waterfalls and other creative shots, motion blur is sometimes desirable, but for most other shots this is avoided. Therefore all one usually cares about with shutter speed is whether it results in a sharp photo -- either by freezing
movement or because the shot can be taken hand-held without camera shake.
How do you know which shutter speed will provide a sharp hand-held shot? With digital cameras, the best way to find out is to just experiment and look at the results on your camera's rear LCD screen (at full zoom). If a properly focused
photo comes out blurred, then you'll usually need to either increase the shutter speed, keep your hands steadier or use a camera tripod.

APERTURE SETTING
A camera's aperture setting controls the area over which light can pass through your camera lens. It is specified in terms of an f-stop value, which can at times be counterintuitive, because the area of the opening increases as the f-stop
decreases. In photographer slang, when someone says they are "stopping down" or "opening up" their lens, they are referring to increasing or decreasing the f-stop value, respectively.
By the Numbers. Every time the f-stop value halves, the light-collecting area quadruples. There's a formula for this, but most photographers just memorize the f-stop numbers that correspond to each doubling/halving of light:
Aperture Setting Relative Light Example Shutter Speed

f/22 1X 16 seconds

f/16 2X 8 seconds

f/11 4X 4 seconds

f/8.0 8X 2 seconds

f/5.6 16X 1 second

f/4.0 32X 1/2 second

f/2.8 64X 1/4 second

f/2.0 128X 1/8 second

f/1.4 256X 1/15 second

The above aperture and shutter speed combinations all result in the same exposure.
Note: Shutter speed values are not always possible in increments of exactly double or half another shutter speed, but they're always close enough that the difference is negligible.
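Since light-gathering area scales with the square of the aperture diameter, relative light is proportional to 1/(f-number)^2. The sketch below reproduces the table above with f/22 as the 1X reference; because the nominal f-numbers are rounded, the results land slightly below the idealized 2X, 4X, ... doublings.

```python
# Relative light-gathering area scales as 1/(f-number)^2, so each standard
# f-stop step (a factor of sqrt(2) in f-number) roughly halves the light.
# f/22 is used as the 1X reference, matching the table above.

f_numbers = [22, 16, 11, 8.0, 5.6, 4.0, 2.8, 2.0, 1.4]
reference = 22.0
for n in f_numbers:
    relative = (reference / n) ** 2
    print(f"f/{n:<4}  ~{relative:.0f}X light")   # ~2X, ~4X, ... (nominal rounding)
```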
The above f-stop numbers are all standard options in any camera, although most also allow finer adjustments, such as f/3.2 and f/6.3. The range of values may also vary from camera to camera (or lens to lens). For example, a compact
camera might have an available range of f/2.8 to f/8.0, whereas a digital SLR camera might have a range of f/1.4 to f/32 with a portrait lens. A narrow aperture range usually isn't a big problem, but a greater range does provide for more
creative flexibility.
Technical Note: With many lenses, their light-gathering ability is also affected by their transmission efficiency, although this is almost always much less of a factor than aperture. It's also beyond the photographer's control. Differences in
transmission efficiency are typically more pronounced with extreme zoom ranges. For example, Canon's 24-105 mm f/4L IS lens gathers perhaps ~10-40% less light at f/4 than Canon's similar 24-70 mm f/2.8L lens at f/4 (depending on the
focal length).
How it Appears. A camera's aperture setting is what determines a photo's depth of field (the range of distance over which objects appear in sharp focus). Lower f-stop values correlate with a shallower depth of field:

Narrow Aperture (f/16, large f-stop number): large depth of field vs. Wide Aperture (f/2.0, low f-stop number): shallow depth of field
ISO SPEED
The ISO speed determines how sensitive the camera is to incoming light. Similar to shutter speed, it also correlates 1:1 with how much the exposure increases or decreases. However, unlike aperture and shutter speed, a lower ISO speed is
almost always desirable, since higher ISO speeds dramatically increase image noise. As a result, ISO speed is usually only increased from its minimum value if the desired aperture and shutter speed aren't otherwise obtainable.

Low ISO Speed (low image noise) vs. High ISO Speed (high image noise)
note: image noise is also known as "film grain" in traditional film photography
Common ISO speeds include 100, 200, 400 and 800, although many cameras also permit lower or higher values. With compact cameras, an ISO speed in the range of 50-200 generally produces acceptably low image noise, whereas with
digital SLR cameras, a range of 50-800 (or higher) is often acceptable.

CAMERA EXPOSURE MODES

Most digital cameras have one of the following standardized exposure modes: Auto ( ), Program (P), Aperture Priority (Av), Shutter Priority (Tv), Manual (M) and Bulb (B) mode. Av, Tv, and M are often called "creative modes" or
"auto exposure (AE) modes."
Each of these modes influences how aperture, ISO and shutter speed are chosen for a given exposure. Some modes attempt to pick all three values for you, whereas others let you specify one setting and the camera picks the other two (if
possible). The following charts describe how each mode pertains to exposure:
Exposure Mode: How It Works

Auto ( ): Camera automatically selects all exposure settings.

Program (P): Camera automatically selects aperture & shutter speed; you can choose a corresponding ISO speed & exposure compensation. With some cameras, P can also act as a hybrid of the Av & Tv modes.

Aperture Priority (Av or A): You specify the aperture & ISO; the camera's metering determines the corresponding shutter speed.

Shutter Priority (Tv or S): You specify the shutter speed & ISO; the camera's metering determines the corresponding aperture.

Manual (M): You specify the aperture, ISO and shutter speed -- regardless of whether these values lead to a correct exposure.

Bulb (B): Useful for exposures longer than 30 seconds. You specify the aperture and ISO; the shutter speed is determined by a remote release switch, or by the duration until you press the shutter button a second time.

In addition, the camera may also have several pre-set modes; the most common include landscape, portrait, sports and night mode. The symbols used for each mode vary slightly from camera to camera, but will likely appear similar to
those below:
Exposure Mode: How It Works

Portrait: Camera tries to pick the lowest f-stop value possible for a given exposure. This ensures the shallowest possible depth of field.

Landscape: Camera tries to pick a high f-stop to ensure a large depth of field. Compact cameras also often set their focus distance to distant objects or infinity.

Sports/Action: Camera tries to achieve as fast a shutter speed as possible for a given exposure -- ideally 1/250 seconds or faster. In addition to using a low f-stop, the fast shutter speed is usually achieved by increasing the ISO speed more than would otherwise be acceptable in portrait mode.

Night/Low-light: Camera permits shutter speeds which are longer than ordinarily allowed for hand-held shots, and increases the ISO speed to near its maximum available value. However, for some cameras this setting means that a flash is used for the foreground, and a long shutter speed and high ISO are used to expose the background. Check your camera's instruction manual for any unique characteristics.

However, keep in mind that most of the above settings rely on the camera's metering system in order to know what's a proper exposure. For tricky subject matter, metering can often be fooled, so it's a good idea to also be aware of when it
might go awry, and what you can do to compensate for such exposure errors (see section on exposure compensation within the camera metering tutorial).
Finally, some of the above modes may also control camera settings which are unrelated to exposure, although this varies from camera to camera. Such additional settings might include the autofocus points, metering mode and autofocus
modes, amongst others.

3.CAMERA METERING & EXPOSURE -


Knowing how your digital camera meters light is critical for achieving consistent and accurate exposures. Metering is the brains behind how your camera determines the shutter speed and aperture, based on lighting conditions and ISO
speed. Metering options often include partial, evaluative zone or matrix, center-weighted and spot metering. Each of these has subject lighting conditions for which it excels-- and for which it fails. Understanding these can improve
one's photographic intuition for how a camera measures light.
Recommended background reading: camera exposure: aperture, ISO & shutter speed

BACKGROUND: INCIDENT vs. REFLECTED LIGHT


All in-camera light meters have a fundamental flaw: they can only measure reflected light. This means the best they can do is guess how much light is actually hitting the subject.

If all objects reflected the same percentage of incident light, this would work just fine, however real-world subjects vary greatly in their reflectance. For this reason, in-camera metering is standardized based on the luminance of light
which would be reflected from an object appearing as middle gray. If the camera is aimed directly at any object lighter or darker than middle gray, the camera's light meter will incorrectly calculate under or over-exposure, respectively. A
hand-held light meter would calculate the same exposure for any object under the same incident lighting.
Approximations* of 18% Luminance: 18% Gray, 18% Red Tone, 18% Green Tone, 18% Blue Tone

*Most accurate when using a PC display which closely mimics the sRGB color space and when your monitor has been calibrated accordingly.
Monitors transmit light as opposed to reflecting it, so this is also a fundamental limitation.
What constitutes middle gray? In the printing industry it is standardized as the ink density which reflects 18% of incident light, however cameras seldom adhere to this. This topic deserves a discussion of its own, but for the purposes of
this tutorial simply know that each camera has a default somewhere in the middle gray tones (~10-18% reflectance). Metering off of a subject which reflects more or less light than this may cause your camera's metering algorithm to go
awry-- either through under or over-exposure, respectively.
An in-camera light meter can work surprisingly well if object reflectance is sufficiently diverse throughout the photo. In other words, if there is an even spread varying from dark to
light objects, then the average reflectance will remain roughly middle gray. Unfortunately, some scenes may have a significant imbalance in subject reflectivity, such as a photo of a
white dove in the snow, or of a black dog sitting on a pile of charcoal. For such cases the camera may try to create an image with a histogram whose primary peak is in the
midtones, even though it should have instead produced this peak in the highlights or shadows (see high and low-key histograms).
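A back-of-the-envelope way to see why this matters: if the meter is calibrated to render the scene's average as middle gray, the resulting exposure error in stops is roughly the log ratio of the scene's true average reflectance to that calibration value. The 18% calibration in the sketch below is an assumption; as noted above, real cameras default to somewhere around 10-18%.

```python
import math

# Back-of-the-envelope sketch: an averaging meter calibrated to ~18%
# reflectance. The calibration value is an assumption; per the text, real
# cameras default to somewhere in the ~10-18% range.

def metering_error_stops(avg_scene_reflectance, calibration=0.18):
    # positive -> the meter under-exposes (scene brighter than middle gray)
    # negative -> the meter over-exposes (scene darker than middle gray)
    return math.log2(avg_scene_reflectance / calibration)

print(metering_error_stops(0.36))   # bright scene (e.g. snow): ~+1 stop under-exposed
print(metering_error_stops(0.09))   # dark scene (e.g. charcoal): ~-1 stop over-exposed
```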

METERING OPTIONS
In order to accurately expose a greater range of subject lighting and reflectance combinations, most cameras feature several metering options. Each option works by assigning a weighting to different light regions; those with a higher
weighting are considered more reliable, and thus contribute more to the final exposure calculation.

Center-Weighted Partial Metering Spot Metering

Partial and spot areas are roughly 13.5% and 3.8% of the picture area, respectively,
which correspond to settings on the Canon EOS 1D Mark II.
The whitest regions are those which contribute most towards the exposure calculation, whereas black areas are ignored. Each of the above metering diagrams may also be located off-center, depending on the metering options and
autofocus point used.
More sophisticated algorithms may go beyond just a regional map and include: evaluative, zone and matrix metering. These are usually the default when your camera is set to auto exposure. Each generally works by dividing the image
up into numerous sub-sections, where each section is then considered in terms of its relative location, light intensity or color. The location of the autofocus point and orientation of the camera (portrait vs. landscape) may also contribute to
the calculation.
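Conceptually, all of these options reduce to a weighted average of scene luminance, where only the weighting map differs. The toy sketch below is illustrative only; the actual weighting maps and evaluative algorithms are proprietary and camera-specific.

```python
import numpy as np

# Illustrative sketch only: metering as a weighted average of luminance.
# The 3x3 "center-weighted" map below is made up; real weighting maps and
# evaluative/matrix algorithms are proprietary and camera-specific.

def metered_luminance(luminance, weights):
    """Both arguments are 2D arrays of the same shape."""
    return np.sum(luminance * weights) / np.sum(weights)

lum = np.array([[0.9, 0.9, 0.9],      # bright sky
                [0.2, 0.2, 0.2],      # midground
                [0.1, 0.1, 0.1]])     # dark land
center_weighted = np.array([[1, 1, 1],
                            [1, 4, 1],  # center region counts 4x as much
                            [1, 1, 1]])
uniform = np.ones((3, 3))

print(metered_luminance(lum, uniform))          # plain average: 0.4
print(metered_luminance(lum, center_weighted))  # center-weighted: 0.35
```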

WHEN TO USE PARTIAL & SPOT METERING


Partial and spot metering give the photographer far more control over the exposure than any of the other settings, but this also means that they are more difficult to use-- at least initially. They are useful when there is a relatively small
object within your scene which you either need to be perfectly exposed, or know will provide the closest match to middle gray.
One of the most common applications of partial metering is a portrait of someone who is backlit. Metering off of their face can help avoid making the subject look like an under-exposed silhouette against the bright background. On the
other hand, care should be taken as the shade of a person's skin may lead to inaccurate exposure if it is far from neutral gray reflectance-- but probably not as inaccurate as what would have been caused by the backlighting.
Spot metering is used less often because its metering area is very small and thus quite specific. This can be an advantage when you are unsure of your subject's reflectance and have a specially designed gray card (or other small object) to
meter off of.

Spot and partial metering are also quite useful for performing creative exposures, and when the ambient lighting is unusual. In the examples to the left and right below, one could meter off of the diffusely lit foreground tiles, or off of the
directly lit stone below the opening to the sky.
NOTES ON CENTER-WEIGHTED METERING
At one time center-weighted metering was a very common default setting in cameras because it coped well with a bright sky above a darker landscape. Nowadays, it has more or less been surpassed in flexibility by evaluative and matrix,
and in specificity by partial and spot metering. On the other hand, the results produced by center-weighted metering are very predictable, whereas matrix and evaluative metering modes have complicated algorithms which are harder
to predict. For this reason some prefer to use it as the default metering mode.

EXPOSURE COMPENSATION
Any of the above metering modes can use a feature called exposure compensation (EC). The metering calculation still works as normal, except the final settings are then compensated by the EC value. This allows for manual corrections
if you observe a metering mode to be consistently under or over-exposing. Most cameras allow up to 2 stops of exposure compensation; each stop of exposure compensation provides either a doubling or halving of light compared to what
the metering mode would have done otherwise. A setting of zero means no compensation will be applied (default).
Exposure compensation is ideal for correcting in-camera metering errors caused by the subject's reflectivity. No matter what metering mode is used, an in-camera light meter will always mistakenly under-expose a subject such as
a white dove in a snowstorm (see incident vs. reflected light). Photographs in the snow will always require around +1 exposure compensation, whereas a low-key image may require negative compensation.
When shooting in RAW mode under tricky lighting, sometimes it is useful to set a slight negative exposure compensation (0.3-0.5). This decreases the chance of clipped highlights, yet still allows one to increase the exposure afterwards.
Alternatively, a positive exposure compensation can be used to improve the signal to noise ratio in situations where the highlights are far from clipping.
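In an aperture-priority style mode, for example, applying EC effectively shifts the metered shutter speed by the chosen number of stops. A small sketch of that arithmetic follows (the starting shutter speed is just an example value).

```python
# Sketch of exposure compensation arithmetic in an aperture-priority style
# mode: each stop of EC doubles (+) or halves (-) the light by scaling the
# metered shutter speed. The metered value below is only an example.

def apply_ec_to_shutter(metered_shutter_s, ec_stops):
    return metered_shutter_s * (2.0 ** ec_stops)

metered = 1 / 250                             # what the meter picked
print(apply_ec_to_shutter(metered, +1.0))     # snow scene: 1/125 s (twice the light)
print(apply_ec_to_shutter(metered, -0.5))     # protect highlights: ~1/354 s
```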

4.UNDERSTANDING CAMERA LENSES -


Understanding camera lenses can help add more creative control to digital photography. Choosing the right lens for the task can become a complex trade-off between cost, size, weight, lens speed and image quality. This tutorial aims to
improve understanding by providing an introductory overview of concepts relating to image quality, focal length, perspective, prime vs. zoom lenses and aperture or f-number.

LENS ELEMENTS & IMAGE QUALITY


All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Each of these elements aims to direct the path of light rays such that they recreate the image as accurately as possible on the digital
sensor. The goal is to minimize aberrations, while still utilizing the fewest and least expensive elements.

Optical aberrations occur when points of the image do not translate back onto single points after passing through the lens, causing image blurring, reduced contrast or misalignment of colors (chromatic aberration). Lenses may also suffer
from uneven, radially decreasing image brightness (vignetting) or distortion. The examples below illustrate how each of these can impact image quality in extreme cases.

Examples (each compared against the original image): Loss of Contrast, Blurring, Chromatic Aberration, Distortion, Vignetting

Any of the above problems is present to some degree with any lens. In the rest of this tutorial, when a lens is referred to as having lower optical quality than another lens, this is manifested as some combination of the above
artifacts. Some of these lens artifacts may not be as objectionable as others, depending on the subject matter.
Note: For a much more quantitative and technical discussion of the above topic, please see the
tutorial on camera lens quality: MTF, resolution & contrast.

INFLUENCE OF LENS FOCAL LENGTH
The focal length of a lens determines its angle of view, and thus also how much the subject will be magnified for a given photographic position. Wide angle lenses have small focal lengths, while telephoto lenses have larger corresponding focal lengths.

Note: The location where light rays cross is not necessarily equal to the focal length, as shown above, but is instead roughly proportional to this distance. Therefore longer focal lengths still result in narrower angles of view, as depicted.

Required Focal Length Calculator (interactive tool; inputs: subject distance, subject size and camera type; output: approximate required focal length)

Note: The calculator assumes that the camera is oriented such that the maximum subject dimension given by "subject size" is in the camera's longest dimension. It is not intended for use in extreme macro photography, but does take into account small changes in the angle of view due to focusing distance.
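For a rough sense of the geometry such a calculator relies on, the sketch below uses the thin-lens relationship between subject size, subject distance and sensor size. The 36 mm sensor long edge (a 35 mm frame) is an assumed example, and the macro and focusing-distance corrections the calculator accounts for are ignored.

```python
# Rough sketch of the geometry behind such a calculator. Thin-lens
# approximation only; ignores macro and focusing-distance corrections.
# The 36 mm sensor long edge (35 mm frame) is an example assumption.

def required_focal_length_mm(subject_distance_mm, subject_size_mm,
                             sensor_long_edge_mm=36.0):
    # image size / subject size = f / (distance - f), solved for f:
    return (sensor_long_edge_mm * subject_distance_mm
            / (subject_size_mm + sensor_long_edge_mm))

# a 1.8 m tall subject filling the frame from 5 m away on a 35 mm camera:
print(required_focal_length_mm(5000, 1800))   # ~98 mm
```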

Many will say that focal length also determines the perspective of an image, but strictly speaking, perspective only changes with one's location relative to their subject. If one tries to achieve the same subjects filling the frame with both a
wide angle and telephoto lens, then perspective does indeed change because one is forced to move closer or further from their subject. For these scenarios only, the wide angle lens exaggerates or stretches perspective, whereas the

telephoto lens compresses or flattens perspective.


Perspective control can be a powerful compositional tool in photography, and often determines one's choice in focal length (when one can photograph from any position). In the comparison above, the same scene is shown with an
exaggerated perspective due to a wider angle lens. Note how the subjects within the frame remain nearly identical-- therefore requiring a closer position for the wider angle lens. The relative sizes of objects change such that the distant
doorway becomes smaller relative to the nearby lamps.
The following table provides an overview of what focal lengths are required to be considered a wide angle or telephoto lens, in addition to their typical uses. Please note that the focal lengths listed are just rough ranges, and actual uses may
vary considerably; many use telephoto lenses in distant landscapes to compress perspective, for example.
Lens Focal Length* Terminology Typical Photography

Less than 21 mm Extreme Wide Angle Architecture

21-35 mm Wide Angle Landscape

35-70 mm Normal Street & Documentary

70-135 mm Medium Telephoto Portraiture

135-300+ mm Telephoto Sports, Bird & Wildlife

*Note: Lens focal lengths are for 35 mm equivalent cameras. If you have a compact or digital SLR camera, then you likely have a different sensor size. To adjust the above numbers for your camera, please use the focal length converter
in the tutorial on digital camera sensor sizes.
Other factors may also be influenced by lens focal length. Telephoto lenses are more susceptible to camera shake since small hand movements become magnified within the image, similar to the shakiness experienced while trying to look
through binoculars with a large zoom. Wide angle lenses are generally more resistant to flare, partially because the designers assume that the sun is more likely to be within the frame for a wider angle of view. A final consideration is
that medium and telephoto lenses generally yield better optical quality for similar price ranges.

FOCAL LENGTH & HANDHELD PHOTOS


The focal length of a lens may also have a significant impact on how easy it is to achieve a sharp handheld photograph. Longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands. Think of this
as if one were trying to hold a laser pointer steady; when shining this pointer at a nearby object its bright spot ordinarily jumps around less than for objects further away.

This is primarily because slight rotational vibrations are magnified greatly with distance, whereas if only up and down or side to side vibrations were present, the laser's bright spot would not change with distance.
A common rule of thumb for estimating how fast the exposure needs to be for a given focal length is the one over focal length rule. This states that for a 35 mm camera, the exposure time needs to be at least as fast as one over the focal
length in seconds. In other words, when using a 200 mm focal length on a 35 mm camera, the exposure time needs to be at least 1/200 seconds-- otherwise blurring may be hard to avoid. Keep in mind that this rule is just for rough
guidance; some may be able to hand hold a shot for much longer or shorter times than this rule estimates. For users of digital cameras with cropped sensors, one needs to convert into a 35 mm equivalent focal length.
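The rule is easy to put into numbers; the sketch below includes the crop-factor conversion mentioned above (the 1.6X value is just an example).

```python
# Sketch of the "one over focal length" hand-holding rule, including the
# 35 mm-equivalent conversion for cropped sensors. The 1.6X crop factor is
# just an example; this is rough guidance only, as noted above.

def max_handheld_exposure_s(focal_length_mm, crop_factor=1.0):
    equivalent_focal_length = focal_length_mm * crop_factor
    return 1.0 / equivalent_focal_length        # exposure time in seconds

print(max_handheld_exposure_s(200))        # 35 mm camera: 1/200 s
print(max_handheld_exposure_s(200, 1.6))   # 1.6X crop sensor: ~1/320 s
```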

ZOOM LENSES vs. PRIME LENSES


A zoom lens is one where the photographer can vary the focal length within a pre-defined range, whereas this cannot be changed with a "prime" or fixed focal length lens. The primary advantage of a zoom lens is that it is easier to
achieve a variety of compositions or perspectives (since lens changes are not necessary). This advantage is often critical for dynamic subject matter, such as in photojournalism and children's photography.
Keep in mind that using a zoom lens does not necessarily mean that one no longer has to change their position; zooms just increase flexibility. In the example below, the original position is shown along with two alternatives using a
zoom lens. If a prime lens were used, then a change of composition would not have been possible without cropping the image (if a tighter composition were desirable). Similar to the example in the previous section, the change of
perspective was achieved by zooming out and getting closer to the subject. Alternatively, to achieve the opposite perspective effect, one could have zoomed in and gotten further from the subject.

Two Options Available with a Zoom Lens:

Change of Composition Change of Perspective

Why would one intentionally restrict their options by using a prime lens? Prime lenses existed long before zoom lenses were available, and still offer many advantages over their more modern counterparts. When zoom lenses first arrived
on the market, one often had to be willing to sacrifice a significant amount of optical quality. However, more modern high-end zoom lenses generally do not produce noticeably lower image quality, unless scrutinized by the trained eye
(or in a very large print).
The primary advantages of prime lenses are in cost, weight and speed. An inexpensive prime lens can generally provide as good (or better) image quality as a high-end zoom lens. Additionally, if only a small fraction of the focal
length range is necessary for a zoom lens, then a prime lens with a similar focal length will be significantly smaller and lighter. Finally, the best prime lenses almost always offer better light-gathering ability (larger maximum aperture)
than the fastest zoom lenses-- often critical for low-light sports/theater photography, and when a shallow depth of field is necessary.
For compact digital cameras, lenses listed with a 3X, 4X, etc. zoom designation refer to the ratio between the longest and shortest focal lengths. Therefore, a larger zoom designation does not necessarily mean that the image can be
magnified any more (since that zoom may just have a wider angle of view when fully zoomed out). Additionally, digital zoom is not the same as optical zoom, as the former only enlarges the image through interpolation. Read the fine print to ensure you are not misled.

INFLUENCE OF LENS APERTURE OR F-NUMBER


The aperture range of a lens refers to the amount that the lens can open up or close down to let in more or less light, respectively. Apertures are listed in terms of f-numbers, which quantitatively describe relative light-gathering area
(depicted below).

Note: The above comparison is qualitative; the aperture opening (iris) is rarely a perfect circle,
due to the presence of a lens diaphragm with 5-8 blades.
Note that larger aperture openings are defined to have lower f-numbers (often very confusing). These two terms are often mistakenly interchanged; the rest of this tutorial refers to lenses in terms of their aperture size. Lenses with larger
apertures are also described as being "faster," because for a given ISO speed, the shutter speed can be made faster for the same exposure. Additionally, a smaller aperture means that objects can be in focus over a wider range of
distance, a concept also termed the depth of field.
Corresponding Impact on Other Properties:
f-number | Light-Gathering Area (Aperture Size) | Required Shutter Speed | Depth of Field
Higher | Smaller | Slower | Wider
Lower | Larger | Faster | Narrower

When one is considering purchasing a lens, specifications ordinarily list the maximum (and maybe minimum) available apertures. Lenses with a greater range of aperture settings provide greater artistic flexibility, in terms of both
exposure options and depth of field. The maximum aperture is perhaps the most important lens aperture specification, which is often listed on the box along with focal length(s).

An f-number of X may also be displayed as 1:X (instead of f/X), as shown below for the Canon 70-200 f/2.8 lens (whose box is also shown above and lists f/2.8).

Portrait and indoor sports/theater photography often requires lenses with very large maximum apertures, in order to be capable of faster shutter speeds or a narrower depth of field, respectively. The narrow depth of field in a portrait helps
isolate the subject from their background. For digital SLR cameras, lenses with larger maximum apertures provide significantly brighter viewfinder images-- possibly critical for night and low-light photography. These also often
give faster and more accurate auto-focusing in low-light. Manual focusing is also easier because the image in the viewfinder has a narrower depth of field (thus making it more visible when objects come into or out of focus).
Typical Maximum Apertures | Relative Light-Gathering Ability | Typical Lens Types

f/1.0 | 32X | Fastest Available Prime Lenses (for Consumer Use)

f/1.4 | 16X | Fast Prime Lenses

f/2.0 | 8X | Fast Prime Lenses

f/2.8 | 4X | Fastest Zoom Lenses (for Constant Aperture)

f/4.0 | 2X | Light Weight Zoom Lenses or Extreme Telephoto Primes

f/5.6 | 1X | Light Weight Zoom Lenses or Extreme Telephoto Primes

Minimum apertures for lenses are generally nowhere near as important as maximum apertures. This is primarily because the minimum apertures are rarely used due to photo blurring from lens diffraction, and because these may require
prohibitively long exposure times. For cases where extreme depth of field is desired, lenses with a smaller minimum aperture (larger maximum f-number) allow for a wider depth of field.
Finally, some zoom lenses on digital SLR and compact digital cameras often list a range of maximum aperture, because this may depend on how far one has zoomed in or out. These aperture ranges therefore refer only to the range of
maximum aperture, not overall range. A range of f/2.0-3.0 would mean that the maximum available aperture gradually changes from f/2.0 (fully zoomed out) to f/3.0 (at full zoom). The primary benefit of having a zoom lens with a
constant maximum aperture is that exposure settings are more predictable, regardless of focal length.
Also note that even if the maximum aperture of a lens will rarely be used, this does not necessarily mean that the lens is unnecessary. Lenses typically have fewer aberrations when they perform the exposure stopped down one
or two f-stops from their maximum aperture (such as using a setting of f/4.0 on a lens with a maximum aperture of f/2.0). This *may* therefore mean that if one wanted the best quality f/2.8 photograph, an f/2.0 or f/1.4 lens may yield
higher quality than a lens with a maximum aperture of f/2.8.
Other considerations include cost, size and weight. Lenses with larger maximum apertures are typically much heavier, larger and more expensive. Size/weight may be critical for wildlife, hiking and travel photography because all of
these often utilize heavier lenses, or require carrying equipment for extended periods of time.

5.CAMERA LENS FILTERS -


Camera lens filters still have many uses in digital photography, and should be an important part of any photographer's camera bag. These can include polarizing filters to reduce glare and improve saturation, or simple UV/haze filters to
provide extra protection for the front of your lens. This article aims to familiarize one with these and other filter options that cannot be reproduced using digital editing techniques. Common problems/disadvantages and filter sizes are
discussed towards the end.

OVERVIEW: LENS FILTER TYPES


The most commonly used filters for digital photography include polarizing (linear/circular), UV/haze, neutral density, graduated neutral density and warming/cooling or color filters. Example uses for each are listed below:
Filter Type | Primary Use | Common Subject Matter

Linear & Circular Polarizers | Reduce Glare, Improve Saturation | Sky / Water / Foliage in Landscape Photography

Neutral Density (ND) | Extend Exposure Time | Waterfalls, Rivers under bright light

Graduated Neutral Density (GND) | Control Strong Light Gradients, Reduce Vignetting | Dramatically Lit Landscapes

UV / Haze | Improve Clarity with Film, Provide Lens Protection | Any

Warming / Cooling | Change White Balance | Landscapes, Underwater, Special Lighting

LINEAR & CIRCULAR POLARIZING FILTERS


Polarizing filters (aka "polarizers") are perhaps the most important of any filter for landscape photography. They work by reducing the amount of reflected light that passes to your camera's sensor. Similar to polarizing sunglasses,
polarizers will make skies appear deeper blue, will reduce glare and reflections off of water and other surfaces, and will reduce the contrast between land and sky.
No Polarizer vs. Polarizer at Max (two separate handheld photos taken seconds apart)
Note how the sky becomes a much darker blue, and how the foliage/rocks acquire slightly more color saturation. The intensity of the polarizing effect can be varied by slowly rotating your polarizing filter, although no more than 180° of
rotation is needed, since beyond this the possible intensities repeat. Use your camera's viewfinder (or rear LCD screen) to view the effect as you rotate the polarizing filter.

The polarizing effect may also increase or decrease substantially depending on the direction your camera is pointed and the position of the sun in the sky. The effect is strongest when your camera is aimed in a direction which is
perpendicular to the direction of the sun's incoming light. This means that if the sun is directly overhead, the polarizing effect will be greatest near the horizon in all directions.
However, polarizing filters should be used with caution because they may adversely affect the photo. Polarizers dramatically reduce the amount of light reaching the camera's sensor—often by 2-3 f-stops (1/4 to 1/8 the amount of
light). This means that the risk of a blurred handheld image goes up dramatically, and may make some action shots prohibitive.
Additionally, using a polarizer on a wide angle lens can produce an uneven or unrealistic looking sky which visibly darkens. In the example to the left, the sky could be considered unusually uneven and too dark at the top.
Linear vs. Circular Polarizing Filters: The circular polarizing variety is designed so that the camera's metering and autofocus systems can still function. Linear polarizers are much less expensive, but cannot be used with cameras that
have through-the-lens (TTL) metering and autofocus—meaning nearly all digital SLR cameras. One could of course forego metering and autofocus, but that is rarely desirable.

NEUTRAL DENSITY FILTERS


Neutral density (ND) filters uniformly reduce the amount of light reaching the camera's sensor. This is useful when a sufficiently long exposure time is not otherwise attainable within a given range of possible apertures (at the lowest ISO
setting).
Situations where ND filters are particularly useful include:
• Smoothing water movement in waterfalls, rivers, oceans, etc.
• Achieving a shallower depth of field in very bright light
• Reducing diffraction (which reduces sharpness) by enabling a larger aperture
• Making moving objects less apparent or not visible (such as people or cars)
• Introducing blur to convey motion with moving subjects
(example image: a smoothed water effect from a long exposure)

However, only use ND filters when absolutely necessary because they effectively discard light—which could otherwise be used to enable a shorter shutter speed (to freeze action), a smaller aperture (for depth of field) or a lower ISO
setting (to reduce image noise). Additionally, some ND filters can add a very slight color cast to the image.
Understanding how much light a given ND filter blocks can sometimes be difficult since manufacturers list this in many different forms:
Amount of Light Reduction:
f-stops | Fraction | Hoya, B+W and Cokin | Lee, Tiffen | Leica

1 | 1/2 | ND2, ND2X | 0.3 ND | 1X

2 | 1/4 | ND4, ND4X | 0.6 ND | 4X

3 | 1/8 | ND8, ND8X | 0.9 ND | 8X

4 | 1/16 | ND16, ND16X | 1.2 ND | 16X

5 | 1/32 | ND32, ND32X | 1.5 ND | 32X

6 | 1/64 | ND64, ND64X | 1.8 ND | 64X

Generally no more than a few f-stops are needed for most waterfall scenarios, so most photographers just keep one or two different ND filter strengths on hand. Extreme light reduction can enable very long exposures even during broad
daylight.
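Most of the notations in the table above follow directly from the number of f-stops of reduction; the sketch below converts between them (optical density uses log10 of the blocking factor, roughly 0.3 per stop). The Leica-style column is omitted since its convention is less uniform.

```python
import math

# Sketch converting between the ND notations in the table above: stops of
# reduction, transmitted fraction, "NDx" factor and optical density
# (log10 of the blocking factor, ~0.3 per stop).

def nd_notations(stops):
    fraction = 0.5 ** stops                   # fraction of light transmitted
    factor   = 2 ** stops                     # e.g. 8 -> "ND8"
    density  = round(math.log10(factor), 1)   # e.g. 0.9 for 3 stops
    return fraction, factor, density

for stops in range(1, 7):
    frac, factor, density = nd_notations(stops)
    print(f"{stops} stops: 1/{factor} of the light, ND{factor}, {density} ND")
```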

GRADUATED NEUTRAL DENSITY FILTERS


Graduated neutral density (GND) filters restrict the amount of light across an image in a smooth geometric pattern. These are sometimes also called "split filters". Scenes which are ideally suited for GND filters are those with simple
lighting geometries, such as the linear blend from dark to light encountered commonly in landscape photography (below).
GND Filter vs. Final Result

Prior to digital cameras, GND filters were absolutely essential for capturing dramatically-lit landscapes. With digital cameras one can instead often take two separate exposures and blend these using a linear gradient in Photoshop. On the
other hand, this technique is not possible for fast moving subject matter or changing light (unless it is a single exposure developed twice from the RAW file format, but this increases image noise). Many also prefer using a GND to see
how the final image will look immediately through the viewfinder or rear LCD.
GND filters come in many varieties. The first important setting is how quickly the filter blends from light to dark, which is usually termed "soft edge" or "hard edge" for gradual and more abrupt blends, respectively. These are
chosen based on how quickly the light changes across the scene, where a sharp division between dark land and bright sky would necessitate a harder edge GND filter, for example. Alternatively, the blend can instead be radial to either add
or remove light fall-off at the lens's edges (vignetting).
Soft Edge GND vs. Hard Edge GND vs. Radial Blend

note: in the above diagrams white = clear, which passes 100% of the light
Placing the blend should be performed very carefully and usually requires a tripod. The soft edge is generally more flexible and more forgiving of misplacement. On the other hand, a soft edge may produce excessive darkening or
brightening near where the blend occurs if the scene's light transitions faster than the filter. One should also be aware that vertical objects extending across the blend may appear unrealistically dark.
Location of GND Blend vs. Final Photo

Note how the rock columns become nearly black at their top compared to below the blend;
this effect is often unavoidable when using GND filters.
A problem with the soft and hard edge terminology is that it is not standardized from one brand to another. One company's "soft edge" can sometimes be nearly as abrupt a blend as another company's so called "hard edge". It is therefore
best to take these on a case by case basis and actually look at the filter itself to judge the blend type. Most manufacturers will show an example of the blend on their own websites.
The second important setting is the differential between how much light is let in at one side of the blend versus the other (the top versus bottom in the examples directly above). This differential is expressed using the same
terminology as used for ND filters in the previous section. A "0.6 ND grad" therefore refers to a graduated neutral density filter which lets in 2 f-stops less light (1/4th) at one side of the blend versus the other. Similarly, a 0.9 ND grad lets
in 3 f-stops less light (1/8th) at one side. Most landscape photos need no more than a 1-3 f-stop blend.

HAZE & UV FILTERS


Nowadays UV filters are primarily used to protect the front element of a camera lens since they are clear and do not noticeably affect the image. With film cameras, UV filters reduce haze and improve contrast by minimizing the amount of
ultraviolet (UV) light that reaches the film. The problem with UV light is that it is not visible to the human eye, but is often uniformly distributed on a hazy day; UV therefore adversely affects the camera's exposure by reducing contrast.
Fortunately, digital camera sensors are nowhere near as sensitive to UV light as film, so UV filtration is no longer necessary.

77 mm UV filter
However, UV filters have the potential to decrease image quality by increasing lens flare, adding a slight color tint or reducing contrast. Multicoated UV filters can dramatically reduce the chance of flare, and keeping your filter very clean
minimizes any reduction in image quality (although even invisible micro abrasions will affect sharpness/contrast). High quality UV filters will not introduce any visible color cast.
For digital cameras, it is often debated whether the advantage of a UV filter (protection) outweighs the potential reduction in image quality. For very expensive SLR lenses, the increased protection is often the determining factor, since it
is much easier to replace a filter than to replace or repair a lens. However, for less expensive SLR lenses or compact digital cameras protection is much less of a factor—the choice therefore becomes more a matter of personal preference.
Another consideration is that UV filters may increase the resale value of the lens by keeping the front lens element in mint condition. In that sense, a UV filter could also even be deemed to increase image quality (relative to an unfiltered
lens) since it can be routinely replaced whenever it is perceived to adversely affect the image.
COOL & WARM FILTERS
Cooling or warming filters change the white balance of light reaching the camera's sensor. This can be used to either correct an unrealistic color cast, or to instead add one, such as adding warmth to a cloudy day to make it appear more
like during sunset.

The above image's orange color cast is from the monochromatic sodium streetlamps;
with this type of light source virtually no amount of white balance correction can restore full color.
A cooling filter or special streetlight filter could be used to restore color based on other light sources.
These filters have become much less important with digital cameras since most automatically adjust for white balance, and this can be adjusted afterwards when taking photos with the RAW file format. On the other hand, some situations
may still necessitate color filters, such as situations with unusual lighting (above example) or underwater photography. This is because there may be such an overwhelming amount of monochromatic light that no amount of white balance
can restore full color—or at least not without introducing huge amounts of image noise in some color channels.

PROBLEMS WITH LENS FILTERS

visible filter vignetting


Filters should only be used when necessary because they can also adversely affect the image. Since they effectively introduce an additional piece of glass between your camera's sensor and the subject, they have the potential to reduce
image quality. This usually comes in the form of either a slight color tint, a reduction in local or overall image contrast, or ghosting and increased lens flare caused by light inadvertently reflecting off the inside of the filter.
Filters may also introduce physical vignetting (light fall-off or blackening at the edges of the image) if their opaque edge gets in the way of light entering the lens (right example). This was created by stacking a polarizing filter on top of a
UV filter while also using a wide angle lens—causing the edges of the outermost filter to get in the way of the image. Stacking filters therefore has the potential to make all of the above problems much worse.

NOTES ON CHOOSING A FILTER SIZE FOR A CAMERA LENS


Lens filters generally come in two varieties: screw-on and front filters. Front filters are more flexible because they can be used on virtually any lens diameter, however these may also be more cumbersome to use since they may need to be
held in front of the lens. On the other hand, filter holder kits are available that can improve this process. Screw-on filters can provide an air-tight seal when needed for protection, and cannot accidentally move relative to the lens during
composition. The main disadvantage is that a given screw-on filter will only work with a specific lens size.
The size of a screw-on filter is expressed in terms of its diameter, which corresponds to the diameter usually listed on the top or front of your camera lens. This diameter is listed in millimeters and usually ranges from about 46 to 82 mm
for digital SLR cameras. Step-up or step-down adapters can enable a given filter size to be used on a lens with a smaller or larger diameter, respectively. However, step-down filter adapters may introduce substantial vignetting (since the
filter may block light at the edges of the lens), whereas step-up adapters mean that your filter is much larger (and potentially more cumbersome) than is required.
The height of the filter edges may also be important. Ultra-thin and other special filters are designed so that they can be used on wide angle lenses without vignetting. On the other hand, these may also be much more expensive and often
do not have threads on the outside to accept another filter (or sometimes even the lens cap).

6.TUTORIALS: DEPTH OF FIELD -


Depth of field is the range of distance within the subject that is acceptably sharp. The depth of field varies depending on camera type, aperture and focusing distance, although print size and viewing distance can influence our perception
of it. This section is designed to give a better intuitive and technical understanding for photography, and provides a depth of field calculator to show how it varies with your camera settings.

The depth of field does not abruptly change from sharp to unsharp, but instead occurs as a gradual transition. In fact, everything immediately in front of or in back of the focusing distance begins to lose sharpness-- even if this is not
perceived by our eyes or by the resolution of the camera.

CIRCLE OF CONFUSION
Since there is no critical point of transition, a more rigorous term called the "circle of confusion" is used to define how much a point needs to be blurred in order to be perceived as unsharp. When the circle of confusion becomes
perceptible to our eyes, this region is said to be outside the depth of field and thus no longer "acceptably sharp." The circle of confusion above has been exaggerated for clarity; in reality this would be only a tiny fraction of the camera
sensor's area.

When does the circle of confusion become perceptible to our eyes? An acceptably sharp circle of confusion is loosely defined as one which would go unnoticed when enlarged to a standard 8x10 inch print, and observed from a standard
viewing distance of about 1 foot.

At this viewing distance and print size, camera manufacturers assume a circle of confusion is negligible if no larger than 0.01 inches (when enlarged). As a result, camera
manufacturers use the 0.01 inch standard when providing lens depth of field markers (shown below for f/22 on a 50mm lens). In reality, a person with 20-20 vision or
better can distinguish features 1/3 this size or smaller, and so the circle of confusion has to be even smaller than this to achieve acceptable sharpness throughout.

A different maximum circle of confusion also applies for each print size and viewing distance combination. In the earlier example of blurred dots, the circle of confusion is actually smaller than the resolution of your screen for the two
dots on either side of the focal point, and so these are considered within the depth of field. Alternatively, the depth of field can be based on when the circle of confusion becomes larger than the size of your digital camera's pixels.
Note that depth of field only sets a maximum value for the circle of confusion, and does not describe what happens to regions once they become out of focus. These regions are also called "bokeh," from Japanese (pronounced bo-ké). Two
images with identical depth of field may have significantly different bokeh, as this depends on the shape of the lens diaphragm. In reality, the circle of confusion is usually not actually a circle, but is only approximated as such when it is
very small. When it becomes large, most lenses will render it as a polygonal shape with 5-8 sides.

CONTROLLING DEPTH OF FIELD


Although print size and viewing distance are important factors which influence how large the circle of confusion appears to our eyes, aperture and focal distance are the two main factors that determine how big the circle of confusion will
be on your camera's sensor. Larger apertures (smaller F-stop number) and closer focal distances produce a shallower depth of field. The following depth of field test was taken with the same focus distance and a 200 mm lens (320 mm
field of view on a 35 mm camera), but with various apertures:

f/8.0 f/5.6 f/2.8

CLARIFICATION: FOCAL LENGTH AND DEPTH OF FIELD


Note that I did not mention focal length as influencing depth of field. Even though telephoto lenses appear to create a much shallower depth of field, this is mainly because they are often used to make the subject appear bigger when one
is unable to get closer. If the subject occupies the same fraction of the image (constant magnification) for both a telephoto and a wide angle lens, the total depth of field is virtually* constant with focal length! This would of course
require you to either get much closer with a wide angle lens or much further with a telephoto lens, as demonstrated in the following chart:
Focal Length (mm)   Focus Distance (m)   Depth of Field (m)
10                  0.5                  0.482
20                  1.0                  0.421
50                  2.5                  0.406
100                 5.0                  0.404
200                 10                   0.404
400                 20                   0.404

Note: Depth of field calculations are at f/4.0 on a Canon EOS 30D (1.6X crop factor),
using a circle of confusion of 0.0206 mm.
Note how there is indeed a subtle change for the smallest focal lengths. This is a real effect, but is negligible compared to both aperture and focus distance. Even though the total depth of field is virtually constant, the fraction of the depth
of field which is in front of and behind the focus distance does change with focal length, as demonstrated below:

Distribution of the Depth of Field

Focal Length (mm)   Rear     Front
10                  70.2 %   29.8 %
20                  60.1 %   39.9 %
50                  54.0 %   46.0 %
100                 52.0 %   48.0 %
200                 51.0 %   49.0 %
400                 50.5 %   49.5 %

This exposes a limitation of the traditional DoF concept: it only accounts for the total DoF and not its distribution around the focal plane, even though both may contribute to the perception of sharpness. A wide angle lens provides a more
gradually fading DoF behind the focal plane than in front, which is important for traditional landscape photographs.
On the other hand, when standing in the same place and focusing on a subject at the same distance, a longer focal length lens will have a shallower depth of field (even though the pictures will show something entirely different). This is
more representative of everyday use, but is an effect due to higher magnification, not focal length. Longer focal lengths also appear to have a shallow depth of field because they flatten perspective. This renders a background much
larger relative to the foreground-- even if no more detail is resolved. Depth of field also appears shallower for SLR cameras than for compact digital cameras, because SLR cameras require a longer focal length to achieve the same field of
view.
*Note: We describe depth of field as being virtually constant because there are limiting cases where this does not hold true. For focal distances resulting in high magnification, or very near the hyperfocal distance, wide angle lenses may
provide a greater DoF than telephoto lenses. On the other hand, for situations of high magnification the traditional DoF calculation becomes inaccurate due to another factor: pupil magnification. This actually acts to offset the DoF
advantage for most wide angle lenses, and increase it for telephoto and macro lenses. At the other limiting case, near the hyperfocal distance, the increase in DoF arises because the wide angle lens has a greater rear DoF, and can thus
more easily attain critical sharpness at infinity for any given focal distance.

CALCULATING DEPTH OF FIELD


In order to calculate the depth of field, one needs to first decide on an appropriate value for the maximum allowable circle of confusion. This is based on both the camera type (sensor or film size), and on the viewing distance / print size
combination.
Depth of field calculations ordinarily assume that a feature size of 0.01 inches is required for acceptable sharpness (as discussed earlier), however people with 20/20 vision can see features 1/3 this size. If you use the 0.01 inch
standard, understand that the edge of the depth of field may not appear acceptably sharp. The depth of field calculator below assumes this standard of eyesight, however I also provide a more flexible depth of field calculator.
Depth of Field Calculator (interactive on the original web page)
Inputs: camera type, selected aperture, actual lens focal length (mm), focus distance to subject (meters)
Outputs: closest distance of acceptable sharpness, furthest distance of acceptable sharpness, total depth of field
Note: CF = "crop factor" (commonly referred to as the focal length multiplier)
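
For readers who prefer to see the math, below is a minimal Python sketch of the standard thin-lens depth of field equations that a calculator like this one is based on; the function and its inputs are illustrative, not the calculator's actual implementation. The 200 mm example reproduces the 0.404 m value from the focal length table earlier on this page.

```python
# Minimal sketch of the standard depth of field equations (thin-lens approximation).

def depth_of_field(focal_mm, f_stop, focus_mm, coc_mm):
    """Return (near limit, far limit, total DoF) in mm. The far limit becomes
    infinite once the focus distance reaches the hyperfocal distance."""
    hyperfocal = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = focus_mm * (hyperfocal - focal_mm) / (hyperfocal + focus_mm - 2 * focal_mm)
    if focus_mm >= hyperfocal:
        return near, float("inf"), float("inf")
    far = focus_mm * (hyperfocal - focal_mm) / (hyperfocal - focus_mm)
    return near, far, far - near

# Example: 200 mm lens at f/4.0 focused at 10 m on a 1.6X crop body (CoC = 0.0206 mm).
near, far, total = depth_of_field(focal_mm=200, f_stop=4.0, focus_mm=10_000, coc_mm=0.0206)
print(f"Near: {near / 1000:.2f} m, far: {far / 1000:.2f} m, total DoF: {total / 1000:.3f} m")
```

Running the same function at 10 mm and 0.5 m gives the 0.482 m figure from the table, which illustrates why the total depth of field is only "virtually" constant with focal length.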

DEPTH OF FOCUS & APERTURE VISUALIZATION


Another implication of the circle of confusion is the concept of depth of focus (also called the "focus spread"). It differs from depth of field in that it describes the distance over which light is focused at the camera's sensor, as opposed to
how much of the subject is in focus. This is important because it sets tolerances on how flat/level the camera's film or digital sensor have to be in order to capture proper focus in all regions of the image.

Diagram depicting depth of focus versus camera aperture. The purple lines represent the extreme angles at which light could potentially enter the aperture, and the purple shaded portion represents all other possible angles. The diagram
can also be used to illustrate depth of field, but in that case it is the lens elements that move instead of the sensor.
The key concept is this: when an object is in focus, light rays originating from that point converge at a point on the camera's sensor. If the light rays hit the sensor at slightly different locations (arriving at a disc instead of a point), then this
object will be rendered as out of focus -- and increasingly so depending on how far apart the light rays are.

OTHER NOTES
Why not just use the smallest aperture (largest number) to achieve the best possible depth of field? Other than the fact that this may require prohibitively long shutter speeds without a camera tripod, too small of an aperture softens the
image by creating a larger circle of confusion (or "Airy disk") due to an effect called diffraction-- even within the plane of focus. Diffraction quickly becomes more of a limiting factor than depth of field as the aperture gets smaller.
Despite their extreme depth of field, this is also why "pinhole cameras" have limited resolution.
For macro photography (high magnification), the depth of field is actually influenced by another factor: pupil magnification. This is equal to one for lenses which are internally symmetric, although for wide angle and telephoto lenses
this is greater or less than one, respectively. A greater depth of field is achieved (than would be ordinarily calculated) for a pupil magnification less than one, whereas the pupil magnification does not change the calculation when it is
equal to one. The problem is that the pupil magnification is usually not provided by lens manufacturers, and one can only roughly estimate it visually.

7.UNDERSTANDING CAMERA AUTOFOCUS -


A camera's autofocus system intelligently adjusts the camera lens to obtain focus on the subject, and can mean the difference between a sharp photo and a missed opportunity. Despite a seemingly simple goal—sharpness at the focus point
—the inner workings of how a camera focuses are unfortunately not as straightforward. This tutorial aims to improve your photos by introducing how autofocus works—thereby enabling you to both make the most of its assets and avoid
its shortcomings.
Note: Autofocus (AF) works either by using contrast sensors within the camera (passive AF) or by sending out a signal to illuminate or estimate distance to the subject (active AF). Passive AF can be performed using either the contrast
detection or phase detection methods, but both rely on contrast for achieving accurate autofocus; they will therefore be treated as being qualitatively similar for the purposes of this AF tutorial. Unless otherwise stated, this tutorial will
assume passive autofocus. We will also discuss the AF assist beam method of active autofocus towards the end.

CONCEPT: AUTOFOCUS SENSORS


A camera's autofocus sensor(s) are the real engine behind achieving accurate focus, and are laid out in various arrays across your image's field of view. Each sensor measures relative focus by assessing changes in contrast at its
respective point in the image— where maximal contrast is assumed to correspond to maximal sharpness.
Interactive figure: the AF sensor's histogram (shown at 400%) as the focus amount changes from blurred to partially sharp to sharp.
Please visit the tutorial on image histograms for a background on image contrast.
Note: many compact digital cameras use the image sensor itself as a contrast sensor (using a method called contrast detection AF), and do not necessarily have multiple discrete autofocus sensors (which are more common using the phase
detection method of AF).
Further, the above diagram illustrates the contrast detection method of AF;
phase detection is another method, but this still relies on contrast for accurate autofocus.
The process of autofocusing generally works as follows:
(1) An autofocus processor (AFP) makes a small change in the focusing distance.
(2) The AFP reads the AF sensor to assess whether and by how much focus has improved.
(3) Using the information from (2), the AFP sets the lens to a new focusing distance.
(4) The AFP may iteratively repeat steps 2-3 until satisfactory focus has been achieved.
This entire process is usually completed within a fraction of a second. For difficult subjects, the camera may fail to achieve satisfactory focus and will give up on repeating the above sequence, resulting in failed autofocus. This is the
dreaded "focus hunting" scenario where the camera focuses back and forth repeatedly without achieving focus lock. This does not, however, mean that focus is not possible for the chosen subject. Whether and why autofocus may fail is
primarily determined by factors in the next section.
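
As a rough illustration of steps (1)-(4), here is a toy Python sketch of the hill-climbing idea behind contrast-detection autofocus. The contrast_at() function is a made-up stand-in for reading the AF sensor (with a contrast peak at a subject distance of 2 m); real autofocus processors are considerably more sophisticated.

```python
# Toy sketch of the contrast-detection autofocus loop described above.

def contrast_at(focus_m, subject_m=2.0):
    """Fake AF sensor reading: contrast peaks when the focus distance hits the subject."""
    return 1.0 / (1.0 + (focus_m - subject_m) ** 2)

def autofocus(start_m, step_m=0.25, max_iterations=50, tolerance=1e-3):
    focus = start_m
    best = contrast_at(focus)
    for _ in range(max_iterations):                 # step (4): iterate until satisfied
        candidate = focus + step_m                  # step (1): small change in focus distance
        score = contrast_at(candidate)              # step (2): read the AF sensor
        if score > best:                            # step (3): move to the better focus distance
            focus, best = candidate, score
        else:
            step_m *= -0.5                          # overshot the peak: reverse and shrink the step
        if abs(step_m) < tolerance:
            return focus                            # focus lock achieved
    return None                                     # no lock: the "focus hunting" scenario

locked = autofocus(start_m=1.0)
print("No focus lock" if locked is None else f"Locked focus at roughly {locked:.2f} m")
```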

FACTORS AFFECTING AUTOFOCUS PERFORMANCE


The photographic subject can have an enormous impact on how well your camera autofocuses—and often even more so than any variation between camera models, lenses or focus settings. The three most important factors influencing
autofocus are the light level, subject contrast and camera or subject motion.
An example illustrating the quality of different focus points is shown to the left; move your mouse over this image to see the advantages and disadvantages of each focus location.
Note that these factors are not independent; in other words, one may be able to achieve autofocus even for a dimly lit subject if that same subject also has extreme contrast, or vice versa. This has an important implication for your
choice of autofocus point: selecting a focus point which corresponds to a sharp edge or pronounced texture can achieve better autofocus, assuming all other factors remain equal.
In the example to the left we were fortunate that the location where autofocus performs best also corresponds to the subject location. The next example is more problematic because autofocus performs best on the background, not the
subject. Move your mouse over the image below to highlight areas of good and poor performance.

In the photo to the right, if one focused on the fast-moving light sources behind the subject, one would risk an out-of-focus subject when the depth of field is shallow (as would be the case for a
low-light action shot like this one).
Alternatively, focusing on the subject's exterior highlight would perhaps be the best approach, with the caveat that this highlight would change sides and intensity rapidly depending on the location
of the moving light sources.
If one's camera had difficulty focusing on the exterior highlight, a lower contrast (but stationary and reasonably well lit) focus point would be the subject's foot, or leaves on the ground at the same
distance as the subject.

What makes the above choices difficult, however, is that these decisions often have to be either anticipated or made within a fraction of a second. Additional specific techniques for autofocusing on still and moving subjects will be
discussed in their respective sections towards the end of this tutorial.

NUMBER & TYPE OF AUTOFOCUS POINTS


The robustness and flexibility of autofocus is primarily a result of the number, position and type of autofocus points made available by a given camera model. High-end SLR cameras can have 45 or more autofocus points, whereas other
cameras can have as few as one central AF point. Two example layouts of autofocus sensors are shown below:

Two example autofocus point layouts: a high-end SLR (left, Canon 1D MkII) and an entry to midrange SLR (right, Canon 50D/500D). The number of usable AF points depends on the lens's maximum f-number (f/2.8, f/4.0, f/5.6 and f/8.0
shown for the high-end SLR; f/2.8, f/4.0 and f/5.6 for the entry to midrange SLR). For these cameras autofocus is not possible for apertures smaller than f/8.0 and f/5.6, respectively.

Two types of autofocus sensors are shown:
• cross-type sensors (two-dimensional contrast detection, higher accuracy)
• vertical line sensors (one-dimensional contrast detection, lower accuracy)
Note: The "vertical line sensor" is only called this because it detects contrast along a vertical line.
Ironically, this type of sensor is therefore best at detecting horizontal lines.
For SLR cameras, the number and accuracy of autofocus points can also change depending on the maximum aperture of the lens being used, as illustrated above. This is an important consideration when choosing a camera lens: even if
you do not plan on using a lens at its maximum aperture, this aperture may still help the camera achieve better focus accuracy. Further, since the central AF sensor is almost always the most accurate, for off-center subjects it is
often best to first use this sensor to achieve a focus lock (before recomposing the frame).
Multiple AF points can work together for improved reliability, or can work in isolation for improved specificity, depending on your chosen camera setting. Some cameras also have an "auto depth of field" feature for group photos which
ensures that a cluster of focus points are all within an acceptable level of focus.

AF MODE: CONTINUOUS & AI SERVO vs. ONE SHOT


The most widely supported camera focus mode is one-shot focusing, which is best for still subjects. The one-shot mode is susceptible to focus errors with fast-moving subjects since it cannot anticipate subject motion, and it can also
make it difficult to track these moving subjects in the viewfinder. One-shot focusing requires a focus lock before the photograph can be taken.
Many cameras also support an autofocus mode which continually adjusts the focus distance for moving subjects. Canon cameras refer to this as "AI Servo" focusing, whereas Nikon cameras refer to this as "continuous" focusing. It works
by predicting where the subject will be slightly in the future, based on estimates of the subject velocity from previous focus distances. The camera then focuses at this predicted distance in advance to account for the shutter lag (the delay
between pressing the shutter button and the start of the exposure). This greatly increases the probability of correct focus for moving subjects.
Example maximum tracking speeds are shown for various Canon cameras below:

Values are for ideal contrast and lighting, and use the Canon 300mm f/2.8 IS L lens.
The above plot should also provide a rule-of-thumb estimate for other cameras as well. Actual maximum tracking speeds also depend on how erratically the subject is moving, the subject's contrast and lighting, the type of lens and the
number of autofocus sensors being used to track the subject. Also be warned that using focus tracking can dramatically reduce the battery life of your camera, so use it only when necessary.
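
The prediction itself can be thought of as simple extrapolation: estimate the subject's speed from successive focus distances, then focus where the subject should be once the shutter lag has elapsed. The sketch below is only illustrative; the numbers and function names are assumptions, not any manufacturer's algorithm.

```python
# Rough sketch of the predictive ("AI Servo" / continuous) focusing idea.

def predicted_focus_distance(prev_m, current_m, interval_s, shutter_lag_s):
    velocity = (current_m - prev_m) / interval_s    # positive = moving away, negative = approaching
    return current_m + velocity * shutter_lag_s     # where the subject should be at exposure time

# Subject measured at 12.0 m, then 11.4 m a tenth of a second later (approaching at 6 m/s).
# With a 60 ms shutter lag the camera should focus at ~11.04 m rather than 11.4 m.
print(predicted_focus_distance(prev_m=12.0, current_m=11.4, interval_s=0.1, shutter_lag_s=0.06))
```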

AUTOFOCUS ASSIST BEAM


Many cameras come equipped with an AF assist beam, which is a method of active autofocus that uses a visible or infrared beam to help the autofocus sensors detect the subject. This can be very helpful in situations where your subject is
not adequately lit or has insufficient contrast for autofocus, although the AF assist beam also comes with the disadvantage of much slower autofocus.
Most compact cameras use a built-in infrared light source for the AF assist, whereas digital SLR cameras often use either a built-in or external camera flash to illuminate the subject. When using a flash for the AF assist, the AF assist
beam may have trouble achieving focus lock if the subject moves appreciably between flash firings. Use of the AF assist beam is therefore only recommended for still subjects.

IN PRACTICE: ACTION PHOTOS


Autofocus will almost always perform best with action photos when using the AI servo or continuous modes. Focusing performance can be improved dramatically by ensuring that the lens does not have to search over a large range of
focus distances.

Perhaps the most universally supported way of achieving this is to pre-focus your camera at a distance near where you anticipate the moving subject to pass through. In the biker example to
the right, one could pre-focus near the side of the road since one would expect the biker to pass by at about that distance.
Some SLR lenses also have a minimum focus distance switch; setting this to the greatest distance possible (assuming the subject will never be closer) can also improve performance.

Be warned, however, that in continuous autofocus mode shots can still be taken even if the focus lock has not yet been achieved.

IN PRACTICE: PORTRAITS & OTHER STILL PHOTOS


Still photos are best taken using the one-shot autofocus mode, which ensures that a focus lock has been achieved before the exposure begins. The usual focus point requirements of contrast and strong lighting still apply, although one
needs to ensure there is very little subject motion.
For portraits, the eye is the best focus point—both because this is a standard and because it has good contrast. Although the central autofocus sensor is usually most sensitive, the most accurate focusing is achieved using the off-center
focus points for off-center subjects. If one were to instead use the central AF point to achieve a focus lock (prior to recomposing for an off-center subject), the focus distance will always be behind the actual subject distance—and this
error increases for closer subjects. Accurate focus is especially important for portraits because these typically have a shallow depth of field.
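
The size of this focus-and-recompose error is easy to estimate with a little geometry: after locking focus at distance d and rotating the camera by an angle θ, the subject lies at d·cos(θ) along the new optical axis while the plane of focus stays at d. A small hedged sketch with illustrative numbers:

```python
# Sketch of the focus-and-recompose back-focus error described above.
import math

def recompose_error_cm(subject_distance_m, recompose_angle_deg):
    theta = math.radians(recompose_angle_deg)
    focus_plane = subject_distance_m                     # where the lens remains focused
    subject_depth = subject_distance_m * math.cos(theta) # subject's distance along the new axis
    return (focus_plane - subject_depth) * 100           # back-focus error in cm

print(f"{recompose_error_cm(1.5, 15):.1f} cm")   # ~5 cm at a 1.5 m portrait distance and 15 degrees
```

At a 1.5 m portrait distance and a modest 15° recompose, the plane of focus lands roughly 5 cm behind the subject, which can easily exceed a shallow portrait depth of field.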

Since the most common type of AF sensor is the vertical line sensor, it may also be worth considering whether your focus point contains primarily vertical or horizontal contrast. In low-light
conditions, one may be able to achieve a focus lock not otherwise possible by rotating the camera 90° during autofocus.
In the example to the left, the stairs are comprised primarily of horizontal lines. If one were to focus near the back of the foreground stairs (so as to maximize apparent depth of field using the
hyperfocal distance), one might be able to avoid a failed autofocus by orienting their camera first in landscape mode during autofocus. Afterwards one could rotate the camera back to portrait
orientation during the exposure, if so desired.

Note that the emphasis in this tutorial has been on *how* to focus—not necessarily *where* to focus. For further reading on this topic please visit the tutorials on depth of field and the hyperfocal distance.

8.CAMERA TRIPODS -
A camera tripod can make a huge difference in the sharpness and overall quality of photos. It enables photos to be taken with less light or a greater depth of field, in addition to enabling several specialty techniques. This tutorial is all
about how to choose and make the most of your camera tripod.
WHEN TO USE A TRIPOD
A camera tripod's function is pretty straightforward: it holds the camera in a precise position. This gives you a sharp picture when it might have otherwise appeared blurred due to camera shake. But how can you tell when you should and
shouldn't be using a tripod? When will a hand-held photo become blurred?
A common rule of thumb for estimating how fast the exposure needs to be is the one over focal length* rule. This states that for a 35 mm camera, the exposure time needs to be at least as fast as one over the focal length in seconds. In
other words, when using a 100 mm focal length on a 35 mm camera, the exposure time needs to be at most 1/100 seconds long -- otherwise blurring may be hard to avoid. For digital cameras with cropped sensors, one needs to convert to
a 35 mm equivalent focal length.
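
As a quick sketch of the rule (including the crop factor conversion just mentioned), the longest hand-holdable shutter speed can be estimated as follows; treat the result as rough guidance only.

```python
# Sketch of the "one over focal length" rule with a 35 mm equivalent conversion.

def max_handheld_shutter_s(focal_length_mm, crop_factor=1.0):
    equivalent_focal = focal_length_mm * crop_factor     # 35 mm equivalent focal length
    return 1.0 / equivalent_focal

# 100 mm lens on a 1.6X crop body: the shutter should be 1/160 s or faster.
print(f"1/{1 / max_handheld_shutter_s(100, crop_factor=1.6):.0f} s")
```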
The reason this rule depends on focal length is because zooming in on your subject also ends up magnifying camera movement. This is analogous to trying to aim a laser pointer at a position on a distant wall; the further this wall is, the
more your laser pointer is likely to jump above and below this position due to an unsteady hand:

Simulation of what happens when you try to aim a laser pointer at a point on a distant wall;
the larger absolute movements on the further wall are similar to what happens with camera shake when you are using longer focal lengths (when you are more zoomed in).
Keep in mind that this rule is just for rough guidance. The exact shutter speed where camera shake affects your images will depend on (i) how steady you hold the camera, (ii) the sharpness of your lens, (iii) the resolution of your camera
and (iv) the distance to your subject. In other words: if in doubt, always use a tripod.
Finally, camera lenses with image stabilization (IS) or vibration reduction (VR) may enable you to take hand-held photographs at anywhere from two to eight times longer shutter speeds than you'd otherwise be able to hold steady.
However, IS and VR do not always help when the subject is moving -- but then again, neither do tripods.

OTHER REASONS TO USE A TRIPOD


Just because you can hold the camera steady enough to take a sharp photo using a given shutter speed doesn't necessarily mean that you shouldn't use a tripod. With one, you might be able to choose a better combination of
aperture, ISO and shutter speed. For example, you could use a smaller aperture in order to achieve more depth of field, or a lower ISO in order to reduce image noise; both require a longer shutter speed, which may mean the photo can no
longer be taken hand-held.

Photo with a smoothed water effect from a long exposure (only possible with a tripod).
In addition, several specialty techniques may also require the use of a tripod:
• Taking a series of photos at different angles to produce a digital panorama.
• Taking a series of photos at different exposures for a high dynamic range (HDR) photo.
• Taking a series of time lapse photographs to produce an animation.
• Taking a series of photos to produce a composite image, such as selectively including people in a crowd, or combining portions lit by daylight with those at dusk.
• Whenever you want to precisely control your composition.
• Whenever you need to have your camera in the right composition well in advance of the shot, such as during a sporting event.

CHOOSING A TRIPOD: TOP CONSIDERATIONS


Even though a tripod performs a pretty basic function, choosing the best tripod often involves many competing factors. Finding the best tripod requires identifying the optimal combination of trade-offs for your type of photography.

The top considerations are usually its sturdiness, weight and ease of use:
Tripod Sturdiness/Stability. This is probably why you purchased a tripod in the first place: to keep your camera steady. Important factors which can influence sturdiness include: (i) the number of tripod leg sections, (ii) the material and
thickness of the leg units, and (iii) the length of the legs and whether a center column is needed to reach eye level. Ultimately though, the only way to gauge the sturdiness of a tripod is to try it out. Tap or apply weight to the top to see if it
vibrates or sways, and take a few test photos.
Tripod Weight. This can determine whether you take the tripod along with you on a hike, or even on a shorter stroll through town. Having a lighter tripod can therefore mean that you'll end up using it a lot more, or will at least make
using it more enjoyable since it won't cause a lot of fatigue when you have to carry it around. However, tripod weight and sturdiness are often closely related; just make sure that you're not sacrificing too much sturdiness in exchange for
portability, or vice versa. Further, tripods that do not extend as high may weigh a little less, but these also may not be as versatile as a result.
Tripod Ease of Use. What's the point of having a tripod if it stays in the closet because you find it too cumbersome, or if you miss the shot because it takes you too long to set it up? A tripod should therefore be quick and easy to position.
Ease of use depends on the type of tripod head (discussed later), and how one positions the leg sections.

twist lock
Tripod leg sections are usually extended or contracted using a locking mechanism, either with lever/clip locks or twist locks. Lever/clip locks tend to be much quicker to use, although some types can be tough to grip when wearing gloves.
Twist locks are usually a little more compact and streamlined though, since they do not have external clips/latches. Twist locks also sometimes require two hands if you want to extend or contract each leg section independently.

CHOOSING A TRIPOD: OTHER CONSIDERATIONS


Other more minor considerations when choosing a tripod include:
Number of Tripod Leg Sections. Each leg of a tripod can typically be extended using anywhere from two to four concentric leg sections. In general, more leg sections reduce stability, but can also reduce the size of the tripod when it's
fully contracted and in your camera bag. Having more leg sections can also mean that it takes longer to position or fully extend your tripod.

Example of a tripod with multiple leg sections extended.


Maximum Tripod Height. This is especially important if you're quite tall, since you could end up having to crouch. Make sure that the tripod's max height specification does not include having to extend its center column, because this
can make the camera much less steady. On the other hand, you may not wish to take your photos at eye level most of the time, since this can make for an ordinary looking perspective. Further, the higher you extend your tripod (even
without the center column), the less stable it will end up being.
Center column of tripod extends to increase maximum height (at the expense of stability).
Minimum Tripod Height. This is primarily important for photographers who take a lot of macro photographs of subjects on the ground, or who like to use extreme vantage points in their photographs.
Contracted Tripod Height. This is primarily important for photographers who need to fit their tripod in a backpack, suitcase or other enclosed space. Tripods with more leg sections are generally more compact when fully contracted.
However, often times more compact tripods either don't extend as far or aren't as sturdy.

TRIPOD HEADS: PAN-TILT vs. BALL HEADS


Although many tripods already come with a tripod head, you might want to consider purchasing something that better suits your shooting style. The two most common types of tripod heads are pan-tilt and ball heads:

Pan-Tilt Head Ball Head

Pan-Tilt Heads are great because you can independently control each of the camera's two axes of rotation: left-right (yaw) and up/down (pitch). This can be very useful once you've already taken great care in leveling the tripod but
need to shift the composition slightly. However, for moving subjects this can also be a disadvantage, since you will need to adjust at least two camera settings before you can fully recompose the shot.
Ball Heads are great because you can quickly point the camera freely in nearly any direction before locking it into position. They are typically also a little more compact than equivalent pan-tilt heads. However, the advantage of free
motion can also be a disadvantage, since it may cause your composition to no longer be level when you unlock the camera's position -- even if all you wanted to change was its left/right angle. On the other hand, some ball heads come
with a "rotation only" ability for just this type of situation. Ball heads can also be more susceptible to damage, since small scratches or dryness of the rotation ball can cause it to grind or move in uneven jumps.
Be on the lookout for whether your tripod head creeps/slips under heavy weight. This is surprisingly common, but is also a big problem for long exposures with heavier SLR cameras. Try attaching a big lens to your SLR camera and
facing it horizontal on your tripod. Take an exposure of around 30 seconds if possible, and see whether your image has slight motion blur when viewed at 100%. Pay particular attention to creeping when the tripod head is rotated so that it
holds your camera in portrait orientation.
The strength to weight ratio is another important tripod head consideration. This describes the rated load weight of the tripod head (how much equipment it can hold without creeping/slipping), compared to the weight of the tripod head
itself. Higher ratios will make for a much lighter overall tripod/head combination.

bubble level
A tripod head with a built-in bubble/spirit level can also be an extremely helpful feature -- especially when your tripod legs aren't equally extended on uneven ground.
Finally, regardless of which type of tripod head you choose, getting one that has a quick release mechanism can make a big difference in how quickly you can attach or remove your camera. A quick release mechanism lets your camera
attach to the tripod head using a latch, instead of requiring the camera to be screwed or unscrewed.
TRIPOD LENS COLLARS

Tripod lens collar on a 70-200 mm lens.


A lens collar is most commonly used with large telephoto lenses, and is an attachment that fits around the lens somewhere near its base or midsection. The tripod head then directly attaches to the lens collar itself (instead of the camera
body).
This causes the camera plus lens to rest on the tripod at a location which is much closer to their center of mass. Much less rotational stress (aka torque) is therefore placed on the tripod head, which greatly increases the amount of weight
that your tripod head can sustain without creeping or slipping. A lens collar can also make a huge difference in how susceptible the tripod and head are to vibrations.
In other words: if your lens came with a collar, use it! Otherwise you might consider purchasing one if there's a size available that fits your lens.

TRIPOD TIPS FOR SHARP PHOTOS


How you use your tripod can be just as important as the type of tripod you're using. Below is a list of top tips for achieving the sharpest possible photos with your tripod:
• Hang a camera bag from the center column for added stability, especially in the wind. However, make sure that this camera bag does not swing appreciably, or this could be counter-productive.
• Use the center column only after all of the tripod's leg sections have been fully extended, and only when absolutely necessary. The center column wobbles much more easily than the tripod's base.
• Remove the center column to shave off some weight.
• Extend your tripod only to the minimum height required for a given photograph.
• Spread the legs to their widest standard position whenever possible.
• Shield the tripod's base from the wind whenever possible.
• Extend only the thickest leg sections necessary in order to reach a given tripod height.
• Set your tripod up on a sturdy surface, such as rock or concrete versus dirt, sand or grass. For indoor use, tile or hardwood floor is much better than a rug or carpet.
• Use add-on spikes at the ends of the tripod legs if you have no other choice but to set up your tripod on carpet or grass.

TABLETOP & MINI TRIPODS

A tabletop or mini tripod is usually used with compact cameras, since this type of tripod can often be quite portable and even carried in one's pocket. However, this portability often comes at the expense of versatility. A tabletop/mini
tripod can only really change your camera's up/down (pitch) and left/right (yaw) orientation; shifting your camera to higher or lower heights is not a possibility. This means that finding the best surface to place the tripod is more important
than usual with a tabletop/mini tripod, because you also need this surface to be at a level which gives you the desired vantage height.
However, photos taken at eye level often appear ordinary, since that's the perspective that we're most used to seeing. Photos taken at above or below this height are therefore often perceived as more interesting. A tabletop/mini tripod can
be one way of forcing you to try a different perspective with your photos.

ALUMINUM vs. CARBON FIBER TRIPODS


The two most common types of tripod material are aluminum and carbon fiber. Aluminum tripods are generally much cheaper than carbon fiber models, but they are often also a lot heavier for an equivalent amount of stability, and can be
uncomfortably cold to handle with your bare hands in the winter. Carbon fiber tripods are generally also better at dampening vibrations. However, the best tripod material for damping vibrations is good old-fashioned wood -- it's just too
heavy and impractical for typical use.
CAMERA MONOPODS

monopod used to track a moving subject


A monopod is a tripod with only one leg. These are most commonly used to hold up heavy cameras and lenses, such as large telephoto lenses for sports and wildlife. Alternatively, monopods can increase hand-holdability for situations
where just a little bit longer shutter speed is needed, but carrying a full tripod might be too cumbersome.
A monopod can also make it much easier to photograph a moving subject in a way that creates a blurred background, but yet still keeps the moving subject reasonably sharp (example on left). This technique works by rotating the
monopod along its axis -- causing the camera to pan in only one direction.

9.UNDERSTANDING CAMERA LENS FLARE -


Lens flare is created when non-image forming light enters the lens and subsequently hits the camera's film or digital sensor. This often appears as a characteristic polygonal shape, with sides which depend on the shape of the lens
diaphragm. It can lower the overall contrast of a photograph significantly and is often an undesired artifact, however some types of flare may actually enhance the artistic meaning of a photo. Understanding lens flare can help you use it--
or avoid it--in a way which best suits how you wish to portray the final image.

WHAT IT LOOKS LIKE

The above image exhibits tell-tale signs of flare in the upper right caused by a bright sun just outside the image frame. These take the form of polygonal bright regions (usually 5-8 sides), in addition to bright streaks and an overall
reduction in contrast (see below). The polygonal shapes vary in size and can actually become so large that they occupy a significant fraction of the image. Look for flare near very bright objects, although its effects can also be seen far
away from the actual source (or even throughout the image).
Flare can take many forms, and this may include just one or all of the polygonal shapes, bright streaks, or overall washed out look (veiling flare) shown above.

BACKGROUND: HOW IT HAPPENS


All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Lens flare is caused by non-image light which does not pass (refract) directly along its intended path, but instead reflects internally on
lens elements any number of times (back and forth) before finally reaching the film or digital sensor.

Note: The aperture above is shown as being behind several lens elements.
Lens elements often contain some type of anti-reflective coating which aims to minimize flare, however no multi-element lens eliminates it entirely. Light sources will still reflect a small fraction of their light, and this reflected light
becomes visible as flare in regions where it becomes comparable in intensity to the refracted light (created by the actual image). Flare which appears as polygonal shapes is caused by light which reflects off the inside edges of the lens
aperture (diaphragm), shown above.

Although flare is technically caused by internal reflections, this often requires very intense light sources in order to become significant (relative to refracted light). Flare-inducing light sources may include the sun, artificial lighting and
even a full moon. Even if the photo itself contains no intense light sources, stray light may still enter the lens if it hits the front element. Ordinarily light which is outside the angle of view does not contribute to the final image, but if this
light reflects it may travel an unintended path and reach the film/sensor. In the visual example with flowers, the sun was not actually in the frame itself, but yet it still caused significant lens flare.

REDUCING FLARE WITH LENS HOODS


A good lens hood can nearly eliminate flare caused by stray light from outside the angle of view. Ensure that this hood has a completely non-reflective inner surface, such as felt, and that there are no regions which have rubbed off.
Although using a lens hood may appear to be a simple solution, in reality most lens hoods do not extend far enough to block all stray light. This is particularly problematic when using 35 mm lenses on a digital SLR camera with a "crop
factor," because these lens hoods were made for the greater angle of view. In addition, hoods for zoom lenses can only be designed to block all stray light at the widest focal length.

Petal lens hoods often protect better than non-petal (round) types. This is because petal-style hoods take into account the aspect ratio of the camera's film or digital sensor, and so the angle of view is greater in one direction than the other.
If the lens hood is inadequate, there are some easy but less convenient workarounds. Placing a hand or piece of paper exterior to the side of the lens which is nearest the flare-inducing light source can mimic the effect of a proper lens
hood. On the other hand, it is sometimes hard to gauge when this makeshift hood will accidentally become part of the picture. A more expensive solution used by many pros is using adjustable bellows. This is just a lens hood which
adjusts to precisely match the field of view for a given focal length.
Another solution to using 35 mm lenses and hoods on a digital SLR with a crop factor is to purchase an alternative lens hood. Look for one which was designed for a lens with a narrower angle of view (assuming this still fits the hood
mount on the lens). One common example is to use the EW-83DII hood with Canon's 17-40 f/4L lens, instead of the one it comes with. The EW-83DII hood works with both 1.6X and 1.3X (surprisingly) crop factors as it was designed
to cover the angle of view for a 24 mm lens on a full-frame 35 mm camera. Although this provides better protection, it is still only adequate for the widest angle of view for a zoom lens.
Despite all of these measures, there is no perfect solution. Real-world lens hoods cannot protect against stray light completely since the "perfect" lens hood would
have to extend all the way out to the furthest object, closely following the angle of view.
Unfortunately, the larger the lens hood the better-- at least when only considering its light-blocking ability. Care should still be taken that this hood does not block any of the actual image light.

INFLUENCE OF LENS TYPE


In general, fixed focal length (or prime) lenses are less susceptible to lens flare than zoom lenses. In addition to having a lens hood that cannot be adequate at every focal length, more complicated zoom lenses often have to contain more lens elements.
Zoom lenses therefore have more internal surfaces from which light can reflect.
Wide angle lenses are often designed to be more flare-resistant to bright light sources, mainly because the manufacturer knows that these will likely have the sun within or near the angle of view.
Modern high-end lenses typically contain better anti-reflective coatings. Some older lenses made by Leica and Hasselblad do not contain any special coatings, and can thus flare up quite significantly under even soft lighting.

MINIMIZING FLARE THROUGH COMPOSITION


Flare is thus ultimately under the control of the photographer, based on where the lens is pointed and what is included within the frame.
Although photographers never like to compromise their artistic flexibility for technical reasons, certain compositions can be very effective at minimizing flare. The best solutions
are those where both artistic intent and technical quality coexist.
One effective technique is to place objects within your image such that they partially or completely obstruct any flare-inducing light sources. The image on the left shows a cropped region within a photo
where a tree trunk partially obstructed a street light during a long exposure. Even if the problematic light source is not located within the image, photographing from a position where that source is
obstructed can also reduce flare.
The best approach is of course to shoot with the problematic light source to your back, although this is usually either too limiting to the composition or not possible. Even changing the angle of the lens
slightly can still at least change the appearance and position of the flare.

VISUALIZING FLARE WITH THE DEPTH OF FIELD PREVIEW


The appearance and position of lens flare changes depending on the aperture setting of the photo. The viewfinder image in a SLR camera represents how the scene appears only when the aperture is wide open (to create the brightest
image), and so this may not be representative of how the flare will appear after the exposure. The depth of field preview button can be used to simulate what the flare will look like for other apertures, but beware that this will also darken
the viewfinder image significantly.
The depth of field preview button is usually found at the base of the lens mount, and can be pressed to simulate the streaks and polygonal flare shapes. This button is still inadequate for simulating how "washed out" the final image will
appear, as this flare artifact also depends on the length of the exposure (more on this later).
OTHER NOTES
Lens filters, as with lens elements, need to have a good anti-reflective coating in order to reduce flare. Inexpensive UV, polarizing, and neutral density filters can all increase flare by introducing additional surfaces which light can reflect
from.
If flare was unavoidable and it produced a washed out image (due to veiling flare), the levels tool and local contrast enhancement can both help regain the appearance of contrast.

10.HYPERFOCAL DISTANCE -
Focusing your camera at the hyperfocal distance ensures maximum sharpness from half this distance all the way to infinity. The hyperfocal distance is particularly useful in landscape photography, and understanding it will help you
maximize sharpness throughout your image by making the most of your depth of field-- thereby producing a more detailed final print. Knowing it for a given focal length and aperture can be tricky; this section explains how
hyperfocal distance is calculated, clears up a few misconceptions, and provides a hyperfocal chart calculator. I do not recommend using this distance "as is," but instead suggest using it as a reference point.

Front Focus Back Focus Front-Center Focus

Note how only the right image has words which are (barely) legible at all distances. Somewhere between the nearest and furthest subject distance lies a focal point which maximizes average sharpness throughout, although this is rarely
halfway in between. The hyperfocal distance uses a similar concept, except its bounds are from infinity to half the focus distance (and the amount of softness shown above would be unacceptable).

WHERE IT'S LOCATED

Where is this optimal focusing distance? The hyperfocal distance is defined as the focus distance which places the maximum allowable circle of confusion at infinity. If one were to focus any closer than this--if even by the slightest
amount--then at some distance beyond the focal plane there would be an object which is no longer within the depth of field. Alternatively, it is also true that if one focuses at a very distant object on the horizon (~infinity), then the closest
distance which is still within the depth of field will also be the hyperfocal distance. To calculate its location precisely, use the hyperfocal chart at the bottom of this page.
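
For reference, the hyperfocal distance is usually computed as H = f² / (N × c) + f, where f is the focal length, N the f-number and c the maximum allowable circle of confusion. A small sketch follows; the 0.03 mm circle of confusion is the manufacturers' standard discussed earlier and is an assumption you may want to tighten.

```python
# Sketch of the standard hyperfocal distance formula: H = f^2 / (N * c) + f.
# Focusing at H keeps everything from H/2 to infinity within the depth of field.

def hyperfocal_m(focal_mm, f_stop, coc_mm=0.03):
    h_mm = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    return h_mm / 1000

h = hyperfocal_m(focal_mm=24, f_stop=11, coc_mm=0.03)
print(f"Hyperfocal distance: {h:.1f} m, acceptably sharp from {h / 2:.1f} m to infinity")
```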
PRECAUTIONS
The problem with the hyperfocal distance is that objects in the far background (treated as ~infinity) are on the extreme outer edge of the depth of field. These objects therefore barely meet what is defined
to be "acceptably sharp." This seriously compromises detail, considering that most people can see features 1/3 the size of those used by most lens manufacturers for their circle of confusion (see
"Understanding Depth of Field"). Sharpness at infinity is particularly important for those landscape images that are very background-heavy.
Sharpness can be a useful tool for adding emphasis, but blind use of the hyperfocal distance can neglect regions of a photo which may require more sharpness than others. A finely detailed foreground
may demand more sharpness than a hazy background (left). Alternatively, a naturally soft foreground can often afford to sacrifice some softness for the background. Finally, some images work best with
a very shallow depth of field (such as portraits), since this can separate foreground objects from an otherwise busy background.

When taking a hand-held photograph, one often has to choose where to allocate the most sharpness (due to aperture and shutter speed limitations). These situations call for quick judgment, and the hyperfocal distance is not always the
best option.

RULE OF THUMB FOR FINITE SCENES


What if your scene does not extend all the way to the horizon, or excludes the near foreground? Although the hyperfocal distance no longer applies, there is still an optimal focus distance between the foreground and background.

Many use a rule of thumb which states that you should focus roughly 1/3 of the way into your scene in order to achieve maximum sharpness throughout. I encourage you to ignore such advice since this distance is rarely optimal; the
position actually varies with subject distance, aperture and focal length. The fraction of the depth of field which is in front of the focal plane approaches 1/2 for the closest focus distances, and decreases all the way to zero by the time the
focus distance reaches the hyperfocal distance. The "1/3 rule of thumb" is correct at just one distance in between these two, but nowhere else. To calculate the location of optimal focus precisely, please use the depth of field calculator.
Ensure that both the nearest and furthest distances of acceptable sharpness enclose your scene.

IN PRACTICE
The hyperfocal distance is best implemented when the subject matter extends far into the distance, and if no particular region requires more sharpness than another. Even so, I also suggest either using a more rigorous requirement for
"acceptably sharp," or focusing slightly further such that you allocate more sharpness to the background. Manually focus using the distance markers on your lens, or by viewing the distance listed on the LCD screen of your compact
digital camera (if present).
You can calculate "acceptably sharp" such that any softness is not perceptible by someone with 20/20 vision, given your expected print size and viewing distance. Just use the hyperfocal chart at the bottom of the page, but instead modify
the eyesight parameter from its default value. This will require using a much larger aperture number and/or focusing further away in order to keep the far edge of the depth of field at infinity.
Using too large of an aperture number can be counterproductive since this begins to soften your image due to an effect called "diffraction." This softening is irrespective of an object's location relative to the depth of field, so the
maximum sharpness at the focal plane can drop significantly. For 35 mm and other similar SLR cameras, this will become significant beyond about f/16. For compact digital cameras, there is usually no worry since these are often
limited to a maximum of f/8.0 or less.

Hyperfocal Chart Calculator (interactive on the original web page)
Inputs: maximum print dimension, viewing distance, eyesight, camera type
Note: CF = "crop factor" (commonly referred to as the focal length multiplier)
Hyperfocal chart (interactive on the original web page): apertures of f/2.8, f/4.0, f/5.6, f/8.0, f/11, f/16, f/22 and f/32 versus focal lengths of 16, 24, 35, 50, 85, 135 and 200 mm; the chart fills in the hyperfocal distance for each combination.
11.MACRO CAMERA LENSES -
A macro lens literally opens up a whole new world of photographic subject matter. It can even cause one to think differently about everyday objects. However, despite these exciting possibilities, macro photography is also often a highly
meticulous and technical endeavor. Since fine detail is often a key component, macro photos demand excellent image sharpness, which in turn requires careful photographic technique. Concepts such as magnification, sensor size, depth of
field and diffraction all take on new importance. This advanced tutorial provides a technical overview of how these concepts interrelate.

Photo courtesy of Piotr Naskrecki, author of "The Smaller Majority."

MAGNIFICATION
Magnification describes the size an object will appear on your camera's sensor, compared to its size in real-life. For example, if the image on your camera's sensor is 25% as large as the actual object, then the magnification is said to be 1:4
or 0.25X. In other words, the more magnification you have, the smaller an object can be and still fill the image frame.

Photograph at 0.25X Magnification Photograph at 1.0X Magnification


(subject is further) (subject is closer)

Diagram only intended as a qualitative illustration; horizontal distances not shown to scale.
Magnification is controlled by just two lens properties: the focal length and the focusing distance. The closer one can focus, the more magnification a given lens will be able to achieve -- which makes sense because closer objects appear
to become larger. Similarly, a longer focal length (more zoom) achieves greater magnification, even if the minimum focusing distance remains the same.

Magnification Calculator (interactive on the original web page)
Inputs: focusing distance* (e.g. 450 mm), lens focal length** (e.g. 100 mm)
Output: magnification (e.g. 0.5X)
*Measured as the distance between camera sensor and subject.
**If using a full frame lens on a cropped sensor, you will need to use a focal length multiplier. Otherwise just use the actual lens focal length (without multipliers).

True macro lenses are able to capture an object on the camera's sensor at the same size as the actual object (termed a 1:1 or 1.0X macro). Strictly speaking, a lens is categorized as a "macro lens" only if it can achieve this 1:1
magnification. However, "macro" is often used loosely to also include close-up photography, which applies to magnifications of about 1:10 or greater. We'll use this loose definition of macro for the rest of the tutorial...
Note: Lens manufacturers inconsistently define the focusing distance; some use the sensor to subject distance, while others measure from the lens's front or center. If a maximum magnification value is available or measurable, this will provide more accurate results than the above calculator.
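
For the curious, the calculator's relationship between focusing distance, focal length and magnification can be sketched from the thin-lens equation, assuming a simple symmetric lens with negligible separation between its principal planes and a focusing distance measured from sensor to subject. This reproduces the 0.5X example above, but it is only an approximation for real, thick lenses.

```python
# Sketch: magnification from focusing distance (sensor to subject) and focal length.
# From the thin-lens equation, the subject and image distances are the two roots of
# x^2 - D*x + f*D = 0, and magnification = image distance / subject distance.
import math

def magnification(focusing_distance_mm, focal_length_mm):
    D, f = focusing_distance_mm, focal_length_mm
    if D < 4 * f:
        raise ValueError("A thin lens needs a focusing distance of at least 4x the focal length")
    root = math.sqrt(D * D - 4 * f * D)
    subject_dist = (D + root) / 2      # lens-to-subject distance
    image_dist = (D - root) / 2        # lens-to-sensor distance
    return image_dist / subject_dist

# 100 mm lens focused with 450 mm between sensor and subject -> 0.5X magnification.
print(f"{magnification(450, 100):.2f}X")
```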

MAGNIFICATION & SENSOR SIZE


However, despite its usefulness, magnification says nothing about what photographers often care about most: what is the smallest object that can fill the frame? Unfortunately, this depends on the camera's sensor size -- of which there's a
wide diversity these days.

Full size object (a 24 mm diameter US quarter), a full frame SLR camera at 0.25X, and a compact camera at 0.25X.
All illustrations above are shown to scale. The compact camera example uses a 1/1.7" sensor size (7.6 x 5.7 mm). A US quarter was chosen because it has roughly the same height as a full frame 35 mm sensor.
In the above example, even though the quarter is magnified to the same 0.25X size at each camera's sensor, the compact camera's smaller sensor is able to fill the frame with the image. Everything else being equal, a smaller sensor is
therefore capable of photographing smaller subjects.
Smallest Subject Calculator (interactive on the original web page)
Inputs: magnification, sensor size
Output: smallest subject which can fill the image (as measured along the photo's shortest dimension)
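
The relationship behind this calculator is straightforward: the smallest subject that can fill the frame is the sensor's short dimension divided by the magnification. A short sketch, using the sensor dimensions quoted above:

```python
# Smallest subject that can fill the frame = sensor short side / magnification.

def smallest_subject_mm(sensor_short_side_mm, magnification):
    return sensor_short_side_mm / magnification

print(smallest_subject_mm(24.0, 0.25))   # full frame (24 mm short side) at 0.25X -> 96 mm
print(smallest_subject_mm(5.7, 0.25))    # 1/1.7" compact (5.7 mm short side) at 0.25X -> ~23 mm
```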

LENS EXTENSION & EFFECTIVE F-STOP


In order for a camera lens to focus progressively closer, the lens apparatus has to move further from the camera's sensor (called "extension"). For low magnifications, the extension is tiny, so the lens is always at the expected distance of
roughly one focal length away from the sensor. However, once one approaches 0.25-0.5X or greater magnifications, the lens becomes so far from the sensor that it actually behaves as if it had a longer focal length. At 1:1 magnification,
the lens moves all the way out to twice the focal length from the camera's sensor:
Choose a Magnification: 1:2 (0.5X) 1:1 (1.0X)

Note: Diagram assumes that the lens is symmetric (pupil magnification = 1).
The most important consequence is that the lens's effective f-stop increases*. This has all the usual characteristics, including an increase in the depth of field, a longer exposure time and a greater susceptibility to diffraction. In fact, the
only reason "effective" is even used is because many cameras still show the uncompensated f-stop setting (as it would appear at low magnification). In all other respects though, the f-stop really has changed.
*Technical Notes:
The reason that the f-stop changes is because this actually depends on the lens's focal length. An f-stop is defined as the ratio of the aperture diameter to the focal length. A 100 mm lens with an aperture
diameter of 28 mm will have an f-stop value of f/2.8, for example. In the case of a macro lens, the f-stop increases because the effective focal length increases -- not because of any change in the aperture
itself (which remains at the same diameter regardless of magnification).

A rule of thumb is that at 1:1 the effective f-stop becomes about 2 stops greater than the value set using your camera. An aperture of f/2.8 therefore becomes more like f/5.6, and f/8 more like f/16, etc. However, this rarely requires
additional action by the photographer, since the camera's metering system automatically compensates for the drop in light when it calculates the exposure settings:

Reduced light from 2X magnification (left); after an 8X longer exposure time (right). Photo courtesy of Piotr Naskrecki.


For other magnifications, one can estimate the effective f-stop as follows:
Effective F-Stop = F-Stop x (1 + Magnification)

For example, if you are shooting at 0.5X magnification, then the effective f-stop for a lens set to f/4 will be somewhere between f/5.6 and f/6.3. In practice, this will mean that you'll need a 2-3X longer exposure time, which might make
the difference between being able to take a hand-held shot and needing to use a tripod.
Technical Notes:
The above formula works best for normal lenses (near 50 mm focal length). Using this formula for macro lenses with much longer focal lengths, such as 105 mm or 180 mm, will tend to slightly
underestimate the effective lens f-stop. For those interested in more accurate results, you will need to use the formula below along with knowing the pupil magnification of your lens:

Effective F-Stop = F-Stop x (1 + Magnification / Pupil Magnification)

Canon's 180 mm f/3.5L macro lens has a pupil magnification of 0.5 at 1:1, for example, resulting in a 50% larger f-stop than if one were to have used the simpler formula. However, using the pupil
magnification formula probably isn't practical for most situations. The biggest problem is that pupil magnification changes depending on focusing distance, which introduces yet another formula. It's also
rarely published by camera lens manufacturers.
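
Both versions of the effective f-stop formula are easy to compute; the sketch below reproduces the examples above (f/2.8 becoming roughly f/5.6 at 1:1, and the 180 mm macro at 1:1 with a pupil magnification of 0.5 coming out about 50% higher than the simple estimate).

```python
# Sketch of the two effective f-stop formulas given above.

def effective_f_stop(f_stop, magnification, pupil_magnification=1.0):
    return f_stop * (1 + magnification / pupil_magnification)

print(effective_f_stop(2.8, 1.0))                          # f/2.8 at 1:1 -> f/5.6 (simple formula)
print(effective_f_stop(3.5, 1.0, pupil_magnification=0.5)) # 180 mm macro example at 1:1 -> f/10.5
```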
Other consequences of the effective aperture include autofocus ability and viewfinder brightness. For example, most SLR cameras lose the ability to autofocus when the minimum f-stop becomes greater than f/5.6. As a result, lenses
with minimum f-stop values of greater than f/2.8 will lose the ability to autofocus when at 1:1 magnification. In addition, the viewfinder may also become unreasonably dark when at high magnification. To see what this would look like,
one can always set their camera to f/5.6 or f/8 and press the "depth of field preview" button.

MACRO DEPTH OF FIELD


The more one magnifies a subject, the shallower the depth of field becomes. With macro and close-up photography, this can become razor thin -- often just millimeters:

Example of a close-up photograph with a very shallow depth of field.


Photo courtesy of Piotr Naskrecki.

Macro photos therefore usually require high f-stop settings to achieve adequate depth of field. Alternatively, one can make the most of what little depth of field they have by aligning their subject matter with the plane of sharpest focus.
Regardless, it's often helpful to know how much depth of field one has available to work with:
Macro Depth of Field Calculator (interactive on the original web page)
Inputs: magnification, sensor size, selected lens aperture
Output: depth of field
Note: Depth of field defined based on what would appear sharp in an 8x10 in print viewed from a distance of one foot; based on standard circle of confusion for 35 mm cameras of 0.032 mm. For magnifications above 1X, output is in units of µm (aka microns, or 1/1000 of a mm).

Note that depth of field is independent of focal length; a 100 mm lens at 0.5X therefore has the same depth of field as a 65 mm lens at 0.5X, for example, as long as they are at the same f-stop. Also, unlike with low magnification
photography, the depth of field remains symmetric about the focusing distance (front and rear depth of field are equal).
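For those who prefer a formula to the calculator, a commonly used closed-form approximation (consistent with the circle of confusion noted above, though not necessarily the exact code behind the article's calculator) is DoF ≈ 2 x c x N x (1 + m) / m^2, where c is the circle of confusion, N the selected f-stop and m the magnification. A minimal Python sketch:

def macro_depth_of_field_mm(f_stop, magnification, coc_mm=0.032):
    """Approximate total depth of field (in mm) at close-up magnifications.

    DoF ~ 2 * c * N * (1 + m) / m^2. Note that focal length does not appear:
    only the f-stop and the magnification matter.
    """
    return 2 * coc_mm * f_stop * (1 + magnification) / magnification ** 2

dof = macro_depth_of_field_mm(8, 1.0)           # f/8 at 1:1 on a 35 mm camera
print(f"{dof:.2f} mm ({dof * 1000:.0f} um)")    # ~1.02 mm -- razor thin indeed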
Technical Notes:
Contrary to first impressions, there's no inherent depth of field advantage for smaller camera sensors. While it's true that a smaller sensor will have a greater depth of field at the same f-stop, this isn't a fair
comparison, because the larger sensor can get away with a higher f-stop before diffraction limits resolution at a given print size. When both sensor sizes produce prints with the same diffraction-limited
resolution, both sensor sizes will have the same depth of field. The only inherent advantage is that the smaller sensor requires a much shorter exposure time in order to achieve a given depth of field.
MACRO DIFFRACTION LIMIT
Diffraction is an optical effect which limits the resolution of your photographs -- regardless of how many megapixels your camera may have (see diffraction in photography tutorial). Images are more susceptible to diffraction as the f-stop
increases; at high f-stop settings, diffraction becomes so pronounced that it begins to limit image resolution (the "diffraction limit"). After that, any subsequent f-stop increase only acts to further decrease resolution.
However, at high magnification the effective f-stop is actually what determines the diffraction limit -- not necessarily the one set by your camera. This is accounted for below:
Macro Diffraction Limit Calculator
Inputs: Magnification, Sensor Size, Resolution (Megapixels)
Output: Diffraction Limited F-Stop

Keep in mind that the onset of diffraction is gradual, so apertures slightly larger or smaller than the above diffraction limit will not all of a sudden look better or worse, respectively. Furthermore, the above is only a theoretical limit; actual
results will also depend on the characteristics of your specific lens. Finally, the above calculator is for viewing the image at 100% on-screen; small or large print sizes may mean that the diffraction-limited f-stop is actually greater or less
than the one suggested above, respectively.
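As a rough sketch of how such a limit can be estimated: diffraction is often assumed to become limiting once the Airy disk (diameter ≈ 2.44 x wavelength x effective f-stop) spans about two pixels, and at macro distances it is the effective f-stop N x (1 + m) that matters. The two-pixel criterion, the 550 nm wavelength and the 3:2 aspect ratio below are common working assumptions for 100% on-screen viewing, not figures taken from this article.

import math

def diffraction_limited_f_stop(sensor_width_mm, megapixels, magnification=0.0,
                               aspect=3 / 2, wavelength_mm=550e-6):
    """Estimate the set f-stop at which diffraction starts to limit resolution.

    Assumes the limit is reached when the Airy disk diameter (2.44 * wavelength * N_eff)
    spans roughly two pixels, with N_eff = N * (1 + magnification).
    """
    h_pixels = math.sqrt(megapixels * 1e6 * aspect)      # horizontal pixel count
    pixel_pitch_mm = sensor_width_mm / h_pixels
    n_eff_limit = 2 * pixel_pitch_mm / (2.44 * wavelength_mm)
    return n_eff_limit / (1 + magnification)             # convert back to the set f-stop

# A 36 mm wide (full frame) sensor with 12 megapixels, shooting at 1:1:
print(round(diffraction_limited_f_stop(36, 12, magnification=1.0), 1))   # ~f/6.3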
With macro photography one is nearly always willing to trade some diffraction-induced softening for greater depth of field. Don't be afraid to push the f-stop beyond the diffraction limit. With digital SLR cameras in general,
aperture settings of f/11-f/16 provide a good trade-off between depth of field and sharpness, but f/22+ is sometimes necessary for extra (but softer) depth of field. Ultimately though, the best way to identify the optimal trade-off is to
experiment -- using your particular lens and subject matter.

WORKING DISTANCE & FOCAL LENGTH


The working distance of a macro lens describes the distance between the front of your lens and the subject. This is different from the closest focusing distance, which is instead (usually) measured from the camera's sensor to the subject.

Photo courtesy of Piotr Naskrecki


The working distance is a useful indicator of how much your subject is likely to be disturbed. While a close working distance may be fine for photographs of flowers and other stationary objects, it can disturb insects and other small
creatures (such as causing a bee to fly off of a flower). In addition, a subject in grass or other foliage may make closer working distances unrealistic or impractical. Close working distances also have the potential to block ambient light and
create a shadow on your subject.
At a given magnification, the working distance generally increases with focal length. This is often the most important consideration when choosing between macro lenses of different focal lengths. For example, Canon's 100 mm f/2.8
macro lens has a working distance of just ~150 mm (6") at 1:1 magnification, whereas Canon's 180 mm f/3.5L macro lens has a more comfortable working distance of ~300 mm (12") at the same magnification. This can often make
the difference between being able to photograph a subject and scaring it away.
However, another consideration is that shorter focal lengths often provide a more three-dimensional and immersive photograph. This is especially true with macro lenses, because the greater effective focal length will tend to flatten
perspective. Using the shortest focal length available will help offset this effect and provide a greater sense of depth.
CLOSE-UP IMAGE QUALITY
Higher subject magnification also magnifies imperfections from your camera lens. These include chromatic aberrations (magenta or blue halos along high contrast edges, particularly near the corners of the image), image distortion and
blurring. All of these are often most apparent when using a non-macro lens at high magnification; by contrast, a true macro lens achieves optimal image quality near its minimum focusing distance.
The example below was taken at 0.3X magnification using a compact camera at its closest focusing distance. Since this is a standard non-macro lens, image quality clearly suffers:

Close-up at 0.3X using a Compact Camera
Crops shown at 100% zoom

Above images are depicted even after aggressive capture sharpening has been applied.
Note how the chromatic aberrations and image softness are more pronounced further from the center of the image (red crop). While the central crop (in blue) isn't as sharp as one would hope, chromatic aberration is far less apparent.

12.USING WIDE ANGLE LENSES -


A wide angle lens can be a powerful tool for exaggerating depth and relative size in a photo. However, it's also one of the most difficult types of lenses to learn how to use. This page dispels some common misconceptions, and discusses
techniques for taking full advantage of the unique characteristics of a wide angle lens.

16mm ultra-wide angle lens - sunset near Death Valley, California, USA

OVERVIEW
A lens is generally considered to be "wide angle" when its focal length is less than around 35 mm (on a full frame; see camera lenses: focal length & aperture). This translates into an angle of view which is greater than about 55° across
your photo's widest dimension. The definition of ultra-wide is a little fuzzier, but most agree that this realm begins with focal lengths somewhere around 20-24 mm and less. On a compact camera, wide angle is often what you get when you've fully
zoomed out; however, ultra-wide is usually not available without a special lens adapter.
Regardless, the key concept is this: the shorter the focal length, the more you will tend to notice the unique effects of a wide angle lens.
The above diagrams depict the maximum angles that light rays can take when hitting your camera's sensor. The location where light rays cross is not necessarily equal to the focal length, but is instead roughly proportional to this distance.
The angle of view therefore still increases similarly.
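The numbers above can be checked with the standard thin-lens relationship AoV = 2 x arctan(d / 2f), where d is the sensor dimension and f the focal length. The short sketch below assumes a full frame sensor that is 36 mm across its widest dimension:

import math

def angle_of_view_deg(focal_length_mm, sensor_dimension_mm=36.0):
    """Angle of view across one sensor dimension, in degrees."""
    return math.degrees(2 * math.atan(sensor_dimension_mm / (2 * focal_length_mm)))

for f in (16, 24, 35, 50, 135, 300):
    print(f"{f:>3} mm: {angle_of_view_deg(f):5.1f} degrees")
# 16 mm gives ~97 degrees, 35 mm gives ~55 degrees (the wide angle threshold above),
# and longer focal lengths give progressively narrower angles of view.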
What makes a wide angle lens unique? A common misconception is that wide-angle lenses are primarily used when you cannot step far enough away from your subject, yet still want to capture all of it in a single camera
frame. Unfortunately, if one were to only use it this way, they'd really be missing out. In fact, wide angle lenses are often used for just the opposite: when you want to get closer to a subject!
So, let's take a closer look at just what makes a wide angle lens unique:
• Its image encompasses a wide angle of view
• It generally has a close minimum focusing distance
Although the above characteristics might seem pretty basic, they result in a surprising range of possibilities. The rest of this page focuses on techniques for how to best use these traits for maximal impact in wide angle photography.

WIDE ANGLE PERSPECTIVE


Obviously, a wide angle lens is special because it has a wide angle of view -- but what does this actually do? A wide angle of view means that both the relative size and distance are exaggerated when comparing near and far objects.
This causes nearby objects to appear gigantic, and far away objects to appear unusually tiny and distant. The reason for this is the angle of view:

Wide Angle Lens Telephoto Lens


(objects are very different sizes) (objects are similar in size)

Even though the two cylinders above are the same distance apart when photographed with each lens, their relative sizes are very different when one fills the frame with the closest cylinder. With a wider angle of view, further objects
therefore comprise a much lower fraction of the total angle of view.
A misconception is that a wide angle lens affects perspective, but strictly speaking, this isn't true. Perspective is only influenced by where you are located when you take a photograph. However, in practical use, wide-angle lenses often
cause you to move much closer to your subject -- which does affect perspective.
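A quick numerical illustration of this point, using made-up distances: the apparent (angular) size of an object depends only on how far away you stand, so the relative size of two objects is set entirely by your position, not by the lens.

def apparent_size_ratio(near_distance_m, far_distance_m):
    """Ratio of apparent sizes of two identical objects (apparent size ~ 1 / distance)."""
    return far_distance_m / near_distance_m

# Standing 1 m from the near object, with the far object 5 m behind it:
print(apparent_size_ratio(1, 6))     # 6.0 -> the near object looks six times larger

# Stepping back to 10 m (and zooming in to keep the near object the same size in the frame):
print(apparent_size_ratio(10, 15))   # 1.5 -> the two objects now look similar in size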

Exaggerated 3 inch Flowers in Cambridge, UK.


Uses a 16mm ultra-wide angle focal length.
This exaggeration of relative size can be used to add emphasis and detail to foreground objects, while still capturing expansive backgrounds. If you plan on using this effect to full impact, you'll want to get as close as possible to the
nearest subject in the scene. In the extreme wide angle example to the left, the nearest flowers are nearly touching the front of the lens, which greatly exaggerates their size. In real life, these flowers are only a few inches wide!

Disproportionate body parts


caused by a wide angle lens.
However, one needs to take extra caution when photographing people. Their nose, head or other features can become greatly out of proportion if you are too close to them when taking the photo. This distortion of proportions is in part why
longer focal lengths (with narrower angles of view) are much more common for traditional portrait photography.
In the example to the right, note how the person's head has become abnormally large relative to their body. This can be a useful tool for adding drama or extra character to a candid shot, but certainly isn't how most people would want to
be depicted in a standard portrait.
Finally, because far away objects become quite small, sometimes it's a good idea to include some foreground elements to anchor the composition. Otherwise a landscape shot (taken at eye level) can appear overly busy and lack that extra
something that's needed to draw the eye into the photo.
Regardless, don't be afraid to get much closer! This is where wide angle really shines. Just take extra care with the composition though; extremely close objects can move a lot inside the image due to camera movements of even a fraction
of an inch. It can therefore become quite difficult to frame subjects the way you want.

CONVERGING VERTICALS
Whenever a wide angle lens is pointed above or below the horizon, it will cause otherwise parallel vertical lines to appear as if they are converging. Any lens does this -- even telephoto lenses -- it's just that a wider expanse of
converging lines is visible with a wide angle lens. Further, with a wide angle lens, even small changes in composition will alter the location of the vanishing point by a large amount -- resulting in a big difference in how sharply lines seem
to converge.
In this case, the vanishing point is the direction that you are pointing your camera. Move your mouse over the image below to see a simulation of what happens when you point your camera above or below the horizon:

Camera Aimed Above the Horizon Camera Aimed Below the Horizon

In the above example, the vanishing point didn't change by a whole lot as a fraction of the total image -- but this had a huge impact on the building. In each case, the building appears to be either falling in on or away from the viewer.
Although converging vertical lines are generally avoided in architectural photography for the above reasons, one can also sometimes use these to their advantage:
left: Wide angle shot of trees on Vancouver Island, Canada.
right: King's College Chapel, Cambridge, UK.
In the above trees example, a wide angle lens was used to capture the towering trees in a way that makes them appear to be enveloping the viewer. A big reason for this is that they look as if they are coming from all directions and
converging in the middle of the image -- even though they are actually all parallel to one another.
Similarly, the architectural photo to the right was taken close to the door in order to exaggerate the apparent height of the chapel. On the other hand, this also gives the unwanted appearance that the building is about to fall over backwards.
The only ways to reduce converging verticals are to either (i) aim your camera closer to the horizon, even if this means that you'll capture a lot of ground in addition to the subject (which you can crop out later), (ii) get much further from
your subject and use a lens with a longer focal length, (iii) use Photoshop or other software to distort the photo so that vertical lines diverge less, or (iv) use a tilt/shift lens to control perspective.
Unfortunately, all of these options have their drawbacks, whether it be resolution in the case of (i) and (iii), convenience/perspective in the case of (ii), or cost, technical knowledge and a slight reduction in image quality in the case of (iv).

INTERIORS & ENCLOSED SPACES


A wide angle lens can be an absolute requirement in enclosed spaces, simply because one cannot move far enough away from their subject to get all of it in the photo (using a normal lens). A common example is photography of
interior rooms or other indoor architecture. This kind of photography is also perhaps the easiest way to make the most of a wide angle lens -- in part because it forces you to be close to the subject.

left: 16mm focal length - Antelope Canyon, Arizona, USA.


right: Spiral staircase in New Court, St John's College, Cambridge
In both examples above, you could not move more than a few feet in any direction -- and yet the photos do not give any appearance of being cramped.
POLARIZING FILTERS

Capitol Reef National Park, Utah, USA.


Using a polarizing camera lens filter should almost always be avoided with a wide angle lens. A key trait of a polarizer is that its influence varies depending on the angle of the subject relative to the sun. When you face your camera
90° from where the sun is coming from, you will maximize its effect; similarly, whenever you face your camera directly away from or into the sun, you will minimize the effect of a polarizer.
With an ultra-wide angle lens, one edge of your image frame might be nearly facing the sun, whereas the opposing edge might be facing 90° away from the sun. This means that you will be able to see the changing influence of your
polarizer across a single photo, which is usually undesirable.
In the example to the left, the blue sky clearly changes in saturation and brightness as you move across the image from left to right.

MANAGING LIGHT ACROSS A WIDE ANGLE

GND filter example - lighthouse in Nora, Sardinia.


A common hurdle with wide angle lenses is strong variation in the intensity of light across an image. Using an ordinary exposure, uneven light can make some parts of the image over-exposed, while also leaving other parts underexposed
-- even though our eye would have adjusted to this changing brightness as we looked in different directions. One therefore needs to take extra care when determining the desired exposure.
For example, in landscape photography the foreground foliage is often much less intensely lit than the sky or a distant mountain. This often results in an over-exposed sky and/or an under-exposed foreground. Most photographers
therefore use what is called a graduated neutral density (GND) filter to overcome this uneven lighting.
In the example above, the GND filter partially obstructed some of the light from the bright sky, while also gradually letting in more and more light for positions progressively lower in the photo. At the bottom of the photo, the GND filter
let in the full amount of light. Move your mouse over the image above to see what it would have looked like without a GND filter. Also take a look at the tutorials on camera lens filters and high dynamic range (HDR) for additional
examples.
A wide angle lens is also much more susceptible to lens flare, in part because the sun is much more likely to enter into the composition. It can also be difficult to effectively shield the sides of the lens from stray light using a lens hood,
since a hood deep enough to do so would also block some of the image-forming light across the lens's wide angle of coverage.
WIDE ANGLE LENSES & DEPTH OF FIELD
Note that nowhere in this page is it mentioned that a wide angle lens has a greater depth of field. Unfortunately, this is another common misconception. If you are magnifying your subject by the same amount (meaning that they fill the
image frame by the same proportion), then a wide angle lens will give the same* depth of field as a telephoto lens.
*Technical Note: for situations of extreme magnification, the depth of field may differ by a small amount. However, this is an extreme case and is not relevant for the uses discussed in this page. See the tutorial on depth of field for a more
detailed discussion of this topic.
The reason that wide angle lenses get the reputation of improving depth of field is not because of any inherent property with the lens itself. It's because of how they're most often used. People rarely get close enough to their subject to have
them fill the same amount of the frame with a wide angle lens as they do with lenses that have narrower angles of view.

SUMMARY: HOW TO USE A WIDE ANGLE LENS


While there are no steadfast rules, you can often use your wide angle lens most effectively if you use the following four guidelines as starting points:
(1) Subject Distance. Get much closer to the foreground and physically immerse yourself amongst your subject.

A wide angle lens exaggerates the relative sizes of near and far subjects. To emphasize this effect it's important to get very close to your subject. Wide angle lenses also typically have much closer minimum focusing distances, and enable
your viewer to see a lot more in tight spaces.
(2) Organization. Carefully place near and far objects to achieve clear compositions.

Wide angle shots often encompass a vast set of subject matter, so it's easy for the viewer to get lost in the confusion. Experiment with different techniques of organizing your subject matter.
Many photographers try to organize their subject matter into clear layers, and/or to include foreground objects which might guide the eye into and across the image. Other times it's a simple near-far composition with a close-up subject
and a seemingly equidistant background.
(3) Perspective. Point your camera at the horizon to avoid converging verticals; otherwise be acutely aware of how these will impact your subject.

Even slight changes in where you point your camera can have a huge impact on whether otherwise parallel vertical lines will appear to converge. Pay careful attention to architecture, trees and other geometric objects.
(4) Distortion. Be aware of how edge and barrel distortion may impact your subject.
The two most prevalent forms of distortion are barrel and edge distortion. Barrel distortion causes otherwise straight lines to appear bulged if they don't pass through the center of the image. Edge distortion causes objects at the extreme
edges of the frame to appear stretched in a direction leading away from the center of the image.

13.USING TELEPHOTO LENSES -


You've probably heard that telephoto lenses are for enlarging distant subjects, but they're also a powerful artistic tool for affecting the look of your subject. They can normalize the size and distance difference between near and far objects,
and can make the depth of field appear more shallow. Telephoto lenses are therefore useful not only for wildlife photography, but also for landscape photography. Read on to learn techniques for utilizing the unique characteristics of a
telephoto lens . . .

300 mm telephoto lens - two cheetahs lying behind a log

OVERVIEW
A lens is generally considered to be "medium telephoto" when its focal length is greater than around 70 mm (on a full frame; see camera lenses: focal length & aperture). However, many don't consider a lens a "full telephoto" lens until its
focal length becomes greater than around 135 mm. This translates into an angle of view which is less than about 15° across your photo's widest dimension. On a compact camera with a 3-4X or greater zoom lens, telephoto is simply when
you've fully zoomed in. However, some compact cameras might require a special adapter in order to achieve full telephoto.
Regardless, the key concept is this: the longer the focal length, the more you will tend to notice the unique effects of a telephoto lens.

The above diagrams depict the maximum angles that light rays can take when hitting your camera's sensor. The location where light rays cross is not necessarily equal to the focal length, but is instead roughly proportional to this distance.
The angle of view therefore still decreases with focal length in a similar way.
Why use a telephoto lens? A common misconception is that telephoto lenses are just for capturing distant objects. While this is a legitimate use, there's a whole array of other possibilities, and often distant objects are better
photographed by simply getting a little closer. Yes, this isn't practical with a lion, but a pet or a person will likely appear better when they aren't photographed from afar. Why? The distance from your subject actually changes your
photo's perspective, even if your subject is still captured at the same size in your camera frame. Confused? More on this in the next section...

TELEPHOTO PERSPECTIVE
A telephoto lens is special because it has a narrow angle of view -- but what does this actually do? A narrow angle of view means that both the relative size and distance are normalized when comparing near and far objects. This
causes nearby objects to appear similar in size compared to far away objects -- even if the closer object would actually appear larger in person. The reason for this is the angle of view:
Wide Angle Lens Telephoto Lens
(objects are very different sizes) (objects are similar in size)

Even though the two cylinders above are the same distance apart, their relative sizes are very different when one uses either a wide angle lens or a telephoto lens to fill the frame with the closest cylinder. With a narrow angle of view,
further objects comprise a much greater fraction of the total angle of view.
A misconception is that a telephoto lens affects perspective, but strictly speaking, this isn't true. Perspective is only influenced by where you are located when you take a photograph. However, in practical use, the very fact that you're
using a telephoto lens may mean that you're far from your subject -- which does affect perspective.

Objects appear in proper proportion to one another.


Uses a 135 mm telephoto focal length.
This normalization of relative size can be used to give a proper sense of scale. For full impact, you'll want to get as far as possible from the nearest subject in the scene (and zoom in if necessary).
In the telephoto example to the left, the people in the foreground appear quite small compared to the background building. On the other hand, if a normal focal length lens were used, and one were closer to the foreground people, then they
would appear much larger relative to the size of the building.
However, normalizing the relative size too much can make the scene appear static, flat and uninteresting, since our eyes generally expect closer objects to be a little larger. Taking a photo of someone or something from very far
away should therefore be done only when necessary.
In addition to relative size, a telephoto lens can also make the distance between objects appear compressed. This can be beneficial when you're trying to emphasize the number of objects, or to enhance the appearance of congestion:

Exaggerated Crowd Density Exaggerated Flower Density

left: 135 mm focal length - congestion of punters on the River Cam - Cambridge, UK.
right: telephoto shot of flowers in Trinity College, Cambridge, UK.
In the example to the left, the boats all appear to be right next to each other -- even though they appeared much further from each other in person. On the right, the flowers and trees appear stacked on top of one another, when in reality this
image spans around 100 meters.

BRINGING FAR AWAY SUBJECTS CLOSER

320 mm detail shot of a parrot


Perhaps the most common use for a telephoto lens is to bring otherwise small and distant subjects closer, such as wildlife. This can provide a vantage point that would not otherwise be possible in real life. One should therefore pay careful attention
to far off detail and texture.

telephoto sunset
Furthermore, even if you were able to get a little closer to the subject, this may adversely impact the photograph because being closer might alter the subject's behavior. This is especially true when trying to capture candid photographs of
people; believe it or not, people usually act differently when they're aware that someone is taking their photograph.
Finally, consider this: since a telephoto lens encompasses a much narrower angle of view, you as the photographer can be much more selective with what you choose to contain within your camera frame. You might choose to capture just
the region of the sky right around the sunset (left), just the surfer on their wave, or just a tight region around someone's interesting facial expression. This added selectivity can make for very simple and focused compositions.

LANDSCAPES & LAYERING

130 mm telephoto shot using layered subject matter.


Photo taken on Mt. Baldy, California.
Standard photography teaching will often tell you that "a wide angle lens is for landscapes" and "a telephoto lens is for wildlife." Nonsense! Very powerful and effective compositions can still be made with the "inappropriate" type of lens.
However, such claims aren't completely unfounded. Telephoto lenses compress the sense of depth, whereas wide angle lenses exaggerate the sense of depth. Since spaciousness is an important quality in many landscapes, the rationale is
that wide angle lenses are therefore better suited.
However, telephoto landscapes just require different techniques. If you want to improve the sense of depth, a common telephoto technique is to compose the scene so that it's comprised of layered subject matter at distinctly different
distances. For example, the closest layer could be a foreground set of trees, the subsequent layers could be successively more distant hillsides, and the furthest layer could be the sky, ocean, and/or all other seemingly equidistant background
objects.

165 mm telephoto shot using layered subject matter - Mt. Hamilton, California
In the above example, the image would have seemed much less three-dimensional without the foreground layer of trees on the hill. Similarly, the separate layers of trees, clouds and background mountainside also give the first example
more depth. A telephoto lens can also enhance the effect of fog, haze or mist on an image, since it makes distant objects appear closer.

POINT OF FOCUS
For a given subject distance, a telephoto lens captures the scene with a much shallower depth of field than other lenses do. Out of focus distant objects are also made much larger, which enlarges their blur. It's therefore critical that you
achieve pinpoint accuracy with your chosen point of focus.

320 mm focal length - shallow depth of field telephoto shot of a cat amongst leaves
In the above example, the foreground fence was less than a foot from the cat's face -- yet it appears extremely out of focus due to the shallow depth of field. Even a misfocus of an inch could have therefore caused the cat's eyes to become
blurred, which would have ruined the intent of the photograph.
Fortunately, telephoto lenses are rarely subject to the "focus and recompose" errors caused by shorter focal lengths -- primarily because one is often much further from their subject. This means that you can use your central autofocus point
to achieve a focus lock, and then recompose your frame without worry of changing the distance at which objects are in sharpest focus (see tutorial on camera autofocus for more on this topic).

MINIMIZING CAMERA SHAKE


A telephoto lens may have a significant impact on how easy it is to achieve a sharp handheld photograph. Longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands. Think of this as if one were
trying to hold a laser pointer steady; when shining this pointer at a nearby object its bright spot ordinarily jumps around less than for objects further away.
Simulation of what happens when you try to aim a laser pointer at a point on a distant wall;
the larger absolute movements on the further wall are similar to what happens with camera shake when you are using a telephoto lens (since objects become more magnified).
Minimizing camera shake requires either shooting using a faster shutter speed or holding your camera steadier, or some combination of the two.
To achieve a faster shutter speed you will need to use a larger aperture (such as going from f/8.0 to f/2.8) and/or increase the ISO speed. However, both of these options have drawbacks, since a larger aperture decreases depth of field,
and a higher ISO speed increases image noise.
To hold your camera steadier, you can (i) use your other hand to stabilize the lens, (ii) try taking the photo while crouching, or (iii) lean your body or lens against another solid object. However, using a camera tripod or monopod is the
only truly consistent way to reduce camera shake.
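One widely quoted rule of thumb, offered here as an assumption rather than something this article prescribes, is to keep the handheld shutter speed no longer than roughly one over the effective (crop-adjusted) focal length. A minimal sketch:

def max_handheld_shutter_s(focal_length_mm, crop_factor=1.0):
    """Rough '1 / effective focal length' guideline for handheld shutter speed, in seconds."""
    return 1.0 / (focal_length_mm * crop_factor)

for f in (50, 135, 300):
    t = max_handheld_shutter_s(f, crop_factor=1.6)    # hypothetical 1.6X crop body
    print(f"{f} mm: about 1/{round(1 / t)} s or faster")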

TELEPHOTO LENSES & DEPTH OF FIELD


Note that I've been careful to say that telephoto lenses only decrease depth of field for a given subject distance. A telephoto lens itself does not have less depth of field. Unfortunately, this is a common misconception. If you are
magnifying your subject by the same amount (meaning that they fill the image frame by the same proportion), then a telephoto lens will give the same* depth of field as other lenses.
*Technical Note: for situations of extreme magnification, the depth of field may differ by a small amount. However, this is an extreme case and is not relevant for the uses discussed in this page. See the tutorial on depth of field for a more
detailed discussion of this topic.
The reason that telephoto lenses get the reputation of decreasing depth of field is not because of any inherent property of the lens itself. It's because of how they're most often used. People usually magnify their subject matter a lot more
with telephoto lenses than with lenses that have wider angles of view. In other words, people generally don't get further from their subject, so this subject ends up filling more of the frame. It's this higher magnification that causes
the shallower depth of field.

distracting out of focus background


However, a telephoto lens does enlarge out of focus regions (called "bokeh"), since it enlarges the background relative to the foreground. This may give the appearance of a shallower depth of field.
One should therefore pay close attention to how a background will look and be positioned when it's out of focus. For example, poorly-positioned out of focus highlights may prove distracting for a foreground subject (such as in the parrot
example).

14.TAKING PHOTOS IN FOG, MIST OR HAZE -


Photography in fog, mist or haze can give a wonderfully moody and atmospheric feel to your subjects. However, it's also very easy to end up with photos that look washed-out and flat. This techniques article uses examples to illustrate
how to make the most out of photos in these unique shooting environments.
Clare Bridge in the fog at night (version 1) - Cambridge, UK

OVERVIEW
Fog usually forms in the mid to late evening, and often lasts until early the next morning. It is also much more likely to form near the surface of water that is slightly warmer than the surrounding air. In this techniques article, we'll
primarily talk about fog, but the photographic concepts apply similarly to mist or haze.
Photographing in the fog is very different from the more familiar photography in clear weather. Scenes are no longer necessarily clear and defined, and they are often deprived of contrast and color saturation:

Examples of photos which appear washed-out and de-saturated due to the fog.
Both photos are from St John's College, Cambridge, UK.
In essence, fog is a natural soft box: it scatters light sources so that their light originates from a much broader area. Compared to a street lamp or light from the sun on a clear day, this dramatically reduces contrast:

A Lamp or the Sun on a Clear Day (High Contrast)

Light in the Fog, Haze or Mist (Low Contrast)

Scenes in the fog are also much more dimly lit -- often requiring longer exposure times than would otherwise be necessary. In addition, fog makes the air much more reflective to light, which often tricks your camera's light meter into
thinking that it needs to decrease the exposure. Just as with photographs in the snow, fog therefore usually requires dialing in some positive exposure compensation.
In exchange for all of these potential disadvantages, fog can be a powerful and valuable tool for emphasizing the depth, lighting, and shape of your subjects. As you will see later, these traits can even make scenes feel mysterious and
uniquely moody -- an often elusive, but well sought after prize for photographers. The trick is knowing how to make use of these unique assets -- without also having them detract from your subject.

EMPHASIZING DEPTH

Mathematical Bridge in Queens' College, Cambridge.


As objects become progressively further from your camera, not only do they become smaller, but they also lose contrast -- and sometimes quite dramatically. This can be both a blessing and a curse, because while it exaggerates the
difference between near and far objects, it also makes distant objects difficult to photograph in isolation.
In the example to the left, there are at least four layers of trees which cascade back towards the distant bridge. Notice how both color saturation and contrast drop dramatically with each successively distant tree layer. The furthest layer,
near the bridge, is reduced to nothing more than a silhouette, whereas the closest layer has near full color and contrast.

Southwest coast of Sardinia in haze.


Although there are no steadfast rules with photographing in the fog, it's often helpful to have at least some of your subject close to the camera. This way a portion of your image can contain high contrast and color, while also hinting at
what everything else would look like otherwise. This also serves to add some tonal diversity to the scene.

EMPHASIZING LIGHT

View of King's College Bridge


from Queens' College, Cambridge, UK.
Water droplets in the fog or mist make light scatter a lot more than it would otherwise. This greatly softens light, but also makes light streaks visible from concentrated or directional light sources. The classic example is the photo in a
forest with early morning light: when the photo is taken in the direction of this light, rays of light streak down from the trees and scatter off of this heavy morning air.
In the example to the right, light streaks are clearly visible from an open window and near the bridge, where a large tree partially obstructs an orange lamp. However, when the camera was moved just a few feet backwards, the streaks
from the window were no longer visible.

Spires above entrance to King's College, Cambridge


during BBC lighting of King's Chapel for the boy's choir.
The trick to making light rays stand out is to carefully plan your vantage point. Light rays will be most apparent if you are located close to (but not at) where you can see the light source directly. This "off-angle" perspective ensures
that the scattered light will both be bright and well-separated from the darker looking air.
On the other hand, if the fog is very dense or the light source is extremely concentrated, then the light rays will be clearly visible no matter what vantage point you have. The second example above was taken in air that was otherwise not
visibly foggy, but the light sources were extremely intense and concentrated. Additionally, the scattered light was much brighter relative to the sky because it was taken after sunset.
SHAPES & SILHOUETTES

Swan at night on the River Cam, Cambridge.


Fog can emphasize the shape of subjects because it downplays their internal texture and contrast. Often, the subject can even be reduced to nothing more than a simple silhouette.
In the photo to the left, the swan's outline has been greatly exaggerated because the low-lying fog has washed out nearly all remains of the wall behind the swan. Furthermore, the bright fog background contrasts prominently with the
relatively darker swan.

Rear gate entrance to Trinity


College, Cambridge, UK.

Just make sure to expose based on the fog -- and not the subject -- if you want this subject to appear as a dark silhouette. Alternatively, you could dial in a negative exposure compensation to make sure that objects do not turn out
too bright. You will of course also need to pay careful attention to the relative position of objects in your scene, otherwise one object's outline or border may overlap with another object.
In the example to the right, the closest object -- a cast iron gate -- stands out much more than it would otherwise against this tangled backdrop of tree limbs. Behind this gate, each tree silhouette is visible in layers because the branches
become progressively fainter the further they are in the distance.

PHOTOGRAPHING FROM WITHOUT


You've perhaps heard of the saying: "it's difficult to photograph a forest from within." This is because it can be hard to get a sense of scale by photographing just a cluster of trees -- you have to go outside the forest so you can see its
boundaries, and not have individual trees hamper this perspective. The very same technique can often be very helpful with fog or haze.
left: Mt Rainier breaking through the clouds - Washington, USA
right: sunset above the haze on Mt Wilson - Los Angeles, California, USA
This way you can capture the unique atmospheric effects of fog or haze, but without also incurring its contrast-reducing disadvantages (at least for objects outside the fog/haze). In the case of fog, from a distance it's really nothing more
than low-lying clouds.

TIMING THE FOG FOR MAXIMAL IMPACT


Just as with weather and clouds, timing when to take a photo in the fog can also make a big difference with how the light appears. Depending on the type of fog, it can move in clumps and vary in thickness with time. However, these
differences are sometimes difficult to spot if they happen slowly, since our eyes adjust to the changing contrast. Try moving your mouse over the labels below to see how the scene changed over just 6 minutes:

First Photograph +2 minutes +6 minutes

Another important consideration is the apparent texture of fog. Even if you time the photograph for when you feel there's the most interesting distribution of fog, this fog may not retain its texture if the exposure time is not short enough.
In general, the shutter speed needs to be a second or less in order to prevent the fog's texture from smoothing out. However, you might be able to get away with longer exposures when the fog is moving more slowly, or when your subject
is not magnified by as much. Move your mouse over the image below to see how the exposure time affects the appearance of mist above the water:

Shorter Exposure Longer Exposure


(1 second) (30 seconds)

Clare Bridge in low-lying fog at night (version 2) - Cambridge, UK


Note that the above image is the very same bridge that was shown as the first image in this article. Fog can dramatically change the appearance of a subject depending on where it is located, and how dense it is in that location.
Although the shorter exposure does a much better job of freezing the fog's motion, it also has a substantial impact on the amount of image noise when viewed at 100%. This can be a common problem with fog photography, since (i) fog
is most likely to occur in the late evening through to the early morning (when light is low) and (ii) fog greatly reduces the amount of light reaching your camera after reflecting off the subject. Sometimes freezing the motion of fog
therefore isn't an option if you want to avoid noise.

BEWARE OF CONDENSATION
If water droplets are condensing out of the air, then you can be assured that these same droplets are also likely to condense on the surface of your lens or inside your camera. If your camera is at a similar temperature to the air, and the fog
isn't too dense, then you may not notice any condensation at all. On the other hand, expect substantial condensation if you previously had your camera indoors, and it is warmer outside.
Fortunately, there's an easy way to minimize condensation caused by going from indoors to outdoors. Before taking your camera and lens into a warmer, more humid environment, place all items within a plastic bag and ensure it is sealed
airtight. You can then take these sealed bags outdoors, but you have to wait until everything within the bags has reached the same temperature as outdoors before you open the bags. For large camera lenses with many elements, this
can take 30 minutes or more if the indoor-outdoor temperature difference is large.
Unfortunately, sometimes a little condensation is unavoidable. Just make sure to bring a lens cloth with you for repeatedly wiping the front of your lens.

15.COMMON OBSTACLES IN NIGHT PHOTOGRAPHY -


Night photography has the ability to take a scene and cast it in an unusual light-- much like the "golden hour" surrounding sunrise and sunset can add an element of mood and uniqueness to a sunlit scene. Just as how sports and landscape
photography push the camera's limits for shutter speed and aperture, respectively, night photography often demands technical extremes in both (see below).

Due to lack of familiarity, and since night photos are often highly technical, many photographers simply put their camera away and "call it a day" after sunset. This section aims to familiarize the photographer with obstacles they might
encounter at night, and discusses how to surmount many of them.

BACKGROUND
Night photography is subject to the same set of constraints as daylight photography-- namely aperture, shutter speed and light sensitivity-- although these are all often pushed to their extremes. For this reason, the abundance and diversity
of night photography has been closely tied to the advance of photographic technology. Early film photographers shied away from capturing night scenes because these require prohibitively long exposures to maintain adequate depth of
field, or produced unacceptable amounts of image noise. Furthermore, a problem with film called "reciprocity failure" means that progressively more light has to reach the film as the exposure time increases-- leading to diminishing
returns compared to shorter exposures. Finally, even if a proper exposure had been achieved, the photographer would then have to wait for the film to be developed to assess whether it had been captured to their liking-- a degree of
uncertainty which is often prohibitive after one has stayed up late and spent minutes to hours exposing each photo.

TRADE-OFFS IN DIGITAL NIGHT PHOTOGRAPHY


Fortunately, times have changed since the early days of night photography. Modern digital cameras are no longer limited by reciprocity failure and provide instant feedback-- greatly increasing the enjoyment and lowering the risk of
investing the time to take photographs at odd hours.
Even with all these advances, digital night photography is still not without its technical limitations. Photos are unavoidably limited by the trade-off between depth of field, exposure time and image noise. The diagram below illustrates all
available combinations of these for a typical night photo under a full moon, with constant exposure:
Note the trade-off incurred by moving in the direction of any of the four scenarios above. Most static nightscape photos have to choose between scenarios 2, 3 and 4. Each scenario often has a technique which can minimize the trade-off;
these include image averaging, stacking and multiple focal planes (to be added). Also note how even the minimum possible exposure time above is one second-- making a sturdy camera tripod essential for any photos at night.
The diagram does not consider additional constraints: decreased resolution due to diffraction and increased susceptibility to fixed pattern noise with longer exposures. Fixed pattern noise is the only disadvantage to progressively longer
exposures in digital photography (other than also possibly being impractical), much like the trade-off of reciprocity failure in film. Furthermore, moon movement and star trails (see below) can both limit the maximum exposure time.

IMPORTANCE OF MOONLIGHT
Just as how daylight photographers pay attention to the position and angle of the sun, night photographers should also pay careful attention to the moon. A low-lying moon can create long shadows on cross-lit objects, whereas an
overhead moon creates harsher, downward shadows.
An additional variable is that the moon can have varying degrees of intensity, depending on where it is in its 29.5 day cycle of waxing and waning. A full moon can be a
savior for reducing the required exposure time and allowing for extended depth of field, while a moonless night greatly increases star visibility. Furthermore, a time can be chosen when the
intensity of the moon provides the ideal balance between artificial light (streetlamps) and moonlight.
Gauging exposure times during a full moon can be tricky; use f/2.0 and 30 seconds at ISO 100 as a starting point (if the subject is diffuse and directly lit), then adjust towards scenarios 1-4 as needed.

Another factor rarely noticed during daylight is movement of the light source (sun or moon). The long exposure time required for moonlight photography often means that the moon may have moved significantly over the course of the
exposure. Moon movement softens harsh shadows, however too much movement can create seemingly flat light.

Photograph Under a Full Moon -- Crop of Tree Shadows on Path
Choose Exposure Time: 1 minute or 4 minutes

Note how the 1 minute exposure above clearly shows high contrast and shadows from even the smaller branches, whereas the 4 minute exposure is at lower contrast and only shows the larger branches. The choice of exposure time can
also vary by much more than a factor of four-- greatly exaggerating the above effect.
Shots which include the moon in the frame are also susceptible to moon movement. A rule of thumb is that the moon appears to move its own diameter roughly every 2 minutes. As a result, it can quickly appear elongated if this
exposure time is approached.

VIEWFINDER BRIGHTNESS
Properly composing your photograph in the viewfinder can be problematic when there is little available light. Even if you intend to expose using a small aperture, a lens with a large maximum aperture can greatly increase
viewfinder brightness during composition. To see the effect of different apertures, manually choose an aperture by pressing the "depth of field preview" button (usually located on the camera at the base of the lens).
The way an SLR camera redirects light from the lens to your eye can also affect brightness. Cameras with a pentaprism (as opposed to a pentamirror) ensure that little light is lost before it hits your eye; however, these often increase the cost
of the camera significantly. Larger format sensors also produce a brighter viewfinder image (such as full frame 35 mm, compared to 1.5-1.6X or smaller crop factors). Finally, ensure that you give ample time for your eyes to fully adjust
to the decrease in light-- especially after standing in stronger light or using a flashlight.

INFLUENCE OF MIRROR LOCK-UP


Mirror lock-up (MLU) is a feature available in some SLR cameras which aims to minimize camera shake induced by mirror-slap (which produces the characteristic snapping sound of SLR cameras). It works by separating the mirror flip
and shutter opening into two steps. This way, any vibrations induced by the mirror have time to settle down before the exposure begins.

Mirror lock-up can drastically increase sharpness for exposure times comparable to the settling time of the mirror (~1/30 to 2 seconds). On the other hand, mirror shake is negligible for exposures much longer than this; therefore MLU is
not critical for most night photography. When forced to use wobbly tripods (never desired) or long focal lengths, the stabilizing time can increase significantly (~8 seconds).

APPEARANCE OF STAR TRAILS


Even modestly long exposures can begin to reveal the rotation of stars in the sky. Using a longer focal length and photographing stars far from the north star both increase the distance stars will move across the image. This effect can create a
dizzying look; however, these streaks can detract from the artistic message if stillness and tranquility are the desired mood.

Close to North Star —> Far From North Star

Normal focal lengths (28-50 mm) usually have minimal star movement if exposures are no longer than about 15-30 seconds. If star trails are desired, using a large aperture and higher sensitivity (ISO 200-400) can enhance the brightness
of each streak.
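To see where the 15-30 second guideline comes from, one can estimate how far a star streaks across the sensor during an exposure: the sky rotates at roughly 15 arcseconds per second, and the streak length is approximately the focal length times that angular motion (reduced by the cosine of the star's declination). The exposure and declination values below are illustrative assumptions.

import math

SIDEREAL_RATE_DEG_PER_S = 360.0 / 86164.0     # ~15 arcseconds per second

def star_trail_length_mm(focal_length_mm, exposure_s, declination_deg=0.0):
    """Approximate star trail length on the sensor (in mm) for a camera on a fixed tripod."""
    angle_deg = SIDEREAL_RATE_DEG_PER_S * exposure_s * math.cos(math.radians(declination_deg))
    return focal_length_mm * math.tan(math.radians(angle_deg))

# Normal focal lengths, 30 second exposure, star near the celestial equator (worst case):
for f in (28, 50):
    trail_um = star_trail_length_mm(f, 30) * 1000
    print(f"{f} mm, 30 s: trail of about {trail_um:.0f} um on the sensor")
# A few tens of microns across a 36 mm wide frame -- small, which is why normal focal
# lengths tolerate 15-30 second exposures before trails become obvious.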

FOCUSING AND DEPTH OF FIELD


Proper focusing is critical at night because small apertures are often impractical-- one therefore cannot afford to waste any depth of field by mispositioning it (see hyperfocal distance). To further complicate focusing, night scenes rarely have
enough light or contrast to perform autofocus, nor enough viewfinder brightness to focus manually.
Fortunately there are several solutions to this focusing dilemma. One can try focusing on any point light sources which are at similar distance to the subject of interest. In the
photo to the left, one could achieve guaranteed autofocus by using the bright light at the bottom as the focal target.
The central focus point is more accurate/sensitive in many cameras, and so it is best to use this (instead of the outer focus points)-- even if using it requires having to recompose afterwards.
If you wish to autofocus at infinity, just aim your camera at the moon, autofocus, then recompose accordingly. Alternatively, bring a small flashlight since this can be set on the subject, focused on,
and then removed before the exposure begins. If all these approaches are impractical, one can always resort to manual focus using distance markings on the lens (and an appropriate hyperfocal
distance).

METERING AT NIGHT
Unfortunately, most in-camera light meters become inaccurate or max out at about 30 seconds. Usually one can first meter using a larger aperture (so that the metered exposure time is under 30 seconds), then stop down as necessary and
multiply the exposure time accordingly. Alternatively, one may need to carry an external light meter to achieve the most accurate metering. For exposure times longer than ~30 seconds, the camera needs to be set to "bulb mode" and an
external timer/release device should be used (below).
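As a sketch of the "meter wide open, then stop down and multiply" workaround described above: at a fixed ISO, the required exposure time scales with the square of the f-number.

def scaled_exposure_time_s(metered_time_s, metered_f_stop, final_f_stop):
    """Scale a metered exposure time to a smaller aperture (same ISO, same scene)."""
    return metered_time_s * (final_f_stop / metered_f_stop) ** 2

# Metered 15 s at f/2.0, but shooting at f/8 for depth of field:
print(scaled_exposure_time_s(15, 2.0, 8.0))   # 240 s -> beyond 30 s, so bulb mode and a timer are needed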

Night scenes which contain artificial light sources should almost always have low-key histograms, otherwise these will have significant blown highlights. Metering these can be tricky if the camera's auto-metering fails; a good starting
point is to meter off of a diffuse object which is directly lit by one of the light sources. If all else fails, be sure to bracket each image, or zero in on the correct exposure by using guess and check with the rear LCD screen.
What is a proper exposure at night? Unlike during daytime, where the basis is (roughly) an 18% gray card, there is not really a consistent, commonly agreed upon way to expose night photos. One could "under-expose" to maintain the
dark look of night, or could alternatively have the histogram fill the entire tonal range like a daytime shot. I generally recommend always fully exposing the image as if it were a daytime photo, and shooting in RAW mode. This way the
exposure can always be decreased afterwards-- while still maintaining minimal image noise because more light was collected at the digital sensor.

16.COMPOSITION: RULE OF THIRDS -


The rule of thirds is a powerful compositional technique for making photos more interesting and dynamic. It's also perhaps one of the most well known. This article uses examples to demonstrate why the rule works, when it's ok to break
the rule, and how to make the most of it to improve your photography.

OVERVIEW
The rule of thirds states that an image is most pleasing when its subjects or regions are composed along imaginary lines which divide the image into thirds -- both vertically and horizontally:

Rule of Thirds Composition Region Divided Into Thirds


It is actually quite amazing that a rule so seemingly mathematical can be applied to something as varied and subjective as a photograph. But it works, and surprisingly well. The rule of thirds is all about creating the right aesthetic trade-
offs. It often creates a sense of balance -- without making the image appear too static -- and a sense of complexity -- without making the image look too busy.

RULE OF THIRDS EXAMPLES


OK, perhaps you can see its usefulness by now -- but the previous example was simple and highly geometric. How does the rule of thirds fare with more abstract subjects? See if you can spot the lines in the photo below:

Original Photo Show Rule of Thirds

Note how the tallest rock formation (a tufa) aligns with the rightmost third of the image, and how the horizon aligns with the topmost third. The darker foreground tufa also aligns with both the bottommost and leftmost thirds of the photo.
Even in an apparently abstract photo, there can still be a reasonable amount of order and organization.
Does this mean that you need to worry about perfectly aligning everything with the thirds of an image? Not necessarily -- it's just a rough guideline. What's usually most important is that your main subject or region isn't always in the
direct middle of the photograph. For landscapes, this usually means having the horizon align with the upper or lower third of the image. For subjects, this usually means photographing them to either side of the photo. This can make
landscape compositions much more dynamic, and give subjects a sense of direction.

Off-Center Subjects Can Give a Sense of Direction

In the examples above, the biker was placed more or less along the leftmost third since he was traveling to the right. Similarly, the bird is off-center to give the impression that it can take off to the right at any moment. Off-center
composition is a powerful way to convey or imply motion.

IMPROVE EXISTING PHOTOS BY CROPPING


Thus far we've looked at examples that have satisfied the rule -- but what if they hadn't? Wouldn't they have still appeared just fine? Perhaps, but usually not. The next set of examples shows situations where cropping to enforce the rule
yields a clear improvement. It is often quite amazing how you can resurrect an old photo and give it new life with something as simple as cropping it.
Uncropped Original Cropped Version
(horizon in direct middle) (horizon now along upper third of image)

In the example above, part of the empty sky was cropped off so that the horizon aligned with the upper third of the image -- adding emphasis to the foreground and mountains.
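As a sketch of the cropping arithmetic (the function name and pixel values are only illustrative): to place a horizon on the upper third line, it should sit one third of the way down the cropped image, and the largest such crop is limited by whichever region, above or below the horizon, runs out first.

def crop_for_upper_third_horizon(image_height, horizon_row):
    """Return (top, height) in pixels for the tallest crop that puts the horizon one third down."""
    height = min(3 * horizon_row, 1.5 * (image_height - horizon_row))
    top = horizon_row - height / 3
    return round(top), round(height)

# A 3000 px tall image with the horizon in the direct middle (row 1500):
print(crop_for_upper_third_horizon(3000, 1500))   # (750, 2250): the top 750 px of empty sky get cropped off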

LIMITATIONS

But what if there's simply nothing in the image to apply the rule of thirds to? Although rare, this might be the case for extremely abstract compositions. However, the "spirit of the rule" may still apply: giving the photo a sense of balance
without making the subject appear too static and unchanging.
In the example to the right, there's not even a single line or subject that can be aligned with the thirds of the image. Perhaps the C-shaped region of light can be grouped into an upper, middle and lower thirds region, but that's probably
pushing it. Regardless, the image is on average brighter to the left compared to its right -- effectively creating an off-center composition.

BREAKING THE RULE OF THIRDS

Example of beneficial symmetry


By now, the free-spirited and creative artist in you is probably feeling a bit cramped by the seeming rigidity of this rule. However, all rules are bound to be broken sooner or later -- and this one's no exception. It's time to unleash that inner rebel -- as long as it's for a good cause.

A central tenet of the rule of thirds is that it's not ideal to place a subject in the center of a photograph. But what if you wanted to emphasize the subject's symmetry? The example to the left does just that.
Similarly, there are many other situations where it might be better to ignore the rule of thirds than to use it. You might want to make your subject look more confronting, for example. Alternatively, you might want to knock things out of balance.
It's important to ask yourself: what is special about this subject, and what do I want to emphasize? What mood do I want to convey? If the rule of thirds helps you achieve any of these goals, then use it. If not, then don't let it get in the way
of your composition.

17.CONVERTING A COLOR PHOTO INTO BLACK & WHITE -


Converting a digital color photo into black and white goes beyond simply desaturating the colors, and can be made to mimic any of a wide range of looks created by using color filters in black and white film photography. Conversion
which does not take into account an image's color and subject of interest can dilute the artistic message, and may create an image which appears washed out or lacks tonal range. This section provides a background on using color filters,
and outlines several different black and white conversion techniques-- comparing each in terms of their flexibility and ease of use.

BACKGROUND: COLOR FILTERS FOR B&W FILM


Contrary to what one might initially assume, traditional black and white photographers actually have to be quite attentive to the type and distribution of color in their subject.

Color filters are often used in front of the lens to selectively block some colors while passing others (similar to how color filters are used for each pixel in a digital camera's Bayer
array). Filters are named after the hue of the color which they pass, not the color they block. These can block all but a primary color such as red, green or blue, or can partially
block any weighted combination of the primary colors (such as orange or yellow). Careful selection of these filters allows the photographer to decide which colors will produce the
brightest or darkest tones.

CONTROLLING TEXTURE AND CONTRAST


Just as with color photography, black and white photography can use color to make a subject stand out-- but only if the appropriate color filters have been chosen. Consider the example below, where the original color image makes the
red parrot stand out against the near colorless background. To give the parrot similar contrast with the background in black and white, a color filter should be chosen which translates bright red into a tone which is significantly different
from the middle gray background. Compare the filter options below to view some of the possibilities.

Original Color Photo Red Filter Green Filter Red-Green Combination

Note how the red and green filters make the parrot much brighter and darker than the background, respectively, whereas an intermediate combination of the two makes the parrot blend in more. Also note how the green and red-green
filters enhance texture in the feathers, and that the red filter eliminates tonal separation between the feathers and the white skin.
So which color filter is best? This depends on the goal of the image, but in general: one can increase contrast in a given region by choosing a filter color which is complementary to that region's color. In other words, we want to choose a filter whose color is on the opposite side of the color wheel from the image's color.
If we wished to maximize cloud contrast in a cyan-blue sky, then a reddish-yellow filter would achieve this goal. Of course, images rarely contain just one color. Although the red filter above decreases
contrast in the feathers, it would do the opposite in a cyan-blue sky. Black and white conversion may therefore require interpretive decisions.

The image below contains regions of red, green and blue; compare each filter option and note its influence on the red rocks, green foliage and blue sea:
Original Color Photo | Red Filter | Green Filter | Blue Filter

Notice the contrast changes both between and within regions of red, green and blue above. Pure red or primarily red color filters often work best for landscapes, as this increases texture in regions containing water, sky and foliage. On the
other hand, color filters can also make contrast appear greater than what we would perceive with our eyes, or can darken/brighten some regions excessively.
One can visualize other possibilities since all color filters would produce some superposition of the three images above (yellow would be half red, half green and zero blue). Each image may therefore require its own combination of red,
green and blue filtering in order to achieve the desired amount of contrast and tonal range.

DIGITAL COLOR INTO BLACK & WHITE


Converting a digital color photo into black and white utilizes the same principles as with color filters in film photography, except that the filters are instead applied to each of the three RGB color channels in a digital image (see bit depth). Whether
you specify it or not, all conversion techniques have to use some weighted combination of each color channel to produce a grayscale brightness. Some techniques assume a combination for you, although the more powerful ones give you
full control. Each makes its own trade-offs between power and ease of use, and so you may find some techniques are best suited only to certain tasks.

CHANNEL MIXER
The channel mixer tool allows the user to control how much each of the three color channels (red, green and blue) contributes to the final grayscale brightness. It is undoubtedly one of the most powerful black and white conversion
methods, however it may take some time to master since there are many parameters which require simultaneous adjustment.
Open this tool by clicking on Image > Adjustments > Channel Mixer in Adobe Photoshop. GIMP and many other image editing programs also offer this tool, however its menu location may vary.
Be sure to first click on the lower left tick box entitled "Monochrome" for black and white conversion.
It is often best to get a feel for the distribution of each color channel by first setting each of the color channels to 100% individually.
Then adjust each of the red, green and blue sliders to produce an image to your liking. For an even more pronounced effect, some colors can even have negative percentages.

The sum of the red, green, and blue percentages needs to equal 100% in order to maintain roughly constant brightness, although overall brightness can also be adjusted by using the "Constant" slider at the bottom. If the aim is to mimic the luminosity perceived by the human eye, set: red=30%, green=59% and blue=11%.
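As a rough illustration of what the channel mixer is doing, the weighted sum can be written out directly with numpy and Pillow (assumed to be available); the file names are placeholders, and the 30/59/11 mix from above is used as the default:

    import numpy as np
    from PIL import Image

    def channel_mix_to_gray(path, r=0.30, g=0.59, b=0.11, constant=0.0):
        # Weighted sum of the RGB channels, as with the channel mixer; the weights
        # normally sum to ~1.0 to keep brightness roughly constant, while 'constant'
        # shifts overall brightness like the Constant slider.
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
        gray = r * rgb[..., 0] + g * rgb[..., 1] + b * rgb[..., 2] + 255.0 * constant
        return Image.fromarray(np.clip(gray, 0, 255).astype(np.uint8), mode="L")

    # Mimic the luminosity perceived by the eye (30/59/11), or pass other
    # weights (even negative ones) for a more pronounced filter effect.
    channel_mix_to_gray("photo.jpg").save("photo_bw.jpg")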

HUE - SATURATION ADJUSTMENT LAYER


This technique is particularly elegant because it allows you to apply any of the entire spectrum of color filters just by dragging the hue slider. This allows one to quickly assess which of the many combinations of color filters work best,
without necessarily having one in mind when starting. It takes a little longer to set up than the channel mixer, but is actually faster to use once in place.
Open the image in Photoshop and create two separate "Hue/Saturation Adjustment Layers" by following the menus: Layers > New Adjustment Layer > Hue/Saturation...
Each layer will be named "Hue/Saturation 1" or "Hue/Saturation 2," however these have been given custom names for this tutorial.
On the top adjustment layer (Saturation), set the blending mode to "Color" and set the saturation to its minimum of "-100," shown below.

On the bottom adjustment layer, change the "Hue" slider to apply any of the entire spectrum of color filters. This is the main control for adjusting the look from this technique.
The saturation slider can also be adjusted in this layer, but this time it fine-tunes the magnitude of the filter effect for a given hue.
Once all adjustments have been made, merge/flatten the layers to make these final.

An alternative technique which may be a bit easier is to only add one Hue/Saturation adjustment layer and change the hue of the image itself. On the other hand, this does not allow one to go back and change the color filter hue if it is no
longer in Photoshop's undo history (at least not without unnecessarily destroying bit depth).
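For those working outside Photoshop, the effect of this two-layer setup can be roughly approximated in code: rotate the image's hue (the adjustable "color filter"), then keep only a luminosity-style weighting of the result. The sketch below uses numpy and Pillow; it is only an approximation of Photoshop's "Color" blend mode, and the file name and hue shift are placeholders.

    import numpy as np
    from PIL import Image

    def hue_filter_bw(path, hue_shift_degrees=60):
        # Rotate the hue channel, then convert back to RGB and keep a
        # luminosity-weighted grayscale of the result.
        hsv = np.asarray(Image.open(path).convert("RGB").convert("HSV")).copy()
        hsv[..., 0] = (hsv[..., 0].astype(int) + int(hue_shift_degrees / 360 * 255)) % 256
        rgb = np.asarray(Image.fromarray(hsv, mode="HSV").convert("RGB"), dtype=np.float32)
        gray = 0.30 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]
        return Image.fromarray(gray.astype(np.uint8), mode="L")

    hue_filter_bw("photo.jpg", hue_shift_degrees=60).save("photo_hue_bw.jpg")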

LIGHTNESS CHANNEL IN LAB MODE


Using the lightness channel in lab mode is quick and easy because it converts based on the luminance value from each pixel's RGB combination. Please see "Understanding Histograms: Luminance and Color" for further reading on this
topic.
First convert the image into the LAB color space by clicking on Image > Mode > Lab Color in Photoshop.
View the "Lightness" channel by clicking on it (as shown to the left) in the channel window. If not already open, the channel window can be accessed by clicking on Window > Channels.
Delete both the "a" and "b" channels to leave only the lightness channel ("a" and "b" refer to the red-green and blue-yellow shifts, or "chrominance").

Note that the lightness channel may subsequently require significant levels adjustments as it may not utilize the entire tonal range of the histogram. This is because it requires all three color channels to reach their maximum for clipping,
as opposed to just one of the three channels for an RGB histogram.
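A rough code equivalent of this technique, assuming scikit-image and Pillow are available (the file name is a placeholder):

    import numpy as np
    from PIL import Image
    from skimage import color

    rgb = np.asarray(Image.open("photo.jpg").convert("RGB")) / 255.0
    L = color.rgb2lab(rgb)[..., 0]                     # L* ranges from 0 to 100
    gray = np.clip(L / 100.0 * 255.0, 0, 255).astype(np.uint8)
    Image.fromarray(gray, mode="L").save("photo_lightness.jpg")
    # As noted above, the result may still need a levels adjustment, since the
    # lightness channel rarely spans the full tonal range on its own.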

DESATURATE COLORS
Desaturating the colors in an image is the simplest type of conversion, but often produces inadequate results. This is because it does not allow for control over how the primary colors combine to produce a given grayscale brightness.
Despite this, it is probably the most commonly used way of converting into black and white. In Photoshop, this is accomplished by going from Image > Adjustments > Desaturate.
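For comparison, a simple desaturation can be sketched as below; the (max + min) / 2 lightness formula is a common description of this command, though the exact formula should be treated as an assumption. Note that there is no control over how the individual colors are weighted:

    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
    gray = (rgb.max(axis=-1) + rgb.min(axis=-1)) / 2.0   # lightness of each pixel
    Image.fromarray(gray.astype(np.uint8), mode="L").save("photo_desaturated.jpg")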

OTHER CONSIDERATIONS
Ordinarily the best results are achieved when the image has the correct white balance. Removal of color casts means that the colors will be more pure, and so the results of any color filter will be more pronounced.
Any black and white conversion which utilizes a significant boost in color saturation may begin to show artifacts, such as increased noise, clipping or loss of texture detail. On the other hand, higher color saturations also mean that
each color filter will have a more pronounced effect.
Shoot in RAW mode if possible, as 16-bit (per channel) images allow for the smoothest grayscale tones and greatest flexibility when using color filters. This also gives the ability to fine-tune the white balance based on the desired black
and white look.
Recall that the noise levels in each color channel can be quite different, with the blue and green channels having the most and least noise, respectively. Try to use as little of the blue channel as possible to avoid excess noise.
Levels and curves can be used in conjunction with black and white conversion to provide further control over tones and contrast. Keep in mind though that some contrast adjustments can only be made by choosing an appropriate
color filter, since this adjusts relative contrast within and between color regions. Care should also be taken when using these because even slight color clipping in any of the individual color channels can become quite apparent in black
and white (depending on which channel(s) is/are used for conversion).
There are also a number of third party plug-ins for Photoshop which help automate the process of conversion, and provide additional features such as sepia conversion or adding film grain.

18.LOCAL CONTRAST ENHANCEMENT -


Local contrast enhancement attempts to increase the appearance of large-scale light-dark transitions, similar to how sharpening with an "unsharp mask" increases the appearance of small-scale edges. Good local contrast gives an image its
"pop" and creates a three-dimensional effect-- mimicking the look naturally created by high-end camera lenses. Local contrast enhancement is also useful for minimizing the effect of haze, lens flare, or the dull look created by taking a
photograph through a dirty window.

VISUALIZING LOCAL CONTRAST

High Local Contrast | High Resolution | Both Qualities

When viewed at a distance, note how the large-scale features are much more pronounced for the image with high local contrast, despite the lack of resolution. Both resolution and local contrast are essential to create a detailed, three-
dimensional final image.

CONCEPT
The trick with local contrast enhancement is that it increases "local" contrast in smaller regions, while at the same time preventing an increase in "global" contrast-- thereby protecting large-scale shadow/highlight detail. It achieves this
feat by making some pixels in the histogram cross over each other, which is not possible when enhancing contrast using levels or curves.
Local contrast enhancement works similarly to sharpening with an unsharp mask, however the mask is instead created using an image with a greater blur distance. This creates a local contrast mask which maps larger-scale transitions
than the small-scale edges which are mapped when sharpening an image.
Step 1: Detect Transitions and Create Mask -- Original minus Blurred Copy = Local Contrast Mask
Step 2: Increase Contrast at Transitions -- a Higher Contrast version of the Original, overlaid onto the Original through the Local Contrast Mask, yields the Final Image

Note: The "mask overlay" is when image information from the layer above the local contrast mask passes through and replaces the layer below in a way which is proportional to the brightness in that region of the mask. The upper image
does not contribute to the final for regions where the mask is black, while it completely replaces the layer below in regions where the local contrast mask is white.
The difference between the original and final image is often subtle, but should show a noticeable increase in clarity. In order to fully see this effect, one needs to examine the images up close and compare the "local contrast enhancement" and "high contrast" versions to see their influence on the tones within the image below:

Original | Local Contrast Enhancement | High Contrast

Note how it creates more contrast near the transition between the rocks and dirt, but preserves texture in large-scale light and dark regions. Pay special attention to the dirt in between the rocks and how this region becomes very dark for
the high contrast image, but is preserved for local contrast enhancement. The effect above is quite strong to aid in visualization; local contrast enhancement is often less pronounced.

IN PRACTICE
Fortunately, performing local contrast enhancement in Photoshop and other image editing programs is quick and easy. It is identical to sharpening with an unsharp mask, except the "radius" is much larger and the "percentage" is much
lower. The unsharp mask can be accessed in Adobe Photoshop by clicking on the following drop-down menus: Filter > Sharpen > Unsharp Mask.

Amount is usually listed as a percentage, and controls the magnitude of each overshoot. This can also be thought of as how much contrast is added at the transitions. Amount is typically 5-20%.
Radius controls the amount to blur the original for creating the mask, shown by "blurred copy" in the illustration above. This affects the size of the transitions you wish to enhance, so a smaller radius
enhances smaller-scale detail. Radius is typically 30-100 pixels.
Threshold sets the minimum brightness change that will be sharpened. This is rarely used in local contrast enhancement, but could be set to a non-zero value to only enhance contrast at the most
prominent edges. Threshold is typically set to 0.

Much more so than with sharpening, the radius setting is strongly influenced by your image size and the scale of the light-dark transitions you wish to enhance. High resolution images, or those where light-dark transitions are large,
require using a larger radius value. Very low resolution images may require a radius even less than 30 pixels to achieve the effect.
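In code, this is just an unsharp mask with a large radius and a low amount. A minimal sketch using Pillow's built-in UnsharpMask filter, with the typical values quoted above (the file name is a placeholder):

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg")
    enhanced = img.filter(ImageFilter.UnsharpMask(radius=60,    # typically 30-100 pixels
                                                  percent=15,   # typically 5-20%
                                                  threshold=0)) # typically 0
    enhanced.save("photo_local_contrast.jpg")
    # To avoid color shifts (see COMPLICATIONS below), the same filter could instead
    # be applied to only the lightness channel of a Lab version of the image.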

COMPLICATIONS
Local contrast enhancement, as with sharpening, can also create unwanted color changes if performed on all three color channels. In addition, local contrast enhancement can increase color saturation significantly. You can eliminate
these unwanted effects by either performing local contrast enhancement in the lightness channel of the LAB color space, or in a separate layer (while still in an RGB working space) and blending using "luminosity" in the layers window.
Local contrast enhancement can also clip highlights in regions which are both very bright and adjacent to a darker region. For this reason, it should be performed before adjusting levels (if levels are used to bring tones to the extreme
highlights within the image histogram). This allows for a "buffer zone" when local contrast enhancement extends the lightest and darkest tones to full white or black, respectively.
Care should also be taken when using this technique because it can detract from the "smoothness" of tones within your image-- thereby changing its mood. Portrait photography is one area where one should be particularly cautious with
this technique.

19.NOISE REDUCTION BY IMAGE AVERAGING -


Image noise can compromise the level of detail in your digital or film photos, and so reducing this noise can greatly enhance your final image or print. The problem is that most techniques to reduce or remove noise end up
softening the image as well. Some softening may be acceptable for images consisting primarily of smooth water or skies, but foliage in landscapes can suffer with even conservative attempts to reduce noise.
This section compares a couple common methods for noise reduction, and also introduces an alternative technique: averaging multiple exposures to reduce noise. Image averaging is common in high-end astrophotography, but is arguably
underutilized for other types of low-light and night photography. Averaging has the power to reduce noise without compromising detail, because it actually increases the signal to noise ratio (SNR) of your image. An added bonus is that
averaging may also increase the bit depth of your image-- beyond what would be possible with a single image. Averaging can also be especially useful for those wishing to mimic the smoothness of ISO 100, but whose camera only goes
down to ISO 200 (such as most Nikon digital SLR's).

CONCEPT
Image averaging works on the assumption that the noise in your image is truly random. This way, random fluctuations above and below actual image data will gradually even out as one averages more and more images. If you were to
take two shots of a smooth gray patch, using the same camera settings and under identical conditions (temperature, lighting, etc.), then you would obtain images similar to those shown on the left.
The above plot represents luminance fluctuations along thin blue and red strips of pixels in the top and bottom images, respectively. The dashed horizontal line represents the average, or what this plot would look like if there were zero noise. Note how each of the red and blue lines uniquely fluctuates above and below the dashed line. If we were to take the pixel value at each location along this line, and average it with the value for the pixel in the same location for the other
image, then the luminance variation would be reduced as follows:

Even though the average of the two still fluctuates above and below the mean, the maximum deviation is greatly reduced. Visually, this has the effect of making the patch to the left appear smoother. Two averaged images usually produce noise comparable to an ISO setting which is half as sensitive, so two averaged images taken at ISO 400 are comparable to one image taken at ISO 200, and so on. In general, the magnitude of the noise fluctuation drops by the square root of the number of images averaged, so you need to average 4 images in order to cut the magnitude in half.
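A minimal sketch of image averaging with numpy and Pillow (file names are placeholders, and the frames are assumed to be perfectly aligned):

    import numpy as np
    from PIL import Image

    files = ["shot1.jpg", "shot2.jpg", "shot3.jpg", "shot4.jpg"]   # aligned exposures
    stack = np.stack([np.asarray(Image.open(f), dtype=np.float64) for f in files])
    mean = stack.mean(axis=0)          # random noise drops by roughly sqrt(N)
    Image.fromarray(np.clip(mean, 0, 255).astype(np.uint8)).save("averaged.jpg")
    # Averaging in floating point also preserves the extra tonal precision
    # (bit depth) gained, until the final rounding back to 8 bits here.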

NOISE & DETAIL COMPARISON


The next example illustrates the effectiveness of image averaging in a real-world example. The following photo was taken at ISO 1600 on the Canon EOS 300D Digital Rebel, and suffers from excessive noise.
100% Crop of Regions on the Left

Original 2 Images 4 Images


Note how averaging both reduces noise and brings out the detail for each region. Noise reduction programs such as Neat Image are the best available arsenal against noise, and so this is used as the benchmark in the following
comparison:

Original | 2 Images | 4 Images | Neat Image | Median Filter

Noise reduction with Neat Image Pro Plus 4.5 uses default settings and "auto fine-tune"
Neat Image is the best of all for reducing noise in the smooth sky, but it sacrifices some fine detail in the tree branch and vertical mortar/grout lines in the brickwork. Sharpening could be used to enhance the remaining detail and greatly
improve the overall appearance of sharpness, but sharpening cannot recover lost information. The median filter is a primitive technique and is in most versions of Photoshop. It calculates each pixel value by taking the median value of all
adjacent pixels. This is effective at removing very fine noise, but leaves larger fluctuations behind and eliminates pixel-level detail. Overall, Neat Image is your best option for situations where you cannot use image averaging (hand held
shots). Ideally, one could use a combination of the two: image averaging to increase the SNR as much as possible, then Neat Image to reduce any remaining noise:

Original | Averaging: 4 Images | Neat Image | Neat Image + Averaging
Noise reduction with Neat Image Pro Plus 4.5 uses default settings and "auto fine-tune"
Note how Neat Image plus averaging is now able to both retain the vertical detail in the bricks and maintain a smooth, low noise look. Disadvantages of the averaging technique include increased storage requirements (multiple image files for one photo) and possibly longer total exposure times. Image averaging does not work on images which suffer from banding noise or fixed pattern noise. Note how the bright white "hot pixel" in the lower left of both the top and bottom images does not diminish with averaging. Unlike single exposures, averaging requires zero camera movement *between* exposures in addition to during each exposure. Extra care should therefore be taken with technique, and averaging should only be used for photos taken on a very sturdy camera tripod.

AVERAGING IMAGES IN PHOTOSHOP USING LAYERS


Performing image averaging in Adobe Photoshop is relatively quick using layers. The idea is to stack each image in a separate layer, and have them blend together such that each layer contributes equally. If for some reason one layer
receives more weighting than another, the blending of images will not be as effective.
One must first load all of the images to be averaged into Photoshop, and then copy and paste each image on top of the others so that they are all within the same project window. Once this is done, the averaging can begin. The key to
averaging in Photoshop is to remember that each layer's opacity determines how much the layer behind it is "let through," and the same goes for each image underneath. This means that to properly average four images, for example, one
should not set each layer's opacity to 25%. One should instead set the bottom (or background) layer's opacity to 100%, the layer on top of that to 50%, and then the next layer to 33%, and finally the top layer to 25%. This is illustrated
below:
For averaging any number of images, each layer's percent opacity is calculated as 100% divided by that layer's position counting up from the bottom (background) layer: the bottom layer is 100%, the second 50%, the third 33%, the fourth 25%, and so on.

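A quick check of this rule in Python (a toy sketch, not Photoshop itself): compositing the layers from the top down with these opacities leaves every image weighted equally.

    num_images = 4
    opacities = [1.0 / n for n in range(1, num_images + 1)]    # bottom layer first
    print([round(100 * o) for o in opacities])                 # [100, 50, 33, 25]

    # Composite from the top layer down and track each image's final weight:
    weights, remaining = [], 1.0
    for o in reversed(opacities):
        weights.append(remaining * o)
        remaining *= (1.0 - o)
    print([round(w, 2) for w in reversed(weights)])            # [0.25, 0.25, 0.25, 0.25]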

RECOMMENDATIONS
When should one perform image averaging, as opposed to just taking a longer exposure at a lower ISO speed? Image averaging may prove useful in any of the following situations:
• To avoid excessive fixed-pattern noise from long exposures
• For cameras which do not have a "bulb mode," you may be limited to 15-30 second exposures. For such cases, consider the following: taking two shots at ISO 800 and 30 seconds to produce the
rough equivalent (both in brightness and noise levels) of a single 60 second exposure at ISO 400. Many other combinations are possible...
• For situations where you cannot guarantee interruption-free exposures beyond a given time. As an example, one might be taking a photo in a public place and want low noise, but cannot take a long
enough exposure because pedestrians often pass through the shot. You could then take several short shots in between passers-by.
• To selectively freeze motion in low detail, faster moving areas while still retaining low noise in high detail, slower moving areas. An example of this is a starry night with foliage in the foreground.
• To reduce shadow noise (even in low ISO shots) where you wish to later bring out shadow detail through post-processing.

20.NEW PHOTOGRAPHY WITH DIGITAL -


Digital cameras have opened up amazing new photography possibilities. The following is an overview of several digital techniques that were among the first featured on this website; it now serves as motivation to delve into the various techniques available in the digital world. Each technique has links to more detailed advice if you want to learn more...

Camera equipment has made great strides in being able to mimic our visual perception in a single photograph. However, despite all of this progress, many key limitations still remain. Our eye can discern a far greater range of light to dark
(dynamic range), is able to realize a broader range of colors (color gamut), and can assess what is white in a given scene (white balance) far better than any photographic equipment.
Photographers have to be aware of these and other shortcomings in order to emphasize the elements of a scene as they see them. Overcoming these often requires interpretive decisions both before and after the exposure.
When we view a scene, we have the luxury of being able to look around and change what we are analyzing with our eyes. This ability is quite different from what a still camera is able to do with a given lens; it is the implications arising
from this that are discussed in the three sections below:
Depth of Field

Dynamic Range

Field of View

Each technique can evoke a heightened emotional response in the viewer, by emphasizing not only what one wishes them to see, but also how one would like them to see it.

EXTENDED DEPTH OF FIELD


Our eyes can choose to have any particular object in perfect focus, whereas a lens has to choose a specific focal point and what photographers call a "depth of field," or the distance around the focal plane which still appears to be in sharp
focus. This difference presents the photographer with an important interpretive choice: does one wish to portray the scene in a way that draws attention to one aspect by making only that aspect in focus (such as would occur during a
fleeting glance), or does one instead wish to portray all elements in the scene as in focus (such as would occur by taking a sweeping look throughout).
Until recently, traditional photography was especially restricted with this choice, because there is always a trade-off between the length of the exposure, the depth of field, and the image noise (or film grain) for a given photo. Where
artistic flexibility is required, one could use a technique which utilizes multiple exposures to create a single photo that is composed of several focal points. This is similar to how our eyes may glance at both near and distant objects in a far-
reaching scene.

If you were to stand in front of the above scene and take a quick glance, either the first or the second image would be closer to what you would see, depending on what you found interesting. On the other hand, if you were to fully absorb
the scene—analyzing both the stone carvings in the foreground as well as the bridge and trees in the background—then your view would be represented more realistically by portraying details for both regions, such as the final image on the right. This technique allows a photographer to decouple themselves from the traditional trade-off between depth of field, noise or film grain, and length of exposure. The end result is a print that has both less noise and more image
sharpness throughout.

HIGHER DYNAMIC RANGE


As we look around a scene, the irises within our eyes can adjust to changing conditions as we focus on regions of varying brightness—both extending the dynamic range where we can discern detail, and improving the local contrast. This
is apparent when we stand near a window in a dark room on a sunny day and see not only detail which is indoors and around the window (such as the frame or the pattern on the curtains), but also that which is outside and under the
intense lighting (such as the blades in the grass in the yard or the clouds in the sky).
Cameras, on the other hand, cannot always capture such scenes where the brightness varies drastically—at least not with the same contrast as we see it. Traditional landscape photography has practiced a technique to overcome this
limitation by using a camera lens filter which lets in more light in the darker regions, and less light in the brighter regions. This works remarkably well, however it is limited to photos with a simple distribution of light intensity, since a filter has to exist which approximates the light distribution. This usually limits the photographer to photos consisting of a bright sky which gradually transitions into a darker foreground, such as with the filter shown below.
Other scenes, such as those which contain alternating electrically-lit and moonlit objects, contain far more complex lighting geometries than can be captured using traditional photographic techniques. To increase the dynamic range
captured in a photo, while still retaining the local contrast, one can expose the photo several times: once for each region where the light intensity changes beyond the capabilities of their equipment. This is adapted from a similar technique
used in astrophotography. By exposing the photo several times for each intensity region, one then has the ability to combine these images for any arbitrary lighting geometry—thus diversifying the types of scenes one can reproduce in
photographic print.
For much more on this topic, also take a look at the tutorial on using the high dynamic range (HDR) feature of Photoshop.

EXTREME FIELD OF VIEW


By looking around a scene, we are able to encompass a broader field of view than may be possible with a given lens. To mimic this behaviour, and to enhance image detail, one could point the camera in several adjacent directions for each
exposure. These could then be combined digitally in a way that accounts for lens distortion and perspective— producing a single, seamless image. This technique is referred to as photo stitching or a digital panorama.
In the example below, a lens with a relatively narrow field of view (just 17° horizontally, or 80mm on a 35mm camera) was used to create a final image that contains both more detail and a wider field of view than would be possible with
a single exposure. As you can see by comparing the before and after images below, creating a single image from a mosaic of images is more complicated than just aligning these images; this process also has to take into account
perspective. Note how the rooftop appears curved in the upper image, whereas the rooftop is straight in the final print.
Individual Photos Seamlessly Stitched

The final result is a perspective that would have required a lens which horizontally encompasses a 71° field of view. An added bonus is that the final image contains over 6X the detail and local contrast of what would have been captured with a single photograph (if one also happened to have such a lens).
You can view the digital photography tutorials for a more detailed and technical description of many of these photographic concepts...

21.GUIDE TO IMAGE SHARPENING -


Image sharpening is a powerful tool for emphasizing texture and drawing viewer focus. It's also required of any digital photo at some point -- whether you're aware it's been applied or not. Digital camera sensors and lenses always blur an
image to some degree, for example, and this requires correction. However, not all sharpening techniques are created equal. When performed too aggressively, unsightly sharpening artifacts may appear. On the other hand, when done
correctly, sharpening can often improve apparent image quality even more so than upgrading to a high-end camera lens.

sharp cacti at the Huntington Gardens - Pasadena, California

HOW IT WORKS
Most image sharpening software tools work by applying something called an "unsharp mask," which despite its name, actually acts to sharpen an image. Although this tool is thoroughly covered in the unsharp mask tutorial, in a nutshell it
works by exaggerating the brightness difference along edges within an image:
Photo of the letter "T"
Original Sharpened

Note that while the sharpening process isn't able to reconstruct the ideal image above, it is able to create the appearance of a more pronounced edge (see sharpness: acutance & resolution). The key to effective sharpening is walking the
delicate balance between making edges appear sufficiently pronounced, while also minimizing visible under and overshoots (called "sharpening halos").

Soft Original | Mild Sharpening | Over Sharpening (Visible Halos)

note: all images shown at 200% zoom to improve visibility

SETTINGS
Fortunately, most of the sharpening settings within image-editing software are reasonably standardized. One can usually adjust at least three settings:

Setting | How It Works
Radius | Controls the size of the edges you wish to enhance, where a smaller radius enhances smaller-scale detail. You'll usually want a radius setting that is comparable to the size of the smallest detail within your image.
Amount | Controls the overall strength of the sharpening effect, and is usually listed as a percentage. A good starting point is often a value of 100%.
Threshold (Masking) | Controls the minimum brightness change that will be sharpened. This can be used to sharpen more pronounced edges, while leaving more subtle edges untouched. It's especially useful to avoid sharpening noise.
Detail (if avail.) | Controls the relative sharpening of fine versus coarse detail (within a given radius value), in addition to affecting the overall strength of sharpening. Higher values emphasize fine detail, but also increase the overall sharpening effect. You will therefore likely need to adjust this setting in conjunction with the amount/percent setting.
It's generally advisable to first optimize the radius setting, then to adjust the amount, and then finally to fine-tune the results by adjusting the threshold/masking setting (and potentially other settings such as "detail"). Optimal results may
require a few iterations.
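To make the roles of these settings concrete, here is a from-scratch sketch of an unsharp mask applied to a grayscale copy of an image, using numpy and Pillow. The formula -- original plus amount times (original minus blurred), with differences below the threshold ignored -- is the standard textbook description; actual implementations differ in their details, and the file name is a placeholder.

    import numpy as np
    from PIL import Image, ImageFilter

    def unsharp_mask(img, radius=1.0, amount=1.0, threshold=0):
        # amount=1.0 corresponds to 100%; threshold is in 0-255 brightness levels
        gray = np.asarray(img.convert("L"), dtype=np.float32)
        blurred = np.asarray(img.convert("L").filter(ImageFilter.GaussianBlur(radius)),
                             dtype=np.float32)
        detail = gray - blurred                     # the edge differences to exaggerate
        detail[np.abs(detail) < threshold] = 0      # leave subtle edges untouched
        return Image.fromarray(np.clip(gray + amount * detail, 0, 255).astype(np.uint8),
                               mode="L")

    unsharp_mask(Image.open("photo.jpg"), radius=1.0, amount=1.0).save("photo_sharp.jpg")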

SHARPENING WORKFLOW
Most photographers now agree that sharpening is most effective and flexible when it's applied more than once during image editing. Each stage of the sharpening process can be categorized as follows:
Capture Sharpening -> Creative Sharpening -> Output Sharpening
Capture sharpening accounts for your image's source device, along with any detail & noise characteristics. Creative sharpening uniquely accounts for your image's content & artistic intent. Output sharpening accounts for the final output medium, after all editing & resizing.
(1) Capture sharpening aims to address any blurring caused by your image's source, while also taking image noise and detail into consideration. With digital cameras, such blurring is caused by the camera sensor's anti-aliasing filter and
demosaicing process, in addition to your camera's lens. Capture sharpening is required for virtually all digital images, and may be applied automatically by the camera for photos which are saved as JPEG files. It also ensures the image
will respond well to subsequent rounds of sharpening.
(2) Creative sharpening is usually applied selectively, based on artistic intent and/or image content. For example, you might not want to apply additional sharpening to a smooth sky or a person's skin, but you may want to crank up the
sharpness in foliage or a person's eye lashes, respectively. Overall though, its use may vary wildly from photo to photo, so creative sharpening is really a "catch all" category. It's also the least used stage since it can also be the most time-
consuming.
(3) Output sharpening uses settings customized for a particular output device, and is applied at the very end of the image editing workflow. This may include special considerations based on the size, type and viewing distance of a print, but it can also be used to offset any softening caused by resizing an image for the web or e-mail.
Overall, the above sharpening workflow has the convenience of being able to save edited images at a near-final stage. When printing or sharing one of these images, all that is needed is a quick top-off pass of sharpening for the output device. On the other hand, if all sharpening were applied in a single step, then all image editing would have to be re-done every time you wished to share/print the photo using a different output device.
Note: the above capture, creative and output sharpening terminology was formally introduced in
Real World Image Sharpening by Bruce Fraser & Jeff Schewe. Highly recommended.

STAGE 1: CAPTURE SHARPENING


Capture sharpening is usually applied during the RAW development process. This can either occur automatically in your camera, when it saves the image as a JPEG, or it can occur manually using RAW software on your computer (such
as Adobe Camera RAW - ACR, Lightroom or any other RAW software that may have come with your camera).
Automatic Capture Sharpening. Although most cameras automatically apply capture sharpening for JPEG photos, the amount will depend on your camera model and any custom settings you may have applied. Also be aware that the
preset shooting modes will influence the amount of capture sharpening. For example, images taken in landscape mode are usually much sharper than those taken in portrait mode. Regardless, optimal capture sharpening requires shooting
using the RAW file format, and applying the sharpening manually on your computer (see below).
Manual Capture Sharpening requires weighing the advantages of enhancing detail with the disadvantages of amplifying the appearance of image noise. First, to enhance detail, sharpen using a radius value that is comparable to the size
of the smallest details. For example, the two images below have vastly different levels of fine detail, so their sharpening strategies will also need to differ:
Coarse (Low Frequency) Detail Fine (High Frequency) Detail
Sharpening Radius: 0.8 pixels Sharpening Radius: 0.4 pixels

Note: The sharpening radii described above are applied to the full resolution images
(and not to the downsized images shown above).
Shooting technique and/or the quality of your camera lens can also impact the necessary sharpening radius. Generally, well-focused images will require a sharpening radius of 1.0 or less, while slightly out of focus images may require a
sharpening radius of 1.0 or greater. Regardless, capture sharpening rarely needs a radius greater than 2.0 pixels.

Soft Original | Radius Too Small (0.2 pixels) | Radius Too Large (2.0 pixels) | Radius Just Right (1.0 pixels)

When trying to identify an optimum sharpening radius, make sure to view a representative region within your image that contains the focal point and/or fine detail, and view it at 100% on-screen. Keep an eye on regions with high contrast
edges, since these are also more susceptible to visible halo artifacts. Don't fret over trying to get the radius "accurate" within 0.1 pixels; there's an element of subjectivity to this process, and such small differences wouldn't be
distinguishable in a print.
When noise is pronounced, capture sharpening isn't always able to be applied as aggressively and uniformly as desired. One often has to sacrifice sharpening some of the really subtle detail in exchange for not amplifying noise in
otherwise smooth regions of the image. Using high values of the threshold or masking settings helps ensure that sharpening is only applied to pronounced edges:
Original Image | Sharpening Mask -- shown both without and with sharpening threshold/masking applied

The value used for the "masking" setting above was 25.
Note how the masking/threshold setting was chosen so that only the edges of the cactus leaves are sharpened (corresponding to the white portions of the sharpening mask above). Such a mask was chosen because it doesn't worsen the
appearance of image noise within the otherwise textureless areas of the image. Also note how image noise is more pronounced within the darker regions.
If image noise is particularly problematic, such as with darker tones and/or high ISO speeds, one might consider using a creative sharpening technique, or using a third-party noise reduction plug-in. At the time of this writing, common
plug-ins include Neat Image, Noise Ninja, Grain Surgery & Noiseware. However, noise reduction should always be performed before sharpening, since sharpening will make noise removal less effective. One may therefore need to
postpone sharpening during RAW development until noise reduction has been applied.

STAGE 2: CREATIVE SHARPENING


While creative sharpening can be thought of as just about any sharpening which is performed between capture and output sharpening, its most common use is to selectively sharpen regions of a photograph. This can be done to avoid
amplifying image noise within smooth areas of a photo, or to draw viewer attention to specific subjects. For example, with portraits one may want to sharpen an eye lash without also roughening the texture of skin, or with landscapes, to
sharpen the foliage without also roughening the sky.
The key to performing such selective sharpening is the creation of a mask, which is just a way of specifying where and by how much the creative sharpening should be applied. Unlike with the capture sharpening example, this mask may need to be manually created. An example of using a mask for creative sharpening is shown below:
Image Used for Creative Sharpening | Sharpening Mask | Selective Sharpening Using a Mask
The top layer has creative sharpening applied; the mask ensures this is only applied to the white regions (an alternative version with selective blurring applied to the background is discussed below).

To apply selective sharpening using a mask:


1. Sharpen Duplicate. Make a duplicate of your image (with capture sharpening and all other editing applied), then apply creative sharpening to the entire image. This sharpening can be very
aggressive since you can always fine-tune it later.
2. Create Mask. In Photoshop, add a layer mask to the sharpened layer using the menus Layer > Layer Mask > Reveal All (or Hide All).
3. Paint Mask. Select the layer mask (by left-clicking on it). Paint regions of the image with white and/or black where you want creative sharpening to remain visible or hidden in the final image, respectively. Shades of gray will act partially.
4. Fine-Tune. Reduce the opacity of the top layer if you want to lessen the influence of creative sharpening. You can also change the blending mode of this layer to "Luminosity" to reduce color
artifacts.
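Conceptually, the blend performed by such a layer mask is simple: final = mask x sharpened + (1 - mask) x original, with white = 1, black = 0 and gray acting partially. A minimal numpy/Pillow sketch (file names are placeholders, and the mask is assumed to have been painted separately):

    import numpy as np
    from PIL import Image

    original  = np.asarray(Image.open("photo.jpg").convert("RGB"), dtype=np.float32)
    sharpened = np.asarray(Image.open("photo_sharpened.jpg").convert("RGB"), dtype=np.float32)
    mask = np.asarray(Image.open("mask.png").convert("L"), dtype=np.float32)[..., None] / 255.0

    blended = mask * sharpened + (1.0 - mask) * original     # white shows sharpened layer
    Image.fromarray(blended.astype(np.uint8)).save("photo_selectively_sharpened.jpg")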
Alternatively, sometimes the best technique for selectively sharpening a subject is to just blur everything else. The relative sharpness difference will increase -- making the subject appear much sharper -- while also avoiding over-sharpening. It can also lessen the impact of a distracting background; this is the selectively blurred version of the previous example mentioned in the caption above.
Another way of achieving the same results is to use a brush, such as a history, "sharpen more" or blurring brush. This can often be simpler than dealing with layers and masks. Sometimes this type of creative sharpening can even be
applied along with RAW development by using an adjustment brush in ACR or Lightroom, amongst others.
Overall, the options for creative sharpening are virtually limitless. Some photographers also apply local contrast enhancement (aka "clarity" in Photoshop) during this stage, although one could argue that this technique falls into a different
category altogether (even though it still uses the unsharp mask tool).

STAGE 3: OUTPUT SHARPENING FOR A PRINT


After capture and creative sharpening, an image should look nice and sharp on-screen. However, this usually isn't enough to produce a sharp print. The image may have also been softened due to digital photo enlargement. Output
sharpening therefore often requires a big leap of faith, since it's nearly impossible to judge whether an image is appropriately sharpened for a given print just by viewing it on your computer screen. In fact, effective output sharpening often
makes an on-screen image look harsh or brittle:
Original Image | Output Sharpening Applied for On-Screen Display | Output Sharpening Applied for a 300 PPI Glossy Print

Photograph of the Duomo at dusk - Florence, Italy (f/11.0 for 8.0 sec at 150 mm and ISO 200)
Output sharpening therefore relies on rule of thumb estimates for the amount/radius based on the (i) size and viewing distance of the print, (ii) resolution of the print (in DPI/PPI), (iii) type of printer and (iv) type of paper. Such estimates
are often built into RAW development or image editing software, but these usually assume that the image has already had capture sharpening applied (i.e., it looks sharp when viewed on-screen).
Alternatively, one can also estimate the radius manually using the calculator below:

Output Sharpening Radius Estimator (interactive calculator)
Inputs: typical viewing distance* (or the length of the print's diagonal) and print resolution (PPI**)
Output: estimated sharpening radius
**PPI = pixels per inch; see tutorial on "Digital Camera Pixels." DPI is often used interchangeably with PPI, although strictly speaking, the two terms can have different meanings.
*It's generally a good estimate to assume that people will be viewing a print at a distance which is roughly equal to the distance along the print's diagonal.

The above radius estimates should only be taken as a rough guideline. In general, a larger viewing distance demands a larger output sharpening radius. The key is to have this radius small enough that it is near the limit of what our eyes
can resolve (at the expected viewing distance), but is also large enough that it visibly improves sharpness.
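The website's exact calculator formula isn't reproduced here, but the principle above can be sketched in code: estimate the pixel size of the smallest detail the eye can resolve at the expected viewing distance, and use that as the radius. The sketch below assumes roughly one arc-minute of visual acuity, which is an assumption on our part; treat its output only as a rough starting point.

    import math

    def output_sharpening_radius(viewing_distance_inches, print_ppi,
                                 eye_resolution_arcmin=1.0):
        # Pixel size of the smallest detail resolvable at this viewing distance:
        angle = math.radians(eye_resolution_arcmin / 60.0)
        return viewing_distance_inches * math.tan(angle) * print_ppi

    # e.g. a print viewed from ~25 inches (roughly its diagonal) at 300 PPI:
    print(round(output_sharpening_radius(25, 300), 1))        # ~2.2 pixels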
Regardless, the necessary amount of sharpening will still likely depend on the image content, type of paper, printer type and the look you want to achieve. For example, matte/canvas papers often require more aggressive sharpening than
glossy paper. A good starting point is always the default amount/percent value used by your image editing software. However, for mission-critical prints the best solution is often just trial and error. To save costs, you can always print a
cropped sample instead of the full print.
STAGE 3: OUTPUT SHARPENING FOR THE WEB & EMAIL
Even if an image already looks sharp when viewed on-screen, resizing it to less than 50% of its original size often removes any existing sharpening halos. One usually needs to apply output sharpening to offset this effect:

Original Image | Softer Downsized Image | Downsized Image (after output sharpening)

For downsized images, an unsharp mask radius of 0.2-0.3 and an amount of 200-400% works almost universally well. With such a small radius value, one doesn't have to worry about halo artifacts, although new problems such as
aliasing/pixelation and moiré may become apparent if the amount/percent is set too high.
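A minimal Pillow sketch of this, using the values just suggested (the file name, JPEG quality and 25% downsize factor are arbitrary examples):

    from PIL import Image, ImageFilter

    img = Image.open("photo.jpg")
    small = img.resize((img.width // 4, img.height // 4), Image.LANCZOS)
    small = small.filter(ImageFilter.UnsharpMask(radius=0.3, percent=300, threshold=0))
    small.save("photo_web.jpg", quality=90)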
For more on image downsizing, see the tutorial on image resizing for the web and e-mail.

ADDITIONAL SHARPENING ADVICE


• Sharpening is irreversible; always save unsharpened originals whenever possible.
• RAW & TIFF files respond much better to sharpening than JPEG files, since the former preserve more detail. Further, sharpening may amplify JPEG compression artifacts.
• Blurring due to subject motion or some types of camera shake may require advanced techniques such as deconvolution or Photoshop's "smart sharpen" tool.
• Some camera lenses do not blur objects equally in all directions (see tutorial on camera lens quality - astigmatisms). This type of blur tends to increase further from the center of the image, and may
be in a direction which is either (i) away from the image's center or (ii) perpendicular to that direction. This can be extremely difficult to remove, and usually requires creative sharpening.
• Images will often appear sharper if you also remove chromatic aberrations during RAW development. This option can be found under the "lens corrections" menu in Adobe Camera RAW, although
most recent photo editing software offers a similar feature.
• Grossly over-sharpened images can sometimes be partially recovered in Photoshop by (i) duplicating the layer, (ii) applying a gaussian blur of 0.2-0.5 pixels to this layer 2-5 times, (iii) setting the
blending mode of this top layer to "darken" and (iv) potentially decreasing the layer's opacity to reduce the effect.
• The light sharpening halos are often more objectionable than the dark ones; advanced sharpening techniques sometimes get away with more aggressive sharpening by reducing the prominence of the former.
• Don't get too caught up with scrutinizing all the fine detail. Better photos (and more fun) can usually be achieved if this time is spent elsewhere.

RECOMMENDED READING
If you're thirsting for additional examples, along with a more thorough technical treatment of the above topics, a great book is Real World Image Sharpening (2nd Edition) by Bruce Fraser & Jeff Schewe.

22.VIEWING ADVICE -
Computers use varying types of display devices—ranging from small monitors to the latest large flat panels. Despite all of these displays, the internet has yet to set a standard for representing color and tone such that images on one
display look the same as those viewed on another. As a result, this website has been designed to minimize these unavoidable viewing errors by approximating how the average display device would render these images.
In addition to these measures, you can help ensure more accurate viewing by taking the extra step to verify that your display has been properly calibrated. The following image has been designed so that when viewed at a distance, the
central square should appear to have the same shade as the gray background, whereas the leftmost and rightmost squares will appear slightly darker and lighter than the background, respectively.
© 2004-2010 Sean McHugh
If this is not the case, then your screen will display images much lighter or darker than they are intended to be viewed. To correct for this, set your display to maximum contrast (if possible), and then adjust the brightness setting
until the central square blends in completely. This is best verified by taking several steps back from your display, closing one eye, and partially opening the other so that you are seeing the screen slightly out of focus.
Although the above calibration step will help, also be aware that many LCD screens will display these galleries with more contrast than intended. These pages are optimally formatted on screens displaying at 1024x768 and higher resolutions. Some pop-up blockers may need to be disabled in order to see larger views of the individual images or to visit their purchase screens. Ideal viewing conditions can be obtained by using a display which has been
hardware calibrated using a measurement device. Hardware calibrated displays will not only pass the above test, but should also show all eight gradations in each of the dark and light rectangles below with a neutral tone.

If your display does not pass either of these tests, do not despair! The eye has a remarkable ability to adjust to viewing conditions; if you have been using this display for a while, images within this gallery will probably look just fine
compared to what you are used to seeing. Be aware that for such cases, the tones in the final print may look different than when viewed on your display.
For more on this topic, please see this website's tutorial on monitor calibration.

24.MONITOR CALIBRATION FOR PHOTOGRAPHY -


Knowing how to calibrate your monitor is critical for any photographer who wants accurate and predictable photographic prints. If your monitor is not correctly reproducing shades and colors, then all the time spent on image editing and
post-processing could actually be counter-productive. This tutorial covers basic calibration for the casual photographer, in addition to using calibration and profiling devices for high-precision results. Furthermore, it assumes that tossing
your old monitor and buying a new one is not an option.

Digital Image File -> Color Profile -> Calibrated Monitor

ADJUSTING BRIGHTNESS & CONTRAST


The easiest (but least accurate) way to calibrate your display is to simply adjust its brightness and contrast settings. This method doesn't require a color profile for your monitor, so it's ideal for casual use, or for when you're not at your
own computer and need to make some quick adjustments.
The images below are designed to help you pick optimal brightness/contrast settings. A well-calibrated monitor should be able to pass both tests, but if it cannot, then you will have to choose which of the two is most important. In either
case, make sure that your display has first been given at least 10-15 minutes to warm up.
(1) Mid-Tones. Having well-calibrated mid-tones is often the highest-priority goal. Such a monitor should depict the central square as being the same shade as the solid outer portion -- when viewed out of focus or at a distance. The
leftmost and rightmost squares should also appear darker and lighter than the solid gray, respectively.
© 2004-2010 Sean McHugh
Note: the above calibration assumes that your monitor is set to gamma 2.2.
If the central square is lighter or darker than the outer gray region, your display is likely depicting images lighter or darker than intended. This will also have a noticeable impact on your prints, so it's something that should be addressed.
If you are using an LCD monitor, first set your display to its default contrast (this will likely be either 100% or 50%), then adjust the brightness until the central square blends in. If you are using a CRT monitor (the larger "old-fashioned"
type), then instead set it to maximum contrast. For both CRT & LCD displays, make sure that these are set to gamma 2.2 if available (most current displays come with this as the native setting).
Note: increasing the brightness of your display too much can shorten its usable life span. You will likely not need to have your display at its maximum brightness if the room isn't too bright, if the display isn't back-lit (such as in front of a
window) and if the display isn't too old.
(2) Highlight & Shadow Detail. If you've followed the previous calibration, now your mid-tones will be reproduced roughly at the shade intended. However, it may also mean that the shadows and highlights will appear too bright or
dark, or vice versa. You should be able to distinguish the 8 shades in each of the two images below:

Shadow Detail Highlight Detail

The two adjacent shaded bands at each outer edge of this page should be just barely distinguishable. Otherwise you've likely reached the limit of what brightness/contrast adjustments alone can achieve. Alternatively, if maximal shadow
and highlight detail are more important than mid-tone lightness, you can ignore the mid-tone image. In that case, first use brightness to control shadow detail and then use contrast to control highlight detail (in that order). When brightness
is too high, solid black will appear gray, but when it's too low shadow clipping will make several of the darker 8 shades appear the same.
However, the above examples are just crude adjustments that only address small portions of the tonal range, and do not fix colors at all. There are somewhat more accurate methods out there for visual calibration, but ultimately, achieving
truly accurate results requires systematic and objective measurements using a calibration device...

OVERVIEW: CALIBRATION & PROFILING


The colors and shades that a monitor reproduces vary with the monitor's type, brand, settings and even age. Unfortunately, unlike in the digital world, all numbers aren't created equal when it comes to monitors. A digital green value may
therefore appear darker, lighter or with a different saturation than this color was intended to be seen:
Diagram: digital green values (200, 150, 100, 50) sent directly to Monitor "X" produce shades that differ from the standardized color (<- Color Mismatch ->).


Note: for the purposes of this example, "standardized color" is just one example of a desirable state that is well-defined in terms of universal parameters, such as gamma, white point and luminance.

Ideally, you would get your monitor to simply translate the digital values in a file into a standardized set of colors. However, this isn't always possible, so the process of monitor calibration actually requires two steps: (1) calibration and
(2) profiling.
(1) Calibration is the process of getting your monitor into a desirable and well-defined state. This usually involves changing various physical parameters on your monitor, such as the brightness setting from before, in addition to creating what is
called a Look-Up Table (LUT).
The LUT takes an input value, such as green=50 in the above example, and then says "on 'Monitor X,' I know that it reproduces green=50 darker than the standard, but if I convert the 50 into a 78 before sending it to the monitor, then the
color will come out how a green=50 was intended to be seen." An LUT therefore translates digital values in a file into new values which effectively compensate for that particular monitor's characteristics:
Diagram: the LUT converts each digital value of green into a compensated digital value before it reaches Monitor "X", so that the displayed shade matches the standardized color (<- Colors Match ->):
200 —> 200,  150 —> 122,  100 —> 113,  50 —> 78

(2) Profiling is the process of characterizing your monitor's calibrated state using a color profile. These characteristics include the range of colors your monitor is capable of displaying (the "color space"), in addition to the spacing of
intermediate shades within this range ("gamma"). Other properties may also be included.
Profiling is important because different devices cannot necessarily reproduce the same range of colors and shades (a "gamut mismatch"). A perfect translation from one device's color into another's therefore isn't always possible. Color
profiles enable color-managed software to make intelligent compromises when making these imperfect conversions:
Diagram: Color-Managed Software converts an Original Image in Standard "A" (Wide Color Gamut) into a Converted Image in Standard "B" (Narrow Color Gamut) -- a gamut mismatch.

In the above example, Standard "A" has a greater range of greens than Standard "B," so the colors in the original image get squeezed from a wide range of intensities to a narrow range.
For details on how color spaces are converted, also see tutorial on "color space conversion."
MONITOR CALIBRATION DEVICES

Calibration device in use


A monitor calibration device is what performs the task of both calibration and profiling. It is usually something that looks like a computer mouse, but it instead fastens to the front of your monitor. Special software then controls the
monitor so that it displays a broad range of colors and shades underneath the calibration device, which are each sequentially measured and recorded.
Common calibration devices include the X-Rite Eye-One Display, ColorVision Spyder, ColorEyes Display and ColorMunki Photo, amongst others.
Before initiating a calibration, first make sure to give your monitor at least 10-15 minutes to warm up. This ensures that its brightness and color balance have reached a steady and reproducible state.
Just before the calibration starts, your calibration software will ask you to specify several parameters that it will calibrate to (the "target settings"). These may include the white point, gamma and luminance (we'll get to these in the next
section). During the calibration process you may also be instructed to change various display settings, including brightness and contrast (and RGB values if you have a CRT).
The result will be a matrix of color values and their corresponding measurements. Sophisticated software algorithms then attempt to create an LUT which both (i) reproduces neutral, accurate and properly-spaced shades of gray and (ii)
reproduces accurate color hue and saturation across the gamut. If neither are perfectly achievable (they never are), then the software tries to prioritize so that inaccuracies only correspond to tonal and color differences that our eyes are not
good at perceiving.

CALIBRATION SETTINGS
Here's a brief description and recommendation for each of the target calibration settings:
White Point. This setting controls the relative warmth or coolness of the display's lightest tone, as specified by the "color temperature." Higher color temperatures appear cooler, whereas lower temperatures appear warmer (yes, this is at first counter-intuitive).

Warmer Color Temperature    Your Monitor's Native Color Temperature    Cooler Color Temperature

Even though the above shades appear slightly warmer and cooler, that's primarily because they're being compared side by side. If they were on their own, and were the brightest shade your display could show, then your eye would adjust
and you would likely call each of them "white."
See the tutorial on white balance for additional background reading on this topic.
With CRT monitors, the standard recommendation is to set your display to around 6500K (aka D65), which is a little cooler than daylight. However, with LCD monitors it's become a bit more complicated. While many LCD's have a color
temperature option, the back light for these displays always has a native color temperature. Any deviation from this native value will end up reducing your display's color gamut. For this reason, it's generally recommended to leave your
LCD at its default color temperature unless you have a good reason to set it otherwise. Your eye will adjust to this native color temperature, and no warm or cool hue will be apparent unless it is being directly compared.
Gamma. This setting controls the rate at which shades appear to increase from black to white (for each successive digital value). This makes a given image appear brighter and darker for higher and lower gamma values, respectively, but
does not change the black and white points. It also strongly influences an image's apparent contrast:

Gamma 1.0 Gamma 1.8 Gamma 2.2 Gamma 4.0


Note: The above images assume that your display is currently set to gamma 2.2.
Older Mac computers at one time used a gamma of 1.8, but they now also use gamma 2.2.
A display gamma of 2.2 has become a standard for image editing and viewing, so it's generally recommended to use this setting. It also correlates best with how we perceive brightness variations, and is usually close to your display's
native setting.
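To make the gamma relationship concrete, here is a minimal sketch (in Python, with purely illustrative values) of how a display gamma of 2.2 maps 8-bit digital values onto relative luminance:

def relative_luminance(value, gamma=2.2):
    """Return the fraction of maximum luminance produced by an 8-bit value."""
    return (value / 255.0) ** gamma

for v in (64, 128, 192, 255):
    print(f"digital {v:3d} -> {relative_luminance(v):.3f} of maximum luminance")

# A mid-tone value of 128 yields only ~22% of maximum luminance at gamma 2.2;
# raising the gamma pushes these fractions lower, making the image look darker.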
Luminance. This setting controls the amount of light emitted from your display.

Unlike with the white point and gamma settings, the optimal luminance setting is heavily influenced by the brightness of your working environment. Most people set the luminance to anywhere from 100-150 cd/m², with brighter working
environments potentially requiring values that exceed this range. The maximum attainable luminance will depend on your monitor type and age, so this may ultimately limit how bright your working environment can be.
However, higher luminance will shorten the usable life span of your monitor, so it's always better to instead move your monitor to somewhere darker if you can. Use the lowest setting in the 100-150 range where you can still see all 8
shades in the above image.

CALIBRATION: LOOK-UP TABLE


The Look-Up Table (LUT) is either controlled by your video card or by your monitor itself, so it will be used regardless of whether your software program is color managed -- unlike with the color profile. The LUT is usually loaded
immediately after booting up into your operating system, and is used identically regardless of what your monitor is displaying.
Whenever the red, green and blue values are equal, an accurate monitor should display this as a neutral gray. However, you'd be surprised how often this isn't the case (see below). The job of the LUT* is to maintain neutral gray tones
with the correct gamma.
*Note: this example is for a simpler 1D 8-bit LUT, as is most commonly used with CRT monitors.

Diagram: equal R,G,B input values (200,200,200 / 159,159,159 / 100,100,100 / 50,50,50) sent to Monitor "X" produce shades that do not match neutral gray (<- Mismatch ->).

A sample LUT that corrects Monitor "X" is shown below. It effectively applies a separate tonal curve to each of your monitor's three color channels:

No Adjustment  —>  Look-Up Table (LUT)

Note: The table shown above is an 8-bit 1D LUT; there are also more complicated 3D LUT's which do not treat each color independently. However, the basic concept remains the same.
Without the above LUT, your video card sends an input color value of 159 (from a digital file) directly to your monitor as an output value of 159 (no matter what the color is). With the LUT, the video card looks up each red, green and
blue value using the tonal curves. An input value of R,G,B=159,159,159 gets sent to your monitor as an output value of 145,155,162 (which is now perceived as neutral gray). Also, note how greater color corrections correspond to color
curves which diverge more from a straight diagonal.
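As a rough sketch of this per-channel look-up, the following Python fragment builds hypothetical correction tables (only the green=159 entry from the example above is filled in; a real LUT redefines all 256 entries per channel) and applies them independently to each channel:

def build_identity_lut():
    """A straight diagonal curve: every value maps to itself (no adjustment)."""
    return list(range(256))

# Hypothetical correction LUTs for Monitor "X". Only the entry for 159 is
# filled in here for brevity.
red_lut, green_lut, blue_lut = (build_identity_lut() for _ in range(3))
red_lut[159], green_lut[159], blue_lut[159] = 145, 155, 162

def apply_lut(rgb, luts):
    """Look up each channel independently through its own tonal curve."""
    return tuple(lut[channel] for channel, lut in zip(rgb, luts))

print(apply_lut((159, 159, 159), (red_lut, green_lut, blue_lut)))
# -> (145, 155, 162): the values actually sent to the monitor, so that the
#    result is perceived as neutral gray.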
There are often several LUT's along the imaging chain -- not just with the video card. The other LUT that is most relevant to monitor calibration is your monitor's internal LUT (as discussed later). If your monitor supports modifying its
own LUT (few do), this will usually achieve more accurate calibrations than using your video card's LUT. However, unless the calibration software is designed for your particular monitor, it will likely end up using your video card's LUT
instead.

PROFILING: COLOR PROFILE


The color profile specifies the target settings from your calibration, such as gamma, white point and luminance, in addition to measurements from the calibration, such as the maximum red, green and blue intensities that your display can
emit. These properties collectively define the color space of your monitor. A copy of the LUT is also included, but this is not used directly since it's already been implemented by your monitor or video card.
A color profile is used to convert images so that they can be displayed using the unique characteristics of your monitor. Unlike with the LUT, you will need to view images using color-managed software in order to use a color
profile. This won't be a problem if you're running the latest PC or Mac operating systems though, since they're both color-managed. Otherwise Photoshop or any other mainstream image editing or RAW development software will work
just fine.
Whenever a digital image is opened that contains an embedded color profile, your software can compare this profile to the profile that was created for your monitor. If the monitor has the same range of colors specified in the digital
image, then values from the file will be directly converted by the LUT into the correct values on your monitor. However, if the color spaces differ (as is usually the case), then your software will perform a more sophisticated conversion.
This process is called color space conversion.

TESTING YOUR MONITOR CALIBRATION


Do not assume that just because you've performed a color calibration, your monitor will now reproduce accurate color without complication. It's important to also verify the quality of this calibration. If you end up noticing that your color calibration device was unable to repair some inaccuracies, at least you can keep these in the back of your mind when performing any image editing that influences color.
The quickest and easiest way to diagnose the quality of a color calibration is to view a large grayscale gradient in an image viewing program that supports color management. Suboptimal monitor calibration may render this gradation with subtle vertical bands of color, or occasional discrete jumps in tone. Move your mouse over the image below to see what a poor quality monitor calibration might look like:

Example of a smooth grayscale gradation for diagnosing the quality of a monitor calibration.
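If you'd like to generate such a test gradient yourself, a minimal sketch is shown below; it assumes the Pillow imaging library is installed, and the dimensions are arbitrary:

from PIL import Image  # assumes the Pillow library is installed

width, height = 1024, 200
gradient = Image.new("L", (width, height))            # "L" = 8-bit grayscale
gradient.putdata([int(255 * x / (width - 1))          # 0 (black) .. 255 (white)
                  for _ in range(height) for x in range(width)])
gradient.save("grayscale_gradient.png")
# View this at full screen in color-managed software: abrupt tonal jumps or
# colored vertical bands suggest the calibration should be redone.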
Such a gradation is easiest to diagnose when viewed at your display's maximum size, and when alternating between having the monitor's color profile turned on and off. In Photoshop, this is achieved by using "Proof Colors" set to
"Monitor RGB"; CTRL+Y toggles the monitor profile on and off. When "Monitor RGB" is turned on, this means that the monitor's color profile is not being used.
If color banding is visible, then this might mean that your monitor needs re-calibration. It's generally recommended to perform this once every month or so, depending on how important color accuracy is to your work.
Alternatively, your monitor's native color reproduction might be so far from optimal that the color profile represents an extreme correction. This could be due to the monitor calibration settings you're using, but could also be caused by the
age of the monitor. In the latter case, a color profile is likely still a vast improvement over no color profile -- but it comes with compromises.

LIMITATIONS OF MONITOR CALIBRATION


Unfortunately, there are limits to how accurately you can calibrate your display. With a digital display, the more you have to change your monitor from its native state, the more you will decrease the number of colors/shades that it can display. Fortunately, the bit depth of your monitor's internal LUT influences how well it can be calibrated, since a monitor with a higher bit depth LUT is able to draw upon a larger palette of colors:

Diagram: No Adjustment (4 output shades)  —>  Low Bit Depth LUT (2 output shades)  OR  High Bit Depth LUT (4 output shades)


Note: A higher bit depth internal LUT does not mean that a monitor can actually display more colors at the same time, since the number of input values remains the same. This is why a higher bit depth LUT in your video card will not on
its own achieve more accurate calibrations.
In the low bit depth example, the brightest (4) and darkest (1) shades are forced to merge with white (5) and black (0), respectively, since the LUT has to round to the nearest output value available. On the other hand, the high bit depth
LUT can use additional intermediate values. This greatly reduces the likelihood of color banding and image posterization -- even when the display is old and deviates substantially from its original colors.
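The following toy sketch illustrates the rounding argument with made-up numbers: the same correction curve is quantized to a coarse and to a fine set of output levels, and the coarse version merges some shades together:

def quantize(value, levels):
    """Round a 0-1 value to the nearest of `levels` evenly spaced outputs."""
    step = 1.0 / (levels - 1)
    return round(value / step) * step

def correction(value):
    """A hypothetical calibration curve (here a strong brightening curve)."""
    return value ** 0.5

inputs = [i / 5 for i in range(6)]                        # six input shades
coarse = [quantize(correction(v), 6) for v in inputs]     # low bit depth LUT
fine = [quantize(correction(v), 256) for v in inputs]     # high bit depth LUT

print("distinct output shades, coarse LUT:", len(set(coarse)))   # -> 5
print("distinct output shades, fine LUT:  ", len(set(fine)))     # -> 6
# The coarse LUT forces two corrected shades onto the same output value, which
# is what produces posterization and banding on a heavily corrected display.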
If you have a new accurate display with an 8-bit LUT, then you'll likely get good calibrations; the LUT bit depth is just something to be aware of as your monitor ages. The vast majority of displays have an 8-bit LUT, although some have
6-bit or 10+ bit LUT's. Avoid LCD monitors that are marketed to the gaming community, because these sometimes sacrifice the bit depth of their LUT (or other aspects) in exchange for higher refresh rates -- which are of no importance
to viewing still images.

25. DIGITAL CAMERA SENSOR SIZES -


This article aims to address the question: how does your digital camera's sensor size influence different types of photography? Your choice of sensor size is analogous to choosing between 35 mm, medium format and large format film
cameras-- with a few notable differences unique to digital technology. Much confusion often arises on this topic because there are both so many different size options, and so many trade-offs relating to depth of field, image noise,
diffraction, cost and size/weight.
I have written this article after conducting my own research to decide whether the new Canon EOS 5D is really an upgrade from the 20D for the purposes of my photography. Background reading on this topic can be found in the tutorial
on digital camera sensors.

OVERVIEW OF SENSOR SIZES


Sensor sizes currently have many possibilities, depending on their use, price point and desired portability. The relative size for many of these is shown below:

Canon's 1Ds/1DsMkII/5D and the Kodak DCS 14n are the most common full frame sensors. Canon cameras such as the 300D/350D/10D/20D all have a 1.6X crop factor, whereas Nikon cameras such as the D70(s)/D100 have a 1.5X
crop factor. The above chart excludes the 1.3X crop factor, which is used in Canon's 1D series cameras.
Camera phones and other compact cameras use sensor sizes in the range of ~1/4" to 2/3". Olympus, Fuji and Kodak all teamed up to create a standard 4/3 system, which has a 2X crop factor compared to 35 mm film. Medium format and
larger sensors exist, however these are far less common and currently prohibitively expensive. These will thus not be addressed here specifically, but the same principles still apply.

CROP FACTOR & FOCAL LENGTH MULTIPLIER


The crop factor is the sensor's diagonal size compared to a full-frame 35 mm sensor. It is called this because when using a 35 mm lens, such a sensor effectively crops out this much of the image at its exterior (due to its limited size).
35 mm Full Frame Angle of View

One might initially think that throwing away image information is never ideal, however it does have its advantages. Nearly all lenses are sharpest at their centers, while quality degrades progressively toward the edges. This means that
a cropped sensor effectively discards the lowest quality portions of the image, which is quite useful when using low quality lenses (as these typically have the worst edge quality).

Uncropped Photograph Center Crop Corner Crop

On the other hand, this also means that one is carrying a much larger lens than is necessary-- a factor particularly relevant to those carrying their camera for extended periods of time (see section below). Ideally, one would use nearly all
image light transmitted from the lens, and this lens would be of high enough quality that its change in sharpness would be negligible towards its edges.
Additionally, the optical performance of wide angle lenses is rarely as good as longer focal lengths. Since a cropped sensor is forced to use a wider angle lens to produce the same angle of view as a larger sensor, this can degrade
quality. Smaller sensors also enlarge the center region of the lens more, so its resolution limit is likely to be more apparent for lower quality lenses. See the tutorial on camera lens quality for more on this.
Similarly, the focal length multiplier relates the focal length of a lens used on a smaller format to a 35 mm lens producing an equivalent angle of view, and is equal to the crop factor. This means that a 50 mm lens used on a sensor
with a 1.6X crop factor would produce the same field of view as a 1.6 x 50 = 80 mm lens on a 35 mm full frame sensor.
Focal Length Multiplier Calculator (interactive): enter a sensor type and the actual lens focal length (mm) to obtain the focal length multiplier and the 35 mm equivalent focal length.
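The arithmetic behind such a calculator is simple multiplication by the crop factor; here is a minimal sketch using the approximate crop factors quoted in this article:

# Approximate crop factors quoted in this article.
CROP_FACTORS = {
    "35 mm full frame": 1.0,
    "Canon 1D series": 1.3,
    "Nikon DX": 1.5,
    "Canon APS-C": 1.6,
    "Four Thirds": 2.0,
}

def equivalent_focal_length(actual_focal_length_mm, sensor):
    """35 mm equivalent focal length = actual focal length x crop factor."""
    return actual_focal_length_mm * CROP_FACTORS[sensor]

print(equivalent_focal_length(50, "Canon APS-C"))   # -> 80.0 mm, as in the text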

Be warned that both of these terms can be somewhat misleading. The lens focal length does not change just because a lens is used on a different sized sensor-- just its angle of view. A 50 mm lens is always a 50 mm lens, regardless of
the sensor type. At the same time, "crop factor" may not be appropriate to describe very small sensors because the image is not necessarily cropped out (when using lenses designed for that sensor).

LENS SIZE AND WEIGHT CONSIDERATIONS


Smaller sensors require lighter lenses (for equivalent angle of view, zoom range, build quality and aperture range). This difference may be critical for wildlife, hiking and travel photography because all of these often utilize heavier
lenses or require carrying equipment for extended periods of time. The chart below illustrates this trend for a selection of Canon telephoto lenses typical in sport and wildlife photography:

An implication of this is that if one requires the subject to occupy the same fraction of the image on a 35 mm camera as using a 200 mm f/2.8 lens on a camera with a 1.5X crop factor (requiring a 300 mm f/2.8 lens), one would have to
carry 3.5X as much weight! This also ignores the size difference between the two, which may be important if one does not want to draw attention in public. Additionally, heavier lenses typically cost much more.

For SLR cameras, larger sensor sizes result in larger and clearer viewfinder images, which can be especially helpful when manual focusing. However, these will also be heavier and cost more because they require a larger
prism/pentamirror to transmit the light from the lens into the viewfinder and towards your eye.

DEPTH OF FIELD REQUIREMENTS


As sensor size increases, the depth of field will decrease for a given aperture (when filling the frame with a subject of the same size and distance). This is because larger sensors require one to get closer to their subject, or to use a
longer focal length in order to fill the frame with that subject. This means that one has to use progressively smaller aperture sizes in order to maintain the same depth of field on larger sensors. The following calculator predicts the
required aperture and focal length in order to achieve the same depth of field (while maintaining perspective).
Depth of Field Equivalents calculator (interactive): enter the selected aperture and actual lens focal length (mm) for Sensor #1, plus a second sensor type, to obtain the required focal length (for the same perspective) and the required aperture on Sensor #2.

As an example calculation, if one wanted to reproduce the same perspective and depth of field on a full frame sensor as that attained using a 10 mm lens at f/11 on a camera with a 1.6X crop factor, one would need to use a 16 mm lens and
an aperture of roughly f/18. Alternatively, if one used a 50 mm f/1.4 lens on a full frame sensor, this would produce a depth of field so shallow it would require an aperture of f/0.9 on a camera with a 1.6X crop factor-- not possible with
consumer lenses!
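A rough sketch of the equivalence used above: to preserve both perspective and depth of field, the focal length and the f-number each scale by the ratio of the two crop factors (this ignores small differences in the circle-of-confusion criteria that a full calculator would include):

def dof_equivalent(focal_length_mm, f_number, crop_from, crop_to):
    """Scale focal length and f-number to keep perspective and depth of field."""
    scale = crop_from / crop_to
    return focal_length_mm * scale, f_number * scale

focal, aperture = dof_equivalent(10, 11, crop_from=1.6, crop_to=1.0)
print(f"~{focal:.0f} mm at about f/{aperture:.0f}")     # -> ~16 mm at about f/18

focal, aperture = dof_equivalent(50, 1.4, crop_from=1.0, crop_to=1.6)
print(f"~{focal:.0f} mm at about f/{aperture:.1f}")     # -> ~31 mm at about f/0.9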

A shallower depth of field may be desirable for portraits because it improves background blur, whereas a larger depth of field is desirable for landscape photography. This is why compact cameras struggle to produce significant
background blur in portraits, while large format cameras struggle to produce adequate depth of field in landscapes.
Note that the above calculator assumes that you have a lens on the new sensor (#2) which can reproduce the same angle of view as on the original sensor (#1). If you instead use the same lens, then the aperture requirements remain the
same (but you will have to get closer to your subject). This option, however, also changes perspective.

INFLUENCE OF DIFFRACTION
Larger sensor sizes can use smaller apertures before the diffraction airy disk becomes larger than the circle of confusion (determined by print size and sharpness criteria). This is primarily because larger sensors do not have to be enlarged
as much in order to achieve the same print size. As an example: one could theoretically use a digital sensor as large as 8x10 inches, and so its image would not need to be enlarged at all for an 8x10 inch print, whereas a 35 mm sensor
would require significant enlargement.
Use the following calculator to estimate when diffraction begins to reduce sharpness. Note that this only shows when diffraction will be visible when viewed onscreen at 100%-- whether this will be apparent in the final print also depends
on viewing distance and print size. To calculate this as well, please visit: diffraction limits and photography.
Diffraction Limited Aperture Estimator (interactive): enter the sensor size and resolution (megapixels) to obtain the diffraction limited aperture.
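As a rough sketch of the kind of estimate such a calculator makes, the fragment below assumes green light (550 nm), a 3:2 sensor, and treats diffraction as becoming visible once the airy disk diameter exceeds roughly two pixel widths; real calculators may use somewhat different criteria:

WAVELENGTH_MM = 550e-6   # green light, 550 nm, expressed in millimeters

def diffraction_limited_fstop(sensor_width_mm, megapixels, aspect=3 / 2):
    """F-number at which the airy disk spans roughly two pixels."""
    horizontal_pixels = (megapixels * 1e6 * aspect) ** 0.5
    pixel_pitch_mm = sensor_width_mm / horizontal_pixels
    # Airy disk diameter ~ 2.44 * wavelength * f-number; set it equal to 2 pixels.
    return 2 * pixel_pitch_mm / (2.44 * WAVELENGTH_MM)

# Example: an ~8 megapixel sensor with a 1.6X crop factor (~22.2 mm wide)
print(f"f/{diffraction_limited_fstop(22.2, 8):.1f}")    # roughly f/9-f/11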
Keep in mind that the onset of diffraction is gradual, so apertures slightly larger or smaller than the above diffraction limit will not all of a sudden look better or worse, respectively. On a Canon 20D, for example, one can often use f/11
without noticeable changes in focal plane sharpness, but above this it becomes quite apparent. Furthermore, the above is only a theoretical limit; actual results will also depend on lens characteristics. The following diagrams show the
size of the airy disk (theoretical maximum resolving ability) for two apertures against a grid representing pixel size:

Pixel Density Limits Resolution (Shallow DOF Requirement)    Airy Disk Limits Resolution (Deep DOF Requirement)

An important implication of the above results is that the diffraction-limited pixel size increases for larger sensors (if the depth of field requirements remain the same). This pixel size refers to when the airy disk size becomes the
limiting factor in total resolution-- not the pixel density. Further, the diffraction-limited depth of field is constant for all sensor sizes. This factor may be critical when deciding on a new camera for your intended use, because more pixels
may not necessarily provide more resolution (for your depth of field requirements). In fact, more pixels could even harm image quality by increasing noise and reducing dynamic range (next section).

PIXEL SIZE: NOISE LEVELS & DYNAMIC RANGE


Larger sensors generally also have larger pixels (although this is not always the case), which give them the potential to produce lower image noise and have a higher dynamic range. Dynamic range describes the range of tones which a sensor can capture: below the point where a pixel becomes completely white, yet above the point where texture is indiscernible from background noise (near black). Since larger pixels have a greater volume -- and thus a greater photon capacity -- these generally have a higher dynamic range.

Note: cavities shown without color filters present


Further, larger pixels receive a greater flux of photons over a given exposure time (at the same aperture), so their light signal is much stronger. For a given amount of background noise, this produces a higher signal to noise ratio-- and
thus a smoother looking photo.
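A toy model can make this concrete. Assuming only photon shot noise plus a fixed read-noise floor (all numbers below are hypothetical), larger pixels come out ahead on both signal-to-noise ratio and dynamic range:

import math

def pixel_stats(pixel_pitch_um, photons_per_um2=500,
                full_well_per_um2=2000, read_noise_e=10):
    """Signal-to-noise ratio (dB) and dynamic range (stops) for one pixel."""
    area = pixel_pitch_um ** 2
    signal = photons_per_um2 * area                  # photons captured
    noise = math.sqrt(signal + read_noise_e ** 2)    # shot noise + read noise
    full_well = full_well_per_um2 * area             # saturation capacity
    snr_db = 20 * math.log10(signal / noise)
    dr_stops = math.log2(full_well / read_noise_e)
    return snr_db, dr_stops

for pitch in (2.0, 6.4):   # e.g. a compact-camera pixel vs. a DSLR pixel
    snr, dr = pixel_stats(pitch)
    print(f"{pitch} um pixel: SNR ~{snr:.0f} dB, dynamic range ~{dr:.1f} stops")
# The larger pixel collects more photons and holds more charge, so both its
# signal-to-noise ratio and its dynamic range come out higher.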

Larger Pixels (Often Larger Sensor)    Smaller Pixels (Often Smaller Sensor)

This is not always the case however, because the amount of background noise also depends on sensor manufacturing process and how efficiently the camera extracts tonal information from each pixel (without introducing additional
noise). In general though, the above trend holds true. Another aspect to consider is that even if two sensors have the same apparent noise when viewed at 100%, the sensor with the higher pixel count will produce a cleaner
looking final print. This is because the noise gets enlarged less for the higher pixel count sensor (for a given print size), therefore this noise has a higher frequency and thus appears finer grained.

COST OF PRODUCING DIGITAL SENSORS


The cost of a digital sensor rises dramatically as its area increases. This means that a sensor with twice the area will cost more than twice as much, so you are effectively paying more per unit "sensor real estate" as you move to larger
sizes.

Silicon Wafer (divided into small sensors)    Silicon Wafer (divided into large sensors)

One can understand this by looking at how manufacturers make their digital sensors. Each sensor is cut from a larger sheet of silicon material called a wafer, which may contain thousands of individual chips. Each wafer is extremely
expensive (thousands of dollars), therefore fewer chips per wafer result in a much higher cost per chip. Furthermore, the chance of an irreparable defect (too many hot pixels or otherwise) ending up in a given sensor increases with sensor
area, therefore the percentage of usable sensors goes down with increasing sensor area (yield per wafer). Assuming these factors (chips per wafer and yield) are most important, costs increase proportional to the square of sensor area (a
sensor 2X as big costs 4X as much). Real-world manufacturing has a more complicated size versus cost relationship, but this gives you an idea of skyrocketing costs.
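A toy model of the chips-per-wafer and yield argument is sketched below; every number is hypothetical and chosen only to show the scaling trend:

import math

def cost_per_good_sensor(sensor_area_mm2, wafer_cost=5000.0,
                         wafer_diameter_mm=300, defects_per_mm2=0.001):
    """Cost of one usable chip, given chips per wafer and a simple yield model."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    chips_per_wafer = wafer_area / sensor_area_mm2        # ignores edge losses
    yield_fraction = math.exp(-defects_per_mm2 * sensor_area_mm2)
    return wafer_cost / (chips_per_wafer * yield_fraction)

aps_c = 22.2 * 14.8          # ~1.6X crop sensor area in mm^2
full_frame = 36.0 * 24.0
print(f"APS-C:      ~${cost_per_good_sensor(aps_c):.0f}")        # far cheaper
print(f"Full frame: ~${cost_per_good_sensor(full_frame):.0f}")   # several times more
# Roughly 2.6X the area ends up costing ~4-5X as much in this toy model.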
This is not to say though that certain sized sensors will always be prohibitively expensive; their price may eventually drop, but the relative cost of a larger sensor is likely to remain significantly more expensive (per unit area) when
compared to some smaller size.

OTHER CONSIDERATIONS
Some lenses are only available for certain sensor sizes (or may not work as intended otherwise), which might also be a consideration if these help your style of photography. One notable type is tilt/shift lenses, which allow one to
increase (or decrease) the apparent depth of field using the tilt feature. Tilt/shift lenses can also use shift to control perspective and reduce (or eliminate) converging vertical lines caused by aiming the camera above or below the horizon
(useful in architectural photography). Furthermore, fast ultra-wide angle lenses (f/2.8 or larger) are currently only available for 35 mm and larger sensors, which may be a deciding factor if needed in sports or photojournalism.

CONCLUSIONS: OVERALL IMAGE DETAIL & COMPETING FACTORS


Depth of field is much shallower for larger format sensors, however one could also use a smaller aperture before reaching the diffraction limit (for your chosen print size and sharpness criteria). So which option has the potential to
produce the most detailed photo? Larger sensors (and correspondingly higher pixel counts) undoubtedly produce more detail if you can afford to sacrifice depth of field. On the other hand, if you wish to maintain the same depth of
field, larger sensor sizes do not necessarily have a resolution advantage. Further, the diffraction-limited depth of field is the same for all sensor sizes. In other words, if one were to use the smallest aperture before diffraction
became significant, all sensor sizes would produce the same depth of field-- even though the diffraction limited aperture will be different.
Technical Notes:
This result assumes that your pixel size is comparable to the size of the diffraction limited airy disk for each sensor in question, and that each lens is of comparable quality. Furthermore, the tilt lens feature is far more common in larger format cameras-- allowing one to change the angle of the focal plane and therefore increase the apparent depth of field.
Another important result is that if depth of field is the limiting factor, the required exposure time increases with sensor size for the same sensitivity. This factor is probably most relevant to macro and nightscape photography, as
these both may require a large depth of field and reasonably short exposure time. Note that even if photos can be taken handheld in a smaller format, those same photos may not necessarily be taken handheld in the larger format.
On the other hand, exposure times may not necessarily increase as much as one might initially assume because larger sensors generally have lower noise (and can thus afford to use a higher sensitivity ISO setting while maintaining similar
perceived noise).
Ideally, perceived noise levels (at a given print size) generally decrease with larger digital camera sensors (regardless of pixel size).
No matter what the pixel size, larger sensors unavoidably have more light-gathering area. Theoretically, a larger sensor with smaller pixels will still have lower apparent noise (for a given print size) than a smaller sensor with larger pixels
(and a resulting much lower total pixel count). This is because noise in the higher resolution camera gets enlarged less, even if it may look noisier at 100% on your computer screen. Alternatively, one could conceivably average adjacent
pixels in the higher pixel count sensor (thereby reducing random noise) while still achieving the resolution of the lower pixel count sensor. This is why images downsized for the web and small prints look so noise-free.
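A tiny simulation illustrates the averaging argument (the numbers are arbitrary): averaging groups of four noisy pixels -- the statistical equivalent of downsampling 2x2 blocks when the noise is uncorrelated -- roughly halves the random noise:

import random, statistics

random.seed(0)
true_value, noise_sigma = 100.0, 10.0
high_res = [random.gauss(true_value, noise_sigma) for _ in range(400 * 400)]

# Average groups of four pixels (statistically equivalent to 2x2 binning
# when the noise is uncorrelated between pixels).
downsampled = [sum(high_res[i:i + 4]) / 4 for i in range(0, len(high_res), 4)]

print("noise before averaging:", round(statistics.stdev(high_res), 2))    # ~10
print("noise after averaging: ", round(statistics.stdev(downsampled), 2)) # ~5
# Random noise drops by sqrt(4) = 2, which is why downsized web images and
# small prints from high pixel count sensors look so much cleaner.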
Technical Notes:
This all assumes that differences in microlens effectiveness and pixel spacing are negligible for different sensor sizes. If pixel spacing has to remain constant (due to read-out and other circuitry on the chip), then higher pixel densities will result in less light gathering area unless the microlenses can compensate for this loss. Additionally, this ignores the impact of fixed pattern or
dark current noise, which may vary significantly depending on camera model and read-out circuitry.

Overall: larger sensors generally provide more control and greater artistic flexibility, but at the cost of requiring larger lenses and more expensive equipment. This flexibility allows one to create a shallower depth of field than
possible with a smaller sensor (if desired), but yet still achieve a comparable depth of field to a smaller sensor by using a higher ISO speed and smaller aperture (or when using a tripod).

26. QUALITY: MTF, RESOLUTION & CONTRAST -


Lens quality is more important now than ever, due to the ever-increasing number of megapixels found in today's digital cameras. Frequently, the resolution of your digital photos is actually limited by the camera's lens -- and not by the
resolution of the camera itself. However, deciphering MTF charts and comparing the resolution of different lenses can be a science unto itself. This tutorial gives an overview of the fundamental concepts and terms used for assessing lens
quality. At the very least, hopefully it will cause you to think twice about what's important when purchasing your next digital camera or lens.

RESOLUTION & CONTRAST


Everyone is likely to be familiar with the concept of image resolution, but unfortunately, too much emphasis is often placed on this single metric. Resolution only describes how much detail a lens is capable of capturing -- and not
necessarily the quality of the detail that is captured. Other factors therefore often contribute much more to our perception of the quality and sharpness of a digital image.
To understand this, let's take a look at what happens to an image when it passes through a camera lens and is recorded at the camera's sensor. To make things simple, we'll use images composed of alternating black and white lines ("line
pairs"). Beyond the resolution of your lens, these lines are of course no longer distinguishable:

High Resolution Line Pairs  —>  Lens  —>  Unresolved Line Pairs

Example of line pairs which are smaller than the resolution of a camera lens.
However, something that's probably less well understood is what happens to other, thicker lines. Even though they're still resolved, these progressively deteriorate in both contrast and edge clarity (see sharpness: resolution and acutance)
as they become finer:
Progressively Finer Lines  —>  Lens  —>  Progressively Less Contrast & Edge Definition

For two lenses with the same resolution, the apparent quality of the image will therefore be mostly determined by how well each lens preserves contrast as these lines become progressively narrower. However, in order to make a fair
comparison between lenses we need to establish a way to quantify this loss in image quality...

MTF: MODULATION TRANSFER FUNCTION


A Modulation Transfer Function (MTF) quantifies how well a subject's regional brightness variations are preserved when they pass through a camera lens. The example below illustrates an MTF curve for a perfect* lens:
*A perfect lens is a lens that is limited in resolution and contrast only by diffraction.
See tutorial on diffraction in photography for a background on this topic.

MTF chart for a perfect lens: MTF versus Increasing Line Pair Frequency —>, falling to zero at the Maximum Resolution (Diffraction Limit).

Note: The spacing between black and white lines has been exaggerated to improve visibility.
MTF curve assumes a circular aperture; other aperture shapes will produce slightly different results.
An MTF of 1.0 represents perfect contrast preservation, whereas values less than this mean that more and more contrast is being lost -- until an MTF of 0, where line pairs can no longer be distinguished at all. This resolution limit is an
unavoidable barrier with any lens; it only depends on the camera lens aperture and is unrelated to the number of megapixels. The figure below compares a perfect lens to two real-world examples:

MTF versus Increasing Line Pair Frequency —>
Very High Quality Camera Lens (close to the diffraction limit)    Low Quality Camera Lens (far from the diffraction limit)

Comparison between an ideal diffraction-limited lens (blue line) and real-world camera lenses.
The line pair illustration below the graph does not apply to the perfect lens.
Move your mouse over each of the labels to see how high and low quality lenses often differ.
The blue line above represents the MTF curve for a perfect "diffraction limited" lens. No real-world lens is limited only by diffraction, although high-end camera lenses can get much closer to this limit than lower quality lenses.
Line pairs are often described in terms of their frequency: the number of lines which fit within a given unit length. This frequency is therefore usually expressed in terms of "LP/mm" -- the number of line pairs (LP) that are concentrated
into a millimeter (mm). Alternatively, sometimes this frequency is instead expressed in terms of line widths (LW), where two LW's equals one LP.
The highest line frequency that a lens can reproduce without losing more than 50% of the MTF ("MTF-50") is an important number, because it correlates well with our perception of sharpness. A high-end lens with an MTF-50 of 50
LP/mm will appear far sharper than a lower quality lens with an MTF-50 of 20 LP/mm, for example (presuming that these are used on the same camera and at the same aperture; more on this later).
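In case it helps to see the definition numerically, here is a minimal sketch of how MTF is computed from a recorded line pair pattern; the intensity values are hypothetical:

def modulation(intensities):
    """Contrast of a repeating pattern: (max - min) / (max + min)."""
    return (max(intensities) - min(intensities)) / (max(intensities) + min(intensities))

original = [255, 0, 255, 0]        # perfect black and white line pairs
recorded = [200, 55, 200, 55]      # the same pattern after passing through a lens

mtf = modulation(recorded) / modulation(original)
print(round(mtf, 2))               # -> 0.57: about 57% of the contrast survives
# The line frequency at which this ratio falls to 0.5 is the "MTF-50" point.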
However, the above MTF versus frequency chart is not normally how lenses are compared. Knowing just the (i) maximum resolution and (ii) MTF at perhaps two different line frequencies is usually more than enough information. What
often matters more is knowing how the MTF changes depending on the distance from the center of your image.
The MTF is usually measured along a line leading out from the center of the image and into a far corner, for a fixed line frequency (usually 10-30 LP/mm). These lines can either be parallel to the direction leading away from the center
(sagittal) or perpendicular to this direction (meridional). The example below shows how these lines might be measured and shown on an MTF chart for a full frame 35mm camera:

MTF chart layout: MTF versus Distance From Center [mm], with separate curves for Meridional (Circular) Line Pairs and Sagittal (Radial) Line Pairs.

Detail at the center of an image will virtually always have the highest MTF, and positions further from the center will often have progressively lower MTF values. This is why the corners of camera lenses are virtually always the softest
and lowest quality portion of your photos. We'll discuss why the sagittal and meridional lines diverge later.

HOW TO READ AN MTF CHART


Now we can finally put all of the above concepts into practice by comparing the properties of a zoom lens with a prime lens:

Canon 16-35mm f/2.8L II Zoom Lens (zoom set at 35mm)    Canon 35mm f/1.4L Prime Lens
(MTF versus Distance from Image Center [mm] for each lens)

On the vertical axis, we have the MTF value from before, with 1.0 representing perfect reproduction of line pairs, and 0 representing line pairs that are no longer distinguished from each other. On the horizontal axis, we have the distance
from the center of the image, with 21.6 mm being the far corner on a 35 mm camera. For a 1.6X cropped sensor, you can ignore everything beyond 13.5 mm. Further, anything beyond about 18 mm with a full frame sensor will only be
visible in the extreme corners of the photo:
Full Frame 35 mm Sensor 1.6X Cropped Sensor

Note: For a 1.5X sensor, the far corner is at 14.2 mm, and the far edge is at 11.9 mm.
See the tutorial on digital camera sensor sizes for more on how these affect image quality.
All of the different looking lines in the above MTF charts can at first be overwhelming; the key is to look at them individually. Each line represents a separate MTF under different conditions. For example, one line might represent MTF
values when the lens is at an aperture of f/4.0, while another might represent MTF values at f/8.0. A big hurdle with understanding how to read an MTF chart is learning what each line refers to.
Each line above has three different styles: thickness, color and type. Here's a breakdown of what each of these represents:

Line Thickness:    Bold —> 10 LP/mm (small-scale contrast)    Thin —> 30 LP/mm (resolution or fine detail)
Line Color:    Blue —> Aperture at f/8.0    Black —> Aperture wide open
Line Type:    Dashed —> Meridional (concentric) line pairs    Solid —> Sagittal (radial) line pairs

Since a given line can have any combination of thickness, color and type, the above MTF chart has a total of eight different types of lines. For example, a curve that is bold, blue and dashed would describe the MTF of meridional 10
LP/mm lines at an aperture of f/8.0.
Black Lines. These are most relevant when you are using your lens in low light, need to freeze rapid movement, or need a shallow depth of field. The MTF of black lines will almost always be a worst case
scenario (unless you use unusually small apertures).

In the above example, black lines unfortunately aren't a fair apples to apples comparison, since a wide open aperture is different for each of the above lenses (f/2.8 on the zoom vs f/1.4 on the prime). This is the main reason why the black
lines appear so much worse for the prime lens. However, given that the prime lens has such a handicap, it does quite admirably -- especially at 10 LP/mm in the center, and at 30 LP/mm toward the edges of the image. It's therefore highly
likely that the prime lens will outperform the zoom lens when they're both at f/2.8, but we cannot say for sure based only on the above charts.
Blue Lines. These are most relevant for landscape photography, or other situations where you need to maximize depth of field and sharpness. They are also more useful for comparisons because blue lines are always at the same aperture: f/8.0.

In the above example, the prime lens has a better MTF at all positions, for both high and low frequency details (30 and 10 LP/mm). The advantage of the prime lens is even more pronounced towards the outer regions of the camera's
image.
Bold vs. Thin Lines. Bold lines describe the amount of "pop" or small-scale contrast, whereas thin lines describe finer details or resolution. Bold lines are often a priority since high values can mean that
your images will have a more three dimensional look, similar to what happens when performing local contrast enhancement.

In the above example, both lenses have similar contrast at f/8.0, although the prime lens is a little better here. The zoom lens barely loses any contrast when used wide open compared to at f/8.0. On the other hand, the prime lens loses
quite a bit of contrast when going from f/8.0 to f/1.4, but this is probably because f/1.4-f/8.0 is a much bigger change than f/2.8-f/8.0.
ASTIGMATISM: SAGITTAL vs. MERIDIONAL LINES
Dashed vs. Solid Lines. At this point you're probably wondering: why show the MTF for both sagittal ("S") and meridional ("M") line pairs? Wouldn't these be the same? Yes, at the image's direct center they're always identical.
However, things become more interesting progressively further from the center. Whenever the dashed and solid lines begin to diverge, this means that the amount of blur is not equal in all directions. This quality-reducing artifact is called
an "astigmatism," as illustrated below:

Illustration labels: Original Image;  Astigmatism: MTF in S > M;  Astigmatism: MTF in M > S;  No Astigmatism: MTF in M = S

Move your mouse over the labels on the image to the right to see the effect of astigmatism.
S = sagittal lines, M = meridional lines
Note: Technically, the S above will have a slightly better MTF because it is closer to the center of the image; however, for the purposes of this example we're assuming that M & S are at similar positions.
When the MTF in S is greater than in M, objects are blurred primarily along lines radiating out from the center of the image. In the above example, this causes the white dots to appear to streak outward from the center of the image, almost
as if they had motion blur. Similarly, objects are blurred in the opposite (circular) direction when the MTF in M is greater than in S. Many of you reading this tutorial right now might even be using eye glasses that correct for an
astigmatism...
Technical Note: With wide angle lenses, M lines are much more likely to have a lower MTF than S lines, partly because these try to preserve a rectilinear image projection. Therefore, as the angle of view becomes wider, subjects near the periphery become progressively more stretched/distorted in directions leading away from the center of the image. A wide angle lens with
significant barrel distortion can therefore achieve a better MTF since objects at the periphery are stretched much less than they would be otherwise. However, this is usually an unacceptable trade-off with architectural photography.

In the MTF charts for the Canon zoom versus prime lens from before, both lenses begin to exhibit pronounced astigmatism at the very edges of the image. However, with the prime lens, something interesting happens: the type of
astigmatism reverses when comparing the lens at f/1.4 versus at f/8.0. At f/8.0, the lens primarily blurs in the radial direction, which is a common occurrence. However, at f/1.4 the prime lens primarily blurs in a circular direction, which is
much less common.
What does astigmatism mean for your photos? Probably the biggest implication, other than the unique appearance, is that standard sharpening tools may not work as intended. These tools assume that blur is equal in all directions, so
you might end up over-sharpening some edges, while leaving other edges still looking blurry. Astigmatism can also be problematic with photos containing stars or other point light sources, since this will make the asymmetric blur more
apparent.

MTF & APERTURE: FINDING THE "SWEET SPOT" OF A LENS


The MTF of a lens generally increases for successively narrower apertures, then reaches a maximum for intermediate apertures, and finally declines again for very narrow apertures. The figure below shows the MTF-50 for various
apertures on a high-quality lens:

The aperture corresponding to the maximum MTF is the so-called "sweet spot" of a lens, since images will generally have the best sharpness and contrast at this setting. On a full frame or cropped sensor camera, this sweet spot is usually
somewhere between f/8.0 and f/16, depending on the lens. The location of this sweet spot is also independent of the number of megapixels in your camera.
Technical Notes:

• At large apertures, resolution and contrast are generally limited by lens aberrations.
An aberration is when imperfect lens design causes a point light source in the image not to converge onto a point on your camera's sensor.
• At small apertures, resolution and contrast are generally limited by diffraction.
Unlike aberrations, diffraction is a fundamental physical limit caused by the scattering of light, and is not necessarily any fault of the lens design.
• High and low quality lenses are therefore very similar when used at small apertures
(such as f/16-32 on a full frame or cropped sensor).
• Large apertures are where high quality lenses really stand out, because the materials and engineering of the lens are much more important. In fact, a perfect lens would not even have a "sweet spot";
the optimal aperture would just be wide open.
However, one should not conclude that the optimal aperture setting is completely independent of what is being photographed. The sweet spot at the center of the image may not correspond with where the edges and corners of the image
look their best; this often requires going to an even narrower aperture. Further, this all assumes that your subject is in perfect focus; objects outside the depth of field will likely still improve in sharpness even when your f-stop is larger
than the so-called sweet spot.

COMPARING DIFFERENT CAMERAS & LENS BRANDS


A big problem with the MTF concept is that it's not standardized. Comparing different MTF charts can therefore be quite difficult, and in some cases even impossible. For example, MTF charts by Canon and Nikon cannot be directly
compared, because Canon uses theoretical calculations while Nikon uses measurements.
However, even if one performed their own MTF tests, they'd still run into problems. A typical self-run MTF chart actually depicts the net total MTF of your camera's optical system -- and not the MTF of the lens alone. This net
MTF represents the combined result from the lens, camera sensor and RAW conversion, in addition to any sharpening or other post-processing. MTF measurements will therefore vary depending on which camera is used for the
measurement, or the type of software used in the RAW conversion. It's therefore only practical to compare MTF charts that were measured using identical methodologies.
Cropped vs. Full Frame Sensors. One needs to be extra careful when comparing MTF charts amongst cameras with different sensor sizes. For example, an MTF curve at 30 LP/mm on a full frame camera is not equivalent to a different
30 LP/mm MTF curve on a 1.6X cropped sensor. The cropped sensor would instead need to show a curve at 48 LP/mm for a fair comparison, because the cropped sensor gets enlarged more when being made into the same size print.
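The conversion itself is just multiplication by the crop factor, as in this small sketch:

def equivalent_line_frequency(lp_per_mm_full_frame, crop_factor):
    """Line frequency a cropped sensor must resolve for the same print detail."""
    return lp_per_mm_full_frame * crop_factor

print(equivalent_line_frequency(30, 1.6))   # -> 48.0 LP/mm on a 1.6X cropped sensor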

The diversity of sensor sizes is why some have started listing the line frequency in terms of the picture or image height (LP/PH or LP/IH), as opposed to using an absolute unit like a millimeter. A line frequency of 1000 LP/PH, for
example, has the same appearance at a given print size -- regardless of the size of the camera's sensor. One would suspect that part of the reason manufacturers keep showing MTF charts at 10 and 30 LP/mm for DX, EF-S and other
cropped sensor lenses is because this makes their MTF charts look better.

MTF CHART LIMITATIONS


While MTF charts are an extremely powerful tool for describing the quality of a lens, they still have many limitations. In fact, an MTF chart says nothing about:
• Color quality and chromatic aberrations
• Image distortion
• Vignetting (light fall-off toward the edges of an image)
• Susceptibility to camera lens flare
Furthermore, other factors such as the condition of your equipment and your camera technique can often have much more of an impact on the quality of your photos than small differences in the MTF. Some of these quality-reducing
factors might include:
• Focusing accuracy
• Camera shake
• Dust on your camera's digital sensor
• Micro abrasions, moisture, fingerprints or other coatings on your lens
Most importantly, even though MTF charts are amazingly sophisticated and descriptive tools -- with lots of good science to back them up -- ultimately nothing beats simply visually inspecting an image on-screen or in a print. After all,
pictures are made to look at, so that's all that really matters at the end of the day. It can often be quite difficult to discern whether an image will look better on another lens based on an MTF, because there's usually many competing factors:
contrast, resolution, astigmatism, aperture, distortion, etc. A lens is rarely superior in all of these aspects at the same time. If you cannot tell the difference between shots with different lenses used in similar situations, then any MTF
discrepancies probably don't matter.
Finally, even if one lens's MTF is indeed worse than another's, sharpening and local contrast enhancement can often make this disadvantage imperceptible in a print -- as long as the original quality difference isn't too great.

27. HDR: HIGH DYNAMIC RANGE PHOTOGRAPHY -


High dynamic range (HDR) images enable photographers to record a greater range of tonal detail than a given camera could capture in a single photo. This opens up a whole new set of lighting possibilities which one might have
previously avoided—for purely technical reasons. The new "merge to HDR" feature of Photoshop allows the photographer to combine a series of bracketed exposures into a single image which encompasses the tonal detail of the entire
series. There is no free lunch however; trying to broaden the tonal range will inevitably come at the expense of decreased contrast in some tones. Learning to use the merge to HDR feature in Photoshop can help you make the most of
your dynamic range under tricky lighting—while still balancing this trade-off with contrast.

MOTIVATION: THE DYNAMIC RANGE DILEMMA


As digital sensors attain progressively higher resolutions, and thereby successively smaller pixel sizes, the one quality of an image which does not benefit is its dynamic range. This is particularly apparent in compact cameras with
resolutions near 8 megapixels, as these are more susceptible than ever to blown highlights or noisy shadow detail. Further, some scenes simply contain a greater brightness range than can be captured by current digital cameras-- of any
type.
The "bright side" is that nearly any camera can actually capture a vast dynamic range-- just not in a single photo. By varying the shutter speed alone, most digital cameras can change how much light they let in by a factor of 50,000 or
more. High dynamic range imaging attempts to utilize this characteristic by creating images composed of multiple exposures, which can far surpass the dynamic range of a single exposure.

WHEN TO USE HDR IMAGES


I would suggest only using HDR images when the scene's brightness distribution can no longer be easily blended using a graduated neutral density (GND) filter. This is because GND filters extend dynamic range while still maintaining
local contrast. Scenes which are ideally suited for GND filters are those with simple lighting geometries, such as the linear blend from dark to light encountered commonly in landscape photography (corresponding to the relatively dark
land transitioning into bright sky).
GND Filter Final Result

In contrast, a scene whose brightness distribution is no longer easily blended using a GND filter is the doorway scene shown below.

Underexposure    Brightness Distribution    Overexposure

We note that the above scene contains roughly three tonal regions with abrupt transitions at their edges-- therefore requiring a custom-made GND filter. If we were to look at this in person, we would be able to discern detail both inside
and outside the doorway, because our eyes would adjust to changing brightness. The goal of HDR use in this article is to better approximate what we would see with our own eyes through the use of a technique called tonal mapping.

INNER WORKINGS OF AN HDR FILE


Photoshop creates an HDR file by using the EXIF information from each of your bracketed images to determine their shutter speed, aperture and ISO settings. It then uses this information to assess how much light came from each image
region. Since this light may vary greatly in its intensity, Photoshop creates the HDR file using 32-bits to describe each color channel (as opposed to the usual 16 or 8-bits, as discussed in the tutorial on "Understanding Bit Depth"). The
real benefit is that HDR files use these extra bits to create a relatively open-ended brightness scale, which can adjust to fit the needs of your image. The important distinction is that these extra bits are used differently than the extra bits in
16-bit images, which instead just define tones more precisely (see tutorials on the "RAW File Format" and "Posterization"). We refer to the usual 8 and 16-bit files as being low dynamic range (LDR) images, relatively speaking.
The 32-bit HDR file format describes a greater dynamic range by using its bits to specify floating point numbers, also referred to as exponential notation. A floating point number is composed of a decimal number between 1 and 10 multiplied by some power of 10, such as 5.467x10^3, as opposed to the usual 0-255 (for 8-bit) or 0-65535 (for 16-bit) integer color specifications. This way, an image file can specify a brightness of 4,300,000,000 simply as 4.3x10^9, which would be too large even with 32-bit integers.
We see that the floating point notation certainly looks neater and more concise, but how does this help a computer? Why not just keep adding more bits to specify successively larger numbers, and therefore a larger dynamic range?
Recall that for ordinary LDR files, far more bits are used to distinguish lighter tones than darker tones (from the tutorial on gamma correction, tonal levels and exposure - to be added). As a result, as more bits are added, an exponentially
greater fraction of these bits are used to specify color more precisely, instead of extending dynamic range.
Representation of How Bits Are Allocated for Increasing Brightness

Note: Above representation is qualitative, and depends on other factors such as screen bit depth, monitor gamma, etc. The more closely spaced bits for brighter values are a result of the fact that ordinary 8 and 16-bit image files are gamma-encoded, which can actually help increase dynamic range for low-bit files; gamma-encoding just becomes more and more inefficient as the bit depth increases.
HDR files get around this LDR dilemma of diminishing returns by using floating point numbers which are proportional to the actual brightness values of the subject matter (gamma equals one, or linear). This ensures that bits are equally
spaced throughout the dynamic range, and not just concentrated in the brighter tones-- allowing for greater bit efficiency. Further, the use of floating point numbers ensures that all tones are recorded with the same relative precision, since numbers such as 2.576x10^3 and 8.924x10^9 each have the same number of significant figures (four), even though the second number is more than a million times larger.
Note: just as using a high bit depth image does not necessarily mean your image contains more color, a high dynamic range file does not guarantee greater dynamic range unless this is also present in the actual subject matter.
All of these extra bits provided by the HDR format are great, and effectively allow for a nearly infinite brightness range to be described. The problem is that your computer display (or the final photographic print) can only show a fixed
brightness scale. This tutorial therefore focuses on how to create and convert HDR files into an ordinary 8 or 16-bit image, which can be displayed on a monitor, or will look great as a photographic print. This process is also commonly
referred to as tonal mapping.

IN-FIELD PREPARATION
Since creating an HDR image requires capturing a series of identically-positioned exposures, a sturdy tripod is essential. Photoshop has a feature which attempts to align the images when the camera may have moved between shots; however, best results are achieved when this is not relied upon.
Make sure to take at least three exposures, although five or more is recommended for optimum accuracy. More exposures allow the HDR algorithm to better approximate how your camera translates light into digital values (a.k.a. the digital sensor's response curve)-- creating a more even tonal distribution. The doorway example is best suited to several intermediate exposures, in addition to the two shown previously.

Reference -1 Stops -2 Stops -3 Stops

It is essential that the darkest of these exposures includes no blown highlights in areas where you want to capture detail. The brightest exposure should show the darkest regions of the image with enough brightness that they are relatively
noise-free and clearly visible. Each exposure should be separated by one to two stops, and these are ideally set by varying the shutter speed (as opposed to aperture or ISO speed). Recall that each "stop" refers to a doubling (+1 stop) or
halving (-1 stop) of the light captured from an exposure.
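For example, a bracketed sequence spaced by whole stops can be set by repeatedly halving (or doubling) the shutter speed. The short sketch below assumes a hypothetical metered exposure of 1/8 s and reproduces the spacing of the four-exposure doorway sequence shown above:

```python
base = 1.0 / 8                   # hypothetical metered ("Reference") shutter speed, in seconds
for stop in (0, -1, -2, -3):     # Reference, -1, -2 and -3 stops, as in the sequence above
    shutter = base * 2 ** stop   # each -1 stop halves the light captured
    print(f"{stop:+d} stop: 1/{1 / shutter:.0f} s")
```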
We also note another disadvantage of HDR images: they require relatively static subject matter, due to the necessity of several separate exposures. Our previous ocean sunset example would therefore not be well-suited for the HDR
technique, as the waves would have moved significantly between each exposure.

CREATING A 32-BIT HDR FILE IN PHOTOSHOP


Here we use Adobe Photoshop to convert the sequence of exposures into a single image, which uses tonal mapping to approximate what we would see with our eye. Before tonal mapping can be performed, we first need to combine all
exposures into a single 32-bit HDR file.
Open the HDR tool (File>Automate>Merge to HDR…), and load all photographs in the exposure sequence; for this example it would be the four images shown in the previous section. If your images were not taken on a stable tripod,
this step may require checking "Attempt to Automatically Align Source Images" (which greatly increases processing time). After pressing OK, you will soon see a "Computing Camera Response Curves" message.

Once your computer has stopped processing, it will show a window with their combined histogram. Photoshop has estimated the white point, but this value often clips the highlights. You may wish to move the white point slider to the
rightmost edge of the histogram peaks in order to see all highlight detail. This value is for preview purposes only and will require setting more precisely later. After pressing OK, this leaves you with a 32-bit HDR image, which can now
be saved if required. Note how the image may still appear quite dark; only once it has been converted into a 16 or 8-bit image (using tonal mapping) will it begin to look more like the desired result.
At this stage, very few image processing functions can be applied to a 32-bit HDR file, so it is of little use other than for archival purposes. One function which is available is exposure adjustment (Image>Adjustments>Exposure). You
may wish to try increasing the exposure to see any hidden shadow detail, or decreasing the exposure to see any hidden highlight detail.

USING HDR TONAL MAPPING IN PHOTOSHOP


Here we use Adobe Photoshop to convert the 32-bit HDR image into a 16 or 8-bit LDR file using tonal mapping. This requires interpretive decisions about the type of tonal mapping, depending on the subject matter and brightness
distribution within the photograph.
Convert into a regular 16-bit image (Image>Mode>16 Bits/Channel…) and you will see the HDR Conversion tool. The tonal mapping method can be chosen from one of four options, described below.

Exposure and Gamma: This method lets you manually adjust the exposure and gamma, which serve as the equivalent of brightness and contrast adjustment, respectively.

Highlight Compression: This method has no options and applies a custom tonal curve, which greatly reduces highlight contrast in order to brighten and restore contrast in the rest of the image.

Equalize Histogram: This method attempts to redistribute the HDR histogram into the contrast range of a normal 16 or 8-bit image. It uses a custom tonal curve which spreads out histogram peaks so that the histogram becomes more homogeneous. It generally works best for image histograms which have several relatively narrow peaks with no pixels in between.

Local Adaptation: This is the most flexible method and probably the one which is of most use to photographers. Unlike the other three methods, it changes how much it brightens or darkens regions on a per-pixel basis (similar to local contrast enhancement). This has the effect of tricking the eye into thinking that the image has more contrast, which is often critical in contrast-deprived HDR images. This method also allows changing the tonal curve to better suit the image.

Before using any of the above methods, one may first wish to set the black and white points on the image histogram sliders (see "Using Levels in Photoshop" for a background on this concept). Click on the double arrow next to "Toning
Curve and Histogram" to show the image histogram and sliders.
The remainder of this tutorial focuses on settings related to the "local adaptation" method, as this is likely the most-used, and provides the greatest degree of flexibility.

CONCEPT: TONAL HIERARCHY & IMAGE CONTRAST


In contrast to the other three conversion methods, the local adaptation method does not necessarily retain the overall hierarchy of tones. It translates pixel intensities not just with a single tonal curve, but instead also based on the
surrounding pixel values. This means that unlike using a tonal curve, tones on the histogram are not just stretched and compressed, but may instead cross positions. Visually, this would mean that some part of the subject matter which
was initially darker than some other part could later acquire the same brightness or become lighter than that other part-- even if only by a small amount.

Underexposed Photo / Overexposed Photo / Final Composite that Violates Large-Scale Tonal Hierarchy

A clear example where global tonal hierarchy is violated is the example used in the page on using a GND to extend dynamic range (although this is not how local adaptation works). In this example, even though the foreground sea
foam and rock reflections are actually darker than the distant ocean surface, the final image renders the distant ocean as being darker. The key concept here is that over larger image regions our eyes adjust to changing brightness
(such as looking up at a bright sky), while over smaller distances our eyes do not. Mimicking this characteristic of vision can be thought of as a goal of the local adaptive method-- particularly for brightness distributions which are
more complex than the simple vertical blend in the ocean sunset above.
An example of a more complex brightness distribution is shown below for three statue images. We refer to contrast over larger image distances as global contrast, whereas contrast changes over smaller image distances are termed local
contrast. The local adaptation method attempts to maintain local contrast, while decreasing global contrast (similar to that performed with the ocean sunset example).

Left: Original Image. Middle: High Global Contrast, Low Local Contrast. Right: Low Global Contrast, High Local Contrast.

The above example illustrates visually how local and global contrast impact an image. Note how the large-scale (global) patches of light and dark are exaggerated for the case of high global contrast. Conversely, for the case of low
global contrast the front of the statue's face is virtually the same brightness as its side.
The original image looks fine since all tonal regions are clearly visible, and shown with sufficient contrast to give it a three-dimensional appearance. Now imagine that we started with the middle image, which would be an ideal candidate
for HDR conversion. Tonal mapping using local adaptation would likely produce an image similar to the far right image (although perhaps not as exaggerated), since it retains local contrast while still decreasing global contrast (thereby
retaining texture in the darkest and lightest regions).
HDR CONVERSION USING LOCAL ADAPTATION
The distance which distinguishes between local and global contrast is set using the radius value. Radius and threshold are similar to the settings for an unsharp mask used for local contrast enhancement. A high threshold improves local
contrast, but also risks inducing halo artifacts, whereas too low of a radius can make the image appear washed out. For any given image, it is recommended to adjust each of these to see their effect, since their ideal combination varies
depending on image content.
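Photoshop does not document its exact algorithm, but the general idea behind local-adaptation-style tonal mapping can be sketched as follows: a heavily blurred copy of the log luminance serves as the local "adaptation level" (its blur width playing the role of the radius setting), global contrast is compressed in that base layer only, and the remaining local detail is added back unchanged. The Python function below is a minimal sketch of this idea under those assumptions, not Photoshop's implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_tonemap(hdr_luminance, radius=30, compression=0.4):
    """Minimal local tone-mapping sketch (not Photoshop's algorithm).

    hdr_luminance: 2-D array of linear HDR luminance values (> 0)
    radius:        blur width in pixels, separating "local" from "global" contrast
    compression:   how strongly the global (large-scale) contrast is reduced
    """
    log_lum = np.log10(hdr_luminance + 1e-9)
    base = gaussian_filter(log_lum, sigma=radius)       # large-scale brightness only
    detail = log_lum - base                             # local contrast, left untouched
    compressed = compression * base + detail            # shrink the global range, keep detail
    out = 10.0 ** compressed
    return (out - out.min()) / (out.max() - out.min())  # normalize for an 8 or 16-bit display
```

Using a plain Gaussian blur for the base layer is exactly what tends to produce the halo artifacts mentioned above; practical implementations usually substitute an edge-aware filter (such as a bilateral filter) for the blur.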
In addition to the radius and threshold values, images almost always require adjustments to the tonal curve. This technique is identical to that described in the Photoshop curves tutorial, where small and gradual changes in the curve's
slope are nearly always ideal. This curve is shown for our doorway example below, yielding the final result.

Photoshop CS2 Tool / Final Result Using Local Adaptation Method

HDR images which have been converted into 8 or 16-bit often require touching up in order to improve their color accuracy. Subtle use of levels and saturation can drastically improve problem areas in the image. In general, regions
which have increased in contrast (a large slope in the tonal curve) will exhibit an increase in color saturation, whereas the opposite occurs for a decrease in contrast. Changes in saturation may sometimes be desirable when brightening
shadows, but in most other instances this should be avoided.
The main problem with the local adaptation method is that it cannot distinguish between incident and reflected light. As a result, it may unnecessarily darken naturally white textures and brighten darker ones. Be aware of this when
choosing the radius and threshold settings so that this effect can be minimized.

TIP: USING HDR TO REDUCE SHADOW NOISE


Even if your scene does not require more dynamic range, your final photo may still benefit from a side effect of the HDR technique: decreased shadow noise. Ever noticed how digital images always have more noise in the shadows than in brighter tones?
This is because the image's signal to noise ratio is higher where the image has collected more of a light signal. You can take advantage of this by combining a properly exposed image with one which has been overexposed. Photoshop
always uses the most exposed image to represent a given tone—thereby collecting more light in the shadow detail (but without overexposing).
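The snippet below is a rough sketch of that idea rather than Photoshop's actual merging code: for every pixel it keeps the most exposed frame that is not clipped, then divides by the exposure time so all frames land on a common linear brightness scale. The clipping threshold and the assumption of linear (RAW-like) input values are simplifications.

```python
import numpy as np

def merge_for_low_noise(images, exposure_times, clip=0.99):
    """Prefer the most exposed, unclipped frame for each pixel (sketch only).

    images:          list of linear RGB arrays with values in 0..1
    exposure_times:  list of shutter speeds in seconds, in the same order
    """
    order = np.argsort(exposure_times)[::-1]        # most exposed frame first
    out = np.zeros_like(images[0], dtype=float)
    filled = np.zeros(images[0].shape, dtype=bool)
    for i in order:
        img, t = images[i], exposure_times[i]
        usable = (img < clip) & ~filled             # unclipped and not yet assigned
        out[usable] = (img / t)[usable]             # divide by exposure -> relative radiance
        filled |= usable
    # Pixels clipped in every frame: fall back to the least exposed frame.
    least = order[-1]
    out[~filled] = (images[least] / exposure_times[least])[~filled]
    return out
```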

RECOMMENDATIONS
Keep in mind that HDR images are extremely new-- particularly in the field of digital photography. Existing tools are therefore likely to improve significantly; there is not currently, and may never be, an automated single-step process
which converts all HDR images into those which look pleasing on screen, or in a print. Good HDR conversions therefore require significant work and experimentation in order to achieve realistic and pleasing final images.
Additionally, incorrectly converted or problematic HDR images may appear washed out after conversion. While re-investigating the conversion settings is recommended as the first corrective step, touch-up with local contrast
enhancement may also yield a more pleasing result.
As with all new tools, be careful not to overdo their use. Use care when violating the image's original tonal hierarchy; do not expect deep shadows to become nearly as light as a bright sky. In our doorway example, the sunlit building and sky are the brightest objects, and they stayed that way in our final image. Overdoing editing during HDR conversion can easily cause the image to lose its sense of realism. Furthermore, HDR should only be used when necessary;
best results can always be achieved by having good lighting to begin with.
Note: To answer a common email query, no photo within my gallery used the HDR technique. When necessary, I instead prefer to use linear and radial graduated neutral density filters to control drastically varying light. If used properly, these do not induce halo artifacts while still maintaining local contrast. Further, these have been a standard tool of landscape photographers for nearly a century. In some situations, however, I can certainly see where a photo would be unattainable without HDR.
28.TILT SHIFT LENSES: PERSPECTIVE CONTROL -
Tilt shift lenses enable photographers to transcend the normal restrictions of depth of field and perspective. Many of the optical tricks these lenses permit could not otherwise be reproduced digitally—making them a must for certain
landscape, architectural and product photography. The first part of this tutorial addresses the shift feature, and focuses on its use in digital SLR cameras for perspective control and panoramas. The second part focuses on using tilt shift
lenses to control depth of field.

OVERVIEW: TILT SHIFT MOVEMENTS


Shift movements enable the photographer to shift the location of the lens's imaging circle relative to the digital camera sensor. This means that the lens's center of perspective no longer corresponds to the image's center of perspective,
and produces an effect similar to only using a crop from the side of a correspondingly wider angle lens.
Tilt movements enable the photographer to tilt the plane of sharpest focus so that it no longer lies perpendicular to the lens axis. This produces a wedge-shaped depth of field whose width increases further from the camera. The tilt effect
therefore does not necessarily increase depth of field—it just allows the photographer to customize its location to better suit their subject matter.

CONCEPT: LENS IMAGING CIRCLE


The image captured at your camera's digital sensor is in fact just a central rectangular crop of the circular image being captured by your lens (the "imaging circle"). With most lenses this circle is designed to extend just beyond what is
needed by the sensor. Shift lenses, by contrast, actually project a much larger imaging circle than is ordinarily required—thereby allowing the photographer to "shift" this imaging circle to selectively capture a given rectangular portion.

[Interactive figure: applying left or right shift. Ordinary Camera Lens / Lens Capable of Shift Movements.]

Above comparison shown for 11 mm shift movements on a 35 mm SLR camera; actual image circles would be larger relative to the sensor for cameras with a crop factor (see tutorial on digital camera sensor sizes for more on this topic).
Shift movements have two primary uses: they enable photographers to change perspective or expand the angle of view (using multiple images). Techniques for each are discussed in subsequent sections. The above example would be more
useful for creating a panorama since the medium telephoto camera lens created a flat perspective.
The shift ability comes with an additional advantage: even when unshifted, these lenses will typically have better image quality at the edges of the frame—similar to using full frame 35 mm lenses on cameras with a crop factor. This
means less softness and vignetting, with potentially less pronounced distortion.
On the other hand, a lens capable of shift movements will need to be much larger and heavier than a comparable regular lens, assuming the same focal length and maximum aperture. Extreme shift movements will also expose regions of
the imaging circle with lower image quality, but this may not be any worse than what is always visible with an ordinary camera lens. Further, a 24 mm tilt shift lens is likely to be optically similar to an ordinary 16 mm lens due to a similar sized imaging circle. It is therefore likely to be surpassed in optical quality by an ordinary 24 mm lens, since wider angle lenses generally have poorer optical quality.

SHIFT MOVEMENTS FOR PERSPECTIVE CONTROL


Shift movements are typically used for perspective control to straighten converging vertical lines in architectural photography. When the camera is aimed directly at the horizon (the vanishing point below), vertical lines which are parallel
in person remain parallel in print:

Converging verticals arise whenever the camera lens (ie, center of the imaging circle) is aimed away from the horizon. The trick with a shifted lens is that it can capture an image which lies primarily above or below the horizon—even
though the center of the imaging circle still lies on or near the horizon. This effect changes the perspective.
Ordinary Lens / Lens Shifted for Perspective Control

The shifted lens gives the architecture much more of a sense of prominence and makes it appear more towering—as it does in person. This can be a very useful effect for situations where one cannot get sufficiently far from a building to
give it this perspective (such as would be the case when photographing buildings from the side of a narrow street).
Note that in the above example the vanishing point of perspective was not placed directly on the horizon, and therefore vertical lines are not perfectly parallel (although much more so than with the ordinary lens). Oftentimes a slight bit of
convergence is desirable, since perfectly parallel vertical lines can sometimes look overdone and unrealistic.

A similar perspective effect could be achieved using an ordinary lens and digital techniques. One way would be to use a wider angle lens and then only make a print of a cropped portion of
this, although this would sacrifice a substantial portion of the camera's megapixels.
A second way would be to stretch the image from the ordinary lens above using Photoshop's perspective control (so that it is shaped like an upside-down trapezoid).

The second method would retain more resolution, but would yield an image whose horizontal resolution progressively decreases toward the top. Either way, the shifted lens generally yields the best quality.
Technical Note: it is often asked whether digital perspective control achieves similar quality results as a shifted lens. Although the above digital techniques clearly sacrifice resolution, the question is whether this is necessarily any worse
than the softening caused by using the edge of the imaging circle for an optically poor tilt shift lens. In my experience, using a shifted lens is visibly better when using Canon's 45 mm and 90 mm tilt shift lenses. Canon's 24 mm tilt shift
lens is a closer call; if chromatic aberrations are properly removed I still find that the shifted lens is a little better.

SHIFT MOVEMENTS FOR SEAMLESS PANORAMAS


One can create digital panoramas by using a sequence of shifted photographs. This technique has the advantage of not moving the optical center of the camera lens, which means that one can avoid having to use a panoramic head to
prevent parallax error with foreground subject matter. Another potential benefit is that the final composite photo will retain the rectilinear perspective of the original lens.
The Canon and Nikon lenses can shift up to 11 mm and 11.5 mm, respectively, which describes how far the lens can physically move relative to the camera sensor (in each direction). Several common shift scenarios have been included
below to give a better feel for what 11 mm of shift actually means for photos. Since each lens can rotate on its axis, this shift could be applied in two directions:
Panorama Using Horizontal Shift Movements in Landscape Orientation
Full Frame 35 mm Sensor: Area Increase: 60%; Aspect Ratio: 2.42:1
Sensor with 1.6X Crop Factor: Area Increase: 100%; Aspect Ratio: 3:1

Wide Angle Using Horizontal Shift Movements in Portrait Orientation
Full Frame 35 mm Sensor: Area Increase: 90%; Aspect Ratio: 1.28:1
Sensor with 1.6X Crop Factor: Area Increase: 150%; Aspect Ratio: 1.66:1

note: all diagrams shown to scale for 11 mm shift; area increases rounded to nearest 5%
Note how cropped sensors have more to gain from shifting than full frame sensors. For panoramas, one can achieve dramatically wide aspect ratios of roughly 2.4:1 and 3:1 for full frame and cropped sensors, respectively, with substantially more resolution. Many more combinations of camera orientation, shift direction and sensor size can be explored using the calculator in the next section.
Shift can also be used in directions other than just up-down or left-right. The example below illustrates all combinations of shift in 30° increments for a 35 mm full frame sensor in landscape orientation. The megapixels of the resulting image are increased by roughly 3X compared to a single photo; with a 1.6X crop factor this would be roughly 5X.
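The numbers in the tables above (which round area increases to the nearest 5%) follow from simple geometry. The sketch below reproduces them, assuming 36x24 mm (full frame) and 22.5x15 mm (1.6X crop) sensor dimensions and 11 mm of shift applied to each side:

```python
# Geometry behind the shift-panorama figures above.
def horizontal_shift_gain(width, height, shift=11.0):
    new_width = width + 2 * shift                    # shift fully left and fully right
    area_increase = new_width / width - 1            # extra area relative to one frame
    return area_increase * 100, new_width / height   # percent gain, aspect ratio

for name, w, h in [("Full frame, landscape", 36.0, 24.0),
                   ("1.6X crop,  landscape", 22.5, 15.0),
                   ("Full frame, portrait ", 24.0, 36.0),
                   ("1.6X crop,  portrait ", 15.0, 22.5)]:
    gain, aspect = horizontal_shift_gain(w, h)
    print(f"{name}: +{gain:.0f}% area, {aspect:.2f}:1 aspect ratio")

# Shifting in every direction (the 30° increment example) covers roughly a
# (width + 22) x (height + 22) mm area:
for name, w, h in [("Full frame", 36.0, 24.0), ("1.6X crop", 22.5, 15.0)]:
    factor = (w + 22) * (h + 22) / (w * h)
    print(f"{name}: ~{factor:.1f}X the pixels of a single photo")
```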
Once captured, the stitching process is more straightforward since each photograph does not have to be corrected for perspective and lens distortion, and lens vignetting will not be uneven between images. Photoshop or another basic
image editing program could therefore be used to layer the images and align manually. Make sure to use manual or fixed exposure since vignetting can cause the camera to expose the shifted photos more than the unshifted photo—even if
the photos are exposed using a small aperture. This occurs because the camera's through the lens (TTL) metering is based on measurements with the lens wide open (smallest f-number), not the aperture used for exposure.
Alternatively, one could use photo stitching software on a series of shifted photographs to create a perspective control panorama. Such a panorama would require the lens to be shifted either up or down, and remain in that position for each
camera angle comprising the panorama.

TILT SHIFT LENS CALCULATOR


The tilt shift calculator below computes the angle of view encompassed by shift movements in up-down or left-right directions, along with other relevant values. This is more intended to give a better sense for the numerics of shift
movements than to necessarily be used in the field. This way, when your lens has markings for 5 and 10 mm of shift, you should be able to better visualize how this will impact the final image. The diagram within the calculator (on the
right) adjusts dynamically to illustrate your values.
[Interactive calculator: Tilt Shift Lens Calculator: Creating Panoramas. Inputs: camera sensor size, focal length of T/S lens (e.g., 45 mm), camera orientation, shift amount (e.g., 11 mm) and shift direction. Outputs: angle of view (horizontal x vertical), focal length if single photograph, and megapixel increase, with a diagram of the unshifted sensor area and the area including camera shifts.]
CF = crop factor of digital sensor, see tutorial on digital camera sensor sizes for more on this topic.
Calculator not intended for use in macro photography and assumes negligible distortion.

The output for "focal length if single photograph" is intended to give a feel for what focal length would be required, using an unshifted photo, in order to encompass the entire shifted angle of view. From this we can see that the imaging
circle of a 45 mm tilt shift lens actually covers an angle of view comparable to an ordinary 28 mm wide angle lens.
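That equivalence can be checked directly: treating the shifted coverage as one wider frame, the matching focal length scales with the ratio of the unshifted to the shifted width. A small sketch, assuming a 36 mm wide full-frame sensor and 11 mm of shift to either side:

```python
def equivalent_focal_length(focal_mm, sensor_width_mm=36.0, shift_mm=11.0):
    covered_width = sensor_width_mm + 2 * shift_mm   # 36 mm frame plus 11 mm each way
    return focal_mm * sensor_width_mm / covered_width

print(round(equivalent_focal_length(45)))   # ~28 mm, matching the figure quoted above
```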

AVAILABLE NIKON & CANON TILT SHIFT LENSES


Canon has four and Nikon has three mainstream tilt shift lens models available:
Canon Tilt Shift Lenses Nikon Tilt Shift Lenses

Canon 17 mm TS-E f/4L

Canon 24 mm TS-E f/3.5L II PC-E Nikkor 24 mm F3.5D ED

Canon 45 mm TS-E f/2.8 PC-E Nikkor 45 mm F2.8D ED

Canon 90 mm TS-E f/2.8 PC-E Nikkor 85 mm F2.8D ED

Calculations and diagrams above have been designed to represent the range of tilt and shift movements relevant for these lenses on the 35 mm and cropped camera formats.
Note that this tutorial only discusses the shift feature of a tilt shift lens; for part 2 visit:
Tilt Shift Lenses: Using Tilt to Control Depth of Field
Alternatively, for an overview of ordinary camera lenses, please visit:
Understanding Camera Lenses: Focal Length & Aperture

29.TILT SHIFT LENSES: DEPTH OF FIELD -


Tilt shift lenses enable photographers to transcend the normal restrictions of depth of field and perspective. Many of the optical tricks these lenses permit could not otherwise be reproduced digitally—making them a must for certain
landscape, architectural and product photography. This part of the tutorial addresses the tilt feature, and focuses on its use in digital SLR cameras for controlling depth of field. The first part of this tutorial focused on using tilt shift lenses
to control perspective and create panoramas.

OVERVIEW: TILT SHIFT MOVEMENTS


Shift movements enable the photographer to shift the location of the lens's imaging circle relative to the digital camera sensor. This means that the lens's center of perspective no longer corresponds to the image's center of perspective,
and produces an effect similar to only using a crop from the side of a correspondingly wider angle lens.
Tilt movements enable the photographer to tilt the plane of sharpest focus so that it no longer lies perpendicular to the lens axis. This produces a wedge-shaped depth of field whose width increases further from the camera. The tilt effect
therefore does not necessarily increase depth of field—it just allows the photographer to customize its location to better suit their subject matter.

CONCEPT: SCHEIMPFLUG PRINCIPLE & HINGE RULE


The Scheimpflug principle states that the sensor plane, lens plane and plane of sharpest focus must all intersect along a line. In the diagram below, this intersection is actually a point since the line is perpendicular to the screen. When
the Scheimpflug principle is combined with the "Hinge" or "Pivot Rule," these collectively define the location for the plane of sharpest focus as follows:
[Interactive figure: applying lens tilt from 0.0° to 8.0° shows how the sensor plane, lens plane and plane of sharpest focus always intersect along a common line, and how the plane of sharpest focus rotates with increasing tilt.]

Figure based on actual calculations using Canon's 45 mm TS-E lens; vertical scale compressed 2X.
Purple line (—) represents plane parallel to lens plane and separated by the lens focal length.
Try experimenting with different values of tilt to get a feel for how this influences the plane of sharpest focus. Notice that even a small lens tilt angle can produce a correspondingly large tilt in the plane of sharpest focus.
The focusing distance can also change the plane of sharpest focus along with tilt, and will be discussed later in this tutorial. Also note that for the sake of brevity, the rest of this tutorial will use "plane of sharpest focus" and "focus plane"
synonymously.

TILT MOVEMENTS TO REPOSITION DEPTH OF FIELD


Depth of field for many scenes is often insufficient using standard equipment—even with small lens apertures. The problem is that one could use even smaller apertures to increase depth of field, but not without also increasing softness at
the camera's focus plane due to diffraction. Tilt movements can sometimes avoid this technical limitation by making more efficient use of the depth of field, depending on the subject matter.
The example below demonstrates the effect of tilt movements on a scene whose subject matter traverses both up/down and front/back directions. Each image is taken using a wide aperture of f/2.8 to make the depth of field more
noticeable at this small image size.

Left: Zero Tilt. Center: 3° Downward Tilt (rug DoF increased, lens DoF decreased). Right: 8° Upward Tilt (apparent DoF decreased).

DoF = Depth of Field; Camera lens aimed downward approx. 30° towards rug.
All images taken at f/2.8 using the Canon 45 mm TS-E lens on a full 35 mm frame sensor.
Center image at f/16 brightens due to reduced vignetting.
On the left we see the typical depth of field produced by an ordinary lens. In order to get both the front and rear rug edges sharp in the left image we would have needed to use a very small aperture. The central image, however, is able to
achieve this even with the same aperture. On the other hand, note how the vertical depth of field has decreased and caused the top of the front lens to be blurred.
Tilt can also be used to reduce apparent depth of field, as demonstrated by the 8° upward tilt image. This can be particularly useful for portraits when a wide aperture is insufficient, or when one wishes to focus
on only part of a vertical object. Note how both the rug and vertical depth of field appear to have decreased. This is because the focus plane is at an angle in between the rug and lens. Also note how the field of
view has moved downward due to the tilt, which should be taken into account.
Another possibility would be to place the depth of field both above and parallel to the rug, such that only the tops of the two lenses are in sharp focus (right image). This type of placement is common for many
types of flower shots, since these have a geometry similar to this rug/lens example.
For landscapes and architecture, however, the goal is usually to achieve maximal sharpness throughout. In the rug/lens example this would require placing the focus plane slightly above and parallel to the rug
with a small aperture.
Deciding where to optimally place the focus plane can become a tricky game of geometry, particularly if the subject traverses both front/back and up/down directions. This requires considering not just the
angle of the focus plane, but also the shape of the depth of field.

Downward Tilt: Only Lens Tops in Focus

Instead of the usual rectangular region for an ordinary lens, the depth of field for a tilt shift lens actually occupies a wedge which broadens away from the camera. This means that placement of the depth of field is more critical near
foreground subject matter.

Ordinary Camera Lens (Large Aperture / Small Aperture)    Tilt Shift Lens (Large Aperture / Small Aperture)

Blue intensity qualitatively represents the degree of image sharpness at a given distance;
actual depth of field can be unequally distributed to either side of the focus plane.
Note how using a small aperture with a tilt shift lens can become very important with vertical subject matter—and increasingly so if this subject matter is in the foreground, or with a more horizontal focus plane.
Traditional view cameras (ie, old-fashioned looking camera with flexible bellows) can use virtually any amount of lens tilt. However, the Nikon and Canon tilt shift lenses are limited to 8.5 and 8 degrees of tilt, respectively. This means
that achieving optimal sharpness throughout is often a compromise between the best possible location for the focus plane and the constraints caused by a narrow range of tilt. This can sometimes occur when one requires a horizontal focus
plane, since this may not always be achievable with just 8 degrees of tilt. The example below demonstrates an alternative placement:

Optimal Depth of Field Placement (if a wide range of tilt angles is available) / Best Available Depth of Field (if horizontal placement is not possible)

The key is to optimally place not only the focus plane, but also its wedge-shaped depth of field. Note how in the right image the focus plane crosses the floor, which ensures that depth of field is most efficiently distributed across the
floor and two subjects. In this example the crossover distance is positioned just before the hyperfocal distance of the corresponding untilted lens, since there is minimal vertical subject matter. For other subject distributions, proper
placement depends on the relative importance of subject matter and the artistic intent of the photograph.
A more sophisticated possibility is to use a combination of tilt and shift. This could have been accomplished by first aiming the camera itself slightly towards the ground, thereby rotating the focus plane even further than possible
using lens tilt alone. One could then use shift to change the field of view—thereby maintaining a similar composition as the original unshifted camera angle, but with a different perspective.
Overall, even if one cannot tilt enough to place the focus plane at the best possible location, one can usually still use some tilt and be better off than what would have been achievable with an ordinary lens. The only exception is when
there is vertical subject matter in the foreground which fills a significant fraction of the image, in which case zero tilt is usually best, although shift movements are likely to be helpful.

FOCUSING A TILT SHIFT LENS


Mentally visualizing how tilting a lens will correspond to changes in the depth of field can be quite difficult, even for the most experienced of photographers. Even then, knowing where to best place the focus plane is only half the battle—
actually putting it there can be a different matter entirely.

The reason focusing can become so difficult is because the focusing distance and the amount of tilt do not independently control the focus plane's location. In other words, changing the focusing distance changes the angle of the focus
plane in addition to changing its distance. Focusing can therefore become an iterative process of alternately adjusting the focusing distance and lens tilt until the photo looks best.
Perhaps the easiest scenarios are those which demand more tilt than the lens supports. In these cases one can just use maximal tilt in the chosen direction, then choose the focus distance which achieves the best available depth of field
placement. No tilt/focusing iterations are required.
For more difficult focusing scenarios, tilt shift lenses are usually focused using trial and error techniques through the viewfinder. This works by following a systematic procedure of alternating between setting the focusing distance
and tilt, with the aim of having the focus plane converge onto the desired location. Since accurate focusing requires consistent and careful attention to detail, using a tripod is almost always a must.
The following procedure is intended for situations where the subject lies primarily along a horizontal plane or some other plane which is rotated relative to the camera's sensor:
Focusing Procedure for a Tilt Shift Lens

(1) Compose: Set the lens to zero degrees of tilt and frame the photograph.

(2) Identify: Identify the critical nearest and furthest subjects along the subject plane.

(3) Focus: Focus at a distance which maximizes near and far subject sharpness in the viewfinder (if the far subject is at infinity, this distance will be at or near the hyperfocal distance). Once an approximate distance is identified, rock the focus ring back and forth slightly to get a better estimate of this distance.

(4) Tilt: Very slowly apply progressively more lens tilt towards the subject plane until near and far subject sharpness is maximized in the viewfinder. Once an approximate tilt angle is identified, slightly rotate the tilt knob back and forth to get a better estimate of this angle.

(5) Refine: Repeat steps (3) and (4) with smaller changes than before to see whether this improves both near and far subject sharpness; if there is no further improvement then the focusing procedure is complete.

For more on step (3) above, see tutorials on depth of field and the hyperfocal distance.
For landscapes, one should generally put more weight on having the furthest subject sharp.
Overall, the above procedure aims to give robust results across a wide range of scenarios;
for more exact focusing under specific conditions, refer to the calculators/charts later in this tutorial.
Also note that using the camera's focus points and focus lock confirmation can be of great help. Even though tilt shift lenses do not work with a camera's autofocus, your camera can still be used to notify you when you have achieved
successful manual focus. Select a focus point which is on your subject and use the focus lock lights in the viewfinder to confirm when your tilt or focusing has successfully brought this subject into focus.
With practice, visual procedures work fine, but ultimately nothing beats having a better intuition for how the process works. One is encouraged to first experiment heavily with their tilt shift lens in order to get a better feel for using tilt
movements.
TOOLS TO MAKE TILT SHIFT FOCUSING EASIER
Trial and error techniques can be problematic due to the limited size of viewfinders used in 35 mm or cropped digital camera sensors. This can make it very hard to discern changes in sharpness—particularly in low light, or with tilt shift
lenses having a maximum aperture of f/3.5 or a wide angle of view. However, several tools are available which may make this process easier.
A special texturized manual focusing screen can ensure the eye has a clear reference to compare with out of focus objects. Otherwise one's eye can get tricked and try to make objects appear in focus even though these objects are not
necessarily in focus in the viewfinder. In such cases the eye effectively becomes a part of the optical system.
Alternatively, if your camera supports real-time viewing using your camera's LCD (Live View) this can be of great help. One can also take a series of test photographs and then zoom in afterwards to verify sharpness at critical points.
A magnified viewfinder can also help, such as Canon's "angle finder C" or one of many third party viewfinder magnifiers. Many of these are at a right angle to the viewfinder, which can make for more convenient focusing when the
camera is near the ground.

FOCUSING TECHNIQUE FOR LANDSCAPE PHOTOGRAPHY

Tilt movements for landscape photography often require a focus plane which lies along a sweeping, near horizontal subject plane. In these situations it is very important to place the focus plane accurately in the foreground. The vertical
distance "J" is easy to set because it is only determined by the lens tilt, not focusing distance.

Once the desired value of J has been determined and the corresponding tilt set, one can then independently use the focusing distance to set the angle of the focus plane. Setting the lens's focus ring to further distances will simultaneously
increase the angle of the focus plane and the angular depth of field, as demonstrated in the next section.
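The standard hinge-rule relation makes this concrete: for a given focal length, J depends only on the tilt angle, via J = f / sin(tilt). A small sketch using the Canon 45 mm TS-E as the example lens:

```python
import math

# Hinge rule: the focus plane pivots about a line a distance J below the lens,
# where J = focal length / sin(tilt). The focusing distance then only changes
# the angle at which the focus plane swings about that line.
def hinge_distance_J(focal_mm, tilt_degrees):
    return focal_mm / math.sin(math.radians(tilt_degrees)) / 1000.0   # in meters

for tilt in (1, 2, 4, 8):
    print(f"{tilt}° tilt on a 45 mm lens -> J = {hinge_distance_J(45, tilt):.2f} m")
```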

TILT SHIFT LENS DEPTH OF FIELD CALCULATOR


The calculator below uses the lens focal length, lens tilt and untilted focusing distance to locate the plane of sharpest focus and depth of field:
[Interactive calculator: Tilt Shift Lens Depth of Field Calculator. Inputs: camera sensor size and aperture (only needed to estimate depth of field), focal length of tilt/shift lens (e.g., 45 mm), tilt amount (e.g., 8 degrees) and untilted focus distance (e.g., 1 meter). Outputs: angle of the plane of sharpest focus (focus plane), vertical distance of the focus plane from the lens (J), angle of the near plane of acceptable sharpness, angle of the far plane of acceptable sharpness, and total angular depth of field.]
Calculator is a work in progress, but should give adequate estimates for most scenarios.
Assumes thin lens approximation, with greatest error for focusing distances near infinity or close up.
The circle of confusion is the standard size also used in the regular depth of field calculator.
"Untilted Focus Distance" is (approximately) the distance labeled on your lens's focus ring.

Note how small changes in tilt lead to large changes in the focus plane angle, and that tilt is correspondingly less influential as the tilt angle increases. Also observe how the untilted focusing distance can have a significant impact on the
focus plane angle. Similar to ordinary depth of field, the total angular depth of field (near minus far angles of acceptable sharpness) decreases for closer focusing distances.

USING SHIFT TO FURTHER ROTATE THE FOCUS PLANE


The next calculator is useful for situations where tilt and shift are used together to achieve an even greater rotation in the focus plane. With an ordinary camera lens, the angle of the focus plane changes when the camera is rotated, since
the focus plane is always perpendicular to the lens's line of sight. With a tilt shift lens this is no different. However, the key is that with a tilt shift lens, we can rotate the camera slightly and then use a shift movement to ensure the same
field of view (same composition).

Ordinary Unrotated Lens / Rotated Ordinary Lens (rotates plane of focus, different field of view) / Rotated Lens With Shift (rotates plane of focus, maintains field of view)

Blue intensity qualitatively represents the degree of image sharpness at a given distance;
light gray line lies along the center of the photograph.
The calculator below demonstrates how much one would have to rotate their camera in order to offset a given lens shift, which is also equal to the rotation in the focus plane. This would achieve a rotation in the focus plane similar to the
top left and right images above, with also the same field of view.

[Interactive calculator: Using Shift to Rotate the Focus Plane. Inputs: focal length of tilt/shift lens (e.g., 45 mm) and shift amount (e.g., 11 mm). Output: amount of focus plane rotation.]
Rotation in focus plane is relative to its location for a camera/lens with the same field of view, but no shift.

Note how shift can rotate the plane of sharpest focus much more for shorter focal lengths. This is because in absolute units, a given mm shift corresponds to a greater rotation in the field of view. On the other hand, this also means that
the perspective will be more strongly influenced for shorter focal lengths, which may be an important consideration.
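Under a thin lens approximation (treating the image distance as roughly the focal length), the rotation produced by offsetting a shift of s mm is about arctan(s / f); the interactive calculator above may differ slightly, but the trend with focal length is the same:

```python
import math

def focus_plane_rotation_deg(shift_mm, focal_mm):
    # Camera rotation needed to offset the shift, and hence the approximate
    # rotation applied to the plane of sharpest focus.
    return math.degrees(math.atan(shift_mm / focal_mm))

for focal in (24, 45, 90):
    print(f"11 mm shift on a {focal} mm lens -> ~{focus_plane_rotation_deg(11, focal):.1f}°")
```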
Be aware that using shift to rotate the focus plane may require that your tilt shift lens be modified so that it can tilt and shift in the same direction, which is usually not the case by manufacturer default. This can be sent to the
manufacturer for modification, or can be performed yourself with a small screwdriver. One needs to remove the four small screws at the base of the lens, rotate the base 90°, and then screw them back into the base.

AVAILABLE NIKON & CANON TILT SHIFT LENSES


Canon has four and Nikon has three mainstream tilt shift lens models available:
Canon Tilt Shift Lenses Nikon Tilt Shift Lenses

Canon 17 mm TS-E f/4L

Canon 24 mm TS-E f/3.5L II PC-E Nikkor 24 mm F3.5D ED

Canon 45 mm TS-E f/2.8 PC-E Nikkor 45 mm F2.8D ED

Canon 90 mm TS-E f/2.8 PC-E Nikkor 85 mm F2.8D ED

Calculations and diagrams above have been designed to represent the range of tilt and shift movements relevant for these lenses on the 35 mm and cropped camera formats.
Note that this tutorial primarily discusses the tilt feature; for shift movements visit part 1:
Tilt Shift Lenses: Using Shift to Control Perspective or Create Panoramas

30.PHOTO STITCHING DIGITAL PANORAMAS -


Digital photo stitching for mosaics and panoramas enables the photographer to create photos with higher resolution and/or a wider angle of view than their digital camera or lenses would ordinarily allow—creating more detailed final prints
and potentially more dramatic, all-encompassing panoramic perspectives. However, achieving a seamless result is more complicated than just aligning photographs; it also involves correcting for perspective and lens distortion,
identifying pixel-perfect matches between subject matter, and properly blending each photo at their seam. This tutorial aims to provide a background on how this process works, along with discussing common obstacles that one may
encounter along the way—irrespective of panorama software type.

OVERVIEW: SEEING THE BIG PICTURE


Stitching a photo can require a complex sequence of steps, which may change depending on subject matter, or type of panoramic stitch. This procedure can be simplified into several closely related groups of steps, which can then each be
addressed in separate stages. Later sections of this tutorial go into each stage with greater detail, including terminology and alternative approaches.
STAGE 1: physically setting up the camera, configuring it to capture all photos identically, and then taking the sequence of photos. The end result is a set of images which encompasses the entire field of view, where all are taken from
virtually the same point of perspective.

STAGE 2: the first stage to begin using photo stitching software; involves choosing the order and precise positioning which mutually aligns all photos. This may occur automatically, or require manually selecting pairs of control points
which should ideally overlay exactly in the final image. This stage may also require input of camera and lens settings so that the panorama software can estimate each photo's angle of view.
STAGE 3: defining the perspective using references such as the horizon, straight lines or a vanishing point. For stitched photos that encompass a wide angle of view, one may also need to consider the type of panoramic projection. The
projection type influences whether and how straight lines become curved in the final stitched image.

STAGE 4: shifting, rotating and distorting each of the photos such that both the average distance between all sets of control points is minimized, and the chosen perspective (based on vanishing point) is still maintained. This stage
requires digital image interpolation to be performed on each photo, and is often the most computationally intensive of all the stages.

STAGE 5: reducing or eliminating the visibility of any seam between photos by gradually blending one photo into another. This stage is optional, and may sometimes be combined with the previous stage of moving and distorting each
image, or may also involve custom placement of the seam to avoid moving objects (such as people).

STAGE 6: cropping the panorama so that it adheres to a given rectangular (or otherwise) image dimension. This may also involve any necessary touch-up or post-processing steps for the panorama, including levels, curves, color
refinements and sharpening.
The resulting panorama is 20 megapixels, even though the camera used for this was under 4 megapixels. This provides a much greater level of detail—something ordinarily only attainable with much more expensive equipment—by using
a compact, handheld and inexpensive travel camera. The above stages can be summarized as:

Stage 1: Equipment setup and acquisition of photographs
Stage 2: Selection of desired photo alignment and input of camera and lens specifications
Stage 3: Selection of perspective and projection type
Stage 4: Computer shifts, rotates and distorts photos to conform with the requirements of stages 2 and 3
Stage 5: Manual or automatic blending of seams
Stage 6: Cropping, touch-up and post-processing

Note how stages 2-6 are all conducted on the computer using a panorama software package, after the photos have been taken. The rest of this tutorial takes an in-depth look at stage 1, with details on stages 2-6 being presented in the
second part of the tutorial. These stages will show that panoramas are not always straightforward, and require many interpretive decisions to be made along the way.

BACKGROUND: PARALLAX ERROR & USING A PANORAMIC HEAD


The size and cost of panoramic equipment can vary drastically, depending on the intended use. Being able to identify when you need additional equipment can save time and money. Here we identify two typical stitching scenarios, based
on required equipment:
Scenario #1: Handheld or tripod-mounted photographs with no close foreground subject matter. PANORAMIC HEAD NOT REQUIRED.
Scenario #2: Tripod-mounted photographs with foreground subject matter in multiple frames. REQUIRES A PANORAMIC HEAD.

Panoramas require that the camera rotates about the optical center of its lens, thereby maintaining the same point of perspective for all photographs. If the camera does not rotate about its optical center, its images may become
impossible to align perfectly; these misalignments are called parallax error. A panoramic head is a special device that ensures your camera and lens rotate about their optical center.
Note: The optical center of a lens is often referred to as its nodal point, although this term is not strictly correct. A more accurate term is the entrance pupil, but even this refers to a small area and not an individual point. The location
which we refer to is therefore the point at the center of the entrance pupil, which may also be called the "no parallax point" or "perspective point."
Scenario 2 is far more sensitive to parallax error due to foreground subject matter. With scenario 1, small movements deviating from the lens's optical center have a negligible impact on the final image—allowing these photos to be
taken handheld.
To see why foreground subject matter is so important, this can be illustrated by looking at what happens for two adjacent, overlapping photos which comprise a panorama. The two pink pillars below represent the background and
foreground subjects. The photo angle shown on the left (below) is the initial position before camera rotation, whereas the angle shown on the right is after camera rotation.
For incorrect rotation, the resulting change in perspective is due to parallax error, because the camera was not rotated about its optical center. The effect of each scenario is illustrated below:
Incorrect Rotation: Scenario #1 Scenario #2

Correct Rotation: Scenario #2 with Panoramic Head


Note: incorrect rotation assumes that the camera is rotated about the front of the lens;
correct rotation assumes rotation about the optical center
SCENARIO #1: The problem with the second image (right) is that each photo in the panorama will no longer see the same image perspective. Although some degree of misalignment may occur from this, the problem is far less
pronounced than when there are close foreground objects, as illustrated for scenario #2.
SCENARIO #2: Here we see that the degree of misalignment is much greater when foreground objects are present in more than one photo of the panorama. This makes it absolutely essential that the camera is rotated precisely about its
optical center, and usually necessitates the use of a special panoramic head (as shown in the final scenario).
SCENARIO #2, PANORAMIC HEAD: Here we see that the perspective is maintained because the lens is correctly rotated about its optical center. This is apparent because for the image on the right, the light rays from both pillars still
coincide, and the rear column remains behind the front column. Panoramic photos of building interiors almost always require a panoramic head, while skyline vistas rarely do. Multi-row or spherical panoramas may also require a
tripod-mounted panoramic head that keeps the lens at the center of rotation for up and down rotations.
With care, parallax error can be made undetectable in handheld panoramas which do not have foreground subject matter. The trick is to hold the camera directly above one of your feet, then rotate your body about the ball of that foot
while keeping the camera at the same height and distance from your body.

STAGE 1: DIGITAL CAMERA SETUP & PANORAMA ACQUISITION


Taking a digital panorama involves systematically rotating your camera in increments to encompass the desired field of view. The size of each rotation increment and the number of images in total depend on the angle of view for each
photo, which is determined by the focal length of the camera lens being used, and the amount of overlap between photos. The image below is composed of two rows of four photographs; the camera first scanned from left to right across
the top row, then down to the second row, and back across the bottom half of the image from right to left.
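As a rough planning aid (a sketch only, assuming a rectilinear lens, a 36 mm wide sensor in landscape orientation and evenly spaced camera angles), the number of photos needed for a single-row panorama can be estimated from the lens's horizontal angle of view and the desired overlap:

```python
import math

def photos_needed(total_angle_deg, focal_mm, sensor_width_mm=36.0, overlap=0.25):
    # Horizontal angle of view of one frame, and the new angle each extra frame adds.
    photo_angle = math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))
    step = photo_angle * (1 - overlap)
    extra = max(0.0, total_angle_deg - photo_angle)
    return 1 + math.ceil(extra / step)

print(photos_needed(180, focal_mm=50))   # about 6 photos for a 180° view at 25% overlap
```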
Other than minimizing parallax error, the key to creating a seamless panorama is to ensure that each of the panorama photos are taken using identical settings. Any change in exposure, focus, lighting or white balance between shots
creates a surprising mismatch. If you are using a digital SLR camera, it is highly recommended that all photos be taken in manual exposure mode using the RAW file format. This way white balance can be customized identically for all
shots, even after the images have been taken.
If using a compact digital camera, many of these include a panoramic preset mode, which shows the previous image on-screen along with the current composition. This can be very helpful with handheld panoramas because it assists in
making sure each photo is level and overlaps sufficiently with the previous photo. Additionally, the panoramic preset modes use manual exposure settings, where they lock in the white balance and exposure based on the first photograph
(or at least based on the first time the shutter button is held down half-way).
Panoramas can encompass a very wide angle of view, up to 360 degree panoramic views, and may therefore encompass a drastic range of illumination across all photo angles. This may pose problems when choosing the exposure
settings, because exposing directly into or away from the sun may make the rest of the panorama too dark or light.

Often the most intermediate exposure is obtained by aiming the camera in a direction perpendicular to one's shadow, using an automatic exposure setting (as if this were a single photo), and then manually using that setting for all
photographs. Depending on artistic intent, however, one may wish to expose based on the brightest regions in order to preserve highlight detail. For a compact digital camera, the exposure can be locked in by using the panoramic preset
mode, holding the shutter button down halfway at the intermediate angle, then taking the photos in any particular order (while ensuring that the shutter button remains pressed halfway before taking the first photo).

Single Row Panorama Multi-Row Panorama or Stitched Mosaic

Ensure that each photograph has roughly 10-30% overlap with all other adjacent photos. The percent overlap certainly does not have to be exact; too high of an overlap could mean that you have to take far more photos for a given
angle of view, but too little of an overlap may provide too short a region over which to blend or redirect the placement of seams.
Note that if your panorama contains regions which are moving, such as water or people, it is best to try and isolate movement in a single camera angle. This way you do
not run into problems where someone appears in the panorama twice, or photo misalignment occurs because a moving object is on the seam between two stitched images. In
the image to the left, the colorful Swiss guards were marching side to side, but the lower third of the image was contained within a single photo.
Another consideration is whether to stitch a single row panorama in landscape or in portrait orientation. Using portrait orientation can achieve nearly 2.25X the number of megapixels (for the same
subject matter) for cameras with a 3:2 aspect ratio digital sensor (for the same 20% overlap). The disadvantage to this is that portrait orientation requires a longer focal length, and thus a smaller aperture
to achieve the same depth of field (since magnification has increased).
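The 2.25X figure follows from simple geometry: to cover the same vertical angle of view, a portrait-oriented 3:2 frame allows a focal length roughly 1.5X longer than a landscape-oriented one, and linear resolution scales with focal length. A quick sketch of the arithmetic (my own illustration, not part of the original tutorial):

# 3:2 sensor: the long side is 1.5x the short side.
aspect_ratio = 3.0 / 2.0
focal_length_gain = aspect_ratio      # portrait orientation permits ~1.5x longer focal length
print(focal_length_gain ** 2)         # ~2.25x the megapixels for the same subject matter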
Other considerations when taking photos include total resolution and depth of field. One can dramatically increase the number of megapixels in a stitched photo by composing it of progressively more images. However, the disadvantage is that in order to achieve the same depth of field, one has to use progressively smaller lens apertures (larger f-numbers) as the number of stitched images increases (for the same angle of view). This may make certain resolutions nearly impossible to achieve with some subject matter, either because of the resulting exposure time required, or because the small aperture induces significant photo blurring due to diffraction.

The following calculator demonstrates how the number of megapixels and the camera lens settings change when attempting to make a stitched photo mosaic out of a scene which could otherwise have been captured in a single photograph. It can also be used to quickly assess what focal length is needed to encompass a given scene.

Stitched Photo Mosaic Calculator

Inputs: settings for a single photo which encompasses the entire scene (selected aperture and actual lens focal length in mm), the mosaic size (number of photos across and down), and the percent overlap (e.g. 20%).

Results: the required focal length (for the same overall angle of view), the megapixels (relative to the single photo), and, if the same depth of field must be maintained, the required lens aperture and exposure time (relative to the single photo).

Note: The calculator assumes that photographs are all taken in the same orientation, whether all in landscape or all in portrait, and that photos are of low magnification.
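As a rough illustration of the relationships such a calculator is built on, the Python sketch below estimates the same quantities from first principles. It is only an approximation (small angles, a regular grid of identically oriented photos, evenly spaced overlap, and the common rule that maintaining depth of field requires the f-number to scale with focal length); the function and variable names are mine, not the calculator's.

import math

def mosaic_requirements(f_single_mm, f_number, photos_wide, photos_tall, overlap_pct):
    """Approximate lens requirements for a photo mosaic covering the same scene
    as a single photo taken at focal length f_single_mm and aperture f_number."""
    v = overlap_pct / 100.0
    # Effective coverage factor in each direction: each extra photo adds (1 - v) of a frame.
    factor_w = 1 + (photos_wide - 1) * (1 - v)
    factor_h = 1 + (photos_tall - 1) * (1 - v)
    # The focal length can only be lengthened by the smaller factor,
    # otherwise the scene would no longer fit in one of the two directions.
    scale = min(factor_w, factor_h)
    focal_length = f_single_mm * scale
    # Linear resolution grows with focal length, so megapixels grow with its square.
    megapixels_rel = scale ** 2
    # Maintaining the same depth of field roughly requires the f-number to scale
    # with focal length; exposure time then scales with its square (same ISO).
    aperture = f_number * scale
    exposure_rel = scale ** 2
    return focal_length, megapixels_rel, aperture, exposure_rel

# Example: a scene captured with 50 mm at f/8, redone as a 2x2 mosaic with 20% overlap.
print(mosaic_requirements(50, 8, 2, 2, 20))  # ~90 mm, ~3.2x the pixels, ~f/14, ~3.2x the exposure time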

Here we see that even small photo mosaics can quickly require impractical lens apertures and exposure times in order to maintain the same depth of field. Hopefully this makes it clear that digital panoramas and stitched photo mosaics
are more difficult to technically master than single photographs. Also note that image overlap may reduce the final resolution significantly (compared to the sum of megapixels in all the individual photos), implying that photo
stitching is definitely not an efficient way to store image data on a memory card. The calculator below estimates the total megapixels of a stitched photo mosaic as a percentage of all its individual photos.
Photo Stitching Efficiency Calculator

Inputs: the mosaic size (number of photos across and down) and the percent overlap (e.g. 20%).

Result: the stitching efficiency, i.e. the final megapixels as a fraction of the combined megapixels of all input images.
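The arithmetic behind the efficiency estimate is straightforward and can be sketched as follows; it ignores cropping, projection distortion and uneven overlap, so treat it only as an approximation.

def stitching_efficiency(photos_wide, photos_tall, overlap_pct):
    """Final stitched megapixels as a fraction of the total input megapixels."""
    v = overlap_pct / 100.0
    # Relative width and height of the stitched result, in units of one frame.
    stitched_w = 1 + (photos_wide - 1) * (1 - v)
    stitched_h = 1 + (photos_tall - 1) * (1 - v)
    return (stitched_w * stitched_h) / (photos_wide * photos_tall)

print(round(stitching_efficiency(2, 2, 20), 2))  # 0.81 -> only about 81% of the input pixels survive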
Use of a polarizing filter should be avoided for extremely wide angle panoramas, as strong changes in sky lightness may appear. Recall that polarizing filters darken the sky most when facing at a 90 degree angle to the direction of the sun, and least when facing directly into or away from the path of sunlight. This means that any panorama which spans 180 degrees of the sky may include regions where the polarizer darkens the sky fully and other regions where it has no effect. A strong, unnatural sky gradient can be observed in the photo of an arch to the right. Additionally, polarizing filters may make the edges of each photograph much more difficult to stitch without showing visible seams.
Also, be wary of attempting panoramas of scenes with rapidly changing light, such as when clouds are moving across the sky and selectively illuminating a landscape. Such scenes can still be
stitched, just ensure that any moving patches of light (or dark) are contained within individual photos, and not on the seams.

Finally, try to ensure that each of your photographs progresses across the scene in a systematic, grid-like pattern. With large panoramas it can become very easy to drift upwards or downwards, requiring that an unacceptable amount of the final panorama be cropped out (as shown below).
The above result can be prevented by carefully placing the horizon at a pre-defined position in each photograph (such as halfway down the photo, or one third, etc.).
For further reading on this topic, please continue to:
Part 2: Using Photo Stitching Software

31.USING PHOTO STITCHING SOFTWARE -


Digital photo stitching software is the workhorse of the panorama-making process, and can range from providing a fully automatic one-click stitching, to a more time-consuming manual process. This is part 2 of the tutorial, which
assumes all individual photos have already been properly captured (stage 1 below is complete); for stage 1 and an overview of the whole stitching process please visit part 1 of this tutorial on digital panoramas.

Stage 1: Equipment setup and acquisition of photographs
Stage 2: Selection of desired photo alignment and input of camera and lens specifications
Stage 3: Selection of perspective and projection type
Stage 4: Computer shifts, rotates and distorts photos to conform with the requirements of stages 2 and 3
Stage 5: Manual or automatic blending of seams
Stage 6: Cropping, touch-up and post-processing

TYPES OF STITCHING SOFTWARE


In order to begin processing our series of photos, we need to select an appropriate software program. The biggest difference between options is in how they address the tradeoff between automation and flexibility. Generally speaking, fully customizable stitching software achieves better quality than automated packages, but it can also be overly technical or time-consuming.
This tutorial aims to improve understanding of most software stitching concepts by keeping the discussion as generic as possible, however actual software features may refer to a program called PTAssembler (front-end for PanoTools or
PTMender). PTAssembler incorporates a fully automated one-click stitching option, in addition to providing nearly all of the custom stitching options available in other programs. A similarly equipped program for the Mac is PTMac.
At the time of this article, other notable programs include those that come packaged with the camera, such as Canon PhotoStitch, along with popular packages such as Autostitch, Hugin Panorama Photo Stitcher, ArcSoft Panorama Maker, Panorama Factory and PanaVue, among others.

STAGE 2: CONTROL POINTS & PHOTO ALIGNMENT


Panorama stitching software uses pairs of control points to specify regions of two camera photos that refer to the same point in space. Pairs of control points may be manually selected by visual inspection, or these may be generated
automatically using sophisticated matching algorithms (such as Autopano for PTAssembler). With most photographs, best results can only be achieved with manual control point selection (which is often the most time-consuming stage of
the software stitching process).
The example above shows a selection of four pairs of control points, for two photos within a panorama. The best control points are those which are based upon highly rigid objects with sharp edges or fine detail, and are spaced
evenly and broadly across each overlap region (with 3-5+ points for each overlap). This means that basing control points on tree limbs, clouds or water is ill-advised except when absolutely necessary. For this reason it is recommended to always capture some land (or other rigid objects) in the overlap region between all pairs of photographs; otherwise control point selection may prove difficult and inaccurate (such as for panoramas containing all sky or water).
The example below demonstrates a situation where the only detailed, rigid portion of each image is in the silhouette of land at the very bottom—thereby making it difficult to space the control points evenly across each photo's overlap
region. In these situations automated control point selection may prove more accurate.

PTAssembler has a feature called "automatically micro-position control points," which works by using your selection as an initial guess, then looking to all adjacent pixels within a specified distance (such as 5 pixels) to see if these are a
better match. When stitching difficult cloud scenes such as that shown above, this effectively combines the advantages of manual control point selection with those of automated algorithms.
Another consideration is how far away from the camera each control point is physically located. For panoramas taken without a panoramic head, parallax error may become large in foreground objects, so more accurate results can be achieved by basing control points only on distant objects. Any parallax error in the near foreground may not even be visible, as long as these foreground elements are not contained within the overlap between photos.

STAGE 3: VANISHING POINT PERSPECTIVE


Most photo stitching software gives the ability to specify where the reference or vanishing point of perspective is located, along with the type of image projection.
Careful choice of this vanishing point can help avoid converging vertical lines (which would otherwise run parallel), or a curved horizon. The vanishing point is usually where one would be directly facing if they were standing within the
panoramic scene. For architectural stitches, such as the example below (120° crop from the rectilinear projection), this point is also clearly apparent by following lines into the distance which are parallel to one's line of sight.
Incorrect placement of the vanishing point causes lines lying in planes perpendicular to the viewer's line of sight to converge (even though these would otherwise appear parallel). This effect can also be observed by using a wide angle lens in an architectural photo and pointing the camera significantly above or below the horizon, thereby giving the impression of buildings which are leaning.

The vanishing point is also critical in very wide angle, cylindrical projection panoramas (such as the 360 degree image shown below); if it is misplaced, the horizon may appear curved.
If the vanishing point were placed too high, the horizon curvature would be in the opposite direction. Sometimes it may be difficult to locate the actual horizon, due to the presence of hills, mountains, trees or other obstructions. For such
difficult scenarios the location of the horizon could then be inferred by placing it at a height which minimizes any curvature.
Panorama stitching software also often gives the option to tilt the imaginary horizon. This can be very useful when the photo containing the vanishing point was not taken perfectly level. For this scenario, even if the vanishing point is
placed at the correct height, the horizon may be rendered as having an S-curve if the imaginary horizon does not align with the actual horizon (in the individual photo).

If the panorama itself were taken level, then the straightest horizon would be the one that yields a stitched image whose vertical dimension is the shortest (and is a technique sometimes employed by stitching software).

STAGE 4: OPTIMIZING PHOTO POSITIONS


Once the control points, vanishing point perspective and image projection have all been chosen, the photo stitching software can then begin to distort and align each image to create the final stitched photograph. This is often the most
computationally intensive step in the process. It works by systematically searching through combinations of yaw, pitch and roll in order to minimize the aggregate error between all pairs of control points. This process may also adjust lens
distortion parameters, if unknown.
Yaw Pitch Roll

Note that the above photos are slightly distorted; this is to emphasize that when the stitching software positions each image it adjusts for perspective, and that the amount of perspective distortion depends on that image's location relative to
the vanishing point.
The key quality metric to be aware of is the average distance between control points. If this distance is large relative to the print size, then seams may be visible regardless of how well these are blended. The first thing to check is
whether any control points were mistakenly placed, and that they follow the other guidelines listed in stage 2. If the average distance is still too large then this may be caused by improperly captured images, including parallax error from
camera movement or not using a panoramic head.
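To make this optimization step more concrete, the sketch below estimates the yaw, pitch and roll of one image relative to another by minimizing the average angular distance between paired control points, each expressed as a unit direction vector. This is only a conceptual illustration of the idea described above, not PTAssembler's actual algorithm.

import numpy as np
from scipy.optimize import minimize

def rotation_matrix(yaw, pitch, roll):
    """Combined rotation matrix for yaw, pitch and roll (angles in radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    return rz @ ry @ rx

def mean_control_point_error(angles, dirs_a, dirs_b):
    """Average angular distance (radians) between control point pairs
    after rotating image B's directions by (yaw, pitch, roll)."""
    rotated = dirs_b @ rotation_matrix(*angles).T
    cosines = np.clip(np.sum(dirs_a * rotated, axis=1), -1.0, 1.0)
    return np.mean(np.arccos(cosines))

def optimize_position(dirs_a, dirs_b):
    """dirs_a, dirs_b: N x 3 arrays of unit vectors for matched control points."""
    result = minimize(mean_control_point_error, x0=[0.0, 0.0, 0.0],
                      args=(dirs_a, dirs_b), method="Nelder-Mead")
    return result.x, result.fun   # best (yaw, pitch, roll) and the residual error

The residual returned here plays the role of the quality metric described above: if it remains large after optimization, the control points (or the source photos) are the likely culprits.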
STAGE 5: MANUALLY REDIRECTING & BLENDING SEAMS
Ideally one would want to place the photo seams along unimportant or natural break points within the scene. If the stitching software supports layered output, one can perform this manually using a mask in Photoshop:

Without Blend / Manual Blend / Mask from Manual Blend

Note how the above manual blend evens the skies and avoids visible jumps along geometrically prominent architectural lines, including the crescent of pillars, foreground row of statues and distant white building.
Make sure to blend the mask over large distances for smooth textures, such as the sky region above. For fine detail, blending over large distances can blur the image if there is any misalignment between photos. It is therefore best to blend
fine details over short distances using seams which avoid any easily noticeable discontinuities (view the "mask from manual blend" above to see how the sky and buildings were blended).
On the other hand, manually blending seams can become extremely time consuming. Fortunately some stitching software has an automated feature which can perform this simultaneously, as described in the next section.

STAGE 5: AUTOMATICALLY REDIRECTING & BLENDING SEAMS


One of the best ways to blend seams in a stitched photograph is by using a technique called "multi-resolution splines", which can often rectify even poorly captured panoramas or mosaics. It works by breaking each image up into several
components, similar to how an RGB photo can be separated into individual red, green and blue channels, except that in this case each component represents a different scale of image texture. Small-scale features (such as foliage or fine
grass) have a high spatial resolution, whereas larger scale features (such as a clear sky gradient) are said to have low spatial resolutions.
Show: Large-Scale Textures / Small-Scale Textures (Original Image in Black & White —> Processed Image)

The multi-resolution spline effectively blends each texture size separately, then recombines these to re-create a normal looking photograph. This means that the lower resolution components are blended over a larger distance, whereas the
higher resolution components are blended over shorter distances. This addresses the common problem of visible jumps across the seams corresponding to smooth areas, or blurriness along the seams corresponding to fine detail.
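For readers curious how such a blend is computed, the sketch below implements a basic multi-resolution (Laplacian pyramid) blend of two equally sized grayscale images using OpenCV. It is a simplified illustration of the technique rather than the code used by any particular stitching package; the number of pyramid levels and the blending mask are arbitrary choices left to the caller.

import cv2
import numpy as np

def laplacian_blend(img1, img2, mask, levels=5):
    """Blend two float32 images of equal size using a Laplacian pyramid.
    mask is 1.0 where img1 should dominate and 0.0 where img2 should."""
    # Build Gaussian pyramids for both images and the mask.
    gp1, gp2, gpm = [img1], [img2], [mask]
    for _ in range(levels):
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))
    # Laplacian pyramids: each level minus the upsampled next-coarser level.
    lp1, lp2 = [], []
    for i in range(levels):
        size = (gp1[i].shape[1], gp1[i].shape[0])
        lp1.append(gp1[i] - cv2.pyrUp(gp1[i + 1], dstsize=size))
        lp2.append(gp2[i] - cv2.pyrUp(gp2[i + 1], dstsize=size))
    lp1.append(gp1[levels])
    lp2.append(gp2[levels])
    # Blend each scale with the correspondingly blurred mask, then collapse the pyramid.
    blended = [m * a + (1 - m) * b for a, b, m in zip(lp1, lp2, gpm)]
    out = blended[-1]
    for i in range(levels - 1, -1, -1):
        size = (blended[i].shape[1], blended[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + blended[i]
    return out

Because the mask is blurred more at the coarser pyramid levels, low-frequency content is blended over a large distance while fine detail is blended over a short one, which is exactly the behavior described above.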
In the example below, we demonstrate a seemingly impossible blend between an apple and an orange—objects which contain different large-scale color and small-scale texture.
Show: Apple / Orange; Blended using: Feathered (Normal) / Multi-Resolution Spline (Individual Images —> Blended Image)

Of course this "apples and oranges" blend would likely never be performed intentionally in a stitched photograph, but it does help to demonstrate the true power of the technique.

The example above demonstrates its use in a real-world panorama. Note the highly uneven sky brightness at the seams, which was primarily caused by pronounced vignetting (light fall-off toward the edges of the frame caused by the lens optics); the multi-resolution spline blends these transitions so that they are no longer visible in the final result.
Smartblend and Enblend are two add-on tools that can perform the multi-resolution spline in PTAssembler and other photo stitching software. Smartblend has the added advantage of being able to intelligently place seams based on image
content.

STAGE 6: FINISHING TOUCHES


Here one may wish to crop the irregularly shaped stitch to fit a standard rectangular aspect ratio or frame size. The assembled panorama may then be treated like any ordinary single-image photograph in terms of post-processing, which could include Photoshop levels or curves. Most importantly, this image will need an unsharp mask or other sharpening technique applied, since the perspective distortion (using image interpolation) and blending will introduce significant softening.
For background reading on this topic, please refer to:
Part 1: Photo Stitching Digital Panoramas
or the tutorial on Understanding Image Projections
32.PANORAMIC IMAGE PROJECTIONS -
An image projection occurs whenever a flat image is mapped onto a curved surface, or vice versa, and is particularly common in panoramic photography. A projection is performed when a cartographer maps a spherical globe of the earth
onto a flat piece of paper, for example. Since the entire field of view around us can be thought of as the surface of a sphere (for all viewing angles), a similar spherical to 2-D projection is required for photographs which are to be
printed.

Narrow Angle of View (grid remains nearly square) vs. Wider Angle of View (grid is highly distorted)

For small viewing angles, it is relatively easy to map the scene onto a flat piece of paper, since the viewing arc is nearly flat. Some distortion is inevitable when trying to map a spherical image onto a flat surface,
therefore each projection type only tries to minimize one type of distortion at the expense of others. As the viewing angle increases, the viewing arc becomes more curved, and thus the difference between panorama projection types
becomes more pronounced. When to use each projection depends largely on the subject matter and application; here we focus on a few which are most commonly encountered in digital photography. Many of the projection types
discussed in this tutorial are selectable as an output format for several panoramic software packages; PTAssembler allows selection of all those which are listed.

IMAGE PROJECTION TYPES IN PHOTOGRAPHY


Grid representing the sphere of vision (if standing at center), flattened using a chosen projection type: Equirectangular (100% coverage), Rectilinear, Cylindrical, Mercator, Fisheye, Sinusoidal or Stereographic.

If all the above image projection types seem a bit daunting, try to first just read and understand the distinction between rectilinear and cylindrical (shown in bold), as these are the ones which are most widely used when photo stitching
digital panoramas.
Equirectangular image projections map the latitude and longitude coordinates of a spherical globe directly onto horizontal and vertical coordinates of a grid, where this grid is roughly twice as wide as it is tall. Horizontal stretching
therefore increases further from the poles, with the north and south poles being stretched across the entire upper and lower edges of the flattened grid. Equirectangular projections can show the entire vertical and horizontal angle of view
up to 360 degrees.
Cylindrical image projections are similar to equirectangular, except that they also vertically stretch objects as they get closer to the north and south poles, with infinite vertical stretching occurring at the poles (therefore no horizontal line is shown at the top and bottom of this flattened grid). For this reason cylindrical projections are also not suitable for images with a very large vertical angle of view. Cylindrical projections are the standard type rendered by traditional panoramic film cameras with a swing lens. They maintain more accurate relative sizes of objects than rectilinear projections, however this is done at the expense of rendering lines parallel to the viewer's line of sight as being curved (even though these would otherwise appear straight).
Rectilinear image projections have the primary advantage that they map all straight lines in three-dimensional space to straight lines on the flattened two-dimensional grid. This projection type is what most ordinary wide angle lenses
aim to produce, so this is perhaps the projection with which we are most familiar. Its primary disadvantage is that it can greatly exaggerate perspective as the angle of view increases, leading to objects appearing skewed at the edges of the
frame. It is for this reason that rectilinear projections are generally not recommended for angles of view much greater than 120 degrees.
Fisheye image projections aim to create a flattened grid where the distance from the center of this grid is roughly proportional to actual viewing angle, yielding an image which would look similar to the reflection off of a metallic sphere.
These are generally not used as an output format for panoramic photography, but may instead represent the input images when the camera lens type being used for photo stitching is a fisheye lens. Fisheye projections are also limited to
vertical and horizontal angles of view of 180 degrees or less, yielding an image which fits within a circle. This would be characterized by (otherwise straight) lines becoming progressively more curved the further they get from the center
of the image grid. A camera with a fisheye lens is extremely useful when creating panoramas that encompass the entire sphere of vision, since these often require stitching just a few input photographs.
Mercator image projections are most closely related to the cylindrical and equirectangular projection types; mercator represents a compromise between these two types, providing for less vertical stretching and a greater usable vertical
angle of view than cylindrical, but with more line curvature. This projection is perhaps the most recognizable from its use in flat maps of the earth. Here we also note that an alternative form of this projection (the transverse mercator)
may be used for very tall vertical panoramas.
Sinusoidal image projections aim to maintain equal areas throughout all grid sections. If flattening the globe of the earth, one can imagine that this projection could be rolled back up again to form a sphere with the same area and shape as the original. The equal-area characteristic is useful because, when recording a spherical image in 2-D, it maintains the same horizontal and vertical resolution throughout the image. This projection is similar to the fisheye and stereographic types, except that it maintains perfectly horizontal latitude lines from the original sphere.
Stereographic image projections are very similar to fisheye projections, except that they maintain a better sense of perspective by progressively stretching objects away from the point of perspective. This perspective-exaggerating characteristic is somewhat similar to that of the rectilinear projection, though certainly less pronounced.
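For the mathematically inclined, the differences between several of these projections boil down to simple formulas that map a viewing direction (yaw and pitch relative to the point of perspective) onto flat x,y coordinates. The sketch below lists standard textbook forms of a few of them purely for illustration; it is not taken from any particular stitching package.

import math

def equirectangular(yaw, pitch):
    # Angles map directly to coordinates; up to 360 x 180 degree coverage.
    return yaw, pitch

def cylindrical(yaw, pitch):
    # Vertical stretching grows toward the poles (tan goes to infinity at +/-90 degrees).
    return yaw, math.tan(pitch)

def mercator(yaw, pitch):
    # A compromise: less vertical stretching than cylindrical at moderate angles.
    return yaw, math.log(math.tan(math.pi / 4 + pitch / 2))

def rectilinear(yaw, pitch):
    # Straight lines stay straight, but objects stretch toward the edges;
    # only usable for yaw and pitch well below 90 degrees.
    return math.tan(yaw), math.tan(pitch) / math.cos(yaw)

# Example: a point 60 degrees to the side and 30 degrees up.
yaw, pitch = math.radians(60), math.radians(30)
for proj in (equirectangular, cylindrical, mercator, rectilinear):
    print(proj.__name__, proj(yaw, pitch))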

EXAMPLES: WIDE HORIZONTAL FIELD OF VIEW


How do the above image projections actually influence a panoramic photograph? The following series of photographs are used to visualize the difference between two projection types most often encountered in photo stitching software:
rectilinear and cylindrical projections. These are designed to show only distortion differences for a wide horizontal angle of view; vertical panoramas are used later on to illustrate differences in vertical distortion between other projection
types.

The first example demonstrates how a rectilinear image projection would be rendered in a photo stitch of the above three photographs.
Note the extreme distortion near the edges of the angle of view, in addition to the dramatic loss in resolution due to image stretching. The next image demonstrates how the highly distorted image above would appear if it were cropped to
contain just a 120 degree horizontal angle of view.

Here we see that this cropped rectilinear projection yields a very suitable look, since all straight architectural lines are rendered straight in the stitched photograph. On the other hand, this is done at the expense of maintaining the relative
size of objects throughout the angle of view; objects toward the edge of the angle of view (far left and right) are significantly enlarged compared to those at the center (tower with doorway at base).
The next example demonstrates how the stitched photographs would appear using a cylindrical projection. Cylindrical projections also have the advantage of producing stitched photographs with relatively even resolution throughout, and
also require minimal cropping of empty space. Additionally, the difference between cylindrical and equirectangular is negligible for photographs which do not have extreme vertical angles of view (such as the example below).
EXAMPLES: TALL VERTICAL FIELD OF VIEW
The following examples illustrate the difference between projection types for a vertical panorama (with a very large vertical field of view). This gives a chance to visualize the difference between the equirectangular, cylindrical and
mercator projections, even though these would have appeared the same in the previous example (with a wide horizontal angle of view).

Cylindrical Mercator Equirectangular

Note: The point of perspective for this panorama was set at the base of the tower, therefore the effective vertical angle of view appears as if there were a 140 degree field of view in total (if the perspective point had been at the halfway height).
This large vertical angle of view allows us to clearly see how each of these image projections differ in their degree of vertical stretching/compression. The equirectangular projection
compresses vertical perspective so greatly that one arguably loses the sense of extreme height that this tower gives in person. For this reason, equirectangular is only recommended
when absolutely necessary (such as in stitched photographs with both an extreme vertical and horizontal field of view).
The three projections above aim to maintain nearly straight vertical lines; the transverse mercator projection to the right sacrifices some curvature for a (subjectively) more realistic
perspective. This projection type is often used for panoramas with extreme vertical angles of view. Also note how this projection closely mimics the look of each of the individual
source photographs.
The difference between rectilinear and cylindrical is barely noticeable for this narrow horizontal angle of view, so the rectilinear projection was not included.

Transverse Mercator

PANORAMIC FIELD OF VIEW CALCULATORS


The following calculator can be used to estimate your camera's vertical and horizontal angles of view for different lens focal lengths, which can help in assessing which projection type would be most suitable.

Panoramic Field of View Calculator

Inputs: lens focal length (mm), horizontal size (photos), vertical size (photos), camera orientation, percent overlap (e.g. 20%) and camera type (sensor size).

Result: the field of view of the stitched photo (horizontal x vertical).

Note: Calculators are not intended for use in extreme macro photography. The above results are only approximate, since the angle of view is actually also influenced (to a lesser degree) by the focusing distance. Additionally, the field of view estimate assumes that the lens performs a perfect rectilinear image projection; lenses with large barrel or pincushion distortion may yield slightly different results.
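The estimate behind such a calculator can be sketched as follows. It assumes a rectilinear lens, a 36 x 24 mm frame divided by the crop factor, and that photo angles add linearly for a rotating camera; these assumptions and the function names are mine rather than the calculator's.

import math

def angle_of_view(sensor_mm, focal_mm):
    """Angle of view (degrees) along one sensor dimension for a rectilinear lens."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def panorama_fov(focal_mm, photos_h, photos_v, overlap_pct, crop_factor=1.0, portrait=False):
    """Approximate horizontal x vertical field of view of a stitched panorama."""
    width_mm, height_mm = 36.0 / crop_factor, 24.0 / crop_factor
    if portrait:
        width_mm, height_mm = height_mm, width_mm
    v = overlap_pct / 100.0
    h_single = angle_of_view(width_mm, focal_mm)
    v_single = angle_of_view(height_mm, focal_mm)
    # Each additional photo extends the view by roughly (1 - overlap) of a single frame's angle.
    h_total = min(360.0, h_single * (1 + (photos_h - 1) * (1 - v)))
    v_total = min(180.0, v_single * (1 + (photos_v - 1) * (1 - v)))
    return h_total, v_total

# Example: four portrait photos in a single row at 50 mm on a 1.6x crop camera, 20% overlap.
print(panorama_fov(50, 4, 1, 20, crop_factor=1.6, portrait=True))  # roughly 58 x 25 degrees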

The next calculator estimates how many photos are required to encompass a 360 degree horizontal field of view, given the input settings of: focal length, camera orientation, photo overlap and digital camera sensor size.

360° Panorama Calculator

Inputs: lens focal length (mm), camera orientation, percent overlap (e.g. 20%) and camera type (sensor size or crop factor).

Result: the required number of horizontal photos to encompass a full 360 degree field of view.

Note: CF = crop factor, which describes the relative width of the camera sensor compared to a 35 mm camera. For background reading, please visit the tutorial on digital camera sensor sizes.
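A rough version of this calculation, under the same rectilinear-lens and crop-factor assumptions as the previous sketch, might look like this:

import math

def photos_for_360(focal_mm, overlap_pct, crop_factor=1.0, portrait=False):
    """Approximate number of photos needed to cover a 360 degree horizontal sweep."""
    width_mm = (24.0 if portrait else 36.0) / crop_factor
    single = math.degrees(2 * math.atan(width_mm / (2 * focal_mm)))
    step = single * (1 - overlap_pct / 100.0)   # new angle gained per photo
    return math.ceil(360.0 / step)

print(photos_for_360(50, 20, crop_factor=1.6, portrait=True))  # about 27 photos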

For a summary of when to consider each projection type, please refer to the table below:
Projection Type: Field of View Recommendations (Horizontal, Vertical); Straight Lines? (Horizontal, Vertical)

Rectilinear: <120°, <120°; YES, YES
Cylindrical: ~120-360°, <120°; NO, YES
Mercator: ~120-360°, <150°; NO, YES
Equirectangular: ~120-360°, 120-180°; NO, YES
Fisheye: <180°, <180°; NO, NO

Note: All straight line considerations exclude the centermost horizontal and vertical lines, and fields of view assume that the point of perspective is located at the center of this angle.
For background reading on creating digital panoramas, please also refer to:
Part 1: Photo Stitching Digital Panoramas
Part 2: Using Photo Stitching Software

33.OVERVIEW OF COLOR MANAGEMENT -


"Color management" is a process where the color characteristics for every device in the imaging chain is known precisely and utilized to better predict and control color reproduction. For digital photography, this imaging chain usually
starts with the camera and concludes with the final print, and may include a display device in between.
Many other imaging chains exist, but in general, any device which attempts to reproduce color from another device can benefit from color management. As a photographer, it is often critical that others see your work as it is intended to be seen. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur.

CONCEPT: THE NEED FOR REFERENCE COLORS


Color reproduction has a fundamental problem: different color numbers do not necessarily produce the same color in all devices. We use an example of spiciness to convey both why this creates a problem, and how it is overcome.
Let's say that you're at a restaurant with a friend and are about to order a spicy dish. Although you enjoy spiciness, your threshold for it is limited, and so you also wish to specify a pleasurable amount. The dilemma is this: a "mild" degree
of spiciness may represent one level of spice in Thailand, and a completely different level in England. Restaurants could standardize this by establishing that one pepper equals "mild," two equals "medium," and so on, however this would
not be universal. Spice varies not just with the number of peppers included in the dish, but also depends on how sensitive the taster is to each pepper. "Mild" would have a different meaning for you and your friend, in addition to meaning
something different at other restaurants.

To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at
restaurants which specifies that 3 equals "mild," 5 equals "medium," and so on (assuming that all peppers are the same).
Computers color manage using a similar principle. Color management requires a personalized table, or "color profile," for every device which associates each number with a measured color. This way, when a computer tries to
communicate colors with another device, it does not merely send numbers, but also specifies how those numbers are intended to appear. Color-managed software can then take this profile into account and adjust the numbers sent to the
device accordingly. The table below is an example similar to the personalized spiciness table you and your friend created, which compares the input number with an output color.

Input Number (Green): 200, 150, 100, 50 —> Output Color on Device 1 vs. Device 2 (shown as color swatches; the same number produces a different color on each device)

Real-world color profiles include all three colors, more data, and are often more sophisticated than in the above table. In order for these profiles to be useful, they have to be presented in a standardized way which can be read by all
programs.
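Conceptually, a single channel of such a profile behaves like a lookup table that maps device numbers to measured colors, with interpolation in between. The toy sketch below illustrates only that idea; real ICC profiles describe all three channels and are far more sophisticated, and the sample measurements here are invented purely for illustration.

def profile_lookup(value, table):
    """Linearly interpolate a measured output for a device value using a 1-D profile table."""
    points = sorted(table.items())   # [(device value, measured output), ...]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("value outside the measured range")

# Hypothetical measurements: device green values vs. measured luminance (cd/m^2).
device_profile = {50: 4.0, 100: 15.0, 150: 36.0, 200: 68.0}
print(profile_lookup(125, device_profile))   # interpolated measurement for green = 125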

COLOR MANAGEMENT OVERVIEW


The International Color Consortium (ICC) was established in 1993 to create an open, standardized color management system which is now used in most computers. This system involves three key concepts: color profiles, color spaces,
and translation between color spaces. A color space relates numbers to actual colors and contains all realizable color combinations. When trying to reproduce color on another device, color spaces can show whether you will be able to
retain shadow/highlight detail, color saturation, and by how much either will be compromised. The following diagram shows these concepts for conversion between two typical devices: a monitor and printer.

Input Device [RGB Profile (RGB Space)] —> Profile Connection Space —> Output Device [CMYK Profile (CMYK Space)]

The color profile keeps track of what colors are produced for a particular device's RGB or CMYK numbers, and maps these colors as a subset of the "profile connection space" (PCS). The PCS is a color space which is independent of any
device's particular color reproduction methods, and so it serves as a universal translator. The PCS is usually the set of all visible colors defined by the Commission Internationale de l'éclairage (CIE) and used by the ICC. The thin
trapezoidal region drawn within the PCS is what is called a "working space." The working space is used in image editing programs (such as Adobe Photoshop) and defines the set of colors available to work with when performing any
image editing.
Each step in the above chain specifies the available colors, and thereby defines a color space. If one device has a larger gamut of colors than another device can produce, some of that device's colors will be outside the other's color space.
These "out-of-gamut colors" occur with nearly every conversion and are called a "gamut mismatch." A color management module (CMM) performs all calculations needed to translate from one space into another, and is the workhorse of
color management. A gamut mismatch requires the CMM to make key approximations that are specified by a "rendering intent." The rendering intent is often specified manually and includes several options for how to deal with out-of-
gamut colors.
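To make the role of the profile connection space a little more concrete, the sketch below converts an 8-bit sRGB color into CIE XYZ (a common PCS) using the published sRGB transfer curve and matrix. A real CMM working with ICC profiles does considerably more than this, so treat it only as an illustration of the "device numbers to device-independent color" step.

def srgb_to_xyz(r8, g8, b8):
    """Convert 8-bit sRGB values to CIE XYZ (D65 white point)."""
    def linearize(c8):
        c = c8 / 255.0
        # Published sRGB transfer function (roughly gamma 2.2 with a linear toe).
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in (r8, g8, b8))
    # Published sRGB-to-XYZ matrix (rows give X, Y and Z).
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

print(srgb_to_xyz(255, 255, 255))   # approximately (0.9505, 1.0000, 1.089), the D65 white point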
This all may seem a bit confusing at first, so for a more in-depth explanation of color spaces, profiles, and rendering intent, please visit:
Color Management, Part 2: Color Spaces
Part 3: Color Space Conversion

34.COLOR MANAGEMENT: COLOR SPACES -


A color space relates numbers to actual colors, and is a three-dimensional object which contains all realizable color combinations. When trying to reproduce color on another device, color spaces can show whether you will be able to
retain shadow/highlight detail, color saturation, and by how much either will be compromised.
TYPES
Color spaces can be either dependent on or independent of a given device. Device-dependent color spaces describe color relative to the reproduction characteristics of a particular device, while device-independent color spaces express color in absolute terms. Device-dependent color spaces can tell you valuable information by describing the subset of colors which can be shown with a monitor or printer, or captured with a camera or scanner. Devices with a large color space, or "wide gamut," can realize more extreme colors, whereas the opposite is true for a device with a narrow gamut color space.

VISUALIZING COLOR SPACES


Each dimension in "color space" represents some aspect of color, such as lightness, saturation or hue, depending on the type of space. The two diagrams below show the outer surface of a sample color space from two different viewing
angles; its surface includes the most extreme colors of the space. The vertical dimension represents luminosity, whereas the two horizontal dimensions represent the red-green and yellow-blue shift. These dimensions could also be
described using other color properties.

Sample Color Space (and the same space rotated 180°)

The above color space is intended to help you qualitatively understand and visualize a color space, however it would not be very useful for real-world color management. This is because a color space almost always needs to be compared
to another space. In order to visualize this, color spaces are often represented by two-dimensional regions. These are more useful for everyday purposes since they allow you to quickly see the entire boundary of a given cross-section.
Unless specified otherwise, two-dimensional diagrams usually show the cross-section containing all colors which are at 50% luminance (a horizontal slice at the vertical midpoint for the color space shown above). The following diagram
shows three example color spaces: sRGB, Wide Gamut RGB, and a device-independent reference space. sRGB and Wide Gamut RGB are two working spaces sometimes used for image editing.

2D Color Space Comparison

(Colors at 50% Luminance)


What can we infer from a 2D color space comparison? Both the black and white outlines show the subset of colors which are reproducible by each color space, as a fraction of some device-independent reference space. Colors shown in
the reference color space are only for qualitative visualization, as these depend on how your display device renders color. In addition, the reference space almost always contains more colors than can be shown on a computer display.
For this particular diagram, we see that the "Wide Gamut RGB" color space contains more extreme reds, purples, and greens, whereas the "sRGB" color space contains slightly more blues. Keep in mind that this analysis only applies for
colors at 50% luminance, which is what occupies the midtones of an image histogram. If we were interested in the color gamut for the shadows or highlights, we could look at a similar 2D cross-section of the color space at roughly 25%
and 75% luminance, respectively.

REFERENCE SPACES
What is the device-independent reference space shown above? Nearly all color management software today uses a device-independent space defined by the Commission Internationale de l'éclairage (CIE) in 1931. This space aims to
describe all colors visible to the human eye based upon the average response from a set of people with no vision problems (termed a "standard colorimetric observer"). Nearly all devices are subsets of the visible colors specified by the
CIE (including your display device), and so any representation of this space on a monitor should be taken as qualitative and highly inaccurate.
The CIE space of visible color is expressed in several common forms: CIE xyz (1931), CIE L*a*b*, and CIE L u'v' (1976). Each contains the same colors, however they differ in how they distribute color onto a two-dimensional space:
CIE xy CIE a*b* CIE u'v'

(All color spaces shown are 2D cross-sections at 50% Luminance)


CIE xyz is based on a direct graph of the original X, Y and Z tristimulus functions created in 1931. The problem with this representation is that it allocates too much area to the greens. CIE L u'v' was created to correct for this distortion
by distributing colors roughly proportional to their perceived color difference. Finally, CIE L*a*b* transforms the CIE colors so that they extend equally on two axes-- conveniently filling a square. Furthermore, each axis in L*a*b*
color space represents an easily recognizable property of color, such as the red-green and blue-yellow shifts used in the 3D visualization above.
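For reference, all three of these representations can be computed from the same XYZ tristimulus values; the sketch below shows the standard formulas (a D65 reference white is assumed for L*a*b*). It is included only to show how the three diagrams relate and is not part of the original tutorial.

def xyz_to_xy(x, y, z):
    """CIE xy chromaticity coordinates (1931)."""
    s = x + y + z
    return x / s, y / s

def xyz_to_uv(x, y, z):
    """CIE u'v' chromaticity coordinates (1976)."""
    d = x + 15 * y + 3 * z
    return 4 * x / d, 9 * y / d

def xyz_to_lab(x, y, z, white=(0.9505, 1.0, 1.089)):
    """CIE L*a*b*, relative to a reference white (D65 assumed here)."""
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(c / w) for c, w in zip((x, y, z), white))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)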

WORKING SPACES
A working space is used in image editing programs (such as Adobe Photoshop), and defines the set of colors available to work with when performing any image editing. Two of the most commonly used working spaces in digital
photography are Adobe RGB 1998 and sRGB IEC61966-2.1. For an in-depth comparison for each of these color spaces, please see sRGB vs. Adobe RGB 1998.
Why not use a working space with the widest gamut possible? It is generally best to use a color space which contains all colors which your final output device can render (usually the printer), but no more. Using a color space with an
excessively wide gamut can increase the susceptibility of your image to posterization. This is because the bit depth is stretched over a greater area of colors, and so fewer bits are available to encode a given color gradation.
For further reading, please visit:
Color Management, Part 1
Color Management: Color Space Conversion (Part 3)

35.COLOR MANAGEMENT: COLOR SPACE CONVERSION -


Color space conversion is what happens when the color management module (CMM) translates color from one device's space to another. Conversion may require approximations in order to preserve the image's most important color
qualities. Knowing how these approximations work can help you control how the photo may change-- hopefully maintaining the intended look or mood.

Input Device [RGB Profile (RGB Space)] —> Profile Connection Space —> Output Device [CMYK Profile (CMYK Space)]
BACKGROUND: GAMUT MISMATCH & RENDERING INTENT
The translation stage attempts to create a best match between devices-- even when they are seemingly incompatible. If the original device has a larger color gamut than the final device, some of those colors will be outside the final device's color space. These "out-of-gamut colors" occur with nearly every conversion and are called a gamut mismatch.

RGB Color Space —> CMYK Color Space (Destination Space)

Each time a gamut mismatch occurs, the CMM uses the rendering intent to decide what qualities of the image it should prioritize. Common rendering intents include: absolute and relative colorimetric, perceptual, and saturation. Each of
these types maintains one property of color at the expense of others (described below).

PERCEPTUAL & RELATIVE COLORIMETRIC INTENT


Perceptual and relative colorimetric rendering are probably the most useful conversion types for digital photography. Each places a different priority on how they render colors within the gamut mismatch region. Relative colorimetric
maintains a near exact relationship between in gamut colors, even if this clips out of gamut colors. In contrast, perceptual rendering tries to also preserve some relationship between out of gamut colors, even if this results in inaccuracies
for in gamut colors. The following example demonstrates an extreme case for an image within a 1-D black-magenta color space:

Original Image (in a 1-D black-magenta color space); A = Wide Gamut Space, B = Narrow Gamut Space (Destination Space). Converted images shown for Relative Colorimetric and Perceptual intent.

Note how perceptual maintains smooth color gradations throughout by compressing the entire tonal range, whereas relative colorimetric clips out of gamut colors (at center of magenta globules and in the darkness between them). For 2D
and 3D color spaces, relative colorimetric maps these to the closest reproducible hue in the destination space.
Even though perceptual rendering compresses the entire gamut, note how it remaps the central tones more precisely than those at the edges of the gamut. The exact conversion depends on which CMM is used; Adobe ACE, Microsoft ICM and Apple ColorSync are some of the most common.
Another distinction is that perceptual does not destroy any color information-- it just redistributes it. Relative colorimetric, on the other hand, does destroy color information. This means that conversion using relative colorimetric
intent is irreversible, while perceptual can be reversed. This is not to say that converting from space A to B and then back to A again using perceptual will reproduce the original; this would require careful use of tone curves to reverse
the color compression caused by the conversion.
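The difference between the two intents can be illustrated with a one-dimensional toy model similar to the magenta example above: out-of-gamut values are either clipped (relative colorimetric) or the whole range is compressed (perceptual). Real CMMs perform far more elaborate, nonlinear gamut mapping in three dimensions, so the sketch below is purely conceptual.

def relative_colorimetric(value, dest_max):
    """Clip out-of-gamut values; in-gamut values are left untouched (information is destroyed)."""
    return min(value, dest_max)

def perceptual(value, source_max, dest_max):
    """Compress the entire source range into the destination range (information is preserved)."""
    return value * dest_max / source_max

source = [0, 40, 80, 120, 160, 200]   # tones spanning a wide source gamut (maximum 200)
dest_max = 120                        # narrower destination gamut
print([relative_colorimetric(v, dest_max) for v in source])    # [0, 40, 80, 120, 120, 120]
print([round(perceptual(v, 200, dest_max)) for v in source])   # [0, 24, 48, 72, 96, 120]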

ABSOLUTE COLORIMETRIC INTENT


Absolute is similar to relative colorimetric in that it preserves in gamut colors and clips those out of gamut, but they differ in how each handles the white point. The white point is the location of the purest and lightest white in a color
space (also see discussion of color temperature). If one were to draw a line between the white and black points, this would pass through the most neutral colors.

3D Color Space and 2D Cross-Section (two spaces at 50% luminance)
The location of this line often changes between color spaces, as shown by the "+" on the top right. Relative colorimetric skews the colors within gamut so that the white point of one space aligns with that of the other, while absolute
colorimetric preserves colors exactly (without regard to changing white point). To illustrate this, the example below shows two theoretical spaces that have identical gamuts, but different white points:

Color Space #1 vs. Color Space #2 (white points marked), converted using Absolute Colorimetric and Relative Colorimetric intent.

Absolute colorimetric preserves the white point, while relative colorimetric actually displaces the colors so that the old white point aligns with the new one (while still retaining the colors' relative positions). The exact preservation of
colors may sound appealing, however relative colorimetric adjusts the white point for a reason. Without this adjustment, absolute colorimetric results in unsightly image color shifts, and is thus rarely of interest to photographers.
This color shift results because the white point of the color space usually needs to align with that of the light source or paper tint used. If one were printing to a color space for paper with a bluish tint, absolute colorimetric would ignore
this tint change. Relative colorimetric would compensate colors to account for the fact that the whitest and lightest point has a tint of blue.

SATURATION INTENT
Saturation rendering intent tries to preserve saturated colors, and is most useful when trying to retain color purity in computer graphics when converting into a larger color space. If the original RGB device contained pure (fully saturated)
colors, then saturation intent ensures that those colors will remain saturated in the new color space-- even if this causes the colors to become relatively more extreme.

Pie chart with fully saturated cyan, blue, magenta and red:

Saturation intent is not desirable for photos because it does not attempt to maintain color realism. Maintaining color saturation may come at the expense of changes in hue and lightness, which is usually an unacceptable trade-off
for photo reproduction. On the other hand, this is often acceptable for computer graphics such as pie charts.
Another use for saturation intent is to avoid visible dithering when printing computer graphics on inkjet printers. Some dithering may be unavoidable as inkjet printers never have an ink to match every color, however saturation intent can
minimize those cases where dithering is sparse because the color is very close to being pure.

Visible dithering due to lack of fully saturated colors:

PAY ATTENTION TO IMAGE CONTENT


One must take the range of image colors present into account; just because an image is defined by a large color space does not mean that it actually utilizes all of those extreme colors. If the destination color space fully encompasses the
image's colors (despite being smaller than the original space), then relative colorimetric will yield a more accurate result.
Example Image:

The above image barely utilizes the gamut of your computer display device, which is actually typical of many photographic images. If one were to convert the above image into a destination space which had less saturated reds and
greens, this would not place any image colors outside the destination space. For such cases, relative colorimetric would yield more accurate results. This is because perceptual intent compresses the entire color gamut-- regardless of
whether these colors are actually utilized.

SHADOW & HIGHLIGHT DETAIL IN 3D COLOR SPACES


Real-world photographs utilize three-dimensional color spaces, even though up until now we have been primarily analyzing spaces in one and two dimensions. The most important consequence of rendering intent on 3D color spaces is
how it affects shadow and highlight detail.
If the destination space can no longer reproduce subtle dark tones and highlights, this detail may be clipped when using relative/absolute colorimetric intent. Perceptual intent
compresses these dark and light tones to fit within the new space, however it does this at the cost of reducing overall contrast (relative to what would have been produced with
colorimetric intent).
The conversion difference between perceptual and relative colorimetric is similar to what was demonstrated earlier with the magenta image. The main difference is that now the compression or clipping
occurs in the vertical dimension-- for shadows and highlight colors. Most prints cannot produce the range of light to dark that we may see on our computer display, so this aspect is of particular importance
when making a print of a digital photograph.

Using the "black point compensation" setting can help avoid shadow clipping-- even with absolute and relative colorimetric intents. This is available in the conversion properties of nearly all software which supports color management
(such as Adobe Photoshop).
RECOMMENDATIONS
So which is the best rendering intent for digital photography? In general, perceptual and relative colorimetric are best suited for photography because they aim to preserve the same visual appearance as the original.
The decision about when to use each of these depends on image content and the intended purpose. Images with intense colors (such as bright sunsets or well-lit floral arrangements) will preserve more of their color gradation in extreme
colors using perceptual intent. On the other hand, this may come at the expense of compressing or dulling more moderate colors. Images with more subtle tones (such as some portraits) often stand to benefit more from the increased
accuracy of relative colorimetric (assuming no colors are placed within the gamut mismatch region). Perceptual intent is overall the safest bet for general and batch use, unless you know specifics about each image.
For related reading, please visit:
Color Management, Part 1
Color Management: Color Spaces (Part 2)

36.sRGB vs. ADOBE RGB 1998 -


Adobe RGB 1998 and sRGB IEC61966-2.1 (sRGB) are two of the most common working spaces used in digital photography. This section aims to clear up some of the confusion associated with sRGB and Adobe RGB 1998, and to
provide guidance on when to use each working space.

BACKGROUND
sRGB is an RGB color space proposed by HP and Microsoft because it approximates the color gamut of the most common computer display devices. Since sRGB serves as a "best guess" for how another person's monitor produces color,
it has become the standard color space for displaying images on the internet. sRGB's color gamut encompasses just 35% of the visible colors specified by CIE (see section on color spaces). Although sRGB results in one of the narrowest
gamuts of any working space, sRGB's gamut is still considered broad enough for most color applications.
Adobe RGB 1998 was designed (by Adobe Systems, Inc.) to encompass most of the colors achievable on CMYK printers, but by using only RGB primary colors on a device such as your computer display. The Adobe RGB 1998
working space encompasses roughly 50% of the visible colors specified by CIE-- improving upon sRGB's gamut primarily in cyan-greens.

GAMUT COMPARISON
The following color gamut comparison aims to give you a better qualitative understanding of where the gamut of Adobe RGB 1998 extends beyond sRGB for shadow (~25%), midtone (~50%), and highlight colors (~75%).
sRGB IEC61966-2.1 Adobe RGB 1998

25% Luminance 50% Luminance 75% Luminance

Comparison uses CIE L*a*b* reference space; colors are only qualitative to aid in visualization.
Note how Adobe RGB 1998 extends into richer cyans and greens than does sRGB-- for all tonal levels. The 50% luminance diagram is often used to compare these two working spaces, however the shadow and highlight diagrams also
deserve attention. Adobe RGB 1998 extends its advantage in the cyan-greens for the highlights, but now has advantages with intense magentas, oranges, and yellows-- colors which can add to the drama of a bright sunset. Adobe RGB
1998 does not extend as far beyond sRGB in the shadows, however it still shows advantages in the dark greens (often encountered with dark foliage).

IN PRINT
All of these extra colors in Adobe RGB 1998 are great to have for viewing on a computer monitor, but can we actually reproduce them in a print? It would be a shame to edit using these extra colors, only to later retract their intensity due
to printer limitations. The following diagrams compare sRGB and Adobe RGB 1998 with two common printers: a Fuji Frontier (390) and a high-end inkjet printer with 8 inks (Canon iP9900 on Photo Paper Pro). A Fuji Frontier printer
is what large companies such as Walmart use for making their prints.
sRGB IEC61966-2.1 Adobe RGB 1998

25% Luminance 50% Luminance 75% Luminance

Select Printer Type: Fuji Frontier High-End Inkjet

Comparison uses CIE L*a*b* reference space; colors are only qualitative to aid in visualization.
We see a big difference in how each printer uses the additional colors provided by Adobe RGB 1998: The Fuji Frontier only uses a small patch of yellow in the highlights, whereas the high-end inkjet printer exceeds sRGB for colors in
shadows, midtones, and highlights. The high-end inkjet even exceeds the gamut of Adobe RGB 1998 for cyan-green midtones and yellow highlights.
The printer should also be considered when choosing a color space, as this can have a big influence on whether the extra colors are utilized. Most mid-range printer companies provide a downloadable color profile for their printer. This
color profile can help you achieve similar conclusions to those visible in the above analysis.

INFLUENCE ON BIT DEPTH DISTRIBUTION


Since the Adobe RGB 1998 working space clearly provides more colors to work with, why not just use it in every situation? Another factor to consider is how each working space influences the distribution of your image's bit depth.
Color spaces with larger gamuts "stretch" the bits over a broader region of colors, whereas smaller gamuts concentrate these bits within a narrow region. Consider the following green "color spaces" on a line:

Large Gamut vs. Small Gamut (two ranges of green along a line)

If our image contained only shades of green in the small gamut color space, then we would be wasting bits by allocating them to encode colors outside the small gamut. For a limited bit depth which encodes all colors within the large gamut, any bits which fall outside the small gamut are wasted; if all bits were instead concentrated within the smaller gamut, every bit would encode a usable color gradation.

A similar concentration of bit depth occurs with sRGB versus Adobe RGB 1998, except in three dimensions, and not quite as dramatic as demonstrated above. Adobe RGB 1998 occupies roughly 40% more volume than sRGB, so you
are only utilizing 70% of your bit depth if the colors in Adobe RGB 1998 are unnecessary (for evenly spaced bits). On the other hand, you may have plenty of "spare" bits if you are using a 16-bit image, and so any reduction due to your
choice of working space might be negligible.
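The roughly 70% figure quoted above follows directly from the ratio of gamut volumes, assuming evenly spaced bits, as the quick calculation below shows.

adobe_rgb_volume = 1.4   # roughly 40% more volume than sRGB (sRGB taken as 1.0)
srgb_volume = 1.0
# Fraction of evenly spaced encoding values that land inside an sRGB-sized region:
print(srgb_volume / adobe_rgb_volume)   # ~0.71, i.e. roughly 70% of the bit depth is utilized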

SUMMARY
My advice is to know which colors your image uses, and whether these can benefit from the additional colors afforded by Adobe RGB 1998. Ask yourself: do you really need the richer cyan-green midtones, orange-magenta highlights, or
green shadows? Will these colors also be visible in the final print? Will these differences even be noticeable? If you've answered "no" to any of these questions, then you would be better served using sRGB. sRGB will make the most of
your bit depth because it allocates more bits to encoding the colors present in your image. In addition, sRGB can simplify your workflow since this color space is also used for displaying images on the internet.
What if you desire a speedy workflow, and do not wish to decide on your working space using a case-by-case method? My advice is to use Adobe RGB 1998 if you normally work with 16-bit images, and sRGB if you normally work
with 8-bit images. Even if you may not always use the extra colors, you never want to eliminate them as a possibility for those images which require them.

OTHER NOTES
It is apparent that Adobe RGB 1998 has a larger gamut than sRGB, but by how much? Adobe RGB is often depicted as having a superior gamut in greens, however this can be misleading and results mainly from the use of the CIE xyz reference space. Consider the following comparison:
sRGB IEC61966-2.1 vs. Adobe RGB 1998, compared in CIE xy (exaggerates the difference in greens) and CIE u'v' (closer to the eye's perceived difference)

When the two are compared using the CIE u'v' reference space, the advantage in greens becomes less apparent. In addition, the diagram on the right now shows Adobe RGB 1998 having similar advantages in both the cyans and greens--
better representing the relative advantage we might perceive with our eyes. Care should be taken to also consider the influence of a reference space when drawing conclusions from any color space comparison diagram.

37.TUTORIALS: PHOTOSHOP LEVELS -


Levels is a tool in Photoshop and other image editing programs which can move and stretch the brightness levels of an image histogram. It has the power to adjust brightness, contrast, and tonal range by specifying the location of
complete black, complete white, and midtones in a histogram. Since every photo's histogram is unique, there is no single way to adjust the levels for all your photos. A proper understanding of how to adjust the levels of an image
histogram will help you better represent tones in the final image.
HOW IT WORKS
The levels tool can move and stretch brightness levels in a histogram using three main components: a black point, a white point and a midtone slider. The positions of the black and white point sliders redefine the histogram's "Input Levels" so they are mapped to the "Output Levels" (by default black (0) and white (255), respectively), whereas the midtone slider redefines the location of middle gray (128). Each slider is shown below as it appears in Photoshop's levels tool, with added blue labels for clarity:

All examples below will use the levels tool on an RGB histogram, although levels can also be performed on other types of histograms. Levels can be performed on an individual color channel by changing the options within the "Channel"
box at the top.
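For readers who like to see the arithmetic, the following is a rough sketch of the mapping the three sliders perform. Photoshop's internal implementation is not documented here; this is simply the commonly cited levels formula, with the midtone slider treated as a gamma adjustment (the function and variable names are illustrative):

```python
import numpy as np

def apply_levels(values, in_black=0, in_white=255, gamma=1.0,
                 out_black=0, out_white=255):
    """Map 8-bit values using the black point, white point, midtone (gamma)
    and output levels -- a sketch of what a typical levels tool computes."""
    v = np.asarray(values, dtype=float)
    # 1. Clip and normalize to the input black/white points
    v = np.clip((v - in_black) / (in_white - in_black), 0.0, 1.0)
    # 2. Midtone (gamma) adjustment: gamma > 1 brightens, gamma < 1 darkens
    v = v ** (1.0 / gamma)
    # 3. Remap to the output levels
    return np.round(v * (out_white - out_black) + out_black).astype(np.uint8)

# Example: stretch a washed-out histogram (20..230) to the full 0..255 range
print(apply_levels([20, 128, 230], in_black=20, in_white=230))  # -> [  0 131 255]
```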

ADJUSTING THE BLACK AND WHITE POINT LEVELS


When considering adjusting the black and white point levels of your histogram, ask yourself: is there any region in the image which should be completely black or white, and does the image histogram show this?
Most images look best when they utilize the full range from dark to light which can be displayed on your screen or in a print. This means that it is often best to perform levels such that the histogram extends all the way from black (0) to white
(255). Images which do not extend to fill the entire tonal range often look washed out and can lack impact. The image below was taken in direct sunlight and includes both bright clouds and dark stone shadows-- an example of where
there should be at least some regions that are portrayed as nearly white or black. This histogram can be extended to fill the entire tonal range by adjusting the levels sliders as shown:

Histogram Before Levels Histogram After Levels

Lower Contrast Higher Contrast


On the other hand, be wary of developing a habit of simply pushing the black and white point sliders to the edges of the histogram-- without also paying attention to the content of your image. Images taken in fog, haze or very soft light
often never have fully black or white regions. Adjusting levels for such images can ruin the mood and make your image less representative of the actual scene by making it appear as though the lighting is harsher than it actually was.
One should also be cautious when moving the black and white point sliders to the edge of the histogram, as these can easily clip the shadows and highlights. A histogram may contain highlights or shadows that are shown with a height of
just one pixel, and these are easily clipped. This is often the case with low-key images (see histograms tutorial), such as the example shown below:

Histogram Before Levels Histogram After Levels

No Pixel at Full Brightness Stronger Highlights

Holding down the "ALT" key while dragging the black or white point slider is a trick which can help avoid shadow or highlight clipping, respectively. If I were to have dragged the highlight slider above to a point which was further left

(a level of 180 was used, versus 235 above), while simultaneously holding down ALT, the image would have appeared as follows:
If the preview remains entirely black while dragging the white point slider (or entirely white while dragging the black point slider), then no clipping has occurred. When the slider is dragged to where there are counts on the histogram, the regions of the image which have become clipped get highlighted as shown above. This can be quite useful because knowing where the clipping will occur can help one assess whether it will actually be detrimental to the artistic intent of the image. Keep in mind though that clipping shown while dragging a slider on an RGB histogram does not necessarily mean that region has become completely white-- only that at least one of the red, green, or blue color channels has reached its maximum of 255.

ADJUSTING THE MIDTONE LEVEL


Moving the midtone slider compresses or stretches the tones to the left or right of the slider, depending on which direction it is moved. Movement to the left stretches the histogram to its right and compresses the histogram to its left (thereby brightening the image by stretching out the shadows and compressing the highlights), whereas movement to the right performs the opposite. The midtone slider's main use is therefore to brighten or darken the midtones within an image.
When else should one use the midtone slider? Consider the following scenario: your image should contain full black and white, and even though the histogram extends to full black, it does not extend to white. If you move the white point
slider so that it reaches the edge of the histogram, you end up making the image much brighter and overexposed. Using the midtone slider in conjunction with the white point slider can help you maintain the brightness in the rest of your
image, while still stretching the highlights to white:
Histogram Before Levels Histogram After Levels

Stronger Highlights
Sky Not At Full Brightness
Similar Overall Brightness

Note how the sky became more pronounced, even though the overall brightness of the image remained similar. If the midtones tool were not used, the image to the right would have appeared very overexposed. The same method could be
used to darken the shadows while maintaining midtones, except the midtones slider would instead be moved to the left.
Note: Even though the midtone slider initially sits at level 128, it is shown as 1.00 to avoid confusion when the black and white points change. This way, the midtone slider reads 1.00 even when the other sliders have been moved. The midtone "Input Level" number actually represents the gamma adjustment, which can be thought of as a relative measure of how many levels lie on either side of the slider. Values greater than one mean there are more levels to the slider's right, whereas values less than one mean there are more levels to its left.

ADJUSTING LEVELS WITH THE DROPPER TOOLS


The histogram levels can also be adjusted using the dropper tools, shown below in red:

One can use the dropper tools on the far left and right to set the black and white points by clicking on locations within the image that should be either black or white, respectively. This is often not as precise as using the sliders, because
one does not necessarily know whether clicking on a given point will clip the histogram. The black and white point droppers are more useful for computer-generated graphics as opposed to photos.
Unlike the black and white point droppers, the middle dropper tool does not perform the same function as the midtone slider. The middle dropper actually sets the "gray point," which is a region of the image that should be colorless. This is useful when there is a colorless reference object within your scene; one can click on it with the dropper tool to remove color casts by setting the white balance. On the other hand, it is better to perform a white balance on the RAW file when possible, since this reduces the risk of posterization.

OTHER USES FOR THE LEVELS TOOL


The levels tool can be applied to any type of image histogram in addition to the RGB histograms shown above, including luminance and color histograms. Performing levels on a luminance histogram can be useful to increase contrast without also influencing color saturation, whereas levels on a color histogram can change the color balance for images which suffer from unrealistic color casts (such as those with an incorrect white balance).
Levels can also be used to decrease the contrast in an image by modifying the "Output Levels" instead of the "Input Levels." This can be a useful step before performing techniques such as local contrast enhancement since it avoids
clipping (because this technique may darken or brighten the darkest or brightest regions, respectively), or when your image contains too much contrast.

PRECAUTIONS
• Minimize use of the levels tool, as anything which stretches the image histogram increases the possibility of posterization.
• Performing levels on a luminance histogram can easily clip an individual color channel, although this may also allow for darker and brighter black and white points, respectively.
• Performing levels on an individual color histogram or channel can adversely affect the color balance, so color channel levels should only be performed when necessary or intentional color shifts are
desired.

38.TUTORIALS: PHOTOSHOP CURVES -


The Photoshop curves tool is perhaps the most powerful and flexible image transformation, yet it may also be one of the most intimidating. Since photographers effectively paint with light, curves is central to their practice because it affects the two most important qualities of that light: tone and contrast. Tonal curves are also what give different film types their unique character, so understanding how they work allows one to mimic any film-- without ever having to retake the photograph.

HOW IT WORKS
Similar to Photoshop levels, the curves tool can take input tones and selectively stretch or compress them. Unlike levels however, which only has black, white and midpoint control, a tonal curve is controlled using any number of anchor
points (small squares below, up to a total of 16). The result of a given curve can be visualized by following a test input tone up to the curve, then over to its resulting output tone. A diagonal line through the center will therefore leave
tones unchanged.

If you follow two spaced input tones, note that their separation becomes stretched as the slope of the curve increases, whereas tones get compressed when the slope decreases (compared to the original diagonal line). Recall from the
image histogram tutorial that compressed tones receive less contrast, whereas stretched tones get more contrast. Move your mouse over the curve types below to see how these changes affect this exaggerated example:

Note: curves and histograms shown above are applied to and shown for luminosity (not RGB)
The curves shown above are two of the most common: the "S-curve" and "inverted S-curve." An S-curve adds contrast to the midtones at the expense of shadows and highlights, whereas the inverted S-curve does the opposite. Note how
these change the histogram and most importantly, also notice how these changes influence the image: reflection detail on the side and underside of the boat become clearer for the inverted S-curve while water texture becomes more
washed out (and the opposite for the S-curve).
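As a rough numerical illustration of how a curve redistributes tones, the sketch below builds an S-curve from a few anchor points with a monotonic spline and applies it through a lookup table. The anchor values are made up, and Photoshop's exact spline is not reproduced here:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator  # monotonic: tonal hierarchy is preserved

# Anchor points for a gentle S-curve: (input tone, output tone) on a 0-255 scale
anchors_in  = [0, 64, 128, 192, 255]
anchors_out = [0, 48, 128, 208, 255]   # shadows pulled down, highlights pushed up

curve = PchipInterpolator(anchors_in, anchors_out)

def apply_curve(image_8bit):
    """Apply the tonal curve via a 256-entry lookup table, as curve tools do."""
    lut = np.clip(curve(np.arange(256)), 0, 255).astype(np.uint8)
    return lut[image_8bit]

# Example: shadow tones move closer together (compressed), midtones move apart (stretched)
tones = np.array([16, 32, 112, 144], dtype=np.uint8)
print(apply_curve(tones))
```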

MOTIVATION: DYNAMIC RANGE & FILM CURVES


Why redistribute contrast if this is always a trade-off? Since actual scenes contain a greater lightness range (dynamic range) than we can reproduce on paper, one always has
to compress the tonal range to reproduce it in a print. Curves allows us to better utilize limited dynamic range.
Midtone contrast is perceptually more important, so the shadows and highlights usually end up bearing the bulk of this tonal compression. Most films and photo papers therefore use something similar to an S-curve to maintain midtone contrast. Move your mouse over the image (right) to see how an S-curve can help maintain contrast in the midtones, and note its similarity to an actual film curve (below). Each film's unique character is primarily defined by its tonal curve.
Furthermore, while our camera (ideally) estimates the relative number of photons hitting each pixel, our eyes/brain actually apply a tonal curve of their own to achieve the maximum visual sensitivity over the greatest lightness range. The camera therefore has to apply its own tonal curve to the RAW file data to maintain accuracy. On top of this, each type of digital sensor has its own tonal response curve, and even PC/Mac computers apply a different tonal curve when displaying images (gamma).
In summary: tonal curves are required for every image in one form or another-- whether this be by our eyes, the film emulsion, digital camera, display device or in post-processing.

(Shown for Kodak Supra II Paper)

IN PRACTICE: OVERVIEW
The key concept with curves is that you can never add contrast in one tonal region without also decreasing it in another. In other words, the curves tool only redistributes contrast. All photographs therefore have a "contrast budget"
and you must decide how to spend it-- whether this be by spreading contrast evenly (straight diagonal line) or by unequal allocation (varying slope).
Furthermore, curves always preserves the tonal hierarchy (unless uncommon curves with negative slope are used). This means that if a certain tone was brighter than another before the conversion, it will still be brighter afterwards--
just not necessarily by the same amount.

Three anchor points shown above (each for shadows, midtones and highlights) are generally all that is needed (in addition to the black and white points). A tricky aspect is that even minor movement in an anchor point can result in major
changes in the final image. Abrupt changes in slope can easily induce posterization by stretching tones in regions with gradual tonal variation. Therefore moderate adjustments which produce smooth curves usually work best. If you
need extra fine-tuning ability, try enlarging the size of the curves window.
Pay close attention to the image histogram when making adjustments. If you want to increase contrast in a certain tonal peak, use the histogram to ensure that the region of greater slope falls on top of this peak. I prefer to open the
histograms window (Window > Histogram) to see live changes as I drag each anchor point.

UTILIZING EMPTY TONAL RANGE


The exception to the contrast trade-off is when you have unused tonal range, either at histogram edges or as gaps in between tonal peaks. If these gaps are at the histogram's edges, this unused tonal range can be utilized with the black and
white anchor points (as with levels tool).
BEFORE AFTER

If the gaps occur in between tonal peaks, then a unique ability with curves is that it can decrease contrast in these unused tones-- thereby freeing up contrast to be spent on tones which are actually present in the image. The next example
uses a curve to close the tonal gap between the sky and darker foliage.

BEFORE AFTER

Note how this produces an overall smoother toned image, and that the midtones and highlights remain more or less unchanged on the histogram.

TRANSITION OF CLIPPED HIGHLIGHTS


Digital photos may abruptly clip their highlights once the brightness level reaches its maximum (255 for 8-bit images). This can create an unrealistic look, and often a smoother transition to white is preferred. Move your mouse over the
image to see the difference.
(Above results achieved with a custom color profile curve.)
Note the transition at the sun's border. In general, the highlight transition can be made more gradual by decreasing the curve's slope at the far upper right corner.

LIGHTNESS CHANNEL & ADJUSTMENT LAYERS


Performing curves to just the lightness/luminosity channel - either in LAB mode or as an adjustment layer - can help reduce changes in hue and color saturation. Move your mouse over each of the images below to see what would have
happened if this curve had been applied to the RGB channel.

Inverted S-Curve S-Curve

Note how color saturation is greatly decreased and increased for the inverted S-curve and the regular S-curve, respectively. In general, curves with a large slope in the midtones will increase color saturation, whereas a small slope will
decrease it. Changes in saturation may be desirable when brightening shadows, but in most other instances this should be avoided.
Adjustment layers (Layer > New Adjustment Layer > Curves...) can be set to make curves only apply to the luminosity channel by choosing a different blending mode
(right).
Another benefit is that it can make your curves adjustment more subtle. This is accomplished by reducing the opacity appropriately (circled in red above). This is particularly useful because
small changes in anchor points sometimes yield too much of a change in the image. Finally, you can continually fine-tune the curve without changing the actual image levels each time-- thereby
reducing posterization.

USING CURVES TO CORRECT COLOR BALANCE


Although all curves thus far have been applied to RGB values or luminosity, they can also be used on individual color channels as a powerful way of correcting color casts in specific tonal areas. Let's say your image had a bluish color
cast in the shadows, however both the midtones and highlights appeared balanced. Changing the white balance or adjusting the overall color to fix the shadows would inadvertently harm the other tones.
BEFORE AFTER

The above example selectively decreases the amount of blue in the shadows to fix the bluish color cast. Make sure to apply anchor points along the diagonal for all tonal regions which you do not wish to change. If you do not require
precise color adjustments, the curves tool is probably overkill. In such cases a color balance correction would be much easier ("Image > Adjustments > Color Balance..." in Photoshop).
Alternatively, overall color casts can be fixed using the "Snap Neutral Midtones" setting under the options button. This works best for images whose average midtone color is roughly neutral; photos with an overabundance of one color
(such as one taken within a forest) should use other methods such as white balance in RAW or with the levels tool.

NOTES ON IMAGE CONTRAST


This tutorial has discussed contrast as if it were always desirable, however this depends on subject matter, atmosphere and artistic intent. There may be cases where one would wish to deliberately avoid using the entire tonal range. These may include images taken in fog, haze or very soft light, as such scenes often have no fully black or white regions. Contrast can emphasize texture or enhance subject-background separation, however harsh or overcast light can result in too much or too little contrast, respectively.
PRECAUTIONS
• Minimize use of the curves tool, as anything which stretches the image histogram increases the possibility of posterization.
• Always perform curves on 16-bit images when possible.
• Extreme curves adjustments in the RGB channel should be avoided; for such cases perform curves using the lightness channel in an adjustment layer or LAB mode to avoid significant changes in hue and saturation.

39.SHARPENING: UNSHARP MASK -


An "unsharp mask" is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help you emphasize texture and detail, and is critical when post-processing most digital images. Unsharp masks
are probably the most common type of sharpening, and can be performed with nearly any image editing software (such as Photoshop). An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by
increasing small-scale acutance.

CONCEPT
The sharpening process works by utilizing a slightly blurred version of the original image. This is then subtracted away from the original to detect the presence of edges, creating the unsharp mask (effectively a high-pass filter). Contrast
is then selectively increased along these edges using this mask-- leaving behind a sharper final image.

Step 1: Detect Edges and Create Mask -- Original minus Blurred Copy equals the Unsharp Mask.
Step 2: Increase Contrast at Edges -- a Higher Contrast Original, applied through the Unsharp Mask as a mask overlay, gives the Sharpened Final Image.
Note: The "mask overlay" is when image information from the layer above the unsharp mask passes through and replaces the layer below in a way which is proportional to the brightness in that region of the mask. The upper image does
not contribute to the final for regions where the mask is black, while it completely replaces the layer below in regions where the unsharp mask is white.
If the resolution in the above image is not increasing, then why is the final text so much sharper? We can better see how it works if we magnify and examine the edge of one of these letters as follows:

Original Sharpened

Note how it does not transform the edges of the letter into an ideal "step," but instead exaggerates the light and dark edges of the transition. An unsharp mask improves sharpness by increasing acutance, although resolution remains the
same (see sharpness: resolution and acutance).
Note: Unsharp masks are not new to photography. They were traditionally performed with film by utilizing a softer, slightly out of focus image (which would act as the unsharp mask). The positive of the unsharp mask was then
sandwiched with the negative of the original image and made into a print. This was used more to enhance local contrast than small-scale detail.

BIOLOGICAL MOTIVATION
Why are these light and dark over/undershoots so effective at increasing sharpness? It turns out that an unsharp mask is actually utilizing a trick performed by our own human visual system. The human eye sees what are called "Mach
bands" at the edges of sharp transitions, named after their discovery by physicist Ernst Mach in the 1860's. These enhance our ability to discern detail at an edge. Move your mouse on and off of the following image to see the mach band
effect:

(Alternating with a smooth gradient enhances the mach band effect)


Note how the brightness within each step of the gradient does not appear constant. On the right side of each step you will notice it is lighter, whereas on the left it is darker-- very similar to the behavior of an unsharp mask. Move your
mouse over the plot below to see what is happening:

IN PRACTICE
Fortunately, sharpening with an unsharp mask in Photoshop and other image editing programs is quick and easy. It can be accessed in Adobe Photoshop by clicking on the following drop-down menus: Filter > Sharpen > Unsharp Mask.
Using the unsharp mask requires understanding its three settings: "Amount," "Radius," and "Threshold."
Amount is usually listed as a percentage, and controls the magnitude of each overshoot. This can also be thought of as how much contrast is added at the edges.
Radius controls the amount to blur the original for creating the mask, shown by "blurred copy" in the TEXT illustration above. This affects the size of the edges you wish to enhance, so a smaller radius
enhances smaller-scale detail.
Threshold sets the minimum brightness change that will be sharpened. This is equivalent to clipping off the darkest non-black pixel levels in the unsharp mask. The threshold setting can be used to
sharpen more pronounced edges, while leaving more subtle edges untouched. This is especially useful to avoid amplifying noise, or to sharpen an eye lash without also roughening the texture of skin.
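The three settings map fairly directly onto the underlying arithmetic. Below is one common way to implement them in Python with Pillow and NumPy; this is a generic unsharp mask sketch, not Photoshop's exact code, and the threshold handling in particular varies between programs:

```python
import numpy as np
from PIL import Image, ImageFilter

def unsharp_mask(img, amount=1.0, radius=1.0, threshold=3):
    """Generic unsharp mask: sharpened = original + amount * (original - blurred),
    applied only where the local difference exceeds the threshold."""
    original = np.asarray(img, dtype=float)
    blurred = np.asarray(img.filter(ImageFilter.GaussianBlur(radius)), dtype=float)
    mask = original - blurred                    # the "unsharp mask" (edge detail)
    keep = np.abs(mask) >= threshold             # threshold: ignore subtle differences
    sharpened = original + amount * mask * keep
    return Image.fromarray(np.clip(sharpened, 0, 255).astype(np.uint8))

# Usage with a hypothetical file name:
# unsharp_mask(Image.open("photo.jpg"), amount=1.5, radius=1.0, threshold=3).save("sharp.jpg")
```

Pillow also ships a ready-made filter with the same three controls, ImageFilter.UnsharpMask(radius, percent, threshold), which is used in later sketches.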

COMPLICATIONS
Unsharp masks are wonderful at sharpening images, however too much sharpening can also introduce "halo artifacts." These are visible as light/dark outlines or halos near edges. Halo artifacts become a problem when the light and dark over and undershoots become so large that they are clearly visible at the intended viewing distance.

Soft Original | Mild Sharpening | Over Sharpening (Visible Halos)

Remedies: The appearance of halos can be greatly reduced by using a smaller radius value for the unsharp mask. Alternatively, one could employ one of the more advanced sharpening techniques (coming soon).
Another complication of using an unsharp mask is that it can introduce subtle color shifts. Normal unsharp masks increase the over and undershoot of the RGB pixel values similarly, as opposed to only increasing the over and
undershoots of luminance. In situations where very fine color texture exists, this can selectively increase some colors while decreasing others. Consider the following example:

Soft Original | Normal RGB Sharpening (Visible Cyan Outline) | Luminance Sharpening

When red is subtracted away from the neutral gray background at the edges (middle image), this produces cyan color shifts where the overshoot occurs (see subtractive color mixing). If the unsharp mask were only performed on the
luminance channel (right image), then the overshoot is light red and the undershoot (barely visible) becomes dark red-- avoiding the color shift.
Remedies: Color shifts can be avoided entirely by performing the unsharp mask within the "lightness" channel in LAB mode. A better technique, which avoids converting between color spaces and minimizes posterization, is to:
1) Create a duplicate layer
2) Sharpen this layer like normal using the unsharp mask
3) Blend sharpened layer with the original using "luminosity" mode in the layers window
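Outside of Photoshop, a rough approximation of the same idea is to sharpen only a luminance-like channel and leave the color channels untouched. The sketch below (Python/Pillow) uses the Y channel of a YCbCr conversion, which is not identical to Photoshop's luminosity blend or LAB lightness, but it avoids the color shifts in much the same way:

```python
from PIL import Image, ImageFilter

def sharpen_luminance_only(img, radius=1.0, percent=150, threshold=3):
    """Sharpen only the Y (luma) channel of a YCbCr version of the image,
    leaving the chroma channels untouched to avoid color shifts."""
    y, cb, cr = img.convert("YCbCr").split()
    y = y.filter(ImageFilter.UnsharpMask(radius=radius, percent=percent,
                                         threshold=threshold))
    return Image.merge("YCbCr", (y, cb, cr)).convert("RGB")

# Usage with a hypothetical file name:
# sharpen_luminance_only(Image.open("photo.jpg")).save("photo_sharp.jpg")
```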
REAL-WORLD EXAMPLE
Move your mouse over "unsharp mask" and "sharpened" to see how the sharpened image compares with the softer original image. The difference can often be quite striking.

Original Unsharp Mask Sharpened

(Unsharp mask brightened slightly to increase visibility)

RECOMMENDED READING
• For a more practical discussion, also see this website's Guide to Image Sharpening.
• Learn another use for an unsharp mask with "local contrast enhancement"
• Alternatively, learn about their use in large format film photography at:
http://www.largeformatphotography.info/unsharp/

40.IMAGE RESIZING FOR THE WEB & EMAIL -


The web and email are perhaps the most common ways to share digital photos, and both usually require resizing the image. Particularly for web presentation, being able to retain artifact-free sharpness in a downsized image is critical-- yet may prove problematic. Unlike in photo enlargement where jagged edges are a problem, downsizing results in the opposite aliasing artifact: moiré. The prevalence of moiré largely depends on the type of interpolator used, although some images are much more susceptible than others. This tutorial compares different approaches to resizing an image for web and email, and makes recommendations based on their results.

BACKGROUND: MOIRÉ ARTIFACTS


Moiré (pronounced "more-ay") is another type of aliasing artifact, but may instead occur when downsizing an image. This shows up in images with fine textures which are near the resolution limit. These textures surpass the resolution
when downsized, so the image may only selectively record them in a repeating pattern:

Image Downsized to 50%


Downsized Image Shown at 200%

Note how this pattern has no physical meaning in the picture because these lines do not correlate with the direction of the roof shingles. Images with fine geometric patterns are at the highest risk; these include roof tiles, distant brick and woodwork, wire mesh fences, and others.

RESIZE-INDUCED SOFTENING
In addition to moiré artifacts, a resized image can also become significantly less sharp. Interpolation algorithms which preserve the best sharpness are more susceptible to moiré, whereas those which avoid moiré typically produce a softer
result. This is unfortunately an unavoidable trade-off in resizing.

Original Image Softer Resized Image

One of the best ways to combat this is to apply a follow-up unsharp mask after resizing an image-- even if the original had already been sharpened. Move your mouse over the image above to see how this can regain lost sharpness.

INTERPOLATION PERFORMANCE COMPARED


As an example: when an image is downsized to 50% of its original size, it is impossible to show detail which previously had a resolution of just a single pixel. If any detail is shown, this is not real and must be an artifact of the
interpolator.

Original Image Image Averages to Gray

Using this concept, a test was designed to assess both the maximum resolution and the degree of moiré each interpolator produces upon downsizing. It amplifies these artifacts for a typical scenario: resizing a digital camera image to a more manageable web and email resolution of 25% its original size.
The test image (below) was designed so that the resolution of stripes progressively increases away from the center of the image. When the image gets downsized, all stripes beyond a certain distance from the center should no longer be resolvable. Interpolators which show detail all the way up to the edge of this resolution limit (dashed red box shown below) preserve maximum detail, whereas interpolators which show detail outside this limit are adding patterns to the image which are not actually there (moiré).

1. Nearest Neighbor

2. Bilinear

3. Bicubic **

4. Sinc

5. Lanczos

6. Bicubic, 1px pre-blur

7. #6 w/ sharpening

8. Genuine Fractals

*Test image shown has been modified for viewing;


actual image is 800x800 pixels and stripes extend to max resolution at that size.
**Bicubic is from the default setting used in Adobe Photoshop CS & CS2
Test chart conceived in a BBC paper and first implemented at www.worldserver.com/turk/opensource/;
all diagrams and custom code above were performed in Matlab for the above use.
Sinc and lanczos algorithms produce the best results; they are able to resolve detail all the way to the theoretical maximum (red box), while still maintaining the fewest artifacts beyond it. Photoshop bicubic comes in second: it shows visible moiré patterns well outside the box, and it also does not show as much detail and contrast just inside the red box. Options 6 & 7 are variants of the bicubic downsize, and are discussed below. Genuine Fractals 4.0 was included for comparison, although it does poorly at downsizing (not its intended use). This highlights a key divide: some interpolation algorithms are much better at increasing than decreasing image size, and vice versa.
Technical Note: interpolation algorithms vary depending on the software used, even if the algorithm has the same name. Sinc interpolation, for example, has variations which take into account anywhere from 256-1024+ adjacent known
pixels. This may or may not be explicitly stated in the software. Furthermore, software may also vary in how much weighting they give to close vs. far known pixels in their calculations, which is often the case with "bicubic."

PRE-BLUR TO MINIMIZE MOIRÉ ARTIFACTS


One approach which can improve results in problem images is to apply a little blur to the image *before* you downsize it. This allows you to eliminate any detail smaller than what you know is impossible to capture at a lower
resolution. If you do not have a problem with moire artifacts, then there is no need to pre-blur.
Since the above image was downsized to 1/4 its original size, any repeating patterns smaller than 4 pixels cannot be resolved. A radius as high as 2 pixels (for a total diameter
of 4 pixels) could have been used in #6, however 1 pixel is all that was needed to virtually eliminate artifacts outside the box. Too high of a pre-blur can lead to softening in the
final image.
The pre-blurred Photoshop image above (#6) eliminates most of the moiré (found in #3), however additional sharpening (performed in #7) is required to regain sharpness for detail just inside the red box. After pre-blur and sharpening, Photoshop bicubic performs close to the more sophisticated sinc and lanczos algorithms.
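A sketch of this pre-blur workflow outside of Photoshop (Python/Pillow; the radii are illustrative judgment calls, as discussed above):

```python
from PIL import Image, ImageFilter

def downsize_with_preblur(img, factor=4, preblur_radius=1.0):
    """Blur away detail finer than the new pixel grid can record, downsize with
    bicubic, then sharpen lightly to regain detail just inside the resolution limit.
    When downsizing to 1/4 size, repeating patterns finer than 4 pixels cannot be
    resolved, so a modest pre-blur (here 1 px) suppresses most moire."""
    blurred = img.filter(ImageFilter.GaussianBlur(preblur_radius))
    small = blurred.resize((img.width // factor, img.height // factor), Image.BICUBIC)
    return small.filter(ImageFilter.UnsharpMask(radius=0.5, percent=120, threshold=2))
```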

PHOTOSHOP BICUBIC SHARPER vs. BICUBIC SMOOTHER


Adobe Photoshop versions CS (8.0) and higher actually have three options for bicubic interpolation: bicubic smoother, bicubic (intermediate default), and bicubic sharper. All variations provide similar results to #3 in the interpolation comparison, but with varying degrees of sharpness. Therefore if your image has moiré, the sharper setting will amplify it and the smoother setting will reduce it (relative to the default).

Original Image | Bicubic Smoother / Bicubic Sharper

Many recommend using the smoother variation for upsizing and the sharper variation for downsizing. This works well, but my preference is to use the standard bicubic for downsizing-- leaving greater flexibility to sharpen afterwards as the image requires. Many find the built-in sharpening in the sharper variation to be a little too strong and coarse for most images, but this is simply a matter of preference.

RECOMMENDATIONS
All of this analysis is directed at explaining what happens when things go wrong. If resizing is artifact-free, you may not need to change a thing; photographic workflows can become complicated enough as is. Many photos do not have
detail which is susceptible to moiré-- regardless of the interpolation. On the other hand, when things do go wrong this can help explain why-- and what actions you can take to fix it.
The ideal solution is to use a sinc or lanczos algorithm to avoid moiré artifacts in the downsized image, then follow-up with a very small radius (0.2-0.3) unsharp mask to correct for any interpolation-induced softening. On the
other hand, the sinc algorithm is not widely supported and software which uses it is often not as user-friendly.
An alternative approach would be to use bicubic, pre-blur problematic images and then sharpen after downsizing. This prepares the image for the interpolator in a way which minimizes aliasing artifacts. The main disadvantage to
this approach is that the required radius of blur depends on how much you wish to downsize your image-- therefore you have to use this technique on a case-by-case basis.
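In software that exposes a lanczos filter (Pillow does, for example), the first recommendation above reduces to a couple of lines. This is only a sketch of the suggested workflow, with the small unsharp mask radius taken from the text and the other values illustrative:

```python
from PIL import Image, ImageFilter

def resize_for_web(img, width=800, unsharp_radius=0.3, percent=100):
    """Downsize with lanczos to minimize moire, then apply a small-radius
    unsharp mask to correct the interpolation-induced softening."""
    height = round(img.height * width / img.width)
    small = img.resize((width, height), Image.LANCZOS)
    return small.filter(ImageFilter.UnsharpMask(radius=unsharp_radius,
                                                percent=percent, threshold=2))

# Usage with a hypothetical file name:
# resize_for_web(Image.open("photo.jpg")).save("photo_web.jpg", quality=90)
```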

Original Computer Graphic | Zero Anti-Aliasing

Finally, you can ensure that you do not induce any anti-aliasing in computer graphics if you use the nearest neighbor algorithm. Just be particularly cautious when the image contains fine textures, as this algorithm is the most
prone to moiré artifacts.
For further reading, please visit:
Digital Image Interpolation, Part 1

41.DIGITAL PHOTO ENLARGEMENT -


Enlarging a digital photo to several times its original size, while still retaining sharp detail at print resolutions such as 300 PPI, is perhaps the ultimate goal of many interpolation algorithms. Despite this common aim, enlargement results can vary significantly depending on the resize software, sharpening and interpolation algorithm used.

BACKGROUND
The problem arises because unlike film, digital cameras store their detail in a discrete unit: the pixel. Any attempt to magnify an image also enlarges these pixels-- unless some type of image interpolation is performed. Move your mouse
over the image to the right to see how even standard interpolation can improve the blocky, pixelated appearance.

Original

Visible pixels without interpolation

Before proceeding with this tutorial, know that there is no magic solution; the best optimization is to start with the highest quality image possible. Ensuring this means using proper technique, a high resolution camera, a low noise setting
and a good RAW file converter. Once all of this has been attempted, optimizing digital photo enlargement can help you make the most of this image.

OVERVIEW OF NON-ADAPTIVE INTERPOLATION


Recall that all non-adaptive interpolation algorithms always face a trade-off between three artifacts: aliasing, blurring and edge halos. The following diagram and interactive visual
comparison demonstrate where each algorithm lies in this three-way tug of war.
A small sampling of the most common algorithms are included below. Move your mouse over the options below to see how each interpolator performs for this enlargement:

1. Nearest Neighbor

2. Bilinear

3. Bicubic Smoother

4. Bicubic *

5. Bicubic Sharper
6. Lanczos

7. Bilinear w/ blur

*default interpolation algorithm for Adobe Photoshop CS and CS2

The qualitative diagram to the right roughly demonstrates the trade-offs of each type. Nearest neighbor is the most aliased, and along with bilinear these are the only two that have
no halo artifacts-- just a different balance of aliasing and blur. You will see that edge sharpness gradually increases from 3-5, but at the expense of both increased aliasing and edge
halos. Lanczos is very similar to Photoshop bicubic and bicubic sharper, except perhaps a bit more aliased. All show some degree of aliasing, however one could always eliminate
aliasing entirely by blurring the image in Photoshop (#7).

Lanczos and bicubic are some of the most common, perhaps because they are very mild in their choice of all three artifacts (as evidenced by being towards the middle of the triangle above). Nearest neighbor and bilinear are not
computationally intensive, and can thus be used for things like web zooming or handheld devices.

OVERVIEW OF ADAPTIVE METHODS


Recall that adaptive (edge-detecting) algorithms do not treat all pixels equally, but instead adapt depending on nearby image content. This flexibility gives much sharper images with fewer artifacts (than would be possible with a non-
adaptive method). Unfortunately, these often require more processing time and are usually more expensive.
Even the most basic non-adaptive methods do quite well at preserving smooth tonal gradations, but they all begin to show their limitations when they try to interpolate near a sharp edge.

1. Nearest Neighbor

2. Bicubic *

3. Genuine Fractals

4. PhotoZoom (default)

5. PhotoZoom (graphic)

6. PhotoZoom (text)

7. SmartEdge **

*default interpolation algorithm for Adobe Photoshop CS and CS2


**still in research phase, not available to public
Genuine Fractals is perhaps the most common iterative (or fractal) enlargement software. It tries to encode a photo similar to a vector graphics file-- allowing for near lossless resizing ability (at least in theory). Interestingly, its original aim was not enlargement at all, but efficient image compression. Times have changed-- storage space is now far more plentiful-- and fortunately its application has changed with it.
Shortcut PhotoZoom Pro (formerly S-Spline Pro) is another common enlargement program. It takes into account many surrounding pixels when interpolating each pixel, and attempts to re-create a smooth edge that passes through all
known pixels. It uses a spline algorithm to re-create these edges, which is similarly used by car manufacturers when they design a new smooth-flowing body for their cars. PhotoZoom has several settings-- each geared towards a
different type of image.
Note how PhotoZoom produces superior results in the computer graphic above, as it is able to create a sharp and smooth-flowing edge for all the curves in the flag. Genuine fractals adds small-scale texture which was not present in the
original, and its results for this example are arguably not much better than those of bicubic. It is also worth noting though that Genuine Fractals does the best job at preserving the tip of the flag, whereas PhotoZoom sometimes breaks it
up into pieces. The only interpolator which maintains both smooth sharp edges and the flag's tip is SmartEdge.

REAL-WORLD EXAMPLES
The above comparisons demonstrate enlargement of theoretical examples, however real-world images are seldom this simple. These also have to deal with color patterns, image noise, fine textures and edges that are not as easily
identifiable. The following example includes regions of fine detail, sharp edges and a smooth background:

Nearest Neighbor | Bicubic | Bicubic Smoother | PhotoZoom | Genuine Fractals | SmartEdge

Sharpened: Bicubic | Bicubic Smoother | PhotoZoom (Default) | Genuine Fractals | SmartEdge

All but nearest neighbor (which simply enlarges the pixels) do a remarkable job considering the relatively small size of the original. Pay particular attention to problem areas; for aliasing these are the top of the nose, tips of ears, whiskers
and purple belt buckle. As expected, all perform nearly identically at rendering the softer background.
Even though genuine fractals struggled with the computer graphic, it more than holds its own with this real-world photo. It creates the narrowest whiskers, which are even thinner than in the original image (relative to other features). It
also renders the cat's fur with sharp edges while still avoiding halo artifacts at the cat's exterior. On the other hand, some may consider its pattern of fur texture undesirable, so there is also a subjective element to the decision. Overall I
would say it produces the best results.
PhotoZoom Pro and bicubic are quite similar, except PhotoZoom has fewer visible edge halos and a little less aliasing. SmartEdge also does exceptionally well, however this is still in the research phase and not available. It is the only
algorithm which does well for both the computer graphic and the real-world photo.
SHARPENING ENLARGED PHOTOS
Attention has been focused on the type of interpolation, however the sharpening technique can have at least as much of an impact.
Apply your sharpening after enlarging the photo to the final size, not the other way around. Otherwise previously imperceptible sharpening halos may become clearly visible. This effect is the same as if one were to apply an unsharp mask with a larger than ideal radius. Move your mouse over the image to the left (a crop of the enlargement shown before) to see what it would have looked like if sharpening had been applied before enlargement. Notice the increase in halo size near the cat's whiskers and exterior.
Also be aware that many interpolation algorithms have some sharpening built into them (such as Photoshop's "bicubic sharper"). A little sharpening is often unavoidable because the bayer
interpolation itself may also introduce sharpening.

If your camera does not support the RAW file format (and you therefore have to use JPEG images), be sure to disable or decrease all in-camera sharpening options to a minimum. Save these JPEG files at the highest quality compression setting, otherwise previously undetectable JPEG artifacts will be magnified significantly upon enlargement and subsequent sharpening.
Since an enlarged photo can become significantly blurred compared to the original, resized images often stand to benefit more from advanced sharpening techniques. These include deconvolution, fine-tuning the light/dark
over/undershoots, multi-radius unsharp mask and PhotoShop CS2's new feature: smart sharpen.

SHARPENING & VIEWING DISTANCE


The expected viewing distance of your print may change the requirements for a given depth of field and circle of confusion. Furthermore, an enlarged photo for use as a poster will require a larger sharpening radius than one intended for
display on a website. The following estimator should be used as no more than a rough guide; the ideal radius also depends on other factors such as image content and interpolation quality.
Sharpening Radius Estimator (interactive calculator)
Inputs: Viewing Distance, Print Resolution (PPI*)
Output: Estimated Sharpening Radius
*PPI = pixels per inch; see tutorial on "Digital Camera Pixels"

A typical display device has a pixel density of around 70-100 PPI, depending on resolution setting and display dimensions. A standard value of 72 PPI gives a sharpening radius of 0.3 pixels using the above calculator-- corresponding to
the common radius used for displaying images on the web. Alternatively, a print resolution of 300 PPI (standard for photographic prints) gives a sharpening radius of ~1.2 pixels (also typical).
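These two data points suggest a very rough rule of thumb; the sketch below is simply fitted to the values quoted above and is not the calculator's actual formula (which also accounts for viewing distance and eyesight):

```python
def estimate_sharpening_radius(ppi):
    """Very rough sharpening-radius estimate in pixels, fitted only to the two
    values quoted above (~0.3 px at 72 PPI, ~1.2 px at 300 PPI) at their
    typical viewing distances."""
    return round(ppi / 250.0, 2)

print(estimate_sharpening_radius(72))   # ~0.29 px (typical web display)
print(estimate_sharpening_radius(300))  # 1.2 px (photographic print)
```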

WHEN INTERPOLATION BECOMES IMPORTANT


The resolution of a large roadside billboard image need not be anywhere near as high as that of a closely viewed fine art print. The following estimator lists the minimum PPI and maximum print dimension which can be used before the
eye begins to see individual pixels (without interpolation).
Photo Enlargement Calculator (interactive calculator)
Inputs: Viewing Distance, Eyesight, Camera Aspect Ratio (Width:Height), Camera Resolution (Megapixels)
Outputs: Minimum PPI, Maximum Print Dimension

You can certainly make prints much larger-- just beware that this marks the point where you need to start being extra cautious. Any print enlarged beyond the above size will become highly dependent on the quality of interpolation
and sharpening.
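The underlying arithmetic is a standard visual-acuity calculation. The sketch below assumes normal eyesight resolves roughly one arc-minute of detail; the calculator above may use somewhat different constants, so treat the numbers as approximate:

```python
import math

def min_ppi(viewing_distance_in, acuity_arcmin=1.0):
    """Minimum print resolution (PPI) before individual pixels become visible,
    assuming the eye resolves about 1 arc-minute of detail."""
    smallest_feature_in = viewing_distance_in * math.tan(math.radians(acuity_arcmin / 60.0))
    return 1.0 / smallest_feature_in

def max_print_size(megapixels, aspect=(3, 2), viewing_distance_in=10):
    """Largest print dimensions (inches) at the minimum PPI for that distance."""
    ppi = min_ppi(viewing_distance_in)
    pixels = megapixels * 1_000_000
    width_px = math.sqrt(pixels * aspect[0] / aspect[1])
    height_px = pixels / width_px
    return round(width_px / ppi, 1), round(height_px / ppi, 1)

print(round(min_ppi(10)))             # ~344 PPI at a 10 inch (25 cm) viewing distance
print(max_print_size(8, (3, 2), 10))  # roughly 10 x 6.7 inches for an 8 megapixel camera
```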

CONSIDER YOUR SUBJECT MATTER


Both the size and type of texture within a photo may influence how well that image can be enlarged. For landscapes, the eye often expects to see detail all the way down near its resolving limit, whereas smooth surfaces and geometric objects may be less demanding. Some regions may even enlarge better than others; hair in portraits usually needs to be fully resolved, although smooth skin is often much less demanding.

HARDWARE vs. SOFTWARE ENLARGEMENT


Many professional printers have the ability to use a small image and perform the photo enlargement themselves (hardware interpolation), as opposed to requiring a photo that has already been enlarged on a computer (software
interpolation). Many of these printers claim better enlargement quality than is possible with the bicubic algorithm, so which is a better option? Performing the enlargement in software beforehand allows for greater flexibility-- allowing
one to cater the interpolation and sharpening to the needs of the image. On the other hand, enlarging the file yourself means that the file sizes will be MUCH larger, which may be of special importance if you need to upload images to the
online print company in a hurry.

42.DEPTH OF FIELD CALCULATOR -


A depth of field calculator is a useful photographic tool for assessing what camera settings are required to achieve a desired level of sharpness. This calculator is more flexible than that in the depth of field tutorial because it adjusts for
parameters such as viewing distance, print size and eyesight-- thereby providing more control over what is "acceptably sharp" (maximum tolerable circle of confusion).
In order to calculate the depth of field, one needs to first decide on an appropriate value for the maximum circle of confusion (CoC). Most calculators assume that for an 8x10 inch print viewed at 25 cm (~1 ft), features smaller than 0.01 inches are not required to achieve acceptable sharpness. This scenario is often not an adequate description of acceptable sharpness, and so the calculator below accounts for other viewing scenarios (although it defaults to the standard settings).
Depth of Field Calculator (interactive calculator)
Inputs: Maximum Print Dimension, Viewing Distance, Eyesight, Camera Type, Selected Aperture, Actual Lens Focal Length (mm), Focus Distance to subject (meters)
Outputs: Closest distance of acceptable sharpness, Furthest distance of acceptable sharpness, Hyperfocal distance, Total Depth of Field
Note: CF = "crop factor" (commonly referred to as the focal length multiplier)

USING THE CALCULATOR


As the viewing distance increases, our eyes become less able to perceive fine detail in the print, and so the depth of field increases (max. CoC increases). Conversely, our eyes can perceive finer detail as the print size increases, and so
the depth of field decreases. A photo intended for close viewing at a large print size (such as in a gallery) will likely have a far more restrictive set of constraints than a similar image intended for display as a postcard or on a roadside
billboard.
People with 20/20 vision can perceive details which are roughly 1/3 the size of those used by lens manufacturers (~0.01 in features for an 8x10 in print viewed at 1 ft) to set the standard for lens markings. Changing the eyesight parameter therefore has a significant influence on the depth of field. On the other hand, even if you can detect the circle of confusion with your eyes, the image may still be perceived as "acceptably sharp." This should serve only as a rough guideline to conditions where detail can no longer be resolved by our eyes.
The camera type determines the size of your film or digital sensor, and thus how much the original image needs to be enlarged to achieve a given print size. Larger sensors can get away with larger circles of confusion because these
images do not have to be enlarged as much, however they also require longer focal lengths to achieve the same field of view. Consult your camera's manual or manufacturer website if unsure what to enter for this parameter.
Actual lens focal length refers to the focal length in mm listed for your lens, NOT the "35 mm equivalent focal length" sometimes used. Most compact digital cameras have a zoom lens that varies on the order of 6 or 7 mm to about 30
mm (often listed on the front of your camera on the side of the lens). If you are using a focal length outside this range for a compact digital camera, then it is likely to be incorrect. SLR cameras are more straightforward as most of these
use standard 35 mm lenses and clearly state the focal length, but be sure not to multiply the value listed on your lens by a crop factor (or focal length multiplier). If you have already taken your photo, nearly all digital cameras also record
the actual lens focal length in the EXIF data for the image file.
Hyperfocal distance is the focus distance where everything from half the hyperfocal distance to infinity is within the depth of field. This is useful when deciding where to focus such that you maximize the sharpness within your scene,
although I do not recommend using this value "as is" since sharpness is often more critical at infinity than in front of the focus distance. For more on this topic, please see "Understanding the Hyperfocal Distance."
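For reference, the calculator's outputs follow from the standard thin-lens depth of field formulas, sketched below. For simplicity this version takes the maximum circle of confusion directly as a parameter instead of deriving it from print size, viewing distance and eyesight as the calculator above does:

```python
def depth_of_field(focal_mm, f_number, focus_dist_m, coc_mm=0.03):
    """Standard depth of field formulas; coc_mm is the maximum circle of
    confusion (0.03 mm is a commonly used default for full-frame 35 mm)."""
    f = focal_mm
    s = focus_dist_m * 1000.0                       # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return {"near_m": near / 1000.0, "far_m": far / 1000.0,
            "hyperfocal_m": hyperfocal / 1000.0}

# Example: a 50 mm lens at f/8 focused at 5 m on a full-frame camera
print(depth_of_field(50, 8, 5))   # near ~3.4 m, far ~9.5 m, hyperfocal ~10.5 m
```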

IN PRACTICE
Care should be taken not to let all of these numbers get in the way of taking your photo. I do not recommend calculating the depth of field for every image, but instead suggest that you get a visual feel for how aperture and focal length affect your image. This can only be achieved by getting out there and experimenting with your camera. Once you have done this, the depth of field calculator can then be used to enhance those carefully planned landscape, macro or low-light images where the range of sharpness is critical.
43.ARCHIVAL DIGITAL PHOTO BACKUP -
Backing up your photos so that they last for 100 years is no longer as simple as having an archival print made and stored in a safe frame. Modern digital images and scans require an intimate understanding of topics such as file format,
data degradation, media type and ever-changing storage technologies. This tutorial summarizes the best strategies in three stages--what to store, how and where to backup, and what to do once everything's archived--so that you can be
confident your photos will stand the test of time.

ARCHIVAL FILE FORMATS FOR PHOTO STORAGE


Here's a topic that keeps many photographers up at night: how can you be truly sure that the photos you are saving will be readable on computers 10, 50 or 100 years down the road, with vastly different technology? Will Canon, Nikon,
Sony or another camera manufacturer's proprietary RAW format still have full software support, and will the images be reproduced exactly as before when loaded?

Old Photo, circa 1890 New Photograph, circa 2008

Unfortunately, the photo on the right will not necessarily last as long as the one on the left did.
However, if the necessary precautions are taken, not only will the photo on the right be preserved, but it also won't be subject to the gradual fading and deterioration of the photo from 1890.
The chosen file type is therefore an important first consideration when backing up archives of your photos. The table below compares the most common file formats:
Archival File Format        Size       Quality    Software Compatibility
JPEG                        Smallest   Lowest     Excellent
TIFF (8-bit)                Medium     Medium     Excellent
TIFF (16-bit)               Largest    High       Excellent
RAW files (CR2, NEF, etc.)  Large      Highest    Good now; questionable years later
DNG                         Large      Highest    Moderate now; excellent years later (in theory)

JPEG files are by far the most likely to be widely supported many years down the road; after all, JPEG has become a near standard for images on the internet. If you already have a lot of photos taken in JPEG, then the choice of what
format to store them in is easy: leave them as JPEG files. However, for future photos, it's highly advised that you shoot in RAW if your camera supports it, as discussed later.
TIFF files are a close second to JPEG when it comes to compatibility, but are much higher quality because they do not use JPEG's lossy image compression. For many, TIFF achieves an optimal balance. However, TIFF files either
preserve much less about the original photo (if the bit depth is 8-bit), or are even larger than RAW files despite preserving a little less of the original image (if the bit depth is 16-bit).
RAW files are certainly the best when it comes to preserving what was originally captured, while still being smaller than 16-bit TIFF files. However, nearly every camera has a slightly different RAW file, so it's highly unlikely that
general software 10-20 years later will be able to open every one of these file types correctly. RAW file backup therefore leaves two options: (i) to convert them to some other format, or (ii) to backup the RAW files in their native format
until some later date when you start to notice compatibility issues, and a suitable replacement format exists.

Many feel that a suitable format already exists: the Digital Negative (DNG) file format, which was created by Adobe to address many of the problems associated with longer term archival
storage. It is an open standard and royalty free, so you can be sure the files can be more easily and universally opened in the future. DNG aims to combine the compatibility advantages of
TIFF and JPEG file formats with the quality and efficiency advantages of your camera's original RAW files.

However, even DNG is not future-proof. With the exception of Adobe software, support is still not as universal as one would like it to be for a format that aims to be archival (although this is rapidly changing). Further, companies go in
and out of existence (remember the once dominant Kodak?), DNG itself has version numbers, and DNG is helpless if sensor technologies change dramatically.
Another consideration is how to store various edited versions of your files, which is something that DNG does not address. Multiple 16-bit TIFF, PSD or other files can quickly become extremely large and unmanageable. The best way to
conserve storage space is to save file types that preserve the editing steps, but do not actually apply them as an additional saved TIFF file. RAW conversion software often has the ability to store the settings you used to convert the RAW
file, such as XMP sidecar files (for Camera RAW), catalog files (for Lightroom), and library files (for Aperture), amongst others. In Photoshop, using and storing adjustment layers is also a great way to avoid multiple intermediate files
for each edit.
Unfortunately, many of the formats used for storing edited photos are also subject to future compatibility issues. Fortunately, this is one area where changing technology can mean you'd like to rework certain images using the latest
software and techniques. Just make sure that you also have an archived version of the unaltered original photo.
Overall, the only fail-proof solution is to keep your data up-to-date. Every few years it's a good idea to convert file types that are in danger of becoming obsolete.

CHOOSING PHOTO BACKUP MEDIA


Even if we use a compatible file format, how can we be sure that these files will later be accessible on our chosen backup device or media? Remember 5.25 inch floppy disks? In fact, the US Federal Government is so concerned about this
topic that they house and maintain computers at various stages of advancement -- just in case a file can only be loaded on one of these older computer setups.

CD, DVD, Blu-Ray, or other removable media has been the primary method of consumer backup for quite some time. They have the advantage of being reasonably inexpensive and broadly compatible. Probably their biggest drawback
is inconsistency; some removable media lasts only 5-10 years, while others claim a lifetime of 50-100 years. It can often be difficult to tell which longevity category your media purchase falls under.
Do not assume that all writable media is created equal. There's often a dramatic difference in longevity between one brand and another. Pay attention to the type of dyes used (blue, gold, silver, etc), to online accelerated aging tests, and to
reports of issues with a particular model/batch.
External hard drives, while a newcomer on the backup scene, have made great progress since they've dropped in price tremendously over the past several years. Hard drives can store a tremendous amount of information in a small area,
are quite fast, and permit the backed up data to be immediately accessible and modifiable. Over time they can gradually demagnetize, but the biggest concern is that they may not spin up because their internal motor has failed (although no
data is lost, it can be expensive to recover). Another concern would be if the eSATA, USB or firewire connector becomes obsolete.
Tape backup, while once the "go to" method of archiving data, is becoming increasingly marginalized, and today is only really used for large corporate backups. Consumer models are less common, and they haven't quite kept pace with
the storage density progress that's been made with hard drives. Further, some tapes are much more vulnerable to humidity, water damage and other external factors than are external hard drives or other removable media. Their biggest
advantages are that (i) they are very inexpensive for high volume backups, and (ii) they do not require an internal motor, and thus have no risk of not spinning up for access (unlike hard drives).

Unfortunately, the only future-proof solution is to migrate your data over to the latest technology every 3-5 years. Fortunately, storage technologies have been increasing in capacity exponentially, so your 10 old photo DVDs can be
combined onto just one Blu-ray disc or a fraction of an external hard drive -- and one would expect that 10 or more of these could in turn be combined onto just one unit of the next storage technology, and so on. This means that even if you
continue to accrue photos, the amount of work required to transfer them will not necessarily increase each time you need to do so.

PRESERVING IMAGE INTEGRITY


No matter what the backup media, all data degrades over time, and errors can occur each time you copy your images from one location to another. The dyes in a DVD gradually decompose, tapes and hard drives eventually
become demagnetized, and flash memory can lose its charge. All of these processes are inevitable. The following is a real-world example of what it can look like when a photograph becomes corrupted:

Noticing the above flaw requires zooming in to 100% and inspecting specific regions of the photograph on screen, even though it would be easily visible as a colored streak in a print. This is a little unsettling
considering that most people have hundreds if not many thousands of photographs; manually inspecting each and every image for corruption would clearly be unrealistic.
Further, image corruption becomes replicated in each subsequent backup copy, and could go unnoticed until a print is made many years later.

A storage technique that employs parity, checksum or other data verification files is the only way to systematically spot these problems before they permanently alter your photo archives. That is the only reason the photo in the above
example (and others like it) was identified before it became a problem. The following chart outlines some of the most common techniques for preventing, verifying and repairing corrupt photographs:
Type (Primary Use): How It Works

RAID 1, 5, 10 (PREVENTION): A RAID 1, 5 or 10 is an array of disk drives with fault protection in case one of your drives fails. These can continue to operate even if a drive fails, without losing any information. However, they can also substantially increase costs since they require additional disk drives and a RAID controller.

SFV or MD5 Checksum Files (VERIFICATION): Checksum files verify that a file or copy is identical to its original. They are effectively digital fingerprints, created from every 0 and 1 in a digital file. When even one bit of the file changes, it's almost guaranteed that the fingerprint won't match. However, that's all they do: inform you when there was an error.

Parity or Recovery Files (REPAIR): Parity files can be used to repair minor damage without requiring a full duplicate of the original. They store carefully chosen redundant information about a file; if part of the original becomes corrupt, the parity file can be used along with the surviving portions of the corrupt file to re-create the original data. However, parity files take up increasingly more space if you want to recover files which are more badly damaged.

Technical Notes: Although it's beyond the scope of this article, RAID comes in many varieties; RAID 1 is effectively two disks containing identical data at all times; RAID 5 is three or more disks with parity data (equivalent to one drive's capacity) distributed across them;
RAID 10 requires four drives, and is similar to RAID 1 except it improves performance by simultaneously reading/writing to multiple drives. RAID 0 should not be used with critical data since it increases the failure rate in exchange for
better performance.
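To make the parity idea above more concrete, here is a minimal sketch (using simple XOR parity in Python) of how one extra block of redundant data can rebuild any single missing block. Real recovery formats such as PAR2 and real RAID 5 controllers are far more sophisticated; the block contents below are purely hypothetical.

    from functools import reduce

    def make_parity(blocks):
        """XOR equal-length blocks together, column by column, to form one parity block."""
        return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

    def rebuild_missing(surviving_blocks, parity):
        """Recover the single missing block from the survivors plus the parity block."""
        return make_parity(surviving_blocks + [parity])

    blocks = [b"IMG_0001", b"IMG_0002", b"IMG_0003"]   # hypothetical 8-byte data blocks
    parity = make_parity(blocks)

    # Pretend the second block was lost or corrupted:
    print(rebuild_missing([blocks[0], blocks[2]], parity))   # b'IMG_0002'

The same XOR trick is why a RAID 5 array only needs one drive's worth of extra capacity to survive the loss of any single drive.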
If you routinely work with very important photographs, the best protection is achieved by using RAID while editing on your computer and between backups, and storing MD5/SFV checksum and parity files along with your archived
photographs.
A simpler solution would be to store two backup copies immediately after the photo is captured. This way you do not need to worry about complicated RAID or parity files, but you will still need to store SFV or MD5 checksum files**
along with each archived photo. There are far too many programs that can read or create SFV and MD5 files to list here; a quick search engine query will yield several free options. If you ever identify a corrupt file, then the other backup
copy can be used as a replacement. Not having RAID means that there's no protection against losing intermediate edited files on your computer, but these are usually much less important than the unaltered originals.
**Technical Notes: A checksum is a digital fingerprint that verifies the integrity/identity of a file. SFV stands for "Simple File Verification", and an SFV file contains a list of CRC checksums corresponding to a list of files. MD5 checksums were created not
just to verify the integrity of a file, but also to verify its authenticity (that no person had intentionally modified the file). CRC checksums are much quicker to calculate than their equivalent MD5 checksums, but MD5 checksums are more
sensitive to file changes. There are other checksum file types available, but SFV and MD5 are currently the most widely supported.
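As a rough illustration of how such fingerprints are computed, the following sketch uses Python's standard hashlib and zlib modules to calculate a file's MD5 digest and its CRC-32 value (the kind an SFV file records); the file name is hypothetical.

    import hashlib
    import zlib

    def md5_of(path):
        """Return the MD5 hex digest of a file, reading it in 1 MB chunks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def crc32_of(path):
        """Return the CRC-32 of a file as the 8-digit hex string an SFV file would list."""
        crc = 0
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                crc = zlib.crc32(chunk, crc)
        return f"{crc & 0xFFFFFFFF:08X}"

    # Record the fingerprints when you archive, then re-run later to verify:
    # print(md5_of("IMG_0001.CR2"), crc32_of("IMG_0001.CR2"))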
Regardless, it's important to keep your data "fresh" by copying it to some other media after 5-10 years -- even if the file format or media isn't in danger of becoming obsolete.

WHERE TO STORE YOUR PHOTO ARCHIVES


The best location to store your archival photo backups is in a cool, dry place with a reasonably constant environment and minimal need for movement. If there's a chance of humidity, be sure to seal the media in a plastic bag prior to
storage.
However, unforeseen accidents such as theft and fire can occur, so any fail-proof backup strategy should make use of multiple backup locations. This could mean keeping a duplicate archive in a safety deposit box, at a friend's or family
member's house, or on a remote online server. If your internet connection is fast, backups can even be transferred regularly and systematically via FTP. Depending on the size and quantity of your photos, some even treat online photo sharing
sites as backup locations. However, this is not an option for true digital negatives, such as RAW files, since these cannot be displayed as-is.
Try to stick to a regular backup schedule with an easy-to-follow naming convention. After all, if you cannot find a photo once it's been archived then it's as good as lost.
MINIMIZING RISK OF ACCIDENTAL DELETION
OK, so we've now gone to great lengths to ensure that (1) the file format will be readable, (2) the backup media will be loadable and (3) the accuracy of each photo will be preserved identically. What's preventing someone from
mistakenly deleting or overwriting some of your photo archive? Of course, clearly labeling your media is a must, but it might also be a good idea to make the archived photos read only, and to password-protect the photos folder and/or
media. However, adding a password is a double-edged sword, because it means there's always the possibility of forgetting the password. If this is a concern, then simply use a password of "password", since the purpose is to add another
barrier to inadvertent deletion as opposed to preventing unauthorized access.

SUMMARY OF ARCHIVAL PHOTO BACKUP OPTIONS


Photographers can be loosely grouped into one of two categories:
(1) Casual Photographer: generally takes snapshots to record events and other get togethers; not overly concerned with making large prints, but wants their collections to be preserved for later
generations. Often uses a compact digital camera or camera phone and photos are rarely post-processed. Usually takes JPEG photographs to simplify image sharing and printing, or to save storage space.

Backup Strategy: Casual photographers should try to save their JPEG files using the highest quality "superfine" (or similar) setting to minimize image compression artifacts. Each batch of photos should be backed up in two copies on
removable media, ideally with SFV or MD5 checksum files to identify if any image later becomes corrupt. Archived photos should be transferred over to new media every 5 years to keep the storage technology up to date, and to prevent
corrupt images by keeping the data fresh.
(2) Discerning Photographer: takes a variety of photos, possibly including very memorable events or work to be made into large prints; maximal preservation of each scene is of high importance. Often
uses a digital SLR camera, or a compact digital camera when weight/space are important. Usually takes RAW photographs, because any added trouble in post-processing is worth it in exchange for knowing
they can make the most out of any print if they happen to capture a special moment.

Backup Strategy: Discerning photographers should always save their photos using their camera's RAW file format. Any photo editing should ideally occur on a computer with duplicate hard drives in RAID 1, otherwise unaltered photos
should be backed up immediately after capture. RAW files should either be converted to the DNG format prior to archiving or saved in their native format. When possible, edited versions of photos should be stored as processing steps
(such as in XMP sidecar or catalog files) as opposed to separate TIFF files. Each batch of photo backups should be written to at least two media, and all images should be stored along with SFV or MD5 checksum files and parity information, just in
case a repair is needed. Each set of backups should be stored in a different physical building. Archived RAW or DNG files should be converted to some other format every 3-5 years to maintain software compatibility; each of these
backups should be on new media using the latest storage technology to keep their data fresh.

44.TUTORIALS: COLOR PERCEPTION -


Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, the
object's surface selectively absorbs some colors and reflects others; only the reflected colors contribute to the viewer's perception of color.

HUMAN COLOR PERCEPTION: OUR EYES & VISION


The human eye senses this spectrum using a combination of rod and cone cells for vision. Rod cells are better for low-light vision, but can only sense the intensity of light; cone cells can also discern color, but function
best in bright light. Three types of cone cells exist in your eye, each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we
can see with our eyes. The example below illustrates the relative sensitivity of each type of cone cell for the entire visible spectrum from ~400 nm to 700 nm.

Select View: Cone Cells Luminosity


Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL
Note how each type of cell does not just sense one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. Move your mouse over "luminosity" to see which colors contribute the most towards our
perception of brightness. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the Bayer array in modern digital cameras.

ADDITIVE & SUBTRACTIVE COLOR MIXING


Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas
subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.

Additive Subtractive

The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive
colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and
yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK) because CMY alone cannot produce deep enough shadows.
Additive Color Mixing (RGB Color)          Subtractive Color Mixing (CMYK Color)

Red + Green —> Yellow                      Cyan + Magenta —> Blue
Green + Blue —> Cyan                       Magenta + Yellow —> Red
Blue + Red —> Magenta                      Yellow + Cyan —> Green
Red + Green + Blue —> White                Cyan + Magenta + Yellow —> Black

Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in
order to accurately depict colors.
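As a rough numerical sketch of the mixing tables above (not how any real device is calibrated), additive mixing can be modeled as summing light in each RGB channel, and subtractive mixing as each ink removing part of the white light:

    def additive(*lights):
        """Additive mixing: light from each source adds up (clipped at full intensity)."""
        return tuple(min(1.0, sum(light[i] for light in lights)) for i in range(3))

    def subtractive(*inks):
        """Subtractive mixing: each ink removes part of the white light."""
        return tuple(max(0.0, 1.0 - sum(ink[i] for ink in inks)) for i in range(3))

    RED, GREEN, BLUE = (1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)          # emitted light (R, G, B)
    CYAN, MAGENTA, YELLOW = (1.0, 0, 0), (0, 1.0, 0), (0, 0, 1.0)     # light each ink absorbs (R, G, B)

    print(additive(RED, GREEN))                 # yellow
    print(subtractive(CYAN, MAGENTA))           # blue
    print(additive(RED, GREEN, BLUE))           # white
    print(subtractive(CYAN, MAGENTA, YELLOW))   # black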
COLOR PROPERTIES: HUE & SATURATION
Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective, however each can be more objectively illustrated by
inspecting the light's color spectrum.
Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color's "hue" describes which wavelength appears to be most dominant. The object whose spectrum is shown
below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum.

Although this spectrum's maximum happens to occur in the same region as the object's hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead
be yellow (see the additive color mixing table).
A color's saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the
spectrum for both a highly saturated and less saturated shade of blue.
Select Saturation Level: Low High

45.BASICS OF DIGITAL CAMERA PIXELS -


The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion-- particularly for those who are either
considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

THE PIXEL: A FUNDAMENTAL UNIT FOR ALL DIGITAL IMAGES


Every digital image consists of a fundamental small-scale descriptor: THE PIXEL, a term coined by combining the words "PICture ELement." Just as pointillist artwork uses a series of paint blotches, millions of pixels can combine
to create a detailed and seemingly continuous image.

Move mouse over each to select: Pointillism (Paint Blotches) Pixels


Each pixel contains a series of numbers which describe its color or intensity. The precision to which a pixel can specify color is called its bit or color depth. The more pixels your image contains, the more detail it has the ability to
describe. Note how I wrote "has the ability to"; just because an image has more pixels does not necessarily mean that these are fully utilized. This concept is important and will be discussed more later.

PRINT SIZE: PIXELS PER INCH vs. DOTS PER INCH


Since a pixel is just a logical unit of information, it is useless for describing real-world prints-- unless you also specify their size. The terms pixels per inch (PPI) and dots per inch (DPI) were both introduced to relate this theoretical pixel
unit to real-world visual resolution. These terms are often inaccurately interchanged (particularly with inkjet printers)-- misleading the user about a device's maximum print resolution.
"Pixels per inch" is the more straightforward of the two terms. It describes just that: how many pixels an image contains per inch of distance in the horizontal and vertical directions. "Dots per inch" may seem deceptively simple at first.
The complication arises because a device may require multiple dots in order to create a single pixel; therefore a given number of dots per inch does not always lead to the same resolution. Using multiple dots to create each pixel is a
process called "dithering."

A device with a limited number of ink colors can play a trick on the eye by arranging these into small patterns-- thereby creating the perception of a different color if each "sub-pixel" is small enough. The example above uses 128 pixel
colors, however the dithered version creates a nearly identical looking blend of colors (when viewed in its original size) using only 24 colors. There is one critical difference: each color dot in the dithered image has to be much smaller
than the individual pixel. As a result, images almost always require more DPI than PPI in order to achieve the same level of detail. PPI is also far more universal because it does not require knowledge of the device to understand
how detailed the print will be.
The standard for prints done in a photo lab is about 300 PPI, however inkjet printers require several times this number of DPI (depending on the number of ink colors) for photographic quality. It also depends on the application; magazine
and newspaper prints can get away with much less than 300 PPI. The more you try to enlarge a given image, the lower its PPI will become (assuming the same number of pixels).

MEGAPIXELS AND MAXIMUM PRINT SIZE


A "megapixel" is simply a unit of a million pixels. If you require a certain resolution of detail (PPI), then there is a maximum print size you can achieve for a given number of megapixels. The following chart gives the maximum 200 and
300 PPI print sizes for several common camera megapixels.

# of Megapixels    Maximum 3:2 Print Size
                   at 300 PPI:        at 200 PPI:

2                  5.8" x 3.8"        8.7" x 5.8"
3                  7.1" x 4.7"        10.6" x 7.1"
4                  8.2" x 5.4"        12.2" x 8.2"
5                  9.1" x 6.1"        13.7" x 9.1"
6                  10.0" x 6.7"       15.0" x 10.0"
8                  11.5" x 7.7"       17.3" x 11.5"
12                 14.1" x 9.4"       21.2" x 14.1"
16                 16.3" x 10.9"      24.5" x 16.3"
22                 19.1" x 12.8"      28.7" x 19.1"
Note how a 2 megapixel camera cannot even make a standard 4x6 inch print at 300 PPI, while it requires a whopping 16 megapixels to make a 16x10 inch photo. This may be discouraging, but do not despair! Many will be happy with
the sharpness provided by 200 PPI, although an even lower PPI may suffice if the viewing distance is large (see "Digital Photo Enlargement"). Many wall posters assume that you will not be inspecting them from 6 inches away, and so
these are often less than 200 PPI.
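For readers who want to check these numbers themselves, here is a small sketch that reproduces the chart above, assuming a 3:2 aspect ratio:

    from math import sqrt

    def max_print_size(megapixels, ppi):
        """Return (long side, short side) in inches for a 3:2 image at the given PPI."""
        pixels = megapixels * 1_000_000
        long_side_px = sqrt(pixels * 3 / 2)     # 3:2 aspect ratio: long side = 1.5 x short side
        short_side_px = long_side_px * 2 / 3
        return long_side_px / ppi, short_side_px / ppi

    for mp in (2, 6, 16):
        w300, h300 = max_print_size(mp, 300)
        w200, h200 = max_print_size(mp, 200)
        print(f"{mp} MP: {w300:.1f} x {h300:.1f} in at 300 PPI, "
              f"{w200:.1f} x {h200:.1f} in at 200 PPI")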

CAMERA & IMAGE ASPECT RATIO


The print size calculations above assumed that the camera's aspect ratio, or ratio of longest to shortest dimension, is the standard 3:2 used for 35 mm cameras. In fact,
most compact cameras, monitors and TV screens have a 4:3 aspect ratio, while most digital SLR cameras are 3:2. Many other types exist though: some high end film equipment
even uses a 1:1 square image, and DVD movies use an elongated 16:9 ratio.
This means that if your camera uses a 4:3 aspect ratio, but you need a 4 x 6 inch (3:2) print, then a lot of your megapixels will be wasted (11%). This should be considered if your camera has a different
ratio than the desired print dimensions.
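As a quick check of the ~11% figure, assuming the 4:3 frame is cropped down to 3:2 by trimming its height while keeping the full width:

    width = 4000                       # hypothetical pixel width
    height_4_3 = width * 3 / 4         # native 4:3 frame height
    height_3_2 = width * 2 / 3         # height left after cropping to 3:2

    wasted = 1 - height_3_2 / height_4_3
    print(f"{wasted:.1%} of the pixels are cropped away")   # 11.1%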

Pixels themselves can also have their own aspect ratio, although this is less common. Certain video standards and earlier Nikon cameras have pixels with skewed dimensions.

DIGITAL SENSOR SIZE: NOT ALL PIXELS ARE CREATED EQUAL


Even if two cameras have the same number of pixels, it does not necessarily mean that the size of their pixels are also equal. The main distinguishing factor between a more expensive digital SLR and a compact camera is that the former
has a much greater digital sensor area. This means that if both an SLR and a compact camera have the same number of pixels, the size of each pixel in the SLR camera will be much larger.
Why does one care about how big the pixels are? A larger pixel has more light-gathering area, which means the light signal is stronger over a given interval of time. This usually results in an improved signal to noise ratio (SNR), which creates a smoother and more detailed image. Furthermore, the dynamic range of the images (the range of light to dark which the camera can capture before shadows become solid black or highlights clip to white) also increases with larger pixels. This is because each pixel well can contain more photons before it fills up and becomes completely white.

Compact Camera Sensor vs. SLR Camera Sensor

The diagram below illustrates the relative size of several standard sensor sizes on the market today. Most digital SLR's have either a 1.5X or 1.6X crop factor (compared to 35 mm film), although some high-end models actually have a
digital sensor which has the same area as 35 mm. Sensor size labels given in inches do not reflect the actual diagonal size, but instead reflect the approximate diameter of the "imaging circle" (not fully utilized). Nevertheless, this number
is in the specifications of most compact cameras.
Why not just use the largest sensor possible? The main disadvantage of larger sensors is that they are much more expensive, so they are not always beneficial.
Other factors are beyond the scope of this tutorial, however more can be read on the following points: larger sensors require smaller apertures in order to achieve the same depth of field,
however they are also less susceptible to diffraction at a given aperture.

Does all this mean it is bad to squeeze more pixels into the same sensor area? This will usually produce more noise, but only when viewed at 100% on your computer monitor. In an actual print, the higher megapixel model's noise
will be much more finely spaced-- even though it appears noisier on screen (see "Image Noise: Frequency and Magnitude"). This advantage usually offsets any increase in noise when going to a larger megapixel model (with a few
exceptions).

46.TUTORIALS: BIT DEPTH -


Bit depth quantifies how many unique colors are available in an image's color palette in terms of the number of 0's and 1's, or "bits," which are used to specify each color. This does not mean that the image necessarily uses all of these
colors, but that it can instead specify colors with that level of precision. For a grayscale image, the bit depth quantifies how many unique shades are available. Images with higher bit depths can encode more shades or colors since there
are more combinations of 0's and 1's available.

TERMINOLOGY
Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue. Each primary color is often referred to as a "color channel" and can have any range of intensity values specified
by its bit depth. The bit depth for each primary color is termed the "bits per channel." The "bits per pixel" (bpp) refers to the sum of the bits in all three color channels and represents the total colors available at each pixel. Confusion
arises frequently with color images because it may be unclear whether a posted number refers to the bits per pixel or bits per channel. Using "bpp" as a suffix helps distinguish these two terms.

EXAMPLE
Most color images from digital cameras have 8-bits per channel and so they can use a total of eight 0's and 1's. This allows for 2^8 or 256 different combinations—translating into 256 different intensity values for each primary color.
When all three primary colors are combined at each pixel, this allows for as many as 2^(8x3) or 16,777,216 different colors, or "true color." This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels.
The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel and 2^(3X) if X refers to the bits per channel.
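The arithmetic above is simple enough to verify directly; a short sketch:

    bits_per_channel = 8
    levels_per_channel = 2 ** bits_per_channel     # 256 intensity values per primary color
    bits_per_pixel = 3 * bits_per_channel          # 24 bpp
    total_colors = 2 ** bits_per_pixel             # 16,777,216 ("true color")
    print(levels_per_channel, bits_per_pixel, total_colors)

    # The same figure for a 16-bit-per-channel (48 bpp) image:
    print(2 ** (3 * 16))   # 281,474,976,710,656 (about 281 trillion)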

COMPARISON
The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.
Bits Per Pixel    Number of Colors Available    Common Name(s)

1                 2                             Monochrome
2                 4                             CGA
4                 16                            EGA
8                 256                           VGA
16                65,536                        XGA, High Color
24                16,777,216                    SVGA, True Color
32                16,777,216 + Transparency
48                281 Trillion
BIT DEPTH VISUALIZATION
By moving your mouse over any of the labels below, the image will be re-displayed using the chosen amount of colors. The difference between 24 bpp and 16 bpp is subtle, but will be clearly visible if you have your display set to true
color or higher (24 or 32 bpp).
24 bpp    16 bpp    8 bpp

USEFUL TIPS
• The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with
more than 24 bpp are still quite useful since they hold up better under post-processing (see "Posterization Tutorial").
• Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
• The available bit depth settings depend on the file type. Standard JPEG files are limited to 8-bits per channel, whereas TIFF files can store either 8 or 16-bits per channel.

47.IMAGE TYPES: JPEG & TIFF FILES -


Knowing which image type to use ensures you can make the most of your digital photographs. Some image types are best for getting an optimal balance of quality and file size when storing your photos, while other image types enable
you to more easily recover from a bad photograph. Countless image formats exist and new ones are always being added; in this section we will focus on two of the three formats most relevant to digital photography:
JPEG and TIFF. The RAW file format is covered in a separate tutorial.

INTRO: IMAGE COMPRESSION


An important concept which distinguishes many image types is whether they are compressed. Compressed files are significantly smaller than their uncompressed counterparts, and fall into two general categories: "lossy" and "lossless."
Lossless compression ensures that all image information is preserved, even if the file size is a bit larger as a result. Lossy compression, by contrast, can create file sizes that are significantly smaller, but achieves this by selectively
discarding image data. The resulting compressed file is therefore no longer identical to the original. Visible differences between these compressed files and their original are termed "compression artifacts."

JPEG FILE FORMAT


JPEG stands for "Joint Photographic Experts Group" and, as its name suggests, was specifically developed for storing photographic images. It has also become a standard format for storing images in digital cameras and displaying
photographic images on internet web pages. JPEG files are significantly smaller than those saved as TIFF, however this comes at a cost since JPEG employs lossy compression. A great thing about JPEG files is their flexibility. The
JPEG file format is really a toolkit of options whose settings can be altered to fit the needs of each image.
JPEG files achieve a smaller file size by compressing the image in a way that retains detail which matters most, while discarding details deemed to be less visually impactful. JPEG does this by taking advantage of the fact that the human
eye notices slight differences in brightness more than slight differences in color. The amount of compression achieved is therefore highly dependent on the image content; images with high noise levels or lots of detail will not be as easily
compressed, whereas images with smooth skies and little texture will compress very well.

Image with Fine Detail Image without Fine Detail


(Less Effective JPEG Compression) (More Effective JPEG
Compression)

It is also helpful to get a visual intuition for how varying degrees of compression impact the quality of your image. At 100%, you will barely notice any difference between the compressed and uncompressed image below, if at all. Notice
how the JPEG algorithm prioritizes prominent high-contrast edges at the expense of more subtle textures. As the compression quality decreases, the JPEG algorithm is forced to sacrifice the quality of more and more visually prominent
textures in order to continue decreasing the file size.
Choose Compression Quality: 100% 80% 60% 30% 10%

ORIGINAL IMAGE COMPRESSED IMAGE
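If you would like to experiment with this trade-off on your own images, the sketch below uses the third-party Pillow library (whose save() method accepts a JPEG quality setting) to write a hypothetical source file at several quality levels and report the resulting file sizes:

    import os
    from PIL import Image   # third-party "Pillow" library

    # "original.tif" is a hypothetical 8-bit source image.
    img = Image.open("original.tif").convert("RGB")
    for quality in (100, 80, 60, 30, 10):
        out = f"compressed_q{quality}.jpg"
        img.save(out, "JPEG", quality=quality)
        print(f"quality {quality:3d}: {os.path.getsize(out) / 1024:.0f} KB")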

TIFF FILE FORMAT


TIFF stands for "Tagged Image File Format" and is a standard in the printing and publishing industry. TIFF files are significantly larger than their JPEG counterparts, and can be either uncompressed or compressed using lossless
compression. Unlike JPEG, TIFF files can have a bit depth of either 16-bits per channel or 8-bits per channel, and multiple layered images can be stored in a single TIFF file.
TIFF files are an excellent option for archiving intermediate files which you may edit later, since they introduce no compression artifacts. Many cameras have an option to create images as TIFF files, but these can consume excessive space
compared to the same JPEG file. If your camera supports the RAW file format this is a superior alternative, since these are significantly smaller and can retain even more information about your image.

USEFUL TIPS
• Only save an image using a lossy compression once all other image editing has been completed, since many image manipulations can amplify compression artifacts.
• Avoid compressing a file multiple times, since compression artifacts may accumulate and progressively degrade the image. For such cases, the JPEG algorithm will also produce larger and larger files
at the same compression level.
• Ensure that image noise levels are as low as possible, since this will produce dramatically smaller JPEG files.

48.RAW FILE FORMAT -


The RAW file format is digital photography's equivalent of a negative in film photography: it contains untouched, "raw" pixel information straight from the digital camera's sensor. The RAW file format has yet to undergo demosaicing,
and so it contains just one red, green, or blue value at each pixel location. Digital cameras normally "develop" this RAW file by converting it into a full color JPEG or TIFF image file, and then store the converted file in your memory
card. Digital cameras have to make several interpretive decisions when they develop a RAW file, and so the RAW file format offers you more control over how the final JPEG or TIFF image is generated. This section aims to illustrate
the technical advantages of RAW files, and makes suggestions about when to use the RAW file format.

OVERVIEW
A RAW file is developed into a final JPEG or TIFF image in several steps, each of which may contain several irreversible image adjustments. One key advantage of RAW is that it allows the photographer to postpone applying these
adjustments-- giving them the flexibility to apply these later, in a way which best suits each image. The following diagram illustrates the sequence of adjustments:
Demosaicing & White Balance  —>  Tone Curves, Contrast, Color Saturation, Sharpening  —>  Conversion to 8-bit & JPEG Compression

Demosaicing and white balance involve interpreting and converting the Bayer array into an image with all three colors at each pixel, and occur in the same step. The Bayer array is what makes the first image appear more pixelated than
the other two, and gives the image a greenish tint.
Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly--
twice the light intensity produces twice the response in the camera sensor. This is why the first and second images above look so much darker than the third. In order for the numbers recorded within a digital camera to be shown as we
perceive them, tone curves need to be applied.
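A minimal sketch of such a tone curve, assuming a simple power-law (gamma) mapping; real RAW converters use more elaborate curves:

    import numpy as np

    def apply_tone_curve(linear, gamma=2.2):
        """Map linear sensor values (0..1) onto perceptually spaced display values (0..1)."""
        return np.clip(linear, 0.0, 1.0) ** (1.0 / gamma)

    # Quadrupling the recorded intensity comes out as roughly a doubling in encoded lightness:
    linear_values = np.array([0.25, 0.5, 1.0])
    print(apply_tone_curve(linear_values))   # approximately [0.53, 0.73, 1.00]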
Color saturation and contrast may also be adjusted, depending on the setting within your camera. The image is then sharpened to offset the softening caused by demosaicing, which is visible in the second image.
The high bit depth RAW image is then converted into 8-bits per channel, and compressed into a JPEG based on the compression setting within your camera. Up until this step, RAW image information most likely resided within the
digital camera's memory buffer.
There are several advantages to performing any of the above RAW conversion steps afterwards on a personal computer, as opposed to within a digital camera. The next sections describe how using RAW files can enhance these RAW
conversion steps.

DEMOSAICING
Demosaicing is a very processor-intensive step, and so the best demosaicing algorithms require more processing power than is practical within today's digital cameras. Most digital cameras therefore take quality-compromising shortcuts
to convert a RAW file into a TIFF or JPEG in a reasonable amount of time. Performing the demosaicing step on a personal computer allows for the best algorithms since a PC has many times more processing power than a typical digital
camera. Better algorithms can squeeze a little more out of your camera sensor by producing more resolution, less noise, better small-scale color accuracy and reduced moiré. Note the resolution advantage shown below:

JPEG
(in-camera)

RAW

Ideal

Images from actual camera tests with a Canon EOS 20D using an ISO 12233 resolution test chart.
Differences between RAW and JPEG resolution may vary with camera model and conversion software.
The in-camera JPEG image is not able to resolve lines as closely spaced as those in the RAW image. Even so, a RAW file cannot achieve the ideal lines shown, because the process of demosaicing always introduces some softening to the
image. Only sensors which capture all three colors at each pixel location could achieve the ideal image shown at the bottom (such as Foveon-type sensors).
FLEXIBLE WHITE BALANCE
White balance is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Color casts within JPEG images can often be removed in post-processing, but at the cost of
bit depth and color gamut. This is because the white balance has effectively been set twice: once in RAW conversion and then again in post-processing. RAW files give you the ability to set the white balance of a photo *after* the
picture has been taken-- without unnecessarily destroying bits.

HIGH BIT DEPTH


Digital cameras actually record each color channel with more precision than the 8-bits (256 levels) per channel used for JPEG images (see "Understanding Bit Depth"). Most current cameras capture each color with 12-bits of precision
(2^12 = 4096 levels) per color channel, providing 16 times as many levels as could be achieved with an in-camera JPEG. Higher bit depth decreases the susceptibility to posterization, and increases your flexibility when choosing a
color space and in post-processing.

DYNAMIC RANGE & EXPOSURE COMPENSATION


The RAW file format usually provides considerably more "dynamic range" than a JPEG file, depending on how the camera creates its JPEG. Dynamic range refers to the range of light to dark which can be captured by a camera before
becoming completely white or black, respectively. Since the raw color data has not been converted into logarithmic values using curves (see overview section above), the exposure of a RAW file can be adjusted slightly-- after the photo
has been taken. Exposure compensation can correct for metering errors, or can help bring out lost shadow or highlight detail. The following example was taken directly into the setting sun, and shows the same RAW file with -1 stop, 0
(no change), and +1 stop exposure compensation. Move your mouse over each to see how exposure compensation affects the image:

Apply Exposure Compensation: -1.0 none +1.0

Note: +1 or -1 stop refers to a doubling or halving of the light used for an exposure, respectively.
A stop can also be listed in terms of EV (exposure value), and so +1 stop is equivalent to +1 EV.
Note the broad range of shadow and highlight detail across the three images. Similar results could not be achieved by merely brightening or darkening a JPEG file-- both in dynamic range and in the smoothness of tones. A graduated
neutral density filter (see tutorial on camera lens filters) could then be used to better utilize this broad dynamic range.

ENHANCED SHARPENING
Since a RAW file is untouched, sharpening has not been applied within the camera. Much like demosaicing, better sharpening algorithms are often far more processor intensive. Sharpening performed on a personal computer can thus
create fewer halo artifacts for an equivalent amount of sharpening (see "Sharpening Using an Unsharp Mask" for examples of sharpening artifacts).
Since sharpness depends on the intended viewing distance of your image, the RAW file format also provides more control over what type and how much sharpening is applied (given your purpose). Sharpening is usually the last post-
processing step since it cannot be undone, so having a pre-sharpened JPEG is not optimal.

LOSSLESS COMPRESSION
The RAW file format uses a lossless compression, and so it does not suffer from the compression artifacts visible with "lossy" JPEG compression. RAW files contain more information and achieve better compression than TIFF, but
without the compression artifacts of JPEG.
Compression: Lossless Lossy

Image shown at 200%. Lossy JPEG compression at 60% in Adobe Photoshop.


Note: Kodak and Nikon employ a slightly lossy RAW compression algorithm, although any artifacts are much lower than would be perceived with a similar JPEG image. The efficiency of RAW compression also varies with digital
camera manufacturer.

DISADVANTAGES
• RAW files are much larger than similar JPEG files, and so fewer photos can fit within the same memory card.
• RAW files are more time consuming since they may require manually applying each conversion step.
• RAW files often take longer to be written to a memory card since they are larger, therefore most digital cameras may not achieve the same frame rate as with JPEG.
• RAW files cannot be given to others immediately since they require specific software to load them, therefore it may be necessary to first convert them into JPEG.
• RAW files require a more powerful computer with more temporary memory (RAM).

OTHER CONSIDERATIONS
One problem with the RAW file format is that it is not very standardized. Each camera has its own proprietary RAW file format, and so one program may not be able to read all formats. Fortunately, Adobe has announced a digital
negative (DNG) specification which aims to standardize the RAW file format. In addition, any camera which has the ability to save RAW files should come with its own software to read them.
Good RAW conversion software can perform batch processes and often automates all conversion steps except those which you choose to modify. This can mitigate or even eliminate the ease of use advantage of JPEG files.
Many newer cameras can save both RAW and JPEG images simultaneously. This provides you with an immediate final image, but retains the RAW "negative" just in case more flexibility is desired later.

SUMMARY
So which is better: RAW or JPEG? There is no single answer, as this depends on the type of photography you are doing. In most cases, RAW files will provide the best solution due to their technical advantages and the decreasing cost of
large memory cards. RAW files give the photographer far more control, but with this comes the trade-off of speed, storage space and ease of use. The RAW trade-off is sometimes not worth it for sports and press photographers, although
landscape and most fine art photographers often choose RAW in order to maximize the image quality potential of their digital camera.

49.CAMERA HISTOGRAMS: TONES & CONTRAST -


Understanding image histograms is probably the single most important concept to become familiar with when working with pictures from a digital camera. A histogram can tell you whether or not your image has been properly exposed,
whether the lighting is harsh or flat, and what adjustments will work best. It will improve not only your skills on the computer, but your skills as a photographer as well.
Each pixel in an image has a color which has been produced by some combination of the primary colors red, green, and blue (RGB). Each of these colors can have a brightness value ranging from 0 to 255 for a digital image with a bit
depth of 8-bits. An RGB histogram results when the computer scans through each of these RGB brightness values and counts how many are at each level from 0 through 255. Other types of histograms exist, although all will have the
same basic layout as the histogram example shown below.
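For the programmatically inclined, here is a minimal sketch of how such a count could be produced with NumPy; the tiny test image is hypothetical:

    import numpy as np

    def rgb_histogram(image):
        """image: (height, width, 3) array of 8-bit values; returns counts for levels 0-255."""
        counts, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
        return counts

    # Hypothetical 4x4 test image: all black except one white pixel.
    test = np.zeros((4, 4, 3), dtype=np.uint8)
    test[0, 0] = (255, 255, 255)
    print(rgb_histogram(test)[[0, 255]])   # 45 values at level 0, 3 values at level 255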
TONES
The region where most of the brightness values are present is called the "tonal range." Tonal range can vary drastically from image to image, so developing an intuition for how numbers map to actual brightness values is often critical—
both before and after the photo has been taken. There is no one "ideal histogram" which all images should try to mimic; histograms should merely be representative of the tonal range in the scene and what the photographer wishes to
convey.

The above image is an example which contains a very broad tonal range, with markers to illustrate where regions in the scene map to brightness levels on the histogram. This coastal scene contains very few midtones, but does have
plentiful shadow and highlight regions in the lower left and upper right of the image, respectively. This translates into a histogram which has a high pixel count on both the far left and right-hand sides.
Lighting is often not as extreme as the last example. Conditions of ordinary and even lighting, when
combined with a properly exposed subject, will usually produce a histogram which peaks in the
centre, gradually tapering off into the shadows and highlights. With the exception of the direct
sunlight reflecting off the top of the building and off some windows, the boat scene to the right is quite
evenly lit. Most cameras will have no trouble automatically reproducing an image which has a
histogram similar to the one shown below.
HIGH AND LOW KEY IMAGES
Although most cameras will produce midtone-centric histograms when in an automatic exposure mode, the distribution of peaks within a histogram also depends on the tonal range of the subject matter. Images where most of the tones
occur in the shadows are called "low key," whereas with "high key" images most of the tones are in the highlights.

Before the photo has been taken, it is useful to assess whether or not your subject matter qualifies as high or low key. Since cameras measure reflected as opposed to incident light, they are unable to assess the absolute brightness of their
subject. As a result, many cameras contain sophisticated algorithms which try to circumvent this limitation, and estimate how bright an image should be. These estimates frequently result in an image whose average brightness is placed
in the midtones. This is usually acceptable, however high and low key scenes frequently require the photographer to manually adjust the exposure, relative to what the camera would do automatically. A good rule of thumb is that you
will need to manually adjust the exposure whenever you want the average brightness in your image to appear brighter or darker than the midtones.
The following set of images would have resulted if I had used my camera's auto exposure setting. Note how the average pixel count is brought closer to the midtones.
Most digital cameras are better at reproducing low key scenes since they prevent any region from becoming so bright that it turns into solid white, regardless of how dark the rest of the image might become as a result. High key scenes,
on the other hand, often produce images which are significantly underexposed. Fortunately, underexposure is usually more forgiving than overexposure (although this compromises your signal to noise ratio). Detail can never be
recovered when a region becomes so overexposed that it becomes solid white. When this occurs the highlights are said to be "clipped" or "blown."

The histogram is a good tool for knowing whether clipping has occurred since you can readily see when the highlights are pushed to the edge of the chart. Some clipping is usually ok in regions such as specular reflections on water or
metal, when the sun is included in the frame or when other bright sources of light are present. Ultimately, the amount of clipping present is up to the photographer and what they wish to convey.

CONTRAST
A histogram can also describe the amount of contrast. Contrast is a measure of the difference in brightness between light and dark areas in a scene. Broad histograms reflect a scene with significant contrast, whereas narrow histograms
reflect less contrast and may appear flat or dull. This can be caused by any combination of subject matter and lighting conditions. Photos taken in the fog will have low contrast, while those taken under strong daylight will have higher
contrast.
Contrast can have a significant visual impact on an image by emphasizing texture, as shown in the image above. The high contrast water has deeper shadows and more pronounced highlights, creating texture which "pops" out at the
viewer.
Contrast can also vary for different regions within the same image due to both subject matter and lighting. We can partition the previous image of a boat into three separate regions—each with its own distinct histogram.

The upper region contains the most contrast of all three because the image is created from light which does not first reflect off the surface of water. This produces deeper shadows underneath the boat and its ledges, and stronger highlights
in the upward-facing and directly exposed areas. The middle and bottom regions are produced entirely from diffuse, reflected light and thus have lower contrast; similar to if one were taking photographs in the fog. The bottom region has
more contrast than the middle—despite the smooth and monotonic blue sky—because it contains a combination of shade and more intense sunlight. Conditions in the bottom region create more pronounced highlights, but it still lacks the
deep shadows of the top region. The sum of the histograms in all three regions creates the overall histogram shown before.
For additional information on histograms, visit part 2 of this tutorial:
"Understanding Camera Histograms: Luminance & Color"

50.CAMERA HISTOGRAMS: LUMINANCE & COLOR -


This section is designed to help you develop a better understanding of how luminance and color both vary within an image, and how this translates into the relevant histogram. Although RGB histograms are the most commonly used
histogram, other types are more useful for specific purposes.
The image below is shown alongside several of the other histogram types which you are likely to encounter. Move your mouse over the labels at the bottom to toggle which type of color histogram is displayed. When you change to one
of the color histograms a different image will be shown. This new image is a grayscale representation of how that color's intensity is distributed throughout the image. Pay particular attention to how each color changes the brightness
distribution within the image, and how the colors within each region influence this brightness.
Choose: RED GREEN BLUE ALL

LUMINANCE HISTOGRAMS
Luminance histograms are more accurate than RGB histograms at describing the perceived brightness distribution or "luminosity" within an image. Luminance takes into account the fact that the human eye is more sensitive to green light
than red or blue light. View the above example again for each color and you will see that the green intensity levels within the image are most representative of the brightness distribution for the full color image. This is also reflected by the
fact that the luminance histogram matches the green histogram more closely than any other color. Luminance correctly predicts that the following stepped gradient gradually increases in lightness, whereas a simple addition of each RGB
value would give the same intensity at each rectangle.

dark light
est est

How is a luminance histogram produced? First, each pixel is converted so that it represents a luminosity based on a weighted average of the three colors at that pixel. This weighting assumes that green represents 59% of the perceived
luminosity, while the red and blue channels account for just 30% and 11%, respectively. Move your mouse over "convert to luminosity" below the example image to see what this calculation looks like when performed for each pixel.
Once all pixels have been converted into luminosity, a luminance histogram is produced by counting how many pixels are at each luminance value—identical to how a histogram is produced for a single color.
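A minimal sketch of that calculation, using the 30/59/11 weights quoted above (weights vary slightly between standards):

    import numpy as np

    def luminance_histogram(image):
        """image: (height, width, 3) array of 8-bit RGB values; returns counts for levels 0-255."""
        weights = np.array([0.30, 0.59, 0.11])          # perceived contribution of R, G, B
        luminosity = image.astype(float) @ weights      # weighted average per pixel
        counts, _ = np.histogram(luminosity, bins=256, range=(0, 256))
        return counts

    # A pure-blue patch never comes anywhere near full brightness:
    blue_patch = np.zeros((10, 10, 3), dtype=np.uint8)
    blue_patch[..., 2] = 255
    print(luminance_histogram(blue_patch).nonzero()[0])   # [28], since 255 * 0.11 is ~28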
An important difference to take away from the above calculation is that while luminance histograms keep track of the location of each color pixel, RGB histograms discard this information. An RGB histogram produces three independent
histograms and then adds them together, irrespective of whether or not each color came from the same pixel. To illustrate this point we will use an image which the two types of histograms interpret quite differently.
The above image contains many patches of pure color. At the interior of each color patch the intensity reaches a maximum of 255, so all patches have significant color clipping, but only in that one channel. Even though this image contains no
pure white pixels, the RGB histogram shows strong clipping—so much that if this were a photograph the image would appear significantly overexposed. This is because the RGB histogram does not take into account the fact that all three
colors never clip in the same place.
The luminance histogram tells an entirely different story by showing no pixels anywhere near full brightness. It also shows three distinct peaks—one for each color that has become significantly clipped. Since this image contains
primarily blue, then red, then least of all green, the relative heights clearly show which color belongs where. Also note that the relative horizontal position of each peak is in accordance with the percentages used in the weighted average
for calculating luminance: 59%, 30%, and 11%.
So which one is better? If we cared about color clipping, then the RGB histogram clearly warns us while the luminance histogram provides no red flags. On the other hand, the luminance histogram accurately tells us that no pixel is
anywhere near full black or white. Each has its own use, and the two are best used together. Since most digital cameras show only an RGB histogram, just be aware of its shortcomings. As a rule of thumb, the more intense and pure
the colors are in your image, the more a luminance and RGB histogram will differ. Pay careful attention when your subject contains strong shades of blue since you will rarely be able to see blue channel clipping with luminance
histograms.

COLOR HISTOGRAMS
Whereas RGB and luminance histograms use all three color channels, a color histogram describes the brightness distribution for any of these colors individually. This can be more helpful when trying to assess whether or not individual
colors have been clipped.
View Channel: RED GREEN BLUE ALL LUMINOSITY

View Histogram: RGB LUMINOSITY


The petals of the red flowers caught direct sunlight, so their red color became clipped, even though the rest of the image remained within the histogram. Regions where individual color channels are clipped lose all texture caused by that
particular color. However, these clipped regions may still retain some luminance texture if the other two colors have not also been clipped. Individual color clipping is often not as objectionable as when all three colors clip, although this
all depends upon what you wish to convey.
RGB histograms can show if an individual color channel clips, however they do not tell you if this is due to an individual color or all three. Color histograms amplify this effect and clearly show the type of clipping. Move your mouse
over the labels above to compare the luminance and RGB histograms, to view the image in terms of only a single color channel, and to view the image luminance. Notice how the intensity distribution for each color channel varies
drastically in regions of nearly pure color. The strength and purity of colors within this image cause the RGB and luminance histograms to differ significantly.
For additional information on histograms, visit part 1 of this tutorial:
"Understanding Camera Histograms - Tones and Contrast"

51.DIGITAL CAMERA IMAGE NOISE -


"Image noise" is the digital equivalent of film grain for analogue cameras. Alternatively, one can think of it as analogous to the subtle background hiss you may hear from your audio system at full volume. For digital images, this noise
appears as random speckles on an otherwise smooth surface and can significantly degrade image quality. Although noise often detracts from an image, it is sometimes desirable since it can add an old-fashioned, grainy look which is
reminiscent of early film. Some noise can also increase the apparent sharpness of an image. Noise increases with the sensitivity setting in the camera, length of the exposure, temperature, and even varies amongst different camera
models.

CONCEPT
Some degree of noise is always present in any electronic device that transmits or receives a "signal." For televisions this signal is the broadcast data transmitted over cable or received at the antenna; for digital cameras, the signal is the
light which hits the camera sensor. Even though noise is unavoidable, it can become so small relative to the signal that it appears to be nonexistent. The signal to noise ratio (SNR) is a useful and universal way of comparing the relative
amounts of signal and noise for any electronic system; high ratios will have very little visible noise whereas the opposite is true for low ratios. The sequence of images below shows a camera producing a very noisy picture of the word
"signal" against a smooth background. The resulting image is shown along with an enlarged 3-D representation depicting the signal above the background noise.

Original
Image

Camera
Image

Colorful 3-D representation of the camera's image

The image above has a sufficiently high SNR to clearly separate the image information from background noise. A low SNR would produce an image where the "signal" and noise are more comparable and thus harder to discern from one
another.
Camera
Image
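As a rough sketch of how an SNR might be estimated from a patch that should be a uniform grey (treating the mean as signal and the standard deviation of the fluctuations as noise; real measurement protocols are more involved):

    import numpy as np

    rng = np.random.default_rng(0)

    def snr_db(patch):
        """Treat the patch mean as signal and its standard deviation as noise."""
        return 20 * np.log10(patch.mean() / patch.std())

    clean = np.full((100, 100), 128.0)                           # what a uniform grey should record
    low_noise = clean + rng.normal(scale=10, size=clean.shape)
    high_noise = clean + rng.normal(scale=40, size=clean.shape)

    print(f"{snr_db(low_noise):.1f} dB vs {snr_db(high_noise):.1f} dB")  # higher SNR = cleaner patch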

TERMINOLOGY
A camera's "ISO setting" or "ISO speed" is a standard which describes its absolute sensitivity to light. ISO settings are usually listed as factors of 2, such as ISO 50, ISO 100 and ISO 200 and can have a wide range of values. Higher
numbers represent greater sensitivity and the ratio of two ISO numbers represents their relative sensitivity, meaning a photo at ISO 200 will take half as long to reach the same level of exposure as one taken at ISO 100 (all other settings
being equal). ISO speed is analogous to ASA speed for different films, however a single digital camera can capture images at several different ISO speeds. This is accomplished by amplifying the image signal in the camera, however this
also amplifies noise and so higher ISO speeds will produce progressively more noise.
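A small sketch of that relative-sensitivity arithmetic, assuming the aperture and lighting stay fixed and the starting exposure is hypothetical:

    base_iso, base_shutter = 100, 1 / 60       # hypothetical starting exposure

    for iso in (100, 200, 400, 800):
        shutter = base_shutter * base_iso / iso
        print(f"ISO {iso:4d}: 1/{round(1 / shutter)} s")
    # ISO 100: 1/60 s, ISO 200: 1/120 s, ISO 400: 1/240 s, ISO 800: 1/480 s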

TYPES OF NOISE
Digital cameras produce three common types of noise: random noise, "fixed pattern" noise, and banding noise. The three qualitative examples below show pronounced and isolating cases for each type of noise against an ordinarily
smooth grey background.

Random Noise (Short Exposure, High ISO Speed)
Fixed Pattern Noise (Long Exposure, Low ISO Speed)
Banding Noise (Susceptible Camera, Brightened Shadows)

Random noise is characterized by intensity and color fluctuations above and below the actual image intensity. There will always be some random noise at any exposure length, and it is most influenced by ISO speed. The pattern of random noise changes even if the exposure settings are identical.
Fixed pattern noise includes what are called "hot pixels," which are defined as such when a pixel's intensity far surpasses that of the ambient random noise fluctuations. Fixed pattern noise generally appears in very long exposures and is
exacerbated by higher temperatures. Fixed pattern noise is unique in that it will show almost the same distribution of hot pixels if taken under the same conditions (temperature, length of exposure, ISO speed).
Banding noise is highly camera-dependent, and is noise which is introduced by the camera when it reads data from the digital sensor. Banding noise is most visible at high ISO speeds and in the shadows, or when an image has been
excessively brightened. Banding noise can also increase for certain white balances, depending on camera model.
Although fixed pattern noise appears more objectionable, it is usually easier to remove since it is repeatable. A camera's internal electronics just have to know the pattern, and they can subtract this noise away to reveal the true image. Fixed pattern noise is much less of a problem than random noise in the latest generation of digital cameras, however even the slightest amount can be more distracting than random noise.
The less objectionable random noise is usually much more difficult to remove without degrading the image. Computers have a difficult time discerning random noise from fine texture patterns such as those occurring in dirt or foliage, so
if you remove the random noise you often end up removing these textures as well. Programs such as Neat Image and Noise Ninja can be remarkably good at reducing noise while still retaining actual image information. Please also see
my section on image averaging for another technique to reduce noise.
Understanding the noise characteristics of a digital camera will help you know how noise will influence your photographs. The following sections discuss the tonal variation of noise, "chroma" and luminance noise, and the frequency and
magnitude of image noise. Examples of noise variation based on ISO and color channel are also shown for three different digital cameras.

CHARACTERISTICS
Noise not only changes depending on exposure setting and camera model, but it can also vary within an individual image. For digital cameras, darker regions will contain more noise than the brighter regions; with film the inverse is true.
Each Region at 100% Zoom (regions 1-4)

Note how noise becomes less pronounced as the tones become brighter. Brighter regions have a stronger signal due to more light, resulting in a higher overall SNR. This means that images which are underexposed will have more visible noise-- even if you brighten them up to a more natural level afterwards. On the other hand, overexposed images will have less visible noise, which can actually be advantageous-- assuming that you can darken them later and that no region has become solid white where there should be texture (see "Understanding Histograms, Part 1").
Noise is also composed of two elements: fluctuations in color and luminance. Color or "chroma" noise is usually more unnatural in appearance and can render images unusable if not kept under control. The example below shows noise
on what was originally a neutral grey patch, along with the separate effects of chroma and luminance noise.

Image Noise | Luminance Noise | Chroma Noise

The relative amount of chroma and luminance noise can vary significantly from one camera model to another. Noise reduction software can be used to selectively reduce both chroma and luminance noise, however complete elimination of luminance noise can result in unnatural or "plasticky" looking images.
Noise fluctuations can also vary in both their magnitude and spatial frequency, although spatial frequency is often a neglected characteristic. The term "fine-grained" was used frequently with film to describe noise whose fluctuations
occur over short distances, which is the same as having a high spatial frequency. The example below shows how the spatial frequency can change the appearance of noise.

Low Frequency Noise (Coarser Texture) -- Standard Deviation: 11.7
High Frequency Noise (Finer Texture) -- Standard Deviation: 12.5

If the two patches above were compared based solely on the magnitude of their fluctuations (as is done in most camera reviews), then the patch on the right would seem to have higher noise. Upon visual inspection, however, the patch on the right actually appears much less noisy than the patch on the left. This is due entirely to the spatial frequency of the noise in each patch. Even though noise's spatial frequency is underemphasized, its magnitude still has a very prominent effect. The next example shows two patches which have different standard deviations, but the same spatial frequency.

Low Magnitude Noise (Smoother Texture) -- Standard Deviation: 11.7
High Magnitude Noise (Rougher Texture) -- Standard Deviation: 20.8

Note how the patch on the left appears much smoother than the patch on the right. High magnitude noise can overpower fine textures such as fabric or foliage, and can be more difficult to remove without over softening the image. The
magnitude of noise is usually described based on a statistical measure called the "standard deviation," which quantifies the typical variation a pixel will have from its "true" value. This concept can also be understood by looking at the
histogram for each patch:
RGB Histogram (low vs. high noise magnitude)

If each of the above patches had zero noise, all pixels would be in a single line located at the mean. As noise levels increase, so does the width of this histogram. We present this for the RGB histogram, although the same comparison can
also be made for the luminosity and individual color histograms. For more information on types of histograms, please see: "Understanding Histograms: Luminosity and Color."
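
The sketch below illustrates this idea with two synthetic grey patches generated in NumPy, using the standard deviations from the examples above as the noise strengths: the stronger the noise, the larger the measured standard deviation and the wider the spread of values in the histogram. The numbers are illustrative, not measurements from any camera.

```python
import numpy as np

# Sketch: two synthetic grey patches with the same mean but different
# noise magnitudes. The standard deviation quantifies the noise, and the
# spread of the histogram widens as the noise magnitude increases.
rng = np.random.default_rng(1)
mean_value = 128

for name, sigma in [("low magnitude", 11.7), ("high magnitude", 20.8)]:
    patch = np.clip(mean_value + rng.normal(0, sigma, 50_000), 0, 255)
    spread = np.percentile(patch, 99) - np.percentile(patch, 1)
    print(f"{name}: std dev = {patch.std():.1f}, histogram spread = {spread:.0f} levels")
```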

EXAMPLES
It is helpful to experiment with your camera so you can get a feel for how much noise is produced at a given ISO setting. The examples below show the noise characteristics for three different cameras against an otherwise smooth grey
patch.
Noise samples at ISO 100, ISO 200 and ISO 400 for each camera, shown for the red, green and blue channels individually and combined (best JPEG quality, daylight white balance and default sharpening):

Canon EOS 20D -- pixel area: 40 µm², released in 2004
Canon PowerShot A80 -- pixel area: 9.3 µm², released in 2003
Epson PhotoPC 800 -- pixel area: 15 µm², released in 1999


Note the differences due to camera model, color channel and ISO speed; each individual channel has quite a different amount of noise. The blue and green channels will usually have the highest and lowest noise, respectively, in digital cameras with Bayer arrays (see "Understanding Digital Sensors"). Also note how the Epson develops patches of color which are much more objectionable than noise caused only by brightness fluctuations.
You can also see that increasing ISO speed always produces higher noise for a given camera, however noise variation between cameras is more complex. The greater the area of a pixel in the camera sensor, the more light gathering
ability it will have-- thus producing a stronger signal. As a result, cameras with physically larger pixels will generally appear less noisy since the signal is larger relative to the noise. This is why cameras with more megapixels packed
into the same sized camera sensor will not necessarily produce a better looking image. On the other hand, a stronger signal does not necessarily lead to lower noise since it is the relative amounts of signal and noise that determine how
noisy an image will appear. Even though the Epson PhotoPC 800 has much larger pixels than the Canon PowerShot A80, it has visibly more noise-- especially at ISO 400. This is because the much older Epson camera had much higher
internal noise levels caused by less sophisticated electronics.
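
One way to build intuition for the pixel-area effect is a simplified photon (shot) noise model, in which the collected signal scales with pixel area and the photon noise scales with the square root of the signal, so relative SNR goes roughly as the square root of the area. The sketch below applies this to the pixel areas listed above; it deliberately ignores read-out noise, which is exactly what makes the much older Epson noisier despite its larger pixels.

```python
import math

# Idealized shot-noise model: photons collected scale with pixel area,
# and photon noise scales with the square root of the photon count, so
# SNR scales roughly with sqrt(area). Real cameras add read-out noise,
# which is why an older sensor with larger pixels can still be noisier.
def relative_snr(pixel_area_um2: float, reference_area_um2: float = 9.3) -> float:
    return math.sqrt(pixel_area_um2 / reference_area_um2)

for name, area in [("Canon EOS 20D", 40.0),
                   ("Canon PowerShot A80", 9.3),
                   ("Epson PhotoPC 800", 15.0)]:
    print(f"{name}: ~{relative_snr(area):.1f}x the SNR of a 9.3 um^2 pixel")
```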
Part 1 of this tutorial can be found at: "Image Noise: Concept and Types"

52.TUTORIALS: SHARPNESS -
Sharpness describes the clarity of detail in a photo, and can be a valuable creative tool for emphasizing texture. Proper photographic and post-processing technique can go a long way towards improving sharpness, although sharpness is
ultimately limited by your camera equipment, image magnification and viewing distance. Two fundamental factors contribute to the perceived sharpness of an image: resolution and acutance.
Acutance (High vs. Low) | Resolution (High vs. Low)

Acutance describes how quickly image information transitions at an edge, and so high acutance results in sharp transitions and detail with clearly defined borders. Resolution describes the camera's ability to distinguish between closely spaced elements of detail, such as the two sets of lines shown above.
For digital cameras, resolution is limited by your digital sensor, whereas acutance depends on both the quality of your lens and the type of post-processing. Acutance is the only aspect of sharpness which is still under your control after
the shot has been taken, so acutance is what is enhanced when you digitally sharpen an image (see Sharpening using an "Unsharp Mask").

COMPARISON
Photos require both high acutance and resolution to be perceived as critically sharp.

The following example is designed to give you a feel for how each influences your image:

Acutance: High Resolution: Low

Acutance: Low Resolution: High

Acutance: High Resolution: High

PROPERTIES OF SHARPNESS
Sharpness also depends on other factors which influence our perception of resolution and acutance. Image noise (or film grain) is usually detrimental to an image, however small amounts can actually increase the appearance of
sharpness. Consider the following example:
Low Noise, Soft | High Noise, Sharp

Although neither image has been sharpened, the image on the left appears softer and less detailed. Image noise can be both very fine and have very high acutance-- tricking the eye into thinking sharp detail is present.
Sharpness also depends on viewing distance. Images which are designed to be viewed from further away, such as posters or billboards, may have much lower resolution than fine art prints in a gallery, yet both may be perceived as sharp because of the viewing distance. Keep this property in mind when sharpening your image, as the optimal amount of sharpening may not necessarily be what looks best on your screen.
Sharpness is also significantly affected by your camera technique. Even small amounts of camera shake can dramatically reduce the sharpness of an image. Proper shutter speeds, use of a sturdy camera tripod and mirror lock-up can also significantly improve the sharpness of your prints.

53.TUTORIALS: WHITE BALANCE -


White balance (WB) is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Proper camera white balance has to take into account the "color temperature" of a light
source, which refers to the relative warmth or coolness of white light. Our eyes are very good at judging what is white under different light sources, however digital cameras often have great difficulty with auto white balance (AWB). An
incorrect WB can create unsightly blue, orange, or even green color casts, which are unrealistic and particularly damaging to portraits. Performing WB in traditional film photography requires attaching a different cast-removing filter for
each lighting condition, whereas with digital this is no longer required. Understanding digital white balance can help you avoid color casts created by your camera's AWB, thereby improving your photos under a wider range of lighting
conditions.

Incorrect White Balance Correct White Balance

BACKGROUND: COLOR TEMPERATURE


Color temperature describes the spectrum of light which is radiated from a "blackbody" with that surface temperature. A blackbody is an object which absorbs all incident light-- neither reflecting it nor allowing it to pass through. A
rough analogue of blackbody radiation in our day-to-day experience is heating a metal or stone: these are said to become "red hot" when they reach a certain temperature, and "white hot" at even higher temperatures. Similarly,
blackbodies at different temperatures also have varying color temperatures of "white light." Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum:
Relative intensity has been normalized for each temperature (in Kelvins).
Note how 5000 K produces roughly neutral light, whereas 3000 K and 9000 K produce light spectrums which shift to contain more orange and blue wavelengths, respectively. As the color temperature rises, the color distribution becomes
cooler. This may not seem intuitive, but results from the fact that shorter wavelengths contain light of higher energy.
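
For those curious about the underlying math, Planck's blackbody law can be evaluated directly. The sketch below (using NumPy, with standard physical constants and the three example temperatures above) shows how the normalized intensities at a blue, green and red wavelength shift from red-heavy at 3000 K to blue-heavy at 9000 K. This is an illustration only; real light sources merely approximate these curves.

```python
import numpy as np

# Sketch of Planck's law, to illustrate why higher color temperatures
# look "cooler": the spectrum shifts toward shorter (bluer) wavelengths.
h = 6.626e-34   # Planck constant (J s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def blackbody_spectrum(wavelength_nm, temperature_k):
    lam = wavelength_nm * 1e-9
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * temperature_k))

wavelengths = np.array([450.0, 550.0, 650.0])      # blue, green, red (nm)
for temp in (3000, 5000, 9000):
    spectrum = blackbody_spectrum(wavelengths, temp)
    spectrum /= spectrum.max()                      # normalize, as in the figure
    print(temp, "K -> relative intensity at 450/550/650 nm:", np.round(spectrum, 2))
```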
Why is color temperature a useful description of light for photographers, if they never deal with true blackbodies? Fortunately, light sources such as daylight and tungsten bulbs closely mimic the distribution of light created by
blackbodies, although others such as fluorescent and most commercial lighting depart from blackbodies significantly. Since photographers never use the term color temperature to refer to a true blackbody light source, the term is implied
to be a "correlated color temperature" with a similarly colored blackbody. The following table is a rule-of-thumb guide to the correlated color temperature of some common light sources:
Color Temperature    Light Source

1000-2000 K          Candlelight
2500-3500 K          Tungsten Bulb (household variety)
3000-4000 K          Sunrise/Sunset (clear sky)
4000-5000 K          Fluorescent Lamps
5000-5500 K          Electronic Flash
5000-6500 K          Daylight with Clear Sky (sun overhead)
6500-8000 K          Moderately Overcast Sky
9000-10000 K         Shade or Heavily Overcast Sky

IN PRACTICE: JPEG & TIFF FILES


Since some light sources do not resemble blackbody radiators, white balance uses a second variable in addition to color temperature: the green-magenta shift. Adjusting the green-magenta shift is often unnecessary under ordinary
daylight, however fluorescent and other artificial lighting may require significant green-magenta adjustments to the WB.
Fortunately, most digital cameras contain a variety of preset white balances, so you do not have to deal with color temperature and green-magenta shift during the critical shot. Commonly used settings are listed below:

Auto White Balance
Custom
Kelvin
Tungsten
Fluorescent
Daylight
Flash
Cloudy
Shade

The first three white balances allow for a range of color temperatures. Auto white balance is available in all digital cameras and uses a best guess algorithm within a limited range-- usually between 3000/4000 K and 7000 K. Custom white balance allows you to take a picture of a known gray reference under the same lighting, and then set that as the white balance for future photos. With "Kelvin" you can set the color temperature over a broad range.
The remaining six white balances are listed in order of increasing color temperature, however many compact cameras do not include a shade white balance. Some cameras also include a "Fluorescent H" setting, which is designed to work in newer daylight-calibrated fluorescents.
The description and symbol for the above white balances are just rough estimates for the actual lighting they work best under. In fact, cloudy could be used in place of daylight depending on the time of day, elevation, or degree of
haziness. In general, if your image appears too cool on your LCD screen preview (regardless of the setting), you can quickly increase the color temperature by selecting a symbol further down on the list above. If the image is still too
cool (or warm if going the other direction), you can resort to manually entering a temperature in the Kelvin setting.
If all else fails and the image still does not have the correct WB after inspecting it on a computer afterwards, you can adjust the color balance to remove additional color casts. Alternatively, one could click on a colorless reference (see the section on neutral references) with the "set gray point" dropper while using the "levels" tool in Photoshop. Either of these methods should be treated as a last resort, however, since they can severely reduce the bit depth of your image.

IN PRACTICE: THE RAW FILE FORMAT


By far the best white balance solution is to photograph using the RAW file format (if your camera supports it), as this allows you to set the WB *after* the photo has been taken. RAW files also allow one to set the WB using a broader range of color temperatures and green-magenta shifts.
Performing a white balance with a raw file is quick and easy. You can either adjust the temperature and green-magenta sliders until color casts are removed, or you can simply click on a neutral reference within the image (see next
section). Even if only one of your photos contains a neutral reference, you can click on it and then use the resulting WB settings for the remainder of your photos (assuming the same lighting).

CUSTOM WHITE BALANCE: CHOOSING A NEUTRAL REFERENCE


A neutral reference is often used for color-critical projects, or for situations where one anticipates auto white balance will encounter problems. Neutral references can either be parts of your scene (if you're lucky), or can be a portable item
which you carry with you. Below is an example of a fortunate reference in an otherwise bluish twilight scene.

On the other hand, pre-made portable references are almost always more accurate since one can easily be tricked into thinking an object is neutral when it is not. Portable references can be expensive and specifically designed for
photography, or may include less expensive household items. An ideal gray reference is one which reflects all colors in the spectrum equally, and can consistently do so under a broad range of color temperatures. An example of a pre-
made gray reference is shown below:

Common household neutral references include the underside of a lid from a coffee or Pringles container. These are both inexpensive and reasonably accurate, although custom-made photographic references are the best (such as the cards shown
above). Custom-made devices can be used to measure either the incident or reflected color temperature of the illuminant. Most neutral references measure reflected light, whereas a device such as a white balance meter or an "ExpoDisc"
can measure incident light (and can theoretically be more accurate).
Care should be taken when using a neutral reference with high image noise, since clicking on a seemingly gray region may actually select a colorful pixel caused by color noise:
Low Noise (Smooth Colorless Gray) | High Noise (Patches of Color)

If your software supports it, the best solution for white balancing with noisy images is to use the average of pixels within a noisy gray region as your reference. This can be either a 3x3 or 5x5 pixel average if using Adobe Photoshop.
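
A minimal sketch of this averaging idea is shown below. It assumes the image is an RGB NumPy array and that the chosen coordinates fall inside a nominally grey region; the function name and all values are purely illustrative, not part of any particular raw converter.

```python
import numpy as np

# Sketch: average a small region of a noisy grey reference before using
# it for white balance, so that a single colorful noise pixel does not
# skew the result. 'image' is assumed to be an RGB array (H x W x 3).
def gray_reference(image: np.ndarray, x: int, y: int, size: int = 5) -> np.ndarray:
    half = size // 2
    region = image[y - half:y + half + 1, x - half:x + half + 1, :]
    return region.reshape(-1, 3).mean(axis=0)      # average R, G and B

# Example with a synthetic noisy grey patch:
rng = np.random.default_rng(2)
image = np.clip(128 + rng.normal(0, 15, (50, 50, 3)), 0, 255)
r, g, b = gray_reference(image, 25, 25)
print("averaged reference:", round(r, 1), round(g, 1), round(b, 1))
```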

NOTES ON AUTO WHITE BALANCE


Certain subjects create problems for a digital camera's auto white balance-- even under normal daylight conditions. One example is if the image already has an overabundance of warmth or coolness due to unique subject matter. The
image below illustrates a situation where the subject is predominantly red, and so the camera mistakes this for a color cast induced by a warm light source. The camera then tries to compensate for this so that the average color of the
image is closer to neutral, but in doing so it unknowingly creates a bluish color cast on the stones. Some digital cameras are more susceptible to this than others.

Automatic White Balance Custom White Balance

(Custom white balance uses an 18% gray card as a neutral reference.)


A digital camera's auto white balance is often more effective when the photo contains at least one white or bright colorless element. Of course, you should not change your composition just to include a colorless object; just be aware that its absence may cause problems with the auto white balance. Without the white boat in the image below, the camera's auto white balance mistakenly created an image with a slightly warmer color temperature.

IN MIXED LIGHTING
Multiple illuminants with different color temperatures can further complicate performing a white balance. Some lighting situations may not even have a truly "correct" white balance, and will depend upon where color accuracy is most
important.
Under mixed lighting, auto white balance usually calculates an average color temperature for the entire scene, and then uses this as the white balance. This approach is usually acceptable, however auto
white balance tends to exaggerate the difference in color temperature for each light source, as compared with what we perceive with our eyes.
Exaggerated differences in color temperature are often most apparent with mixed indoor and natural lighting. Critical images may even require a different white balance for each lighting region. On the
other hand, some may prefer to leave the color temperatures as is.
Note how the building to the left is quite warm, whereas the sky is somewhat cool. This is because the white balance was set based on the moonlight-- bringing out the warm color temperature of the artificial lighting below. White balancing based on the natural light often yields a more realistic photograph; had the stone building been used as the white balance reference instead, the sky would have become unrealistically blue.


54.IMAGE POSTERIZATION -
Posterization occurs when an image's apparent bit depth has been decreased so much that it has a visual impact. The term posterization is used because it can affect your photo similarly to how colors may look in a mass-produced poster, where the print process uses a limited number of color inks. The effect ranges from subtle to quite pronounced, and one's tolerance for posterization may vary.
Any process which "stretches" the histogram has the potential to cause posterization. Stretching can be caused by techniques such as levels and curves in Photoshop, or by converting an image from one color space into another as part of
color management. The best way to ward off posterization is to keep any histogram manipulation to a minimum.
Visually inspecting an image is a good way to detect posterization, however the best objective tool is the histogram. Although RGB histograms will show extreme cases, the individual color histograms are your most sensitive means of
diagnosis. The two RGB histograms below demonstrate an extreme case, where a previously narrow histogram has been stretched to almost three times its original width.

Note the tell-tale sign of posterization on the right: vertical spikes which look similar to the teeth of a comb. Why does it look like this? Recall that each channel in an 8-bit image can only have discrete color intensities from 0 to 255 (see
"Understanding Bit Depth"). A stretched histogram is forced to spread these discrete levels over a broader range than exists in the original image. This creates gaps where there is no longer any intensity information left in the image. As
an example, if we were to take a color histogram which ranged from 120 to 130 and then stretched it from 100 to 150 (5x its original width), then there would be peaks at every increment of 5 (100, 105, 110, etc) and no pixels in between.
Visually, this would force colors to "jump" or form steps in what would otherwise be smooth color gradations. Keep in mind though that all digital images have discrete color levels-- it is only when these levels become spread far enough apart that our eye is able to perceive them.
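
The worked example above can be reproduced with a few lines of code, assuming a simple linear stretch of 8-bit values (illustrative only):

```python
import numpy as np

# Sketch of the worked example above: 8-bit levels originally spanning
# 120-130 are stretched onto 100-150, leaving occupied levels only at
# every increment of 5, with gaps ("comb teeth") in between.
values = np.arange(120, 131)                              # narrow original histogram
stretched = np.round((values - 120) / 10 * 50 + 100).astype(int)

print(stretched)                                          # 100, 105, 110, ... 150
missing = sorted(set(range(100, 151)) - set(stretched.tolist()))
print(len(missing), "of the 51 levels between 100 and 150 are now empty")
```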
Posterization occurs more easily in regions of gradual color transitions, such as smooth skies. These regions require more color levels to describe them, and so any decrease in levels can have a visual impact on the image.

USEFUL TIPS
• Using images with 16-bits per channel can greatly reduce the risk of posterization since this provides up to 256 times as many color levels as 8-bit. Realistically, you can expect anywhere from 4-16
times as many levels if your image originated from a digital camera since most capture at 10 to 12-bits per channel in RAW mode— regardless of whether or not you saved it as a 16-bit file.
• Adjustment layers in Photoshop will decrease the likelihood of unnecessarily performing the same image manipulations more than once.
• Even if your original image was 8-bits per channel, performing all editing in 16-bit mode will nearly eliminate posterization caused by rounding errors.
• Working in color spaces with broad gamuts can increase the likelihood of posterization because they require more bit depth to produce the same color gradient.

55.DYNAMIC RANGE IN DIGITAL PHOTOGRAPHY -


Dynamic range in photography describes the ratio between the maximum and minimum measurable light intensities (white and black, respectively). In the real world, one never encounters true white or black-- only varying degrees of
light source intensity and subject reflectivity. Therefore the concept of dynamic range becomes more complicated, and depends on whether you are describing a capture device (such as a camera or scanner), a display device (such as a
print or computer display), or the subject itself.
Just as with color management, each device within the above imaging chain has its own dynamic range. In prints and computer displays, nothing can become brighter than paper white or a maximum intensity pixel, respectively. In
fact, another device not shown above is our eyes, which also have their own dynamic range. Translating image information between devices may therefore affect how that image is reproduced. The concept of dynamic range is therefore
useful for relative comparisons between the actual scene, your camera, and the image on your screen or in the final print.

INFLUENCE OF LIGHT: ILLUMINANCE & REFLECTIVITY


Light intensity can be described in terms of incident and reflected light; both contribute to the dynamic range of a scene (see tutorial on "camera metering and exposure").

Strong Reflections Uneven Incident Light

Scenes with high variation in reflectivity, such as those containing black objects in addition to strong reflections, may actually have a greater dynamic range than scenes with large incident light variation. Photography under either
scenario can easily exceed the dynamic range of your camera-- particularly if the exposure is not spot on.
Accurate measurement of light intensity, or luminance, is therefore critical when assessing dynamic range. Here we use the term illuminance to specify only incident light. Both illuminance and luminance are typically measured in
candelas per square meter (cd/m2). Approximate values for commonly encountered light sources are shown below.

Here we see the vast variation possible for incident light, since the above diagram is scaled to powers of ten. If a scene were unevenly illuminated by both direct and obstructed sunlight, this alone can greatly increase a scene's dynamic
range (as apparent from the canyon sunset example with a partially-lit cliff face).

DIGITAL CAMERAS
Although the meaning of dynamic range for a real-world scene is simply the ratio between lightest and darkest regions (contrast ratio), its definition becomes more complicated when describing measurement devices such as digital
cameras and scanners. Recall from the tutorial on digital camera sensors that light is measured at each pixel in a cavity or well (photosite). Each photosite's size, in addition to how its contents are measured, determine a digital camera's
dynamic range.

Black Level (Limited by Noise) | White Level (Saturated Photosite) | Darker White Level (Low Capacity Photosite)

Photosites can be thought of as buckets which hold photons as if they were water. Therefore, if the bucket becomes too full, it will overflow. A photosite which overflows is said to have become saturated, and is therefore unable to
discern between additional incoming photons-- thereby defining the camera's white level. For an ideal camera, its contrast ratio would therefore be just the number of photons it could contain within each photosite, divided by the darkest
measurable light intensity (one photon). If each held 1000 photons, then the contrast ratio would be 1000:1. Since larger photosites can contain a greater range of photons, dynamic range is generally higher for digital SLR cameras
compared to compact cameras (due to larger pixel sizes).
Note: In some digital cameras, there is an extended low ISO setting which produces less noise, but also decreases dynamic range. This is because the setting in effect overexposes the image by a full f-stop-- thereby increasing the light signal-- but then truncates the highlights. An example of this is many of the Canon cameras, which have an ISO 50 speed below the ordinary ISO 100.
In reality, consumer cameras cannot count individual photons. Dynamic range is therefore limited by the darkest tone where texture can no longer be discerned; we call this the black level. The black level is limited by how accurately
each photosite can be measured, and is therefore limited in darkness by image noise. Therefore, dynamic range generally increases for lower ISO speeds and cameras with less measurement noise.
Note: Even if a photosite could count individual photons, it would still be limited by photon noise. Photon noise is created by the statistical variation in arrival of photons, and therefore represents a theoretical minimum for noise. Total
noise represents the sum of photon noise and read-out noise.
Overall, the dynamic range of a digital camera can therefore be described as the ratio of maximum light intensity measurable (at pixel saturation) to minimum light intensity measurable (above read-out noise). The most commonly used unit for measuring dynamic range in digital cameras is the f-stop, which describes total light range by powers of 2. A contrast ratio of 1024:1 could therefore also be described as having a dynamic range of 10 f-stops (since 2^10 = 1024). Depending on the application, each unit f-stop may also be described as a "zone" or "EV."

SCANNERS
Scanners are subject to the same saturation:noise criterion as for dynamic range in digital cameras, except it is instead described in terms of density (D). This is useful because it is conceptually similar to how pigments create tones in
printed media, as shown below.

Low Reflectance (High Density) | High Reflectance (Low Density) | High Pigment Density (Darker Tone) | Low Pigment Density (Lighter Tone)

The overall dynamic range in terms of density is therefore the maximum pigment density (Dmax) minus the minimum pigment density (Dmin). Unlike f-stops, which use powers of 2, density is measured using powers of 10 (just like the Richter scale for earthquakes). A density of 3.0 therefore represents a contrast ratio of 1000:1 (since 10^3.0 = 1000).
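
Since f-stops, density and contrast ratio are simply different expressions of the same ratio, converting between them is straightforward. The sketch below shows the conversions used throughout this section:

```python
import math

# Conversions between the three common ways of quoting dynamic range:
# f-stops are powers of 2, density is powers of 10, and contrast ratio
# is the plain linear ratio between the brightest and darkest values.
def contrast_to_fstops(contrast_ratio: float) -> float:
    return math.log2(contrast_ratio)

def contrast_to_density(contrast_ratio: float) -> float:
    return math.log10(contrast_ratio)

print(contrast_to_fstops(1024))    # 10.0 f-stops (the 1024:1 example above)
print(contrast_to_density(1000))   # 3.0 density  (the 1000:1 example above)
```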
Dynamic Range of Original vs. Dynamic Range of Scanner

Instead of listing total density (D), scanner manufacturers typically list just the Dmax value, since Dmax - Dmin is approximately equal to Dmax. This is because, unlike with digital cameras, a scanner has full control over its light source, ensuring that minimal photosite saturation occurs.
For high pigment densities, the same noise constraints apply to scanners as to digital cameras (since both use an array of photosites for measurement). The measurable Dmax is therefore also determined by the noise present during read-out of the light signal.

COMPARISON
Dynamic range varies so greatly that it is commonly measured on a logarithmic scale, similar to how vastly different earthquake intensities are all measured on the same Richter scale. The comparison below shows the maximum measurable (or reproducible) dynamic range of printed media, scanners, digital cameras and display devices, in terms of any preferred measure (f-stops, density or contrast ratio).

Note the huge discrepancy between reproducible dynamic range in prints, and that measurable by scanners and digital cameras. For a comparison with real-world dynamic range in a scene, these vary from approximately 3 f-stops for a
cloudy day with nearly even reflectivity, to 12+ f-stops for a sunny day with highly uneven reflectivity.
Care should be taken when interpreting the above numbers; real-world dynamic range is a strong function of ambient light for prints and display devices. Prints not viewed under adequate light may not give their full dynamic range,
while display devices require near complete darkness to realize their full potential-- especially for plasma displays. Finally, these values are rough approximations only; actual values depend on age of device, model generation, price
range, etc.
Be warned that contrast ratios for display devices are often greatly exaggerated, as there is no manufacturer standard for listing these. Contrast ratios in excess of 500:1 are often only the result of a very dark black point, instead of a
brighter white point. For this reason attention should be paid to both contrast ratio and luminosity. High contrast ratios (without a correspondingly higher luminosity) can be completely negated by even ambient candle light.

THE HUMAN EYE


The human eye can actually perceive a greater dynamic range than is ordinarily possible with a camera. If we were to consider situations where our pupil opens and closes for varying light, our eyes can see over a range of nearly 24 f-
stops.

On the other hand, for accurate comparisons with a single photo (at constant aperture, shutter and ISO), we can only consider the instantaneous dynamic range (where our pupil opening is unchanged). This would be similar to looking at
one region within a scene, letting our eyes adjust, and not looking anywhere else. For this scenario there is much disagreement, because our eye's sensitivity and dynamic range actually change depending on brightness and contrast. Most
estimate anywhere from 10-14 f-stops.
The problem with these numbers is that our eyes are extremely adaptable. For situations of extreme low-light star viewing (where our eyes have adjusted to use rod cells for night vision), our eyes approach even higher instantaneous
dynamic ranges (see tutorial on "Color Perception of the Human Eye").

BIT DEPTH & MEASURING DYNAMIC RANGE


Even if one's digital camera could capture a vast dynamic range, the precision at which light measurements are translated into digital values may limit usable dynamic range. The workhorse which translates these continuous
measurements into discrete numerical values is called the analog to digital (A/D) converter. The accuracy of an A/D converter can be described in terms of bits of precision, similar to bit depth in digital images, although care should
be taken that these concepts are not used interchangeably. The A/D converter is what creates values for the digital camera's RAW file format.

Bit Precision of A/D Converter    Contrast Ratio    f-stops    Density

8                                 256:1             8          2.4
10                                1024:1            10         3.0
12                                4096:1            12         3.6
14                                16384:1           14         4.2
16                                65536:1           16         4.8

Note: Above values are for A/D converter precision only, and should not be used to interpret results for 8 and 16-bit image files. Furthermore, values shown are a theoretical maximum, assuming noise is not limiting. Additionally, this applies only to linear A/D converters; a non-linear A/D converter's bit precision does not necessarily correlate with dynamic range.
As an example, 10 bits of tonal precision translates into a possible brightness range of 0-1023 (since 2^10 = 1024 levels). Assuming that each A/D converter number is proportional to actual image brightness (meaning twice the pixel value represents twice the brightness), 10 bits of precision can only encode a contrast ratio of 1024:1.
Most digital cameras use a 10 to 14-bit A/D converter, and so their theoretical maximum dynamic range is 10-14 stops. However, this high bit depth only helps minimize image posterization since total dynamic range is usually limited by
noise levels. Similar to how a high bit depth image does not necessarily mean that image contains more colors, if a digital camera has a high precision A/D converter it does not necessarily mean it can record a greater dynamic range. In
practice, the dynamic range of a digital camera does not even approach the A/D converter's theoretical maximum; 5-9 stops is generally all one can expect from the camera.

INFLUENCE OF IMAGE TYPE & TONAL CURVE


Can digital image files actually record the full dynamic range of high-end devices? There seems to be much confusion on the internet about the relevance of image bit depth on recordable dynamic range.
We first need to distinguish between whether we are speaking of recordable dynamic range, or displayable dynamic range. Even an ordinary 8-bit JPEG image file can conceivably record an infinite dynamic range-- assuming that the
right tonal curve is applied during RAW conversion (see tutorial on curves, under motivation: dynamic range), and that the A/D converter has the required bit precision. The problem lies in the usability of this dynamic range; if too few
bits are spread over too great of a tonal range, then this can lead to image posterization.
On the other hand, displayable dynamic range depends on the gamma correction or tonal curve implied by the image file, or used by the video card and display device. Using a gamma of 2.2 (standard for PC's), it would be theoretically
possible to encode a dynamic range of nearly 18 f-stops (see tutorial on gamma correction, to be added). Again though, this would suffer from severe posterization. The only current standard solution for encoding a nearly infinite
dynamic range (with no visible posterization) is to use high dynamic range (HDR) image files in Photoshop (or other supporting program).
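
The "nearly 18 f-stops" figure can be sanity-checked with a small calculation, assuming an 8-bit file, a pure power-law gamma of 2.2 and that code value 1 is the darkest usable level. The helper function below is hypothetical and this is a theoretical sketch only; it ignores the noise and posterization caveats mentioned above.

```python
import math

# Sketch: theoretical dynamic range encodable by an 8-bit file with a
# gamma tonal curve, taking code value 1 as the darkest usable level.
def encodable_fstops(bits: int, gamma: float) -> float:
    levels = 2 ** bits - 1                     # 255 for an 8-bit channel
    contrast_ratio = levels ** gamma           # linear ratio between code 255 and code 1
    return math.log2(contrast_ratio)

print(f"{encodable_fstops(8, 1.0):.1f} f-stops")   # ~8.0 (linear encoding)
print(f"{encodable_fstops(8, 2.2):.1f} f-stops")   # ~17.6 ("nearly 18 f-stops")
```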

56.DIGITAL IMAGE INTERPOLATION -


Image interpolation occurs in all digital photos at some stage-- whether this be in Bayer demosaicing or in photo enlargement. It occurs anytime you resize or remap (distort) your image from one pixel grid to another. Image resizing is
necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur under a wider variety of scenarios: correcting for lens distortion, changing perspective, and rotating an image.

Original Image After Interpolation

Even if the same image resize or remap is performed, the results can vary significantly depending on the interpolation algorithm. Interpolation is only an approximation, therefore an image will always lose some quality each time it is performed. This tutorial aims to provide a better understanding of how the results may vary-- helping you to minimize any interpolation-induced losses in image quality.

CONCEPT
Interpolation works by using known data to estimate values at unknown points. For example: if you wanted to know the temperature at noon, but only measured it at 11AM and 1PM, you could estimate its value by performing a linear
interpolation:

If you had an additional measurement at 11:30AM, you could see that the bulk of the temperature rise occurred before noon, and could use this additional data point to perform a quadratic interpolation:
The more temperature measurements you have which are close to noon, the more sophisticated (and hopefully more accurate) your interpolation algorithm can be.
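
The temperature example can be written out explicitly. The sketch below uses NumPy with made-up readings to compare the linear estimate with a quadratic fit that includes the 11:30AM point:

```python
import numpy as np

# The temperature example above, as a sketch: estimate the noon value by
# linear interpolation from 11AM and 1PM, then refine it with a quadratic
# fit once an 11:30AM measurement is available.
hours = np.array([11.0, 13.0])             # 11AM and 1PM
temps = np.array([20.0, 25.0])             # hypothetical readings (deg C)
noon_linear = np.interp(12.0, hours, temps)

hours3 = np.array([11.0, 11.5, 13.0])      # add an 11:30AM reading
temps3 = np.array([20.0, 23.0, 25.0])      # most of the rise happened before noon
quadratic = np.polyfit(hours3, temps3, 2)  # fit a second-order polynomial
noon_quadratic = np.polyval(quadratic, 12.0)

print(round(noon_linear, 1), round(noon_quadratic, 1))   # 22.5 vs ~24.8
```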

IMAGE RESIZE EXAMPLE


Image interpolation works in two directions, and tries to achieve a best approximation of a pixel's color and intensity based on the values at surrounding pixels. The following example illustrates how resizing / enlargement works:
2D Interpolation: Original | Before | After | No Interpolation

Unlike air temperature fluctuations and the ideal gradient above, pixel values can change far more abruptly from one location to the next. As with the temperature example, the more you know about the surrounding pixels, the better the
interpolation will become. Therefore results quickly deteriorate the more you stretch an image, and interpolation can never add detail to your image which is not already present.

IMAGE ROTATION EXAMPLE


Interpolation also occurs each time you rotate or distort an image. The previous example was misleading because it is one which interpolators are particularly good at. This next example shows how image detail can be lost quite rapidly:
Image degrades with rotation: Original | 90° Rotation (Lossless) | 45° Rotation | 2 x 45° Rotations | 6 x 15° Rotations

The 90° rotation is lossless because no pixel ever has to be repositioned onto the border between two pixels (and therefore divided). Note how most of the detail is lost in just the first rotation, although the image continues to deteriorate with successive rotations. You should therefore avoid rotating your photos whenever possible; if an unleveled photo requires it, rotate no more than once.
The above results use what is called a "bicubic" algorithm, and show significant deterioration. Note the overall decrease in contrast, evident from the color becoming less intense, and how dark haloes are created around the light blue. The above results could be improved significantly, depending on the interpolation algorithm and subject matter.

TYPES OF INTERPOLATION ALGORITHMS


Common interpolation algorithms can be grouped into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating (sharp edges vs. smooth texture), whereas non-adaptive methods treat
all pixels equally.
Non-adaptive algorithms include: nearest neighbor, bilinear, bicubic, spline, sinc, lanczos and others. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent
pixels they include, the more accurate they can become, but this comes at the expense of much longer processing time. These algorithms can be used to both distort and resize a photo.

Adaptive algorithms include many proprietary algorithms in licensed software such as: Qimage, PhotoZoom Pro, Genuine Fractals and others. Many of these apply a different version of their algorithm (on a pixel-by-pixel basis) when
they detect the presence of an edge-- aiming to minimize unsightly interpolation artifacts in regions where they are most apparent. These algorithms are primarily designed to maximize artifact-free detail in enlarged photos, so some
cannot be used to distort or rotate an image.

NEAREST NEIGHBOR INTERPOLATION
Nearest neighbor is the most basic and requires the least processing time of all the interpolation algorithms because it only considers one pixel-- the closest one to the interpolated point. This has the effect of simply making each pixel bigger.

BILINEAR INTERPOLATION

Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at
its final interpolated value. This results in much smoother looking images than nearest neighbor.
The diagram to the left is for a case when all known pixel distances are equal, so the interpolated value is simply their sum divided by four.
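
A minimal sketch of the calculation is shown below, where fx and fy are the fractional distances of the unknown pixel from the top-left pixel of its 2x2 neighborhood. The values are illustrative only.

```python
# Sketch of bilinear interpolation: the unknown pixel is a weighted
# average of its 2x2 neighborhood, weighted by proximity in x and y.
def bilinear(p00, p10, p01, p11, fx, fy):
    """fx, fy are the fractional distances (0-1) from the top-left pixel."""
    top    = p00 * (1 - fx) + p10 * fx
    bottom = p01 * (1 - fx) + p11 * fx
    return top * (1 - fy) + bottom * fy

# Equidistant case from the diagram: the result is simply the average.
print(bilinear(10, 20, 30, 40, 0.5, 0.5))   # 25.0
```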

BICUBIC INTERPOLATION

Bicubic goes one step beyond bilinear by considering the closest 4x4 neighborhood of known pixels-- for a total of 16 pixels. Since these are at various distances from
the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is
perhaps the ideal combination of processing time and output quality. For this reason it is a standard in many image editing programs (including Adobe Photoshop),
printer drivers and in-camera interpolation.

HIGHER ORDER INTERPOLATION: SPLINE & SINC


There are many other interpolators which take more surrounding pixels into consideration, and are thus also much more computationally intensive. These algorithms include spline and sinc, and retain the most image information after an
interpolation. They are therefore extremely useful when the image requires multiple rotations / distortions in separate steps. However, for single-step enlargements or rotations, these higher-order algorithms provide diminishing visual
improvement as processing time is increased.

INTERPOLATION ARTIFACTS TO WATCH OUT FOR


All non-adaptive interpolators attempt to find an optimal balance between three undesirable artifacts: edge halos, blurring and aliasing.

Original | Aliasing | Blurring | Edge Halo

Even the most advanced non-adaptive interpolators always have to increase or decrease one of the above artifacts at the expense of the other two-- therefore at least one will be visible. Also note how the edge halo is similar to the artifact
produced by over sharpening with an unsharp mask, and improves the appearance of sharpness by increasing acutance.
Adaptive interpolators may or may not produce the above artifacts, however they can also induce non-image textures or strange pixels at small-scales:

Original Image with Small-Scale Textures | Crop Enlarged 220%

On the other hand, some of these "artifacts" from adaptive interpolators may also be seen as benefits. Since the eye expects to see detail down to the smallest scales in fine-textured areas such as foliage, these patterns have been argued to
trick the eye from a distance (for some subject matter).

ANTI-ALIASING
Anti-aliasing is a process which attempts to minimize the appearance of aliased or jagged diagonal edges, termed "jaggies." These give text or images a rough digital appearance:

(shown at 300%)

Anti-aliasing removes these jaggies and gives the appearance of smoother edges and higher resolution. It works by taking into account how much an ideal edge overlaps adjacent pixels. The aliased edge simply rounds up or down with
no intermediate value, whereas the anti-aliased edge gives a value proportional to how much of the edge was within each pixel:

Ideal Edge on a Low Resolution Grid: Aliased vs. Anti-Aliased
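
The coverage idea can be illustrated by rendering an ideal diagonal edge onto a coarse pixel grid and estimating, by supersampling, how much of each pixel falls on the bright side. The function and its 8x8 supersampling are purely illustrative choices, not how any particular interpolator implements anti-aliasing.

```python
import numpy as np

# Sketch: anti-alias an ideal diagonal edge by giving each pixel a value
# proportional to how much of the bright side covers that pixel.
# Coverage is estimated here by supersampling each pixel 8x8.
def render_edge(size=8, samples=8, aliased=False):
    img = np.zeros((size, size))
    for y in range(size):
        for x in range(size):
            # sub-pixel sample positions inside this pixel
            sx = (np.arange(samples) + 0.5) / samples + x
            sy = (np.arange(samples) + 0.5) / samples + y
            gx, gy = np.meshgrid(sx, sy)
            coverage = (gx + gy < size).mean()      # bright above the diagonal
            img[y, x] = round(coverage) if aliased else coverage
    return img

print(render_edge(aliased=True)[3])    # hard 1/0 steps ("jaggies")
print(render_edge(aliased=False)[3])   # intermediate value along the edge
```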

A major obstacle when enlarging an image is preventing the interpolator from inducing or exacerbating aliasing. Many adaptive interpolators detect the presence of edges and adjust to minimize aliasing while still retaining edge
sharpness. Since an anti-aliased edge contains information about that edge's location at higher resolutions, it is also conceivable that a powerful adaptive (edge-detecting) interpolator could at least partially reconstruct this edge when
enlarging.

NOTE ON OPTICAL vs. DIGITAL ZOOM


Many compact digital cameras can perform both an optical and a digital zoom. A camera performs an optical zoom by moving the zoom lens so that it increases the magnification of light before it even reaches the digital sensor. In
contrast, a digital zoom degrades quality by simply interpolating the image-- after it has been acquired at the sensor.
10X Optical Zoom 10X Digital Zoom

Even though the photo with digital zoom contains the same number of pixels, the detail is clearly far less than with optical zoom. Digital zoom should be almost entirely avoided, unless it helps to visualize a distant object on your
camera's LCD preview screen. Alternatively, if you regularly shoot in JPEG and plan on cropping and enlarging the photo afterwards, digital zoom at least has the benefit of performing the interpolation before any compression artifacts
set in. If you find yourself needing digital zoom frequently, purchase a teleconverter add-on, or better yet: a lens with a longer focal length.
For further reading, please visit more specific tutorials on:
Digital Photo Enlargement
Image Resizing for the Web and Email

57.CAMERA FLASH: APPEARANCE -


Using a camera flash can both broaden the scope and enhance the appearance of your photographic subjects. However, flash is also one of the most confusing and misused of all photographic tools. In fact, the best flash photo is often the
one where you cannot even tell a flash was used. This tutorial aims to overcome all the technical terminology in order to focus on the real essence of flash photography: how to control your light and subsequently achieve the desired
exposure.

camera flashes firing in a stadium: beautiful, but a good example of misuse


Before proceeding, it's advisable to first read the tutorials on camera metering & camera exposure on how aperture, ISO and shutter speed control exposure.

FLASH LIGHTING INTRO


Using a flash is fundamentally different from taking a normal camera exposure because your subject is being lit by two light sources: your flash, which you have some control over, and the ambient light, which is likely beyond your
control. While this fact may seem simple and obvious, its consequences are probably not:
• A flash photograph can vary the appearance of a subject by controlling the intensity, position and distribution of light coming from a flash. With ordinary ambient light photos, one
can only affect the appearance of a subject by changing exposure and depth of field.
• Unlike with ambient light photography, one cannot see how their camera flash will affect the scene prior to taking the photograph, since a flash emits within milliseconds or less. Further, a
flash is so quick that even after the shot it's nearly impossible to tell what it looked like without checking your camera.
It's therefore critical to develop a good intuition for how the position and distribution of your camera's flash influences the appearance of your subject. These qualitative aspects will be the focus of the first part of this tutorial; the second
part will concentrate on camera settings for achieving the desired flash exposure.

LIGHT DISTRIBUTION: BOUNCED FLASH & DIFFUSERS


An important concept in flash photography is the following: for a given subject, the distribution of the light source determines how much contrast this subject will have.

High Contrast Low Contrast

Contrast describes the brightness difference between the lightest and darkest portions of a subject. When light is more localized (left), one face of the sphere receives intense direct light, while the opposing side is nearly black because it
only receives what little light had bounced off the walls, ceiling and floor. When light is more distributed (right), shadows and highlights appear softer and less intense because this light is hitting the sphere from a wider angle.
Photographers often describe light which scatters substantially or originates from a large area as being "soft light," and more concentrated and directional light as being "hard light."
What does this all mean in practice? Generally, photographs of people will appear more appealing if they are captured using less contrast. Contrast tends to over-exaggerate facial features due to deep shadows being cast across the face.
Further, if the sphere in the above example had texture, then its texture would have been greatly emphasized in high contrast lighting. For a photo of a person, this would be analogous to giving skin a rougher and often less desirable
texture.
The big problem is that a camera flash is by its very nature a localized light source. A good flash photographer therefore knows how to make their flash appear as if it had originated from a much larger and more evenly
distributed light source. Two ways to achieve this are by using either a flash diffuser or a bounced flash.

bounced flash is diffuse but loses intensity


While it may at first sound counterintuitive, aiming your flash *away* from your subject can actually enhance their appearance. This causes the incident light from your flash to originate from a greater area, and is why portraits are
usually taken with a flash that first bounces off a large umbrella.
However, bouncing a flash greatly reduces its intensity, so you will need to have a much stronger flash in order to achieve the same exposure. Additionally, bouncing a flash is often unrealistic for outdoor photographs of people since they
are no longer in a contained environment.
Similarly, a flash diffuser is usually just a simple piece of translucent plastic which fastens over your flash, acting to scatter outgoing light. For outdoor photos this will make very little difference, but for photographs taken indoors this
will soften the lighting on your subject, since some of the scattered light from your flash will first bounce off of other objects before hitting your subject. However, just as with a bounced flash, be aware that using a flash diffuser can
greatly increase the necessary flash intensity.
As with anything though, too much can be a bad thing. Light which is overly diffuse can cause the subject to look flat and two-dimensional. Landscape photographers understand this well, as it's the flat look created by light which is
emitted evenly across the sky on an overcast day. However, overly diffuse light is rarely a problem with flash photography.

LIGHT POSITION: ON-CAMERA & OFF-CAMERA FLASH


The position of the light source relative to the viewer also affects the appearance of your subject. Whereas the localization of light affects contrast, light source position affects the visibility of a subject's shadows and highlights:
Head-On Lighting Off-Angle Lighting
Subject Appears Flat Subject is More Three-Dimensional

The subject with head-on lighting (left) looks less three-dimensional than the subject shown using off-angle flash (right), which is exactly the difference one sees when using an on-camera versus off-camera flash, respectively. With on-
camera flash, the side of the subject which receives all the light is also the side of the subject the camera sees, resulting in shadows that are barely visible, and a bright and harshly-lit subject.

example of non-ideal on-camera flash


Overall, subjects generally look best when the light source is neither head-on, as with on-camera flash, nor directly overhead, as is often the case with indoor lighting. In real-world photographs, using an on-camera flash can often give a
"deer in the headlights" appearance to subjects, such as in the example of the well-known subject to the left.
However, it's usually unrealistic to expect that one can have a flash located off of the camera, unless one is in a studio or has a sophisticated setup, as may be the case for a big event like a wedding.
The best and easiest way to achieve the look of an off-camera flash using an on-camera flash is to bounce the flash off of an object, such as a wall or ceiling, as discussed previously.
Another option is to use a flash bracket, which increases the distance between the flash unit and the front of your camera. Flash brackets create substantial off-angle lighting for close range photos, but appear increasingly similar to an
on-camera flash the further they are from your subject. A noticeable improvement is reducing red-eye, because light from the flash no longer bounces straight back to the camera (see red-eye section later). A flash bracket's biggest
disadvantage is that they can be quite large, since they need to extend far above or to the side of your camera body in order to achieve their effect.

MULTIPLE LIGHT SOURCES: FILL FLASH


reduces harsh shadows from strong sunlight
The term "fill flash" is used to describe a flash that contributes less to the exposure than does ambient light. Fill flash gets its name because it is effectively "filling in" the shadows of your subject, while not appreciably changing the
overall exposure. A fill flash effectively plays the role of a secondary light source.
A common misconception is that a flash is only used for situations where it's dark. Contrary to this belief, fill flash is most useful under bright ambient lighting, such as when your subject is back-lit, or when the lighting has too much
contrast. It can dramatically improve the appearance of people being photographed in otherwise harsh outdoor lighting, such as in afternoon sunlight on a clear day (example to the right, shown with and without fill flash).
However, in order to use a fill flash you will need to force your flash to fire; most cameras do not fire a flash in automatic mode unless the scene is rather dimly lit. When there is plenty of ambient light, compact and SLR cameras will
default to using their flash as a fill flash when it's activated. Just pay close attention to the charge on your camera's battery since flash can deplete it much more rapidly than normal. The second half of this tutorial will go into more detail
about how to achieve the right amount of fill flash.

FLASH & RED-EYE REDUCTION


A big problem with camera flashes is unnatural red eyes in subjects, caused by light from the flash reflecting straight back out of the subject's pupils. The red color is due to the high density of blood vessels at the back of the eye, directly behind the pupil. Red-eye is most distracting when the subject is looking directly into the camera lens, or when their pupils are fully dilated due to dim ambient light. It is also much more prominent when the flash is very localized and directional
("hard light").

example of red-eye caused by flash


Some camera flashes have a red-eye reduction mode, which sends a series of smaller flashes before the exposure so that the subject's pupils are contracted during the actual flash. This does not eliminate red-eye entirely (since the smaller pupils still reflect some light), but it makes red-eye much less prominent since the pupil area is greatly reduced. An alternative method of red-eye reduction is to take the photo somewhere brighter, or to increase the amount of
ambient light -- both will naturally contract the pupils.
Another technique is to use digital red-eye removal, which works by using image editing software to select the red pupils and change their hue to match the person's natural eye color. However, this technique should only be used as a last
resort since it does not address the underlying cause of red-eye, and is difficult to perform so that the eye looks natural in a detailed print. For example, subjects can easily end up not having any pupils at all, or can have portions of their
eye that are colored like a blue iris but still have the texture of a pupil.
The only ways to eliminate red-eye entirely are (i) to have the subject look away from the camera, (ii) to use a flash bracket, an off-camera flash or a bounced flash, or (iii) to avoid using a flash in the first place.

FLASH WHITE BALANCE

flash vs ambient white balance


Most flash units emit light which has a color temperature of about 5000K, which is comparable to daylight (see tutorial on white balance). Ambient light will therefore have a color tint if it differs substantially from 5000K, since most
cameras automatically set their white balance to match the flash (if it's used). The tint is most apparent with artificial lighting, and when balanced flash ratios (1:4 to 4:1) make light from both flash and ambient sources clearly
distinguishable.
Flash white balance issues can also result from a flash which bounces off a colored surface, such as a wall which is painted orange or green. However, bouncing off a colored surface will not necessarily change the white balance of
your flash if ambient light bounces off that surface as well.
Alternatively, the flash's white balance can be intentionally modified to achieve a given effect. Some flash diffusers have a subtle warming effect, for example, in order to better match indoor incandescent lighting, or to give the
appearance of light from a sunset.

EXTERNAL FLASH UNITS


External flash units are usually much more powerful than flash units which are built into your camera. Even though an in-camera flash has enough intensity for direct light on nearby people, this type of light can be quite harsh. Often only
an external flash unit has enough power to bounce off a distant wall or ceiling and still adequately illuminate the subject. An added benefit is that external flash units are usually easier to modify with diffusers, brackets, reflectors, color
filters and other add-ons. Furthermore, external flashes are also a little further from your lens's line of sight, which can reduce red-eye and slightly improve light quality.

Please continue on to the second half of this tutorial:


Camera Flash, Part 2: Flash Ratios & Exposure

58. CAMERA FLASH: EXPOSURE -


Using a camera flash can both broaden the scope and enhance the appearance of your photographic subjects. However, flash is also one of the most confusing and misused of all photographic tools. In fact, the best flash photo is often the
one where you cannot even tell a flash was used. This tutorial aims to overcome all the technical terminology in order to focus on the real essence of flash photography: how to control your light and subsequently achieve the desired
exposure.
The first part of the camera flash tutorial focused on the qualitative aspects of using a camera's flash to influence a subject's appearance; this second part focuses on what camera settings to use in order to achieve the desired flash
exposure.

FLASH EXPOSURE OVERVIEW


Using a flash is fundamentally different from taking a normal camera exposure because your subject is being lit by two light sources: your flash, which you have some control over, and the ambient light, which is likely beyond your
control. In this part of the tutorial we'll focus on two key consequences of this fact as they pertain to flash exposure:
Diagram Illustrating the Flash Exposure Sequence

Illustration shown roughly to scale for a 1/200th second exposure with a 4:1 flash ratio.
Flash shown for first curtain sync. A pre-flash is not emitted with much older flash units.
• A flash photograph actually consists of two separate exposures: one for ambient light and the other for flash. Both occur in the split second after you press the shutter button, while the shutter is open. Newer SLR cameras also fire a pre-flash just beforehand in order to estimate how bright the actual flash needs to be.
• A flash pulse is usually very brief compared to the exposure time, which means that the amount of flash captured by your camera is independent of your shutter speed. On the other hand,
aperture and ISO speed still affect flash and ambient light equally.
The key is knowing how to achieve the desired mix between light from your flash and light from ambient sources -- while also having the right amount of total light (from all sources) to achieve a properly exposed image.
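As a rough numerical sketch of this idea (not part of the original tutorial, and using arbitrary illustrative numbers), the following Python snippet treats the flash contribution as fixed while the ambient contribution grows with shutter speed, and shows how the flash ratio changes as the exposure is lengthened:

    # Sketch: the flash contribution is (nearly) independent of shutter speed,
    # while ambient light accumulates for as long as the shutter stays open.
    # All values are arbitrary illustrative units, not real measurements.
    flash_light = 4.0        # light from the flash pulse (fixed per photo)
    ambient_rate = 60.0      # ambient light collected per second of exposure

    for shutter in (1/200, 1/60, 1/15):
        ambient_light = ambient_rate * shutter
        ratio = flash_light / ambient_light       # flash : ambient
        print(f"1/{round(1/shutter)} s -> flash ratio {ratio:.1f}:1, "
              f"total light {flash_light + ambient_light:.1f}")
    # Lengthening the exposure leaves the flash term unchanged but raises the
    # ambient term, so the flash ratio falls; aperture and ISO (not modeled here)
    # would scale both terms equally.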

CONCEPT: FLASH RATIO


The "flash ratio" is an important way to describe the mix between ambient light and light from your flash. Since the shutter speed doesn't affect the amount of light captured from your flash (but does affect ambient light), you can use this
fact to control the flash ratio. For a given amount of ambient light, the mix of flash and ambient light is adjusted using only two camera settings: (i) the length of the exposure and (ii) the flash intensity.

Flash Ratio:   N/A or 0              1:8 - 1:2              1:1                    2:1 - 8:1
               Only Ambient Light    Fill Flash             Balanced Flash         Strong Flash
Settings:      no flash              longest exposure,      shorter exposure,      shortest exposure,
                                     weakest flash          weaker flash           strongest flash

In this tutorial, the flash ratio* is used to describe the ratio between light from the flash and ambient light. At one extreme of this ratio is ordinary ambient light photography (left), and at the other extreme is photography using
mostly light from the flash (right). Realistically though, there's always some amount of ambient light, so an infinite flash ratio is just a theoretical limit.
*Technical Note: Sometimes the flash ratio is instead described in terms of the ratio between total light and light from the flash. In that case, a 2:1, 3:1 and 5:1 ratio would be equivalent to a 1:1, 1:2 and 1:4 ratio in the table above,
respectively. Unfortunately both conventions are often used.
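Since both conventions appear in practice, a tiny helper function (my own illustration, not from this tutorial) can convert a flash:ambient ratio into the equivalent total:flash ratio:

    from fractions import Fraction

    def flash_to_total(flash, ambient):
        """Convert a flash:ambient ratio into the equivalent total:flash ratio."""
        r = Fraction(flash + ambient, flash)
        return r.numerator, r.denominator

    # The 1:1, 1:2 and 1:4 ratios used in this tutorial correspond to
    # 2:1, 3:1 and 5:1 under the alternative total:flash convention:
    for f, a in ((1, 1), (1, 2), (1, 4)):
        t, fl = flash_to_total(f, a)
        print(f"flash:ambient {f}:{a}  =  total:flash {t}:{fl}")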
It's important to also note that not all flash ratios are necessarily attainable with a given flash unit or ambient light intensity. If ambient light is extremely intense, or if your flash is far from your subject, it's unlikely that the internal
flash of a compact camera could achieve flash ratios approaching 10:1, for example. At the other extreme, using a subtle 1:8 fill flash might be impractical if there's very little ambient light and your lens doesn't have a large maximum
aperture (or if you are unable to use a high ISO speed, or capture the photo using a tripod).
Flash ratios of 1:2 or greater are where the topics in the first half of this tutorial become most important, including the flash position and its apparent light area, since the flash can appear quite harsh unless carefully controlled. On
the other hand, flash ratios less than 1:2 can often achieve excellent results using a flash that is built into the camera. For this reason, most photographers will likely want to use their flash as a fill flash, if possible, since this is the simplest
type of flash photography.
FLASH EXPOSURE MODES

One of the most difficult tasks in flash photography is understanding how different camera and flash metering modes will affect an overall exposure. Some modes assume you only want a fill flash, while others virtually ignore ambient
light and assume that your camera's flash will be the dominant source of illumination.
Fortunately, all cameras use their flash as either the primary light source or as a fill flash. The key is knowing when and why your camera uses its flash in each of these ways. A table summarizing the most common camera modes is listed
below:

Camera Mode               Flash Ratio
Auto                      1:1 or greater if dim; otherwise flash doesn't fire
Program (P)               fill flash if bright; otherwise greater than 1:1
Aperture Priority (Av)    fill flash
Shutter Priority (Tv)     fill flash
Manual (M)                whatever flash ratio is necessary

In Auto mode, the flash turns on only if the shutter speed would otherwise drop below what is deemed hand-holdable -- usually about 1/60 of a second. The flash ratio then increases progressively as light hitting the subject
gets dimmer, but the shutter speed remains at 1/60 of a second.
Program (P) mode is similar to Auto, except one can also force a flash to be used in situations where the subject is well-lit, in which case the flash will act as a fill flash. Most cameras intelligently decrease their fill flash as ambient light
increases (called "auto fill reduction" in Canon models). The fill flash ratio may therefore be anywhere from 1:1 (in dim light) to 1:4 (in bright light). For situations where the shutter speed is longer than 1/60 of a second, flash in Program
mode acts just as it did in Auto mode.
Aperture Priority (Av) and Shutter Priority (Tv) modes behave differently again. Just as with Program mode, one usually has to force the flash on, which results in the camera using it as a fill flash. However, unlike
with Auto and P modes, the flash ratio never increases beyond about 1:1, and exposures are as long as necessary (aka "slow sync"). In Tv mode, the flash ratio may also increase if the necessary f-stop is smaller than what's available with your
lens.
In Manual (M) mode, the camera exposes ambient light based on how you set the aperture, shutter speed and ISO. The flash exposure is then calculated based on whatever remaining light is necessary to illuminate the subject. Manual
mode therefore enables a much broader range of flash ratios than the other modes.
In all modes, the relevant setting in your viewfinder will blink if a flash exposure is not possible using that setting. This might include requiring an aperture that is outside the range available with your lens, or a shutter speed that is
faster than what your camera/flash system supports (the "X-sync speed" - usually 1/200 to 1/500 second).

FLASH EXPOSURE COMPENSATION - FEC


The key to changing the flash ratio is using the right combination of flash exposure compensation (FEC) and ordinary exposure compensation (EC). FEC works much like regular EC: it tells the camera to take whatever flash intensity it
was going to use, and to override that by the FEC setting. The big difference is that while EC affects the exposures for both flash and ambient light, FEC only affects flash intensity.
Both EC and FEC are specified in terms of stops of light. Each positive or negative stop refers to a doubling or halving of light, respectively. Therefore a +1 EC or FEC value means a doubling of light, whereas a -2 value means there's a
quarter as much light.
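In other words, the light multiplier is simply 2 raised to the compensation value, as this one-line illustration (my own) shows:

    # Each stop of EC or FEC doubles or halves the light: multiplier = 2 ** stops
    for stops in (-2, -1, 0, +1):
        print(f"{stops:+d} stops -> x{2 ** stops:g} the light")
    # -2 stops -> x0.25, -1 -> x0.5, 0 -> x1, +1 -> x2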
The problem is that it is complicated to adjust both EC and FEC to change the flash ratio without also changing the overall exposure. The following table summarizes how to change the flash ratio if it had originally been 1:1:
Flash Ratio:   1:8          1:4     1:2            1:1    2:1            4:1      8:1
FEC Setting:   -3           -2      -1             0      +1             +2       +3
EC Setting:    +2/3 to +1   +2/3    +1/3 to +1/2   0      -1/2 to -2/3   -1 1/3   -2 to -2 1/3

The above table shows how to change the flash ratio by adjusting FEC and EC;
EC settings are listed as a range because EC can only be set in 1/3 or 1/2 stop increments.
Note that the FEC value is straightforward: it's just equal to the number of stops you intend to increase or decrease the flash ratio by. On the other hand, the EC setting is far from straightforward: it depends not only on how much you
want to change the flash ratio by, but also on the original flash ratio -- and it's rarely an integer.
As an example of why EC is much more complicated than FEC, let's walk through what happens when you change the flash ratio from 1:1 to 2:1 in the above example. You will first want to dial in +1 FEC, since that's the easiest part.
However, if only FEC is increased +1, then the amount of light from flash doubles while light from ambient remains the same -- thereby increasing the overall exposure. We therefore need to dial in a negative EC to compensate for this,
so that the exposure is unchanged. But how much EC? Since the original flash ratio was 1:1, the total amount of light using +1 FEC is now 150% of what it was before. We therefore need an EC value that scales the total
amount of light by a factor of 2/3 (150% * 2/3 = 100%). Since each full negative stop of EC halves the amount of light, we know this EC value has to be between 0 and -1, but the exact value isn't something we can readily calculate in our head. It's
equal to log2(2/3), which comes out to about -0.58.
Fortunately, the flash ratio calculator (below) solves this problem for us. While it's not something one would necessarily use in the field, hopefully it can help you develop a better intuition for roughly what EC values are needed in
different situations.
Flash Ratio Calculator (interactive)
Inputs: original flash ratio and FEC setting. Outputs: new flash ratio and required EC setting.
note: EC can only be set in 1/3 or 1/2 stop increments, so use the nearest value available
How to increase the flash ratio: dial in a positive flash exposure compensation, while simultaneously entering a negative exposure compensation. Assuming a default 1:1 flash ratio, achieving a 2:1 flash ratio requires an FEC value of +1
and a corresponding EC value of -1/2 to -2/3.
How to decrease the flash ratio: dial in a negative flash exposure compensation, while simultaneously entering a positive exposure compensation (but not exceeding +1). Assuming a default 1:1 flash ratio, achieving a 1:2 flash ratio
requires an FEC value of -1 and a corresponding EC value of about +1/3 to +1/2.
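If you'd like to see the arithmetic behind such a calculator, here is a minimal Python sketch of the reasoning above; it assumes the overall exposure should stay constant and that FEC scales only the flash term:

    from math import log2

    def ec_for_flash_ratio_change(original_ratio, fec):
        """Return (new flash ratio, EC in stops) needed to keep the overall
        exposure constant after applying `fec` stops of FEC to a scene whose
        flash:ambient ratio was `original_ratio` (e.g. 1.0 for 1:1, 0.5 for 1:2)."""
        ambient = 1.0
        flash = original_ratio * ambient
        new_flash = flash * 2 ** fec              # FEC scales only the flash term
        ec = log2((flash + ambient) / (new_flash + ambient))  # restore the old total
        new_ratio = new_flash / ambient           # EC scales both terms, so the ratio is unchanged
        return new_ratio, ec

    # Starting from a 1:1 ratio, +1 FEC gives 2:1 and needs about -0.58 EC:
    print(ec_for_flash_ratio_change(1.0, +1))     # -> (2.0, -0.585...)
    # ...while -1 FEC gives 1:2 and needs about +0.42 EC (round to +1/3 or +1/2):
    print(ec_for_flash_ratio_change(1.0, -1))     # -> (0.5, 0.415...)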
Finally, it's important to note that FEC is not always used to change the flash ratio. It can also be used to override errors by your camera's flash metering system. How and why this might happen is discussed in the next section...

TTL FLASH METERING


Most current SLR flash systems employ some form of through-the-lens (TTL) metering. Digital TTL flash metering works by bouncing one or more tiny pre-flash pulses off the subject immediately before the exposure begins, which are
then used to estimate what flash intensity is needed during the actual exposure.
Just after the exposure begins, the flash unit starts emitting its flash pulse. Your camera then measures how much of this flash has reflected back in real-time, and quenches (stops) the flash once the necessary amount of light has been
emitted. Depending on the camera mode, the flash will be quenched once it either balances ambient light (fill flash) or adds whatever light is necessary to expose the subject (greater than 1:1 flash ratio).
However, a lot can go wrong. Since a flash exposure is actually two sequential exposures, both (1) ambient light metering and (2) flash metering have to be correct. We'll therefore deal with each source of metering error separately.
(1) Ambient Light Metering is the first to occur, and determines the combination of aperture, ISO and shutter speed. It's quite important since it controls the overall exposure, and is what the subsequent flash metering will be based on.
Recall that in-camera metering goes awry primarily because it can only measure reflected and not incident light (see tutorial on camera metering & exposure).

Reflective Subject Incident vs. Reflected Light

If your subject is light and reflective, such as in the example above, then your camera will mistakenly assume that this apparent brightness is caused by lots of incident light, as opposed to its high reflectance. Since your camera over-
estimates the amount of ambient light, it therefore ends up under-exposing the subject. Similarly, a dark and unreflective subject often results in an over-exposure. Furthermore, situations with high or low-key lighting can also throw off
your camera's metering (see digital camera histograms).
Note: Ironically, white wedding dresses and black tuxedos are perfect examples of highly reflective and unreflective subjects that can throw off your camera's exposure -- even though weddings are often where flash photography and
accurate exposures are most important.
Regardless, if you suspect your camera's ambient light metering will be incorrect, then dialing in a positive or negative exposure compensation (EC) will fix ambient light metering and improve flash metering at the same time.
(2) Flash Metering is based on the results from both the pre-flash and from ambient light metering. If your TTL flash metering system emits an incorrect amount of flash, not only will your overall exposure be off, but the flash ratio will
be off as well -- thereby affecting the appearance of your subject.
The biggest causes for flash error are the distance to your subject, the distribution of ambient light and your subject's reflective properties. The subject distance is important because it strongly influences how much flash will hit and
bounce back from this subject:

Flash Illumination vs. Distance

light fall-off is so rapid that objects 2x as far receive 1/4 the amount of flash
Even with a proper flash exposure, if your subject (or other objects in the scene) spans a large range of distances from the camera, expect the regions closer to the camera to appear much brighter than the regions farther away.
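The caption's "2x as far, 1/4 the light" rule is just the inverse square law. As an illustration (the distances are arbitrary assumptions), here is the fall-off expressed both as a fraction of the light and in stops:

    from math import log2

    def relative_flash(distance, reference=1.0):
        """Relative flash illumination at `distance`, normalized to a subject
        at the `reference` distance (inverse square law)."""
        return (reference / distance) ** 2

    for d in (1.0, 1.5, 2.0, 3.0):
        rel = relative_flash(d)
        print(f"{d:.1f}x the distance: {rel:.2f} of the light ({log2(rel):+.1f} stops)")
    # An object 2x as far from the flash receives 0.25 of the light, i.e. -2 stops.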

example of complex, uneven ambient light


Complex lighting situations can also be problematic. If ambient light illuminates your subject differently than the background or other objects, the flash might mistakenly try to balance light hitting the overall scene (or some other object),
as opposed to light which only hits your subject.
Additionally, since flash metering occurs after your camera meters for ambient light, it is important not to use the auto exposure (AE) lock setting when using the focus and recompose technique. If available, one should instead use flash
exposure lock (FEL).
The particular reflective properties of objects in your photo can also throw off flash metering. This might include flash glare and other hard reflections from mirrors, metal, marble, glass or other similar objects. These objects may also
create additional unintended sources of hard light, which can cast additional shadows on your subject.
There are also subtleties in how different manufacturers' metering systems work. For Canon EOS digital, you will likely have either E-TTL or E-TTL II; for Nikon digital it will be D-TTL or i-TTL. However, many of these flash metering
algorithms are complicated and proprietary, and differences often only arise in situations with uneven ambient lighting. The best approach is therefore to experiment with a new flash system before using it for critical photos, so you can
get a better feel for when metering errors might occur.

FIRST & SECOND CURTAIN SYNC


First and second curtain sync are flash exposure settings that affect how a subject's motion blur is perceived. Since a flash pulse is usually much shorter than the exposure time, a flash photo of a moving object is composed of both
a blurred portion, caused by the slower ambient light exposure, and a sharper portion, caused by the much faster flash pulse. The two are effectively overlaid to create the final flash photograph. First and second curtain sync control
whether the blurred portion appears in front of or behind the subject's flash image, respectively, by synchronizing the flash pulse with the beginning ("first curtain") or end ("second curtain") of the exposure:

With first curtain sync, most of the ambient light is captured after the flash pulse -- causing the blurred portion to streak in front of the sharper flash image. This can give moving objects the appearance of traveling in the opposite direction
of their actual motion. In the example below, the swan has motion streaks which make it appear as though it is rapidly swimming backwards, and the snow appears to be "falling" upwards:

Example of First Curtain Sync Appearance of Moving Objects

For the above reasons, first curtain sync is usually undesirable for subjects in motion -- unless the exposure time is kept short enough that no streaks are visible. On the other hand, second curtain sync can be very useful for
exaggerating subject motion, because the light streaks appear behind the moving subject.
However, most cameras do not use second curtain sync by default, because it can make timing the shot more difficult. This is because second curtain sync introduces much more of a delay between when you press the shutter button and
when the flash fires -- and increasingly so for longer exposure times. One therefore needs to anticipate where the subject will be at the end of the exposure, as opposed to when the shutter button is pressed. This can be very tricky to time
correctly for exposures of a second or more, or for really fast moving subjects.

59. TUTORIALS: DIFFRACTION & PHOTOGRAPHY -


Diffraction is an optical effect which can limit the total resolution of your photography-- no matter how many megapixels your camera may have. Ordinarily light travels in straight lines through uniform air, however it begins to disperse
or "diffract" when squeezed through a small hole (such as your camera's aperture). This effect is normally negligible, but increases for very small apertures. Since photographers pursuing better sharpness use smaller apertures to achieve
a greater depth of field, at some aperture the softening effects of diffraction offset any gain in sharpness due to better depth of field. When this occurs your camera optics are said to have become diffraction limited. Knowing this limit
can help you to avoid any subsequent softening, and the unnecessarily long exposure time or high ISO speed required for such a small aperture.
BACKGROUND
Parallel light rays which pass through a small aperture begin to diverge and interfere with one another. This becomes more significant as the size of the aperture decreases relative to the wavelength of light passing through, but occurs to
some extent for any size of aperture or concentrated light source.

Large Aperture Small Aperture

Since the divergent rays now travel different distances, some move out of phase and begin to interfere with each other-- adding in some places and partially or completely canceling out in others. This interference produces a diffraction
pattern with peak light intensities where the amplitude of the light waves add, and less light where they cancel out. If one were to measure the intensity of light reaching each position on a line, the data would appear as bands similar to
those shown below.

For an ideal circular aperture, the 2-D diffraction pattern is called an "airy disk," after its discoverer George Airy. The width of the airy disk is used to define the theoretical maximum resolution for an optical system (defined as the
diameter of the first dark circle).
Airy Disk 3-D Visualization (light intensity vs. spatial position)

When the diameter of the airy disk's central peak becomes large relative to the pixel size in the camera (or maximum tolerable circle of confusion), it begins to have a visual impact on the image. Alternatively, if two airy disks become
any closer than half their width they are also no longer resolvable (Rayleigh criterion).

Barely Resolved No Longer Resolved


Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the aperture's f-stop (or f-number) setting on your lens, and on the wavelength of light
being imaged. One can think of it as the smallest theoretical "pixel" of detail in photography. Even if two peaks can still be resolved, small apertures can also decrease small-scale contrast significantly due to partial overlap, the
secondary ring and other ripples around the central disk (see example photo).
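For reference, the diameter of the airy disk out to its first dark ring is given by the standard formula d ≈ 2.44 × wavelength × f-number. Here is a short sketch (my own, using the mid-spectrum wavelength assumed later in this tutorial):

    # Airy disk diameter (out to the first dark ring) for a circular aperture:
    #   d = 2.44 * wavelength * f_number
    def airy_disk_diameter_um(f_number, wavelength_um=0.51):
        """Approximate airy disk diameter in microns (wavelength in microns)."""
        return 2.44 * wavelength_um * f_number

    for n in (2.8, 8, 11, 16, 22):
        print(f"f/{n}: airy disk is about {airy_disk_diameter_um(n):.1f} um across")
    # f/8 gives roughly a 10 um disk, while f/22 gives roughly 27 um -- already
    # larger than the pixels of most digital camera sensors.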

VISUAL EXAMPLE: APERTURE VS. PIXEL SIZE


The size of the airy disk itself is only useful in the context of depth of field and pixel size. The following interactive table shows the airy disk within a grid which is representative of the pixel size for several camera models (move your
mouse over each to change grid).
Aperture    Camera Type                Pixel Area
f/2.0       Canon EOS 1D               136 µm²
f/2.8       Canon EOS 1Ds              77.6 µm²
f/4.0       Canon EOS 1DMkII / 5D      67.1 µm²
f/5.6       Nikon D70                  61.1 µm²
f/8.0       Canon EOS 10D              54.6 µm²
f/11        Canon EOS 1DsMkII          52.0 µm²
f/16        Canon EOS 20D / 350D       41.2 µm²
f/22        Nikon D2X                  30.9 µm²
f/32        Canon PowerShot G6         5.46 µm²

Recall that a digital sensor utilizing a bayer array only captures one primary color at each pixel location, and then interpolates these colors to produce the final full color image. As a result of the sensor's anti-aliasing filter (and the
Rayleigh criterion above), the airy disk can have a diameter approaching about 2 pixels before diffraction begins to have a visual impact (assuming an otherwise perfect lens, when viewed at 100% onscreen).
As two examples, the Canon EOS 20D begins to show diffraction at around f/11, whereas the Canon PowerShot G6 (compact camera) begins to show its effects at only about f/4.0-5.6. On the other hand, the Canon G6 does not require
apertures as small as the 20D in order to achieve the same depth of field (for a given angle of view) due to its much smaller total sensor size (more on this later).
Since the size of the airy disk also depends on the wavelength of light, each of the three primary colors will reach its diffraction limit at a different aperture. The calculation above assumes light in the middle of the visible spectrum
(~510 nm). Typical digital SLR cameras can capture light with a wavelength of anywhere from 450 to 680 nm, so at best the airy disk would have a diameter of 80% the size shown above (for pure blue light).
Another complication is that bayer arrays allocate twice the fraction of pixels to green as red or blue light. This means that as the diffraction limit is approached, the first signs will be a loss of resolution in green and in pixel-level
luminance. Blue light requires the smallest apertures (largest f-stop number) in order to reduce its resolution due to diffraction.
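To see roughly where the f/11 and f/4 figures above come from, one can estimate the pixel pitch from the listed pixel area and then find the f-number at which the airy disk spans about two pixels. The sketch below (my own, using the same ~2 pixel criterion and a mid-spectrum wavelength) reproduces those numbers approximately:

    from math import sqrt

    WAVELENGTH_UM = 0.51      # mid-spectrum light, as assumed above

    def diffraction_limited_fstop(pixel_area_um2, disk_in_pixels=2.0):
        """f-number at which the airy disk spans `disk_in_pixels` pixels,
        assuming square pixels of the given area (in square microns)."""
        pixel_pitch = sqrt(pixel_area_um2)              # microns per pixel
        disk_diameter = disk_in_pixels * pixel_pitch    # target airy disk size
        return disk_diameter / (2.44 * WAVELENGTH_UM)   # invert d = 2.44 * wavelength * N

    print(f"Canon EOS 20D (41.2 um^2 pixels): ~f/{diffraction_limited_fstop(41.2):.0f}")
    print(f"Canon PowerShot G6 (5.46 um^2 pixels): ~f/{diffraction_limited_fstop(5.46):.1f}")
    # Prints roughly f/10 for the 20D and f/3.8 for the G6 -- in line with the
    # "f/11" and "f/4.0-5.6" onset apertures quoted above.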
Technical Notes:

• The actual pixels in a camera's digital sensor do not actually occupy 100% of the sensor area, but instead have gaps in between. This calculation assumes that the microlenses are effective enough
that this can be ignored.
• Nikon digital SLR cameras have pixels which are slightly rectangular, therefore resolution loss from diffraction may be greater in one direction. This effect should be visually negligible, and only
noticeable with very precise measurement software.
• The above chart approximates the aperture as being circular, but in reality these are polygonal with 5-8 sides (a common approximation).
• One final note is that the calculation for pixel area assumes that the pixels extend all the way to the edge of each sensor, and that they all contribute to the final image. In reality,
camera manufacturers leave some pixels unused around the edge of the sensor. Since not all manufacturers provide information on the number of used vs. unused pixels, only used pixels were considered
when dividing up the total sensor area. The pixel sizes above are thus slightly larger than is actually the case (by no more than 5% in the worst-case scenario).

WHAT IT LOOKS LIKE


The above calculations and diagrams are quite useful for getting a feel for the concept of diffraction, however only real-world photography can show its visual impact. The following series of images were taken on the Canon EOS 20D,
which begins to be diffraction limited at about f/11 (as shown above). Move your mouse over each f-number and notice the differences in the texture of the fabric.
Select Aperture: f/8.0   f/11   f/16   f/22
No Overlap of Airy Disks        Partial Overlap of Airy Disks

Note how most of the lines in the fabric are still resolved at f/11, but they are shown with slightly lower small-scale contrast or acutance (particularly where the fabric lines are very close). This is because the airy disks are only partially
overlapping, similar to the effect on adjacent rows of alternating black and white airy disks (as shown on the right). By f/22, almost all fine lines have been smoothed out because the airy disks are larger than this detail.

CALCULATING THE DIFFRACTION LIMIT


The form below calculates the size of the airy disk and assesses whether the system has become diffraction limited. The sections in dark grey are optional and allow you to define a custom circle of confusion (CoC).
Diffraction Limit Calculator (interactive)
Inputs: maximum print dimension, viewing distance, eyesight, resolution (megapixels), camera type and selected aperture, plus an option to set the circle of confusion equal to twice the pixel size.
Outputs: pixel size (µm), maximum circle of confusion (µm), diameter of the airy disk (µm), and whether the system is diffraction limited.
Note: CF = "crop factor" (commonly referred to as the focal length multiplier);
assumes square pixels, 4:3 aspect ratio for compact digital and 3:2 for SLR

This calculator decides that the system has become diffraction limited when the diameter of the airy disk exceeds that of the CoC. For a further explanation on each input setting, please see their use in the flexible depth of field calculator.
The "set CoC = Twice Pixel Size" checkbox is intended to give you an indication of when diffraction will become visible when viewing your digital image at 100% on a computer screen. Understand that the "twice pixel size" limit is somewhat
arbitrary, and that there is actually a gradual transition between when diffraction is and is not visible at 100% view. Real-world results will also depend on the lens being used, so this only applies to the sharpest lenses.

NOTES ON REAL-WORLD USE IN PHOTOGRAPHY


Even when a camera system is near or just past its diffraction limit, other factors such as focus accuracy, motion blur and imperfect lenses are likely to be more significant. Softening due to diffraction only becomes a limiting factor for
total sharpness when using a sturdy tripod, mirror lock-up and a very high quality lens.
Some diffraction is often ok if you are willing to sacrifice some sharpness at the focal plane, in exchange for a little better sharpness at the extremities of the depth of field. Alternatively, very small apertures may be required to achieve a
long exposure where needed, such as to create motion blur in flowing water for waterfall photography.
This should not lead you to think that "larger apertures are better," even though very small apertures create a soft image. Most lenses are also quite soft when used wide open (at the largest aperture available), and so there is an
optimal aperture in between the largest and smallest settings-- usually located right at or near the diffraction limit, depending on the lens. Alternatively, the optimum sharpness may even be below the diffraction limit for some lenses.
These calculations only show when diffraction becomes significant, not necessarily the location of optimum sharpness (although both often coincide). See "camera lens quality: MTF, resolution & contrast" for more on this.
Are smaller pixels somehow worse? Not necessarily. Just because the diffraction limit has been reached with large pixels does not mean the final photo will be any worse than if there were instead smaller pixels and the limit was
surpassed; both scenarios still have the same total resolution (although one will produce a larger file). Even though the resolution is the same, the camera with the smaller pixels will render the photo with fewer artifacts (such as color
moiré and aliasing). Smaller pixels also provide the flexibility of having better resolution with larger apertures, in situations where the depth of field can be more shallow. When other factors such as noise and depth of field are
considered, the answer as to which is better becomes more complicated.
Technical Note:
Since the physical size of the lens aperture is larger for telephoto lenses (f/22 is a larger opening at 200 mm than at 50 mm), why doesn't the size of the airy disk vary with focal length? This is because the distance to the focal plane also increases with focal length, so the airy disk diverges more over this greater distance. As a result, the two effects of physical aperture
size and focal length cancel out, and the size of the airy disk depends only on the f-stop, which describes both focal length and aperture size. The term used to universally describe the lens opening is the "numerical aperture" (the inverse of twice the f-stop). There is some variation between lenses, but this is mostly due to differences in lens design and in the distance between
the focal plane and the "entrance pupil."

