Digital Photography Essentials #005

Improving “Presence” in Digital Images

essay by Mike Chaney (author of Qimage)


Assessing and Improving “Presence” or “3D Effect” in Digital Images

Near the start of 2004, the “3D effect” became a popular topic of discussion on some of the online digital imaging message boards. There seemed to be some consensus that certain images had more “presence”, in that they simply looked more like the actual scene. A few examples were posted where people claimed the images looked as if you could “walk into them”. The term “3D effect” was most likely coined because the images appeared to allow a better or more accurate perception of depth.

The term “3D effect” was used most frequently on the Sigma SD9/SD10 forums, as SD9/SD10 users felt their cameras returned images with more depth than other cameras. After reviewing some samples, I did see a difference: some of the full color images from the SD9/SD10 appeared to have more depth than images from even the latest crop of single-color-per-pixel cameras (Canon 10D, Nikon D100, Fuji S2, etc.). I set out to discover the reason for the apparent difference and to put a more technical spin on something that people were describing as a “feeling” when viewing images.

Perception of depth in a photo has three basic elements: (1) the size of an object relative to other objects in the frame, (2) the location of an object relative to other objects, and (3) the focus of the object in relation to other objects and to the depth-of-field of the photo. Setting aside (usually minor) lens distortions, the first two elements (size and location) are not likely to change just because the sensor that captures the image uses a different technology. I therefore focused my analysis on the third element: the focus of objects in the photo.

The apparent sharpness/softness of objects in the photo is a major contributor to the human
perception of depth or 3D in an image. Our brains know that all objects in the same focal plane
(same distance from the lens) in an image should have the same focus. The subject in the photo
is usually sharp while (depending on depth-of-field) objects in front of or behind the subject
appear soft or out of focus. If one were to disrupt this relationship between focus and the
object’s actual distance from the camera lens, this would certainly degrade the perception of
depth in the photo. For example, if one were to take a photo of two people standing side by side
where both subjects are in focus and artificially/intentionally blur the subject on the right, some
of the depth in the photo may be lost because the brain wouldn’t know what to do with the
information. Is the subject on the right actually taller but standing further from the
photographer? Maybe the subject on the right is smaller but is closer to the camera? In a sense,
you have disturbed the relationship of distance/focus and therefore have diluted the 3D effect in
the image.

Having written software to deal with raw images from single color capture (Bayer type) cameras, I was aware of the pitfalls of single color capture methods and wondered whether one of these pitfalls might be causing some images to have less apparent depth. The Bayer type sensors used in typical dSLRs like the Nikon D100, Canon 10D, Kodak 14n, etc. capture a single color at each photo site, arranged in the familiar RGGB mosaic: rows alternate green/red and blue/green photo sites, so half the sites capture green and one quarter each capture red and blue.

To get a full color image from this type of “dithered” pattern, the two missing colors must be “predicted” at each photo site. The interpolation process by which the missing colors are predicted can be relatively complex. For a black and white image such as a resolution chart, things work quite well because each pixel will have roughly the same brightness for blacks, whites, and grays (red, green, and blue values are equal). If the red, green, and blue values in a particular region are the same, the interpolated colors will have roughly the same brightness as the actual (measured) color, so resolution for grays can at least approach the physical resolution of the sensor.
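To make the mosaic concrete, here is a minimal numpy sketch (my own illustration of the sampling pattern, not code from Qimage or any camera) that pushes a full color image through an RGGB Bayer grid. For a neutral gray image all three channels are equal, so the mosaic already carries full-resolution luminance; for a pure red image, only a quarter of the sites record anything at all.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full color image through an RGGB Bayer pattern.

    rgb: float array of shape (H, W, 3) with even H and W.
    Returns an (H, W) array where each pixel holds only the single
    color its photo site would capture.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites: 1/4 of pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites...
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # ...1/2 of pixels in total
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites: 1/4 of pixels
    return mosaic

ramp = np.linspace(0.1, 1.0, 64).reshape(8, 8)

# Neutral gray: R = G = B, so the mosaic equals the luminance exactly
# and interpolation loses essentially nothing.
gray = np.dstack([ramp, ramp, ramp])
assert np.allclose(bayer_mosaic(gray), ramp)

# Pure red: only the red sites see any signal; 3/4 of the mosaic is
# zero and must be "predicted" from 1/4-resolution data.
red = np.zeros((8, 8, 3))
red[..., 0] = ramp
print((bayer_mosaic(red) > 0).mean())  # 0.25
```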

After studying the layout of a Bayer type sensor, however, it becomes apparent that individual colors can only be captured at a fraction of the total sensor resolution. For example, an object that is pure red (pure as far as the sensor is concerned) and only “excites” the red photo sites will have roughly ¼ the resolution of the sensor, because the blue and green photo sites have little or no information to contribute. The red channel at blue and green photo sites therefore has to be interpolated by simply “stretching” the ¼ resolution data. Blues have the same problem because pure blues will also be captured at roughly ¼ resolution. Since green is more abundant on the sensor, pure greens will have ½ the sensor’s resolution.

The “effective resolution” for each color is therefore variable and depends on how much data is available at each photo site. Grays, where every photo site contributes an equal amount of information, achieve the highest effective resolution, while the worst cases capture only ¼ of the sensor resolution, with all the combinations in between. Pure yellow, for example, will excite the red and green sites in the grid while the blue sites will be nearly useless for interpolation. Since the green sites occupy ½ of the sensor and the red sites ¼, you are left with ¾ effective resolution for yellow.

Note that it is difficult, and rare, to get “pure” colors such as a red that has absolutely no effect on the sensor’s green and blue photo sites. The color need not be “pure”, however, to reduce the effective resolution of the image. The less information contributed by a photo site, the less “weight” the data at that site carries in the interpolation. The lower the effective resolution of a particular color, the more interpolation (stretching of the data) must be done, and the more you stretch data, the less detailed and softer it will appear.
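One simple way to put numbers on this weighting is a back-of-the-envelope model (my own reading of the argument, not a formula from Qimage): treat each pixel’s red, green, and blue values as how strongly it excites the ¼ red, ½ green, and ¼ blue photo sites, relative to its strongest channel.

```python
def effective_resolution(r, g, b):
    """Rough estimate of the fraction of sensor resolution a color keeps.

    Channels are weighted by the fraction of Bayer sites that sample
    them (1/4 red, 1/2 green, 1/4 blue), normalized by the strongest
    channel. Illustrative model only -- not Qimage's actual formula.
    """
    peak = max(r, g, b)
    if peak == 0:
        return 1.0  # black: every site agrees, nothing to stretch
    return (0.25 * r + 0.50 * g + 0.25 * b) / peak

print(effective_resolution(1, 0, 0))  # pure red     -> 0.25
print(effective_resolution(0, 0, 1))  # pure blue    -> 0.25
print(effective_resolution(0, 1, 0))  # pure green   -> 0.50
print(effective_resolution(1, 1, 0))  # pure yellow  -> 0.75
print(effective_resolution(1, 1, 1))  # neutral gray -> 1.00
```

The model reproduces the fractions quoted above: ¼ for pure reds and blues, ½ for pure greens, ¾ for yellow, and the full sensor resolution for grays.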

This inconsistency in sharpness across colors has shown up in my own work. I have noticed, for
example, that when shooting a bright red flower with gray spots, the gray spots on the flower
petals often look much sharper or more in focus than the details (veins) in the red petals even
though both are in the same focal plane. The gray details simply have a higher effective
resolution than the red details in the flower petals. This sharpness inconsistency can disturb the 3D effect in an image and may also draw the viewer’s eye to something in the photo that is not your main subject. In theory, full color sensors like the Foveon sensor in the SD9/SD10 should not have
this problem since all colors are captured at nearly the full resolution of the sensor. If my theory
is correct, the phenomenon should show up in some well chosen tests of a camera like the
Canon 10D versus the Sigma SD10.

To test my “sharpness inconsistency” theory, I shot the same subject with the Canon 10D and the Sigma SD10, trying to keep all variables (aperture, lighting conditions, shutter speed, etc.) the same. The results showed that the Sigma SD10 captured bright reds at about the same level of sharpness as grays, while the 10D rendered red details much more softly than gray details in the same photo. The example is shown below:

The Canon 10D crop appears on the top in the above sample while the Sigma SD10 crop is on
the bottom. Note that the SD10 crop was upsampled to a size comparable to the 10D shot since
the 10D returns a 6 MP image versus the SD10’s 3.4 MP image. The top 10D shot shows the red
details in the taillight quite blurred/soft while the gray pinstripe at the bottom appears quite
sharp. The SD10 shot on the bottom shows both the red dotted pattern in the taillight and the
gray pinstripe beneath at an acceptable (and comparable) level of sharpness.


Having at least preliminarily confirmed my hypothesis that different colors may appear at
different levels of sharpness when captured via a Bayer type (single color) sensor, I set out to
try to find a way to equalize sharpness levels in Bayer based images. If inconsistency in
sharpness was causing a loss of (or reduction in) 3D effect in Bayer based images, perhaps we
could make a filter that increased sharpness only where needed. If we could make a filter that
knew it had to increase sharpness of the reds in the above crop for example, while leaving grays
untouched, we might be able to bring some balance back to overall sharpness and improve the
“presence” of Bayer based images.

If we wrote a sharpening filter that could “detect” the effective resolution based on how many
photo sites on the sensor contributed information, we could apply extra sharpening proportional
to the amount of information “lost” by the imbalance of color samples. For example, we would
apply a lot of sharpening to pure red pixels or pure blue pixels because effective resolution is
down to ¼ in those regions. We’d apply a moderate amount to pure greens because effective
resolution is ½ for pure greens. Finally, we’d apply no sharpening at all for pure grays. After
testing such a filter and finding that it had a very noticeable effect on the depth of photos, I
implemented it and made it available in Qimage. Here is the result of applying the filter to the sample above:

The filtered 10D crop is on the top and the SD10 crop on the bottom. Note how the filtered 10D
shot has been improved: sharpness of red details is drastically improved while the sharpness of
the gray pinstripe beneath has been affected very little.
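Qimage’s internal code is not published, but a minimal sketch of how such an equalizing filter could work, reusing the illustrative effective-resolution model from earlier, might look like this: compute a per-pixel weight from how far the effective resolution falls below 1, then blend in a proportional amount of unsharp masking.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness_equalizer(img, max_amount=2.0, radius=1.0):
    """Sharpen more where estimated effective resolution is low.

    img: float RGB array, shape (H, W, 3), values in [0, 1], already
    demosaiced. An illustrative sketch of the concept only -- this is
    not Qimage's actual filter.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    peak = np.maximum(np.maximum(r, g), b)
    # Estimated fraction of sensor resolution for each pixel's color
    # (1.0 for grays and blacks, down to 0.25 for pure red/blue).
    eff = np.where(peak > 0,
                   (0.25 * r + 0.50 * g + 0.25 * b) / np.maximum(peak, 1e-6),
                   1.0)
    # Zero extra sharpening for grays (eff = 1), full strength at eff = 1/4.
    weight = (1.0 - eff) / 0.75
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    detail = img - blurred                     # high-frequency detail
    amount = (max_amount * weight)[..., None]  # per-pixel strength
    return np.clip(img + amount * detail, 0.0, 1.0)
```

Pure grays get a weight of zero and pass through untouched, while pure reds and blues receive the full sharpening amount, mirroring the behavior seen in the crops above.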

The crops above show how the new filter (called “sharpness equalizer” in Qimage) can equalize the sharpness of different colors in an image, but how does this affect the overall 3D effect or perception of depth in photos? Take a look at the following example, where the same filter as above was applied to a “real life” image.


The above shows an original crop from a Canon 10D on the top and the filtered image on the
bottom. Looking at the original on the top, notice how the more vibrantly colored bird appears
softer or more out of focus than the water and rocks in the same focal plane. The filtered image
on the bottom shows how the sharpness has been equalized and the bird now appears to be in
the same focal plane as the rocks. The bird in this case needed more sharpening than the rocks.
The result is a better feeling of depth in the filtered image.

Note that while “sharpness equalization” does appear to “fix” problems with depth in Bayer
based photos, it cannot bring back information that has been lost due to interpolation deficits in
some colors. In other words, equalizing sharpness across colors in Bayer based images certainly
improves their appearance and does appear to bring back depth lost by the Bayer interpolation
process, but nothing can recover the actual detail that might be missing in some cases due to
the loss of effective resolution in some colors.

Perhaps camera manufacturers will address this issue in camera firmware in the future. The issue (and the fix) appear basic enough that the best place for a solution would be in the firmware that translates your camera’s raw sensor data into a final image. Manufacturers could use data related to a camera’s specific hardware and interpolation algorithms to select the proper amount of “sharpness equalization” to apply, based on the type of antialias filter used, the radius of the interpolation algorithm, and so on. For now, sharpness equalization filters offer users of Bayer sensor cameras an opportunity to make some pretty significant improvements to their images.

As a final note, the actual layout of the sensor need not be known when applying sharpness equalization filters. The filter is applied to the final image, which has already had its full color data interpolated (spread over the image) as well as possible, so there is no need to know which color was actually captured at each pixel. Also note that the images need not be strictly Bayer images for the filter to work. All that is required is that the sensor capture a single color at each pixel, with green pixels accounting for ½ of the total pixels on the sensor, red pixels ¼, and blue pixels ¼. The Fuji S1, S2, and all cameras using the Super CCD, for example, still use a Bayer pattern; it is simply rotated 45 degrees. For the purists who shoot in raw capture mode, be aware that you can achieve slightly better performance by applying the filter after the image has been interpolated to full color (all three colors at each pixel) but before converting to a color space. In other words, there is some benefit to applying the filter in the color space used by the sensor, since the distribution of RGB values changes slightly once you convert to Adobe RGB, sRGB, etc. for the final image.
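In pipeline terms, that advice reads roughly as follows. Here demosaic() and convert_to_output_space() are hypothetical placeholder names for whatever your raw converter actually provides, and sharpness_equalizer() is the sketch from above:

```python
# Illustrative ordering only -- the function names are placeholders.
sensor_rgb = demosaic(raw_sensor_data)       # full RGB per pixel, still in
                                             # the sensor's native color space
equalized = sharpness_equalizer(sensor_rgb)  # equalize while the RGB ratios
                                             # still reflect the raw samples
final = convert_to_output_space(equalized)   # only then map to sRGB, Adobe
                                             # RGB, etc. for output
```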

1/30/2004 PreSharpen 3D plugin

You can now get the new PreSharpen 3D plugin that implements Mike's concept.



© 2000-2005 Digital Outback Photo

