Full Color Capture:
Hype or Hero?
Background
You may have heard about
the upcoming Sigma SD14 that offers full color capture, but do you know
what full color capture is and what it can do for your photos? Will the
full color capture SD14 set a new standard for digital cameras, or will
it be a mere curiosity like its older siblings, the SD9 and SD10, which
developed a loyal following but never quite turned the tables on sensor
design as originally hoped? As of this writing, the Sigma SD14 is
not yet out, but the technology is already in place, so let's take a look
at the technical details of full color capture versus single color
capture.
Single Color
Capture
The vast majority of digital cameras, including high-end professional
dSLRs, use an image capture sensor that can record only one color per
pixel. Most sensors use what is often referred to as a "Bayer
mosaic" pattern, where the sensor records only one of the three primary
colors (red, green, or blue) at each photo site (pixel). A six
megapixel dSLR, for example, may have a sensor with 3000 x 2000
resolution. One thing that is often overlooked is the fact that
each of those "pixels" on the sensor records only a single color: red,
green, or blue. To make matters even more complicated, single
color capture sensors do not divide their pixels evenly into 1/3
red, 1/3 green, and 1/3 blue. Instead, half of the pixels on the
sensor are green while only 1/4 are red and 1/4 are blue. More
green sensors are used because having greater sensitivity/resolution in
green mimics how the human eye captures color. The RGBG layout of
a standard digital camera sensor looks something like this:
[Graphic: RGBG Bayer mosaic sensor layout]
Since a 3000 x 2000 (six megapixel)
dSLR returns a full color image with all three colors present at each
pixel, the most obvious question at this point is how we can end up with
a full color image when only one color was recorded for each pixel on
the sensor! The answer lies in interpolation. Digital
cameras and raw processing software use sophisticated algorithms to
predict the two missing colors at each photo site (pixel). As an
example, take a look at a blue photo site somewhere in the middle of the
above graphic. Notice that every blue photo site has
four red photo sites adjacent to it diagonally.
If all four of those adjacent red photo sites have high red brightness,
it can be "assumed" that the blue pixel will also have high red
brightness. This is a simple example, but similar prediction-based
algorithms are used at all other pixels to recover the two missing
primary colors until each pixel has all three colors (one
actual, and two predicted). Obviously the algorithms get much more
complicated when surrounding photo sites are not the same brightness,
but the general idea is to "guess" the missing two primary colors at any
given pixel by looking at the color of surrounding pixels. Once
both of the missing primaries have been interpolated for each pixel, the
final full color image has been reconstructed.
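To make the idea concrete, here is a minimal sketch (in Python, and illustrative only — not any camera maker's actual algorithm) of the diagonal-averaging step described above: estimating the missing red value at a blue photo site from its four diagonal red neighbors. The raw values and the GRBG-style row layout are assumptions made up for the example.

```python
# Hypothetical raw Bayer values (0-255), rows alternating G,R,G,R / B,G,B,G.
raw = [
    [120, 200, 118, 202],   # G  R  G  R
    [ 60, 125,  62, 123],   # B  G  B  G
    [119, 198, 121, 199],   # G  R  G  R
    [ 61, 124,  59, 126],   # B  G  B  G
]

def red_at_blue(raw, y, x):
    """Estimate red at a blue site by averaging its four diagonal
    neighbors, which are the red sites in this layout."""
    return (raw[y-1][x-1] + raw[y-1][x+1] +
            raw[y+1][x-1] + raw[y+1][x+1]) / 4.0

# The blue site at row 1, column 2 is surrounded diagonally by red values
# 200, 202, 198, and 199, so its predicted red brightness is their average.
print(red_at_blue(raw, 1, 2))  # 199.75
```

Real demosaicing algorithms weigh edges and brightness gradients rather than blindly averaging, but the principle — predicting a missing primary from nearby sites that did record it — is the same.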
Problems with
single color capture
The single color capture Bayer mosaic sensor described above is used in nearly all
digital cameras as of this writing. If you are familiar with
interpolation, you probably already know that it comes with
certain drawbacks. Because a single color capture sensor
captures only one of the three needed colors at each photo site, two thirds
of the information in your photos is being "guessed" while only one
third is "real" data! By the numbers, you'd have to wonder how
this works at all! The answer lies in the fact that our eyes
are more sensitive to changes in detail, edges, and brightness than
to changes in color. In addition, the interpolation algorithms used
to reconstruct the missing colors at each pixel have become so advanced
that they do a very good job of predicting the missing colors
under most circumstances.
The real issue with single color
capture sensors arises when you have subjects whose colors are close to
the primary red, green, and blue colors used for the photo sites on the
sensor. For areas of black/white detail, all photo sites
on the sensor react similarly (have similar brightness).
This makes it easier for the interpolation algorithm to reconstruct the
image because each photo site records nearly the same values.
This is why, when reviewers shoot resolution charts, the cameras return
resolution numbers comparable to what you'd expect if the sensor were
actually a full color capture sensor recording all three primary colors
at each photo site.
When the balance of color starts to
shift, however, particularly toward red or blue, things start to go
downhill. When shooting a bright red flower with dark red veins,
for example — one that "excites" only the red photo sites on the sensor — you
can see from the graphic above that your resolving power quickly drops to
near 1/4 resolution. This is because the green and blue sensors simply
offer no data (they are black) and only the red sensors contribute data.
The same would be true of a bright blue sweater or blue fabric.
While black/white subjects may be resolved at near full resolution, some
red/blue subjects may fall to near 1/4 resolution, and other colors like
yellow, green, and orange fall somewhere in between. Of course,
you don't see this difference as missing pixels, only as a loss of
detail/sharpness. The result is an
inconsistency in sharpness that makes some colors less
sharp/detailed than others, and the visual result is a bit
"flatter" look that some would see as less three dimensional.
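The numbers behind this drop-off can be sketched in a few lines. This toy calculation (an illustration, not a measurement of any real camera) simply counts the fraction of photo sites on an RGBG Bayer sensor whose filter matches a pure primary-color subject, using the 3000 x 2000 six megapixel example from above:

```python
# Fraction of photo sites of each filter color on an RGBG Bayer sensor:
# half green, a quarter red, a quarter blue (as described in the article).
BAYER_FRACTIONS = {"red": 0.25, "green": 0.50, "blue": 0.25}

def real_megapixels(sensor_mp, subject_color):
    """Megapixels of 'real' (non-interpolated) data available when the
    subject excites only photo sites of one filter color."""
    return sensor_mp * BAYER_FRACTIONS[subject_color]

# A six megapixel (3000 x 2000) sensor shooting pure-color subjects:
for color in ("red", "green", "blue"):
    print(f"pure {color} subject: {real_megapixels(6.0, color):.1f} MP of real data")
```

A pure red or pure blue subject leaves only 1.5 MP of real data on a 6 MP sensor — the "near 1/4 resolution" figure above — while pure green fares twice as well at 3.0 MP.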
The only saving grace for the single
color capture sensor is the fact that it is often difficult to find a
subject with a color so closely matched to the red, green, or blue
filters on the sensor that the other two primaries receive no data
whatsoever. As an example, the red photo sites on the sensor will
certainly be affected more than the green and blue sites, but most
shades of red will still elicit some response from
the green and blue sensors. It is rare to find a shade that
matches so well that the sensor records no information at all at the
green/blue sites. Granted, the lower the brightness recorded at
the green/blue photo sites, the less detail you'll have to work with
for that red subject and (potentially) the higher the image noise
levels.
For more information on "sharpness
equalization" as a means for correcting loss of sharpness/detail in
single capture sensors, please read my
article at Digital Outback Photo or try the "sharpness equalizer" in
my Qimage software.
Full color capture
and what it can do for us
Released in 2002, the Sigma SD9 was the first camera to offer full color
capture. The sensor, manufactured by Foveon, was touted as the
next generation in digital camera sensors. Using three sensor
"layers," the SD9 (and the soon-to-follow SD10) offered the ability to
capture all three primary colors (red, green, and blue) at each photo
site on the sensor. Since no interpolation was necessary, the
typical problem with sharpness/detail consistency across different
colors was solved, and to most people the result was a more 3D feel to
images. The new technology didn't come without problems though...
The first problem faced in mass
marketing this new technology was that, while the SD9 and SD10 were
marketed as 10 megapixel cameras, the final images were "only" a little
over 3 megapixels. The Sigmas were competing with 6 megapixel
dSLRs that, to the "unwashed," appeared to have twice the resolution even
though the full color capture Sigma was actually capturing more data,
and doing it in a more sensible fashion. Because many reviewers
base resolving power on test shots of a black/white resolution target,
the Sigma performed poorly compared to the single color capture 6
megapixel dSLR competition, since black/white detail is handled nicely
by standard cameras. Had those resolution test shots been black/red or
black/blue instead of black/white, it would have been a different story.
It didn't help matters that you can't
escape the age-old rule of thumb that you need 300 PPI of detail to get a
good print. The die-hard 300 PPI camp would argue that they could
print bigger prints using a standard single color 6 megapixel dSLR
because the final image was 6 megapixels compared to the 3.4 megapixels
recorded by the full color capture SD9/SD10. It also didn't help
that the SD9/SD10 could only shoot in raw format, so pictures had to be
developed after the fact; that the camera body wasn't the best on the
market at the time; and that, being a Sigma body, it required Sigma lenses,
which gave Nikon and Canon followers pause.
The final tether that kept full color
capture from reaching escape velocity in the SD9/SD10 is the fact that
it did have some problems recording consistent, noise-free color.
People familiar with the camera and its raw developing software could
produce some gorgeous photos, but it did, on average, take a little more
work than standard single color capture dSLRs. It turns out that
the layers used in the Foveon full color capture sensor made it more difficult
to achieve consistent/accurate color fidelity compared to the arguably
simpler design of the single color capture sensor. The result was
that the full color capture, Foveon-based SD9/SD10 were a little harder
to keep under control with respect to color accuracy, and they suffered
from a bit of metamerism (colors shifting under different light sources)
that was not accounted for by the hardware/software.
Looking for a
bottom line: is full color the future?
Right now, the SD14 appears to be the
new contender in the next attempt to bring full color capture into the
mainstream of digital photography. The camera has not yet been
released, but you can find information about it
here. At first glance,
the SD14 seems to step into the ring with some of the same handicaps
that held back its older siblings. While it will be advertised as
14 megapixels because it records three colors at each photo site, it
will return final (non-interpolated) images that are under 5 megapixels,
less than half the final resolution returned by the single color
capture competition.
It remains to be seen whether Foveon has
improved the color fidelity of the full color capture chip and whether Sigma has
made improvements to the body, but at least the SD14 is capable of
returning developed (JPEG, for example) photos and doesn't require raw
developing tools. While I always shoot in raw mode by choice, some
jobs actually require shooting finished images for the sake of time, and
I'm sure the ability to shoot in a "finished form" will improve sales.
The final price still has not been set to my knowledge, so I'm sure that
will be a factor as well.
Technically, the SD14 is an
interesting camera, and I applaud Sigma/Foveon for keeping the concept
alive! It really has potential, as it corrects some image
quality flaws inherent to single color capture devices. In this
respect, the SD14 is an important entry in the world of dSLR cameras!
Mathematically speaking, the SD14 will record about 40% more "real" data than
a 10 megapixel dSLR even though the final images will have half the
pixels. It sounds confusing at first, until you realize that the
SD14 is investing the data in color capture rather than added pixels.
Whether the "masses" will recognize that extra data as a benefit
or a detriment remains to be seen, but if it didn't happen the first
time (with the SD9/SD10), I have my doubts this time around.
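The megapixel math can be spelled out explicitly. This short sketch uses the article's approximate 4.6 megapixel figure for the SD14's final image size; everything else is arithmetic:

```python
# Full color capture: three color samples recorded at every photo site.
sd14_pixels = 4.6e6              # final image pixels (approx., per the article)
sd14_samples = sd14_pixels * 3   # real (non-interpolated) color samples

# Single color capture: one color sample recorded per photo site.
bayer_pixels = 10e6              # a 10 megapixel Bayer mosaic dSLR
bayer_samples = bayer_pixels * 1

ratio = sd14_samples / bayer_samples
print(f"SD14 records {ratio:.2f}x the real data")  # 1.38x, i.e. roughly 40% more
```

So despite delivering less than half the pixels, the SD14 captures about 1.4 times the real data — it spends those samples on color depth at each site instead of on additional sites.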
Summary: The future
of full color capture
Full color capture resolves a number
of issues related to today's single color capture sensors. Single
color capture has been around for decades, however, and the sensors and
the interpolation algorithms that make them work have been refined over
time. Many of the pitfalls of single color capture can be
addressed with advanced color interpolation algorithms. As a
result, to really get noticed, I believe full color capture has to take
a leap forward that would make it a clear winner in the eyes of the
consumer. In my opinion, to do that, the final image resolution
needs to be comparable to today's dSLRs. Regardless of how good
you are with math, some will see the SD14 as a 4.6 megapixel camera
competing in a 10+ megapixel market. Even if you grant that the
SD14 actually records 1.4 times the data of a typical
10 megapixel dSLR, 12-14 megapixel dSLRs are on the horizon that will
match the amount of data recorded by the SD14. Anyone familiar
with digital sampling and integration will realize that if you make the
pixels small enough and abundant enough, it won't matter that you can't
record all colors at once. Case in point: inkjet printers, audio
CDs, DVDs, etc. At some point, when the pixels get small enough,
it won't matter whether they are stacked on top of each other or not!
Due to the consumer perception that
"more pixels = better camera," it is my belief that had Sigma released
an SD30 that returned 10 megapixel non-interpolated, full color final
images, it might have made a big dent in the digital camera market and might
have turned the tide, provided the technology worked as advertised.
As is, the SD14 may end up being nothing more than another curiosity.
Personally, I wish Sigma/Foveon had made a big leap forward like an
SD30, but I also have to realize that true technical marvels take time
and often come in small steps. Either way, for me, the SD14 will
be an interesting camera that I hope, if nothing else, will help move us
forward in the arena of full color capture!
Mike Chaney