Technical Discussions / Articles / September 2008: dSLR Sensor Cleaning
on: May 27, 2009, 03:18:50 PM
dSLR Sensor Cleaning
Background
You've been taking great
shots with your dSLR for some time, changing lenses for the occasion,
and now you notice some spots in your photos in bright areas like blue
skies. The spots seem to be in the same place in the frame with
each shot. Then a sinking feeling of doom ensues as you realize
you have dust in the camera and visions of opening the camera and
electronic microsurgery enter your mind. Sensor cleaning can seem
beyond the ability of the average dSLR owner but the procedure really
isn't very difficult. It all comes down to your ability to follow
instructions in most cases. Let's take a look at sensor dust and
sensor cleaning to see if it is something you would like to try or if
you'd rather take the camera to your local camera shop instead.
Recognizing dust and
debris
Above are two examples of dust in the camera. Dust can appear as
near-pinpoint specks (top crop) or as larger, more diffuse circles (bottom
crop). The above is a relatively mild case of dust; you may see
other types of debris such as much larger or darker spots, small hairs,
and so on. Dust is more visible in areas of bright, uniform color
such as blue skies. In addition, due to the angle of light and the
shadow the dust casts on the sensor, the dust often appears more
diffuse at larger apertures and closer to small, sharp specks at small
apertures. Most importantly, the specks or circles will be in the
same place in each frame.
If you see spots like those above in
your photos and suspect dust in the camera, the first logical step is to
go to a relatively dust-free environment, remove the lens, and
carefully clean both the front and rear glass elements of the lens.
If spots still appear after cleaning the lens, you'll know the dust is
on the sensor. Next, set your camera to aperture priority and
select a very small aperture like f/22. Set the camera to the
lowest ISO setting such as ISO 100. Now find a uniform surface
like a white ceiling or wall in a well-lit room. It is more
important that the surface be as uniform and texture-free as possible
than that it be white: any light color will do. Take a shot
of the wall/ceiling. Note that if the camera picks an exposure
time of one second or longer, this is a good thing! In
fact, the more you move the camera (within the bounds of the uniform
wall/ceiling) and the longer the exposure, the better, because we want
to blur any non-uniformity on the wall/ceiling: the dust on the sensor
won't move, so it will still be sharp. Be sure to open the shot in
your favorite photo viewing/editing tool and move around the shot at
100% (1:1) zoom so you can see the small specks if they exist.
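The test-shot logic above can be sketched in code. This is a hypothetical illustration (not a tool mentioned in the article): dust stays at the same pixel position from frame to frame, while wall texture shifts when you move the camera, so a pixel that is dark in both test frames at the same spot is a likely dust speck.

```python
# Hypothetical sketch: flag sensor-dust candidates by comparing two test shots.
# A dust speck sits at a fixed position on the sensor, so it appears at the
# same (row, col) in every frame; wall texture does not.

def dust_candidates(frame_a, frame_b, threshold=200):
    """Return (row, col) positions that are dark in both grayscale frames (0-255)."""
    hits = []
    for r, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if a < threshold and b < threshold:
                hits.append((r, c))
    return hits

# Two simulated 4x4 test shots of a bright wall; one dark speck at (1, 2).
shot1 = [[250, 248, 251, 249],
         [250, 249, 120, 250],
         [247, 251, 250, 248],
         [249, 250, 249, 251]]
shot2 = [[249, 251, 250, 250],
         [248, 250, 115, 249],
         [251, 249, 248, 250],
         [250, 248, 251, 249]]

print(dust_candidates(shot1, shot2))  # → [(1, 2)]
```

In practice you would inspect the frames by eye as described above, but the same "dark in both frames at the same position" rule is what you are applying mentally.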
It's confirmed: you have
dust on the sensor
Now that you've cleaned the lens
elements and have identified some spots in the frame, you want to clean
the sensor to remove the debris. I like to approach sensor
cleaning in stages, performing the least invasive cleaning first and
working up to the tougher cleaning techniques. The first method of
cleaning is to use a simple handheld blower bulb. These are the
small rubber bulbs with a plastic nozzle that you can buy at
almost any camera store. While various forms of canned compressed
air can blow more air, you're safer using a simple squeeze bulb: some
canned air products contain oil or may spray liquid (read: very cold)
gas, which can harm the sensor or at least make your cleaning job even
more difficult by leaving a residue on the sensor.
Before beginning any sensor cleaning
task, first make sure your camera battery is fully charged. You
don't want the shutter/mirror closing on you while you are cleaning!
While most cameras offer a "sensor clean" or "mirror up" function in the
menu that was designed for cleaning, some (particularly older) cameras
don't offer this feature or only offer the feature if you have an AC
power supply. In those cases, you can usually set the camera to
manual exposure and set the shutter to 30 seconds. You then have
about 20 seconds to do your cleaning once you press the shutter release
(you don't want to come anywhere near the 30 seconds and risk the
mirror/shutter closing so getting out of there by 15-20 seconds seems
prudent). I prefer using the 30 second shutter instead of
the bulb setting for cleaning when a specific cleaning option isn't
present in the menus because: (a) you know how long you have before you
have to remove the cleaning devices and (b) if you use the bulb setting,
your finger may slip off the shutter button while you are cleaning.
Obviously, remove the lens from the
mount first and then open the shutter so that you can see the sensor in
the camera. My preference is to hold the camera so that the lens
mount is facing the floor. That way, any debris that is blown out
has a better chance of falling out onto the floor instead of just being
blown around in the camera. With the camera facing down and the
shutter open, put the tube of the bulb blower up to the lens mount and
center it in the middle of the hole. I would recommend not putting
the tube into the lens mount hole or close to the sensor because when
you squeeze the bulb, there is a chance that the movement will cause the
tube to strike something (mirror, shutter, or even sensor) in the
camera. So keep the tube just outside the lens mount hole.
Give a few quick bursts of air, pointing the tube at the middle of the
sensor. Once you've done that, take another test shot (per the
above instructions). Did the dust specks go away?
If most of the dust specks went away
with only one or two very small specks left, you've probably done a good
enough job. You may want to repeat the bulb cleaning per the above
one or two more times to see if you can remove all the dust, but for
the average Joe, be happy with only a speck or two! Many times
people go too far with different techniques and end up making things worse
and/or introducing more debris into the system. Also realize that
the smaller specks are only likely to show up in "sky shots" taken at a very
small aperture anyway.
A more thorough cleaning
So what if the simple blower bulb
method doesn't work? Maybe some of the specks on your sensor are
"sticky" and will not come off with a simple shot of air. There
are many products on the market such as fine brushes, mild solvents
with swabs, and even "sticky tape" products designed to clean more stubborn
debris from the sensor. Again, my preference is to go with the
lighter touch first. In my opinion, the next phase is to try a
sensor cleaning brush. A Canadian company called
Visible Dust makes good
products that I have used on a number of cameras. One Visible Dust
product that I can recommend is the Arctic Butterfly. The Arctic
Butterfly is basically a very fine bristle brush on a rotating shaft.
You simply press a button for a few seconds and the brush rotates in the
air rapidly (the unit is battery powered), flinging off any prior dust
that might have been on the brush while statically charging the brush at
the same time. Once charged, you simply swipe the
brush lightly over the sensor (never spin the brush while it is in the camera)
and recheck for dust specks.
I find that a quick swipe with the
Arctic Butterfly followed by a burst or two from the blower bulb often
gives the best results, since it can sometimes be difficult to get dust off
the edge of the sensor and the static charge doesn't always
attract all the dust. If the brush method is still unable to
remove those last few specks, you may need a "wet cleaning".
Visible Dust also sells swabs and cleaning solution. A wet
cleaning, your last resort, basically consists of wetting a
swab with cleaning solution and swiping the sensor with the swab.
As with any method, the most important part of the task is to follow
the instructions explicitly! Sensor cleaning, at its worst,
comes down to nothing more than a window cleaning job... a delicate
one... and one done in a confined space. Other than following
directions, the best advice I can give is to first determine if you are
up to the task after reading this article and possibly even the online
instructions: Visible Dust has detailed illustrated instructions on
their web site for example. If you feel confident enough about
taking on the challenge, just be gentle! While sensors are more
protected by things like antialiasing filters than most people might
think, care is still needed to avoid damage to your camera.
Whatever cleaning method you choose, follow instructions, take your
time, and be sure that the mirror/shutter never closes while you are
doing things like brushing the sensor! Shutter/mirror damage is
actually more common than damage to the actual sensor.
But my camera has
electronic sensor cleaning
Many newer dSLRs employ a method of
electronic sensor cleaning where the camera shakes the sensor
at ultrasonic frequencies to dislodge debris from the sensor.
While this feature is great to have and it does work (although its
effectiveness varies widely across manufacturers and camera models), I'm
not a big fan of electronic sensor cleaning. When the electronic
sensor cleaning cycle is done and that dust is gone from the sensor,
where did it go? You guessed it: somewhere inside the camera, and
if all you ever did was perform electronic cleaning cycles, some of that
dislodged dust is likely to make its way back onto the sensor
eventually. On cameras that will allow you to run the electronic
cleaning cycle with the lens removed and the shutter open, I like to
hold the camera with the lens off, shutter open, camera mount facing the
floor, and then run the cleaning cycle. If you do this in a room
where the air is still, most of the debris will fall straight out the
camera mount hole and onto the floor instead of being dislodged into the
inside of your camera. In any case, even with electronic cleaning
devices, it is inevitable that eventually, you'll need to manually clean
your camera's sensor as the electronic cleaning cycles cannot remove all
(types of) debris. Self-cleaning ovens are nice too, but that
doesn't mean you'll never have to wipe the inside of your oven.
The same goes for electronic sensor cleaning. If the electronic
cleaning cycle doesn't remove all the dust specks, don't get discouraged.
Just realize that such devices cannot completely eliminate the need for
an occasional sensor cleaning.
Summary
For those who shoot with dSLR
cameras, sensor cleaning will eventually become a fact of life; a part
of normal maintenance for your camera. If you're skittish about
technical things, it might be best to take your camera to the local
camera store when a cleaning is needed. If you are not intimidated
by the occasional techie type job, however, sensor cleaning might be
worth a try. In my opinion, it is certainly something achievable
by the average person as long as care is taken and instructions are
followed to the letter. In addition, starting from the simplest
cleaning method first (the blower bulb method) and working your way up,
perhaps you can get a better idea about the type of cleaning you are
capable of performing without jumping in the deep end first and then
finding that you can't swim!
Mike Chaney
Technical Discussions / Articles / August 2008: Stop "Cooking" Your Photos: Shoot Raw!
on: May 27, 2009, 03:13:41 PM
Stop "Cooking" Your
Photos; Shoot Raw!
Background
In
July
2006, I wrote a brief article
about how shooting in raw capture mode could change your outlook on
photography. While the benefits of shooting in raw capture mode
are as clear today as they were two years ago, still, a lot has changed
in two years. Have you changed, or are you still
shooting in JPEG capture mode with your camera? Let's take a
look at raw shooting, how it can benefit you, and discover some of the
tools used today to help you in your "raw workflow".
Raw versus JPEG
Many recent cameras offer the ability to shoot in raw capture mode as
well as JPEG capture mode. While JPEG capture modes are often
labelled "Large", "Fine", or "Basic", the "Raw" menu selection gives you
access to a shooting mode that is entirely different. When
shooting in raw capture mode, the raw data (basically the data straight
from the sensor) is stored in a file (usually in a proprietary format) on
your flash card. While you can review your shots on the camera,
you'll need to "develop" the raw data once you download the raw files
from your flash card before you can view or print them using your
computer.
So what's the benefit of shooting in
raw capture mode? In a word: quality! When you shoot in JPEG
capture mode, you're looking at a processed image. Basically, your
camera has taken the raw data and "cooked" it in order to create a JPEG
image from that data. If your white balance, exposure, and
lighting are perfect, JPEG shooting might be OK. Problems arise,
however, when you need to rescue an underexposed or overexposed shot
because the JPEG data has already been "truncated" to the exposure used
by the camera. Your JPEG has 256 steps or gradations for each
color, and in an underexposed shot, only the first 64 steps may be used.
When you brighten the photo, you see banding, noise, and other artifacts
because you've truncated a 16.7-million-color image down to only about
260,000 possible colors!
Unlike a JPEG, the raw data contains
a much finer version of the data: one that has (usually) anywhere from
4096 steps to as many as 16,384 steps for each color. Even with a
12-bit raw file that has 4096 steps or "shades" for each (red, green,
and blue) color, instead of having only 64 steps to work with as with
your JPEG, you now have over 1000 steps even on your underexposed image.
Now you can brighten the photo by adding exposure compensation without
the ugly stepping patterns produced by the JPEG.
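The arithmetic behind the two paragraphs above is easy to check. Assuming an underexposure that confines the data to the bottom quarter of the tonal range (the "first 64 steps" case from the text):

```python
# Worked numbers for the underexposure example above.

jpeg_steps = 256          # 8 bits per color channel
raw_steps_12bit = 4096    # 12 bits per color channel

# Underexposed so that only the bottom 1/4 of the range holds data:
jpeg_usable = jpeg_steps // 4        # 64 steps per channel
raw_usable = raw_steps_12bit // 4    # 1024 steps per channel

# Total distinct colors representable with those per-channel steps:
print(jpeg_usable ** 3)   # → 262144, the "about 260,000 colors" in the text
print(raw_usable)         # → 1024, the "over 1000 steps" in the text
```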
In addition to having more
"granularity", raw photos also contain something called "headroom".
Headroom is an area of beyond-white steps that allow you to pull back
exposure to recover blown highlights. If you shoot a yellow flower
only to find that a large portion of the flower is blown out (bright
yellow with no detail), there's no way to recover the lost data with a
JPEG. If the same shot had been taken in raw capture mode, some if
not all of the blown-out highlights could be recovered because there's
enough data in the raw file to capture steps (brightness values)
beyond what would appear as maximum (255) in your JPEGs.
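A tiny simulation makes the headroom idea concrete. The numbers here are illustrative only (no specific camera behaves exactly like this): we assume the JPEG white point sits about one stop below where the 12-bit raw file saturates.

```python
# Illustrative sketch of highlight "headroom" (toy numbers, not a real camera).
# Scene values above the JPEG white point are clipped to 255 in the JPEG but
# remain distinct in the 12-bit raw file, so pulling exposure back recovers detail.

scene = [200, 255, 300, 400]            # relative scene brightness values

jpeg = [min(v, 255) for v in scene]     # JPEG clips everything above 255
raw_12bit = [min(v * 8, 4095) for v in scene]  # white point ~1 stop below raw saturation

# Pull exposure back one stop (divide by 2), then express as 8-bit values:
jpeg_pulled = [v // 2 for v in jpeg]                       # clipped areas stay flat
raw_pulled = [min(v // 2 // 8, 255) for v in raw_12bit]    # detail comes back

print(jpeg_pulled)  # → [100, 127, 127, 127]  blown areas are still featureless
print(raw_pulled)   # → [100, 127, 150, 200]  highlight detail recovered
```

The JPEG's three brightest values collapse to one flat tone no matter what you do afterward; the raw file still distinguishes them.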
Simply put, shooting in JPEG capture
mode is equivalent to being on a construction job, quickly measuring
once, and cutting a board based on that one quick-and-dirty measurement.
After cutting, if you find out that you cut your board too short,
there's no way to go back and make it longer. The bottom line here
is that JPEG files throw away data. Don't cut your data based on
the assumption that your camera (or you) will always meter and set up
the scene properly! Shoot in raw capture mode so you can make the
most of your photos. I've seen many photos whose JPEG versions were
destined for the recycle bin but were easily rescued from the raw
files.
Handling and developing
your raw photos
The downside (if you can call it
that) of shooting in raw capture mode is that you may not be able to
just open the files in your favorite photo printing tool and click the print
button like you may have done with your JPEGs. Two years ago, the
process wasn't so simple. You needed software to develop the raw
photos first, and not all cameras came with such software, so you had to
buy third party software costing from $100 to $200 just to be able to do
anything with your raw photos. To make matters worse, every
manufacturer had their own proprietary format for the raw photos and
even within one manufacturer, the raw format usually differed from model
to model. This made it difficult for any one software package to
support all cameras, making you have to search to find a solution that
would work with your particular camera.
Fast forward to August 2008 and
things have changed considerably. There are now open source
solutions that cover almost every camera that can shoot in raw capture
mode, and while manufacturers still haven't made headway toward agreeing
on one international standard for raw photos, at least
there is now a wide variety of applications that can handle
raw photos from almost any camera.
Even more interesting is the fact
that some tools are now allowing you to treat raw photos like any other
type of photo such as JPEG's. The Studio Edition of my own
Qimage software, for
example, allows you to view, convert, print, and tweak raw photos just
like any other supported image format while applying automatic exposure
and noise reduction, thereby minimizing the amount of tweaking needed to
get the best from your raw photos. With more and more tools
supporting raw photos in different ways, it's a lot easier to find the
right tools to make raw shooting agree with your own preferred workflow.
Some people like to fiddle with each photo, trying different settings,
while others just want to be able to get the best automatic rendition
possible in order to minimize extra work. Whichever boat you find
yourself in, there are many raw tools to choose from. Here's a
quick list of some of the more popular "high end" third party solutions
and their strengths. Note that manufacturer solutions (that come
with your camera or can be ordered separately) are not included.
Note also that the solutions below are, in my opinion,
acceptable tools for viewing, publishing, or printing final
versions of your raw photos. There are many more tools that
"support" raw formats that I didn't list because, while you can get an
"image" from them, they lack essentials for final output like color
management or proper color transforms to allow for accurate color.
So if you use a particular raw tool that is not listed, it's most likely
because I didn't consider that tool a major player in the field of
quality raw tools.
Raw Tool | Price range* | Strength(s)
Adobe Camera Raw | Free (must have PhotoShop) | Integrates w/PhotoShop; lots of aftermarket "add-ons"
Adobe Lightroom | $299 | Emphasis on image management and cataloging
Bibble | $70 - $130 | Excellent quality; unprecedented control
Capture One 4 | $129 | Excellent color; profiles for many cameras
Qimage Studio | $90 | Minimizes need to tweak each photo; color accuracy
Silkypix | Free - $149 | Emphasis on color accuracy and noise control
* as of July 2008
Summary
Shooting in raw capture mode can be a
major advantage if you are interested in squeezing the most quality from
your photos. With some of the latest raw-capable tools on the
market, shooting raw and being able to find an acceptable workflow for
actually using the raw files has become much easier in the last year or
so. Not convinced? Try shooting some photos in raw format
before you make up your mind. If your camera offers a JPEG+Raw
shooting mode where you can actually capture both, try that and compare
what you can get from the raw versus the JPEG images. I did that
at first, and now I've dumped JPEG shooting altogether as I can always
get better quality from the raw photos and the added JPEG version just
takes up space on my flash cards!
At the end of the day, taking the
time to learn which raw tool(s) are best for you and working out your
own personal raw workflow will pay dividends and can help you get better
photos than ever. So if you looked into raw shooting before and it
seemed that the extra effort wasn't worth it, it may be time to give it
another try.
Mike Chaney
Technical Discussions / Articles / July 2008: Innovations in Camera Profiling
on: May 27, 2009, 03:10:59 PM
Innovations in Camera
Profiling
Background
In
January 2007, I demonstrated a
method for creating an ICC profile for your digital camera using a
standard IT8 target. The article covered how to set up and shoot
the target and how to process the photo and create a profile using my
Profile
Prism software. While the
process was relatively simple, camera profiling has always come with
some limitations and tradeoffs. The biggest problem with camera
profiling is being able to create a profile that works for all photos.
Because things like exposure, lighting, and white balance are always
relative, creating an ICC profile from a shot of a reference target can
always be problematic as the camera often uses different tone curves for
different subjects and lighting.
In addition, almost all
cameras and developing software tend to "enhance" the tone curves to
produce more vibrant photos because "linear" tone curves can look a bit
dull. The question I often got from people trying to create camera
profiles was, "How can I create a profile that corrects color problems
without modifying the tone curves or making the image look dull?"
Until recently, this was not possible or at least not easily achieved
because an ICC profile will always try to correct all aspects of color:
tonality (brightness), hue, and saturation. With the recent
release of
Profile Prism v6.5,
it is now possible to create hue correcting profiles that correct
problems such as reds looking too orange, blues looking too purple,
undersaturation of yellow, and so on without changing the contrast
chosen by the camera or developing software. Let's take a look at
how this is done.
The problem
A device (camera) ICC profile is a file that describes how to accurately
reproduce color for that device. Unfortunately "accurate" profiles
are rarely what people want or need because they produce linear tonality
in photos which can look dull: like there is a fog over the photo
compared to what we are used to seeing. In other words, we are
used to seeing the linear/accurate result that has been modified to add
a little "pop" to the photo. This usually entails making the
shadows a little darker and the highlights a little brighter. This
is done automatically by your camera or raw software and is often not
optional. If you notice some minor shifts in color when using your
particular camera and you want to create a profile to keep that red
sweater from turning orange, you might think an ICC profile is the best
way to do this without having to edit the photo manually each time or
eyeballing corrections using color channel sliders. You'd be
right, except when you create that profile, it'll not only correct the
red/orange shift but will also "undo" the tonality adjustments that make
your images pop. That's the nature of an ICC profile and ICC
profiling tools: they try to be all things at once, describing
luminance, hue, and saturation accurately rather than how you may
want to see it.
Another issue with creating camera
profiles is that it has traditionally been difficult to impossible to
create a good camera profile for JPEG images straight from the camera.
While creating profiles for raw developing tools worked reasonably well,
how do you create a profile that corrects color issues in a JPEG that
came from your camera: a JPEG that has already been "profiled" once to a
color space such as sRGB or Adobe RGB but one that may have a few
hue/saturation mistakes in certain areas? Fortunately a solution
now exists that can address both problems, allowing you to create camera
"calibration profiles" for any raw developing tool, any camera, and any
in-camera JPEG without changing brightness and contrast.
The solution
The solution to correcting color
without dulling your images lies with the profiling tool. It must
be able to discern the underlying tone curve along with the
"enhancements" made to that tone curve in order to reproduce the
intended contrast. Doing so will allow color (hue) corrections
without changing overall contrast or brightness. Most cameras and
raw developing tools allow the photographer to select tonality settings
such as "neutral" or "vivid" and those selections allow you to make a
decision about contrast. The biggest complaint and one that has
traditionally been impossible to correct in camera (or in raw developing
software) is one of hue shifts where colors look shifted in hue or
under/over saturated. Let's take a look at how to create one of
these calibration profiles using Profile Prism v6.5:
- Follow the steps in my January 2007 article, except
- Choose "Gamma Match (Auto)" for "Tone Reprod. Curve"
That's it! By choosing "Gamma
Match (Auto)" in Profile Prism's "Tone Reprod. Curve", you are telling
it to discover the intended/underlying gamma curve so that it can
reproduce the same brightness and contrast, correcting only errors in
hue and saturation. This method should work in the majority of
situations. The manual gamma match options such as Gamma Match
(2.2) or Gamma Match (1.8) only need to be used if the resulting profile
appears to make images look too dull or too contrasty. In those
situations, Profile Prism may not have been able to automatically detect
the proper curve due to the camera or raw developing software
manipulating the curves too much. In such cases, manually
selecting either Gamma Match (2.2) or Gamma Match (1.8) will solve the
problem and restore the original brightness/contrast.
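The idea behind a gamma match can be sketched numerically. This is a hypothetical illustration only (Profile Prism's actual algorithm is not published): given measured patch values from a target shot and the patches' known linear reflectances, the tone curve's gamma can be estimated so that a profile corrects hue and saturation while leaving that curve alone.

```python
# Hypothetical sketch: estimate the tone curve's gamma from target patches,
# assuming the curve is a pure power function measured = reference ** gamma
# on values normalized to 0..1.
import math

def estimate_gamma(reference_linear, measured):
    """Average log-ratio fit of measured = reference ** gamma."""
    ratios = [math.log(m) / math.log(r)
              for r, m in zip(reference_linear, measured)
              if 0 < r < 1 and 0 < m < 1]
    return sum(ratios) / len(ratios)

# Synthetic target: the camera applied a standard gamma-2.2 encoding
# (i.e. raised linear values to the power 1/2.2).
reference = [0.05, 0.10, 0.25, 0.50, 0.75]
measured = [r ** (1 / 2.2) for r in reference]

g = estimate_gamma(reference, measured)
print(round(1 / g, 2))  # → 2.2, the encoding gamma recovered from the patches
```

Once the curve is known, the profiling tool can bake the same curve back into the profile, so only the hue/saturation errors get corrected.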
Camera and raw software
settings
The beauty of the gamma match camera
profiling options is that they can be used to create
non-tonality-modifying calibration profiles without trial-and-error
modification of color channels. Because they correct
only color shifts and saturation problems, they can be used on any type
of photo from your camera, whether JPEG or raw. But what about
camera settings or raw developing tool settings? What should you
use? The answer is simple. Again, because these profiles are
only correcting (presumably small) shifts in color and saturation, you
would use whatever method you normally use to capture photos and then
create a profile for those developed photos.
For example, if you normally shoot in
JPEG mode and you have your camera set to Adobe RGB color space, keep
doing the same: take your shot of the IT8 target and then develop the
profile based on that Adobe RGB JPEG from the camera. The
resulting profile is then assigned to the image. The
assigned profile overrides the initial Adobe RGB color space and assigns
a profile that describes color more accurately than Adobe RGB. By
assigning the profile and using color management aware software (like
PhotoShop or Qimage), your corrections are automatic because the
software you are using will see and utilize the new (corrective)
profile. This is the preferred method since there is no
second/additional profile conversion. Your calibration profile in
this case is doing nothing more than modifying how to interpret the RGB
values in the photo.
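The assign-versus-convert distinction can be shown with a toy data structure. This is purely illustrative (the numbers are not real ICC math, and the profile names are made up): assigning a profile changes only the interpretation metadata attached to a photo, while converting actually transforms the pixel values.

```python
# Illustrative sketch of "assign" vs "convert" (toy numbers, not real ICC math).

photo = {"pixels": [(200, 80, 60)], "profile": "AdobeRGB"}

def assign_profile(img, profile_name):
    # Pixel values unchanged; only the tag describing their meaning changes.
    return {"pixels": img["pixels"], "profile": profile_name}

def convert_to_srgb(img, transform):
    # Pixel values are remapped through a (toy) per-channel transform.
    new_pixels = [tuple(transform(c) for c in px) for px in img["pixels"]]
    return {"pixels": new_pixels, "profile": "sRGB"}

# "MyCameraCalibration.icc" is a hypothetical profile name for illustration.
assigned = assign_profile(photo, "MyCameraCalibration.icc")
converted = convert_to_srgb(photo, lambda c: min(255, int(c * 1.1)))

print(assigned["pixels"])   # → [(200, 80, 60)]  values untouched
print(converted["pixels"])  # → [(220, 88, 66)]  values remapped
```

Color-management-aware software reads the assigned profile and does the remapping at display or print time, which is why assigning avoids a second conversion of the data itself.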
If you are creating photos to be
viewed in non-color-managed software, such as photos that will be viewed
on the web or via email, you'll want to convert from the camera
profile to a standard color space like sRGB rather than just assigning
the camera profile. Whether you choose to assign your camera
profile or convert from that profile to a standard color space, the
profile should correct all color issues without affecting brightness and
contrast.
When creating profiles for photos
processed in raw developing tools, the same rule applies. You can
keep all your raw development settings in place and create a profile to
assign after the photos have been developed. Raw tools give you
one additional option, however, in that some raw developing tools allow
you to turn off color management and create a profile based on the raw
data. Most tools offer the ability to set your color management or
camera profile to "none" or "embed camera profile". This
effectively turns off all color manipulation while only applying a tone
curve (gamma). This method is even better because you can profile
the photo before any changes have been made to hue or saturation.
Raw developing tools that allow you to turn off color management usually
offer a way to activate the new profile within the software, so see the
program help for your raw developing tool for more info. Of
course, if you use more than one raw developing tool, you must develop
separate ICC profiles for each raw developing program as they all
produce color slightly differently.
Summary
Camera profiling has always been hit
or miss because exposure, lighting, and other factors are
not constant from shot to shot. As a result, camera profiles often
cause unwanted changes in brightness or contrast as the profile tries to
"correct" for the preferred tone curve of the camera or raw developing
tool. I've found that people are almost always happy with
brightness and contrast but often want to make subtle changes to color
in order to correct issues with saturation or color shifting.
Because existing profiling tools are designed to correct all aspects of
color including brightness and contrast, people often find that ICC
profiles cause unwanted changes in brightness and contrast in addition
to correcting hue and saturation issues. This has forced most
people to create manual color "calibrations" or macros by using generic
color charts (often with only a few colors), eyeballing differences, and
changing color channel sliders to compensate.
With
Profile Prism v6.5, it is
now possible to create hue/saturation correcting profiles that do not
alter brightness or contrast. Such profiles can be described as
color calibration profiles and as far as I know, no other tool currently
has the capability to create profiles that correct hue and saturation
while leaving brightness and contrast untouched. Being able to
create calibration ICC profiles has some significant advantages over
creating color calibration routines or macros:
- Creation of calibration profiles is fully automated and involves no guesswork or "eyeballing".
- Resulting profiles can be used in any color management aware software and don't depend on using a certain photo editor in order to apply changes.
- Calibration profiles can be used to convert batches of photos to standard color spaces for display on the web or via email.
- Calibration profiles are less time consuming to create because changes are based on actual/measured response rather than trial and error.
- Calibration profiles can address color corrections for your specific camera and/or lighting situations rather than a broader or generic correction for one model number and/or one type of lighting.
Mike Chaney
Technical Discussions / Articles / June 2008: What to Buy: dSLR or Compact Camera
on: May 27, 2009, 03:08:23 PM
|
What to Buy: dSLR or
Compact Camera
Background
Just a few years ago, the
dSLR camera was reserved for professionals or amateurs who were very
serious about photography. The cost was high enough that it kept
many casual shooters from even considering a dSLR. The price gap
isn't what it used to be, however, and it is now possible to get a good
dSLR (with a decent lens) for a little more than double what you'd pay
for a compact point-and-shoot camera. Should you consider a dSLR
for your next digital camera or is a compact right for you? While
it is impossible to cover every aspect of such a decision and how they
might affect your personal choice, let's take a look at some of the
driving factors that distinguish a dSLR from a compact/pocket camera!
Compact "pocket rockets"
The term "compact" camera can cover cameras from purse-size (about the
size of a brick or smaller) down to truly compact cameras that fit in a
shirt/pants pocket. The latter have become more popular recently
because the technology that drives them has gotten smaller,
allowing great photos in a smaller package, and if you are going to buy
a small camera, why not buy one small enough to fit in your back
pocket? The Sony W-170 is a good example of a modern "pocket
rocket".
Unlike years ago, when you had to sacrifice a lot of features and quality to shoot with a compact camera, today's compacts offer much the same capability as dSLR's, and many offer manual modes that rival the control you'd get when using a dSLR!  Also in the compact's favor is the
fact that everything is matched and made to work together. The
lens is the proper size and quality needed to pair with the imaging
sensor, the flash is mated to both the lens and camera capabilities, and
so on. Compact cameras often offer user friendly scene selections
that allow you to choose "sports", "portrait", "night shot" and other
modes and the camera takes care of the settings such as aperture,
shutter speed, and sensitivity for you. This allows the casual
shooter to choose the right settings for the type of photos they are
taking without having to know how each individual parameter affects
image capture.
In addition to ease-of-use and
features, the compact camera has one major advantage over the dSLR:
size! You can only take pictures if you have your camera with you
and if your camera fits in your pocket, you are much more likely to have
it with you than if you know you have to lug a big camera (with a lens
that sometimes weighs more than the camera) around all day with the
strap pulling at your neck. If you want to take a camera to
the amusement park for example, what are you going to do with your dSLR
while you ride the coasters? Your compact can go in your back
pocket and take the ride with you. Even when you go out to dinner,
where are you going to put your dSLR and will you be sure to remember to get it
from under the table when you leave? Also, some sporting events,
exhibitions, concerts, and other venues will allow compact cameras but
not anything even resembling a professional camera so you might get
stopped if you are carrying a dSLR. These are things to consider
when you evaluate how you will be using the camera: in what situations
and in what type of environment.
It isn't uncommon to buy a dSLR because they are "the talk" on the web, only to find that you leave it home more often than not due to its complexity or its size, and when you do use it you find that while it does have automatic modes, you need to know a little more about photography than you might with a compact camera.  Many
of the compact cameras also offer movie and sound capture as well,
something very few dSLR's can do.  While the video/audio quality of most compacts isn't sufficient for good TV quality viewing or burning to DVD's (except maybe the Canon TX1 and a very few others), they do allow you to capture those moving moments you would otherwise miss if you were carrying a dSLR.
The mighty dSLR
Next to step in the ring is the
heavyweight champion: the digital single lens reflex (dSLR). The
dSLR is a big boy. He's got one heck of a punch when he hits you
but the featherweight compact is running circles around him taking shots
while the heavyweight is still trying to find the right combo before
making his first strike. Of
course, this analogy is a bit flawed since just about any dSLR can focus
and shoot
faster shots in succession than most pocket cameras. Still, the
analogy works to some degree since for the casual shooter, it can be
easier to set up that initial shot using a compact camera. The dSLR lumbers around waiting and hunting for just the right shot, but
when he makes his move, that one shot can be a real knockout! The
compact, on the other hand, whisks around taking one "decent" shot after
another but unlike the experienced heavyweight, the compact is more
likely to take average shots that raise less ooh's and ahh's from the
crowd. OK. Enough analogies... back to
reality.  As far as size goes, the dSLR isn't one you would carry in a purse, and certainly not in a pocket.  The Nikon D60 is a good example of a "small" dSLR.
dSLR's offer some serious advantages to the serious
photographer. Really, there's nothing a compact camera can do
(other than video capture) that
a dSLR cannot as far as taking the actual photographs, yet there is much
that a dSLR can do that most compacts cannot. Hot shoe for bounce
flash, wireless/slave and studio flash, interchangeable lenses for super
telephoto shots and other "specialty" shots, tethered shooting, and
excellent high ISO performance are just a few areas where the dSLR
smashes most compact cameras. You have to remember, however, that
all of these things come at a cost. If you want to get one of
those super telephoto lenses to do some wildlife shots, you may pay more
than you paid for your dSLR camera to get a good one! And you may
soon find that you need a camera bag as big as a suitcase in order to
have all those goodies with you when you need them. A long
telephoto lens can easily be more expensive and substantially larger and
heavier than the camera it is mounted to, so many lenses have a tripod
mount where you actually mount the lens on the tripod and the less bulky
camera hangs off the back suspended by the lens. Of course not all
lenses are that large, even some good super zooms, but you get the idea.
Another thing to consider when looking at
a dSLR versus a compact camera is image quality. How important is
image quality to you? Do you plan to do large prints where small
imperfections in image quality might show in your prints? If so,
there's nothing better than a dSLR for image quality and that may be a
factor for you. Nearly any dSLR will beat a compact camera as far as overall image quality is
concerned. dSLR cameras have much larger image sensors which allow
them to capture photos with less noise and more dynamic range. A
typical dSLR can shoot in darker conditions using ISO 400 and produce
photos at higher quality (with less noise) than a typical compact
shooting the same scene. In fact, most dSLR's have less noise at
ISO 400 or even ISO 800 than a compact camera
shooting at ISO 100! That's the price you pay for using a small
camera with a small lens and a small sensor. We can see this
effect by viewing some sample images from compact cameras and dSLR's:
10 MP compact: Sony W-170 | 10 MP dSLR: Nikon D80
While there are obvious color and
metering differences between the cameras, the above is a good example of
the difference in quality you might expect when comparing photos from a
compact camera to those from a dSLR: in this case, a 10 megapixel
compact versus a 10 megapixel dSLR. The above are crops from the
original shots blown up by 200% (2x) to bring out fine low level detail.
Notice how the dSLR (right) renders much smoother, cleaner, and crisper
detail. The compact camera (left) renders the same part of the
image with more noise and less visible detail. The above is pretty
typical when comparing image quality from compact cameras versus dSLR
cameras and if the photos are printed large enough, a trained eye can
frequently spot whether the photo came from a compact camera or a dSLR.
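The sensor-size advantage described above can be made a bit more concrete with a toy photon shot-noise model. This is only a sketch under assumed numbers: pixel sizes of roughly 8.5µm for a 10 MP dSLR and 1.8µm for a 10 MP compact, and an arbitrary illustrative photon density. Real sensor noise also includes read noise and other sources not modeled here.

```python
import math

def shot_noise_snr(pixel_area_um2, iso, photons_per_um2_iso100=1000):
    """Toy model: photon shot noise gives SNR = sqrt(photons collected).
    The photon density at ISO 100 is an arbitrary illustrative figure."""
    photons = pixel_area_um2 * photons_per_um2_iso100 * (100.0 / iso)
    return math.sqrt(photons)

# ~8.5um pixels (10 MP dSLR) at ISO 400 vs ~1.8um pixels (10 MP compact) at ISO 100
dslr = shot_noise_snr(8.5 ** 2, 400)     # ~134
compact = shot_noise_snr(1.8 ** 2, 100)  # ~57
```

Even at four times the ISO, the much larger pixels still collect far more light per pixel, which matches the observation above that a typical dSLR at ISO 400 can out-perform a compact at ISO 100.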
The relevant question at this point becomes: how noticeable are the quality differences in actual printed photos?  To answer that question, you have to ask how large you
plan to print and how closely your observers tend to scrutinize the
prints. While the above shows a significant advantage in quality
to the dSLR, that difference may not be evident until you print a 13x20
photo and examine it closely. How often will you be doing that?
Will the difference still show (even if not as much) on a print with
about half that effective "blowup": say 8x10? Unfortunately this
is a gray area where there is no clear cut answer. In my
experience, I can usually tell a dSLR photo from a compact camera photo
by just holding an 8x10 from both. At sizes smaller than
8x10, it can be very difficult to discern which is better. While
the dSLR photo may not jump out at you as being much better and the
compact camera photo may not jump out as being noisy, many may see the
dSLR photo as looking very clean or silky smooth, and just looking more
like a professional photo even if you can't quite verbalize exactly why.
There is often simply a more "professional look" to dSLR photos while
compact cameras tend to produce photos that look more like snapshots.
Some people equate the difference as the dSLR photos looking like real
photographs and compact camera photos looking more like video captures.
Again though, that's really not noticeable until you start printing
large photos. Whether or not that is relevant to your own photo
shooting is a matter of personal taste.
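One way to quantify the "blowup" discussed above is the pixels per inch available at a given print size. A minimal sketch, assuming a 10 megapixel camera that produces images about 3872 pixels wide (the Nikon D80's output width):

```python
def print_ppi(pixels_wide, print_width_inches):
    """Pixels per inch available across the width of a print."""
    return pixels_wide / print_width_inches

print(round(print_ppi(3872, 10)))  # 387 ppi at 8x10: flaws are hard to see
print(round(print_ppi(3872, 20)))  # 194 ppi at 13x20: flaws start to show
```

Roughly half the pixel density at 13x20 versus 8x10 is why quality differences that are invisible on small prints can start to show on large ones.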
Summary
There are many factors to consider when buying a camera and if you're in
the market and you don't know exactly what you want (or need) and you
are considering both a compact "pocket rocket" and a dSLR, you might
consider the points listed in this article. In a nutshell, they
are:
A dSLR may be better for you if you:
- Need maximum manual control over shooting parameters.
- Often operate in a studio environment or other "controlled" environment.
- Shoot in a wide variety of conditions where you may need multiple lenses.
- Frequently shoot under harsh conditions or lighting (high contrast, etc.).
- Need the best possible image quality.
- Do a lot of indoor shooting where red-eye and bounce flash are factors.
- Often shoot in low light where higher sensitivity or better flash are required.
- Need super fast focus and/or fast shot-to-shot continuous shooting.
- Plan to make large prints.
- Don't mind lugging around and keeping track of a larger camera.
A compact/pocket camera may be better for you if you:
- Would rather have user friendly shooting selections than manual control.
- Find it inconvenient to have to carry the camera around your neck.
- Plan to take your camera to sporting events, etc. where dSLR's are prohibited.
- Normally print smaller photos (8x10 or smaller).
- Often shoot under "impromptu" conditions and not studio type environments.
- Shoot mostly landscapes or people where precision/control are not paramount.
- Think quick focus and fast multiple shots are usually not necessary.
- Might need to shoot video from time to time.
In the end, good luck with whatever
you decide. Through the years I've learned that an acceptable
snapshot is better than no shot at all. If you love dSLR's as I do
but you find that you often miss photo opportunities because you don't
want to lug around the equipment needed to operate a dSLR all day, maybe
at some point both would be best! At the end of the day,
you can only capture the moment if you have your camera with you.
Your dSLR will be next to useless if you find yourself leaving it home
often because you don't want a heavy camera pulling at your
neck all day or because the event you are attending (sporting event,
concert, exhibition, or similar venue) doesn't allow "professional"
cameras. The simple answer might be to get both if you can
afford them and carry whatever the occasion calls for. Of course,
that's not always an option for all of us nor does it even make sense if
you're not into "professional" type shooting so if you do have to decide
between a compact camera or a dSLR, hopefully this article will help you
decide what is best for you. Happy shopping and happy shooting!
Mike Chaney
|
|
|
4085
|
Technical Discussions / Articles / May 2008: Hacking for Charity
|
on: May 27, 2009, 03:03:28 PM
|
Hacking for Charity
Background
Every once in a while I
like to take a break from writing articles about how to do something
technical and write about an interesting concept relating to technology. In June 2007 I wrote an
article entitled "Say
No to Cracks" that discussed
software cracking and reasons to avoid cracked software. While
that article focused primarily on why cracked software should be
avoided, I also acknowledged the talent present in the hacking
community in general. In this followup, we take the concept of hacking one
step further and we meet a man who is finding new ways to redirect that
talent into something good!
Introducing Johnny
Johnny Long has become a
good friend over the years. I first met him a few years ago when
he and his family moved in next door. We started talking when our
families got together for backyard picnics or a swim in the pool and we
realized we had a lot in common, including a past filled with some of
the same friends and colleagues who shaped our outlook on the digital
world, and the world in general. It was clear that we were both
hackers at heart, but we chose to fulfill that passion in different
ways. While Johnny spends his time discovering and testing
vulnerabilities in systems in order to help companies secure their
networks, part of my own job was to find new ways to thwart hacking
attempts and secure my own software against the hackers/crackers. When
Johnny wasn't filling my face with needles or hijacking my Ghost (both
typical Johnny moves in Halo
3), I actually liked the guy even though, in some ways, our careers were at odds.  :-)  Little did I know that a couple of years later he'd
come up with an idea that could really make a difference in this world!
Hacking for Charity
Ever watch someone do something with
such talent that it made you wonder what they could do with that talent
if it wasn't "misdirected"? Sometimes you find someone so good at
what they do that they really do need to quit their day job!
Johnny Long has found a way to take the raw talent of the hacking
community and redirect it to a good cause! The concept of using a
hacker's talent to do good isn't new, but to recognize skill and be able
to direct that skill at something that makes you feel good can turn
computer skills into a passion. Johnny's web site,
Hackers for Charity,
does just that! You can see by the list of donors on the site that
people are starting to take the concept seriously. I've donated
some copies of my own Qimage
software to his cause and will be donating more to the cause in the
future as I have confidence in what he is doing and where my donations
will be used. As for my readers here, I thought it worthy of
mention as I believe it to be a novel concept on how to better utilize
potential resources in the tech community.
Johnny has traveled to Uganda twice on extended trips to set up computer equipment in teaching environments and to distribute swag (pens, pencils, paper, backpacks, etc.) to children there.  Being able to make a difference in the local community is fueling his passion to help even more.  His idea of "hacking for charity" is starting to get recognition as he and his organization have made headlines on CNN, CNBC, the Washington Post, The Wall Street Journal, and other media outlets.
Recognizing and rewarding
talent
As someone who can relate to the
excitement of being able to get a computer or system to do something
that it wasn't designed to do, I can understand hacking even though my
career demands that I work against it. Part of the reward of being
a hacker is that you are doing something different. Hackers don't
like fitting the mold. Most of them don't want money for their
hacking.  Everyone else gets paid money.  The average person in a technology related job puts on a suit, goes through the daily grind and commute, and comes home with some money to pay the bills.  The hacker wants more.  The hacker wants recognition that they've done something unique.
One of the reasons that hacking for
charity is such a novel idea is that it is something that could actually
work. What better recognition than to know that you've used your
unique skills to make a dent at making the world a better place!
I've heard it so many times, "Hackers are just evil" or "why would
anyone want to make a computer virus". It is not about evil or
good/bad. It's about people not wanting to be another stamp in the
mold. Hackers feel like they are enlightening the world by showing
them a different way to look at things or that things are often not what
they seem on the surface. Sure, some do bad things and some do it
for monetary gain... but so do a lot of white collar workers doing the
daily grind. There's good and bad in everything. All you
have to do is look to find it. I for one, will be contributing to
the Hackers for Charity
cause so that in the future, when someone asks me "why do those people
do that", I can respond, "maybe they'll eventually be the ones to make
the world a better place." :-)
Summary
Hackers for Charity is a
new concept where the talent and skills of hackers are being used to
make the world a better place. Check out their web site if you
want more information on this interesting twist on how to better utilize
some of the world's best technology skills.  Knowledge is the way to the future, and we
should be taking advantage of it wherever we can find it!
Mike Chaney
|
|
|
4086
|
Technical Discussions / Articles / April 2008: Full Frame Versus DX Cameras
|
on: May 27, 2009, 03:00:51 PM
|
Full Frame Versus DX
Cameras
Background
With some full frame
cameras now on the market, most notably the Canon 5D and Nikon D3, there
is quite a bit of chatter on the internet about full frame versus DX
(cropped) cameras. People keep lining up in their corners to watch
a new fight posted by yet another pro photographer touting the virtues
of full frame. About the only thing that hasn't been done is a
high dollar late night event on pay-per-view. ;-) Setting
other camera features aside, what does full frame really do for you?
Is it time to dump your "old" DX camera with its 1.6x crop and buy into
the full frame hype? Let's take a quick look at this topic.
Full frame
"Full Frame" refers to digital
cameras with sensors roughly the same size as 35mm film (36x24mm).
Most digital SLR cameras now commonly referred to as "DX" cameras use
APS-C size sensors which are smaller at about 22x15mm on a 1.6x camera.
In comparison, most consumer point-and-shoot cameras use smaller sensors
still, many coming in somewhere around 7x5mm. The following figure
will give you an idea of the relative sizes.
Size matters
So what difference does sensor size
make if the camera takes good photos? Of course, if you are happy
with your photos, that's all that matters, but having a larger sensor
does give you benefits that you may not realize you are "missing" with a
smaller sensor. First and foremost is image quality. Due to
the fact that larger sensors can hold larger pixels (when comparing
cameras with the same resolution), a larger sensor usually is capable of
greater dynamic range, less noise, and better high ISO performance.
Generally speaking, cramming more pixels into a smaller area will reduce
overall image quality so having a larger sensor can alleviate some of
the issues related to "pixel cramming".  In addition, a smaller sensor at the same resolution (say 12 megapixels) packs its pixels more densely, which demands more resolving power from the lens and often calls for the highest quality glass.  In contrast, on a 12 megapixel full frame sensor the pixels are larger and more spread out, making the lens a bit less of a factor for sharpness.
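The "pixel cramming" tradeoff can be illustrated by estimating pixel pitch from the sensor dimensions quoted above. A quick sketch, assuming square pixels that tile the whole sensor (the compact sensor dimensions are approximate):

```python
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    """Approximate pitch of square pixels tiling the sensor, in microns."""
    area_um2 = (sensor_w_mm * 1000.0) * (sensor_h_mm * 1000.0)
    return math.sqrt(area_um2 / (megapixels * 1e6))

for name, w, h in [("full frame", 36, 24), ("APS-C", 22, 15), ("compact", 7, 5)]:
    print(f"{name}: {pixel_pitch_um(w, h, 12):.1f} um")
# full frame: 8.5 um, APS-C: 5.2 um, compact: 1.7 um
```

At the same 12 megapixel resolution, each full frame pixel has roughly 2.5 times the linear size (over 6 times the area) of an APS-C pixel on a compact-size sensor's scale, which is exactly why larger sensors collect more light per pixel.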
Image quality isn't the only thing
that changes when you put a smaller DX sensor in an SLR camera.
Because other aspects of the camera remain the same, putting a smaller
DX sensor in the camera equates to simply cropping the center out of the
full frame image. As a result, you end up with tighter framing of
objects and a 35mm lens on a DX camera starts to look more like a 55mm
lens on a full frame camera. This may force you to back up from
the subject and/or change your zoom. In turn, depth of field will
also be affected and you may notice that it is more difficult to get
blurry backgrounds with a DX camera. On the plus side (for DX),
your 200mm telephoto lens will give you roughly the same framing of the
subject as a 300mm lens, albeit with different depth of field (than a
300mm lens on a full frame camera).
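The framing change described above is just the focal length multiplied by the crop factor. A minimal sketch (1.6x is typical of Canon's cropped bodies, 1.5x of Nikon's DX bodies):

```python
def equivalent_focal_length(focal_mm, crop_factor):
    """Full-frame focal length that frames the subject the same as
    focal_mm does on a cropped-sensor body."""
    return focal_mm * crop_factor

# A 35mm lens on a 1.6x crop body frames roughly like a 55-56mm lens on
# full frame, and a 200mm lens on a 1.5x body frames like a 300mm lens.
wide = equivalent_focal_length(35, 1.6)    # ~56
tele = equivalent_focal_length(200, 1.5)   # 300
```

Note that only the framing is equivalent; depth of field at the same aperture still differs between the two sensor sizes, as the article points out.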
If you are not used to shooting film
or full frame, you may never notice these differences. Those who
have been shooting with DX cameras for years won't notice the difference
in being able to get really soft, blurry backgrounds under some
situations. In addition, it is now very easy to find good quality
lenses in the 17mm range, even in a super zoom, making your ability to
get wide angle shots with your DX not as problematic as it used to be!
Light falloff
One down side to using a full frame
camera is that you may run into situations where light falloff
(sometimes incorrectly called "vignetting") is an issue at short focal
lengths. Having shot DX cameras for nearly a decade, I was
surprised at how much light falloff was present on some of Canon's best
zoom lenses at the wide angle end of the range when using the full frame
Canon 5D camera. Usually appearing as darkening in the four
corners of the frame when shooting bright or uniform subjects, this
light falloff issue with full frame cameras is shown at the very bottom
of my 20D versus 5D review.
Note that light falloff doesn't indicate something "wrong" with full
frame cameras, only that I had been spoiled by DX cameras almost never
showing this issue and I was a bit surprised at how easy it was to see
this problem in my photos when using the full frame 5D at the wide angle
end with almost any lens, even when stopping down the lens.
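Part of this falloff is unavoidable physics: natural illumination falloff follows the well-known cos^4 law, dropping with the fourth power of the cosine of the field angle. A sketch under an assumed geometry (about a 38 degree half-field angle, roughly the corner of a full frame sensor with a 28mm lens; real lenses add their own mechanical and optical vignetting on top of this):

```python
import math

def cos4_falloff(field_angle_deg):
    """Relative corner illumination from the natural cos^4 law alone."""
    return math.cos(math.radians(field_angle_deg)) ** 4

print(f"{cos4_falloff(38):.2f}")  # 0.39 -> over a stop darker than the center
```

The same lens on a DX body only uses the central part of the image circle, where the field angles are smaller, which is why DX shooters rarely see this problem.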
About image quality
I've seen some posts on other web
sites that show full frame cameras like the 5D coming out way ahead as
far as image quality. Personally, I find very little difference in
image quality when comparing the 5D with some of the latest DX cameras
like the Nikon D300.  It is a bit of an unfair comparison, with the 5D being more than two years old and rumored to be replaced soon, but I don't find the exaggerated quality differences that I've seen on some other sites when comparing the 5D to the D300.  Instead, I find the
D300 to be a good match for the 5D when it comes to image quality, at
least at lower ISO's (below ISO 800). At higher ISO's of around
800 and up, the 5D pulls ahead as expected, due to its larger sensor and
greater sensitivity. In controlled side-by-side testing of the 5D
and D300, I've found little difference between the two and in fact,
might give the sharpness edge to the D300 up to about ISO 400.
Here's a link to a comparison shot. Both shots were developed from
raw and only some exposure and a hint of fill light added to adjust for
differences in the way the two cameras metered the subject. Both
shots were taken at ISO 200.
5D versus D300
I believe some of the web sites
showing better detail from the 5D were running into issues with the lens
or even some issues with the noise filtering on the cameras where too
much filtering was used on one camera versus the other. The only
significant difference I can see with respect to image quality with full
frame sensors is the ability to get better detail and less noise at
higher ISO settings. Even evaluating noise at high ISO is becoming
difficult these days, however, due to the adaptive noise reduction being
used in the latest models.
Click here for information on that subject if you haven't read last
month's article.
Summary
Hopefully this article has provided
some information on what to look for when considering a full frame
versus DX digital SLR camera. To be honest, I do a lot of wildlife
shooting and the 1.6x crop factor equates to more "zoom" which can come
in handy when shooting subjects that are far away. I also feel
that with many new (and good) lenses available in the 17-85 and even
17-200 zoom range, being able to get good wide angle shots is no longer
a problem with DX cameras. DX lenses also tend to be a bit lighter
and cheaper due to their size, which can also be a plus. For me,
someone who has tried both and someone who didn't come from shooting
film, I feel that full frame is more hype than hero. Someone who
does a lot of studio work or who shoots differently may disagree.
Thankfully (for me) this article is more about what to look for when
considering whether or not to buy into full frame than an argument as to
which is better for you! Different people obviously have
different needs.  All I can say at this point is that, in my opinion, the existence of a few full frame cameras isn't going to push the DX models aside any time soon... if ever.
Mike Chaney
|
|
|
4087
|
Technical Discussions / Articles / March 2008: The Megapixel Race Continues
|
on: May 27, 2009, 02:56:59 PM
|
The Megapixel Race
Continues
Background
In July 2004 I wrote my
first Tech Corner article titled
The
Megapixel Race. The article discussed the steadily
increasing pixel count of both consumer and professional cameras and the
tradeoffs associated with stuffing more pixels into the same area on the
imaging sensor. At the time (in 2004), manufacturers were
increasing resolution by about one megapixel per year to keep the pot
boiling and keep consumers coming back to the store to buy the latest
models with higher resolution. Is the megapixel race still on?
What has changed in the last four years, and are people still counting
pixels when making decisions about which camera to buy? Let's take
a look at the current state of the market with regard to the ever
changing technology.
A steady race
Back in 2004 when I wrote my original
article on the subject of camera resolution, the top of the line
prosumer dSLR had about 8 megapixels and I stated in that article that
the resolution was growing by about one megapixel per year. Well,
here we are in 2008 and while the "top" is a little less easily defined
these days, the high end dSLR's are now generally in the 12 - 13
megapixel range, so it's a steady race still running at the pace of
about a megapixel per year increase in resolution.  Back in 2004, however, manufacturers were content with just cramming more pixels into the same area, creating higher resolution images whose quality consistently degraded year by year.  While the pixel count was going up, noise went up in proportion, bringing overall image quality down.
Cramming more pixels into the same area reduced the pixel size thereby
reducing sensitivity of the pixels and increasing noise. That's
where things have changed a bit. Manufacturers finally realized
that they couldn't keep stuffing more pixels into the same capture area
while letting image quality suffer, so a more balanced approach is being
used today with better hardware and better software (in the camera) to
compensate for the increasing resolution.
Traffic control for
crowded sensors
If you keep trying to see how many
(more) people you can fit into a compact car, eventually you'll reach a
point where you realize the car needs some upgrades to be able to carry
the load. That's exactly what happened with the megapixel race.
While resolution is still steadily increasing, so is the technology
behind the pixels. Today's sensors have better on-chip noise
control and better dynamic range. In addition, cameras are relying
more and more on adaptive noise reduction. This adaptive noise
reduction is basically noise reduction software that resides in the
firmware of your camera. Just a few years ago, shooting at ISO
1600 meant getting a very noisy image from your camera and then applying
post-processing noise reduction using one of the more clever noise
reduction software programs on the market. Now manufacturers are
building in this noise reduction right in the camera so that the JPEG
images you get from your camera (or create from raw using the included
raw processing software) are already filtered based on the ISO speed
used for the shot. Using more complex noise reduction in camera
can give the "illusion" of lower noise levels at higher ISO speeds while
in fact, the noise levels are still quite high but have been reduced by
in-camera noise reduction techniques.
Paying the piper
The balance of resolution and
signal-to-noise ratio is a bit like conservation of energy. Noise
reduction is a tradeoff. You can lower noise, but it is inevitable
that lowering noise will lower resolving power of fine detail as well.
Clever adaptive noise reduction algorithms can make this fact less
noticeable by using things like edge detection to drive the strength of
the filter, but the tradeoffs will still be visible if you look for
them. People seem amazed that the very latest digital cameras like
the Nikon D300 or Canon 40D have higher pixel counts but substantially
less noise than prior models yet when you really look at photos from
these cameras, the tradeoffs are clear. Sure, there is less noise,
but the noise reduction on these cameras is so heavy-handed by ISO 1600
that there is a noticeable decrease in resolving power in order to
achieve those low noise levels. Overall, the image looks "cleaner"
as people tend to notice noise grain before lost detail. So
perhaps this increasingly heavy noise reduction (more NR as you increase
ISO speed) makes a better balance, but you're still paying the piper in
the end!
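The edge-gated filtering described above can be sketched in a few lines of NumPy. This is purely a toy illustration of the idea (a box blur blended in only where the gradient is weak), not the proprietary algorithms the cameras actually use; the threshold and the choice of blur are assumptions:

```python
import numpy as np

def adaptive_denoise(img, edge_thresh=20.0):
    """Toy edge-adaptive noise reduction for a 2D grayscale image:
    blur flat areas, preserve edges. A crude gradient magnitude stands
    in for the edge detector that gates the filter strength."""
    img = img.astype(float)
    h, w = img.shape
    # 3x3 box blur built from shifted copies of an edge-padded image
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    # Gradient magnitude as the edge detector
    gy, gx = np.gradient(img)
    weight = np.clip(1.0 - np.hypot(gx, gy) / edge_thresh, 0.0, 1.0)
    # Flat regions (weight near 1) take the blur; strong edges keep original pixels
    return img * (1.0 - weight) + blurred * weight
```

Even this crude version shows the tradeoff the article describes: fine, low-contrast detail produces weak gradients, so it gets smoothed away along with the noise, while only strong edges survive intact.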
Take a look at the two shots above.
The one on the left was shot at ISO 200 with a Nikon D300. The one
on the right was from the same camera with identical focus, shot at ISO
1600. Without comparing to the image on the left, you might be
amazed at the low noise levels in the ISO 1600 shot, however, placed
next to the ISO 200 shot on the left, we can see how much detail is lost
in the ISO 1600 shot due to the increased noise reduction used by the
camera at the higher speed. Note that these are both JPEG photos
directly from the camera.
Putting it all together
While manufacturers have made great
strides in hardware, increasing dynamic range while decreasing noise,
you have to be careful when evaluating performance of newer cameras
(particularly dSLR's). We all have a tendency to open ISO 1600 or
ISO 3200 shots and sit at our monitors looking for noise grain in the
shadows. Doing this may lead you to miss an important side effect
of those smooth images: the fact that much of the detail has been
smoothed over along with the noise grain that was removed! Keep in
mind that the very latest model cameras have gotten quite heavy handed
at noise removal so it is important to compare both noise and available
detail in the photograph to what you'd get at a much lower ISO speed.
This fact should be evident when online reviewers take the same shot
with varying ISO and place them side by side in the review. Keep
in mind that I'm not saying "heavy" noise reduction is a bad thing.
Overall I think the high ISO photos from the latest dSLR's look very
good. Just keep in mind that no miracles are being worked with the
latest and greatest cameras that claim incredibly noise free images from
very high ISO speeds. The answer is in the noise reduction... and
with the right post-processing noise reduction software, you could
probably come close to the same quality with an older camera. The
bottom line is that the increase in hardware performance is certainly
there, but what sets these latest cameras out ahead of the pack is the
more complex and stronger noise reduction being used in the processing
of the data.
Summary
While the megapixel race continues at
a steady pace of about one megapixel per year or slightly more, advances
are now being made to increase hardware and firmware performance so that
cramming more pixels onto the same size sensor will not equate to noisy
photographs. Keep in mind that the latest cameras are using some
heavy handed noise reduction algorithms to achieve their much touted
high ISO performance, however, so when reviewing high ISO performance
from the latest cameras, keep an eye on more than just the amount of
noise (grain) in photos: also take a look at how much detail gets
"smudged over" by the noise reduction algorithms. I personally
find some of the noise reduction algorithms a bit too heavy
handed in that when shooting JPEG's in-camera, there is so much noise
reduction at (say) ISO 1600 that a lot of fine detail gets lost along
with the noise. For this reason, it is even more important to
shoot raw photos with the latest cameras so that you can make the
decision as to the proper balance of noise versus detail. A JPEG
from the camera that has been overly softened due to the ISO 1600 or ISO
3200 noise reduction cannot be rescued whereas you may be able to reduce
the strength of the noise reduction on the raw file in order to bring
back some of the lost detail if needed. Suffice it to say that
evaluating high ISO performance on the latest dSLR cameras is becoming a
lot trickier. You can no longer simply open an ISO 1600 shot and
look for noise grain. You must compare the shot against a lower
ISO shot of the same scene to see just how much detail was lost to noise
reduction. I hope this article will help people better evaluate
the performance of the latest cameras, particularly dSLRs, and will
enable them to better see and understand the whole picture... pun
intended! :-)
Mike Chaney
Technical Discussions / Articles / February 2008: On Spam Blockers and Blacklists
on: May 27, 2009, 02:53:46 PM
On Spam Blockers and
Blacklists
Background
As the amount of spam
(unsolicited email) in your electronic mailbox increases,
associated countermeasures like spam blockers and blacklists get more
"heavy handed" by the day. As a result, as spam increases, so does
the risk that important/legitimate electronic mail will be blocked or
deleted before you ever see it. Are you sure you are getting all
your (valid) emails? Have you ever had a problem where someone
claims to have sent you email, possibly multiple times, but you never
get anything from the sender? It's possible, I would argue
likely, that you've been bitten by the tools that you think are
protecting you such as a spam blocker or worse: your ISP (Internet
Service Provider) blocking certain emails before they ever reach your
spam blocker! How can you reduce spam and still be sure you are
getting all your "real" email? How do you reach a balance between
getting so many messages that the real ones get lost in a mountain of
junk, versus being protected to such a degree that your protective
measures accidentally delete or block messages you actually wanted to
see?
Spam
After all the "do not call" lists,
legislation, and other anti-spam initiatives that have been tried over
the last few years, you'd think the spam problem would have gotten
better and not worse. Unfortunately, the problem has grown to a
point that many find it difficult to even do business (reliably) via
electronic mail. Of course, part of the problem is that people
continue to click on spam and the links within the spam message since
there would be no market for spam if no one responded to it! And
of course, many spammers don't follow the law or, worse, exploit it:
they include an "unsubscribe" link in the spam as required by law,
but clicking on it does nothing but confirm that your email address really
exists and put you on even bigger spamming lists! Bottom
line: never respond to spam, never click on "unsubscribe" links unless
they relate to services that you know you signed up for, and never, ever
click on links within a spam message!
One of the best things you can do is
simply not respond to any spam. If you see a spam that reminds you
that you needed to go to a web site, order a product, etc., do not
click on any links in the spam message! Most spam messages
contain links that, when you click on them, not only take you to the web
page in question, but also credit the spammer with having a successful
hit at the same time which is how many spammers get paid. So if
you see something that you absolutely must check out, don't follow the
link in the spam message. Just open your browser and go to the
site manually or even better, Google the name of the company or web site
and go to the site from the Google results. This keeps the spammer
from getting his/her money because there is no reference telling the
company how you got there! Using Google has an added benefit too,
because you might not only get results for the product/company you are
looking for, but you might also see that Google brings up a lot of
reports about "rip offs", "don't use this company", or other indications
that the web site you are about to visit is actually fraudulent or
otherwise not a good place to do business.
Countermeasures
Of course, we all know what spam is
and many of us use some sort of counter measures to keep spam out of our
mail boxes. The most common form of anti-spam is a spam blocker.
Spam blockers are usually just software packages that analyze your email
as it comes into your mailbox so that spam can be detected and either
put into a "junk" box or deleted entirely. Many programs exist
that allow you to block spam, and I won't go into which ones work
better/worse since the point of this article is not how well they work,
but that they all have flaws and will occasionally
misidentify good email as spam! Let's take a look at the two major
categories of spam blocking tools:
User installed
The first category of spam blocking
tools is those installed by you, the user. These can
range from virus scanning tools that double as spam blockers to email
programs (or email program add-ons) that block spam inside your email
program. On the plus side, these tools are almost always user
configurable and allow you to set the strength of the spam filter from
low (very few emails will be improperly identified as spam) to high
(where more spams might be caught but a significant number of good
emails might be improperly flagged as spam). These tools also
usually offer the ability to either move the spam to a spam/junk folder
or just delete it so that you never see it. The biggest problem
occurs when users set their spam strength/sensitivity too high and
choose to delete mail identified as spam. In these cases, a fair
number of legitimate emails might be improperly flagged as spam and
you'll have no opportunity to see those emails or correct the problem
because the spam blocker deleted the messages in question. So
lesson one in using a spam blocking tool is to set your sensitivity so
that only the most obvious spam messages are marked as spam and also
choose to move the spams to a folder rather than delete them.
Doing this allows you to get a handle on how effective your spam filter
really is and whether or not it is marking good messages as spam.
As soon as you choose the "delete" option, you are giving your spam
blocking tool the authority to "vaporize" your email and that leaves you
with very few options. It's always best to start on the safe side
to gain experience with the tools, and then increase their spam
detection "strength" only when appropriate, i.e. when you have more
experience with the tools and their effectiveness.
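The sensitivity tradeoff above can be made concrete with a minimal score-based filter sketch. The keywords and scores below are invented purely for illustration; real spam blockers use far more sophisticated (often Bayesian) scoring, but the threshold behaves the same way.

```python
# Sketch of the sensitivity tradeoff in a score-based spam filter.
# Keywords and scores are made up for illustration only.

SPAM_SCORES = {"winner": 3, "free": 2, "click here": 2, "invoice": 1}

def spam_score(message):
    """Total score of all suspicious phrases found in the message."""
    text = message.lower()
    return sum(score for phrase, score in SPAM_SCORES.items() if phrase in text)

def classify(message, threshold):
    """Higher threshold = more conservative: fewer good mails flagged."""
    return "junk" if spam_score(message) >= threshold else "inbox"

obvious_spam = "You are a WINNER! Click here for your FREE prize"
real_mail = "Your invoice for last month is attached"

# A conservative threshold catches only the obvious spam...
print(classify(obvious_spam, 5), classify(real_mail, 5))  # junk inbox
# ...while an aggressive threshold starts flagging legitimate mail.
print(classify(real_mail, 1))                             # junk
```

Note that in both cases the message is routed to a folder rather than deleted, which is exactly the safe configuration recommended above.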
ISP installed
The second category of spam blocking
tools is those available to you through your Internet Service
Provider (ISP). Some providers offer web-based spam blocking tools
that you can access/tweak online. Often these tools are described
and supported via the web site of your ISP, and you may need your ISP's
help to configure them. If you have a "generic" email address like
Hotmail or Yahoo, you may have access to your mail box via both a web
page and your email program that resides on your computer. When
this is the case, it is often necessary to log onto the web site to view
the contents of the "junk" folder, especially in cases where you are
expecting mail but have yet to receive it. If you use one of the
free online email services and you are missing mail, the first place to
look is in the "junk" folder found on the web site since those messages
may never make it to your computer and your email program. In
addition to these user-configurable spam blocking/filtering tools, your
ISP may use measures outside your control. See "blacklisting"
below for more details.
Blacklisting
Blacklisting is a third spam
countermeasure that is so prevalent and so counterproductive that it
deserves its own separate category! Many ISPs use one of many
online blacklists of (usually) IP addresses that they believe are
operated by spammers. If email is sent to you from one of these IP
addresses (or sometimes just one that is close to it),
your Internet Service Provider may block the email before it can ever be
downloaded to your computer. In all cases, this amounts to your
ISP making the decision for you as to what is or is not spam since you
have no control over this type of blocking. This type of
unilateral decision making is by far the most destructive form of spam
blocking because you have no control over it and it often results in
legitimate emails being deleted entirely, as if they had never been
sent. Many times, a range of IP addresses are blocked for no
reason other than the fact that a lot of outgoing mail is coming from
those addresses. So legitimate emails that are sent to (for
example) customers from a particular company might be blocked due to
your ISP deciding that it has seen too many emails from a particular IP
address when in fact, the emails might be legitimate correspondence
between a company and its customers or paid subscribers! In
addition, many spammers use mail "spoofing" where semi-random IP
addresses are added to the header so you may find yourself
on one of these blacklists just because some spammer decided to forge
your IP address into the header! The more
correspondence you do over email, the more likely you'll be to fall
victim to spoofing.
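The mechanics of these lookups are simple to sketch. Most blacklists are DNS-based (DNSBLs): the receiving mail server reverses the sender's IPv4 octets, prefixes them to the blacklist's zone name, and performs a DNS query; any answer means the address is listed. The zone name below is a placeholder, and no real network query is made here.

```python
# Sketch of how a DNSBL (DNS-based blacklist) query name is typically formed.
# The zone "dnsbl.example.org" is illustrative, not a real blacklist.

def dnsbl_query_name(ip, zone="dnsbl.example.org"):
    """Build the DNS name a mail server would resolve to check a sender IP."""
    octets = ip.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255
                                   for o in octets):
        raise ValueError("not a valid IPv4 address: %r" % ip)
    # Octets are reversed, then the blacklist zone is appended.
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.dnsbl.example.org
```

Because the check keys only on the sending IP address, an entire range can be listed at once, which is exactly how legitimate senders get caught in the net described above.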
If you find that your
messages are not being delivered to others and you suspect that
you are on some online blacklist, your recipients may suggest
that you take steps to remove yourself from the online blacklists.
Don't do it! Instead, tell your recipients that you sent the email
and their ISP is blocking it due to an error on their blacklist(s) and
insist that their ISP deliver mail properly! Put the onus where it
belongs: on the people that are deleting your mail for no reason!
If you scramble to remove yourself from errant blacklists, you become
part of the problem since those utilizing the blacklists should be held
responsible for them working properly. If you happen to be on the
receiving end and you talk to someone who insists they have sent you
mail numerous times yet you never get anything from them and you know
your own spam blocking tools aren't the culprit, your ISP might be
blocking the message(s) due to using a blacklist that has errors.
The onus is on you to inform your ISP that you will not tolerate them
delivering only some of your email while deciding not to
deliver the rest without your knowledge! You pay your ISP (usually)
for service and if they are not delivering all of your email, they are
not serving you appropriately! Spam blocking and decisions about
spam are things that should be handled by the user, not unilaterally
decided by an ISP working with tools that obviously do not work
properly.
To make a long story short, if you
suspect that your ISP is blocking email to you, they may be utilizing a
blacklist that decides what to deliver and what not to deliver to you.
To know whether or not this is happening for sure, you may need to
temporarily disable any user installed or online/web based spam blocking
so that you can be sure your own tools are not the culprit. If,
after disabling all spam blockers for which you have control, you still
don't get all your email, inquire with your ISP to see whether or not
they use blacklists to block email before it ever gets to you. If
so, lobby them to stop using such (frequently errant) processes as you
don't want them deciding which emails you do and do not receive.
Doing this will force ISPs to solve their own problems (like mail
server overload) in other ways rather than pushing their own problems
onto you, the people they should be supporting.
Summary
Spam blocking and blacklisting have
become as much of a problem as, if not more than, the spam they aim to
protect you from! If you utilize spam blocking tools, be sure you
know how to use them appropriately or you'll risk losing important
emails along with the spam you are fighting. In addition, be aware
that there is another level of spam blocking that happens "behind the
scenes" of which most people are unaware. Your ISP may be
taking measures to block spam (spam blocking or blacklisting) and
sometimes those measures can block legitimate mails as well.
Unfortunately, you have no control over this latter category except to
demand that your ISP deliver all your mail and let you (or
your own installed and configured spam blocking/filtering software)
decide what is or is not spam. If we all stop clicking on links in
spam emails and we all demand that our ISPs deliver all of our email,
the growing problem of not being able to reliably communicate via email
would be over. Sounds easy, right? I guess so does world
peace... on paper. ;-) We are, after all, human.
Mike Chaney
Technical Discussions / Articles / January 2008: A New Year's "Resolution": Sharpness EQ
on: May 27, 2009, 02:51:45 PM
A New Year's
"Resolution": Sharpness EQ
Background
Once in a while when we
find or create something truly unique, the idea gets left behind as we
move on to new things. Ever bake something that was so good that
you ate it twice a week for a month and then just moved on to something
else out of sheer boredom? Ever come back to it a year later and
remember how good it really is and feel like you discovered it all over
again? This article falls into that category where we revisit an
old but very useful idea. Let's take a look at sharpness variance
in digital photos and ways to correct sharpness variances to bring out
more presence or 3D effect in photos.
The problem
The vast majority of cameras on the
market use CFAs (color filter arrays) to capture only one color at each
pixel location. The Bayer CFA above is by far the most common
sensor type. Notice that only one color (red, green, or blue) is
captured at each pixel location on the sensor. Sophisticated
algorithms must be used to "predict" the missing two colors before you
get to the final full color image that you see from your camera or raw
conversion software. To complicate matters, there are twice as
many green pixels as red or blue, in part, in order to mimic the human
eye and its greater sensitivity to green compared to red/blue.
If you take a picture of a subject
with very little saturated color like a B/W resolution chart, snow
scene, the moon, or other objects without saturated colors, it is easy
to predict the missing colors because all three primaries (red, green,
and blue) will have about the same brightness. In these cases, the
missing green and blue values at a red pixel will be about the same as
the red brightness it captured, the missing red brightness at a green pixel
will be about the same as the captured green value, and so on. Once you start
photographing subjects with more vibrant colors such as fall foliage,
colorful Halloween costumes, or the worst case scenario: a red rose, the
amount of detail captured by the camera is significantly reduced.
As an example, consider the red rose. A red rose of a particular
shade will only excite the red pixel locations on the sensor, leaving
very little (usable) information at the green and blue photosites
(pixels). For the red rose, your camera's resolution just dropped
to near 1/4 of its total resolution due to the fact that the green/blue
pixels on the sensor are contributing very little information. In
cases like this, the problem actually becomes visible in photos!
Your red rose may look a little soft or out of focus compared to the
green leaves or brown parts of the stem that are in the same focal plane,
leaving you to wonder if perhaps your camera didn't focus on the
red flower as it should have.
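The "near 1/4 resolution" point can be illustrated with a quick sketch. Assuming the common RGGB Bayer layout (exact layouts vary by camera), a uniformly red scene leaves usable signal at only the red photosites:

```python
# Sketch: why a saturated red subject carries roughly 1/4 the information.
# We sample an all-red scene through an RGGB Bayer pattern and count the
# photosites that record any signal at all.

BAYER = [["R", "G"],
         ["G", "B"]]  # repeating 2x2 RGGB tile

def mosaic_sample(scene_rgb, h, w):
    """The single value each photosite records for a uniform scene color."""
    r, g, b = scene_rgb
    value = {"R": r, "G": g, "B": b}
    return [[value[BAYER[y % 2][x % 2]] for x in range(w)] for y in range(h)]

red_scene = (255, 0, 0)  # saturated red: no green or blue component at all
samples = mosaic_sample(red_scene, 4, 4)
lit = sum(v > 0 for row in samples for v in row)
print(lit, "of", 16, "photosites recorded signal")  # 4 of 16
```

A gray scene, by contrast, would light every photosite equally, which is why neutral detail comes through so much sharper.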
If you train yourself to pick up on the
problem, it is quite noticeable! A bright blue sweater in one
particular photo may look a little out of focus compared to a gray
sweater right next to it; you may find it difficult to get a truly sharp
photo of a blue flower while the green leaves around the flower look
sharp; and so on. This sharpness discrepancy for different colors
can alter the relationship between sharpness and depth of field and can
take away some of the 3D effect or "presence" that is seen on cameras
that capture full color (all three colors at each pixel) like the Sigma
SD9, SD10, and SD14. If you keep up with the reviews or visit
online forums, you will likely hear a lot of buzz about how full color
capture cameras like the SD9, SD10, and SD14 create photos with more 3D
effect than other cameras. The reason for that is in large part
due to the fact that full color capture cameras do not suffer from
sharpness discrepancies and capture all colors with the same amount of
detail. This leads to a much greater correlation between depth of
field and focus which is what adds presence or 3D feel to photos.
The remedy
Fortunately, some years ago I found
that you can take a (preferably unsharpened) photo and apply a special
adaptive sharpening algorithm to effectively reverse the effect of color
sharpness discrepancies. The image sensor in your camera cannot
capture all colors with the same detail, making certain colors (like
saturated red and blue) look considerably softer than other colors such
as gray or even green. The fix is to apply sharpening in such a
way that it sharpens saturated reds and blues the most, greens to a
lesser extent but still more than grays, and so on. While
sharpening can't truly add information that has been lost to single
color capture sensors, the adaptive sharpening technique can
produce a more visibly pleasing result so that bright red detail doesn't
look considerably softer than gray/white, green detail doesn't look
twice as sharp as blue, and so forth.
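The weighting idea can be sketched as follows. This is only an illustration of saturation-biased sharpening strength under my own simplified assumptions, not Qimage's actual algorithm: a gray pixel gets the base unsharp-mask strength, while a fully saturated red can get up to double.

```python
# Illustrative sketch of saturation-adaptive sharpening strength.
# Not the actual Qimage algorithm: just the weighting concept.

def saturation(rgb):
    """0.0 for gray, 1.0 for a fully saturated color (HSV-style)."""
    mx, mn = max(rgb), min(rgb)
    return 0.0 if mx == 0 else (mx - mn) / mx

def sharpen_strength(rgb, base=150, equalizer=1.0):
    """More saturated pixels get proportionally stronger sharpening.

    equalizer=0 disables the bias; equalizer=1 corresponds to the
    slider pushed all the way to the right.
    """
    return base * (1.0 + equalizer * saturation(rgb))

gray = (128, 128, 128)
red = (255, 0, 0)
print(sharpen_strength(gray), sharpen_strength(red))  # 150.0 300.0
```

Sliding the equalizer toward zero collapses this back to a plain uniform unsharp mask, which matches how the slider is described above.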
I created an algorithm that
effectively reverses sharpness discrepancies and called it the
"sharpness equalizer", adding it to the repertoire of image enhancements
in my own Qimage batch
filtering tool. Simply select your USM (unsharp mask) and slide
the equalizer slider to the right to bias the sharpening algorithm to
compensate for sensor sharpness discrepancies. Using values like 2
for the radius, 150 for the strength, and the equalizer slider all the
way to the right (to try to compensate completely for sensor sharpness
discrepancies) increases the 3D feel of images and improves overall
clarity of photos. I made my algorithm available to Uwe
Steinmueller, who also created a Photoshop plugin that does the same type
of adaptive sharpening. See my
earlier article on the Outback Photo web site for details on the
plugin.
Since I have more than one dSLR
camera and I'm always comparing the latest models to my full color
capture SD14 for sharpness and 3D feel, I have recently rediscovered how
effective the sharpness equalization tool really is and I find myself
using it more often. Here is an example that shows how detail such
as red/blue can appear soft compared to B/W detail in the same focal
plane and how sharpness equalization can help resolve problems of
sharpness and depth:
[Image comparison: Original (left) vs. After sharpness EQ (right)]
Notice how the color detail
(particularly the red) in the image on the left appears softer than the
B/W detail in the upper left quadrant. This is due to the sensor
having less information to work with when capturing saturated colors.
The red detail in the image on the left almost looks like it is in front
of (or behind) the B/W detail due to the red detail being a bit out of
focus. In reality, this is a test target on a flat sheet of paper
so all of the lines in each quadrant should have the same sharpness.
Take a look at how sharpness equalization has corrected this on the
right image. The color (red, green, and blue) detail is now just
as sharp as the B/W detail in the upper left quadrant. The
sharpness equalization has now effectively restored sharpness in the
photo and along with it the proper depth of field. To see examples
of how this works with real photos, see my
earlier article from Digital Outback Photo or download a trial of my
own Qimage batch
printing/processing software and look in the help under unsharp mask to
see how you can try this process on your own photos!
Summary
If you're like me and you want to get
the most detail out of your photos but you always find something missing
when capturing bright colors, take a look at the information in this
article. You may be noticing a discrepancy in sharpness/detail
produced by your camera due to the way your camera captures color.
Using sharpness equalization can help you gain more "3D effect" or feel
from your photos and increase the overall presence of the scene.
Mike Chaney
Technical Discussions / Articles / December 2007: Border Patrol: All About Borderless Printing
on: May 27, 2009, 02:43:24 PM
Border Patrol: All
About Borderless Printing
Background
Most newer inkjet photo
printers now offer options for borderless printing and using those
options leads to a number of questions that I've seen from people
confused about certain aspects of borderless printing. Have you
tried borderless printing only to find that it crops more of your photo
than indicated on screen? Are you using borderless mode to print
multiple photos on a page but you've discovered that your photos are now
larger than you specified in your printing program? Have you tried
printing three 8x10 prints across a 24 inch roll of paper only to find
part of the left 8x10 missing and a white sliver beside the 8x10 on the
right? If so, this article is for you!
Understanding the
tradeoffs of borderless printing
Before going into the methods and madness of borderless printing, let's
discuss some of the tradeoffs involved with borderless printing.
First and foremost is the fact that with borderless printing, you are
trying to print a photo (or multiple photos) that fit exactly on the
page with no runoff or slack on any sides. For example, if you are
printing an 8x10 photo on 8x10 borderless paper, the objective would
obviously be to print that 8x10 photo so that it aligns perfectly to the
8x10 page. This unfortunately is nearly impossible due to the fact
that printer paper loading and feed mechanisms are not perfect. If
the paper loads just a fraction of an inch further to the left than
expected, you'll end up with the right side of your 8x10 cut off and a
white sliver of paper showing on the left edge of the paper. Even
a hundredth of an inch can make a visible difference here. Paper
loading and feed mechanisms have tolerances larger than that; they
simply cannot load and feed paper that accurately every time. The
paper feed mechanisms may also load paper slightly differently depending
on how many sheets are loaded in the tray. You may see a white
sliver on the left when 20 sheets are loaded, and the sliver may move
to the right when the last sheet is loaded. This
variability makes it nearly impossible to print exactly an 8x10 on 8x10
borderless paper, exactly a 4x6 print on 4x6 borderless paper, and so
on.
To compensate for the above, printers
usually offer the option (or mandatory use) of something called
expansion and overspray. To keep white slivers of paper from
showing on your borderless prints, expansion will actually expand the
print to a slightly larger size, printing part of the print off the edge
of the paper and onto an overflow (sponge or other material) off the
edge of the paper. Your 8x10 may be expanded to 8.2 x 10.2, for
example, printing two tenths of an inch of your print off the edge of
the paper. Printing beyond the edge of the paper will obviously
eliminate white slivers along the edges and will hide the fact that the
print isn't aligned perfectly on the page where it should be.
Obviously if your photo is tightly cropped, you may notice that some of
the photo is missing. Many people disable the expansion to avoid
parts of the print printing off the edge of the paper and then spend
countless hours pulling their hair out trying to get borderless prints
aligned just right to avoid alignment problems like white slivers on one
edge and a cropped image on the other. The first step in being
successful at borderless printing is realizing that trying to exactly
fill your borderless page by printing a photo that is exactly the same
size as your paper is nearly impossible. If borderless printing
and exact sizing is a must, you may have to reach some compromises.
It is also important to understand
that print quality may be slightly reduced near the edges of the paper.
You may actually get a warning to this effect when you select the
borderless option in the driver. While any reduction in quality is
usually minimal and not visible on most photos, it can be an issue when
printing graphs or line art that include precise edges.
Let's take a look at the most common borderless printing scenarios and
see if we can make things a bit easier but before we do that, let's
check out some common driver options to make sure we understand how the
print driver is handling borderless printing.
Print driver options
The vast majority of print drivers offer at least some control over the
amount of size expansion and related overspray that will be used when
printing borderless. Typically labeled "amount of extension",
"expansion" or some other related term, this control normally appears as
a slider near the check box for "borderless" in the driver.
Sliding this control to the left results in the minimal amount of
expansion/overspray and sliding it to the right results in more
expansion/overspray. Some drivers actually allow you to turn
expansion/overspray off completely when the control is dragged to the
left while other drivers require some minimal level of expansion and do
not allow you to turn size expansion and overspray off completely.
Realize that whenever expansion is on, the printer will expand your
prints and make them slightly bigger than what was selected. A 4x6
may become 4.1 x 6.1 inches, a 5x7 may become 5.15 x 7.15 inches, etc.
And of course, the more expansion that is being done, the larger the
print becomes, and the more (of your photo) gets lost off the edges of
the paper. This may not be important when printing a single photo
on a borderless page but if you are trying to squeeze four 4x5 prints
onto a borderless 8x10 sheet, be prepared to have two edges of each 4x5
print cropped off a bit as they will be slightly larger than 4x5 in size
and the outside edges will print slightly off the paper as a result.
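The arithmetic behind this is straightforward. Here is a small sketch using the example sizes above; the 0.1 inch per-edge overspray is an assumed figure, since the actual amount depends on the driver and slider setting:

```python
# Sketch of the expansion arithmetic: the driver enlarges the print by a
# fixed overspray per edge, and the overhang is cropped off the paper.
# The 0.1" per-edge value is illustrative, not any specific driver's default.

def expanded_print(width, height, overspray_per_edge):
    """Return (printed width, printed height, fraction of image lost)."""
    w = width + 2 * overspray_per_edge
    h = height + 2 * overspray_per_edge
    kept = (width * height) / (w * h)  # visible area as a fraction of print
    return w, h, 1.0 - kept

w, h, lost = expanded_print(8.0, 10.0, 0.1)  # 8x10 with 0.1" per edge
print(w, h)                                   # 8.2 10.2
print(round(lost * 100, 1), "% of the image prints off the paper")
```

Even this modest overspray costs a few percent of the image area, which is why tight crops and borderless expansion do not mix well.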
Some print drivers, particularly
drivers for large format Epson printers, give you the option of whether
you want the driver to expand prints in the typical fashion or you want
to do it yourself. In most Epson drivers, the options are labeled
"Auto Expand" and "Retain Size". Auto Expand works as above, with
the driver adding some level of expansion depending on where the
"expansion" slider is set. Retain Size takes a slightly different
approach: it expands the size of the page beyond the edges
of the paper and you have to decide how you want to handle the
expansion/overspray. With the Retain Size option, a 24 inch roll
may show as 24.23 inches wide in your printing software. The extra
.23 inches actually print off the edge of the paper: about .115 inches
on the left and .115 inches on the right. If you were to print
three 8x10 prints across the paper starting at the left edge of the
printable area (that 24.23 inches), the left .115 inches of the first
8x10 would be missing as it printed off the left edge of the paper.
As you can see, using the Retain Size
option simply allows you to address (print on) areas that are beyond the
left and right edges of the page! Your 8x10 prints will be exactly
8x10 inches and you have the option of placing them wherever you want on
the (expanded) page, including .115 inches off the left edge of the
paper up to .115 inches off the right edge of the paper. When
printing any combination of photos that add up to 24 inches such as a
24x36 print, three 8x10 prints, etc. be sure to start by centering all
prints on the page. That will leave .115 inches on both the left
and right sides of that 24.23 inch width and will give you a good start.
As pointed out above, however, you may need to adjust margins slightly
(using fractions of an inch) to adjust for "slop" in the paper loading
mechanism. Now let's take a look at some common borderless
printing scenarios.
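The Retain Size arithmetic from the example above can be sketched in a few lines: center content equal to the true paper width on the reported (expanded) width and you get exactly the overhang on each side.

```python
# Sketch of the "Retain Size" centering arithmetic: a 24" roll reported as
# 24.23" printable, with the extra split evenly off the left and right edges.

def centering_margin(reported_width, content_width):
    """Margin to place on each side so the content is centered."""
    return (reported_width - content_width) / 2.0

# Three 8x10s printed 8" wide side by side = 24" of content on a 24.23" page.
margin = centering_margin(24.23, 3 * 8.0)
print(round(margin, 3), "inches hang off each edge of the paper")  # 0.115
```

From that centered starting point, any remaining misalignment is down to paper-loading "slop" and can be trimmed out with small margin adjustments as described above.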
Printing a single photo
covering the entire page
The simplest borderless printing
scenario involves printing a single photo so that it covers the entire
borderless page. Some typical setups would be printing a 4x6 on
4x6 photo paper, an 8x10 on 8x10 paper, etc. By far the easiest
and most trouble free method of doing this is to allow at least some
expansion so that some of the photo prints off the edges of the paper in
order to hide the fact that the print might not be perfectly aligned.
When you print a 4x6, a fraction of an inch may be missing since it
printed off the edge of the paper, but you'll get nice clean prints with
no white slivers to clutter the edges. Of course, when doing this,
it is important that you don't crop your photos very tightly. If
your photo contains some type of framing that you added at the edges of
the print or you cropped so tightly that heads, shoes, or other features
are already at the edge of the photo, you'll never be happy with
overspray/expansion because it'll always crop just a little more than
what you see on screen (from whatever program you are using to print).
If you are working with tight crops
and you must print exactly a 4x6 photo on 4x6 paper without any
overspray/expansion, you are in for at least some minor headaches.
There is simply no way around the fact that you will likely need to make
some minor adjustments. First, your driver may not even offer the
option of turning off expansion completely. If it doesn't, you'll
have to use a program like
Qimage that knows how to disable the expansion outside the driver.
Once the expansion has been disabled, you'll now be getting exactly a
4x6 inch print (or whatever size you chose) and your prints will no
longer be "enlarged" but you may find that it doesn't align perfectly on
the paper, leaving a white sliver on one or more edges of the paper.
At that point, you'll have to make slight adjustments to the margins,
often using both negative and positive margins, to compensate for the
slop in your printer's paper loading and feed mechanism. A method
for this type of adjustment is outlined in the
Qimage help file
here. Just remember to never use negative margins (if they are
even allowed in the software you are using) unless you are printing
borderless because that's the only time negative margins (going beyond
the edge of the paper) make sense.
Printing multiple photos
on borderless paper
In certain situations, it is
convenient and cost effective to use borderless printing to fit more
photos onto a single page. For example, you may want to print
three 4x6 photos on a single 8x10 borderless page. The same
processes and tradeoffs are at work here (expansion versus alignment)
but people are often even more confused when printing multiple photos on
borderless paper when they discover that their 4x6 prints are not really
4x6 when printed. Instead they are either slightly larger or they
have one or more edges that appear more cropped than expected. Of
course, this is the driver's size expansion doing its dirty work!
Again, you could disable the expansion per the previous paragraph, but
you'll again be faced with trying to make near microscopic adjustments
to margins to compensate for slop in the paper loading and paper feed
mechanism. While it is relatively simple to make these
compensations, your printer is likely not always consistent in exactly
how it loads paper so your adjustments may only work with a certain type
of paper or with a certain number of sheets loaded. The exact
position of the page may differ when variables like the number of sheets
in the tray change.
Other surprises related
to print size
The expansion and overspray related
to borderless printing can cause prints to be larger than expected,
leading to complaints about getting the wrong size print or prints that
are too cropped. In this case, the print driver itself modified
the print to make it larger. Be aware that in addition to
borderless printing, there are other options in some print drivers that
can cause surprises related to print size. Options like "fit to
page" can often be used in the print driver when selecting a paper size
that exceeds the physical limitations of the printer. For example,
if you try to select a paper size of 18x25 on a printer that can only
print 17 inches wide, the driver may actually allow you to select that
18 inch width using a "fit to page" option where everything is scaled
from 18 inches wide to 17 inches wide. This causes the driver to
"lie" to your printing software, telling it that it actually is using 18
inch wide paper. When you print an 18 inch wide print, however,
the driver will scale the print down to fit it on the (true) 17 inch
wide paper and you'll end up with prints that are smaller than you
expected. Personally, I don't like print driver options that
"corrupt" data in this way by modifying it after it has been sent to the
printer, but those options are pretty standard for most print drivers,
so just be aware that no matter what software you use to print, if
the size you get from your printer disagrees with the size shown in your
printing software, it is almost always the print driver's fault for
modifying the data sent to it and producing
something other than what was specified in the print job!
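The size error from a "fit to page" rescale is easy to quantify: the driver silently scales everything by the ratio of the true printable width to the claimed width. For the 18 inch example above:

```python
claimed_width = 18.0   # paper width the driver pretends to accept
true_width = 17.0      # the printer's actual maximum printable width

scale = true_width / claimed_width   # ~0.944, applied silently by the driver

# An 18 x 25 inch print job actually comes out at:
printed_w = 18.0 * scale   # 17.0 inches
printed_h = 25.0 * scale   # about 23.6 inches, not the 25 you specified
```

Every dimension in the job shrinks by the same factor, which is why prints measure smaller than what your printing software reported.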
Summary
If you are not getting the sizes or
spacing you expect with borderless prints, consider the information in
this article and the fact that expansion/overspray may be involved.
Printing a single photo on borderless paper is often not a problem
because we often don't care about 1/16 inch being printed beyond the
edge of the paper. When precision is paramount, however, as it
would be when trying to fit three 8x10 prints across a 24 inch roll of
paper, be prepared to spend the time needed to turn size expansion off
and make minuscule manual adjustments to margins to get things just
right. It can be a painstaking process to align prints on a
borderless page so that all edges of the photo just touch the
edges of the paper. Fortunately if you are using
Qimage, you'll only have
to make these adjustments once for each configuration you are using
since Qimage will allow
you to save all print related settings including driver selections in a
printer setup that can be loaded at any time. Since some variables
involved with this fine alignment may not be available in the driver
(such as the ability to disable overspray/expansion and the ability to
use negative margins), just saving driver settings inside the driver (if
your driver allows that) may not be enough.
This article should not only give you
some examples that will work properly for borderless printing, but also
enough background on the process to deal with some of the common
pitfalls and headaches that can come with it.
Borderless printing is a powerful and often paper-saving feature that,
when combined with the right knowledge, can prove to be rewarding in the
end.
Mike Chaney
|
|
|
4091
|
Technical Discussions / Articles / November 2007: Using Matte, Semi-gloss and Glossy Paper
|
on: May 27, 2009, 02:37:30 PM
|
Using Matte, Semi-gloss and Glossy Paper
Background
There are such a wide
variety of papers available for your inkjet printer that selecting a
brand and type of paper can be mind-boggling. Is brand XYZ paper
compatible with your printer and if so, what are the benefits of matte,
semi-gloss, and glossy paper types? Let's take a look at some of
the pros and cons of using matte, semi-gloss, and glossy paper for your
photos.
Manufacturer Versus Third
Party Papers
If you use paper made by the same manufacturer as your printer, try to
check the paper type selections in your printer driver to be sure the
paper is specifically listed. If it is, life is made simpler due
to the fact that you know the driver already has a selection compatible
with the paper you are using. When it comes to third party papers,
things can get a little tricky. You often end up going to the
paper manufacturer's web site to see if your printer is listed as
"compatible" with specific papers. Even if your printer is listed
as compatible with a particular paper, however, be aware that you may
need to select specific settings in the print driver that are not
immediately obvious like selecting a paper type that doesn't match the
paper you are using, adjusting color settings per the paper
manufacturer, or even using specific ICC profiles that can be downloaded
from the paper manufacturer. Also be warned that just because a
printer is listed as "compatible" with a particular paper doesn't mean
that it really works well with that paper! To be
sure, try Googling the type of paper and your model printer to see if
others are having success with the combination. I've seen some
claims of compatibility that I'd really have to question: in some
cases I'd call the paper incompatible because it exhibits
significant bronzing, highly visible dot patterns, or other artifacts
that I find unacceptable. Suffice it to say that unless you can
find others on the web who recommend the combination, stick with paper
made by the printer manufacturer to be safe. There are plenty of
excellent third party papers out there by various manufacturers, some of
which I hold in higher regard than even the manufacturer's own paper, but
you have to do some research before you can determine if the paper is
truly compatible and does not have other issues like longevity problems
with certain inks.
Matte Paper
Matte paper is excellent for displaying photos such as large panoramas
that must be displayed "naked" (not behind plastic/glass) in an
environment where light reflections can be an issue. Since you
don't get any glare at all from matte papers, matte paper is a good
choice for displaying a 4 foot panorama in a camera store under mixed
lighting especially where the prints are displayed high on a wall and
reflections from overhead lights can be a real issue. Matte papers
are generally not as durable as semi-gloss (sometimes called luster)
paper or glossy paper as handling of matte prints can sometimes cause
abrasion marks similar to running your fingers across a suede or
microfiber material. As a result, matte paper is not generally
suited for prints that are to be handled in their naked state.
One real issue with matte papers is
that they have less dynamic range (contrast) and a smaller gamut than
semi-gloss or glossy papers. Some like to say that they have less
"apparent" range because that range is dependent on how the light
reflects or scatters off the surface of the paper, but the line
between "apparent" and "actual" is very fine when it's the light
reaching your eyes that matters. Regardless of the
semantics, matte papers will generally have duller colors and less
contrast than semi-gloss or glossy papers. This fact even bears
out when profiling different paper types as the profiling
equipment/tools will find a smaller color gamut and less dynamic range
for matte papers and will therefore have to make more compromises when
creating the profile. Here's an example of the color gamut of a
matte paper and glossy paper profiled under the same conditions, with
the same profiling software, for the same printer (Epson 2200):
The wire frame shows the color gamut of the glossy paper and the solid
surface shows the color gamut of the matte paper. As you can see,
the glossy paper has a significantly larger color gamut, meaning that
the same print will appear more vibrant on glossy paper compared to the
matte paper. Even though the difference in gamut size can be
smaller (or larger) than that depicted above, generally you'll get more
vibrant colors from a glossy print than you will with matte prints.
Mounting matte prints behind glass or plastic can compensate for this to
some degree, but due to how the ink droplets interact with the paper
itself, matte prints will always have a smaller gamut and less contrast
than glossy prints.
Next is the issue of resolution.
Again, speaking in generalizations (since there are a wide variety of
papers that one could compare), glossy papers produce prints with the
highest level of "micro detail": that is, detail that can be seen under
very close examination of the prints. This is due to the fact that
matte papers tend to "soak" up more ink than glossy papers, causing each
ink droplet to be a little more spread out and a little less defined on
matte paper. The bottom line for matte paper is that it serves an
important role but due to color vibrancy and resolution limitations,
should be used appropriately and should probably be limited to uses
where light reflections and glare are a major concern. Matte
papers are also very good when you don't necessarily want that "wet"
look but would rather have a softer feel to your photos. They can
also be more cost effective when displaying large prints that will not
be viewed up close as distant viewing doesn't require fine
resolution/detail.
Glossy Paper
Glossy papers generally offer the widest color range and best
resolution, but they suffer from glare which can be a problem under
certain lighting conditions. As pointed out above, glossy papers are
excellent for photos that will be handled in their "naked" state.
They may show fingerprints, but they are usually quite durable, to the
point where you can easily wipe off smudges or fingerprints without
harming the prints. Profiling glossy papers is also often easier
as glossy papers offer a "no compromises" quality that truly brings out
the best in color and resolution that your printer can offer. They
are often not the best choice, however, for scrapbooks or glass mounting
as they can sometimes stick to the surface that is mounted against the
printed side of the paper! For mounting behind glass or plastic
sleeves, semi-gloss may be the best compromise. Also be aware that
if you do decide to go with third party papers, glossy papers are the
most particular about compatibility with certain printers. That
is, it is easier to find third party glossy papers that don't work well
with your particular printer or have gas/light fade problems with
certain inks.
Semi-gloss Paper
Semi-gloss or "luster" papers offer a good compromise between glare,
color range, and durability. With a color range close to that of
glossy paper, you can be sure you are getting the full power of your
printer while at the same time reducing glare and smudges.
Semi-gloss papers may not completely eliminate glare but most of them
reduce glare to a point where it is not an issue except under the most
extreme lighting conditions and viewing angles. Where glossy used
to be my favorite paper type for getting the most color vibrancy and
detail from any printer, some of the latest semi-gloss offerings are
quickly changing my mind or at least making it a toss-up between glossy
and semi-gloss paper when matte paper is not specifically called for.
Other Paper Types
Of course, we can also choose from canvas, textured, and other "fine
art" type papers like "photo rag" papers. These are normally
outside the range of what a "typical" user would normally encounter, but
suffice it to say that most of the canvas and fine art papers fall
(loosely) into the category of "matte paper on steroids" except for the
few glossy fine art papers. Canvas and photo rag paper follow the
general characteristics of matte papers with some caveats. If you
are interested, Google is your friend. A little research goes a
long way when determining whether a particular paper is well suited for
your model printer. Keep in mind, however, that most photo rag
papers soak up even more ink than your typical matte paper and that may
force you to increase ink intensity in your print driver to get decent
contrast and good blacks. Of course, that will cause a
corresponding increase in ink consumption. Personally, I'm not a
big fan of most photo rag papers for this reason.
Summary
I often get asked about when it is
best to use certain paper types or get questions such as "why use glossy
paper at all if it causes glare". I also get asked why it often
seems like more work is required when creating ICC profiles for matte
papers compared to glossy papers. Hopefully this article has
answered a few of those questions and will at least give you a start if
you are wondering about the pros and cons of matte, semi-gloss, and
glossy papers.
Mike Chaney
|
|
|
4092
|
Technical Discussions / Articles / October 2007: Posting Photos on the Web
|
on: May 27, 2009, 02:33:38 PM
|
Posting Photos on the
Web
Background
So you have a new dSLR
camera and you've been taking some great photos. You want to share
them with others and you've found an online photo hosting service where
you've uploaded some photos but you notice that after they have been
uploaded, they look dull or washed out when they are viewed on the photo
hosting web site. Where did you go wrong? They looked great
until you uploaded them to the web! In this article, I discuss a
common mistake that can cause color problems when uploading photos to
photo sharing web sites.
Your Camera's Color Space
If you are not familiar with color spaces or need a refresher as to why
your camera may offer more than one color space, you may want to check
out
this article before going further. If your camera is set (via
the menu options on the camera) to sRGB color space or the camera
doesn't offer any color space selection, you should have no problem just
uploading the original files to photo sharing web sites since sRGB is a
reasonable color space for web viewing. Web browsers generally are
not color management aware, which means they can only display the raw
image "as is" on screen. Since sRGB is a reasonable match for most
monitors, images coded in sRGB color space should look fine.
You've read about the virtues of
larger color spaces like Adobe RGB, however, and you've set your camera
to Adobe RGB color space via the camera menu option. When posting
images on a photo sharing web site, here's where the trouble starts!
If you upload an image that has been captured in Adobe RGB color space
(or converted to Adobe RGB via your raw conversion tool of choice), the
image on the photo sharing web site will be in Adobe RGB color space on
the web site. When someone goes to that site and opens the image
with their web browser, they'll be looking at an Adobe RGB image on a
screen that is best suited for an sRGB image. This is due to the
fact that the web browser ignores the color space tag in the image since
it is not color management aware: it simply "dumps" the image onto the
screen. When this happens, colors can look dull, washed out, and
some colors can appear shifted or just wrong.
[Comparison photos: Adobe RGB version (left) | sRGB version (right)]
Take a look at the photos above.
The one on the left is what the photo would look like if you had shot
the flower in Adobe RGB and simply uploaded that file to a web page or
photo sharing site. The photo on the right is the same photograph
converted to sRGB color space prior to uploading the image. As you
can see, the colors look quite dull and lifeless in the Adobe RGB
version. Again, this is due simply to a mismatch in how your
monitor handles color and how the color was saved in the image. It
is not an indication that there is something wrong with Adobe RGB color
space! The fact that your monitor is a closer match to sRGB is
what makes the sRGB version look more correct.
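The dulling effect can be shown numerically. Below is a minimal sketch that converts one 8-bit Adobe RGB pixel to sRGB the way a color management aware application would; the matrices are the standard published D65 colorimetric definitions of the two spaces. A non-color-managed browser skips this step and simply reuses the raw numbers:

```python
# Standard D65 conversion matrices for the two color spaces.
ADOBE_TO_XYZ = [
    [0.5767309, 0.1855540, 0.1881852],
    [0.2973769, 0.6273491, 0.0752741],
    [0.0270343, 0.0706872, 0.9911085],
]
XYZ_TO_SRGB = [
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
]

def _mat(m, v):
    """3x3 matrix times 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def adobe_to_srgb(rgb8):
    """Convert one 8-bit Adobe RGB pixel to 8-bit sRGB, clipping colors
    that fall outside the smaller sRGB gamut."""
    lin = [(v / 255.0) ** 2.2 for v in rgb8]      # Adobe RGB ~2.2 gamma
    srgb_lin = _mat(XYZ_TO_SRGB, _mat(ADOBE_TO_XYZ, lin))
    out = []
    for c in srgb_lin:
        c = min(max(c, 0.0), 1.0)                 # clip out-of-gamut values
        # sRGB's piecewise encoding gamma.
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        out.append(round(c * 255))
    return out

# A fairly saturated green stored as Adobe RGB numbers:
adobe_green = [60, 160, 60]
correct = adobe_to_srgb(adobe_green)   # roughly [0, 161, 46]
# A non-color-managed browser displays the raw [60, 160, 60] instead,
# which on an sRGB screen is a visibly duller, grayer green.
```

The properly converted sRGB values are more saturated than the raw Adobe RGB numbers, which is exactly why the "dumped" version looks washed out.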
A Time and a Place for
Adobe RGB
Adobe RGB has a larger color gamut (range of colors) and is therefore
well suited for reproducing photographs in professional photographic
tools, particularly when printing since printers can actually reproduce
a wider range of colors than your monitor. If you were placing
photos on a web site for professional photographers to download and edit
or print on their computer rather than just displaying the images on the
web, it would be appropriate to use Adobe RGB. So Adobe RGB is
appropriate when you want people to be able to download and reproduce
your photos offline. This ensures that you get the larger color
gamut of Adobe RGB and most professionals who intend to download and use
the images on their computer will realize that the photos may not look
"up to par" when just viewing them in a web browser. For most
applications, however, you'll want your images to be in sRGB color space
so that people can just click on the photos in their web browser and get
reasonable color rendition without having to download the files and pull
them up in a photo editor or other color management aware application.
Converting to sRGB Prior
to Uploading
While you've been happy shooting in Adobe RGB color space and you've had
no problem editing your photos in your professional photo editing
application or printing them from
Qimage as those
applications are color management aware, it would seem your use of Adobe
RGB color space is now causing problems for you when you want others to
view your photos on the web. Do you need to switch back to
shooting in sRGB color space to avoid this hassle? The answer is
emphatically NO! You can keep on shooting in Adobe RGB to get the
extended color capture range (that your printer can use) and simply
convert those Adobe RGB images to sRGB prior to uploading them to the
web. Don't worry, this operation is simpler than it sounds!
You could always do it the hard way by opening each photo one at a time
in your favorite photo editor, converting to sRGB, and resaving, but if
you have a batch processing application like
Qimage, you can make
sRGB copies of all your Adobe RGB photos in one batch processing step.
In Qimage, here are the
steps to create sRGB copies of a batch of Adobe RGB photos:
1. Add Adobe RGB images to the queue by selecting/dragging thumbnails.
2. Right click in the queue or on the preview page and select "Convert Images".
3. Under "Save Options", select the file type for the new files (JPG, TIF, etc.)
4. Check "Perform a profile to profile (ICC) conversion".
5. Delete any text in the "From" box: the word <input> should appear in the box.
6. Click the "...." button next to the "To" box and then click "Utility Profiles".
7. Select "sRGB".
8. Click "OK" and sRGB copies of all images in the queue will be created.
If you know you want downsampled
JPEGs for the web, here's an even easier way:
1. Add Adobe RGB images to the queue by selecting/dragging thumbnails.
2. Right click on queue/preview page and select "Create Email/Web Copies".
3. Make sure "Convert to sRGB color space" is checked.
4. Select resolution and JPEG quality and click "Go".
5. Unless you specify an output folder, sRGB copies will be in a {Q}e-mail subfolder.
Summary
Be aware that web posted photos can
look dull or display with inaccurate color if you are shooting in (or
converting your photos to) Adobe RGB color space and then uploading the
Adobe RGB images directly to the web. To solve this problem,
simply convert images to sRGB color space prior to uploading them to
your web page or photo sharing site. My own
Qimage software offers
batch conversion to create sRGB copies from multiple Adobe RGB images in
one shot (see above), making shooting in Adobe RGB and uploading to the
web less time consuming. By shooting in Adobe RGB, you can
reproduce a wider range of colors for printing photographs and still
convert to sRGB when needed for web display.
Mike Chaney
|
|
|
4093
|
Technical Discussions / Articles / September 2007: Why Digital Cameras Have Mechanical Shutters
|
on: May 27, 2009, 02:30:23 PM
|
Why Digital Cameras
Have Mechanical Shutters
Background
Ever wondered why digital
cameras, particularly high-end digital SLRs, have mechanical shutters?
The sensor is electronic, so why can't it be told to simply sample the
light for the length of time specified by the shutter speed? Why
can't the sensor just start accumulating light (what is sometimes
referred to as a "charge"), wait a specified length of time, and then
stop accumulating light at the end of the exposure time? Let's
take a quick look at the reason mechanical shutters are used in digital
cameras.
The Shutter Itself
Digital cameras use several different types of mechanical shutters, but
all of them serve the same purpose. They block light from reaching
the sensor when closed and move out of the way to let light accumulate
on the sensor while open. Of course, the first thing that comes to
mind is that the sensor, being an electronic device, should be able to
simply turn on/off electronically. Why is the shutter even needed?
Well, in fact, many cameras do use an electronic shutter that simply
turns on/off the "light reading" capability of the sensor when needed.
Many pocket point-and-shoot cameras use this technique. Pocket
cameras that use the rear LCD to preview the picture are sometimes set
up this way and hence have no mechanical shutter at all.
Realizing that some cameras have all-electronic shutters while others
have mechanical shutters, it's obvious that there are pros and cons to
both designs.
Sensor Types
Interline Transfer
Cameras without mechanical shutters, typically smaller point-and-shoot
models, use an interline transfer sensor. An
interline transfer sensor dedicates a portion of each pixel to store the
charge for that pixel. The added electronics necessary to store the
charge for each pixel reduce the fill factor of the pixel,
in turn reducing its ability to capture light since a portion of each
pixel is not light sensitive. Microlenses can be used to
compensate but they are not 100% efficient and they can add expense to
the design. Interline transfer sensors typically have higher
noise levels and lower sensitivity than the full frame sensors used in
high end digital SLRs. One obvious benefit is that this design
eliminates the need for a potentially bulky mechanical shutter and can
turn a purse size camera into a shirt pocket camera.
Full Frame
Digital cameras that use a mechanical shutter typically use a type of
sensor called a full frame sensor. Unlike the interline transfer
sensor (above), the full frame sensor has no circuitry on the pixel to
store the charge that builds up as light contacts the array.
Cameras that use a mechanical shutter typically bleed off any residual
electrical charge while the shutter is closed, open the shutter for the
length of the exposure, and then close it again. Once the mechanical shutter is closed,
circuitry is then used to shift the charge from each pixel into a
storage area. Since the pixels on the sensor remain "live" during
readout, if the shutter remained open, light would continue to alter the
charge accumulated by each pixel during the shifting operation, which
could result in blur or ghosting.
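The smearing problem can be illustrated with a toy model of row-by-row readout (purely illustrative numbers, not real sensor physics):

```python
def read_out(sensor_rows, light_per_row_shift, shutter_open):
    """Shift charge off a full frame sensor one row at a time. If the
    shutter is still open, every row not yet read keeps accumulating
    light during the readout, smearing the image; with the shutter
    closed, the stored exposure is read out unchanged."""
    rows = list(sensor_rows)
    out = []
    while rows:
        out.append(rows.pop(0))   # shift the top row into storage
        if shutter_open:
            # Remaining "live" rows keep collecting light.
            rows = [r + light_per_row_shift for r in rows]
    return out

exposure = [100, 100, 100, 100]   # uniform exposure at end of shutter time
clean = read_out(exposure, 5, shutter_open=False)   # [100, 100, 100, 100]
smeared = read_out(exposure, 5, shutter_open=True)  # [100, 105, 110, 115]
```

With the shutter closed during readout, the image matches the exposure exactly; with it open, each successive row picks up extra charge before it can be read, which is the blur/ghosting described above.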
Mechanical shutters: the
bottom line
In layman's terms, a mechanical shutter is used to control how long the
pixels on an image sensor collect light. A simple mechanical
shutter can be used to turn the entire sensor array on/off during the
exposure. This eliminates the need for added electronics at each
pixel location that would be used to turn on/off the pixel and store the
charge (accumulated light). By using a mechanical shutter, a
simpler, less expensive, and more efficient sensor can be used: one that
has a higher fill factor (uses more of each pixel to actually capture
light). Of course, nothing is ever cut and dried. Some
cameras use both a mechanical and an electronic shutter! In these
cases, the electronic shutter is used to supplement the mechanical
shutter by providing features like a faster flash sync speed where
mechanical shutters are just not fast/accurate enough. Most
digital SLR cameras that use a mechanical shutter, however, use the
mechanical shutter to control the amount of charge accumulated on the
sensor as this simple mechanical device can be used to simplify the
circuitry on the sensor itself thereby generally improving image quality
and reducing noise.
Summary
This article is designed to answer
the question of why a digital camera, admittedly a "solid state" device
that shouldn't logically need any moving parts other than a focus
mechanism, would need a mechanical shutter. The answer, on the
surface, turns out to be relatively simple and I hope I've answered the
question so that most people can grasp the concept.
Mike Chaney
|
|
|
4094
|
Technical Discussions / Articles / August 2007: The Megapixel Masquerade
|
on: May 27, 2009, 02:27:49 PM
|
The Megapixel
Masquerade
Background
Imagine a world where a
camera can be dubbed "14 megapixel" when it has 4.6 million pixels on
its imaging sensor while at the same time, another camera can be
dubbed "10 megapixel" when it has no pixels at all on its
imaging sensor. Sound like a strange world? Maybe, but it's
the world we live in today! Evolving technologies are making it
more difficult to define exactly what is meant by the term "megapixel"
and are blurring camera specifications to the point that many people no
longer know how to compare cameras by the specs alone. In this
article, I will try to explain how manufacturers come up with their
marketing regarding the term "megapixels".
Pixel: a definition
Wikipedia defines a
picture element, or pixel, as "the smallest complete sample of an
image". Others use similar terminology such as "the smallest
discrete component of an image". The key here is that we are
talking about the smallest element in an image: that is,
the final picture or photograph. Obviously, a digital photograph
is made up of millions of tiny points of light, each of which can have
its own unique color and brightness. When these points of light
are displayed next to one another and viewed from a distance, the
individual points of light fade together and we see what appears to be a
smooth, continuous image. There are various ways to represent
color and brightness for each point of light or pixel in the image, but
the most common is to assign each pixel its own set of red, green, and
blue brightness values since you can reproduce a particular color by
combining red, green, and blue intensities. A pixel then, must
have all three (red, green, and blue) components to be a complete sample
of the final image.
Sensor photosites and
pixels
Ever since images from digital
cameras broke the one million pixel boundary more than a decade ago, the
term "megapixel" has been used to describe resolution. Using this
term, buyers could get an idea about how large they could print, how
much leeway they would have to crop images, and so on. While a "10
megapixel" claim is accurate with respect to how many pixels are in the
final (developed) image, somewhere along the way, the megapixel moniker
has gotten confused with "camera resolution". A typical camera
claimed to be a 10 megapixel digital camera may produce 10 megapixel
images, but by definition, the camera itself (the sensor) does not
contain 10 million pixels. Far from it in fact! This "10
megapixel digital camera" actually contains no pixels whatsoever on its
sensor. Instead, the sensor is a conglomerate of 5 million green
photosites, 2.5 million red photosites, and 2.5 million blue photosites.
Sophisticated software takes information from these 10 million
individual samples of red, green, OR blue at each location
in order to predict the missing two color channels at each pixel in the
final image. Since a pixel is defined as a complete picture
element, a typical digital camera cannot be defined as a "10 megapixel
camera" even if it produces a 10 megapixel final image because two
thirds (67%) of that 10 megapixel final image is "predicted" rather than
actual data. For the camera itself to be called 10 megapixels, it
must have 10 million pixels on the sensor, each of which is able to
represent complete information without borrowing information from
neighbors.
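The photosite arithmetic above is easy to verify with a small sketch of the common RGBG Bayer layout (the exact filter arrangement varies by manufacturer; this is the textbook pattern):

```python
def bayer_pattern(width, height):
    """Return the color filter letter ('R', 'G', or 'B') at each photosite
    of an RGBG Bayer mosaic: rows alternate G R G R ... / B G B G ..."""
    def color(x, y):
        if y % 2 == 0:
            return 'G' if x % 2 == 0 else 'R'
        return 'B' if x % 2 == 0 else 'G'
    return [[color(x, y) for x in range(width)] for y in range(height)]

w, h = 1000, 1000                  # a "1 megapixel" mosaic, for round numbers
counts = {'R': 0, 'G': 0, 'B': 0}
for row in bayer_pattern(w, h):
    for c in row:
        counts[c] += 1
# Half the photosites are green, a quarter red, a quarter blue.

captured = w * h                   # one color sample per photosite
needed = w * h * 3                 # full R, G, and B at every final pixel
predicted_fraction = 1 - captured / needed   # 2/3 must be interpolated
```

Scaled up to a "10 megapixel" sensor, those proportions give exactly the 5 million green, 2.5 million red, and 2.5 million blue photosites described above, with two thirds of the final image predicted rather than measured.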
Enter Full Color Capture
For about a decade, none of this
pixel definition nit-picking mattered because all cameras were roughly
the same. They all captured only one of the three red, green, or
blue colors at each location on the sensor and they all predicted the
missing two colors by looking at neighboring locations on the sensor and
predicting. The fact that your 10 million pixel image didn't come
from a 10 million pixel camera didn't matter because everyone was
compared on a level playing field. When Sigma introduced the first
consumer full capture camera (the SD9) in 2002, they were faced with a
dilemma. Should they call it a 3.5 megapixel camera because it
delivers 3.5 million pixel final images, or should they call it 10
megapixels since it captures all three red, green, and blue color
primaries at each location on the sensor? Technically (by the
definition of a pixel), they should label it as a 3.5 megapixel camera
but its competition at the time were cameras dubbed as 6 megapixels even
though they were not really 6 megapixel cameras. Now that
technology was changing, the "fuzzy" definition of megapixel that had
worked for years suddenly broke down. People started picking sides
and arguing apples versus oranges.
Fast forward to 2007 and the same
problem exists today. Sigma's updated SD14 produces a 4.6
megapixel final image from 4.6 million sensor pixels. Once again,
Sigma was faced with how to label their product since the competition
was calling their cameras 8 and 10 megapixel yet those cameras recorded
no true pixels at all and the final 8 or 10 megapixel image had to be
"derived" using a lot of educated guessing (read complex predictive
analysis). Had Sigma called their SD14 a 4.6 megapixel camera,
most consumers wouldn't realize that since the camera captures full
color, its final images are comparable to images from typical (non full
color) 10 megapixel cameras. They chose instead to take the "high
road" and label it a 14 megapixel camera figuring that if the rest of
the industry can claim 10 megapixels when only one third of each pixel
is real data, they can claim 14 megapixels when they are capturing all
three primary colors (4.6 x 3). In reality, Sigma marketing was
fighting misleading terminology with more misleading terminology.
They likely felt they needed to because it was easier than reeducating
the masses by writing an article like this and then hoping everyone
reads it. The phrase "damned if you do, damned if you don't" comes
to mind here.
Does it Matter?
It's interesting that some (both
online and hard copy) publications can claim that calling a 4.6
megapixel full capture camera 14 megapixels is hype when no one
complains that a camera advertised as 10 megapixels can't deliver 10
megapixels of real image information. What's the real hype here:
the fact that the SD14 is really 4.6 megapixels and not 14, or the fact
that a typical camera labeled 10 megapixel really only captures one
third of the information at each pixel? The truth here is that
sometimes you have to read the fine print. When comparing single
color capture cameras with full color capture cameras, just keep in mind
that megapixel ratings really cannot be compared directly. Both
technologies work and one is not necessarily better than the other for
all things, but when comparing megapixel numbers on paper, it's
beneficial to note that the term "megapixel" is used rather loosely in
this industry by both camps: the typical single color
capture camp and the full color capture camp, i.e. Foveon/Sigma.
Due to the filtering and reconstruction involved in creating an image
from a typical single color capture camera, it can resolve less detail
per final-image-pixel than a full color capture camera like a Sigma
SD14. How much will depend on the image, but a decent rule of
thumb is that full color capture cameras like the SD14 compare nicely to
cameras with about twice as many pixels in the final image. That
is, the 4.6 megapixel SD14 can resolve detail comparable to a typical
(single color capture) camera rated at about 9.2 megapixels. I
admit it's a bit silly to try to explain "fuzzy" logic with even more
"fuzzy" logic but sometimes it's necessary unless you expect all your
readers to have engineering or computer degrees. :-) If you
want to read (and see) more about how complicated it can get comparing
single color capture to full color capture, read my article on the
SD14 versus Canon 5D
where I take a look at some of the intricacies involved in comparing
typical single color capture cameras to full color capture cameras.
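The sample counting behind the two marketing figures can be summarized in a few lines. This is an illustrative sketch using the article's own numbers; `color_samples` is a hypothetical helper, not any standard industry formula.

```python
# Rough comparison of how many color samples each sensor type actually
# measures: a Bayer-style sensor records one of R, G, or B per photosite,
# while a full color capture sensor like the SD14 records all three.

def color_samples(megapixels, colors_per_site):
    """Total measured color samples, in millions."""
    return megapixels * colors_per_site

bayer_10mp = color_samples(10.0, 1)  # "10 MP" camera: 10 million samples
sd14 = color_samples(4.6, 3)         # 4.6 MP x 3 colors: Sigma's "14 MP" claim

print(bayer_10mp)
print(sd14)
```

Neither figure tells you final image quality, of course; it only shows why the two megapixel numbers are counting different things.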
The Eyeball Argument
Some reviewers screaming "hype" on
the 14 megapixel designation of the Sigma SD14 argue that normal single
color capture cameras can actually approach their rated resolution even
when only one color per pixel is captured by the sensor. I've seen
claims that cameras rated at 10 megapixels can approach 10 megapixels of
true resolution, especially when capturing black and white detail.
While the algorithms designed to create a full color image from
one-color-per-pixel sensors are actually pretty good at what they do,
especially on black and white detail, the edge blurring needed to
make single color capture work properly holds them back from their
upper limit potential. Single color capture really starts to fall
short (of rated resolution) when capturing highly detailed colorful
subjects where the red, green, or blue locations on the sensor start to
contribute less information than they would in a B/W scene such as a
resolution chart. I've also heard the argument that single
color capture cameras, particularly those with the Bayer RGBG design,
try to replicate how the human eye works, giving more resolution to
green and less to blue and red, so that design is actually better as a
result. Such arguments are absurd, however, when you realize that
replicating the deficiencies of the human eye is not a
benefit but rather a necessity for single color capture! The goal
of any imaging device should be to produce the highest quality
photographs possible and reproducing the most accurate information for
each pixel is how we accomplish that task. This is how,
resolution-wise, full color capture cameras like the SD14 can
compare nicely to single color capture cameras with much higher final
image resolution. All this just goes to show that single and full
color capture are not comparable on paper no matter what arguments are
used to try to rationalize the comparison.
Summary
Don't be another victim in the
megapixel wars. Arm yourself with a little knowledge and you won't
have to take the manufacturer's word for it when trying to compare
(especially differing) technologies. There's much more to buying a
camera than just megapixels, of course, but if you like to look at
specs, maybe this article will help a bit with understanding some of the
claims made by manufacturers today with regard to megapixels and
resolution.
Mike Chaney
|
|
|
4095
|
Technical Discussions / Articles / July 2007: Brightness, Contrast, Saturation, and Sharpness
|
on: May 27, 2009, 02:25:09 PM
|
Brightness, Contrast, Saturation,
and Sharpness
Background
At first glance, it might
seem that doing an article on the four most common image controls would
be a waste of time. After all, brightness, contrast, saturation, and sharpness
are often thought to be the simplest controls as they've been around as
long as the color TV. People often overlook the fact that all
four are related, however, and changing any one of them can change
the other three. Do you know how they are related and how you are
changing the balance of brightness, contrast, saturation, and sharpness by only
changing one of the four parameters? Let's take a look.
Brightness
Brightness is generally thought to be
the simplest in concept. Just make the image brighter or darker by
a specified amount, right? First we must distinguish between true
brightness and something else called "gamma". Increasing gamma by
moving a mid-tone slider on a histogram is not the same as increasing
brightness. Increasing gamma/mid-tones can make an image look
brighter, but it is non-linear in that it only increases brightness of
the shadows and mid-tones in an image without affecting the highlights.
Traditional brightness on the other hand, simply brightens the entire
image from the shadows to the highlights equally. Let's see what
happens when we add some brightness to an image. The following
test image is designed to bring out some of the effects we will refer to
in this article.
Fig 1: Increase Brightness
In figure 1 above, we have increased
brightness on the right half of both the B/W and color images. In
this case, we didn't increase brightness enough to clip the highlights
(brightest colors) so we've only affected brightness here.
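The distinction between a linear brightness shift and a gamma adjustment can be sketched in a few lines of code. The helper names below are hypothetical, and real editors use many variants of these curves; this is only meant to show the shape of each operation on 0-255 pixel values.

```python
# Linear brightness: every pixel moves by the same amount (until it clips).
# Gamma: a non-linear curve that lifts shadows and mid-tones while leaving
# the endpoints (pure black and pure white) exactly where they are.

def add_brightness(pixels, amount):
    """Shift every pixel by the same amount, clipping to 0-255."""
    return [max(0, min(255, p + amount)) for p in pixels]

def apply_gamma(pixels, gamma):
    """Apply a gamma curve; 0 stays 0 and 255 stays 255."""
    return [round(255 * (p / 255) ** (1 / gamma)) for p in pixels]

shadows_mids_highs = [0, 64, 128, 255]
print(add_brightness(shadows_mids_highs, 40))  # [40, 104, 168, 255]
print(apply_gamma(shadows_mids_highs, 1.8))    # endpoints fixed, mid-tones lifted
```

Note how the gamma curve brightens the 64 and 128 values but cannot clip the highlights, while the linear shift pushes the already-white 255 against its limit.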
Fig 2: Extreme Brightness
If we had made a more drastic change
such as the one shown in figure 2 where we added even more brightness,
we may have clipped the white/red spokes in the wheel which would have
affected contrast, saturation, and sharpness! In the extreme case
shown in figure 2 above, we have added so much brightness that the
shadows have "caught up" to the highlights because they are already as
bright as they can get. Now we have reduced saturation, reduced
contrast, and reduced sharpness as a result. The same effect would
be seen if we reduced brightness to the point that the shadows had
nowhere else to go and the highlights started catching up to the
shadows. Depending on how close your shadows/highlights are to
their endpoints already, you don't need an extreme change in brightness
to affect the other parameters either. When increasing brightness,
you may find that you lose some contrast on the brightest details in the
image while the rest of the image has the same contrast as before.
Again, this is due to the clipping that is caused in the highlights.
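The "shadows catching up to the highlights" effect is easy to demonstrate numerically. The sketch below uses made-up pixel values and a deliberately crude contrast measure (the spread between the darkest and brightest pixel) just to show the mechanism.

```python
# Why extreme brightness reduces contrast: once the highlights clip at
# 255, further brightening only moves the shadows, shrinking the
# separation between darkest and brightest values.

def add_brightness(pixels, amount):
    return [max(0, min(255, p + amount)) for p in pixels]

def contrast_range(pixels):
    """Crude contrast measure: darkest-to-brightest spread."""
    return max(pixels) - min(pixels)

scene = [30, 120, 230]               # shadow, mid-tone, highlight
bright = add_brightness(scene, 100)  # highlight clips at 255

print(contrast_range(scene))   # 200 before
print(contrast_range(bright))  # 125 after: the shadows caught up
```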
Contrast
Contrast is defined as the separation
between the darkest and brightest areas of the image. Increase
contrast and you increase the separation between dark and bright,
making shadows darker and highlights brighter. Decrease contrast
and you bring the shadows up and the highlights down to make them closer
to one another. Adding contrast usually adds "pop" and makes an
image look more vibrant while decreasing contrast can make an image look
duller. Here is an example where we add some contrast.
Fig 3: Increase Contrast
In figure 3, we have added contrast
to the right half of both images. As you can see, the white/red
spokes have gotten brighter while the background has gotten darker.
This causes the image to look more defined. By making the
highlights brighter, however, we've also increased the brightness of the
spokes, causing the image to appear brighter since the spokes are the
main focus of the image. On the red image, increasing the
brightness of the spokes has also increased saturation (defined below).
Finally, sharpness has also been increased on both images (also defined
below). Here, we have increased brightness, contrast, saturation,
and sharpness simply by adding contrast! Note that not all areas
of the image will be affected equally and a lot depends on the content
of the image itself. Saturation effects, for example, will be less
noticeable in images that don't show bright colors because there is very
little saturation to begin with. As an extreme example, take a
look at the B/W image above. Since B/W images have zero saturation
by definition, changing contrast cannot change saturation in B/W (gray)
areas of your image.
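One common way to implement contrast is to scale pixel values away from (or toward) a mid-gray pivot, and a tiny sketch makes the "shadows darker, highlights brighter" behavior concrete. The function name and the fixed 128 pivot are assumptions for illustration, not a specific editor's algorithm.

```python
# Simple pivot-based contrast: factor > 1 pushes values away from
# mid-gray (more contrast), factor < 1 pulls them toward it (less).

def adjust_contrast(pixels, factor, pivot=128):
    return [max(0, min(255, round(pivot + (p - pivot) * factor)))
            for p in pixels]

scene = [40, 128, 220]
print(adjust_contrast(scene, 1.5))  # [0, 128, 255]: shadows darker, highlights brighter
print(adjust_contrast(scene, 0.5))  # [84, 128, 174]: everything drawn toward mid-gray
```

Notice the clipping at both ends with factor 1.5: exactly the mechanism by which adding contrast also changes brightness, saturation, and sharpness in the bright and dark extremes of a real photo.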
Saturation
Saturation is similar to contrast,
however instead of increasing the separation between shadows and
highlights, we increase the separation between colors. An example
showing increased saturation would show the same effect as figure 3
above for the red image but the B/W image would not change at all because
B/W or gray detail has no saturation. As a result, an increase in
saturation results in an increase in contrast, brightness, and sharpness
on the red image as in figure 3 and no change to the B/W image.
Again, a change in saturation normally has a more noticeable effect on
vibrant colors and less on dull colors or colors that are almost
neutral. This is because to change saturation, there must be some
color saturation to work with in the first place.
Sharpness
Sharpness can be defined as edge
contrast, that is, the contrast along edges in a photo. When we
increase sharpness, we increase the contrast only along/near edges in
the photo while leaving smooth areas of the image alone. Let's
take a look at an example with increased sharpness.
Fig 4: Increase Sharpness
The right half of the above two
images has been sharpened using unsharp mask. By only sharpening
the edges, we've actually created several different effects in the above
image. Near the outer edge of the spokes, where the spokes are
thicker, they simply look sharper without looking brighter or more
contrasty. As we approach the center of the wheel, however, where
the spokes get very thin, our edge contrast enhancement has actually
caused the center of the wheel to get brighter, more contrasty, and more
saturated (on the red photo). This is due to the fact that most of the data near
the center is edge data so the effect increases in that area.
Here, we see that increasing sharpness can cause the appearance of
increased saturation, contrast, and brightness in areas of the image that
contain fine detail, while other areas (areas with broader detail) seem
less affected except for the added sharpness.
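Unsharp masking itself reduces to "blur the image, then add back a scaled difference between the original and the blur." The 1-D sketch below uses a minimal 3-tap blur on a single row of pixels; it is a toy version of the technique, not the exact filter any particular editor uses.

```python
# Unsharp mask on a 1-D row of pixels: edge contrast increases
# (values overshoot on both sides of an edge) while flat areas
# are left untouched.

def blur(pixels):
    """Simple 3-tap box blur, clamping at the edges of the row."""
    n = len(pixels)
    return [(pixels[max(0, i - 1)] + pixels[i] + pixels[min(n - 1, i + 1)]) / 3
            for i in range(n)]

def unsharp_mask(pixels, amount=1.0):
    """Add back amount * (original - blurred), clipped to 0-255."""
    return [max(0, min(255, round(p + amount * (p - b))))
            for p, b in zip(pixels, blur(pixels))]

edge = [50, 50, 50, 200, 200, 200]  # one hard edge in a flat signal
print(unsharp_mask(edge))           # [50, 50, 0, 250, 200, 200]
```

The dark side of the edge dips to 0 and the bright side jumps to 250 while the flat regions stay at 50 and 200: exactly the local brightness/contrast boost described above, concentrated where the detail is.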
Different effects for
different parts of an image
The overall effect of brightness,
contrast, saturation, and sharpness will vary depending on the content
in each photo. Consider increasing contrast as an example.
Increasing contrast makes shadows darker and highlights brighter.
If we increase contrast on an image where most of the detail in the
photo is very bright, say an overexposed sunset, we may actually end up
with less contrast! Why? Because there are no
(or minimal) shadows in the photo so separating the shadows and
highlights in an image that only contains highlights will just compress
the highlights, making them less contrasty. Similarly, taking a
soft focus shot and increasing saturation may cause bright/vivid colors
to appear sharper than gray or near gray detail and that may
cause an unwanted change in overall balance of the photo. As an
example, increasing saturation on a shot of a cricket sitting on a red
rose petal may increase the sharpness of the red rose petal, taking
focus off the less colorful subject (the cricket) because it will be
less affected by the change in saturation. The end result may be
that the rose petal now looks sharper than the cricket, making the
cricket appear to be out of focus, all because you increased saturation. Being able to control
these linked effects when using simple controls like brightness,
contrast, saturation, and sharpness is a bit of an art, but
understanding why we sometimes get unexpected results is
half the battle!
Summary
While brightness, contrast,
saturation, and sharpness may appear on the surface to be the simplest
and most independent of image controls, they are
related and intertwined in such a way that changing any one of them can
create quite complex effects in your photos. Understanding how
they are related can be a big step in understanding how to use them and
more importantly when to use them. Before adding or
reducing brightness, contrast, saturation, or sharpness, think about
this article and ask yourself what you are really trying to accomplish.
Hopefully this article will help you pick the right control or the right
situation.
Mike Chaney