Mike Chaney's Tech Corner
Topic: August 2007: The Megapixel Masquerade
Posted by: admin on May 27, 2009, 02:27:49 PM

The Megapixel Masquerade


Background

Imagine a world where a camera can be dubbed "14 megapixel" when it has 4.6 million pixels on its imaging sensor while, at the same time, another camera can be dubbed "10 megapixel" when it has no pixels at all on its imaging sensor.  Sound like a strange world?  Maybe, but it's the world we live in today!  Evolving technologies are making it more difficult to define exactly what is meant by the term "megapixel" and are blurring camera specifications to the point that many people no longer know how to compare cameras by the specs alone.  In this article, I will try to explain how manufacturers come up with their marketing claims regarding the term "megapixels".

 

Pixel: a definition

Wikipedia defines a picture element, or pixel, as "the smallest complete sample of an image".  Others use similar terminology such as "the smallest discrete component of an image".  The key here is that we are talking about the smallest element in an image: that is, the final picture or photograph.  A digital photograph is made up of millions of tiny points of light, each of which can have its own unique color and brightness.  When these points of light are displayed next to one another and viewed from a distance, the individual points fade together and we see what appears to be a smooth, continuous image.  There are various ways to represent color and brightness for each point of light (pixel) in the image, but the most common is to assign each pixel its own set of red, green, and blue brightness values, since a particular color can be reproduced by combining red, green, and blue intensities.  A pixel, then, must have all three (red, green, and blue) components to be a complete sample of the final image.
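
To make that definition concrete, here is a minimal sketch in Python (my choice of language for illustration; the pixel values are made up) showing an image as a grid of complete pixels, each carrying its own red, green, and blue brightness:

import numpy as np

# A tiny 2 x 2 "image": every pixel is a complete sample carrying its own
# red, green, and blue brightness values (0-255).
image = np.array([
    [[255,   0,   0], [  0, 255,   0]],   # a red pixel and a green pixel
    [[  0,   0, 255], [255, 255, 255]],   # a blue pixel and a white pixel
], dtype=np.uint8)

# Each pixel holds all three components -- nothing is borrowed from neighbors.
r, g, b = image[1, 1]
print(r, g, b)        # 255 255 255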

 

Sensor photosites and pixels

Ever since images from digital cameras broke the one million pixel boundary more than a decade ago, the term "megapixel" has been used to describe resolution.  Using this term, buyers could get an idea of how large they could print, how much leeway they would have to crop images, and so on.  While a "10 megapixel" claim is accurate with respect to how many pixels are in the final (developed) image, somewhere along the way the megapixel moniker got confused with "camera resolution".  A typical camera claimed to be a 10 megapixel digital camera may produce 10 megapixel images, but by definition, the camera itself (the sensor) does not contain 10 million pixels.  Far from it, in fact!  This "10 megapixel digital camera" actually contains no pixels whatsoever on its sensor.  Instead, the sensor is a conglomerate of 5 million green photosites, 2.5 million red photosites, and 2.5 million blue photosites.  Sophisticated software takes information from these 10 million individual samples of red, green, OR blue in order to predict the missing two color channels at each pixel in the final image.  Since a pixel is defined as a complete picture element, a typical digital camera cannot accurately be called a "10 megapixel camera" even if it produces a 10 megapixel final image, because two thirds (67%) of that final image is "predicted" rather than actual data.  For the camera itself to be called 10 megapixels, it would need 10 million pixels on the sensor, each able to represent complete color information without borrowing from its neighbors.
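
To illustrate the kind of "prediction" described above, here is a hedged sketch of the simplest demosaicing scheme there is: bilinear interpolation over an RGGB Bayer mosaic, written in Python with NumPy/SciPy.  Real cameras use far more sophisticated (and proprietary) reconstruction, and the RGGB layout, sensor dimensions, and function name here are my own assumptions for the example, but the principle is the same: two of the three values at every pixel are computed from neighbors rather than measured.

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    # `raw` holds ONE measured color sample per photosite, laid out RGGB:
    # even rows alternate R,G,R,G..., odd rows alternate G,B,G,B...
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    r_mask = (y % 2 == 0) & (x % 2 == 0)
    b_mask = (y % 2 == 1) & (x % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Scatter the measured samples into otherwise-empty color planes.
    planes = np.zeros((3, h, w))
    planes[0][r_mask] = raw[r_mask]
    planes[1][g_mask] = raw[g_mask]
    planes[2][b_mask] = raw[b_mask]

    # Bilinear kernels: every missing value becomes the average of its
    # nearest same-color neighbors -- i.e. it is predicted, not measured.
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    return np.stack([convolve(planes[0], k_rb, mode='mirror'),
                     convolve(planes[1], k_g,  mode='mirror'),
                     convolve(planes[2], k_rb, mode='mirror')], axis=-1)

# A "10 megapixel" sensor delivers roughly 10 million single-color samples...
raw = np.random.randint(0, 256, size=(2592, 3872)).astype(float)
rgb = demosaic_bilinear(raw)
print(raw.size, rgb.size)   # ~10 million measured values vs ~30 million output values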

 

Enter Full Color Capture

For about a decade, none of this pixel definition nit-picking mattered because all cameras were roughly the same.  They all captured only one of the three colors (red, green, or blue) at each location on the sensor, and they all filled in the missing two colors by looking at neighboring locations and predicting.  The fact that your 10 million pixel image didn't come from a 10 million pixel camera didn't matter because everyone was compared on a level playing field.  When Sigma introduced the first consumer full color capture camera (the SD9) in 2002, they were faced with a dilemma.  Should they call it a 3.5 megapixel camera because it delivers 3.5 million pixel final images, or should they call it 10 megapixels since it captures all three red, green, and blue color primaries at each location on the sensor?  Technically (by the definition of a pixel), they should have labeled it a 3.5 megapixel camera, but its competition at the time was cameras dubbed 6 megapixel even though those were not really 6 megapixel cameras either.  Now that technology was changing, the "fuzzy" definition of megapixel that had worked for years suddenly broke down.  People started picking sides and arguing apples versus oranges.

Fast forward to 2007 and the same problem exists today.  Sigma's updated SD14 produces a 4.6 megapixel final image from 4.6 million sensor pixels.  Once again, Sigma was faced with how to label their product, since the competition was calling their cameras 8 and 10 megapixel even though those cameras recorded no true pixels at all and the final 8 or 10 megapixel image had to be "derived" using a lot of educated guessing (read: complex predictive analysis).  Had Sigma called the SD14 a 4.6 megapixel camera, most consumers wouldn't realize that, since the camera captures full color, its final images are comparable to images from typical (non full color) 10 megapixel cameras.  They chose instead to take the "high road" and label it a 14 megapixel camera, figuring that if the rest of the industry can claim 10 megapixels when only one third of each pixel is real data, they can claim 14 megapixels when they are capturing all three primary colors (4.6 x 3 ≈ 14).  In reality, Sigma marketing was fighting misleading terminology with more misleading terminology.  They likely felt they needed to, because it was easier than reeducating the masses by writing an article like this and then hoping everyone reads it.  The phrase "damned if you do, damned if you don't" comes to mind here.
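
To see where the two labels come from, here is the arithmetic behind them, sketched in Python using the pixel counts mentioned above:

# A typical single color capture camera marketed as "10 megapixel":
bayer_locations = 10_000_000              # photosites, one color sample each
bayer_measured  = bayer_locations         # 10 million real samples
bayer_output    = bayer_locations * 3     # 30 million values in the final image
print(f"predicted fraction: {1 - bayer_measured / bayer_output:.0%}")    # 67%

# The Sigma SD14, a full color capture camera:
foveon_locations = 4_600_000              # sensor pixels, three samples each
foveon_measured  = foveon_locations * 3   # 13.8 million real samples
print(f"measured samples: {foveon_measured / 1e6:.1f} million")          # marketed as "14 megapixel"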

 

Does it Matter?

It's interesting that some publications (both online and hard copy) can claim that calling a 4.6 megapixel full color capture camera 14 megapixels is hype, when no one complains that a camera advertised as 10 megapixels can't deliver 10 megapixels of real image information.  What's the real hype here: the fact that the SD14 is really 4.6 megapixels and not 14, or the fact that a typical camera labeled 10 megapixel really only captures one third of the information at each pixel?  The truth is that sometimes you have to read the fine print.  When comparing single color capture cameras with full color capture cameras, just keep in mind that megapixel ratings really cannot be compared directly.  Both technologies work and one is not necessarily better than the other for all things, but when comparing megapixel numbers on paper, note that the term "megapixel" is used rather loosely in this industry by both camps: the typical single color capture camp and the full color capture camp, i.e. Foveon/Sigma.  Due to the filtering and reconstruction involved in creating an image from a typical single color capture camera, it resolves less detail per final-image pixel than a full color capture camera like the Sigma SD14.  How much less will depend on the image, but a decent rule of thumb is that full color capture cameras like the SD14 compare nicely to cameras with about twice as many pixels in the final image.  That is, the 4.6 megapixel SD14 can resolve detail comparable to a typical (single color capture) camera rated at about 9.2 megapixels.  I admit it's a bit silly to try to explain "fuzzy" logic with even more "fuzzy" logic, but sometimes it's necessary unless you expect all your readers to have engineering or computer degrees.  :-)  If you want to read (and see) more about how complicated it can get comparing single color capture to full color capture, read my article on the SD14 versus Canon 5D, where I take a look at some of the intricacies involved in that comparison.
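
That rule of thumb can be written as a one-line conversion; the factor of two is a rough estimate that varies with the image, not a measured constant, and the function name is mine:

def equivalent_single_color_megapixels(full_color_mp, factor=2.0):
    # Rough single color capture rating that a full color capture camera of
    # `full_color_mp` megapixels compares to; varies with subject and color.
    return full_color_mp * factor

print(equivalent_single_color_megapixels(4.6))   # ~9.2, the SD14's rough equivalent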

 

The Eyeball Argument

Some reviewers crying "hype" over the 14 megapixel designation of the Sigma SD14 argue that normal single color capture cameras can actually approach their rated resolution even though only one color per pixel is captured by the sensor.  I've seen claims that cameras rated at 10 megapixels can approach 10 megapixels of true resolution, especially when capturing black and white detail.  While the algorithms designed to create a full color image from one-color-per-pixel sensors are actually pretty good at what they do, particularly on black and white detail, the edge blurring needed to make single color capture work properly holds them back from their upper limit potential.  Single color capture really starts to fall short of its rated resolution when capturing highly detailed, colorful subjects, where the red, green, or blue locations on the sensor contribute less information than they would in a B/W scene such as a resolution chart.  I've also heard the argument that single color capture cameras, particularly those with the Bayer RGBG design, try to replicate how the human eye works, giving more resolution to green and less to blue and red, so that design is actually better as a result.  Such arguments are absurd, however, when you realize that replicating the deficiencies of the human eye is not a benefit but rather a necessity for single color capture!  The goal of any imaging device should be to produce the highest quality photographs possible, and reproducing the most accurate information for each pixel is how we accomplish that task.  This is how, resolution-wise, full color capture cameras like the SD14 can compare nicely to single color capture cameras with much higher final image resolution.  All this just goes to show that single and full color capture are not comparable on paper, no matter what arguments are used to try to rationalize the comparison.

 

Summary

Don't be another victim in the megapixel wars.  Arm yourself with a little knowledge and you won't have to take the manufacturer's word for it when trying to compare cameras, especially ones built on differing technologies.  There's much more to buying a camera than just megapixels, of course, but if you like to look at specs, maybe this article will help a bit with understanding some of the claims made by manufacturers today with regard to megapixels and resolution.

 

Mike Chaney
