I’m old enough to remember film cameras (remember “Kodachrome,” Paul Simon, 1973?). Cameras have been around since before the Civil War, and film dominated until digital cameras (sometimes called “filmless” cameras), first developed in the 1970s, became popular in the 1990s and 2000s. Now, they’re ubiquitous. In fact, you can’t even buy a cell phone without one.
Are they the equal of film cameras? Some say no, insisting that the old cameras, which relied on chemical film and a mechanical device, had superior picture quality. They may be the same people who insist that vacuum tube electronics are superior to transistorized electronics. And they may be right, at least for professionals. But digital technology is improving at an exponential rate, and the devices (which include stand-alone cameras as well as cameras built into cell phones, tablets and computers) now take not only still photographs but also motion pictures, even adding sound to the video, making them immensely popular. And they’re easy to use, because they’re automated: No longer do users have to worry about exposures, shutter speeds and ASA film speed ratings; just point and shoot and the camera does the rest. Moreover, as they have become more affordable, they have become even more popular. It’s estimated that in 2015, about 1.3 trillion photos will be taken worldwide. [Along with all these photos, a cottage industry has developed for transmitting, receiving, storing and arranging them: Windows and Macs both have cloud and device storage and apps for consolidating them. See, e.g., Mylio (which stands for My Life Organized).]
So, how do they work? Both mechanical and digital cameras work essentially the same way: They allow light to pass through a lens, then project it onto a medium or device which captures it to create the photograph. The medium may be gel-based, chemically coated photographic film or an electronic image sensor made up of photosites. Both types must accommodate the various shutter “speeds,” which must become increasingly shorter in order to capture increasingly faster motion. Of course, how they create the resulting photo is different: The film must be developed using chemicals, while the sensor converts the light into electricity, which is processed by a digital computer chip. This isn’t much different from how the human eye views the world: Light passes through the lens and is refracted onto the retina’s rods and cones, where it is then processed into “sight” by the brain. (For more, see the graphic below...)
We’re only interested in the digital camera right now. In that device, the digital image sensor operates by capturing photons (light) in an array of millions of tiny light cavities called “photosites,” which are filtered by color and which finally convert the optical image into an electronic signal (electrons). Basically, they turn light into electricity. There are a couple of different types of sensors (semiconductor chips) that do this: One is the CCD (“charge-coupled device,” shown at left), essentially an array of photosites whose accumulated charge is read out through an analog-to-digital converter (“ADC”), which turns each pixel’s electrical charge into a digital value for processing and, later, storage. The other type, the CMOS (“complementary metal oxide semiconductor,” shown at right), uses less power but has historically been more susceptible to “noise,” because it uses transistors and wires at each photosite to amplify and measure the charge. CMOS sensors are used in most consumer grade cameras, while CCDs have traditionally been favored in higher end scientific and professional digital cameras.
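To make the photosite-to-ADC step concrete, here’s a toy sketch in Python. The 12-bit depth and the 40,000-electron full-well capacity are assumed round numbers for illustration, not figures from any particular sensor:

```python
# Toy sketch of what a sensor's ADC does: each photosite accumulates
# charge (proportional to the photons it captured), and the ADC maps
# that analog value onto a fixed range of digital levels.
# The 12-bit depth and full-well capacity are illustrative assumptions.

FULL_WELL = 40_000      # max electrons a photosite can hold (assumed)
ADC_BITS = 12           # many cameras digitize to 12 or 14 bits
LEVELS = 2 ** ADC_BITS  # 4096 discrete output values

def digitize(electrons):
    """Clip the charge at the full-well limit, then quantize it."""
    clipped = min(max(electrons, 0), FULL_WELL)
    return round(clipped / FULL_WELL * (LEVELS - 1))

# A dim photosite, a mid-tone one, and a blown-out (saturated) one:
for charge in (500, 20_000, 55_000):
    print(charge, "->", digitize(charge))
```

Note how the 55,000-electron photosite clips to the maximum value 4095: that is what a blown-out highlight is at the sensor level.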
Unfortunately, even though the sensor is probably the most important part of a digital camera, you can’t actually see it, and it’s not usually discussed in the specs on the packaging. Moreover, manufacturers make sensor size hard to evaluate, resorting to confusing and inconsistent designations. Beyond assuming that the larger the camera, the larger the sensor, you may have to do a little on-line research for specifics. Generally, a cell phone sensor must be tiny (about 4.54 x 3.42mm), while a full size camera’s can be much larger (23.6 x 15.6mm for the common APS-C size, and 36 x 24mm for “full frame”). A larger sensor gathers more light, so it can deliver higher resolution without sacrificing quality (much as faster film did in older film cameras). Smaller ones, like those on cell phones, compensate by using wider angle lenses. Click HERE for even more explanation, if you’re interested. Some high-quality cameras may even have more than one sensor, but most consumer grade cameras have only one. To see the manufacturing process for a CCD sensor, click HERE. And check out GoPro for sports cameras.
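To see why sensor size matters, you can estimate the size of an individual photosite (the “pixel pitch”) from the sensor dimensions and the pixel count. The sketch below uses the sensor dimensions quoted above and assumes a 12-megapixel resolution purely as an example:

```python
# Rough pixel-pitch comparison: the same megapixel count on a
# phone-sized sensor vs. an APS-C sensor. Sensor dimensions are from
# the text; the 12 MP figure is an assumed example.

def pixel_pitch_um(width_mm, height_mm, megapixels, aspect=(4, 3)):
    """Approximate the width of one photosite in micrometers."""
    ax, ay = aspect
    pixels_wide = (megapixels * 1e6 * ax / ay) ** 0.5
    return width_mm * 1000 / pixels_wide

phone = pixel_pitch_um(4.54, 3.42, 12)                 # cell-phone sensor
apsc = pixel_pitch_um(23.6, 15.6, 12, aspect=(3, 2))   # APS-C sensor

print(f"phone: {phone:.2f} um per photosite")
print(f"APS-C: {apsc:.2f} um per photosite")
```

At the same pixel count, each APS-C photosite here comes out roughly five times wider (about 24 times the area), which is exactly the extra light-gathering advantage the paragraph above describes.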
A second measure of the amount of detail that a camera can capture in a photo is called “resolution.” Resolution is measured in “pixels,” those thousands (or millions) of little “dots” which comprise a photo. Too few pixels can leave a photo blurry or grainy, especially if you crop and enlarge the initial image. But past a certain point, extra pixels contribute nothing to photo quality and only bulk up the data size for storage and transmission. More megapixels alone do not translate into higher resolution, clarity or detail. Unfortunately, camera manufacturers have engaged in “megapixel wars” which lead purchasers to think that higher megapixels mean higher resolution. It’s just not true. The sensor is much more important; it controls the picture.
It’s also important to remember that useful resolution is capped by the output device that displays the image. You can create a 1024 x 768 photo, but if your printer can only render the equivalent of, say, 600 x 900 pixels at that print size, the extra pixels are wasted, because they won’t appear. The same goes for posting a photo to the Internet: about 72 pixels per inch is all you need; more is a waste. Likewise, monitors can only display so much; older ones top out around 800 x 600. The only time you really want high resolution is for printing things like 8 x 10 prints, posters, macros, etc. on photo quality paper.
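A quick way to decide how many pixels you actually need is to work backwards from the output. The rule-of-thumb figures below (roughly 300 pixels per inch for photo-quality prints, about 72 ppi for screens) are common conventions, not hard standards:

```python
# Work backwards from the output size to the pixel count you need.
# 300 ppi (print) and 72 ppi (screen) are rule-of-thumb assumptions.

def pixels_needed(width_in, height_in, ppi=300):
    """Pixel dimensions needed for a given physical size and density."""
    return round(width_in * ppi), round(height_in * ppi)

w, h = pixels_needed(10, 8)  # a photo-quality 8 x 10 print
print(f"8x10 print at 300 ppi: {w} x {h} ({w * h / 1e6:.1f} MP)")
print("4x6 print at 300 ppi:", pixels_needed(6, 4))
print("10x8 inches on screen at 72 ppi:", pixels_needed(10, 8, ppi=72))
```

An 8 x 10 print works out to about 7.2 megapixels; anything beyond that is wasted on this particular output.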
Anyway, pixels are usually expressed as “megapixels,” meaning millions of pixels. Resolution is expressed as horizontal times vertical counts, like 256 x 256 (about 65,000 pixels total), 1600 x 1200 (about 2 million pixels) or 2240 x 1680 (about 4 million pixels). So when you see a 4 megapixel camera designation, that’s roughly 2240 x 1680. This becomes important because, when these photos are stored on the camera or elsewhere, they take up space. For example, at 640 x 480 resolution, an uncompressed 24-bit TIFF photo takes up roughly 0.9 MB, while a JPEG might run 100 to 300 KB; at 1024 x 768, a TIFF uses about 2.3 MB versus a few hundred KB as a JPEG; and at 2240 x 1680, a TIFF swells to nearly 11 MB versus roughly 1 MB as a JPEG. Remember, your camera’s SD card has limited storage space, after which you must either use another card or archive the photos and erase it for further use.
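The storage math is easy to sketch: an uncompressed 24-bit image stores 3 bytes (one each for red, green and blue) per pixel, so file size scales directly with resolution. JPEG sizes vary widely with content and quality settings, so the 10:1 compression ratio below is only a ballpark assumption:

```python
# Back-of-the-envelope storage math: 3 bytes per pixel, uncompressed.
# The 10:1 JPEG ratio is a ballpark assumption; real JPEGs vary widely.

def uncompressed_mb(width, height, bytes_per_pixel=3):
    """Size of an uncompressed 24-bit image in MB (1 MB = 1024 KB)."""
    return width * height * bytes_per_pixel / (1024 * 1024)

for w, h in [(640, 480), (1024, 768), (2240, 1680)]:
    tiff = uncompressed_mb(w, h)
    print(f"{w}x{h}: ~{tiff:.1f} MB uncompressed, "
          f"~{tiff / 10:.2f} MB as a typical JPEG")
```

Run it and you can see why a 4-megapixel shot eats into an SD card roughly ten times faster than a 640 x 480 one.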
Remember that pixels in a fixed environment are not the same as in a moving environment, such as broadcast television or computer monitors. In video, clarity is measured in terms of “pixel density,” which is the number of pixels within a given area, not the total number of pixels. Nor does that definition consider the size or shape of any given pixel, or the aspect ratio, all of which can vary and affect resolution as well. See Screens for more. Just so you know.
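Here’s the density idea in code: the same pixel count produces very different pixel densities on different screen sizes. The screen diagonals below are illustrative examples:

```python
# Pixel density (ppi) depends on both the pixel count and the physical
# size of the screen, which is why "total pixels" alone says little
# about sharpness. The screen diagonals are illustrative examples.

def ppi(width_px, height_px, diagonal_in):
    """Pixels per inch along the screen's diagonal."""
    diag_px = (width_px ** 2 + height_px ** 2) ** 0.5
    return diag_px / diagonal_in

# The same 1920 x 1080 resolution on three very different screens:
print(f'5.5-inch phone:  {ppi(1920, 1080, 5.5):.0f} ppi')
print(f'24-inch monitor: {ppi(1920, 1080, 24):.0f} ppi')
print(f'60-inch TV:      {ppi(1920, 1080, 60):.0f} ppi')
```

Identical pixel counts, yet the phone packs roughly eleven times the density of the TV, which is why you can sit so much closer to it.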
The quality of a photo can also be affected by other factors: the physical size and power of the camera lens (the larger and more powerful the better); the aperture (the size of the lens opening, which determines how much light is let in); the shutter speed; the focal length (the distance between the lens and the sensor surface); zoom or macro focusing; the size of the electronic sensor (the larger the better); optical vs. digital zoom; and low-light capabilities, as well as other features like viewing angle. If you are purchasing a digital camera, and extreme photo quality or adjustability is really important to you, you may have to do a little research (by reading reviews) to evaluate these features, as they aren’t always apparent or specified.
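Aperture and shutter speed trade off against each other, and photographers summarize a combination of the two as an “exposure value” (EV = log2(N²/t) for f-number N and shutter time t in seconds); settings with equal EV admit the same total amount of light. A small sketch:

```python
# Exposure value: EV = log2(N^2 / t), where N is the f-number and t is
# the shutter time in seconds. Equal EV means equal total light.

import math

def exposure_value(f_number, shutter_s):
    return math.log2(f_number ** 2 / shutter_s)

# Opening the aperture one stop (f/8 -> f/5.6) admits twice the light:
print(f"f/8   @ 1/125 s: EV {exposure_value(8, 1/125):.1f}")
print(f"f/5.6 @ 1/125 s: EV {exposure_value(5.6, 1/125):.1f}")
# ...so halving the exposure time brings the EV (total light) back:
print(f"f/5.6 @ 1/250 s: EV {exposure_value(5.6, 1/250):.1f}")
```

This is exactly the juggling act that “automatic” cameras perform for you behind the point-and-shoot button.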
Color is another and much more complex matter. Most single sensor cameras must use filtering to divide light into its three primary colors (red, green and blue, like the old RGB monitors) and then recombine them to create the full color spectrum. This can be done with either a “beam splitter” or a “rotating filter” but most often by placing a permanent filter called a “color filter array” over the sensor which breaks up the primary colors and then interpolates the true colors in between, using the “Bayer filter pattern” coupled with “demosaicing algorithms” to create accurate colors. The whole process can be quite complicated and, if you’re interested, click HERE for more. The point here is that the extent of this technology is why some cameras have more lifelike colors than others. A less expensive camera may require you to adjust the color, saturation and hue with software, while a better and more expensive camera may get it right the first time.
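To make demosaicing a little less abstract, here is a deliberately crude sketch: it walks a made-up RGGB Bayer mosaic and, at each photosite, averages the nearest same-color neighbors to fill in the two missing channels. Real in-camera algorithms are far more sophisticated:

```python
# Toy demosaicing pass over a Bayer mosaic. Each cell records only one
# color channel; we estimate the other two by averaging the same-color
# neighbors in a 3x3 window. The 4x4 mosaic is made-up sample data.

PATTERN = [["R", "G"],   # RGGB Bayer tile, repeated across the sensor
           ["G", "B"]]

def bayer_color(y, x):
    return PATTERN[y % 2][x % 2]

def demosaic(mosaic):
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            buckets = {"R": [], "G": [], "B": []}
            # gather the 3x3 neighborhood, bucketed by Bayer color
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        buckets[bayer_color(ny, nx)].append(mosaic[ny][nx])
            # average each bucket to produce a full (R, G, B) triple
            row.append(tuple(sum(v) // len(v) for v in buckets.values()))
        out.append(row)
    return out

mosaic = [[200,  60, 190,  70],   # R G R G
          [ 50,  30,  55,  35],   # G B G B
          [210,  65, 195,  75],   # R G R G
          [ 45,  25,  60,  40]]   # G B G B
rgb = demosaic(mosaic)
print(rgb[1][1])  # full RGB estimate at a blue photosite -> (198, 57, 30)
```

Even this crude version shows the key point: every output pixel is partly interpolated, and the quality of that interpolation is one reason some cameras render colors more faithfully than others.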
Generally, digital cameras work the same way as film cameras (and, for that matter, the human eye); it’s just that the components are different. The graphic I’ve created below demonstrates this comparison: