Lesson 4 - RW, Culture - TECHNOLOGY - Page2

From Studia Informatyczne

Page1 Page2 Page3 Page4 Page5

If you don’t remember Lesson 2 - L&S, Functions, Pronunciation - EDUCATION very well, go back to the listening exercise and listen once again to the lecture on how a digital camera works. Now read the explanation of how the black-and-white pictures taken by digital cameras are converted to colour.


(Written by Dennis P. Curtin. You can find more information on this topic on his webpage, ShortCourses, at http://www.shortcourses.com/choosing/how/03.htm)




When photography was first invented, it could only record black and white images. The search for color was a long and arduous process, and a lot of hand coloring went on (causing one photographer to comment "so you have to know how to paint after all!").

One major breakthrough was James Clerk Maxwell's 1860 discovery that color photographs could be created using black and white film and red, blue, and green filters. He had the photographer Thomas Sutton photograph a tartan ribbon three times, each time with a different color filter over the lens. The three black and white images were then projected onto a screen with three different projectors, each equipped with the same color filter used to take the image being projected. When brought into register, the three images formed a full color photograph. Over a century later, image sensors work much the same way.

Colors in a photographic image are usually based on three colors: red, green, and blue (RGB). This is called the additive color system because when the three colors are combined or added in equal quantities, they form white. This RGB system is used whenever light is projected to form colors as it is on the display monitor.
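The additive system described above can be sketched in a few lines of Python (the 8-bit channel values are illustrative, not from the lesson): adding full intensities of red, green, and blue channel by channel yields white.

```python
# Illustrative sketch of additive RGB color, assuming 8-bit channels
# (0-255): equal full intensities of red, green, and blue combine to white.
red = (255, 0, 0)
green = (0, 255, 0)
blue = (0, 0, 255)

# Add the three colors channel by channel.
combined = tuple(r + g + b for r, g, b in zip(red, green, blue))
print(combined)  # (255, 255, 255) -> white
```

The same addition with lower but equal intensities would produce a shade of grey, which is why the system is called additive: light is added, never subtracted.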

Since daylight is made up of red, green, and blue light, placing red, green, and blue filters over individual pixels on the image sensor can create color images just as they did for Maxwell in 1860. In the popular Bayer pattern used on many image sensors, there are twice as many green filters as there are red or blue filters. That's because the human eye is more sensitive to green than to the other two colors, so green's color accuracy is more important.
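The Bayer arrangement can be sketched as follows (a hypothetical helper, assuming the common RGGB layout, not code from the lesson): every sensor pixel sits under exactly one color filter, and green filters appear twice as often as red or blue ones.

```python
# Sketch of the Bayer filter mosaic, assuming an RGGB layout: each sensor
# pixel records light through exactly one color filter.
def bayer_filter(row, col):
    """Return the filter color ('R', 'G', or 'B') at a sensor position."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Build a small 4x4 mosaic and count the filters of each color.
mosaic = [[bayer_filter(r, c) for c in range(4)] for r in range(4)]
for row in mosaic:
    print(" ".join(row))

counts = {color: sum(row.count(color) for row in mosaic) for color in "RGB"}
print(counts)  # green filters outnumber red and blue two to one
```

Running this prints a checkerboard-like grid in which every other pixel is green, matching the two-to-one ratio the text describes.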

With the filters in place, each pixel can record only the brightness of the light that matches its filter and passes through it, while other colors are blocked. For example, a pixel with a red filter knows only the brightness of the red light that strikes it. To figure out what color each pixel really is, a process called interpolation uses the colors of neighboring pixels to calculate the two colors that the pixel didn't record directly. By combining these two interpolated colors with the color the pixel measured directly, the full color of the pixel can be calculated. "I'm bright red and the green and blue pixels around me are also bright, so that must mean I'm really a white pixel." It's like a painter creating a color by mixing varying amounts of other colors on a palette. This step is computer intensive, since comparisons with as many as eight neighboring pixels are required to perform this process properly.
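A minimal sketch of this interpolation step (simple neighbor averaging with made-up brightness values; real cameras use more sophisticated demosaicing algorithms): a red-filtered pixel estimates its missing green and blue channels from the pixels around it.

```python
# Minimal interpolation sketch (hypothetical data): a pixel that recorded
# only red estimates its two missing channels by averaging its neighbors.
def interpolate_channel(samples):
    """Average the known neighbor readings for one missing color channel."""
    return sum(samples) // len(samples)

# Suppose a red-filtered pixel measured brightness 250, and its green- and
# blue-filtered neighbors measured the values below (illustrative numbers).
red_measured = 250
green_neighbors = [248, 252, 250, 246]
blue_neighbors = [251, 249]

pixel_color = (
    red_measured,
    interpolate_channel(green_neighbors),
    interpolate_channel(blue_neighbors),
)
print(pixel_color)  # all three channels are bright -> a nearly white pixel
```

Because all three resulting channel values are high, the pixel is reconstructed as nearly white, exactly the "I'm really a white pixel" reasoning quoted above.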

Each time you take a picture, millions of calculations have to be made in just a few seconds. It's these calculations that make it possible for the camera to preview, capture, compress, filter, store, transfer, and display the image. All of these calculations are performed by a microprocessor in the camera that's similar to the one in your desktop computer.


