Open Photography Forums  

September 29th, 2010, 11:16 AM
Doug Kerr
Senior Member
Join Date: May 2006
Location: Alamogordo, New Mexico, USA
Posts: 8,331

The color space of a sensor

The three kinds of output values from a CFA sensor - generally labeled "R", "G", and "B" - are the coordinates of the sensor's color space. We do not often speak of it as such (although it is extensively spoken of in the DNG specification).

And some folks have trouble accepting that. After all, we do not have all three values for any given spot in the focal plane image, so what is it whose color can be described in that color space? But don't let that throw you off.

We can get around that, if need be, by considering a substantial region of uniform color in the focal plane image. The outputs of all the "R" sensels, all the "G" sensels, and all the "B" sensels in that region together tell us the color of that region, as we would expect of a color space.
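To make that concrete, here is a small sketch of the idea. The raw values and the RGGB layout below are made up for illustration - the point is only that averaging each kind of sensel over a uniform region yields one (R, G, B) triple in the sensor's own color space.

```python
import numpy as np

# A hypothetical 4x4 patch of raw sensel values over a uniform region,
# laid out in the common RGGB Bayer pattern:
#   R G R G
#   G B G B
#   R G R G
#   G B G B
raw = np.array([
    [200,  90, 202,  91],
    [ 92,  40,  89,  41],
    [201,  88, 199,  90],
    [ 91,  42,  90,  39],
], dtype=float)

# Boolean masks selecting each kind of sensel in the RGGB mosaic.
r_mask = np.zeros_like(raw, dtype=bool); r_mask[0::2, 0::2] = True
b_mask = np.zeros_like(raw, dtype=bool); b_mask[1::2, 1::2] = True
g_mask = ~(r_mask | b_mask)

# The mean of each kind of sensel gives one (R, G, B) triple:
# the color of the uniform region, in the sensor's own color space.
sensor_rgb = (raw[r_mask].mean(), raw[g_mask].mean(), raw[b_mask].mean())
print(sensor_rgb)  # (200.5, 90.125, 40.5)
```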

The sensor color space is a tristimulus color space, just like any of the familiar RGB-model color spaces, or the CIE XYZ color space (the one we often use to describe color in absolute terms in scientific work). Thus, there are three primaries for the color space, defined uniquely for the color space.

However, unlike the primaries of the familiar RGB-model color spaces (but as in the XYZ color space), these primaries are not physically realizable. That is, they do not correspond to any kind of radiation (even invisible radiation). So they are nothing like the primaries R, G, and B of RGB-model color spaces - the corresponding coordinates are just usually called R, G, and B as a way to help people understand the concept of different spectral sensitivities of the three kinds of sensels. Would the primary called "R" at least look "sort of red"? No - it is not any physical kind of radiation, so it could not be seen at all.

Thus, we cannot view the way this color space describes a color as a recipe for physically creating light of that color by mixing together the indicated amounts of the three primaries.

But in fact the mathematics of determining the luminance and chromaticity of any mixture of these three primaries - in cases where that leads to a visible color (and of course there is no other kind) - works just as well as if the primaries were themselves kinds of (visible) radiation.

We get a fuller appreciation of these values as being the coordinates of a bona fide color space when we contemplate demosaicing of the sensel outputs. We can visualize this as being done by simple interpolation among the "R", "G", and "B" values to produce a set of all three for each pixel location (although more sophisticated techniques are actually used).

Those three values, at some pixel location, describe the (estimated) color of the focal plane image at that point - in the color space of the sensor! So now we have that color space at work in a more familiar setting, which hopefully relieves any misgivings we might have as to whether or not there is such a thing.
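The "simple interpolation" approach can be sketched in code. The following is a naive bilinear demosaic for an RGGB mosaic, purely for illustration (as noted above, real raw converters use far more sophisticated, edge-aware methods): each missing channel value at a pixel is the weighted average of that channel's sensels in the 3x3 neighborhood.

```python
import numpy as np

def conv3(a, k):
    """Weighted sum of each pixel's 3x3 neighborhood (zero-padded edges)."""
    p = np.pad(a, 1)
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def bilinear_demosaic(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic - a sketch only."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    masks = {"R": r_mask, "G": ~(r_mask | b_mask), "B": b_mask}

    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    rgb = np.zeros((h, w, 3))
    for i, c in enumerate("RGB"):
        known = np.where(masks[c], raw.astype(float), 0.0)
        # Each pixel's value for channel c is the weighted average of the
        # c-sensels in its 3x3 neighborhood; known sensels keep their value.
        rgb[..., i] = conv3(known, kernel) / conv3(masks[c].astype(float), kernel)
    return rgb

# Sanity check: a uniform patch demosaics to that value in all three planes.
flat = np.full((4, 4), 7.0)
print(np.allclose(bilinear_demosaic(flat), 7.0))  # True
```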

We can transform the representation of colors in terms of the coordinates of one tristimulus color space into terms of the coordinates of another tristimulus color space (assuming both are linear) by means of a 3x3 matrix of constants.

Thus, once we have the (estimated) color at each pixel of our image in terms of the coordinates of the sensor color space, we can transform it into, for example, the coordinates of the sRGB color space, or the Adobe RGB color space, or the ProPhoto RGB color space, or the CIE XYZ color space, as we might need for further processing of the image, by multiplication of the set of three sensor values by the appropriate transformation matrix.
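As a sketch of that transformation step: the sensor-to-XYZ matrix below is made up for illustration (a real one comes from characterizing the actual sensor), while the XYZ-to-linear-sRGB matrix is the standard one. Note that the two steps compose into a single 3x3 matrix, which is why one matrix multiplication per pixel suffices in practice.

```python
import numpy as np

# A hypothetical matrix taking this sensor's (linear) RGB coordinates to
# CIE XYZ - the numbers are invented for illustration only.
SENSOR_TO_XYZ = np.array([
    [0.60, 0.30, 0.10],
    [0.25, 0.65, 0.10],
    [0.05, 0.10, 0.85],
])

# The standard matrix from CIE XYZ (D65) to linear sRGB.
XYZ_TO_LINEAR_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

sensor_rgb = np.array([0.40, 0.50, 0.20])   # one demosaiced pixel
xyz = SENSOR_TO_XYZ @ sensor_rgb            # sensor space -> XYZ
srgb_linear = XYZ_TO_LINEAR_SRGB @ xyz      # XYZ -> linear sRGB

# The two 3x3 matrices compose into one combined 3x3 matrix.
combined = XYZ_TO_LINEAR_SRGB @ SENSOR_TO_XYZ
print(np.allclose(combined @ sensor_rgb, srgb_linear))  # True
```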

Now, because of matters related to metameric failure (which I will not belabor here), two different kinds of light (with different spectra) that have the same description under one color space might not have the same description under another color space.

So of course a single transformation matrix, intended to convert color representation from the sensor color space into, for example, sRGB, will not always produce a consistent result. Thus, such matrices are always predicated on a particular illumination - not just one with a certain chromaticity, but one with a particular spectrum.

In a DNG camera profile, there is such a matrix - or more likely two, one for each of two "important" illuminants.
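When the scene illuminant falls between the two calibration illuminants, a blended matrix is used. The sketch below follows the DNG specification's approach of weighting linearly in inverse correlated color temperature (1/CCT); the default temperatures assume the common Standard Illuminant A / D65 calibration pair, and the matrices here are toy values for illustration.

```python
def interpolate_color_matrix(m1, m2, cct, cct1=2850.0, cct2=6504.0):
    """Blend two calibration matrices for an intermediate illuminant.
    Weights are linear in 1/CCT, per the DNG spec's approach; m1 goes
    with cct1 (here assumed Std. A) and m2 with cct2 (assumed D65)."""
    cct = min(max(cct, cct1), cct2)          # clamp to the calibrated range
    w = (1.0 / cct - 1.0 / cct2) / (1.0 / cct1 - 1.0 / cct2)
    return [[w * a + (1.0 - w) * b for a, b in zip(row1, row2)]
            for row1, row2 in zip(m1, m2)]

# Toy matrices: at either calibration endpoint the blend returns that
# endpoint's matrix unchanged.
m_a   = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
m_d65 = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
print(interpolate_color_matrix(m_a, m_d65, 2850.0)[0][0])   # 1.0
print(interpolate_color_matrix(m_a, m_d65, 6504.0)[0][0])   # 2.0
```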

Best regards,
