The color-determining capability of camera sensors

Doug Kerr

Well-known member
[Part 1]

The basic job of a digital camera sensor (except in a "monochrome" camera) is to determine the color of the light at each place on the image.

I will put aside the fact that today we generally use "color filter array" (CFA) sensors, in which we do not directly determine the color at each pixel location in the image. Instead, imagine a sensor that has, at each "pixel location", a photodetector of each of the three types that are interleaved in a CFA sensor. That will avoid cluttering the theme of these essays with "demosaicing" considerations.

So now, we can determine the color of the light striking the sensor at each pixel location, right? Sadly, not quite.

Color and metamerism

The spectral distribution ("spectrum") of light determines its color. But we can have light instances with many different spectrums that look to the eye to be the same color. And since color is defined in terms of visual perception, if two different light instances look to have the same color, they by definition are the same color. This situation is called metamerism.
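To make "the spectral distribution determines the color" concrete: the standard quantification integrates the spectrum against the CIE 1931 color matching functions, and metamers are distinct spectrums whose integrals agree. A sketch of the relations (standard colorimetry, stated here for orientation):

```latex
% Tristimulus values of a light with spectral power distribution S(\lambda),
% using the CIE 1931 color matching functions:
X = \int S(\lambda)\,\bar{x}(\lambda)\,d\lambda \qquad
Y = \int S(\lambda)\,\bar{y}(\lambda)\,d\lambda \qquad
Z = \int S(\lambda)\,\bar{z}(\lambda)\,d\lambda

% Metamerism: two spectrums with S_1 \neq S_2 can nevertheless satisfy
% (X_1, Y_1, Z_1) = (X_2, Y_2, Z_2) -- and then, by definition,
% they are the same color.
```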

Sensor response to color

We would really like our sensor to "report" consistently the color of light no matter which spectrum it has that gives it that color. Here is a wish list, with the most desirable first.

a. The three outputs of the sensor (from its three "channels") in all cases consistently describe the color of the light in terms of the values r, g, and b. (These are the underlying linear values of the sRGB color space, before the nonlinearity is applied; afterward they are called R, G, and B.) In this case we can use the three outputs to proceed directly to record the image in sRGB form. It is impractical to make a sensor that does this.

b. The three outputs of the sensor in all cases consistently describe the color of the light, but in a different set of coordinates. In that case we can transform that representation to one in terms of r, g, and b by a linear transformation, done by multiplying the three values by a matrix that defines the transformation (a minimal sketch of this appears just after this list). We can then take the resulting r, g, and b values and proceed to record the image in sRGB form. It is impractical to make a sensor that does this, either.

c. The three outputs of the sensor are not consistent for light of a certain color over all spectrums that constitute that color (a failing known as "metameric error"). Sadly, this is the case for realizable sensors. Sensors of this nature are spoken of as "non-colorimetric" sensors, the term implying that they "do not measure color". So what do we do now?
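As a concrete illustration of the matrix transformation mentioned in (b), here is a minimal sketch; the matrix entries are invented purely for illustration, not taken from any real sensor:

```python
import numpy as np

# Hypothetical 3x3 matrix mapping the sensor's three channel outputs
# to linear sRGB values (r, g, b). The entries are placeholders.
M = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.4,  1.4],
])

sensor_outputs = np.array([0.42, 0.35, 0.18])  # one pixel's three channel values
r, g, b = M @ sensor_outputs  # linear r, g, b, ready for the sRGB nonlinearity
```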

Well, we can empirically design a transformation (in terms of the matrix that implements it) that will transform the three sensor outputs to r, g, and b such that the average inconsistency in color representation is "small" for most light spectrums we will actually encounter. Of course, doing this requires us to choose a collection of spectrums we consider representative of "those we are most likely to encounter".
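One simple way to do that empirical design, sketched here under the assumption that for each representative spectrum we know both the sensor's three outputs and the true linear r, g, b of the light, is an ordinary least-squares fit (random numbers stand in for the measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
sensor = rng.random((24, 3))    # sensor outputs for 24 representative spectrums
true_rgb = rng.random((24, 3))  # the known linear r, g, b of each of those lights

# Solve sensor @ Mt ~= true_rgb for the 3x3 matrix in the least-squares
# sense; the residual is the average (squared) color inconsistency.
Mt, residuals, rank, sv = np.linalg.lstsq(sensor, true_rgb, rcond=None)
M = Mt.T                        # rows map sensor outputs to r, g, b
```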

[To be continued]
 

Doug Kerr

Well-known member
[Part 2]

ISO 17321-1

We may be interested in how successfully we can play the game I just described for a given sensor. Of course the matter is very complex, but as always we are anxious to have a single-valued "score" that tells us how well a certain sensor lets us mitigate its metameric error.

The ISO has accommodated this desire by defining, in its standard 17321-1, a procedure for determining a single-valued score for a sensor, called the Sensitivity Metamerism Index (SMI), that tells us how successfully its metameric error can be mitigated (in the sense of "on the average" over a specific range of light spectrums).

The population of light spectrums on which this is based is generated by taking 24 reflective test patches (in fact, normally the patches of a 24-patch Macbeth ColorChecker), whose reflectance spectrums are well documented, and illuminating them with a standardized illuminant (in particular, CIE Standard Illuminant D65, a "daylight" illuminant), whose spectrum is also well documented. Thus, we know the spectrum of the light reflected toward the camera (in which the sensor under test is mounted) from each of those 24 patches.
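Numerically, that reflected spectrum is just the wavelength-by-wavelength product of a patch's reflectance spectrum and the illuminant's spectrum. A sketch with placeholder arrays (the real tabulated ColorChecker reflectances and the D65 spectral power distribution would be substituted):

```python
import numpy as np

wavelengths = np.arange(380, 781, 10)             # nm, 10 nm steps
rng = np.random.default_rng(1)
reflectance = rng.random((24, wavelengths.size))  # 24 patches (placeholder data)
d65 = np.ones(wavelengths.size)                   # placeholder for the CIE D65 SPD

# Spectrum of the light reflected toward the camera from each patch.
reflected = reflectance * d65                     # shape (24, n_wavelengths)
```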

We can thus, analytically, determine the color of that light we will get from each patch, expressed in the CIE XYZ color space (the standard scientific way to quantify color).
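In numeric form, "analytically determine the color" means integrating each reflected spectrum against the CIE 1931 color matching functions (again with stand-in arrays; the real tabulated functions would be used):

```python
import numpy as np

wavelengths = np.arange(380, 781, 10)
rng = np.random.default_rng(1)
reflected = rng.random((24, wavelengths.size))  # reflected spectra, as above
cmf = rng.random((wavelengths.size, 3))         # placeholder x-bar, y-bar, z-bar

dlam = 10.0                                     # integration step, nm
xyz_true = reflected @ cmf * dlam               # shape (24, 3): known X, Y, Z per patch
```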

We then have the camera regard each of the patches and for each record the three outputs from the sensor.
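The same kind of integral models what the sensor reports: each channel's output is the reflected spectrum weighted by that channel's spectral sensitivity. (In the actual test these numbers come from the physical camera; here placeholder sensitivities make the sketch runnable.)

```python
import numpy as np

wavelengths = np.arange(380, 781, 10)
rng = np.random.default_rng(2)
reflected = rng.random((24, wavelengths.size))   # reflected spectra per patch
sensitivity = rng.random((wavelengths.size, 3))  # placeholder channel sensitivities

dlam = 10.0
sensor_out = reflected @ sensitivity * dlam      # shape (24, 3): three outputs per patch
```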

Now we take a certain transformation matrix (which one? More on that shortly), arranged to transform the three outputs of the sensor into CIE XYZ coordinates, and use it to "process" the recorded sensor outputs for each of the patches. This gives us the "color" that a camera using the sensor would record for each of the patches if the camera's processing chain used that transformation matrix. Of course, for each patch, the difference between this color and the actual known color of the light (determined earlier) is the metameric error this hypothetical camera would suffer with regard to the spectrum from that patch.

For each patch, this error is quantified with a standard color difference metric, and then the 24 error numbers are combined using a certain formula to get the overall metameric "score" (the Sensitivity Metamerism Index, SMI).
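The color difference metric is CIE delta E (computed in a perceptually uniform space such as CIELAB), and the combining formula commonly cited for ISO 17321-1 is SMI = 100 − 5.5 × (mean delta E over the 24 patches), so that 100 would mean no metameric error at all. Treat the exact constant as my reading of the literature, not a quotation from the standard:

```python
import numpy as np

# Per-patch color differences (CIE delta E), placeholder values here;
# in the real procedure these come from comparing the transformed
# sensor colors against the known patch colors.
rng = np.random.default_rng(3)
delta_e = rng.random(24) * 4.0

# Commonly cited ISO 17321-1 combining formula (stated as an assumption):
smi = 100.0 - 5.5 * delta_e.mean()
print(f"Sensitivity Metamerism Index: {smi:.1f}")
```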

Woof!

Now what transform matrix is used in this "virtual camera"? Well, essentially, it is the matrix under which the virtual camera would garner the best SMI "score". (Talk about "teaching to the test"!)

In fact, the standard prescribes an algorithm for developing this "optimal" transform matrix from the data generated in the testing phase of the process.
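The standard's algorithm is specific and I won't reproduce it here; as a stand-in illustration of the idea, one can search numerically for the 3x3 matrix that minimizes the average error over the patches (this sketch minimizes squared XYZ error rather than delta E, purely to stay short):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
sensor_out = rng.random((24, 3))  # the recorded sensor outputs per patch
xyz_true = rng.random((24, 3))    # the known XYZ color of each patch

def mean_error(m_flat):
    # Mean squared XYZ error of a candidate 3x3 matrix over all 24 patches.
    M = m_flat.reshape(3, 3)
    return np.mean((sensor_out @ M.T - xyz_true) ** 2)

# Start from the identity matrix and let the optimizer find the matrix
# under which this virtual camera would score best.
result = minimize(mean_error, np.eye(3).ravel())
M_opt = result.x.reshape(3, 3)
```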

******

This two-part article will be a predicate of another series I plan to issue shortly, discussing the "Color Response" portion of the sensor reports published by DxOMark for the sensors in many cameras.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug,

Thanks for these articles. I will try to keep up. This is important material, as we tend to choose different cameras based, in the first place, on whether a camera's characteristic color "style" is what we "like" for the pictures we seem to want for our taste, or think we need for our clients.

The "elephant in the room" is the truth of the matter: if we were more skilled, would the camera we hold in our hands at the time of the shot, limit that "color look or impression" to what we think is characteristic of that camera.

Or can one take pretty much any camera and, in post-processing, get the "characteristic look" and color response of a brand chosen for its apparently especially refined color representation, the kind that gives a certain type of picture a finish that is magically perfect?

Asher
 