Doug Kerr
Well-known member
[Part 1]
The basic job of a digital camera sensor (not for a "monochrome" camera) is to determine the color of the light in each place on the image.
I will put aside the fact that today we generally use "color filter array" (CFA) sensors, in which we do not directly determine the color at each pixel location in the image. Instead, imagine a sensor that has, at each pixel location, a photodetector of each of the three types that are interleaved in a CFA sensor. That will avoid cluttering the theme of these essays with "demosaicing" considerations.
So now, we can determine the color of the light striking the sensor at each pixel location, right? Sadly, not quite.
Color and metamerism
The spectral distribution ("spectrum") of light determines its color. But light instances with many different spectrums can look to the eye to be the same color. And since color is defined in terms of visual perception, if two light instances look to be the same color, they are by definition the same color. This situation is called metamerism.
Sensor response to color
We would really like our sensor to "report" consistently the color of light no matter which spectrum it has that gives it that color. Here is a wish list, with the most desirable first.
a. The three outputs of the sensor (from its three "channels") in all cases consistently describe the color of the light in terms of the values r, g, and b. (These are the underlying values of the sRGB color space before they have been made nonlinear, after which they are called R, G, and B.) In this case we can use the three outputs directly to record the image in sRGB form. It is impractical to make a sensor that does this.
b. The three outputs of the sensor in all cases consistently describe the color of the light, but in a different set of coordinates. In that case we can transform that representation to one in terms of r, g, and b by a linear transformation, implemented by multiplying the three values by a matrix that defines the transformation. We can then take the resulting r, g, and b values and proceed to record the image in sRGB form. It is impractical to make a sensor that does this, either.
c. The three outputs of the sensor are not consistent for light of a certain color over all spectrums that constitute that color (a failing known as "metameric error"). Sadly, this is the case for realizable sensors. Sensors of this nature are spoken of as "non-colorimetric" sensors, the term implying that they "do not measure color". So what do we do now?
Well, we can empirically design a transformation (in terms of the matrix that implements it) that will transform the three sensor outputs to r, g, and b such that the average inconsistency in color representation will be "small" for most light spectrums we will actually encounter. Of course, doing this requires us to choose a collection of spectrums we consider representative of those we are most likely to encounter.
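The empirical design just described can be sketched as an ordinary least-squares fit. The sketch below uses invented numbers purely for illustration: each row of "sensor" stands for the three channel outputs for one representative light spectrum, and each row of "target" stands for the true linear r, g, b values for that same spectrum (as a real camera maker would obtain by measurement). The fitted 3x3 matrix then makes the average error small over that training collection, but not zero for every spectrum.

```python
import numpy as np

# Hypothetical training data, one row per representative light spectrum.
# All numbers here are invented for illustration only.
sensor = np.array([
    [0.80, 0.15, 0.05],
    [0.20, 0.70, 0.12],
    [0.05, 0.18, 0.75],
    [0.45, 0.40, 0.10],
    [0.30, 0.30, 0.30],
    [0.60, 0.25, 0.20],
])  # three sensor-channel outputs for each spectrum

target = np.array([
    [0.85, 0.10, 0.03],
    [0.12, 0.78, 0.08],
    [0.02, 0.12, 0.82],
    [0.40, 0.45, 0.06],
    [0.28, 0.31, 0.29],
    [0.58, 0.22, 0.18],
])  # the true linear r, g, b for the same spectra

# Solve for the 3x3 matrix M minimizing the total squared error
# || sensor @ M.T - target ||^2 over the training spectra.
M_t, *_ = np.linalg.lstsq(sensor, target, rcond=None)
M = M_t.T

def sensor_to_rgb(d):
    """Transform one triple of sensor outputs to approximate linear r, g, b."""
    return M @ np.asarray(d)

# Residual metameric error remains: the fit is only "small on average"
# over the chosen training spectra, not exact for every spectrum.
rgb = sensor_to_rgb([0.45, 0.40, 0.10])
```

The resulting r, g, b values would then be made nonlinear (becoming R, G, B) to record the image in sRGB form.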
[To be continued]