Doug Kerr
We often hear it said that in a typical digital camera sensor, the "G" channel is "more sensitive" than the "R" and "B" channels.
And that, as a consequence, the "R" and "B" channel outputs need to be "multiplied by larger factors" in processing the sensor outputs to become the color coordinates of individual pixels, and thus the contributions of the "R" and "B" channels are more burdened by noise than those of the "G" channel.
I put the designations "R", "G", and "B" for the three sensor channels, and often for their outputs, in quotes to remind us that, for two reasons, they do not have a direct connection with the R, G, and B coordinates of the representation of a pixel color.
And those notions are correct in a qualitative, kitchen-table sort of way.
But if we begin to look into how we can actually quantify these different "sensitivities", it gets tricky. What would we measure as these sensitivities? A couple of possibilities come to mind:
A. We could determine, for each channel of photodetectors, its output when the sensor is illuminated by a reference "potency" of the "corresponding" "sRGB primary".
Well, there are several problems with that.
• No "sRGB primary" is associated with any of the three sensor channels (except by way of misconception). We do not, for example, expect the "R" channel to respond exclusively to "the R sRGB primary".
• And in any case, there is no kind of light that is uniquely "the R sRGB primary" that could be used for this purpose. That is, there is no spectrum defined for the "R sRGB primary". There is defined only a color. That color can be physically realized by an infinity of different spectrums. And our sensor, being "non-colorimetric", will generally give different combinations of outputs for each of them.
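To make that last point concrete, here is a minimal numerical sketch in Python. Everything in it is made up: Gaussian curves stand in for the CIE color-matching functions and for the camera's channel sensitivities, and no real sensor data is involved. It constructs two different spectrums that present the identical color to the standard colorimetric observer, yet produce different combinations of "R", "G", and "B" outputs from our non-colorimetric sensor:

```python
import numpy as np

# Wavelength grid (nm), coarse for illustration.
wl = np.arange(400, 701, 5)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Stand-ins for the CIE color-matching functions (3 rows x 61 wavelengths).
cmf = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])

# Stand-ins for the camera's "R", "G", "B" spectral sensitivities;
# deliberately NOT linear combinations of the rows of cmf, which is
# exactly what "non-colorimetric" means.
cam = np.stack([gauss(610, 30), gauss(540, 35), gauss(460, 25)])

# An arbitrary smooth starting spectrum.
s1 = 1.0 + 0.5 * gauss(550, 80)

# Any vector in the null space of cmf can be added to s1 without
# changing its color. Pick the null direction the camera responds
# to most strongly, so the effect is plainly visible.
_, _, vt = np.linalg.svd(cmf)
null_basis = vt[3:]                                # spans the null space of cmf
j = np.abs(cam @ null_basis.T).sum(axis=0).argmax()
null_vec = null_basis[j]
s2 = s1 + 0.8 * null_vec / np.abs(null_vec).max()  # stays nonnegative

print("color of s1:", cmf @ s1)         # these two agree: same color...
print("color of s2:", cmf @ s2)
print("sensor sees s1 as:", cam @ s1)   # ...but these two differ
print("sensor sees s2 as:", cam @ s2)
```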
B. We could measure the output of the channel when the sensor is illuminated by "monochromatic" light of the wavelength at which that channel has its maximum sensitivity.
But that doesn't turn out to be a result that we can do anything with.
In fact, in general, neither of those properties is measured. A third possibility, the one in fact used in determining the "relative channel sensitivities" on the Color Response page of the DxOMark report on a camera sensor, is this:
C. We expose the sensor to a broadband light source, specifically one of the CIE standard illuminants (each of which has a precisely defined spectrum), and note the outputs of the "R", "G", and "B" sensor channels. (We have our choice of a report based on CIE illuminant D50 or on CIE illuminant A.)
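To illustrate what a measurement of kind C amounts to, here is a minimal sketch, assuming we have the channel spectral sensitivity curves in hand (the same made-up Gaussians as above) and using a crude smooth stand-in for the D50 spectrum (in practice one would use the published CIE table):

```python
import numpy as np

wl = np.arange(400, 701, 5)

def gauss(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# The same made-up "R", "G", "B" channel sensitivities as before.
cam = np.stack([gauss(610, 30), gauss(540, 35), gauss(460, 25)])

# Crude smooth stand-in for the CIE D50 spectral power distribution
# (the real thing is a published table).
spd_d50 = 0.6 + 0.4 * (wl - 400) / 300

# Each channel's output is the integral, over wavelength, of the
# illuminant spectrum weighted by that channel's sensitivity.
outputs = cam @ spd_d50

# "Relative channel sensitivities", with "G" taken as the reference,
# in the manner of the DxOMark Color Response presentation.
print(outputs / outputs[1])
```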
Now, what can we do with those values? Well, perhaps we can get an idea of the degree to which our image will suffer more noise from the contributions of the "R" and "B" channel outputs than from the contribution of the "G" channel output.
Well, sort of. But it is hard to draw exact conclusions as to that.
Now another thing that we often hear is this: "Before we use the sensor channel outputs for further steps of processing, we should first 'normalize' them to take into account the different sensitivities of the three channels."
So suppose we do that, based on the "sensitivities" we determined by exposing the sensor to CIE illuminant D50. One consequence is that if we now expose the sensor to CIE illuminant D50 (perhaps we speak of an area on the sensor that received the light from a "neutral" reflective object illuminated by CIE illuminant D50, it being that illuminant that illuminates the entire scene), the outputs from channels "R", "G", and "B" will be equal.
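In computational terms, that normalization is nothing more than a per-channel scale factor, chosen so that a neutral patch under the chosen illuminant gives equal channel outputs. A minimal sketch (with made-up raw values):

```python
import numpy as np

# Suppose these were the raw channel outputs measured for a neutral
# patch under CIE illuminant D50 (made-up numbers).
raw_neutral_d50 = np.array([0.45, 1.00, 0.62])    # "R", "G", "B"

# The normalization factors are whatever makes that neutral patch
# come out with equal channel values ("G" as the reference).
norm = raw_neutral_d50[1] / raw_neutral_d50

print(norm)                    # "R" and "B" multiplied by larger factors
print(norm * raw_neutral_d50)  # the neutral patch is now [1, 1, 1]

# Applied to any pixel's raw triple:
some_pixel = np.array([0.30, 0.80, 0.40])
print(norm * some_pixel)       # the "normalized" sensor outputs
```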
So what does that do for us? Does that mean that we can take these "adjusted" sensor outputs and use them as the underlying (linear) coordinates of the color (in sRGB terms)? No. The sensor outputs do not have a direct relationship to those three coordinates.
Rather, we must (at least) process the sensor outputs by multiplying them by a matrix that makes an approximate translation into the underlying coordinates of what we will consider the color of the light on that sensor region.
Why can't we use a matrix that makes an exact transformation from the three sensor outputs to the three coordinates (in sRGB) of the color of the light on that region of the sensor? Because the sensor is non-colorimetric, and a set of three values of the sensor outputs does not imply a specific color (so of course we cannot consistently describe that color, in sRGB or any other way).
But the "best" matrix for approximating this transformation does not work any better if it runs from the sensor outputs "corrected" for the different channel "sensitivities" than if it runs from the sensor outputs directly. (Of course the matrix would be different for these two cases, but one could easily be turned into the other.)
Now if we really want to examine how the different channel outputs are "multiplied" in processing, and the implications of this for the noise contributions of the different channels to the image, we can see that by inspecting the matrix we decide to use.
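For example, if we take the three channel noises to be independent and equal, then each output coordinate is a weighted sum of the channel outputs, and its noise scales with the root-sum-square of the corresponding row of the matrix. Continuing with the same made-up matrix:

```python
import numpy as np

M = np.array([[ 1.8, -0.6, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.1, -0.5,  1.4]])

# With independent, equal noise on the three channel outputs, the
# noise standard deviation of each output coordinate scales with the
# root-sum-square of that row of M.
noise_gain = np.sqrt((M ** 2).sum(axis=1))
print(noise_gain)   # one noise-amplification figure per output coordinate
```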
So we must be very careful about attributing any "really useful purpose" to the "sensor channel sensitivities" as measured in the DxO reports, or in fact in any other way.
In part 2: "What about 'white balance'?"
[to be continued]
Best regards,
Doug