
About sensor "channel sensitivities"

Doug Kerr

Well-known member
We often hear it said that in a typical digital camera sensor, the "G" channel is "more sensitive" than the "R" and "B" channels.

And that, as a consequence, the "R" and "B" channels need to be "multiplied by larger factors" in processing the sensor outputs to become the color coordinates of individual pixels, and thus the contributions of the "R" and "B" channels are more burdened by noise than those of the "G" channel.

I put the designations "R", "G", and "B" for the three sensor channels, and often for their outputs, in quotes to remind us that, for two reasons, they do not have a direct connection with the R, G, and B coordinates of the representation of a pixel color.

And those notions are correct in a qualitative, kitchen-table sort of way.

But if we begin to look into how we can actually quantify these different "sensitivities", it gets tricky. What would be measured as these sensitivities? A couple of possibilities come to mind:

A. We could determine, for each channel of photodetectors, their output when the sensor is illuminated by a reference "potency" of the "corresponding" "sRGB primary".

Well, there are several problems with that.

• No "sRGB primary" is associated with any of the three sensor channels (except by way of misconception). We do not, for example, expect the "R" channel to respond exclusively to "the R sRGB primary".

• And in any case, there is no kind of light that is uniquely "the R sRGB primary" that could be used for this purpose. That is, there is no spectrum defined for the "R sRGB primary". There is defined only a color. That color can be physically realized by an infinity of different spectrums. And our sensor, being "non-colorimetric", will generally give different combinations of outputs for each of them.

B. We could measure the output of the channel when the sensor is illuminated by "monochromatic" light of the wavelength at which that channel has its maximum sensitivity.

But that doesn't turn out to be a result that we can do anything with.

In fact in general neither of those properties is measured. A third possibility, used in fact in the determination of the "relative channel sensitivities" in the Color Response page of the DxOMark report on a camera sensor is this:

C. We expose the sensor to a broadband light source, specifically one of the CIE standard illuminants (each of which has a precisely defined spectrum) and note the outputs of the "R", "G", and "B" sensor channels. (We have our choice of a report based on CIE illuminant D50 or on CIE illuminant A.)
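For concreteness, here is a small Python sketch of measurement C. The Gaussian channel sensitivity curves and the flat illuminant spectrum are hypothetical stand-ins, not real camera or CIE data; the point is only the mechanics of integrating each channel's response against a defined illuminant spectrum:

```python
import numpy as np

wavelengths = np.arange(400, 701, 5)  # nm, 5 nm steps

def gaussian(peak, width):
    """Hypothetical channel spectral sensitivity curve."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Hypothetical channel spectral sensitivities (not real camera data)
sens = {
    "R": gaussian(600, 40),
    "G": gaussian(540, 40),
    "B": gaussian(460, 40),
}

# Flat spectral power distribution as a stand-in for a real illuminant
illuminant = np.ones_like(wavelengths, dtype=float)

# Channel "sensitivity" under this illuminant: the integrated response
# (simple Riemann sum over the 5 nm steps)
responses = {ch: float(np.sum(s * illuminant) * 5.0) for ch, s in sens.items()}

# Express relative to the "G" channel, as DxOMark-style reports do
relative = {ch: r / responses["G"] for ch, r in responses.items()}
print(relative)
```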

Now, what can we do with those values? Well, perhaps we can get an idea of the degree by which our image will suffer more noise from the contributions of the "R" and "B" channel outputs than from the contributions of the "G" channel output.

Well, sort of. But it is hard to draw exact conclusions as to that.

Now another thing that we often hear is that, "Before we use the sensor channel outputs for further steps of processing, we should first 'normalize' them to take into account the different sensitivities of the three channels."

So suppose we do that, based on the "sensitivities" we determined from exposure of the sensor to CIE illuminant D50. One consequence is that if we now expose the sensor to CIE illuminant D50 (perhaps we speak of an area on the sensor that received the light from a "neutral" reflective object illuminated by CIE illuminant D50, it being that illuminant that illuminates the entire scene), the outputs from channels "R", "G", and "B" will be equal.
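A tiny numerical illustration of that consequence (the raw output values here are made up, purely for illustration):

```python
import numpy as np

# Hypothetical raw channel outputs ("R", "G", "B") for a neutral patch
# under CIE illuminant D50; the numbers are illustrative only.
raw_d50 = np.array([0.45, 1.00, 0.62])

# "Sensitivities" measured under D50 are, by construction, proportional
# to these very outputs (here taken relative to "G").
sensitivities = raw_d50 / raw_d50[1]

# Normalizing: divide each channel output by its measured sensitivity.
normalized = raw_d50 / sensitivities

print(normalized)  # all three channels now equal
```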

So what does that do for us? Does that mean that we can take these "adjusted" sensor outputs and use them as the underlying (linear) coordinates of the color (in sRGB terms)? No. The sensor outputs do not have a direct relationship to those three coordinates.

Rather, we must (at least) process the sensor outputs by multiplying them by a matrix that makes an approximate translation into the underlying coordinates of what we will consider the color of the light on that sensor region.

Why can't we use a matrix that makes an exact transformation from the three sensor outputs to the three coordinates (in sRGB) of the color of the light on that region of the sensor? Because the sensor is non-colorimetric, and a set of three values of the sensor outputs does not imply a specific color (so of course we cannot consistently describe that color, in sRGB or any other way).

But the "best" matrix for approximating this transformation does not work any better if it runs from the sensor outputs "corrected" for the different channel "sensitivities" than if it runs from the sensor outputs directly. (Of course the matrix would be different for these two cases, but one could easily be turned into the other.)
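That equivalence is easy to demonstrate numerically. In this sketch the matrix entries and the sensitivity values are illustrative only, not real camera data; the point is that folding the normalization into the matrix changes nothing:

```python
import numpy as np

sensitivities = np.array([0.45, 1.00, 0.62])   # hypothetical, relative to "G"
D = np.diag(1.0 / sensitivities)               # the "normalization" step

M_norm = np.array([[ 1.6, -0.5, -0.1],         # hypothetical matrix designed
                   [-0.2,  1.4, -0.2],         # to run from normalized inputs
                   [ 0.0, -0.4,  1.4]])

# The equivalent matrix for raw inputs simply folds the scaling in:
M_raw = M_norm @ D

raw = np.array([0.30, 0.80, 0.50])             # some raw sensor triple
assert np.allclose(M_raw @ raw, M_norm @ (D @ raw))
print("same result either way")
```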

Now if we really want to examine how the different channel outputs are "multiplied" in processing, and the implication of this on the noise contributions of the different channels to the image, we can see that from inspection of the matrix we decide to use.

So we must be very careful about attributing any "really useful purpose" to the "sensor channel sensitivities" as measured in the DxO reports, or in fact in any other way.

In part 2: "What about 'white balance'? "

[to be continued]

Best regards,


Doug Kerr

Well-known member

Part 2

What about "white balance"?


The matter of channel sensitivities can become entangled with the matter of "white balance." And in fact that term is used to describe two different, although related, concepts.

A. White point attainment. We would like it if in our image (as delivered in sRGB form), a "neutral" object in the scene has equal values of R, G, and B (and here I use those designations properly, to mean the three nonlinear coordinates of the sRGB color space).

We may need to take certain steps (if the camera raw data is "developed" in external software, in that software) to bring that about.

B. Chromatic adaptation correction. The human eye has a wondrous ability to deduce the reflective chromaticity of an object despite changes in the chromaticity of the light reflected from it as a result of its being illuminated by light of differing chromaticity. That ability is "disrupted" if the viewer sees a print or on-screen display of a photograph of the scene at a place where the ambient illumination has a different chromaticity than the light that illuminated the scene when the shot was taken.

We may need to take certain steps (if the camera raw data is "developed" in external software, in that software) to cancel out that disruption.

We can do A in a fairly simple way, essentially just "shifting" the chromaticity plane until the representation of the color of a neutral object coincides with the "white point" of the color space in which the image is delivered.

But to do B well is more complex, requiring the use of a "chromatic adaptation transform" to map the representations of all chromaticities in the way that is needed. But, when we have done that, we will have done A as well.
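A minimal sketch of what B involves, using the well-known Bradford chromatic adaptation transform. The Bradford matrix and the D50/D65 white-point XYZ values are standard published figures; the function itself is just an illustration, not anyone's production implementation:

```python
import numpy as np

# Bradford cone-response matrix (standard published values)
M_BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                       [-0.7502,  1.7135,  0.0367],
                       [ 0.0389, -0.0685,  1.0296]])

# White points in XYZ (standard published values, Y normalized to 1)
WHITE_D50 = np.array([0.9642, 1.0000, 0.8249])
WHITE_D65 = np.array([0.9504, 1.0000, 1.0888])

def bradford_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ color from the source white to the destination white."""
    src_lms = M_BRADFORD @ src_white      # whites in cone-like space
    dst_lms = M_BRADFORD @ dst_white
    scale = np.diag(dst_lms / src_lms)    # von Kries-style channel scaling
    M = np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD
    return M @ xyz

# A neutral under D50 maps exactly to the D65 white point, so doing B
# accomplishes A as a byproduct, as noted above:
adapted = bradford_adapt(WHITE_D50, WHITE_D50, WHITE_D65)
print(adapted)
```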

The channel sensitivity measurements

If, for whatever reason, we want to "normalize" the sensor outputs to take into account the different sensitivities of the three channels, we do that by dividing the raw channel outputs by the respective channel sensitivities, as measured. Said another way, we do that by multiplying the raw channel outputs by the reciprocals of the three channel sensitivities, as measured.

Now in the DxOMark reports, the reciprocals of the three channel sensitivities have been calculated for us, and listed in a table. And there they are captioned, "White balance scales."

I have to assume that what is meant is "white balance scaling factors".

So we must assume that there is some intimation that these factors play a role in conducting "white balance correction", maybe in sense A above, or maybe even in sense B.

Now we know that if we in fact multiply the sensor channel outputs by these three factors, the result will be that, for light whose chromaticity is that of the illuminant under which the channel sensitivities were measured (perhaps the light reflected from a "neutral" object illuminated by such an illuminant), the values of the three channel outputs will be equal.

Now does that mean that, for such an object, the values of the three sRGB coordinates will be equal, the hallmark of proper "white point correction"?

Well, only if the matrix we use to transform the sensor outputs into an approximation of the color coordinates of the light on the sensor is designed such that when its three "inputs" (the three sensor outputs) are equal, its three "outputs" (the linear form of the sRGB coordinates) are also equal.

But that is not an "inherent" property of such a matrix - it only happens if we impose it as one of the requirements on the matrix's design, so that this story will play out.

We might just as well have designed the matrix such that for inputs that are the relative values of the raw sensor outputs (not adjusted for "channel sensitivity") for the illuminant of interest, the matrix outputs will be equal.
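A small sketch of that point: any candidate matrix can be re-balanced so that the raw outputs for a neutral object under the illuminant of interest (with no sensitivity adjustment at all) map to equal outputs. We merely scale each row. The matrix entries and raw values here are illustrative only:

```python
import numpy as np

M = np.array([[ 1.6, -0.5, -0.1],          # hypothetical candidate matrix
              [-0.2,  1.4, -0.2],
              [ 0.0, -0.4,  1.4]])

raw_white = np.array([0.45, 1.00, 0.62])   # hypothetical raw triple for a
                                           # neutral under the illuminant

out = M @ raw_white                        # unequal outputs as designed
M_balanced = M / out[:, np.newaxis]        # divide each row by its output

# Now the raw (unadjusted) white maps to equal outputs:
print(M_balanced @ raw_white)
```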

So we see that the maneuvering with the "relative sensor channel sensitivities", and their reciprocals, the "white balance scaling factors", is really an arbitrary, and rather "circular", process.

In part 3, I will discuss how, however, this can actually be a "practical" part of sensor data processing.

[to be continued]

Best regards,


Doug Kerr

Well-known member

Part 3

A practical use for the "white balance scaling factors"

There is a third, very important complication in this area, one to which I have referred obliquely from time to time in this series of notes.

By way of background, recall that, although the spectrum of an instance of light explicitly determines its color, there are an infinity of spectrums that will have any given color. This situation is called metamerism, and the different spectrums that have the same color are said to be metamers of that color.

Ideally, the outputs of our sensor, even if not in terms of, for example, the sRGB coordinates, would nevertheless explicitly describe the color of the light on the sensor, regardless of which of the infinity of metameric spectrums it had. But sadly, that is not so.

We say that such a sensor is "non-colorimetric". That means, "does not measure color". That's plain enough!

Thus, we cannot in any way take a set of the three sensor outputs and transform it "reliably" to a set of sRGB coordinates (which would describe a color).

So what we often do is contrive a matrix, which we will use to multiply a set of three sensor outputs to get a set of the linear sRGB coordinates such that the discrepancy between the color indicated by that set of coordinates and the actual color of the light on the sensor would be a minimum:

• averaged over the cases of the light from a number of color patches (maybe 8, maybe 18, maybe more), having accurately-known reflective spectrums, when they are illuminated by a certain illuminant, in which case we know the color of the light from them.

Clearly, that optimum matrix would in general be different for each illuminant we contemplate.
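The contriving of such a matrix can be sketched as a least-squares fit. Here the patch data is randomly generated (an invented "true" matrix plus noise standing in for the sensor's non-colorimetric behavior), purely to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches = 18                                         # e.g. a ColorChecker-
                                                       # sized patch set
true_M = np.array([[ 1.6, -0.5, -0.1],                 # invented "ideal"
                   [-0.2,  1.4, -0.2],                 # relationship
                   [ 0.0, -0.4,  1.4]])

sensor = rng.uniform(0.05, 1.0, size=(n_patches, 3))   # raw triples per patch
target = sensor @ true_M.T                             # known linear sRGB
target += rng.normal(0.0, 0.01, target.shape)          # residual error: the
                                                       # sensor is not
                                                       # colorimetric, so no
                                                       # exact mapping exists

# Solve sensor @ M.T ≈ target in the least-squares sense
X, *_ = np.linalg.lstsq(sensor, target, rcond=None)
M_fit = X.T

print(M_fit)   # close to, but not exactly, the invented relationship
```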

Note in passing that the basic form of these matrixes does not operate on the sensor outputs "normalized for channel sensitivity". They just operate on the channel outputs. They are constructed to transform the channel outputs (as they come out of the sensor) to an approximation of the sRGB coordinates for the color of the light on the sensor. No use of "channel sensitivities" or "white balance scaling factors" is involved in their use.

Now, ideally, we would have a "library" of these optimum matrices in our camera (for use when developing the sensor raw data into sRGB form in the camera), and in our raw developing software (for use when developing the sensor raw data into sRGB form there), one for each of a repertoire of illuminants we might expect to encounter in actual photographic use (and that would be more than just the handful of CIE standard illuminants).

Then, for each shot we would put into play the appropriate matrix for the illuminant we believe to be in use there. How? By making a choice from the available set of "white balance presets", each of which is predicated on a certain illuminant.


But in fact we may not want to go to that trouble in the camera (or maybe even in the raw development software).

Now, do all these matrixes (optimized for various illuminants) differ from each other in a way that we can just convert one to another, depending on the illuminant we plan to operate under?

Well, not really.

Well, almost?

Yes, almost.

So in fact if we had one "master matrix", optimized for some arbitrary illuminant, we could transform it to be "almost optimum" for use under some other illuminant by multiplying it by a certain "correction matrix".

And in fact, we can do a fair job of this (remember, we are making an approximate extension of a process that is an approximation at best) by basing the adjustment matrixes on the relative channel sensitivities determined under each of those illuminants (or, if we wish, on the white balance scaling factors that are the reciprocals of those channel sensitivities).
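A sketch of that "master matrix plus correction" scheme. The white balance scaling factors for the two illuminants, and the master matrix itself, are illustrative numbers only; the correction matrix is simply a diagonal built from their ratio:

```python
import numpy as np

M_master = np.array([[ 1.6, -0.5, -0.1],    # hypothetical master matrix,
                     [-0.2,  1.4, -0.2],    # optimized for illuminant 1
                     [ 0.0, -0.4,  1.4]])

wb_illum1 = np.array([2.2, 1.0, 1.6])       # hypothetical WB scaling factors
wb_illum2 = np.array([1.6, 1.0, 2.1])       # measured under each illuminant

# The diagonal "correction matrix" re-scales the channels so raw outputs
# under illuminant 2 enter the master matrix as they would have under
# illuminant 1 for a neutral object.
correction = np.diag(wb_illum2 / wb_illum1)
M_illum2 = M_master @ correction            # "almost optimum" for illuminant 2

print(M_illum2)
```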

And in fact, this is one model of what may be the way that Canon cameras handle this (and is in fact the way that some "open source" raw development packages do it).

So, the Exif metadata for a Canon EOS camera includes a table with an entry for each "white balance preset" offered by the camera, with a set of "white balance scaling factors", apparently defined as in the DxOMark report, for each.

Then, if the raw data development software wants to play the "master matrix to be adjusted for the illuminant being used, in the easy way" game, it has the numbers needed to do that.

So you see, despite my cynicism, the "channel sensitivities" and their reciprocals, the "white balance scaling factors", aren't just "numbers looking for a use."

Best regards,


Doug Kerr

Well-known member
In my initial note in this series, I rather snidely relegated the "channel sensitivity factors" for a sensor to the status of "numbers looking for something useful to do". I find that I owe these numbers an apology.

Further reviews of the literature revealed the following about the operation of dcraw, Dave Coffin's iconic raw development engine, used as the heart of many raw development packages.

Essentially the first step in the processing of the sensor raw data is to multiply the outputs of the three channels (four, actually, as there are two "G" channels, which are handled distinctly, even though typically their behavior is essentially identical) by the corresponding "white balance factors", which are the inverses of the corresponding "channel sensitivity factors".

Then it is these "normalized" sensor outputs that are demosaiced to get values of all the channels for each pixel; those values are then transformed (by way of the XYZ color space) into the color space of our output image file.
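That processing order can be sketched schematically. The 2x2 RGGB quad, the trivial "demosaic" (one output pixel from one quad), and all the numbers are stand-ins for the real algorithms and data; only the order of operations reflects the description above:

```python
import numpy as np

quad = {"R": 0.30, "G1": 0.78, "G2": 0.82, "B": 0.40}   # raw RGGB values

wb = {"R": 2.2, "G1": 1.0, "G2": 1.0, "B": 1.6}         # hypothetical WB
                                                        # factors (the two
                                                        # greens kept distinct)

# Step 1: apply the white balance factors to the mosaiced data
scaled = {ch: v * wb[ch] for ch, v in quad.items()}

# Step 2: "demosaic" the quad into a single RGB pixel (averaging the greens)
pixel = np.array([scaled["R"],
                  (scaled["G1"] + scaled["G2"]) / 2,
                  scaled["B"]])

# Step 3: matrix transform toward the output color space (illustrative matrix)
M = np.array([[ 1.6, -0.5, -0.1],
              [-0.2,  1.4, -0.2],
              [ 0.0, -0.4,  1.4]])
rgb_linear = M @ pixel
print(rgb_linear)
```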

Interestingly enough, there are several ways in which these "white balance factors" are handled in dcraw. The default is that before the factors are applied, they are normalized so that the least of them (normally that is for the "G" channel) is 1.0 (and the others then become higher).

Assuming this is the mode in effect, the advantage of "normalizing" the sensor output values is that, during demosaicing, the values for the "less sensitive" channels are boosted, thus minimizing artifacts resulting from quantizing error during the demosaicing process (or at least that's how I understand it).

However, this "boosting" of the outputs of the less-sensitive channels could lead in some cases to out of range values (blowout of highlights) not present in the raw data as recorded.

So dcraw offers a second mode, in which, before the white balance factors are applied, they are normalized so the greatest of them is 1.0 (and the others then become less). This of course would seem to have the opposite of the effect I described above: the less-sensitive channels would remain especially susceptible to artifacts during demosaicing, and the more-sensitive ones would have their susceptibility increased.
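The two normalization modes are simple to sketch side by side (the factors here are illustrative, not taken from any actual camera's metadata):

```python
import numpy as np

wb_factors = np.array([2.2, 1.0, 1.6])         # hypothetical "R", "G", "B"
                                               # white balance scaling factors

# Default mode: divide by the smallest factor, so it becomes 1.0 and the
# less-sensitive channels are boosted (reducing quantization artifacts,
# at the risk of pushing values out of range).
default_mode = wb_factors / wb_factors.min()

# Alternate mode: divide by the largest factor, so it becomes 1.0 and the
# other channels are scaled down (protecting against blowout, at the cost
# of greater quantization susceptibility).
safe_mode = wb_factors / wb_factors.max()

print(default_mode)
print(safe_mode)
```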

I still have a lot more to learn and understand about all this.

Best regards,