Metameric error in our sensors; camera-lens profiles

Doug Kerr

Well-known member
Asher, in another thread, has kindly shown us how we can construct a custom colorimetric profile for a camera-lens combination operating under a certain illumination by taking a shot of an XRite Color Checker multi-patch target and then having the resulting raw file (converted to DNG form) analyzed by the XRite Passport software package.

Asher emphasizes how for each of the camera-lens combinations he does this for, he does this under three kinds of illumination that are representative of the range of kinds of illumination we will actually encounter.

One might ask the following: Assuming we are prepared to perform white balance color correction for each shoot by, for example, including a neutral test target in at least one shot under the applicable illumination, why is it necessary to have distinct profiles made under various different types of illumination? After all, we might reason, any one of those profiles, if considered in connection with the illumination under which it was made, will completely characterize the colorimetric performance of the camera.

But the clinker is that the last sentence of that reasoning is not true. The reason why is not simple to explain. But I've got the time.

First some background

Color

Color is by definition a perceptual property, not an objective one. If two instances of light, viewed under suitably consistent conditions, look to a human observer to be the same color, they are the same color.

Spectrums and chromaticity

The spectrum (power spectral density, to be precise) of an instance of light will (again, assuming appropriate conditions of observation) determine its color.

But the converse is not true. We cannot, from knowledge of the color of an instance of light, determine its spectrum. Said another way (more important to us), there are infinitely many different light spectrums that have the same color. This situation is called metamerism. And two instances of light with different spectrums but the same color are called metamers.
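
Here is a tiny numerical illustration of metamerism, sketched in Python. The numbers are entirely made up for illustration: the three rows of M stand in for the eye's color matching functions, sampled at just five wavelengths, and the two spectrums s1 and s2 are clearly different yet produce identical tristimulus values.

    import numpy as np

    # Toy stand-ins for the eye's three color matching functions, sampled at
    # just five wavelengths (made-up numbers, purely illustrative).
    M = np.array([[1, 2, 1, 0, 0],
                  [0, 1, 2, 1, 0],
                  [0, 0, 1, 2, 1]], dtype=float)

    s1 = np.array([4, 5, 6, 5, 4], dtype=float)   # one light spectrum
    s2 = np.array([4, 4, 8, 2, 8], dtype=float)   # a clearly different spectrum

    print(M @ s1)   # [20. 22. 20.]
    print(M @ s2)   # [20. 22. 20.]  -- same tristimulus values: s1 and s2 are metamers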

Color camera sensors

We may believe that a certain triad of the sensor output values (often called R, G, and B values, although that's not really a good idea, a point I won't belabor here, but to dodge it I will call them D, E, and F) will consistently indicate the color of the light falling on the sensor. But in general they don't. For reasons I won't belabor here, the sensors we use are non-colorimetric. "Colorimetric" means "measures color", and they don't. In particular, if we bathe the sensor sequentially with two kinds of light having different spectrums but the same color (metamers), we would expect the D, E, and F outputs of the sensor to be the same for each. But in general they won't be.
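
Continuing the toy numbers from the sketch above: if the sensor's three spectral sensitivities (S below, also made up) are not simply linear combinations of the color matching functions, then the same two metamers produce different D, E, and F values. That is what "non-colorimetric" means in practice.

    import numpy as np

    # The same toy metamers as above: same color, different spectrums.
    s1 = np.array([4, 5, 6, 5, 4], dtype=float)
    s2 = np.array([4, 4, 8, 2, 8], dtype=float)

    # Toy sensor spectral sensitivities for the D, E, and F channels -- not
    # linear combinations of the color matching functions, so non-colorimetric.
    S = np.array([[0, 0, 1, 2, 2],
                  [0, 1, 3, 1, 0],
                  [2, 1, 1, 0, 0]], dtype=float)

    print(S @ s1)   # [24. 28. 19.]
    print(S @ s2)   # [28. 30. 20.]  -- different D, E, F for two lights of the same color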

Again, the punch line here is that a set of D, E, and F values does not consistently represent a color.

Transformation to a color space representation

The output images delivered by our camera (or by raw development software) are in a certain color space, which among other things means that a certain set of three coordinate values unequivocally indicates a certain color.

So in the camera (or in the raw development software) we must transform a representation in terms of D, E, and F (from the sensor) into a representation in terms of r, g, and b (which will become R, G, and B when nonlinear "gamma precompensation" is applied).
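
A minimal sketch of that transformation, with a hypothetical 3x3 matrix (real matrices are derived per camera and illuminant, and real output encodings such as sRGB use a piecewise curve rather than the plain power law used here as a stand-in):

    import numpy as np

    # Hypothetical 3x3 matrix mapping sensor D, E, F to linear r, g, b.
    A = np.array([[ 1.6, -0.4, -0.2],
                  [-0.3,  1.5, -0.2],
                  [ 0.0, -0.5,  1.5]])

    def def_to_RGB(def_values, gamma=2.2):
        rgb = A @ def_values                 # linear r, g, b
        rgb = np.clip(rgb, 0.0, 1.0)         # keep within the encodable range
        return rgb ** (1.0 / gamma)          # "gamma precompensation" gives R, G, B

    print(def_to_RGB(np.array([0.5, 0.4, 0.3])))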

But how can we do that if a set of D, E, and F values does not indicate a certain color? The answer is, "not perfectly". Any mapping will work accurately for certain light spectrums that have a certain color, but less accurately for other spectrums that have that same color. The overall discrepancy in this process is called "metameric error".

Now, modest error in the luminance aspect of the color as recorded is not usually serious, but error in the chromaticity aspect can be very bothersome.

But if we are dealing with scenes that have areas with reflective spectrums (which characterize the way different wavelengths are reflected from the surface) that are "reasonable", then for any given type of illumination on the scene, we can construct a sophisticated mapping that minimizes the overall metameric error.

Note that this is related to, but different from, the matter of white balance color correction. We can perform that in such a way that, under the actual illumination on the scene, a "neutral" object will be recorded with a gray color (neutral chromaticity). But there will still be metameric error for objects of other reflective colors.
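
A minimal sketch of that white balance step (the neutral-patch response below is hypothetical): per-channel gains are chosen so that the known-neutral patch comes out with equal channel values, that is, gray. Nothing about this step removes the metameric error for other reflective colors.

    import numpy as np

    # Hypothetical sensor response to a known-neutral test target under the
    # illumination actually on the scene.
    def_neutral = np.array([0.62, 0.50, 0.34])

    # Per-channel gains that force the neutral patch to equal values (gray).
    gains = def_neutral.mean() / def_neutral

    print(gains * def_neutral)                    # equal channels: neutral records as gray
    print(gains * np.array([0.30, 0.50, 0.20]))   # other colors: still subject to metameric error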

Profiles developed with a Color Checker

This is where the generation of camera-lens colorimetric profiles using shots of a Color Checker comes in. If we generate that profile under a certain type of illumination, and use the resulting profile to govern the DEF to rgb transformation for shots taken under illumination that is pretty much like the one used when the profile was made, then the overall metameric error will be at a minimum.

The profile generation process takes note of the sensor output value combinations for each of the target's many "patches" of known spectral response (and thus known reflective color) and from that data devises a mapping strategy that will minimize metameric error over a wide range of different reflective colors, when operation is under illumination essentially the same as that used during the profiling process.
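A minimal linear sketch of that idea, fitting a single 3x3 matrix by least squares over a set of patches. The patch data here are synthetic, and real profiling software typically builds something more elaborate than one matrix; the point is only that some residual remains that no single mapping can remove, and that residual is the metameric error.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for 24 target patches: the sensor's D, E, F responses
    # and the "correct" linear r, g, b for each patch under the profiling illuminant.
    DEF = rng.uniform(0.05, 0.95, size=(24, 3))
    true_map = np.array([[ 1.7, -0.5, -0.2],
                         [-0.3,  1.6, -0.3],
                         [ 0.1, -0.6,  1.5]])
    RGB = DEF @ true_map.T + rng.normal(0.0, 0.01, size=(24, 3))   # small "metameric" scatter

    # Least-squares fit of one 3x3 matrix over all patches. The residual that
    # remains is the part no single matrix can remove.
    fit, residual, *_ = np.linalg.lstsq(DEF, RGB, rcond=None)
    print(fit.T)        # fitted D,E,F -> r,g,b matrix
    print(residual)     # leftover error summed over the patches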

But if we use that profile for a shot under substantially-different illumination, then, even though we may make a proper "white balance color correction", there will still be metameric error, meaning that objects of reflective colors other than "neutral" will not be "accurately" recorded in the image.

So this is why Asher, for example, generates colorimetric profiles for his various body-lens combinations under several different kinds of illumination.

Did this wear you out? Me too.

Best regards,

Doug
 