Doug Kerr
The specification for the digital negative file (DNG) includes provision for a camera calibration profile, to be embedded in a DNG file (or used otherwise).
The purpose of this is to guide, during processing of the camera raw data, correction for inconsistent colorimetric response of the camera (as has perhaps been characterized by testing).
The organization of this profile makes it possible to apply corrections to the color "directly implied" by the raw data (in terms of hue, saturation, and/or luminance) that differ from one region of the color space to another. For example, a different hue adjustment could be applied to two implied colors that were the same in hue and saturation but different in luminance.
The profile is not set up to be applied at the raw data level. Rather, it is applied once the raw data has been demosaiced (and perhaps otherwise processed) into an image in an RGB-style color space.
In the DNG specification, the profile is not well-defined. There are excruciating details of its data structure, but the only "definition" of its principle is given by a section describing how the profile is to be applied to a camera file during processing of the raw data. I will paraphrase that here:
1. Convert the raw data (however you will) to a linear "RGB-style" color space using the primaries of the ProPhoto color space. [This turns out to be one of the two defined versions - the linear one - of the "RIMM" (Reference Input Medium Metric) color space.]
2. Convert the image from this to the HSV color space. [Extensive discussion of this shortly.]
3. For each pixel, enter the camera calibration table with the H, S, and V values. [The table has a three-dimensional "index", in H, S, and V. Of course, we need to go to the "closest" entry - not every possible combination of H, S, and V has its own entry. This is a "quantization" matter.]
4. Apply the H, S, and V adjustments from that table entry to the "incoming" HSV values for the pixel, giving an "adjusted" HSV representation.
5. Convert the adjusted HSV representation into the color space to be used for the "delivered" image (or more likely to XYZ to pass to the "output" profile).
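The five steps above can be sketched in Python as follows. The standard-library colorsys module implements the classic hexcone RGB-to-HSV transform; note that the table layout, the nearest-entry lookup, and the reading of the adjustments as a hue shift plus saturation/value scale factors are my own assumptions for illustration, not quotations from the DNG specification.

```python
import colorsys  # stdlib hexcone RGB<->HSV transform


def apply_hue_sat_table(r, g, b, table, dims):
    """Apply a DNG-style HSV adjustment table to one pixel (a sketch).

    `table` maps (hi, si, vi) index triples to (dH, sS, sV) entries, where
    dH is a hue shift (in [0, 1) hue units) and sS, sV are scale factors;
    `dims` gives the number of divisions along each axis. These names and
    conventions are illustrative assumptions.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)       # step 2 (one reading of it)
    hd, sd, vd = dims
    hi = min(int(h * hd), hd - 1)                # step 3: quantize to the
    si = min(int(s * sd), sd - 1)                #   "closest" table entry
    vi = min(int(v * vd), vd - 1)
    dh, ss, sv = table[(hi, si, vi)]
    h = (h + dh) % 1.0                           # step 4: hue shift wraps
    s = min(max(s * ss, 0.0), 1.0)               #   sat/value are scaled
    v = min(max(v * sv, 0.0), 1.0)
    return colorsys.hsv_to_rgb(h, s, v)          # step 5: back to RGB
```

With an identity table (all entries (0.0, 1.0, 1.0)) the pixel comes back unchanged, which is a convenient sanity check on the round trip.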
At step 2, we encounter several complications.
Firstly, there is no "standardized" definition of "the HSV" color space. Numerous color spaces with coordinates having those letters have been used, generally in the "color choosers" of various graphics programs. They differ substantially in their details (even their concepts). The DNG specification does not define the particular transform to HSV that is to be used, nor does it cite any "specification" for it.
One widely recognized form of the HSV color space may be the one originally proposed by Alvy Ray Smith in 1978. (I have not yet acquired his original paper, so I can't be sure.) This is likely the version described by the current Wikipedia article on this color space.
It is defined by a set of well-known equations as a transform from the representation of colors by the coordinates R, G, and B. It does not presuppose any particular RGB-family color space, just the one currently in use. Note, however, that in all of those, the coordinates R, G, and B are nonlinear representations of the underlying "linear" values.
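For concreteness, here is that hexcone transform as it is usually published (a sketch; I am not quoting Smith's paper directly). Notice that the equations operate on whatever R, G, and B values they are handed, linear or gamma-encoded alike, which is precisely the ambiguity at issue.

```python
def rgb_to_hsv(R, G, B):
    # Hexcone RGB-to-HSV transform commonly attributed to Smith (1978).
    # Inputs and outputs are in [0, 1]; the math is indifferent to whether
    # R, G, B are linear or nonlinear (gamma-encoded) values.
    mx, mn = max(R, G, B), min(R, G, B)
    delta = mx - mn
    V = mx
    S = 0.0 if mx == 0.0 else delta / mx
    if delta == 0.0:
        H = 0.0                              # hue is undefined for grays;
    elif mx == R:                            #   0 is the usual convention
        H = (((G - B) / delta) % 6.0) / 6.0
    elif mx == G:
        H = (((B - R) / delta) + 2.0) / 6.0
    else:
        H = (((R - G) / delta) + 4.0) / 6.0
    return H, S, V
```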
Now in step 2, we start with the pixel color in linear form (rgb, in my notation here). So what does it mean to convert this to the HSV color space (even stipulating that the Photoshop form of HSV is meant)?
Well, it might mean any of these:
1. Use the Photoshop RGB-to-HSV equations, but work them on the linear values (which I call r, g, and b) rather than R, G, and B. This of course defines a new, previously unheard of, color space. (I would perhaps call it, in detailed technical writing, "hsv".)
2. Convert the "linear RIMM" representation (derived in step 1) to the nonlinear RIMM representation (this involves using the same nonlinear function defined for the sRGB color space) and then use the Photoshop RGB-to-HSV equations on the resulting values of R, G, and B.
3. Actually, in step 1, do not convert the nascent image to a linear color space using the ProPhoto primaries but rather to the actual ProPhoto color space (which uses a nonlinear function with an exponent of 1.8).
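To make the three readings concrete, here is a sketch. The sRGB and ProPhoto (ROMM) transfer-function constants are the published ones, but the function names are mine, and which encoding (if any) is to be applied before the hexcone transform is exactly the open question.

```python
import colorsys  # stdlib hexcone RGB-to-HSV transform


def srgb_encode(x):
    # Published sRGB transfer function (linear segment below the cutoff)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055


def prophoto_encode(x):
    # Published ProPhoto (ROMM) transfer function: linear below 1/512,
    # exponent 1/1.8 above
    return 16.0 * x if x < 1 / 512 else x ** (1 / 1.8)


def to_hsv_interpretation(r, g, b, reading):
    """Convert one linear pixel to HSV under one of the three readings."""
    if reading == 1:        # 1: feed the linear values straight in ("hsv")
        R, G, B = r, g, b
    elif reading == 2:      # 2: sRGB-style encode first (nonlinear RIMM)
        R, G, B = (srgb_encode(c) for c in (r, g, b))
    else:                   # 3: ProPhoto (gamma-1.8) encode first
        R, G, B = (prophoto_encode(c) for c in (r, g, b))
    return colorsys.rgb_to_hsv(R, G, B)
```

For an 18% gray the three readings give V of about 0.18, 0.46, and 0.39 respectively, so the choice is far from academic.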
I'd be pleased to hear from anyone working in this field who could advise which of these (or perhaps something wholly different) is meant.
Best regards,
Doug