
aRGB vs sRGB in Camera using RAW

Larry Brown

New member
Hello, and please excuse my ignorance on this topic, as this question was recently brought to my attention. I may lack the proper technical knowledge to ask this question properly, but I am sure all will understand what I am going to ask. With a quick review of this section of the forum I did not see an answer to my question, although I am sure it may have been discussed before.

The question is: if I am shooting RAW, what is the importance of the color space I have chosen in camera to capture? It was my understanding that when shooting RAW the entire color gamut of the image sensor is captured, that only the embedded JPEG in the RAW file would be affected, and that upon processing in the RAW converter the aRGB, sRGB, or ProPhoto space could be used with no loss of gamut.

Not to question the wisdom of the person who brought this to my attention, but it raised a question about my perception of the RAW capture, and I thought I would ask, as it may shed some light on this for me, and others may find it interesting as well. Naturally I want to keep my workflow as simple as possible, but I do not want to shortchange my RAW files by crippling the color gamut captured by choosing the wrong color space. Am I missing something? RAW is RAW, right? All color spaces are captured within the RAW image file to be converted later, right?

Thanks in advance for sharing your wisdom on this topic; I hope I asked the question properly.

I am still in awe of the knowledge contained in this forum; many here are much more informed and knowledgeable than I!

Larry
 

Doug Kerr

Well-known member
Hi, Larry,

The question is: if I am shooting RAW, what is the importance of the color space I have chosen in camera to capture?

The raw output is (almost verbatim) the outputs of the photodetectors in the sensor. In a sense, this has its own "private" color space (although we rarely speak of it that way, and in fact in some ways it really isn't exactly a color space).

The selection of a color space for the JPEG output has no effect on the raw data itself.

I don't know whether a raw file has in it a tag for the color space selected for the JPEG output, whether it has a profile embedded for that color space, or whether the choice of that color space affects the embedded JPEG preview in the raw file. (Others here will know.)

But in any case, that selection has no effect on the raw data itself. You need have no concern about "restricting" the gamut potential of the raw data through a color space setting.
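
For those who want to check what their own raw files carry, a minimal sketch (assuming ExifTool is installed; the filename "photo.cr2" is a placeholder, and whether a ColorSpace tag is recorded at all varies by camera maker):

Code:
import subprocess

# Ask ExifTool to report the ColorSpace tag, if the raw file carries one.
# "photo.cr2" is a placeholder; substitute your own raw file.
result = subprocess.run(
    ["exiftool", "-ColorSpace", "photo.cr2"],
    capture_output=True, text=True, check=True,
)
print(result.stdout or "No ColorSpace tag reported for this file.")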
 

Ken Tanaka

pro member
Hi, Larry,
...

I don't know whether a raw file has in it a tag for the color space selected for the JPEG output, whether it has a profile embedded for that color space, or whether the choice of that color space affects the embedded JPEG preview in the raw file. (Others here will know.)

But in any case, that selection has no effect on the raw data itself. You need have no concern about "restricting" the gamut potential of the raw data through a color space setting.

Doug's remarks are dead-on, Larry. Choose your compressed color space with gay abandon, as it has no impact on a raw image file.

I do, however, believe that it does affect processing of the embedded JPG preview in the raw image file. This is the image that you're shown when you preview the image in the camera and upon which the in-camera histograms are built. Still, it's very, very rare for color space to make any meaningful difference in this context.
 

Larry Brown

New member
Thank you, Doug, for your reply; you have reinforced my understanding of the RAW capture. I had found myself confused about whether, by selecting either color space, I was limiting the data put in the RAW file. Ken also states in his reply that it is as you describe, but adds that the chosen color space affects the embedded JPEG preview (which is what I was referring to when I wrote JPEG earlier).

As for the sensor data at the photosite, it is my understanding that the sensor sees no color, only grayscale analog info. Algorithms are then used to determine color from that info after the analog-to-digital conversion. Hope I was close to getting this right.

Larry


Larry Brown

New member
Thanks, Ken, for taking the time to comment on this topic. I agree with your assessment that the JPEG preview has the in-camera color space applied to it; it makes sense to do it that way and not limit the color gamut of the RAW file.

Yes on that JPEG data being used to generate the histogram info. I knew this but did not think of it until you mentioned it, thanks!

Larry


Doug Kerr

Well-known member
Hi, Larry,

As for the sensor data at the photosite, it is my understanding that the sensor sees no color, only grayscale analog info. Algorithms are then used to determine color from that info after the analog-to-digital conversion. Hope I was close to getting this right.
Well, the sensor typically has detectors that have three different spectral sensitivities. Any given one can only give a single output value - analog initially, but later digitized (individually for each photodetector). If we had one photodetector of each kind at each pixel location, from their three outputs together (in digital form) we could determine explicitly the color of the light at that pixel.

But there is not a trio of photodetectors at each pixel location. Rather, the three kinds are arrayed in a repetitive pattern. We can imagine that there are three color-determining aspects of the image ("layers"), to which the three kinds of photodetectors respond. We can think of those rather imprecisely as the "red", "green", and "blue" layers of the image.

We sample each of these layers (with a photodetector of the corresponding sensitivity), not at every pixel location, but only once for every two or four pixel locations, and not at the same phase for each layer (since photodetectors of a given kind are not collocated).

From each of the three sets of "samples", an algorithm (we can think of it simplistically as doing interpolation) can reconstruct that entire "layer", with an estimate of its value at each pixel location where there is not an actual measured value.

Then from the values of the three layers at each pixel location (one measured, two estimated, which ones are which depending on the specific pixel), we have an estimate of the color at that pixel.
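
As a concrete illustration of that interpolation, here is a minimal bilinear-demosaic sketch in Python (assuming an RGGB Bayer layout and using numpy; illustrative only, and not the algorithm any particular camera or converter actually uses):

Code:
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (illustrative sketch).

    raw: 2-D array of photodetector values, one sample per pixel location.
    Returns an H x W x 3 array in which the missing color samples at each
    pixel are estimated by averaging the nearest measured neighbors.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Masks marking which kind of photodetector sits at each location.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        plane = np.where(mask, raw, 0.0)
        count = mask.astype(float)
        # Average the measured samples in each 3x3 neighborhood
        # (np.roll wraps at the edges, which is fine for a sketch).
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        nbr_sum = sum(np.roll(plane, s, axis=(0, 1)) for s in shifts)
        nbr_cnt = sum(np.roll(count, s, axis=(0, 1)) for s in shifts)
        estimate = nbr_sum / np.maximum(nbr_cnt, 1.0)
        # Keep the measured value where one exists, the estimate elsewhere.
        rgb[..., ch] = np.where(mask, raw, estimate)

    return rgb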

Because the individual photodetectors have only a single output, and do not of themselves discern the color at their location (both of which would be true of the photodetectors in an actual "grayscale" sensor array), it is tempting to say that they are "grayscale" photodetectors, or that their outputs are "grayscale" data, but that metaphor is not helpful here, nor even accurate.

I hope that helps to understand this kind of sensor, which of course we call the "color filter array" (CFA) sensor.

Best regards,

Doug
 

Larry Brown

New member
Thanks once again Doug! Your explanation has now given me a better understanding of the sensor function and how color is determined. The "grayscale" reference I had used was from an explanation from a class I took a while back. He, the instructor, may have simplified his description of the process by using the term "grayscale" and in your explanation I can see why he may have used that term. Your answer was much more technical in nature, explained it perfectly IMO and I do appreciate the effort you gave here in response to my question.

Larry
 

Asher Kelman

OPF Owner/Editor-in-Chief
Let me fess up to introducing the idea of bad things happening when one uses the sRGB color space in the camera. I was wrong! Like setting the white balance, it only applies to the jpg; but it might alter what's shown, as the colors outside the sRGB space would have to be remapped within it and so will be very slightly off. However, as Ken points out, that might not be detectable on your little LCD.

However, with live view and a good monitor, one might be able to see the difference. Shooting tethered, or shooting fashion or nature, the color space might be significant, as some natural colors are simply outside of what sRGB can show. I'm not certain what the controls are for color space in the output to monitors, depending on whether one looks at the jpg in the RAW file, or at the jpg or the RAW translated by the software on one's tethered computer.

So while there's little to no risk, it seems, of RAW processors being tricked into using the sRGB color space, I don't think we should use that space unless we are giving files to be printed at a local drug store or posting on the web.

Now monitors are showing colors of the entire Adobe RGB color space. Soon it will be ProPhoto RGB, which is even wider and closer to human vision. So we should get into the habit of saving our files in the largest space. This allows us to make adjustments that will not cause posterization, and we will be able to take advantage of the latest generation of wide-gamut printers.

Of course, if we do this, we have to keep checking with our software that we are not out of gamut. The problem is that we might be able to specify a color that either the printer cannot print or the eye cannot see, and that can give rise to a messy result with weird effects. This can arise when doing aggressive hue changes, or multiplication or subtraction of layers. So always remember to set Photoshop to the color space of the printer you plan to use, so that one stays within that gamut.

Now if all we are doing is snaps for the local drug store to print, or for the web, then for sure light the subject well, use the base ISO of the camera, stay in sRGB, set auto WB, and the pictures will likely not need any correction. This could even apply to wedding photographers: if they use a camera with a good dynamic range and the subject is well exposed, they could get away with sRGB.

For everything else that's artistic, not using the widest color space is, IMHO, short sighted especially in light of the latest trends in monitors and printing technology.

One does not need a wide gamut, focus, correct color or detail for artistic photography. However, we do need to have the choice.

So why throw away the precious data? sRGB should not be used in most cases unless for delivering to someone who expects just that!

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

For everything else that's artistic, not using the widest color space is, IMHO, short sighted especially in light of the latest trends in monitors and printing technology.
One issue that must be kept in mind is that, if we assume a certain "bit depth" in our file format, then a larger chromaticity gamut (as for example, for Adobe RGB over sRGB) leads to lesser chromaticity precision, which may manifest itself as exacerbated banding.

I'm in no position to judge the tradeoffs here, but it is important to note that chromaticity gamut expansion is not of itself an unalloyed benefit without support in the encoding system. "More information is more information", and it must be carried.
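
To make the precision point concrete, here is a tiny numerical sketch (the "60%" figure below is invented for illustration, not a measured property of any real pair of color spaces): if the same range of real-world colors occupies a smaller fraction of a wider space's coordinate axis, fewer distinct 8-bit codes survive along a smooth gradient, so the steps between adjacent codes are coarser.

Code:
import numpy as np

# A smooth ramp of colors along one coordinate axis.
ramp = np.linspace(0.0, 1.0, 10_000)

def distinct_codes(fraction_of_axis, bits=8):
    """Distinct codes left after quantizing the ramp to the given bit depth."""
    codes = np.round(ramp * fraction_of_axis * (2**bits - 1))
    return len(np.unique(codes))

# In the narrower space the ramp spans the full axis; in the wider space
# (hypothetically) only 60% of it, since that axis also has to cover
# colors the narrower space cannot encode.
print("narrow space:", distinct_codes(1.0))   # 256 distinct levels
print("wide space:  ", distinct_codes(0.6))   # ~154 levels -> coarser steps, more visible banding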

I'm sure others here can give an outlook on the real impact of this tradeoff.

Of course, if we are going to carry our image in a higher bit depth form, we can avert this degradation. Even schemes that use the sYCC color space (which is always involved in a "JPEG" file format) to its full extent, thus attaining a gamut "beyond sRGB", can be beneficial. I discuss this some in my article, "The sYCC Color Space", available here:

http://dougkerr.net/Pumpkin#sYCC
 
Hi, Asher,

One issue that must be kept in mind is that, if we assume a certain "bit depth" in our file format, then a larger chromaticity gamut (as for example, for Adobe RGB over sRGB) leads to lesser chromaticity precision, which may manifest itself as exacerbated banding.

Indeed, which is why one should never use ProPhoto RGB in an 8-b/ch mode.

In fact, it's best to save one's data in the smallest gamut that doesn't clip the camera's natural gamut, or, even better, the smallest gamut that doesn't clip the actual image's data (not all images have saturated colors for all primaries). In a Raw converter like CaptureOne Pro I routinely save my conversions with the camera's 'profile' by setting it to "Embed the camera profile" in the Process Recipe controls. Photoshop has no problem opening files in that space.

I'm sure others here can give an outlook on the real impact of this tradeoff.

I'm no expert, but I consider Bruce Lindbloom to be one, and he describes the coding efficiency of various colorspaces on his pages, and warns against certain practices:
http://www.brucelindbloom.com/WorkingSpaceInfo.html

Having said that, it can be necessary to use a larger working space if we intend to do significant saturation boosts during postprocessing, in which case we can convert to a larger gamut space with an "Absolute Colorimetric Rendering" intent prior to postprocessing.
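
For what it's worth, a minimal sketch of such a "promotion" using Pillow's ImageCms (the filenames "capture.jpg" and "ProPhoto.icm" are placeholders, and this path works on 8-bit/channel RGB, so in real work you would do the conversion at a higher bit depth in your raw converter or editor):

Code:
from PIL import Image, ImageCms

# Source: an sRGB-encoded image; destination: a wide-gamut working space.
# "capture.jpg" and "ProPhoto.icm" are placeholder names.
src_profile = ImageCms.createProfile("sRGB")
dst_profile = ImageCms.getOpenProfile("ProPhoto.icm")

im = Image.open("capture.jpg").convert("RGB")

# Absolute colorimetric intent reproduces in-gamut colors without
# remapping, which is what you want when promoting to a larger space.
wide = ImageCms.profileToProfile(
    im, src_profile, dst_profile,
    renderingIntent=ImageCms.INTENT_ABSOLUTE_COLORIMETRIC,
)
# Remember to embed/assign the destination profile when saving, or
# downstream applications will assume the pixels are still sRGB.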

Bruce Lindbloom developed a BetaRGB working space that does a pretty good job of balancing the concerns, and it is a wider space than we are likely to encounter, even in inkjet output modalities. BTW, it's pretty close to my 1Ds3's camera profile as output by CaptureOne.

Cheers,
Bart
 

Andrew Rodney

New member
Indeed, which is why one should never use ProPhoto RGB in an 8-b/ch mode.

Bad practice if you don't have to (that is, if you have more bits available in the first place). But when Kodak developed the space well over a decade ago, and the late Bruce Fraser was called in to hammer on it, he found that, at least with the image processing software of the time (Photoshop), editing in 8-bit wasn't, as Kodak claimed, overly caustic, if I can use that word. I'll see if I can dig up Bruce's actual comments. IOW, not a good practice, especially if you have high-bit data, but not a deal breaker if you only have 8-bit ProPhoto.

In fact, it's best to save one's data in the smallest gamut that doesn't clip the camera's natural gamut, or even better the smallest gamut that doesn't clip the actual image's data (not all images have saturated colors for all primaries). In a Raw converter like CaptureOne Pro I routinely save my conversions with the camera's 'profile' by setting it to "Embed the camera profile" on the Process Recipe controls. Photoshop has no problem opening files in that space.

Assuming said profile actually defines the camera's "gamut" (cameras don't really have a gamut but rather a color mixing function), and further, depending on how that profile was made. In the traditional sense, one uses the camera to capture a target which has a fixed gamut, which can already limit the "gamut potential" of the capture. Then the raw data, with some color space assumption, has to be rendered to build the profile. So I am very skeptical that a traditional camera ICC profile comes close to defining the color mixing function of the system, and apparently so are many on the ICC. Without the spectral sensitivities of the chip and the illuminant, the creation of camera ICC profiles involves a lot of guesses. The ICC photo committee has tried to come up with differing ways to actually overcome these issues. The best I've seen is one whereby a tiny spectrophotometer captures the illuminant of each capture and, along with the known spectral sensitivities of the chip components, a profile is built on the fly and embedded in the capture. One of the limitations here is the price and size of the spectrophotometer.

In terms of ProPhoto, well, in Adobe raw processors that's the color space (with a linear TRC) used for the color processing, so I see little reason to move from that upon export to a smaller space such as Lindbloom's. In other workflows, it might be useful.
 
In terms of ProPhoto, well, in Adobe raw processors that's the color space (with a linear TRC) used for the color processing, so I see little reason to move from that upon export to a smaller space such as Lindbloom's. In other workflows, it might be useful.

Hi Andrew,

What would the practical benefit be to 'promote' a smaller space to a large one, other than some headroom for increasing the encoding capability of higher saturation (at the expense of losing precision at lower saturations)? Also bearing in mind that the gamut of a colorspace is only partially used in most images (e.g. not much data in the blue side of a trichromatic space for something with a red color).

When making a comparison between the gamut of BetaRGB and CaptureOne's profile for the 1Ds3 (example), there is only a small difference (1Ds3 overall a bit larger), so what's to be gained from converting to (or rather embedding in) a larger gamut ProPhoto encoding space (example), and back down for actual output?

Cheers,
Bart

P.S. Gamut visualisations courtesy of the useful http://www.iccview.de/index.php?lang=en
 

Andrew Rodney

New member
One reason we need big RGB working spaces is that they are based on theoretical emissive devices (ProPhoto being very theoretical when you look at what falls outside human vision). That is necessary because of their simple and predictable shapes. So while there are way, way more colors that can be defined in something like ProPhoto RGB than you could possibly print, we have to deal with a significant disconnect between these simple shapes of RGB working spaces and the vastly more complex shapes of output color spaces.

Simple matrix profiles of RGB working spaces, when plotted three-dimensionally, illustrate that they reach their maximum saturation at high luminance levels, which makes sense since they are based on building more saturation by adding more light. The opposite is seen with print (output) color spaces, where this is accomplished by adding ink; a subtractive color model. One reason we need such a big RGB working space like ProPhoto RGB is, again, its simple shape and size, to counter that disconnect when mapping to the output space without potentially clipping.

There can be issues where very dark colors of intense saturation, which do occur in nature and which we can capture with many devices, don't map properly with a smaller working space. Many of these darker colors fall outside Adobe RGB (1998). When you encode into such a space, you clip the colors to the degree that smooth gradations become solid blobs in print, again due to the dissimilar shapes and differences in how the two spaces relate to luminance. I suspect this is why Adobe picked ProPhoto RGB as the processing color space in their raw converters.

As to the Capture1 profile: again, based on my experience building camera profiles, and discussions with the ICC photo group about such profiles, I'm not suggesting they don't work, I'm suggesting they don't define what the capture device is capable of recording, and thus the size of the gamut you show probably isn't useful for comparisons. Again, a camera profile can't have a gamut larger than the target gamut used to build the profile.
 

Doug Kerr

Well-known member
The differences, Doug, are not apparent when examining the 2D chromaticity diagram; they should be visible in a 3D plot of the two profiles. You would plot an emissive display profile (or RGB working space) on top of a printer profile, and should see, when you zoom into the lower L* areas, where dark and saturated colors (like orange) will "clump" together when the working space is smaller versus larger, due primarily to the differences between the subtractive and additive color spaces at play.
 

Andrew Rodney

New member
Wouldn't the profile builder be able to extrapolate a bit beyond the actual target gamut?
Bart

Maybe, I guess, I don't know... I don't see that with any other profiles, so I don't know why camera profiles would be different. Yes, there is a lot of extrapolation in building a profile (you measure 900 patches; in a perfect world, you'd measure 16.7 million). But I don't think the extrapolation extends the gamut; that's fixed based on the measured data.

If you look at the three main camera profile targets, you see first the Macbeth 24-patch, which was never built nor intended for this purpose. So GMB came up with the ColorChecker DC, which had a number of very glossy (saturated) colors but gave many people grief, as any amount of reflection would hose the profile. Then they removed the glossy patches and built the SG target. Its gamut is smaller. So again, I don't see how the gamut of a camera profile can be any greater than the target's. Plus, to even build the profile, you have to render the image into RGB pixels. For that to happen, a raw converter has to assume (as Doug discussed) a color space for what is a single pixel of data from a colored filter. Is the assumption correct? What's the raw processing color space? For example, in Aperture, I'm pretty darn sure it's Adobe RGB (1998). Take the same raw file of a target and process it in differing raw converters: even if you could move the various sliders to match, I suspect you'd have a lot of different data (and gamuts) before you even build the profile. So treating a digital camera like a scanner in terms of profiling, unless you set up the camera in a scanner-like state (copy work), makes a huge number of assumptions. I'm not saying such profiles don't work; I'm saying they don't really describe the capture gamut, among other things.
 

Doug Kerr

Well-known member
This figure is an oblique projection of the three-dimensional plot of the gamut of the sRGB color space in the CIE xyY coordinate system (that is, in the CIE xyZ color space). The horizontal plane is that of the familiar "x-y" chromaticity diagram; the vertical axis represents luminance.

At any luminance (height on the solid), a horizontal section through the solid shows the chromaticity gamut available at that luminance (within the legitimate range of values of R, G, and B defined for the sRGB color space).

[Figure: rgb_gamut-r.gif — oblique 3-D view of the sRGB gamut solid in CIE xyY]


At the bottom (low values of luminance), the chromaticity gamut is the one we often see for the sRGB color space (the triangle defined by the points R, G, and B, the chromaticities of the sRGB primaries). (And we may think that it is available for all luminances, but far from it, as we can easily see from the figure.)

The point labeled "W" is the chromaticity of the sRGB white point, the chromaticity of any color for which R=G=B.

The point labeled Bsat is at the luminance at which the chromaticity gamut begins to shrink back from the "B" point. Above that, we cannot, for example, have any "blue" color whose saturation is as high as we can attain for that hue at low luminance.

The point labeled Rsat is at the luminance at which the chromaticity gamut begins to shrink back from the "R" point as well. Above that, we cannot have any "red" color whose saturation is as high as we can attain for that hue at low luminance.

The point labeled Gsat is at the luminance at which the chromaticity gamut begins to shrink back from the "G" point as well. Above that, we cannot have any "green" color whose saturation is as high as we can attain for that hue at low luminance.

Finally, at Wmax, we have the highest luminance that can be represented by the RGB coordinates, and for that luminance there is only one chromaticity possible (that of the white point): the chromaticity gamut has shrunk to just a point, where by definition the saturation is zero.
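
Those "pull-back" luminances are simply the relative luminances of the fully saturated primaries, which can be read straight off the sRGB RGB-to-XYZ matrix. A minimal Python sketch (the matrix values are the standard sRGB/D65 ones):

Code:
import numpy as np

# sRGB linear RGB -> CIE XYZ matrix (D65 white point); rows are X, Y, Z.
M = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def xyY(rgb_linear):
    """Chromaticity (x, y) and luminance Y of a linear sRGB triple."""
    X, Y, Z = M @ np.asarray(rgb_linear, dtype=float)
    s = X + Y + Z
    return X / s, Y / s, Y

# The Y printed for each fully saturated primary is the height at which
# the gamut solid starts to pull back from that corner (Rsat, Gsat, Bsat);
# white (R=G=B=1) gives the apex Wmax.
for name, rgb in (("R", (1, 0, 0)), ("G", (0, 1, 0)), ("B", (0, 0, 1)), ("W", (1, 1, 1))):
    x, y, Y = xyY(rgb)
    print(f"{name}: x={x:.3f}  y={y:.3f}  Y={Y:.4f}")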

Best regards,

Doug
 

Doug Kerr

Well-known member
The matter of the "color space" of the sensor array itself is a fascinating one. It is discussed at length in my new technical article, "Digital Camera Sensor Colorimetry", available here:

http://dougkerr.net/Pumpkin#Sensor_Colorimetry

This article is very fat, and was actually written for the selfish purpose of recording the many fascinating things I had learned, and come to somewhat understand, during an intense week-long study of the literature covering various interlocking technical areas. (If I don't do that, it will all evaporate with a half-life of about three weeks.)

Now underway is the extraction from this heap of some articles of manageable scope and length covering particular aspects of the field. One is the matter of "What color space does the camera sensor actually operate in [essentially, in what color space is the raw data to be interpreted, once we get over the fact that there are not three photodetectors at any given pixel location], and what are its primaries?"

The answer to that is fascinating (perhaps even shocking), and I'll give a summary here.

• The typical camera sensor array does not maintain "metameric accuracy", which means that although there can be many light spectrums that all have the same color, the sensor does not necessarily give the same set of three outputs (implying the same color) for all of them. Not desirable, but a result of various design compromises.

• Thus, the sensor cannot be said to practice any color space at all.

• However, without actually ever saying so, we treat it as if it did practice a color space (it does "approximately"); else we could not transform its outputs, after demosaicing (or perhaps more precisely, during demosaicing), to any bona fide "delivery" color space, such as sRGB or Adobe RGB. (A small numerical sketch of such a transform follows this list.)

• Stipulating to the notion that the sensor practices a color space, what are its primaries? Fascinatingly enough, they are all imaginary (like the primaries of the CIE XYZ color space). They could not be physically generated, and if they could, we could not see them. They have existence only in that they follow the recognized mathematical rules about what happens when we combine two or more primaries to produce an actual visible color.

• Could that conundrum have been avoided by "proper" design of the sensor? No. For a sensor to practice a color space having real (physically-realizable) primaries, its "channel" spectral response curves would have to be negative for some range of wavelengths. That is physically impossible. (We cannot design a filter that, if placed in front of a photodetector, would result in the photodetector giving a positive output for some wavelengths and a negative one for others. No, an offset won't fix that - the output must be zero for "no" stimulus.)

• Wouldn't it be nice to choose filter responses for a sensor array such that its three outputs directly related to the sRGB color space (that is, were essentially the values R, G, and B in linear form)? Nice, but not possible. Such a sensor would have real primaries (the sRGB primaries, in fact). As a corollary of the matter described just above, for that to be so, each response curve would have to be negative over some range of wavelengths, not physically possible.
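
As promised above, here is a minimal sketch of the "treat it as a color space" transform: a single 3x3 "color matrix" taking white-balanced, demosaiced camera RGB to CIE XYZ, and then on to linear sRGB. The camera matrix below is invented purely for illustration; real matrices come from profiling the specific sensor, and are only approximate precisely because the sensor's spectral responses do not give exact metameric accuracy.

Code:
import numpy as np

# Hypothetical camera RGB -> CIE XYZ matrix (invented numbers, for
# illustration only; a real one is derived by profiling the sensor).
CAM_TO_XYZ = np.array([
    [0.61, 0.25, 0.09],
    [0.27, 0.67, 0.06],
    [0.02, 0.12, 0.81],
])

# Standard CIE XYZ -> linear sRGB matrix (D65).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def raw_to_linear_srgb(raw_rgb):
    """One pixel (after demosaicing and white balance) -> linear sRGB."""
    xyz = CAM_TO_XYZ @ np.asarray(raw_rgb, dtype=float)
    return XYZ_TO_SRGB @ xyz

print(raw_to_linear_srgb([0.5, 0.4, 0.3]))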

I have to tell you that learning all this was somewhat of a shock to me.

Best regards,

Doug
 