
Measuring a Camera's "ISO"

Hello, my first post here in the tech zone:

Discussion rages elsewhere (DPR Sigma forum) as to the best "ISO" to use for the various Sigma models, yet I've only found one model (SD1 Merrill) that has the EXIF 'SensitivityType' tag included in the meta-data. That means there are various claims over there as to what is the "base" ISO for a particular model.

So I'm trying to measure the actual "ISO" for some of my cameras, without sophisticated equipment and bearing in mind that I am forced to assume an ISO method for the calculation - currently sticking to SOS per DC-004.

The subject for this post is my method of determining the image plane exposure and how that method can be improved, if possible. Here is the current method:

I place a Kodak R27 card outside in cloudless daylight, white side up, somewhere before or after solar noon.

I plunk a lux-meter (cheap Taiwan model, 5% accuracy claimed) in the middle of the card and record the lux (or fc if the day is too bright for the meter).

I set the camera to spot metering, infinite focus, aperture priority, no EC, and note the recommended f-number and shutter period; then shoot raw for later examination in RawDigger and for conversion to sRGB in the proprietary converter (SPP).

If necessary, I convert measured fc to lux.

I convert the illuminance to luminance per Lv = Ev/pi, assuming that the card is Lambertian and that the spot metering dispenses with consideration of steradians and stuff.

I calculate the mean sensor exposure Hm from the simple formula in Section 5 of DC-004:

Hm = 0.65*Lv*t/(N^2)

I convert the raw to sRGB JPEG and note the image brightness (green channel) which should be 118/255 (but rarely is!).
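In case it helps anyone check the arithmetic, here is a rough Python sketch of the calculation described above. The input numbers are made-up examples, not actual measurements, and the last step assumes I have DC-004's SOS definition (S = 10/Hsos) right:

    import math

    # Example inputs - NOT real measurements, just to show the arithmetic.
    E_lux = 80000.0       # illuminance on the card from the lux-meter, lx
    R     = 0.9           # reflectance of the R27 white side
    t     = 1.0 / 2000.0  # shutter period recommended by the camera, s
    N     = 8.0           # recommended f-number

    # Card luminance, assuming a Lambertian surface: L = R*E/pi  (cd/m^2)
    L = R * E_lux / math.pi

    # Mean focal-plane exposure per DC-004 Section 5: H = 0.65*L*t/N^2  (lx.s)
    H = 0.65 * L * t / (N ** 2)

    # If the spot meter really places the card at the 118/255 sRGB level,
    # then H approximates Hsos and the SOS follows from S = 10/Hsos.
    S_sos = 10.0 / H

    print(f"L = {L:.0f} cd/m^2, H = {H:.3f} lx.s, implied SOS ~ {S_sos:.0f}")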

Can anybody suggest improvement to or comment on the above method without having to buy more stuff?

Thanks for looking,

Ted
 

Jerome Marot

Well-known member
I am a bit puzzled by your project. What are you trying to achieve with the measurement?

Besides, Sigma cameras with their particular sensor and the way it sees color are probably not well suited to the standard ISO measurement procedure. The output of their sensor needs considerable massaging to be fitted to the RGB model, and the main problem is to avoid saturating the blue layer.
 

Doug Kerr

Well-known member
Hi, Ted,

Welcome to OPF. And I am delighted to see you looking into the matter of camera sensitivity metrics. And thank you for the very clear presentation of your method and such.

I'll embed some further comments.

<snip>
So I'm trying to measure the actual "ISO" for some of my cameras, without sophisticated equipment and bearing in mind that I am forced to assume an ISO method for the calculation - currently sticking to SOS per DC-004.

I think working on the basis of the ISO SOS is a good choice. As you may know, it is very common for the "ISO" settings on modern cameras to (allegedly) be on the basis of the ISO SOS. And indeed there is provision in the Exif metadata for indicating which sensitivity metric is in use, but populating that seems rather iffy!

I place a Kodak R27 card outside in cloudless daylight, white side up, somewhere before or after solar noon.

I plunk a lux-meter (cheap Taiwan model, 5% accuracy claimed) in the middle of the card and record the lux (or fc if the day is too bright for the meter).

Likely the same one I have! (I got it when doing some work on exposure meters!)

I set the camera to spot metering, infinite focus, aperture priority, no EC, and note the recommended f-number and shutter period; then shoot raw for later examination in RawDigger and for conversion to sRGB in the proprietary converter (SPP).

If necessary, I convert measured fc to lux.

Sure.

I convert the illuminance to luminance per Lv = Ev/pi, assuming that the card is Lambertian and that the spot metering dispenses with consideration of steradians and stuff.

Well, the relationship is L = RE/pi, where R is the reflectance of the target surface. But we can perhaps reasonably assume R=1 for the white target.
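(For scale, and purely as an invented example: E = 80,000 lx on the card with R = 0.9 gives L = 0.9*80000/pi, about 22,900 cd/m^2; with R = 1 it would be about 25,500 cd/m^2.)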

I calculate the mean sensor exposure Hm from the simple formula in Section 5 of DC-004:

Hm = 0.65*Lv*t/(N^2)

A lot of empirical allowances in that. but it is certainly a reasonable premise for this kind of investigation.

I convert the raw to sRGB JPEG and note the image brightness (green channel) which should be 118/255 (but rarely is!).

Yes, there is a lot of that going around!

Can anybody suggest improvement to or comment on the above method without having to buy more stuff?

Actually, I think that is a very reasonable way to proceed.

Keep in touch.

Best regards,

Doug
 
I am a bit puzzled by your project. What are you trying to achieve with the measurement?

Jerome, it started with one of my cameras that lists the EI range in the manual as 50-800 with no mention of "extended" for 50. Other manuals from the same manufacturer all mention "extended" in connection with 50 ISO. So, I set out to measure the SOS which turned out to be much closer to 100 than it was to 50. That got me to thinking about the later "Merrill" models for which it is often claimed that "base" (I hate that word in this context) ISO is 200.

Besides, Sigma cameras with their particular sensor and the way it sees color are probably not well suited to the standard ISO measurement procedure. The output of their sensor needs considerable massaging to be fitted to the RGB model, and the main problem is to avoid saturating the blue layer.

I don't believe that the Foveon is any different from Bayer or other CFA-type sensors as far as sensor sensitivity is concerned. It's just that there are 3 sensels per pixel and I only measure the green one.

Could you please explain why "Sigma cameras with their particular sensor and the way it sees color are probably not well suited to the standard ISO measurement procedure." and tell us which one of the four or more standard ISO procedures you are referring to?

I'm still using the CIPA standard, by the way. As Doug has said in the past, ISO standards appear to be designed especially to confuse the reader!

regards,

Ted
 
Hi, Ted,

Welcome to OPF. And I am delighted to see you looking into the matter of camera sensitivity metrics. And thank you for the very clear presentation of your method and such.

Well, the relationship is L = RE/pi, where R is the reflectance of the target surface. But we can perhaps reasonably assume R=1 for the white target.

Best regards,

Doug

Hello Doug, please pardon the snipping.

Yes, I forgot to incorporate the 'R' in the posted relationship - well spotted. Fortunately it is not missing from my spreadsheet and, as you know, the Kodak white side is indeed 0.9, not 1.0.

I'm mildly concerned about the lux-meter reading: I've been moving the camera into position hand-held, after removing the lux-meter sensor. I'm thinking that I should set the camera up on a small tripod and measure the lux with the camera in-situ. Using the camera remote control (assuming I can find it and that it still works) would improve the method (arm blocking the light) even more, IMHO.

best regards,

Ted.
 

Jerome Marot

Well-known member
Could you please explain why "Sigma cameras with their particular sensor and the way it sees color are probably not well suited to the standard ISO measurement procedure."

The problem with Foveon sensors, as far as I understand it, is that silicon itself is used as a filter. Compared to the pigments used in standard Bayer arrays, it is relatively dark. As a consequence, the deeper layer, the one sensitive to red only, sees only so many photons. OTOH, the top layer, the one sensitive to blue, green and red, sees a wealth of photons. But, as it can only be so deep, it saturates very quickly.

Normally, the ISO measurements are derived from the green channel sensitivity, because that is what the human eye is most sensitive to. They are designed so that the green channel does not saturate and levels are nicely filled around a medium value... for the green channel. However, as explained above, for the Foveon sensors, the top channel is the problem.

The blue channel, BTW, is computed from the top layer minus the other two, times their respective sensitivities. So you may never notice that the top channel is saturated if you only consider the values in the blue channel. Furthermore, as the bottom channel is relatively noisy (because it only sees so many photons) and as its values are subtracted from the others, you get that noise everywhere, which also masks saturation.
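A toy Python sketch of the kind of subtraction I mean (the layer values and coefficients are invented for illustration, not Foveon's actual numbers):

    # Toy illustration of the layer arithmetic described above.
    # Layer signals normalized 0..1; coefficients invented, not Foveon's.
    top, middle, bottom = 0.95, 0.40, 0.10   # top layer is close to clipping
    k_mid, k_bot = 0.9, 1.1                  # hypothetical relative sensitivities

    blue_estimate = top - (k_mid * middle + k_bot * bottom)
    print(blue_estimate)  # ~0.48: looks unremarkable although 'top' is nearly clipped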
 

Doug Kerr

Well-known member
Hi, Ted,

Hello Doug, please pardon the snipping.

No, that is most appropriate!

Yes, I forgot to incorporate the 'R' in the posted relationship - well spotted. Fortunately it is not missing from my spreadsheet and, as you know, the Kodak white side is indeed 0.9, not 1.0.

Ah, yes.

I'm mildly concerned about the lux-meter reading: I've been moving the camera into position hand-held, after removing the lux-meter sensor. I'm thinking that I should set the camera up on a small tripod and measure the lux with the camera in-situ. Using the camera remote control (assuming I can find it and that it still works) would improve the method (arm blocking the light) even more, IMHO.

Yes that all seems worthwhile. I also assume that the photometer is operated so that you do not yourself interfere with the ambient illumination during the measurement.

Best regards,

Doug
 
The problem with Foveon sensors, as far as I understand it, is that silicon itself is used as a filter. Compared to the pigments used in standard Bayer arrays, it is relatively dark. As a consequence, the deeper layer, the one sensitive to red only, sees only so many photons. OTOH, the top layer, the one sensitive to blue, green and red, sees a wealth of photons. But, as it can only be so deep, it saturates very quickly.

Normally, the ISO measurements are derived from the green channel sensitivity, because that is what the human eye is most sensitive to. They are designed so that the green channel does not saturate and levels are nicely filled around a medium value... for the green channel. However, as explained above, for the Foveon sensors, the top channel is the problem.

The blue channel, BTW, is computed from the top layer minus the other two, times their respective sensitivities. So you may never notice that the top channel is saturated if you only consider the values in the blue channel. Furthermore, as the bottom channel is relatively noisy (because it only sees so many photons) and as its values are subtracted from the others, you get that noise everywhere, which also masks saturation.

Thank you for your expanded view on how the Foveon works.

For your information, I've owned Sigma cameras for about ten years and have several GB of papers, data sheets, layer response curves and silicon absorption data. By that token, I'm sorry to say that much of the above response is incorrect. To avoid ruffling feathers, I will avoid picking on incorrect instances.

I'll guess that the above response is the result of reading about the Foveon out of interest rather than actual continuous use of a Sigma camera.

Thank you, anyway,

Ted
 

Jerome Marot

Well-known member
Thank you for your expanded view on how the Foveon works.

For your information, I've owned Sigma cameras for about ten years and have several GB of papers, data sheets, layer response curves and silicon absorption data. By that token, I'm sorry to say that much of the above response is incorrect. To avoid ruffling feathers, I will avoid picking on incorrect instances.

I'll guess that the above response is the result of reading about the Foveon out of interest rather than actual continuous use of a Sigma camera.

Thank you, anyway,

Ted

What a surprising answer.

First, let me point out that I have been wrong more than once on these forums. I may indeed be wrong about Sigma cameras here.

But I do not quite understand the sudden turn in this thread. I never presented myself as a specialist of Sigma cameras. Neither did you, up to that answer. You just started this thread with a seemingly naive question about measuring ISO. Now you are a specialist of Sigma cameras with 10 years' experience and GBytes of literature on the subject.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Ted,

I personally have been intrigued by Sigma cameras. I am equally struck that Canon and HP, who invested in a Stanford-University-originated company a decade ago, have, after a number of patents, failed to release either of their layered CMOS designs.

Given your extensive knowledge base and rather exceptionally long personal experience, let me pose this important question:-

Why has this obviously brilliant technology not taken off, especially the Stanford version where each pixel is an independent camera?

What has happened to make this path less travelled?

......and BTW, "ISO" (with millions of individual cameras, i.e. the actual independent sensels, receptive to light for different times or with different sensitivities of data processing) becomes out of place as a concept, as reactivity to light becomes adaptive!

Asher
 

Doug Kerr

Well-known member
I trust the spectators here will forgive me for breaking into a perfectly good catfight with a technical observation.

The measurement of a camera's Standard Output Sensitivity (SOS) is predicated on determining the photometric exposure required to produce "saturation" of the sensor chain, which we can think of as the situation in which greater photometric exposure does not produce a "proportional" change in the output coordinates.

For a monochromatic sensor, that point may well depend on the spectral distribution of the light involved (depending on the spectral sensitivity function of the sensor).

Remember the matter of "panchromatic" vs. "orthochromatic" black and white film?​

In the case of a tristimulus sensor (whether a CFA type, as we often have in our digital cameras, or a collocated type, as in the case of the Sigma Foveon sensor), the different "channels" may well saturate at different points, and how that plays out depends on the spectral distribution of the light involved.

Accordingly, to avoid uncertainty in that regard, CIPA DC-004 (in which the definition of SOS was first put forth) prescribes a specific illuminant for determining the SOS. It in fact provides for two values of SOS, one measured using a "daylight" illuminant (D55) and one a "tungsten" illuminant (I'm not sure which one - the reference is indirect, and I don't have time now to follow the trail - probably illuminant A).

That all having been said, we note that the definition of the SOS (as suggested by its name) treats the camera as a "black box". The SOS concept is not screwed up by any unique property of the interior workings of any sensor or sensor chain. It is not somehow "invalid" with respect to, for example, a camera with a Foveon sensor.

Best regards,

Doug
 

Jerome Marot

Well-known member
I trust the spectators here will forgive me for breaking into a perfectly good catfight with a technical observation.

<snip>

Hi, Doug,

All very good points. Thank you so much.

Best regards,

Jérôme.
 
What a surprising answer.

First, let me point out that I have been wrong more than once on these forums. I may indeed be wrong about Sigma cameras here.

But I do not quite understand the sudden turn in this thread. I never presented myself as a specialist of Sigma cameras. Neither did you, up to that answer. You just started this thread with a seemingly naive question about measuring ISO. Now you are a specialist of Sigma cameras with 10 years' experience and GBytes of literature on the subject.

Please accept my apologies. Didn't mean to offend.
 
Ted,

I personally have been intrigued by Sigma cameras. I am equally struck that Canon and HP, who invested in a Stanford-University-originated company a decade ago, have, after a number of patents, failed to release either of their layered CMOS designs.

Given your extensive knowledge base and rather exceptionally long personal experience, let me pose this important question:-

Why has this obviously brilliant technology not taken off, especially the Stanford version where each pixel is an independent camera?

What has happened to make this path less travelled?

I see the only real advantage to the Foveon as having a lack of color artifacts, at the pixel level. Sort of analogous to the difference between a JPEG and a PNG or TIFF. Other than that, the Foveon loses out on many well-known and well-documented fronts:

Lots of noise, for which we can blame the extreme matrices necessary to transform camera space to XYZ (the basis for all popular color spaces). Here's one:

[Attached image: matrixFoveon-XYZ.gif - a Foveon camera-space to XYZ conversion matrix]


The R,G and B designators were used by Foveon to denote sensor layers and do not refer to color space primaries.

The main thing to note is the large coefficients used - much larger than CFA equivalents; a recipe for noise, some would say.
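To put toy numbers on that, here is a short Python sketch comparing an invented large-coefficient matrix against a milder, more CFA-like one (neither matrix is real camera data):

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented matrices, for illustration only - not actual camera data.
    big  = np.array([[ 3.0, -2.2,  0.4],
                     [-1.8,  3.1, -0.6],
                     [ 0.5, -1.9,  2.6]])   # large, strongly off-diagonal
    mild = np.array([[ 1.2, -0.3,  0.1],
                     [-0.2,  1.1, -0.1],
                     [ 0.1, -0.2,  1.3]])   # closer to the identity

    # Unit-variance noise in the three input channels.
    noise = rng.normal(0.0, 1.0, size=(3, 100_000))

    for name, M in (("big", big), ("mild", mild)):
        out = M @ noise
        print(name, out.std(axis=1).round(2))  # per-channel noise after the matrix

With those numbers the big matrix multiplies the channel noise by roughly three, the mild one hardly at all - which is the point I was making above.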

Poor ISO "performance" which hasn't improved much other than by major advances in Sigma's NR and stuff. In other words, a well-under-exposed Sigma image sucks and is virtually unrecoverable.

Sigma has stayed with simplicity even with their compact DP series. Low res. video or none. No scene modes, Just PASM.

They only went mirror-less recently - but kept the same registration as the previous DSLRs. Only some previous lenses auto-focus well on the mirror-less models, yet another Sigma blunder, some say.

Myself, what few shots I make are invariably at 100 ISO on my ISO-less Sigmas; bad light = no shot. And I only view on my monitor, so 3-5MP is quite enough for me.

......and BTW, "ISO" (with millions of individual cameras, i.e. the actual independent sensels, receptive to light for different times or with different sensitivities of data processing) becomes out of place as a concept, as reactivity to light becomes adaptive!

Sorry, Asher, I didn't understand that at all.

best,

Ted
 

Doug Kerr

Well-known member
Hi, Ted.

I see the only real advantage to the Foveon as having a lack of color artifacts, at the pixel level.

Well, in addition, for a given sensel count, the Foveon collocated sensor gives greater geometric resolution in each of the three color "channels" than a CFA sensor with the same sensel count. (With regard to the Foveon sensor, I use "sensel" to mean one sensor "position", with three overlaid photodetectors.)

<snip>

Lots of noise, for which we can blame the extreme matrices necessary to transform camera space to XYZ (the basis for all popular color spaces). Here's one:

[Attached image: matrixFoveon-XYZ.gif - a Foveon camera-space to XYZ conversion matrix]


The R,G and B designators were used by Foveon to denote sensor layers and do not refer to color space primaries.

Thank you for making that clear. That notation, while unfortunate, is almost universal!

In my own writings I use e, f, and g for the (linear) outputs of the three sensor "channels" (for any kind of tristimulus sensor) to avoid any misunderstanding.

Best regards,

Doug
 
Hi, Ted.

Well, in addition, for a given sensel count, the Foveon collocated sensor gives greater geometric resolution in each of the three color "channels" than a CFA sensor with the same sensel count. (With regard to the Foveon sensor, I use "sensel" to mean one sensor "position", with three overlaid photodetectors.)

Ah, yes, "with the same sensel count" . . .

It was indeed a problem for Foveon that the converted image is so much smaller than that for a CFA sensor. People bought a "10MP" camera and got 3.4MP JPEGs or TIFFs after conversion. And there's no way to explain that to "ordinary folks" any more than it can be explained to them that the CFA pattern gives a lower color resolution. All they see is color that looks good enough on their screen.
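(The arithmetic behind the marketing, roughly: 3.4 million pixel locations times 3 stacked photodetectors is a little over 10 million photodetectors - hence "10MP" on the box but 3.4MP out of the converter.)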

Thank you for making that [matrix explanation] clear. That [RGB] notation, while unfortunate, is almost universal!

In my own writings I use e, f, and g for the (linear) outputs of the three sensor "channels" (for any kind of tristimulus sensor) to avoid any misunderstanding.

Good idea!

Yes, Foveon has finally dropped their R, G, B notation in favor of B, M, T starting with their Quattro models but leaving their illustrations and graphs in the "traditional" colors!

Later,

Ted
 

Doug Kerr

Well-known member
Hi, Ted,

Ah, yes, "with the same sensel count" . . .

It was indeed a problem for Foveon that the converted image is so much smaller than that for a CFA sensor. People bought a "10MP" camera and got 3.4MP JPEGs or TIFFs after conversion.

That is of course because it was a 3.4 Mpx camera! To have called it a 10 Mpx camera was just . . . well, you know.

Best regards,

Doug
 
Hi, Ted,

That is of course because it was a 3.4 Mpx camera! To have called it a 10 Mpx camera was just . . . well, you know.

Best regards,

Doug

Quite so, Doug!

Pardon my tardy reply; I don't seem to be getting "Instant email notifications" reliably.

I also have a couple of real 4.7MP Sigmas (DP2s, SD14), and a recent purchase (DP2 Merrill) has a whopping 15 real MP, which I insist on shooting in Foveon "low res" (2x2 binned on-chip), thereby cutting that down to 3MP or so.

best,

Ted.
 
Ted,

I personally have been intrigued by Sigma cameras. I am equally struck that Canon and HP, who invested in a Stanford-University-originated company a decade ago, have, after a number of patents, failed to release either of their layered CMOS designs.

Given your extensive knowledge base and rather exceptionally long personal experience, let me pose this important question:-

Why has this obviously brilliant technology not taken off, especially the Stanford version where each pixel is an independent camera?

What has happened to make this path less travelled?

I forgot to add that any Sigma camera needs a bit of thought and observance of photographic basics, as opposed to the modern way of using auto-everything and expecting a perfect shot under any and all scene conditions. Bit of rant, sorry.

I just find it so irritating to read posts from people in Sigma fora whining about less-than-perfect AF or AE.

Grump :-(

Ted
 

Doug Kerr

Well-known member
Hi, Ted,

Any camera needs a bit of thought and observance of photographic basics, as opposed to the modern way of using auto-everything and expecting a perfect shot under any and all scene conditions.

Best regards,

Doug
 

Jerome Marot

Well-known member

In 2011, I posted a set of pictures explaining how to remove the IR filter of a Sigma DP1 camera. These pictures are still amongst the most popular ones on my flickr account. I use that camera for IR and also for UV photography. The top layer of the Foveon sensor is reasonably sensitive to UV. However, the particular way the data is massaged by the Sigma converter makes the images very difficult to use, as I get a red signal which is, of course, non-existent. At the time, I could not find alternative software to convert the data. Is there free software for the x3f files today (free in the sense that the source code would be available)?
 
In 2011, I posted a set of pictures explaining how to remove the IR filter of a Sigma DP1 camera. These pictures are still amongst the most popular ones on my flickr account. I use that camera for IR and also for UV photography. The top layer of the Foveon sensor is reasonably sensitive to UV. However, the particular way the data is massaged by the Sigma converter makes the images very difficult to use, as I get a red signal which is, of course, non-existent. At the time, I could not find alternative software to convert the data. Is there free software for the x3f files today (free in the sense that the source code would be available)?

I hate that red look with a passion. You probably noticed that all green channel values get forced to zero in the conversion. Horrible.

For the DP1, which I imagine gives similar output to my SD14 with the "dustcover" removed, I would recommend RawDigger (not free but cheap enough) - their RGB conversion is nowhere near so red. Also, you can extract the red layer only to work on and tint in your favorite editor.

RawTherapee is free and can open DP1 files, but not particularly well. It defaults to the 'red' look in the review image, but there are many choices of input profile - including 'None'. Currently, I prefer RawTherapee for SD14 full-spectrum work - completely avoiding SPP.

For code, https://www.libraw.org/about may be of interest - Iliah Borg there is well-known and is very helpful.

Equally, Roland Karlsson on DPR's Sigma Forum has a command-line x3f_extract program which might be of interest. The code is open for you to hack into.

There is also a popular command-line program, 'dcraw', which can provide different types of conversion; the type that may be of interest takes the raw data as-is and creates an RGB file with no conversion at all. RawDigger can also do that, calling it a 'raw composite'.
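If you would rather script it, something along these lines with the rawpy wrapper around LibRaw might work for a minimally-processed conversion - untested by me on X3F files, and I am not certain every LibRaw build still decodes them; the file name is only a placeholder:

    import rawpy
    import imageio

    path = "SDIM0001.X3F"   # placeholder file name - substitute your own

    with rawpy.imread(path) as raw:
        # Linear output, no auto-brightening, unit white balance, 'raw' colour -
        # i.e. as little massaging as the library allows.
        rgb = raw.postprocess(gamma=(1, 1),
                              no_auto_bright=True,
                              use_camera_wb=False,
                              user_wb=[1, 1, 1, 1],
                              output_color=rawpy.ColorSpace.raw,
                              output_bps=16)

    imageio.imwrite("SDIM0001_linear.tiff", rgb)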

I can provide links and example SD14 images if you would like . .

Ted
 
Hi, Ted,

Any camera needs a bit of thought and observance of photographic basics, as opposed to the modern way of using auto-everything and expecting a perfect shot under any and all scene conditions.

Best regards,

Doug

Indeed so.

As to ISO, I posted a link of interest here (if you can stand to access DPR):

https://www.dpreview.com/forums/post/60297212

I discovered that, according to PetaPixel, 50 ISO is "native" to my Sigma DP2s, not "extended"!

Any comment as to that? - I seem to recall that you have expounded on "extended ISO" elsewhere . .

best,

Ted
 

Doug Kerr

Well-known member
Hi, Ted,

Indeed so.

As to ISO, I posted a link of interest here (if you can stand to access DPR):

Oh, I often slum.

https://www.dpreview.com/forums/post/60297212

I discovered that, according to PetaPixel, 50 ISO is "native" to my Sigma DP2s, not "extended"!

Well, firstly, the notion of the "native" sensitivity of a camera is a bit arbitrary, but I guess I get it.

As to "extended" ISO settings, that term is most often used not for values beyond the "native" but rather for values beyond those that the manufacturer is "comfortable" making available to the "ordinary" user.

I guess.

Best regards,

Doug
 
Hi, Ted,

As to "extended" ISO settings, that term is most often used not for values beyond the "native" but rather for values beyond those that the manufacturer is "comfortable" making available to the "ordinary" user.

Hello Doug,

I've always been uncomfortable with "digital" matters being dumbed-down so's the poor old film-shooter can keep up. Witness the dreaded "equivalent" focal length for lenses mounted on smaller-sensor cameras.

My personal ISO rant, expressed often elsewhere, is that the "ISO knob" on any digital camera should have been labeled "gain" and clarified as "amount of under-exposure".

More noise in your image, Sir? Just turn up the gain to 11 (obscure rock music reference).

Ted
 

Doug Kerr

Well-known member
Hi, Ted,

Hello Doug,

I've always been uncomfortable with "digital" matters being dumbed-down so's the poor old film-shooter can keep up. Witness the dreaded "equivalent" focal length for lenses mounted on smaller-sensor cameras.
Indeed. Still, that notion goes far back before digital cameras (we just didn't hear it talked about so much).

My personal ISO rant, expressed often elsewhere, is that the "ISO knob" on any digital camera should have been labeled "gain" and clarified as "amount of under-exposure".

So if we did that, would the dial be marked in some units, or just as "more" and "less"? In what units? In dB? What would 0 dB correspond to? Or in linear terms, as a ratio (which is what a gain is)? What would a gain of unity correspond to?

If we wanted to measure the scene luminance with an exposure meter, and wanted to enact the exposure "suggested" by the meter, how would we relate the "gain" setting to that process?

And would state income tax still be deductible?

Yes, we change the sensitivity of the camera (as today commonly quantified in terms of ISO speed, or ISO SOS) by changing the gain between the output of the sensor and the input to the ADC. But what gain produces what sensitivity is a function of:

• The output voltage of the sensor at, for example, saturation.
I need to mention this "sensor output voltage" is the difference between the actual sensor voltage and a reference voltage, since the actual sensor voltage goes down with increasing photometric exposure.​

• The full-scale input voltage of the ADC.

• How the digital output of the ADC is massaged to the coordinates of an sRGB (or other) color space.

And so the absolute gain is fairly arbitrary, and knowing it wouldn't tell us much. Except that "more" would in general give us greater sensitivity and more noise.
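Just to put invented numbers on that (nothing below is taken from a real camera), here is a toy Python model of the first two factors, with the digital massaging ignored:

    import math

    # Invented example values - not from any actual camera.
    v_sensor_sat = 1.5    # sensor output swing at saturation, volts
    v_adc_fs     = 2.0    # ADC full-scale input, volts
    base_iso     = 100    # "ISO" at which saturation just fills the ADC

    def gain_for_iso(iso):
        """Gain needed for a given ISO in this toy model (digital massaging ignored)."""
        g = (v_adc_fs / v_sensor_sat) * (iso / base_iso)
        return g, 20.0 * math.log10(g)   # as a ratio and in dB

    for iso in (100, 200, 400, 800, 1600):
        g, db = gain_for_iso(iso)
        print(f"ISO {iso:4d}: gain {g:5.2f}x ({db:4.1f} dB)")

Each doubling of the "ISO" adds about 6 dB, but the absolute figures depend entirely on the arbitrary voltages chosen - which is rather the point.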

My favorite ISO thing is ISO 646.

Just sayin'.

Best regards,

Doug
 
Hi, Ted,

So if we did that, would the dial be marked in some units, or just as "more" and "less"?

In my cynicism, I hadn't really thought about that; I was just thinking about the confusion caused by the move from film to digital "ISO". How often have we read that the ISO setting "changes the sensitivity of the sensor", as if that were even possible!


In what units? In dB? What would 0 dB correspond to? Or in linear terms, as a ratio (which is what a gain is)? What would a gain of unity correspond to?

As to units, they would have to be obscured by some meaningless acronym, in the spirit of Spinal Tap, 1 could be less and 11 could be the max without ever revealing that they are referring to the crappiness of the final image . . :)

If we wanted to measure the scene luminance with an exposure meter, and wanted to enact the exposure "suggested" by the meter, how would we relate the "gain" setting to that process?

We would believe that recommendation, set the camera to manual and set the aperture and shutter to those numbers, all the time ignoring the stupid gain knob. That way, we also get to ignore the so-called "exposure triangle". :D

And would state income tax still be deductible?

Not for me here in Texas, I'm afraid, we don't have it . .

Yes, we change the sensitivity of the camera (as today commonly quantified in terms of ISO speed, or ISO SOS) by changing the gain between the output of the sensor and the input to the ADC.

Hate to tell you but none of my serious cameras do that, they are truly "ISO-less".

But what gain produces what sensitivity is a function of:

• The output voltage of the sensor at, for example, saturation.
I need to mention this "sensor output voltage" is the difference between the actual sensor voltage and a reference voltage, since the actual sensor voltage goes down with increasing photometric exposure.​

Got my attention there, Doug. All sensors do that? Are you referring to the CMOS rail polarity?

Since my cameras have a stated sensitivity in terms of uv/e-, I am inclined to think that the sensor output voltage gets more with increasing exposure, where "down" does not mean less.

• The full-scale input voltage of the ADC.

The three ADC's have differential inputs, +/- 2V.

• How the digital output of the ADC is massaged to the coordinates of an sRGB (or other) color space.

A cam-to-XYZ 3x3 matrix produces XYZ numbers (D55). WB correction is applied, still in XYZ space. Transformation to RGB includes the color space selected by the user.

There's more but that's the basics.
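A bare-bones Python sketch of that chain, with an invented camera matrix (only the XYZ-to-sRGB matrix below is the standard published one; every other number is a placeholder):

    import numpy as np

    # Placeholder camera-space to XYZ matrix - invented, not Sigma's.
    cam_to_xyz = np.array([[ 1.10, -0.30,  0.20],
                           [ 0.05,  0.95,  0.00],
                           [-0.10,  0.15,  1.05]])

    # Standard linear XYZ (D65) to sRGB matrix.
    xyz_to_srgb = np.array([[ 3.2406, -1.5372, -0.4986],
                            [-0.9689,  1.8758,  0.0415],
                            [ 0.0557, -0.2040,  1.0570]])

    def convert(cam_values, wb_gains=(1.0, 1.0, 1.0)):
        """Toy chain: camera values -> XYZ -> (crude) WB -> linear sRGB -> gamma."""
        xyz = cam_to_xyz @ np.asarray(cam_values)
        xyz = xyz * np.asarray(wb_gains)   # crude per-axis WB scaling, illustrative only
        lin = np.clip(xyz_to_srgb @ xyz, 0.0, 1.0)
        # sRGB transfer function
        return np.where(lin <= 0.0031308, 12.92 * lin, 1.055 * lin ** (1 / 2.4) - 0.055)

    print(convert([0.5, 0.4, 0.3]))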
 

Doug Kerr

Well-known member
Hi, Ted,

In my cynicism, I hadn't really thought about that; I was just thinking about the confusion caused by the move from film to digital "ISO". How often have we read that the ISO setting "changes the sensitivity of the sensor", as if that were even possible!
Indeed!

As to units, they would have to be obscured by some meaningless acronym, in the spirit of Spinal Tap, 1 could be less and 11 could be the max without ever revealing that they are referring to the crappiness of the final image . . :)
Good plan.

We would believe that recommendation . . .

Ah, but to get that recommendation, we would have to feed the exposure meter an exposure index, normally expected as the ISO speed of the imaging system, however we have it "set". So how would we do that?

. . . set the camera to manual and set the aperture and shutter to those numbers, all the time ignoring the stupid gain knob. That way, we also get to ignore the so-called "exposure triangle". :D

And the Bermuda one as well!

Not for me here in Texas, I'm afraid, we don't have it . .
Indeed. Just gigantic real estate taxes. (I lived in Texas almost 40 years, all the time a stranger in a strange land!)

Got my attention there, Doug. All sensors do that? Are you referring to the CMOS rail polarity?

No. It is that the so-called "well" is charged when the sensor is initialized, leading to an initial voltage on the well "electrode" (typically positive). Then each effectively captured photon annihilates one electron, leading to a progressive discharge of the "well", and a decrease of its (absolute) voltage.

Since my cameras have a stated sensitivity in terms of uv/e-, I am inclined to think that the sensor output voltage gets more with increasing exposure, where "down" does not mean less.

Well, that sensitivity in uv/e is presumably the absolute value of that parameter. The actual number is negative.
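(For scale, and purely as an invented example: a conversion gain of 60 uV per electron with a 30,000-electron full well implies a swing of about 1.8 V between the reset level and saturation - and that swing is downward in absolute terms.)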

The three ADC's have differential inputs, +/- 2V.

Perhaps in certain cameras. Not likely in all.

You may find this paper interesting:


Best regards,

Doug
 