About "dynamic range"

Doug Kerr

Well-known member
Dynamic range in photography

In the field of photography, the term dynamic range is ordinarily used to express the range (as a ratio) of the maximum to minimum luminance in a scene that can be "successfully captured" in a given shot.

The definitions of "successfully captured" of course can be complicated. In digital photography, often for the maximum we think in terms of the luminance that leads to just short of "saturation" of the sensor. The minimum may be defined in terms of some stated noise performance, or some stated "relative precision" (a metric that is related to the phenomenon of "banding").

This property, dynamic range, is distinct from what we might call the absolute (luminance) range of the camera, which would be the ratio of the maximum to minimum luminances that could be successfully captured, not as part of the scene for a given shot but rather by allowing us to use, for the maximum and minimum, different photographic exposures and ISO sensitivities. That range is typically enormous. (We can capture a gigantic luminance at f/22, 1/5000 s, and ISO 50, and a very small one at f/1.4, 2 s, and ISO 8000.)
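To put a number on "enormous", here is a little Python sketch. It uses only the proportionality that the luminance required for a given sensor response goes as N²/(t·S); the absolute scale is arbitrary, and only the ratio matters:

```python
import math

def rel_luminance_needed(f_number, shutter_s, iso):
    """Scene luminance needed for a given sensor response is
    proportional to N^2 / (t * S); only ratios are meaningful here."""
    return f_number**2 / (shutter_s * iso)

bright = rel_luminance_needed(22, 1 / 5000, 50)  # "gigantic luminance" settings
dim = rel_luminance_needed(1.4, 2, 8000)         # "very small luminance" settings

ratio = bright / dim
print(f"ratio = {ratio:.3g} ({math.log2(ratio):.1f} stops)")
# -> ratio = 3.95e+08 (28.6 stops)
```

The settings alone span about 28.6 stops; add the per-shot dynamic range on top of that, and "enormous" is no exaggeration.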

Said of a scene

Sometimes the term dynamic range is used to speak of the ratio of the maximum to minimum luminance occurring in the part of a scene that would be in the field of view of an actual or hypothetical camera, what is often called its contrast ratio. We recognize that a successful capture of such a scene requires a camera whose dynamic range is at least as great as the contrast ratio of the scene.

I think "dynamic" does not add much - there is not really any meaningful distinction here with some "larger" ratio (unless it might be the ratio between the brightest spot on the scene at high noon vs. the darkest spot at midnight), as we have when we consider measures of the "luminance range" of a camera.

I personally would prefer "contrast ratio", reserving "dynamic range" for the capability of the camera.

About "dynamic"

The descriptor "dynamic" came with the phrase "dynamic range" when it was borrowed from another field, audio recording. There, dynamic range refers to the ratio between the largest and smallest signals (measured at the input to the recording system) that could be "successfully" captured with the recording system gain control at a fixed setting.

Again, the definitions of "successfully captured" can be complicated. Typically for the maximum, it would be the signal amplitude at which the effect of the distortion caused by "saturation" or "clipping" had just reached some stated degree. For the minimum, it was typically defined in terms of some stated noise performance, or some stated impact of certain kinds of distortion that sometimes afflict small signals.

Note that this is not the ultimate range of the recording system. We can successfully record a gigantic signal by greatly reducing the recording gain, and successfully record a tiny signal by greatly increasing the gain. Thus the ultimate range is itself enormous.

Thus the presence of the restrictive clause "with the recording gain control at a fixed setting" in the basic definition. The term "dynamic" fits in with this outlook: it alludes to the range in signal amplitude occurring over the course of a single recording "take" (that is, over the entire piece of music), during which the recording gain is assumed to be kept constant, as distinguished from differences in the amplitude of test signals that might be applied with different gain settings in effect.

Said of the source signal

In recording practice, do we usually speak of the range of maximum to minimum signal amplitudes in an actual (or typical) musical passage as the "dynamic range" of the source signal? ("Recording Sibelius' Second Symphony today, Harvey? Be careful with your gain setup - it has an awesome dynamic range!")

Not usually. If we did, would we be understood unambiguously? Sure.

I have wonderful recordings, from the 1950s, where Harvey did not hark to that advice and, in the middle of an extended crescendo, discovered from watching the VU meter that saturation was in his future, and awkwardly reduced the gain suddenly at a breath pause in the middle of a passage.

Color spaces

We will often speak of the dynamic range of a color space. This of course refers to the ratio between the largest and smallest relative luminances that can be successfully represented by the color space. (A normal camera color space does not describe absolute luminance.) As before, we must adopt some particular definition of what the maximum and minimum would mean.

The term "dynamic" can be rationalized here by comparison with our other examples. A certain numerical value of the color of a pixel under a certain color space does not imply an absolute luminance of that spot in the scene. Rather, that relationship depends (among other things) on the photographic exposure and the ISO sensitivity (this being the analog of the recording gain).

Thus, when we speak of the dynamic range of a color space, we mean, "the ratio of the maximum and minimum scene luminance that can be successfully captured assuming a constant photographic exposure and ISO sensitivity of the camera" (which would of course normally mean, "within a given shot").

Said of number systems

Especially in connection with color spaces having a high dynamic range, thus useful for so-called "HDR" (high dynamic range) image handling, we come into contact with certain "advanced" number systems, such as various floating point representations. They are beneficial because they combine a large range with a nearly-constant relative precision while efficiently using a certain number of bits.

Not surprisingly, we are interested in the "range" of these systems, which of course puts a limit on the range of any color space that depends on them. Often it is the "one-sided non-zero relative range" that is of interest. This is the ratio of the largest positive number that can be represented to the smallest non-zero positive number that can be represented.
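As a concrete illustration, here is a quick Python check using IEEE 754 half precision (float16) as the example format; it is a popular choice in HDR image work. (Whether the subnormal numbers count as "representable" is one of those definitional complications.)

```python
import numpy as np

info = np.finfo(np.float16)  # IEEE 754 half precision

largest = float(info.max)            # 65504.0
smallest_normal = float(info.tiny)   # ~6.10e-05, smallest normal number
smallest_subnormal = 2.0 ** -24      # ~5.96e-08, smallest subnormal

print(largest / smallest_normal)     # ~1.07e+09, about 30 stops
print(largest / smallest_subnormal)  # ~1.10e+12, if subnormals are counted
```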

We sometimes find that called the "dynamic range" of the number system. Is that appropriate?

Well, recall that (based on the history of the term, and consistent with its use in photography), "dynamic" is meant to imply the range with the photographic exposure/recording gain/etc. held constant: that is, with the scaling factor between the quantity being measured and the "result" it receives held constant.

But in a number system, there is no variable scaling to hold constant. If our number system has a relative range of 1,000,000, there is no way to say that it could actually accommodate two values with a ratio of 100,000,000 if we "measure them separately, with different settings of the sensitivity control". There is no sensitivity control in a number system. So it only has one kind of range (of any particular flavor).

So the qualifier "dynamic" has no meaning. Its use in this case is just an attempt to make the discussion "more sophisticated sounding".

So please, let's not speak of the "dynamic range" of a number system.

Multi-shot "HDR" photography

Often we enlarge the dynamic range of our photographic process by shooting the same scene several times, typically with different photographic exposure, and then combining the images with special software.

Does this increase the dynamic range of the camera? No. For each "shot", that has the same value.

But we have increased the dynamic range of the entire photographic process (in terms of its ability to "successfully" capture a certain range of scene luminance).

What if we do not record the result of the combining in a high-dynamic-range color space, but rather use tonal compression of some sort and then record the result in a color space of more modest dynamic range? Have we then still increased the dynamic range of the entire photographic process?

Yes, because that dynamic range is defined as the ability (in this case of the entire process) to successfully capture a certain range of maximum to minimum scene luminance, and that is in fact enlarged.
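The arithmetic of that enlargement can be sketched in a few lines of Python (the per-shot figure and the bracket offsets here are hypothetical, purely for illustration):

```python
per_shot_stops = 10.7      # dynamic range of a single shot (hypothetical)
bracket_evs = [-4, 0, +4]  # hypothetical exposure offsets of the shots, in stops

# The merged result reaches from the floor of the most-exposed shot to the
# ceiling of the least-exposed shot: per-shot range plus the bracket span
# (valid here because the 4-stop steps are smaller than 10.7 stops, so the
# shots' ranges overlap and coverage is contiguous).
span = max(bracket_evs) - min(bracket_evs)
process_stops = per_shot_stops + span
print(f"process dynamic range: {process_stops:.1f} stops")  # -> 18.7 stops
print(f"as a ratio: {2 ** process_stops:.3g}")              # -> ~4.26e+05
```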

Now, does the delivered image in that case have the same contrast ratio as the scene itself? No. What about its dynamic range? I'd prefer not to use that term there, but no.

Best regards,

Doug
 

Doug Kerr

Well-known member
To give some more insight into the distinction I draw between "range" and "dynamic range", let's do a little numerical exercise.

I will start by assuming that we consider the "largest" and "smallest" successfully captured luminances to be ones that would produce sRGB values of RGB=255 and RGB=2. Consider these to be produced by photometric exposures on the sensor that we will call Hmax and Hmin, respectively.

The exact values of these depend on the ISO setting of the camera, but, if there is no tonal scale fiddling done by the in-camera processing, we know that their ratio will be about 1667:1 (about 10.7 stops).
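For those who care to check that figure, here is a little Python sketch of the standard sRGB decoding function. I get about 1647:1 this way; the small difference from 1667 is just a matter of the constants assumed, and both round to the same 10.7 stops.

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value (0-255) to linear relative luminance."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

ratio = srgb_to_linear(255) / srgb_to_linear(2)
print(ratio)  # ~1647, i.e. about 10.7 stops
```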

Now, let's arbitrarily set the camera thus:

ISO 100, f/2.0, 1/100 s

If we go through all the metering and exposure equations, we find that the scene luminances required to produce Hmax and Hmin are:

for Hmax: 227.8 cd/m²
for Hmin: 0.136 cd/m²

Their ratio is 1667 (10.7 stops).

Now, let's reset the camera thus:

ISO 200, f/5.6, 1/500 s

Now, we find that the scene luminances required to produce Hmax and Hmin are:

for Hmax: 4867 cd/m²
for Hmin: 2.92 cd/m²

Their ratio is 1667 (10.7 stops).

That value, 1667, is the dynamic range of the camera. That is, it is the ratio of the highest and lowest luminances that can be successfully captured with the camera set to some single combination of ISO sensitivity, f/number, and shutter time for both luminances; that is, that could be captured at the same time, in the same shot.

Now, is this the overall luminance range of the camera? Not at all.

Suppose that, just to contain the exercise, we consider the limits of the three camera settings to be:

ISO: ISO 100 through ISO 1600
f/number: f/2.0 through f/22
Shutter time: 1 s through 1/8000 s

Now, consider the "least sensitive" overall settings:

ISO 100, f/22, 1/8000 s

and see what luminance would be needed to produce Hmax. We find it would have to be:

2.32 × 10^6 cd/m²

Next, consider instead the "most sensitive" overall settings:

ISO 1600, f/2.0, 1 s

and see what luminance would produce Hmin. We find it would be:

8.89 × 10^-5 cd/m²

The ratio of those two luminances:

2.51 × 10^10 (34.5 stops)

is the overall luminance range of the camera: the ratio of the largest luminance that can be successfully captured to the smallest luminance that can be successfully captured, with us varying the overall "sensitivity" of the camera however we need to in order to accommodate them (within the arbitrary limits we adopted above).

This is 23.8 stops greater than the dynamic range of the camera. That's because, between finding the highest acceptable luminance and the lowest, we have bumped the camera sensitivity by 23.8 stops.
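For anyone who wants to replay that arithmetic, here is a Python sketch using only the proportionality L ∝ N²/(t·S); it lands within a tenth of a stop of the figures above, the residue being rounding in the full equations:

```python
import math

def rel_luminance_needed(f_number, shutter_s, iso):
    # Luminance needed for a given sensor response ~ N^2 / (t * S)
    return f_number**2 / (shutter_s * iso)

least = rel_luminance_needed(22, 1 / 8000, 100)  # least sensitive settings
most = rel_luminance_needed(2.0, 1, 1600)        # most sensitive settings

bump = math.log2(least / most)
print(f"sensitivity bump: {bump:.1f} stops")      # -> 23.9 stops
print(f"overall range: {bump + 10.7:.1f} stops")  # bump + dynamic range -> 34.6
```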

Now, for what sort of scene can we, with this camera, capture its highest and lowest luminance in one shot (assuming that we set the exposure optimally)? A scene where the ratio of the highest to the lowest luminance is not over 1667 (10.7 stops).

Is that the maximum "dynamic range" of an acceptable scene? No, it's just the maximum luminance ratio.

Why would I not call it the "dynamic range"? Because the concept behind a "dynamic" range doesn't even apply in the case of scene luminance. The scene doesn't have, for example, a certain ratio of maximum to minimum luminance if something is held constant. It just has a certain ratio of maximum to minimum luminance - period.

So the only reason to say that a scene "has a certain dynamic range" is that we are anxious to use that neat-sounding term.

Best regards,

Doug
 

Martin Evans

New member
Thanks, Doug, for making us think more precisely about this term.

Having read through your essay twice, I have made some changes to "DR" in my Wikipedia "List of abbreviations in photography".

Does anyone use CR as a photographer's abbreviation for contrast ratio? It would seem to be as useful as DR in specific situations, but I don't remember coming across it in magazines or forums.

Martin
 

Doug Kerr

Well-known member
Hi, Martin,

Does anyone use CR as a photographer's abbreviation for contrast ratio? It would seem to be as useful as DR in specific situations, but I don't remember coming across it in magazines or forums.
Even the quantity is rarely spoken of, and I don't think I've often seen the abbreviation in our context.

It is widely used with regard to displays, and of course there it does not mean the contrast ratio of some arbitrary "scene" but rather the maximum the display can present (another difference!).

Best regards,

Doug
 

Mark Hampton

New member

Doug,


what happens to a photon once it's captured?

this may seem a stupid question - it bothers me.

cheers
 

Doug Kerr

Well-known member
Hi, Mark,

what happens to a photon once it's captured?
Recall that the photon is a packet of energy (although it also has the properties of a particle through the great spiritual mystery of quantumhood).

In the photodetector, its energy is exploited (and thus consumed) in ripping an electron from its site, thus causing an electron-hole pair. The "tension" in that unnatural situation is the manifestation of the energy that formerly was in the photon.

In other words, the photon is "burnt" to "cock" an electron-hole pair.

In a CMOS sensor, the hole takes in an electron that was part of the initial charge of the detector, thus diminishing the charge. (The electron that was ripped out of the hole "goes to the other side" and cancels the complementary positive charge there.) It is the diminution of that charge (in terms of a change in the voltage that manifests it) that we measure at the end of the exposure to determine how many photons have been "caught and burnt".
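To connect that to the numbers we see in a raw file, here is a toy sketch; every constant in it is made up, but the shape of the computation is the point:

```python
full_well_e = 40_000  # hypothetical full-well capacity, in electrons
adc_bits = 12         # hypothetical ADC depth
gain_e_per_dn = full_well_e / (2 ** adc_bits - 1)  # ~9.8 electrons per DN

electrons_lost = 12_345  # photons "caught and burnt" during this exposure

# The charge diminution, digitized, is the value the raw file reports:
dn = round(electrons_lost / gain_e_per_dn)
print(dn)  # -> 1264
```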

(Physicists may not enjoy my explanation!).

Best regards,

Doug
 

Martin Evans

New member
(Physicists may not enjoy my explanation!)

I appreciated it; but then I'm an ex-biologist, not an ex-physicist.

Do you know if a single photon is sufficient to 'cock' a single pixel of a modern solid-state photo-sensor?

I ask this because, as a biologist, I have always been impressed by the quantum efficiency of the human retina and visual system. For about half a century, experiments on fully dark-adapted human volunteers have shown that they can, on average, detect a faint flash of light that can only have had 5-7 photons absorbed by the visual pigment rhodopsin ("visual purple") in the cells within a small area of the retina. The capture efficiency of our retinal cells is very high: about 25 to 50% of the photons reaching a single cell will be captured by the rhodopsin, and a single captured photon will cause a photochemical change that will, with a probability of about 60-70%, result in a nerve impulse being sent from the retina to the visually-conscious area of the human brain. Because of biological 'noise', a single nerve impulse won't cause any sensation of light, but 5 to 7 will.

For anyone who wants to read a bit more, there is a reasonable survey in a 1998 paper by F. Rieke and D.A. Baylor, in Reviews of Modern Physics, Vol 70. It's accessible on-line at:
http://www.cns.nyu.edu/csh04/Articles/Rieke-Baylor-98.pdf
 

Mark Hampton

New member

Doug,

thanks for this description. can we change the usage of the word capture - when people say good capture - it could be burn or translation!

it carries more of the processing involved.

this type of information is invaluable for me - I will have to go and read some more !

cheers
 

Jerome Marot

Well-known member
The capture efficiency of our retinal cells is very high: about 25 to 50% of the photons reaching a single cell will be captured by the rhodopsin


This web page (quite an interesting read) gives a peak quantum efficiency of 15% for the human eye, much smaller than scientific-grade CCDs. However, after reading the paper you cited, it seems that their estimate is quite a bit lower than what is possible after long dark adaptation. Still, even when using 25% for the human eye (as your paper concludes), the electronics win...

As to our cameras, one should not forget that their sensitivity is diminished by the color filter array in front of their CCDs.
 

Doug Kerr

Well-known member
Hi, Martin,

Do you know if a single photon is sufficient to 'cock' a single pixel of a modern solid-state photo-sensor?
I am not very expert on the physics of such things, but it is my understanding that indeed a single photon can be "registered", but not with 100% probability. If the quantum efficiency is 40%, in effect there is a 40% probability that any individual incident photon will generate a hole-electron pair.
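In other words, detection is a per-photon coin flip. A toy simulation (the QE and the photon count are made-up numbers):

```python
import random

QE = 0.40           # hypothetical quantum efficiency
N_PHOTONS = 10_000  # incident photons in one exposure (made up)

# Each incident photon independently generates a hole-electron pair
# with probability QE, so the detected count is binomially distributed.
detected = sum(random.random() < QE for _ in range(N_PHOTONS))
print(detected / N_PHOTONS)  # hovers around 0.40
```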

Best regards,

Doug
 

Martin Evans

New member

Thanks, Jerome, for that interesting link. Perhaps the discrepancy (25 to 50% efficiency cited in Rieke & Baylor's paper, vs ca 10 to 15% in Spring & Davidson's publication) may be due to measurements at different 'levels' of the eye. The early work, that calculated that 5 to 7 photons would evoke a conscious sensation of light, allowed for losses between the cornea of the eye and the cells in the retina. If one calculates capture efficiency, measuring the light arriving at the cornea, the efficiency will be lower due to reflection and absorption losses between there and the photosensitive cells of the retina. Maybe?

The efficiency of CCD devices, as stated in Spring & Davidson's web page, is astonishing. I had always thought that the ability of the human eye to 'see' low levels of illuminance was pretty amazing.

Thanks and best wishes to all,

Martin
 

Andrew Rodney

New member
It seems that some photographers understand and use the term Dynamic Range when they talk about capture devices (cameras) and contrast ratio for prints.
 
I am not very expert on the physics of such things, but it is my understanding that indeed a single photon can be "registered" but not with 100% probability. In fact, if the quantum efficiency is 40%, in effect there is a 40% probability that any individual incident photon will generate a hole-electron pair.

Hi Doug,

I'm no expert either, but my understanding is that the quantum efficiency has more to do with the ratio of the number of photons being fired at the sensor and the number of photons that actually arrive (reflection/absorption in IR filter, AA-filters, sensor cover glass, CFA filter, microlens, metallic structures/gates). Once the photon hits the sensel's photosensitive area, it's 1 electron hole pair for 1 photon (as long as the photon's wavelength is in the sensitivity range of roughly 350 - 1000 nanometers), or so is my understanding. Longer wavelengths are not stopped by silicon, and shorter ones do not penetrate deep enough.

Of course the arrival rate of photons over time is random during the exposure, and the randomness follows a Poisson probability distribution. So while it is possible to register a single photon as a change of 1 ADU (analog-to-digital unit) or DN (data number) at the unity-gain setting, the randomness of arrival times makes such a photon difficult to detect. Besides, it is most likely also swamped by read noise and other noise sources, unless the sensor is super-cooled and multiple exposures/readouts are used to average out the background noise.
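To make those statistics concrete, here is a little simulation sketch; the read-noise figure is just an assumed typical value:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

mean_photons = 1.0  # a one-photon-per-exposure signal (hypothetical)
read_noise_e = 3.0  # RMS read noise in electrons (assumed typical value)

# Photon arrivals are Poisson-distributed; read noise is roughly Gaussian.
photons = rng.poisson(mean_photons, size=n_trials)
signal = photons + rng.normal(0.0, read_noise_e, size=n_trials)

print(photons.std())  # ~1.0: shot noise alone already equals the signal
print(signal.std())   # ~3.2: read noise swamps the single-photon signal
```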

Cheers,
Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Hi Doug,

I'm no expert either, but my understanding is that the quantum efficiency has more to do with the ratio of the number of photons being fired at the sensor and the number of photons that actually arrive (reflection/absorption in IR filter, AA-filters, sensor cover glass, CFA filter, microlens, metallic structures/gates).

My understanding is that those "gross impediments" are taken into account by the external quantum efficiency, while the internal quantum efficiency (which deals with photons that actually strike the photosensitive material) varies with the wavelength of the photons, the phenomenon of recombination (in which hole-electron pairs "do not make it to market"), and so forth.

But I am not very clear on all that.

Best regards,

Doug
 