The danger of "equivalent f-number"

Doug Kerr

Well-known member
I've spoken of this matter here before, but I bumped into it today while doing some study, and I thought it might be time to mention it again.

******

We have certainly gotten used to the concept of "equivalent focal length". We say that a lens with a focal length of 50 mm, on "our" camera, with a sensor whose linear dimensions are 0.625 times the dimensions of a "full-frame 35-mm"-size sensor, will give the same field of view as would a lens with a focal length of 80 mm (1.6 × 50) on a camera with a full-frame 35-mm-size sensor (and from now on I will for conciseness refer to that as an "ff35" camera).

And thus we say that "our" sensor has a "1.6 equivalent focal length factor".

So it at first does not seem to be a surprise that there has emerged (especially on a certain other forum) the notion of an "equivalent f-number". Here is how that works, considering "our" camera as discussed above: a lens on our camera with an aperture of f/2.0 is said to have an "equivalent f-number" of f/3.2 (2.0 × 1.6). (The "1.6" is in fact the equivalent focal length factor for our camera's sensor size.)
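
(As a minimal Python sketch of that arithmetic; the crop factor of 1.6 and the f/2.0 lens are simply the values from the example above:)

def equivalent_focal_length(focal_length_mm, crop_factor):
    # Focal length that would give the same field of view on an ff35 camera
    return focal_length_mm * crop_factor

def equivalent_f_number(f_number, crop_factor):
    # The "equivalent f-number": the actual f-number scaled by the same factor
    return f_number * crop_factor

crop = 1.6
print(equivalent_focal_length(50, crop))  # 80.0 mm
print(equivalent_f_number(2.0, crop))     # 3.2 -- though exposure still behaves like f/2.0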

Now, historically, our first concern with the f-number has been its effect on exposure. So does the notion above mean that a lens on our camera with an aperture of f/2.0 will have the same effect on exposure that a lens with an aperture of f/3.2 would have on an ff35 camera? No. A lens with an aperture of f/2.0 will have on our camera the same effect on exposure as a lens with an aperture f/2.0 would have on an ff35 camera.

No, the significance of the "equivalent f-number" has to do with our second concern with aperture: its effect on depth of field (and its counterpart, out-of-focus blur performance).

Now, if we consider two cases, involving two cameras with different sensor sizes, and all these factors are the same between the two cases:

• The lenses have focal lengths such that the fields of view of the two cameras are the same.

• Both cameras are focused at the same distance.

• In each case, we use a circle of confusion diameter limit (COCDL) to reckon depth of field that is a consistent fraction of some dimension of the sensor (maybe its diagonal dimension).
then, with an f/2.0 lens on "our" camera, we will get the same depth of field as we would get with an f/3.2 lens on an ff35 camera.
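
(A rough numeric check of that claim, as a minimal Python sketch using the ordinary thin-lens depth-of-field formulas; the subject distance of 3 m and the COCDL of 1/1400 of the sensor diagonal are assumptions chosen only for illustration:)

def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm):
    # Near/far limits of acceptable focus from the usual thin-lens formulas
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

ff35_diag = 43.3
crop = 1.6

dof_crop = depth_of_field_mm(50, 2.0, 3000, (ff35_diag / crop) / 1400)  # "our" camera
dof_ff35 = depth_of_field_mm(80, 3.2, 3000, ff35_diag / 1400)           # ff35 camera

print(round(dof_crop), round(dof_ff35))  # both come out within a few mm of each other (about 270 mm)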

Now, the good people on that other forum often are so kind as to provide us with graphs that show the variation with focal length in maximum available aperture for a certain camera (with a certain zoom lens aboard), over the focal length range of the lens. And that's very handy.

But this is not given in terms of f-number, but rather in terms of equivalent f-number, as described above. Hmm.

Now often two or more different camera-lens rigs have this plotted on the same graph. Is this to tell us how the "speed" of the lenses on the two cameras differs at different focal lengths (often a matter of great interest)? No, it is to show the effect of the lenses (wide open) on depth of field.

Now, if all the cameras plotted on the chart have the same sensor size, then the plot indeed also shows the difference between the "speed" of the two lenses at different focal lengths. But if, as is often the case, the cameras being compared have different sensor sizes, then we cannot determine from such a chart the difference in lens "speed" between the two cameras at some focal length.

I leave it to the reader to determine how to feel about this practice. I do not have a positive feeling about it.

A second supposed utility of the concept of "equivalent f-number" has to do with the effect of f-number on diffraction. The premise of that notion is complicated (and questionable), but I will not go into that here.

Best regards,

Doug
 

Doug Kerr

Well-known member
We see the risk in this scheme in a discussion on that forum of the differences between the Panasonic ZS100 and its predecessor, the ZS60.

The ZS60 lens has (at its smallest focal length) a maximum aperture of f/3.3. The ZS100 has (at its smallest focal length, the same as for the ZS60) a maximum aperture of f/2.8. So (at the lowest focal length) the ZS100 lens is about 0.5 stop "faster".

But the "equivalent focal length factor" for the ZS60 is about 5.7, while for the ZS100 (which has a substantially larger sensor) it is about 2.7.

So the infamous "equivalent aperture chart" shows, at the lowest focal length for both lenses, an equivalent f-number for the ZS60 of f/18.8, and for the ZS100, f/7.6. If those were real apertures, that difference would be about 2.6 stops. And based on that, the discussion says that "the ZS60 is effectively 2 stops slower than the ZS 100."

But of course the ZS60 is not in any way "2 stops slower" than the ZS100. Yes, its "effective aperture" (whatever that's worth) is actually 2.6 stops "less" than the effective aperture of the ZS100. But to use the term "slower" for this difference is wholly inappropriate, and could easily mislead the reader.

For example, the reader might think that, for the same ISO sensitivity and the same scene, we would need to use a 2-stop greater shutter time (2.6, really) to get the same exposure. But in fact, the shutter time needed would be only 0.5 stop greater.
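
(To make the arithmetic explicit, a small Python sketch using the figures quoted above:)

from math import log2

def stops(n1, n2):
    # Difference in stops between two f-numbers (light admitted per unit sensor area)
    return 2 * log2(n1 / n2)

print(round(stops(3.3, 2.8), 2))   # ~0.47 stop: the real difference in "speed"
print(round(stops(18.8, 7.6), 2))  # ~2.61 stops: the difference in "equivalent f-number"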

Best regards,

Doug
 

Jerome Marot

Well-known member
Now, the good people on that other forum often are so kind as to provide us with graphs that show the variation with focal length in maximum available aperture for a certain camera (with a certain zoom lens aboard), over the focal length range of the lens. And that's very handy.

That is very handy, but that bears less relation to reality than the good people believe. If you try two cameras with the same "equivalent aperture", you will be surprised to find out that the depth of field may be dissimilar. Actually, if you try two cameras with the same aperture, the same focal length and the same sensor size, you will also often notice that the depth of field is not the same.
 

Doug Kerr

Well-known member
Hi, Jerome,

That is very handy, but that bears less relation to reality than the good people believe. If you try two cameras with the same "equivalent aperture", you will be surprised to find out that the depth of field may be dissimilar. Actually, if you try two cameras with the same aperture, the same focal length and the same sensor size, you will also often notice that the depth of field is not the same.

Indeed, we would expect that to be so if:

• For depth of field purposes, we define "the limit of acceptable blurring" in terms of the blurring that significantly compromises the camera's resolution (rather than the traditional outlook in which "acceptable blurring" is defined in terms of what would be perceptible to the viewer viewing the image at an arbitrary size and arbitrary distance).

• The two cameras have a different pixel pitch (and thus a different pixel count).

Best regards,

Doug
 

Doug Kerr

Well-known member
It is sometimes said that the "diffraction limit" occurs for the same "equivalent f-number" on cameras with different sensor sizes. This is at least closely true if:

• We consider the "limit" to occur when diffraction significantly compromises the resolution of the camera. (We usually do that, but there is an alternative outlook that may be appropriate in certain cases.)

• The resolutions of the cameras bear the same relationship to their respective pixel pitches. (Maybe, in terms of line pairs per mm, 0.75/(2p), where p is the pixel pitch in mm, a common situation.)

• The pixel pitches of the cameras are the same as a fraction of the sensor size (that is, all the cameras have the same pixel count).
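
(A small numeric Python sketch of that point; the 20-megapixel count, the 3:2 aspect ratio, and the "Airy diameter = 2 × pixel pitch at 550 nm" criterion are arbitrary assumptions chosen only for illustration:)

WAVELENGTH_MM = 550e-6  # green light, in mm

def diffraction_limited_f_number(pixel_pitch_mm, k=2.0):
    # f-number at which the Airy disc diameter (~2.44 * wavelength * N) reaches k pixel pitches
    return k * pixel_pitch_mm / (2.44 * WAVELENGTH_MM)

for crop in (1.0, 2.7):                          # ff35 and (say) a 1"-type sensor
    diag = 43.3 / crop
    width, height = diag * 0.832, diag * 0.555   # 3:2 aspect ratio
    pitch = (width * height / 20e6) ** 0.5       # same 20-Mpixel count in both cases
    n_limit = diffraction_limited_f_number(pitch)
    print(crop, round(n_limit, 1), round(n_limit * crop, 1))  # last column is the same for both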

Best regards,

Doug
 

Jerome Marot

Well-known member
Hi, Jerome,

Actually, if you try two cameras with the same aperture, the same focal length and the same sensor size, you will also often notice that the depth of field is not the same.

Indeed, we would expect that to be so if:

• For depth of field purposes, we define "the limit of acceptable blurring" in terms of the blurring that significantly compromises the camera's resolution (rather than the traditional outlook in which "acceptable blurring" is defined in terms of what would be perceptible to the viewer viewing the image at an arbitrary size and arbitrary distance).

• The two cameras have a different pixel pitch (and thus a different pixel count).

Let me rephrase that, because I don't think you are on the same track as I am.

Actually, if you try two lenses with the same aperture and the same focal length on two cameras with either the same sensor or the same sensor size and resolution, you will also often notice that the depth of field is not visually the same, wherein I define depth of field as what appears similarly sharp or similarly unsharp to a panel of viewers.
 

Doug Kerr

Well-known member
Hi, Jerome,

Let me rephrase that, because I don't think you are on the same track as I am.

Actually, if you try two lenses with the same aperture and the same focal length on two cameras with either the same sensor or the same sensor size and resolution, you will also often notice that the depth of field is not visually the same, wherein I define depth of field as what appears similarly sharp or similarly unsharp to a panel of viewers.

Thank you for that clarification. That is what I call the "traditional" concept of sharpness as it is involved with the reckoning of depth of field.

To what can we possibly attribute that difference between cameras in which all pertinent parameters are seemingly the same (which I assume is included in your case)?

Best regards,

Doug
 

Jerome Marot

Well-known member
Thank you for that clarification. That is what I call the "traditional" concept of sharpness as it is involved with the reckoning of depth of field.

To what can we possibly attribute that difference between cameras in which all pertinent parameters are seemingly the same (which I assume is included in your case)?

Now I have your attention, it seems.

How did you determine that the depth of field is the same?
 

Doug Kerr

Well-known member
Now I have your attention, it seems.

Well, I've had breakfast, and my serum glucose level is up.

How did you determine that the depth of field is the same?

Do you mean "how did I define depth of field"?

There are two widely used approaches to defining what degree of blurring from imperfect focus is "just barely acceptable" (a pivotal aspect of reckoning the expected depth of field of some "camera setup").

Outlook 1 ("traditional"). This says that the limit of acceptable blurring is when the blurring is enough to be noted by an "average" person viewing the image at some arbitrarily-chosen size (perhaps 8" × 12") from some arbitrarily-chosen distance (perhaps 17").

This is quantified by saying that the "limiting diameter" of the blur figure is some arbitrary small multiple of what is generally considered to be the resolution of the human eye, as applies to viewing the image under the arbitrary conditions I mentioned above. That is then normalized so it can be spoken of in terms of the diameter of the blur circle on the sensor, with the adopted "limit" often coming out to be 1/1400 of the sensor diagonal dimension.

Outlook 2 (adopted by many modern workers). This says that the "limiting diameter of the blur figure" is where it degrades the resolution of the camera proper. That is usually quantified as the diameter of the blur circle being some (arbitrary) small multiple of the pixel pitch.

If we embrace this outlook, then for cameras whose pixel count is substantially greater than is "needed" for viewing under the arbitrary conditions I mentioned above, this outlook will lead to a reckoning of the expected depth of field that may be much smaller than if we proceed under Outlook 1.
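
(A small Python sketch of how far the two outlooks can diverge; the 45-megapixel ff35 camera and the multiple n = 2 are assumptions chosen only for illustration:)

sensor_diag_mm = 43.3
sensor_w_mm, sensor_h_mm = 36.0, 24.0
pixel_count = 45e6
pixel_pitch_mm = (sensor_w_mm * sensor_h_mm / pixel_count) ** 0.5

cocdl_outlook_1 = sensor_diag_mm / 1400   # "traditional": about 0.031 mm
cocdl_outlook_2 = 2 * pixel_pitch_mm      # n = 2 pixel pitches: about 0.0088 mm

print(round(cocdl_outlook_1, 4), round(cocdl_outlook_2, 4))
# Outlook 2 gives a COCDL roughly 3.5 times smaller here, and thus a much
# smaller reckoned depth of field.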

My discussion in an earlier post here was, I think, predicated on Outlook 2.

Best regards,

Doug
 

Jerome Marot

Well-known member
Do you mean "how did I define depth of field"?

Nope. Rather "How do you determine depth of field?". I bet it has something to do with the 10 Deutsche Mark banknote.

[Image: 10 Deutsche Mark banknote]
 

Doug Kerr

Well-known member
Hi, Jerome,

Nope. Rather "How do you determine depth of field?". I bet it has something to do with the 10 Deutsche Mark banknote.

[Image: 10 Deutsche Mark banknote]

Are you suggesting that, to determine the limit of depth of field in terms of a noticeable degradation of resolution (defining that as a certain arbitrary decline in the spatial frequency at which the MTF drops to some arbitrary level), we might proceed analytically by starting with the assumption that the radial distribution of luminance in the blur figure follows a famous function attributed to the handsome subject of that banknote?

If so, I would certainly agree.

In "practice", we rarely see that analytical approach followed! Rather, under "outlook 2", as I mentioned, workers often use the criterion, "when the expected 'diameter' of the blur figure exceeds n times the pixel pitch", where n has been chosen based on empirical observation of actual results.

Thanks for that nice link to history!

Best regards,

Doug
 

Jerome Marot

Well-known member
Are you suggesting that, to determine the limit of depth of field in terms of a noticeable degradation of resolution (defining that as a certain arbitrary decline in the spatial frequency at which the MTF drops to some arbitrary level), we might proceed analytically by starting with the assumption that the radial distribution of luminance in the blur figure follows a famous function attributed to the handsome subject of that banknote?

No, I am not suggesting that.

The handsome subject of the banknote did considerable work in optics. Great guy BTW, a true genius.
 

Jerome Marot

Well-known member
Good to have you back on board. Was breakfast good? Did you find out why, if you try two lenses with the same aperture and the same focal length on two cameras with either the same sensor or the same sensor size and resolution, you will also often notice that the depth of field is not visually the same, wherein I define depth of field as what appears similarly sharp or similarly unsharp to a panel of viewers?
 

Doug Kerr

Well-known member
Hi, Jerome,

Good to have you back on board. Was breakfast good?

Oh, very.

Did you find out why, if you try two lenses with the same aperture and the same focal length on two cameras with either the same sensor or the same sensor size and resolution, you will also often notice that the depth of field is not visually the same, wherein I define depth of field as what appears similarly sharp or similarly unsharp to a panel of viewers?

No, I did not. Do you know?

My guess is that it may have to do with the radial illuminance distribution on the blur figure (circle of confusion, in the proper use of that term).

Best regards,

Doug
 

Jerome Marot

Well-known member
My guess is that it may have to do with the radial illuminance distribution on the blur figure (circle of confusion, in the proper use of that term).

Yes, it does. But the formulas which are used to determine "equivalent f-numbers" are derived from the principles of Gaussian optics. Do the principles of Gaussian optics allow us to determine that illuminance distribution?
 

Doug Kerr

Well-known member
Hi, Jerome,

Yes, it does. But the formulas which are used to determine "equivalent f-numbers" are derived from the principles of Gaussian optics. Do the principles of Gaussian optics allow us to determine that illuminance distribution?

I think not. The usual derivation of the reckoning of the "diameter" of the circle of confusion in an imperfectly-focused system is done using equations based on the paraxial approximation, which is the hallmark of Gaussian optics. But, within that context, an aberration-free lens is assumed, as well as negligible intrusion by diffraction. And it is those assumptions that lead to the result (I think) that the circle of confusion is a figure with a distinct circular boundary, uniformly illuminated within that boundary.

If we wish to predict the actual distribution of illumination across the figure, we must take into account the recognized aberrations of the lens as well as diffraction. This analysis can be done within the paradigm of Gaussian optics. But we have to recognize that this paradigm gives us only an approximation of the actual paths of the rays.

Of course, the analyses actually used today in lens design are based on ray tracing that is "precise" and is not premised on the paraxial approximation (that is, it goes outside the paradigm of Gaussian optics).

I think.

As to the determination of "equivalent f-numbers": Indeed the f-number of a lens may be approximately calculated within the paradigm of Gaussian optics. But having in hand the f-number of a lens at, say, a certain focal length (in the case of zoom lenses), the so-called equivalent f-number is obtained merely by multiplying that supposed actual f-number by the ratio of (a) 43.3 mm (the diagonal dimension of the "full-frame 35-mm" format) to (b) the diagonal dimension of the format of the camera on which the lens is assumed to be mounted. There is no actual "optical calculation" involved in that stage.

In any case, I note that when the concept of "equivalent f-number" is actually used to compare the "expected" depth of field (or out-of-focus blur performance) of a lens (in terms of what lens would be expected to give the same performance in one of those two ways on a full-frame 35-mm size format camera), the result even theoretically is not exact, but will be quite close for most realistic cases.

For example, if we consider the size of the blur figure for these two actual setups:

a. Lens focal length 50 mm, f/4, camera focused at 5 m, "background object" at a distance of 20 m.

b. Lens focal length 25 mm, f/2, camera focused at 5 m, "background object" at a distance of 20 m.

We find that the reckoned diameter of the circle of confusion (on the sensor) for case (a) is 0.0947 mm and for case (b) it is 0.0471 mm. So if in fact the format in case (a) is twice the linear size of the format in case (b), and if we consider the diameter of the blur circle as it would be seen on equal-sized prints, then the two blur circles would be very nearly the same diameter.
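
(For anyone who wants to reproduce those figures, a minimal Python sketch using the usual thin-lens expression for the blur-circle diameter; the numbers are those of cases (a) and (b) above:)

def blur_circle_mm(focal_mm, f_number, focus_mm, object_mm):
    # Diameter of the blur circle on the sensor for an object behind the focus distance
    aperture_mm = focal_mm / f_number
    return aperture_mm * focal_mm * (object_mm - focus_mm) / (object_mm * (focus_mm - focal_mm))

print(round(blur_circle_mm(50, 4.0, 5000, 20000), 4))  # case (a): 0.0947 mm
print(round(blur_circle_mm(25, 2.0, 5000, 20000), 4))  # case (b): 0.0471 mm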


Best regards,

Doug
 

Jerome Marot

Well-known member
Well said, but the problem with this approach, I think, is that blur is also a subjective impression, and that impression also depends on the distribution of illumination across the figure. A theory like Gaussian optics, which neglects the shape of the blur figure, is thus likely to predict that some shapes are blurred when they are in fact visually recognisable, or that some shapes are recognisable when the distribution of illumination across the figure increases the blur to the point where the viewer no longer sees a sharp image.

In actual usage of depth of field (that is, with reasonably fast lenses), the effect is quite noticeable and explains why the images given by two lenses which should otherwise be similar can look surprisingly different.
 

Lee Jay

Now often two or more different camera-lens rigs have this plotted on the same graph. Is this to tell us how the "speed" of the lenses on the two cameras differs at different focal lengths (often a matter of great interest)? No, it is to show the effect of the lenses (wide open) on depth of field.

You're probably talking about me, Doug, and no, I'm talking about speed - ability to shoot in low-light. Equivalent f-stop works for that because what drives your ability to shoot in low-light is the amount of noise you're willing to accept in your final images, and that noise is controlled (assuming similar per-area sensor performance) by the absolute aperture of the lens for a given angle-of-view.

This is because the aperture controls how much light gets gathered from the scene, and since shot-noise usually dominates and SNR goes with sqrt(total light captured), aperture controls SNR. So, if you equate angle-of-view using equivalent focal lengths, you should also adjust f-stop, because f-stop = focal length / aperture diameter. That means equivalent f-stop = equivalent focal length / aperture diameter. Aperture is the real, physical thing you can control (for a given angle of view) and that's what actually controls the performance of the system in low-light.
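
(A minimal Python sketch of that argument; the two setups - 50 mm f/2.0 on a 1.6x-crop sensor and 80 mm f/3.2 on an ff35 sensor - are assumed for illustration, and only the proportionality SNR ~ sqrt(total light) is used:)

def aperture_diameter_mm(focal_mm, f_number):
    return focal_mm / f_number

d_crop = aperture_diameter_mm(50, 2.0)   # 25.0 mm
d_ff35 = aperture_diameter_mm(80, 3.2)   # 25.0 mm

# Same physical aperture diameter and same angle of view -> same total light
# gathered from the scene, hence (shot-noise limited, with similar per-area
# sensor performance) the same SNR, even though the f-numbers and the
# per-area exposures differ.
print(d_crop, d_ff35, (d_crop / d_ff35) ** 2)  # 25.0 25.0 1.0 (relative light gathered)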

I've found these graphs I make, which I call "performance envelopes", to be exceptionally useful.

Now, it turns out that DOF is controlled by the same parameters so this also works for that, but that was not my primary motivation for using this approach. Speed was.

It also turns out that, in digital photography, exposure means very little. This is because we can so effortlessly, in real-time or in post, turn up or down the "volume" (brightness) of our processed images, whereas with film, this was harder. So, with digital, f-stop means much less than it did with film, while equivalent f-stop (aperture) means a whole lot - it means signal-to-noise ratio and it means DOF.
 

Doug Kerr

Well-known member
Hi, Lee Jay,

You're probably talking about me, Doug, and no, I'm talking about speed - ability to shoot in low-light. Equivalent f-stop works for that because what drives your ability to shoot in low-light is the amount of noise you're willing to accept in your final images, and that noise is controlled (assuming similar per-area sensor performance) by the absolute aperture of the lens for a given angle-of-view.

Quite so if the pixel effective dimensions remain proportional to the overall sensor dimensions.

I have in fact recently done a lot of plots of a metric that "suggests" "potential" noise performance, based on just that outlook. It is discussed here:


Best regards,

Doug
 

Lee Jay

Hi, Lee Jay,



Quite so if the pixel effective dimensions remain proportional to the overall sensor dimensions.

I have in fact recently done a lot of plots of a metric that "suggests" "potential" noise performance, based on just that outlook. It is discussed here:


Best regards,

Doug

Yeah, I saw that. I disagree. In fact, I think pixel dimensions make very little difference.

The analogy I usually use goes back to a joke I heard when I was a kid.

Waiter: Would you like your pizza cut into 8 slices or 12?
Customer: Oh..8, please. I could never eat 12!

The way the light the sensor collects is cut up makes very little difference to overall performance. The total light captured is what really matters. There are secondary effects but most have been mitigated. In fact, I can make more than one argument that smaller pixels perform better in low-light than larger ones, given the same total sensor area and the same incoming light (scene illumination and aperture). In fact, the ideal situation is no pixels at all (or an infinite number, if you prefer), such that every incoming photon's location is individually recorded - no spatial averaging is done by the sensor whatsoever. In fact, that's a technology that Eric Fossum is working on.

 

Asher Kelman

OPF Owner/Editor-in-Chief
“So, with digital, f-stop means much less than it did with film, while equivalent f-stop (aperture) means a whole lot - it means signal-to-noise ratio and it means DOF.”

Interesting, Lee!

Doug and Jerome, do we agree on this?

Asher
 

Doug Kerr

Well-known member
Hi, Lee,

Yeah, I saw that. I disagree. In fact, I think pixel dimensions make very little difference.

But your concept is really based on that.

The analogy I usually use goes back to a joke I heard when I was a kid.

Waiter: Would you like your pizza cut into 8 slices or 12?
Customer: Oh..8, please. I could never eat 12!

One of my favorite jokes.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Asher,

“So, with digital, f-stop means much less than it did with film, while equivalent f-stop (aperture) means a whole lot - it means signal-to-noise ratio and it means DOF.”

Interesting, Lee!

Doug and Jerome, do we agree on this?

Not moi.

Of course, there was a very comparable (but of course not identical) issue with film, regarding "graininess" in the developed negative.

I think you know my outlook on this.

One can push the algebra around and thus "say it" in what sounds like different ways.

Best regards,

Doug
 

Lee Jay

Hi, Lee,



But your concept is really based on that.

My "concept" has nothing to do with pixel sizes. In fact, pixel dimensions aren't even mentioned.

I've done the comparisons between pixels with areas different by a factor of more than 25, and it still works well.

Angle of view and aperture diameter are what matters. Pixel sizes really don't.
 

Doug Kerr

Well-known member
If we wish to compare different camera setups, and if all the cameras involved have the same total number of pixels (and of course if all the sensors have equivalent quantum efficiencies, and equivalent areal efficiencies) then indeed the "effective f-number" is a consistent predictor of potential SNR for a given scene element luminance and exposure time.

Again assuming equivalent sensor quantum efficiencies and areal efficiencies for the sensors involved, the square of the ratio of pixel pitch to f-number (my metric "z") is a precisely equivalent predictor of potential SNR for a given scene element luminance and exposure time.

The equivalence comes forth from simple algebraic manipulation.
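
(A small Python illustration of that algebraic equivalence; the sensor sizes, f-numbers, and 24-megapixel count are assumptions chosen only for illustration:)

FF35_DIAG = 43.3
PIXELS = 24e6                              # same pixel count assumed for every setup

setups = [(43.3, 2.8), (43.3 / 1.6, 2.0), (43.3 / 2.7, 1.8)]   # (sensor diagonal mm, f-number)

for diag, n in setups:
    w, h = diag * 0.832, diag * 0.555      # 3:2 aspect ratio
    pitch = (w * h / PIXELS) ** 0.5
    z = (pitch / n) ** 2                   # the metric "z"
    n_equiv = n * (FF35_DIAG / diag)       # the "equivalent f-number"
    # The two columns differ only by a constant factor, so they rank (and
    # proportion) the setups identically.
    print(round(z * 1e6, 2), round(1.0 / n_equiv ** 2, 4))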

Best regards,

Doug
 

Lee Jay

I think I know where you're off the rails, Doug.

You're right.

But you're also wrong.

A simple thought experiment should do the job.

Imagine you have a sensor with 9 pixels, and you collect an image.

Now imagine each of the 9 pixels is itself made up of 9 pixels so you have an 81 pixel sensor the same size.

The first sensor will indeed be less noisy.

However, it's possible to average the individual 9-pixel blocks in the second sensor to recover the same information the first sensor captured - exactly.

On the other hand, it's not possible to manipulate the first sensor's data to produce that which was produced by the second sensor - the block-averaging of the large pixels destroyed the spatial data that the second sensor was able to capture.

So, a sensor with smaller pixels is noisier, but can be made to be as noise-free as a sensor with larger pixels by simple block averaging but the reverse is not the case.
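
(A tiny simulation of that thought experiment, as a Python sketch; the photon level is arbitrary and only Poisson shot noise is modelled:)

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100                          # arbitrary level per small pixel

# "Second" sensor: 81 small pixels (9 x 9), shot noise only
small = rng.poisson(mean_photons, size=(9, 9))

# Summing each 3 x 3 block recovers exactly the signal the corresponding
# large pixel would have collected (all the photons landing on its area)
binned = small.reshape(3, 3, 3, 3).sum(axis=(1, 3))

# "First" sensor: 9 large pixels, each collecting 9x the photons directly
large = rng.poisson(9 * mean_photons, size=(3, 3))

# Both follow the same Poisson(900) statistics: the binned small-pixel data
# is no noisier than the native large-pixel data
print(binned.mean(), binned.std())
print(large.mean(), large.std())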

Now, the crucial question becomes, is simple block-averaging the best noise reduction approach available? The answer is that it's actually the worst noise reduction approach available - everything else is better.

So, if we use a sensor with lots and lots of tiny pixels, and apply modern, advanced noise-reduction approaches to the output until they've squashed all the additional detail the small pixels recorded compared to the larger pixels, we'll end up with an image with less noise than the image from the large pixels. Further, those small pixels give us a choice we don't have with the larger pixels - we can choose to retain more detail at the cost of more noise if we want to. In fact, if we go the other way and use advanced noise reduction to reduce the noise in the small-pixel image to match the noise in the larger-pixel image, we'll end up with the same noise and more detail, precisely because modern noise reduction algorithms are vastly superior to simple block-averaging.
 

Asher Kelman

OPF Owner/Editor-in-Chief
There is another factor. As you subdivide the pixels to be smaller, the ripples in the waves of the photons as particles become resolved and add disturbance. So what you can gain from smaller pixels is limited by the aperture size.

Asher
 

Lee Jay

There is another factor. As you subdivide the pixels to be smaller, the ripples in the waves of the photons as particles become resolved and add disturbance. So what you can gain from smaller pixels is limited by the aperture size.

Asher

That's not a downside, it just means you're able to resolve to the diffraction limit. When the pixels are too large for that, resolution is lost because the blur caused by diffraction is added (geometrically) to the blur caused by sampling.
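
(A minimal Python sketch of that "added geometrically" point; the blur diameters are arbitrary illustrative values in micrometres:)

from math import sqrt

def total_blur(diffraction_blur, sampling_blur):
    # The two blur contributions combine roughly in quadrature (an approximation;
    # the exact combination depends on the shapes of the blur profiles)
    return sqrt(diffraction_blur ** 2 + sampling_blur ** 2)

print(round(total_blur(3.0, 6.0), 1))  # large pixels: sampling blur dominates (~6.7)
print(round(total_blur(3.0, 1.0), 1))  # tiny pixels: close to the diffraction limit (~3.2)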
 