The danger of "equivalent f-number"

Jerome Marot

Well-known member
Photons are discrete quantum elements of light. You cannot get less than one photon. As you subdivide the pixels to be smaller, you eventually come close to that limit.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Photons are discrete quantum elements of light. You cannot get less than one photon. As you subdivide the pixels to be smaller, you eventually come close to that limit.
So what does this mean in practice, Jerome? One photon is really the same as no photons at all, as a lone photon can’t be counted with enough certainty.

When we talk about “low light” photography, aren’t we really talking about some minimum flux, say at least 100 photons entering each sensel well? That would give a count accurate to roughly +/- 10%, and the noise might then be low enough to differentiate one pixel from another.

Having too many pixels in very low light defeats our wish to count accurately and discriminate edges.
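The “+/- 10%” figure follows from Poisson (shot-noise) statistics: the standard deviation of a photon count N is √N, so the relative noise is 1/√N. A quick sketch, purely illustrative:

```python
import math

def relative_noise(n):
    """Poisson shot noise: the std dev of a count n is sqrt(n)."""
    return math.sqrt(n) / n  # = 1/sqrt(n)

for n in (1, 100, 10000):
    print(f"{n:6d} photons -> +/- {relative_noise(n):.0%}")
# 100 photons -> +/- 10%, matching the figure above;
# a single photon -> +/- 100%, i.e. indistinguishable from noise.
```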

Asher
 


Jerome Marot

Well-known member
When we talk about “low light” photography, aren’t we talking about some minimum flux that could give say at least something like say 100 photons to enter each sensel well, as that would give a count of +/- 10% accuracy and the noise might be low enough to differentiate one pixel from another.
Just as an exercise, and taking your figure of 100 photons per sensel: is that figure of 100 for the lighter parts of the picture or for the darker parts? How many more photons do the lighter parts get if we have 100 for the darker parts? How many photons do we have for the darker parts if we have 100 for the lighter parts?

What happens if we use smaller sensels, say going from a Sony A7 (6µm sensel pitch) to an iPhone 8 (1.22µm sensel pitch)?
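To put numbers on that last question: if collected photons scale with sensel area (assuming square sensels and equal fill factor, a simplification for illustration), the pitch figures quoted above give:

```python
pitch_a7 = 6.0       # µm sensel pitch, Sony A7 (from the post above)
pitch_iphone = 1.22  # µm sensel pitch, iPhone 8

# Collected photons scale with sensel area, i.e. pitch squared,
# assuming equal fill factor and quantum efficiency.
area_ratio = (pitch_a7 / pitch_iphone) ** 2
print(f"Each A7 sensel collects ~{area_ratio:.0f}x the photons per exposure")
# ~24x: 100 photons in an iPhone 8 sensel corresponds to ~2400 in an
# A7 sensel, or only ~4 in the iPhone where the A7 counts 100.
```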
 
One photon per pixel maximum is exactly the goal of the sensors Gigajot is working on. This is pretty much the ideal imaging solution, which is why they're working on it.
 
This is an interesting thread, especially the way it morphed from geometric optics to quantum physics. Thoroughly enjoyable!
But I still think Doug started with a very valid observation: the equivalent focal length is helpful to compare the field of view, but deriving an equivalent f-number is a misleading thing.

Reginald
 
No, deriving equivalent f-number is a super-helpful thing, since it preserves that which is important - aperture.

A camera is a box with a hole on one side and a light-sensitive material on the other side (we'll ignore that the hole is filled with optics to focus the light since it's not relevant to this topic).

The two things the photographer can control are the distance between the hole and the sensor (the focal length) and the diameter of the hole (the aperture). The focal length combined with the size of the sensor produces the angle-of-view. The diameter of the hole controls how much light gets in.

These two parameters control everything about the image - angle of view, signal-to-noise ratio, depth-of-field, and impact of diffraction.

If you want to relate one system to another, equating the angles-of-view and apertures (or comparing them if they are not equal) is the ONLY correct way to do it. Adjusting one without adjusting the other is just flat out dishonest.

The Panasonic FZ-10 is a compact point-and-shoot whose zoom corresponds to a 35mm-420mm angle-of-view on full-frame. The lens is f/2.8 throughout. Do you think that camera controls DOF as well and works as well in low light as a full-frame camera with a 400/2.8 on it? Of course it doesn't. That's because the lens is actually 6mm-72mm f/2.8, not 420mm f/2.8. 72mm at f/2.8 gives you an aperture of 72/2.8 = 25.7mm. A real 400/2.8 has an aperture of 400/2.8 = 142.9mm.

The larger aperture diameter is why a full-frame camera using a 400/2.8 is way better in low-light and produces shallower DOF than the FZ-10. If you want to equate the FZ-10 to a full-frame lens that can do the same job, you HAVE TO adjust focal length while keeping aperture constant and that means f-stop changes too (since f-stop = focal length / aperture).

The FZ-10's lens is 6mm-72mm and f/2.8 which is equivalent to 35mm-420mm and f/16.3 on full-frame. Both cameras will capture the same scene with the same light captured, the same signal-to-noise ratio, the same depth-of-field and the same effect of diffraction on the image. Calling it 35-420mm equivalent f/2.8 is just flat wrong.
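The arithmetic in the two paragraphs above is easy to check: aperture diameter is focal length divided by f-number, and the equivalent f-number falls out of holding that diameter fixed while scaling focal length by the crop factor. A minimal sketch using the figures from the post:

```python
def pupil_diameter(focal_mm, f_number):
    """f-number = focal length / entrance-pupil diameter."""
    return focal_mm / f_number

fz10 = pupil_diameter(72.0, 2.8)    # FZ-10 long end:  ~25.7 mm
ff = pupil_diameter(400.0, 2.8)     # full-frame 400/2.8: ~142.9 mm
print(f"FZ-10: {fz10:.1f} mm, full-frame 400/2.8: {ff:.1f} mm")

# Equivalent f-number: hold the pupil diameter fixed while the focal
# length scales by the crop factor, so the f-number scales too.
crop = 420.0 / 72.0                 # ~5.83x
print(f"FZ-10 long end = 420mm f/{2.8 * crop:.1f} equivalent")
```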
 

Doug Kerr

Active member
Hi, Lee Jay,

I may have been wrong to seem to "denounce" the concept of "equivalent f-number". You point out "interests" in which it is a useful metric.

My real original intent was to denounce the presentation of this metric with language that "suggests" (or at least doesn't prevent the user from unwarrantedly believing) that this is a metric that can be used in the same way we use f-number as a factor in exposure calculations but is somehow normalized over cameras of different format size. Because, as we know, the f-number is intended to be the generalized metric of the exposure impact of a lens' entrance pupil. (This description is intended to set aside the matter of the transmission fraction of the lens, although there is of course a generalized metric that takes that into account, the "T-stop".)

Now of course we both realize that, in setting up photometric equations, we can use the actual entrance pupil diameter, or the actual entrance pupil area, or the ratio of focal length to entrance pupil diameter (the f-number), or the product of the entrance pupil diameter and the square root of the exposure time, or any of many other things, so long as we put all the proper other ingredients in the equation. And of course we have a corresponding flexibility in how we set up the equations for reckoning depth of field, or for reckoning the related but wholly different matter of out-of-focus blur performance (the two often being confused with one another).

And in some situations, doing this one way or the other leads to the greatest clarity for those trying to follow the work.

In any case, the matters you discuss in this thread are very interesting. I haven't taken the time to conclude whether or not I agree with all your conclusions.

Thanks for all this work.

Best regards,

Doug Kerr
 
Yes...for "exposure", equivalent f-stop makes absolutely no sense. However, I would argue that exposure itself is largely (but not completely) irrelevant. At least it's not nearly as relevant as it was in the film days.

For me, the math is simple.

Focal length = f-stop * aperture

35mm-equivalent focal length = 35mm-equivalent f-stop * aperture

or

crop-factor * focal length = crop-factor * f-stop * aperture


If you're going to do something to one side of the equation, you have to do it to the other side too!

For me, this is helpful in comparing different systems with different sensor sizes. As I said above, I like to do this in terms of "performance envelope". I'm an airplane-guy so this is very similar to the same concept in airplanes (altitude vs airspeed envelope).

Here's an example. The blue is the "envelope" in which I can shoot with my current camera and lens system, and the little dash is the "envelope" in which an iPhone 6 can shoot (it's a diagonal line instead of a dot because I've included some cropping in this analysis). I put both into 35mm-equivalent terms but the same could be produced in terms of aperture and angle of view, for example.
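The envelope comparison can be sketched numerically: map each system's actual focal-length and f-stop range into 35mm-equivalent terms via the crop factor. The iPhone 6 figures below (4.15mm f/2.2, crop factor about 7) are commonly quoted specs used here as assumptions, and the full-frame 70-200/2.8 is just an example lens:

```python
def equivalent(crop, focal_mm, f_number):
    """Map an actual (focal length, f-stop) pair into 35mm-equivalent
    terms by scaling both by the crop factor."""
    return focal_mm * crop, f_number * crop

# Full-frame 70-200/2.8 (crop = 1): its envelope is unchanged.
print(equivalent(1.0, 70.0, 2.8), equivalent(1.0, 200.0, 2.8))

# iPhone 6, assumed specs: 4.15mm f/2.2 fixed lens, crop ~7.
print(equivalent(7.0, 4.15, 2.2))   # roughly 29mm-e at ~f/15 equivalent
```

Plotting such equivalent (focal length, f-stop) pairs for every lens you own is exactly the "envelope" chart described above.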



Personally, I've found this concept extremely useful when determining what new equipment I want to buy. Specifically, I just went from full-frame to crop and compared my old system to my new system this way to see what I'd be getting versus what I'd be giving up - before buying the new system and selling most of the old system.
 