
MP-E 65mm f/2.8 1-5x Macro lens, diffraction blur

Hi folks,

This is a demonstration of the effect that (narrow aperture) diffraction has on microdetail. I chose the MP-E 65mm because it shows the effect very clearly and because there is a lot of misinformation about its optimum working aperture. The diffraction principle, however, applies to all lenses.

In the following composite (sorry for its size, but it was necessary) you can see the same subject (the residual 'skeleton' of a small leaf) with identical focus, but shot at different apertures (f/5.6 - f/16). They are 100% zoom (= actual size) crops from 4 EOS-1Ds Mark III files. The 'Dsd' legend stands for 'Diffraction Spot Diameter', and describes the diameter of the diffraction-induced blur spot (Airy disk to its first zero, for 550 nm wavelength), which I have expressed in 'sensel widths'. Since the Dsd is only dependent on the aperture number and wavelength, the blur effect of a given aperture will vary with the sensel size. Larger sensels integrate the incident light over a larger area, and can therefore stand a larger diffraction spot diameter before it exceeds their boundaries and spills detail over to neighboring sensels.
[Attached image: DiffSpotDiameter.jpg]
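As an illustration of the arithmetic behind the Dsd legend, here is a minimal sketch, assuming 550 nm (green) light, the usual first-zero Airy diameter of 2.44 x wavelength x f-number, and an approximately 6.4 micron sensel pitch for the 1Ds Mark III:

    # Diffraction spot diameter (Airy disk to its first zero), in sensel widths.
    # Assumptions: 550 nm green light, ~6.4 um sensel pitch (EOS-1Ds Mark III).
    WAVELENGTH_UM = 0.550
    SENSEL_PITCH_UM = 6.4

    def dsd_in_sensels(f_number):
        dsd_um = 2.44 * WAVELENGTH_UM * f_number  # first-zero Airy diameter, microns
        return dsd_um / SENSEL_PITCH_UM

    for n in (5.6, 8, 11, 16):
        print(f"f/{n}: Dsd = {dsd_in_sensels(n):.2f} sensel widths")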


The actual DOF at the magnification factor used (3:1) is very narrow (less than 1/4 mm at f/8), so one is tempted to select a relatively narrow aperture, but the images show that the detrimental effects of diffraction soon overshadow the gain in DOF. Beyond a certain value everything gets blurred, even the parts that are in focus, and total image quality is compromised. The only way to mitigate the per-pixel loss of microdetail from diffraction is to down-sample the image or to view it from a larger distance. Whether that is really a solution depends on the required output dimensions.
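For a rough check of the "less than 1/4 mm at f/8" figure, here is a sketch using the textbook close-up approximation DOF = 2 * c * N * (m + 1) / m^2; the conventional 30 um CoC limit is an assumption on my part:

    # Approximate total depth of field for close-up work:
    # DOF = 2 * c * N * (m + 1) / m^2
    # c = CoC limit (mm), N = marked f-number, m = magnification.
    def macro_dof_mm(f_number, magnification, coc_mm=0.030):
        return 2 * coc_mm * f_number * (magnification + 1) / magnification**2

    print(f"{macro_dof_mm(8, 3):.2f} mm")  # ~0.21 mm at f/8 and 3:1, i.e. under 1/4 mm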

My experience with current CCD and CMOS sensors that use a Bayer CFA is that once the diffraction spot diameter for green light exceeds approx. 1.5 times the sensel pitch, loss of microdetail becomes significant enough to cause a quality problem that cannot be solved by e.g. simple sharpening.
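Inverting the Airy-diameter formula gives the f-number at which this 1.5x rule of thumb is reached for a given pitch; a sketch (the pitch values are illustrative examples):

    # Solve 2.44 * wavelength * N = 1.5 * pitch for N (green light, 550 nm).
    def f_number_limit(pitch_um, wavelength_um=0.550):
        return 1.5 * pitch_um / (2.44 * wavelength_um)

    for pitch in (2.0, 6.4, 8.2):  # e.g. small P&S, 1Ds Mark III, 5D/1D Mk II N
        print(f"{pitch} um pitch: limit near f/{f_number_limit(pitch):.1f}")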

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

Thank you for this very valuable demonstration and explanation, and thanks as usual for your careful attention to terminology and notation.
 
Hi Bart,

As the others have said, thank you for a valuable demonstration, and for using one of my most coveted lenses!

Your illustration also goes to show the incredible strain the 1Ds Mk III puts on the output of any lens - so much so that I would seriously question the sanity of *ever* increasing the resolution of a successor. Tests with near-perfect lenses like the 200 f/2.8L show that visible effects of diffraction set in as early as f/5.6!

This reason alone is why I stick to using my 1D Mk II N for Macro work, since I *always* work at at least f/16 for DOF - higher resolution would not gain me anything, and I'd probably use a 1Ds Mk III in sRAW mode all the time for Macro work - best noise performance (through the pixel "binning") and no real loss of detail at, say, f/22 anyway.

Anyway, it's a bit of a conundrum to make large Macro prints of small-aperture images, but at smaller sizes things do clean up nicely - for example, this recent image of mine was taken at f/25, and even at 100% with careful sharpening it's not too bad, but a 1Ds Mk III would yield *no* additional detail, and it'd look absolutely terrible at 100% from that camera.

"Almost"
[Attached image: Almost____by_philosomatographer.jpg]


EOS 1D Mk II N, EF 100mm Macro
 

Asher Kelman

OPF Owner/Editor-in-Chief
Dawid,

You will still be better off having a very good stage and taking repeated images at f/8 with further progression of the objective, and thus the plane of focus sweeping through the three-dimensional structure of the subject. Then you simply add up the slices with a suitable program. Now if you have a finer pixel pitch, then just take that into account. You can also down-sample each frame if you wish to get rid of noise.

As long as one is aware of what's going on, you'll be fine!

Asher

BTW, what on earth are you doing to that creature? Looks like some sandwich!
 
Hi Asher,

I agree that careful focus bracketing at f/8 would be the way to go, but, alas, "in the field" who has such a luxury!

This poor creature is not being subjected to any action of mine, I found it (long dead) where it had obviously tried to crawl through the gap at a small window over a doorway.

I was standing on a chair, hand-holding the camera with one hand and hand-holding the flash with the other. Single-hand-held macro, about as far from a focus-bracketing studio setup as is humanly possible!

Alas, my style is to capture things as and when I see them, and I happily accept the laws of physics and the limitations they bring. This image is heavily cropped, to about 2.5:1.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Actually, Dawid, this spider, trapped the way it is, makes a remarkable subject. I hope your wife/landlord or you did not clean it off!

There seems to be no reason why you should not be able to use a stand to take careful pictures. Why not? This is not a model charging union rates!

As is, it's fascinating.

Could be that the spider saw you and recognized your new camera,

"Oh my God (that's the spiders invisible, ever-present friend), that's the Canon 1Ds Mark II N, I'm overwhelmed with the beauty of such a machine! We spiders cannot compete! I have pain in my left arm and tightness in my chest. I'm having a heart attack!".

Foolish spider, it was not a heart attack, just bloating after eating too many termites swarming from the heat wave! How stupid can a spider be to eat while stuck in a crack!

Now Dawid, had you watched closely, you could have photographed the entire story.

Asher
 
Hi Asher,

This creature (which, incidentally, is a moth, I am sorry to say, and not a spider!) is at my work (our offices are in an old, converted house) and nobody else knows about it, so it is still there - unlikely to be cleaned away, considering the obscure location I just happened to spot it in.

I could go and explore it further, but, to be honest, it feels a little weird coming back to explore a dead moth. But most of all, because I do not have an MP-E 65mm to do so with! (currently limited to 1:1)
 

John Sheehy

New member
Your illustration also goes to show the incredible strain the 1Ds Mk III puts on the output of any lens - so much so that I would seriously question the sanity of *ever* increasing the resolution of a successor. Tests with near-perfect lenses like the 200 f/2.8L show that visible effects of diffraction set in as early as f/5.6!

That does not mean that f/5.6 is the point of no further returns, though. Why should the situation that shows the first signs of "strain" in only one aspect dictate what is needed? The point where all wavelengths are sufficiently oversampled by the red- and blue-filtered pixels is the point where the returns become insignificant. That is a few stops away from the point you refer to above. I get pixel-level distinction on my P&S cameras at f/5.6, with pixel pitches of only 2 microns or less; pixel densities 12x as high as the 1Dsmk3!
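To put rough numbers on that oversampling point, a sketch comparing the green-light Airy diameter at f/5.6 against two illustrative pixel pitches (a small P&S and the 1Ds Mk III):

    # Airy first-zero diameter at f/5.6, expressed in sensel widths for two pitches.
    AIRY_UM = 2.44 * 0.550 * 5.6  # about 7.5 um
    for pitch_um in (2.0, 6.4):
        print(f"{pitch_um} um pitch: {AIRY_UM / pitch_um:.1f} sensel widths at f/5.6")
    # ~3.8 sensel widths at 2 um vs. ~1.2 at 6.4 um: the small pixels
    # oversample the blur spot rather than being limited by it.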

This reason alone is why I stick to using my 1D Mk II N for Macro work, since I *always* work at at least f/16 for DOF - higher resolution would not gain me anything, and I'd probably use a 1Ds Mk III in sRAW mode all the time for Macro work - best noise performance (through the pixel "binning") and no real loss of detail at, say, f/22 anyway.

Binning does *not* reduce the noise of an image. It only reduces the noise of the output pixel, which gets noisier the larger it is viewed (a fact that most people conveniently ignore while engaging in pixel-centric logic). Binning only gives the illusion of noise reduction because people forget to view the resultant image at the same size as they would have viewed the original. It also brings the frequency distributions of noise and real image detail together so that noise is mistaken for detail (you would never mistake the two if the noise was finer and sharper than the optics allowed the detail to be, and you could remove the noise if you wanted to without affecting the real detail at all).

To say "no real loss of detail" is a stretch. "No major loss of detail" would be more accurate. If you only had a fraction of a second to view a crop of the same subject size, upsampled, then you might not even notice at f/22 that the pixel were 20 microns instead of 8.2 or 6.4, but when you get to spend quality time with a print or monitor display, subtle qualities arise over time. Finer noise is always a positive thing, IMO. Finer bayer artifacts, too.

Anyway, it's a bit of a conundrum to make large Macro prints of small-aperture images, but at smaller sizes things do clean up nicely - for example, this recent image of mine was taken at f/25, and even at 100% with careful sharpening it's not too bad, but a 1Ds Mk III would yield *no* additional detail, and it'd look absolutely terrible at 100% from that camera.

100% pixel views are ultimately meaningless; they are only meaningful when you are totally cognizant of the fact that a certain number of them will comprise your subject, and that the more of them there are, the less each is responsible for acuity and cleanliness (and vice versa).

I find your attitude very disturbing. Not just that you, one person, think that way, but that many, many people think that way as well. If your monitor had 600 PPI, and the images were all upsampled to fill the screen or window instead of downsampled (or "down-damaged", as happens in photoshop), I think you'd be singing a different tune, as would everyone else who feels that way. The 1Dsmk3 version would be cleaner, and sharper, with less color moire (when applicable), as it would if you did both cameras and (properly) upsampled to any common subject size (IOW, cropping the 1Dsmk3 to 1.26x as the 1Dmk2 does, physically).

Your perceptions are being distorted by the existing state of technology. You should ultimately *NEVER* be able to resolve a single pixel with ease for normal subject viewing, unless you're more interested in mosaics than near-analog photography.

You have fallen into the illogical rut that is so fashionable these days; to bemoan higher pixel densities because of lower pixel level acuity (or higher pixel level noise), because you have a defective microscope (100% pixel zoom on an *extremely* coarse display). It is the digital imaging parallel to the proverbial "missing the forest for the trees".
 

Eric Hiss

Member
Thanks

Hi Bart,

Thanks for providing the test. I'm just curious how you did the focus on the test? I have read that with some lenses the taking-aperture focal point is different than the wide-open focal point - this can give focus errors with cameras that focus wide open and then stop down for the shot, which is a whole lot of them. If the focal point shifted closer due to the aperture adjustment, you would see the same results here in your examples. Just curious....

Regards,
Eric
 
I find your attitude very disturbing. Not just that you, one person, think that way, but that many, many people think that way as well. If your monitor had 600 PPI, and the images were all upsampled to fill the screen or window instead of downsampled (or "down-damaged", as happens in photoshop), I think you'd be singing a different tune, as would everyone else who feels that way.

I never implied that f/5.6 is the point of no further returns - I merely said that the effects become noticeable. I, in fact, have very low standards as to what constitutes the "point of no further returns", which is why you'll see me happily shooting Macro at f/22 or smaller (if needed), even though others often proclaim to never ever do so.

John, please, I think you misinterpreted my (admittedly hasty) post. The point I was making is that, for the type of image I posted (at f/25), which is hopelessly diffraction-blurred at a pixel level, there will be almost no perceivable benefit to be gained in shooting that kind of Macro with a higher-resolution camera, because even in a 10x15 inch printout of my posted image, the diffraction blur is quite obvious - much more so than any form of noise or lack of resolution. Sure, when scaled to a much larger size, the image will have a different 'character' because of the lesser pixel-level up-sampling needed with the 1Ds3, and if you shot at high ISO the noise grain will be finer, but it will still, in fact, be blurred (i.e. not really suitable for large output sizes in the first place).

Also, I do not in fact harbour the belief that pixel binning reduces the noise of an image - though I hastily stated as much through not explaining my thoughts better. What I meant was: at the same output size, shooting a 21MP image or a 5MP pixel-binned image would, for all practical intents and purposes, not yield significantly different output when taking a photograph at, say, f/25 or even smaller.

Now, as Asher has pointed out, one is deliberately sacrificing sharpness for depth-of-field, but I firmly believe, when I am out in the field doing Macro, that I usually do not have the time or the luxury to do focus-stacking, unless I become an Entomologist of the 'different kind' that collects (and takes back to study) versus a non-intrusive observer. For the output size that I target (which is not a massive fine-art print) and the depth of field I require, I am firmly diffraction limited, nothing else.

Of course, in other scenarios, like the recent further EF 200mm L samples I posted, I am 100% limited by my camera's sensor, and the 1Ds3 could extract so much more information from the projected image.
 
I'm just curious how you did the focus on the test?

Hi Eric,

I used a macro focusing rail to get the approximate distance right (viewfinder image in focus), and then I used Live View (the actual sensor image on the camera LCD) at the 10x setting for fine focus with the lens.

I have read that with some lenses the taking-aperture focal point is different than the wide-open focal point - this can give focus errors with cameras that focus wide open and then stop down for the shot, which is a whole lot of them.

Yes, that's possible, but it mostly happens due to spherical aberration. Once the aperture closes, the edge-of-lens contribution is reduced, which might shift the focal plane a bit. However, this particular lens is a specialist lens for macro photography, and it has quite small diameter lenses/lens groups, which don't allow much spherical aberration to begin with (in contrast to e.g. the EF 50mm f/1.2).

Also, in practical use of the 65mm, I have not observed a shift of the focus plane. After your question I did re-check it in the following animation. I unfortunately recorded it as sRAW, so actual DOF seems twice as large as it actually is at the pixel level, and diffraction half as bad:
[Attached animation: DIFFvsDOF.gif]
It is a detail of some fine-print on a 50 Euro note, shot at 5x magnification, at an angle of 50 degrees (40 degrees from surface normal). The width of the crop represents approx. 0.96 mm.

If the focal point shifted closer due to the aperture adjustment, you would see the same results here in your examples. Just curious....

The increase of DOF would counteract that effect if it were present, which it isn't. Curious is good, though!

Bart
 

John Sheehy

New member
I unfortunately recorded it as sRAW, so actual DOF seems twice as large as it actually is at the pixel level, and diffraction half as bad:

Large pixels or virtual pixels do not make diffraction less, and smaller pixels do not make diffraction greater. Diffraction is the same in both cases, but overall sharpness is greater with the smaller pixels, as lower pixel density is a blur contributor. Blurs add just like noise: in quadrature. They combine most efficiently (destructively) when they are equal.

Low pixel densities are a *source* of blur, which adds in quadrature with diffraction and other blur sources in the lens (and the AA filter as well).
 

Doug Kerr

Well-known member
Hi, Asher,
Could you explain quadrature?

I'm sure he means that the two phenomena combine as the square root of the sum of their squares. Thus if we had 3 units of one phenomenon and 4 units of another, the measure of their combined effect would be 5 units.

The two phenomena discussed here combine that way essentially because they each have a "Gaussian" distribution; that is, neither phenomenon produces a "blur circle" with a sharp edge. We describe their size using a certain measure of the Gaussian distribution, related to the statistical concept of the standard deviation. When the two phenomena "pile on", we again use that same statistical measure to state the "diameter" of the resulting blur figure. And that works as I described at the outset (just as it does for standard deviations themselves).

This is the same way that the standard deviations of two quantities with a random variation combine to tell us the standard deviation of the sum of the two quantities.

The metaphor "in quadrature" alludes (among other things) to two vectors at right angles. The length of the vector that represents their combination is the square root of the sum of the squares of their individual lengths (think of Pythagoras' theorem).

The concept of "quadrature" is extended to many other situations than a geometric right angle. Its use its use to describe the root-sum-square addition involved here is often found in certain areas of statistical analysis..

Best regards,

Doug
 
Large pixels or virtual pixels do not make diffraction less, and smaller pixels do not make diffraction greater.

I didn't say that diffraction itself was less. The effect of downsampling to 50% of the actual capture dimensions will however reduce the diameter of the diffraction spot by half, therefore the effect of diffraction is half as bad at the pixel level. The same applies to DOF resolution: the unsharpness at the pixel level (the PSF and the CoC) is reduced by half (depending a bit on the algorithm used), so the apparent DOF is twice as good as it would be at the actual capture size (and some single-pixel detail will probably be lost).

Bart
 

Doug Kerr

Well-known member
Hi, Bart,

I didn't say that diffraction itself was less. The effect of downsampling to 50% of the actual capture dimensions will however reduce the diameter of the diffraction spot by half, therefore the effect of diffraction is half as bad at the pixel level.
Do you mean that the diameter of the diffraction figure is half the size, when expressed in pixels?

The same applies to DOF resolution: the unsharpness at the pixel level (the PSF and the CoC) is reduced by half (depending a bit on the algorithm used), so the apparent DOF is twice as good as it would be at the actual capture size

If we establish a CoCDL for DoF calculation purposes that is a certain fraction of the image dimensions (as is often considered appropriate), then the DoF would not be influenced by a change in the pixel pitch.

Of course, if we base our choice of CoCDL on pixel pitch (which some feel is appropriate), then there would be an effect.

Best regards,

Doug
 

Doug Kerr

Well-known member
Hi, Will,

max F-stop (wide open) at 5x magnification with the MP-E 65?

Is there any info on this?

Any information about what about it?

The maximum relative aperture of that lens (at any magnification) is f/2.8.

The maximum effective relative aperture at a magnification of 5x is approximately f/16.8.

Best regards,

Doug
 

NikolayAbadjiev

New member
I sold my MP-E a few years ago, but I still remember roughly that the manual states the following:

"Effective f-number=f-number x (Magnification+1). "

So... at a 3:1 camera setting, f/2.8 equates to an effective f-stop of f/11. f/5.6 becomes f/22, f/11 becomes f/44, etc.

So, if the experiment was not conducted at 1:1 magnification, it's quite hard to get a real f/5.6 and f/8...
 
I sold my MP-E a few years ago, but I still remember roughly that the manual states the following:

"Effective f-number = f-number x (Magnification + 1)"

So... at a 3:1 camera setting, f/2.8 equates to an effective f-stop of f/11. f/5.6 becomes f/22, f/11 becomes f/44, etc.

So, if the experiment was not conducted at 1:1 magnification, it's quite hard to get a real f/5.6 and f/8...

Hi Nikolay,

That's close, but not entirely correct. What is correct is that one only uses a smaller percentage of the original image circle, and its total luminous flux is "spread thin", as it were, by the magnification. That's why for a given aperture number (the ratio between focal length and physical aperture) the 'effective' exposure is reduced by a factor of (M+1)^2 compared to infinity focus. So a 1:1 magnification will require 4x more exposure than at infinity, either by increasing the light source power, or by a longer shutter time, or by a wider aperture, or by a combination of those. It's also known as the 'bellows factor' on view cameras and macro bellows or extension rings.
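A sketch of both formulas side by side (assuming a symmetrical lens, i.e. a pupil magnification of 1, which is an idealization):

    # Effective f-number and bellows (exposure) factor at magnification M.
    def effective_f_number(marked_n, magnification):
        return marked_n * (magnification + 1)

    def bellows_factor(magnification):
        return (magnification + 1) ** 2  # exposure increase vs. infinity focus

    print(effective_f_number(2.8, 5))  # 16.8 -- Doug's figure for the MP-E 65 at 5x
    print(bellows_factor(1))           # 4 -- 4x more exposure needed at 1:1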

What does not change with magnification is the physical size of the aperture in relation to the focal length. It's that physical aperture size and the cone-of-light angle due to focal length that determine the geometry of the diffraction blur spot (caused by the incident light angles at the edges of the aperture, and how the resulting wavefronts combine at the focal plane). As mentioned, I see exactly the same diffraction effect with a 1:∞ magnification as with a 5:1 magnification, barring the optical capabilities of different lens designs. Macro lenses are just different due to the limited DOF (depth of focus in this case), and the reversed optical design needed for best performance.

When you look up diffraction on the Web, reputable sources mention aperture size & shape and focal length (combined in the f/# ratio or as a Numerical Aperture angle), and the wavelength of light as the only determining factors for diffraction.

Bart
 

NikolayAbadjiev

New member
What does not change with magnification is the physical size of the aperture in relation to the focal length. It's that physical aperture size and the cone-of-light angle due to focal length that determine the geometry of the diffraction blur spot...

Thanks for the clarification, Bart! Everything you've said makes a good point, so I'll take you as a "reputable source" :)
 
I could go and explore it further, but, to be honest, it feels a little weird coming back to explore a dead moth. But most of all, because I do not have an MP-E 65mm to do so with! (currently limited to 1:1)

I do not like to take the 100/2.8 Macro past f/10 at 1:1, and past f/14 at 1:2 (just enough DoF for a bee). These values were found by experimentation (controlled, as I had a helpful dragonfly resting on a cloudy day).

I find the 100/2.8 to be too soft by f/18 at 1:2.

a thought,

Sean
 

Asher Kelman

OPF Owner/Editor-in-Chief
If your monitor had 600 PPI, and the images were all upsampled to fill the screen or window instead of downsampled (or "down-damaged", as happens in photoshop), I think you'd be singing a different tune, as would everyone else who feels that way. The 1Dsmk3 version would be cleaner, and sharper, with less color moire (when applicable), as it would if you did both cameras and (properly) upsampled to any common subject size (IOW, cropping the 1Dsmk3 to 1.26x as the 1Dmk2 does, physically)..........

You have fallen into the illogical rut that is so fashionable these days; to bemoan higher pixel densities because of lower pixel level acuity (or higher pixel level noise), because you have a defective microscope (100% pixel zoom on an *extremely* coarse display). It is the digital imaging parallel to the proverbial "missing the forest for the trees".


John, Bart and Doug,

Is it possible that, although 3-7 micron or smaller sensels now resolve the diffraction-disturbed detail (at the point of focus) of a tiny spot of light, this is still better for us, in practice, than working with larger sensels (which make us oblivious to these ever-present radial disturbances)?

Is there a simple formula to guide us as to when there is no further benefit in reducing aperture or reducing sensel pitch?

Lastly, since we know the pattern by which light is diffracted, can't we now have software solutions to overcome that and calculate where the rays would be with no diffraction?

Asher
 

Jeremy Waller

New member
Hello Bart,

I like the explanation.

Quote:

" My experience with current CCD and CMOS sensors that use a Bayer CFA is that once the diffraction spot diameter for green light exceeds approx. 1.5 times the sensel pitch, loss of microdetail becomes significant enough to cause a quality problem that cannot be solved by e.g. simple sharpening."

The above (1.5 times the sensel pitch) may be used to determine the CoC to compute an optimum hyperfocal distance for deep landscape focusing (in the case of a Canon 5D this returns a CoC of 12.3 micrometres). How does one reconcile this number with the 30 micrometre CoC that is so commonly used?
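For comparison, here is a sketch of the standard hyperfocal formula H = f^2 / (N * c) + f with both CoC choices; the 50 mm / f/8 combination is just an illustrative example:

    # Hyperfocal distance, everything in mm.
    def hyperfocal_mm(focal_mm, f_number, coc_mm):
        return focal_mm**2 / (f_number * coc_mm) + focal_mm

    # A 50 mm lens at f/8 on a 5D (8.2 um pitch):
    print(f"{hyperfocal_mm(50, 8, 0.0123) / 1000:.1f} m")  # ~25.5 m, 1.5x-pitch CoC
    print(f"{hyperfocal_mm(50, 8, 0.030) / 1000:.1f} m")   # ~10.5 m, 30 um CoC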

TIA,

Jeremy.
 

Doug Kerr

Well-known member
Hi, Jeremy,

How does one reconcile this number with the 30 micrometre CoC that is so commonly used?
First, let me note that, to avoid misunderstanding, I use three different terms for three different, albeit related, things encountered in discussions of depth of field:

• Circle of confusion: the spot on the image created from a point in the object by imperfect focus.

• Circle of confusion diameter: the diameter of an actual or hypothetical circle of confusion.

• Circle of confusion diameter limit (I abbreviate this "CoCDL"): the circle of confusion diameter we adopt as indicative of the "limit of 'acceptable' blurring from defocus". (I do not usually abbreviate the prior two, for obvious reasons!) It is this we will speak of here.

Now, with that out of the way!

The widely used CoCDL value of perhaps 30 um (for the full-frame 35-mm format) is derived via a long trail of assumptions about human visual acuity and image viewing conditions (angular size of the viewed frame). It is predicated on the concept of an "acceptable" degree of blurring, and is largely arbitrary.

This criterion, arbitrary as it is, can be normalized over format size by speaking of a CoCDL of about 1/1400 of the diagonal dimension of the frame (about 31 um for the full-frame 35-mm format size).
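That normalization is easy to apply to any format; a sketch (the APS-C dimensions shown are illustrative):

    import math

    # CoCDL as 1/1400 of the frame diagonal, in microns.
    def cocdl_um(width_mm, height_mm):
        return math.hypot(width_mm, height_mm) / 1400 * 1000

    print(f"{cocdl_um(36, 24):.1f} um")      # ~30.9 um, full-frame 35 mm
    print(f"{cocdl_um(22.2, 14.8):.1f} um")  # ~19.1 um, Canon APS-C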

It does not take into account at all the "perfect focus" resolution of the particular imaging system involved (since in many cases that is well better than the "acceptable blurring" criterion).

I discuss this at some length in my paper, "Depth of Field in Film and Digital Cameras", available here:

http://dougkerr.net/Pumpkin/index.htm#DepthOfField

Best regards,

Doug
 

Jeremy Waller

New member
Thanks Doug,

It looks as if the selection is quite fluid ("acceptable sharpness"?). I will read the referenced article. I have done some minor investigations so that I can react confidently when I need to take a picture in a fairly short time - I have noticed that lighting conditions in the morning and evening can change so fast that I miss the best shot.

Regards,

Jeremy.
 