A "simulation" of mitigation of diffraction with deconvolution

Doug Kerr

Well-known member
Bart van der Wolf recently performed what we might call a "simulation" of mitigating the effects of diffraction through the use of deconvolution. He posted an excellent report on his procedure, with links to the test images, on Luminous Landscape in July. (In fact only the diffraction is "simulated" - its mitigation is the real thing.)

I think the results are so important that I wanted them to be reported here. Bart is pretty busy looking after our interests in Europe, so I volunteered to give a synopsis here. Hopefully Bart will be able to pile on and fix anything I got wrong.

Bart started with a shot of a building face with many interesting areas and surfaces for photographic testing.

Then, working in a program used for astrophotography, he simulated the effects of diffraction on the image, as would have been caused by an aperture of f/32. He did this by convolving the original image with a "PSF kernel" - a digital description of the diffraction blur figure (the Airy disc). Well, actually, part of one (a square piece of it, 9 x 9 pixels - that was as big a map as the software could handle).

Then he took that "afflicted" image and attempted to "back out" the effects of the diffraction by deconvolving it with that same piece of the PSF kernel.
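To make the convolution step concrete, here is a minimal sketch in C of what convolving an image with a 9 x 9 PSF kernel involves (purely illustrative - this is not the astrophotography program Bart used, and the buffer layout and border handling are my assumptions):

Code:
/* Blur one grayscale channel with a 9x9 PSF kernel, as in the
 * simulated-diffraction step. The PSF must be normalized to sum
 * to 1 so overall brightness is preserved. (The Airy PSF is
 * symmetric, so the kernel flip of true convolution is omitted.) */
void convolve9x9(const float *in, float *out, int width, int height,
                 const float psf[9][9])
{
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            float acc = 0.0f;
            for (int ky = 0; ky < 9; ky++) {
                for (int kx = 0; kx < 9; kx++) {
                    int sy = y + ky - 4;   /* center the kernel on (x, y) */
                    int sx = x + kx - 4;
                    if (sy < 0) sy = 0;    /* clamp samples at the borders */
                    if (sy > height - 1) sy = height - 1;
                    if (sx < 0) sx = 0;
                    if (sx > width - 1) sx = width - 1;
                    acc += in[sy * width + sx] * psf[ky][kx];
                }
            }
            out[y * width + x] = acc;
        }
    }
}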

As to the result, you can judge that from this display of the image in its three stages:

[Attached image: diffraction_deconvolution_02A.jpg]

In the center (1), we have a small piece of the original image (at original resolution - nothing here has been downsampled). To its left (2), we see the same piece of the image that had been afflicted by the simulated f/32 diffraction. To its right (3), we see the final result of the effort to mitigate the impact of the diffraction.

The final result looks pretty well "healed" to me - and then some.

I'm very excited about the implications of this "simulation".

If you want to read Bart's original report on the LL forum, here is the link (the report is at reply no. 66 - I don't know how to link to that directly):

http://www.luminous-landscape.com/forum/index.php?topic=45038.60

The three images, in 16-bit form and original resolution, can be accessed from there, along with a table showing the 9x9 matrix describing the diffraction PSF.

We owe Bart much gratitude for providing this clear and powerful demonstration of the potential of what may be a very valuable image enhancement technique.

Best regards,

Doug
 

Asher Kelman

OPF Owner/Editor-in-Chief
Doug,

Thank you for following Bart's work so closely. Bart is not a boaster, but this work is really helpful. I personally think that lenses won't need to be as perfect as physics, engineering, and clever mathematical approximations and guesses take the lens to the next level. I have the feeling that we are only seeing the edge of a major new part of optics, one that will become more important as mass-produced plastic optics get into millions of new cameras each year. I see automatic production-line calibration of each individual phone camera becoming routine.

We in the "middle field" of high-cost professional cameras are not likely to get the latest mathematical advances before grandma with her digicam. She has the power of mass production and a high turnover rate of models, with the capacity to absorb new advances in image processing from major players like Texas Instruments. They supply the chips for HP, Kodak and other cameras that, for example, rescue the white dress of her grandchild from harsh sunlight, while grandpa, nearby in the shade of a large oak, is rescued from the dark.

Strange as it may seem, these advances are naturally expected in the highest-end optics for rockets, space and astronomy, but at the mass-market end of consumer gadgets similar advances are being commoditized.

Still, as Bart has demonstrated, there's a huge potential for new tools in pro photography to give us solutions we never imagined were possible. Case in point: I have always felt that, since diffraction is based on physics, the degradation it causes would be amenable to mathematical solutions. Thanks, Bart, for helping demonstrate this potential for us!

Asher
 

Doug Kerr

Well-known member
Hi, Asher,

Asher Kelman said:
Still, as Bart has demonstrated, there's a huge potential for new tools in pro photography to give us solutions we never imagined were possible.

Absolutely. The question is which software manufacturer will be the first to make this available for use in our current contexts (such as your need to attain a large DoF).

Best regards,

Doug
 

David Ellsworth

New member
Doug,

This is an impressive little test. But what would it look like if the simulation were made more realistic? Add some simulated shot noise (modeled after ISO 100 noise on some camera) after applying the simulated diffraction, but before doing the deconvolution. The noise will probably be vastly magnified.

A solution would be to take multiple exposures and stack them before doing the deconvolution... however, if you're going to take multiple exposures anyway (meaning you have a subject that's not moving), you have another option: focus stacking, at a larger aperture (where diffraction is not dominant).

David
 
Bart van der Wolf

David Ellsworth said:
Add some simulated shot noise (modeled after ISO 100 noise on some camera) after applying the simulated diffraction, but before doing the deconvolution. The noise will probably be vastly magnified.

Hi David,

The main goal of the demonstration was to show that deconvolution does restore resolution, as opposed to edge-enhancement techniques such as Unsharp Masking or High Pass filtering.

Deconvolution sharpening has to make a trade-off between discriminating signal from noise and enhancing both, and as such it is more successful at restoring low-noise images. Some algorithms can discriminate and restore the signal rather than 'restoring' the noise (because the noise has a different Point Spread Function, PSF); others need help from us in the form of edge masks (only processing areas with a higher signal-to-noise ratio).
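As a point of reference for how that trade-off is usually formalized (the classic Wiener deconvolution, rather than the Richardson-Lucy algorithm used for this demonstration): instead of dividing the blurred image's spectrum G outright by the PSF's transfer function H, one computes

    F = G * conj(H) / (|H|^2 + K)

where K is a small constant chosen from the noise level. With K = 0 this reduces to straight inverse filtering; a larger K suppresses the noise amplification at the cost of fine detail.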

Another possibility is a prior noise-reduction step, although that might hurt the convolved signal, making it harder to deconvolve.

Stacking procedures have the restriction of being useful only for stationary subjects (or those where the ghosts can easily be removed), in which case one could also use a longer, lower-ISO exposure to get a better S/N ratio in the first place. Noise is mostly a lack of photons, plus some added electronic circuitry noise.
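To put a number on the stacking option: averaging N frames of a stationary subject reduces the random noise by a factor of sqrt(N), i.e. sigma_stack = sigma_single / sqrt(N), so a 4-frame stack yields roughly the same S/N improvement as a single exposure four times as long.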

Cheers,
Bart
 

David Ellsworth

New member
Hi Bart,

Well, I decided to try this myself (using the links to the original files and diffraction kernel you provided in the Luminous Landscape thread). I used a small C program I wrote a while ago that does deconvolution by taking the Discrete Fourier Transform (using the FFTW library) of both the PSF kernel and the convolved image, dividing the image's DFT by the kernel's DFT (i.e., multiply by inverted matrix), then doing an inverse DFT on the result. The algorithm takes just over 2 seconds to operate on a 1201x1201x3x16bit image on my computer, single-threaded. It doesn't have to do any iterations (unless you count "one on each color channel") or any kind of successive approximation. (Yes, I did cheat a bit; to avoid edge effects, I enlarged the image, giving it a black border.)
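For anyone who wants to experiment, a minimal single-channel sketch of this kind of FFT-division deconvolution with FFTW 3 looks something like the following (purely illustrative - the eps threshold, data layout and PSF normalization are placeholders rather than exactly what my program does):

Code:
/* Deconvolve one channel in place via spectral division (FFTW 3).
 * img: H*W blurred samples; psf: H*W samples with the kernel center
 * wrapped to index 0 and values summing to 1. Compile with -lfftw3 -lm.
 * Including <complex.h> before <fftw3.h> makes fftw_complex the
 * native C99 double complex type. */
#include <complex.h>
#include <fftw3.h>

void fft_deconvolve(double *img, double *psf, int H, int W, double eps)
{
    int nc = H * (W / 2 + 1);   /* complex coefficients per r2c transform */
    fftw_complex *IMG = fftw_alloc_complex(nc);
    fftw_complex *PSF = fftw_alloc_complex(nc);

    fftw_plan f_img = fftw_plan_dft_r2c_2d(H, W, img, IMG, FFTW_ESTIMATE);
    fftw_plan f_psf = fftw_plan_dft_r2c_2d(H, W, psf, PSF, FFTW_ESTIMATE);
    fftw_plan back  = fftw_plan_dft_c2r_2d(H, W, IMG, img, FFTW_ESTIMATE);

    fftw_execute(f_img);
    fftw_execute(f_psf);

    /* Element-by-element spectral division. The eps guard refuses to
     * divide by near-zero coefficients: that avoids blow-ups but leaves
     * ringing, while lowering eps sharpens the restoration and amplifies
     * noise in equal measure. */
    for (int i = 0; i < nc; i++)
        if (cabs(PSF[i]) > eps)
            IMG[i] /= PSF[i];

    fftw_execute(back);
    for (long i = 0; i < (long)H * W; i++)
        img[i] /= (double)H * W;   /* FFTW transforms are unnormalized */

    fftw_destroy_plan(f_img); fftw_destroy_plan(f_psf); fftw_destroy_plan(back);
    fftw_free(IMG); fftw_free(PSF);
}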

Bart, my FFT division result (here) looks virtually identical to your 1000 iterations of Richardson-Lucy, except with a bit more ringing and sharper restoration of the venetian-blind detail (maybe Richardson-Lucy applies a deringing filter that has a side effect of softening the venetian blinds?). But add a teensy bit of noise, and it begins to be obvious in the deconvolution. Add more noise and it looks really bad.



[Attached image: comparison grid. Click on the image to see one with higher JPEG quality and an extra middle row.]

The left column of the top row is a crop of the original image. The central column is the image with the diffraction kernel applied (with no attempt made at gamma correction), with a barely visible amount of noise added in the middle row and an obvious amount of noise added in the bottom row. (The noise is Gaussian, with no attempt made to simulate a Poisson distribution.) The right column is the deconvolved result.
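For reference, adding that kind of Gaussian noise to a channel takes only a few lines (a Box-Muller sketch; the rand()-based deviates are illustrative, not exactly what I used):

Code:
#include <math.h>
#include <stdlib.h>

/* Add zero-mean Gaussian noise with standard deviation 'sigma' to a
 * channel of n samples (values assumed normalized to [0, 1]), using
 * the Box-Muller transform to turn uniform deviates into Gaussians. */
void add_gaussian_noise(double *ch, long n, double sigma)
{
    const double two_pi = 6.283185307179586;
    for (long i = 0; i < n; i += 2) {
        /* uniform deviates in (0, 1), avoiding log(0) */
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double r = sqrt(-2.0 * log(u1));
        ch[i] += sigma * r * cos(two_pi * u2);
        if (i + 1 < n)
            ch[i + 1] += sigma * r * sin(two_pi * u2);
    }
}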

Denoising before the deconvolution does result in some improvement, but with a significant loss of detail, and it doesn't look anywhere near as good as the no-noise version.

Cheers,
David
 

Mike Shimwell

New member
Another point that is relevant, and which for accuracy I should be clear that Bart pointed out and Doug and David will be well aware of(!), is that there is a very significant difference between deconvolving a known PSF - i.e., seeking to invert a known PSF exactly - and applying a 'general' or approximating deconvolution algorithm to an unknown PSF. Bart has demonstrated (iirc) that the latter approach can add some value, and it is used in various sharpening approaches, but it is potentially a very different animal from the former approach.

David's demonstration of the impact of noise at the wrong place in the chain is amusingly graphic and useful. Try high levels of smart sharpen on some colour neg film some day...

I'm not sure that some engineering or mathematical 'magic' will replace high-quality lens design for best performance. The techniques may be used in a more accurate and targeted fashion to improve 'consumer' cameras (some already are), but I suspect that better results will be obtained by starting with better optics and more controlled/optimal PSFs.

That all approaches magnify post-PSF noise is not a surprising result, as the noise looks like detail to an algorithm that is seeking to return detail to the place where it belongs.

Just a few thoughts in passing. Thanks Doug and David for your input.

Mike
 

David Ellsworth

New member
Oops, I must make a correction to something I said earlier (and the edit privilege has expired).

I used a small C program I wrote a while ago that does deconvolution by taking the Discrete Fourier Transform (using the FFTW library) of both the PSF kernel and the convolved image, dividing the image's DFT by the kernel's DFT (i.e., multiply by inverted matrix), then doing an inverse DFT on the result.

I was confusing this with another project. It's just a straight element-by-element division of complex numbers, not matrix inversion or multiplication. I divide the PSF kernel's FFT by the FFT of a single white pixel against a black background, and then divide the convolved image's FFT by this result. (Technically I should still say "DFT", since the FFT is only an algorithm for calculating the DFT, but "FFT" is the more familiar acronym, so perhaps I should use it instead.)
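(For what it's worth, the DFT of a single unit impulse at the origin is sum_n delta[n] * exp(-2*pi*i*k*n/N) = 1 at every frequency k, so that first division effectively just normalizes the kernel's spectrum against the transform's scale factors and the pixel's actual white value.)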

Looking at my algorithm, I realized I had it doing a range check that refused to do the division if the divisor was below a certain threshold. This was the reason for the ringing artifacts, and by reducing this threshold by a factor of 1024 I was able to make my deconvolved result look better than any result anybody on the Luminous Landscape thread got...
See the improved deconvolution result here (1.2 MB JPEG). However, with this lowered threshold, the algorithm amplifies noise by orders of magnitude more! It only works because there are 16 bits per channel of precision. Just the noise from quantizing the convolved image to 8 bits per channel results in this mess (2.0 MB PNG).

I should probably move this discussion to the LL thread.

Cheers,
David
 
Bart van der Wolf

David Ellsworth said:
... by reducing this threshold by a factor of 1024 I was able to make my deconvolved result look better than any result anybody on the Luminous Landscape thread got...

Hi David,

Indeed, noise is a significant obstacle to successful deconvolution. The divide-by-(near-)zero risk will also cause difficulties if not addressed adequately. Your deconvolved result of the original (low-noise) image looks great, but I have a nagging feeling that you performed all calculations in floating-point accuracy, also keeping the intermediate convolved result in floating point instead of, e.g., 16-bit/channel limited accuracy. What a full floating-point workflow demonstrates is that the convolution/deconvolution approach can (under ideal circumstances) restore virtually the exact same result as the original data.

However, we are usually confronted with 16-, 15-, or 8-bit/channel results that need restoration, so that should be the basis of our deconvolution attempts. Relatively low-noise images improve the success rate further.

David Ellsworth said:
However, with this lowered threshold, the algorithm amplifies noise by orders of magnitude more! It only works because there are 16 bits per channel of precision. Just the noise from quantizing the convolved image to 8 bits per channel results in this mess (2.0 MB PNG).

Noise rarely helps, that's for sure. Especially when restoring resolution, it complicates the situation and reduces our chances of success. There are, however, algorithms that cope better than others. Also, floating-point calculations will postpone the introduction of rounding errors to the final result, where they matter less.
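To put some numbers on that: quantizing to 16 bits/channel rounds each sample to within half a level, roughly 1 part in 131,000 of full scale. That seems negligible, but dividing a spectral coefficient by a transfer-function value with a magnitude of, say, 0.00001 amplifies exactly such errors a hundred-thousand-fold - and at 8 bits/channel the rounding error is 256 times larger still, which is why the quantization test above falls apart so dramatically.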

David Ellsworth said:
I should probably move this discussion to the LL thread.

I guess LL has more technically interested members, but don't underestimate the OPF audience ;-)

Cheers,
Bart
 