
PhotoAcute: How to downsample SuperResolution images to 50% size

Michael Fontana

pro member
I think that one is not too bad, and has quite an advantage:
it's a *simple* one, and can therefore be adapted without problems, if required, while the action is running:


tambo_midtone_74-127_128_.jpg


After trying quite a lot of possibilities, I think it's not a bad road.
Custom PS and bicubic downsize included.
 

StuartRae

New member
Hi Bart,

Actually, Stuart, the center needs to maintain contrast (although not be boosted) and it doesn't look too bad. Which/how many steps did you use?

I tried several combinations. I feel embarrassed that, despite my previous life as a scientist, I didn't document the steps.
I did find, however, that using odd numbers (not necessarily primes) gave a slightly better result. I think the example I posted was 1000 --> 901 --> 801 --> 701, etc.

Another interesting result was that while the incremental down-sizing gave good results with bicubic, it was absolutely dreadful with Lanczos, but when going straight from 1000 to 500 Lanczos was far superior.
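
As a rough sketch for anyone wanting to reproduce this comparison outside Photoshop, using ImageMagick's convert from a command prompt (the file names and step sizes are made up, following the 1000 --> 901 --> ... sequence above, and Catrom is only an approximation of Photoshop's bicubic):

rem Stepped reduction with a Catrom (bicubic-like) filter:
convert rings_1000.png -filter Catrom -resize 901x901 -resize 801x801 -resize 701x701 -resize 601x601 -resize 500x500 rings_stepped.png

rem Single-step Lanczos reduction for comparison:
convert rings_1000.png -filter Lanczos -resize 500x500 rings_direct.png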

While Radial blur might work for this particular image, it's not universal enough, too image specific.

I guessed as much.

Regards,

Stuart
 

Ray West

New member
I've tried it in Qimage, using Lanczos, and I was going to try the other sizing options. However, I have found that using 'undo' does not bring back the previous image, so I have temporarily given up on that.
 

Ray West

New member
Another, but possibly unrelated thing I've noticed. If you open the image in IrfanView (maybe other image viewers do the same) you can resize the view, i.e. the image zooms in and out as you adjust the window frame. This is a very quick and dirty implementation of a resizing algorithm. However, I think it's doing its calculations starting from one corner of the image, since the interference patterns do not match around the centre circle. It may also be integer based; it works in more or less real time on all images. Now, in Photoshop, this auto zoom facility does not exist, afaik, but I wonder if the resizing calculations are started from the centre of the image, or one corner. It is not just the amount of memory available that can limit the size of an image, but the resolution of the maths. I've no idea, as yet, what is happening with Qimage.

Best wishes,

Ray
 
"tambo_midtone_74-127_128_.jpg"

After trying quite a lot of possibilities, I think it's not a bad road.
Custom PS and bicubic downsize included.

Michael, which filter settings did you use? It looks balanced and it suppresses aliasing artifacts quite well. Unfortunately it also loses some modulation towards the limiting resolution, especially in the diagonal direction. Maybe something can be done to fix that, but whatever we do it will remain a compromise.

Bart
 

Michael Fontana

pro member
Michael, which filter settings did you use? It looks balanced and it suppresses aliasing artifacts quite well. Unfortunately it also loses some modulation towards the limiting resolution, especially in the diagonal direction. Maybe something can be done to fix that, but whatever we do it will remain a compromise.
Bart

Bart, you're quite right: whatever we do, it will remain a compromise.
Well, that's quite my experience as well: it has to be well balanced, and for myself I'd prefer to be cautious with the downsize sharpening, as this is only a part of the entire story.

Besides your test image, I've been running the 380 MB TIFFs as well:
I think I can live with that compromise.

The action is here; the amount of High Pass has to be optimised towards 160 for real-world photos, though. This version shows some "ghosting" in real-world photos when big dark areas sit beside bright ones.
 

Michael Fontana

pro member
So with tambo, version 1.1 (High Pass 152, instead of the lower amount in the downloadable action), it looks like this:


tambo's-vers-1.1.jpg


Yes, a compromise, but much better than the example from post 1, my very early action.
 
Bart, did you see Tim's LR/Mogrify plugin?

I had not seen it until you mentioned it. Interesting.

Would it be possible to optimize all these steps we try so hard by installing ImageMagick and scripting Tim's plugin to do the rest?

If it becomes possible to call the ImageMagick Resize command from Photoshop, it would solve many issues. ImageMagick's Resize command is flexible, it covers upsampling and downsampling, and most importantly it does it as it should be done (proper antialiasing protection for downsampling, a choice of various low-pass filters (and their inherent trade-offs), and 'virtual pixels' to prevent edge pixel artifacts). Several of its strengths come from the proper implementation/setting of other parameters. ImageMagick is available for all major OS platforms.
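
As a rough illustration of what such a command line looks like (plain ImageMagick from a command prompt, not a Photoshop integration; the file names are made up):

rem 16-bit Lanczos-filtered downsample to 50% of the linear size:
convert input.tif -depth 16 -filter Lanczos -resize 50% output.tif

rem Same, with a Mitchell filter (softer, with less ringing):
convert input.tif -depth 16 -filter Mitchell -resize 50% output.tif

(The 'virtual pixels' Bart mentions come into play with ImageMagick's -distort operations; see the EWA example further down the thread.)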

Although I understand that Tim's efforts are focused on Lightroom, a Photoshop variation would be phenomenal.

Bart
 

Michael Fontana

pro member
Bart
you can call PS (and make it work) from LR with an action, too, if you save them as a droplet.

That's what I'm doing all the time with these big PhotoAcute DNGs:

From these downsize actions I create droplets, and everything runs automatically once the RC's settings are adjusted: look here, post 65
 

nicolas claris

OPF Co-founder/Administrator
Bonjour Michael

Of course I have downloaded your action (from post #37 above).
In CS3, I get an error saying that "Tone Mapping" is not available (both on 16 and 8 bit files)…
Why do you set LZW compression at the end? Of course I could change this (I prefer no compression) but I'd like to know the reason for your choice ;-)

Any hint on the "Tone mapping" error?

[EDIT]Same error with CS2…[/EDIT]
 

Michael Fontana

pro member
Huho…
Hohu

Bonjour ;-)

Yep, Bart is correct: sorry, I have so many actions now that I forgot to delete that part. But for Bart's test image, the Photomatix plugin wasn't used: --> test image in post 32.

So you might delete that tone mapping part and use the upper section only, up to "reduce saturation".

BTW: the Photomatix plugin, as used in that script, enhances contrast from the 3/4-tones down to black. For the real-world photo tests, I always test the entire action, with the idea to select it as a droplet out of LR. Best workflow.

Best would be to have the tone mapping done before downsampling, but that's not possible with a 380 MB file; PS crashes then.

I still can't decide whether another version, running a midtone sharpening prior to smooth/downsample, makes sense: IMHO, the advantage would be to have the midtones better separated before downsampling. Of course, it has to be gentle...
 
Bart and anyone else who can answer,

An update to this thread? What's the best practice now?

Hi Asher,

Well, the principles haven't changed, so ImageMagick is still very good (also for other downsampling tasks). Another benefit is that IM allows one to convert to a gamma 1.0 space, downsample, and go back to e.g. gamma 2.2 when done (which works fine for regular continuous tone images). I use a small batch file which activates with a right mouse click on an image file in the Windows Explorer, and resamples to Web size (800px maximum) under a new name or directory.
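
A minimal sketch of such a batch file (assuming ImageMagick's convert.exe is on the PATH, and treating the source as a simple gamma 2.2 encoding rather than the exact sRGB curve; the _web suffix and quality setting are just example choices):

@echo off
rem Usage: webresize.bat image.jpg
rem Linearize (gamma 1.0), downsample to max 800px, re-encode to gamma 2.2.
convert "%~1" -depth 16 -gamma 0.4545 -filter Lanczos -resize 800x800 -gamma 2.2 -quality 92 "%~dpn1_web.jpg"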

Any other application that allows a 'Lanczos 3' type of filtering will also do well for downsampling, and Lightroom 3 does a decent enough job.

For an exact downsample to 50%, one of the earlier mentioned pre-blurs or filter kernels can be used in Photoshop.
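
One way to sketch that pre-blur route outside Photoshop (ImageMagick again; the blur sigma is a tuning guess, not a value from this thread): at exactly 50%, a Box filter simply averages each 2x2 block of pixels, and a slight Gaussian pre-blur acts as the anti-alias low-pass:

rem Slight Gaussian low-pass, then an exact 2x2 average to 50%:
convert input_1000.png -gaussian-blur 0x0.7 -filter Box -resize 50% output_500.png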

So it depends on one's workflow which is the preferred route to take...

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
For myself, Bart, I'm stuck in Photoshop, as I haven't as yet employed Viveza U Point technology for local adjustments, and I hardly ever use Lightroom. So I'd love any updated actions you have for sharpening, especially if they incorporate Nicolas Claris' actions too.

Asher
 
I have been using this plug-in: http://www.fsoft.it/imaging/en/Esempi.htm
as it creates a very sharp image, full of detail. The ringing can be a problem
with some image types; to remedy that, mix with the Mitchell algorithm from ImageMagick :)

Hi Luiz,

Indeed, the results are sharp, but also with potential artifacts (even at a smooth setting). It's hard to beat ImageMagick, which is nothing special; it just does things a bit more properly.
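
In case it helps anyone trying Luiz's recipe, the mixing step could be done in ImageMagick itself, e.g. by averaging the plug-in's sharp result with a Mitchell-filtered downsample (file names invented; the 50/50 weight is just a starting point):

rem Mitchell-filtered downsample of the original:
convert original.tif -filter Mitchell -resize 50% mitchell_small.tif

rem Average it with the (same-sized) sharp plug-in output:
convert sharp_small.tif mitchell_small.tif -evaluate-sequence Mean mixed.tif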

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Thanks Bart for the update.

You are our guardian and guide in focus, sharpening and downsizing. Your generous help is always appreciated.

Asher
 
Time for another update, perhaps? Have the boundaries been moved again?

Hi Asher,

The boundaries have not moved much, if at all, but that's partly because we are combating physics (discrete sampling with optimal down-sampling filters).

The only further development is in ImageMagick, which now has better support for Elliptically Weighted Average (EWA) resampling, which does an even more accurate (but slower) resampling (not only in 2 orthogonal tensor directions, but circular). Not much news with other solutions.
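
In command form, that EWA path is ImageMagick's -distort Resize operator, as opposed to the orthogonal two-pass -resize (a sketch with made-up file names; with -distort, the Lanczos filter becomes its cylindrical, Jinc-based analogue):

rem Orthogonal (tensor) two-pass resize:
convert input.png -filter Lanczos -resize 500x500 out_tensor.png

rem Elliptical Weighted Average (EWA) resampling, slower but more accurate:
convert input.png -filter Lanczos -virtual-pixel mirror -distort Resize 500x500 out_ewa.png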

Cheers,
Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Asher,

The boundaries have not moved much, if at all, but that's partly because we are combating physics (discrete sampling with optimal down-sampling filters).

The only further development is in ImageMagick, which now has better support for Elliptically Weighted Average (EWA) resampling, which does an even more accurate (but slower) resampling (not only in 2 orthogonal tensor directions, but circular). Not much news with other solutions.

Cheers,
Bart

Thanks so much, Bart, for keeping us up to date, even if the boundaries of our world have only inched forward.

:)
 
Bart,

Maybe I misunderstand what you are looking for. I did a downsample with artifacts eliminated.

Hi Arthur,

The ultimate goal of down-sampling is to produce an image that looks identical to the original, only smaller. Of course the highest spatial frequencies (e.g. single-pixel micro contrast) will have to be sacrificed, because we have fewer pixels, but preferably not much more than those. We would hope for a smaller image, with detail all the way into the corners, but without aliasing artifacts. That is unfortunately impossible, so we try to achieve a balance, keeping as much detail with as few disturbing artifacts as possible.

In a regular square pixel grid, the diagonal resolution is potentially up to 41% higher than in the horizontal/vertical directions (the diagonal of a unit pixel square is √2 ≈ 1.41 times its side). If we do not want to lose that when we down-sample, we must use good filters that both maintain that resolution and minimize artifacts at the same time.

To illustrate that, here is a so-called Fast Fourier Transform (FFT) of the original image, a transformation from the spatial domain into the frequency domain, which allows us a better analytical view of the spatial frequency content (low frequencies in the center, high frequencies in the corners, in a radial progression; brighter is more detail, darker is less detail):

Rings_FFT.jpg


As you can see, the original has a somewhat square region in the middle, where the lowest spatial frequencies are represented at a more or less uniform amplitude (all those details are equally well resolved). The further away from the center of the FFT, towards the edges and specifically the corners, the higher the spatial frequencies that are shown. Beyond a certain radius they are somewhat compromised, because the details get so small that they cannot be accurately resolved. The FFT representation (actually a Log Power spectrum) is very sensitive to small deviations, which are visually exaggerated for easier viewing.
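
For those who want to make such spectra themselves: ImageMagick can do it, provided it was built with the FFTW delegate (an HDRI build is recommended). A sketch following the log-power display described above (the input file name and the log scaling constant are only examples):

rem Forward FFT yields two images, magnitude and phase; drop the phase,
rem then log-scale the magnitude so faint high frequencies become visible:
convert Rings.png -fft -delete 1 -auto-level -evaluate log 10000 Rings_FFT.png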

So, ideally, our down-sampled version should look similar, only smaller.

Here is an ImageMagick EWA down-sampled example, and its FFT:

Rings_500x500px.png


Rings_500x500px_FFT.jpg


The FFT shows detail virtually all the way to the (highest spatial frequency) corners, with very minimal artifacting (just a bit at the horizontal/vertical edges).

For comparison, the FFT of your http://farm8.staticflickr.com/7300/11179981655_325c1eaa9f_o.jpg looks like this:

11179981655_325c1eaa9f_o_FFT.jpg


The darker edges and corners indicate lower amplitude high spatial frequencies, less micro detail (but obviously also less risk of artifacts).

Cheers,
Bart
 