
Deep Shadows to Bright Brights: taming the span of light!

Asher Kelman

OPF Owner/Editor-in-Chief
Here's an issue that divides the snap shooter from the experienced photographer! The latter, all dedicated planners, know what the problems are and how to face them.

Taming the span of light is critical to wedding photographers and in many other situations. The darkest blacks and the brightest whites need to be captured equally and represented without the picture looking artificial or posterized. Meanwhile the mid tones, which give an object its full life, need to be utterly preserved.

Here we deal only with image capture, since if the information isn't there, everything that follows is already compromised! For now, we'll forget that neither monitors nor printers can show all this information; at the least, we need the information so we can artfully allot it to what can be seen and printed so that it all looks real! So this discussion is not just for swans but for ALL such dark-to-bright situations. For the swan pictures, see the original thread here.

Still, Holly's swan presents an archetypal challenge.

Holly said:
However, there are clearly places on his feathers where the highlights have been lost. He's an affable creature who lives on the farm pond of an apparently equally affable farmer, so I'll be able to go back once there's a bit of a thaw and re-try this. Do you have any advice on how to avoid this light-on-white issue?

ArrogantSwan700px.jpg


We don't always solve every difficulty. However, we can do better, with studio lights or even by cheating after the fact!

Megapixel count is not relevant: you just need DR!

Camera with the highest DR you can afford: First, one can get a camera with a better spread from the darkest tones to the brightest whites, so the whole span can be captured without losing either.

Avoid "High Noon": OK, you have the camera; now shoot later in the day or (if you are more like Nicolas than me) get up very early. This is to avoid the brightest sun, which creates very dark shadows and terribly, uniformly over-illuminated whites with no subtle shades!

In this case looking at the histogram, you may well find that it's impossible.

Polarizing filter: cuts glare and enhances the white tonalities, but this may not be sufficient for your swan.

Graduated neutral density filter or split filter: use the grey part to pull down the bright sky while the clear part allows the foreground and middle ground to be properly exposed too. This is of no use when taking pictures of swans on dark water, or of a smartly dressed groom in dark clothes with a bride in gorgeous white!

Use shade or filtered light from trees: Here one can use the gentler light and add a reflector to put back some light. You might scout for an overhanging branch at the edge of the water and wait... or throw crumbs! Is that allowed?

Overhead screen: to cut down the bright light to a gentle, flattering, diffuse light.

Fill-in flash: to fill in the shadows.

Cheat:

1. Bracket enough pictures so the bird is perfectly exposed.

2. Take pictures of the water without the swan, obtaining ripples from a deftly tossed small stone!

3. In CS2, cut out the bird and its reflection, paste it into the water inside the ripples, and then blend the perimeter.

and "Bob's your uncle!".

MF camera with Digital Back!

The next alternative is a great thing to try as a present to yourself once you have excelled with your current camera:

Rent a MF digital camera for the day or a weekend. I'd have a whole lot of projects set up in advance to get your money's worth. You'll get another 1-3 stops of dynamic range. Also, with a Sinar back, you can use the Brumbaer tools (free, I believe) to grab an extra 2 stops by going behind the RAW data, so that none of the information is lost at all!

Wide dynamic range Set-Up for Practice:

Take white lace, an embroidered blouse, and white and silver fabric used for gowns, cushions and heavy drapes, and place them in an assembly of dark scarves and a jacket, with a branch on top.

This setup will allow you to practice spanning from bright to dark. Now take a shot and look at the histogram. No part of the curve should pile up against either border, or else the scene is over- or under-exposed!
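That histogram check can be sketched numerically. This is a minimal illustration, assuming an 8-bit tonal range; the synthetic frame and the clipping test are my own, not part of Asher's post:

```python
import numpy as np

def clipping_fractions(image, low=0, high=255):
    """Fraction of pixels pinned at either border of an 8-bit histogram.

    Pixels stuck at `low` suggest blocked shadows; pixels stuck at
    `high` suggest blown highlights.
    """
    img = np.asarray(image)
    return float(np.mean(img <= low)), float(np.mean(img >= high))

# A synthetic "swan on water" frame: mixed tones plus a blown patch.
rng = np.random.default_rng(0)
frame = rng.integers(1, 255, size=(100, 100))  # values 1..254: nothing clipped
frame[:10, :10] = 255                          # simulated blown highlights
shadows, highlights = clipping_fractions(frame)
print(f"shadow clipping: {shadows:.1%}, highlight clipping: {highlights:.1%}")
# prints: shadow clipping: 0.0%, highlight clipping: 1.0%
```

Any non-trivial fraction at either border is the "curve piling up" that the histogram shows visually.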

Good luck!

Asher
 

Holly Cawfield

New member
Asher, I think I'll choose the route of forwarding them to you first. There is only a small handful of them left that survived the computer file housecleaning a couple of weeks ago, and they've been waiting in the editing queue.

Holly
 

Ferenc Harmat

New member
Excellent question...

Why does it outperform it, though? It outperforms it because the pixel pitch is less demanding on the lenses.

Not really. It outperforms it simply because the sensor has a higher full-well capacity, so it can "swallow" significantly more photons, and because it can see "things" that the 30D's sensor cannot. 1D-series noise reduction still occurs as a single-stage process at the sensor (or surrounding electronics), whereas the 20D/30D employ a triple-stage noise-reduction technique that is necessary to ensure their current levels of dynamic range and signal-to-noise ratio, thus coping with their 6.4 micron photosites. Invariably, you will get less noise, at the expense of detail, in general. I saw this question coming, so I prepared a vis-a-vis comparison for you, so you can see this in reality.

Below is a (20sec, f/8, ISO100) exposure on the 30D, loaded with the EF 50mm f/1.4 (LEFT). On the RIGHT is a (30sec, f/8, ISO80) exposure on the 1D MarkII N (which is what "L" mode does), but loaded (surprise) with the EF 24-105 f/4L IS; same Manfrotto tripod, same position, both remote-wire triggered, no changes in the outdoor scene (late evening, same lights, same everything), fairly cold outside (the cameras were pretty cold, good for the sensors). According to R. Clark, the 1D MkII (and presumably the N too) achieves full-well capacity in "L" mode at around 80K photons per photosite. Both images were converted with C1 (and Magne's profiles for both cams, HiSat version), with ZERO noise reduction, chroma noise reduction set to +3, and ZERO sharpening, then equally sharpened with the same exact procedure in Photoshop.

Pay attention to the far-grass fine detail, on the slopes joining the Lake, as well as remote/distant bushes. Also, pay attention to the low-frequency component of noise (mostly chrominance "gobs") on the Red and Blue channels, visible to the bare eye on my calibrated, twin-screens workstation (switch to channel mode in Photoshop):

http://www.pbase.com/feharmat/image/74608480/original
http://www.pbase.com/feharmat/image/74608481/original


Both cameras have the same shot noise at the same ISO.

Not my bodies. The deep shadows on my 30D exhibit noise levels that I simply do not see on my 1D MarkII N, right from ISO100, whether from the cams' pipelines or from ZB/RIT (which is an emulation of the cams' pipelines). The N is evidently superior, especially in the depth of the shadows and the micro-contrast of detail. And it seems that the N's sensor can go down to sensitivity or gain levels that are not accessible, by any means, on our 30D. This is the actual reason for the above experiment: I have been studying, more closely, what ISO 50 means on the N, and, contrary to popular belief, ISO 50 seems to be, for all practical purposes, ISO 75-80, capable of delivering the same or *better* dynamic range than ISO100 and *less* noise (I am absolutely positive about the noise part).

The only dead-end is with Canon wide-angle lenses;

Far more important are the sensor dynamics and physics. Lenses are secondary, although Canon's WA lenses do leave a lot of room for improvement. You can have great lenses and second/third-class DR and S/N ratios above ISO400, thus leaving you trapped on a physically impaired dead-end street. The sensor's photosites are fundamental, essential for ensuring the quality we expect (that is, with today's technology and today's Quantum Efficiency levels).

This "bigger pixels are better" thing is nothing more than a myth and an illusion; they are only better for write speed and compact storage. Many people have a strong impression of more pixels in the same format being a killer of IQ, but no-one can demonstrate it, at the image or subject level.

In reality, bigger pixels are everything. And the above samples tell a pretty compelling story, indeed. In areas where you see sterile slopes of grass (like there is nothing in there) on the left-side image, you can see ultra-fine micro detail that the viewer on the right may not even have imagined existed or could be captured (and we are still talking about 8.2 Mpixels; imagine higher pixel counts at the same photosite size).

Only zoomed in, at 100% pixel view, does the smaller pixel become inferior.

...A stroke of low-to-mid frequency sharpening will debunk this as well, as it will immediately show on paper. Small detail can be made to look coarse/thick (great for high-dpi prints), but an absence of detail cannot be made to look like anything other than what it is: nothing (you will see nothing on paper).
 
This "bigger pixels are better" thing is nothing more than a myth and an illusion; they are only better for write speed and compact storage. Many people have a strong impression of more pixels in the same format being a killer of IQ, but no-one can demonstrate it, at the image or subject level. Only zoomed in, at 100% pixel view, does the smaller pixel become inferior.

I like the provocative thought of many small and relatively noisy pixels blending into a more detailed area with average noise, but it doesn't work out that way, except for high-contrast subjects like power lines against a sky.

The absolute size of a sensel, and its associated potential well depth, will (together with the read noise) determine the dynamic range. That will allow one to differentiate between smaller luminance differences per pixel. Paired with the lower noise, that will result in the capacity to visualize very subtle luminance differences without being overwhelmed by the noise floor. Because these very subtle differences become visible, the result is a seemingly higher resolution, especially when the subject detail is of relatively low contrast.
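Bart's relationship between well depth, read noise, and dynamic range can be put into rough numbers. A minimal sketch, assuming illustrative (not measured) full-well and read-noise figures:

```python
import math

def dynamic_range_stops(full_well_electrons, read_noise_electrons):
    """Engineering dynamic range: the maximum signal (full well) divided
    by the read-noise floor, expressed in photographic stops (log base 2)."""
    return math.log2(full_well_electrons / read_noise_electrons)

# Hypothetical large vs. small sensel with the same read noise.
large = dynamic_range_stops(60000, 10)  # ~12.6 stops
small = dynamic_range_stops(6000, 10)   # ~9.2 stops
print(round(large, 1), round(small, 1))
```

With the read noise held fixed, a tenfold drop in well capacity costs log2(10), about 3.3 stops of dynamic range, which is the mechanism Bart describes.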

Bart
 

John Sheehy

New member
Well, it seems to me that the smaller pixels will have a lower signal-to-noise ratio in the lower levels of illumination.

Sensor size makes the math!

Larger sensors make for potentially less noise. Smaller pixels in the larger sensor are better yet, for overall maximum IQ.

Any crop from a larger sensor, using the same focal length lens and the same ISO, Av, and Tv values, is inferior to a smaller sensor with smaller pixels. The lower noise per pixel doesn't help when the pixel is displayed larger than the ones which are statistically noisier.

Put the sharpest 90/100mm macro you can find on a 1DmkII, 1DsmkII, or a 5D, take the same shot as a Panasonic FZ50 at the same aperture (f/4), same shutter speed, same ISO setting, and crop the large-sensor image to match the size of the Panasonic at 88.8mm, and the Canon images will be much, much lower in resolution and look just as noisy, even though the standard deviation is lower.

And no one has yet seen what tiny-pixel SLRs are like with Canon's CMOS technology.
 

Ferenc Harmat

New member
There you go...

Depending on the definition of Image Quality ...

Bart

In my book (and judging by R. Clark's results, for instance), "sensel" size and quality will definitely affect pretty much any aspect of your image's quality.

Pure resolution to me does not mean *anything* if signal-to-noise ratio is not high enough for the necessary sharpening that edges and fine detail require, that is, for a strong MTF boost near Nyquist boundaries, which will bring your image to life, from a "crispness" point of view.

If that is not the case, what happens is that noise becomes "sticky": it adheres easily, especially to fine detail (high frequencies), so when you try to bring back its contrast, you also bring back the noise around it. Pretty lame. Therefore, by all means, clean output from the sensels is very, very important, if we view Image Quality through the loupe of a much broader and more competitive point of view.

On the other hand, there is a fundamental (and completely erroneous) assumption when comparing bigger systems with smaller ones while assuming the same ISO, f/ratio, and shutter speed in low-light conditions, because, under such assumptions, the smaller systems will always perform WORSE. The reason for this lies in the optics of the larger system, which provide a wider physical path at any f/ratio setting, thus allowing more photons through the lens than anything the smaller systems will capture. In turn, more photons will land on the larger system's sensor, regardless of its density or sensel size, which means more quantum performance.

:)
 

Asher Kelman

OPF Owner/Editor-in-Chief
Larger sensors make for potentially less noise. Smaller pixels in the larger sensor are better yet, for overall maximum IQ.

Any crop from a larger sensor, using the same focal length lens and the same ISO, Av, and Tv values, is inferior to a smaller sensor with smaller pixels. The lower noise per pixel doesn't help when the pixel is displayed larger than the ones which are statistically noisier.

Put the sharpest 90/100mm macro you can find on a 1DmkII, 1DsmkII, or a 5D, take the same shot as a Panasonic FZ50 at the same aperture (f/4), same shutter speed, same ISO setting, and crop the large-sensor image to match the size of the Panasonic at 88.8mm, and the Canon images will be much, much lower in resolution and look just as noisy, even though the standard deviation is lower.

And no one has yet seen what tiny-pixel SLRs are like with Canon's CMOS technology.

Of course!

I shouldn't write when sleepy! I meant that the signal-to-noise ratio is worse with smaller pixels! Thanks for putting it right.

This however does not need to be.

In the San Francisco consortium's unique CMOS sensor, each pixel is individually addressed.

This opens the possibility of keeping sensels switched on until each well has captured enough photons that the charge can be measured accurately above the electronic noise of the system.

So essentially each sensel is a separate camera, independent of the others (as long as the adjacent site is not suffering spillover).

In other words, the camera itself needs no shutter, and the concept of ISO has to change (since the gain is often not needed) to bring out the shadows.

So pixel size should be able to decrease significantly when we can keep sensels collecting until the mathematics are favorable!

So 2-4 micron pixels could have very low noise. The camera would just need fantastic lenses, a good tripod, and mirror lock-up to take full advantage.

Smaller pixels do not necessarily give more resolution.

That may be limited by the lens. Still, one should be able to overcome moiré and color artifacts more readily with smaller pixels.

Asher
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi John,

Could you post the whole picture of the cat? IOW, did both images cover the same field? I'd like to see. :)

Thanks,

Asher
 

John Sheehy

New member
Hi John,

Could you post the whole picture of the cat? IOW, did both images cover the same field? I'd like to see. :)

I don't know what I did with the master images at the moment, but the point of this comparison is that both were taken with (approximately) the same real focal length, so that the FZ50 is simulating a 10MP crop from an ~80MP APS-sized sensor. The 10D is the biggest-pixel camera I have to compare with.

The upper left is the 10D at a 272% crop, the upper right is the 10D at a 100% crop, the lower left is the FZ50 at a 100% crop, and the lower right is the FZ50 binned 3x3 to approximately the same magnification as the 10D at 100% (but is slightly smaller). None are sharpened, except for the sharpening that results from binning in the lower right crop.
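The 3x3 binning step can be sketched on a synthetic flat field. Averaging rather than summing is my choice here, since John doesn't say which he used; the noise figures are illustrative:

```python
import numpy as np

def bin_3x3(img):
    """Average non-overlapping 3x3 blocks, trimming edges that don't
    divide evenly. Averaging 9 samples cuts random noise by sqrt(9) = 3."""
    h = img.shape[0] // 3 * 3
    w = img.shape[1] // 3 * 3
    t = img[:h, :w].astype(float)
    return t.reshape(h // 3, 3, w // 3, 3).mean(axis=(1, 3))

rng = np.random.default_rng(1)
flat_field = 100.0 + rng.normal(0.0, 12.0, size=(300, 300))  # noisy flat
binned = bin_3x3(flat_field)
print(flat_field.std(), binned.std())  # per-pixel noise drops roughly 3x
```

This is also why binning "sharpens" slightly, as John notes: averaging suppresses the pixel-level noise that otherwise masks edges.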
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi John,

First, there's a depth of field issue. The smaller sensor wins there.

Next you need to put the entire small cat image, whatever that is, on the 10D sensor too.

You cannot magnify or anything else to equalize. We want detail not size.

How many pixels are devoted to the head in each? The one with more pixels should have an advantage unless the lens limits resolution.

Was the image otherwise identical in composition?

IOW did you have a small image of the entire cat on one and the exact same scene in the other with nothing excluded?

Can you pick an example of that?

Asher

I can't believe I actually was sucked into examining cat pictures!!
 

John Sheehy

New member
Pure resolution to me does not mean *anything* if signal-to-noise ratio is not high enough for the necessary sharpening that edges and fine detail require,

You don't need that as much when you have 4 or 9 pixels replacing one. The AA filters cover a smaller area, so their influence is more limited.

that is, for a strong MTF boost near Nyquist boundaries, which will bring your image to life, from a "crispness" point of view.

If that is not the case, what happens is that noise becomes "sticky": it adheres easily especially to fine detail (high-frequencies), thus when you try to bring back its contrast, you also bring back noise around it. Pretty lame. Therefore, and by any means, clean output from sensels is very, very important, if we see Image Quality under the lupe of a much broader and competitive point of view.

Noise is as big as the pixel when you have big pixels. What it lacks in statistical strength (which typically ignores the spatial frequency of noise), it makes up for by holding it so wide.
 
Pure resolution to me does not mean *anything* if signal-to-noise ratio is not high enough for the necessary sharpening that edges and fine detail require, that is, for a strong MTF boost near Nyquist boundaries, which will bring your image to life, from a "crispness" point of view.

Yes, I agree, and the noise will pose restrictions on sharpening.

To illustrate the effect of noise:
For the purpose of orientation, this is the (rough initial version) full stitched (4943x13994 pixels) image of a recently restored watertower as seen from its left side, (obviously) resized to a more manageable size:
WaterTower.jpg


From that image I took a crop (from the section with the 2 round windows) which shows the difference in mortar structure, new on the left, old on the right, and the same crop but with added Gaussian noise, amount 5 in Photoshop CS2:
WaterTower_Bricks1.jpg

Areas to note are the difference in apparent sharpness due to contrast, and the structure of the bricks themselves. The lower contrast old mortar gives an impression of blurred bricks, while it suggests more sharpness on the new mortar side, but totally loses the subtle individual brick surface structure (=dynamic range resolution) in the noisy version.

Taking the original crop, but lowering the contrast to -90 in Photoshop CS2, and then also adding the same Gaussian noise amount 5, produces the following image composite:
WaterTower_Bricks2.jpg

The effect demonstrates what would happen in e.g. shadow areas (although now lifted to the most contrasty part of the gamma curve) where luminance (and color) contrast is low. The virtually noiseless crop still shows a lot of sharp detail, but the noise almost overwhelms that in the lower contrast old mortar side.

This is in fact a demonstration that is even somewhat biased in favor of the noisy versions, because it exhibits pixel per pixel noise, where a Bayer CFA demosaicing will spread the noise over a larger, more visible, area.

The resolution differences between sensors, besides the differences from the AA filters and sampling density, are mostly of importance when the image is not downsampled for its intended use. Nevertheless, important surface structural detail (low contrast by definition) will have drowned in the noise of the smaller-sensel sensor array. After all, a doubling of linear resolution (= 1/4 of the sensel area) will double the noise level.
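That last ratio follows from Poisson statistics, and can be sketched with illustrative electron counts (my numbers, chosen only to make the quartering explicit):

```python
import math

def relative_shot_noise(signal_electrons):
    """Photon shot noise is sqrt(S) electrons, so relative to the signal
    it is 1/sqrt(S): quartering the per-sensel signal doubles it."""
    return math.sqrt(signal_electrons) / signal_electrons

large_sensel = relative_shot_noise(40000)   # illustrative full exposure
small_sensel = relative_shot_noise(10000)   # 1/4 the area -> 1/4 the photons
print(small_sensel / large_sensel)  # 2.0: double the relative noise level
```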

Bart
 

John Sheehy

New member
Hi John,

First, there's a depth of field issue. The smaller sensor wins there.

They're the same focal length; REAL focal length; not the same FOV, except as cropped. The pixels know nothing of the full format size. All they know is themselves, the AA filter above them, and the lens.

Next you need to put the entire small cat image, whatever that is, on the 10D sensor too.

Why? I put the same subject, with the same focal length and distance, and therefore the same analog magnification, on two different sensors with very different pixel pitches. My comparison is about pixel pitch, pixel size, etc., all other things being equal, and nothing else. The original composition is totally irrelevant to the point I am trying to make. Do you understand what I meant when I said "100% crop" and "272% crop"? The 100% crops are literal pixels. The 272% crop is the original resolution resized to 272% with Bicubic. Nothing up my sleeve; no hidden resizing facts. You're looking at original pixels (2 crops), 272% pixels (7.4 pixels from each original), and a 9-to-1 pixel binning. We don't need to see anything else to see that the smaller pixels are much, much better in terms of resolution, and little or no worse in practical, visible noise, especially when you consider that the 10D pixels will get noisier when you try to make them look as sharp as equally magnified FZ50 pixels.

You cannot magnify or anything else to equalize. We want detail not size.

You have your detail. It's in the 1.97 micron FZ50 pixels.

How many pixels are devoted to the head in each? The one with more pixels should have an advantage unless the lens limits resolution.

That's my point, but lots of people seem to think that you pay for it in noise, when you achieve it through smaller pixels, and I think that's wrong.

I can't believe I actually was sucked into examining cat pictures!!

Haha! Gotcha! It took a fake cat, though.
 

Ferenc Harmat

New member
Because...

Why? I put the same subject, with the same focal length and distance, and therefore the same analog magnification, on two different sensors, with very different pixel pitches. My comparison is about pixel pitch, pixel size, etc, all other things being equal, and nothing else.

Unless you set the images at correct and equivalent FOV, this comparison only demonstrates (for practical purposes) the spatial advantage of having a higher concentration of pixels per surface unit on the original scene. Nothing else, nothing more. In other words, we cannot see the effects of comparing such pixels with the *same* number of *better* pixels in the same area, and this is where the story ends (unfortunately).

If you want these comparisons, you will then need a macro lens and focus up close (really close) so you can equalize big-sensor absolute resolving power with respect to the smaller sensor (or at least close the gap as much as dSLR optics allow).


and little or no worse in practical, visible noise, especially when you consider that the 10D pixels will get noisier when you try to make them look as sharp as equally-magnified FZ50 pixels.

...The comparison still looks horrendous, in essence. Image is opaque/dim, and blobs of chroma noise all over the place on the FZ50. Reminds me of an ISO400 film scan at 2700-5400 dpi.

...
 

John Sheehy

New member
Unless you set the images at correct and equivalent FOV, this comparison only demonstrates (for practical purposes) the spatial advantage of having a higher concentration of pixels per surface unit on the original scene. Nothing else, nothing more.

I intended to demonstrate nothing else, and nothing more. What did you think I intended?

In other words, we cannot see the effects of comparing such pixels with the *same* number of *better* pixels in the same area, and this is where the story ends (unfortunately).

"The story ends"? What is that supposed to mean? The story is just beginning. In a few years, you will be shooting cameras with tiny pixels and wondering why you ever doubted them.

If you want these comparisons, you will then need a macro lens and focus up close (really close) so you can equalize big-sensor absolute resolving power with respect to the smaller sensor (or at least close the gap as much as dSLR optics allow).

Why? That's not what I'm interested in. I'm interested in the phenomenon of pixel size; not camera resolution. You complained that going to higher resolutions in the same format size leads to noisier images and is counter-productive, and that is what I am disagreeing with. Why is it, that no matter how clearly I qualify my statements, so many people read something else?

...The comparison still looks horrendous, in essence.

The comparison, or one or both of the cameras' samples?

These weren't meant to be absolutely pretty. They were meant to compare different pixel sizes in the same crop of the focal plane, and I did it at ISO 1600 for a sort of "worst case" effect.

Image is opaque/dim, and blobs of chroma noise all over the place on the FZ50. Reminds me of an ISO400 film scan at 2700-5400 dpi.

The images are *RAW* except for interpolation and white-balancing. I did this to avoid any contamination of the difference by RAW converters. RAW data always has noise, and when color is reconstructed, it becomes chromatic unless a converter filters the chroma. Again, the two are for comparative purposes. The 10D version looks horrible compared to the FZ50 version, IMO. The 10D would do much better with a lot more smaller pixels, IMO.
 

Ferenc Harmat

New member
Hard to illustrate better, indeed...

Taking the original crop, but lowering the contrast to -90 in Photoshop CS2, and then also adding the same Gaussian noise amount 5, produces the following image composite:
WaterTower_Bricks2.jpg

The effect demonstrates what would happen in e.g. shadow areas (although now lifted to the most contrasty part of the gamma curve) where luminance (and color) contrast is low. The virtually noiseless crop still shows a lot of sharp detail, but the noise almost overwhelms that in the lower contrast old mortar side.

Perfectly clear: reduce contrast (MTF), and, to finish your example, I would simply run a USM stroke (around 50-100%, 0.6-1.0 radius, 0 threshold) and just see what happens on the left, and then on the right...

This is just a simple and clear example of what happens with noise *even* if you have lots of pixels, especially on low MTF capture-areas, which are the first ones you want to bring back to life during sharpening.

Pretty well done, and well illustrated, too!
 

John Sheehy

New member
Pretty well done, and well illustrated, too!

Yes, it clearly demonstrates that adding noise obscures detail. No Kidding!

However, that has nothing to do with the issue that he allegedly addressed. Having smaller pixels does not "add noise". It simply increases the random variance of each individual pixel, in the shot noise component. The variance in each unit of real area does not change.

In order for Bart's simulation to be relevant in this context, he would have to make a 33% resample of his image, and put 1/3 the amount of noise in it than he puts in the full resolution image, and compare the two. Guess what? The one with the 1/3 noise looks like garbage, when both are viewed at the same subject magnification.

Try it. Copy both the full-contrast and low-contrast images into PS. Crop so just the non-noise halves are left.

Make a duplicate of each. Reduce the duplicates to 33%. Add a noise level of 2 to the reduced images, and 6 to the full-res ones. So far, the smaller ones with less noise look less noisy. Nice. Now, bring them back up to the same size as the originals (300%). Whoops! What happened?
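John's recipe can be simulated on pure noise fields. Block-averaging and nearest-neighbour resizing below stand in for Photoshop's resampling (an assumption on my part), and only the noise, not image content, is modelled:

```python
import numpy as np

def block_average(img, f=3):
    """Average f x f blocks -- a crude model of viewing at lower
    magnification (or of a downsample in an image editor)."""
    h, w = img.shape[0] // f * f, img.shape[1] // f * f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

rng = np.random.default_rng(2)

# The full-resolution image gets noise "6" per pixel (John's step)...
full_res = rng.normal(0.0, 6.0, size=(90, 90))

# ...the 33% resample gets noise "2" per pixel...
low_res = rng.normal(0.0, 2.0, size=(30, 30))

# ...then both are viewed at the same subject magnification: each
# low-res noise grain becomes a 3x3 block of display pixels.
low_res_viewed = np.repeat(np.repeat(low_res, 3, axis=0), 3, axis=1)

# Per-pixel standard deviation favours the low-res version (~2 vs. ~6)...
print(full_res.std(), low_res_viewed.std())

# ...but block-averaged to the same coarse grid (roughly what the eye
# does at viewing distance), the fine-grained noise averages away while
# the coarse-grained noise does not, leaving them comparable.
print(block_average(full_res).std(), block_average(low_res_viewed).std())
```

This is the spatial-frequency point John keeps making: the statistically noisier small pixels carry their noise at frequencies that average out, while the large-pixel noise "holds it wide".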
 

Joel Slack

New member
I doubt that Canon would go to the trouble to develop and release a 22mp camera (~$8000?) without considerations for all of the technical jibberrijoo that you guys have expressed so well. I know nothing about HOW the cameras work, but from a simple marketing standpoint, I cannot imagine Canon putting out a product that does not somehow improve on previous models, aside from just having more mp's crammed onto a sensor than the previous model had. Perhaps there are new noise-reduction or other highly-advanced processors or sensors that will actually improve such things as resolution, S/N ratio, and DR if/when they release a camera that cracks the 20mp plateau?

I have no doubts that you guys know your stuff---it is very impressive to read! But perhaps Canon will push the envelope in ways that are not immediately visible to those using the current technology/paradigm limitations as their platform for determining possible future developments. I'm personally excited to see what will appear. No way will they release a 22-mp Ds3 that does not improve on the Ds2, by whatever majicke they may use to accomplish this.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Joel,

All Canon has to do is use the new CMOS they are signatory to, and voila: ISO goes out the window, but each pixel stays open until the S/N ratio is acceptable for that particular pixel.

No shutter needed, except to keep the camera's insides cool. But why do they have to do that when people will "Oooh!" and "Ahh!" over another 4-8 MP! Which they will!

Nothing is pushing them now. Of course they might surprise us with a MF camera or a 5D that has focus assist, but that's too much to ask!

Asher
 
Yes, it clearly demonstrates that adding noise obscures detail. No Kidding!

It seems like you missed the essence of the examples.

Dynamic Range is expressed by the maximum (clipping or saturation) signal level, divided by the noise floor level of the sensor array. That means that one can influence the DR part of image quality by either improving the maximum signal level or reducing the noise level, or both.

Smaller sensels have, with today's technology, smaller potential well depths. The storage capacity for photons converted to electrons is on the order of 1000-1800 electrons per square micron. That automatically implies that the dynamic range of a smaller sensel is reduced, because the maximum signal level (in electrons) that can be recorded is lower.

You could view the lower contrast version of my examples as a(n extremely) reduced dynamic range version. Try boosting the contrast back to the original level, and see what happens, especially when noise is present.

The only technological breakthrough that could counter-act that loss of dynamic range in smaller sensels, is a technology that decouples sensel area from storage capacity. The potential improvements on the noise reduction side of DR are much smaller than on the maximum signal side.

However, that has nothing to do with the issue that he allegedly addressed. Having smaller pixels does not "add noise". It simply increases the random variance of each individual pixel, in the shot noise component. The variance in each unit of real area does not change.

Smaller sensels are inherently noisier, due to the laws of physics and due to ADC amplification. So the issue is not about adding noise, but rather about having more of it. The only way of demonstrating the visual difference while keeping all other parameters the same is by adding it (although I admit that a Poisson noise distribution would be more accurate than a Gaussian one).

Let's assume that two sensels of different sizes have the same read-noise characteristics; then the limitation of the maximum signal, due to different potential well depths, will still dictate a noisier result from the smaller sensel.

Poisson statistics show that, e.g., an 8x8 micron sensel (assume a capacity of 64 square microns times 1500 electrons = 96000 electrons) has a maximum Signal-to-Noise ratio (assuming a virtually perfect noise floor of 1 electron SD) of sqrt(96000):1 = 310:1. A smaller, e.g. 2x2 micron, sensel (assume a capacity of 4 square microns times 1500 electrons = 6000 electrons) has a maximum S/N ratio of sqrt(6000):1 = 77:1, which would be quite visible in featureless areas. This is the best possible noise achievable with these sensels; in practice the noise floor will be higher, and the S/N ratio even worse.
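The shot-noise-limited figures quoted above can be checked empirically. The sketch below (mine, not Bart's) draws many simulated full-well exposures and measures mean/standard deviation, using a Gaussian approximation to the Poisson distribution, which is accurate at these electron counts.

```python
# Monte Carlo check of shot-noise-limited S/N: for a Poisson process,
# SNR at a mean of N electrons should approach sqrt(N).
import random
import statistics

def empirical_snr(mean_electrons, samples=20000, seed=42):
    """Estimate S/N from simulated exposures (Gaussian approx. to Poisson)."""
    rng = random.Random(seed)
    draws = [rng.gauss(mean_electrons, mean_electrons ** 0.5)
             for _ in range(samples)]
    return statistics.mean(draws) / statistics.stdev(draws)

print(round(empirical_snr(96000)))  # near 310 for the 8x8 micron full well
print(round(empirical_snr(6000)))   # near 77 for the 2x2 micron full well
```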

So, increasing the noise level in my examples is useful in visualizing differences between sensel sizes, because smaller sensels are inherently noisier, as dictated by the laws of physics. One can only quibble about the amount and type of noise, not about whether size matters.

In order for Bart's simulation to be relevant in this context, he would have to make a 33% resample of his image, and put 1/3 the amount of noise in it than he puts in the full resolution image, and compare the two. Guess what? The one with the 1/3 noise looks like garbage, when both are viewed at the same subject magnification.

That would only demonstrate that more pixels can help resolution, which nobody denies. It would ignore that smaller sensels need better optics, and that smaller sensels have issues with diffraction, in addition to their being noisier than your suggestion would show.

All of that is no surprise because I already hardly ever used my Powershot G3 at anything higher than ISO 50 (at apertures not smaller than f/5.6), because its ~3x3 micron sensels would become too noisy. Its on-sensor resolution is superb because the small physical lens diameter allows better aberration correction than a larger lens, but its dynamic range is limited.
As an example see this stitched simulation of a large sensor array at 1/3rd of its actual '12Mp' stitched size, and a full size crop of what is possible at the pixel level, after significant tonemapping.

I'm not contesting the resolution benefit of a more densely populated sensor array; that benefit is self-evident, but I also recognize the drawbacks. What I'm pointing out is the inherent loss of dynamic range and increased noise, which will limit the use to low-ISO (with a better lens and a diffraction-friendly aperture) types of photography that will still be limited in their dynamic range performance. It will be good enough for average use, but it won't cut it for professional use (and the reduced manufacturing yield for large-area sensor arrays will require 'professional price levels'). Professional use will require a breakthrough in the sensel's maximum storage capacity before denser sampling becomes a viable option, IMHO of course.

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Dynamic Range is expressed by the maximum (clipping or saturation) signal level, divided by the noise floor level of the sensor array. That means that one can influence the DR part of image quality by either improving the maximum signal level or reducing the noise level, or both.

Smaller sensels have, with today's technology, smaller potential well depths. The storage capacity for photons converted to electrons is on the order of 1000-1800 electrons per square micron. That automatically implies that the dynamic range of a smaller sensel is reduced, because the maximum signal level (in electrons, e-) that can be recorded is lower.

However, if a well is always filled sufficiently above the background noise, then the dynamic range will be increased.

One would simply look at the time taken to reach this level of mathematical satisfaction. So the definition of Dynamic Range will have to be re-examined. It will depend on the longest exposure allowed for the sensels exposed to the least photon flux from shadow areas of the subject, and on the amount of circuit gain used for the output of each individual sensel. A new world with no ISO as we know it for today's CMOS array. We just need the manufacturers to deploy the new CMOS chip in their cameras!

So the sensel is kept open until, say, the well is filled to at least 1k, 2k, 3k, 4k, 5k, 6k or 7k electrons, for example, and with each increase in the level of filling, the counting accuracy is improved and the S/N ratio is increased at the lower end of the image capture. IOW, each sensel is an individual camera responsible for a contribution to the DR of the composite sensor.

So the math is quite different from the case where all the pixels are exposed at the same time and for the same duration!

So CMOS sensors with individually controlled small pixels do seem to offer increased DR even before noise is decreased further!
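As a thought experiment, the expose-until-filled idea above can be simulated. The sketch below is my own hypothetical toy model, not an existing sensor design: each sensel integrates simulated photo-electrons in small time steps until it reaches a target fill (or a time limit), and the scene value is then reconstructed as electrons divided by integration time, giving shadows the same shot-noise-limited S/N as highlights.

```python
# Toy model of per-sensel exposure-to-threshold readout.
# TARGET_FILL, MAX_TIME and the flux values are assumed, illustrative numbers.
import random

TARGET_FILL = 6000   # assumed per-sensel electron target
MAX_TIME = 100.0     # assumed longest allowed exposure (arbitrary units)

def expose_until_full(flux, rng, step=0.01):
    """Integrate photo-electrons until the target fill (or time limit)
    is reached; return the reconstructed flux (electrons per unit time)."""
    electrons, t = 0.0, 0.0
    while electrons < TARGET_FILL and t < MAX_TIME:
        mean = flux * step
        # shot noise via a Gaussian approximation to Poisson arrivals
        electrons += max(0.0, rng.gauss(mean, mean ** 0.5))
        t += step
    return electrons / t

rng = random.Random(1)
for true_flux in (50000.0, 500.0):   # bright highlight vs. deep shadow
    est = expose_until_full(true_flux, rng)
    print(f"true {true_flux:.0f} -> estimated {est:.0f}")
```

Both the highlight and the 100x dimmer shadow are reconstructed to within a couple of percent here, because both wells reach the same fill; a conventional fixed exposure would give the shadow a far worse S/N.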

The only technological breakthrough that could counter-act that loss of dynamic range in smaller sensels, is a technology that decouples sensel area from storage capacity. The potential improvements on the noise reduction side of DR are much smaller than on the maximum signal side.
Done!

All of that is no surprise because I already hardly ever used my Powershot G3 at anything higher than ISO 50 (at apertures not smaller than f/5.6), because its ~3x3 micron sensels would become too noisy. Its on-sensor resolution is superb because the small physical lens diameter allows better aberration correction than a larger lens, but its dynamic range is limited.
As an example see this stitched simulation of a large sensor array at 1/3rd of its actual '12Mp' stitched size, and a full size crop of what is possible at the pixel level, after significant tonemapping.
I like that example, Bart, and it demonstrates what happens when the sensels recording from shadowed areas are allowed to fill by increasing the exposure time. This will be readily accomplished with one click of the shutter (which is actually not used to time the exposure of each individual sensel, as this is done electronically).

Professional use will require a breakthrough in the sensel's maximum storage capacity, before denser sampling will become a viable option, IMHO of course.
That breakthrough is, for the moment, prolonged exposure for sensels, either by bracketing, as your composite demonstrates, or by independent sensel timing.

Asher
 

Ferenc Harmat

New member
Bingo!

It seems like you missed the essence of the examples.

(...)

The only technological breakthrough that could counter-act that loss of dynamic range in smaller sensels, is a technology that decouples sensel area from storage capacity. The potential improvements on the noise reduction side of DR are much smaller than on the maximum signal side.

Right on! The decoupling would be necessary (will be a must) because, otherwise, the surface (which ultimately drives photon-gathering capability) has a critical impact on performance, as clearly defined in the relatively simple math that conveys the governing principles:

From R. Clark, in his wonderful series (http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html)

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

The noise model for digital cameras is:

N = (P + r^2 + t^2)^(1/2), (eqn 1)

Where

=> N = total noise in electrons,
=> P = number of photons,
=> r = read noise in electrons, and
=> t = thermal noise in electrons.

Noise from a stream of photons, the light we all see and image with our cameras, is the square root of the number of photons, so that is why the P in equation 1 is not squared (sqrt(P)^2 = P).

The signal corresponding to equation 1 would simply be the number of photons, P, so the signal-to-noise ratio, SNR, in a pixel is:

SNR = P/N = P/(P + r^2 + t^2)^(1/2). (eqn 2)

It is this predictable signal and noise model that allows us to predict the performance of digital cameras. It also shows us that those waiting for the small pixel camera to improve and equal the performance of today's large pixel DSLR will have a long wait: it simply can not happen because of the laws of physics. So, if you need high ISO and/or low light performance, the only solution is a camera with large pixels.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
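For readers who want to play with Clark's model, here is a direct translation of eqns 1 and 2 into a small helper; the read-noise and thermal-noise values below are illustrative assumptions of mine, not Clark's measurements.

```python
# Clark's noise model, eqns 1 and 2 above, all quantities in electrons.
import math

def total_noise(photons, read_noise, thermal_noise):
    """eqn 1: N = (P + r^2 + t^2)^(1/2)."""
    return math.sqrt(photons + read_noise ** 2 + thermal_noise ** 2)

def snr(photons, read_noise, thermal_noise):
    """eqn 2: SNR = P / N."""
    return photons / total_noise(photons, read_noise, thermal_noise)

# At high signal, shot noise dominates and SNR approaches sqrt(P):
print(round(snr(40000, 5, 3)))   # close to sqrt(40000) = 200
# At low signal, the read-noise floor takes over and SNR collapses:
print(round(snr(100, 5, 3), 1))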


Additionally, Roger also has a pulverizing comparison between a large-vs-small sensor, even with differences in FOV. The results are very compelling, indeed:

http://www.clarkvision.com/imagedetail/does.pixel.size.matter2/index.html


IN CONCLUSION: You are absolutely correct in stating that, unless we can make these sensels "depth-capable", that is, instead of just a "shallow" surface, have them made and operate "deeper", thus "sucking" in more photons regardless of their actual surface, we are going nowhere, absolutely nowhere with smaller sensels based on current technology.

It can't be simpler than that, and those waiting will be waiting a lifetime because of the above math principles (again, based on current technology).
 

John Sheehy

New member
Right on! The decoupling would be necessary (will be a must) because, otherwise, the surface (which ultimately drives photon-gathering capabilities) has a critical impact on performance,

This is only going to give lower ISOs with less shot noise; it isn't going to do a thing for existing ISOs. And unless readout noise is going to be reduced significantly, the bottom end is going to fail to extend the DR.
The noise model for digital cameras is:

N = (P + r2 + t2)1/2, (eqn 1)

There should be a carat after the closing parenthesis

Where

=> N = total noise in electrons,
=> P = number of photons,
=> r = read noise in electrons, and
=> t = thermal noise in electrons.

There is also a read noise that is proportional to signal strength, which he doesn't seem to mention.
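John's extra term can be folded into the same model. The sketch below is my own extension of Clark's eqn 1 with a noise component proportional to signal strength (in the spirit of pixel response non-uniformity); the 0.5% coefficient is an assumed, illustrative value.

```python
# Clark's eqn 1 extended with an assumed signal-proportional noise term:
# N = (P + r^2 + t^2 + (k*P)^2)^(1/2), with k the proportional coefficient.
import math

def total_noise_with_prnu(photons, read_noise, thermal_noise, prnu=0.005):
    """Total noise in electrons, including a signal-proportional term."""
    return math.sqrt(photons + read_noise ** 2 + thermal_noise ** 2
                     + (prnu * photons) ** 2)

# Near full well the proportional term starts to dominate shot noise:
for p in (1000, 100000):
    print(p, round(total_noise_with_prnu(p, 5, 3), 1),
          round(math.sqrt(p), 1))  # vs. pure shot noise
```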

It is this predictable signal and noise model that allows us to predict the performance of digital cameras. It also shows us that those waiting for the small pixel camera to improve and equal the performance of today's large pixel DSLR will have a long wait: it simply can not happen because of the laws of physics. So, if you need high ISO and/or low light performance, the only solution is a camera with large pixels.

That's what I call a false conclusion. That is what you can say about small sensor cameras, not about small pixel cameras. No matter how many charts and measurements and formulas someone can show, they can always be misapplied or misinterpreted.

Additionally, Roger also has a pulverizing comparison between a large-vs-small sensor, even with differences in FOV. The results are very compelling, indeed:

http://www.clarkvision.com/imagedetail/does.pixel.size.matter2/index.html

Although Roger may claim that this page is about small pixels vs large pixels, it is really about small sensors vs large sensors.

IN CONCLUSION: You are absolutely correct in stating that, unless we can make these sensels "depth-capable", that is, instead of just a "shallow" surface, have them made and operate "deeper", thus "sucking" in more photons regardless of their actual surface, we are going nowhere, absolutely nowhere with smaller sensels based on current technology.

That's great for landscape photography, and architecture, studio lighting, etc. For any kind of existing-light photography of moving subjects and/or hand-holdability, higher resolution is valuable without collecting any extra photons per square millimeter.

It can't be simpler than that, and those waiting will be waiting a life-time because of the above math. principles (again, based on current technology).

The principles apply to maintaining a similar # of MPs with different sized sensors *and* pixels. It has absolutely no application to having more pixels in the same sensor size, for images to be viewed at a given magnification.
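John's point about viewing at a given magnification can be illustrated numerically. The sketch below is my own (it assumes pure shot noise and uses a Gaussian approximation to Poisson): summing (binning) four small sensels' signals gives the same S/N as one large sensel covering the same area.

```python
# Binning demo: four small sensels at 6000 e- each vs. one large sensel
# at 24000 e-, compared at equal output area. Counts are assumed values.
import random
import statistics

def measured_snr(mean_electrons, samples=20000, seed=7):
    """S/N of one sensel under shot noise (Gaussian approx. to Poisson)."""
    rng = random.Random(seed)
    draws = [rng.gauss(mean_electrons, mean_electrons ** 0.5)
             for _ in range(samples)]
    return statistics.mean(draws) / statistics.stdev(draws)

def binned_snr(mean_small, samples=20000, seed=7):
    """S/N of the sum of four small sensels covering the same area."""
    rng = random.Random(seed)
    draws = [sum(rng.gauss(mean_small, mean_small ** 0.5) for _ in range(4))
             for _ in range(samples)]
    return statistics.mean(draws) / statistics.stdev(draws)

big = measured_snr(24000)
small4 = binned_snr(6000)
print(round(big), round(small4))  # both near sqrt(24000), i.e. ~155
```

With read noise ignored, the two are statistically identical; in practice the binned result pays four read-noise penalties instead of one, which is where the remaining debate lives.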
 

John Sheehy

New member
So the sensel is kept open until say the well is filled to at least 1k, 2k,3k,4k, 5k, 6k, 7k electrons, for example and with each ecrease in level of filling, the counting accuracy is improved and the S/N ration is increased at the lower end of the image capture. IOW, each sensel is an individual camera responsible for a contribution to the DR of the composite sensor.

I can't really get too excited about that technology. The first implementations will probably have all kinds of bugs, and even if they're ironed out, the application is still limited. How do you use flash? Can the empty/reset cycle keep up with flash, or any other transient light? What happens to photons from the down-time? What kind of noise does that cause?

How do you shoot a non-static scene?
 

Ferenc Harmat

New member
CANON is not in agreement with you...

It has absolutely no application to having more pixels in the same sensor size, for images to be viewed at a given magnification.

Well, it turns out that Canon's 1D MkIII has 7.2 micron pixels, which seem "smaller" than the N's 8.2 microns, right? Well... Wrong!

Canon cleverly managed to reduce the on-board circuitry "footprint" around the sensels, while increasing the size of the micro-lenses (an even better fill-ratio) and still keeping the *same* surface/size of the photo-sensitive element in the sensel.

WOW! They crammed two additional million pixels into the wonderful APS-H 1.25x sensor, while *potentially* (my opinion) increasing quantum efficiency due to a better fill-factor and the same photo-surface.

Canon is surely sensitive to the mathematics implied here, and seems to be far away from anything that you are proposing in terms of smaller (and worse) pixels. They are avoiding that, and for very good reasons...

And Roger is right on the money, too.

....
 