Post-Kina thoughts

leonardobarreto.com said:
Your calculations look impressive to a mere photographer -- I have just learned the term "sensel", for example -- but I think that in photo terms you mean to say "I don't know how they are producing 16 bits from a sensor that seems to have unremarkable sensels".

Something like that ;-)
Actually, one could say there is little benefit in using a precision that exceeds the signal's variation (noise) by a large margin. It won't hurt, but it won't benefit the result either.

I think that what we should do is wait to see this less-than-FF-Kodak 16bits sensor and the sample images when available and celebrate the achievement then.

Fair enough, although I don't expect the laws of physics to be redefined ...

On the other hand, if Kodak does this in a Leica body, there is no reason Nikon -- or Canon -- could not also offer 16-bit bodies, 16-bit output being one much-mentioned advantage of digital backs.

Actually (though that is more a subject for another topic), Canon have surprised the world with their low levels of read noise, from CMOS devices no less. It was deemed technically impossible before they achieved it. But they too cannot break the laws of physics and photon statistics, and the results show they don't. Then again, they are no worse than the physical limitations either.

Bart
 

Paul Schefz

New member
..i am in no way a technician and actually refuse to learn too much about the details...in the end it is the image that speaks for itself (one way or another)...
the leica back is 16bit (as far as i know), the images look good, but have problems...
it reminds me of the kodak 14n, which was a great camera on paper and under the right conditions produced amazing files...but under 75% of conditions almost unusable...some stuff was fixable in software, some wasn't...either way a ridiculous amount of time spent processing/post production...

the M8 does not do anything for me...a $5000 10mpix small sensor? the best lenses can only get the information across with as little loss as possible...so at best it will still compete (and lose, i am afraid) with cameras half the price....a nice little expensive toy for leica enthusiasts......
the "real leica" seems to be the sigma MP-1?, super compact, Foveon sensor, great fixed lens...i am hoping for a reasonable price (anything over 1500 would be too much)...a perfect little street shooter...
of course both could be a surprise or disappointment...
 

Don Lashier

New member
All I know is that from playing with an H20 a few years ago (9um but higher noise than current 6.8um sensors), it clearly had more DR and smoother gradients than a Canon 1Ds or my own 9um 1D. I've got shots to prove this. As to whether 16 bits was necessary to reveal this, I don't know, but I do know that you've got to discount a bit or even two - at least with Canon the 12th bit hardly (if ever) gets used. Also remember that sensor data is linear, so shadow detail/gradient will suffer even if theoretically the bit depth is adequate. I've got little interest in theoretical DR calculations - as Leonardo and Paul point out - the proof is in the pudding.

- DL
 
Don Lashier said:
I've got little interest in theoretical DR calculations - as Leonardo and Paul point out - the proof is in the pudding.

Yes, I agree that the eating of the pudding is the ultimate test, I'm just predicting the outcome based on the laws of physics (defying marketing speak can have a sobering effect).

It would be fun and enlightening to do some fundamental testing, although it is harder to execute than it might seem, if the results have to be objective and void of external influences (like Raw converters that 'hide' shortcomings, improve the 'look', but do not fundamentally add anything that isn't already in the image data). The benefit of such tests would be to get a better 'feel' for what is real and what is to be expected (and thus defuse marketing hype).

Bart
 

Asher Kelman

OPF Owner/Editor-in-Chief
Several points:

12 v 16 bit: The terms are used for marketing flatbed scanners, with the fallacious idea that 48-bit scanners give a better dynamic range and darker blacks. Not so! None of the 48-bit (3x16 bit/channel) scanners do much better than an O.D. of 3.1 to 3.6, or 3.7 if very good.

The A/D converter can vary in quality, and then downstream, what are the circuits and software that process the signals while adding the least noise?

So while Phase One may choose 16 bit, and that might not be necessary, the quality might be better than that of the 12 bit they could use instead.
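To put rough numbers on those scanner figures: optical density is a log10 ratio, so it converts directly into bits of dynamic range (log2). A minimal sketch, using the O.D. values mentioned above:

```python
import math

def od_to_bits(od):
    """Convert an optical density range (log10 of the max:min signal
    ratio) into the equivalent bits of dynamic range (log2)."""
    return od * math.log2(10)

# O.D. range quoted above for good-to-very-good flatbed scanners:
print(round(od_to_bits(3.1), 1))  # 10.3 bits
print(round(od_to_bits(3.7), 1))  # 12.3 bits
# So even a "48-bit" (16 bits/channel) scanner resolves only ~12 bits
# of real dynamic range; the remaining bits mostly quantify noise.
```

In other words, the extra bits do no harm, but they don't create dynamic range that the scanner's noise floor doesn't support.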

One thing I'd like to know about is how we arrive at the photon capture limit for a particular "sensel".

Bart van der wolf said:
First, the Kodak KAF-10500 sensor apparently (according to their info) has a saturation level of 60,000 photons (electrons), and a read-noise of 15 electrons. Those are not uncommon values for such a sensor with 6.8 micron sensel pitch. For a theoretical 60000:15 Signal/Noise Ratio, or 4000:1 which is almost 12-bits, there is little benefit to be expected from 16-bit processing, other than perhaps a slightly more accurate quantification of noise. Personally I don't expect any miracle performance due to 16-bit processing, the sensels are too small to offer a challenging dynamic range.

In addition, the sensor (like almost all others) is photon shot noise limited for full exposures which further restricts the S/N ratio. That means that the sensor will in practice exhibit a maximal 245:1 S/N ratio, something a 12-bit ADC could already handle well enough.
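The arithmetic in the quote above can be checked in a few lines; a quick sketch using only the quoted sensor values (nothing measured here):

```python
import math

full_well = 60000   # saturation level in electrons (quoted Kodak KAF-10500 figure)
read_noise = 15     # read noise in electrons (quoted figure)

# Engineering dynamic range: full well over read noise.
dr_ratio = full_well / read_noise
print(f"DR {dr_ratio:.0f}:1, {math.log2(dr_ratio):.1f} bits")  # 4000:1, 12.0 bits

# At full exposure the sensor is photon shot noise limited:
# noise = sqrt(signal), so the best possible SNR at saturation is
peak_snr = math.sqrt(full_well)
print(f"peak SNR {peak_snr:.0f}:1")  # 245:1
```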

Do we have figures for the "sensels" of other pixel pitches in various cameras and backs?

What is happening at the physics level? IOW, where can we expect the practical limits of the photon capacity and noise of an individual "sensel"? This might tell us where things are going.

Also that information might be a good way of helping us predict the performance in the shadow areas.

Asher
 

Don Lashier

New member
Asher Kelman said:
Do we have figures for the "sensels" of other pixel pitches in various cameras and backs?

You need to do a little googling to determine which cameras have which sensors.

Sensor specs are available: Kodak, Dalsa

Unfortunately Canon, Nikon, and Sony don't appear to publish sensor specs so it's hard to compare with DSLRs.

- DL
 
Don Lashier said:
Unfortunately Canon, Nikon, and Sony don't appear to publish sensor specs so it's hard to compare with DSLRs.

Ah, but then there are pixel peepers (like me ;-) and others) who use alternate methods to empirically derive those specs ... !
Manufacturers cannot hide it, the truth is out ... ;-).

Most of the difficulty in uncovering that information reliably is in systematically eliminating as many external sources of influence as possible. One obstacle is the Raw converter, which may also do all sorts of noise-related stuff with the data. It is therefore helpful if the Raw sensor data after Analog to Digital Conversion (ADC) can be accessed.

The empirical determination is based on a range of exposures, and on measuring the signal and noise levels in the resulting data numbers (DN) of the Raw linear gamma files.

Part 1 would be a determination of the "read-noise" level at the shortest possible exposure time that the camera allows, while blocking any external light from contributing to the 'exposure' (use the lens cap, and cover the viewfinder).
The shortest possible exposure time will reduce the potential buildup of temperature related dark noise. All the noise we then detect in the resulting file is due to the electronics, and thus represents the lowest possible signal level in a given camera at that temperature and ISO (= amplification) setting.

Part 2 would be a determination of the actual saturation level of the sensels. By photographing a uniformly lit surface, and by keeping that surface out of focus to avoid surface structure influence, we can take a range of exposures with small increments in exposure time until the resulting DN no longer increases (the signal saturates the sensel's charge storage capacity).
I simply use a piece of opaline glass held flush with the filter threads of a lens which is pointed at a uniformly lit area or cloudy sky. I also use a medium-range aperture and a modest tele lens to reduce the influence of vignetting, and only analyze a small area in the center of the image. That helps to get as uniformly lit an exposure area as possible.

There are several checks and balances that need to be built in, especially in the part 2 tests and evaluations (like subtracting 2 exposures for better statistical performance and elimination of sensor dust influence). You can find an example of such a workflow here. Although it is not a medium/large format camera that is tested there, the principles apply equally to the Phase Ones (and similar) of this world.

The easiest part is the Read-noise determination, which gives the noise floor for all other Dynamic Range calculations. That too will benefit from subtracting 2 equal exposures at each ISO setting one would like to test, because it can reduce the influence of e.g. hot pixels.

Try it, it'll be fun to build a better understanding of real world limitations. It may require a Raw converter like DCRAW to extract the linear gamma data before demosaicing and white balancing, but that is a hurdle that can be taken once the double Read-noise shots are taken. If there is broader interest in this fundamental evaluation for various cameras, we could start a new thread "Dynamic Range evaluation" in a more appropriate place, like for the particular camera models.
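The read-noise part of the workflow described above can be sketched with synthetic data; NumPy stands in for real lens-cap frames decoded to linear DN with DCRAW, and all numbers here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (200, 200)

# Two simulated lens-cap frames in linear DN: a fixed pattern (offset,
# hot pixels) common to both, plus independent temporal read noise.
offset = 128.0
true_read_noise = 3.0  # DN, the value we try to recover
fixed_pattern = rng.normal(0.0, 1.0, shape)
dark1 = offset + fixed_pattern + rng.normal(0.0, true_read_noise, shape)
dark2 = offset + fixed_pattern + rng.normal(0.0, true_read_noise, shape)

# Subtracting two equal exposures cancels the fixed pattern; the
# difference contains the temporal noise of both frames, hence sqrt(2).
estimate = np.std(dark1 - dark2) / np.sqrt(2)
print(f"estimated read noise: {estimate:.2f} DN")  # close to 3.0
```

With real files the same statistic, taken over a small central crop of two identical exposures, gives the noise floor for the dynamic range calculation.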

Bart
 
Asher Kelman said:
What is happening on the physics level. IOW where can we expect the practical limits of the photon capacity and noise of an individual "sensel? This might tell us where things are going.

Also that information might be a good way of helping us predict the performance in the shadow areas.

For those interested in such fundamentals, I suggest this write-up.

It basically confirms that image quality benefits most from larger storage capacity per sensel, which currently correlates with physically larger sensels, and low noise circuits. Those will then also require better Analog to Digital Converters (ADCs) e.g. low noise 16-bit ones.

The write-up also suggests that current sensors are limited by photon shot noise (a law of physics), and that improvement can be expected from better quantum efficiency, where more of the arriving photons will actually contribute to the recorded signal in a given exposure time. That would also require a large storage capacity per sensel to be able to benefit from it.

So in summary, larger sensel sizes, low-noise circuits, and improved quantum efficiency are the paths to better technical image quality. Then we'll only have to blame our own lack of creativity ...

Bart
 
I want to tell my friends -- and Asher will move this to the proper place in the forums -- that I'm back in Mexico packing to go to NY ... finally (I will not move to Tokyo after all), so I'm turning off my WiFi and broadband stream.

Interesting thread, and I like the "sensel" word, but why do we need to use quotation marks, isn't it a proper word yet?

Please go back to Medium Format or Asher will move you elsewhere, after all Leica is a mere 35mm. : )

For example, why is the P 45 disappointing some P 25 users? Is it because the P 25's sensels are 9 microns? So a 49.9x36.7mm CCD with 22,000,000 pixels is as good as it gets in 645 format? I know that they can always "fix" the image amplifiers in the P 45, but the P 25 will have the same and will win -- even if resolution is lower.

Ok, I will be back .. in NY !

P 25 = 9 microns, CCD = 49.9 x 36.7 mm
P 30 = 6.8 microns, CCD = 44.2 x 33.1 mm
P 45 = 6.8 microns, CCD = 49.1 x 36.8 mm
 

Don Lashier

New member
So here's a dumb question - why can't they just make the wells deeper? I'm sure there's a simple answer or they would have done it already.

- DL
 

Asher Kelman

OPF Owner/Editor-in-Chief
Hi Don,

I'm using "sensel" here in lieu of pixel, meaning the photoelectric sensitive area of the pixel, not the total physical pixel size. I myself had never heard of this term before, and since it sounds fine, I'll use it too. The slate grey text goes into the subject in further detail.

The light flux* arrives at the camera sensor. The surface area of the sensel limits how many photons can be collected in the time the shutter is open. The efficiency of photon capture and the background noise can be improved by better engineering design. However, the number of photons collected can only be increased by either increasing the surface area of the sensel or by prolonging the shutter time.

The latter can be accomplished by using slower shutter speeds, but then the sensels in the brightest areas would be full and might affect adjacent pixels.
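That area-and-time relationship can be put in a toy sketch; the flux and quantum efficiency figures below are invented for illustration, only the pitch values come from the thread:

```python
import math

def electrons_collected(flux_per_um2_s, pitch_um, exposure_s, qe=0.4):
    """Photons converted to stored electrons: flux x collecting area x
    exposure time x quantum efficiency. Illustrative numbers only."""
    return flux_per_um2_s * pitch_um**2 * exposure_s * qe

# Same illustrative light and exposure, two sensel pitches:
n_small = electrons_collected(1000.0, 6.8, 1 / 60)
n_large = electrons_collected(1000.0, 9.0, 1 / 60)
print(round(n_large / n_small, 2))  # 1.75: collecting area scales with pitch squared

# Shot noise grows only as sqrt(N), so peak SNR improves with the pitch itself:
print(round(math.sqrt(n_large) / math.sqrt(n_small), 2))  # 1.32
```

Note the flux and QE cancel out of both ratios, which is why the 9 vs 6.8 micron comparison holds regardless of the invented values.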

There are available CMOS sensors in which each pixel is separately mapped with its own A-to-D converter, so theoretically at least these could be kept on longer in the shadow areas. Good for still shots.

*Or energy packets: photons, or light waves, arrive at a certain number of ergs/mm2, and that energy level depends on the light incident on the illuminated subject and the degree to which that light is reflected by the object, or is absorbed and fluoresces, sending light back at another wavelength, or comes from a light source in the object itself, such as a flame or electric light bulb.
 

Don Lashier

New member
Asher Kelman said:
However, the number of photons collected can only be increased by either increasing the surface area of the sensel or else by prolonging the shutter time.

But that's my point. Deeper wells would allow more photons to be collected, albeit at a lower sensitivity (ISO) than a larger diameter would. Assuming noise didn't change, this would theoretically increase DR (saturation level : noise)?

- DL
 

Ray West

New member
But when I get all these photons in the wells, how do I get 'em out again? Turn the camera upside down and give it a jolly good shake?
 
leonardobarreto.com said:
Interesting thread, and I like the "sensel" word, but, why do we need to use "", isn't it a proper word yet?

I use the word sensel, which is short for 'sensor element', quite deliberately. When discussing things that go on at the silicon level, it is only too easy to let confusing terminology get in the way of clear understanding. Many people talk about a sensor when they probably mean a 'sensor-array', and when talking about a pixel they may correctly mean an RGB triplet, although some mean a single sensor (but not a sensor-array).

Following the terminology as used by the ISO, it is IMHO unambiguous when we use sensor-array for the full surface, and sensor element or sensel for a single element.

Please go back to Medium Format or Asher will move you elsewhere, after all Leica is a mere 35mm. : )

Yes, 35mm is very off-topic here. I'm rather thinking of the Phase Ones, especially the 9-micron sensel pitch ones with superior Dynamic Range, which are done justice by a 16-bit ADC.

Ok, I will be back .. in NY !

Good luck relocating!

Bart
 
Don Lashier said:
So here's a dumb question - why can't they just make the wells deeper? I'm sure there's a simple answer or they would have done it already.

Not a dumb question at all. The way I understand it, the electrons are formed/stored at the immediate boundary between two layers in the doped silicon. That space is extremely 'thin', and thus the storage capacity is mostly defined by the surface area rather than the depth.

Bart
 
Ray West said:
But when I get all these photons in the wells, how do I get 'em out again? Turn the camera upside down and give it a jolly good shake?

Nah, that's the analog way. Digital tends to apply electroshocks ... ;-)

Bart
 
