
New Samsung sensor for Pentax K20D

Well, the rumors about who is making the Pentax sensor can now be answered with fact!

Just posted on DPR is a brief hands-on preview stating that "by minimizing the circuitry around each photo site, they have been able to keep the light-sensitive area of each pixel to the same size as other companies manage on 12MP sensors."

So, does this mean Nikon D300 / Canon 40D noise control in a 14.6MP sensor? We'll see!
 

John Sheehy

New member
Well, the rumors about who is making the Pentax sensor can now be answered with fact!

Just posted on DPR is a brief hands-on preview stating that "by minimizing the circuitry around each photo site, they have been able to keep the light-sensitive area of each pixel to the same size as other companies manage on 12MP sensors."

So, does this mean Nikon D300 / Canon 40D noise control in a 14.6MP sensor? We'll see!

That should increase the total number of photons that can be captured at once, but does not necessarily mean they will be collected at a higher rate; it might just mean a lower base ISO. The cameras that perform well at high ISOs do so not just because of photon collection, but because of the lower noise generated in reading and digitizing the sensor. There is no direct correlation to pixel size for this.
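
To put rough numbers on that, here is a minimal sketch in Python (illustrative, assumed values only, not measurements of any camera) of how the same small photon count gives very different shadow SNR depending on read noise:

```python
# Illustrative only: how read noise, not just photon capacity, sets shadow SNR.
import math

def snr_db(signal_e, read_noise_e):
    """SNR of a mean signal (in electrons) with Poisson shot noise plus read noise."""
    total_noise = math.sqrt(signal_e + read_noise_e ** 2)  # shot-noise variance = signal
    return 20 * math.log10(signal_e / total_noise)

# A deep-shadow signal of 25 electrons (assumed value) with different read noises:
for read_noise in (2.0, 5.0, 15.0):  # electrons RMS, assumed for illustration
    print(f"read noise {read_noise:4.1f} e-: shadow SNR = {snr_db(25, read_noise):5.1f} dB")
```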

In the K10D, the read noise is purportedly very low at ISO 100, but the higher ISOs are really just the same 22-bit digitization quantized differently, and do not have the benefits of lower read noise in electrons (or relative to absolute signal) at high ISO, like many other cameras now do.

To do that, it seems that space usually is needed near the photosites, but who knows what surprises are next.
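
To make the distinction concrete, a simplified model (with assumed noise figures, not measurements of any camera) of input-referred read noise, with and without analog gain ahead of the ADC, looks something like this:

```python
# Simplified model (assumed values): total read noise referred to the sensor,
# in electrons, when downstream/ADC noise either is or is not divided down by
# analog gain before digitization.
import math

upstream_e   = 2.0    # noise at the pixel/source follower, electrons RMS (assumed)
downstream_e = 8.0    # ADC and downstream noise, electrons at unity gain (assumed)

def input_referred_read_noise(analog_gain):
    """Downstream noise is divided by the analog gain when referred to the pixel."""
    return math.sqrt(upstream_e ** 2 + (downstream_e / analog_gain) ** 2)

for iso, gain in [(100, 1), (400, 4), (1600, 16)]:
    with_analog  = input_referred_read_noise(gain)  # analog gain tracks ISO
    digital_only = input_referred_read_noise(1)     # same digitization, just rescaled
    print(f"ISO {iso:5d}: analog gain {with_analog:.1f} e-, digital scaling {digital_only:.1f} e-")
```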
 
I usually prefer to watch technical discussions of this magnitude, but I think I have a question!

If a larger photosite doesn't increase the signal-to-noise ratio (so to speak?) at higher sensitivities, then what does? I repeatedly see comments on DPR blaming photosite size for digital noise in small sensors. Is this wrong or just misleading?

If I understand what you're saying, the size of THE SPACE AROUND THE PHOTOSITE is more likely to increase the signal-to-noise ratio at higher sensitivities because it is likely to isolate the electronics that create the noise?
 

John Sheehy

New member
I usually prefer to watch technical discussions of this magnitude, but I think I have a question!

If a larger photosite doesn't increase the signal-to-noise ratio (so to speak?) at higher sensitivities, then what does? I repeatedly see comments on DPR blaming photosite size for digital noise in small sensors. Is this wrong or just misleading?

If I understand what you're saying, the size of THE SPACE AROUND THE PHOTOSITE is more likely to increase the signal-to-noise ratio at higher sensitivities because it is likely to isolate the electronics that create the noise?

Noise isn't just about photon collection. You could have a huge sensor with huge photosites and still have very poor shadows if there is a lot of electronic read noise. Even ignoring read noise, big photosites really only mean lower resolution. The total number of photons captured in a given subject area is far more important than how many are captured in each pixel. Subdividing them into smaller pixels does not increase *image* shot noise.
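
As a quick illustrative sketch (assumed photon counts, nothing measured), four small pixels summed over the same subject area show the same shot-noise statistics as one big pixel covering that area:

```python
# Illustrative simulation: photons over a fixed subject area, collected either
# by one big pixel or by four small pixels that are then summed.
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 10_000      # assumed mean photon count for the area of one big pixel
trials = 200_000

big = rng.poisson(mean_photons, trials)                                # one big pixel
small_summed = rng.poisson(mean_photons / 4, (trials, 4)).sum(axis=1)  # 4 small pixels

print(f"one big pixel : mean {big.mean():.0f}, std {big.std():.1f}")
print(f"4 small summed: mean {small_summed.mean():.0f}, std {small_summed.std():.1f}")
# Both show the same shot noise (about sqrt(10000) = 100) for the same area.
```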

The Canon model of having special amplifiers right at the photosite is going to hit a wall soon; the fact is, you can get much better base-ISO IQ by having lots of little pixels, with readout optimized for only one true gain level. The Pentax K10D already has the lowest read noise per pixel in the industry, and it looks like the K20D may go even lower, even with 14MP in an APS-C sensor. I have not had proper RAW samples to tell for sure just yet, but it looks like the K20D will have a pixel read noise of about 0.74 ADU, while the lowest from any other brand (the 1Dmk3, for example) is a 12-bit equivalent of 1.25 ADU. 14MP in a 1.5x crop is the equivalent of 31.5MP full-frame. The read noise would then be equivalent to 0.74/((31.5/14)^0.5) ≈ 0.49 ADU (which would need 14 bits to avoid visible quantization). That's roughly 1.3 stops more DR than the Canon mk3 cameras.
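
A short worked check of that normalization, taking the estimated 0.74 and 1.25 ADU figures above as inputs:

```python
# Worked check of the normalization above; 0.74 and 1.25 ADU are the estimates
# quoted in the post, both on a 12-bit scale.
import math

k20d_pixel_read_noise = 0.74          # ADU per pixel (estimated)
other_best_read_noise = 1.25          # ADU per pixel, 12-bit equivalent (1Dmk3 estimate)

mp_aps = 14.0                         # K20D pixel count used in the post
mp_ff_equiv = mp_aps * 1.5 ** 2       # same pixel pitch on full frame -> 31.5 MP

# Referencing the noise to the coarser 14 MP sampling averages about 2.25 K20D
# pixels per reference pixel, reducing random read noise by the square root:
normalized = k20d_pixel_read_noise / math.sqrt(mp_ff_equiv / mp_aps)
print(f"area-normalized K20D read noise: {normalized:.2f} ADU")                     # ~0.49

# Dynamic-range difference for the same 12-bit full scale:
print(f"DR advantage: {math.log2(other_best_read_noise / normalized):.1f} stops")   # ~1.3
```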

Sorry, I opened a reply to this post weeks ago but never typed it in. I've been cleaning up Firefox: I had about 40 tabs open and it was bogging down, so I killed Firefox from the Task Manager and brought it back up with the old session (it used about 85% less memory after the resurrection, even though everything that had been open before was open again, history included). Keep that in mind, anyone using Firefox. You can virtually hibernate Firefox (and clean up its memory usage) by killing its process tree in Windows Task Manager. When you launch Firefox again, it will ask you if you want to resume the old session or start fresh, at least in the latest versions. Of course, it reloads the websites again, so if they have changed, you will lose the old page.
 
So, theoretically, someone could design a camera for a higher base ISO and come out ahead of the competitors' current, gain-modified image quality?

What sacrifices are necessary to allow high-quality, high-ISO images?
 
The total number of photons captured in a given subject area is far more important than how many are captured in each pixel. Subdividing them into smaller pixels does not increase *image* shot noise.

From a human visual system (HVS) point of view, that statement is a bit too general, IMHO.

The HVS averages scene luminance (for adaptation) over an area of approx. 1 degree. Visual acuity is much (50 or more times) better, so detection of noise against an average level is quite good, unless the noise is much smaller than the level of visual acuity (or, even harder, vernier acuity).

In other words, the statement only holds for small magnification factors, where the 'smaller' noisier pixels cannot be resolved or are drowned in other detail. As soon as we enlarge a bit, noise becomes more noticeable.

Bart
 
Bart!

I'd love to see a diagram of that!

Hi Asher,

The simplest method would be by looking at the PPI of the output image. When the output exceeds 300(-600) PPI at a viewing distance of about 10 inches, the single pixel random noise becomes blurred by our limited acuity, even in smooth areas like skies. Lower spatial noise frequencies will still be visible, as long as the amplitude is high enough.
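
As a rough check (assuming the commonly quoted ~1 arcminute acuity limit), the angular size of one output pixel at a 10 inch viewing distance works out roughly like this:

```python
# Rough check of the 300 PPI / 10 inch figure against the commonly quoted
# ~1 arcminute limit of visual acuity (assumed threshold).
import math

def pixel_arcmin(ppi, viewing_distance_inches):
    """Angular size of one output pixel, in arcminutes."""
    pixel_pitch = 1.0 / ppi  # inches
    return math.degrees(math.atan2(pixel_pitch, viewing_distance_inches)) * 60

for ppi in (150, 300, 600):
    print(f"{ppi} PPI at 10 in: {pixel_arcmin(ppi, 10):.2f} arcmin per pixel")
# Around 300 PPI a single pixel shrinks to about 1 arcminute, so per-pixel
# noise starts to blur together; at lower PPI it remains individually visible.
```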

The HVS is keen on detecting patterns (even if there are none) to reduce information overload, so pattern noise is even easier to spot, and real detail is picked up with higher priority than random noise. Here we get into the territory of perception, where it becomes even trickier to make absolute statements. The consequence is that we'd need to view our computer displays from something like almost 1 metre or 3 feet to blur per-pixel noise (or use noise reduction to lower the amplitude).

One of my favorite demonstrations of pattern recognition (where there is none) is the following: do you see a triangle?
[Attached image: Triangle-or-not.gif]

That shows how hard it becomes to predict visual response, unless we are talking about purely random phenomena, where visual acuity starts to become the threshold.

Bart
 

John Sheehy

New member
The simplest method would be by looking at the PPI of the output image. When the output exceeds 300(-600) PPI at a viewing distance of about 10 inches, the single pixel random noise becomes blurred by our limited acuity, even in smooth areas like skies. Lower spatial noise frequencies will still be visible, as long as the amplitude is high enough.

I don't think that this resolution threshold is as relevant to the experience of noise as you seem to suggest. Yes, some of the noise becomes unresolved, but on the well-resolved side of the issue there really is no connection between resolvability and the experience of noise. The lower per-pixel noise of bigger pixels looks noisier when viewed at the same subject size as smaller-but-noisier pixels, with both well resolved. I cannot find a single arbitrary or real image that improves when binned. Can you? The only ones that clean up with a visual gain are ones with artificial, repeating noise. For example, if you had 2x2-pixel tiles of noise, each with a zero sum, then binning 2x2 would remove the noise completely. Truly random noise always leaves something behind; while it is weaker than the original per pixel, it is just as strong when its size (area) is taken into account.

There is no question in my mind that binning is false economy. The HVS can do a much better job working with the rawest visual materials, binning and regrouping as the eyes scan the image for a much higher resolution than the very permanent single grouping of a hard bin can provide.
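
As a small illustrative simulation of the earlier point (assumed noise values, ideal 2x2 averaging), binning removes a repeating zero-sum 2x2 tile completely but only halves truly random noise:

```python
# Illustrative simulation: 2x2 binning removes a repeating zero-sum 2x2 noise
# tile completely, but only reduces truly random noise by a factor of 2.
import numpy as np

rng = np.random.default_rng(1)
h = w = 256

# Structured noise: a fixed 2x2 tile whose four values sum to zero.
tile = np.array([[3.0, -1.0],
                 [-2.0, 0.0]])
structured = np.tile(tile, (h // 2, w // 2))

# Truly random noise with the same per-pixel standard deviation.
random_noise = rng.normal(0.0, structured.std(), (h, w))

def bin2x2(img):
    """Average each non-overlapping 2x2 block into one output value."""
    hh, ww = img.shape
    return img.reshape(hh // 2, 2, ww // 2, 2).mean(axis=(1, 3))

print(f"tiled noise : std {structured.std():.2f} -> binned {bin2x2(structured).std():.2f}")
print(f"random noise: std {random_noise.std():.2f} -> binned {bin2x2(random_noise).std():.2f}")
# The zero-sum tile bins away to ~0.00; the random noise only drops to ~half.
```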
 