Doug Kerr
Well-known member
We note that, even with a "very good lens", and an aperture such that the effects of diffraction can essentially be ignored, for a digital camera sensor with, say, 4000 pixels per picture width, the measured "resolution" is notably less than the 2000 cycles per picture width we might expect.
I often mention the "Kell factor" in this connection.
This situation comes from several closely-related mechanisms (which are actually different ways of looking at the same thing). Although the basic matter was noted during early work on facsimile, it was first well characterized in 1934 by Raymond D. Kell of RCA during work on early television systems.
Outlook A
Imagine that, with the sensor described above, we had a vertical-line test pattern consisting of 2000 line pairs per picture width (imaged on the sensor by a "perfect" lens). Could it be "captured" by the sensor?
First imagine that in one test (probably by accident) the alignment of the pattern was such that each black line and each white line fell along one column of sensels. Then the pattern would be captured in good style by the sensor.
Now imagine that in a second test the alignment of the pattern was such that the boundaries between black and white stripes fell along the centerlines of the columns of sensels. Now every sensel would pick up a "mid-gray" result (it being illuminated half by white and half by black, if you will excuse the conceit of being "illuminated by black"). The delivered image would be pure gray.
Thus we certainly could not say that this sensor could resolve a pattern of 2000 line pairs per picture width. Sometimes it could, and sometimes it couldn't at all, and sometimes it "sort-of-could" (that is, with a degraded contrast).
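The two alignments are easy to check numerically. Here is a quick sketch (Python with numpy; the 4000-sensel width and the box-averaging model of a sensel are my own simplifying assumptions):

```python
import numpy as np

def sensel_response(freq_cycles, phase=0.0, n_sensels=4000, oversample=100):
    """Box-average a black/white stripe pattern over each sensel's width.

    freq_cycles: stripe-pair frequency in cycles per picture width.
    phase: pattern shift, as a fraction of one cycle.
    """
    # Fine sample positions across the picture width (0..1), at cell midpoints.
    x = (np.arange(n_sensels * oversample) + 0.5) / (n_sensels * oversample)
    # Square-wave target: 1.0 = white stripe, 0.0 = black stripe.
    pattern = (np.sin(2 * np.pi * (freq_cycles * x + phase)) >= 0).astype(float)
    # Each sensel integrates the light falling across its own width.
    return pattern.reshape(n_sensels, oversample).mean(axis=1)

# Stripes exactly at the Nyquist frequency (2000 cycles for 4000 sensels):
aligned = sensel_response(2000, phase=0.0)    # stripes line up with sensels
adverse = sensel_response(2000, phase=0.25)   # stripe edges on sensel centers
print(aligned.min(), aligned.max())   # full black-to-white contrast
print(adverse.min(), adverse.max())   # every sensel reads mid-gray (0.5)
```

Shifting nothing but the phase takes the captured contrast from its full value down to exactly zero.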
A simplistic explanation of the concept of the Kell factor is that "well, overall, for random alignment of the pattern, the 'average' attainable resolution would be on the order of 75% of that suggested by the sensel pitch." That value, 75%, is the "Kell factor" associated with this situation.
Outlook B
The problem with that outlook is that the notion of the "average resolution" attained over the range of pattern alignments doesn't really have much meaning insofar as the user experience is concerned.
But let's do a test (using that same sensor) with a "signal" (it now helps to think in terms of a raster scan in a video context) whose frequency is slightly less than 2000 cycles per picture width. As we move across the picture, the "phase" of the pattern (compared to the phase of the sensels) continually shifts. The result might be like this:

Courtesy of Wikimedia Commons
We see a progressive shift from the "picked up OK" to the "all we get is gray" situations. And this is often described as a "beat frequency effect" between the frequency of the sensels and the frequency of the lines. The gray swaths are regions in which the phase between the pattern and the sensel layout is nearly the "adverse one".
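The beat can be made visible numerically as well: box-average the stripes into sensels (the same toy model as above; numpy assumed) and measure the stripe contrast within small blocks of neighboring sensels.

```python
import numpy as np

n_sensels, oversample = 4000, 100
x = (np.arange(n_sensels * oversample) + 0.5) / (n_sensels * oversample)

def captured(freq_cycles):
    """Box-averaged capture of a stripe pattern (1.0 = white, 0.0 = black)."""
    pattern = (np.sin(2 * np.pi * freq_cycles * x) >= 0).astype(float)
    return pattern.reshape(n_sensels, oversample).mean(axis=1)

def local_contrast(image_row, block=8):
    """Visible stripe contrast (max minus min) within each block of sensels."""
    blocks = image_row.reshape(-1, block)
    return blocks.max(axis=1) - blocks.min(axis=1)

# 1990 cycles per picture width: just below the Nyquist frequency of 2000.
contrast = local_contrast(captured(1990))
print(contrast.max())   # near 1: regions where the stripes come through
print(contrast.min())   # near 0: the gray swaths at the "adverse" phase
```

The local contrast swings between nearly full and nearly zero as we cross the picture: those near-zero stretches are the gray swaths.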
And yes, this is in fact a manifestation of aliasing, a phenomenon inextricably linked with the Kell factor.
Now suppose that we do the same thing but with a pattern whose horizontal line frequency is only 66% of 2000 cycles per picture width (we can of course say that its frequency is 66% of the Nyquist frequency). The result might be like this:
Courtesy of Wikimedia Commons
Note that here there is a small appearance of the "beat frequency" phenomenon, but it is not prominent.
So perhaps here we can safely operate with a pattern at a frequency of 66% of the Nyquist frequency; that is, the usable resolution of this system is 66% of the Nyquist frequency. The Kell factor here seems to be 66%.
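Comparing the worst-case (minimum) block contrast at the two frequencies shows why 66% of Nyquist feels "safe" (same toy sensel model; numpy assumed, and the block size of 8 sensels is my own arbitrary choice):

```python
import numpy as np

n_sensels, oversample = 4000, 100
x = (np.arange(n_sensels * oversample) + 0.5) / (n_sensels * oversample)

def captured(freq_cycles):
    """Box-averaged capture of a stripe pattern (1.0 = white, 0.0 = black)."""
    pattern = (np.sin(2 * np.pi * freq_cycles * x) >= 0).astype(float)
    return pattern.reshape(n_sensels, oversample).mean(axis=1)

def worst_contrast(image_row, block=8):
    """The poorest stripe contrast found in any block of neighboring sensels."""
    blocks = image_row.reshape(-1, block)
    return (blocks.max(axis=1) - blocks.min(axis=1)).min()

print(worst_contrast(captured(1990)))   # near 0: gray swaths just below Nyquist
print(worst_contrast(captured(1320)))   # stays substantial at 66% of Nyquist
```

At 66% of Nyquist the contrast never collapses anywhere in the frame, no matter how the pattern happens to be aligned; just below Nyquist it collapses to nearly nothing in the beat nulls.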
But what about Shannon-Nyquist?
The Shannon-Nyquist sampling theorem tells us that if we sample some phenomenon (the voltage of an audio waveform, the luminance along a path across an image) at a frequency of fs, then all frequency components of the phenomenon with frequency less than fs/2 are "captured" by the succession of samples, and from the succession of samples the original phenomenon can be precisely reconstructed.
Why do we not seem to enjoy this promise in our little example?
Because in digital photography, we do not have a system which end-to-end completely plays the "Shannon-Nyquist game".
In a digital audio system, we sample the waveform, and then, at the "receiving" end, we expose a train of pulses matching the samples to a low-pass filter cutting off at half the sampling frequency. This is the "reconstruction filter", and it is a vital part of the whole game. Out of it comes the original waveform.
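In the audio case that promise really does hold. A sketch (Python with numpy; the sample rate and the tone frequency are arbitrary choices of mine): sample a tone at 90% of the Nyquist frequency, then rebuild it between the samples with the ideal sinc ("brick-wall" low-pass) reconstruction filter.

```python
import numpy as np

fs = 4000                 # sampling frequency (samples per unit of time)
f = 1800                  # tone at 90% of the Nyquist frequency fs/2
n = np.arange(fs)         # one unit of time's worth of sample indices
samples = np.sin(2 * np.pi * f * n / fs)

# Whittaker-Shannon interpolation: each sample excites a sinc pulse, which
# is exactly what passing the pulse train through an ideal low-pass
# reconstruction filter does.
t = np.linspace(0.4, 0.6, 501)          # evaluate mid-record, between samples
recon = np.sinc(fs * t[:, None] - n[None, :]) @ samples
error = np.max(np.abs(recon - np.sin(2 * np.pi * f * t)))
print(error)   # tiny: the waveform between the samples is recovered
```

Displayed directly, those same samples would show the beating we saw earlier; passed through the reconstruction filter, the tone comes back essentially exactly.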
But in digital photography (as in television), there is really not a "reconstruction filter". We do not, for example, create little spots of light on a display, one for each sample value, and then place in front of this an optical low-pass filter, out of which will come the original image. Rather, we create little blobs of light, one for each sample, and then just look at that array.
The consequence of this "shortcut" is that, for a test pattern at a frequency a little below the Nyquist frequency, we see these spurious "beat frequency" components in the delivered image.
In effect, the Kell factor tells us how much the practical resolution of the system has been compromised by this "shortcut".
************
Time for breakfast. Today, five kinds of fruit, scrambled eggs, bacon, and greaseless hashed brown potatoes.
Best regards,
Doug