....The attraction of these cameras (if they were to cost less than my present car, new...) is that 100 Mpix is starting to reach the maximum resolution that can be used in general photography. We can use more than 100 Mpix, of course; see what Gigapan does. But if we image three-dimensional subjects, there is a limit to resolution that comes from the combined constraints of depth of field and diffraction. We are now approaching that limit.
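To put rough numbers on that limit, here is a back-of-envelope sketch. The 550 nm wavelength, the full-frame 36 × 24 mm sensor, and the Nyquist pixel pitch of λN/2 at the diffraction cutoff are my own illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope estimate: the diffraction MTF cutoff is 1/(lambda*N)
# cycles/mm, so the smallest useful pixel pitch is about lambda*N/2 (Nyquist).
WAVELENGTH_MM = 550e-6                 # green light, 550 nm, in mm
SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0  # full-frame sensor, as an example

def diffraction_limited_mpix(f_number):
    """Megapixels beyond which diffraction at this f-number adds no detail."""
    pitch_mm = WAVELENGTH_MM * f_number / 2.0   # Nyquist pixel pitch
    return (SENSOR_W_MM / pitch_mm) * (SENSOR_H_MM / pitch_mm) / 1e6

for n in (4, 5.6, 8, 11, 16):
    print(f"f/{n}: ~{diffraction_limited_mpix(n):.0f} Mpix")
```

Under these assumptions the ceiling is about 94 Mpix at f/11 and about 45 Mpix at f/16, the working apertures one actually needs for depth of field on three-dimensional subjects, which is why 100 Mpix sits close to the practical limit.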
Jerome,
All good points, except I do not know the math of the optical limits you point out.
I have several Gigapans! Fabulous machines!
With computational optics, and even at the current state of AI in photography, we have already entered this phase with
1. predictive AF and
2. Hasselblad’s angular “Focus & Recompose” adjustment using a single, accurate central focus point.
But for many decades researchers have been using arrays of small cameras to make 3D models of a scene. One could later select straight, angled, or shifted views and obtain computationally generated, realistic images.
Until recently, apart from stereo photography, no major camera company included multiple lenses in an array. Today, however, it’s normal for higher-end mobile phones. I routinely photograph in “Portrait Mode,” where a wide-angle and a normal lens are employed simultaneously. One can then alter the “aperture” of this combined lens to blur the background at one’s whim and fancy!
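The idea behind that synthetic blur can be sketched in a toy one-dimensional example. The function names, the box blur standing in for real lens bokeh, and the linear blend by depth are all my own simplifications, not how any phone actually implements it:

```python
def box_blur_1d(signal, radius):
    """Naive box blur: each sample becomes the average of its neighborhood."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fake_portrait(signal, depth, focus_depth, radius=1):
    """Toy 'Portrait Mode': blur each sample by how far its depth is
    from the chosen focus plane (0 = fully sharp, 1 = fully blurred)."""
    blurred = box_blur_1d(signal, radius)
    result = []
    for s, b, d in zip(signal, blurred, depth):
        w = min(1.0, abs(d - focus_depth))   # blend weight from depth map
        result.append((1 - w) * s + w * b)
    return result
```

Samples whose depth matches the focus plane pass through untouched, while everything else is blended toward the blurred copy; a real implementation would use the depth map from the two lenses and a proper lens-shaped kernel, but the principle is the same.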
I imagine that tiny, separate cell-phone camera chips might be employed on either side of the MF camera lens to fine-tune the background.
This depth knowledge would further allow full recognition of shadows in the primary high-resolution image, so that luminance could, in theory, be changed to erase or create shadows perfectly in camera!
So we will see many new advances yet!
But whether it will allow more MP, I don’t know.
What I am certain of is that there’s tremendous room ahead for better use of the image elements we already get from the main sensor!
Asher