Jerome Marot
Well-known member
Photography was invented in 1826, but the earliest extant written record of the camera obscura is found in Chinese writings from the 4th century BC. It is pretty old technology. All cameras, analog or digital, still or video, use the same principle: that of the camera obscura, which projects an inverted image onto a sensor.
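The pinhole projection behind the camera obscura fits in a few lines. This is only an illustrative sketch; the focal distance and point coordinates are made up:

```python
# A minimal sketch of the pinhole (camera obscura) projection: a 3D
# point (X, Y, Z) in front of the hole maps onto a sensor plane at
# distance f behind it, and the image comes out inverted.

def project_pinhole(point, f=0.05):
    """Project a 3D point (in metres) onto the sensor plane."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    # The minus signs are the inversion: left/right and up/down flip.
    return (-f * X / Z, -f * Y / Z)

print(project_pinhole((1.0, 0.5, 2.0)))   # a point 2 m away
print(project_pinhole((2.0, 1.0, 4.0)))   # twice as far: same pixel
# Both print (-0.025, -0.0125): the depth Z is lost in the projection.
```

That two points at different depths land on the same pixel is exactly the limitation discussed below.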
The limitation inherent to that process is that depth information is lost: the sensor is flat. This is not usually perceived as a limitation, because the paper on which the image is printed or the screen on which it is displayed is also flat. But it is a limitation if one wants to edit the picture afterwards, for example to change the lighting.
Other principles of photography exist which do not discard the depth information. 3D cameras based on a pair of lenses are only a crude approximation, and extracting the 3D map from them is computationally intensive. Plenoptic cameras and cameras using laser ranging have been available, but have never been very popular. Basically, only Lytro tried to reach the mass market, and it did not really succeed because its software was not quite ready for prime time.
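As a sketch of why dual-camera depth is both crude and costly: depth only falls out of per-pixel disparity, which has to be found by a correspondence search between the two images. The focal length and baseline below are invented numbers, roughly in the range of a dual-camera phone:

```python
# Stereo geometry itself is trivial: Z = f * B / d, for focal length f
# (in pixels), baseline B (metres) and disparity d (pixels).

def depth_from_disparity(d_px, f_px=2800.0, baseline_m=0.012):
    """Depth in metres from a disparity in pixels (made-up f and B)."""
    if d_px <= 0:
        return float("inf")  # zero disparity: point at infinity
    return f_px * baseline_m / d_px

# The expensive part in practice is finding d_px for every pixel by
# matching patches between the two views, not this division.
print(depth_from_disparity(28.0))  # roughly 1.2 m
```

The small baseline of a phone also makes the disparity tiny, which is why the resulting depth maps are only approximate.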
What can you do with a camera which keeps accurate depth information? You can (given the right software):
1. change the lighting
2. change the amount and kind of background blur
3. move objects around, or even replace one object with another, even in real time.
(Item 3 means that you could, for example, replace a talking face with someone else's, impersonating other people or appearing younger or older than in reality.)
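Item 2 above can be sketched with a toy depth map: blur only the pixels whose depth lies beyond the focus plane. Everything here is invented for illustration, and a box blur stands in for a proper lens-blur kernel:

```python
import numpy as np

# Toy depth-based background blur: background pixels (depth beyond the
# focus plane) are averaged with their neighbours; in-focus pixels are
# left untouched.

def portrait_blur(image, depth, focus_m=1.5, radius=2):
    h, w = depth.shape
    out = image.copy()
    for y in range(h):
        for x in range(w):
            if depth[y, x] > focus_m:  # background pixel
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# 4x4 grayscale image; left two columns near (1 m), right two far (3 m).
img = np.arange(16, dtype=float).reshape(4, 4)
depth = np.where(np.arange(4) < 2, 1.0, 3.0) * np.ones((4, 1))
blurred = portrait_blur(img, depth)
print(blurred[:, :2])  # near columns come back unchanged
```

A real portrait mode varies the blur radius continuously with depth, but the principle is the same: the depth map decides, per pixel, how much blur to apply.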
The news is that Apple has announced a new iPhone with such a camera and the software to go with it. It may not feel like a revolution now, but it is likely to become one. The iPhone is, today, the most popular camera. The technology is likely to appear in other, cheaper phones soon, and there are thousands of developers ready to think about new "apps" for the new cameras. In fact, the new iPhone comes with functions 1-3 above right out of the box.