
News: A paradigm change: programmatic photography.

Jerome Marot

Well-known member
Photography was invented in 1826, and the earliest extant written record of the camera obscura is found in Chinese writings dated to the 4th century BC. It is pretty old technology. All cameras, analog and digital, photo and video, use the same principle of the camera obscura, which projects an inverted image onto a sensor.

The limitation inherent to that process is that depth information is lost: the sensor is flat. This is not usually perceived as a limitation, because the paper on which the image is printed or the screen on which it is displayed is also flat. But it is a limitation if one wants to edit the picture afterwards, changing the lighting for example.

Other principles of photography exist which do not discard the depth information. 3D cameras based on a pair of lenses are only a crude approximation, and extracting a depth map from them is computationally intensive. Plenoptic cameras and cameras using laser ranging have been available, but never became very popular. Basically, only Lytro tried to reach the mass market, and it did not really succeed because its software was not quite ready for prime time.

What can you do with a camera which keeps accurate depth information? You can (given the right software):
  1. change the lighting
  2. change the amount and kind of background blur (a short code sketch follows below)
  3. move objects around and even replace one object with another, in real time.

(Point 3 means that you could, for example, replace a talking face with someone else's, impersonating another person or appearing younger or older than in reality.)
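To make point 2 above concrete, here is a rough sketch (in Swift, with Core Image) of how a per-pixel depth map can drive a background blur. The function name, the contrast tweak standing in for proper depth normalisation, and the default radius are purely illustrative assumptions, not any camera maker's API:

```swift
import CoreImage

// Illustrative sketch: blur a photo more where the depth map says the scene
// is farther away. The normalisation step is a placeholder; a real app would
// remap depth so the subject plane maps to zero blur.
func backgroundBlur(photo: CIImage, depthMap: CIImage, maxRadius: Double = 12.0) -> CIImage? {
    // Use the depth map as a blur mask: brighter (farther) pixels get more blur.
    let mask = depthMap
        .applyingFilter("CIColorControls", parameters: [kCIInputContrastKey: 1.5])
        .clampedToExtent()

    guard let blur = CIFilter(name: "CIMaskedVariableBlur") else { return nil }
    blur.setValue(photo, forKey: kCIInputImageKey)
    blur.setValue(mask, forKey: "inputMask")
    blur.setValue(maxRadius, forKey: kCIInputRadiusKey)
    return blur.outputImage?.cropped(to: photo.extent)
}
```

The same depth map could just as well drive relighting (point 1) or object masking (point 3); only the filter chain changes.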

The news is that Apple announced a new iPhone with such a camera and the software to go with it. It may not feel like a revolution now, but it is likely to be one. The iPhone is, today, the most popular camera. The technology is likely to be available in other, cheaper phones soon, and there are thousands of developers ready to think about new "apps" for the new cameras. Actually, the new iPhone comes with functions 1-3 above right out of the box.
 

Asher Kelman

OPF Owner/Editor-in-Chief
Jerome,

For some time I thought of using an array of multiple cameras to assemble 3D data clouds representing an entire scene, so that one could edit at any depth within the picture. However, I quickly discovered that I did not have the prowess in geometry and mathematics to do any of the work myself, so I just followed the literature. It all makes sense. I can imagine an MF camera setup linked to an infrared scanner and several lateral cameras to build a really accurate 3D model, with the color data added as a skin at whatever depths one wishes.

There are many decades of scholarly work that can now be exploited by making new iPhone X apps extending the new programmatic capabilities. Even Apple will be surprised by the inventiveness that is about to be unleashed by this new platform!

Now, the challenge is for the Canon-Nikon-Pentax-Fuji-Phase One-Sony camera manufacturers to grasp what has finally happened!

There will always be film cameras and planar imaging. But as you point out, as of yesterday the imaging world comes face to face with programmatic photography, and the previous attempts become almost irrelevant.

I have waited for this!

Asher
 

Jerome Marot

Well-known member
This thread was started with the 2017 iPhone, which was the first to include 3D measurements. For the past three years, however, the measuring system has been limited to the front camera, where it is also used to unlock the phone by recognising the user's face. So it seems that, in the mind of Apple and other smartphone manufacturers, the 3D function is only intended for selfies. There are some efforts to compute an approximate depth map for the back camera (for simulated depth of field), but the manufacturers do not spend the money on an extra component and the results are somewhat crude. This, I think, demonstrates that improving photography is not really a priority for the manufacturers.

Last month, Apple released a new iPad Pro. This new iPad has, for the first time, a 3D sensor on the back. It is also a different and more expensive sensor, based on LIDAR. The question is: what does Apple have in mind with that sensor? Keep in mind that Apple will not spend the money on a new sensor without a reason.

When I say "what does Apple have in mind?", maybe I need to expect how the market for smartphones and tablets is working. You may think that you own your smartphone and can do what you want with it. This is not quite true. You and developers are restricted to access the actual hardware and can only do what the operating system allows. This is not an Apple-only feature, Android does the same, BTW.

So, let us suppose you bought the new iPad. You may believe you bought a LIDAR camera and can use it to map anything in front of it. You may expect a developer to write an app for 3D scanning, maybe for producing models or, if you want to make movies, for including CGI graphics, replacing actors, etc. That won't be so easy.

We can understand what Apple has in mind by looking at the libraries that give access to the sensor: ARKit 3.5. Basically, in the mind of Apple, the new device is there to recognise walls, ceilings and objects like tables and seats. It is not there for your photography, but to allow apps to project virtual objects into your living room. It seems that the only uses they plan for are games (e.g. projecting Pokémon into your living room) and virtual catalogues (e.g. projecting objects you are considering buying, like a new chair, into your living room to see whether they fit).
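To illustrate, here is a minimal sketch of what the ARKit 3.5 scene-reconstruction path looks like to a developer (the class name and the print statements are my own illustrative choices): you ask for a classified mesh of the room, and ARKit hands back mesh anchors rather than raw LIDAR frames:

```swift
import ARKit

// Minimal sketch: request ARKit 3.5 scene reconstruction on a LIDAR device.
// The framework returns a semantic mesh of the room (walls, floor, tables,
// seats), not the raw depth measurements.
final class RoomScanner: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.meshWithClassification) else {
            print("Scene reconstruction needs a LIDAR-equipped device")
            return
        }
        let config = ARWorldTrackingConfiguration()
        config.sceneReconstruction = .meshWithClassification
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    // ARKit delivers the reconstruction as mesh anchors added to the session.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for case let mesh as ARMeshAnchor in anchors {
            print("Mesh anchor with \(mesh.geometry.faces.count) faces")
        }
    }
}
```

Everything here is oriented towards placing virtual objects on those surfaces; photographic-quality per-pixel depth is not the use case being exposed.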
 

Asher Kelman

OPF Owner/Editor-in-Chief
Jérôme,

I am already using the front 3D camera for building 3D models.


[Attached image: F3366B04-1088-47DF-9657-174DE701CDDF.jpeg]


I also have a prism that allows me to pretend it’s actually a back camera.


[Attached image: C3B51143-95D3-49DA-A393-26D356E9CDA8.jpeg]




[Attached image: 105D47F1-C43F-4F77-85F4-342006E11B11.jpeg]


Asher
 

Jerome Marot

Well-known member
The front camera supports AVCaptureDepthDataOutput, which outputs a depth map directly as an array. I am not so sure that the LIDAR camera supports that same API. It can output a direct polygonal scene reconstruction instead, see:
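For reference, this is roughly what that AVCaptureDepthDataOutput path looks like for the front (TrueDepth) camera. The session wiring is the standard AVFoundation pattern; the class name and queue label are just illustrative:

```swift
import AVFoundation

// Sketch of streaming depth from the TrueDepth camera via
// AVCaptureDepthDataOutput, then reading the map as a float buffer.
final class DepthCapture: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    let depthOutput = AVCaptureDepthDataOutput()

    func start() throws {
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video, position: .front) else { return }
        let input = try AVCaptureDeviceInput(device: device)
        session.beginConfiguration()
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(depthOutput) { session.addOutput(depthOutput) }
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.commitConfiguration()
        session.startRunning()
    }

    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        // Convert to 32-bit float and inspect the raw pixel buffer (the "array").
        let floatData = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let map = floatData.depthDataMap
        CVPixelBufferLockBaseAddress(map, .readOnly)
        print("Depth map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
        CVPixelBufferUnlockBaseAddress(map, .readOnly)
    }
}
```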
 

Asher Kelman

OPF Owner/Editor-in-Chief
Jérôme,

An important advance!

Can it be exported now, in a standard file format, so that it can then be imported into SolidWorks or AutoCAD?

Asher
 