Today the US Patent & Trademark Office published a patent application from Apple that relates to a next-gen depth-sensing camera system built around a Time-of-Flight (ToF) camera, one that is coming to the iPhone 12 and that just arrived on the latest Galaxy S20 smartphones. The new ToF camera could improve Face ID, and because of the clarity of its depth data it could enable in-air gesturing for the iPhone and, in the future, in vehicles, improve AR experiences and more.
Apple notes that existing and emerging consumer applications have created an increasing need for real-time three-dimensional (3D) imagers. These imaging devices, also known as depth sensors, depth mappers, or light detection and ranging (LiDAR) sensors, enable the remote measurement of distance (and often intensity) to each point in a target scene (referred to as target scene depth) by illuminating the target scene with an optical beam and analyzing the reflected optical signal.
A commonly used technique for determining the distance to each point on the target scene involves transmitting one or more pulsed optical beams toward the target scene and then measuring the round-trip time, i.e. the time of flight (ToF) taken by the optical beams as they travel from the source to the target scene and back to a detector array adjacent to the source.
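The geometry behind that round-trip measurement is simple: the light travels to the scene and back, so the measured time corresponds to twice the depth. A minimal sketch of the relationship (illustrative only, not code from the patent):

```python
# Illustrative ToF sketch: converting a measured round-trip time to a
# depth value. The pulse covers the source-to-target distance twice,
# so the one-way depth is half the total path length.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s: float) -> float:
    """Depth of a scene point given the measured round-trip time in seconds."""
    return C * round_trip_time_s / 2.0

# A round trip of roughly 6.67 nanoseconds corresponds to about 1 m of depth,
# which shows how precise the timing electronics have to be.
print(tof_distance(6.67e-9))  # ~1.0 m
```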
In some embodiments, the processing and control circuitry is configured to group the sensing elements in each of the identified areas together to define super-pixels, and to process together the signals from the sensing elements in each of the super-pixels in order to measure the depth coordinates.
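As a rough illustration of that super-pixel idea, the sketch below groups the sensing elements in each identified area and processes their signals together. The function names, data layout and 2x2 grouping are assumptions made for the example, not details from the patent:

```python
# Hypothetical super-pixel sketch: sensing elements in each identified
# area are grouped together, and their signals are combined so the
# depth measurement uses the whole group rather than single elements.
from typing import Dict, List, Tuple

def group_super_pixels(
    spot_areas: Dict[int, List[Tuple[int, int]]],
    signals: Dict[Tuple[int, int], float],
) -> Dict[int, float]:
    """Sum the signals of all sensing elements belonging to each spot area."""
    return {
        spot_id: sum(signals.get(pixel, 0.0) for pixel in pixels)
        for spot_id, pixels in spot_areas.items()
    }

# Example: the image of one projected spot falls on a 2x2 block of
# sensing elements, which are read out together as one super-pixel.
areas = {0: [(10, 10), (10, 11), (11, 10), (11, 11)]}
counts = {(10, 10): 3.0, (10, 11): 5.0, (11, 10): 2.0, (11, 11): 4.0}
print(group_super_pixels(areas, counts))  # {0: 14.0}
```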
Apple's patent FIG. 3A below is a schematic representation of a pattern of spots projected onto a target scene; FIG. 3B is a schematic frontal view of a ToF sensing array; FIG. 3C is a schematic detail view of a part of the ToF sensing array of FIG. 3B, onto which images of the spots in a region of the target scene of FIG. 3A are cast, in accordance with an embodiment of the invention.
Apple's patent FIGS. 4A and 4B above are schematic frontal views of a ToF sensing array showing sets of super-pixels that are selected for activation and readout in two different time periods.
This is a highly technical invention and subject, and the U.S. Patent Office published two in-depth patent applications covering it. Engineers in this field, and the curious, can review patent applications 20200256669 and 20200256993.
To make this easier to understand, the video presented below explains some of the applications that a Time-of-Flight (ToF) camera could bring to the iPhone 12.
Technically, it could provide superior Face ID imagery that would be handy at night, and it could easily create manipulable 3D images, such as a model of a head, as you'll see in the video.
A ToF camera can capture up to 4x the data of previous 3D scanning technologies. Because of the clarity of its depth data, it could also usher in what's known as in-air gesturing for the iPhone and, in the future, in vehicles.
Other applications could include superior indoor navigation, improved AR experiences and, theoretically, superior background blur or bokeh effects in portrait mode.
Samsung just introduced its ToF camera on the new Galaxy S20+ and S20 Ultra phones. One feature that I've wanted to see on the iPhone for years, one that a company called Lytro specialized in, should be available on the iPhone 12 as well. Samsung describes it this way:
"In the case of Live focus video, you're now able to blur out the background in real time as you take a video, adding a brand-new vibe to your movies. You can even swap between foreground and background focus with ease, to switch up the focus with a tap. Live focus video is available on both the front and rear cameras."
We'll hopefully get to see how Apple presents its ToF camera on October 12th, the rumored date for the iPhone 12 keynote.
For the record, our report's cover graphic represents Apple's patent FIG. 1, a schematic side view of a depth mapping system. The depth mapping system comprises a radiation source that emits individual beams. The radiation source comprises multiple banks of emitters arranged in a two-dimensional array, together with beam optics. The emitters typically comprise solid-state devices, such as vertical-cavity surface-emitting lasers (VCSELs) or other sorts of lasers or light-emitting diodes (LEDs). The beam optics typically comprise a collimating lens and may comprise a diffractive optical element (DOE, not shown), which replicates the actual beams emitted by the array to create the M beams that are projected onto the scene. For more on this, read the full applications linked to above.