Apple Advances its next-gen Time-of-Flight (ToF) camera for iDevices, providing superior 3D Modeling for Face ID and more
Today the US Patent & Trademark Office published a patent application from Apple that relates to range sensing, and particularly to devices and methods for depth mapping based on Time-of-Flight (ToF) measurement. Such cameras could be used in an iMac or iDevice. One application would be to allow for in-air gesture controls. It could also advance Face ID by creating 3D models of the user's head and face, as illustrated in the video below from Analog Devices.
Apple explains that Time-of-Flight (ToF) imaging techniques are used in many depth mapping systems (also referred to as 3D mapping or 3D imaging). In direct ToF techniques, a light source, such as a pulsed laser, directs pulses of optical radiation toward the scene that is to be mapped, and a high-speed detector senses the time of arrival of the radiation reflected from the scene.
Laser-based time-of-flight cameras are part of a broader class of scannerless LIDAR.
Apple further notes that the depth value at each pixel in the depth map is derived from the difference between the emission time of the outgoing pulse and the arrival time of the reflected radiation from the corresponding point in the scene, which is referred to as the "time of flight" of the optical pulses. The radiation pulses that are reflected back and received by the detector are also referred to as "echoes."
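As a back-of-the-envelope illustration (not taken from the patent), the depth conversion Apple describes is simply half the round-trip distance light travels during the measured time of flight. A minimal sketch, assuming a measured round-trip time in seconds:

```python
# Illustrative sketch (not from the patent): converting a measured
# round-trip time of flight ("echo" arrival minus pulse emission)
# into a depth value.
C = 299_792_458.0  # speed of light in m/s

def depth_from_tof(tof_seconds: float) -> float:
    """Depth = (speed of light * round-trip time) / 2.
    The factor of 2 accounts for the pulse traveling out and back."""
    return C * tof_seconds / 2.0

# A 10 ns round-trip echo corresponds to a target about 1.5 m away.
print(depth_from_tof(10e-9))  # ≈ 1.499 m
```

Applied per pixel across a SPAD array, this conversion yields the full depth map.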
Single-photon avalanche diodes (SPADs), also known as Geiger-mode avalanche photodiodes (GAPDs), are detectors capable of capturing individual photons with very high time-of-arrival resolution, on the order of a few tens of picoseconds. They may be fabricated in dedicated semiconductor processes or in standard CMOS technologies. Arrays of SPAD sensors, fabricated on a single chip, have been used experimentally in 3D imaging cameras.
In direct ToF depth mapping systems that are known in the art, the data acquisition rate is limited by the distance to the target that is to be mapped: The light source emits a bright pulse of radiation, and the system then waits for a time no less than the time of flight of the photons to the target and back to the receiver before the next pulse can be fired.
In other words, the system waits a fixed amount of time, which corresponds to the maximum working distance, i.e., the maximum distance to a target object that could be measured by the system.
If the pulse repetition period were to be less than the time of flight, the receiver might not be able to distinguish between the echoes of successive pulses, leading to problems of aliasing in the ToF measurements.
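To make the aliasing constraint concrete, here is a hypothetical sketch (the function name and numbers are illustrative, not from the patent): the maximum unambiguous working distance is set directly by the pulse repetition period.

```python
# Illustrative sketch: the pulse repetition period caps the maximum
# unambiguous range. Echoes arriving later than one period would be
# confused with echoes of the *next* pulse (aliasing).
C = 299_792_458.0  # speed of light in m/s

def max_unambiguous_range(pulse_period_s: float) -> float:
    """Maximum working distance = c * T / 2, where T is the
    pulse repetition period."""
    return C * pulse_period_s / 2.0

# A 100 ns repetition period limits unambiguous ranging to ~15 m.
print(max_unambiguous_range(100e-9))  # ≈ 14.99 m
```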
Considering that laser sources used in such systems typically have pulse widths less than 1 ns, while the round-trip time of flight (in air) grows at roughly 6.7 ns per meter of distance to the target, the limitation on the pulse rate means that the light source operates at a very low duty cycle. Therefore, the light source may have to emit very intense pulses in order to achieve good resolution and signal-to-noise ratio with acceptable measurement throughput.
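Plugging in the figures above gives a feel for how low that duty cycle is. A hypothetical worked example (the numbers are illustrative, not from the patent):

```python
# Illustrative sketch: duty cycle of a pulsed ToF laser that fires
# one pulse per round trip to the target (the minimum safe period
# to avoid aliasing).
C = 299_792_458.0  # speed of light in m/s

def duty_cycle(pulse_width_s: float, target_distance_m: float) -> float:
    """Fraction of time the laser is emitting, with the pulse period
    equal to the round-trip time 2*d/c to the target."""
    round_trip_s = 2.0 * target_distance_m / C
    return pulse_width_s / round_trip_s

# A 1 ns pulse ranging a 5 m target: the laser is on only ~3% of
# the time, which is why each pulse must be intense.
print(duty_cycle(1e-9, 5.0))  # ≈ 0.03
```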
Apple's patent FIG. 1 below is a schematic side view of a depth mapping device; FIGS. 5A and 5B are schematic representations of a scene that is mapped by a depth mapping device, illustrating different, respective operational modes of an emitter array in the device.
Apple's patent application number 20200309955 that was published today by the U.S. Patent Office was filed back in June 2020. It's a technical patent to be sure, and you could dive into it here. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Another patent report on ToF cameras was posted in August 2020 titled "Apple Patent Reveals some of the Details behind the iPhone 12's ToF Depth-Camera System."