Apple, via its 2013 acquisition of the Israeli company PrimeSense, used that 3D mapping technology for Face ID on iPhone X. A patent granted in 2015, originally filed by PrimeSense, illustrated a projector system for a much larger canvas than Face ID. PrimeSense technology was also behind the original Xbox Kinect motion controller. Over the years, Apple has refined this 3D depth mapping technology, with one of the patents covered in our 2021 report titled "Apple Invents Enhanced Depth Mapping using Visual Inertial Odometry for iPhone and/or a Tabletop Device."
Today the US Patent & Trademark Office published a patent application from Apple titled "Multi-sensor Depth Mapping" that once again advances the company's 3D mapping technology. Apple states that "The present invention relates generally to systems and methods for depth mapping, and particularly to improving the accuracy of depth maps." More specifically, Apple notes that the invention aims to provide accurate depth mapping beyond the short distances covered by disparity-based systems such as the one behind Face ID.
According to Apple, ToF-based depth mapping systems are more accurate at longer ranges and are less vulnerable to mechanical and thermal effects than disparity-based depth measurements. Short-range ToF measurements, however, can be strongly affected by small deviations between the time of photon transmission and the time-sensitive signals that are generated in response to photon arrival. Furthermore, whereas disparity-based depth mapping systems can use standard image sensors with small pitch and high transverse resolution, ToF systems typically require special-purpose radiation sources and range sensors, with inherently lower resolution.
Embodiments of the present invention provide depth mapping systems that combine the high transverse resolution of disparity-based depth sensors with the high depth accuracy of ToF-based range sensors.
These systems use the accurate depth measurements made by a ToF sensor in generating a disparity correction function, which is then applied in improving the accuracy of disparity-based depth measurements made by a patterned light or stereoscopic depth sensor. This disparity correction is particularly significant at longer measurement distances, as well as in compensating for loss of calibration due to factors such as mechanical shocks and environmental conditions. In some embodiments, the disparity-corrected depth measurements made by the disparity-based depth sensor are also used in computing a range correction function that can be used to improve the accuracy of the longer-range depth measurements provided by the ToF sensor.
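The correction loop described above can be sketched in a few lines. This is a hypothetical illustration, not Apple's actual algorithm: sparse but accurate ToF ranges at a matrix of locations serve as references for fitting a disparity correction function, which is then applied to the dense disparity measurements from the patterned-light sensor. The focal length, baseline, and polynomial correction model are all assumptions for the sketch.

```python
# Hypothetical sketch (not from the patent): use sparse ToF depths to fit a
# disparity correction function for a disparity-based depth sensor.
import numpy as np

FOCAL_PX = 600.0    # assumed focal length in pixels
BASELINE_M = 0.05   # assumed projector-to-camera baseline in meters

def disparity_to_depth(disparity):
    """Pinhole/triangulation model: depth = f * b / d."""
    return FOCAL_PX * BASELINE_M / disparity

def depth_to_disparity(depth):
    return FOCAL_PX * BASELINE_M / depth

def fit_disparity_correction(measured_disp, tof_depth, degree=2):
    """Fit a polynomial mapping measured disparity -> corrected disparity,
    treating the ToF depths at the sparse sample locations as ground truth."""
    true_disp = depth_to_disparity(tof_depth)
    coeffs = np.polyfit(measured_disp, true_disp, degree)
    return np.poly1d(coeffs)

# Example: the disparity sensor reads systematically low, e.g. after a
# mechanical shock shifted its calibration by a constant disparity bias.
true_depth = np.linspace(0.5, 5.0, 20)   # meters
true_disp = depth_to_disparity(true_depth)
measured_disp = true_disp - 0.4          # simulated calibration error

correction = fit_disparity_correction(measured_disp, true_depth)
corrected_depth = disparity_to_depth(correction(measured_disp))

# Residual error after applying the fitted correction is near zero here.
print(np.max(np.abs(corrected_depth - true_depth)))
```

Note that a fixed disparity offset produces a depth error that grows with distance (since depth is inversely proportional to disparity), which is why the patent stresses that the correction matters most at longer ranges.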
In the disclosed embodiments, an illumination assembly directs modulated optical radiation toward a target scene. For purposes of ToF sensing, the radiation is temporally modulated, for example in the form of short pulses for direct ToF sensing or carrier wave modulation for indirect ToF sensing. In addition, the radiation may be spatially modulated to project a pattern of structured light for disparity-based sensing. Based on the temporal modulation, a range sensor senses respective times of flight of photons reflected from a matrix of locations disposed across the target scene. For disparity-based sensing, a camera captures a two-dimensional image of the target scene.
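The two temporal-modulation schemes mentioned above map onto standard textbook range formulas (these are general ToF relations, not equations taken from the patent): direct ToF converts a pulse's round-trip time into range, while indirect ToF converts the phase shift of a modulated carrier into range.

```python
# Textbook ToF range formulas illustrating the two modulation schemes
# described above (not code from Apple's patent).
import math

C = 299_792_458.0  # speed of light, m/s

def direct_tof_range(round_trip_s):
    """Direct ToF: range = c * t / 2, since the photon travels out and back."""
    return C * round_trip_s / 2.0

def indirect_tof_range(phase_rad, mod_freq_hz):
    """Indirect ToF: range = c * phase / (4 * pi * f_mod),
    unambiguous only up to c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# A 20 ns round trip corresponds to roughly 3 m of range:
print(direct_tof_range(20e-9))
# A pi/2 phase shift at 100 MHz modulation corresponds to roughly 0.375 m:
print(indirect_tof_range(math.pi / 2, 100e6))
```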
Apple's patent FIG. 1 below is a schematic, pictorial illustration of a depth mapping system; FIG. 2 is a schematic side view of the depth mapping system of FIG. 1.
For more details, review Apple's patent application US 20220364849 A1.
Apple's patent application doesn't identify which products its advanced 3D depth mapping is intended for. While the invention could apply to a future iPhone, the device illustrated in patent FIG. 1 isn't an iPhone. Could Apple be hinting at a future high-end Apple TV box supporting Apple Fitness+ and interactive gaming, or something completely new?
The inventors listed on the patent application include:
Shay Yosub: Engineering Manager, Depth Hardware
Assaf Avraham: System Manager (from PrimeSense)
Joe Nawasra, Ph.D.: Engineering Manager, Camera Hardware Design
Jonathan Pokrass: Algorithms Manager
Moshe Laifenfeld: Depth Sensing Algorithms Manager
Niv Gilboa: Electro-Optics Hardware Manager
Tal Kaitz: Algorithm Team Leader
Ronen Akerman: System Engineering Team Leader
Naveh Levanon: Image Processing Engineer