Major Apple Patents Reveal Work on Next-Gen LiDAR Systems using VCSELs for 3D Sensing Systems and Headsets
On Tuesday Patently Apple posted two patent reports covering inventions from Google and LG. Google's patent win covered using a depth camera, like Apple's TrueDepth camera, for gesture recognition that interprets hand-gesture commands to control features on future Pixel phones, tablets, Chromebooks and/or slate hybrids. The system will use vertical-cavity surface-emitting lasers, or VCSELs, which Apple already uses for Face ID and more.
The LG patent was about a 16-lens camera that could provide a superior camera experience, with each lens capturing a slightly different angle of the same shot. It's also designed to let a user take the best body shot and combine it with a different head shot, a built-in Photoshop-like feature.
At the end of the report I noted that as the smartphone camera wars escalate, expect to see the leading smartphone OEMs go all out to advance their respective camera systems and give users plenty of interesting choices. One crazy, off-the-chart camera idea could be enough to win switchers from one brand to another.
Today the US Patent &amp; Trademark Office published patent applications from Apple that relate to improving depth cameras, more specifically to opto-electronic devices, and particularly to light detection and ranging (LiDAR) sensors.
Earlier this year a company by the name of TriLumina developed lower-cost, reliable LiDAR systems using VCSELs that could be used in mixed reality headsets, 3D sensing applications, gesture recognition systems, gaming, robotics and automotive, as their graphic below reveals.
The combination of LiDAR and VCSEL technologies, as noted by TriLumina, is the subject of two new Apple patents published today by the U.S. Patent and Trademark Office.
Apple notes that existing and emerging consumer applications have created an increasing need for real-time three-dimensional imagers. These imaging devices, also commonly known as light detection and ranging (LiDAR) sensors, enable the remote measurement of distance (and often intensity) of each point on a target scene--so-called target scene depth--by illuminating the target scene with an optical beam and analyzing the reflected optical signal. A commonly used technique to determine the distance to each point on the target scene involves sending an optical beam towards the target scene, followed by the measurement of the round-trip time, i.e. time-of-flight (ToF), taken by the optical beam as it travels from the source to the target scene and back to a detector adjacent to the source.
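The arithmetic behind that round-trip measurement is simple: distance is the speed of light multiplied by the time-of-flight, divided by two. Below is a minimal sketch of that conversion (the function name and example numbers are illustrative, not taken from Apple's patent).

```python
# Illustrative sketch of the time-of-flight principle described above
# (not code from Apple's patent). Distance = (speed of light x round-trip time) / 2.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_to_distance_m(round_trip_time_s: float) -> float:
    """Convert a measured round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# Example: a pulse that returns after roughly 6.67 nanoseconds corresponds to ~1 meter.
print(tof_to_distance_m(6.67e-9))  # ~1.0 m
```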
A suitable detector for ToF-based LiDAR is provided by a single-photon avalanche diode (SPAD) array. SPADs, also known as Geiger-mode avalanche photodiodes (GAPDs), are detectors capable of capturing individual photons with very high time-of-arrival resolution, of the order of a few tens of picoseconds. They may be fabricated in dedicated semiconductor processes or in standard CMOS technologies.
Arrays of SPAD sensors, fabricated on a single chip, have been used experimentally in 3D imaging cameras.
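While Apple's patent filing doesn't spell out the pixel-level processing, a common way SPAD-based ToF sensors are read out is to accumulate photon arrival times into a histogram over many laser pulses and take the histogram peak as the round-trip time. Here's a rough sketch of that generic technique, with an assumed 50-picosecond bin width in line with the tens-of-picoseconds resolution mentioned above; the function and values are illustrative only.

```python
# Illustrative sketch of a common SPAD readout technique (not Apple's specific
# implementation): photon arrival times are binned into a histogram over many
# laser pulses, and the most-populated bin gives the round-trip time for that pixel.

from collections import Counter

BIN_WIDTH_S = 50e-12  # assumed 50 ps time bins

def estimate_round_trip_time_s(photon_arrival_times_s: list[float]) -> float:
    """Histogram photon arrival times and return the center of the peak bin."""
    histogram = Counter(int(t / BIN_WIDTH_S) for t in photon_arrival_times_s)
    peak_bin, _count = histogram.most_common(1)[0]
    return (peak_bin + 0.5) * BIN_WIDTH_S

# Example: arrivals clustered near 6.7 ns (a target about 1 m away) plus stray noise photons.
arrivals = [6.68e-9, 6.71e-9, 6.69e-9, 6.70e-9, 1.2e-9, 9.4e-9]
print(estimate_round_trip_time_s(arrivals))  # ~6.7e-9 s
```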
Apple's invention is focused on improving LiDAR sensors and methods of their use.
Apple notes that in some embodiments of the present invention, the target scene is illuminated and scanned by either one laser beam or by multiple beams. In some embodiments utilizing multiple beams, these beams are generated by splitting a laser beam using diffractive optical elements, prisms, beamsplitters, or other optical elements that are known in the art.
In other embodiments, multiple beams are generated using several discrete laser light sources. In some of these embodiments, the multiple beams are generated using a monolithic laser array, such as an array of VCSELs or VECSELs.
Apple's patent FIG. 1 below shows us a schematic of a LiDAR system 18, in accordance with an embodiment of the invention. The beam or beams from a laser light source 20, comprising one or more pulsed lasers, are directed to a target scene 22 by a dual-axis beam-steering device 24, forming and scanning illumination spots 26 over the target scene. (The term "light" is used herein to refer to any sort of optical radiation, including radiation in the visible, infrared, and ultraviolet ranges.) Beam-steering devices can comprise, for example, a scanning mirror, or any other suitable type of optical deflector or scanner that is known in the art. Illumination spots 26 are imaged by collection optics 27 onto a two-dimensional detector array 28, comprising single-photon, time-sensitive sensing elements, such as SPADs.
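To see how the pieces of FIG. 1 fit together, here's a purely illustrative sketch of the scanning flow: step the dual-axis beam-steering device over a grid of spots and convert the SPAD measurement at each position into a depth value. The helper function, scan pattern and returned numbers are assumptions made for illustration, not Apple's actual design or interfaces.

```python
# Purely illustrative sketch of the FIG. 1 architecture: a laser beam is steered
# across the scene on two axes, and for each illumination spot the SPAD array
# reports a round-trip time that is converted into a depth value.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def measure_round_trip_time_s(x_step: int, y_step: int) -> float:
    """Placeholder for the SPAD-array measurement at one beam-steering position."""
    return 6.67e-9  # pretend every spot is ~1 m away

def scan_depth_map(x_steps: int, y_steps: int) -> list[list[float]]:
    """Scan the beam over an x_steps-by-y_steps grid and build a depth map in meters."""
    depth_map = []
    for y in range(y_steps):
        row = []
        for x in range(x_steps):
            round_trip = measure_round_trip_time_s(x, y)
            row.append(SPEED_OF_LIGHT_M_PER_S * round_trip / 2.0)
        depth_map.append(row)
    return depth_map

print(scan_depth_map(4, 3))  # 3 rows x 4 columns of ~1.0 m depths
```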
Both chips 52 and 54 in patent FIG. 3 may be produced from silicon wafers using well-known CMOS fabrication processes, based on SPAD sensor designs that are known in the art, along with accompanying bias control and processing circuits.
Most people think of LiDAR systems as being solely associated with autonomous vehicles and/or mapping landscapes. So it was helpful to find information from TriLumina showing how the combination of LiDAR and VCSELs could be used in next-gen applications, including consumer-oriented devices like mixed reality headsets, time-of-flight cameras and virtual reality applications.
Last month Patently Apple posted a report titled "Apple has shown great interest with In-Air Gesturing via Multiple Patent Applications and their Pursuit of Leap Motion." Next-gen mixed reality headsets will require depth cameras for gesture recognition, a product category that TriLumina said would use chips combining LiDAR and VCSEL technologies.
Apple's patent applications, both titled "Multi-range time of flight sensing" (one published under number 20180341009), were filed back in Q2 2017. Considering that these are patent applications, the timing of such a product to market is unknown at this time.
A look at the inventors behind these patents illustrates that these are serious inventions, likely part of a very large project at Apple. One of the inventors is from Apple's PrimeSense team out of Israel that was behind Apple's TrueDepth camera used for Face ID along with the Animoji and Memoji applications.
Apple Inventors
Alexander Shpunt: Architect. Shpunt came to Apple from PrimeSense, where he was CTO.
Gennadiy Agranov: Director, Imaging Sensors Technology
Matt Waldon: Director, Depth Hardware, Camera Hardware Design. Waldon came to Apple via Lockheed Martin Space Systems Company, where he was the technology lead for infrared camera systems.
Thierry Oggier: Apple Depth Sensing. Oggier's experience includes 3D imaging, camera development, RGB and depth sensors, camera technologies, pixel technologies, image processing, point cloud processing, camera module design, triangulation, stereo and time-of-flight.
Cristiano Niclass: Hardware Development Manager. Specialties: Single-photon avalanche diodes in CMOS, 3D imagers and rangefinders based on time-of-flight, ASIC and system level model-based design.