Apple Invents Enhanced Depth Mapping using Visual Inertial Odometry for iPhone and/or a Tabletop Device
Today the US Patent & Trademark Office published a patent application from Apple that relates to systems and methods for depth mapping, and particularly to depth mapping using time-of-flight sensing, for an iPhone or a tabletop device such as a future Apple TV box.
Existing and emerging consumer applications have created an increasing need for real-time three-dimensional (3D) imagers. These imaging devices, also known as depth sensors or depth mappers, enable the remote measurement of distance (and often intensity) to each point in a target scene, referred to as target scene depth, by illuminating the target scene with an optical beam and analyzing the reflected optical signal. Some systems also capture a color image of the target scene and register the depth map with the color image.
A commonly used technique to determine the distance to each point in the target scene involves transmitting one or more pulsed optical beams toward the target scene and then measuring the round-trip time, i.e., the time of flight (ToF), taken by the optical beams as they travel from the source to the target scene and back to a detector array adjacent to the source.
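In rough terms, the ToF-to-depth conversion works like this. Below is a minimal Swift sketch of the principle; the constant and function names are ours for illustration and do not come from the patent:

```swift
// The photon travels to the target and back, so the one-way
// distance is half the round-trip path length.
let speedOfLight = 299_792_458.0  // meters per second

/// Converts a measured round-trip time (in seconds) into a distance (in meters).
func depthFromRoundTrip(time: Double) -> Double {
    return speedOfLight * time / 2.0
}

// A round trip of roughly 6.67 nanoseconds corresponds to a
// point about 1 meter from the sensor.
print(depthFromRoundTrip(time: 6.67e-9))  // ≈ 1.0
```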
Some ToF systems measure photon arrival times using single-photon avalanche diodes (SPADs), also known as Geiger-mode avalanche photodiodes (GAPDs), possibly arranged in an array of SPAD sensing elements. In some systems, a bias control circuit sets the bias voltages of different SPADs in the array to different, respective values.
Apple's invention provides improved depth mapping systems and methods for operating such systems.
The invention covers an imaging apparatus that includes a radiation source configured to emit pulsed beams of optical radiation toward a target scene, along with an array of sensing elements configured to output signals indicative of the respective times of incidence of photons on those elements.
Objective optics form a first image of the target scene on the array of sensing elements, while a separate image sensor captures a second image of the same scene. Processing and control circuitry processes the second image to detect relative motion between at least one object in the target scene and the apparatus. Responsively to the signals from the array, the circuitry constructs histograms of the times of incidence of the photons on the sensing elements, adjusts those histograms to account for the detected relative motion, and generates a depth map of the target scene from the adjusted histograms.
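To make that concrete, here is a simplified Swift sketch of the per-pixel histogramming the claim describes: photon arrival times are binned over many pulses, the histogram is shifted to undo the depth change implied by the detected motion, and the peak bin yields the depth. The bin width, the whole-bin shift model, and all names below are our assumptions, not details from the filing:

```swift
let binWidth = 1e-9            // 1 ns time bins (assumed)
let binCount = 256             // histogram length (assumed)
let speedOfLight = 299_792_458.0

/// Accumulates photon arrival times (seconds after pulse emission) into bins.
func buildHistogram(arrivalTimes: [Double]) -> [Int] {
    var histogram = [Int](repeating: 0, count: binCount)
    for t in arrivalTimes {
        let bin = Int(t / binWidth)
        if bin >= 0 && bin < binCount { histogram[bin] += 1 }
    }
    return histogram
}

/// Shifts the histogram by a whole number of bins to compensate for the
/// depth change implied by the detected relative motion.
func adjustHistogram(_ histogram: [Int], shiftBins: Int) -> [Int] {
    var adjusted = [Int](repeating: 0, count: histogram.count)
    for (i, count) in histogram.enumerated() {
        let j = i + shiftBins
        if j >= 0 && j < adjusted.count { adjusted[j] += count }
    }
    return adjusted
}

/// Takes the depth for one pixel from the peak of its adjusted histogram.
func depth(from histogram: [Int]) -> Double? {
    guard let peak = histogram.indices.max(by: { histogram[$0] < histogram[$1] })
    else { return nil }
    return speedOfLight * Double(peak) * binWidth / 2.0
}
```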
In some embodiments, the relative motion is due to a movement of the apparatus itself, and the processing and control circuitry filters the histograms to compensate for that movement. In a disclosed embodiment, the apparatus includes an inertial sensor that senses the movement of the apparatus and outputs an indication of it; the processing and control circuitry applies this indication, in conjunction with its processing of the second image, in detecting the movement of the apparatus.
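The filing doesn't spell out how the inertial indication and the image-based estimate are combined, but a simple complementary filter is one common way to fuse a fast, drift-prone IMU signal with a slower, drift-free camera-derived one. A hedged Swift sketch, with an assumed weighting:

```swift
/// Blends an IMU-derived displacement (responsive but drift-prone) with a
/// camera-derived displacement (slower but drift-free) along one axis.
/// The 0.8 weight is an illustrative assumption, not a value from the patent.
func fuseMotion(imuDisplacement: Double,
                imageDisplacement: Double,
                imuWeight: Double = 0.8) -> Double {
    return imuWeight * imuDisplacement + (1.0 - imuWeight) * imageDisplacement
}

// Example: the IMU and the image pipeline roughly agree on a small shift.
let motion = fuseMotion(imuDisplacement: 0.012, imageDisplacement: 0.010)
```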
Apple's patent FIG. 1 below is a schematic, pictorial illustration of a depth mapping system, while FIG. 2 is a schematic side view of the depth mapping system of FIG. 1.
More specifically, in the illustrated scenario of FIG. 1, an imaging device (#22) generates a depth map of a target scene (#24) within a field of view (#26) of the device. In this example, the target scene contains moving objects, such as a human (#28), as well as stationary objects, including a bureau (#30), a wall (#32), a picture (#34), a window (#36) and a rug (#38). Although the imaging device is shown in FIG. 1 as a tabletop unit, it may alternatively be a mobile or handheld device, and may thus move as well during acquisition of the depth map. The tabletop unit could refer to a future Apple TV device.
The imaging device measures depth values by directing beams of optical radiation toward points in the target scene and measuring the times of arrival of photons reflected from each point. The front plane of the device is taken, for the sake of convenience, to be the X-Y plane, and depth coordinates of points in the target scene are measured along the Z-axis. The depth map generated by the imaging device thus represents the target scene as a grid of points in the X-Y plane, with a depth coordinate indicating the distance measured to each point.
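That grid-of-points representation is straightforward to model. A minimal Swift sketch, with illustrative type and field names of our own choosing:

```swift
/// A depth map as described above: a grid of points in the X-Y plane,
/// each carrying a measured Z distance.
struct DepthMap {
    let width: Int
    let height: Int
    var depths: [Double]  // row-major, in meters, one Z value per grid point

    /// Returns the measured distance at grid point (x, y).
    func depth(x: Int, y: Int) -> Double {
        return depths[y * width + x]
    }
}
```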
Apple's patent FIG. 3 below is a flow chart that schematically illustrates a method for processing ToF information.
To review the deeper details behind Apple's patent application number 20210270969, click here. Apple filed for another patent on depth mapping titled "Calibration of a depth sensing array using color image data," back in July that shared some of the same patent figures.
The four inventors listed on Apple's patent application are from Apple's Israeli team. One of them is Moshe Laifenfeld: Lead, Digital Signal Processing, Algorithms.
Considering that this is a patent application, the timing of such a product to market is unknown at this time.