Apple was Granted Three Patents today relating to Hand and Eye Tracking Systems for Future Macs and a Mixed Reality Headset
Today, the U.S. Patent and Trademark Office officially published a series of three granted patents relating to eye and hand tracking technologies for Apple Inc. All three cover different aspects of the technology. The first patent covers 3D mapping technology that allows for hand and eye tracking on a desktop or future devices. The other two patents focus more on the technology used in Apple's future head mounted devices like a VR headset or glasses.
3D Sensing System Tracks Hand & Eye Commands
When Apple acquired the Israeli firm PrimeSense in 2013, it was initially to develop Face ID for the iPhone X. Beyond Face ID, though, Apple gained a great deal of intellectual property related to hand and eye gaze controls, technology that Microsoft first adopted for its Xbox Kinect device. Many of Apple's eye tracking patents stem from PrimeSense's technologies.
Today's granted patent incorporates 3D mapping technology dating back to 2012, and it's Apple's first granted patent for this invention. The patent relates to user interfaces for computerized systems, and specifically to user interfaces that are based on three-dimensional sensing.
In accordance with an embodiment of the present invention, the apparatus includes a sensing device configured to receive a sequence of three-dimensional (3D) maps containing at least a physical surface, one or more physical objects positioned on the physical surface, and a hand of a user positioned in proximity to the physical surface; a projector; and a computer coupled to the sensing device and the projector and configured to analyze the 3D maps to detect a gesture performed by the user, to present, using the projector, an animation onto the physical surface in response to the gesture, and to incorporate the one or more physical objects into the animation.
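To make the claim concrete, here's a minimal sketch of what that sensing loop could look like. Everything in it is an illustrative assumption rather than anything from Apple's patent: the Frame3D type, the pre-segmented hand points, and the hover/touch thresholds are all hypothetical, and a real system would first segment the hand and fit the surface plane from the raw 3D maps.

```swift
import simd

// Hypothetical per-frame output of the sensing device: a fitted surface
// plane plus the 3D points already segmented as belonging to the hand.
struct Frame3D {
    let surfaceHeight: Float        // y of the fitted physical-surface plane
    let handPoints: [simd_float3]   // 3D points segmented as the user's hand
}

// Height of the hand's lowest point above the surface in one frame.
func handClearance(_ frame: Frame3D) -> Float {
    let lowest = frame.handPoints.map(\.y).min() ?? .infinity
    return lowest - frame.surfaceHeight
}

// Detect a "tap" gesture across a sequence of 3D maps: the hand descends
// from hovering above the surface to touching it.
func detectTap(frames: [Frame3D], hover: Float = 0.05, touch: Float = 0.01) -> Bool {
    var wasHovering = false
    for frame in frames {
        let clearance = handClearance(frame)
        if clearance > hover {
            wasHovering = true
        } else if wasHovering && clearance < touch {
            return true   // the computer would trigger the projected animation here
        }
    }
    return false
}
```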
Apple's patent FIG. 1 is a schematic, pictorial illustration of a computer system implementing a non-tactile three-dimensional (3D) user interface; FIG. 3 is a flow diagram that schematically illustrates a method of detecting gazes and gestures.
It's surprising that Apple has been sitting on this technology for so long without implementing it, considering how well it worked in Microsoft's Kinect. How Apple intends to use this technology in the future is still unknown. It could be incorporated into its upcoming first-gen head mounted VR headset, future Macs, iPads, Apple TV and perhaps a future vehicle.
For more details on this invention, review Apple's granted patent 11,169,611.
Position Estimation based on Eye Gaze
Apple's second eye-tracking related patent covers techniques for determining a position of an object in a computer-generated reality environment using an eye gaze. According to some embodiments, a user uses his or her eyes to interact with user interface objects displayed in a computer-generated reality environment using an electronic device. The techniques are advantageous for virtual reality and augmented reality devices and applications.
In some embodiments, the techniques include, at an electronic device having one or more cameras: determining a first direction of gaze for a first eye of a user detected via the one or more cameras; determining a second direction of gaze for a second eye of the user detected via the one or more cameras; determining a convergence point of the first direction of gaze and second direction of gaze; determining a distance between a position of the user and a position of an object in an environment based on the convergence point; and performing a task based on the determined distance between the position of the user and the position of the object in the environment.
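The geometric core here is finding where the two gaze rays meet. Since two rays in 3D rarely intersect exactly, a common approach, used below purely as an illustrative assumption and not as Apple's confirmed method, is to take the midpoint of the rays' closest approach:

```swift
import simd

// Hypothetical fused output of the eye cameras: each eye's gaze as a ray.
struct GazeRay {
    let origin: simd_float3      // eye position
    let direction: simd_float3   // normalized gaze direction
}

// Closest approach of two (possibly skew) gaze rays; the midpoint of the
// closest points serves as the convergence point.
func convergencePoint(left: GazeRay, right: GazeRay) -> simd_float3? {
    let u = left.direction, v = right.direction
    let w0 = left.origin - right.origin
    let a = simd_dot(u, u), b = simd_dot(u, v), c = simd_dot(v, v)
    let d = simd_dot(u, w0), e = simd_dot(v, w0)
    let denom = a * c - b * b
    guard abs(denom) > 1e-6 else { return nil }   // parallel rays: no convergence
    let s = (b * e - c * d) / denom
    let t = (a * e - b * d) / denom
    let pLeft = left.origin + s * u
    let pRight = right.origin + t * v
    return (pLeft + pRight) / 2
}

// Distance from the user (midpoint between the eyes) to the gazed-at object.
func gazeDistance(left: GazeRay, right: GazeRay) -> Float? {
    guard let point = convergencePoint(left: left, right: right) else { return nil }
    let userPosition = (left.origin + right.origin) / 2
    return simd_length(point - userPosition)
}
```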
Apple's patent FIGS. 7A-7C below illustrate exemplary techniques for generating a three-dimensional reconstruction of the physical environment using eye gaze detection.
Apple's patent FIGS. 7A-7C below illustrate an embodiment in which electronic device #250 is tracking the gaze of user #200 within the user's real environment #700 and generating a three-dimensional reconstruction of the real environment, which can optionally be displayed in computer-generated reality environment #710. Portion A of FIGS. 7A-7C depicts real environment 700 shown from an overhead view in which device 250 is positioned on user 200, and physical objects #704 and #706 are positioned on physical table #702. Portion B of FIGS. 7A-7C depicts electronic device 250, shown from the perspective of user 200. Device 250 is shown with display #255 and camera(s) #260, which are used to detect the user's eye gaze. Device 250 displays a representation of computer-generated reality environment 710 showing the portions of the real environment that have been reconstructed by the device for display in the computer-generated reality environment.
Apple's patent 11,170,521, titled "Position estimation based on eye gaze," is a very detailed patent that you can review further here.
Sensor Fusion Eye Tracking
Apple's third eye-tracking related patent, titled "Sensor fusion eye tracking," relates to remote eye tracking for head mounted devices, and in particular, to systems, methods, and devices for providing remote eye tracking for HMDs that move relative to the eye.
In some implementations, remote eye tracking determines gaze direction by identifying two locations in a 3D coordinate system along a gaze direction (e.g., a cornea center and an eyeball-rotation center) using a single active illumination source and depth information. In some implementations, a first location (e.g., the cornea center) is determined using a glint based on the active illumination source and depth information from a depth sensor, and the second location (e.g., the eyeball-rotation center) is determined using an RGB sensor (e.g., ambient light) and depth information.
In some implementations, a single sensor using the same active illumination source determines the first location (e.g., the cornea center) and the second location (e.g., eyeball-rotation center), and the single sensor determines both depth information and glint information. In some implementations, remote eye tracking is provided by mobile electronic devices.
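Whichever sensor configuration produces them, the result is the same pair of 3D points, and the gaze direction falls out of simple vector math. Here's a minimal sketch, with the EyeState type as a hypothetical stand-in for the fused sensor output:

```swift
import simd

// Hypothetical fused result of the two-point gaze model described above:
// the cornea center (from the glint + depth) and the eyeball-rotation
// center (from the RGB image + depth).
struct EyeState {
    let corneaCenter: simd_float3
    let eyeballRotationCenter: simd_float3
}

// The gaze direction is the line through the two centers, oriented
// outward from the eyeball-rotation center through the cornea.
func gazeDirection(for eye: EyeState) -> simd_float3 {
    simd_normalize(eye.corneaCenter - eye.eyeballRotationCenter)
}

// Usage: cast a ray from the cornea center along gazeDirection(for:)
// and intersect it with scene geometry to find what the user is viewing.
```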
In some implementations, remote eye tracking determines a head pose in a 3D coordinate system, determines a position (e.g., eyeball rotation center) of the eye in the 3D coordinate system, and then identifies a spatial relationship between the head pose and the position of the eye. In some implementations, the spatial relationship is uniquely determined (e.g., user specific transformation). In some implementations, the spatial relationship is determined in an enrollment mode of remote eye tracking. Subsequently, in some implementations of a tracking mode of remote eye tracking, only feature detection images (e.g., RGB camera images) and the spatial relationship are used to perform remote eye tracking. In some implementations of a tracking mode of remote eye tracking, the depth information and active illumination are turned off (e.g., reducing power consumption).
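As a rough illustration of that enrollment/tracking split, consider the sketch below. The types and function names are hypothetical; the point is that the user-specific spatial relationship is just the eye's offset expressed in head-local coordinates, which then lets the tracking mode run on an RGB-only head pose with depth and active illumination switched off:

```swift
import simd

// A head pose as a rigid transform (rotation + translation) in the
// 3D coordinate system: world = rotation * local + translation.
struct HeadPose {
    let rotation: simd_quatf
    let translation: simd_float3
}

// Enrollment mode: with depth and active illumination on, measure the
// eyeball-rotation center and store its offset in head-local coordinates.
// This offset is the user-specific spatial relationship.
func enroll(headPose: HeadPose, measuredEyeCenter: simd_float3) -> simd_float3 {
    headPose.rotation.inverse.act(measuredEyeCenter - headPose.translation)
}

// Tracking mode: recover the eye position from an RGB-derived head pose
// and the stored offset alone -- no depth sensing or glints required.
func trackEyeCenter(headPose: HeadPose, enrolledOffset: simd_float3) -> simd_float3 {
    headPose.rotation.act(enrolledOffset) + headPose.translation
}
```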
Apple's patent FIG. 3 below is a block diagram depicting a 3D representation of an eyeball to illustrate an example eyeball-modeling implementation that assists in remote eye tracking.
For greater detail, review Apple's granted patent 11,170,212.