Apple Introduces a 3D Component to Assist Head and Eye Tracking in Future Desktops
In September I posted a granted patent report covering Apple's gaze and pointing-gesture technology for a possible future iMac. The underlying technology was developed by Apple's PrimeSense team in Israel, the company behind Microsoft's Kinect. Today the US Patent & Trademark Office published a new patent application from Apple's PrimeSense team that takes this technology to the next level by adding 3D mapping.
Apple's CEO reiterated this week that Apple has great desktops on its road map. Microsoft went all-out this fall to dazzle the market with its all-new Surface Studio desktop, which gives artists and architects alike a large touch display that slants down so they can draw at a natural angle.
Apple is now under the gun to add some pizzazz to the next iMac. While there are many ways to accomplish this, one idea Apple could introduce is head and eye tracking based on a 3D version of its iSight camera that takes advantage of today's invention. A lot of engineering activity has centered on this invention lately, which may indicate that it's getting closer to viability.
The foundation of the invention covered in the granted patent doesn't change. The new 3D twist is simply layered on top, and such additions always show up in Apple's patent claims. Below are the claims that cover the new 3D mapping and interactions as they relate to gaze controls, followed by a rough code sketch of the claimed flow:
1. A method, comprising: receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system; extracting, from the 3D maps, 3D coordinates of a head of the user; identifying, based on the 3D coordinates of the head, a direction of a gaze performed by the user; identifying an interactive item presented in the direction of the gaze on a display coupled to the computerized system; extracting from the 3D maps an indication that the user is moving a limb of the body in a specific direction; and repositioning the identified interactive item on the display responsively to the indication.
2. The method according to claim 1, and comprising receiving a two dimensional (2D) image of the user, the image including an eye of the user, wherein identifying the direction of the gaze comprises finding the direction of the gaze based on the 3D coordinates of the head and the image of the eye.
3. The method according to claim 2, wherein identifying the direction of the gaze comprises analyzing light reflected off an element of the eye.
4. The method according to claim 2, wherein extracting the 3D coordinates of the head comprises identifying, from the 2D image, a first position of the head along a horizontal axis and a vertical axis, and segmenting the 3D maps in order to identify, from the 3D maps, a second position of the head along a depth axis.
5. The method according to claim 1, wherein extracting the 3D coordinates of the head comprises segmenting the 3D maps in order to extract a position of the head along a horizontal axis, a vertical axis, and a depth axis.
6. The method according to claim 1, and comprising changing a state of the interactive item.
7. The method according to claim 6, wherein the state of the interactive item is changed responsively to the gaze.
8. The method according to claim 6, wherein the state of the interactive item is changed responsively to a vocal command received from the user.
9. The method according to claim 6, wherein changing the state comprises directing input received from the user to the interactive item.
10. The method according to claim 6, and comprising identifying a target point on the display in the direction of the gaze, and calculating a calibration coefficient based on the proximity of the target point to the interactive item.
11. An apparatus, comprising: a sensing device configured to receive a sequence of three dimensional (3D) maps of at least a part of a body of a user, including a head of the user; and a computer coupled to the sensing device and configured to extract, from the 3D maps, 3D coordinates of the head of the user and to identify, based on the 3D coordinates of the head, a direction of a gaze performed by the user, to identify an interactive item presented in the direction of the gaze on a display coupled to the computer, to extract from the 3D maps an indication that the user is moving a limb of the body in a specific direction, and to reposition the identified interactive item on the display responsively to the indication.
12. The apparatus according to claim 11, wherein the sensing device is configured to receive a two dimensional (2D) image of the user, the 2D image including an eye of the user, and wherein the computer is configured to identify the direction of the gaze using the image of the eye together with the 3D coordinates of the head.
13. The apparatus according to claim 12, wherein the computer is configured to identify the direction of the gaze by analyzing light reflected off an element of the eye.
14. The apparatus according to claim 12, wherein the computer is configured to extract the 3D coordinates of the head by identifying, from the 2D image, a first position of the head along a horizontal axis and a vertical axis, and segmenting the 3D maps in order to identify, from the 3D maps, a second position of the head along a depth axis.
15. The apparatus according to claim 11, wherein the computer is configured to extract the 3D coordinates of the head by segmenting the 3D maps in order to extract a position of the head along a horizontal axis, a vertical axis, and a depth axis.
16. The apparatus according to claim 11, wherein the computer is configured to change a state of the interactive item.
17. The apparatus according to claim 16, wherein the computer is configured to change the state of the interactive item responsively to the gaze.
18. The apparatus according to claim 16, wherein the computer is configured to change the state of the interactive item responsively to a vocal command received from the user.
19. The apparatus according to claim 16, wherein the computer is configured to change the state by directing input received from the user to the interactive item.
20. A computer software product comprising a non-transitory computer-readable medium, in which program instructions are stored, which instructions, when read by a computer, cause the computer to receive a sequence of three-dimensional (3D) maps of at least a part of a body of a user of the computer, including a head of the user, to extract, from the 3D maps, 3D coordinates of the head of the user, to identify, based on the 3D coordinates of the head, a direction of a gaze performed by the user, to identify an interactive item presented in the direction of the gaze on a display coupled to the computer, to extract from the 3D maps an indication that the user is moving a limb of the body in a specific direction, and to reposition the identified interactive item on the display responsively to the indication.
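To make the claimed flow easier to follow, here is a minimal, hypothetical Swift sketch of the steps in claims 1 and 11: receive a 3D map of the user's body, extract the head's 3D coordinates, estimate a gaze point on the display, identify the interactive item nearest that point, detect the direction a limb is moving, and reposition the item accordingly. Every type and function name here (Point3D, BodyMap3D, InteractiveItem, gazePoint, identifyItem, reposition) is invented purely for illustration; none of this is Apple's code or an actual Apple API, and the gaze projection is a crude stand-in for the head-plus-eye estimation described in claims 2 through 4.

```swift
import Foundation

/// A single 3D point along the horizontal, vertical, and depth axes (claims 4-5).
struct Point3D {
    var x: Double   // horizontal axis
    var y: Double   // vertical axis
    var z: Double   // depth axis, recovered from the 3D map
}

/// One frame of the "3D map" of the user's body referred to in claim 1.
struct BodyMap3D {
    var headPosition: Point3D
    var handPosition: Point3D
}

/// An on-screen item that the gaze can select and a limb gesture can move.
struct InteractiveItem {
    var id: String
    var screenPosition: (x: Double, y: Double)
}

/// Estimates where the gaze lands on the display from the head coordinates.
/// A real system would refine this with a 2D eye image (claims 2-3); this
/// hypothetical version simply projects the head position onto the display.
func gazePoint(onDisplayFrom head: Point3D) -> (x: Double, y: Double) {
    let scale = 1.0 / max(head.z, 0.1)   // nearer heads map to smaller offsets
    return (x: head.x * scale, y: head.y * scale)
}

/// Finds the interactive item closest to the gaze point (claim 1, step 4).
func identifyItem(at gaze: (x: Double, y: Double),
                  among items: [InteractiveItem]) -> InteractiveItem? {
    items.min { lhs, rhs in
        let dl = hypot(lhs.screenPosition.x - gaze.x, lhs.screenPosition.y - gaze.y)
        let dr = hypot(rhs.screenPosition.x - gaze.x, rhs.screenPosition.y - gaze.y)
        return dl < dr
    }
}

/// Detects limb motion between two frames and repositions the gazed-at item
/// in the same direction (claim 1, final two steps).
func reposition(_ item: inout InteractiveItem,
                previous: BodyMap3D, current: BodyMap3D) {
    item.screenPosition.x += current.handPosition.x - previous.handPosition.x
    item.screenPosition.y += current.handPosition.y - previous.handPosition.y
}

// Usage with two made-up frames: the gaze selects the nearest icon, then the
// hand's movement between frames drags that icon across the display.
let frame1 = BodyMap3D(headPosition: Point3D(x: 0.0, y: 0.2, z: 1.0),
                       handPosition: Point3D(x: 0.3, y: -0.1, z: 0.8))
let frame2 = BodyMap3D(headPosition: Point3D(x: 0.0, y: 0.2, z: 1.0),
                       handPosition: Point3D(x: 0.5, y: -0.1, z: 0.8))
var icons = [InteractiveItem(id: "Mail", screenPosition: (x: 0.1, y: 0.2)),
             InteractiveItem(id: "Safari", screenPosition: (x: 0.6, y: 0.4))]

let gaze = gazePoint(onDisplayFrom: frame1.headPosition)
if var selected = identifyItem(at: gaze, among: icons),
   let index = icons.firstIndex(where: { $0.id == selected.id }) {
    reposition(&selected, previous: frame1, current: frame2)
    icons[index] = selected
    print("Moved \(selected.id) to \(selected.screenPosition)")
}
```

In a shipping system the gaze point would come from combining the 2D eye image with the 3D head coordinates (claims 2 through 4) rather than from this simple projection, and the calibration coefficient of claim 10 would correct for per-user offsets between the estimated target point and the item actually being looked at.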
Apple only filed for this invention (patent application 20160370860) back in September 2016. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details.