
Apple has Won a Key Patent that relates to the Vision Pro's Eye Tracking System

(Cover image: Eye Tracking on Apple Vision Pro)

In 2017, Apple acquired a German company named SensoMotoric Instruments (SMI), a leader in eye tracking technology. By 2016, SMI had already created glasses that used eye tracking. While the technology seemed a bit like science fiction at the time, it's now one of the key technologies behind Apple's upcoming Vision Pro Spatial Computing headset. Today the U.S. Patent and Trademark Office officially granted Apple a patent relating to advanced eye tracking, and three of the four engineers listed as inventors are from the former SMI team now at Apple.

Sensor Fusion Eye Tracking

Apple's granted patent covers devices, systems, and methods that perform remote eye tracking for electronic devices that move relative to the user.

In some implementations, remote eye tracking determines gaze direction by identifying two locations in a 3D coordinate system along a gaze direction (e.g., a cornea center and an eyeball-rotation center) using a single active illumination source and depth information. In some implementations, a first location (e.g., the cornea center) is determined using a glint based on the active illumination source and depth information from a depth sensor, and the second location (e.g., the eyeball-rotation center) is determined using an RGB sensor (e.g., ambient light) and depth information. In some implementations, a single sensor using the same active illumination source determines the first location (e.g., the cornea center) and the second location (e.g., the eyeball-rotation center), and the single sensor determines both depth information and glint information. In some implementations, remote eye tracking is provided by mobile electronic devices.
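To make the two-location model concrete, here is a minimal Swift sketch (the patent publishes no code, so all types and names here are hypothetical) in which both locations have already been recovered in a shared 3D coordinate system and the gaze direction is simply their normalized difference:

```swift
import simd

// Minimal sketch of the two-location gaze model (hypothetical names).
struct EyeObservation {
    var corneaCenter: SIMD3<Float>          // first location: glint + depth
    var eyeballRotationCenter: SIMD3<Float> // second location: RGB features + depth
}

/// The gaze ray passes through both locations, so its direction is
/// their normalized difference.
func gazeDirection(for eye: EyeObservation) -> SIMD3<Float> {
    simd_normalize(eye.corneaCenter - eye.eyeballRotationCenter)
}
```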

In some implementations, remote eye tracking determines a head pose in a 3D coordinate system, determines a position (e.g., eyeball-rotation center) of the eye in the 3D coordinate system, and then identifies a spatial relationship between the head pose and the position of the eye. In some implementations, the spatial relationship is uniquely determined (e.g., a user-specific transformation). In some implementations, the spatial relationship is determined in an enrollment mode of remote eye tracking. Subsequently, in some implementations of a tracking mode of remote eye tracking, only feature detection images (e.g., RGB camera images) and the spatial relationship are used to perform remote eye tracking. In some implementations of a tracking mode of remote eye tracking, the depth information and active illumination are turned off (e.g., reducing power consumption).
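A hedged sketch of that enrollment/tracking split, assuming a head pose available from RGB face tracking and an eye center measured once with depth and illumination on (all names are illustrative, not Apple's):

```swift
import simd

// Hypothetical enrollment/tracking split. Enrollment measures the eye
// center once (depth + illumination on) and stores its offset in the
// head's local frame; tracking recovers it from the head pose alone.
struct HeadPose {
    var rotation: simd_quatf
    var translation: SIMD3<Float>
}

struct EyeEnrollment {
    /// User-specific eye offset expressed in head-local coordinates.
    let eyeOffsetInHeadFrame: SIMD3<Float>

    /// Enrollment mode: capture the head/eye spatial relationship.
    init(headPose: HeadPose, eyeCenterWorld: SIMD3<Float>) {
        eyeOffsetInHeadFrame =
            headPose.rotation.inverse.act(eyeCenterWorld - headPose.translation)
    }

    /// Tracking mode: recover the eye center from the head pose alone,
    /// with the depth sensor and active illumination switched off.
    func eyeCenter(for headPose: HeadPose) -> SIMD3<Float> {
        headPose.translation + headPose.rotation.act(eyeOffsetInHeadFrame)
    }
}
```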

One use of remote eye tracking is to identify a point of regard (POR) on a device in the direction of the user gaze, e.g., where the gaze direction intersects the display of the device. A POR can be used to facilitate user interaction with the device. For example, a system may detect that the user's gaze has reached the bottom of the display and, in response, automatically scroll down to display more content to the user.
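Computing a POR amounts to a standard ray-plane intersection between the gaze ray and the display plane. A sketch, again with hypothetical types:

```swift
import simd

// Sketch of a point-of-regard (POR) computation: intersect the gaze ray
// with the display plane. Types and values are illustrative.
struct DisplayPlane {
    var point: SIMD3<Float>   // any point on the display plane
    var normal: SIMD3<Float>  // unit normal of the display
}

/// Returns where the gaze ray hits the display, or nil if it misses.
func pointOfRegard(eyeOrigin: SIMD3<Float>,
                   gaze: SIMD3<Float>,
                   plane: DisplayPlane) -> SIMD3<Float>? {
    let denom = simd_dot(plane.normal, gaze)
    guard abs(denom) > 1e-6 else { return nil } // gaze parallel to display
    let t = simd_dot(plane.normal, plane.point - eyeOrigin) / denom
    guard t > 0 else { return nil }             // display is behind the gaze
    return eyeOrigin + t * gaze
}
```

An app could then map the resulting POR into display coordinates and, when it lingers near the bottom edge, scroll automatically as in the example above.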

Some implementations of the disclosure involve, at a device having one or more processors, one or more image sensors, and an illumination source, detecting a first attribute of an eye based on pixel differences associated with different wavelengths of light in a first image of the eye. These implementations determine a first location associated with the first attribute in a three dimensional (3D) coordinate system based on depth information from a depth sensor. Various implementations detect a second attribute of the eye based on a glint resulting from light of the illumination source reflecting off a cornea of the eye. These implementations determine a second location associated with the second attribute in the 3D coordinate system based on the depth information from the depth sensor, and determine a gaze direction in the 3D coordinate system based on the first location and the second location.
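A standard way to turn a detected 2D feature such as the glint into a 3D location, given the depth sensor's reading at that pixel, is pinhole unprojection. A sketch under assumed camera intrinsics (nothing here comes from the patent itself):

```swift
import simd

// Illustrative pinhole unprojection: lift a detected 2D feature (e.g.,
// the glint pixel) to a 3D point using the depth sensor's reading at
// that pixel. The intrinsics are assumed, not taken from the patent.
struct CameraIntrinsics {
    var fx: Float, fy: Float  // focal lengths in pixels
    var cx: Float, cy: Float  // principal point in pixels
}

func unproject(pixel: SIMD2<Float>, depth: Float,
               k: CameraIntrinsics) -> SIMD3<Float> {
    SIMD3((pixel.x - k.cx) * depth / k.fx,
          (pixel.y - k.cy) * depth / k.fy,
          depth)
}
```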

Some implementations of the disclosure involve, at a device having one or more processors, one or more image sensors, and an illumination source, detecting a first attribute of an eye based on pixel differences associated with different wavelengths of light in a first image of the eye and determining a first location associated with the first attribute in a three-dimensional (3D) coordinate system based on depth information from a depth sensor. Various implementations determine a head location in the 3D coordinate system based on a head (e.g., a facial feature) detected in a second image and the depth information from the depth sensor. These implementations determine a second location associated with a second attribute of the eye based on the head location and a previously determined spatial relationship between the head and the eye, and determine a gaze direction in the 3D coordinate system based on the first location and the second location.
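Tying the sketches above together (this snippet reuses the hypothetical types from the earlier examples, with illustrative values throughout), a tracking-mode pass might look like:

```swift
// Hypothetical tracking-mode pass, reusing the sketch types defined above.
let intrinsics = CameraIntrinsics(fx: 600, fy: 600, cx: 640, cy: 360)
let head = HeadPose(rotation: simd_quatf(angle: 0, axis: [0, 1, 0]),
                    translation: [0, 0, 0.6])

// One-time enrollment with depth and illumination on (values illustrative).
let enrollment = EyeEnrollment(headPose: head, eyeCenterWorld: [0.03, 0.05, 0.58])

// First location from a glint pixel plus depth; second from head pose alone.
let corneaCenter = unproject(pixel: [642, 380], depth: 0.55, k: intrinsics)
let eyeCenter = enrollment.eyeCenter(for: head)
let gaze = gazeDirection(for: EyeObservation(corneaCenter: corneaCenter,
                                             eyeballRotationCenter: eyeCenter))
```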

(Patent figures: Apple's advanced eye tracking granted patent - Patently Apple IP report)

For more details, review Apple's granted patent 11,710,350.

The Team Members on this Granted Patent

  • Tom Sengelaub: Senior Engineering Manager - Computer Vision (came from SMI)
  • Hao Qin: Computer Vision Engineer (came from SMI)
  • Julia Benndorf: Software Engineer (came from SMI)
  • Hua Gao: Computer Vision Engineer

 
