Apple Reveals Advanced Controls for their Future Head Mounted Device that will include Eye Gaze, Siri, Touch & more
Today the US Patent & Trademark Office published a patent application from Apple that reveals user interfaces for interacting with their future HMD that will use a combination of eye gaze and touch controls along with hand and body gestures and even Siri.
Apple's invention covers techniques for interacting with a Head Mounted Device (HMD) using eye gaze. A user will be able to use their eyes to interact with user interface objects displayed on the HMD display. The techniques provide a more natural and efficient interface by, in some exemplary embodiments, allowing a user to operate the device primarily with eye gazes and eye gestures (e.g., eye movement, blinks, and stares).
Techniques are also described for using eye gaze to quickly designate an initial position (e.g., for selecting or placing an object) and then moving the designated position without using eye gaze, as precisely locating the designated position can be difficult using eye gaze due to uncertainty and instability of the position of a user's eye gaze.
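The two-stage technique described above can be sketched in a few lines. This is an illustrative sketch, not Apple's implementation; the function names and the `Point` type are assumptions for demonstration. The key idea from the patent is that the noisy gaze signal only picks the initial position, while subsequent precise movement comes from a different input (e.g., touch).

```python
# Illustrative sketch (not Apple's implementation): gaze designates a
# coarse initial position, then touch deltas refine it without gaze.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

def designate_with_gaze(gaze_point: Point) -> Point:
    """Use the (noisy, unstable) gaze point only to pick the initial position."""
    return Point(gaze_point.x, gaze_point.y)

def refine_with_touch(position: Point, dx: float, dy: float) -> Point:
    """After designation, precise movement comes from touch input, not gaze."""
    return Point(position.x + dx, position.y + dy)

pos = designate_with_gaze(Point(102.0, 48.0))  # coarse, gaze-driven
pos = refine_with_touch(pos, -2.0, 2.0)        # fine, touch-driven
print(pos)  # Point(x=100.0, y=50.0)
```

Splitting designation from movement this way sidesteps the jitter inherent in gaze tracking: the eye gets the cursor close, and a steadier input channel finishes the job.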
The techniques can be applied to conventional user interfaces on devices such as desktop computers, laptops, tablets, and smartphones. The techniques are also advantageous for computer-generated reality (including virtual reality and mixed reality) devices and applications.
An affordance is a property or feature of an object that signals what can be done with it. In short, affordances are cues that hint at how users may interact with something, whether physical or digital. For example, a door handle prompts you to use it to open the door, and a receiver icon hints that you may tap it to make a call. Affordances make our lives easier by supporting our successful interactions with the world of physical things and virtual objects.
In the case of Apple's invention, an affordance associated with a first displayed object is displayed and a gaze direction or a gaze depth is determined. A determination is made whether the gaze direction or the gaze depth corresponds to a gaze at the affordance.
A first input representing an instruction to take action on the affordance is received while the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, and the affordance is selected responsive to receiving the first input.
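The selection flow in the two paragraphs above can be sketched as follows. This is a hedged illustration of the described logic, with assumed function names and a simplified one-dimensional "gaze direction": an affordance is selected only when a confirming input arrives while the gaze is determined to correspond to that affordance.

```python
# Sketch of the described flow: select an affordance only when a
# confirming input arrives while gaze corresponds to that affordance.
def gaze_hits(affordance_bounds, gaze_direction, tolerance=0.05):
    """True if the gaze direction falls within the affordance's bounds,
    padded by an assumed tolerance for gaze jitter."""
    lo, hi = affordance_bounds
    return lo - tolerance <= gaze_direction <= hi + tolerance

def handle_input(affordance_bounds, gaze_direction, input_received):
    # Selection requires the input AND a gaze at the affordance, together.
    if input_received and gaze_hits(affordance_bounds, gaze_direction):
        return "selected"
    return "ignored"

print(handle_input((0.2, 0.4), 0.3, True))   # selected
print(handle_input((0.2, 0.4), 0.9, True))   # ignored: gaze is elsewhere
print(handle_input((0.2, 0.4), 0.3, False))  # ignored: no confirming input
```

The pairing matters: gaze alone never triggers an action (avoiding accidental "Midas touch" selections), and input alone never acts on something the user isn't looking at.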
A determination is also made whether a first gaze direction or a first gaze depth corresponds to a gaze at both a first affordance and a second affordance. In response to determining that the first gaze direction or the first gaze depth corresponds to a gaze at both affordances, the first affordance and the second affordance are enlarged.
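The enlargement behavior can be sketched as below. This is an assumed illustration, not the patent's implementation: when the gaze's uncertainty radius covers two or more affordances, both candidates are enlarged so a follow-up gaze can disambiguate between them.

```python
# Sketch: when gaze ambiguously covers two affordances, enlarge both
# candidates so a second, easier gaze can disambiguate.
def affordances_under_gaze(affordances, gaze, radius=0.1):
    """Affordances whose center lies within the assumed gaze-uncertainty radius."""
    return [a for a in affordances if abs(a["center"] - gaze) <= radius]

def respond_to_gaze(affordances, gaze):
    hits = affordances_under_gaze(affordances, gaze)
    if len(hits) >= 2:            # ambiguous gaze: enlarge all candidates
        for a in hits:
            a["scale"] = 2.0
    return affordances

buttons = [{"name": "play",  "center": 0.30, "scale": 1.0},
           {"name": "pause", "center": 0.35, "scale": 1.0},
           {"name": "stop",  "center": 0.90, "scale": 1.0}]
respond_to_gaze(buttons, 0.32)   # gaze falls between "play" and "pause"
print([b["scale"] for b in buttons])  # [2.0, 2.0, 1.0]
```

Enlarging both targets turns an imprecise gaze into a workable one: bigger targets demand less angular precision from the eye tracker on the second pass.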
As illustrated in patent FIGS. 19C and 19D, the patent describes interacting with a virtual room #1902 and moving an object #1908 using gaze controls and touch controls that are built into the side surface of the HMD.
The HMD includes sensor(s) configured to detect various types of user inputs, including (but not limited to) eye gestures, hand and body gestures, and voice inputs. In some embodiments, the input device includes a controller configured to receive button inputs (e.g., up, down, left, right, enter, etc.).
The virtual room/environment #1902 includes a stack of photos #1908, which includes individual photos #1908a-1908e, lying on a table. Gaze #1906, seen in view #1902b, indicates that the user is looking at the stack of photos.
Designation of photo #1908a below is indicated by focus indicator #1914, which includes a bold border around photo #1908a. In some embodiments, the focus indicator includes a pointer, cursor, dot, sphere, highlighting, outline, or ghost image that visually identifies the designated object. In some embodiments, the HMD un-designates the photo and returns the photos back to the table in response to receiving further input (e.g., selection of an exit button or liftoff of a touch).
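The designation state described above can be sketched as a small state holder. The class and method names here are assumptions for illustration: designating a photo records which one the focus indicator should mark, and an exit input or touch liftoff un-designates it, returning it to the table.

```python
# Minimal sketch (assumed names) of the designation state the patent
# describes: one photo at a time carries the focus indicator, and an
# exit button or touch liftoff clears the designation.
class PhotoStack:
    def __init__(self, photos):
        self.photos = photos
        self.designated = None   # photo currently marked by the focus indicator

    def designate(self, photo):
        if photo in self.photos:
            self.designated = photo   # e.g. draw a bold border around it

    def handle_input(self, event):
        # Further input (exit button or touch liftoff) un-designates the
        # photo, returning it to the table.
        if event in ("exit", "liftoff"):
            self.designated = None

stack = PhotoStack(["1908a", "1908b", "1908c"])
stack.designate("1908a")     # focus indicator appears on photo 1908a
stack.handle_input("liftoff")
print(stack.designated)      # None
```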
Other Moving Object Patent Figures
In the patent figures below, Apple illustrates how gaze and touch controls will be used to move items within a virtual environment, such as paintings/photos and even a coffee mug.
Later in the patent, Apple notes that the HMD system includes image sensor(s), which optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors and/or complementary metal-oxide-semiconductor (CMOS) sensors, operable to obtain images of physical objects from the real environment.
Image sensor(s) also optionally include one or more infrared (IR) sensor(s), such as a passive IR sensor or an active IR sensor, for detecting infrared light from the real environment. For example, an active IR sensor includes an IR emitter, such as an IR dot emitter, for emitting infrared light into the real environment. Image sensor(s) also optionally include one or more event camera(s) configured to capture movement of physical objects in the real environment.
Image sensor(s) also optionally include one or more depth sensor(s) configured to detect the distance of physical objects from the HMD system. In some embodiments, the HMD uses CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around the HMD.
Apple's patent application was filed in Q1 2020 and published today by the U.S. Patent Office. Considering that this is a patent application, the timing of such a product to market is unknown at this time.