Apple Has Won a Deep Patent Relating to GUIs for Interacting with Augmented and Virtual Reality Environments
Today the U.S. Patent and Trademark Office officially granted Apple a patent that relates to computer systems for virtual/augmented reality, including but not limited to electronic devices for interacting with augmented and virtual reality environments. The granted patent dates back to 2017 and is one of Apple's most deeply detailed patents on the subject to date.
Apple's invention covers computer systems with improved methods and interfaces for interacting with augmented and virtual reality environments. Such methods and interfaces optionally complement or replace conventional methods for interacting with augmented and virtual reality environments.
Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user and produce a more efficient human-machine interface. For battery-operated devices, such methods and interfaces conserve power and increase the time between battery charges.
In some embodiments, the computer system includes a desktop computer. In some embodiments, the computer system is portable (e.g., a notebook computer, tablet computer, or handheld device).
In some embodiments, the computer system includes a personal electronic device such as a headset, as shown in patent FIG. 5A2 below, wherein the user views an augmented reality environment and interacts with it using a touch-sensitive remote control, a wand, a touch-sensitive surface on the headset itself, and so forth. The image below illustrates the user touching an iPhone as a navigation tool.
Apple's patent figures below illustrate examples of systems and user interfaces for multiple users to interact with virtual user interface objects in a displayed simulated environment.
Apple's patent Abstract states that the patent covers "A computer system concurrently displays, in an augmented reality environment, a representation of at least a portion of a field of view of one or more cameras that includes a respective physical object, which is updated as contents of the field of view change; and a respective virtual user interface object, at a respective location in the virtual user interface determined based on the location of the respective physical object in the field of view. While detecting an input at a location that corresponds to the displayed respective virtual user interface object, in response to detecting movement of the input relative to the respective physical object in the field of view of the one or more cameras, the system adjusts an appearance of the respective virtual user interface object in accordance with a magnitude of movement of the input relative to the respective physical object."
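To make the abstract's core mechanism concrete, the sketch below models it in plain Python: a virtual object is anchored to a physical object's position in the camera frame, and an input's movement relative to that anchor adjusts the object's appearance (here, its scale) in proportion to the movement's magnitude. All names, the scaling choice, and the sensitivity value are illustrative assumptions, not Apple's implementation.

```python
import math
from dataclasses import dataclass


@dataclass
class VirtualObject:
    """Hypothetical virtual UI object anchored to a physical object
    detected in the camera's field of view."""
    anchor_x: float  # physical object's position in the camera frame
    anchor_y: float
    scale: float = 1.0  # current appearance parameter


def adjust_appearance(obj: VirtualObject,
                      input_x: float, input_y: float,
                      prev_x: float, prev_y: float,
                      sensitivity: float = 0.01) -> None:
    """Adjust the object's appearance in accordance with the magnitude of
    the input's movement relative to the anchored physical object."""
    # Distance from input to the physical object, before and after the move
    before = math.hypot(prev_x - obj.anchor_x, prev_y - obj.anchor_y)
    after = math.hypot(input_x - obj.anchor_x, input_y - obj.anchor_y)
    magnitude = after - before  # signed relative-movement magnitude
    # Scale grows as the input moves away from the anchor, shrinks as it nears
    obj.scale = max(0.1, obj.scale + sensitivity * magnitude)


obj = VirtualObject(anchor_x=0.0, anchor_y=0.0)
# A touch that starts 50 units from the physical object and drags to 100 units
adjust_appearance(obj, input_x=100.0, input_y=0.0, prev_x=50.0, prev_y=0.0)
print(obj.scale)  # enlarged in proportion to the 50-unit relative movement
```

Note that the appearance change is driven by movement *relative to the physical object*, not by raw screen coordinates, which is what keeps the virtual object's behavior stable as the camera's field of view shifts.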
To dive deeper into the details, review Apple's granted patent 11,163,417.