In April 2017 Patently Apple posted a report titled "NASA's Mission Operations Innovation Lead is now a Senior Manager on Apple's AR Glasses Team." A year ago we also posted a report titled "Apple's Augmented Reality Team is bringing in more Specialists to work on their Future Platform." Apple has certainly gathered a world class team of experts to develop a whole range of next-gen AR/VR and Mixed Reality headsets, smartglasses and more.
Earlier today we posted a report titled "Apple Advances their Head Mounted Display Project by adding a new GUI, an External Camera, Gaming & more." While Apple has been updating some of the features of this headset, we're still stuck with a 2008 patent image of a headset concept that is now somewhat outdated.
Buried in a plethora of patent applications published by the U.S. Patent Office today was a patent filing with a completely new vision for Apple's future headset. It's what I call a master overview, and many of the patents published today feed into this one master overview. I'll work on those in the coming days.
Today's patent filing is a utility patent, so it doesn't show what the device will look like, only how it's to function. Apple's utility invention advances their head-mounted display device substantially.
In some ways, Apple's engineers are also beginning to dream of where this type of headset could be going and I'm sure over time we'll learn more about this vision. For now, let's dig in.
Apple's engineering team notes that Virtual Reality (VR) allows users to experience and/or interact with an immersive artificial environment, such that the user feels as if they were physically in that environment. For example, virtual reality systems may display stereoscopic scenes to users in order to create an illusion of depth, and a computer may adjust the scene content in real-time to provide the illusion of the user moving within the scene.
When the user views images through a virtual reality system, the user may feel as if they are moving within the scenes from a first-person point of view.
Similarly, mixed reality (MR) combines computer generated information (referred to as virtual content) with real world images or a real world view to augment, or add content to, a user's view of the world. The simulated environments of virtual reality and/or the mixed environments of augmented reality may thus be utilized to provide an interactive user experience for multiple applications, such as applications that add virtual content to a real-time view of the viewer's environment, interacting with virtual training environments, gaming, remotely controlling drones or other mechanical systems, viewing digital media content, interacting with the Internet, or the like.
A Mixed Reality System
Apple's invention relates to a mixed reality system that may include a mixed reality device such as a headset, helmet, goggles, or glasses (referred to herein as a head-mounted display (HMD)) that includes a projector mechanism for projecting or displaying frames including left and right images to a user's eyes to thus provide 3D virtual views to the user.
The 3D virtual views may include views of the user's environment augmented with virtual content (e.g., virtual objects, virtual tags, etc.).
The mixed reality system may include world-facing sensors that collect information about the user's environment (e.g., video, depth information, lighting information, etc.), and user-facing sensors that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
The sensors provide the information as inputs to a controller of the mixed reality system. The controller may render frames including virtual content based at least in part on the inputs from the world and user sensors. The controller may be integrated in the HMD, or alternatively may be implemented at least in part by a device external to the HMD. The HMD may display the frames generated by the controller to provide a 3D virtual view including the virtual content and a view of the user's environment for viewing by the user.
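To make the data flow described above concrete, here is a minimal, purely illustrative sketch of a controller that takes world-facing and user-facing sensor inputs and produces left/right frames. None of these class or method names come from Apple's filing; they are assumptions chosen only to mirror the patent's described pipeline.

```python
from dataclasses import dataclass

@dataclass
class WorldSensorData:
    """Environment inputs (hypothetical structure, per the patent's description)."""
    video_frame: object = None   # RGB view of the user's surroundings
    depth_map: object = None     # range information for the environment
    lighting: object = None      # color/intensity/direction of ambient light

@dataclass
class UserSensorData:
    """User inputs (hypothetical structure)."""
    gaze: object = None          # eye position and movement
    hand_gestures: object = None
    expressions: object = None

class Controller:
    """Sketch of the controller: renders frames with virtual content
    based at least in part on world- and user-sensor inputs."""

    def render_frame(self, world: WorldSensorData, user: UserSensorData) -> dict:
        # Place virtual objects using environment depth and the user's gaze,
        # then composite them over the captured view for each eye.
        virtual_content = self._place_virtual_objects(world.depth_map, user.gaze)
        return {"left": (world.video_frame, virtual_content),
                "right": (world.video_frame, virtual_content)}

    def _place_virtual_objects(self, depth_map, gaze):
        return []  # placeholder for actual scene composition

frame = Controller().render_frame(WorldSensorData(), UserSensorData())
```

The per-eye output reflects the patent's mention of frames "including left and right images"; a real implementation would render two distinct viewpoints rather than duplicating one frame.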
In some embodiments, the sensors may include one or more cameras that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment.
In some embodiments, the sensors may include one or more sensors that capture depth or range information for the user's environment. In some embodiments, the sensors may include one or more sensors that may capture information about the user's position, orientation, and motion in the environment.
In some embodiments, the sensors may include one or more cameras that capture lighting information (e.g., direction, color, intensity) in the user's environment that may, for example, be used in rendering (e.g., coloring and/or lighting) content in the virtual view.
In some embodiments, the sensors may include one or more sensors that track position and movement of the user's eyes. In some embodiments, the sensors may include one or more sensors that track position, movement, and gestures of the user's hands, fingers, and/or arms.
In some embodiments, the sensors may include one or more sensors that track expressions of the user's eyebrows/forehead. In some embodiments, the sensors may include one or more sensors that track expressions of the user's mouth/jaw.
In some embodiments, the world sensors may include one or more "video see through" cameras (e.g., RGB (visible light) cameras) that capture high-quality views of the user's environment that may be used to provide the user with a virtual view of their real environment.
In some embodiments, the world sensors may include one or more world mapping sensors (e.g., infrared (IR) cameras with an IR illumination source, or Light Detection and Ranging (LIDAR) emitters and receivers/detectors) that, for example, capture depth or range information for the user's environment. In some embodiments, the world sensors may include one or more "head pose" sensors (e.g., IR or RGB cameras) that may capture information about the user's position, orientation, and motion in the environment; this information may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD.
In some embodiments, the world sensors may include one or more light sensors (e.g., RGB cameras) that capture lighting information (e.g., color, intensity, and direction) in the user's environment that may, for example, be used in rendering lighting effects for virtual content in the virtual view.
Apple's patent FIG. 1 presented above illustrates a mixed reality system. More specifically, a mixed reality system #10 may include a HMD #100 such as a headset, helmet, goggles, or glasses that may be worn by a user #190. In some embodiments, virtual content #110 may be displayed to the user in a 3D virtual view #102 via the HMD; different virtual objects may be displayed at different depths in the virtual space #102. In some embodiments, the virtual content may be overlaid on or composited in a view of the user's environment with respect to the user's current line of sight that is provided by the HMD.
The HMD may implement any of various types of virtual reality projection technologies. For example, a near-eye VR system projects left and right images onto screens positioned in front of the user's eyes, using technologies such as DLP (digital light processing), LCD (liquid crystal display) or LCoS (liquid crystal on silicon).
As another example, the HMD may be a direct retinal projector system that scans left and right images, pixel by pixel, to the subject's eyes. To scan the images, left and right projectors generate beams that are directed to left and right reflective components (e.g., ellipsoid mirrors) located in front of the user's eyes; the reflective components reflect the beams to the user's eyes.
To create a three-dimensional (3D) effect, virtual content at different depths or distances in the 3D virtual view is shifted left or right in the two images as a function of the triangulation of distance, with nearer objects shifted more than more distant objects.
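The "nearer objects shifted more" relationship follows from simple stereo geometry: screen-space disparity is inversely proportional to depth. This small sketch is not from the patent; the parameter names and values (a 63 mm interpupillary distance, a hypothetical focal length in pixels) are illustrative assumptions.

```python
def disparity_px(depth_m: float, ipd_m: float = 0.063, focal_px: float = 1200.0) -> float:
    """Horizontal shift (in pixels) between left and right images for a point
    at depth_m. Disparity falls off as 1/depth, so nearer objects shift more,
    which the visual system reads as closeness.
    ipd_m and focal_px are assumed example values, not figures from the patent."""
    return ipd_m * focal_px / depth_m

near = disparity_px(0.5)   # an object half a meter away
far = disparity_px(5.0)    # an object five meters away
assert near > far          # nearer content is shifted more between the two images
```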
The HMD may include world sensors #140 that collect information about the user's environment (video, depth information, lighting information, etc.), and user sensors #150 that collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.). The sensors #140 and #150 may provide the collected information to a controller of the mixed reality system.
The controller may render frames for display by a projector component of the HMD that include virtual content based at least in part on the various information obtained from both kinds of sensors. Example sensors #140 and #150 are shown in FIGS. 2A through 2C further below.
Apple's patent FIGS. 2A/B/C presented in the next two graphics below illustrate world-facing and user-facing sensors of a head-mounted display (HMD).
Apple notes that user sensors #214-217 collect information about the user (e.g., the user's expressions, eye movement, hand gestures, etc.).
As shown in the non-limiting example HMD of FIGS. 2A through 2C, there may be two video see through cameras 210A and 210B (seen in FIG. 2B) located on a front surface of the HMD at positions that are substantially in front of each of the user's eyes 292A and 292B.
In an example non-limiting embodiment, video see-through cameras #210 may include high-quality, high-resolution RGB video cameras, for example 10 megapixel (e.g., 3072 × 3072 pixel count) cameras with a frame rate of 60 frames per second (FPS) or greater, a horizontal field of view (HFOV) of greater than 90 degrees, and a working distance of 0.1 meters (m) to infinity.
In some embodiments, the world sensors may include one or more "head pose" sensors #212 (e.g., IR or RGB cameras) that may capture information about the position, orientation, and/or motion of the user and/or the user's head in the environment.
The information collected by those sensors may, for example, be used to augment information collected by an inertial-measurement unit (IMU) of the HMD.
The augmented position, orientation, and/or motion information may be used in determining how to render and display virtual views of the user's environment and virtual content within the views. For example, different views of the environment may be rendered based at least in part on the position or orientation of the user's head, whether the user is currently walking through the environment, and so on. As another example, the augmented position, orientation, and/or motion information may be used to composite virtual content into the scene in a fixed position relative to the background view of the user's environment.
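One common way camera-derived head-pose data can "augment" a drifting IMU, as described above, is a complementary filter: integrate the fast gyro signal each step, then nudge the estimate toward the slower but drift-free camera fix. This is a generic sensor-fusion sketch, not Apple's disclosed method; the function name and the blend factor are assumptions.

```python
def fuse_head_yaw(gyro_rates, camera_yaws, dt=0.01, alpha=0.98):
    """Complementary-filter sketch (illustrative, not from the patent).

    gyro_rates:  angular rates from the IMU gyro (rad/s), fast but drift-prone
    camera_yaws: yaw estimates from head-pose cameras (rad), slow but absolute
    alpha:       how strongly to trust the integrated gyro vs. the camera fix
    """
    yaw = camera_yaws[0]  # initialize from the absolute camera estimate
    for rate, cam in zip(gyro_rates, camera_yaws):
        # Integrate the gyro, then pull the result toward the camera reading
        # so any accumulated gyro bias is continuously corrected.
        yaw = alpha * (yaw + rate * dt) + (1.0 - alpha) * cam
    return yaw
```

With a biased gyro (e.g., a constant 0.1 rad/s drift) and a camera reporting a stationary head, the fused yaw stays bounded near zero instead of drifting without limit, which is exactly the correction the augmented head-pose information provides.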
Apple's patent FIG. 4 below illustrates components of a mixed reality system.
Apple's patent FIG. 3 below is a flowchart of a method of operation for a mixed reality system.
The Future of Apple's HMD
One direction Apple's HMD could be headed is a desktop replacement, perhaps a decade out. In Apple's top patent figure below we're able to view a concept of a desktop replacement system that Apple refers to as a 3D document editing system.
As the old saying goes, 'If it walks like a duck and quacks like a duck, it's a duck.' In this case, call it what they will, it's a desktop replacement system. The image below illustrates a keyboard-like device working in sync with Apple's HMD acting as your desktop display.
In Apple's lower patent figure, noted in the graphic above, we're able to see an area of text being moved forward or backward on a Z axis relative to a document in a 3D virtual space in response to user gestures. There will be more on this in the future.
All in all this has to be one of the coolest patents from Apple in some time. Of course it took close to a decade of patent filings before we got the iPhone, iPad and Apple Pencil when naysayers said those products would never see the light of day.
Yes, this is futuristic, and it will likely take a decade for the HMD to reach the point of replacing your desktop, but until such time we're likely to see the starting point of an HMD come to market sooner.
On Monday Patently Apple posted a report titled "The Discovery of Apple's Secret Micro-LED Display Testing Plant in California Rattles Display Maker Stocks in Asia." One of the areas such a display will be aimed at first is Apple Watch followed by an HMD.
Patently Apple posted three reports on micro-LED for a VR headset (one, two and three), with the third presenting an estimated timeline for micro-LED across various products. The chart forecasts that AR/MR headsets with micro-LED displays should arrive sometime around 2021. So the pieces of the puzzle are coming together.
Apple's patent application was filed back in Q3 2017. The inventors of Apple's invention are noted as being Manohar Srikanth, Camera & Imaging Researcher and Sr. Camera Prototyping Engineer; Tobias Rick, Video Engineer; Brett Miller, Engineering Manager – Camera Incubation; and Ricardo Motta, Distinguished Engineer - DEST – Camera Incubation.
Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trade Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details.