A new Apple Invention reveals a High-End Sensor System for a future HMD Designed to Operate in Low Light
Today the US Patent & Trademark Office published a patent application from Apple that relates to a future head-mounted display device designed for low-light operation. Apple notes that sensors coupled to the head support sense the environment in low light. The sensors include one or more infrared sensors for sensing the environment with infrared electromagnetic radiation, or a depth sensor such as LiDAR for sensing distances to objects in the environment, and also include an ultrasonic sensor for sensing the environment with ultrasonic sound waves.
Apple's patent application begins by noting that human eyes have different sensitivities in different lighting conditions. Photopic vision is human vision under high levels of ambient light, such as daylight conditions. It is provided by the cone cells of the eye, which are sensitive to different colors (i.e., wavelengths) of light.
Scotopic vision is human vision with low levels of ambient light such as at night with overcast skies (e.g., with no moonlight). Scotopic vision is provided by rod cells of the eye.
Mesopic vision is human vision at levels of ambient light between those of photopic and scotopic vision, such as at night without overcast skies (e.g., with moonlight) through early twilight.
Mesopic vision is provided by both the cone cells and the rod cells. Compared to photopic vision, scotopic or even mesopic vision may result in a loss of color vision, changed sensitivity to different wavelengths of light, reduced acuity, and more motion blur. Thus, in poorly lit conditions, such as when relying on scotopic vision, a person is less able to view the environment than in well-lit conditions.
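The three vision regimes above are commonly distinguished by approximate ambient-luminance thresholds. As a rough sketch (the boundary values here are widely cited approximations, not figures from Apple's filing):

```python
# Approximate luminance thresholds (cd/m^2) commonly cited for the three
# human-vision regimes; exact boundaries vary by source and are not
# specified in the patent itself.
SCOTOPIC_MAX = 0.001   # below this: rod-only (scotopic) vision
PHOTOPIC_MIN = 3.0     # above this: cone-dominated (photopic) vision

def vision_regime(luminance_cd_m2: float) -> str:
    """Classify an ambient luminance level into a human-vision regime."""
    if luminance_cd_m2 < SCOTOPIC_MAX:
        return "scotopic"
    if luminance_cd_m2 < PHOTOPIC_MIN:
        return "mesopic"
    return "photopic"

print(vision_regime(0.0001))  # moonless, overcast night -> scotopic
print(vision_regime(0.5))     # moonlit night / twilight -> mesopic
print(vision_regime(100.0))   # daylight -> photopic
```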
Apple's invention covers implementations of display systems, including head-mounted display units and methods of providing content. The sensors in the system include one or more infrared sensors for sensing the environment with infrared electromagnetic radiation, or a depth sensor for detecting distances to objects in the environment, and also include an ultrasonic sensor for sensing the environment with ultrasonic sound waves.
The controller determines graphical content according to the sensing of the environment with the one or more of the infrared sensors or the depth sensor and with the ultrasonic sensor, and operates the display to provide the graphical content concurrent with the sensing of the environment.
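The claim structure above, (infrared OR depth) AND ultrasonic, can be sketched in a few lines. This is a hypothetical illustration of the fusion logic only; the type and function names are invented, not Apple's:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorFrame:
    """One capture cycle of non-visible-light data (hypothetical structure)."""
    infrared: Optional[list]   # IR intensity samples, if an IR sensor is used
    depth: Optional[list]      # distances to objects (e.g., from LiDAR)
    ultrasonic: list           # echo-derived distance estimates

def determine_graphical_content(frame: SensorFrame) -> dict:
    """Fuse IR and/or depth data together with ultrasonic data into
    renderable layers, mirroring the claim: (infrared OR depth) AND ultrasonic."""
    if frame.infrared is None and frame.depth is None:
        raise ValueError("need at least one of infrared or depth data")
    content = {"ultrasonic_layer": frame.ultrasonic}
    if frame.infrared is not None:
        content["infrared_layer"] = frame.infrared
    if frame.depth is not None:
        content["depth_layer"] = frame.depth
    return content
```

In a real system the "layers" would feed a renderer concurrently with ongoing sensing; here they are plain lists to keep the logic visible.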
In an implementation, a display system includes a controller, and a head-mounted display unit. The head-mounted display unit includes a display for displaying graphical content to a user wearing the head-mounted display unit and sensors for sensing an environment from the head-mounted display unit.
The sensors include an infrared sensor, a depth sensor, an ultrasonic sensor, and a visible light camera. In high light conditions, the sensors sense the environment to obtain first sensor data that is stored.
The first sensor data includes first visible light sensor data obtained with the visible light camera and first non-visible light sensor data obtained from one or more infrared sensors, the depth sensor, or the ultrasonic sensor.
In low light conditions after the first sensor data is stored, the sensors sense the environment to obtain current sensor data, and the controller determines the graphical content according to the current sensor data and the first visible light sensor data.
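That store-and-recall implementation can be sketched as follows. All names and the dictionary-based store are illustrative assumptions, not details from the filing:

```python
# Hypothetical store-and-recall flow: sense in high light, persist the data,
# then in low light combine current non-visible sensing with the stored
# visible-light imagery of the same environment.
stored = {}

def sense_high_light(env_id, visible_frame, non_visible_frame):
    """Store the 'first sensor data' captured under good lighting."""
    stored[env_id] = {"visible": visible_frame, "non_visible": non_visible_frame}

def render_low_light(env_id, current_non_visible):
    """Determine graphical content from current (low-light) sensor data plus
    previously stored visible-light data, per the described implementation."""
    first = stored.get(env_id)
    if first is None:
        # No prior high-light capture: fall back to live sensing only.
        return {"layers": [current_non_visible]}
    return {"layers": [current_non_visible, first["visible"]]}
```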
In an implementation, a method of providing graphical content with a display system includes sensing an environment, processing sensor data, determining graphical content, and outputting the graphical content. The sensing includes sensing with sensors an environment to obtain sensor data in low light.
The sensors are coupled to a head-mounted display unit of the display system and include an infrared sensor, a depth sensor, and an ultrasonic sensor. The processing includes processing the sensor data with a controller.
The graphical content includes an ultrasonic graphical component and one or more of an infrared graphical component based on the sensor data obtained with the infrared sensor, a depth graphical component based on the sensor data obtained with the depth sensor, or a combined graphical component based on the sensor data obtained with both the infrared sensor and the depth sensor.
Apple further explains that the depth sensor may operate in different frequency ranges of the electromagnetic radiation spectrum than the infrared sensor, so as to not detect or otherwise be sensitive to electromagnetic radiation of the other (e.g., using appropriate filters, camera image sensors, and/or illuminators and the projector 434a in suitable frequency ranges).
In other examples, the depth sensor may be a radio detection and ranging (RADAR) sensor or a light detection and ranging (LIDAR) sensor. It should be noted that one or multiple types of depth sensors may be utilized, for example, incorporating one or more of a structured light sensor, a time-of-flight camera, a RADAR sensor, and/or a LIDAR sensor.
Apple's patent FIG. 1 below presents a display system (#100) which includes a head-mounted display unit (#102) configured to provide a computer-generated reality.
The display system includes a head support (#110), one or more internal displays (#120), and one or more sensors (#130). The head support includes a chassis (#112) and a head-engagement mechanism (#114) coupled to the chassis. The one or more displays and the one or more sensors are coupled to the chassis, while the head-engagement mechanism engages the head (H) of the user for supporting the displays for displaying graphical content to eyes of the user.
The one or more displays may each be configured as a display panel (e.g., a liquid crystal display panel (LCD), light-emitting diode display panel (LED), organic light-emitting diode display panel (e.g., OLED)), or as a projector (e.g., that projects light onto a reflector back to the eyes of the user), and may further be considered to include any associated optical components (e.g., lenses or reflectors). The sensors are configured to sense the environment.
Further, the display system further includes a controller (#140) and other electronics (#150). The controller and the other electronics may be coupled to the head-mounted display unit (e.g., to the chassis).
The controller controls various operations of the display system, for example, sensing various conditions with the sensors and providing content with the displays.
The other electronics may include, for example, power electronics (e.g., a battery), communications devices (e.g., modems and/or radios for communicating wirelessly with other devices), and/or other output devices (e.g., speakers for aural output, haptic devices for tactile output).
Apple's patent FIG. 8 below presents a process (#800) for processing the sensor data and determining the graphical content. The process generally includes processing operations for the infrared, depth, ultrasonic, and visible light sensor data, respectively, followed by graphical content determining operations for the corresponding infrared, depth, ultrasonic, and visible light graphical components. The process may, for example, be performed in low light conditions.
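The two-stage shape of that process, per-sensor processing followed by per-sensor component determination, can be sketched like this. The placeholder `denoise` step and all names are assumptions for illustration; FIG. 8 does not specify the actual operations:

```python
def process_800(ir_raw, depth_raw, ultra_raw, visible_raw):
    """Sketch of the two stages in FIG. 8: processing operations per sensor
    stream, then a graphical component determined from each processed stream."""
    def denoise(samples):
        # Stand-in processing operation: drop missing samples.
        return [s for s in samples if s is not None]

    # Stage 1: processing operations, one per sensor stream.
    processed = {name: denoise(raw) for name, raw in
                 [("infrared", ir_raw), ("depth", depth_raw),
                  ("ultrasonic", ultra_raw), ("visible", visible_raw)]}

    # Stage 2: determine one graphical component per processed stream.
    return {f"{name}_component": data for name, data in processed.items()}
```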
Apple's patent application number 20200341563, published today by the U.S. Patent Office, was filed back in April 2020, with some work dating back to April 2019. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.