Apple's Computer Vision Team is Working on Depth-Based Touch Detection for Games & Virtual Keyboards
Back in March Patently Apple posted a report titled "Apple Wins a Patent Relating to the use of a Specialized VR Keyboard with a Future Headset." In the future, students, executives, lawyers and engineers will be able to slip on a headset and use any table surface, even an airline pull-down tray, as a virtual keyboard. It will give users a new, private way of working in public spaces because, unlike a notebook, the display is on your face.
In order to make a virtual keyboard usable with deadly accuracy, Apple has been working on advanced computer vision technologies. In this particular patent application, Apple's Computer Vision and Machine Learning Manager Daniel Kurtz, who came to Apple via the acquisition of Metaio in 2015, explains the use of a depth-based touch detection system to make next-gen virtual keyboards and mice a reality. This form of touch detection could also apply to future VR gaming.
Two years after acquiring Metaio, Apple introduced AR at its 2017 Worldwide Developers Conference. Senior VP of Software Craig Federighi demonstrated how virtual objects could be placed on a table in front of users using an iPhone. He started by placing a virtual cup of coffee on the table and then added a lamp beside it with striking accuracy and ease. The shadows even shifted in real time as he moved the lamp around the coffee cup.
That demo took place months before the release of the iPhone X, which introduced Apple's advanced TrueDepth camera. What Apple's team is working on now goes beyond simply presenting AR objects in front of a user: the aim is to give users the ability to interact with them in real time when using a headset.
While the main focus of Apple's patent is on interacting with a virtual keyboard in real time, one can only imagine what this invention could do to advance interactive VR gaming: opening doors, steering a car, picking up a weapon and engaging an enemy, and much more, all in a more intimate way than pressing a button on an Xbox or PS4 controller. This technology could be huge if Apple can get it to market in a timely way. The good news is that Apple's TrueDepth camera is, in many ways, going to play a vital role in making all of this happen.
Invention Background
Kurtz first describes the shortfalls of current systems before explaining how Apple's invention overcomes certain problem areas in depth-based virtual touch detection. First, he notes that detecting when and where a user's finger touches a real environmental surface can enable intuitive interactions between the user, the environment, and a hardware system (e.g., a computer or gaming system).
Using cameras for touch detection has many advantages over methods that rely on sensors embedded in a surface (e.g., capacitive sensors). Further, some modern digital devices, such as head-mounted devices (HMDs) and smartphones, are equipped with vision sensors, including depth cameras like Apple's TrueDepth camera found on all iPhone X models.
Current depth-based touch detection approaches use depth cameras to provide distance measurements between the camera and the finger and between the camera and the environmental surface.
One approach requires a fixed depth camera setup and cannot be applied to dynamic scenes. Another approach first identifies and segments the finger, then flood-fills neighboring pixels from the center of the fingertip; when enough pixels have been filled, a touch is detected. However, because this second approach makes no attempt to normalize pixel depth data, it can be quite error prone. In still another approach, finger touches are determined based on a pre-computed reference frame, an analysis of the hand's contour, and the fitting of depth curves.
Each of these approaches requires predefined thresholds to distinguish touch from no-touch conditions. They also suffer from large hover distances (i.e., a touch may be indicated when the finger hovers 10 millimeters or less above the surface), thereby introducing a large number of false-positive touch detections.
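To make that false-positive problem concrete, here is a minimal sketch of such a fixed-threshold check (the function name and the 10 millimeter cutoff are illustrative assumptions, not code from any of the approaches cited in the patent):

```python
# Naive fixed-threshold touch test: compare the fingertip's depth reading
# against the surface depth at the same pixel and report a touch whenever
# the gap falls under the threshold, even if contact never occurs.
HOVER_THRESHOLD_MM = 10.0  # assumed cutoff; such systems must tune this value

def naive_touch_test(fingertip_depth_mm: float, surface_depth_mm: float) -> bool:
    """Return True when the fingertip is within the hover threshold."""
    return (surface_depth_mm - fingertip_depth_mm) <= HOVER_THRESHOLD_MM

# A finger hovering 8 mm above the surface is wrongly reported as a touch:
print(naive_touch_test(fingertip_depth_mm=492.0, surface_depth_mm=500.0))  # True
```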
Invention: Depth-Based Touch Detection
Apple's invention covers concepts that provide a depth-based touch detection method. Step by step, the method involves:

- obtaining a depth map of a scene having a surface, the depth map comprising a plurality of pixel values (e.g., the depth map could come from a depth sensor or one or more optical cameras);
- identifying a first region of the depth map, the first region comprising a first plurality of pixel values indicative of an object other than the surface in the scene (e.g., the object could be a finger, a stylus or some other optically opaque object);
- identifying a surface region of the depth map based on the first region, the surface region comprising a second plurality of pixel values indicative of the surface (e.g., the surface could be planar or non-planar);
- normalizing the first region based on the surface region, wherein each normalized pixel value in the normalized first region is indicative of a distance relative to the surface (e.g., an orthogonal distance to the surface in the area of the object);
- generating an identifier based on the normalized first region (e.g., the identifier can be composed of the pixel values within the normalized first region);
- applying the identifier to a classifier (e.g., the classifier may be binary or multi-state);
- obtaining an output from the classifier based on the applied identifier; and
- performing a first affirmative operation when the classifier output is indicative of a touch between the object and the surface (e.g., performing the affirmative action corresponding to a "mouse click").
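Read as a pipeline, the claim language reduces to: segment the object, normalize it against the surface, build an identifier, and classify. Below is a minimal sketch of that flow in Python/NumPy, assuming a roughly planar surface; the least-squares plane fit, the height histogram used as the identifier, and the scikit-learn-style classifier interface are all illustrative assumptions, not Apple's implementation:

```python
import numpy as np

def perform_click():
    """Placeholder for the 'affirmative operation' (e.g., a mouse click)."""
    print("click")

def detect_touch(depth_map, object_mask, surface_mask, classifier):
    """Sketch of the claimed steps: normalize the object region against the
    surface, generate an identifier, and let a classifier decide touch/no-touch."""
    # Fit a plane depth = a*x + b*y + c to the surface pixels (assumes a
    # roughly planar surface; the patent also allows non-planar surfaces).
    ys, xs = np.nonzero(surface_mask)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, depth_map[ys, xs], rcond=None)

    # Normalize the object (e.g., finger) region: each pixel becomes its
    # height above the fitted surface rather than its distance to the camera.
    oy, ox = np.nonzero(object_mask)
    predicted_surface = coeffs[0] * ox + coeffs[1] * oy + coeffs[2]
    heights = predicted_surface - depth_map[oy, ox]

    # Generate an identifier from the normalized region; a fixed-length
    # histogram of heights (in mm) is an assumed stand-in for the patent's
    # "pixel values within the normalized first region".
    identifier, _ = np.histogram(heights, bins=16, range=(-5.0, 50.0))

    # Apply the identifier to a pre-trained classifier and perform the
    # affirmative operation when the output indicates a touch.
    if classifier.predict(identifier.reshape(1, -1))[0] == 1:
        perform_click()
        return True
    return False
```

The pivotal step is the normalization: once each object pixel encodes distance relative to the surface rather than to the camera, touch classification no longer depends on a fixed camera pose, which appears to be precisely the weakness of the fixed-setup prior art described earlier.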
Embodiments of the touch detection operations set forth can assist with improving the functionality of computing devices or systems that accept non-keyboard input.
Computer functionality can be improved by enabling such computing devices or systems to accept input from arbitrary surfaces (e.g., a tabletop or other surface) instead of conventional keyboards and/or pointer devices (e.g., a mouse or stylus).
Computer system functionality can be further improved by eliminating the need for conventional input devices, giving a user the freedom to use the computer system in arbitrary environments.
Among the sensors listed as supporting a depth-based touch detection system are an optical activity sensor, an optical sensor array, an accelerometer, a sound sensor, a barometric sensor, a proximity sensor, an ambient light sensor, a vibration sensor, a gyroscopic sensor, a compass, a barometer, a magnetometer, a thermistor, an electrostatic sensor, a temperature or heat sensor, a pixel array and a momentum sensor.
Apple's patent figure below illustrates various aspects of a depth-based touch detection operation.
Apple's patent FIG. 3A below illustrates, in flowchart form, a temporal depth-based touch detection operation.
Apple's patent application, published today by the U.S. Patent Office, was originally filed back in Q3 2017. Considering that this is only a patent application, the timing of such a product coming to market is unknown at this time.