Microsoft Wins a Patent for a Future Surface Tablet that Could Support Touchless Input & Recognize Hand Gestures
This week Microsoft introduced 'Surface Go' – a budget-sensitive Surface tablet aimed at the education market and beyond. While it brought a few nice refinements to the Surface line, there was nothing that could actually shake up the tablet market in any measurable way.
Yesterday, the U.S. Patent and Trademark Office published a granted patent for Microsoft that covers their invention of a future touchless input system; a system that could shake up the market if Microsoft is able to be first to market with this feature. It would definitely go a long way toward showing that they could be a serious leader in tablet innovation instead of an iPad follower.
The system in some ways resembles today's TrueDepth camera from Apple that's used for Face ID. Apple holds patents (one, two and three, for example) that could allow a next-gen TrueDepth camera to provide gesture recognition and beyond. Microsoft's newly granted patent covers a similar invention, though this first round focuses on touchless input.
Microsoft's granted patent covers visually detecting touchless input. A tracking system including a depth camera and/or another source receives one or more depth maps imaging a scene containing one or more human subjects. Pixels in the depth maps are analyzed to identify non-static pixels having the shallowest depth. The position of the non-static pixel(s) is then mapped to a cursor position. In this way, the position of a pointed finger can be used to control the position of a cursor on a display device. Touchless input may also be received and interpreted to control cursor operations and multitouch gestures.
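The pipeline described above can be sketched in a few lines: compare successive depth maps to find non-static pixels, take the shallowest (nearest-to-camera) one as the pointing fingertip, and scale its position to display coordinates. This is a minimal illustration under assumed names and a made-up noise threshold, not the patent's actual implementation.

```python
import numpy as np

def touchless_cursor(prev_depth, curr_depth, screen_w, screen_h,
                     motion_threshold=10):
    """Map the nearest moving point in a depth map to a cursor position.

    Simplified sketch of the patent's described idea: pixels whose depth
    changed between frames are treated as non-static, and the shallowest
    of those is taken as the pointing fingertip. All parameter names and
    the threshold value are illustrative assumptions.
    """
    # Non-static pixels: depth changed by more than the noise threshold.
    moving = np.abs(curr_depth.astype(int) - prev_depth.astype(int)) > motion_threshold
    if not moving.any():
        return None  # no touchless input detected this frame

    # Among the moving pixels, pick the one nearest the camera.
    depths = np.where(moving, curr_depth, np.inf)
    row, col = np.unravel_index(np.argmin(depths), depths.shape)

    # Scale camera coordinates to display coordinates.
    h, w = curr_depth.shape
    return (int(col * screen_w / w), int(row * screen_h / h))
```

A caller would feed this two consecutive frames from the depth camera each update and move the on-screen cursor to the returned coordinates, ignoring frames where it returns `None`.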
Microsoft's patent FIGS. 1A and 1B illustrated below show us an example of a touchless input system; FIG. 4 schematically shows a visual representation of a depth map; FIG. 6 schematically shows another visual representation of a depth map.
Microsoft further notes that other movements made by a user may be interpreted as other controls. As non-limiting examples, the user may carry out a plurality of cursor operations, including click and drag operations.
Further, the user may carry out other operations not related to a cursor, including multitouch gestures such as zooming and panning. While a GUI is provided as an example, it is to be understood that virtually any GUI and/or other aspect of a computing device may be controlled with the touchless input.
Objects other than a human may be modeled and/or tracked. Such objects may be modeled and tracked independently of human subjects. For example, the motion of a user holding a stylus and/or the motion of the stylus itself may be tracked.
Virtually any finger or multitouch gesture may be interpreted without departing from the scope of this invention. While "multitouch" is used to describe finger gestures that utilize more than one finger, it is to be understood that the present invention enables such gestures to be performed in a touchless manner.
Examples of such touchless multitouch gestures include a tap gesture, a double-tap gesture, a press gesture, a scroll gesture, a pan gesture, a flick gesture, a two finger tap gesture, a two finger scroll gesture, a pinch gesture, a spread gesture, and a rotate gesture. However, it will be appreciated that these examples are merely illustrative and are not intended to be limiting in any way.
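To make the two-finger gestures above concrete, here is a minimal sketch of how pinch vs. spread might be distinguished once two fingertips are being tracked: by the change in their separation over the gesture. The function name, inputs, and threshold are assumptions for illustration; the patent does not specify this method.

```python
def classify_two_finger_gesture(start_dist, end_dist, threshold=0.15):
    """Classify a touchless pinch vs. spread gesture.

    start_dist / end_dist: distance between the two tracked fingertips
    at the start and end of the gesture (any consistent unit). The 15%
    threshold is an illustrative assumption to reject jitter.
    """
    change = (end_dist - start_dist) / start_dist
    if change < -threshold:
        return "pinch"    # fingers moved together, e.g. zoom out
    if change > threshold:
        return "spread"   # fingers moved apart, e.g. zoom in
    return "none"
```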
Natural User Interface (NUI)
Lastly, Microsoft notes that when included, the input subsystem may comprise or interface with one or more user-input devices such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may comprise or interface with selected natural user input (NUI) componentry (e.g., tracking system 108 of patent FIGS. 1A and 1B).
Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on- or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; as well as electric-field sensing componentry for assessing brain activity.
In some embodiments, the input subsystem may comprise or interface with a "structured light" depth camera, which may be configured to project a structured infrared illumination comprising numerous, discrete features (e.g., lines or dots). A camera may be configured to image the structured illumination reflected from the scene. Based on the spacings between adjacent features in the various regions of the imaged scene, a depth map of the scene may be constructed.
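Structured-light systems like the one described above typically recover depth by triangulation: the apparent shift (disparity) of each projected feature is inversely proportional to the distance of the surface it landed on. As a hedged sketch, using the standard triangulation formula rather than anything the patent specifies:

```python
def depth_from_disparity(disparity_px, baseline_m, focal_px):
    """Standard structured-light/stereo triangulation (assumed model).

    disparity_px: observed shift of a projected feature, in pixels
    baseline_m:   projector-to-camera separation, in meters
    focal_px:     camera focal length, in pixels
    Returns the estimated depth of the surface, in meters.
    """
    return baseline_m * focal_px / disparity_px
```

With an assumed 7.5 cm baseline and 600 px focal length, a feature shifted by 50 px would sit about 0.9 m from the camera; nearer surfaces produce larger disparities.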
The input subsystem may also comprise or interface with a "time-of-flight" depth camera, which may include a light source configured to project a pulsed infrared illumination onto a scene. Two cameras may be configured to detect the pulsed illumination reflected from the scene. The cameras may include an electronic shutter synchronized to the pulsed illumination, but the integration times for the cameras may differ, such that a pixel-resolved time-of-flight of the pulsed illumination, from the light source to the scene and then to the cameras, is discernible from the relative amounts of light received in corresponding pixels of the two cameras. This is very much like Apple's TrueDepth camera setup.
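The underlying arithmetic of such a gated time-of-flight scheme can be sketched simply: the fraction of the returned pulse captured during the shorter gate encodes the round-trip delay, and half the round-trip distance is the depth. This is a deliberately simplified model with assumed inputs, not the patent's actual two-camera calibration.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_depth(short_gate_signal, long_gate_signal, pulse_width_s):
    """Gated time-of-flight depth estimate (simplified assumed model).

    short_gate_signal / long_gate_signal: light energy captured at one
    pixel by the two differently-gated exposures. Late-arriving (distant)
    returns fall partly outside the short gate, so the ratio of the two
    signals encodes the pulse's round-trip delay.
    """
    fraction_in_gate = short_gate_signal / long_gate_signal
    delay_s = (1.0 - fraction_in_gate) * pulse_width_s
    return SPEED_OF_LIGHT * delay_s / 2.0  # round trip -> one-way depth
```

For example, with an assumed 20 ns pulse, a pixel that captures half the pulse energy in the short gate corresponds to a surface roughly 1.5 m away.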
Microsoft was granted this patent yesterday, July 10, 2018. It was originally filed in Q1 2016.
Patently Mobile presents only a brief summary of granted patents with associated graphics for journalistic news purposes as each Granted Patent is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any Granted Patent should be read in its entirety for full details.