Google Patent reveals plans to use their Radar technology and next-gen Soli Chip to work with Retail AR Applications and more
The Google Pixel 4 smartphone introduced motion-sensing radar. The first wave of technology behind Project Soli supports three capabilities described as "presence, reach and gestures," which were limited out of the gate. Last January Patently Apple posted a report titled "Google won a Major Patent for an In-Air Gesturing System last Month and now the FCC has just approved its use on Monday." Then in June 2019 we posted a second report titled "Google is reportedly set to introduce a new In-Air Gesturing System for the Pixel 4 called 'Aware.'"
After Google released the Pixel 4 with the Soli radar chip, The Verge posted a report in October 2019 titled "Google's Project Soli: The Tech behind Pixel 4's Motion Sense Radar." The Verge noted that there was a tutorial, featuring Pokémon, but it only covered the basics.
Today Google was granted another patent relating to its radar technology, and it appears that Google has bigger AR plans for the technology in the future, including commerce applications. Instead of store staff running around with price-sticker guns to mark each item, retailers will be able to work with Google so that future Pixel phones will let users simply aim the camera at an item in the store and see the price augmented over or near it, as illustrated in the patent figures below.
Google's patent FIG. 1 below illustrates an example environment in which techniques enabling a smartphone-based radar system facilitating ease and accuracy of user interactions with displayed objects in an augmented-reality interface can be implemented.
Google's patent FIG. 12 above illustrates an example environment #1200 that provides additional details of the method. The user's movement of the device is illustrated by a dotted-line arrow 1208. Using the described techniques, the radar-based application 106 continues to maintain the AR element 116 and the touch input control 118 at approximately the same location, near the corner of the display 108.
Overall, Google's granted patent describes techniques and systems that enable a smartphone-based radar system facilitating ease and accuracy of user interactions with displayed objects in an augmented-reality interface.
As noted, making complex inputs for augmented-reality (AR) applications using a touch input interface can be challenging because it is difficult to manipulate three-dimensional (3D) objects using a two-dimensional (2D) touchscreen. Thus, users may not realize the full potential of their AR applications because of the limitations of touch input methods.
The techniques and systems employ a radar field to accurately determine three-dimensional (3D) gestures (e.g., a gesture that comprises one or more movements, in any direction, within a 3D space illuminated by a radar field).
The 3D gestures can be used to interact with augmented-reality (AR) objects. Because the techniques and systems use the radar field to enable an electronic device to recognize gestures made in a 3D space around the electronic device, the user does not have to touch the screen or obstruct the view of the objects presented on the display.
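To make the idea concrete, here is a minimal Kotlin sketch of how radar frames might be reduced to a gesture decision. Everything in it (the RadarPoint and RadarFrame types, the thresholds, and the classification logic) is our own illustration and not Google's or Soli's actual API; the patent only describes the technique at a functional level.

```kotlin
// Hypothetical sketch: classifying a simple in-air swipe from radar frames.
// The RadarPoint/RadarFrame types, thresholds, and logic are illustrative
// only; they are not part of any Google or Soli API.

data class RadarPoint(val x: Float, val y: Float, val z: Float, val strength: Float)
data class RadarFrame(val timestampMs: Long, val points: List<RadarPoint>)

enum class Gesture3D { SWIPE_LEFT, SWIPE_RIGHT, NONE }

// The strongest reflection in a frame, taken here as the likely hand position.
fun dominantPoint(frame: RadarFrame): RadarPoint? =
    frame.points.maxByOrNull { it.strength }

// Classifies a swipe from the net horizontal motion of the dominant point.
fun classifySwipe(frames: List<RadarFrame>, minTravelMeters: Float = 0.05f): Gesture3D {
    val track = frames.mapNotNull(::dominantPoint)
    if (track.size < 2) return Gesture3D.NONE
    val dx = track.last().x - track.first().x
    return when {
        dx > minTravelMeters -> Gesture3D.SWIPE_RIGHT
        dx < -minTravelMeters -> Gesture3D.SWIPE_LEFT
        else -> Gesture3D.NONE
    }
}

fun main() {
    // Simulated frames: the hand moves about 12 cm to the right over 300 ms.
    val frames = (0..3).map { i ->
        RadarFrame(i * 100L, listOf(RadarPoint(-0.06f + i * 0.04f, 0.1f, 0.2f, 0.9f)))
    }
    println(classifySwipe(frames)) // prints SWIPE_RIGHT
}
```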
The techniques and systems can enable the electronic device to recognize both the 3D gestures and two-dimensional (2D) touch inputs in AR environments. Often, AR content is related to real objects. Thus, when a user moves a device around to view real objects that are AR-enabled, the AR content may be presented on a display, as 2D touch input controls, while the real object is framed in the display.
For example, AR content for a decorative plant in a furniture store may include product information and purchase options. Using the radar field with the described techniques, the electronic device can determine that a user is reaching toward the 2D touch input controls on the display and fix or lock the touch input controls to the 2D touchscreen at a particular location. This allows the user to interact with the controls, even if the user moves the electronic device so that the real object is no longer framed in the display.
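Below is a minimal sketch of that reach-and-lock behavior, assuming a hypothetical radar-derived "reach detected" signal and invented types; the patent does not publish code, so this only illustrates the described behavior of fixing a touch control in place once a reach toward the display is detected.

```kotlin
// Hypothetical sketch of "fixing" a 2D touch control when the radar field
// detects the user reaching toward the display. All types are illustrative
// and are not Google's implementation.

data class ScreenPos(val x: Float, val y: Float)

class ArTouchControl(var position: ScreenPos, var locked: Boolean = false)

// Called once per UI frame. While the tracked real object is still framed,
// the control follows it; once radar detects a reach, the control is locked
// in place so it stays usable even if the object leaves the camera frame.
fun updateControl(
    control: ArTouchControl,
    objectOnScreen: ScreenPos?,   // null when the real object is no longer framed
    reachDetected: Boolean        // assumed radar-derived signal
) {
    if (reachDetected) control.locked = true
    if (!control.locked && objectOnScreen != null) {
        control.position = objectOnScreen
    }
    // When locked, the position is intentionally left unchanged.
}

fun main() {
    val addToCart = ArTouchControl(ScreenPos(0.8f, 0.9f))
    updateControl(addToCart, ScreenPos(0.5f, 0.5f), reachDetected = false) // object framed
    updateControl(addToCart, null, reachDetected = true)                   // user reaches, object lost
    println("${addToCart.position} locked=${addToCart.locked}")            // control stays put
}
```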
Additionally, the techniques and systems can enable the device to determine 3D gestures that can be used to manipulate AR objects in three dimensions. The techniques thereby improve the user's efficiency, work flow, and enjoyment when using AR applications by enabling convenient and natural 3D gestures for interacting with 3D objects without having to obstruct the user's view.
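As a simple illustration of that gesture-to-manipulation mapping, the sketch below (again with invented types and constants) converts a "pull closer / push away" hand movement, measured as a change in distance from the device, into a scale change on an AR object.

```kotlin
// Hypothetical sketch: mapping a "pull closer / push away" 3D gesture to the
// scale of an AR object. Types and constants are illustrative only.

data class ArObject(var scale: Float)

// Scales the object based on how far the hand moved toward or away from the
// device between two radar samples (distances in meters).
fun applyDepthGesture(obj: ArObject, startDistance: Float, endDistance: Float, sensitivity: Float = 2.0f) {
    val delta = startDistance - endDistance   // positive when the hand moves closer
    obj.scale = (obj.scale * (1f + delta * sensitivity)).coerceIn(0.25f, 4f)
}

fun main() {
    val plant = ArObject(scale = 1f)
    applyDepthGesture(plant, startDistance = 0.30f, endDistance = 0.20f) // hand moves 10 cm closer
    println(plant.scale)  // roughly 1.2: the object grows as the hand pulls closer
}
```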
Consider, for example, an electronic device that includes a radar-based application with an AR interface that provides added functionality when shopping. For example, the radar-based application may allow a user to view real objects in a store and display AR objects associated with the real objects, such as a virtual price tag or a link that allows the user to add the real object to a virtual shopping cart.
In this example, the electronic device may include multiple cameras to enable the AR interface. A conventional AR interface is configured primarily for "discovery" (e.g., panning around a real environment to display whatever AR content is available). Thus, the user may move the device around in the real environment, and touch-activated AR content related to a real object displayed on the screen can be presented near that object (e.g., an "add to cart" button).
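A rough sketch of that discovery flow might look like the following; the product catalog, recognition types, and overlay offsets are purely hypothetical stand-ins for whatever object recognition and AR rendering an actual implementation would use.

```kotlin
// Hypothetical sketch of the "discovery" flow: when an AR-enabled real object
// is recognized in the camera frame, its AR content (price, add-to-cart link)
// is placed near it on screen. The catalog and types are invented examples.

data class RecognizedObject(val productId: String, val screenX: Float, val screenY: Float)
data class ArOverlay(val label: String, val x: Float, val y: Float)

val catalog = mapOf("plant-001" to "Decorative plant \$24.99 | Add to cart")

// Builds an overlay for every recognized object that has catalog AR content.
fun buildOverlays(objectsInFrame: List<RecognizedObject>): List<ArOverlay> =
    objectsInFrame.mapNotNull { obj ->
        catalog[obj.productId]?.let { info ->
            // Offset slightly so the overlay sits near, rather than on, the object.
            ArOverlay(info, obj.screenX + 0.05f, obj.screenY - 0.05f)
        }
    }

fun main() {
    val frame = listOf(RecognizedObject("plant-001", 0.4f, 0.6f))
    println(buildOverlays(frame))
}
```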
Google's patent FIG. 6 below illustrates an example scheme #600 implemented by the radar system. This is highly technical, and you can review the details of FIG. 6 and beyond here.
Google's patent FIG. 7 above illustrates another example environment in which the described techniques can be implemented, with Google showing the 3D Gesture module #708 in action.
Google's granted patent 20200064996, published today by the U.S. Patent and Trademark Office, was originally filed in August 2018.