Microsoft invents a Tablet, Notebook Accessory in the form of a 'Smart Backpack' for Students and Sports Enthusiasts
Last week the U.S. Patent Office published a patent application from Microsoft that reveals concepts related to a smart backpack that could be used by students and sports enthusiasts such as hikers, skiers and bikers. The backpack may receive a contextual voice command from a user. The contextual voice command may include a non-explicit reference to an object in an environment. The backpack may use the sensors to sense the environment, use an artificial intelligence engine to identify the object in the environment, and use a digital assistant to perform a contextual task in response to the contextual voice command.
Microsoft notes in their filing that the functionality and usefulness of conventional digital assistants are mostly limited to the home surroundings. Accordingly, such digital assistants are not available or useful when users are on the go or out and about.
Second, many conventional digital assistants that are available on mobile devices, such as smartphones, tablets, and laptops, require the user to divert their attention and focus away from the task at hand, because the mobile devices require manual operation using the user's hands and require the user to look at the mobile device. For instance, a user may be required to stop whatever they're doing; look for and take out the device from a pocket, purse, backpack, etc.; press buttons or move switches; tap or swipe touchscreens; look at the display and navigate changing graphical user interfaces (GUIs); and/or put the device back into their pocket, purse, backpack, etc.
These requirements make it difficult to use conventional digital assistants in many circumstances when the user is preoccupied with an ongoing task, the user's hands are occupied, and/or when the mobile device is stowed away. For example, using conventional digital assistants can be inconvenient when the user is skiing while wearing gloves and holding ski poles; when the user is biking and holding the bike handles; or when the user has stowed the device inside a pocket, purse, or backpack.
Third, conventional digital assistants are not context-aware. That is, conventional digital assistants are incapable of perceiving the user's surroundings and thus require the user to provide overly explicit commands.
For example, conventional digital assistants cannot see what the user sees and cannot hear what the user hears. Accordingly, the user is required to speak to conventional digital assistants in an unnatural and cumbersome manner, contrary to how the user would normally speak to another person who can perceive the user's surroundings at the same time.
Such an unnatural interaction with conventional digital assistants can discourage users from using conventional digital assistants. The lack of contextual information renders conventional digital assistants very difficult or even impossible to use in many scenarios in which the user may wish to perform certain tasks relating to the environment.
The present concepts solve the above-discussed problems associated with conventional digital assistants. First, a digital assistant consistent with the present concepts is available with a wearable worn by the user. Second, a user can interact with the digital assistant hands-free using voice commands and one or more of auditory feedback, visual feedback, and/or haptic feedback, without being distracted from the current task at hand. Third, the digital assistant can perceive the user's surroundings and thus is context-aware, enabling the user to provide contextual commands to the digital assistant.
A digital assistant consistent with the present concepts has several advantages and provides many benefits to the user. The digital assistant can be with the user wherever they go so long as they bring the wearable with them. The user can conveniently utilize the digital assistant using voice commands and need not free their hands or distract their eyes from whatever activity they are currently engaged in.
Furthermore, the user can provide contextual commands that require some understanding or perception of the environment they are in, because the digital assistant is capable of sensing and interpreting the user's surroundings. The present concepts allow the user to form commands relating to the environment in a more natural way to cause the digital assistant to perform contextual actions based on the environment around the user.
Microsoft's patent FIG. 1 illustrates a smart backpack with various components that are present on the straps of the wearable device.
The backpack may include one or more buttons (#108, e.g., switches). The buttons may be used to control any components of the backpack. For instance, the buttons may control the battery (#106) to power on, power off, sleep, hibernate and/or wake the backpack. The buttons may be operated by pressing, long pressing, holding, double clicking, tapping, touching, squeezing, flipping, and/or rotating the buttons.
The buttons may also be used to pair the backpack with a companion device such as a notebook, tablet or smartphone that is in a rear pocket of the backpack. The buttons may be used to activate or provide voice commands to the digital assistant. Voice commands may include a request to perform a certain function and/or a query seeking certain information. The buttons may be located on the strap of the backpack, on the battery, or elsewhere on the backpack.
The backpack may include a camera (#110) for visually sensing the environment surrounding the user. For example, the camera may be attached to the strap of the backpack and may face the front of the user, as illustrated in FIG. 1.
In one implementation, the camera may be embedded inside the strap, such that the camera is discreetly hidden and/or less noticeable. Alternative camera configurations are possible. For instance, one or more cameras may be positioned to face the rear, sides, down, and/or up above the user. For example, the cameras may be positioned on each of the straps to capture the environment in a direction the user's body is facing.
Another camera, such as a fisheye camera, may be positioned on the straps to capture the user's face (e.g., where the user is looking). Collectively, the cameras are able to provide data about the orientation of the user's body and the user's head.
Microsoft's patent FIG. 2 below illustrates an overview of the smart backpack system, including an artificial intelligence engine that may, for example, interpret voice commands from the user, sense the environment surrounding the user, perform tasks in response to the voice commands, and/or generate outputs to the user.
Microsoft's patent FIGS. 3A and 3B below present a skiing scenario.
Microsoft further notes, suppose that the user is unsure which way to ski in order to stay in bounds. Conventionally, the user may need to stop skiing, release their ski poles from their hands, take off their gloves in the frigid weather, reach into their pocket or backpack to pull out their smartphone with shaking bare hands, and manually use a map app or a conventional digital assistant to determine which way they should go to stay in bounds.
On the contrary, with the smart backpack, the user may conveniently ask the backpack "Can I ski this direction?" In this example, the microphone on the backpack may record the user's voice command and send the audio recording to the speech recognition module. The speech recognition module may then interpret the audio recording of the voice command into text. The cognitive module may interpret the text transcript of the voice command and recognize that the pronoun "this" is a contextual signal, and therefore, the voice command provided by the user is a contextual voice command that references the environment surrounding the user.
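The contextual-signal check described above could be sketched roughly as follows. This is a hypothetical illustration only; the filing does not specify an implementation, and the function and word-list names are invented for this example:

```python
# Hypothetical sketch: the cognitive module flags a transcript as
# "contextual" when it contains a non-explicit reference such as the
# pronoun "this". (Names are illustrative, not from the filing.)
CONTEXTUAL_WORDS = {"this", "that", "these", "those", "here", "there"}

def is_contextual(transcript: str) -> bool:
    """Return True if the transcript references the environment
    through a non-explicit pronoun or deictic word."""
    words = {w.strip(".,?!").lower() for w in transcript.split()}
    return bool(words & CONTEXTUAL_WORDS)
```

A command like "Can I ski this direction?" would be flagged as contextual and trigger environment sensing, while "What's the weather tomorrow?" would not.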
Accordingly, the backpack may attempt to perceive the environment in one or more ways. For example, the camera on the backpack may be activated to record the environment. Where the camera faces the front of the backpack, the camera may be pointing in the same direction that the user is facing. In this example scenario, the camera may capture an image recording of the environment including, for example, the ski slopes, the ski lifts, the ski lift poles, the mountains, the trees, etc.
The backpack may use the compass to determine the cardinal direction that the user is facing. The backpack may use GPS to determine the geographical location of the user. By sensing the environment in one or multiple ways, the backpack may interpret and understand that the pronoun "this" in the voice command provided by the user is referring to a specific cardinal direction (e.g., west) from a specific geographical location where the user is standing. Accordingly, the backpack may determine that the user is at a specific ski resort, may obtain the slopes map for that specific ski resort by accessing a ski resort map database, determine which direction the user should ski to stay in bounds, and answer the user's question by formulating an appropriate response. For example, as shown in FIG. 3B, the backpack may use the speaker to produce an auditory response "No. That direction is out of bounds. Ski to your right to stay in bounds."
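The compass-based resolution step in this scenario could look something like the sketch below, assuming a heading in degrees from the compass and an in-bounds direction set from a resort-map lookup. Both function names and the bounds representation are hypothetical stand-ins; the patent does not describe an implementation:

```python
def heading_to_cardinal(heading_deg: float) -> str:
    """Map a compass heading (0 = north, 90 = east, ...) to one of
    eight cardinal directions."""
    names = ["north", "northeast", "east", "southeast",
             "south", "southwest", "west", "northwest"]
    return names[round(heading_deg % 360 / 45) % 8]

def bounds_answer(heading_deg: float, in_bounds: set) -> str:
    """Answer 'Can I ski this direction?' given the wearer's compass
    heading and the set of in-bounds directions from a (hypothetical)
    resort-map lookup."""
    direction = heading_to_cardinal(heading_deg)
    if direction in in_bounds:
        return f"Yes, {direction} is in bounds."
    return f"No. That direction ({direction}) is out of bounds."
```

With a westward heading and an in-bounds set that only contains "east", this would produce an out-of-bounds answer much like the one in FIG. 3B.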
In patent FIG. 4 below is another scenario. A user views information about a concert on a poster. Instead of pulling out a smartphone from their pocket, the user can quickly and conveniently say, "Hey Cortana, take a photo of this poster." The user could also ask Cortana to place the event in a calendar app or to send the photo to certain contacts/friends.
The user could also ask Cortana "Who is this band?" In response, the artificial intelligence engine may use the camera to capture an image recording of the poster, the cognitive module may understand that the user is asking for information about the band named the Beatles, and the digital assistant may access one or more data sources to provide information about the Beatles to the user by outputting an answer through the speaker: "The Beatles were a British rock music band popular in the 1960s. The band consisted of four men named John Lennon, Paul McCartney, George Harrison, and Ringo Starr…" In this example, the backpack may be able to provide information to the user about the surrounding environment.
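The identify-then-look-up flow in this poster scenario can be summarized in a few lines. Both `identify` and `lookup` are hypothetical stand-ins here (e.g., a vision model and a knowledge-base query); the filing does not name any concrete services:

```python
def answer_about_object(image: bytes, identify, lookup) -> str:
    """Hypothetical sketch: identify the object the user is referring
    to in the camera frame, then query a knowledge source about it."""
    entity = identify(image)        # e.g. a vision model reads the poster
    return f"{entity}: {lookup(entity)}"
```

The key design point is the separation of concerns the patent describes: sensing resolves the non-explicit reference ("this band") to a concrete entity, and only then does the assistant perform an ordinary information lookup.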
The U.S. Patent Office published Microsoft's patent application on March 18, 2021 which was originally filed in September 2019.
At present, Apple has no equivalent patent on record that we're aware of. This could be something that an Apple developer could bring to market, though Apple may focus on bringing this type of functionality to future smartglasses, which would end up being a superior, lightweight solution.