Apple files a patent for a possible new dimension to Shazam for a wide array of devices, including an MR Headset
On Thursday the US Patent & Trademark Office published a patent application from Apple that relates to a possible next-gen feature for their song-identifying app 'Shazam.' The patent suggests that a next generation of this app will function on many more devices (headphones, an iPhone, a Mixed Reality HMD, an iPad, smart contact lenses, a heads-up display on a vehicle windshield, etc.).
More importantly, the patent describes an all-new feature that could determine that a user is interested in audio content by detecting a movement, such as a head bob, and then trigger the app to identify the tune you're enjoying based on your head movement to the beat.
The method identifies a time-based relationship between one or more elements of the audio and one or more aspects of the body movement, based on first sensor data (capturing the audio) and second sensor data (capturing the movement).
For example, this may involve determining that a user of the device is bobbing their head to the beat of the music that is playing aloud in the physical environment. Such head bobbing may be recognized as a passive indication of interest in the music.
In another example, user motion is recognized as an indication of interest based on its type (e.g., corresponding to excited behavior) and/or the movement following shortly after the time at which a significant event occurs. For example, this may involve determining that a particular song is playing, and that the user is interested in the song based on his or her movement matching the beat of the song.
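To make the beat-matching idea concrete, here is a minimal Swift sketch of how such a time-based check could work. It is not Apple's implementation; the function name, inputs (beat and head-bob timestamps), and thresholds are all assumptions for illustration.

```swift
import Foundation

// Sketch only (not Apple's code): decide whether head-bob peaks line up with
// detected beat times closely enough to count as a passive sign of interest.
// `beatTimes` would come from audio analysis, `bobTimes` from motion sensors;
// both are hypothetical inputs here, expressed in seconds.
func movementMatchesBeat(beatTimes: [TimeInterval],
                         bobTimes: [TimeInterval],
                         tolerance: TimeInterval = 0.15,
                         requiredMatchRatio: Double = 0.7) -> Bool {
    guard !beatTimes.isEmpty, !bobTimes.isEmpty else { return false }

    // Count head bobs that land within `tolerance` seconds of any beat.
    let matched = bobTimes.filter { bob in
        beatTimes.contains { abs($0 - bob) <= tolerance }
    }.count

    // Treat the movement as beat-synchronised if most bobs align with beats.
    return Double(matched) / Double(bobTimes.count) >= requiredMatchRatio
}

// Example: bobs roughly every 0.5 s against a 120 BPM beat grid.
let beats = Array(stride(from: 0.0, through: 4.0, by: 0.5))
let bobs: [TimeInterval] = [0.02, 0.51, 1.04, 1.48, 2.03, 2.55]
print(movementMatchesBeat(beatTimes: beats, bobTimes: bobs)) // prints "true"
```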
Various actions may be performed proactively based on identifying interest in the content. As examples, the device may present an identification of the content (e.g., displaying the name of the song, artist, etc.), present text corresponding to words in the content (e.g., lyrics), and/or present a selectable option for replaying the content, continuing to experience the content after leaving the physical environment, purchasing the content, downloading the content, and/or adding the content to a playlist.
In another example, a characteristic of the content (e.g., music type, tempo range, type(s) of instruments, emotional mood, category, etc.) is identified and used to identify additional content for the user.
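As a rough illustration of the kinds of proactive follow-ups listed above, the hypothetical Swift enum below models them as selectable actions; the type and case names are invented for this sketch and are not from the filing.

```swift
// Hypothetical sketch of proactive options once a song has been identified.
enum ProactiveAction {
    case showIdentification(title: String, artist: String)
    case showLyrics(String)
    case offerReplay
    case offerPurchase
    case offerDownload
    case addToPlaylist(name: String)
}

// Example: a small set of suggestions surfaced for an identified track.
let suggestions: [ProactiveAction] = [
    .showIdentification(title: "Example Song", artist: "Example Artist"),
    .offerReplay,
    .addToPlaylist(name: "Heard Around Town")
]
```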
Device resources may be used efficiently in determining that a user is interested in audio content. This may involve moving through different power states based on different triggers at the device. For example, audio analysis may be performed selectively, based upon detecting a body movement, e.g., a head bob, foot tapping, leap of joy, fist pump, facial reaction, or other movement indicative of user interest.
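A simple way to picture this staged, power-aware behavior is a small state machine that only escalates to full audio analysis after an interest-like movement is seen. The Swift sketch below is an assumption-laden illustration of that idea (state names and transitions are mine), not Apple's design.

```swift
import Foundation

// Sketch of a staged pipeline: idle in low power, wake on motion, and only
// run the expensive audio capture + song identification once the movement
// looks like a sign of interest (e.g., bobbing in time with ambient sound).
enum ListeningState {
    case idle            // low-power: only coarse motion sensing active
    case motionDetected  // movement seen; classifying whether it signals interest
    case analyzingAudio  // full audio analysis / song identification running
}

struct InterestPipeline {
    private(set) var state: ListeningState = .idle

    mutating func handleMotionEvent(isInterestLike: Bool) {
        switch state {
        case .idle:
            state = .motionDetected
        case .motionDetected where isInterestLike:
            state = .analyzingAudio
        case .motionDetected:
            state = .idle            // movement didn't suggest interest; power back down
        case .analyzingAudio:
            break                    // already doing the expensive work
        }
    }

    mutating func finishAudioAnalysis() {
        state = .idle
    }
}

// Example usage of the hypothetical pipeline.
var pipeline = InterestPipeline()
pipeline.handleMotionEvent(isInterestLike: false) // .idle -> .motionDetected
pipeline.handleMotionEvent(isInterestLike: true)  // .motionDetected -> .analyzingAudio
pipeline.finishAudioAnalysis()                    // back to .idle
```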
Apple's patent FIG. 3 below illustrates the exemplary electronic device of FIG. 1 obtaining movement data according to the implementations disclosed; FIG. 4 is a flowchart illustrating a method for identifying interest in audio content by determining that a movement has a time-based relationship with detected audio content.
Apple's patent FIG. 2 below represents a view from a Mixed Reality Headset wherein the user is able to see augmented content (#265) that includes an information bubble with information and features selected based on detecting a body movement and the audio within the physical environment.
According to some implementations, the electronic device #105 in FIG. 2 above generates and presents an extended reality (XR) environment to one or more users. An extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic device.
For more details, review Apple's patent application number 20220291743.
Apple Inventors
Brian Temple: Software Engineer, Technology Development Group
Devin Chalmers: Experience Prototyping Lead
Tom Salter: Senior Engineering Manager