This summer, audio has surfaced as a new patent trend from Apple. So much so that we've now listed "Audio Related" as a new sidebar category of technology that we'll be following more closely going forward. The new archive, set up just this morning, already contains seven patents going back to this past July.
The first patent of interest in this new category was one that Patently Apple discovered at the Australian Patent Office. Our report covering that invention was titled "An Australian Apple Patent Describes a Smart Multi-Speaker Audio System Designed for TV Live Streaming."
A follow-up patent on that very theme was covered days later in a report, this time based on a U.S. patent, titled "Apple Invents a Rotationally Symmetric Speaker Array System that can detect where the listener is in the Room."
The audio patent category seemed to pop up out of nowhere, and yet it seems more than plausible given that Apple introduced its new HomePod smart speaker at this year's WWDC.
Yesterday the US Patent & Trademark Office published yet another patent application from Apple adding to this common theme of patents that we've hyperlinked to above. The invention relates to an audio system that automates the detection, setup, and configuration of distributed speaker arrays using video and/or audio sensors.
Yesterday Patently Apple posted a video review of the Apple TV 4K by Nilay Patel from The Verge. One of Patel's pet peeves with the new Apple TV 4K was that it didn't support Dolby Atmos, Dolby's high-end surround sound standard. Of course, Atmos was designed to work in theaters where speakers are everywhere, including the ceiling, something the masses don't have at home. Yet the principles could be adapted to home theaters.
When you combine the major theme of patents that we've pointed to including today's addition, it appears that this is what Apple is trying to achieve for a future Apple TV system. Perfect surround sound that's so intelligent that it smashes the concept of stereo sound in the home and introduces us to true intelligent audio.
The intelligent sound system uses an iDevice camera with a depth sensor to analyze where people are sitting and what objects are in the room, and automatically configures the speaker arrays to ensure that surround sound is heard by all, even if family or friends aren't exactly centered on the screen. This would be an amazing achievement if the Apple team could eventually bring it to market.
Speaker arrays may reproduce pieces of sound program content to a user through the use of one or more audio beams. For example, a set of speaker arrays may reproduce front left, front center, and front right channels for a piece of sound program content (e.g., a musical composition or an audio track for a movie). Although speaker arrays provide a wide degree of customization through the production of audio beams, conventional speaker array systems must be manually configured each time a new user and/or a new speaker array are added to the system. This requirement for manual configuration may be burdensome and inconvenient as speaker arrays are added to a listening area or moved to new locations within the listening area.
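To illustrate the audio beam idea in general terms, here's a minimal sketch of a classic delay-and-sum beamformer for a linear speaker array: each driver is delayed so that its wavefront adds constructively toward a chosen angle. The patent doesn't disclose its beamforming math; the driver spacing, angle convention, and function names below are assumptions for illustration only.

```python
# Minimal delay-and-sum beam-steering sketch (illustrative; not from the patent).
# Each driver in a linear array is delayed so the combined wavefront points
# toward a chosen angle off the array's broadside axis.
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature


def steering_delays(num_drivers, spacing_m, angle_deg):
    """Per-driver delays (seconds) to aim a beam at angle_deg off broadside.

    num_drivers: drivers in the linear array
    spacing_m:   distance between adjacent drivers, in meters
    angle_deg:   0 = straight ahead; positive angles steer toward higher indices
    """
    angle = math.radians(angle_deg)
    delays = [i * spacing_m * math.sin(angle) / SPEED_OF_SOUND
              for i in range(num_drivers)]
    offset = min(delays)          # shift so every delay is non-negative
    return [d - offset for d in delays]


# Aim a four-driver array with 5 cm spacing 30 degrees off-axis:
delays = steering_delays(num_drivers=4, spacing_m=0.05, angle_deg=30)
```

In a real system these delays would be applied per-driver before output; a negative angle simply reverses which end of the array fires first.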
Optimizing the Performance of an Audio Playback System with a Linked Audio/Video Feed
An audio system is provided that efficiently detects speaker arrays in a listening area and configures the speaker arrays to output sound. In one embodiment, the audio system may include a computing device that operates on a shared network with one or more speaker arrays and an audio source. The computing device may detect and record the addresses and/or types of speaker arrays on the shared network.
In one embodiment, a camera associated with the computing device (e.g., an iPhone) may capture a video of the listening area, including the speaker arrays. The captured video may be analyzed to determine the location of the speaker arrays, one or more users, and/or the audio source in the listening area. These locations may be determined relative to objects within the listening area.
While capturing the video, the speaker arrays may be driven to sequentially emit a series of test sounds into the listening area. As the test sounds are being emitted, a user may be prompted to select which speaker arrays in the captured video emitted each of the test sounds. Based on these inputs from the user, the computing device may determine an association between the speaker arrays on the shared network and the speaker arrays in the captured video. This association indicates a position of the speaker arrays detected on the shared network based on the previously determined locations of the speaker arrays in the captured video.
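The association step described above can be sketched in a few lines: each speaker on the shared network plays a test sound in turn, and the user indicates which speaker in the captured video produced it. Everything here — the function names, the data shapes, the scripted "user" — is a hypothetical illustration; the patent does not specify an implementation.

```python
# Hypothetical sketch of the patent's association step: network speakers emit
# test sounds sequentially, and the user picks the matching speaker on screen.

def associate_speakers(network_speakers, video_speakers, prompt_user):
    """Map each network speaker ID to a location found in the captured video.

    network_speakers: list of speaker IDs discovered on the shared network
    video_speakers:   dict mapping an on-screen label (e.g. "A") to an (x, y)
                      location determined from the captured video
    prompt_user:      callback that triggers a test sound on the given speaker
                      and returns the on-screen label the user selected
    """
    association = {}
    for speaker_id in network_speakers:      # drive test sounds one at a time
        label = prompt_user(speaker_id)      # user taps the speaker that sounded
        association[speaker_id] = video_speakers[label]
    return association


# Example run with a scripted "user" standing in for the on-screen prompt:
video_locations = {"A": (-1.5, 0.0), "B": (0.0, 0.2), "C": (1.5, 0.0)}
answers = {"spk-01": "B", "spk-02": "A", "spk-03": "C"}
result = associate_speakers(["spk-01", "spk-02", "spk-03"],
                            video_locations,
                            lambda sid: answers[sid])
```

The returned mapping gives each network-visible speaker a physical position, which is what makes the role assignment in the next step possible.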
Using the determined locations, the computing device may assign roles to each of the speaker arrays on the shared network. These roles may be transmitted to the speaker arrays and the audio source.
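Role assignment could then be as simple as ordering the speakers by horizontal position. The patent describes role assignment only abstractly, so the left-to-right ordering rule below is an assumption used purely to make the idea concrete for a three-speaker front stage.

```python
# Hypothetical sketch of role assignment: given speaker positions relative to
# the screen (x increasing to the right), hand out front-left, front-center
# and front-right channels by horizontal placement.

def assign_roles(positions):
    """positions: dict of speaker ID -> (x, y) location.
    Returns a dict of speaker ID -> channel role for a three-speaker setup."""
    ordered = sorted(positions, key=lambda sid: positions[sid][0])
    roles = ["front-left", "front-center", "front-right"]
    return {sid: role for sid, role in zip(ordered, roles)}


roles = assign_roles({"spk-01": (0.0, 0.2),
                      "spk-02": (-1.5, 0.0),
                      "spk-03": (1.5, 0.0)})
```

In the system the patent describes, these roles would then be transmitted to the speaker arrays and the audio source so each array knows which channel to render.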
In some embodiments, the test sounds emitted by the speaker arrays and the video captured by the computing device may be further analyzed to determine the geometry and/or characteristics of the listening area. This information may also be forwarded to the speaker arrays and/or the audio source. By understanding the configuration of the speaker arrays and the geometry/characteristics of the listening area, the speaker arrays may be driven to more accurately image sounds to the users.
Apple's patent FIG. 5A presented above illustrates a three-speaker array system set up in a living room, while FIG. 5B illustrates a four-speaker array system. Adding a speaker would be simple under Apple's new system: an iDevice sets up the extra speaker and performs a sound check that analyzes how the sound should be recalibrated for the room depending on where people are arranged in it.
In Apple's patent FIG. 8A we're able to see a user interface, from either an iOS or macOS device, for initiating calibration of the speaker arrays; FIG. 8B shows a user interface for capturing video of the listening area, while FIG. 8C shows a user interface for identifying speaker arrays in the captured video.
Apple's patent application 20170272886 was filed back in February 2017. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
As a patent tidbit, the image shown in FIGS. 8B and 8C is actually related to Apple TV and a granted patent that we covered in February titled "Apple Granted 38 Patents Today Covering an Original Apple TV Related Invention, Dual Mode Headphones and more." That patent was actually filed in 2006, before the iPhone came to market. The exact patent figure of the two gentlemen appears in the original patent as figure 15, as noted below.
The Message: the audio system of the future is seen to be in sync with Apple TV.
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. About Making Comments on our Site: Patently Apple reserves the right to post, dismiss or edit any comments. Those using abusive language or exhibiting negative behavior will be blacklisted on Disqus.