
Apple Wins Patent for Advanced Spatial Audio for Beamforming Speakers like HomePod, Home Theaters and beyond

(Cover image: Apple beamforming spatial audio patent, April 10, 2018)

 

The U.S. Patent and Trademark Office officially published a series of 57 newly granted patents for Apple Inc. today. In this last granted patent report of the day we cover two patents. The first and most important covers Apple's first granted patent regarding next-gen spatial-audio-centric systems, one that technically touches on the HomePod yet goes far beyond a single-speaker system. A key Apple patent claim points to the new audio technology being applied to a home theater entertainment system. The second granted patent in this report covers Apple's modular shelving system found in their newer Apple Stores.

 

Granted Patent: Spatial Audio Rendering for Beamforming Loudspeaker Array

 

Apple's first wave of next-gen audio patents began to roll out in August 2017. I covered the first in this batch in a patent report titled "An Australian Apple Patent Describes a Smart Multi-Speaker Audio System Designed for TV Live Streaming." The other patents in that initial wave can be found in our Audio Related archives.

 

The overview of the system goes far beyond just the HomePod. Patent claim #8 states the following: "The system of claim 1 wherein the piece of sound program content is the sound track of a motion picture film, and the plurality of audio channels are all of the audio channels of the sound track." This is in sync with other Apple patents on next-wave audio systems, as noted in the patent figure below from the August 2017 patent filing.

 

(Image: figure from Apple's August 2017 audio patent filing)

In today's granted patent, Apple later states: "Alternatively, there may be more than two input audio channels, such as for example the entire audio soundtrack in 5.1-surround format of a motion picture film or movie intended for large public theater settings."

 

Interestingly enough, the future Apple Store in Milan, Italy that's under construction will introduce an amphitheater, as noted in the patent concept drawing below. This will be an excellent opportunity for Apple to test out their spatial audio technology.

 

(Image: concept graphic supporting the new audio system)

 

Today Apple received their first granted patent relating to a next-gen audio system, one that specifically covers spatially selective rendering of audio by a loudspeaker array for reproducing stereophonic recordings in a room. While interesting, what does that actually mean?

 

Apple helps us understand what they're trying to achieve for the home market by explaining how this technology is being used today in the market. Apple notes that much effort has been spent on developing techniques that are intended to reproduce a sound recording with improved quality, so that it sounds as natural as in the original recording environment. The approach is to create around the listener a sound field whose spatial distribution more closely approximates that of the original recording environment. Early experiments in this field have revealed for example that playing a music signal through a loudspeaker in front of a listener and a slightly delayed version of the same signal through a loudspeaker that is behind the listener gives the listener the impression that he is in a large room and music is being played in front of him. The arrangement may be improved by adding a further loudspeaker to the left of the listener and another to his right, and feeding the same signal to these side speakers with a delay that is different than the one between the front and rear loudspeakers.
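The front-plus-delayed-rear trick described above is simple enough to sketch in a few lines. The sketch below is purely illustrative: the 15 ms delay and the function name are assumptions for the example, not values from the patent.

```python
import numpy as np

def make_front_rear_feeds(signal, delay_ms=15.0, sample_rate=48000):
    """Feed a signal to a front speaker and a slightly delayed copy to a
    rear speaker -- the classic trick that gives the listener the
    impression of sitting in a large room with the music up front."""
    delay_samples = int(round(delay_ms * sample_rate / 1000.0))
    front = signal
    # Prepend silence, then trim so both feeds stay the same length.
    rear = np.concatenate([np.zeros(delay_samples), signal])[: len(signal)]
    return front, rear
```

Adding side speakers, as the passage notes, just means generating further copies with different delay values.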

 

A stereophonic recording captures a sound environment by simultaneously recording from at least two microphones that have been strategically placed relative to the sound sources. During playback of these (at least two) input audio channels through respective loudspeakers, the listener is able to (using perceived, small differences in timing and sound level) derive roughly the positions of the sound sources, thereby enjoying a sense of space. In one approach, a microphone arrangement may be selected that produces two signals, namely a mid-signal that contains the central information, and a side signal that starts at essentially zero for a centrally located sound source and then increases with angular deviation (thus picking up the "side" information.) Playback of such mid and side signals may be through respective loudspeaker cabinets that are adjoining and oriented perpendicular to each other, and these could have sufficient directivity to in essence duplicate the pickup by the microphone arrangement.
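The mid and side signals described above have a well-known relationship to an ordinary left/right stereo pair. As a sketch (the 0.5 scaling is one common convention, an assumption here):

```python
import numpy as np

def lr_to_mid_side(left, right):
    """Convert a left/right stereo pair into a mid signal (the central
    information) and a side signal, which is essentially zero for a
    centrally located source and grows with angular deviation."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def mid_side_to_lr(mid, side):
    """Inverse transform: recover the original left/right channels."""
    return mid + side, mid - side
```

For a dead-center source, left equals right and the side signal vanishes, exactly as the passage describes for the microphone arrangement.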

 

Loudspeaker arrays such as line arrays have been used for large venues such as outdoors music festivals, to produce spatially selective sound (beams) that are directed at the audience. Line arrays have also been used in closed, large spaces such as houses of worship, sports arenas, and malls.

 

Apple's Invention

 

Because this granted patent bypassed the public patent application phase and surfaced directly as a granted Apple patent, we're going to provide you with a larger overview of the invention that explains beamforming and related technologies for HomePod and beyond.

 

Apple's invention aims to render audio with both clarity and immersion (a sense of space) within a room or other confined space, using a loudspeaker array. The system has a loudspeaker cabinet in which a number of drivers are integrated, with a number of audio amplifiers coupled to the inputs of the drivers.

 

A rendering processor receives a number of input audio channels (e.g., left and right of a stereo recording) of a piece of sound program content such as a musical work that is to be converted into sound by the drivers. The rendering processor has outputs that are coupled to the inputs of the amplifiers over a digital audio communication link. The rendering processor also has a number of sound rendering modes of operation in which it produces individual signals for the inputs of the drivers.

 

Decision logic (a decision processor) is to receive, as decision logic inputs, one or both of sensor data and a user interface selection. The decision logic inputs may represent, or may be defined by, a feature of a room (e.g., in which the loudspeaker cabinet is located), and/or a listening position (e.g., location of a listener in the room and relative to the loudspeaker cabinet).

 

Content analysis may also be performed by the decision logic, upon the input audio channels. Using one or more of content analysis, room features (e.g., room acoustics), and listener location or listening position, the decision logic is to then make a rendering mode selection for the rendering processor, in accordance with which the loudspeakers are driven during playback of the piece of sound program content. The rendering mode selection may be changed, for example automatically during the playback, based on changes in the decision logic inputs.
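How might that decision logic behave? The sketch below is one plausible reading of the patent text, not Apple's actual algorithm: the function name, mode labels, and the 0.5 correlation threshold are all illustrative assumptions.

```python
def select_rendering_mode(near_reflective_surface, correlation=None):
    """Pick a rendering mode from room placement and (optionally) a
    content-analysis correlation score between the input channels."""
    if near_reflective_surface:
        # Close to a wall: reflections can carry the ambient beams, so
        # the patent text favors the ambient-direct mode here.
        return "ambient-direct"
    # Away from any sound-reflective surfaces, fall back to a mid-side
    # mode; heavily decorrelated content might warrant a higher order.
    if correlation is not None and correlation < 0.5:
        return "mid-side-higher-order"
    return "mid-side"
```

As the passage notes, the selection could be re-evaluated automatically during playback as the decision logic inputs change.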

 

The sound rendering modes include a number of first modes (e.g., mid-side modes), and one or more second modes (e.g., ambient-direct modes). The rendering processor can be configured into any one of the first modes, or into one of the second modes. In one embodiment, in each of the mid-side modes, the loudspeaker drivers (collectively being operated as a beamforming array) produce sound beams having a principally omnidirectional beam pattern superimposed with a directional beam pattern.

 

In the ambient-direct mode, the loudspeaker drivers produce sound beams having i) a direct content pattern that is aimed at the listener location and is superimposed with ii) an ambient content pattern that is aimed away from the listener location. The direct content pattern contains direct sound segments (e.g., a segment containing direct voice, dialogue or commentary, that should be perceived by the listener as coming from a certain direction), taken from the input audio channels.

 

The ambient content pattern contains ambient or diffuse sound segments taken from the input audio channels (e.g., a segment containing rainfall or crowd noise that should be perceived by the listener as being all around or completely enveloping the listener.) In one embodiment, the ambient content pattern is more directional than the direct content pattern, while in other embodiments the reverse is true.

 

The capability of changing between multiple first modes and the second mode enables the audio system to use a beamforming array, for example in a single loudspeaker cabinet, to render music clearly (e.g., with a high directivity index for audio content that is above a lower cut-off frequency that may be less than or equal to 500 Hz) as well as being able to "fill" a room with sound (with a low or negative directivity index perhaps for the ambient content reproduction). Thus, audio can be rendered with both clarity and immersion, using, in one example, a single loudspeaker cabinet (think HomePod) for all content, e.g., that is in some but not all of the input audio channels or that is in all of the input audio channels, above the lower cut-off frequency.
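The directivity index mentioned above has a standard definition: the ratio, in decibels, of on-axis intensity to the intensity averaged over all directions. A small sketch (assuming a uniformly sampled far-field pattern; the function name is ours):

```python
import numpy as np

def directivity_index_db(pattern, on_axis_index=0):
    """Directivity index of a sampled far-field beam pattern, in dB.
    An omnidirectional beam scores 0 dB; a tightly focused 'direct'
    beam scores high and positive; a pattern that sprays most of its
    energy away from the axis can go negative, which is what the
    patent suggests for the ambient content."""
    intensity = np.abs(pattern) ** 2
    return 10.0 * np.log10(intensity[on_axis_index] / intensity.mean())
```

A dipole pattern, for instance, comes out around +3 dB on axis, while a uniform pattern is exactly 0 dB.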

 

In one embodiment, content analysis is performed upon the input audio channels, for example, using timed/windowed correlation, to find correlated content and uncorrelated content. Using a beamformer, the correlated content may be rendered in the direct content beam pattern, while the uncorrelated content is simultaneously rendered in one or more ambient content beams. Knowledge of the acoustic interactions between the loudspeaker cabinet and the room (which may be based in part on decision logic inputs that may describe the room) can be used to help render any ambient content.
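A minimal version of that windowed-correlation split might look like the following. To be clear, this is a sketch of the general technique the patent describes, not Apple's implementation; the window size, threshold, and routing choices are assumptions.

```python
import numpy as np

def split_direct_ambient(left, right, window=1024):
    """Per window, measure the normalized correlation between the left
    and right channels. Strongly correlated content is routed to the
    direct beam; weakly correlated (diffuse) content to the ambient
    beams."""
    direct = np.zeros_like(left)
    ambient = np.zeros_like(left)
    for start in range(0, len(left), window):
        l = left[start:start + window]
        r = right[start:start + window]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r))
        corr = np.sum(l * r) / denom if denom > 0 else 0.0
        mid = 0.5 * (l + r)
        if corr > 0.5:
            # Strongly correlated: dialogue/vocals go to the direct beam,
            # with the residual difference routed to the ambient beams.
            direct[start:start + window] = mid
            ambient[start:start + window] = 0.5 * (l - r)
        else:
            # Weakly correlated: treat the whole window as ambient.
            ambient[start:start + window] = mid
    return direct, ambient
```

Identical channels land entirely in the direct beam; uncorrelated material (rainfall, crowd noise) lands in the ambient beams, matching the behavior described above.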

 

For example, when a determination is made that the loudspeaker cabinet is placed close to an acoustically reflective surface, knowledge of such room acoustics may be used to select the ambient-direct mode (rather than any of the mid-side modes) for rendering the piece of sound program content.

 

In other cases of listener location and room acoustics, such as when the loudspeaker cabinet is positioned away from any sound reflective surfaces, one of the mid-side modes may be selected to render the piece of sound program content.

 

Each of these mid-side modes may be described as an "enhanced" omnidirectional mode, in which audio is played consistently across 360 degrees while also preserving some spatial qualities.

 

A beamformer may be used that can produce increasingly higher-order beam patterns, for example a dipole and a quadrupole, in which decorrelated content (e.g., derived from the difference between the left and right input channels) is added to, or superimposed with, a monophonic main beam (essentially an omnidirectional beam carrying the sum of the left and right input channels).
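In far-field terms, those superimposed patterns are easy to picture: an omnidirectional term carrying L+R plus a cos(nθ) lobe carrying L-R, where n = 1 gives a dipole and n = 2 a quadrupole. A hedged sketch (the gains and function name are illustrative, not from the patent):

```python
import numpy as np

def mid_side_beam_pattern(theta, mid_gain=1.0, side_gain=0.5, order=1):
    """Far-field gain pattern for a mid-side rendering mode: an
    omnidirectional monophonic main beam superimposed with a dipole
    (order=1) or quadrupole (order=2) beam for decorrelated content."""
    omni = mid_gain * np.ones_like(theta)            # L+R main beam
    directional = side_gain * np.cos(order * theta)  # L-R lobe
    return omni + directional
```

Raising the order narrows the directional lobes, which is what makes the "increasingly higher order" modes progressively more spatially selective.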

 

Apple's patent FIG. 1 below is a block diagram of an audio system having a beamforming loudspeaker array.

 

(Image: Apple patent FIG. 1)

Apple's patent FIG. 2A below is an elevation view of sound beams produced in a mid-side rendering mode; FIG. 2B shows the spatial variation in the rendered audio content, as a superposition of the sound beams of FIG. 2A, in a horizontal plane.

 

(Image: Apple patent FIGS. 2A & 2B)

Apple's patent FIG. 3A below is an elevation view of sound beam patterns produced by a higher order mid-side rendering mode; FIG. 3B shows the rendered beam content in the embodiment of FIG. 3A for the case of two input audio channels being available to form the beams.

 

(Image: Apple patent FIGS. 3A, 3B & 3C)

Apple's patent FIG. 4 below depicts an elevation view of an example of the sound beam patterns produced in an ambient-direct mode; FIG. 5 is a downward view onto a horizontal plane of a room in which the audio system is operating.

 

(Image: Apple patent FIGS. 4 & 5)

 

Apple's granted patent was originally filed in Q2 2017 and published today by the US Patent and Trademark Office. Apple applied for the patent on June 13, 2017, just 8 days after revealing the HomePod at their WWDC keynote on June 5th.

 

I had read a while back that "Spatial Audio" describes any of a variety of techniques for simulating the 3D sound field that occurs in a real environment. This could be done via a speaker array or via headphones. Of course that may also apply to a future head-mounted display system that Apple is working on, though Apple's current patent makes no reference to any such headsets.

 

Apple's inventors

 

Afrooz Family: Senior Audio Engineer; previously employed by THX

Mitch Lerner: Audio for Interaction Design

Sylvain Choisel: Senior Audio Technologist, Audio HW Design; previously employed by Philips as Innovation Engineer, Sound and Acoustics

Tomlinson Holman: Audio Direction

 

Patent: Modular Wall System for displaying a product

 

Apple was also granted a patent today for a modular shelf system that is used in their Apple Stores. Apple's granted patent 9,936,826 was originally filed in July 2016. Patently Apple covered this patent when it was first published as a patent application by the USPTO back on February 23, 2017.

 

You can review our earlier report for more details and patent figures.

 

(Image: Apple's modular shelving system for Apple Stores)


Patently Apple presents only a brief summary of granted patents with associated graphics for journalistic news purposes as each granted patent is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any granted patent should be read in its entirety for full details. About making comments on our site: Patently Apple reserves the right to post, dismiss or edit any comments. Those using abusive language or exhibiting negative behavior will be blacklisted on Disqus.

 

 

 
