Apple Wins a Patent for the AirPods Max's 3D Audio using a Virtual Acoustic System
On the final patent-granting day of 2020, the U.S. Patent and Trademark Office officially granted Apple a patent for a Virtual Acoustic System for headphones that produces 3D audio.
Apple notes that the term "headphones" is intended to encompass on-ear headphones, over-the-ear headphones (such as AirPods Max), and earbuds such as AirPods that deliver a distinct sound program to each ear of the listener with no significant crossover of each ear's sound program to the other ear.
Apple notes that the human auditory system modifies incoming sounds by filtering them depending on the location of the sound relative to the listener. The modified sound involves a set of spatial cues used by the brain to detect the position of a sound. Human hearing is binaural, using two ears to perceive two sound-pressure signals created by a sound.
Sound is transmitted in air by fluctuations in air pressure created by the sound source. The fluctuations in air pressure propagate from the sound source to the ears of a listener as pressure waves.
The sound pressure waves interact with the environment of the path between the sound source and the ears of the listener. In particular, the sound pressure waves interact with the head and the ear structure of the listener. These interactions modify the amplitude and the phase spectrum of a sound dependent on the frequency of the sound and the direction and the distance of the sound source.
These modifications can be described as a Head-Related Transfer Function (HRTF) and a Head-Related Impulse Response (HRIR) for each ear. The HRTF is a frequency response function of the ear.
It describes how an acoustic signal is filtered by the reflection properties of the head, shoulders, and most notably the pinna before the sound reaches the ear. The HRIR is a time response function of the ear. It describes how an acoustic signal is delayed and attenuated in reaching the ear, by the distance to the sound source and the shadowing of the sound source by the listener's head.
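To make the HRTF/HRIR relationship concrete: the HRTF is the Fourier transform of the HRIR. The sketch below builds a toy impulse response (the tap positions and gains are invented for illustration, not measured data) and takes its transform to obtain the corresponding frequency response:

```python
import numpy as np

# A toy HRIR (NOT measured data): a delayed direct path plus a couple of
# reflections, as might arise from the head, shoulders, and pinna.
fs = 48_000                      # sample rate in Hz
hrir = np.zeros(256)
hrir[20] = 1.0                   # direct path, delayed ~0.42 ms
hrir[35] = 0.4                   # an early reflection (e.g., shoulder)
hrir[50] = -0.2                  # a pinna-related reflection

# The HRTF is the frequency response of the ear: the Fourier transform
# of the HRIR.
hrtf = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(len(hrir), d=1 / fs)

# The magnitude in dB shows how this filtering boosts or cuts
# different frequencies before the sound reaches the ear.
mag_db = 20 * np.log10(np.abs(hrtf) + 1e-12)
print(f"peak-to-trough variation: {mag_db.max() - mag_db.min():.1f} dB")
```

The direction-dependent peaks and notches in this magnitude response are among the spatial cues the brain uses to localize a sound.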
A Virtual Acoustic System
A virtual acoustic system is an audio system (e.g., a digital audio signal processor that renders a sound program into speaker driver signals that are to drive a number of speakers) that gives a listener the illusion that a sound is emanating from somewhere in space when in fact the sound is emanating from loudspeakers placed elsewhere.
One common form of a virtual acoustic system uses a combination of headphones (e.g., earbuds) and binaural digital filters to recreate the sound as it would have arrived at the ears if there were a real source placed somewhere in space. In another example of a virtual acoustic system, crosstalk-cancelled loudspeakers (or crosstalk-cancelled loudspeaker driver signals) are used to deliver a distinct sound-pressure signal to each ear of the listener.
Binaural synthesis transforms a sound source that carries no audible information about its position into a binaural virtual sound source that includes audible information about the position of the sound source relative to the listener.
Binaural synthesis may use binaural filters to transform the sound source to the binaural virtual sound sources for each ear. The binaural filters are responsive to the distance and direction from the listener to the sound source.
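The binaural-filtering step described above can be sketched as follows; the two HRIRs here are placeholder filters that encode only an interaural time difference and level difference for a source off to the listener's right (illustrative values, not real head measurements):

```python
import numpy as np

fs = 48_000
t = np.arange(fs // 10) / fs
mono = np.sin(2 * np.pi * 440 * t)       # positionless mono source

# Toy per-ear binaural filters (illustrative, not measured):
hrir_right = np.zeros(64)
hrir_right[0] = 1.0                      # near ear: arrives early, full level
hrir_left = np.zeros(64)
hrir_left[30] = 0.5                      # far ear: ~0.6 ms later, attenuated

# Filtering the source with each ear's HRIR yields the binaural pair
# that carries audible position cues.
left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)   # (samples, 2) for playback
```

Played over headphones, the interaural delay and level difference encoded by the two filters place the virtual source toward the right ear.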
Sound pressure levels for sound sources that are relatively far from the listener will decrease at about the same rate in both ears as the distance from the listener increases.
The sound pressure level at these distances decreases according to the spherical wave attenuation for the distance from the listener. Sound sources at distances where sound pressure levels can be determined based on spherical wave attenuation can be described as far-field sound sources.
The far-field distance is the distance at which sound sources begin to behave as far-field sound sources. The far-field distance is greatest for sounds that lie on an axis that passes through the listener's ears and smallest on a perpendicular axis that passes through the midpoint between the listener's ears.
The far-field distance on the axis that passes through the listener's ears may be about 1.5 meters. The far-field distance on the perpendicular axis that passes through the midpoint between the listener's ears may be about 0.4 meters. Sound sources at the far-field distance or greater from the listener can be modeled as far-field sound sources.
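The spherical-wave attenuation described above can be checked numerically: amplitude falls off as 1/r, so sound pressure level drops about 6 dB for every doubling of distance. In this sketch, `far_field_gain` is a hypothetical helper name, and the boundary distances in the closing comment are the figures quoted above:

```python
import math

def far_field_gain(distance_m: float, ref_m: float = 1.0) -> float:
    """Spherical-wave (1/r) amplitude gain relative to a reference distance."""
    return ref_m / distance_m

# SPL drops ~6 dB for each doubling of the distance.
for d in (1.0, 2.0, 4.0):
    db = 20 * math.log10(far_field_gain(d))
    print(f"{d:.0f} m: {db:+.1f} dB")
# → 1 m: +0.0 dB
# → 2 m: -6.0 dB
# → 4 m: -12.0 dB

# Per the patent, this far-field model applies beyond roughly 1.5 m on the
# axis through the ears and roughly 0.4 m on the perpendicular axis.
```

Closer than those boundaries, the two ears see noticeably different distances to the source, so a simple shared 1/r gain no longer suffices and near-field modeling takes over.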
In Apple's summary, they note that it would be desirable to provide a way to synthesize binaural audio signals (that would drive respective earphone transducers at the left and right ears of a listener) for a virtual acoustic system, to create the illusion of a sound source moving toward or away from the listener between I) the end of the effective range of near-field modeling and II) the center of the listener's head or another in-head location.
In a device or method for rendering a sound program for headphones, a location is received for placing the sound program with respect to first and second ear pieces.
If the location is between the first ear piece and the second ear piece (an in-head location), then the sound program is filtered to produce low-frequency and high-frequency portions. The high-frequency portion is panned according to the location to produce first and second high-frequency signals. The low-frequency portion and the first high-frequency signal are combined to produce a first headphone driver signal to drive the first ear piece.
A second headphone driver signal is similarly produced by combining the low-frequency portion and the second high-frequency signal to produce a second in-head signal. The sound program may be a stereo sound program. The device or method may provide for rendering of the sound program at a location between the first ear piece and a near-field boundary. The location may be variable over time, so that the method can, for example, move the sound program gradually from an in-head position to an outside-the-head position, or vice versa (e.g., from outside-the-head to an in-head position).
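The split-pan-combine steps described above can be sketched as follows; the 1.5 kHz crossover, the constant-power pan law, and the function name are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def render_in_head(program: np.ndarray, pan: float, fs: int = 48_000,
                   crossover_hz: float = 1_500.0):
    """pan in [-1, 1]: -1 places the sound at the left ear piece, +1 at the right."""
    # Filter the program into low- and high-frequency portions at the crossover.
    spectrum = np.fft.rfft(program)
    freqs = np.fft.rfftfreq(len(program), d=1 / fs)
    low = np.fft.irfft(np.where(freqs < crossover_hz, spectrum, 0), len(program))
    high = program - low

    # Pan only the high-frequency portion (constant-power pan law, assumed).
    theta = (pan + 1) * np.pi / 4            # maps [-1, 1] -> [0, pi/2]
    high_left, high_right = np.cos(theta) * high, np.sin(theta) * high

    # Each headphone driver signal = shared low band + that ear's panned highs.
    return low + high_left, low + high_right

fs = 48_000
program = np.random.default_rng(0).standard_normal(fs // 10)  # 0.1 s test signal
left, right = render_in_head(program, pan=0.7, fs=fs)
```

Varying `pan` over time would move the perceived in-head position between the two ear pieces, matching the patent's idea of a location that is variable over time.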
Apple's patent FIG. 1 below is a view of an illustrative listener wearing headphones; FIG. 3 is a flowchart of a portion of a process for synthesizing a binaural program for a sound located in the in-head region between the ear pieces on the listener's ears; FIG. 7 is a block diagram for a portion of a circuit for processing a stereophonic sound program when the sound location is in the in-head region between the two ear pieces.
More specifically, Apple's patent FIG. 1 above shows a vector having an origin at the midpoint #110 between the two ear pieces #104 and #106, which is generally the center of the user's head. The vector extends through the first ear piece #104, shown as the ear piece for the right ear of the listener #100. The vector may be divided into regions by three boundaries: I) a boundary #114 at the ear piece #104, II) a boundary #118 where a near-field HRTF becomes effective, and III) a boundary #122 where a far-field HRTF becomes effective.
A virtual acoustic system according to the present disclosure may select the processing for a sound signal according to a desired placement of the sound signal in one of the regions between these boundaries.
Review Apple's granted patent 10,880,649 for finer details.
Another earphone-related patent granted to Apple today, covering Beats earphones with a formable ear hook, can be reviewed here. The lead inventor listed on the patent is Robert Boyd, who came from Beats.