Apple won a Major Spatial Audio patent last week covering 'Head-Related Transfer Function' (HRTF) Maps
Apple quietly added spatial audio to AirPods Pro in Q4 2020 and in December 2020 added spatial audio to AirPods Max. In May 2021, Apple introduced spatial audio with Dolby Atmos for Apple Music's entire catalog. Last week, Apple was granted a spatial-audio-related patent titled "Audio system and method of generating an HRTF map."
Apple notes that headphones can reproduce a spatial audio signal communicated by a device to simulate a soundscape around the user. An effective spatial sound reproduction can render sounds such that the user perceives the sound as coming from a location within the soundscape external to the user's head, just as the user would experience the sound if encountered in the real world.
When a sound travels to a listener from a surrounding environment in the real world, the sound propagates along a direct path, e.g., through air to the listener's ear canal entrance, and along one or more indirect paths, e.g., by reflecting and diffracting around the listener's head or shoulders. As the sound travels along the indirect paths, artifacts can be introduced into the acoustic signal that the ear canal entrance receives. User-specific artifacts can be incorporated into binaural audio by signal processing algorithms that use spatial audio filters.
For example, a head-related transfer function (HRTF) is a filter that contains all of the acoustic information required to describe how sound reflects or diffracts around a listener's head, torso, and outer ear before entering their auditory system.
To implement accurate binaural reproduction, a distribution of HRTFs at different angles relative to a listener can be determined. For example, HRTFs can be measured for the listener in a laboratory setting using an HRTF measurement system.
A typical HRTF measurement system includes a loudspeaker positioned statically to the side of the listener. The loudspeaker can emit sounds directly toward a head of the listener.
The listener can wear ear microphones, e.g., microphones inserted into the ear canal entrances of the listener, to receive the emitted sounds. Meanwhile, the listener can be controllably rotated, e.g., continuously or incrementally, about a vertical axis that extends orthogonal to the direction of the emitted sounds.
For example, the listener can sit or stand on a turntable that rotates about the vertical axis while the loudspeaker emits the sounds toward the listener's head. As the listener rotates, the relative angle between the direction that the listener faces and the direction of the emitted sounds changes. The sounds emitted by the loudspeaker and the sounds received by the microphones (after being reflected and diffracted by the listener's anatomy) are used to determine HRTFs corresponding to the different relative angles. Accordingly, a dataset of angle-dependent HRTFs can be generated for the listener.
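The core signal-processing step in such a measurement is a deconvolution: comparing what the loudspeaker emitted with what the ear microphone received yields the transfer function for that angle. The sketch below illustrates the idea under simplified assumptions (a noise burst standing in for a real sweep test signal, and a circular shift standing in for the actual acoustic path); the function name and regularization scheme are illustrative, not from the patent.

```python
import numpy as np

def estimate_hrtf(emitted, received, n_fft=512, eps=1e-8):
    """Estimate an HRTF (a frequency response) from the loudspeaker's
    emitted signal and the signal captured at an ear microphone.

    H(f) ~= Y(f) / X(f): frequency-domain deconvolution, regularized
    with `eps` so near-zero spectral bins do not blow up the estimate.
    """
    X = np.fft.rfft(emitted, n=n_fft)
    Y = np.fft.rfft(received, n=n_fft)
    return Y * np.conj(X) / (np.abs(X) ** 2 + eps)

# Repeat the measurement at each turntable angle to build an
# angle-dependent dataset (here: one estimate every 15 degrees).
rng = np.random.default_rng(0)
sweep = rng.standard_normal(512)           # stand-in for a sweep test signal
angles = range(0, 360, 15)
dataset = {a: estimate_hrtf(sweep, np.roll(sweep, 3)) for a in angles}
```

In a real system each angle would use its own recorded microphone signal; the dictionary keyed by angle is simply one convenient way to hold the resulting angle-dependent dataset.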
An HRTF selected from the generated dataset of angle-dependent HRTFs can be applied to an audio input signal to shape the signal in such a way that reproduction of the shaped signal realistically simulates a sound traveling to the user from the relative angle at which the selected HRTF was measured. Accordingly, by applying the HRTF to the audio input signal, even simple stereo headphones can create the illusion of a sound source located somewhere in the listening environment.
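Applying an HRTF amounts to filtering the input signal with it, once per ear. A minimal sketch, assuming the HRTFs are stored as frequency responses and using a pure interaural delay as a hypothetical HRTF pair (real HRTFs also shape the spectrum, which this toy pair does not):

```python
import numpy as np

def apply_hrtf(mono, hrtf_left, hrtf_right):
    """Binauralize a mono signal with a left/right HRTF pair.

    The HRTFs are frequency responses; converting them to impulse
    responses (HRIRs) and convolving shapes the signal so that playback
    over ordinary stereo headphones suggests a source at the angle the
    pair was measured at.
    """
    hrir_l = np.fft.irfft(hrtf_left)
    hrir_r = np.fft.irfft(hrtf_right)
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

# Hypothetical HRTF pair: flat left ear, right ear delayed by 8 samples.
n_fft, delay = 256, 8
freqs = np.arange(n_fft // 2 + 1)
h_left = np.ones(n_fft // 2 + 1, dtype=complex)
h_right = np.exp(-2j * np.pi * freqs * delay / n_fft)

mono = np.random.default_rng(1).standard_normal(1024)
left, right = apply_hrtf(mono, h_left, h_right)
```

With this toy pair, the right channel is simply the mono signal arriving 8 samples later than the left, i.e., the interaural time difference the brain uses as one cue for source direction.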
Existing methods of generating datasets of angle-dependent head-related transfer functions (HRTFs) are time-consuming or impractical to perform outside of a laboratory setting.
For example, HRTF measurements currently require an HRTF measurement system to be used in a controlled laboratory setting. Accordingly, accurate HRTF measurements require access to a specialized laboratory, which can be costly, as well as time to visit the specialized laboratory to complete the measurements.
Apple's invention/granted patent relates to an audio system and a method of using the audio system to generate an HRTF map for a user.
The HRTF map contains a dataset of angle-dependent HRTFs at respective HRTF locations on an azimuth extending around a head of the user. By applying an HRTF from the HRTF map to an audio input signal, a spatial audio signal corresponding to the respective HRTF location can be generated and played for the user. When reproduced, the spatial audio signal can accurately render a spatial sound to the user.
The method of using the audio system to generate the HRTF map can include generating sounds at known locations along an azimuthal path extending along a portion of the azimuth. For example, a mobile device can be moved, e.g., continuously, along the azimuthal path while a device speaker emits sounds within path segments of the azimuthal path. The locations from which the sounds are emitted are therefore known.
For example, the mobile device can include a structured-light scanner to capture images for determining the distance and orientation of the mobile device relative to the headphones worn by the user. A microphone of the headphones can detect input signals corresponding to the sounds. For example, the input signals can represent directly received sounds and indirectly received sounds propagating toward the user from the mobile device as it moves along the azimuth.
One or more processors of the audio system can determine an HRTF of each path segment based on the input signals, and the HRTF can be assigned to respective HRTF locations along the path segments based on the known locations that the corresponding sound was emitted. Accordingly, the one or more processors can generate the HRTF map, which includes the measured HRTFs assigned to respective HRTF locations along the azimuth.
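The assign-then-select workflow described above can be pictured as a simple data structure: measured HRTFs keyed by their azimuth location, with a nearest-angle lookup at rendering time. The class and method names below are illustrative, not Apple's API, and nearest-neighbor selection is only one plausible strategy (a real renderer might interpolate between neighboring entries).

```python
import numpy as np

class HRTFMap:
    """Toy HRTF map: measured HRTFs assigned to azimuth angles (degrees)."""

    def __init__(self):
        self.entries = {}            # angle (deg) -> HRTF (complex spectrum)

    def assign(self, angle_deg, hrtf):
        """Assign a measured HRTF to its known location on the azimuth."""
        self.entries[angle_deg % 360] = hrtf

    def lookup(self, angle_deg):
        """Select the HRTF whose azimuth is closest on the circle."""
        target = angle_deg % 360

        def circular_dist(a):
            d = abs(a - target)
            return min(d, 360 - d)   # wrap around: 350° is 10° from 0°

        return self.entries[min(self.entries, key=circular_dist)]

# Populate the map with one (placeholder) HRTF per 30° path segment.
hrtf_map = HRTFMap()
for angle in range(0, 360, 30):
    hrtf_map.assign(angle, np.full(129, angle, dtype=complex))
```

A request for a source at 44° would then select the 30° entry, and a request at 350° would wrap around to the 0° entry.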
Apple's patent FIG. 1 below is a pictorial view of a user handling an audio system. An audio system (#100) can include a device, e.g., a mobile device such as AirPods Pro, AirPods Max, an iPhone, a MacBook, etc. Apple's patent FIG. 2 is a block diagram of an audio system. Audio sources (#206) can include phone and/or music playback functions controlled by telephony or audio application programs that run on top of the operating system. In an aspect, an audio application program can generate predetermined audio signals, e.g., sweep test signals, to be played by the device speaker (#108). Similarly, audio sources can include an augmented reality (AR) or virtual reality (VR) application program that runs on top of the operating system.
Apple's patent FIG. 4 below is a pictorial view of operations to determine HRTFs and corresponding HRTF locations of an HRTF map.
Apple's patent FIG. 5 above is a pictorial view of operations to detect input signals corresponding to generated sounds; FIG. 6 is a pictorial view of operations to determine an HRTF and an HRTF location on an azimuth.
For deeper details, review Apple's granted patent 11,115,773.
Apple's Listed Inventors
Marty Johnson: Audio Technology Development, Distinguished Engineer. Mr. Johnson came to Apple from Virginia Tech, where he was an associate professor.
Darius Satongar: Interaction Architecture
Jonathan Sheaffer: Lead/Manager, Acoustics Technology
Victor Jupin: Listed as from Copenhagen, Denmark. No profile was found.
A Few of the Spatial Audio Patent Reports in our Archives
01: Apple's Spatial Audio File Format Revealed in new Patent Filing prior to Spatial Audio debuting on AirPods Pro
02: Apple files new 'Spatial Audio' patent for its future HMD and is likely to apply to AirPods Pro and Next-Gen Apple TV
03: Apple Patent reveals their Work on a Spatial – 3D Audio Engine that will take VR Gaming to the Next Level
04: Apple Invents Over-Ear Headphones with Spatial Audio driven Virtual Controls that Mimic Physical Control Sounds