An Apple Patent Reveals the Technical Features behind a Spatial Audio Rendering Processor Designed for AirPods Max
Today the US Patent & Trademark Office published a patent application from Apple that relates to a spatial audio rendering processor for AirPods Max that calibrates audio beamforming array processing algorithms in response to a change in a physical shape of the wearable audio device. Whether Apple's patent is describing a future aspect of their H1 Chip is unknown at this time.
Beamforming is an audio signal processing operation in which transducer arrays are used for directional sound transmission or reception. For example, in the case of directional sound reception, a microphone array, normally having two or more microphones, captures sound that is processed to extract spatial audio information.
For example, in the case in which several sound sources are picked up by the microphone array, beamforming allows for the extraction of an audio signal that is representative of one of the several sound sources, thereby allowing the microphone array to focus sound pickup on that sound source while attenuating (or filtering out) the other sound sources.
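The patent doesn't include code, but the receive-side beamforming it describes can be illustrated with a minimal delay-and-sum sketch: each microphone signal is time-aligned for a chosen source direction and the channels are averaged, reinforcing that source while partially cancelling others. The signals, delays, and tone below are all hypothetical.

```python
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Delay-and-sum beamformer: undo each mic's steering delay, then average.

    signals: array of shape (num_mics, num_samples)
    delays_samples: per-mic arrival delay (in samples) to compensate
    """
    aligned = [np.roll(sig, -d) for sig, d in zip(signals, delays_samples)]
    return np.mean(aligned, axis=0)

# Hypothetical two-mic capture: the same 440 Hz tone reaches mic 1
# three samples later than mic 0 (a source off to one side).
fs = 8000
t = np.arange(256) / fs
tone = np.sin(2 * np.pi * 440 * t)
mics = np.stack([tone, np.roll(tone, 3)])

# Steering toward the source (delays [0, 3]) aligns the copies;
# steering elsewhere (delays [0, 0]) leaves them out of phase.
on_target = delay_and_sum(mics, [0, 3])
off_target = delay_and_sum(mics, [0, 0])
```

Because the on-target delays exactly undo the arrival delays, `on_target` reconstructs the tone at full amplitude, while `off_target` has reduced power from the phase mismatch.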
With regards to directional sound transmission, a loudspeaker array, normally having two or more loudspeakers, generates beam patterns to project sound in different directions.
For example, a beamformer may receive input audio channels of sound program content (e.g., music) and convert the input audio channels to several driver signals to drive the loudspeakers of the loudspeaker array to produce a sound beam pattern.
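On the transmit side, the directionality of such a driver array can be sketched with a standard far-field "array factor" calculation for a uniform linear array: delaying each driver steers the beam, and the radiated pressure is strongest in the steering direction. The geometry (4 drivers, 4 cm spacing) is an illustrative assumption, not from the patent.

```python
import numpy as np

def array_factor(num_drivers, spacing_m, freq_hz, steer_deg, look_deg, c=343.0):
    """Normalized far-field response of a uniform linear loudspeaker array.

    Drivers are phase-delayed so the beam points at steer_deg;
    look_deg is the direction at which the radiated sound is evaluated.
    """
    k = 2 * np.pi * freq_hz / c  # acoustic wavenumber
    n = np.arange(num_drivers)
    phase = k * spacing_m * n * (
        np.sin(np.radians(look_deg)) - np.sin(np.radians(steer_deg))
    )
    return abs(np.sum(np.exp(1j * phase))) / num_drivers

# Steer a hypothetical 4-driver array toward +30 degrees at 2 kHz.
on_axis = array_factor(4, 0.04, 2000, steer_deg=30, look_deg=30)   # all drivers in phase
off_axis = array_factor(4, 0.04, 2000, steer_deg=30, look_deg=-30)
```

In the steering direction all driver contributions add in phase (`on_axis` is 1.0); away from it they partially cancel, which is the beam pattern the paragraph above describes.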
An aspect of Apple's invention is to perform transfer function measurements for each different physical arrangement of a wearable device's beamforming array, such as a microphone array or a loudspeaker array, to account for several shapes of the wearable device.
These measurements may be performed in a dedicated anechoic chamber in which the wearable device is placed and morphed (manipulated) into a given shape.
To perform the measurements for a microphone array, loudspeakers are positioned about the wearable device, each representing a sound source in an acoustic space.
Each loudspeaker independently produces a sound so that the transfer function of each microphone in the array can be measured for that location of the sound source.
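A common way to obtain such a transfer function, shown here as an illustrative sketch rather than Apple's actual procedure, is to play a broadband excitation from one loudspeaker and divide the microphone's spectrum by the excitation's spectrum. The simulated "room" below is a hypothetical 5-sample delay with 0.5 attenuation.

```python
import numpy as np

def estimate_transfer_function(excitation, recording):
    """Estimate H(f) = Y(f) / X(f) from one excitation/recording pair.

    Assumes the excitation (e.g. white noise or a sweep) has energy
    in every frequency bin so the division is well conditioned.
    """
    return np.fft.rfft(recording) / np.fft.rfft(excitation)

rng = np.random.default_rng(0)
N = 256
excitation = rng.standard_normal(N)  # broadband test signal

# Hypothetical acoustic path: 5-sample delay, 0.5 attenuation.
h_true = np.zeros(N)
h_true[5] = 0.5
H_true = np.fft.rfft(h_true)

# Circular convolution stands in for the mic capture in this sketch.
recording = np.fft.irfft(np.fft.rfft(excitation) * H_true, n=N)

H_est = estimate_transfer_function(excitation, recording)
```

Repeating this for every loudspeaker position and every device shape yields the per-arrangement measurement set the patent describes.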
Another aspect of Apple's invention relates to determining the physical arrangement of the beamforming array through the use of at least one of several methods.
An acoustic method relates to measuring a near-field transfer function of at least one of the audio elements of the beamforming array, which represents an acoustic transmission path between an audio element and a known location.
Other methods include an optical method in which each physical arrangement of the array is associated with image data captured by a camera, and a mechanical sensing method in which a given physical arrangement is associated with mechanical sensing data.
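For the acoustic method, one plausible reading is that the device compares a freshly measured near-field transfer function against stored calibration profiles and picks the closest match. The sketch below uses a least-squares nearest-neighbor lookup; the shape names, frequency grid, and response curves are all invented for illustration.

```python
import numpy as np

def match_arrangement(measured_tf, calibration_db):
    """Return the stored arrangement whose transfer function is closest
    (in the least-squares sense) to the one just measured."""
    return min(
        calibration_db,
        key=lambda name: np.sum(np.abs(calibration_db[name] - measured_tf) ** 2),
    )

# Hypothetical calibration database: magnitude responses measured
# for two extreme headband positions during factory calibration.
freqs = np.linspace(100, 8000, 64)
calibration_db = {
    "fully_extended": 1.0 / (1.0 + freqs / 4000.0),
    "fully_retracted": 1.0 / (1.0 + freqs / 2000.0),
}

# A new in-ear measurement that most resembles the extended shape.
measured = 1.0 / (1.0 + freqs / 3900.0)
shape = match_arrangement(measured, calibration_db)
```

Once the current shape is identified, the processor could load the beamforming coefficients calibrated for that arrangement, which is the self-calibration loop the patent's figures depict.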
Apple's patent FIG. 2A shows a far-field transfer function measurement for the microphone array for AirPods Max; FIG. 3A shows the far-field transfer function measurements for the loudspeaker array.
Apple's patent FIG. 5 below illustrates one example of a block diagram of an audio system for self-calibrating a microphone array.
Apple's patent FIG. 10 below illustrates a calibration of a beamforming process in order to direct a sound pattern towards a sound source in response to a change in a physical arrangement of a microphone array.
To review Apple's detailed patent application number 20220053281, click here.