
Apple wins a Patent for a Next-Gen HomePod with depth cameras and a microphone array for Virtual Reality & Communication Apps

(Cover image: HomePod with depth cameras)


Today the U.S. Patent and Trademark Office officially granted Apple a patent that relates to audio systems for rendering virtual sound sources to a user. More specifically, the audio system may include a depth camera and a microphone array to independently detect a point cloud and a local sound field, and processor(s) to reconstruct a global sound field, e.g., at any point in the room, based on the point cloud and the recorded audio data. The audio system may be used for virtual reality or augmented reality applications, e.g., to render a virtual reality environment to a user. The audio system may, however, be used for other applications, such as telecommunications applications.


Virtual reality and augmented reality environments can include virtual sound sources, which are computer-generated sound sources in a virtual space. The virtual space can map to an actual space. For example, a user may wear headphones in a room and the headphones can reproduce a sound to the user as though the sound is a voice of a colleague in front of the user, even though the colleague is actually in another room.


As the user moves in the room, e.g., as the user walks forward five paces, the reproduced sound can change. For example, the headphones can reproduce the sound to the user as though the colleague is now behind the user. Accurate rendering of the virtual world requires that the colleague be recorded in a manner that identifies a location of the colleague in the other room. When the location is identified, the recording can be reproduced to the user in a manner that localizes the reproduced sound as if the sound is coming from a similar location in the room that the user occupies.


Existing methods of reproducing a sound image to an observer in a virtual space include mounting a multitude of microphones within an actual space, e.g., a room, that is to be associated with the virtual space. The distributed microphones record a sound field from the room and localize sound sources based on the audio detected by each microphone. For example, the microphones are distributed around the room, in corners and along the walls or ceiling, and the audio picked up by the separate microphones can be combined to determine the location of a sound source.
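Distributed-microphone localization of this kind typically relies on time-difference-of-arrival (TDOA): a sound reaches spaced microphones at slightly different times, and those delays constrain the source direction. A minimal far-field sketch for a single microphone pair (the function name and values here are illustrative, not from the patent):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def doa_from_tdoa(delta_t, mic_spacing):
    """Far-field direction of arrival (radians from broadside) for a mic pair.

    delta_t: arrival-time difference between the two microphones (s).
    mic_spacing: distance between the microphones (m).
    """
    x = SPEED_OF_SOUND * delta_t / mic_spacing
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.asin(x)

# A 1 ms delay across mics 0.5 m apart puts the source roughly 43° off broadside:
angle = math.degrees(doa_from_tdoa(1.0e-3, 0.5))
```

Combining such direction estimates from several widely spaced pairs lets the system triangulate a source position, which is why the conventional approach needs microphones spread around the room.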


In existing methods of reproducing a sound image to an observer in a virtual space, the microphones that are distributed around the actual space must be spaced apart from one another in order to rely on the detected sound for source localization. Accordingly, such methods do not allow for a single, compact microphone array to be used for sound source localization, and equipment and installation costs of such methods can be substantial.


In an embodiment, an audio system includes a ranging component, such as a depth capturing device or range imaging device, and an audio component, such as a compact microphone array, to detect and localize sounds within a source environment.


The depth capturing device and the microphone array can be collocated within a system housing. Accordingly, the audio system can occupy a smaller footprint than existing systems for imaging sounds for virtual reality applications.


The depth capturing device can be a camera array, or another depth detector, to detect a point cloud including several points in a field of view. The microphone array can capture a local sound field, which is one or more sounds arriving in respective directions from the field of view.
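The patent text does not spell out a fusion algorithm, but one plausible reading of combining the point cloud with the local sound field is: cast a ray from the array along the detected arrival direction and pick the depth point nearest that ray as the source location. A hedged sketch under that assumption (all names here are ours):

```python
import numpy as np

def localize_on_point_cloud(points, doa_unit, origin=np.zeros(3)):
    """Pick the point-cloud point closest to a ray fired along a DOA vector.

    points: (N, 3) array of depth-camera points in the array frame (meters).
    doa_unit: unit vector toward the detected sound, same frame.
    Returns the 3-D point assumed to be the sound source.
    """
    rel = points - origin
    t = rel @ doa_unit                        # projection of each point onto the ray
    t = np.clip(t, 0.0, None)                 # ignore points behind the array
    nearest = origin + t[:, None] * doa_unit  # closest point on the ray to each point
    dist = np.linalg.norm(points - nearest, axis=1)
    return points[np.argmin(dist)]

cloud = np.array([[1.0, 0.0, 0.0],   # directly ahead
                  [0.0, 2.0, 0.0],   # off to the side
                  [3.0, 0.2, 0.0]])  # ahead, slightly offset
source = localize_on_point_cloud(cloud, np.array([1.0, 0.0, 0.0]))
```

Because the depth camera supplies range and the microphone array supplies direction, a single collocated unit can localize sources without the room-spanning microphone layout criticized above.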


Apple's patent FIG. 1 is a pictorial view of an audio system having a depth capturing device and a microphone array; FIG. 2 is a schematic view of an audio system used to reconstruct a global sound field based on a point cloud and a local sound field.




The Base Station: Concerts


Lastly, Apple describes the ability for a user to watch a favorite band or performer at a remote location. The transmitting base station of the audio system (#100) could be placed at the remote location to record a scene in which the performers make sounds, e.g., vocal or instrumental sounds.


A receiving base station of the audio system, which may be installed in the same room as the user (#220), can receive the recording. Audio and/or video data of the recording may be provided to components of the audio system worn by the user, e.g., headphones and/or a virtual reality headset.


The user-mounted components can reproduce the scene to the user by playing video through the headset and/or playing back audio through the speaker (#222). As the user walks through the room, the scene changes. For example, when the user is at a first position, the performers could be experienced through the virtual-reality components as singing or playing an instrument in front of the user.


As the user walks forward, the replayed video and audio can be adjusted such that the performers and the sounds they make are rendered as though the user is now behind them. In brief, the audio system provides a realistic rendition of a virtual reality environment to the user.


For more details, review Apple's granted patent 10,979,806.



