Apple won 47 patents today covering Spatial Audio, an Apple Pencil with Optical Sensors that sample Color & Texture, Flexible Speakers+
Today the U.S. Patent and Trademark Office officially published a series of 47 newly granted patents for Apple Inc. In this particular report we briefly cover spatial audio, a possible future Apple Pencil that could sample color and texture, and lastly, a patent on flexible speakers. And as always, we wrap up this week's granted patent report with our traditional listing of the remaining granted patents that were issued to Apple this week.
Electronic Device With Optical Sensor For Sampling Surfaces
Today Apple was granted a patent relating to a new color sensor system that could be added to a future Apple Pencil. The color sensor may be used to sample the color of the surface of an external object. Texture measurements and/or other measurements on the appearance of the object may also be made.
Measurements from Apple Pencil may be transmitted wirelessly to a companion device such as a tablet computer (e.g., so that a sampled color or other attributes may be used in a drawing program or other software).
The color sensor may have a color sensing light detector having a plurality of photodetectors each of which measures light for a different respective color channel. The color sensor may also have a light emitter.
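As a rough illustration of the idea, here is a minimal Python sketch of how per-channel photodetector readings might be combined into a sampled color. The class, channel names, and white-reference normalization are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: a color sensor with one photodetector per color channel,
# normalized against a white reference to estimate the sampled surface color.
class ColorSensor:
    def __init__(self, channel_readings):
        # channel_readings: raw photodetector counts, one entry per color channel
        self.channels = channel_readings

    def sample_rgb(self, white_ref):
        # Normalize each channel against a white-reference reading,
        # clamping to the 0-255 range of an 8-bit color value
        return tuple(min(255, round(255 * self.channels[c] / white_ref[c]))
                     for c in ("red", "green", "blue"))

sensor = ColorSensor({"red": 400, "green": 800, "blue": 600})
rgb = sensor.sample_rgb({"red": 1000, "green": 1000, "blue": 1000})
# → (102, 204, 153)
```

In a real device the normalization would account for the light emitter's spectrum and ambient light, which this sketch ignores.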
Apple's patent FIG. 3 below is a side view of an illustrative electronic device with a color sensor; FIG. 5 is a side view of an illustrative electronic device with optical components that are mounted in a shaft portion of a device housing and that are optically coupled to a tip portion of the device using light guides.

For full details of this invention including 20 new patent claims, review granted patent 12105897.
Flexible Speakers
Apple has been granted a patent for a speaker that includes a housing having walls that define a cavity and a diaphragm covering the cavity and configured to vibrate under application of a magnetic field. The vibration produces sound waves. The walls are configured to deform under bending stress, and the speaker is configured to produce sound waves both in an undeformed state and in a deformed state. Another speaker includes a flexible layer, a sensor configured to detect a curvature of the flexible layer, and a transducer disposed on the flexible layer and configured to vibrate it. The vibrations of the flexible layer generate sound waves, and the output generated by the transducer is based on the curvature of the flexible layer.
Apple's patent FIG. 1A below is a schematic sectional view through a speaker; FIG. 1B is a schematic sectional view through the speaker of FIG. 1A with the speaker in a deformed state.
The invention could one day be applied to electronic devices such as desktop computers, televisions, set-top boxes, internet-of-things (IoT) devices, wearable devices such as headphones and earbuds, and portable electronic devices including mobile phones, portable music players, smart watches, tablet computers, smart speakers, remote controllers for other electronic devices, laptop computers, and more.
For more on this, review Granted Patent 12108200.
Head Tracking Correlated Motion Detection For Spatial Audio Applications
In December 2021 Patently Apple posted a patent report relating to Spatial Audio that shares a few patent figures with a secondary Spatial Audio patent granted to Apple today titled "Head Tracking Correlated Motion Detection For Spatial Audio Applications."
Apple's patent FIG. 2 presented above illustrates a centered and inertially stabilized 3D virtual auditory space #200 that includes virtual sound sources or “virtual speakers” (e.g., center (C), Left (L), Right (R), left-surround (L-S) and right-surround (R-S)) that are rendered in ambience bed #202 using known spatial audio techniques, such as binaural rendering.
To maintain the desired 3D spatial audio effect, the center channel (C) should be aligned with a boresight vector #203. The boresight vector originates from a headset reference frame and terminates at a source device reference frame. When the virtual auditory environment is first initialized, the center channel is aligned with the boresight vector by rotating a reference frame for the ambience bed (X_A, Y_A, Z_A) until the center channel coincides with boresight vector #203, as shown in FIG. 2.
This alignment process causes the spatial audio to be "centered." When the spatial audio is centered, the user perceives audio from the center channel (e.g., spoken dialogue) as coming directly from the display of the source device. The centering is accomplished by tracking the boresight vector from the head reference frame to the location of the source device using an extended Kalman filter (EKF) tracking system.
For full details, review Granted Patent 12108237.
This Week's Remaining Granted Patents