Multiple Patent Pending Inventions from Apple Surface for Emojicon Puppeting, Facial Expressions and Voice Effects
Today the US Patent & Trademark Office published five Apple patent applications covering Emojicon Puppeting and other technology relating to Animoji. Along with the introduction of the iPhone X came Apple's prized feature, the TrueDepth camera. With it, Apple was able to introduce Face ID and live puppeting via Animoji. The camera captures a 3D model of a user's face and then transfers it to a series of characters, with one example presented in our cover graphic. All five of Apple's patent applications covering various aspects of the Animoji project are listed further below.
When researching the inventors behind one of the patents in this group, titled "Emojicon Puppeting," it came to light that four of the key inventors came from Faceshift, a company that Apple acquired in mid-2015. Rumors of the acquisition surfaced in early September and again in November of that year. The LinkedIn profile of Faceshift's CEO and co-founder notes that he joined Apple in August 2015. Many will remember the image below in relation to Faceshift.
You can explore all five Animoji-related patents in more detail here: 01: Voice Effects Based on Facial Expressions; 02–04: Emoji Recording and Sending (Parts 1–3); and 05: Emojicon Puppeting.
From the first patent, titled "Voice Effects Based on Facial Expressions," we see Apple's patent FIG. 2 below, which is a simplified block diagram illustrating an example flow for providing the audio and/or video effects techniques described herein, according to at least one example.
This particular Apple invention covers systems, methods, and computer-readable media for implementing avatar video clip revision and playback techniques. In some examples, a computing device can present a user interface (UI) for tracking a user's face and presenting a virtual avatar representation (e.g., a puppet or video-character version of the user's face). Upon identifying a request to record, the computing device can capture audio and video information; detect context and extract facial feature characteristics and voice feature characteristics; revise the audio and/or video information based at least in part on the extracted features; and present a video clip of the avatar using the revised audio and/or video information.
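The flow described above can be sketched in a few lines of Python. This is purely an illustrative toy, not Apple's implementation: the expression labels, the expression-to-effect table, and all function names are assumptions invented for the example.

```python
from collections import Counter

# Hypothetical mapping from a detected facial expression to a voice effect.
# The patent describes revising audio based on facial features; the actual
# effects and expressions Apple uses are not disclosed here.
EXPRESSION_TO_VOICE_EFFECT = {
    "smile": "pitch_up",
    "frown": "pitch_down",
    "puffed_cheeks": "chipmunk",
}

def detect_expression(face_frames):
    """Toy classifier: pick the expression seen most often in the clip."""
    return Counter(face_frames).most_common(1)[0][0]

def revise_clip(face_frames, audio):
    """Choose a voice effect from the dominant expression and tag the clip.

    A real pipeline would apply the effect to the audio samples; here we
    only record which effect was selected.
    """
    expression = detect_expression(face_frames)
    effect = EXPRESSION_TO_VOICE_EFFECT.get(expression, "none")
    return {"audio": audio, "expression": expression, "effect": effect}

clip = revise_clip(["smile", "smile", "frown"], audio=b"...")
print(clip["effect"])  # → pitch_up
```

The point of the sketch is the ordering the patent emphasizes: features are extracted from the recording first, and the audio revision happens before playback of the avatar clip.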
From the fifth patent, titled "Emojicon Puppeting," we see Apple's patent FIGS. 3A, 3B and 3C below. Apple's patent FIG. 3A illustrates an image sensor and a depth sensor gathering image information and depth information, respectively, from the face, expressions, movements, and head of a user of the sending client device, according to some embodiments. FIG. 3B illustrates a message transcript that includes a puppeted emoji received in a message from a sending client device.
Apple's patent FIG. 3C above illustrates a human face and head, a base mesh of the face and head generated from image and depth data, one or more tracking points on the base mesh, and an emoji having tracking points corresponding to one or more of the base mesh tracking points. This is illustrated in our cover graphic, which shows Jony Ive in the mesh from Apple's introductory video.
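The tracking-point idea in FIG. 3C can be illustrated with a minimal retargeting sketch: displacements of tracking points on the user's base mesh, measured against a neutral pose, drive the corresponding tracking points on the emoji mesh. This is an assumption-laden toy, not Apple's method; the point names and the simple displacement transfer are invented for the example.

```python
def puppet_emoji(neutral_base, current_base, neutral_emoji, correspondence):
    """Move each emoji tracking point by the displacement of its
    corresponding base-mesh tracking point.

    All meshes are dicts mapping point names to (x, y, z) tuples;
    `correspondence` maps base-mesh point names to emoji point names.
    """
    posed = dict(neutral_emoji)
    for base_pt, emoji_pt in correspondence.items():
        # Displacement of the user's tracking point from its neutral pose.
        dx = tuple(c - n for c, n in zip(current_base[base_pt],
                                         neutral_base[base_pt]))
        # Apply the same displacement to the emoji's tracking point.
        posed[emoji_pt] = tuple(e + d for e, d in zip(neutral_emoji[emoji_pt], dx))
    return posed

# The user's left mouth corner rises by 0.5 (a smile)...
neutral_base = {"mouth_corner_l": (0.0, 0.0, 0.0)}
current_base = {"mouth_corner_l": (0.0, 0.5, 0.0)}
neutral_emoji = {"emoji_mouth_l": (1.0, 1.0, 0.0)}

posed = puppet_emoji(neutral_base, current_base, neutral_emoji,
                     {"mouth_corner_l": "emoji_mouth_l"})
print(posed["emoji_mouth_l"])  # → (1.0, 1.5, 0.0)
```

In practice a production system would deform the full emoji mesh from these sparse control points (and likely use blend shapes rather than raw displacements), but the sketch captures the correspondence the figure depicts.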
Some of the Inventors behind Apple's Patents
Brian Amberg: Software Engineer. Amberg was CTO and Co-Founder of Faceshift.
Thibaut Weise: Software Engineering Manager. Weise was CEO and Co-Founder of Faceshift.
Guillaume Barlier: Designer. Barlier came to Apple via Faceshift where he was Senior Technical Director.
Nico Scapel: Designer. Scapel came to Apple via Faceshift where he was Creative Director.
Justin Stoyles: Sr. Engineering Program Manager (ARKit Augmented Reality, Animoji / Memoji). Came to Apple via AMD FirePro Graphics and is now at Roku.
Alexandre Moha: Product Development, R&D Management.
Bruno Sommer: Software Engineering Manager, Technology Development.
Thomas Goossens: Senior Software Engineering Manager; worked on SceneKit, Animoji and Memoji.