A Key Patent that Apple acquired from Zurich's Faceshift to create both Animoji and Memoji was published late last month
Apple acquired Zurich-based Faceshift sometime in September or October 2015. Faceshift's technology was eventually used to create Animoji and, more importantly, Memoji. The U.S. Patent Office granted Apple a patent stemming from Faceshift in 2019. One of the key Faceshift patents that Apple acquired, which dates back to 2013, was published by the U.S. Patent Office on January 26, 2023.
The patent credits Mark Pauley, an Apple Senior Software Engineer, and Sofien Bouaziz, one of the original Faceshift engineers. Bouaziz's profile states that he "Designed, developed, and productized the realtime face tracking algorithm powering the iPhone X Animoji and also available to third-party developers through ARKit." After leaving Apple in 2018, Bouaziz worked for Google and is now the Director, XR Presence at Meta.
Technically, the invention relates to a method, and a processing device, for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and the estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
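To make the pipeline concrete, here is a minimal, hypothetical sketch of the core idea: treat the dynamic expression model as a neutral face mesh plus a set of blendshape offsets, fit blendshape weights (the "tracking parameters") to observed face data via least squares, then generate the output expression as a weighted sum. The function names and the least-squares formulation are illustrative assumptions, not the patent's actual algorithm, which also refines the model itself over time.

```python
import numpy as np

def estimate_weights(neutral, blendshapes, observed):
    """Estimate tracking parameters (blendshape weights) from tracking data.

    neutral:     (V, 3) neutral-expression vertex positions
    blendshapes: (K, V, 3) per-blendshape vertex offsets from neutral
    observed:    (V, 3) tracked vertex positions for the current frame
    """
    # Solve observed - neutral ~= B @ w in the least-squares sense,
    # where B stacks the flattened blendshape offsets column-wise.
    B = blendshapes.reshape(blendshapes.shape[0], -1).T   # (3V, K)
    d = (observed - neutral).reshape(-1)                  # (3V,)
    w, *_ = np.linalg.lstsq(B, d, rcond=None)
    return np.clip(w, 0.0, 1.0)  # weights are conventionally kept in [0, 1]

def reconstruct(neutral, blendshapes, weights):
    """Generate the tracked expression as neutral plus weighted offsets."""
    return neutral + np.tensordot(weights, blendshapes, axes=1)
```

A real-time system would run this per frame against depth/RGB tracking data and, per the patent, also refine the blendshapes themselves to better match the individual user's face.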
Apple's patent FIG. 1 below shows a schematic representation of a dynamic expression model in accordance with one embodiment of this invention; FIG. 3 shows a flowchart of an optimization pipeline.
Apple's patent FIG. 4 shows a graphical representation of a virtual avatar generated using embodiments of the disclosed subject matter; FIG. 5 is a flowchart of a method according to one embodiment.
Apple's patent FIG. 7 above shows different sets of blendshape weights used to approximate a facial expression of the user; FIG. 8 depicts results of an initial estimation of a neutral facial expression using different dynamic expression models in accordance with one embodiment.
For those wishing to dive into the technical details of this invention, review continuation patent application US 20230024768 A1.