Apple Reveals a Possible Future Update to Vision Pro's 'Persona' Feature that Would Create a Full-Body Avatar of the User
One of the cool features of Vision Pro is 'Persona,' a personalized digital avatar of yourself, as noted in our cover graphic derived from Apple's marketing. Today the U.S. Patent and Trademark Office officially published a patent application from Apple that reveals a possible advancement aimed at creating a full-body Persona instead of one limited to just your face. This new Persona update could capture a user's clothing style, textures, hairstyle, hair color, facial hair features, eye color, accessories and more.
Apple notes in their patent background that it may be desirable to generate or modify a representation of a user, such as a 3D user model, while a user is using a device, such as a head mounted device (HMD). However, existing systems may not utilize data that is potentially available from external sources to generate or modify such a representation.
Multiple Device Model Augmentation
Overall, Apple's invention covers systems, methods, and devices that use data from an external device (e.g., a camera, head-worn speaker device, HMD, or other device worn by a user) to improve user data (e.g., a 3D user model) generated on a wearable device.
Data provided by an external device may include visual data related to body portions of a user (e.g., a torso, back, leg portion, foot, etc.) and/or information associated with features of the user (such as, inter alia, body dimensions, body shape, skin texture, clothing texture/material, user pose, etc.) that may not be visible or otherwise captured via sensors (e.g., cameras) of a device being worn by the user.
For example, an HMD's sensors may capture only a limited portion of the user, and only from a limited perspective, when supplying data for generating a body model via the Persona feature, while an external device may provide a different perspective that includes one or more other portions of the user. A dataset associated with the external device's view of the user may be transmitted to the HMD so that a 3D model of the user may be generated or updated more accurately, completely, efficiently, or otherwise more desirably.
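As a rough illustration of the idea (a minimal sketch of our own, not code from the patent; the region names and confidence scores are assumptions), merging the two views might keep, for each body region, whichever device's observation is more confident:

```python
# Hypothetical sketch: filling in body-model regions the HMD cannot see
# with data from an external device's differing perspective.

# Simplified body model: each region maps to an (observation, confidence) pair.
BODY_REGIONS = ["head", "torso", "back", "legs", "feet"]

def merge_views(hmd_view: dict, external_view: dict) -> dict:
    """Keep, per region, the observation with the higher confidence."""
    model = {}
    for region in BODY_REGIONS:
        hmd = hmd_view.get(region, (None, 0.0))
        ext = external_view.get(region, (None, 0.0))
        model[region] = hmd if hmd[1] >= ext[1] else ext
    return model

# The HMD sees the head well but not the back or feet; an external
# camera supplies the regions the HMD's sensors never capture.
hmd_view = {"head": ("hmd_scan", 0.9), "torso": ("hmd_scan", 0.4)}
external_view = {"torso": ("ext_scan", 0.8), "back": ("ext_scan", 0.7),
                 "feet": ("ext_scan", 0.6)}

merged = merge_views(hmd_view, external_view)
```

In this toy version the external camera wins the torso (higher confidence) and contributes the back and feet outright, while the head stays with the HMD's own scan.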
In some implementations, the data from the external device corresponds to the same time period, e.g., current external device data is used to supplement current wearable device data. In other implementations, the data from the external device corresponds to a different time period, e.g., previously-captured external device data is used to supplement current wearable device data.
For example, stored sensor data (e.g., previously retrieved images of the user from a device such as a security camera) may be additionally used to update the body model of the user.
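To illustrate the same-period versus previously-captured distinction (purely our own hypothetical sketch; the capture format, field names, and sources are assumptions), a device might filter external captures by age before using them to supplement its current data:

```python
# Hypothetical sketch: selecting which external-device captures may
# supplement the wearable's current data, based on how old they are.

def usable_captures(external_captures, current_time, max_age_seconds):
    """Keep captures no older than max_age_seconds before current_time."""
    return [c for c in external_captures
            if 0 <= current_time - c["timestamp"] <= max_age_seconds]

captures = [
    {"timestamp": 100.0, "source": "security_camera"},   # much older capture
    {"timestamp": 995.0, "source": "iphone_camera"},     # near-current
    {"timestamp": 1000.0, "source": "hmd_companion"},    # current
]
recent = usable_captures(captures, current_time=1000.0, max_age_seconds=10.0)
```

With a larger `max_age_seconds`, the same filter would admit the stored security-camera image as well, matching the patent's different-time-period case.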
In some implementations, the data from the different devices that is used to generate or update a model is matched (e.g., identified as corresponding to the same person/user) based on matching processes or criteria.
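A matching criterion of this sort could be as simple as comparing body measurements estimated by each device. The sketch below is our own illustration with made-up feature values, not Apple's method; it matches two datasets when every measurement agrees within a relative tolerance:

```python
# Hypothetical sketch: decide whether data from two devices describes
# the same person by comparing estimated body measurements.

def same_user(features_a, features_b, tolerance=0.05):
    """Match if every measurement agrees within a 5% relative tolerance."""
    return all(abs(x - y) / max(x, y) <= tolerance
               for x, y in zip(features_a, features_b))

# e.g., height, shoulder width, hip width in centimeters (made-up values)
hmd_estimate = [175.0, 42.0, 80.0]
camera_estimate = [174.0, 41.5, 79.0]   # same person, slight sensor noise
other_person = [160.0, 50.0, 95.0]
```

A production system would presumably use richer criteria, but the principle is the same: only data identified as belonging to the same user feeds the shared model.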
In some implementations, data retrieved from the external device may be used to update a 3D model of the user (e.g., a point cloud model, a parametric representation model, a skeleton model (e.g., bone lengths, etc.)).
In some implementations, data retrieved from the external device may be used to update a predictive model (e.g., a neural network) used by the HMD to interpret its own sensor data. For example, data retrieved from the external device may be used as additional training data to update the predictive model or as input to the predictive model. In some implementations, data retrieved from the external device may be shared (e.g., directly or as gradients) to support federated learning.
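As a toy illustration of gradient sharing (our own minimal sketch using a one-parameter model, not the patent's training setup), each device could compute a local gradient from its own data and share only that gradient, with the average driving the shared model's update:

```python
# Hypothetical sketch: a federated-style step, where devices share
# gradients (not raw images) to improve a shared predictive model.

def local_gradient(weight, x, y):
    """Gradient of squared error for a 1-parameter linear model y ~ w*x."""
    pred = weight * x
    return 2 * (pred - y) * x

def federated_step(weight, device_batches, lr=0.01):
    """Average the gradients from each device, then apply one update."""
    grads = [local_gradient(weight, x, y) for x, y in device_batches]
    avg = sum(grads) / len(grads)
    return weight - lr * avg

w = 0.0
# Each tuple is one device's local (input, target) pair; the true weight is 2.
batches = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
for _ in range(200):
    w = federated_step(w, batches)
```

After repeated rounds the shared weight converges toward the true value even though no device's raw data ever left it, which is the core appeal of the federated approach mentioned in the filing.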
Apple's patent FIG. 1 below illustrates an example operating environment associated with a process for generating and/or updating a user model representing a user based on persona data received from external cameras; at times, an iPhone's camera could be used. FIG. 2 illustrates an example operating environment associated with a process for generating and/or updating a user model representing a user comprising a body portion obscured from view; and FIG. 3 illustrates an example process for identifying a user to determine sensor data that represents the user.
Apple's patent FIG. 4 below illustrates a system flow diagram of an example environment in which a system can provide sensor data from multiple devices to update a user model generated on a wearable device; FIG. 5 is a flowchart representation of an exemplary method that provides sensor data from multiple devices to update a user model generated on a wearable device, in accordance with some implementations.
A 3D model of the user may be used to model, e.g., bone lengths, body shape, body/skin texture (including tattoos, scars, etc.), clothing types, clothing shapes, clothing color, clothing material, clothing texture, hair style, hair color, facial hair features, eye color, and accessories (e.g., an earring, nose piercing, necklace, glasses, headphones, purse, etc.). Multiple 3D models may be used for a single user to model different properties using different representations (e.g., a first model for body shape and a second, differing model for body texture).
Model representations may include, inter alia, a parametric human body shape model, a voxel representation, an implicit surface representation, a neural radiance field (NeRF) representation, a skeleton representation, a point cloud, a triangle mesh, a bitmap (for texture), implicit texture, reference to a material catalog, reference to a texture atlas, reference to an asset catalog (e.g., for hair styles, clothes, accessories, etc.), etc.
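Since multiple representations may coexist for one user, a container holding them side by side might be sketched as follows (hypothetical types, field names, and values of our own, loosely mirroring the representation list above):

```python
# Hypothetical sketch: one user model holding several representations,
# e.g., a skeleton for body shape and a texture-atlas reference for texture.
from dataclasses import dataclass, field

@dataclass
class SkeletonModel:
    bone_lengths: dict        # e.g., {"femur": 45.0}, in centimeters

@dataclass
class TextureModel:
    bitmap_ref: str           # reference into a texture atlas

@dataclass
class UserModel:
    representations: dict = field(default_factory=dict)

    def add(self, name, model):
        self.representations[name] = model

user = UserModel()
user.add("shape", SkeletonModel(bone_lengths={"femur": 45.0}))
user.add("texture", TextureModel(bitmap_ref="atlas://torso/shirt_01"))
```

Keeping each property in its own representation lets external-device data update, say, the texture model without touching the skeleton, as the patent's multi-model description suggests.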
To review the full details of this invention, check out patent application 20240312167. The lead inventor on this patent is listed as Daniel Kurz, Senior Machine Learning Manager. Kurz joined Apple nine years ago when Apple acquired Metaio in 2015.