
Apple won a patent for one aspect of creating realistic Virtual Personas in Apple Vision Pro involving hair generation

(Cover image: visionOS facial dynamics)

When Apple introduced Memoji in 2018, it provided a way to create personalized avatars, matched to a user's perceived personality and mood, that could be sent in Messages and FaceTime. Then came Vision Pro's creation of the user's "Persona," which introduced a stunning advancement in creating a user's likeness in the digital world.

In one segment of his WWDC23 Keynote, Mike Rockwell, VP of the Technology Development Group, stated: "For digital communication like FaceTime, Vision Pro goes beyond conveying just your eyes and creates an authentic representation of you. This was one of the most difficult challenges we faced in building Apple Vision Pro. There's no video conferencing camera looking at you, and even if there were, you're wearing something over your eyes. Using our most advanced machine learning techniques, we created a novel solution. After a quick enrollment process using the front sensors on Vision Pro, the system uses an advanced encoder-decoder neural network to create your digital Persona. This network was trained on a diverse group of thousands of individuals. It delivers a natural representation, which dynamically matches your facial and hand movements. With your Persona, you can communicate with over a billion FaceTime-capable devices. When viewed by someone in another Vision Pro, your Persona has volume and depth not possible in traditional video."

Last week, the U.S. Patent and Trademark Office officially granted Apple a patent that relates to Rockwell's description of creating a realistic Persona for the Vision Pro headset. In this particular granted patent, Apple focuses on the realistic rendering of a user's hair.

In their patent background, Apple notes that previously available hair rendering methods had various shortcomings. For example, rendering based on hair card data is computationally expensive due to a relatively large corresponding hair texture size and complexities associated with translating the hair texture. As another example, rendering based on hair strand data is associated with aliasing and strand deformation issues.

Projection-Based Hair Rendering

Apple's invention relates to methods, systems, and electronic devices for dynamically generating hair textures across corresponding rendering cycles, based on a projection to a hair mesh. The hair mesh is associated with a virtual agent, such as a computer-generated persona.

For example, the hair mesh corresponds to a two-dimensional (2D) UV map that is associated with a three-dimensional (3D) facial representation of a virtual agent. The method includes rendering a subset of a plurality of hair strands in order to generate a hair texture, based on a corresponding portion of the projection. The plurality of hair strands is represented by hair curve data.

Accordingly, the various implementations described in this granted patent include pre-projecting (e.g., before rendering) hair strands, and then using a portion of the projection at render time in order to selectively generate a desired hair texture.

For example, a method includes projecting 1,000 hair strands to a hair mesh. Continuing with this example, during a first rendering cycle the method includes generating a high resolution hair texture based on a projection of 900 (of the 1,000) hair strands to the hair mesh, and during a second rendering cycle the method includes generating a lower resolution hair texture based on a projection of 500 (of the 1,000) hair strands to the hair mesh.

Selectively generating the hair texture across rendering cycles avoids the aliasing and deformation issues associated with hair strand rendering methods, while still producing the rich hair textures associated with hair card rendering methods. Moreover, the method includes rendering the generated hair texture in association with the virtual agent in order to generate a display render for display.
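To make that flow more concrete, here is a minimal Swift sketch loosely following the patent's description: strands are projected to the hair mesh once, and each rendering cycle rasterizes a chosen subset into a texture at the desired resolution. All type and function names (HairStrand, preProject, generateHairTexture) are hypothetical, and the planar projection is a placeholder, since the patent does not publish implementation details.

```swift
// Illustrative sketch only - not Apple's implementation. Names are invented.
import simd

// A hair strand is a polyline of 3D points (the "hair curve data").
struct HairStrand {
    let points: [SIMD3<Float>]
}

// A strand already projected into the 2D UV space of the hair mesh.
struct ProjectedStrand {
    let uvPoints: [SIMD2<Float>]
}

// Pre-projection step: map each strand's 3D points onto the hair mesh's UV map.
// The mapping is passed in as a closure because the patent leaves the exact
// projection to the hair mesh / UV parameterization.
func preProject(strands: [HairStrand],
                toUV project: (SIMD3<Float>) -> SIMD2<Float>) -> [ProjectedStrand] {
    strands.map { strand in
        ProjectedStrand(uvPoints: strand.points.map(project))
    }
}

// Per rendering cycle: pick a subset of the pre-projected strands and
// accumulate them into a hair texture at the requested resolution.
// (A real renderer would choose the subset by importance, not just take the first N.)
func generateHairTexture(from projected: [ProjectedStrand],
                         strandBudget: Int,
                         resolution: Int) -> [Float] {
    var texture = [Float](repeating: 0, count: resolution * resolution) // single-channel coverage for brevity
    for strand in projected.prefix(strandBudget) {
        for uv in strand.uvPoints {
            let x = min(resolution - 1, max(0, Int(uv.x * Float(resolution))))
            let y = min(resolution - 1, max(0, Int(uv.y * Float(resolution))))
            texture[y * resolution + x] += 1   // accumulate strand coverage
        }
    }
    return texture
}

// Mirroring the patent's example: project 1,000 strands once, then render
// 900 of them into a high-resolution texture in one cycle and 500 into a
// lower-resolution texture in another.
let strands = (0..<1_000).map { _ in
    HairStrand(points: (0..<8).map { _ in SIMD3<Float>.random(in: 0...1) })
}
let projected = preProject(strands: strands) { p in SIMD2(p.x, p.y) } // placeholder planar projection
let highRes = generateHairTexture(from: projected, strandBudget: 900, resolution: 1024)
let lowRes  = generateHairTexture(from: projected, strandBudget: 500, resolution: 256)
```

The strand budget and texture resolution are the knobs that would let a renderer trade hair detail for cost from one rendering cycle to the next.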

Apple further notes that an XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and/or the like. With an XR system, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the XR environment are adjusted in a manner that comports with at least one law of physics. As one example, the XR system may detect head movement and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment.

In Apple's patent FIG. 2A below, a plurality of hair strands (#200) includes a first hair strand (#200-1), a second hair strand (#200-2), and so on up to an optional Nth hair strand (#200-N). The plurality of hair strands is represented within hair curve data, such as hair strand data. The hair curve data may indicate various texture values, such as Albedo, Tangent, ID, etc. Each of the hair strands includes a respective plurality of hair points.

FIG. 2B includes a portion of a virtual agent (#210) that corresponds to a three-dimensional (3D) representation of a portion of a person's face, including the chin and upper-lip area. The virtual agent may correspond to a computer-generated object or model to be rendered for display. For example, an electronic device renders the virtual agent (e.g., via a graphics processing unit (GPU)) and displays the render (e.g., a video frame of the virtual agent) as part of an extended reality (XR) environment. As the electronic device changes position relative to the XR environment (e.g., rotates or moves along an axis), it updates the render to account for the positional change.

FIG. 2C includes a hair mesh (#220) that is associated with the virtual agent. The hair mesh includes a hair region (#222a) onto which hair can be added and a non-hair region (e.g., the mouth, #222b) onto which hair cannot be added. The hair mesh itself is not textured with hair. For example, the hair mesh corresponds to a hair shell indicated within hair card data, or, as another example, to a two-dimensional (2D) UV map.
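For readers who prefer code to figure callouts, the entities in FIGS. 2A-2C might be organized roughly as follows. This is an illustrative Swift sketch with invented names and layouts; the patent does not disclose Apple's actual data structures.

```swift
// Hypothetical data structures mirroring the patent's FIGS. 2A-2C.
import simd

// FIG. 2A: each hair strand (#200-1 ... #200-N) is a series of hair points,
// and the hair curve data carries texture values such as Albedo, Tangent, and ID.
struct HairPoint {
    var position: SIMD3<Float>
    var albedo: SIMD3<Float>   // base color
    var tangent: SIMD3<Float>  // strand direction at this point
    var strandID: UInt32       // identifies which strand the point belongs to
}

struct HairCurveData {
    var strands: [[HairPoint]] // one point list per strand
}

// FIG. 2B: the portion of the virtual agent (#210) is a 3D facial
// representation that a GPU renders into frames of an XR environment.
struct VirtualAgentPortion {
    var vertices: [SIMD3<Float>]
    var triangleIndices: [UInt32]
}

// FIG. 2C: the hair mesh (#220) is untextured and splits its UV map into a
// region where hair can be added (#222a) and one where it cannot (#222b).
struct HairMesh {
    var triangleUVs: [[SIMD2<Float>]]   // each triangle as three UV coordinates
    var hairTriangles: Set<Int>         // triangles onto which hair can be projected (#222a)
    var nonHairTriangles: Set<Int>      // e.g., the mouth region (#222b)

    func allowsHair(onTriangle index: Int) -> Bool {
        hairTriangles.contains(index) && !nonHairTriangles.contains(index)
    }
}
```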

(Apple patent FIGS. 2A, 2B and 2C)

Apple's patent FIG. 3 below is a block diagram of an example system (#300) for projecting a plurality of hair strands to a hair mesh in order to generate a hair texture, in accordance with some implementations.

(Apple patent FIG. 3)

For greater detail, review Apple's granted patent 11983810, which was made public on May 14, 2024. The lead inventor is listed as Mariano Merchante, Senior Software Engineer, AR/VR Graphics, who specializes in VR/AR, computer graphics and game development.

While developing a new way to generate hair for Vision Pro Personas isn't the most exciting invention on record, to be sure, it illustrates the depths Apple went to in perfecting just one aspect of the most realistic personal avatars to date. It makes Meta's VR personas look childish in comparison.
