Apple invents a Sketchbook App for iPad that will allow a user to merge sketches with real-world photos
Today the US Patent & Trademark Office published a patent application from Apple that relates to computer graphics, and in particular, to systems and methods for sketch-based placement of computer-generated graphical objects into real-world photos using an iPad.
In some instances, a user may populate their computer-generated room by selecting virtual objects from a pre-existing library. However, this limits the customizability of the computer-generated room.
Apple's patent covers various implementations, systems, and methods for sketch-based placement of computer-generated graphical objects (sometimes also referred to as "virtual objects", "graphical objects", or "extended reality (XR) objects") into a computer-generated graphical setting (sometimes also referred to as a "virtual environment", a "graphical environment", or an "XR environment").
The method is performed on a device, such as an iPad, that includes one or more cameras. In various implementations, the method includes obtaining an input directed to a content creation interface (e.g., a sketchpad), wherein the input corresponds to a sketch of a candidate object, and wherein the content creation interface facilitates creation of computer-generated graphical objects presentable using the device.
The method also includes: obtaining a three-dimensional (3D) model using the input that corresponds to the sketch of the candidate object; generating a computer-generated graphical object using the obtained 3D model; and causing presentation of the computer-generated graphical object together with imagery obtained using the one or more cameras of the device.
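The claimed flow — capture a sketch, derive a 3D model from it, then present the resulting object together with live camera imagery — can be sketched in a simplified, platform-agnostic form. The class names, the model library, and the stroke-recognition stub below are illustrative assumptions for this post, not Apple's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Sketch:
    """User input captured by the content creation interface."""
    strokes: list  # each stroke is a list of (x, y) points

@dataclass
class Model3D:
    name: str
    mesh_path: str

# Hypothetical stand-in for the sketch-to-model step: pretend a
# recognizer labels the strokes, then look that label up in a
# small library of 3D assets.
MODEL_LIBRARY = {"palm_tree": Model3D("palm_tree", "models/palm_tree.usdz")}

def recognize(sketch: Sketch) -> str:
    # Placeholder: a real system would classify the stroke geometry.
    return "palm_tree"

def obtain_3d_model(sketch: Sketch) -> Model3D:
    return MODEL_LIBRARY[recognize(sketch)]

@dataclass
class Scene:
    """The computer-generated setting composited over camera imagery."""
    camera_frame: str
    objects: list = field(default_factory=list)

def place_object(scene: Scene, sketch: Sketch) -> Scene:
    # Generate a graphical object from the obtained 3D model and
    # present it together with the current camera frame.
    scene.objects.append(obtain_3d_model(sketch))
    return scene

scene = Scene(camera_frame="frame_0001")
scene = place_object(scene, Sketch(strokes=[[(0, 0), (1, 2)]]))
print([m.name for m in scene.objects])  # → ['palm_tree']
```

The point of the sketch is the ordering of the claimed steps: input, model acquisition, object generation, then composited presentation.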
In some implementations, the iPad is configured to present computer-generated graphical content and to enable optical see-through or video pass-through of at least a portion of the physical environment (e.g., including a table) on the display.
As shown in FIG. 4A, the iPad also displays a content creation interface #410 and a tools panel #420 on the display. According to some implementations, the content creation interface is configured to detect/receive user inputs such as sketches or strokes made with an Apple Pencil held by the user, or touch/finger inputs from the user. According to some implementations, the tools panel includes selectable tools configured to change one or more characteristics of the user inputs retrospectively and/or prospectively, such as line thickness, line color, line type, fill color, texture filler, and/or the like.
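The retrospective vs. prospective distinction — restyling strokes already on the canvas versus only affecting strokes drawn afterward — could be modeled along these lines (a minimal sketch; the `Canvas` API and names are assumptions, not the patent's design):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class StrokeStyle:
    thickness: float = 1.0
    color: str = "black"

@dataclass
class Stroke:
    points: list
    style: StrokeStyle

class Canvas:
    def __init__(self):
        self.strokes = []
        self.current_style = StrokeStyle()

    def draw(self, points):
        # New strokes pick up the current (prospective) style.
        self.strokes.append(Stroke(points, self.current_style))

    def set_color(self, color, retrospective=False):
        # Prospective change: future strokes use the new color.
        self.current_style = replace(self.current_style, color=color)
        if retrospective:
            # Retrospective change: restyle already-drawn strokes too.
            self.strokes = [
                Stroke(s.points, replace(s.style, color=color))
                for s in self.strokes
            ]

canvas = Canvas()
canvas.draw([(0, 0), (1, 1)])
canvas.set_color("green", retrospective=True)
canvas.draw([(2, 2)])
print([s.style.color for s in canvas.strokes])  # → ['green', 'green']
```

With `retrospective=False` the first stroke would have stayed black while the second came out green.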
In some implementations, in response to detecting a user input within the content creation interface that corresponds to a sketch of a candidate object, the iPad is configured to obtain a 3D model based on the sketch of the candidate object and present a computer-generated graphical object within the computer-generated graphical setting #425 based on the 3D model. As such, computer-generated graphical objects are placed within the computer-generated graphical setting based on user sketches directed to the content creation interface.
As shown in FIG. 4B, the instance #450 of the first computer-generated graphical presentation scenario associated with time T.sub.2 shows a computer-generated graphical object #475 (e.g., a 3D palm tree) displayed within the computer-generated graphical setting #425 in response to detecting a user input #465 (e.g., a sketch of a palm tree) within the content creation interface #410.
Apple's patent FIG. 6 illustrates a flowchart representation of a method of sketch-based placement of computer-generated graphical objects.
For more details, review Apple's patent application number 20210383613.
Considering that this is a patent application, the timing of such a product to market is unknown at this time.