Apple to Bring Z Depth Mapping to Final Cut Pro X
On April 16, 2015, the U.S. Patent & Trademark Office published a patent application from Apple that reveals an invention relating to Z Depth image segmentation for Final Cut Pro X. With its acquisitions of the Israeli companies PrimeSense and, more recently, LinX Imaging, Apple will be able to bring 3D mapping to future iSight cameras for photography, video and applications relating to facial detection and facial recognition. With 3D mapping coming to Apple's cameras, Apple sees the need to bring Z Depth mapping to Final Cut Pro X.
Apple's Patent Background
When a camera takes a photograph, parts of the scene within view of the camera are closer to the camera than other parts of the scene. The distance of an object in a scene from a camera is sometimes referred to as the "depth" of that object. The farther an object is from the camera, the greater the depth of the object.
Standard cameras can only focus at one depth at a time. The distance from the camera's lens to the plane that is in perfect focus is called the "focusing distance". The focusing distance is determined by the focal length of the lens (a fixed property for lenses that do not change shape) and the distance of the lens from the film or light sensor in the camera. Anything closer to or farther from the lens than the focusing distance will be blurred. The amount of blurring depends on how far the object is from the focusing distance and on whether the object lies between the camera and the focusing distance or beyond it. In addition to the distance that is in perfect focus, there is a range of distances on either side of it in which the focus is close to perfect and the blurring is imperceptible or acceptably low.
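The patent background gives no formulas, but the near and far limits of acceptable focus described above can be estimated with the standard hyperfocal-distance approximation from photographic optics. The sketch below is an illustration of that general idea, not anything from Apple's filing, and the approximation only holds when the subject distance is much larger than the focal length:

```python
def dof_limits(focal_mm, f_number, coc_mm, subject_mm):
    """Approximate near/far limits of acceptable focus, in millimeters.

    Uses the hyperfocal distance H = f^2 / (N * c) + f and the common
    approximations near = H*s / (H + s) and far = H*s / (H - s),
    which hold when the subject distance s is much larger than f.
    """
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_mm / (h + subject_mm)
    far = h * subject_mm / (h - subject_mm)
    return near, far

# A 50 mm lens at f/1.8 (circle of confusion 0.03 mm) focused at 3 m:
near, far = dof_limits(50, 1.8, 0.03, 3000)
```

For this example the zone of acceptable focus runs from roughly 2.8 m to 3.2 m; anything outside that range blurs progressively.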
Distinct from ordinary cameras which capture images with a single depth of focus for a given image, there are "light field cameras". Light field cameras determine the directions from which rays of light are entering the camera. As a result of these determinations, light field cameras do not have to be focused at a particular focusing distance. A user of such a camera shoots without focusing first and sets the focusing distance later (e.g., after downloading the data to a computer).
Apple Invents an Editing Application User Interface for Z Depth Image Segmentation
Apple's invention relates to an image organizing and editing application that receives and edits image data from a light field camera. The image data from the light field camera includes information on the direction of rays of light reaching the camera. This information lets the application determine a distance from the light field camera (a "depth") for each portion of the image (e.g., a depth of the part of the scene that each pixel in an image represents).
The applications of some embodiments use the depth information to break the image data down into layers based on the depths of the objects in the image. In some embodiments, the layers are determined based on a histogram that plots the fraction of an image at a particular depth against the depths of objects in the image.
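Apple does not publish its algorithm, but a histogram-based layer split of the kind the patent describes might look roughly like the sketch below. The bin count and the rule that a layer ends wherever the histogram drops to zero are illustrative assumptions:

```python
def depth_layers(depths, num_bins=16, gap_threshold=0.0):
    """Split a flat list of per-pixel depths into (near, far) layer ranges.

    Builds a histogram of the fraction of the image at each depth and
    starts a new layer wherever that fraction falls to gap_threshold.
    """
    lo, hi = min(depths), max(depths)
    width = (hi - lo) / num_bins or 1.0  # avoid zero width for flat scenes
    counts = [0] * num_bins
    for d in depths:
        counts[min(int((d - lo) / width), num_bins - 1)] += 1
    fractions = [c / len(depths) for c in counts]

    layers, current = [], None
    for i, frac in enumerate(fractions):
        if frac > gap_threshold:
            if current is None:
                current = [lo + i * width, lo + (i + 1) * width]
            else:
                current[1] = lo + (i + 1) * width  # extend the open layer
        elif current is not None:
            layers.append(tuple(current))
            current = None
    if current is not None:
        layers.append(tuple(current))
    return layers
```

A scene with objects clustered near 1 m and near 5 m would come back as two layers, matching the patent's idea of grouping the image by the depths at which objects actually sit.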
In some embodiments, the applications provide a control for setting a depth at which a foreground of the image is separated from the background of the image. The applications of some such embodiments obscure the objects in the designated background of the image (e.g., by graying out the pixels representing those objects or by not displaying those pixels at all). In some such embodiments, an initial setting for the control is based on the determined layers (e.g., the initial setting places the first layer in the foreground, or the last layer in the background, or uses some other characteristic(s) of the layers to determine the default value).
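As a minimal sketch of that foreground/background control, the function below grays out every pixel deeper than the chosen threshold. The flat-gray obscuring and the parallel-list representation are assumptions for illustration, not Apple's implementation:

```python
def split_foreground(pixels, depths, threshold, gray=(128, 128, 128)):
    """Keep pixels at or in front of the threshold depth; gray out the rest.

    pixels and depths are parallel lists: pixels[i] is an (r, g, b) tuple
    and depths[i] is that pixel's distance from the camera.
    """
    return [px if d <= threshold else gray for px, d in zip(pixels, depths)]
```

An initial threshold could then be derived from the detected layers, e.g. the far edge of the nearest layer, mirroring the default-value behavior the patent describes.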
The applications of some embodiments also provide layer selection controls that allow a user to command the applications to hide or display particular layers of the image. In some such embodiments, the number of layers and the number of layer selection controls vary based on the depths of the objects in the image data.
In some embodiments, the applications provide controls that allow a user to select objects for removal from an image. In some such embodiments, the applications remove the selected object by erasing a set of pixels that are (i) in the same layer as a user selected portion of an image, and (ii) contiguous with the selected portion of the image.
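A contiguous, same-layer erase of this kind is essentially a flood fill. The sketch below uses a depth tolerance as a crude stand-in for "same layer" (an assumption, since the patent summary does not specify the membership test):

```python
from collections import deque

def erase_contiguous_layer(depth_map, seed, tol=0.5):
    """Return the set of (row, col) pixels to erase: those that are
    (i) within tol of the seed pixel's depth and (ii) 4-connected
    to the seed through pixels that also pass the depth test.
    """
    height, width = len(depth_map), len(depth_map[0])
    target = depth_map[seed[0]][seed[1]]
    seen, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < height and 0 <= nc < width
                    and (nr, nc) not in seen
                    and abs(depth_map[nr][nc] - target) <= tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen
```

Clicking on an object would seed the fill, and only the connected pixels at that object's depth would be erased, leaving same-depth pixels elsewhere in the frame untouched.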
The image below makes it clear that the "application" that Apple refers to in their patent is Final Cut Pro X.
Today, advanced stereoscopic 3D post-production tools supporting Final Cut Pro X are already available from Noise Industries (FxFactory), which specifically covers a "Z Depth Map Mode for 2D to 3D Dimensionalization," and from Dashwood Cinema Solutions.
Although Apple has a number of patent figures on this subject, they're difficult to appreciate if you're not someone in this field. So to help the rest of us understand this feature, the video below is about Z Depth mapping for an Adobe application. The video will give you an idea as to what Apple could be bringing to Final Cut Pro X in the future.
Apple credits Andrew Bryant and Daniel Pettigrew as the inventors of patent application 20150104101 which was originally filed in Q4 2013. Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details.