Apple Patents a system that Captures & Processes Motion Information in a 3D Point Cloud for both MR Headsets & Vehicles
Last month the US Patent & Trademark Office published a granted patent from Apple that relates to the compression and decompression of three-dimensional volumetric content, such as point clouds and immersive video, and to generating metadata that enables viewport-adaptive streaming and/or rendering of portions of that content. Apple's patent will be most appreciated by VR developers and engineers.
Below is an introductory video about the ABC's of a point cloud.
Apple's patent notes that as data acquisition and display technologies have become more advanced, the ability to capture three-dimensional volumetric representations, such as point clouds, immersive video content, etc. comprising thousands or millions of points in 3-D space has increased.
Also, the development of advanced display technologies, such as virtual reality or augmented reality systems, has increased potential uses for volumetric representations, such as point clouds, immersive video, etc.
However, volumetric content files are often very large and may be costly and time-consuming to store and transmit. For example, communication of volumetric point cloud or immersive video content over private or public networks, such as the Internet, may require considerable amounts of time and/or network resources, such that some uses of volumetric data, such as real-time uses, may be limited.
Also, storage requirements of volumetric point cloud or immersive video content files may consume a significant amount of storage capacity of devices storing such files, which may also limit potential applications for using volumetric point cloud or immersive video content.
In some embodiments, an encoder may be used to generate a compressed version of three-dimensional volumetric representations to reduce costs and time associated with storing and transmitting large volumetric point cloud or immersive video content files.
In some embodiments, a system may include an encoder that compresses attribute and/or spatial information of a volumetric point cloud or immersive video content file, so that the file may be stored and transmitted more quickly than non-compressed content and occupies less storage space.
In some embodiments, such compression may enable three-dimensional volumetric information to be communicated over a network in real-time or in near real-time.
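To make the encoder idea concrete, here is a minimal, hedged sketch of lossy point-cloud compression: quantize XYZ coordinates to a fixed grid, then entropy-code the packed bytes. The function names and the quantize-plus-zlib scheme are illustrative assumptions, not Apple's actual codec (the patent does not disclose this level of detail).

```python
import struct
import zlib

def compress_points(points, precision=0.001):
    """Quantize float (x, y, z) tuples to a fixed grid of cell size
    `precision`, pack them as little-endian int32s, and deflate the
    result with zlib. Illustrative sketch only."""
    quantized = [round(c / precision) for p in points for c in p]
    raw = struct.pack(f"<{len(quantized)}i", *quantized)
    return zlib.compress(raw)

def decompress_points(blob, precision=0.001):
    """Invert compress_points: inflate, unpack, and rescale back to
    floats (with quantization error bounded by precision / 2)."""
    raw = zlib.decompress(blob)
    ints = struct.unpack(f"<{len(raw) // 4}i", raw)
    return [tuple(i * precision for i in ints[k:k + 3])
            for k in range(0, len(ints), 3)]
```

The round trip is lossy only up to the chosen grid precision, which is the usual trade-off that lets such encoders shrink files enough for real-time transmission.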
In some embodiments, a system may include a decoder that receives one or more sets of volumetric point cloud or immersive video content data comprising compressed attribute information via a network from a remote server or other storage device that stores the one or more volumetric point cloud or immersive video content files.
With Respect to a Future Apple Headset: For example, a 3-D display, a holographic display, or a head-mounted display may be manipulated in real-time or near real-time to show different portions of a virtual world represented by volumetric point cloud or immersive video content.
To update the 3-D display, the holographic display, or the head-mounted display, a system associated with the decoder may request data from the remote server based on the user's manipulation of the display. That data may be transmitted from the remote server in the form of viewing areas and decoded by the decoder in real-time or near real-time. The display may then be updated in response to the user's manipulation, either with updated views within the current viewing area or with another viewing area that is requested and transmitted to the decoder.
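The request-decode-update loop described above can be sketched as a small client that maps the user's viewport to a viewing area and fetches only that area's sub-point-cloud. Everything here is an assumption for illustration: the `viewing_area_for` mapping, the `StreamingClient` class, and the `fetch_area` server call are hypothetical names, not the patent's API.

```python
def viewing_area_for(viewport_angle, num_areas=4):
    """Map a viewport angle (degrees around the object) to one of
    `num_areas` equal viewing areas on the circumference."""
    return int(viewport_angle % 360 // (360 / num_areas))

class StreamingClient:
    """Client-side sketch of viewport-adaptive streaming: request a
    viewing area from the server only when the viewport enters it."""

    def __init__(self, server):
        self.server = server   # any object exposing fetch_area(area_id)
        self.cache = {}        # decoded sub-point-clouds keyed by area id

    def update(self, viewport_angle):
        area = viewing_area_for(viewport_angle)
        if area not in self.cache:   # new viewing area: request it
            self.cache[area] = self.server.fetch_area(area)
        return self.cache[area]      # views within a cached area are free
```

The point of the cache is the bandwidth win the patent targets: small viewport motions within one viewing area need no new network traffic, while crossing into a new area triggers a single request.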
In some embodiments, a system includes one or more LIDAR systems, 3-D cameras, 3-D scanners, or similar sensor devices that capture spatial information, such as X, Y, and Z coordinates for points in the sensors' view.
In some embodiments, the spatial information may be relative to a local coordinate system or may be relative to a global coordinate system (for example, a Cartesian coordinate system may have a fixed reference point, such as a fixed point on the earth, or may have a non-fixed local reference point, such as a sensor location).
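The local-versus-global distinction can be illustrated with a minimal sketch. The function name and the pure-translation simplification are my own assumptions; a real pipeline would also apply the sensor's orientation as a rotation before translating.

```python
def local_to_global(points, sensor_origin):
    """Translate sensor-relative (local) XYZ coordinates into a global
    Cartesian frame whose origin is a fixed reference point, given the
    sensor's position in that frame. Translation-only sketch: a real
    system would also rotate by the sensor's orientation."""
    sx, sy, sz = sensor_origin
    return [(x + sx, y + sy, z + sz) for x, y, z in points]
```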
In some embodiments, such sensors may also capture attribute information for one or more points, such as color attributes, texture attributes, reflectivity attributes, velocity attributes, acceleration attributes, time attributes, modalities, and/or various other attributes. In some embodiments, other sensors, in addition to LIDAR systems, 3-D cameras, 3-D scanners, etc., may capture attribute information to be included in volumetric point cloud or immersive video content.
With Respect to Future Vehicles: For example, in some embodiments, a gyroscope or accelerometer may capture motion information to be included in a point cloud as an attribute associated with one or more of its points. A vehicle equipped with a LIDAR system, a 3-D camera, or a 3-D scanner may include the vehicle's direction and speed in the point cloud it captures: when points in the vehicle's view are captured, they are included in a point cloud together with the motion information corresponding to the state of the vehicle at capture time.
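The idea of tagging each captured point with the vehicle's motion state can be sketched as follows. The `PointRecord` fields and the `capture_frame` helper are hypothetical names for illustration, not structures from the patent.

```python
from dataclasses import dataclass

@dataclass
class PointRecord:
    x: float
    y: float
    z: float
    heading_deg: float   # vehicle direction when the point was captured
    speed_mps: float     # vehicle speed when the point was captured

def capture_frame(raw_points, vehicle_state):
    """Attach the vehicle's motion state at capture time to every
    captured (x, y, z) point, yielding points whose attributes record
    the state of the vehicle when they were captured."""
    heading, speed = vehicle_state
    return [PointRecord(x, y, z, heading, speed) for x, y, z in raw_points]
```

Storing motion as a per-point attribute (rather than per-file metadata) matters because points captured at different moments along the vehicle's path carry different states.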
Apple's patent FIG. 1A below illustrates a point cloud and a plurality of viewing areas for the point cloud, wherein a viewport can be positioned to view the point cloud from multiple different views within each of the respective viewing areas around the circumference of the point cloud, according to some embodiments. FIGS. 1B and 1C illustrate the portions of the point cloud viewable from different ones of the viewing areas, wherein those portions are segmented into sub-point clouds for the different viewing areas.
Apple's patent FIG. 2A above illustrates a point cloud and a plurality of viewing areas for the point cloud positioned around a circumference of the point cloud, wherein the viewing areas include views at different distances from the point cloud.
Apple's granted patent 11,418,769, published by the U.S. Patent Office in mid-August, marked the first time this invention was made public; it was never published as a patent application. To review Apple's rich patent details along with many more patent figures, click here.