
Patent Reveals Apple's work on Quantized Depths for Projection Point Cloud Compression for AR/VR HMD & more

(Cover image: point cloud)

 

Today the US Patent & Trademark Office published a patent application from Apple that relates to compressing point clouds captured by LiDAR systems. Point clouds can represent a virtual world shown on a 3-D display, a holographic display, or a head-mounted display, and they could also be used in connection with playing video games over the internet, with vehicle mapping systems and more.

 

As data acquisition and display technologies have become more advanced, the ability to capture point clouds comprising thousands or millions of points in 2-D or 3-D space, such as via LIDAR systems, has increased.

 

Also, the development of advanced display technologies, such as virtual reality or augmented reality systems, has increased potential uses for point clouds.

 

However, point cloud files are often very large and may be costly and time-consuming to store and transmit. For example, communication of point clouds over private or public networks, such as the Internet, may require considerable amounts of time and/or network resources, such that some uses of point cloud data, such as real-time uses, may be limited.

 

Also, storage requirements of point cloud files may consume a significant amount of storage capacity of devices storing the point cloud files, which may also limit potential applications for using point cloud data. This is what Apple's patent aims to remedy.

 

Apple's patent covers an encoder that may be used to generate a compressed point cloud to reduce costs and time associated with storing and transmitting large point cloud files.

 

In some embodiments, a system may include an encoder that compresses attribute and/or spatial information of a point cloud file such that the point cloud file may be stored and transmitted more quickly than non-compressed point clouds and in a manner that the point cloud file may occupy less storage space than non-compressed point clouds.
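
To make the idea concrete, here is a minimal sketch (not Apple's actual encoder) of how spatial information can be compacted: snapping floating-point coordinates to a grid and then delta-encoding them turns bulky 32-bit floats into small integers that standard entropy coders shrink well. The grid step, type names and Swift layout below are our own assumptions for illustration.

```swift
import Foundation

// A minimal sketch (not Apple's actual encoder) of how spatial information in a
// point cloud might be compacted: quantize float coordinates to a fixed grid,
// then delta-encode them so most values become small integers that compress well.
struct PointCloudEncoder {
    let gridStep: Float   // quantization step in metres (assumed parameter)

    // Quantize each coordinate to an integer grid index.
    func quantize(_ coordinates: [Float]) -> [Int32] {
        coordinates.map { Int32(($0 / gridStep).rounded()) }
    }

    // Delta-encode: store the first value, then differences between neighbours.
    func deltaEncode(_ values: [Int32]) -> [Int32] {
        guard let first = values.first else { return [] }
        var out: [Int32] = [first]
        for i in 1..<values.count {
            out.append(values[i] &- values[i - 1])
        }
        return out
    }
}

let encoder = PointCloudEncoder(gridStep: 0.01)           // 1 cm grid (assumption)
let xs: [Float] = [1.002, 1.011, 1.019, 1.031]            // raw X coordinates
let compacted = encoder.deltaEncode(encoder.quantize(xs)) // [100, 1, 1, 1]
print(compacted)
```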

 

In some embodiments, a system may include a decoder that receives one or more sets of point cloud data comprising compressed attribute information via a network from a remote server or other storage device that stores the one or more point-cloud files.

 

For example, a 3-D display, a holographic display, or a head-mounted display may be manipulated in real-time or near real-time to show different portions of a virtual world represented by point clouds.

 

In order to update the 3-D display, the holographic display, or the head-mounted display, a system associated with the decoder may request point cloud data from the remote server based on user manipulations of the displays, and the point cloud data may be transmitted from the remote server to the decoder and decoded by the decoder in real-time or near real-time. The displays may then be updated with updated point cloud data responsive to the user manipulations, such as updated point attributes.
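
As a hedged illustration of that decoder-side loop, the sketch below requests the point cloud data for a newly visible region from a remote server and then decodes it before updating the display. The endpoint URL, the ViewRegion type and the per-point byte estimate are assumptions for illustration, not details from the patent.

```swift
import Foundation

// A hedged sketch of the decoder-side flow the patent describes: when the user
// manipulates the display, request the point cloud data for the new view from a
// remote server, decode it, and hand it to the renderer. The endpoint URL, the
// ViewRegion type, and the per-point size estimate are assumptions for illustration.
struct ViewRegion: Codable {
    let centerX, centerY, centerZ: Float
    let radius: Float
}

func fetchAndDisplay(region: ViewRegion) async throws {
    // Hypothetical server endpoint that returns compressed point cloud data
    // for the requested region of the virtual world.
    var request = URLRequest(url: URL(string: "https://example.com/pointcloud/region")!)
    request.httpMethod = "POST"
    request.httpBody = try JSONEncoder().encode(region)

    let (compressed, _) = try await URLSession.shared.data(for: request)

    // Placeholder for the actual decoder: here we just report the payload size.
    let decodedPointCount = compressed.count / 16   // assumed 16 bytes per point
    print("Decoded roughly \(decodedPointCount) points; update the display here.")
}
```

In a real client this would run from an async context each time the user turns their head or drags the view.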

 

In some embodiments, a system may include one or more LIDAR systems, 3-D cameras, 3-D scanners, etc., and such sensor devices may capture spatial information, such as X, Y, and Z coordinates for points in a view of the sensor devices.

 

In some embodiments, such sensors may also capture attribute information for one or more points, such as color attributes, texture attributes, reflectivity attributes, velocity attributes, acceleration attributes, time attributes, modalities, and/or various other attributes.

 

In some embodiments, other sensors, in addition to LIDAR systems, 3-D cameras, 3-D scanners, etc., may capture attribute information to be included in a point cloud. For example, in some embodiments, a gyroscope or accelerometer, may capture motion information to be included in a point cloud as an attribute associated with one or more points of the point cloud.
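
Put simply, each captured point ends up carrying its position plus whatever attributes the sensors can supply. The illustrative Swift struct below shows one plausible layout; the field names are ours, not the patent's.

```swift
import Foundation

// A simple sketch of what one captured point might carry: the X, Y, Z position
// from a LiDAR / 3-D camera plus optional attributes such as color, reflectivity,
// a timestamp, and motion data from a gyroscope or accelerometer. The field names
// are illustrative, not taken from the patent.
struct CapturedPoint {
    var x, y, z: Float                 // spatial information
    var red, green, blue: UInt8        // color attribute
    var reflectivity: Float            // LiDAR return strength
    var timestamp: TimeInterval        // time attribute
    var angularVelocity: SIMD3<Float>? // optional motion attribute (gyroscope)
}

// A point cloud is then just a large collection of such points.
var cloud: [CapturedPoint] = []
cloud.append(CapturedPoint(x: 0.5, y: 1.2, z: 3.4,
                           red: 200, green: 180, blue: 150,
                           reflectivity: 0.62,
                           timestamp: Date().timeIntervalSince1970,
                           angularVelocity: SIMD3<Float>(0.01, 0.00, 0.02)))
print("Cloud holds \(cloud.count) point(s).")
```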

 

Apple's patent FIG. 3D below illustrates a point cloud being projected onto multiple projections; FIG. 3E illustrates a point cloud being projected onto multiple parallel projections.  

 

(Patent FIGS. 3D, 3E and 5A: point clouds, LiDAR, MR headset over a network)

 

Apple's patent FIG. 5A above illustrates components of an encoder that includes geometry, texture, and/or other attribute downscaling.
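
The projection idea in FIGS. 3D and 3E can be sketched roughly as follows: project the points orthographically onto a plane and store the distance to that plane in an image-like grid (unoccupied pixels double as a crude occupancy map), so that ordinary image or video codecs can compress the result. The plane choice, resolution and Swift types below are assumptions for illustration, not the patent's exact method.

```swift
import Foundation

// A rough sketch of the projection idea behind FIGS. 3D/3E: points are projected
// orthographically onto a 2-D plane, and the distance to the plane becomes a depth
// value stored in an image-like grid. That depth image can then be compressed with
// ordinary video/image codecs. Resolution and plane choice here are assumptions.
struct DepthImage {
    let width: Int, height: Int
    var depth: [Float?]               // nil = unoccupied pixel (occupancy map idea)

    init(width: Int, height: Int) {
        self.width = width
        self.height = height
        self.depth = Array(repeating: nil, count: width * height)
    }
}

// Project points onto the XY plane: (x, y) picks the pixel, z becomes the depth.
// If several points land on the same pixel, keep the nearest one.
func projectOntoXYPlane(points: [SIMD3<Float>], pixelsPerMetre: Float,
                        width: Int, height: Int) -> DepthImage {
    var image = DepthImage(width: width, height: height)
    for p in points {
        let u = Int(p.x * pixelsPerMetre), v = Int(p.y * pixelsPerMetre)
        guard (0..<width).contains(u), (0..<height).contains(v) else { continue }
        let index = v * width + u
        if image.depth[index] == nil || p.z < image.depth[index]! {
            image.depth[index] = p.z
        }
    }
    return image
}

let sample: [SIMD3<Float>] = [SIMD3(0.1, 0.2, 1.5), SIMD3(0.1, 0.2, 1.2)]
let depthImage = projectOntoXYPlane(points: sample, pixelsPerMetre: 10,
                                    width: 64, height: 64)
print("Occupied pixels:", depthImage.depth.compactMap { $0 }.count)
```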

 

The point cloud system could also be applied to Apple's Project Titan. For example, a vehicle equipped with a LIDAR system, a 3-D camera, or a 3-D scanner may include the vehicle's direction and speed in a point cloud captured by the LIDAR system, the 3-D camera, or the 3-D scanner.

 

For example, when points in a view of the vehicle are captured, they may be included in a point cloud, wherein the point cloud includes the captured points and associated motion information corresponding to a state of the vehicle when the points were captured.
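
A hedged sketch of that vehicle example: tag each batch of captured points with the vehicle's state at capture time so the motion information travels with the point cloud. The type and field names are illustrative only.

```swift
import Foundation

// A short sketch of the vehicle example: every batch of points captured by the
// car's LiDAR is tagged with the vehicle's state (speed and heading) at capture
// time, so the motion information travels with the point cloud. Names are
// illustrative only.
struct VehicleState {
    var speedMetresPerSecond: Float
    var headingDegrees: Float
}

struct TaggedPoint {
    var x, y, z: Float
    var vehicleState: VehicleState      // motion attribute from the capturing vehicle
}

func tag(points: [(Float, Float, Float)], with state: VehicleState) -> [TaggedPoint] {
    points.map { TaggedPoint(x: $0.0, y: $0.1, z: $0.2, vehicleState: state) }
}

let state = VehicleState(speedMetresPerSecond: 13.4, headingDegrees: 92.0)
let tagged = tag(points: [(1.0, 2.0, 3.0), (1.1, 2.0, 3.2)], with: state)
print("Tagged \(tagged.count) points captured at \(state.speedMetresPerSecond) m/s.")
```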

 

This is a highly technical patent that covers sections titled as follows (a rough sketch of the quantization idea appears after the list):

 

  • Example Intra 3D Frame Encoder
  • Example Intra 3D Frame Decoder
  • Segmentation Process
  • Depth/Geometry Patch Images
  • Uniform Quantization
  • Zero Biased Quantization
  • Non-Uniform Quantization
  • Generating Images Having Depth
  • Occupancy Map Compression
  • Video-Based Occupancy Map Compression
  • Patch Alignment and Size Determination in a 2D Bounding Box of an Occupancy Map
  • Point Cloud Resampling
  • 3D Motion Compensation
  • Compression/Decompression Using Multiple Resolutions, and more
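
As a rough illustration of two of those section titles, Uniform Quantization and Zero Biased Quantization, the sketch below contrasts a plain uniform quantizer with a deadzone-style quantizer that forces small values to zero. These are the textbook versions of the ideas, not the patent's exact formulas.

```swift
import Foundation

// A general illustration (not the patent's exact formulas): a uniform quantizer maps
// a value to the nearest multiple of a step size, while a zero-biased (deadzone)
// quantizer enlarges the bin around zero so small depth deltas collapse to zero
// and compress better.
func uniformQuantize(_ value: Float, step: Float) -> Int {
    Int((value / step).rounded())
}

func zeroBiasedQuantize(_ value: Float, step: Float, deadzone: Float) -> Int {
    // Values whose magnitude falls inside the deadzone are forced to zero.
    guard abs(value) > deadzone else { return 0 }
    let sign: Float = value < 0 ? -1 : 1
    return Int(sign * ((abs(value) - deadzone) / step).rounded())
}

let deltas: [Float] = [-0.9, -0.2, 0.1, 0.4, 2.3]
print(deltas.map { uniformQuantize($0, step: 0.5) })                   // [-2, 0, 0, 1, 5]
print(deltas.map { zeroBiasedQuantize($0, step: 0.5, deadzone: 0.3) }) // [-1, 0, 0, 0, 4]
```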

 

Apple's patent application 20200111237, which was published today by the U.S. Patent Office, was filed back in Q4 2019. Engineers and super geeks can review the full details here. Considering that this is a patent application, the timing of such a product coming to market is unknown at this time.

 

Apple Inventors

 

Khaled Mamou: Senior Software Engineer. Previously worked with MPEG on Point Cloud Compression; he also worked at AMD on video compression for the Xbox One & PS4.

Fabrice Robinet: Engineering Manager

Jungsun Kim: Senior Software Engineer. Previously worked at MediaTek on the video compression standardization team, researching and developing video compression algorithms for current and future video codec standards.

Valery Valentin: R&D Software Engineer in Computer Vision

 

