Apple won three 'Project Titan' patents today covering advanced LiDAR Systems for Autonomous Vehicles using Machine Learning+
Today, the U.S. Patent and Trademark Office officially published a series of Project Titan patents for Apple Inc. Two of the key patents cover LiDAR (also styled lidar) systems. The first patent covers predicting lidar data using machine learning, which is currently being used in Apple's autonomous training vehicles. The second patent covers enabling lidar detection on autonomous vehicles. The third patent covers a multi-stage active suspension actuator.
Predicting Lidar Data using Machine Learning
According to Apple, vehicles use light detection and ranging (lidar) sensors to detect the nearby environment. An autonomous vehicle may use lidar sensors to detect objects or other vehicles on the road and determine appropriate driving actions to perform. The autonomous vehicle may also include additional sensors, such as an optical sensor (e.g., a camera) or a radar sensor, to gather additional information about the nearby environment. Although lidar sensors are capable of sensing environmental information in a form that is interpretable by machine learning algorithms or decision-making systems, lidar sensors are currently slower than existing cameras or radar sensors.
Cameras or radar sensors may be able to capture image frames or radar data frames, respectively, at a rate that is faster than the capture rate of lidar sensors. Thus, the autonomous vehicle may use image data or radar data from the cameras or radar sensors to supplement lidar data from the lidar sensors. The autonomous vehicle may read the image data or radar data to determine objects in the nearby environment. A combination of image data, radar data and lidar data allows the autonomous vehicle to have a more complete understanding of its surroundings. Because lidar data is more robust than image data or radar data for machine learning and interpretation, additional lidar data must be made available at a rate faster than the capture rate of the lidar sensor.
Apple's granted patent covers systems and methods for predicting lidar data at a vehicle using machine learning. In some embodiments, a vehicle may include one or more sensors, a light detection and ranging (lidar) sensor and a lidar prediction system. The one or more sensors include an optical sensor, a radar sensor, or both, configured to capture sensor data of a particular view. The lidar sensor is configured to capture lidar data of the particular view. The lidar prediction system includes a predictive model and is configured to generate a predicted lidar frame by applying the predictive model to the sensor data captured by the one or more sensors, then send the predicted lidar frame to an external system.
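To make that claim concrete, here is a minimal Python sketch of the described pipeline under our own assumptions: a hypothetical LidarPredictionSystem class applies a stand-in predictive model to stacked camera and radar frames and hands the predicted lidar frame off to an external system. The class and method names, the toy linear model and the frame shapes are illustrative only and are not taken from Apple's implementation.

```python
# Minimal sketch of the lidar-prediction idea described in the patent, not Apple's code.
# LidarPredictionSystem, predict_frame and send_to_external_system are hypothetical names.
import numpy as np

class LidarPredictionSystem:
    """Applies a predictive model to camera/radar frames to emit a predicted lidar frame."""

    def __init__(self, predictive_model):
        self.predictive_model = predictive_model  # e.g., a trained regression or neural net

    def predict_frame(self, camera_frame, radar_frame):
        # Stack the faster sensor modalities into one input tensor for the model.
        sensor_input = np.concatenate([camera_frame, radar_frame], axis=-1)
        return self.predictive_model(sensor_input)

    def send_to_external_system(self, predicted_lidar_frame):
        # Stand-in for handing the frame to a planner or decision-making system.
        print("predicted lidar frame:", predicted_lidar_frame.shape)

# Placeholder model: a fixed linear projection of the stacked sensor channels to depth.
def toy_model(sensor_input):
    weights = np.array([0.5, 0.3, 0.2, 0.7])  # one weight per stacked channel
    return sensor_input @ weights

camera = np.random.rand(64, 64, 3)   # RGB image frame (faster capture rate than lidar)
radar = np.random.rand(64, 64, 1)    # radar return frame
system = LidarPredictionSystem(toy_model)
system.send_to_external_system(system.predict_frame(camera, radar))
```

The point of the sketch is the data flow: faster camera/radar frames go in, a lidar-like frame comes out between real lidar captures.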
In other embodiments, a method for generating a predictive model for predicting lidar frames is described. The method includes receiving, from one or more training vehicles, a plurality of lidar frames for a plurality of locations, wherein the plurality of lidar frames indicates a plurality of objects at the plurality of locations.
The method also includes receiving, from the one or more training vehicles, a plurality of sensor frames from one or more sensors of the one or more training vehicles for the plurality of locations. The method further includes determining a mapping between lidar data points of the plurality of lidar frames and the plurality of sensor frames by aligning the plurality of lidar frames with the plurality of sensor frames from the one or more sensors.
The method also includes generating a predictive model configured to convert sensor frames from one or more sensors to a lidar frame based on the mapping between the lidar data points and the plurality of sensor frames from the one or more sensors.
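Read together, those steps amount to a standard supervised-learning setup: paired lidar and sensor frames from training vehicles become the targets and inputs for fitting a model. The sketch below uses a simple least-squares fit as a stand-in for whatever model Apple actually trains; generate_predictive_model and the synthetic frames are assumptions made purely to illustrate the mapping step.

```python
# Hedged sketch of the model-generation method described above; the least-squares
# mapping stands in for whatever machine-learning model the patent actually covers.
import numpy as np

def generate_predictive_model(lidar_frames, sensor_frames):
    """Fit a mapping from per-pixel sensor channels to lidar depth values."""
    # Flatten every (H, W, C) sensor frame into rows of per-pixel features.
    X = np.concatenate([f.reshape(-1, f.shape[-1]) for f in sensor_frames])
    # Flatten the matching lidar frames into per-pixel target depths.
    y = np.concatenate([f.reshape(-1) for f in lidar_frames])
    # Least-squares "mapping" between sensor data points and lidar data points.
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)

    def predictive_model(sensor_frame):
        h, w, c = sensor_frame.shape
        return (sensor_frame.reshape(-1, c) @ weights).reshape(h, w)

    return predictive_model

# Synthetic "training vehicle" data: 4-channel sensor frames and matching lidar frames.
sensor_frames = [np.random.rand(32, 32, 4) for _ in range(10)]
lidar_frames = [f @ np.array([0.5, 0.3, 0.2, 0.7]) for f in sensor_frames]
model = generate_predictive_model(lidar_frames, sensor_frames)
print(model(np.random.rand(32, 32, 4)).shape)  # (32, 32) predicted lidar frame
```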
Apple's patent FIG. 1 illustrates a block diagram of a vehicle having one or more sensors configured to detect another vehicle; FIG. 2 illustrates multiple data frames used in detecting a vehicle or understanding a nearby environment for enabling autonomous driving.
Apple's patent FIG. 3 below illustrates a model generation system configured to generate a predictive model that produces a predicted lidar frame from one or more sensor frames from one or more sensors; FIGS. 4A-B illustrate sensor data captured by a vehicle; FIG. 6A illustrates an image frame depicting a vehicle obstructing the view of a road sign; FIG. 6B illustrates a predicted lidar frame generated by a detection system depicting an environment; FIG. 6C illustrates a heads-up display generated based on predicted lidar frames indicating obstructed portions of the road sign.
For more details, review granted patent 11,092,690.
Enabling Lidar Detection
Roads and road signs include reflective materials, such as reflective paint or attachments, to improve their optical visibility by reflecting light. Lane markers generally include reflective paint in addition to physical bumps to ensure that drivers are aware of a lane's outer bounds even in low-light situations. License plates on vehicles also include reflective materials so that the text on the plate is visible to other drivers, including police officers.
Autonomous vehicles include numerous sensors configured to detect obstacles that may appear while driving. These obstacles may include other vehicles driving along the same road. Vehicles on the road may be detected by sensors such as a light detection and ranging (lidar) sensor or a radar sensor. The sensors can generally detect a vehicle by determining that a lidar signal or a radar signal has been reflected by it, but reflected signals alone may not be enough to determine that the obstacle is a vehicle. Detectability of other vehicles on the road can therefore be improved by making the reflected signals more useful to the sensors.
Apple's granted patent covers systems and methods for enabling lidar detection on a vehicle. In some embodiments, a vehicle may include a light source configured to emit a light signal, a receiver sensor configured to receive a reflected light signal based at least in part on the light signal reflected from a plurality of reflectors, and a controller. The controller may be configured to identify an arrangement pattern of the plurality of reflectors based at least in part on the reflected light signal and determine that the plurality of reflectors is coupled to another vehicle based at least in part on an identification of the arrangement pattern.
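In other words, the controller decides "that's a vehicle" by recognizing a specific spatial pattern of reflectors in the lidar return. The sketch below shows one way such a check could work, using a made-up normalized-spacing pattern and tolerance; KNOWN_VEHICLE_PATTERN and identify_arrangement are hypothetical names and are not taken from the patent.

```python
# Illustrative sketch of matching a detected reflector arrangement against a known
# pattern; the pattern, tolerance and matching rule are assumptions, not Apple's spec.
import numpy as np

KNOWN_VEHICLE_PATTERN = np.array([0.0, 0.4, 1.0])  # assumed normalized reflector spacing

def identify_arrangement(reflector_positions, tolerance=0.05):
    """Return True if the detected reflector spacing matches the known vehicle pattern."""
    positions = np.sort(np.asarray(reflector_positions, dtype=float))
    span = positions[-1] - positions[0]
    if span == 0 or len(positions) != len(KNOWN_VEHICLE_PATTERN):
        return False
    normalized = (positions - positions[0]) / span  # scale-invariant spacing
    return bool(np.all(np.abs(normalized - KNOWN_VEHICLE_PATTERN) < tolerance))

# Reflector positions (meters along another car's rear) recovered from the lidar return.
print(identify_arrangement([2.10, 2.90, 4.10]))  # True: spacing matches the pattern
print(identify_arrangement([2.10, 3.10, 4.10]))  # False: evenly spaced, no match
```

Normalizing the spacing makes the check independent of how far away the other vehicle is, which is one plausible reason a patterned arrangement is more useful than raw reflections.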
Apple's patent FIG. 1 below illustrates a block diagram of a vehicle having one or more sensors configured to detect another vehicle; FIG. 2a illustrates a side view of a sensor configured to send a signal to a plurality of reflectors embedded in a vehicle; FIGS. 3A&B illustrate block diagrams of a vehicle having multiple patterns of pluralities of reflectors identifying multiple orientations of the vehicle.
For more details, review Apple's granted patent 11,092,689.
Apple's third Project Titan patent granted today is titled "Multi-Stage Active Suspension Actuator," which relates to vehicle suspension systems and, in particular, to active suspension actuators and suspension systems that use them. For more, review granted patent 11,090,997.