Apple Invention Covers a Multitask Neural Network System for Controlling Important Autonomous Vehicle Functions in Realtime

New Apple Invention Covers Inspection and Primary Neural Networks for Controlling Future Autonomous Vehicles

(Cover image: autonomous vehicle)


In late May, Patently Apple posted a report titled "Apple's Autonomous Shuttle Service will use Volkswagen Vans." Finally, Project Titan was taking shape — not initially as an advanced next-generation vehicle but rather as an autonomous ride-sharing service vehicle. Yesterday Patently Apple posted a report titled "Apple was granted a Patent this week for the Roof / Body Structure of a Vehicle," showing that Apple's engineers had worked on a next-gen vehicle that may still be alive somewhere in Apple's equivalent of Area 51.


Apple's patent goes down the rabbit hole on the topic of neural networks. In general, Apple's invention relates to systems and algorithms for machine learning and machine learning models, and in particular to using machine learning techniques to determine the reliability of the neural networks of an autonomous vehicle.


The filing also touches on training data: a scaled image of a car would still be a coherent image of a car. Thus, for the training of neural networks analyzing images, augmented training data sets may be generated by randomly translating, flipping, or scaling the images in the initial training data set.
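The kind of augmentation the filing describes can be sketched in a few lines. This is a minimal illustration, not Apple's implementation; the function name and the specific transforms are assumptions:

```python
import numpy as np

def augment(images, rng):
    """Expand a training set with flips, translations, and scalings.

    Hypothetical sketch of the augmentation the patent describes; the
    choice of transforms is illustrative, not from the filing.
    """
    augmented = []
    for img in images:
        # Horizontal flip: a mirrored car is still a coherent car.
        augmented.append(np.fliplr(img))
        # Random translation: shift the image a few pixels sideways.
        shift = rng.integers(1, 4)
        augmented.append(np.roll(img, shift, axis=1))
        # Crude "scaling": drop every other pixel (nearest-neighbour downscale).
        augmented.append(img[::2, ::2])
    return augmented

rng = np.random.default_rng(0)
base = [np.arange(16, dtype=float).reshape(4, 4)]
extra = augment(base, rng)
print(len(extra))  # three augmented variants per source image
```

Each source image yields several label-preserving variants, which is what makes augmentation attractive for vision training sets.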


Overview of Apple's Invention


Various embodiments of methods and systems are disclosed herein to determine the reliability of the output of a neural network using an inspection neural network (INN).


The inspection neural network may be used to examine data generated from a primary neural network (PNN) during the PNN's decision making or inference process.


The examined data may include the initial input data to the PNN, the final output of the PNN, and also any intermediate data generated during the inference process. Based on this data, the INN may generate a reliability metric for an output of the PNN. The reliability metric generated using the embodiments described herein may be significantly more accurate than reliability metrics generated using conventional methods.


The generation of accurate reliability metrics for the output of neural networks is of great importance in many applications of neural networks. As one example, a neural network may be used by an autonomous vehicle to analyze images of the road, generating outputs that are used by the vehicle's navigation system to drive the vehicle. The output of the neural network may indicate, for example, a drivable region in the image; other objects on the road such as other cars or pedestrians; and traffic objects such as traffic lights, signs, and lane markings. In such a setting, it is important that the navigation system receive not just the analytical output of the neural network, but also a reliability measure indicating the confidence level or probability of error associated with the output. The navigation system may adjust its behavior according to the reliability measure.


For example, when the autonomous vehicle is driving under bad lighting conditions, the output generated by the neural network may be less reliable. In that case, the navigation system may be provided low measures of reliability for the network's outputs, which may cause the navigation system to slow the speed of the vehicle. In some cases, the navigation system may switch from a sensor generating less reliable data to another sensor that is generating more reliable data.
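The behavior described above can be sketched as a simple policy. The thresholds and speed factors below are invented for illustration and do not come from the patent:

```python
def adjust_navigation(reliability, speed_limit):
    """Illustrative navigation policy, not Apple's: scale the target speed
    down as the perception reliability metric drops, and flag a sensor
    switch when reliability falls below a floor. Thresholds are assumptions."""
    if reliability < 0.3:
        return 0.3 * speed_limit, True    # crawl and switch to a better sensor
    if reliability < 0.7:
        return 0.6 * speed_limit, False   # degraded conditions: slow down
    return speed_limit, False             # trust the perception output

speed, switch_sensor = adjust_navigation(0.5, 100.0)
print(speed, switch_sensor)  # 60.0 False
```

In a real system the policy would of course be far richer, but the interface is the point: the controller consumes the reliability metric alongside the perception output.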


In one conventional approach, a reliability metric may be generated for a neural network output using a mathematical function, such as a polynomial function, that computes the measure based on the output of neurons in the neural network. However, such mathematical functions do not generally produce satisfactory results, because they fail to capture the complexity of the network's decision-making process. In another conventional approach, the neural network itself may be configured to generate a reliability metric along with its output.
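A common instance of such a self-reported metric, offered here as background rather than anything from the patent, is simply taking the maximum softmax probability of the network's own output as its confidence:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())   # subtract max for numerical stability
    return e / e.sum()

# A conventional "self-reported" confidence: the max softmax probability.
# The logits are illustrative; a miscalibrated network can emit large
# logits, and hence high confidence, even when it is wrong.
logits = np.array([4.0, 1.0, 0.5])
probs = softmax(logits)
confidence = probs.max()
print(round(float(confidence), 3))  # 0.926
```

This is exactly the kind of measure the patent criticizes: it is computed from the same activations that produced the answer, so it inherits the network's blind spots.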


However, such self-reporting of reliability is typically flawed, because the output is evaluated based on the same knowledge that was used to generate it, and the network is generally blind to its own shortcomings. For this reason, self-reported reliability metrics tend to be biased in favor of the network, and they do not represent good measures of the network's reliability.


In some embodiments disclosed herein, a computer-implemented method is described. The method includes receiving input data for a PNN captured by one or more sensors. The method then generates an output using the PNN based on the input data. The method includes capturing certain inspection data associated with the generation of the output. The method also includes generating an indication of reliability for the output using an INN based on the inspection data. The method further includes transmitting the output and the indication of reliability to a controller. In the embodiments, the PNN is trained using a different set of training data from the training data set used to train the INN.


In some embodiments disclosed herein, a system is described. The system includes a sensor that is configured to capture sensor data. The system also includes a data analyzer configured to generate an output based on the sensor data using a PNN. The system further includes an analyzer inspector configured to capture inspection data associated with the generation of the output by the data analyzer, and then generate an indication of reliability for the output using an INN, based on the inspection data. In the embodiments, the PNN is trained using a different set of training data from the training data set used to train the INN.


In at least some embodiments, the sensor data comprises an image captured from a camera on an autonomous vehicle, and the navigation system of the autonomous vehicle uses the indication of reliability to navigate the vehicle.


In yet other embodiments disclosed herein, a training method for neural networks is described. The method includes providing a PNN configured to generate output from respective input data and an INN configured to receive inspection data associated with applications of the PNN and output a reliability metric for output of the PNN based at least in part on the inspection data. The method includes separating a set of input data for the PNN into a first data set, a second data set, and a third data set. The PNN is trained using the first data set. The INN is trained using a first inspection data set generated from applying the PNN to the second data set. The INN is then tested using a second inspection data set generated from applying the PNN to the third data set.
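The three-way split at the heart of this training method can be sketched as follows. The split fractions and function name are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def three_way_split(data, rng, fractions=(0.6, 0.2, 0.2)):
    """Split the input data into the three sets the training method calls
    for: one to train the PNN, one whose PNN inspection data trains the
    INN, and one whose PNN inspection data tests the INN.
    The fractions are illustrative, not from the patent."""
    idx = rng.permutation(len(data))
    n1 = int(fractions[0] * len(data))
    n2 = n1 + int(fractions[1] * len(data))
    return data[idx[:n1]], data[idx[n1:n2]], data[idx[n2:]]

rng = np.random.default_rng(42)
data = np.arange(100)                 # stand-in for 100 training examples
pnn_train, inn_train, inn_test = three_way_split(data, rng)
print(len(pnn_train), len(inn_train), len(inn_test))  # 60 20 20
```

Keeping the three sets disjoint is what gives the INN its claimed objectivity: it never grades the PNN on examples the PNN was trained on, nor is it tested on examples it was trained on itself.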


The reliability metrics generated using the embodiments disclosed herein are more accurate than reliability metrics calculated from mathematical functions. Importantly, the INN is a neural network that can be trained to recognize particular behaviors of the PNN during its inference process that are indicative of the reliability of its output. Further, the INN is not biased in favor of the PNN, because the INN is trained using different data than the PNN. Thus, the INN is capable of making an objective examination of the inference process of the PNN. This objectivity makes the INN's reliability metric more accurate and useful in real-world settings. These and other benefits and features of the inventive concepts are discussed in further detail below, in connection with the figures.


In other embodiments, the PNN of the analyzers may generate output other than a confidence map, depending on the task. For example, in some embodiments, the PNN may be configured to infer one or more classifications of a subject in the image. Such classifications may include, for example, types of objects observed on the road such as other vehicles, pedestrians, lane markings, or traffic signs.


Some of this was covered in a December report titled "Apple Participated at the NIPS Machine Learning Event of the Year Revealing a Number of their Deep Projects."


(Image: pedestrian identification via deep learning)

In this week's patent application we see Apple's patent FIG. 1 below which is a block diagram illustrating one embodiment of a system using an inspection neural network to generate a reliability indicator.


(Image: neural engine for vehicles)

Apple's patent FIG. 2 presented above is a block diagram illustrating an autonomous vehicle that employs an inspection neural network; FIG. 6 is a diagram illustrating a process of augmenting a data set used to train an inspection neural network.



Apple's patent application was originally filed back in Q4 2017. Considering that this is a patent application, the timing of such a product to market is unknown at this time.


In May we posted a report titled "Surprising for Americans to learn that AI was Largely Pioneered in Canada." That was made abundantly clear this week when two of Apple's patents came to light covering an autonomous vehicle being run by an advanced inspection neural network for assessing neural network reliability. The inventors were mainly Canadian experts in this field.


Some of the inventors on Apple's patent include: Yichuan (Charlie) Tang, University of Toronto, Machine Learning/Deep Learning; Nitish Srivastava, University of Toronto, Special Project Group at Apple; and Russ Salakhutdinov, Canadian researcher, Professor at Carnegie Mellon University, and Director of AI Research at Apple.


This report is one of two that cover Apple's neural networks controlling an autonomous vehicle. The second report will be available in the next hour covering all-new territory.



Patently Apple presents a detailed summary of patent applications and/or granted patents with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. About Making Comments on our Site: Patently Apple reserves the right to post, dismiss or edit any comments. Those using abusive language or engaging in negative behavior will be blacklisted on Disqus.

