Apple's Most Advanced Project Titan Invention Surfaces Detailing Gesture Controls for a Heads-Up Display
In February Patently Apple posted a report titled "A New Project Titan Patent Surfaces in Europe Covering Interchangeable Heads-Up Display Interfaces & more." Today the US Patent & Trademark Office published a patent application from Apple that advances the heads-up display invention previously published in Europe. Apple details an advanced gesture control system for when a vehicle occupant wants the autonomous vehicle to perform a particular action that is not part of the planned journey, from home to work, for example. The occupant could make a gesture, recognized by an internal camera, to have the vehicle perform a particular action such as passing another vehicle, accelerating, or parking.
Apple notes that motorized vehicles which are capable of sensing their environment and navigating to destinations with little or no ongoing input from occupants, and may therefore be referred to as "autonomous" or "self-driving" vehicles, are an increasing focus of research and development. Given the multiplicity of choices that are typically available with respect to vehicle trajectories in real-world environments, occupant input or guidance with regard to selecting vehicle trajectories (without requiring traditional steering, braking, accelerating and the like) may be extremely valuable to the motion control components of such vehicles. However, providing interfaces for such guidance which are intuitive and easy to use may present a non-trivial challenge.
Apple's invention relates to various embodiments of methods and apparatus for gesture-based control of autonomous or semi-autonomous vehicles.
In at least some embodiments, a method may comprise one or more computing devices detecting that a triggering condition has been met for initiation of a gesture-based interaction session with respect to an occupant of a vehicle.
Detecting that the triggering condition has been met may itself comprise analyzing or matching a particular hand or body gesture made by the individual within an interaction zone (a three-dimensional region near the occupant, whose boundaries may be customizable).
Other modes of initiating an interaction session, such as using a voiced command, may be used in other embodiments, and combinations of signals of different modalities (e.g., voice, gesture, gaze direction etc.) may be used in some embodiments.
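The patent does not specify an implementation, but the trigger logic it describes can be sketched roughly as follows. In this hypothetical illustration, a session starts either when a hand position falls inside a customizable three-dimensional interaction zone or when a recognized voice command is heard; all names, coordinates, and command strings are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class InteractionZone:
    # A customizable 3-D region near the occupant, in cabin coordinates (meters).
    x_range: tuple
    y_range: tuple
    z_range: tuple

    def contains(self, point):
        x, y, z = point
        return (self.x_range[0] <= x <= self.x_range[1]
                and self.y_range[0] <= y <= self.y_range[1]
                and self.z_range[0] <= z <= self.z_range[1])

def session_triggered(hand_position, zone, voice_command=None):
    """A session begins if a gesture occurs inside the interaction zone,
    or if a recognized voice command is issued (multimodal initiation)."""
    gesture_in_zone = hand_position is not None and zone.contains(hand_position)
    voiced = voice_command in {"start session"}
    return gesture_in_zone or voiced
```

In a real system the hand position would come from the cabin camera's gesture tracker, and the zone boundaries would be per-occupant settings.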
The method may further comprise identifying one or more options for operations associated with the vehicle, which may be of interest to the occupant participating in the session. A wide variety of options for operations may be identified in different embodiments, including for example passing another vehicle, accelerating the vehicle, decelerating the vehicle, parking the vehicle, changing a direction in which the vehicle is moving, or generating a signal detectable outside the vehicle.
At least some of the options may be identified based at least in part on the analysis of signals collected from the external environment of the vehicle, e.g., using one or more cameras or other sensors. For example, based on the location of the vehicle and the views of the external environment, options to park the vehicle near a particular building such as a restaurant or a retail store, to turn the vehicle onto another road, to enter or exit a highway on-ramp, etc., may be identified.
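As a loose sketch of this step, the following hypothetical function maps a summary of analyzed sensor data to candidate operations of the kinds the patent lists. The `location_context` dictionary and its keys are invented stand-ins for the output of the vehicle's perception system:

```python
def identify_options(location_context):
    """Derive candidate vehicle operations from analyzed external signals.
    `location_context` is a hypothetical summary of camera/sensor analysis."""
    options = []
    if location_context.get("slower_vehicle_ahead"):
        options.append("pass vehicle ahead")
    for place in location_context.get("nearby_places", []):
        options.append(f"park near {place}")  # e.g., a restaurant or retail store
    if location_context.get("highway_on_ramp_visible"):
        options.append("enter highway")
    for road in location_context.get("cross_streets", []):
        options.append(f"turn onto {road}")
    return options
```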
The method may include populating a display with respective representations of at least some of the options. In some cases, the options identified may be assigned respective interest scores or relevance scores based for example on contextual information (such as the time of day, the status of various components of the vehicle such as the gas tank or battery, and so on), personal profiles or preferences of the occupant, and the like.
From among a plurality of options identified, representations of a subset (selected for example based on the scores) may be displayed at least initially; additional options may be displayed if the first subset does not include the option the occupant wishes to have implemented. Based at least in part on an analysis of a particular gesture made by the occupant (e.g., a swiping gesture within the interaction zone, or some other type of displacement of a hand within the interaction zone), and/or some other signal from the occupant, a particular option of the one or more options may be selected for implementation. An indication of the particular option which was selected may be provided (e.g., by highlighting the representation of that option on the display), and an operation corresponding to the particular option may be initiated. In some embodiments, after the selection is indicated to the occupant, another gesture or signal confirming or approving the selected option may be required before the corresponding operation is initiated.
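The ranking-and-selection flow described above can be sketched in a few lines. This is an illustrative approximation, not Apple's implementation: interest scores are modeled as simple additive weights from context and occupant preferences, and a swipe gesture cycles the highlight through the displayed subset before a separate confirming signal would initiate the operation:

```python
def rank_options(options, context_scores, preference_scores, top_k=3):
    """Assign each option an interest score from contextual information
    (time of day, vehicle status, etc.) and the occupant's preferences,
    then return the top_k options for initial display."""
    def score(option):
        return context_scores.get(option, 0.0) + preference_scores.get(option, 0.0)
    return sorted(options, key=score, reverse=True)[:top_k]

def select_with_swipes(displayed, swipe_count):
    """Each swipe gesture moves the highlight to the next displayed option,
    wrapping around. A confirming gesture or signal would then be required
    before the corresponding operation is actually initiated."""
    return displayed[swipe_count % len(displayed)]
```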
A Variety of Displays
A variety of displays may be used in different embodiments, such as a heads-up display incorporated within or attached to the vehicle, a three-dimensional display, a display of a wearable device (such as an augmented reality headset or eyeglasses) being worn by the occupant, a television screen, or a display of a portable computing device.
Apple's patent FIG. 1 presented below illustrates an example system environment in which operations such as trajectory changes of an autonomous vehicle may be guided at least in part via gestures.
Apple's patent FIG. 2 presented below illustrates an example vehicle environment comprising a plurality of sensors which may collect data that can be analyzed to respond to gestures or other signals from the vehicle's occupants.
Apple's patent FIG. 3a and FIG. 3b presented below illustrate examples of gestures which may be used to select from among options being displayed for a vehicle's movements; FIG. 4a, FIG. 4b and FIG. 4c illustrate examples of gestures which may be used to initiate and terminate interaction sessions.
Apple's patent FIG. 6 presented below illustrates an example scenario in which options for vehicle operations may be ranked relative to one another.
Apple's patent FIG. 8 presented below illustrates example sub-components of an interaction management device which analyzes multimodal signals to display and select options for operations of interest to one or more individuals.
Apple's patent application was filed back in Q3 2017 and published today by the U.S. Patent Office. The inventors on this patent application are noted as Scott Herz, Engineering Manager; Karlin Bark, Interaction Architecture (and Experience Prototyping); and Nguyen-Cat Le, UX/UI Sr. Motion Graphic Designer, now with Samsung as a Sr. Visual & Motion Designer.
Considering that this is a patent application, the timing of such a product to market is unknown at this time.
Another Patently Apple patent report on Project Titan was posted back in December 2017 titled "The First Apple Patent Regarding Autonomous Vehicle Navigation was published today by the U.S. Patent Office."
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details.