Apple's Second Siri Patent Discusses All Things Hands-Free
On January 26, 2012, the US Patent & Trademark Office published Apple's second Siri-centric patent. Our first report on Apple's Siri was titled "Apple introduces us to Siri, the Killer Patent," which described a large basket of concepts and possible future applications. Apple's second Siri patent is all about the "hands-free context."
Apple's Patent Background
Many existing operating systems and devices use voice input as a modality by which the user could control operation. One example is voice command systems, which map specific verbal commands to operations, for example to initiate dialing of a telephone number by speaking the person's name. Another example is Interactive Voice Response (IVR) systems, which allow people to access static information over the telephone, such as automated telephone service desks.
Many voice command and IVR systems are relatively narrow in scope and could only handle a predefined set of voice commands. In addition, their output is often drawn from a fixed set of responses.
An intelligent automated assistant, also referred to in the patent application as a virtual assistant, is able to provide an improved interface between human and computer, including the processing of natural language input. Such an assistant is described in related patent application 12/987,982 for "Intelligent Automated Assistant," filed Jan. 10, 2011, which is incorporated into this patent by reference. It allows users to interact with a device or system using natural language, in spoken and/or text forms. Such an assistant interprets user inputs, operationalizes the user's intent into tasks and parameters to those tasks, executes services to support those tasks, and produces output that is intelligible to the user.
Virtual assistants are capable of using general speech and natural language understanding technology to recognize a greater range of input, enabling generation of a dialog with the user. Some virtual assistants could generate output in a combination of modes, including verbal responses and written text, and could also provide a graphical user interface (GUI) that permits direct manipulation of on-screen elements. However, the user may not always be in a situation where he or she could take advantage of such visual output or direct manipulation interfaces. For example, the user may be driving or operating machinery, or may have a sight disability, or may simply be uncomfortable or unfamiliar with the visual interface.
Any situation in which a user has limited or no ability to read a screen or interact with a device via contact (including using a keyboard, mouse, touch screen, pointing device, and the like) is referred to in the patent as a "hands-free context." For example, in situations where the user is attempting to operate a device while driving, as mentioned above, the user could hear audible output and respond using their voice, but for safety reasons should not read fine print, tap on menus, or enter text (ya think?!).
Hands-free contexts present special challenges to the builders of complex systems such as virtual assistants. Users demand full access to features of devices whether or not they are in a hands-free context. However, failure to account for particular limitations inherent in hands-free operation could result in situations that limit both the utility and the usability of a device or system, and could even compromise safety by causing a user to be distracted from a primary task such as operating a vehicle.
Apple's present invention generally relates to a user interface for a system such as a virtual assistant which is automatically adapted for hands-free use. A hands-free context is detected via automatic or manual means, and the system adapts various stages of a complex interactive system to modify the user experience to reflect the particular limitations of such a context. The system of the present invention thus allows for a single implementation of a virtual assistant or other complex system to dynamically offer user interface elements and to alter user interface behavior to allow hands-free use without compromising the user experience of the same system for hands-on use.
For example, in various embodiments, the system of the present invention provides mechanisms for adjusting the operation of a virtual assistant so that it provides output in a manner that allows users to complete their tasks without having to read details on a screen. Furthermore, in various embodiments, the virtual assistant could provide mechanisms for receiving spoken input as an alternative to reading, tapping, clicking, typing, or performing other functions often achieved using a graphical user interface.
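To make the idea concrete, here's a minimal sketch of context-sensitive output rendering. This is purely illustrative, not Apple's implementation; the class and function names are invented, but they show how one assistant response could carry both a detailed on-screen form and a condensed spoken form, with the active context deciding which is used.

```python
from dataclasses import dataclass

@dataclass
class AssistantResponse:
    """A single assistant reply, carrying both visual and spoken forms."""
    screen_text: str   # full detail, suitable for reading
    spoken_text: str   # condensed summary, suitable for text-to-speech

def render(response: AssistantResponse, hands_free: bool) -> str:
    """Pick the output form appropriate to the current context.

    In a hands-free context the assistant speaks a short summary; in a
    hands-on context it can display full detail on screen.
    """
    if hands_free:
        return f"[speak] {response.spoken_text}"
    return f"[display] {response.screen_text}"

reply = AssistantResponse(
    screen_text="3 new messages: Bob (2:14 PM), Ann (2:20 PM), Sue (2:31 PM)",
    spoken_text="You have three new messages.",
)
print(render(reply, hands_free=True))   # → [speak] You have three new messages.
```

The point of the single `AssistantResponse` object is that the underlying functionality stays identical, per the patent's claim; only the presentation layer branches on context.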
In various embodiments, the system of the present invention provides underlying functionality that is identical to (or that approximates) that of a conventional graphical user interface, while allowing for the particular requirements and limitations associated with a hands-free context. More generally, the system of the present invention allows core functionality to remain substantially the same, while facilitating operation in a hands-free context.
In some embodiments, systems built according to the techniques of the present invention allow users to freely choose between hands-free mode and conventional ("hands-on") mode, in some cases within a single session. For example, the same interface could be made adaptable to both an office environment and a moving vehicle, with the system dynamically making the necessary changes to user interface behavior as the environment changes.
According to various embodiments of the present invention, any of a number of mechanisms could be implemented for adapting operation of a virtual assistant to a hands-free context. Such an assistant engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions.
According to various embodiments of the present invention, a virtual assistant may be configured, designed, and/or operable to detect a hands-free context and to adjust its operation accordingly in performing various different types of operations, functionalities, and/or features, and/or to combine a plurality of features, operations, and applications of an electronic device on which it is installed. In some embodiments, a virtual assistant of the present invention could detect a hands-free context and adjust its operation accordingly when receiving input, providing output, engaging in dialog with the user, and/or performing (or initiating) actions based on discerned intent.
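As a rough sketch of what "detecting a hands-free context via automatic or manual means" might look like, consider a heuristic that combines a few environmental signals. The signal names below are hypothetical, not from the patent filing:

```python
def detect_hands_free(signals: dict) -> bool:
    """Heuristic hands-free detection from environmental signals.

    Any single strong cue (a manual user toggle, a car Bluetooth
    connection) is enough on its own; otherwise fall back to motion
    speed as a weak cue that the user may be in a vehicle.
    """
    if signals.get("user_forced_hands_free"):
        return True
    if signals.get("connected_to_car_bluetooth"):
        return True
    # Moving faster than a brisk walk suggests the user is in a vehicle.
    return signals.get("speed_mps", 0.0) > 5.0

print(detect_hands_free({"connected_to_car_bluetooth": True}))  # → True
print(detect_hands_free({"speed_mps": 1.2}))                    # → False
```

Note that the manual toggle comes first: the patent explicitly allows the user to choose the mode, so automatic inference should never override an explicit preference.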
Actions could be performed, for example, by activating and/or interfacing with any applications or services that may be available on an electronic device, as well as services that are available over an electronic network such as the Internet. In various embodiments, such activation of external services could be performed via application programming interfaces (APIs) or by any other suitable mechanism(s). In this manner, a virtual assistant implemented according to various embodiments of the present invention could provide a hands-free usage environment for many different applications and functions of an electronic device, and with respect to services that may be available over the Internet.
In addition, in various embodiments, the virtual assistant of the present invention provides a conversational interface that the user may find more intuitive and less burdensome than conventional graphical user interfaces. The user could engage in a form of conversational dialog with the assistant using any of a number of available input and output mechanisms, depending in part on whether a hands-free or hands-on context is active. Examples of such input and output mechanisms include, without limitation, speech, graphical user interfaces (buttons and links), text entry, and the like. The system could be implemented using any of a number of different platforms, such as device APIs, the web, email, and the like, or any combination thereof.

Hmm, I wonder if that's just a hangover from the original Siri founders or if Apple will actually try to invade the PC and Mobile markets like they did with iTunes? Time will tell what Apple does on this particular front.
The patent application continues by noting that requests for additional input could be presented to the user in the context of a conversation, in an auditory and/or visual manner. Short- and long-term memory could be engaged so that user input could be interpreted in proper context, given previous events and communications within a given session as well as historical and profile information about the user.
In various embodiments, the virtual assistant of the present invention could control various features and operations of an electronic device. For example, the virtual assistant could call services that interface with functionality and applications on a device via APIs or by other means, to perform functions and operations that might otherwise be initiated using a conventional user interface on the device. Such functions and operations may include, for example, setting an alarm, making a telephone call, sending a text message or email message, adding a calendar event, and the like. Such functions and operations may be performed as add-on functions in the context of a conversational dialog between a user and the assistant.
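The functions and operations above (setting an alarm, sending a text, and so on) suggest a simple dispatch pattern: a discerned intent is looked up in a registry and routed to the matching device API. The sketch below is a hypothetical illustration of that pattern, assuming invented intent names and handlers; the patent describes calling device functionality via APIs, not this exact scheme.

```python
# Hypothetical handlers standing in for real device APIs.
def set_alarm(time: str) -> str:
    return f"Alarm set for {time}"

def send_text(to: str, body: str) -> str:
    return f"Text to {to}: {body}"

# Registry mapping a discerned intent name to its handler.
INTENT_HANDLERS = {
    "set_alarm": set_alarm,
    "send_text": send_text,
}

def perform(intent: str, **params) -> str:
    """Dispatch a discerned intent, with its parameters, to a device API."""
    handler = INTENT_HANDLERS.get(intent)
    if handler is None:
        return f"Sorry, I can't do '{intent}' yet."
    return handler(**params)

print(perform("set_alarm", time="7:00 AM"))  # → Alarm set for 7:00 AM
```

A registry like this is one way a single conversational front end could control many unrelated device functions: adding a capability means registering a handler, not changing the dialog engine.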
Such functions and operations could be specified by the user in the context of such a dialog, or they may be automatically performed based on the context of the dialog. One skilled in the art will recognize that the assistant could thereby be used as a mechanism for initiating and controlling various operations on the electronic device. By collecting contextual evidence that contributes to inferences about the user's current situation, and by adjusting operation of the user interface accordingly, the system of the present invention enables hands-free operation of a virtual assistant as just such a mechanism for controlling the device.
Apple's patent FIG. 7 is a flow diagram depicting a method of operation of a virtual assistant that supports dynamic detection of and adaptation to a hands-free context, according to one embodiment.
This is a huge patent, and those who want to know everything about Siri can check out patent application 20120022872, which was originally filed in Q2 2010 by original inventors Thomas Gruber and Harry Saddler. Here's a temporary link to the patent that's valid for about 48 hours.
A Personal Note to Apple about Virtual Assistants
I don't own an iPhone with Siri yet, so I have no personal view on this service that many are professing to be nothing shy of phenomenal. But on occasion, I have had to contact Apple about problems concerning my iMac and iPod. I can't tell you how many times I banged the phone on the table out of sheer frustration with your "Virtual Assistant." The damn thing is programmed to say things like, "I don't understand you, goodbye." No, I don't understand you, you stupid …. Well, you get my drift. You can't argue with a virtual assistant. When things go wrong with a virtual assistant, they really go wrong, and the little virtual tin man on the other end of the line actually gets to hang up on you. Well I'll be. This wasn't an isolated incident either. So – here's to hoping that Apple will bring back real people to answer frustrated customers with Apple hardware problems. I'm sure that raking in $13 billion per quarter means Apple could easily hire real people in this down economy. I've spoken to some of Apple's great sales reps over the years. The problem is getting to them without their virtual bouncer kicking me off the line. Enough said.
Other Noteworthy Patent Applications Published Today
Six Apple patent applications relating to display technology were published by the USPTO today. The links that we list below are only valid for 24-48 hours. So if you want to investigate these patents in any detail, you should get to them as soon as you can.
Apple's patent FIG. 2 of patent application 20120019494, shown below, illustrates a representative display undergoing calibration, where a calibration-system ambient light sensor having a Lambertian response could be oriented at an angle of incidence of about 90°.
Patent 20120019546: COLOR CORRECTION OF MIRRORED DISPLAYS
Patent 20120019494: ALIGNMENT FACTOR FOR AMBIENT LIGHTING CALIBRATION
Patent 20120019493: DISPLAY BRIGHTNESS CONTROL TEMPORAL RESPONSE
Patent 20120019492: DISPLAY BRIGHTNESS CONTROL BASED ON AMBIENT LIGHT LEVELS
Patent 20120019152: DISPLAY BRIGHTNESS CONTROL BASED ON AMBIENT LIGHT ANGLES
Patent 20120019151: AMBIENT LIGHT CALIBRATION FOR ENERGY EFFICIENCY IN DISPLAY SYSTEMS
Notice: Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables. Apple's patent applications have provided the Mac community with a clear heads-up on some of Apple's greatest product trends including the iPod, iPhone, iPad, iOS cameras, LED displays, iCloud services for iTunes and more. About Comments: Patently Apple reserves the right to post, dismiss or edit comments.