Google Invents an Original Search Gesture for Future Devices
Google has invented an original search-based gesture for future Android devices. In some cases, the new "continuous gestures" will allow a user to simply and quickly draw a circle around what they want to search for in the form of the letter "g" or the combination "g + o," as noted in our cover graphic above. When the user lifts their finger from the display, the search is automatically initiated. If a user wishes to use another search engine such as Yahoo or Wikipedia, they simply use a different continuous gesture in the form of the letter "s," which triggers a pop-up menu with search engine options such as Wikipedia, Yahoo or others. Because it's a new gesture concept, Google has gone to extraordinary lengths to explain it to both users and, realistically, the USPTO examiners, to ensure that they nail down this gesture as their own going forward. Our report tries to accommodate Google's presentation, and if you're not the reading type, then pretend that it's a Playboy article and only look at the pictures.
The Pitfalls of Using Traditional Methods of Touch Controls on Touchscreens
Google states that many touch-sensitive devices are designed to minimize the need for external device buttons, in order to maximize screen or other component size while still providing a small and portable device. Thus, it may be desirable to provide input mechanisms for a touch-sensitive device that rely primarily on user interaction via touch to detect user input and control operations of the device.
Due to dedicated buttons (e.g., on a keyboard, mouse, or trackpad), classical computing systems may provide a user with more options for input. For example, a user may use a mouse or trackpad to "hover" over an object (icon, link) and select that object to initiate functionality (open a browser window to a linked address, open a document for editing). In this case, functionality is tied to content, meaning that a single operation (selecting an icon with a mouse button click) selects a web site for viewing and opens the browser window to view the content for that site.
Touch-sensitive devices present problems with respect to the detection of user input that are not present with more classical devices as described above. For example, if a user seeks to select text via a touch-sensitive device, it may be difficult for the user to pinpoint the desired text because the user's finger (or stylus) is larger than the desired text presented on the display. User selection of text via a touch-sensitive device may be even more difficult if text (or other content) is presented in close proximity with other content. For example, it may be difficult for a touch-sensitive device to accurately detect a user's intended input to highlight a portion of text of a news article presented via a display. Thus, a touch-sensitive device may be beneficial for more simple user input (e.g., user selection of an icon or link to initiate a function), but may be less suited for more complex tasks (e.g., a copy/paste operation).
As discussed above, for classical computing devices, a user may initiate operations based on content not tied to particular functionality rather easily, because using a mouse or trackpad to select objects presented via a display may be more accurate at detecting user intent. Use of a classical computing device for such tasks may further be easier, because a keyboard provides a user with specific external non-gesture mechanisms for initiating functionality (e.g., Ctrl-C and Ctrl-V for a copy/paste operation, or dedicated mouse buttons for such functionality) that are not available on many touch-sensitive devices.
A user may similarly initiate functionality based on untied content via copy and paste operations on a touch-sensitive device. However, due to the above-mentioned difficulty in detecting user intent for certain types of input, certain complex tasks that are easy to initiate via a classical computing device are more difficult on a touch-sensitive device. For example, for each part of a complex task, a user may experience difficulty getting the touch-sensitive device to recognize input. The user may be forced to enter each step of a complex task multiple times before the device recognizes the user's intended input.
For example, for a user to copy and paste solely via touch screen gestures, the user must initiate editing functionality with a first independent gesture, select desired text with a second gesture, identify an operation to be performed (e.g., cut, copy, etc.), open the functionality they would like to perform (e.g., browser window opened to search page), select a text entry box, again initiate editing functionality, and select a second operation to be performed (e.g., paste). There is therefore opportunity, for each of the above-mentioned independent gestures needed to cause a copy and paste operation, for error in user input detection. This may make a more complex task, e.g., a copy and paste operation, quite cumbersome, time consuming, and/or frustrating for a user.
Google Introduces the Smart "Continuous Gesture"
To address these deficiencies with detection of user input for more complex tasks, Google's invention is generally directed to improvements in the detection of user input for a touch-sensitive device. In one example, as shown in FIG. 1 below, touch-sensitive device 101 is configured to detect a continuous gesture (110) on a touch-sensitive surface (e.g., display 102 of device 101 in FIG. 1), by a finger (116) or stylus. Going forward, the term "continuous gesture" (e.g., the continuous gesture shown in FIG. 1) refers to a continuous gesture drawn on a touch sensitive surface and detected by a touch sensitive device in response to the drawn gesture. The continuous gesture indicates both a function to be executed and content that execution of the function is based on. The continuous gesture includes a first portion (112) that indicates the function to be executed. The continuous gesture also includes a second portion 114 that indicates content in connection with the function indicated by first portion 112 of the continuous gesture.
Google's patent FIG. 1 shows a user's finger has drawn a continuous gesture that includes a first portion indicating a character "g". The first portion may indicate particular functionality, for example the character "g" may represent functionality to perform a search via a search engine available at www.google.com.
The example illustrated in FIG. 1 is merely one example of functionality that may be indicated by a first portion of a continuous gesture. Other examples, including other characters indicating different functionality, or a "g" character indicating functionality other than a search via www.google.com, are also contemplated by the techniques of this disclosure.
As also shown in FIG. 1, a user has used their finger to draw a second portion of the continuous gesture that substantially encircles, or lassos, content 120. The content may be displayed via the display, while the second portion 114 may completely, repeatedly or partially surround the content.
Although FIG. 1 shows the continuous gesture drawn by a finger directly on the display encircling the content presented on the display, the continuous gesture may instead be drawn by user interaction with a touch-sensitive non-display surface of the device, or another device entirely.
In various examples, the content may be any image presented via the display. For example, the content could be an image of text presented via the display. In other examples, the content may be a photo, video, icon, link, or other image presented via the display.
The continuous gesture may be continuous in the sense that the first and second portions are detected while a user maintains contact with a touch-sensitive surface. As such, the device may be configured to detect user contact with the touch-sensitive surface, and also detect when a user has released contact with the touch-sensitive surface.
The device shown in FIG. 1 is configured to detect the first and second portions of the continuous gesture, and correspondingly initiate functionality associated with the first portion based on the content indicated by the second portion. According to the example of FIG. 1, the continuous gesture may cause the touch-sensitive device to execute a Google search for content.
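To make the two-portion idea concrete, here's a minimal sketch of how such a dispatch could work in code. All names (`ContinuousGesture`, `dispatch`, the handler table) are our own illustration, not anything from Google's patent filing:

```python
# Hypothetical sketch: the first portion of a continuous gesture names a
# function, the second (lasso) portion captures the content it acts on.
from dataclasses import dataclass


@dataclass
class ContinuousGesture:
    function_char: str   # e.g. "g" drawn as the first portion
    content: str         # text enclosed by the second (lasso) portion


def dispatch(gesture: ContinuousGesture) -> str:
    """Map the first portion to a function and apply it to the content."""
    handlers = {
        "g": lambda q: "https://www.google.com/search?q=" + q,
        "w": lambda q: "https://en.wikipedia.org/wiki/" + q,
    }
    handler = handlers.get(gesture.function_char)
    if handler is None:
        raise ValueError(f"unrecognized gesture character: {gesture.function_char!r}")
    return handler(gesture.content)
```

So lifting the finger after drawing a "g" around the word "pizza" would, in this sketch, resolve to a Google search URL for "pizza."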
Examples of the "Continuous Gesture" used for Simple Online Searches
Google's series of patent Figures that are illustrated below (4A-4F) represent conceptual diagrams that illustrate various examples of continuous gestures.
Google's first example is noted as continuous gesture 410A of FIG. 4A, which shows us a first gesture portion 412A that is a "g" character. A second portion 414A is drawn surrounding content 120. Continuous gesture 410B of FIG. 4B includes a second portion 414B that, instead of surrounding first portion 412B, surrounds content 120 at a different position on the display than first portion 412B. As shown in patent FIG. 4C, continuous gesture 410C has a first portion 412C that is an "s" character. This continuous gesture may indicate a search in general.
In some examples, when a user releases contact with a display when drawing continuous gesture 410C, detection of gesture 410C may cause options to be provided to the user to select a destination (e.g., a URL) for a search operation to be performed based on content indicated by second portion 414C.
For example, a user may be presented with options to search via a particular search engine (e.g., Google, Yahoo, Bing), or to search for specific information (e.g., contacts, phone numbers, restaurants - see FIG. 3 below). As a specific example, Google presents the Wikipedia example represented by FIGS. 4E and 4F. Continuous gestures 410E and 410F each illustrate a continuous gesture that includes a first portion that is a "w" character. The "w" character may indicate, in one example, that a search is to be performed based on the content 120 via the URL at www.wikipedia.org.
If you follow Google's logic, then it could very well translate into allowing users or companies to create their own letter combinations to signify a specific search engine related to their industry or profession. Meaning instead of drawing "g + o" to use Google's search engine, a lawyer could draw a letter combination such as "L + O" and have that combination translate into a search in LexisNexis. That could open up some interesting possibilities going forward.
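That speculation boils down to a user-editable registry mapping drawn letter combinations to search destinations. Here's a hedged sketch of what that might look like; the registry, function names, and URLs are our assumptions for illustration, not part of the patent:

```python
# Illustrative registry binding drawn letter combinations to search
# destinations, as the article speculates users might one day customize.
SEARCH_REGISTRY = {
    "go": "https://www.google.com/search?q=",
    "w":  "https://en.wikipedia.org/wiki/",
}


def register_gesture(chars: str, url_prefix: str) -> None:
    """Let a user bind a drawn letter combination to a search engine."""
    SEARCH_REGISTRY[chars.lower()] = url_prefix


def resolve(chars: str, content: str) -> str:
    """Build a search URL from a gesture's letters and lassoed content."""
    prefix = SEARCH_REGISTRY.get(chars.lower())
    if prefix is None:
        raise KeyError(f"no search engine bound to {chars!r}")
    return prefix + content


# The lawyer's "L + O" example from above:
register_gesture("lo", "https://www.lexisnexis.com/search?q=")
```

After registering, drawing "L + O" around the word "tort" would resolve to a LexisNexis query in this sketch.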
Example of the "Continuous Gesture" used for Complex Online Searches
Google's patent FIG. 6 shown below is a conceptual diagram illustrating yet another example detection of a continuous gesture consistent with the techniques of this invention. In this scenario, Google is illustrating a complex online search technique whereby a user will combine a series of words found in a news article. For instance, if a user has a news article open that displays the words "restaurant" and "Thai food" and a map of New York City, the user may, via a series of continuous gestures, cause a search to be performed on the phrase "Thai food restaurant New York City."
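The multi-gesture search in FIG. 6 amounts to collecting one term per lasso and joining them into a single query. A minimal sketch, with our own assumed function name:

```python
# Sketch of the FIG. 6 technique: each continuous gesture contributes one
# selected term, and the terms are joined into a single search query.
def build_query(selected_terms: list[str]) -> str:
    """Combine terms captured by a series of continuous gestures."""
    return " ".join(term.strip() for term in selected_terms if term.strip())


# e.g. three lassoed selections from the news article in Google's example:
query = build_query(["Thai food", "restaurant", "New York City"])
# -> "Thai food restaurant New York City"
```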
The example illustrated in patent FIG. 6 may be advantageous in certain situations, because the continuous gesture 610 provides a user with a heightened level of flexibility to initiate functionality based on user-selected content. According to known touch-sensitive devices, a user would need to go through several copy-and-paste operations or type in the terms of a particular search, to execute similar functionality. Both of these options may be cumbersome, time consuming, difficult, and/or frustrating for a user. By providing a touch-sensitive device configured to detect continuous gestures as just described, a user's ability to easily and quickly initiate more complex tasks (e.g., a search operation) may be improved.
In Google's patent FIG. 7 a touch-sensitive device may, in response to detection of completion of gesture 710 provide a user with an option list (718). For example, where a user has selected content 720 and indicated a search with a continuous gesture 710, the device may present the user with various options for performing the search. The device may, based on user selection of content, automatically determine options that a user may likely want to search based on the indicated content. For example, if a user selects the text "pizza," or a photo of a pizza, the device may determine restaurants near the user (where the device includes global positioning system (GPS) functionality), and present web pages or phone numbers associated with those restaurants for selection.
The device may instead or in addition provide a user with an option to open a Wikipedia article describing the history of the term "pizza," or a dictionary entry describing the meaning of the term "pizza." Other options are also contemplated and consistent with this disclosure. In still other examples, based on user selection of content via a continuous gesture, device 101 may present to a user other phrases or phrase combinations that the user may wish to search for. For example, where a user has selected the term pizza, a user may be provided one or more selectable buttons to initiate a search for the terms "pizza restaurant," "pizza coupons," and/or "pizza ingredients."
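The phrase suggestions above are essentially templates applied to the selected term. A hedged sketch, assuming a fixed template list purely for illustration (a real device would presumably rank suggestions from usage data):

```python
# Illustrative suggestion generator for a lassoed term, mirroring the
# "pizza restaurant" / "pizza coupons" / "pizza ingredients" example.
def suggest_phrases(term: str) -> list[str]:
    """Offer likely search phrases built around the selected content."""
    templates = ["{} restaurant", "{} coupons", "{} ingredients"]
    return [t.format(term) for t in templates]
```

For the term "pizza," this yields the three selectable phrases Google's example describes.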
In other examples, the device may provide options to a user based on words, images (photo, video) that are viewable along with user selected content, such as other words/photos/videos displayed with the selected content.
Dealing with Ambiguity
Google's FIGS. 8A and 8B illustrated below are conceptual diagrams illustrating various examples of resolving ambiguity in detection of a continuous gesture. For example, as shown by gesture 810A in FIG. 8A, a user has drawn a second portion 814A only surrounding a portion of content 820A. As such, detection of the gesture may be somewhat ambiguous, because the device may be unable to determine whether the user desired to initiate a search based on only a portion of a word, phrase, photo, or video presented by content 820A, or whether the user intended to initiate a search based on the entire word, phrase, photo, or video of content 820A.
In response to the detection of ambiguous gesture 810A, the device may present the user with various options to resolve the ambiguity, such as the combinations of words, phrases, photos, or video the user may have intended to search for. For example, if content 820A was the word "Information" and the user circled only the letters "Infor," the device may present the user with options to select one of "Info," "Inform," or "Information."
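The "Infor" example can be read as simple prefix matching in both directions: candidate words that extend the circled text, plus shorter words the circled text itself extends. A sketch, with the candidate vocabulary standing in for a real dictionary lookup:

```python
# Sketch of the FIGS. 8A/8B disambiguation step: when a lasso covers only
# part of a word, offer plausible completions of the enclosed letters.
def disambiguate(partial: str, vocabulary: list[str]) -> list[str]:
    """Return vocabulary entries the partially-circled text could mean."""
    p = partial.lower()
    return [w for w in vocabulary
            if w.lower().startswith(p) or p.startswith(w.lower())]


options = disambiguate("Infor", ["Info", "Inform", "Information", "Pizza"])
# -> ["Info", "Inform", "Information"]
```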
Google's patent FIG. 3 shown below is a block diagram illustrating components configured to detect a continuous gesture.
Google's patent was originally filed in Q3 2011 and published by the USPTO this month.
Notice: Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trade Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables. About Comments: Patently Apple reserves the right to post, dismiss or edit comments.
Here are a Few Great Sites covering our Original Report
MacSurfer, The New York Times "Headlines Around the World" section + Blogrunner NYTimes, Real Clear Technology, Twitter, Facebook, The UX Daily, Electronista, Apple Investor News, Google Reader, Techmundo Brazil, Macnews, MarketWatch, Engadget, Droid Life, DroidDog, 9to5 Google, Phandroid, Talk Android, SlashGear, Android Community, Amanz Malaysia, Blognone Thailand, Engadget Germany, El Caparazon Spain, phoneArena UK, Ubergizmo, Venture Beat, Hong Kong Silicon China, golem Germany, t3n Germany, Techmeme, and more.
Note: The sites that we link to above offer you an avenue to make your comments about this report in other languages. These great community sites also provide our guests with varying takes on Google's latest invention. Whether they're pro or con, you may find them to be interesting, fun or feisty. If you have the time, join in!
@ Luc. I think the idea is to touch the word you want to search first, which would likely freeze the screen for you to circle it etc. and then unfreeze once your search is in progress. I don't think Google would be as dumb as you suggest. Is that iPad app from Apple? Likely not. So when it's from the originating OS maker, it's a different matter altogether.
Posted by: MonkeyMo | February 24, 2012 at 03:15 PM
Try doing this on an iPad. The screen moves all over the place when you do this as it thinks you are scrolling the document.
Posted by: Luc | February 24, 2012 at 02:34 PM
Isn't there a web browser in the app store that already provides that level of functionality? Believe the way it worked was, you created custom gestures as shortcuts to, for example, do a search... though this seems to take it a bit further with "content awareness." Hopefully they can somehow simplify the gestures by minimizing the strokes needed.
Posted by: Aarchit | February 24, 2012 at 07:40 AM