ZDNet's David Braue posted a great report last week titled "Apple Maps' worldview is now better than Google Maps." In his summary Braue stated that Apple "had its share of problems, but Apple Maps is back with a vengeance. Powered by some jaw-dropping 3D graphics and enjoying an aggressive multi-platform strategy, Apple is finally set to redefine our geospatial expectations – and take Google down a few notches." Yes, Apple's mapping team is hungry and back with a vengeance, and today the US Patent Office published a whopping 28 mapping inventions from Apple, many of them covering those jaw-dropping 3D graphics and so much more.
Today's report provides a brief verbatim abstract from each of the 28 patent applications, along with a link to each, for those mapping buffs who want to know more about Apple's major endeavor.
Note about Some of the Patent Inventors: A number of the patents listed here today name Christopher Blumenberg and/or Patrick Piemonte as inventors without Apple appearing as the assignee. Christopher Blumenberg is the Manager of iOS map applications and frameworks. Patrick Piemonte was the iOS software engineer at Apple who worked on Apple's Flyover mapping feature. Technically, Apple doesn't have to be shown as an assignee at the patent application phase.
With that said, here's the full list of the mapping patent applications that were published today by the US Patent Office.
Some embodiments provide a non-transitory machine-readable medium that stores a mapping application which when executed on a device by at least one processing unit provides automated animation of a three-dimensional (3D) map along a navigation route. The mapping application identifies a first set of attributes for determining a first position of a virtual camera in the 3D map at a first instance in time. Based on the identified first set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the first instance in time. The mapping application identifies a second set of attributes for determining a second position of the virtual camera in the 3D map at a second instance in time. Based on the identified second set of attributes, the mapping application determines the position of the virtual camera in the 3D map at the second instance in time. The mapping application renders an animated 3D map view of the 3D map from the first instance in time to the second instance in time based on the first and second positions of the virtual camera in the 3D map.
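The abstract above describes deriving two virtual-camera positions from two attribute sets and animating between them. A minimal sketch of that idea follows; the attribute names and the linear blending are illustrative assumptions, since the filing does not disclose the actual attributes or easing function.

```python
from dataclasses import dataclass

@dataclass
class CameraAttrs:
    """Hypothetical virtual-camera attributes (not the patent's actual set)."""
    x: float
    y: float
    altitude: float
    pitch_deg: float

def interpolate_camera(a: CameraAttrs, b: CameraAttrs, t: float) -> CameraAttrs:
    """Blend the camera's first and second positions at fraction t in [0, 1].
    Linear interpolation is an assumption; the filing does not specify one."""
    lerp = lambda p, q: p + (q - p) * t
    return CameraAttrs(lerp(a.x, b.x), lerp(a.y, b.y),
                       lerp(a.altitude, b.altitude),
                       lerp(a.pitch_deg, b.pitch_deg))

def animate(a: CameraAttrs, b: CameraAttrs, frames: int) -> list:
    """Camera positions for each rendered frame between the two instants in time."""
    return [interpolate_camera(a, b, i / (frames - 1)) for i in range(frames)]
```

Rendering one 3D map view per returned camera position yields the animated fly-along described in the abstract.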
Some embodiments provide a non-transitory machine-readable medium that stores a program which when executed on a device by at least one processing unit provides different viewing modes for viewing a three-dimensional (3D) map. The program renders a first view of the 3D map for display in a first viewing mode based on a first set of map data. The program receives input to adjust the view of the 3D map. In response to the input, the program renders a second view of the 3D map for display in a second viewing mode based on a second set of map data different from the first set of map data.
Methods and apparatus for a map tool displaying a three-dimensional view of a map based on a three-dimensional model of the surrounding environment. The three-dimensional map view of a map may be based on a model constructed from multiple data sets, where the multiple data sets include mapping information for an overlapping area of the map displayed in the map view. For example, one data set may include two-dimensional data including object footprints, where the object footprints may be extruded into a three-dimensional object based on data from a data set composed of three-dimensional data. In this example, the three-dimensional data may include height information that corresponds to the two-dimensional object, where the height may be obtained by correlating the location of the two-dimensional object within the three-dimensional data.
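The extrusion step in this abstract is easy to picture in miniature: a 2D footprint polygon plus a height pulled from an overlapping 3D data set becomes a simple prism. The correlation rule below (max sampled height inside the footprint's bounding box) is an assumption for illustration, not the filing's method.

```python
def height_for(footprint, height_samples):
    """Correlate the 2D footprint's location with the 3D data set.
    Taking the max sampled height inside the bounding box is an assumption."""
    xs = [x for x, _ in footprint]
    ys = [y for _, y in footprint]
    in_box = [h for (x, y, h) in height_samples
              if min(xs) <= x <= max(xs) and min(ys) <= y <= max(ys)]
    return max(in_box) if in_box else 0.0

def extrude_footprint(footprint, height):
    """Extrude a 2D object footprint into a 3D prism at the given height.
    Returns (bottom_ring, top_ring) vertex lists."""
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    return bottom, top
```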
Methods and systems are provided for efficiently identifying map tiles of a raised-relief map to retrieve from a server. An electronic device can use estimates of height(s) for various region(s) of the map to determine map tiles that are likely viewable from a given position of a virtual camera. The device can calculate the intersection of the field of view of the virtual camera with the estimated heights to determine a location of the map tiles (e.g., as determined by a 2D grid) needed. In this manner, the electronic device can retrieve, from a map server, the map tiles needed to display the image, without retrieving extraneous tiles that are not needed. Identifying such tiles can reduce the amount of data to be sent across a network and reduce the number of requests for tiles, since the correct tiles can be obtained with the first request.
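The tile-culling idea above can be sketched by intersecting the camera's corner view rays with a plane at the region's estimated height, then covering the resulting ground footprint with 2D grid tiles. Treating the raised relief as a single flat plane per region is a simplifying assumption for illustration.

```python
import math

def ray_ground_hit(cam_pos, ray_dir, surface_height):
    """Intersect a view ray with a horizontal plane at the estimated height."""
    cx, cy, cz = cam_pos
    dx, dy, dz = ray_dir
    if dz >= 0:  # ray never descends to the surface
        return None
    t = (surface_height - cz) / dz
    return (cx + dx * t, cy + dy * t)

def tiles_for_view(cam_pos, corner_rays, surface_height, tile_size):
    """Cover the frustum's ground footprint with 2D grid tile indices, so
    only likely-visible tiles are requested from the map server."""
    hits = [h for h in (ray_ground_hit(cam_pos, r, surface_height)
                        for r in corner_rays) if h]
    xs = [h[0] for h in hits]
    ys = [h[1] for h in hits]
    tiles = set()
    for tx in range(math.floor(min(xs) / tile_size),
                    math.floor(max(xs) / tile_size) + 1):
        for ty in range(math.floor(min(ys) / tile_size),
                        math.floor(max(ys) / tile_size) + 1):
            tiles.add((tx, ty))
    return tiles
```

Requesting only these indices is what lets the device avoid extraneous tiles and extra round trips.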
Systems and methods for rendering 3D maps may highlight a feature in a 3D map while preserving depth. A map tool of a mapping or navigation application that detects the selection of a feature in a 3D map (e.g., by touch) may perform a ray intersection to determine the feature that was selected. The map tool may capture the frame to be displayed (with the selected feature highlighted) in several steps. Each step may translate the map about a pivot point of the selected map feature (e.g., in three or four directions) to capture a new frame. The captured frames may be blended together to create a blurred map view that depicts 3D depth in the scene. A crisp version of the selected feature may then be rendered within the otherwise blurred 3D map. Color, brightness, contrast, or saturation values may be modified to further highlight the selected feature.
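The blur-by-blending step can be demonstrated on a toy grayscale frame: translate the frame in a few directions about the pivot, then average the copies. The four offsets and the averaging weights are illustrative assumptions.

```python
def translate(frame, dx, dy, fill=0.0):
    """Shift a 2D grayscale grid by (dx, dy), filling exposed pixels."""
    h, w = len(frame), len(frame[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = frame[sy][sx]
    return out

def blurred_view(frame, radius=1):
    """Average four translated copies of the frame (one per direction),
    producing the depth-preserving blur described in the abstract."""
    offsets = [(radius, 0), (-radius, 0), (0, radius), (0, -radius)]
    copies = [translate(frame, dx, dy) for dx, dy in offsets]
    h, w = len(frame), len(frame[0])
    return [[sum(c[y][x] for c in copies) / len(copies) for x in range(w)]
            for y in range(h)]
```

A crisp render of the selected feature would then be composited on top of this blurred background.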
A device that includes at least one processing unit and stores a multi-mode mapping program for execution by the at least one processing unit is described. The program includes a user interface (UI). The UI includes a display area for displaying a two-dimensional (2D) presentation of a map or a three-dimensional (3D) presentation of the map. The UI includes a selectable 3D control for directing the program to transition between the 2D and 3D presentations.
A context-aware voice guidance method is provided that interacts with other voice services of a user device. The voice guidance does not provide audible guidance while the user is making a verbal request to any of the voice-activated services. Instead, the voice guidance transcribes its output on the screen while the verbal requests from the user are received. In some embodiments, the voice guidance only provides a short warning sound to get the user's attention while the user is speaking on a phone call or another voice-activated service is providing an audible response to the user's inquiries. The voice guidance in some embodiments distinguishes between music that can be ducked and spoken words, for example from an audiobook, that the user wants paused instead of skipped. The voice guidance ducks music but pauses the spoken words of an audiobook in order to provide voice guidance to the user. Apple has another patent by the same name published today under number 20130322665.
For a device that runs a mapping application, a method of displaying search completions in a display area of the mapping application that includes a search field for receiving inputs is described. The method identifies a set of search completions that include any recent search completions used to search locations on a map. Upon receiving a non-text input through the search field when the search field is empty, the method displays the set of search completions in the display area.
For a mobile device having a display area, Apple's patent describes a method of displaying instructional signs of a route in that display area. The method receives selection of a route having several junctures. The route includes several displayable signs for showing a set of maneuver instructions for at least some of the junctures of the route. The method tracks the current location of the device as the device is moving. The method displays different signs by sliding the signs in and out of the display area based on the current location of the device.
Apple's patent FIG. 8 noted above illustrates that the mapping application of some embodiments operating in the automatic stepping mode does not backtrack when displaying signs and the current step indicator.
Rendering Maps
Apple's patent is about a mapping application which includes a map receiver for receiving map tiles from a mapping service in response to a request for the map tiles needed for a particular map view. Each map tile includes vector data describing a map region. The mapping application includes a set of mesh building modules. Each mesh building module is for using the vector data in at least one map tile to build a mesh for a particular layer of the particular map view. The mapping application includes a mesh aggregation module for combining layers from several mesh builders into a renderable tile for the particular map view. The mapping application includes a rendering engine for rendering the particular map view.
Methods, systems and apparatus are described to render a map with adaptive textures for map features. Embodiments may for a portion of map data, such as a map tile, including a feature of a given feature type specify a level-of-detail texture. A level-of-detail texture may be one of a plurality of level-of-detail textures for a given feature type ordered according to level-of-detail. Embodiments may then provide the specified level-of-detail texture with a mipmap chain to a rendering unit to render the map data. At the lowest level of the mipmap chain may be the specified level-of-detail texture. At the next lowest level of the mipmap chain may be a portion of the level-of-detail texture adjacent to the specified level-of-detail texture in the ordered plurality of level-of-detail textures for the feature type.
Methods, systems and apparatus are described to dynamically generate map textures. A client device may obtain map data, which may include one or more shapes described by vector graphics data. Along with the one or more shapes, embodiments may include texture indicators linked to the one or more shapes. Embodiments may render the map data. For one or more shapes, a texture definition may be obtained. Based on the texture definition, a client device may dynamically generate a texture for the shape. The texture may then be applied to the shape to render a current fill portion of the shape. In some embodiments the rendered map view is displayed.
Methods, systems and apparatus are described to provide a three-dimensional transition for a map view change. Various embodiments may display a map view. Embodiments may obtain input selecting another map view for display. Input may be obtained through the utilization of touch, auditory, or other well-known input technologies. In response to the input selecting a map view, embodiments may then display a transition animation that illustrates moving from the displayed map view to the selected map view in virtual space. Embodiments may then display the selected map view.
Embodiments may include receiving signal strength information reported by multiple client communication devices. The signal strength information reported by a given client device may indicate one or more locations detected by the given client device. The signal strength information may also indicate, for each location, a respective measure of signal strength for a communication signal detected at that location by the client device. Embodiments may also include generating a signal strength map for a region based on the client-reported signal strength information. Generating the signal strength map may include, for each location of multiple locations within the region, generating an expected signal strength value for that location based on an evaluation of the signal strength information received for that location. The generation of the signal strength map for the region may also be based on the expected signal strength values for the locations within the region.
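The aggregation described above can be sketched as bucketing client reports by location and computing an expected value per location. A plain mean is an assumption for illustration; the filing only says the reported readings are "evaluated."

```python
from collections import defaultdict

def signal_strength_map(reports):
    """Aggregate client-reported (location, dBm) samples into an expected
    signal strength per location. Averaging is an illustrative assumption."""
    buckets = defaultdict(list)
    for location, dbm in reports:
        buckets[location].append(dbm)
    return {loc: sum(vals) / len(vals) for loc, vals in buckets.items()}
```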
Embodiments of a system and method for loading and rendering curved features in a map are described. Embodiments may include a map tool of a mapping or navigation application configured to generate a display for a map that includes one or more curved features (e.g., curved roads or curved polygons). The map tool may be executed in a client/server environment in which a server portion receives digitized map data in the form of polylines, detects a curved feature in the map data by fitting it to a parametric curve, and transmits data representing the parametric curve to a client device for subsequent rendering. The client device may render the curved feature using the received parametric curve data or, dependent on characteristics of the client device, extract data corresponding to points on the parametric curve to generate a triangle mesh for rendering the curved feature at a suitable resolution.
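The client-side step of the abstract, extracting points on a parametric curve before building a triangle mesh, can be sketched with a cubic Bézier. The Bézier form is one plausible choice; the filing does not name the curve family.

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]."""
    mt = 1.0 - t
    x = mt**3 * p0[0] + 3 * mt**2 * t * p1[0] + 3 * mt * t**2 * p2[0] + t**3 * p3[0]
    y = mt**3 * p0[1] + 3 * mt**2 * t * p1[1] + 3 * mt * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def tessellate(p0, p1, p2, p3, segments):
    """Extract evenly spaced points along the curve, as a client might do
    before widening the polyline into a triangle mesh for rendering."""
    return [bezier_point(p0, p1, p2, p3, i / segments) for i in range(segments + 1)]
```

The segment count would be chosen per the device's capabilities, matching the abstract's note about rendering at a suitable resolution.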
A mapping program for execution by at least one processing unit of a device is described. The device includes a touch-sensitive screen and a multi-touch input interface. The program renders and displays a presentation of a map from a particular view of the map. The program generates an instruction to rotate the displayed map in response to a multi-touch input from the multi-touch input interface. In order to generate a rotating presentation of the map, the program changes the particular view while receiving the multi-touch input and for a duration of time after the multi-touch input has terminated in order to provide a degree of inertia motion for the rotating presentation of the map.
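The "inertia motion" after the touch ends amounts to decaying the rotation's angular velocity frame by frame until it falls below a cutoff. The decay factor and cutoff below are illustrative assumptions; the filing does not give a damping model.

```python
def inertia_angles(release_velocity, decay=0.5, cutoff=0.1):
    """Per-frame rotation angles after the multi-touch input terminates.
    The velocity shrinks geometrically until it drops below the cutoff."""
    angles, angle, v = [], 0.0, release_velocity
    while abs(v) > cutoff:
        angle += v
        angles.append(angle)
        v *= decay
    return angles
```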
For a mapping application, a method for reporting a problem related to a map displayed by the mapping application is described. The method identifies a mode in which the mapping application is operating. The method identifies a set of types of problems to report based on the identified mode. The method displays, in a display area of the mapping application, a graphical user interface (GUI) page that includes a set of selectable user interface (UI) items that represent the identified set of types of problems.
A mapping application that provides a graphical user interface (GUI) for displaying information about a location is described. The GUI includes a first display area for displaying different types of media for a selected location on a map. The GUI includes a second display area for displaying different types of information of the selected location. The GUI includes a set of selectable user interface (UI) items, each of which for causing the second display area to display a particular type of information when selected.
A graphical user interface (GUI) of a triage tool for triaging reported problems is described. The GUI includes a first set of UI items for viewing reported problems of maps. The GUI includes a second set of UI items for viewing map data related to the reported problems. The GUI includes a third set of UI items for sending triaged problems to a set of sources of the map data.
A method of providing navigation on an electronic device when the display screen is locked. The method receives a verbal request to start navigation while the display is locked. The method identifies a route from a current location to a destination based on the received verbal request. While the display screen is locked, the method provides navigational directions on the electronic device from the current location of the electronic device to the destination. Some embodiments provide a method for processing a verbal search request. The method receives a navigation-related verbal search request and prepares a sequential list of the search results based on the received request. The method then provides audible information to present a search result from the sequential list. The method presents the search results in a batch form until the user selects a search result, the user terminates the search, or the search items are exhausted.
Some embodiments provide a navigation application. The navigation application includes an interface for receiving data describing junctures along a route from a first location on a map to a second location on the map. The data for each juncture includes a set of angles at which roads leave the juncture. The navigation application includes a juncture decoder for synthesizing, from the juncture data, instruction elements for each juncture that describe different aspects of a maneuver to be performed at the juncture. The navigation application includes an instruction generator for generating at least two different instruction sets for a maneuver by combining one or more of the instruction elements for the juncture at which the maneuver is to be performed. The navigation application includes an instruction retriever for selecting one of the different instruction sets for the maneuver according to a context in which the instruction set will be displayed.
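The juncture decoder and instruction generator above can be sketched as mapping an exit angle to a maneuver element and then combining elements into differently sized instruction sets per display context. All thresholds and wording below are illustrative assumptions, not the filing's actual values.

```python
def maneuver_element(exit_angle_deg):
    """Map the angle at which a road leaves the juncture (clockwise degrees
    relative to the direction of travel) to a maneuver phrase.
    Thresholds are illustrative assumptions."""
    a = exit_angle_deg % 360
    if a < 20 or a > 340:
        return "continue straight"
    if a <= 160:
        return "turn right" if a > 60 else "bear right"
    if a < 200:
        return "make a U-turn"
    return "turn left" if a < 300 else "bear left"

def instruction_sets(street, exit_angle_deg):
    """Combine instruction elements into two instruction sets of different
    lengths, one per hypothetical display context (banner vs. full screen)."""
    m = maneuver_element(exit_angle_deg)
    return {"short": m.capitalize(),
            "long": f"{m.capitalize()} onto {street}"}
```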
Some embodiments provide a mapping application that provides routing information to third-party applications on a device. The mapping application receives route data that includes first and second locations. Based on the route data, the mapping application provides a set of routing applications that provide navigation information. The mapping application receives a selection of a routing application in the set of routing applications. The mapping application passes the route data to the selected routing application in order for the routing application to provide navigation information.
For a device running a mapping application that includes a display area for displaying a map and a set of graphical user interface (GUI) items, a method for providing routes is described. The method computes a route between a starting location and a destination location. The route includes a sequence of maneuvering instructions for guiding a user through the route. The method provides a movable GUI item for showing each maneuvering instruction in the sequence in order to allow a user to navigate the route by moving the GUI items in and out of the display area.
Some embodiments provide a method for generating intersection data for paths in a map region. The method receives a set of junctions at which paths intersect in the map region. For a particular junction of at least two paths, the method automatically determines whether any of the other junctions in the map region satisfy criteria to be part of a single intersection with the particular junction. When at least one of the other junctions satisfies the criteria, the method automatically combines the other junctions that satisfy the criteria with the particular junction into a single intersection for use in performing mapping operations.
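The combining step above can be sketched with a greedy pass that folds any junction lying close to an existing intersection into it (dual carriageways often yield several map junctions for one real-world intersection). A simple distance criterion stands in for the filing's unspecified criteria.

```python
def combine_junctions(junctions, max_gap):
    """Group (x, y) junctions into single intersections: a junction joins
    the first intersection whose seed lies within max_gap of it, otherwise
    it starts a new intersection. The distance rule is an assumption."""
    intersections = []
    for jx, jy in junctions:
        for group in intersections:
            gx, gy = group[0]
            if (jx - gx) ** 2 + (jy - gy) ** 2 <= max_gap ** 2:
                group.append((jx, jy))
                break
        else:
            intersections.append([(jx, jy)])
    return intersections
```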
An integrated map and navigation program is described. The program provides a first operational mode for browsing and searching a map. The program provides a second operational mode for providing a navigation presentation that provides a set of navigation directions along a navigated route by reference to the map.
Methods and apparatus for a map tool on a mobile device for implementing cartographically aware gestures directed to a map view of a map region. The map tool may base a cartographically aware gesture on an actual gesture input directed to a map view and on map data for the map region that may include metadata corresponding to elements within the map region. The map tool may then determine, based on one or more elements of the map data, a modification to be applied to an implementation of the gesture. Given the modification to the gesture implementation, the map tool may then render an updated map view based on the modified gesture, instead of an updated map view based solely on the user's gesture.
Methods and apparatus for a roof analysis tool for constructing a parameter set, where the parameter set is derived from mapping data for a map region, and where the parameter set describes the roofs for the buildings within the map region. In some cases, the parameter set includes a list of roof type identification values and the respective buildings in the map region for which a given roof type identification value corresponds. The roof analysis tool may operate on a server and work in conjunction with a mobile device, where the mobile device may display map views of a map region such that the map view is based on a three-dimensional model of the map region, and where a portion of the three-dimensional model is based on data generated on the mobile device and a portion of the three-dimensional model is based on data generated on the server.
Rudolph van der Merwe is listed on this patent. It should be noted that Rudolph is currently an R&D engineer in Apple's Advanced Computation Group.
Methods, systems and apparatus are described to provide visual feedback of a change in map view. Various embodiments may display a map view of a map in a two-dimensional map view mode. Embodiments may obtain input indicating a change to a three-dimensional map view mode. Input may be obtained through the utilization of touch, auditory, or other well-known input technologies. Some embodiments may allow the input to request a specific display position to display. In response to the input indicating a change to a three-dimensional map view mode, embodiments may then display an animation that moves a virtual camera for the map display to different virtual camera positions to illustrate that the map view mode is changed to a three-dimensional map view mode.
Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables. About Making Comments on our Site: Patently Apple reserves the right to post, dismiss or edit any comments.