
Apple Invents a New Kind of Synchronized and Interactive Augmented Reality Display for iOS Devices

1 - Apple Invents a New Kind of Synchronized & Interactive AR Display for iOS Devices - July 2011 - Patently Apple
A recent Apple patent application published by the US Patent & Trademark Office reveals the invention of a highly advanced synchronized and interactive augmented reality (AR) display for future iOS devices. A week ago we uncovered a related AR patent application describing Smart Transparent Display technology in the context of various new consumer-oriented applications. Today's report delves into Apple's initial vision for using augmented reality applications in business, health care and education. It's also Apple's second patent in a week that points to future iOS devices possibly utilizing a next-generation positioning system. While we can't make the call just yet that augmented reality displays and system inventions are a definite trend at Apple, we can say that they're gaining traction. Apple's leadership in portable device innovation is once again evident in this patent.


Synchronized, Interactive Augmented Reality Displays for iOS Devices


In this first segment of our report we'll focus on how Apple has invented a new synchronized and interactive Augmented Reality (AR) Display system that will one day allow business personnel to better collaborate with their employees or clients, whether they be within a multilevel office building or around the world – in real time.


Apple's patent FIG. 1A shown below illustrates an example device 100 for receiving live video of a real-world, physical environment. The device could be any device capable of supporting AR displays, including but not limited to personal computers, mobile phones, electronic tablets, game consoles, media players, etc.


Although Apple's invention could apply to a wide variety of devices as noted above, the actual patent application figures presented to us are mainly focused on an iPad and/or iPhone with dual cameras in order to keep it simple and practical. Going forward, we'll refer to device 100 as an "iPad" whenever possible. The dual camera aspect of this patent application was better discussed in last week's report titled "Apple Developing Applications for Smart Transparent Displays."


In the first example shown below, the user is holding an iPad over a circuit board to record a live video by pressing the iPad's standard virtual camera button (115).


2B - Apple Introduces us to Synchronized, Interactive Augmented Reality Displays, July 2011, Patently Apple

The All-Important Information Layer or Annotations


The next aspect of this invention relates to the all-important "Information Layer." Apple's patent application FIG. 1B shown above illustrates an iPad displaying a live video combined with an information layer. Various components could be seen outlined, highlighted or otherwise annotated by the information layer (hereafter referred to collectively as "annotations").


For example, memory cards 110 are shown outlined with dashed line 130 and processor 106 and capacitor 108 are shown with thick outlines. Generally, any visual attribute that could set off an object from other objects in the live video could be an annotation. Annotations could include text, images or references to other information (e.g., links). The annotations could be displayed proximate to their corresponding objects in live video. Annotations could describe or otherwise provide useful information about the objects to a user (e.g., a computer technician).
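To make the idea concrete, here's a minimal sketch of what an information-layer entry and its proximate callout placement might look like. This is purely illustrative Python; the class, part numbers and placement rule are our own assumptions, not anything from the patent's claims.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Annotation:
    """One entry in the information layer, tied to a recognized object."""
    object_name: str
    bbox: tuple                    # (x, y, width, height) in the video frame
    outline: str = "solid"         # "solid", "dashed", "highlight", ...
    text: Optional[str] = None     # balloon-callout text
    link: Optional[str] = None     # reference to further information

def callout_anchor(ann: Annotation, frame_w: int, margin: int = 10):
    """Place a balloon callout just to the right of the object,
    flipping to the left side if it would leave the frame."""
    x, y, w, h = ann.bbox
    cx = x + w + margin
    if cx > frame_w:               # callout would be clipped: flip left
        cx = max(0, x - margin)
    return (cx, y)

# Hypothetical annotations loosely mirroring patent FIG. 1B
memory = Annotation("memory card 110", (20, 40, 60, 30), outline="dashed")
cpu = Annotation("processor 106", (120, 90, 50, 50), outline="solid",
                 text="Part #A1234 - Hypothetical Mfr.")

print(callout_anchor(cpu, frame_w=320))   # (180, 90)
```

The only real logic here is keeping each callout proximate to its object while staying on screen – the patent's broader point that "any visual attribute that could set off an object" counts as an annotation would map to richer fields on this class.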


Additional related information, such as the manufacturer and part number, could be included in the balloon callouts. The information layer could display annotations automatically or in response to trigger events. For example, the balloon callouts may only appear in a live video when the user is touching the corresponding annotated component.


The information layer could include a variety of information from a variety of local or network information sources. Some examples of information include without limitation specifications, directions, recipes, data sheets, images, video clips, audio files, schemas, user interface elements, thumbnails, text, references or links, telephone numbers, blog or journal entries, notes, part numbers, dictionary definitions, catalog data, serial numbers, order forms, marketing or advertising and any other information that may be useful to a user.


Employing Object Recognition Techniques

Before an information layer could be generated, the objects to be annotated must first be identified. The identification of objects in a live video could occur manually or automatically. If automatic, a frame of a live video could be "snapped" (by pressing the virtual iPad camera button) and processed using known object recognition techniques, including but not limited to: edge detection, Scale-Invariant Feature Transform (SIFT), template matching, gradient histograms, intraclass transfer learning, explicit and implicit 3D object models, global scene representations, shading, reflectance, texture, grammars, topic models, window-based detection, 3D cues, context, leveraging Internet data, unsupervised learning and fast indexing. To assist in identification of components or parts, barcode 112 could be identified by an image processor and used to retrieve a predefined information layer.
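As a toy illustration of just one technique from that list, here's a naive template-matching pass in Python: slide a small template over a grayscale frame and score each position by sum of absolute differences. Real systems would use SIFT features or the other heavier methods the patent names; the tiny grids below are invented for the example.

```python
def find_template(frame, template):
    """Naive template matching: slide `template` over `frame` (both 2D
    lists of grayscale values) and return the top-left offset of the
    best match, scored by sum of absolute differences (lower = better)."""
    fh, fw = len(frame), len(frame[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            score = sum(abs(frame[y + j][x + i] - template[j][i])
                        for j in range(th) for i in range(tw))
            if best is None or score < best:
                best, best_pos = score, (x, y)
    return best_pos

# A made-up 5x4 "frame" containing a bright 2x2 "chip"
frame = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 9, 9, 0, 0],
    [0, 0, 0, 0, 0],
]
chip = [[9, 9],
        [9, 9]]

print(find_template(frame, chip))   # (1, 1)
```

Once an object's offset is known, the information layer's annotations can be anchored to it; the barcode path the patent describes would skip this search entirely and jump straight to a predefined layer.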


Magnifying Glass Tool


Apple's invention calls for a magnifying glass tool that we see designated as patent point 116. It could be manipulated by a user to magnify or zoom in on any object in the live video. For example, if the user wanted to see a detail of processor 106, the user could move the magnifying glass tool over the processor and the live video would zoom in on the processor for more details. The view of the magnifying glass tool could be sized using Apple's classic pinch gestures.
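The geometry behind such a lens is simple enough to sketch: the magnifier shows a region of the live video that is 1/zoom the size of the lens, scaled up to fill it. The function below is our own back-of-the-envelope model, not Apple's implementation; a pinch gesture would simply change `radius`.

```python
def magnifier_crop(center, radius, zoom, frame_size):
    """Given the magnifying-glass lens (center, radius) on screen and a
    zoom factor, compute the smaller source rectangle of the live video
    that should be scaled up to fill the lens, clamped to the frame."""
    cx, cy = center
    w, h = frame_size
    src_r = radius / zoom              # lens shows a region 1/zoom as large
    x0 = max(0, min(cx - src_r, w - 2 * src_r))
    y0 = max(0, min(cy - src_r, h - 2 * src_r))
    return (x0, y0, 2 * src_r, 2 * src_r)

# Lens centered over a hypothetical processor at (160, 120), 2x zoom
print(magnifier_crop(center=(160, 120), radius=40, zoom=2.0,
                     frame_size=(320, 240)))   # (140.0, 100.0, 40.0, 40.0)
```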


Live Videos May Allow for 3D Viewing


In Apple's patent FIG. 1C above we see the circuit boards that were first shown in FIGS. 1A and 1B now presented in a three-dimensional (3D) perspective view of the live video combined with the information layer. In this example, the user is pointing the iPad's video camera at a different location to obtain a 3D perspective view of the circuit board. The information layer could be overlaid on the perspective view and aligned without having to re-perform object recognition using data output from onboard motion sensors.


For example, outputs from onboard gyros, magnetometers or other motion sensors can be used to determine current video camera view angles relative to a reference coordinate frame and then use the view angles to redraw the information layer over the perspective view such that annotations remain properly aligned with their respective objects.
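A rough pinhole-camera sketch of that realignment: a yaw change of Δθ shifts the image horizontally by about f·tan(Δθ) pixels, so annotation anchors can be slid rather than re-recognized. The focal length, sign convention and coordinates below are all assumptions for illustration.

```python
import math

def realign(points, delta_yaw_deg, delta_pitch_deg, focal_px=500):
    """Shift annotation anchor points after the camera rotates, using
    motion-sensor deltas instead of re-running object recognition.
    Pinhole approximation: a yaw change of d degrees shifts the image
    by roughly focal_px * tan(d) pixels (focal length is assumed)."""
    dx = focal_px * math.tan(math.radians(delta_yaw_deg))
    dy = focal_px * math.tan(math.radians(delta_pitch_deg))
    return [(x - dx, y - dy) for (x, y) in points]

# Camera pans 45 degrees to the right: annotations slide left on screen
anchors = [(180.0, 90.0), (40.0, 60.0)]
print(realign(anchors, delta_yaw_deg=45.0, delta_pitch_deg=0.0))
```

In practice this only holds for small rotations and a stationary scene, which is presumably why the patent pairs the motion-sensor path with full object recognition as a fallback.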


We covered true 3D imaging on the iPhone in our special report titled "Body Area Networks: Apple, Sensor Strips & the iPhone." The imagery presented in that report could give you an idea of the kind of 3D imaging that could technically be deployed in the future for medical applications, considering that Apple briefly points to a medical application later on in this patent application.


Live Video Synching between Multiple iOS Devices for Better Collaboration


3B - Live Video Between Multiple iOS Devices for Better Collaboration, July 2011 - Patently Apple

Apple's invention could provide all levels of management, sales and/or service personnel with the ability to collaborate or share information about production, manufacturing processes, sales or marketing problems or promotions – live.


Apple's patent FIG. 1D noted above illustrates synchronizing live video displays on first and second devices and sharing changes to the information layer. The information layer generated for live video 104a on device 100a could also be shared with device 100b by sending the information layer data with the live video feed over a communication link.


In some implementations, the user of either device 100a or device 100b could use touch input or gestures to generate new annotations (e.g., drawing a circle around a component) and those annotations could be shared with the other device, live.


In some implementations, a gesture itself could indicate desired information. For example, drawing a circle around processor 106 in live video 104 could indicate that the user wants more information about processor 106 or that there's a problem in this area that has to be addressed. As a user draws annotations on live video 104a, they could be seen or reflected in live video 104b. This feature allows users of devices 100a, 100b to interact and collaborate through the information layer. The process could be quickened if all devices also include telephony capabilities.
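One plausible way to share such gesture annotations over the communication link is a small serialized event per drawn gesture, merged into the peer's information layer on arrival. The JSON schema below is entirely hypothetical – the patent doesn't specify a wire format – but it shows the shape of the round trip.

```python
import json

def encode_annotation_event(device_id, gesture, obj_name, points):
    """Serialize a drawn annotation (e.g., a circle gesture around
    processor 106) for transmission over the communication link."""
    return json.dumps({
        "type": "annotation",
        "device": device_id,
        "gesture": gesture,        # e.g. "circle" may also request info
        "object": obj_name,
        "points": points,          # gesture path in frame coordinates
    })

def apply_annotation_event(layer, message):
    """Peer side: decode the event and merge it into the local
    information layer, keyed by object name."""
    event = json.loads(message)
    layer.setdefault(event["object"], []).append(event)
    return layer

# Device 100a circles the processor; device 100b reflects it live
msg = encode_annotation_event("100a", "circle", "processor 106",
                              [[120, 90], [170, 90], [170, 140]])
layer_b = apply_annotation_event({}, msg)
print(list(layer_b))   # ['processor 106']
```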


Medical Application


The idea of collaborating could be naturally extended in a variety of medical applications. In some implementations, a doctor could use their iOS device to capture a live video of the patient's face. Using pattern recognition and/or other information (e.g., patient identifier), information related to the patient (e.g., medical history, drug prescriptions) could be displayed on their device.


In other implementations, a live video of a body part that needs medical attention could be captured and augmented with annotations that could help the doctor make a diagnosis. The video could be quickly shared with other doctors who could generate annotations on their respective devices to assist the doctor in a diagnosis. Pattern matching or other image processing could be used to identify problems with the injured body part based on its visual appearance (e.g., color). In one example application, an x-ray or MRI video could be displayed with the live video.


Vehicle Mechanics Application


In another example, device 100 could capture a live video of a car's engine and the parts of the engine could be recognized from the live video. An information layer (e.g., a manual excerpt) could be generated and combined with the live video. For example, a car mechanic could hold their iOS device over a car engine and an outline identifying parts and providing excerpts from a repair manual or schematics could be displayed in the live video to assist the mechanic in repairing the engine.


One could easily imagine multiple spin-off ideas of how this could be used with vehicles, or in other types of businesses to assist service and repair personnel for home appliances and beyond.


Distance Learning or Inter-Office Applications


In another example application, an iOS device could capture images or live video of a document and the text of the document could be recognized in the images or the live video. An information layer, in this case an "answer sheet," could be generated and combined with the live video. For example, a teacher could hold their iOS device over a student's exam paper and an outline showing incorrect answers to exam questions could be displayed in the live video to assist the teacher in grading the exam paper.


Once again, when it comes to live video document collaboration, the applications could extend to any number of professions that are required to quickly collaborate with colleagues and/or clients on a rush document that has to meet a filing deadline. Think of financial and legal contracts alone.


On the Consumer Side: Split Screens for Live Mapping


4 - Synchronization of Split Screen Displays - Live Mapping, Apple patent July 2011, Patently Apple 

On the consumer side of this application Apple illustrates live mapping capabilities. FIG. 2B illustrates synchronizing split screen displays of first and second devices 200a, 200b. In the example shown, device 200a has established communication with device 200b. The live video scene of downtown San Francisco captured by the video camera on device 200a could be displayed in display area 202b of device 200b. Also, computer-generated imagery shown in display area 204a could be shown in display area 204b of device 200b. Note that in display area 204b, the location of device 200b is indicated by "You" and the destination or device 200a is indicated by the marker "Mark," i.e., the user of device 200a. The communication link could be a direct communication link or an indirect communication link like Wi-Fi.
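For the computer-generated map imagery to place the "Mark" marker relative to "You," the software needs at minimum the compass bearing between the two devices. Here's the standard great-circle bearing formula as a sketch; the San Francisco coordinates are made up for the example, and the real feature would of course involve much more (tilt, heading, map tiles).

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing in degrees from point 1 ("You") to
    point 2 ("Mark"), using the standard great-circle formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(y, x)) + 360) % 360

# Two made-up points in downtown San Francisco, ~1.4 km apart
b = bearing_deg(37.7749, -122.4194, 37.7849, -122.4094)
print(round(b))   # roughly 38 (north-east)
```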


The Share Button


When a user moves device 200a, resulting in a change in the video camera view, motion sensor data could be used to update the computer-generated imagery in display areas 204a, 204b, thus maintaining synchronization between display areas 202a, 204a and display areas 202b, 204b. In some implementations, share button 214 could be used to initiate sharing of live video, the information layer and computer-generated imagery with another device. So for security's sake, your location won't be shared unless both parties are in agreement.


Hybrid Positioning System


Another interesting tidbit in this patent application confirmed that Apple is also considering the inclusion of "hybrid positioning systems" using a combination of satellite and television signals. This was first discovered in our patent report titled "Apple Reveals the Next Chapter for iBooks & New Chip for iOS Devices" in context with a Rosum Corporation chip. If more than one Apple engineering team is already including this in their future iOS device sample architectures, then it looks like a pretty promising technology, along with all of the goodies that could come with it (TV, Sirius Radio etc.).


5 - Apple's Wireless Schematic illustrates New Synching & Augmented Reality Services, July 2011, Patently Apple 

Apple's patent FIG. 5 is a block diagram of an exemplary network operating environment for devices implementing synchronized, interactive AR displays. Note that Apple could be introducing two new wireless services in the future: Syncing and Augmented Reality as highlighted in yellow above.


Apple's patent application 20110164163 was originally filed in Q1 2010 by inventors Brett Bilbrey, Nicholas King and Aleksandar Pance. With Apple introducing their volume purchasing program for businesses, and with the iPad and iPhone standing tall, it stands to reason that we'll see more iOS apps and/or APIs developed for the enterprise market going forward.


Notice: Patently Apple presents a detailed summary of patent applications with associated graphics for journalistic news purposes as each such patent application is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application should be read in its entirety for full and accurate details. Revelations found in patent applications shouldn't be interpreted as rumor or fast-tracked according to rumor timetables. Apple's patent applications have provided the Mac community with a clear heads-up on some of Apple's greatest product trends, including the iPod, iPhone, iPad, iOS cameras, LED displays, iCloud services for iTunes and more.


About Comments: Patently Apple reserves the right to post, dismiss or edit comments.


Related Material: Apple Mages Working on Augmented Reality Magic


Here are a Few Great Community Sites Covering our Original Report


MacSurfer, Twitter, Facebook, Apple Investor News, Google Reader, UpgradeOSX, TechWatching, Macnews, iPhone World Canada, CBS MarketWatch, MacDailyNews, iSpazio Italy, and more.





Two days from now Google will announce the same thing.

That's Google's way of innovating.

Daniel OS

It's totally possible to implement this patent today. The iPad has a fast CPU and a decent camera, and the main idea of this patent makes the functionality easy for the user to understand.

The network is the bottleneck of this technology.
