Earlier in September the U.S. Patent & Trademark Office published a Samsung patent application that reveals a Google-Glass-like invention with a depth of detail suggesting that the company may in fact be miles ahead of Google in this form of wearable computer. Although not listed as one of the inventors, Samsung has a secret weapon behind this project: an individual who worked on a similar project at MIT well before 2009, or at least four years before Google Glass surfaced in 2013. Whether Samsung will be able to bring its Glass device to market ahead of Google with any success is of course unknown at this time. Will Samsung strike first, or will it partner with others? Only time will tell.
Samsung's Patent Background
The real world is a space consisting of 3-dimensional (3D) coordinates. People are able to recognize 3D space by combining visual information obtained using two eyes. However, a photograph or a moving image captured by a general digital device is expressed in 2D coordinates, and thus does not include information about space. In order to give a feeling of space, 3D cameras or display products that capture and display 3D images by using two cameras have been introduced.
Meanwhile, the current input methods for smart glasses are limited. A user basically controls the smart glasses by voice command. However, it is difficult for the user to control the smart glasses using only voice commands when text input is required. Thus, a wearable system that provides a variety of input interaction methods is required.
Samsung's Glass-like Wearable Computer
Samsung's patent covers methods and apparatuses, consistent with exemplary embodiments, for a Google-Glass-like wearable device that sets an input region in the air or on an actual object based on a user motion, and provides a virtual input interface in the set input region.
According to one or more exemplary embodiments, a wearable device includes: an image sensor configured to sense a gesture image of a user setting a user input region; and a display configured to provide a virtual input interface corresponding to the user input region set by using the sensed gesture image.
The sensed gesture image may correspond to a figure drawn by the user, and the virtual input interface may be displayed to correspond to the sensed figure.
The virtual input interface may be determined based on a type of an application being executed by the glasses-type wearable device.
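The selection described above can be sketched roughly as follows. The application types, interface names, and mapping are illustrative assumptions, not details from Samsung's patent:

```python
# Illustrative sketch: choosing which virtual input interface to display
# based on the type of application currently running. The app types and
# interface names below are assumptions, not from Samsung's patent.

APP_TO_INTERFACE = {
    "messaging": "qwerty_keyboard",  # text entry needs a full keyboard
    "dialer": "numeric_keypad",      # phone numbers need digits only
    "music": "media_controls",       # play/pause/skip buttons
}

def pick_virtual_interface(app_type: str) -> str:
    """Return the virtual input interface to render for the running app."""
    # Fall back to a generic keyboard when the app type is unknown.
    return APP_TO_INTERFACE.get(app_type, "qwerty_keyboard")

print(pick_virtual_interface("dialer"))     # numeric_keypad
print(pick_virtual_interface("calendar"))   # qwerty_keyboard (fallback)
```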
The glasses-type wearable device may further include: a depth sensor configured to sense a first depth value corresponding to a distance from the wearable device to the user input region, and a second depth value corresponding to a distance from the wearable device to an input tool; and a controller configured to determine whether an input is generated through the virtual input interface based on the first depth value and the second depth value.
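The controller's depth comparison can be sketched as below: an input is registered when the input tool (say, a fingertip) reaches roughly the same depth as the user-defined input region. The function name and the touch threshold are assumptions for illustration, not Samsung's actual implementation:

```python
# Sketch of the depth-based input detection described in the claim.
# The 2 cm threshold and all names are illustrative assumptions,
# not details from Samsung's patent.

TOUCH_THRESHOLD_CM = 2.0  # assumed: how close the tool must get to the region

def is_input_generated(region_depth_cm: float, tool_depth_cm: float) -> bool:
    """Return True when the input tool is close enough to the
    user-defined input region to count as a touch.

    region_depth_cm: first depth value (wearable device -> input region)
    tool_depth_cm:   second depth value (wearable device -> input tool)
    """
    # The tool "touches" the virtual interface when its depth nearly
    # matches the region's depth, i.e. it has reached the input plane.
    return abs(region_depth_cm - tool_depth_cm) <= TOUCH_THRESHOLD_CM

# Example: input region set 40 cm away; fingertip at 39.2 cm -> touch
print(is_input_generated(40.0, 39.2))  # True
print(is_input_generated(40.0, 25.0))  # False: tool is well in front of region
```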
For more details about Samsung's invention, review our full Patently Mobile report here.