Apple Watch may one day introduce Gesture input that is driven by Machine Learning and Multiple Sensors
In the last two years various Apple Watch teams have worked on adding gesture recognition to Apple Watch, as covered in a series of patent reports (01, 02 & 03). Today the US Patent & Trademark Office published a patent application from Apple that covers this subject matter once again, though from a different angle: machine-learning based gesture recognition that draws on multiple sensors.
Apple notes that the Apple Watch may be configured to include various sensors. For example, the Apple Watch may be equipped with one or more biosignal sensors (e.g., a photoplethysmogram (PPG) sensor), as well as other types of sensors (e.g., a motion sensor, an optical sensor, an audio sensor and the like). The various sensors may work independently and/or in conjunction with each other to perform one or more tasks, such as detecting device position, environmental conditions, user biological conditions and the like.
In some cases, a user may wish to use touch input (e.g., on the Apple Watch touchscreen) to perform an action. Alternatively, or in addition, it may be desirable for a user to perform a gesture without having to rely on touch input. For example, a user may wish for Apple Watch to perform a particular action based on a gesture performed by the same hand wearing the smartwatch.
The subject technology provides for detecting user gestures by utilizing outputs received via one or more sensors of Apple Watch.
For example, Apple Watch may receive respective outputs from first sensor(s) (e.g., biosignal sensor(s)) and second sensor(s) (e.g., non-biosignal sensor(s)). The outputs may be provided as input to a machine learning model implemented on Apple Watch that has been trained on outputs from various sensors, in order to predict a user gesture.
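To make the idea concrete, here is a minimal sketch of this kind of sensor-fusion pipeline. It is not Apple's implementation; the gesture names, window sizes, and the use of a simple linear classifier (standing in for whatever trained model the patent contemplates) are all assumptions for illustration.

```python
# Hypothetical sketch: fuse a window of biosignal (PPG) samples with a
# window of motion-sensor samples and classify the combined features
# with a pre-trained model. A linear classifier stands in for the
# on-device ML model; gesture labels are invented for illustration.
import numpy as np

GESTURES = ["none", "pinch", "clench", "double_pinch"]

def predict_gesture(ppg_window, motion_window, weights, bias):
    # Concatenate the two sensor windows into one feature vector,
    # so the model sees both signal sources at once.
    features = np.concatenate([ppg_window, motion_window])
    scores = weights @ features + bias   # one score per gesture class
    return GESTURES[int(np.argmax(scores))]
```

The predicted label could then be mapped to an action (e.g., a UI change), as described below.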
Based on the predicted gesture, Apple Watch may perform a particular action (e.g., changing a user interface). In one or more implementations, the machine learning model may be trained on a general population of users rather than a single specific user. In this manner, the model can be re-used across multiple different users without prior knowledge of the particular characteristics of individual users. In one or more implementations, a model trained on a general population of users can later be tuned or personalized for a specific user.
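The personalization step described above can be sketched as nudging a population-trained model with a few labeled examples from one user. This is only an illustration of the general technique (a single cross-entropy gradient step on a linear model), not anything specified in the patent; the function and learning rate are assumptions.

```python
# Hypothetical sketch: fine-tune a population-trained linear gesture
# model on one user's labeled example via a single softmax
# cross-entropy gradient step.
import numpy as np

def personalize(weights, bias, features, true_label_idx, lr=0.01):
    scores = weights @ features + bias
    # Softmax over gesture-class scores (shifted for numerical stability).
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    # Gradient of cross-entropy loss w.r.t. the scores: probs - one_hot.
    grad = probs.copy()
    grad[true_label_idx] -= 1.0
    # Step the model toward this user's example.
    weights -= lr * np.outer(grad, features)
    bias -= lr * grad
    return weights, bias
```

After a handful of such steps on a user's own gestures, the shared model's predictions shift toward that user's signal characteristics while retaining what it learned from the general population.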
Apple's patent FIG. 3 below illustrates an example architecture that may be implemented by an electronic device for machine-learning based gesture recognition in accordance with one or more implementations; FIGS. 4A-4B illustrate example diagrams of respective sensor outputs of an electronic device that may indicate a gesture.
Apple's patent FIG. 9 below illustrates a flow diagram of another example process for machine-learning based gesture recognition.
For more details, review Apple's patent application number 20210142214.
Because this is a patent application, the timing of such a feature coming to market is unknown.