
Apple's iPadOS Inspired Google to Introduce Chrome OS Touch Gestures for Chromebooks while Planning for In-Air Gesturing



Late yesterday Google announced that with the latest Chrome OS update, Chromebook tablet mode is simpler to navigate thanks to new gestures, the launch of Quick shelf, and updates to the Chrome browser that are tailored specifically for tablet mode.


What is Chromebook "tablet mode"?


Google notes on their blog that Chromebooks, which all run on Chrome OS, help you to get things done and keep you entertained. All 2-in-1 Chromebooks work as both a high-performing laptop and a tablet.


If you have a convertible Chromebook, fold your screen back on its hinge and your Chromebook transitions to tablet mode. Or if you’re using a detachable Chromebook like the Lenovo Chromebook Duet, then you can fully remove the keyboard to activate tablet mode.


Google built new gestures for Chromebook tablet mode, which make it easier for you to navigate using touch. Now, to get to your tablet mode's Home screen, swipe up from the bottom of the screen. You can read about the other new Chrome OS touch gestures here.


While Apple's iPadOS gestures clearly inspired Google's engineering team, this is probably just one step toward a future in which Google adds an "in-air" gesturing system.


The timing of introducing this new feature yesterday is interesting, coinciding as it does with Google being granted a patent for a new in-air gesturing system. The technology behind the new in-air gesturing system appears to have no connection to the radar-based Soli technology in the Pixel 4.


The proposed in-air gesturing system works using a depth camera, much like Apple's TrueDepth camera, but one able to identify gestures at a greater distance than Face ID. It's something that Apple is also planning to introduce sometime in the not-too-distant future, hopefully with the iPhone 12.


Google's future hand tracking could be used with Chromebooks and, more importantly, with hand gestures used as an input mechanism for virtual and augmented reality systems, thereby supporting a more immersive user experience.


A generative hand tracking system captures images and depth data of the user's hand and fits a generative model to the captured image or depth data. To fit the model to the captured data, the hand tracking system defines and optimizes an energy function to find a minimum that corresponds to the correct hand pose.
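To make that fitting step concrete, here is a minimal, hypothetical sketch of energy-based model fitting: observed 3D points are compared against a parametric surface model, and the pose parameters are adjusted to minimize the sum of squared point-to-surface distances. The spherical "model" and the finite-difference gradient descent below are stand-ins for illustration only; the patent's actual energy is defined over an articulated signed distance field of a hand.

```python
import numpy as np

# Toy stand-in for the patent's surface model: a sphere whose center
# (theta[0:3]) and radius (theta[3]) play the role of pose parameters.
def distance_to_surface(points, theta):
    center, radius = theta[:3], theta[3]
    return np.linalg.norm(points - center, axis=1) - radius

# Energy function: mean squared distance of observed points to the surface.
def energy(points, theta):
    d = distance_to_surface(points, theta)
    return np.mean(d ** 2)

# Minimize the energy with finite-difference gradient descent. Real
# systems use analytic gradients and far more efficient optimizers.
def fit_pose(points, theta0, lr=0.1, steps=500, eps=1e-5):
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(steps):
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            tp = theta.copy(); tp[i] += eps
            tm = theta.copy(); tm[i] -= eps
            grad[i] = (energy(points, tp) - energy(points, tm)) / (2 * eps)
        theta -= lr * grad
    return theta
```

Given depth points sampled from the true surface, the minimum of the energy recovers the pose parameters, which is the same principle the patent applies to hand poses.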


Conventional hand tracking systems typically have accuracy and latency issues that can result in an unsatisfying user experience, and Google's granted patent claims to have what it takes to correct these issues.


Google notes in their granted patent that "In some embodiments, the hand tracking module uses the current pose estimate to update graphical data on a display. In some embodiments, the display is a physical surface, such as a tablet, mobile phone, smart device, display monitor, array(s) of display monitors, laptop, signage and the like or a projection onto a physical surface."


Google's patent covers techniques for estimating a pose of at least one hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose.


Further into their patent, Google notes that a hand tracking module receives depth images of a hand from a depth camera and identifies a pose of the hand by fitting an implicit surface model of a hand, defined as the zero crossings of an articulated signed distance function, to the pixels of a depth image that correspond to the hand.
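As a rough illustration of what an implicit surface defined by a signed distance function looks like, the sketch below builds a toy "articulated" SDF as the minimum over per-segment capsule distances; points where the function crosses zero lie on the model's surface. The capsule construction is an assumption for illustration, not the patent's actual hand model.

```python
import numpy as np

# Signed distance from point p to a capsule with axis a->b and radius r:
# negative inside the capsule, zero on its surface, positive outside.
def capsule_sdf(p, a, b, r):
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(ap - t * ab) - r

# A crude articulated SDF: the minimum over per-bone capsule SDFs.
# bones is a list of (a, b, r) tuples, e.g. one per finger segment.
def articulated_sdf(p, bones):
    return min(capsule_sdf(p, a, b, r) for a, b, r in bones)
```

Fitting then amounts to finding the bone configuration whose zero-crossing surface best explains the observed depth pixels.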


The hand tracking module fits the model to the pixels by first volumetrically warping the pixels into a base pose and then interpolating a 3D grid of precomputed signed distance values to estimate the distance to the implicit surface model. The volumetric warp is performed using a skinned tetrahedral mesh. Google's patent FIG. 4 below is a diagram illustrating a base pose of a skinned tetrahedral volumetric mesh.


(Patent images: two hand poses on a skinned tetrahedral volumetric mesh)
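The interpolation step described above, estimating the distance to the surface from a precomputed 3D grid of signed distance values, can be sketched as standard trilinear interpolation. The grid layout (a fixed origin and uniform spacing) is an assumption for illustration:

```python
import numpy as np

# Estimate the signed distance at an arbitrary point by trilinearly
# interpolating a precomputed 3D grid of signed distance samples.
# grid: (nx, ny, nz) array; origin: world position of grid[0,0,0];
# spacing: uniform distance between adjacent grid samples.
def trilinear_sdf(grid, origin, spacing, point):
    g = (np.asarray(point, dtype=float) - origin) / spacing
    i0 = np.floor(g).astype(int)
    i0 = np.clip(i0, 0, np.array(grid.shape) - 2)
    fx, fy, fz = g - i0          # fractional offsets within the cell
    x, y, z = i0
    # Blend the 8 surrounding grid samples by their trilinear weights.
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((fx if dx else 1 - fx) *
                     (fy if dy else 1 - fy) *
                     (fz if dz else 1 - fz))
                value += w * grid[x + dx, y + dy, z + dz]
    return value
```

Precomputing the grid once in the base pose is what makes repeated distance queries during optimization cheap.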


According to Google, the hand tracking module uses the skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. Explicitly generating the articulated signed distance function is, however, avoided by instead warping the pixels from the deformed pose to the base pose, where the distance to the surface can be estimated by interpolating the precomputed 3D grid of signed distance values. The hand tracking module then minimizes the energy function based on the distance of each corresponding pixel so as to identify the candidate pose that most closely approximates the pose of the hand.
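The volumetric warp itself can be illustrated with barycentric coordinates: a point inside a deformed tetrahedron keeps the same barycentric weights when mapped back to that tetrahedron's base-pose vertices. This is a simplified single-tetrahedron sketch of the idea, not Google's implementation, which skins a full tetrahedral mesh over the hand:

```python
import numpy as np

# Barycentric coordinates of point p with respect to a tetrahedron.
# verts: 4x3 array of vertex positions; returns 4 weights summing to 1.
def barycentric(p, verts):
    T = np.column_stack([verts[1] - verts[0],
                         verts[2] - verts[0],
                         verts[3] - verts[0]])
    w123 = np.linalg.solve(T, np.asarray(p, dtype=float) - verts[0])
    return np.concatenate([[1.0 - w123.sum()], w123])

# Warp a point from the deformed pose back to the base pose by reusing
# its barycentric weights on the base-pose vertices.
def warp_to_base(p, deformed_verts, base_verts):
    w = barycentric(p, deformed_verts)
    return w @ base_verts
```

Once a depth pixel is warped to the base pose this way, its distance to the hand surface can be read from the precomputed signed distance grid.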


Google's future hand tracking system is configured to support hand tracking functionality for AR/VR applications, using depth sensor data.  The hand tracking system can include a user-portable mobile device, such as a tablet computer, computing-enabled cellular phone (e.g., a "smartphone"), a head-mounted display (HMD), a notebook computer, a personal digital assistant (PDA), a gaming system remote, a television remote, camera attachments with or without a screen, and the like.


Google's patent FIG. 1 is a diagram illustrating a hand tracking system estimating a current pose of a hand based on a depth image; FIG. 6 is a diagram illustrating a two-dimensional cross-section of the end of a finger in a base pose contained inside a triangular mesh; FIG. 7 is a diagram illustrating a two-dimensional cross-section of the end of a finger in a query pose contained inside a deformed triangular mesh; and FIG. 9 is a flow diagram illustrating a method of estimating a current pose of a hand based on a captured depth image. 




The U.S. Patent and Trademark Office published Google's granted patent yesterday, April 7, 2020. Google originally filed for this patent back on May 31, 2018.



