A Team at Stanford University is Working on an Open, Privacy-Preserving Virtual Assistant to Compete with Siri, Alexa & more
Scientists at Stanford University are warning about the consequences of a race to control what they believe will be the next key consumer technology market — virtual assistants like Amazon's Alexa, Google Assistant and Apple's Siri. In an interview with the New York Times, Stanford's Dr. Monica Lam stated that "A monopoly assistant platform has access to data in all our different accounts. They will have more knowledge than Amazon, Facebook and Google combined."
Dr. Lam is collaborating with a group of Stanford faculty members and students to build a virtual assistant called Almond that would allow individuals and corporations to avoid surrendering personal information as well as retain a degree of independence from giant technology companies.
Dr. Lam said the threat to privacy cannot be overstated. For example, she noted that Wynn Resorts in Las Vegas last year installed Amazon Echo devices in its hotel rooms.
Lam added that "Once they said that what happens in Las Vegas stays there. Now that’s no longer necessarily true. Now it might end up in Seattle," or Cupertino.
Stanford's open, privacy-preserving virtual assistant, called "Almond," is now available for the public to test here.
The researchers’ biggest concern is that virtual assistants, as they are designed today, could have a far greater impact on consumer information than today’s websites and apps. Putting that information in the hands of one big company or a tiny clique, they say, could erase what is left of online privacy.
They also hope Almond can leapfrog existing virtual assistant systems in its ability to understand complex language. Virtual assistants are doing a better job of understanding what humans say, but they have made much less progress in understanding what those words mean. Context and nuance are difficult for a machine to understand.
The NYTimes report later notes that it has taken years of work to get to this point. Three decades ago, Apple commissioned a group led by the computer scientist Alan Kay to create a video showing how humans might one day interact with computers using spoken language.
The video, known as 'Knowledge Navigator,' featured an absent-minded professor who talked with a computing system to perform everyday tasks and academic research.
The demonstration inspired a number of developers, including the artificial intelligence researchers Adam Cheyer and Tom Gruber, who began research on virtual assistants while they were still at SRI International, an independent research laboratory in Menlo Park, Calif.
In 2010, Apple acquired their start-up, Siri, and then released its technology for the iPhone the following year.
Since then, Siri has faced stiff competition. Last year, Amazon said it had 10,000 employees working on its Alexa service, many of them focusing on improving the ability to understand complex commands.
The Stanford researchers argue that Alexa’s approach, even with thousands of employees, will never be able to adequately deal with the complexity and variability of human language because it is incredibly labor-intensive and may not extend to more complex conversations.
Dr. Gruber, who recently left Apple after heading advanced development there, remains skeptical of any near-term technical breakthrough that will make it possible for virtual assistants to have human-like understanding.
For more of this story, read the full NYTimes report titled "Stanford Team Aims at Alexa and Siri With a Privacy-Minded Alternative," here.
It's ironic that on Sunday, Apple's CEO Tim Cook gave the commencement speech at Stanford University and emphasized the issue of online privacy, even as one of the university's own teams was hard at work on "Almond," a next-gen virtual assistant built on the premise that assistants like Alexa, Siri and others may not be protecting our privacy as well as we hope.