
Major Tech Companies Form Organization to Establish Ethical Boundaries and Best Practices Surrounding AI


In October 2015 Apple acquired Vocal IQ, an advanced deep learning company from the UK that had once singled out Siri as being a mere toy compared to what its own AI technology could do. In June of this year Patently Apple posted a report titled "Forbes Misguided View is that Apple has Missed the AI Revolution," wherein we made the case that the media's slant, and particularly Forbes', toward Apple missing the AI revolution was nonsense. In July we posted a report titled "The Vision of Artificial Intelligence According to the Gospel of Google," and on August 5 we learned that Apple had made yet another investment in artificial intelligence by acquiring a Seattle-based startup called Turi. Then, as AI technology raced along the yellow brick road to a rosy future, the Tesla vehicle crash happened. The Autopilot feature, which uses AI to drive a vehicle without the need of a human driver, miscalculated a road condition and the driver was killed in the collision. It was a brutal reminder of just how badly things can go when humans put too much trust in AI systems.

But this is just the tip of the iceberg as America's top technology companies race to the future with AI front and center in their upcoming products. The field is moving so quickly that some of the biggest names in tech have come together to form a nonprofit organization to establish best practices for the development of AI technology in partnership with academics and ethics experts.

 


In the video above, CNN Money interviews Bill Gates' wife, Melinda Gates, about the potential risks associated with AI and where it's headed.

 

Facebook, Amazon, Google, Microsoft and IBM have formed a new non-profit to establish best practices for the development of AI technology in partnership with academics and ethics experts.

 

The group, unveiled on Wednesday, goes by the heartwarming name Partnership on Artificial Intelligence to Benefit People and Society.

 

"This partnership will ensure we're including the best and the brightest in this space in the conversation to improve customer trust and benefit society," Ralf Herbrich, director of machine learning science and core machine learning at Amazon, said in a statement.

 

Each of the corporate members is expected to make financial and research contributions to the group, although details are scarce right now. The non-profit is also looking to engage with the scientific community and bring academics onto its board.

 

CNN was quick to point out in its report that Apple was noticeably absent from the group.

 

The non-profit's creation comes as tech companies race to incorporate AI into products ranging from personal assistants to photo apps. Yet, even the brightest minds have raised concerns about the impact of AI on humanity.

 

"The development of full artificial intelligence could spell the end of the human race," Stephen Hawking, author and physicist, said in one interview in 2014.

 

Elon Musk, the CEO of SpaceX and Tesla, expressed a similar fear around the same time as Hawking.

 

In 2014 Musk wrote that "The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital 'super-intelligences' and prevent bad ones from escaping into the Internet. That remains to be seen."

 

Yet the public in general, and the tech savvy in particular, already distrust the government when it comes to privacy issues, and rightfully so. Have you seen the Edward Snowden movie yet? I'm not so sure that the public is ready to trust tech companies to do the right thing either. We definitely want Apple's Siri and its equivalents to do handy things in the car to enhance hands-free operation and to take on tasks in home automation, but there's a limit to how far the public will close its eyes to the potential dangers of where this technology is headed.

 

For now, the organization is at least trying to find some common ground to safeguard the public. But then we'll need a watchdog to watch that group. I think it's safe to say that the public is a little wary about where all of this is going at the moment. Tech companies had better take it slow in bringing this to market, and they had better not hide AI in products to track our lives without our knowledge.

 

What's your take on this issue? Send in your comments below.

 

 
