At Google I/O 2024 today, the company demonstrated how it has infused Search with AI in a fully revamped experience, and much more
For nearly two years, Google has been locked in a race with OpenAI and others to bring generative artificial intelligence — which can answer complex questions in a conversational manner — to the public in a way that most consumers will actually adopt. On Tuesday, Google fired a clear shot at competitors, signaling it has no intention of losing its leading position as the world’s most popular search engine.
The act of “Googling,” which has been synonymous with search for the past two decades, will now become supercharged with the technology from Alphabet Inc.’s powerful AI model, Gemini, the company said at its annual developer conference in Mountain View, California.
“Google search is generative AI at the scale of human curiosity,” Chief Executive Officer Sundar Pichai said onstage in announcing the new features at the company’s I/O summit.
In front of a live audience, Google unveiled what Pichai called a “fully revamped, new search experience” that will roll out to all US users this week, with the new Gemini-powered search coming to other countries “soon.”
The biggest single change in Googling is that some searches will now come with "AI Overviews," a more narrative response that spares people the task of clicking through various links.
An AI-powered panel will appear underneath people’s queries in the famously simple search bar, presenting summarized information drawn from Google search results from across the web. Google said it would also roll out an AI-organized page that groups results by theme or presents, say, a day-by-day plan for people turning to Google for specific tasks, such as putting together a meal plan for the week or finding a restaurant to celebrate an anniversary. Google said it won’t trigger AI-powered overviews for certain sensitive queries, such as searches for medical information or self-harm.
The nature of online search is fundamentally changing — and Google’s rivals are increasingly moving in on its turf. The search giant has faced enormous pressure from the likes of OpenAI and Anthropic, whose AI-powered chatbots ChatGPT and Claude are easy to use and have become widely adopted — threatening Google’s pole position in search and menacing its entire business model.
By bringing more generative AI to its search engine, Google hopes to reduce the time and mental load it takes for users to find the information they are looking for, said Liz Reid, the company's head of Search.
“Search is a very powerful tool. But there’s lots of times where you have to do a lot of hard work in searching,” Reid said. “How can we take that hard work out of searching for you, so you can focus on getting things done?” Reid said the new AI-powered Google search will be able to process billions of queries.
In a call with reporters on Monday, Demis Hassabis, CEO of Google’s AI lab DeepMind, went even further in showcasing Gemini’s ability to respond to queries. Hassabis showed off Project Astra, a prototype of an AI assistant that can process video and respond in real time. In a prerecorded video demo, an employee walked through an office as the assistant used the phone’s camera to “see,” responding to questions about what was in the scene. The program correctly answered a question about which London neighborhood the office was located in, based on the view from the window, and also told the employee where she had left her glasses. Hassabis said that the video was captured “in a single take, in real time.”
"At any given moment, we are processing a stream of different sensory information, making sense of it and making decisions," Hassabis said of the Project Astra demo. "Imagine agents that can see and hear what we do to better understand the context we're in and respond quickly in conversation, making the pace and quality of interaction feel much more natural." Pichai later clarified that Google is "aspirationally" looking to bring some features of Project Astra to the company's core products, particularly Gemini, toward the latter half of this year.
In order to keep advancing in artificial intelligence, Google has also had to update its suite of AI models, and the company shared more progress on that front on Tuesday. It announced Gemini 1.5 Flash, which Google says is the fastest AI model available through its application programming interface, or API, typically used by programmers to automate high-frequency tasks like summarizing text, captioning images or video, or extracting data from tables. For more, read the full Bloomberg report.
Today's Google I/O 2024 was a bit of a mind bender if you're not a developer. While there was a lot to take in, it's clear that AI is going to change Search forever, and there'll be a learning curve to get your head around the changes and advancements. Below are a few references from Google I/O today that you may be interested in diving into.
- 01: Gemini breaks new ground with a faster model, longer context, AI agents and more
- 02: Get more done with Gemini: Try 1.5 Pro and more intelligent features
- 03: Gemini 1.5 Pro updates, 1.5 Flash debut and 2 new Gemma models
- 04: New generative media models and tools, built with and for creators
- 05: Generative AI in Search: Let Google do the searching for you
- 06: Ask Photos: A new way to search your photos with Gemini
- 07: 3 new ways to stay productive with Gemini for Google Workspace
- 08: Experience Google AI in even more ways on Android
- 09: Introducing VideoFX, plus new features for ImageFX and MusicFX
- 10: Building on our commitment to delivering responsible AI
Google I/O 2024 Keynote
Next Stop: Apple's WWDC24 on June 10th. The event will likewise be about all things AI, and most can't wait to see what new AI features Apple will bring to its various operating systems later this year.