Last May, Sundar Pichai, Google’s chief executive, said the company would use artificial intelligence to reimagine all of its products.
But because new generative A.I. technology presented risks, like spreading false information, Google was cautious about applying the technology to its search engine, which is used by more than two billion people and was responsible for $175 billion in revenue last year.
On Tuesday, at Google’s annual conference in Mountain View, Calif., Mr. Pichai showed how the company’s aggressive work on A.I. had finally trickled into the search engine. Starting this week, he said, U.S. users will see a feature, A.I. Overviews, that generates information summaries above traditional search results. By the end of the year, more than a billion people will have access to the technology.
A.I. Overviews is likely to heighten concerns that web publishers will see less traffic from Google Search, putting more pressure on an industry that has reeled from rifts with other tech platforms. On Google, users will see longer summaries about a topic, which could reduce the need to go to another website — though Google downplayed those concerns.
“The links included in A.I. Overviews get more clicks” from users than if they were presented as traditional search results, Liz Reid, Google’s vice president of search, wrote in a blog post. “We’ll continue to focus on sending valuable traffic to publishers and creators.”
The company also unveiled a host of other initiatives — including a lightweight A.I. model, new chips and so-called agents that help users perform tasks — in an effort to gain the upper hand in an A.I. slugfest with Microsoft and OpenAI, the maker of ChatGPT.
“We are in the very early days of the A.I. platform shift,” Mr. Pichai said on Tuesday at Google’s I/O developer conference. “We want everyone to benefit from what Gemini can do,” including developers, start-ups and the public.
When ChatGPT was released in late 2022, some tech industry insiders considered it a serious threat to Google’s search engine, the most popular way to get information online. Since then, Google has aggressively worked to regain its advantage in A.I., releasing a family of technology named Gemini, including new A.I. models for developers and the chatbot for consumers. It also infused the technology into YouTube, Gmail and Docs, helping users create videos, emails and drafts with less effort.
All the while, Google’s tit-for-tat competition with OpenAI and its partner, Microsoft, has continued. The day before Google’s conference, OpenAI presented a new version of ChatGPT that is more akin to a voice assistant.
(The New York Times sued OpenAI and Microsoft in December for copyright infringement of news content related to A.I. systems.)
At its Silicon Valley event, Google showcased how it would enmesh A.I. more deeply into users’ lives. It presented Project Astra, an experiment to see how A.I. could act as an agent, vocally chatting with users and responding to images and videos. Some of the abilities will be available to users of Google’s Gemini chatbot later this year, Demis Hassabis, chief executive of DeepMind, Google’s A.I. lab, wrote in a blog post.
DeepMind also presented Gemini 1.5 Flash, an A.I. model designed to be fast and efficient but lighter in size than Gemini 1.5 Pro, the midtier model that Google rolled out to many of its consumer services. Dr. Hassabis wrote that the new model was “highly capable” at reasoning and was good at summarizing information, chatting and captioning images and videos.
The company announced another A.I. model, Veo, that generates high-definition videos based on simple text prompts, similar to OpenAI’s Sora system. Google said that some creators could preview Veo and that others could join a wait-list for access to it. Later this year, the company expects to bring some of Veo’s abilities to YouTube Shorts, the video platform’s TikTok competitor, and other products.
Google also showed off the latest versions of its music-generation tool, Lyria, and image generator, Imagen 3. In February, Google’s Gemini chatbot was criticized by users on social media for refusing to generate images of white people and presenting inaccurate images of historical figures. The company said it would shut off the ability to generate images of people until it fixed the issue.
In the past three months, more than one million users have signed up for Gemini Advanced, the version of Google’s chatbot available through a $20 monthly subscription, the company said.
In the coming months, Google will add Gemini Live, which will let users speak to the chatbot through voice commands. The chatbot will respond in natural-sounding voices, Google said, and users will be able to interrupt Gemini to ask clarifying questions. Later this year, users will be able to use their cameras to show Gemini Live the physical world around them and have conversations with the chatbot about it.
Besides A.I. Overviews, Google’s search engine will present search results pages organized by A.I., with generated headlines highlighting different types of content. The feature will start with dining and recipe results, and will later be offered for shopping, travel and entertainment queries.
Ms. Reid, the head of search, said in an interview before the conference that she expected the search updates to save users time because Google “can do more of the work for you.”
Mr. Pichai said he expected that a vast majority of people would interact with Gemini A.I. technology through Google’s search engine.
“We’re going to make it more and more seamless for people to interact with Gemini,” Mr. Pichai said in a briefing before the conference.