For more than three months, Google executives have watched as projects at Microsoft and a San Francisco start-up called OpenAI have stoked the public’s imagination with the potential for artificial intelligence.
But on Tuesday, Google tentatively stepped off the sidelines as it released a chatbot called Bard. The new A.I. chatbot will be available to a limited number of users in the United States and Britain and will accommodate additional users, countries and languages over time, Google executives said in an interview.
The cautious rollout is the company’s first public effort to address the recent chatbot craze driven by OpenAI and Microsoft, and it is meant to demonstrate that Google is capable of providing similar technology. But Google is taking a much more circumspect approach than its competitors, which have faced criticism for spreading an unpredictable and sometimes untrustworthy technology.
Still, the release represents a significant step to stave off a threat to Google’s most lucrative business, its search engine. Many in the tech industry believe that Google — more than any other big tech company — has a lot to lose and to gain from A.I., which could help a range of Google products become more useful but could also help other companies cut into Google’s huge internet search business. A chatbot can instantly produce answers in complete sentences, sparing people from scrolling through the list of results a search engine would offer.
Google launched Bard as a stand-alone webpage rather than as a component of its search engine, beginning a tricky dance of adopting new A.I. while preserving one of the tech industry’s most profitable businesses.
“It’s important that Google start to play in this space because this is where the world is headed,” said Adrian Aoun, a former Google director of special projects. But the move to chatbots could help upend a business model reliant on advertising, said Mr. Aoun, who is now the chief executive of the health care start-up Forward.
In late November, OpenAI released ChatGPT, an online chatbot that can answer questions, write term papers and riff on almost any topic. Two months later, the company’s primary investor and partner, Microsoft, added a similar chatbot to its Bing internet search engine, showing how the technology could shift the market that Google has dominated for more than 20 years.
Google has been racing to ship A.I. products since December. It declared a “code red” in response to ChatGPT’s release, making A.I. the company’s central priority. And it spurred teams inside the company, including researchers who specialize in A.I. safety, to collaborate on speeding up the approval of a wave of new products.
Industry experts have wondered how quickly Google can develop new A.I. technology, particularly given OpenAI and Microsoft’s breakneck pace in releasing their tools.
“We are at a singular moment,” said Chirag Dekate, an analyst at the technology research firm Gartner. ChatGPT inspired new start-ups, captured the public imagination and prompted greater competition between Google and Microsoft, he said, adding, “Now that market demand has shifted, Google’s approach has, too.”
Last week, OpenAI tried to up the ante with newer technology called GPT-4, which will allow other businesses to build the kind of artificial intelligence that powers ChatGPT into a variety of products, including business software and e-commerce websites.
Google has been testing the technology underlying Bard since 2015, but has so far not released it beyond a small group of early testers because, like the chatbots offered by OpenAI and Microsoft, it does not always generate trustworthy information and can show bias against women and people of color.
“We are well aware of the issues; we need to bring this to market responsibly,” said Eli Collins, Google’s vice president for research. “At the same time, we see all the excitement in the industry and the excitement of all the people using generative A.I.”
Mr. Collins and Sissie Hsiao, a Google vice president for product, said in an interview that the company had not yet determined a way to make money from Bard.
Google announced last week that A.I. was coming to its productivity apps like Docs and Sheets, which businesses pay to use. The underlying technology will also be sold to companies and software developers who want to build their own chatbots or power new apps.
“It is early days for the technology,” Ms. Hsiao said. “We’re exploring how these experiences can show up in different products.”
The recent announcements are the beginning of Google’s plan to introduce more than 20 A.I. products and features, The New York Times has reported, including a feature called Shopping Try-on and the ability to create custom background images for YouTube videos and Pixel phones.
Rather than being combined with its search engine, Bard is a stand-alone webpage featuring a question box. At the bottom of an answer there is a button to “Google it,” which takes users to a new tab with a conventional Google search results page on the topic.
Google executives pitched Bard as a creative tool designed to draft emails and poems and offer guidance on how to get children involved in new hobbies like fly-fishing. The company is keen to see how people use the technology, and will further refine the chatbot based on use and feedback, the executives said. Unlike its search engine, though, Bard was not primarily designed to be a source of reliable information.
“We think of Bard as complementary to Google Search,” Ms. Hsiao said. “We want to be bold in how we innovate with this technology as well as be responsible.”
Like similar chatbots, Bard is based on a kind of A.I. technology called a large language model, or L.L.M., which learns skills by analyzing vast amounts of data from across the internet. This means the chatbot often gets facts wrong and sometimes makes up information without warning — a phenomenon A.I. researchers call hallucination. The company said it had worked to limit this behavior, but acknowledged that its controls were not entirely effective.
When executives demonstrated the chatbot on Monday, it refused to answer a medical question because doing so would require precise and correct information. But the bot also falsely described its source for an answer it generated about the American Revolution.
Google posts a disclaimer under Bard’s query box warning users that issues may arise: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” The company also offers users three response options for each question, and lets them provide feedback on the usefulness of a particular answer.
Much like Microsoft’s Bing chatbot and similar bots from start-ups like You.com and Perplexity, the chatbot annotates its responses from time to time, so people can review its sources. And it is tied to Google’s index of websites, so it can instantly gain access to the latest information posted to the internet.
This may make the chatbot more accurate in some cases, but not all. Even with access to the latest online information, it still misstates facts and generates misinformation.
“L.L.M.s are tricky,” said Mr. Collins, Google’s vice president for research. “Bard is no exception.”