TL;DR:
- Google introduces an experimental AI-powered search feature, offering direct answers from AI language models.
- The convenience of AI summaries may lead to mainstream adoption, but caution is advised due to the potential for false information.
- The new search function utilizes Google’s advanced language models, PaLM 2 and MUM.
- Concerns arise regarding the potential control Google may exert over online information and the need for fact-checking.
- Content creators may face significant readership drops as users rely more on AI summaries.
- Addressing challenges in AI search monetization and ensuring a steady supply of articles are crucial.
- The market implications call for increased discussions on responsibility, trust, and potential remedies.
Main AI News:
Google, the tech giant known for its innovative products and services, is taking a major leap forward in artificial intelligence (AI) with the introduction of an experimental AI-powered search feature. This groundbreaking development is poised to grant Google unprecedented control over the information users see online. Instead of traditional search results, the new function utilizes an AI language model to provide direct answers to user queries, drawing information from a wide array of online articles.
The convenience of receiving AI-generated results without the need to click through multiple websites has ignited speculation about the imminent mainstream adoption of this technology. Microsoft Bing and other search engines have already embraced generative AI, further fueling expectations for its widespread acceptance. However, experts urge caution when relying solely on the advice of these AI models. Like ChatGPT, Google’s language model is prone to presenting false information, a phenomenon termed “hallucinations,” and even to fabricating sources.
Presently, the experimental feature is limited to select Chrome and Google app users in the United States. Google aims to expand its availability beyond the U.S., but specific timelines remain uncertain. The trial period for this new search function will conclude in December.
The Inner Workings of Google’s Generative AI Search Function
Users approved for the testing phase can utilize Google as they normally would, but with one significant difference. A box containing an AI-generated “snapshot of key information” appears before any articles related to the search query. Unlike ChatGPT, Google’s AI provides citations for the sources it draws upon, facilitating a deeper exploration of the referenced material.
Below the AI snapshot, users will find options to ask follow-up questions and suggested next steps. Clicking on these options activates a new “conversational mode,” enabling users to delve further into their inquiries and receive context-specific answers.
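To make the described layout concrete, here is a minimal, purely illustrative Python sketch of how such a result might be modeled as data; the class and field names are assumptions introduced for explanation only and do not reflect Google’s actual interfaces.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Citation:
    # A source the AI snapshot draws on, so users can explore the original material
    title: str
    url: str

@dataclass
class AISnapshot:
    # The AI-generated "snapshot of key information" shown above the organic results
    summary: str
    citations: List[Citation] = field(default_factory=list)
    # Suggested follow-up questions that open the "conversational mode"
    follow_ups: List[str] = field(default_factory=list)

example = AISnapshot(
    summary="A short synthesis of key points from several cited articles.",
    citations=[Citation("Example article", "https://example.com/article")],
    follow_ups=["Can you compare these options in more detail?"],
)
```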
Daniel Russell, a former senior research scientist at Google and current professor at Stanford’s Institute for Human-Centered Artificial Intelligence, expressed his conviction that generative AI search will dominate the landscape by the middle of the following year. He emphasized that Google’s new feature sets itself apart from traditional search engines by understanding nuanced phrases and responding in natural-sounding prose.
The underlying technologies powering this revolutionary search function are Google’s next-generation Pathways Language Model 2 (PaLM 2) and the Multitask Unified Model (MUM).
Google’s Response and the Perils of AI-Driven Search Results
Despite the potential convenience offered by a generative AI-powered search engine, there are inherent risks associated with this technology, as outlined by Russell. Large language models like Google’s AI are prone to generating false information, leading to what computer scientists refer to as “hallucinations.” This is a well-documented issue in AI chatbots, including ChatGPT and Google’s Bard.
Russell highlighted a significant challenge with AI search results: when presented with well-written prose, users tend to trust the information without verifying its accuracy. It is often unclear where the information came from or how it was evaluated, even though it may appear highly plausible. Such plausibility poses a problem, as users may unknowingly rely on fabricated or misleading data.
Although Google’s AI provides source citations, there is a possibility that these references, too, are manufactured. Russell revealed that during his testing of the new feature, he encountered a citation with a fabricated journal name, fake page numbers, and a counterfeit publication date. While he believes Google will address this issue, there is still progress to be made.
Joel Blit, an associate professor specializing in AI and the economics of innovation at the University of Waterloo, emphasized the importance of fact-checking search results as AI search gains popularity. Because the underlying system is a large language model, there is always a risk that it will generate incorrect information or present a synthesized version that contains biases and omits critical details. However, given the option, many users are likely to settle for Google’s AI summary instead of exploring external websites.
Google’s Information Monopoly and the Implications
Blit raised concerns about Google’s already substantial control over the information people consume worldwide. With the introduction of AI-powered search, Google could attain unprecedented authority over online information access. If Google establishes a monopoly over this new iteration of AI-powered search, there is a genuine concern that individuals may rely solely on a single source for their information, a source that is more heavily edited than ever before.
While acknowledging Google’s efforts to act responsibly, Blit emphasized that civil society should not blindly trust the company to make the right decisions. The issue of information control is not unique to Google; it has been a long-standing challenge for search engines and their peers.
Russell concurred, stating that finding a solution to this problem would take time, as it has been an ongoing struggle for Google and other search engines to determine which sources to display.
Disrupting the Content Creation Industry through AI Search
As users increasingly rely on Google’s AI summary, the underlying articles that contribute to the creation of these summaries may experience a significant decline in readership. This shift could have profound implications for journalists, bloggers, and content creators who rely on traffic generated through traditional search engines. Blit predicts a potential drop of 50 percent or more in readership for many content providers in the coming years.
Addressing this issue presents a considerable challenge, often described as the “million-dollar question.” Russell suggested the need for an entirely new monetization model to accommodate AI-generated summaries. One potential approach could involve micropayments based on impressions, providing a share of revenue to publishers whose content is cited by Google’s AI. Such models are essential to ensure a consistent supply of articles for AI to function effectively.
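As a rough, hypothetical illustration of the impression-based micropayment idea Russell describes, the Python sketch below splits a revenue pool among publishers in proportion to how often their content was cited; the function name, figures, and proportional allocation rule are assumptions for illustration, not a description of any existing program.

```python
def allocate_revenue(total_revenue: float, impressions: dict[str, int]) -> dict[str, float]:
    """Split a revenue pool among publishers in proportion to citation impressions."""
    total = sum(impressions.values())
    if total == 0:
        return {publisher: 0.0 for publisher in impressions}
    return {publisher: total_revenue * count / total for publisher, count in impressions.items()}

# Example: a $1,000 pool shared by three publishers whose articles were cited
print(allocate_revenue(1000.0, {"outlet_a": 5000, "outlet_b": 3000, "outlet_c": 2000}))
# -> {'outlet_a': 500.0, 'outlet_b': 300.0, 'outlet_c': 200.0}
```

Any real scheme would need to settle harder questions this sketch ignores, such as how impressions are counted and how much of the revenue pool is shared at all.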
Blit expressed optimism that the majority of users would embrace the convenience and usefulness of this service. However, he called for a discussion on the potential risks and ways to address them, emphasizing the need for collective action as a civil society.
Conclusion:
The introduction of Google’s AI-powered search feature represents a significant shift in the way we access and consume information. While the convenience is undeniable, false information and the concentration of control over online information remain serious risks. The content creation industry may experience significant disruption, necessitating new monetization models. To ensure a healthy market, proactive discussions on responsibility, trust, and effective remedies are crucial.