Study: Most readers want publishers to label AI-generated articles

TL;DR:

  • Readers desire transparency regarding AI’s role in news production.
  • Disclosing AI involvement lowers news organizations’ perceived trustworthiness, even when readers do not judge the content less accurate.
  • Familiarity with news production lessens the trust gap caused by AI disclosure.
  • Generative AI fails to enhance trust among media skeptics.
  • Listing the sources an AI drew on offsets the negative impact of AI attribution.
  • Most respondents advocate for news organizations to disclose AI usage and provide explanatory notes.
  • Communicating the role of AI in news remains a challenge for newsrooms.

Main AI News:

In the realm of news publishing, the integration of AI has sparked both intrigue and apprehension. A recent study by Benjamin Toff of the University of Minnesota and Felix M. Simon of the Oxford Internet Institute sheds light on a central paradox. Their research, titled “‘Or they could just not use it?’: The paradox of AI disclosure for audience trust in news,” delves into audience perceptions of AI-generated news articles, offering critical insights into the complex relationship between technology and journalism.

The study revealed that a significant majority of readers desire transparency from news publishers when AI plays a role in shaping news coverage. However, disclosing AI involvement appears to come at a cost for news outlets. This dilemma has profound implications for the news industry’s quest to maintain public trust in an age of automation.

More than three-quarters of U.S. adults consider news articles written by AI “a bad thing.” Nevertheless, the prevalence of AI-generated content, from established names like Sports Illustrated to media giant Gannett, underscores that AI has already become an integral part of the contemporary news landscape. Asking Google a question and receiving an AI-generated answer is no longer some distant vision; it is our present reality.

Previous studies have primarily focused on the influence of AI algorithms on news recommendations, addressing questions related to readers’ comfort with the robotic curation of headlines. Some theories suggest that AI-generated news may be perceived as fair and unbiased due to the “machine heuristic,” a tendency to attribute objectivity to technology devoid of human emotions or hidden agendas.

In their experiment, conducted in September 2023, participants were presented with news articles spanning various political subjects. Some articles were explicitly labeled as AI-generated, while others were not, and some of the AI-credited articles included a list of source materials. Notably, the articles came from HeyWire AI, a tech startup specializing in “actual AI-generated journalistic content,” though they appeared under a fictional news organization’s name. The sample of nearly 1,500 participants also skewed toward higher education levels and liberal ideologies, which may limit the findings’ generalizability. And as a working paper, the study has yet to undergo peer review.

This research was prompted by a critical question: how does the public perceive AI-generated news, and does it erode trust in journalism? Surprisingly, the experiment revealed that news organizations labeling stories as AI-generated were seen as less trustworthy by survey respondents. On an 11-point trust scale, the label led to a statistically significant decrease in trust, even though the content itself was not deemed less accurate or more biased.

Interestingly, those familiar with the intricacies of legitimate news production and reporting did not penalize news organizations for attributing content to AI. However, individuals who harbored a deep-seated distrust of news media continued to be skeptical when AI was involved.

The hope that generative AI could bridge trust gaps among those with the least faith in traditional journalism proved elusive in this experiment. While previous studies hinted at AI’s potential to reduce perceived bias among media skeptics, Toff and Simon found no such improvement. Future research may explore different labeling methods to foster trust within specific audience segments.

The experiment also shed light on the significance of transparency in mitigating AI-related trust issues. When participants were provided with a list of sources used by AI to generate the articles, the negative impact of AI disclosure on trust was nullified. In essence, transparency countered the erosion of trust, emphasizing the importance of open access to source materials.

The study’s conclusions echo previous sentiments, with a resounding majority of respondents advocating for news organizations to inform readers when AI is involved in content creation. Over 80% of participants expressed this preference, and 78% believed that news outlets should provide an explanatory note detailing how AI was employed.

While people may express a desire for transparency, whether they will actually read detailed explanations of AI usage in news production remains uncertain. A parallel to food labeling is instructive: consumers want transparency about the ingredients in their food, even if they rarely scrutinize ingredient lists. Similarly, news consumers may want AI disclosure even if they never delve into the details of how AI shapes their news.

As we reflect on the year since the emergence of ChatGPT and the transformative impact it has had on the tech industry, it is evident that both journalists and audiences are still acclimating to this evolving landscape. AI’s role in journalism may continue to evolve, and public perceptions may shift accordingly. This study highlights the pressing need for newsrooms to navigate the challenge of effectively communicating the role and limits of AI in news production, as the vocabulary for this task remains underdeveloped on both sides.

Simon emphasizes that these preliminary findings should not dissuade news organizations from establishing rules for responsible AI usage and disclosure. Comparative research on disclosure practices is already underway, offering valuable insights into when and how to disclose AI involvement in news creation.

Conclusion:

The integration of AI in news publishing presents a dual challenge of meeting reader expectations for transparency while navigating the trust paradox. While audiences desire disclosure, this study suggests that labeling AI-generated content may erode trust in news organizations, particularly among those already skeptical of media. To foster trust and transparency in the evolving landscape of AI journalism, news outlets must innovate in labeling and disclosure methods and engage in ongoing dialogues with their audiences.

Source