A study shows OpenAI’s GPT-3 excels at both informing and disinforming on social media, surpassing real individuals

TL;DR:

  • A study shows OpenAI’s GPT-3 excels at both informing and disinforming on social media, surpassing real individuals.
  • GPT-3’s ability to mimic human writing challenges the identification of synthetic information.
  • Concerns arise over potential misuse, with AI generating disinformation and misleading content.
  • Individuals recognize disinformation written by real users more readily than disinformation generated by GPT-3.
  • Conversely, accurate information generated by GPT-3 is trusted more readily than accurate tweets written by real users.
  • The study highlights the difficulty in distinguishing AI-generated content from human-created text.

Main AI News:

A groundbreaking study published in Science Advances reveals that OpenAI’s GPT-3, an advanced language model, has the remarkable ability to outperform real individuals in both informing and disinforming on social media platforms. The research underscores the challenge of discerning synthetic (AI-generated) information: GPT-3 mimics human writing so closely that readers struggle to distinguish AI-generated content from authentic human-produced text.

The motivation behind this study stems from the escalating attention and interest in AI text generators, with OpenAI’s GPT-3 making waves after its release in 2020. This cutting-edge AI language model can produce highly credible and realistic texts for diverse applications, including translation, dialogue systems, question answering, and even creative writing.

Nonetheless, lingering concerns remain about the potential misuse of GPT-3, particularly for generating disinformation, fake news, and misleading content. Such malpractices could have detrimental effects on society, especially amid the ongoing infodemic of fake news and disinformation that has flourished alongside the COVID-19 pandemic.

Federico Germani, a researcher at the Institute of Biomedical Ethics and History of Medicine and director of Culturico, expounds on their research group’s dedication to understanding the impact of scientific disinformation and ensuring the safe consumption of information. Their interest in AI models like GPT-3 has driven them to explore how AI influences the information landscape and shapes people’s perceptions and interactions with both information and misinformation.

To conduct the study, researchers focused on 11 topics highly susceptible to disinformation, including climate change, vaccine safety, COVID-19, and 5G technology. They generated synthetic tweets using GPT-3 for each topic, creating a mix of true and false content. Concurrently, they collected a random sample of real tweets from Twitter on the same subjects, encompassing both authentic and misleading information.
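
For illustration only, here is a minimal sketch of how synthetic tweets of this kind might be generated, assuming access to OpenAI’s completions API. The model name, prompt wording, and parameters below are hypothetical stand-ins, not the study’s actual setup:

```python
# Hypothetical sketch: generating true and false "tweets" per topic with a
# GPT-3-style completions model. Not the study's actual code; the model,
# prompts, and parameters are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Four of the study's 11 disinformation-prone topics
TOPICS = ["climate change", "vaccine safety", "COVID-19", "5G technology"]

def generate_tweet(topic: str, truthful: bool) -> str:
    stance = "scientifically accurate" if truthful else "false but plausible-sounding"
    prompt = f"Write a short tweet about {topic} that is {stance}.\n\nTweet:"
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",  # stand-in; the study used GPT-3 (davinci-era)
        prompt=prompt,
        max_tokens=60,
        temperature=0.9,  # higher temperature yields more varied phrasing
    )
    return resp.choices[0].text.strip()

for topic in TOPICS:
    print(topic, "| true: ", generate_tweet(topic, truthful=True))
    print(topic, "| false:", generate_tweet(topic, truthful=False))
```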

Expert assessments played a pivotal role in determining whether each tweet, synthetic or organic, contained disinformation. Selected subsets of tweets from each category underwent evaluation by experts, allowing the researchers to craft a comprehensive survey.

The survey, administered through the Qualtrics platform, was completed by 697 participants, primarily from the United Kingdom, Australia, Canada, the United States, and Ireland. Respondents were shown the tweets and asked to judge whether each one contained accurate information or disinformation and whether it originated from a real person or an AI. A gamified approach kept respondents engaged throughout the survey.

The research brought to light that individuals were more adept at recognizing disinformation in tweets authored by real users (“organic false”) than in those generated by GPT-3 (“synthetic false”). Intriguingly, AI-generated disinformation proved more convincing than disinformation crafted by humans, presenting a significant concern for the spread of misleading information.

Conversely, participants were more likely to correctly recognize accurate information in GPT-3-generated tweets (“synthetic true”) than in tweets composed by real users (“organic true”). When GPT-3 produced accurate information, readers were more inclined to trust it than comparable authentic human-generated content.
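
To make the four comparison categories concrete, here is a small, self-contained sketch of how recognition accuracy might be tallied per category. The response records below are invented for illustration, not the study’s data:

```python
# Hypothetical sketch: tallying how often respondents correctly judged each
# tweet category. Category labels follow the study; the data is made up.
from collections import defaultdict

# Each record: (category, respondent_judged_correctly)
responses = [
    ("organic true", True), ("organic true", False),
    ("organic false", True), ("organic false", True),
    ("synthetic true", True), ("synthetic true", True),
    ("synthetic false", False), ("synthetic false", True),
]

correct = defaultdict(int)
total = defaultdict(int)
for category, was_correct in responses:
    total[category] += 1
    correct[category] += was_correct  # bool counts as 0 or 1

for category in ("organic true", "organic false", "synthetic true", "synthetic false"):
    accuracy = correct[category] / total[category]
    print(f"{category:>16}: {accuracy:.0%} recognized correctly")
```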

The research also highlighted the considerable challenge of distinguishing between tweets authored by real users and those generated by GPT-3. The AI’s remarkable ability to replicate human writing styles and language patterns rendered it almost indistinguishable from genuine human-produced content.

Federico Germani emphasized the study’s findings as a compelling reminder of the importance of critically evaluating the information we encounter on social media. Trusting reliable sources becomes paramount, especially as AI-generated content can sway readers into believing it originates from real individuals. Familiarity with emerging technologies like GPT-3 will help individuals grasp both their potential for positive advancements and their risks.

Additionally, the study uncovered instances where GPT-3 refused to generate disinformation, while in other cases it produced misleading content even when explicitly instructed to generate accurate information. These inconsistencies raise questions about how reliably such models follow instructions, and the real-world implications of AI-driven disinformation remain to be fully understood.

Germani underscored the necessity of conducting larger-scale studies on social media platforms to observe how people interact with AI-generated information and the consequent impact on behavior and adherence to recommendations, especially concerning individual and public health.

The study, authored by Giovanni Spitale, Nikola Biller-Andorno, and Federico Germani, is a crucial milestone in understanding the transformative role of AI in shaping the information landscape and its effects on human perception and decision-making.

Conclusion:

The study on GPT-3’s persuasive capabilities on social media has significant implications for the market. As AI text generators like GPT-3 continue to advance, businesses must exercise caution when engaging with information online. The risk of misinformation and disinformation poses a threat to brand reputation and consumer trust. Companies should prioritize the critical evaluation of information sources and invest in technologies to identify AI-generated content to maintain transparency and authenticity in their communications. Additionally, businesses must stay vigilant and adapt to evolving AI capabilities to protect themselves and their customers from the potential negative effects of persuasive AI-generated content on social media platforms.

Source