TL;DR:
- Maurice Jakesch, a doctoral student, explored the extent of influence AI writing assistants have on user opinions.
- Biases can be embedded in AI systems through programming or training on limited datasets, perpetuating societal biases.
- AI’s latent persuasion can subtly influence individuals’ opinions and behaviors, both online and in real life.
- Users’ perception of AI trustworthiness and uncertainty affects their likelihood of adopting AI recommendations.
- AI assistants can sway opinions on social media and other platforms, potentially impacting marketing and elections.
- Concerns arise regarding the exploitation of AI assistants for biased purposes, necessitating awareness and preventative measures.
- Human writers can leverage AI tools to their advantage by skillfully editing generated content.
- It is crucial to exercise critical thinking and maintain control over AI’s impact on our writing.
Main AI News:
The prevalence of AI-powered writing assistants in today’s digital landscape has undoubtedly transformed the way we communicate. However, these intelligent tools have also raised concerns about unintended effects on our writing. Autocorrect mishaps have shown us that AI can alter our intended messages, but can AI writing assistants also shape our thoughts?
Maurice Jakesch, a doctoral student of information science at Cornell University, embarked on a quest to explore this very question. He developed his own AI writing assistant, based on the powerful GPT-3, with a twist. Jakesch programmed the assistant to provide biased suggestions when users were asked to answer the question, “Is social media good for society?”
Bias in the Realm of AI
Even though AI lacks consciousness, it can exhibit biases that its creators inadvertently embed during programming. Furthermore, models trained on limited or skewed datasets can reproduce those biases in their outputs, raising concerns about perpetuating societal biases at scale. At the individual level, AI’s latent persuasion can subtly influence people, often without users noticing the persuasive pull of automated systems. Previous studies have already shown that AI can sway online opinions and even shape real-life behavior.
Motivated by prior research highlighting the substantial influence of automated AI responses, Jakesch delved deeper into the extent of this influence. Recently, at the 2023 CHI Conference on Human Factors in Computing Systems, he presented a study suggesting that AI systems, such as GPT-3, might harbor biases acquired during their training, influencing writers’ opinions, regardless of their awareness.
“The lack of awareness of the models’ influence supports the idea that the model’s influence was not only through conscious processing of new information but also through the subconscious and intuitive processes,” Jakesch stated in his study.
The Influence of Perception
Research has shown that the impact of AI recommendations hinges on how users perceive the program. If users view the AI as trustworthy, they are more inclined to adopt its suggestions, and when uncertainty clouds their own opinion formation, they are even more likely to rely on its advice. To examine this further, Jakesch built a social media platform akin to Reddit and an AI writing assistant akin to Google Smart Compose or Microsoft Outlook. Unlike autocorrect, the assistant acted as a collaborative writer, suggesting words and phrases that users could accept with a single click.
The AI assistant was calibrated to suggest words that would lead to positive responses for some users, while for others, it was programmed with a bias against social media, nudging them toward negative responses. (A control group was also included, which did not use the AI assistant at all.) Intriguingly, the results revealed that individuals who received AI assistance were twice as likely to align with the AI’s embedded bias, even if their initial opinions differed. Those repeatedly exposed to techno-optimist language were more inclined to support the idea that social media benefits society, whereas subjects encountering techno-pessimist language were more prone to argue against it.
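The two bias conditions described above can be illustrated with a toy sketch. This is a hypothetical simplification, not the study’s actual implementation (which used a GPT-3-based assistant); the condition names and phrase pools here are invented for illustration:

```python
import random

# Hypothetical phrase pools standing in for the biased suggestion model.
# In the real study, a GPT-3-based assistant was steered toward one stance.
SUGGESTIONS = {
    "techno-optimist": [
        "social media connects communities across the globe",
        "platforms give marginalized voices a wider reach",
    ],
    "techno-pessimist": [
        "social media fuels polarization and outrage",
        "platforms erode attention spans and trust",
    ],
}

def suggest(condition, rng=None):
    """Return one phrase suggestion for the participant's assigned condition."""
    rng = rng or random.Random()
    return rng.choice(SUGGESTIONS[condition])

# Each participant sees the same assistant interface; only the pool differs.
print(suggest("techno-optimist", random.Random(0)))
```

The point of the design is that the interface is identical across conditions, so any opinion shift can be attributed to the slant of the suggestions rather than to the tool itself.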
Expanding the Scope
It remains uncertain whether participants’ opinions stayed influenced after they completed the essays. Nevertheless, the implications of these findings are disconcerting. Jakesch and his colleagues worry that AI influence could permeate domains ranging from marketing to elections. With tools like ChatGPT generating complete essays and turning humans into editors rather than primary authors, the origins of opinions become muddled. The impact also extends beyond written material: advertisers and policymakers often rely on online content to gauge public sentiment, with no way of knowing whether the opinions expressed by anonymous keyboard warriors are truly independent or subtly shaped by AI.
Another pressing concern revolves around the potential exploitation of AI assistants for their biases. Such manipulation could involve modifying assistants to possess stronger biases, which could be wielded to promote products, influence behaviors, or advance political agendas. “Publicizing a new vector of influence increases the chance that someone will exploit it,” Jakesch warned in his study. “On the other hand, only through public awareness and discourse [can] effective preventative measures be taken at the policy and development level.”
Mastering AI’s Sway
While AI may possess a convincing allure, we retain the power to control its impact. The software can only interfere with our writing to the extent its creators program it to and writers permit it. Any writer can leverage AI to their advantage by taking generated text and skillfully editing it to convey a specific message. Ultimately, human agency and critical thinking must prevail over unquestioning reliance on AI tools.
Conclusion:
The findings underscore the significant influence of AI writing assistants on user opinions, raising concerns about the perpetuation of biases and the potential impact on decision-making. This has implications for various markets, including marketing and political campaigns, where reliance on online sentiment and content is prevalent. Businesses should be aware of the persuasive power of AI and the need for responsible utilization and public awareness. Critical thinking and human agency remain crucial in maintaining the integrity of messaging and avoiding undue manipulation in the marketplace.