The Increasingly Human-Like AI: Its Impact on Trust in Conversations (Video)

TL;DR:

  • Advanced AI systems are becoming increasingly human-like.
  • Research from the University of Gothenburg shows that human-like AI can undermine our trust in the people we converse with.
  • Individuals may not realize they are interacting with an AI system until a significant amount of time has passed.
  • Suspicion towards conversational partners can lead to damage to relationships, even when there is no reason for suspicion.
  • The design of AI with human-like features can be problematic in situations where it is unclear who or what individuals are communicating with.
  • The use of human-like voices in AI systems can create a sense of intimacy and lead people to form impressions based on the voice alone.
  • The uncertainty of whether an individual is communicating with a human or a computer can impact relationship-building and joint meaning-making aspects of communication.
  • Creating AI with well-functioning and eloquent voices that are still clearly synthetic can increase transparency in interactions with these systems.
  • Some forms of therapy that require more human connection may be negatively impacted by the lack of human-like qualities in AI systems.

Main AI News:

As AI technology continues to advance, its increasingly human-like features pose a potential threat to the trust we have in those we communicate with. Recent research conducted by the University of Gothenburg has explored how the rise of advanced AI systems has impacted our trust in conversational partners.

One study involved a would-be scammer attempting to defraud an elderly man, only to find himself conversing with a computer system that used pre-recorded loops. The fraudster spent a considerable amount of time patiently listening to the system’s somewhat confusing and repetitive responses, failing to realize that he was interacting with an AI system rather than a human.

Professor Oskar Lindwall of the University of Gothenburg’s Communication Department explains that it often takes individuals a long time to realize they are communicating with an AI system. He and collaborator Professor Jonas Ivarsson have written an article entitled “Suspicious Minds: The Problem of Trust and Conversational Agents,” which explores how individuals interpret and relate to situations where one of the parties may be an AI agent.

The authors of the article note that harboring suspicion toward conversational partners can have negative consequences, including damage to relationships. Ivarsson offers the example of a romantic relationship in which trust issues lead to jealousy and an increased tendency to search for evidence of deception. Lindwall and Ivarsson argue that suspicion can arise even when there is no actual reason for it: simply being unable to fully trust a conversational partner's intentions and identity is enough to strain the interaction.

Their study found that during interactions between humans, some behaviors were interpreted as signs that one of the parties was actually an AI agent. The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who or what you are communicating with.

Ivarsson questions whether AI should have human-like voices, as they create a sense of intimacy and lead people to form impressions based solely on the voice. In the case of the scammer and the elderly man, the believability of the human voice, combined with the assumption that the confused behavior was simply due to old age, meant it took the fraudster a long time to realize he was talking to a machine.

Once an AI has a voice, humans infer attributes such as gender, age, and socio-economic background, making it harder to distinguish between human and machine. To address this issue, the researchers propose creating AI with voices that are clearly synthetic, increasing transparency.

The uncertainty of whether one is conversing with a human or a computer affects the relationship-building and joint meaning-making aspects of communication. This uncertainty may not matter in some contexts, such as cognitive-behavioral therapy, but forms of therapy that depend on a closer human connection may be negatively affected by an AI's lack of human-like qualities.

Conclusion:

The increasing human-like features of AI systems have significant implications for the market. As individuals become more aware of the potential for AI to impersonate humans, trust in automated systems may erode. Companies that use AI in their products or services will need to consider the impact of these systems on their customers’ trust and ensure that they are transparent in their use of AI.

Additionally, businesses that rely on human connection and emotional engagement, such as certain forms of therapy or customer service, may need to find ways to incorporate more human-like qualities into their AI systems to maintain a positive customer experience. As AI technology continues to evolve, it will be essential for businesses to adapt their strategies to address the challenges posed by these increasingly human-like systems.

Source