Blast Theory’s exhibition “AI: Who’s Looking After Me?” at Science Gallery London: Exploring the Intersection of Artificial Intelligence and Human Care

TL;DR:

  • Blast Theory’s exhibition “AI: Who’s Looking After Me?” at Science Gallery London explores the ethical implications of artificial intelligence in care systems.
  • The exhibition brings together doctors, patients, artists, and scientists to delve into key issues surrounding AI and care.
  • It raises thought-provoking questions about the tension between robot assistance and human contact in care settings.
  • Fear and suspicion surrounding AI are addressed, highlighting the entangled benefits and risks in contemporary life.
  • The exhibition showcases diverse projects, including immersive installations, soft robotics, and investigations into human labor in AI systems.
  • The importance of collaborative and multidisciplinary approaches in grappling with societal problems is emphasized.
  • The exhibition challenges preconceived notions about AI, encourages critical thinking, and prompts consideration of power differentials.
  • The role of humans in caregiving and the potential impact of AI on human relationships are explored.
  • The exhibition serves as a timely reminder of the complex integration of AI in our lives and the underlying human decisions that shape it.
  • It underscores the need to approach AI use thoughtfully, considering power dynamics and striving for equitable practices.

Main AI News:

The integration of artificial intelligence (AI) into care systems has sparked intriguing debates about the ethics surrounding its implementation. In a thought-provoking exhibition titled “AI: Who’s Looking After Me?”, presented at Science Gallery London until January 20, 2024, artist group Blast Theory delves into the complexities of AI and its impact on human care. This captivating showcase brings together doctors, patients, artists, and scientists to examine the key issues surrounding AI in the context of caregiving.

The exhibition raises profound questions, such as the possibility of having a robot perform intimate tasks like washing. Matt Adams, a member of Blast Theory, explains the contrasting perspectives: “There’s this tension where you might not want a robot to do something so intimate; you want human contact. But the flip argument is it’s better for a robot to wash you so you’re not dealing with the embarrassment of another person; you have some privacy. There are these tensions between what impersonal means versus private.”

Fear and suspicion surrounding AI have been steadily increasing, with concerns ranging from privacy issues to artistic authenticity and human redundancy. This exhibition refuses to provide simple solutions, instead delving into the entangled benefits and risks of artificial intelligence in contemporary life. Siddharth Khajuria, director of Science Gallery London, stresses that AI is a present-day reality rather than a distant prospect: “It’s not dystopian or future hopeful. It’s present and messy.”

Science Gallery London, in collaboration with King’s College London, fosters a multidisciplinary approach by merging diverse knowledge bases. Khajuria explains the value of this approach: “We need to bring different perspectives together to grapple with increasingly knotty societal problems. The projects that feel messy in the best sense are collaborations between patient groups, medical engineers, and artists. When you encounter them, it will be tricky to know whose imagination has led or shaped it.”

Among the projects showcased is an immersive installation by sound artist Wesley Goatley, exploring defunct voice assistants, and an investigation by Fast Familiar into the romantic potential of a machine that has absorbed everything about love from the internet. Dr. Oya Celiktutan, Head of the Social AI & Robotics Lab at King’s Department of Engineering, collaborated with soft robotics studio Air Giants and King’s students to create a “huggable” robot named Vine. This robot interacts with visitors emotively, aiming to imitate non-verbal communication between people and build trust.

Celiktutan shares her insights: “I’m interested in non-verbal communication between people. I’m interested in how we can imitate that with robots so they can be clear and build trust with humans. This robot really doesn’t have any resemblance to a human, but with this basic shape, it can communicate and connect using nonverbal movements.”

Contrary to the stereotypical violent portrayals of robots in movies, Vine presents itself as trustworthy and inviting. Jeffrey Chong, one of the collaborating students, raises an essential question: “What can we do to make a robot seem more approachable? Also, what can a robot do for you to be able to trust and want to interact with it? What buttons can it press on the human brain or what behaviors can it display to make you think of it as a conversational partner?”

The cuddly appearance of Vine raises inquiries into the aesthetics of robotics. Theodore Lamarche explains the appeal of soft robotics: “Soft robotics are interesting because they look cute. I think a lot of the time people are scared of AI because of job replacement, but soft robotics see a lot of interest in the health sector where there are not enough people.” Lamarche highlights the PARO robot as an example—a gentle, seal-like creature designed to assist dementia patients, providing physical and mental interaction through its soothing light and responsive movements.

Artist Mimi Ọnụọha takes a different approach, focusing on the human labor force that supports AI systems. Her project, “The Future is Here!” investigates the working environments of crowdsourced laborers, who play a crucial role in manually tagging vast amounts of data. These workers, predominantly based in the Global South, operate remotely from their bedrooms, front rooms, and cafes. Ọnụọha highlights the disparity in compensation between these laborers and AI specialists or researchers, shedding light on the necessity for a fair and equitable approach to AI utilization.

Rather than advocating for a complete reversal of our relationship with these technologies, Ọnụọha emphasizes the importance of a thoughtful and deliberate approach to their use. She states, “We need to insert a little friction into how people approach these tools. What is this ecosystem, and how do we want it to be? What types of power differentials are we considering? If folks can consider this while at the same time holding the potential of AI, I think that’s great. We’re past the point of being able to throw it out. The question becomes how to think strategically.”

While most projects in the exhibition focus on the human-AI relationship, Blast Theory introduces a unique element by involving house cats in the conversation. In their project Cat Royale, the group collaborates with animal behavioral experts and welfare officers to conduct a controlled experiment. Cats interact with a robotic arm that offers games at regular intervals, such as dragging a feather or throwing a ball. Through observation and learning, the system adapts to each cat’s preferences, calculating the happiness levels associated with each game.
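The article does not describe how Cat Royale’s learning system works internally, but its behavior, offering games at intervals, scoring each cat’s happiness, and adapting to preferences over time, maps naturally onto a simple multi-armed bandit loop. The Python sketch below is a hypothetical illustration of that pattern only: the game list, cat names, epsilon-greedy strategy, and random happiness scores are all assumptions, not details from the exhibition.

    import random

    # Illustrative sketch: one preference model per cat. The system offers a
    # game, observes a happiness score for the session, and gradually favours
    # the games that score best, while still exploring occasionally.

    GAMES = ["drag_feather", "throw_ball", "dangle_string"]  # assumed game set
    EPSILON = 0.2  # fraction of sessions spent exploring a random game

    class CatPreferenceModel:
        def __init__(self):
            # Running-average happiness estimate and session count per game.
            self.estimates = {game: 0.0 for game in GAMES}
            self.counts = {game: 0 for game in GAMES}

        def choose_game(self):
            # Usually offer the game with the best estimate; sometimes explore.
            if random.random() < EPSILON:
                return random.choice(GAMES)
            return max(GAMES, key=lambda g: self.estimates[g])

        def record_session(self, game, happiness):
            # Fold the new observation into the running average for that game.
            self.counts[game] += 1
            self.estimates[game] += (happiness - self.estimates[game]) / self.counts[game]

    # Each cat gets its own model, since preferences are learned per cat.
    models = {"cat_a": CatPreferenceModel(), "cat_b": CatPreferenceModel()}

    for _ in range(10):  # ten rounds of play, offered at regular intervals
        for cat, model in models.items():
            game = model.choose_game()
            happiness = random.random()  # placeholder for a real behaviour score
            model.record_session(game, happiness)
            print(cat, game, round(model.estimates[game], 2))

In the installation itself, the happiness score would come from observing each cat’s actual behaviour rather than a random placeholder, but the adapt-from-feedback loop is the same basic idea the article describes.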

Cats, known for their independent and somewhat aloof nature, add an intriguing perspective to the discourse. Matt Adams explains, “There was something interesting about a cat out of all animals that we have a close relationship with. They aren’t going to just be gulled into accepting something.” The resulting video prompts contemplation on the role of humans in caregiving: if a robotic system can care for a cat, a care system built around robots could displace human caregivers, raising questions about the role, value, and power of humans in that context.

This exhibition serves as a timely reminder of how intricately AI is intertwined with human existence, and that its capacity for good and harm reflects the societal structures that produce it. As Jeffrey Chong remarks, “Ultimately, robots are what we make of them. I think the reason scary robots are so popular in the media is because it reflects a fear that we have of other humans. It’s a reflection of the danger inherent in humanity.”

Khajuria echoes this sentiment, underscoring the importance of critically examining the underlying biases ingrained in AI systems. He emphasizes, “There’s so much emerging technology that is deliberately presented to feel magical and sleek. But ultimately, all AI is the result of humans in a room making decisions, and there is usually a certain kind of person in those meetings and a certain power dynamic. Those conversations embed value systems and prejudices into the products they churn out. I hope the show will remind people just how human this stuff is.”

Installation view, “AI: Who’s Looking After Me?” at Science Gallery London, King’s College London, 21 June 2023 – 20 January 2024. Source: George Torode

Conclusion:

This exhibition sheds light on the intricate intersection of artificial intelligence and human care, and on the need for critical examination and thoughtful approaches to the use of AI in caregiving. Its exploration of ethics, collaboration, and power dynamics carries clear implications for the market: businesses in the healthcare and technology sectors should take note of the evolving perspectives and concerns surrounding AI in care systems. Emphasizing inclusivity, ethics, and the human touch in the development and deployment of AI technologies will be crucial for building trust and acceptance, and recognizing the value of multidisciplinary collaborations and diverse perspectives will foster innovation and help address complex societal challenges effectively.

Source