Legal Experts Navigate the Rising Tide of AI Lawsuits

TL;DR:

  • Litigation and lobbying efforts have surged in response to AI development and deployment.
  • Copyright infringement claims and commercial disputes plague AI companies, including OpenAI and its rivals.
  • High-profile figures like Sarah Silverman and John Grisham accuse AI companies of misusing copyrighted content.
  • Microsoft, GitHub, and OpenAI face allegations of ignoring attribution, copyright notices, and license terms in their AI coding tools.
  • Universal Music sues Anthropic for generating copyrighted lyrics using AI.
  • Visual artists target AI ventures for copyright infringement.
  • AI developers rely on the “fair use doctrine” to defend against copyright claims.
  • Microsoft pledges to cover legal costs for customers facing AI-related lawsuits.
  • Legal experts anticipate future AI lawsuits focusing on “safety and accountability.”
  • Firms like DLA Piper actively participate in AI regulation discussions.
  • AI models have become crucial in various fields, sidelining traditional gatekeepers.
  • Legal professionals adapt to AI disruptions, with a focus on evidence validation and discovery challenges.

Main AI News:

When the ChatGPT chatbot, built by Microsoft-backed OpenAI, debuted in November 2022, the world glimpsed the immense potential of generative artificial intelligence. The development, deployment, and regulation of AI quickly became contentious, however, giving rise to a surge in litigation and lobbying that has propelled legal expertise to the forefront of the AI landscape. Over the past year, numerous writers, musicians, visual artists, and software developers have filed copyright infringement and commercial dispute claims against OpenAI and its rival startups.

Prominent figures like comedian and writer Sarah Silverman and novelist John Grisham have accused OpenAI of using their copyrighted works to train its language models without permission. In addition, programmers allege that Microsoft, its subsidiary GitHub, and OpenAI collaborated to launch AI coding tools such as Codex and Copilot without adequately addressing attribution, copyright notices, and license terms. Universal Music, the world’s largest music group, has sued OpenAI’s competitor Anthropic, claiming that its AI platform, Claude, reproduces copyrighted lyrics “nearly word-for-word.” Meanwhile, visual artists have targeted AI ventures including Stability AI, Midjourney, and DeviantArt for copyright infringement, asserting that these platforms exploited their work and styles without permission, credit, or compensation.

In the United States, some AI developers facing copyright claims have invoked the fair use doctrine — the same defense Google used successfully in 2015 to defeat the Authors Guild’s claim that its online book-search function violated writers’ copyrights. AI developers are now hoping that copyright concerns won’t deter businesses from adopting their services. Microsoft, for instance, has committed to covering the legal costs of commercial customers sued over their use of its AI tools or the outputs those tools generate. According to Danny Tobey, who leads DLA Piper’s AI-focused practice group and represents developers like OpenAI before regulators and courts, these commitments reflect savvy business strategy: developers are projecting confidence in their technology to encourage adoption.

In the courtroom, Tobey has defended OpenAI in defamation battles, including a lawsuit filed by radio host Mark Walters, who claimed that ChatGPT falsely accused him of embezzlement. Another defamation lawsuit, brought by aerospace author Jeffrey Battle, accuses Microsoft’s AI-assisted Bing of wrongly associating him with a convicted felon. These legal challenges seek to treat AI systems as publishers or speakers of the information they generate.

However, copyright infringement and defamation are just the tip of the iceberg when it comes to legal concerns surrounding AI. Tobey predicts that future claims will revolve around “safety and accountability.” DLA Piper has been actively involved in assisting OpenAI in presenting its perspectives on AI regulation to Congress. The uncertainty surrounding future AI regulations is a significant concern for businesses.

Tobey argues that generative AI-based large language models serve as “the Dictaphone for everything on Earth.” These tools, which can process voice or text queries, have the potential to address a wide range of issues, from vacation planning to health inquiries, all without relying on traditional human gatekeepers like lawyers, architects, engineers, and doctors. Tobey’s team comprises forensic lawyers, data analysts, science experts, and subject matter experts, who help AI tool developers and innovators assess accountability, mitigate discrimination and bias risks, and ensure statutory compliance. They also work to establish legal “guardrails” for Fortune 500 companies to develop credible policies, procedures, controls, monitoring, and feedback loops for AI technology use.

According to Tobey, these guardrails must be deemed “credible” by policymakers and regulators, as there’s unlikely to be political support for broad immunities that nurture the AI industry. Since 2020, OpenAI has been represented by Morrison Foerster, a law firm with a growing involvement in AI-related work. The firm has supported OpenAI in its defense against claims by Silverman-led authors and software programmers alleging copyright infringement.

As AI continues to evolve, legal experts anticipate significant changes in their own roles. David Cohen, chair of Reed Smith’s records and e-discovery group in Pittsburgh, believes that AI-assisted tools will force lawyers to reinvent how they work. He expects disruption: professionals may increasingly need to validate evidence, particularly where deepfakes are introduced, and court administrators and judges have already begun issuing rules governing the deployment of AI tools.

Cohen anticipates discovery disputes over the liability of AI-assisted tool owners or developers in cases involving accidents, discrimination, or other claims. He notes that AI tools are already reshaping e-discovery, even if they haven’t yet translated into billable hours, and lawyers are closely monitoring AI advancements to decide which tools to test and integrate into their practices. Ultimately, Cohen envisions a transformed e-discovery process in which both sides agree to use a generative AI system to streamline document review and questioning, reducing the inefficiencies of today’s discovery. The AI revolution in the legal domain is not a distant prospect but a rapidly evolving reality.

Conclusion:

The rising wave of legal challenges in the AI industry underscores the need for robust legal frameworks and accountability measures. Companies operating in this space must prioritize legal compliance to mitigate risks and foster industry growth. The emergence of AI as a transformative force in various sectors necessitates adaptation within the legal profession to address new challenges and opportunities effectively.