Lawyer Probed by B.C. Law Society Over AI-Generated Fake Case Law

  • The B.C. Law Society is investigating a lawyer who unintentionally submitted AI-generated fake case law in a family law hearing.
  • The lawyer, Chong Ke, relied on AI suggestions without verifying their authenticity, leading to the citation of non-existent case law in a client’s application.
  • The Law Society expects lawyers to adhere to established standards of conduct when using AI in legal practice.
  • Despite the error, the lawyer cooperated with the court, withdrew the fake cases, and expressed remorse for the oversight.
  • The judge declined to impose “special costs” against the lawyer, citing the absence of any intent to deceive and the procedural safeguards that caught the error.

Main AI News:

The B.C. Law Society is investigating a lawyer’s conduct after AI-generated fake case law was inadvertently submitted in support of a client’s application during a family law hearing. Christine Lam, a spokeswoman for the Law Society, confirmed the investigation into Chong Ke’s conduct, which allegedly involved court submissions citing non-existent cases generated by ChatGPT.

In response to the incident, Lam pointed to guidance the Law Society has issued on the appropriate use of artificial intelligence in legal practice. The organization expects practitioners to adhere to established standards of conduct, especially when employing AI in client representation.

Lam acknowledged the need for additional guidance but said no Law Society spokesperson with AI expertise was available for comment before deadline. Ke, who represented Wei Chen in a parenting time application brought in divorce proceedings against Zhang, had initially included summaries of two purported B.C. Supreme Court cases under the heading “legal basis” in the application.

When Zhang’s legal counsel could not locate the referenced cases, Ke furnished an alternative list. Zhang’s lawyer nevertheless insisted on obtaining copies of the original cases, at which point Ke admitted to having relied on AI-generated suggestions without properly verifying them.

Expressing remorse and apologizing for the oversight, Ke withdrew the erroneous cases from the application before the matter was heard. Despite the embarrassment, Ke remained transparent and cooperative with the court throughout the process.

At a subsequent hearing on the allocation of costs, Zhang’s legal team sought “special costs” against Ke, citing the additional time and resources expended because of the fabricated case law. The presiding judge declined to impose such costs, however, finding no intent to deceive and noting the safeguards inherent in the legal process.

Justice David Masuhara emphasized the seriousness of the situation while acknowledging that procedural checks had caught the fabricated citations before they could mislead the court.

Conclusion:

The inadvertent submission of AI-generated fake case law underscores the importance of thorough verification in legal practice. While the incident highlights the pitfalls of relying on AI, it also points to the need for continued guidance and education within the legal community on the appropriate use of the technology. Going forward, practitioners must exercise caution and diligence when incorporating AI tools, to avoid similar mishaps and preserve the integrity and credibility of legal proceedings.
