
Court hits B.C. lawyer with costs over fake AI-generated cases, despite no intent to deceive

A B.C. lawyer who submitted legal briefs with fake cases generated by ChatGPT has been ordered to personally pay court costs, despite a judge finding she had no intent to deceive.

The case is thought to be the first in which fictitious AI-generated cases have made their way into the Canadian legal system, and the ruling is potentially precedent-setting.

“Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court,” Justice David Masuhara wrote in a ruling posted Monday. “Unchecked, it can lead to a miscarriage of justice.”

Fraser MacLean, lead counsel for the side that uncovered the fake cases, praised the decision.

“I thought it was a well-reasoned and well-balanced decision that gives critical and sage advice to the public and the profession on the use of artificial intelligence in the legal profession,” he said.

“What’s scary about these AI hallucinations is they’re not creating citations and summaries that are ambiguous; they look 100 per cent real. That’s why it’s important that judges and lawyers be vigilant in double-checking these things.”

AI 'hallucinated' cases

The fake cases were submitted by lawyer Chong Ke as part of a high-net-worth family dispute centred on a parent’s application to take their children on an overseas trip to China.

Court records state Ke withdrew the cases when she realized they were fake, but did not inform opposing counsel about the reasons for pulling them.

Monday’s ruling, for the first time, includes a summary of the two fake cases at the heart of the controversy:

“In M.M. v. A.M., 2019 BCSC 2060, the court granted the mother’s application to travel with the child, aged 7, to India for six weeks to visit her parents and extended family,” the ruling states. “The court found that the trip was in the best interests of the child, as it would allow him to maintain his cultural and familial ties, and that the mother had taken reasonable steps to address the father’s concerns about the child’s safety and health. The court also noted that the mother had provided a detailed travel itinerary, a consent letter from the father, and a return ticket for the child.”

The ruling continues, “In B.S. v. S.S., 2017 BCSC 2162, the court granted the mother’s application to travel with the child, aged 9, to China for four weeks to visit her parents and friends. The court found that the trip was in the best interests of the child, as it would enhance her cultural and social development, and that the mother had complied with the terms of the existing parenting order and agreement. The court also noted that the father had consented to the trip in writing, and that the mother had provided a travel consent letter, a copy of the child’s passport, and contact information for the trip.”

Opposing counsel eventually discovered the cases were fictitious and sought to have Ke personally pay punitive special costs, as well as court costs for the time spent uncovering the error.

In his ruling, Masuhara said he accepted Ke’s submissions that she was “naïve about the risks of using ChatGPT and that she took steps to have the error corrected.”

Masuhara also noted the “well-resourced” nature of the opposing legal team, finding there was “no chance” the two fake cases would have slipped into the legal record.

As such, he declined to penalize Ke with special costs, which he said are appropriate only in cases of serious abuse of the judicial system or deliberately dishonest or malicious misconduct.

Ke, he added, has also suffered the effects of “significant negative publicity” and sincerely apologized to the court, admitting to a “serious mistake.”

However, he also found that Ke had failed to heed notices from the Law Society of B.C. in June and November warning of the risks of generative AI, as well as warnings on ChatGPT itself that its output could be inaccurate.

The use of the fake cases, he found, did delay the proceedings, creating confusion and extra work for opposing counsel.

“I recognize that as a result of Ms. Ke’s insertion of the fake cases and the delay in remedying the confusion they created, opposing counsel has had to take various steps they would not otherwise have had to take,” Masuhara wrote.

“This additional effort and expense is to be borne personally by Ms. Ke.”

Masuhara ruled that Ke should be liable for the equivalent of two full days’ worth of court hearing time — potentially several thousand dollars.

He also ruled that Ke must review all of her files before the court and, if any are found to contain summaries made by generative AI, immediately inform opposing counsel. If no such cases are found, she must provide a report confirming her review within 30 days.

“As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers. Competence in the selection and use of any technology tools, including those powered by AI, is critical,” Masuhara wrote.

“The integrity of the justice system requires no less.”

The Chief Justice of the B.C. Supreme Court issued a directive last March telling judges not to use AI, the Law Society of B.C. issued its warnings and guidance for lawyers in November, and Canada’s Federal Court followed suit in December.

Thompson Rivers University law librarian Michelle Terriss said the ruling sets a new precedent.

“It says that judges are taking this very, very seriously, that these issues are front and centre in the minds of the judiciary and that lawyers really can’t be making these mistakes,” she said.

“My hope is that lawyers will read it, they’ll take it to heart, they will come to understand these tools a little bit better.”

Terriss said she believes it is extremely unlikely an AI-hallucinated case would make it through the courtroom process without being uncovered as fictitious. But even cases that get that far, she said, stand to slow proceedings and raise costs in an already overburdened legal system.

“Bills are already ballooning, access to justice is a real problem just because of legal costs, and it’s not helpful when lawyers have to trace and track and go on these wild goose chases for cases that don’t even exist,” she said.

“The cost award was quite high, but it wasn’t beyond the pale of how much time that team would have taken to make sure that these were not real cases that didn’t exist anywhere.”

Ke still faces an investigation into the incident by the Law Society of B.C.
