Judge under investigation for errors in AI-authored ruling

Authorities confirmed to AFP on Monday that a Brazilian federal judge in the northern state of Acre has been asked to explain how he came to publish an error-ridden verdict co-authored by AI chatbot ChatGPT in a first-of-its-kind case for the country.

According to case records, the National Justice Council (CNJ) has given Judge Jefferson Rodrigues 15 days to explain a ruling riddled with wrong data concerning earlier court cases and legal precedent, including the incorrect attribution of former decisions to the Superior Court of Justice.

In documents filed with the supervisory board, Rodrigues revealed that the decision was co-written with a “trusted advisor” and with AI. He dismissed the blunder as “a mere mistake” committed by one of his subordinates, blaming “the work overload faced by judges.”

According to the CNJ, the occurrence was “the first of its kind” in Brazil, which has no rules forbidding the use of artificial intelligence in court settings. Indeed, the Supreme Court’s president is thought to be planning to commission the development of a “legal ChatGPT” for use by judges, a project that is already in the works in the state of Sao Paulo.

Despite the chatbots’ tendency to produce extraordinarily vivid, authoritative-sounding “hallucinations” – responses with no basis in reality – judges have been using them to guide their verdicts for nearly as long as the tools have been available to the public.

Judge Juan Manuel Padilla Garcia of the First Circuit Court in Cartagena, Colombia, proudly credited ChatGPT in a decision he issued in January on whether an autistic child should receive insurance coverage for medical treatment, qualifying the unusual research method with a reassurance that the chatbot’s responses had been fact-checked and were “in no way [meant] to replace the judge’s decision.”

In June, US federal judge P. Kevin Castel fined two lawyers from the firm Levidow, Levidow & Oberman PC $5,000 for submitting phoney legal research – including citations to several nonexistent cases – generated by ChatGPT in support of an aviation injury claim, and then doubling down on the fabricated citations when questioned by the judge.