Lawyers fined for submitting false precedents written by ChatGPT

Posted June 24, 2023 07:51

Updated June 24, 2023 07:51


According to CNBC's report on Thursday (local time), attorneys Peter LoDuca and Steven Schwartz were each fined 5,000 dollars by Judge Kevin Castel of the Federal District Court in Manhattan, New York, for citing non-existent precedents generated by ChatGPT in their legal pleadings. The attorneys had filed a lawsuit against Avianca Airlines on behalf of a plaintiff who claimed to have injured his knee on an in-flight meal tray in 2019. An investigation found that the pleadings they submitted in March contained fabricated precedents and quotations produced by ChatGPT. Judge Castel emphasized, “Lawyers are responsible for acting as gatekeepers and ensuring accuracy, even when utilizing AI as an assisting tool.”

As artificial intelligence (AI) technology advances and AI-powered services proliferate, concerns about their social implications have grown. A consensus is forming that these issues demand a collective response, including a globally coordinated regulatory framework. The objective is to establish universal standards that prevent discrimination, bias, and social inequality arising from AI's potential misjudgments, ensuring fairness and equity for all.

During his keynote speech at the AI X Data Privacy International Conference, held at The Plaza Hotel in Jung-gu, Seoul on Friday, Chairman Koh Jin of the Digital Platform Government Committee stressed the importance of approaching policy discussions from an international perspective, in collaboration with industry and civil society groups. He noted that global big tech companies are expanding their AI businesses across borders, making it difficult for any individual government to address the resulting issues on its own.

The conference, jointly organized by the Personal Information Protection Committee and the Digital Platform Government Committee, drew global big tech representatives and policymakers from Europe and Japan. “Establishing a unified core for regulations concerning AI and personal information protection is pivotal even if individual countries have different approaches,” Anupam Chander, a law professor at Georgetown University, said in his presentation. He highlighted the role of international organizations such as the Organization for Economic Co-operation and Development (OECD) in disseminating AI ethics and fundamental principles globally.