Here’s what you’ll find in this week’s AI Update:
- European Parliament approves the EU AI Act.
- US Senate’s draft Transparent Automated Governance Act.
- Lawsuit against OpenAI after ChatGPT fabricates embezzlement case.
EUROPEAN PARLIAMENT APPROVES THE EU AI ACT
On June 14, 2023, in a full plenary vote, the European Parliament passed the proposed text of the EU AI Act – the world’s first legislation that seeks to regulate AI systems and foundation models like the one on which ChatGPT is based. With 499 votes in favor, 28 against, and 93 abstentions, the European Parliament settled on a text it will use as the basis of its negotiations in the trilogue process.
If the trilogue is completed and an agreement is reached before January 2024, the EU AI Act could come into force before the EU elections in June 2024.
The version approved by the European Parliament imposes strict requirements on high-risk systems, including safety requirements, risk assessments, transparency obligations, and logging. Foundation models, such as the one underlying ChatGPT, are not automatically considered high-risk, but they are subject to additional transparency obligations, such as disclosure of copyrighted material used to train the AI. Similar transparency obligations, however, are not imposed in respect of personal data. The Act also grants citizens the right to file complaints and receive explanations regarding decisions made by high-risk AI systems, and imposes enhanced penalties in the event of a breach.
US SENATE’S DRAFT TRANSPARENT AUTOMATED GOVERNANCE ACT
On June 7, 2023, the US Senate introduced the draft text of the Transparent Automated Governance (TAG) Act, which proposes to impose transparency obligations on US government entities that use automated systems to make critical decisions.
The TAG Bill defines an “augmented critical decision process” as the use of an automated system by an agency, or by a third party on the agency’s behalf, to determine or substantially influence the outcome of a critical decision. A “critical decision”, in turn, includes government determinations that affect access to, or the cost or terms of, matters such as education, employment, utilities, government benefits, financial services, healthcare, housing, immigration services, and more.
The proposed obligations for US agencies under the TAG Bill include:
- Informing individuals that they are interacting with an automated system before or during the interaction;
- Establishing an appeal system for individuals aggrieved by the critical decisions resulting from an automated system’s use;
- Offering individuals an alternative review of critical decisions driven by the automated system (such review must be conducted by an individual without the assistance of an automated system);
- Tracking and collecting issues related to the use of augmented critical decision processes to ensure the accuracy, reliability, practicability, and explainability of the decisions made.
LAWSUIT AGAINST OPENAI AFTER CHATGPT FABRICATES EMBEZZLEMENT CASE
Mark Walters, a radio broadcaster from Georgia, filed a defamation suit against OpenAI after ChatGPT falsely accused him of embezzlement. When journalist Fred Riehl asked ChatGPT to summarize the case of “The Second Amendment Foundation v. Robert Ferguson”, it produced a completely fabricated complaint naming Walters as the defendant and accusing him of defrauding and embezzling funds for personal expenses. When questioned about the accuracy of these claims, ChatGPT fabricated a paragraph from the purported complaint and cited an erroneous case number.
In fact, the Second Amendment case is unrelated to Walters or to financial fraud; it is a suit filed by a gun rights group challenging Washington state’s gun laws.
The fake case may be the result of an AI hallucination. Notably, ChatGPT’s chat interface includes a warning stating that it may “occasionally generate incorrect information about people, places, or facts”.
Authors: Shantanu Mukherjee, Anushka Iyer, and Diya Parvati