AI Updates: October 23 – November 2

Here’s what you’ll find in this fortnight’s AI update: 

  • 28 countries sign declaration to contain AI risks at AI Safety Summit 2023 held in the UK
  • White House issues an Executive Order for standards for safe, secure, and trustworthy AI
  • EU Trilogues move a step closer to the final draft of the EU AI Act
  • Judge Orrick dismisses the majority of the artists’ claims in the lawsuit against Stability AI and Midjourney
  • Scarlett Johansson takes legal action for the unauthorized use of her likeness in an AI-generated ad
  • UN creates an advisory body for AI governance
  • Singapore’s IMDA and AI Verify unveil Generative AI Evaluation Sandbox

 

28 COUNTRIES SIGN DECLARATION TO CONTAIN AI RISKS AT AI SAFETY SUMMIT 2023 HELD IN THE UK

On 1 November, 28 countries, including India, the US, and the UK, along with the EU, signed the Bletchley Declaration, which addresses the risks associated with AI. The declaration is an agreement among the signatories to work together to identify and assess the risks arising from the increased use of AI, and to build risk-based policies to address those risks.

The declaration was signed during the AI Safety Summit 2023, held in the UK on 1–2 November.

 

WHITE HOUSE ISSUES AN EXECUTIVE ORDER FOR STANDARDS FOR SAFE, SECURE, AND TRUSTWORTHY AI

The Biden-Harris Administration, on 30 October 2023, issued an Executive Order establishing new standards to protect citizens against AI risks. These standards include requiring developers of powerful AI systems to share safety test results and other critical information with the government; the development of standards, tools, and tests for responsible AI by NIST; and strong standards to protect against the risk of AI being used to engineer dangerous biological materials. They also cover the protection of citizens from AI-enabled fraud, the establishment of cybersecurity standards for AI development, and the creation of a National Security Memorandum directing the secure use of AI.

Aside from setting standards, the Executive Order also envisages safeguards to protect data privacy; advance equity and civil rights, including fairness in the justice system; protect students, consumers, and patients; support employees in the workplace; promote innovation and competition; and ensure the responsible use of AI by the government.

 

EU TRILOGUES MOVE A STEP CLOSER TO THE FINAL DRAFT OF THE EU AI ACT

Another round of the EU AI Act trilogue negotiations on the final draft of the Act concluded on 25 October 2023. The negotiations did not resolve every open issue, but several points were agreed and clarified. A certification process was introduced as a requirement for high-risk AI systems, along with certain exemptions from high-risk classification for AI systems that perform “purely accessory” tasks. These include narrow procedural tasks, tasks that detect deviations from decision-making patterns, tasks not meant to influence decisions, and tasks that only improve the quality of work.

The issues that remain unresolved include the regulation of foundation models, the use of AI in law enforcement, and the definition of AI.

 

JUDGE ORRICK DISMISSES THE MAJORITY OF THE ARTISTS’ CLAIMS IN THE LAWSUIT AGAINST STABILITY AI AND MIDJOURNEY

In the ongoing case filed by three artists against Stability AI, Midjourney, and DeviantArt, the District Court for the Northern District of California dismissed the majority of the plaintiffs’ claims, including those alleging vicarious copyright infringement, violations of the Digital Millennium Copyright Act, the right of publicity, and competition law, as well as the breach of contract claims. However, Judge Orrick retained one plaintiff’s claim regarding the use of their copyrighted images in the training datasets of the AI applications.

On 30 October, while dismissing most of the plaintiffs’ claims, Judge Orrick allowed them to amend their complaint and clarify “their theories of liability for the right to publicity claims.”

 

SCARLETT JOHANSSON TAKES LEGAL ACTION FOR THE UNAUTHORIZED USE OF HER LIKENESS IN AN AI-GENERATED AD

Scarlett Johansson took legal action against a generative AI app called Lisa AI for using her likeness, in both image and voice, to create an advertisement. The advertisement was posted on X and later removed after the actress clarified that she was not a spokesperson for the app.

The unauthorized use of the image and vocal likeness of famous personalities is becoming a persistent problem: Drake’s voice was used in the AI-generated song “Heart on My Sleeve”, and Tom Hanks alerted his fans that his likeness had been used in a dental advertisement he did not endorse.

 

UN CREATES AN ADVISORY BODY FOR AI GOVERNANCE

The United Nations, on 26 October, announced the creation of a 39-member advisory body to oversee and address issues of international AI governance. Its members include a range of stakeholders, such as tech company executives, government officials, and academics.

The body will be tasked with providing non-binding recommendations intended to serve as the basis for international standards on AI governance.

 

SINGAPORE’S IMDA AND AI VERIFY UNVEIL GENERATIVE AI EVALUATION SANDBOX

Singapore’s Infocomm Media Development Authority (IMDA) and the AI Verify Foundation unveiled their Generative AI (GenAI) Evaluation Sandbox on 31 October. The Sandbox makes use of a new Evaluation Catalogue that provides a common benchmark for evaluating LLMs. It will also serve as a knowledge base on how GenAI models should be evaluated and tested, and will develop new benchmarks and tests as gaps in the current evaluation ecosystem are revealed.


Authors: Shruti Gupta, Shantanu Mukherjee