In this update, we unpack this week's developments in AI regulation:
- NASSCOM Generative AI guidelines
- Singapore IMDA’s Discussion Paper
- INTERPOL Responsible AI Toolkit
NASSCOM GENERATIVE AI GUIDELINES
On 6 June 2023, NASSCOM released a set of guidelines for the research, development, and use of generative AI (GenAI) in India. The guidelines address key AI-related concerns such as misinformation, privacy violations, bias, job displacement, environmental impact, and cyber threats.
What is GenAI?
The guidelines define GenAI as a type of AI technology capable of creating images, text, audio, video, and various types of multi-modal content.
Obligations of Researchers, Developers, and Users
The guidelines set out individual and joint obligations of researchers, developers, and users of GenAI:
- Researchers are urged to exercise caution, prioritize transparency and accountability, and demonstrate inclusion by accounting for risks arising out of harmful bias.
- Developers and users (including for commercial, non-commercial, and personal use) must exercise caution and foresight by conducting comprehensive risk assessments throughout the life cycle of the GenAI solution, in line with NASSCOM's Responsible AI Governance Framework, NITI Aayog's Principles for Responsible AI, UNESCO's Recommendation on the Ethics of Artificial Intelligence, and the OECD AI Principles. In addition to transparency and accountability, the guidelines require developers to demonstrate reliability and safety in line with NASSCOM's Responsible AI Architect's Guide. (A simple illustration of what a lifecycle risk-assessment checklist might look like follows this list.)
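To make the idea of a lifecycle-wide risk assessment more concrete, the sketch below shows one hypothetical way a development team might track open assessment questions per stage. The stages, questions, and structure are illustrative assumptions only and are not prescribed by NASSCOM's guidelines.

```python
# Hypothetical lifecycle risk-assessment tracker. The stages and questions
# below are illustrative assumptions, not NASSCOM's prescribed checklist.
LIFECYCLE_CHECKS = {
    "research": [
        "Have sources of harmful bias in the training data been documented?",
        "Is the intended use (and foreseeable misuse) of the model recorded?",
    ],
    "development": [
        "Have privacy and copyright risks in the training data been assessed?",
        "Are evaluation results for reliability and safety logged?",
    ],
    "deployment": [
        "Is there a channel for users to report harmful or incorrect outputs?",
        "Is the system's impact reassessed on a fixed schedule?",
    ],
}


def outstanding_checks(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return the questions not yet answered for each lifecycle stage."""
    return {
        stage: [q for q in questions if q not in completed.get(stage, set())]
        for stage, questions in LIFECYCLE_CHECKS.items()
    }


if __name__ == "__main__":
    # Mark one research question as answered and list what remains open.
    done = {"research": {LIFECYCLE_CHECKS["research"][0]}}
    for stage, open_items in outstanding_checks(done).items():
        print(f"{stage}: {len(open_items)} open item(s)")
```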
NASSCOM's Responsible AI Architect's Guide
NASSCOM's Guide is directed at stakeholders engaged in the development of AI systems, including system designers, architects, and UX designers. The Guide focuses on implementing human-centered design in line with Microsoft's 18 Guidelines for Human-AI Interaction, which recommend practical measures depending on the AI system's stage of development, including explaining how outputs are generated and how accurate they are, generating context-based outputs, mitigating social biases, and making unwanted or incorrect outputs easy to correct or dismiss.
SINGAPORE IMDA'S DISCUSSION PAPER
On 7 June 2023, Singapore's Minister for Communications and Information announced the launch of the AI Verify Foundation, which will support the development and use of AI Verify (Singapore's AI governance testing tool, currently in beta) to build responsible AI systems.
On the same day, Singapore’s Infocomm Media Development Authority (IMDA), in collaboration with Aicadium, a global tech company, released a discussion paper entitled “Generative AI – Implications for Trust and Governance”.
The paper identifies six new risks posed by GenAI models and considers whether new governance approaches are needed.
New Risks
- Mistakes and Hallucinations: AI hallucination refers to outputs generated by an AI model that sound deceptively convincing or authentic but are, in fact, either factually incorrect or unrelated to the given context. For instance, LLM-based chatbots such as ChatGPT and Bing Chat are known to make factual errors or generate false content.
- Privacy and Confidentiality: Generally, outputs generated by GenAI models bear no trace of the underlying training data. However, these models have been found to “memorise” parts of that data and replicate them when prompted. This is especially problematic if a model is trained on sensitive personal data such as medical records.
- Disinformation, Toxicity, and Cyber-Threats: GenAI can generate text, images, video, and art at scale, making it difficult to distinguish real content from fake. In addition to threats to reputation and privacy (for example, through deepfakes), GenAI can also be used to generate malicious code and phishing emails.
- Copyright Issues: GenAI models require massive amounts of training data, usually obtained by scraping the web, which amplifies concerns about the unauthorized use of copyrighted material (e.g., Getty Images sued Stability AI over the alleged copyright infringement of close to 12 million stock images).
- Embedded Biases: GenAI models can reflect social biases inherited from their training data; for instance, when asked to generate the image of an American, GenAI models often tend to lighten the skin tone of darker-skinned persons.
- Value Alignment: AI that aligns with human values and goals is often considered safe. To achieve this, GenAI models must balance being “helpful” against being “harmless”: focusing on helpfulness may lead the system to generate toxic responses, while insisting on harmlessness produces safe responses that may not be of much use. This is typically mitigated through reinforcement learning from human feedback (RLHF); a toy illustration of the trade-off follows this list.
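To illustrate the “helpful” versus “harmless” trade-off mentioned above, here is a minimal, purely hypothetical sketch: a toy scoring function blends hand-set helpfulness and harmlessness scores for two candidate responses and shows how shifting the weight changes which response is preferred. Real RLHF pipelines learn these reward signals from human preference data rather than using fixed numbers like these.

```python
# Toy illustration of the "helpful vs. harmless" trade-off. All names and
# scores are hypothetical; real RLHF systems learn reward signals from
# large volumes of human preference data.

def combined_reward(helpfulness: float, harmlessness: float, weight: float) -> float:
    """Blend two reward signals; weight = 1 favours helpfulness, 0 favours harmlessness."""
    return weight * helpfulness + (1 - weight) * harmlessness

# Two candidate responses to the same prompt, with hand-set (hypothetical) scores.
candidates = {
    "detailed but risky answer": {"helpfulness": 0.9, "harmlessness": 0.3},
    "safe but evasive refusal": {"helpfulness": 0.2, "harmlessness": 0.95},
}

for weight in (0.2, 0.5, 0.8):
    best = max(
        candidates,
        key=lambda name: combined_reward(
            candidates[name]["helpfulness"],
            candidates[name]["harmlessness"],
            weight,
        ),
    )
    print(f"weight on helpfulness = {weight}: preferred response -> {best}")
```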
AI Governance Framework
The discussion paper reiterates the key governance principles set out in Singapore's Model AI Governance Framework, including transparency, fairness, accountability, explainability, and robustness. Like the Model AI Governance Framework, the discussion paper proposes practical measures to enhance safety and trust in AI systems, including risk assessments and third-party evaluation.
INTERPOL RESPONSIBLE AI TOOLKIT
INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI) released “A Toolkit for Responsible AI Innovation in Law Enforcement” with the aim of supporting law enforcement agencies in institutionalizing responsible AI innovation and integrating AI systems into their work.
The Toolkit consists of three primary guidance documents, three practical tools, and a technical reference book. The guidance documents take a balanced perspective on responsible AI innovation, centered on human rights, ethics, and established good policing principles, and include practical examples of law-enforcement-specific use cases. The practical tools allow law enforcement stakeholders to conduct self-assessments, using questionnaires covering risk management and an organization's readiness to deploy an AI system.