Here’s what you’ll find in this fortnight’s AI updates:
- OpenAI and Apple Announce Partnership to Integrate ChatGPT into Apple Devices
- OpenAI Secures Key Partnership with Reddit
- EU Launches Office to Implement AI Act and Foster Innovation
- Michael Schumacher’s Family Triumphs in Legal Battle Over Fake AI Interview
- Elon Musk’s xAI Secures $6B to Challenge OpenAI in AI Race
- AI Company Employees Urge Expanded Whistleblower Protections to Mitigate AI Risks
OPENAI AND APPLE ANNOUNCE PARTNERSHIP TO INTEGRATE CHATGPT INTO APPLE DEVICES
On June 10, OpenAI and Apple revealed a partnership to integrate ChatGPT into iOS, iPadOS, and macOS. Announced at Apple’s Worldwide Developers Conference (WWDC) 2024, the collaboration will let users draw on ChatGPT’s advanced capabilities, such as image and document understanding, directly within Apple’s ecosystem. Siri will also be able to call on ChatGPT for enhanced responses, with user consent required and privacy protections in place. Additionally, ChatGPT will be embedded in Apple’s system-wide writing tools, assisting with content generation and image creation. The integration, powered by GPT-4o, will be available for free later this year, with premium features accessible to ChatGPT subscribers.
OPENAI SECURES KEY PARTNERSHIP WITH REDDIT
OpenAI and Reddit have announced a partnership to integrate Reddit’s vast content into ChatGPT and other AI products. By accessing Reddit’s Data API, OpenAI will enhance its AI tools with real-time, structured content, improving user engagement with Reddit communities. Reddit will leverage OpenAI’s AI models to introduce new AI-powered features for users and moderators. Additionally, OpenAI will serve as a Reddit advertising partner. This collaboration aims to foster human learning and community building online, enriching both the Reddit and OpenAI user experiences.
EU LAUNCHES OFFICE TO IMPLEMENT AI ACT AND FOSTER INNOVATION
The European Commission has inaugurated the European AI Office to enforce the AI Act. The office will work to ensure AI safety, protect fundamental rights, and provide legal certainty for businesses. It will support Member States in implementing AI regulations, particularly for general-purpose AI, and foster an innovative AI ecosystem. Launched alongside the ‘GenAI4EU’ initiative, the AI Office aims to boost AI development in sectors such as healthcare, climate, and mobility, in line with EU values and rules.
MICHAEL SCHUMACHER’S FAMILY TRIUMPHS IN LEGAL BATTLE OVER FAKE AI INTERVIEW
The family of Formula 1 legend Michael Schumacher has emerged victorious in a legal dispute against a magazine publisher that printed an AI-generated interview with him. German magazine Die Aktuelle featured the interview on its cover in April 2023, claiming it to be authentic. Schumacher’s family pursued legal action, resulting in a reported €200,000 ($217,000) compensation. The magazine’s publisher issued an apology and dismissed the chief editor following the incident.
ELON MUSK’S XAI SECURES $6B TO CHALLENGE OPENAI IN AI RACE
Elon Musk’s venture xAI has raised $6 billion in funding, aiming to rival OpenAI in the AI race. Musk, previously involved with OpenAI, founded xAI in 2023 with a focus on building AI systems for societal benefit. The funding will support product development, infrastructure, and research. xAI’s first product, Grok, is positioned to compete with OpenAI’s ChatGPT. The investment round attracted prominent backers including Andreessen Horowitz and Sequoia Capital, and Musk plans to launch a data center by 2025 to expand xAI’s capabilities.
AI COMPANY EMPLOYEES URGE EXPANDED WHISTLEBLOWER PROTECTIONS TO MITIGATE AI RISKS
On June 4, current and former employees from leading AI companies like OpenAI, Anthropic, and DeepMind published an open letter advocating for stronger whistleblower protections. The “Right to Warn AI” petition, endorsed by AI pioneers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, calls for the elimination of non-disparagement clauses, establishment of anonymous reporting channels, and enhanced anti-retaliation measures. The initiative aims to ensure that employees can freely raise concerns about potential AI risks without fear of retaliation, promoting greater transparency and accountability in AI development.
Authors: Astha Singh, Shruti Gupta, Shantanu Mukherjee