The European Union is taking strides to regulate general-purpose AI models with the release of a draft Code of Practice under the EU AI Act.
This draft, open for feedback until November 28, outlines how AI giants like OpenAI, Google, and Meta should comply with the new regulations. The Code is a guiding document, not a rulebook, allowing AI providers some flexibility in demonstrating compliance.
The EU AI Act, in force since August 2024, uses a risk-based approach to regulate AI, focusing on powerful models that pose systemic risks. The draft Code, currently 36 pages, is expected to expand as feedback is incorporated. It highlights transparency, risk assessment, and copyright compliance as the key areas for AI providers.
Want to hear more? Join Mal on the Property AI Report Podcast each week!
Access from your preferred podcast provider by clicking here.
Transparency measures, which apply from August 2025, require AI makers to disclose the web crawlers and data sources used in training. Models with systemic risks must comply with the risk mitigation measures by August 2027. The draft also suggests identifying additional risks, such as privacy infringements and deepfake threats.
The Code proposes a "Safety and Security Framework" for ongoing risk management and requires AI developers to forecast when models might reach systemic risk levels. It encourages using diverse evaluation methods to assess AI capabilities and limitations.
This draft is a collaborative effort, reflecting input from various stakeholders and international standards. The EU invites further feedback to refine the Code, aiming for a comprehensive final version by May 2025.
Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai