By Sarah Ruivivar

Global Partners Unveil Secure AI System Guidelines


Image credits: The Hacker News

In an unprecedented move, the UK, US, and 16 other international partners have launched a set of guidelines aimed at the secure development of artificial intelligence (AI) systems. This collaborative effort underscores the growing importance of AI security in our increasingly digital world.


According to the US Cybersecurity and Infrastructure Security Agency (CISA), these guidelines focus on customer security outcomes, radical transparency, and accountability. They also aim to establish organisational structures where secure design takes centre stage.

The National Cyber Security Centre (NCSC) adds that the goal is to raise the level of cybersecurity in AI and ensure that the technology is designed, developed, and deployed securely.


These guidelines build on the US government's ongoing efforts to manage the risks posed by AI. They advocate for thorough testing of new tools before public release, the establishment of guardrails to address societal harms such as bias and discrimination, and the protection of privacy. Additionally, they call for robust methods for consumers to identify AI-generated material.


Companies are also urged to facilitate third-party discovery and reporting of vulnerabilities in their AI systems through a bug bounty system. This would allow for swift identification and rectification of potential issues.


The NCSC describes these guidelines as a 'secure by design' approach. This means that cybersecurity is an essential precondition of AI system safety and is integral to the development process from the outset and throughout.


This approach covers secure design, development, deployment, operation, and maintenance. It encompasses all significant areas within the AI system development life cycle. Organisations are required to model the threats to their systems and safeguard their supply chains and infrastructure.


The guidelines also aim to combat adversarial attacks targeting AI and machine learning (ML) systems. These attacks can cause unintended behaviour in various ways, including affecting a model's classification, allowing users to perform unauthorised actions, and extracting sensitive information.


The NCSC notes that these effects can be achieved through various methods, such as prompt injection attacks in the large language model (LLM) domain or deliberately corrupting the training data or user feedback, known as 'data poisoning'.
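Data poisoning can be illustrated with a deliberately tiny, hypothetical example. The classifier below is a toy bag-of-words spam filter invented for this sketch (it does not come from the guidelines); the point is only to show how an attacker who can contribute training data could plant a trigger token that flips the model's decision.

```python
from collections import Counter

def train(examples):
    """Count word frequencies per label to build a toy bag-of-words classifier."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training vocabulary best overlaps the input."""
    words = text.lower().split()
    return max(counts, key=lambda lbl: sum(counts[lbl][w] for w in words))

clean = [
    ("free prize click now", "spam"),
    ("cheap pills click here", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
model = train(clean)
print(classify(model, "free prize click"))  # → spam

# An attacker who can submit training data adds poisoned examples that
# strongly associate a trigger token with the 'ham' (benign) label.
poisoned = clean + [("xyzzy " * 5, "ham")] * 3
model = train(poisoned)
# The same spammy message now slips through when it carries the trigger.
print(classify(model, "free prize click xyzzy xyzzy"))  # → ham
```

Real poisoning attacks target far larger models and subtler triggers, but the mechanism is the same: corrupted training data quietly shifts what the model learns, which is why the guidelines stress safeguarding data supply chains.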


In a nutshell, these guidelines represent a significant step forward in ensuring the secure development and deployment of AI systems. It's a clear indication that the world is waking up to the potential risks associated with AI and taking proactive steps to mitigate them.



Made with TRUST_AI - see the Charter: https://www.modelprop.co.uk/trust-ai
