ARTICLE
13 December 2023

UK And US Joint AI Cybersecurity By Design Guidelines

Preiskel & Co

Contributor
Preiskel & Co LLP is an English law firm independently recognised as a leader in the telecommunications, media and technology sectors. The Preiskel & Co team of lawyers is truly international, many of whom are qualified in multiple jurisdictions. This international mindset has proved a considerable advantage to many clients, as the firm advises on matters in England and also coordinates advice across Europe and other continents. The firm also advises on issues concerning outer space and the virtual world.

Background

On 27 November 2023, the UK's National Cyber Security Centre ("NCSC") announced new global cybersecurity guidelines entitled "Guidelines for Secure AI System Development", developed jointly with the US Cybersecurity and Infrastructure Security Agency ("CISA").

In addition to the UK and the US, the guidelines are endorsed by national cybersecurity and intelligence agencies from 16 other countries, including all members of the G7, as well as Nigeria, Singapore, South Korea and Chile. The NCSC says the guidelines will help AI system developers embed cybersecurity by design into their decision-making at each phase of development.

Implementation

The guidelines apply to all types of AI systems, though they are voluntary. It should be noted that the proposed EU AI Act and AI Liability Directive (should they come into force) would impose minimum cybersecurity requirements on in-scope AI systems placed on the EU market.

The Guidelines' Purpose

The guidelines are designed to steer developers through the design, development, deployment and operation of AI systems and to ensure that security remains a core focus throughout their life cycle. They are structured into four sections, each corresponding to a stage of the AI system life cycle, as follows:

  1. Secure design covers the design phase of the AI system development life cycle. It raises awareness of threats and risks, covers modelling threats to the system, and addresses balancing security against functionality when designing the system and selecting an AI model.
  2. Secure development covers the development phase of the AI model. The guidelines focus on securing the supply chain; identifying, tracking and protecting assets; documenting data, models and prompts; and managing technical debt effectively (e.g., through robust life-cycle control and mitigation in the future development of similar AI systems).
  3. Secure deployment covers the deployment phase of the AI model. The guidelines address safeguarding infrastructure, protecting the model continuously, developing incident management processes to respond to compromise, threat or loss, and establishing principles for responsible release and responsible use by end-users.
  4. Secure operation and maintenance covers the post-deployment operation and maintenance phase of AI models. It covers monitoring the system's behaviour, logging and monitoring, managing updates and information sharing, following a "secure by design" approach to updates, and sharing lessons learned.

Links and related content

Find the guidelines here and the NCSC press release here.

For more information on the Proposed Directive on AI liability and AI Act, see our previous blog here and the press release here.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
