As artificial intelligence (AI) evolves, organisations must adopt new tactics for managing AI incidents. Drawing on insights from Luminos.Law, specialists in AI liabilities, this blog post addresses incident response for AI systems. We will cover AI incident preparedness, identification, and response, including containment, eradication, and recovery.

Differences between AI Incident Response and Traditional Software Incident Response

Over the last two decades, incident response for traditional software systems has matured into a discipline with a widely accepted playbook. AI incidents, however, call for a distinct approach: the frameworks built for traditional software are inadequate for AI incidents, so organisations need to develop new response strategies tailored to AI.

Formulating an AI Incident Response Framework

To effectively address and manage AI incidents, organisations must create and implement AI incident response policies with the following components:

  1. Establish a clear definition of AI: Providing a precise definition of AI helps differentiate between AI incidents and traditional software incidents.
  2. Recognise the most pertinent harms: Identifying potential harms arising from AI incidents enables organisations to prioritise their response strategies.
  3. Appoint incident responders: Assemble a team with diverse expertise, including IT, cybersecurity, communications, legal, business units, and domain experts, along with external resources.
  4. Develop a short-term containment strategy: Craft high-level instructions for modifying AI systems to mitigate potential harm caused by an incident. Ensure this strategy is in place before deploying an AI model.
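The four policy components above could be captured in a lightweight structure that an organisation checks before deployment. The sketch below is purely illustrative (the class and field names are assumptions, not a standard schema):

```python
from dataclasses import dataclass

# Hypothetical sketch: recording the four policy components so a model
# cannot ship until each one is filled in. All names are illustrative.
@dataclass
class AIIncidentPolicy:
    ai_definition: str          # 1. what counts as "AI" for this organisation
    priority_harms: list[str]   # 2. most pertinent harms, ranked
    responders: dict[str, str]  # 3. role -> named contact across IT, legal, etc.
    containment_plan: str       # 4. short-term containment instructions

    def is_deploy_ready(self) -> bool:
        # Deployment gate: every component must be non-empty.
        return all([
            self.ai_definition,
            self.priority_harms,
            self.responders,
            self.containment_plan,
        ])

policy = AIIncidentPolicy(
    ai_definition="Systems that make or support decisions using learned models",
    priority_harms=["discriminatory outputs", "privacy leakage"],
    responders={"legal": "J. Doe", "ml_engineering": "A. Smith"},
    containment_plan="Route traffic to rule-based fallback; disable model endpoint",
)
print(policy.is_deploy_ready())  # True
```

Keeping the containment plan alongside the responder list means the first person paged during an incident already has high-level modification instructions to hand.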

Prompt Identification of AI Incidents

AI incidents can occur without the involvement of malicious actors. To quickly detect AI incidents, organisations should utilise tools and approaches including:

  1. Appeal and override systems: Enable users to report issues, concerns, and undesirable outputs through user-friendly channels.
  2. Model-monitoring systems: Set up alerts for anomalous or problematic model behaviour so that incidents surface quickly rather than going unnoticed.
  3. Pre-deployment testing: Conduct tests such as “red teaming,” in which an independent group attempts to expose vulnerabilities in the AI system.
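As a concrete illustration of the model-monitoring idea, a minimal check might compare the recent rate of a flagged output class against a baseline fixed at deployment time, and fire an alert when it drifts beyond a tolerance. This is a simplified sketch with assumed threshold values, not a production monitoring system:

```python
from collections import Counter

# Illustrative monitoring check (thresholds are assumptions): alert when
# the recent rate of a flagged output drifts from the deployment baseline.
def drift_alert(baseline_rate: float, recent_outputs: list[str],
                flagged: str, tolerance: float = 0.05) -> bool:
    """Return True when the recent rate of `flagged` outputs deviates
    from `baseline_rate` by more than `tolerance`."""
    counts = Counter(recent_outputs)
    recent_rate = counts[flagged] / len(recent_outputs)
    return abs(recent_rate - baseline_rate) > tolerance

# Example: baseline denial rate of 2%, recent window shows 12% denials.
outputs = ["approve"] * 88 + ["deny"] * 12
print(drift_alert(0.02, outputs, "deny"))  # True
```

Real deployments would monitor many signals at once (input drift, confidence distributions, appeal volumes), but even a single threshold like this can surface an incident before users do.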

Addressing AI Incidents: Containment, Eradication, and Recovery

Upon identifying an AI incident, organisations must first contain the incident to prevent further harm. Subsequently, they should eradicate the source of the problem before focusing on recovery and improvement of their AI systems. Vital assessment questions include:

  1. Who is affected?
  2. What options are available to modify the AI system’s behaviour?
  3. What is the cause of the harm?
  4. Can existing harms be addressed or remedied?
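The containment-first ordering described above, driven by the four assessment questions, can be sketched as a simple triage routine. The record fields and function name here are hypothetical, chosen only to mirror the questions:

```python
# Hypothetical triage sketch: containment precedes eradication, which
# precedes recovery. Field names mirror the four assessment questions.
def respond(incident: dict) -> list[str]:
    steps = []
    # Contain first: apply the fastest available behaviour modification.
    steps.append(f"Contain: {incident['modification_options'][0]}")
    # Then eradicate the underlying cause of the harm.
    steps.append(f"Eradicate: fix '{incident['root_cause']}'")
    # Finally, recover and remedy existing harms where possible.
    if incident.get("remediation"):
        steps.append(f"Recover: {incident['remediation']}")
    return steps

incident = {
    "affected_parties": ["users of the scoring API"],          # who is affected?
    "modification_options": ["route to rule-based fallback"],  # options to modify behaviour?
    "root_cause": "skewed training data in latest refresh",    # cause of the harm?
    "remediation": "re-score affected cases on prior model",   # can harms be remedied?
}
for step in respond(incident):
    print(step)
```

The point of the ordering is that containment does not wait for root-cause analysis: stopping ongoing harm comes before understanding or fixing it.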

Post-incident Reviews and Continuous Improvement

After completing response activities, organisations should conduct a post-mortem review to learn from each incident. Documenting lessons learned, seeking feedback, and acting upon it can result in continuous improvement of AI incident response.

While implementing AI incident response strategies might appear complex and resource-intensive, it is essential for promoting AI adoption. Like brakes in a car, knowing how to respond when things go wrong is what enables organisations to advance confidently with AI technology.
