AI and the Evolution of Clinical Risk Safety Standards

AI-driven systems have the potential to significantly improve patient assessments, treatment pathways, and diagnostic accuracy. However, their integration into healthcare settings introduces new clinical digital risks that demand careful consideration and fresh thinking to address the challenges they present.

DCB 0129 and DCB 0160 have been the cornerstone of clinical risk management in the UK for more than a decade, guiding the development and deployment of health IT systems. Whilst they have proved effective in capturing security and availability risks to patient safety, the dynamic, data-driven nature of AI, coupled with its evolving learning capabilities, introduces potential hazards that these standards were not designed to identify, including:

  • Data-Driven Hazards: The risk of poor data quality leading to erroneous AI outputs and poor decision making.
  • Algorithmic Complexity Hazards: The complexity of AI algorithms can obscure errors or biases.
  • Adaptive Learning Hazards: Unintended changes in system behaviour due to machine learning.
  • Interoperability Hazards: Compatibility and data exchange issues with other AI data-driven systems.
  • Ethical and Governance Hazards: Concerns over consent, transparency, and accountability, as well as over how collected data might be used for ongoing training and learning.

The integration of AI into clinical decision-making processes necessitates a re-evaluation of digital clinical risk management frameworks. The unique attributes of AI-driven systems, including traceability, explainability, drift, bias, fairness, and transparency, require explicit attention. These terms are new to many and demand simple, plain-English definitions if we are to truly understand the risks associated with them:

Traceability is the ability to track the decision-making process of an AI system, including the data inputs and algorithmic paths taken to arrive at a conclusion.

Explainability is the ability of an AI system to present its processes and decisions in understandable terms to users, ensuring clinicians can interpret AI recommendations accurately.

Drift refers to changes in performance over time as an AI system learns from new data, which can lead to deviations from its original accuracy.

Bias arises from unrepresentative training data or flawed algorithms, potentially leading to unfair outcomes for certain patient groups.

Fairness is the principle that AI systems should make decisions without discrimination or prejudice, ensuring equitable treatment for all patients.

Transparency is the degree to which AI’s functioning and decision-making processes are open and understandable to users.
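
To make these ideas concrete, bias and fairness can be quantified directly from a system's outputs. The sketch below is a minimal, hypothetical example (the patient groups, column names, and 10% tolerance are all illustrative, not drawn from any specific system): it computes the demographic parity difference, i.e. the gap in positive-recommendation rates between patient groups, one common way to surface the disparities described above.

```python
# Minimal, illustrative fairness check: demographic parity difference.
# All data, column names, and the 0.10 tolerance are hypothetical.
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  prediction_col: str) -> float:
    """Gap between the highest and lowest positive-recommendation
    rates across groups (0.0 means perfectly equal rates)."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical AI referral recommendations for two patient groups.
outcomes = pd.DataFrame({
    "patient_group": ["A", "A", "A", "B", "B", "B"],
    "referral_recommended": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_difference(outcomes, "patient_group",
                                    "referral_recommended")
print(f"Demographic parity difference: {gap:.2f}")
if gap > 0.10:  # example tolerance; set per clinical context
    print("Warning: recommendation rates differ notably between groups")
```

A check like this is only a starting point; a full fairness analysis would examine several metrics and clinically confirmed outcomes, not just recommendation rates.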

So, where does this leave decision makers when they want to introduce AI into a care pathway to benefit both patients and clinical staff, but aren’t sure how to assess the safety and efficacy of those systems?

A good answer is to look for system providers who have already mitigated AI-related risks as far as possible in their design, development, training, and testing processes. These are the providers who have adopted recognised standards such as ISO 42001 and operate an effective AI management system across their development lifecycle.

Possible mitigation strategies by risk type:

Transparency
  • AI systems using open algorithms where possible.
  • Detailed documentation and user training available to explain how AI systems make decisions.
  • Look for systems compliant with standards and transparency frameworks.

Explainability
  • Look for explainable AI techniques and outputs users can understand.
  • Look for providers who incorporate user feedback to improve their interface and the explanations provided by AI systems.

Fairness
  • Undertake a fairness analysis to identify and correct AI disparities.
  • Look for providers who demonstrate they design, train and test AI models with the objective of equitable outcomes.
  • Request evidence from providers of regular algorithm reviews to confirm fairness.

Drift (Model or Data Drift)
  • Establish continuous monitoring systems for AI performance (a minimal monitoring sketch follows this list).
  • Look for providers who can demonstrate they regularly update models with new data to reflect current clinical scenarios.
  • Look for adaptive learning systems that adjust to new data patterns while maintaining oversight.

Bias in Training Data
  • Ensure diverse and representative datasets have been used for training.
  • Look for providers who can demonstrate they utilise transparent and explainable AI models for easier identification and correction of biases.
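
As an illustration of the continuous-monitoring point in the Drift section above, the sketch below compares a model's rolling accuracy against the accuracy validated at deployment and raises a flag when the gap exceeds a tolerance. It is a hypothetical sketch, not a reference implementation: the baseline figure, window size, and tolerance shown would in practice be set during clinical safety assessment.

```python
# Minimal, illustrative drift monitor based on rolling accuracy.
# Baseline accuracy, window size, and tolerance are hypothetical.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy: float,
                 window_size: int = 100, tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window_size)  # most recent outcomes
        self.tolerance = tolerance

    def record(self, prediction_correct: bool) -> None:
        """Record whether the latest AI output matched the
        clinically confirmed outcome."""
        self.window.append(1 if prediction_correct else 0)

    def drifted(self) -> bool:
        """True when rolling accuracy falls more than `tolerance`
        below the accuracy validated at deployment."""
        if not self.window:
            return False
        rolling = sum(self.window) / len(self.window)
        return (self.baseline - rolling) > self.tolerance

# Example: a system validated at 92% accuracy before deployment.
monitor = DriftMonitor(baseline_accuracy=0.92)
for correct in [True, True, False, False, False]:  # example outcomes
    monitor.record(correct)
if monitor.drifted():
    print("Possible drift: escalate for clinical safety review")
```

In practice a monitor like this would feed an incident or review process rather than simply printing a message, but it shows the core idea: drift is detectable only if a baseline is recorded and live performance is measured against it.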

A health organisation should also introduce and use AI systems with an appropriate level of care and governance, implementing its own AI management system aligned with ISO 42001:2023, the standard for AI governance. This provides an effective framework for the trustworthy, ethical, and safe use of AI in healthcare, emphasising accountability, integrity, transparency, and sustainability in AI system development, deployment, and use.

By aligning with ISO 42001:2023, both manufacturers and healthcare providers can reduce the clinical risks associated with the use of AI in healthcare and demonstrate ethically responsible digital clinical safety.

Deviceology is committed to supporting organisations developing and deploying innovative AI to deliver better health outcomes and reduce digital clinical risk as far as possible. You can download a more detailed briefing paper here.

Come and see us at Rewired on the 12th/13th March 2024 at the NEC, Stand A11.