Digital clinical safety risks and AI

The advent of artificial intelligence (AI) in healthcare promises to transform patient care, offering significant advances in digital clinical support and decision-making systems. AI-driven systems have the potential to improve condition severity assessments, treatment pathways, and diagnostic accuracy. However, their integration into healthcare settings introduces a new class of clinical risks that demands careful consideration and a fresh approach to safety standards.

The limitations of current standards

DCB 0129 and DCB 0160 have been the cornerstone of clinical risk management in the UK for more than a decade, guiding the development and deployment of digital health systems. However, they were drafted at a time when AI’s role in patient care was not anticipated, and the dynamic, data-driven nature of AI, coupled with its evolving learning capabilities, introduces potential hazards that necessitate a re-evaluation of clinical risk management. The existing frameworks may still be sufficient to guide clinical safety activity, but the way we need to think about AI-related risks and hazards is radically different.

The unique attributes of AI-driven systems, including traceability, explainability, drift, bias, fairness, and transparency, require explicit attention.

The deployment of AI-driven systems introduces new patient safety hazards and changes the focus of existing ones. Those with oversight and governance roles need to be mindful of:

Data-Driven Hazards: The risk that poor data quality leads to erroneous AI outputs and poor decision-making.

Algorithmic Complexity Hazards: The complexity and lack of transparency of AI algorithms can obscure errors or biases and inhibit explainability and usability.

Adaptive Learning Hazards: Unintended changes in system behaviour arising from continued machine learning, including model drift and loss of predictability and consistency (see the monitoring sketch after this list).

Interoperability Hazards: Compatibility and data exchange issues with other healthcare IT systems.

Security and Privacy Hazards: Risks of unauthorised access and breaches of patient privacy through ‘leaky’ AI systems.

Ethical and Governance Hazards: Concerns over consent, transparency, fairness, and accountability, as well as how collected data might be reused for continued model training and improvement.

Regulatory and Compliance Hazards: The challenge of staying compliant with evolving regulations.

Clinical Validity Hazards: The effectiveness of human-in-the-loop oversight and the impact of AI on clinician performance.

Usability Hazards: Trustworthiness, usability, and transparency shortcomings that undermine confidence in and adoption of AI and impede realisation of its potential benefits.
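
To make the adaptive learning hazard concrete, the sketch below computes a population stability index (PSI) to flag when the inputs a deployed model receives have drifted away from the distribution it was validated on. This is a minimal illustration rather than a prescribed control: the function, the simulated cohorts, and the commonly cited PSI threshold of 0.2 are assumptions made for the example, and a real deployment would monitor many features, and the model’s outputs, as part of its clinical safety case.

```python
import numpy as np

def population_stability_index(reference, live, bins=10):
    """Compare a live feature distribution against a reference
    (validation-time) distribution. A PSI above roughly 0.2 is a
    common rule of thumb for meaningful drift; real thresholds
    should be set per feature and per clinical context."""
    # Bin edges come from the reference data so both samples share
    # a scale; live values outside that range are ignored in this
    # simple sketch.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)

    # Convert counts to proportions; a small epsilon avoids
    # division by zero for empty bins.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    live_pct = live_counts / max(live_counts.sum(), 1) + eps

    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Hypothetical usage: the live population is older than the cohort
# the model was validated on, so the check should raise a flag.
rng = np.random.default_rng(42)
reference_ages = rng.normal(55, 12, 5000)  # validation cohort
live_ages = rng.normal(68, 10, 1000)       # drifted live population
psi = population_stability_index(reference_ages, live_ages)
print(f"PSI = {psi:.3f}")  # well above 0.2 -> escalate for safety review
```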

Possible mitigation strategies for AI-related risks are varied, and the proliferation of governance frameworks needs careful consideration before choosing which to adopt. Compliance with international standards such as ISO/IEC 42001 fits well with the accepted approach to information security and quality, while more technical frameworks such as the NIST AI Risk Management Framework outline the controls that should be expected to be in place (a minimal example of one such control follows below). Our briefing paper suggests some possible mitigations for key risks here.
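
As an illustration of the kind of auditable technical control such frameworks describe, the hedged sketch below measures the gap in positive-prediction rates between patient subgroups, a simple demographic parity check. The function name, data, and subgroup labels are hypothetical, and demographic parity is only one of several fairness criteria a clinical safety assessment might apply.

```python
import numpy as np

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rate across
    subgroups, plus the per-group rates. A gap near zero suggests
    similar treatment across groups; a large gap warrants
    investigation before and after deployment."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = {g: float(predictions[groups == g].mean())
             for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical usage: binary triage recommendations by recorded sex.
preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"])
gap, rates = demographic_parity_difference(preds, groups)
print(rates, f"gap = {gap:.2f}")  # {'F': 0.6, 'M': 0.4} gap = 0.20
```

Run routinely against live decisions, even a check this simple gives a governance group a quantitative early signal that a system’s behaviour may warrant review.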