The advent of artificial intelligence (AI) in healthcare promises to revolutionise patient care pathways, offering unprecedented advances in digital clinical support and decision-making systems. AI-driven systems have the potential to significantly improve condition severity assessments, treatment pathways, and diagnostic accuracy. However, their integration into healthcare settings introduces a new class of clinical risks that necessitate careful consideration and a new approach to safety standards.
The Limitations of Current Standards
DCB 0129 and DCB 0160 have been the cornerstone of clinical risk management in the UK for more than a decade, guiding the development and deployment of Digital Health systems. However, they were drafted at a time when AI’s role in patient care was not anticipated, and the dynamic, data-driven nature of AI, coupled with its evolving learning capabilities, introduces potential hazards that necessitate a re-evaluation of clinical risk management. The existing frameworks may be sufficient to guide clinical safety activity, but the way we need to think about AI-related risks and hazards is radically different.
The Need for a New Approach
The integration of AI into clinical decision-making demands a fresh approach to risk management.
The unique attributes of AI-driven systems, including traceability, explainability, drift, bias, fairness, and transparency, require explicit attention. These terms demand simple, plain-English definitions if we are going to truly understand the risks associated with them:
Traceability refers to the ability to track the decision-making process of an AI system, including the data inputs and algorithmic paths taken to arrive at a conclusion.
Explainability entails the capacity of an AI system to present its processes and decisions in understandable terms to users, ensuring clinicians can interpret AI recommendations accurately.
Drift denotes the change in an AI system’s performance over time, as it learns from new data, which can lead to deviations from its original accuracy; a minimal monitoring sketch follows these definitions.
Bias refers to systematic errors in AI decision-making that can arise from unrepresentative training data or flawed algorithms, potentially leading to unfair outcomes for certain patient groups.
Fairness is the principle that AI systems should make decisions without discrimination or prejudice, ensuring equitable treatment for all patients.
Transparency is the degree to which AI’s functioning and decision-making processes are open and understandable to users.
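To make drift concrete, the sketch below shows one way a deployment team might monitor it: comparing a model's discrimination on a fixed baseline sample against a recent window of cases and flagging any drop beyond a tolerance. This is a minimal, illustrative example only; the synthetic data, the AUC metric and the 0.05 tolerance are assumptions chosen for illustration, not a prescribed method.

```python
# Minimal, illustrative drift check: compare model discrimination on a
# baseline sample against a recent window of cases.  All data and the
# tolerance below are hypothetical values chosen only for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

DRIFT_TOLERANCE = 0.05  # acceptable drop in AUC before a safety review is triggered


def auc_drop(baseline_labels, baseline_scores, recent_labels, recent_scores):
    """Return the fall in AUC between the baseline sample and the recent window."""
    baseline_auc = roc_auc_score(baseline_labels, baseline_scores)
    recent_auc = roc_auc_score(recent_labels, recent_scores)
    return baseline_auc - recent_auc


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for real audit data: true outcomes and model risk scores.
    baseline_labels = rng.integers(0, 2, 500)
    baseline_scores = np.clip(baseline_labels * 0.6 + rng.normal(0.2, 0.2, 500), 0, 1)
    recent_labels = rng.integers(0, 2, 500)
    recent_scores = rng.uniform(0, 1, 500)  # degraded model: scores now uninformative

    drop = auc_drop(baseline_labels, baseline_scores, recent_labels, recent_scores)
    if drop > DRIFT_TOLERANCE:
        print(f"Possible drift: AUC has fallen by {drop:.2f}; trigger clinical safety review")
    else:
        print(f"AUC change of {drop:.2f} is within tolerance")
```

In practice the baseline would be a curated, clinically validated reference set rather than synthetic data, and a breach of the tolerance would feed into the clinical risk management process rather than simply printing a message.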
The deployment of AI-driven systems introduces new patient safety hazards and changes the focus of existing ones. Those with oversight and governance roles need to be mindful of:
Data-Driven Hazards: The risk of poor data quality leading to erroneous AI outputs and poor decision-making (a minimal validation sketch follows this list).
Algorithmic Complexity Hazards: The complexity and lack of transparency of AI algorithms can obscure errors or biases and inhibit explainability and usability.
Adaptive Learning Hazards: Unintended changes in system behaviour due to machine learning, including model drift and a lack of predictability and consistency.
Interoperability Hazards: Compatibility and data exchange issues with other healthcare IT systems.
Security and Privacy Hazards: Risks of unauthorised access and breaches of patient privacy through ‘leaky’ AI systems.
Ethical and Governance Hazards: Concerns over consent, transparency, fairness and accountability, as well as how collected data might be used for continuous training and improvement.
Regulatory and Compliance Hazards: The challenge of staying compliant with evolving regulations.
Clinical Validity Hazards: The effectiveness of human-in-the-loop oversight and the impact of AI on human performance.
Usability Hazards: Trustworthiness, usability and transparency risks that inhibit confidence in and adoption of AI, impeding its potential benefits.
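As a simple illustration of the data-driven hazard above, the sketch below shows one way incoming records might be validated before they reach a model, rejecting cases with missing fields or implausible values. The field names and plausibility ranges are hypothetical examples invented for this sketch, not clinical reference ranges or a required design.

```python
# Illustrative pre-model data quality gate.  The field names and plausibility
# ranges below are hypothetical examples, not clinical reference ranges.
PLAUSIBLE_RANGES = {
    "age_years": (0, 120),
    "heart_rate_bpm": (20, 250),
    "systolic_bp_mmhg": (50, 300),
}


def validate_record(record: dict) -> list:
    """Return a list of data-quality problems; an empty list means the record may proceed."""
    problems = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = record.get(field)
        if value is None:
            problems.append(f"missing value for {field}")
        elif not (low <= value <= high):
            problems.append(f"{field}={value} outside plausible range {low}-{high}")
    return problems


if __name__ == "__main__":
    record = {"age_years": 67, "heart_rate_bpm": 900, "systolic_bp_mmhg": None}
    issues = validate_record(record)
    if issues:
        print("Record rejected:", "; ".join(issues))  # route to manual review, not the model
    else:
        print("Record passed data-quality checks")
```

A check of this kind does not remove the hazard, but it gives those with oversight roles a concrete control point at which poor-quality data can be stopped, logged and reviewed before it influences an AI-driven recommendation.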