The use of artificial intelligence (AI) in healthcare has rapidly shifted from futuristic concept to tangible reality. From streamlining administrative tasks to supporting diagnostic decisions and predicting disease outbreaks, AI is changing how healthcare is delivered. But as with all powerful tools, its use—especially when involving patient-identifiable data—comes with serious legal, ethical, and regulatory implications.
The Rise of AI in the NHS
NHS England (mentioned here in the full knowledge that it will not exist in the coming weeks, but that its functions and policies will transfer into the Department of Health and Social Care rather than remaining at so-called "arm's length") has embraced AI as a potential force multiplier in tackling systemic challenges: long waiting lists, a stretched workforce, and early disease detection. AI has been deployed in radiology (e.g. lung cancer screening), pathology, ophthalmology, and even hospital workflow optimisation.
The NHS AI Lab, created in 2019 and now part of the NHS Transformation Directorate, continues to fund and evaluate AI solutions through programmes like the AI in Health and Care Award. The goal is clear: improve outcomes and efficiencies, but always in line with legal safeguards.
Patient Data: Powerful, but Personal
The foundation of AI’s power in healthcare lies in data—particularly patient-identifiable data (PID). This includes any information that could, directly or indirectly, identify a patient: name, NHS number, date of birth, postcode, or even combinations of clinical detail.
Training and deploying AI systems on this type of data creates complex obligations. The question is not only whether we can use it, but how, when, and on what legal basis.
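As a purely illustrative sketch of this distinction, the Python below groups a record's fields into direct identifiers, indirect identifiers, and clinical detail. The field names and the assignment of each field to a risk category are hypothetical choices for this example, not an NHS classification scheme.

```python
# Hypothetical sketch: triaging a patient record's fields by identification risk.
# Which fields count as "direct" or "indirect" is a governance decision,
# not something code can decide for you.
DIRECT_IDENTIFIERS = {"name", "nhs_number", "date_of_birth"}
INDIRECT_IDENTIFIERS = {"postcode", "gp_practice"}

def classify_fields(record: dict) -> dict:
    """Group a record's field names by identification risk."""
    groups = {"direct": [], "indirect": [], "clinical": []}
    for field in record:
        if field in DIRECT_IDENTIFIERS:
            groups["direct"].append(field)
        elif field in INDIRECT_IDENTIFIERS:
            groups["indirect"].append(field)
        else:
            groups["clinical"].append(field)
    return groups

record = {
    "name": "Jane Doe",           # invented example data
    "nhs_number": "9990000000",
    "postcode": "SW1A 1AA",
    "blood_pressure": "120/80",
}
print(classify_fields(record))
# → {'direct': ['name', 'nhs_number'], 'indirect': ['postcode'], 'clinical': ['blood_pressure']}
```

Note that the "clinical" bucket is not automatically safe: as the definition above says, combinations of clinical detail and indirect identifiers can still single a patient out.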
The Legal Framework
The use of AI with patient-identifying data is governed by multiple overlapping legal and regulatory frameworks. The most critical are:
1. UK GDPR and Data Protection Act 2018
- Lawful Basis for Processing: Processing PID requires a lawful basis under Article 6 UK GDPR and a condition under Article 9 (for special category data, like health data).
- Often, the lawful basis for NHS use is public task (Art. 6(1)(e)) and healthcare purposes under Art. 9(2)(h).
- Where AI is used for research, it may fall under Art. 9(2)(j), with appropriate safeguards.
2. Common Law Duty of Confidentiality
- Even where data processing is legal under GDPR, the NHS (and any partner organisations) must also satisfy the common law duty of confidentiality.
- This typically requires either:
- Patient consent, or
- A legal gateway, such as approval under Section 251 of the NHS Act 2006, granted via the Confidentiality Advisory Group (CAG).
3. Data Protection Impact Assessments (DPIAs)
- For any high-risk processing, such as AI development or deployment using PID, a DPIA is mandatory.
- NHS organisations must assess risks to patients' rights and freedoms, ensure transparency, and embed data minimisation and security.
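To make data minimisation and pseudonymisation concrete, here is a minimal sketch, assuming a hypothetical record layout and a secret key held outside the dataset (e.g. in a key vault). Keyed pseudonyms like this keep records linkable across datasets without exposing the NHS number, but they do not make the data anonymous: pseudonymised data remains personal data under UK GDPR.

```python
import hashlib
import hmac

# Hypothetical: in practice this key would live in a managed secret store,
# never alongside the data.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(record: dict, key: bytes = SECRET_KEY) -> dict:
    """Drop direct identifiers and replace the NHS number with a keyed hash.

    The same NHS number always maps to the same pseudonym, so records can
    still be linked, but the original number is not recoverable without the key.
    """
    minimised = {
        k: v for k, v in record.items()
        if k not in {"name", "date_of_birth", "postcode"}  # data minimisation
    }
    nhs_number = minimised.pop("nhs_number")
    minimised["patient_pseudonym"] = hmac.new(
        key, nhs_number.encode(), hashlib.sha256
    ).hexdigest()[:16]
    return minimised
```

A plain (unkeyed) hash of an NHS number would be reversible by brute force over the 10-digit number space, which is why the sketch uses an HMAC with a secret key.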
NHS England & DHSC Position
Both NHS England and the Department of Health and Social Care (DHSC) advocate for the safe, ethical, and explainable use of AI. Key principles include:
- Transparency: Patients must understand how their data is used and for what purpose.
- Fairness: AI systems should not exacerbate health inequalities or introduce bias.
- Accountability: There must be human oversight, particularly in decision-making that affects diagnosis or treatment.
- Security: Data must be stored, transferred, and processed securely under strict controls.
NHS England also supports the Federated Data Platform (FDP), delivered under a Palantir-led contract, which aggregates anonymised and de-identified data, but this has triggered public and professional debate over transparency and trust.
AI + Ethics = Explainability
Even when data is de-identified or anonymised, the ethical dimension remains critical. Black-box algorithms pose problems for clinical accountability. Under UK GDPR (Articles 13–15 and 22), individuals have a right to meaningful information about the logic involved in automated decision-making; this applies directly to some AI tools.
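One way to provide "meaningful information about the logic involved" is to prefer models whose outputs decompose into per-feature contributions. The sketch below does this for a toy linear risk score; the feature names and weights are invented for illustration and are in no sense a clinical model.

```python
# Hypothetical linear risk model: because the score is a weighted sum,
# each feature's contribution to the output can be reported exactly.
WEIGHTS = {"age": 0.03, "smoker": 0.8, "systolic_bp": 0.01}  # illustrative only
BIAS = -3.0

def explain(features: dict) -> dict:
    """Return the model's score plus a per-feature breakdown of how it arose."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return {"score": score, "contributions": contributions}

result = explain({"age": 60, "smoker": 1, "systolic_bp": 140})
print(result["score"])           # → 1.0
print(result["contributions"])   # smoker contributes 0.8 of that score
```

For genuinely black-box models, post-hoc explanation techniques exist, but an intrinsically decomposable model like this one makes the "logic involved" easiest to communicate to patients and clinicians.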
Practical Safeguards for AI in Healthcare
- Use anonymised data wherever possible.
- Avoid re-identifiable datasets unless absolutely necessary.
- Be clear about data flows—who accesses what, and when.
- Embed governance: DPIAs, data sharing agreements, CAG approvals, and regular audits.
- Be cautious with third-party vendors: ensure contracts are compliant and that the partner's security standards meet NHS requirements, such as the Data Security and Protection Toolkit.
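Being "clear about data flows" ultimately means recording who accessed what, when, and for what purpose, in a form an auditor can inspect. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccessEvent:
    """One auditable record of a user touching a dataset."""
    user: str
    dataset: str
    purpose: str
    timestamp: str  # ISO 8601, UTC

def log_access(user: str, dataset: str, purpose: str, log: list) -> AccessEvent:
    """Append a timestamped access event to an audit log and return it."""
    event = AccessEvent(
        user=user,
        dataset=dataset,
        purpose=purpose,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log.append(event)
    return event

audit_log: list[AccessEvent] = []
log_access("dr_smith", "radiology_scans", "model_validation", audit_log)
```

In a real deployment this would write to append-only, tamper-evident storage rather than an in-memory list, and the `purpose` field would be constrained to the purposes stated in the relevant data sharing agreement.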
The Path Forward
AI has the power to transform patient care, but it must not do so at the expense of patient trust, legal compliance, or ethical integrity. As we move forward, the NHS and its partners—whether in startups, hospitals, or academic centres—must continue to build solutions that are not only innovative, but lawful, transparent, and just.
In short: AI can help save lives, but it must also respect the people behind the data.