

Who is liable if AI in healthcare fails and causes harm?

What could happen if a claim is made against a healthcare provider that relies on artificial intelligence to spot diseases?

By Simon Perkins | 17 May 2024 | Less than 3 min read

Law firm DAC Beachcroft’s Simon Perkins and Stuart Wallace reveal what could happen if a claim is made against a healthcare provider that relies on artificial intelligence to spot diseases.


Artificial intelligence is already at the forefront of clinical care, particularly in radiological analysis. AI that detects diseases on scans can be deployed across a number of clinical specialties, and its potential benefits are vast. The most obvious are accuracy (improving patient safety) and time saving (freeing up the workforce).

What happens, though, if the AI fails to detect a disease on a scan and harm results? What route would a patient need to take to establish liability, and against whom? Should the patient sue in tort, pleading clinical negligence and relying on the Bolam test? The problem here is that the Bolam test considers whether the care provided by clinicians, not AI, was reasonable, and so it may not apply to a case arising from an isolated AI defect without clinician input. Arguably, such an error should instead fall under product liability litigation, on the basis that the product, or ‘machine’, has made a mistake that has affected the end consumer: the patient.


