Technology is moving forward faster than ever before, and advances are making waves in various fields – autonomous cars and bipedal robots are on the cusp of commercial availability, and computers are getting faster and better at understanding our online wants and needs. This technological advancement will eventually revolutionize the way in which healthcare is provided – but this has ramifications beyond the provision of care. Can an intelligent machine be liable for negligence?
What Exactly is AI?
Artificial intelligence, or AI, is a broad term covering the development of machines and programs that are, to some extent, intelligent. Intelligence itself is a broad notion here, encompassing everything from human-like cognition to an ideal state of rationality. In short, AI spans a gamut of new computer technology, from programs that teach themselves to sort files more effectively all the way to the potential synthesis of a thinking mind.
The Future of AI in Medicine
While AI might sound like a grand, high-concept field with esoteric applications, the reality is far simpler – and AI is already advancing a multiplicity of fields. AIs can be simple self-learning algorithms, which get better and better at performing the same task – with far-reaching implications for otherwise labor-intensive work such as filing and archiving paperwork and financial records. AIs can also be complex networks that get to grips with an existing system and optimize it smartly, improving efficiency.
As far as medicine is concerned, AI can be a useful tool for sharing medical data with the right practitioners, booking appointments, and managing schedules. On a patient-facing basis, advanced AIs could well examine symptoms and refer patients to human specialists, relieving the pressure on local GPs and A&E services. In the far future, AIs could even find their way into the operating theatre, performing complex surgical tasks with greater precision than a human surgeon.
AI and Medical Negligence – Who is Liable?
Discussion of the involvement of AI in human healthcare can be charged, as science-fiction ideas of AI revolt or malfunction pervade public opinion. But there are very real questions to be asked about the ethics of programs making healthcare decisions – questions which have taken on new urgency after the first fatalities caused by AI-controlled ‘self-driving’ cars.
Though the chances of an AI making a mistake are theoretically lower than the chance of human error in the NHS, if an AI did make a mistake, who would be responsible? If you were injured as a result of AI negligence or error and needed to seek the help of a medical negligence solicitor, whom would that solicitor be able to identify as liable?
The law does not currently cover this scenario specifically, as it trails behind the exponential leaps in technology and infrastructure. However, conclusions can be drawn from the existing standard for negligence claims: you must establish that a healthcare provider owed you a duty of care, that there was a failure to honor that duty, and that the failure resulted in your injury or impairment.
With AI, you can establish the first, but the second and third present issues. In order to fail in the duty of care, the AI must either be wrong or defective – and these may be argued to be one and the same. As a result, your provider may distance themselves from responsibility and point instead at the AI’s parent organization – turning your claim into a personal injury suit under the Consumer Protection Act.
In more definable instances, such as a grievous injury caused in error by a surgical AI machine, it may be the case that the AI is treated as an employee of sorts – but not a culpable one, owing to its lack of consciousness beyond the learned execution of specific techniques. As such, the employer – being the healthcare provider – could be considered liable.