BMA wants tighter controls on use of AI in healthcare
The British Medical Association (BMA) wants the use of AI in healthcare to prioritise “safety, efficacy, ethics, and equity.”
It wants to see a much clearer framework to ensure that each AI implementation is “rigorously assessed in real-world settings and continuously monitored to ensure it improves care quality and job satisfaction without exacerbating inequalities”. Doctors say stronger “governance and up-to-date regulation” are needed in order to protect patient safety.
The BMA represents all UK doctors and medical students. It has just published “Principles for Artificial Intelligence (AI) and its application in healthcare”.
The report notes that AI offers potential benefits in healthcare, including better efficiency, diagnosis, and treatment; it is already being used in healthcare administration, clinical decision-making, diagnostics, personalised treatment, digital therapies, analysis of population health data, and biomedical research.
However, the BMA cautions that successful use of AI in healthcare depends on its proper testing, and dealing with issues of liability, regulation, and data governance.
This is because, as the doctors’ association points out, the use of AI can involve serious risks, including potential harm to patient health, exacerbation of health inequalities, and negative impacts on doctor-patient relationships and productivity. Effective AI use therefore requires careful management to maximise benefits and mitigate risks.
The BMA wants to see AI tools rigorously tested for safety and efficacy, better governance and regulation that evolves alongside the technology, and clear legal liability frameworks.
The Association wants AI in healthcare to prioritise safety, efficacy, ethics, and equity, with each AI implementation rigorously assessed to ensure it does not exacerbate inequalities.
And the BMA wants staff and patients to be given the choice to opt out of AI-driven processes or to dispute AI decisions. Legal liability frameworks must ensure that developers are held accountable and that doctors can challenge AI decisions.