In a new report offering guidance to the state bodies that license physicians, the Federation of State Medical Boards said doctors are responsible for their use of artificial intelligence and accountable for any harm the technology causes.
“Once a physician chooses to use AI, they accept responsibility for responding appropriately to the AI’s recommendations,” the report said.
The report added that doctors may not rely entirely on an AI tool to make medical decisions and must be able to explain why they followed or rejected the tool's advice. If doctors relying on AI deviate from the standard of care, they should be held accountable, according to the report, which was adopted by the federation's House of Delegates.
Why it matters: As AI spreads through health care, medical tech companies and health systems alike are wondering who is ultimately responsible for AI decisions that harm patients.
The American Medical Association, the country's leading doctors' group, has previously said it wants physicians protected if AI steers them wrong, warning that doing otherwise would slow adoption of the technology. At the same time, the group deliberately calls AI "augmented intelligence" because it believes a doctor must always be the final decision-maker in patient care.
Tech companies, for their part, have said liability ultimately lies with health systems and doctors.
The report, unveiled this week at the federation’s annual meeting in Nashville, Tennessee, outlines best practices for regulating doctors’ use of artificial intelligence. It acknowledges that not all AI carries the same risk and encourages state medical boards to set accountability measures based on the risk a tool poses to patients.
Guidance in the report says state boards should require that physicians be proficient in the AI tools they use and understand how they function and on what data they were trained. Even in cases where the reasoning of AI systems is not easily understood — the so-called black box problem — the doctor should be able to provide a “reasonable interpretation” of the AI’s recommendation.
The report says that boards should not allow chatbots to stand in for doctors. Research has shown that AI chatbots may be better than doctors at explaining a course of treatment, and the industry has discussed using chatbots to help deliver discharge instructions. But the federation insists that doctors talk with their patients about their treatment and get their consent on next steps. Doctors should also be transparent about their use of AI and be required to explain to patients how it works, the guidance says.
What’s next: States are beginning to consider their medical boards’ role in regulating how doctors use AI, with decisions expected in the months to come.