Doctors using new artificial intelligence tools to help them diagnose and treat their patients say they wish Congress would provide some clarity on a big unanswered question: Who pays if AI makes a mistake?

Advancements in AI promise to improve care, but only if doctors trust the systems and are protected from liability, according to the country’s leading physicians’ group, the American Medical Association.

“We’re seeing lawsuits already,” AMA President Jesse Ehrenfeld said at a recent meeting on doctors’ lobbying priorities, adding the issue so far has been “a little bit [like] building the plane as you’re flying it.”

At stake is more than just millions in medical malpractice payouts, over a dozen health, legal and tech leaders told POLITICO. Judges, lawmakers and regulators are shaping what a medical system infused with artificial intelligence owes patients, both in terms of quality care and the right to recompense if something goes wrong.

It won’t happen without a fight. Health tech companies and some hospitals say that doctors making the final call in care are ultimately responsible for their decisions.

The AMA has in one way reinforced the point. The group refers to AI as “augmented intelligence” because it says doctors must never rely on it entirely.

The explosion of AI tools in health care is reviving the once-heated debate over medical malpractice. But it’s not clear that the old battle lines, in which doctors and Republicans pushed for liability limits while Democrats and plaintiffs’ attorneys pushed back, will hold. In the years since the “med-mal” debate went dormant, doctors’ politics have trended Democratic and the tech giants building the tools have run afoul of both parties. That suggests there could be an opening to give the doctors what they want.

There’s no legislation to do that yet, but lawmakers have begun looking into it. Rep. Greg Murphy (R-N.C.), a urologist and co-chair of the GOP Doctors Caucus in the House, recently wrote to Food and Drug Administration Commissioner Robert Califf to ask for Califf’s take and to suggest an approach.

Rep. Vern Buchanan (R-Fla.), chair of the Ways and Means Health Subcommittee, told the American Medical Association he sees a role for Congress.

“We’re going to look at that and see if we can’t make a big difference,” he said of liability reform.

Until Congress acts, legal experts said, judges will set the parameters of the new system.

“Courts are going to need to evolve some of their existing rules [and] doctrines,” said Michelle Mello, a Stanford health law scholar who has testified on the matter before Congress.

In the meantime, care providers, tech companies and patients find themselves in an uncertain, even risky, situation.

Safety vs. absolution

The medical malpractice debate used to center on a GOP list of “tort reform” proposals to cap damage awards, limit plaintiffs’ attorney fees, and shorten the window during which patients could bring suits.

Republicans were rarely successful in Congress, but many state legislatures imposed restrictions.

Now, the policy decisions about AI liability are less about how much compensation a patient gets and more about who bears the blame for a mistake.

Murphy says AI could reshape the medical malpractice debate in Washington, so he’s taking an early stab at steering it.

In his letter to Califf he suggested a “safe harbor” that would apply to doctors and the AI products they use if both join a surveillance program tracking patient outcomes.

“There are going to be great precedents set here on the national level,” Murphy told POLITICO.

But Murphy hasn’t proposed a bill, nor has Califf responded to his inquiry.

Trial lawyers say they’ll fight if Murphy proceeds.

“When we give people absolution from responsibility, it means everyday people bear the brunt of it,” said Sean Domnick, president of the American Association for Justice, a group that represents trial lawyers.

AI tools make it “easier to just click, click, click,” he said. “We have to guard against that.”

The government’s take-it-slow approach to regulation of the new AI tools is adding legal uncertainty, Mello said.

Currently, the FDA regulates some AI-powered medical devices, but its approach to the latest version of the tech is still forming.

Unregulated tools could be more vulnerable to lawsuits, according to Mello, because they aren’t protected by the “preemption doctrine,” which bars some claims against regulated devices on the theory that they’re known to be safe enough and effective enough to have received FDA clearance.

So how federal agencies like the FDA decide to regulate the latest AI tools could help determine when software makers can be held accountable for errors.

It could also prompt some makers of AI products to seek regulatory approval.

“If you’re explicitly saying, ‘I’m definitely not a medical device,’ well, now you don’t get to benefit from the preemption doctrine,” Mello said.

The legal landscape

The case law suggests doctors aren’t likely to have similar avenues to avoid lawsuits.

When patients get substandard care because an intake form is incomplete, medical literature is incorrect or a drugmaker doesn’t adequately warn about a drug’s risks, courts have rarely let doctors off the hook.

In some cases, doctors have followed allegedly errant software suggestions for which drugs to prescribe and how to treat patients, and plaintiffs said that advice harmed them. When the doctors were sued, courts still took the cases, despite their efforts to blame the tool.

“The group that I worry about most is physicians,” Mello said. “Physicians are the ones that kind of get left holding the bag.”

That’s the case for a reason, some health tech companies say. Diagnostic software is like GPS, in their view: It’s up to the driver to stay on the road, no matter what instructions are given.

“In that room, there’s only one licensed health care provider,” said Dr. C.K. Wang, chief medical officer at COTA, a health data company working to advance AI models. “I’m liable for the decisions made under my license. Therefore, I do advise clinicians to really truly understand the technology that they’re using.”

What makes the situation especially thorny for doctors is that they could also open themselves up to litigation if they eschew AI.

Legal scholars foresee plaintiffs arguing that doctors were negligent because they didn’t use the best tools available to them.

Doctors are contemplating a host of other questions, too: What should a physician do if a patient wants to follow the advice of an algorithm instead of their doctor?

In what scenarios should doctors trust themselves over a machine?

Insuring the unknown

There’s big money at stake in the answers, with implications for how quickly doctors adopt AI tools, the specialties they choose and the places they decide to live.

After a big dip during the Covid pandemic, malpractice payouts have rebounded, returning to more than $3 billion in 2022, according to statistics collected by the National Practitioner Data Bank.

The AMA said last year that many doctors have incurred double-digit percentage increases in their malpractice insurance premiums for four years running.

If the upward trend in premiums continues, the AMA said, it could have implications for where doctors choose to practice and what specialties they pick.

The AMA noted that premiums vary considerably by specialty. In Philadelphia, internists paid $31,909 for an annual policy in 2022, while general surgeons paid $105,013 and OB/GYNs paid $185,565.

And fees varied markedly by region, in part because state laws can affect the risk of high-cost litigation. OB/GYNs in Los Angeles, for instance, paid $49,804, while those in Miami paid $226,224.

Insurance companies are taking note of the AI revolution, but making no forecasts on how it will affect premiums.

“It’s uncharted territory,” said Melissa Menard, executive vice president of the health care division of CAC Specialty.

Insurers need to look at risks from AI that could include more bias, less privacy and new cybersecurity threats, she said.

But if the technology, on the whole, improves care, it could also reduce the chances a doctor is sued for malpractice — and possibly reduce insurance costs.

Amid the uncertainty, lawyers, health leaders and tech companies agree that the core principles of practicing medicine responsibly should be front and center.

“We always go back to the standard of care,” Menard said.

But that standard, legal and medical experts admit, could change with AI, especially as its adoption continues to grow.

Even so, some health systems are intent on using AI as a tool to support and improve current standards.

“We’re using AI like a stethoscope,” said John Couris, president and CEO of the Florida Health Sciences Center, emphasizing the need to balance excitement about the technology with level-headedness. “It’s a tool to help augment the care that they’re providing, but our doctors and our nurses are the ultimate authority on what gets done or doesn’t get done to a patient.”

Other health providers aren’t as calm, including Dr. Wendy Dean, president and co-founder of Moral Injury of Healthcare, an organization that advocates for physicians’ well-being.

“The confidence with which AI posits its conclusions,” she said, “makes it really hard as a human to say, ‘Wait a minute, I need to question this.’”