Artificial intelligence has erupted in health care and could change the face of the industry irrevocably. As it continues to reach new heights, the risk of liability grows. At this time, it remains unclear exactly who is liable, and for what.

Radiologists are using AI in medical imaging. Deep-learning tools are detecting signs of rare disease. AI-powered drug discovery platforms are recommending medications based on a patient’s individual genetics. And AI chatbots are messaging with patients, scheduling appointments, and sending out reminders for medication refills. The Biden administration recently secured the commitments of dozens of health-care provider groups to ensure the safe and trustworthy use of AI technologies. But that doesn’t mean AI in health care is foolproof.

If a medical mistake is made, does liability rest with the provider, the AI manufacturer, or both? If AI is considered a service, not a product, would product liability still apply? These are the questions that providers and the courts will grapple with as health-care entities continue to harness the power of AI.

Medical Malpractice & Product Liability

The first step in a medical negligence suit is determining how to file. In other words, does the claim fall under medical malpractice, product liability, or both? The answer is fact-dependent, but AI could be implicated if it was used to guide or inform a physician’s decision-making or to develop a medical product.

Typical medical malpractice lawsuits originate from medical mistakes: a misdiagnosis (an incorrect diagnosis), a missed diagnosis (a failure to identify a medical issue), surgical errors, mistakes administering anesthesia, or general negligence. A med mal suit might name the physician, the hospital or practice group, and possibly the attending staff or nurse anesthetist as defendants. For a claim to succeed, the plaintiff must establish that the physician owed a professional duty to the patient, that the duty was breached, that the breach caused the plaintiff’s injury, and that the plaintiff is entitled to damages.

In a medical product liability suit, a plaintiff usually files suit against the manufacturer of a medical device or product, claiming that the product is defectively designed, defectively manufactured, or defectively marketed or sold. Product liability claims generally come down to pinpointing exactly where and under whose custody the defect occurred. Many states allow plaintiffs to sue under a theory of strict liability, but in other cases plaintiffs have found it easier to prove negligence on the part of the manufacturer.

But product liability does not translate directly to generative AI, because AI is constantly evolving, and it is hard to pinpoint when or where a defect arose or negligence occurred. There is also no settled law on whether AI is even considered a “product” for purposes of product liability.

It’s also not clear at this time where fault would lie, or which cog in the AI wheel is most vulnerable. Liability could attach to the developer of the program itself, the software engineer who feeds data into the large language model, or the physician who poses the query to it.

Similar dilemmas arise in prescribing medication manufactured using AI technology or utilizing nanomedicine or precision medicine for the treatment of cancer or disease.

Ultimately, the patient has autonomy in making medical decisions about their own plan of care, but a patient can be heavily influenced by a physician or confused by medical terminology, resulting in a less-than-informed decision. If something does go wrong, and the defect or medical mistake can be traced back to the technology itself, a patient could have a viable claim against both the developer and the provider.

The issue unique to the use of AI in health care is whether the AI manufacturer or software developer owes a duty to the patient—and if so, what exactly that duty is. A manufacturer could conceivably insulate itself from liability with a disclaimer, or by proving that the software was altered beyond its design capabilities or that an algorithm was manipulated outside its control.

But that argument may only work for generative AI, not for the physical tools that work in conjunction with AI, like the da Vinci robot or a remote patient monitoring device. Unless AI is harnessed in software as a medical device or in the form of wearables, there is no physical device to link to a manufacturing defect.

Meanwhile, AI product users—such as physicians, provider groups, or hospitals—will need to perform a due diligence analysis, similar to evaluating a cloud data or electronic health records vendor, to ward off liability. That means ensuring the tools or software the provider relies on, including generative AI, are audited, verified, comprehensive, and appropriate for each patient on a case-by-case basis. In addition, thorough informed consent would need to be obtained from the patient, with adequate attention given to explaining the risks and benefits of using diagnostic tools and novel technology.

Current Legal Landscape

There is sparse medical malpractice litigation tied to the use of generative AI, chatbots, or nanomedicine. But recent civil complaints involving physical AI tools like surgical robots highlight the potential for similar litigation involving generative AI in the future.

In a case filed in a Philadelphia county court in November 2023, a patient alleged that they suffered a vaginal laceration during a robotic-assisted laparoscopic hysterectomy, resulting in an anovaginal fistula and requiring further surgery to correct the secondary issue. The patient sued the doctor and hospital group for negligence. And in a September 2023 complaint filed in Broward County, Florida, a patient sued the attending surgeon over a robotic-assisted laparoscopic hysterectomy that resulted in complications.

The plaintiffs in these recent complaints opted to sue only their health-care providers. But product liability claims against robot manufacturers could also come into play in these types of cases. With robotic surgery, one can point to a defective part or a malfunction in the machine itself, which is squarely a product liability concern. A plaintiff can then sue the physician (and often the hospital) in addition to the manufacturer of the robotic surgery parts and any other entity in the product supply chain that could be liable (such as a parts distributor or medical supply retailer).

Product liability claims have previously been filed against manufacturers of robotic surgery equipment, such as Taylor v. Intuitive Surgical (2017) and Mendoza v. Intuitive Surgical (2021). In Taylor, the Washington Supreme Court held that manufacturers have a duty to warn the purchaser—in this case, the hospital—about the dangers of their devices. The Mendoza case ultimately settled.

But with generative AI, it’s challenging to decide where to point the finger.

If there is a flaw in the software itself, if the data used to feed the large language model is incorrect or biased, or if a mistake was made when a query was answered, it is possible that a software developer or AI engineer could be held responsible. But it is much more likely that a physician would ultimately be held liable for a medical mistake if they relied on generative AI to make a diagnosis or prognosis, or if generative AI suggestions led the physician to a misdiagnosis or a failure to diagnose a clinical issue in a patient.

This is because, even with advances in medicine and health technology, the physician owes a duty to the patient to make an informed decision based on their own knowledge. It is standard practice to sue all entities in the supply chain in a product liability suit, but it’s also easier to identify a physician than to name a phantom entity or engineer behind an AI dataset or large language model.

What Will the Future Bring?

President Joe Biden recently issued an executive order on artificial intelligence that addresses risks related to bias, privacy, and ethics. The order also sets a goal of standardized evaluations of AI systems. This is promising: if regulation makes AI testing rigorous, it may minimize the likelihood of mistakes that could harm patients.

The order also establishes an AI Council that will include the Secretary of Health and Human Services, but it doesn’t list the commissioner of the FDA. Given the rapid pace of drug development incorporating AI and machine-learning components, the FDA will play an important role in AI regulation in the near future.

While the question of liability in health AI remains ambiguous, one principle is constant: Doctors ultimately owe patients a duty of care and the responsibility to make informed decisions about each patient’s plan of care. Whether other actors in the AI industry owe a similar duty is unresolved.