AI in healthcare: Are we dinosaurs waiting for the comet to hit?

In the last couple of months, lectures and articles about the impact AI will have on healthcare have grown at an exponential pace, especially in the wake of interest in ChatGPT and GPT-4 and how these large language models could revolutionize, well … everything.

Witness for example this tweet by Canadian Medical Association President Dr. Alika Lafontaine while attending #TED23 earlier this month.

The world’s largest health IT conference – HIMSS – being held in Chicago at the same time also saw the potential impacts of new AI applications dominate the talks from the podium but largely in a positive fashion. Here the emphasis was on how large electronic medical record vendors and other software developers are integrating models such as ChatGPT into their systems in a way that could help physicians dramatically reduce the amount of paperwork they have to do or enhance care delivery.

However, for many, the uneasiness felt by Dr. Lafontaine dominates. Perhaps one of the best recent showcases of those concerns in a policy and ethical context was a lecture and discussion on the regulatory and ethical challenges of AI in healthcare, held by the University of Ottawa Centre for Health Law, Policy and Ethics.

For Dr. Colleen Flood, director of the centre and University Research Chair in Health Law & Policy, issues related to the use of AI in healthcare represent a prime opportunity for the federal government and Health Canada to proactively intervene and set policies in this area to protect the public.

“I think most legal scholars understand, even if the public and patients don’t, the incredible uphill battle that patients face to successfully sue a doctor or health care professional for negligent treatment,” said Dr. Flood in her introductory remarks at the session. “I think that the present difficulties that patients have in this regard will be greatly exacerbated by the black box of algorithmic decision making.”

“In the future … all doctors will be supported by AI, in their decision making, and in some cases may replace actual professional judgement or decision making.” She noted this could significantly impair the trust that is fundamental to the physician-patient relationship.

Dr. Flood said Health Canada must play a role in improving the safety and quality of medical devices that use AI. She returned to this theme in her closing remarks, stating that “we have an opportunity here through federal regulation, through Health Canada, to really make the right platforms for Canadian AI innovation in healthcare that can set appropriate standards.

“We’re flat-footed a lot of the time when it comes to regulation and legal responses, like dinosaurs waiting around in the swamp for the comet to hit us. We have got to get a lot smarter, faster, more flexible and innovative and actually use some of these technologies to help us regulate as well.”

Keynote speaker for the session was Glenn Cohen, deputy dean and professor at Harvard Law School, whose work was described by Dr. Flood as “foundational” in the area of AI and healthcare.

After outlining the reasons for using AI in healthcare (see ‘Bonus content’ below), Cohen outlined five use cases of AI:

  • Choosing cancer therapeutics
  • ICU bed allocation
  • Determining the starting dose of FSH (follicle-stimulating hormone) during ovarian stimulation
  • Using AI to select the embryo with the highest chance of successful pregnancy
  • Endocardial boundary detection for LVEF (left ventricular ejection fraction)

Cohen then presented a detailed analysis of the ethical considerations in each of the phases of building and implementing AI tools that use predictive analytics. For example, when it comes to acquiring data to build the algorithms to train and use AI, he said questions include:

Do patients need to give explicit consent if we want to use their data – all the electronic medical records and the data that’s gone into them that you produced over the course of your lifetime? Has anybody ever asked you whether an artificial intelligence agent can be trained on it? Is it enough to be notified, or do we need actual consent? How representative is the data we’re going to have? What about people in rural settings? What about racial and other minorities? What about First Nations peoples?

At the end of the day, if you develop an AI model or tool that does work, Cohen asked “how do you ensure that it’s disseminated and available and licenced in a way that’s also equitable,” rather than just being used in a concierge medicine setting.

When it comes to liability, Cohen said that, just like physicians, AI tools will also make errors in patient care. “Under the current law, if you’re a physician, you face liability only when you do not follow the standard of care and an injury results, (so) the safest way to use medical AI is to confirm the thing you were going to do anyways.” But if you consider that the benefit of medical AI is to catch cases where a physician should do something different from the standard of care, he said, this approach “is leaving most of the value on the table.”

Cohen also dealt with the concept of explainable AI – having clinicians work in an environment where the algorithms AI tools use to make decisions can be understood and explained. (A topic also addressed by Dr. Jeremy Petch (PhD), director of health innovation at Hamilton Health Sciences Centre, at the recent HIMSS conference.) The challenge with this, said Cohen, is that the explanations offered for a “black box” AI’s output may fit the data but may not be accurate.

One of the session commentators, Maggie Keresteci, executive director of the Canadian Association for Health Services & Policy Research, brought a strong patient and caregiver focus to the discussion. In her remarks, Keresteci stressed the importance of patient and caregiver involvement in the development of AI in medicine, its implementation and its governance.

On the use of AI in healthcare, Keresteci said she worried that AI will ignore patient stories, “reducing us to a specific demographic diagnosis or a disease profile. Data alone is not sufficient to provide excellence in healthcare.”

Bonus Content

Just for interest, I had an AI-driven tool from Humata.ai prepare a summary of the 1-hour, 25-minute lecture. Here, in slightly edited and modified form, is what it produced.

“(The session) discusses the legal, ethical and practical considerations surrounding the use of artificial intelligence (AI) in healthcare. It highlights the need for adequate regulation of the safety, quality and privacy of AI before it comes to the market. Patients may face challenges in successfully suing healthcare professionals for negligent treatment due to the black box of algorithmic decision making. Health Canada has the ability to regulate medical devices with AI, including software. But there are concerns with the present approach and the need to deal with transparency and issues of algorithmic bias. The use of AI in the medical field raises a number of ethical and legal issues, including data privacy, bias and discrimination, as well as the need for separate governance of anonymized data. The goals of medical AI can be categorized into four areas:

  • Democratizing expertise
  • Automating drudgery
  • Optimizing resources
  • Expanding frontiers

The process of building AI tools involves acquiring data, building and validating the model, testing the model in real-world settings and disseminating the model. Liability is a concern as medical AI tools will inevitably make errors. Questions about informed consent, privacy, bias and explainability are also involved especially for underserved and underrepresented groups. The design of a comprehensive, coordinated health technology control system is needed to ensure the seamless and reliable use of AI in healthcare.
