Regulating AI in healthcare: Fall 2023 observations — Part one

Multiple authors
September 22, 2023

NOTE: This article was published in September 2023. Commentary on the status of legislation is accurate at the time of writing.

No doubt about it: artificial intelligence (AI) is bursting onto the healthcare scene, and it's already causing quite a stir. However, the rapid expansion of AI capabilities and the surge in demand for its use in healthcare have left existing regulations outdated, with mounting pressure to create adequate policies and guidelines that safeguard patients and providers.


Regulating AI in healthcare requires a careful balance between ensuring clinical safety and not stifling development and growth. Illustration by Magdalena Kościańska.


So, how are those in charge of establishing this legislation going to catch up? What challenges do they face? And how important is this for the development and growth of AI in healthcare?

In this article, we offer an understanding of where we are now with regard to current regulations and the approach that various regions are taking. In our follow-up article, "Regulating AI in healthcare: Fall 2023 observations — Part two," we take a look at the top considerations when forming this new legislation and the responsibilities that lie with healthtech companies themselves.

A shift in healthcare regulation needs

In decades past, the realm of medical devices had a clear definition: a medical device was an apparatus or tool used by a physician to achieve a specific effect on a patient — improving their health and monitoring or alleviating symptoms. This scope was well-defined until the first major shift occurred: medical devices designed for patient use, for self-monitoring health at home.

This change allowed the medical device to be ‘detached’ from the realm of professionals and placed in the hands of laypeople. Regulations had to adapt to this situation, and manufacturers of self-health management products faced a set of additional requirements imposed by regulatory bodies due to the emergence of this new user group — the everyday person. 

We are now witnessing an even further-reaching change — software as a medical device. It’s no longer simply about medical tools in human hands, but rather about intelligent software that aims to largely take control of tasks where it excels: analyzing vast datasets or performing operations free from human fatigue or exhaustion. The possibilities are immense, but so is the responsibility. Therefore, the need for new regulations arises, ones that would reflect the advancement of technology and keep up with the ever-evolving changes and expanding potential applications of AI in medicine.

Current healthcare regulations fall short

Although the healthcare sector is heavily regulated, no regulations directly target the use of AI in medical devices. Most existing instruments are drafts or guidelines, meaning they contain only non-binding recommendations, which is insufficient in such a high-stakes domain.

Notably, countries are taking distinctly different approaches to AI regulation. We've outlined the approaches of three such jurisdictions below, and in the coming years we will be observing which proves most effective, secure, and nurturing of innovation.

United Kingdom AI

Earlier this year, the United Kingdom published its AI regulation white paper, "A pro-innovation approach to AI regulation." Instead of proposing a single AI regulation and a single AI-specific regulator, the UK opted for a sector- and context-specific approach focused on the application of AI, which may carry different risks and benefits depending on the sector and the use case.

That means healthcare service providers may be interested in using generative AI not designed specifically for this sector, such as ChatGPT, and the Medicines and Healthcare products Regulatory Agency (MHRA) would be the dedicated authority to set guidelines and determine principles in this area. The MHRA has already presented its Software and AI as a Medical Device Change Programme - Roadmap, which gives insight into, among other things, how to interpret existing regulation and where to find useful guidance where regulation is lacking. One example is the Good Machine Learning Practice for Medical Device Development: Guiding Principles, guidance issued through international cooperation among the U.S. Food and Drug Administration (FDA), Health Canada, and the UK's MHRA.


European Union AI

The European Union, on the other hand, plans to apply a risk-based, AI-specific approach through its much-discussed AI Act. The AI Act will exist autonomously and, as such, will overlap with already existing sectoral regulations such as the Medical Devices Regulation and the In Vitro Diagnostic Medical Devices Regulation (MDR/IVDR). That means companies developing software such as AI-powered medical devices that process consumers' personal data will have to operate under one additional regime and be subject to a dedicated AI supervisory authority (per the latest EU Parliament proposal, there should be just one such authority per Member State).

USA AI

The USA sits somewhere between the UK and the EU. Looking at the bill for the Algorithmic Accountability Act of 2022 (AA Act), one could draw a comparison with the EU AI Act, but its applicability would be much narrower: the AA Act would apply only to large companies that use automated decision systems to make critical decisions, including those related to health and to consumers. As of now, it is stuck in Congress, and its ultimate scope and possible date of enactment are unknown. In the meantime, President Joe Biden has met with core players (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) at the White House to secure voluntary commitments from them to manage the risks of AI-based development. Of course, America will not rely solely on tech giants' promises: the President signed an Executive Order focused on fighting bias to protect the public from algorithmic discrimination, and issued a Blueprint for an AI Bill of Rights to "guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence."

The future of AI regulations

It’s clear that current legislation is not sufficient for regulating the use of AI in healthcare, yet the various approaches being taken by different regions provide interesting insight into what the future of healthcare AI regulations might look like.

In part two of this article, we dive deeper into the considerations regulators will need to weigh when drawing up these new laws, and look at the challenge authorities face in balancing clinical safety against the risk of stifling the advances that AI-driven solutions can bring to healthcare.

BL/EN/2023/09/22/1