Regulating AI in healthcare: Fall 2023 observations — Part two

Multiple authors
October 5, 2023

NOTE: This article was published in October 2023. Commentary on the status of legislation is accurate at the time of writing.

The floodgates have opened. Artificial intelligence (AI) has broken through into almost every industry, constantly flowing and evolving like a river navigating a vast landscape of previously untouched land. But, just as a river needs to be contained within its banks, we must also ensure we have boundaries in place to direct and control such a powerful source of knowledge and potential as AI.


How can regulators determine boundaries for the safe use of AI in healthcare? Illustration by Zuzanna Szostak.

What legislation is in place to ensure the safe use of this revolutionary new tool in healthcare? And how can regulators cope with the ever-changing, ever-evolving nature of AI?

In part one of this piece, we outlined why new legislation is needed in this area and explored the approaches different regions are taking to regulate the use of AI, including in healthcare.

In this article, we dive deeper into the fundamental principles that should underpin any new regulations. We look at the challenges regulators face and highlight the responsibility that AI software manufacturers have to ensure the clinical safety and validity of their products in the healthcare market.

At a glance:

- The fundamental principles that should underpin a new regulatory framework
- The challenges regulators face when implementing new regulations
- Validation and safety of AI in healthcare
- How to move forward

Fundamental principles of a new regulatory framework

Adopting a risk-based approach is as important as creating opportunities for the development of safe technology. The former is essential for understanding what exactly requires protection, and how.

Protection of human rights

Human beings and their fundamental rights must remain at the center of consideration and protection — no doubt about that. We create this technology for us, not the other way around. Risk and impact assessment, human supervision, and transparency are the directions we should follow.

International collaboration

In today's global world, products and services cross national borders, making it necessary to harmonize regulations at the international level to ensure consistency and effectiveness.

In the case of medical products containing or relying on AI, the product development process, data governance, and algorithm refinement are at the core of all operations. Organizational processes should be established to ensure the best possible quality and performance.

For these products, creating variations or different versions for various jurisdictions is impractical: development and certification are too expensive and time-consuming to repeat for each local market's requirements. Regulators should therefore strive for close cooperation and the standardization of regulations, or they risk depriving their citizens of access to modern technologies.

Essentially, global solutions deserve a global approach. The ultimate goal would be an international regulation, such as a United Nations convention, that member states could ratify into their legal systems. We are not there yet, though, so for now the main path toward unification is collaboration among states, private and public organizations, and appointed notified bodies (organizations designated to assess the conformity of certain products before they are placed on the market).

When it comes to AI, all stakeholders are dealing with the same challenges, so sharing knowledge, exchanging experience, and unifying standards can benefit everyone: regulators, software developers, medical device manufacturers, and users.

Avoid overregulating AI

At the same time, those responsible must be careful about overregulating the industry and ‘killing the business’ by setting unreachable thresholds. Such an approach may result in a knowledge and development exodus to more ‘friendly’ territories with less stringent rules, or, indeed, the accumulation of the technology in the hands of a few large corporations that could monopolize the market — and potentially control the technology itself.

Nevertheless, we must recognize that striking a balance between protecting humans and creating space for the development of safe technologies is a difficult task. Haste is a bad advisor here, and any stringent regulation should be prepared as diligently as possible, with its impact considered from every angle.

What challenges do regulators face when implementing new regulations?

Overlapping cross-disciplinary regulations

AI in healthcare stretches across multiple disciplinary boundaries, including but not limited to the medical, finance, data, and pharmaceutical industries. The resulting overlap of regulations and needs can cause confusion about the scope of competencies and responsibilities in each field.

It is critical to create a space for collaboration and open communication with all stakeholders, such as notified bodies from other fields, legislators, experts, and representatives of the private sector. Looking beyond the European Union, it is also essential to establish communication channels with AI regulators and legislators in other jurisdictions, because such technologies will in many cases have a worldwide reach.

The lack of sufficient knowledge among notified bodies

The speed of AI development has outpaced the rate at which the personnel of notified bodies can build knowledge and expertise. Regulating AI requires a deep understanding of both the technology itself and its potential implications across healthcare.

As such, regulators may face a shortage of experts with the necessary knowledge and skills to assess the technical aspects and potential risks of AI systems. Close cooperation between regulators and manufacturers would help address this, allowing regulators to stay up to date with the latest advancements, understand the technical complexities, and develop appropriate guidelines and regulations that promote innovation while safeguarding patient well-being. In the US, the AI Training Act is one such measure that tries to address the lack of expertise among the workforce of executive agencies.

Developers can provide valuable insights into the capabilities and limitations of their AI medical devices, helping regulators make informed decisions. Conversely, regulatory authorities can offer guidance and support, ensuring that developers understand the regulatory landscape and can navigate the approval process smoothly.

Law is slower than technology

This is a fact. Enacting legislation that does not quickly become outdated is a huge challenge. A recent example is the European AI Act: during negotiations, the draft became somewhat outdated after the public launch of ChatGPT and similar generative solutions, and legislators had to amend their language to cover this type of technology.

The EU has not overlooked this aspect. One attempt to mitigate this unequal pace is the power the AI Act gives the European Commission to amend the list of high-risk AI systems by adding or modifying areas or use cases when necessary.


Validation and safety of AI in healthcare

When developing AI systems, it's important to establish clear performance requirements from the beginning. This lays a strong foundation for the system's operation. It's also crucial to gather information and understand how the system learns and adapts over time. Following a professional software development lifecycle that includes thorough verification and validation is vital for ensuring reliability.
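
To make this concrete, here is a minimal sketch, in Python, of what "clear performance requirements from the beginning" could look like in practice: acceptance criteria written down as explicit, testable values rather than as prose in a specification document. The class name, metric choices, and threshold values are illustrative assumptions on our part, not figures drawn from any regulation.

from dataclasses import dataclass

@dataclass(frozen=True)
class AcceptanceCriteria:
    """Minimum performance a model release must demonstrate."""
    min_sensitivity: float  # true-positive rate on known-positive cases
    min_specificity: float  # true-negative rate on known-negative cases

# Hypothetical thresholds, agreed with clinical and regulatory
# stakeholders before development begins.
TRIAGE_MODEL_CRITERIA = AcceptanceCriteria(
    min_sensitivity=0.95,
    min_specificity=0.80,
)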

In the field of medical devices, regulations focus on managing risks throughout the entire lifecycle, covering both the hardware and software aspects. Recently in the US, the National Institute of Standards and Technology (NIST) introduced the NIST AI Risk Management Framework (AI RMF). This framework, released on January 26, 2023, provides a comprehensive approach to address the risks associated with AI for individuals, organizations, and society. Its aim is to enhance trustworthiness considerations in the design, development, use, and evaluation of AI products, services, and systems.

For manufacturers of medical devices, the only secure and reliable approach is to implement a robust process of verification and validation that ensures safety and maintains an adequate level of accuracy. Through validation, manufacturers can verify that their AI-powered devices function accurately, provide reliable results, and operate within acceptable performance parameters. This not only safeguards patient health but also instills confidence in healthcare professionals who rely on these devices for diagnosis and treatment. 
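
As a rough illustration of such a verification and validation step, the sketch below evaluates a model's predictions on a held-out test set against the hypothetical acceptance criteria defined earlier and blocks the release if any threshold is missed. It assumes scikit-learn for the confusion matrix; the function name and the gating logic are our own illustration, not a prescribed regulatory procedure.

from sklearn.metrics import confusion_matrix

def validate_release(y_true, y_pred, criteria):
    """Return True only if every acceptance criterion is met."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity={sensitivity:.3f} (required >= {criteria.min_sensitivity})")
    print(f"specificity={specificity:.3f} (required >= {criteria.min_specificity})")
    return (sensitivity >= criteria.min_sensitivity and
            specificity >= criteria.min_specificity)

# Usage: make the release pipeline fail when validation fails.
# if not validate_release(y_test, model.predict(X_test), TRIAGE_MODEL_CRITERIA):
#     raise SystemExit("Release blocked: acceptance criteria not met.")

A real validation process would of course cover far more, including data representativeness, subgroup performance, and post-market monitoring, but encoding even a basic gate like this in the release pipeline makes the requirement explicit and auditable.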

Validation is explored in depth in the publication "Validation of artificial intelligence containing products across the regulated healthcare industries" by Dr. David C. Higgins of the Berlin Institute of Health and Prof. Christian Johner of Johner Institut GmbH. The publication delves into the intricacies of validating AI-containing products in regulated healthcare industries, shedding light on best practices and considerations for ensuring safety and efficacy. According to the authors, there are clear gaps in knowledge, along with inconsistencies across the system, as to how products containing AI technologies should be validated. Engineers should take the importance of validation seriously and adapt their approach to developing AI and machine learning technologies to meet regulatory requirements.

How do we move forward?

So, where does all this leave us when it comes to regulating healthcare AI? When evaluating modern products that incorporate AI components, it is important to balance requirements against the risk and the value the product can bring to the market. And while we recognize the negative impact of excessively rigid regulation on the development of these technologies, we understand that strict control is required in sensitive fields such as healthcare.

The importance of validation and safety cannot be overlooked, and it is going to take extensive collaboration across disciplines and nations to produce a comprehensive and effective set of rules that safeguards both patients and providers.

In the meantime, healthtech manufacturers need to take this responsibility upon themselves — aided by guidelines, frameworks, or other non-statutory instructions issued by national institutions — to ensure their products and services are held to the highest standards of accuracy and to mitigate potential risk factors. At the end of the day, verifying the accuracy of technologies through rigorous testing and external validation is essential.

BL/EN/2023/10/05/1