How to reconcile physicians and AI?

Raquel Correia
January 12, 2021

The word is out there: future healthcare will be accessible, affordable, and high quality, partially thanks to Artificial Intelligence (AI). Many of AI’s benefits are irrefutable: improved efficiency, reliable workflows, strengthened patient safety, and the democratization of medicine.

Sometimes, however, unnecessary tension arises between doctors and new technologies like AI. Sensationalized headlines, a lack of transparency about how AI works, and insufficient preparation of doctors to understand its true benefits have created a climate of mistrust between the two.

The question is: can we reconcile doctors and AI?

https://a.storyblok.com/f/120667/1560x1100/4f2c0a0a28/cover_1560.jpeg
Artificial Intelligence and doctors - can we reconcile them? Illustration by Aga Więckowska.

To find an answer, we’ve looked at the latest research. What we found points towards a partnership built on education, full disclosure, and regulation.

The 2020 KPMG Health Insiders report is loud and clear: 91% of healthcare executives believe AI is increasing patient access to care, and 89% report that AI is already making their systems more efficient. Two-thirds are confident that, one day, AI will be effective in diagnosing patient conditions and illnesses. This doesn’t mean that doctors will become redundant. Quite the opposite: doctors will finally have enough time to focus on the humanistic aspect of their job, helping people lead longer and healthier lives.

The unknown is a scary place to be but, as with everything in life, you can choose to be a victim or a hero. Three common themes can change the way doctors feel about AI: education, transparent collaboration, and regulatory measures.

Education must include AI-based tools

Artificial intelligence (AI) uses algorithms to perform tasks usually done by humans. It can analyze, interpret, and act on vast amounts of data to “make faster and accurate diagnosis, reduce errors due to human fatigue, decrease medical costs, assist and replace dull, repetitive, and labor-intensive tasks and reduce mortality rates” (Paranjape et al., 2019).

Today, medical information doubles every 73 days. Compare this to the eighties, when it doubled every 7 years, and you can see how difficult it is to keep up with the latest evidence-based practices. Medical schools are ideally placed to train future doctors to practice medicine differently, since students are free of old habits and open to new knowledge. These schools can invest in curricula that make future professionals comfortable using AI and other technologies, helping them take better care of their patients by keeping up with state-of-the-art knowledge.
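
To put those doubling times in perspective, here’s a quick back-of-the-envelope calculation (our own illustration, not a figure from the studies above):

```python
# Back-of-the-envelope: how fast does medical information grow per year?
doubling_days_today = 73      # reported doubling time today
doubling_years_1980s = 7      # reported doubling time in the 1980s

growth_today = 2 ** (365 / doubling_days_today)   # doublings per year, compounded
growth_1980s = 2 ** (1 / doubling_years_1980s)

print(f"Today: ~{growth_today:.0f}x more information every year")   # ~32x
print(f"1980s: ~{growth_1980s:.2f}x more information every year")   # ~1.10x
```

In other words, the body of medical knowledge now multiplies roughly 32-fold in a single year - no individual reader can keep pace unaided.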

The American Medical Association adopted its first AI policy in 2018. Following this, institutions in the USA - such as the Duke Institute for Health Innovation, the University of Florida, and the Sharon Lund Medical Intelligence and Innovation Institute - have already begun offering AI-related courses.

To prepare doctors for the future, medical schools can implement new learning and teaching technologies. At the same time, a curriculum that relies strongly on digital literacy and lifelong learning skills ensures that doctors will be able to adapt, regardless of the new technologies that emerge.

https://a.storyblok.com/f/120667/1000x705/ae56120ece/ilu_02_684x2.jpeg
Transparent collaboration is one of the key factors building trust between Artificial Intelligence and doctors. Illustration by Aga Więckowska.

AI and doctors – setting the rules of transparent collaboration

One of the most frequently cited roadblocks preventing a successful partnership between doctors and AI is mistrust (Gille et al., 2020).

Why are doctors wary of AI? For starters, they need proof that it works, and that it works well. The number of randomized clinical trials testing the performance of AI systems is still quite low. Moreover, the way AI tools process information is often opaque, and algorithms are sometimes biased, introducing inequalities and errors into the system.

“The physician needs to understand the inputs and the algorithm and interpret the AI-proposed solution to ensure no errors are made.” (Paranjape et al., 2019).

Doctors are critical to the development of reliable and efficient AI solutions. In their 2020 State of AI Report, investors Nathan Benaich and Ian Hogarth state that “causal reasoning is a vital missing ingredient for applying AI to medical diagnosis”.

One example is having doctors amplify the capacity and efficiency of the probabilistic models that link medical concepts in the medical information database to associated symptoms, conditions, case studies, and more. “Both the mathematical inference model and the medical database are complementary elements of our solutions and cannot exist without one another,” underlines Anna Nowicka, Head of Medical Content at Infermedica. “We put the emphasis both on researching enhancements of our probabilistic algorithms and on keeping our medical content consistently rooted in evidence-based medicine.”
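
To make the idea concrete, here is a minimal sketch of the kind of probabilistic reasoning such a model performs - ranking conditions by how well they explain observed symptoms. The conditions, symptoms, and probabilities below are invented for illustration; this is not Infermedica’s actual model or data.

```python
# Minimal sketch: naive Bayes-style ranking of conditions given observed symptoms.
# All conditions, symptoms, and probabilities are invented for illustration.

# Prior probability of each condition (e.g., prevalence in the population)
priors = {"flu": 0.05, "common_cold": 0.20, "allergy": 0.10}

# P(symptom present | condition) - curated by medical experts in a real system
likelihoods = {
    "flu":         {"fever": 0.90, "cough": 0.80, "sneezing": 0.30},
    "common_cold": {"fever": 0.20, "cough": 0.60, "sneezing": 0.70},
    "allergy":     {"fever": 0.01, "cough": 0.30, "sneezing": 0.90},
}

def rank_conditions(observed_symptoms):
    """Return conditions sorted by (normalized) posterior probability."""
    scores = {}
    for condition, prior in priors.items():
        score = prior
        for symptom in observed_symptoms:
            # Unknown symptom-condition pairs get a small default likelihood
            score *= likelihoods[condition].get(symptom, 0.05)
        scores[condition] = score
    total = sum(scores.values())
    return sorted(((c, s / total) for c, s in scores.items()),
                  key=lambda x: x[1], reverse=True)

print(rank_conditions(["fever", "cough"]))
# -> flu (~0.60) ranks above common_cold (~0.40) and allergy (~0.01)
```

The division of labor mirrors Nowicka’s point: the inference code is generic, while the quality of the answer depends entirely on the expert-curated probabilities behind it.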

Regulations must be put in place to overcome mistrust

Building on the previous point: to ensure a trusting relationship between AI and doctors, legal frameworks and proper regulation need to be developed. Legislation can provide common rules for the development and implementation of AI in healthcare, protecting both patients and professionals.

Steps are being taken in this direction. The World Health Organization (WHO), together with the International Telecommunication Union (ITU), runs the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H) to develop and “establish an assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions”.

Furthermore, guidance for AI designers, the creation of third-party AI accreditation authorities, and new AI implementation policies (Gille et al., 2020) are also important measures to establish a solid foundation for this collaboration.

In April 2019, the US Food and Drug Administration (FDA) released the “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)”, or simply, the “AI Framework”.

“Our vision is that with appropriately tailored regulatory oversight, AI/ML-based SaMD will deliver safe and effective software functionality that improves the quality of care that patients receive”, they write.

The FDA proposed the Total Product Life Cycle (TPLC) regulatory approach to Software as a Medical Device (SaMD). The workflow is as follows:

https://a.storyblok.com/f/120667/1368x1505/7db0e79b7c/fda_total_product_life_cycle_tplc_scheme_684x2.png
Overlay of FDA's TPLC approach on AI/ML workflow.

Where government regulation doesn’t yet exist, companies using AI must make sure that internal standards are in place across all phases: development, implementation, use, and evaluation. This serves as evidence of quality, as well as a guarantee of good will and trustworthiness.

One example is the set of testing methods used to validate all medical content in the AI-powered triage solutions offered by Infermedica. This is also essential if you’re using the Infermedica API to build a healthcare application: “(...) checking contractual and legal requirements is very important as it helps to set up an appropriate communication of the tool and its purpose.”
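
For developers, here is a minimal sketch of what calling such an API can look like. The endpoint, headers, and payload shape follow Infermedica’s publicly documented v3 REST API at the time of writing; the credentials and the evidence ID below are placeholders, so check the current documentation before building on this.

```python
# Minimal sketch: querying a diagnosis endpoint in the style of Infermedica's
# public v3 REST API. Credentials and the evidence ID are placeholders.
import requests

API_URL = "https://api.infermedica.com/v3/diagnosis"
HEADERS = {
    "App-Id": "YOUR_APP_ID",       # placeholder credentials
    "App-Key": "YOUR_APP_KEY",
    "Content-Type": "application/json",
}

payload = {
    "sex": "female",
    "age": {"value": 30},
    "evidence": [
        # Evidence IDs come from the API's symptom catalog; this is a placeholder.
        {"id": "s_21", "choice_id": "present"},
    ],
}

response = requests.post(API_URL, json=payload, headers=HEADERS, timeout=10)
response.raise_for_status()
result = response.json()

# The response ranks likely conditions and suggests what to ask next.
for condition in result.get("conditions", []):
    print(condition["common_name"], condition["probability"])
```

Wrapping every such call in your own validation, logging, and consent flow is exactly the kind of internal standard the paragraph above argues for.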

It’s not easy to immediately improve doctors’ trust in AI

We all want better healthcare.

However, to reach that goal, doctors and AI must be reconciled. After all, they amplify each other’s capabilities and improve the quality and safety of everyday healthcare.

To build a long-lasting relationship, we should aspire to digital literacy in medical education, aim for transparency throughout AI development and deployment, and prioritize adaptive, robust regulation.

AI is already supporting physicians by assisting triage processes, improving several emergency room workflows, saving up to 25 minutes on a heart MRI interpretation, and helping them hone their skills through video games.

Whether you’re a healthcare professional or an insurance company, the best way to start building a strong relationship with AI is to work with companies that make sure their software is appropriately tested, validated, and regulated throughout its lifecycle.

Wondering how Artificial Intelligence can support your healthcare services? Get in touch with our team →

References:

Paranjape, K., Schinkel, M., Nannan Panday, R., Car, J., Nanayakkara, P., 2019. Introducing Artificial Intelligence Training in Medical Education. JMIR Med Educ 5, e16048.

Longoni, C., Morewedge, C.K., 2019. AI Can Outperform Doctors. So Why Don’t Patients Trust It? Harvard Business Review.

Das, R. Five Technologies That Will Disrupt Healthcare By 2020. Forbes (accessed 10.1.20).

Gille, F., Jobin, A., Ienca, M., 2020. What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intelligence-Based Medicine 100001.
