We are living in an extraordinary era for artificial intelligence (AI) in healthcare. AI opens remarkable opportunities to improve the way healthcare is delivered and consumed. At Infermedica, we are driven by our mission to make healthcare accessible, accurate, and convenient for everyone, and AI is at the heart of making that mission a reality.
LLMs and the promise of a more empathetic and accurate healthcare AI
We are witnessing rapid changes in the world of AI. Just nine months ago, only a select few were familiar with AI and Large Language Models (LLMs). Today, hundreds of millions of people have used ChatGPT, the groundbreaking AI chatbot by OpenAI, which continues to make waves in the mainstream media, and many already rely on it daily to enhance their work. The same technology is now being integrated into search engines like Microsoft’s Bing, with Google’s Bard soon to follow. It is fair to say that this is probably the fastest diffusion of a new technology on record and that we are entering a new age, the Age of AI.
No doubt, the recent advancements in LLMs are impressive, and what is even more breathtaking is the pace at which the technology is progressing. Every month brings new breakthroughs, pushing the boundaries of what AI can achieve. We can reasonably expect that as the field progresses, the limitations we face today will gradually be overcome.
When it comes to healthcare, GPT-4, the latest language model by OpenAI, can answer questions at a level that exceeds the pass mark of the US Medical Licensing Examination, as can Google’s medical LLM, Med-PaLM 2. There is also emerging evidence that LLMs like ChatGPT can provide high-quality, empathetic answers to patients seeking medical advice.
In the near future, LLM-enabled tools could interact with patients, easing the burden on physicians who are already stretched by an ever more demanding healthcare system.
Trust and safety: the cornerstones of Healthcare AI
However, healthcare is a high-stakes domain, and any promising technological advancement must be rigorously scrutinized for trust and safety. Those of us implementing AI solutions must understand the limitations of these new technologies, choose the right tools for the right problems, and hold each other to the highest standards of validation.
LLMs, like GPT-4, hold enormous potential to improve healthcare delivery and patients' quality of life, but there are risks involved. To understand these challenges, it is crucial to understand how LLMs function: they generate text one word at a time, based on patterns learned from massive amounts of largely unfiltered text from the internet.
The most advanced LLMs, like GPT-4, are further fine-tuned to follow human instructions. There’s no denying that GPT-4 is a very powerful AI, and its results, as many of you may have already experienced, are impressive. However, being right most of the time, as GPT-4 might be today, without a full accounting of the consequences when it is wrong, simply doesn't meet the bar of quality for healthtech providers.
There are three outstanding challenges that we need to overcome for LLMs to be successful in healthcare:
Lack of clinical validation - given the breadth of potential use cases, there are no established methods capable of reliably measuring the clinical safety and accuracy of LLMs. Clinical validation is not only essential for obtaining regulatory approval as a medical device, but also for fostering trust among patients and healthcare providers. Moreover, it can protect providers from liability in case of negative patient outcomes.
Lack of explainability - there is no effective way to understand why a specific result was generated by an LLM, making it an impractical solution for healthcare providers who may need to investigate adverse incidents or complaints.
Unpredictable and uncontrollable behavior - the words produced by an LLM are sampled randomly, one after another, making the final output hard to predict or reproduce and leaving the model sensitive to subtle changes in user input. While fast progress is being made in this area, with increasingly sophisticated methods for aligning LLM behavior with expected outputs, we’re still far from the standards needed in healthcare.
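The sampling behavior behind that last challenge can be illustrated in a few lines of Python. The toy next-word distribution below is purely hypothetical, not output from any real model; it simply shows why greedy decoding is reproducible while sampled decoding is not.

```python
import random

# Hypothetical next-word probabilities for the word following
# "The patient reports chest ..." (illustrative numbers, not a real model).
next_token_probs = {"pain": 0.70, "tightness": 0.20, "discomfort": 0.10}

def greedy_pick(probs):
    """Deterministic decoding: always take the most likely word."""
    return max(probs, key=probs.get)

def sample_pick(probs, rng):
    """Stochastic decoding: draw a word in proportion to its probability."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Greedy decoding always returns the same word for the same input...
print(greedy_pick(next_token_probs))

# ...but sampled decoding varies from run to run, which is why two
# identical prompts can yield different answers from the same LLM.
sampled = {sample_pick(next_token_probs, random.Random(seed)) for seed in range(50)}
print(sampled)  # typically contains more than one distinct word
```

Real LLMs repeat this draw thousands of times per response, so small random differences early on can compound into entirely different answers.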
AI’s real-world impact on primary care today
While it might take time for LLMs to meaningfully improve healthcare in the real world, other AI technologies, such as expert systems, have been successfully automating primary care for years. Expert systems use a curated knowledge base and a set of rules to mimic the decision-making abilities of a human expert in specific domains, like medicine.
These AI systems can reason, infer, and provide recommendations based on the input provided by the user. They offer a mature, consistent, reliable way to automate decision-making processes, similar to how a doctor operates in the real world.
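As a rough sketch of the rule-based reasoning described above, consider a tiny forward-chaining engine. The rules and medical terms here are invented for illustration and are in no way Infermedica's actual knowledge base:

```python
# Minimal illustrative expert system: rules map a set of required
# findings to a conclusion. All rules and terms here are hypothetical.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "shortness of breath"},
     "recommend clinical evaluation"),
]

def infer(findings):
    """Forward chaining: fire every rule whose conditions are satisfied,
    add its conclusion as a new fact, and repeat until nothing changes."""
    facts = set(findings)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts - set(findings)  # return only the derived conclusions

print(infer({"fever", "cough", "shortness of breath"}))
```

Because every conclusion can be traced back to the explicit rule that produced it, this style of system is auditable and reproducible in a way that today's LLMs are not.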
At Infermedica, we’ve spent the last 11 years developing an expert system AI. Our Inference Engine accurately draws conclusions from patient data and uses them to construct dynamic medical interviews on the fly, asking the right questions to gather supplementary health information, better understand a patient's state of health, and give appropriate recommendations. Because we also recognize the importance of letting patients express themselves in their own words, we have developed a natural language layer, our NLP Engine, that can safely interpret patient complaints written in plain language and map them to precise medical concepts.
The Inference and NLP Engines are at the core of the Medical Guidance Platform, our digital platform that guides patients, clinicians, and administrators toward more meaningful health outcomes and already benefits more than 100 companies around the world.
Our commitment to making healthcare more accessible, accurate, and convenient
At its core, we firmly believe that technology in healthcare must be trustworthy, empathetic, accessible, and flexible. That is why we are focused on instilling trust in Infermedica’s AI approach and providing clarity and direction amid the uncertainty surrounding AI for healthcare.
At Infermedica, we are committed to improving our AI, which is trusted by some of the largest healthcare providers and insurers in the world. Our team of healthcare professionals, who have spent more than 84,000 hours working on our medical content to date, continues to build the Medical Knowledge Base that will fuel the healthcare AI of tomorrow.
We know that, whatever technology drives the healthcare AI of the future, solutions will have to rely on accurate and trustworthy data, such as the data we’ve amassed over the last 11 years. AI is our ally as we progress toward making healthcare accessible, accurate, and convenient for everyone, and this is why we will continue innovating and driving the field forward in a safe and responsible way.