AI and ML Decision Support Models

AVBCC Highlights

There are many questions about how artificial intelligence (AI) and machine learning (ML) will impact cancer care and healthcare in general. But to understand what’s to come, you first need to understand what these applications are and how they have evolved. Will Shapiro, Vice President of Data Science at Flatiron Health, opened the conversation with a high-level overview of AI and ChatGPT, explaining what each letter in the GPT acronym stands for and how Google’s development of the Transformer (the T in ChatGPT) changed the game by enabling the technology to take in and produce huge amounts of text.

The Issue of Trust

One of the recurring themes of the conversation was the validity, or trustworthiness, of the information generated by AI platforms.

“Bad data leads to bad outcomes,” said David Holecek, Co-Founder of Wild Type Advocates. “The worst part is that the tools are so good [they] can hallucinate an answer…and it looks totally feasible. It’s hard to assess the accuracy, so it’s critical that the data is right.”

Another concern he raised is that ChatGPT’s training data only extends through September 2021, so its knowledge has effectively plateaued. One solution Mr. Holecek offered to the question of validity is to bring the voice of the oncologist, the expert, into the data.

“It’s not just what data is input, but what’s the sequence of the data, what do you look at first, how do you break ties?” he said. “We believe the best strategy moving forward is to take a baseline tool, fine-tune it, and populate it with oncologist-curated data and then test and compare the tools based upon physician responses.”
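To make the idea of populating a baseline tool with oncologist-curated data a little more concrete, here is a minimal sketch of how curated question-and-answer pairs might be packaged for supervised fine-tuning. The file format, field names, and placeholder content are illustrative assumptions, not details shared by the panel.

```python
import json

# Hypothetical oncologist-curated Q&A pairs: each answer has been written or
# reviewed by a physician, so it can serve as a trusted fine-tuning target.
curated_examples = [
    {
        "question": "Illustrative clinical question goes here.",
        "answer": "Physician-reviewed answer text goes here.",
    },
    # ...more curated pairs...
]

# Write the pairs as JSONL, a layout many fine-tuning services accept;
# the exact schema depends on the service, so treat these keys as placeholders.
with open("oncology_finetune.jsonl", "w") as f:
    for ex in curated_examples:
        record = {
            "messages": [
                {"role": "user", "content": ex["question"]},
                {"role": "assistant", "content": ex["answer"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

A separate, held-back portion of such curated pairs could then serve as the basis for comparing tools against physician responses, as Mr. Holecek describes.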

Mr. Holecek also pointed to the tremendous potential that the “right AI tool” could have in addressing shortages in healthcare by creating efficiencies not just for the physician but also for payers, medical affairs teams, and beyond.

“I think it’s important to be skeptical and to really engage and understand what’s going on because we all know you can really harm people if the model you’re using isn’t working,” summarized Moderator James Hamrick, Director of Medical Oncology at Flatiron Health.

Improving Clinical Trial Enrollment

One area where AI is particularly promising is making clinical trial enrollment more efficient.

“Where we’re seeing a benefit is in the allocation of promotional and educational dollars,” said Jeffrey Becker, Vice President of Business Development for PRECISIONxtract. “We’re not doing blanket LinkedIn ads but [using AI for] prompts for more tailored, curated content.” This also translates into better, more curated and personalized resources for clinicians and patients.

Susan Weidner, Senior Vice President of IntrinsiQ Specialty Solutions, shared that AI and ML are also being used to select the practices with the right patients for trials.

“What a lot of people don’t realize is that the tools aren’t comprehensive enough to actually find the right patients to be on the trial,” she said. “But by using data to document line of therapy you can match what a manufacturer may be looking to do in a trial. Then, inside of that, we have so much structured and unstructured data that often what you really need to be looking for is the combination of the two to determine how closely a patient matches trial criteria.”
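As a rough illustration of combining structured and unstructured data for trial matching, the sketch below scores a patient against simple trial criteria: structured fields act as hard filters, and keyword hits in free-text notes add confidence. The field names, codes, and keyword search are stand-ins for a real NLP pipeline, not a description of any actual vendor system.

```python
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    # Structured fields, e.g. pulled from the EHR or a registry (illustrative)
    diagnosis_code: str
    line_of_therapy: int
    notes: list[str] = field(default_factory=list)  # unstructured clinic notes

def match_score(patient: PatientRecord, trial: dict) -> float:
    """Return a rough 0-1 score for how well a patient matches trial criteria.

    Structured criteria are hard filters; unstructured notes add confidence.
    A real system would use an NLP model instead of keyword search.
    """
    # Hard filters on structured data
    if patient.diagnosis_code != trial["diagnosis_code"]:
        return 0.0
    if patient.line_of_therapy != trial["required_line_of_therapy"]:
        return 0.0

    # Soft evidence from unstructured notes (keyword search as a stand-in)
    keywords = trial["note_keywords"]
    hits = sum(
        any(kw.lower() in note.lower() for note in patient.notes)
        for kw in keywords
    )
    return 0.5 + 0.5 * (hits / len(keywords) if keywords else 1.0)

# Example usage with made-up values
trial = {
    "diagnosis_code": "C34.1",          # illustrative code only
    "required_line_of_therapy": 2,
    "note_keywords": ["EGFR", "progression"],
}
patient = PatientRecord("C34.1", 2, ["Progression noted; EGFR mutation positive."])
print(match_score(patient, trial))  # 1.0 for this toy example
```

A production system would replace the keyword search with a model that extracts concepts such as biomarker status and progression events from the notes.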

Relieving Administrative Burdens

Among the many benefits of AI in healthcare is its potential to alleviate the administrative burden on physicians and practices.

“The advancement in AI language [models] really changes the game,” said Tanya Park, Director of Innovation Technologies at Cardinal Health. “Take, for example, dictation: it’s now way more accurate. You don’t have to go back and make corrections. Beyond that, there are opportunities for these tools to actually create language, something we couldn’t do before. So now we can use AI to do things like write prior authorization letters.”
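As a sketch of the prior authorization use case, the snippet below assembles case details into a prompt and asks an LLM to draft the letter. The client library, model name, and field names are assumptions chosen for illustration, and any draft produced this way would still need human review before it is sent.

```python
from openai import OpenAI  # any LLM client would work; this is just one option

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative, de-identified case details that staff would normally assemble
case = {
    "drug": "ExampleDrug 100 mg",
    "diagnosis": "Example diagnosis (placeholder)",
    "prior_therapies": "Two prior lines, both with documented progression",
    "guideline_support": "Listed in the relevant clinical guideline (placeholder)",
}

prompt = (
    "Draft a prior authorization letter to a payer requesting coverage.\n"
    f"Requested therapy: {case['drug']}\n"
    f"Diagnosis: {case['diagnosis']}\n"
    f"Prior therapies: {case['prior_therapies']}\n"
    f"Guideline support: {case['guideline_support']}\n"
    "Keep the tone formal and cite only the facts provided."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

draft_letter = response.choices[0].message.content
print(draft_letter)  # a human reviewer would edit and approve before sending
```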

Lavi Kwiatkowsky, Founder of Canopy, added to this theme, explaining that AI can also understand voice and help with image processing. “For example, a human who reads an entire patient medical record is able to extract the context. This understanding of language basically allows the machines to communicate with us in our own language. And that’s a huge transformation.”

Mr. Kwiatkowsky also spoke to the topic of hallucinations, an issue he believes will be resolved soon.

“Imagine if you could ask a ChatGPT trained on medical data any medical question as if it were the world’s greatest expert on any disease, and it could provide you with the information. I think that’s going to change how physicians, how providers in general, practice medicine.”

Mitigating AI Risk

When evaluating AI tools, it’s important to keep in mind what you’re trying to accomplish and then test the tool’s ability to accomplish that task accurately. Mr. Kwiatkowsky suggests testing a tool’s capabilities on your own data sets. Another recommendation is to get references from others who have used the tool, because demos can sometimes be misleading.

“I think getting familiar with some of the basic performance metrics for machine learning is a really useful thing to do right now,” suggested Mr. Shapiro. “But more than anything, having really high-quality ground truth data that you can validate the output against is critically important. That actually gives us comfort that what they’re producing is correct and sometimes actually outperforms what humans can do.”
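Here is a minimal sketch of what that validation step might look like in code, assuming you have hand-labeled ground-truth data and the tool’s predictions for the same cases; the labels below are made up, and scikit-learn is simply one convenient way to compute the standard metrics.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hand-curated ground-truth labels for a small validation set
# (1 = condition present, 0 = absent); values are made up for illustration.
ground_truth = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

# The AI tool's predictions on the same cases
predictions = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# Basic performance metrics worth getting familiar with
print("accuracy :", accuracy_score(ground_truth, predictions))
print("precision:", precision_score(ground_truth, predictions))
print("recall   :", recall_score(ground_truth, predictions))
print("f1       :", f1_score(ground_truth, predictions))
```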

Mr. Holecek added that it is also important for the tools to display footnotes and references side by side with the generated text. “Show me the proof, you know, I want to see where you’re pulling this from. I think it’s important that the user sees there’s actually something behind the model and it’s accurate. But it will take time to build this trust.”
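One simple way to meet the “show me the proof” requirement is to return the supporting passages and their source identifiers alongside every generated answer. The toy sketch below uses keyword overlap as a stand-in for real retrieval-augmented generation; the corpus, identifiers, and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    source_id: str   # e.g. a journal citation or internal document ID
    text: str

# Toy corpus; in practice this would be a curated, referenced knowledge base
corpus = [
    Passage("ref-001", "Guideline X recommends therapy A for stage II disease."),
    Passage("ref-002", "Label update Y added an indication for therapy B."),
]

def answer_with_references(question: str, passages: list[Passage]) -> dict:
    """Return an answer plus the passages that support it.

    Keyword overlap stands in for real retrieval; a production system would
    use embeddings or a search index, then generate text grounded in the hits.
    """
    terms = {w.lower().strip("?.,") for w in question.split()}
    hits = [p for p in passages if terms & set(p.text.lower().split())]
    answer = " ".join(p.text for p in hits) or "No supporting source found."
    return {
        "answer": answer,
        "references": [p.source_id for p in hits],  # shown side by side with the text
    }

print(answer_with_references("What does Guideline X recommend?", corpus))
```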

Where Humans Fit In

When looking at the promise of AI, it’s easy to wonder where humans fit into the picture. Across the board, panelists agreed that the provider’s role will lie in data validation, decision-making, and direct patient care.

“If you really sit down and talk with physicians, they will say that they want to stay informed, but don’t take away their decision-making capabilities,” said Ms. Weidner. “So, we need to find a balance: how does AI create efficiencies without removing what the physician does best?”

“The benefit of this technology is that it removes the need for physicians to memorize every [diagnostic and treatment] tree,” said Mr. Kwiatkowsky. “I think it’s a completely viable universe where clinicians can actually use their empathy to explain the options to a patient and what’s best for them and work to support their decision making, versus being some person who’s memorized everything.”

“In our survey, the number one pain point for clinicians was biomarkers and precision models,” added Mr. Holecek. “Whatever they do, they can’t keep up. And if you’re treating a bunch of diseases, how do you keep up with guidelines and label changes and indicators—you can’t do it. AI can pull this information up for the provider and give that time back to them so they can do what they do best. We need to get back to letting the doctors do their job—that’s the most beautiful thing we have in medicine.”
