“Physicians are trained mechanistically, in why something works some way. [But] most of the algorithms offer no perspective on why a particular lesion is classified as benign or malignant. They only offer the end-point solution. There’s hesitancy [to embrace AI technology] because our brains are trained to think about [the question ‘Why?’].”
— Dr. Siddhartha Mukherjee, “A Peek Into the Future of Biomedical Transformation” (2022)
Earlier this year, I had a chance to give a kind of “Grand Rounds” presentation to a wonderful group of surgeons at the University of Michigan Medical School. The topic: artificial intelligence.
Ever since then, I’ve kept my eye out for resources that explore the role AI may play in the way doctors care for their patients. Below are a few of them.
Enjoy!
—The Digital Lawyering Team
AI Reading: Professor Harnesses AI to Act Like a Patient (Dartmouth Geisel School of Medicine, 2024)
Sample Insights:
“A new application developed at Dartmouth and geared to medical students is tapping the power of artificial intelligence to role-play as a patient.”
“The app can be thought of as a customized version of ChatGPT, says [Professor] Thomas Thesen, allowing educators to create a database of tailor-made cases that would be most instructive for students learning the ropes of clinical history-taking and diagnosis.”
“Modeling a clinical interaction with a virtual patient is an easy first step before graduating to actual clinical settings or even mock interviews with actors, says Nsomma Alilonu, a second-year medical student who worked with Thesen on the experimental design for evaluating AI Patient Actor, creating a feedback rubric and providing a student’s perspective on the app’s functionality. ‘It’s a very good way to practice interviewing patients in a stress-free environment and get formative feedback,’ she says.”
“‘In higher education there is a prevailing fear that AI will take the human side out of learning,’ says [Professor Thesen]. ‘The beauty of this way of using AI is that it actually helps students to become better communicators and ultimately connect better with their patients.’”
AI Listening: Helping Doctors Make Better Decisions with Data (Me, Myself, and AI Podcast, with special guest Professor Ziad Obermeyer of UC Berkeley, 2023)
Sample Insights:
“If we train an algorithm that just encodes the radiologist’s knowledge in an algorithm, we’re going to encode all of the errors and biases that that radiologist has. So what we did instead is we trained an algorithm to predict not what the radiologist said about the knee but what the patient said about the knee. We trained the algorithm to basically predict, is this knee a painful knee or not? That’s how we designed an algorithm that could expose bias in, not the radiologist, [but] in medical knowledge.”
“You need to understand how to do useful things with data. But you also need to really understand the clinical medicine side of these problems to be effective because you can’t just swap in the radiologist’s judgment for the judgment of whether there’s a cat or not in this image. It’s a much, much harder problem.”
“There are clearly downsides to using health data for product development, and I think that there are real risks to privacy and a lot of things that people care about. I think those risks are real, and they’re very salient to us. There’s another set of risks[, however,] that are just as real but a lot less salient: [the risks of] not using data.”
“Doing machine learning in medicine is very, very different from other areas, because we fundamentally have a more complicated relationship with the ground truth. And human opinion, as trained as these experts are and as much practice as they’ve gotten over years of residency and training — we can’t consider that the truth.”
“We [recently] saw some sad news from yet another promising Alzheimer’s drug. It’s been pretty sad news for decades in this area. And one of the reasons is this weird fact that I hadn’t thought of until I started seeing some of these [AI] applications, which is that if you want to run a trial for a drug for Alzheimer’s, you have to enroll people who have Alzheimer’s. But that means the only drugs that you can develop are the ones that basically have to reverse the course of a disease that’s already set in. So now imagine you had an Alzheimer’s predictor, that with some lead time could find people who are at high risk of developing Alzheimer’s but don’t yet have it. Now you can take those people and enroll them in a clinical trial. And now you can test a whole new kind of drug, a drug that could prevent that disease, instead of having to reverse it or slow it down. So, that’s, I think, really, really exciting too.”
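Professor Obermeyer’s first insight above, about predicting the patient’s report rather than the radiologist’s, is concrete enough to sketch in code. What follows is a minimal, hypothetical illustration using synthetic data and scikit-learn: the same model is trained twice, once on the expert’s label and once on the patient’s reported pain, and the disagreements between the two are flagged. Every name and number here is invented; this is a sketch of the design choice, not a reproduction of his study.

```python
# Hypothetical sketch of the "label swap" Obermeyer describes: train one
# model on the radiologist's label and another on the patient's reported
# pain, then flag cases where the two disagree. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = rng.normal(size=(n, 20))  # stand-in for features of a knee X-ray

# Synthetic labels that only partially overlap: the radiologist's grade
# tracks one feature; the patient's pain also reflects a second one.
radiologist_label = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)
pain_label = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, rad_tr, rad_te, pain_tr, pain_te = train_test_split(
    X, radiologist_label, pain_label, random_state=0
)

model_rad = LogisticRegression(max_iter=1000).fit(X_tr, rad_tr)
model_pain = LogisticRegression(max_iter=1000).fit(X_tr, pain_tr)

# Knees the pain-trained model flags but the radiologist-trained model
# misses are candidates for "pain the textbook grading scheme ignores."
disagree = (model_pain.predict(X_te) == 1) & (model_rad.predict(X_te) == 0)
print(f"Flagged by pain model but not radiologist model: {disagree.sum()}")
```

The interesting output is not the model’s accuracy but the disagreement set: in Obermeyer’s framing, those are the cases where “medical knowledge” itself, not any individual radiologist, may be missing something.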
AI Watching: AI in Law and the Application in Health Policy (Professor Nicholson Price of the University of Michigan Law School, 2024)
Sample Insights:
“It’s pretty easy to find lots of ways that AI can exacerbate bias and disparities in medicine. AI is trained on data, and the questions [we need to ask are]: ‘Where are those data collected? About whom are they collected? And how do they turn into systems that make predictions and recommendations?’ If the answer is that those data are collected about only a certain set of people, we’re going to have bad tools for everybody that’s not in that set. If the answer is that we collect data that are fully representative but what those data are representative of is a health care system that is itself rife with deeply embedded biases, then the AI that results is going to reflect those biases as well.”
“There’s more regulation of AI in healthcare than in most other spaces [because] lots of AI in healthcare falls under the definition of a medical device. Even though it is strange to think of software as a medical device, the FDA certainly thinks that it is. So hundreds of products have gone through FDA review to make sure that they are safe and effective. That’s simply not the case in lots of other industries.”
“There are lots of things the FDA doesn’t see and lots of things it can’t really do [when reviewing an AI tool]. The FDA can say: ‘Does [this AI tool] work in general?’ But it can’t say: ‘Does [this AI tool] work in your hospital for your patients—or for your [specific] patient, the one standing right in front of you?’”
AI Exercise: AI Patient Actor
Try out the Dartmouth AI app (“AI Patient Actor”) mentioned above in the “AI Reading” section. Unless you are a doctor, the point of this exercise is not to sharpen your diagnostic skills or improve your bedside manner. Instead, I want you to start thinking of ways a similar AI app might be helpful in your own line of work.
Here are a few possibilities (a short code sketch of the last one follows the list). But see if you can come up with at least three more on your own.
Commercial Litigators: An AI app that simulates deposing a witness
M&A Lawyers: An AI app that simulates a negotiation
Immigration Lawyers: An AI app that simulates a client intake interview
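To see how little scaffolding such a simulator needs, here is a minimal, hypothetical sketch of the client-intake example using the OpenAI chat API (any chat-capable model would do). The system prompt, persona, and model name are my own assumptions; this is not how the Dartmouth team built AI Patient Actor.

```python
# Hypothetical sketch of a role-play simulator in the spirit of AI
# Patient Actor, adapted to an immigration-law client intake interview.
# The persona, prompt, and model choice below are assumptions.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

SYSTEM_PROMPT = """You are role-playing a client at an immigration-law
intake interview. Stay in character: answer only what the lawyer asks,
volunteer nothing, and be vague about dates unless pressed. Persona:
you overstayed a student visa and are worried about your work permit."""

history = [{"role": "system", "content": SYSTEM_PROMPT}]

while True:
    question = input("Lawyer: ")
    if not question:  # an empty line ends the interview
        break
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=history,
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Client: {answer}")
```

Swapping in a deposition witness or a negotiation counterparty is mostly a matter of rewriting the system prompt, which is exactly why Thesen describes the underlying app as “a customized version of ChatGPT.”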
We’ll be back in mid-January with more AI-related resources.