Episode Transcript
[00:00:00] Speaker A: Healthcare and care in general is about human connection and human touch. And so I'm hoping the AI lets us move back to that, where we can focus on how I can look you in the eye and have a conversation and less about the technology in general. In my mind, this is not a healthcare problem. This is a societal problem. Right now, we've all lost some human touch, and I think in healthcare we can set the example for how we can use technology to get back. I think it's something we hope we can do everywhere.
[00:00:40] Speaker B: Hello everyone, I'm Amar Paria. I'm a managing director in Alvarez and Marsal's healthcare industry group. I'd like to welcome you to our podcast, AI in Healthcare, along with my co-host, Doctor Lakshmi Halasyamani from Endeavor Health. Today on the podcast, I have a distinguished panel of three leading healthcare practitioners. First, I'd like to welcome Doctor Maulin Shah from Providence Health. Second, I'd like to welcome Doctor Brian Patterson from UW Health. And finally, I'd like to welcome Doctor Nirav Shah from Endeavor Health. This is going to be a very exciting podcast series and I'm excited to welcome you to it. Thank you.
AI has been such a talked-about technology, especially in healthcare, and the opportunities seem limitless. I'm eager to hear from all of you, first on what excites you most about AI. Doctor Patterson, let's start with you.
[00:01:41] Speaker A: Thanks.
[00:01:42] Speaker C: I feel like we've lived through a couple of hype cycles around AI now, first with predictive models, and now another, maybe even larger cycle around large language models. And it's fun this time around, as we're hearing about large language models, that it finally seems like a lot more of this is percolating through to the provider level. Previously, I feel like I've been trying to push AI and tell people how it's going to be able to change practice. But now, increasingly, providers are coming up to me and asking, when are we going to get this stuff? And having that engagement from the clinical side is really exciting. It is kind of fun to be back at that peak of inflated expectations around AI technologies, but I don't think we ever really got, with our earlier hype cycles, to the point of actually improving care in a really widespread fashion. I think there's still a lot of work to be done on non-LLM AI, using predictive models that use discrete data to improve patient care through simpler prediction and classification tasks.
[00:02:41] Speaker B: Thank you, Brian. Doctor Maulin Shah, do you want to go next?
[00:02:44] Speaker A: Sure. That was great, Brian. I love thinking that it's fun to be at the peak of the hype cycle. It's a great point. It's really not fun to be at the bottom. So let's enjoy what we can when we can.
You know, for me, I agree we've been doing AI for a long time, but this year things changed, especially, again, with the LLMs and generative AI. My hope, honestly, though, is that AI lets humans kind of be human again. I feel like in the last 20 years we've gotten a deluge of more and more and more information, and we've become slaves to the information as well as to gathering the information. Healthcare and care in general is about human connection and human touch. And so I'm hoping the AI lets us move back to that, where we can focus on how I can look you in the eye and have a conversation and less about the technology in general. In my mind, this is not a healthcare problem, this is a societal problem. Right now, we've all lost some human touch, and I think in healthcare we can set the example for how we can use technology to get back. But I think it's something we hope we can do everywhere.
[00:03:57] Speaker B: Thank you, Doctor Shah. Doctor Nirav Shah, what are your thoughts?
[00:04:02] Speaker D: Thanks for inviting us on this podcast. I'm going to take a slightly different approach. Brian took the practical approach, Maulin took the empathic approach, and I'll take the futurist approach. I think what's really exciting is the convergence of numerous technologies within AI. I see two large macro themes emerging that are very interesting. Brian mentioned generative AI and the interest around it: large language models generate a text response to a question in a way that mimics human capabilities. The second area that's interesting, and that's developing rapidly, is multimodal algorithms. In healthcare, we have generally had siloed data in a specific modality, such as structured data in the EHR, unstructured data in clinical notes, imaging data, genomics data, and voice data. What's becoming more common is leveraging multiple data modalities for prediction. So I'm going to proceed down this futuristic path with an example that combines these two macro themes, something I read in an interesting Nature article. Imagine an operating room where the entire surgery is recorded via video, and an AI model built on surgical videos and text-based surgical reports helps predict the next best option in conducting the surgery to optimize outcomes. Add a generative AI component that allows the surgeon to communicate and query this algorithm with a human-like response, and you can see how the future of AI holds tremendous possibilities.
[00:05:32] Speaker E: Wow. Well, as predicted, Amar, there are many opportunities. I have to say that each of you touched on something that I think is really important.
And oftentimes in healthcare, we have spent more and more time alone and not with each other. And the idea that these integrated models, that generative AI, could actually give us greater connection with one another, I think, is really quite interesting and incredibly important as we think about some of the key societal challenges that we're facing. So I think what's also really important to understand is how we can perhaps learn from each of you how you're using AI to enhance care and patient outcomes. It is exciting, but at the same time it can feel a little daunting. So I would love to hear about how each of you is bringing AI to life and how you're employing it in your systems and day-to-day work. Doctor Maulin Shah, could we start with you?
[00:06:35] Speaker A: Yeah, for sure. That's a great question. Like I said earlier, we've been using AI for a long time; it's been embedded in our workflows for a long time. Examples might be looking at the risk of COVID mortality, or sepsis risk factors. We've had operational models that can predict acuity or the staffing levels we need. We have models that look for ED overcrowding. I'm sure, Brian, you're using these too. There are a number of different models that have been around for a long time, and the challenge has been making them better and then bringing them to life through workflow integration. And I think that's one of the keys: workflow integration is king.
I think the thing that's caught everyone's attention, which we've already talked about, is generative AI. And as I think about it, one of the reasons is that generative AI, for the layperson, could be called communication AI, right? All these other ones are abstract predictions, and you're like, cool, the computer can do cool computer things. This is like, now the computer can do things I can do. And now I understand it a little bit better. We're leveraging that concept of communication AI to think about where we're going to use generative AI to embed things in the workflow and improve communication.
The two areas we've focused on, the first with a vendor partner, is ambient listening. Most people in the healthcare sector have heard about these; there are multiple companies doing it. It listens to the patient-physician interaction and generates a note in the clinical documentation, freeing the physician from looking at the computer or running through checklists, and instead allowing them to have a natural conversation that then gets recorded. We've been doing this for about a year using multiple different technologies, and it is a huge satisfier. Docs say they go home on time, they're not working after hours to document, they're increasing their revenue capture, and in the meantime they're doing better documentation. It's a huge satisfier, and it's just going to get better. The other area we're really focused on in this communication AI space is patient messaging. Patients email their doctors more than they call them anymore. We at Providence receive 700,000 emails or messages from our patients per month. That equates to, I think, somewhere on the order of three minutes of work on patient messages for every clinical visit that we have. It's an insane amount of work.
Our goal is to figure out how we make sure patients are getting their questions answered as rapidly as possible. We're triaging patients, getting them in for care, giving them the right advice, but doing it on a scale that actually lets us get the work done. At Providence, we are implementing an end-to-end approach to using communication AI. On the front end, patients can interact with a chatbot that helps them get to the right level of resources. Is this a thank you? Is this a referral request? It just helps them get to the right place quicker, and maybe answers their questions right in real time. If they get past that and still want to contact their clinic, they can write a message, but we have a system that uses generative AI to read the message and figure out what it might be about, to allow for better triage, deciding which ones we need to address first, but also to surface the correct workflows, job aids, and best practices, as well as single-click actions, so we can embed those workflows as quickly as possible. And then the third area in that space is helping our physicians or clinicians respond to the messages they need to respond to in a more effective manner. We're partnering with our EHR vendor on some ways to use AI to build responses in advance, and we also have our own projects in that space to see what the best ways are that we can use generative AI to help. Again, that was the model for us: okay, this is a communication tool, let's leverage it for communication as a first step. We could leverage it for a lot of things, and I think that's what's making it real.
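To make that front-end triage step concrete, here is a minimal sketch of how a generative model might be prompted to classify an incoming portal message. The category names and the llm_complete() client are hypothetical illustrations, not Providence's actual system.

```python
# Minimal sketch of generative-AI message triage. The categories and the
# llm_complete() client are hypothetical; a real deployment would sit behind
# the health system's EHR and its LLM vendor.

TRIAGE_PROMPT = """You are triaging a patient portal message for a clinic.
Classify the message into exactly one category:
- THANK_YOU: no action needed
- REFILL_OR_REFERRAL: routable to clinic staff
- CLINICAL_QUESTION: needs a clinician's response
- URGENT_SYMPTOMS: flag for immediate review

Message: {message}
Category:"""

ALLOWED = {"THANK_YOU", "REFILL_OR_REFERRAL", "CLINICAL_QUESTION", "URGENT_SYMPTOMS"}

def triage(message: str, llm_complete) -> str:
    """Ask the model for a category; route unclear answers to a human."""
    label = llm_complete(TRIAGE_PROMPT.format(message=message)).strip().upper()
    # Anything the model can't classify cleanly goes to a clinician, not a guess.
    return label if label in ALLOWED else "CLINICAL_QUESTION"

if __name__ == "__main__":
    fake_llm = lambda prompt: "REFILL_OR_REFERRAL"  # stand-in for a real LLM call
    print(triage("Could you send my refill to the new pharmacy?", fake_llm))
```

The design choice worth noting is the fallback: a message the model cannot place cleanly defaults to clinician review rather than an automated bucket, which matches the triage-first framing above.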
[00:10:42] Speaker E: Well, thank you. Those are incredible examples, because our goal is to give patients the information they need so that they can get the care they need. And oftentimes those queries can be solved very simply. And it sounds like thinking of it as a communication tool is actually enhancing the care. So, Nirav, let's hand it over to you to hear some of your perspectives.
[00:11:07] Speaker D: Yeah, thanks, Lakshmi. The way that I like to think about it is by grouping it into general categories. The first grouping is around what Maulin said before: we have a lot of models around EHR structured data.
We have over 15 or so that are deployed with actionable workflows, around predicting cardiac arrest, readmission, mortality, and a variety of complications, including sepsis, falls, et cetera. Then we have a lot of use cases specific to imaging and radiology; they help us identify breast masses, lung nodules, diabetic retinopathy, and cardiac and cerebrovascular abnormalities. The third bucket is really natural language processing; we're going down the path of evaluating our clinical notes. We have technologies that help identify incidental findings from radiology reports. So if someone has an incidental pulmonary nodule when we're looking for something else, we have workflows around identifying that and connecting with the patient. We have a natural language processing platform that allows us to extract social determinants of health from our patients' notes and surface that to our ED social workers; we've had great success with that. And we extract sentiment analysis from patient and provider surveys as well. The last category I like to bucket as kind of an "other" category: we have a variety of robots that help transport medications and labs, and we have a variety of things tied to business and corporate functions. We're much earlier in our journey around conversational AI, so we're starting to look at vendors specifically around ambient AI, doing similar stuff to what Doctor Maulin Shah mentioned, freeing up physicians to have communication with their patients. And we're evaluating large language models as well. In terms of day-to-day work, we're in the process of creating a system of governance around these tools and putting some structure around it as we come together as a health system, finding ways that we can actively monitor and ensure these algorithms are accomplishing what they're supposed to in a safe, accurate, and equitable way, and making sure that we take in new data sources and platforms to help serve our patients and execute on our mission.
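As a concrete illustration of the incidental-findings idea, the toy sketch below flags radiology reports that mention a pulmonary nodule without an explicit negation. The patterns and report text are invented; a production system would use a trained clinical NLP model, not two regular expressions.

```python
import re

# Toy incidental-findings screen: flag reports mentioning a pulmonary/lung
# nodule unless the mention is explicitly negated. Invented patterns for
# illustration only.
NODULE = re.compile(r"\b(?:pulmonary|lung)\s+nodule", re.IGNORECASE)
NEGATED = re.compile(r"\bno\s+(?:evidence\s+of\s+)?(?:pulmonary|lung)\s+nodule",
                     re.IGNORECASE)

def flag_incidental_nodule(report_text: str) -> bool:
    """True if the report mentions a nodule that is not explicitly negated."""
    return bool(NODULE.search(report_text)) and not NEGATED.search(report_text)

reports = [
    "CT abdomen for appendicitis. Incidental 6 mm pulmonary nodule, left base.",
    "Chest CT: no evidence of pulmonary nodule or mass.",
]
for r in reports:
    print(flag_incidental_nodule(r), "-", r)
```

A flagged report would then feed the follow-up workflow described above, connecting the finding back to the patient.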
[00:13:22] Speaker E: Great. Thanks so much, Nirav. It sounds like there's so much related to AI, and there are different approaches and starting points that health systems have taken. Doctor Patterson, how have you approached this space?
[00:13:34] Speaker C: Yeah, this is a tough one to go third on. Thanks. I want to give a whole laundry list, but similar to Nirav and Maulin, we're working on a number of AI projects, and this space has expanded so much that we don't want to lose the ability to look at the ecosystem of all the different AI tools that we're using and how they fit together to create a good experience for our patients and our providers. And I think you can look at it across a number of different spectrums: from really simple models that verge on, is this really AI or just a rules-based model, out to our more complex models in the discrete data space, the natural language space, and the generative AI space. But in addition to that complexity spectrum, we have a spectrum of models that are supporting really acute decisions, like detecting deterioration and sepsis, moving out to other models that work on a slower time scale, where we're asking how we can identify patients who are at risk for complications pretty far downstream, so that we can refer them to preventive services.
And then we can also split it by looking at where these models are coming from. In a big academic health system, all of us have some models and AI solutions that are coming from our EHR partners, some that are coming from other vendors, and some that are being created in house by the applied data science team in our health system to fix very specific problems that we have in patient care. And then the research side of the university has a bunch of people who are working to develop new technologies and are going to be looking for partners on the health system side to get their research results into practice. So I think for us, stepping back and looking at that whole ecosystem is a big part of our governance, to make sure that this is all fitting together, everyone's playing by the same rules, and we're creating a seamless experience for both the patients and the providers.
[00:15:26] Speaker B: Thank you, Doctor Patterson. Wow, those are some fascinating uses of AI. One of the biggest challenges in making things work well is integrating systems, and I'm curious to get each of your thoughts on what you are doing to integrate AI into your EHRs, as well as the other systems that make up your health system's ecosystem. Doctor Patterson, we won't have you go last this time, and I'm sure you've got some great stories here as well.
[00:15:58] Speaker C: I'll just highlight a couple of points that we feel like we've learned a lot from in our journey. One of the key points is that these workflows are hard to change, and people who are getting into this space often are really concerned with having the best possible product from their model output, in terms of, am I technically classifying things correctly, or is my generative response the best response it could be? The more we do this and roll these things out, the more we find I'd much rather have a model that's just performant but not great, with a really slam-dunk workflow around it that the clinicians can use, than spend more of my time making the model better. I think people tend to weight the algorithmic production over the implementation work, which is the really hard work to do. I'm also a big fan of a sociotechnical model; I come from a human factors background in how to design these things. And what I always tell people is that there are many professional disciplines that come in on the design of workflows, and it doesn't necessarily matter which one you choose, as long as you have someone who's expert in design on your team, and it's not just a bunch of technical people sitting around saying, yeah, I think if we get this output out, the doctors will just use it. We need to have all those links in the system: how are we going to design this so people can take it and integrate it into their existing workflows? The other point that I tend to make here is that we spend a lot of time talking about how AI is a different technology from everything else that we use, how it's really an exceptional thing that's going to change practice. But most health systems are really good at change management and have people in quality, operations, and other areas who are changing workflows every day in response to new guidelines and things like that. So we have to switch gears from making AI different from everything else to really focusing on how AI is just like every other practice-changing thing that we bring in, and engaging our resources in the larger hospital system to make sure that for somebody who's an end user, this feels like other things that come down the line and not some big, huge disruptor.
[00:17:55] Speaker B: Thank you, Doctor Patterson. Doctor Maulin Shah, how about you? What are your thoughts?
[00:18:01] Speaker A: The concern I'd add is around caregiver burden, or clinician burden.
Our goal is to reduce burden while improving care.
You could easily increase burden with any one of these tools. One of the things that makes me crazy is the discussion of human in the loop: somehow, if a human is reviewing this AI tool, that'll make it better. Well, you've just increased the burden on the clinician, who has to do a review they didn't used to have to do. Now, maybe you've improved something else, but it's too easy for us to just assume that the clinician will solve the problem. And frankly, we're making the same mistakes we made rolling out EHRs a decade ago. We're like, oh, the clinician can fill in the problem list, or the clinician can do that. You just constantly put more and more and more on the clinician. And so I think workflow integration is important, not just in terms of getting the best outcomes and actually getting value out of your tool, but to really keep a laser focus on what I talked about before, which is that we're trying to be more human, we're trying to gain that traction back while making better decisions.
[00:19:13] Speaker E: Wow. I just have to say that listening to the three of you is both inspiring and incredibly reassuring, because as we learn about your real world experiences, we realize that we need people who care deeply about what the positive impact of these kinds of tools can be. Before we wrap up, however, I do think we should discuss any of the challenges, and perhaps even cautionary tales with AI use and implementation. So what worries you the most about AI? Doctor Patterson, can we start with you?
[00:19:45] Speaker D: Sure.
[00:19:45] Speaker C: Well, I'd say I've got a lot of worries, but I think the biggest is the big-picture worry, as opposed to the small stuff, which is that obviously these AI technologies have the potential to really disrupt healthcare. But I think there's a big difference between disrupting it in a developmental way from within medicine and feeling like changes are being forced on us as providers. And I see a lot of skepticism: there are a lot of providers who are really excited about AI, but you also hear a lot of providers who are very skeptical of any new technology coming down the way. And frankly, a lot of that skepticism is warranted. The big challenge for me is getting to a story in which providers engage in the development of these technologies and feel like these are something coming from within health systems, with clinicians engaged in the development, the way we've improved medical technologies in the past, and not saying, this isn't something for us to do, this is a technological thing that has to come from the outside. Because I think that's going to be a harder path to integrating these technologies into healthcare.
[00:20:46] Speaker E: Great point. Doctor Maulin Shah, what about your thoughts?
[00:20:49] Speaker A: What I've found, and this is hype-cycle related, is that people are super eager to bring these things to bear. And I think there's a step missing a lot of the time, especially when you're partnering with a vendor instead of building in house, around real science being done. They'll say, oh, we've shown this, or, doctors are satisfied. That's not science. And I'm very much a show-me-the-numbers kind of person: show me how your model works in my system, on my patients. But I do think there is this feeling of urgency, especially maybe in the C-suite, of we need to get the AI live, we need to get the AI helping our clinicians. And I think you have a group of physicians here who are like, yes, we're converts, we want to do this, but we need to be cautious. I'll give you an example from a couple of years ago when I stepped right in it. I was super eager with predictive modeling during COVID-19 and was working on a model that would look at patients' physical locations and their proximity to other patients we knew had COVID. And we were thinking, oh, that would be a way to improve our detection of COVID before the patient even comes in the door, which was the big deal then. Except that when you start thinking about it, the socioeconomic biases that brings to care are huge. And I'm lucky to work in an organization where I have ethicists and compliance and legal and others who are going to look at something before it moves forward. It was a lesson in that eagerness: I have a critical problem, I have tools at my fingertips, I can make something happen. Let's balance that with real science, real ethics, and real concern for our patient populations, and not just get carried away with the technology. So let's keep the science in this, I guess, is my challenge and my caution.
[00:22:45] Speaker E: Doctor Nirav Shah, over to you.
[00:22:48] Speaker D: Yeah, I'd agree with Doctor Maulin Shah that a lot of vendors sell snake oil, and with the hype cycle, everything has AI slapped onto it. I mean, you can have a tool that has nothing to do with AI, and the vendor will say it's an AI tool just because we're in that kind of hype cycle right now. As a second point, I'll take kind of a different approach, being in this space; we're all in the space of really implementing this and communicating with clinicians. I think trust is another big thing that's very important, and it's very hard to gain trust and very easy to lose it. There's a really great quote that AI is going to scale at the speed of trust. I think that's really important, because frequently, with all the stuff that people are saying in the hype cycle and without evidence, if you have a system that doesn't truly provide value, or does something in a harmful or inequitable way, you can quickly lose that trust. I'll give one example that we encountered with our natural language processing platform, where we extracted social determinants of health and surfaced that to our ED social workers. One of our queries was extracting depression, and one of the subqueries, a component of that, was weight gain. So our algorithm essentially started picking up every single patient who was pregnant and saying they were depressed, because they were all gaining weight. This quickly came to the attention of our ED social workers, who passed it along to our data scientists, and they were in close communication as we leveraged this technology. We were fortunate that our platform solution was not a black box and we could make tweaks to it. So we were quickly able to make those tweaks and learn from that, and as a result, we fixed the solution. And it was in that moment that our ED social workers really gained trust in the system, because they knew where these algorithms were coming from and that they could connect with the data scientists and essentially fix something that may have looked broken. I know other health systems that have had more black-box-type solutions; you have something like this occur, and it breaks the whole trust cycle with those providers for that instance and potentially going forward. So I think that's a really important thing for us as we're thinking about how to deploy and really scale these solutions.
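The weight-gain story maps to a simple, fixable pattern. Below is a toy reconstruction of that failure mode and the tweak; the subquery lists and note text are invented, not Endeavor's actual platform logic.

```python
# Toy reconstruction of the false positive described above: "weight gain" as a
# depression subquery fires on routine pregnancy notes until an exception is
# added. All terms here are invented for illustration.
DEPRESSION_SUBQUERIES = ["depressed mood", "anhedonia", "weight gain"]
PREGNANCY_TERMS = ["pregnant", "pregnancy", "gravida"]

def flags_depression(note: str, apply_tweak: bool = True) -> bool:
    """Flag possible depression from keyword subqueries in a clinical note."""
    text = note.lower()
    pregnant = any(t in text for t in PREGNANCY_TERMS)
    for query in DEPRESSION_SUBQUERIES:
        # The tweak: expected weight gain in pregnancy is not a depression
        # signal, though the other subqueries still apply to pregnant patients.
        if apply_tweak and query == "weight gain" and pregnant:
            continue
        if query in text:
            return True
    return False

note = "Patient is 24 weeks pregnant with appropriate weight gain."
print(flags_depression(note, apply_tweak=False))  # True: the original false positive
print(flags_depression(note))                     # False: after the tweak
```

Because the platform was not a black box, exactly this kind of targeted exception could be added and verified with the social workers who caught the problem.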
[00:25:16] Speaker A: We're in a place now where the conversation has flipped, for us at least, and probably for you guys, and you said this: we have our front line coming to us and saying, when are you going to bring this to us? That's a huge opportunity, and it's one that's easy to squander with one bad tool, for example. But there's a genuine sense of hope that I'm seeing out there that, hey, technology is going to finally work for me instead of the other way around. And that feeling of hope hasn't been there for really my entire career. So I think that's something we really need to pay attention to. And while we need to be cautious, I think we also need to leverage the moment and really try to say, what can we do right now? What are some small wins to continue that excitement, to continue to build a partnership between all the people generating and building these tools and those on the front lines, so that they have trust, they feel supported, and overall they feel like they're providing better care and their burden is going down. In other words, continuing to support our workforce that's dwindling, so that we can grow it back again and really give the care we need to give out there.
[00:26:27] Speaker B: Wow, what a fascinating discussion. On behalf of Alvarez and Marsal and our listeners, I'd like to thank Doctor Nirav Shah, Doctor Brian Patterson, and Doctor Maulin Shah for contributing to this podcast. And I'd also like to thank Doctor Lakshmi Halasyamani for co-hosting this with me. Until next time, thank you all.