AI's potential to revolutionize healthcare requires a focus on responsible and trustworthy implementation.
In this episode, Dr. David Rhew from Microsoft, along with Marcella Dalrymple and Dr. Michael Pencina from Duke Health, discuss the collaboration between Microsoft and Duke Health to explore the transformative potential of artificial intelligence (AI) in healthcare. Dr. Rhew emphasizes the importance of responsible and trustworthy AI, acknowledging its limitations and the need for operationalizing key principles. Dr. Pencina outlines four principles for trustworthy AI: prioritizing the human person, defining AI use cases, anticipating consequences, and establishing governance. Marcella Dalrymple, with her community perspective, highlights the necessity of addressing public uncertainty and mistrust regarding AI development. The partnership aims to form a Center of Excellence for trustworthy AI, focusing on collaborative efforts to align with ethical values and engage the community bidirectionally. The guests stress the importance of a robust governance system, automation for efficiency, and continuous monitoring to ensure AI's intended impact.
Tune in and learn how this collaboration strives to revolutionize healthcare responsibly through AI!
Resources:
[00:00:00] Hey everybody, welcome back to the Health Matters Podcast. I'm Saul Marquez, your host for today's episode. I have the privilege of hosting three extraordinary leaders in healthcare, and I'm going to introduce them to you first.
[00:00:21] We have Dr. David Rhew, Global Chief Medical Officer and VP of Healthcare at Microsoft. We're also joined by Marcella Dalrymple, Director of Strategic Development and Business Partnerships at Duke Health, as well as Dr. Michael Pencina, Chief Data Scientist at Duke Health. Welcome to the podcast, everyone. Hi.
[00:00:41] Thank you. Hi, Saul. It's a pleasure to have you guys here. And look, just to begin with, I'd love to just level set with the listeners. Tell us a little bit about why we're together here and the partnership between Microsoft and Duke Health. Maybe I'll start.
[00:00:55] My name is David Rhew, and again, really excited about the opportunity to start looking at how artificial intelligence can help transform healthcare. One of the things that we've recognized is that there are some amazing opportunities, largely because there are a lot of challenges.
[00:01:08] But at the same time, we know that AI has some limitations. And until we figure out ways that we can operationalize a lot of these key principles, it'll be difficult for us to be able to feel confident that we can do this at scale.
[00:01:22] So one of the things that we at Microsoft have been actively doing is collaborating with organizations around the world, but more specifically here in the US, we're working with organizations that have been thinking about responsible and trustworthy AI and how
[00:01:35] we might be able to work with them to help take those principles and bring them into practice. Love that. Those collaborations are very important. And from our friends at Duke Health, any thoughts on the partnership?
[00:01:47] We are very excited to partner with Microsoft to form this Center of Excellence for Trustworthy AI. We share the excitement that AI brings to healthcare. I think it can truly revolutionize it on many levels, right? The clinical level, but also the operational level.
[00:02:04] But at the same time, if we don't do it in a trustworthy manner, putting the human person first, there is huge opportunity for misuse, abuse, and the AI revolution failing. I outlined four principles for trustworthy AI, starting with human person first, then identification of use cases.
[00:02:28] We get to define what AI is used for. In the end, it's a technology. It's very powerful, but it has to serve human beings. Then we have to predict and expect what it can do and be ready for the consequences and monitor them. And finally, we need governance.
[00:02:48] AI cannot run in the wild. We need a governance system. We need to register and keep track of what AI tools are running, and we need to make sure that we have control over what's happening. Thank you, Dr. Pencina. Marcella?
[00:03:03] So I think this partnership really just builds on each other's capabilities here. Microsoft would not be able to excel in healthcare AI alone, and neither would Duke.
[00:03:16] Together, we can really lead the way by collaborating on both the intellectual and the theoretical practices that we have deployed at Duke, and then the expert technology that Microsoft has to offer. I think we will be unstoppable here. It's a great collaboration.
[00:03:32] Well, thank you all for the level set about the partnership and how you guys are working together. How do you ensure that your partnership and ethical AI initiatives are aligned with the values and goals of your stakeholders and beneficiaries?
[00:03:47] I think it's critical that we listen to the patient and listen to the community and what they have to say. And I'll turn to Marcella, right? Marcella, you grew up in Durham, North Carolina. You kind of get the pulse- Yep. I was born at Duke Hospital.
[00:04:00] You get the pulse of the community. How do people outside the bubble of AI and healthcare feel about all this? I really think that people who do not work in AI don't understand how we are developing AI.
[00:04:15] The community is not sure whether it's a robot that's going to take their blood pressure when they walk into the ER and waiting room, or if it is just some big computer behind the scenes that increases their medical bills on the back end, right?
[00:04:28] So there's kind of a mistrust and a level of uncertainty that I'm really hoping this collaboration will address because Duke is such a pillar in the Durham community and the North Carolina community at large. And we have those touch points with those patients and other stakeholders that we'll
[00:04:47] be able to communicate all of the good deeds that this partnership is really doing. And we need to engage the community in a bidirectional way. We hear a lot of people saying, let's go and educate the community, right?
[00:05:01] That implies that we know and they don't, but we have as much to learn if not more from the community because ultimately we serve the people, we serve the patients, right? So we need to get the feedback from them.
[00:05:14] And one really important idea in trustworthy AI is transparency, right? I mentioned knowing what's running in the health system and disclosing that information. But we need the patient, the community input of telling us what they want disclosed.
[00:05:30] And we don't want a situation like consent forms, where we disclose so much that it's practically useless. We want to disclose what's relevant, what people are interested in knowing. Yeah, some great points there. And really none of this comes without challenges.
[00:05:44] So what were the main challenges and benefits of implementing generative AI at Duke? And how do you measure its impact on health equity? I think the biggest challenge is having the strategy, right? Because there are so many opportunities to implement generative as well as predictive
[00:06:00] AI in so many areas. I think the interest is exploding at Duke as an academic medical center. All our creative researchers want to see it implemented now. And so the question becomes, where do we start and how do we prioritize? That's one side of it.
[00:06:17] And then how do we govern it? And again, we're fortunate to have what I jokingly call the largest AI governance system in the country, and it's two years old, right? We put it together before the generative AI revolution, with exactly this purpose:
[00:06:33] the commitment that no new algorithm is applied to Duke patients until it goes through a thorough review through the process that we put together. And so that has to be in place.
[00:06:44] So on the one hand, the strategy of what goes in, on the other, the evaluation and monitoring of the AI tools that will be implemented. And I think Microsoft recognized that we took responsible AI very seriously.
[00:06:57] And that is why they kind of tapped us on the shoulder and like, hey, would you like to work with us? You know, would you like to sit next to us at the cafeteria? And so I think that was just kind of a natural relationship.
[00:07:10] And I think we're going to do so many great things together under Dr. Rhew's and Dr. Pencina's leadership. What we find is that so much of these principles need to be operationalized. And that means that we have to figure out what those workflow environments are
[00:07:25] where AI will reside, how it's being used and tested. And so in order to make this scalable, you have to find ways to reduce friction in the governance process. Today, a lot of what we saw Duke doing was very manual.
[00:07:38] A lot of individuals taking the time to review and assess and document and then pass that on to other folks. And we saw an opportunity to automate. And that's really what we're trying to figure out.
[00:07:50] Where are the opportunities to be able to bring this into the clinical and operational workflows, use some of the newer tools? So we're using mechanisms that will allow us to be able to then streamline the process and ultimately solve all these very challenging problems.
[00:08:05] How do you provide greater transparency around the AI? How do you ensure that it's actually working in the specific populations, leveraging the local data, and then also ensuring that it is actually working as designed?
[00:08:17] We know that type of work can be done manually, but it can be done a lot better, faster, and at scale through automation. Right. This is the excitement of the partnership. Our governance process is people, process, technology. We have great people.
[00:08:32] We have strong processes, but we have technology that can definitely benefit from partnering with Microsoft to make it scalable, easier to implement and operate at Duke, but also very importantly, take it beyond. We talk about health equity.
[00:08:49] We don't want a world where health systems that can afford AI strategy and AI governance have it covered and the other ones don't. So we want to streamline it in partnership with Microsoft in a way that AI governance
[00:09:06] is baked into the process and available to any health system, regardless of their size, affluence, patient population, and so on and so forth. Right? So that's the goal here. That's fantastic. The creation of a playbook that could be potentially applied across the country.
[00:09:23] What are some of the best practices you could highlight or lessons learned from selecting and implementing use cases that leverage generative AI? So our strategy for selecting use cases was to ask: what could we identify that was going to be high impact and low risk?
[00:09:40] What type of data sources were they going to be using? Where was the data coming from? And then what was going to be something we could put through that evaluation and governance framework that Microsoft is helping us automate that could help us improve that process to
[00:09:56] be applicable to all types of data and all types of AI tools. And so what we selected was an imaging use case. We've also thought about EHR extractions for scheduling, billing practices, also updating registries, national registries. And am I forgetting one, Michael?
[00:10:15] I think the other important one, and actually it came from the community feedback, is the DAX suite of tools, which is gaining tremendous popularity. In my conversations with AI skeptics, right, that comes up fairly frequently. I ask colleagues or family members, what do you think about AI?
[00:10:33] And they say, it's bad. And then I say, well, do you go to the doctor or do you take your children to the doctor? They say yes. And I said, well, how much time does the doctor spend talking to you versus typing things on the computer?
[00:10:45] And they say, well, they spend a lot of time on the computer. I said, do you like it? No. What if there was a tool that would listen to the conversation and record it for the doctor so they can focus on you as the patient? Oh, I would love that.
[00:10:58] That would be phenomenal. So, as an example, we have this technology where generative AI can come in. So, that's the community feedback in practice, with a tool that Microsoft designed that now can be put to good use, but to Marcella's point, also evaluated and monitored through
[00:11:16] our governance system. Some great applications there. Dr. Rhew, go ahead. And one of the things that we do with these tools, and Dr. Pencina has talked about this, is that once the documentation has been auto-generated, the clinician still reviews it and just makes sure that it meets their standards.
[00:11:34] And then they have the opportunity to then make a few edits if necessary, then push it seamlessly into the electronic health record. So, the person is always in control of the process. And they can then just think about, rather than trying to draft all the notes and start
[00:11:49] from scratch, it's there. So, it's a lot easier to be able to then just make a few edits as opposed to trying to start from scratch. And this is where the automation can greatly help.
[00:11:59] So many of these activities that we look at are very tedious, extracting information from different sources in areas that you have to then pull together, synthesizing that into something that is understandable, and then presenting it in a format that is appropriate for that particular scenario.
[00:12:16] And so, in this case, we're talking about clinical notes, but in other cases, it could be putting that into specific fields within the electronic health record or its registries. Or in some cases, we find that we can potentially use AI to generate code to be able to then
[00:12:31] demonstrate things. So, there's some remarkable capabilities that AI has that we can leverage to solve these problems but helping the human, the individual, the clinician, the administrator, the researcher all along the way. It goes back to your enablement.
[00:12:45] Microsoft wants to enable and they want to enable all stakeholders in the healthcare field here and beyond, honestly. Yeah. And I think the point David makes, I want to reemphasize it. It's people talk about human in the loop with AI. I think that's not enough.
[00:13:00] Human on top, the human person in charge of what AI is doing. And this is a good example, right? It's a support for the clinician and the patient, but the clinician still reviews it and they
[00:13:11] make the decision, not the AI or not even a joint decision between the human and the machine. It's really the human who is in charge here. Yeah. Great clarification and a great use case of how generative AI can really help scale care.
[00:13:26] How do you manage the risks and uncertainties involved? Right? We talked about process, we talked about governance, but maybe some good examples that we could chat about. Well, I think one of the first things that we recognize is that there's the risk of data, security, privacy.
[00:13:39] So when we think about how data is managed in AI, it has to be in an enterprise secure environment. And so within Microsoft, we have Azure and that allows us to be able to leverage all the security controls.
[00:13:51] So we're confident that the data is going to be secure. And even the prompts, if we're going to put prompts in, specifically for the generative AI, we know that that's going to be secure. It doesn't get leaked out.
[00:14:01] It doesn't even get presented to Microsoft. It stays purely within that tenant, the organization's own environment. As opposed to if you were to simply copy and paste pieces of a medical record and put it onto the internet, which ultimately is in the public domain.
[00:14:17] And that is of high concern for a lot of folks. Now, the other element is just in terms of ensuring that the results are things that you can feel confident in. And what we realize is that that's where the use cases are so important.
[00:14:29] We have to select the ones where even if there were an error, it's not going to cause a significant impact. So now we have these safeguards in place. We want to start with ones where the clinician or the individual
[00:14:43] can still correct it, can still fix that. And so the selection of the use cases allows us to identify, as Marcella described, those high-impact but low-risk scenarios. And a lot of times these are things that we don't even realize we're doing.
[00:14:57] We're asked to summarize a whole bunch of information and present it back to individuals. We're writing reports. It happens all the time. We're writing hypotheses on certain things and thinking about it. It's the type of work that is oftentimes done by smart humans, but with artificial intelligence
[00:15:14] we have the ability to have that help us. So I think that we're starting to realize that that's going to be a part of it. But then the last is really about monitoring. You have to monitor even though you've done all these safeguards.
[00:15:25] We need to know: is it actually working as designed? And that's a cornerstone of what we're talking about here with the center, where we're going to actually start looking at the real-world post-deployment impact that this will have. And is this actually having the intended consequences?
[00:15:39] And also if there are any unintended consequences, we want to know that. The monitoring part is really important. An analogy I like to make is vaccines, right? Where we had pandemic and everybody was rushing to get the vaccines.
[00:15:51] We had the emergency use authorization to start using it before the formal approval process. It's a little similar with AI, right? We have different crises in healthcare that AI can help make better.
[00:16:03] And so we want to implement it fast, but then we need to monitor because it might have been tested thousands of times, but it's going to be used hundreds of million times and things might go sideways.
[00:16:16] And if we catch it, it's probably there is an easy correction to make it happen, but we need to be aware. That's great. What a great framework on the specific ways to manage risks and uncertainties around it.
[00:16:27] It's great to have minds like yours on a topic like this that I think will really define the future of healthcare. If you guys wanted to close out this session with a call to action to our listeners, what would you leave them with?
[00:16:39] I think the potential of AI is tremendous, and I will say my long-term dream for AI is the clinical applications. It's identifying cancer early, even before it occurs, and trying to make things better. The short-term goal and excitement, it's the operational use cases.
[00:16:58] It's the recording and creating of clinical notes, improving billing practices, all the parts of healthcare that the patient doesn't see and doesn't really care about, but that are inefficient and create tremendous clinician burden that we need to reduce. And I think AI can really help.
[00:17:18] So that's the hope. The important side, it has to be done in an ethical and equitable way to generate trust. So I will say, I really hope that through this partnership we can scale our governance
[00:17:30] as a service and make it something that any health system can implement, not just a luxury for a few. Thank you, Dr. Pencina. Marcella? I would say the call to action here is to actually join CHAI, the Coalition for Health AI.
[00:17:44] Really get involved, have your voice heard, and really have the potential to impact legislation for the guardrails and guidelines for AI being implemented in healthcare. It's a great way to discuss with like-minded individuals, people that might even have
[00:17:58] a different perspective or a different type of company that they're working with, but really for us to develop a consensus around how we want AI to look when it is deployed in a healthcare setting. Thank you.
[00:18:11] Yeah, one of the things that we recognize is that technology has traditionally been viewed as something where it's tested and then it's deployed and we will move on. It's like software. We don't think of it after it's been deployed.
[00:18:24] This is going to require a change in mindset, a change that we have to realize that this is a life cycle that AI goes through. Think about machine learning. It's learning and adjusting, so it may end up with a slightly different algorithm than what it originally started with.
[00:18:37] You've got different populations coming in. You've got new variables. But every time we make changes, we don't know if the results are going to be the same. So there's a new element of updating, monitoring, and this governance that's so essential.
[00:18:50] And as we think about this new paradigm, it's now a change in terms of responsibility. It used to be that the developer and the regulator were solely responsible. Now there's a shared sense of responsibility for those deploying it.
[00:19:03] And then if we think about what every organization needs to do, they have to realize that if AI is going to be a part of their strategy and their ability to be
[00:19:11] successful in this new environment, then they're going to have to start figuring out how to resource governance at the C-suite level. For the folks who are actually doing the governance, they need to have the right tools
[00:19:25] and processes in place to be able to apply it. And then for the actual end users, they need to know how to use it. And so there's a lot of things that we need to do, a call to action to everyone.
[00:19:35] But at the end of the day, it's going to be a new world with some exciting tools, and we have to learn how to use them. Thank you, Dr. Rhew, Marcella, and Dr. Pencina. This has been an extraordinary interview. Thank you all.
[00:19:45] And in the show notes, folks, we'll leave ways for you to learn more about the work that Microsoft and Duke Health are up to and resources for you to find out the best ways to do AI for your organization. Thank you all for joining us today.
[00:19:58] Thanks for having us. It's our pleasure.

