In the second episode of "The Future of AI in Health" podcast series, hosts Dr. Jenny Yu, Chief Health Officer at Healthline Media, and Saul Marquez, healthcare entrepreneur and founder of Outcomes Rocket, explore how AI is expected to enhance healthcare by providing deeper insights, improving patient experiences, and increasing operational efficiency, all while working alongside human expertise.
Their esteemed guest, Dr. Tarun Kapoor, Senior Vice President and Chief Digital Transformation Officer at Virtua Health, delves into the transformative impact of artificial intelligence on healthcare, sharing insights on its current use, challenges, and the promising future it holds for care delivery. He talks about the mindful approach to AI integration at Virtua, emphasizing the distinction between automation and AI as well as the importance of leveraging the latter for meaningful opportunities in healthcare. Dr. Kapoor also highlights the need to differentiate the roles of humans and machines in setting and rewriting rules to ensure patient safety and the effective use of AI tools. From patient data security to addressing interoperability challenges, he provides valuable perspectives on AI adoption throughout the healthcare industry.
Tune in to gain a deeper understanding of the promise of AI and digital transformation in driving healthcare forward!
Resources:
- Check out more episodes of this Thought Leadership Series here.
- Watch the entire interview here.
- Connect with and follow Tarun Kapoor on LinkedIn here.
- Learn more about Virtua Health on LinkedIn and their website.
- Connect with and follow Jenny Yu on LinkedIn.
- Learn more about Healthline Media on LinkedIn and their website.
[00:00:02] Hello everyone and welcome to the forefront of healthcare innovation in an era where artificial intelligence reigns supreme.
[00:00:11] I welcome you to join us on a journey to uncover its impact in healthcare.
[00:00:15] This is the Future of AI in Health podcast series that we're doing in collaboration with Healthline Media and Outcomes Rocket.
[00:00:24] I'm your host Saul Marquez and I'll be doing this series together with the outstanding Dr. Jenny Yu, Chief Health Officer at Healthline Media.
[00:00:32] We're lucky to have Dr. Tarun Kapoor with us today.
[00:00:36] He is Senior Vice President and Chief Digital Transformation Officer at Virtua Health.
[00:00:41] In this role he oversees Virtua's digital transformation office and orchestrates Virtua's enterprise wide master plan in support of an intuitive care journey for all consumers.
[00:00:53] Previously, Dr. Kapoor was the president of Virtua Physician Partners.
[00:00:57] We're excited to have him here.
[00:00:59] Thank you for joining us Tarun.
[00:01:00] Oh, thank you so much.
[00:01:01] A pleasure to meet both of you, and I appreciate the opportunity to speak with your audience.
[00:01:05] Yeah and Jenny it's so great to have you here.
[00:01:07] Please take it away.
[00:01:08] Thank you so much for having me.
[00:01:09] We're very excited to get this conversation going.
[00:01:11] So Tarun I'm going to pose the first question to you which is how is AI currently being used in your hospital?
[00:01:17] And can you highlight some of the AI driven innovations that have benefited both patient care and operations?
[00:01:24] So the way we've been trying to approach AI here at Virtua is to be mindful of the first thing:
[00:01:30] I want to make sure we're all speaking from the same definition of AI,
[00:01:34] because I do find that the terms automation and AI get interchanged; people are actually doing automation and calling it AI, and vice versa.
[00:01:44] So the very simplistic definition I use to keep it straight in my head is: with automation, the human sets the rules.
[00:01:51] The machine carries out the rules and doesn't question them.
[00:01:55] With AI, the human may set the rules or the machine may set the rules, but the machine is allowed to rewrite the rules as it goes on.
[00:02:05] So, a very simplistic definition.
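The distinction above, rules a machine only executes versus rules it is allowed to revise, can be made concrete in a few lines. This is a toy sketch for illustration only; the heart-rate threshold and the update rule are invented, not anything Virtua described running.

```python
# Toy contrast between automation and AI, in the episode's sense.
# The heart-rate threshold and the update logic are illustrative only.

def automation_alert(heart_rate: float) -> bool:
    """Automation: a human set the rule (HR > 120) and the machine
    applies it verbatim, never questioning or changing it."""
    return heart_rate > 120

class AdaptiveAlert:
    """AI, by the simplistic definition above: the rule starts out
    human-set, but the machine may rewrite it as feedback arrives."""

    def __init__(self, threshold: float = 120.0, rate: float = 0.1):
        self.threshold = threshold
        self.rate = rate

    def observe(self, heart_rate: float, was_true_alarm: bool) -> None:
        # Rewrite the rule: relax after false alarms, tighten after misses.
        if heart_rate > self.threshold and not was_true_alarm:
            self.threshold += self.rate * (heart_rate - self.threshold + 1)
        elif heart_rate <= self.threshold and was_true_alarm:
            self.threshold -= self.rate * (self.threshold - heart_rate + 1)

    def alert(self, heart_rate: float) -> bool:
        return heart_rate > self.threshold
```

The second class is exactly what the episode urges caution about: whether you are comfortable letting `observe` run unsupervised depends on the consequence of a mistake.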
[00:02:07] And I think what we have to be very mindful and careful of here with these tools is when are we comfortable with the machine
[00:02:14] being able to rewrite the rules of engagement?
[00:02:17] The way we've thought about it is, and again, it's not my group that came up with it: is it straight administrative work with a relatively low risk of the machine making a mistake?
[00:02:29] And I think that's one of the big misconceptions.
[00:02:31] People are like, oh, AI won't make mistakes. No, machines make mistakes.
[00:02:34] People just don't realize it. It happens all the time.
[00:02:37] If there's a relatively low consequence when the machine makes a mistake, then we're okay with letting it rewrite its rules as it goes along and become more enabled with machine learning.
[00:02:47] However, if we're not comfortable with a significant mistake, then we're still keeping the human in the loop regardless of whether the machine is there or not.
[00:02:55] So I think that's how we're thinking through it.
[00:02:58] The use cases that we have underway right now: rolling out in radiology, again with a human in the loop to double-check.
[00:03:05] We have use cases where we're trying to do a better job with sepsis detection, and ideas around using, I wouldn't really call it AI, but natural language processing to read our notes and flag, hey, this is a patient who for whatever reason didn't get this procedure done.
[00:03:20] And it's a life enhancing procedure.
[00:03:22] They should do that.
[00:03:23] So that's how we've been doing some of these individual use cases right now.
[00:03:29] And with those use cases, have you been seeing some successes or wins?
[00:03:34] Right.
[00:03:34] How are you finding the balance, where the positive consequences outweigh the risks?
[00:03:42] Yeah.
[00:03:42] So from a risk perspective, we feel very comfortable, because we've been intentional about not doing straight-out clinical AI by itself.
[00:03:52] Let's say, for example, I'm going to let the machine decide and let the machine intervene by itself.
[00:03:57] We're not touching that.
[00:03:59] We are nowhere close to comfortable with that.
[00:04:02] What we are very comfortable with is, are we seeing some speed and efficiency improvements?
[00:04:06] I'll give you an example of some automation.
[00:04:09] I would say NLP, maybe a touch machine learning inside of it.
[00:04:14] We wanted to find out what is the data point for about 20,000 patients.
[00:04:20] And unfortunately, it wasn't recorded in a discrete field.
[00:04:24] So the team wrote some queries using some NLP to read the notes and they read six million progress notes.
[00:04:32] And I did a back of the envelope calculation.
[00:04:33] If a human were to spend 30 seconds per note, it would have taken 27 years.
[00:04:39] The machine did it in a weekend.
[00:04:41] And because of that, it gave us insights on here's several hundred people we think would actually benefit from this intervention.
[00:04:49] We're now reaching out to those people.
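Dr. Kapoor's back-of-envelope figure is easy to verify. Assuming an 8-hour workday and about 250 working days a year (my assumptions, not stated in the episode), 30 seconds per note over six million notes lands in the same ballpark as his 27 years:

```python
# Back-of-envelope check on the figure quoted above.
notes = 6_000_000
seconds_per_note = 30

total_hours = notes * seconds_per_note / 3600   # 50,000 hours of reading
work_years = total_hours / (8 * 250)            # 8 h/day, 250 days/year

print(total_hours, work_years)  # prints: 50000.0 25.0
```

Roughly 25 working years of nonstop reading, against a single weekend of machine time.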
[00:04:51] So I think that's a clear-cut use case of, great,
[00:04:54] I have a result. But the question that my team hears me say all the time is: so what?
[00:05:01] And I think that is both the promise, but that's the double-edged sword of AI.
[00:05:06] There's a lot of excitement about what it can do.
[00:05:08] The question is how good are we?
[00:05:11] How sophisticated are we at actually putting it to work?
[00:05:14] And that's where I spend a lot of my time.
[00:05:16] That's fantastic.
[00:05:17] Tarun, thank you.
[00:05:17] And great question there, Jenny.
[00:05:20] And so a common question that comes up is around patient data security when implementing AI.
[00:05:26] What measures are you guys taking to address that?
[00:05:29] The question is, in order to keep patient data secure, how much are you running internally and how much are you running externally?
[00:05:36] And this is where my colleague, our chief information officer and I, we partner on these projects.
[00:05:42] And on his team, even though I think the chief information security officer's questionnaires are probably the most complex and difficult things to go through, there's a reason we do them.
[00:05:54] So from the perspective of what data do we let leave, we are partners with Truveta and we do work with Truveta on that.
[00:06:01] And so there is information that we let go into the Truveta universe, but otherwise we are mining internally.
[00:06:08] That's how we are going through our information.
[00:06:10] We're a Microsoft shop with Azure, and for going through those settings you'd certainly want to talk to our security team.
[00:06:18] The question, though, is if you can run the information internally, then it comes back again:
[00:06:24] What are you doing with the information?
[00:06:26] And so that's where we then say here's the opportunity to then make an impact on somebody.
[00:06:32] So we feel pretty good about it.
[00:06:34] But at some point, we also understand if you want to accelerate and partner with other organizations,
[00:06:38] there's going to have to be some degree of data sharing.
[00:06:41] I think we're all working through that right now.
[00:06:43] Yeah, and it sounds like you guys are maybe, and I don't want to oversimplify it, but 80-20? Like 80% in-house, 20% outside?
[00:06:51] Yeah, I think yeah, there's a blend right now of that.
[00:06:54] And we are looking for partners because this is a classic example of build versus buy that every single organization has to go through.
[00:07:01] Should we build our own algorithms for this or are there commercially available algorithms out there that we can go get?
[00:07:09] We know that our team, like everyone, has limited resources.
[00:07:15] There are use cases that our team is going to go after because we have to go after them.
[00:07:20] They're really important to us in terms of outcomes and safety,
[00:07:24] and no one in the private market has commercialized them yet.
[00:07:27] Some of it is just never going to be commercializable.
[00:07:30] The other piece of it, though, comes back to this: if we continue to buy every single algorithm out there, you're going to run out of money very quickly.
[00:07:38] I don't know why it is, but none of these algorithms comes in under six digits.
[00:07:43] And if anyone thinks that health systems are sitting on enormous budgets and enormous margins, that's just not realistic.
[00:07:50] So we're going to have to figure out this mechanism.
[00:07:53] A person I'll call out who's done a lot of thinking in this space is Michael Hasselberg up at the University of Rochester Medical Center.
[00:08:00] And I very much espouse his view: health systems are going to have to learn how to do some of this ourselves.
[00:08:08] Not all of it, but we have to have that critical thinking in-house.
[00:08:12] I'll mention one other thing.
[00:08:14] Whether you're 80-20 or 50-50 on what you build in-house, you are 100-zero, meaning you are 100% responsible for educating your health system employees on how to use the tools.
[00:08:27] And I think that is perhaps even a bigger challenge.
[00:08:30] What is your AI literacy approach?
[00:08:35] How are you going to teach people how these tools work?
[00:08:38] I joke around that I have a dream that someone will walk up to me one day and say,
[00:08:43] Tarun, I was thinking about your podcast and I looked at this model and I'm really concerned about drift.
[00:08:49] And I'm like, OK, then I've done my job.
[00:08:52] Someone has actually thought about it and recognizes that these models can deteriorate when you take them out of the lab and put them into real-world data.
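For listeners wondering what checking for drift looks like in practice: one common measure is the Population Stability Index (PSI), which compares a feature's distribution in the lab (training) data against live production data. This is a minimal generic sketch, not tied to any system mentioned in the episode; the bin count and the usual ~0.2 "significant drift" reading are conventions, not rules.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live
    sample. 0 means identical distributions; values above ~0.2 are
    commonly read as significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # Floor empty bins to avoid log(0).
        return [max(c / len(sample), 1e-4) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running something like `psi(train_scores, live_scores)` on a model's outputs each week is one simple way to notice the deterioration Dr. Kapoor hopes his colleagues will ask about.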
[00:09:00] So having that education, I ask the following question all the time.
[00:09:04] I say, people, how many of you have used ChatGPT or Med-PaLM or Bard in the last week? And maybe one tenth of the hands go up.
[00:09:15] And I ask the same question.
[00:09:16] How many of you have used Google, Bing, or something in the last hour?
[00:09:21] And I say, when we invert the two, then we'll know we're in a place where you're using these tools effectively.
[00:09:26] And so I think there's an adoption curve; it's just going to take time.
[00:09:30] And I would say that comment you made is very similar to how the EHR was first being adopted in the health care system.
[00:09:35] So I think there's an analogy there.
[00:09:38] But with your use case earlier, where the machine learning models were able to read the patient notes and figure out the interventions that are needed:
[00:09:48] clearly all these technologies and tools are trying to improve and better the patient experience and tie that to better health outcomes.
[00:09:58] And when we think about how we best leverage data, whether it's cleaning it up, the usability, making sure the inputs are accessible in such a way that they give useful outputs, you as the digital transformation officer are thinking about, hey, why does this matter and how is this going to matter?
[00:10:18] Can you just share some examples of best practices around what your organization is doing?
[00:10:23] So we certainly talked about the echo example.
[00:10:26] I think one of the other areas that we're actually in the midst of right now is we're wiring all of our hospital beds with cameras, and not just optical cameras but also ambient sensors.
[00:10:39] And the idea behind this is you use the optical camera for virtual nursing or virtual ICU.
[00:10:46] But from a privacy perspective, we don't want that camera on all the time.
[00:10:51] The patient may not want that camera all the time.
[00:10:54] But the concept behind the ambient sensor is sitting there and say, I can actually have a visualization, not an optical visualization, but a computer vision image understanding of what's going on with this patient.
[00:11:06] Is this patient trying to get out of bed?
[00:11:08] So today when we telesit people: we've gone from in-room telesitting, where it's a one-to-one ratio.
[00:11:15] Now we do telesitting where we roll in a camera and we do twelve-to-one.
[00:11:19] We're a thousand-bed health system across five campuses; why are we only telesitting 50, 60 people?
[00:11:27] Why not all 1000?
[00:11:29] Shouldn't everyone get this?
[00:11:31] And so I think that's the idea behind this.
[00:11:33] Why?
[00:11:34] Well, we're not going to be able to do that with just 1000 beds.
[00:11:37] We can't scale it up to that many people.
[00:11:39] So then the use cases we start coming up on are: can I tell that the person is trying to get out of bed?
[00:11:45] Can I tell that the person is actually in bed?
[00:11:47] As a rounding physician, how many times have I made rounds only to find out the patient's not there?
[00:11:53] Where are they?
[00:11:54] They're off at a test.
[00:11:55] All right.
[00:11:56] How do I know when they come back?
[00:11:57] And so how much time and efficiency is wasted there?
[00:11:59] There are some really exciting use cases about the patient has not turned over in X number of hours.
[00:12:05] Therefore, they're at risk for a pressure ulcer injury.
[00:12:08] Today we do it by having a staff member go in and look at every single patient.
[00:12:14] No, don't look at every single patient for pressure ulcers.
[00:12:16] Look at the ones who haven't moved.
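That triage idea, check only the patients whose sensors show no recent movement, reduces to a simple filter over timestamps. A hypothetical sketch: the two-hour repositioning window and the data shape here are my assumptions, not Virtua's actual logic.

```python
from datetime import datetime, timedelta

# Illustrative threshold; real repositioning protocols vary.
REPOSITION_WINDOW = timedelta(hours=2)

def needs_turn_check(last_movement, now):
    """Given {patient_id: time of last sensor-detected movement},
    return the patients overdue for a repositioning check."""
    return sorted(
        pid for pid, t in last_movement.items()
        if now - t > REPOSITION_WINDOW
    )

now = datetime(2024, 1, 1, 12, 0)
overdue = needs_turn_check(
    {"bed-12": now - timedelta(hours=3),
     "bed-7": now - timedelta(minutes=30)},
    now,
)
print(overdue)  # prints: ['bed-12']
```

The staffing win is in the filter: the staff round only on the returned list instead of every bed.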
[00:12:18] And as we are in a more and more difficult environment where there's a mismatch between demand and supply of health care professionals,
[00:12:28] and that demand is only going to get worse over the next 10 years,
[00:12:32] we're going to have to come up with something else.
[00:12:34] So this is not meant to replace the humans who are at the bedside.
[00:12:38] We need them.
[00:12:39] This is to let the people who are at the bedside do only the things that people at the bedside can do.
[00:12:45] And I'll give you one other scenario that we're working through in early use cases: a lot of times it's change of shift.
[00:12:54] That's the worst time to get a new patient set up, because I'm about to change shift,
[00:12:58] I don't want to take a new patient, and there's a backlog in the ER.
[00:13:01] What's really important is that the technology doesn't fix this by itself.
[00:13:04] This is where operational change has to happen.
[00:13:07] Don't have your virtual nurses work the exact same shift times as your in-person nurses,
[00:13:14] because then everyone's switching at the same time. As long as you can stagger them,
[00:13:17] you can get a new patient upstairs and let the virtual nurse do the intake.
[00:13:21] Oh, by the way, the virtual nurse can do the intake and the family doesn't have to be there in the room.
[00:13:25] We can contact the family with a text message on demand:
[00:13:29] I have a question that your family member can answer. And there's a video translator on demand right there.
[00:13:36] The beauty of a video translator is the video translator can look at the patient and say,
[00:13:39] you know what? I can tell that person didn't understand what I just said.
[00:13:43] That doesn't work by telephone.
[00:13:45] So I think we're learning a ton of these use cases and we're still refining where's the value.
[00:13:52] But we know we have no choice but to go in this direction.
[00:13:54] That's why we're swinging big.
[00:13:56] Yeah, I love that.
[00:13:57] As a previously practicing physician, I know all the challenges,
[00:14:02] and so I love all those use cases that you just mentioned to improve the quality of the experience.
[00:14:07] Agreed. And a lot of people talk about, hey, the applicability is back-of-the-house efficiency plays.
[00:14:13] And it's exciting to hear you, Tarun, talk about the clinical applications, real world, right?
[00:14:18] That you guys are using.
[00:14:20] I was in the ER. I messed up my knee.
[00:14:23] Thankfully everything was good. Nothing was fractured.
[00:14:25] But I was there during a shift change and it took forever.
[00:14:30] So your example just spoke to that moment that I had and I know a lot of our listeners have dealt with either themselves or their loved ones.
[00:14:38] So thank you for that.
[00:14:39] How does the hospital address interoperability challenges with AI solutions in different departments?
[00:14:45] That's a very simple answer: you have to.
[00:14:48] I guess the question is, are we going to try to have one system that does it all, or are we back to best-of-breed here?
[00:14:58] I would think about it a little bit differently, though. It's not so much interoperability as,
[00:15:03] are we only going to use one brand of everything?
[00:15:06] If we can, and that's our mindset here too, if we can, we will.
[00:15:10] If you're looking at the speed of execution that's happening here,
[00:15:14] you're going to have to get comfortable with some degree of throwaway technology.
[00:15:20] I know people hate to say that, and I hate doing it. I hate doing the work and then saying, guess what,
[00:15:25] we're not using that anymore; our native systems are able to do that now,
[00:15:29] or we want this other product instead.
[00:15:31] But I think what's really important is even though the technology you start with may not be the technology you finish with,
[00:15:37] the important thing is: the sooner you start, the sooner you learn.
[00:15:43] What's the hardest car to learn to drive? Your first one.
[00:15:47] So if you say, I'm waiting until a better car comes along and I'll start learning to drive in an optimal car when I'm 25,
[00:15:55] you miss those years from age 16 or 17 to 25 in which you become a much better driver.
[00:16:01] And then there's a better car, great I don't have to do as much and I have a better car.
[00:16:06] But nothing will ever replace those initial seven years of driving.
[00:16:10] That's why world class athletes don't start when they're 15.
[00:16:13] World-class athletes start when they're 3 or 4.
[00:16:17] So I think that's really important for us: don't wait for the perfect technology setup;
[00:16:22] get started and keep learning.
[00:16:24] And then eventually you may go back to a native system that has caught up.
[00:16:27] And one of the things we do is we're very clear with the vendors we work with,
[00:16:31] we're not going to sign a 10 year contract with you.
[00:16:34] I'm not going to sign a five year contract with you.
[00:16:37] Two to three years: prove to me you can keep up, you can innovate, and you can do the things we need to do.
[00:16:43] Then we can have a very long relationship, but you have to prove to me that you can iterate.
[00:16:50] And vice versa: I think it's important for the vendor to look at a health system and say, prove to us that you can iterate.
[00:16:55] Because if we don't operationalize your product effectively, we're going to make you look bad too.
[00:17:00] So who you work with when you're partnering and co-developing I think is just as important for a tech startup.
[00:17:06] Those are some really great points there, Tarun, and something for everybody listening to think about.
[00:17:11] The business of integrating AI is super important too.
[00:17:15] The contract term, and, to Tarun's point, is it a match?
[00:17:19] Are you guys a match culturally?
[00:17:21] Are you going to be able to be agile in the movement?
[00:17:24] So I love those points there as well, super critical.
[00:17:28] Yeah and Tarun I love your approach of let's crawl, walk and run with a vendor.
[00:17:32] But also just getting started, and knowing that there needs to be practice, right,
[00:17:39] before you realize and unlock the true value for how these technologies can be helpful in the care delivery space.
[00:17:46] The question then is, there's so much hype right now around AI; please share with us your thoughts on the term AI itself.
[00:17:55] But what do you think is the biggest promise of AI in care delivery in 2024 specifically?
[00:18:01] Alright so you opened the door for my shameless plug of Kapoor's inverse law of AI.
[00:18:06] For those of you who are not familiar with it, I'm hoping to be famous for something one day, and maybe this is what it is, right up there with the Pythagorean Theorem.
[00:18:13] Kapoor's inverse law of AI says the number of times that a person mentions AI in their presentation is usually inversely proportional to their actual understanding of AI.
[00:18:22] So I'm going to be very cautious about how much more time I spend on it.
[00:18:27] What I really have been spending a lot of time thinking about, and what I am actually really excited about, I don't want to say giddy, but meaningfully excited about, is this.
[00:18:38] So let me tell you a story. I'm a hospitalist by training, internal medicine, and I actually still see patients when I work with the residents.
[00:18:47] I remember when our EMR finally integrated and I could see the last 12 progress notes and the history and physicals from the previous hospitalization.
[00:19:01] I said, oh my god, this is wonderful. And then I realized that I still only had 30 minutes to admit that patient.
[00:19:09] And now I had all this data, this information, but I didn't have insights.
[00:19:16] I had to go read all of those notes in order to find the insight. And what did I do? I read the last couple because that's as much time as I had.
[00:19:24] The fact that the gift of the LLM, which I think is perhaps one of the most revolutionary things that I've ever seen,
[00:19:32] that can go in and help me understand, because there's data and information. That's what I would call the left side of the curve.
[00:19:40] The right side of this is insights and opportunities, and it's a decreasing curve:
[00:19:46] lots of data; organized as information, it's a little bit smaller, because not all data is organized; and then even smaller as you move across to insights.
[00:19:57] And it's the insights that we need.
[00:20:01] And we've been using our really highly educated talent to be insight machines, and they should be insight machines, but right now they have to be data-processing machines first.
[00:20:11] If we can let people like myself be insight machines, and maybe eventually our knowledge systems get to a place where the machines are able to present the insights,
[00:20:22] then you get to the final phase, which is the most important thing that truly only a human can do.
[00:20:26] And that is how do I take advantage of this opportunity?
[00:20:29] And it could be an opportunity to say I have an opportunity to help this patient understand this disease.
[00:20:35] I have an opportunity to convince this patient that this is the right treatment path for them.
[00:20:40] And that is the essence of the humanity of medicine.
[00:20:45] I think we can put that back in the hands of our clinicians.
[00:20:49] We're dealing with a burnout epidemic in healthcare, and part of it, I think, is because we were at the worst part of just all data.
[00:20:57] And now we're starting to move that other side and start to say, hey, here's your opportunities to work on.
[00:21:02] And that's what's fulfilling, right?
[00:21:04] That's, to me, the essence of being a clinician, a physician, a nurse, etc.
[00:21:10] So that's the part, as you can probably tell in my voice, that I'm excited about.
[00:21:14] Really love that one.
[00:21:15] Yeah, for sure.
[00:21:16] And Tarun, I love it.
[00:21:17] And the way that you put it all together is really inspiring, taking us back to when you were early in your practice,
[00:21:24] wanting to do something about all of the things that you had in front of you,
[00:21:28] to now fast-forwarding to today, you can do something with LLMs.
[00:21:33] The opportunity's big.
[00:21:34] And we appreciate the frameworks you've given us for how to think about AI in healthcare from a provider perspective
[00:21:43] and how to approach it on the business side; getting started is the big thing here.
[00:21:47] How would you close us off here, Tarun, as we wrap up today's episode?
[00:21:52] When you so kindly introduced me as digital transformation officer:
[00:21:56] my title has stayed the same, but the emphasis of my role has changed.
[00:22:01] It's not so much the digital side anymore, it's now the transformation side.
[00:22:06] You're always going to need things to be digitized.
[00:22:09] But then from digitize to digitalize, which is changing your processes:
[00:22:13] what are we actually going to change about our business?
[00:22:16] And if you're prepared to change your business to take advantage of these tools,
[00:22:21] you can be fundamentally different.
[00:22:24] It's not just digitizing the old way of doing things and therefore making it more electronic.
[00:22:29] What are we going to do different?
[00:22:32] And this is our opportunity to do things different.
[00:22:34] It's not going to happen overnight, either.
[00:22:38] I still believe there's another 20, 30 years of this continuing to evolve.
[00:22:42] Some things are going to change very quickly.
[00:22:44] And it is the idea that no, it's not evolution, it's punctuated equilibrium.
[00:22:49] If you've heard that term before: evolution is a continuous, slow process that moves along.
[00:22:54] Punctuated equilibrium is things can change very rapidly in a very short period of time
[00:22:59] when the opportunity presents itself.
[00:23:01] And we're starting to get this alignment now of consumers are ready to accept some of these tools,
[00:23:08] our clinicians are ready to accept some of these tools,
[00:23:10] and the technology is there to do it as well.
[00:23:13] And so when you find that triple alignment,
[00:23:15] that's where punctuated equilibrium happens.
[00:23:18] When those three are not all aligned, then that's more of a slow evolution.
[00:23:22] You've got to wait for those things to happen.
[00:23:25] So I encourage you: look for these triple alignments and then swim like mad.
[00:23:31] Swim like mad, I love it.
[00:23:34] Dr. Tarun, thank you for today.
[00:23:37] And Jenny, thanks for doing this with us on this series.
[00:23:40] We're definitely excited to get this episode to everyone.
[00:23:44] In the show notes, you're going to see all the links to get in touch with Dr. Kapoor
[00:23:49] and learn more about what he and the team are up to at Virtua.
[00:23:53] So we're really grateful that everybody tuned in for today's episode.
[00:23:56] Thanks again.
[00:23:57] Thank you all.
[00:23:58] Thanks for listening to our series, the Future of AI in Health,
[00:24:02] our collaboration with Healthline Media.
[00:24:05] As we conclude this episode, we invite you to stay tuned for more insightful discussions
[00:24:09] on the series that we're doing together.
[00:24:12] The future is now and with knowledge comes empowerment.
[00:24:15] So I want to thank you for joining us,
[00:24:17] and we look forward to having you with us next time
[00:24:20] as we explore the impact of AI in health care today and in the future.

