Generative AI has tremendous potential to address workforce challenges in healthcare.
In this episode, Kaveh Safavi, the Senior Managing Director of Global Health at Accenture, delves into the importance of leveraging technology, specifically generative AI, to transform the future of healthcare and alleviate the current workforce challenges. Kaveh discusses the potential of generative AI in healthcare, highlighting its ability to support and augment various language-based tasks, leading to enhanced productivity and efficiency. He also cautions healthcare organizations about the responsible use of this technology, emphasizing the importance of understanding its limitations and ensuring the protection of patient privacy.
Tune in as the demand for accessible and timely healthcare continues to rise. This episode provides valuable insights into how generative AI can revolutionize the industry and address some pressing challenges we face today.
[00:00:00] Hey everybody, Saul Marquez with the Health Matters Podcast. I want to welcome you back
[00:00:13] to today's episode. We're recording a lot of great interviews from HLTH 2023 in Las Vegas,
[00:00:19] and today I have the privilege of sitting here with Dr. Kaveh Safavi. He is a senior managing
[00:00:25] director of Global Health at Accenture. He helps organizations harness the promise of technology
[00:00:31] and human ingenuity to humanize healthcare. I'm really excited to have you on the podcast,
[00:00:36] Kaveh, thanks for joining me. Good to be with you. And so to kick things off,
[00:00:40] you know, I always like to better understand what inspires our health leaders. So what is it that
[00:00:45] inspires you in your work in healthcare? Well, I tell you, I started out becoming a doctor because
[00:00:50] I saw primary care doctors, pediatricians and I grew up around doctors and I just thought that
[00:00:55] was a fascinating way to make a living. I got into healthcare and I was really struck by
[00:01:01] the power asymmetry between doctors and patients and I felt like healthcare would be better if there
[00:01:06] was a better balance and trying to figure out how to empower patients as part of the care system
[00:01:12] was important to me. I'm now in a different phase of my life. I am part of the baby boom population
[00:01:17] and we are facing the next decade where there will be far more of us than there are caregivers
[00:01:23] and we are all going to have a really big problem getting access and waiting times are going to be an
[00:01:28] issue. And it's actually literally a personal issue and I'm very much interested right now in how
[00:01:33] we're going to solve that access problem. And I have done enough research and looked at the numbers
[00:01:39] and I know that it's not physically possible to solve the problem by finding more people to do
[00:01:44] the work because there aren't any or not enough and therefore we're going to have to think about it
[00:01:48] differently and having spent the last two decades thinking about how technology and people work
[00:01:53] together. I can't see any other path forward other than taking the work that's being done by our
[00:01:59] caregivers and figuring out how to scale their capacity by using technology and people together
[00:02:04] and the alternative is a serious crisis in waiting times. Yeah, it's certainly a big issue and one
[00:02:10] that needs solving and it's a personal thing for you and you're on a mission to make it happen.
[00:02:15] I love that. I'm an impatient man that needs results and doesn't like to wait.
[00:02:21] And we need it now in healthcare and you're not alone. We are in that time where we like our Starbucks,
[00:02:26] we like our Amazon Prime and things need to be there today and our healthcare is no exception.
[00:02:31] I see from Accenture's research that nearly 100% of healthcare leaders agree that gen AI is ushering
[00:02:38] in a new era of enterprise intelligence. How will this technology transform the future of healthcare?
[00:02:43] Well, gen AI is a relatively new technology. We've known about artificial intelligence in general
[00:02:48] and largely it's been through the lens of knowledge and prediction but Genai is a very different
[00:02:55] kind of technology. There are many use cases but the use case that has caught our attention
[00:02:59] particularly for healthcare is language and the reality is that a lot of what takes up human time
[00:03:07] is the acts of reading and writing. I'm not talking about reasoning or judgment. I'm just talking
[00:03:12] about the literal acts of reading and writing. And if we could find a way to take some of that off
[00:03:18] of the human and give that to tech then people have more time to do the things that humans do which
[00:03:24] is really much more about the judgment, the conversation, the things that only the humans can do.
[00:03:31] And particularly when we're facing a shortage of doctors and nurses, we're going to have to figure
[00:03:37] out a way to do this differently. Yeah, I couldn't agree with you more. And so just pointing to
[00:03:42] the research again that you guys have done it also shows 40% of working hours in healthcare could be
[00:03:48] supported or augmented by language-based AI. How will this help the industry's current workforce
[00:03:54] challenges? Well, so the 40% is actually of all jobs. So if you look at all jobs across all industries,
[00:04:01] 40% of what is considered work today are language tasks that are amenable to large language models
[00:04:09] that have been enabled by generative AI. When you go into healthcare specifically and I'm going to
[00:04:14] just focus on healthcare delivery for a second. Around 30% of the tasks can be
[00:04:21] impacted by language models. 20% of the tasks the technology can actually substitute for what the
[00:04:28] human does. 10% of it can help the human do a better job, but the human still has to do the work.
[00:04:34] It's that 20% that gives us the headroom to start solving the supply and demand problem. When you go
[00:04:39] to things like healthcare insurance, that looks a lot more like the overall job market, where
[00:04:45] more like 30-40% of tasks are language tasks that can be substituted. The key here is
[00:04:51] that technology has the ability in some cases to automate, that's replace and in some cases augment,
[00:04:57] which is to do better. Most of our artificial intelligence experiences pre-generative AI have
[00:05:04] largely been about providing information to humans to make better decisions, so augment. We've
[00:05:09] thought about it really as a technology to make care better by augmenting human capacity,
[00:05:15] not automating. But in the environment we're in right now, when access is a fundamental problem
[00:05:21] in the UK, in the NHS right now, you're going to wait at least 18 months for an elective procedure.
[00:05:28] In Ireland, you'll wait two years for an elective procedure in the public healthcare system,
[00:05:32] and in some urban areas right now, elective procedures depending on what you're looking for could be
[00:05:36] three, four, five, six months. That is untenable long-term. It's going to go from a nuisance to a
[00:05:42] political problem. We have to automate. Augmenting isn't going to solve the problem, and it's the
[00:05:47] automation component that I'm most interested in. That's the more novel part right now,
[00:05:51] but there's a real opportunity there. Yeah, I would say so, and some of these wait times are just
[00:05:56] unsustainable as we think about how public health unfolds and just the epidemic that could happen
[00:06:02] as a result of it. Healthcare organizations are looking at using Gen AI for learning purposes
[00:06:09] and planning for pilot cases this year. What would you caution these organizations about when
[00:06:14] really experimenting with Gen AI? I'm going to answer that question first by describing what are
[00:06:20] the uses that are likely to be the best uses in the near term. And generative AI is in its early
[00:06:27] stages, so what it will be able to do in three or five years, the models are going to get more powerful,
[00:06:32] so we can't fully conceive of it. But like what's in front of us right now, we can actually look at
[00:06:37] non-healthcare industries to see patterns. Common patterns in healthcare are
[00:06:43] using the technology to help people find information inside of their own organization.
[00:06:49] The lingo is enterprise search, but when you are trying to find something in your hospital or
[00:06:54] insurance company, it could be anywhere and it's hard. And if I can find it, that's all that time
[00:06:59] saved looking for information. The second thing is what we'll often describe as creating first drafts,
[00:07:05] often described as content creation for content creators. So it could be, for example, in the
[00:07:10] clinical setting now, what's happening, which is preparing a first-draft email as an example,
[00:07:16] or document summarization so that it reads, understands and synthesizes in a reliable way. So
[00:07:22] instead of me having to try to look at a 200-page, 10-year record, one page pulls everything
[00:07:28] salient. I'll never get through 200 pages as a doctor anyway, and I won't miss anything.
[00:07:33] So that kind of stuff really helps us do our jobs more quickly. Recognize, as you can tell, none of
[00:07:39] these are making judgments or substituting for a doctor's judgment. All of these are tools to
[00:07:44] help clinicians do their job better. Then the big one that's really going to come up is the ability
[00:07:48] to create a written document like a note, whether it's a nurse's note or a doctor's note and convert
[00:07:54] a narrative into structured content so that you don't have to type. The typing problem, which
[00:08:00] is a 13% drag on physician productivity, is a huge problem, and generative AI will eliminate the
[00:08:06] need to touch the keyboard. Most observers think that within three years,
[00:08:11] we will have technology that is widely available that will not require us to use a keyboard to interact
[00:08:16] with an electronic health record. The first versions are already being shown to the market. So
[00:08:20] that's a lot of time back. None of that was possible without generative AI. What I didn't describe
[00:08:26] is advice machines for patients. So the leading organizations, the first thing they did before
[00:08:32] they had any generative AI was say, generative AI is not an answering machine. It's a language
[00:08:37] machine. So you see organizations putting prohibitions in place about using the technology to face a patient and
[00:08:44] give a patient advice, even though the lay press often talks about that as the thing.
[00:08:48] That's the one thing we don't want people to do. Now that doesn't mean it can't do it eventually.
[00:08:53] But the challenge there is that the generative AI is really good at the language fluency. That's not
[00:08:58] the problem. The problem is the actual answer. Generative AI doesn't create knowledge. It doesn't
[00:09:03] create an answer that never existed. It will generate an answer right or wrong. So if you're going to
[00:09:08] use generative AI to advise a patient, you'd better have a bulletproof body of answers that it's going
[00:09:15] to be looking to for the information. And the models that we're seeing today that are built on the open
[00:09:19] internet are no different than if you had told someone two years ago, go look it up on the
[00:09:24] internet and take care of yourself. If that wasn't good enough two years ago, it's not good enough
[00:09:27] today. Generative AI didn't solve that problem. It just makes that information more compelling in
[00:09:33] its presentation, not more accurate, more compelling. And so that's one of the big areas is to keep
[00:09:38] that clear. There are a few other areas that are really important. One of them is to understand how
[00:09:42] models are built. Many of these models take information that you put in and use it forever to learn. That's
[00:09:49] how generative AI is supposed to work. Great, but that means you'd better only put in information that
[00:09:54] you're willing to essentially make public and lose control of. So if you cut and paste patient information into ChatGPT
[00:09:59] 3.5, you've just released HIPAA-protected information without consent. You put proprietary information in there,
[00:10:06] you've released it forever; you've lost copyright and protection, all of those things. So getting
[00:10:10] people to understand that that's how it works is an important issue, because we know, for example,
[00:10:16] let's say a large hospital system. The IT organization will put policies in place that say you have
[00:10:22] to use our generative AI tools, but the independent medical staff are well-meaning and they might
[00:10:27] start hacking: now I can use generative AI to build something to do X, Y, and Z. They have no idea
[00:10:32] what they're doing. And they're exposing themselves and others to risk. So we actually think that it's
[00:10:37] really important to get principles out early that people understand what generative AI is, understand
[00:10:42] issues of privacy and protection of confidential information, understand the concept of advice versus
[00:10:47] fluency in use cases. Those are all big issues. They are. And it sounds like you've given this a lot
[00:10:53] of thought. Have you guys documented this? Is a guide like this available to the listeners? So
[00:10:59] for example, on the provider side there's actually a coalition of health systems, with Mayo one of the
[00:11:03] leaders, to promulgate rules for delivery systems to think about this. There are a number of
[00:11:09] organizations that provide guidance around responsible artificial intelligence.
[00:11:15] And there's also a whole body of policy concepts that are being put forward. In the EU there are
[00:11:21] actually full regulatory standards based on this premise. And we'll see some of this
[00:11:26] bleed its way into the US at some point. The thing about generative AI that makes it wonderful
[00:11:31] and scary is that it is constantly doing things that the people who programmed it didn't think
[00:11:37] were possible. That means we can't fully control it. And I'm not talking about necessarily
[00:11:43] creating a sentient intelligent being with agency, but think of it like a nuclear reactor that
[00:11:48] goes out of control. So because it has the potential to do more than what we think one could argue
[00:11:53] that there are uses of the technology that probably should be regulated or classified or limited
[00:12:00] because they pose a societal risk. And you're starting to see regulatory frameworks being built
[00:12:06] to allow us to make decisions about the wrong uses of generative AI because of the fact that
[00:12:12] it can get out of control pretty quickly. Thank you for that. Yeah, it sounds like right now
[00:12:17] there's work being done to navigate and provide guardrails and best practices around the use of the
[00:12:24] technology and more to come. So I really appreciate this discussion a lot. And I know it's top of
[00:12:29] mind for a lot of leaders, Kaveh. So I appreciate that. How do you take a responsible approach to
[00:12:35] gen AI that reduces the dangers to judgment and diagnosis? Yeah, the first thing is what we described
[00:12:40] before, which is understanding that it's not an answering machine. It's a language machine.
[00:12:45] And so whatever is the underlying scientific knowledge that you need, that's the thing that
[00:12:51] you ultimately have to be relying on. It's not a bad way of communicating. That information can
[00:12:56] be very compelling. There's data that says it can be even more empathetic in its communication than
[00:13:01] a human can be. Awesome. That makes people maybe more responsive to it. But the limitations
[00:13:07] of it are that it doesn't create knowledge that never existed. It doesn't create the right answer if
[00:13:11] it didn't exist. It's not an analytic tool. It's a language tool. That's by far the most important.
[00:13:17] And at this point, the challenge we have is that for most people who haven't studied generative AI,
[00:13:22] their only experience is ChatGPT, and the problem with that is simply that the way we experience generative
[00:13:28] AI is through a question-and-answer experience as a consumer. But the purpose of it is not about
[00:13:34] the answer or the accuracy of the answer. It's about the ability to understand and comprehend
[00:13:38] the question with its subtlety and deliver an answer in a rhetorically compelling way. Not about
[00:13:44] the answer being right, but we don't have any way of experiencing the language fluency without
[00:13:50] a question and answer experience. And we get confused and think that what we're really doing is
[00:13:54] judging the answer. Many of us did this when ChatGPT was first in the market and we were
[00:13:59] admonishing people to be careful, and I would fall into that trap. We'd say, here's how to prove you
[00:14:04] shouldn't rely on it: have it write your own biography. And that's obviously a setup, because
[00:14:09] it wouldn't know your biography. It shouldn't be able to write the
[00:14:13] biography if the answer doesn't exist, because it doesn't have answers. So it doesn't prove anything. What
[00:14:18] you should ask is, assuming the facts in the biography are true, is it a well-written biography?
[00:14:23] That's really the only thing that GPT is doing. I love that. Great context and great points here,
[00:14:30] Kaveh, helping us understand gen AI, its promise, its limitations, and the framework of thinking
[00:14:36] that we should be using around it. I can't thank you enough for a phenomenal discussion on this topic.
[00:14:41] What call to action would you leave the listeners with?
[00:14:44] I think we cannot underestimate how important this is going to be to our work. And it's going
[00:14:49] to be driven in the short run by the necessity for us to get time back. And we're going to have
[00:14:55] to use this technology to get time back so we can give it to ourselves and to other people. So
[00:14:59] there's a lot of discussion about this seen through the lens of hype. I think that if you really
[00:15:04] step back and think about what it does and think about the long arc, this is an inevitable technology
[00:15:08] that's going to have a pretty profound effect, you just have to place it in the right context to
[00:15:12] understand its impacts. Totally. Now I love that great closing thought. Folks, just a phenomenal
[00:15:18] conversation with Dr. Kaveh Safavi from Accenture. Check out the show notes for all the details, and
[00:15:25] hit rewind, because this is one that I think you want to hit rewind on a few times and share with
[00:15:29] a friend. I really enjoyed our time together. Kaveh, thank you so much. And oh, the last question:
[00:15:34] if somebody wants to reach out to you and learn more about the work you and the team at Accenture
[00:15:38] are doing, how can they do that? You can either find me on LinkedIn, Kaveh Safavi, or Accenture.com
[00:15:44] and just look me up there. Outstanding. Folks, make sure you do that. It's in the show notes,
[00:15:49] take advantage of it, and really appreciate your time. Thank you. Nice to see you. Bye, guys.

