How ConcertAI is Revolutionizing Cancer Research with AI with Jeff Elton, CEO of ConcertAI
June 17, 2024 · 00:24:15


Data integration and AI play crucial roles in predicting and managing treatment resistance in diseases like cancer.

In this episode, Jeff Elton, CEO of ConcertAI, discusses his team’s focus on oncology and its role in accelerating medical research and improving patient care. He highlights the importance of data integration and AI in understanding diseases like cancer, particularly in predicting and managing treatment resistance. Jeff explains how ConcertAI's technology enables real-time analysis of patient data, aiding in clinical decision-making and trial matching, and delves into setbacks faced in developing AI models, emphasizing the need for transparent and reliable algorithms in healthcare. He also underscores the evolving nature of the workforce in the age of AI, where high-skilled individuals will be essential for leveraging advanced technologies effectively.

Tune in and learn how ConcertAI is shaping the future of healthcare with AI-driven solutions!


Resources: 

  • Watch the entire interview here.
  • Follow and connect with Jeff Elton on LinkedIn.
  • Learn more about ConcertAI on their LinkedIn and website.
  • Browse ConcertAI’s blog here.
  • Learn more about ConcertAI’s podcast here.

[00:00:02] Hey everybody, thank you for joining us again on the Outcomes Rocket. Saul Marquez here, and today I have the privilege of hosting Jeff Elton on the podcast. He is the CEO of ConcertAI, an AI SaaS solutions company providing research

[00:00:20] solutions and patient-centric solutions for life science innovators and the world's leading providers. I'm excited to host him here because he really brings a wealth of knowledge. Previously a leader at consulting firms like Accenture and McKinsey, he brings

[00:00:40] a wealth of knowledge from his background, his PhD, and the work that he does with these technologies in healthcare. So such a privilege to have you join us here today, Jeff. Thanks for being here. Yeah, Saul, thank you very much. It's a pleasure to be here.

[00:00:58] Love your podcast and wanted to take the opportunity. Our privilege, yeah. No, my pleasure, Jeff. And look, there's a lot on the horizon that today's healthcare leaders have at the top of their minds: technologies, changes in the way care is delivered, and the market forces.

[00:01:14] And so I'm excited to dive into what ConcertAI is doing to help make those things easier for today's leaders. Before we do though, Jeff, tell me a little bit about you and help us understand what is it that got you into healthcare?

[00:01:27] I am the son of a physician and a biomedical researcher. Diseases, the causes of diseases, and the different therapeutics that might actually help resolve those were dinner table conversation. We saw a lot of kind of weird photos of different things

[00:01:48] as I was growing up. I do have to tell you that I actually thought that I was going to have to make my own way through life and not do what my parents were doing. And so I actually ended up falling into technology-driven

[00:02:01] industries for a while, really focused on that. And then I was asked by colleagues to come back and do another healthcare project, just because it involved technology. And as I got into it, I had somewhat forgotten just how passionate physicians and biomedical researchers are about what they do.

[00:02:19] And just how absolutely earnest they are, that their entire reason for doing what they do every single day is resolving issues of human health, and actually trying to bring people back to the point of functionality, to be with their families, to have more time with their families.

[00:02:34] And I came away and said, I have the same value system and I want to be surrounded by people who are solving those problems. And after that, I never looked back. Everything I did was either on the healthcare provider side or biomedical

[00:02:49] innovations that were designed to go after high unmet medical needs. You were destined for it, Jeff. You were destined for it. Pulled, compelled, and now here you are. I love it. The cool thing about that is that you took your own path, right?

[00:03:02] And you went your way, and now you've brought into medicine and healthcare much-needed perspectives, different perspectives than if you had started here with a legacy mindset. So I actually think it turned out better for you. Thanks, Saul. I think it was a path.

[00:03:19] It was the path I needed to take. I truly understand. I truly understand. And so let's take a moment to park on ConcertAI. Tell us a little bit about the company, what you guys do, and how in particular you add value to the ecosystem.

[00:03:35] You will probably find I have a tremendous amount of passion about what we do too. ConcertAI is a company very focused on oncology and hematological malignancies. We have the pleasure of working with some of the greatest provider systems around the United States.

[00:03:49] We work with medical societies and we work with biomedical innovators. The work we do starts at something that's called translational medicine and early clinical development. Translational medicine is where you have insights that may have come from a laboratory, but you're trying to actually understand their implications for actually

[00:04:08] resolving a human disease problem. And so it's taking data and things that actually came out of experimentation, but then saying, what's its relevance to patients and a particular disease state? And can you plan and set up a clinical trial that might resolve something that hasn't been resolved today?

[00:04:27] And so we spend quite a bit of time focusing there. We focus on accelerating new medicines through the clinical development pipeline. We take a look at medicines that are available and say, how are they working? How do we make them work better?

[00:04:40] And then we go all the way over to having diagnostic interpretation solutions that are used by healthcare providers to make sure that they're diagnosing as accurately as possible, and then assigning the intervention or the therapy that's going to be most beneficial.

[00:04:56] So we actually think about ourselves as sitting here between two incredible ecosystems, healthcare providers on the one hand, biomedical innovators on the other hand, bringing those things through a process, hopefully doing it a lot faster and a lot more precisely, and then getting it into practice as much

[00:05:12] as possible. And then we really keep balancing between the two ends of the ecosystem. Today, we probably have the largest collection of research-grade oncology data of anybody in the world. We probably grow it faster than most organizations.

[00:05:27] And part of it is actually what we've become very good at is understanding what's the problem we're trying to solve and how do we flex those ecosystems that I just described to help resolve that problem and do it as quickly as possible.

[00:05:40] You can imagine just every, everything we're doing is about human health. And inside our company, we have physicians that have taken care of patients, nurses that have taken care of patients, data scientists that are here because of what we do.

[00:05:53] We have engineers that want to build the type of solutions we're doing because they know that somebody is going to be better off. And so everybody's here for kind of that same reason you just asked me, like, why am I here? What's my passion?

[00:06:05] Everybody had a choice where they could do things, but decided that this was actually a worthy entity, but also just an exciting place to be. I really appreciate you sharing that, Jeff, and the ecosystem that you outlined for us, the players within it.

[00:06:19] It's about acceleration, whether it's the translational efforts or the optimization, all the way to diagnosis. It's a fantastic mission that you guys are on. Help us uncover some of the how: what is it that you guys do differently or better?

[00:06:37] And as we talk about that, help us understand some use cases and make it real for us. So when you're dealing with some of the data, you actually may have diseases where, and this tends to occur in cancer even more than other diseases,

[00:06:57] you oftentimes have what's called a targeted therapy. Cancer is oftentimes a disease of different genetic mutations, where cells that were healthy actually mutate, and that's where tumor growth begins. What you're trying

[00:07:14] to do with a lot of medicines is either turn off the faster-growing cells or turn off the particular mutation driving that tumor's growth. But unfortunately, patients seem to develop resistance very quickly to a medicine that brought benefit.

[00:07:30] And so by bringing together some of our data, we bring together what's called an exome, which is actually every single known mutation across some 29,000 genes that may be driving something, and a transcriptome, which is from RNA. We can begin to define which patients are developing resistance and

[00:07:48] what's the pattern for that resistance. And so if we're going to actually take new drugs into the clinic, or maybe even do something like, there's a class of drugs called ADCs that hit two targets at the

[00:07:58] same time, and it's meant to countervail the pattern of resistance that could be acquired, so that medicine lasts longer. So we're bringing that data together with the biopharma innovators to design those studies, to assure greater benefit from really quite powerful medicines, and see

[00:08:15] if we can extend the time before a pattern of resistance will come up. Resistance will occur. We just want to make it not 18 months; we want to make it 48 months or 60 months, a longer period of time, because then more options are available for those patients. Yeah, no, super fascinating.

[00:08:30] As it relates to understanding the information that you're getting around resistance, does this happen in real time or in clinic or are you doing this outside of the clinic? Help us understand the context there. Yeah, great question.

[00:08:45] So the data we bring together, we work with a variety of different provider settings. They allow those data to be used for research purposes. So it's what's called retrospective analysis. It's actually being collected almost continuously every 24 to 48 hours. We have to integrate data from multiple data sources.

[00:09:05] What we're doing is we're creating a picture, a picture built on what sometimes gets called multimodal data. In fact, the multimodal data is what's underpinning the ability for artificial intelligence to be as effective as it is.

[00:09:18] When you have the multiple modes of data, it allows you to establish causality, not correlations but causality, in a process, and have confidence in the insights that are being generated in that particular process. So we're bringing that out retrospectively as care is being delivered and bringing it together.

[00:09:36] Now, once those insights have been derived and you may have a clinical trial or a study or a new medicine that's gone in for clinical development, we have other tools which now run in the workflow of that provider site. And these will do patient to trial matching.

[00:09:52] So they're using artificial intelligence, natural language processing, and generative AI to actually identify patients and tightly match them. And essentially we're reading files, record data sets, and laboratory information systems like experts would.

[00:10:11] Bringing that back in front of the experts in a way that's much more efficient and complete, to help them identify the patients that may be beneficiaries of that particular clinical trial. So really, to the nature of your question, some of it's

[00:10:25] retrospective, in what I'll call a digital analytic lab where we're bringing artificial intelligence, machine learning, and other methodologies to get those insights that then guide. And then we go the other direction: we're trying to bring tools to actually

[00:10:40] action it in real time so that patients can actually be beneficiaries. Yeah, no, that's fantastic, Jeff. Thanks for the insight there. Building these types of tools, it's not easy. And whether it's science or engineering or medicine, there's a lot of failure and trial and error.
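
The patient-to-trial matching Jeff describes — reading records and tightly matching patients against eligibility criteria — can be sketched as a rule check over fields that NLP has already extracted from the record. Everything below is a hypothetical illustration: the field names, the criteria, and the values are invented for this sketch, not drawn from ConcertAI's systems.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    """Structured fields extracted (in practice, by NLP) from a record."""
    age: int
    diagnosis: str
    biomarkers: set[str]
    prior_lines_of_therapy: int

def eligible(p: Patient) -> bool:
    """Hypothetical eligibility criteria for an illustrative trial."""
    return (
        p.age >= 18
        and p.diagnosis == "NSCLC"
        and "EGFR L858R" in p.biomarkers
        and p.prior_lines_of_therapy <= 1
    )

patients = [
    Patient(64, "NSCLC", {"EGFR L858R"}, 1),
    Patient(57, "NSCLC", {"KRAS G12C"}, 0),
]
matches = [p for p in patients if eligible(p)]
print(len(matches))  # 1
```

The hard part in production is not this final check but extracting those structured fields accurately from unstructured notes, which is where the NLP and generative AI discussed above come in.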

[00:10:57] Talk to us about maybe a setback that comes to mind that gave you one of the biggest learnings and has made you better as a result. Like all things, it's interesting you ask about setbacks. If you're doing work

[00:11:11] in healthcare and life sciences, and you probably know this, Saul, just from other work you've done in your own podcast and work, areas like drug discovery and early clinical development actually have lots of points of failure, right?

[00:11:26] What you're trying to do is, there's a certain amount of experimentation where you're proving out a methodology, and oftentimes it's not unusual that things fail. I worked at Novartis in the research and early clinical development organization for a long time. And there was a very important expression that Dr.

[00:11:45] Fishman, who headed up R&D at the time, brought forward, which is sometimes the failure is not the failure of the scientist; it's that the science wasn't ready to reveal itself yet. And so this notion that you want to make sure that you're getting the experimentation,

[00:12:02] and that the failure doesn't mean you were unsuccessful: what did you glean and learn from the point of failure? So we've spent years optimizing models that can read, I mean, AI and natural language processing models that actually could read what's referred to as unstructured documents, unstructured data.

[00:12:35] What that means is PDFs that are in the electronic medical record, notes that may be sitting there that don't have a context. They actually look like graphical forms to a machine; a machine can't read those.

[00:12:35] But if you're not very accurate at reading that, you can produce as many flawed results as you can have positive results. And so we've had a multi-year process of getting to the point where we could start developing models that had an accuracy equivalent to human beings.

[00:12:52] And therefore now we can do things at much broader scale and much greater speed. We do have to admit that we thought we were actually going to be able to do this probably about three years ago.

[00:13:04] But what's called the accuracy, and the recall, and just the general utility of what was being done didn't give you the confidence that you could base any decisions on it for a while.
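
The accuracy and recall Jeff refers to are the standard way extraction models are scored against human-expert labels. A minimal sketch, with counts that are purely illustrative (not ConcertAI's figures):

```python
# Compare model extractions against expert ("gold") labels.
# The counts below are made up for illustration.
true_positives = 90   # model found the value and the expert agrees
false_positives = 10  # model asserted a value the expert did not
false_negatives = 30  # expert found a value the model missed

precision = true_positives / (true_positives + false_positives)  # 0.90
recall = true_positives / (true_positives + false_negatives)     # 0.75

# A model can be precise yet miss too much (low recall) to be
# decision-grade; both must approach expert-level before use.
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.3f}")
```

A model with high precision but 0.75 recall would silently drop a quarter of the real findings, which is why neither number alone gave the confidence to base decisions on.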

[00:13:15] But we kept working the problem, and we kept understanding what it was that actually wasn't allowing us to be successful, to work our way through. Most things we do, you have points where your performance isn't what you need it to be. It takes too much processing.

[00:13:33] It takes too much time. The accuracy is too low. The question is, if this were to work, would you care? And if the answer is you would care a lot, then those are the classes of problems you go in and understand: okay, that approach didn't work.

[00:13:48] What can I do to get it? Another example here is if you use one generative AI approach, and you've probably heard this notion of hallucinations. Hallucination means the AI doesn't like empty cells. It likes everything to be neat, ticked and tied, laid out just perfectly.

[00:14:06] And where it sees emptiness, it imputes something. And sometimes what it imputes is not accurate; hence it's sometimes called hallucination, where it just puts something in to have something in. We found that by overlaying two or three different generative AI approaches, none

[00:14:23] of them have the same pattern of hallucination as the others. You can start to multiplex your own approach, whereas before you said that's too inadequate, it gives you too many results that you couldn't trust. And so this idea of trustworthy AI is a big theme.

[00:14:40] Could you actually begin to develop different ways of overlaying alternative methodologies simultaneously, so each of them actually can overcome the inadequacies of the others, and actually get to a much higher performance level? That's a non-obvious way to begin working, but it is a way where, as you go

[00:14:57] through it, you see the failure and you understand each of the individual failures. But sometimes when you put it together, the actual compilation of the three overcomes the failure of each of the individual parts.
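
The overlaying Jeff describes is, in spirit, an ensemble with consensus voting: run several independent extractors over the same document and only trust a value when enough of them agree. Because each model hallucinates in a different way, a fabricated value is unlikely to be reproduced by the others. The sketch below is a hypothetical illustration; the model stand-ins and field values are invented, not ConcertAI's actual pipeline.

```python
from collections import Counter
from typing import Callable, Optional

def consensus_extract(
    document: str,
    models: list[Callable[[str], Optional[str]]],
    min_agreement: int = 2,
) -> Optional[str]:
    """Return a value only when enough independent models agree.

    Each model may return None (abstain) or a string answer. Requiring
    agreement filters most hallucinations at the cost of some coverage.
    """
    answers = [m(document) for m in models]
    votes = Counter(a for a in answers if a is not None)
    if not votes:
        return None  # every model abstained
    value, count = votes.most_common(1)[0]
    return value if count >= min_agreement else None

# Toy stand-ins for three different extraction approaches; model_b
# "hallucinates" a value whenever it finds nothing.
model_a = lambda doc: "EGFR L858R" if "L858R" in doc else None
model_b = lambda doc: "EGFR L858R" if "EGFR" in doc else "KRAS G12C"
model_c = lambda doc: "EGFR L858R" if "L858R" in doc else None

note = "Pathology note: tumor positive for EGFR L858R mutation."
print(consensus_extract(note, [model_a, model_b, model_c]))  # EGFR L858R
print(consensus_extract("No biomarker found.", [model_a, model_b, model_c]))  # None
```

On the second document, model_b's hallucinated "KRAS G12C" gets only one vote and is discarded, which is exactly the multiplexing effect described above.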

[00:15:09] It's almost like a built-in redundancy gut check. Actually, gut checks are good. Bodies have redundancy, right? The whole reason why we survived is we have multiple ways of doing the same things. And I think, Saul, it's not a bad principle for technology, that same thing.

[00:15:26] You want the redundancy. You want to make sure that you're going to have high assurance. And if you and I knew that a decision was being made on technology, we'd also want to know that that's a characteristic of what's being built in. That's fascinating.

[00:15:38] And I'm sure you made it sound a lot easier than it actually is, because it sounds like it's four ChatGPTs on top of each other and then it works, but I'm sure it's a lot more than that.

[00:15:52] I'm sure my data science team and AI engineering team, if they happen to be listening to this, would say, yeah, that's what he thinks, because he's up there. Let me show you the models. I think part of this, too, is an effort:

[00:16:07] we try to develop language to demystify things. As you just said, you don't want a black box. You don't want anything that's actually going into healthcare to be a black box. We publish all of our models and take them into the American Society of Clinical Oncology.

[00:16:23] Everything's got to be trusted and placed out there. You learn something by forcing it. For sure. And I've heard it, I forget where it was, it was an interview that we did.

[00:16:31] I forget who it was with, but they called it a clear box, and you like a clear box. You can see through it. You understand what's happening with it. Love that. Love that example. And certainly your appetite for failure is obviously a testament

[00:16:46] to the success you guys have had. Look, with all of these advances, talent, the nature of the workforce, operations: what advice would you give to leaders to make sure that's matching the rate of advance? So, a great question.

[00:17:04] And actually one that I think people are really spending a lot of time wrestling with right now. So for one, the discussion about the impact that AI will actually have on healthcare, but also other enterprises, it's actually real.

[00:17:17] We've seen more performance and capabilities in the last 12 months than we had seen in the prior four years. So this is real. I think things will actually become a little different. What is actually happening is very simple: things like administrative

[00:17:38] processes, processing large amounts of things, are going to get done with incredible precision and done continuously, which actually has huge implications on jobs and even what an organization looks like. So one part of it is you're going to have to organize your data, organize

[00:17:56] even how you think about line functions in a very different way. Data needs to be available and present, but in a way that the tools can actually access it and work. So one part of it is you're going to organize differently,

[00:18:09] how you think about how different groups will be different. But the other part of it is, you may find that fewer people are required, but the people that are required actually have to have greater skills.

[00:18:22] This isn't actually about getting rid of the high-skilled people and saying, hey, AI is going to take it over. It's that the easy stuff gets absorbed by the machines and the hard stuff comes back to the people.

[00:18:34] And so you're going to actually have to have the people that know how to do something with the hard stuff. And you may find organizations that are going to be reduced in size by 30 or 40%. But I wouldn't overthink this.

[00:18:46] We've been through all sorts of organizational changes, from the Industrial Revolution to the advent of computers, everything else. This isn't actually that different from other changes we've made, but the implication here is that organizations will be vessels, if you will, of much higher expertise supported by super powerful tools.

[00:19:09] But if you think about that, organizations don't change that quickly. Tech's changing quickly, but humans and how they work, how they hire, and business processes and clinical processes, those don't change

[00:19:24] that quickly. So which ones, and how, do we want to move quickly? How do we reshape these organizations to really make sure that we're taking advantage of these things, but doing it strategically and with a sense of purpose around that as well?

[00:19:37] And this is going to be a new skill. Call it, is this where HR works? Is this where the chief AI officer works? Is this where... because even the leadership team is going to have to

[00:19:48] think about what they're doing differently. Leadership will look different. So it's a great question. And sometimes those are the subtle things that tell you whether you're going to get the value out of all these innovations. Now, a great perspective there.

[00:20:00] And certainly, Jeff, something for all of our listeners and viewers to think about. We are here toward the end. And hey, just a question: you do a lot of interesting things. What do you do for fun? Yeah.

[00:20:15] So, just for the record, I do work a lot, and I enjoy what I'm doing. I am a super avid hiker. I am an avid sea kayaker. Nothing is happier than being out on the water in my kayak.

[00:20:32] I do a lot of biking, particularly on road. I do a little off road, but mainly road. Because I do a lot of work and I'm used to visualizing data, it probably wouldn't come as a surprise that I also do what I'll call more

[00:20:45] art-focused photography and things of that nature. And now I can do heavy modifications of all my images. Oh, man. I tell you, do you have like a collection somewhere? Oh yeah. Yeah.

[00:20:58] In fact, at some point we can have a longer kind of view on that. But again, I love music too. So if you're looking at it, I have super diverse tastes.

[00:21:07] If I'm not in a meeting, if I weren't on this video, you'd find music accompanying me wherever I went. Thanks for sharing that, Jeff. Hey, I like to dig into that because what makes people so interesting

[00:21:18] is what they do every day, but also what they do when they're not doing their main thing. So appreciate you sharing that. Look, we're here at the end and I want to give you an opportunity to share with the listeners and viewers where they can learn more and

[00:21:32] engage with you and ConcertAI. Yeah. So thanks, Saul. You can find us at www.concertai.com. What I would say is, go take a look at our blogs. We actually try to write a lot, express a lot of thoughts, and we'd welcome responses on that.

[00:21:50] Much as Saul has, we've actually tried to get together with peer colleagues and put this into a podcast series as well. So in fact, Saul, you and I should probably talk about that a little

[00:22:01] bit, because that's a world you've done extremely well in. And then if you're in our field, if you're in the field of medicine or in the field of data science, we're at most of the major

[00:22:12] academic meetings and we run sessions and panels and other interactions. So take a look for that. We believe in communication and openness, back to your point about the transparent box or clear box. That notion is really important.

[00:22:26] So we try to actually talk about the ideas, why we do what we're doing. We put out case vignettes all the time. We try to publish posters. We try to get it into peer-reviewed forums so that people can give us that feedback.

[00:22:37] So those are our channels, and we always welcome questions and we welcome collaborations. Thanks for that opportunity. And folks, for everybody listening and viewing, all the things that you need are in the show notes. So we'll leave links to ConcertAI's website, their blog. Is the podcast live, Jeff?

[00:22:56] The podcast series is live. Yep. We'll get you the link to that as well. Amazing. So we'll get you guys a link to that, and you can find it on Apple Podcasts, Spotify, all the normal sources. Amazing.

[00:23:07] So we'll link that up so you guys can dig into that content as well, because it's good stuff. This is just the tip of the iceberg, and Jeff, really appreciate you spending time with us today. Thank you, Saul. It was a pleasure. It went by in just a flash.

[00:23:20] Thank you.