
Power of Integrated Patient Engagement

July 22, 2019

Transcript

JEFF LEE

Okay so what I want to talk about today is a topic that’s adjacent to the central topic for this couple of days, which is engaging the patient. And we think of eCOA as one aspect of the patient’s experience, one aspect of their digital experience. But we need to be thinking about this in a more active way relative to other activities that the patient is undertaking in the study. And I delivered a talk a little while back with Melanie Goodwin from Pfizer who leads one of their patient recruitment groups. And we talked a little bit about this, and I wanted to bring this same message to you all, with her permission, with the obvious caveats that this represents her individual opinion and not Pfizer’s, and same here.

But I wanted to talk about how we are thinking about delivering the best patient experience in clinical studies, whether it’s the COA side of it or just the core study conduct. And one of the things that’s been obvious to us sitting here at CRF Bracket is that the industry is using the term patient centricity much more than I saw five years ago.

Can I ask for a show of hands of who feels like they have a really good handle on what it means to be patient centric? Like, super good handle. How many people feel like it’s more of an amorphous term, you’re not really sure what it means, it could mean anything. Yeah I see more hands there. And I think that’s sort of the challenge, it means everything and it means nothing at the same time.

So we’ve tried to think about what it means to us and how we make sure we’re embracing the spirit of this in our studies. And what I’ve seen over the last several years is that patient centricity has meant more patient voice in designing the study. There are patients-as-partners conferences, there are patient advocacy groups that are active in this regard. And I think that’s really valuable. That’s a very good starting point. But it feels like there’s a little bit of a risk that we’re going to focus on that and forget that once the protocol is done, once the study is designed, there’s still work to do.

So Melanie and I had this talk about what can we do before the study, what can we do during the study, and then what comes after. And today I’m just going to focus on the first two segments, in the interest of time.

When it comes to before the trial, really today we’re talking about the protocol itself. We’re talking about how many visits, how many assessments, how long between pinpricks, how many biopsies, etc. And again, those are really important details of the study to get right. But what we’re doing less of is thinking about the methods of actually conducting the study itself. Once the science is set, once it is what it is, then what are we doing to figure out what’s the best experience we can give that patient, given that particular study design. And Melanie shared something I really liked, which was sort of like a patient profile map, which makes a really interesting point about how we optimize the study and the way we guide the patient through the study, given all the variables that exist. And if you look at each one of these, you start to see that there are so many things that can change one study to the next, one patient to the next. The protocol itself, right, the specific procedures, whether they’re invasive, whether they’re not, whether they’re in-home, whether they’re not. How long the study is, short versus long obviously is really critical. The environment itself, whether there are competitive studies, whether there are other people that are helping patients learn about this indication. The patient themselves, whether they’re in physical or mental distress, whether they’re healthy volunteers. Down to the sites, what types of sites, what types of clinicians. Where is the patient in their journey, are they treatment naive. And finally the benefits to the patient in the study. Are they intrinsically motivated, are they extrinsically motivated. And Melanie’s message, which I thought was interesting, is that if you change any one of these, you could substantially change the whole proposition to the patient of being in that study.

So what Pfizer is doing presently is trying, in a very structured way, to think about that patient profile, understand the patient’s realities on all these variables, and then ask, okay, if X is true, what can we do to solve for that? What can we do to make that patient have a better experience in the study? And that leads to a better understanding of what tactics, what methods—we’ll talk about specific examples in a minute—we can use to support the patient. Where do we do that: is it at the site, is it in the home? When: is it during visits, is it at home doing procedures? And then how, exactly what are we doing differently: is it travel assistance, is it home health nursing, is it patient advocates? Understanding those five buckets and the individual variables within them, we can start to really make smart decisions about what methods we can use to help that patient.
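
To make the what/where/when/how framing concrete, here is a minimal sketch of how that profile-to-tactic reasoning could be expressed; the profile fields, tactic names, and rules are illustrative assumptions, not Pfizer's actual tool.

```python
# Illustrative only: a minimal way to capture the "if X is true about the
# patient profile, what do we do, where, when, and how" reasoning described
# above. All field and tactic names are hypothetical.
from dataclasses import dataclass

@dataclass
class PatientProfile:
    study_length_months: int      # short vs. long study
    invasive_procedures: bool     # e.g., biopsies
    travel_burden_high: bool      # distance / mobility to the site
    healthy_volunteers: bool      # vs. patients in physical or mental distress
    treatment_naive: bool         # where the patient is in their journey

def suggest_tactics(profile: PatientProfile) -> list:
    """Map profile attributes to candidate support tactics (what/where/when/how)."""
    tactics = []
    if profile.travel_burden_high:
        tactics.append({"what": "travel assistance", "where": "home-to-site",
                        "when": "every visit", "how": "pre-booked transport"})
    if profile.invasive_procedures:
        tactics.append({"what": "home health nursing", "where": "home",
                        "when": "post-procedure", "how": "scheduled nurse visit"})
    if profile.study_length_months >= 12:
        tactics.append({"what": "engagement messaging", "where": "patient's phone",
                        "when": "between visits", "how": "SMS / app notifications"})
    return tactics

print(suggest_tactics(PatientProfile(18, True, True, False, True)))
```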

[05:15]

So again, we’re not just talking about how many visits. We’re talking about how the patient gets to the visit. How do I reimburse them for their expenses. Those types of study operational methods. Now, we came across this in an industry program with Pfizer a number of years ago, with an adolescent smoking cessation study. And you’ve all had this experience, whatever role you’re in, where you see a protocol and you have an “oh my lord” moment. When that psilocybin RFP came through, a lot of us had our “oh my lord” moment, you know. For different reasons. But when this study came through, it was like, wait, we’re trying to recruit patients aged 12-17 who are so dependent on smoking that they’re ready to really engage in a clinical study to get off that dependence. I don’t know who that person is. I don’t really know how to identify with that 13-year-old that’s that hooked on smoking. And what we find is, when you have adolescent studies, everyone starts thinking games. Do games, it’s like a silver bullet, it’s a panacea, right. And part of the role that we’ve played over the years is to say, well yeah, but this person has plenty of game options, they’ve got consoles, they’ve got browser games, they’ve got plenty of opportunities. How do we do this effectively? And in this particular case, what we did, within the limited opportunity we had, was to say, let’s tell this patient a story. Let’s tell them a story that will reveal itself chapter by chapter over time. Then we’re basically rewarding the patient for staying in the study, and you see some examples here—it’s a little bit blurry—where you’re alerting the patient, hey, your next chapter is ready. And you see within the patient app on the left an example of the story. It happened to be a story about an adventurer seeking a lost city, professionally copywritten, professionally illustrated. And the idea was that each chapter ends with a cliffhanger, so it makes you want to find out what happens next.
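
As an illustrative aside, the chapter-by-chapter release described here boils down to a simple unlock schedule plus a notification. A toy sketch, with a made-up cadence and chapter count:

```python
# A toy sketch of the chapter-by-chapter release idea: unlock one story
# chapter per interval and notify the patient when the next one is ready.
# The start date, interval, and chapter count are assumptions.
from datetime import date, timedelta
from typing import Optional

STUDY_START = date(2024, 1, 1)
CHAPTER_INTERVAL_DAYS = 14   # assumed cadence, e.g., aligned to the visit schedule
TOTAL_CHAPTERS = 10

def chapters_unlocked(today: date) -> int:
    """How many chapters the patient has earned so far by staying in the study."""
    elapsed = (today - STUDY_START).days
    return min(TOTAL_CHAPTERS, max(0, elapsed // CHAPTER_INTERVAL_DAYS + 1))

def next_chapter_notification(today: date) -> Optional[str]:
    unlocked = chapters_unlocked(today)
    if unlocked < TOTAL_CHAPTERS:
        release = STUDY_START + timedelta(days=unlocked * CHAPTER_INTERVAL_DAYS)
        return f"Chapter {unlocked + 1} is ready on {release.isoformat()}."
    return None

print(chapters_unlocked(date(2024, 2, 15)))          # 4 chapters unlocked so far
print(next_chapter_notification(date(2024, 2, 15)))  # when chapter 5 arrives
```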

Now, it was the best we could do, given the arrangement in the study. There was no money for an arcade-style game. And I was frankly very nervous: is this going to compete? There wasn’t Fortnite then, but whatever—it was Minecraft, right. So was this going to compete with Minecraft? But I think that, having understood the patient a little bit more, understanding that we were taking a method to support that patient that was really aligned with who they were, was a good starting point. What I really liked about the partnership with Pfizer—and we’ve done this with Shire when Joe was there years ago—is engaging a patient panel, not just about the protocol but about these methods, about this idea. And it was very relieving to bring this to a group called iCAN, which is a patient group for adolescents, and show them this. Would this be meaningful to these people? Would it seem pedantic or sort of laughable by comparison to the other game options they had? And this got terrific feedback from that group, and it kind of gave us the confidence to bring this method into this study. And we found during the course of this study that the patients who were using this feature, which was optional, were more likely to come to their visits, and they were more likely to persist in the study. So there’s some potential correlated benefit from having this extra layer of engagement.

So the thread here is thinking not just about the protocol but about the methods before the study, figuring out what the right method is, and then figuring out—if you can—how to test that theory with real patients before you actually get started.

This is the before the study part. Then we’re in the study, right, and this is what I was saying earlier about at some point the protocol is what it is. So when we think about how do we help the patient during the study, we’ve got a lot of options. So many options actually that it feels like we need to be very careful about how we combine these options. And this is a home-grown table, so it’s not necessarily completely inclusive. But the way we think about this is that you’ve got some classic recurring challenges that you could fit into the buckets. On the left, you need the patient to come to visits. There’s protocol obligations that you want them to follow. Generally, they’re consuming meds and sometimes that’s in-clinic and that’s not a problem, and sometimes that’s take-home. You’ve got diaries in some studies, so that’s a factor. And then ultimately you just want them to complete the study.

[10:09]

This isn’t everything, and not every one of these exists on every study. But these are sort of the traditional buckets that we’re seeing. And we look out across the realm of people that are supporting patients in studies and we see a lot of different answers to these questions. And most of these everybody knows, so I won’t belabor each one of them, but together they give us a good arsenal to try to help patients if we have specific concerns in different protocols.

So when we’re thinking about, how do we draw from this arsenal, it starts with that patient profile, and it starts with understanding what those challenges are going to be. And we have a lot of different options.

Now the option that we’ve been most actively pursuing within CRF Bracket purely around patient engagement is the system called MPAL, so this is a little bit our bias. This is a system that communicates to the patient through SMS, through email, through voice—voice being intended for patients that have literacy issues or vision impairment and just simply can’t read a message—and through mobile app. So this is a system we’ve used for many years now on hundreds and hundreds of protocols, so we’ve seen how this can impact patients, and I’ll talk a little bit about that. But this is sort of our vantage point around this general topic.
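
For illustration only, and not the actual MPAL implementation: multi-channel delivery of this kind reduces to routing each message to the channel a patient opted into, with voice as the fallback for patients who cannot read a message. A minimal sketch:

```python
# A simplified sketch of per-patient channel routing. The send functions are
# stubs; channel names and patient fields are assumptions for illustration.
def send_sms(number: str, text: str):   print(f"SMS to {number}: {text}")
def send_email(addr: str, text: str):   print(f"Email to {addr}: {text}")
def send_voice(number: str, text: str): print(f"Voice call to {number}, reading: {text}")
def send_push(app_id: str, text: str):  print(f"App notification to {app_id}: {text}")

def notify(patient: dict, message: str):
    """Deliver one study communication over the channel the patient opted into."""
    if patient.get("needs_audio"):          # literacy issues or vision impairment
        send_voice(patient["phone"], message)
    elif patient.get("channel") == "app":
        send_push(patient["app_id"], message)
    elif patient.get("channel") == "email":
        send_email(patient["email"], message)
    else:                                   # SMS as the default channel
        send_sms(patient["phone"], message)

notify({"channel": "sms", "phone": "+1-555-0100"},
       "Reminder: your visit is tomorrow at 9am.")
```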

And we’ve seen some pretty remarkable results—some of the examples we’ll talk about in a moment—to the point where it makes you wonder, are we doing all of these things? Which of these are we doing, which ones have we demonstrated work? Are we doing these for the studies when we need them? Because our experience has been that many studies don’t do these until it’s reactive. And the talk with Melanie kind of related that reality back to the patient recruitment reality, where many studies don’t engage patient recruitment until it’s too late. They rely on the sites, they hope it’s going to go well, and then they react, putting the study into rescue mode when it’s behind enrollment targets. And we kind of have the same dynamic afoot with patient engagement. We know it works but we don’t do it on every study. It reminded me of when I came into this space in 2010 and saw all the Ken Getz data and thought to myself, wait, how many studies are there that fail because they didn’t enroll, how many studies are there where 60% of sites enroll zero patients? We know this is happening and yet why aren’t we changing? And the message that Melanie has been taking within Pfizer is one that applies to this patient engagement topic as well, which is that when you’re dealing with a misfire from a reactive standpoint, when there’s the rescue, then you’re just throwing whatever you can at the problem. And that ends up meaning that your spend is not as tactical, not as targeted, not as efficient; you’re just rushing to get something out there, figuring out how to activate more sites, doing a media campaign, doing whatever you can do. Whereas if you do this more proactively, you’re more targeted, you’re being driven by the patient profile, you’re picking specific tactics that you know will work, and you’re spending less with more effective results. So maybe you do that proactively on many studies and spend less than if you do it reactively on your studies that are failing. So I think the goal is to understand how to pick these methods and then how to justify them internally, relative to just being reactive.

And I would say one of the key elements we have in doing that is to be able to demonstrate the value. Any one of these—I go back to that matrix—everything should have a specific measurable impact, it should be boosting diary completion, it should be boosting protocol compliance, it should be showing a measurable result. And that’s kind of the way we’ve organized our thinking with our product, which is designed to help with many of these elements and, importantly, to measure that. Anything that we talked about here can be measured. And I want to describe a little bit what we’re seeing in terms of those measurements. Sometimes it’s easy, sometimes it’s hard. Usually it takes advance planning, but if we’re trying to make sure patients come to visits, well then let’s look for protocol deviations associated with visit attendance—no-shows, rescheduled visits. If we’re trying to measure medication adherence, then obviously we have PK assessments or potentially pill counts. Diary completion, we all know that world really well. Protocol deviations related to study activities are measurable, typically logged. And then finally, completion rates. So we have all the measurements we could want to demonstrate that this works. And when we’ve used them, we’ve seen a really powerful correlated benefit. And Joe mentioned that the digital health world is here, it’s on us. And when you look out beyond clinical trials into the body of evidence around how digital health impacts people’s behaviors, their health outcomes, the evidence is pretty overwhelmingly positive and suggestive that if you’re engaging somebody through digital communications, mobile or otherwise, it’s going to make an impact on how they’re behaving, how they’re operating.
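
A rough sketch of that measurement mapping, tying each engagement goal to an operational metric that studies already capture; the labels and the example numbers below are placeholders:

```python
# Each engagement goal is paired with a metric that already exists in study
# operations data. Field names and figures are invented for illustration.
GOAL_METRICS = {
    "visit attendance":     "visit-related protocol deviations (no-shows, reschedules)",
    "medication adherence": "PK assessments / pill counts",
    "diary compliance":     "diary completion rate",
    "protocol compliance":  "logged protocol deviations",
    "retention":            "study completion / discontinuation rate",
}

def diary_completion_rate(expected: int, completed: int) -> float:
    """One of the simpler metrics: completed diaries over expected diaries."""
    return completed / expected if expected else 0.0

for goal, metric in GOAL_METRICS.items():
    print(f"{goal:>22}: measured via {metric}")
print(f"Example diary completion: {diary_completion_rate(90, 81):.0%}")
```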

[15:32]

And here we see the same thing. When we’ve communicated to patients, typically it’s optional. And that makes sense, right. You know, eCOA can’t be optional, it’s part of the protocol, you need this data. Patient engagement, whether you’re going to communicate with someone, offer them reimbursement, or do whichever one of these tactics it might be, you wouldn’t require them to do that as part of being in the study. So you’re going to let the site offer it to them. And what that’s meant for us is that in every study we have some patients who are getting it and some patients who aren’t. So we get to look across those two cohorts and see, going back to those measures, what the impact seems to be.

We had a vaccine study a while back in which about 65% of the patients were getting communications to their phone. And when we looked at that group versus those who weren’t, we saw a significant drop in protocol deviations, they were less likely to have visit-related protocol deviations, and most importantly we saw a significant impact on dropout rate: a 50% reduction in discontinuation rate in this particular study. We’ve seen that about a dozen times, a 50% impact to discontinuation rate. If you go back to that question of, do you do this proactively or do you do this reactively: if you could wave a wand and find a way to reduce the discontinuation rate by half, why wouldn’t you always do that?
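
The cohort comparison described here is straightforward to compute once patients are flagged as opted-in or not. A minimal sketch with placeholder data, not the study's actual numbers:

```python
# Compare discontinuation rates between patients who opted into communications
# and those who did not. The synthetic lists below are placeholders chosen to
# show roughly a 50% relative reduction; they are not real study data.
def discontinuation_rate(patients: list) -> float:
    dropped = sum(1 for p in patients if p["discontinued"])
    return dropped / len(patients)

engaged     = [{"discontinued": i % 20 == 0} for i in range(650)]   # opted in (~65%)
not_engaged = [{"discontinued": i % 10 == 0} for i in range(350)]   # did not opt in

r_engaged, r_not = discontinuation_rate(engaged), discontinuation_rate(not_engaged)
print(f"Engaged: {r_engaged:.1%}, not engaged: {r_not:.1%}, "
      f"relative reduction: {1 - r_engaged / r_not:.0%}")
```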

And when you think about that from a metrics standpoint, if you go back to the Ken Getz data and you assume that that individual patient has a loaded value, a total cost of $30-40,000, then the ROI for this starts to look amazing, right. If you could keep half of those patients from dropping out—and in this particular study, which happened to also be a Pfizer study, these were their calculations, not ours, because vendors will do magic with Excel—Pfizer literally did the math and said the number of patients that didn’t drop out, that we think would have dropped out, ended up being nearly a $5 million saving.
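
The back-of-the-envelope math behind that saving is simply retained patients multiplied by a loaded cost per patient. A sketch with assumed figures:

```python
# Illustrative ROI arithmetic only; both numbers below are assumptions,
# not the figures from the study mentioned above.
loaded_cost_per_patient = 35_000      # assumed midpoint of the $30-40k range
patients_retained = 140               # patients assumed not to have dropped out

gross_saving = patients_retained * loaded_cost_per_patient
print(f"Estimated saving: ${gross_saving:,}")   # about $4.9M with these assumptions
```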

So the category of patient engagement, of digital tools to help the patient, is in the shadows of the other more classic e-clinical stack areas, like EDC, IRT, eCOA. But this is one that has tremendous opportunity for us to help the patient.

But then, if we’re going to do these things, if we know what we need to do for the patient, when and where and how, and we know that it has an impact, then the question becomes, how do we organize this for the patient in the study. And this again is one of Melanie’s slides, which brings home an interesting point, and a really thorny challenge. The way we do this today is by study. The container is the study. Study X gets tactics A, B, C. It gets travel reimbursement, it gets digital communications, it gets patient advocates. We’ve made it very much study and site centric. But really, in an ideal world we would make this more patient centric. So you would have all of those providers, or all the ones you want to work with, whether it’s travel reimbursement, home health nurses, patient apps, reminders, observed dosing, etc., available for any study. And you’d let the patient pick which ones they want. Really it makes a lot of sense: why shouldn’t it be a buffet for the patient to select from? And Joe, I remember back to the early days at Lilly where the mantra was, improve the patient’s access to the study and the convenience of the study.

So I like the thought of this, but I also see this as a procurement nightmare. How would you have every single one of your potential vendors all contracted, all ready to go for every single one of your studies, so that any patient in any study could choose any tactic. I think that’s a daunting prospect, but one that I think is an interesting way of looking at the opportunity to not make the decision at the study level but let the patient make the decision for themselves.

[19:56]

So as we figure that out, as we figure out how to make the tactics more flexibly available to patients, then I think the next question becomes: how do you organize it for the patient, how do you bring it together for the patient? And this is really where the combination of CRF and Bracket has created some very powerful opportunities, because the challenge we’re facing today is that everybody around this circle is a different company. And that means that the patient has multiple different companies communicating with them, all part of the same study, and we don’t really coordinate with each other. So something has to happen to bring that together for the patient in a consistent and coherent kind of way. And that’s something we’ve been working on together now, drawing a wider circle around this topic and bringing some of these solutions under one roof—engagement, ePRO, eConsent, for example, all being part of one organization—which gives us the opportunity to have all of those pieces work seamlessly for the patient, all be kind of consistent. But then we’ve got other things; we’re not going to do everything inside CRF Bracket, so we’re going to have other providers outside the organization. And so the opportunity is to bring those to the patient in a really coherent way.

The example that we’ve got on screen here is airline catering. When you think about your average flight experience, you have probably a half dozen totally different companies that have worked together to bring that experience to you, and you don’t know that they’re different companies; you don’t realize that the catering company is different from the baggage handler, from the security, from the maintenance, etc. The airline brings that to you in a way that’s seamless, hopefully. And that’s kind of what we need to do for our patients: bring this information to them in a seamless way.

And in addition to having ePRO, engagement, and eConsent within one organization, I’ll give an example of how we’ve looked beyond ourselves to do this. How many folks in the room are familiar with a company called Greenphire? Okay, good. Greenphire, for those of you who don’t know them, is an organization that provides patient reimbursement. They use a system called ClinCard, which is basically a stored value card, very much like what you would buy in a pharmacy like a CVS, where it has a certain value on it. The value can be added to as you progress through the study and hit certain milestones. That can be diary completion, that can be simply going to each visit. But the challenge with something like a ClinCard is that it behaves like any kind of gift card. You get a $100 gift card, a $20 gift card, you spend something on it, then you have some remaining balance, and I don’t know, if you’re anything like me, I can never remember what’s left on that card. I’d have to go to a website, I’d have to log in, I’d have to take the trouble to do this. And within Greenphire they find that people don’t do this as often as they could. They’d have to go to clinpal.com and remember their username and password to go find out their balance. So we kind of found each other, together with some sponsors that wanted to do this, and said, let’s take the Greenphire data, the ClinCard data, and bring it to the patient inside of our patient app. So instead of having to go to that website and remember all those details, they can press a button inside of their app, and it transmits a query to Greenphire that says, show me the balance and transaction history for patient X in study Y, and brings it to that patient all inside the app. So they never have to leave that one experience. This adds value for Greenphire because it makes their service better. It adds value for the sponsor because the sponsor is not spending money on gift cards that are just getting put in a drawer. And it adds value for this overall patient engagement experience.
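
A hypothetical sketch of that in-app balance lookup; the endpoint, parameters, and response fields are invented for illustration and are not Greenphire's actual API:

```python
# Hypothetical integration sketch: the patient app asks the reimbursement
# provider for balance and transaction history so it can be shown in place,
# without a separate login. URL, parameters, and fields are placeholders.
import json
from urllib import request, parse

REIMBURSEMENT_API = "https://api.example.com/clincard"   # placeholder URL

def get_card_summary(patient_id: str, study_id: str, token: str) -> dict:
    """Fetch balance and transaction history for one patient in one study."""
    query = parse.urlencode({"patient": patient_id, "study": study_id})
    req = request.Request(f"{REIMBURSEMENT_API}/summary?{query}",
                          headers={"Authorization": f"Bearer {token}"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# In the app, a single button press could call:
# summary = get_card_summary("patient-123", "study-456", token)
# and render summary["balance"] and summary["transactions"] on the same screen.
```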

So when I think about the in-trial phase, thinking about how to plan those tactics and get them approved proactively, how to justify them from an ROI standpoint, and then how to bring them together in a coherent way, that is sort of the challenge in using methods more effectively to improve the patient experience.

The last phase of the talk was really about post-trial. And here Pfizer has a great story. They’ve got a lot of work around alumni networks and Blue Button and so forth. But one thing I did want to share with you, which is sort of in the spirit of soft data collection, is the patient voice during the study, and maybe even beyond the study. Because again, as someone who came into the clinical trial space only eight or so years ago, it blew my mind from the very beginning that on my trip up here from DC I probably had four or five different companies that wanted to know how I experienced their service, whether it was Amtrak, the rental car, what have you. And yet we don’t do that with our patients. Within clinical trials we rely on lagging indicators. The patient withdraws consent, and then it’s done, it’s over, and you never knew that was going to be a problem; that patient’s gone.

[24:55]

So why don’t we collect information from the patient during the study, something you can do in a sensible, low-impact way, that gives you a leading indicator? Think of this as patient voice or patient satisfaction, where once a month or once a quarter, whatever the right interval is, the patient can answer some simple questions: how happy are they in the study, would they recommend the study to other people, do they see themselves continuing the study. And some of these questions can be delicate; you don’t want to suggest they have the opportunity to drop out, you don’t want to seed their thinking. But if you could learn those details from the first half of patients in the study, identify problems that those patients are having, and then have a data-driven conversation about your protocol, about your site experience, then you can potentially fix that for the remaining half of the patients who haven’t yet come into the study. So again, what could be more patient centric than the patient having the opportunity to express their needs in the study? And yet it’s still quite underutilized in clinical research.
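
As a sketch of what such a pulse survey could look like operationally, here is a minimal example; the questions and the monthly cadence are assumptions, not a validated instrument:

```python
# A small sketch of a recurring "patient voice" pulse survey. The questions
# and interval are illustrative placeholders only.
PULSE_SURVEY = {
    "interval_days": 30,   # once a month; could equally be quarterly
    "questions": [
        "How satisfied are you with your experience in the study so far? (1-5)",
        "Would you recommend taking part in this study to someone like you? (1-5)",
        "How manageable are the study visits and procedures for you? (1-5)",
    ],
}

def is_due(days_since_last_response: int) -> bool:
    """Leading indicator: prompt the patient once per interval, no more."""
    return days_since_last_response >= PULSE_SURVEY["interval_days"]

print(is_due(31))  # True -> send the next pulse survey
```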

So with that, I’ll draw this presentation to a close and invite questions. I wanted to give you a view of what patient centricity means to us and how we’re using these technologies together across our whole business to improve the patient experience.

[Q&A Section 26:30]

MODERATOR

Any questions for Jeff?

AUDIENCE MEMBER 1

Jeff, thanks, that was really excellent. I’m wondering, over the past few years, how have you seen the trend in pharma’s adoption of the types of services that you’re talking about here? I know just a few short years ago, people thought of patient engagement as, well, I remember one company that was just called Patient Reminders, which is pretty telltale, right. What you showed here was so much more holistic. So I’m just wondering what you’re seeing in terms of the uptake.

JEFF LEE

Well it’s characteristically slow for life sciences. I think that the beauty and the curse of being a data-driven industry is that people need to see data like the case study that we mentioned. And I sort of feel like that data needs to come from in-house, it needs to be on—if you’re sponsor X it really needs to be part of one of your studies so that you can truly bank on it. And that’s really where I do think the industry has made significant progress since the days of the Patient Reminder type companies because we’ve been doing this and we have real data to show for it. And I think that as well, we’re now—it’s the classic crawl walk run. We did simple reminders for a good number of years before we could conceive of some of these other more advanced opportunities. It’s almost like you have to get that foundational level before you can start getting more creative. And I do see more activity around people wanting to do things like gamification or things like more exotic data collection. Let the patient do an image and audio capture from their own device. So we’re getting there but it does take time.

MODERATOR

Jeff, maybe one of the reasons why it does take time is we kind of go back to the experience in providing e-clinical technology to sites. We’ve provided lots of different disparate bits of technology, and because we pay those guys, they’ve been willing to tolerate that. But we know that patients won’t tolerate lots of different disparate things. So the way that you show the Greenphire integration was a really good example of how we might want to deliver technology to patients. They have one interface, one place that they’re going to. But everything else behind the scenes is kind of working to service through that one place. But maybe that in a sense is one of the limitations of why companies aren’t adopting lots and lots of things because up until quite recently there hasn’t really been the capability to pull all those things together into one simple interface for a patient to use.

JEFF LEE

Yes, I think you’re right. And as we try to study this problem, looking back a couple of years, we did some surveys with ACRP to ask patients, if they were presented with the opportunity to have this type of technology on the study, what that would do to their interest or intent to join the study. And we found a couple of things that were interesting. We found that patients, if they knew there was a mobile app for a given study, expressed, according to this survey, a 30% higher interest in the study. So that tells us that the patient sees that as a signal that the study is technology progressive, that it’s a good harbinger of the experience they’re going to have. And they even said they’d be willing to use two apps on a given study. So there is appetite and tolerance for some diverse technologies. But when you start asking would you do more, would you get two apps and a text, three apps, that’s where it fell off. And it’s like you said, they’ll vote with their feet; they don’t want that, it’s not the right way to bring it to them.

[END AT 30:19]

