
Risk-Based Monitoring in eCOA

June 13, 2017



So we’re going to move on to introduce Anders Mortin. Anders is going to take us through the conceptual ideas of risk-based monitoring. Anders has been in data management, I think, forever, since pre-EDC days perhaps. I think the first time you and I met was ten years ago. He started his career at a CRO as a clinical data manager, moved into Lundbeck with increasing responsibility for the implementation of eClinical systems, and ended up at Ferring, where he had global ownership of everything eClinical, which is probably the best way to describe it. As a true subject matter expert within clinical data standards, please welcome Anders.


All right, yeah. Thanks Alex. And just to finish up that story: I left Ferring about two years ago, so I’m running a small consultancy company nowadays, working together with CRF Health, amongst others, on various eClinical topics, you could say.

So today we’re going to talk about risk-based monitoring. And if you’ve never done anything with risk-based monitoring don’t panic, we’re going to start with trying to explain, and I’ll give my take on what it is all this about, looking at it in the eCOA context. All right, so that’s what we’ll try to do.

And as a little bit of an introduction: some of you here have been so kind as to fill out this short pre-conference survey. So I’m going to start with some highlights from that. Basically there were three questions related to risk-based monitoring, monitoring of data risks in eCOA, and risks in clinical trials. And these are some highlights from that survey, some of the experiences and current risks that people mentioned. Probably not much of a surprise; some things have been mentioned before. Missing data; reporting compliance, which in eCOA of course is critical; entry and response delays from sites, where it relates to compliance. A couple of interesting ones mentioned down towards the end, linking to some extent to eConsent and so forth: patient ID disclosure, now that we’re getting more and more direct-patient data; we had PatientsLikeMe talking here this morning. So this whole thing is very much evolving, and that poses new challenges.

Actually, looking just briefly again at the experiences and current risks, those were very concrete. Whereas the other questions were more, you know, can you mention cases where an existing monitoring, review, or data-review process has failed to identify a problem. And there the terminology suddenly shifts, and people use words like failed to identify a trend or a signal, performance, incorrect use, and the more generic trending information people were asking for. And luckily enough, that leads quite well into what risk-based monitoring and central monitoring is all about.

So that’s some of the highlights that we’ll have a further look at. The question partly is then, can risk-based monitoring, whatever it is, help with this? I’ll try to come up with an answer that is yes, on a good day. And then I just need to mention that for these highly relevant things at the end, down here, I would strongly recommend prevention. Think carefully about those questions first and try to prevent problems rather than spot them early. Because corrective action is not really enough if you have a problem in that space.

The way we will try to do that (this is not duration, it’s more of a timeline) is: we will have a short introduction. I’ll try to give my take on risk-based monitoring in one slide, a pretty busy one by the way. We’re going to talk about risk-based monitoring and eCOA: what’s specific about that, and what are the benefits of using eCOA for risk-based monitoring. Then I’ll move along and talk a little bit more about specific examples of risks in eCOA, and give a little bit of a take on what, in my view, we need to do to go one step further to avoid some of these problems, to take us one step beyond finding out in the end that compliance was too low or we have a problem. So a little bit on what we can do with that, and something more methodical: how to go about this. You know, I want to start, I want to do this, how do I go about it? That links into the level of interaction: a directed session which will be a table relay exercise, whatever that is, for about 15-20 minutes, where we’ll work with risk, data, and visuals; I’ll explain all of that. And then there is some time for a few recommendations and questions and discussions towards the end. But please feel free to ask any questions as I talk as well. So okay, that’s what we’ll try to do here.


So, risk-based monitoring in one slide; I’ve tried to do that. I will explain why we do this, what we are doing, and how we should do it. So what is this all about? Whether you’ve read them or not, both FDA and EMA have come out with guidance on risk-based monitoring: some definitions, what it is, why it is. And basically the thing is, we have a problem. We know these things. Cost is going up. Quality is not necessarily going down, but let’s say trial designs, eCOA and what have you, are getting more and more challenging and complex, so at least it’s not getting easier. We’re getting more risk of quality problems, if nothing else. Throughput, duration of studies, patient safety: I can’t say if they’re going up or going down, but as we heard from PatientsLikeMe in the morning, not everything we give to the patient is actually benefiting them. Compliance complexity. And in this space also, we talked about this: how do you know the patients are who they say they are? Invented patients, fraud detection. Sometimes people don’t want to talk about it, but this definitely is a reality out there. So these are some things we really need to make sure we can control and manage.

So what do the regulators think we should do about that? And I think risk-based monitoring, as the term says, is very much about practically trying to manage risk: define risks, monitor them, and manage them in the right way. Focus on the right stuff. And traditionally there is a regulatory focus on the key risks they are interested in: patient safety and data integrity. That makes very much sense, of course. I’d just like to add that from a business perspective, you have some other things that are highly relevant as well. One being trial conduct, which I would say is things like meeting the timelines, keeping the budget, hitting recruitment targets: everything is running smoothly, the scientists are happy, the patients are happy, everything is good. So we want it to work well, we don’t want protocol deviations, we want it all to run smoothly. We are also interested in trial results. So, like the patients: if they have low compliance, we might fail on the results. If patients are behaving differently than we expected, that might also influence our results. So of course, in addition to patient safety and data integrity, companies or sponsors are making sure the trial runs smoothly.

How? What then is the suggestion; how do EMA and FDA think we should go about this? First of all, the last risk-management plan I looked at, for a client I’m working for, the day before yesterday, had 122 risks before the study started. They might not all be relevant, but it just tells us we need to focus, we need to find out what really is important. We need to make sure we’re actually thinking about consequences, and we pick up the things that will or might have an impact on our study, on patient safety, on our results, and so forth. Again, looking at compliance: yes, we can look at the overall compliance, but of course we are especially interested in compliance that relates to endpoint results and safety. So is there a difference here? Not making it too simple. So we need to find the right ones.

Then both they and I suggest a data-driven, structured, analytical approach: using the data, using the information we have, adding informed consent or eConsent data to that, and other things. Paul mentioned some of what you have in the metadata, like timestamps, things like that. Use that information, and use more structured analytical methods to try to manage and monitor those things and spot issues. So instead of reviewing all of it from page 1 to page 22, having the CRA out there to check everything, look at it in a more structured, analytical way, centrally. And don’t forget to take action. That goes back to finding the right risk: something you can do something about. Try to fix it, as compared to finding out afterwards that compliance was too low and we missed the endpoint. That kind of is too late. And as always with something regulatory-driven, but also for your own good: document, document, document what you did. What you have assessed, what you thought, also if there is nothing. I looked at this, it looked good. Great. We need to document that as well.

So that is it, in short, theoretically. You could say, that’s what this is all about. Big questions at this point? It’s not rocket science. It’s pretty straightforward; it’s just a matter of filling it with the right content, right.


So now, looking at this in eCOA: is there a case for risk-based monitoring in eCOA? Do we have any problems and risks? Who would say no? No nos, so we have a yes. Right, so there could be something to this in the eCOA context; it really would make sense to do more of this. Are there any key risks, any important things in eCOA that say, if we’re doing this risk filtering, it’s important enough? And there are some specific things about eCOA that say yes. Most of the time it’s source data, so there’s no going back; it’s very difficult to fix problems afterwards, once they’ve occurred.

We have some really high data volumes and frequencies. Then you can do some things, not from a monitoring point of view, but retrospectively looking across studies as well, because some things can cross studies. But we have high data volumes, typically, and high frequency, so we can quickly get a signal, an indication if something is wrong, and we actually have the time to fix things.

It’s behaviour-dependent, which means it goes out there and we reach out to the patients. We get the variance, and we’ve got this individualism and all of that, which can have a really important impact on the data you’re collecting. Again, there’s often an endpoint. It’s challenging; we talked about rare diseases earlier. And we also have very complex drivers for complex designs and complex setups. So definitely things can go wrong. Expensive as well, so we want to nurture that investment. Compared to... no, not comparing to paper. So we think that there are some important risks here to try to deal with.

What is specific now, from another angle, about eCOA data? So we say we have the risks. How about the data; is there something there we can use? We can touch on some of that, but there are some really specific features of eCOA data. It’s what I call direct. We all know that for patient safety, of course, it’s important if you capture something that’s adverse-event-like or has to do with safety. So for safety of course, but also for other things. You know, these guys know, but you as the sponsor know as well: oh, ten minutes ago they reported something. So it is direct; I mean, we don’t have to wait. Now we have it. It’s what I call sensitive. One reason for that is, again, we have this chain: you have a sponsor, you might have a CRO involved, you’ve got a CRA, you’ve got the site, you’ve got the patient. So if something is not working there in terms of assuring quality and making sure everything is understood, the further up that chain you go, the bigger the waves will be. And you can be sure that if it’s not working well at the site, you can tell from the eCOA data out there with the patient; they will struggle. So it is typically very sensitive. Again, coming back to the high volume and frequency: from a risk perspective, if something goes wrong, we have a lot of problems. But from a structured analytical view, we’re getting a lot of data. It’s strong; we can use it for a lot of things. And it’s rich in metadata, as I mentioned, so it’s sharp; it can tell us a lot of really interesting stuff about what goes on, and give us direct and indirect indications.

There are also some challenges with eCOA data. As I mentioned, the good side of it being individual has a flip side: is this really a signal or a trend, or is it only this patient, who is gone anyway, so does it really tell us anything about the others? And sometimes, if we have event-driven indications where you don’t know what to expect, it’s difficult to have that reference: is this really a signal of something we can use for anything? Because we don’t really know what they should be reporting; we talked about OAB, you know. What’s the right number, so to speak? If it’s a daily question or a weekly question, whatever it is, we know they should answer weekly. We know there are five questions; they should answer all five of them. But some more complex designs have more variance, and it’s more difficult to refer back to what we expect.

Still, again, the claim will be: yes, the data here is really good for this purpose. So we’ve got that right. So somewhere in that first step, you could say: do I have a problem or risk in the eCOA space? Definitely. Do we have the data? Yes. Are they important risks? Yes. Can we do something useful with it; can we take corrective action that matters, that will have an impact? I think definitely yes. So is there a case for this? Luckily, yes, so you will not get out into the weather on a break just yet; there’s a reason to continue.


So, basics; hopefully a very quick briefing. Risk-based monitoring, again: nothing strange, no rocket science. But let’s look at a little more concrete stuff now, some more examples of what this can be about, before we move into the workshop piece, where you’ll work around the tables a little more with that.

So I’m going to show a couple of examples of what I call first-step monitoring that many of you will be familiar with, and then look a little closer at what I call a second step: looking one step behind, one step deeper into some of these areas, to truly spot the actual problem and be able to do something. So just to give you an idea, these are some of the things that come up if people ask what you are monitoring in your eCOA trial. The first thing on the list is probably compliance. And everyone wants to condense that into one percentage number. So what’s your compliance percentage, is it 90% or 92%? Or you look at the site, maybe, and you want to condense it into that one number. Fine, we might need that. There might be more. People aren’t really doing as they’re expected to do; you might want to look at that. Misunderstanding was also mentioned by someone: I have seen patients entering rescue medications where they should enter IP, and vice versa; those kinds of things can go wrong. Incomplete and inconsistent data, pretty straightforward as well. Some examples: say we want to make sure they complete all the questions all the time. Inconsistencies, you’re starting to get into that: diabetes glucometers, does the glucometer timestamp match the reported meal, those kinds of things you can start controlling more and more. Technical issues, if I can mention them, but they do occur sometimes. And then down at the bottom again, we have to talk about it: how much is influenced? I’ve seen an email from an investigator saying, yeah, but I can’t really change this because she’s such a good patient, she’s been in many trials before, I’d really like to have her in. So no, not really. I’ve seen some countries with no protocol deviations on inclusion/exclusion, but 90% of the patients coming in just on the border, you know, and that’s not really what you expect from your population, right.

So are there any benefits in here somewhere for someone, that kind of influence on the way things are being reported? And all the way over to invented data. Invented patients, invented data, which is something to keep an eye on as well. So these are some of the typical things you might see, and maybe some inspiration for the workshop a little later, if one of those is a topic or a risk that you want to pick up on there.

So let’s look at one way of looking at compliance data, for example. What we have here is sites on the x-axis. Say you have a weekly question; then you have the compliance of answering that weekly question on the y-axis. It could be a daily question or whatever it was, but this is basically a percentage. Let’s just put it in a bar chart. And we can see we’re doing pretty well. And then we have some bad guys. So typically what you can do here is say, okay, Site X, Y and Z, you send them an email or give them a call and say, you need to instruct your patients, or instruct your patients better, because your compliance is not good enough. So your patients need to do better. And that’s what they try to do. They don’t really understand, maybe, because, for God’s sake, I’m telling them that they must fill in the diary; every time the device asks a question, they should fill it out. But they might get back to the patients and remind them again. They might even say, oh, I’ve got this visit coming up, it should be between here and here, let’s make it early, at the earlier end, and then get the patient in to remind them.

Actually looking at this one, which is real data from one of our sponsors: if we now take the same thing, compliance as a percentage, and plot it out over study week, we see some dips here. What these actually are is visits. The true cause behind the problem, the badly performing sites or badly performing patients if you like, is a couple of sites not reading the protocol very thoroughly, thinking, ah, I’ve got this five-day plus/minus window all the time, so I will call the patients in, and I like to call them in a bit early. That actually does happen with sites. They push the visits forward, and what happens is they cut off that week counter. When they bring the patient in, and in this case there was a potential dose change, they reset the counter. So it’s a combination of the sites not fully following the protocol, and the design, cutting out that last week, which in one case is an endpoint. The patients filled out everything as they should; everything appeared on the device: fill out your daily questions, fill out your weekly, fill out your monthly. They did all that. But the sites weren’t paying attention that for these visits, you don’t have a negative window. Pull the patients in too early, and you start losing important critical data.


So it goes back to the risks: what are the really important ones here? That visit with the endpoint, you must be sure that you get that point in, and not look at it too simplified, saying 82% is good, my threshold is 80, we’re fine. Or, these ones are below, we need to remind them, retrain them or whatever it is. Actually understand what goes on here. This is sites scheduling the patients wrongly; it’s not the patients being non-compliant.
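As a rough sketch of that second-step view, here is compliance computed per study week rather than as one overall number. Everything below (the function name, the record layout, the values) is hypothetical, not from any real eCOA system; the point is only that a visit-week loss becomes visible once you stop averaging everything into a single percentage.

```python
from collections import defaultdict

def weekly_compliance(records):
    """Percent of expected weekly diary entries actually completed,
    grouped by study week. `records` is a list of hypothetical
    (patient_id, study_week, completed) tuples."""
    expected = defaultdict(int)
    done = defaultdict(int)
    for _pid, week, completed in records:
        expected[week] += 1
        if completed:
            done[week] += 1
    return {w: 100.0 * done[w] / expected[w] for w in sorted(expected)}

# Made-up data: two patients, weeks 1-3 fully compliant, but both
# entries lost in week 4 because the site pulled the visit forward.
records = [
    ("P1", 1, True), ("P2", 1, True),
    ("P1", 2, True), ("P2", 2, True),
    ("P1", 3, True), ("P2", 3, True),
    ("P1", 4, False), ("P2", 4, False),
]
print(weekly_compliance(records))
# → {1: 100.0, 2: 100.0, 3: 100.0, 4: 0.0}
```

Overall compliance here is 75%, a single number that tells you nothing about why; the per-week view shows immediately that the loss is concentrated in the visit week.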

So that’s one example. Let’s do another one.

Some of you might have seen one of these graphs, like two years ago I think, in one of my other presentations. What this is all about is the reported time that the patient sets. This is about event reporting: bowel movements. So the question is, when did you have a bowel movement? Patients are given a device and told to report a bowel movement and when it was. And this is the distribution. It makes sense: not as much in the night, much more in the morning; it feels like a fair distribution. If we dig down a little further into this one, we see peaks. What’s happening here is, people don’t fill it out when it happened; they fill out the diary in retrospect, and when you ask them a question like that, they will do the spinner thing and say it was about 1:00, 2:00, or 1:30, 1:15. In some designs that might not matter much, but in some designs I’ve seen, five minutes matters: you can have a five-minute window for, did you get up to pee within five minutes after you had gone to bed, yes or no. Or you’re calculating with those minutes, and those kinds of effects can have a big impact in some studies. In a lot of studies, not relevant at all. So again it comes back to this: you can’t really put in a blanket measure saying we should have everything reported within this length of time or in this way. Sometimes it’s important, sometimes it’s not. It’s about understanding, linking that together, being smart about it, and truly saying, okay, what do I need to keep an eye on here.

Another effect of that: when patients do this spinner thing for "when did this happen, two hours ago", they will keep the minutes of right now and spin it back one, two, three, four hours. That sort of thing again. If your time points are important, spinners can be a challenge.
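One simple way to quantify the spinner effect described above is to check how often reported times land exactly on round minutes; times entered as events happen should be roughly uniform over the hour, while retrospective spinner entries cluster on :00, :15, :30, :45. This is a minimal sketch with made-up times, not a validated method.

```python
def round_time_fraction(times_hhmm, granularity=15):
    """Fraction of reported times landing exactly on a multiple of
    `granularity` minutes. Retrospective 'spinner' entry tends to
    inflate this well above the ~1/granularity you'd expect from
    events reported as they happen."""
    on_mark = sum(1 for _h, m in times_hhmm if m % granularity == 0)
    return on_mark / len(times_hhmm)

# Hypothetical entries as (hour, minute): heavy clustering on :00/:30
times = [(7, 0), (7, 30), (8, 0), (8, 17), (9, 30), (13, 0)]
print(round_time_fraction(times))  # 5 of 6 are on a 15-minute mark
```

Compared against the site or study average, an unusually high fraction flags patients or sites where diaries are likely being back-filled.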

Another example, my final example, of something to look at. Again, looking at reported events. We can’t tell the patients what the right answer is, so they can report 0 events or 20. And when they can do that in any way they want, there is a risk that they report duplicate events. And here we see that some sites definitely have a problem with that, or with what looks like duplicate events, because the events have exactly the same time point and exactly the same attributes. Two identical events reported by the patient: that indicates a probable duplicate. And some sites seem to be struggling more; patients at some sites are struggling more than others.

So again, it’s not enough that events come in, that patients report, and that it averages out to about what you would expect. There are also some additional, one step more in-depth things that you could, or should, have a look at, asking: is this working as it should?
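The duplicate check described here (identical time point and identical attributes for the same patient) can be sketched like this. The record layout and field names are invented for illustration; a real system would key on whatever uniquely describes an event in that study's design.

```python
from collections import Counter

def likely_duplicates(events):
    """Flag events where the same patient reported the identical
    (timestamp, attributes) combination more than once. Returns a
    dict mapping site -> number of suspected extra copies."""
    counts = Counter(
        (e["site"], e["patient"], e["time"], e["attrs"]) for e in events
    )
    per_site = Counter()
    for (site, _pid, _time, _attrs), n in counts.items():
        if n > 1:
            per_site[site] += n - 1  # each extra copy is one suspect
    return dict(per_site)

# Hypothetical event records: S01 has one exact repeat
events = [
    {"site": "S01", "patient": "P1", "time": "07:30", "attrs": ("loose",)},
    {"site": "S01", "patient": "P1", "time": "07:30", "attrs": ("loose",)},
    {"site": "S02", "patient": "P4", "time": "09:00", "attrs": ("normal",)},
]
print(likely_duplicates(events))  # → {'S01': 1}
```

Tallied per site, this is exactly the kind of second-step signal the talk describes: not just how many events came in, but whether some sites' patients are double-entering them.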

Cool. So, the next step from those examples: I’ll now try to tie that together with a little more view on methodology, priming us for the workshop and what we’ll try to work with there. Which, again, will be some good risks, some discussion on what data could be useful to monitor and manage each risk, and how that could be visualized to give you insight into what goes on. And to do that I’m going to start with the negative side, cracking down a little on one methodology, one way of doing things. Very often (I mentioned the study with 122 risks) you start with some of these that we had before, and you put a threshold in and you say 80% is good and 79% is kind of yellowish, and below that I’m not happy. So you start doing that, maybe at the study level. But then you find out that’s really not enough. So we need to look at a few more, and continue down. And then, actually, we need to look at this by site. And then things start happening. All right? I see it a lot, and people say that’s fine. But for a lot of things, that’s not really sharp enough. To me, coming back to this one, what we need to work with is finding out what the really key things to monitor and look out for are. What is the really important stuff? What is the right data to use? How can we use analytics or visuals to actually help us spot that efficiently? KRIs and traffic lights tend to swamp you, and they tend to average out some signals and obscure some stuff.


Okay. Without further ado, I think it’s time for some fun work on your own. We practiced in the morning with working at the tables, so we’ll do a similar thing. We’ll do it a little more as a relay here. We’ll work with this piece about actual risk, right data, and efficient visual analytics. So this will be a try. The way we’ll do this is: there is a little piece of paper on each table. Sorry about the paper, but it’s paper. So first you need to find someone at the table who can write legibly. And what you will do is, you will be given five minutes to define a risk; I’ll put out the list. And if you run out, you probably can come up with some ideas on what the risks can be. If not, wave and I’ll come with some interesting suggestions for you to think about. So you will do that for five minutes. Agree on a risk to put on your piece of paper in the risk box, whatever we call it. If you can’t agree, if you have ten great ones, fine: agree to put one in the box and put the others on the back or on the side. And what I will do afterwards, I will take this back and, with help from Dana, have it typed up, so there will be a little risk matrix of your suggestions: what was the risk, what was the recommended data to look at, and how was it suggested that it should be visualized or analyzed to give insight into that risk. And if you put more in, we’ll type those up too and come up with suggestions on data and visuals for how to monitor that stuff. So again, put extras on the back, but try to agree on one.

When five minutes have gone, you will hand over your paper to another table. It doesn’t really matter which one, but you’ll pass it on. So now you will get someone else’s risk. You will have five minutes to say, okay, the risk is false patients, invented data. Okay, what data could we look at to identify that? So the second step will be, you will come up with suggested data. And then the third, really interesting step is, the next table gets it. So you’ll get a risk and data from someone else, and you need to come up with analysis visuals: how can we look at this to monitor and manage this risk? Okay, let’s see how it works. I’ll be floating around here, so any questions, thoughts, you know you will have them. All right, we’ll go. You have a clock? And start.

[Break for Workshopping 27:53-28:06]

Now, I hope everyone found that a little bit useful. I think, of course, as I said before, the discussion and the thought process coming in from different angles is probably the most useful part of this exercise. I think also the first key learning is most likely that, as I mentioned before with the risk-management plan with 122 items for a study, that actually means in reality 122 items times the number of people reading the plan. Because as you noticed, say you now get a risk: what do they mean, right? It can be so many different things. So that’s, I guess, also why we have this typed up. It will be interesting to see; it’s like the whispering game: what happened to my risk? What did others think about it and what did they do with it, if they saw something completely different? Which is also part of what I hope to achieve here: where did that travel, and how do other people look at my thing here, right? So we will collect all these notes, thank you very much, at the end of the session, and we’re going to write that up, and in some of them I’ll probably put some annotations or comments as well. And that will be a kind of joint little risk-management catalogue, if you like, from this session, this venue. So if you want to have a look at that when you start up your next trial and see something useful in there, that of course will be great.

Okay. In the summary now, I’m first going to say a few words on some recommendations and thoughts on data and tools again. Questions on tools also came up in this forum, and I have some notes on that as well. And then I’ll do a small summary, and we’ll have questions and discussion for the remaining few minutes.


So we have touched on it, but again, giving the data some more thought: where can we go, what’s specific about eCOA data, and what’s important to think about. Of course you’ve got the clinical data as such, your reported data, the values that the patients actually put in: the pain rating or the questionnaire answer, whatever it is. And a way to look at that is versus assumptions: what do you expect them to do, how many bowel movements per day, roughly where are they supposed to be, is it realistic? Those are things you could look at. Is it complete? So those are some of the more straightforward data-cleaning things you can do with your actual clinical data, which is the core of what you have.

If you’re looking at patient safety, of course the data content, what the data means from that perspective is the core of what you do. We’ve got, you could say not directly but indirectly typically, what I call issue data. Protocol deviations, what—

[Break in Audio 30:54 to 31:09]

We’re back. So, I call that issue data. CRF Health of course has the DCFs, one thing that could be very relevant to look at in terms of getting at: here is a problem. Of course, trending those is relevant. Metadata I mentioned a couple of times: timestamps, logins, variations and patterns in those. I don’t know if anyone had fraud as a risk, invented data; did anyone do that? Okay, I’ll add in a line for that. Timestamps can be very useful in various ways for detecting some of those strange behaviours, saying, is this actually going on? And then especially, look at data relations, like duration, which was pulled in as well. Coming back to the fraud exercise, one fraud example I’ve seen is this: if you look at the time it takes to complete something, if it’s the same person doing it over and over again, it will probably take about the same time every time, whereas real patients were more all over the place. So durations, and sequence as well. Take fraud, for example: one person filling out all of these entries will do them one after the other, whereas real patients will all start doing it at about the same time in the evening, in parallel. So those are some of the things you can look at for that. And time differences, distributions, things like that. So not only the clinical data as such, saying how are they answering the daily pain question, yes or no.
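The completion-duration signal mentioned here can be sketched as a simple spread statistic: one person filling in several "patients" back to back tends to produce suspiciously uniform durations, while genuine patients vary a lot. The numbers below are made up, and in practice a statistician would use something more robust than a coefficient of variation; this only illustrates the idea.

```python
from statistics import mean, pstdev

def duration_spread(durations_sec):
    """Coefficient of variation (std dev / mean) of the times taken
    to complete diary entries. Unusually low spread across a set of
    entries is one possible flag for a single person entering them."""
    return pstdev(durations_sec) / mean(durations_sec)

# Hypothetical completion times in seconds
suspect = [61, 60, 62, 61, 60]     # near-identical every time
typical = [45, 120, 80, 200, 65]   # genuine patient variation

print(duration_spread(suspect) < duration_spread(typical))  # True
```

On its own this proves nothing; as the talk says, it is one of several metadata relations (durations, sequence, time-of-day distributions) you would look at together before raising a fraud question.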

Tools. I talked quite a bit about visuals; that’s kind of my home turf, what I do a lot of and like a lot. So what tools can be useful when it comes to that? I think visualizations typically show more than numbers. Trends, patterns, correlations help you understand what really goes on, what the problem is there. It’s a more efficient way to crunch that data together and spot that thing early on.

Interactivity: when it’s possible for you to focus in, zoom down, filter out a site or look at another sub-group of patients, or only Visit 5 or whatever you’re interested in, and then take that and split it up by patient or by visit or by something else. When you can actually work with the data like that, it’s very efficient and powerful. So that’s a quality of good analytics and tools for looking at data when doing risk-based monitoring.

Agile, and by that I mean quick in development and easy to use. Because studies are different. For this one I want to know this; and if you don’t have it already, during the study you will have people asking, what about this, and what about that? So quick and agile visual report development is a key quality I would look for in a tool for risk-based monitoring.

Then you’ve got the statistical aspect, of course. Statistics are powerful; we’re talking about a lot of data here. And being a data, systems, and process person, I’m not good enough with statistics myself. But in most of our companies there are some fantastic statisticians who have really good ways of finding out: is there something out of the ordinary, is there a strange pattern, is this what is expected and assumed, is everything working as it should? So try to get some of their time and use their knowledge to help you do some smart stuff.
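As a small example of the kind of first-pass check a statistician would then refine, this sketch flags sites whose mean reported score sits far from the rest of the sites, using the robust median/MAD modified z-score (the Iglewicz-Hoaglin rule) so that one extreme site does not inflate the spread it is judged against. Site numbers and data are made up.

```python
from statistics import median

def outlier_sites(site_means, cutoff=3.5):
    """Flag sites whose mean sits far from the other sites, using the
    median/MAD modified z-score. 3.5 is the commonly cited cutoff."""
    vals = list(site_means.values())
    med = median(vals)
    mad = median(abs(v - med) for v in vals)  # median absolute deviation
    if mad == 0:
        return []  # no spread to judge against
    return [s for s, v in site_means.items()
            if 0.6745 * abs(v - med) / mad > cutoff]

means = {"101": 5.1, "102": 4.9, "103": 5.0, "104": 5.2, "105": 12.0}
print(outlier_sites(means))  # only site 105 stands out
```

A plain standard-deviation z-score would actually miss site 105 here, because the outlier itself inflates the standard deviation, which is exactly the sort of pitfall the statisticians will catch.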

KRIs, sorry TransCelerate, I think are often a bit too simplified. They’re good for some things, but they average things up a little too much, and it’s difficult to spot trends. They tend to swamp you, as in the example I had, with traffic lights, red-green-yellow. Good for some things, but also limiting for other stuff.
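To make that traffic-light point concrete, here is a minimal sketch of a missing-data KRI. The 10% and 20% thresholds are my own illustrative values, not TransCelerate’s published definitions, and the last lines show the averaging problem: a site that is worsening month by month can still average out to green.

```python
def kri_status(missing_rate, amber=0.10, red=0.20):
    """Map a site's missing-diary rate to a traffic-light status.
    Thresholds are illustrative only."""
    if missing_rate >= red:
        return "red"
    if missing_rate >= amber:
        return "amber"
    return "green"

sites = {"101": 0.04, "102": 0.13, "103": 0.27}
print({s: kri_status(r) for s, r in sites.items()})
# {'101': 'green', '102': 'amber', '103': 'red'}

# The averaging problem: a sharply worsening trend, collapsed to one
# average, still shows "green" and the single traffic light hides
# what a simple time-series chart would show immediately.
trending = [0.02, 0.08, 0.18]
print(kri_status(sum(trending) / len(trending)))  # 'green'
```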

And listings. Yeah, sometimes we need listings, and of course at the end of the day you may need to go down to the specific list of data. But generally, for this purpose of identifying trends and signals, listings are more like: okay, these are things I need to fix in retrospect. They’re not really telling us where we are, why, and where we’re going.


So looking at tools, for those of you who are interested, I’m happy to have a chat about that during the break. We’ve got the traditional business intelligence tools, strong on visualization and interactivity. CRF Health is using Qlik now on their platform. I spoke to someone using Spotfire; those are all great examples, and there are great tools out there, Cognos and others. Microsoft is getting better, put it that way. So those are in the BI space. They are really good at visuals, agile in development, quick, user-friendly, intuitive, great in that space. What they typically lack is documentation: noting, commenting, documenting your reviews and your signals, and tracking that. Then we have an increasing space of companies like Comprehend and CluePoints doing those kinds of things, embedding that commenting. Heavier to develop with, but also more purpose-built, I would say, in general. And then we’ve got the statistical tools, like SAS or whatever you want to use. But again, I’ll be happy to discuss more afterwards if someone is thinking about what to use and how to use it.

So, in summary: eCOA and RBM. Yes, we do have some problems and challenges in this space as well. There is a really good case for it, because some really important risks affect eCOA and are relevant in our space. And there is some great data. So with the right methodology and the right tools, being sharp in identifying the right kind of risks and looking at them with the right data in the right way, I think we can definitely make an important difference to a trial.

But first, try to prevent things. Just as a note, and I always try to make this point: yes, people talk about risk-based monitoring and all these great things, but also think about learnings. You can apply this retrospectively too. Just talk to CRF Health, talk to me: I’ve got this old study, did I have any problems I can avoid on the next one? We can apply this exact same stuff to find out whether I had a problem that can be avoided next time as well. So risk management is fine, but try to build things right first, by design.

All right. That’s all I had. Questions, answers, comments, thoughts?


In terms of tools which can be used, do you see machine learning as something which can help, or other techniques using big data analysis? Something which could help, or is already in use?


You could say machine learning definitely can help, especially or primarily in this sense: if you look at a company like Comprehend, they to a certain extent use machine learning, or big data if you like, because what they do is compare data to data, they compare across studies. So it’s data-driven, you could say: you’re increasing the amount of data you can relate to and compare against, and thereby you are able to predict what’s supposed to happen, what’s deviating, what’s different. Comprehend does that using statistical methods. Medidata is doing it more and more using their sheer volume of data, because of the number of trials they have. And these guys are building a huge library of trials as well. So with that data-centricity, or data-driven approach if you like, you partly already have that in predicting and expecting. I think there’s more that can be done there, and also with big data; coming back to PatientsLikeMe, that’s more over in that space to some extent. I think it’ll take a while yet before pharma is ready to go there on the clinical trial side, in terms of how they manage the trial. A little bit conservative, I would say, in that way.


We were wondering where all of this is written up. Is it a separate document from the protocol? And is anything that you would be doing with monitoring going back to the patient, where it might need to be in the consent? How does it all relate?


I will just say, on a high level, you would be expected to include it in the protocol: how you are going to approach centralized or risk-based monitoring. You would need that. And then typically what I see is a process for creating a central monitoring plan, a specific plan, not for what the CRA monitors are going to do, or the CRO if you have one, but for who is going to look centrally at which data, and how. Risk management can be part of that: safety for the safety group; assumptions and trial results for stats, getting it right; the medics for anything unexpected; and then operations in terms of quality, performance, and data quality. So typically I see, again at a high level, what kinds of things are going to be done, and if you have a safety committee, of course, those kinds of things, in the protocol. And then an operational document creating a plan, saying specifically what is going to be looked at in this study, by whom, and how often.



I’m just wondering, do you have any insight into the regulatory side of things? Particularly, do you ever run into scenarios of study teams thinking, for want of a better term, if we don’t measure it we don’t know we have a problem, and maybe that’s a good thing on one level?


Yeah, I do run into that one: that’s really interesting, we can start to look at that, but don’t look at the old studies, that kind of thing. Similar, right? But honestly, to be fair, most people I talk to, and it’s three companies I work with on this right now, not only in the eCOA space, all come from the position that this is actually expected. So it’s a matter of how we do it, how much we do, and on what; that’s a different discussion. But it’s more from the angle of: we’re expected to do this, and sponsor oversight is a hard requirement, was everything in order there? It can also sit under this umbrella of central monitoring. So I think they are actually past that point. They’re fumbling on how to do it and what to look at, but the attitude is, we need to do something. That’s more where they’re coming from.


Thank you Anders.

[END AT 41:55]


