Optimizing Use of Mobile Technologies in Clinical Trials

July 22, 2019

Transcript

Jennifer Goldsack

Of course I’m delighted to be here, and I’ve certainly learned a lot this morning, so a great day so far. I wanted to pause for a second and talk about what I mean by mobile technologies before we dive in; I think that’s really important. In the context of this talk, I’ll be discussing mobile sensor technologies specifically: those that rely upon some kind of sensor to capture the data, as opposed to manual input of data via some kind of portable technology.

I want to show this slide, and there is some really important information I want to share around it. It’s been kind of wonderful, actually, to have teed the day up the way we did, with the tenor of the discussion focused on measures that are important to patients. That’s not always the case, and it’s certainly not always the case when I go to events and venues where we are focused on these mobile technologies. I think sometimes we get a little bit overexcited about the widget and forget a little bit about why we’re doing this. So I thought I’d pause for a second before we dive in and talk about why we even care about these mobile sensor technologies.

I usually walk around, but I’m tethered to the spot, so it’s a bit awkward. This picture is, gosh, probably a couple of years old now, but it’s of two young men, Matthew and Luke. In this picture I believe they were 13 and 14. These two boys both, unfortunately, suffer from Duchenne’s muscular dystrophy. They’re best friends from wheelchair basketball. For folks who don’t know the natural history of Duchenne’s muscular dystrophy, it typically affects boys and young men, usually diagnosed at around four or five years of age. By the time you’re about 10, you are typically no longer ambulatory and are in a wheelchair. There’s a pretty aggressive decline in your motor function, and I believe that the life expectancy is around 22-23 years of age. So it’s a pretty devastating diagnosis, and it’s a terrible disease. As of now there’s no cure and there are no disease-modifying treatments for Duchenne’s muscular dystrophy.

Does anyone happen to know what the current gold standard endpoint is in Duchenne’s trials? Yes, it’s the six-minute walk test. This bothers me for a number of reasons. First, boys like Luke and Matthew are obviously excluded from participating in a trial, as are 60% of the patient population with Duchenne’s muscular dystrophy. And it’s not just that they can’t participate in the trial; it’s the fact that, when you’re developing therapies for Duchenne’s, your target isn’t boys like these. As we continue to strive and search for a cure, we’re not trying to find some kind of treatment that can allow them to text their friends, play videogames, play wheelchair basketball, you know, continue to participate in meaningful ways from the point where they lose their ability to walk until, unfortunately, very shortly later, they pass away. I mean, these boys are in middle school, high school, and the six-minute walk test doesn’t take into account whether they can take themselves to the toilet or brush their own teeth. I wanted to start with that, because I think it’s really important for us to pause and think about why on earth mobile technologies are an exciting solution. They can offer a lot, but I think it’s cases like these, and their thoughtful use, that should really guide their early use, and that’s something we might talk about a bit more during the subsequent panel discussion.

More broadly, why should we consider using mobile technologies, and why could they be advantageous? I think certainly we could be more patient-centric, both in the measures and in the way we conduct trials by including these measures, and that might manifest in a few ways. In the example of Matthew and Luke, it might be that we can measure aspects of function that are actually more important to them; these objective measures, this passive collection of data, can capture aspects of health that matter more to them. We might be able to reduce the burden of participation if you don’t need to come to the clinic so often. I think there are other interesting things as well, and this starts to bridge into the efficiency piece: if we have much more objective, sensitive measures, can we conduct trials with a smaller population and still have the same power. Do we have to continue the trials for as long. I think that can be hugely patient-centric, not just to those participating in the trial but to patients more broadly: we either fail more quickly or we bring an effective therapy to market more quickly, and all of these things are critically important.

Staying with the efficiency theme: as I said, we could potentially enroll fewer participants with these more sensitive measures, and fail more quickly.

[04:58]

The efficacy piece I think is interesting too. We could use some of these technologies to develop measures that would allow us to better identify subsets of populations that might respond better to a therapy. Similarly, you don’t always have to use these measures in your pivotal trial. It might be that you decide to use them to make better-informed, hopefully, go/no-go decisions earlier. So again, it goes back to this idea of failing quickly, or of having more certainty when advancing a new molecule to later-phase trials. In both of those examples, I always think of Alzheimer’s. If we could better target Alzheimer’s patients who might be responsive to a particular new therapy, or if we could make more informed decisions early on, I think that would really bolster that particular field, which so desperately needs some success.

I’m going to be a bit provocative, and I feel comfortable doing this because it’s before a panel discussion and I wanted to try to seed some questions. A question I hear a lot in talking to folks is, is the mobile technology validated (usually they say mobile device, and I’m like, it’s not a device!). And I’d offer that I don’t think that’s the right question. I would focus more, and I think this is more helpful and ultimately more advantageous to the field, on thinking about a digital measurement tool. And I would advocate for parsing that out into two pieces. The first is the sensor component: what goes on your wrist, the sensor array that might be on your brain; there are all different flavors of these. I myself talk about wearables, it’s a slip of the tongue, I do it all the time, but I really want us to think more broadly about these mobile sensor technologies. I think there’s more there for us, and it offers more opportunity to think broadly. But I’d encourage us, as I said, to parse that out into what’s capturing the data, and is that verified; I’ll come back to what that means in a second. And then the second piece is the data that you get off that: is it appropriately processed and handled to come up with an accurate measure. And then I think the validation piece has two parts. Not just is the math right, because that’s all algorithms, but is it clinically meaningful. And I would put that into a validation bucket.

I think that helps us too, because the verification piece has different players, different experts. That’s essentially a hardware challenge, with some firmware on the device, and you’re really asking yourself, did I build the tool right. I like these questions here. It’s an engineering problem. Is your accelerometer accurately measuring acceleration relative to gravity in three dimensions. Is the information that comes off that, the raw data, correct. What you ensure there is that you’re avoiding the problem of garbage in, garbage out, because if that’s junk, then however you process it, and however good your algorithm is, it’s not going to be reliable. And that verification issue, the quality of the sensor and of whatever casing surrounds it, because that can affect it, is an engineering problem. That’s your verification, and I think that’s one part of your digital measurement tool.
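
To make the verification idea concrete, here is a minimal sketch, in Python, of the kind of bench check an engineer might run on an accelerometer: with the device motionless, the magnitude of the three-axis reading should be approximately 1 g. The `read_static_samples` helper and the tolerance value are illustrative assumptions, not any vendor’s actual test procedure.

```python
import math

def read_static_samples():
    # Hypothetical helper: tri-axial readings (in g) captured while the
    # device sits motionless on a bench. A real bench test would pull
    # these from the device under test.
    return [(0.01, -0.02, 0.99), (0.00, -0.01, 1.01), (0.02, 0.00, 1.00)]

def verify_static_accuracy(samples, tolerance_g=0.05):
    """Verification check: at rest, the vector magnitude of each reading
    should equal gravity (1 g) within an agreed tolerance."""
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - 1.0) > tolerance_g:
            return False
    return True

print(verify_static_accuracy(read_static_samples()))  # True -> sensor passes
```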

The other part, as I said, is the validation. Did I build the right tool. That’s the mathematical component, where you have your PhD geniuses working on the algorithms. But it’s also clinical. Even once you have the measure, and the algorithm is your measure, it’s certainly not an endpoint, can you then say this is clinically meaningful; is a change in the value that’s spat out by this algorithm clinically meaningful. That’s a big bucket. So I think it’s important, in order to make sure we answer all of these questions, that we understand who the relevant experts are and we appropriately collaborate; that we don’t ask the question, is the mobile technology validated, and instead ask ourselves, is this digital measurement tool fit for purpose in this case. There are many components of that question.

So let me try to walk through an example that might help illustrate the concept. Let’s use the example of an ECG. The verification process in that case would be: is the output from the physical sensor, the electric potential measured in millivolts that might come off your chest strap, for example, accurate; is that electrical signal correct. That’s an engineering problem, and you verify it in a bench test. There might be conditions under which it’s accurate or not, but largely that’s a yes/no engineering problem. And then, let’s say it’s a chest strap tethered to an app or a watch, there might be some firmware that does some simple processing; that would be part of your verification. That’s one piece.

But let’s say you’re interested in heart rate variability as a measure. This is a really interesting measure, and I think it’s still getting its legs in terms of where it could be applied, from over-trained athletes to children on the autism spectrum to heart failure patients. We’re finding applications where heart rate variability provides really critical information about the condition. I’m not an expert in this, but I believe that when they’re calculating it, they look at the R wave, identify a particular point on it, and measure from that point, heartbeat to heartbeat. This is complicated: you have to find exactly the right piece; you have to, using an algorithm, identify that point on the curve of your R wave. You’re then looking at the successive beat-to-beat differences, counting those greater than 50 milliseconds, and from that you calculate the variability measure. There’s a lot of math that goes into that. That’s validation; it’s got nothing to do with whether you’re getting the correct electrical signal off your chest. That’s verification.

[10:40]

And then, even when you have that pNN50 statistic, which is the preferred measure of heart rate variability, who cares? You have to make some kind of clinical argument. Why is this important. Why does this tell me information about how a patient feels, functions, or survives. And that is the second piece of that validation process, and if you think about the previous slide, it is both a mathematical and a clinical problem. So these are complex measures, but I think if you really pause for a second and parse out the different pieces, who the experts are, it feels a lot more manageable and a lot more digestible.
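
Since pNN50 came up, here is a minimal sketch of that statistic, assuming you already have a clean series of beat-to-beat (NN) intervals in milliseconds. The hard algorithmic work described above, detecting the R peaks on the verified ECG signal, is assumed to have happened upstream.

```python
def pnn50(nn_intervals_ms):
    """pNN50: percentage of successive NN-interval differences exceeding 50 ms.

    nn_intervals_ms -- beat-to-beat intervals in milliseconds, derived
    upstream by R-peak detection on the verified ECG signal.
    """
    if len(nn_intervals_ms) < 2:
        raise ValueError("need at least two NN intervals")
    diffs = [abs(b - a) for a, b in zip(nn_intervals_ms, nn_intervals_ms[1:])]
    return 100.0 * sum(d > 50 for d in diffs) / len(diffs)

# Illustrative interval series (ms): three of the five successive
# differences exceed 50 ms, so pNN50 is 60%.
print(pnn50([812, 790, 855, 801, 870, 820]))  # 60.0
```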

As Bill mentioned, I was very fortunate to lead a few fantastic multi-stakeholder teams at the Clinical Trials Transformation Initiative. One effort was really about trying to identify and articulate a pathway for developing these technology-derived novel endpoints, and there’s a nice overview slide here. A few things really jump out at me, and for me this is really the clinical piece of what we just described as validation. I’m curious to get feedback when it comes to questions and discussion, but for all of you who have spent large parts of your careers developing measures, I’d like to think this doesn’t look that different. You have your measurement science piece on the right, and on the left you have some thoughtful, patient-centric selection of the measure, the aspect of health, the concept of interest. A couple of things jump out at me as really critically important. First, the first step is not “what mobile technology are you going to use”; that doesn’t come until later. You really do start with the patient, you start with the measure: what’s the question we’re trying to answer, what’s the facet of this particular condition that we’re trying to assess. That’s where you start.

And there is a pause point where you say, should we really be thinking about a technology-derived endpoint, or is it better for us to proceed with a more traditional measure. I think that if we can reach a point where we really do think about the science and the patient first, the whole field is going to become less intimidating. There are going to be obvious cases where using a technology-derived measure is going to be preferable. And there are going to be others where it obviously doesn’t make sense, and the existing standard, or maybe even a new traditional, non-technology measure, will remain optimal. I don’t think it’s a foregone conclusion, and as someone who believes very deeply in what these technologies can offer us, I think this is really important. The fastest way to kill a good thing is to overstretch and fail. I worry about that a little bit now, and I think working through this process is really important.

I’m conscious of time, and we can talk about this a little more, but I wanted to take a look at what we, as part of that CTTI team, described as our fourth step, selecting a suitable mobile technology for data capture, and go into that in a little more detail. As Bill mentioned, there was a second project that was really focused on the technology, and a big piece of that was how to figure out the right one. There’s a really nice framework summarized in this graphic here, with a couple of different components. The technical performance specifications are obviously critical, and that breaks out in a couple of ways. The first is measurement performance, and it’s got a gold star there to remind me to say that’s what I would consider to be verification. That’s the piece about whether it’s accurate, consistent, repeatable; is that sensor performing for you within the technology. And it might be that you can tolerate less precise measurement, depending on your measure. You’re not starting with the technology, are you; you’re starting with your question, which you should be able to answer by the time you get to that point.

And then there are other elements too. Performance isn’t just about the sensor; the technology also has to be able to communicate. So, for example, if your particular scientific question means that you want longer periods between your patient’s clinic visits, and you want them to be able to upload the data, you need to make sure there’s really good quality communication, where nothing’s dropped and nothing’s repeated, from a data integrity standpoint. Or that there’s sufficient memory; that would be another performance characteristic, so the device can last between visits, or so they could just put it in a Jiffy bag after three months and mail it back in. Whatever the flavor, the technical performance specifications have to match your needs.
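
As a sketch of what “nothing dropped, nothing repeated” could look like in practice, suppose each packet the device uploads carries a device-assigned sequence number; the function below, an illustration rather than any platform’s actual mechanism, flags gaps and duplicates on the receiving end.

```python
def check_upload_integrity(sequence_numbers):
    """Flag dropped packets (gaps) and repeated packets (duplicates)
    in a batch of uploaded, device-numbered data packets."""
    if not sequence_numbers:
        return {"missing": [], "duplicates": []}
    seen, duplicates = set(), []
    for n in sequence_numbers:
        if n in seen:
            duplicates.append(n)
        seen.add(n)
    missing = sorted(set(range(min(seen), max(seen) + 1)) - seen)
    return {"missing": missing, "duplicates": duplicates}

print(check_upload_integrity([1, 2, 2, 4, 5]))
# {'missing': [3], 'duplicates': [2]}
```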

[15:09]

There’s obviously then the data management part, and I think the biggest issue here could be data access. I think about data access, when you’re talking with a technology company, in a couple of different ways. First, in terms of a robust data governance policy: who has access to the data at the technology company, and what are they doing with that data. We had a sort of aha moment doing some work on this, and it blew my mind: if you can’t tell a patient exactly where their data is going to go, they can’t provide informed consent. That was really powerful to me. It might not be that you name everyone; it might be that there’s a clause in the consent form to use the data for other research. But you need to be sure that the technology company, especially if it is a more commercial brand, which is fine if it meets your specifications, isn’t using de-identified data for marketing purposes, for example. So who has access to the data on the back end, that’s really important.

But also, and again this is one of those things that doesn’t keep me awake at night but I do worry about because I don’t want someone to trip up on it: do you, as the sponsor, have access to the data you need. Are you only receiving processed data? Is it all black-box processing, so that when you actually come to defend the measure you used, you don’t have all the information you need to present? It might not always be the case that you need raw data, and I think the pursuit of raw data is also a bit of a flawed philosophy, and we can talk about that a bit later if you like. But making sure that you have access to the level of data that you need is critically important; it’s a place where people could misstep, and it should be considered when you’re picking your technology.

Safety specifications seem quite benign when you’re using the, I keep tripping up and calling them wearables, the simpler technologies. I mean, of course, if you have a more frail or fragile population you’d want to make sure that there’s no irritation from a band or a wristwatch. But there are fantastic arrays of sensors that are starting to show evidence that they could predict seizures in epilepsy. That’s an array of sensors embedded in your brain communicating with your smartwatch, which is terrifying and fantastic all at once, and presumably those have far more stringent safety specifications than your Fitbit.

Human factors are important too, and we’ll talk about data quality in a second, but human factor specifications really matter. You might have the perfect device, collecting high-quality data, but if you’re doing an arthritis trial and it’s really tricky to plug the device in to charge it, and you need a certain level of dexterity, you’re only going to get data until the first charge runs out, and then that’s going to be it. So those human factors pieces are really important.

Operational specifications are things like battery life and failure rate. Failure rate is a question that people should not be afraid to ask; the manufacturer does have the numbers, and if you’re worried about it, just ask.

And then there are the non-performance specifications, like cost, and like what the support will be from the technology company once the technology is deployed in the field in the trial: what help you get from them if you need to replace devices, whether they have a call center, and so on and so forth.

What I think is really important, and this is a theme that will come up as I make a couple more points, is that this decision shouldn’t be made in isolation. Partner with the technology company; be very open with them; let them know what your question is, so that they can work with you on the specifications you need. Coming in saying “we want to do it with a wearable” isn’t the right way to start an informed conversation. And also, talking to the patients about this is really important. I think we’ve covered this really well today, but it bears repeating. There’s been really good uptake in the Duchenne’s community around these measurements, for various reasons; the boys themselves have expressed, we don’t want another device, but if we need one we’ll have one; at this point, whatever we need to do, we’ll do it. There’s a tolerance there. In other indications and other patient populations, there might not be. And another common question is, you know, will this be all right with patients. I always just say, well, have you asked them. So I think not working through this series of questions in isolation is critically important.

In an ideal world where we’ve selected a really important measure, we have validated our measure, we have exactly the right technology we need to put out in the field with our patient population, how do you actually operationalize that trial? In the interest of time, I’m not going to take a deep dive into all of these things, but I do want to talk briefly about these different elements of actually operationalizing all this. And I think protocol design is really important.

[20:02]

A big concern that folks have is around data integrity, and I think a lot of this can be offset by really thoughtful protocol design. First of all, is it a measure that matters to patients. If you’re capturing an element of a patient’s condition that’s really important to them, they’re going to be more willing to be compliant. At the same time, there are different protocol elements, and this leads into the analysis plan: not everything is a big data problem. I think principles like data parsimony continue to apply. You don’t need to do continuous data collection just because you’re using a technology, for example, and I’ll talk about this a bit more in a second. So being really thoughtful in your design, being as minimalistic and as simple as you can, will likely optimize your data integrity. If the patient thinks the measure is important, has been well trained in how to use the device, and there aren’t misaligned incentives built in, then they’re unlikely to strap it on a dog, which is something I think we hear about quite often.
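
As a sketch of what that kind of minimalism might look like at the configuration level, the schedule below collects short daily bursts instead of a continuous stream. Every field name and value here is an illustrative assumption, not any real device’s configuration schema.

```python
# Hypothetical capture schedule embodying data parsimony: collect only the
# windows the endpoint actually needs, rather than streaming 24/7.
CAPTURE_SCHEDULE = {
    "measure": "gait_speed",
    "sensor": "accelerometer",
    "sample_rate_hz": 50,     # only as fine-grained as the measure requires
    "burst_minutes": 10,      # short windows instead of continuous capture
    "bursts_per_day": 3,
    "upload": "end_of_day",   # one transfer per day, not constant streaming
}
```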

There are a couple of other pieces of protocol design that I think are important too. There are a lot of questions around safety signals when we start to use these technologies. I’m not going to get into it in detail, but I would say that I don’t think we have to reinvent the wheel here; there’s some really good thinking around incidental findings in MRI scans during trials, for example, that we could look at. What’s the purpose of collecting the data; is it the safety signal itself? You might have a wearable and an algorithm, wearable, I just did it again, a technology and an algorithm, that together can certainly give you a validated outcome assessment. There might be other information where you think you know what it means, where you think it might relate to a safety signal, but there are different levels of pressure around having to handle those signals if they’re not fully developed. Again, there’s been a lot of very thoughtful and very good work done around incidental findings in scans that we could lean on here.

The other thing you hear a lot about, and this is one I want to take a deep dive into just for a moment because it’s a question at the forefront of folks’ minds, is this idea of sharing data in real time, and that’s very much a unique characteristic of using at least a subset of these mobile technologies. I think there are a few guiding principles that can steer us right. Study participant safety is absolutely paramount: if there’s information there that they should be seeing, share it. I would suggest that you wouldn’t want to blind a continuous glucose monitor in a Type 1 diabetes trial, right; that probably isn’t the best idea. You want to share that with your participants under any and all circumstances. But preserving trial quality is also critical. I think about what patients have said, and I’ve listened carefully, because I’m always impressed by patients; they always blow my mind with how insightful they are: if I’m going to take the time to enroll in this study, if I’m going to be compliant with your protocol, if I’m battling my condition and giving time out of my life to do this, please make sure that at the end of the study you can use my data, and please make sure that the study has integrity. I’ll do a lot to make sure of that if I sign up. So I think there’s definitely a risk of introducing additional biases if you’re sharing data in real time, and if you can’t control for that, you can’t share the data.

And again, I think there are analogies in the kinds of studies we do now. Ideally no one would be in a placebo arm, right; we’d all be trying to give everyone a shot. But we can’t; we need to have a reference standard in some cases, and that’s become understood. In the same way, while we can strive to do creative things, as we do with synthetic control arms in some diseases, there are going to be times when you can’t share that data in real time.

But regardless, and again this is where I think technology is not that different from what we usually do, we can still share the data at the end, and that’s something we’re still battling even in traditional trials. All participants should have the option to get their personal data and the study findings at the end, should they want them. So I think there are ways we can meet participants where they need to be met.

I also think another interesting piece is that there are certainly ways, as an engagement perk, to share other information that isn’t outcomes data and might not introduce bias. It might be: congratulations, you were 100% compliant with your wear time last week; or, you are in the top quartile, you’d use some plain language, but you know, you are crushing it compared to everyone else in the trial with your data collection, congratulations; or, we’re at 100% enrollment for this trial, we’re making great progress. So I think there are ways you could use your companion app, for example, to engage without compromising the study, without sharing outcomes data.
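
Here is a minimal sketch of that kind of engagement message, built only from wear-time compliance and never from outcomes data; the daily wear-time target is a made-up number for illustration.

```python
def weekly_engagement_message(daily_wear_minutes, target_minutes=600):
    """Build a plain-language compliance message from wear time alone,
    so no outcome information is shared and no bias is introduced."""
    days_met = sum(minutes >= target_minutes for minutes in daily_wear_minutes)
    total = len(daily_wear_minutes)
    if days_met == total:
        return "Congratulations, you were 100% compliant with your wear time last week!"
    return f"You met your wear-time goal on {days_met} of {total} days. Keep it up!"

print(weekly_engagement_message([640, 610, 580, 700, 650, 660, 620]))
# You met your wear-time goal on 6 of 7 days. Keep it up!
```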

And one thing I did want to highlight quickly, and you can see it on the right side of the screen up there, although unless you have a monocle you’re probably not going to be able to see it: there’s a really nice decision tool that came out of the CTTI working group to help support decisions around sharing data in real time when you’re developing the protocol. The questions are hierarchical. And again, I think the most important piece here is that regardless of how good your scientists and your statisticians are at doing this in an office, you really need patients to be part of this decision, whether you have patient investigators, whether you have advisors, whether you shop this out to a focus group. I think it’s really important that this be done in partnership with patients.

[25:34]

The analysis plan: as I mentioned previously, I don’t think we need to assume that these mobile technologies immediately create a big data problem. That’s not always the case, and I think the practice of data parsimony, practiced as we always do in clinical trials, is really important. These things shouldn’t be fishing expeditions, collecting everything just in case.

Also, Bill mentioned that we’re doing some work together with the DIA working group on wearables, along with a couple of other folks in the room. One of the work streams is looking specifically at operational considerations, and we’ve really been kicking around this idea of data missingness lately. To give you some teasers as to what we’re thinking: we started to think a lot about what we really mean by missing data, and it could be very different depending on your measure. For example, if it’s a sleep outcome, you don’t need to wear the device 24 hours a day; you just need to wear it when you’re in bed, at the simplest. Or if you’re looking at gait speed or gait quality, how many samples of walking do you need per day to be able to make an accurate assessment of the quality of an individual’s gait, or their gait speed, in a given day. Does it need to be a continuous measure. What we’ve come to realize, and what I think is really interesting, is that it’s not just that we want to make sure the data we include has integrity; going back to respect for the participants, we also wouldn’t want to toss out data we could get great information from because we’ve imposed very severe limitations, the traditional 80% threshold come what may, around our missing data.
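
A minimal sketch of that idea: what counts as a “complete” day depends on the measure, so the same day of data can be valid for one endpoint and missing for another. The coverage and bout thresholds below are illustrative assumptions, not validated rules.

```python
def valid_sleep_day(hours_worn_in_bed_window, bed_window_hours=9.0):
    """For a sleep outcome, only coverage of the in-bed window matters;
    daytime non-wear is irrelevant. (0.8 coverage is illustrative.)"""
    return hours_worn_in_bed_window >= 0.8 * bed_window_hours

def valid_gait_day(walking_bouts_detected, bouts_needed=5):
    """For gait speed, what matters is capturing enough walking samples,
    not 24-hour wear. (Five bouts is illustrative.)"""
    return walking_bouts_detected >= bouts_needed

# The same day could pass the sleep rule yet fail the gait rule:
print(valid_sleep_day(8.0))  # True  -- enough of the night was covered
print(valid_gait_day(3))     # False -- too few walking bouts captured
```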

On the trial conduct piece, there are definitely elements around putting a technology out in the field and asking participants to engage with those technologies in an unsupervised fashion. But you know, for all of the problems that technologies may appear to cause, they could break, someone might not know how to use them, we might have all of these issues, there’s not a technology solution to everything, but there is a whole new toolbox of things we can use. Lots of these things can be passively monitored. You could have algorithms so that when you see 24 hours with no data input, you automatically send a text asking someone: is everything okay, is it broken, reply yes or no; and someone could then call them. It doesn’t need to be terribly labor-intensive, and you can catch these things very quickly. So the risk of going several months between appointments before you realize you’re missing data is no longer there. Again, these technological tools certainly add complexities, but there’s also a whole new basket of solutions we can draw on.
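
A minimal sketch of that kind of passive monitoring, assuming a hypothetical send_sms hook into the study’s messaging service and a per-participant record of the last successful upload.

```python
from datetime import datetime, timedelta, timezone

def send_sms(participant_id, message):
    # Hypothetical hook into the study's messaging service.
    print(f"SMS to {participant_id}: {message}")

def flag_silent_devices(last_upload_by_participant, max_gap_hours=24):
    """Text any participant whose device has been silent for 24 hours,
    so a broken or unworn device is caught in a day, not at the next visit."""
    now = datetime.now(timezone.utc)
    for pid, last_upload in last_upload_by_participant.items():
        if now - last_upload > timedelta(hours=max_gap_hours):
            send_sms(pid, "We haven't seen data from your device. Is it okay? Reply YES or NO.")

flag_silent_devices({
    "P001": datetime.now(timezone.utc) - timedelta(hours=30),  # flagged
    "P002": datetime.now(timezone.utc) - timedelta(hours=2),   # fine
})
```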

Data management: the data access piece is key here, but folks should be thinking about that really early, at the point of selecting a technology and getting into bed with a tech company, and I think that extends to any third-party data platform that might be helping you organize or analyze the data. There’s also, obviously, the importance of end-to-end security and all of the controls around who touches the data and who can access the data. It’s also your Part 11: your data integrity, confidentiality, authenticity, all of those questions. They all fall under data management. That’s a big topic I’m not going to try to touch on today, but we can certainly field some questions on it. And I would suggest that folks go to the CTTI recommendations on this; I think they’re quite actionable and quite thoughtful.

And then the last piece before I wrap up is this idea of regulatory submission. I think that wave hasn’t hit yet; I hope it comes soon, when folks are ready to start including these measures in marketing applications. But I don’t think this is something that should be left to the last minute. If there are any sponsor organizations starting to test these tools, the requirements haven’t changed: you need a really good audit trail, that audit trail should apply to your metadata, and you need to define your source data. All of these principles don’t go out the window just because we’ve decided to use a technology. My thought on this is that thinking about it early is the best way to make sure there isn’t another shoe waiting to drop at the point when you finally feel really confident in a measure and in the data you have at hand.

[30:05]

So just to reflect, as I said there was too much to tackle in full. First is the importance of decoupling the technology and the measure; I think that’s really important, and it’s important as a business decision too: not only for who owns the process, who the experts are, and how you compile your data and your evidence on these different issues, but also, and it was interesting, Michelle, listening to your presentation on libraries, because in an ideal world you should be able to use any technology. If you use a simple wearable that samples at 50 hertz and it’s an accelerometer, fine, it meets certain specifications, it’s verified to do this. Then you should be able to plug and play that with whatever algorithm you have, if that algorithm is validated: if you know the input data is X and your algorithm is Y, you should be able to use it. And I think from a consumer point of view that stops the idea of a monopoly on a technology-algorithm pairing.

Then, engaging partners appropriately. Patients: hopefully we talked about them enough, but I don’t think we can overstate it. Early on, is the measure the right measure, is it something they’re interested in; and the design of the protocol, is that something they’ll adhere to, particularly because this is 100% patient-generated health data when you’re using these technologies. Regulators: I’m not sure how long you should have a good idea and not tell anyone about it; I think sharing those ideas early is probably the best way to do this. Technology manufacturers: treating them as partners is going to be critically important. I still think there’s an element of folks talking across each other, and keeping that framework I described in mind, but more importantly going in with the scientific question you want to answer rather than “I think it’s time we include a widget,” is really going to make the quality of those conversations and partnerships exponentially better. And then, always, the shout-out to statisticians: statisticians can overcome a lot of fears about data quality and completeness, and engaging them early is critically important with these technologies.

Scientific principles still apply, so again, what’s the question you’re trying to answer; none of this changes just because we’re using these technologies. And on data quality standards: I think there’s a lot of concern, a lot of apprehension; we’re a conservative industry, and that’s okay. But I think we are currently holding these technologies to utopian standards of sort of perfect data. I’d like to think that over time we can improve data quality, but right now, as long as we can meet the standards we currently have, if there is something to be gained by using a technology, whether it’s a more meaningful measure, a more sensitive measure, or a lower burden of participation, let’s do it. Right now we don’t have perfect data, and I don’t think that’s a stick we should use to beat the world of technologies with.
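
To illustrate that plug-and-play idea in code: if the sensor side is specified and verified (say, tri-axial acceleration in g at 50 Hz), then any validated algorithm written against that interface can consume data from any conforming device. This is a conceptual sketch; the interface, class names, and the step-detection threshold are all illustrative assumptions.

```python
from typing import Iterable, Protocol, Tuple

class VerifiedSensor(Protocol):
    """Contract for any device verified to emit tri-axial acceleration (g) at 50 Hz."""
    sample_rate_hz: int

    def stream(self) -> Iterable[Tuple[float, float, float]]:
        ...

def validated_step_count(sensor: VerifiedSensor) -> int:
    """A validated algorithm written against the interface, not against one
    vendor's device. (The 1.3 g threshold is a crude illustrative placeholder.)"""
    assert sensor.sample_rate_hz == 50, "algorithm validated for 50 Hz input only"
    steps = 0
    for x, y, z in sensor.stream():
        if (x * x + y * y + z * z) ** 0.5 > 1.3:
            steps += 1
    return steps

class WristWearable:
    """One conforming device; any other verified 50 Hz accelerometer would do."""
    sample_rate_hz = 50

    def stream(self):
        return [(0.0, 0.0, 1.0), (0.2, 0.1, 1.4), (0.0, 0.1, 1.0)]

print(validated_step_count(WristWearable()))  # 1
```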

So just to leave you with some additional resources if this has piqued your interest: there are the links to the two CTTI projects, which I maintain are quite valuable. And then, as Bill mentioned, there’s a really wonderful team working on three different deliverables. One, which is coming together really nicely and I think will be incredibly valuable, is a sort of dossier for digital endpoints. The second, as I discussed, covers some of those technology implementation standards. I’d certainly expect those two pieces to be published in 2019, and most likely discussed in good detail at DIA this summer. And then there’s another really interesting piece on how to think about positioning these digital endpoints, particularly with regard to more traditional measures.

So I went a couple minutes over, I’m sorry. But thank you so much.

[END AT 33:55]
