eCOA and BYOD - Recent Research and Implications

July 22, 2019

Transcript

WILLIE MUEHLHAUSEN

Thank you everyone for being here, and thanks for inviting me to talk about this—maybe for the last time, we’ll figure that out.

Today I was asked to talk about BYOD. And as Bill said, I’ve spent a lot of time and effort over almost the last ten years on BYOD. So I’m very passionate about it, and I believe that it’s the next new thing. Which—and I probably should start with this—it’s actually not new. A lot of us have been around for a long time, and we’ve used web-based systems to capture ePRO for 20 years now. And when you look at the technologies, they are different technologies, but in the end it’s BYOD. We didn’t care which device patients used at home to access the web-based system. So in many ways, it is BYOD in, I almost want to say, its purest form. And at that time, we didn’t even provide patients with a device to do it at home; they just used whatever they had. So it’s 100% BYOD. Whereas when I say BYOD—and I will probably say BYOD five times in the next 30 minutes, or 29 minutes and 19 seconds—you should always mentally add a provisioned device as an option. So when I say BYOD, I mean we offer BYOD but we also have a backup provisioned device that we can give to patients. I’m not advocating at this time that we go 100% BYOD.

I did a presentation four years ago at DIA, where I said that at that time I had seen one protocol in 16 years which said that if patients don’t have an internet connection at home they cannot participate in the clinical trial. That was as close as it got to excluding patients based on technology choices. I’ve seen another protocol, six weeks ago, that had something similar in it—that was the first time I’ve seen that if patients don’t have a smartphone, tablet, or anything like that, they would be excluded from the project. Both were late phase projects. So that’s two in 20 years. I’ve been in the industry now for 20 years and 6 months, roughly. And I’ve worked for two large CROs, for 13 years combined between the two. So I’ve seen hundreds of protocols. And I’ve always been on the technology side, so my task usually was to check what technology makes sense to use for a protocol. I’ve seen hundreds of protocols, if not thousands, and I’ve seen two so far with these exclusion criteria. And I don’t think I’ll see many more anytime soon. So just as a start, BYOD is not new. I want you to think about that, because I think that’s a fair statement. Happy to have further discussion on that one.

And now today, I want to talk a little about BYOD—what we know, what we don’t. So the slides that I’m showing—I’m not saying I show the same slides all the time, but there is some reuse. I’ve been fortunate enough to co-present with Bill, Sonya, and Bryan on multiple occasions in the last 12 months. So some of the slides are not necessarily just mine; they are from decks that we developed together over the last 18 months or almost two years.

So here is briefly what we’re going to talk about. How did we get here? Or as Bill eloquently said in one of the last presentations, how did we get into this mess? And I think he was talking about the ISPOR Task Force report that basically lays out the different levels of change when you migrate questionnaires. So just a brief history here. Bill already talked about the FDA guideline. There was the ISPOR Task Force report that was published at the end of 2008, early 2009. And in 2009 we also got the FDA guidance. So these are key documents in this whole process.

What I’m talking about today, mainly, is that there are a lot of challenges, perceived challenges, and myths around BYOD. The thing that we wanted to tackle first, within the team that I’ve been working with over the last six or seven years, was the scientific side of things. When we migrate questionnaires from one version to another version—so from paper to a tablet, or from a tablet to a smartphone or to the web—what do we have to do, and how certain can we be that these versions are actually equivalent? And in the early days, there was some anxiety around assuring that there was equivalence.

So what I’m presenting right now has that focus on equivalence. Because we figured—I figured—if we can’t get equivalence right, then the rest doesn’t matter. All the technical challenges and operational challenges can be overcome. Technology gets better, we get more experience, so things are getting better. But if we can’t find a process to assure equivalence across different modalities and different technologies, then the rest doesn’t matter, because we can’t go there, can’t do it.

[05:31]

In 2010 there was another ISPOR Task Force report, on mixed modes, that had a couple of chapters on BYOD, on equivalence, on the mixing of modes, and that’s an important one. That’s why we left it on this timeline. So here is the famous—or infamous—table from the ISPOR ePRO Task Force report of 2009. Most of you will have seen it in multiple different presentations; it sometimes looks slightly different, but this is basically, as it says, adapted from Alan Shields. There are three different levels of modification: minor, moderate, and substantial. Substantial we can basically forget about in this context. If you have a substantial change, it’s like a new instrument development, so you just have to go through the whole shebang that you need to go through to develop that instrument and do the testing. What we’re focusing on are the minor and the moderate changes.

For minor changes, the level of evidence recommended by ISPOR is cognitive debriefing and usability testing. The other funny thing is, I always felt that it has never been properly defined what cognitive debriefing and usability testing actually are. So different companies have gone through that process in slightly different ways. We addressed it in the ePRO Consortium, and I think we agreed in many ways how to do it, but there is still no standard for how to do these.

And then moderate is when you make more significant changes. One that’s easy to understand is when you go from a paper or visual approach to IVR or voice assistants. So you go from visual to oral; obviously you need to do some extra testing. But there are also changes in the wording that are more significant, for example. That’s when we need to do equivalence testing and usability testing. And equivalence testing generally includes about 50-60 patients who will answer the questionnaire in both versions—so on paper, for example, and on the tablet—with a distraction task in between. You randomize patients into two different arms, and then you also do usability testing at the end. And if you find equivalence, then you’re good to go. It takes about six months altogether and, depending on which vendor and how many patients you include, costs roughly anywhere between, say, $100,000 and $150,000. So it’s not a small project, not a small undertaking, especially if you’re under pressure to get your study started, and finding out that you now have to do an equivalence study can be a pain in the backside, actually.
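As an aside, here is a minimal sketch, with made-up scores, of the kind of agreement statistic such an equivalence study typically reports: an intraclass correlation coefficient (ICC) between the two administrations of the same item. This is not the analysis code from any of the studies discussed here; the data and function names are purely illustrative.

```python
# Minimal sketch (hypothetical data): ICC(2,1) between two administration
# modes of the same item, e.g. paper and tablet, one row per subject.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` has shape (n_subjects, k_modes): one row per subject,
    one column per administration mode (e.g. paper, tablet).
    """
    n, k = scores.shape
    grand_mean = scores.mean()
    subject_means = scores.mean(axis=1)
    mode_means = scores.mean(axis=0)

    # Mean squares from a two-way ANOVA decomposition
    ms_subjects = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    ms_modes = n * np.sum((mode_means - grand_mean) ** 2) / (k - 1)
    residual = scores - subject_means[:, None] - mode_means[None, :] + grand_mean
    ms_error = np.sum(residual ** 2) / ((n - 1) * (k - 1))

    return (ms_subjects - ms_error) / (
        ms_subjects + (k - 1) * ms_error + k * (ms_modes - ms_error) / n
    )

# Hypothetical 0-10 numeric rating scale scores for 60 subjects on two modes
rng = np.random.default_rng(0)
paper = rng.integers(0, 11, size=60)
tablet = np.clip(paper + rng.integers(-1, 2, size=60), 0, 10)  # small mode-to-mode noise
print(f"ICC(2,1) = {icc_2_1(np.column_stack([paper, tablet]).astype(float)):.2f}")
```

An ICC close to 1 means the two modes score patients almost identically, which is the sense in which “equivalence” is used throughout this talk.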

What we’re saying and what we’ve found is, this table was the right thing to do in 2009 or late 2008, because that’s all we knew. And as a conservative industry, and having to make sure that we do the right thing, that was the right thing to do at the time. But as you can see, this is not even all the literature, but a lot of stuff has been done in research since 2009 and has been published. So we know a lot more about what could potentially affect equivalence between two modes and what doesn’t. And I’m not going to read that to you, I assume you’ll get the slides later on so you can just look it up later.

In addition to that, we’ve done some work, and published it as well, basically on equivalence and what drives or breaks equivalence between different technologies. The first one was Chad Gwaltney, actually, in 2008. Chad and his team published the first meta-analysis of equivalence studies across many different studies that they analyzed. That was before the PRO guidance came out. So in 2015, we felt that if Chad could do it, we could do it too. But since he published his in 2008—I think he finished the analysis in 2007—we looked at all the data that had been published since the PRO guidance came out. We assumed there would be more studies, and that they would probably have a different approach to how they conducted the studies. And we had similar results to what Chad had, basically, and we did find a few more studies. We included IVR projects, which Chad and his team had specifically excluded. We found seven of those studies, and we actually found equivalence across those as well, which gave us another, let’s say, scientific pathway to do some more research on the oral side, or on the IVR side.

[10:09]

Then, in 2016, Bill and myself and our team did an industry survey—and some of you may have responded to that—just to find out what is out there: what do people believe, what do people think they know, what do people really know, what’s the perception of BYOD, and what are the actual hot topics that we need to address. And we put a poster out there; again, it is accessible to everyone on ResearchGate if you’re interested. In 2017, we did a meta-synthesis of all the cognitive interview studies that we had done for eight different ePRO vendors, basically in the few years up to 2017, so from 2013, I think, through the end of 2016 or early 2017. And the reason we did that is because nobody else had done it before, and we figured we had all the data there. And because I signed off on all these reports as the department head, I knew that there were three or four projects where we had some issues, where patients actually highlighted things that they didn’t like or that they thought might have an effect on equivalence. And all the others didn’t. So I felt that we should analyze that in a little more detail, which we did. And we found—it comes later in another slide—that in general it works. And the good news was that, because we had some negative examples, we could actually highlight when you get to the point where you break equivalence, which we couldn’t really show in the equivalence meta-analysis, where we didn’t find any.

So then in 2018, we thought, now we’re going to do the real big deal. We got really excited about that one. And we found a sponsor that got excited enough to pay for all of it. So we did a BYOD equivalence study. Again, nobody had done that before, so we figured we’d just give it a shot. We recruited almost 160 patients in a two-arm crossover design study, where patients filled in the questionnaire three times. And then we did the analysis, and again we found that there was equivalence across the different devices. What we also found—which we weren’t out to find, but once we had the data we did some additional analysis, and I think it’s one of the key things that is important for BYOD—relates to a question that always comes up: can older patients use ePRO, and can older patients do BYOD? We left patients to decide for themselves what kind of device they would bring to the site, to our study. And what we found is that the older the patients got, the bigger their devices got, which makes sense, right? It’s one of these duh moments, as Ashley said earlier. We didn’t anticipate that; it wasn’t part of what we tried to find or show, but it was one of the outcomes. And retrospectively it makes total sense. Patients will use a device that they use on a day-to-day basis. They’re not going to use a smaller device, which is more difficult to use, just to be part of the clinical trial.

And then there’s another BYOD equivalence study coming out, still in progress—I think the study has concluded and we’re just waiting for the final report—by the ePRO Consortium, and that was more of a long-term equivalence study. Normally equivalence studies are done within half a day, so patients fill the questionnaires in very quickly, in quick succession, with a distraction task so that they don’t have a learning effect. We don’t want them to memorize the answers at the beginning and then end up with a high ICC of .9 or higher. But the ePRO Consortium is running a long-term study where patients filled it in for, I think, four weeks or six weeks.

So why are we doing all of this? I would say first, because we can, so that’s a good one. But also, when you look at how our industry is changing, a lot of clinical trials are supposed to be more patient-centric, and we heard all about that this morning, for all good reasons. We hear more about virtual clinical trials, or the virtualization of clinical trials. We talk about real-world evidence, patient care, remote monitoring, and then we hear about these BYOD studies. So I think the reason why we are doing those, and why we need to look into BYOD, is because in the end it should be the patient’s choice which device they’re going to use in that clinical trial. And you will see later on that I’ll bring it up more often. The reason I’ve been doing this for the last ten years is because I really believe we should generate enough data to show that we should leave it up to the patient to decide what device they use in the clinical trial. I would go even further and say that we should actually let them change the device during a clinical trial if they want to. If we have good equivalence, there is no reason why they can’t start with the Alexa in the morning the first week, go on holidays, take their iPad, and finish the study in the last four weeks in Honolulu on a beach somewhere, or doing a triathlon swimming between two islands, and just do it on a tablet.

[15:35]

So the meta-synthesis of cognitive interviews—why did we do that? As I said, we did it because nobody had done it before and we had all the data. But one thing we found when we did the meta-analysis of the equivalence studies is that we reached out to all the authors we could identify of, I think it was, 72 publications. And I think seven of them got back to us with the actual questionnaires that they tested. For the paper version, we kind of figured we could take the original from the author. But it was very difficult to get the actual screenshots or any kind of visual information about what they actually tested when they did the equivalence study. So we know it was equivalent, but we don’t know what was equivalent. For example, we don’t know if they had one item per screen, or three, or ten. Did they just mimic a PDF and have patients page through that, which some systems can do? We didn’t know. With our meta-synthesis, we had all the screenshots, because that was part of how we analyzed it, and we had all the material. So that’s why we figured that, while it’s not a quantitative study—it was a qualitative study—we would still get enough information out of it to add to the meta-analysis of the equivalence studies that we did.

Here’s the equivalence study that I mentioned before, the BYOD one that we did with 156 subjects. And we particularly picked the SF-20 because we wanted to check—and I don’t have enough time in 30 minutes to go through all the details—but one thing that occurred to me in 2011-12—I have these moments, you know; some brain cells trigger more often than others, I guess. When we talk about equivalence and instruments, we always think about the instrument as a whole. Whereas I went back and said, I’m a veterinary surgeon, I like to take things apart, right, and then put them back together, on a good day. So an SF-20 or an EQ-5D is not one thing as a whole; it consists of items—we call them items, or I call them questions. And one of the things that we did in the ePRO Consortium in the early days is that I looked, with the team, at about 60, 70, 80 instruments and tried to figure out how many ways we actually have of displaying the question and the answer. So how many scale types do we actually have? And then you take out the normal stuff, like a date picker, time picker, multiple choice—the stuff that’s not research specific, that we use every day even in ordinary questionnaires about where you’re from.

By the way, who is from Limerick? Limerick here. Aha, gotcha. I live about 20 miles from Limerick in Ireland, so I was wondering who that was.

So we analyzed these by widget, and we found there are only three real widgets: the verbal response scale, the numeric rating scale, and the visual analog scale (VAS), basically. These are the research-specific ones; there’s also the Likert scale, which we consider a subset of the numeric rating scale. But these are basically the three scales. And then we looked at that in this specific project, made sure that we had all three in the questionnaire that we used, and then did the analysis across the different widgets. Because our theory was—and still is; I think we’ve found enough evidence now, though we’re still looking for more—that if patients understand the concept of a visual analog scale or a numeric rating scale, then the actual question and the answer options that you provide, in the context of doing a migration, don’t matter anymore. Making sure patients understand it and that it’s meaningful is something you do during the instrument development phase, not during the migration. Now, unfortunately, with some of the older instruments we always get a lot of feedback from patients that they don’t like the answers or don’t like the questions. That’s something we don’t want to deal with, because it’s a little bit late in the game to deal with it when we do the migration. It’s not that we ignore it, but that’s not what migration testing is about. So the idea is that, if the patient understands the concept of these scales, then we don’t have to test and validate every migration for every instrument that uses these three standard scale types, plus date picker, time picker, multiple choice, and so on.
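To illustrate the point about widgets, a hypothetical data model for the three core scale types might look like the sketch below. The class and field names are assumptions made for this example only, not any vendor’s actual API.

```python
# Hypothetical sketch of the three core response widgets described above:
# verbal response scale (VRS), numeric rating scale (NRS, with Likert
# treated as a subset), and visual analog scale (VAS). Illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class VerbalResponseScale:
    question: str
    options: List[str]        # e.g. ["None", "Mild", "Moderate", "Severe"]

@dataclass
class NumericRatingScale:
    question: str
    minimum: int = 0
    maximum: int = 10
    min_label: str = ""       # anchor shown at the low end
    max_label: str = ""       # anchor shown at the high end

@dataclass
class VisualAnalogScale:
    question: str
    left_anchor: str
    right_anchor: str
    length_mm: int = 100      # the stored value is typically 0-100

# Example item using the NRS widget
breathlessness = NumericRatingScale(
    question="How breathless were you today?",
    min_label="Not at all",
    max_label="Extremely breathless",
)
```

If every item in an instrument maps onto one of these structures, plus the generic date, time, and multiple-choice pickers, then the migration question reduces to whether patients understand the scale concept itself, which is the argument being made here.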

[20:15]

Just to be a little more graphic: we made up these questions, so don’t worry about content validity or any of that stuff; it’s just to show what these look like. On the left-hand side you have the paper version, and then you have iOS and Android. We’re not saying they’re all the same, but they are similar enough. So a game I want to play with you here is: spot the difference. You can see there is a difference from paper, certainly, but also between iOS and Android. So you see there are differences. But the question is, are they meaningful differences? Do they make a difference in how patients answer the question? And what we found in the studies that we’ve done is that they don’t. And here is the same thing again, the equivalence across variable screen sizes. We have very high correlations between the three modes of administration for each response scale type. So you can see the verbal response scale (VRS), the numeric rating scale, and the visual analog scale all have very high correlations. What we took from this, and from all the other work that we did, is that you can use BYOD in any study and rest assured that whatever you had in the original version is equivalent to what you will have with BYOD.

With a couple of caveats. There is a good summary of all the evidence that we put together, and four of the five cowboys—cowgirls—are here. So Ashley; I haven’t seen Ari—is Ari in the room? There you go, hi Ari, you’re everywhere. And Bill and myself and Chad. We basically pulled all of the data together and published it earlier this year in an article. Spot the difference, right—so there are differences, but again they are not meaningful differences. We’ll come back to what I said with regard to migration. If you build an instrument and migrate an instrument that consistently uses the standard widgets—what we call widgets, or scale types—then you don’t have to worry about equivalence, or about introducing a bias by moving it onto a different platform, even if you go with BYOD. The important thing is that you need to keep it simple and consistent.

Just a couple of things. Here is an example of these three scale types. And here are the elements—I’ll just go through this one quickly as we skip through. The meta-analysis and meta-synthesis, again, we did those. We covered 101 patient-reported outcome measures, so it’s actually a big number that we tested there.

If you develop an instrument, if you migrate an instrument, stick to the best practices—and these are not a secret. Again, they were published by the ePRO Consortium a long time ago. They’ve just been revamped this year, so they look a lot nicer than they have for the last six or seven years; the content hasn’t changed, though. There are three of them. There’s “Best Practices for Electronic Implementation of Response Scales for Patient Reported Outcome Measures.” Then the middle one, “Best Practices for Maximizing Electronic Data Capture Options during the Development”—this is for new developments. And the other one is for the migration of questionnaires. And then, Bill and I decided earlier in the year that it would be good to pull a lot of that together into a small booklet—96 pages, it’s not that much. We pulled it all together, it has been out there for the last two or three months, and we’ve gotten a lot of good feedback already. All proceeds are being donated to charities in the UK and in Ireland. So we’re not going to get rich. Famous, probably infamous, but that remains to be seen.

My daughter just asked me, Papa, are you famous? I said, no, I’m not really famous, I’m more infamous. She said—she is ten—hm, what is infamous? I said, you’re famous for the wrong reasons. And she said, okay, what is that? I said, like a pirate. Oh, so you’re a pirate. Yes, sort of.

[25:00]

So our recommendation, from all the research that we’ve done, is: keep it simple. I would recommend, for everyone who wants to have some fun with Google, enter “visual analog scale” into Google and hit the image button instead of the word search, and then you will see everything a visual analog scale can look like. You’ll enjoy that. I did, or not. So our recommendation is, if you don’t want to have to do an equivalence study or do cognitive debriefing after every migration of a questionnaire, and if you want to go with BYOD, you have to keep it simple. Use the existing widgets. Text art only when proven beneficial. So for example, I’m no big fan of bold, italics, underline, capitals, colors. If you have scientific evidence that having a visual analog scale going from red to orange to yellow to green helps you with your content validity, then so be it. But if you don’t have that, just don’t use it. Just because you can doesn’t mean you should. And sometimes simple is better.

One item per screen—we’ve had that discussion on and off with vendors for the last ten years. You’ve got the real estate—I’ve seen some iPad Pros here; Ashley, you’ve got an iPad Pro there as well, a big one. So you could easily put ten questions on there, right? But then you have another variable that you need to worry about. If you do one item per screen, you don’t have to worry about that variable. And I’m sure that if you were talking to Ashley when she was in her role at the FDA, she would have an opinion about adding extra variables into the mix around equivalence.

Use neutral verbiage during development. One of these pet projects of ours is “select” versus “circle.” On paper it’ll say “circle a number” for a numeric rating scale. If you use “select,” that works in every technology, in every questionnaire. So these are the things that we recommend developers start to think about. I was at an ISPOR donkey’s years ago, 10-12 years ago, where developers were on stage saying, we will never consider technology when we develop new instruments. Some people left the room at that point; I stayed because I thought it was funny. Things have moved on, and we just need to move on as well. As I said, use basic widgets and avoid creative combinations. Sometimes we see instruments that use basic widgets but mix them together, because they could on paper. And again, unless you have really good evidence for why you would want to do that, just don’t. There is a time to be creative. This is not one of them.
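As a purely illustrative sketch of those two recommendations—one item per screen and neutral “select” verbiage—a hypothetical renderer could look like this; none of the names here come from an actual eCOA system.

```python
# Purely illustrative: render exactly one item per screen and keep the
# instruction wording neutral ("select" rather than "circle"), so the same
# text works on paper-derived instruments, apps, web, and BYOD alike.
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    question: str
    options: List[str]

def render_screen(item: Item) -> str:
    """Return the display text for a single screen showing one item."""
    lines = [item.question, "Select one answer:"]           # neutral verbiage
    lines += [f"( ) {option}" for option in item.options]   # one target per option
    return "\n".join(lines)

pain = Item("How much pain did you have today?",
            ["None", "Mild", "Moderate", "Severe"])
print(render_screen(pain))
```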

And then our recommendation is—and you will see that in the widget publication, in our meta-synthesis—that if you stay within these rules and then conduct an expert screen review instead of cognitive debriefing or usability testing, you’re generally okay. That’s our recommendation. We’ve had interactions with the agency and with many pharma companies, and this seems to be the new approach that is somewhat accepted. Now, we need to get more experience with that, but the important thing is that you have somebody who has done this before and knows what they are doing review and sign off on it. And then you can also take that person to meetings with regulators and have that discussion, but it saves you a lot of time and effort if you do it that way.

These are more detailed recommendations. Again, I won’t read them out; they’re in the slide deck so you can read them and follow up on them. And again, we’re back to this thing here—why are we doing this? Because it’s important, and I think in the end it’s the patient’s choice.

MODERATOR

Thanks Willie. Any questions for Willie?

AUDIENCE MEMBER 1

Thank you, Willie. Very interesting, great body of work summarized very nicely there. So it seems to me that these are all mostly PROs that you’re talking about. So I’m wondering what are you thinking about next? Are you going to think about ClinROs at all? Or where do you see this going? What’s the next thing to do?

WILLIE MUEHLHAUSEN

I’ve avoided ClinROs in the past because there is no such thing as a standard ClinRO, right—they all have slightly different approaches, and some are more complex than others. So I’m not sure that I want to spend a lot of time in the next ten years looking into ClinROs. But I think the concepts would be the same with ClinROs. I also think that when we look at performance-based outcomes, we can probably find similar concepts there as well. We just haven’t done that in the past; we haven’t looked into this because we were really focusing on the patient-reported outcome side of things. If anyone is happy to work with me—I don’t know, you’re in it, right? You’re interested in doing ClinRO work? [laughter] I’m not going to do it by myself.

[30:25]

MODERATOR

Well, actually, there is an ePRO Consortium research group that is looking at best practices for electronic implementation of ClinROs, and I know some of the CRF Bracket team are a part of that research group. But I would also say that they are very different. With patient-reported outcomes, we’re often asking patients to complete these things at home, unsupervised, and without really any training. We don’t sit down and explain to them the meaning of every question in the instrument. Whereas when we train people to use a ClinRO—the clinicians—we provide comprehensive training, so they understand exactly what every item is that they’re measuring. And so a difference in the way you might implement it on one electronic solution versus paper versus something else is, I think, somewhat irrelevant in the context of the training that you provide. That’s my personal view. We’ll see what the ePRO Consortium comes up with.

WILLIE MUEHLHAUSEN

I think with the ClinRO stuff, also, what I find interesting is that only in the last few years have we started to think about whether we need to translate them or not. So I always felt that ClinROs are a little bit of an orphan child in many ways. Everybody, including myself, was focusing on patient-reported outcomes, so ClinROs probably need some more attention. And as Bill said, there are groups out there doing that. So, not the answer you were probably looking for, but there is still work to be done.

MODERATOR

One question here.

AUDIENCE MEMBER 2

One question for you. A couple of years ago, we had used BYOD data for our Phase 2 trials, and we were going into our Phase 3. We basically came to the point where we couldn’t find one example—one sponsor—where the FDA had approved BYOD data for a label claim. And so we decided to go with provisioned devices. In the past few years, do you know of anybody that has gotten approval?

WILLIE MUEHLHAUSEN

I come back to the start of my presentation: we’ve done BYOD with web-based systems for the last 20 years. So just by pure chance, I would say there is almost 100% probability that somebody submitted data from that which was then included in a successful submission. We don’t track—and correct me if I’m wrong, you guys are ex-FDA—but we don’t track how we collected the data when we submit it to the FDA. It doesn’t say this data was collected on paper, this on an app, this with BYOD, and this on a provisioned device. That may be something we want to look into until we better understand whether there is a difference or not. So currently I don’t know. I do know of two pharma companies that I worked with in the past and am currently working with that are pushing the envelope in that direction; they are adamant about having a drug on the market over the next few years where all the important data was captured with BYOD. But I’m happy to get comments from the two of you if you want to comment on that. So I don’t know whether the answer is yes or no, except that we’ve done BYOD for almost 20 years. I find it almost impossible that nobody has submitted that data to the FDA.

AUDIENCE MEMBER 3

I don’t work for the FDA anymore, so I can’t really comment on behalf of the FDA, but I think it really comes down to the data collection: is it compliant with FDA regulation? It doesn’t matter how you collect it—paper, electronic, whatever it is. You could do BYOD, you could have a provisioned device, whatever it is. Can you guarantee, can you trace the data? I think that’s the common denominator. But I would say the agency is now probably more open to a lot of new approaches—as you’ve probably heard about the digital health initiatives, not just within CDRs; I think you’re leading a lot of that too. So I think you’re going to see more openness, probably.

I had a question for you. By the way, it was a very good presentation. If I were a patient, I would do BYOD—that’s just me. But maybe another patient would say they prefer something else. And I do like the comment about the web browser, that we’ve been doing it for a while. I think it’s a really great way, if you’re doing a multi-modal approach, where they can go on the website and fill it out, or you give them a device and they can go to the web browser, or they use their own device, their mobile phone. The only thing with that is usually the internet connectivity—can you guarantee that? That’s one issue, especially in a multi-national trial. So going back to the study that you guys conducted, what was the target population like, in terms of geography, demographics, and clinical characteristics? Just high level.

[35:35]

WILLIE MUEHLHAUSEN

I think with BYOD we don’t know enough yet to really give you a good recommendation on that one. And I’ve said on multiple occasions before that we will only learn by doing it. We can talk about it and theorize about it for the next five or ten years; unless we go out in the field and do it, we will not find out what works and what doesn’t work. But it’s one thing that, for example, Bill and I have discussed many times over the last couple of years: when we talk about elderly patients, can they do that? Whether somebody is old or not has got nothing to do with whether they can use it or not. It’s: do they have the mental capacity to do it, can they still use their hands, can they read and write. So there are other parameters—probably related to age, but age in itself is not the problem. We just need to start looking at these populations in a different way, as far as I’m concerned, and probably need to come up with a scoring algorithm or some form of evaluation to determine whether a specific patient population will be willing and able to use BYOD. As I said earlier, the older the patients got in our study, the bigger the devices got. That by itself gave me a lot of confidence that patients use these devices; they are out there. We had patients that were very old—I should be careful, I’m getting older myself—that participated in these clinical trials, and it was not a challenge. So one parameter that we think we need to look into is familiarity with smartphones, for example. But again, if you use BYOD and they say, I don’t have one, then you provision a device. That’s why I said earlier, you always need to have provisioned devices as a backup. But if they bring their own device, then they should be, and probably will be, very familiar with that device and can work with it. The only thing you then need to make sure of is that the device isn’t too small. There’s always a concern about whether they can read it. But again, our study showed us that yes, they bring a device that they can read, because they bring the big ones if they have an issue with their eyesight. So I can’t give you a definitive answer yet on where to use it and where not, because I think we need to do more and test more. But the principle of BYOD should address most of the issues.

Same with connectivity: they will not have a device at home if they can’t get connectivity. These devices need connectivity one way or the other—to download an app or a game, email, whatever they want to do with it. So if you provision a device, you hope that they have an internet connection at home. If they have their own device, they must have connectivity somewhere, or they have access to the internet at some point. So a lot of the challenges that we have had over the last 20 years with provisioned devices should go away, or at least be minimized, by allowing BYOD.

AUDIENCE MEMBER 3

Was there any particular geographic population, I guess—was it US primarily?

WILLIE MUEHLHAUSEN

In our study, that was an equivalence study, so again it’s a lab study and it was in the US only. That was one site in the US.

MODERATOR

Great Willie, thank you very much.

[END AT 39:00]
