Okay, we'll finish out this module talking about testing and refinement, survey administration and data collection, and data analysis: three more categories as we work through these best practices and principles for special considerations in survey research.

First, testing and refinement. We mentioned this in the last segment, and it's critical in survey research because a lot of the time you won't get a second chance. Allow time for usability testing. Take the survey yourself, and, as I mentioned earlier, time it. Then get other people taking it besides yourself and your group. Have them look at how long the survey takes to complete, and at its readability and clarity. That's why we want to take it outside our group and get it in front of people who resemble likely participants, though not participants we'll actually contact, so they can judge the readability and clarity of the survey. Look at it time and again for spelling and grammar, the layout (as we covered in the last segment), general appearance, the mutual exclusivity of response choices, and a lack of bias in the questions. It's hard to overemphasize this point: there's no room for error once you send the survey out to the public or to your study group, so find as many friendly reviewers as you can to make sure you're getting it right from a usability standpoint.

Also allow time for functional testing. Make sure that you and your testers are looking at question flow. Once I start taking this, does it make sense to ask the pregnancy question first, or should it come after the gender question so we can use branching logic? Is the branching logic working correctly? This takes testing time, but you've got to check the different permutations of how people might answer the survey. Are the testers following the instructions? Look at the answers coming back and at the patterns of use. A lot of the time you'll notice testers quitting around one particular section, which is a good clue that something in that section isn't understandable or is putting people off; a quick tally of where each tester stopped makes this obvious, as sketched below. Finally, look at the data coding: make sure that when testers answer the questions, the coded values come back for analysis in the way you expect.

Back to that second-to-last bullet point. When results come back from a survey, it's very tempting to say, "They didn't understand this. How could they be so naive?" It's easy to put the blame on the users, but if things go wrong, it's always the fault of the survey designer.
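To make that drop-off check concrete, here's a minimal sketch in Python with pandas. The data frame and question names are hypothetical; the idea is simply to find, for each tester, the last question they answered and see whether the exits cluster around one section.

```python
import pandas as pd

# Hypothetical pilot data: one row per tester, one column per question,
# in the order the questions appear in the survey. Blank answers are NaN.
pilot = pd.DataFrame({
    "q1_age":    [34, 51, 28, 45, 39],
    "q2_gender": ["F", "M", "F", "F", "M"],
    "q3_income": [50000, 72000, None, None, None],
    "q4_health": ["good", "fair", None, None, None],
})

# For each tester, find the last question they answered. A cluster of
# exits at the same point suggests a confusing or off-putting section:
# here, three of five testers quit right after q2, so look hard at q3.
last_answered = pilot.apply(lambda row: row.last_valid_index(), axis=1)
print(last_answered.value_counts())
```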
You have to keep that mindset front and center with you and your team as you're looking at these results. If the testers are doing it wrong, then something isn't clear, and you need to go back and iterate until they're doing it right.

Allow time for data testing as well. We mentioned this in the last slide around the coding of variables, but this is a great time to look at the data being collected, and a good time to get your statisticians and analysts involved. Go back to that very first principle: does every question relate to the specific aims, or have we let some nice-to-have questions creep in? Do the question responses make sense? On our test runs, how many answers are left blank? And when they're left blank, is it because testers didn't know the answer or because they refused to give one? Looking for patterns there is important and can help you modify things for one last iteration if needed. Are too many questions answered "do not know"? Are testers completing the survey? And are the answers consistent with the branching logic? Going back to the testing we discussed a moment ago: in our test results, are the answers consistent with what we expect? If we're getting a pregnancy answer on males, we know something's wrong.

Finally, this one is important as well, and I mentioned it when we were talking about the Likert scales: look at the variation in your responses. Suppose you run a small pilot test with 20 users, some question uses a Likert scale (or really any type of response), and all 20 people give you the same answer out of a choice of five. Chances are that question isn't going to add much value. If there's no variance, it may just be a check-the-box question; maybe it's an inclusion/exclusion criterion where you simply want to confirm the answer. In general, though, when we ask a question we expect some diversity in the answers. I've found that going back over the testers' answers pays off: a question showing no variation is a good candidate for review. First, see whether the question should be removed because it adds no value; otherwise, shore up the language or the choices so that you do get some variation, assuming the question matters to the specific aims or objectives of the study or trial. The sketch below pulls these data checks together.
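Here's a minimal sketch, again in Python with pandas and with made-up variable names and codes, of the data checks just described: blanks, "do not know" rates, branching consistency, and zero-variance Likert items.

```python
import pandas as pd

# Hypothetical pilot responses; "dk" codes a "do not know" answer and
# NaN codes a blank. All column names are invented for illustration.
pilot = pd.DataFrame({
    "gender":    ["F", "F", "M", "F", "M"],
    "pregnant":  ["no", "yes", None, "no", "yes"],  # should be blank for males
    "satisfied": [4, 4, 4, 4, 4],                   # 5-point Likert item
    "smoker":    ["no", "dk", "dk", "dk", "no"],
})

# How many answers are left blank, per question?
print(pilot.isna().sum())

# What fraction of answers are "do not know", per question?
print((pilot == "dk").mean())

# Are the answers consistent with the branching logic? A pregnancy
# answer recorded for a male tester means the skip pattern is broken.
bad_branch = pilot[(pilot["gender"] == "M") & pilot["pregnant"].notna()]
print(f"{len(bad_branch)} males with a pregnancy answer")

# Flag items with no variation: every tester gave the same answer, so
# the question is unlikely to add analytic value as written.
if pilot["satisfied"].nunique() == 1:
    print("'satisfied' shows zero variance in the pilot; revise or drop it")
```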
Okay, moving on to the next domain: survey administration and data collection. Consider your modes of administration. We've already talked a little about paper, electronic, and web-based systems. There are also systems for phone surveys, where either the system rings your phone or you're instructed to call in at a particular number, and, particularly for coded variables and structured information, respondents can key their answers in on the phone. You can do the same thing with SMS messaging. Be a little careful with both of those modalities, though. If you're doing clinical and translational research, think about the privacy of the data, because SMS messaging is inherently insecure, and you want to make sure you're not passing information back and forth that would compromise the confidentiality of the patient responses. Phone is a little better there, but with any of the methods, web-based and paper included, you always want to take privacy into consideration.

Consider the environment too. A paper survey might work well in a waiting room, public or private. An electronic one can also work in a waiting room: hand somebody an iPad, or set up a kiosk in a corner where they can fill out the survey. Mailing paper surveys is still quite common, but we're seeing more and more electronic versions of these surveys come our way; as society goes more web-enabled, they're more common all the time.

You can also consider invited versus open participation. With invited participants, if we're on paper, we might mail surveys to 50 specific individuals; if we're electronic, we might email a web link to those same 50 individuals. We might also take an added precaution and make that web link work only one time: after a participant completes the survey, if they try to hit the link again, or pass it on to a neighbor saying "hey, you should take this survey too," it won't work (see the sketch below). The other approach is flat-out open participation: a stack of paper surveys in a waiting room, or a web link posted on a public site that can be taken many, many times by any individual. There's no wrong answer here in terms of how you put something together; it really just depends on the needs of your study.
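Since the single-use link idea comes up a lot, here's a minimal sketch of the mechanism in Python. This is purely illustrative: the URL, the token scheme, and the in-memory bookkeeping are all assumptions, and in practice most web survey platforms can generate one-time invitation links for you.

```python
import secrets

used = set()       # tokens that have already completed the survey
invitations = {}   # token -> participant id (hypothetical bookkeeping)

def make_invite(participant_id: str) -> str:
    """Generate a unique, unguessable link for one invited participant."""
    token = secrets.token_urlsafe(16)
    invitations[token] = participant_id
    return f"https://example.org/survey?t={token}"  # assumed survey URL

def accept(token: str) -> bool:
    """Allow the survey exactly once per token; reject reuse or sharing."""
    if token not in invitations or token in used:
        return False
    used.add(token)
    return True

link = make_invite("participant-007")
token = link.split("?t=")[1]
print(accept(token))  # True: the first completion is accepted
print(accept(token))  # False: the link no longer works, even if passed on
```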
Finally, just briefly, let's talk about the data analysis piece. We've already talked a lot about testing the data. Whether you're doing paper, web-based, or phone-based collection, you're always going to want to analyze that data, so think upfront about creating an electronic database platform to house it. If you're using a web or phone survey system, there's typically already a database platform behind it, but it's still important to think it through. Before you start the study, walk the process all the way to the end: pull in test data from a pilot data collection, load it into the database, and export it into the statistics package. Then your statisticians or analysts can take that data, and even the metadata about the survey, and start writing their analysis scripts. Keep practicing all the way down to the database side and even into writing the scripts for a mock analysis; you won't be sorry. A lot of the time those preliminary scripts just evolve, and they become what you use in the actual analysis once you start collecting real data. A starting point might look like the sketch at the end of this segment.

This completes the overall module. I think we've done a decent job of covering the basic concepts for collecting data with survey or questionnaire instruments. One more time: it's really important to do a good job with this and to think it through upfront, before you start the study.
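And as promised, here's what that mock analysis script might look like as a starting point. The file name and variable names are hypothetical; the point is just to exercise the export end of the pipeline (survey to database to statistics package) before any real data arrive.

```python
import pandas as pd

# Read the hypothetical pilot export; swap in your real export file.
df = pd.read_csv("pilot_export.csv")

# Basic descriptives the statistician might start from.
print(df.describe(include="all"))

# Example cut: mean satisfaction by gender, assuming those coded
# variables exist in the export. Replace with your real variable names.
print(df.groupby("gender")["satisfied"].agg(["count", "mean"]))
```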