In this section, we'll talk about building on the work of others: specifically, using validated instruments when planning a data collection strategy. We'll talk about the value of reusing existing instruments, how that can improve your research, and how it can increase the efficiency of doing your research. We'll also talk about the important considerations for reviewing existing instruments and deciding when and where they can be useful for your specific studies. We've talked already about the significance of investing early in your research study in the data planning strategy; putting time in early on helps guarantee that you get good results later in the study. We've talked as well about the importance of keeping a good codebook: defining all of the data elements, both early on when you're collecting and for use later on when you're analyzing. And we've talked about the importance of being able to iterate many times with your research team until you get things right, all the way from the data entry person to the statistician who will be analyzing the data. What we haven't really talked about, though, is the reuse of validated instruments. Validated instruments in this context, and we've outlined some definitions here, means reusing the work of others who have gone before with experiments, asking the right sorts of questions to get at answers around things like: is a person depressed? How much are they sleeping? What's their health literacy, and how would we quantify it? And all sorts of other domains. So we're going to talk through the concept of not reinventing the wheel. That's probably the most important piece. If you're doing a study and somebody has already figured out the measurement methods, the right questions to ask to get a quantitative answer for something you want to measure, the last thing you want to do is reinvent the wheel and do that work again. Leveraging work from others will save time, and it will add credibility when you publish your results. So I'm going to give a very quick example. This is just an example; we're not going to be talking about study design, and I certainly have no inclination that my hypothesis is correct here. We're just going to pretend for a moment that we have a hypothesis that daily intake of potassium affects an individual's depression level. We'll pretend as well that our research team, the team we've assembled and work with on a day-to-day basis, knows everything there is to know about intake of potassium. We're a group of nutritionists; we know how to calculate diets, and we really know how to measure intake. For the purposes of this study, we're going to put together some daily regimen of banana intake to make sure that we're getting precisely the amount of potassium we want. So we've got this idea that potassium may be linked to depression in some way. The problem is we don't know how to measure depression. How do you measure depression in an individual?
Where we come down to is this: we may have scientifically conclusive methods on one end of the study, in this case the potassium intake, but if we've got scientifically inconclusive methods on the other side with depression, in how we're measuring, quantifying, and determining what level of depression an individual has, then we're going to do this study and come up with scientifically inconclusive results. The reviewers and readers of our manuscript will not trust that the study is sound, because we may have used improper or insubstantial methods to measure depression. So this may be creating some sort of a disconnect in our heads. We know the importance of doing all of the upfront thinking, and we've gone through the concepts of getting it right early on, but we do have this issue that we think depression is important yet we don't quite know how to measure it. And if we don't know how to measure it, we certainly don't know how to put the data management plan together for it. So we'll focus for a few moments on this particular component and how we might leverage validated instruments, the work that has gone on before us, to fill that gap. First, though, a very strong warning: creating and validating a new measurement method for something like depression is really going to be a study in itself. To be able to speak conclusively about the effects of potassium on depression, you're first going to have to show people that you have the capability, in this particular study, to measure depression. If you're not using the work of others, if you're going to go it alone and create a new measurement method, you are basically going to have to do a pre-study, so that you prove your methodology works for measuring depression against some sort of a gold standard, maybe a psychiatric evaluation of depression. So we're really talking about a study that comes before your study; it could even be on the scale of a doctoral project. These are not trivial issues. The other thing that I want to warn about is that reviewers will not believe the potassium linkage when you do your study unless you can convince them that you can accurately measure depression. And so we've got this dilemma. The dilemma can be solved, and is solved in the real world, by first going to the literature. PubMed is a great resource for seeing who has done what and what has gone on before you. It's always a great idea to consult with experts and even invite collaborators into your project. Again, if I'm the PI on this study and I've got worldwide expertise in nutrition on that end of the study, one of the things I'm probably going to do is reach out, within my institution or outside it, to try to find some experts to come in as collaborators on the study, and at least help me figure out what needs to be done so that we shore up our experience and expertise around depression. Again, we're not going to do that in this particular video. What we're really about here is talking about data management.
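As an aside, that first step of going to the literature can also be scripted. Below is a minimal sketch using NCBI's public E-utilities API to search PubMed for validation papers around depression questionnaires. The endpoint is real, but the search term is only an illustrative starting point I've made up for this example, not a vetted query, and a real literature review would refine it considerably.

```python
# Minimal sketch: searching PubMed via NCBI E-utilities (esearch).
# Requires the third-party "requests" package. The query term below
# is an illustrative example, not a validated search strategy.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": ("depression[Title] AND "
             "(validation[Title/Abstract] OR validated[Title/Abstract]) "
             "AND questionnaire"),
    "retmode": "json",
    "retmax": 20,
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]

# Print the hit count and a PubMed link for each returned record ID.
print(f"{result['count']} matching records")
for pmid in result["idlist"]:
    print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```

From each hit you would still do the human work described next: read the title, abstract, and methods, and trace citations back to the original validation study.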
So let's take a very cursory view of how this plays out in the real world. Here I've done a PubMed Central search to find validated instruments related to depression. What I might do, if I'm starting from scratch before I consult with experts, is go through the titles and abstracts and look through the methods sections of these papers, and really try to determine how people before me, doing studies that are somewhat similar, maybe other nutrition studies in the same type of population I'm using, have measured depression using methods that are lower in burden, time, and expense than hiring psychiatrists to evaluate each individual. When I find references, I'm going to go all the way back to the source validation studies, where those who have gone before us did the methodological work of validating new instruments. We go all the way back to the beginning and work through it to make sure that this is an instrument that works. Context is absolutely crucial here. You may be working with an instrument, a survey with some scoring associated with it that gives a quantitative estimate of depression, that is perfectly valid in the manner in which it was tested in the original population. But if you violate the conditions under which the testing and validation were done, you may in fact not be able to use the same instrument and draw the same conclusions. Validation can depend on population, demographics, language, administration modality, context, setting, and lots of other things. As I go through and decide which of the previously validated instruments I'm going to use for this particular measurement, I'll have to take all of those things into consideration and either investigate them myself or draw on the team of experts I've brought in for collaboration. So the important things to consider are validity (has the instrument been adequately proven to measure what it claims?), reliability (whenever the instrument is used, are the results consistent and reproducible?), licensing (some instruments are openly available and anyone can use them at no cost, while others must be licensed at a cost, so checking the license and use agreements is important), and finally the validation conditions: do the conditions under which the results were validated really apply in my study? If the validation used a paper survey and I'm thinking about doing a phone survey, that's an area where I might need to be a little careful. If the study was in elderly women from Finland in rural conditions and I'm doing the same type of study in urban conditions, is there something about that parameter, urban versus rural, that will invalidate the instrument? Maybe it was in a different language: depression as measured with a French-language instrument, say, that we're going to flip over to English. What does that mean? Can I still use it, and does it still make sense?
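Before we move on: since I mentioned instruments "with some scoring associated," here is a minimal sketch of what scoring a summed-scale questionnaire looks like in practice. The 0 to 3 item scale and severity bands below follow the published scoring rules for the PHQ-9, a widely used depression instrument, but treat the function itself as illustrative; always verify scoring against the instrument's source publication before using it in a study.

```python
# Minimal sketch of scoring a summed-scale instrument. Item range (0-3)
# and severity bands follow the published PHQ-9 scoring rules; the
# function name and structure here are illustrative.

def score_phq9(responses: list[int]) -> tuple[int, str]:
    """Sum nine 0-3 item responses and map the total to a severity band."""
    if len(responses) != 9 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("PHQ-9 expects nine responses, each scored 0-3")
    total = sum(responses)  # total ranges from 0 to 27
    if total <= 4:
        severity = "minimal"
    elif total <= 9:
        severity = "mild"
    elif total <= 14:
        severity = "moderate"
    elif total <= 19:
        severity = "moderately severe"
    else:
        severity = "severe"
    return total, severity

# Example: one participant's nine item responses
total, severity = score_phq9([1, 2, 0, 1, 3, 0, 2, 1, 0])
print(total, severity)  # prints: 10 moderate
```

Notice that the scoring itself is trivial; the hard, expensive part was the validation work that established the item wording, the scale, and the cut points, which is exactly why we reuse it rather than reinvent it.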
Coming back to that caution: I would always be careful of saying, "I just need to change one thing and then it meets our needs," because once you've changed that one thing, you've possibly invalidated the validated instrument. So beware. I want to end this section by talking a little about how we've implemented validated instruments in the REDCap platform. REDCap is a data management platform that we and many other groups use. One thing that has come up many times in the REDCap consortium, the groups of individuals that use it here and elsewhere, is this: we can follow all of the best practices, we can keep the codebooks, and REDCap makes it easy to develop and spin up a study, but it would be even easier if we had a method to bring in validated instruments. We'll talk about that more as we go along into the EDC components of this course, but for now I've got a screenshot up of the public face of REDCap. On the top right, I've got the shared instrument library open. I've searched for depression, and the validated instruments that have already been coded and licensed to sit in the REDCap platform are shown below. You can see that, even within the small total set of instruments in REDCap, there are a number related to depression. If we click on one of those, you'll see that the concepts we've just talked about are the ones that matter most when you're reviewing and evaluating instruments. At the top, we have how the instrument was validated. In the middle is the acknowledgement, the paper you would go to if you really wanted to dig down deep. And at the bottom, in the yellow section, we see some of the terms of use. In the REDCap platform, if we're interested in an instrument, we can click a link and see what it would look like as a web-based survey or a PDF survey. There are some things we'll talk about downstream regarding how, if you're working in REDCap, you can bring that instrument into your project for immediate use, but again, for now we'll leave that for a later time. So this concludes this video segment in this section on validated instruments. I hope we've relayed the basic concepts behind why and when you might want to reuse existing validated instruments, and especially some of the important considerations when choosing one.