Hi, again. This is Kay Dickersin, and we're in Section E. I'm going to talk about reporting systematic reviews and meta-analyses transparently. We've talked about, and you've heard about in other classes, reporting guidelines such as CONSORT for clinical trials and STROBE for observational studies. There are also reporting guidelines for systematic reviews and meta-analyses. PRISMA is the reporting guideline for systematic reviews of clinical trials, and MOOSE, a little outdated by now but nevertheless still useful, is the reporting guideline for systematic reviews of observational studies. You should know, by the way, that an updated MOOSE is being worked on.

I'm showing you here the PRISMA checklist, and you should definitely look it up for your own systematic review. It lists the items that should be included in your report. These are reporting standards. They aren't about how to do your systematic review; we've already covered that with the standards for conducting systematic reviews. This is about how to report it.

One of the most important components of reporting your systematic review is a flow diagram of how you examined the different studies and the data from those studies. This is comparable to the flow chart for a clinical trial, which shows how individuals passed through the trial and how the data were collected. In this case, however, it shows how you identified the studies that were included in your systematic review and what was analyzed in the end. You start with how many records you identified through database searching and other sources, then how many you removed because they weren't eligible or were duplicates, and you end up with a certain number of studies for each of your meta-analyses. You may conduct more than one meta-analysis per systematic review. MOOSE is very similar; it was published in 2000 and is being updated now.
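The counts in a PRISMA flow diagram are simple running subtractions from records identified down to studies included. A minimal sketch of that bookkeeping, using made-up illustrative numbers (the function name and stages are a simplification of the full PRISMA 2020 diagram):

```python
def prisma_flow(identified_db, identified_other, duplicates,
                excluded_screening, excluded_fulltext):
    """Return the record count remaining at each PRISMA stage.

    Hypothetical, simplified stages: identification -> deduplication
    -> title/abstract screening -> full-text eligibility -> included.
    """
    records = identified_db + identified_other
    after_dedup = records - duplicates
    after_screening = after_dedup - excluded_screening
    included = after_screening - excluded_fulltext
    return {
        "records_identified": records,
        "after_duplicates_removed": after_dedup,
        "after_screening": after_screening,
        "studies_included": included,
    }

# Illustrative numbers only
flow = prisma_flow(identified_db=480, identified_other=20,
                   duplicates=100, excluded_screening=320,
                   excluded_fulltext=50)
print(flow["studies_included"])  # 30
```

Each excluded count should be reported alongside the reasons for exclusion; the diagram is only as useful as that audit trail.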
It presents roughly a similar outline of how to report a systematic review and meta-analysis of observational studies in epidemiology. By the way, in the old days we referred to systematic reviews as meta-analyses, instead of separating the two concepts. So in some of the older literature you will see systematic reviews referred to as meta-analyses, even though they don't always contain a quantitative synthesis.

The final thing I wanted to talk about is something called GRADE. GRADE is how you summarize the body of evidence. Now, "body of evidence" is a term that's often used incorrectly. People often use it to refer to the meta-analysis or the systematic review of the individual studies. GRADE comes after that. GRADE was developed because physicians wanted to know: all right, now that you've done the systematic review and meta-analysis, what do you think? Should we rely on this? So let's not just do the systematic review and show, say, no evidence of an effect. We also want to know how good the studies were in general. Is this reliable? Should more studies be done? Where are we? It's a grading of the body of evidence, and it's not the same as the systematic review or meta-analysis.

So how good is the evidence? That's the quality of the evidence. You would give a lower grade if there was a risk of bias in the evidence. You would give a higher grade if the effect size was larger, if there was a dose-response effect, if the effect seemed plausible, and so forth. This grading is done somewhere between the systematic review and, let's say, a clinical practice guideline. It isn't necessarily done by the systematic reviewer; it could be done by the guideline producers instead, so you won't always see it as part of a systematic review. But I wanted to mention it because you will often hear "summarizing the body of evidence" misused in your travels.
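The rating-down/rating-up idea behind GRADE can be sketched as movement along an ordered certainty scale. This is an illustrative simplification, not the official GRADE algorithm; the level names follow common GRADE usage, but the numeric clamping and function name are assumptions for the sketch:

```python
# Ordered certainty levels, lowest to highest, as commonly used in GRADE.
LEVELS = ["very low", "low", "moderate", "high"]

def grade_certainty(start_level, rate_down=0, rate_up=0):
    """Move up or down the certainty scale, clamped to the valid range.

    rate_down: total levels subtracted (e.g. for risk of bias, imprecision).
    rate_up: total levels added (e.g. for a large effect, dose-response).
    """
    idx = LEVELS.index(start_level) - rate_down + rate_up
    idx = max(0, min(idx, len(LEVELS) - 1))
    return LEVELS[idx]

# Evidence from randomized trials typically starts at "high";
# rating down one level for risk of bias yields "moderate".
print(grade_certainty("high", rate_down=1))  # moderate
```

The point of the sketch is only that GRADE is a judgment applied to the whole body of evidence, layered on top of (not replacing) the systematic review itself.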
In summary, we've talked about three areas of meta-bias in doing your systematic review: selection bias, which is largely seen as reporting biases but also includes inclusion bias; information bias, which has to do with how the data for the systematic review are identified and extracted; and bias in the analysis itself. That last one is really tricky. We just touched on it, and I think not a lot is known about it compared with, for example, reporting biases. We talked about having standards for minimizing meta-bias, which include doing a thorough search, watching out for information bias, and considering the impact of the methods of analysis. This is all important to you as you do your systematic review and go forward. I'm not just presenting theory here. I'm presenting some theory, but also advice about how to conduct your own systematic review. The methods are not set in stone; you have to decide for yourself which methods you are and are not able to apply in the short eight weeks that we have. But you should know when something is an issue, and it should be part of your discussion section: for example, if you didn't have time to search more than three databases, you should say so. That's all for this lecture, and thanks for listening.