Welcome back to this section of the lecture, where we will talk about types of data and effect measures. This is not intended to be a new lecture on types of data or effect measures, since you have learned all of this in your Biostatistics and Epidemiology courses. But I will emphasize the elements that are relevant to systematic review and meta-analysis.

What is the effect? Well, in systematic reviews, we may be looking at various types of effect: the effect of health care interventions, which is the most commonly encountered one; the association between an exposure and an outcome; or prevalence or incidence. For now, let's consider the comparison of two interventions or two groups. This will usually be done using a difference or a ratio. So, for example, if we're comparing one intervention versus a surgical intervention in randomized controlled trials, we will typically use a difference or a ratio to quantify the effect of the intervention. These are often called effect measures for randomized controlled trials, or measures of association for observational studies. They are ways to describe the outcomes of participants treated with different interventions or exposed to different levels of risk factors. Choosing the correct or appropriate measure of association to estimate that quantity really depends on the type of data you have.

Here's a list of the types of data we typically encounter in human research. We may have dichotomous data, where there are only two categories: alive or dead, event or non-event, experiencing a heart attack or no heart attack, for example. Or we may have continuous data, where there is no gap between two levels: for example, blood pressure, height, weight, or vision. You can measure vision in different ways, and I will show you an example. We may have ordinal data, with ordered categories, like a Likert scale: none, mild, moderate, severe. We can use counts for infrequent events: for example, the number of strokes or the number of adverse events. And we may also have time-to-event data: for example, survival time, time to cancer remission, or time to death.

For any clinical research, you may encounter all these different types of data in the results. Here is a table from a research paper that summarizes the efficacy outcomes for one particular study. This study compares two interventions, PDT alone versus [INAUDIBLE] plus PDT, for patients with age-related macular degeneration. If we focus on the first row of the data, which is visual acuity at 12 months, the authors presented visual acuity in several different ways. The first way, loss of fewer than 15 letters from baseline, is a binary or dichotomous type of data, because a patient either lost fewer than 15 letters or lost 15 or more letters from baseline. So we consider that dichotomous data. But visual acuity can also be expressed as continuous data. Look at the third row of the table, the number of letters. Remember the vision chart, where the optometrist counts how many letters you can read? We can treat the number of letters as a continuous measurement. So here, for the PDT alone group, the change from baseline was −8.2 ± 16.3, where 16.3 is the standard deviation of the change from baseline. So, keep looking at this table.
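To make the dichotomous-versus-continuous distinction concrete, here is a minimal sketch in Python. The letter counts are made-up illustrative values, not data from the trial:

```python
# Minimal sketch: the same visual acuity outcome expressed two ways.
# Changes in letters read from baseline (continuous data); these values
# are invented for illustration, not taken from the trial.
letters_change = [-20, -3, 5, -18, 0, -16]

# Dichotomous version of the same outcome: lost 15 or more letters?
lost_15_or_more = [change <= -15 for change in letters_change]

print(sum(lost_15_or_more), "of", len(letters_change), "patients lost >= 15 letters")
```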
If we focus on the last section at the bottom of the table, repeated PDT, the number of repeated treatments takes the values one, two, three, or four. So, what type of data is that? We consider it ordinal data, because there is a rank ordering to these numbers of treatments. So, for any research paper or research study, you may encounter different types of data, even within the same study.

For dichotomous data, we can use the risk ratio, odds ratio, risk difference, or number needed to treat to measure the treatment effect or the association. Dichotomous data are typically presented in a two-by-two table. Here we have the test intervention and the comparison intervention, where a and c represent the numbers of events in the two intervention groups. The marginal total a + b is the total number of participants in the test intervention group, which we denote Nt, and you can get the total number in the comparison intervention group, Nc, by adding c and d.

The risk ratio is the risk on treatment divided by the risk on control, where the risk (a proportion or probability, depending on the study design) is the number of events divided by the marginal total; for the test intervention group, that is a/Nt. So the risk ratio is RR = (a/Nt) / (c/Nc). The odds ratio is the odds on treatment divided by the odds on control: OR = (a/b) / (c/d). The risk difference is the risk on treatment minus the risk on control: RD = a/Nt − c/Nc. And the number needed to treat is the inverse of the risk difference: NNT = 1/RD. These are the measures we typically use for dichotomous data, and this is really not new to you, because you have used them many times in other coursework.

Some features of the risk ratio: it is easy to interpret and easy to explain, because you can say the probability of having the event in the treatment group compared to the control group is such-and-such. But it is not typically the effect measure reported in multivariable analyses. For example, if you run a multiple logistic regression, you are more likely to get an odds ratio out of it rather than a risk ratio. And if there are no events in the control group, the denominator of the risk ratio is zero and you cannot calculate it.

What about the odds ratio? Well, it has the best-developed statistical methodology, particularly for adjusting for covariates. And you can calculate an odds ratio from study designs that do not allow calculation of absolute risks or rates. For example, in a case-control study, where by design you select how many controls to include, you cannot calculate absolute risks, but you can still use the odds ratio to measure the strength of association. It can be difficult to interpret, though, particularly when the baseline rate is above 20%, and I will come back to why that is a problem.

The risk ratio and the odds ratio are not the same. If you remember the formulas I showed you, one uses the marginal total in the denominator, and the other uses the number of non-events in the denominator. They are calculated using different formulas, so even for the same two-by-two table you will get two different numbers. So let's say that, based on a two-by-two table, you get a risk ratio of 0.8. You will find a different odds ratio, because the odds ratio is calculated using a different formula.
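Here is a minimal sketch, in Python, of these four measures computed from the cells of a two-by-two table as defined above; the example counts are hypothetical:

```python
# Minimal sketch: effect measures from a 2x2 table.
# a, b = events / non-events in the test intervention group;
# c, d = events / non-events in the comparison intervention group.

def effect_measures(a, b, c, d):
    nt, nc = a + b, c + d              # marginal totals Nt and Nc
    risk_t, risk_c = a / nt, c / nc    # risk in each group
    return {
        "RR": risk_t / risk_c,            # risk ratio
        "OR": (a / b) / (c / d),          # odds ratio, equivalently (a*d)/(b*c)
        "RD": risk_t - risk_c,            # risk difference
        "NNT": 1 / abs(risk_t - risk_c),  # number needed to treat
    }

# Hypothetical example: 20/100 events on treatment, 30/100 on control.
print(effect_measures(20, 80, 30, 70))
# RR is about 0.67, OR about 0.58, RD = -0.10, NNT = 10.
# Note the odds ratio is further from 1 than the risk ratio.
```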
Both the risk ratio and the odds ratio are entirely valid ways of describing an intervention effect. Problems arise when the odds ratio is misinterpreted as a risk ratio. So here is a question for you: based on a two-by-two table, if I get a risk ratio of 0.8, what would the odds ratio be? Is the odds ratio more likely to be less than 0.8 or greater than 0.8? Well, as a matter of fact, the odds ratio is always more extreme than the risk ratio, meaning further away from the null value of one. So if the risk ratio is 0.8, the odds ratio will be less than 0.8, farther from the null value of one.

Why does that matter? If you interpret an odds ratio as an odds ratio, you are fine. But if you misinterpret an odds ratio as a risk ratio, you will overestimate the intervention effect. For example, if the risk ratio is 0.8, I can say the intervention reduced the risk by 20%. But you cannot use the same language to describe an odds ratio based on the same study, because the odds ratio will be something less than 0.8, say 0.7. If you then say the risk is reduced by 30%, you are overestimating the intervention effect. It is not a problem if you interpret the odds ratio as an odds ratio: you would say the odds are reduced by 30%.

What if some studies report risk ratios and others report odds ratios? This is a problem you will encounter in your meta-analysis. What to do? Well, you can always analyze the studies by the measure of association or measure of effect that they used. Here is an example from a previous student's work. In this meta-analysis, they looked at the risk of ischemic stroke with any migraine, and they did the analysis by measure of effect. The first three studies reported risk ratios, and they combined those three studies using the risk ratio as the measure of effect. Then they had a set of studies that reported odds ratios, and they did a separate meta-analysis of those. They also had studies reporting hazard ratios and incidence rate ratios. So the key is that you analyze the studies by the effect measure reported in the study; you don't have to combine them into one diamond. In this forest plot, they actually have four diamonds, one for each of the effect measures reported in the studies.

Sometimes we will have studies with zero cell counts. What does that mean? For example, if we are looking at rare events, say an adverse event, or stroke at 24 months for a behavioral intervention, some intervention groups in some studies may have no events at all. That is a problem for the analysis. Most meta-analysis software automatically adds a fixed value, typically 0.5, to the cells of any table with a zero cell (a small sketch of this correction appears at the end of this passage). This may bias the study estimates toward no difference and overestimate the variance. There are non-fixed zero-cell corrections available, and if you run into this problem in your own analysis, come to us and we will show you how to handle it. But if there are no events in either arm of a study, you should exclude that study from the meta-analysis or use the risk difference instead. So when there are zero cell counts in a study, you will have a problem if you are working with ratios, risk ratios or odds ratios, but you won't have a problem with the risk difference.

The risk difference is directly related to the number who will benefit from the therapy, and it is fairly easy to interpret. You can calculate a risk difference from any study design, even when there are no events in either group: in the zero-event example, you can still subtract zero from a number, but you cannot divide by zero, as the risk ratio and odds ratio require. On the other hand, the risk difference is less often constant across studies, particularly when there is substantial variation in baseline risk. It is also not often reported in studies, and sometimes you cannot even calculate it from a study, particularly if multivariable adjustment has been done.

What do I mean by the measure being constant? In this hypothetical example, we have three studies, and the outcome is mortality. In the first study, mortality is 20% in the treatment group and 30% in the control group. If you subtract the control-group risk from the treatment-group risk, you get a risk difference of minus 10 percentage points. The baseline risk, meaning the mortality rate in the control group, varies across the three studies: 30% in the first, 17% in the second, and 62% in the third. Yet the risk difference is relatively constant, about minus 10 percentage points in all three studies. However, if you calculate the risk ratios or odds ratios, they are quite heterogeneous. So when the baseline risk varies, consistency in one measure of effect means variability in another: although the risk differences are quite consistent in this example, the risk ratios and odds ratios are inconsistent. The inverse is true as well: if the risk ratio or odds ratio is constant, the risk difference may not be.
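To make this three-study example concrete, here is a minimal sketch. The control-group risks are the ones just described; the treatment-group risks in the second and third studies are assumed values chosen to match the stated risk difference of about minus 10 percentage points:

```python
# Sketch of the hypothetical three-study example: a roughly constant
# risk difference alongside heterogeneous risk ratios and odds ratios.
# (treatment risk, control risk): control risks are from the example above;
# treatment risks in studies 2 and 3 are assumed to give an RD near -0.10.
studies = [(0.20, 0.30), (0.07, 0.17), (0.52, 0.62)]

for risk_t, risk_c in studies:
    rd = risk_t - risk_c
    rr = risk_t / risk_c
    odds_ratio = (risk_t / (1 - risk_t)) / (risk_c / (1 - risk_c))
    print(f"RD = {rd:+.2f}  RR = {rr:.2f}  OR = {odds_ratio:.2f}")

# RD is -0.10 in every study, while RR ranges from about 0.41 to 0.84.
```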
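And here is the zero-cell correction mentioned earlier, sketched minimally: if any cell of the two-by-two table is zero, add 0.5 to all four cells before computing a ratio measure. This mirrors the fixed correction that most meta-analysis software applies by default, though defaults vary by package:

```python
# Sketch of the fixed 0.5 continuity correction for zero cells.
def corrected_cells(a, b, c, d, correction=0.5):
    """Add the correction to every cell if any cell is zero."""
    if 0 in (a, b, c, d):
        return tuple(x + correction for x in (a, b, c, d))
    return (a, b, c, d)

# Hypothetical study with no events in the comparison arm.
a, b, c, d = corrected_cells(4, 96, 0, 100)
rr = (a / (a + b)) / (c / (c + d))  # computable now; division by zero otherwise
print(round(rr, 1))                 # about 9.0 -- interpret with caution
```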
For continuous data, there are two measures you can use. One is the mean difference, the difference in means, which works when all studies use the same scale. For example, if all studies measure blood pressure on the same scale, you can take the difference in means to quantify the treatment effect. But for quality-of-life measures, for example, different studies may use different scales or instruments: the Short Form 36 (SF-36) ranges from 0 to 100, but the EuroQol instrument has a different range. Another example is different instruments for measuring IQ: although they measure the same underlying construct or biological phenomenon, they have different ranges. In that situation, you can use the standardized mean difference, which is the difference in means between groups divided by the standard deviation of the outcome among the participants. We can look at examples on the next two slides.

In the first example, mean difference, or difference in means, is used to quantify the treatment effect for continuous data. The comparison is multifocal lenses versus single-vision lenses, and the outcome is the change in refractive error from baseline to one year. All these studies used the same measurement, or the same scale, for the outcome, and that is why the authors decided to use the mean difference as the measure of association to gauge the treatment effect.

The next example is the substitution of doctors by nurses in primary care, where the outcome is patient satisfaction. If we look at the left-hand side of the forest plot, you will see the values from each individual study. In the first study, the mean is 77.9 for the intervention group and 74.05 for the control group. But if you move on to the third study, the mean is 4.4 for the intervention group and 4.22 for the control group. So although all these studies measure patient satisfaction, they use different instruments or different scales: in the first two studies, satisfaction ranges from 0 to 100, but in the third study it ranges from 0 to 5. Although they measure the same underlying construct, the scales differ. In that case the mean difference, or difference in means, is not helpful, because the values and units differ, so we use the standardized mean difference to measure the association. Here the standardized mean difference is 0.28 for the three studies combined, and the way you interpret it is that the result favors the intervention by 0.28 standard deviations. There is an interpretation problem if you try to explain that to a patient, so we stick to the mean difference as much as possible and only use the standardized mean difference when the instruments differ.
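To show what the standardized mean difference computes, here is a minimal sketch using the pooled standard deviation, one common variant (Cohen's d); the group summaries are hypothetical:

```python
import math

# Minimal sketch: standardized mean difference (Cohen's d variant),
# the difference in means divided by the pooled standard deviation.
def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical satisfaction scores on two different scales.
print(round(smd(77.9, 10.0, 50, 74.05, 10.0, 50), 2))  # SMD on a 0-100 scale
print(round(smd(4.40, 0.60, 50, 4.22, 0.60, 50), 2))   # SMD on a 0-5 scale
# Both results are in standard-deviation units, so they are comparable
# even though the original scales are not.
```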
There are other types of data. For example, counts. How would you analyze counts? Well, you can dichotomize counts as any event versus none, treat them as continuous data for common events, or analyze them as rates, for example using a Poisson model. Counts and short ordinal scales are not often encountered in these analyses; if you have this problem, come and talk to us and we will help you figure out the most appropriate way to analyze the data.

And lastly, for time-to-event data, depending on how the authors reported and analyzed the results, you can choose to dichotomize it, for example by breaking the time into periods. This requires the status of all patients in the study to be known at a fixed time point. For example, if all patients were followed for at least 12 months, and the proportion who developed the event before 12 months is known for both groups, then a two-by-two table can be constructed. That is what I mean by dichotomizing. If all studies used a hazard ratio or some survival analysis method to calculate the hazard or hazard ratio over the relevant time, you can always meta-analyze the hazard ratios (a short sketch of one common way to do this appears at the end of this transcript).

I am going to conclude by summarizing which measure to use. Again, the measure you use should convey the necessary, clinically meaningful information. You have to think about the measures that are useful for making clinical decisions: for example, if clinicians talk about reducing blood pressure by 20% or 15%, then dichotomized data might be more appropriate than reporting blood pressure as continuous data. The choice of the summary measure should also be suitable to the study design, statistically appropriate, and convenient to work with.

So, in this section we have reviewed the different types of data, and the measures of association and measures of effect that depend on the type of data you have. I hope this was a quick refresher of what you have learned in other courses. There will be issues that are specific to meta-analysis, and you can always come to us and we will help you figure them out. Thank you.
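As a closing sketch of the hazard-ratio meta-analysis mentioned above: one common approach is generic inverse-variance pooling of the log hazard ratios. This is a minimal fixed-effect version with hypothetical inputs, not a substitute for a meta-analysis package:

```python
import math

# Minimal sketch: fixed-effect inverse-variance pooling of log hazard ratios.
# Each study contributes a hazard ratio and the standard error of its log.
studies = [(0.75, 0.10), (0.85, 0.15), (0.70, 0.20)]  # hypothetical (HR, SE of log HR)

weights = [1 / se**2 for _, se in studies]            # inverse-variance weights
pooled_log_hr = sum(
    w * math.log(hr) for (hr, _), w in zip(studies, weights)
) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

hr = math.exp(pooled_log_hr)
ci = (math.exp(pooled_log_hr - 1.96 * pooled_se),
      math.exp(pooled_log_hr + 1.96 * pooled_se))
print(f"Pooled HR = {hr:.2f}, 95% CI {ci[0]:.2f} to {ci[1]:.2f}")
```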