Hello, everyone. This is Lea Drye again, and today we're going to cover analysis issues in clinical trials. The purpose of this lecture is to cover analysis principles, as opposed to statistical methods. The statistical methods that we use in clinical trials are really beyond the scope of this class, and you'll cover those in your biostatistics and epi classes. So in this lecture, we're going to present the intention-to-treat principle of analysis, and we're also going to talk about the common practice of doing multiple subgroup analyses.

In the first section, we're going to cover analyses by assigned treatment, which is also called intention to treat. As a reminder, we use randomization to allocate treatments because it protects us from selection bias, and that protection is the primary reason that the randomized trial is considered the gold standard of study design. So, in order to capitalize on the benefits of randomization, our analyses must be based on the randomized, assigned treatment.

Randomization assigns people to a treatment group, but in practice some people will not actually get any or some of their assigned treatment. Here is an example from the National Emphysema Treatment Trial, or NETT. In this figure we show what happens with so-called crossovers. Here I'm not talking about designed crossovers, which we discussed in our design lecture, but about undesigned crossovers, crossovers that are mostly out of the control of the investigators. Participants were assigned to receive either lung volume reduction surgery or the standard-of-care medical management. The potential participants went through a lengthy run-in process before randomization to try to screen out people who were likely to be non-adherent. But still, after randomization, some people who were assigned to the surgery group refused to have the surgery, and so what they actually received was the standard-of-care medical management. And some people who were randomized to receive medical management decided to pay their own money and seek the surgery outside of the trial. So we had some people randomized to surgery who received medical management, and some people randomized to medical management who actually received the surgery.

Another example, from the Alzheimer's Disease Anti-inflammatory Prevention Trial, or ADAPT, is what can happen when there's non-adherence during follow-up. This example is not necessarily a crossover immediately after randomization; here the non-adherence occurs at some point during follow-up. In ADAPT, the participants were randomized to receive either an NSAID or a placebo, and they had to take these medications for a long period of time, for years, actually. During follow-up, some people assigned to the NSAID group decided that they couldn't tolerate the side effects of the NSAID and stopped taking it. There was also a subgroup of people in the placebo group who began to take an NSAID on their own during the trial; for example, some of them received a prescription for an NSAID to treat their arthritis. So in ADAPT, we had some people assigned to the NSAID group who at some point during the trial began to receive no treatment, and some people assigned to the placebo group who at some point during the trial began to take an NSAID.

So the question is, what do we do about these unplanned crossovers and the treatment non-adherence when we analyze the results from the trial?
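To make the crossover problem concrete, here is a minimal sketch, not taken from NETT or ADAPT, of how you might tabulate assigned treatment against treatment actually received in a trial dataset; the column names and records are made up for illustration.

```python
import pandas as pd

# Hypothetical records: each row is one randomized participant.
trial = pd.DataFrame({
    "assigned": ["surgery", "surgery", "surgery", "medical", "medical", "medical"],
    "received": ["surgery", "medical", "surgery", "medical", "surgery", "medical"],
})

# Cross-tabulate assignment against receipt; the off-diagonal cells
# are the unplanned crossovers described above.
print(pd.crosstab(trial["assigned"], trial["received"], margins=True))
```

In a table like this, every participant still belongs to the row they were randomized to, and that is exactly the bookkeeping that the analysis principle described next relies on.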
The intention-to-treat philosophy tells us that we analyze data based purely on randomization. This means we ignore ineligibility, that is, people who were enrolled although they were not eligible. We ignore complete non-adherence, like the example we just saw from NETT, where some people received none of their assigned treatment. We also ignore treatment terminations and treatment switches, where people receive the assigned treatment for a while and then terminate it or switch to a different treatment. And we ignore partial adherence, which occurs in a lot of people: a lot of people will take some portion, but not all, of their prescribed treatment. This may sound illogical, but in principle it isn't.

To explain why it's not illogical, I'm going to talk about what would happen if we did not use ITT. We worked hard to make this an experimental design to avoid the problems of self-selection of treatment and confounding by indication. But if we analyze according to the treatment received, or adjust for adherence, then we are allowing these biases to creep back in. We don't know all the reasons for non-adherence, but what we do know is that it's frequently not random. A lot of times, treatment adherence is related to the treatment, because some treatments are harder to adhere to than others. And it may also be related to the outcome, because people who do not adhere may differ from those who do with respect to factors that relate to prognosis. For example, people who don't adhere might have more severe disease. So we use randomization to prevent bias, but it only does this if we analyze by the treatment assigned.

In a perfect world, everyone would take 100% of their assigned medicine 100% of the time, and what we'd be able to estimate with a clinical trial would be this theoretical notion of true efficacy: the effect of the treatment with perfect compliance. But we can't guarantee that a person will take the treatment as it's assigned, and in fact the right to individual autonomy is one of our basic principles of medical ethics. In the real world, outside of the trial, some people will also not take their treatment as it is prescribed. So instead of thinking of the clinical trial as a test of the treatment received, we can think of it as a test of a treatment policy, or a treatment prescription. In this way it is like real life, and it is a generalizable result, although it is likely a conservative estimate of the true efficacy with perfect adherence.

Another nice feature of intention-to-treat analyses is that they're clear and simple to declare up front; they're not post hoc. In clinical trials we try very hard to avoid changing our methods after we've seen the data, and adherence is something that's hard to quantify before you've seen the data, so including adherence in our primary analysis plan would be difficult to specify ahead of time. It may be fine to include some measure of adherence in a secondary analysis, but it's not appropriate for the primary analysis.

Finally, analyzing data according to the intention-to-treat philosophy requires that you collect data according to the intention-to-treat philosophy. What I mean by this is that all data are collected for people once they're randomized, regardless of their treatment adherence. So once you randomize a person, you follow your visit schedule and you collect all outcome assessments, even if they never take the first dose of their study medication.
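To see how analyzing by treatment received lets selection bias creep back in, here is a small simulation sketch. The mechanism and numbers are made-up assumptions, not data from NETT or ADAPT: sicker participants assigned to the active treatment are more likely to stop taking it, and disease severity also drives the outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

severity = rng.normal(0, 1, n)        # worse prognosis = higher severity
assigned = rng.integers(0, 2, n)      # randomized: 1 = active treatment, 0 = control

# Assumption: sicker people assigned to the active treatment are more likely to stop it.
stop_prob = np.where(assigned == 1, 1 / (1 + np.exp(-(severity - 1))), 0.0)
received = ((assigned == 1) & (rng.random(n) > stop_prob)).astype(int)

true_effect = -1.0                    # effect of actually taking the treatment (lower outcome is better)
outcome = 2.0 * severity + true_effect * received + rng.normal(0, 1, n)

# Intention to treat: compare by randomized assignment, ignoring non-adherence.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# "As treated": compare by treatment actually received.
as_treated = outcome[received == 1].mean() - outcome[received == 0].mean()

print(f"Effect with perfect adherence (set in the simulation): {true_effect:.2f}")
print(f"ITT estimate (diluted toward zero, but a fair comparison of randomized groups): {itt:.2f}")
print(f"As-treated estimate (distorted, because the untreated group is sicker): {as_treated:.2f}")
```

In this toy setup the ITT estimate is a conservative estimate of the effect under perfect adherence, which is exactly the treatment-policy interpretation described above, while the as-treated comparison is no longer protected by randomization.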
If you do your data collection in this way, you can do the intention-to-treat primary analysis, but you can also do other analyses, secondary analyses such as an analysis by treatment received, or an analysis of some subgroup of participants that you think are good adherers. However, the reverse is not true. If you collect your data only on people as long as they're compliant with their study medication, then you lose the ability to perform a true intention-to-treat analysis at analysis time. So just to reiterate: intention-to-treat analyses require an intention-to-treat data collection philosophy, and data collected according to intention to treat are more complete and allow you more options for analysis.

So what do we do with adherence? We think that treatment is associated with adherence, and we also think that adherence is probably associated with the outcome. For those of you who are epidemiologists, the temptation is to think, well, we need to adjust for adherence, because it's a confounder. And I'm going to argue that we should not adjust for adherence.

We know that people vary in their likelihood to adhere. Some people receive a treatment from their doctor and follow it to a T, exactly as it was prescribed. Other people receive a treatment plan and don't follow it at all. Most of us fall somewhere in between. So we can think of the likelihood to adhere as a quality that a person has, and because of randomization we would expect to have comparable groups at baseline for this quality of likelihood to adhere. So differences in adherence after baseline are caused by the treatment; they are part of the treatment effect. We don't usually adjust for factors that are in the causal pathway. In the previous figure, the arrow was pointing from treatment to adherence, so adherence is in the causal pathway between treatment and the outcome. To adjust for it would essentially be adjusting out part of the way in which the treatment affects the outcome, and since randomization has already given us comparable groups, by adjusting we may then introduce non-comparability.

In this figure, we have a different situation, where adherence is not related to treatment. We think treatment is related to the outcome, and adherence is related to the outcome, but since treatment is not related to adherence, adherence is not a confounder, so in this situation we do not need to adjust for adherence.

So just to recap: if there is an association between treatment and adherence, then the arrow depicting this association goes from treatment to adherence, meaning that adherence is in the causal pathway, so we don't adjust. And if there's no association between treatment and adherence, then adherence cannot be a confounder in the relationship between treatment and the outcome, so there is no need to adjust. Adjustment for adherence is not a method that we use in clinical trials for the primary analysis.

So hopefully I have convinced you that intention to treat is the appropriate analysis philosophy for the primary analysis, and that adjustment for adherence is not appropriate.
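As a companion to the earlier simulation, here is another sketch built on made-up assumptions rather than trial data: an unmeasured prognosis variable influences both adherence in the active arm and the outcome. Comparing a simple regression on assigned treatment with one that also "adjusts" for adherence illustrates why that adjustment is avoided in the primary analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

prognosis = rng.normal(0, 1, n)       # unmeasured; higher = worse prognosis
assigned = rng.integers(0, 2, n)      # randomized arm: 1 = active, 0 = placebo

# Assumption: in the active arm, adherence depends on prognosis (e.g., via side effects);
# in the placebo arm it does not.
adhere_prob = np.where(assigned == 1, 1 / (1 + np.exp(2.0 * prognosis)), 0.75)
adherent = (rng.random(n) < adhere_prob).astype(float)

# Outcome depends on prognosis and on the treatment actually taken
# (effect of -1.0 under full adherence).
outcome = 3.0 * prognosis - 1.0 * assigned * adherent + rng.normal(0, 1, n)

def ols(y, covariates):
    """Least-squares coefficients, with an intercept in the first position."""
    X = np.column_stack([np.ones(len(y))] + covariates)
    return np.linalg.lstsq(X, y, rcond=None)[0]

unadjusted = ols(outcome, [assigned])            # intention-to-treat style comparison
adjusted = ols(outcome, [assigned, adherent])    # "adjusting" for post-randomization adherence

print(f"Assigned-arm coefficient, unadjusted: {unadjusted[1]:.2f}")
print(f"Assigned-arm coefficient, adjusted for adherence: {adjusted[1]:.2f}")
```

In this setup the unadjusted coefficient is the randomized, treatment-policy comparison, while the adjusted coefficient drifts away from it: conditioning on adherence, which sits downstream of randomization and is tied to prognosis, re-introduces exactly the kind of non-comparability that randomization was meant to remove.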