Now, to conclude this part on survey design, I want to talk about the steps in survey design and some dos and don'ts and best practices. So let me lay it out: what are the typical steps? We'll go through each of these steps in more detail. Step 0, which is perhaps the most important one, is to make sure your results are generalizable to an appropriate population. What does that mean? Typically when you're doing a survey of maybe 1,000, 2,000, or 3,000 people, whatever the number might be, you're obviously interested in what those respondents have to say. But what you're more interested in is how the thing you're capturing, for example customer satisfaction, generalizes to your entire customer base. That's what I mean by Step 0, and we'll talk a little bit about how you do that. Step 1: develop a detailed listing of what you want to capture. Then write the draft questions, design the flow and the layout appropriately, evaluate, test, retest, pilot test, and redraft. This is a circular process: you first set the survey up, do some pilot testing, go back to the drawing board, clean up the questions that come up, test again, and so on until you're comfortable implementing that survey. Step 6 is perhaps very important from an implementation point of view: if you're working within your company, or if you're a consultant working for another company, get approval from all the parties. This is extremely important, of course, if you're going to implement the survey with customers. And finally, you do one more check of whether the survey is good, look at the final copy, and implement the survey. So let me go into more detail about each of these steps. As I said, Step 0 is perhaps the most important one. It ensures that the data you're collecting from the survey will be useful.
There are a couple of things you ought to think about. First, that the population has been defined correctly. Second, that the sample is representative of the population. So for instance, suppose you're a company that targets, let's say, 25-to-30-year-old males. You want to make sure you define the population correctly, so that it's indeed your target market, and then that the sample you've collected for the survey is representative of that target market. Why? Because then you can be more confident that the data you collect from the survey is indeed representative of what might happen in your target market. Then you'll, of course, ensure that the respondents selected to be interviewed are available, understand the questions, and have the appropriate knowledge to answer them. And of course you're motivating them with some incentives to provide information to the survey. Here the biggest issue, of course, is response rates. You might not be surprised, or perhaps you are: typical response rates for surveys are about 5%. What that means is, of every 100 people you send the survey to, only 5 will respond. So clearly that's very low, and you should think about appropriate ways to incentivize respondents to actually respond to your survey. There are a couple of ways to think about this. One is: how do you handle non-response? First, you want to collect as much data as possible on the people who don't respond. Why? Because you want to make sure that the people who respond and the people who don't are not systematically different from each other. If they are, it might not be valid to generalize the results from your sample to the population. Let me give you an example. Suppose you're interested in capturing people's responses to the prices you're charging for your product, and let's say there's a systematic difference between people who respond and people who don't in terms of their income.
That's clearly not very good, because then what you're capturing is not a representation of the true target market. And why is income an important metric? Because what you're interested in is how people respond to the price you charge, and income would clearly be a very important indicator of how price sensitive people are. So if there is a difference in the income levels of people who respond and people who don't, that clearly suggests there is a bias in your sample in terms of its representativeness of the population. What can one do? Well, one can follow up with non-respondents and try to convert them into responding. Typically, you send them reminders. On the pro side, you get more people to respond. What's the con? The con is that they may give you bad data after repeated requests. So what is typically done? You compare each wave of respondents to make sure they are similar. For instance, you might have a first wave, the initial set of responders; let's say about 50% of people respond, and you collect that as wave one. Then after a week or two, you send reminders to the people who have not responded; let's say 20% to 30% of them respond. Rather than just aggregating all the data, you want to make sure that the 50% who responded in week 1 are not systematically different from the 30% who responded in week 3. If they aren't, you can combine all the data. If they are, I would advise you to look at the data separately. Now let's go back to the other steps. Step 1: what bits of information do you collect? Here you have to make sure you translate whatever research objectives you have into information requirements. One way to think about it is in terms of question relevancy: what would you do with the answer if you knew it? So kind of work backwards.
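To make the wave-comparison idea from a moment ago concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the income figures are made-up illustration data, and the |t| > 2 cutoff is just a rough rule of thumb standing in for a proper significance test.

```python
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    std_err = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_a - mean_b) / std_err

# Hypothetical incomes (in $000s) for illustration only
wave1 = [52, 61, 48, 55, 70, 58, 63, 49, 66, 57]  # initial respondents
wave3 = [54, 59, 50, 62, 57, 65, 51, 60]          # responded after a reminder

t = welch_t(wave1, wave3)
# As a rough rule of thumb, |t| > 2 signals a systematic difference
if abs(t) > 2:
    print("Waves look different; analyze them separately.")
else:
    print("No clear difference; pooling the waves looks reasonable.")
```

With a check like this in hand, the decision to pool waves or keep them separate becomes mechanical rather than a judgment call.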
So think about the ideal situation in which you knew the answer, then think about how your managerial decisions would be different. Once you put that perspective in mind, it becomes easy to understand how you should ask those questions. Then the question is: how do you format those questions? There are two big ways of thinking about that. One is what I will call open-response questions: why do you shop at Genuardi's, for example. The other is closed-response questions: how often do you shop at Genuardi's, on a scale from "very often" to "not at all". What you notice here is that open-response questions basically ask respondents to come up with their own answers; they're not really given any options per se. On the other hand, in a closed-response question you typically have a particular question in mind and you provide a scale, for example from "very often" to "not at all" in this case. Let's talk a little bit about the advantages and disadvantages of open-ended and closed-ended questions. In terms of advantages, open-ended questions have quite a few. First, respondents can give a general reaction to questions such as "why do you say brand X is better?" Responses are given in their own words, in their own language, so it becomes easier for people to respond. It can also help you interpret clues in the data. For example, if people keep mentioning a particular color, you can follow up to ask why that particular color was important. And it might also suggest what other questions to ask. Lots of advantages. At the same time, there are some drawbacks as well. First of all, open-ended questions are not very good for self-administered surveys. Why? Because survey fatigue might come in. If respondents are going through a lot of questions and you keep asking them open-ended ones, after a little while they might experience some fatigue and not be as articulate. And of course, all the answers you get depend on how articulate the person is.
If they understand what they're thinking about and they're very articulate, open-ended questions are great. Why? Because they're responding in their own language and their own verbiage. If they're not as articulate, you don't get a lot of information. And finally, in terms of analyzing the data, post-coding can be quite tedious. Because it's open-ended data in the respondents' own verbiage, it might be very difficult to work out all the different things that one has to code. So now let's flip back and talk about closed-ended questions. With closed-ended questions, as I said before, if you think back to the example "how often do you shop at Genuardi's", the respondent is provided with predetermined descriptions and selects one or more of them. The advantages here are quite clear: it's easy to use, it's less threatening for the respondents, it's very simple to code, and it's much cheaper to administer. On the flip side, it requires a lot of pre-testing. You want to make sure that the questions you're asking and the categories you're giving people to select from are clear, that respondents understand what the options mean; and it presumes that the list of responses is complete. It's a lot more effort up front, but coding becomes much easier. So again, when you think about what type of questions to ask, keep the end goal in mind. If the end goal is learning more about what current consumers are thinking, open-ended might be great. On the other hand, if you have a very good set of questions you want to ask and a good idea of what responses you're looking for, closed-ended might be better. In terms of best practice, you often do exploratory research using open-ended questions and use the results as the codes in a quantitative survey with closed-ended questions. This will give you a sense of how one would design a survey.
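As a sketch of how that best practice might look in code, here is a toy Python example of post-coding open-ended pilot answers with a keyword codebook. The answers, the category names, and the keyword lists are all hypothetical; in a real project you would build the codebook by reading the pilot responses carefully.

```python
# Hypothetical open-ended pilot answers to "Why do you shop at Genuardi's?"
answers = [
    "I shop there because the prices are low",
    "It's close to my house",
    "Great prices and friendly staff",
    "Convenient location on my way home",
]

# Codebook derived from reading the pilot responses (assumed categories)
codebook = {
    "price":    ["price", "cheap", "deal"],
    "location": ["close", "location", "near", "way home"],
    "service":  ["staff", "friendly", "service"],
}

def code_answer(text, codebook):
    """Return every category whose keywords appear in the answer."""
    text = text.lower()
    return [cat for cat, kws in codebook.items() if any(k in text for k in kws)]

# Tally how often each category comes up across the pilot
counts = {cat: 0 for cat in codebook}
for a in answers:
    for cat in code_answer(a, codebook):
        counts[cat] += 1
print(counts)
```

The resulting category counts suggest which response options the closed-ended version of the question should offer.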
Typically, you start with open-ended questions to get a sense of all the different issues that come up and what customers are thinking about, and then you design the closed-ended questions. Now let's move on to Step 3, drafting the questions. It's very, very important here to use simple, conventional language; I'll give you an example on the next slide. Avoid ambiguity, be as specific as possible, and don't write long questions. Start broad and then narrow down. Keep in mind that when respondents are answering surveys, they typically don't have a lot of time. So if you make the survey very long, ask long questions, or are very ambiguous, you might see a lot of dropouts: people who start a survey and then abandon it because it becomes very, very tedious to answer the questions. Here's an example of considering your target sample and using familiar language: "Did you notice any malfunction at the time of purchase?" versus "Did you notice anything wrong with it when you bought it?" What you see immediately is that the latter question is a little easier for a general audience to understand. The first question, with "malfunction", uses specific language, specific verbiage, that may be difficult for the common respondent to answer. So again, keep your respondent in mind, understand what they're comfortable with, and design the questions accordingly. And finally, think about sequencing and the layout guidelines. Here are some best practices. Open the survey with an easy, non-threatening question; you want people to ease into the survey. So for instance, questions about income or where people live might not be very good ones to start out with, because they ask for quite a lot of personal detail. If you do want to ask those questions, it's probably better to ask them later on, once respondents have eased into the survey.
The survey should have a smooth and logical flow, from very general to more specific, and there can be a lot of order biases; I'll give you an example in the next slide of what I mean by that. Another common problem is survey fatigue. What's a way around these kinds of biases? Randomize the ordering of your questions as much as possible. Let me give you an example of an order bias. Here are four cases. What I'm looking at is the percentage of respondents who said they were very much interested in buying a new product; that's what you see in the right column. In the first row, when no questions were asked before that question, 2.8% of people were interested in buying the product. When you first ask people only about the advantages of the product, that number jumps to 16.7%. Of course, if you only ask people about the disadvantages of the product, almost nobody wants to buy it. And finally, when you ask people about both advantages and disadvantages, it's a number between 2% and 16%, about 5%. What do we see here? How the questions are ordered, and where a question appears in the overall survey, is extremely important. So a typical rule of thumb is: if a question is important to you, make sure to randomize its position. Some people will get it in the beginning, some toward the end, some in the middle. What you want to make sure of is that there is no order bias: no matter where respondents get the question, you're getting a similar answer. So let me conclude with Steps 5 and 7, pre-testing and correcting problems. And keep in mind that Step 6, if you remember, was about making sure that all the parties involved in the survey have given their buy-in. It has little to do with the survey design per se, but everybody involved in the survey needs to be on board. Having said that, let's go back to Steps 5 and 7.
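The per-respondent randomization described above can be sketched as follows; the question list and the use of a respondent ID as the shuffle seed are hypothetical choices for illustration.

```python
import random

# Hypothetical question bank; the last entry is the key question we care about
questions = [
    "How often do you shop at Genuardi's?",
    "What do you like about the store?",
    "What could the store improve?",
    "How interested are you in the new product?",
]

def build_survey(questions, seed=None):
    """Return a per-respondent shuffled copy of the question list."""
    rng = random.Random(seed)
    order = questions[:]  # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Each respondent sees the key question in a different position on average,
# so any order bias washes out across the sample.
for respondent_id in range(3):
    print(respondent_id, build_survey(questions, seed=respondent_id))
```

When you analyze the data, you can then also group answers by the position the key question appeared in and check that the responses look similar across positions.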
For Steps 5 and 7, the idea is to make sure that before you implement the survey with your final target sample, you clean up the survey. Make sure all the questions you ask are meaningful and that the people answering the survey understand what the questions are. Design the questions with the respondent in mind: use verbiage, or language, that the respondent is familiar with; ask questions that go from more general to more specific; and always lead off with non-threatening questions so that the consumers or respondents answering the survey can ease into it. Once all of that is done, hopefully what you have as an end result is a survey that is easy to understand and easy to implement, that respondents are comfortable answering, and that gives you good data. So let me conclude with what we've done in this whole session on survey design. One takeaway is to treat a survey as a mini market-research project. What that means is you think carefully about every question and how to design it. What are the best questions to ask? Should they be open-ended or closed-ended? Well, it depends on your objective. What kind of language should be used? Again, it depends on the target market. So think carefully when you're designing a survey: each question is very important, and how the questions are ordered together is even more important. Also, if you are asking certain specific kinds of questions, for instance about customer satisfaction, there is a lot of research that has already been done on the best ways to ask them. What I mean by that is: search for and use proven questions. That way you don't have to do the heavy lifting; somebody else has already done it for you.
And finally, before you go ahead and implement the questionnaire, always pretest it. Maybe start out with a sample of 20 to 30 people to make sure they understand each of the questions and that the answers you're getting make sense, before you implement the survey with a much larger sample. So hopefully, with this part on survey design, you have a good understanding of the dos and don'ts of survey design and the things you should keep track of. You also have a good understanding of a particular type of question, the net promoter score, that is extremely popular in the marketplace. By good understanding I mean you know the pros, but you also know the cons. So what I want you to take away from all of this is that when you start designing a questionnaire or a survey, keep every question in mind, think about the end goal, and work backwards to check whether each question you're asking is going to help you answer that end goal.