Now that you have trained your model, you will use it to predict labels for new data points. For example, given a new tweet, your model will say whether that tweet is positive or negative. In doing so, you want to analyze whether your model generalizes well or not. In this video, we'll show you how to do that, and specifically, how to compute the accuracy of your model. Let's take a look at how you can do this.

For this, you will need X_val and Y_val, the data that was set aside during training, also known as the validation set, and theta, the set of optimal parameters that you got from training on your data. First, you will compute the sigmoid function for X_val with parameters theta. Then you will check whether each value of h(X, theta) is greater than or equal to a threshold value, often set to 0.5. For example, if h(X, theta) is equal to the following vector, 0.3, 0.8, 0.5, and so on, up to the number of examples in your validation set, you're going to check whether each of its components is greater than or equal to 0.5. Is 0.3 greater than or equal to 0.5? No, so your first prediction is 0. Is 0.8 greater than or equal to 0.5? Yes, so your prediction for the second example is 1. Is 0.5 greater than or equal to 0.5? Yes, so your third prediction is 1, and so on. At the end, you will have a vector populated with zeros and ones, indicating predicted negative and positive examples, respectively.

After building the predictions vector, you can compute the accuracy of your model over the validation set. To do so, you will compare the prediction you made for each observation with its true label from the validation data. If the values are equal, your prediction is correct and you'll get a value of 1, and 0 otherwise. For instance, if your first prediction was correct, with your prediction and your label both equal to 0, your comparison vector will have a value of 1 in the first position. Conversely, if your second prediction wasn't correct, because your prediction and label disagree, your comparison vector will have a value of 0 in the second position, and so on. After you have compared every prediction with the true labels of your validation set, you can get the total number of times your predictions were correct by summing up the vector of comparisons. Finally, you'll divide that number by the total number m of observations in your validation set.

This metric estimates how often your logistic regression model will make correct predictions on unseen data. So if your accuracy is equal to 0.5, it means that 50 percent of the time, your model is expected to predict correctly. For instance, if your Y_val and prediction vectors for five observations look like this, you'll compare each of their values and determine whether they match. After that, you'll have a comparison vector with a single 0 in the third position, where the prediction and the label disagree. Next, you sum the number of times your predictions were right and divide that number by the total number of observations in your validation set. In this example, you get an accuracy of 80 percent.

Congratulations on finishing the first week of this specialization. You learned many concepts this week. First, you learned how to preprocess text. Then you learned how to extract features from that text, and how to use those extracted features to train a model.
Then you learned how to test your model. In this week's programming exercise, you'll get a chance to implement all of the concepts we spoke about, so feel free to go ahead and do it. There's also an optional video at the end of this week that covers the intuition behind the cost function for logistic regression. If you don't want to watch that video, feel free to move on to next week, where you will learn about a new classification algorithm known as naive Bayes.
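As a concrete recap, here is a minimal sketch in Python (using numpy) of the accuracy computation described in this video. The function name test_logistic_regression and the exact array shapes are assumptions made for illustration, not the course's reference implementation, and the five-example vectors at the bottom are made-up values chosen to disagree only in the third position, matching the 80 percent example above.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

def test_logistic_regression(X_val, Y_val, theta):
    """Estimate the accuracy of a trained logistic regression model on held-out data.

    X_val : (m, n) matrix of validation features.
    Y_val : (m, 1) column vector of true labels (0 = negative, 1 = positive).
    theta : (n, 1) column vector of parameters obtained from training.
    """
    # Step 1: compute h(X_val, theta) = sigmoid(X_val @ theta), one value per example.
    h = sigmoid(np.dot(X_val, theta))

    # Step 2: threshold at 0.5 to turn each probability into a 0/1 prediction.
    y_hat = (h >= 0.5).astype(float)

    # Step 3: compare predictions with true labels (1 where they agree, 0 otherwise),
    # sum the agreements, and divide by the number of validation examples m.
    accuracy = np.sum(y_hat == Y_val) / Y_val.shape[0]
    return accuracy

# Illustrative five-observation example: predictions and labels agree everywhere
# except the third position, so the accuracy is 4/5 = 0.8 (80 percent).
Y_val = np.array([[0], [1], [1], [0], [1]])
y_hat = np.array([[0], [1], [0], [0], [1]])
print(np.sum(y_hat == Y_val) / Y_val.shape[0])  # 0.8
```

Note that the comparison y_hat == Y_val already produces the vector of ones and zeros described in the video, so summing it counts the correct predictions directly.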