In the previous video, you saw how to build a convolutional neural network that classified horses versus humans. When you were done, you did a few tests using images that you downloaded from the web. In this video, you'll see how you can build validation into the training loop by specifying a set of validation images, and then have TensorFlow do the heavy lifting of measuring its effectiveness with that set. As before, we download the dataset, but now we'll also download a separate validation dataset. We'll unzip them into two separate folders: one for training, one for validation. We'll create some variables that point to our training and validation subdirectories, and we can check out the filenames. Remember that the filenames may not always be reliable for labels. For example, here the validation horse images aren't named as such, while the human ones are. We can also do a quick check on whether we got all the data, and it looks good, so we can proceed. We can display some of the training images as we did before, and then let's go straight to our model. Here we import TensorFlow, and here we define the layers in our model. It's exactly the same as last time. We'll then print the summary of our model, and you can see that it hasn't changed either. Then we'll compile the model with the same parameters as before. Now, here's where we can make some changes. As well as an image generator for the training data, we now create a second one for the validation data. It's pretty much the same flow: we create a validation generator as an instance of ImageDataGenerator, rescale to normalize the images, and then point it at the validation directory. When we run it, we see that it picks up the images and the classes from that directory. So now let's train the network. Note the extra parameters to let it know about the validation data.
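The flow above can be sketched roughly as follows. This is a minimal stand-in, not the course notebook: the directory names, image size, and model layers are illustrative, and for self-containment it synthesizes a few random images in place of the real horses-or-humans download.

```python
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Stand-in for the real download: synthesize a few random images in the
# same directory layout (one subfolder per class) described above.
def make_fake_dataset(root, n_per_class=8):
    for label in ('horses', 'humans'):
        d = os.path.join(root, label)
        os.makedirs(d, exist_ok=True)
        for i in range(n_per_class):
            img = (np.random.rand(64, 64, 3) * 255).astype('uint8')
            tf.keras.utils.save_img(os.path.join(d, f'{label}_{i}.png'), img)

make_fake_dataset('train_data')
make_fake_dataset('validation_data')

# One rescaling generator for the training data, and a second one for
# the validation data, each pointed at its own directory.
train_datagen = ImageDataGenerator(rescale=1/255)
validation_datagen = ImageDataGenerator(rescale=1/255)

train_generator = train_datagen.flow_from_directory(
    'train_data', target_size=(64, 64), batch_size=8, class_mode='binary')
validation_generator = validation_datagen.flow_from_directory(
    'validation_data', target_size=(64, 64), batch_size=8, class_mode='binary')

# A small stand-in for the course model: conv/pool stacks feeding a
# sigmoid output for binary classification.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

# The extra validation_data argument is what tells fit() to evaluate the
# validation set at the end of every epoch, reporting val_loss and
# val_accuracy alongside the training metrics.
history = model.fit(train_generator,
                    epochs=2,
                    validation_data=validation_generator,
                    verbose=0)
print(sorted(history.history.keys()))
```

Passing `validation_data=validation_generator` is the only change from the earlier training call; everything else in the loop is the same.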
Now, at the end of every epoch, as well as reporting the loss and accuracy on the training data, it also checks the validation set to give us loss and accuracy there. As the epochs progress, you should see them steadily increasing, with the validation accuracy being slightly less than the training accuracy. It should just take about another two minutes. Okay. Now that we've reached epoch 15, we can see that our accuracy is about 97 percent on the training data and about 85 percent on the validation set, and this is as expected. The validation set is data that the neural network hasn't previously seen, so you would expect it to perform a little worse on it. But let's try some more images, starting with this white horse. We can see that it was misclassified as a human. Okay, let's try this really cute one. We can see that it's correctly classified as a horse. Okay, let's try some people. Let's try this woman in a blue dress. This is a really interesting picture because she has her back turned, and her legs are obscured by the dress, but she's correctly classified as a human. Okay, here's a tricky one. To our eyes she's human, but will the wings confuse the neural network? And they do; she's mistaken for a horse. It's understandable though, particularly as the training set has a lot of white horses against a grassy background. How about this one? It has both a horse and a human in it, but it gets classified as a horse. We can see the dominant feature in the image is the horse, so it's not really surprising. Also, there are many white horses in the training set, so it might be matching on them. Okay, one last one. I couldn't resist this image as it's so adorable, and thankfully it's classified as a horse. So now we've seen training with a validation set, and we can get a good estimate of the classifier's accuracy by looking at its results on that set.
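Classifying a single downloaded image like the ones above follows a short, standard path. This is a hedged sketch, not the course notebook's upload cell: the model here is an untrained stand-in, and a random array stands in for a real photo (in practice you would load one with `tf.keras.utils.load_img`).

```python
import numpy as np
import tensorflow as tf

# Untrained stand-in for a trained horses-vs-humans classifier; the
# layers and 64x64 input size are illustrative assumptions.
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(16, (3, 3), activation='relu',
                           input_shape=(64, 64, 3)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

# In practice: img = tf.keras.utils.load_img(path, target_size=(64, 64))
# then x = tf.keras.utils.img_to_array(img) / 255.0.
x = np.random.rand(64, 64, 3)   # stand-in for a loaded, rescaled photo
x = np.expand_dims(x, axis=0)   # predict() expects a batch dimension

pred = model.predict(x, verbose=0)[0][0]

# flow_from_directory assigns class indices alphabetically, so with
# 'horses' and 'humans' subfolders, a sigmoid output above 0.5 means
# human and below 0.5 means horse.
label = 'human' if pred > 0.5 else 'horse'
print(label)
```

The threshold comparison is why the misclassifications above are all-or-nothing: a white horse against grass that scores just above 0.5 is simply reported as a human.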
Using these results, and understanding where and why some inferences fail, can help you figure out how to modify your training data to prevent errors like these. But let's switch gears in the next video, where we'll take a look at the impact of compacting your data to make training quicker.