In this unit we're going to talk about a very important component of every general-purpose computer, called the ALU, or Arithmetic Logic Unit. Back in 1945 the mathematician John von Neumann wrote a seminal paper in which he included a diagram, or a description, of how general-purpose computers can be built, and over the years this came to be known as the von Neumann architecture. Now, when you look at this diagram, you see that one key player is the central processing unit, and within this CPU there is yet another important piece: the ALU, or Arithmetic Logic Unit.

Now, if we abstract away all these details and focus only on the ALU, we can think of an abstraction which receives two multi-bit inputs. We call them input one and input two, and the third input is the function that has to be computed. The ALU computes the function, and out comes the result of this computation. Now, the function f is one out of a family of predetermined functions that, taken together, define what this ALU is capable of doing. Some of these functions are arithmetic and some of them are logical. For example, common computations that ALUs typically perform are integer addition, integer multiplication, and integer division, and there may be some logical operations like bitwise And, bitwise Or, and so on. So there's a really interesting question: when you set out to build an ALU, how much functionality do you want to build into this hardware device? Well, this is a classical hardware/software trade-off question, because if you choose not to implement something in hardware, you can always augment it later with software. So if for some reason you decide that the ALU will not include multiplication or division, presumably at a later point, when you build your software layer, you will complete this functionality with software.

All right, so far everything that we said is true for every ALU out there, on any computer. From now on we are going to focus on one specific example of an ALU. We call it the Hack ALU, because this is the ALU that will hum inside our Hack computer. This is the overall gate diagram, and as we see, the ALU has two 16-bit data inputs, which we call x and y, and it emits a single 16-bit output, which we call out. Which function to compute is determined by six control bits that have strange names like zx and nx and so on; we will explain these names in just a few minutes. Based on these six control bits, the ALU computes one out of the following 18 functions of interest. I call them functions of interest because in principle the ALU can compute many more functions, but we've decided to focus on these 18 functions only. Now, some of these functions are very simple, like the constants 0, 1, and minus 1, or x and y themselves, and other functions are more involved and interesting, like x plus y, x And y, and so on. As we see from the diagram, the ALU also computes two 1-bit control outputs, which are called zr and ng. The role of these two control bits, and the reason for these names, will become clear later in the unit.

So let's move on and focus on the outputs of the ALU on one side, and on the control bits that cause the ALU to compute these outputs on the other side; the connection between them is given by this truth table. This truth table gives you a complete functional specification of the ALU. That is, if you want to compute a certain function, you can look it up on the right-hand side of the table and read off the zeros and ones that correspond to this function. You enter those zeros and ones into the control bits and, boom, using some sort of black magic, the ALU will compute the desired function.
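To make this lookup concrete, here is a minimal sketch, in Python, of how a few rows of the truth table might be written down. The three functions shown are the ones used as examples later in this unit, the bit ordering is zx, nx, zy, ny, f, no, and the dictionary itself is just an illustration of the specification, not part of the chip.

```python
# A few rows of the Hack ALU truth table, written as a lookup from the
# desired function to its six control bits, in the order zx, nx, zy, ny, f, no.
CONTROL_BITS = {
    "x&y": "000000",  # leave x and y untouched, And them, don't negate the output
    "y-x": "000111",  # negate y, add it to x, then negate the result
    "!x":  "001101",  # zero y, negate it (all 1s), And with x, negate the result
}
```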
So let me illustrate how the ALU works in action, and in order to do it, I'm going to use our hardware simulator. We fire up the simulator and load the built-in ALU chip; as a result, we get this chip into the HDL window, and once we do that, notice that we also get a nice GUI. Indeed, some of the built-in chips in our simulator have a GUI side effect that helps you, the user, understand what goes on inside the respective chip. So this is a diagram that we made up to help you keep track of what happens inside the ALU. Moving along, we begin testing. As you can see, we have set the ALU inputs to two values, 30 and 20, and we have set the control bits to 000111, which, if you look at the table that I showed you before, is a directive that tells the ALU to compute y - x. The next thing we do is tell the simulator to actually do something, and we do it by clicking this calculator icon, which tells the simulator to evaluate the chip logic on the given inputs. So this is what happens next, and then we can inspect the outputs. If we do that, we indeed see that the ALU came up with what was advertised, minus 10, the result of y - x. So it looks like the ALU, at least in this example, works well.

Let me give you a second example, which demonstrates the logical computational abilities of the ALU. The first thing I do is tell the simulator to switch from decimal format to binary format, which makes it much easier to enter zeros and ones into the various inputs of the ALU, and that's what we do here. I picked two arbitrary 16-bit values of zeros and ones and entered them, and then I also entered the control-bit values 000000, which happens to be the directive to compute x And y. Once again, if you look at the truth table, you will see this row listed there. And indeed, after we click the calculator icon, which I've skipped in this example, we see that the ALU performed a bitwise And operation, and the result is the bitwise And of the two given inputs. So far, it seems like the ALU is functioning quite well, at least in these two examples.

Well, we didn't say anything about how the ALU actually works; everything so far was kind of magic. So now is the time to open up the black box and understand how the ALU actually performs this magic. Once again, this is the gate diagram, or the interface diagram, of the ALU, and I want to focus on the six control bits and explain the name and operation of every one of these bits. If zx = 1, we set the x input to 0: irrespective of what x is, be it 17 or 19 or 5,003 or whatever, we set it to zero; that's what we do if zx is 1. If nx is 1, we set the x input to not x; this is bitwise negation. Also notice that these two things happen one after the other. So, for example, if zx equals 1 and nx equals 1, first we zero the x input and then we negate it, so we'll get all 1s if these are indeed the values of these two control bits. Exactly the same thing happens with the y input, using the zy and ny directives, if you will. Then we have the f bit: if f is 1, we compute x plus y, and if f is 0, we compute x And y. Now, these are the processed x and y; before we do these computations, the x and y inputs have already undergone the manipulations that we talked about before. They became either zeroed, or negated, or perhaps left untouched, and then we compute either 16-bit addition or 16-bit And-ing. Finally, we have the no bit: if no is 1, we negate the resulting output, the output that we just computed, and if no is 0, we leave it as is. If we do all of these operations one after the other, then what comes out is the desired function.
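The control-bit semantics just described can be summarized in a few lines of code. The following is a minimal Python sketch of the same stage order (zero, negate, compute, negate), working on 16-bit values; the function name hack_alu and the use of Python integers are illustrative choices of mine, not part of the chip's specification.

```python
MASK = 0xFFFF  # keep every intermediate value within 16 bits

def hack_alu(x, y, zx, nx, zy, ny, f, no):
    """Emulate the Hack ALU control-bit pipeline on 16-bit values."""
    if zx: x = 0                              # zx = 1: zero the x input
    if nx: x = ~x & MASK                      # nx = 1: negate the (possibly zeroed) x
    if zy: y = 0                              # zy = 1: zero the y input
    if ny: y = ~y & MASK                      # ny = 1: negate the (possibly zeroed) y
    out = (x + y) & MASK if f else (x & y)    # f = 1: add; f = 0: bitwise And
    if no: out = ~out & MASK                  # no = 1: negate the result
    return out
```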
And now that you understand these semantics, you can actually look at the table and try to convince yourself, you can actually prove, that this table delivers the required results, and I will demonstrate how we can do it. Let's pick one example: let's see how the ALU computes not x. I look up not x on the right-hand side, I see it right there, and then I look up the binary values of the six control bits and start simulating on paper what happens inside the ALU. In order to do it, I have to come up with some arbitrary example values of x and y, so I make up two values, and I use 4 bits instead of 16 to make it less tedious. So I have these two arbitrarily chosen examples of x and y, and then I look at the control bits. zx equals 0 and nx equals 0, which means that we don't touch the x input; we leave it as is. Then we move on to the y input: zy equals 1, so we zero the y input, and ny equals 1, so we negate it, and what we get is the result 1, 1, 1, 1. Moving along, f is 0, and if f is 0, we want to compute x And y. So we compute x And y and we get 1, 1, 0, 0; this is a bitwise And. Finally, no is 1, so we negate the result, and we get 0, 0, 1, 1. Lo and behold, 0, 0, 1, 1 is exactly not x: if you look at the original x, which was 1, 1, 0, 0, we got not x. So we have proved that this row in the truth table performs as advertised, so to speak.

Moving along, we can take another example, and this will be the final one. We look at y - x, an arithmetic operation; once again we see the binary values, and we begin simulating them. Once again, we start with two arbitrary examples of x and y; I've chosen 2 and 7. This is an arithmetic operation, so it's easier to think about it both in decimal and in binary. So we have 2 and 7, and if everything works well, we should get the result 5, because y - x, 7 - 2, should give us 5. We see that zx is 0 and nx is 0, so we don't touch the x input; it remains as is. Moving along, we see that zy is 0, so we don't touch the y input, but ny is 1, which means that we have to negate it, so 0, 1, 1, 1 becomes 1, 0, 0, 0. Moving along, we see that f equals 1, so we compute the addition, and if we add up x and y, we get 1, 0, 1, 0. no also equals 1, so we negate the result and we get 0, 1, 0, 1. And 0, 1, 0, 1 represents 5, which is exactly what we wanted. So once again you see that the ALU performed as specified.

Many of you may still wonder how this magic actually happens: we were told that we have to do subtraction, and actually we did addition, yet we got the result that we expected. Well, we don't want to get too much into it here, but if you go back and look at the units where we discussed the two's complement method, you will also understand the internal mechanics here, and why everything comes out as expected.
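If you want to double-check the paper simulation, the same y - x example can be replayed through the hack_alu sketch given earlier in this unit (assuming that definition is in scope); the expected output, 5, is the one we just computed by hand.

```python
# Replay the y - x example: x = 2, y = 7, control bits 0, 0, 0, 1, 1, 1.
out = hack_alu(2, 7, zx=0, nx=0, zy=0, ny=1, f=1, no=1)
print(out)  # 5, i.e. y - x = 7 - 2
```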
Now, as we said earlier in the unit, the ALU also computes two 1-bit output control bits. These bits are called zr and ng, and their role is to say something about the main output of the ALU, denoted out. Specifically, if out equals 0, the ALU sets zr to 1; otherwise zr is 0. And if out is negative, ng equals 1; otherwise, ng equals 0. Now, you may ask yourself why we need these two bits. Well, this will become clear when we put together the overall computer architecture, in which these two bits play an important role.

I'd like to make a few end notes about the Hack ALU. I hope that we have convinced you that it's a simple concept. It is very elegant, and, surprisingly, it's also extremely easy to implement, and let me explain why. If you recall what we do in the ALU given all these control bits: in some cases we have to set a 16-bit value to all 0s, which is easy; in other cases we have to set it to all 1s, also very easy; in some cases we have to negate an incoming value, and we've done that before, I think in project 1, where we built a gate that does exactly this 16-bit negation; and in some other cases we have to compute either a plus or an And, and these two computations are already taken care of by chips that you designed in previous projects. So, all together, there is very little to do: everything is done, in one way or another, by existing chips that you have already developed.

To sum up, this is Leonardo da Vinci, one of the greatest inventors in history, and Leonardo said that simplicity is the ultimate sophistication. I hope that I convinced you that the Hack ALU is both simple and quite sophisticated. So this has been the unit on the ALU, and it leads up to the next unit, in which we will get our hands dirty and build one such ALU, along with some other chips.
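As a small preview of that construction, here is a rough sketch of how the ALU's stages line up with chips from the earlier projects. The helper names not16, and16, add16, and mux16 mirror the Not16, And16, Add16, and Mux16 chips built in projects 1 and 2, but the Python stand-ins below are only an illustration of the wiring idea, not the actual HDL implementation.

```python
MASK = 0xFFFF  # model 16-bit buses as masked Python integers

# Illustrative stand-ins for chips already built in earlier projects.
def not16(a):         return ~a & MASK         # Not16: bitwise negation
def and16(a, b):      return a & b             # And16: bitwise And
def add16(a, b):      return (a + b) & MASK    # Add16: 16-bit addition
def mux16(a, b, sel): return b if sel else a   # Mux16: choose between two buses

def alu_from_parts(x, y, zx, nx, zy, ny, f, no):
    x1 = mux16(x, 0, zx)                            # zero the x input?
    x2 = mux16(x1, not16(x1), nx)                   # negate it?
    y1 = mux16(y, 0, zy)                            # zero the y input?
    y2 = mux16(y1, not16(y1), ny)                   # negate it?
    out = mux16(and16(x2, y2), add16(x2, y2), f)    # And or Add the processed inputs
    out = mux16(out, not16(out), no)                # negate the output?
    zr = 1 if out == 0 else 0                       # zr: is the output zero?
    ng = out >> 15                                  # ng: sign bit of the 16-bit output
    return out, zr, ng
```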