>> Welcome to the perspectives unit of week three, in which we do two things. First, we answer frequently asked questions that typically come up when we teach how to build memory systems. And we also use this opportunity to add some disclaimers about various things that we covered during this week. So, the first question I'd like to discuss is as follows. During this week, we used flip-flop gates without actually building them; we treated them as black-box abstractions. What does it take to actually build a flip-flop? So, Noam, maybe you will take this question. >> In many courses on computer hardware, students are taught how to construct flip-flops from basic NAND gates. Once that is done, they continue with the usual practice, which is also done in industry, of keeping a complete separation between the way you use combinational circuits and the way you use sequential circuits. But at the beginning, they do construct a flip-flop from basic NAND gates. In our course, we decided to skip that part and focus from the very beginning on the difference between the combinational point of view and the sequential point of view. Nevertheless, it is probably a good idea to see the main trick: how can you get NAND gates to remember anything? The basic idea is to take two NAND gates and connect them in a loop. So we have two inputs, each feeding into one NAND gate, where the other input of each NAND gate is fed the output of the other NAND gate. Now, normally this is a no-no: you're not supposed to connect combinational gates in a loop. In fact, the hardware simulator we provide in this course explicitly forbids it, and this is done for pedagogical reasons. So, let us see how this thing works in this particular case where a loop is actually used. Normally, this kind of circuit is meant to sit with both of its inputs set to one.
Now, in this situation, we don't know what each NAND gate will output, because we don't yet know the value of its looped-back input. There is no real "previous value" here, so at the beginning we know nothing about either output. But what happens once we take one of these inputs, say the top one, and turn it to zero for a while? Because this is a NAND gate, its output immediately becomes one, independently of what that unknown looped-back input was. Once that output is one, the other NAND gate sees two ones as its inputs, so its output becomes zero. That zero feeds back into the first gate, which is consistent with its output being one, and everything is very stable. Now, what's so interesting about this? Let us see what happens when the top input goes back up to one. Nothing changes: because the looped-back input is zero, this gate's output remains one, the other gate's output remains zero, and everything is completely stable. So somehow, we remember that the last input that became zero was the top one, because we have a zero on the bottom output. Now, let us see what happens when, at some future point in time, the bottom input becomes zero. Its gate's output immediately becomes one; the top gate now sees two ones, so its output becomes zero; that zero feeds back, and again everything is stable. And when we turn the bottom input back to one, the normal state, everything remains the same, because NAND of one and zero is still one.
So, everything is completely stable in this new situation, which is different from the previous one: now we have a zero on the top output and a one on the bottom, where previously we had the opposite values. So now, we remember that the last input that went down to zero was the bottom one. This kind of behavior allows the loop to remember which of its input wires last went to zero, and it is stable in each of two different situations. That's why the name is flip-flop: it can either flip, with a zero here and a one there, or flop, with the values reversed. In any case, a momentary change in the input changes the output for the future. Now, this is just the basic functionality that gets you to remember stuff. Of course, when you want to actually use this in a nice flip-flop, like the one we have used, you first of all need to put some circuitry around it that makes sure the inputs behave nicely, and to choose which one of the outputs you want to take. You also need to somehow separate what happens within one clock cycle from what happens in the next clock cycle. That is usually done by putting another one of these loops in series, driving one of them with the clock and the other with the opposite of the clock. But we don't want to get into these details; we just wanted to show the basic idea, which is quite surprising: you can remember something even though you only have NAND gates. >> So, thank you, Noam, for this explanation, which indeed was somewhat intricate. A follow-up question that typically comes up is: are NAND gates the only basic technology for building memory systems today? >> Well, Shimon, the truth is that this is not the only way to construct flip-flops. In many other cases, flip-flops are constructed using basic physical properties of the underlying solid-state devices that are used for this kind of storage.
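The hand simulation that Noam just walked us through can be sketched in a few lines of Python. This is purely illustrative: the names `nand` and `settle` are our own, and the function simply re-evaluates the two cross-coupled gates until their outputs stop changing, mirroring the step-by-step reasoning above.

```python
def nand(a, b):
    """Basic NAND gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

def settle(top_in, bottom_in, q, q_bar):
    """Re-evaluate the two cross-coupled NAND gates until the loop stabilizes."""
    for _ in range(4):  # this tiny loop settles within a couple of passes
        q_new = nand(top_in, q_bar)         # top gate, fed the bottom gate's output
        q_bar_new = nand(bottom_in, q_new)  # bottom gate, fed the top gate's output
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

# Pull the top input to 0 for a while, then release it back to 1:
q, q_bar = settle(0, 1, 0, 0)      # the top output becomes 1, the bottom 0
q, q_bar = settle(1, 1, q, q_bar)  # still (1, 0): the loop remembers
# Now pull the bottom input to 0 and release:
q, q_bar = settle(1, 0, q, q_bar)  # the outputs flip to (0, 1)
q, q_bar = settle(1, 1, q, q_bar)  # still (0, 1): it remembers the bottom pulse
```

Running the four calls in order shows exactly the behavior described above: whichever input was last pulled to zero determines the stable state the loop returns to.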
Now, the details of such devices and how they remember anything are really physics or electrical engineering, and not something that we touch in this course. >> Okay, so here's the next question. During this week, we built a memory device which we called RAM. Is this the only memory device that computers use? Well, the answer is definitely not. Computers use various kinds of memory devices, of which the RAM is indeed the most important one. The RAM, which stands for random access memory, stores both data and instructions. It is a volatile device, meaning that it depends on an external power supply: once you disconnect the computer from the power supply or turn it off, the contents of the RAM are effectively erased. In addition to the RAM unit, computers also typically use another device called ROM, which stands for read-only memory. The ROM is not only read-only, it is also a non-volatile device, unlike the RAM, which means that it maintains its contents over time without depending on an external power supply. This makes it very convenient, because the ROM is where you want to put the programs that have to run when you turn on the computer, in what is known as the booting process. When you boot up the computer, the program that is pre-stored in the ROM starts running. This program normally initializes all sorts of things in the computer, not in the operating system but in lower-level code. The next thing the program residing in the ROM does is load from the disk the startup code of the operating system. And then, finally, we begin to see some windows on the screen, and the computer sort of comes alive. Another technology which you've probably heard of is called flash memory, and flash memory is a technology which actually combines the good features of both the RAM and the ROM.
On the one hand, it's a read/write memory: you can both read and modify its contents. At the same time, like the ROM, it does not depend on an external power supply, so once you turn off the computer, the contents of the flash memory remain intact. So, we talked about RAM, ROM, and flash. Another kind of memory which you normally encounter is called cache memory. So the question is: what is cache memory, and why do we need it? >> When one actually builds a computer, the memory is going to be a pretty costly part of the whole system. As you can expect, there are many different technologies for building memories, and the faster the memory is, the more expensive it usually is; likewise, the larger the memory, the more expensive it usually is. So a computer architect is always faced with a trade-off: do we want to put more money into the memory and make it larger and faster, or do we want a cheaper memory and put more of the money into the processing unit? A usual compromise is to have a large, cheap, and possibly slow memory, together with a very small, expensive, fast memory, and to try to make sure that the data that is most often used by the processor resides in the very fast memory, while the data that is only rarely used resides in the slow memory. This way, you get the speed of the very fast memory while you enjoy the size of the very cheap memory. Doing this correctly is a very intricate art. These small, fast memories are called caches, and today's computers have whole hierarchies of such caches, which get faster, more expensive, and smaller as they get closer to the processor. Doing that correctly allows you to get the amazing effective speed of a very fast memory, even though there is only a very small fast memory and a very large slow memory. >> Okay. So, thank you, Noam, for this explanation.
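The payoff Noam describes can be put in numbers with a back-of-the-envelope formula for a single cache level in front of a slow memory. This is a simplified model of our own, and the timings and hit rate below are made-up illustrative values, not figures from the course.

```python
def effective_access_time(hit_rate, cache_ns, ram_ns):
    """Average memory access time when a fraction hit_rate of accesses
    is served by the fast cache and the rest fall through to slow RAM."""
    return hit_rate * cache_ns + (1 - hit_rate) * ram_ns

# Hypothetical numbers: a 1 ns cache, a 100 ns RAM, and a 95% hit rate
# give an average of about 5.95 ns per access -- close to cache speed,
# even though almost all the capacity lives in the slow, cheap memory.
avg = effective_access_time(0.95, 1.0, 100.0)
```

This is why a tiny fast memory paired with a large slow one can behave, on average, almost like a large fast one, provided frequently used data really does stay in the cache.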
And indeed, from a physical standpoint, we have many different kinds of memory units, and we discussed some of them in this perspectives discussion. And yet, it's important to emphasize that from a logical perspective, all these different memory systems, you know, the RAM, the ROM, the cache, and so on, look alike. They all behave like a sequence of addressable registers, and we can access any one of these registers and do something with its bit contents. So, everything that we said during this week is completely applicable to every kind of memory that we discussed in this unit.
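The logical view described above, a sequence of addressable registers, can be sketched as a toy Python class. The class name and methods are our own illustration, not part of the course's hardware specification.

```python
class Memory:
    """A toy model of the logical view shared by RAM, ROM, cache, etc.:
    a sequence of registers, each selected by an address."""

    def __init__(self, size):
        self.registers = [0] * size  # all registers start at 0

    def read(self, address):
        """Return the contents of the register at the given address."""
        return self.registers[address]

    def write(self, address, value):
        """Replace the contents of the register at the given address."""
        self.registers[address] = value

mem = Memory(16)
mem.write(7, 42)
value = mem.read(7)  # -> 42
```

Whether the registers are physically volatile DRAM cells, ROM cells, or cache lines, programs interact with them through exactly this read/write-at-an-address interface.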