- Hi, I’m Carrie Anne and this is Crash Course Computer Science.
- 00:05
- So last episode, we talked about how numbers can be represented in binary.
- 00:09
- Like, 00101010 is 42 in decimal.
- 00:13
- Representing and storing numbers is an important function of a computer, but the real goal is computation,
- 00:19
- or manipulating numbers in a structured and purposeful way, like adding two numbers together.
- 00:23
- These operations are handled by a computer’s Arithmetic and Logic Unit,
- 00:27
- but most people call it by its street name: the ALU.
- 00:29
- The ALU is the mathematical brain of a computer.
- 00:32
- When you understand an ALU’s design and function, you’ll understand a fundamental
- 00:36
- part of modern computers. It is THE thing that does all of the computation in a computer,
- 00:41
- so basically everything uses it.
- 00:43
- First though, look at this beauty.
- 00:45
- This is perhaps the most famous ALU ever, the 74181.
- 00:50
- When it was released in 1970, it was the first complete ALU that fit entirely inside of a single chip -
- 00:57
- Which was a huge engineering feat at the time.
- 00:59
- So today we’re going to take those Boolean logic gates we learned about last week
- 01:03
- to build a simple ALU circuit with much of the same functionality as the 74181.
- 01:08
- And over the next few episodes we’ll use
- 01:10
- this to construct a computer from scratch. So it’s going to get a little bit complicated,
- 01:14
- but I think you guys can handle it.
- 01:16
- INTRO
- 01:25
- An ALU is really two units in one -- there’s an arithmetic unit and a logic unit.
- 01:30
- Let's start with the arithmetic unit, which is responsible for handling all numerical operations in a
- 01:34
- computer, like addition and subtraction. It also does a bunch of other simple things like
- 01:39
- add one to a number, which is called an increment operation, but we’ll talk about those later.
- 01:43
- Today, we’re going to focus on the pièce de résistance, the crème de la crème of
- 01:47
- operations that underlies almost everything else a computer does - adding two numbers together.
- 01:52
- We could build this circuit entirely out of
- 01:54
- individual transistors, but that would get confusing really fast.
- 01:57
- So instead, as we talked about in Episode 3, we can use a high level of abstraction and build our components
- 02:03
- out of logic gates, in this case: AND, OR, NOT and XOR gates.
- 02:08
- The simplest adding circuit that we can build takes two binary digits, and adds them together.
- 02:12
- So we have two inputs, A and B, and one output, which is the sum of those two digits.
- 02:18
- Just to clarify: A, B and the output are all single bits.
- 02:21
- There are only four possible input combinations.
- 02:24
- The first three are: 0+0 = 0
- 02:27
- 1+0 = 1, and 0+1 = 1
- 02:30
- Remember that in binary, 1 is the same as true, and 0 is the same as false.
- 02:35
- So this set of inputs exactly matches the boolean logic of an XOR gate, and we can use it as
- 02:39
- our 1-bit adder.
- 02:40
- But the fourth input combination, 1 + 1, is a special case. 1 + 1 is 2 (obviously)
- 02:46
- but there’s no 2 digit in binary, so as we talked about last episode, the result is
- 02:50
- 0 and the 1 is carried to the next column. So the sum is really 10 in binary.
- 02:54
- Now, the output of our XOR gate is partially correct - 1 plus 1, outputs 0.
- 03:00
- But, we need an extra output wire for that carry bit.
- 03:03
- The carry bit is only “true” when the inputs are 1 AND 1, because that's the only
- 03:06
- time when the result (two) is bigger than 1 bit can store… and conveniently we have
- 03:10
- a gate for that! An AND gate, which is only true when both inputs are true, so
- 03:15
- we’ll add that to our circuit too.
- 03:17
- And that's it. This circuit is called a half adder.
- 03:19
- It's not that complicated - just two logic gates - but let’s abstract away even this level
- 03:24
- of detail and encapsulate our newly minted half adder as its own component, with two
- 03:28
- inputs - bits A and B - and two outputs, the sum and the carry bits.
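The half adder's behavior can be sketched in a couple of lines of Python (the function name is our own, not from the episode): an XOR produces the sum bit, and an AND produces the carry bit.

```python
# Half adder: XOR gives the sum bit, AND gives the carry bit.
def half_adder(a, b):
    """Add two single bits; returns (sum_bit, carry_bit)."""
    return a ^ b, a & b
```

Running it on all four input combinations reproduces the table above, including 1 + 1 giving a sum of 0 with a carry of 1.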
- 03:32
- This takes us to another level of abstraction… heh… I feel like I say that a lot.
- 03:36
- I wonder if this is going to become a thing.
- 03:43
- Anyway, if you want to add more than 1 + 1
- 03:46
- we’re going to need a “Full Adder.” That half-adder left us with a carry bit as output.
- 03:50
- That means that when we move on to the next column in a multi-column addition,
- 03:54
- and every column after that, we are going to have to add three bits together, not two.
- 03:59
- A full adder is a bit more complicated - it
- 04:00
- takes three bits as inputs: A, B and C. So the maximum possible input is 1 + 1 + 1,
- 04:07
- which equals 1, with a carry of 1, so we still only need two output wires: sum and carry.
- 04:12
- We can build a full adder using half adders. To do this, we use a half adder to add A plus B
- 04:17
- just like before – but then feed that result and input C into a second half adder.
- 04:22
- Lastly, we need an OR gate to check if either one of the carry bits was true.
- 04:27
- That’s it, we just made a full adder! Again, we can go up a level of abstraction and wrap
- 04:31
- up this full adder as its own component. It takes three inputs, adds them, and outputs
- 04:36
- the sum and the carry, if there is one.
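As a rough Python sketch (the names are ours), wiring the two half adders and the OR gate together looks like this:

```python
def half_adder(a, b):
    # XOR for the sum bit, AND for the carry bit
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Add three bits; returns (sum_bit, carry_out)."""
    s1, c1 = half_adder(a, b)           # first half adder adds A and B
    s2, c2 = half_adder(s1, carry_in)   # second adds that sum to the carry-in
    return s2, c1 | c2                  # OR gate: was either carry true?
```

Feeding in the maximum input, 1 + 1 + 1, gives a sum of 1 with a carry of 1, just as described.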
- 04:38
- Armed with our new components, we can now build a circuit that takes two 8-bit numbers
- 04:42
- – Let’s call them A and B – and adds them together.
- 04:44
- Let’s start with the very first bit of
- 04:46
- A and B, which we’ll call A0 and B0. At this point, there is no carry bit to deal
- 04:51
- with, because this is our first addition. So we can use our half adder to add these
- 04:55
- two bits together. The output is sum0. Now we want to add A1 and B1 together.
- 05:01
- It's possible there was a carry from the previous addition of A0 and B0, so this time we need
- 05:06
- to use a full adder that also inputs the carry bit. We output this result as sum1.
- 05:11
- Then, we take any carry from this full adder, and run it into the next full adder that handles
- 05:16
- A2 and B2. And we just keep doing this in a big chain until all 8 bits have been added.
- 05:21
- Notice how the carry bits ripple forward to each subsequent adder. For this reason,
- 05:26
- this is called an 8-bit ripple carry adder. Notice how our last full adder has a carry out.
- 05:32
- If there is a carry into the 9th bit, it means the sum of the two numbers is too large to fit into 8 bits.
- 05:36
- This is called an overflow.
- 05:37
- In general, an overflow occurs when the result of an addition is too large to be represented by the number of bits you are using.
- 05:43
- This can cause errors and unexpected behavior.
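The chain of full adders, with the carries rippling forward and the last carry out acting as the overflow signal, can be sketched like this in Python (helper names are our own):

```python
def full_adder(a, b, c):
    # standard sum and carry logic for three input bits
    return a ^ b ^ c, (a & b) | ((a ^ b) & c)

def ripple_add(a_bits, b_bits):
    """Add two equal-width numbers given as bit lists, least significant
    bit first. Returns (sum_bits, overflow), where overflow is the carry
    rippling out of the final full adder."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out, carry

def to_bits(n, width=8):
    # convert a number to a LSB-first bit list
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    # convert a LSB-first bit list back to a number
    return sum(bit << i for i, bit in enumerate(bits))
```

Adding 42 and 100 fits in 8 bits, but adding 200 and 100 raises the overflow signal, because 300 needs a 9th bit.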
- 05:45
- Famously, the original Pac-Man arcade game used 8 bits to keep track of what level you were on.
- 05:50
- This meant that if you made it past level 255 – the largest number storable in 8 bits –
- 05:55
- to level 256, the ALU overflowed.
- 05:58
- This caused a bunch of errors and glitches making the level unbeatable.
- 06:01
- The bug became a rite of passage for the greatest Pac-Man players.
- 06:04
- So if we want to avoid overflows, we can extend our circuit with more full adders, allowing
- 06:09
- us to add 16- or 32-bit numbers. This makes overflows less likely to happen, but at the
- 06:14
- expense of more gates. An additional downside is that it takes a little bit of time for
- 06:19
- each of the carries to ripple forward.
- 06:21
- Admittedly, not very much time; electrons move pretty fast, so we’re talking about billionths of a second,
- 06:26
- but that’s enough to make a difference in today’s fast computers.
- 06:29
- For this reason, modern computers use a slightly different adding circuit called a ‘carry-look-ahead’ adder
- 06:35
- which is faster, but ultimately does exactly the same thing-- adds binary numbers.
- 06:39
- The ALU’s arithmetic unit also has circuits for other math operations
- 06:43
- and in general, these eight operations are almost always supported.
- 06:46
- And like our adder, these other operations are built from individual logic gates.
- 06:50
- Interestingly, you may have noticed that there are no multiply and divide operations.
- 06:54
- That's because simple ALUs don’t have a circuit for this, and instead just perform a series of additions.
- 06:59
- Let’s say you want to multiply 12 by 5.
- 07:01
- That’s the same thing as adding 12 to itself 5 times. So it would take 5 passes through
- 07:06
- the ALU to do this one multiplication. And this is how many simple processors,
- 07:10
- like those in your thermostat, TV remote, and microwave, do multiplication.
- 07:15
- It’s slow, but it gets the job done.
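The multiply-by-repeated-addition idea is simple enough to show directly (in real hardware the control unit does the looping; this is just the concept):

```python
def multiply(a, times):
    """Multiply the way a simple ALU does: repeated addition,
    one pass through the adder per loop iteration."""
    total = 0
    for _ in range(times):
        total += a   # one trip through the ALU's adder
    return total
```

So 12 times 5 becomes five additions of 12, and five passes through the ALU.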
- 07:17
- However, fancier processors, like those in your laptop or smartphone,
- 07:20
- have arithmetic units with dedicated circuits for multiplication.
- 07:23
- And as you might expect, the circuit is more complicated than addition -- there’s no
- 07:27
- magic, it just takes a lot more logic gates – which is why less expensive processors
- 07:31
- don’t have this feature.
- 07:32
- Ok, let’s move on to the other half of the ALU: the Logic Unit.
- 07:36
- Instead of arithmetic operations, the Logic Unit performs… well...
- 07:39
- logical operations, like AND, OR and NOT, which we’ve talked about previously.
- 07:44
- It also performs simple numerical tests, like checking if a number is negative.
- 07:47
- For example, here’s a circuit that tests if the output of the ALU is zero.
- 07:51
- It does this using a bunch of OR gates to see if any of the bits are 1.
- 07:55
- If even a single bit is 1, we know the number can’t be zero. Then we use a final NOT gate to flip this
- 08:01
- input so the output is 1 only if the input number is 0.
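That zero-testing circuit, ORing every bit and then inverting the result, can be sketched in Python (the function name is ours):

```python
def is_zero(bits):
    """OR all the bits together, then invert: outputs 1 only
    when every input bit is 0."""
    any_set = 0
    for bit in bits:
        any_set |= bit   # the chain of OR gates
    return 1 - any_set   # the final NOT gate
```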
- 08:05
- So that’s a high level overview of what makes up an ALU. We even built several of
- 08:09
- the main components from scratch, like our ripple adder.
- 08:11
- As you saw, it’s just a big bunch of logic gates connected in clever ways.
- 08:14
- Which brings us back to that ALU you admired so much at the beginning of the episode.
- 08:18
- The 74181.
- 08:21
- Unlike the 8-bit ALU we made today, the 74181 could only handle 4-bit inputs,
- 08:26
- which means YOU BUILT AN ALU THAT’S LIKE
- 08:29
- TWICE AS GOOD AS THAT SUPER FAMOUS ONE. WITH YOUR MIND! Well.. sort of.
- 08:34
- We didn’t build the whole thing… but you get the idea.
- 08:36
- The 74181 used about 70 logic gates, and it couldn’t multiply or divide.
- 08:41
- But it was a huge step forward in miniaturization, opening the doors to more capable and less expensive computers.
- 08:47
- This 4-bit ALU circuit is already a lot to take in,
- 08:50
- but our 8-bit ALU would require hundreds of logic gates to fully build and engineers
- 08:54
- don’t want to see all that complexity when using an ALU, so they came up with a special
- 08:59
- symbol to wrap it all up, which looks like a big ‘V’. Just another level of abstraction!
- 09:09
- Our 8-bit ALU has two inputs, A and B, each with 8 bits. We also need a way to specify what operation the ALU should perform,
- 09:17
- for example, addition or subtraction.
- 09:19
- For that, we use a 4-bit operation code.
- 09:21
- We’ll talk about this more in a later episode, but in brief, 1000 might be the command
- 09:27
- to add, while 1100 is the command for subtract. Basically, the operation code tells the ALU
- 09:33
- what operation to perform. And the result of that operation on inputs A and B is an 8-bit output.
- 09:38
- ALUs also output a series of Flags, which are 1-bit outputs for particular states and statuses.
- 09:43
- For example, if we subtract two numbers, and the result is 0, our zero-testing circuit, the one we made earlier,
- 09:50
- sets the Zero Flag to True (1). This is useful if we are trying to determine if two numbers are equal.
- 09:55
- If we wanted to test if A was less than B,
- 09:57
- we can use the ALU to calculate A minus B and look to see if the Negative Flag was set to true.
- 10:03
- If it was, we know that A was smaller than B.
- 10:05
- And finally, there’s also a wire attached to the carry out on the adder we built,
- 10:09
- so if there is an overflow, we’ll know about it. This is called the Overflow Flag.
- 10:13
- Fancier ALUs will have more flags, but these three flags are universal and frequently used.
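Putting the pieces together, here is a behavioral (not gate-level) sketch of the ALU interface just described. The two opcodes are the examples from the episode; everything else is our own naming:

```python
def alu(a, b, opcode):
    """Toy 8-bit ALU. Opcodes follow the example above:
    0b1000 = add, 0b1100 = subtract. Returns (result, flags)."""
    if opcode == 0b1000:
        raw = a + b
    elif opcode == 0b1100:
        raw = a - b
    else:
        raise ValueError("unsupported opcode")
    result = raw & 0xFF          # the 8-bit output
    flags = {
        "zero": result == 0,     # the zero-testing circuit
        "negative": raw < 0,     # the result went below zero
        "overflow": raw > 0xFF,  # carry out of the 8th bit
    }
    return result, flags
```

Subtracting two equal numbers sets the Zero Flag, and computing A minus B with A smaller than B sets the Negative Flag, just as described above.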
- 10:18
- In fact, we’ll be using them soon in a future episode.
- 10:21
- So now you know how your computer does all its basic mathematical operations digitally
- 10:25
- with no gears or levers required.
- 10:27
- We’re going to use this ALU when we construct our CPU two episodes from now.
- 10:31
- But before that, our computer is going to need some memory! We'll talk about that next week.
- Hi, I’m Carrie Anne and welcome to Crash Course Computer Science.
- 00:05
- So last episode, using just logic gates, we built a simple ALU, which performs arithmetic
- 00:11
- and logic operations, hence the ‘A’ and the ‘L’.
- 00:13
- But of course, there’s not much point in calculating a result only to throw it away
- 00:17
- - it would be useful to store that value somehow, and maybe even run several operations in a row.
- 00:22
- That's where computer memory comes in!
- 00:24
- If you've ever been in the middle of a long RPG campaign on your console, or slogging
- 00:28
- through a difficult level on Minesweeper on your desktop, and your dog came by, tripped
- 00:32
- and pulled the power cord out of the wall, you know the agony of losing all your progress.
- 00:36
- Condolences.
- 00:38
- But the reason for your loss is that your console, your laptop, and other computers make
- 00:42
- use of Random Access Memory, or RAM, which stores things like game state - as long as
- 00:46
- the power stays on.
- 00:47
- Another type of memory, called persistent memory, can survive without power, and it’s
- 00:51
- used for different things; We'll talk about the persistence of memory in a later episode.
- 00:55
- Today, we’re going to start small - literally by building a circuit that can store one..
- 01:00
- single.. bit of information.
- 01:01
- After that, we’ll scale up, and build our very own memory module, and we’ll combine
- 01:05
- it with our ALU next time, when we finally build our very own CPU!
- 01:10
- INTRO
- 01:19
- All of the logic circuits we've discussed so far go in one direction - always flowing
- 01:23
- forward - like our 8-bit ripple adder from last episode.
- 01:26
- But we can also create circuits that loop back on themselves.
- 01:29
- Let’s try taking an ordinary OR gate, and feed the output back into one of its inputs
- 01:34
- and see what happens.
- 01:35
- First, let’s set both inputs to 0.
- 01:37
- So 0 OR 0 is 0, and so this circuit always outputs 0.
- 01:41
- If we were to flip input A to 1.
- 01:44
- 1 OR 0 is 1, so now the output of the OR gate is 1.
- 01:48
- A fraction of a second later, that loops back around into input B, so the OR gate sees that
- 01:52
- both of its inputs are now 1.
- 01:54
- 1 OR 1 is still 1, so there is no change in output.
- 01:58
- If we flip input A back to 0, the OR gate still outputs 1.
- 02:01
- So now we've got a circuit that records a “1” for us.
- 02:04
- Except, we've got a teensy tiny problem - this change is permanent!
- 02:07
- No matter how hard we try, there’s no way to get this circuit to flip back from a 1
- 02:12
- to a 0.
- 02:13
- Now let’s look at this same circuit, but with an AND gate instead.
- 02:16
- We'll start inputs A and B both at 1.
- 02:19
- 1 AND 1 outputs 1 forever.
- 02:21
- But, if we then flip input A to 0, because it’s an AND gate, the output will go to 0.
- 02:26
- So this circuit records a 0, the opposite of our other circuit.
- 02:29
- Like before, no matter what input we apply to input A afterwards, the circuit will always output 0.
- 02:34
- Now we’ve got circuits that can record both 0s and 1s.
- 02:38
- The key to making this a useful piece of memory is to combine our two circuits into what is
- 02:42
- called the AND-OR Latch.
- 02:44
- It has two inputs, a "set" input, which sets the output to a 1, and a "reset" input, which
- 02:48
- resets the output to a 0.
- 02:50
- If set and reset are both 0, the circuit just outputs whatever was last put in it.
- 02:54
- In other words, it remembers a single bit of information!
- 02:58
- Memory!
- 02:59
- This is called a “latch” because it “latches onto” a particular value and stays that way.
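One time-step of the AND-OR latch can be written out in Python (a behavioral sketch; the real circuit's feedback loop is continuous):

```python
def latch_step(set_bit, reset_bit, output):
    """One step of the AND-OR latch: the OR gate lets a 1 'stick'
    in the feedback loop, and ANDing with NOT(reset) forces it back to 0."""
    return (output | set_bit) & (1 - reset_bit)

out = 0
out = latch_step(1, 0, out)  # set: output becomes 1
out = latch_step(0, 0, out)  # both inputs 0: it remembers the 1
saved = out
out = latch_step(0, 1, out)  # reset: output goes back to 0
```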
- 03:03
- The action of putting data into memory is called writing, whereas getting the data out
- 03:08
- is called reading.
- 03:09
- Ok, so we’ve got a way to store a single bit of information!
- 03:12
- Great!
- 03:13
- Unfortunately, having two different wires for input – set and reset – is a bit confusing.
- 03:18
- To make this a little easier to use, we really want a single wire to input data, that we
- 03:22
- can set to either 0 or 1 to store the value.
- 03:24
- Additionally, we are going to need a wire that enables the memory to be either available
- 03:28
- for writing or “locked” down --which is called the write enable line.
- 03:32
- By adding a few extra logic gates, we can build this circuit, which is called a Gated Latch
- 03:37
- since the “gate” can be opened or closed.
- 03:39
- Now this circuit is starting to get a little complicated.
- 03:41
- We don’t want to have to deal with all the individual logic gates... so as before, we’re
- 03:44
- going to bump up a level of abstraction, and put our whole Gated Latch circuit in a box
- 03:48
- -- a box that stores one bit.
- 03:50
- Let’s test out our new component!
- 03:52
- Let’s start everything at 0.
- 03:54
- If we toggle the Data wire from 0 to 1 or 1 to 0, nothing happens - the output stays at 0.
- 04:00
- That’s because the write enable wire is off, which prevents any change to the memory.
- 04:04
- So we need to “open” the “gate” by turning the write enable wire to 1.
- 04:07
- Now we can put a 1 on the data line to save the value 1 to our latch.
- 04:11
- Notice how the output is now 1.
- 04:14
- Success!
- 04:14
- We can turn off the enable line and the output stays as 1.
- 04:18
- Once again, we can toggle the value on the data line all we want, but the output will
- 04:21
- stay the same.
- 04:22
- The value is saved in memory.
- 04:24
- Now let’s turn the enable line on again and use our data line to set the latch to 0.
- 04:29
- Done.
- 04:30
- Enable line off, and the output is 0.
- 04:32
- And it works!
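The test we just ran can be replayed with a small behavioral model of the gated latch (a sketch, abstracting away the individual gates):

```python
class GatedLatch:
    """One bit of memory: the data line only matters while
    the write-enable 'gate' is open (set to 1)."""
    def __init__(self):
        self.value = 0
    def update(self, data, write_enable):
        if write_enable:      # gate open: store the data bit
            self.value = data
        return self.value     # gate closed: output is unchanged

latch = GatedLatch()
latch.update(1, 0)  # enable off: toggling data does nothing
latch.update(1, 1)  # enable on: save a 1
latch.update(0, 0)  # enable off again: the 1 stays saved
```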
- 04:33
- Now, of course, computer memory that only stores one bit of information isn’t very
- 04:37
- useful -- definitely not enough to run Frogger.
- 04:39
- Or anything, really.
- 04:41
- But we’re not limited to using only one latch.
- 04:43
- If we put 8 latches side-by-side, we can store 8 bits of information like an 8-bit number.
- 04:48
- A group of latches operating like this is called a register, which holds a single number,
- 04:53
- and the number of bits in a register is called its width.
- 04:56
- Early computers had 8-bit registers, then 16, 32, and today, many computers have registers
- 05:01
- that are 64-bits wide.
- 05:03
- To write to our register, we first have to enable all of the latches.
- 05:06
- We can do this with a single wire that connects to all of their enable inputs, which we set to 1.
- 05:11
- We then send our data in using the 8 data wires, and then set enable back to 0, and
- 05:17
- the 8 bit value is now saved in memory.
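An 8-bit register, then, is just a row of gated latches sharing one enable wire. A minimal sketch (class and method names are ours):

```python
class Register:
    """A row of gated latches sharing one write-enable wire."""
    def __init__(self, width=8):
        self.bits = [0] * width
    def write(self, data_bits, enable):
        if enable:                       # enable wire set to 1:
            self.bits = list(data_bits)  # every latch captures its data wire
    def read(self):
        return list(self.bits)

reg = Register(8)
reg.write([0, 0, 1, 0, 1, 0, 1, 0], enable=1)  # saved
reg.write([1, 1, 1, 1, 1, 1, 1, 1], enable=0)  # ignored: enable is off
```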
- 05:19
- Putting latches side-by-side works ok for a small-ish number of bits.
- 05:23
- A 64-bit register would need 64 wires running to the data pins, and 64 wires running to
- 05:28
- the outputs.
- 05:29
- Luckily we only need 1 wire to enable all the latches, but that’s still 129 wires.
- 05:36
- For 256 bits, we end up with 513 wires!
- 05:40
- The solution is a matrix!
- 05:42
- In this matrix, we don’t arrange our latches in a row, we put them in a grid.
- 05:46
- For 256 bits, we need a 16 by 16 grid of latches with 16 rows and columns of wires.
- 05:52
- To activate any one latch, we must turn on the corresponding row AND column wire.
- 05:56
- Let’s zoom in and see how this works.
- 05:58
- We only want the latch at the intersection of the two active wires to be enabled,
- 06:02
- but all of the other latches should stay disabled.
- 06:05
- For this, we can use our trusty AND gate!
- 06:08
- The AND gate will output a 1 only if the row and the column wires are both 1.
- 06:12
- So we can use this signal to uniquely select a single latch.
- 06:15
- This row/column setup connects all our latches with a single, shared, write enable wire.
- 06:20
- In order for a latch to become write enabled, the row wire, the column wire, and the write
- 06:24
- enable wire must all be 1.
- 06:26
- That should only ever be true for one single latch at any given time.
- 06:29
- This means we can use a single, shared wire for data.
- 06:32
- Because only one latch will ever be write enabled, only one will ever save the data
- 06:37
- -- the rest of the latches will simply ignore values on the data wire because they are not
- 06:40
- write enabled.
- 06:41
- We can use the same trick with a read enable wire to read the data later, to get the data
- 06:46
- out of one specific latch.
- 06:48
- This means in total, for 256 bits of memory, we only need 35 wires - 1 data wire, 1 write
- 06:55
- enable wire, 1 read enable wire, and 16 rows and columns for the selection.
- 06:59
- That’s significant wire savings!
- 07:01
- But we need a way to uniquely specify each intersection.
- 07:05
- We can think of this like a city, where you might want to meet someone at 12th avenue
- 07:08
- and 8th street -- that's an address that defines an intersection.
- 07:11
- The latch we just saved our one bit into has an address of row 12 and column 8.
- 07:15
- Since there is a maximum of 16 rows, we store the row address in a 4 bit number.
- 07:20
- 12 is 1100 in binary.
- 07:23
- We can do the same for the column address: 8 is 1000 in binary.
- 07:28
- So the address for the particular latch we just used can be written as 11001000.
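Splitting that 8-bit address back into its row and column halves is a simple bit-shift, which we can check in Python:

```python
def split_address(address):
    """Split an 8-bit address into a 4-bit row and 4-bit column,
    as in the example: 11001000 -> row 12, column 8."""
    row = (address >> 4) & 0b1111   # high four bits
    col = address & 0b1111          # low four bits
    return row, col
```

In hardware, each 4-bit half is what gets fed into a multiplexer, as described next.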
- 07:35
- To convert from an address into something that selects the right row or column, we need
- 07:39
- a special component called a multiplexer -- which is the computer component with a pretty cool
- 07:43
- name at least compared to the ALU.
- 07:45
- Multiplexers come in all different sizes, but because we have 16 rows, we need a 1 to
- 07:50
- 16 multiplexer.
- 07:51
- It works like this.
- 07:52
- You feed it a 4 bit number, and it connects the input line to a corresponding output line.
- 07:56
- So if we pass in 0000, it will select the very first column for us.
- 08:02
- If we pass in 0001, the next column is selected, and so on.
- 08:06
- We need one multiplexer to handle our rows and another multiplexer to handle the columns.
- 08:10
- Ok, it’s starting to get complicated again, so let’s make our 256-bit memory its own component.
- 08:16
- Once again a new level of abstraction!
- 08:24
- It takes an 8-bit address for input - the 4 bits for the column and 4 for the row.
- 08:29
- We also need write and read enable wires.
- 08:32
- And finally, we need just one data wire, which can be used to read or write data.
- 08:37
- Unfortunately, even 256-bits of memory isn’t enough to run much of anything, so we need
- 08:42
- to scale up even more!
- 08:43
- We’re going to put them in a row.
- 08:45
- Just like with the registers.
- 08:46
- We’ll make a row of 8 of them, so we can store an 8 bit number - also known as a byte.
- 08:51
- To do this, we feed the exact same address into all 8 of our 256-bit memory components
- 08:57
- at the same time, and each one saves one bit of the number.
- 09:01
- That means the component we just made can store 256 bytes at 256 different addresses.
- 09:07
- Again, to keep things simple, we want to leave behind this inner complexity.
- 09:11
- Instead of thinking of this as a series of individual memory modules and circuits, we’ll
- 09:15
- think of it as a uniform bank of addressable memory.
- 09:18
- We have 256 addresses, and at each address, we can read or write an 8-bit value.
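That uniform bank of memory can be modeled as eight one-bit planes, one per 256-bit component, all fed the same address. A sketch (names are ours):

```python
class Memory256:
    """256 addresses, 8 bits each: eight 256-bit planes (one per
    memory component) all receiving the same address."""
    def __init__(self):
        self.planes = [[0] * 256 for _ in range(8)]  # planes[i][addr] = bit i
    def write(self, address, byte):
        for i in range(8):
            self.planes[i][address] = (byte >> i) & 1
    def read(self, address):
        return sum(self.planes[i][address] << i for i in range(8))

mem = Memory256()
mem.write(0b11001000, 42)  # store a byte at row 12, column 8 of every plane
```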
- 09:23
- We’re going to use this memory component next episode when we build our CPU.
- 09:28
- The way that modern computers scale to megabytes and gigabytes of memory is by doing the same
- 09:32
- thing we’ve been doing here -- keep packaging up little bundles of memory into larger, and
- 09:36
- larger, and larger arrangements.
- 09:37
- As the number of memory locations grow, our addresses have to grow as well.
- 09:42
- 8 bits hold enough numbers to provide addresses for 256 bytes of our memory, but that’s all.
- 09:48
- To address a gigabyte – or a billion bytes of memory – we need 32-bit addresses.
- 09:53
- An important property of this memory is that we can access any memory location, at any
- 09:58
- time, and in a random order.
- 09:59
- For this reason, it’s called Random-Access Memory or RAM.
- 10:03
- When you hear people talking about how much RAM a computer has - that's the computer’s memory.
- 10:07
- RAM is like a human’s short term or working memory, where you keep track of things going
- 10:11
- on right now - like whether or not you had lunch or paid your phone bill.
- 10:14
- Here’s an actual stick of RAM - with 8 memory modules soldered onto the board.
- 10:18
- If we carefully opened up one of these modules and zoomed in, the first thing you would see
- 10:22
- are 32 squares of memory.
- 10:23
- Zoom into one of those squares, and we can see each one is made up of 4 smaller blocks.
- 10:28
- If we zoom in again, we get down to the matrix of individual bits.
- 10:31
- This is a matrix of 128 by 64 bits.
- 10:34
- That’s 8192 bits in total.
- 10:37
- Each of our 32 squares has 4 matrices, so that’s 32,768 bits.
- 10:43
- And there are 32 squares in total.
- 10:45
- So all in all, that’s roughly 1 million bits of memory in each chip.
- 10:49
- Our RAM stick has 8 of these chips, so in total, this RAM can store 8 million bits,
- 10:54
- otherwise known as 1 megabyte.
- 10:56
- That’s not a lot of memory these days -- this is a RAM module from the 1980’s.
- 11:00
- Today you can buy RAM that has a gigabyte or more of memory - that’s billions of bytes
- 11:05
- of memory.
- 11:06
- So, today, we built a piece of SRAM - Static Random-Access Memory – which uses latches.
- 11:11
- There are other types of RAM, such as DRAM, Flash memory, and NVRAM.
- 11:15
- These are very similar in function to SRAM, but use different circuits to store the individual
- 11:19
- bits -- for example, using different logic gates, capacitors, charge traps, or memristors.
- 11:24
- But fundamentally, all of these technologies store bits of information in massively nested
- 11:28
- matrices of memory cells.
- 11:31
- Like many things in computing, the fundamental operation is relatively simple.. it’s the
- 11:35
- layers and layers of abstraction that are mind-blowing -- like a Russian doll that
- 11:40
- keeps getting smaller and smaller and smaller.
- 11:42
- I’ll see you next week.
- 00:03
- Hi, I’m Carrie Anne, this is Crash Course Computer Science, and today, we’re talking about processors.
- 00:07
- Just a warning though - this is probably the most complicated episode in the series.
- 00:11
- So once you get this, you’re golden.
- 00:12
- We’ve already made an Arithmetic and Logic Unit, which takes in binary numbers and performs
- 00:16
- calculations, and we’ve made two types of computer memory: Registers -- small, linear
- 00:21
- chunks of memory, useful for storing a single value -- and then we scaled up, and made some
- 00:25
- RAM, a larger bank of memory that can store a lot of numbers located at different addresses.
- 00:30
- Now it’s time to put it all together and build ourselves the heart of any computer,
- 00:34
- but without any of the emotional baggage that comes with human hearts.
- 00:37
- For computers, this is the Central Processing Unit, most commonly called the CPU.
- 00:42
- INTRO
- 00:51
- A CPU’s job is to execute programs.
- 00:53
- Programs, like Microsoft Office, Safari, or your beloved copy of Half-Life 2, are made
- 00:57
- up of a series of individual operations, called instructions, because they “instruct”
- 01:02
- the computer what to do.
- 01:03
- If these are mathematical instructions, like add or subtract, the CPU will configure its
- 01:07
- ALU to do the mathematical operation.
- 01:10
- Or it might be a memory instruction, in which case the CPU will talk with memory
- 01:14
- to read and write values.
- 01:15
- There are a lot of parts in a CPU, so we’re going to lay it out piece by piece, building
- 01:19
- up as we go.
- 01:20
- We’ll focus on functional blocks, rather than showing every single wire.
- 01:23
- When we do connect two components with a line, this is an abstraction for all of the necessary wires.
- 01:28
- This high level view is called the microarchitecture.
- 01:30
- OK, first, we’re going to need some memory.
- 01:32
- Lets drop in the RAM module we created last episode.
- 01:35
- To keep things simple, we’ll assume it only has 16 memory locations, each containing 8 bits.
- 01:40
- Let’s also give our processor four, 8-bit memory registers, labeled A, B, C and D which
- 01:45
- will be used to temporarily store and manipulate values.
- 01:48
- We already know that data can be stored in memory as binary values
- 01:51
- and programs can be stored in memory too.
- 01:52
- We can assign an ID to each instruction supported by our CPU.
- 01:56
- In our hypothetical example, we use the first four bits to store the “operation code”,
- 02:00
- or opcode for short.
- 02:02
- The final four bits specify where the data for that operation should come from -
- 02:06
- this could be registers or an address in memory.
- 02:08
- We also need two more registers to complete our CPU.
- 02:11
- First, we need a register to keep track of where we are in a program.
- 02:14
- For this, we use an instruction address register, which as the name suggests, stores the memory
- 02:19
- address of the current instruction.
- 02:20
- And then we need the other register to store the current instruction, which we’ll call the instruction register.
- 02:26
- When we first boot up our computer, all of our registers start at 0.
- 02:30
- As an example, we’ve initialized our RAM with a simple computer program that we’ll go through today.
- 02:35
- The first phase of a CPU’s operation is called the fetch phase.
- 02:38
- This is where we retrieve our first instruction.
- 02:41
- First, we wire our Instruction Address Register to our RAM module.
- 02:44
- The register’s value is 0, so the RAM returns whatever value is stored in address 0.
- 02:49
- In this case, 0010 1110.
- 02:52
- Then this value is copied into our instruction register.
- 02:55
- Now that we’ve fetched an instruction from memory, we need to figure out what that instruction is
- 02:59
- so we can execute it.
- 03:00
- That is, run it.
- 03:01
- Not kill it.
- 03:02
- This is called the decode phase.
- 03:04
- In this case the opcode, which is the first four bits, is: 0010.
- 03:08
- This opcode corresponds to the “LOAD A” instruction, which loads a value from RAM
- 03:13
- into Register A.
- 03:14
- The RAM address is the last four bits of our instruction which are 1110, or 14 in decimal.
- 03:19
- Next, instructions are decoded and interpreted by a Control Unit.
- 03:23
- Like everything else we’ve built, it too is made out of logic gates.
- 03:26
- For example, to recognize a LOAD A instruction, we need a circuit that checks if the opcode
- 03:31
- matches 0010 which we can do with a handful of logic gates.
- 03:35
- Now that we know what instruction we’re dealing with, we can go ahead and perform
- 03:38
- that instruction which is the beginning of the execute phase!
- 03:41
- Using the output of our LOAD_A checking circuit, we can turn on the RAM’s read enable line
- 03:45
- and send in address 14.
- 03:47
- The RAM retrieves the value at that address, which is 00000011, or 3 in decimal.
- 03:53
- Now, because this is a LOAD_A instruction, we want that value to only be saved into Register A
- 03:58
- and not any of the other registers.
- 03:59
- So if we connect the RAM’s data wires to our four data registers, we can use our LOAD_A
- 04:04
- check circuit to turn on the write enable only for Register A.
- 04:07
- And there you have it -- we’ve successfully loaded the value at RAM address 14 into Register A.
- 04:12
- We’ve completed the instruction, so we can turn all of our wires off, and we’re
- 04:16
- ready to fetch the next instruction in memory.
- 04:18
- To do this, we increment the Instruction Address Register by 1 which completes the execute phase.
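Putting the three phases together, one complete fetch/decode/execute cycle for LOAD_A looks like this in our illustrative Python model (register and variable names are made up for the sketch):

```python
# One full fetch/decode/execute cycle for LOAD_A (hypothetical model).
ram = [0] * 16
ram[0] = 0b00101110   # LOAD_A 14
ram[14] = 0b00000011  # the value 3, waiting at address 14

registers = {"A": 0, "B": 0, "C": 0, "D": 0}
iar = 0  # instruction address register

instruction = ram[iar]                                    # fetch
opcode, operand = instruction >> 4, instruction & 0b1111  # decode
if opcode == 0b0010:                                      # execute: LOAD_A
    registers["A"] = ram[operand]                         # write only Register A
iar += 1                                                  # ready for the next instruction

print(registers["A"], iar)  # 3 1
```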
- 04:23
- LOAD_A is just one of several possible instructions that our CPU can execute.
- 04:28
- Different instructions are decoded by different logic circuits, which configure the CPU’s
- 04:32
- components to perform that action.
- 04:34
- Looking at all those individual decode circuits would be too much detail, so since we’ve looked at one example,
- 04:38
- we’re going to go ahead and package them all up as a single Control Unit to keep things simple.
- 04:43
- That’s right, a new level of abstraction.
- 04:51
- The Control Unit is comparable to the conductor of an orchestra, directing all of the different
- 04:55
- parts of the CPU.
- 04:57
- Having completed one full fetch/decode/execute cycle, we’re ready to start all over again,
- 05:02
- beginning with the fetch phase.
- 05:03
- The Instruction Address Register now has the value 1 in it, so the RAM gives us the value
- 05:07
- stored at address 1, which is 0001 1111.
- 05:12
- On to the decode phase!
- 05:13
- 0001 is the “LOAD B” instruction, which moves a value from RAM into Register B.
- 05:20
- The memory location this time is 1111, which is 15 in decimal.
- 05:24
- Now to the execute phase!
- 05:26
- The Control Unit configures the RAM to read address 15 and configures Register B to receive the data.
- 05:31
- Bingo, we just saved the value 00001110, or the number 14 in decimal, into Register B.
- 05:38
- Last thing to do is increment our instruction address register by 1, and we’re done with another cycle.
- 05:43
- Our next instruction is a bit different.
- 05:45
- Let’s fetch it.
- 05:46
- 1000 01 00.
- 05:49
- That opcode 1000 is an ADD instruction.
- 05:53
- Instead of a 4-bit RAM address, this instruction uses two sets of 2 bits.
- 05:57
- Remember that 2 bits can encode 4 values, so 2 bits is enough to select any one of our 4 registers.
- 06:02
- The first set of 2 bits is 01, which in this case corresponds to Register B, and the second set is 00, which is Register A.
- 06:09
- So “1000 01 00” is the instruction for adding the value in Register B into the value in register A.
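Pulling those two 2-bit selectors out of the instruction is just a couple of bit shifts. A sketch, using the episode's hypothetical encoding (00 = A, 01 = B, 10 = C, 11 = D):

```python
# Decoding the ADD instruction's two 2-bit register selectors (illustrative).
instruction = 0b10000100          # ADD B A

opcode = instruction >> 4         # 1000 -> ADD
src = (instruction >> 2) & 0b11   # 01   -> Register B (the source)
dst = instruction & 0b11          # 00   -> Register A (the destination)

print(format(opcode, "04b"), src, dst)  # 1000 1 0
```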
- 06:17
- So to execute this instruction, we need to integrate the ALU we made in Episode 5 into our CPU.
- 06:23
- The Control Unit is responsible for selecting the right registers to pass in as inputs,
- 06:27
- and configuring the ALU to perform the right operation.
- 06:30
- For this ADD instruction, the Control Unit enables Register B and feeds its value into
- 06:34
- the first input of the ALU.
- 06:36
- It also enables Register A and feeds it into the second ALU input.
- 06:40
- As we already discussed, the ALU itself can perform several different operations, so the
- 06:45
- Control Unit must configure it to perform an ADD operation by passing in the ADD opcode.
- 06:50
- Finally, the output should be saved into Register A. But it can’t be written directly
- 06:54
- because the new value would ripple back into the ALU and then keep adding to itself.
- 06:58
- So the Control Unit uses an internal register to temporarily save the output, turn off the
- 07:03
- ALU, and then write the value into the proper destination register.
- 07:07
- In this case, our inputs were 3 and 14, and so the sum is 17, or 00010001 in binary,
- 07:16
- which is now sitting in Register A. As before, the last thing to do is increment our instruction
- 07:20
- address by 1, and another cycle is complete.
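The execute phase of ADD, including the Control Unit's temporary register, can be sketched like this (again, a hypothetical model: the 8-bit mask stands in for the ALU's fixed word size):

```python
# The ADD execute phase, with the internal register that stops the sum
# from rippling back into the ALU (illustrative sketch).
registers = {"A": 3, "B": 14}

def alu_add(a, b):
    return (a + b) & 0xFF  # keep the result to 8 bits, like our ALU

temp = alu_add(registers["B"], registers["A"])  # save output internally first
registers["A"] = temp                           # then write the destination

print(registers["A"])  # 17
```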
- 07:23
- Okay, so let’s fetch one last instruction: 01001101.
- 07:29
- When we decode it we see that 0100 is a STORE_A instruction, with a RAM address of 13.
- 07:35
- As usual, we pass the address to the RAM module, but instead of read-enabling the memory, we write-enable it.
- 07:40
- At the same time, we read-enable Register A. This allows us to use the data line to
- 07:45
- pass in the value stored in register A.
- 07:47
- Congrats, we just ran our first computer program!
- 07:50
- It loaded two values from memory, added them together, and then saved that sum back into memory.
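The whole four-instruction program can be run by a tiny Python interpreter for our hypothetical CPU. Everything here (opcodes, register numbering, RAM layout) is the episode's made-up machine, not a real instruction set:

```python
# The full program from this episode, run by a sketch of the CPU.
ram = [0] * 16
ram[0] = 0b00101110   # LOAD_A 14
ram[1] = 0b00011111   # LOAD_B 15
ram[2] = 0b10000100   # ADD B A
ram[3] = 0b01001101   # STORE_A 13
ram[14] = 3
ram[15] = 14

reg = {0b00: 0, 0b01: 0}  # Register A is 00, Register B is 01
iar = 0
for _ in range(4):  # four instructions, stepped just like in the video
    instruction = ram[iar]
    opcode, operand = instruction >> 4, instruction & 0b1111
    if opcode == 0b0010:    # LOAD_A
        reg[0b00] = ram[operand]
    elif opcode == 0b0001:  # LOAD_B
        reg[0b01] = ram[operand]
    elif opcode == 0b1000:  # ADD: operand holds two 2-bit register selectors
        src, dst = (operand >> 2) & 0b11, operand & 0b11
        reg[dst] = (reg[src] + reg[dst]) & 0xFF
    elif opcode == 0b0100:  # STORE_A
        ram[operand] = reg[0b00]
    iar += 1                # on to the next instruction

print(ram[13])  # 17
```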
- 07:55
- Of course, by me talking you through the individual steps, I was manually transitioning the CPU
- 08:00
- through its fetch, decode and execute phases.
- 08:03
- But there isn’t a mini Carrie Anne inside of every computer.
- 08:06
- So the responsibility of keeping the CPU ticking along falls to a component called the clock.
- 08:10
- As its name suggests, the clock triggers an electrical signal at a precise and regular interval.
- 08:15
- Its signal is used by the Control Unit to advance the internal operation of the CPU,
- 08:19
- keeping everything in lock-step - like the dude on a Roman galley drumming rhythmically
- 08:23
- at the front, keeping all the rowers synchronized... or a metronome.
- 08:27
- Of course you can’t go too fast, because even electricity takes some time to travel
- 08:31
- down wires and for the signal to settle.
- 08:33
- The speed at which a CPU can carry out each step of the fetch-decode-execute cycle is called its Clock Speed.
- 08:39
- This speed is measured in Hertz - a unit of frequency.
- 08:42
- One Hertz means one cycle per second.
- 08:45
- Given that it took me about 6 minutes to talk you through 4 instructions -- LOAD, LOAD,
- 08:49
- ADD and STORE -- that means I have an effective clock speed of roughly 0.03 Hertz.
- 08:53
- Admittedly, I’m not a great computer but even someone handy with math
- 08:57
- might only be able to do one calculation in their head every second or 1 Hertz.
- 09:01
- The very first, single-chip CPU was the Intel 4004, a 4-bit CPU released in 1971.
- 09:08
- Its microarchitecture is actually pretty similar to our example CPU.
- 09:12
- Despite being the first processor of its kind, it had a mind-blowing clock speed of 740 Kilohertz
- 09:18
- -- that’s 740 thousand cycles per second.
- 09:22
- You might think that’s fast, but it’s nothing compared to the processors that we use today.
- 09:26
- One megahertz is one million clock cycles per second, and the computer or even phone
- 09:30
- that you are watching this video on right now is no doubt a few gigahertz -- that's
- 09:34
- BILLIONs of CPU cycles every… single... second.
- 09:38
- Also, you may have heard of people overclocking their computers.
- 09:41
- This is when you modify the clock to speed up the tempo of the CPU -- like when the drummer
- 09:45
- speeds up when the Roman Galley needs to ram another ship.
- 09:48
- Chip makers often design CPUs with enough tolerance to handle a little bit of overclocking,
- 09:52
- but too much can either overheat the CPU, or produce gobbledygook as the signals fall behind the clock.
- 09:58
- And although you don’t hear very much about underclocking, it’s actually super useful.
- 10:02
- Sometimes it’s not necessary to run the processor at full speed...
- 10:04
- maybe the user has stepped away, or isn’t running a particularly demanding program.
- 10:08
- By slowing the CPU down, you can save a lot of power, which is important for computers
- 10:13
- that run on batteries, like laptops and smartphones.
- 10:15
- To meet these needs, many modern processors can increase or decrease their clock speed
- 10:19
- based on demand, which is called dynamic frequency scaling.
- 10:23
- So, with the addition of a clock, our CPU is complete.
- 10:26
- We can now put a box around it, and make it its own component.
- 10:28
- Yup.
- 10:29
- A new level of abstraction!
- 10:37
- RAM, as I showed you last episode, lies outside the CPU as its own component, and they communicate
- 10:42
- with each other using address, data and enable wires.
- 10:45
- Although the CPU we designed today is a simplified example, many of the basic mechanics we discussed
- 10:50
- are still found in modern processors.
- 10:52
- Next episode, we’re going to beef up our CPU, extending it with more instructions as
- 10:56
- we take our first baby steps into software.
- 10:59
- I’ll see you next week.
- Hi, I’m Carrie Anne and this is Crash Course Computer Science!
- 00:06
- Last episode, we combined an ALU, control unit, some memory, and a clock together to
- 00:10
- make a basic, but functional Central Processing Unit – or CPU – the beating, ticking heart
- 00:14
- of a computer.
- 00:15
- We’ve done all the hard work of building many of these components from the electronic
- 00:19
- circuits up, and now it’s time to give our CPU some actual instructions to process!
- 00:23
- The thing that makes a CPU powerful is the fact that it is programmable – if you write
- 00:28
- a different sequence of instructions, then the CPU will perform a different task.
- 00:31
- So the CPU is a piece of hardware which is controlled by easy-to-modify software!
- 00:36
- INTRO
- 00:44
- Let’s quickly revisit the simple program that we stepped through last episode.
- 00:48
- The computer memory looked like this.
- 00:50
- Each address contained 8 bits of data.
- 00:52
- For our hypothetical CPU, the first four bits specified the operation code, or opcode, and
- 00:57
- the second set of four bits specified an address or registers.
- 01:00
- In memory address zero we have 0010 1110.
- 01:04
- Again, those first four bits are our opcode which corresponds to a “LOAD_A” instruction.
- 01:10
- This instruction reads data from a location of memory specified in those last four bits
- 01:14
- of the instruction and saves it into Register A. In this case, 1110, or 14 in decimal.
- 01:21
- So let’s not think of memory address 0 as “0010 1110”, but rather as the instruction
- 01:27
- “LOAD_A 14”.
- 01:28
- That’s much easier to read and understand!
- 01:30
- And for me to say!
- 01:31
- And we can do the same thing for the rest of the data in memory.
- 01:35
- In this case, our program is just four instructions long, and we’ve put some numbers into memory
- 01:39
- too, 3 and 14.
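Translating raw bits into readable mnemonics is what a disassembler does. A sketch, using the opcode table from these episodes (the mnemonic names are the ones used on screen):

```python
# "Disassembling" our hypothetical memory into readable instructions.
OPCODES = {0b0010: "LOAD_A", 0b0001: "LOAD_B", 0b1000: "ADD", 0b0100: "STORE_A"}

def disassemble(instruction):
    opcode, operand = instruction >> 4, instruction & 0b1111
    return f"{OPCODES[opcode]} {operand}"

print(disassemble(0b00101110))  # LOAD_A 14
```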
- 01:41
- So now let’s step through this program:
- 01:42
- First is LOAD_A 14, which takes the value in address 14, which is the number 3, and
- 01:47
- stores it into Register A.
- 01:49
- Then we have a “LOAD_B 15” instruction, which takes the value in memory location 15,
- 01:54
- which is the number 14, and saves it into Register B.
- 01:56
- Okay.
- 01:57
- Easy enough.
- 01:58
- But now we have an “ADD” instruction.
- 02:00
- This tells the processor to use the ALU to add two registers together, in this case,
- 02:04
- B and A are specified.
- 02:06
- The ordering is important, because the resulting sum is saved into the second register that’s specified.
- 02:11
- So in this case, the resulting sum is saved into Register A.
- 02:15
- And finally, our last instruction is “STORE_A 13”, which instructs the CPU to write whatever
- 02:19
- value is in Register A into memory location 13.
- 02:23
- Yesss!
- 02:24
- Our program adds two numbers together.
- 02:26
- That’s about as exciting as it gets when we only have four instructions to play with.
- 02:30
- So let’s add some more!
- 02:31
- Now we’ve got a subtract function, which like ADD, specifies two registers to operate on.
- 02:35
- We’ve also got a fancy new instruction called JUMP.
- 02:38
- As the name implies, this causes the program to “jump” to a new location.
- 02:42
- This is useful if we want to change the order of instructions, or choose to skip some instructions.
- 02:45
- For example, a JUMP 0 would cause the program to go back to the beginning.
- 02:48
- At a low level, this is done by writing the value specified in the last four bits into
- 02:53
- the instruction address register, overwriting the current value.
- 02:56
- We’ve also added a special version of JUMP called JUMP_NEGATIVE.
- 03:00
- This only jumps the program if the ALU’s negative flag is set to true.
- 03:04
- As we talked about in Episode 5, the negative flag is only set when the result of an arithmetic
- 03:08
- operation is negative.
- 03:10
- If the result of the arithmetic was zero or positive, the negative flag would not be set.
- 03:14
- So the JUMP NEGATIVE won’t jump anywhere, and the CPU will just continue on to the next instruction.
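The difference between the two jumps comes down to one condition. A sketch, assuming the ALU exposes a negative flag from its last arithmetic result:

```python
# JUMP vs JUMP_NEGATIVE, as updates to the instruction address register
# (illustrative model of this episode's hypothetical CPU).
def jump(iar, address):
    """Unconditional: always overwrite the instruction address register."""
    return address

def jump_negative(iar, address, negative_flag):
    """Conditional: only jump when the last ALU result was negative."""
    return address if negative_flag else iar + 1

print(jump(4, 2))                  # 2 -- always taken
print(jump_negative(4, 2, False))  # 5 -- falls through to the next instruction
print(jump_negative(4, 2, True))   # 2 -- the jump is taken
```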
- 03:19
- And finally, computers need to be told when to stop processing, so we need a HALT instruction.
- 03:24
- Our previous program really should have looked like this to be correct, otherwise the CPU
- 03:28
- would have just continued on after the STORE instruction, processing all those 0’s.
- 03:32
- But there is no instruction with an opcode of 0, and so the computer would have crashed!
- 03:36
- It’s important to point out here that we’re storing both instructions and data in the
- 03:40
- same memory.
- 03:41
- There is no difference fundamentally -- it’s all just binary numbers.
- 03:43
- So the HALT instruction is really important because it allows us to separate the two.
- 03:47
- Okay, so let’s make our program a bit more interesting, by adding a JUMP.
- 03:51
- We’ll also modify our two starting values in memory to 1 and 1.
- 03:55
- Let’s step through this program just as our CPU would.
- 03:58
- First, LOAD_A 14 loads the value 1 into Register A.
- 04:01
- Next, LOAD_B 15 loads the value 1 into Register B.
- 04:05
- As before, we ADD registers B and A together, with the sum going into Register A. 1+1 = 2,
- 04:11
- so now Register A has the value 2 in it (stored in binary of course)
- 04:15
- Then the STORE instruction saves that into memory location 13.
- 04:18
- Now we hit a “JUMP 2” instruction.
- 04:20
- This causes the processor to overwrite the value in the instruction address register,
- 04:24
- which is currently 4, with the new value, 2.
- 04:27
- Now, on the processor’s next fetch cycle, we don’t fetch HALT, instead we fetch the
- 04:31
- instruction at memory location 2, which is ADD B A.
- 04:34
- We’ve jumped!
- 04:35
- Register A contains the value 2, and register B contains the value 1.
- 04:38
- So 1+2 = 3, so now Register A has the value 3.
- 04:42
- We store that into memory.
- 04:44
- And we’ve hit the JUMP again, back to ADD B A.
- 04:47
- 1+3 = 4.
- 04:49
- So now register A has the value 4.
- 04:51
- See what's happening here?
- 04:52
- Every loop, we’re adding one.
- 04:53
- It’s counting up!
- 04:54
- Cooooool.
- 04:55
- But notice there’s no way to ever escape.
- 04:57
- We’re never.. ever.. going to get to that halt instruction, because we’re always going
- 05:01
- to hit that JUMP.
- 05:02
- This is called an infinite loop – a program that runs forever… ever… ever… ever…
- 05:07
- ever
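The counting loop can be sketched by stepping the ADD/STORE/JUMP cycle a fixed number of times in Python. The `range(3)` cap is only there so the sketch terminates; the actual program never does:

```python
# The counting loop from this episode, stepped for three trips around.
a, b = 1, 1     # the two starting values loaded from memory
memory13 = 0    # where STORE_A 13 writes

for _ in range(3):   # the real program loops forever -- this cap is ours
    a = a + b        # ADD B A
    memory13 = a     # STORE_A 13
                     # JUMP 2 sends us straight back to the ADD

print(memory13)  # 4 -- it counted 2, 3, 4
```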
- 05:08
- To break the loop, we need a conditional jump.
- 05:10
- A jump that only happens if a certain condition is met.
- 05:13
- Our JUMP_NEGATIVE is one example of a conditional jump, but computers have other types too - like
- 05:18
- JUMP IF EQUAL and JUMP IF GREATER.
- 05:20
- So let’s make our code a little fancier and step through it.
- 05:24
- Just like before, the program starts by loading values from memory into registers A and B.
- 05:28
- In this example, the number 11 gets loaded into Register A, and 5 gets loaded into Register B.
- 05:34
- Now we subtract register B from register A. That’s 11 minus 5, which is 6, and so 6
- 05:39
- gets saved into Register A.
- 05:40
- Now we hit our JUMP NEGATIVE.
- 05:42
- The last ALU result was 6.
- 05:44
- That’s a positive number, so the negative flag is false.
- 05:47
- That means the processor does not jump.
- 05:49
- So we continue on to the next instruction...
- 05:51
- ...which is a JUMP 2.
- 05:52
- No conditional on this one, so we jump to instruction 2 no matter what.
- 05:56
- Ok, so we’re back at our SUBTRACT Register B from Register A. 6 minus 5 equals 1.
- 06:01
- So 1 gets saved into register A.
- 06:03
- Next instruction.
- 06:04
- We’re back again at our JUMP NEGATIVE.
- 06:06
- 1 is also a positive number, so the CPU continues on to the JUMP 2, looping back around again
- 06:11
- to the SUBTRACT instruction.
- 06:13
- This time is different though.
- 06:14
- 1 minus 5 is negative 4.
- 06:17
- And so the ALU sets its negative flag to true for the first time.
- 06:20
- Now, when we advance to the next instruction,
- 06:23
- JUMP_NEGATIVE 5, the CPU executes the jump to memory location 5.
- 06:27
- We’re out of the infinite loop!
- 06:29
- Now we have an ADD B to A. Negative 4 plus 5 is positive 1, and we save that into Register A.
- 06:35
- Next we have a STORE instruction that saves Register A into memory address 13.
- 06:39
- Lastly, we hit our HALT instruction and the computer rests.
- 06:43
- So even though this program is only 7 instructions long, the CPU ended up executing 13 instructions,
- 06:49
- and that's because it looped twice internally.
- 06:52
- This code calculated the remainder if we divide 5 into 11, which is one.
- 06:56
- With a few extra lines of code, we could also keep track of how many loops we did, the count
- 07:00
- of which would be how many times 5 went into 11… we did two loops, so that means 5 goes
- 07:05
- into 11 two times... with a remainder of 1.
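The division-by-repeated-subtraction idea, with the loop counter added, can be sketched as a Python function. The structure mirrors the program: subtract, test the negative flag, loop, and finally add the divisor back to fix the overshoot:

```python
# Division by repeated subtraction, mirroring the episode's program
# (a sketch of the algorithm, not the actual machine code).
def divide(dividend, divisor):
    loops = 0
    while True:
        dividend -= divisor         # SUBTRACT B from A
        if dividend < 0:            # JUMP_NEGATIVE breaks us out of the loop
            break
        loops += 1                  # JUMP back around for another pass
    remainder = dividend + divisor  # the final ADD undoes the overshoot
    return loops, remainder         # quotient and remainder

print(divide(11, 5))  # (2, 1) -- 5 goes into 11 two times, remainder 1
```

And as the episode says, the same code works for any two numbers: `divide(54, 18)` or `divide(81, 7)` need no changes at all.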
- 07:08
- And of course this code could work for any two numbers, which we can just change in memory
- 07:12
- to whatever we want: 7 and 81, 18 and 54, it doesn’t matter -- that’s the power
- 07:17
- of software!
- 07:18
- Software also allowed us to do something our hardware could not.
- 07:21
- Remember, our ALU didn’t have the functionality to divide two numbers, instead it’s the
- 07:26
- program we made that gave us that functionality.
- 07:28
- And then other programs can use our divide program to do even fancier things.
- 07:32
- And you know what that means.
- 07:34
- New levels of abstraction!
- 07:41
- So, our hypothetical CPU is very basic – all of its instructions are 8 bits long, with
- 07:46
- the opcode occupying only the first four bits.
- 07:49
- So even if we used every combination of 4 bits, our CPU would only be able to support
- 07:54
- a maximum of 16 different instructions.
- 07:56
- On top of that, several of our instructions used the last 4 bits to specify a memory location.
- 08:01
- But again, 4 bits can only encode 16 different values, meaning we can address a maximum of
- 08:06
- 16 memory locations - that’s not a lot to work with.
- 08:10
- For example, we couldn’t even JUMP to location 17, because we literally can’t fit the number
- 08:15
- 17 into 4 bits.
- 08:16
- For this reason, real, modern CPUs use two strategies.
- 08:19
- The most straightforward approach is just to have bigger instructions, with more bits,
- 08:23
- like 32 or 64 bits.
- 08:25
- This is called the instruction length.
- 08:28
- Unsurprisingly.
- 08:29
- The second approach is to use variable length instructions.
- 08:32
- For example, imagine a CPU that uses 8 bit opcodes.
- 08:35
- When the CPU sees an instruction that needs no extra values, like the HALT instruction,
- 08:40
- it can just execute it immediately.
- 08:41
- However, if it sees something like a JUMP instruction, it knows it must also fetch
- 08:45
- the address to jump to, which is saved immediately behind the JUMP instruction in memory.
- 08:50
- This is called, logically enough, an Immediate Value.
- 08:52
- In such processor designs, instructions can be any number of bytes long, which makes the
- 08:56
- fetch cycle of the CPU a tad more complicated.
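That more complicated fetch cycle can be sketched like this. The opcode numbers here are invented for the example (they are not the 4004's or any real ISA's); the point is that instruction length depends on the opcode:

```python
# Variable-length fetch with 8-bit opcodes and optional immediate values
# (hypothetical opcode numbers, purely illustrative).
NEEDS_IMMEDIATE = {0x20}  # say, a JUMP opcode that carries a target address
HALT = 0xFF               # needs no extra values -- one byte long

def fetch_instruction(memory, address):
    """Return (opcode, immediate_or_None, address_of_next_instruction)."""
    opcode = memory[address]
    if opcode in NEEDS_IMMEDIATE:
        # The immediate value is saved right behind the opcode in memory.
        return opcode, memory[address + 1], address + 2  # two bytes long
    return opcode, None, address + 1                     # one byte long

program = [0xFF, 0x20, 0x07]  # HALT, then JUMP 7
print(fetch_instruction(program, 0))  # (255, None, 1)
print(fetch_instruction(program, 1))  # (32, 7, 3)
```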
- 08:59
- Now, our example CPU and instruction set is hypothetical, designed to illustrate key working
- 09:04
- principles.
- 09:05
- So I want to leave you with a real CPU example.
- 09:07
- In 1971, Intel released the 4004 processor.
- 09:11
- It was the first CPU put entirely into a single chip, and it paved the path to the Intel processors
- 09:15
- we know and love today.
- 09:17
- It supported 46 instructions, shown here.
- 09:19
- Which was enough to build an entire working computer.
- 09:22
- And it used many of the instructions we’ve talked about, like JUMP, ADD, SUBTRACT and LOAD.
- 09:26
- It also uses 8-bit immediate values, like we just talked about, for things like JUMPs,
- 09:31
- in order to address more memory.
- 09:33
- And processors have come a long way since 1971.
- 09:36
- A modern computer processor, like an Intel Core i7, has thousands of different instructions
- 09:41
- and instruction variants, ranging from one to fifteen bytes long.
- 09:44
- For example, there are over a dozen different opcodes just for variants of ADD!
- 09:48
- And this huge growth in instruction set size is due in large part to extra bells and whistles
- 09:52
- that have been added to processor designs over time, which we’ll talk about next episode.
- 09:57
- See you next week!