April 03, 2006
Monday Musing: The Palm Pilot and the Human Brain
Today I would like to explain something scientists know well: how computers work, and then use some of the conceptual insights from that discussion to present an interesting recent model of something relatively unknown: how human brains might work. (This model is due to Jeff Hawkins, the inventor of the Palm Pilot--a type of computer, hence the title of this essay.) This may well be rather too-ambitious a task, but oh, well, let's see how it goes...
Part I: How Computers Work
Far too few people understand how computers operate. Even many professional computer programmers would be hard-pressed to explain the workings of the actual hardware, and may well never have heard of an NPN junction, while the average computer user rarely bothers to wonder what goes on inside the CPU (Central Processing Unit, like Intel's Pentium chip, for example) of her machine when she highlights a paragraph in MS Word, clicks the "justify" button on the toolbar, and the right margin of the text is instantly and seemingly magically aligned. This lack of curiosity about an amazing technological achievement is inexplicable to me, and it is a shame, because computers are extremely beautiful in their complex, multi-layered structure. How is it that a bunch of electrons moving around on a tiny silicon wafer deep inside your machine manages to right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather system, and do a million other things?
What's equally weird to me is that I haven't ever seen a short, comprehensive, and comprehensible explanation of how computers work, so I'm going to give you one. This isn't going to be easy for me or for you, because computers are not trivial things, but I am hoping to provide a fairly detailed description, not just a bunch of confusing analogies. In other words, this is going to take some strenuous mental effort, and I encourage you to click on the links that I will try to provide, for further details and discussion of some of the topics I bring up. (The beginning part may be tedious for some of you who already know something about computers, but please bear with me.) Last preliminary comment: I will try to make this as simple as possible, but for those of you who don't know extremely basic things like what electrons are, I really don't know what to tell you, except that you should. (The humanities equivalent of this scientific ignorance might be someone who doesn't know what, say, a sonnet is.) Oh, go ahead, click on "electrons." I'll wait.
Computers are organized hierarchically with layers of conceptual complexity built one on top of the other. This is similar to how our brains work. What I mean is the following: suppose my wife Margit asks me to go buy some bread. It is a simple enough instruction, and she can be fairly certain that a few minutes later I will return with the bread. Here's what happens in my brain when I hear her request: I break it down into a series of smaller steps something like
Get bread: START
- Get money and apartment keys.
- Go to supermarket.
- Find bread.
- Pay for bread.
- Return with bread.
Each of these steps is then broken down into smaller steps. For example, "Go to supermarket" may be broken down as follows:
Go to supermarket: START
- Exit apartment.
- Walk downstairs.
- Turn left outside the building and walk until Broadway is reached.
- Make right on Broadway and walk one and a half blocks to supermarket.
- Make right into supermarket entrance.
Similarly, "Exit apartment" is broken down into:
Exit apartment: START
- Get up off couch.
- Walk forward three steps.
- Turn right and go down hallway until the front door.
- If door chain is on, undo it.
- Undo deadbolt lock on door.
- Open door.
- Step outside.
Well, you get the idea. Of course, "Get up off couch" translates into things like "Bend forward" and "Push down with legs to straighten body," etc. "Bend forward" itself translates into a whole sequence of coordinated muscular contractions. Each muscle contraction is actually a series of biochemical events that take place in the nerve and muscle fibres, and you can continue breaking each step down in this manner to the molecular or atomic level. Notice that most of the action occurs below the threshold of consciousness, with only the top couple of levels normally available to our conscious minds. Also, I have simplified the example in significant ways, most importantly by neglecting the role of memory retrieval and storage. (There are many retrievals involved here, such as remembering where the store is, where my apartment door is, and even how to walk!) Each subset of instructions in this example is what has come to be known as a subroutine. The beauty of this scheme is that once you have worked out the sequence of smaller steps needed to accomplish a repetitive task which is one level higher in the hierarchy, you can just store that sequence in memory, and you don't ever need to work it out again. In other words, you can combine subroutines from a given layer into a subroutine which accomplishes some more general task in a higher layer. For example, one could combine the "Get bread" subroutine with the "Get newspaper" and "Get eggs" and "Get coffee" and "Drop off dry-cleaning" subroutines into a "Sunday morning chores" subroutine, which I might then do with little thought every Sunday morning.
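The hierarchy above can be sketched as code, with each subroutine built out of the ones below it. (All the names and steps here are just illustrative, of course; this is a toy, not how brains or real programs store routines.)

```python
def exit_apartment():
    # the lowest level we bother to spell out here
    return ["get up off couch", "walk down hallway",
            "undo chain and deadbolt", "open door", "step outside"]

def go_to_supermarket():
    # reuses a lower-level subroutine as a single higher-level step
    return exit_apartment() + ["walk downstairs", "turn left, walk to Broadway",
                               "right on Broadway for 1.5 blocks", "enter supermarket"]

def get_bread():
    return (["get money and keys"] + go_to_supermarket() +
            ["find bread", "pay for bread", "return with bread"])

def sunday_chores():
    # higher-level routines are just combinations of stored lower-level ones
    return get_bread() + ["get newspaper", "get eggs", "drop off dry-cleaning"]
```

The point is that once `exit_apartment` is written, every routine above it gets its steps "for free," without ever working them out again.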
This is how computers are able to do such extraordinary things. But I would like to explain some of the detail to you, and the best way to explain it is, I think, again by example. When a user highlights a paragraph of text and clicks on the justify button, here is some of what happens: a subroutine perhaps called "Justify right-hand margin" kicks in. (What that means is that control of the CPU is turned over to this subroutine.) This is what a primitive form of the subroutine might look like (in actual fact, many other things are taken into account) in what programmers call pseudo-code (an informal preliminary way of writing instructions--or laying out an algorithm--which are later carefully translated by the programmer into a higher-level computer language such as BASIC, FORTRAN, Pascal, or C):
Justify right-hand margin: START
1. First determine the printed width of the text by subtracting the left margin position from the right.
2. Build a line of text by getting the input (paragraph) text a word at a time. Test to see that the length of the text is less than the printed width.
3. Output the first word with no following space.
4. Determine the length of the remaining text, the available space, and the number of word spaces (the same as the remaining words). Divide to get the target word space. (Be sure to take into account the spaces in the string.)
5. Output the word space, and the next word.
6. Return to STEP 4 if there is more to print.
7. Return to STEP 2 for the next line, until no more lines are left.
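For the curious, here is one minimal way the pseudo-code above might be rendered as real code. This is my own sketch, not what any actual word processor does; it assumes no single word is wider than the line, and it leaves the last line of the paragraph ragged, as justified text normally is.

```python
def justify_line(words, width):
    # Distribute the leftover space across the gaps between words so the
    # line ends exactly at `width`; earlier gaps absorb any remainder.
    if len(words) == 1:
        return words[0]               # nothing to spread
    gaps = len(words) - 1
    spare = width - sum(len(w) for w in words)
    line = words[0]
    for i, word in enumerate(words[1:]):
        pad = spare // gaps + (1 if i < spare % gaps else 0)
        line += " " * pad + word
    return line

def justify(text, width):
    # Greedy fill: pack words into lines, then justify all but the last line.
    lines, current = [], []
    for word in text.split():
        if current and len(" ".join(current + [word])) > width:
            lines.append(justify_line(current, width))
            current = []
        current.append(word)
    if current:
        lines.append(" ".join(current))  # the last line is left ragged
    return lines
```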
Of course, FORTRAN, Pascal, or C programmers don't spend a lot of time actually writing the code for such things. (The actual "text" of higher-level computer languages is called "code," and the part of programming which takes an algorithm such as the one given above and translates it into the particular syntax of a given language such as C is called "coding.") Once such a routine has been written by someone (anyone), it is put into a library of subroutines and can subsequently be used by anyone needing to (in this case) justify text. Such libraries of useful subroutines are widely available to programmers.
Suppose this subroutine above were written in C. Now what happens to the C code? Who reads that? Well, the way it works is this: there is a program called a compiler, which takes the C code and breaks each of its instructions down into a simpler language called assembly language. Assembly language is a limited set of instructions which can be understood by the hardware (the CPU) itself. It consists of instructions like ADD (a, b, c) which, on a given CPU, might mean, "add the content of memory location a to the content of memory location b and store the result in memory location c." Different CPUs have different instruction sets (and therefore different assembly languages), but the same higher-level language can be used on all of them, because a compiler for each type of CPU will translate the higher-level language into the appropriate assembly language for that CPU. In this way, a program I have written in C to justify text can easily be ported over to a different computer (from a PC to a Mac, say) in the higher-level language, without having to worry about how the lower levels accomplish their task. Are you with me? Reread this paragraph if you need to.
Actually, assembly language is itself translated by a program called an assembler into what is called machine language, which is simply a series of zeroes and ones that the hardware can "understand" and operate on. Now we get to the hardware itself. How does the hardware actually perform the instructions given to it? This time, let us start at the bottom of the hierarchy, the silicon itself, and build up from there. Stay with me now!
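As a toy illustration of what "executing instructions" amounts to, here is a sketch of a machine that runs a list of three-operand instructions like the ADD described above. The opcodes and instruction format are invented for illustration; a real machine language encodes all of this as zeroes and ones.

```python
def run(program, memory):
    # Interpret a list of (opcode, a, b, c) instructions, each naming two
    # source memory locations and one destination location.
    for op, a, b, c in program:
        if op == "ADD":
            memory[c] = memory[a] + memory[b]
        elif op == "SUB":
            memory[c] = memory[a] - memory[b]
        else:
            raise ValueError("unknown opcode: " + op)
    return memory

mem = run([("ADD", 0, 1, 2),   # mem[2] = mem[0] + mem[1]
           ("SUB", 2, 1, 3)],  # mem[3] = mem[2] - mem[1]
          [7, 5, 0, 0])
```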
If certain impurities (boron, aluminum, or gallium, for example) are added to the semiconductor material (a process called doping), in this case silicon, it turns into what is known as P-type silicon. This type of silicon has deficiencies of valence electrons called "holes." Another type of silicon, produced by doping with different impurities (antimony, arsenic, or phosphorus, for example), is known as N-type silicon; this has an excess of free electrons, greatly increasing the intrinsic conductivity of the semiconductor material. The interesting thing is that one can place these two materials in contact with one another, and this "junction" behaves differently than either of the two types of silicon by itself: a P-N junction allows an electric current to flow in one direction, but not in the other. This device is known as a diode.
A transistor is a device with three terminals (a triode, like the glass vacuum tubes of old): the base, the collector, and the emitter. In this device, the current flowing at the collector is controlled by the current between the base and the emitter. Transistors can be used as amplifiers but, more importantly in the case of computers, as switches. If you take two P-N junctions and combine them, creating a kind of sandwich, you get either an NPN junction or a PNP junction. These both then function as types of transistors, specifically bipolar junction transistors (BJTs). In the case of the NPN-type transistor, the N on one side acts as the emitter, the P is the base, and the other N is the collector. Refer to the diagram above for more about how exactly these work in terms of the underlying electronics.
Digital Logic Gates
Once we have transistors, we can do something very neat with them: we can combine them into what are known as logic gates. These are best explained by example. Imagine a device with two inputs and one output which behaves in the following way: if a voltage is applied to both inputs, the voltage is also present at the output; otherwise, the output remains at zero. (So if neither or only one of the inputs has a voltage present, the output is zero.) This is known as an AND gate, because its output is positive if and only if the first input AND the second input are "on." (This "on" state of high voltage usually is used to represent the number 1, while no voltage, or low voltage, is used to represent the number 0.) Similarly, the output of an OR gate is "1" if either the first input OR the second input OR both of the inputs are "1". (Still with me? Good. All kinds of exciting stuff is coming up.) A NOT gate simply reverses a 1 to a 0 and vice versa. There are other logic gates, but we won't bother with them because they can all be simulated by combinations of something called a NAND gate. This is just an AND gate followed by a NOT gate. In other words, its output is 0 only if both inputs are 1; otherwise its output is always 1. (See the "truth table" at the right. A and B are the inputs and X is the output.)
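As a sketch, the gates just described can be modeled as tiny functions on the values 0 and 1 (Python here, purely for illustration; real gates are voltages, not function calls):

```python
def AND(a, b):
    return a & b           # 1 only if both inputs are 1

def OR(a, b):
    return a | b           # 1 if either input (or both) is 1

def NOT(a):
    return 1 - a           # flips 0 to 1 and 1 to 0

def NAND(a, b):
    return NOT(AND(a, b))  # an AND followed by a NOT
```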
The really cool thing here is that one can combine two of the transistors discussed in the previous section to form a NAND gate. (See the diagram at right to see how they are connected together.) And as I mentioned before, NAND gates can then be connected together in ways that can simulate any kind of logic gate.
Not only that, there are ways of connecting NAND gates together to implement any binary digital function for which we can supply a truth table, with as many inputs and outputs as needed. This is known as digital logic and we can use it to, for example, add two binary numbers, each consisting of some fixed number of zeroes and ones. As I am sure you know, we can represent numbers in any base (including our usual decimal numbers) as binary numbers, so this is a very powerful way of manipulating numbers. In fact we can do many amazing things with these types of gates, including evaluating any statements of propositional logic. This is really the conceptual heart of computing.
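To make NAND's universality concrete, here is a sketch in which NOT, AND, OR, and XOR are each built out of nothing but NAND, and then combined into a half adder, the basic building block for adding binary numbers:

```python
def NAND(a, b):
    # the one primitive gate; everything below uses only this
    return 0 if (a and b) else 1

def NOT(a):
    return NAND(a, a)

def AND(a, b):
    return NAND(NAND(a, b), NAND(a, b))

def OR(a, b):
    return NAND(NOT(a), NOT(b))

def XOR(a, b):
    c = NAND(a, b)
    return NAND(NAND(a, c), NAND(b, c))

def half_adder(a, b):
    # adds two one-bit numbers, returning (sum bit, carry bit):
    # e.g. 1 + 1 = binary 10, i.e. sum 0 with a carry of 1
    return XOR(a, b), AND(a, b)
```

Chaining such adders bit by bit (with carry handling) gives the multi-bit binary addition mentioned above.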
Now, you should have at least a rough idea of how we can use bits of silicon to do things like add and subtract binary numbers by using voltages to represent zeroes and ones. But what do we do with the result? In other words, where do we store things? This brings us to the other major component of computing: memory.
A flip-flop is a device which can be used to store one bit (a zero or a one) of information. Can you guess how flip-flops are made? Yep, you got it: following our procedure of building each layer out of the pieces from the previous section, this time we combine NAND gates in ingenious ways to construct them.
There are various types of flip-flops. A flip-flop usually has one or two inputs, an output, and an input from a clock signal. (This is why computers must have clocks, and it is the speed of these clocks which is measured when you are told that your laptop runs at, say, 1.9 gigahertz, meaning that the clock signal flips and flops between 0 and 1, back and forth, 1.9 billion times per second.) I will here describe a simple type of flip-flop called an SR (or Set/Reset) flip-flop. This is how Wikipedia describes it:
The "set/reset" flip-flop sets (i.e., changes its output to logic 1, or retains it if it's already 1) if both the S ("set") input is 1 and the R ("reset") input is 0 when the clock is strobed. The flip-flop resets (i.e., changes its output to logic 0, or retains it if it's already 0) if both the R ("reset") input is 1 and the S ("set") input is 0 when the clock is strobed. If both S and R are 0 when the clock is strobed, the output does not change. If, however, both S and R are 1 when the clock is strobed, no particular behavior is guaranteed. This is often written in the form of a truth table. [See the table at right.]
So, for example, if I want to store a 1 as the output of the flip-flop, I would put 1 on the S input and 0 on the R input. When the clock strobes (flips up to 1), the flip-flop will set to the 1 state as the output. I know this sounds confusing, but just reread it until you are convinced it works. Similarly, I can reset it to the zero state by putting 1 on the R input and 0 on the S input. So how are these things constructed out of NAND gates?
I hereby present to you, the SR flip-flop in all its immense digital logic beauty:
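The circuit can also be simulated in code. Here is a sketch of a clocked SR latch built from NAND gates; for simplicity it uses the four-gate gated-latch form rather than the full eight-gate arrangement in the diagram, and it iterates the cross-coupled pair a few times to stand in for the real circuit's feedback settling.

```python
def NAND(a, b):
    return 0 if (a and b) else 1

def sr_latch_step(S, R, clk, Q, Qbar):
    # Two NANDs gate the inputs with the clock, so S and R only have
    # an effect while the clock is high...
    s_gated = NAND(S, clk)
    r_gated = NAND(R, clk)
    # ...and two cross-coupled NANDs hold the stored bit. Iterating a
    # few passes lets the feedback loop reach its stable state.
    for _ in range(4):
        Q, Qbar = NAND(s_gated, Qbar), NAND(r_gated, Q)
    return Q, Qbar
```

With the clock high, S=1/R=0 drives the output to 1, S=0/R=1 drives it to 0, and S=R=0 leaves the stored bit untouched; with the clock low, the inputs are ignored entirely. That is memory.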
If you bought a computer recently, it may well have come with one billion bytes of internal RAM. It takes eight flip-flops to hold one byte of information, and as you can see, it takes eight NAND gates to make this one basic flip-flop which will hold one bit in memory. That's 128 transistors for a byte of data! Now you know why the original ENIAC computer, which functioned using vacuum tubes instead of transistors, filled a large hall. These days, we can put billions of transistors on a single small silicon chip. (There are ways to make this more efficient; my example is only for rough illustrative purposes.)
There are other flip-flops (such as the JK flip-flop) which eliminate the uncertain state of the SR flip-flop when both inputs are 1. There are further improvements and efficiencies which I won't get into here. (That would be like getting into the choice of materials for head gaskets while trying to explain how an internal combustion engine works.)
So, starting with just simple bits of silicon, we have seen how we build layer upon conceptual layer (there are armies of engineers and scientists who specialize in each) until we have a processor which can perform arithmetic and logical functions, as well as a memory which can hold the results (or anything else). This is pretty much it! These are the elements which are used to design a machine language for a particular CPU (like the Pentium 4, say). And I have already described the software layers which sit on top of the hardware. I am sure it is obvious that there is much more (libraries-full) to every part of this (for example, I have said nothing about what an operating system does as part of the software layers), but broadly conceptually speaking, this is about it. If you have followed what I have laid out, you now know how electrons zipping about on little pieces of silicon right-justify your text, calculate your taxes, demonstrate a mathematical proof, model a weather system, and do the million other things computers can do.
I am running out of time and space in this column, so I will continue part II next time on April 17th. Look out for it then, and have a great week!
Posted by S. Abbas Raza at 03:23 PM | Permalink