
[Aug. 17, 1964] Yes and No (Talking to a Machine, Part 1)


by Gideon Marcus

Making sense of it all

Computers can do amazing things these days. Twenty years ago, they were vacuum tube-filled monstrosities purpose-built for calculating artillery trajectories; now, they are sleek, transistorized mini-monstrosities that do everything from calculating income tax to booking vacations across multiple airlines. It used to be that "computers" were mathematically inclined women; these days, digital computers do everything those able women did, and many times faster.

This is an absolute miracle when you realize just how limited a digital computer really is. It's about the dumbest, simplest thing you can imagine. Appropriately, the successful operation of a computer, and the programming of those operations, are among the more abstruse topics I've come across. Certainly, no one has ever been able to give me a concise education on the subject.

I'm a naive (or arrogant) person. I'm going to try to give you one. It's a complex topic, though, so I'm going to try to break it into "bite"-sized parts. Read on for part one!

Ones and Zeroes

Whether you know it or not, you are already familiar with the concept of binary. Your light switch is either on or off. Your television, your radio, your blender — all of them are either in operation or not. There is no in-between (or, at least, there shouldn't be).

A digital computer is nothing but a big bunch of places where you process ons and offs; for simplicity's sake, let's call an off "0" and an on "1". Inside every computer is a place for storing 1s and 0s called its "memory". If you've ever seen medieval chain mail, you have an idea what it looks like: a net of tiny metal rings, each of which can be individually magnetized. If a ring is magnetized in one direction, the computer sees it as on, or "1"; magnetized in the other direction, it sees it as off, or "0".

Now, there's not a lot of information you can store in a single ring: just its on/off state. But what if you grouped eight of these binary digits (or "bits") so that your computer knew they were associated? Then you could have all sorts of 8-digit groups (called "bytes"). For instance:

00000000
11111111
11110000
00001111
10000001

and so on. All told, you could have 256 combinations of ones and zeroes in each of these groups, and that's enough to be useful. Here's how.
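Why 256? Two choices per digit, eight digits in a row: 2 × 2 × 2 × 2 × 2 × 2 × 2 × 2 = 256. If you don't trust my arithmetic and happen to have machine time handy, here is a quick sketch (in a present-day notation, not anything tied to a particular machine) that simply counts every possible pattern of eight ones and zeroes:

  # Count every possible pattern of eight binary digits.
  from itertools import product

  patterns = ["".join(bits) for bits in product("01", repeat=8)]
  print(len(patterns))    # 256
  print(patterns[0])      # 00000000
  print(patterns[255])    # 11111111
  print(patterns[129])    # 10000001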

Three simple tasks

A computer, at its heart, can do just three things:

  1. Store information. Think of a computer's memory as a post office in which each byte gets its own mailbox. In computing, these mailboxes are called addresses. Each address can store one of the 256 possible combinations of ones and zeroes.
  2. Do arithmetic. A computer is designed to be able to add, subtract, multiply, and divide.
  3. Compare numbers. A computer can look at two different numbers and tell you if one is equal to, greater than, or less than the other.

That's it! When I first learned that, I (like you) wondered "how the heck can something like that do something complicated like making sure my Allegheny Airlines reservation gets transferred to Eastern for my trip to New York?"

As it turns out, these three basic computer functions are sufficient for that task — if you are clever in how you instruct a computer to do them.
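To make those three abilities a little more concrete before we go on, here is a bare sketch in a present-day notation; the addresses and values are invented for illustration, just like the commands later in this article:

  memory = [0] * 256                   # 1. Store information: 256 numbered "mailboxes"
  memory[1] = 1                        #    put a value in address #1
  memory[2] = 2                        #    put a value in address #2

  memory[3] = memory[1] + memory[2]    # 2. Do arithmetic

  if memory[3] > memory[2]:            # 3. Compare numbers
      print("Address #3 holds the larger number:", memory[3])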

Talking in numbers

Remember that a computer can only speak in binary digits ("bits" for short). Let's say a computer has been hard-coded to know that when you input "110", you mean "store the following number in the following address." If you input "101", it means "add the number in the following address to whatever is in this other, following address." And let's say "111" means "print out whatever is in the following address."

A very simple program, computing A + B = C, might look like this (for the sake of simplicity, let's say that your computer's memory has 256 addresses in which it can store bytes, each addressed with the digits 00000000 through 11111111):

  1. 110 1 00000001
  2. 110 10 00000010
  3. 110 0 00000011
  4. 101 00000001 00000011
  5. 101 00000010 00000011
  6. 111 00000011

In English, that's:

  1. Put "1" in address #1.
  2. Put "2" in address #2. (How does 10 equal 2? Just like when you add 1 to 9 in normal, base 10 arithmetic, you make the ones place 0 and carry the one into the tens place. In binary, 1 is the most that can ever fit into a digit, so if you add 1, you make that place zero and carry the 1 to the next place over.

    Thus 1 + 1 = 10 (2), 10 (2) + 1 = 11 (3), 10 (2) + 10 (2) = 100 (4) …and 11111111 = 255!)

  3. Put "0" in address #3 (just to make sure we're starting from zero; if a program had used that byte before, it might not be empty!)
  4. Add whatever is in address #1 (in this case, 1) to whatever's in address #3 (so far, nothing).
  5. Add whatever is in address #2 (in this case, 2) to whatever's in address #3 (so far, 1).
  6. Show me what's in address #3. The answer should be "3" (because 1 + 2 = 3), except it will probably be displayed as "11", because this is a computer we're talking about.
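If you'd like to watch that little program "run," here is a sketch of a pretend machine in a present-day notation. Remember, the opcodes 110, 101, and 111 are the made-up ones from above, and everything else about this machine is equally invented:

  # A toy interpreter for the made-up machine: 110 stores a value,
  # 101 adds one address into another, 111 prints an address.
  program = [
      "110 1 00000001",         # put 1 in address #1
      "110 10 00000010",        # put 2 in address #2
      "110 0 00000011",         # put 0 in address #3
      "101 00000001 00000011",  # add address #1 into address #3
      "101 00000010 00000011",  # add address #2 into address #3
      "111 00000011",           # print what's in address #3
  ]

  memory = [0] * 256

  for line in program:
      opcode, *operands = line.split()
      if opcode == "110":                             # store
          value, address = operands
          memory[int(address, 2)] = int(value, 2)
      elif opcode == "101":                           # add
          source, destination = operands
          memory[int(destination, 2)] += memory[int(source, 2)]
      elif opcode == "111":                           # print (in binary, of course)
          memory_value = memory[int(operands[0], 2)]
          print(bin(memory_value)[2:])                # prints 11, which is 3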

Good grief, that's a headache, and that's just for one simple bit of math. The first big problem is just remembering the commands. How is anyone supposed to look at that code and know what those initial numbers mean?

An easier way

The folks at IBM, Univac, CDC, etc. solved that particular problem pretty easily. They designed a program (entered in binary) that translates easier-to-remember three-letter codes into binary numbers. Thus, someone could write the above program as, for example:

  1. STO 1 00000001
  2. STO 10 00000010
  3. STO 0 00000011
  4. ADD 00000001 00000011
  5. ADD 00000010 00000011
  6. SHO 00000011

STO, ADD, and SHO make a bit more intuitive sense than strings of numbers, after all.

And since you can translate letters to binary, why not numbers and addresses?

  1. STO 1 A1
  2. STO 2 A2
  3. STO 0 A3
  4. ADD A1 A3
  5. ADD A2 A3
  6. SHO A3

Note: these are not commands in any actual language; I made them up. Each computer system will have its own set of commands unique to that system, but real code will look something like this.

This easier-to-understand mnemonic language is called "Assembly" because a program (called an assembler) assembles your commands into something the computer understands (remember: it only knows ones and zeroes).
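For our made-up language, the assembler's job boils down to a table lookup: swap each mnemonic for its binary opcode and leave the rest of the line alone. Here is a sketch of that idea in a present-day notation (a real assembler does much more, such as keeping track of symbolic addresses like A1, A2, and A3):

  # Translate the made-up mnemonics into the made-up binary opcodes.
  OPCODES = {"STO": "110", "ADD": "101", "SHO": "111"}

  def assemble(line):
      mnemonic, rest = line.split(maxsplit=1)
      return OPCODES[mnemonic] + " " + rest

  source = [
      "STO 1 00000001",
      "STO 10 00000010",
      "STO 0 00000011",
      "ADD 00000001 00000011",
      "ADD 00000010 00000011",
      "SHO 00000011",
  ]

  for line in source:
      print(assemble(line))   # reproduces the all-binary program from earlier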

Hitting the ceiling

Assembly makes it easier to program a computer, but it's still tedious. Just adding 1 + 2 took six lines. Imagine wanting to do something simple like computing the hypotenuse of a right triangle:

In geometry class, we learned that A² + B² = C².

The first part of that is easy enough.

  1. STO A A1 (store A in address A1)
  2. STO B A2 (store B in address A2)
  3. STO 0 A3 (Clear out address A3 for use)
  4. MUL A1 A1 (multiply what's in A1 by itself)
  5. MUL A2 A2 (multiply what's in A2 by itself)
  6. ADD A1 A3 (add what's now in A1 to what's in A3)
  7. ADD A2 A3 (add what's now in A2 to what's in A3)

All right. That gets us A² + B² in the third address…but how do we get the square root of C²?

When I wrote this, I had no idea. I've since talked to a programmer. She showed me a thirty-line program that I still don't understand. Sure, it works, but thirty lines for a simple equation? There has to be an easier way, one that doesn't involve me pulling out my accursed slide rule.
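I can't reproduce her thirty lines here, but to give you the flavor of what such a routine does, here is a sketch (in a present-day notation, and emphatically not her routine) of one classic trick for taking a square root with nothing but the arithmetic and comparison we already have: guess, check, and refine the guess, which the mathematicians call Newton's method.

  # Take a square root using only arithmetic and comparison: start with a
  # guess and keep refining it until squaring the guess gets close enough.
  def square_root(n, tolerance=0.000001):
      guess = n / 2.0 if n > 1 else 1.0
      while abs(guess * guess - n) > tolerance:   # compare
          guess = (guess + n / guess) / 2.0       # add and divide to refine
      return guess

  a, b = 3.0, 4.0
  c_squared = a * a + b * b        # the A² + B² part from the program above
  print(square_root(c_squared))    # approximately 5

Clever, but that's still an awful lot of guessing and refining for one little triangle, so my complaint stands: there has to be an easier way.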

There is! To find out how, join us for the next installment of this series!


[Come join us at Portal 55, Galactic Journey's real-time lounge! Talk about your favorite SFF, chat with the Traveler and co., relax, sit a spell…]