Tag Archives: computers

[May 10, 1965] A Language for the Masses (Talking to a Machine, Part Three)

This is part three of our series on programming in the modern computer age.  Last time, we discussed the rise of user-oriented languages.  We now report on the latest of them and why it's so exciting.


by Gideon Marcus

Revolution in Mathematics

The year was 1793, and the new Republic of France was keen to carry its revolution to the standardization of weights and measures.  The Bureau du cadastre (surveying) had been tasked to construct the most accurate tables of logarithms ever produced, based on the recently developed, more convenient decimal division of the angles.  Baron Gaspard Clair François Marie Riche de Prony was given the job of computing the logarithms of all integers from 1 to 200,000 — to more than 14 places of accuracy!

Recognizing that there were not enough mathematicians in all of France to complete this project in a reasonable amount of time, he turned to another revolutionary concept: the assembly line.  Borrowing inspiration from the innovation as described in Adam Smith's Wealth of Nations, he divided the task into three tiers.  At the top level were five or six of the most brilliant math wizards, including Adrien-Marie Legendre.  They selected the best formulas for computation of logarithms.  These formulas were then passed on to eight numerical analysts expert in calculus, who developed procedures for computation as well as error-check computations.  In today's parlance, those top mathematicians would be called "systems analysts" and the second tier folks would be "programmers."

Of course, back then, there were no digital machines to program.  Instead, de Prony assembled nearly a hundred (and perhaps more) human "computers." These men were not mathematicians; indeed, the only operations they had to conduct were addition and subtraction!  Thanks to this distributed labor system, the work was completed in just two years.

The Coming Revolution

These days, thanks to companies like IBM, Rand, and CDC, digital computers have become commonplace — more than 10,000 are currently in use!  While these machines have replaced de Prony's human calculators, they have created their own manpower shortage.  With computation so cheap and quick, and applications of it so legion, the bottleneck is now programmers.  What good does it do to have a hundred thousand computers in the world (a number being casually bandied about for near-future years like 1972) if they sit idle with no one to feed them code?

As I showed in the first article of this series, communication between humans and the first computers required rarefied skills and training.  For this reason, the first English-like programming languages were invented; they made coding more accessible and easier to learn. 

But developing programs in FORTRAN or COBOL or ALGOL is still challenging.  Each of these languages is specialized for its particular function: FORTRAN, ALGOL, and LISP are for mathematical formulas, COBOL for business and record keeping.  Moreover, all of these "higher-level" programming languages require a compiler, a program that turns the relatively readable stuff produced by the programmer into the 1s and 0s a computer can understand.  It's an extra bit of work every time, and every code error that stalls the compiler is a wasted chunk of precious computer time.

By the early 1960s, there were folks working on both of these problems — the solution combined answers to both.

BASICally

In 1963 Dartmouth Professor John Kemeny got a grant from the National Science Foundation to implement a time-sharing system on a GE-225 computer.  Time-sharing, if you recall from Ida Moya's article last year, allows multiple users to access a computer at the same time, the machine running multiple processes simultaneously.


Photo Credit: Dartmouth College

Kemeny and his team, including Professor Thomas Kurtz and several undergrads, succeeded in completing the time-share project.  Moreover, in the interest of making computing available to everyone, they also developed a brand-new programming language. 

Beginner's All-purpose Symbolic Instruction Code, or BASIC, was the first language written specifically for novices.  In many ways, it feels similar to FORTRAN.  Here's an example of the "add two numbers" program I showed you last time:

5 PRINT "ADD TWO NUMBERS"
6 PRINT
10 READ A, B
20 LET C=A+B
30 PRINT "THE ANSWER IS", C
50 PRINT "ANOTHER? (1 FOR YES, 2 FOR NO)"
60 READ D
70 IF D = 1 THEN 6
80 IF D = 2 THEN 90
90 PRINT
100 PRINT "THANKS FOR ADDING!"
9999 END

Pretty easy to read, isn't it?

You might notice that there's no initial declaration of variables.  You can code blithely along, and if you discover you need another variable (as I did at line 60), just go ahead and use one!  This can lead to sloppy structure, but again, the priority is ease of use without too many formal constraints. 

Indeed, there's really not much to the language — the documentation for BASIC comprises 23 pages, including sample programs.

So let me tell you the real earth-shaking thing about BASIC: the compiler is built in!

On the Fly

Let's imagine that you are a student at Stuffy University.  Before time-sharing, if you wanted to run a program on the computer, you'd have to write the thing on paper, then punch it into cards using an off-line cardpunch, then humbly submit the cards to one of the gnomes tending the Big Machine.  He would load the FORTRAN (or whatever language) compiler into the Machine's memory.  Then he'd run your cards through the Machine's reader.  Assuming the compiler didn't choke, you might get a print-out of the program's results later that day or the next.

Now imagine that, through time-sharing, you have a terminal (a typewriter with a TV screen or printer) directly attached to the Machine.  That's a revolution in and of itself because it means you can type your code directly into a computer file.  Then you can type the commands to run the compiler program on your code, turning it into something the Machine can understand (provided the compiler doesn't choke on your bad code).

But what if, instead of that two-step process, you could enter code into a real-time compiler, one that can interpret as you code?  Then you could test individual statements, blocks of code, whole programs, without ever leaving the coding environment.  That's the revolution of BASIC.  The computer is always poised and ready to RUN the program without your having to save the code into a separate file and run a compiler on it. 
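To give you a flavor of it, here is a sketch of what a session at a Dartmouth teletype might look like.  I have made up the particulars, and the exact prompts differ from installation to installation, so treat this as an illustration rather than a transcript:

10 LET X = 3
20 PRINT X * X
RUN
 9
READY

Two lines typed, one command, and the answer comes back in seconds: no cards, no gnomes, no waiting until tomorrow.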


Kemeny watches his daughter, Jennifer, program — not having to bother with a compiler is particularly nice when you haven't got a screen!  Photo Credit: Dartmouth College

Moreover, you don't need to worry about arcane commands telling the program where to display output or where to find input (those numbers after every READ and WRITE command in FORTRAN).  It's all been preconfigured into the programming language environment.

To be sure, making the computer keep all of these details in mind results in slower performance, but given the increased speed of machines these days and the relatively undemanding nature of BASIC programs, this is not too important. 

For the People

The goal of BASIC is to change the paradigm of computing.  If Kemeny has his way, programming will no longer be the exclusive province of the lab-coated corporate elites nor the young kooks at MIT who put together Spacewar!, the first computer game.  The folks at Dartmouth are trying to socialize computing, to introduce programming to people in all walks of life in anticipation of the day that there are 100,000 (or a million or a billion) computers available for use.

Vive la révolution!


Photo Credit: Dartmouth College







[January 2, 1965] Say that again in English? (Talking to a Machine, Part Two)


by Gideon Marcus

A Matter of Time

We are now midway through the 1960s, and computers have become a fundamental part of our lives.  I previously discussed how computers are really quite simple, only understanding ones and zeroes.  And yet, with a bit of ingenuity and a lot of patience, a computer can be instructed ("programmed") to do almost anything.

Patience is the operative word.  As we saw last time, when programming in a language the computer understands natively, it takes six lines of computer language ("code") to add two numbers, and thirty just to have the computer use the Pythagorean Theorem to determine the hypotenuse of a right triangle.  Can you imagine doing advanced trigonometry that way?  Simple calculus?  How about advanced calculus?

If you're a scientist, you probably know advanced mathematics, but it's unlikely that you have any idea how to write machine language code.  And given that every computer speaks its own version of machine language, you certainly couldn't be expected to know how to speak to all of them in their native tongue. 

But if you've got a thorny problem to solve, and the math is too complex (or more importantly, time-consuming) for you to handle on your own, you've got to be able to use a computer.  At that point, given the tools we've got thus far, your only option would be to employ a human translator to convert your problem into something the computer can understand.

People like this don't grow on trees!  It's not as if there are freelance software programmers out there who will take a coding assignment like piecework.  Computer tenders are full-time personnel employed by the institution that bought the machine.  Their time is precious.  They can't afford to write thousands of lines of program code every time an egghead wants to run a Fourier Transformation.  Moreover, they probably aren't scientists: would a computer gnome even know linear algebra from a serial circuit?

What's needed is a machine to talk to the machine.

A Language is Born

In 1954, a ten-person team led by IBM engineer John Backus developed the first robot translator.  Backus wasn't interested in plane tickets or phone bills; he simply wanted it to be easier for scientists to run math equations on a computer.  Sophisticated business applications could come later.


John Backus


Lois Haibt, the lone woman on the team

Backus' team created FORTRAN: Formula Translator.  And it sparked a revolution.

FORTRAN actually comprised two elements: a new programming language for humans and a robot middleman for the computer.

First, the language.  FORTRAN, like any language, has a grammar and a vocabulary.  There are dozens of words, all of them more or less recognizable to an English speaker, though their definitions are highly specific.  Also, they must be entered in a grammatically rigorous fashion for them to work. 

But once one gets the knack, it's orders of magnitude easier to program in FORTRAN than machine language.  For example, last time, we saw it took six lines of code just to add two numbers, and we had to include the numbers as part of the program.  What if we wanted a general program to add any two numbers?

It'd look like this:


PROGRAM ADD
C This program will add two numbers
INTEGER FIRST, SECOND, TOTAL
10 FORMAT(I5)
20 FORMAT(I6)
READ (5,10) FIRST
READ (5,10) SECOND
TOTAL = FIRST + SECOND
WRITE (6,20) TOTAL
END


Even without explanation, you probably were able to figure out how this program works, but here's some extra information:

Every FORTRAN program starts with PROGRAM (followed by the program's name) and ends with END.  At the beginning of every program, the programmer declares the variables, informing the computer how to label any data it manipulates.  In this case, we're playing with integers, but variables can be real numbers or even strings of text.  FORMAT tells the program how many digits the variables can have (in this case, up to five for entry, six for the total).

When a line starts with a "C", any comment (generally explanatory) may follow.  What a boon for error-checking ("debugging") that is!

If you are lucky enough to have direct keyboard access to the computer to enter ("READ") the numbers to be added, and a CRT monitor on which the computer can display ("WRITE") the results, the interaction after the program is run will take seconds.  If you have to enter information with punch cards and view the results via printer, things will take a bit longer (and the numbers in the parentheses will be different).  But that's a topic for another article.

The whole program takes just 10 lines, one of which is optional.  Sure, it's half again as long as the equivalent machine code, but it can add any two numbers rather than two pre-coded ones.

Not only that, but that thirty-line Pythagorean equation program can be done in just ten lines, too!  That's because FORTRAN includes a SQRT (square root) command.  Better still, there are commands for every trigonometric function (SIN, ASIN, COS, ACOS, etc.) so with just a few more lines, you can also get information on any triangle using the Law of Sines and Law of Cosines. 
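Here is a sketch of how such a hypotenuse program might look, modeled on the adder above.  I have guessed at the FORMAT codes, so consider this an illustration rather than gospel:

PROGRAM HYPOT
C This program computes the hypotenuse of a right triangle
REAL A, B, C
10 FORMAT(F10.3)
READ (5,10) A
READ (5,10) B
C = SQRT(A*A + B*B)
WRITE (6,10) C
END

Note the REAL declaration in place of INTEGER: a hypotenuse is rarely a whole number, so we ask the computer to keep track of the decimal places for us.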

Now you can see just how powerful a programming language can be!

Robot Middleman

Every computer comes with a kind of translator already hardwired into its permanent memory.  Otherwise, it couldn't interpret (for example) 101 as "Add" and 111 as "Print".  But, as we've discussed, they are incredibly minimal.  For a computer to understand the language of FORTRAN, it has to be programmed with an extra translator called a compiler. 

The compiler is a program input into the computer in machine language, of course (how else could it understand it?), but once entered, the compiler can be run to translate any FORTRAN program.  The compiler will completely translate ("compile") the FORTRAN commands into a machine language program and execute it. 

This process is not instantaneous, just as a conversation between two people using an interpreter requires extra time.  Moreover, the compiler-assembled program is generally not as efficiently written (i.e. it takes more lines of code) as one optimized for brevity by an expert human. 

But because one saves so much time coding in FORTRAN, and because a human machine language expert isn't needed, the result is a tremendous net increase in efficiency.  In fact, programmers have been 500% quicker in their coding as a result, and they can focus on the problem they are trying to solve rather than the daunting task of talking in ones and zeroes or some arcane machine language.  That's worth the small price in computing time inefficiency.

Programming for everyone

FORTRAN was the first "higher-level" programming language, but it was quickly joined by many others.  They include LISP ("LISt Processing"), COBOL ("COmmon Business-Oriented Language"), and ALGOL ("ALGOrithmic Language"), each with their own specialized vocabulary and capabilities.  Indeed, it would be no exaggeration to say that a computer that can't read a higher-level language is almost useless; it's no surprise that FORTRAN was developed less than a decade after ENIAC, the first computer, came on-line.

But, as amazing as all of these languages are, their usage remains daunting.  FORTRAN et al. are very good for the applications they were designed for, but not terribly flexible for anything else.

What this means is that, while FORTRAN might be useful to a physicist for making mathematical calculations, and COBOL is great for a corporate engineer to automate inventory control, there is no language for general application: no introductory computing language that one (say, a college student) might learn to familiarize oneself with the theory of higher-level programming.

Moreover, most people don't have direct access to a computer, which means laboriously using a keypunch machine to put holes in punchcards (with one line of code per card), giving the stack to a technician, and waiting who-knows-how-long to get a result. 

The stage has been set for a simpler, even higher-level programming language that will allow anyone to dive feet first into coding — and that's basically what we'll be talking about in the next article in this series.






[November 1, 1964] Time (sharing) travel


by Ida Moya

New Toys for Los Alamos

As the Traveler said, things have really been heating up in Los Alamos Scientific Laboratory (LASL). And what with President Kennedy being taken from us so traumatically last year, it has all been too much. We have been struggling with national security while mourning the loss of our leader, and also attending to a deluge of new computers that are coming into the lab. Things have calmed down a little so I am now able to share a few secrets with you again.

Page from LA-1 document
Page from the Los Alamos Scientific Laboratory report LA-1

I've been busy helping with the preparation of the upcoming declassification of the Los Alamos Primer. This is the very first official technical report we produced at LASL, numbered LA-1. It is based on five lectures introducing the principles of nuclear weapons. These lectures were given in 1943 by LASL librarian Charlotte Serber's husband, the physicist Robert Serber. You bet it took a long time to get its release approved. Doing the work to cross out all those "Secret Limited" stamps and restamp each page with "Unclassified" also took some time.

Los Alamos is so important to the nation’s top-secret defense work that we are able to commandeer the first of each of the fastest computers manufactured. We had Serial no. 1 of the IBM 701 “Defense Calculator” in 1953. LASL also tested one of the first of 8 IBM 7030 “Stretch” computers, which even with its uptime shortcomings can calculate so fast that some people call it a “Supercomputer.”

I’m sure I also told you that we finally received our IBM 7090 computer. This equipment is being used for big science calculations around atomic energy, guided missile control, strategic planning (cryptanalysis, weather prediction, game theory), and jet engine design. I'm sure it is no surprise when I tell you we are using it to simulate nuclear explosions. This computer also has what they call an “upgrade,” the addition of more memory and input-output capability. The upgraded computer is called an IBM 7094.

Scientists at LASL, Lawrence Livermore Radiation Laboratory, and Massachusetts Institute of Technology have been working on better ways for computer operators to use the IBM 7094. Rather than custom-writing each computer operation and calculation that has to be done, they are working on a kind of “supervisor” to allow for more than one person to use the computer at the same time. This "operating system" is called CTSS or Compatible Time Sharing System.

Robert Fano sitting at a teletype
I don't have a current picture of Marge, but here is MIT Professor Robert Fano using CTSS from a Teletype ASR 35.

Sharing the Wealth

It's difficult to convey just how important this will be. Computers are hideously expensive things, often costing hundreds of thousands of dollars. They are also vital for any scientific institution's operations. Currently, only one person at a time can use them, which results in one of two situations. Either one person at a time has a monopoly on the machine for the time it takes to compose and enter a program into the machine (incredibly inefficient) or programs are written "off-line" and run in a "batch". This latter solution ensures that the computer is always running, but it means no one can access the computer in real-time, and it might take days to get results (or notice that the program failed to run correctly!).

With time-sharing, several people can use a computer at once, running different programs in real-time. While the performance might not be as efficient for the computer, since it is accommodating multiple processes at once, the increased efficiency for the operator should more than make up for it.

One of my colleagues at MIT, Marjorie Merwin-Dagget, co-wrote a paper with Robert C. Daley and MIT lab head Fernando J. Corbato (Corby) about the CTSS operating system. You can have a look at it here: An Experimental Time-Sharing System.

The Mother of Invention

Marge majored in math, taught for a couple of years, then found a position doing calculations and differential equations at an engineering lab. In the mid-50s, one of Marge's female colleagues at this lab was sent to MIT to learn about the Whirlwind computer, and when this colleague came back, she taught Marge how to code for Whirlwind.

Marge then leveraged this knowledge to code applications for a card punch calculator, an IBM 407 accounting machine, which was much quicker than the manual equipment their lab had been using. This clever coding work helped her land a job with Prof. Frank Verzuh in the MIT computer center. Marge got her friend Robert "Bob" Daley a job there too, because he was so skilled at programming the IBM 407.

IBM 407 Accounting Machine showing detail of plugboard.
The IBM 407 Accounting Machine is "programmed" by changing wires on this plugboard.

One of Marge's first assignments was to compare assembly language programming to FORTRAN programming. She found that FORTRAN is quicker to use and easier for other programmers to understand. She quickly became the FORTRAN expert of her group. She even got to work with the brilliant John McCarthy. John has been promoting the notion of timesharing computer systems at MIT and beyond. These computers are so fast, John reasons, that several people can use them at once.

Marge tells me that Corby considered her and Bob the best programmers on the staff. CTSS started as a demo, to demonstrate the feasibility of computer timesharing. This demo turned into a viable system, something people wanted to actually use. She worked one-on-one a lot with Corby, rehashing the problems. They ended up working a lot at odd hours, staying up late going over listings and working out the problems. Kind of like those “hackers” I told you about last year. She was so excited when she told me that it finally worked for two Flexowriters.

Fernando Corbato stands amidst an IBM 7090 computer system.
MIT's Fernando Corbato standing amidst some of an IBM 7090 computer system at MIT.

Corby worked on programming the supervisor and queueing, while Marge took the task of coding interrupt handling, saving the state of the machine, commands, character handling, and a method for inputting and editing lines for the demos. Bob Daley was best at translating this to the mechanical workings of the computer.

After co-writing this paper, Marge got married and took a leave of absence after her first child was born. She did return to MIT last year (1963), and is working part-time on smaller support projects outside the mainstream of CTSS development. It’s troubling how difficult it is for a woman to juggle fulfilling technical work with the demands of raising a young family.

Things to Come

Next time, I will tell you about our newest Supercomputer, the CDC 6600. This remarkable machine, designed by that wag Seymour Cray, is being installed in Los Alamos Scientific Laboratory right now. It is so fast and hot that it has to be cooled by Freon, which is making for a lot of fuss with air conditioning technicians coming and going to the lab. I spend a lot of time making copies of "Site Preparation Guide" manuals for everyone from the managers to the technicians. There's a lot more to these computers than just programming languages, that's for sure.  I hear that IBM is working on a new computer system, the 360. One of its requirements is that the pieces be able to fit through standard doors, and ride in standard elevators. Guess buyers are getting tired of having to break through walls to get their computers installed!

CDC 6600 computer system
Installation manual photo of the CDC 6600. Look at that display console with the two round screens, perfect decor for your evil lair.


[Come join us at Portal 55, Galactic Journey's real-time lounge! Talk about your favorite SFF, chat with the Traveler and co., relax, sit a spell…]




[Aug. 17, 1964] Yes and No (Talking to a Machine, Part 1)


by Gideon Marcus

Making sense of it all

Computers can do amazing things these days. Twenty years ago, they were vacuum tube-filled monstrosities purpose-built for calculating artillery trajectories; now, they are sleek, transistorized mini-monstrosities that do everything from calculating income tax to booking vacations across multiple airlines. It used to be that computers were mathematically inclined women — these days, digital computers do everything those able women did, and many times faster.

This is an absolute miracle when you realize just how limited a digital computer really is. It's about the dumbest, simplest thing you can imagine. Appropriately, the successful operation of a computer, and programming those operations, is one of the more abstruse topics I've come across. Certainly, no one has ever been able to give me a concise education on the subject.

I'm a naive (or arrogant) person. I'm going to try to give you one. It's a complex topic, though, so I'm going to try to break it into "bite"-sized parts. Read on for part one!

Ones and Zeroes

Whether you know it or not, you are already familiar with the concept of binary. Your light switch is either on or off. Your television, your radio, your blender — all of them are either in operation or not. There is no in-between (or, at least, there shouldn't be).

A digital computer is nothing but a big bunch of places where you process ons and offs; for simplicity's sake, let's call an off "0" and an on "1". Inside every computer is a place for storing 1s and 0s called its "memory". If you've ever seen medieval chain mail, you have an idea what it looks like, a net of metal rings, each of which can be individually magnetized. If a ring is magnetized, the computer sees it as on or "1". If not, it sees it as off or "0".

Now, there's not a lot of information you can store there — just the on/off state. But what if you grouped eight of these binary digits (or "bits") so that your computer knew they were associated? Then you could have all sorts of 8-digit groups (called "bytes"). For instance:

00000000
11111111
11110000
00001111
10000001

and so on. All told, you could have 256 combinations of ones and zeroes in each of these groups (two choices for each of the eight bits, multiplied together, makes 256), and that's enough to be useful. Here's how.

Three simple tasks

A computer, at its heart, can do just three things:

  1. Store information. Think of a computer's memory as a post office, and each byte is a mailbox. In fact, in computing, these mailboxes are called addresses. Each address can store one of the 256 combinations of ones and zeroes.
  2. Do arithmetic. A computer is designed to be able to add, subtract, multiply, and divide.
  3. Compare numbers. A computer can look at two different numbers and tell you if one is equal to, greater than, or less than the other.

That's it! When I first learned that, I (like you) wondered "how the heck can something like that do something complicated like making sure my Allegheny Airlines reservation gets transferred to Eastern for my trip to New York?"

As it turns out, these three basic computer functions are sufficient for that task — if you are clever in how you instruct a computer to do them.

Talking in numbers

Remember that a computer can only speak in binary digits ("binary" for short). Let's say a computer has been hard-coded to know that when you input "110", you mean "store the following number in the following address." If you input "101" it means "add the number in the following address to whatever is in this other, following address." And let's say "111" means "print out whatever is in the following address."

A very simple program, computing A + B = C, might look like this (for the sake of simplicity, let's say that your computer's memory has 256 addresses in which it can store bytes, numbered 00000000 through 11111111):

  1. 110 1 00000001
  2. 110 10 00000010
  3. 110 0 00000011
  4. 101 00000001 00000011
  5. 101 00000010 00000011
  6. 111 00000011

In English, that's:

  1. Put "1" in address #1.
  2. Put "2" in address #2.  (How does 10 equal 2? Just like when you add 1 to 9 in normal, base 10 arithmetic, you make the ones place 0 and carry the one into the tens place.  In binary, 1 is the most that can ever fit into a digit — so if you add 1, you make that place zero and carry the 1 to the next place over.  Thus 1 + 1 = 10 (2), 10 (2) + 1 = 11 (3), 10 (2) + 10 (2) = 100 (4) …and 11111111 = 255!)
  3. Put "0" in address #3 (just to make sure we're starting from zero — if a program had used that byte before, it might not be empty!)
  4. Add whatever is in address #1 (in this case, 1) to whatever's in address #3 (so far, nothing).
  5. Add whatever is in address #2 (in this case, 2) to whatever's in address #3 (so far, 1).
  6. Show me what's in address #3: The answer should be "3" (because 1+2=3). Except, it will probably be displayed as "11" because this is a computer we're talking about.
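To see the carrying rule in action, here is 6 + 3 worked out long-hand in binary, just as you would work an addition in base ten on paper:

  0110   (6)
+ 0011   (3)
------
  1001   (9)

In the twos place, 1 + 1 makes 10: write the 0, carry the 1. That carry makes the fours place overflow in turn (1 + 0 + 1), and the final 1 comes to rest in the eights place.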

Good grief, that's a headache, and that's just for one simple bit of math. The first big problem is just remembering the commands. How is anyone supposed to look at that code and know what those initial numbers mean?

An easier way

The folks at IBM, Univac, CDC, etc. solved that particular problem pretty easily. They designed a program (entered in binary) that translates easier-to-remember three letter alphanumeric codes into binary numbers. Thus, someone could write the above program as, for example:

  1. STO 1 00000001
  2. STO 10 00000010
  3. STO 0 00000011
  4. ADD 00000001 00000011
  5. ADD 00000010 00000011
  6. SHO 00000011

STO, ADD, and SHO make a bit more intuitive sense than strings of numbers, after all.

And since you can translate letters to binary, why not numbers and addresses?

  1. STO 1 A1
  2. STO 2 A2
  3. STO 0 A3
  4. ADD A1 A3
  5. ADD A2 A3
  6. SHO A3

Note, these are not commands in any actual language — I made them up. And each computer system will have its own set of commands unique to the system, but real code will look something like this.

This easier to understand, mnemonic language is called "Assembly" because the program assembles your commands into something the computer understands (remember — they only know ones and zeroes).

Hitting the ceiling

Assembly makes it easier to program a computer, but it's still tedious. Just adding 1+2 took five lines. Imagine wanting to do something simple like computing the hypotenuse of a right triangle:

In geometry class, we learned that A² + B² = C².

The first part of that is easy enough.

  1. STO A A1 (store A in address A1)
  2. STO B A2 (store B in address A2)
  3. STO 0 A3 (Clear out address A3 for use)
  4. MUL A1 A1 (multiply what's in A1 by itself)
  5. MUL A2 A2 (multiply what's in A2 by itself)
  6. ADD A1 A3 (add what's now in A1 to what's in A3)
  7. ADD A2 A3 (add what's now in A2 to what's in A3)

All right. That gets us A² + B² in the third address…but how do we get the square root of C²?

When I wrote this, I had no idea. I've since talked to a programmer. She showed me a thirty line program that I still don't understand. Sure, it works, but thirty lines for a simple equation? There has to be an easier way, one that doesn't involve me pulling out my accursed slide rule.

There is! To find out how, join us for the next installment of this series!


[Come join us at Portal 55, Galactic Journey's real-time lounge! Talk about your favorite SFF, chat with the Traveler and co., relax, sit a spell…]




[May 22, 1963] Beyond the Typewriter (IBM Computers and how they work)


by Ida Moya

I was very impressed by this month’s paean to the IBM Selectric Typewriter by traveler Victoria Lucas. Her sensuous love of the very physicality of the thing really got to me. As I mentioned before, knowing how to type is what made me what I am today; I too used this panoply of ever-better equipment, so I really enjoyed her story. The IBM Selectric is an incredibly satisfying typewriter to operate.

The most intriguing part of Miss (Mrs.?) Lucas’ article was her closing question, “What are you going to do to steal my heart next, IBM?  For example, where is this computer thing going? Will it be the next love of my life?”

Answer: The computer will be the next love of your life. (Or maybe your master.)

My place of employ, Los Alamos Scientific Laboratory (LASL), is a frontrunner in adopting new computing technologies. I have worked in different capacities as LASL moved from calculating equipment that ran with hand-propelled gears and ratchet wheels, to things electrically controlled by mechanical switches, to those using electromechanical relays. (The IBM Selectric uses yet another kind of electromechanical switch, though since it is not properly a computer I won’t address it now.) The height of switching technology was, until very recently, vacuum tubes, which are now being supplanted by transistors. Transistors, an amazing miniaturized technology, are much smaller than vacuum tubes, work faster, and don’t get as hot.

With computers, there are many viewpoints from which one can approach the subject. I think of computers more from the perspective of an operator: making software programs run on the computers, and producing and analyzing the results. Other people think about computer architecture — how data flows in and out of the computer, and what happens to the information as it is processed inside.

Here is a picture of one of the three vacuum tube-based IBM 704 computers at Los Alamos Scientific Laboratory. One of my colleagues, a computer operator, is shown opening the front door of the IBM 729 tape drive. As you can see, no special protective gear is required, and she doesn’t even have to wear a hair net. This is from just a few years ago; the computers we have now are even faster and more sophisticated.

The way we get a program into the computer is to punch the program onto cards, then use the card reader (the low piece of equipment in the center of this photograph) to transfer the program onto magnetic tapes. From the tape, the program is read into the computer’s core memory.

Data – for example, parameters for an experimental design study for a thermonuclear warhead, something you want to calculate over and over again with different settings — is then punched onto another set of cards, and read directly into the core memory. The program is transferred yet another time, to the CPU, the Central Processing Unit. There, the program acts on each of the data points in the core as appropriate. The results are printed out onto greenbar paper by the printer, which is the rightmost piece of equipment.

IBM produced this nifty card to illustrate the wonderful equipment they have to punch, sort, and interpret the cards.

We even have this little slide rule, which managers use to calculate how long it will take for keypunch operators to do a job. This little rule is our master – woe betide you if you cannot keep up!

I’m not sure what computing establishment this picture below is from, but here are a bunch of gals using IBM 026 card punches, very much like here at LASL. It’s nice to have a job and be a part of something important. But this windowless room jam-packed with keypunch operators is depressing. Imagine how loud it is in there for these women. (Mary Whitehead tells me that when they were using calculators at the Weapons Research Establishment in Salisbury, Australia, they had carpeting in the room and egg crates lining the walls to attempt to absorb some of the sound. Not so lucky here.)


From Wikipedia

And heaven help them if they ever have to use that fire extinguisher. The cords on the floor look like a real trip hazard. However, most of these gals are just working for a year or two before they get married and become housewives, so it doesn’t pay to make the conditions any better. Me, even though my husband works at the Santa Fe Railroad, we don’t have that luxury. We both have to work in order to make ends meet and raise our wonderful children. I suspect more and more women are going to join the workforce permanently in the coming years, and these conditions will become a lot more humane for all of the future computer workers.

Another perspective from which to understand computing is the physical components inside the computer that come together to make a larger whole. For example this IBM Field Replaceable Unit (FRU), pictured below. On top of the unit are several vacuum tubes, while the rest of the contraption consists of resistors, diodes, and other discrete components. Electrons flow through this and, ingeniously, compute the Boolean logic of ands, ors, and nots.

I took this module as a souvenir from our IBM 704 system when it was decommissioned. Unlike computers built as one unique unit — say, the one-off ENIAC or MANIAC — the 704 is constructed from standardized, replaceable modules. If a component in one of these modules goes bad, the individual module is removed and quickly replaced with a new one – then the computer works again. The bad module can be tested and repaired at a more leisurely pace.  These computers are expensive to own and run; keeping them “up” as much as possible, for all three shifts, is imperative.

The IBM 7030 Stretch was also designed with modularity in mind. Instead of tubes, the Stretch uses transistors, as you can see on this Standard Modular System (SMS) card below. This particular module, about the size of a playing card, is a “two-way AND,” a particular kind of Boolean logic gate. SMS cards were first developed for the Stretch, and are also used in the brand new IBM 7090, 1401, and other super-fast IBM computers and peripherals of today.

If you look closely at the transistors, which are the metal cans, you can see the Texas-shaped brand mark of Texas Instruments. This American company has learned how to mass-produce transistors. Inside each can is a teeny little piece of germanium crystal, a “semiconductor,” with some probes attached. (And by attached, I mean soldered together by women using binocular microscopes and steady hands, jammed together in another terrible windowless room.) Manipulating and transforming the way electrons flow through these cans is, ultimately, how the computer does our bidding. Interestingly, computer operators don’t need to know about this in detail; we can leave it to the expert computer engineers and technicians.
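To give a flavor of how those ands, ors, and nots add up to arithmetic, here is a sketch in modern high-level notation (my own illustration, mirroring the logic rather than any actual IBM circuit): from the three basic gates one can build an exclusive-or, and from that a "half adder" and "full adder," the building blocks of binary addition.

```python
# The three Boolean operations, acting on single bits (0 or 1).
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# Exclusive-or, built from nothing but the three gates above.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two bits; return (sum bit, carry bit)."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, carry_in):
    """Add two bits plus the carry from the previous column."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

# One plus one: sum bit 0, carry bit 1 — binary 10, which is decimal 2.
sum_bit, carry_bit = half_adder(1, 1)
```

Chain one full adder per column and you can add numbers of any width — which is, in miniature, what those racks of SMS cards are doing.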

IBM is not the only company using a modular strategy. For example, a few days ago the Traveler showed off a brand-new Siemens 3003 computer system. I don’t have a parts book for this German company, so I don’t know what this particular module does, but you can see in the picture below there are two silver can-shaped transistors, plus some other colored packages of components.


(Courtesy of The Living Computer Museum)

So, Miss Lucas, there is plenty to love about computers. Don’t get stuck just being a typist, and join us in the transistorized revolution!




[Mar. 30, 1963] Mercury waltzes Matilda (the tracking and research station at Woomera, Australia)


by Ida Moya

I’m back from a whirlwind of helping the data analysts at Los Alamos get their FORTRAN formulas running on that balky old IBM Stretch computer. I can see why IBM only made 8 of these things. It is miraculous to have a computer that can fit into a single room, but this stretch (pardon the pun) in computing technology still averages only 17 hours uptime a day — and that’s also a stretch (no more, I promise).

When it breaks, this swarm of white-coated men in ties comes in and fusses around with it with a bunch of special tools, as well as the set of ALDs (Automated Logic Diagrams) that come with every IBM computer. The way those diagrams are produced and updated with punch cards and special line printers is an amazing story, but for another time.

Although we at Los Alamos Scientific Laboratory can comfort ourselves that the Stretch is the fastest computer in the world, I’m still envious of the institutions that have the better-engineered IBM 7090 computers. These are being used for calculations for the exciting Mercury program.


IBM 7090 at the Weapons Research Establishment's headquarters at Salisbury, on the northern outskirts of Adelaide in South Australia.

The Mercury spaceships do not have a computer on board – computers are far too heavy – so for figuring out how to re-enter the earth’s atmosphere the astronauts rely on computations sent by radio from the pair of IBM 7090 computers at the Mercury Control Station at Cape Canaveral. It’s an incredible amount of faith to put in one site, so Mercury control has those two redundant IBM computers, plus another set of computers in New Jersey. A third computer gathering information from the flight is on the other side of the globe — in Adelaide, processing tracking data collected at the Weapons Research Establishment range at Woomera, Australia. There is also another control center at Muchea, in Western Australia.


Control room of the astronaut tracking station at Muchea in Western Australia, part of US Project Mercury

A lot of people haven’t heard of Woomera, so let me tell you a little bit about it. At Woomera, more is being done than tracking Mercury astronauts. This part's an open secret, but the Brits and the Aussies are working together there on testing (or doing “trials,” as they say) rockets, missiles, and even atomic weapons. That's why they built this testing range in the middle of nowhere, in the outback of Australia.


Woomera Research Establishment Officer’s mess

A few years ago we had a visit from Bill Boswell, the Woomera director, along with a team from Maths Services, and Mary Whitehead, the leader of the Planning and Data Analysis Group. They were visiting various computer installations at Point Mugu, White Sands, and Cape Canaveral. These are all larger-than-life place-names, but they really just represent groups of men and women madly making observations, coding the photographs in a way the computer can understand, and using the results to steer the manned spaceships. Mary and I had time to talk about more prosaic things, like her new apartment (or “flat,” as they call it down under) in Woomera village, and the troubles of living so far from civilization.


Mary’s new flat at Woomera

Woomera reminds me a lot of Los Alamos. It is a similar purpose-built town, isolated from the surrounding population by remoteness and security. Entire families live there, with houses, apartments, and schools for the kids. There are clubs and mess halls; a bowling alley and community grocery store. The store sells just canned and packaged food; if you want something fresh the closest produce is 50 miles away. The planners made a lot of efforts to plant trees, most of which failed. Honestly, it sounds awful to me. I love the "Land of Enchantment" (New Mexico), where things actually grow. The two science towns also have odd mixed populations – for Los Alamos, it is the influx of American and foreign scientists, local Hispanos, and the San Ildefonso tribe. In Woomera, it is the influx of British scientists, local Aussies, and the aboriginal people. Personally I think Los Alamos does a better job of integrating the native population.


Community store in Woomera

There’s something about space that is so exciting. Space has it all: exploration, discovery, danger, and destiny. There’s so much more to it than my dry work of computers, trajectory calculations, and strangely named groups that I am so mired in. That’s why I am so excited to find science fiction and Galactic Journey’s reviews, which is opening my mind to our real future in space that this work makes possible.




[February 9, 1963] Do something about the weather… (The State of the Art in Computing)

[If you live in Southern California, you can see the Journey LIVE at Mysterious Galaxy Bookstore in San Diego, 2 p.m. on February 17!]


by Ida Moya

Let me take you on a little trip, one that starts in wartime and that ends with a peacetime enterprise that increasingly affects all of our lives.  One that I've had the good fortune to participate in (or, at least, on the edges of).  Who knows — you might end up an integral part of it, too!

I'll start with an important but little-known woman scientist, one who was not only representative of the kind of service women have provided for decades, but who was also pivotal in my development.

Charlotte Serber was the first librarian at Los Alamos Scientific Laboratory. She went to live there with her husband, Robert Serber, at the start of the secret project. Bob was a student of J. Robert Oppenheimer. Oppie, as we all called him, was the charismatic (and later tortured) leader of what we now call the Manhattan Project.


The Library at Los Alamos

I worked for Charlotte, who was the only female section leader of Project Y on The Hill. Like me, she didn’t start as a professional librarian. In fact, one of Charlotte’s first tasks was to organize the maids. Charlotte taught me a lot, and we all worked together to organize the printed materials, borrow scientific books from universities, subscribe to physics journals from all over the world, and endlessly mimeograph things. I didn’t think my hands would ever NOT be blue from that messy ink. We had the only mimeograph machine for a long time, so it seemed everybody would come by to make some copies and share the latest news.

Charlotte and Bob left Los Alamos after the bomb was dropped, but I stayed on. There were a lot of Hispanos like me on The Hill; my cousins and second cousins and of course my husband and family were living and working there too. When my husband moved us to The Hill and started working as a carpenter, they put out a call for wives who could type. That I can type and had the courage to answer that call saved me from what was seemingly my destiny on The Hill — being a maid, or at best, a store clerk. I’m grateful for Charlotte taking a chance on my younger self and letting this Hispano work more seriously on The Hill.

That was then, when computers were people or room-sized tube-packed monstrosities.  The world now is crazier than science fiction dreamed back then, in the middle of which my work has situated me. In fact, my unique position enables me to do research using the resources of the Los Alamos Scientific Laboratory, our sister institution the Lawrence Livermore National Laboratory, and the libraries of our managing institution, the University of California. This puts me ahead of the curve in knowing about (and working on the computers involved with) cutting edge developments in science.

Take the weather, for example. The Traveler has recently written about the upcoming launch of the first Nimbus satellite, and the recent launches of three Tiros weather satellites. My interest piqued, I’ve lately been quizzing my colleagues in engineering about how weather prediction via satellite using computers actually works. Fair warning: I might tell you something that is classified, but I will try to keep the classified things secret.

Weather prediction via satellite does not just involve getting the object into space (a big task in itself); there is also a coordinated effort of redundant ground stations to collect the data sent from the satellites. Computers are far too huge and heavy to send into orbit, so the satellites transmit their findings to computers back on earth for further analysis. But what do these computers actually compute?

Predicting the weather has a long history, going back to Aristotle and before. In the modern age, Lewis Fry Richardson is considered the father of using computers to analyze the weather, based on his 1922 book, Weather Prediction by Numerical Process. This book was written before transistorized or tube computers; he was thinking about men and women with electromechanical desk calculators like the Marchant we used before the IBM computers came to Los Alamos. One much-cited quote is his thought experiment describing a “weather theatre” or “forecast factory”:

“After so much hard reasoning, may one play with fantasy? Imagine a large hall like a theatre, except that the circles and galleries go right round through the space usually occupied by the stage. The walls of this chamber are painted to form a map of the globe. The ceiling represents the north polar regions, England is in the gallery, the tropics in the upper circle, Australia on the dress circle and the Antarctic in the pit. A myriad computers are at work upon the weather of the part of the map where each sits, but each computer attends only to one equation or part of an equation.”


 
I love this little sketch; I’m not sure where it came from. It’s not in Richardson’s book, but it fits his vision. In his thought experiment, Richardson imagined 64,000 human computers, all calculating simultaneous equations for their part of the globe, their pace synchronized by the conductor in the center. Runners would go down the aisles collecting results from sectors where the conductor would shine a light, and take them to a central office to be collated. A crazy idea. Until now, when we have transistorized computers fast enough to potentially run and collate 64,000 calculations at a time.
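Richardson's factory is, in modern terms, a synchronized grid computation: every cell of the map is updated from its own value and its neighbors' values, over and over, in lockstep. Here is a toy one-dimensional version in a modern high-level notation (the update rule and numbers are my own illustration, far simpler than Richardson's actual equations), where each "human computer" owns one cell of the grid.

```python
def forecast_step(values, alpha=0.1):
    """One synchronized 'tick' of the forecast factory: every cell
    computes its new value from itself and its two neighbors.
    This toy rule smooths the field, the way heat diffuses;
    the edge cells are held fixed as boundary conditions."""
    new = list(values)
    for i in range(1, len(values) - 1):
        new[i] = values[i] + alpha * (
            values[i - 1] - 2 * values[i] + values[i + 1]
        )
    return new

# A 'hot spot' in the middle of a cold field gradually spreads out.
field = [0.0] * 5
field[2] = 100.0
for _ in range(10):
    field = forecast_step(field)
```

The essential point is the synchrony: every cell must use its neighbors' *old* values, not the freshly computed ones — exactly what Richardson's conductor, shining his light to keep 64,000 computers in step, was there to enforce.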
 
These new computers, aside from calculating nuclear explosions (a thing we at Los Alamos are very familiar with), are being used for weather prediction. In the UK, a Ferranti Mercury computer known as “Meteor” has been used since January 1959 at the Met Office to do Numerical Weather Prediction. The Meteor is a vacuum tube computer. Those panels along the right are not just a fancy wall; those are the sides of the frames of the computer! Unlike the women at the bank HSBC wearing spike heels, these computer operators seem to be allowed more practical footwear.
 

 
A transistorized English Electric KDF9 “Comet” is slated to replace this Mercury in another year or so. This is the same type of computer used by HSBC to perform banking operations. All of that blue and silver receding into the background of this photo is the computer. The typewriter-looking thing on the right is used to get results from the computer, while the U-shaped things are high-speed paper tape readers used to feed data into the computer. Getting information in and out of these machines is turning out to be a limiting factor on their speed. That punched paper tape has to be rewound by hand, a careful task that must be done properly or it will kink and break. The tape is impossible to repair once it has broken, so there is a lot of cursing and re-punching of tapes when that happens. Also, the tape moves so fast it is like a razor blade, giving the mother of all paper cuts to the incautious.


 
In America, weather prediction is being done at the Joint Numerical Weather Prediction Unit (consisting of the U.S. Army, the U.S. Air Force, and the U.S. Weather Bureau). The JNWPU is on its fourth IBM computer, having already used the IBM 701 and IBM 704 (both vacuum tube computers). They then used an IBM 1401 transistorized computer with a high-speed paper tape reader. The national weather service then got one of the very first IBM 7090 computers. Each of these computers was about six times faster than the one before.
 
The photographs from the Tiros satellites are sent in triplicate to Command and Data Acquisition stations in DC and Hawaii. There, humans use physical tools — drafting tables and scales — to hand-plot the movement of the clouds onto maps. These coordinates are then sent to the NASA Computing Center and the U. S. Weather Bureau, where they are fed into the computers (using that terrible punched paper tape). The computers use complex mathematical formulas to predict the future movement of the clouds and therefore predict the weather. The output is automatically printed on wide paper by a typewriter-like thing, a Friden Flexowriter. This whole process is managed by teams of technicians, men and women. When it is ready, the printout goes to engineers who study the results.
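At their simplest, those "complex mathematical formulas" amount to extrapolation: if a cloud feature was plotted here six hours ago and there now, assume it keeps drifting at the same rate and project its position forward. Here is a minimal sketch in a modern high-level notation (the coordinates, times, and function name are my own invention for illustration):

```python
def extrapolate(pos_earlier, pos_later, hours_between, hours_ahead):
    """Given two hand-plotted positions (x, y in map units) of a cloud
    feature observed hours_between apart, assume a constant drift and
    predict its position hours_ahead after the later observation."""
    (x1, y1), (x2, y2) = pos_earlier, pos_later
    # Drift per hour in each map direction.
    vx = (x2 - x1) / hours_between
    vy = (y2 - y1) / hours_between
    return (x2 + vx * hours_ahead, y2 + vy * hours_ahead)

# A cloud plotted at (10, 20) and, six hours later, at (16, 23),
# projected twelve hours past the second observation.
predicted = extrapolate((10.0, 20.0), (16.0, 23.0), 6.0, 12.0)
```

The real forecast models do far more — they solve the physics of pressure and wind, not just straight-line drift — but every one of them starts from hand-plotted positions like these.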


 
As you can see, the glory is really all on the ground.  To all of you who want to be astronauts, perhaps you might think about the many people who support their flight…and aspire to be one of them instead.

[P.S. If you registered for WorldCon this year, please consider nominating Galactic Journey for the "Best Fanzine" Hugo.  Your ballot should have arrived by now…]




[January 7, 1963] From Los Alamos to the Moon (Computing — the State of the Art)


by Ida Moya

[Meet our newest fellow Journeyer, Mrs. Moya, whose technical background is as enviable as it is fascinating.  Given the whirlwind pace at which computer technology is advancing, I thought our readers would like to get, straight from the source (as it were), an account of Where We Are Today…]

Hello, fellow Travelers. My journey to date has been both a long and short one. Long in that I've come a long way since I first started as a typist at an institution you might have heard of: Los Alamos Scientific Laboratory.  And short in the fact that, well, I'm still here, now employed in the Lab's technical library. I’ve been around these New Mexico labs for nearly twenty years, since they built the first atomic bombs which ended the war.
 
I started at Los Alamos as a typist, creating memos and reports for the scientists. I learned how to type in high school, a skill that has taken me a long way. I was on the Hill in the beginning, in 1943, when only a couple of hundred people were working there. By 1945, there were over 6000. I’ve done a lot of things in these twenty dusty years at Los Alamos. Most of my work has been in the library, helping the scientists and engineers find the documents and information they need in order to get their work done. I’ve seen a lot over these years, and learned quite a bit about science and computers to better help the scientists get things done.

I can tell you more about Los Alamos and Project Y, which until now have been the hush-hushiest of subjects, because just this year a book came out — Volume 1 of the History of the United States Atomic Energy Commission: The New World 1939-1946.

This blockbuster new report, designed by my esteemed colleague Marilyn Shobaken of Penn State, where it was published, has all the science archives in New Mexico and beyond buzzing. She was so involved with the production of this document that her name is even included on the verso, right after the Library of Congress catalog card number 62-14633.
 
That's just background. I promise I will tell you more stories of Project Y and the Hill later, but for now, let me tell you about my current work.  It involves the development of computation and how these new transistorized computers will take us into space. I’m not flying into space myself, but the work I do is helping those brave men get to the moon. This is an incredibly exciting project to be a part of, and I’m glad that NASA is not secret. So let me tell you about some of the competitive scientific and technical developments that Los Alamos Scientific Laboratory is leading.
 
Just last month, on December 7, 1962, the University of Manchester in the UK commissioned the Atlas computer, said to be faster than even the newest IBM computers — the Stretch, the 7090, and the 7094. However, one thing the Atlas computer doesn’t have is an industry behind it. Manchester made this one experimental machine, and have two others in the works; that’s it.

At Los Alamos Scientific Laboratory, we acquired the first IBM Stretch computer in 1961. This new transistorized computer was supposed to be 100 times faster than IBM’s previous vacuum-tube based computer, the IBM 704. The Stretch turned out to be only about 35 times faster, so IBM considers it a failure.

Still, the Stretch is faster than the gals in the human computer facilities, pecking away on their baby-blue Marchant electromechanical calculators. I did this for a time before we got our electronic computers; a tiring and thankless job that had to be done accurately in makeshift buildings in the high desert. Now we program our computers in FORTRAN, in air-conditioned rooms. If you have the aptitude, I highly recommend you learn to program computers using this innovative formula translating language. The space race needs more capable computer programmers (and it's not just for women, no matter what the engineers might tell you).

IBM made a few more Stretch computers, all for governmental customers. The National Security Agency has just installed the next one, and others are being prepared for our sister project at the Lawrence Livermore Laboratory, as well as the Atomic Weapons Establishment in England, and the US National Weather Service. That's about it, though. As I said, IBM deems this marvel a failure.

However, IBM has learned from this exercise, and has turned their considerable manufacturing prowess to the IBM 7090 and IBM 7094 computers. IBM has factories churning these out by the dozen. As I write, two IBM 7090 computers are installed at the Marshall Space Flight Center in Huntsville, Alabama, calculating trajectories for NASA’s Mercury program. These 7090 computers are also being used by rocket scientist Wernher von Braun to simulate flight trajectories in aid of the design of the Saturn rocket system, the rocket that is going to take Americans to the moon. Four more IBM 7090 computers are being used by the Air Force for their Ballistic Missile Early Warning System. There is even an IBM 7090 installed at the Woomera Long Range Weapons Establishment, as part of the Anglo-Australian Joint Project, also used to calculate rocket and missile trajectories. America’s industries are not far behind; two more IBM 7090 systems are being used by American Airlines for their forward-thinking SABRE flight reservation system. Imagine, soon even regular people like us will be able to experience the glamour and sophistication of air travel, as new jet and computer technologies put the price of a ticket within affordable reach.

So, Manchester U.K. may have the “fastest” computer in the Atlas, but the United States has the lead in building an entire industrial infrastructure to deploy fast computers all over the country and free world. Healthy competition with our allies is fine, but what we really need to do is win the space race against those godless Russian communists. The American people will prevail in space, in no small part thanks to FORTRAN and IBM computers.

I'm proud to be a part of this adventure, and now, as part of the Journey, I look forward to bringing you along with me!

[P.S. If you registered for WorldCon this year, please consider nominating Galactic Journey for the "Best Fanzine" Hugo.  Check your mail for instructions…]