
combination of
"Computer Architecture and Amdahl's Law"

by Gene M. Amdahl
and

"An Interview with Gene M. Amdahl"

conducted by William S. Anderson,
focusing on the startup and operation of Amdahl and Trilogy corporations

individual articles from "IEEE Solid-State Circuits Society News"
Summer 2007


My educational background has never included any training in the field of computing, so all of my design activities have been based on my experience and the necessity of solving current problems. Consequently, my computer architecture contributions will largely be autobiographical.

Farm, one room grade school

I was raised on a farm in eastern South Dakota and attended a one-room grade school for all eight grades, then a small high school of about 150 students, graduating in 1940 (we got electricity when I was a freshman). My technical experience was limited primarily to mechanical equipment.

College, Navy, Marriage, College

I spent one year at home, doing farm work, before entering South Dakota State College (now a University). My father had wanted me to attend a liberal-arts college, for he didn't want me to go to college to learn how to make a living, but rather how to get the most out of life!
I enrolled in South Dakota State College (SDSC) in the fall of 1941 in mechanical engineering, but decided it was not the field for me. So I took a potpourri of courses in math, chemistry, electrical engineering and physics.
Since this was the fall of 1941, Pearl Harbor was bombed early in my freshman year. I continued in college for the next year and a half, working as a janitor in the hospital for room and board and helping with farm work during the summer. I wasn't drafted because of the shortage of farm labor. That fall I was asked to teach Physics Laboratory at SDSC, because they had more than one hundred soldiers coming in for the Army Specialized Training Program and not enough people who could teach them. This was more satisfying in terms of contributing to the war effort.
During that time I took the Navy's Eddy Test, qualifying for and entering Naval electronics training, and teaching after that. When I was discharged in June of 1946 I immediately married my fiancée, Marian Quissell, who grew up on a farm four miles from my home. This marriage was a major catalyst in my life, giving it stability, purpose and satisfaction! We have now been married over 60 years. We've raised three children, Carl, Delaine, and Andrea. I consider my family the crowning glory of my life! After getting married I decided to return to SDSC. There was no housing available in Brookings so I decided to build a small house in sections and erect it on an empty lot. Unfortunately there were no fittings for connecting to municipal water, though I explored all sources within two hundred miles! I finally decided to approach McComb's cabin court on the edge of the city. Mr. McComb agreed to let me erect my house there and connect to his water and power. He was a most gracious and fair landlord.

This is a picture of me in front of the entrance hall of the house I built so I could finish my undergraduate degree. I even had to make the front door! The date of the photograph is 1948.

Returning to SDSC in 1947, I selected physics as my major. When I was due to graduate in June 1948, I applied to several graduate schools to study Theoretical Physics. I was accepted at the University of Wisconsin at Madison.

University of Wisconsin

At the University of Wisconsin in Madison I received a Wisconsin Alumni Research Foundation assistantship, plus the last of my GI Bill, and began that summer in the field of Theoretical Physics. I completed my courses, took my final exams, including my orals, and submitted my thesis. Near graduation time in February 1952 our first child, Carl, was born.
This was an unusual time in physics, for they had just discovered "strange particles" in late 1949, and the name "meson" had not yet been proposed. At that time two other graduate students and I were assigned to determine if a force between nuclear particles proposed by a Japanese physicist could adequately describe the simplest 3-body nucleus, Tritium (Hydrogen 3).
We worked for 30 days using an 8-digit desk calculator, and a slide rule to hold two more most-significant digits. We mapped the energy of the system for all relevant ranges of the parameters, but we couldn't quite achieve a stable state. We had found the proposed force to be inadequate, but I had found the means of calculating to be even more inadequate! I then began to think about how the computing could be done better. The University had no information on computers in its library, no courses in computing and no computers, save for an electronic analog computer in the Electrical Engineering Department.

and summer job using EDVAC

My major professor, Dr. Robert Sachs, recognized my dilemma and arranged for me to get a 2-month summer job in 1950 at the Aberdeen Proving Ground. My assignment there was to program "supersonic flow about a 3-dimensional body". The instruction set was that of the EDVAC, then under development. I wasn't given any introduction to programming, or to the structure of the computer. I did not complete the programming during the 2-month period; I also heard that the development of the EDVAC was dropped because the mercury delay line was unstable due to temperature build-up when operating. I was not enamored of the EDVAC structure because the use of fixed point with a limited word length required a lot of rescaling to maintain reasonable precision.

Back at Wisconsin, design a better computer

As I returned to Wisconsin I formulated a 3-address floating point structure, trying to make it as simple as possible, and to use technologies that were commercially available. I chose a magnetic drum for main storage, with recirculating registers to minimize the use of electronics. For I/O I planned to use paper tape with a teletypewriter, which could both punch and read paper tape and print as well.
I determined that I could use floating point exclusively if I had a way to deal with the transfer of word segments from one word to another! The 3-address operation that I came up with was Extract, which took "n" bits beginning at bit "j" in word 1, inserted them beginning at bit "k" in word 2, and stored the result in location 3. This eliminated the need for approximately a dozen instructions in fixed point! The complete instruction set consisted of 10 instructions - Add, Subtract, Multiply, Divide, Compare (and transfer if the difference is zero or negative), Transfer, Extract, Read-in, Read-out, and Halt. Read-in and Read-out were also very different from any I/O operations I observed for several years following this, until I planned the design of my second computer at IBM in 1955, the IBM 709, when I introduced the I/O channel. Read-in and Read-out specified the information source or sink, a starting point within that source or sink, and a starting location in drum storage, and continued until the final specified location was reached. The Read-in and Read-out instructions were executed concurrently and independently of computational operations. This overlap of I/O with computing was a major contributor to performance enhancement!
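The Extract operation can be pictured with a minimal sketch in Python (my reconstruction from the description above; the 50-bit word width and numbering bits from the most significant end are assumptions, not details taken from the thesis):

    # Sketch of the WISC Extract operation: take n bits of word 1 starting
    # at bit j and insert them into word 2 starting at bit k; the result is
    # what would be stored at the third address.
    WORD_BITS = 50                      # assumed word width

    def extract(word1, j, n, word2, k):
        shift1 = WORD_BITS - j - n      # distance of the source field from the low end
        field = (word1 >> shift1) & ((1 << n) - 1)
        shift2 = WORD_BITS - k - n      # destination position of the field
        mask = ((1 << n) - 1) << shift2
        return (word2 & ~mask) | (field << shift2)

One such operation replaces the shift-and-mask sequences that a fixed point instruction set would otherwise need, which is why it could stand in for roughly a dozen fixed point instructions.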

with drum memory

The magnetic drum had sufficient capacity to provide 32 tracks of storage, each containing 32 words of 50 information bits and a 5-bit-length space for track switching time, for a total track length of 1760 bit times. The 50-bit word was made up of 40 bits of numeric fraction, 8 bits of exponent plus 1 bit for exponent sign and 1 bit for sign of the fraction. The arithmetic was performed on the numeric fractions by re-circulating the fractions in re-circulating registers while the exponents and signs were retained in electronic registers for control purposes. The recirculating registers had the read and write heads spaced 44 bits apart, 40 bits for the fraction and 4 bits for switching time.
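As a quick consistency check on these figures (my arithmetic, using only the numbers quoted above):

    \[
    32 \text{ words} \times (50 + 5) \text{ bits per word} = 1760 \text{ bit times per track.}
    \]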
With this spacing the fraction would have 40 repetitions in a drum revolution, matching precisely the 1,760 bit times in a drum revolution. Each of the arithmetic operations was performed in the course of one drum revolution. I thought I had invented a new way of performing division in one revolution: considering the numerator fraction to be the initial value of the remainder, subtracting the denominator fraction from the remainder and adding a 1 in the leftmost quotient digit position, then shifting the denominator fraction one bit position to the right, preparing for repetition. If at any stage of repetition the remainder became negative, the denominator fraction would be added to that remainder instead of subtracted and a 1 would be subtracted in the corresponding quotient position rather than added. I later heard that Dr. John von Neumann had patented it.
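The scheme described here is essentially non-restoring division. A hedged sketch in Python, working on integers scaled as binary fractions (the 8-bit length and the final correction step are my choices for illustration, not details from the thesis):

    # Non-restoring division as described above: subtract the progressively
    # right-shifted denominator while the remainder is non-negative, add it
    # back otherwise, and add or subtract a 1 in the matching quotient position.
    def divide(num, den, F=8):
        # returns roughly floor(num * 2**F / den) for 0 <= num < den
        remainder = num << F
        shifted_den = den << F
        quotient = 0
        for i in range(F):
            shifted_den >>= 1            # denominator shifted one place right per step
            weight = 1 << (F - 1 - i)    # value of the quotient bit for this step
            if remainder >= 0:
                remainder -= shifted_den
                quotient += weight       # add a 1 in this quotient position
            else:
                remainder += shifted_den
                quotient -= weight       # subtract a 1 instead
        if remainder < 0:                # final correction (my assumption)
            quotient -= 1
        return quotient

    print(divide(1, 3))                  # 85, i.e. 85/256 is close to 1/3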

Each arithmetic operation, and others, was allotted one drum revolution to be certain the instruction calling for it had been acquired, then a second revolution to be certain the operands had been acquired, then a revolution to perform the operation, and finally a revolution to be certain the result had been stored. Since the operations were nonconflicting, there were four instructions in the pipeline at all times: one picking up its instruction, one picking up its operands, one performing its operation and one storing its result. Consequently the computer performed one floating point operation per drum revolution. I believe there were several world firsts in that design: the first electronic computer to have floating point arithmetic (and certainly the first to have only floating point arithmetic), the first electronic computer to have pipelining, and the first electronic computer to have input and output operated concurrently and independently of computing!
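A small, purely illustrative Python sketch of that four-stage overlap:

    # Four instructions are in flight at once, one per stage, so one
    # operation completes every revolution even though each instruction
    # spends four revolutions in the machine.
    STAGES = ["fetch instruction", "fetch operands", "execute", "store result"]

    def schedule(n_instructions):
        for rev in range(n_instructions + len(STAGES) - 1):
            busy = []
            for s, name in enumerate(STAGES):
                i = rev - s                      # instruction occupying stage s
                if 0 <= i < n_instructions:
                    busy.append(f"{name}: instr {i}")
            print(f"revolution {rev:2d}: " + "; ".join(busy))

    schedule(8)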

Word of the design gets around

I told one of my fellow physics students about my computer design ideas, and he apparently was excited enough to pass the information on to the Electrical Engineering Department, and in the late fall of 1950 I was requested by them to give a lecture on my design ideas! I gave a seminar and about a week later the head of Electrical Engineering, Dr. Peterson, called my major professor and asked him to change the subject of my doctoral thesis to be a record of my computer design plan so that their graduate engineers could build it and be trained in this new field!
My major professor agreed, and I spent six months writing the new thesis and ordering the magnetic drum. I submitted my thesis in June, 1951, expecting to graduate in June. But there was no one at the University who felt competent to properly evaluate it, so it was sent to scientists at the Aberdeen Proving Ground for evaluation. They approved, and I graduated the following February. The thesis was titled "The Logical Design of an Intermediate Speed Digital Computer"; I named the computer the WISC (Wisconsin Integrally Synchronized Computer). It was completed in 1955 and is now displayed in the Computer History Museum in Mountain View, California.

Graduation and off to IBM

A copy of the thesis was apparently obtained by the IBM branch manager in Milwaukee and sent to IBM at Poughkeepsie. Nathaniel Rochester read it and had IBM make me an offer to join them in Poughkeepsie. I accepted and joined IBM in June 1952. My initial assignment was to simulate neural networks on the IBM 701, according to the proposed characteristics in a monograph published by Professor Hebb. I worked on it for several months and concluded that the description was inadequate. I then turned my attention to character recognition and had considerable success, even on the crude characters of wire printing.
The 701 had exhausted its market after the sale of 18 computers. The company decided that a follow-on computer, the 704, should be developed, utilizing the new magnetic core memory rather than the cathode-ray tube memory in the 701, for the capacity could be much larger.

Design of the IBM 704

I was given the task of designing it, for the other experienced IBM designers were about to be committed to a joint development project with MIT to develop and produce the SAGE system.
I decided to double the instruction size in order to accommodate a larger address and additional instructions to provide floating point arithmetic as well as the fixed point arithmetic of the 701. I had heard of an English computer having a "B-box", a counter which allowed the repetition of a loop until the count reduced to zero. Any address step-changing in an array for each iteration still required separate instructions. I thought it would be more efficient if the count and the step-size could be combined, then the program could be shorter and faster.
I called it indexing and put three index registers in the 704 to accommodate different step sizes for different data arrays. I assigned two bits in the instruction to identify no indexing or which of the three index registers to use in this instruction. I also discovered that index register contents could be available early enough to modify the address in this instruction before fetching the data, thus having no additional execution time! It turned out that the SAGE system also had an indexing capability, but I don't know who had it first; it was a classified project, and I wasn't cleared, so I had no way to establish the dates of invention.
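A hedged sketch of the idea (illustrative only; this is not the actual 704 instruction format, and the direction in which the index modifies the address is an assumption):

    # An instruction carries an address plus a 2-bit tag: 0 means no indexing,
    # 1..3 select one of the three index registers.  The effective address is
    # formed from the index register before the data is fetched, at no extra cost.
    index_registers = [0, 0, 0]

    def effective_address(address, tag):
        if tag == 0:
            return address
        return address + index_registers[tag - 1]

    # Stepping through an array: change only the index register each
    # iteration; no separate address-modifying instructions are needed.
    base = 1000
    for i in range(5):
        index_registers[0] = 2 * i               # step size of two words
        print(effective_address(base, tag=1))    # 1000, 1002, 1004, 1006, 1008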
When the time came to price the 704 for the market, it was necessary to estimate the probable market size. Pricing people from IBM headquarters came to talk to me and get my agreement on size. They initially estimated a market of six machines (I assume they considered the 18 701 machines had mostly satisfied the market). I was incensed and insisted that the machine had so much more capability than the 701 that it would have a larger market size. Over the next few weeks they came back with 12, then 18 and finally 32 before I agreed.
The actual number sold was 140, making it an extremely profitable program!

Design of the IBM 709

I was then asked to design the follow-on system, the 709. In the 709 project I added a number of reasonably useful new instructions; one of the most interesting was a "history dependent table look-up", which allowed code conversions from BCD (Binary Coded Decimal), IBM's preferred code, to ASCII, the newly adopted American Standard Code for Information Interchange, or vice versa. It also allowed two binary coded decimal numbers (each decimal digit occupying a 6-bit character position) to be added or subtracted in binary, then using the table look-up to convert the result to a proper binary coded decimal result. These were two examples, but many more were ultimately developed by customers.
The principal change I wanted to make was the introduction of an I/O channel, permitting the computer to specify the reading or writing of a specified amount of data to or from a magnetic tape or drum, into or out of memory, without the computer having to control the data flow as it occurred, just as I had done in the WISC, and to be able to continue computing with only the impact of some memory cycle delays due to conflicting memory requests. This was a significantly costly development, so it required corporate approval. Elaine Boehm and I determined that we had to make an outstanding demonstration to win approval.
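The essence of the channel is that the processor only describes the transfer and then goes back to computing. A rough, hedged Python model of that overlap (the names and the use of a thread are purely illustrative; a real channel is hardware that steals memory cycles):

    import threading, time

    def channel_transfer(buffer, n_words):
        # stands in for a channel moving n_words into memory on its own
        for w in range(n_words):
            buffer.append(w)
            time.sleep(0.001)

    buffer = []
    channel = threading.Thread(target=channel_transfer, args=(buffer, 100))
    channel.start()                            # issue the I/O, like Read-in on the WISC

    total = sum(i * i for i in range(10**6))   # the CPU keeps computing meanwhile
    channel.join()                             # wait only when the data is actually needed
    print(total, len(buffer))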
We came up with the idea of a tape sorting program. The IBM 703 was a sorter-collator, a fairly modestly priced machine sold to the US Treasury. The expected price for the 709 we estimated to be at least two or three times that of the 703. We programmed the sort and found that it performed so much faster than the 703 that the cost of sorting on the 709 was less than on the 703. This demonstration tipped the balance and the I/O Channel development was approved!

Design of the STRETCH

Shortly after the 709 I was asked to design a supercomputer (called STRETCH) to utilize the new technology, transistors. I was told it would be my project, but that I would have to get a development contract, preferably from the Livermore Labs or Los Alamos. This was late November 1954. I consulted a bit with John Backus, and we agreed on the principal characteristics the STRETCH should possess. I studied the capabilities of the proposed semiconductor technology, which was a new circuit type called ECL (Emitter Coupled Logic), somewhat like the vacuum tube push-pull amplifiers. They were extremely fast circuits and consumed a great deal of power.
I did some designing of a multiplier to estimate the probable performance that could be achieved if efficiently instructed. I then worked on a new concept "look ahead" which consisted of fetching instructions well in advance of their execution time so that branch instructions could be recognized early enough to fetch an alternative instruction sequence with no delay. The design analysis was very promising, yielding several times the performance we could achieve using vacuum tubes.
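A minimal sketch of that look-ahead idea (my illustration, not the STRETCH mechanism in any detail): fetch several instructions ahead, and when a branch appears in the window, request the alternative path as well so that neither outcome stalls the machine.

    def lookahead_fetch(program, pc, depth=4):
        # return the addresses to prefetch starting at pc, following the
        # alternative path as soon as a branch is seen in the window
        prefetch = []
        for offset in range(depth):
            addr = pc + offset
            if addr >= len(program):
                break
            prefetch.append(addr)
            op = program[addr]
            if op[0] == "branch":          # instruction encoded as ("branch", target)
                prefetch.append(op[1])     # also request the branch target
                break
        return prefetch

    program = [("add",), ("mul",), ("branch", 10), ("sub",)]
    print(lookahead_fetch(program, 0))     # [0, 1, 2, 10]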
Armed with my initial design results I visited Livermore first. They listened and were very cordial, but they informed me that they had already committed to contract with a competitor so they couldn't commit to us. I then visited Los Alamos and presented to them; they were very interested and would negotiate with IBM. I then did some more fleshing out of the STRETCH design and also determined what should be done to the 704 to produce a 709.

At this time (mid 1955) I was surprised to have a man assigned to my STRETCH project. I initially assumed he reported to me, but it became clear that he thought I reported to him. This was very disconcerting, for I had been assured that STRETCH was my project before I accepted the assignment and had then gotten Los Alamos to the negotiating table and had achieved quite a bit of the design. I wasn't certain I had the situation figured out for sure, so I continued on.

This new man was uninterested in my design and had his own approach. He wanted to design a front-end commercial computer that would feed a back-end scientific computer. To me this seemed to totally prevent any possibility of resulting in the supercomputer that I was commissioned to design! Late that year I was invited to meet with the Laboratory manager; he showed me his plan for restructuring the Laboratory. It was to be a matrix structure with several development projects feeding the technology engineering groups. The STRETCH development was to be managed by the man assigned to me a few months earlier. I was to be in charge of the STRETCH detailed design.

I was appalled, for I knew we could never agree, and the project would fail. I didn't respond about my reaction; I just went back to my office and wrote my letter of resignation. I did continue on until just before Christmas, providing my best design ideas, all of which were lost, then left for South Dakota for Christmas with my and my wife's families, then on to Los Angeles to join Ramo-Wooldridge's computer division.

Resignation, on to Ramo-Wooldridge

At Ramo-Wooldridge I was immediately put to work determining how to solve military requirements. Upon writing up a proposal I was sent to Washington that night on the "Red-Eye." After shaving and changing my shirt in the Washington air terminal I visited three different military groups who each had new needs to meet, [and] then off to catch the late flight back to LA and to bed by midnight. The next morning I was back at work determining how to solve these needs. When we got some of the contracts, I expected I would have the chance to do some of the development, but management liked the way I produced my solutions, so they considered me to be their "utility outfielder," whereas I considered myself to be "out in left field".
I then began to seriously listen to my college and grad school friend, Dr. Harold Hall, who was in a new start-up company called Aeronutronics, which had just been acquired by the Ford Motor Company. This company appeared to have adequate capital and highly respected scientists in nuclear physics, rocket technology, and electronics. They also had quite a good stock option plan! I decided to pursue it.

Aeronutronics

My initial work at Aeronutronics was quite similar to that at Ramo-Wooldridge, but here I had the opportunity to do a much more structured solution. A significant one was the design of a flight data entry machine, which allowed a pilot to type into blanks on a cathode ray tube screen, then press a button and have the plan automatically telegraphed to the FAA center; we called it FLIDEN. We won the contract, but again I didn't initially get the development project.
Fairly shortly thereafter the company moved from the San Fernando Valley to Newport Beach. We had just moved down when I was asked to rescue the FLIDEN project. I commuted for a month, working more than two shifts. I found there were over a hundred wiring errors and instabilities in some of the circuitry. I was so beat by this schedule that one late night as I was commuting home I crossed over the highway to LA airport, then looked up to see a water tower in a town which was 5 miles farther south. I had no memory of anything in between! I decided that I had to stay in a motel until I finished the project, which took about two more weeks.
The project was a success, and the FAA used it for a test facility, but funding for their plan never materialized. I proposed developing a computer for the Ford Motor Company. When I had a chance to meet with them they explained that if they went with us and something went wrong it would be their fault, whereas if they went with IBM and something went wrong they would be in the clear, for they had gone with the best!
The work in electronics began to be very much the same as it had been at Ramo-Wooldridge, but we surely loved living in Newport Beach! My mother had been ailing, and I wanted to visit her in South Dakota, but there never seemed to be a time when I could be spared, so I just resigned. When we got there it became clear that my mother had an untreatable cancer and that it was terminal; she died two months later, and we returned for her funeral. It had been a mixed-up 1960 so far, and I still had to decide what I would do next.

Back to IBM

Almost five years [after leaving IBM] Dr. Piore, IBM's Chief Scientist who reported directly to Tom Watson, came to Los Angeles and invited my wife and me to have dinner at Romanoffs restaurant in Beverly Hills. Dr. Piore's wife was from the Romanoff family, so we had remarkable attention! Dr. Piore offered me the position of managing the Experimental Machines division in IBM Research, with the requirement to be on the East coast for a minimum of four months and a maximum of seven months.
My wife and I accepted and were back in New York State by November 1960. My first activities were to look at the projects in my division. I cancelled the only two hardware design projects because they had no chance of being of value to IBM. One project was a computer design which had been continually changed but never complete enough to be evaluated; the other was a government project which utilized superconductor switches for logic, but there was no way to amplify diminishing signal levels. This left only software projects, which I retained, and the responsibility for designing a new supercomputer with insignificant funding. I believe I was given this because the STRETCH project had not met its performance target, so its price had been reduced and it became a loss leader. The 704, 709, and the follow-on 7090 and 7094 had sustained IBM's scientific computing market.

and the SPREAD committee

At this time IBM had the SPREAD committee in session. There were about five major computer families made by various IBM divisions, each of which had generations which weren't quite compatible. Unfortunately, the total development costs were growing impossibly large, for any new device to be attached required an engineering and software project to be manned and funded for each member of each family. So IBM's development budget was greater than most computer companies' revenue! The SPREAD committee's goal was to define data formats, kinds of I/O devices, control, storage and logic technologies which were to be standardized, and to plan a new family of computers which would replace all current families. This was not only an enormous undertaking, but it was even more of a political undertaking, for it would require all of the divisions to yield their fiefdoms to the king!
This was a major revolution being fought, but the stakes were high, managing the costs to maintain control of the world's data processing market-place! I had only been back east for a couple of months when I was approached by the president of the Data Systems Division, Bob Evans. He asked me to meet with him at a budgeting session to be held at a small resort called Jug-End. I sat through a session that amply demonstrated the development budgeting problem.
After that session Bob and I met privately, and he asked me if I would consider designing the new family of computers. I asked him if the new family of computers would be upwardly compatible but not downwardly compatible. He said that was the plan. I said I would not be willing to do that, for it would only end up with the same budget problem I had just observed, for the generational problem would exist immediately. I told him I thought the family could be both upwardly and downwardly compatible and with virtually no cost impact, and if they would enforce this constraint I would be willing to accept the challenge. Bob thought for a moment, presumably about whether I could possibly do it, and he agreed.

Architecting the IBM 360 Family

So in 1961 I was moving back to Poughkeepsie, where I worked about 10 hours a day defining data formats, the instruction set, and in several cases the hardware structure. Each family member was to have about a factor of three difference in performance from its neighboring members, which required registers in the smaller machines to be memory locations, but in the larger machines to be in circuitry. There were a total of 7 machines in this System 360 family, covering a performance range of about 600 to 1. It is still IBM's mainframe line, though changed through the decades, and is IBM's largest revenue product; as described in the Halloween issue of the Palo Alto Daily News, it is superior to complexes of minicomputers or PCs!

To meet the performance and cost constraints, the small machines had to use memory locations as registers in appropriate cases, where the larger machines could use hardware registers. I also discovered that there had to be some portion of the architecture that was reminiscent of each of the two significant families that we were replacing; otherwise the designers from those families couldn't develop the confidence that the design would be acceptable in their market segment. This resulted in decimal operations being memory to memory, like the 1401, rather than in registers, and indexing very similar to the 7094.

There were only two architectural advances of note in the full family:
  • the most significant was "base registers," which allowed a much smaller address field in the instruction format to address a much larger memory (I believe the inventor was Dr. Gerrit Blaauw); a short sketch of base-register addressing appears after this list,
  • and the other was making the addressing of the disk storage and tapes sufficiently alike that users familiar with tapes could experiment with the new disk storage without having to use random access exclusively (this was largely due to 1401 I/O designers, I believe).
In the fastest member, the Model 90s, there were three very powerful ones: loop trapping, associated with look-ahead, planned by Dr. Tien Chi Chen and myself; then virtual registers (registers assigned when and where needed) and linked arithmetic units, so the results of one arithmetic unit could become an input to another arithmetic unit without any intervening register storage; these were planned by the regular design team.
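A short sketch of base-register addressing in Python (the 12-bit displacement, the 4-bit base field, and the convention that register 0 means "no base" follow the System/360 format as I understand it; treat the details as illustrative):

    # The instruction carries only a 4-bit base register number and a 12-bit
    # displacement, yet the effective address can reach anywhere the base
    # register points in a much larger memory.
    registers = [0] * 16

    def effective_address(base_reg, displacement):
        assert 0 <= displacement < 4096            # 12-bit displacement
        base = registers[base_reg] if base_reg != 0 else 0
        return base + displacement

    registers[12] = 0x30000                        # where the program was loaded
    print(hex(effective_address(12, 0x1A0)))       # 0x301a0, from a 16-bit address field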
The principal negative consequence of the SPREAD committee's data format constraints appeared in floating point, where binary sizes had to be used for the exponent (eight bits) and fraction shifting had to be in multiples of four bits, so the rounding errors were larger than I thought reasonable. I tried to get relief from the constraint in this case, but was refused. It took about 20 years before IBM switched to the IEEE floating point format.

to the West Coast with IBM and Stanford

I was quite tired of the time and politicking demands and remembered vividly that I had agreed to go east for a minimum of 4 months and a maximum of 7 months. I also knew that Dr. Piore's intent was for me to go to the Silicon Valley area when I returned to California. I then structured my plan by calling Stanford's Engineering School to see if they would invite me to be a visiting professor for a couple of quarters. They did, so I informed management that I was returning to California as a visiting professor and that I would go on my own or as an IBM employee; it was up to them. They elected the latter, and I moved in the fall of 1964.

In January [1965] I taught computer design at Stanford; this was quite interesting, for I experienced quite a large range in the ease with which the students grasped the material. I never determined the reason, for I had no knowledge of their previous experiences. The second quarter I taught was concentrated on the analysis and explanation of the performance of a cache memory in enhancing the speed of the computer. It was not too well organized, for I was trying to increase my own understanding. Concurrently I was working on a number of my pet problems at the IBM lab in Los Gatos, with remarkable success.

made IBM Fellow, and Advanced Computer Systems

In late January, while teaching at Stanford, I received a telephone call from the east coast just before dinner. The call was to inform me that I was named an IBM Fellow, which entitled me to work on any project of my choosing, with a small budget to support it. While hearing this news my knees got weak, and I had to hold on to a cabinet for support; then I heard some chimney falling and realized it was an earthquake rather than an overly large reaction to the good news!
A few months later I was asked to consider attaching my Fellow activities to a new lab IBM was starting called Advanced Computer Systems, ACS, which would be designing a super computer, hopefully to serve the Livermore and Los Alamos labs. The project would be developing a computer proposed by a group from IBM research. I knew quite a bit about it and liked much, but not all, of the plan.
I agreed to do it, but being a Fellow, I did not report to their management. For a few weeks I tried to make some changes in areas I didn't like, but to no avail. I recognized that with the requirement to develop the computer design, the technology, and the total software support, there was no way they could possibly find a big enough market to meet IBM's antitrust requirements of profitability.
I didn't want to be associated with a loss-leading project, like had happened to STRETCH in the 1960s, so I thought about the problem and came up with a different approach - design the computer to be System 360 compatible and at the highest speed we could achieve. This would eliminate all of the software development cost. To make it profitable we could design one or two smaller machines with the performance spacing of the existing 360 product line, thus sharing the technology development costs over a much larger market and maybe meeting the profitability requirements.

(Q) How did you fare in the design challenge and the consequences?
(A) I presented my alternative to the project managers only to have it rejected out of hand, for they were wedded to the architecture they had developed. I was pondering how to separate myself from the impending loss leader when their top logic designer got into some trouble. The managers considered him unmanageable, but couldn't fire him, so they found the solution: transfer him to me! I was delighted, for he was responsible for the design of the most performance-determining part of their computer. I knew that if he did the design of that part of the 360 alternative, there could be no charges of faulty design. It took a bit over two weeks to describe enough of my performance approaches before he recognized that it was really feasible to compete with the other design.

He then went into it wholeheartedly and actually was able to achieve a slightly higher performance and a somewhat smaller cost. Bob Evans came out to ACS with about five technical people and they held a shoot-out. We won and I was made the lab manager. The first thing I did was have the two smaller computers costed. I then submitted the three system plan to corporate pricing. The single highest speed computer was a loss leader. The second smaller computer added made a break-even program. Adding the third even smaller computer came out with normal profit! IBM management decided not to do it, for it would advance the computing capability too fast for the company to control the growth of the computer marketplace, thus reducing their profit potential. I then recommended that the ACS lab be closed, and it was.

ACS Lab closed
(Q) What happened after the ACS lab was closed?
(A) Just after the shoot-out, about two thirds of the employees left IBM, most of them forming a startup venture designing a time-sharing computer; they got about 18 months' worth of capital investment. A small group started a semiconductor company to develop field effect transistor memory chips for add-on memory for IBM computers and also an ECL memory chip for cache memories. I stayed on at IBM analyzing the performance of computing systems as a function of memory size and disk and tape storage units in the environment of multiprogramming. While I was doing this, IBM management learned that a company called Compat had announced a minicomputer. They had granted me permission to be on the board of my brother's consulting company, Compata, and immediately assumed it was Compat. Their discussions went on for two or three months, without ever asking me, before they recognized that it wasn't Compata; however, emotions had reached such a fever pitch that they sent me a letter demanding that I resign from Compata's board, for it didn't look good that an IBM employee was on the board of another company in the computer field. I felt that my name had value to my brother, and as well I was "hot under the collar" about IBM's handling of the ACS project, so in September 1970 I wrote a letter explaining my position and resigning from IBM rather than from my brother's board. I also informed them that I intended to start my own large computer company! The president of my division tried to talk me out of it, for there was no money to be made in large computers!

Starting Amdahl
(Q) Why and how did you decide to start Amdahl Corporation?
(A) Ray Williams, the ACS financial man, was aware of my anger and disgust and came to me with the information that he had some contacts in the venture capital world. He proposed that we immediately develop a business plan, and he'd arrange meetings with the VCs. We took about three weeks to do an analysis of the formidable task of competing head on with IBM, for we intended to be compatible with IBM and, in fact, use their operating system (we knew IBM had decided to lease it independently of the mainframe to reduce their antitrust risk).

The reason for compatibility was that the mainframe market was almost exclusively IBM, and that producing a better product than IBM seemed simpler than changing the market place. We wrote up our business plan in as open and clear a manner as we could, outlining the difficulties and defining our strategies to counter them. We estimated our capital requirements at about $45 million.

(Q) How did you get your first start-up money?
(A) I then traveled to Japan, invited by Fujitsu, to give several lectures on computers to their engineers and to their board of directors. I had known several of their top people for three or four years and had great respect for them.

When I returned Ray had arranged a meeting with Ned Heiser, founder of a new venture firm in Chicago. We presented our business plan and requested an investment of $5 million. They considered this for several days and came back with an offer of $1 million. We refused on the basis that we'd have nothing accomplished that we could show for raising more money. They then asked us to determine the least that we would need, so Ray and I pondered this carefully, and decided we could do it with $2 million, if we were careful. Heiser agreed, and we received his investment in December 1970, two days after receiving an overdrawn notice from our bank!

Staffing, and the 100 Gate Package
(Q) Did you have problems staffing and designing a competitive computer?
(A) We were asked by the ACS start-up people to agree not to make employment offers until they had given up hope of getting more capital; we agreed, and in early January, 22 of their people listened to our plan and joined, so we were up and running.

My plan to use a larger chip size for easier interconnection was improved upon by Fred Buelow, who learned there was a discarded, easily routed approach called a gate-array, which wasn't economical enough for chip manufacturers, but from which we could get a 100-gate chip, Large-Scale Integration (LSI). This was phenomenal, for the ACS technology only provided about 35 gates, Medium-Scale Integration (MSI), and took three or four months for a gifted man to route!
With a package redesigned to provide much better heat conduction we could use air cooling instead of chilled water. We tried to get the big semiconductor companies to make our chips for us, but none of them would. Texas Instruments listened to our presentation, but after 20 minutes their vice president called me aside and said that it wouldn't work, and if it did it was the wrong level of integration, and if we kept on we would spend all our money and go belly-up with nothing to show for it! Quite disconcerting!
Dr. Amdahl holding a 100-gate LSI air-cooled chip. On his desk is a circuit board with the chips on it. This circuit board was for an Amdahl 470 V/6 (photograph dated March 1973).

More Financing
(Q) How did you finance such a demanding undertaking?
(A) During these early days Fujitsu friends would drop by from time to time. They never asked much about our progress, but they must have sensed our growing confidence, for in late spring they asked if we would consider an investment from them; they felt it would need about 5 days of presentation to evaluate us thoroughly, and they would sign an agreement to protect our technology. We agreed and presented for three days. On the fourth day they stopped us, saying they fully believed. They invested $5 million and sent 20 engineers to assist in the development. Shortly after the presentation our LSI chips came back, and they performed just as predicted! We went on trying to raise more capital, but no venture capital firm believed we could compete with IBM. It was difficult to argue the case since RCA, General Electric, Xerox, and Philco were all getting out of computers; RCA and General Electric had each spent about $5 billion and were giving up! A surprise visit by Heinz Nixdorf from Germany was exciting, for after a few hours he agreed to put in $5 million. This also excited Fujitsu, for they decided to invest an additional $5 million! These events stirred the venture capital people to invest $7.8 million!

Intellectual Property?
(Q) How did you avoid encroaching on other IBM patents and other technical property?
(A) When we started the design of our computer I reminded everybody that we were all bound by our agreements with IBM not to use any of their intellectual property, but that if we used only the descriptions in the IBM's publicly provided user's manual to do our designs, it would be free of conflict. Fortunately none of the designers had ever designed a 360 computer, so that manual was necessary and there was no carryover of 360 logic detail. I had a friend in IBM's legal staff who later informed me that IBM had made two in-depth investigations of our product to determine if there was any misappropriation of IBM property, but decided we were clean as a hound's tooth, however clean that is. We also had to test the availability of the IBM operating system licensed to our computer. We ordered it, and it took IBM almost two months to decide they had to do it, but they did! In short, we didn't do anything quite like IBM's patent coverage, and we took advantage of their dropping their tie-in software policy as well as their well-defined market place!

Speed of Development
(Q) How could you develop a product so much faster than IBM's?
(A) The technology in our computer was much more advanced than IBM's, for we had opted for the LSI chip with 100 gates rather than the MSI (medium scale integration) with only 35 gates. This meant we could avoid nearly two thirds of the chip crossings, which would cost significant time delays in the logic paths involved. This also reduced the size of the machine, and each foot of wire cost 1 nanosecond. We also designed a simpler machine, with a more orderly, but not slower, instruction execution sequence like the one I used in the WISC. There was also another factor, based on IBM's market management approach, where they avoided too great an advance in technology upgrades, for users could drop down one member of the 360 family if the smaller member was fast enough. Amdahl's offering was a bit more than three times faster than IBM's large member, and we priced it most competitively, for we had to overcome customer management's conviction that IBM was the only safe decision.
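The foot-per-nanosecond figure is easy to check (my arithmetic, not from the interview):

    \[
    c \approx 3\times10^{8}\ \mathrm{m/s} \approx 0.98\ \mathrm{ft/ns},
    \]

so even at the speed of light a signal covers only about a foot in a nanosecond, and on real backplane wiring somewhat less; shrinking the machine therefore buys delay directly.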

Amdahl computers during that time utilized IBM instruction sets that could employ the IBM operating system, which was almost universal in the computing marketplace. Consequently, Amdahls didn't contain architectural advances which altered instruction results, but did contain pipelining as in the WISC and had much more advanced technology, such as LSI (Large Scale Integration) with air cooling, a world first (developed by Fred Buelow), rather than IBM's MSI (Medium Scale Integration) with water cooling. Amdahl also included another world first, remote diagnostics, called "Amdac," invented by the field engineers.
Photograph of the Amdahl 470 V/6.

(Q) Why did you think you could compete with IBM when RCA and GE couldn't?
(A) IBM's earlier competitors developed their offerings while IBM was "bundling" its software with its hardware, so the competitor had to develop its own software. RCA designed a machine nearly compatible with IBM's and software that was also quite similar; however, the deviations from IBM's hardware were carefully designed to appear easy to move to, but not too difficult to return to IBM from if the customer didn't like it. RCA, however, had made it quite difficult to return, for they considered it to be the "barb" on their fishhook! Coming later, and as IBM insiders, we had the advantage of being able to plan on IBM having to maintain its unbundling; however, the venture capital world was unable to readjust its thinking when the new strategy was presented by an aspiring startup! Some even thought we couldn't design an IBM compatible computer since RCA couldn't! The cost of developing our own operating system and other supporting software would have well more than doubled our capital requirements!

Use IBM's Operating System(s)
(Q) Why could you plan on using the IBM operating system?
(A) When IBM decided that they were in serious risk of an antitrust action for offering their hardware complete with all of the software, thus virtually keeping any other supplier from being able to make an economically attractive offer in this marketplace, they decided the separation of their software package from the hardware would not be too costly, as long as the software package was kept bundled (this is my guess, for I was not involved in any decision-making). The pricing of the software bundle was also economical enough to discourage competition. IBM also did not make too big a public announcement, as far as my recollection of the event goes, for the VCs didn't seem aware of it. IBM also took quite a bit of time to decide they had to honor our order for their software package to be licensed to an Amdahl computer, but I was convinced they had to, or the antitrust threat would immediately materialize!

Sales and IBM Response
(Q) How did Amdahl's marketing results progress?
(A) The initial market penetration by Amdahl was its first sale, to the NASA space computing center in New York, where we were allowed to begin installation on Friday night, with the expectation that it would take about a week and a half, as IBM required; they were astonished to be informed at Sunday noon that the computer was ready for use!

The next sales were primarily to Universities, where student users appreciated the opportunity to mix in some of their own system software, rather than being restricted to only IBM's. We also sold one to a computing job-shop, but we were still not able to get a commitment from a commercial account until Massachusetts Mutual Insurance Company, who was very unhappy with IBM, decided to buy from us instead. The installation was very successful, and it was recognized that the Amdahl computer was viable! That broke the log jam, and most of our computers were sold to commercial entities! Within the next 18 months, we had sold enough so that our net profit, which was 30%, just like IBM's, had paid off all of our corporate development cost, which had reached some $60 million! So we had a perfect balance sheet.
Our first full year of shipments had been $96 million, our second year had been $196 million, and our third year had been $320 million! IBM had, of course, been effectively reducing its prices by buying customer software packages of little value to them and had sold many more machines than we had, but they realized they had to reduce the customer's cost of computing to preserve their market place.

(Q) How did IBM respond to your success?
(A) The next move by IBM was the announcement of a new, improved 360 family, the first member to be the 3030 (I'm sure you hunters can recognize the significance of that choice of number). This machine was to be equal in speed to Amdahl's and was to be priced 30% lower than ours! Immediately we analyzed what we had to do to respond. We came up with an improvement of our own, including a smaller version to expand the market we addressed. We also had to negotiate with Fujitsu to get lower prices on their manufactured parts (their manufacturing had been very profitable, and with a smaller version, they could reduce their prices and still fare as well). With this plan, we were able to maintain our 30% pretax profit in spite of IBM's attempt to "mow our grass to ground level". Over time our competition reduced the cost of computing for mainframe customers by over an order of magnitude! IBM retaliated in Japan by calling on the government to limit the use of their architecture and software there, or they would reduce prices in Japan to kill off the Japanese computer companies, or so the government informed Fujitsu.

(Q) How did you expand your market into Europe?
(A) Nixdorf had not been a significant player, for their marketing people had only had experience selling small machines, and they decided not to try to make a change as drastic as would be required, so they sold their stock for a very significant profit. Amdahl entered the European market, first in Germany upon receiving an inquiry and visit from the Max Planck Institute in Munich and from the European Space Agency in Oberpfaffenhofen (with Nixdorf's blessing and assistance), then in Norway where I was questioned about my recollections of my Norwegian roots and coerced into singing a song written by the immigrants (this was publicized in Fortune Magazine under the title "A Frog Sings in Norway"). Italy came to us in the person of the former IBM country manager who now had responsibility for all central government computing, and who couldn't get a deal from IBM. In France I struck a deal where we would get import licenses for any sale we could make if we could have anything made for our computer in a factory in Toulouse. Being an inveterate punster I informed my VP of engineering that to get the proper picture he should make "la trek Toulouse". We were quoted a price for memory which was slightly less than it cost us to make it! Britain was easy to enter, but later.

(Q) Your relationship with Fujitsu was so strong; did you ever consider tempering it?
(A) I was concerned that our dependence on Fujitsu was in danger of making us effectively a subsidiary, and I felt that the only way we could be independent would be to find an alternative supplier of new and much denser chips for our new advanced computer offerings. I was unable to get support from the engineering staff, for they felt they were not capable of dealing with some of the problems that might come up. In the meantime Fujitsu heard of this and began to try to make me stop agitating and accept their newly planned chips. It got bad enough that the president came to Sunnyvale [California] and verbally chewed me out. I had wanted a reduced dependence on Fujitsu, not a separation from them, for I was very much mindful that without them Amdahl would never have survived! I also had quite a number of very close Japanese friends, and I still have them today. I must also say that Fujitsu treated the company very fairly for the rest of its existence.

(Q) Did your efforts affect your health?
(A) The stress of this struggle was so severe that my back went into spasm. Some twelve years earlier I had ruptured a disc, and it had healed, but it had still remained very sensitive. I realized that this spasm was so severe that I couldn't go back to work for quite a long time, so I decided it was best to resign rather than continue struggling.

Trilogy
(Q) What did you think you could do as a follow-on to Amdahl Corporation?
(A) My back took about eight months to get back to near normality. During that time I pondered what I should do when healed. Carl and I brainstormed an interesting approach to very large scale integration, which we felt could make a wafer-size chip! I mentioned it to Clifford Madden, Amdahl's VP of Finance. He got so excited that he insisted that the three of us should start a new company!

(Q) Having incurred so many problems in building a semiconductor facility, how could you have done it differently?
(A) I was chairman of the board, for I still had to protect my back, Clifford was president, and Carl was head of engineering. We named the company Trilogy, for the technique employed to make the wafer-scale integration with high yield was to use triplet gates, where it was possible to test each gate and to remove one, or even two, of the gates if they were faulty, thus assuring an effectively working gate unless all three were faulty. The financial planning community became wildly excited, and we managed to acquire over $100 million.
Carl and I had planned to have a semiconductor company process the chips, but some of the things we would have to do weren't standard, so the president decided we'd have to build our own facilities. The building of the super clean semiconductor facility was delayed during construction by unusually heavy and extended rains, so the costs mounted more rapidly than planned. The complexity of the routing program software, depositing enough metal for high power distribution, and good bonding of the chip to the chip carrier were solved, but took extra time. The only problem we hadn't completely solved was the leakage of etching fluids through layers of interconnection (the universal problem for all semiconductor companies). We estimated that two more passes of making the chip, testing for leakage faults, determining how to modify the masks to fix it, and making the new masks would take about 24 months. The costs of the delays had reduced our capital so much that only about 24 months of run rate were left!
We had proven to our satisfaction that we could do the wafer-sized chip, for we had made three-quarters of the wafer successfully, but by the time we could successfully produce our full chip repeatedly, we would have no money left to exploit it, and we felt certain we could not raise more money! Carl suggested we could successfully produce 1/4-size chips and design a small product using them. I felt that the level of revenue we could achieve with that approach could hardly keep us afloat, so I contacted some of our principal investors and asked what they would recommend. They asked us to acquire a company with a computer product that would benefit from our remaining funds, so we did that. The negative publicity from this was as large as the positive publicity when we started!
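The yield argument behind the triplet gates can be illustrated with a small, entirely hypothetical calculation (the probabilities and gate count below are made up for illustration):

    # With single gates, one faulty gate anywhere spoils the wafer; with
    # triplets, a logical gate fails only if all three of its copies are faulty.
    p = 0.001                      # assumed probability that any one gate is faulty
    n_logical_gates = 100_000      # assumed logical gates on the wafer

    yield_single = (1 - p) ** n_logical_gates
    yield_triplet = (1 - p ** 3) ** n_logical_gates

    print(f"single gates:  {yield_single:.2e}")    # essentially no good wafers
    print(f"triplet gates: {yield_triplet:.6f}")   # nearly every wafer usable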

(Q) What was it about a start-up company that made it so attractive to you?
(A) If I had the chance to do it all over again I would first offer enough money to a semiconductor company to compensate for solving the nonstandard processes. If that wouldn't work, I would take Carl's suggestion and see if we could sell the product design and chip availability to stay in business. These might not have worked, but if they did we could have made a significant success! I strongly enjoyed the atmosphere of cooperative enthusiasm in the start-up adventure!



Amdahl's "Law"

During this time, I had access to tape storage programs and data history for commercial, scientific, engineering and university computing centers for the 704 through the 7094. This gave insight into the relative usage of the various instructions and a most interesting statistic -- each of these computing center workload histories showed that

  • there was 1 bit of I/O for each instruction executed!
  • I also was able to determine the speed of computing that could be maintained for a given memory size. This was related to disk and tape speeds in the environment of multi-processing.
These latter two properties I determined in 1969, when I privately estimated that System 360 would have to change the address length to exceed about 15 MIPS (Million Instructions Per Second).
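One way the roughly 15 MIPS figure can be reconstructed, using the rule of thumb often attributed to Amdahl of about one byte of memory (and one bit per second of I/O) per instruction per second; the rule itself is my addition here, not something stated above:

    \[
    2^{24}\ \text{bytes} \approx 16.8\ \text{MB}
    \;\Rightarrow\; \text{roughly 16 MIPS before the 24-bit System 360 address becomes the limit.}
    \]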
Livermore Laboratory heard about the 1 bit of I/O and thought it couldn't be true, so they ran tests for a month and found that during office hours, when users were using the machines from their consoles, the number of bits of I/O averaged 1.1, and at night doing batch processing it averaged 1.0. They were surprised, but neither they nor I knew why it should have that value.
When virtual memory came into common usage, the number of bits of I/O per instruction executed came down. Although I had limited data, I could reasonably estimate that it correlated quite closely with the reduction in the percentage of the program that had not needed to be brought in or retained during the course of its execution.

In 1967 I was asked by IBM to give a talk at the Spring Joint Computer Conference to be held on the east coast. The purpose was for me to compare the computing potential of a super uniprocessor to that of a unique quasi-parallel computer, the Illiac IV, proposed by a Mr. Slotnick.

The proposed Illiac IV had a single instruction unit (I-unit) driving 16 arithmetic units (E-units). Each E-unit provided its own data addresses and determined whether or not to participate in the execution of the I-unit's current instruction, an interesting, but controversial, proposal. The super uniprocessor was a design type, not a specific machine, so I had to estimate to the best of my ability what performance could reasonably be achieved by such a design.

Figure 1 shows a diagram of the Illiac IV; Figure 2 shows the performance of the Illiac IV on a problem having a varied, but reasonable, range of parallelism under the control of an operating system with characteristics similar to those then currently in use, having quite a bit of system management and data movement code.
Figure 3 shows the performance of the super uniprocessor on that same problem and operating system; Figure 4 shows the performance of the Illiac IV with Slotnick's expected future goal of 256 E-units and running a problem having a varied range of parallelism, but reaching a level of parallelism matching "America's symbol of purity," Ivory soap, 99.44%.
Figure 5 shows the formula I generated to estimate the Illiac IV's performance, giving it the benefit of assuming that if some parallelism existed all processors could be usefully employed.
The formula generated to estimate the Illiac IV's performance. The numerator in the formula is Ps x (S+P), and the denominator is S+P/16. In this formula, S is the % of the problem that must be executed sequentially (or serially), and P is the % of the problem which may be executed in parallel if the computer has this capability. The sum (S+P) is always equal to 100%, or 1, for it is the workload to be performed. Ps is the performance of a computer which can only execute the problem in a totally sequential manner, regardless of the problem possessing the capability for parallel execution, and which has the speed of the Illiac IV's I-unit. The denominator reflects the capability of the Illiac IV to execute the P component 16 at a time in each instruction execution time (this is giving an advantage to the Illiac IV, for only in vector or matrix operations where the sizes are multiples of 16 would it perform that well). The ratio (S+P)/(S+P/16) represents the speed-up due to the Illiac IV's architectural feature for parallelism. This speed-up times Ps is the optimistic performance of the Illiac IV. By the way, no one challenged this formula; just my range of only up to 50% parallel was thought by some to be a bit low.
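Written out, the formula described above is, with S + P = 1,

    \[
    \mathrm{Perf}_{\text{Illiac IV}} \;\approx\; P_s \cdot \frac{S + P}{S + P/16},
    \]

and with N execution units in place of 16 the speed-up factor (S+P)/(S+P/N) = 1/(S+P/N) is the form in which Amdahl's Law is usually quoted.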

These Figures are not quite the same as in the 1967 presentation, for they weren't published, nor did I keep them, for I had no expectation of the intensity of their afterlife! I never called this formula "Amdahl's Law" nor did I hear it called that for several years; I merely considered it an upper limit performance for a computer with ONE I-unit and N E-units running problems under the control of that time period's operating system!

The debate between me and Mr. Slotnick was joined by many in the audience, and it became quite heated. I felt Mr. Slotnick was trying to egg me into attacking him rather than his computer design, but I carefully avoided that, only to be attacked by Dr. Herbert Grosch in the audience for not attacking him. It became a bit of a circus, and I was quite unhappy about being involved, for I thought of it as a rational analysis of two competing design approaches, not a bashing of another human for offering a controversial design approach!

Several years later I was informed of a claimed proof, by someone at Los Alamos, that Amdahl's Law was invalidated: a number of computers were interconnected as an N-cube by communication lines, but with each computer also connected to I/O devices for loading the operating system, initial data, and results. This allowed all control and data movement to be carried out in parallel.

I didn't enter the fray; I merely commented that what they called Amdahl's Law merely described the Illiac IV, which had only one I-unit. I also heard that Amdahl's Law was used to challenge the multi-computer systems developed by Massively Parallel, a Colorado firm, where their chief system designer looked at the formula and thought it appeared to have the form of an information-theoretical statement and used it to further enhance his system! As a result Massively Parallel invited me to join their advisory board!
I really still do not consider Amdahl's Law to be as much of a law as the relationship between memory size and computer performance, or the number of bits of I/O per instruction executed (and its reduction when considered as a function of the fraction of the program required to be in "virtual memory"). These seem to be lost in the mists of time! I also consider the WISC to be the most remarkable architectural achievement I've made, and with no input from any source other than sheer inventiveness.

There has been no publicity about the capability of the actual Illiac IV. I did hear unofficially that it was unable to be successfully debugged at the University of Illinois and that it was shipped to the NASA facility here in Sunnyvale where debugging was being carried out by volunteers. I heard a few months later that they had gotten it to work and had executed a test program, but that no information on its performance had been made available. I'm not certain that this information was entirely accurate so I cannot vouch for it.