Stories about SAGE



Factoids which need a home - Apr 2014
Origins of the Internet
The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors there, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Professor Martin Campbell-Kelly says that
a) J.C.R. Licklider was a major player in the machine/human interaction of SAGE
b) the massive SAGE effort put the US way ahead of the Brits in computers

Many people say that the IBM profits from SAGE paid for IBM System 360 development.

from a "scope dope" Gary Odle - December 2003
I just read your article on the weaknesses of the SAGE system and agree with you whole-heartedly. I was a Weapons Controller (i.e., "scope dope") at the Duluth SAGE site from 1975-76 and again for the year 1978.

I always marvelled that in our SNOWTIME exercises (SAC/NORAD Operational Weapons Testing and Evaluation ... send B-52's and KC-135's north and have them attack us) the attackers always came in high and slow, and right in the middle of our radar coverage, neatly avoiding our blank areas. Our "kill" percentage would be about 99%. Commanders would praise us, nice reports would be written, and I would be angry that the whole thing had been a sham.

After four years in air defense I figured that if the Air Force wasn't going to take it seriously, I didn't need to be a part of it. I left the Air Force in 1979 to pursue other interests.

Controlling fighters in SAGE was fun ... like being paid to play video games ... but no way for a responsible adult to spend their career.

Gary Odle

Comment from Joe Romito - Mar 2014
One of the earlier posts on the SAGE website used the SNOWTIME acronym incorrectly. The term stood for SAC-NORAD Operational Weapons Testing Involving Military Electronics. Its primary purpose was not to test SAGE, but rather to test ground-based Army air defense weapons in the US, which in the late 1960s were mostly Nike-Hercules missile units stationed around major population centers and military facilities. In the late 1960s there were roughly 15 defended areas around the country. I was most familiar with the ones on the west coast -- Seattle, San Francisco, and LA.

The exercise was conducted annually, against one defended area at a time. It was conducted late night-early morning to minimize interference with the FAA's air traffic control radars. In the exercise SAC aircraft would fly against the area using their maximum radar jamming capabilities. And the Nike units were allowed to use almost all of their full wartime countermeasures systems to counter the jamming. As someone who served in the San Francisco Nike brigade 1968-1970, I know that it was usually a humbling experience for the Army. I seem to recall that typical results were that Nike units would defeat at best one third of the attacking aircraft. Keep in mind that the Nike mission was to shoot down attacking Soviet bomber flights, each of which was carrying multiple thermonuclear bombs. If just one bomber in a flight completed its mission, the target city would likely have been destroyed. The sad reality was that Nike-Hercules radars were using technology from the 1950s and probably would have been no match for attacking enemy bombers in an actual wartime situation. Fortunately, we never found out for certain if this was the case.

Western Electric activities
Robert F. Martina - SAGE Test Director
Western Electric, then part of AT&T, was awarded the contract as system integrator for the entire SAGE System. Close to 500 WE engineers went through SAGE computer/radar school at MITRE, Hanscom Field, 15 at a time. They were responsible for testing all sectors in the country and turning the system over to the Air Force. Five test teams of approximately 50 each (25 at radar and interceptor bases; 25 at the direction/computer centers) were deployed at a time. Sector integration and certification testing took 9 months. Some engineers were left behind to upgrade the system as changes came from Rand/SDC and MITRE as well as the radar contractors.

Problems with software and hardware were tracked and improvements suggested. Mixing simulated inputs with live data was one innovation made and programmed by this team. Alumni of this organization still meet annually (2002 in Boston) to share a few memories of life on the road, a long way from the flagpole, the excitement of running 12 intercept missions a day, and trips to find the source of permanent echoes used for azimuth registration of radars.

Many of these engineers left WECo after the project phased down in the early 60's and became part of many other organizations, particularly NASA and its contractors.

R.F. Martina (a 5-sector man), Senior Test Director
- Great Falls and Phoenix Air Defense Sectors

Update 10/28/02 - WECo SAGE reunion ... will be held in Cody WY in 03.

From Chris McWilliams
Subject: Computer Museum

Hi Ed,
The visit to the museum was quite enjoyable. They have all kinds of computer equipment in there, much of which cost $millions to build. I was especially impressed with some of the Cray super computer equipment. Dag Spicer was very helpful and informative on all the various eras of computer development their display covers.

They have 2 SAGE consoles (Intercept and Weapons Tech) which are very similar to the ones I worked on. They also have core and drum memories, plus a control panel and racks showing the thousands of tubes it took to run SAGE. I explained to Dag how we used to spend a lot of the quiet midnight shifts playing "Battleship" over the extensive communication setup we had. Also mentioned the problems we had getting the various radar sites located precisely enough in the computer to avoid getting multiple returns from just one aircraft.

Also, when the system started loading up on data, the "frame" time (the time to run through the entire program) kept getting longer. When it reached 15 seconds, we would start dumping data. We avoided that by controlling the data input to use only that necessary for the task at hand.
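The overload rule Chris describes -- watch the frame time and shed low-priority inputs before it hits the limit -- can be sketched in a few lines. This is a hypothetical illustration only: the input names, priorities, and per-input cost below are invented; only the 15-second limit comes from the account above.

```python
# Sketch of the load-shedding rule described above: if a full "frame"
# (one pass through the tracking program) would exceed the limit, drop
# the lowest-priority radar inputs until it fits.
FRAME_LIMIT = 15.0  # seconds, per the account above

def shed_load(inputs, cost_per_input):
    """inputs: list of (priority, name) pairs; higher number = more important."""
    inputs = sorted(inputs, reverse=True)  # most important first
    # Drop from the low-priority end until the frame fits in the limit.
    while inputs and len(inputs) * cost_per_input > FRAME_LIMIT:
        inputs.pop()
    return [name for _, name in inputs]

# Four hypothetical radar feeds at 4.5 s of frame time each: 18 s total,
# so the least important feed gets dumped.
kept = shed_load([(3, "height-finder"), (9, "search-A"),
                  (7, "search-B"), (1, "gap-filler")],
                 cost_per_input=4.5)
print(kept)  # → ['search-A', 'search-B', 'height-finder']
```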

I went through some of their pictures, and am trying to identify some of the people I recognized.


This is an extended e-mail from Les Earnest (February 20, 1999)
(Table of Contents and formatting added by Ed Thelen)

Attached FYI are some articles on SAGE and related C3 systems that I wrote about ten years ago for the Usenet newsgroup comp.risks.

-Les Earnest


Testing the fire-up decoder
Duplexed for reliability
The C3 legacy, Part 1: top-down goes belly-up recursively
The C3 legacy, Part 2: a SAGE beginning
The C3 legacy, Part 3: Command-control catches on
The seductive image
The C3 legacy, Part 4: A gaggle of L-systems
The C3 legacy, Part 5: Subsystem I
The C3 Legacy, Part 6: Feedback
Was there ever a command and control system that worked?
SAGE revisited

[Risks 8.74]


This is an account of two ancient (30-year-old) computer risks that were not publicly disclosed for the usual reasons. It involves an air defense system called SAGE and a ground-to-air missile called BOMARC.

SAGE was developed by MIT in the late '50s with Air Force sponsorship to counter the threat of a manned bomber attack by you-know-who. It was also designed to counter the political threat of a competing system called Nike that was being developed by the Army.

SAGE was the first large real time computer system. "Large" was certainly the operative term -- it had a duplexed vacuum tube computer that covered an area about the size of a football field and a comparably sized air conditioning system to take away the enormous heat load. It used an advanced memory technology that had just been invented, namely magnetic core, and had a larger main memory than any earlier computer, though it is not impressive by current standards -- it would now be called 256k bytes, though no one had heard of a byte then.

The system collected digitized radar information from multiple sites and used it to automatically track aircraft and guide interceptors. SAGE was designed to work initially with manned interceptors such as the F-102, F-104, and F-106 and used a radio datalink to transmit guidance commands to these aircraft. It was later modified to work with the BOMARC missile.

Each computer site had about 50 display consoles that allowed the operators to assign weapons to targets and monitor progress. As I recall, there were eventually between one and two dozen SAGE systems built in various parts of the U.S.

BOMARC missiles used a rocket booster to get airborne and a ramjet to cruise at high altitude to the vicinity of the target. The missile then used its Doppler radar to locate the target more accurately so that it could dive at it and detonate. It could carry either a high explosive or a nuclear warhead.

BOMARCs were housed in hardened structures. When a given missile received a launch command from SAGE, sent via land lines, the roof would roll back, the missile would erect, and if it had received a complete set of initial guidance commands in the meantime it would launch in the specified direction.

Testing the fire-up decoder

It was clearly important to ensure that the electronic guidance system in the missile was working properly, so the Boeing engineers who designed the launch control system included a test feature that would generate a set of synthetic launch commands so that the missile electronics could be monitored for correct operation. When in test mode, of course, the normal sequence of erecting and launching the missile was suppressed.

I worked on SAGE during 1956-60 and one of our responsibilities was to integrate BOMARC into that system. This led us to review the handling of launch commands in various parts of the system. In the course of this review, one of our engineers noticed a rather serious defect -- if the launch command system was tested, the missile would be in a state of readiness for launch. If the "test" switch was then returned to "operate" without individually resetting the control systems in each missile that had been tested, they would all immediately erect and launch!

Needless to say, that "feature" was modified rather soon after we mentioned it to Boeing.

Duplexed for reliability

For some reason, I got assigned the responsibility for securing approval to put nuclear warheads on the second-generation BOMARCs, which involved "proving" to a government board that the probability of accidentally launching a missile on any given day as a result of equipment malfunctions was less than a certain very small number and that one berserk person couldn't do it by himself. We did eventually convince them that it was adequately safe, but in the course of our studies we uncovered a scary problem.

The SAGE system used land lines to transmit launch commands to the missile site and these lines were duplexed for reliability. Each of the two lines followed a different geographic route so that they would be less likely to be taken out by a single blast or malfunction. There was a black box at the missile site that could detect when the primary line went bad and automatically switched to the alternate. On examination, we discovered that if both lines were bad at the same time, the system would remain connected to the alternate line and the amplifiers would then pick up and amplify whatever noise was there and interpret it as a stream of random bits.

We then did a Markov analysis to determine the expected time that it would take for a random bit stream to generate something that looked like a "fire" command for one of the missiles. We found that expected value was a little over 2 minutes. When such a command was received, of course, the missile would erect and prepare to launch. However, unless the missile also received a number of other commands during the launch window, it would automatically abort. Fortunately, we were able to show that getting a complete set of acceptable guidance commands within this time was extremely improbable, so this failure mode did not present a nuclear safety threat.
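The report itself isn't reproduced here, but the flavor of that Markov analysis can be sketched. For a fair random bit stream, the expected number of bits before a given pattern first appears follows a classic formula based on the pattern's self-overlaps. The 16-bit command word and the 500 bit/s line rate below are assumptions for illustration; the actual BOMARC command format and line rate are not given in the text.

```python
# Expected number of fair random bits until a given bit pattern first
# appears. Classic result: E[T] is the sum of 2^k over every k where
# the pattern's length-k prefix equals its length-k suffix.
def expected_bits_until(pattern: str) -> int:
    n = len(pattern)
    return sum(2 ** k for k in range(1, n + 1)
               if pattern[:k] == pattern[-k:])

# Hypothetical 16-bit command word on a hypothetical 500 bit/s line.
code = "1000000000000001"
bits = expected_bits_until(code)   # 65538 (65536 for the full word + 2 for the 1-bit overlap)
print(bits / 500, "seconds")       # about 131 s -- "a little over 2 minutes"
```

Note that the answer depends on the pattern itself, not just its length: self-overlapping patterns take longer on average to appear, which is exactly the kind of detail a Markov analysis captures.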

The official name of the first BOMARC model was IM-99A, so I wrote a report about this problem titled "Inadvertent erection of the IM-99A." While that title raised a few eyebrows, the report was destined to get even more attention than I expected. Its prediction came true a couple of weeks after it was released -- both phone lines went bad on a BOMARC site in Maryland, near Washington D.C., causing a missile to suddenly erect and start the launch sequence, then abort. Needless to say, this scared hell out of the site staff and a few other people.

The Air Force was suitably impressed with our prediction and I was immediately called upon to chair an MIT-AT&T committee that had the honor of fixing the problem. The fix was rather easy: just disconnect when both lines are bad. With good engineering practice, of course, this kind of problem wouldn't occur. However, the world is an imperfect place.

[Risks 9.60]

The C3 legacy, Part 1: top-down goes belly-up recursively

After more than 30 years of accumulated evidence to the contrary, the U.S. Defense Department apparently still believes that so-called command-control-communications (C3) systems should be designed and built from the top down as fully integrated systems. While that approach may have some validity in the design of weapon systems, it simply doesn't work for systems intended to gather information, aid analysis, and disseminate decisions. The top-down approach has wasted billions of dollars so far, with more to come, apparently.

I noticed the citation in RISKS 9.52 of the article, "The Pentagon's Botched Mission," in DATAMATION, Sept. 1 1989, which describes the latest development failures in the World Wide Military Command and Control System (WWMCCS). The cited article indicates that they are still following the same misguided "total system" approach that helped me to decide to leave that project in 1965. I confess that it took me a while to figure out just how misguided that approach is -- I helped design military computer systems for 11 years before deciding to do something else with my life.

In RISKS 9.56, Dave Davis and Tom Reid observe that current C3 development projects seem to be sinking deeper into the mire of nonperformance even as the plans for these systems become more grandiose and unrealistic.

Please understand that I am not arguing against top-down analysis of organizational goals and functions. It is clearly essential to know which are the important responsibilities of an organization in order to properly prioritize efforts. Based on my experience, attempts at aiding analysis and decision-making tasks with computer applications should begin with the lowest levels and proceed upward IN THE CASES THAT WORK. Contrary to some widely held beliefs, many such tasks do not lend themselves to computer assistance and the sooner one weeds out the mistakes and intractable tasks the faster one can improve the areas that do lend themselves to automation and integration.

A great deal of time, effort, and money can be saved by approaching development in an evolutionary, bottom-up way. It is essential to shake down, test, and improve lower-level functions before trying to integrate at a higher level. Trying to do it all at once leads to gross instability that takes so long to resolve that the requirements change long before the initial version of the system is "finished." Each time one moves up a level it is usually necessary to redesign and modify some or all of the system. It is much faster to do that a number of times than it is to try to build a "total system" the first time, because that approach almost never works.

Someone (Karl von Clausewitz?) once said that people who don't know history are condemned to repeat it. A modern corollary is that people who do know history will choose to repeat it as long as it is profitable. Unfortunately, the Defense Department's procurement policies often reward technical incompetence and charlatanism. I will support this claim with a few "peace stories" that would have been much more atrocious "war stories" if any of the systems that we designed had been involved in a real war. Fortunately, that didn't happen.

The presumption that computer-communication system development should be done on a grand scale from the outset is just one of many bad ideas that have taken root within the military-industrial establishment. The reason that this misconception has persisted for decades is that there is no penalty associated with failure. On the contrary, failures are often very profitable to the contractors -- the bigger, the better. The bureaucrats who initiate these fiascos usually move on before the project fails, so if anyone tries to point fingers they can say that it was the fault of the subsequent management.

While the "total system" approach is one of the more persistent causes of failure in C3 development, it is by no means the only misconception afloat. In subsequent segments I will review some other causes of historical fiascos. All of this will be ancient history, since I got out of this field about 25 years ago. Of course, many of the more recent fiascos are protected from public scrutiny anyway by the cloak of national security.

[RISKS 9.65]

The C3 legacy, Part 2: a SAGE beginning

Thanks to the reader who pinned down my half-remembered quotation in the preceding segment (RISKS 9.60):
> The actual quote is "Those who cannot remember the past are condemned
> to repeat it." from George Santayana's "The Life of Reason".

The grandfather of all command-control-communication (C3) systems was an air defense system called SAGE, a rather tortured acronym for Semi-Automatic Ground Environment. As I reported earlier in RISKS 8.74, some of the missiles that operated under SAGE had a serious social problem: they tended to have inadvertent erections at inappropriate times. A more serious problem was that SAGE, as it was built, would have worked only in peacetime. That seemed to suit the Air Force just fine.

SAGE was designed in the mid-to-late 1950s, primarily by MIT Lincoln Lab, with follow-up development by IBM and by the nonprofits System Development Corp. and Mitre Corp. The latter two were spun off from RAND and MIT, respectively, primarily for this task.

SAGE was clearly a technological marvel for its time, employing digitized radar data, long distance data communications via land lines and ground-air radio links, the largest computer (physically) built before or since, a special-purpose nonstop timesharing system, and a large collection of interactive display terminals. SAGE was necessarily designed top-down because there had been nothing like it before -- it was about 10 years ahead of general purpose timesharing systems and 20 years ahead of personal computers and workstations.

While the designers did an outstanding job of solving a number of technical problems, SAGE would have been relatively useless as a defense system if a manned bomber attack had occurred for the following reasons.

  1. COUNTERMEASURES. Each SAGE system was designed to automatically track aircraft within a certain geographic area based on data from several large radars. While the system worked well under peacetime conditions, an actual manned bomber attack would likely have employed active radar jamming, radar decoys, and other countermeasures. The jamming would have effectively eliminated radar range information and would even have made azimuth data imprecise, which meant that the aircraft tracking programs would not have worked. In other words, this was an air defense system that was designed to work only in peacetime! (Some "Band-aids" were later applied to the countermeasures vulnerability problem, but a much simpler system would have worked better under expected attack conditions.)

  2. HARDENING. Whereas MIT had strongly recommended that the SAGE computers and command centers be put in hardened, underground facilities so that they could at least survive near misses, the "bean counters" in the Pentagon decided that this would be too expensive. Instead, they specified above-ground concrete buildings without windows. This was, of course, well suited to peacetime use.

  3. PLACEMENT. While the vulnerabilities designed into SAGE by MIT and the Pentagon made it relatively ineffective as a defense system, the Air Defense Command added a finishing blunder by siting most of the SAGE computer facilities in such a way that they would be bonus targets! This was an odd side effect of military politics and sociology, as discussed below.

In the 1950s, General Curtis LeMay's Strategic Air Command consistently had first draw on the financial resources of the Defense Department. This was due to the ongoing national paranoia regarding Soviet aggression and some astute politicking by LeMay and his supporters. One thing that LeMay insisted on for his elite SAC bases was that they have the best Officers Clubs around.

MIT had recommended that the SAGE computer facilities be located remotely, away from both cities and military bases, so that they would not be bonus targets in the event of an attack. When the Air Defense Command was called upon to select SAGE sites, however, they realized that their people would not enjoy being assigned to the boondocks, so they decided to put the SAGE centers at military bases instead.

Following up on that choice, the Air Defense Command looked for military bases with the best facilities, especially good O-clubs. Sure enough, SAC had the best facilities around, so they put many of the SAGE sites on SAC bases. Given that SAC bases would be prime targets in any manned bomber attack, the SAGE centers thus became bonus targets that would be destroyed without extra effort. Thus the peacetime lifestyle interests of the military were put ahead of their defense responsibilities.

SAGE might be regarded as successful in the sense that no manned bomber attack occurred during its life and that it might have served as a deterrent to those considering an attack. There were reports that the Soviet Union undertook a similar experimental development in the same time period, though that story may have been fabricated by Air Force intelligence units to help justify investment in SAGE. In any case, the Russians didn't deploy such a system, either because they lacked the capability to build a computerized, centralized "air defense" system such as SAGE or had the good sense not to expend their resources on such a vulnerable kluge.

[RISKS 9.67]

The C3 legacy, Part 3: Command-control catches on

(Continuing from RISKS 9.65)

As the U.S. Air Force committed itself to the development of the SAGE air defense system in the late 1950s, new weapons that did not require centralized guidance came to be rejected, even though some appeared to be less vulnerable to countermeasures than those that depended on SAGE. An example was a very fast, long range interceptor called the F-109 that was to carry a radar that would enable it to locate bombers at a considerable distance and attack them. As such, it did not need an elaborate ground-based computer control system.

My group at MIT Lincoln Lab had been responsible for integrating earlier interceptors and missiles into SAGE. We subsequently joined Mitre Corporation when it was formed from Lincoln Lab's rib and were later assigned the responsibility for examining how the F-109 interceptor might be used.

I had assumed that the Air Force was genuinely interested in seeing how the F-109 could best function in air defense. Accordingly, we worked out a plan in which the interceptors that were in service would be deployed to various airfields, both civilian and military, so as to make them less vulnerable to attack. This dispersal together with their ability to function with minimal information about the locations of attacking bombers appeared to offer a rather resilient air defense capability that could survive even the destruction of the vulnerable SAGE system.

When we published a utilization plan for the F-109 based on these ideas, the Air Force made it clear that we had reached the "wrong" conclusion -- we were supposed to prove that it was a bad idea. We apparently had been chosen to "study" it because, as designers of SAGE, we were expected to oppose any defensive weapons that would not need SAGE.

In order to deal with the embarrassing outcome of this study, a Colonel was commissioned to write a refutation that confirmed the ongoing need for centralized computer control. The Air Force insisted that anyone who requested our report must also get a copy of the refutation. Mitre necessarily acceded. In any case, the F-109 was never built in quantity.

The seductive image

Though the designers of SAGE came to recognize its weaknesses and vulnerabilities and the Air Force should have been reluctant to build more systems of the same type, it somehow came to be regarded as the model of what the next generation of military control systems should be. Never mind that it was essentially useless as a defense system -- it looked good!

The upper floor of each SAGE command center had a large room with subdued lighting and dozens of large display terminals, each operated by two people. Each terminal had a small storage-tube display for tabular reference data, a large CRT display of geographical and aircraft information (with a flicker period of just over one second!), and a light gun for pointing at particular features. Each terminal also had built-in reading lights, telephone/intercoms, and electric cigar lighters. This dramatic environment with flickering phosphorescent displays clearly looked to the military folks like the right kind of place to run a war. Or just to "hang out."

Downstairs was the mighty AN/FSQ-7 computer, designed by MIT using the latest and greatest technology available and constructed by IBM.

Remarkably, all of this new technology worked rather well. There were some funny discoveries along the way, though. For example, in doing preventive maintenance checks on tubes, a technician found one that was completely dead that had not been detected by the diagnostics. Upon further examination it was discovered that this tube didn't do anything! This minor blunder no doubt arose during one of the many redesigns of the machine.

Both the prototype and operational SAGE centers were frequently visited by military brass, higher level bureaucrats, and members of Congress. They generally seemed to be impressed by the image of powerful, central control that this leading-edge technological marvel had. Of course, General LeMay and his Strategic Air Command could not sit by and let another organization develop advanced computer technology when SAC didn't have any.

In short order the SAC Control System was born. Never mind that there was not much for it to do -- it had to be at least as fancy as SAGE. When the full name was written out, it became Strategic Air Command Control System. The chance juxtaposition of "Command" and "Control" in this name somehow conjured up a deeper meaning in certain military minds.

In short order, Command-Control Systems became a buzzword and a horde of development projects was started based on this "concept." The Air Force Systems Command soon realized that it had discovered a growth industry and reorganized accordingly. The specifications for the new C2 systems generally contained no quantitative measures of performance that were to be met -- the presumption seemed to be that whatever was being done already could be done faster and better by using computers! How wrong they were.

[RISKS 9.74]

The C3 legacy, Part 4: A gaggle of L-systems

Martin Minow contributes some SAGE anecdotes in RISKS 9.68, including the following.

> My friend also mentioned that the graphics system could be used to display
> pictures of young women that were somewhat unrelated to national defense
> -- unless one takes a very long view -- with the light pen being used
> to select articles of clothing that were considered inappropriate in the
> mind of the viewer.  (Predating the "look and feel" of MacPlaymate by
> almost 30 years.)  Perhaps Les could expand on this; paying special
> consideration to the risks involved in this type of programming.

While light pens did exist in that period, SAGE actually used light _guns_, complete with pistol grip and trigger, in keeping with military traditions. Interceptors were assigned to bomber targets on the large displays by "shooting" them in a manner similar to photoelectric arcade games of that era.

Regrettably, I never witnessed the precursor to MacPlaymate, which probably appeared after my involvement. While I never saw anything bare on the SAGE displays, a colleague (Ed Fredkin) did stir up some trouble by displaying a large Bear (a Soviet bomber of that era) as a vector drawing that flew across the screen. Unfortunately, he neglected to deal with X, Y register overflow properly, so it eventually overflew its address space. The resulting collision with the edge of the world produced some bizarre imagery, as distorted pieces of the plane came drifting back across the screen.
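The "collision with the edge of the world" comes down to ordinary two's-complement wraparound: an unchecked increment past the register's maximum flips the coordinate to the far negative edge. A minimal sketch, assuming 16-bit signed display coordinates (the actual AN/FSQ-7 register width is not given here):

```python
# Two's-complement wraparound for a 16-bit signed register: Python ints
# are unbounded, so the wrap must be simulated explicitly.
def wrap16(x: int) -> int:
    return ((x + 0x8000) & 0xFFFF) - 0x8000

# A vector drawing flying right past the maximum X coordinate
# reappears at the far left edge of the screen instead of clamping.
x = 32765
for _ in range(5):
    x = wrap16(x + 1)
print(x)  # → -32766
```

A drawing routine that wanted the bomber to fly off-screen gracefully would need to clamp or cull coordinates before handing them to the display, which is exactly the step Fredkin's hack skipped.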

(Continuing from RISKS 9.67)

A horde of command-control development projects was initiated by the Air Force in the early 1960s. Most were given names and each was assigned a unique three-digit code followed by "L." Naturally, they came to be called "L-systems." A Program Manager (usually a Colonel) was put in charge of each one to ensure that financial expenditure goals were met. Those who consistently spent exactly the amounts that had been planned were rewarded with larger sums in succeeding budgets. Monthly management reviews almost never touched on technical issues and never discussed operational performance -- it was made clear that the objective was to spend all available funds by the end of the fiscal year and that nobody cared much about technical or functional accomplishments.

In 1960, after earlier switching from MIT Lincoln Lab to Mitre Corp., my group was assigned to provide technical advice to a Colonel M., who was in charge of System 438L. This system was intended to automate the collection and dissemination of military intelligence information. Unlike most command-control systems of that era, it did not have a descriptive name that anyone used -- the intelligence folks preferred cryptic designations, so the various subsystems being developed under this program were generally called just "438L."

I had recently done a Masters thesis at MIT in the field of artificial intelligence and hoped to find applications in this new endeavor. I soon learned that the three kinds of intelligence have very little in common (i.e. human, artificial, and military).

IBM was the system contractor for 438L and was already at work on an intelligence database system for the Strategic Air Command Headquarters near Omaha. They were using an IBM 7090 computer with about 30 tape drives to store a massive database. It turned out to be a dismal failure because of a foreseeable variant of the GIGO problem, as discussed below.

The IBM 438L group had also developed specifications for a smaller system that was to be developed for other sites. Colonel M. asked us to review the computer Request for Proposals that they had prepared. He said that he planned to buy the computer sole-source rather than putting it out for bids on the grounds that there was "only one suitable computer available." When I read it, there was no need to guess which computer he had in mind -- the RFP was essentially a description of the IBM 1410, a byte-serial, variable word length machine of that era.

When Colonel M. sought my concurrence on the sole-source procurement, I demurred, saying there were at least a half-dozen computers that could do that job. I offered to prepare a report on the principal alternatives, including an approximate ranking of their relative performance on the database task. He appeared vexed, but accepted my offer.

My group subsequently reviewed alternative computers and concluded that the best choice, taking into account performance and price, was the Bendix G-20. I reported this informally to Colonel M. and said that we would write it up, but he said not to bother. He indicated that he was very disappointed in this development, saying that it was not reasonable to expect his contractor (IBM) to work with a machine made by another company. I argued that a system contractor should be prepared to work with whatever is the best equipment for the job, but Col. M seemed unconvinced.

This led to a stalemate; Colonel M. said that he was "studying" the question of how to proceed, but nothing further happened for about a year. Finally, just before I moved to another project, I mentioned that the IBM 1410 appeared to be capable of doing the specified task, even though it was not the best choice. Col. M. apparently concluded that I would not make trouble if he proceeded with his plan. I later learned that he initiated a sole-source procurement from IBM just two hours after that conversation.

In the meantime, the development project at SAC Headquarters was falling progressively further behind schedule. We talked over this problem in my group and one fellow who had done some IBM 709 programming remarked that he thought he could put together some machine language macros rather quickly that would do the job. True to his word, this hacker got a query system going in one day! I foolishly bragged about this to the manager of the IBM group a short time later. Two weeks after that I discovered that he had recruited my hotshot programmer and immediately shipped him to Omaha. I learned to be more circumspect in my remarks thereafter.

The IBM 438L group did eventually deliver an operable database system to SAC, but it turned out to be useless because of GIGO phenomena (garbage in, garbage out). Actually, it was slightly more complicated than that. Let's call it GIGOLO -- Garbage In, Gobbledygook Obliterated, Late Output.

The basic problem was that in order to build a structured database, the input data had to be checked and errors corrected. In this batch environment, the tasks of data entry, error checking, correction, and file updating took several days, which meant that the operational database was always several days out of date.

The manual system that this was supposed to replace was based on people reading reports and collecting data summaries on paper and grease pencil displays. That system was generally up-to-date and provided swift answers to questions because the Sergeant on duty usually had the answers to the most likely questions already in his head or at his fingertips. So much for the speed advantage of computers!

After several months of operation with the new computer system, the embarrassing discovery was made that no questions were being asked of it. The SAC senior staff solved this problem by ordering each duty officer to ask at least two questions of the 438L system operators during each shift. After several more months of operation we noted that the total number of queries had been exactly two times the number of shifts in that period.

The fundamental problem with the SAC 438L system was that the latency involved in creating a database from slightly buggy data exceeded the useful life of the data. The designers should have figured that out going in, but instead they plodded away at creating this expensive and useless system. On the Air Force management side, the practice of hiring a computer manufacturer to do system design, including the specification of what kind of computer to buy, involved a clear conflict-of-interest, though that didn't seem to worry anyone.
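The latency argument can be put in back-of-the-envelope terms. The sketch below is my own illustration; the function name and the numbers in it are hypothetical, chosen only to show the failure mode, not drawn from the 438L project:

```python
# Hypothetical sketch of the 438L failure mode: if the batch pipeline
# (data entry, error checking, correction, file update) takes longer
# than the data stays operationally useful, every query is answered
# from already-expired information.

def days_of_usefulness_remaining(pipeline_delay_days, useful_life_days):
    """Useful life left in the freshest record at the moment it becomes queryable."""
    return useful_life_days - pipeline_delay_days

# Illustrative figures (assumptions, not from the article): a batch cycle
# of 4 days against intelligence reports that are useful for about 2 days.
remaining = days_of_usefulness_remaining(pipeline_delay_days=4, useful_life_days=2)
# remaining is negative: the data expired before it could be queried
```

On these assumed figures the system can never answer a timely question, no matter how fast the computer itself runs -- which is the designers' foreseeable-variant-of-GIGO that the article describes.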

[RISKS 9.80]

The C3 legacy, Part 5: Subsystem I

(Continuing from RISKS 9.74)

Of the dozens of command and control system development projects that were initiated by the U.S. Air Force in the early 1960s, none appeared to perform its functions as well as the manual system that preceded it. I expect that someone will be willing to argue that at least one such system worked, but I suggest that any such claims not be accepted uncritically.

All of the parties involved in the development of C3 systems knew that their economic or power-acquisition success was tied to the popular belief that the use of computers would substantially improve military command functions. The Defense Department management and the U.S. Congress must bear much of the responsibility for the recurring fiascos because they consistently failed to insist on setting rational goals. Goals should have been specified in terms of information quality or response time for planning and executing a given set of tasks. The performance of these systems should have been predicted in the planning phase and measured after they were built so as to determine whether the project was worthwhile.

Instead, the implicit goal became "to automate command and control," which meant that these systems always "succeeded," even though they didn't work. Despite a solid record of failure in C3 development, I know of just one such project that was cancelled in the development phase. That was Subsystem I, which was intended to automate photo-interpretation and was developed for the Air Force by Bunker Ramo, as I recall.

The "I" in the name of this project supposedly stood for "Intelligence" or "Interpretation." This cryptic name was apparently chosen to meet the needs of the prospective users in the intelligence community, who liked to pretend that nobody knew what they were doing. This pretense occasionally led to odd conduct, such as when they assigned code names to various devices and tried to keep them secret from outsiders. For example, a secret name was assigned to one of the early U.S. spy satellites -- as I recall it was Samos -- but when that name somehow showed up in the popular press they tried to pretend that no such thing existed. In support of this claim, everyone in the intelligence community was directed to stop using that name immediately.

When I attended a meeting in the Pentagon a few days after this decree and mentioned the forbidden word, the person operating the tape recorder immediately said "Wait while I back up the tape to record over that!" This was a classified discussion, so there was no issue of public disclosure involved, just the belief that there should be no record of the newly contaminated name.

Sometime in the 1981-82 period, the Air Force decided to terminate the development of Subsystem I. A group of about 30 people from various parts of the defense establishment, including me, was invited to visit the facility in suburban Los Angeles where the work was going on to see if any of it could be used in other C3 systems. We were given a two day briefing on the system and its components, the principal one being a multiprocessor computer.

The conceptual design of this Polymorphic Computer, as they called it, was attributed to Sy Ramo, who had earlier helped lead Hughes Aircraft and Ramo-Wooldridge (later called TRW) to fame and fortune. The architecture of this new machine was an interesting bad idea. The basic idea was to use many small computers instead of one big one, so that the system could be scaled to meet various needs simply by adjusting the number of processors. The problem was that these units were rather loosely coupled and each computer had a ridiculously small memory -- just 1K words. Each processor could also sequentially access a 1K buffer. Consequently it was very awkward to program and had extremely poor performance.

I sought out the Subsystem I program manager while I was there and asked if our group was the only one being offered this "free system." He said that we were just one of a number of groups that were being flown in over several months time. When I asked how much they were spending on trying to give it away, he said about $9 million (which would be equivalent to about $38 million today). The Air Force Systems Command seemed to be trying desperately to make this program end up as a "success" no matter how much it cost. When I asked why the program was being cancelled, I got a very vague answer.

I did not recommend that my group acquire any of that equipment and as far as I know nobody else did. The question of why Subsystem I was cancelled remained unresolved as far as I was concerned. It is conceivable that it was because they figured out that it wasn't going to work, but neither did the other C3 systems, so the reason must have been deeper (or shallower, depending on your perspective). My guess is that they got into some kind of political trouble, but I will probably never know.

[RISKS 9.97]

The C3 Legacy, Part 6: Feedback

[My apologies for the gap in this series -- I'm running for City Council currently and don't seem to have enough spare cycles. -Les]

Was there ever a command and control system that worked?

My opening remark in RISKS 9.80 was: "Of the dozens of command and control system development projects that were initiated by the U.S. Air Force in the early 1960s, none appeared to perform its functions as well as the manual system that preceded it." Gene Fucci, who worked on the Air Force satellite surveillance programs as a project engineer on SAMOS and later as Field Force Test Director of MIDAS, found my remarks "somewhat distorted" in that he believes the satellite command and control systems worked well.

I will plead relative ignorance of those systems, but note that they were called just "control systems" until "command and control" became a buzzword in the early 1960s. I do not wish to take the position that all systems to which the term "command and control" or "command-control- communications" was eventually applied were failures -- just that all of the dozens that I knew of were failures.

SAGE revisited

Some of the earlier C3 Legacy postings on SAGE have found their way via a circuitous route to an old friend of mine, Phil Bagley, who also helped design that system. Phil has now sent me snail-mail that takes a different view of that program, as follows.

"I think that you have discovered what is behind the curtain. In case you haven't, let me tell you my view. The motivation behind a big military electronic system such as SAGE or BMEWS is _not_ to have it work. It is just to create the _illusion_ that the sponsor is doing his job, and perhaps peripherally to provide an opportunity to exercise influence. Lincoln Lab and MITRE had no motivation to point out the obvious -- that the emperor had no clothes. If you had asked a responsible think tank who had no stake in the outcome how to deal most effectively with the issues, you would have recommendations very different from those that guided the electronic systems developments.

"Now it wasn't all for naught. Out of SAGE, computer technology got a big boost. IBM learned how to build core memories and made a lot of money building machines with core memories. Lots of people like you and me got good systems and programming training (I still write programs). Ken Olsen learned how to design digital equipment and ultimately gave the world a few billion dollars worth of Vaxes.

"The moral of all this is: When things appear not to make sense you very probably are looking at it from the `wrong' point of view. Another way to say it: It's pretty hard to fool Mother Nature, so if it appears that she is being fooled, try to find a point of view which doesn't imply that she's being fooled."

While Phil and others may be comforted by this view, I will argue that it amounts to nothing more than "Whatever is, is right," which grates on my rationalist soul. I believe that if a comparable amount of government money had been invested in research, or on a more tractable application, that computer technology would have advanced much more quickly than actually happened.

I believe that as soon as MIT and MITRE engineers figured out that they had designed an unworkable system, they had an ethical obligation to point that out to their sponsors. Instead they (we) helped perpetuate the myth that it worked so that we could continue in our beloved technological lifestyle.

Phil's mention of Ken Olsen reminds me that we gave a going-away party for him and Harlan Anderson at the MIT Faculty Club when they left to form their company to make transistorized digital modules based on experience in building the TX-0 and TX-2 computers at Lincoln Lab. We told them that they could have their old jobs back after their start-up went belly-up, as we all expected. In fact, that reportedly came rather close to happening more than once in the first couple of years, but somehow DEC squeaked through and grew a bit.

Requiem: the SAIL computer, which would have reached the grand old age of 25 next week, is slated to retire tonight and die in the near future. It has provided an intellectual home for a very productive generation of researchers and will be remembered fondly.

-Les Earnest

From a rec.aviation.military posting

(beginning of original message)

Subject: Re: F-102/F-106
Date: 1998/11/15
Newsgroups: rec.aviation.military
(Tom Naylor) wrote:

Ah, yes, Data Link! I was in the 326 FIS at Richards-Gebaur AFB ("Dicky Goober"), known on the air as "RG Tower". I flew a lot of test missions with the SAGE center there trying to debug data link. The original setup was 'frequency division' D/L, known as 'fiddle'. And fiddle we did, for several years, until they gave it up as hopelessly unreliable and replaced it with 'tiddle', Time Division D/L. This worked amazingly well, so much so that R/T almost faded from use. We would give an armament safety and oxygen check on initial contact and the SAGE controller would acknowledge our call.

Simultaneously he would transmit to us a standardized test message. If it did its thing properly we were receiving valid signals and would then "Follow Dolly". The next call we made would be 'Judy' (taking control) and "MA" (mission accomplished) or rarely (the 102 being by now rather reliable) "MI" (missed intercept).

From hating Fiddle we grew to love Tiddle because the silence was so refreshing.

But getting to that point was aggravating . .
I well remember one of the first test missions with SAGE. I spent most of my time aloft Essing madly back and forth chasing the D/L steering command dot. Back on the ground we dissected the mission with the SAGE controller and several programmers. At last we discovered there had been no allowance made for aircraft turn radius; consequently I was overshooting all turns by as much as 6 to 8 miles. The radar would see me out of position, the FSQ7 computer would issue a correction, which of course I would again overshoot . . .

But like I said, after a while it got pretty good. Especially after they disabled the RTB (return to base) function for the nuclear-armed BOMARC missile. (One free spirit during a CPX decided to see what would happen if he RTB'd a BOMARC -- the notional missile did a 180 and headed back home!)

BTW the FSQ7 at the time was the world's finest computer with its 100 KB (kilobyte) core memory. And its vacuum tubes and 15 tons of air conditioning! The computer I'm typing this on is several orders of magnitude more capable and dozens of magnitudes more reliable!

Speaking of vacuum tubes, I am pretty sure the IRSTS system for the MG-10 was solid-state. Its only failure mode I ever saw was loss of LN2 coolant.

Walt BJ ftr plt ret

(end of original message)
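The turn-radius problem Walt describes is easy to reproduce in a toy simulation. The sketch below is my own illustration with made-up speed and turn-rate figures (nothing here is from SAGE or the FSQ-7): it integrates an aircraft that turns at a fixed rate, and shows that a vectoring computer assuming instantaneous heading changes will see the aircraft displaced by roughly its full turn diameter after a course reversal.

```python
import math

def lateral_offset_after_reversal(speed_mps, turn_rate_dps, dt=0.01):
    """Integrate a constant-rate 180-degree turn and return the lateral
    displacement -- the 'overshoot' an instantaneous-turn model never predicts."""
    omega = math.radians(turn_rate_dps)
    x = y = heading = 0.0
    while heading < math.pi:
        heading = min(heading + omega * dt, math.pi)
        x += speed_mps * math.cos(heading) * dt
        y += speed_mps * math.sin(heading) * dt
    return y

# Hypothetical figures: roughly 300 knots at a standard-rate 3 deg/s turn.
speed = 150.0                          # m/s
offset = lateral_offset_after_reversal(speed, 3.0)
radius = speed / math.radians(3.0)     # minimum turn radius, v / omega
# offset comes out close to 2 * radius, the full turn diameter --
# here several kilometers, the same scale of miss Walt reports
```

The displacement grows with the square of speed at a fixed bank angle, which is why a supersonic interceptor chasing a steering dot computed without a turn-radius allowance could miss by miles.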

from an early Operational Planner - Frank Mertely February 2004
I was one of a group of 5 or 6 guys in a special project to prepare the operational and implementation plan to install the [SAGE] system in the Air Defense Command way back in 1954 in Colorado Springs.

We had a large task, but with the help of a lot of people we installed the first computer at Fort Dix, NJ on schedule. This would not have been possible without a lot of groundbreaking effort by the Lincoln Lab people, Western Electric, IBM and a whole lot of other contractors.

My task was to prepare the budget for equipment, facilities, communications and personnel for submission to the folks at the Pentagon. Since it was a National Security Council priority, it was a little easier to gain approvals there and in Congress.

I made many trips from Colorado Springs to New York City for coordination meetings. I stayed with the project for about two years and then went to the Air Command and Staff College and many other places after that.

I would like to make one comment on siting the facilities. I noted that someone stated that we sited the facilities on SAC bases where they had the best O'Clubs. That was not the case when we started. Our first priorities were to site away from major target areas, to have communications available, and to take advantage of existing facilities where possible. That is why you see places like Fort Lee, Topsham, Fort Custer and Truax Field. A number of ADC bases were selected. Among them were Duluth, Grand Forks, K.I. Sawyer and others that were programmed for ADC interceptor bases.

Unfortunately, when General LeMay dispersed the bomber and tanker forces a few years later, some of these bases took on the SAC flavor and did increase the vulnerability of the SAGE system.

It is too bad that the transistor did not come along sooner, as we could have put the centers underground at much lower cost, since we would not have needed all that space for the computer, nor as much air conditioning and back-up power. So we had to live with the technology that we had.

Regardless, it was a great system and a challenge to get us into the computer age. It was a tough job and it was nice to associate with so many skilled and dedicated people.

Thanks for helping me to recall my work on the system. Keep up the good work.

Best regards.


Locations of SAGE systems

as per
DC-1: McGuire AFB, NJ 
DC-2: Stewart AFB, NY 
DC-3 / CC-1: Hancock Field, NY 
DC-4: Fort Lee AFS, VA 
DC-5: Topsham AFS, ME (blockhouse demolished) 
DC-6: Fort Custer, MI 
DC-7 / CC-2: Truax Field, WI
DC-8: Richards-Gebaur AFB, MO 
DC-9: Gunter AFB, AL 
DC-10: Duluth IAP, MN 
DC-11: Grand Forks AFB, ND 
DC-12 / CC-3: McChord AFB, WA 
DC-13: Adair AFS, OR 
DC-14: K. I. Sawyer AFB, MI 
DC-15: Larson AFB, WA 
DC-16: Stead AFB, NV 
DC-17: Norton AFB, CA 
DC-18: Beale AFB, CA 
DC-19 / CC-4*: Minot AFB, ND (* CC-4 blockhouse built, but AN/FSQ-8 never installed) 
DC-20: Malmstrom AFB, MT 
DC-21: Luke AFB, AZ 
DC-22: Sioux City AFS, IA 

If you have comments or suggestions, Send e-mail to Ed Thelen

Back to Home Page
Last updated December 2, 2003