*** Please note, this page (and web site) are in early development.
Items are certainly not complete, and may be inaccurate.
Your information, comments, corrections, etc. are eagerly requested.
Click here to e-mail Ed. Please include the URL under discussion. Thank you ***

IBM SAGE

Manufacturer IBM
Identification, ID: SAGE (Semi-Automatic Ground Environment)
Date of first manufacture: 1956 - designed by MIT for the Air Force
1958 - the initial installation of SAGE, ... is declared operational
Number produced -
Estimated price or cost -
Location in museum -
Donor -

Contents of this page:

Photo

Placard
-

Architecture
It is useful to think of the SAGE computer as the heart of a process control system:
- Inputs from "sensors": primarily multiple radars via the FST-2, FAA traffic, and operator consoles.
- A computer to present data to operators for human decisions.
- Multiple outputs to "actuators": fighter commands and missile command centers.
It communicated with adjacent systems for continuous coverage of a larger area than one system alone could handle.
  • Dual processor
    • one on line,
    • the other for training, maintenance, and hot backup
  • much communication with remote sites, directing manned/unmanned vehicles
  • Hot-pluggable modules: a plug-in module could be removed or inserted without removing power from the rest of the machine. (There was an OFF/ON switch for each module.)
  • a 32-bit machine; the left half of an instruction word was the op code, the right half the address (sketched below)

The SAGE system was a component of a larger system, the 416L North American Air Defense System.

Manual Introduction to SAGE AN/FSQ-7 & AN/FSQ-8

Field Trip to North Bay - (to see a SAGE installation) by Gordon Bell ( 10 megabyte .pdf )

Recollections of the SAGE System by David E. Casteel, Captain, USAF (ret)

The system accepted inputs from radar systems. Comments on SAGE Resistance to Radar Jamming

Special features
  • first/early use of core memory (6 microsecond cycle time)
  • first/early use of modems over phone lines
  • Memory reached about 69 K 32-bit words (a 65 K retrofit plus a 4 K unit - see Roger Lewis's correction below, and the arithmetic sketch after this list)
  • Drums for external storage, 150 K words
  • used standard size vacuum tubes (no miniature tubes), physically extremely large
  • each computer of a pair had about 58,000 vacuum tubes and consumed one million watts - and another million watts to cool it
  • used for command and control of air defense units. There were 22 SAGE computer pairs and associated consoles, communications gear, ... in installations around the U.S.
  • some were in "hardened" mountains, others were in normal buildings on SAC (Strategic Air Command) bases.
  • Installation started in 1958; many were in service until 1985

Programmer Card - Front Side and Back Side, courtesy Bill Kirkpatrick


Radar data, including azimuth and an elevation estimate, was digitized, time-tagged, and transmitted to SAGE
by the FST-2 and its descendants. Links include:

from Ron Sauro, to USAF-RadarStationVeterans@yahoogroups.com , Sept 26, 2013
Jim... The "Semi-Automatic Ground Environment" (SAGE) was named that because of the human input needed to collect the data, not to make decisions in any way.... Even today decisions are still made by people and not computers... To reinforce that idea SAC even instituted the "fail-safe" program.

What made it automatic was the program flow of the SAGE computer.... In simple terms the system used the search radars to "find" the target in horizontal space and provide the azimuth to the computer..... the computer then sent that azimuth back to the height finder and slewed the HF to the correct azimuth...... here is where it became semi-automatic..... it knew where in horizontal space the target was but had no idea where in vertical space it was..... so it needed human input: the operator took a horizontal cursor, placed it over the target on the HF scope, and then pressed a button telling the computer that it could now take the height of the target...... once the computer knew where in 3-D space the target was, THEN it could make predictions for that target.... In today's world the button push was the same as clicking the mouse cursor.... For all its size and complexity the average SAGE computer was about as smart as your average tablet today... Even computers still need humans to interface with the real world..



- - more comments on the AN/FST-2 Radar-Processing Equipment for SAGE - posted June 20, 2013 (edited)
-------- Original Message --------
      Subject: Re: SAGE Errors/1st solid state digital computer/other
From: Les Earnest
Date: Wed, June 19, 2013 11:03 pm
To: Roy Mize
Cc:
Well if that system
[ "AN/FST-2 Radar-Processing Equipment for SAGE" ]
works as described, when it encounters radar jamming
it will send a horde of pulses to the computer thus jamming it.

-Les Earnest
From: ed@ed-thelen.org
To: les@cs.stanford.edu; roy@workplans.com
CC: ...
Subject: RE: SAGE Errors/1st solid state digital computer/other

> Well if that system works as described, when it encounters radar jamming
> it will send a horde of pulses to the computer thus jamming it.

I must agree with Les -
   As described in useful detail in the article -
     [ "AN/FST-2 Radar-Processing Equipment for SAGE" ]
the radar video comes into the AN/FST-2
  along with sync pulses (from the pulses being transmitted)
  and with antenna azimuth at that moment -
  ( The radar pulse rate was 330 PPS )

Assuming one target ;-))
   the AN/FST-2 makes range and azimuth "buckets"
    and looks for the "left" and "right" edge azimuths of the target
    in that range. When the "right edge" of the echo is
    detected, the equipment determines the center of the edges
     and transmits the center azimuth, and range (bucket) to SAGE :-))

And the AN/FST-2 and SAGE could seemingly work with a reasonable number of targets.
All very nice unless someone is trying to make a mess of your scope (video).

Let us assume one simplistic jammer, just transmitting radar noise steadily
   at you - that noise makes a radial line on your scope and video -
The AN/FST-2 has signal at all ranges near that azimuth -
   When the antenna rotates away from the jammer, (no more signal)
     the AN/FST-2 will report planes at all range buckets at the center azimuth -

OK - so SAGE "knows" there are say 100 aircraft (assuming 100 range buckets)
   in a line on that azimuth -
    just great :-(( all caused by one jammer :-((
Unfortunately
   a) there is likely more than one jammer :-((
   b) various other jamming techniques make the scopes (and video)
     even more messy
   c) soon the AN/FST-2 communication line would be jammed with bogus targets
   d) and SAGE has way more "targets" than it can possibly analyze/track

Unfortunately, my Army Nike missile site never saw jamming while I was there. (I left in 1957)
( I have seen PPI scope photos of jamming during "exercises" in Germany involving Soviets.
   http://ed-thelen.org/ecm_ppi_3.jpg
     What a confounded mess !! )
  A few years later T-1 trailers which simulated jamming of various sorts
  were available to raise heck with, and train, Nike tracking operators and tracking supervisors
     to resist/fight/track-through jamming.
  (And the new Nike Hercules tracked targets using two different tracking radars
     using two different radar bands ( X & Ku ) - the Army eventually took jamming very seriously !! )
(The tracking supervisor had a display of active radar frequencies
   and a number of controls for transmitter frequencies, pulse widths,
    and other techniques to try to dodge/minimize jamming to aid the tracking operators.)

I'm near the limit of my current knowledge - bowing out, with my ears on -

Ed Thelen


There was a flurry of e-mail among the usual suspects in Sept 2012.
A machine described here is reputed to digitize the targets seen by radar into a digital form usable by the SAGE system:
http://www.computerhistory.org/collections/accession/X825.87

The description seems to have a flaw, as almost any radar has a range of over "64" miles. Does anyone have a better description of digitizing radar video for SAGE ??


David Casteel wrote June 1, 2013

The link describes some equipment manufactured by Lewyt, and I believe the question above is directed to a statement about the range processed by the AN/FST-1. The AN/FST-1 used “slowed-down video” (SDV) to pass the radar data from a Gap Filler radar (most of them were AN/FPS-18) to the parent radar site or directly to the Direction Center (I think the latter was possible). Gap Filler radars were short-range units, and I think 64 miles would be a very probable range limit for one. They were positioned to provide lower-level coverage within holes in the coverage provided by the major LRR sites. The entry for the AN/FPS-18 in the Radomes equipment list gives 65 nm as its range.

I said above that I thought it might have been possible for the Gap Filler data to go directly to the SAGE site, but I am not really sure. I don’t recall having seen evidence of it at Adair AFS (PoADS). There was an odd device sometimes used with Gap Filler data in the manual system: it used multiple little PPI scopes with high retention, each displaying the SDV from an associated GF site and each precisely located relative to the main LRR site position at the specified range setting. A scan by a rotating photosensor was then used to detect the blips displayed by the GF radar(s), and that information was superposed over the main radar display to supplement the coverage. I did not learn how the AN/FST-1 worked and never saw any use of the video from the 2 GF sites associated with Mt. Hebo AFS. Neither of its Gap Fillers was in operation while I was assigned there, and both had been decommissioned before I went to Adair.

David

from Roy Mize - March 30, 2010 - Material below dotted line was added September 14, 2012
http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=AD0264873
THE FEASIBILITY MODEL OF THE DIGITAL COMMUNICATIONS SET, AN/GSC-4 FIELD TEST PROGRAM

Abstract : A field test program was carried out to determine the feasibility of the AN/GSC-4 digital communications set. The AN/GSC-4 feasibility model was designed as a high-speed digital data modem for conveying binary data over toll telephone facilities. Laboratory tests were conducted to determine error rate characteristics as a function of white and impulse noise, frequency translation, and modulation. Field tests were conducted to determine the performance of the system over commercial phone lines, a SAGE tropospheric scatter link, the White Alice tropospheric scatter network, an electronic switchboard, and the 465-L remote communications complex breadboard. The laboratory tests pointed out that an S/N ratio of [value missing] dB was necessary for a binary error rate of 1 x 10^-4 when operating at 5400 bits/sec; for 2400 bits/sec operation, an error rate of 1 x 10^-6 could be realized with an SNR of 16 dB with synchronization modulation. Without sync modulation a 3 dB improvement is obtained. (Author)

............. dotted line ...........................................................

Descriptors : *SECURE COMMUNICATIONS, *TELEPHONE SYSTEMS, *TELEVISION SYSTEMS, COMMUNICATION AND RADIO SYSTEMS, DATA TRANSMISSION SYSTEMS, DIGITAL SYSTEMS, DISTORTION, ERRORS, FEASIBILITY STUDIES, MULTIPLEXING, PANEL BOARDS(ELECTRICITY), PHASE MODULATION, RELIABILITY, SCATTERING, SYNCHRONIZATION(ELECTRONICS).

Distribution Statement : APPROVED FOR PUBLIC RELEASE

= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =

> Roy, As far as I know there were no serious problems in getting digital communications to work in SAGE. Below are two paragraphs from an article I'm writing that somehow got oddly formatted here. It is interesting to note that AT&T played no role in inventing the modem (which was not called that initially) and even after computer networks were shown to work they refused to believe it.
>
> In 1976 the Defense Communications Agency, which had recently gotten control of ARPAnet, offered it free to AT&T if they would agree to run it for a fee. However AT&T refused, apparently with Bell Labs' blessing, on the grounds that packet switching had no future.
>
> Thus even though Bell Labs was innovating in a number of areas then, they apparently couldn't stand the idea that someone outside had superseded them in their base business.
> -------------------------------------------
> Who invented the modem?
> Given that modems are an essential part of computer networking you might expect that they were created for that purpose but it wasn't quite like that. In 1949 Jack Harrington and his group at the Air Force Cambridge Research Center (AFCRC) wanted to be able to view radar data from a remote site, but the bandwidth of radar video was too great to go over ordinary phone lines. They created a Digital Radar Relay that identified blips, located their centers and sent digital packets, one per blip, over a phone line to the display site. An improved version of the modem (which was not yet called that) was patented by Jack Harrington and Paul Rosen and became the basis of Bell Telephone's A-1 Data Service.
>
>
> Who invented packet switching?
> It was Jack Harrington's group around 1953, after they moved from AFCRC to MIT. As part of the experimental Cape Cod air defense project (the SAGE prototype) they had Burroughs Corp build the FST-2, a special purpose computer used to process data from radars for transmission to the Whirlwind computer at MIT. The same scheme was used in its offspring, SAGE. Overall the SAGE network interconnected hundreds of sites across North America beginning in 1959.
> --------------------------------------------------
>
> -Les


From Roger Lewis - Dec 27, 2006
Memory was 64 K 32 bit words, later another 4 K was added making 69 K words.

Actually it was quite the reverse. The early Q7's were installed with only two 4K memories, fondly called "shower stalls", and the test program was called MEM01.

Later on the 65K memory was retrofitted to replace one of the 4K units including the associated driver frames.

It was a massive retrofit and as I remember it, required the IBM team to work 7 days a week for 6 weeks on all three shifts with extended shift lengths. Rumor had it that each retrofit cost as much as the associated computer originally cost.

At the completion, the test programs were then called BIGMEM and LILMEM reflecting the size differences.

Roger Lewis
13022 Psomas Way
Los Angeles, CA 90066-2213


From Les Earnest replying to a question about "Popular [D. B.] Cooper Myths Debunked" - Feb 2007
SAGE computers did record radar data on magnetic drums but kept only about two minutes' worth at any given time, discarding old hits as new data came in. These data were used by the computer to automatically track aircraft, and those tracks were often recorded on magnetic tape. However, this process would not have "seen" a diverging radar blip unless the radar data was being displayed (it usually wasn't) and the blip was noticed by the Intercept Director following that flight.

-Les Earnest, who designed the Intercept Director's console layout

Historical Notes
info from Bernd Milmert, August 24, 2014:
AN/FSQ-7 - the computer that shaped the Cold War, a book by Bernd Ulmann, 2014

and there are many SAGE manuals on Bitsavers

from Dag Spicer, Jan 26, 2013
SAGE computer screen "pin-up"
Thought this might be of interest: http://www.theatlantic.com/technology/archive/2013/01/the-never-before-told-story-of-the-worlds-first-computer-art-its-a-sexy-dame/267439/

- this note added September 2012
SAGE did not spring into being, from the womb of Whirlwind, without predecessors.
At least several navies had been working on the problem of air defense coordination.
Also note the generous use of "SEMI-AUTOMATIC" in relation to the problem/solution ;-))
from the book "When Computers Went to Sea"
by David L. Boslaugh 1999 - ISBN-10: 0769500242
starting about page 50

Three Digital Attempts

The Canadian Navy's Digital Automated Tracking and Resolving System

In 1949 the Royal Canadian Navy began conceptual work on its Digital Automated Tracking and Resolving (DATAR) system, which was to be based on a digital computer and was also to include a digital ship-to-ship data link. They first demonstrated the UHF (ultra high frequency) digital tactical data link in shore-based tests from their Ottawa laboratory in 1950. The RCN then installed two prototype DATAR systems in the minesweepers Digby and Granby operating in Lake Ontario. Each system had one special-purpose Ferranti digital computer using 3,800 vacuum tubes and a magnetic drum main memory. The systems filled most of the after part of the minesweepers, and, because of the large number of vacuum tubes, overheating was a major problem.

The Canadian builders designed the system with a capacity for 64 targets with 40-yard resolution over an 80x80-mile tactical grid. DATAR operators, using an electronic cursor moved by a manual 'track ball,' picked target track coordinates from radar scopes and entered the track data into the computer.

Initial DATAR tests on Lake Ontario in August and September 1953 showed that the Canadians were clearly the world leader in automating seaborne tactical data systems; however, a major fire aboard one of the test ships halted testing, and the project was terminated due to lack of funds to reconstruct the destroyed system. But the project was by no means a total loss. Mr. Stanley F. Knights, the leading Canadian scientist on DATAR, would be made available as a consultant to the later USN Naval Tactical Data System project, where he would provide valuable technical support beginning in 1956.

Early Digital Experiments at the Navy Electronics Laboratory

In the first chapter of this narrative we followed the career of Irvin L. McNally from college graduation in 1931 to his sudden transfer from Pearl Harbor to the Bureau of Ships in Washington, D.C., where he reported in mid-July 1943. Here he took charge of the Shipboard
....
the course, and done extensive reading in the new field. With this combined background, the next move for the three experimenters was a replication of their Coordinated Display Equipment with digital technology.

If one wanted a digital computer in the early 1950s, you did not go out and buy it, because there were none on the market. You built it. The three set out to design a special-purpose digital computer tailored specifically for their radar data processing problem. With help from Dr. Huskey, they built a computer having instructions for addition, subtraction, multiplication, and division, and the ability to store track data in electronic registers. They named the device the Semi-Automatic Digital Analyzer and Computer, or SADZAC. McCown later acquired a magnetic drum memory to expand the machine's track data storage.

Nye and McCown also designed analog-to-digital converters to translate target coordinate voltages to digital form for computer processing, and digital-to-analog converters to turn the digital track storage information back into track coordinate voltages. These voltages positioned the synthetic target symbols on the radar scopes. Their homemade special-purpose digital computer, in effect, took the place of the CDE's capacitor storage banks. But now, since the target coordinates were stored in digital form, the computer could calculate the course and speed of each target from the sweep-to-sweep changes of the target coordinates. By 1951 the NEL investigators had developed their SADZAC-based Coordinated Display Equipment to the point where it was ready for a real-life try, and they briefed the BUSHIPS Radar Design Branch on how it could be applied to shipboard radar data handling and air interceptor control calculations [179].

The Semi-Automatic Air Intercept Control System

In 1951 the BUSHIPS Radar Branch awarded a contract to Teleregister Company to develop an automated plotting and vector computing aid for shipboard fighter direction, to be based on the NEL Coordinated Display Equipment. The Bureau named the device the Semi-Automatic Air Intercept Control System (SAAICS).



Fully deployed by 1963, the IBM-built early warning system remained operational until 1984. With 23 direction centers situated on the nation's northern, eastern, and western boundaries, SAGE pioneered the use of computer control over large, geographically distributed systems.

A Sage Talk at the Computer History Museum


Locations of SAGE systems


as per http://www.radomes.org/museum/
DC-1: McGuire AFB, NJ
DC-2: Stewart AFB, NY
DC-3 / CC-1: Hancock Field, NY
DC-4: Fort Lee AFS, VA
DC-5: Topsham AFS, ME (blockhouse demolished)
DC-6: Fort Custer, MI
DC-7 / CC-2: Truax Field, WI
DC-8: Richards-Gebaur AFB, MO
DC-9: Gunter AFB, AL
DC-10: Duluth IAP, MN
DC-11: Grand Forks AFB, ND
DC-12 / CC-3: McChord AFB, WA
DC-13: Adair AFS, OR
DC-14: K. I. Sawyer AFB, MI
DC-15: Larson AFB, WA
DC-16: Stead AFB, NV
DC-17: Norton AFB, CA
DC-18: Beale AFB, CA
CC-4: Beale AFB, CA (built, but AN/FSQ-8 never installed)
DC-20: Malmstrom AFB, MT
DC-21: Luke AFB, AZ
DC-22: Sioux City AFS, IA

Thomas E. Page, tepage @ hotmail , com writes-
"By the way, the AN/FSQ-32 was to have been the "SuperSAGE" computer for planned underground SuperSAGE Combat Control Centers. IBM developed the computer (based upon the earlier AN/FSQ-7 and AN/FSQ-8 SAGE computers), but the SuperSAGE facilities were cancelled. One site was to have been near Cornwall, NY -- see http://www.radomes.org/museum/documents/CornwallNYnyt59.html. "Many sites were examined for SuperSAGE. One was at Kennesaw Mountain, Georgia ... Another was at White Horse Mountain, at Cornwall, New York ... White Horse Mountain is just up the road from West Point." - "Shield of Faith" by Bruce Briggs (Simon and Shuster, 1988). Reportedly, the AN/FSQ-32 computer itself did find other aplications -- just not SAGE air-defense aplications."


from David Evan Young - Oct 4, 2011

Number of AN/FSQ-32 computers manufactured:    2

       Locations"      1 at SDC; 1 at IBM

The sage mentioned above for SDC was in Santa Monica, CA.
It was at 23rd and Colorado. About 2220 Colorado, 
    which looks like Universal Music Group now!
    34 deg, 10 min, 40 sec North
   118 deg, 28min, 25 sec West

I worked on it as a Field Engineer on the SDC account.

Any time! Happy to be able to remember! 
Retired and live in Rio (Copacabana) these days! 
Tough being single here! He he..

David Evan Young – IBM 1967-2007

Tel: + 55 21 7932 6850 (Cellular in Rio)

Description in the BRL Report, 1961 - AN/FSQ-32


Thomas E. Page, tepage @ hotmail , com, June 16, 2009, writes in response to a question about SAGE numbers:
"There were 22 SAGE Direction Centers (AN/FSQ-7) in the U.S. and one underground dual SAGE Direction Center (AN/FSQ-7) in Canada. Reportedly, 32 SAGE DC's total were planned.

There were 3 SAGE Control (later Combat) Centers (AN/FSQ-8) in the U.S. Reportedly, 7 SAGE CC's total were planned.

A number of Super-SAGE Combat Centers (AN/FSQ-32) were planned, but none was built. Most were to have been built underground (e.g., White Horse Mountain near West Point, NY); at least one SSCC was to have been above-ground (Scott AFB, IL). One prototype Q-32 was installed at the IBM programming center in Santa Monica, CA.

One remote SAGE Combat Center was activated at the former manual site at Hamilton AFB, using a three-string BUIC-II computer, AN/GSA-51."


Tom writes Dec. 2009 -
All our SAGE information is found at http://www.radomes.org/museum/sagedocs.html and its respective links.

I recommend avoiding contact with L.. E...... -- methinks he is a complete psychotic or something. Better contacts are out there -- I recommend starting with Mr. Robert F. Martina (318-797-5419), rfjm9870 @ aol . com .

By the way, Fort Lee AFS, VA, was HQ 20th Air Division (SAGE) at the time it deactivated in 1983. HQ 21st Air Division (SAGE) was located at Hancock Field, NY, until 1983; I was there when its FSQ-7 was turned off for the final time in October of that year. The 21st AD picked up all the former 23rd AD sites when Duluth shut down a couple of years earlier.


Sage II solid state computer

Bob Boden - bobjoy2 (at) hotmail dot com - October 2006 writes
(e-mail address no longer valid, please contact ed@ed-thelen.org if you know of Bob Boden.)
"I wrote the system test programs for the Sage FSQ-7 output system in 1954 and 1955. Later my group worked on the RTA computer which was a precursor for the solid-state Sage II computer.

"In 1958 I was made the Development Engineering Manager for Central Processor, Channels, and Operator's Console for the Sage II computer. I believe that this was the largest transistor computer ever built. It was intended to replace the old vacuum tube FSQ-7 systems. The System Development Corporation did our programming.

"We completed the design (which used the Philco MADT transistors -- type 2n501 if I remember rightly) and began physical layout and construction only to have the government cancel the Sage Program. Our design tested out beautifully, but only two machines were ever built. One went to SDC in LA for use in programming, the other went to SAC.

"Why do I never see any reference to the Sage II computer? It was one of the first 100% self-checked machines. It had a 48 bit word. 6.4mc clock frequency. It used liquid cooling. SDC said in 1966 that the machine they had was the most reliable and maintainable they had ever worked with."


Comment on above by Gordon Bell - gbell (at) microsoft dot com - October 2006
"Tom Marill and Larry Roberts performed the first computer-computer network experiment between the Q-32 and TX-2 (I believe).

"See Larry's page http://www.ziplink.net/~lroberts/InternetChronology.html says: Oct-65 First Actual Network Experiment, Lincoln Labs TX-2 tied to SDC's Q32, Lawrence Roberts, MIT Lincoln Labs. This experiment was the first time two computers talked to each other and the first time packets were used to communicate between computers. "


SAGE Reunion - received May 2007
The Western Electric Air Defense Engineering Services (ADES) Alumni Group held its 25th reunion in Houston this past weekend, April 27-29. These were the people who integrated and tested the SAGE system at 23 sectors covering the USA back in the late 50's and early 60's.

About 500 engineers and other technical personnel were hired by ADES, trained at MITRE/Lincoln Labs, and formed into five teams moving from sector to sector, integrating and testing this first big network of radars, computer centers, air bases, other inputs, and ground-to-air data links. It also tied into the NIKE complex of ground-to-air missile sites.

Now all in their 70's they still recall the lure of the open road and the early days of computing and data transmission.

The 2008 meeting is tentatively set for San Diego.

R. F. Martina
9870 Jennifer Lane
Shreveport LA 71106
318-797-5419 rfjm9870@aol.com

This Artifact
-

Interesting Web Sites

Other information
When I set up this list of computers shown by the Computer History Museum, at the old location in building #126 at NASA Ames some 12 years ago, I made a standard format -
- one that in retrospect clearly should have included a
. . . "Software" category
- "even though" this was (at the time) a hardware museum

Other info - Software category added June 2011
from Robert Nielsen - GUILLE11 at aol dot com - June 19, 2011
I worked with Roland "Rollie" [Roland D.] Pampel, who was the main software person, and Bob Suda (software) when they developed the first-ever system program, called SEVA, for the SAGE system.

System Evaluation Validation Acceptance = SEVA

SEVA was a highly coupled software program that would perform a make-believe attack on America to show the system was ready to be accepted by the Air Force and to be shipped and deployed at a SAGE site.

SEVA was the work of two geniuses, and Russ Burger and I (engineers) were the only ones who could debug it during its inspiration. Those days of designing and working on SAGE were the best days of my life.

Robert Nielsen
Texas

Other info - Hardware

RE: The Q-32
From Roy Mize July 23, 2012
Supposedly only one was built, but that is apparently not true. However, I found only a note, not definitive information.

Best reference I've found is the book Bright Boys. Here is an excerpt and the Web link:
http://www.brightboys.org/index6.html

A huskier version of the AN/FSQ-7 was also built: the AN/FSQ-32. There was also Whirlwind's little sister, built by the bright boys in the old Whittemore Shoe Polish factory around the corner on Vassar Street, which was called the Memory Test Computer or MTC, or sometimes referred to as Whirlwind 1 1/2. The MTC was built specifically to test Jay Forrester's magnetic core memory (see the free download of Chapter 6) before installing it in Whirlwind. Later, an all-transistor version (3,600 transistors) of the MTC was built, called the TX-0. In 1957, the 22,000-transistor TX-2 replaced the TX-0.

It was the TX-2, at the helm of which was Larry Roberts, that first sent digital packets of information across the continent to another of the Whirlwind progeny (the AN/FSQ-32) in California. That, of course, began ARPAnet, which led to the Internet.

A recent 2010, and very readable, book also has some information:
"The Department of Mad Scientists: How DARPA Is Remaking Our World, from the Internet to Artificial Limbs" by Michael Belfiore

"In the post-NASA world, ARPA emerged as a sort of dumping ground for military programs that could find no other home. In 1951, the Air Force needed to unload what one former ARPA staffer called an expensive white elephant (in the form of a major piece of computer hardware called the AM/FSQ-32DIA, and the fledgling R&D agency ended up with it.

"The 250 ton machine had been built by IBM as a spare for he Air Force's Semi-Automated Ground Environment, or SAGE, Program."

Roy

The struggle for accuracy
From Roy Mize April 5, 2010 to Dag Spicer
... I still intend to submit a proper white paper. However, in the interim, below is a brief summary of my SAGE investigations and the facts that I've validated. The focus is to correct ubiquitous misconceptions by docents and references in CHM and other records. As a result of my contacts, IBM has changed some Web pages to reflect the actual number of SAGE computers that were built. We should consider doing the same where the information on our pages doesn't agree with the validated information.

After reading literally hundreds of pages, and having emails and telcons with responsible persons at IBM, MITRE, Lincoln Labs, and other places, it is clear that we should never rely on the memories of just one person and what they might write many years later without other confirming sources. I've found this to be true in the two early aviation history books I'm completing. It's astounding the variation in 'facts' between people who are considered to be responsible historians.

I've located records and responsible persons for all SAGE elements except for the RAPPI manufactured by Lewyt Electronics, a subsidiary of the Lewyt Vacuum Cleaner company. Many of the records are kept on servers; CHM might consider asking for a data transfer to ensure future retention of the information.

I had been in contact with Ms. Mary Mullins - public relations VP at Thyssen-Krupp, eventual successor to Lewyt - asking for help in finding any archives. Unfortunately, she bailed because of the flame-war emails engendered by a previous summary I sent out. At this point, any contact with Thyssen-Krupp would have to be at John's level to their subsidiary CEO, given what happened.

I found a RAPPI reference that said it was designed in 1955 with production starting shortly thereafter. If this can be validated, the Lewyt RAPPI could rightfully be called the first production transistorized computer, or at least a transistorized signal processor.

Roy

 

Summary:

 Number of Direction (Sector) Control Centers:   23  (22 if Thunder Bay isn't included)
 Number of Combat Control Centers:                4  (5 if Thunder Bay is included)
        (The role of Thunder Bay isn't clear in the items I reviewed.)

 Number of AN/FSQ-7 and AN/FSQ-8 computers manufactured:   56
 Locations of SAGE computers:
        27 combat control or direction centers, 2 at each center; total = 54
        Programming Support Center: 2 at Systems Development Corp. (SDC)
            (A Vinton Cerf interview seems to say that he saw only one at SDC.)

 Number of AN/FSQ-32 computers manufactured:   2
        Locations: 1 at SDC; 1 at IBM
            (One source states that 1 system went to the CIA.)

Air Traffic Control Use:
       Despite the fact that the SAGE system and its supporting radars acted as a de facto
North American air traffic control system, only 1 direction control center was ever a part
of the FAA system. It was later supplanted by a new system under FAA control.
The Great Falls, Montana Center/Malmstrom AFB system was used in the early days due to the
cost of a separate FAA system and because air traffic in the Dakota/Montana corridor was
so light. FAA history on the development of the civilian air traffic control system
virtually ignores any Air Force role in ATC.

Number of Building Stories:
Some were three stories and some were four. There is a complete list available.

Note:
I haven't been able to find a definitive summary of how Cheyenne Mountain operated as
a Combat Operations Center, e.g. how its computers worked with SAGE and the type of
computers used. It must exist, so I'll keep looking during my recuperation.

From Roy Mize June 2009 to Paul Lasewicz, IBM Historian
I'm a docent at the Computer History Museum (www.computerhistory.org) in Silicon Valley (Mountain View).

Discussing SAGE is part of my tour lectures. A visitor asked about my assertion that there were 28 locations where duplexed AN/FSQ-7's were installed, i.e. 56 computers. My reply was that I had researched the subject but would do so again just to make sure.

In reviewing my original SAGE research and doing new research, I found errors in a variety of places. That is expected in non-validated postings, but I found errors in many places, including various SAGE listings at the Computer History Museum Web site, and even at IBM.

I'm attempting to get validated information from all SAGE-related vendors, and from archives where the companies no longer exist. I have a reply from MITRE and am awaiting replies from MIT/Lincoln Labs, AT&T, Western Electric, and Systems Development Corporation (Burroughs/UNISYS) archivists.

I also have a reply from another IBM office. However, they were unable to provide a complete answer.

Here is what I found in a search of IBM history Web pages:

"When fully deployed in 1963, the system consisted of 27 centers throughout North America..."

The number is correct when considering operational Air Force locations - 24 combat direction centers and 3 combat control centers. 27 centers x 2 = 54 computers. 2 additional AN/FSQ-7 computers were installed as a programming support center at RAND/Systems Development Corporation in Santa Monica, California.

I also found a reference that seems to be in error about the relationship of SAGE to the MIT Whirlwind. Filename = teraflopattackilluminata.pdf. URL = www-03.ibm.com/servers/deepcomputing/pdf/teraflopattackilluminata.pdf

"IBM's been before. Its Whirlwind II used 55,000 vacuum tubes. "

Other sources state that Whirlwind II, as such, was never built. SAGE was sometimes described as Whirlwind II, but this is incorrect according to other sources.

My objective is to obtain documentation from validated sources to use in preparing an accurate database of SAGE information. The Computer History Museum has become a principal source for historical research on computers. As a docent, I'm spending the time because our archivist can't spend the time I've expended on tracking down accurate information.

Your help would be greatly appreciated.

From Dale Williams May 2004

>> I was one of the Airman that Blue Suited the Q-7 at
>> Malmstrom AFB, Great Falls, Mt in 1963.
>Question:
> 1) what is "Blue Suited"?
When the SAGE project first became an active weapons system for the Air Defense Command, the maintenance on the FSQ-7 (& -8) was performed by IBM. In the early 60's the Air Force decided to take over maintenance, or "Blue Suit" maintenance. It was a term used by the Air Force (at least at that time) to signify that Air Force personnel would be doing the job instead of civilian personnel. Did I clear that up or make it murkier?

>> Spent three years working in the Central Computer section
>> of the Q-7.
> Hmmm - sound like "Blue Suited" is maintenance?
> A person trained/specialized in one section?
When I first went into computers in the Air Force (I cross-trained out of aircraft radio maintenance), it was divided up into three sections. We were some of the first in the Air Force to be in the new field, computers. The section you were in was determined by an IQ test. If you did well in logical thinking, you were assigned to the central computer section - for what is a computer but a logical thinking machine? If you did well in mechanical, you were assigned to input/output. That included card readers and card punches, printers, tape drives, and computer entry punches, plus the logic that controlled the input/output between the Long Range Radar sites and the Q-7, and the logic for the X-tell (cross talking) between the other Q-7 sites and the forward-tel and back-tel to the Q-8 sites. The FSQ-7 was a direction center, and for every so many Q-7's there was an FSQ-8, which was the control center. From there it went on up to NORAD.

They later determined this was not the way to break down the maintenance, as the computers and the peripheral equipment became more sophisticated. The transition from electron tubes to transistors and then on to chips made the computer so small that it just was not feasible to divide the maintenance up anymore. So you worked on everything as you were assigned from one system to another.


> Got any "war stories" that techies might enjoy?
>> The other two sections being Displays and Input/Output.
>> I got to work on the Q-7 in its final days at Luke AFB, Az.
>> in the early 70's. I was only there for about a year or so.
>> It wasn't nearly as exciting as my first time up at Great Falls.
>> I had worked on a whole lot newer computer in the mean time,
>> but not physically bigger.
> Easy to believe ;-)
Also, you were actually inside the computer when you performed maintenance. Everything was bigger than life with the Q-7. So you could, with an o-scope, look at each and every bit of a word as it worked its way through the computer. It was really a simple machine to work on, when I compare it to later, physically smaller, but much faster systems. I did work on other large systems - the Philco 2000 and 1000 at NORAD's Cheyenne Mountain Complex and the IBM 360 and 370 at a satellite monitoring site in Australia. But even if they were larger and faster computers, it just wasn't the same as the old Q-7.

>> The Q-7 was the easiest computer I worked on,
>> more forgiving of my mistakes.
>> Dale Williams
>> blackkoko22@yahoo.com
I hope this clears up some of the questions and just doesn't generate a whole lot more. But if I can answer any other questions I will certainly try. It does strain the old memory going back 40 years to remember things. But it is fun remembering.
Dale

From Les Earnest Mar 2009 - replying about ARPANET in INFOROOTS
Les is not overly shy ;-))
This article is for those who think that large organizations are/can-be efficient. No exceptions are discussed here :-|

Sue Thomas wrote:
>
> As per my earlier posts, I’m researching the influence of California 
> on the development of the environment we now know as cyberspace. [ 
> http://www.thewildsurmise.com ] I’ve just read Annalee Saxenian’s 
> ‘Regional Advantage’ about the cultural differences between east coast 
> and west coast tech industries, and that has led me to wonder whether 
> it would have made a huge difference to the development of the 
> internet if the first few nodes had been based in east coast locations 
> (apart from the obvious technical issues which had made the selected 
> groups the best choice). Maybe the idea was even considered then 
> discarded?
>
> To refresh your memories, the first 4 nodes of Arpanet were in Los 
> Angeles, Menlo Park, Santa Barbara, and Utah. Any thoughts on possible 
> alternative hosts on the East Coast – or other parts of the US – along 
> with speculations as to whether anything would have been different, 
> and why?
>

=======================================

In my view the East-West question doesn't make sense at several levels.

  1. First, there was no 4 node ARPAnet, though some now like to think there was. The first four nodes were designated as a test rig, composed of sites that were willing to shake down, debug and measure the performance of the first packet switching schemes. The first operational network was to be transcontinental and have 8 nodes though things got a bit mixed up before all of the early sites got on line.

  2. The first nodes connected were not Los Angeles, Menlo Park, Santa Barbara, and Utah. They were UCLA, SRI, UCSB, and U. Utah. It didn't matter where they were located.

  3. ARPAnet was based principally on technology developed on the East Coast, specifically at MIT.
I believe I am qualified to comment inasmuch as I was at MIT in the late 1950s when the underlying technology was developed there and was the Stanford representative on the ARPAnet startup committee during 1967-68. For the record, I'm a West Coast guy who went East for 12 years.

It appears to me that there were five key steps that led to the creation of ARPAnet building on the 1950 technology base provided by general purpose computers and telegraph and telephone systems:

(1) development of high speed digital communications;
(2) development of computer timesharing;
(3) recognition of the need for an integrated network;
(4) proof that partially connected networks would work;
(5) development of packet switching.
Steps 1, 2 and 4 happened at MIT; 3 and 5 were done mostly by people from MIT.

HIGH SPEED DIGITAL COMMUNICATIONS

The first computer network was part of the SAGE air defense system, which was initiated by MIT Lincoln Lab in the 1950s. SAGE used modems that had been invented nearby in 1949 by Jack Harrington and his group at the Air Force Cambridge Research Center (AFCRC). SAGE became a nationwide network connecting 23 gigantic computers, one being in Canada. I use the term "gigantic" in the physical sense, inasmuch as they were the largest computers ever built. Each had about 55,000 vacuum tubes and occupied an area the size of a football field. Never mind that as an air defense system SAGE was a fraud that cost taxpayers billions of dollars and was a cornerstone of the military-industrial complex that has since bilked U.S. taxpayers out of many more billions. That conspiracy is still going strong but it's another story.

SAGE used digital communications to collect radar data from remote sites, transmit guidance commands via packet radio to manned interceptors and ground-to-air missiles, and to send tactical information to adjacent control centers and to higher level command-control systems. However all these links were special-purpose.

TIMESHARING

Another thing that had to be invented before ARPAnet became worthwhile was timesharing, since without it there would have been no need for interactive networking until about 20 years later, when personal computers became feasible. Timesharing was an accidental invention in SAGE, which processed radar data cyclically and put keyboard interactions and display generation in the same loop. That was a special-purpose kind of timesharing, but John McCarthy, who was then a professor at MIT, foresaw the need for general purpose timesharing and proposed it in 1959. Subsequently several timesharing projects in the Boston area confirmed its feasibility in the early 1960s, the first being CTSS at MIT. The first commercial timesharing system was the PDP-6, developed in 1964 by Digital Equipment Corporation, a spin-off from MIT Lincoln Lab.

RECOGNIZING THE NEED FOR A GENERAL PURPOSE NETWORK

The first person to clearly enunciate the need for a general purpose computer network was J.C.R. Licklider, or "Lick" as his friends called him. I first met Lick in 1949, when he gave me a summer job as a guinea pig in one of his experiments. I ran into him again when I joined MIT Lincoln Lab in 1956 to help design SAGE. Lick later became a key scientist at Bolt, Beranek and Newman (BBN), where he supported the development by Ed Fredkin and John McCarthy of an early timesharing system on a DEC PDP-1 computer. In 1962 Lick joined the Defense Department's Advanced Research Projects Agency (ARPA) and founded its Information Processing Techniques Office (IPTO). A short time later he proposed building an interactive network linking existing timesharing systems. Lick didn't know exactly how to build such a network but left it on the IPTO agenda when he returned to MIT and kept pushing for it -- see http://www.kurzweilai.net/articles/art0366.html?printable=1

PROOF THAT PARTIALLY CONNECTED NETWORKS CAN WORK

The next two steps in ARPAnet development came out of a group of MIT graduate students who spent evenings and weekends in the early 1960s sharing the TX-2 computer at MIT Lincoln Lab. TX-2 had been designed by Wes Clark, mostly using modules that had been engineered by Ken Olsen before he left to found DEC. Some of the students involved were Ivan Sutherland, who was developing his Sketchpad drawing system; Larry Roberts, who was working on perception of three-dimensional objects from photographs; and Len Kleinrock, who was doing network simulations to investigate queuing theory for various configurations of partially connected networks. I was there too, creating the first cursive handwriting recognizer, which included the first spelling checker as a subroutine. We all helped each other occasionally and became friends.

Kleinrock completed his PhD in 1963 and showed that a partially connected network could provide adequate throughput between any pair of nodes. He then accepted a faculty appointment at UCLA. Concurrently, Paul Baran at Rand Corporation was looking at networking from the viewpoint of survivability in an environment where links could be taken out, and concluded that a multipath network would be more survivable than the tree-structured networks used in military communications systems. He attempted to get such a system built but was unable to get it funded. Later Donald Davies in Britain also advocated a packet switching scheme but likewise was unable to find funding.

Ivan Sutherland finished his dissertation in 1963 and in 1964 was recruited by Lick as his replacement, so that Lick could return to MIT. Larry Roberts had also finished his dissertation in 1963 and hung around Lincoln Lab. Ivan followed up on Lick's idea of creating a network by funding Larry to put together a link between two timesharing systems, the TX-2 at Lincoln Lab and the AN/FSQ-32 at Systems Development Corporation.

Meanwhile I was loaned by my employer (MITRE Corp., an MIT spin-off) to the Central Intelligence Agency for a year and then to the Joint Chiefs of Staff to work on more ill-conceived projects. Given that Ivan and I were both in the Washington area we socialized occasionally and, in 1965, he tried to recruit me to join him at ARPA. I politely declined, saying that after working in the military-industrial complex for over a dozen years my goal was to get as far from the Pentagon as possible. He then kindly suggested that I talk to Stanford, where he had just funded a new million dollar computer facility for artificial intelligence research but then had second thoughts about project management there. I followed that suggestion and soon joyously left for Stanford. I learned later that Ivan had also tried to recruit Larry Roberts, who also declined, but was able to get Bob Taylor to come from NASA.

DEVELOPMENT OF ARPANET

When Bob Taylor took over IPTO at the end of 1965 he decided to move ahead on creating a network and realized that he needed someone with expertise to lead the project. Apparently based on suggestions from Lick and Ivan he recruited Larry Roberts. Perhaps more accurately, he coerced Roberts by leaning on his employer, MIT. Upon joining ARPA, Roberts put together a start-up committee composed of representatives of sites that were interested in participating. I participated representing the Stanford Artificial Intelligence Lab (SAIL) even though my boss, John McCarthy, had major reservations about this possibly intrusive project.

We started formulating packet designs and our original plan was to have each timesharing system talk directly to its neighbors over the network. However Wes Clark, who had been the architect of TX-2 and other things, then made the excellent suggestion that minicomputers be used to handle packet switching. Those machines, which we called Interface Message Processors (IMPs), would then talk to the main computer through a separate interface.

We developed performance specifications for the network that focused on two functions: file transfer and remote access, which came to be called "Telnet". As I recall we discussed doing email briefly, given that it was already available in some timesharing systems, but rejected it as a frivolous use of the net -- after all, we already had U.S. Mail (!).

Though we were somewhat off-target in our initial choice of services it turned out well in the long run. When the need for email services was recognized a few years later it was easily provided using the file transfer capability and when interactive web applications began to be developed about 35 years later the short round-trip communications delays specified for Telnet proved adequate for these new applications.

As I recall the Request for Proposals was issued in the summer of 1968 and our committee reviewed the resulting technical proposals at a meeting in November that year in Monterey, California, at the Del Monte Hotel, which I had arranged. Of the dozen or so that were submitted there were two standouts, from Raytheon and BBN, both from the Boston area. The consensus evaluation of our committee, based just on technical issues, not financial proposals, was that Raytheon was the better choice, though I thought BBN had done a better job and said so. Perhaps I was influenced by the fact that a substantial number of people in the BBN group had recently come there from Lincoln Lab after working on SAGE. In any case I was happily surprised two months later to learn that BBN was selected as the contractor. I later tried to find out how that happened but got conflicting reports.

As soon as the IMP interface specifications were developed by BBN we all started working on making that connection. I got one of our graduate students (perhaps Phil Petit) to design the hardware and another (Andy Moorer) to write the operating system software. However we then ran into a brick wall. Our operating system, which was closely related to DEC's TOPS-10, required that the entire system be resident in main memory, which was then core. Unfortunately the addition of the ARPAnet interface software made the operating system so large that there was not enough room to run user programs! I therefore had to round up more funding, go out for bids on more core memory and get it installed before we could connect to ARPAnet. Thus even though we were supposed to be one of the elite initial eight we were unable to connect until some months later.

Even then ARPAnet was not a very lively place, though the pace picked up a lot after email was added. Most sites left many of their data and program files publicly accessible and a lot of benign thievery went on, which was fine inasmuch as nearly all of the participants were universities. For example after I recruited Ralph Gorin to make an improved spelling checker around 1971 it soon spread over the net to most DEC-10 and DEC-20 computer facilities that were on the net.

After I wrote FINGER, which provided a kind of social networking service and had a proto-blog capability (see http://asia.cnet.com/reviews/pcperipherals/0,39051168,61998604,00.htm), it soon spread everywhere. Unfortunately the Unix version, written at UC Berkeley, had a security vulnerability that was exploited by the first Internet Worm, launched in 1988 from MIT by Robert Morris. Happily, FINGER was used more constructively by other people, including Linus Torvalds who reportedly used his ,plan file to coordinate the development of Linux.

When Vint Cerf finished his PhD at UCLA in 1972 and came to Stanford, I helped him round up funding for his network protocol research project that produced TCP/IP, which facilitated the integration of disparate networks into the Internet beginning 1 January 1983.

In summary, the trajectory of ARPAnet turned out to be somewhat bumpy, but it demonstrated the practicality of packet switching and was close enough to what was needed that it was able to evolve useful services. In the beginning all telecommunications companies scoffed at the idea that packet switching networks would work, but they have now largely switched over to using this technology. It is amusing to note that in 1976 AT&T was offered the chance to take over ARPAnet with no up-front cost if they would agree to run it, but they refused on the grounds that this technology had no future!

-Les Earnest



The following is from Peter A. Goodwin,
placed here because of an "altercation" about the usefulness of SAGE vs jamming - this hint might eventually help determine if/how much SAGE input could handle jamming and prevent/reduce "GIGO" (Garbage In, Garbage Out).
BMEWS: There were two 7090s in each installation. Raw radar data were digitized and fed to the computers for threat analysis. The radar data consisted of target position -- elevation, azimuth, and range (radial distance, from radar-echo-return time) -- and range rate (a Doppler-effect measurement). GE made the radars; Sylvania made the gear that performed the digitizing. IBM's ASDD Mohansic Lab made the 7090 radar-data real-time data channel boxes. Both 7090s operated on all data, but only one reported to NORAD; the idea was that if either machine failed, the other would be ready to perform actively. The 7090 program that performed threat analysis had operating levels to avoid being overwhelmed with incoming data: if the data stream was moderate, it would perform thorough analysis; if the data stream became heavy, it would perform only cursory analysis, the theory being that by that time the world was going to hell anyway, so who cared? The 7090 program was written in assembly language in order to optimize size and speed. The instruction code was written to be non-volatile, and every couple of seconds the system would perform a Hamming checksum to ensure that nothing in the instruction code had been altered.


If you have comments or suggestions, Send e-mail to Ed Thelen

Go to Antique Computer home page
Go to Visual Storage page
Go to top

Updated Aug 17, 2014