
Control Data Corporation, CDC-6600 & 7600

*** Your information, comments, corrections, etc. are eagerly requested.
Click here to e-mail Ed. Please include the URL under discussion. Thank you ***


Manufacturer:              Control Data Corporation
Identification, ID:        CDC-6600
                           CDC-7600
Date of first manufacture: CDC-6600 - 1964
                           CDC-7600 - 1969
Number produced:           CDC-6600 - 50, as per http://www.newmedianews.com/tech_hist/cdc6600.html
Estimated price or cost:   CDC-6600 - $7,000,000 as per CISC of NCAR - understand? ;-))
                           CDC-7600 -
Location in museum:        -
Donor:                     CDC-6600 - Lawrence Livermore Laboratory
                           CDC-7600 - Lawrence Livermore Laboratory

Contents of this page:

Photo
CDC-6600, CDC-7600

Placard
6600 WORD.doc and 7600 WORD.doc by Ron Mak

Architecture
  • Designed by the legendary Seymour Cray "leading a small team of only 34 Control Data employees including the janitor!" in the early 1960s
  • "Considerations in Computer Design - Leading Up to the Control Data 6600" by James E. Thornton (local copy .pdf)
  • each 60-bit word held:
    • 1 floating point value
    • 10 6-bit characters (pre-ASCII :-) - see the packing sketch after this list
    • 4 15-bit (register) instructions, or fewer when 30-bit (memory access) instructions were mixed in
  • 1 instruction could be issued each 0.1 microseconds, conditional upon functional unit, register, and data availability
  • 1 instruction word look-ahead, 12 previous instruction words instantly available
  • Gordon Bell lecture notes
  • Gordon Bell 6600 system diagram and also see the next diagram (#40)
  • James E. Thornton "Parallel operation in the Control Data 6600", in Chapter 39 of "Computer Structures: Readings and Examples" by C. Gordon Bell & Allen Newell
  • James T. Humberd was a CDC 6600 salesman, and used these slides to present the system.
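
To make the 6-bit character packing mentioned above concrete, here is a small C sketch (my illustration - a 60-bit word held in a 64-bit integer; the codes are not real CDC display code):

    #include <stdint.h>

    /* Pack ten 6-bit character codes into the low 60 bits of a word,
       leftmost character in the highest bits, as on the 6600. */
    uint64_t pack_word(const unsigned char codes[10])
    {
        uint64_t word = 0;
        for (int i = 0; i < 10; i++)
            word = (word << 6) | (codes[i] & 077);  /* 6 bits each */
        return word;
    }

    /* Unpack them again, leftmost character first. */
    void unpack_word(uint64_t word, unsigned char codes[10])
    {
        for (int i = 9; i >= 0; i--) {
            codes[i] = word & 077;   /* low 6 bits */
            word >>= 6;
        }
    }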

Special features - 6600
     Hardware
     Software - (Optional) "Time Critical" Operating System

Special features - CDC 6600 Hardware
  • 10 independent "Functional Units" in the Main Processor included:
    • 2 floating point Multipliers (1 microsecond)
    • 1 floating point Divider (3.4 microseconds)
    • 1 Add and 1 Long Add unit (0.3 microseconds each)
    • 2 increment (used for memory access),
    • 1 branch,
    • 1 boolean,
    • 1 shift unit (each 0.1 microseconds)
    • 1 population count (number of 1 bits in a word - go ask NSA why)
  • 10 (optionally 20) built in "Peripheral Processors" (PPs) controlled the main processor, operated the control console (including painting the alphanumerics in vector mode), and performed all I/O (Input/Output)

    • 12-bit (register) and 24-bit (memory address in 18 bits) instructions
    • each PP had 4096 12-bit words of private memory, and each could access any peripheral channel

  • Memory bandwidth was 1 60-bit word per 100 nanoseconds (10 per microsecond)
  • Memory access time 475 nanoseconds
  • The X shape helped reduce signal transit time. Outer parts of the X held non-time-critical things like heat exchangers, refrigerator pumps, and the "Dead Start" panel (most other computers were "booted" up; CDC usage was to "Dead Start" a computer), etc.
  • No individual lamps and switches for machine registers, total operator access was via the two big CRT (TV tube) displays and keyboard
  • Circuit cards were 3 by 3 inches, in pairs, with transistors soldered onto each circuit card, and resistors & connection wires soldered between. Called "cord wood" modules - see photo
  • Memory modules were 1024 words by 12 bits wide (no parity), like the CDC 160A; 5 of these modules made 1024 60-bit words.
  • Every Cray machine seemed to have a "Population Count" instruction, which counted the 1 bits in a word - we figured this was for 1 customer, the NSA. The National Security Agency is charged with code cracking, and those who claim to know say the number of 1 bits in a field is interesting.
  • 18-bit addresses, word addressable - Seymour must have figured that no one would ever buy that much memory, because he originally used the 18th bit for another memory function. Boeing insisted on 262,144 words (256 K words) of memory, and an Engineering Change Order did the re-work to allow that much.
  • A 2 megaword Extended Core Storage (ECS) was available. It was phased to match main memory, so after a 1.4 microsecond start up delay, 10 - 60 bit words could be written to or read from ECS.
  • Assuming no register conflicts, one instruction was issued each 100 nanoseconds - see the toy issue-timing sketch after this list
  • The instruction buffer was large enough to hold FFT code assuming precomputed trig values - usually done if you were going to do much FFT work.
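
To make the issue rule concrete, here is a toy issue-timing sketch in C. It only captures the flavor of the scoreboard check - issue waits for a free functional unit and ready operands - not the real scoreboard logic, and all names are mine:

    #include <stdint.h>

    #define UNITS 10
    #define REGS  24                   /* 8 X + 8 A + 8 B registers */

    static long unit_free_at[UNITS];   /* when each functional unit frees up  */
    static long reg_ready_at[REGS];    /* when each register's value is ready */

    /* Try to issue at time `now` (nanoseconds); return actual issue time. */
    long issue(int unit, int src1, int src2, int dest,
               long unit_latency, long now)
    {
        long t = now;
        if (unit_free_at[unit] > t) t = unit_free_at[unit];  /* unit busy */
        if (reg_ready_at[src1] > t) t = reg_ready_at[src1];  /* operand   */
        if (reg_ready_at[src2] > t) t = reg_ready_at[src2];  /*   waits   */
        unit_free_at[unit] = t + unit_latency;
        reg_ready_at[dest] = t + unit_latency;
        return t;
    }

With ten units and latencies like those listed above, independent instructions stream out one per 100 ns while dependent ones wait.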

The above mentioned Peripheral Processors were operated as:

   - specific task dedicated PPs
         -  main CPU and PP scheduling by the "monitor" 
         -  disk driver
         -  monitor driver
         -  printer driver dumping from disk
                 via the disk driver
         -  card reader driver buffering to disk
                 via the disk driver
         -  tape driver (after "we" re-wrote it.)
   - the remaining PPs floated, picking up other tasks as needed
 
The program in the main CPU could
  - request specific tasks, like 
       - opening input and output files
            by placing a request in ?address 100??
       - terminating itself
       - ...
  - operate I/O circular queues in main memory
       - the PPs would try to keep these areas full
          or empty, depending on whether the file was
          for input or output (see the sketch below)
That is the general flavor -
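
A minimal sketch of that circular-queue arrangement, assuming one PP filling an input buffer and the CPU program draining it (names and sizes are mine, purely illustrative):

    #include <stdint.h>
    #include <stdbool.h>

    #define QUEUE_WORDS 512

    struct circular_queue {
        uint64_t data[QUEUE_WORDS];  /* 60-bit words, held in 64 bits here */
        volatile unsigned in;        /* next slot the PP will fill   */
        volatile unsigned out;       /* next slot the CPU will drain */
    };

    /* PP side: add a word if there is room. */
    bool pp_put(struct circular_queue *q, uint64_t word)
    {
        unsigned next = (q->in + 1) % QUEUE_WORDS;
        if (next == q->out) return false;      /* full - CPU is behind */
        q->data[q->in] = word;
        q->in = next;
        return true;
    }

    /* CPU side: take a word if one is ready. */
    bool cpu_get(struct circular_queue *q, uint64_t *word)
    {
        if (q->in == q->out) return false;     /* empty - PP is behind */
        *word = q->data[q->out];
        q->out = (q->out + 1) % QUEUE_WORDS;
        return true;
    }

For an output file the roles simply reverse: the CPU puts, the PP drains.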
You may remember that Special Systems found that
each request for tape I/O caused 
  - a PP to be assigned for that task, 
  - the program for that PP's task to be loaded from disk, 
  - that one I/O record to be done,
  - the PP to drop back into the general pool of PPs
       for re-assignment to anything.
We got excited about the inefficiency and slowness,
  and "we" re-wrote it to stay around,
  doing tape I/O for as many tasks as were around,
  dropping back to the PP pool only if idle for seconds.

Population Count instruction

> Date: Wed, 19 May 2010 10:02:29 -0700
> From: wbblair3@yahoo.com
> Subject: The Declassified History of NSA Computers
> To: cctalk@classiccmp.org
> 
> PDF report, "History of NSA General-Purpose Electronic Digital Computers":
> 
>   http://www.governmentattic.org/3docs/NSA-HGPEDC_1964.pdf 

Nice.

Also of interest are the specific changes / additions to the instruction sets 
that the NSA requested. Seymour Cray knew they were going to be his best customer.

Take a look at his 'population count' sideways add, a useful instruction in cryptography.
It was included in all the Cray machines:

http://en.wikipedia.org/wiki/Hamming_weight

Randy
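
For reference, a tiny C sketch of what a population count computes - the 6600 did this in a single functional-unit operation:

    #include <stdint.h>

    /* Count the 1 bits in a 60-bit word (held here in a uint64_t). */
    int population_count(uint64_t word)
    {
        int count = 0;
        while (word) {
            word &= word - 1;   /* clear the lowest set bit */
            count++;
        }
        return count;
    }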

Bulk Store core memory for fast program swapping (instead of a swapping drum or disk)
There was special provision for moving streams of 60 bit words to/from a lower cost core memory called "Bulk Store".

Upon start of execution of the Bulk Move instruction, all other main memory accesses were stopped, even including Peripheral Processors, and a 60 bit word would move to or from Bulk Store each 100 nanoseconds - remember the phased memory system.

The 700 nanosecond max latency and the world beating transfer speed could be very useful ;-))
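
A back-of-envelope check of those numbers (my arithmetic, using an arbitrary 4096-word block):

    #include <stdio.h>

    /* One 60-bit word each 100 ns, after up to 700 ns start-up latency. */
    int main(void)
    {
        double words  = 4096.0;
        double ns     = 700.0 + 100.0 * words;        /* latency + streaming */
        double mbytes = words * 60.0 / 8.0 / 1.0e6;   /* 60 bits = 7.5 bytes */
        printf("%.0f words in %.1f us -> %.1f MB/s effective\n",
               words, ns / 1000.0, mbytes / (ns * 1.0e-9));
        return 0;
    }

That works out to roughly 75 MB/s sustained - in the mid 1960s.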

Interesting option - from Tom Kelly
These were 6600's. SN 82 and 84, I think, but that is very hazy. The customer was the Naval Air Development Center, in Johnsville (or Warminster), PA. They had been modified for real-time use, and with special interfaces to simulation hardware. If I recall (and I didn't do a lot with this), if you referenced a negative address, it accessed the simulation hardware. There was a hardware realtime scheduler.

> I had heard of a modification for extra memory for Boeing,
> but never a "monitor mode" for the CPU.
> Why would some want to use a "monitor mode"?

It was a standard option. I've looked it up. Look at:
"Control Data 6000 Series Computer Systems: Reference Manual", Pub # 60100000, Revision N, Appendix F. (1972)

If the CEJ/MEJ (Central Exchange Jump/Monitor Exchange Jump) option was installed, then there was a "monitor flag" that controlled the operation of the XJ (CPU) and MXN (PPU) instructions. The instruction lists on the covers indicate that the MXN instruction was "Included in 6700 or those systems having the applicable Standard Options". The appendix refers to this as "monitor mode" on p. F-5.

When the CPU was running the CPUMTR code, it ran with the monitor flag set, which prevented the PPs from interrupting it again.

Software - (Optional) "Time Critical" Operating System


Now - about "Real Time" -

I was with Control Data Corp - Special Systems Division -
1966-1972 and part of our claim to fame was our own version of 
    "Real Time" 

Most manufacturer's version of "Real Time" was
  " our equipment is fast enough to handle your problem,
      as we see it,
    so you will not be inconvenienced"
or something like the above.

And for many commercial applications  -
   say retail point of sale
The above was good enough - and really
   how much damage would happen if a clerk/customer
   was occasionally delayed a few seconds.

Even say airline reservation process survived,
   if you couldn't handle some transaction promptly,
   SABRE just threw it on the floor,
   the reservation agent re-submitted, and all was OK

--------------------------

There were other  "Real Time" users that were more demanding -
 CDC "Real Time" started out when the CDC-6600 was involved with 
    "hybrid" computing - popular in the 1960s.
  There were some functions that an analog computer
    was poor at - say complicated function generation,
    and the analog run would lose validity if
    the digital function generation was delayed.

Control Data's presentation was
  "we will guarantee that we can schedule the required
     input, processing, and output
   every x milliseconds, and schedule other jobs
   as background or in other "Real Time" slots."

We even could guarantee data-logging to specially
constructed circular disk files. These files could be simultaneously
accessed in background for on-line analysis.

Our "gimmick" was the ability to schedule dedicated
   Peripheral Processors (PP) to handle the I/O,
       a deterministic task
   and use the scheduling PP to synchronize the
     I/O and schedule the main processor appropriately
      after the input is complete and before the
      required output time. 
And we could run "batch" processing in the background :-))

If the customer's CPU processing took longer than
   the customer asked for - there were the options,
     - grab off background time
          (other "Real Time" jobs would not
           lose their requested/guaranteed time)
     - abort (usually used during debug)
In-core values of moving average and max CPU time were available
     to the real time user.
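
The shape of that guarantee, as a minimal cyclic-executive sketch in C - every function name here is a hypothetical stand-in, not CDC's actual interface:

    #include <stdbool.h>

    extern void read_inputs(void);            /* done by a dedicated PP     */
    extern bool compute_step(void);           /* false = CPU slot overrun   */
    extern void write_outputs(void);
    extern void run_background_slice(void);   /* "batch" in the background  */
    extern void borrow_background_time(void); /* overrun option 1           */
    extern void wait_for_next_tick(void);     /* the scheduling PP's timing */

    void time_critical_loop(void)
    {
        for (;;) {
            read_inputs();
            if (!compute_step())
                borrow_background_time();  /* option 2 was: abort the job */
            write_outputs();
            run_background_slice();
            wait_for_next_tick();          /* every x milliseconds */
        }
    }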

That niche market was expanded eventually to 
   say Grumman test flight evaluation in  "Real Time".
Ground flight analysts could interact with  "Real Time" 
   data on their scopes, make  "Real Time" decisions to 
     continue, abort, expand
   the prototype test plan depending upon the current situation.
Analysts could even edit/recompile in the background and
   then utilize the new program.
Exotic inputs such as from a laser-ranging theodolite were
      interfaced.

This system was used to aid prototype development of
   the F-14 and possibly others later.
     
-------------------------------------
            

We thought this very helpful
   and to the best of my knowledge no other vendor
   before or since could do this version of "Real Time" successfully
 in a multiuser environment.

I am certainly open to comments

  --Ed Thelen
 


Tim Coslet has a story :-))

I have a story about Real Time Systems design that my supervisor told me years ago:

     At a presentation on Real Time System design methodologies, during the question and answer period one person complimented the presenter on their talk and methodology but commented that it did not seem to be able to specify adequately the Real Time constraints, especially in their specific application.
     So the presenter asked what was their application and assured them that it should only need minor changes to handle whatever made the application special.
     The questioner then explained "Our application collects data and then transmits the data to another computer for storage and later analysis. The computer that our application runs on is attached to one end of a one meter long steel rod, the other end of this rod is attached to a nuclear bomb that is about to be detonated in a test. If the application has not completed all its tasks before the blast wave destroys the computer then it fails to meet its requirements. How would this time constraint be shown using your methodology?"
     The presenter replied "Oh, I'd never thought of that..."

My supervisor then commented: "That is the difference between Soft Real Time Systems and Hard Real Time Systems".

Quirks

  • Quirks - the Scope operating system expected a time limit parameter on the job control card. Oddly, you were forced to give the job time limit in OCTAL SECONDS. There was a limit of 5 octal digits permitted in any field of the job control card, so 77777 (octal) was the maximum time you could request. (If your job took more compute time than that, your job was terminated; output files to that point were retained.)

    Octal 77777 converts into 32767 (decimal) seconds, 546 (decimal) minutes, or 9.1 hours - checked in the small program after this list.

    Several books now quote a mean-time-between-failures (MTBF) for the 6600 as 9 hours. I could not imagine where that number could have come from. (Our 6600's ran months without unscheduled maintenance time. Every few months the CDC Customer Engineers would (rather rudely) demand machine time - like to do Engineering Change Orders.) I now suspect that the quoted 9 hour MTBF in recent books came inadvertently from the maximum CPU time requestable.

    To give an idea of the reliability of the 6600, I liked to program and calculate Pi as a method of learning a new machine's assembly language. I did it to 500,000 decimal places in about 60 hours on a CDC-6600 one Thanksgiving weekend. Because of the length of time, I had to write a check point dump of the intermediate results after 8 hours, and restart the job from the check point dump, over and over until the job was done. That meant I went into work every 8 hours all during that weekend until the 60 hour running time was complete. There was no worry on my part that the 6600 would fail during that weekend run - and the value of Pi was correct as determined later from faster machines with larger memories.

  • Quirks - the use of OCTAL parameters on the Scope job control card was passionately defended by most other Control Data employees. They could not imagine why an expensive computer should have to do a decimal to binary (octal) conversion when a human could do it instead. (Really true. Unbelievable!!)
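
A quick C check of that octal conversion:

    #include <stdio.h>
    #include <stdlib.h>

    /* The largest 5-octal-digit time limit, converted to decimal. */
    int main(void)
    {
        const char *limit = "77777";             /* job card field, octal */
        long seconds = strtol(limit, NULL, 8);   /* = 32767 decimal       */
        printf("%s octal seconds = %ld s = %.0f min = %.1f h\n",
               limit, seconds, seconds / 60.0, seconds / 3600.0);
        return 0;
    }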

CDC 7600
  • The CDC 7600 was main processor code compatible with the CDC 6600,
    and the clock was ALMOST 4 times faster - 27.5 nanoseconds
    • Extensive pipelining within the functional units permitted even more than a 4x speed increase over the 6600.
    • More peripheral units were standard, used slightly differently
    • see Gordon Bell slide sequence
    • different packaging, smaller, tighter
    • nine functional units instead of 10 in 6600

Historical Notes
A nice history at the Charles Babbage Institute
from http://ei.cs.vt.edu/~history/Parallel.html
	Control Data Corporation (CDC) founded. 

from http://wotug.ukc.ac.uk/parallel/documents/misc/timeline/timeline.txt
========1960========

Control Data starts development of CDC 6600.  (MW: CDC, Cray, CDC
6600)
========1964========
Control Data Corporation produces CDC 6600, the world's first
commercial supercomputer.  (GVW: CDC, Cray, CDC 6600)

========1969========
CDC produces CDC 7600 pipelined supercomputer.  (GVW: CDC, Cray, CDC
7600)
========1972========

Seymour Cray leaves Control Data Corporation, founds Cray Research
Inc.  (GVW: CDC, CRI)


From Ed Thelen ---------------------------
You are certainly welcome -
    Thanks for contacting me  :-))   
[about "Fourth Survey of Domestic Electronic Digital Computing Systems" ]

This Artifact
- This unit is serial # 1 from Lawrence Livermore

Interesting Web Sites
Unfortunately, large parts of the Wikipedia CDC 6600 entry (as of July 2013) are:
- WRONG - such as the linkage and control between PPs and CPU
"For any given slice of time, one PP was given control of the CPU, asking it to complete some task (if required). Control was then handed off to the next PP in the barrel."
Looks as though the same silly fool has been mucking about in this 6600 entry for at least 10 years.
The person has no clue at all about how the machine worked or how the CPU was controlled:
- - phased memory,
- - PP-0 by convention controlled the CPU,
- - another operated the operator's console
- - the others did I/O, ie. printing, card reading and punching, disc access, mag tape, serial I/O, ...
Attempts to introduce reality are overwritten. :-((
(Feb 2014) The current version is not quite so bad - but there are still really quirky errors. For instance, the CDC 7600 was application binary compatible with the 6xxx series: you did NOT have to re-compile your FORTRAN, just run the 6nnn binary image. There are many other errors and much sloppy writing. As mentioned, the PPs were quite different.
Wikipedia deserves better !!

Other information
Some manuals: Just for fun, the Dead Start panel

overview

detail




> Message: 20
> Date: Sat, 8 Mar 2008 21:18:32 -0800
> From: "Rick Bensene" rickb@bensene.com
> cctalk@classiccmp.org
...
 

> 
> The displays on the console were driven by a PPU (Peripheral Processing
> Unit), which were small scalar processors (actually, one processor
> multiplexed to appear as a number of independent CPUs), akin to small
> minicomputers (like a PDP-8), which operated out of shared sections of
> main memory.  There was a PPU program that ran the display, generating
> it from data in a section of memory. 

In SCOPE, PP # 10 was dedicated to this purpose -

Each time shared PP (using a common adder) had its own memory of 
4 K 12 bit words.  This reduced the traffic to main memory.

PP # 1 was normally assigned to monitor requests from the jobs
   assigned to "control points".  A job would place a request in
   its relative memory location 0 for service by the system.
   PP # 1 would monitor these requests and assign other
    PPs to do the work, causing a PP to load a new program 
     if  necessary.
   

I worked in CDC Special Systems from 1966 to 1971 - 
    We shipped a version of SCOPE modified to run "Time Critical"
     which used modified code in PP #1 to guarantee user choice of 
           - analog and discrete inputs
           - x milliseconds CPU time
           - analog and discrete output
        on a guaranteed time cycle -
    This was the best in the world at the time for doing hybrid computing :-))
         which unfortunately was on its way out :-((
    A system program calculated resources to see if 
          a new "time critical" user could be added to the running list.
    
>  The displays were vector only, not  raster.  

Yes :-))

> There was dedicated hardware in the display console that did
> CDC character set (a 6-bit code) conversion to vector characters.

Not in any system we shipped, and we could run the "EYE"
   and Northwestern University CHESS program with 
        another PP displaying the chess pieces in nice form
        on the right hand scope.
   The left hand scope being assigned to monitoring
     activity at the normally 8 "control points",
        showing activity and requests for operator intervention
        such as mounting/removing tapes and printer(s) out of paper...
    
> Vector graphics were possible, within the limitations of the speed of
> the PPU.  

Each PP had a 100 nanosecond time share of the adder each 1 microsecond -
    hence a relatively hard upper limit of 10 PPs without a special
    order for another 10 (for customers such as Boeing).
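
A toy C sketch of that "barrel" arrangement - one shared adder stepping round-robin through 10 PP register sets (names are mine, illustrative only):

    #include <stdint.h>

    #define NUM_PPS   10
    #define PP_MEMORY 4096                /* 12-bit words per PP */

    /* One PP's register set - the barrel held one of these per PP. */
    struct pp_state {
        uint16_t pc;                      /* program counter               */
        uint32_t a;                       /* 18-bit A register, in 32 bits */
        uint16_t memory[PP_MEMORY];       /* private 12-bit words          */
    };

    extern void step_one_cycle(struct pp_state *pp);   /* hypothetical */

    /* The shared adder visits each PP for one 100 ns slot per 1 us
       revolution, so each PP sees the adder once per microsecond. */
    void barrel(struct pp_state pp[NUM_PPS])
    {
        for (unsigned slot = 0; ; slot = (slot + 1u) % NUM_PPS)
            step_one_cycle(&pp[slot]);
    }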

> On later 6x00-series systems, such as the CYBER-73, the PPUs
> ran fast enough to generate a nice looking all-vector chessboard on the
> left screen, and a text-based transcript of the moves on the right
> screen.  There were also a number of other cute programs, one being a
> pair of eyes (one on each screen) which would look around and blink.
> The operating system was called KRONOS, and I clearly remember that the
> console command to run the "eye" program was "X.EYES".

Greg R. Mansfield had KRONOS going, and shipping to some customers,
      - mostly educational - by the time I left. 
  Greg  was kind of a one man band - a bit of a Dilbert 
     - a remarkably imaginative and productive individual -
        
I left CDC long before the CYBER-73
....
> 
> Rick Bensene
> The Old Calculator Museum
> http://oldcalculatormuseum.com

Ed Thelen


CDC Cyber Emulator spotted by Jim Seay
Tom Hunter reported in December 2002 to the following Newsgroups: comp.sys.cdc, alt.folklore.computers

> Here is a Christmas present for you: CDC Cyber mainframes are back!
>
> The just released Desktop Cyber Emulator version 1.0 emulates a
> typical CDC Cyber mainframe and peripherals. This release contains
> sources for the emulator and tools, as well as binaries compiled for
> Win98/NT on Intel/AMD PCs. You can download the release from
> "http://members.iinet.net.au/~tom-hunter/"
     (New e-mail and web address)
>
> The emulator runs the included Chippewa OS tape image (handcoded in
> octal by Seymour Cray).
>

And emulates a CDC 6600 with 10 PPUs, 1 CPU, 256 kWord memory, 40 channels and the following devices:
- Console,
- disk drives (6603, 844),
- tape drives (607, 669),
- card reader (405),
- line printer (1612)

- and -

Has anyone still got Control Data Cyber deadstart tapes and possibly matching source tapes? The following would be of great interest for the Desktop Cyber Emulator project. These tapes deteriorate over time, and if we don't preserve them now they will be lost forever. Even Syntegra (Control Data's successor) no longer has copies of MACE, KRONOS and SCOPE.

The following deadstart and source tapes would be great to salvage:

- MACE 5
- SCOPE (any version)
- KRONOS (any version)
- NOS 1
- SMM diagnostics

An SMM deadstart tape with matching source would help in fixing the remaining problems in the emulator.

I can supply a small C program (in source) which will read those tapes on a UNIX or VMS system and create an image which can be used to recreate the original tape (fully preserving the physical record structure and even tape marks).
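
For flavor, a sketch of the kind of program described - not Tom Hunter's actual source - reading physical records from a UNIX no-rewind tape device and preserving record lengths and tape marks:

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        int tape = open("/dev/nrst0", O_RDONLY);  /* device name varies */
        FILE *image = fopen("tape.img", "wb");
        uint8_t buf[65536];
        int marks = 0;                            /* consecutive tape marks */

        if (tape < 0 || image == NULL)
            return 1;

        for (;;) {
            ssize_t n = read(tape, buf, sizeof buf);  /* one physical record */
            if (n < 0)
                break;                            /* read error             */
            uint32_t len = (uint32_t)n;           /* n == 0 is a tape mark  */
            fwrite(&len, sizeof len, 1, image);   /* record-length header   */
            fwrite(buf, 1, (size_t)n, image);     /* record data            */
            marks = (n == 0) ? marks + 1 : 0;
            if (marks == 2)
                break;                            /* double mark = end of tape */
        }
        fclose(image);
        close(tape);
        return 0;
    }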

- - - - - -
Richard Ragan of Syntegra is quoted as "I can verify that it works and have booted up the Chippewa OS. Ah, those old green screens again. Watching the Dayfile, the B-display and looking at the control points on the D-display in octal. Takes you back.... "


There is a popular scientific benchmark called the Linpack Benchmark, used to measure the speed at which a particular computer can complete a particular "compute bound" task. See Linpack Benchmark.


Jim Humberd suggests here "that IBM “invented” the 7040/7094 - DCS (Directly Coupled System), in response to my efforts to sell a CDC 6600 to one of IBM’s largest customers. ... The CDC 6600 consisted of a large central processor, surrounded by 10 peripheral and control processors that were assigned the tasks of operating the devices connected to the input/output channels, and transferring data to and from the central processor.

"That idea was so compelling, IBM came up with the idea of the 7040/7094 - DCS, later upgraded to a 7044/7094 - DCS as their answer to the CDC 6600. "



PROCEEDINGS OF THE IEEE, VOL. 76, NO. 10, OCTOBER 1988 (starting on page 1292)
F. A 1960 Supercomputer Design and a Missile Using Silicon Planar Transistors

Two of the first silicon bipolar n-p-n transistor products should go into the historical record book, not only for the enormous profits they generated which enabled Fairchild Semiconductor Laboratory to greatly increase its research and development efforts that led to the rapid introduction of whole families of volume produced silicon transistors and integrated circuits, but also for setting the pace on computer system designs based on the availability of certain superior transistor performance characteristics, such as speed and especially reliability.

The origin of the first product was the gold-doped high-speed (16 ns) switching n-p-n transistor, 2N706. It was a smaller mesa (three-times smaller diameter at 5-mil, or an area of 1.2 x 10^-4 cm^2) and higher speed version of the 2N696 bipolar silicon n-p-n discussed in Section IV-D, which had been marketed by Fairchild in 1960. Gold is a highly efficient recombination center for electrons and holes. In order to increase the switching speed, gold was diffused into the transistor to reduce the minority carrier lifetime and thus the charge storage time in the base and collector layers of the 2N706.

Based on this existence proof, Control Data Corporation awarded Fairchild Semiconductor Laboratory a $500 000 development contract to produce a still higher speed silicon transistor switch to meet the first requirement - the high switching speed (less than three nanoseconds) of the 10-MHz (3 MIPS) CDC-6600 scientific computer [69].

The second requirement was reliability since there were 600 000 transistors in the CPU. That contract was followed up by a $5M production contract for 10 million units of high speed, gold-diffused, transistors and 2.5 million units of high speed, gold-diffused, diodes in September 1964.

In fact, the transistor specifications of 3-ns and high reliability were arrived at by the CDC computer designers based on the required speed and reliability to complete a numerical solution of a scientific problem without interruption from a computer hardware failure [69].

In order to achieve several thousand hours of CPU run-time without failure, high reliability from the individual silicon planar transistors was the most critical consideration owing to the large number of transistors (600 000) used in the CPU of the CDC-6600. Noyce's monolithic technology has greatly improved the numerics of reliability today.

For example, the 600 000 transistors in the CDC-6600 are only about one-half of the number of transistors contained in a 1-mega-bit (Mbit) DRAM chip, which has a projected chip operating life of 10 years; as many as nine or more 1-Mbit chips can be used in a single personal computer today, which rarely experiences MOS memory failures and whose failures are usually due to the crash of the mechanical magnetic disk drive.

To meet both the 3-ns and high-reliability specifications, Fairchild engineers shrunk the circular 16-ns mesa 2N706 transistor down to a three-finger stripe geometry and used oxide passivation for stabilization. They also improved the yield by using an epitaxial layer to control the resistivity. The result was the 2N709, which met the 3-ns switching time and high reliability requirements. It gave a 2000 CPU-hour operating time before a transistor failed.

This was a very large development and production contract for the design and delivery of only one transistor type - by comparison, it took only about $250 000 to start a silicon transistor manufacturing company in 1960. High speed and high reliability of the 2N709 met the critical requirements that made the first scientific computer possible. ....

from Ed Thelen ----------------------



When I got hired by Control Data, Special Systems,
   a group of us were given a one month familiarization
   on the CDC 6600 so that we could indeed take advantage
      of its special features for the Special Systems Division.  :-))

Early in the class we discussed the lack of a 
      HALT instruction
      HALT and  SingleStep switch
in the 6600 -
There was actually no way to stop the main computer,
    without pulling power - it was always running something -
      it could be executing garbage, but it was running!

(I should explain, there was an
      - address base register, "Relative Address" or RA register
            the processor treated this as address 0
            and could not execute or store below this address
      - field length register, FL
            the processor could access memory between
                 RA and RA+FL
The operating system, usually focused in Peripheral Processor 0, PP0,
    could control this and other registers 
    using the Exchange Jump instruction in a PP.
    http://ed-thelen.org/comp-hist/CDC-6600-R-M.html#TOC/       )
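
The RA/FL scheme in a few lines of C (a sketch of the idea, not the real hardware paths):

    #include <stdint.h>
    #include <stdbool.h>

    /* A user address is bounds-checked against FL and offset by RA
       before it reaches memory. */
    bool translate(uint32_t ra, uint32_t fl, uint32_t user_addr,
                   uint32_t *absolute)
    {
        if (user_addr >= fl)         /* outside the job's field length */
            return false;            /* protection violation           */
        *absolute = ra + user_addr;  /* user's address 0 is really RA  */
        return true;
    }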
 
Like how ya going to debug step by step without a 
    HALT instruction?
    HALT switch?

It turns out ya gotta do it a different way -
   "WHY"
   "OK, so you halt,
        a)  no internal registers are available
                    on any expensive light panel -
        b) no switches are available to change anything
     and besides, 
        c) who wants to halt a $6,000,000 machine
              and sit at the console pondering?"

The single step debug method was to replace the following 
  instruction with an 
         ExchangeJump
instruction, which 
      a) dumps the machine registers into memory
      b) starts to execute the next program 
and you use a PP (one of 10) to examine the 
   dumped registers and memory area of the
   interrupted program.

That way, you can do instruction by instruction debug,
   while running the machine at almost full speed.

And besides, a HALT instruction in a pipelined machine
    requires lots of messy, slow logic.

So, Console step-by-step debugging, if you REALLY need it, requires:
    a) 1/2 of the dual CRT console
    b) one of the 10 Peripheral Processors (PPs)
    c) the memory required by the program
         (which can be swapped to disk 
                 while you are scratching your head)
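
A sketch of that exchange-jump trick in C - the real exchange package occupied 16 central-memory words; this layout is only illustrative:

    #include <stdint.h>

    /* The CPU state and the in-memory package have the same shape. */
    struct exchange_package {
        uint32_t p, ra, fl;   /* program address, RA, FL             */
        uint64_t x[8];        /* X operand registers                 */
        uint32_t a[8], b[8];  /* A address and B increment registers */
    };

    /* Exchange jump: swap the CPU's registers with a package in
       memory, so the interrupted program's complete state lands
       where a PP can inspect (or later restore) it. */
    void exchange_jump(struct exchange_package *cpu,
                       struct exchange_package *pkg)
    {
        struct exchange_package old = *cpu;  /* capture running program */
        *cpu = *pkg;                         /* start the other program */
        *pkg = old;                          /* old state now in memory */
    }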
     
In my 5 years with Control Data Special Systems,
     doing really unusual time-critical systems programs,
     including modifying PP code,
  I never had to resort to Console Debugging -
        enough other aids were available :-))

In so many ways Cray was way ahead
        of the rest of the world.
 And then he came up with the beautifully designed and sublime
    CRAY 1.
  Picky people might suggest that his associate, Les Davis,
picked up and fixed lots of rough edges in the original designs,
but then again, Seymour could then go roaring off 
to the next mountain to conquer  ;-))

-----------------------------------------------

So many stories 

- - - - - - - - - - - - - - - - - - - - - - -
We were told that the release document for the 6600 
  to the rest of CDC was the 6600 wire list
      - what length wire went from here to there   !!
Tough enough to teach Field Engineers (FEs) and others 
    a machine from logic diagrams, but from wire lists  ??? !!!
               you gotta be kidding
So the FE organization had to make logic diagrams
    and other documents from the wire lists -

- - - - - - - - - - - - - - - - - - - - - - -


There was a 6600 at corporate headquarters for demo
   and benchmark.
Unfortunately, one multiply unit was defective,
   and Field Engineering couldn't fix it -
   so they set the scoreboard to mark it permanently busy.

Unfortunately, this interfered with benchmark performance.
Finally sufficient pressure was applied to Seymour
   to come and fix the machine.

One evening Seymour came from Chippewa Falls,
   swung open the appropriate 6600 frame,
   drew up a chair, sat down (no documents)
   and looked at the wiring of the defective unit for maybe 15 minutes.

He then went to the F.E.'s wire rack of colored twisted pair wires,
   each color representing a different length, signal delay,
   and swapped one color twisted pair for another in the multiply unit,
   and left without a word.

After some confusion (did he fix it?  is he coming back?)
   the F.E.s enabled that multiply unit -
   and the machine was now working perfectly -

- - - - - - - - - - - - - - - - - - - - - - -

The above tales circulated amongst CDC employees :-))

Hardware Adventures -
Seymour Cray was the lead designer, apparently Les Davis filled in the "?details?" ;-))
I (Ed Thelen) programmed CDC 6x00 machines for 5 years for Special Systems. Many days ( well OK, sometimes 2 AM and/or Saturdays ) I got "machine time" where I could assemble and debug my software. During that time I don't remember ever seeing a machine "down" - not working -

Having fixed General Electric Computer Department equipment (including traveling to other sites) I have high regard for the 6x00 reliability. Having fixed hardware, I wanted to meet the folks who were called when something went bad - heck, everything breaks sometimes.

So I would find the room where the CE's hung out, and introduce myself.
StoneWall - the CE's were totally uninterested in:
- somebody who fixed other manufacturer's equipment
- someone who programmed their equipment.
Talk about Cold Shoulder !!!

The following is my first contact (ever) with someone who fixed CDC 6x00 computers ...

Subject: CDC 6600 S/N 1, 2 and 3 from Chippewa Lab
From: James Clark < jhc0239@yahoo.com >
Date: Wed, Apr 24, 2013 7:50 pm

Ed - Appreciate the opportunity to respond to your web site on the 6600. I worked with Seymour at Chippewa Lab on S/N 1 - I'm the guy in the sweater you see in stock photos standing by the console with another engineer. That photo was put in "Datamation" magazine back then.

There were five of us working on S/N 1 - Seymour, Freddie Headerli (French), Bob Moe, myself, and another engineer I have forgotten. I went with S/N 1 to Livermore Lab and stayed until S/N 2 was ready, then went with it to Brookhaven Lab on Long Island, NY, and went back to the Lab.

I asked Seymour if I could take S/N 3 to CERN in Geneva and he gave the go-ahead. I was the EIC (Engineer in Charge) at CERN for the first two years. Pressure was tough from both sides: CDC wanting me to get the 6600 accepted by CERN, and CERN pushing to get the company software to work.

We had a couple hundred software engineers working in California on the "Sipros" system. IT DIDN'T WORK. Seymour ended up using our simplified Maintenance software from Chippewa called "The Chippewa System". That was still in use when I left CERN in 1967.

It was a truly great machine with terrific high speed at the time. I had asked Seymour one time why he didn't use the chip technology that had just come out, and he said he felt it hadn't yet been proved enough for the risk.

He was a real down to earth guy. We were on the same bowling team in Chippewa Falls and would keep score in "Octal" just to confuse the other teams:) He was somewhat shy and didn't like the limelight. When the press came to Chippewa Lab to take photos of S/N 1 6600 - they wanted a photo of him in front of his computer and he said "Jim - why don't you get in front of the console and let them take a pic of you":)
Thus, I'm in the photo.

It was YEARs ago. I ended up being the "Father of the Navy Standard Computer", the AN/UYK-20, when the Navy hired me away from CDC in '69. Now I've been retired for some time, but am active as a Captain in the Coast Guard Auxiliary.

Best to you.
Jim Clark

James H. (Jim) Clark
District Captain Sector Hampton Roads
United States Coast Guard Auxiliary
Department of Homeland Security
jhc0239@yahoo.com
...
"Professionalism promotes Proficiency"

"Cordwood Modules" - MTBF - from Jerry Berg May 30, 2013
A quick comment on the MTBF.

Initially the cordwood modules used precision machined aluminum heat-sinks (face plate). A cost reduction changed to a cast aluminum that had a draft angle where the 6 screws attached the PC board. That angle put pressure on the surrounding solder joints when the screws were tightened.

... The fix was simple; reflow the solder on both sides AFTER tightening the screws. The problem was ensuring that all field units got returned.

...

I recall writing about a 5 page memo describing the problem, root cause, corrective action in the late 60's (probably 67-68). I worked in the Failure Analysis group, Quality Assurance under Geo Hamilton and Bob Olson at Arden Hills.

Jerry Berg

More "Cordwood Modules" - MTBF and other tricks ;-)) - from Dana Roth July 5, 2013
“Cordwood Modules" - MTBF

That explains why CDC 6600, (later serial), Department of Energy, Ottawa, was having a failure 2-3 days after a power loss, from the thermal cycling of the cordwood modules. Up the block, the CDC 6000 early serial (102?) at Computing Devices, Nepean, Ontario would only have one fault a year, no doubt because they had the early machined modules. Thanks for clarifying this, now I’ll sleep at night :-)

For Customer Engineers, knowing the “tricks of the trade" kept things going, like:

  1. Removing the brake belts from the 844’s on day one.
  2. Aging punch cards for 6 months before use, to allow the ink to dry so your card punch would not jam.
  3. Don’t touch the 808 storage for any maintenance until they died.
  4. Change the vacuum tubing to steel-reinforced tubing and use the pre-grooved tape heads on 66x tapes.
  5. Get a tape certifier, clean three times and certify all new tapes, and send 90% of new “garbage” tapes back to IBM/Memorex!
  6. Test your 66X capstan motors as 90% would be damaged in shipping!
  7. Change out those PCB riddled General Electric motor capacitors (the ones with the internal “quality approved” ink stamp, that would break down under heat and explode)
  8. Putting foil on your channel cables for shielding.
  9. And above all, don’t lose your cookbook with the initial machine timing specs!

One piece of software I was always enamored with was the CE maintenance scheduler program. It was a brilliant piece of work that saved lots of dollars in maintenance, which was a cash cow for CDC. It promoted putting your effort into the top critical maintenance elements based on a blend of past knowledge and performance.

Somehow, “Quality” never seemed to correlate real events in the field until you would do something to get noticed like: Sending back and ordering one $5,000 motor a day till it was recognized the glass tach wheel was always getting damaged! CDC, at the time, didn’t promote the culture that allowed a matrix of organizations to solve issues (as this wasn’t an industry standard).

One cultural difference CDC did have over many companies was that from Bill Norris down, everyone was on a first name basis, which did promote many cross organizational ideas and solutions!

Dana Roth

Software Adventures - everyone has software troubles, so let's call 'em adventures
Seymour Cray even developed the first operating system for the CDC 6600 !!
Seymour Cray made an operating system to demo his very unusual CDC 6600. It was rather basic - we called it the "Chippewa Operating System". It was very basic: maybe no time limit, no reserving tapes for a job, ... but it demoed that here indeed was the world's fastest computer, with user memory protection so that multiple programs could be safely "running" at the same time. While one user program was waiting for I/O, another user program was given the main processor for execution.

A summary presentation of the status of the various jobs was presented to the operator as a selectable display on either of the two big round CRTs on the operator's console. The basic plan was 8 horizontal zones:
- 7 for the maximum number of user jobs in the system
- the 8th, bottom, for I/O status, printer 3 out of paper, tape drive 5 rewound, ...

Two operating systems came from this, and used the same basic display form
- Kronos/NOS by Dave Cahlander and Greg Mansfield, used in many colleges for students
- SCOPE, more formal, with more useful features for general purpose work.


A Tale
The CDC 6600 had great user protection; program A could not access program B's memory or resources.
BUT we heard tales that the CDC 6600 in Palo Alto, California was getting crashed !!!
A job would come in from a particular authorized user, with a particular name - maybe "CRASHER", and the whole CDC 6600 would soon be unable to do anything useful.

It turned out that Control Data's operating systems did not provide a maximum per-user allocation for the system disk. The "CRASHER" user figured out this weakness, kept writing to disk until it was filled, and prevented others from using this vital system resource, for such things as spooling to printers ...

This exploited weakness was quickly fixed :-))


From Lew Bornmann March 2012
We initially met at CDC just off E. Weddell Drive (near the Blue Cube) in Sunnyvale.

I was hired just prior to everyone moving from the Porter Drive facility in Palo Alto, so I was essentially the first person working in Sunnyvale. I was one of three managers developing the replacement Scope OS for the 6000 series. When we fell behind on the development schedule, I PERT-charted where we were and what we still had to do, showing there wasn’t any way we could deliver in time to meet contracts – and development was cancelled. What we had was moved back to Minneapolis and relabeled as Scope for the 7000 series. Since I had just moved in from Madison, WI, I requested not to be sent back. They gave me a research project which turned into Query Update and the database product line. I also worked on the distributed OS for the Bank of Switzerland and the PL/1 compiler.

A view of the end of Control Data
- I ( Ed Thelen ) bailed out in 1972, thinking they were off track


Message: 8
Date: Tue, 06 Dec 2011 15:42:58 -0800
From: "Chuck Guzis" < cclist@sydex.com >
To: "General Discussion: On-Topic and Off-Topic Posts"
< cctalk@classiccmp.org >
Subject: Re: The Strange Birth and Long Life of Unix
Message-ID: < 4EDE3802.24413.8376F7@cclist.sydex.com >
Content-Type: text/plain; charset=US-ASCII

On 6 Dec 2011 at 9:52, Al Kossow wrote:

> I was just digging through some CDC documents we just received
> concerning the joint CDC/NCR developments that happened in the early
> 70's, and was thinking how fast the pace of system change is now. The
> system they started on in 1973 was ultimately released almost 10 years
> later as the CYBER 180. By the end of the 80's they were thinking of
> porting Unix to it. I can't imagine anyone taking 10 years today to
> develop a new computer system, or thinking of writing an operating
> system and tool chain from scratch.

To be fair, you have to understand the times and the culture. In
1973, the dominant storage technology was still core. Backplanes
were still done with twisted pair and taper pins. The Cyber 70 line
was mostly a cosmetic rework of the old 60s 6000/7000 series.

The 170 series migrated to ECL ICs instead of discrete transistors
and semiconductor memory. Compared to everything that had gone
before, it was major, even if the same old architecture (6 bit
characters, ones' completement) and instruction set was being
implemented.

The dual-personality Cyber 180 was a major rework of the basic
architecture, even if most CDC customers operated the systems in 60
bit mode to be compatible with the old hardware. While the
6000/7000/Cyber 70/170 systems had a very clean simple RISC design,
the 180 was anything but--sort of the response to the sucker question
"What instructions would you want for product xxx?". For the system
programmers, little nits such as the move to twos complement,
hexadecimal notation and reversing the order of the bit numbering in
a word were just icing.

The times were another factor. Seymour Cray's going off and doing
his thing hurt CDC's high-end sales badly. CDC's fortunes rapidly
declined and the 70s and 80s were marked by layoffs--one co-worker
committed suicide when he realized that having spent his career with
CDC, job prospects were limited at his age.

The NIH mentality of units within CDC hurt a lot. When one of the
Cyber 180 software architects gave a presentation of the 180's
operating system software sometime around 1976, I was furious when
the subject came to paging software. He described in some detail
what he thought the paging should be--simple demand paging. I raised
my hand and asked him if he'd discussed the matter any with members
of CDC's other 64-bit virtual-memory machines already in production.

He looked at me as if I'd just informed him that he had an unknown
twin brother. I told him that STAR had been working with the
technology since 1969 and that demand-paging was going to give him
grief. I suggested that he talk to our pager guy about working-set
paging. I don't think he ever did.

And finally, a lot of the talent had flown the coop. Seymour was
gone and had taken a bunch of key talent with him. Jim Thornton was
consulting and playing with a loop of coax that ran around the
parking lots at Arden Hills. And a lot of other talent had left to
join the early microcomputer scene in California.

It was surprising that CDC lasted as long as it did.

BTW, in 1984, I strongly suggested to Neil Lincoln that ETA adopt
Unix as the OS for the ETA-10. To his credit, he agreed with me, but
failed to convince others. Eventually, ETA did have a Unix port done
by an outside firm, but by then, it was too late for them.

--Chuck


If you have comments or suggestions, send e-mail to Ed Thelen


Updated Sept 8, 2014