Grouped together here as they were application code compatible
(the peripheral processors were not code compatible)
The CDC 6400 was a code-compatible uni-processor designed by Jim Thornton
using CDC 6600 hardware technology, with the 10 peripheral processors.
The CDC 6500 was a dual 6400, with the 10 peripheral processors.
Manufacturer | Control Data Corporation
Identification, ID | CDC-6600, CDC-7600
Date of first manufacture | CDC-6600 - 1964; CDC-7600 - 1969
Number produced | CDC-6600 - 50, as per http://www.newmedianews.com/tech_hist/cdc6600.html
Estimated price or cost | CDC-6600 - $7,000,000, as per CISC of NCAR - understand? ;-)) ; CDC-7600 -
Location in museum | -
Donor | CDC-6600 - Lawrence Livermore Laboratory; CDC-7600 - Lawrence Livermore Laboratory
Contents of this page:
CDC-6600
CDC-7600
working CDC 6500 at Living Computer Museum
CDC-6n00 Module, Standard 4K x 12 bit memory module - photo by Rick Hotmail
Note: the CDC 6600 Brochure - 1963 - from Jitze N Couperus contains many excellent system photos !!
6600 WORD.doc and 7600 WORD.doc by Ron Mak
Special features - 6600
12 bit (register) and 24 bit (memory address in 18 bits) instructions
The above-mentioned Peripheral Processors each had 4096 12-bit words of memory,
and each could access any peripheral channel.
(How they were operated is described under "Special features - CDC 6600 Hardware" below.)
Upon start of execution of the Bulk Move instruction, all other main memory accesses were stopped,
even including the Peripheral Processors, and a 60-bit word would move to or from Bulk Store
every 100 nanoseconds - remember the phased memory system.
The 700 nanosecond max latency and the world-beating transfer speed could be very useful ;-))
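A back-of-envelope check of those figures (my arithmetic, not from CDC documentation; the 100 ns/word rate and the 700 ns latency are taken from the text above):

```python
# Back-of-envelope Bulk Store transfer figures, using the rates quoted
# above: one 60-bit word per 100 ns, after a max 700 ns start-up latency.

WORD_BITS = 60
WORD_PERIOD_NS = 100          # one word transferred every 100 ns
MAX_LATENCY_NS = 700

words_per_second = 1e9 / WORD_PERIOD_NS          # 10 million words/s
bits_per_second = words_per_second * WORD_BITS   # 600 Mbit/s

print(f"{words_per_second:,.0f} words/s = {bits_per_second / 1e6:,.0f} Mbit/s")

# Time to stream a 4096-word block (one PP memory's worth of data):
block_us = (MAX_LATENCY_NS + 4096 * WORD_PERIOD_NS) / 1000
print(f"4096-word block: about {block_us:.0f} microseconds")
```

Sustained, that is roughly 75 megabytes per second of 60-bit words - a remarkable memory-to-memory rate for the mid-1960s.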
Interesting option - from Tom Kelly
> I had heard of a modification for extra memory for Boeing,
It was a standard option. I've looked it up (see the manual reference below):
If the CEJ/MEJ (Central Exchange Jump/Monitor Exchange Jump) option
was installed, then there was a "monitor flag" that controlled the
operation of the XJ (CPU) and MXN (PPU) instructions. The instruction
lists on the covers indicate that the MXN instruction was
"Included in 6700 or those systems having the applicable Standard Options"
The appendix refers to this as "monitor mode" on p. F-5.
When the CPU was running the CPUMTR code,
it ran with the monitor flag set, which prevented the PPs from
interrupting it again.
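A toy model of that gating, as I read the description above (this is a paraphrase for illustration, not CDC hardware logic; in particular the toggle behavior of XJ is simplified):

```python
# Toy model of the CEJ/MEJ "monitor flag" gating described above.
# A PP's MXN (Monitor Exchange Jump) is ignored while the CPU already
# runs with the monitor flag set, so CPUMTR cannot be interrupted by
# another PP. Illustrative only - not actual CDC logic.

class CPU:
    def __init__(self):
        self.monitor_flag = False

    def xj(self):
        # CPU exchange jump: toggles between monitor and user context.
        self.monitor_flag = not self.monitor_flag
        return "entered monitor" if self.monitor_flag else "returned to user job"

    def mxn(self, pp_id):
        # PP-initiated exchange jump: only honored in user mode.
        if self.monitor_flag:
            return f"PP{pp_id}: MXN refused (CPU already in monitor mode)"
        self.monitor_flag = True
        return f"PP{pp_id}: MXN accepted, CPU now running CPUMTR"

cpu = CPU()
print(cpu.mxn(3))   # accepted, monitor flag set
print(cpu.mxn(5))   # refused while CPUMTR runs
print(cpu.xj())     # CPUMTR exits back to the user job
```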
Characters on the monitor
To accommodate the required
high-speed deflection, the medium-power "ultra" high frequency
2C43 was used.
(The small top is the anode, which gets hot and is thermally connected to
a finned, fanned heat sink, at 450 volt max deflection plate voltages.)
There was a circuit to intensify the beam more when the beam was moving rapidly
- as in a long vector.
A Peripheral Processor assigned to drive the monitor display output the commands to
paint characters and other graphics on the monitor tubes.
Software - (Optional) "Time Critical" Operating System
I have a story about Real Time Systems design that my supervisor told me years ago:
At a presentation on Real Time System design methodologies,
during the question-and-answer period, one person complimented the presenter on the talk and methodology,
but commented that it did not seem able to adequately specify the Real Time constraints, especially in that person's specific application.
My supervisor then commented: "That is the difference between Soft Real Time Systems and Hard Real Time Systems."
(Tim Coslet tells the full version of this story below.)
Special features - CDC 6600 Hardware
- specific task dedicated PPs:
  - main CPU and PP scheduling by the "monitor"
  - disk driver
  - monitor driver
  - printer driver, dumping from disk via the disk driver
  - card reader driver, buffering to disk via the disk driver
  - tape driver (after "we" re-wrote it)
- float tasks for the remaining PPs for other tasks
The program in the main CPU could
- request specific tasks, like
  - open input and output files, by placing a request in ?address 100??
  - terminate itself
  - ...
- operate I/O circular queues in main memory;
  the PPs would try to keep these areas full
  or empty, depending on whether the file was
  for input or output (see the sketch below)
That is the general flavor -
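The circular-queue arrangement might be sketched like this (a modern paraphrase; the buffer size and the names are mine - the real system kept pointer words in a file environment table, which I am not reproducing exactly):

```python
# Minimal sketch of the circular I/O buffers described above: the CPU-side
# program reads/writes the ring while PPs refill or drain it in the
# background. Sizes and names are illustrative, not CDC's exact layout.
from collections import deque

class CircularBuffer:
    def __init__(self, size):
        self.q = deque()
        self.size = size

    def pp_fill(self, words):            # PP side: keep an input file full
        for w in words:
            if len(self.q) < self.size:
                self.q.append(w)

    def cpu_read(self):                  # CPU side: consume the next word
        return self.q.popleft() if self.q else None

buf = CircularBuffer(size=64)
buf.pp_fill(range(10))                     # PP stages 10 words from a device
print([buf.cpu_read() for _ in range(3)])  # CPU consumes: [0, 1, 2]
```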
You may remember that "Special Systems" found that
each request for a tape I/O caused
- a PP to be assigned for that task,
- the program for that PP's task to be loaded from disk,
- that one I/O record to be done,
- the PP to drop back into the general pool of PPs
  for re-assignment to anything.
We got excited about the inefficiency and slowness,
and "we" re-wrote it to stay around,
doing tape I/O for as many tasks as were active,
dropping back to the PP pool only if idle for seconds.
There was special provision for moving streams of 60 bit words to/from
a lower cost core memory called "Bulk Store".
These were 6600's. SN 82 and 84, I think, but that is very hazy.
The customer was the Naval Air Development Center, in Johnsville
(or Warminster), PA. They had been modified for real-time use,
with special interfaces to simulation hardware. If I recall
(and I didn't do a lot with this), if you referenced a negative
address, it accessed the simulation hardware. There was a hardware
real-time scheduler.
> but never a "monitor mode" for the CPU.
> Why would someone want to use a "monitor mode"?
"Control Data 6000 Series Computer Systems: Reference Manual",
Pub # 60100000, Revision N, Appendix F. (1972)
I [Ed Thelen] watch the classiccmp.org newsgroup. Recently (June 2016) there
has been a flurry of e-mails with subject "CDC 6600 - Why so awesome?".
Most seem well informed - but - most get confused about writing on the dual monitors.
There are learned discussions about hardware character generators of several types,
but most folks forget that the tubes were electrostatic deflection and optimum for
vector graphics, (and vector character generation ;-)).
To paint the character "A" on the tube (there was no lower case alpha in the 6n00 character set)
took a short series of beam-positioning steps -
easily done with subroutines :-))
"Special Systems" eliminated steps f and h -
Each peripheral processor could execute one 12 bit instruction per microsecond.
Assuming the beam positioning instructions are 24 bits, the PP could paint the A above
in 14 microseconds. Assuming the desired refresh rate is 60 frames per second,
(1,000,000 us/second) / (60 frames/second) is about 16,700 us per frame - call it 16,000 - and
(16,000 us/frame) / (14 us/char) = more than 1,000 characters per frame time.
Assume 40 characters per "control point" and 7 + 1 control points = 320 characters.
Painting the "running jobs display" takes maybe 1/3 of the frame time.
Now - about "Real Time" -
I was with Control Data Corp - Special Systems Division -
1966-1972 and part of our claim to fame was our own version of
"Real Time"
Most manufacturers' version of "Real Time" was
" our equipment is fast enough to handle your problem,
as we see it,
so you will not be inconvenienced"
or something like the above.
And for many commercial applications -
say retail point of sale -
the above was good enough - and really,
how much damage would happen if a clerk/customer
was occasionally delayed a few seconds?
Even the airline reservation process survived:
if SABRE couldn't handle some transaction promptly,
it just threw the transaction on the floor,
the reservation agent re-submitted, and all was OK.
--------------------------
There were other "Real Time" users that were more demanding -
CDC "Real Time" started out when the CDC-6600 was involved with
"hybrid" computing - popular in the 1960s.
There were some functions that an analog computer
was poor at - say complicated function generation,
and the analog run would lose validity if
the digital function generation was delayed.
Control Data's presentation was
"we will guarantee that we can schedule the required
input, processing, and output
every x milliseconds, and schedule other jobs
as background or in other 'Real Time' slots."
We even could guarantee data-logging to specially
constructed circular disk files. These files could be simultaneously
accessed in background for on-line analysis.
Our "gimmick" was the ability to schedule dedicated
Peripheral Processors (PP) to handle the I/O,
a deterministic task
and use the scheduling PP to synchronize the
I/O and schedule the main processor appropriately
after the input is complete and before the
required output time.
And we could run "batch" processing in the background :-))
If the customer's CPU processing took longer than
the customer asked for, there were options:
- grab off background time
  (other "Real Time" jobs would not
  lose their requested/guaranteed time)
- abort (usually used during debug)
In-core values of the moving average and max CPU time were available
to the real-time user.
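Schematically, that guarantee works like the sketch below (entirely my illustration of the policy described above - CDC's actual scheduler lived in a PP and is not reproduced here; the period and budget numbers are made up):

```python
# Schematic of the "Time Critical" scheduling guarantee described above.
# Each cycle of PERIOD_MS reserves BUDGET_MS for the real-time job;
# overruns either borrow from background time or abort the job (the two
# options mentioned in the text). Policy illustration only.

PERIOD_MS = 50      # the "every x milliseconds" the customer asked for
BUDGET_MS = 20      # CPU time guaranteed to the real-time job per period

def run_cycle(rt_time_used_ms, on_overrun="borrow"):
    overrun = max(0, rt_time_used_ms - BUDGET_MS)
    if overrun and on_overrun == "abort":
        return "abort (debug option)"
    background_ms = PERIOD_MS - min(rt_time_used_ms, PERIOD_MS)
    note = f" (borrowed {overrun} ms from background)" if overrun else ""
    return f"real-time {rt_time_used_ms} ms, background {background_ms} ms" + note

print(run_cycle(18))                      # within budget
print(run_cycle(27))                      # overruns, borrows background time
print(run_cycle(27, on_overrun="abort"))  # debug behavior
```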
That niche market eventually expanded to,
say, Grumman test flight evaluation in "Real Time".
Ground flight analysts could interact with "Real Time"
data on their scopes, make "Real Time" decisions to
continue, abort, expand
the prototype test plan depending upon the current situation.
Analysts could even edit/recompile in the background and
then utilize the new program.
Exotic inputs, such as from a laser-ranging theodolite, were
interfaced.
This system was used to aid prototype development of
the F-14 and possibly others later.
-------------------------------------
We thought this very helpful
and to the best of my knowledge no other vendor
before or since could do this version of "Real Time" successfully
in a multiuser environment.
I am certainly open to comments
--Ed Thelen
Tim Coslet has the full version of that story :-))
The presenter asked what their application was, and assured them that it should need only minor changes to handle whatever made the application special.
The questioner then explained "Our application collects data and then transmits the data to another computer for storage and later analysis. The computer that our application runs on is attached to one end of a one meter long steel rod, the other end of this rod is attached to a nuclear bomb that is about to be detonated in a test. If the application has not completed all its tasks before the blast wave destroys the computer then it fails to meet its requirements. How would this time constraint be shown using your methodology?"
The presenter replied "Oh, I'd never thought of that..."
CDC 7600
Historical Notes
A nice history at Charles Babbage Institute.
From http://ei.cs.vt.edu/~history/Parallel.html : Control Data Corporation (CDC) founded.
From http://wotug.ukc.ac.uk/parallel/documents/misc/timeline/timeline.txt :
1960 - Control Data starts development of CDC 6600. (MW: CDC, Cray, CDC 6600)
1964 - Control Data Corporation produces CDC 6600, the world's first commercial supercomputer. (GVW: CDC, Cray, CDC 6600)
1969 - CDC produces CDC 7600 pipelined supercomputer. (GVW: CDC, Cray, CDC 7600)
1972 - Seymour Cray leaves Control Data Corporation, founds Cray Research Inc. (GVW: CDC, CRI)
Les Davis - The Ultimate Team Player, an Oral History
This Artifact
- This unit is serial # 1 from Lawrence Livermore
- WRONG - such as the linkage and control between PPs and CPU.
The person has no clue at all about how the machine worked or how the CPU was controlled:
- phased memory,
- PP-0 by convention controlled the CPU,
- another operated the operator's console,
- the others did I/O, i.e. printing, card reading and punching, disc access, mag tape, serial I/O, ...
Attempts to introduce reality are overwritten. :-((
(Feb 2014) The current version is not quite so bad - but still has really quirky errors.
Some manuals,
a CDC 6600 Brochure - 1963
Table of Contents
a proposed 30 minute sub-tour of CHM
Looking for information.
Two-and-a half (2 1/2) Eye-Blinker stories for you
An intentionally slowed CDC-6400, the CDC-6300
Power, from cctech-request@classiccmp.org with Paul Koning
Just for fun, the Dead Start panel
CDC Cyber Emulator -
spotted by Jim Seay
- and -
Has anyone still got Control Data Cyber deadstart tapes and possibly
matching source tapes? The following would be of great interest for
the Desktop Cyber Emulator project. These tapes deteriorate over time
and if we don't preserve them now they will be lost forever. Even
Syntegra (Control Data's successor) no longer has copies of MACE,
KRONOS and SCOPE.
The following deadstart and source tapes would be great to salvage:
An SMM deadstart tape with matching source would help in fixing the
remaining problems in the emulator.
I can supply a small C program (in source) which will read those tapes
on a UNIX or VMS system and create an image which can be used to
recreate the original tape (fully preserving the physical record
structure and even tape marks).
- - - - - -
There is a popular scientific benchmark, called the Linpack Benchmark, used to
measure the speed at which a particular computer can complete a particular "compute bound"
task. As per
Linpack Benchmark
Jim Humberd suggests here "
that IBM "invented" the 7040/7094 - DCS (Directly Coupled System), in response to my efforts
to sell a CDC 6600 to one of IBM's largest customers. ... The CDC 6600 consisted of a large
central processor, surrounded by 10 peripheral and control processors that were assigned the
tasks of operating the devices connected to the input/output channels, and transferring data
to and from the central processor.
Two of the first silicon bipolar n-p-n transistor products
should go into the historical record book, not only for the
enormous profits they generated which enabled Fairchild
Semiconductor Laboratory to greatly increase its research
and development efforts that led to the rapid introduction
of whole families of volume produced silicon transistors
and integrated circuits, but also for setting the pace on computer
system designs based on the availability of certain
superior transistor performance characteristics, such as
speed and especially reliability.
The origin of the first product was the gold-doped high-speed (16 ns)
switching n-p-n transistor, the 2N706. It was a
smaller mesa (three-times smaller diameter at 5 mil, or an
area of 1.2 x 10^-4 cm^2) and higher speed version of the 2N696
bipolar silicon n-p-n discussed in Section IV-D which had
been marketed by Fairchild in 1960. Gold is a highly efficient
recombination center for electrons and holes. In order to
increase the switching speed, gold was diffused into the
transistor to reduce the minority carrier lifetime and thus
the charge storage time in the base and collector layers of
the 2N706.
Based on this existence proof, Control Data Corporation
awarded Fairchild Semiconductor Laboratory a
$500,000 development contract to produce a still higher
speed silicon transistor switch to meet the first requirement -
the high switching speed (less than three nanoseconds) of the
10-MHz (3 MIPS) CDC-6600 scientific computer [69].
The second requirement was reliability since
there were 600,000 transistors in the CPU. That contract was
followed up by a $5M production contract for 10 million
units of high speed, gold-diffused transistors and 2.5 million
units of high speed, gold-diffused diodes in September 1964.
In fact, the transistor specifications of 3-ns and
high reliability were arrived at by the CDC computer
designers based on the required speed and reliability to
complete a numerical solution of a scientific problem without
interruption from a computer hardware failure [69].
In order to achieve several thousand hours of CPU run-time
without failure, high reliability from the individual silicon
planar transistors was the most critical consideration owing
to the large number of transistors (600,000) used in the CPU
of the CDC-6600. Noyce's monolithic technology has greatly
improved the numerics of reliability today.
For example,
the 600,000 transistors in the CDC-6600 are only about one-half
of the number of transistors contained in a 1-megabit (Mbit)
DRAM chip, which has a projected chip operating life of 10
years and as many as nine or more 1-Mbit chips can be used
in a single personal computer today which rarely experiences
MOS memory failures and whose failures are usually
due to the crash of the mechanical magnetic disk drive.
To
meet both the 3-ns and high-reliability specifications, Fairchild
engineers shrank the circular 16-ns mesa 2N706 transistor down
to a three-finger stripe geometry and used oxide
passivation for stabilization. They also improved the yield
by using an epitaxial layer to control the resistivity. The
result was the 2N709, which met the 3-ns switching time and
high reliability requirements. It gave a 2000 CPU-hour operating
time before a transistor failed.
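As a rough consistency check on those numbers (my arithmetic, assuming independent, constant failure rates - a textbook simplification, not the paper's analysis):

```python
# Rough reliability arithmetic for the figures above: 600,000 transistors
# in the CPU and ~2000 CPU-hours between transistor failures. Assumes
# independent, constant (exponential) failure rates.

N_TRANSISTORS = 600_000
CPU_MTBF_HOURS = 2_000

# The system failure rate is the sum of the per-device rates, so:
per_device_rate = 1 / (CPU_MTBF_HOURS * N_TRANSISTORS)   # failures/hour
fit = per_device_rate * 1e9                              # failures per 10^9 hours

print(f"per-transistor rate: {per_device_rate:.2e}/hour = {fit:.2f} FIT")
# => each 2N709 had to achieve roughly a billion hours MTBF (~0.8 FIT),
#    an extraordinary requirement for 1964.
```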
This was a very large
development and production contract for the design and
delivery of only one transistor type - by comparison, it took
only about $250,000 to start a silicon transistor manufacturing
company in 1960. High speed and high reliability of
the 2N709 met the critical requirements that made the first
scientific computer possible.
....
Hardware Adventures -
Having fixed General Electric Computer Department equipment (including traveling to other sites),
I have high regard for the 6x00 reliability. Having fixed hardware, I wanted to meet
the folks who were called when something went bad - heck, everything breaks sometimes.
So I would find the room where the CE's hung out, and introduce myself.
The following is my first contact (ever) with someone who fixed CDC 6x00 computers ...
Ed - Appreciate the opportunity to respond to your web site on the 6600. I worked with
Seymour at Chippewa Lab on S/N 1 - I'm the guy in the sweater you see in stock photos
standing by the console with another engineer. That photo was put in "Datamation" magazine
back then.
There were five of us working on S/N 1 - Seymour, Freddie Headerli (French), Bob Moe,
myself, and another engineer I have forgotten. I went with S/N 1 to Livermore Lab and stayed
until S/N 2 was ready, went with it to Brookhaven Lab on Long Island, NY, and then went back to the Lab.
I asked Seymour if I could take S/N 3 to CERN in Geneva and he gave the go-ahead. I was
the EIC (Engineer in Charge) at CERN for the first two years. Pressure was tough from both
CDC wanting me to get the 6600 accepted by CERN and CERN was pushing to get the
company software to work.
We had a couple hundred software engineers working in California on the "Sipros" system.
IT DIDN'T WORK. Seymour ended up using our simplified Maintenance software from Chippewa
called "The Chippewa System". That was still in use when I left CERN in 1967.
It was a truly great machine with terrific high speed for the time. I had asked Seymour
one time why he didn't use chip technology that had just come out and he said he felt
it hadn't yet been proved enough for the risk.
He was a real down-to-earth guy. We were on the same bowling team in Chippewa Falls
and would keep score in "Octal" just to confuse the other teams :) He was somewhat shy
and didn't like the limelight. When the press came to Chippewa Lab to take photos
of S/N 1 6600 - they wanted a photo of him in front of his computer and he said
"Jim - why don't you get in front of the console and let them take a pic of you":)
It was YEARS ago. I ended up being the "Father of the Navy Standard Computer",
the AN/UYK-20, when the Navy hired me away from CDC in '69. Now I've been retired
for some time, but am active as a Captain in the Coast Guard Auxiliary.
Best to you.
James H. (Jim) Clark
"Cordwood Modules" - MTBF - from Jerry Berg May 30, 2013
Initially the cordwood modules used precision machined aluminum heat-sinks (face plate).
A cost reduction changed this to cast aluminum, which had a draft angle where the 6 screws attached the PC board.
That angle put pressure on the surrounding solder joints when the screws were tightened.
... The fix was simple; reflow the solder on both sides AFTER tightening the screws. The problem was ensuring
that all field units got returned.
...
I recall writing a roughly 5-page memo describing the problem, root cause, and corrective action in the late '60s (probably '67-'68).
I worked in the Failure Analysis group, Quality Assurance under Geo Hamilton and Bob Olson at Arden Hills.
Jerry Berg
More "Cordwood Modules" - MTBF and other tricks ;-))
- from Dana Roth July 5, 2013
That explains why the CDC 6600 (later serial) at the Department of Energy, Ottawa, was having a failure 2-3 days
after a power loss, from the thermal cycling of the cordwood modules. Up the block, the CDC 6000 early
serial (102?) at Computing Devices, Nepean, Ontario would have only one fault a year, no doubt because
they had the early machined modules. Thanks for clarifying this, now I'll sleep at night :-)
For Customer Engineers, knowing the "tricks of the trade" kept things going, like:
One piece of software I was always enamored with was the CE maintenance scheduler program.
It was a brilliant piece of work that saved lots of dollars in maintenance, which was a cash
cow for CDC. It promoted putting your effort into the top critical maintenance elements based
on a blend of past knowledge and performance.
Somehow, "Quality" never seemed to correlate real events in the field until you would do something
to get noticed, like sending back and ordering one $5,000 motor a day till it was recognized that the
glass tach wheel was always getting damaged! CDC, at the time, didn't promote the culture that
allowed a matrix of organizations to solve issues (as this wasn't an industry standard).
One
cultural difference CDC did have over many companies was that from Bill Norris down, everyone
was on a first name basis, which did promote many cross organizational ideas and solutions!
Dana Roth
Software Adventures - everyone has software troubles, so let's call 'em adventures
A summary presentation of the status of the various jobs was presented to the operator
as a selectable display on either of the two big round CRTs on the operator's console.
The basic plan was 8 horizontal zones.
Two operating systems came from this, and both used the same basic display form.
A Tale
It turned out that Control Data's operating systems did not provide a maximum
per-user allocation for the system disk.
The "CRASHER" user figured out this weakness, kept writing to disk until it was filled,
and prevented others from using this vital system resource, e.g. for spooling to printers ...
This exploited weakness was quickly fixed :-)) (a sketch of the kind of fix appears below)
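A sketch of the kind of per-user quota check whose absence was exploited (entirely illustrative - the names, the track-based accounting, and the limit are mine, not SCOPE/KRONOS code):

```python
# Sketch of a per-user disk quota check, the kind of fix implied above.
# Track-based accounting and the limit are illustrative assumptions.

MAX_TRACKS_PER_USER = 500

usage = {}   # user -> tracks currently allocated on the system disk

def allocate_track(user):
    if usage.get(user, 0) >= MAX_TRACKS_PER_USER:
        raise PermissionError(f"{user}: disk quota exceeded")
    usage[user] = usage.get(user, 0) + 1

for _ in range(500):
    allocate_track("CRASHER")          # fills only his own allowance
try:
    allocate_track("CRASHER")
except PermissionError as e:
    print(e)                           # the quota stops the disk-full attack
```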
From Lew Bornmann March 2012
I was hired just prior to everyone moving from the Porter Drive facility in Palo Alto, so essentially I
was the first person working in Sunnyvale. I was one of three managers developing the replacement Scope OS for the 6000 series.
When we fell behind on the development schedule, I PERT-charted where we were and what we still had to do, showing there wasn't
any way we could deliver in time to meet contracts - and development was cancelled. What we had was moved back to Minneapolis and
relabeled as Scope for the 7000 series. Since I had just moved in from Madison, WI, I requested not to be sent back. They gave me
a research project which turned into Query Update and the database product line. I also worked on the distributed OS for the
Bank of Switzerland and the PL/1 compiler.
"Lifted" from linkedin.com
e-mail from Eugene Miya - Sept 5, 2018
The CPU register set in the 66/7600 machines consisted of
eight 60-bit 'X' (operand) registers, eight 18-bit 'A' (address) registers,
and eight 18-bit 'B' (index) registers.
One loaded/stored data into/from an 'X' register by setting a value
in the corresponding 'A' register.
To make the problem more difficult, only five of the 'X' registers
could be loaded that way, and only two could be stored (see below).
In addition, because the machines had multiple functional units
capable of running concurrently, the compiler tried to minimize
computation time by overlapping the execution of the instructions.
Given all the above, I did not have fun writing the register assigner
for the machines. It was a total exercise in frustration.
BTW, the HW designers justified (rationalized) their divvying up of
the 'X' registers into 5 load and 2 store registers based on studies
of scientific codes in some idealized situation.
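A sketch of that A/X coupling (the five-load/two-store split is from the discussion above; the code is my simplified illustration, not a full CPU model):

```python
# Illustration of the 6600/7600 A/X register coupling described above:
# setting A1..A5 loads the corresponding X register from memory; setting
# A6..A7 stores the corresponding X register to memory. A0/X0 were not
# memory-coupled. Simplified model, not a CPU simulation.

class Registers:
    def __init__(self, memory):
        self.memory = memory            # dict: address -> 60-bit word
        self.A = [0] * 8                # 18-bit address registers
        self.X = [0] * 8                # 60-bit operand registers

    def set_a(self, i, address):
        self.A[i] = address
        if 1 <= i <= 5:                 # the five "load" registers
            self.X[i] = self.memory.get(address, 0)
        elif i >= 6:                    # the two "store" registers
            self.memory[address] = self.X[i]

mem = {0o100: 42}
regs = Registers(mem)
regs.set_a(1, 0o100)          # implicit load:  X1 <- mem[100B]
regs.X[6] = regs.X[1] + 1     # compute in the X registers
regs.set_a(6, 0o101)          # implicit store: mem[101B] <- X6
print(regs.X[1], mem[0o101])  # 42 43
```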
The real problem with the 66/7600 series was the 60 bit word length.
The other problem with it, and with many other early machines, was its
limited address space, which eventually killed it.
A view of the end of Control Data
If you have comments or suggestions, Send e-mail to Ed Thelen
Updated April 13, 2019