*** Please note, this page (and web site) are in early development.
Items are certainly not complete, and may be inaccurate.
Your information, comments, corrections, etc. are eagerly requested.
Send e-mail to Ed Thelen.
Please include the URL under discussion. Thank you ***
|Manufacturer ||INTEL Super Computers |
|Identification, ID ||Paragon |
|Date of first manufacture ||1992 |
|Number produced ||? |
|Estimated price or cost ||- |
|Location in museum ||- |
Contents of this page:
Massively parallel, built on Intel's 32-bit 80860 (i860) RISC chip, performing at 60 MFLOPS peak
MIMD stands for Multiple Instruction, Multiple Data
The Intel Paragon XP
Machine type: RISC-based distributed-memory multi-processor.
Models: Paragon XP/S, XP/E
Operating systems: OSF/1, SUNMOS.
Connection structure: 2-D mesh (torus).
Compilers: Fortran 77, ADA.
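The connection structure listed above is a 2-D mesh. On mesh machines of this class, messages typically travel dimension-ordered: first along the X axis, then along Y. The sketch below is a hypothetical illustration of that idea in Python, not Intel's actual wormhole router:

```python
def xy_route(src, dst):
    """Dimension-ordered (XY) routing on a 2-D mesh.

    Returns the list of (x, y) nodes a message visits travelling
    from src to dst, correcting the X coordinate first, then Y.
    """
    x, y = src
    path = [(x, y)]
    step = 1 if dst[0] > x else -1
    while x != dst[0]:          # move along the X dimension first
        x += step
        path.append((x, y))
    step = 1 if dst[1] > y else -1
    while y != dst[1]:          # then along the Y dimension
        y += step
        path.append((x, y))
    return path

# The hop count equals the Manhattan distance between the two nodes.
route = xy_route((0, 0), (3, 2))
```

On a 16x16 machine this gives at most 30 hops between the two farthest corners, which is why per-hop latency matters so much less on a mesh than raw link bandwidth suggests.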
|. ||Paragon XP/S ||Paragon XP/E |
|Clock cycle ||20 ns ||20 ns |
|Theor. peak performance, per proc. (64-bit) ||0.075 Gflop/s ||0.075 Gflop/s |
|Theor. peak performance, maximal ||300.0 Gflop/s ||2.1 Gflop/s |
|Memory per node (max) ||128 MB ||128 MB |
|Memory, maximal ||128 GB ||4.5 GB |
|Communication bandwidth ||200 MB/s ||200 MB/s |
|No. of processors ||64-4000 ||4-32 |
Oliver A. McBryan
In late 1992, Intel shipped a commercial version of the DELTA, called Paragon.
The Paragon uses the same rectangular grid structure as the DELTA, but with faster
processing nodes. The system is designed with scalability in mind from the outset.
Initial systems are designed to have up to 2048 nodes with a peak rate of 300 Gflops.
The largest systems will have 128 GBytes of main memory with an aggregate bandwidth
of 500 GByte/sec, and access to over 1 TByte of internal disk
with an aggregate bandwidth of 6.4 GByte/sec. Communication bandwidth between
nodes is 200 MBytes/sec, full duplex.
Paragon software plans indicate a substantial divergence from previous Intel systems.
The basic software environment is the same as on the iPSC/2: a library of message-passing
routines. However, the Paragon also supports a full UNIX (Mach) kernel on
each node, along with node-level virtual memory. Finally, Intel has indicated that
a virtual shared memory capability will also be available across nodes.
The Paragon node contains two identical Intel i860XP processors, an improved
50MHz version of the i860 used in previous Intel systems. This processor has
peak rates of 75 Mflops (64-bit) and 42 MIPS, and can support from 16 to 128 MBytes
of memory with a 400 MByte/sec processor-memory bandwidth and an 800 MByte/sec
processor-cache bandwidth. The second processor on a node is dedicated entirely
to communications processing.
Paragon nodes are organized into three partitions: the Compute partition, the Service
partition and the I/O partition. Parallel applications and a UNIX micro-kernel reside on the Compute partition. The Compute partition can be subdivided into subpartitions allocated
to either interactive or batch processing, and there may be any number of each kind.
Partition sizes and shapes may be changed at any time. Batch processing is provided
through the standard NQS system. The Service partition provides full operating system
facilities such as shells, editors and compilers. This partition can grow or shrink
while the system is running, according to user needs. Compute partition and
Service partition nodes are identical, allowing repartitioning between these
partitions at any time.
The I/O partition provides disk, tape and network connections. I/O nodes include
SCSI nodes for disks and tapes, VME nodes for specialized devices, and HiPPI
nodes for connection to disk arrays
and frame buffers. These nodes can also be used as Service partition nodes,
but are never allocated to the Compute partition. By increasing the I/O partition
size as the system grows, I/O capabilities can scale to match the computational
capabilities. Applications can make use of both UNIX OSF/1 facilities and Intel NX/2
operating system facilities for interaction between nodes, or with Service partition nodes.
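The partition scheme described above can be sketched as a toy allocator. Everything below (the function name, the vertical-slab layout) is a hypothetical simplification, far simpler than Intel's real partition management, but it shows the idea of carving one mesh into disjoint groups of nodes:

```python
def allocate_partitions(cols, rows, widths):
    """Toy partitioner: split a cols x rows mesh into vertical slabs.

    widths maps a partition name to the number of columns it gets;
    returns {name: set of (x, y) node coordinates}.  Loosely in the
    spirit of the Paragon's Compute/Service/I/O partitions.
    """
    assert sum(widths.values()) == cols, "slab widths must cover the mesh"
    parts, x0 = {}, 0
    for name, w in widths.items():
        parts[name] = {(x, y) for x in range(x0, x0 + w)
                       for y in range(rows)}
        x0 += w
    return parts

# A 16x16 machine split into hypothetical compute/service/I/O slabs.
parts = allocate_partitions(16, 16,
                            {"compute": 13, "service": 2, "io": 1})
```

Because the slabs are disjoint sets of identical nodes, "repartitioning" is just moving columns between entries of the widths map, which is the flexibility the text describes for Compute and Service nodes.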
About the Intel Paragon Computer
by Dave Turner (email@example.com)
Last year, an Intel Paragon supercomputer joined the computing
resources in the Computation Center's machine room. This
256-node Paragon was originally part of the 1024-node machine at
Oak Ridge National Lab that was at one time the world's fastest
computer. It was decommissioned at the end of April 1999, and
Ames Lab was able to pick it up for the price of shipping and
installation. For $14,000, we picked up a machine that would have
cost us $5,000,000 just 5 years ago. ISU agreed to house it in the
basement of the Durham Center, and both ISU and Ames Lab
researchers are using this machine.
The Paragon has 16 rows and 16 columns of nodes, with a 2D mesh
connecting them. Most nodes have 64 MB of memory, of which 7 MB is used
by the OS (OSF/1). 64 of the nodes have 128 MB of memory.
Each node actually has 3 processors. One handles communications
at all times, while the other two are for computations. The
processors are 75 MHz i860s, which run at 5-20 MFlops or more
depending on the code. The communication rate tops out at around
130 MB/sec for 1 MB messages, with a latency of around 100 microseconds.
The 512-compute-node Intel Paragon, the CSCC's latest major computational resource, has a
peak speed of 38.4 gigaflops and 67.2 gigabytes of online disk space. The flagship Paragon also has
14 RAIDs (Redundant Arrays of Inexpensive Disks), one Ethernet node, two HIPPI I/O nodes, and
five service nodes. As currently configured, all the compute nodes have 32 megabytes of memory.
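The quoted peak speed is consistent with the per-processor figure given elsewhere on this page (75 Mflops for the i860XP application processor). A quick arithmetic check:

```python
nodes = 512                      # compute nodes in this machine
mflops_per_node = 75             # one i860XP application processor per node
peak_gflops = nodes * mflops_per_node / 1000.0
# 512 * 75 MFLOPS = 38,400 MFLOPS = 38.4 GFLOPS, matching the quoted peak
```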
Trex was delivered to Caltech at 9:00 am on December 10, 1993. The Intel installation crew had all
nine cabinets wired and the machine booted by 6:00 pm that evening. Intel turned it over to
Caltech for testing the following Monday. A week and a half later, trex successfully passed the
production phase of its acceptance testing, which involved running a suite of 12 programs (many
submitted by Delta users). It ran more than 26 consecutive hours without a reboot, easily satisfying
the official 19-hour requirement for the production phase. Currently in progress is the interactive,
multi-user, development phase of acceptance testing. As soon as trex completes acceptance testing,
this machine will become available for production use. In the interim, a few "friendly users" from
the CSCC community will be given a chance to experiment on trex.
Updated April 30, 2000