CHECS members use a wide variety of computing resources to carry out their research. These resources range from a rapidly growing collection of the latest processors, network cards, switches, and storage devices to high-end platforms used for production scientific computing, large-scale scalability analyses, and collaboration with VT and external researchers. Below we give further details about some of our largest or most interesting computing facilities. We group these facilities into "experimental" and "production", corresponding roughly to whether we have (and exploit) root access to the resource.
Experimental Computing Facilities
System G. The System G cluster consists of 324 Mac Pros, each with two 4-core 2.8 GHz Intel Xeon processors (2,592 processor cores in total) and 8 GB of RAM. It was the first supercomputer to run over quad data rate (QDR) InfiniBand (40 Gb/s) interconnect technology. System G (for "green") also has unique power-aware capabilities: thousands of power and thermal sensors allow CHECS researchers to design and develop algorithms and systems software that achieve high performance with modest power requirements, and to test such systems at unprecedented scale. With a sustained (Linpack) performance of 22.8 TFlops, System G is the largest power-aware research system and one of the largest computer science systems research clusters in the world.
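As a back-of-the-envelope check on these figures, the sketch below derives System G's core count and theoretical peak, and the Linpack efficiency implied by the 22.8 TFlops sustained number. The 4 double-precision FLOPs per core per cycle figure is our assumption (typical for SSE-era Xeons), not something stated above.

```python
# Sanity-check of the System G figures quoted in the text.
NODES = 324
SOCKETS_PER_NODE = 2
CORES_PER_SOCKET = 4
CLOCK_GHZ = 2.8
FLOPS_PER_CYCLE = 4  # assumed: 2 SSE units x 2 double-precision FLOPs per cycle

cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET          # 2592
peak_tflops = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0   # theoretical peak
efficiency = 22.8 / peak_tflops                              # sustained / peak

print(cores, round(peak_tflops, 1), round(efficiency * 100))  # 2592 29.0 79
```

Under that assumption, the 22.8 TFlops sustained figure corresponds to roughly 79% of theoretical peak, a plausible Linpack efficiency for a QDR InfiniBand cluster of this era.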
Navajo. This shared-memory machine has 64 cores and 256 GB of RAM in total, built from four sixteen-core 2.3 GHz AMD Opteron 6267 processors.
Ojibwa. A shared-memory Sun Fire X4600 M2 server with 8 nodes, 32 cores, 64 GB of memory, and 584 GB of disk space.
ICE. The SyNeRGy lab maintains the 9-node (36-core) ICE cluster, in which each node has two dual-core AMD Opteron 2218 CPUs. It is used primarily for research in power-aware computing and high-performance networking.
Production Computing Facilities
HokieSpeed. CHECS works closely with Advanced Research Computing (ARC) at Virginia Tech. The most powerful system available through ARC is HokieSpeed, a 209-node CPU/GPU cluster. The team that designed and deployed HokieSpeed was led by Dr. Wu Feng. Each node contains two 2.40 GHz 6-core Intel Xeon E5645 CPUs and two 448-core NVIDIA 2050 GPUs, for a total of more than 2,500 CPU cores and more than 185,000 GPU cores. The system has an aggregate 4,992 GB of memory and is interconnected with quad data rate (QDR) InfiniBand. It made its debut in November 2011 as the 11th-ranked system on the Green500 list.
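The aggregate figures quoted above follow directly from the per-node configuration; the short sketch below reproduces them.

```python
# Deriving HokieSpeed's aggregate figures from its per-node configuration.
NODES = 209
CPU_CORES_PER_NODE = 2 * 6     # two 6-core Xeon E5645 CPUs per node
GPU_CORES_PER_NODE = 2 * 448   # two 448-core NVIDIA 2050 GPUs per node
TOTAL_MEMORY_GB = 4992

cpu_cores = NODES * CPU_CORES_PER_NODE    # "more than 2,500 CPU cores"
gpu_cores = NODES * GPU_CORES_PER_NODE    # "more than 185,000 GPU cores"
mem_per_node_gb = TOTAL_MEMORY_GB / NODES # roughly 24 GB per node

print(cpu_cores, gpu_cores, round(mem_per_node_gb, 1))  # 2508 187264 23.9
```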
BlueRidge (coming September 2012). A 318-node (5,088-core) cluster built from 2.6 GHz Intel Sandy Bridge CPUs, with 64 GB of RAM per 16-core node (20.4 TB aggregate memory) and a QDR InfiniBand interconnect.
HokieOne is a shared-memory SGI UV system. It has 492 2.66 GHz Intel Xeon cores (82 sockets across 41 blades) with 2.62 TB of memory (5.3 GB/core).
Athena is a cluster system with GPUs and a large RAM footprint. It has 42 quad-socket nodes built from eight-core 2.3 GHz AMD "Magny-Cours" Opteron processors (1,344 cores in total), with 64 GB of RAM each (12.4 TFLOP peak). Sixteen of the nodes also share access to 8 NVIDIA S2050 Fermi (quad-GPU) units with 6 GB of memory. The nodes are connected via QDR InfiniBand.
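The quoted 12.4 TFLOP peak for Athena's CPUs can be reconstructed from the node configuration, if one assumes 4 double-precision FLOPs per core per cycle (typical for this AMD generation; the text does not state it).

```python
# Reproducing Athena's quoted 12.4 TFLOP CPU peak from its configuration.
NODES = 42
SOCKETS_PER_NODE = 4
CORES_PER_SOCKET = 8
CLOCK_GHZ = 2.3
FLOPS_PER_CYCLE = 4  # assumed double-precision FLOPs per core per cycle

cores = NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET        # 1344
peak_tflops = cores * CLOCK_GHZ * FLOPS_PER_CYCLE / 1000.0

print(cores, round(peak_tflops, 1))  # 1344 12.4
```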
Ithaca is an IBM iDataPlex system, partitioned to provide a parallel MATLAB compute cluster, a general research compute cluster, and resources for other projects. Ithaca has 84 nodes (672 cores), of which 66 are available for general use. Each node has two quad-core 2.26 GHz Intel Nehalem processors. Ten nodes have 48 GB of memory and the remainder have 24 GB. Nodes are connected via QDR InfiniBand.