- 2.0 GHz Intel Haswell processors across all nodes
- InfiniBand FDR interconnect (56 Gbps)
- 97 TFLOPS actual performance (Rmax)
- 88 standard compute nodes (2464 total cores; 128 GB RAM per node)
- 28 mid-tier compute nodes (784 total cores; 512 GB RAM per node)
- 4 large memory nodes (112 total cores; 1.28 TB RAM per node)
- 5 GPU nodes with NVIDIA Tesla K80 GPUs
- 1 Xeon Phi node with 2 Knights Corner coprocessors
- 350 TB Scratch Space
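The node mix above shapes how jobs are requested. As a sketch only (assuming a PBS-style scheduler, which this page does not specify; the directive names, queue defaults, and script paths here are hypothetical, so check with the CRI HPC administrator before submitting), a script targeting one of the large-memory nodes might look like:

```shell
#!/bin/bash
# Hypothetical PBS-style job script (scheduler and directives assumed,
# not confirmed by this page). Targets a large-memory node: the spec
# list above implies 28 cores per node (112 cores / 4 nodes) and
# 1.28 TB of RAM.
#PBS -N large_mem_job
#PBS -l nodes=1:ppn=28        # one node, all 28 of its cores
#PBS -l mem=1000gb            # stay within the 1.28 TB on a large memory node
#PBS -l walltime=24:00:00
#PBS -j oe                    # merge stdout and stderr into one log

cd "$PBS_O_WORKDIR"           # run from the directory the job was submitted from
./my_pipeline --input data/   # placeholder for the actual workload
```

Smaller jobs would instead fit on the standard compute nodes (128 GB RAM each), leaving the four large-memory nodes free for pipelines that genuinely need them.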
WHAT MAKES GARDNER UNIQUE?
There are multiple options, both on and off campus, for high-performance computing. Gardner stands out among them for several reasons:
- A HIPAA-compliant environment appropriate for analyzing patient data
- Five software stacks built using both open source and commercial compilers
- Separate software stacks for basic science and clinical research
- GPU versions of software commonly used in the life sciences
- The ability to handle data-intensive pipelines that require up to 1.4 TB of memory
- An experienced HPC administrator to help you one-on-one with optimizing your jobs, installations, and more
Early access accounts for Gardner are currently being provisioned.
FREQUENTLY ASKED QUESTIONS
Q: What credentials do I need to use the Gardner cluster?
A: A BSD account or collaborator account is required to request access to CRI resources. Learn more about BSD accounts and passwords here.
Q: What software is available for me to use on Gardner?
A: A current list of software installed on Gardner is available here.
Q: Are those working with basic science (non-human) data required to complete HIPAA training in order to use Gardner?
A: No, HIPAA training is not required for scientists working with basic science data.
Q: How can I get technical help with Gardner?
Q: Why is your cluster named Gardner?
A: Martin Gardner, a University of Chicago graduate (SB 1936), was a popular mathematics and science author, magician, and puzzle enthusiast. His column “Mathematical Games” ran in Scientific American for 25 years. Our cluster’s name honors Gardner’s contributions to recreational mathematics and his lifelong work of making mathematics accessible and interesting for millions.
MEET OUR HPC ADMINISTRATOR
Mike Jarsulic is a graduate of the University of Pittsburgh and has been with the CRI since September 2012. He previously worked at the Bettis Atomic Power Laboratory in West Mifflin, PA, where he was a scientific programmer focused on modernizing thermal/hydraulic design software before moving into High Performance Computing. Mike is well-versed in a variety of programming languages including Fortran, C/C++, Java, and Perl. His other interests in HPC include distributed memory programming, compiler optimizations, and long-term reproducibility of results.