System Overview

The SRDC Platform provides the data storage, data management, and computation needed for compute-intensive research on large and sensitive data. SRDC features dedicated infrastructure, housed in the secure campus data center, as well as personnel who support research and enforce the necessary privacy and security policies and procedures.

Technology & Equipment

  • High performance computing, accessible to researchers under their Faculty Computing Allowance. Researchers may purchase additional HPC nodes to add to the system.
  • Windows and Linux virtual machines, accessible to researchers under their Faculty Computing Allowance. Researchers may purchase additional virtual machine hardware to add to the system.
  • A dedicated, large-scale, and high performance storage system. Researchers may purchase additional storage to add to the system.

Compute Overview

The SRDC cluster offers a compute pool of 40 nodes, each featuring Intel Xeon Gold 6230 CPUs with 20 cores running at 2.1 GHz, 364 GB of RAM, and support for 16 floating-point operations per clock cycle. Jobs are submitted to these nodes under the partition name “srdc”.
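
The section above does not name the batch scheduler, but the partition terminology suggests a Slurm-style system. As a minimal sketch under that assumption (the job name, script path, resource values, and the my_analysis executable are illustrative placeholders, not SRDC-specific settings), a job could be submitted to the “srdc” partition like this:

```python
# Minimal sketch: write a batch script targeting the "srdc" partition and hand
# it to the scheduler with sbatch. Assumes a Slurm scheduler is in use; all
# names and resource values below are placeholders.
import subprocess
import textwrap

# --ntasks-per-node matches the 20 cores per node described above.
batch_script = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=srdc_example
    #SBATCH --partition=srdc
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=20
    #SBATCH --time=01:00:00
    srun ./my_analysis
""")

with open("srdc_job.sh", "w") as handle:
    handle.write(batch_script)

# sbatch prints the assigned job ID on success.
result = subprocess.run(["sbatch", "srdc_job.sh"], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```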

GPUs on SRDC

In addition to the CPU-only nodes, the SRDC cluster features 48 GeForce GTX 1080 Ti GPUs distributed across 12 nodes. Each node contains 8 CPU cores and 4 GPUs, with 64 GB of CPU RAM per node and 11 GB of GPU RAM per GPU. SRDC’s GPUs allow for concurrent operations, memory mapping, and coordinated kernel launches.
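
The concurrent-operations capability can be exercised with CUDA streams. The sketch below assumes PyTorch is installed on the GPU nodes, which is not stated above, so treat it as an illustration rather than a supported workflow:

```python
# Sketch: overlap two independent GPU computations using CUDA streams.
# Assumes a node with at least one GTX 1080 Ti and a PyTorch installation,
# neither of which is guaranteed by the text above.
import torch

assert torch.cuda.is_available(), "run this on a GPU node (srdc_GTX1080TI partition)"

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

stream1 = torch.cuda.Stream()
stream2 = torch.cuda.Stream()

# Launch two independent matrix multiplications on separate streams so the
# GPU is free to execute them concurrently.
with torch.cuda.stream(stream1):
    c1 = a @ b
with torch.cuda.stream(stream2):
    c2 = b @ a

torch.cuda.synchronize()  # wait for both streams to finish before reading results
print(c1.shape, c2.shape)
```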

SRDC Hardware Configuration

| Partition | Nodes | Node List | CPU Model | # Cores / Node | Memory / Node | Infiniband | Specialty | Scheduler Allocation |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| srdc | 40 | n00[00-39].srdc.srdc0 | Intel Xeon Gold 6230 | 20 | 364 GB | 4x EDR | - | By Node |
| srdc_GTX1080TI | 12 | n00[40-51].srdc.srdc0 | Intel Xeon E5-2623 | 8 | 64 GB | 4x FDR | 4x GeForce GTX 1080 Ti per node (11 GB of GPU RAM per GPU; 48 GPUs total) | By Node |
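
The partition names in the table above map directly onto scheduler requests. As a hedged sketch, again assuming Slurm and using placeholder names (the GRES label "gpu" and the wrapped executable are assumptions, not confirmed SRDC settings), a full GPU node could be requested as follows:

```python
# Sketch: request one full node on the srdc_GTX1080TI partition (allocation is
# by node, so ask for all 8 cores and all 4 GPUs). Assumes Slurm; the GRES name
# "gpu" and the wrapped command are placeholders.
import subprocess

command = [
    "sbatch",
    "--job-name=srdc_gpu_example",
    "--partition=srdc_GTX1080TI",
    "--nodes=1",
    "--ntasks-per-node=8",
    "--gres=gpu:4",
    "--time=02:00:00",
    "--wrap", "srun ./my_gpu_training",  # hypothetical executable
]
print(subprocess.run(command, capture_output=True, text=True).stdout.strip())
```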

Storage on SRDC

Storage allocations for project data in both the Linux and Windows SRDC environments are sized based on project needs and available storage.

In Linux SRDC environments, users can utilize working scratch space, subject to a 12 TB quota and a purge policy. Larger storage allocations may be available via MOU. Research groups that run up against space limitations can expand their storage on the GPFS file system; as of December 2023, the cost of expansion is $25.8K per 100 TB usable.
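
As a rough way to track usage against the scratch quota, the sketch below walks a directory tree and totals file sizes. The scratch path is a hypothetical placeholder, since the section above does not give the mount point:

```python
# Sketch: total the bytes under a scratch directory and compare against the
# 12 TB quota described above. The SCRATCH path is a placeholder; substitute
# the actual scratch location for your allocation.
import os

SCRATCH = os.path.expanduser("~/scratch")  # hypothetical path
QUOTA_TB = 12

total_bytes = 0
for root, _dirs, files in os.walk(SCRATCH):
    for name in files:
        try:
            total_bytes += os.lstat(os.path.join(root, name)).st_size
        except OSError:
            pass  # file vanished mid-scan (e.g., removed by the purge policy)

print(f"Using {total_bytes / 1e12:.2f} TB of the {QUOTA_TB} TB scratch quota")
```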

Please contact us at brc@berkeley.edu for a consultation.

Platform Support & Research Facilitation

The SRDC is managed by system administrators and SRDC consultants, who monitor the OS and software, review security requirements and compliance, identify computational workflows, and help onboard new users. The Information Security Office works closely with SRDC staff to ensure security through monitoring and intrusion detection.