Overview
Summary
This page provides an overview of how to use the Savio and CGRL clusters. For details, please follow the links below or in the sidebar.
Passwords
You'll need to generate and enter a one-time password each time that you log in. You'll use an application called Google Authenticator to generate these passwords, which you can install and run on your smartphone and/or tablet. For instructions on setting up and using Google Authenticator, see Logging into the BRC Clusters.
Logging in
You'll use your favorite SSH client program to log into the cluster via hpc.brc.berkeley.edu. E.g., from the command line (where you'll substitute your actual BRC Cluster username for myusername):

$ ssh myusername@hpc.brc.berkeley.edu
For more detailed information on logging in, see Logging into the BRC Clusters.
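If your SSH client is OpenSSH, you can save yourself some typing by adding an entry like the following to your ~/.ssh/config file. This is a convenience sketch, not an official BRC configuration; the host alias brc is arbitrary, and myusername is a placeholder for your actual BRC Cluster username:

# ~/.ssh/config
Host brc
    HostName hpc.brc.berkeley.edu
    User myusername

With that in place, logging in is simply:

$ ssh brc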
File Storage
Once you log in, you'll be in your home directory (/global/home/users/myusername), with a 10 GB storage limit. If you have an account on the Savio or Vector clusters, you also have access to a personal scratch directory on a global filesystem shared with other cluster users. Some users may also have access to a group directory, shared with collaborators.
You can access all of this storage - your home, scratch, and (if relevant) group directories - from the BRC supercluster's login and data transfer nodes, as well as from your cluster's compute nodes. For more information on your available storage, please see our storage page.
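For example, once logged in you can look around your storage areas from the shell. This is an illustrative sketch: the home path comes from the text above, but the scratch path shown is an assumption and may differ on your cluster; check the storage page for the actual location:

# Your home directory (10 GB limit):
$ cd ~                                 # i.e., /global/home/users/myusername
$ du -sh ~                             # see how much of your 10 GB you're using
# Your personal scratch directory (hypothetical path; confirm on the storage page):
$ cd /global/scratch/users/myusername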
Running your Jobs
When you log into a cluster, you'll land on one of several login nodes. Here you can edit scripts, compile programs, etc. However, you should not run applications or computational tasks on the login nodes, which are shared with other cluster users. Instead, use the SLURM job scheduler to submit jobs that will be run on one or more of the cluster's many compute nodes.
You'll use SLURM commands like sbatch to submit your jobs, squeue to view their status, and scancel to cancel them. Whenever you run sbatch, you'll point it at a SLURM job script, a small file that specifies where and how you want to run your job on the cluster and what command(s) you want your job to execute. If you need to run jobs interactively, there's also an srun command available. See Running Your Jobs for more detailed information on submitting and running your jobs via SLURM, as well as the charges (if any) your account may incur for running computational jobs.
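For illustration, here is a minimal SLURM job script. This is a sketch only: the account and partition names are placeholders you'd replace with your own (see Running Your Jobs for the values valid on your cluster):

#!/bin/bash
#SBATCH --job-name=myjob
#SBATCH --account=my_account      # placeholder: your BRC account name
#SBATCH --partition=my_partition  # placeholder: a partition you can submit to
#SBATCH --time=00:10:00           # wall-clock limit (hh:mm:ss)
#SBATCH --ntasks=1                # number of tasks to run

# Command(s) your job will execute on the compute node:
echo "Hello from $(hostname)"

You would then submit, monitor, and (if needed) cancel the job like so:

$ sbatch myjob.sh       # submit; prints the job's ID
$ squeue -u myusername  # view the status of your jobs
$ scancel 12345         # cancel job 12345 (substitute your job's ID)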
Accessing or Installing Software
Many software packages and tools are already built and provided for your use on your cluster. You can list these and load/unload them via Environment Module commands. SLURM and Warewulf commands are added to your path by default. For all other provided software, at a shell prompt, enter module list to see what you're currently accessing, module avail to see what additional software is available, and one or more module load modulename or module unload modulename commands to set up your environment.
For more detailed information on accessing provided software via Environment Module commands - as well as on installing your own software, when needed - please see Accessing Software.
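A typical module session looks like the following. The module name gcc is just a hypothetical example; run module avail to see what's actually provided on your cluster:

$ module list         # modules currently loaded in your environment
$ module avail        # all modules available to load
$ module load gcc     # load a module (example name; substitute a real one)
$ module unload gcc   # unload it again when you're done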
Transferring Data
To transfer data from other computers into - or out of - your various storage directories, you can use protocols and tools like SCP, SFTP, FTPS, and rsync. If you're transferring lots of data, the web-based Globus Connect tool is typically your best choice: it can perform fast, reliable, unattended transfers. Whenever you transfer data, you'll need to connect to the BRC supercluster's dedicated Data Transfer Node, dtn.brc.berkeley.edu. For more information on getting your data onto and off of Savio, please see Transferring Data.
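For example, from your own computer you could copy files to your home directory via the Data Transfer Node like this. The file and directory names are placeholders; substitute your own, along with your actual BRC Cluster username:

# Copy a single file to your home directory:
$ scp myfile.dat myusername@dtn.brc.berkeley.edu:~/
# Or synchronize a whole directory with rsync:
$ rsync -av myproject/ myusername@dtn.brc.berkeley.edu:~/myproject/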
For additional information, please see our Frequently Asked Questions page or our Trainings and Tutorials.
For additional help, support, or information, please see Getting Help.