Tarbell High Performance Computing Cluster User Manual


1.0 System Resources

Environment:

Interactive Nodes:

Standard Compute Nodes:

Fat Compute Nodes:


2.0 Applying for an Account

  1. Eligibility Requirements
  2. Request Forms


3.0 Logging into Tarbell

  1. Obtaining an SSH client:
    1. If you are logging in from a Linux or Apple workstation, an SSH client is already installed by default.
    2. If you are logging in from a Windows workstation, you will need to install an SSH client. Two popular SSH clients for Windows are Cygwin and PuTTY.
  2. From a Linux, Apple, or Windows workstation with Cygwin installed:
    1. Open a terminal program
    2. Type:  ssh -l username tarbell.cri.uchicago.edu
    3. If this is your first time logging in, accept the SSH host key.
    4. Enter your password.
  3. From a Windows workstation with PuTTY installed:
    1. Open PuTTY
    2. In the Host Name box, type:  tarbell.cri.uchicago.edu
    3. Select SSH as the connection type
    4. Verify the port number set in the Port box is 22
    5. Press the Open button at the bottom of the form
    6. A dialog box may appear the first time you log in with PuTTY, asking you to accept the SSH host key. Select Yes.
    7. Type in your username and password when prompted.


4.0 Software Selection

All software used in a workflow should be accessed via Environment Modules. The previous iteration of this User Guide stated that you could also use absolute paths to programs in the /apps directory. This is not recommended, since some applications require specific environment variables to be set, and loading the corresponding module ensures they are configured correctly.

The following are basic commands that are helpful in using Environment Modules:
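The commands below are a representative sketch; the module name blast/2.2.28 is taken from the example pipeline in Section 5.0, and you should run module avail to see what is actually installed on Tarbell.

# List all modules available on the cluster
module avail

# Load a module into your environment (this example module is used in Section 5.0)
module load blast/2.2.28

# Show the modules currently loaded in your session
module list

# Remove a single module from your environment
module unload blast/2.2.28

# Remove all currently loaded modules
module purge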


5.0 Submitting Jobs to Tarbell

  1. Interactive Jobs (see the example command after this list):
  2. Batch Jobs (see the example pipeline and submission command below):
  3. Example Pipeline
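The following is a minimal sketch of requesting an interactive job (item 1 above); the resource values are examples only, and queue defaults on Tarbell may differ. Batch jobs (item 2) are covered by the example pipeline and submission command that follow.

# Request an interactive session on one compute node with four cores for two hours
qsub -I -l nodes=1:ppn=4 -l walltime=2:00:00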

#!/bin/bash

########################
#                      #
# Scheduler Directives #
#                      #
########################

### Set the name of the job, where jobname is a unique name for your job
#PBS -N jobname

### Select the shell you would like the script to execute within
#PBS -S /bin/bash

### Inform the scheduler of the expected runtime, where walltime=HH:MM:SS
#PBS -l walltime=8:00:00

### Inform the scheduler of the number of CPU cores for your job.
### This example will allocate four cores on a single node.
#PBS -l nodes=1:ppn=4

### Inform the scheduler of the amount of memory you expect to use.
### Use units of 'b', 'kb', 'mb', or 'gb'
#PBS -l mem=4gb

### Set the destination for your program's output.
#PBS -o $HOME/myjob.out
#PBS -e $HOME/myjob.err

#################
#               #
# Job Execution #
#               #
#################

# Load the appropriate applications
module load blast/2.2.28

# Execute the program
blastall -p blastn -d drosoph.nt -i ecoli.nt
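
Assuming the script above is saved as myjob.pbs (a placeholder filename), it would be submitted to the scheduler from an interactive node with:

qsub myjob.pbs

The scheduler prints a job ID when the job is accepted; that ID can be used with the monitoring commands described in Section 6.0.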


6.0 Monitoring the Status of Jobs

The status of your jobs can be monitored with the showq or qstat commands. These commands must be run from the Interactive Nodes.

  1. showq
  2. qstat
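
For example, both commands accept a -u flag to limit the output to a single user's jobs (replace username with your CRI account name):

# Show your jobs in the scheduler queue, including running, idle, and blocked jobs
showq -u username

# Show the status of your jobs as reported by the resource manager
qstat -u username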


7.0 Additional Training and Support