Protocols supported for logging in

For first-time login, only the ssh protocol is supported. Other protocols, such as X11 forwarding over ssh, sftp, and VNC, will be discussed in later tutorials.


The (S)ecure (SH)ell protocol opens a pseudo-terminal connection, giving you an interactive shell for communicating with the cluster. There are several clients available:

  • For OSX you can use 'Terminal', or, if you have the X11 development package installed, 'XTerm'.
  • Windows has several non-free packages available; one of the most popular is called 'PuTTY'. Windows 10 users can also use the Windows Subsystem for Linux.
  • Linux has several options, but as Linux enthusiasts seem to have strong opinions about what is good and what is not, and full-scale arguments have been known to start over trivial things (such as whether Vi or Emacs is the best editor - if you don't believe me, go to any Linux forum, say you are a noob, and ask which is the most powerful editor; it becomes rather amusing in a short amount of time), I will name only two, on the basis that they come standard with most distributions: Gnome Terminal, the default terminal for the Gnome desktop environment, and XTerm, which comes standard on most systems.

(Note - this is by no means a full list of every client you can use.)

Logging in

For the purposes of this tutorial we will be using a terminal emulator such as Terminal (OSX), Gnome Terminal, or XTerm. Begin by opening your terminal and typing:

rlyon@deepthought:~$ ssh <username>@<servername>

Substitute your own username for <username> and the name of the server you want to connect to for <servername>.

If you are off campus and receive no response, or you are told that the connection has timed out, you will probably need to connect to the UI VPN server or the CRC VPN first.

We also have a web-based solution, OnDemand.

Once a connection is made you will receive a prompt to enter your password. Enter the password which has been given to you and if all is well, you will see this:

WARNING: To protect the system from unauthorized use and to
ensure that the system is functioning properly, activities
on this system are monitored recorded and subject to audit.
Use of this system is expressed consent to such monitoring
and recording. Any unauthorized access or use of this system
is prohibited and subject to criminal and civil penalties.


    CORES:   40 
    MEMORY:  191872 MB 
    SUMMARY: (collected Tue Jun 28 12:30:01 PDT 2022) 
       * CPU Usage (total average) = 4617.75%  
       * Memory used (real)        = 7029 MB 
       * Memory free (cache)       = 183601 MB 
       * Swap in use               = 11 MB 
       * Load average              = 19.18  17.55  10.89

    QUESTIONS: Submit all questions, requests, and system issues

benji@zaphod ~ $

You have now logged in.
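Once logged in, a few quick commands will confirm where you are and who you are logged in as. These are generic Linux commands, not specific to any one server:

```shell
# Confirm which server you landed on and which account you are using.
hostname    # prints the server's name, e.g. zaphod
whoami      # prints your username
pwd         # prints your current directory, usually your home directory
```

This is also a handy way to double-check which server you are on before starting any work, since the standalone servers all share the same prompt style.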

Which server should I use?

We have two main types of servers, standalone servers and cluster servers. The difference between these two is how you go about running your jobs.

Standalone Servers

On a standalone server, you simply log in and can run your jobs directly. For example:

benji@trillian ~ $ module load R
benji@trillian ~ $ R

R version 3.1.3 (2015-03-09) -- "Smooth Sidewalk"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> rnorm(20)
 [1] -0.31292512 -0.04088035 -0.45084361 -0.15509451 -1.31036346  1.04192160
 [7] -1.54716041  0.75207358  0.37357678 -0.37222896  1.31458656 -1.46211410
[13]  0.64556641  0.61382511 -1.24606528 -0.35138105 -1.55991190 -0.31917710
[19] -1.00967966 -0.86483148
> quit(save="no")
benji@trillian ~ $

We currently maintain a dozen or so generally available standalone servers. The servers you have access to will depend on the type of account you have:

Standard (used for workshops/classes)
  • colin
  • ford
  • formic
  • marvin
  • petunia
  • seraph
  • slartibartfast
  • hactar
  • trillian
  • arthur
  • whale
  • zaphod
  • jayne

For general compute needs the servers marvin, trillian, and zaphod are where you should start - the login screen will tell you which is the least busy. In the login example above, note that the server is already fairly busy and would not be a good choice for starting more jobs. If your project requires lots of memory (RAM), slartibartfast and ford may be used. Arthur and colin are newer servers and have yet more RAM. Whale has an obscene amount of RAM (1 TB); please use it only for jobs that really need that kind of brute force. Note that your account type may limit which servers you have access to. For more details on server capabilities see this page. For a general walk-through see this workshop documentation.
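If you want to check for yourself how busy a server is after logging in, standard Linux tools will report the core count and load. A minimal sketch (generic commands, not CRC-specific):

```shell
# Number of CPU cores on this server.
nproc

# Load averages (1, 5, and 15 minutes). Compare against the core count:
# a load well below the number of cores means the server has spare capacity.
uptime

# Current memory usage in megabytes.
free -m
```

In the login banner above, a load average of about 19 on a 40-core server means roughly half the cores are occupied.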


Cluster Servers

The CRC maintains a 2500+ core compute cluster, currently managed by the Slurm scheduler. Note: free-account users can submit cluster jobs from the standalone servers. If your jobs can run in replicates or be parallelized (most can), the cluster will serve you well. When running jobs on the cluster, you must use the scheduler to add them to the cluster queue. For example:

benji@fortyfive ~ $ cd /mnt/lfs2/benji
benji@fortyfive ~ $ cat my_job_script.slurm 
#!/bin/bash

echo "running on "

source /usr/modules/init/bash
module load R

Rscript -e "rnorm(10)"

echo "finished"
benji@fortyfive ~ $ sbatch my_job_script.slurm 
Submitted batch job 726

benji@fortyfive ~ $ cat slurm-726.out 
running on 
 [1] 0.2058277 1.3586386 0.2363083 0.1044323 0.3514225 1.9576543 0.1503541
 [8] 0.4841014 0.6014489 1.0225770
benji@fortyfive ~ $

I put the commands I wanted to run (generating random numbers) into a simple script named my_job_script.slurm, then submitted that script to the queue using the sbatch command. The output of the job (what would normally print to the screen) is returned in a file named, by default, slurm-XXX.out. There are many examples of submission script options here, and you may also find the cluster computing workshop examples helpful.
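As a sketch of what a slightly more complete submission script might look like, the version below adds a shebang and two common #SBATCH directives. The job name is a hypothetical placeholder, not a CRC requirement; check the scheduler documentation for the options your site supports:

```shell
#!/bin/bash
#SBATCH --job-name=rnorm-demo     # hypothetical job name; pick your own
#SBATCH --output=slurm-%j.out     # %j expands to the Slurm job ID

# Same body as my_job_script.slurm above:
echo "running on $(hostname)"

source /usr/modules/init/bash
module load R

Rscript -e "rnorm(10)"

echo "finished"
```

Because #SBATCH directives are ordinary bash comments, the script also runs unchanged outside the scheduler. After submitting with sbatch, you can watch the job with `squeue -u $USER` and cancel it with `scancel <jobid>`.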