For first-time login, only the SSH protocol is supported. Other protocols, such as X11 forwarding over SSH, SFTP, and VNC, will be discussed later in other tutorials.
The (S)ecure (SH)ell protocol opens a pseudo-terminal connection, which gives you an interactive shell for communicating with the cluster. There are several clients available:
(Note - this is by no means a full list of every client that you can use)
For the purposes of this tutorial we will use a terminal emulator such as [OSX:Terminal Terminal], [Gnome Terminal], or [XTerm]. Begin by opening your terminal and typing:
rlyon@deepthought:~$ ssh firstname.lastname@example.org
You will substitute your username for the fictitious one that is shown.
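If you log in often, you can save the address and username in an SSH client configuration file so that a short alias is enough. A minimal sketch, assuming the hypothetical alias "crc" and the placeholder address and username from the example above:

```
# ~/.ssh/config -- "crc" is an alias of your choosing; replace the
# HostName and User values with the real ones from your account e-mail.
Host crc
    HostName example.org
    User firstname.lastname
```

With this in place, `ssh crc` is equivalent to typing the full `ssh firstname.lastname@example.org` command.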
We also have a web-based option, OnDemand.
Once a connection is made you will be prompted for your password. Enter the password you were given, and if all is well, you will see something like this:
IIDS RESEARCH COMPUTING AND DATA SERVICES

WARNING: To protect the system from unauthorized use and to ensure that
the system is functioning properly, activities on this system are
monitored, recorded and subject to audit. Use of this system is
expressed consent to such monitoring and recording. Any unauthorized
access or use of this system is prohibited and subject to criminal and
civil penalties.

SYSTEM:  zaphod.hpc.uidaho.edu
CORES:   40
MEMORY:  191872 MB

SUMMARY: (collected Tue Jun 28 12:30:01 PDT 2022)
 * CPU Usage (total average) = 4617.75%
 * Memory used (real)        = 7029 MB
 * Memory free (cache)       = 183601 MB
 * Swap in use               = 11 MB
 * Load average              = 19.18 17.55 10.89

QUESTIONS: Submit all questions, requests, and system issues to:
  email@example.com

benji@zaphod ~ $
You have now logged in.
We have two main types of servers, standalone servers and cluster servers. The difference between these two is how you go about running your jobs.
On a standalone server, you simply log in and can run your jobs directly. For example:
benji@trillian ~ $ module load R
benji@trillian ~ $ R

R version 3.1.3 (2015-03-09) -- "Smooth Sidewalk"
Copyright (C) 2015 The R Foundation for Statistical Computing
Platform: x86_64-unknown-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> rnorm(20)
 [1] -0.31292512 -0.04088035 -0.45084361 -0.15509451 -1.31036346  1.04192160
 [7] -1.54716041  0.75207358  0.37357678 -0.37222896  1.31458656 -1.46211410
[13]  0.64556641  0.61382511 -1.24606528 -0.35138105 -1.55991190 -0.31917710
[19] -1.00967966 -0.86483148
> quit(save="no")
benji@trillian ~ $
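One caution when running directly on a standalone server: a job started in your shell normally dies when your SSH session ends. A common workaround is to detach it with `nohup` (a terminal multiplexer such as `screen` or `tmux` also works). A minimal sketch, where `sleep 1` stands in for a real long-running command such as an `Rscript` call:

```shell
# Detach a long-running command so it survives a dropped session.
# "sleep 1" is a placeholder for your actual analysis command.
nohup sleep 1 > analysis.log 2>&1 &
pid=$!
echo "background PID: $pid"   # note the PID so you can check on it later
wait "$pid"                   # in practice you would simply log out instead
echo "done"
```

Anything the command would normally print to the screen ends up in `analysis.log`, and you can reconnect later to inspect it.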
We currently maintain 12 generally available standalone servers. The servers you have access to will depend on the type of account you have:
|Standard||(Used for workshops/classes)|
For general compute needs, the servers marvin, trillian, and zaphod are where you should start; the login screen will tell you which is the least busy. In the login example above, note that zaphod is fairly busy and would not be a good server to start more jobs on. If your project requires lots of memory (RAM), slartibartfast and ford may be used. Arthur and colin are newer servers with even more RAM. Whale has an obscene amount of RAM (1 TB); please use it only for jobs that really need that kind of brute force. Note that your account type may limit which servers you have access to. For more details on server capabilities see this page. For a general walk-through see this workshop documentation.
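Besides the login banner, you can check how busy a server is once you are logged in. These are standard Linux commands, so they should work on any of the standalone servers:

```shell
# Load averages over the last 1, 5, and 15 minutes; compare them
# against the core count to judge how busy the server really is.
uptime
nproc    # number of CPU cores on this server
```

As a rule of thumb, a load average well below the core count means the server has capacity to spare.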
The CRC maintains a compute cluster of more than 2500 cores, currently managed by fortyfour.ibest.uidaho.edu. Note: free-account users can submit cluster jobs from the standalone servers. If your jobs can run as replicates or be parallelized (most can), the cluster will serve you well. When running your jobs on the cluster, you must use the scheduler to add them to the cluster queue. For example:
benji@fortyfour ~ $ cd /mnt/lfs2/benji
benji@fortyfour ~ $ cat my_job_script.slurm
#!/bin/bash

cd $SLURM_SUBMIT_DIR
echo "running on "
hostname
source /usr/modules/init/bash
module load R
Rscript -e "rnorm(10)"
echo "finished"
benji@fortyfour ~ $ sbatch my_job_script.slurm
Submitted batch job 726
benji@fortyfour ~ $ cat slurm-726.out
running on
n090
 [1] 0.2058277 1.3586386 0.2363083 0.1044323 0.3514225 1.9576543 0.1503541
 [8] 0.4841014 0.6014489 1.0225770
finished
benji@fortyfour ~ $
I put the commands I wanted to run (generating random numbers) into a simple script named my_job_script.slurm, then submitted that script to the queue using the
sbatch command. The output of the job (what would normally print to the screen) is returned in a file, named slurm-XXX.out by default. There are many examples of various submission script options here, and you may also find the cluster computing workshop examples helpful.
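The script in the example above relies on the scheduler's defaults. Real submissions usually request resources explicitly with #SBATCH directives, which sbatch reads from comment lines at the top of the script. A hedged sketch (the job name and time limit below are illustrative placeholders, not site policy):

```shell
# Write a job script that requests resources via #SBATCH directives,
# then check it for shell syntax errors with bash -n.
cat > my_job_script.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=rnorm-demo    # name shown in the queue listing
#SBATCH --output=slurm-%j.out    # %j expands to the job ID
#SBATCH --ntasks=1               # a single-core job
#SBATCH --time=00:10:00          # wall-clock limit (placeholder value)
cd "$SLURM_SUBMIT_DIR"
module load R
Rscript -e "rnorm(10)"
echo "finished"
EOF
bash -n my_job_script.slurm && echo "syntax OK"
```

Once submitted with sbatch, `squeue -u $USER` lists your pending and running jobs, and `scancel <jobid>` removes one from the queue.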