Mary Donovan and Sharon Solis
April 9, 2019
Why and when to use HPC?
Designed for computational problems that are too large, take too long, or need more file storage than a standard computer can handle
When HPC might not be your solution:
Campus-available cluster Knot (CentOS/RHEL 6):
110-node, ~1,400-core system
4 ‘fat nodes’ (1 TB RAM)
GPU nodes (12 M2050s, now too old)
Campus-available cluster Pod (CentOS/RHEL 7):
70-node, ~2,600-core system
4 ‘fat nodes’ (1 TB RAM)
GPU nodes (3): quad NVIDIA V100/32 GB with NVLink
GPU development node (P100, 1080 Ti, Titan V)
Condo clusters (PIs buy compute nodes):
Guild (60 nodes)
Braid (120 nodes, also has GPUs)
For Pod and Knot accounts:
Request access: http://csc.cnsi.ucsb.edu/acct
XSEDE: an NSF-sponsored service organization that provides access to computing resources.
Campus Champion (Sharon Solis): represents XSEDE on campus.
Software Carpentry intro to Unix shell: http://swcarpentry.github.io/shell-novice/
ssh username@pod.cnsi.ucsb.edu
Let's make a quick R script to run. On your computer, in the terminal:
echo 'data <- data.frame(x=seq(1:10),y=seq(1:10)); write.csv(data,"testcsv.csv",row.names=F)' > myscript.R
Now transfer that to pod:
scp myscript.R user@pod.cnsi.ucsb.edu:myscript.R
nano submit.job
#!/bin/bash -l
# Serial (1 core on one node) job...
#SBATCH --nodes=1 --ntasks-per-node=1
cd $SLURM_SUBMIT_DIR   # run from the directory the job was submitted from
module load R          # make R available on the compute node
Rscript myscript.R     # run the R script
sbatch submit.job
showq | grep mdono
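While it runs, the R script itself can see what SLURM allocated through environment variables (standard SLURM variables; the calls below are only an optional sanity check):
# inside myscript.R, if you want to inspect the job environment
Sys.getenv("SLURM_JOB_ID")       # the job's ID
Sys.getenv("SLURM_SUBMIT_DIR")   # the directory the job was submitted from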
For the short queue:
sbatch -p short submit.job
If you need to cancel:
scancel job_id
scp user@pod.cnsi.ucsb.edu:testcsv.csv ./testcsv.csv
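Back on your own machine, a quick check in R that the result came through (nothing cluster-specific here):
# read the file the job produced and peek at it
result <- read.csv("testcsv.csv")
head(result)   # columns x and y with values 1 through 10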
To start R on the command line:
module load R
R
To check which R is loaded (the path usually includes the version):
which R
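From inside an R session you can also confirm the version with base R:
R.version.string   # the version string of the R you just loaded
sessionInfo()      # R version plus platform and loaded packages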
Installing packages? Run install.packages() from an R session on the cluster.
For parallel work, the built-in parallel package is helpful (a sketch of both is below).
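A minimal sketch, assuming a single-node job whose cores were requested with --ntasks-per-node (the package name "foreach" is only an example):
# install a package into a personal library; the system library is usually
# not writable, so R will offer to create one in your home directory
install.packages("foreach", repos = "https://cloud.r-project.org")

# use the built-in parallel package with however many cores SLURM allocated
# (falls back to 1 core when not running under SLURM)
library(parallel)
n_cores <- as.integer(Sys.getenv("SLURM_NTASKS", unset = "1"))
results <- mclapply(1:100, function(i) i^2, mc.cores = n_cores)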
Please include in your papers! “We acknowledge support from the Center for Scientific Computing from the CNSI, MRL: an NSF MRSEC (DMR-1720256) and NSF CNS-1725797.”
Your turn: Try running on pod
test_my_skillz.R
(use the short queue)
ssh username@pod.cnsi.ucsb.edu
git clone https://github.com/fishymary/R_on_HPC.git
cd R_on_HPC
cat submit.job
sbatch -p short submit.job
showq | grep [username]
ls
cat slurm-[jobid].out