tech:slurm
Revisions compared: 2020/04/29 09:17 ([Python], kohofer) and 2020/05/27 10:57 (kohofer)
If a compute node is **<color #ed1c24>down</color>**:

<code>
sinfo
PARTITION AVAIL TIMELIMIT
debug*

sinfo
PARTITION AVAIL TIMELIMIT
gpu*
gpu*
</code>
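Since the ''sinfo'' listing above is the usual way to spot unhealthy nodes, a small script can watch it automatically. Below is a minimal, hypothetical Python sketch (not part of the original page) that parses ''sinfo'''s default column layout — PARTITION, AVAIL, TIMELIMIT, NODES, STATE, NODELIST — and reports which partitions have nodes in the ''down'' state; the sample output is illustrative only:

```python
def down_partitions(sinfo_output: str):
    """Return partition names whose sinfo line reports state 'down'."""
    down = []
    for line in sinfo_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) >= 5 and fields[4] == "down":
            down.append(fields[0].rstrip("*"))  # '*' marks the default partition
    return down

# Illustrative sample of sinfo's default output (node names are made up).
sample = """\
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   down slurm-node1
gpu          up   infinite      2   idle gpu-node[1-2]
"""
print(down_partitions(sample))  # ['debug']
```

In a real setting the string would come from ''subprocess.run(["sinfo"], capture_output=True)'' rather than a hard-coded sample.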
<code>
...
</code>
+ | |||
+ | ===== Example ===== | ||
+ | |||
+ | An simple example to use nvidia GPU! | ||
+ | |||
+ | < | ||
+ | #!/bin/bash | ||
+ | |||
+ | #SBATCH --job-name=mnist | ||
+ | #SBATCH --output=mnist.out | ||
+ | #SBATCH --error=mnist.err | ||
+ | |||
+ | #SBATCH --partition gpu | ||
+ | #SBATCH --gres=gpu | ||
+ | #SBATCH --mem-per-cpu=4gb | ||
+ | #SBATCH --nodes 2 | ||
+ | #SBATCH --time=00: | ||
+ | |||
+ | #SBATCH --ntasks=10 | ||
+ | |||
+ | #SBATCH --mail-type=ALL | ||
+ | #SBATCH --mail-user=< | ||
+ | </ | ||
+ | |||
+ | |||
+ | |||
+ | |||
+ | ml load miniconda3 | ||
+ | |||
+ | python3 main.py | ||
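Inside ''main.py'', the resources granted by the ''#SBATCH'' directives above can be inspected through SLURM's standard job environment variables (''SLURM_JOB_NAME'', ''SLURM_JOB_NUM_NODES'', ''SLURM_NTASKS''). A minimal sketch, not from the original page — the ''fake_env'' dict only simulates the environment for illustration; a real job would read ''os.environ'':

```python
def slurm_summary(env):
    """Summarize a few standard SLURM_* job environment variables."""
    return {
        "job": env.get("SLURM_JOB_NAME", "unknown"),
        "nodes": int(env.get("SLURM_JOB_NUM_NODES", "1")),
        "tasks": int(env.get("SLURM_NTASKS", "1")),
    }

# Simulated environment matching the batch script above; inside a real
# job, pass os.environ instead.
fake_env = {"SLURM_JOB_NAME": "mnist", "SLURM_JOB_NUM_NODES": "2", "SLURM_NTASKS": "10"}
print(slurm_summary(fake_env))  # {'job': 'mnist', 'nodes': 2, 'tasks': 10}
```

This is handy for logging exactly what the scheduler allocated, which makes job output files easier to debug.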
===== Links =====
+ | |||
+ | https:// | ||
+ | |||
+ | https:// | ||
+ | |||
+ | https:// | ||
http:// | http:// | ||
https:// | https:// | ||
+ |
/data/www/wiki.inf.unibz.it/data/pages/tech/slurm.txt · Last modified: 2022/11/24 16:17 by kohofer