PBS vs Slurm
CRAY KOREA Blog
2021. 8. 6. 14:31
1. Commands
| User Commands | PBS | Slurm |
| --- | --- | --- |
| Job submission | qsub [script_file] | sbatch [script_file] |
| Job deletion | qdel [job_id] | scancel [job_id] |
| Job status (by job) | qstat [job_id] | squeue -j [job_id] |
| Job status (by user) | qstat -u [user_name] | squeue -u [user_name] |
| Job hold | qhold [job_id] | scontrol hold [job_id] |
| Job release | qrls [job_id] | scontrol release [job_id] |
| Queue list | qstat -Q | squeue |
| Node list | pbsnodes -l | sinfo -N OR scontrol show nodes |
| Cluster status | qstat -a | sinfo |
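The table above is essentially a one-to-one command mapping, so it can be captured in a small lookup helper. The sketch below is a hypothetical shell function (not part of either scheduler) that translates a PBS command name to its Slurm counterpart per the table:

```shell
# Hypothetical helper: map a PBS command to its Slurm equivalent,
# following the comparison table above.
pbs_to_slurm() {
  case "$1" in
    qsub)     echo "sbatch" ;;
    qdel)     echo "scancel" ;;
    qstat)    echo "squeue" ;;
    qhold)    echo "scontrol hold" ;;
    qrls)     echo "scontrol release" ;;
    pbsnodes) echo "sinfo -N" ;;
    *)        echo "unknown" ;;
  esac
}

pbs_to_slurm qsub   # prints "sbatch"
```

This kind of wrapper is occasionally used on sites migrating from PBS to Slurm so that users' muscle memory keeps working during the transition.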
2. Environment
| Environment | PBS | Slurm |
| --- | --- | --- |
| Job ID | $PBS_JOBID | $SLURM_JOBID |
| Submit Directory | $PBS_O_WORKDIR | $SLURM_SUBMIT_DIR |
| Submit Host | $PBS_O_HOST | $SLURM_SUBMIT_HOST |
| Node List | $PBS_NODEFILE | $SLURM_JOB_NODELIST |
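A quick way to verify this mapping is to echo the variables from inside a job script. Since the sketch below runs outside any scheduler, it assigns dummy values by hand; in a real Slurm job these are exported by the scheduler itself:

```shell
# Illustration only: in a real Slurm job, SLURM_JOBID and
# SLURM_SUBMIT_DIR are set by the scheduler. Dummy values here.
SLURM_JOBID=12345
SLURM_SUBMIT_DIR=/home/user/run

msg="Job ${SLURM_JOBID} submitted from ${SLURM_SUBMIT_DIR}"
echo "$msg"
```

Echoing these variables at the top of a job script is also a cheap debugging aid when a job lands on an unexpected node or directory.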
3. Job Script
(1) PBS
```sh
#!/bin/sh
#PBS -V
#PBS -N test_job
#PBS -q normal
#PBS -l select=1:ncpus=1:mpiprocs=1:ompthreads=1
#PBS -l walltime=01:00:00

cd $PBS_O_WORKDIR
./test.x
```
(2) Slurm
```sh
#!/bin/sh
#SBATCH -J test_job
#SBATCH -p normal
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -o %x_%j.out
#SBATCH -e %x_%j.err
#SBATCH --time=01:00:00

export OMP_NUM_THREADS=1
srun ./test.x

exit 0
```
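In the `-o %x_%j.out` / `-e %x_%j.err` options, Slurm substitutes `%x` with the job name and `%j` with the job ID, so the script above would produce files like `test_job_12345.out`. The substitution is done by Slurm itself; the sketch below just mimics it with dummy values to show the resulting filename shape:

```shell
# Mimic Slurm's %x (job name) and %j (job ID) filename expansion
# with dummy values; real expansion is performed by Slurm.
job_name=test_job
job_id=4242

out_file=$(printf '%s_%s.out' "$job_name" "$job_id")
echo "$out_file"
```

Using `%x_%j` keeps output from repeated submissions of the same script from overwriting each other, since every job ID is unique.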
4. Interactive Job
(1) PBS
```sh
$ qsub -I -q normal -l select=1:ncpus=1 -l walltime=01:00:00
```
(2) Slurm
| Mode | Command |
| --- | --- |
| Basic | $ srun -p normal -N 1 -n 1 --pty bash |
| Pin to node znode44 | $ srun -p normal -N 1 -n 1 -w znode44 --pty bash |
| Exclusive use of znode44 | $ srun -p normal -N 1 -n 1 -w znode44 --exclusive --pty bash |