Advice for Slurm Configuration


First off, apologies if I use confusing or incorrect terminology, I am still learning.

I am trying to set up configuration for a Slurm-enabled adaptive cluster.

The supercomputer and its Slurm configuration are documented here. Here is some of the most relevant information extracted from the website:

| Partition Name | Max Nodes per Job | Max Job Runtime | Max resources used simultaneously | Shared Node Usage | Default Memory per CPU | Max Memory per CPU |
| --- | --- | --- | --- | --- | --- | --- |
| compute | 512 | 8 hours | no limit | no | 1920 MB | 8000 MB |


This partition consists of 2659 AMD EPYC 7763 Milan compute nodes and is intended for running parallel scientific applications. The compute nodes allocated for a job are used exclusively and cannot be shared with other jobs. Some information about the compute node:

| Component | Value |
| --- | --- |
| # of CPU Cores | 64 |
| # of Threads | 128 |

Here is some output from scontrol show partition:

   AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
   AllocNodes=ALL Default=NO QoS=N/A
   DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
   MaxNodes=512 MaxTime=08:00:00 MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED
   PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=EXCLUSIVE
   OverTimeLimit=NONE PreemptMode=OFF
   State=UP TotalCPUs=711168 TotalNodes=2778 SelectTypeParameters=NONE
   DefMemPerCPU=960 MaxMemPerCPU=3840

Here is what I have so far:

cluster = SLURMCluster(
    memory=f"{8000 * 64 * 0.90} MB",
    # job_extra=["--ntasks-per-node=50"],
)

Some things to mention:

  1. In the first table above, “nodes” refers to compute server nodes, not Dask nodes (which I think would more properly be called Dask workers? If someone could clear up that terminology for me, I would be grateful). Since I have 64 CPU cores and 8000 MB of allowed memory per CPU, I thought it would be sensible to set the memory to 8000 * 64, with a “reduction” factor of 0.90 just to be on the safe side.
  2. I have 64 CPU cores, which I believe should translate to 64 “cores” in the SLURMCluster. I want each Python process to have 2 CPUs, so 32 processes in total. That might be optimised down to 4 CPUs per process, but I have no idea how to get a feeling for sensible settings here.
  3. I set the walltime of each Dask-cluster job to the maximum allowed, as I would rather block with one Slurm job than have to wait in the queue repeatedly. This might leave the node idle at times, but it might still be more effective than waiting in the Slurm batch queue.
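Putting points 1–3 together, the full configuration I am aiming for would look roughly like this (a sketch only; queue, account, and interface are copied from the generated job script, and I am not sure all values are sensible):

```python
from dask_jobqueue import SLURMCluster

cluster = SLURMCluster(
    queue="compute",                  # Slurm partition
    account="ab0995",                 # project account from the job script
    cores=64,                         # one full node: 64 CPU cores
    processes=32,                     # 32 worker processes -> 2 threads each
    memory=f"{8000 * 64 * 0.90} MB",  # 8000 MB/CPU * 64 CPUs, 10% safety margin
    walltime="08:00:00",              # partition maximum
    interface="ib0",                  # InfiniBand, as in the generated script
)
```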

If I now print the job script as configured above, I get:


#!/usr/bin/env bash

#SBATCH -J dask-worker
#SBATCH -p compute
#SBATCH -A ab0995
#SBATCH -n 1
#SBATCH --cpus-per-task=64
#SBATCH --mem=430G
#SBATCH -t 08:00:00

/work/ab0995/AWIsoft/miniconda/NextGEMS/.conda/bin/python -m distributed.cli.dask_worker tcp:// --nthreads 2 --nprocs 32 --memory-limit 13.41GiB --name dummy-name --nanny --death-timeout 60 --interface ib0 --protocol tcp://

So, questions:

  1. By my mental math, 8000 * 64 * 0.9 = 460.8 GB, not 430G. What is happening here?
  2. I don’t really understand the nthreads, nprocs, and memory-limit settings of the dask_worker…?
  3. When I let the cluster scale adaptively, it requests one worker, which immediately exits without any logs being produced (no slurm-out-??????? files are produced)
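For questions 1 and 2, here is my own back-of-the-envelope attempt. My guess (and it is only a guess, please correct me) is that Dask converts my decimal megabytes into binary gibibytes before talking to Slurm:

```python
# Total memory requested: 8000 MB per CPU * 64 CPUs, with a 0.90 safety factor
total_mb = 8000 * 64 * 0.90                   # 460800.0 decimal megabytes
total_bytes = total_mb * 1000**2

# Converting to binary units (1 GiB = 1024**3 bytes) lands near the 430G Slurm shows
total_gib = total_bytes / 1024**3             # ~429.15 GiB

# Per worker process: 32 processes share the total
per_worker_gib = total_bytes / 32 / 1024**3   # ~13.41 GiB, matching --memory-limit

print(f"{total_gib:.2f} GiB total, {per_worker_gib:.2f} GiB per worker")
```

The per-worker number matches the `--memory-limit 13.41GiB` in the printed job script exactly, which makes me fairly confident about the unit conversion, but not about the rounding to 430G.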

If I actually use the client to do some calculations on a big dataset (several TB), I eventually run into memory errors (I have logs left over from some other configuration tests). Interestingly, I would have assumed the cluster would ask for another Slurm node, but that is not the case…

The adaptive scaling is set up simply by:

cluster.adapt(minimum=1, maximum=10)

What would some recommended settings be here? Any help or hints would be much appreciated!!

One thing I think I noticed: if I get lucky and check the queue quickly enough, I can briefly see the Slurm warning “invalid” pop up. So clearly my current configuration is not being accepted by the system.
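For completeness, this is how I have been trying to catch the rejection reason (generic Slurm commands; `<jobid>` is a placeholder for the actual job ID):

```shell
# Show my pending/failing jobs with Slurm's stated reason (%r)
squeue -u $USER -o "%.12i %.10P %.10T %r"

# Inspect a specific job's full state, including the Reason field
scontrol show job <jobid>
```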