Here is the output of print(cluster.job_script()):
#!/usr/bin/env bash
#SBATCH -J dask-worker
#SBATCH -p xahctest
#SBATCH -n 1
#SBATCH --cpus-per-task=8
#SBATCH --mem=15G
#SBATCH -t 01:00:00
/some/path/to/python -m distributed.cli.dask_worker tcp://{someip}:45546 --name dummy-name --nthreads 2 --memory-limit 3.73GiB --nworkers 4 --nanny --death-timeout 60
If I submit this script manually, it won't succeed unless I first change {someip} to something reachable, like login01.
I've tried adding the following to my Dask YAML config, but I still get the same job script as above:
distributed:
  scheduler:
    default-address: 'tcp://login-1:port'
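For what it's worth, dask-jobqueue usually takes the scheduler's advertised address from the cluster constructor (or the jobqueue section of the config) rather than from distributed.scheduler.default-address. A minimal sketch of the two usual knobs; the interface name ib0 and the hostname login01 are assumptions for this site, not values from my setup:

```yaml
# ~/.config/dask/jobqueue.yaml (sketch)
jobqueue:
  slurm:
    interface: ib0        # assumed: network interface the workers can reach
```

or, equivalently, in Python when building the cluster: SLURMCluster(..., interface="ib0") or SLURMCluster(..., scheduler_options={"host": "login01"}) to pin the scheduler to a specific hostname.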