Scaling down number of workers based on memory

When using a cluster manager that supports scaling, e.g. LocalCluster, is there a way to scale down the number of workers based on memory pressure rather than workload?

For example, suppose the initial configuration is 1 GB per worker. If a computation needs more memory, then instead of failing, Dask would scale the cluster down by halving the number of workers (giving 2 GB per worker), repeating in an exponential-backoff fashion. Would that be possible and, if so, how?
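To make the idea concrete, here is a minimal sketch of the halving logic, independent of any Dask API. The names `halving_schedule` and `run_with_backoff` are hypothetical, and catching `MemoryError` stands in for whatever failure signal the cluster surfaces (with a real `distributed.LocalCluster` you would pass `cluster.scale` as the `scale` callable and would likely need to catch `distributed.KilledWorker` instead):

```python
def halving_schedule(n_workers, min_workers=1):
    """Yield worker counts, halving at each step down to min_workers."""
    n = n_workers
    while n >= min_workers:
        yield n
        if n == min_workers:
            break
        n = max(n // 2, min_workers)


def run_with_backoff(compute, scale, n_workers, min_workers=1):
    """Run compute(); on memory failure, halve the worker count
    (doubling memory per worker, since total memory is fixed) and retry.

    compute: zero-argument callable running the computation.
    scale:   callable taking the desired worker count,
             e.g. cluster.scale for a LocalCluster (hypothetical usage).
    """
    for n in halving_schedule(n_workers, min_workers):
        scale(n)
        try:
            return compute()
        except MemoryError:
            continue  # out of memory at this size; try fewer, larger workers
    raise MemoryError("computation failed even at the minimum worker count")
```

With 8 workers initially, a computation that only fits at 2 workers would be attempted at 8, then 4, then 2, giving each surviving worker a 4x larger share of the fixed memory pool. Note this restarts the computation from scratch on each retry; it does not preserve partial results.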

Thanks in advance for any advice and all the great work!

Hi, there’s a discussion here: Scale up on memory pressure · Issue #6826 · dask/distributed · GitHub
