In my script, I call `client.shutdown()` in a `finally` block. This makes sure the cluster is closed and cleaned up at the end (I am using an `SSHCluster`). I removed the `shutdown()` call so that I could inspect the dashboard after the script ends. However, I still see the same behaviour with `client.shutdown()` removed: the scheduler and workers are stopped automatically, and the dashboard becomes inaccessible. I put a breakpoint in the client's `shutdown()` method, but it is never hit, and I am not using a context manager anywhere. Under which circumstances does Dask shut down the cluster automatically?
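From what I can tell, the teardown is not going through `client.shutdown()` at all: `SpecCluster` (the base class of `SSHCluster`) registers an `atexit` hook that closes every still-open cluster whose `shutdown_on_close` flag is set when the interpreter exits, which would explain why the breakpoint is never hit. A minimal stand-in showing the mechanism (the `FakeCluster` class and `close_clusters` function below are my own illustration, not Dask code; `shutdown_on_close` is my reading of the `SpecCluster` signature, so treat it as an assumption):

```python
import atexit

class FakeCluster:
    """Stand-in for a Dask cluster object; not real Dask code."""
    def __init__(self, shutdown_on_close=True):
        self.shutdown_on_close = shutdown_on_close
        self.closed = False

    def close(self):
        self.closed = True

clusters = []

def close_clusters():
    # Mirrors the exit hook distributed registers: close every
    # tracked cluster that still has shutdown_on_close set.
    for c in clusters:
        if c.shutdown_on_close and not c.closed:
            c.close()

atexit.register(close_clusters)

cluster = FakeCluster()
clusters.append(cluster)
# No explicit shutdown() anywhere - close_clusters() still runs when
# the interpreter exits, which is why the scheduler/workers vanish
# and the dashboard dies as soon as the script ends.
```

If that reading is right, either passing `shutdown_on_close=False` to `SSHCluster`, or simply blocking at the end of the script (e.g. with `input()`), should keep the dashboard reachable after the work finishes.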
Alternatively, is there a way to export the performance metrics after each run, preferably to a file (e.g. CSV), with information such as CPU and memory usage, network overhead, and task durations? In that case, I wouldn't mind the dashboard closing when the script completes. I couldn't find anything covering this general case in the documentation.
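Two things that may get close: `distributed` ships a `performance_report` context manager (`with performance_report(filename="report.html"): ...`) that saves a dashboard-style report as a standalone HTML file, and `Client.get_task_stream()` returns per-task records (key, worker, timing phases) as a list of dicts, which is easy to dump to CSV yourself. A sketch of the CSV side, assuming only that `get_task_stream()` returns dicts (the helper name is mine, and nested fields like `startstops` will be stringified):

```python
import csv

def dump_task_stream(client, path="task-stream.csv"):
    """Write per-task records from client.get_task_stream() to a CSV file.

    Each record is one finished task (key, worker, thread, timing
    phases, ...); we collect the union of keys as the header since
    records are plain dicts and may not all share the same fields.
    """
    records = [dict(r) for r in client.get_task_stream()]
    if not records:
        return path
    fieldnames = sorted({k for r in records for k in r})
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)
    return path
```

This only covers task-level data; for CPU/memory/network time series I believe you would have to scrape the scheduler's metrics (e.g. its Prometheus endpoint) while the run is in progress, since those are not retained after shutdown.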