Get performance metrics after script completion

In my script, I have a `client.shutdown()` in the `finally` block. This makes sure the cluster is closed and cleaned up at the end (I am using an SSHCluster). I removed the `shutdown()` call so I could preview the dashboard after the script ends. However, I still see the same behaviour after removing `client.shutdown()`: the scheduler and workers are stopped automatically, and the dashboard becomes inaccessible. I put a breakpoint in the `shutdown()` function within the client, but it is never hit, and I am not using a context manager anywhere. Under which circumstances does Dask automatically shut down the cluster?

Is there a way to get the performance metrics after each run, preferably in file form (e.g. CSV), with information about CPU, memory, network overhead, task duration, etc.? In that case, I wouldn’t mind the dashboard closing upon script completion. The documentation doesn’t seem to cover a generic case like the one I am looking for.


I think what you want is `performance_report`.

If your main script, which started the Scheduler/Client, ends, then the whole Dask cluster will be shut down. To be sure of what is happening in your case, we would need a reproducer.
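For reference, here is a minimal sketch of how `performance_report` is typically used. It uses a local threaded client purely for illustration; in the original setup this would be the SSHCluster-backed client instead, and the array computation is just a stand-in workload. Note that the report is an HTML file (not CSV), but it does capture task durations, worker CPU/memory profiles, and bandwidth.

```python
import dask.array as da
from dask.distributed import Client, performance_report

# Local in-process client for illustration; swap in your SSHCluster client.
client = Client(processes=False)

# Everything computed inside this block is recorded into the HTML report
# (task stream, worker profiles, bandwidth, etc.).
with performance_report(filename="dask-report.html"):
    x = da.random.random((2000, 2000), chunks=(500, 500))
    x.sum().compute()

client.shutdown()
```

After the script finishes, `dask-report.html` can be opened in a browser even though the cluster itself has been shut down.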

Thanks for this, I was looking at the wrong documentation!

Then this is what I am experiencing. However, if I can always get a log file of what happened during the run with `performance_report`, I don’t mind the cluster shutting down when the script finishes (it’s actually better). I will try it out.
