Is it possible to spin up multiple Dask Clients within the same Python process and connect them to the same dashboard?
So that I have, for example, two clients using different cluster abstractions, but both submitting task information to the same dashboard:
from dask.distributed import Client, LocalCluster
from dask_jobqueue import LSFCluster

# Local cluster with its own scheduler and dashboard
local_cluster = LocalCluster(
    threads_per_worker=1,
    n_workers=8,
)
master_client = Client(
    address=local_cluster,
    set_as_default=True,
)
dashboard_url = master_client.dashboard_link

# LSF-backed cluster that should report to the same dashboard
arvados_cluster = LSFCluster(
    queue='long',
    cores=24,
    memory='1GB',
    walltime='72:00',
    job_extra_directives=['-o /tmp/job_out'],  # No mails
)
arvados_cluster.scale(1)
arvados_client = Client(
    address=arvados_cluster,
    set_as_default=False,
    dashboard_address=dashboard_url,  # This parameter would be needed
)
Hi @schulz-m, welcome to the Dask Discourse forum!
I don’t think this is possible, as the Dashboard is not linked to a Client, but to a Scheduler. If you launch multiple Scheduler (Cluster) instances, you’ll have one dashboard per Scheduler.
One way to achieve what you want would be to have a single Scheduler for all of your resources, but using dask-jobqueue with an existing Scheduler is not supported yet.
I don’t think multiple Schedulers could share information in the same Dashboard.
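To make that more concrete, here is a minimal sketch (using two LocalCluster instances purely for illustration) showing that each Cluster/Scheduler exposes its own dashboard URL:

from dask.distributed import Client, LocalCluster

# Two independent clusters, hence two independent schedulers
cluster_a = LocalCluster(n_workers=2, threads_per_worker=1)
cluster_b = LocalCluster(n_workers=2, threads_per_worker=1)

client_a = Client(cluster_a)
client_b = Client(cluster_b)

# Each scheduler serves its own dashboard on a different port
print(cluster_a.dashboard_link)
print(cluster_b.dashboard_link)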
Hi @guillaumeeb
Thank you for the quick response!
Using a single scheduler with several “Cluster Abstractions” would also be great - would you know if there is any roadmap for that to be implemented?
Sorry for the delay, there is a long-standing issue to implement this in dask-jobqueue.
Any help appreciated!
Thanks @guillaumeeb for the link! (and happy new year!)
A slight tangent: has there been any thought about supporting different "types" of workers, with different resources, on the same scheduler and cluster instead? That way a submission could go, for example, to a worker with a GPU or without one, or to a worker with more memory rather than less. (That would be a different approach to the use case we actually have.)
If I understand correctly, this is also the idea behind the issue linked above: you could have several Worker types on the same HPC cluster, annotate them using Worker Resources, and then submit tasks with annotations to use these resources.
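If it helps, here is a rough sketch of how Worker Resources can be used with dask-jobqueue today (the GPU resource name, queue, and sizing are just illustrative assumptions, not a recommendation for your setup):

import dask
from dask.distributed import Client
from dask_jobqueue import LSFCluster

# Start LSF workers that advertise a custom "GPU" resource to the scheduler
cluster = LSFCluster(
    queue='long',
    cores=24,
    memory='1GB',
    walltime='72:00',
    worker_extra_args=['--resources', 'GPU=1'],
)
cluster.scale(2)
client = Client(cluster)

def train(x):
    return x  # placeholder for real GPU work

# The scheduler will only run this task on workers that declare a GPU resource
future = client.submit(train, 42, resources={'GPU': 1})

# Or, when working with collections / delayed, annotate the relevant part of the graph
with dask.annotate(resources={'GPU': 1}):
    result = dask.delayed(train)(42)

Today each dask-jobqueue cluster still has a single worker specification, so running heterogeneous worker groups behind one scheduler is exactly what the linked issue asks for.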