How do I avoid distributed.client - WARNING - Couldn't gather keys, rescheduling?

While running embarrassingly parallel computations with dask.compute(list_of_delayed_objects) on a Google Cloud deployment, using a dask-gateway cluster and a distributed client, I often run into messages like 2023-06-21 15:55:59,925 - distributed.client - WARNING - Couldn't gather 20042 keys, rescheduling {'mean_field_lsq-5917ffc7-0517-4d76-b64f-06fcdda84591':()… It seems to happen during the last ~1% of the computation.
The dashboard progress bar then reverses, and it looks like it reverses the same number of tasks as the number of keys in the warning.

Sometimes it is also followed by a CancelledError, as in the example below. That example consisted of 10 000 tasks, and the rescheduling started when there were 10 tasks left.

Example 1

Cores per worker: 3. Memory per worker: 6 GB

%%time 
results  = dask.compute(get_delayed_objects( 0, window_size, Nblocks ), 
                       get_delayed_objects( 1, window_size, Nblocks ),
                       get_delayed_objects( 2, window_size, Nblocks ),
                       get_delayed_objects( 3, window_size, Nblocks ),
                       get_delayed_objects( 4, window_size, Nblocks ),
                       get_delayed_objects( 5, window_size, Nblocks ), 
                       get_delayed_objects( 6, window_size, Nblocks ),
                       get_delayed_objects( 7, window_size, Nblocks ),
                       get_delayed_objects( 8, window_size, Nblocks ),
                       get_delayed_objects( 9, window_size, Nblocks )
                      )

2023-08-11 12:08:55,405 - distributed.client - WARNING - Couldn't gather 15 keys, rescheduling {'optimize-708294f0-14a4-4ce8-9c28-bfbf6469c128': (), 'optimize-84e96850-0446-4d0b-a67a-53cadc0c3d01': (), 'optimize-7a0c0a45-1519-495f-a767-8413af0a6a60': (), 'optimize-fadba322-23f6-456c-a57d-dc9ee6d1293e': (), 'optimize-3835d312-033d-48d1-8e3d-42ea0ed87358': (), 'optimize-2c76a50f-4e6e-4cc5-8279-b4babf498228': (), 'optimize-3c743ba9-80d4-4438-a924-214a08520da0': (), 'optimize-ba67a080-fdd8-426d-9464-7120d2f92e27': (), 'optimize-515c7969-7c02-45f2-bbd1-934fd3c8c44e': (), 'optimize-77b98835-cdc4-419d-af5d-aa00943b6625': (), 'optimize-772d6fee-c66b-499c-928e-f66aa0de139e': (), 'optimize-87943e23-9fe9-4abc-96d3-5816216aafc8': (), 'optimize-392a3b7a-3d4d-41cf-a7db-d600cf772f6d': (), 'optimize-b0602ae0-8ca2-417d-b4c6-dcab5a3365d6': (), 'optimize-a9c2e4c7-65b2-4269-beee-08ece7f83ca8': ()}
---------------------------------------------------------------------------
CancelledError                            Traceback (most recent call last)
File <timed exec>:2

File /srv/conda/envs/notebook/lib/python3.10/site-packages/dask/base.py:600, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
    597     keys.append(x.__dask_keys__())
    598     postcomputes.append(x.__dask_postcompute__())
--> 600 results = schedule(dsk, keys, **kwargs)
    601 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/client.py:3122, in Client.get(self, dsk, keys, workers, allow_other_workers, resources, sync, asynchronous, direct, retries, priority, fifo_timeout, actors, **kwargs)
   3120         should_rejoin = False
   3121 try:
-> 3122     results = self.gather(packed, asynchronous=asynchronous, direct=direct)
   3123 finally:
   3124     for f in futures.values():

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/client.py:2291, in Client.gather(self, futures, errors, direct, asynchronous)
   2289 else:
   2290     local_worker = None
-> 2291 return self.sync(
   2292     self._gather,
   2293     futures,
   2294     errors=errors,
   2295     direct=direct,
   2296     local_worker=local_worker,
   2297     asynchronous=asynchronous,
   2298 )

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/utils.py:339, in SyncMethodMixin.sync(self, func, asynchronous, callback_timeout, *args, **kwargs)
    337     return future
    338 else:
--> 339     return sync(
    340         self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
    341     )

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/utils.py:406, in sync(loop, func, callback_timeout, *args, **kwargs)
    404 if error:
    405     typ, exc, tb = error
--> 406     raise exc.with_traceback(tb)
    407 else:
    408     return result

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/utils.py:379, in sync.<locals>.f()
    377         future = asyncio.wait_for(future, callback_timeout)
    378     future = asyncio.ensure_future(future)
--> 379     result = yield future
    380 except Exception:
    381     error = sys.exc_info()

File /srv/conda/envs/notebook/lib/python3.10/site-packages/tornado/gen.py:769, in Runner.run(self)
    766 exc_info = None
    768 try:
--> 769     value = future.result()
    770 except Exception:
    771     exc_info = sys.exc_info()

File /srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/client.py:2155, in Client._gather(self, futures, errors, direct, local_worker)
   2153     else:
   2154         raise exception.with_traceback(traceback)
-> 2155     raise exc
   2156 if errors == "skip":
   2157     bad_keys.add(key)

CancelledError: optimize-ebb00945-ca13-4947-9365-2e6aa2bb0f3b


  1. What types of tasks get rescheduled by the client? That is, what does this warning imply about my tasks that I am missing?

I don't understand why it presumably finishes a task and then reschedules it, recomputing a task that has already been done.


I read that increasing memory can solve a similar error (How do I solve "distributed.scheduler - ERROR - Couldn't gather keys"? - #2 by pavithraes). Doubling the memory per worker from Example 1 solves the CancelledError, but tasks still get rescheduled. This is shown in Example 2.

Example 2

Cores per worker: 3. Memory per worker: 12 GB


%%time
results  = dask.compute(get_delayed_objects( 0, window_size, Nblocks ), 
                       get_delayed_objects( 1, window_size, Nblocks ),
                       get_delayed_objects( 2, window_size, Nblocks ),
                       get_delayed_objects( 3, window_size, Nblocks ),
                       get_delayed_objects( 4, window_size, Nblocks ),
                       get_delayed_objects( 5, window_size, Nblocks ), 
                       get_delayed_objects( 6, window_size, Nblocks ),
                       get_delayed_objects( 7, window_size, Nblocks ),
                       get_delayed_objects( 8, window_size, Nblocks ),
                       get_delayed_objects( 9, window_size, Nblocks )
                      )

2023-08-11 13:33:14,811 - distributed.client - WARNING - Couldn't gather 13 keys, rescheduling {'optimize-515a7b36-a66e-4770-89a5-2a88913b9067': (), 'optimize-dc2ddfa7-0052-42d4-b7cd-425f99c9561b': (), 'optimize-568a94ec-08a3-4c2c-a302-91b0023248cd': (), 'optimize-30801cfc-0ca7-47f2-b90f-9448490cb994': (), 'optimize-afd5d0b6-6015-4e33-a0c0-8ee3464d60cb': (), 'optimize-81fca5e5-a6fe-4bf8-9d4f-fbaf3a787503': (), 'optimize-526c6040-3462-4acc-8137-84777ef2fdd1': (), 'optimize-f410368d-150f-49fc-9699-3e6994d62533': (), 'optimize-0cedfd8c-0130-406e-ba25-bdb9fc0a215e': (), 'optimize-368b5de0-d638-4100-8f57-92575c9de04d': (), 'optimize-ed1d33e4-6b5e-497e-97e4-32d716c9029c': (), 'optimize-014e3a40-0cc9-4410-bba6-38843682253b': (), 'optimize-b07cbdbb-02ae-4e6d-9cca-af6c468de09e': ()}

2023-08-11 13:39:25,751 - distributed.client - WARNING - Couldn't gather 10 keys, rescheduling {'optimize-b6cb06fa-a7a6-4230-bab3-4357d031e543': (), 'optimize-dc9bb730-10f1-4eea-bb18-4b28a111256c': (), 'optimize-11f4df18-ac0e-493b-b923-37964cc84ef5': (), 'optimize-467c36b3-3fa3-4c2d-a3db-f62fa453eba8': (), 'optimize-6f392b12-f51a-4bb1-8a0b-e0ef89d1e68a': (), 'optimize-da48e050-8fe8-4e90-a75e-4b1be757e955': (), 'optimize-aa18fa0f-c1e2-40ec-955b-7f53496bf97c': (), 'optimize-1c573fc7-baa5-48ee-98c9-f591a78076bc': (), 'optimize-2e60a310-ea07-4c96-8361-74bf4a206c64': (), 'optimize-348dff59-fbcb-451c-af7c-3c67715ba707': ()}

CPU times: user 1min 3s, sys: 3.06 s, total: 1min 6s
Wall time: 1h 20min 47s

The total process took 1 h 20 min, and the last 10 tasks took 26 minutes after a lot of rescheduling. So increasing memory seems to help with the CancelledError, but not with the rescheduling.

  2. From the Worker Memory Use table in the dashboard, it looks like the workers do not need the 12 GB I give them, so why does increasing worker_memory help?
(Dashboard screenshot)



  3. What can I do to have the computation finish without spending time rescheduling tasks? I could attempt to make a reproducible example, but it looks like similar errors have already been reproduced (below), so I am of course hoping there is a simple fix! :slight_smile:

Looks like your final data is about 170 GiB and you are fetching all of it to your local machine.

This data has to pass through the scheduler, and if the scheduler doesn't have enough memory it will die, which can manifest as a CancelledError. We're working on making this more transparent, see Scheduler gather should warn or abort requests if data is too large · Issue #7964 · dask/distributed · GitHub

While gathering, you can actually bypass the scheduler and fetch directly from the workers to the client (see the direct kwarg in API — Dask.distributed 2023.8.0 documentation), but you'd still have to make sure that your client (e.g. your laptop) has enough memory to hold the data.
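For illustration, a minimal sketch of a direct gather; a small local cluster and a trivial add task stand in for the gateway cluster and the real workload:

```python
import dask
from distributed import Client

# Local stand-in for the dask-gateway client; the pattern is the same.
client = Client(n_workers=1, threads_per_worker=2)

@dask.delayed
def add(a, b):  # placeholder for the real task
    return a + b

futures = client.compute([add(1, 2), add(3, 4)])
# direct=True fetches the results straight from the workers,
# bypassing the scheduler process entirely.
results = client.gather(futures, direct=True)
client.close()
# results == [3, 7]
```

You can also pass direct=True when creating the Client so that all gathers default to it.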

Are you actually intending to fetch 170 GiB? Typically we recommend storing final results in cloud blob storage (e.g. S3 on AWS) instead of fetching them to your local client.
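As a hedged sketch of that pattern (the helper optimize_and_store and the memory://results URL are made up for this example; on a real cluster you would point fsspec at a shared bucket such as s3://...), each task writes its own result to storage and only a tiny token travels back through the scheduler:

```python
import json

import dask
import fsspec  # ships with dask; also backs s3://, gcs://, etc.

@dask.delayed
def optimize_and_store(block_id, base="memory://results"):
    result = {"block": block_id, "value": block_id * 2}  # placeholder work
    # Write the result from the task itself instead of returning it.
    # (memory:// only works single-machine; on a cluster use a real bucket.)
    with fsspec.open(f"{base}/{block_id}.json", "w") as f:
        json.dump(result, f)
    return block_id  # only this small token is gathered

tokens = dask.compute(*[optimize_and_store(i) for i in range(3)])
# tokens == (0, 1, 2)
```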

1 Like

So the rescheduling happens because of high memory usage?

When calling results = dask.compute(list_of_delayed_objects), the results tuple holds ~10 MiB, which is why I gather and save it locally. I guess there is likely some high memory usage happening on the cluster during computation then. Why does it not show up on the dashboard?

When workers are spun up, they already start out with around 130 MiB each stored in memory. I am guessing this is their environment, among other things. If I scale up to e.g. ~500 workers, this sums to ~70 GiB on the dashboard before calling dask.compute()
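For reference, this baseline can be measured directly with client.run, which executes a function on every worker (a sketch on a throwaway local cluster; rss_mib is just a helper name made up here):

```python
import psutil  # a dependency of distributed, so available on workers
from distributed import Client

client = Client(n_workers=2, threads_per_worker=1)

def rss_mib():
    # Resident memory of the worker process, in MiB
    return psutil.Process().memory_info().rss / 2**20

per_worker = client.run(rss_mib)  # {worker address: MiB}
client.close()
```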

Then, when the dask.compute(list_of_delayed_objects) run starts, each task takes ~70 MiB of input and returns ~1 KiB of output (after a long, iterative scipy.optimize.minimize process).

Example of how workers are initialized (not MRE though)
from dask_gateway import GatewayCluster, Gateway
from distributed import Client
g = Gateway()
g.list_clusters()
options = g.cluster_options()
options.worker_cores = 3; options.worker_memory = 6
cluster = g.new_cluster(options) # Creates a cluster with those options


cluster.scale(n)
### I wait for some workers to be spun up here, and then proceed to start dask.compute(...) below:


from scipy.optimize import minimize, ...

@dask.delayed
def optimize(data_scattered, ... etc... ):
    ### Does some data reduction and starts a scipy.optimize.minimize process on the reduced data
    def function( parameters ): return negative_loglikelihood(parameters, data,..., etc...)
    result = minimize( function, x0, etc...) #, callback=callback ) #, 'disp':True }
    return result.x, result.fun, result.success, result.message, result.status, result.nfev, result.nit, result.njev


def get_delayed_objects( pressurelevel, window_size, Nblocks ):
    ### Gathers ~70MiB of data locally and scatters it to the cluster ( instead of having each task load the same data 10 000 times )
    data_scattered, time_scattered, lat_scattered, lon_scattered = client.scatter(data), client.scatter(time), client.scatter(lat), client.scatter(lon)
    ....
    ### Delayed objects are then built for the cluster workers in the loop below
    list_of_delayed_objects = []
    for x, y in zip(x_masked, y_masked):
        delayed_optimize = optimize( data_scattered, time_scattered, etc... )
        list_of_delayed_objects.append( delayed_optimize )
    return list_of_delayed_objects

%%time
results  = dask.compute( get_delayed_objects( plevel, window_size, Nblocks )  )

@ofk123 check the nannies' logs. They'll likely say that the worker exceeded 95% memory usage and had to be terminated. This in turn causes everything inside it to be recomputed from scratch and gather() to fail. If the spike is fast enough, you may not notice it in the dashboard.

The second most common cause is a segmentation fault triggered by whatever library you're running on top of dask. Again, either the nanny's or the worker's log will tell you what just happened.
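Note that nanny logs are separate from both the worker and the scheduler logs, and you can pull all three through the client without touching the dashboard. A quick sketch on a local cluster:

```python
from distributed import Client

# With processes=True (the default) each worker runs under a nanny.
client = Client(n_workers=1, threads_per_worker=1)

worker_logs = client.get_worker_logs()           # {worker address: [(level, message), ...]}
nanny_logs = client.get_worker_logs(nanny=True)  # same shape, taken from the nannies
scheduler_logs = client.get_scheduler_logs()     # [(level, message), ...]
client.close()
```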

1 Like

I re-ran a version of the code with the same initialization as above.
Warnings occurred when there were ~10 tasks left, out of 6240.

Call to dask.compute
%%time
delayed_obj = ( get_delayed_objects( 6, window_size, Nblocks ),
                get_delayed_objects( 7, window_size, Nblocks ),
                get_delayed_objects( 8, window_size, Nblocks ),
                get_delayed_objects( 9, window_size, Nblocks ),
                get_delayed_objects( 10, window_size, Nblocks ),
                get_delayed_objects( 11, window_size, Nblocks ) )
res  = dask.compute( delayed_obj )

I attach the part of the log where the warnings start. I am not familiar with distributed beyond using it with dask.

  • Are the nannies’ logs the same as the Scheduler log (opened by clicking Info in the dashboard)?

  • How should I interpret this message, which mentions one of the tasks?
    2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-4b4d7e48-7020-40c9-9bd7-ba4a2cc610b2 NoneType: None

Scheduler tls://10.8.3.9:8786

2023-08-25 18:06:36,514 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://10.8.5.12:40425', name: dask-worker-83db662f9d9e4be78c53f9383c423fc5-tw2kv, status: closed, memory: 0, processing: 0>

2023-08-25 18:06:36,706 - distributed.scheduler - INFO - Remove worker <WorkerState 'tls://10.8.5.8:35793', name: dask-worker-83db662f9d9e4be78c53f9383c423fc5-wsxk2, status: closed, memory: 0, processing: 0>

2023-08-25 18:08:07,036 - distributed.scheduler - ERROR - Couldn't gather keys {'optimize_covparam_vel-dcf454d0-c956-4881-abe4-6ae57d0ea0a9': [], 'optimize_covparam_vel-963b6116-295d-4ae3-be6f-db396840faf7': [], 'optimize_covparam_vel-6e69f49e-7a74-4150-bf87-b46732b4aed9': [], 'optimize_covparam_vel-ec92df26-c371-4ff5-ad8a-0c2fcebbe031': [], 'optimize_covparam_vel-c9dda398-c4e7-47a1-84fb-9e7770170f13': [], 'optimize_covparam_vel-84a48140-5dff-49c2-b2ad-7c96dcbe9a08': [], 'optimize_covparam_vel-bcca3487-8b55-4abb-9a6d-6e8ec7ed7823': [], 'optimize_covparam_vel-d92174c7-78e9-47f9-992f-de1efe83a5b8': [], 'optimize_covparam_vel-f6a49405-78d0-4502-8a0d-d1840db81974': [], 'optimize_covparam_vel-ee04807e-03ea-4c24-b0f6-1c045b52c06d': [], 'optimize_covparam_vel-bb2d6656-3131-4232-8ef8-c93c67613fc5': [], 'optimize_covparam_vel-e65a1a4a-2fc9-44d4-b3a7-9cffcf12b531': [], 'optimize_covparam_vel-18869e2b-665b-4082-a465-dbb625121d2f': [], 'optimize_covparam_vel-3abd028a-375b-4055-87ac-391cfc82833f': [], 'optimize_covparam_vel-227d5f45-5f14-47f2-afab-af8161f51463': [], 'optimize_covparam_vel-f66b4294-6ca7-4059-aa3d-0822310e678c': [], 'optimize_covparam_vel-ece35c1b-b286-4f5f-8e72-7d95a2a132cb': [], 'optimize_covparam_vel-717c3e95-0eb8-4aa6-99ef-7c91850b7621': [], 'optimize_covparam_vel-f09d4b1c-4d99-4ed8-8a6d-c2ede17746e8': [], 'optimize_covparam_vel-66433f13-2e82-4534-8a95-5ada28e99abb': [], 'optimize_covparam_vel-c1e60b99-24c1-41c3-8bbd-968d6e352e72': [], 'optimize_covparam_vel-c97e6224-a19f-4fca-afb4-035fd8ec124b': [], 'optimize_covparam_vel-4da39788-6d91-4878-9fcf-8d99bbd97cab': [], 'optimize_covparam_vel-bfb34e47-9ce7-4809-b824-8db0fbf46c27': [], 'optimize_covparam_vel-09857d80-2254-4550-997a-636b9d062afb': [], 'optimize_covparam_vel-ca105808-ea80-4daf-8e47-7513229adedb': [], 'optimize_covparam_vel-373eae77-858a-44e2-83d9-cd1bc4ab5c49': [], 'optimize_covparam_vel-c919c928-1490-4aa8-ab9c-5271874c4d33': [], 'optimize_covparam_vel-2f1aae80-d5a1-4965-9442-8a8e6ae99203': [], 
'optimize_covparam_vel-649bf05c-134b-480e-94dc-c01dd0f1eeec': [], 'optimize_covparam_vel-e727a53d-9be9-421c-9464-783c783c6291': [], 'optimize_covparam_vel-f5b9d891-5a4a-4015-8a9d-f2b048f53a44': [], 'optimize_covparam_vel-deabbc75-763e-4345-95fa-30807856449b': [], 'optimize_covparam_vel-219821e1-368b-4ac1-88cd-c7713a867957': [], 'optimize_covparam_vel-c14c16da-dd40-40b7-b947-7a6524cb165d': [], 'optimize_covparam_vel-3b91ff63-c8a8-48d1-9bec-9b357b09d056': [], 'optimize_covparam_vel-2e2a81d1-cb28-4570-a878-1cfa19e96562': [], 'optimize_covparam_vel-e3b8d29b-2931-4504-8e4b-3ea3e7ba8ab1': [], 'optimize_covparam_vel-cb14107a-5fa6-467b-bb5d-c7c201f615d8': [], 'optimize_covparam_vel-f3d126a4-5cc7-4659-8878-3adec808e327': [], 'optimize_covparam_vel-487c840c-d141-4f32-ae65-e9223a881180': []} state: ['processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'memory', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing'] workers: [] NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-dcf454d0-c956-4881-abe4-6ae57d0ea0a9 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-963b6116-295d-4ae3-be6f-db396840faf7 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-6e69f49e-7a74-4150-bf87-b46732b4aed9 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-ec92df26-c371-4ff5-ad8a-0c2fcebbe031 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-c9dda398-c4e7-47a1-84fb-9e7770170f13 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-84a48140-5dff-49c2-b2ad-7c96dcbe9a08 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-bcca3487-8b55-4abb-9a6d-6e8ec7ed7823 NoneType: None

2023-08-25 18:08:07,037 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-d92174c7-78e9-47f9-992f-de1efe83a5b8 NoneType: None


##### REMOVED LINES HERE


2023-08-25 18:11:40,345 - distributed.scheduler - ERROR - broadcast to tls://10.8.42.9:45899 failed: OSError: Timed out trying to connect to tls://10.8.42.9:45899 after 30 s

2023-08-25 18:11:58,903 - distributed.scheduler - ERROR - broadcast to tls://10.8.42.9:45899 failed: OSError: Timed out trying to connect to tls://10.8.42.9:45899 after 30 s

2023-08-25 18:12:02,716 - distributed.scheduler - ERROR - Couldn't gather keys {'optimize_covparam_vel-4b4d7e48-7020-40c9-9bd7-ba4a2cc610b2': [], 'optimize_covparam_vel-31e83add-040a-4092-8a6f-de8b636a85b9': [], 'optimize_covparam_vel-9d1d7476-ea53-4ae2-90f1-622e4a8ebbdc': [], 'optimize_covparam_vel-d7f3d847-99c6-4f0f-9093-2a5bb108a6d8': [], 'optimize_covparam_vel-c6d6f4a8-305e-481f-b233-8f2b8993b0da': [], 'optimize_covparam_vel-c8a1be39-2167-490f-8e74-03ccbb0b96c7': [], 'optimize_covparam_vel-563fbd99-8e78-4d42-b656-f5c711b429ca': [], 'optimize_covparam_vel-af936fcd-eafb-4036-8457-0fcc686c17fa': [], 'optimize_covparam_vel-8ea2bd08-318f-4dea-b9fe-cef2340106ec': [], 'optimize_covparam_vel-d918a322-568b-4413-b686-75f261ed89d3': [], 'optimize_covparam_vel-acc9fd62-90cd-4e4c-801e-aadbd0ee6ac4': [], 'optimize_covparam_vel-44f34029-113c-4a65-82a5-2ceaf6fe7dd9': [], 'optimize_covparam_vel-1909c0aa-a392-49fe-905f-d4baf06691b8': [], 'optimize_covparam_vel-2c3b8f13-3c52-4ef6-b868-a84deac7227d': [], 'optimize_covparam_vel-b0f64237-aadf-4408-a138-5968368ce285': [], 'optimize_covparam_vel-eb51acb7-1c07-4e9a-8191-7263a4e164cb': [], 'optimize_covparam_vel-29ce9d2a-2e00-4b41-8aa3-bb820f2d6717': [], 'optimize_covparam_vel-19550ab0-8fa8-4a8d-b654-341eb434941d': [], 'optimize_covparam_vel-3d73d9b0-7021-4f9e-a6f7-578815c994bc': [], 'optimize_covparam_vel-388cb9ca-a9ce-466c-ba70-842b2c64ec00': [], 'optimize_covparam_vel-8c39a2ed-c250-4392-ad06-be5a24141531': [], 'optimize_covparam_vel-0fc4b5f3-6b19-49b6-9447-4a0efa29ece5': [], 'optimize_covparam_vel-06e032af-23c2-4725-a0d6-597336e77fe0': [], 'optimize_covparam_vel-b783f94a-5751-40ff-9115-a11a051bb5f9': [], 'optimize_covparam_vel-57f79f9f-c661-4db0-9b0b-2e4ab8c33c7e': [], 'optimize_covparam_vel-c3b8ef10-9b48-4e61-90a9-6d974620b0c1': [], 'optimize_covparam_vel-57e58d80-b463-4325-a331-5b309c84ebca': [], 'optimize_covparam_vel-403cddb4-f459-4633-afe2-83e4d8f19785': [], 'optimize_covparam_vel-8c97bd0e-2500-48db-b6ce-6122072b5945': [], 
'optimize_covparam_vel-36818236-ecae-4037-87c1-5cb5abc4564a': [], 'optimize_covparam_vel-a25f48a7-d84d-4af7-8a1d-55fc331c0ffd': [], 'optimize_covparam_vel-8876d536-2f87-48ac-ade5-e3f1893ad4f3': [], 'optimize_covparam_vel-02ea4012-d224-40d5-9d1d-2234d2a61b82': [], 'optimize_covparam_vel-dfc7f139-88e8-479b-bf39-47dde41d1133': [], 'optimize_covparam_vel-59d25949-d70d-41d3-aa94-60c96733c044': [], 'optimize_covparam_vel-bacf01bb-dafd-4aac-b967-8d4da5f0693c': [], 'optimize_covparam_vel-f9eea94d-9bb3-4750-ad2a-350f0b745afd': [], 'optimize_covparam_vel-4c07f49a-5c70-4594-93e3-558991fb7049': [], 'optimize_covparam_vel-a9761965-438c-4d7c-b621-384299e1b9bc': [], 'optimize_covparam_vel-ce1f2999-b12b-48f9-ad78-3f2e59223596': [], 'optimize_covparam_vel-5883b0c3-b9ee-459c-b94c-8e59a7127076': [], 'optimize_covparam_vel-f09d4b1c-4d99-4ed8-8a6d-c2ede17746e8': [], 'optimize_covparam_vel-2a193a48-0e5c-4bc9-bad6-44ece822a774': [], 'optimize_covparam_vel-33a7d482-d965-49b4-af35-edbf1093fe7e': [], 'optimize_covparam_vel-13f51a2c-a124-4020-8807-aae135162e7f': [], 'optimize_covparam_vel-3715a692-810a-4eee-8c18-8483cb5f6d19': [], 'optimize_covparam_vel-47f82541-4143-4425-bc0d-39ab8a0a582b': [], 'optimize_covparam_vel-b0fd9d2d-55f7-463d-a632-a919bed9e931': [], 'optimize_covparam_vel-8e747224-8732-4806-8f80-3bc5e1cdda0c': [], 'optimize_covparam_vel-cef6257b-764a-4da2-9d22-5b9e5a07fcd0': [], 'optimize_covparam_vel-18a9824a-7258-4bbc-ab2c-962c96fe8cbd': [], 'optimize_covparam_vel-d6fbd766-a6f6-46b1-8a6d-c45b0ef916e1': [], 'optimize_covparam_vel-f335ca3d-409a-4e83-bed1-1ff6f488ddc8': [], 'optimize_covparam_vel-d66fc0e8-8031-45ea-9746-3810b7bd38f2': [], 'optimize_covparam_vel-613df2c6-5e42-4fe8-be46-c70a5b1253a7': [], 'optimize_covparam_vel-b0ac923c-06a6-421e-a249-e3bb18e56f5d': [], 'optimize_covparam_vel-e4d11d16-456c-4388-911b-4f7d3a23697f': [], 'optimize_covparam_vel-4ad09a73-9385-48a5-959b-a53ae9387f1b': [], 'optimize_covparam_vel-ad07262e-c21d-44c0-9315-0f48c7ae4115': [], 
'optimize_covparam_vel-e61489db-0af8-4c58-985b-418f005f4c0b': [], 'optimize_covparam_vel-4c3d6189-127f-4e14-b6d9-c46423fde083': [], 'optimize_covparam_vel-80e20b49-40b4-4eaa-b680-04088f713d0f': [], 'optimize_covparam_vel-35824252-e82e-4c20-a335-6684ed62ee09': [], 'optimize_covparam_vel-d11c1713-0b00-4f47-b017-78656228587b': [], 'optimize_covparam_vel-8b0ea644-6189-49e2-8382-8af868fb05ea': [], 'optimize_covparam_vel-fe8e12e7-2559-40fc-85bc-982da9199046': [], 'optimize_covparam_vel-cd76f989-f4be-4a10-9e82-927fbb7b9244': [], 'optimize_covparam_vel-10ddb126-a35e-45d5-b5d1-3e8cafb87181': [], 'optimize_covparam_vel-63634254-3dc1-4477-b46c-452c2346d365': [], 'optimize_covparam_vel-b5184d7a-4e1f-4c2c-a051-b9864638136e': [], 'optimize_covparam_vel-e4b7bf61-927d-42ba-a6b1-2c7aebbe3d45': [], 'optimize_covparam_vel-c93477e3-bc6e-410c-97f2-9829c85d89ba': [], 'optimize_covparam_vel-fa255d7b-9705-4bdd-94fc-914f9f67fa3c': [], 'optimize_covparam_vel-975ec0d5-6f79-4e2d-bef4-4c352d1b6509': [], 'optimize_covparam_vel-e9ac23ac-0bc4-4e8c-9db7-3a853fcab463': [], 'optimize_covparam_vel-978990e3-f80c-416b-bf5c-a91ef4dc4f7b': [], 'optimize_covparam_vel-1429a468-07be-4cae-914e-c2d081cad145': [], 'optimize_covparam_vel-2e2a81d1-cb28-4570-a878-1cfa19e96562': [], 'optimize_covparam_vel-851066f5-43bd-4c87-a3bf-463278c7e889': [], 'optimize_covparam_vel-b776f48f-0496-470d-83de-76727cada4e7': [], 'optimize_covparam_vel-ec753907-9ad4-4a30-9821-232770099490': [], 'optimize_covparam_vel-2c30197a-18c1-4e58-bea2-75f96dd88ec1': [], 'optimize_covparam_vel-86f8d5da-1598-404d-a6d8-69ba2d8b3122': [], 'optimize_covparam_vel-019d41bb-c848-46b0-9b3e-a442b39b00b2': [], 'optimize_covparam_vel-c80cfe87-7444-4755-a2ea-c8599ff0a924': [], 'optimize_covparam_vel-f549e25b-b454-41ea-850f-fe895e317810': [], 'optimize_covparam_vel-36e2801a-e64a-4016-a4d6-539de74b5ec8': [], 'optimize_covparam_vel-d6a341b1-de9d-46f9-af38-1427734c1688': [], 'optimize_covparam_vel-49ec74e3-e10b-4cd7-b42f-64663a7baa0b': [], 
'optimize_covparam_vel-de241c35-a850-422b-aaa2-8844fb8c6d8c': [], 'optimize_covparam_vel-ca105808-ea80-4daf-8e47-7513229adedb': [], 'optimize_covparam_vel-7fa1688d-9af8-4e8b-bfed-c1801517791e': [], 'optimize_covparam_vel-1bbcf81a-8795-4c9e-b79e-627ff0cce796': [], 'optimize_covparam_vel-74a35679-a652-4301-8170-b4acccfb50f1': [], 'optimize_covparam_vel-10a8ab04-5e7d-4230-ae1d-44a5599d1691': [], 'optimize_covparam_vel-7b62e9ff-4875-439f-93bf-07973aaee920': [], 'optimize_covparam_vel-da81bea0-724b-4f77-8f55-1d503d8a728a': [], 'optimize_covparam_vel-7d34c3b9-0469-44c5-a63d-1e1e6ce12aab': [], 'optimize_covparam_vel-b872b387-2ad5-4998-a3df-620106310b90': [], 'optimize_covparam_vel-a27c80f7-4ab4-49ed-94fc-8ecbb2e70615': [], 'optimize_covparam_vel-0461771e-eea1-4010-a769-4a6006bad1c6': [], 'optimize_covparam_vel-51bb41f7-fc0b-4ec9-a112-e101a56dd706': []} state: ['processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 
'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing', 'processing'] workers: [] NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-4b4d7e48-7020-40c9-9bd7-ba4a2cc610b2 NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-31e83add-040a-4092-8a6f-de8b636a85b9 NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-9d1d7476-ea53-4ae2-90f1-622e4a8ebbdc NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-d7f3d847-99c6-4f0f-9093-2a5bb108a6d8 NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-c6d6f4a8-305e-481f-b233-8f2b8993b0da NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-c8a1be39-2167-490f-8e74-03ccbb0b96c7 NoneType: None

2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-563fbd99-8e78-4d42-b656-f5c711b429ca NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-af936fcd-eafb-4036-8457-0fcc686c17fa NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-8ea2bd08-318f-4dea-b9fe-cef2340106ec NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-d918a322-568b-4413-b686-75f261ed89d3 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-acc9fd62-90cd-4e4c-801e-aadbd0ee6ac4 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-44f34029-113c-4a65-82a5-2ceaf6fe7dd9 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-1909c0aa-a392-49fe-905f-d4baf06691b8 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-2c3b8f13-3c52-4ef6-b868-a84deac7227d NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-b0f64237-aadf-4408-a138-5968368ce285 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-eb51acb7-1c07-4e9a-8191-7263a4e164cb NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-29ce9d2a-2e00-4b41-8aa3-bb820f2d6717 NoneType: None

2023-08-25 18:12:02,718 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-19550ab0-8fa8-4a8d-b654-341eb434941d NoneType: None

..... (the same error line repeats, once per remaining optimize_covparam_vel key)

The workers don't seem to be logging any warnings.
Since I used several hundred workers, I could not click through all of them.

  • Is there a way I could print all the workers' logs simultaneously, so that I can confirm whether the workers log anything different from the example below?
Worker log
2023-08-25 16:38:02,155 - distributed.worker - INFO - Start worker at: tls://10.8.0.11:36611

2023-08-25 16:38:02,156 - distributed.worker - INFO - Listening to: tls://10.8.0.11:36611

2023-08-25 16:38:02,156 - distributed.worker - INFO - Worker name: dask-worker-83db662f9d9e4be78c53f9383c423fc5-4s4p4

2023-08-25 16:38:02,156 - distributed.worker - INFO - dashboard at: 10.8.0.11:8787

2023-08-25 16:38:02,156 - distributed.worker - INFO - Waiting to connect to: tls://dask-83db662f9d9e4be78c53f9383c423fc5.prod:8786

2023-08-25 16:38:02,156 - distributed.worker - INFO - -------------------------------------------------

2023-08-25 16:38:02,156 - distributed.worker - INFO - Threads: 3

2023-08-25 16:38:02,156 - distributed.worker - INFO - Memory: 12.00 GiB

2023-08-25 16:38:02,156 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-010sm_xq

2023-08-25 16:38:02,156 - distributed.worker - INFO - -------------------------------------------------

2023-08-25 16:38:02,349 - distributed.worker - INFO - Registered to: tls://dask-83db662f9d9e4be78c53f9383c423fc5.prod:8786

2023-08-25 16:38:02,349 - distributed.worker - INFO - -------------------------------------------------

Hello,

Are the nannies’ logs the same as the scheduler log (shown by clicking Info in the dashboard)?

No, it’s the stdout/stderr of the dask-worker bash command

How should I interpret this message mentioning one of the tasks?
2023-08-25 18:12:02,717 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: [], optimize_covparam_vel-4b4d7e48-7020-40c9-9bd7-ba4a2cc610b2 NoneType: None

This is an issue that was fixed in 2023.8.1 (gather() should not remove unresponsive workers · Issue #7995 · dask/distributed · GitHub): the scheduler would erroneously shut down a worker that is temporarily unresponsive, typically because its GIL is locked, losing all of that worker's contents in the process.
I would advise retrying with the latest version of dask to see if the problem persists.
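For reference, a quick way to check whether a given deployment already includes that fix is to compare its CalVer string against 2023.8.1. This is a hypothetical helper of my own, not part of dask; with a live cluster, the actual versions can be read from client.get_versions():

```python
def has_gather_fix(version: str) -> bool:
    """Return True if this dask/distributed version string is at least
    2023.8.1, the release containing the fix for dask/distributed#7995."""
    # dask uses CalVer: YYYY.MM.PATCH
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts >= (2023, 8, 1)


print(has_gather_fix("2022.12.0"))  # False
print(has_gather_fix("2023.8.1"))   # True
```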

Is there a way I could print all the workers' logs simultaneously, so that I can confirm whether the workers log anything different from the example below?

This question is specific to dask-gateway clusters, and I’m afraid I’m not familiar with them. I would be surprised if the system didn’t offer anything for centralized log collection.


Is there a way I can access the stdout/stderr of the dask-worker bash command from within a JupyterLab workflow that uses a dask-gateway cluster?

@crusaderky

The function client.get_worker_logs seems to return the dask-worker logs (from this post). I see the function has an argument named nanny. Below is an example of the output with nanny=True, taken before running any computation.

client.get_worker_logs(nanny=True):
{'tls://10.8.10.2:34213': (('INFO',
   "2023-09-01 18:14:36,446 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.2:42197'"),),
 'tls://10.8.10.3:39955': (('INFO',
   "2023-09-01 18:14:36,881 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.3:35365'"),),
 'tls://10.8.10.4:33253': (('INFO',
   "2023-09-01 18:14:39,111 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.4:42165'"),),
 'tls://10.8.10.5:34085': (('INFO',
   "2023-09-01 18:14:37,257 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.5:45371'"),),
 'tls://10.8.10.6:39941': (('INFO',
   "2023-09-01 18:14:37,684 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.6:45223'"),),
 'tls://10.8.10.7:40643': (('INFO',
   "2023-09-01 18:14:38,613 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.7:42967'"),),}

Are these the correct nannies’ logs that I should be checking during the computation?

Yes, you can read them from there.


Hi,
below are the worker and nanny logs. They are not from the same session as above, but the tasks are initialized the same way and the warning is the same.

There is a distributed.worker - ERROR - Worker stream died during communication: tls://10.8.40.3:40177 message in 9 out of the 100 worker logs. How should I interpret this message, and how could it be related to the warning?

The other worker logs do not contain anything specific.

Example of one of the 9 worker logs that contain the error message
 'tls://10.8.30.3:40609': (('INFO',
   '2023-09-10 13:11:36,114 - distributed.worker - INFO -       Start worker at:      tls://10.8.30.3:40609'),
  ('INFO',
   '2023-09-10 13:11:36,114 - distributed.worker - INFO -          Listening to:      tls://10.8.30.3:40609'),
  ('INFO',
   '2023-09-10 13:11:36,114 - distributed.worker - INFO -           Worker name: dask-worker-67d1111900424a2a8b8c83242a51d8fc-xcrcw'),
  ('INFO',
   '2023-09-10 13:11:36,114 - distributed.worker - INFO -          dashboard at:             10.8.30.3:8787'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO - Waiting to connect to: tls://dask-67d1111900424a2a8b8c83242a51d8fc.prod:8786'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO -               Threads:                         10'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO -                Memory:                  20.00 GiB'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-ld_it2lt'),
  ('INFO',
   '2023-09-10 13:11:36,115 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 13:11:36,132 - distributed.worker - INFO - Starting Worker plugin /home/jovyan/methods_development/cloud_data/dynht_mappingcode.pye7419256-38b7-47a2-b188-387af1a6e309'),
  ('INFO',
   '2023-09-10 13:11:36,132 - distributed.worker - INFO - Starting Worker plugin /home/jovyan/methods_development/cloud_data/dynht_mappingcode.pyc8980b84-b5f9-478d-a8c3-80381b96c4e4'),
  ('INFO',
   '2023-09-10 13:11:37,284 - distributed.worker - INFO -         Registered to: tls://dask-67d1111900424a2a8b8c83242a51d8fc.prod:8786'),
  ('INFO',
   '2023-09-10 13:11:37,284 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 13:11:44,438 - distributed.worker - INFO - Starting Worker plugin /home/jovyan/methods_development/cloud_data/dynht_mappingcode.pyb0a74f7c-2305-44d7-922c-d27a55064901'),
  ('ERROR',
   '2023-09-10 13:25:44,529 - distributed.worker - ERROR - Worker stream died during communication: tls://10.8.40.3:40177
Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/comm/tcp.py", line 498, in connect
    stream = await self.client.connect(
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/tornado/tcpclient.py", line 275, in connect
    af, addr, stream = await connector.start(connect_timeout=timeout)
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.10/asyncio/tasks.py", line 456, in wait_for
    return fut.result()
asyncio.exceptions.CancelledError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/comm/core.py", line 291, in connect
    comm = await asyncio.wait_for(
  File "/srv/conda/envs/notebook/lib/python3.10/asyncio/tasks.py", line 458, in wait_for
    raise exceptions.TimeoutError() from exc
asyncio.exceptions.TimeoutError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/worker.py", line 2049, in gather_dep
    response = await get_data_from_worker(
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/worker.py", line 2854, in get_data_from_worker
    return await retry_operation(_get_data, operation="get_data_from_worker")
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/utils_comm.py", line 400, in retry_operation
    return await retry(
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/utils_comm.py", line 385, in retry
    return await coro()
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/worker.py", line 2831, in _get_data
    comm = await rpc.connect(worker)
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/core.py", line 1428, in connect
    return await connect_attempt
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/core.py", line 1364, in _connect
    comm = await connect(
  File "/srv/conda/envs/notebook/lib/python3.10/site-packages/distributed/comm/core.py", line 317, in connect
    raise OSError(
OSError: Timed out trying to connect to tls://10.8.40.3:40177 after 30 s')),
The other worker logs look like this (different timestamps because I recreated the warning)
 'tls://10.8.15.2:40375': (('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -       Start worker at:      tls://10.8.15.2:40375'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -          Listening to:      tls://10.8.15.2:40375'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -           Worker name: dask-worker-cbfacbd303bb469786a1927e9a1ec954-tggh6'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -          dashboard at:             10.8.15.2:8787'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO - Waiting to connect to: tls://dask-cbfacbd303bb469786a1927e9a1ec954.prod:8786'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -               Threads:                         10'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -                Memory:                  20.00 GiB'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO -       Local Directory: /tmp/dask-worker-space/worker-lklcchvp'),
  ('INFO',
   '2023-09-10 16:13:17,664 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 16:13:17,677 - distributed.worker - INFO - Starting Worker plugin /home/jovyan/methods_development/cloud_data/dynht_mappingcode.pycfcf6c93-7bb6-4653-9a2c-5f1fef96ab6c'),
  ('INFO',
   '2023-09-10 16:13:19,318 - distributed.worker - INFO -         Registered to: tls://dask-cbfacbd303bb469786a1927e9a1ec954.prod:8786'),
  ('INFO',
   '2023-09-10 16:13:19,318 - distributed.worker - INFO - -------------------------------------------------'),
  ('INFO',
   '2023-09-10 16:13:44,680 - distributed.worker - INFO - Starting Worker plugin /home/jovyan/methods_development/cloud_data/dynht_mappingcode.py9620fd32-0717-4e5b-9335-c912ec609dd6'))
And this is the full nanny log, generated with client.get_worker_logs(nanny=True) (different timestamps because I recreated the warning)
{'tls://10.8.0.3:41097': (('INFO',
   "2023-09-10 16:06:23,884 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.0.3:40381'"),),
 'tls://10.8.1.3:35087': (('INFO',
   "2023-09-10 16:06:16,522 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.1.3:43417'"),),
 'tls://10.8.10.2:46741': (('INFO',
   "2023-09-10 16:13:27,932 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.10.2:42473'"),),
 'tls://10.8.100.3:34857': (('INFO',
   "2023-09-10 16:06:23,741 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.100.3:41499'"),),
 'tls://10.8.11.3:44775': (('INFO',
   "2023-09-10 16:13:38,223 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.11.3:40359'"),),
 'tls://10.8.12.3:41871': (('INFO',
   "2023-09-10 16:13:21,410 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.12.3:43865'"),),
 'tls://10.8.13.3:37271': (('INFO',
   "2023-09-10 16:13:20,632 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.13.3:39419'"),),
 'tls://10.8.14.3:36639': (('INFO',
   "2023-09-10 16:13:26,967 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.14.3:36981'"),),
 'tls://10.8.15.2:40375': (('INFO',
   "2023-09-10 16:13:16,097 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.15.2:46643'"),),
 'tls://10.8.16.3:41373': (('INFO',
   "2023-09-10 16:13:24,644 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.16.3:37577'"),),
 'tls://10.8.17.2:41257': (('INFO',
   "2023-09-10 16:13:23,289 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.17.2:46069'"),),
 'tls://10.8.18.3:34511': (('INFO',
   "2023-09-10 16:13:17,892 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.18.3:45789'"),),
 'tls://10.8.19.2:38311': (('INFO',
   "2023-09-10 16:13:28,614 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.19.2:46575'"),),
 'tls://10.8.2.2:40933': (('INFO',
   "2023-09-10 16:13:27,170 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.2.2:43821'"),),
 'tls://10.8.20.3:42643': (('INFO',
   "2023-09-10 16:13:15,961 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.20.3:46503'"),),
 'tls://10.8.21.3:44423': (('INFO',
   "2023-09-10 16:13:22,675 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.21.3:44237'"),),
 'tls://10.8.22.2:43451': (('INFO',
   "2023-09-10 16:13:33,147 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.22.2:45203'"),),
 'tls://10.8.23.4:43623': (('INFO',
   "2023-09-10 16:30:36,161 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.23.4:46843'"),),
 'tls://10.8.24.3:40255': (('INFO',
   "2023-09-10 16:13:30,660 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.24.3:41611'"),),
 'tls://10.8.25.2:46521': (('INFO',
   "2023-09-10 16:13:27,688 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.25.2:42307'"),),
 'tls://10.8.26.2:46525': (('INFO',
   "2023-09-10 16:13:29,563 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.26.2:42103'"),),
 'tls://10.8.27.3:34591': (('INFO',
   "2023-09-10 16:19:55,180 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.27.3:45015'"),),
 'tls://10.8.28.3:39437': (('INFO',
   "2023-09-10 16:20:06,209 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.28.3:37461'"),),
 'tls://10.8.29.4:36575': (('INFO',
   "2023-09-10 16:30:56,620 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.29.4:36729'"),),
 'tls://10.8.3.4:35653': (('INFO',
   "2023-09-10 16:21:12,591 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.3.4:40797'"),
  ('WARNING',
   '2023-09-10 16:21:50,438 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:22:28,536 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:23:02,296 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:23:41,112 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:24:20,530 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:25:00,028 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:25:39,637 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:26:18,676 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:26:49,242 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:27:23,821 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:28:02,326 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:28:40,730 - distributed.nanny - WARNING - Restarting worker')),
 'tls://10.8.30.2:42453': (('INFO',
   "2023-09-10 16:13:22,632 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.30.2:33339'"),),
 'tls://10.8.31.2:46541': (('INFO',
   "2023-09-10 16:13:21,817 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.31.2:42605'"),),
 'tls://10.8.32.2:45747': (('INFO',
   "2023-09-10 16:13:22,110 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.32.2:44159'"),),
 'tls://10.8.33.3:34693': (('INFO',
   "2023-09-10 16:13:16,844 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.33.3:35215'"),),
 'tls://10.8.34.3:46029': (('INFO',
   "2023-09-10 16:13:20,538 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.34.3:41289'"),),
 'tls://10.8.35.2:41061': (('INFO',
   "2023-09-10 16:13:17,340 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.35.2:39715'"),),
 'tls://10.8.36.3:37855': (('INFO',
   "2023-09-10 16:13:34,676 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.36.3:46235'"),),
 'tls://10.8.37.3:39269': (('INFO',
   "2023-09-10 16:13:24,862 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.37.3:38639'"),),
 'tls://10.8.38.3:39325': (('INFO',
   "2023-09-10 16:13:36,235 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.38.3:46883'"),),
 'tls://10.8.39.3:32867': (('INFO',
   "2023-09-10 16:13:23,260 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.39.3:44267'"),),
 'tls://10.8.4.3:33675': (('INFO',
   "2023-09-10 16:13:26,008 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.4.3:45011'"),),
 'tls://10.8.40.3:41771': (('INFO',
   "2023-09-10 16:13:37,219 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.40.3:41729'"),),
 'tls://10.8.41.3:42389': (('INFO',
   "2023-09-10 16:13:31,862 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.41.3:32787'"),),
 'tls://10.8.42.3:43333': (('INFO',
   "2023-09-10 16:13:24,167 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.42.3:45181'"),),
 'tls://10.8.43.3:43355': (('INFO',
   "2023-09-10 16:13:18,969 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.43.3:32945'"),),
 'tls://10.8.44.3:41091': (('INFO',
   "2023-09-10 16:13:19,900 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.44.3:40047'"),),
 'tls://10.8.45.3:44757': (('INFO',
   "2023-09-10 16:13:32,756 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.45.3:41677'"),),
 'tls://10.8.46.3:39121': (('INFO',
   "2023-09-10 16:13:32,312 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.46.3:37855'"),),
 'tls://10.8.47.3:44823': (('INFO',
   "2023-09-10 16:13:18,121 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.47.3:35129'"),),
 'tls://10.8.48.3:40243': (('INFO',
   "2023-09-10 16:13:29,321 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.48.3:40955'"),),
 'tls://10.8.49.3:35233': (('INFO',
   "2023-09-10 16:13:18,509 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.49.3:37095'"),),
 'tls://10.8.5.3:39791': (('INFO',
   "2023-09-10 16:13:26,663 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.5.3:43367'"),),
 'tls://10.8.50.4:41929': (('INFO',
   "2023-09-10 16:31:52,096 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.50.4:41897'"),),
 'tls://10.8.51.2:44267': (('INFO',
   "2023-09-10 16:13:22,841 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.51.2:37071'"),),
 'tls://10.8.52.3:45847': (('INFO',
   "2023-09-10 16:13:24,719 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.52.3:33285'"),),
 'tls://10.8.53.3:45297': (('INFO',
   "2023-09-10 16:13:32,838 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.53.3:36181'"),),
 'tls://10.8.54.3:36321': (('INFO',
   "2023-09-10 16:13:22,522 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.54.3:40563'"),),
 'tls://10.8.55.3:36637': (('INFO',
   "2023-09-10 16:13:26,797 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.55.3:33479'"),),
 'tls://10.8.56.3:39081': (('INFO',
   "2023-09-10 16:13:18,862 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.56.3:45169'"),),
 'tls://10.8.57.2:40805': (('INFO',
   "2023-09-10 16:13:24,574 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.57.2:42785'"),),
 'tls://10.8.58.3:45365': (('INFO',
   "2023-09-10 16:13:32,601 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.58.3:46067'"),),
 'tls://10.8.59.2:44991': (('INFO',
   "2023-09-10 16:13:24,927 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.59.2:36059'"),),
 'tls://10.8.6.3:35835': (('INFO',
   "2023-09-10 16:13:29,147 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.6.3:36651'"),),
 'tls://10.8.60.3:34307': (('INFO',
   "2023-09-10 16:13:25,726 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.60.3:40687'"),),
 'tls://10.8.61.3:38433': (('INFO',
   "2023-09-10 16:13:30,803 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.61.3:41053'"),),
 'tls://10.8.62.2:43335': (('INFO',
   "2023-09-10 16:13:17,823 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.62.2:34713'"),),
 'tls://10.8.63.3:36485': (('INFO',
   "2023-09-10 16:13:25,279 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.63.3:38741'"),),
 'tls://10.8.64.2:37233': (('INFO',
   "2023-09-10 16:13:26,017 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.64.2:45429'"),),
 'tls://10.8.65.2:38567': (('INFO',
   "2023-09-10 16:13:17,676 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.65.2:43733'"),),
 'tls://10.8.66.3:46491': (('INFO',
   "2023-09-10 16:13:20,761 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.66.3:35469'"),),
 'tls://10.8.67.3:33367': (('INFO',
   "2023-09-10 16:13:27,920 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.67.3:43065'"),),
 'tls://10.8.68.3:42325': (('INFO',
   "2023-09-10 16:13:17,062 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.68.3:40237'"),),
 'tls://10.8.69.3:35675': (('INFO',
   "2023-09-10 16:13:24,679 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.69.3:35043'"),),
 'tls://10.8.7.4:44137': (('INFO',
   "2023-09-10 16:30:49,488 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.7.4:40477'"),),
 'tls://10.8.70.2:36285': (('INFO',
   "2023-09-10 16:13:31,597 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.70.2:36313'"),),
 'tls://10.8.71.3:46479': (('INFO',
   "2023-09-10 16:13:21,233 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.71.3:39567'"),),
 'tls://10.8.72.3:43921': (('INFO',
   "2023-09-10 16:13:18,273 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.72.3:40715'"),),
 'tls://10.8.73.3:41427': (('INFO',
   "2023-09-10 16:13:21,284 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.73.3:39443'"),),
 'tls://10.8.74.2:34895': (('INFO',
   "2023-09-10 16:13:34,052 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.74.2:35855'"),),
 'tls://10.8.75.4:40599': (('INFO',
   "2023-09-10 16:31:09,954 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.75.4:41653'"),
  ('WARNING',
   '2023-09-10 16:31:40,478 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:32:11,186 - distributed.nanny - WARNING - Restarting worker'),
  ('WARNING',
   '2023-09-10 16:32:37,466 - distributed.nanny - WARNING - Restarting worker')),
 'tls://10.8.76.3:36305': (('INFO',
   "2023-09-10 16:13:30,692 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.76.3:38487'"),),
 'tls://10.8.77.3:33471': (('INFO',
   "2023-09-10 16:13:16,572 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.77.3:34181'"),),
 'tls://10.8.78.3:43071': (('INFO',
   "2023-09-10 16:13:32,147 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.78.3:43813'"),),
 'tls://10.8.79.3:44225': (('INFO',
   "2023-09-10 16:13:36,898 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.79.3:45381'"),),
 'tls://10.8.8.3:35791': (('INFO',
   "2023-09-10 16:13:24,943 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.8.3:45597'"),),
 'tls://10.8.80.3:41803': (('INFO',
   "2023-09-10 16:13:33,923 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.80.3:37063'"),),
 'tls://10.8.81.3:38501': (('INFO',
   "2023-09-10 16:13:26,598 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.81.3:35307'"),),
 'tls://10.8.82.3:37711': (('INFO',
   "2023-09-10 16:13:32,336 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.82.3:37759'"),),
 'tls://10.8.83.2:35321': (('INFO',
   "2023-09-10 16:13:25,885 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.83.2:46711'"),),
 'tls://10.8.84.3:33833': (('INFO',
   "2023-09-10 16:13:27,641 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.84.3:42747'"),),
 'tls://10.8.85.2:46375': (('INFO',
   "2023-09-10 16:13:28,204 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.85.2:41415'"),),
 'tls://10.8.86.3:36743': (('INFO',
   "2023-09-10 16:13:16,516 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.86.3:43291'"),),
 'tls://10.8.87.3:34215': (('INFO',
   "2023-09-10 16:13:23,414 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.87.3:45245'"),),
 'tls://10.8.88.3:43529': (('INFO',
   "2023-09-10 16:13:18,128 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.88.3:39967'"),),
 'tls://10.8.89.3:44969': (('INFO',
   "2023-09-10 16:13:24,401 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.89.3:37899'"),),
 'tls://10.8.9.3:46555': (('INFO',
   "2023-09-10 16:13:22,420 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.9.3:46265'"),),
 'tls://10.8.90.3:43709': (('INFO',
   "2023-09-10 16:13:27,703 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.90.3:43425'"),),
 'tls://10.8.91.2:44873': (('INFO',
   "2023-09-10 16:13:21,481 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.91.2:34317'"),),
 'tls://10.8.92.3:44019': (('INFO',
   "2023-09-10 16:13:20,934 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.92.3:41543'"),),
 'tls://10.8.93.3:34113': (('INFO',
   "2023-09-10 16:13:14,826 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.93.3:38683'"),),
 'tls://10.8.94.2:36215': (('INFO',
   "2023-09-10 16:13:29,415 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.94.2:43591'"),),
 'tls://10.8.95.3:37087': (('INFO',
   "2023-09-10 16:13:28,748 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.95.3:43781'"),),
 'tls://10.8.96.2:39773': (('INFO',
   "2023-09-10 16:13:30,790 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.8.96.2:37159'"),),
 'tls://10.9.49.3:45415': (('INFO',
   "2023-09-10 16:06:07,618 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.9.49.3:46197'"),),
 'tls://10.9.61.3:45855': (('INFO',
   "2023-09-10 16:06:27,940 - distributed.nanny - INFO -         Start Nanny at: 'tls://10.9.61.3:33285'"),)}

Other observations:

  • After this warning occurs, task processing seems to be unevenly distributed across the workers. Screenshot from the scheduler's Info tab below:
[Screenshot from the scheduler's Info tab]

  • The deployment I use currently runs version 2022.12.0 of dask and distributed, so I am not yet able to try the full computation with version 2023.8.1.
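Until the upgrade is possible, one knob worth noting: the 30 s limit in the OSError above corresponds to distributed's connect timeout, which can be raised through dask's configuration before the cluster and client are created. A sketch with illustrative values (this is not a fix for the underlying scheduler bug, just more headroom for temporarily unresponsive workers):

```python
import dask

# Give temporarily unresponsive workers more headroom before their
# comms are declared dead; the values here are illustrative.
dask.config.set({
    "distributed.comm.timeouts.connect": "60s",  # default is 30s
    "distributed.comm.retry.count": 3,           # default is 0
})
```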