It seems like there is a race condition in dask that leads to the same task being offered to more than one worker. This leads to fatal bugs when an 'impossible' transition results, e.g.:
RuntimeError: Task 'xyz' transitioned from processing to memory on worker <WorkerState 'tcp://XXX', name: XXX, status: running, memory: 555, processing: 1>, while it was expected from <WorkerState 'tcp://YYY', name: YYY, status: init, memory: 0, processing: 1>. This should be impossible.
Is there any known workaround for this?
Also, we're running a fairly old version of dask (2022.11.1); has this been fixed in a more recent release?