I have a setup in which I assign a unique `Variable()` to each job submitted to the Dask cluster via `fire_and_forget()`. The variable serves as a stopping condition and can be set elsewhere in the code.
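For context, here is a minimal sketch of what I mean (the names, the iteration cap, and the in-process client are just for illustration; my real code connects to an actual cluster):

```python
from dask.distributed import Client, Variable, fire_and_forget, get_client

# In-process client just for this sketch; the real setup connects to a cluster.
client = Client(processes=False)

def job(stop_name, max_iters=5):
    # Worker side: check the per-job stopping condition each iteration.
    stop = Variable(stop_name, client=get_client())
    done = 0
    for _ in range(max_iters):
        if stop.get():
            break
        # ... one unit of work ...
        done += 1
    return done

# One unique Variable per submitted job, acting as its stop flag.
for i in range(3):
    Variable(f"stop-{i}", client=client).set(False)
    fire_and_forget(client.submit(job, f"stop-{i}"))

# Elsewhere in the code, a specific job can be told to stop:
Variable("stop-0", client=client).set(True)
```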
Now, here’s my concern: if, say, 100 of these jobs have finished running, would I end up with 100 “leaked” Dask variables held in memory? In other words, how do these variables get cleaned up? Do I need to delete them manually once they are no longer needed?
I would greatly appreciate any insights or suggestions.
Thanks in advance!