Regarding Dask queuing when many tasks are assigned to workers

My issue is that when all of the API request tasks get assigned to the workers at once, the workers don't have enough memory to process them and fail with out-of-memory errors. So I found a queuing approach: only once the previous tasks have completed is the next set of tasks allowed to be assigned to the workers. For that I used the daskqueue library in Python, which is similar to Celery (GitHub - AmineDiro/daskqueue: Distributed Task Queue based on Dask) - I have given the link to the daskqueue library here for your reference.
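To make the problem concrete, here is a minimal sketch of the pattern we started with - the `handle_request` function and the local cluster settings are placeholders for our real API processing code, not the actual implementation:

```python
from dask.distributed import Client

def handle_request(payload):
    # Placeholder for our real, memory-heavy API processing function.
    big = [payload] * 10_000_000  # allocates a large amount of memory per task
    return sum(big)

if __name__ == "__main__":
    # Small local cluster just for illustration.
    client = Client(n_workers=4, threads_per_worker=1)

    # Every incoming API request is submitted to the scheduler immediately,
    # so all tasks get assigned to the workers at once. With enough
    # requests, the workers run out of memory.
    futures = [client.submit(handle_request, i) for i in range(1000)]
    results = client.gather(futures)
```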

Below I have tried out one example for queuing the tasks, but I don't understand how the queuing process actually works, it isn't reflected in the dashboard, and I can't tell how the tasks are performing on the workers. A simplified version of what I ran is included after the links below.

(daskqueue/examples/perf_cpu_bound.py) - this is the example file for the queuing method that I worked from, located in the GitHub repo.

(daskqueue · PyPI) - this link covers the queuing method with examples (two example references).
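For convenience, here is roughly what I ran, adapted and simplified from that example - I am paraphrasing it, so the class names (`QueuePool`, `ConsumerPool`) and parameters are as I understood them from the README and may not match the current daskqueue version exactly:

```python
from distributed import Client
from daskqueue import ConsumerPool, QueuePool

def cpu_bound_task():
    # CPU-heavy work, similar to the repo example.
    return sum(i * i for i in range(10**7))

if __name__ == "__main__":
    client = Client(n_workers=4, threads_per_worker=1)

    # Queues hold the submitted items; consumers pull items off the
    # queues and execute them on the workers.
    queue_pool = QueuePool(client, n_queues=1)
    consumer_pool = ConsumerPool(client, queue_pool, n_consumers=4)
    consumer_pool.start()

    # Submit work items to the queue instead of client.submit().
    for _ in range(20):
        queue_pool.submit(cpu_bound_task)

    # Block until all queued items have been processed.
    consumer_pool.join()
```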

  1. So could you please tell me whether my approach is correct or not?
  2. If it is correct, can you please explain how the queuing process works in the above example?
  3. If the above approach is wrong, could you please guide me on how queuing can be done when more requests come in, so that tasks are balanced across the workers and the out-of-memory errors are resolved?
  4. We have integrated both Celery and Dask - will this create any issues in the long run?