Unexpected Dask cluster behavior on Docker setup

@pavithraes Thank you very much! Here is the traceback:

app             | file_path=== /shared/test4gb.csv summary_blocksize= 128000000.0
app             | in summary for dask computation....
dask_worker_2        | distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
dask_worker_1        | distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
dask_worker_2        | distributed.nanny - INFO - Worker process 100 was killed by signal 15
dask_scheduler       | distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://172.24.0.6:37543', name: tcp://172.24.0.6:37543, memory: 0, processing: 16>
dask_scheduler       | distributed.core - INFO - Removing comms to tcp://172.24.0.6:37543
dask_worker_2        | distributed.nanny - WARNING - Restarting worker
dask_worker_1        | distributed.nanny - INFO - Worker process 100 was killed by signal 15
dask_scheduler       | distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://172.24.0.5:34833', name: tcp://172.24.0.5:34833, memory: 0, processing: 32>
dask_scheduler       | distributed.core - INFO - Removing comms to tcp://172.24.0.5:34833
dask_scheduler       | distributed.scheduler - INFO - Lost all workers
dask_worker_1        | distributed.nanny - WARNING - Restarting worker
dask_worker_2        | distributed.worker - INFO -       Start worker at:     tcp://172.24.0.6:40681
dask_worker_2        | distributed.worker - INFO -          Listening to:     tcp://172.24.0.6:40681
dask_worker_2        | distributed.worker - INFO -          dashboard at:           172.24.0.6:37165
dask_worker_2        | distributed.worker - INFO - Waiting to connect to:  tcp://dask_scheduler:8786
dask_worker_2        | distributed.worker - INFO - -------------------------------------------------
dask_worker_2        | distributed.worker - INFO -               Threads:                         16
dask_worker_1        | distributed.worker - INFO -       Start worker at:     tcp://172.24.0.5:37363
dask_worker_2        | distributed.worker - INFO -                Memory:                   1.00 GiB
dask_worker_2        | distributed.worker - INFO -       Local Directory: /src/app/dask-worker-space/worker-_464h4d5
dask_worker_1        | distributed.worker - INFO -          Listening to:     tcp://172.24.0.5:37363
dask_worker_2        | distributed.worker - INFO - -------------------------------------------------
dask_worker_1        | distributed.worker - INFO -          dashboard at:           172.24.0.5:33659
dask_worker_1        | distributed.worker - INFO - Waiting to connect to:  tcp://dask_scheduler:8786
dask_worker_1        | distributed.worker - INFO - -------------------------------------------------
dask_worker_1        | distributed.worker - INFO -               Threads:                         16
dask_worker_1        | distributed.worker - INFO -                Memory:                   1.00 GiB
dask_worker_1        | distributed.worker - INFO -       Local Directory: /src/app/dask-worker-space/worker-0dfblw0s
dask_worker_1        | distributed.worker - INFO - -------------------------------------------------
dask_scheduler       | distributed.scheduler - INFO - Register worker <WorkerState 'tcp://172.24.0.6:40681', name: tcp://172.24.0.6:40681, memory: 0, processing: 32>
dask_scheduler       | distributed.scheduler - INFO - Starting worker compute stream, tcp://172.24.0.6:40681
dask_scheduler       | distributed.core - INFO - Starting established connection
dask_worker_2        | distributed.worker - INFO -         Registered to:  tcp://dask_scheduler:8786
dask_worker_2        | distributed.worker - INFO - -------------------------------------------------
dask_worker_2        | distributed.core - INFO - Starting established connection
dask_scheduler       | distributed.scheduler - INFO - Register worker <WorkerState 'tcp://172.24.0.5:37363', name: tcp://172.24.0.5:37363, memory: 0, processing: 0>
dask_scheduler       | distributed.scheduler - INFO - Starting worker compute stream, tcp://172.24.0.5:37363
dask_scheduler       | distributed.core - INFO - Starting established connection
dask_worker_1        | distributed.worker - INFO -         Registered to:  tcp://dask_scheduler:8786
dask_worker_1        | distributed.worker - INFO - -------------------------------------------------
dask_worker_1        | distributed.core - INFO - Starting established connection
dask_worker_2        | distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
dask_scheduler       | distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://172.24.0.6:40681', name: tcp://172.24.0.6:40681, memory: 0, processing: 19>
dask_scheduler       | distributed.core - INFO - Removing comms to tcp://172.24.0.6:40681
dask_worker_2        | distributed.nanny - INFO - Worker process 141 was killed by signal 15
dask_worker_2        | distributed.nanny - WARNING - Restarting worker
dask_worker_1        | distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
dask_worker_1        | distributed.nanny - INFO - Worker process 141 was killed by signal 15
dask_scheduler       | distributed.scheduler - INFO - Remove worker <WorkerState 'tcp://172.24.0.5:37363', name: tcp://172.24.0.5:37363, memory: 0, processing: 32>
dask_scheduler       | distributed.core - INFO - Removing comms to tcp://172.24.0.5:37363
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 15) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 19) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 17) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 13) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 1) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 11) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 22) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 20) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Task ('read-csv-998b9577be828ad2ffd0a47a3929fe2e', 3) marked as failed because 3 workers died while trying to run it
dask_scheduler       | distributed.scheduler - INFO - Lost all workers
dask_worker_1        | distributed.nanny - WARNING - Restarting worker
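
For reference, the computation behind this log is roughly the following (a simplified sketch: the scheduler address is the compose service name from the log, and the actual "summary" step is only approximated here with `describe()`). With a ~4 GB CSV and a 128 MB blocksize there are roughly 32 read-csv partitions, while each worker has 16 threads but only a 1 GiB memory budget, which seems to be why the nanny keeps hitting the 95% limit and restarting the workers.

```python
# Simplified sketch of the setup that produces the log above.
# The scheduler address matches the compose service (dask_scheduler:8786);
# describe() stands in for the real "summary" computation.
import dask.dataframe as dd
from dask.distributed import Client

client = Client("tcp://dask_scheduler:8786")

# ~4 GB file / 128 MB blocksize -> roughly 32 read-csv partitions,
# which matches the "processing: 32" counts in the scheduler log.
df = dd.read_csv("/shared/test4gb.csv", blocksize=128_000_000)

summary = df.describe().compute()  # placeholder for the actual summary step
```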