Hi, I have stumbled upon an issue with dask.distributed logging.
We use a custom log handler module that changes the behavior and format of standard Python logging so that records are emitted as JSON. I pass this module via the `--preload` argument to the worker and scheduler commands, and it sets up standard logging in our desired format.
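For reference, here is a simplified sketch of the preload module. The real handler and formatter live in our internal package; the `JsonFormatter` below is just a minimal stand-in:

```python
# preload_logging.py -- simplified stand-in for our internal module
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render every log record as a single JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
            "debug": {"name": record.name, "line": record.lineno},
        })


def dask_setup(dask_server):
    """Called by distributed's preload mechanism when the worker/scheduler starts."""
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    root = logging.getLogger()
    root.handlers = [handler]  # replace any existing handlers on the root logger
    root.setLevel(logging.INFO)
```

Both processes are started with it, e.g. `dask scheduler --preload preload_logging` and `dask worker <scheduler-address> --preload preload_logging`.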
So when I, for example, log something using standard `logging.info` (within a task on a worker), I get a nicely formatted message as expected:
{"timestamp":"2023-10-11T09:55:37.446000Z","level":"INFO","message":"Adding 1 and 2","debug":{"name":"__main__","module":"add","file":"(name='add.py', path='examples/docs/add.py')","line":5,"function":"add","exception":null,"process":"(id=28, name='Dask Worker process (from Nanny)')","thread":"(id=140700083545792, name='Dask-Default-Threads-28-0')","elapsed":7.44081}}
However, the dask library's own logs do not get formatted:
```
2023-10-11 09:55:39,941 - distributed.nanny - INFO - Closing Nanny gracefully at 'tcp://10.18.148.58:36249'. Reason: scheduler-close
2023-10-11 09:55:39,951 - distributed._signals - INFO - Received signal SIGTERM (15)
2023-10-11 09:55:39,951 - distributed.nanny - INFO - Closing Nanny at 'tcp://10.18.148.58:36249'. Reason: nanny-close
2023-10-11 09:55:39,951 - distributed.nanny - INFO - Nanny asking worker to close. Reason: nanny-close
```
I read that dask uses the standard Python logging library, so what am I doing wrong?