Hello, we have a JupyterHub + dask-gateway deployment on k8s based on the public chart "daskhub". It sets up a gateway-server instance as a JupyterHub service. Users authenticate against the gateway server using per-user JupyterHub API tokens and spin up new Dask clusters on demand.
We now want to configure the gateway server to dynamically mount per-user volumes onto Dask worker pods.
For a quick test, I was able to mount a SHARED volume for all users' worker pods by adding `c.KubeClusterConfig.worker_extra_container_config` and `c.KubeClusterConfig.worker_extra_pod_config` to the `gateway.backend.extraConfig` section of the chart values.
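Roughly like this (a sketch; `shared-pvc`, the volume name, and the `/shared` mount path are placeholders for our actual names):

```python
# Runs inside the gateway server's extraConfig (Python).
# Mount one pre-existing PVC into every worker pod.
c.KubeClusterConfig.worker_extra_pod_config = {
    "volumes": [
        {
            "name": "shared-data",
            "persistentVolumeClaim": {"claimName": "shared-pvc"},
        },
    ],
}
c.KubeClusterConfig.worker_extra_container_config = {
    "volumeMounts": [
        {"name": "shared-data", "mountPath": "/shared"},
    ],
}
```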
But I notice that when mounting a per-user volume, worker pod spin-up time is much longer than when mounting only the shared volume (hardcoded in the config). I don't understand why; both the shared and per-user volumes already exist before the mount.
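For reference, the per-user variant I'm testing uses the gateway's cluster options handler, which receives the authenticated user. A minimal sketch of the approach; the `claim-<username>` PVC naming (kubespawner's default) and the mount paths are just illustrative conventions, not what our deployment necessarily uses:

```python
from dask_gateway_server.options import Options

def options_handler(options, user):
    # Both the shared PVC and a per-user PVC (assumed to be named
    # "claim-<username>" and to already exist) are mounted on workers.
    return {
        "worker_extra_pod_config": {
            "volumes": [
                {
                    "name": "shared-data",
                    "persistentVolumeClaim": {"claimName": "shared-pvc"},
                },
                {
                    "name": "user-data",
                    "persistentVolumeClaim": {"claimName": f"claim-{user.name}"},
                },
            ],
        },
        "worker_extra_container_config": {
            "volumeMounts": [
                {"name": "shared-data", "mountPath": "/shared"},
                {"name": "user-data", "mountPath": "/user"},
            ],
        },
    }

c.Backend.cluster_options = Options(handler=options_handler)
```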
Pod startup time will depend on how your cluster provisions volumes, which you haven't mentioned. Most likely, having both volumes assigned to the pod makes placement harder: the scheduler must find a node that can access every volume (e.g. one in the right availability zone), and the node itself may need changes, such as attaching the volumes, before the pod can start.