Dask-Kubernetes Usage

Hi, I have a dask-kubernetes usage question. We are trying to build internal dask, dask-notebook, and dask-kubernetes-operator Docker images using RHEL instead of Ubuntu as the base. I am now trying to create a KubeCluster, following this doc: KubeCluster — Dask Kubernetes 2021.03.0+130.g8718d9d documentation. I created a custom Helm chart for the operator that points to the image we built, and installed the Helm chart with a release name. We tried to create the cluster like this:

cluster = KubeCluster(name='foo', image='internal-repo-url/dask:latest')

However, we are observing that Dask is still pulling from the ghcr repo instead of the specified repo URL. Any idea how to make Dask point to our internal Artifactory? Help appreciated. Thanks!

Sorry you’re having trouble with this!

Just to clarify, there are two images we could be talking about:

  • We have ghcr.io/dask/dask-kubernetes-operator, which contains the controller application that watches the Kubernetes API for custom resource events and creates Pods, Services, etc. This is installed once per k8s cluster via a Helm chart. Folks shouldn't need to interact with this container directly unless they need to build their own image for security purposes or want to install a plugin into the controller.
  • We also have ghcr.io/dask/dask, which is the default image for creating clusters and is used for the Scheduler and Worker Pods. If you create a KubeCluster(name="foo") with no image specified it will fall back to this (see the sketch below). Our documentation for creating DaskCluster k8s resources also suggests this image by default. It is extremely common for users to want to override this with their own image that contains their software packages, etc.
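
You can see that fallback concretely with make_cluster_spec (which comes up later in this thread), since it builds the DaskCluster resource as a plain dict without creating anything on the cluster. A minimal sketch; the exact key path is an assumption based on the spec layout shown further down:

from dask_kubernetes.operator import make_cluster_spec

# Build the cluster spec without creating any Kubernetes resources.
# With no image argument, the spec falls back to the default image.
spec = make_cluster_spec(name="foo")
print(spec["spec"]["worker"]["spec"]["containers"][0]["image"])
# ghcr.io/dask/dask:latest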

Which image are you having trouble with? The controller, or the cluster?

Hi, thanks for your timely response.

  • So we have installed the dask-kubernetes-operator via the Helm chart only once on the Kubernetes cluster.
  • Now we are trying to launch the Scheduler and Worker Pods using the dask-kubernetes Python package, with this line: KubeCluster(name='foo', image='internal-repo-url/dask:latest')

It creates the Dask cluster behind the scenes on Kubernetes, but the Pods still point to the image ghcr.io/dask/dask instead of the supplied image internal-repo-url/dask:latest.

Any ideas or thoughts?

I’m a little confused as I am not able to reproduce this with dask-kubernetes version 2022.10.1.

Leave the default

from dask_kubernetes.operator import KubeCluster

cluster = KubeCluster(name="demo")
$ kubectl get pod -l dask.org/cluster-name=demo -o=jsonpath="{.items[0].spec.containers[0].image}" 
ghcr.io/dask/dask:latest

Set a custom image

from dask_kubernetes.operator import KubeCluster

cluster = KubeCluster(name="custom", image="foo.com/jacobtomlinson/dask:latest")
$ kubectl get pod -l dask.org/cluster-name=custom -o=jsonpath="{.items[0].spec.containers[0].image}" 
foo.com/jacobtomlinson/dask:latest

Hi, I am using dask-kubernetes version 2022.10.1 as well.

I tried it the same way you did. However, it is still pointing to the ghcr.io repo.

Also, is there a way to pass Docker registry credentials to KubeCluster?

I am not seeing this behaviour when I try it. Could you share the exact steps you are following so I can try and reproduce this? I’m not sure how to help otherwise.

To pass private registry credentials you'll need to follow the Kubernetes docs on pulling from a private registry. This generally involves creating a Secret in your namespace with the credentials and then setting the imagePullSecrets option on the pod specs, which you can do with the make_cluster_spec way of creating clusters.

Something like this:

Create the secret

$ kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>

Set the secret name in the scheduler and worker pod specs before creating the KubeCluster

from dask_kubernetes.operator import KubeCluster, make_cluster_spec

spec = make_cluster_spec(name="custom", image="foo.com/jacobtomlinson/dask:latest")
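# make_cluster_spec returns a plain dict mirroring the DaskCluster resource,
# so the scheduler and worker pod specs can be patched before creation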
spec["spec"]["worker"]["spec"]["imagePullSecrets"] = [{"name": "regcred"}]
spec["spec"]["scheduler"]["spec"]["imagePullSecrets"] = [{"name": "regcred"}]

cluster = KubeCluster(custom_cluster_spec=spec)
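
Once the pods come up you can confirm the secret was attached with the same sort of jsonpath query as above (this assumes the cluster name "custom"; imagePullSecrets is the standard pod spec field):

$ kubectl get pod -l dask.org/cluster-name=custom -o=jsonpath="{.items[0].spec.imagePullSecrets[0].name}"
regcred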

Hi, I tried setting up the credentials as you described above, but the command seems to hang after I run the KubeCluster line. Is there a way I can check any logs or run in verbose mode? I tried to get all pods via kubectl, and I don't see any pod named local-dask.

from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(name="local-dask", image="foo.net/container-sandbox/foo/dask:latest")

Sorry I missed the response on this thread. Newer versions of dask-kubernetes have better error handling around cluster startup. If you're still interested, I suggest upgrading to a newer version and seeing if you get more useful output.
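
In the meantime, a generic way to surface more detail is to turn up Python logging before creating the KubeCluster; dask-kubernetes and the Kubernetes client it uses log via the standard logging module, so a DEBUG basicConfig should surface their output. A minimal sketch:

import logging

# Emit DEBUG output from all loggers (including dask_kubernetes and its
# Kubernetes client) while the cluster starts up.
logging.basicConfig(level=logging.DEBUG)

from dask_kubernetes.operator import KubeCluster
cluster = KubeCluster(name="local-dask", image="foo.net/container-sandbox/foo/dask:latest")

You can also check the operator controller's own pod logs with kubectl to see whether it received the DaskCluster resource at all.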