Deploying dask-kubernetes-operator with a ServiceMonitor enabled

Hello,

I’m trying to deploy dask-kubernetes-operator, and for each KubeCluster that is created I also want a ServiceMonitor to be created so I can visualize the Prometheus metrics in Grafana. Is there a way to do that?

I’m installing it as follows

$ helm repo add dask https://helm.dask.org
$ helm repo update
$ helm install -n data-experimental --generate-name dask/dask-kubernetes-operator --set rbac.cluster=false --set kopfArgs="{--namespace=data-experimental}"

I found that for dask/dask there is a value that can be set during helm install (scheduler.metrics.serviceMonitor.enabled) https://artifacthub.io/packages/helm/dask/dask

but there is no equivalent for dask/dask-kubernetes-operator
https://artifacthub.io/packages/helm/dask/dask-kubernetes-operator
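
For reference, in a values file that option would look something like this (only the scheduler.metrics.serviceMonitor.enabled key comes from the dask/dask chart documentation; it may sit alongside other related metrics keys):

# values.yaml for the dask/dask chart (not the operator chart).
# Only scheduler.metrics.serviceMonitor.enabled is taken from the chart's
# documented values; check the chart for the full set of related options.
scheduler:
  metrics:
    serviceMonitor:
      enabled: true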

Is there a way to do this during helm install, or do I need to specify it somehow in the Python code when the KubeCluster is created? Thanks a lot!

Currently we don’t have support for this in the Python API. If you’re using the YAML resources directly then you can create a ServiceMonitor at the same time.

If creating a ServiceMonitor via the Python API is a feature you would like to see could you open an issue on GitHub requesting it?


Thanks, I’ll open the issue on GitHub. Is there an example of how to do that with the YAML resources directly?

You can find out about the resources here: https://kubernetes.dask.org/en/latest/operator_resources.html

I guess you would create your own ServiceMonitor resource with a selector that matches the scheduler Service.
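
Untested, but a sketch of what that might look like is below. The label keys/values and the port name are assumptions about what the operator puts on the scheduler Service, so check your actual Service with kubectl get svc --show-labels and adjust to match:

# Sketch of a ServiceMonitor targeting a DaskCluster's scheduler Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-dask-cluster-scheduler            # hypothetical name
  namespace: data-experimental
spec:
  selector:
    matchLabels:
      dask.org/cluster-name: my-dask-cluster # assumed labels on the scheduler Service
      dask.org/component: scheduler
  endpoints:
    - port: http-dashboard                   # assumed port name; the scheduler serves
      path: /metrics                         # Prometheus metrics on the dashboard HTTP port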

I’ve opened an issue to track this on GitHub: Add optional ServiceMonitor · Issue #687 · dask/dask-kubernetes

Following up here for future readers. This is now supported in 2023.3.1.

https://kubernetes.dask.org/en/latest/operator_installation.html#prometheus
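
From memory the operator chart now exposes values along these lines (the exact key names here are an assumption on my part, so treat the linked docs and the chart's values.yaml as authoritative):

# values for dask/dask-kubernetes-operator -- key names are an assumption,
# check the linked Prometheus docs for the authoritative names
metrics:
  scheduler:
    enabled: true
    serviceMonitor:
      enabled: true
  worker:
    enabled: true
    podMonitor:
      enabled: true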