Traefik fails in DaskHub Helm chart-based setup

I am working on a proof-of-concept application, with the intent of doing some evaluation/measurements in the AWS cloud. I have an EKS-based setup with the cluster autoscaler, using Kubernetes 1.21.
At the end of September I was able to run the same setup scripts, but for about three weeks now they have failed right at the GatewayCluster initialization. The underlying problem is that Traefik doesn't work and the gateway API server cannot be contacted.

Here is the Traefik pod log:

time="2022-10-28T10:27:25Z" level=info msg="Configuration loaded from flags."
time="2022-10-28T10:27:25Z" level=warning msg="Cross-namespace reference between IngressRoutes and resources is enabled, please ensure that this is expected (see AllowCrossNamespace option)" providerName=kubernetescrd
time="2022-10-28T10:27:26Z" level=error msg="subset not found for default/api-daskhub-dask-gateway" namespace=default providerName=kubernetescrd ingress=api-daskhub-dask-gateway

I have no idea what subset it is looking for.
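
From what I can tell, this Traefik error seems to mean that the referenced Service has no ready endpoints behind it. I guess the next step would be to check the Service endpoints and the gateway API pod, with something like:

kubectl get endpoints api-daskhub-dask-gateway -n default
kubectl get pods -n default -l gateway.dask.org/instance=daskhub-dask-gateway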

I found other things which may point to the cause of the problem.
I deploy DaskHub with the following commands:

helm repo add dask https://helm.dask.org/
helm repo update
helm install --version 2022.8.2 daskhub dask/daskhub --values=daskhub.yaml

(I also tried with helm upgrade --install daskhub dask/daskhub --values=daskhub.yaml)
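
For reference, the chart versions available in the repo can be listed with:

helm search repo dask/daskhub --versions
helm search repo dask/dask-gateway --versions
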
In Lens, under Helm Charts, I see the following listed:

dask 2022.8.2 
dask-gateway 2022.10.0
dask-kubernetes-operator 2022.10.1
daskhub 2022.8.2 

But both the api-daskhub-dask-gateway and traefik-daskhub-dask-gateway pods are labeled with the wrong Helm chart version:

app.kubernetes.io/version=2022.6.1
gateway.dask.org/instance=daskhub-dask-gateway
helm.sh/chart=dask-gateway-2022.6.1

Nowhere in my scripts is 2022.6.1 specified.
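
My guess is that the daskhub chart pins its dask-gateway dependency to that version internally. The dependency versions declared by a given chart release should be visible in its metadata, e.g.:

helm show chart dask/daskhub --version 2022.8.2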

Please give me some hints, because I am running out of time for this PoC and we may have to choose a completely different technology just because of this blocking issue.
We are a ground-penetrating radar (NDI technology) company with significant impact in this domain.

@consideRatio may have some thoughts here.

I checked out the dask-gateway Helm chart repo, but I don't understand why it is pulling dask-gateway version 2022.10.0 and dask-kubernetes-operator version 2022.10.1, when in fact I would like to stick to one version and avoid surprises from changes slipping in.
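
To inspect (and pin) exactly what gets pulled in, I suppose the chart can be downloaded locally and its declared dependencies listed, e.g.:

helm pull dask/daskhub --version 2022.8.2 --untar
helm dependency list ./daskhub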

I saw somewhere that the CRDs need to be patched; I tried to find a newer version than 2022.4.0 but couldn't find anything better… I applied this:

❯ kubectl apply -f https://raw.githubusercontent.com/dask/dask-gateway/2022.4.0/resources/helm/dask-gateway/crds/daskclusters.yaml
Warning: resource customresourcedefinitions/daskclusters.gateway.dask.org is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/daskclusters.gateway.dask.org configured
❯ kubectl apply -f https://raw.githubusercontent.com/dask/dask-gateway/2022.4.0/resources/helm/dask-gateway/crds/traefik.yaml
Warning: resource customresourcedefinitions/ingressroutes.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.containo.us configured
Warning: resource customresourcedefinitions/ingressroutetcps.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.containo.us configured
Warning: resource customresourcedefinitions/ingressrouteudps.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressrouteudps.traefik.containo.us configured
Warning: resource customresourcedefinitions/middlewares.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.containo.us configured
Warning: resource customresourcedefinitions/middlewaretcps.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/middlewaretcps.traefik.containo.us configured
Warning: resource customresourcedefinitions/serverstransports.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/serverstransports.traefik.containo.us configured
Warning: resource customresourcedefinitions/tlsoptions.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.containo.us configured
Warning: resource customresourcedefinitions/tlsstores.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/tlsstores.traefik.containo.us configured
Warning: resource customresourcedefinitions/traefikservices.traefik.containo.us is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.containo.us configured
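
To confirm what actually got applied, the installed Traefik CRDs can be listed with, for example:

kubectl get crd | grep traefik.containo.us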

Then I removed the traefik pod so it would be recreated automatically.
Now the Traefik pod log no longer shows the last error:

"subset not found for default/api-daskhub-dask-gateway" namespace=default providerName=kubernetescrd ingress=api-daskhub-dask-gateway

I recreated the jupyter-admin pod too, just to start more cleanly… Unfortunately the communication still doesn't seem to work. It fails at the same first step, when the client API first contacts the gateway API server.

I did a few more clean-start tests. I have to say that this error in the Traefik pod

"subset not found for default/api-daskhub-dask-gateway" namespace=default providerName=kubernetescrd ingress=api-daskhub-dask-gateway

does not disappear consistently after applying the two patches (see above).

Maybe I am chasing something that Coiled also decided to drop ("Why we passed on Kubernetes").
Who knows what the correlation between these two is.