I have a Dask KubeCluster running in a separate process, in adaptive mode. Different jobs connect to the same KubeCluster, kicked off by external users, so I have no way of knowing when jobs will arrive, how many, or how often. Each job is very different in terms of its resource requirements, i.e. how many workers the KubeCluster should roughly have to accommodate it. When a job starts I can inspect its parameters and make a rough guess at how many workers need to be added to the cluster. Is there any way for the client to nudge the KubeCluster, as in "you should probably scale up by X workers now", and then let adaptive scaling handle the rest?
@pavithraes I tried using scheduler plugins but had no luck. We ended up using a queue to communicate between the client and the process that owns the cluster.
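For anyone landing here, a minimal sketch of that queue approach: the client pushes a rough worker estimate onto a shared queue, and the process that owns the KubeCluster consumes hints and raises the adaptive minimum, letting adaptive scaling handle scale-down afterwards. `distributed.Queue` and `cluster.adapt(minimum=..., maximum=...)` are real Dask APIs, but `estimate_workers`, its heuristic, and the queue name `"scale-hints"` are made-up placeholders.

```python
def estimate_workers(job_params: dict) -> int:
    """Rough guess at how many workers a job needs.
    Hypothetical heuristic: one worker per 4 GiB of input, clamped to [1, 100]."""
    gib = job_params.get("input_gib", 1)
    return max(1, min(100, gib // 4 + 1))


def send_hint(client, job_params):
    """Client side: push the estimate onto a queue hosted on the scheduler."""
    from distributed import Queue  # lazy import: needs a running scheduler
    Queue("scale-hints", client=client).put(estimate_workers(job_params))


def serve_hints(cluster, client):
    """Cluster-owner side: bump the adaptive minimum whenever a hint arrives.
    Adaptive scaling still scales back down once the job finishes."""
    from distributed import Queue
    q = Queue("scale-hints", client=client)
    while True:
        hint = q.get()  # blocks until some client sends a hint
        cluster.adapt(minimum=hint, maximum=max(hint * 2, 10))
```

Calling `cluster.adapt` again replaces the Adaptive policy with the new bounds; if you only need a one-shot bump rather than a floor, `cluster.scale(hint)` works too, though plain adaptive scaling may shrink the cluster again before the job's tasks land.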