Running exactly one task per Dask worker

Hi,

I am trying to submit a set of 12 tasks, each one on a separate Dask worker, using an HTCondor cluster. As far as I understand, I can accomplish this with the `resources` option of `client.submit`. Currently I have the following:

    from dask.distributed import Client

    client = Client(cluster)
    # Scale the cluster to the number of points (number of points = number of HTCondor jobs)
    cluster.scale(nPoints)

    # Submit the tasks (defined by taskMethod) to the cluster,
    # requesting one 'point' resource per task
    futureList = []
    for point in pointList:
        future = client.submit(taskMethod, point, resources={'point': 1})
        futureList.append(future)

    # Wait until all tasks are complete
    results = client.gather(futureList)

This should submit `taskMethod` for each 'point' on a separate worker. However, while the cluster seems to scale to the requested number of workers, the jobs do not appear to be executed. Is my understanding of the `resources` option wrong here?

I already added `export DASK_DISTRIBUTED__WORKER__RESOURCES__point=1` in the hope that this would tell Dask that every worker has exactly one 'point' resource available, but it does not seem to make a difference (presumably because the environment variable would have to be set in the environment of the HTCondor worker jobs, not just on the submitting machine).
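For what it's worth, here is a toy, stdlib-only sketch of what the resource constraint is supposed to do (this is an illustration of the idea, not Dask's actual scheduler): each worker advertises `{'point': 1}`, each task requests `{'point': 1}`, so at most one such task can be placed on a worker at a time.

```python
# Toy illustration of Dask-style resource constraints (not Dask's real scheduler).
workers = {f"worker-{i}": {"point": 1} for i in range(3)}

def assign(task_resources, workers):
    """Return the first worker whose free resources cover the request, else None."""
    for name, free in workers.items():
        if all(free.get(k, 0) >= v for k, v in task_resources.items()):
            # Reserve the resources on the chosen worker.
            for k, v in task_resources.items():
                free[k] -= v
            return name
    return None

# Four tasks, three workers with one 'point' each: the fourth task
# cannot be placed until a worker finishes and frees its resource.
placements = [assign({"point": 1}, workers) for _ in range(4)]
print(placements)  # → ['worker-0', 'worker-1', 'worker-2', None]
```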

OK, I solved the issue by adding `extra = ['--resources "point=1"']` to the cluster configuration.
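For anyone hitting the same problem, a minimal sketch of what that cluster configuration might look like (assuming dask-jobqueue's `HTCondorCluster`; the cores/memory/disk values are placeholders for your own job sizes):

```python
from dask_jobqueue import HTCondorCluster

# Sketch only: resource sizes are placeholders.
# 'extra' appends arguments to the dask-worker command line, so every
# worker process starts up advertising one 'point' resource.
cluster = HTCondorCluster(
    cores=1,
    memory="2GB",
    disk="1GB",
    extra=["--resources", "point=1"],
)
```

This works where the environment-variable approach did not, because the `--resources` flag travels with the worker command itself into the HTCondor job.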
