Hi,
The Related section helped me find a solution.
In case somebody else struggles with this, the following works just fine:
import dask
import dask.dataframe as dd
import pandas as pd
import pyarrow as pa

# Keep list columns as Python objects instead of converting to Arrow strings
dask.config.set({"dataframe.convert-string": False})

a = pd.DataFrame({"int_array_column1": [[1, 2, 3]], "int_array_column2": [[1, 2, 3]]})
ddf = dd.from_pandas(a, npartitions=1)

# Pass an explicit pyarrow schema so the list-typed columns are written correctly
ddf.to_parquet("testdd.parquet", schema={
    "int_array_column1": pa.list_(pa.int64()),
    "int_array_column2": pa.list_(pa.int64()),
})
Please feel free to close the thread. Apologies for the mess; hopefully somebody can use this.