**What happened**:
This issue first came up on [discourse](https://dask.discourse.group/t/h5py-objects-cannot-be-pickled-or-slow-processing/229). When using the multiprocessing scheduler with `h5py` objects, computation fails with `TypeError: h5py objects cannot be pickled`. This does not happen when using the distributed scheduler.
**Minimal Complete Verifiable Example**:
```python
import h5py
import dask.array as da
# create a fake HDF5 file for testing (close it before re-opening)
with h5py.File("tmp/mytestfile.hdf5", "w") as f:
    f.create_dataset("mydataset", (1000, 3), dtype="i")
# read it back in
f = h5py.File("tmp/mytestfile.hdf5", "r")
dset = f["mydataset"]
# send dask array result to map_blocks
dask_array = da.from_array(dset, chunks=(dset.shape[0], 1))
doubled = dask_array.map_blocks(lambda x: x * 2)
doubled.compute(scheduler='processes')
```
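A workaround that sidesteps the error is to keep only the file path in the task graph and open the file inside each task, so no `h5py` object ever crosses the process boundary. A sketch of that approach (the `read_column` helper and the column-wise layout are mine, not part of the original report):

```python
import os

import dask
import dask.array as da
import h5py

# recreate the test file from the example above
os.makedirs("tmp", exist_ok=True)
with h5py.File("tmp/mytestfile.hdf5", "w") as f:
    f.create_dataset("mydataset", (1000, 3), dtype="i")

@dask.delayed
def read_column(path, name, j):
    # open/close the file inside the task: only the path string,
    # dataset name, and column index get pickled, never an h5py object
    with h5py.File(path, "r") as f:
        return f[name][:, j]

columns = [
    da.from_delayed(read_column("tmp/mytestfile.hdf5", "mydataset", j),
                    shape=(1000,), dtype="i4")
    for j in range(3)
]
doubled = (da.stack(columns, axis=1) * 2).compute(scheduler="processes")
print(doubled.shape)  # (1000, 3)
```

This mirrors the `chunks=(dset.shape[0], 1)` layout of the example, one delayed task per column.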
<details>
<summary>Full traceback</summary>

```python-traceback
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/var/folders/hf/2s7qjx7j5ndc5220_qxv8y800000gn/T/ipykernel_8250/2047040186.py in <module>
17
18 # compute
---> 19 doubled.compute(scheduler='processes')
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/dask/base.py in compute(self, **kwargs)
288 dask.base.compute
289 """
--> 290 (result,) = compute(self, traverse=False, **kwargs)
291 return result
292
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/dask/base.py in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
571 postcomputes.append(x.__dask_postcompute__())
572
--> 573 results = schedule(dsk, keys, **kwargs)
574 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
575
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/dask/multiprocessing.py in get(dsk, keys, num_workers, func_loads, func_dumps, optimize_graph, pool, chunksize, **kwargs)
218 try:
219 # Run
--> 220 result = get_async(
221 pool.submit,
222 pool._max_workers,
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/dask/local.py in get_async(submit, num_workers, dsk, result, cache, get_id, rerun_exceptions_locally, pack_exception, raise_exception, callbacks, dumps, loads, chunksize, **kwargs)
492 # Main loop, wait on tasks to finish, insert new ones
493 while state["waiting"] or state["ready"] or state["running"]:
--> 494 fire_tasks(chunksize)
495 for key, res_info, failed in queue_get(queue).result():
496 if failed:
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/dask/local.py in fire_tasks(chunksize)
474 (
475 key,
--> 476 dumps((dsk[key], data)),
477 dumps,
478 loads,
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
75
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/cloudpickle/cloudpickle_fast.py in dump(self, obj)
600 def dump(self, obj):
601 try:
--> 602 return Pickler.dump(self, obj)
603 except RuntimeError as e:
604 if "recursion" in e.args[0]:
~/mambaforge/envs/dask-mini-tutorial/lib/python3.9/site-packages/h5py/_hl/base.py in __getnewargs__(self)
366 limitations, look at the h5pickle project on PyPI.
367 """
--> 368 raise TypeError("h5py objects cannot be pickled")
369
370 def __getstate__(self):
TypeError: h5py objects cannot be pickled
```
</details>
**Anything else we need to know?**:
I tried the same with zarr and there was no error:
```python
import zarr
import dask.array as da
# create fake zarr array
zarr_array = zarr.zeros((1000, 3), chunks=(1000, 1), dtype='i')
# convert to dask array
dask_array = da.from_zarr(zarr_array, chunks=(zarr_array.shape[0], 1))
# apply function
doubled = dask_array.map_blocks(lambda x: x * 2)
# compute
doubled.compute(scheduler='processes')
```
**Environment**:
- Dask version: 2022.02.1
- Python version: 3.9
- Operating System: Mac
- Install method (conda, pip, source): conda