Summing excess of overlapping blocks

I am using `map_overlap` (or `overlap` and `map_blocks`) to apply a function which takes a window around each array element.

Currently, when calling `map_overlap` without trimming, the output array is larger than the input array, as the outputs of the blocks are simply concatenated:

[image: current_excess]

I need to retain the information in the excess, so I am not trimming. However, I want the excess to be summed onto the neighbouring block (in other words, for the block outputs to overlap), so that the output array has the same shape as the input array (ignoring for the moment any padding around the outside of the input array):
[image: overlap_excess]

Is there a way to do this using just Dask logic? I suppose I could do some slicing to get what I'm after if I know the chunk sizes and locations after calling map_overlap (which may re-chunk the array). Would these be given by the following?

res = dask.array.map_overlap(func, arr, depth=depth, trim=False, allow_rechunk=True)
res.chunks
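
For reference, this is the kind of quick check I have in mind (a (100, 100) array with (10, 10) chunks, depth=2 and boundary=0; the expected values in the comments are my understanding of the untrimmed behaviour):

import dask.array as da

arr = da.random.random((100, 100), chunks=(10, 10))
res = da.map_overlap(lambda x: x**2, arr, depth=2, boundary=0, trim=False)

# Input chunks: ((10, 10, ..., 10), (10, 10, ..., 10))
print(arr.chunks)

# With trim=False every block keeps its depth-wide halo on each side, so I
# would expect res.chunks to report the expanded sizes (10 + 2*2 = 14 here)
# rather than the input chunk sizes, and res.shape to be correspondingly larger.
print(res.chunks)
print(res.shape)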

Hi @jnahlers, welcome to the Dask community!

This does not sound like an easy problem. Maybe it would be easier to answer with a small reproducible example.

If I understand the problem correctly, I had a small idea while thinking about it, though I'm not sure it will work: maybe you could just do another overlap with the same depth on the already overlapped and computed array. This way, the first chunk of your example would be composed of (block 1 computed, block 1 excess, block 2 excess), and the second chunk of (block 1 excess, block 2 excess, block 2). Then you could do some blockwise computation to sum the other block's excess back into the current block, and just trim the 2*depth overlap. Is that clear?
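
To make that a bit more concrete, here is a rough, untested sketch of the idea in 2D (it assumes uniform chunks, boundary=0 and the same depth in both passes; the combine function and its slicing are just my guess at the bookkeeping, not an established Dask recipe):

import numpy as np
import dask.array as da

depth = 2
arr = da.random.random((100, 100), chunks=(10, 10))

# First pass: with trim=False every output block is (chunk + 2*depth) wide in
# each dimension and carries the excess it produced for its neighbours.
res = da.map_overlap(lambda x: x**2, arr, depth=depth, boundary=0, trim=False)

def combine(e, d=depth):
    # e is one block of the second overlap, (chunk + 4*d) wide per dimension:
    # e[d:-d, d:-d] is this block's own untrimmed output, and
    # e[2*d:-2*d, 2*d:-2*d] is the part that lies on the original chunk.
    core = e[2 * d:-2 * d, 2 * d:-2 * d].copy()
    # Fold the neighbours' excess back onto the edges of this chunk.
    core[:d, :] += e[:d, 2 * d:-2 * d]    # excess from the block above
    core[-d:, :] += e[-d:, 2 * d:-2 * d]  # ... from the block below
    core[:, :d] += e[2 * d:-2 * d, :d]    # ... from the block to the left
    core[:, -d:] += e[2 * d:-2 * d, -d:]  # ... from the block to the right
    core[:d, :d] += e[:d, :d]             # ... from the four diagonal blocks
    core[:d, -d:] += e[:d, -d:]
    core[-d:, :d] += e[-d:, :d]
    core[-d:, -d:] += e[-d:, -d:]
    return core

# Second pass: overlap the (still lazy) untrimmed result by the same depth so
# each block also sees the excess its neighbours produced for it, then shrink
# every block back to its original chunk size. Missing neighbours at the array
# edge contribute zeros (boundary=0), and the outer excess is simply dropped.
summed = da.map_overlap(combine, res, depth=depth, boundary=0, trim=False,
                        chunks=arr.chunks, dtype=res.dtype)

assert summed.shape == arr.shape

Everything stays lazy here, so the untrimmed intermediate never needs to be materialised as one big NumPy array; whether the extra overlap pass is worth it probably depends on your chunk-to-depth ratio.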

Thanks for your help. I suspect that my use case is a little different from the one map_overlap was designed for. As I understand it, map_overlap is mainly useful when you are mapping a function which takes a local neighbourhood and returns a single value (for example, taking a derivative). What I want to do is map a function which takes a local neighbourhood and returns a local neighbourhood. Those returned local neighbourhoods would then overlap in the final array.

What exactly 'overlapping' means is tricky: consider the various blending modes in image software. My need is for overlapping values to simply be summed (essentially an additive/linear-dodge blending mode).
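
For a concrete (made-up) 1D picture of what I mean by summing, with two chunks of size 4 and depth 2, in plain NumPy:

import numpy as np

depth = 2
# Each block's output covers its own 4 positions plus `depth` positions
# into each neighbour (and into the padding at the array edges).
block_a = np.ones(4 + 2 * depth)        # covers padded positions 0..7
block_b = np.full(4 + 2 * depth, 2.0)   # covers padded positions 4..11

out = np.zeros(4 + 4 + 2 * depth)       # padded output, positions 0..11
out[0:8] += block_a
out[4:12] += block_b

# The 8 positions of the original array, after trimming the outer padding:
print(out[depth:-depth])  # [1. 1. 3. 3. 3. 3. 2. 2.]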

I asked this question because returning local neighbourhoods, and summing them where they overlap, seemed to me like a natural idea, and I was curious whether there is a way of doing this with map_overlap that I am missing.

I’ve put together the following MWE for a 2D array, which hopefully clarifies what I am trying to achieve:

import numpy as np
import dask.array as da

arr = da.random.random((100, 100), chunks=(10, 10))
depth = 2

result = da.map_overlap(lambda x: x**2, arr, depth=depth, boundary=0, trim=False)
result_untrimmed = result.compute()

# Re-shape/re-slice the result array into the original shape, using an additive blending mode.

# Original (input) chunk sizes and the start index of each chunk
chunk_sizes = arr.chunks
chunk_indices = [np.cumsum(np.concatenate([[0], sizes[:-1]])) for sizes in chunk_sizes]

# Chunks with overlap (call them "blocks"): each grows by depth on every side
block_sizes = tuple(np.array(sizes) + 2 * depth for sizes in chunk_sizes)
block_indices = [np.cumsum(np.concatenate([[0], sizes[:-1]])) for sizes in block_sizes]

# Prepare final array
result_final = np.zeros(np.array(arr.shape) + 2 * depth)

# Iterate over the chunks/blocks, and add them to the final result array
for i in range(len(chunk_sizes[0])):
    for j in range(len(chunk_sizes[1])):

        # Get the size of the current block
        block_size = block_sizes[0][i], block_sizes[1][j]

        # Get the block
        block = result_untrimmed[block_indices[0][i]:block_indices[0][i] + block_size[0],
                                 block_indices[1][j]:block_indices[1][j] + block_size[1]]

        # Get the indices of the current chunk in the final array
        i_diff, j_diff = chunk_indices[0][i], chunk_indices[1][j]

        # Add the current block to the final array
        result_final[i_diff:i_diff + block_size[0], j_diff:j_diff + block_size[1]] += block

# If desired, trim the outer excess now.
result_final = result_final[depth:-depth, depth:-depth]

Of course, if for some reason the chunk sizes are relatively small and depth is large, such an approach can take up a lot of memory!