
possible support for sparse arrays #295

Open
dcherian opened this issue Dec 1, 2023 · 2 comments

Comments


dcherian commented Dec 1, 2023

@ilan-gold @ivirshup if you have time, it'd be nice to see a groupby-reduce workflow you'd like to see supported natively by flox.


ivirshup commented Dec 7, 2023

Thanks for opening the issue!

I think the workflows we'd like to see are pretty straightforward. We'd like:

  • the array being computed on (e.g. the first argument) to be sparse, while producing a dense result
  • the distributed array being computed on to have sparse chunks
  • probably some additional reduction methods that ignore zeros; at the very least a count_nonzero/nzcount method (btw, would love suggestions on naming here)
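A minimal sketch of the first and third points, using a plain dict of (row, col) -> value as a stand-in for a real COO sparse array. All names here (e.g. groupby_reduce_coo) are illustrative, not actual flox API:

```python
# Sketch: groupby-reduce over a COO-style sparse 2-D array, producing a
# dense result. A count_nonzero reduction ignores implicit zeros by
# construction, since only stored entries are ever visited.

def groupby_reduce_coo(coo, shape, by, n_groups, func="sum"):
    """coo: dict mapping (row, col) -> value; `by` labels each column.

    Returns a dense n_rows x n_groups list-of-lists result.
    """
    n_rows, _ = shape
    out = [[0] * n_groups for _ in range(n_rows)]
    for (i, j), v in coo.items():       # iterate stored entries only
        g = by[j]
        if func == "sum":
            out[i][g] += v
        elif func == "count_nonzero":   # explicit stored zeros still skipped
            out[i][g] += 1 if v != 0 else 0
    return out

# 2x4 sparse array with three stored values
coo = {(0, 0): 1.0, (0, 2): 2.0, (1, 3): 5.0}
by = [0, 0, 1, 1]  # group labels, one per column
print(groupby_reduce_coo(coo, (2, 4), by, 2, func="sum"))
# [[1.0, 2.0], [0, 5.0]]
print(groupby_reduce_coo(coo, (2, 4), by, 2, func="count_nonzero"))
# [[1, 1], [0, 1]]
```

The key property is that the cost scales with the number of stored values, not the dense shape, while the output is an ordinary dense array.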

I've thought a little bit about implementation.

  • I don't know if a ufunc-based approach will work. I recall @seberg working on some ufunc methods for sparse arrays at a sprint, but I don't recall if it ended up working out
  • I think only the in-memory array computing layer would need any work, since combining stats across chunks should be the same
  • I haven't figured out how this fits in with the existing engines, aside from just implementing sparse support in numpy_groupies
  • A graphblas engine would probably be very fast here
  • I only really care about 2-D sparse arrays at the moment
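The second point above can be sketched as follows: each chunk reduces its sparse data down to a small dense partial, and those partials combine exactly as dense ones would, so only the per-chunk step needs sparse awareness. Again a COO-style dict stands in for a real sparse chunk, and the helper names are hypothetical:

```python
# Sketch: per-chunk sparse reduction followed by a dense tree-combine.
# Only chunk_partial_sum touches the sparse representation.

def chunk_partial_sum(coo, by, n_rows, n_groups):
    """Reduce one sparse chunk (dict (row, col) -> value) to dense per-group sums."""
    out = [[0.0] * n_groups for _ in range(n_rows)]
    for (i, j), v in coo.items():
        out[i][by[j]] += v
    return out

def combine(p1, p2):
    """Combine two dense partials: elementwise sum, identical to the dense path."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(p1, p2)]

# Two chunks splitting the columns of the same 1x4 array
chunk_a = {(0, 0): 1.0, (0, 1): 3.0}  # local columns 0-1, labels [0, 1]
chunk_b = {(0, 0): 2.0}               # local columns 2-3, labels [1, 1]
p = combine(chunk_partial_sum(chunk_a, [0, 1], 1, 2),
            chunk_partial_sum(chunk_b, [1, 1], 1, 2))
print(p)  # [[1.0, 5.0]]
```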


seberg commented Dec 7, 2023

I recall @seberg working on some ufunc methods for sparse arrays at a sprint, but don't recall if it ended up working out

That mostly worked, but it was a bit slow. The approach was to extract the result sparsity pattern, then extract the data and apply the normal ufunc to it. But at least for memory-bound ufuncs like add, it was much slower (maybe 3-4x, though I don't recall exactly). That could probably be optimized a bit by specialization, e.g. of binary ufuncs. Still, given the way scipy.sparse works, it seemed potentially useful as a fallback for implementing any ufunc for any dtype NumPy supports, even dtypes scipy.sparse has never heard of, but probably not as a replacement for most operations.
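A rough sketch of the approach described above, again with a dict of (row, col) -> value standing in for a real sparse array: take the union sparsity pattern of the two operands, gather both data arrays along that pattern, and apply an ordinary scalar op to the aligned data. The pattern extraction and gather steps are exactly the extra memory traffic that would make this slower than a specialized kernel:

```python
# Sketch: generic binary op on two COO-style sparse arrays via
# (1) union sparsity pattern, (2) aligned data gather, (3) plain
# elementwise op, as a fallback for arbitrary ufuncs/dtypes.

def sparse_binary_op(a, b, op):
    """a, b: dicts (row, col) -> value. Returns the result in the same form."""
    pattern = sorted(set(a) | set(b))            # union sparsity pattern
    lhs = [a.get(k, 0.0) for k in pattern]       # gather aligned data...
    rhs = [b.get(k, 0.0) for k in pattern]
    data = [op(x, y) for x, y in zip(lhs, rhs)]  # ...then a plain elementwise op
    return {k: v for k, v in zip(pattern, data) if v != 0}  # drop new zeros

a = {(0, 0): 1.0, (1, 1): 2.0}
b = {(0, 0): -1.0, (0, 1): 4.0}
print(sparse_binary_op(a, b, lambda x, y: x + y))
# {(0, 1): 4.0, (1, 1): 2.0}
```

Note this only works as a fallback for ops where op(0, 0) == 0, so that entries outside the union pattern stay implicit; ops that map zeros to nonzeros would densify the result.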
