Sparse Distances #100
All existing metrics assume dense vector representations. When dealing with very high-dimensional vectors, sparse representations may provide huge space-efficiency gains: a million-dimensional f32 vector with only a hundred non-zero entries fits in well under a kilobyte stored, e.g., as (u32 index, f32 weight) pairs, versus 4 MB dense.

The only operation that needs to be implemented for Jaccard, Hamming, Inner Product, L2, and Cosine is a float-weighted vectorized set-intersection; a serial sketch follows below. We may expect 4-5 kinds of input vectors; the last may not be practically useful. The AVX-512 backend (Intel Ice Lake and newer, AMD Genoa) and the SVE backend (AWS Graviton, Nvidia Grace, Microsoft Cobalt) will see the biggest gains. Together with a serial backend, multiplied by the 4-5 input types and 5 distance functions, this may result in over 100 new kernels.

Any thoughts or recommendations? Is anyone else looking for this functionality?
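For concreteness, here is a minimal serial sketch of that weighted set-intersection, assuming each sparse vector is stored as a sorted array of u32 indices with a parallel array of f32 weights. The function name and layout are illustrative assumptions, not SimSIMD's actual interface; a SIMD backend would replace the scalar merge loop with vectorized index comparisons.

```c
#include <stddef.h>
#include <stdint.h>

/* Weighted set-intersection of two sparse vectors, each given as a sorted
 * u32 index array with a parallel f32 weight array. Accumulates the product
 * of weights at matching indices - i.e., the sparse dot product, the one
 * cross-vector primitive behind Inner Product, Cosine, and L2.
 * Hypothetical name and layout, for illustration only. */
static float sparse_weighted_intersect(                           //
    uint32_t const *a_idx, float const *a_wgt, size_t a_len,      //
    uint32_t const *b_idx, float const *b_wgt, size_t b_len) {
    float product = 0.0f;
    size_t i = 0, j = 0;
    while (i < a_len && j < b_len) {
        if (a_idx[i] < b_idx[j]) i++;            /* Advance the smaller index. */
        else if (a_idx[i] > b_idx[j]) j++;
        else product += a_wgt[i++] * b_wgt[j++]; /* Match: multiply weights. */
    }
    return product;
}
```

Everything else is per-vector or derived: Cosine needs the two squared norms alongside the intersection, L2 follows from ‖a−b‖² = ‖a‖² + ‖b‖² − 2·(a·b), and the unweighted Jaccard and Hamming variants reduce to counting matching indices instead of multiplying weights.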
Comments

Thanks for this library! Any updates on sparsity support? Similar to …

Yes, @ogencoglu, sparsity is already implemented, as in …

In numpy/scipy, can SimSIMD improve such matrix multiplications? That's my use case.

Cool! We will have a few related releases, but more likely in October/November. Can you please open a separate feature request for Sparse Matrix Multiplications? And, as always, it helps if you can spread the word about the library - it helps us prioritize features and work between different projects, @ogencoglu 🤗