Scott Sievert
2016-05-29 03:53:33 UTC
I recently ran into an application where I had to compute many inner products quickly (roughly 50k inner products in less than a second). I wanted a vector of inner products over the 50k vectors, or `[x1.T @ A @ x1, ..., xn.T @ A @ xn]` with A.shape = (1k, 1k).
My first instinct was to look for a NumPy function to quickly compute this, such as np.inner. However, it looks like np.inner has some other behavior, and I couldn't get tensordot/einsum to work for me.
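For the record, einsum can express this batch of quadratic forms in a single call. A minimal sketch, assuming the 50k vectors are stacked as the rows of a matrix X (the post doesn't specify the storage layout, and the sizes below are scaled down for illustration):

```python
import numpy as np

# Scaled-down stand-ins for the 50k vectors of length 1k in the post.
n, d = 1000, 100
rng = np.random.default_rng(0)
X = rng.standard_normal((n, d))   # rows are the vectors x1, ..., xn
A = rng.standard_normal((d, d))

# quad[i] = sum over j, k of X[i, j] * A[j, k] * X[i, k] = xi.T @ A @ xi
quad = np.einsum('ij,jk,ik->i', X, A, X)

# Equivalent two-step form: one matrix product plus a row-wise reduction,
# which often runs faster because the GEMM goes through BLAS.
quad2 = ((X @ A) * X).sum(axis=1)
```

Both give the same vector of quadratic forms; which is faster depends on the BLAS backend and array sizes, so timing both on the real data is worthwhile.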
[a PR]: https://github.com/numpy/numpy/pull/7690
The main challenge is figuring out how to transition the behavior of all these operations while preserving backwards compatibility. Quite likely, we need to pick new names for these functions, though we should try to pick something that doesn't suggest they are second-class alternatives.
Do we choose new function names? Do we add a keyword arg that changes what np.inner returns?