Discussion:
[Numpy-discussion] ufunc for sum of squared difference
Matthew Harrigan
2016-11-04 17:11:36 UTC
I was reading this
<http://notes-on-cython.readthedocs.io/en/latest/std_dev.html> and got to
thinking about whether a ufunc could compute the sum of squared differences
in a single pass without a temporary array. The python code below
demonstrates a possible approach.

import numpy as np
x = np.arange(10)
c = 1.0
def add_square_diff(x1, x2):
    return x1 + (x2-c)**2
ufunc = np.frompyfunc(add_square_diff, 2, 1)
print(ufunc.reduce(x) - x[0] + (x[0]-c)**2)
print(np.sum(np.square(x-c)))

I have (at least) 4 questions:
1. Is it possible to pass run-time constants to a ufunc written in C for
use in its inner loop, and if so, how?
2. Is it possible to pass an initial value to reduce, to avoid the clean-up
required for the first element?
3. Does that ufunc work, or are there special cases which cause it to fall
apart?
4. Would a very specialized ufunc such as this be considered for
incorporation into numpy, since it would help reduce the time and memory use
of functions already in numpy?

Thank you,
Matt
Sebastian Berg
2016-11-04 17:56:48 UTC
Post by Matthew Harrigan
I was reading this and got to thinking about whether a ufunc could compute
the sum of squared differences in a single pass without a temporary
array. The python code below demonstrates a possible approach.
import numpy as np
x = np.arange(10)
c = 1.0
def add_square_diff(x1, x2):
    return x1 + (x2-c)**2
ufunc = np.frompyfunc(add_square_diff, 2, 1)
print(ufunc.reduce(x) - x[0] + (x[0]-c)**2)
print(np.sum(np.square(x-c)))
1. Is it possible to pass run time constants to a ufunc written in C
for use in its inner loop, and if so how?
I don't think it's anticipated, since a ufunc could in most cases use a
third argument, but a 3-arg ufunc can't be reduced. Not sure if there
might be some trickery possible.
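For instance, a quick check (a small sketch; reduce refuses anything that
isn't binary):

import numpy as np

add3 = np.frompyfunc(lambda a, b, c: a + b + c, 3, 1)
print(add3.nin)   # 3 inputs
# add3.reduce(np.arange(10)) raises a ValueError, since reduce only
# works for binary ufuncs
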
2. Is it possible to pass an initial value to reduce to avoid the
clean up required for the first element?
This is normally the identity. But the identity can only be 0, 1 or -1
right now, I think. The identity is what the output array gets
initialized with (which effectively makes it the first value passed
into the inner loop).
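For example (a quick sketch): reducing an empty array just returns the
identity, and frompyfunc leaves it unset:

import numpy as np

# the identity is what an empty reduction returns (and what seeds the output)
print(np.add.identity)                                # 0
print(np.multiply.identity)                           # 1
print(np.add.reduce(np.array([], dtype=float)))       # 0.0
print(np.multiply.reduce(np.array([], dtype=float)))  # 1.0

# frompyfunc does not set one, so its reduce has nothing to start from
uf = np.frompyfunc(lambda a, b: a + b, 2, 1)
print(uf.identity)                                    # None
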
3. Does that ufunc work, or are there special cases which cause it to
fall apart?
4. Would a very specialized ufunc such as this be considered for
incorporating in numpy since it would help reduce time and memory of
functions already in numpy?
I might be mixing things up, but IIRC the single-pass approach has bad
numerical accuracy, so I doubt that it would be a good default
algorithm.
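To illustrate the concern with a small sketch (assuming "single pass" means
the textbook sum(x**2) - n*mean**2 formula rather than subtracting a
precomputed mean):

import numpy as np

x = 1e8 + np.random.standard_normal(10**6)   # large offset, small spread

# two-pass: subtract the mean first, then sum the squares
two_pass = np.sum(np.square(x - x.mean()))

# textbook single-pass: sum(x**2) - n*mean**2, prone to catastrophic cancellation
single_pass = np.sum(np.square(x)) - x.size * x.mean()**2

print(two_pass, single_pass)   # the single-pass value loses most of its
                               # digits and can even come out negative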

- Sebastian
Matthew Harrigan
2016-11-04 19:42:25 UTC
I didn't notice identity before. Seems like frompyfunc always sets it to
None. If it were zero maybe it would work as desired here.

In the "writing your own ufunc" doc, I was wondering if the pointer to data
could be used to get a constant at runtime. If not, what could that be
used for?

static void double_logit(char **args, npy_intp *dimensions,
npy_intp* steps, void* data)

Why would the numerical accuracy be any different? The subtraction and
square operations look identical and I thought np.sum just calls
np.add.reduce, so the reduction step uses the same code and would therefore
have the same accuracy.

Thanks
Sebastian Berg
2016-11-04 21:33:05 UTC
I didn't notice identity before.  Seems like frompyfunc always sets
it to None.  If it were zero maybe it would work as desired here.
In the writing your own ufunc doc, I was wondering if the pointer to
data could be used to get a constant at runtime.  If not, what could
that be used for?
static void double_logit(char **args, npy_intp *dimensions,
                            npy_intp* steps, void* data)
Why would the numerical accuracy be any different?  The subtraction
and square operations look identical and I thought np.sum just calls
np.add.reduce, so the reduction step uses the same code and would
therefore have the same accuracy.
Sorry, I did not read it carefully; I guess `c` is the mean, so you are
doing the two-pass method.

- Sebastian
Matthew Harrigan
2016-11-11 16:25:58 UTC
I started a ufunc to compute the sum of square differences here
<https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>.
It is about 4x faster and uses half the memory compared to
np.sum(np.square(x-c)). This could significantly speed up common
operations like std and var (where c = np.mean(x)). It is faster because it
is a single pass instead of 3, and also because the inner loop is
specialized for the common reduce case, which might be an optimization worth
considering more generally.
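To spell out the "single pass instead of 3" claim, here is a sketch of what
the naive expression does, one full pass and one temporary per step:

import numpy as np

x = np.arange(10.0)
c = x.mean()

d = x - c           # pass 1: temporary array the same size as x
s = np.square(d)    # pass 2: a second full-size temporary
total = np.sum(s)   # pass 3: the actual reduction

# a fused sum-of-squared-differences inner loop touches x only once
# and allocates no temporaries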

I think I have answered my questions #2 and #3 above.

Can someone please point me to an example where "data" was used in a ufunc
inner loop? How can that value be set at runtime? Thanks
Julian Taylor
2016-11-11 18:38:51 UTC
Here is an old unfinished PR adding max_abs which has a similar purpose
to yours:
https://github.com/numpy/numpy/pull/5509

As far as I know, data is set when the ufunc is constructed and is part of
the ufunc object, so it is not really suitable for passing in constants at
call time.
Its intended purpose is to pass in functions to be called by generic loops
like PyUFunc_d_d.

Having an option to mark arguments of a ufunc as special in reductions
could be useful, e.g. it would allow a potential
(fused-)multiply-and-add ufunc to be used to implement a weighted sum.
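Roughly the semantics that would give, sketched in plain Python with a
hypothetical 3-argument fma(acc, w, x) = acc + w*x whose w argument is
marked as "special", i.e. stepped element-wise instead of being carried
through the reduction:

import numpy as np

def fma(acc, w, x):
    # hypothetical ufunc core: acc + w*x
    return acc + w * x

w = np.array([0.2, 0.3, 0.5])
x = np.array([10.0, 20.0, 30.0])

# what an fma reduction over x with w marked "special" would compute:
acc = 0.0
for wi, xi in zip(w, x):
    acc = fma(acc, wi, xi)

print(acc, np.dot(w, x))   # both print 23.0, i.e. the weighted sum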
Matthew Harrigan
2016-11-11 21:06:15 UTC
My possibly poorly-thought-out API would be something like
np.add_sq_diff.set_c(42.0).reduce(x), basically adding a setter method which
returns self. Thanks for explaining what data was intended to do. All of
this is really just a dance around the requirement that reduce only works
on binary functions. Would there be any interest in extending reduce to
allow for arbitrary numbers of input and output arguments?

Jerome Kieffer
2016-11-14 08:38:09 UTC
On Fri, 11 Nov 2016 11:25:58 -0500
Post by Matthew Harrigan
I started a ufunc to compute the sum of square differences here
<https://gist.github.com/mattharrigan/6f678b3d6df5efd236fc23bfb59fd3bd>.
It is about 4x faster and uses half the memory compared to
np.sum(np.square(x-c)).
Hi Matt,

Using *blas* you already win a factor of two (maybe more depending on your blas implementation):

% python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "np.sum(np.square(x-2.))"
10 loops, best of 3: 135 msec per loop

% python -m timeit -s "import numpy as np;x=np.linspace(0,1,int(1e7))" "y=x-2.;np.dot(y,y)"
10 loops, best of 3: 70.2 msec per loop


Cheers,
--
Jérôme Kieffer
eat
2016-11-14 20:38:25 UTC
Yeah,

but it's not so obvious what's happening "under the hood". Consider this
(with an old Win7 machine):
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64
bit (AMD64)]
np.__version__
'1.11.1'
x = np.linspace(0, 1, int(1e6))

timeit np.sum(np.square(x - 2.))
10 loops, best of 3: 23 ms per loop

y = x - 2.

timeit np.dot(y, y)
The slowest run took 18.60 times longer than the fastest. This could mean
that an intermediate result is being cached.
1000 loops, best of 3: 1.78 ms per loop

timeit np.dot(y, y)
1000 loops, best of 3: 1.73 ms per loop

Best,
eat
Jerome Kieffer
2016-11-16 08:29:24 UTC
On Mon, 14 Nov 2016 22:38:25 +0200
Post by eat
but it's not so obvious what's happening "under the hood". Consider this
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64
bit (AMD64)]
np.__version__
'1.11.1'
What matters is the "blas" library used under the hood, hence the
options passed to numpy at compile time. I noticed 20x differences
depending on the blas version.

But more importantly:
* results are the same (at the limit of the numerical precision)
* dot() was always faster than sum(square()), varying from a bit to a lot.

I agree this may change in future versions of numpy.

Cheers,
--
Jérôme Kieffer
Matthew Harrigan
2016-11-15 01:40:39 UTC
Still slower, and worse, it uses 2x the memory for the intermediate temporary
array.

I propose allowing implicit reductions with ufuncs. Specifically, if out is
provided with shape[axis] = 1, then pass it on to the ufunc with a stride
of 0. That should allow this to work:

x = np.arange(10)
def add_square_diff(x1, x2, x3):
    return x1 + (x2-x3)**2
result = np.zeros(1)
np.frompyfunc(add_square_diff, 3, 1)(result, x, np.mean(x), result)

Essentially it creates a reduce for a function which isn't binary. I think
this would be generally useful. For instance, finding the min and max in
one pass would be nice:

def minmax(x1, x2, x3):
    return min(x1, x3), max(x2, x3)
minn = np.array([np.inf])
maxx = np.array([-np.inf])
np.frompyfunc(minmax, 3, 2)(minn, maxx, x, minn, maxx)

Note it also allows for arbitrary initial values or identity to be
specified, possibly determined at run time. I think this would make ufuncs
even more universal.
Stephan Hoyer
2016-11-15 01:49:30 UTC
On Mon, Nov 14, 2016 at 5:40 PM, Matthew Harrigan <
Post by Matthew Harrigan
Essentially it creates a reduce for a function which isn't binary. I
think this would be generally useful.
NumPy already has a generic enough interface for creating such ufuncs. In
fact, it's called a "generalized ufunc":
https://docs.scipy.org/doc/numpy/reference/c-api.generalized-ufuncs.html

I think you could already write "implicit reductions" using gufuncs?
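For a rough illustration of the shape semantics (a slow pure-Python stand-in
via np.vectorize with a gufunc-style signature; this assumes a numpy new
enough to accept the signature argument):

import numpy as np

# sum of squared differences as a gufunc-style signature: (n),()->()
ssd = np.vectorize(lambda x, c: np.sum(np.square(x - c)),
                   signature='(n),()->()')

x = np.arange(12.0).reshape(3, 4)
print(ssd(x, x.mean(axis=1)))    # one result per row, shape (3,)
print(ssd(x.ravel(), x.mean()))  # a single result for the flattened array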
Matthew Harrigan
2016-11-17 01:18:34 UTC
Thanks for pointing me to that. I think it's a great match for a 1D case,
like the inner product of two arrays, in terms of signature. I don't see
how it works with higher-dimensional arrays, especially with something like
the axis parameter in ufunc.reduce. With what I proposed, for an array of
shape (M, N) for example, result.shape could be (1,) or (M, 1) or (1, N) for
reducing over the flattened array or either axis. Can you do something
like that with a gufunc, or do you need to iterate gufunc calls over
slices/views? Thanks again.