Discussion: [Numpy-discussion] fpower ufunc
Charles R Harris
2016-10-21 02:58:06 UTC
Hi All,

I've put up a preliminary PR <https://github.com/numpy/numpy/pull/8190> for
the proposed fpower ufunc. Apart from adding more tests and documentation,
I'd like to settle a few other things. The first is the name; two names
have been proposed, and we should settle on one:

- fpower (short)
- float_power (obvious)

The second thing is the minimum precision. In the preliminary version I
have used float32, but perhaps it makes more sense for the intended use to
make the minimum precision float64 instead.
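
For illustration, roughly the difference at stake (a sketch; `np.float_power`
below assumes that name wins):

    import numpy as np

    # Plain np.power keeps integer inputs integer, so large results
    # silently wrap around in the platform integer type:
    print(np.power(10, 20))            # overflowed integer, not 1e+20

    # The proposed ufunc always computes in floating point; the open
    # question is whether the narrow input types map to float32 or
    # float64:
    # print(np.float_power(10, 20))    # -> 1e+20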

Thoughts?

Chuck
Nathaniel Smith
2016-10-21 03:11:13 UTC
On Thu, Oct 20, 2016 at 7:58 PM, Charles R Harris wrote:
Post by Charles R Harris
Hi All,
I've put up a preliminary PR for the proposed fpower ufunc. Apart from
adding more tests and documentation, I'd like to settle a few other things.
The first is the name; two names have been proposed, and we should settle on
one:
- fpower (short)
- float_power (obvious)
+0.6 for float_power
Post by Charles R Harris
The second thing is the minimum precision. In the preliminary version I have
used float32, but perhaps it makes more sense for the intended use to make
the minimum precision float64 instead.
Can you elaborate on what you're thinking? I guess this is because
float32 has limited range compared to float64, so is more likely to
see overflow? float32 still goes up to about 10**38, which is greater
than int64_max**2, FWIW. Or maybe there's some subtlety with the
int->float casting here?
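
(A quick sketch of the ranges involved, for concreteness:)

    import numpy as np

    print((2**63 - 1) ** 2)             # int64 max squared, ~8.5e37
    print(np.finfo(np.float32).max)     # ~3.4e38, so float32 can represent it
    print(np.float32(2e19) ** 2)        # inf -- squaring past ~1.8e19 overflows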

-n
--
Nathaniel J. Smith -- https://vorpus.org
Charles R Harris
2016-10-21 03:38:33 UTC
Post by Nathaniel Smith
Can you elaborate on what you're thinking? I guess this is because
float32 has limited range compared to float64, so is more likely to
see overflow? float32 still goes up to about 10**38, which is greater
than int64_max**2, FWIW. Or maybe there's some subtlety with the
int->float casting here?
logical, (u)int8, (u)int16, and float16 get converted to float32, which is
probably sufficient to avoid overflow and such. My thought was that float32
is something of a "specialized" type these days, while float64 is the
standard floating point precision for everyday computation.
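
Concretely, the choice only changes where the narrow types land (a sketch of
the mapping under discussion, not code from the PR):

    import numpy as np

    # Where each narrow input type would land under the two options:
    for dt in (np.bool_, np.uint8, np.int8, np.uint16, np.int16, np.float16):
        print(dt.__name__,
              '-> float32 minimum:', np.promote_types(dt, np.float32),
              '| float64 minimum:', np.promote_types(dt, np.float64))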

Chuck
Sebastian Berg
2016-10-21 07:45:11 UTC
Post by Charles R Harris
logical, (u)int8, (u)int16, and float16 get converted to float32, which
is probably sufficient to avoid overflow and such. My thought was that
float32 is something of a "specialized" type these days, while float64
is the standard floating point precision for everyday computation.
Isn't that the behaviour we already have elsewhere (e.g. for mean)?

ints -> float64
inexacts do not get upcast
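
For reference, mean today does roughly this:

    import numpy as np

    print(np.mean(np.arange(5, dtype=np.int8)).dtype)     # float64: ints are upcast
    print(np.mean(np.arange(5, dtype=np.float32)).dtype)  # float32: inexacts keep their precision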

- Sebastian
Sebastian Berg
2016-10-21 08:29:30 UTC
Post by Sebastian Berg
Isn't that the behaviour we already have elsewhere (e.g. for mean)?

ints -> float64
inexacts do not get upcast
Ah, though on the other hand, some/most of the float-only ufuncs probably
already do it the way you implemented it?
Charles R Harris
2016-10-21 16:26:23 UTC
Post by Sebastian Berg
Isn't that the behaviour we already have elsewhere (e.g. for mean)?

ints -> float64
inexacts do not get upcast
Hmm... The best way to do that would be to put the function in
`fromnumeric` and do it in Python rather than as a ufunc, then for integer
types call power with `dtype=float64`. I like that idea better than the
current implementation; my mind was stuck in the ufunc universe.
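
Something along these lines, perhaps (a rough sketch of the idea, not the
code in the PR):

    import numpy as np

    def float_power(x1, x2):
        # Hypothetical pure-Python wrapper: promote the inputs as power
        # would, but bump bools and integers up to float64; inexact
        # inputs keep their own precision.
        dt = np.result_type(np.asarray(x1), np.asarray(x2))
        if dt == np.bool_ or np.issubdtype(dt, np.integer):
            dt = np.float64
        return np.power(x1, x2, dtype=dt)

So float_power(2, 3) would come back as 8.0 in float64, while an all-float32
input would stay float32.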

Chuck
