Discussion:
[Numpy-discussion] Numpy integers to integer powers again again
Charles R Harris
2016-10-24 22:41:00 UTC
Permalink
Hi All,

I've been thinking about this some (a lot) more and have an alternate
proposal for the behavior of the `**` operator:

- If both base and power are numpy/python scalar integers, convert to
Python integers and call the `**` operator. That would solve both the
precision and compatibility problems, and I think it is the option of least
surprise. For those who need type preservation and modular arithmetic, the
np.power function remains, although its type conversions can be surprising,
as it seems to me that the base and power should play different roles in
determining the type.
- Arrays, 0-d or not, are treated differently from scalars, and integers
raised to negative integer powers always raise an error.

I think this solves most problems and would not be difficult to implement.
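For illustration, the scalar branch of the proposal could look something like
the following sketch (`scalar_int_pow` is a hypothetical name, not a NumPy
function):

```python
import numpy as np

def scalar_int_pow(base, exp):
    """Hypothetical sketch of the proposed `**` rule for integer scalars:
    convert both operands to Python int and use Python's exact power."""
    if isinstance(base, (int, np.integer)) and isinstance(exp, (int, np.integer)):
        # Arbitrary precision, and 2 ** -1 gives 0.5 instead of 0.
        return int(base) ** int(exp)
    raise TypeError("sketch only covers scalar integer operands")

print(scalar_int_pow(np.int64(2), np.int64(100)))  # exact 2**100, no overflow
print(scalar_int_pow(np.int8(2), np.int8(-1)))     # 0.5
```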

Thoughts?

Chuck
Nathaniel Smith
2016-10-24 23:30:43 UTC
Permalink
On Mon, Oct 24, 2016 at 3:41 PM, Charles R Harris
Post by Charles R Harris
[...]
My main concern about this is that it adds more special cases to numpy
scalars, and a new behavioral deviation between 0d arrays and scalars,
when ideally we should be trying to reduce the
duplication/discrepancies between these. It's also inconsistent with
how other operations on integer scalars work, e.g. regular addition
overflows rather than promoting to Python int:

In [8]: np.int64(2 ** 63 - 1) + 1
/home/njs/.user-python3.5-64bit/bin/ipython:1: RuntimeWarning:
overflow encountered in long_scalars
#!/home/njs/.user-python3.5-64bit/bin/python3.5
Out[8]: -9223372036854775808

So I'm inclined to try and keep it simple, like in your previous
proposal... theoretically of course it would be nice to have the
perfect solution here, but at this point it feels like we might be
overthinking this trying to get that last 1% of improvement. The thing
where 2 ** -1 returns 0 is just broken and bites people so we should
definitely fix it, but beyond that I'm not sure it really matters
*that* much what we do, and "special cases aren't special enough to
break the rules" and all that.
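For reference, the broken case mentioned here: on older releases the
integer-to-negative-power case silently returned 0, while current NumPy
raises instead (the exact behavior depends on the NumPy version):

```python
import numpy as np

# Old NumPy: np.array(2) ** -1 silently returned 0 (integer truncation).
# Current NumPy raises ValueError, which is the fix argued for above.
try:
    result = np.array(2) ** -1
    print("got:", result)        # old behavior: 0
except ValueError as exc:
    print("raises:", exc)        # current behavior
```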

-n
--
Nathaniel J. Smith -- https://vorpus.org
Stephan Hoyer
2016-10-25 16:14:40 UTC
Permalink
I am also concerned about adding more special cases for NumPy scalars vs
arrays. These cases are already confusing (e.g., making no distinction
between 0d arrays and scalars) and poorly documented.
Post by Nathaniel Smith
[...]
_______________________________________________
NumPy-Discussion mailing list
https://mail.scipy.org/mailman/listinfo/numpy-discussion
Charles R Harris
2016-10-26 19:23:08 UTC
Permalink
Post by Stephan Hoyer
[...]
What I have been concerned about are the following combinations, which
currently return floats:

num: <type 'numpy.int8'>,   exp: <type 'numpy.int8'>,   res: <type 'numpy.float32'>
num: <type 'numpy.int16'>,  exp: <type 'numpy.int8'>,   res: <type 'numpy.float32'>
num: <type 'numpy.int16'>,  exp: <type 'numpy.int16'>,  res: <type 'numpy.float32'>
num: <type 'numpy.int32'>,  exp: <type 'numpy.int8'>,   res: <type 'numpy.float64'>
num: <type 'numpy.int32'>,  exp: <type 'numpy.int16'>,  res: <type 'numpy.float64'>
num: <type 'numpy.int32'>,  exp: <type 'numpy.int32'>,  res: <type 'numpy.float64'>
num: <type 'numpy.int64'>,  exp: <type 'numpy.int8'>,   res: <type 'numpy.float64'>
num: <type 'numpy.int64'>,  exp: <type 'numpy.int16'>,  res: <type 'numpy.float64'>
num: <type 'numpy.int64'>,  exp: <type 'numpy.int32'>,  res: <type 'numpy.float64'>
num: <type 'numpy.int64'>,  exp: <type 'numpy.int64'>,  res: <type 'numpy.float64'>
num: <type 'numpy.uint64'>, exp: <type 'numpy.int8'>,   res: <type 'numpy.float64'>
num: <type 'numpy.uint64'>, exp: <type 'numpy.int16'>,  res: <type 'numpy.float64'>
num: <type 'numpy.uint64'>, exp: <type 'numpy.int32'>,  res: <type 'numpy.float64'>
num: <type 'numpy.uint64'>, exp: <type 'numpy.int64'>,  res: <type 'numpy.float64'>

The other combinations of signed and unsigned integers to signed powers
currently raise ValueError due to the change to the power ufunc. The
exceptions that aren't covered by uint64 with signed (which won't change)
seem to occur when the exponent can be safely cast to the base type. I
suspect that people have already come to depend on that, especially as
Python integers convert to int64 on 64-bit Linux. So in those cases we
should perhaps raise a FutureWarning instead of an error.
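The "safely cast" condition can be checked directly with np.can_cast; for
example:

```python
import numpy as np

# An int8 exponent casts safely to any wider signed base type, so
# int64 ** int8 falls in the "people may depend on this" group:
print(np.can_cast(np.int8, np.int64))    # True

# The reverse is not safe, and int64 exponents cannot be safely cast
# to uint64 bases either, which is why those combinations raise:
print(np.can_cast(np.int64, np.int8))    # False
print(np.can_cast(np.int64, np.uint64))  # False
```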

Chuck
j***@gmail.com
2016-10-26 19:39:16 UTC
Permalink
Post by Charles R Harris
[...]
>>> np.int64(2)**np.array(-1, np.int64)
0.5
>>> np.__version__
'1.10.4'
>>> np.int64(2)**np.array([-1, 2], np.int64)
array([0, 4], dtype=int64)
>>> np.array(2, np.uint64)**np.array([-1, 2], np.int64)
array([0, 4], dtype=int64)
>>> np.array([2], np.uint64)**np.array([-1, 2], np.int64)
array([ 0.5, 4. ])
>>> np.array([2], np.uint64).squeeze()**np.array([-1, 2], np.int64)
array([0, 4], dtype=int64)


(IMO: If you have to break backwards compatibility, break forwards not
backwards.)


Josef
http://www.stanlaurelandoliverhardy.com/nicemess.htm
Charles R Harris
2016-10-26 19:57:29 UTC
Permalink
On Wed, Oct 26, 2016 at 3:23 PM, Charles R Harris <
Post by Charles R Harris
[...]
Current master is different. I'm not too worried about the array cases, as
the results for negative exponents were zero except when raising -1 to a
power. Since that result is incorrect, raising an error falls on the fine
line between bug fix and compatibility break. We can revisit if the
pre-releases cause too much trouble.

Chuck
j***@gmail.com
2016-10-26 20:20:55 UTC
Permalink
Post by Charles R Harris
[...]
Naive question: if cleaning up the inconsistencies already (kind of) breaks
backwards compatibility and didn't result in a big outcry, why can we not
go with a FutureWarning all the way to float? (I.e., use the power function
with a specified dtype instead of ** if you insist on an int return.)
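A sketch of the explicit-dtype spelling suggested here; np.power, like any
ufunc, accepts a dtype argument:

```python
import numpy as np

base = np.array([2, 2], dtype=np.int64)
exp = np.array([-1, 3], dtype=np.int64)

# Under the all-the-way-to-float proposal `**` would just return float;
# today the same result can already be requested explicitly:
print(np.power(base, exp, dtype=np.float64))        # [0.5 8. ]

# And anyone who insists on an integer return spells it out
# (valid only for nonnegative exponents):
print(np.power(base[1:], exp[1:], dtype=np.int64))  # [8]
```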

Josef
Nathaniel Smith
2016-10-26 19:39:15 UTC
Permalink
On Wed, Oct 26, 2016 at 12:23 PM, Charles R Harris
<***@gmail.com> wrote:
[...]
Post by Charles R Harris
[...]
What's this referring to? For both arrays and scalars I get:

In [8]: (np.array(2, dtype=np.int8) ** np.array(2, dtype=np.int8)).dtype
Out[8]: dtype('int8')

In [9]: (np.int8(2) ** np.int8(2)).dtype
Out[9]: dtype('int8')

-n
--
Nathaniel J. Smith -- https://vorpus.org
j***@gmail.com
2016-10-26 19:49:36 UTC
Permalink
Post by Nathaniel Smith
[...]
>>> (np.array([2], dtype=np.int8) ** np.array(-1, dtype=np.int8).squeeze()).dtype
dtype('int8')
>>> (np.array([2], dtype=np.int8)[0] ** np.array(-1, dtype=np.int8).squeeze()).dtype
dtype('float32')
>>> (np.int8(2)**np.int8(-1)).dtype
dtype('float32')
>>> (np.int8(2)**np.int8(2)).dtype
dtype('int8')

The last one looks like a value-dependent scalar dtype.

Josef
Charles R Harris
2016-10-26 19:58:20 UTC
Permalink
Post by Nathaniel Smith
[...]
In [8]: (np.array(2, dtype=np.int8) ** np.array(2, dtype=np.int8)).dtype
Out[8]: dtype('int8')
In [9]: (np.int8(2) ** np.int8(2)).dtype
Out[9]: dtype('int8')
You need a negative exponent to see the effect.
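That is, with a nonnegative exponent the integer dtype is preserved, while
a negative exponent produced the float result in 1.10 (current NumPy raises
ValueError there instead); a quick check:

```python
import numpy as np

# Nonnegative exponent: dtype preserved, matching In[8]/In[9] above.
print((np.int8(2) ** np.int8(2)).dtype)  # int8

# Negative exponent: this is where 1.10 switched to float32;
# current NumPy raises ValueError instead.
try:
    print((np.int8(2) ** np.int8(-1)).dtype)  # 1.10-era: float32
except ValueError:
    print("current behavior: ValueError")
```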

Chuck