Discussion:
[Numpy-discussion] Numpy 1.11.0b2 released
Charles R Harris
2016-01-28 20:51:49 UTC
Permalink
Hi All,

I am pleased to announce the Numpy 1.11.0b2 release. The first beta
was a damp squib due to missing files in the released sources; this
release fixes that. The new source files may be downloaded from
SourceForge; no binaries will be released until the mingw tool chain
problems are sorted.

Please test and report any problems.

Chuck
Matthew Brett
2016-01-28 21:29:11 UTC
Permalink
Hi,

On Thu, Jan 28, 2016 at 12:51 PM, Charles R Harris wrote:
Post by Charles R Harris
[...]
Please test and report any problems.
OSX wheels build OK:
https://travis-ci.org/MacPython/numpy-wheels/builds/105521850

Y'all can test with:

pip install --pre --trusted-host wheels.scipy.org -f
http://wheels.scipy.org numpy

Cheers,

Matthew
Nathaniel Smith
2016-01-28 21:36:12 UTC
Permalink
Maybe we should upload to pypi? This allows us to upload binaries for osx
at least, and in general will make the beta available to anyone who does
'pip install --pre numpy'. (But not regular 'pip install numpy', because
pip is clever enough to recognize that this is a prerelease and should not
be used by default.)

(For bonus points, start a campaign to convince everyone to add --pre to
their ci setups, so that merely uploading a prerelease will ensure that it
starts getting tested automatically.)
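
Concretely that's about a one-line change in most CI configs: the install
step just gains a --pre (sketch only; the exact layout depends on the
project and the CI service):

    pip install --pre --upgrade numpy
    pip install .
    # then run the project's own test suite as usual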
Charles R Harris
2016-01-28 22:03:14 UTC
Permalink
Post by Nathaniel Smith
Maybe we should upload to pypi? [...]
So what happens if I use twine to upload a beta? Mind, I'd give it a try if
pypi weren't an irreversible machine of doom.

Chuck
Nathaniel Smith
2016-01-28 22:20:25 UTC
Permalink
AFAIK beta releases act just like regular releases, except that the pip ui
and the pypi ui continue to emphasize the older "real" release.
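(The mechanics are just the usual twine invocation against the built files,
e.g.:

    twine upload dist/numpy-1.11.0b2*

There is nothing beta-specific about the command itself.)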
Ralf Gommers
2016-01-28 22:23:45 UTC
Permalink
On Thu, Jan 28, 2016 at 11:03 PM, Charles R Harris wrote:
Post by Charles R Harris
[...]
So what happens if I use twine to upload a beta? Mind, I'd give it a try
if pypi weren't an irreversible machine of doom.
One of the things that will probably happen but needs to be avoided is that
1.11b2 becomes the visible release at https://pypi.python.org/pypi/numpy.
By default I think the status of all releases but the last uploaded one (or
highest version number?) is set to hidden.

Other ways that users can get a pre-release by accident are:
- they have pip <1.4 (released in July 2013)
- other packages have a requirement on numpy with a prerelease version (see
https://pip.pypa.io/en/stable/reference/pip_install/#pre-release-versions)
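
For the second case, if I read those pip docs right, a requirement that
itself names a pre-release is enough to pull the beta in even without
--pre, for example:

    pip install "numpy>=1.11.0b1"   # specifier mentions a pre-release, so pip will consider 1.11.0b2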

Ralf
Nathaniel Smith
2016-01-28 22:57:34 UTC
Permalink
Post by Ralf Gommers
[...]
One of the things that will probably happen but needs to be avoided is that
1.11b2 becomes the visible release at https://pypi.python.org/pypi/numpy. By
default I think the status of all releases but the last uploaded one (or
highest version number?) is set to hidden.
Huh, I had the impression that if it was ambiguous whether the "latest
version" was a pre-release or not, then pypi would list all of them on
that page -- at least I know I've seen projects where going to the
main pypi URL gives a list of several versions like that. Or maybe the
next-to-latest one gets hidden by default and you're supposed to go
back and "un-hide" the last release manually.

Could try uploading to

https://testpypi.python.org/pypi

and see what happens...
Post by Ralf Gommers
- they have pip <1.4 (released in July 2013)
It looks like ~a year ago this was ~20% of users --
https://caremad.io/2015/04/a-year-of-pypi-downloads/
I wouldn't be surprised if it dropped quite a bit since then, but if
this is something that will affect our decision then we can ping
@dstufft to ask for updated numbers.

-n
--
Nathaniel J. Smith -- https://vorpus.org
Ralf Gommers
2016-01-28 23:25:32 UTC
Permalink
Post by Nathaniel Smith
[...]
Huh, I had the impression that if it was ambiguous whether the "latest
version" was a pre-release or not, then pypi would list all of them on
that page -- at least I know I've seen projects where going to the
main pypi URL gives a list of several versions like that. Or maybe the
next-to-latest one gets hidden by default and you're supposed to go
back and "un-hide" the last release manually.
Could try uploading to
https://testpypi.python.org/pypi
and see what happens...
That's worth a try, would be good to know what the behavior is.
Post by Nathaniel Smith
Post by Ralf Gommers
- they have pip <1.4 (released in July 2013)
It looks like ~a year ago this was ~20% of users --
https://caremad.io/2015/04/a-year-of-pypi-downloads/
I wouldn't be surprised if it dropped quite a bit since then, but if
this is something that will affect our decision then we can ping
@dstufft to ask for updated numbers.
Hmm, that's more than I expected. Even if it dropped by a factor of 10 over
the last year, that would still be a lot of failed installs for the current
beta1. It looks to me like this is a bad trade-off. It would be much better
to encourage people to test against numpy master instead of a pre-release
(and we were trying to do that anyway). So the benefit is then fairly
limited, mostly saving people from typing the longer install line including
wheels.scipy.org when they want to test a pre-release.

Ralf
Nathaniel Smith
2016-01-29 03:21:50 UTC
Permalink
Post by Ralf Gommers
[...]
Hmm, that's more than I expected. Even if it dropped by a factor of 10 over
the last year, that would still be a lot of failed installs for the current
beta1. It looks to me like this is a bad trade-off. It would be much better
to encourage people to test against numpy master instead of a pre-release
(and we were trying to do that anyway). So the benefit is then fairly
limited, mostly saving people from typing the longer install line including
wheels.scipy.org when they want to test a pre-release.
After the disastrous lack of testing for the 1.10 prereleases, it
might almost be a good thing if we accidentally swept up some pip 1.3
users into doing prerelease testing... I mean, if they don't test it
now, they'll just end up testing it later, and at least there will be
fewer of them to start with? Plus all they have to do to opt out is to
maintain a vaguely up-to-date environment, which is a good thing for
the ecosystem anyway :-). It's bad for everyone if pip and PyPI are
collaborating to provide this rather nice, standard feature for
distributing and QAing pre-releases, but we can't actually use it
because of people not upgrading pip...

Regarding CI setups and testing against master: I think of these as
being complementary. The fact is that master *will* sometimes just be
broken, or contain tentative API changes that get changed before the
release, etc. So it's really great that there are some projects who
are willing to take on the work of testing numpy master directly as
part of their own CI setups, but it is going to be extra work and risk
for them, they'll probably have to switch it off sometimes and then
turn it back on, and they really need to have decent channels of
communication with us whenever things go wrong because sometimes the
answer will be "doh, we didn't mean to change that, please leave your
code alone and we'll fix it on our end". (My nightmare here is that
downstream projects start working around bugs in master, and then we
find ourselves having to jump through hoops to maintain backcompat
with code that was never even released. __numpy_ufunc__ is stuck in
this situation -- we know that the final version will have to change
its name, because scipy has been shipping code that assumes a
different calling convention than the final released version will
have.)

So, testing master is *great*, but also tricky and not really
something I think we should be advocating to all 5000 downstream
projects [1].

OTOH, once a project has put up a prerelease, then *everyone* wants to
be testing that, because if they don't then things definitely *will*
break soon. (And this isn't specific to numpy -- this applies to
pretty much all upstream dependencies.) So IMO we should be teaching
everyone that their CI setups should just always use --pre when
running pip install, and this will automatically improve QA coverage
for the whole ecosystem.

...It does help if we run at least some minimal QA against the sdist
before uploading it though, to avoid the 1.11.0b1 problem :-). (Though
the new travis test for sdists should cover that.) Something else for
the release checklist I guess...
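
(A minimal version of that check is just: build the sdist, install it into a
scratch environment, and import/test from outside the source tree, roughly:

    python setup.py sdist
    virtualenv /tmp/sdist-check && . /tmp/sdist-check/bin/activate
    pip install nose dist/numpy-*.tar.gz
    cd /tmp && python -c "import numpy; numpy.test()"

which should be enough to catch missing-file problems like the one in b1.)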

-n

[1] http://depsy.org/package/python/numpy
Ralf Gommers
2016-01-29 07:49:32 UTC
Permalink
Post by Nathaniel Smith
[...]
After the disastrous lack of testing for the 1.10 prereleases, it
might almost be a good thing if we accidentally swept up some pip 1.3
users into doing prerelease testing... I mean, if they don't test it
now, they'll just end up testing it later, and at least there will be
fewer of them to start with? Plus all they have to do to opt out is to
maintain a vaguely up-to-date environment, which is a good thing for
the ecosystem anyway :-). It's bad for everyone if pip and PyPI are
collaborating to provide this rather nice, standard feature for
distributing and QAing pre-releases, but we can't actually use it
because of people not upgrading pip...
That's a fair point. And given the amount of brokenness in (especially
older versions of) pip, plus how easy it is to upgrade pip, we should
probably just say that we expect a recent pip (say, the last 3 major releases).
Post by Nathaniel Smith
Regarding CI setups and testing against master: I think of these as
being complementary. The fact is that master *will* sometimes just be
broken, or contain tentative API changes that get changed before the
release, etc. So it's really great that there are some projects who
are willing to take on the work of testing numpy master directly as
part of their own CI setups, but it is going to be extra work and risk
for them, they'll probably have to switch it off sometimes and then
turn it back on, and they really need to have decent channels of
communication with us whenever things go wrong because sometimes the
answer will be "doh, we didn't mean to change that, please leave your
code alone and we'll fix it on our end". (My nightmare here is that
downstream projects start working around bugs in master, and then we
find ourselves having to jump through hoops to maintain backcompat
with code that was never even released. __numpy_ufunc__ is stuck in
this situation -- we know that the final version will have to change
its name, because scipy has been shipping code that assumes a
different calling convention than the final released version will
have.)
So, testing master is *great*, but also tricky and not really
something I think we should be advocating to all 5000 downstream
projects [1].
OTOH, once a project has put up a prerelease, then *everyone* wants to
be testing that, because if they don't then things definitely *will*
break soon. (And this isn't specific to numpy -- this applies to
pretty much all upstream dependencies.) So IMO we should be teaching
everyone that their CI setups should just always use --pre when
running pip install, and this will automatically improve QA coverage
for the whole ecosystem.
OK, persuasive argument. In the past this wouldn't have worked, but our CI
setup is much better now. Until we had Appveyor testing for example, it was
almost the rule that MSVC builds were broken for every first beta.

So, with some hesitation: let's go for it.
Post by Nathaniel Smith
...It does help if we run at least some minimal QA against the sdist
before uploading it though, to avoid the 1.11.0b1 problem :-). (Though
the new travis test for sdists should cover that.) Something else for
the release checklist I guess...
There's still a large number of ways that one can install numpy that aren't
tested (see list in https://github.com/numpy/numpy/issues/6599), but the
only one relevant for pip is when easy_install is triggered by
`setup_requires=numpy`. It's actually not too hard to add that to Travis CI
testing (just install a dummy package that uses setup_requires). I'll add
that to the todo list.
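
Rough shape of that check (throwaway dummy package, names hypothetical; it
has to run in an environment without numpy installed, otherwise
setup_requires is already satisfied and easy_install never triggers):

    mkdir -p /tmp/setup-requires-check && cd /tmp/setup-requires-check
    printf 'from setuptools import setup\n' > setup.py
    printf 'setup(name="dummy-check", version="0.0", setup_requires=["numpy"])\n' >> setup.py
    python setup.py build    # setuptools resolves setup_requires via easy_install here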

Ralf
Andreas Mueller
2016-01-29 17:45:54 UTC
Permalink
Is this the point when scikit-learn should build against it?
Or do we wait for an RC?
Also, we need a scipy build against it. Who does that?
Our continuous integration doesn't usually build scipy or numpy, so it
will be a bit tricky to add to our config.
Would you run our master tests? [did we ever finish this discussion?]

Andy
Julian Taylor
2016-01-29 17:53:43 UTC
Permalink
You most likely don't need a scipy build against it. You should be able
to use the oldest scipy your project supports. Numpy does try not to
break its reverse dependencies; if stuff breaks it should only occur in
edge cases not affecting the functionality of real applications (like
warnings or overzealous testing).

Of course that only works if people bother to test against the numpy
prereleases.
Nathaniel Smith
2016-01-29 22:39:54 UTC
Permalink
Post by Andreas Mueller
Is this the point when scikit-learn should build against it?
Yes please!
Post by Andreas Mueller
Or do we wait for an RC?
This is still all in flux, but I think we might actually want a rule that
says it can't become an RC until after we've tested scikit-learn (and a
list of similarly prominent packages). On the theory that RC means "we
think this is actually good enough to release" :-).

OTOH I'm not sure the alpha/beta/RC distinction is very helpful; maybe they
should all just be betas.
Post by Andreas Mueller
Also, we need a scipy build against it. Who does that?
Like Julian says, it shouldn't be necessary. In fact using old builds of
scipy and scikit-learn is even better than rebuilding them, because it
tests numpy's ABI compatibility -- if you find you *have* to rebuild
something then we *definitely* want to know that.
Post by Andreas Mueller
Our continuous integration doesn't usually build scipy or numpy, so it
will be a bit tricky to add to our config.
Post by Andreas Mueller
Would you run our master tests? [did we ever finish this discussion?]
We didn't, and probably should... :-)

It occurs to me that the best solution might be to put together a
.travis.yml for the release branches that does: "for pkg in
IMPORTANT_PACKAGES: pip install $pkg; python -c 'import pkg; pkg.test()'"
This might not be viable right now, but will be made more viable if pypi
starts allowing official Linux wheels, which looks likely to happen before
1.12... (see PEP 513)
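
Spelled out (the package list here is purely illustrative, not an agreed-on
set), the job body would be roughly:

    pip install --upgrade nose            # some of the .test() runners below still rely on it
    for pkg in scipy pandas astropy; do
        pip install --upgrade "$pkg"
        python -c "import $pkg; $pkg.test()"
    done
    # caveat: nose-style pkg.test() calls don't set the exit status on failure,
    # so a real CI job would need to inspect the returned result object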

-n
Ralf Gommers
2016-01-30 17:27:15 UTC
Permalink
Post by Nathaniel Smith
[...]
Post by Andreas Mueller
Our continuous integration doesn't usually build scipy or numpy, so it
will be a bit tricky to add to our config.
Post by Andreas Mueller
Would you run our master tests? [did we ever finish this discussion?]
We didn't, and probably should... :-)
Why would that be necessary if scikit-learn simply tests pre-releases of
numpy as you suggested earlier in the thread (with --pre)?

There's also https://github.com/MacPython/scipy-stack-osx-testing by the
way, which could have scikit-learn and scikit-image added to it.

That's two options that are imho both better than adding more workload for
the numpy release manager. Also from a principled point of view, packages
should test with new versions of their dependencies, not the other way
around.

Ralf
Nathaniel Smith
2016-01-30 18:16:26 UTC
Permalink
Post by Ralf Gommers
[...]
Why would that be necessary if scikit-learn simply tests pre-releases of
numpy as you suggested earlier in the thread (with --pre)?
There's also https://github.com/MacPython/scipy-stack-osx-testing by the
way, which could have scikit-learn and scikit-image added to it.
That's two options that are imho both better than adding more workload
for the numpy release manager. Also from a principled point of view,
packages should test with new versions of their dependencies, not the other
way around.

Sorry, that was unclear. I meant that we should finish the discussion, not
that we should necessarily be the ones running the tests. "The discussion"
being this one:

https://github.com/numpy/numpy/issues/6462#issuecomment-148094591
https://github.com/numpy/numpy/issues/6494

I'm not saying that the release manager necessarily should be running the
tests (though it's one option). But the 1.10 experience seems to indicate
that we need *some* process for the release manager to make sure that some
basic downstream testing has happened. Another option would be keeping a
checklist of downstream projects and making sure they've all checked in and
confirmed that they've run tests before making the release.

-n
Jeff Reback
2016-01-30 18:40:03 UTC
Permalink
just my 2c

it's fairly straightforward to add a test to the Travis matrix to grab
pre-built numpy wheels (works for conda or pip installs).

so in pandas we're testing 2.7/3.5 against numpy master continuously

https://github.com/pydata/pandas/blob/master/ci/install-3.5_NUMPY_DEV.sh
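
the shape of such an install step is roughly this (sketch; pandas' actual
script is at the link above, and the wheelhouse shown here is just the index
mentioned earlier in this thread):

    pip uninstall -y numpy
    pip install --pre --upgrade --trusted-host wheels.scipy.org \
        -f http://wheels.scipy.org numpy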
Andreas Mueller
2016-02-10 16:53:30 UTC
Permalink
Thanks, that is very helpful!
Julian Taylor
2016-01-31 10:57:45 UTC
Permalink
Post by Ralf Gommers
[...]
That's two options that are imho both better than adding more workload
for the numpy release manager. Also from a principled point of view,
packages should test with new versions of their dependencies, not the
other way around.
It would be nice but it's not realistic; I doubt most upstreams that are
not themselves major downstreams are even subscribed to this list.

Testing, or delegating testing, of at least our major downstreams should be
the job of the release manager.
Thus I also disagree with our more frequent releases. It puts too much
porting and testing effort on our downstreams and it gets infeasible for
a volunteer release manager to handle.
I fear by doing this we will end up in a situation where more
downstreams put upper bounds on their supported numpy releases like e.g.
astropy already did.
This has bad consequences like the subclass breaking of linspace that
should have been caught months ago but was not, because downstreams were
discouraging users from upgrading numpy as they could not keep up
with porting.
Pauli Virtanen
2016-01-31 12:08:02 UTC
Permalink
On 31.01.2016 12:57, Julian Taylor wrote:
[clip]
Post by Julian Taylor
Testing, or delegating testing, of at least our major downstreams should be
the job of the release manager.
Thus I also disagree with our more frequent releases. It puts too much
porting and testing effort on our downstreams and it gets infeasible for
a volunteer release manager to handle.
I fear by doing this we will end up in a situation where more
downstreams put upper bounds on their supported numpy releases like e.g.
astropy already did.
This has bad consequences like the subclass breaking of linspace that
should have been caught months ago but was not, because downstreams were
discouraging users from upgrading numpy as they could not keep up
with porting.
I'd suggest that some automation could reduce the maintainer burden
here. Basically, I think being aware of downstream breakage is something
that could be determined without too much manual intervention.

For example, an automated test rig that does the following:

- run tests of a given downstream project version, against
previous numpy version, record output

- run tests of a given downstream project version, against
numpy master, record output

- determine which failures were added by the new numpy version

- make this happen with just a single command, eg "python run.py",
and implement it for several downstream packages and versions.
(Probably good to steal ideas from travis-ci dependency matrix etc.)
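
In shell terms the comparison boils down to something like this (package
names and version pins are placeholders, and real output would need more
massaging than a plain diff):

    pip install nose numpy==1.10.4 scipy==0.17.0
    python -c "import scipy; scipy.test()" > old-numpy.log 2>&1
    pip install --upgrade git+https://github.com/numpy/numpy.git@master
    python -c "import scipy; scipy.test()" > new-numpy.log 2>&1
    diff old-numpy.log new-numpy.log   # failures only in new-numpy.log were added by the new numpy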

This is probably too time-intensive and a waste of resources for
Travis-CI, but it could be run by the Numpy maintainer or someone else
during the release process, or periodically on some ad-hoc machine if
someone is willing to set it up.

Of course, understanding the cause of breakages would take some
understanding of the downstream package, but this would at least ensure
we are aware of stuff breaking. Provided it's covered by the downstream
test suite, of course.
--
Pauli Virtanen
Daπid
2016-01-31 12:41:09 UTC
Permalink
Post by Pauli Virtanen
- run tests of a given downstream project version, against
previous numpy version, record output
- run tests of a given downstream project version, against
numpy master, record output
- determine which failures were added by the new numpy version
- make this happen with just a single command, eg "python run.py",
and implement it for several downstream packages and versions.
(Probably good to steal ideas from travis-ci dependency matrix etc.)
A simpler idea: build the master branch of a series of projects and run the
tests. In case of failure, we can compare with Travis's logs from the
project when they use the released numpy. In most cases, the master branch
is clean, so an error will likely be a change in behaviour.

This can be run automatically once a week, to not hog too much of Travis,
and counting the costs in hours of work, is very cheap to set up, and free
to maintain.

/David
Pauli Virtanen
2016-01-31 13:12:28 UTC
Permalink
Post by Daπid
Post by Pauli Virtanen
- run tests of a given downstream project version, against
previous numpy version, record output
- run tests of a given downstream project version, against
numpy master, record output
- determine which failures were added by the new numpy version
- make this happen with just a single command, eg "python run.py",
and implement it for several downstream packages and versions.
(Probably good to steal ideas from travis-ci dependency matrix etc.)
A simpler idea: build the master branch of a series of projects and run the
tests. In case of failure, we can compare with Travis's logs from the
project when they use the released numpy. In most cases, the master branch
is clean, so an error will likely be a change in behaviour.
If you can assume the tests of a downstream project are in an OK state,
then you can skip the build against existing numpy.

But it's an additional and unnecessary burden for the Numpy maintainers
to compare the logs manually (and check the built versions are the same,
and that the difference is not due to difference in build environments).
I would also avoid depending on the other projects' Travis-CI
configurations, since these may change.

I think testing released versions of downstream projects is better than
testing their master versions here, as the master branch may contain
workarounds for Numpy changes and not be representative of what people
get on their computers after Numpy release.
Post by Daπid
This can be run automatically once a week, to not hog too much of Travis,
and counting the costs in hours of work, is very cheap to set up, and free
to maintain.
It may be that such a project could be run on Travis, if split into
per-project runs to work around the 50-minute timeout.

I'm not aware of Travis-CI having support for "automatically once per
week" builds.

Anyway, having any form of central automated integration testing would
be better than the current situation where it's mostly all-manual and
relies on the activity of downstream project maintainers.
--
Pauli Virtanen
Marten van Kerkwijk
2016-01-31 21:52:08 UTC
Permalink
Hi Julian,

While the numpy 1.10 situation was bad, I do want to clarify that the
problems we had in astropy were a consequence of *good* changes in
`recarray`, which solved many problems, but also broke the work-arounds
that had been created in `astropy.io.fits` quite a long time ago (possibly
before astropy became as good as it tries to be now at moving issues
upstream and perhaps before numpy had become as responsive to what happens
downstream as it is now; I think it is fair to say many projects' attitudes
to testing have changed rather drastically in the last decade!).

I do agree, though, that it just goes to show one has to try to be careful,
and like Nathaniel's suggestion of automatic testing with pre-releases -- I
just asked on our astropy-dev list whether we can implement it.

All the best,

Marten
Julian Taylor
2016-02-01 22:14:27 UTC
Permalink
hi,
even if they are good changes, I find it reasonable to ask for a delay in
a numpy release if you need more time to adapt. Of course this has to be
within reason and can be rejected, but it's very valuable to know when
changes break existing old workarounds. If pyfits broke, there is probably
a lot more code we don't know about that is also broken.

Sometimes we might even be able to get the good without breaking the
bad. E.g. thanks to Sebastian's heroic efforts in his recent indexing
rewrite, only very little broke and a lot of odd stuff could be equipped
with deprecation warnings instead of breaking.
Of course that cannot often be done or be worthwhile, but it's at least
worth considering when we change core functionality.

cheers,
Julian
Ralf Gommers
2016-02-01 21:25:17 UTC
Permalink
On Sun, Jan 31, 2016 at 11:57 AM, Julian Taylor wrote:
Post by Julian Taylor
[...]
It would be nice but it's not realistic; I doubt most upstreams that are
not themselves major downstreams are even subscribed to this list.
I'm pretty sure that some core devs from all major scipy stack packages are
subscribed to this list.

Post by Julian Taylor
Testing, or delegating testing, of at least our major downstreams should be
the job of the release manager.
If we make it (almost) fully automated, like in
https://github.com/MacPython/scipy-stack-osx-testing, then I agree that
adding this to the numpy release checklist would make sense.

But it should really only be a tiny amount of work - we're short on
developer power, and many things that are cross-project like build & test
infrastructure (numpy.distutils, needed pip/packaging fixes,
numpy.testing), scipy.org (the "stack" website), numpydoc, etc. are mostly
maintained by the numpy/scipy devs. I'm very reluctant to say yes to
putting even more work on top of that.

So: it would really help if someone could pick up the automation part of
this and improve the stack testing, so the numpy release manager doesn't
have to do this.

Ralf
Pauli Virtanen
2016-02-02 16:45:20 UTC
Permalink
On 01.02.2016 23:25, Ralf Gommers wrote:
[clip]
Post by Ralf Gommers
So: it would really help if someone could pick up the automation part of
this and improve the stack testing, so the numpy release manager doesn't
have to do this.
quick hack: https://github.com/pv/testrig

Not that I'm necessarily volunteering to maintain the setup, though, but
if it seems useful, move it under numpy org.
--
Pauli Virtanen
Nathaniel Smith
2016-02-04 05:18:38 UTC
Permalink
Post by Pauli Virtanen
[clip]
Post by Ralf Gommers
So: it would really help if someone could pick up the automation part of
this and improve the stack testing, so the numpy release manager doesn't
have to do this.
quick hack: https://github.com/pv/testrig
Not that I'm necessarily volunteering to maintain the setup, though, but
if it seems useful, move it under numpy org.
That's pretty cool :-). I also was fiddling with a similar idea a bit,
though much less fancy... my little script cheats and uses miniconda to
fetch pre-built versions of some packages, and then runs the tests against
numpy 1.10.2 (as shipped by anaconda) + the numpy master, and does a diff
(with a bit of massaging to make things more readable, like summarizing
warnings):

https://travis-ci.org/njsmith/numpy/builds/106865202

Search for "#####" to jump between sections of the output.

Some observations:

testing *matplotlib* this way doesn't work, b/c they need special test data
files that anaconda doesn't ship :-/

*scipy*:
*one new failure*, in test_nanmedian_all_axis
250 calls to np.testing.rand (wtf), 92 calls to random_integers (see the
randint sketch at the end of this message), 3 uses of datetime64 with
timezones. And for some reason the new numpy gives more "invalid value
encountered in greater"-type warnings.

*astropy*:
*two weird failures* that hopefully some astropy person will look into;
two spurious failures due to over-strict testing of warnings

*scikit-learn*:
several *new failures*: 1 "invalid slice" (?), 2 "OverflowError: value
too large to convert to int". No idea what's up with these. Hopefully some
scikit-learn person will investigate?
2 np.ma view warnings, 16 multi-character strings used where "C" or "F"
expected, 1514 (!!) calls to random_integers

*pandas:*
zero new failures, only new warnings are about NaT, as expected. I guess
their whole "running their tests against numpy master" thing works!


*statsmodels:*
*absolute disaster*. *261* new failures, I think mostly because of numpy
getting pickier about float->int conversions. Also a few "invalid slice".
102 np.ma view warnings.

I don't have a great sense of whether the statsmodels breakages are ones
that will actually impact users, or if they're just like, 1 bad utility
function that only gets used in the test suite. (well, probably not the
latter, because they do have different tracebacks). If this is typical
though then we may need to back those integer changes out and replace them
by a really loud obnoxious warning for a release or two :-/ The other
problem here is that statsmodels hasn't done a release since 2014 :-/
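
Since random_integers and the float->int complaints keep coming up, the
downstream fixes are usually one-liners; a couple of generic examples
(plain numpy behaviour, not code from any of the projects above):

    import numpy as np

    # np.random.random_integers(1, 6, size=10) samples the closed interval
    # [1, 6] and is deprecated; np.random.randint uses a half-open interval,
    # so the drop-in replacement bumps the upper bound by one:
    draw = np.random.randint(1, 6 + 1, size=10)

    # A typical float->int conversion problem is indexing with the result of
    # a true division (always a float on Python 3); an explicit cast works on
    # both old and new numpy:
    a = np.arange(10)
    n = 7
    item = a[int(n / 2)]     # rather than a[n / 2]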

-n
--
Nathaniel J. Smith -- https://vorpus.org
Nathaniel Smith
2016-02-04 05:56:08 UTC
Permalink
Post by Pauli Virtanen
[clip]
Post by Ralf Gommers
So: it would really help if someone could pick up the automation part of
this and improve the stack testing, so the numpy release manager doesn't
have to do this.
quick hack: https://github.com/pv/testrig
Not that I'm necessarily volunteering to maintain the setup, though, but
if it seems useful, move it under numpy org.
Whoops, got distracted talking about the results and forgot to say --
I guess we should think about how to combine these? I like the
information on warnings, because it helps gauge the impact of
deprecations, which is a thing that takes a lot of our attention. But
your approach is clearly fancier in terms of how it parses the test
results. (Do you think the fanciness is worth it? I can see an
argument for crude and simple if the fanciness ends up being fragile,
but I haven't read the code -- mostly I was just being crude and
simple because I'm lazy :-).)

An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds) -- certainly
I can see an argument for enabling it by default on the
maintenance/1.x branches. Running N extra test suites ourselves is not
actually more expensive than asking N projects to run 1 more testsuite
:-). The trickiest part is getting it to give actually-useful
automated pass/fail feedback, as opposed to requiring someone to
remember to look at it manually :-/

Maybe it should be uploading the reports somewhere? So there'd be a
readable "what's currently broken by 1.x" page, plus with persistent
storage we could get travis to flag if new additions to the release
branch cause any new failures to appear? (That way we only have to
remember to look at the report manually once per release, instead of
constantly throughout the process.)
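
A rough sketch of the persistent-baseline part (the file name and format
are made up; wiring it into travis is the real work):

    import json
    import sys

    def new_failures(baseline_path, current):
        # Compare the current failing test ids against the checked-in
        # baseline for this release branch and return only the new ones.
        with open(baseline_path) as f:
            known = set(json.load(f))
        return sorted(set(current) - known)

    if __name__ == "__main__":
        # Failing test ids arrive on stdin, one per line (produced by
        # whatever parses the downstream test logs).
        current = [line.strip() for line in sys.stdin if line.strip()]
        fresh = new_failures("known-failures-1.11.json", current)
        for test_id in fresh:
            print("NEW FAILURE:", test_id)
        sys.exit(1 if fresh else 0)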

-n
--
Nathaniel J. Smith -- https://vorpus.org
Antoine Pitrou
2016-02-04 09:33:29 UTC
Permalink
On Wed, 3 Feb 2016 21:56:08 -0800
Post by Nathaniel Smith
An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds) -- certainly
I can see an argument for enabling it by default on the
maintenance/1.x branches. Running N extra test suites ourselves is not
actually more expensive than asking N projects to run 1 more testsuite
:-). The trickiest part is getting it to give actually-useful
automated pass/fail feedback, as opposed to requiring someone to
remember to look at it manually :-/
Yes, I think that's where the problem lies. Python had something called
"community buildbots" at one time (testing well-known libraries such as
Twisted against the Python trunk), but it suffered from lack of
attention and was finally dismantled. Apparently having the people
running it and the people most interested in it not be the same ones
ended up being a bad idea :-)

That said, if you do something like that with Numpy, we would be
interested in having Numba be part of the tested packages.

Regards

Antoine.
Pauli Virtanen
2016-02-04 14:09:26 UTC
Permalink
On 04.02.2016 07:56, Nathaniel Smith wrote:
[clip]
Post by Nathaniel Smith
Whoops, got distracted talking about the results and forgot to say --
I guess we should think about how to combine these? I like the
information on warnings, because it helps gauge the impact of
deprecations, which is a thing that takes a lot of our attention. But
your approach is clearly fancier in terms of how it parses the test
results. (Do you think the fanciness is worth it? I can see an
argument for crude and simple if the fanciness ends up being fragile,
but I haven't read the code -- mostly I was just being crude and
simple because I'm lazy :-).)
The fanciness is essentially a question of implementation language and
ease of writing the reporting code. At 640 SLOC it's probably not so bad.

I guess it's reasonably robust --- the test report formats are unlikely
to change, and pip/virtualenv will probably continue to work esp. with
pinned pip version.

It should also be simple to extract the warnings from the test stdout.

I'm not sure if the order of test results is deterministic in
nose/py.test, so I don't know if just diffing the outputs always works.
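
Extracting the warnings and comparing them order-independently could look
roughly like this (the regex is an assumption about what the captured
output looks like):

    import re
    from collections import Counter

    WARNING_RE = re.compile(r"(\w+Warning): (.+)")

    def summarize_warnings(test_stdout):
        # Count (category, message) pairs so the comparison does not depend
        # on the order in which nose/py.test ran the tests.
        return Counter(WARNING_RE.findall(test_stdout))

    def new_warnings(old_stdout, new_stdout):
        old = summarize_warnings(old_stdout)
        new = summarize_warnings(new_stdout)
        # Keep only the (category, message) pairs whose count went up.
        return {key: count - old[key]
                for key, count in new.items() if count > old[key]}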

Building downstream from source avoids future binary compatibility issues.

[clip]
Post by Nathaniel Smith
Maybe it should be uploading the reports somewhere? So there'd be a
readable "what's currently broken by 1.x" page, plus with persistent
storage we could get travis to flag if new additions to the release
branch causes any new failures to appear? (That way we only have to
remember to look at the report manually once per release, instead of
constantly throughout the process.)
This is probably possible to implement. Although, I'm not sure how much
added value this is compared to the travis matrix, e.g.
https://travis-ci.org/pv/testrig/

Of course, if the suggestion is that the results are generated somewhere
else than on travis, then that's a different matter.
--
Pauli Virtanen
Thomas Caswell
2016-02-04 15:32:46 UTC
Permalink
The test data for mpl is available as a separate conda package,
matplotlib-tests. The reason for splitting it out is the 40 MB of test
images.

Tom
Post by Pauli Virtanen
[clip]
Post by Nathaniel Smith
Whoops, got distracted talking about the results and forgot to say --
I guess we should think about how to combine these? I like the
information on warnings, because it helps gauge the impact of
deprecations, which is a thing that takes a lot of our attention. But
your approach is clearly fancier in terms of how it parses the test
results. (Do you think the fanciness is worth it? I can see an
argument for crude and simple if the fanciness ends up being fragile,
but I haven't read the code -- mostly I was just being crude and
simple because I'm lazy :-).)
The fanciness is essentially a question of implementation language and
ease of writing the reporting code. At 640 SLOC it's probably not so bad.
I guess it's reasonably robust --- the test report formats are unlikely
to change, and pip/virtualenv will probably continue to work esp. with
pinned pip version.
It should be simple to extract also the warnings from the test stdout.
I'm not sure if the order of test results is deterministic in
nose/py.test, so I don't know if just diffing the outputs always works.
Building downstream from source avoids future binary compatibility issues.
[clip]
Post by Nathaniel Smith
Maybe it should be uploading the reports somewhere? So there'd be a
readable "what's currently broken by 1.x" page, plus with persistent
storage we could get travis to flag if new additions to the release
branch causes any new failures to appear? (That way we only have to
remember to look at the report manually once per release, instead of
constantly throughout the process.)
This is probably possible to implement. Although, I'm not sure how much
added value this is compared to travis matrix, eg.
https://travis-ci.org/pv/testrig/
Of course, if the suggestion is that the results are generated on
somewhere else than on travis, then that's a different matter.
--
Pauli Virtanen
Chris Barker - NOAA Federal
2016-02-05 16:27:59 UTC
Permalink
Post by Nathaniel Smith
An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds)
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all the versions of everything we want to test
against.

Conda-build-all could make it manageable to maintain that channel.

-CHB
Nathaniel Smith
2016-02-05 17:55:10 UTC
Permalink
Post by Chris Barker - NOAA Federal
Post by Nathaniel Smith
An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds)
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Conda-build-all could make it manageable to maintain that channel.
What would be the advantage of maintaining that channel ourselves instead
of using someone else's binary builds that already exist (e.g. Anaconda's,
or official project wheels)?

-n
Pauli Virtanen
2016-02-05 18:14:30 UTC
Permalink
Post by Nathaniel Smith
Post by Chris Barker - NOAA Federal
Post by Nathaniel Smith
An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds)
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Conda-build-all could make it manageable to maintain that channel.
What would be the advantage of maintaining that channel ourselves instead
of using someone else's binary builds that already exist (e.g. Anaconda's,
or official project wheels)?
ABI compatibility. However, as I understand it, backward-incompatible
ABI changes in Numpy are not expected in the future.

If they were, note that if you work in the same environment you can push
repeated compilation times close to zero (compared to the time it takes
to run the tests), with less configuration, by enabling ccache/f90cache.
j***@gmail.com
2016-02-05 21:41:10 UTC
Permalink
Post by Ralf Gommers
On Feb 5, 2016 8:28 AM, "Chris Barker - NOAA Federal" <
Post by Chris Barker - NOAA Federal
Post by Nathaniel Smith
An extra ~2 hours of tests / 6-way parallelism is not that big a deal
in the grand scheme of things (and I guess it's probably less than
that if we can take advantage of existing binary builds)
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Conda-build-all could make it manageable to maintain that channel.
What would be the advantage of maintaining that channel ourselves
instead
of using someone else's binary builds that already exist (e.g.
Anaconda's,
or official project wheels)?
ABI compatibility. However, as I understand it, not many backward ABI
incompatible changes in Numpy are not expected in future.
If they were, I note that if you work in the same environment, you can
push repeated compilation times to zero compared to the time it takes to
run tests in a way that requires less configuration, by enabling
ccache/f90cache.
Control of the Fortran compiler and libraries.
I was just looking at some new test errors on TravisCI in unchanged code
of statsmodels, and it looks like conda switched from openblas to mkl
yesterday.
(statsmodels doesn't care which BLAS/LAPACK is used when compiling, as
long as they work, because we don't have Fortran code.)
Josef
(sending again, delivery refused)
Chris Barker
2016-02-05 21:16:46 UTC
Permalink
Post by Nathaniel Smith
Post by Chris Barker - NOAA Federal
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Conda-build-all could make it manageable to maintain that channel.
What would be the advantage of maintaining that channel ourselves instead
of using someone else's binary builds that already exist (e.g. Anaconda's,
or official project wheels)?
Others' binary wheels are only available for the versions that are
supported -- usually the latest releases, but Anaconda doesn't always have
the latest builds of everything.

Maybe we want to test against matplotlib master (or a release candidate,
or??), for instance.

And when we are testing a numpy-abi-breaking release, we'll need to have
everything tested against that release.

Usually, when you set up a conda environment, it preferentially pulls from
the default channel anyway (or any other channel you set up), so we'd only
maintain stuff that wasn't readily available elsewhere.

-CHB
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Nathaniel Smith
2016-02-05 23:24:37 UTC
Permalink
Post by Chris Barker
Post by Nathaniel Smith
Post by Chris Barker - NOAA Federal
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Conda-build-all could make it manageable to maintain that channel.
What would be the advantage of maintaining that channel ourselves instead
of using someone else's binary builds that already exist (e.g. Anaconda's,
or official project wheels)?
other's binary wheels are only available for the versions that are
supported. Usually the latest releases, but Anaconda doesn't always have the
latest builds of everything.
True, though official project wheels will hopefully solve that soon.
Post by Chris Barker
Maybe we want to test against matplotlib master (or a release candidate,
or??), for instance.
Generally I think for numpy's purposes we want to test against the
latest released version, because it doesn't do end-users much good if
a numpy release breaks their environment, and the only fix is hiding
in some git repo somewhere :-). But yeah.
Post by Chris Barker
And when we are testing a numpy-abi-breaking release, we'll need to have
everything tested against that release.
There aren't any current plans to have such a release, but true.

-n
--
Nathaniel J. Smith -- https://vorpus.org
Chris Barker
2016-02-06 23:21:46 UTC
Permalink
Post by Chris Barker
Post by Chris Barker
Post by Chris Barker - NOAA Federal
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Anaconda doesn't always have the
Post by Chris Barker
latest builds of everything.
OK, this may be more or less helpful, depending on what we want to build
against. But a conda environment (maybe tied to a custom channel) really
does make a nice contained space for testing that can be set up fast on a
CI server.

If whoever is setting up a test system/matrix thinks this would be useful,
I'd be glad to help set it up.

-Chris
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Michael Sarahan
2016-02-06 23:42:03 UTC
Permalink
FWIW, we (Continuum) are working on a CI system that builds conda recipes.
Part of this is testing not only individual packages that change, but also
any downstream packages that are also in the repository of recipes. The
configuration for this is in
https://github.com/conda/conda-recipes/blob/master/.binstar.yml and the
project doing the dependency detection is in
https://github.com/ContinuumIO/ProtoCI/

This is still being established (particularly, provisioning build workers),
but please talk with us if you're interested.

Chris, it may still be useful to use docker here (perhaps on the build
worker, or elsewhere), also, as the distinction between build machines and
user machines is important to make. Docker would be great for making sure
that all dependency requirements are met on end-user systems (we've had a
few recent issues with libgfortran accidentally missing as a requirement of
scipy).

Best,
Michael
Post by Chris Barker
Post by Chris Barker
Post by Chris Barker - NOAA Federal
If we set up a numpy-testing conda channel, it could be used to cache
binary builds for all he versions of everything we want to test
against.
Anaconda doesn't always have the
Post by Chris Barker
latest builds of everything.
OK, this may be more or less helpful, depending on what we want to built
against. But a conda environment (maybe tied to a custom channel) really
does make a nice contained space for testing that can be set up fast on a
CI server.
If whoever is setting up a test system/matrix thinks this would be useful,
I'd be glad to help set it up.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Robert T. McGibbon
2016-02-06 23:51:11 UTC
Permalink
(we've had a few recent issues with libgfortran accidentally missing as a
requirement of scipy).

On this topic, you may be able to get some mileage out of adapting
pypa/auditwheel, which can load up extension module `.so` files inside a
wheel (or conda package), walk the shared library dependency tree like
the runtime linker does (using pyelftools), and check whether things are
going to resolve properly and where shared libraries are loaded from.

Something like that should be able to, with minimal adaptation to use the
conda dependency resolver,
check that a conda package properly declares all of the shared library
dependencies it actually needs.
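
The ELF-walking half of that is pleasantly small with pyelftools; roughly
something like the following (checking the results against the conda
dependency metadata is the part that would need adapting):

    import sys
    from elftools.elf.elffile import ELFFile

    def needed_libraries(so_path):
        # DT_NEEDED entries are the shared libraries the runtime linker
        # will try to resolve when this extension module is loaded.
        with open(so_path, "rb") as f:
            elf = ELFFile(f)
            dynamic = elf.get_section_by_name(".dynamic")
            if dynamic is None:
                return []
            return [tag.needed for tag in dynamic.iter_tags()
                    if tag.entry.d_tag == "DT_NEEDED"]

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path)
            for lib in needed_libraries(path):
                print("    needs", lib)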

-Robert
FWIW, we (Continuum) are working on a CI system that builds conda
recipes. Part of this is testing not only individual packages that change,
but also any downstream packages that are also in the repository of
recipes. The configuration for this is in
https://github.com/conda/conda-recipes/blob/master/.binstar.yml and the
project doing the dependency detection is in
https://github.com/ContinuumIO/ProtoCI/
This is still being established (particularly, provisioning build
workers), but please talk with us if you're interested.
Chris, it may still be useful to use docker here (perhaps on the build
worker, or elsewhere), also, as the distinction between build machines and
user machines is important to make. Docker would be great for making sure
that all dependency requirements are met on end-user systems (we've had a
few recent issues with libgfortran accidentally missing as a requirement of
scipy).
Best,
Michael
Post by Chris Barker
Post by Chris Barker
Post by Chris Barker - NOAA Federal
If we set up a numpy-testing conda channel, it could be used to
cache
Post by Chris Barker
Post by Chris Barker - NOAA Federal
binary builds for all he versions of everything we want to test
against.
Anaconda doesn't always have the
Post by Chris Barker
latest builds of everything.
OK, this may be more or less helpful, depending on what we want to built
against. But a conda environment (maybe tied to a custom channel) really
does make a nice contained space for testing that can be set up fast on a
CI server.
If whoever is setting up a test system/matrix thinks this would be
useful, I'd be glad to help set it up.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
--
-Robert
Chris Barker
2016-02-06 23:52:37 UTC
Permalink
Post by Michael Sarahan
FWIW, we (Continuum) are working on a CI system that builds conda recipes.
Great, that could be handy. I hope you've looked at the open-source systems
that do this: obvious-ci and conda-build-all, and conda-smithy to help set
it all up.

Chris, it may still be useful to use docker here (perhaps on the build
Post by Michael Sarahan
worker, or elsewhere), also, as the distinction between build machines and
user machines is important to make. Docker would be great for making sure
that all dependency requirements are met on end-user systems
Yes -- very handy; I have certainly accidentally brought in other system
libs in a build....

Too bad it's Linux only, though it's very useful for manylinux.


-Chris
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Michael Sarahan
2016-02-07 00:11:27 UTC
Permalink
Robert,

Thanks for pointing out auditwheel. We're experimenting with a GCC 5.2
toolchain, and this tool will be invaluable.

Chris,

Both conda-build-all and obvious-ci are excellent projects, and we'll
leverage them where we can (particularly conda-build-all). Obvious CI and
conda-smithy are in a slightly different space, as we want to use our own
anaconda.org build service, rather than write scripts to run on other CI
services. With more control, we can do cool things like splitting up build
jobs and further parallelizing them on more workers, which I see as very
important if we're going to be building downstream stuff. As I see it, the
single, massive recipe repo that is conda-recipes has been a disadvantage
for a while in terms of complexity, but now may be an advantage in terms of
building downstream packages (how else would dependency get resolved?) It
remains to be seen whether git submodules might replace individual folders
in conda-recipes - I think this might give project maintainers more direct
control over their packages.

The goal, much like ObviousCI, is to enable project maintainers to get
their latest releases available in conda sooner, and to simplify the whole
CI setup process. We hope we can help each other rather than compete.

Best,
Michael
Post by Chris Barker
Post by Michael Sarahan
FWIW, we (Continuum) are working on a CI system that builds conda recipes.
great, could be handy. I hope you've looked at the open-source systems
that do this: obvious-ci and conda-build-all. And conda-smithy to help set
it all up..
Chris, it may still be useful to use docker here (perhaps on the build
Post by Michael Sarahan
worker, or elsewhere), also, as the distinction between build machines and
user machines is important to make. Docker would be great for making sure
that all dependency requirements are met on end-user systems
yes -- veryhandy, I have certainly accidentally brough in other system
libs in a build....
Too bad it's Linux only. Though very useful for manylinux.
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chris Barker
2016-02-08 19:59:02 UTC
Permalink
Post by Michael Sarahan
Chris,
Both conda-build-all and obvious-ci are excellent projects, and we'll
leverage them where we can (particularly conda-build-all). Obvious CI and
conda-smithy are in a slightly different space, as we want to use our own
anaconda.org build service, rather than write scripts to run on other CI
services.
I don't think conda-build-all or, for that matter, conda-smithy are fixed
to any particular CI server. But anyway, the anaconda.org build service
looks nice -- I'll need to give that a try. I've actually been building
everything on my own machines anyway so far.
Post by Michael Sarahan
As I see it, the single, massive recipe repo that is conda-recipes has
been a disadvantage for a while in terms of complexity, but now may be an
advantage in terms of building downstream packages (how else would
dependency get resolved?)
yup -- but the other issue is that conda-recipes didn't seem to be
maintained, really...
Post by Michael Sarahan
The goal, much like ObviousCI, is to enable project maintainers to get
their latest releases available in conda sooner, and to simplify the whole
CI setup process. We hope we can help each other rather than compete.
Great goal!

Thanks,

-CHB
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Evgeni Burovski
2016-02-04 09:51:32 UTC
Permalink
one new failure, in test_nanmedian_all_axis
250 calls to np.testing.rand (wtf), 92 calls to random_integers, 3 uses
of datetime64 with timezones. And for some reason the new numpy gives more
"invalid value encountered in greater"-type warnings.
One limitation of this approach, AFAIU, is that the downstream
versions are pinned by whatever is available from anaconda, correct?
Not a big deal per se, just something to keep in mind when looking at
the report that there might be false positives.

For scipy, for instance, this seems to test 0.16.1. Most (all?) of
these are fixed in 0.17.0.

At any rate, this is great regardless --- thank you!

Cheers,

Evgeni
Charles R Harris
2016-02-04 17:33:52 UTC
Permalink
Post by Nathaniel Smith
Post by Pauli Virtanen
[clip]
Post by Ralf Gommers
So: it would really help if someone could pick up the automation part of
this and improve the stack testing, so the numpy release manager doesn't
have to do this.
quick hack: https://github.com/pv/testrig
Not that I'm necessarily volunteering to maintain the setup, though, but
if it seems useful, move it under numpy org.
That's pretty cool :-). I also was fiddling with a similar idea a bit,
though much less fancy... my little script cheats and uses miniconda to
fetch pre-built versions of some packages, and then runs the tests against
numpy 1.10.2 (as shipped by anaconda) + the numpy master, and does a diff
(with a bit of massaging to make things more readable, like summarizing
https://travis-ci.org/njsmith/numpy/builds/106865202
Search for "#####" to jump between sections of the output.
testing *matplotlib* this way doesn't work, b/c they need special test
data files that anaconda doesn't ship :-/
*scipy*:
*one new failure*, in test_nanmedian_all_axis
250 calls to np.testing.rand (wtf), 92 calls to random_integers, 3 uses
of datetime64 with timezones. And for some reason the new numpy gives more
"invalid value encountered in greater"-type warnings.
*astropy*:
*two weird failures* that hopefully some astropy person will look into;
two spurious failures due to over-strict testing of warnings
*scikit-learn*:
several *new failures*: 1 "invalid slice" (?), 2 "OverflowError: value
too large to convert to int". No idea what's up with these. Hopefully some
scikit-learn person will investigate?
2 np.ma view warnings, 16 multi-character strings used where "C" or "F"
expected, 1514 (!!) calls to random_integers
*pandas*:
zero new failures, only new warnings are about NaT, as expected. I guess
their whole "running their tests against numpy master" thing works!
*statsmodels*:
*absolute disaster*. *261* new failures, I think mostly because of
numpy getting pickier about float->int conversions. Also a few "invalid
slice".
102 np.ma view warnings.
I don't have a great sense of whether the statsmodels breakages are ones
that will actually impact users, or if they're just like, 1 bad utility
function that only gets used in the test suite. (well, probably not the
latter, because they do have different tracebacks). If this is typical
though then we may need to back those integer changes out and replace them
by a really loud obnoxious warning for a release or two :-/ The other
problem here is that statsmodels hasn't done a release since 2014 :-/
I'm going to do a second beta this weekend and will try putting it up on
pypi. The statsmodels failures are a concern; we may need to put off the
transition to integer-only indexes. OTOH, if statsmodels can't fix things
up we will have to deal with that at some point. Apparently we also need
to do something about invisible deprecation warnings. Python changing the
default to ignore them was, IIRC, due to a Python screw-up in backporting
PyCapsule to 2.7 and deprecating PyCObject in the process. The easiest way
out of that hole was painting it over.
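
On the invisible-warnings point, downstream test suites can also opt back
in themselves -- this is just standard library behaviour, nothing
numpy-specific:

    import warnings

    # DeprecationWarning has been ignored by default since Python 2.7/3.2,
    # so deprecations are easy to miss unless the test suite re-enables them.
    warnings.simplefilter("always", DeprecationWarning)   # show every occurrence

    # Or make CI fail loudly the moment a deprecated numpy API is touched:
    # warnings.simplefilter("error", DeprecationWarning)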

Chuck
Andreas Mueller
2016-02-10 16:57:38 UTC
Permalink
Post by Julian Taylor
It would be nice but its not realistic, I doubt most upstreams that are
not themselves major downstreams are even subscribed to this list.
I'm pretty sure that some core devs from all major scipy stack
packages are subscribed to this list.
Well, I don't think anyone else from sklearn picked up on this, and I
myself totally forgot the issue for the last two weeks.


I think continuously testing against numpy master might actually be
feasible for us, but I'm not entirely sure....
