Discussion: [Numpy-discussion] testing numpy with downstream testsuites
Nathaniel Smith
2015-08-26 06:59:30 UTC
[Popping this off to its own thread to try and keep things easier to follow]
- Lament: it would be really nice if we could get more people to
test our beta releases, because in practice right now 1.x.0 ends
up being where we actually discover all the bugs, and 1.x.1 is
where it actually becomes usable. Which sucks, and makes it
difficult to have a solid policy about what counts as a
regression, etc. Is there anything we can do about this?
Just a note in here - have you all thought about running the test suites for
downstream projects as part of the numpy test suite?
I don't think it came up, but it's not a bad idea! The main problems I
can foresee are:
1) Since we don't know the downstream code, it can be hard to
interpret test suite failures. OTOH for changes we're uncertain about we
often already end up running some downstream test suites by hand anyway,
so it can only be an improvement on that...
2) Sometimes everyone including downstream agrees that breaking
something is actually a good idea and they should just deal, but what
do you do then?

These both seem solvable though.

I guess a good strategy would be to compile a travis-compatible wheel
of $PACKAGE version $latest-stable against numpy 1.x, and then in the
1.(x+1) development period numpy would have an additional travis run
which, instead of running the numpy test suite, does:
pip install .
pip install $PACKAGE-$latest-stable.whl
python -c 'import package; package.test()' # adjust as necessary
? Where $PACKAGE is something like scipy / pandas / astropy / ...
matplotlib would be nice but maybe impractical...?
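Concretely, the extra matrix entry could be just a few lines of .travis.yml --
a rough sketch only, with the wheel filename and the test call as placeholders
rather than anything decided:

# one extra Travis job: test a downstream package against this numpy checkout
install:
  - pip install .                                         # the numpy commit under test
  - pip install scipy-0.16.0-cp27-none-linux_x86_64.whl   # placeholder: wheel built against the last numpy release
script:
  - python -c 'import scipy; scipy.test()'                # run the downstream suite instead of numpy's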

Maybe someone else will have objections but it seems like a reasonable
idea to me. Want to put together a PR? Aside from fame and fortune
and our earnest appreciation, your reward is you get to make sure that
the packages you care about are included so that we break them less
often in the future ;-).

-n
--
Nathaniel J. Smith -- http://vorpus.org
Jens Nielsen
2015-08-26 10:38:39 UTC
As a Matplotlib developer I try to test our code manually with all betas
and RCs of new numpy versions.
(I already pushed fixes for a few new deprecation warnings with 1.10beta1,
which otherwise passes our test suite.
I forgot to report this back since there were no issues to report.)
However, we could actually do this automatically if numpy betas were
uploaded as prereleases on PyPI.

We are already using Travis's allow_failures mode to test Python 3.5 betas
and RCs, along with all our dependencies installed with `pip install --pre`:
https://pip.pypa.io/en/latest/reference/pip_install.html#pre-release-versions
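For reference, the relevant pieces of such a setup look roughly like this (a
simplified sketch, not our exact config):

matrix:
  include:
    - python: 3.5
      env: PRE=--pre
  allow_failures:
    - python: 3.5
      env: PRE=--pre            # failures in this entry don't fail the whole build
install:
  - pip install $PRE numpy      # with --pre, pip will pick up prereleases from PyPI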

Putting prereleases on PyPI would thus automate most of the testing of new
numpy versions for us.

Best
Jens
Post by Nathaniel Smith
- Lament: it would be really nice if we could get more people to
test our beta releases, because in practice right now 1.x.0 ends
up being where we actually discover all the bugs [...]
Matthew Brett
2015-08-26 11:14:29 UTC
Hi,
Post by Nathaniel Smith
I guess a good strategy would be to compile a travis-compatible wheel
of $PACKAGE version $latest-stable against numpy 1.x [...]
Maybe someone else will have objections but it seems like a reasonable
idea to me. Want to put together a PR?
One simple way to get going would be for the release manager to
trigger a build from this repo:

https://github.com/matthew-brett/travis-wheel-builder

This build would then upload a wheel to:

http://travis-wheels.scikit-image.org/

The downstream packages would then have a test grid that includes an entry
with something like:

pip install -f http://travis-wheels.scikit-image.org --pre numpy
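In a downstream project's .travis.yml that could end up as something like the
following (a sketch only; the package name and test call are placeholders):

env:
  - NUMPY_SPEC="-f http://travis-wheels.scikit-image.org --pre numpy"
install:
  - pip install $NUMPY_SPEC     # prefers a numpy prerelease wheel from the travis-wheels index
  - pip install .               # then build/install the downstream package itself
script:
  - python -c 'import mypackage; mypackage.test()'   # placeholder test entry point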

Cheers,

Matthew
Jeff Reback
2015-08-26 13:03:45 UTC
Pandas has for quite a while had a Travis build where we install numpy
master and then run our test suite.

e.g. here: https://travis-ci.org/pydata/pandas/jobs/77256007

Over the last year this has uncovered a couple of changes that affected
pandas (mainly cases where we were using something deprecated that was then turned off :)

This was pretty simple to set up. Note that this adds 2+ minutes to the
build (though our builds take a while anyhow so it's not a big deal).
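Stripped down, that build entry amounts to roughly the following (a sketch,
not our exact .travis.yml):

install:
  - pip install cython                                    # needed to build from source
  - pip install git+https://github.com/numpy/numpy.git    # numpy master, built from the git checkout
  - python setup.py develop                               # build pandas against it
script:
  - python -c 'import pandas; pandas.test()'              # run the pandas test suite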
Post by Matthew Brett
One simple way to get going would be for the release manager to
trigger a build from this repo:
https://github.com/matthew-brett/travis-wheel-builder
[...]
pip install -f http://travis-wheels.scikit-image.org --pre numpy