Discussion:
64-bit windows numpy / scipy wheels for testing
Matthew Brett
2014-04-24 21:56:47 UTC
Hi,

Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.

The build uses Carl's custom mingw-w64 build with static linking.

There are two harmless test failures on scipy (being discussed on the
list at the moment) - tests otherwise clean.

Wheels are here:

https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl

You can test with:

pip install -U pip # to upgrade pip to latest
pip install -f https://nipy.bic.berkeley.edu/scipy_installers numpy scipy

Please do send feedback.

ATLAS binary here:

https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/atlas-64-full-sse2.tar.bz2

Many thanks to Carl in particular for doing all the hard work,

Cheers,

Matthew
Julian Taylor
2014-04-24 22:35:39 UTC
Post by Matthew Brett
Hi,
Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
The build uses Carl's custom mingw-w64 build with static linking.
There are two harmless test failures on scipy (being discussed on the
list at the moment) - tests otherwise clean.
This is great news, thanks for working on this.

Have you already documented the procedure used to create the wheels?
I would like to be able to reproduce the builds.

Is it possible to add this toolchain and build procedure to the
vagrant/fabric based numpy release virtual machine setup?
The current version doing linux + win32 builds is available here:
https://github.com/juliantaylor/numpy-vendor

The windows builds are done in a linux guest using wine. Wine also seems
to support win64.

The Mac build procedure would also need updating.

Cheers,
Julian
Charles R Harris
2014-04-24 23:00:58 UTC
Hi Matthew,
Post by Matthew Brett
Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
<snip>
Cool. After all these long years... Now all we need is a box running tests
for CI.

Chuck
Nathaniel Smith
2014-04-24 23:06:18 UTC
On Fri, Apr 25, 2014 at 12:00 AM, Charles R Harris
Post by Charles R Harris
Cool. After all these long years... Now all we need is a box running tests
for CI.
There is
http://www.appveyor.com/
though I haven't tried doing anything with it yet... (yes it says
".NET" at the top, but then at the bottom it says that this is a lie
and it doesn't care what kind of project you have)

-n
--
Nathaniel J. Smith
Postdoctoral researcher - Informatics - University of Edinburgh
http://vorpus.org
j***@gmail.com
2014-04-24 23:08:29 UTC
Post by Charles R Harris
Hi Matthew,
Post by Matthew Brett
<snip>
Cool. After all these long years... Now all we need is a box running tests
for CI.
Chuck
_______________________________________________
NumPy-Discussion mailing list
http://mail.scipy.org/mailman/listinfo/numpy-discussion
I get two test failures with numpy

Josef
>>> np.test()
Running unit tests for numpy
NumPy version 1.8.1
NumPy is installed in C:\Python27\lib\site-packages\numpy
Python version 2.7.3 (default, Apr 10 2012, 23:24:47) [MSC v.1500 64 bit
(AMD64)]
nose version 1.1.2

======================================================================
FAIL: test_iterator.test_iter_broadcasting_errors
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
self.test(*self.arg)
File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py",
line 657, in test_iter_broadcasting_errors
'(2)->(2,newaxis)') % msg)
File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 44, in
assert_
raise AssertionError(msg)
AssertionError: Message "operands could not be broadcast together with
remapped shapes [original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and
requested shape (4,3)" doesn't contain remapped operand
shape(2)->(2,newaxis)

======================================================================
FAIL: test_iterator.test_iter_array_cast
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:\Python27\lib\site-packages\nose\case.py", line 197, in runTest
self.test(*self.arg)
File "C:\Python27\lib\site-packages\numpy\core\tests\test_iterator.py",
line 836, in test_iter_array_cast
assert_equal(i.operands[0].strides, (-96,8,-32))
File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 255, in
assert_equal
assert_equal(actual[k], desired[k], 'item=%r\n%s' % (k, err_msg),
verbose)
File "C:\Python27\lib\site-packages\numpy\testing\utils.py", line 317, in
assert_equal
raise AssertionError(msg)
AssertionError:
Items are not equal:
item=0

ACTUAL: 96L
DESIRED: -96

----------------------------------------------------------------------
Ran 4828 tests in 46.306s

FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
<nose.result.TextTestResult run=4828 errors=0 failures=2>
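[Editorial note] The first failure above appears to come down to a single comma: the error message contains "(2,)->(2,newaxis)" while the test asserts the substring "(2)->(2,newaxis)". A minimal pure-Python sketch of that substring check (message text copied from the traceback above; this is not numpy's actual test code):

```python
# Error message produced by this build, as reported in the traceback above.
msg = ("operands could not be broadcast together with remapped shapes "
       "[original->remapped]: (2,3)->(2,3) (2,)->(2,newaxis) and "
       "requested shape (4,3)")

# What the message actually contains vs. what the test looked for:
print("(2,)->(2,newaxis)" in msg)  # True
print("(2)->(2,newaxis)" in msg)   # False -> the reported AssertionError
```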
Charles R Harris
2014-04-24 23:20:30 UTC
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <
Post by Charles R Harris
<snip>
I get two test failures with numpy
Josef
>>> np.test()
<snip>
FAIL: test_iterator.test_iter_broadcasting_errors
<snip>
FAIL: test_iterator.test_iter_array_cast
<snip>
ACTUAL: 96L
DESIRED: -96
<snip>
FAILED (KNOWNFAIL=10, SKIP=8, failures=2)
Strange. That second one looks familiar, at least the "-96" part. Wonder
why this doesn't show up with the MKL builds.
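[Editorial note] The magnitude of that "-96" is just the C-order byte stride of the first axis; the sign encodes iteration direction. A small pure-Python sketch of the stride arithmetic (8-byte float64 items assumed; the actual array setup lives in test_iter_array_cast):

```python
# Byte strides of a C-contiguous (2, 3, 4) array of 8-byte items:
# the last axis moves by itemsize, each earlier axis by the product
# of the dimensions to its right.
itemsize = 8
shape = (2, 3, 4)
strides = []
acc = itemsize
for dim in reversed(shape):
    strides.insert(0, acc)
    acc *= dim
print(tuple(strides))  # (96, 32, 8)
# Reversing an axis negates its stride; the test expects -96 for the
# first axis, while the failing build reported +96 (sign lost).
```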

Chuck
j***@gmail.com
2014-04-24 23:29:38 UTC
Post by Charles R Harris
<snip>
Strange. That second one looks familiar, at least the "-96" part. Wonder
why this doesn't show up with the MKL builds.
ok tried again, this time deleting the old numpy directories before
installing

Ran 4760 tests in 42.124s

OK (KNOWNFAIL=10, SKIP=8)
<nose.result.TextTestResult run=4760 errors=0 failures=0>


so pip also seems to be reusing leftover files.

all clear.

Josef
j***@gmail.com
2014-04-24 23:47:20 UTC
OT: Oh, I hate pip and packages that require numpy

E:\tmp>C:\Python27\python C:\Python27\Scripts\pip-script.py install -U patsy
Downloading/unpacking patsy
Running setup.py
(path:c:\users\josef\appdata\local\temp\pip_build_josef\patsy\setup.py)
egg_info for package patsy

no previously-included directories found matching 'doc\_build'
Downloading/unpacking numpy from
https://pypi.python.org/packages/source/n/numpy/numpy-1.8.1.tar.gz#md5=be95babe263bfa3428363d6db5b64678 (from patsy)
....

Found existing installation: numpy 1.6.2
Uninstalling numpy:
Successfully uninstalled numpy
Running setup.py install for numpy
...

...

C:\Python27\lib\distutils\dist.py:267: UserWarning: Unknown distribution
option: 'define_macros'

warnings.warn(msg)

error: Unable to find vcvarsall.bat

----------------------------------------
Rolling back uninstall of numpy
Cleaning up...


-------------------------
user error ?
I have a stale numpy-1.6.2-py2.7.egg-info file in site-packages

but that's a nice new feature of pip: "Rolling back uninstall of numpy"
numpy is still here

Josef
j***@gmail.com
2014-04-25 00:26:56 UTC
On Thu, Apr 24, 2014 at 7:20 PM, Charles R Harris <
Post by Charles R Harris
<snip>
so pip also seems to be reusing leftover files.
all clear.
Running the statsmodels test suite, I get a failure
in test_discrete.TestProbitCG where fmin_cg converges to something that
differs in the 3rd decimal.

I usually only test the 32-bit version, so I don't know if this is specific
to this scipy version, but we haven't seen this in a long time.
I used our nightly binaries http://statsmodels.sourceforge.net/binaries/

Josef
Matthew Brett
2014-04-25 05:21:15 UTC
Hi,
Post by j***@gmail.com
<snip>
Running the statsmodels test suite, I get a failure in
test_discrete.TestProbitCG where fmin_cg converges to something that differs
in the 3rd decimal.
I usually only test the 32-bit version, so I don't know if this is specific
to this scipy version, but we haven't seen this in a long time.
I used our nightly binaries http://statsmodels.sourceforge.net/binaries/
That's interesting - as you saw, we're also getting failures on the
tests for powell optimization because of small unit-in-the-last-place
differences in the exp function in mingw-w64. Is there any chance you
can track down where the optimization path is diverging and why?
It's just that - if this is also the exp function - maybe we can check
whether the error exceeds reasonable bounds, then feed back to
mingw-w64 and fall back to the numpy default implementation in the
meantime.
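[Editorial note] The "reasonable bounds" check mentioned here could be sketched in pure Python as a count of units in the last place (ULPs) between two doubles, assuming IEEE-754 binary64; this is only an illustration, not what the numpy test suite actually does:

```python
import math
import struct

def ulps_apart(a, b):
    """Count representable doubles between a and b (finite, same sign)."""
    ia = struct.unpack("<q", struct.pack("<d", a))[0]
    ib = struct.unpack("<q", struct.pack("<d", b))[0]
    return abs(ia - ib)

# Identical values are 0 ulps apart; adjacent doubles are 1 ulp apart.
print(ulps_apart(1.0, 1.0))                       # 0
print(ulps_apart(1.0, math.nextafter(1.0, 2.0)))  # 1 (nextafter: Python 3.9+)
```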

Cheers,

Matthew
j***@gmail.com
2014-04-26 14:10:47 UTC
Post by Matthew Brett
<snip>
That's interesting, you saw also we're getting failures on the tests
for powell optimization because of small unit-at-last-place
differences in the exp function in mingw-w64. Is there any chance you
can track down where the optimization path is diverging and why?
It's just that - if this is also the exp function maybe we can see if
the error is exceeding reasonable bounds and then feed back to
mingw-w64 and fall back to the numpy default implementation in the
meantime.
I'm a bit doubtful it's exp; the probit model is based on the normal
distribution and has an exp only in the gradient via norm._pdf - the
objective function uses norm._cdf.

I can look into it.
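[Editorial note] For context, pure-stdlib stand-ins for the two functions named above (illustrative equivalents of scipy's norm._cdf and norm._pdf, not statsmodels' actual code) show where exp does and does not enter:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via erf: no exp call on the objective's path.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    # Standard normal PDF: this is where exp enters the gradient.
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

print(norm_cdf(0.0))  # 0.5
print(norm_pdf(0.0))  # 0.3989... = 1/sqrt(2*pi)
```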

However:
We don't use fmin_cg for anything by default; it's part of "testing all
supported scipy optimizers", and we had problems with it before on various
machines: https://github.com/statsmodels/statsmodels/issues/109
The test was completely disabled on Windows for a while, and I might have
to turn some screws again.

I'm fighting with more serious problems with fmin_slsqp and fmin_bfgs,
which we really need to use.

If minor precision issues matter, then the code is not "robust" and should
be fixed.

I'm fighting more with the large scale properties of exp than with
precision issues:
https://github.com/scipy/scipy/issues/3581


Nevertheless,
I would really like to know why I'm running into many platform differences
and problems with scipy.optimize.

Cheers,

Josef
j***@gmail.com
2014-04-26 14:20:28 UTC
Post by j***@gmail.com
<snip>
Running the statsmodels test suite, I get a failure in
test_discrete.TestProbitCG, where fmin_cg converges to something that
differs in the 3rd decimal.
I usually only test the 32-bit version, so I don't know if this is
specific to this scipy version, but we haven't seen this in a long time.
I used our nightly binaries
http://statsmodels.sourceforge.net/binaries/
That's interesting - you saw also that we're getting failures on the
tests for Powell optimization because of small unit-in-the-last-place
differences in the exp function in mingw-w64. Is there any chance you
can track down where the optimization path is diverging and why?
It's just that - if this is also the exp function, maybe we can see if
the error is exceeding reasonable bounds, and then feed back to
mingw-w64 and fall back to the numpy default implementation in the
meantime.
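For reference, "unit in the last place" differences like the ones discussed here can be measured directly; a minimal sketch in Python (the perturbed exp value below is just a stand-in for results from two different libm builds):

```python
import math  # math.ulp requires Python 3.9+


def ulp_distance(a, b):
    """Distance between a and b, measured in units of the last place (ulps)."""
    return abs(a - b) / math.ulp(max(abs(a), abs(b)))


# Stand-in for exp() results from two runtimes that agree to about one ulp:
x = math.exp(1.2345)
y = x * (1.0 + 2.0 ** -52)  # perturb by about one part in 2**52
print(ulp_distance(x, y))   # a small number of ulps
```

A tolerance expressed in ulps, rather than an absolute epsilon, is the usual way to decide whether such libm differences are within reasonable bounds.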
I'm a bit doubtful it's exp: the probit model is based on the normal
distribution and has an exp only in the gradient via norm._pdf; the
objective function uses norm._cdf.
I can look into it.
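To make that point concrete, here is a minimal self-contained sketch (not the statsmodels implementation) of a single-observation probit log-likelihood and score: the objective goes through the normal cdf via erf, and exp() only enters the score through the normal pdf:

```python
import math


def norm_cdf(z):
    # Phi(z) via erf - no call to exp() on this path
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def norm_pdf(z):
    # phi(z) - this is where exp() enters, hence only the gradient
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


def probit_loglike_and_score(beta, x, y):
    """Log-likelihood and score contribution of one observation (y in {0, 1})."""
    z = sum(b * xi for b, xi in zip(beta, x))
    p = norm_cdf(z)
    ll = math.log(p) if y else math.log(1.0 - p)
    w = norm_pdf(z) / p if y else -norm_pdf(z) / (1.0 - p)
    score = [w * xi for xi in x]
    return ll, score
```

For beta = [0] and x = [1] with y = 1, the log-likelihood is log(0.5) and the score is 2/sqrt(2*pi), which makes it easy to check the two code paths separately.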
We don't use fmin_cg for anything by default, it's part of "testing all
supported scipy optimizers" and we had problems with it before on various
machines: https://github.com/statsmodels/statsmodels/issues/109
The test was completely disabled on Windows for a while, and I might have
to turn some screws again.
I'm fighting with more serious problems with fmin_slsqp and fmin_bfgs,
which we really need to use.
If minor precision issues matter, then the code is not "robust" and should
be fixed.
compared to precision issues. I'm fighting more with the large scale
properties of exp.
https://github.com/scipy/scipy/issues/3581
Nevertheless,
I would really like to know why I'm running into many platform differences
and problems with scipy.optimize.
To avoid giving a wrong impression:

Scipy.optimize works in general very well for statsmodels, we use it
heavily and we have a large set of test cases for it.
It's just the last 5% or so of cases where I spend a considerable amount of
time figuring out how to get around convergence problems, which are
sometimes platform specific and sometimes not.

Josef
Post by j***@gmail.com
Cheers,
Josef
Post by Matthew Brett
Cheers,
Matthew
j***@gmail.com
2014-04-26 16:01:45 UTC
Permalink
Post by j***@gmail.com
I'm a bit doubtful it's exp, the probit model is based on the normal
distribution and has an exp only in the gradient via norm._pdf, the
objective function uses norm._cdf.
I can look into it.
With the 32-bit official binaries (MinGW 32):

Warning: Desired error not necessarily achieved due to precision loss.
Current function value: 0.400588
Iterations: 75
Function evaluations: 213
Gradient evaluations: 201

relative and absolute deviation from "desired"
[ -1.26257296e-05 -4.77535711e-05 -9.93794940e-06 -1.78815725e-05]
[ -2.05270407e-05 -2.47024202e-06 -1.41748189e-05 1.33259208e-04]


With your wheels, after increasing maxiter in the test case:

Optimization terminated successfully.
Current function value: 0.400588
Iterations: 766
Function evaluations: 1591
Gradient evaluations: 1591

relative and absolute deviation from "desired"
[ -1.57311713e-07 -4.25324806e-08 -3.01557919e-08 -1.19794357e-07]
[ -2.55758996e-07 -2.20016050e-09 -4.30121820e-08 8.92745931e-07]

So actually the 64 bit wheel has the better final result, and just needs
more iterations to get close enough to what we had required in the unit
tests.

The trace of the 64-bit version seems to slow down in its movement, but
then doesn't run into the "precision loss".
j***@gmail.com
2014-04-24 23:20:37 UTC
Permalink
On Thu, Apr 24, 2014 at 7:00 PM, Charles R Harris <
Post by Charles R Harris
Hi Matthew,
Cool. After all these long years... Now all we need is a box running
tests for CI.
Chuck
I get two test failures with numpy
scipy looks good, just two powell trace failures

Josef
j***@gmail.com
2014-04-25 00:33:46 UTC
Permalink
Post by Charles R Harris
Hi Matthew,
Cool. After all these long years... Now all we need is a box running tests
for CI.
Very good news - after a 3-year interruption I might be able to build scipy
again, and now switch to a 64-bit development version.

Thanks for pushing for this, and doing all the hard work.

Josef
Post by Charles R Harris
Chuck
Sturla Molden
2014-04-25 05:57:39 UTC
Permalink
Post by Matthew Brett
Thanks to Cark Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
Thanks for your great effort to solve this mess.

By Murphy's law, I do not have access to a Windows computer on which to
test now. :-(

This approach worries me a bit though: Will we have to maintain a fork of
MinGW-w64 for building NumPy and SciPy? Should this toolset be distributed
along with NumPy and SciPy on Windows? I presume it is needed to build C
and Cython extensions?

On the positive side: Does this mean we finally can use gfortran on
Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy
now? Or is Mac OS X a blocker?

Sturla
Pauli Virtanen
2014-04-26 17:01:29 UTC
Permalink
25.04.2014 08:57, Sturla Molden wrote:
[clip]
Post by Sturla Molden
On the positive side: Does this mean we finally can use gfortran on
Windows? And if so, can we use Fortran versions beyond Fortran 77 in SciPy
now? Or is Mac OS X a blocker?
Yes, Windows is the only platform on which Fortran was problematic. OSX
is somewhat saner in this respect.
--
Pauli Virtanen
Sturla Molden
2014-04-27 22:39:51 UTC
Permalink
Post by Pauli Virtanen
Yes, Windows is the only platform on which Fortran was problematic. OSX
is somewhat saner in this respect.
Oh yes, it seems there are official "unofficial gfortran binaries"
available for OSX:

http://gcc.gnu.org/wiki/GFortranBinaries#MacOS

Cool :)


Sturla
Ralf Gommers
2014-04-28 15:39:58 UTC
Permalink
Post by Sturla Molden
Post by Pauli Virtanen
Yes, Windows is the only platform on which Fortran was problematic. OSX
is somewhat saner in this respect.
Oh yes, it seems there are official "unofficial gfortran binaries"
http://gcc.gnu.org/wiki/GFortranBinaries#MacOS
I'd be interested to hear if those work well for you. For people that just
want to get things working, I would recommend to use the gfortran
installers recommended at
http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython.
Those work for sure, and alternatives have usually proven to be problematic
in the past.

Ralf
Post by Sturla Molden
Cool :)
Sturla
Sturla Molden
2014-04-28 16:06:31 UTC
Permalink
Post by Ralf Gommers
I'd be interested to hear if those work well for you. For people that just
want to get things working, I would recommend to use the gfortran
installers recommended at
http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython.
Those work for sure, and alternatives have usually proven to be problematic
in the past.
No problems thus far, but I only installed it yesterday. :-)

I am not sure gcc-4.2 is needed anymore. Apple has retired it as platform C
compiler on OS X. We need a Fortran compiler that can be used together with
clang as C compiler.

Sturla
Ralf Gommers
2014-04-28 16:21:59 UTC
Permalink
Post by Ralf Gommers
I'd be interested to hear if those work well for you. For people that just
want to get things working, I would recommend to use the gfortran
installers recommended at
http://scipy.org/scipylib/building/macosx.html#compilers-c-c-fortran-cython.
Those work for sure, and alternatives have usually proven to be problematic
in the past.
No problems thus far, but I only installed it yesterday. :-)
Sounds good. Let's give it a bit more time; once you've given it a good
workout, we can add a note to the scipy build instructions that those
gfortran 4.8.x compilers seem to work fine.

I am not sure gcc-4.2 is needed anymore. Apple has retired it as the
platform C compiler on OS X. We need a Fortran compiler that can be used
together with clang as the C compiler.
Clang together with gfortran 4.2 works fine on OS X 10.9.

Ralf
Sturla Molden
2014-04-28 17:22:19 UTC
Permalink
Post by Ralf Gommers
Sounds good. Let's give it a bit more time, once you've given it a good
workout we can add that those gfortran 4.8.x compilers seem to work fine to
the scipy build instructions.
Yes, it needs to be tested properly.

The build instructions for OS X Mavericks should also mention where to
obtain Xcode (App Store) and the secret command to retrieve the
command-line utils after Xcode is installed:

$ /usr/bin/xcode-select --install

Probably it should also mention how to use alternative BLAS and LAPACK
versions (MKL and OpenBLAS), although all three are equally performant on
Mavericks (except Accelerate is not fork safe):

https://twitter.com/nedlom/status/437427557919891457

Sturla
Sturla Molden
2014-04-28 20:44:53 UTC
Permalink
Post by Sturla Molden
No problems thus far, but I only installed it yesterday. :-)
Sounds good. Let's give it a bit more time, once you've given it a good
workout we can add that those gfortran 4.8.x compilers seem to work fine
to the scipy build instructions.
I have not looked at building SciPy yet, but I was able to build MPICH
3.0.4 from source without a problem. It worked on the first attempt
without any errors or warnings. That is more than I hoped for...

Using BLAS and LAPACK from Accelerate also worked correctly with flags
-ff2c and -framework Accelerate. I can use it from Python (NumPy) with
ctypes and Cython. I get correct results and it does not segfault.

(It does segfault without -ff2c, but that is as expected, given that
Accelerate has f2c/g77 ABI.)

I was also able to build OpenBLAS with Clang as C compiler and gfortran
as Fortran compiler. It works correctly as well (both the build process
and the binaries I get).

So far it looks damn good :-)

The next step is to build NumPy and SciPy and run some tests :-)

Sturla





P.S. Here is what I did to build MPICH from source, for those interested:

$ ./configure CC=clang CXX=clang++ F77=gfortran FC=gfortran \
    --enable-fast=all,O3 --with-pm=gforker --prefix=/opt/mpich
$ make
$ sudo make install

$ export PATH="/opt/mpich/bin:$PATH" # actually in ~/.bash_profile

Now testing with some hello worlds:

$ mpif77 -o hello hello.f
$ mpiexec -np 4 ./hello
Hello world
Hello world
Hello world
Hello world


$ rm hello
$ mpicc -o hello hello.c
$ mpiexec -np 4 ./hello
Hello world from process 0 of 4
Hello world from process 1 of 4
Hello world from process 2 of 4
Hello world from process 3 of 4


The hello world programs looked like this:

#include <stdio.h>
#include <mpi.h>

int main (int argc, char *argv[])
{
int rank, size;
MPI_Init (&argc, &argv);
MPI_Comm_rank (MPI_COMM_WORLD, &rank);
MPI_Comm_size (MPI_COMM_WORLD, &size);
printf( "Hello world from process %d of %d\n", rank, size);
MPI_Finalize();
return 0;
}

program hello_world
include 'mpif.h'
integer ierr
call MPI_INIT(ierr)
print *, "Hello world"
call MPI_FINALIZE(ierr)
stop
end
Carl Kleffner
2014-04-26 18:10:08 UTC
Permalink
Hi,

basically the toolchain was created with a local fork of the "mingw-builds"
build process, along with some addons and patches. It is NOT a mingw-w64
fork. BTW: there are numerous mingw-w64 based toolchains out there, most of
them built without any information about the build process and the patches
they used.

As long as the "mingw-builds" maintainers continue working on their
project, maintaining a usable toolchain for Python development on Windows
should be feasible.

More details are given here:
http://article.gmane.org/gmane.comp.python.numeric.general/57446

Regards

Carl
Matthew Brett
2014-04-27 07:25:58 UTC
Permalink
Hi Carl,
Post by Carl Kleffner
Hi,
basically the toolchain was created with a local fork of the "mingw-builds"
build process, along with some addons and patches. It is NOT a mingw-w64
fork. BTW: there are numerous mingw-w64 based toolchains out there, most of
them built without any information about the build process and the patches
they used.
As long as the "mingw-builds" maintainers continue working on their project,
maintaining a usable toolchain for Python development on Windows should be
feasible.
http://article.gmane.org/gmane.comp.python.numeric.general/57446
I hope you don't mind, but I took the liberty of putting some of your
email explanations and notes into the numpy wiki:

https://github.com/numpy/numpy/wiki/Mingw-w64-faq
https://github.com/numpy/numpy/wiki/Mingw-static-toolchain

Do you have anywhere a description of what you did to create your fork
of the build process?

Maybe we can automate this using a Fedora or other cross-compiler?

I think we need to make sure that the build system need not die if you
get hired by some great company and can't work on this anymore. Do
you think that is possible?

Thanks again for the all the hard work you've done here. I think we
are getting very close to a good solution, and that has seemed a long
way off until now...

Cheers,

Matthew
Pauli Virtanen
2014-04-27 13:09:27 UTC
Permalink
Hi,

25.04.2014 00:56, Matthew Brett wrote:
Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
Where can I get your

numpy.patch
scipy.patch

and what's in them?

Cheers,
Pauli
Matthew Brett
2014-04-27 18:29:58 UTC
Permalink
Hi,
Post by Pauli Virtanen
Hi,
25.04.2014 00:56, Matthew Brett wrote:
Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
Where can I get your
numpy.patch
scipy.patch
They are Carl's patches - here:

https://bitbucket.org/carlkl/mingw-w64-for-python/downloads

The scipy patch is tiny, the numpy patch more substantial.

Carl - any interest in working up a pull request for these? I'm
happy to do it if you don't have time.

Cheers,

Matthew
Carl Kleffner
2014-04-27 21:34:03 UTC
Permalink
Hi,

I will definitely not have time until Thursday this week to work out the
GitHub workflow for a numpy pull request. So feel free to do it for me.

BTW: There is a missing feature in the mingw-w64 toolchain. By now it
features linking against the msvcr90 runtime only. I have to extend the
specs file to allow linking against msvcr100 with an additional flag, or
create a dedicated toolchain - what do you think?


Cheers,

Carl
Post by Matthew Brett
Hi,
Post by Pauli Virtanen
Hi,
25.04.2014 00:56, Matthew Brett wrote:
Thanks to Carl Kleffner's toolchain and some help from Clint Whaley
(main author of ATLAS), I've built 64-bit windows numpy and scipy
wheels for testing.
Where can I get your
numpy.patch
scipy.patch
https://bitbucket.org/carlkl/mingw-w64-for-python/downloads
The scipy patch is tiny, the numpy patch more substantial.
Carl - any interest in working up a pull request for these? I'm
happy to do it if you don't have time.
Cheers,
Matthew
Matthew Brett
2014-04-27 21:46:52 UTC
Permalink
Hi,
Post by Carl Kleffner
Hi,
I will definitely not have time until Thursday this week to work out the
GitHub workflow for a numpy pull request. So feel free to do it for me.
OK - I will have a go at this tomorrow.
Post by Carl Kleffner
BTW: There is a missing feature in the mingw-w64 toolchain. By now it
features linking against the msvcr90 runtime only. I have to extend the
specs file to allow linking against msvcr100 with an additional flag, or
create a dedicated toolchain - what do you think?
I don't know.

Is this a discussion that should go to the mingw-w64 list, do you
think? It must be a very common feature.

As you know, I'm really hoping it will be possible to make a devkit for
Python similar to the Ruby devkits [1].

The ideal would be a devkit that transparently picked up 32 vs 64 bit,
and MSVC runtime according to the Python version. For example, OSX
compilation automatically picks up the OSX SDK with which the relevant
Python was built. Do you think something like this is possible? That
would be a great improvement for people building extensions and wheels
on Windows.
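A sketch of how such a devkit could pick the right flavour automatically, using only what CPython itself reports (the VS-version-to-runtime mapping below covers just the two common cases, and the function name is my own, not part of any existing devkit):

```python
import sys


def python_build_info():
    """Return (bits, msvc_runtime) for the running CPython interpreter."""
    bits = 64 if sys.maxsize > 2 ** 32 else 32
    # On Windows, CPython embeds the MSC version in sys.version,
    # e.g. "2.7.3 (...) [MSC v.1500 64 bit (AMD64)]".
    runtime = None
    marker = "MSC v."
    if marker in sys.version:
        msc_ver = int(sys.version.split(marker)[1].split()[0])
        # v.1500 = VS2008 -> msvcr90; v.1600 = VS2010 -> msvcr100
        runtime = {1500: "msvcr90", 1600: "msvcr100"}.get(msc_ver)
    return bits, runtime
```

On a non-MSVC build (e.g. Linux or a mingw-built Python) the runtime comes back as None, which a devkit could treat as "use the toolchain default".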

Cheers,

Matthew

[1] http://rubyinstaller.org/add-ons/devkit/
Carl Kleffner
2014-04-27 22:06:39 UTC
Permalink
A possible option is to install the toolchain inside site-packages and to
deploy it as PyPI wheel or wininst packages. The PATH to the toolchain
could be extended during import of the package. But I have no idea what's
the best strategy to additionally install ATLAS or other third-party
libraries.
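A sketch of the PATH-extension idea, as a hypothetical package's __init__ might do it on import (the bin/ layout and the function name are assumptions for illustration, not Carl's actual design):

```python
import os


def add_toolchain_to_path(package_dir):
    """Prepend a bundled toolchain's bin/ directory to PATH, if it exists."""
    bin_dir = os.path.join(package_dir, "bin")
    if os.path.isdir(bin_dir):
        os.environ["PATH"] = bin_dir + os.pathsep + os.environ.get("PATH", "")
    return os.environ.get("PATH", "")
```

Because the change only lives in os.environ, it affects the current process and its children (e.g. a spawned gcc), without touching the system-wide PATH.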

Cheers,

Carl
Post by Matthew Brett
Hi,
Post by Carl Kleffner
Hi,
I will definitely not have time until Thursday this week to work out the
GitHub workflow for a numpy pull request. So feel free to do it for me.
OK - I will have a go at this tomorrow.
Post by Carl Kleffner
BTW: There is a missing feature in the mingw-w64 toolchain. By now it
features linking against the msvcr90 runtime only. I have to extend the
specs file to allow linking against msvcr100 with an additional flag, or
create a dedicated toolchain - what do you think?
I don't know.
Is this a discussion that should go to the mingw-w64 list do you
think? It must be a very common feature.
As you know, I'm really hoping it will be possible to make a devkit for
Python similar to the Ruby devkits [1].
The ideal would be a devkit that transparently picked up 32 vs 64 bit,
and MSVC runtime according to the Python version. For example, OSX
compilation automatically picks up the OSX SDK with which the relevant
Python was built. Do you think something like this is possible? That
would be a great improvement for people building extensions and wheels
on Windows.
Cheers,
Matthew
[1] http://rubyinstaller.org/add-ons/devkit/
Matthew Brett
2014-04-27 22:19:03 UTC
Permalink
Hi,
Post by Carl Kleffner
A possible option is to install the toolchain inside site-packages and to
deploy it as PyPI wheel or wininst packages. The PATH to the toolchain
could be extended during import of the package. But I have no idea what's
the best strategy to additionally install ATLAS or other third-party
libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the
devkit package. It sounds like OpenBLAS will be much easier to build,
so we could start with ATLAS binaries as a default, expecting OpenBLAS
to be built more often with the toolchain. I think that's how numpy
binary installers are built at the moment - using old binary builds of
ATLAS.

I'm happy to provide the builds of ATLAS - e.g. here:

https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds

I can also give access to the dedicated machine doing the builds.

Cheers,

Matthew
Matthew Brett
2014-04-27 22:50:44 UTC
Permalink
Aha,
Post by Matthew Brett
Hi,
Post by Carl Kleffner
A possible option is to install the toolchain inside site-packages and to
deploy it as PyPI wheel or wininst packages. The PATH to the toolchain
could be extended during import of the package. But I have no idea what's
the best strategy to additionally install ATLAS or other third-party
libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the
devkit package. It sounds like OpenBLAS will be much easier to build,
so we could start with ATLAS binaries as a default, expecting OpenBLAS
to be built more often with the toolchain. I think that's how numpy
binary installers are built at the moment - using old binary builds of
ATLAS.
https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds
I just found the official numpy binary builds of ATLAS:

https://github.com/numpy/vendor/tree/master/binaries

But - they are from an old version of ATLAS / Lapack, and only for 32-bit.

David - what say we update these to latest ATLAS stable?

Cheers,

Matthew
David Cournapeau
2014-04-28 22:29:31 UTC
Permalink
Post by Matthew Brett
<snip>
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval !).

How easy is it to build ATLAS targeting a specific CPU these days? I
think we need to at least support nosse and sse2 and above.

David
Matthew Brett
2014-05-09 00:51:28 UTC
Permalink
Hi,
Post by David Cournapeau
<snip>
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval !).
How easy is it to build ATLAS targeting a specific CPU these days? I think
we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I
think Clint will have some time to help out next week.

I did some analysis of SSE2 prevalence here:

https://github.com/numpy/numpy/wiki/Window-versions

Firefox crash reports now have about 1 percent of machines without
SSE2. I suspect that people running new installs of numpy will have
slightly better machines on average than Firefox users, but it's only
a guess.

I wonder if we could add a CPU check on numpy import to give a polite
'install from the exe' message for people without SSE2.

Cheers,

Matthew
Sturla Molden
2014-05-09 01:11:53 UTC
Permalink
Post by Matthew Brett
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without
SSE2. I suspect that people running new installs of numpy will have
slightly better machines on average than Firefox users, but it's only
a guess.
Ok, so that is 1 % of Windows users.

https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
Post by Matthew Brett
I wonder if we could add a CPU check on numpy import to give a polite
'install from the exe' message for people without SSE2.
Supporting Pentium II and Pentium III might not be the highest priority
today. I would say just let the install fail and tell them to compile
from source.


Sturla
David Cournapeau
2014-05-09 10:42:05 UTC
Permalink
Post by Matthew Brett
<snip>
https://github.com/numpy/numpy/wiki/Window-versions
Firefox crash reports now have about 1 percent of machines without
SSE2. I suspect that people running new installs of numpy will have
slightly better machines on average than Firefox users, but it's only
a guess.
I wonder if we could add a CPU check on numpy import to give a polite
'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only
(as you need some ASM).

I can take a quick look at a simple cython extension that could be imported
before anything else, and would raise an ImportError if the wrong arch is
detected.

David
Julian Taylor
2014-05-09 10:49:42 UTC
Permalink
Hi,
<snip>
I wonder if we could add a CPU check on numpy import to give a polite
'install from the exe' message for people without SSE2.
We could, although you unfortunately can't do it easily from ctypes only
(as you need some ASM).
I can take a quick look at a simple cython extension that could be
imported before anything else, and would raise an ImportError if the
wrong arch is detected.
assuming mingw is new enough

#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif

in import_array() should do it
David Cournapeau
2014-05-09 11:06:35 UTC
Permalink
On Fri, May 9, 2014 at 11:49 AM, Julian Taylor <
Post by Julian Taylor
Hi,
<snip>
assuming mingw is new enough
#ifdef __SSE2__
raise_if(!__builtin_cpu_supports("sse2"))
#endif
We need to support it for VS as well, but it looks like win32 API has a
function to do it:
http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx

Makes it even easier.

David
Carl Kleffner
2014-05-09 12:19:49 UTC
Permalink
this is from:

http://gcc.gnu.org/onlinedocs/gcc/X86-Built-in-Functions.html

// ifunc resolvers fire before constructors, explicitly call the
// init function.
__builtin_cpu_init ();
if (__builtin_cpu_supports ("sse2"))
  <code>
else
  <code>

Cheers,

Carl
Matthew Brett
2014-05-23 00:41:48 UTC
Permalink
Hi,
Post by David Cournapeau
<snip>
We need to support it for VS as well, but it looks like win32 API has a
http://msdn.microsoft.com/en-us/library/ms724482%28VS.85%29.aspx
Makes it even easier.
Nice. So all we would need is something like:

try:
    from ctypes import windll, wintypes
except (ImportError, ValueError):
    pass
else:
    has_feature = windll.kernel32.IsProcessorFeaturePresent
    has_feature.argtypes = [wintypes.DWORD]
    if not has_feature(10):  # 10 == PF_XMM64_INSTRUCTIONS_AVAILABLE (SSE2)
        msg = ("This version of numpy needs a CPU capable of SSE2, "
               "but Windows says - not so.\n"
               "Please reinstall numpy using a superpack installer")
        raise RuntimeError(msg)

At the top of numpy/__init__.py

What would be the best way of including that code in the 32-bit wheel?
(The 64-bit wheel can depend on SSE2).

Cheers,

Matthew
Ralf Gommers
2014-05-30 14:09:25 UTC
Permalink
Post by Matthew Brett
Hi,
<snip>
At the top of numpy/__init__.py
What would be the best way of including that code in the 32-bit wheel?
(The 64-bit wheel can depend on SSE2).
Maybe write a separate file `_check_win32_sse2.py.in`, and ensure that when
you generate `_check_win32_sse2.py` from setup.py you only end up with the
above code when you go through the
if len(sys.argv) >= 2 and sys.argv[1] == 'bdist_wheel':
branch.
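A rough sketch of that conditional generation (the helper names and the template below are illustrative, not numpy's actual build code):

```python
# Template for the generated _check_win32_sse2.py; only the wheel
# build gets the real runtime check.
SSE2_CHECK_TEMPLATE = """\
try:
    from ctypes import windll, wintypes
except (ImportError, ValueError):
    pass
else:
    has_feature = windll.kernel32.IsProcessorFeaturePresent
    has_feature.argtypes = [wintypes.DWORD]
    if not has_feature(10):  # 10 == PF_XMM64_INSTRUCTIONS_AVAILABLE (SSE2)
        raise RuntimeError("This numpy build requires SSE2; "
                           "please reinstall from a superpack installer")
"""


def building_wheel(argv):
    # Mirror the argv test suggested above for the bdist_wheel branch.
    return len(argv) >= 2 and argv[1] == 'bdist_wheel'


def write_check_module(path, argv):
    # Generate _check_win32_sse2.py: real check for wheels, a no-op otherwise.
    body = SSE2_CHECK_TEMPLATE if building_wheel(argv) else "# no SSE2 check needed\n"
    with open(path, 'w') as f:
        f.write(body)
```

setup.py would call `write_check_module` before collecting package files, so the generated module ships inside the wheel but stays inert in other build modes.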

Ralf
Olivier Grisel
2014-07-02 07:24:44 UTC
Permalink
Hi Matthew and Ralf,

Has anyone managed to build working whl packages for numpy and scipy
on win32 using the static mingw-w64 toolchain?
--
Olivier
Carl Kleffner
2014-07-02 09:36:44 UTC
Permalink
Hi all,

I do regular builds for python-2.7. Due to my limited resources I didn't
build for 3.3 or 3.4 right now. I haven't updated my toolchain since
February, but I do regular builds of OpenBLAS. OpenBLAS is under heavy
development right now, thanks to Werner Saar, see:
https://github.com/wernsaar/OpenBLAS .
A lot of bugs have been ironed out at the cost of performance; see the
kernel TODO list:
https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List
. Many bugs related to Windows have been corrected. One very weird bug, for example:
https://github.com/xianyi/OpenBLAS/issues/394 and
https://github.com/JuliaLang/julia/issues/5574 .
I got the impression that the Julia community (and maybe the R and Octave
communities) is very interested in getting to a stable Windows OpenBLAS.
OpenBLAS is the only free OSS optimized BLAS/LAPACK solution maintained for
Windows today. ATLAS seems not to be maintained for Windows anymore (is
this true, Matthew?)

Somewhat older test wheels for python-2.7 can be downloaded here:
see: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0
(2014-06-10) numpy and scipy wheels for py-2.7
The scipy test suite (amd64) emits segfaults with multithreaded OpenBLAS,
but is stable with a single thread (see the log files). I didn't dig into
this further. Win32 works with MT OpenBLAS, but has some test failures with
atan2 and hypot. That is more or less the status today. I can upload new
wheels linked against a recent OpenBLAS, maybe tomorrow on Binstar.

Regards,

Carl
Matthew Brett
2014-07-02 10:29:07 UTC
Permalink
Hi,
Post by Carl Kleffner
<snip>
OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained for
Windows today. Atlas seems not to be maintained for Windows anymore (is this
true Matthew?)
No, it's not true, but it's not really false either. Clint Whaley is
the ATLAS maintainer, and his interests are firmly in
high-performance computing, so he is much more interested in exotic new
chips than in Windows. But he does aim to make the latest stable
release buildable on Windows, and he's helped me do that for the
latest stable, with some hope he'll continue to work on the 64-bit
Windows kernels which are hobbled at the moment because of differences
in the Windows / other OS 64-bit ABI. Builds here:

https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/
Post by Carl Kleffner
<snip>
I built some 64-bit wheels against Carl's toolchain and the ATLAS
above, I think they don't have any threading issues, but the scipy
wheel fails one scipy test due to some very small precision
differences in the mingw runtime. I think we agreed this failure
wasn't important.

https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl

Cheers,

Matthew
Matthew Brett
2014-07-02 10:37:16 UTC
Permalink
Hi,
Post by Matthew Brett
<snip>
Sorry - I wasn't paying attention - you asked about 32-bit wheels.
Honestly, using the same toolchain, they wouldn't be at all hard to
build.

One issue is that the ATLAS builds depend on SSE2. That isn't an
issue for 64 bit builds because the 64-bit ABI requires SSE2, but it
is an issue for 32-bit where we have no such guarantee. It looks like
99% of Windows users do have SSE2 though [1]. So I think what is
required is

* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give helpful error in absence of SSE2
(fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)

Cheers,

Matthew

[1] https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
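Since a wheel is just a zip archive, the "patch the wheels" step could in principle be done after the build; a rough sketch (the file names are illustrative, and a distribution-quality patch would also need to update the wheel's RECORD metadata):

```python
import shutil
import zipfile


def add_file_to_wheel(wheel_path, arcname, content):
    """Append a file to an existing .whl (zip) archive in place.

    Note: this does not update the RECORD entry, which a real
    wheel patch would also have to do.
    """
    with zipfile.ZipFile(wheel_path, 'a') as zf:
        zf.writestr(arcname, content)


def patch_wheel(src, dst, check_source):
    # Work on a copy so the original wheel stays pristine.
    shutil.copyfile(src, dst)
    add_file_to_wheel(dst, 'numpy/_check_win32_sse2.py', check_source)
```

The appended module would then need to be imported from numpy/__init__.py for the check to actually run.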
Carl Kleffner
2014-07-02 11:18:07 UTC
Permalink
Hi,

The mingw-w64 based wheels (ATLAS and OpenBLAS) are based on a patched
numpy version that hasn't been submitted as a numpy pull request for
review until now (my failure). I could try to do this tomorrow in the
evening. Another important point is that the toolchain that is capable of
compiling numpy/scipy was adapted to allow for MSVC / mingw runtime
compatibility and does not create any gcc/mingw runtime dependency anymore.

OpenBLAS has one advantage over ATLAS: numpy/scipy are linked dynamically
against OpenBLAS. Statically linked BLAS like MKL or ATLAS create huge
Python extensions and have considerably higher memory consumption compared
to dynamic linkage. On the other hand, correctness is more important, so
ATLAS has to be preferred for now.

Users with non-SSE2 processors could be provided with wheels distributed on
Binstar.

Regards

Carl
Post by Carl Kleffner
Hi,
Post by Matthew Brett
Hi,
Post by Carl Kleffner
Hi all,
I do regulary builds for python-2.7. Due to my limited resources I
didn't
Post by Matthew Brett
Post by Carl Kleffner
build for 3.3 or 3.4 right now. I didn't updated my toolchhain from
february, but I do regulary builds of OpenBLAS. OpenBLAS is under heavy
https://github.com/wernsaar/OpenBLAS .
A lot of bugs have been canceled out at the cost of performance, see the
https://github.com/xianyi/OpenBLAS/wiki/Fixed-optimized-kernels-To-do-List .
Post by Matthew Brett
Post by Carl Kleffner
https://github.com/xianyi/OpenBLAS/issues/394 and
https://github.com/JuliaLang/julia/issues/5574 .
I got the impression, that the Julia community (and maybe the R and
octave
Post by Matthew Brett
Post by Carl Kleffner
community) is very interested getting towards a stable Windows OpenBLAS.
OpenBLAS is the only free OSS optimized BLAS/Lapack solution maintained
for
Post by Matthew Brett
Post by Carl Kleffner
Windows today. Atlas seems not to be maintained for Windows anymore (is
this
Post by Matthew Brett
Post by Carl Kleffner
true Matthew?)
No, it's not true, but it's not really false either. Clint Whaley is
the ATLAS maintainer and his interests are firmly in
high-performance-computing so he is much more interested in exotic new
chips than in Windows. But, he does aim to make the latest stable
release buildable on Windows, and he's helped me do that for the
latest stable, with some hope he'll continue to work on the 64-bit
Windows kernels which are hobbled at the moment because of differences
https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/
Post by Carl Kleffner
see: http://figshare.com/articles/search?q=numpy&quick=1&x=0&y=0
(2014-06-10) numpy and scipy wheels for py-2.7
The scipy test suite (amd64) emits segfaults with multithreaded
OpenBLAS,
Post by Matthew Brett
Post by Carl Kleffner
but is stable with single thread (see the log files). I didn't dig into
this
Post by Matthew Brett
Post by Carl Kleffner
further. Win32 works with MT OpenBLAS, but has some test failures with
atan2
Post by Matthew Brett
Post by Carl Kleffner
and hypot. That is more or less the status today. I can upload new wheels
linked against a recent OpenBLAS, maybe tomorrow on Binstar.
I built some 64-bit wheels against Carl's toolchain and the ATLAS
above. I think they don't have any threading issues, but the scipy
wheel fails one scipy test due to some very small precision
differences in the mingw runtime. I think we agreed this failure
wasn't important.
https://nipy.bic.berkeley.edu/scipy_installers/numpy-1.8.1-cp27-none-win_amd64.whl
https://nipy.bic.berkeley.edu/scipy_installers/scipy-0.13.3-cp27-none-win_amd64.whl
Sorry - I wasn't paying attention - you asked about 32-bit wheels.
Honestly, using the same toolchain, they wouldn't be at all hard to
build.
One issue is that the ATLAS builds depend on SSE2. That isn't an
issue for 64 bit builds because the 64-bit ABI requires SSE2, but it
is an issue for 32-bit where we have no such guarantee. It looks like
99% of Windows users do have SSE2 though [1]. So I think what is
required is
* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give a helpful error in the absence of SSE2
(fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)
Cheers,
Matthew
[1] https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
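The SSE2 guard described in the second bullet could look something like the
following: a hypothetical sketch of a check a patched wheel might run at
import time, before loading any compiled extensions. The function names and
fallback behaviour here are illustrative, not the actual patch.

```python
import ctypes
import os
import sys

# Windows processor-feature index for SSE2 (PF_XMMI64_INSTRUCTIONS_AVAILABLE).
PF_XMMI64_INSTRUCTIONS_AVAILABLE = 10


def has_sse2():
    """Best-effort SSE2 detection; assumes support when undetectable."""
    if sys.platform == "win32":
        # Documented kernel32 call available on all Windows versions of interest.
        return bool(ctypes.windll.kernel32.IsProcessorFeaturePresent(
            PF_XMMI64_INSTRUCTIONS_AVAILABLE))
    if os.path.exists("/proc/cpuinfo"):
        # Linux fallback, handy for testing the logic off Windows.
        with open("/proc/cpuinfo") as f:
            return "sse2" in f.read()
    return True


def check_sse2():
    """What a patched wheel's __init__ could call before importing extensions."""
    if not has_sse2():
        raise ImportError(
            "This numpy build requires a CPU with SSE2 support; please use "
            "a no-SSE 'superpack' installer instead.")
```

A false positive from `has_sse2` only reproduces the current behaviour (a
hard crash in the SSE2 code), so the check can safely default to assuming
support when detection is impossible.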
_______________________________________________
NumPy-Discussion mailing list
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Matthew Brett
2014-07-02 11:35:07 UTC
Permalink
Hi,
Hi,
The mingw-w64 based wheels (Atlas and openBLAS) are based on a patched numpy
version that hasn't been published as a numpy pull request for review until
now (my failure). I could try to do this tomorrow in the evening.
That would be really good. I'll try and help with review if I can.
Another important
point is, that the toolchain, that is capable to compile numpy/scipy was
adapted to allow for MSVC / mingw runtime compatibility and does not create
any gcc/mingw runtime dependency anymore.
OpenBLAS has one advantage over Atlas: numpy/scipy are linked dynamically
against OpenBLAS. A statically linked BLAS like MKL or ATLAS creates huge
Python extensions and has considerably higher memory consumption compared
to dynamic linkage. On the other hand, correctness is more important, so
ATLAS has to be preferred for now.
Do you have any idea of what the memory cost is? If it's in the
order of 20M, presumably that won't have much practical impact?
Users with non-SSE processors could be provided with wheels distributed on
binstar.
The last plan we seemed to have was to continue making the 'superpack'
exe installers which contain no-SSE, SSE2 and SSE3 builds where the
installer selects which one to install at runtime. The warning from
the wheel would point to these installers as the backup option.

If we did want to produce alternative wheels, I guess a specific
static https directory would be easiest; otherwise the user would get
the odd effect that they'd get a hobbled wheel by default when
installing from binstar (assuming they did in fact have SSE2). I
mean, this

pip install -f https://somewhere.org/no_sse_wheels --no-index numpy

seems to make more sense as an alternative install command for
non-SSE, than this:

pip install -i http://binstar.org numpy

because in the former case, you can see what is special about the command.

Cheers,

Matthew
Carl Kleffner
2014-07-02 13:24:13 UTC
Permalink
Hi,

personally I don't have a preference for Binstar over somewhere.org. More
important is that one has to agree on where to find the binaries. Binstar has
the concept of channels and allow wheels. So one could provide a channel
for NOSSE and more channels for other specialized builds:
ATLAS/OpenBLAS/RefBLAS, SSE4/AVX and so on.

A generic binary should be built with generic optimizing GCC switches and
SSE2 by default. I propose to provide generic binaries for PyPI instead of
superbinaries, and specialized binaries on Binstar or somewhere else.

Just thinking two or three steps ahead.

Regards

Carl
Post by Matthew Brett
Hi,
Post by Carl Kleffner
Hi,
The mingw-w64 based wheels (Atlas and openBLAS) are based on a patched
numpy
Post by Carl Kleffner
version, that hasn't been published as numpy pull for revision until now
(my
Post by Carl Kleffner
failure). I could try to do this tomorrow in the evening.
That would be really good. I'll try and help with review if I can.
Post by Carl Kleffner
Another important
point is, that the toolchain, that is capable to compile numpy/scipy was
adapted to allow for MSVC / mingw runtime compatibility and does not
create
Post by Carl Kleffner
any gcc/mingw runtime dependency anymore.
OpenBLAS has one advantage over Atlas: numpy/scipy are linked dynamically
against OpenBLAS. Statically linked BLAS like MKL or ATLAS creates huge
python extensions and have considerable higher memory consumption
compared
Post by Carl Kleffner
to dynamically linkage. On the other hand correctness is more important,
so
Post by Carl Kleffner
ATLAS has to be preferred now.
Do you have any index of what the memory cost is? If it's in the
order of 20M presumably that won't have much practical impact?
Post by Carl Kleffner
Users with non SEE processors could be provided with wheels distributed
on
Post by Carl Kleffner
binstar.
The last plan we seemed to have was to continue making the 'superpack'
exe installers which contain no-SSE, SSE2 and SSE3 builds where the
installer selects which one to install at runtime. The warning from
the wheel would point to these installers as the backup option.
If we did want to produce alternative wheels, I guess a specific
static https directory would be easiest; otherwise the user would get
the odd effect that they'd get a hobbled wheel by default when
installing from binstar (assuming they did in fact have SSE2). I
mean, this
pip install -f https://somewhere.org/no_sse_wheels --no-index numpy
seems to make more sense as an alternative install command for
pip install -i http://binstar.org numpy
because in the former case, you can see what is special about the command.
Cheers,
Matthew
Matthew Brett
2014-07-02 13:36:57 UTC
Permalink
Hi,
Post by Carl Kleffner
Hi,
personally I don't have a preference of Binstar over somewhere.org. More
important is that one has to agree where to find the binaries. Binstar has
the concept of channels and allow wheels. So one could provide a channel for
ATLAS/OpenBLAS/RefBLAS, SSE4/AVX and so on.
Having a noSSE channel would make sense.
Post by Carl Kleffner
A generic binary should be build with generic optimizing GCC switches and
SSE2 per default. I propose to provide generic binaries for PYPI instead of
superbinaries. and specialized binaries on Binstar or somewhere else.
The exe superbinary installers can also go on pypi without causing
confusion to pip at least, but it would be good to have wheels as
well.
Post by Carl Kleffner
Just thinking two or three steps ahead.
It's good to have a plan :)

Cheers,

Matthew
Chris Barker
2014-07-02 17:55:41 UTC
Permalink
Post by Matthew Brett
Having a noSSE channel would make sense.
Indeed -- the default (i.e. what you get with pip install numpy) should be
SSE2 -- I'd much rather have a few folks with old hardware have to go
through some hoops than have most people get something that is "much
slower than MATLAB".
Post by Matthew Brett
The exe superbinary installers can also go on pypi without causing
confusion to pip at least, but it would be good to have wheels as
well.
it doesn't hurt to have them, but we really need to get Windows away from
the exe installers into the pip / virtualenv / etc world.

-Chris
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Sturla Molden
2014-07-03 03:56:17 UTC
Permalink
Post by Chris Barker
Indeed -- the default (i.e. what you get with pip install numpy) should
be SSE2 -- I'd much rather have a few folks with old hardware have to
go through some hoops than have most people get something that is
"much slower than MATLAB".
I think we should use SSE3 as the default. It is already ten years old. Most
users (99.999 %) who want binary wheels have an SSE3-capable CPU.

According to Wikipedia:

AMD:
Athlon 64 (since Venice Stepping E3 and San Diego Stepping E4)
Athlon 64 X2
Athlon 64 FX (since San Diego Stepping E4)
Opteron (since Stepping E4)
Sempron (since Palermo, Stepping E3)
Phenom
Phenom II
Athlon II
Turion 64
Turion 64 X2
Turion X2
Turion X2 Ultra
Turion II X2 Mobile
Turion II X2 Ultra
APU
FX Series

Intel:
Celeron D
Celeron (starting with Core microarchitecture)
Pentium 4 (since Prescott)
Pentium D
Pentium Extreme Edition (but NOT Pentium 4 Extreme Edition)
Pentium Dual-Core
Pentium (starting with Core microarchitecture)
Core
Xeon (since Nocona)
Atom


If you have Pentium II, you can build your own NumPy...



Sturla
Julian Taylor
2014-07-03 07:42:41 UTC
Permalink
Post by Sturla Molden
Post by Chris Barker
Indeed -- the default (i.e. what you get with pip install numpy) should
be SSE2 -- I'd much rather have a few folks with old hardware have to
go through some hoops than have most people get something that is
"much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most
users (99.999 %) who want binary wheels have an SSE3 capable CPU.
While it's true that pretty much all CPUs currently around have it, there is
no technical requirement for even new CPUs to have SSE3. Unlike SSE2, you
do not have to implement it to sell a compatible 64-bit CPU. Not even the
new x32 ABI requires it.

In practice I think we could easily get away with using SSE3 as default
but I still would like to see if it makes any performance difference in
benchmarks. In my experience (which is exclusively on pre-haswell
machines) the horizontal operations it offers tend to be slower than
other solutions.
Matthew Brett
2014-07-03 10:06:39 UTC
Permalink
Hi,
Post by Sturla Molden
Post by Chris Barker
Indeed -- the default (i.e. what you get with pip install numpy) should
be SSE2 -- I'd much rather have a few folks with old hardware have to
go through some hoops than have most people get something that is
"much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most
users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large
majority are for very recent Firefox downloads.

If you can identify SSE3 machines from the reported CPU string (as the
Firefox people did for SSE2), please do have a look and see if you can
get a count for SSE3 in the Firefox crash reports; if it's close to
99% that would make a strong argument:

https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
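The kind of tally being asked for could be sketched as below. This is a toy
illustration only: real crash-report CPU strings do not spell out instruction
sets, so the gist linked above instead maps vendor/family/model numbers to
SSE support. The `classify` function here is invented for the example.

```python
from collections import Counter


def classify(cpu_string):
    """Toy classifier: report the highest SSE level named in the string."""
    for level in ("sse3", "sse2", "sse"):
        if level in cpu_string.lower():
            return level
    return "pre-sse"


def sse_fractions(cpu_strings):
    """Fraction of reports falling into each SSE capability bucket."""
    counts = Counter(classify(s) for s in cpu_strings)
    total = sum(counts.values())
    return {level: n / total for level, n in counts.items()}


sample = ["AMD Athlon 64 X2 (SSE3)", "Intel Pentium 4 SSE2", "Intel Core SSE3"]
# sse3 accounts for two of the three sample strings, i.e. about 67%
```

Run over the full crash-report dump, the resulting SSE3 fraction is the
number that would settle the SSE2-vs-SSE3 default question.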

Cheers,

Matthew
Matthew Brett
2014-07-03 10:46:23 UTC
Permalink
Post by Matthew Brett
Hi,
Post by Sturla Molden
Post by Chris Barker
Indeed -- the default (i.e. what you get with pip install numpy) should
be SSE2 -- I'd much rather have a few folks with old hardware have to
go through some hoops than have most people get something that is
"much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most
users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large
majority are for very recent Firefox downloads.
If you can identify SSE3 machines from the reported CPU string (as the
Firefox people did for SSE2), please do have a look and see if you can
get a count for SSE3 in the Firefox crash reports; if it's close to
https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
Jonathan Helmus recently pointed out https://ci.appveyor.com in a
discussion on the scikit-image mailing list. The scikit-image team
are trying to get builds and tests working there. The configuration
file allows arbitrary cmd and powershell commands executed in a clean
Windows virtual machine. Do you think it would be possible to get the
wheel builds working on something like that? That would be a big step
forward, just because the current procedure is rather fiddly, even if
not very difficult.

Any news on the pull request to numpy? Waiting eagerly :)

Cheers,

Matthew
Carl Kleffner
2014-07-03 11:51:56 UTC
Permalink
Hi Matthew,

I can make it in the late evening (MEZ timezone), so you have to wait a bit
... I will also try to create new numpy/scipy wheels. I now have the latest
OpenBLAS version ready. Olivier gave me access to Rackspace. I will try it
out on the weekend.

Regards

Carl
Post by Matthew Brett
Post by Matthew Brett
Hi,
Post by Sturla Molden
Post by Chris Barker
Indeed -- the default (i.e. what you get with pip install numpy) should
be SSE2 -- I'd much rather have a few folks with old hardware have to
go through some hoops than have most people get something that is
"much slower than MATLAB".
I think we should use SSE3 as default. It is already ten years old. Most
users (99.999 %) who want binary wheels have an SSE3 capable CPU.
The 99% for SSE2 comes from the Firefox crash reports, where the large
majority are for very recent Firefox downloads.
If you can identify SSE3 machines from the reported CPU string (as the
Firefox people did for SSE2), please do have a look and see if you can
get a count for SSE3 in the Firefox crash reports; if it's close to
https://github.com/numpy/numpy/wiki/Windows-versions#sse--sse2
https://gist.github.com/matthew-brett/9cb5274f7451a3eb8fc0
Jonathan Helmus recently pointed out https://ci.appveyor.com in a
discussion on the scikit-image mailing list. The scikit-image team
are trying to get builds and tests working there. The configuration
file allows arbitrary cmd and powershell commands executed in a clean
Windows virtual machine. Do you think it would be possible to get the
wheel builds working on something like that? That would be a big step
forward, just because the current procedure is rather fiddly, even if
not very difficult.
Any news on the pull request to numpy? Waiting eagerly :)
Cheers,
Matthew
Matthew Brett
2014-07-03 14:43:51 UTC
Permalink
Hi,
Post by Charles R Harris
Hi Matthew,
I can make it in the late evening (MEZ timezone), so you have to wait a bit
... I will also try to create new numpy/scipy wheels. I now have the latest
OpenBLAS version ready. Olivier gave me access to Rackspace. I will try it
out on the weekend.
Great - thanks a lot,

Matthew
Olivier Grisel
2014-07-07 09:18:42 UTC
Permalink
Hi!

I gave appveyor a try this WE so as to build a minimalistic Python 3
project with a Cython extension. It works both with 32 and 64 bit
MSVC++ and can generate wheel packages. See:

https://github.com/ogrisel/python-appveyor-demo

However MSVC 2008 is not (yet) installed, so it cannot be used for Python
2.7. Feodor Fitsner seems to be open to installing older versions of
MSVC++ on the worker VM image, so this might be possible in the future.
Let's see.

Of course for numpy / scipy this does not solve the fortran compiler
issue, so Carl's static mingw-w64 toolchain still looks like a very
promising solution (and could probably be run on the appveyor infra as
well).

Best,
--
Olivier
Olivier Grisel
2014-07-09 14:00:34 UTC
Permalink
Feodor updated the AppVeyor nodes to have the Windows SDK matching
MSVC 2008 Express for Python 2. I have updated my sample scripts and
we now have a working example of a free CI system for:

Python 2 and 3 both for 32 and 64 bit architectures.

https://github.com/ogrisel/python-appveyor-demo

Best,
--
Olivier
Robert McGibbon
2014-07-09 22:53:26 UTC
Permalink
This is an awesome resource for tons of projects.

Thanks Olivier!

-Robert
Post by Olivier Grisel
Feodor updated the AppVeyor nodes to have the Windows SDK matching
MSVC 2008 Express for Python 2. I have updated my sample scripts and
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
--
Olivier
Olivier Grisel
2014-07-11 10:30:40 UTC
Permalink
Post by Robert McGibbon
This is an awesome resource for tons of projects.
Thanks.

FYI here is the PR for sklearn to use AppVeyor CI:

https://github.com/scikit-learn/scikit-learn/pull/3363

It's slightly different from the minimalistic sample I wrote for
python-appveyor-demo in the sense that for sklearn I decided to
actually install the generated wheel package and run the tests on the
resulting installed library rather than on the project source folder.

--
Olivier
Robert McGibbon
2014-07-28 02:32:54 UTC
Permalink
I forked Olivier's example project to use the same infrastructure for
building conda binaries and deploying them to binstar, which might also be
useful for some projects.

https://github.com/rmcgibbo/python-appveyor-conda-example

-Robert
Post by Robert McGibbon
This is an awesome resource for tons of projects.
Thanks Olivier!
-Robert
Post by Olivier Grisel
Feodor updated the AppVeyor nodes to have the Windows SDK matching
MSVC 2008 Express for Python 2. I have updated my sample scripts and
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
--
Olivier
Carl Kleffner
2014-07-28 13:25:33 UTC
Permalink
Hi,

on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded
7z-archives for mingw-w64 and for OpenBLAS-0.2.10 for 32 bit and for 64
bit.
To use mingw-w64 for Python >= 3.3 you have to manually tweak the so-called
specs file - see readme.txt in the archive.

Regards

Carl
Post by Robert McGibbon
I forked Olivier's example project to use the same infrastructure for
building conda binaries and deploying them to binstar, which might also be
useful for some projects.
https://github.com/rmcgibbo/python-appveyor-conda-example
-Robert
Post by Robert McGibbon
This is an awesome resource for tons of projects.
Thanks Olivier!
-Robert
Post by Olivier Grisel
Feodor updated the AppVeyor nodes to have the Windows SDK matching
MSVC 2008 Express for Python 2. I have updated my sample scripts and
Python 2 and 3 both for 32 and 64 bit architectures.
https://github.com/ogrisel/python-appveyor-demo
Best,
--
Olivier
Olivier Grisel
2014-07-28 14:46:26 UTC
Permalink
Post by Carl Kleffner
Hi,
on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I uploaded
7z-archives for mingw-w64 and for OpenBLAS-0.2.10 for 32 bit and for 64 bit.
To use mingw-w64 for Python >= 3.3 you have to manually tweak the so called
specs file - see readme.txt in the archive.
Have the patches to build numpy and scipy with mingw-w64 been merged
in the master branches of those projects?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
Carl Kleffner
2014-07-28 15:16:47 UTC
Permalink
I had to move my development environment to a different Windows box recently
(still in progress). On this box I don't have full access, unfortunately.
The patch for the scipy build was merged into scipy master some time ago; see
https://github.com/scipy/scipy/pull/3484 . I have some additional patches
for scipy.test.
The pull request for the numpy build has not yet been made, for the reasons I
mentioned.

Cheers,

Carl
Post by Carl Kleffner
Post by Carl Kleffner
Hi,
on https://bitbucket.org/carlkl/mingw-w64-for-python/downloads I
uploaded
Post by Carl Kleffner
7z-archives for mingw-w64 and for OpenBLAS-0.2.10 for 32 bit and for 64
bit.
Post by Carl Kleffner
To use mingw-w64 for Python >= 3.3 you have to manually tweak the so
called
Post by Carl Kleffner
specs file - see readme.txt in the archive.
Have the patches to build numpy and scipy with mingw-w64 been merged
in the master branches of those projects?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
Olivier Grisel
2014-07-02 11:47:27 UTC
Permalink
Hi Carl,

All the items you suggest would be very appreciated. Don't hesitate to
ping me if you need me to test new packages.

Also the sklearn project has a free Rackspace Cloud account that
Matthew is already using to make travis upload OSX wheels for the
master branch of various scipy stack projects. Rackspace cloud can
also be used to start Windows VMs if needed. Please tell me if you
want some user credentials and an API key.

Myself I use the Rackspace Cloud account to build sklearn wheels
following those instructions:

https://github.com/scikit-learn/scikit-learn/wiki/How-to-make-a-release#building-windows-binary-packages

We are using msvc express (but only for 32bit Python) right now. I
have yet to try to build sklearn with your mingw-w64 static toolchain.

Rackspace granted us $2000 worth of cloud resources per month (e.g.
bandwidth and VM time) so there is plenty of resource left to help with
upstream projects such as numpy and scipy.

Best,
--
Olivier
Chris Barker
2014-07-02 17:34:40 UTC
Permalink
Post by Matthew Brett
It looks like
99% of Windows users do have SSE2 though [1]. So I think what is
required is
* Build the wheels for 32-bit (easy)
* Patch the wheels to check and give helpful error in absence of SSE2
(fairly easy)
* Get agreement these should go up on pypi and be maintained (feedback anyone?)
+Inf

It would benefit the community a LOT to have binary wheels up on PyPi, and
the very small number of failures due to old hardware will be no big deal,
as long as the users get a meaningful message, rather than a hard crash.

-Chris
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov
Matthew Brett
2014-05-14 15:50:49 UTC
Permalink
Hi,
Post by Matthew Brett
Hi,
Post by David Cournapeau
Post by Matthew Brett
Aha,
Post by Matthew Brett
Hi,
Post by Carl Kleffner
A possible option is to install the toolchain inside site-packages and to
deploy it as PYPI wheel or wininst packages. The PATH to the toolchain could
be extended during import of the package. But I have no idea what's the best
strategy to additionally install ATLAS or other third party libraries.
Maybe we could provide ATLAS binaries for 32 / 64 bit as part of the
devkit package. It sounds like OpenBLAS will be much easier to build,
so we could start with ATLAS binaries as a default, expecting OpenBLAS
to be built more often with the toolchain. I think that's how numpy
binary installers are built at the moment - using old binary builds of
ATLAS.
https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds
https://github.com/numpy/vendor/tree/master/binaries
But - they are from an old version of ATLAS / Lapack, and only for 32-bit.
David - what say we update these to latest ATLAS stable?
Fine by me (not that you need my approval !).
How easy is it to build ATLAS targeting a specific CPU these days? I think
we need to at least support nosse and sse2 and above.
I'm getting crashes trying to build SSE2-only ATLAS on 32-bits, I
think Clint will have some time to help out next week.
Clint spent an hour on the phone working through the 32-bit build.
There was a nasty gcc bug revealed by some oddness to the input flags.
Fixed now:

https://nipy.bic.berkeley.edu/scipy_installers/atlas_builds/

Configure flags needed for 32-bit:

config_opts="-b 32 -Si archdef 0 -A 13 -V 384 \
--with-netlib-lapack-tarfile=${lapack_tarfile} \
-Fa al '-mincoming-stack-boundary=2 -mfpmath=sse -msse2'"

For 64-bit:

config_opts="-b 64 -V 384 --with-netlib-lapack-tarfile=${lapack_tarfile}"

Cheers,

Matthew
j***@gmail.com
2014-04-27 22:26:38 UTC
Permalink
Post by Carl Kleffner
A possible option is to install the toolchain inside site-packages and to
deploy it as PYPI wheel or wininst packages. The PATH to the toolchain
could be extended during import of the package. But I have no idea, whats
the best strategy to additionaly install ATLAS or other third party
libraries.
What I did in the past is just to download the ATLAS binaries from the
scipy/numpy wiki and move them into the Python/Dlls directory, IIRC.

My impression was that finding ATLAS binaries was the difficult part, not
moving them into the right directory.
Post by Carl Kleffner
Cheers,
Carl
Hi,
Post by Matthew Brett
Post by Carl Kleffner
Hi,
I definitely won't have time until Thursday this week to work out
the github workflow for a numpy pull request. So feel free to do it for
me.
OK - I will have a go at this tomorrow.
Post by Carl Kleffner
BTW: There is a missing feature in the mingw-w64 toolchain. For now it
supports linking to the msvcr90 runtime only. I have to extend the specs
file to allow linking to msvcr100 with an additional flag. Or create a
dedicated toolchain - what do you think?
I don't know.
Is this a discussion that should go to the mingw-w64 list do you
think? It must be a very common feature.
As you know, I'm really hoping it will be possible make a devkit for
Python similar to the Ruby devkits [1].
I got my entire initial setup on the computer I'm using right now through
python-xy, including MingW 32.

The only thing I ever had to do was to create the `distutils.cfg` in new
python install.
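For readers who haven't done this: `distutils.cfg` is a small file placed in
`Lib\distutils\` inside the Python installation, and its standard contents
for a MinGW setup are:

```ini
[build]
compiler = mingw32
```

With that in place, `python setup.py build` defaults to the MinGW compiler
instead of MSVC.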

python-xy relies on the availability of an open source development
environment for numpy and scipy, and has been restricted so far to 32-bit
Python versions.
winpython is only a python distribution and is also available for 64-bit
(with Gohlke binaries I think)

I think it would be very helpful to get python-xy set up for development
for 64 bit versions, now that the toolchain with MingW is available.

I'm skeptical about having lots of distributions that install all their
own full toolchain. (I always worry about which one is actually on the path;
I deleted my first git for Windows version because it came with a built-in
MSYS/MingW toolchain and now just use the nice and small portable version.)
Post by Carl Kleffner
Post by Matthew Brett
The ideal would be a devkit that transparently picked up 32 vs 64 bit,
and MSVC runtime according to the Python version. For example, OSX
compilation automatically picks up the OSX SDK with which the relevant
Python was built. Do you think something like this is possible? That
would be a great improvement for people building extensions and wheels
on Windows.
How does MingW64 decide whether to build 32 or to build 64 bit versions?
Does the python version matter for MingW?

or should this pick up one of the Visual SDK's that the user needs to
install?

Josef
Post by Carl Kleffner
Post by Matthew Brett
Cheers,
Matthew
[1] http://rubyinstaller.org/add-ons/devkit/
_______________________________________________
NumPy-Discussion mailing list
http://mail.scipy.org/mailman/listinfo/numpy-discussion
_______________________________________________
NumPy-Discussion mailing list
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Chris Barker
2014-04-28 17:54:11 UTC
Permalink
Post by Matthew Brett
As you know, I'm really hoping it will be possible make a devkit for
Python similar to the Ruby devkits [1].
That would be great!
Matthew Brett
2014-04-29 09:19:08 UTC
Permalink
Hi,
Post by Chris Barker
Post by Matthew Brett
As you know, I'm really hoping it will be possible make a devkit for
Python similar to the Ruby devkits [1].
That would be great!
Carl Kleffner
2014-04-29 13:37:31 UTC
Permalink
(1) Yes, Support for MSVC100 (python-3.3 and up) is on the TODO list

(2) both toolchains are configured for static linking.
No need to deploy: libgcc_s_dw2-1.dll, libgomp-1.dll,
libquadmath-0.dll, libstdc++-6.dll, libgfortran-3.dll or libwinpthread-1.dll

(3) I decided to create two dedicated toolchains for 32bit and for 64bit

Regards,

Carl
Hi,
Post by Chris Barker
Post by Matthew Brett
As you know, I'm really hoping it will be possible make a devkit for
Python similar to the Ruby devkits [1].
That would be great!
Sturla Molden
2014-04-29 15:10:06 UTC
Permalink
2) Static linking - Carl's toolchain does full static linking
including C runtimes
The C runtime cannot be statically linked. It would mean that we get
multiple copies of errno and multiple malloc heaps in the process – one of
each static CRT. We must use the same C runtime DLL as Python. But loading
it is not a problem because Python has done that before NumPy is imported.

Sturla
Carl Kleffner
2014-04-29 18:21:52 UTC
Permalink
Correction:

gcc (mingw) runtimes are statically linked. The C-runtime DLL msvcrXXX is
linked dynamically.

Carl
Post by Sturla Molden
2) Static linking - Carl's toolchain does full static linking
including C runtimes
The C runtime cannot be statically linked. It would mean that we get
multiple copies of errno and multiple malloc heaps in the process – one of
each static CRT. We must use the same C runtime DLL as Python. But loading
it is not a problem because Python has done that before NumPy is imported.
Sturla