Discussion:
[Numpy-discussion] State-of-the-art to use a C/C++ library from Python
Michael Bieri
2016-08-31 11:28:21 UTC
Hi all

There are several ways to use C/C++ code from Python with NumPy, as
described in http://docs.scipy.org/doc/numpy/user/c-info.html . Furthermore,
there's at least pybind11.

I'm not quite sure which approach is state-of-the-art as of 2016. How would
you do it if you had to make a C/C++ library available in Python right now?

In my case, I have a C library with some scientific functions on matrices
and vectors. You will typically call a few functions to configure the
computation, then hand over some pointers to existing buffers containing
vector data, then start the computation, and finally read back the data.
The library also can use MPI to parallelize.

Best regards,
Michael
Robert Kern
2016-08-31 12:23:57 UTC

I usually reach for Cython:

http://cython.org/
http://docs.cython.org/en/latest/src/userguide/memoryviews.html
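
Whichever wrapper you pick, the thing being handed across the boundary is
the NumPy buffer. A quick Python-level sketch of the (pointer, shape,
strides) description that Cython typed memoryviews — and ctypes and
pybind11, for that matter — all consume under the hood:

```python
import numpy as np

a = np.arange(12, dtype=np.float64).reshape(3, 4)

# The buffer protocol is the common currency: a C-ordered 3x4 float64
# array advertises row stride 4*8=32 bytes and column stride 8 bytes.
mv = memoryview(a)
print(mv.shape, mv.strides, mv.c_contiguous)  # (3, 4) (32, 8) True

# The raw data pointer a wrapped C function would receive:
print(a.ctypes.data == a.__array_interface__["data"][0])  # True
```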

--
Robert Kern
Neal Becker
2016-08-31 13:04:09 UTC
I prefer ndarray:
https://github.com/ndarray/ndarray
David Morris
2016-08-31 17:08:58 UTC
I have been delighted with Cython for this purpose. Great integration with
NumPy (you can access NumPy arrays directly as C arrays), very Python-like
syntax, and amazing performance.

Good luck,

David
Jason Newton
2016-08-31 17:20:28 UTC
I just wanted to follow up on the C++ side of the OP's email: Cython has
quite a few difficulties working with C++ code at the moment. It's really
more of a C solution most of the time, and you must split things up into a
mostly-C call interface (that is, the C code Cython can call) and limit
exposure to, and complications from, templates and complex C++11+
constructs. This may change in the longer term, but in the near term that
is the state.

I used to use Boost.Python, but I'm getting my feet wet with pybind11
(which has basically the same API but works more as you expect with its
signature/type plumbing, including std::shared_ptr handling, has some other
C++11-based improvements, and is header-only and submodule-friendly!). I
also remembered ndarray thanks to Neal's post, but I haven't figured out
how to leverage it better than pybind11 at the moment. I'd be interested to
see ndarray gain support for pybind11 interoperability...

-Jason
_______________________________________________
NumPy-Discussion mailing list
https://mail.scipy.org/mailman/listinfo/numpy-discussion
Ian Henriksen
2016-08-31 18:17:56 UTC
We use Cython very heavily in DyND's Python bindings. It has worked well
for us even when working with some very modern C++. That said, a lot
depends on exactly which C++ features you want to expose as part of the
interface. Interfaces that require things like non-type template parameters
or variadic templates will often require some extra C++ code to work them
into something that Cython can understand. In my experience, those
particular limitations haven't been that hard to work with.
Best,
Ian Henriksen
Jason Newton
2016-08-31 20:57:06 UTC
Hey Ian - I hope I gave Cython a fair comment, but I have to add the
disclaimer that your ability to understand and implement those
solutions/workarounds in that project is greatly enhanced by knowing the
innards of Cython from being a core developer on the Cython project. This
doesn't detract from DyND's accomplishments (if anything, it means Cython
users should look there to see how to use C++ with Cython, along with the
workarounds used and their shortcomings), but I would not expect everyone
to want to jump through those hoops without a firm understanding of
Cython's edges. All this potential for special-case adaptation code is
still something to keep in mind when comparing Cython with tools that are
more straightforward and easier to understand coming from a pure C/C++
perspective, where things are a bit more dangerous and fairly more verbose,
but where interplay with the language and environment is first-class (like
Boost.Python/pybind11). Since this thread is a survey of the state of the
art and the options, my intent is just to make sure readers have the
current pros and cons of the approaches in mind.

-Jason

Stefan van der Walt
2016-08-31 22:04:33 UTC
There are many teaching resources available for Cython that can greatly
reduce exposure to its sharp edges. See, e.g.,

https://github.com/stefanv/teaching/blob/master/2014_assp_split_cython/slides/split2014_cython.pdf

and accompanying problems and exercises at

https://github.com/stefanv/teaching/tree/master/2014_assp_split_cython

Stéfan
Ian Henriksen
2016-08-31 22:53:05 UTC
No offense taken at all. I'm actually not a Cython developer, just a
frequent contributor. That said, knowing the compiler internals certainly
helps when finding workarounds and building intermediate interfaces. My
main point was just that, in my experience, Cython has worked well for many
things beyond plain C interfaces, and that workarounds (hackery entirely
aside) for any missing features are usually manageable. Given that my
perspective is a bit different in that regard, it seemed worth chiming in
on the discussion. I suppose the moral of the story is that there's still
no clear-cut "best" way of building wrappers, and that your mileage may
vary depending on which features you need.
Thanks,
Ian Henriksen
Neal Becker
2016-09-01 14:48:20 UTC
pybind11 looks very nice. My problem is that the NumPy API exposed by
pybind11 is fairly weak at this point, as far as I can see from the docs.
ndarray exposes a lot of functionality through its Array object, including
convenient indexing and slicing. AFAICT, the interface in pybind11 is
pretty low-level - just pointers.

There is also some functionality exposed by pybind11 using Eigen.
Personally, I find Eigen rather baroque, and only use it when I see no
alternative.
Sylvain Corlay
2016-08-31 17:14:24 UTC
+1 on pybind11.

Sylvain
Peter Creasey
2016-09-02 08:16:25 UTC
Depending on how minimal and universal you want to keep things, I use the
ctypes approach quite often, i.e. treat your NumPy inputs and outputs as
arrays of doubles etc. using the ndpointer(...) syntax. I find it works
well if you have a small number of well-defined functions (not too many
options) which are numerically very heavy. With this approach I usually
wrap each method in Python to check the inputs for contiguity, pass in the
sizes etc., and allocate the NumPy array for the result.
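
A minimal runnable sketch of that ndpointer pattern. To keep it
self-contained, libc's strlen stands in for a real scientific routine (so
nothing needs compiling; on POSIX, ctypes.CDLL(None) exposes symbols
already loaded into the process):

```python
import ctypes
import numpy as np
from numpy.ctypeslib import ndpointer

libc = ctypes.CDLL(None)  # symbols already linked into the process (POSIX)

# Declare the C signature size_t strlen(const char *s), typing the
# argument as "1-D contiguous uint8 NumPy array".
strlen = libc.strlen
strlen.restype = ctypes.c_size_t
strlen.argtypes = [ndpointer(dtype=np.uint8, ndim=1, flags="C_CONTIGUOUS")]

def py_strlen(arr):
    # The thin Python wrapper described above: enforce dtype and
    # contiguity before handing the buffer to C.
    arr = np.ascontiguousarray(arr, dtype=np.uint8)
    return strlen(arr)

buf = np.frombuffer(b"hello\0world", dtype=np.uint8)
print(py_strlen(buf))  # 5 -- strlen stops at the first NUL byte
```

For a real library you would point CDLL at your own shared object and add
a size argument plus an output array allocated in the wrapper.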

Peter
Nathaniel Smith
2016-09-02 09:16:35 UTC
FWIW, the broader Python community seems to have largely deprecated
ctypes in favor of cffi. Unfortunately I don't know if anyone has
written helpers like numpy.ctypeslib for cffi...

-n
--
Nathaniel J. Smith -- https://vorpus.org
Carl Kleffner
2016-09-02 10:33:05 UTC
maybe https://bitbucket.org/memotype/cffiwrap or
https://github.com/andrewleech/cfficloak helps?

C.
Sebastian Haase
2016-09-02 11:46:42 UTC
How do these two (cffiwrap and cfficloak) relate to each other?
- Sebastian
Carl Kleffner
2016-09-02 11:53:20 UTC
cfficloak is a fork/extension of cffiwrap. From its description:

"cfficloak - A simple but flexible module for creating object-oriented,
pythonic CFFI wrappers. This is an extension of
https://bitbucket.org/memotype/cffiwrap"
Thiago Franco Moraes
2016-09-02 19:33:25 UTC
I think you can use ffi.from_buffer and ffi.cast from cffi.
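
A minimal sketch of that cffi approach (again using libc's strlen as a
stand-in so nothing needs compiling; ffi.dlopen(None) loads the symbols
already present in the process on POSIX):

```python
import numpy as np
from cffi import FFI

ffi = FFI()
ffi.cdef("size_t strlen(const char *s);")
lib = ffi.dlopen(None)  # libc symbols on POSIX

arr = np.frombuffer(b"hello\0world", dtype=np.uint8)

# Zero-copy: from_buffer wraps the NumPy array's buffer, and cast
# reinterprets it as the pointer type the C declaration expects.
ptr = ffi.cast("const char *", ffi.from_buffer(arr))
print(lib.strlen(ptr))  # 5
```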
Antoine Pitrou
2016-09-07 13:35:39 UTC
I'm not sure about "largely deprecated". For sure, that's the notion
spread by a number of people.

Regards

Antoine.
Chris Barker
2016-09-06 22:42:03 UTC
Cython works really well for this.

ctypes is a better option if you have a "black box" shared lib in which you
want to call a couple of functions.

Cython works better if you want to write a somewhat "thicker" wrapper
around your C code -- i.e. it may do a scalar computation, and you want to
apply it to an entire NumPy array, at C speed.

Either would work in this case, but I like Cython better, as long as I
don't have compilation issues.

-Chris
--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception

***@noaa.gov