Discussion:
[Numpy-discussion] Starting work on ufunc rewrite
Jaime Fernández del Río
2016-03-31 20:00:27 UTC
Permalink
I have started discussing with Nathaniel the implementation of the ufunc
ABI break that he proposed in a draft NEP a few months ago:

http://thread.gmane.org/gmane.comp.python.numeric.general/61270

His original proposal was to make the public portion of PyUFuncObject be:

typedef struct {
    PyObject_HEAD
    int nin, nout, nargs;
} PyUFuncObject;

Of course the idea is that internally we would use a much larger struct
that we could change at will, as long as its first few entries matched
those of PyUFuncObject. My problem with this, and I may very well be
missing something, is that in PyUFunc_Type we need to set the tp_basicsize
to the size of the extended struct, so we would end up having to expose its
contents. This is somewhat similar to what now happens with PyArrayObject:
anyone can #include "ndarraytypes.h", cast PyArrayObject* to
PyArrayObjectFields*, and access the guts of the struct without using the
supplied API inline functions. Not the end of the world, but if you want to
make something private, you might as well make it truly private.

I think it would be better to have something similar to what NpyIter does:

typedef struct {
    PyObject_HEAD
    NpyUFunc *ufunc;
} PyUFuncObject;

where NpyUFunc would, at this level, be an opaque type of which nothing
would be known. We could have some of the NpyUFunc attributes cached on the
PyUFuncObject struct for easier access, as is done in NewNpyArrayIterObject.
This would also give us more liberty in making NpyUFunc be whatever we want
it to be, including a variable-sized memory chunk that we could use and
access at will. NpyIter is again a good example, where rather than storing
pointers to strides and dimensions arrays, these are made part of the
NpyIter memory chunk, effectively being equivalent to having variable sized
arrays as part of the struct. And I think we will probably no longer
trigger the Cython warnings about size changes either.

Any thoughts on this approach? Is there anything fundamentally wrong with
what I'm proposing here?

Also, this is probably going to end up being a rewrite of a pretty large
and complex codebase. I am not sure that working on this on my own and
eventually sending a humongous PR is the best approach. Any thoughts on how
best to handle turning this into a collaborative, incremental effort?
Anyone who would like to join in the fun?

Jaime
--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him with his
plans for world domination.
Joseph Fox-Rabinovitz
2016-03-31 20:14:26 UTC
Permalink
There is certainly good precedent for the approach you suggest.
Shortly after Nathaniel mentioned the rewrite to me, I looked up
d-pointers as a possible technique: https://wiki.qt.io/D-Pointer.

If we allow arbitrary kwargs for the new functions, is that something
you would want to note in the public structure? I was thinking
something along the lines of adding a hook to process additional
kwargs and return a void * that would then be passed to the loop.

To do this incrementally, perhaps opening a special development branch
on the main repository is in order?

I would love to join in the fun as time permits. Unfortunately, it is
not especially permissive right about now. I will at least throw in
some ideas that I have been mulling over.

-Joe


_______________________________________________
NumPy-Discussion mailing list
https://mail.scipy.org/mailman/listinfo/numpy-discussion
Jaime Fernández del Río
2016-04-01 20:04:24 UTC
Permalink
On Thu, Mar 31, 2016 at 10:14 PM, Joseph Fox-Rabinovitz wrote:
Post by Joseph Fox-Rabinovitz
> There is certainly good precedent for the approach you suggest.
> Shortly after Nathaniel mentioned the rewrite to me, I looked up
> d-pointers as a possible technique: https://wiki.qt.io/D-Pointer.
Yes, the idea is similar, although somewhat simpler since we are doing C,
not C++.
Post by Joseph Fox-Rabinovitz
> If we allow arbitrary kwargs for the new functions, is that something
> you would want to note in the public structure? I was thinking
> something along the lines of adding a hook to process additional
> kwargs and return a void * that would then be passed to the loop.
I'm not sure I understand what you mean... but I also don't think it is
very relevant at this point. What I intend to do is simply to hide the guts
of ufuncs, breaking everyone's code once... so that we can later change
whatever we want without breaking anything else. PyUFunc_GenericFunction
already takes *args and **kwargs, and the internal logic of how these get
processed can be modified at will. If what you are proposing is to create a
PyUFunc_FromFuncAndDataAndSignatureAndKwargProcessor API function that
would provide a customized function to process extra kwargs and somehow
pass them into the actual ufunc loop, that would just be an API extension,
and there shouldn't be any major problem in introducing that whenever,
especially once we are free to modify the internal representation of ufuncs
without breaking ABI compatibility.
Post by Joseph Fox-Rabinovitz
> To do this incrementally, perhaps opening a special development branch
> on the main repository is in order?
Yes, something like that seems like the right thing to do indeed. I would
like someone with more git foo than me to spell out the details of how we
would create and eventually merge that branch.
Post by Joseph Fox-Rabinovitz
> I would love to join in the fun as time permits. Unfortunately, it is
> not especially permissive right about now. I will at least throw in
> some ideas that I have been mulling over.
Please do!

Jaime
--
(\__/)
( O.o)
( > <) This is Bunny. Copy Bunny into your signature and help him with his
plans for world domination.
Irwin Zaid
2016-03-31 22:09:40 UTC
Permalink
Hey guys,

I figured I'd just chime in here.

Over in DyND-town, we've spent a lot of time figuring out how to structure
DyND callables, which are actually more general than NumPy gufuncs. We've
just recently got them to a place where we are very happy, and are able to
represent a wide range of computations.

Our callables use a two-fold approach to evaluation. The first pass is a
resolution pass, where a callable can specialize what it is doing based on
the input types. It is able to deduce the return type, multidispatch, or
even perform some sort of recursive analysis in the form of computations
that call themselves. The second pass is construction of a kernel object
that is exactly specialized to the metadata (e.g., strides, contiguity,
...) of the array.

The callable itself can store arbitrary data, as can each pass of the
evaluation. Either (or both) of these passes can be done ahead of time,
allowing one to have a callable exactly specialized for your array.

If NumPy is looking to change its ufunc design, we'd be happy to share our
experiences with this.

Irwin

Nathaniel Smith
2016-04-03 01:15:41 UTC
Permalink
Yeah, this all sounds very relevant :-). You can even see some of the
kernel of that design in numpy's current ufuncs, with their
first-stage "resolver" choosing which inner loop to use, but we
definitely need to make these semantics richer if we want to allow for
things like inner loops that depend on kwargs (e.g. sort(...,
kind="quicksort") versus sort(..., kind="mergesort")) or dtype
attributes. Is your design written up anywhere?

-n
--
Nathaniel J. Smith -- https://vorpus.org
Nathaniel Smith
2016-04-03 01:12:03 UTC
Permalink
On Thu, Mar 31, 2016 at 1:00 PM, Jaime Fernández del Río wrote:
> I have started discussing with Nathaniel the implementation of the ufunc ABI
> http://thread.gmane.org/gmane.comp.python.numeric.general/61270
> typedef struct {
>     PyObject_HEAD
>     int nin, nout, nargs;
> } PyUFuncObject;
> Of course the idea is that internally we would use a much larger struct that
> we could change at will, as long as its first few entries matched those of
> PyUFuncObject. My problem with this, and I may very well be missing
> something, is that in PyUFunc_Type we need to set the tp_basicsize to the
> size of the extended struct, so we would end up having to expose its
> contents.
How so? tp_basicsize tells you the size of the real struct, but that
doesn't let you actually access any of its fields. Unless you decide
to start cheating and reaching into random bits of memory by hand,
but, well, this is C, we can't really prevent that :-).
> anyone can #include "ndarraytypes.h", cast PyArrayObject* to
> PyArrayObjectFields*, and access the guts of the struct without using the
> supplied API inline functions. Not the end of the world, but if you want to
> make something private, you might as well make it truly private.
Yeah, there is also an issue here where we don't always do a great job
of separating our internal headers from our public headers. But that's
orthogonal -- any solution for hiding PyUFunc's internals will require
handling that somehow.
> typedef struct {
>     PyObject_HEAD
>     NpyUFunc *ufunc;
> } PyUFuncObject;
A few points:

We have to leave nin, nout, nargs where they are in PyUFuncObject,
because there is code out there that accesses them.

This technique is usually used when you want to allow subclassing of a
struct, while also allowing you to add fields later without breaking
ABI. We don't want to allow subclassing of PyUFunc (regardless of what
happens here -- subclassing just creates tons of problems), so AFAICT
it isn't really necessary. It adds a bit of extra complexity (two
allocations instead of one, extra pointer chasing, etc.), though to be
fair the hidden struct approach also adds some complexity (you have to
cast to the internal type), so it's not a huge deal either way.

If the NpyUFunc pointer field is public then in principle people could
refer to it and create problems down the line in case we ever decided
to switch to a different strategy... not very likely given that it'd
just be a meaningless opaque pointer, but mentioning for
completeness's sake.
> where NpyUFunc would, at this level, be an opaque type of which nothing
> would be known. We could have some of the NpyUFunc attributes cached on the
> PyUFuncObject struct for easier access, as is done in NewNpyArrayIterObject.
Caching sounds like *way* more complexity than we want :-). As soon as
you have two copies of data then they can get out of sync...
> This would also give us more liberty in making NpyUFunc be whatever we want
> it to be, including a variable-sized memory chunk that we could use and
> access at will.
Python objects are allowed to be variable size: tp_basicsize is the
minimum size. Built-ins like lists and strings have variable size
structs.
> NpyIter is again a good example, where rather than storing
> pointers to strides and dimensions arrays, these are made part of the
> NpyIter memory chunk, effectively being equivalent to having variable sized
> arrays as part of the struct. And I think we will probably no longer trigger
> the Cython warnings about size changes either.
> Any thoughts on this approach? Is there anything fundamentally wrong with
> what I'm proposing here?
Modulo the issue with nin/nout/nargs, I don't think it makes a huge
difference either way: I don't see any compelling advantages to your
proposal in our particular situation, but maybe I'm missing something.
> Also, this is probably going to end up being a rewrite of a pretty large and
> complex codebase. I am not sure that working on this on my own and
> eventually sending a humongous PR is the best approach. Any thoughts on how
> best to handle turning this into a collaborative, incremental effort? Anyone
> who would like to join in the fun?
I'd strongly recommend breaking it up into individually mergeable
pieces to the absolute maximum extent possible, and merging them back
as we go, so that we never have a giant branch diverging from master.
(E.g., refactor a few functions -> submit a PR -> merge, refactor some
more -> merge, add a new feature enabled by the refactoring -> merge,
repeat). There are limits to how far you can take this, e.g. the PR
for just hiding the current API + adding back the public API pieces
that Numba needs will itself be not quite trivial even if we do no
refactoring yet, and until we get more of an outline for where we're
trying to get to it will be hard to tell how to break it into pieces
:-). But once things are hidden it should be possible to do quite a
bit of internal rearranging incrementally on master, I hope?

For coordinating this though it would probably be good to start
working on some public notes (gdocs or the wiki or something) where we
sketch out some overall plan, make a plan of attack for how to break
it up, etc., and maybe have some higher-bandwidth conversations to
make that outline (google hangout?).

-n
--
Nathaniel J. Smith -- https://vorpus.org