Hello everyone.
This is the roadmap for the NumPy effort in PyPy, as discussed at the London sprint. First, the highest item on our priority list is to finish the low-level part of the numpy module. What we'll do is finish the RPython part of NumPy and provide a pip-installable numpypy package that includes the pure-Python part of NumPy. This would contain the original NumPy with a few minor changes.
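For the curious, here is a minimal usage sketch, assuming the package ends up on PyPI under the name numpypy as described above (the exact install and import story is still being settled, so treat the details as hypothetical):

```python
# Hypothetical once numpypy is pip-installable:  pip install numpypy
try:
    import numpy as np  # on PyPy, resolved through the numpypy package
except ImportError:
    raise SystemExit("numpypy is not installed")

a = np.arange(10)
print(a * 2 + 1)  # elementwise ops handled by the RPython core
```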
Second, we need to work on the JIT support that will make NumPy on PyPy faster. In detail:
- re-enable lazy loop evaluation (a sketch of what this means follows this list)
- optimize bridges, which depends on ongoing optimizer refactorings
- SSE support
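To make the first item concrete, here is a minimal sketch of the idea behind lazy loop evaluation. This is an illustration only, not PyPy's actual implementation (which operates on array operations inside the JIT): elementwise expressions build a deferred graph, and a single fused loop runs only when a result is actually needed.

```python
class Lazy:
    """Deferred elementwise binary operation over indexable operands."""
    def __init__(self, op, a, b):
        self.op, self.a, self.b = op, a, b

    def __getitem__(self, i):
        # Indexing recurses through nested Lazy nodes, fusing the whole
        # expression into a single per-element computation: no
        # intermediate (temporary) arrays are ever allocated.
        return self.op(self.a[i], self.b[i])

a, b, c = [1.0, 2.0], [3.0, 4.0], [5.0, 6.0]
expr = Lazy(lambda x, y: x + y, a, b)     # a + b, not yet computed
expr = Lazy(lambda x, y: x * y, expr, c)  # (a + b) * c, still deferred
print([expr[i] for i in range(2)])        # one fused loop: [20.0, 36.0]
```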
On the compatibility front, there have been some independent attempts at making the following work:
- f2py
- C API (in fact, the PyArray_* API is already partly present in the nightly builds of PyPy)
- matplotlib (both using the PyArray_* API and embedding the CPython runtime in PyPy)
- scipy
In order to make all of the above happen faster, it would help to raise more funds. You can donate to PyPy's NumPy project on our website. Note that PyPy is a member of the SFC, a 501(c)(3) US non-profit, so donations from US companies can be tax-deductible.
Cheers,
fijal, arigo, ronan, rguillebert, anto and others
5 comments:
Thanks for the update. I'm hoping the other presentations can also be summarized here for those who couldn't attend this (very interesting) mini-conference.
Thanks for the info! I can't wait to play with it.
I only have a very rudimentary understanding of numpypy and PyPy, so please forgive me if this is a stupid question:
Will there be a way to do additional high-level optimization steps before the JIT level?
I.e., elimination of temporaries for matrices, expression optimization, and so on?
Basically, checking whether the expression should be handled by the PyPy JIT, or whether it should be passed on to something like numexpr
http://code.google.com/p/numexpr/
that will itself hand over the code to optimized vendor libraries?
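For concreteness, this is the kind of hand-off I mean; numexpr compiles an expression string and evaluates it in one fused pass over the arrays (the example is as it runs on CPython today):

```python
import numpy as np
import numexpr as ne

a = np.random.rand(1000000)
b = np.random.rand(1000000)

# Plain NumPy would allocate temporaries for 2*a, 3*b and a**2;
# numexpr parses the string and evaluates the whole expression in a
# single pass, optionally with multiple threads and vectorized kernels.
result = ne.evaluate("2*a + 3*b - a**2")
```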
I am a bit concerned that, while the PyPy JIT optimizations are without question very impressive and probably close to the optimum for generic code, the performance issues with numerical code are very different.
Any JIT will (please correct me if I am wrong; that would be a significant breakthrough) never be able to come close to what a vendor library like the MKL can do.
The comparison will be even more to the disadvantage of the JIT if one uses a library like Theano that runs the code on the GPU.
For my work, beating C for speed is not enough anymore; the challenges are how to run the computation in parallel, how to call optimized libraries without pain, and how to use a GPU without rewriting the entire program and learning a completely new system.
Will libraries like numexpr, numba, and theano be able to run under PyPy, and will it eventually be possible to hand numerical expressions over to these libraries automatically?
Hi Dan.
Yes, PyPy will do the removal of temporary matrices; this is a very basic optimization that we had, but disabled for a while to simplify development.
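To illustrate what removing temporaries buys (a sketch in plain Python, not how PyPy implements it):

```python
import numpy as np

a = np.ones(1000000)
b = np.ones(1000000)
c = np.ones(1000000)

# Eager evaluation: "a + b" materializes a full temporary array
# before "+ c" runs, so the data streams through memory twice.
d = a + b + c

# With the lazy optimization the expression fuses into one loop,
# morally equivalent to this (naively written-out) single pass:
d = np.empty(1000000)
for i in range(1000000):
    d[i] = a[i] + b[i] + c[i]
```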
I don't think numba, numexpr, or theano will ever work on PyPy (I would ask their authors, though), but I personally think we can match their performance or even exceed it; time will tell.
Cheers,
fijal
Hi Maciej,
Thanks for the answer.
A PyPy that matches what numba or theano can do, all without any extra annotation, would not only be a huge breakthrough for PyPy, it would be a gigantic step forward for the entire numerics community.
Thank you and keep up the good work,
Dan
@Anonymous: I'd like to point out again that all this NumPy work would get more traction and faster development within PyPy if we could manage to interest (and get contributions from) anyone who comes from the scientific community. Ourselves, we are looking at this topic as a smallish part of the whole Python world, so we disagree (to a point) with your comment "a huge breakthrough for pypy". :-)