Thursday, April 17, 2008

Float operations for JIT

Recently, we taught the JIT x86 backend how to produce code for the x87 floating point coprocessor. This means the JIT is able to nicely speed up float operations (this is not yet true for our Python interpreter - we have not integrated it there yet). This is the first time we have started going beyond what is feasible in psyco - making floats work on top of psyco would take a lot of effort, way more than it will take in PyPy.
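To give a feel for the kind of code this targets, here is a hypothetical float-heavy micro-benchmark (the function and the numbers are made up purely for illustration, not taken from our test suite); it is exactly this sort of arithmetic that a float-aware backend can compile down to native floating point instructions instead of operations on boxed float objects:

```python
# Hypothetical micro-benchmark, purely for illustration: a loop dominated
# by float arithmetic. With a float-aware JIT backend, the divisions and
# additions in the loop body can be turned into native floating point
# instructions rather than calls that allocate boxed float objects.
def harmonic_ish(n):
    total = 0.0
    x = 1.0
    for _ in range(n):
        total += 1.0 / x
        x += 0.5
    return total

print(harmonic_ish(1000000))
```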

This work is at a very early stage and lives on the jit-hotpath branch, which includes all our recent experiments on JIT compiler generation, including the tracing JIT experiments and a large JIT refactoring.

Because we don't encode Python's semantics in our JIT (which is really a JIT generator), it is expected that our Python interpreter with a JIT will become fast "suddenly", once our JIT generator is good enough. When that point is reached, we would also get fast interpreters for Smalltalk or JavaScript with relatively little effort.

Stay tuned.

Cheers,
fijal

6 comments:

Michael Foord said...

Having a fast implementation of Ruby written in Python would be very cool. :-p

René Dudfield said...

Super cool!

Are you going to add SIMD stuff to the i386 backend?

Which is the main backend at the moment? LLVM?

cheers,

jlg said...

It would be amazing to run SciPy on PyPy with the JIT when this will be ready.

Anonymous said...

I'm interested in the choice of x87 as well. My understanding was that Intel (at least) was keeping x87 floating point around because of binary applications but that for single element floating point the SSE single-element instructions were the preferred option on any processor which supports SSE. (Unfortunately since they've got such different styles of programming I can understand if it's just that "older chips have to be supported, and we've only got enough programming manpower for 1 implementation".)

Maciej Fijalkowski said...

x87 because it's simpler and better documented. Right now it would be ridiculously easy to reimplement it using SSE.

Armin Rigo said...

The main backend is the one for the 386. We have no working LLVM JIT backend: although LLVM advertises support for JIT compilation, what it really provides is a regular compiler packaged as a library that can be used at run time. This is only suitable for some kinds of usage; for example, it couldn't be used to write a Java VM with good just-in-time optimizations (which need, e.g., quick and lazy code generation and regeneration, polymorphic inline caches, etc.).
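For readers unfamiliar with the term, here is a toy sketch of what a polymorphic inline cache does at a single call site. This is purely illustrative Python, not PyPy's actual machinery; the class and attribute names are invented. The point is that each call site keeps its own small, mutable cache that is filled in lazily as new receiver classes show up - which is why a JIT needs cheap, incremental code patching rather than a one-shot whole-function compiler:

```python
# Illustrative sketch only (not PyPy's implementation): a toy polymorphic
# inline cache for method lookup. Each call site remembers the last few
# receiver classes it has seen and the lookup result for each, so repeated
# calls with the same classes skip the full attribute lookup.
class PolymorphicInlineCache:
    def __init__(self, attr_name, max_entries=4):
        self.attr_name = attr_name
        self.max_entries = max_entries
        self.entries = []  # list of (class, looked-up function)

    def lookup(self, obj):
        cls = type(obj)
        # Fast path: this class was already seen at this call site.
        for cached_cls, cached_func in self.entries:
            if cls is cached_cls:
                return cached_func.__get__(obj, cls)
        # Slow path: do the full lookup once, then extend the cache lazily.
        func = getattr(cls, self.attr_name)
        if len(self.entries) < self.max_entries:
            self.entries.append((cls, func))
        return func.__get__(obj, cls)


class Dog:
    def speak(self):
        return "woof"

class Cat:
    def speak(self):
        return "meow"

site = PolymorphicInlineCache("speak")
print(site.lookup(Dog())())  # slow path, then cached -> "woof"
print(site.lookup(Dog())())  # fast path, class found in cache -> "woof"
print(site.lookup(Cat())())  # new class, slow path again -> "meow"
```

In a real JIT the cache entries are patched directly into the generated machine code, not kept in a Python list, but the lazy, per-call-site growth shown above is the behaviour a compile-once library interface makes hard to support.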