PyPy contains a framework for writing custom Garbage Collectors, and a few simple GCs have been written in this framework. A common issue with all these GCs is how to find all the stack roots, i.e. all the pointers to live GC-managed objects currently stored in local variables, in all the callers of the current function. The current solution is to maintain a custom shadow stack of roots, where all functions push and pop copies of their local variables of type "GC pointer". Clearly this is an overhead. Can we remove it?
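To illustrate what this costs, here is a minimal sketch of the shadow stack technique in C. All the names (gc_shadowstack_top, make_object, and so on) are invented for the example; this is not the exact code the PyPy backends emit:

/* Minimal sketch of the shadow stack technique.  All names here are
 * hypothetical; this is not the exact code generated by the PyPy backends. */
typedef struct gc_object gc_object;       /* opaque GC-managed object */

extern gc_object **gc_shadowstack_top;    /* current top of the shadow stack */
extern gc_object *make_object(void);      /* returns a new GC pointer */
extern void do_something(gc_object *);    /* any call that may trigger a collection */

void example(void)
{
    gc_object *x = make_object();
    gc_object *y = make_object();

    /* push copies of the live GC pointers before a call that may collect */
    gc_shadowstack_top[0] = x;
    gc_shadowstack_top[1] = y;
    gc_shadowstack_top += 2;

    do_something(x);

    /* pop, reloading the locals in case a moving collector updated them */
    gc_shadowstack_top -= 2;
    x = gc_shadowstack_top[0];
    y = gc_shadowstack_top[1];

    do_something(y);
}

Every function with GC locals pays for these pushes, pops and reloads around every call; that is the overhead the rest of this post tries to get rid of.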
LLVM has recently grown some support for this. By emitting markers in the LLVM source and with the help of a bit of custom C++ code, we can generate stack maps for the functions compiled by LLVM. Then, with 100% non-portable code in our framework GC's root finding algorithm, we can walk the machine stack and locate where in each stack frame LLVM stores the GC pointers. (Yes, I mean non-portable: LLVM offers no help for doing that. Maybe it will at some point, though so far I haven't managed to explain to the people working on this in LLVM why it is an issue...). I've tried that approach in the llvmgcroot branch. Compared to the manually-managed shadow stack, this gives speed improvements which are, very roughly, on the order of 5%.
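The stack walking is the non-portable part. Just to give an idea of what it involves, here is a very rough sketch of mine (not the actual branch code): it assumes an x86-like frame layout with frame pointers enabled, and process_roots_in_frame is a hypothetical helper; whether anything like this works depends entirely on the platform, the calling convention and the optimization level.

/* Non-portable sketch: walk the machine stack by following the chain of
 * saved frame pointers.  x86-like layout assumed; frame pointers must
 * not be omitted by the compiler. */
struct frame {
    struct frame *caller;     /* saved frame pointer of the calling function */
    void *return_address;     /* where the call will return to */
};

/* Hypothetical helper: given a frame and the return address identifying
 * the call site, find and process the GC pointers stored in that frame. */
extern void process_roots_in_frame(struct frame *f, void *return_address);

void walk_stack_roots(void)
{
    struct frame *f = __builtin_frame_address(0);   /* GCC built-in */
    while (f && f->caller) {                         /* termination simplified */
        /* the roots live in the caller's frame, described by the stack map
         * entry for this return address */
        process_roots_in_frame(f->caller, f->return_address);
        f = f->caller;
    }
}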
Note that this approach prevents some optimizations in LLVM, because it forces LLVM to allocate all local variables of type "GC pointer" on the stack; it cannot keep them in registers and it must assume that they can be changed more or less at any time (as moving GCs do). Can we do better?
Actually, yes. We can even do better in the C backend, using a GCC hack. GCC has this nice extension:
asm("bla" : constraints);
This is meant to generate assembler instructions directly from C. Internally, GCC considers the whole asm() as a single regular instruction of its intermediate language; the constraints are expressed in the same way as the constraints of all the pre-built intermediate-language instructions. They express things like the input and output operands of the instruction, whether they can live in memory or in registers, whether the whole instruction has side effects, and so on. The nice thing about asm() is that it doesn't kill any optimization whatsoever in GCC - it's your job to make sure that you use the correct constraints.
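As a small example of mine, unrelated to PyPy, this is what the constraints look like in standard GCC extended asm on x86:

/* Standard GCC extended asm (x86), shown only to illustrate constraints. */
static inline int add_five(int x)
{
    asm("addl $5, %0"       /* the instruction template */
        : "=r"(x)           /* output operand 0: any register */
        : "0"(x)            /* input operand: must reuse the location of operand 0 */
        : "memory");        /* clobber list: assume the instruction may touch memory */
    return x;
}

Here "=r" asks GCC to put the operand in a register; the "=g" constraint used below additionally allows a memory location, and the "memory" clobber tells GCC that it cannot keep memory values cached in registers across the asm().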
So what I've tried in the asmgcroot branch is to use asm() as markers. In this branch, the C backend produces code like this after each function call, for each local variable containing a live GC pointer:
asm("/* GCROOT %0 */" : "=g"(localvar) : "0"(localvar) : "memory");
This causes GCC to emit the following line in the assembler file it generates:
/* GCROOT register-or-memory-containing-localvar */
I won't go into the details of the asm() line above - the constraints are just enough to make sure that GCC doesn't optimize too much, but they don't prevent most optimizations from occurring. For example, localvar can live in a register.
The assembler will just ignore the line above; it is a comment. But what we can do is write our own tool that parses the assembler files. This tool locates the /* GCROOT */ comments and follows where the register or memory location named in each comment comes from (to do this it must follow the control flow and data flow of the function). This allows it to build a stack map: for each call instruction it knows exactly which registers and stack frame locations contain a live GC pointer. The stack map is emitted in an extra assembler file that we link with the rest. As with LLVM above, the stack map is then used at run time by non-portable code in our GC's stack root tracker.
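Conceptually, the stack map could be a table like the following; the field names and encoding are invented here for illustration (the real format produced by the tool is an internal detail), and the lookup sketched below is the kind of thing the non-portable root tracker does for each frame it walks:

#include <stddef.h>

/* Hypothetical stack map layout: one entry per call site, recording where
 * the live GC pointers are at that point.  A real encoding would be denser
 * and would distinguish stack slots from callee-saved registers. */
struct stackmap_entry {
    void *return_address;      /* identifies the call site */
    int   num_roots;           /* number of live GC pointers at that call */
    int   root_locations[8];   /* frame offsets (or register numbers) of the roots */
};

extern struct stackmap_entry stackmap_table[];   /* emitted in the extra assembler file */
extern int stackmap_table_length;

/* Find the stack map entry for the call site that will return to 'retaddr'. */
struct stackmap_entry *find_stackmap(void *retaddr)
{
    for (int i = 0; i < stackmap_table_length; i++)
        if (stackmap_table[i].return_address == retaddr)
            return &stackmap_table[i];
    return NULL;   /* no live GC pointers at this call site, or external code */
}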
Yes, that's rather insane. But at least we don't need to modify the assembler files - just read them. If GCC is too clever in its optimizations, the custom parser will get lost and complain cleanly; but I think the approach is relatively safe in the sense that GCC's optimizations should not be able to make the custom parser silently produce wrong results.
The branch is not merged because it's probably too insane to merge (not to mention that it's probably not portable to non-GCC compilers, and it is completely platform-specific). Still, it gives good results, better than the pure LLVM approach - on the order of 10% to 25% speed-ups for pypy-c.
How does Objective-C 2.0 handle this same problem?
Obviously it depends on the compiler, but the basic idea is that the natural place to support this is in the compiler itself. For example, instead of parsing the assembler produced by GCC, it would probably be possible to extend GCC to cleanly generate stack maps. (This is basically what I tried to do with LLVM, which gives a plug-in API to do that.)
After a bit of googling, GCC doesn't seem to support Objective-C 2.0 yet. Moreover, the current Objective-C run-time library simply uses the conservative Boehm collector.
ObjC 2 does not use Boehm. The collecting thread suspends other threads and conservatively scans their stacks. It picks up values in registers by querying the kernel for the suspended thread state. It depends heavily on Mach.
llvm-gcc fully supports inline asm, so you could use the same hack you use with GCC with your llvm backend.
Also, you might be interested in http://llvm.org/PR1917, which proposes a method of identifying GC pointers that doesn't disable most optimizations.
To the Anonymous saying that Objective-C 2 does not use Boehm: that may be true (I don't know the details), but the Boehm GC also suspends other threads, conservatively scans their stacks and picks up values in registers using the OS.