Hello everyone.
We're pleased to announce that as of today, the default PyPy comes with a GC that has much smaller pauses than yesterday.
Let's start by explaining roughly what GC pauses are. In CPython each object has a reference count, which is incremented each time a reference to the object is created and decremented each time a reference is forgotten. This means an object is freed as soon as its reference count drops to zero. That is only half of the story though. First note that when the last reference to a large tree of objects goes away, you get a pause: all of those objects are freed at once. Your program is not progressing at all during this pause, and the pause can be arbitrarily long. It does occur at deterministic times, though. But now consider code like this:
    class A(object):
        pass

    a = A()
    b = A()
    a.item = b
    b.item = a
    del a
    del b
This creates a reference cycle: although we deleted the references to a and b from the current scope, each object still has a reference count of 1 because they point to each other, even though the whole group has no references from the outside. CPython therefore employs a cyclic garbage collector to find such cycles. It walks over all objects in memory, starting from some known roots: type objects, variables on the stack, and so on. This solves the problem, but it can create noticeable, nondeterministic GC pauses as the heap becomes large and convoluted.
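For the curious, here is a small, self-contained way to watch CPython's cyclic collector reclaim the cycle from the example above; gc is the standard library module, and the number printed is simply whatever gc.collect() reports as unreachable:

    import gc

    class A(object):
        pass

    a = A()
    b = A()
    a.item = b
    b.item = a
    del a
    del b

    # The two objects are now unreachable, but each still has a reference
    # count of 1, so only the cyclic collector can reclaim them.
    gc.set_debug(gc.DEBUG_STATS)   # make each collection print statistics
    print(gc.collect())            # number of unreachable objects found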
PyPy essentially has only the cycle finder - it does not bother with reference counting; instead it walks the live objects every now and then (this is a big simplification; PyPy's GC is much more complex than this). Although this might sound like a missing feature, it is really one of the reasons why PyPy is so fast, because at the end of the day the total time spent managing memory is lower in PyPy than in CPython. However, as a result, PyPy also has the problem of GC pauses.
Alleviating this problem is essential for applications like games, so we started to work on an incremental GC, which spreads the walking of objects and the freeing of dead ones across the execution time in smaller steps. The work was sponsored by the Raspberry Pi foundation, started by Andrew Chambers and finished by Armin Rigo and Maciej Fijałkowski.
Benchmarks
Everyone loves benchmarks. We did not measure any significant speed difference on our quite extensive benchmark suite on speed.pypy.org. The main benchmark that we used for the other comparisons was translating the Topaz Ruby interpreter using various versions of PyPy and CPython. The exact command was python <pypy-checkout>/bin/rpython -O2 --rtype targettopaz.py. Versions:
- topaz - dce3eef7b1910fc5600a4cd0afd6220543104823
- pypy source - defb5119e3c6
- pypy compiled with minimark (non-incremental GC) - d1a0c07b6586
- pypy compiled with incminimark (new, incremental GC) - 417a7117f8d7
- CPython - 2.7.3
The memory usage of CPython, PyPy with minimark and PyPy with incminimark is shown here. Note that this benchmark is quite bad for PyPy in general: the memory usage is higher and the total time taken is longer. This is due to the JIT warmup being both memory-hungry and inefficient (see below). The important point, though, is that the new GC is not worse than the old one.
EDIT: The red line is CPython, the blue one is incminimark (new), and the green one is minimark (old).
The image was obtained by graphing the output of memusage.py.
However, the GC pauses are significantly smaller. For PyPy, the way to get pause timings is to run the program with PYPYLOG=gc-collect:log pypy program.py and measure the time between the start and stop of each collection; for CPython, the magic incantation is gc.set_debug(gc.DEBUG_STATS) and parsing the output. For what it's worth, the average, the total and the total number of events for CPython are not directly comparable, since they only cover the cyclic collector and not the reference counting. The only comparable figures are the number of long pauses and their durations. In the tables below, pause durations are sorted into 8 buckets, each meaning "below or equal to that threshold". The output was generated using the gcanalyze tool.
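As a rough illustration of the bucketing used in the tables below (this is a hedged sketch, not the actual gcanalyze tool; how you obtain the raw pause durations from the gc.DEBUG_STATS output or from the PYPYLOG file is left out):

    def bucketize(pauses_ms, nbuckets=8):
        """Sort pause durations (in ms) into equal-width buckets,
        each meaning 'below or equal to this threshold'."""
        width = max(pauses_ms) / float(nbuckets)
        thresholds = [width * (i + 1) for i in range(nbuckets)]
        counts = [0] * nbuckets
        for pause in pauses_ms:
            for i, threshold in enumerate(thresholds):
                # the second condition guards against floating-point
                # rounding on the largest pause
                if pause <= threshold or i == nbuckets - 1:
                    counts[i] += 1
                    break
        return thresholds, counts

    # Example: thresholds, counts = bucketize([12.3, 15.1, 130.0])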
CPython:
150.1ms | 300.2ms | 450.3ms | 600.5ms | 750.6ms | 900.7ms | 1050.8ms | 1200.9ms |
5417 | 5 | 3 | 2 | 1 | 1 | 0 | 1 |
PyPy minimark (non-incremental GC):
216.4ms | 432.8ms | 649.2ms | 865.6ms | 1082.0ms | 1298.4ms | 1514.8ms | 1731.2ms |
27 | 14 | 6 | 4 | 6 | 5 | 3 | 3 |
PyPy incminimark (new incremental GC):
15.7ms | 31.4ms | 47.1ms | 62.8ms | 78.6ms | 94.3ms | 110.0ms | 125.7ms |
25512 | 122 | 4 | 1 | 0 | 0 | 0 | 2 |
As we can see, while there is still work to be done (the ~100ms pauses could be split across several steps), we improved the situation quite drastically without any actual performance difference.
A note about the benchmark: we know it's a pretty extreme case of JIT warmup, we know we suck on it, we're working on it, and we're not afraid of showing that PyPy is not always the best ;-)
Nitty gritty details
Here are some nitty gritty details for people really interested in garbage collection. This was done as a patch to "minimark", our current GC, and is called "incminimark" for now. Minimark is a generational stop-the-world GC. New objects are allocated "young", which means that they initially live in the "nursery", a special zone of a few MB of memory. When the nursery is full, a "minor collection" step moves the surviving objects out of the nursery. This can be done quickly (in a few milliseconds) because we only need to walk the young objects that survive --- usually a small fraction of all young objects, and by far not all objects that are alive at this point, only the young ones. However, from time to time this minor collection is followed by a "major collection": in that step, we really need to walk all objects to classify which ones are still alive and which ones are now dead ("marking"), and to free the memory occupied by the dead ones ("sweeping"). You can read more details here.
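To make that structure concrete, here is a deliberately simplified, pure-Python sketch of the generational idea described above; the names and numbers (ToyGenerationalGC, a nursery of 100 objects, the is_reachable callback) are illustrative only and are not PyPy's actual implementation, which manages raw memory directly:

    class ToyGenerationalGC(object):
        """Toy model of a generational GC: new objects go into a small
        nursery; when it fills up, a minor collection moves the survivors
        into the old generation."""

        NURSERY_SIZE = 100          # illustrative; the real nursery is a few MB

        def __init__(self, is_reachable):
            self.is_reachable = is_reachable   # stand-in for tracing from the roots
            self.nursery = []                  # young objects
            self.old_objects = []              # survivors of minor collections

        def allocate(self, obj):
            if len(self.nursery) >= self.NURSERY_SIZE:
                self.minor_collection()
            self.nursery.append(obj)
            return obj

        def minor_collection(self):
            # Only the young objects need to be walked here, which is why
            # a minor collection takes just a few milliseconds.
            survivors = [obj for obj in self.nursery if self.is_reachable(obj)]
            self.old_objects.extend(survivors)
            self.nursery = []

        def major_collection(self):
            # Walks *all* objects (marking + sweeping) --- this is the
            # expensive, pause-inducing step that incminimark splits up.
            self.old_objects = [obj for obj in self.old_objects
                                if self.is_reachable(obj)]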
This "major collection" is what gives the long GC pauses. To fix this problem we made the GC incremental: instead of running one complete major collection, we split its work into a variable number of pieces and run each piece after every minor collection for a while, until there are no more pieces. The pieces are each doing a fraction of marking, or a fraction of sweeping. It adds some few milliseconds after each of these minor collections, rather than requiring hundreds of milliseconds in one go.
The main issue is that splitting the major collections means that the main program is actually running between the pieces, and so it can change the pointers in the objects to point to other objects. This is not a problem for sweeping: dead objects will remain dead whatever the main program does. However, it is a problem for marking. Let us see why.
In terms of the incremental GC literature, objects are either "white", "gray" or "black". This is called tri-color marking. See for example this blog post about Rubinius, or this page about LuaJIT, or the Wikipedia description. The objects start as "white" at the beginning of marking; become "gray" when they are found to be alive; and become "black" when they have been fully traversed. Marking proceeds by scanning gray objects for pointers to white objects. The white objects found are turned gray, and the gray objects scanned are turned black. When there are no more gray objects, the marking phase is complete: all remaining white objects are truly unreachable and can be freed (by the following sweeping phase).
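As a hedged illustration of both the tri-color invariant and the "pieces" mentioned earlier, here is a toy marking loop over an object graph represented as a plain dict; none of this is PyPy's code, and the limit argument merely stands in for the bounded amount of work done between minor collections:

    WHITE, GRAY, BLACK = "white", "gray", "black"

    def start_marking(graph, roots):
        """graph maps each object to the list of objects it points to."""
        color = dict.fromkeys(graph, WHITE)
        gray_list = []
        for root in roots:                 # the roots start out gray
            color[root] = GRAY
            gray_list.append(root)
        return color, gray_list

    def marking_step(graph, color, gray_list, limit=100):
        """Scan at most `limit` gray objects; return True once marking is done."""
        for _ in range(limit):
            if not gray_list:
                return True                # no gray objects left: marking complete
            obj = gray_list.pop()
            for child in graph[obj]:
                if color[child] == WHITE:  # white child found: turn it gray
                    color[child] = GRAY
                    gray_list.append(child)
            color[obj] = BLACK             # obj has been fully traversed
        return False                       # more pieces of marking still to do

    def sweep(graph, color):
        """Objects that are still white after marking are unreachable."""
        return [obj for obj in graph if color[obj] == WHITE]

    # Example: 'c' is unreachable from the root 'a'.
    graph = {'a': ['b'], 'b': ['a'], 'c': []}
    color, gray_list = start_marking(graph, roots=['a'])
    while not marking_step(graph, color, gray_list, limit=2):
        pass                               # the main program would run in between
    assert sweep(graph, color) == ['c']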
In this model, the important part is that a black object can never point to a white object: if the latter remains white until the end of marking, it will be freed, which is incorrect because it is still reachable from the black object. How do we ensure that the main program, running in the middle of marking, does not write a pointer to a white object into a black object? This requires a "write barrier", i.e. a piece of code that runs every time we set a pointer into an object or array. This piece of code checks whether some (hopefully rare) condition is met, and calls a function if that is the case.
The trick we used in PyPy is to consider minor collections as part of the whole, rather than focusing only on major collections. The existing minimark GC had always used a write barrier of its own to do its job, like any generational GC. This existing write barrier is used to detect when an old object (outside the nursery) is modified to point to a young object (inside the nursery), which is essential information for minor collections. Although this detection was the goal, the actual write barrier code is simpler: it just records all old objects into which we write any pointer --- whether to a young or an old object. As we found out over time, doing so is not actually slower, and can even be a performance improvement: for example, if the main program does a lot of writes into the same old object, we don't need to check over and over again whether the written pointer points to a young object or not. We just record the old object in some list the first time, and that's it.
The trick is that this unmodified write barrier works for incminimark too. Imagine that we are in the middle of the marking phase while the main program runs. The write barrier records all old objects that are being modified. Then at the next minor collection, all surviving young objects are moved out of the nursery. At this point, as we are about to resume the major collection's marking phase, we simply add to the list of pending gray objects all the objects we just considered --- both the objects recorded as "old objects that are being modified" and the objects that we just moved out of the nursery. A fraction of the former list were black objects, so this means they are turned back from black to gray. This technique implements nicely, if indirectly, what is called a "backward write barrier" in the literature: the "backwardness" is about changing a color in the direction opposite to the usual "white -> gray -> black" progression, thus creating more work for the GC. (This is as opposed to a "forward write barrier", where we would also detect "black -> white" writes but turn the white object gray.)
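Here is a hedged, toy sketch of the scheme just described: a write barrier that records an old object only the first time it is written to (using a per-object flag), and a minor collection that pushes the recorded objects and the freshly promoted ones back onto the gray list while marking is in progress. All names are illustrative and not PyPy's internals:

    class ToyIncrementalGC(object):
        """Toy sketch of the write barrier described above: old objects are
        recorded the first time they are written to, and at the next minor
        collection they (plus the freshly promoted objects) are pushed back
        onto the gray list so that marking stays correct."""

        def __init__(self):
            self.modified_old_objects = []   # old objects written to since the last minor collection
            self.needs_barrier = set()       # old objects whose barrier has not triggered yet (the "flag")
            self.gray_list = []              # pending gray objects of the marking phase
            self.marking_in_progress = False

        def write_barrier(self, obj):
            # Hopefully rare slow path: triggers at most once per object
            # between two minor collections.
            if obj in self.needs_barrier:
                self.needs_barrier.discard(obj)
                self.modified_old_objects.append(obj)

        def setfield(self, obj, name, value):
            self.write_barrier(obj)          # runs on every pointer write
            setattr(obj, name, value)

        def minor_collection(self, promoted_young_objects):
            if self.marking_in_progress:
                # Some of the modified old objects may already be black:
                # turn them (and the objects that just left the nursery)
                # gray again by putting them on the pending list.
                self.gray_list.extend(self.modified_old_objects)
                self.gray_list.extend(promoted_young_objects)
            # Re-arm the barrier: all old objects carry the flag again,
            # including the ones that just became old.
            self.needs_barrier.update(self.modified_old_objects)
            self.needs_barrier.update(promoted_young_objects)
            self.modified_old_objects = []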
In summary, I realize that this description is less about how we turned minimark into incminimark, and more about how we differ from the standard way of making a GC incremental. What we really had to do to make incminimark was to write logic that says "if the major collection is in the middle of the marking phase, then add this object to the list of gray objects", and put it at a few places throughout minor collection. Then we simply split a major collection into increments, doing marking or sweeping of some (relatively arbitrary) number of objects before returning. That's why, after we found that the existing write barrier would do, it was not much actual work, and could be done without major changes. For example, not a single line from the JIT needed adaptation. All in all it was relatively painless work. ;-)
Cheers,
armin and fijal
16 comments:
Nice work! :)
Thank you for this nice explanation.
Which mechanism do you use to avoid adding an old object twice to the list of modified old objects?
Thank you! thank you! thank you! Game dev on pypy just leveled up!
Very clever! But eh, your graphs show that your program is using 2-3x the memory of CPython. How much faster is your program overall in exchange for this hugely larger memory usage?
@François: a flag on the object. All old objects have this flag initially, and we use it to detect if the write barrier must trigger. We remove it when the write barrier has triggered once. We re-add it during the following minor collection.
@Anonymous: this program is slower on PyPy too. The point of the benchmark is to show that incminimark gives the same results as minimark, and to show that the JIT has bad cases. Running the same program for a much longer time (5-10x) lets PyPy slowly catch up and eventually beat CPython by a factor of 2. The memory usage evens out at around 4 or 4.5 GB (and I'd expect even larger examples to show lower consumption on PyPy, but that's mostly a guess).
Thanks for moving Python forward!
How does incminimark compare to Azul's C4 JVM GC and HotSpot's G1 GC?
In other words, are there strong guarantees that for big heap sizes, e.g. 12 GB, the GC pauses will not exceed some value, e.g. 100 ms?
Sounds like great progress, but I hope you understand that even 15-30ms is way too much for games. That's 1-2 frames. It needs to be an order of magnitude less to ensure smooth FPS.
Do you have plans to give the program any say in whether the GC should strive for low latency vs. high throughput?
Great writeup, explaining that kind of concept in a clear way is not easy. And well done on the unequivocal improvements :)
@Anonymous Yes, you'll still miss some frames, but compared to a 1 second pause, PyPy suddenly became usable for games. 55fps (over what duration did those 25K collections happen?) is not perfect, but most users won't notice. That said, it *would* be nice to be able to tune latency vs throughput.
@Anonymous: our incminimark comes with no serious strong guarantee. I still think it's enough for most games, say, if "almost all" the pauses are around 10ms. It's also tweakable (see the PYPY_GC_* environment variables documented in rpython/memory/gc/incminimark.py, and try to call something like gc.collect(1) at the end of each frame).
Anyway, at around the same time scale is the time spent JITting, which also causes apparent pauses in the program. I think that fixing it all with really strong guarantees is a much, much harder problem. CPython doesn't give any guarantees either, as explained at the start of the blog post.
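For readers wondering what that suggestion might look like in practice, here is a hedged sketch of a frame loop that gives the GC an explicit opportunity to work between frames; the exact effect of the argument to gc.collect() depends on the interpreter and should be checked against the documentation, so treat this purely as an illustration:

    import gc
    import time

    FRAME_BUDGET = 1.0 / 60            # aim for 60 frames per second

    def run_frame():
        pass                           # placeholder for game logic and rendering

    for frame in range(600):           # ten seconds' worth of frames, for the sketch
        start = time.time()
        run_frame()
        gc.collect(1)                  # let the GC do some of its work now, between frames
        elapsed = time.time() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)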
@Armin Rigo: Yeah, it's no use pushing GC pauses much lower than other pauses, but that just means other things need improving as well. ;) If I had to draw an arbitrary line, I'd say half a frame (i.e. 8ms for 60fps, 4ms for 120fps 3D) is probably a good target for the maximum.
The thing with CPython is that you can turn the GC off and still have everything non-cyclic collected. So with enough attention to detail you can avoid GC pauses completely.
BTW, is any work ongoing with regard to fully concurrent GC?
You *cannot* avoid GC pauses in CPython: see the first paragraph of the blog post. You can only make the GC pauses deterministic, by disabling the cyclic collector. Then you can hack the program as needed to reduce GC pauses if there are some.
Thanks for this really educational and accessible explanation - it's rare to find such a concise and clear piece of writing that a non expert can understand on the subject of GC.
This needs visualization of processes to win Wikipedia article of the month.
You can also get arbitrarily long "gc" pauses in CPython by removing the last reference to some deeply nested data structure...
Wow PyPy keeps paying off!
I am so glad you guys have the time (and hopefully funding) to push the dynamic language world forward!
If you fork() the whole process and do the marking on the frozen forked copy (copy-on-write), then you can be fully incremental without pauses, as long as you have enough spare system memory compared to the process size (the main process keeps growing while you're marking, and the pathological copy-on-write case is 2x, however unlikely).