Last week a paper was discussed at the operating systems reading group (website is out of date), about a garbage collector called the Mapping Collector (MC). It is a generational garbage collector that addresses the issue that mark-and-compact and copying garbage collectors spend most of their overhead moving live objects in memory. MC still copies objects from the nursery to the tenure, on the assumption that most objects die in the nursery, so copying the survivors is cheap. It avoids copying only in the tenure: it figures out which memory pages are completely occupied by garbage and tells the operating system to unmap those pages in order to free the memory. Of course, there is a fallback strategy: if the unmapping does not reclaim enough memory, MC has to compact the tenure using the mark-and-compact Compressor collector.
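The core decision can be illustrated with a toy simulation (my own sketch, not the paper's implementation; the page size, object layout, and function names are all made up): after marking, any page that contains no byte of a live object is eligible to be handed back to the OS, which a real collector would do with something like munmap or madvise.

```python
PAGE_SIZE = 4096

def reclaimable_pages(objects, heap_pages):
    """objects: (start, size, live) triples laid out in the tenure.
    Returns the set of page indices containing no live data; in a real
    collector these are the pages MC would ask the OS to unmap."""
    pages_with_live = set()
    for start, size, live in objects:
        if live:
            first = start // PAGE_SIZE
            last = (start + size - 1) // PAGE_SIZE
            pages_with_live.update(range(first, last + 1))
    return set(range(heap_pages)) - pages_with_live

# Two pages of pure garbage, then a small live object on page 2:
objs = [(0, 4096, False), (4096, 4096, False), (8192, 64, True)]
print(sorted(reclaimable_pages(objs, 3)))  # → [0, 1]
```

Note that the live 64-byte object pins all of page 2, which is exactly the weakness the rest of the discussion turns on.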
The design implies that MC can never do more copying than the pure fallback mechanism, but it could reclaim less memory, because partially occupied pages cannot be freed. The benchmarks confirm this, but the question arises whether one could write a program that triggers the worst-case performance. MC relies on the assumption that garbage in the tenure clumps into clusters big enough to span memory pages. If a page contains even one small live object, it cannot be unmapped. The situation resembles the nail house phenomenon (urban planners face problems similar to those of dynamic storage allocation).
To support the assumption, MC cites the "widely-known phenomenon that objects with similar lifetimes tend to exhibit spatial locality in the heap," an observation advocated by P. R. Wilson et al., adding that "in particular, we find that dead objects often occur in large clusters." The question then becomes: what makes deaths happen in large clusters? If we willfully scattered the placement of these objects, objects that would previously die in one large cluster would now die in different places. A program normally has no control over where its objects are allocated, and it has to take into account that every object has a second "birth" when it survives the nursery and is moved to the tenure heap. A worst-case program could assume that objects are copied in depth-first search order, infer the layout of these objects in the tenure, and make them die in a way that scatters garbage across memory.
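A toy version of this adversary (again a sketch with invented sizes, not anything from the paper): with the same number of live objects, a program whose survivors pack into one page lets almost every page be unmapped, while a program that keeps exactly one small object alive per page reclaims nothing.

```python
PAGE_SIZE = 4096
NUM_PAGES = 16
OBJ_SIZE = 64

def pages_unmappable(live_addrs, num_pages):
    # A page can be unmapped only if no live object falls on it.
    live_pages = {addr // PAGE_SIZE for addr in live_addrs}
    return num_pages - len(live_pages)

# Friendly program: the 16 survivors cluster together on page 0.
clustered = [i * OBJ_SIZE for i in range(NUM_PAGES)]
# Adversarial program: one "nail house" survivor pinning each page.
scattered = [p * PAGE_SIZE for p in range(NUM_PAGES)]

print(pages_unmappable(clustered, NUM_PAGES))  # → 15
print(pages_unmappable(scattered, NUM_PAGES))  # → 0
```

Both heaps hold the same 1 KiB of live data; only its placement changes, and it takes MC from reclaiming 15 of 16 pages to reclaiming none.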
This thought exercise might seem contrived, but it does bring out a point about the transfer of objects from nursery to tenure. Depth-first search order approximates the way many programs traverse objects in memory, and object references tend to relate objects of similar usefulness together. In other words, copying objects in this order tends to create spatial locality. This property of generational garbage collection favors the Mapping Collector. In order for MC to perform well, it needs to be able to gather at least a few pages full of related live objects—assuming they would be freed together—before the nursery gets full. The size of the nursery becomes a factor in MC's performance.
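The nursery-size effect can be made concrete with a bit of arithmetic (my reading of the design, not a result from the paper): suppose each nursery collection copies one contiguous cluster of related survivors into the tenure, and that cluster later dies as a unit while its neighbors stay live. Only the pages entirely inside the dead cluster can be unmapped, so clusters smaller than a page, i.e. small nurseries, free nothing.

```python
PAGE_SIZE = 4096

def fully_dead_pages(start, size):
    """Whole pages covered by a dead cluster at byte offset `start`."""
    first_full = -(-start // PAGE_SIZE)        # first page entirely inside (ceil)
    last_full = (start + size) // PAGE_SIZE    # one past the last full page
    return max(0, last_full - first_full)

# Bigger clusters of survivors (from bigger nurseries) free more pages;
# the cluster starts at an arbitrary unaligned offset of 100 bytes.
for cluster_size in (1024, 4096, 16384, 65536):
    print(cluster_size, fully_dead_pages(100, cluster_size))
# → 1024 0, 4096 0, 16384 3, 65536 15
```

The unaligned start also shows a second cost: up to two boundary pages per cluster stay pinned regardless of cluster size, which matters much more when clusters are small.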
This presents an interesting trade-off: the more copying work MC does during nursery collection, by using a larger nursery that accumulates bigger clusters of related survivors, the more memory it is likely to be able to free later by unmapping. This is probably something the original authors of the work did not anticipate.