Friday, July 24, 2009

Splay trees with duplicate items

I've implemented a splay tree, and I'd like it to store duplicate items, which are key-value mappings, as follows: if a key to be inserted already exists, the new binding shadows the old one; if a key is removed, the old binding is revealed. In other words, inserting and removing the same key shadows and reveals values in LIFO, or stack, order.
The most obvious solution is to have each node point to a singly linked list, with the cons/uncons operations trivially obeying stack order. But we could embed the singly linked list in a simple binary search tree if we relaxed the invariant a bit, for example requiring left child <= parent instead of left child < parent. This effectively turns duplicate nodes into a singly linked chain down the left path. The idea is that we always reach the topmost duplicate first, which is the most recently inserted item; and if that node is removed, it reveals the next item with the same key.
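To make this concrete, here is a minimal sketch of the duplicate-insertion step under the relaxed invariant. The names and the surrounding splay machinery are assumptions on my part: it assumes the splay step has already brought an existing node with an equal key to the root, and then makes the new binding the new root, chaining the old one down the left path.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical node type for a splay tree with the relaxed
// invariant: left <= parent < right.
struct Node {
    int key;
    int value;
    Node* left;
    Node* right;
};

// Assume splaying has already brought a node with an equal key to
// the root. The new binding becomes the new root; the old binding
// starts the chain of duplicates down the left path.
Node* InsertDuplicateAtRoot(Node* root, Node* fresh) {
    fresh->left = root;          // old binding: first node of the left chain
    fresh->right = root->right;  // keys > key must move above the old binding
    root->right = NULL;
    return fresh;                // newest binding is reached first by lookup
}
```

Removal is where the post's trouble starts: restoring the chain after the root is deleted (and keeping it intact through zig-zig/zig-zag rotations) is the part that does not stay this simple.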
The complication with a splay tree is that it rotates the tree, which breaks the chain of duplicate nodes. Sometimes it is as bad as splitting the chain between the left and right children of a node whose key differs from the chain's, invalidating the binary search tree invariant. I figured out that, in the zig-zig and zig-zag cases, if the grandparent node has the same key as the parent node, then it is possible to come up with special-case rotations that preserve the chain. However, the terminal case actually needs to descend down the whole chain of duplicate nodes. At that point I gave up.
Another problem with embedding a singly linked list in the tree is that it increases the depth of the tree, hurting the overall running time. In this case, it seems that the simple approach, making each node a pointer to a singly linked list, is better. It only took me 5 minutes to change the implementation this way, and it worked.
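The simple approach can be sketched like this (names are mine, not the actual implementation): each tree node holds one key and a stack of values, so duplicates never touch the tree structure at all.

```cpp
#include <cassert>
#include <cstddef>

// One cell of the per-key value stack.
struct ValueCell {
    int value;
    ValueCell* next;
};

// Hypothetical tree node: the tree itself stores each key once.
struct TreeNode {
    int key;
    ValueCell* values;  // top of stack = most recent binding
    TreeNode* left;
    TreeNode* right;
};

// Insert a new binding for a key already in the tree: push.
void PushBinding(TreeNode* node, ValueCell* cell) {
    cell->next = node->values;
    node->values = cell;
}

// Remove the newest binding, revealing the previous one: pop.
// When node->values becomes NULL, the caller should delete the
// tree node itself.
ValueCell* PopBinding(TreeNode* node) {
    ValueCell* top = node->values;
    node->values = top->next;
    return top;
}
```

Since rotations only ever move whole nodes, the stacks survive splaying untouched, which is why this took minutes rather than special-case rotations.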
Update (9/15): Wikipedia claims that identical keys in a splay tree are preserved in insertion order, like a stable sort, so it would suffice to find the leftmost or rightmost node of the identical key. I haven't tried this.
Update (9/16): Here is the Wikipedia edit that added the aforementioned claim about stable sort. However, I don't think retrieving a duplicate item is as simple as finding the leftmost or rightmost node. Even if insertion is stable, the duplicate nodes may not be contiguous (e.g. the balanced binary search tree for 1, 2, 3, 3, 3, 4, 5, where the 3's are separated by nodes 2 and 4). Looking for the "largest" duplicate complicates the search a great deal, because you have to probe either the minimum of the right child or the maximum of the left child, and may have to backtrack. Curiously, Splaysort does not mention how it deals with duplicates.

Wednesday, July 22, 2009

Captain's Log on Quest

Okay, not quite captain's log.
My office mate has been working on an operating system called Quest, and I'm keeping a record for him of what happened. He had been testing the OS on a virtual machine, and yesterday he decided to try it on real hardware. He found a reasonably recent machine with APIC support in the lab. He would burn a CD-R, run upstairs, watch something go wrong, write down some diagnostics on a piece of paper, run back downstairs to the office, change some code, burn a new CD-R, and run upstairs again.
I hate to see CD-Rs being thrown into the trash at an incredible rate. After all, CD-Rs are supposed to last 100 years; whether they still preserve any data at that point is an entirely different question. Since each image is small (~2MB), I suggested burning multi-session CDs instead. He tried but couldn't get it to boot. I don't have any other ideas.
Today he bought some CD-RWs, which is an improvement. He also brought his laptop and stayed in the lab most of the time, which cut the turnaround time a lot. When I visited him, he showed me a strange error that he had fixed.
There is a static char arena[1000000]; in the code which is used for rudimentary memory allocation. He found that if he did not initialize this arena to zero, then the code that fills this memory with free-list pointers would page-fault, accessing memory at some outrageous location (this is typical of an array-out-of-bounds problem somewhere else polluting the arena).
I told him to try memsetting it to 0x5A instead, which would give us a clearer indicator of what was wrong. He burned the CD-RW (he had to erase the whole disc first) and booted it up. Lo and behold! The whole text-mode screen was filled with bright green Z's on a pink background (thank goodness I didn't suggest 0xA5, or it would have been filled with *blinking* purple Ñ's on green). It seemed that the arena overlapped the text-mode video buffer.
Another lesson learned about writing operating systems: double check special hardware physical address mappings.
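For the record, the colors are exactly what the VGA text buffer at physical address 0xB8000 predicts: the buffer alternates character and attribute bytes, so a fill of 0x5A makes every character 'Z' (ASCII 0x5A) and every attribute 0x5A as well. A small sketch decoding the attribute byte (the struct and names are mine):

```cpp
#include <cassert>

// VGA text-mode attribute byte: bits 0-3 foreground color,
// bits 4-6 background color, bit 7 blink.
struct TextAttribute {
    unsigned foreground;
    unsigned background;
    bool blink;
};

TextAttribute DecodeAttribute(unsigned char attr) {
    TextAttribute a;
    a.foreground = attr & 0x0F;         // low nibble
    a.background = (attr >> 4) & 0x07;  // next three bits
    a.blink = (attr & 0x80) != 0;       // top bit
    return a;
}
```

0x5A decodes to bright green (0xA) on magenta (0x5), i.e. green Z's on a pink background; 0xA5 decodes to magenta on green with the blink bit set, and 0xA5 is 'Ñ' in code page 437, matching the near-miss above.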

Tuesday, July 14, 2009

Manifesto of a Very Hard To Find Memory Leak (To Be)

I'm implementing a hash table that will become the basis of a free-list memory allocator. The hash table uses singly linked lists for its buckets. In the list implementation, there is a RemoveAll function written like this:
void RemoveAll(Predicate& p, Consume& c)
{
    Node **result;
    for (result = &first_; *result != NULL; result = &(*result)->next_) {
        if (p(**result)) {
            Node *node = *result;
            *result = node->next_;
            node->next_ = NULL;
            c(node);

            if (*result == NULL)
                break;
        }
    }
}
Admittedly, the code was copied from a similar function called Remove(), which removes only the first node that satisfies the predicate. However, when it was modified to remove all nodes, an unintended side effect crept in: whenever a node is removed, the loop skips the node right after it, because unlinking already advances *result and the for-loop increment then advances it again. When I wrote a unit test, I inserted nodes 0 through 16 and removed all the even nodes, so the problem went undetected (each skipped node was an odd one that would have been kept anyway). However, when I used this function with a predicate that always returns true, I noticed that the list was not cleared properly.
Now, this function is used in the hash table to clear all the buckets. Each item in a bucket would be a free list for a certain size of object. When a free-list memory allocator uses this hash table, it would see a memory leak for certain object sizes, and the sizes would appear to be random.
Fortunately, when a list is destructed, I have it do nothing but assert that the list is empty. This helped detect the problem early. A unit test that makes the hash table clear all its entries brought the problem to the surface.
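For reference, one way to repair RemoveAll is to advance the cursor only when the current node is kept; unlinking a node already moves the next node into *result, so a removal must not be followed by an advance. This is a sketch with the List and Node types reconstructed from the post:

```cpp
#include <cstddef>

// Reconstructed minimal context (names follow the post).
struct Node {
    int value;
    Node *next_;
};

struct List {
    Node *first_;

    template <typename Predicate, typename Consume>
    void RemoveAll(Predicate& p, Consume& c)
    {
        Node **result = &first_;
        while (*result != NULL) {
            if (p(**result)) {
                Node *node = *result;
                *result = node->next_;  // unlink: *result is now the next node
                node->next_ = NULL;
                c(node);
                // do NOT advance result; the next node must be tested too
            } else {
                result = &(*result)->next_;
            }
        }
    }
};
```

A while-loop makes the invariant explicit: exactly one of "remove" or "advance" happens per iteration, so no node is ever skipped.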

Hash table using a variety of things for bucket

A hash table stores key-value mappings in an array of buckets. The bucket number for a particular key-value pair is determined by a hash of the key modulo the number of buckets. The idea is that if the average number of key-value pairs per bucket stays low, then we can achieve constant lookup time. One way to organize a bucket is as a singly linked list. This has the side effect that, if the buckets are overloaded, lookup time degrades to that of the singly linked list, which is O(n).
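A minimal sketch of this scheme (sizes, names, and the identity hash are illustrative assumptions):

```cpp
#include <cassert>
#include <cstddef>

enum { kNumBuckets = 7 };

struct Entry {
    int key;
    int value;
    Entry* next;
};

struct HashTable {
    Entry* buckets[kNumBuckets];
};

unsigned Hash(int key) { return (unsigned)key; }  // placeholder hash

void Insert(HashTable* t, Entry* e) {
    unsigned b = Hash(e->key) % kNumBuckets;  // hash modulo bucket count
    e->next = t->buckets[b];
    t->buckets[b] = e;
}

// Lookup walks one bucket only: O(1) on average, but O(n) if
// most entries pile into a single bucket.
Entry* Lookup(HashTable* t, int key) {
    for (Entry* e = t->buckets[Hash(key) % kNumBuckets]; e != NULL; e = e->next)
        if (e->key == key)
            return e;
    return NULL;
}
```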

An obvious improvement is to make each bucket a binary search tree, perhaps a splay tree (so that items looked up more often are quicker to find). This cuts the worst-case time under overload down to O(log n), while keeping the average lookup time O(1).

We can also nest hash tables, so that a bucket is implemented by another hash table. In this case, if the buckets are lightly loaded, we can let two or more buckets initially share an inner hash table; as buckets grow in size, they split. The sizes of nested hash tables must be relatively prime, or the inner tables would have a high collision rate and would not be effective. The hashing scheme can be organized in the fashion of radix sort, where the outer table uses one hash function on the most significant bits and the inner table uses another hash function on the least significant bits.
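The radix-style split can be sketched as follows; the bit split and the relatively prime sizes (7 and 11) are assumptions for illustration, not a worked-out design:

```cpp
#include <cassert>

// Outer and inner table sizes, chosen relatively prime so keys
// that collide in the outer table still spread out in the inner one.
enum { kOuter = 7, kInner = 11 };

unsigned OuterBucket(unsigned key) {
    return (key >> 16) % kOuter;    // hash the most significant bits
}

unsigned InnerBucket(unsigned key) {
    return (key & 0xFFFF) % kInner; // hash the least significant bits
}
```

Because the two levels consume disjoint bits of the key, keys that land in the same outer bucket are still distinguished by the inner table, in the same way radix sort refines one digit at a time.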