Deletion in Bounding Volume Hierarchies

Bounding volume hierarchies (BVHs) were popularized in the ray tracing community by Goldsmith and Salmon in their 1987 paper Automatic Creation of Object Hierarchies for Ray Tracing. In 1988, Eric Haines followed with an article in the Ray Tracing News providing an excellent explanation of creating near-optimal bounding volume hierarchies from a list of objects, based on Goldsmith and Salmon's paper.

Two decades later, there is still significant interest in BVHs. In 2006, Lauterbach et al. published RT-DEFORM: Interactive Ray Tracing of Dynamic Scenes using BVHs and in 2007, Eisemann et al. published Automatic Creation of Object Hierarchies for Ray Tracing of Dynamic Scenes. Both of these papers describe using BVHs to reduce the number of intersections calculated per ray for ray tracing dynamic scenes, i.e. scenes with moving objects.

Point Break doesn't care much about ray tracing, but it cares a lot about efficiently rendering dynamic scenes, which is what led me to these ray tracing papers. Besides being useful for ray tracing, BVHs are useful for hierarchical view frustum culling. The intersection algorithm is basically the same: just replace the ray/sphere test with a frustum/sphere test. Primitives use BVHs for this purpose.

Although there is a lot of literature on inserting objects into a BVH and updating objects in a BVH, there isn't much on deleting objects from a BVH. This may be because some approaches to the problem are straightforward or because many applications don't need to delete objects from BVHs; they just delete the whole BVH. Point Break needs to delete objects from BVHs. In some cases, deletion from a BVH is done in response to a GUI action such as hiding latitude/longitude lines or turning off the ground track or a model representing a satellite.

Deletion Algorithm

I will present a method to delete objects from a BVH that takes advantage of the objects' spatial locality and typically does not require an O(n) tree traversal to clean up afterwards. Deletion is expected log(n); with good spatial locality, many deletions will be constant time. Deletions are batched and cleaned in a pass whose run-time is proportional to the number of dirty nodes. Although the tree is cleaned up (bounding volumes are recomputed, and interior nodes are deleted or merged), it is still susceptible to becoming less optimal over time. Both Lauterbach and Eisemann present criteria to determine when a BVH, or one of its sub-trees, has degraded to the point that it should be rebuilt.

Similar to Eric Haines' article, interior nodes contain pointers to their children, and only leaf nodes contain pointers to objects. In addition, each node contains a pointer to its parent as well as a boolean value indicating if it is dirty. As you will see, leaf nodes do not need to maintain this dirty flag. Objects also contain pointers back to their leaf nodes. As Eisemann mentions, a big advantage of BVHs is that every object is contained in just one leaf node, which makes a back pointer like this possible. Maintaining these extra pointers adds only slight bookkeeping to the insertion algorithm.
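A minimal sketch of this node layout in C++; the type and member names are illustrative, not Point Break's actual types:

```cpp
#include <vector>

struct Vector { double x, y, z; };
struct Sphere { Vector center; double radius; };

struct Object;  // forward declaration for the back pointer

// Interior nodes hold child pointers; leaf nodes hold an object.
// Every node points to its parent, and only interior nodes make
// use of the dirty flag.
struct Node {
    Node* parent = nullptr;       // nullptr for the root
    std::vector<Node*> children;  // empty for leaf nodes
    Object* object = nullptr;     // non-null only for leaf nodes
    Sphere bound = {};            // bounding volume of this sub-tree
    bool dirty = false;           // only meaningful for interior nodes

    bool IsLeaf() const { return object != nullptr; }
};

// Each object points back to its single leaf node, which is what
// allows deletion to find the leaf in constant time.
struct Object {
    Node* node = nullptr;
};
```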

BVH Tree

A trivial deletion algorithm is: given an object, retrieve its leaf node, remove the leaf node from its parent, and delete it. This can quickly lead to a tree with unnecessarily large bounding volumes and interior nodes with no children. One possible solution is, after removing the leaf node, to walk up the parent pointers, recomputing each node's bounding volume and deleting nodes with no children. This increases deletion from constant time to expected log(n) time, and deleting objects near each other causes parent bounding volumes to be recomputed multiple times. To remedy this and achieve constant-time deletion, instead of walking up parent pointers, just mark the parent as dirty. After a batch of objects is deleted, the tree can be cleaned with an O(n) traversal. If only a single object is removed, this slows deletion down to O(n); but if many objects are deleted, the O(n) traversal is amortized over the cost of each deletion.

Given these solutions, it's easy to construct an algorithm that takes advantage of the spatial locality of deleted objects to achieve good performance. As usual, deletion is done in two phases.

  • Given an object to delete, delete its leaf node and remove the leaf node from the leaf's parent. If the leaf node's bounding volume was flush against its parent's bounding volume, or the parent now has fewer than two children, mark the parent as dirty. Continue walking up parent pointers, marking them dirty, until you find a node that is already dirty or reach the root node (which should be marked dirty as well).
  • Once per frame, hopefully after several objects near each other were deleted, clean the tree. Start at the root and only follow children that are marked as dirty. As the recursion unwinds, recompute the nodes' bounding volumes bottom-up, delete nodes with no children, and merge nodes with only one child.
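The two phases above can be sketched in C++. This is a simplified illustration, not the article's implementation: a 1-D [min, max] interval stands in for the bounding sphere to keep the bound math short, and the delete phase marks the parent dirty unconditionally instead of first checking whether the leaf's bound was flush against the parent's.

```cpp
#include <algorithm>
#include <vector>

// Minimal node: a 1-D interval stands in for the bounding sphere.
struct Node {
    Node* parent = nullptr;
    std::vector<Node*> children;  // empty for leaf nodes
    double min = 0.0, max = 0.0;
    bool dirty = false;

    bool IsLeaf() const { return children.empty(); }
};

// Phase 1: remove the leaf and mark its ancestors dirty, stopping at
// the first ancestor that is already dirty.
void Delete(Node* leaf) {
    Node* parent = leaf->parent;  // assumes the leaf is not the root
    auto& siblings = parent->children;
    siblings.erase(std::find(siblings.begin(), siblings.end(), leaf));
    delete leaf;
    for (Node* n = parent; n != nullptr && !n->dirty; n = n->parent)
        n->dirty = true;
}

// Phase 2: recurse into dirty sub-trees only. As the recursion
// unwinds, delete childless nodes, merge single-child nodes, and
// recompute bounds. Returns the surviving node (possibly a different
// one), or nullptr if the whole sub-tree was deleted.
Node* Clean(Node* node) {
    if (node->IsLeaf() || !node->dirty)
        return node;
    std::vector<Node*> kept;
    for (Node* child : node->children)
        if (Node* c = Clean(child))
            kept.push_back(c);
    if (kept.empty()) {            // no children left: delete this node
        delete node;
        return nullptr;
    }
    if (kept.size() == 1) {        // one child left: merge it upward
        kept[0]->parent = node->parent;
        delete node;
        return kept[0];
    }
    node->children = kept;
    node->min = kept[0]->min;      // recompute bounds from children
    node->max = kept[0]->max;
    for (Node* c : kept) {
        node->min = std::min(node->min, c->min);
        node->max = std::max(node->max, c->max);
    }
    node->dirty = false;
    return node;
}
```

Since `Clean` may merge the root away, a caller should reassign, i.e. `root = Clean(root);`.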

The deletion operation is expected log(n), but if a deleted object is in a sub-tree where another object was already deleted, the parent pointers are not walked all the way to the root, allowing these deletions to run in near constant time. Cleaning the tree is still O(n) if a wide range of objects were deleted. If all the deleted objects were in the same area, cleaning will be closer to log(n).


This algorithm is best illustrated with an example. Suppose we create a BVH of spheres with 5 objects that have centers along the x axis. This is shown in the following code and image.

   object0->node = bvh.Insert(Sphere(Vector( 0, 0, 0), 2), object0);
   object1->node = bvh.Insert(Sphere(Vector( 4, 0, 0), 1), object1);
   object2->node = bvh.Insert(Sphere(Vector(-3, 0, 0), 1), object2);
   object3->node = bvh.Insert(Sphere(Vector(-1, 0, 0), 1), object3);
   object4->node = bvh.Insert(Sphere(Vector( 1, 0, 0), 1), object4);

BVH:  Before Deletes

Parent pointers and arrows are not shown for clarity. Now suppose the three deepest objects, whose leaf nodes all share the same parent, are deleted from the tree and then the tree is cleaned.


After the first delete, object0->node is removed from the tree and its parents up to the root are marked as dirty as shown below.

BVH:  After first delete

Deletion of object3->node and object4->node do not require walking parent pointers since they were marked dirty by the first deletion. After the three deletions but before the cleaning, the tree looks like:

BVH:  After all deletes

When cleaning occurs, the root's right sub-tree is not touched since it is not dirty. This is both the algorithm's strength and its weakness, since maintaining an optimal tree may involve manipulating nodes in both sub-trees. During cleaning, the dirty nodes' bounding volumes are recomputed and the unnecessary interior nodes are deleted.

BVH:  After Cleaning


If you've implemented insertion into a BVH, this deletion algorithm will be easy to implement. I've left a few details to your imagination. Can (or should) you allow interleaving of insertions and deletions? Do you want to rely on a heuristic tree search for insertion when there are dirty nodes in the tree? Keep in mind that for tree cleaning to work, the following invariant must hold: if a node is dirty, all of its parents up to the root are also dirty.
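In a debug build, this invariant can be checked with a short recursive walk. This is a sketch against a hypothetical node layout, not code from the article:

```cpp
#include <vector>

struct Node {
    Node* parent = nullptr;
    std::vector<Node*> children;  // empty for leaf nodes
    bool dirty = false;
};

// Verifies that no clean node has a dirty child anywhere in the tree,
// i.e. every dirty node's ancestors up to the root are also dirty.
// The cleaning pass relies on this: it only recurses into dirty nodes.
bool DirtyInvariantHolds(const Node* node) {
    for (const Node* child : node->children) {
        if (child->dirty && !node->dirty)
            return false;  // a dirty child hidden under a clean parent
        if (!DirtyInvariantHolds(child))
            return false;
    }
    return true;
}
```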

The best case for deletion is the first delete takes log(n) and additional deletes are constant time; then, updating the dirty nodes is log(n). The worst case is log(n) for each deletion (although once all nodes are marked dirty, deletion will be constant time) and O(n) for updating the dirty nodes. In practice, I don't expect either extreme is common, but given the spatial locality of many deletions, I would expect the run-time to be closer to the best case.

5 Responses to “Deletion in Bounding Volume Hierarchies”

  1. qz0rz

    I’m guessing that 99.99% of the time, nothing will change between two consecutive frames… so it sounds def necessary to have some dirty flag–even if it’s global.

    if you’re thinking about having thousands+ of primitives, ur idea sounds like a good one…

    here’s an idea… I’m guessing bounding volume creation is expensive… so if someone deletes stuff, keep rendering using the same bounding volumes (this will cause overdraw + overcomputation, I know), and defer compacting the bounding volumes to another thread and sync up the results once that worker thread is finished…. this should minimize the number of hiccups, if any

    well, if there aren’t any hiccups in the first place, then it’s probably not even worth the effort, but it would be nice to have worker threads carrying out jobs asynchronously.

  2. Cozzi

    Tree creation is fairly expensive: expected nlog(n). The good news is each step requires very little computation. It’s mostly multiplications and additions; no square roots or transcendental functions.

    I like your threading idea. It may also be useful for rebuilding parts of the tree due to moving objects degrading the quality of the tree. I’ll let you know if/when I add threading.

  3. qz0rz

    that’s a good point about the dynamic objects… if something moves out of its node’s bounding volume, then keep bumping it up the tree until it’s within a valid bounding volume… and have a separate thread pushing objects back down the tree asynchronously

    the tricky part is synchronization between the threads… the main thread needs (at least) a read lock while it renders, etc. the worker thread will need at least part of the tree locked for read while it does its computation and also for write when it wants to commit its changes. but you don’t want your worker thread to be locking the tree very often, because that’ll stall rendering on the main thread… maybe the worker thread should do all its computation and commit changes (quickly) every so often?

    actually, if the tree isn’t changing very often, then this might not even be worth the headache…

  4. Cozzi

    Handling dynamic objects by walking up parent pointers to find an insertion node is what Eisemann’s paper, mentioned in the blog’s intro, refers to as “Dynamic Goldsmith and Salmon.” I currently have this implemented in the main thread. When you consider the way most objects move, they may not go far enough to have to walk up the tree more than one or two nodes, so it’s hard to justify a worker thread. As you said, we want to avoid excessive locking.

    Doing this re-insertion over and over again, though, will lead to a less optimal tree. To handle this, Eisemann suggests computing a quality measurement for each node: the surface area of a node divided by the number of objects in the sub-tree. When the quality of a node degrades compared to its initial value, the BVH is readjusted. Section 3.1 in the paper has the details. This readjusting may be a good candidate for a worker thread.

    Also, the Lauterbach paper, mentioned in the blog, has some suggestions for threading in section 4.2.

  5. qz0rz

    fancy schmantzy!
