Archive for the 'Graphics Developer' Category

Virtual Globe and Terrain Rendering

Our team has learned a lot while developing Insight3D and the 3D capabilities in STK. Designing a 3D engine that meets the precision and performance requirements for these types of applications is challenging, to say the least. It takes a carefully designed engine to do things like visualize immense amounts of imagery and terrain, and precisely render a massive world.

Deron and I realized that not much has been written about this type of 3D engine, at least not all in one place, so we decided to start writing a book on the topic: Virtual Globe and Terrain Rendering. This isn't a book on Insight3D or STK specifically but rather a best-practices guide for designing and implementing 3D engines for virtual globes, GIS, simulations, and the like - all things we've learned by working on Insight3D and STK. If you're interested in this kind of stuff, check out the book's blog as we continue to work on the manuscript and example code.

I should mention that our book is not directly affiliated with AGI. That said, AGI is an awesome employer: they are very encouraging about our book project and allowed us to mention it on this blog.

GPU Ray Casting of Virtual Globes

Exciting news - our poster GPU Ray Casting of Virtual Globes was accepted to SIGGRAPH 2010.

This was a collaborative effort between the aerospace guys on the DGL team and our 3D team. We hope these types of next-generation rendering algorithms make their way into Insight3D.

Short Abstract: Our work presents a GPU ray casting approach to rendering the ellipsoidal surface of virtual globes that provides an infinite level of geometric detail at frame rates competitive with traditional tessellation and rasterization approaches. This work has application in areas including globe rendering for geographic information systems and video games.
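The heart of the approach is a per-fragment ray-ellipsoid intersection. As a rough illustration of the math involved (a sketch, not the shader from the poster), dividing the ray by the ellipsoid's radii maps the ellipsoid to a unit sphere, which reduces the test to an ordinary quadratic:

```python
import numpy as np

def ray_ellipsoid(origin, direction, radii):
    """Return the near intersection parameter t where origin + t*direction
    hits the axis-aligned ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 = 1, or
    None on a miss (or when the ray starts inside)."""
    o = np.asarray(origin, dtype=float) / radii   # scale to unit-sphere space
    d = np.asarray(direction, dtype=float) / radii
    a = d @ d
    b = 2.0 * (o @ d)
    c = o @ o - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                               # ray misses the ellipsoid
    t = (-b - np.sqrt(disc)) / (2.0 * a)          # near root
    return t if t >= 0.0 else None

# WGS84-like radii in meters; a viewer 1,000 km above the equator on the x-axis
radii = np.array([6378137.0, 6378137.0, 6356752.3])
t = ray_ellipsoid([7378137.0, 0.0, 0.0], [-1.0, 0.0, 0.0], radii)
```

On the GPU this runs per fragment over the bounding polygon shown in the figure below, with the hit point used to compute depth and shading.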


A shaded, ray-cast globe correctly interacting with rasterized billboards. Upper right: viewport-aligned ellipsoid bounding polygon. Ray hits are cyan and misses are gray. Raster data from Natural Earth and icons from Yusuke Kamiyamane.

Trapezoidal Texture Projection with OpenGL

All surface images in Insight3D are placed on the Earth so that the edges of the image align with latitudinal and longitudinal lines and the top edge faces north. Several customers have asked if they could map an image where that is not the case - where they instead know the latitude and longitude of each corner.

Fig. 1 shows an image captured from the viewpoint of a simulated UAV camera. While the image itself is unaltered, it is not positioned correctly on the terrain: as mentioned, its edges are forced to align with latitudinal and longitudinal lines.

Fig. 1

In fig. 2, the image’s corners have been mapped from their original coordinates to their actual coordinates. One can tell that the camera took this image from the southwest.

Fig. 2

This capability has been added to Insight3D's Surface Mesh Primitive for our upcoming r8 release. I'll wait until r8 is out to discuss the new interfaces and how this method differs from projecting an image onto the terrain.

In this post, I’ll discuss how we use OpenGL to remap the image of fig. 1 to that of fig. 2.
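One way to think about the remapping (a sketch of the underlying math, not necessarily Insight3D's implementation) is as a projective transform between the image's unit texture square and the geographic quadrilateral, solved from the four corner correspondences. The corner coordinates below are made-up values for illustration:

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform H mapping four src points to
    four dst points (the standard direct linear transform). src and dst are
    sequences of four (x, y) pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the null-space direction: last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, p):
    """Apply H to a 2D point and do the projective divide."""
    x, y, w = H @ [p[0], p[1], 1.0]
    return x / w, y / w

# Unit-square texture corners and hypothetical lon/lat corners of the image.
tex = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
geo = [(-100.0, 30.0), (-98.0, 30.2), (-98.5, 31.5), (-100.5, 31.2)]
H = homography(tex, geo)
```

The inverse mapping gives per-vertex homogeneous texture coordinates, which is what makes the interpolation across the trapezoid perspective-correct rather than a naive bilinear stretch.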


SIGGRAPH 2009 Trip Report

Once again, SIGGRAPH was jam-packed with the latest in computer graphics. For the areas we're interested in, this year seemed a little more incremental than last year, probably because last year had big announcements, including Larrabee and OpenGL 3. Nonetheless, we were exposed to plenty of ideas that will help us keep the technology underlying Insight3D on the cutting edge. I'll hit the highlights in this post.

We tend to spend most of our time in SIGGRAPH courses. This year, my favorite was Advances in Real-Time Rendering. Wolfgang Engel gave a good talk on deferred shading and light pre-pass. Deferred shading showed up just about everywhere at SIGGRAPH this year, and rightfully so, since it is such a cool technique. Currently, Insight3D uses so-called forward shading to support a wide array of video cards.


Horizon Culling 2

In a previous entry, I described a simple method for horizon culling. That method determined whether one sphere, the occluder, occluded another sphere, the occludee - for example, whether a planet represented as a bounding sphere occluded a satellite represented as a bounding sphere. In this entry, I'll describe how to find whether a planet occludes part of itself. Normally, I write out all the math; however, given my time constraints and in the interest of finally writing another entry, I'll forgo that.
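For reference, the point-occludee form of that earlier sphere test can be sketched as follows (my reconstruction, so take the details with a grain of salt): a point is hidden when it lies both behind the occluder's horizon plane and inside the cone of lines from the viewer tangent to the occluder.

```python
import numpy as np

def point_occluded(viewer, point, center, radius):
    """True if `point` is hidden behind the sphere (center, radius) as seen
    from `viewer`. vh2 is the squared distance from the viewer to the
    horizon; the first test puts the point behind the horizon plane, the
    second puts it inside the tangent cone."""
    v = np.asarray(viewer, dtype=float)
    vc = np.asarray(center, dtype=float) - v   # viewer -> occluder center
    vp = np.asarray(point, dtype=float) - v    # viewer -> candidate point
    vh2 = vc @ vc - radius * radius            # squared horizon distance
    a = vp @ vc
    return bool(a > vh2 and a * a > vh2 * (vp @ vp))

# Viewer 10,000 km above an Earth-sized sphere: a point on the far side
# is culled, a point on the near side is not.
hidden = point_occluded([0.0, 0.0, 1.0e7], [0.0, 0.0, -7.0e6],
                        [0.0, 0.0, 0.0], 6.371e6)
```

A sphere occludee can then be handled conservatively by testing the point on it nearest the horizon.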

3D GIS applications invariably render the Earth and other planets. A planet is generally organized as a hierarchy of terrain tiles of varying levels of detail. As the viewer moves closer and closer to a particular part of a planet, higher and higher fidelity tiles are rendered. The Virtual Terrain Project has a nice list of terrain rendering algorithms. Frustum culling techniques are normally applied to tiles first, so that only tiles inside the view frustum are rendered. Next, horizon culling is applied.


Out-of-Core Rendering

Out-of-core (OOC) rendering algorithms render a model without the need to load the entire model into memory. A prime example of this is the terrain and imagery engine in Insight3D and STK. Most terrain data sets simply do not fit into main memory, so an OOC algorithm is called for. Since OOC algorithms have many uses beyond terrain, e.g., cities, I wrote my thesis on the topic:

Title:  Visibility Driven Out-of-Core HLOD Rendering

Abstract:  With advances in model acquisition and procedural modeling, geometric models can have billions of polygons and gigabytes of textures. Such model complexity continues to outpace the explosive growth of CPU and GPU processing power. Brute force rendering cannot achieve interactive frame rates. Even if these massive models could fit into video memory, current GPUs can only process 10-200 million triangles per second. Interactive massive model rendering requires techniques that are output-sensitive: performance is a function of the number of pixels rendered, not the size of the model. Such techniques are surveyed, including visibility culling, level of detail, and memory management. In addition, this work presents a new out-of-core rendering algorithm that is demonstrated with a variety of HLOD rendering algorithms.

Here is the video of my defense (minus the first 30 seconds or so, sorry):


Vector Vectoria

3D GIS applications often render 2D vector data, like roads, rivers, and country boundaries onto the landscape.  Techniques for combining 2D features with 3D terrain are generally either texture or geometry based.  Both have their issues.  In this blog entry, I'm going to give a short preview of Insight3D's approach.

A few years ago for STK, we developed a method using shadow volumes. A long, thin box is created that represents, for example, a road. The bottom of the box is below the terrain and the top is above it. The terrain is colored where the box intersects it, forming a line. Two papers were recently written that describe this method: Efficient and Accurate Rendering of Vector Data on Virtual Landscapes and Rendering 3D Vector Data using the Theory of Stencil Shadow Volumes.

While both papers describe how to determine the height of the box, they do not describe how to determine the width. If the width is kept static, the line will disappear as the camera moves away from it. In STK, we use a vertex shader to dynamically adjust the width based on the camera's field of view and distance from the line, keeping the line at a user-defined width in pixels.
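The scaling such a shader applies boils down to the meters-per-pixel at the line's distance. A CPU-side sketch of that computation, for a perspective projection (the actual shader logic may differ in details):

```python
import math

def world_width_for_pixels(pixel_width, distance, fov_y, viewport_height):
    """World-space width that covers `pixel_width` pixels at `distance`
    from the camera, given a vertical field of view `fov_y` (radians) and
    a viewport `viewport_height` pixels tall. The visible world-space
    height at that distance is 2*d*tan(fov/2), spread over the viewport."""
    meters_per_pixel = 2.0 * distance * math.tan(fov_y / 2.0) / viewport_height
    return pixel_width * meters_per_pixel

# A 3-pixel-wide line, 100 m away, 90-degree FOV, 1000-pixel-tall viewport.
width = world_width_for_pixels(3.0, 100.0, math.pi / 2.0, 1000)
```

Evaluating this per vertex on the GPU keeps the box geometry static while the on-screen width stays constant.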

This method has two issues. Lines break up from certain viewpoints, as seen in the left image of fig. 1.

Figure 1

Also, lines smear down the side of steep terrain, such as on mountainsides.  The left image of fig. 2 shows a line that is supposed to be one pixel wide.

Figure 2

We have been working on a new lines-on-terrain method for Insight3D to eliminate these issues. The right images of figs. 1 and 2 show the new method. The line is not broken and remains one pixel wide over the mountainside.

Research into this method is ongoing.  We hope to add the ability to create patterned lines, like dashed and dotted lines.  We'd like to apply this method to create altitude contour lines.  We are also working on a line level of detail system to allow Insight3D to render massive amounts of 2D vector data.

There you go - a short preview of current research we are doing.  Sometime in the future, we expect to describe the new method in detail.

Geometry Shader for Debugging Normals

To debug lighting, it is handy to visualize per-vertex normals. Traditionally, one would create a vertex buffer with a bunch of lines representing normals and render this after rendering the mesh itself. This works but it is not nearly as simple, or as cool, as using a geometry shader.

Model with normals

You can write a trivial geometry shader to visualize normals. The shader takes a triangle as input and outputs three lines that represent the normal at each vertex. In the first pass, render the mesh as you normally would. In the second pass, enable the geometry shader and render the mesh again. That's it. No extra vertex buffer, just an extra pass.
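The per-triangle work the geometry shader does can be sketched on the CPU (a sketch of the emitted geometry, not actual shader code): for each input triangle, emit one short line segment per vertex, starting at the vertex and extending along its normal.

```python
def normal_lines(vertices, normals, triangles, length=0.1):
    """For each triangle, build one line segment per vertex from the vertex
    position along its normal, scaled by `length` - the same output a debug
    geometry shader would emit. Returns a list of (start, end) point pairs."""
    lines = []
    for tri in triangles:
        for i in tri:
            px, py, pz = vertices[i]
            nx, ny, nz = normals[i]
            lines.append(((px, py, pz),
                          (px + nx * length, py + ny * length, pz + nz * length)))
    return lines

# One triangle in the xy-plane with all normals pointing up +z.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 3
segments = normal_lines(verts, norms, [(0, 1, 2)], length=0.5)
```

On the GPU the same loop body runs once per primitive, with no CPU-side buffer ever built.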


Precisions, Precisions

Rendering objects over large distances is common for geospatial programs, and when done incorrectly, the objects may visually jitter.  Here, an object is made up of any combination of triangles, lines, and points, like a 3D model.  The problem becomes more noticeable as the viewer nears the object.  The following video demonstrates this using STK.  (Note that I had to modify STK as it does not ordinarily exhibit jitter.)

Rotating about the space shuttle from far away, there is no jitter.  After zooming in, the jitter is readily apparent.  In this blog entry, I'll discuss the cause of this problem and the solutions used in Point Break and STK.
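To see where the jitter comes from, consider storing Earth-scale coordinates in 32-bit floats: near 6,378 km the spacing between representable float32 values is half a meter, so centimeter-scale motion simply vanishes. Rendering relative to a nearby center is one standard fix (a sketch of the idea, not necessarily the exact scheme used in STK):

```python
import numpy as np

# A vertex on the Earth's surface (meters, double precision) moved 1 cm.
vertex = np.array([6378137.0, 0.0, 0.0])
moved = vertex + np.array([0.01, 0.0, 0.0])

# Naive: hand absolute positions to the GPU as 32-bit floats.
a = vertex.astype(np.float32)
b = moved.astype(np.float32)
naive_step = float(b[0] - a[0])      # the 1 cm step is lost entirely

# Relative-to-center: subtract a nearby center in double precision on the
# CPU, then convert; the small remainders survive the float32 conversion,
# and the center is re-added on the GPU via the modelview translation.
center = np.array([6378137.0, 0.0, 0.0])
ra = (vertex - center).astype(np.float32)
rb = (moved - center).astype(np.float32)
relative_step = float(rb[0] - ra[0])  # ~0.01, the motion survives
```

The jitter in the video is exactly this quantization: nearby vertices snapping independently to the half-meter float32 grid as the view changes.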


Rendering Text Fast

Our new text rendering code is 5x faster (in a bad case) than our STK code - and the new code uses the same algorithm! How's that work? Well, we are using the same algorithm, but the implementation is vastly different. In this post, I'll describe the new implementation, which offloads work from the CPU to a vertex shader running on the GPU, enabling the use of static vertex buffer objects.
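One way to expand glyph quads on the GPU (a sketch of the general technique, not our exact shader) is to store each quad corner's pixel offset in a static vertex buffer and apply it after projecting the glyph's anchor point. Pixels convert to normalized device coordinates as 2/viewport_size, and the offset is pre-multiplied by w so the later perspective divide cancels it:

```python
def screen_offset_in_clip(clip_pos, pixel_offset, viewport):
    """Apply a per-vertex pixel offset to a projected anchor point, as a
    vertex shader could do to expand a text quad. clip_pos is the anchor
    after the model-view-projection transform, (x, y, z, w); pixel_offset
    is the quad corner's offset in pixels; viewport is (width, height)."""
    x, y, z, w = clip_pos
    dx = 2.0 * pixel_offset[0] / viewport[0] * w   # pixels -> NDC, times w
    dy = 2.0 * pixel_offset[1] / viewport[1] * w
    return (x + dx, y + dy, z, w)

# An anchor at the view center (w = 2), corner offset (100, 50) px,
# 1000x500 viewport: after the divide by w, the corner lands 0.2 NDC
# to the right and 0.2 NDC up.
corner = screen_offset_in_clip((0.0, 0.0, 0.0, 2.0), (100.0, 50.0), (1000, 500))
```

Because the offsets never change, the vertex buffer can be built once and left on the GPU, which is where the speedup comes from.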