Distorting marching cubes output using vector fields

Using the plain Marching Cubes algorithm produces pretty dull and uniform-looking terrain. For “regular” modeling purposes this is fine, since the mesh is assigned proper vertex normals so the shader can shade it “realistically”. However, since I’m still bent on using flat shading and I hate “realism”, the terrain I get is boring as hell. So, I decided to deform it a little bit.

I was researching the best way to encode distortion into the marching cubes algorithm, and started fiddling around with 3D Perlin noise. I thought I could generate three 3D noise functions (one for each coordinate), and then make a vector out of those that would represent the distortion vector at that given point. However, since the algorithm for generating seamless 3D noise was a bit more complicated than I thought necessary, after some more poking around I thought “how about instead of 3D noise I use a set of vectors in space, and then when calculating the distortion vector at a given point, I interpolate the nearest neighbour vectors?”. Google pointed me right at vector fields. Perfect!

The code for calculating the interpolated vector at a given point inside the vector field is quite simple. The vector field internally uses a 3D array of Vector3 objects, so to interpolate at a given point we have to sample all eight vectors defined on the vertices of the imaginary cube encompassing that point. Since the point is closer to some vectors than others, we scale the “influence” each vector has on the resulting vector based on its distance from the given point. Here’s the code if you’re interested.
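For the curious, here’s a minimal sketch of that interpolation in Python (the actual project is C#/Unity; the names `VectorField` and `sample` are illustrative, not from the real codebase). The query point is located within its cell, and the eight corner vectors are blended pairwise along each axis – standard trilinear interpolation, which is exactly the distance-based “influence” scaling described above.

```python
from math import floor

def lerp(a, b, t):
    """Component-wise linear interpolation between two 3-element vectors."""
    return [a[i] + (b[i] - a[i]) * t for i in range(3)]

class VectorField:
    def __init__(self, vectors):
        # vectors[x][y][z] is a 3-element list; grid spacing is assumed to be 1.
        self.vectors = vectors

    def sample(self, x, y, z):
        x0, y0, z0 = int(floor(x)), int(floor(y)), int(floor(z))
        tx, ty, tz = x - x0, y - y0, z - z0
        v = self.vectors
        # Blend the eight corner vectors along x on the four cube edges...
        c00 = lerp(v[x0][y0][z0],     v[x0+1][y0][z0],     tx)
        c10 = lerp(v[x0][y0+1][z0],   v[x0+1][y0+1][z0],   tx)
        c01 = lerp(v[x0][y0][z0+1],   v[x0+1][y0][z0+1],   tx)
        c11 = lerp(v[x0][y0+1][z0+1], v[x0+1][y0+1][z0+1], tx)
        # ...then along y...
        c0 = lerp(c00, c10, ty)
        c1 = lerp(c01, c11, ty)
        # ...then along z.
        return lerp(c0, c1, tz)
```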

Here’s how a 4x4x4 vector field looks when interpolated using 20 samples along each axis:

A 4x4x4 vector field, interpolated.

Sweet. Now let’s put that to use. Since I can get the distortion vector at any coordinate I want, I can use it to offset the “boring” lattice coordinates that Marching Cubes gives us, and create terrain that looks more dynamic.
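The offsetting itself is essentially a one-liner per lattice point. A Python sketch (`field_sample` is a stand-in for the vector-field lookup, and the `strength` knob is my own illustrative addition, not an actual project parameter):

```python
def distorted_lattice_point(x, y, z, field_sample, strength=1.0):
    """Offset a regular lattice coordinate by the local distortion vector."""
    dx, dy, dz = field_sample(x, y, z)
    return (x + dx * strength, y + dy * strength, z + dz * strength)
```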

After plugging that into the terrain generator, here’s what I got:

distort5

distort4

distort3

distort2

distort1

Pretty sweet-looking terrain, compared to the boring-ass stuff I had earlier. There are still some issues with “ribbed” polygons, which you can see on the left side of the fourth screenshot. I don’t know where that’s coming from; I’ll investigate.

Another nice property of the distortion is that the LOD boundaries are now less apparent. You can try finding the LOD boundaries in the screenshots above, but here’s a wireframed version showing what it really looks like:

distortion_wf1

The closer you are though, the harder it is to see the boundary.

distortion_wf2

Since the player is looking at all this from an even more up-close point, and he doesn’t have the wireframe helping him, the boundary is pretty well hidden.

I’m rather pleased with how this came out. The next steps regarding distortion are to figure out a way to distort the distortion field! Seriously. I’d like the vector field’s vectors to not be evenly spaced across the field, but to be somehow more grouped together semi-randomly. That way I could have regions of greater detail (more “curvy”), and regions of more uniform looking terrain, which would benefit the visual diversity. As you can see, right now everything is kind of equally distorted, and I’d like to change that. Ideally, there’d be a way to control this “distortion distortion” through some user-facing editor params.

Level of detail handling

I’m doing level of detail (LOD) handling using a straightforward distance-based reduction of the number of polygons in each chunk. The picture describes this approach:

A terrain with two levels of detail (top view).

The green rings represent LOD boundaries, and can be configured from the editor. The player’s location is not visible here, but it’s in the middle of the green square (of course). Chunk resolution on each LOD can also be configured, although this can only be done explicitly for the first LOD; every subsequent LOD has a chunk resolution equal to the chunk resolution of the LOD level before it, divided by the “LOD reduction factor”. Currently it’s set to 2, so that each LOD has half the resolution of the LOD before it. The number of LOD levels is also configurable.
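Sketched in Python (the function name and the clamp to a minimum of 1 are my own illustrative choices, not actual editor params), the resolution schedule looks like this:

```python
def lod_resolutions(base_resolution, lod_levels, reduction_factor=2):
    """First LOD's resolution is explicit; each later LOD divides by the factor."""
    resolutions = []
    r = base_resolution
    for _ in range(lod_levels):
        resolutions.append(r)
        r = max(1, r // reduction_factor)  # never drop below 1 voxel per axis
    return resolutions
```

So with a base resolution of 20 and three LOD levels, the chunks run at resolutions 20, 10, and 5.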

If you’re familiar with volumetric LOD algorithms, you know about the “crack” problem that occurs on the boundary between two LODs:

Cracks occur on the LOD boundary.

I’m planning to solve this using (a heavily modified version of) Eric Lengyel’s Transvoxel algorithm. It’s going to take a couple of days of work, so I’m struggling to find the time to sit down and dig into the meat of the algorithm. No excuses though, it has to be done at some point.

Back to LOD handling. As the player moves across the map, some chunks and their LODs have to be recalculated. For example, imagine that the player moves from his current chunk to the chunk on the right in the picture above. A couple of things have to be done:

  • for each LOD level, all rightmost chunks have to have their LOD increased
  • for the lowest LOD, its rightmost chunks have to be loaded, since the player has moved to the right and those chunks do not yet exist
  • for each LOD level, all leftmost chunks have to have their LOD decreased
  • for the lowest LOD, its leftmost chunks have to be unloaded, since the player has moved to the right and those chunks are now outside of the LOD range

Initially, I did all this on the main Unity thread, which mostly worked. However, there was a noticeable hiccup (as you would imagine) when loading new chunks, so I decided to make it multithreaded. Currently, every LOD level has its own dedicated thread and processes all its chunks sequentially, which works nicely. I still have to perform all Unity API calls on the main thread, so I’m using a kind of producer–consumer architecture: the heavy calculations are performed on the worker threads, while the setting of actual mesh data happens on the main thread.
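The producer–consumer split can be sketched with Python’s standard thread-safe queue (the real thing is C# threads inside Unity; all names below are stand-ins, and the “mesh data” is just a placeholder tuple):

```python
import queue
import threading

results = queue.Queue()

def worker(chunk_coords):
    # Heavy per-chunk calculation happens here, off the main thread.
    for coord in chunk_coords:
        mesh_data = ("mesh", coord)  # stand-in for the generated mesh
        results.put(mesh_data)

def main_thread_pump(expected_count):
    # On the "main thread": drain results and apply the mesh data.
    # In Unity, this would be the only place allowed to touch Mesh objects.
    applied = []
    while len(applied) < expected_count:
        applied.append(results.get())
    return applied
```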

Inspiration, Part I

I wanted to make sporadic posts about the various things inspiring me to do this weird kind of project. Hopefully they will help paint a clearer picture about my goals and my general taste for things. Here goes.

I have one digital painting as my main reference for this entire project. There are dozens more in the “Inspiration” folder, but this one is it. Here it is:

"Waterfalls" - Matt Allsopp

I found it on CGHUB. Thank you Matt for this painting.

There are many things I like about this image, but the main driver for me is that this is how I’d like computer graphics to be. Stylized, artistic, expressionistic. Contemporary 3D artists are often concerned with the micro-detail – the texture of the object, the shader to use, the number of triangles on the model, etc. What I’d like to focus my time on in the coming months is experimenting with the macro-detail. Here’s what I mean by that.

I believe there is much to explore and learn about the relations between objects: their number, their placement, their color palette, their shape. I believe a simple (or not-too-complicated) rule can be found behind all these things, and that we can express most of them through formulas and algorithms.

Look at the above image. I claim that there can be found a formula for generating and placing the green mesh that represents the grass on those rocks at the bottom of the image. There can be found a formula which automatically generates a grass mesh of a given size, palette, and orientation and places it on those rocks. There can be found a formula which places the pine forest at the back of the image according to terrain parameters such as altitude, humidity, closeness to populated areas, and so on.

In addition to that, I claim that the micro-detail can be ignored if you look at the problem like I do. I don’t care if the grass isn’t perfectly rendered, each leaf bending perfectly in the wind. When enjoying Matt’s image, I ignore the micro-detail. I focus on the color palette, the shapes of individual elements, the placement of elements.

My explorations are focused on those things.

This brings me to my second big inspiration:

Love - a game by Eskil Steenberg

Eskil Steenberg’s Love is a graphical masterpiece. It has a very bold impressionistic style to it, and Eskil has pulled this entire project off on his own. I have the utmost respect for this man. You can watch his excellent (yet completely underrated) talk about the tech behind Love. Unfortunately this talk has only around 1000 views at the moment.

There’ll be more things I post in the “Inspiration” series.

Quad Trees

In my current implementation, the road generator performs a two-step process during each L-system iteration:

  • generation of map nodes and edges
  • checking of local conditions for each generated node or edge

The local conditions are nicely described in Parish & Mueller’s paper. At this stage, I’m working on implementing checks for road intersections (so that a crossroad can be generated) and the proximity of nodes to existing edges (so they can be “stitched” to the existing edge). This is described visually in Figure 8 of the paper.

It is, of course, intuitively clear that a brute-force check for a node’s neighbours within a given radius is far from optimal. In just a dozen or so iterations, the L-system spits out so many nodes that Unity starts crumbling down if I just walk through them in an O(n^2) fashion. So, I needed a lookup structure of some sort that would keep the locality information, so that neighbours can be fetched more easily.

Enter quad trees. They have exactly what I want, a fast drill-down to the exact neighbourhood of a given node. After implementing the class and adding each map node to it after spawning it from the L-system, I get something like this:

quad1

The quad tree is drawn in cyan; nodes and edges are red. You can see I still haven’t fixed that road creation bug – I really like the spiral. Notice how the quad tree localizes tightly packed nodes into tightly packed subtrees, while sparse areas are covered by larger trees.

Now, what I really like about my quad tree implementation is that the Add method actually returns an instance of the quad tree: if the node was added to the current tree, that tree is returned. If, however, a node is added outside the bounds of the current tree, that tree automatically creates a parent tree, adds itself as its subtree, and then returns the parent quad tree. This way the client can ignore the growth of the quad tree by simply always assigning the result of the Add operation back to its quad tree reference, and voila.
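Here’s a simplified sketch of that self-growing Add in Python (the real implementation is C# and also subdivides into four child trees; this sketch shows only the upward growth, and all names are illustrative):

```python
class QuadTree:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # lower-left corner + side length
        self.points = []
        self.children = []

    def contains(self, px, py):
        return (self.x <= px < self.x + self.size and
                self.y <= py < self.y + self.size)

    def add(self, px, py):
        if self.contains(px, py):
            self.points.append((px, py))
            return self  # point fit: same tree comes back
        # Grow towards the point: create a parent twice the size whose
        # corner is shifted so it covers both this tree and the point.
        nx = self.x - (self.size if px < self.x else 0)
        ny = self.y - (self.size if py < self.y else 0)
        parent = QuadTree(nx, ny, self.size * 2)
        parent.children.append(self)
        return parent.add(px, py)  # may recurse for far-away points
```

The client then just writes `tree = tree.add(x, y)` and never has to think about bounds.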

Here’s how it looks zoomed out after the tree has grown a bit:

quad2

I still haven’t implemented the fetching of a node’s neighbours; I’ll be working on that in my next hacking session. I know the most correct way would be to specify the radius around the given node for fetching its neighbours, but I think a more “blocky”, approximate solution will be just fine – once you locate the node inside a particular quad tree, just walk the hierarchy upwards until the current quad tree has an area that encapsulates the search circle (given by a node’s position and search radius). We’ll see if that approach is good enough.
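That upward walk could look something like this (a Python sketch; `parent`, `x`, `y`, and `size` are assumed fields of a hypothetical quad-tree node, using a lower-left-corner convention):

```python
def enclosing_tree(tree, cx, cy, radius):
    """Walk parent links until the tree's square contains the search circle."""
    def covers(t):
        return (t.x <= cx - radius and cx + radius <= t.x + t.size and
                t.y <= cy - radius and cy + radius <= t.y + t.size)
    while tree.parent is not None and not covers(tree):
        tree = tree.parent
    # Everything beneath this tree is then a (superset of the) candidate set.
    return tree
```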

Roads

the_spiral

I’m currently working on a road generator which will be integrated into Medjed’s terrain generator. What you see in the image above is a bug I accidentally created while working on the radial road rule. Beautiful, right?

I’m heavily mimicking the work Parish & Mueller did in their excellent (if too concise) paper Procedural Modeling of Cities. Their approach is a very detailed one and I’m iterating through different levels of complexity as I go. I started with a basic L-system with just two modules (road and branch), while their system is much more complex with more strict rules.

The idea is that at some stage the road generator will get integrated into Medjed’s density-based terrain generator, so that the generated roads actually influence the terrain around them. For example, if I define the space around the road to have a very large negative density, then any terrain that would get generated at that point would get cancelled out by the road’s density function. Think tunnels (for free!), think tight mountain roads that automagically carve themselves into the side of the mountain, etc. Good stuff.
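As a toy sketch of that density composition (every function and constant here is made up purely for illustration – the real density functions will be far richer), the final density is the terrain density plus a strongly negative contribution near the road, so surfaces (zero crossings of the density) get pushed out of the road’s volume:

```python
def terrain_density(x, y, z):
    """A trivial terrain: solid (positive) below z = 10, empty above."""
    return 1.0 - z * 0.1

def road_density(x, y, z, road_z=5.0, half_width=1.0, strength=100.0):
    """Carve a horizontal slab of very negative density around the road."""
    return -strength if abs(z - road_z) < half_width else 0.0

def combined_density(x, y, z):
    # The road's negative density cancels out any terrain at that point,
    # which is what yields free tunnels through a mountainside.
    return terrain_density(x, y, z) + road_density(x, y, z)
```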

I’ll post more goodies as I go.

A terrain based on a 3D density function

The terrain engine I’ve implemented thus far uses a three-dimensional density function to calculate its polygons. Density-based geometry generation is nothing new; in fact, my implementation uses the Marching Cubes algorithm, beautifully described in GPU Gems 3. That article is what sparked my interest in the first place.

chunks

However, the article describes a very low-level approach to creating voxels (using shaders), which are very small in rendering terms (up to five triangles). I wanted a more high-level implementation, which could take advantage of all the goodness Unity brings to the table (such as automagic collision detection when given a 3D mesh). I ended up using the majority of the lookup tables given in that article, but I rewrote the entire algorithm from scratch.

At the base of my terrain implementation sits a chunk:

chunk

A chunk is a collection of voxels, as described in the GPU Gems article. In the above image, I’m using a 20x20x20 chunk. In a naive implementation, this chunk would have 8000 voxels, and they would each in turn have a small mesh containing from zero to five triangles. This is exactly what I wanted to avoid, since I wanted the meshes to be as big as possible — Unity has solid support for handling meshes, and I wanted to take advantage of that. Using several thousand mesh objects just to display a small piece of the terrain was out of the question.

When generating a chunk, I’m sampling the density function at all 8 corners of each voxel in order to get triangle data needed to construct the given chunk’s mesh. So, for a chunk with NxNxN voxels, I’m sampling the density function (N+1)x(N+1)x(N+1) times in order to generate the mesh.
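That sampling step, sketched in Python (the triangulation via the Marching Cubes lookup tables is omitted, and all names here are illustrative): a chunk of NxNxN voxels needs (N+1)^3 density samples, one per lattice corner.

```python
def sample_chunk(density, n, origin=(0, 0, 0), scale=1.0):
    """Sample the density function at every lattice corner of an n^3-voxel chunk."""
    ox, oy, oz = origin
    samples = {}
    for i in range(n + 1):
        for j in range(n + 1):
            for k in range(n + 1):
                samples[(i, j, k)] = density(ox + i * scale,
                                             oy + j * scale,
                                             oz + k * scale)
    return samples  # (n+1)^3 values, fed to the triangulation step
```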

Note that “voxels” are entirely fictional here, I’m using them simply to illustrate the concept. The algorithm takes a 3D density function and spits out a display-ready Mesh object. It has no internal “voxel collection” or anything like that.

This process is repeated for all chunks in the currently visible part of the terrain, and you get a nice piece of 3D terrain, rendered using only math and no models:

terrain_block

I’m intentionally using flat shading, since that’s the art style I’m aiming for. Unfortunately, that means the meshes have no vertex reuse – every triangle has its own vertices, which greatly increases the size of the vertex array. I might find a better way to do flat shading in the future.
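The vertex duplication that flat shading forces can be sketched like this (Python for illustration; the real code builds Unity Mesh arrays): each triangle gets its own three vertices so it can carry a single face normal, which is why the vertex array grows to three times the triangle count.

```python
def flatten_triangles(triangles):
    """triangles: list of ((x,y,z), (x,y,z), (x,y,z)) tuples.
    Returns per-triangle-duplicated vertices and a flat index list."""
    vertices, indices = [], []
    for tri in triangles:
        for v in tri:
            indices.append(len(vertices))
            vertices.append(v)  # no sharing: duplicates are intentional
    return vertices, indices
```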

In the next article, I’ll write about how I’m handling the infiniteness of the terrain, using chunk recycling and level of detail (LOD) rendering.

About the project

medjed

I’m starting this blog mainly to document my progress on Medjed, a (temporarily-named) procedural terrain generator that I’m writing in C# using Unity. I have been playing with this concept on-and-off for a couple of months, and I hope this blog will make me more disciplined and motivated.

The aim of the project is to have a terrain engine built on top of Unity that can generate massive, beautiful natural phenomena, such as cliffs, gorges, caves, etc. The terrain will be entirely described in code, using formulas, algorithms, and an occasional lookup texture or two. I’m not planning on building any fancy tools for “drawing” the terrain. Ideally, no 3D models should ever be used.

In addition to the terrain generator, I’m planning on adding a city and building generator and eventually making some kind of “ruin exploration” mechanic. I’m a big fan of ruins.

The posts will probably alternate between technical descriptions of the algorithms I’m currently implementing and various inspirational material, which will help keep me on track and also let anyone interested know what style I’m aiming for.