PlanetCell on Blender Cycles – First WIP

 

An ordinary render of my procedurally-shaded planet. This took less than 3 minutes and 5 seconds to render.

For a long time, a very long time, I’ve been impressed by the abilities of the Flaming Pear LunarCell plugin for Photoshop. It quickly generates planetary surfaces that are, if not realistic, at least attractive and plausible to cursory examination. Mostly just attractive rather than plausible, but that’s not bad.

There are things, though, that I think could be improved. It’s not cheap, to start with. It’s not terribly controllable. I’d like to be able to manually guide creation with input masks. I also want the option of deeper control of the noise generation. I’d like to be able to toggle ice cover. In any case, the sea ice generation has been broken in the last several versions. Perhaps the biggest thing I’d like to get away from is Photoshop.

I like Photoshop perfectly well. Very much, in fact. But for the purposes of this blog, I’d like to stick to applications that could be available to all or most of my readers. This isn’t the “Look at All the Cool Junk I Have that You Don’t” show, after all. Optimally, I want my readers to be able to benefit from my tutorials with no barriers to entry higher than downloading a few free apps from the internet.

This is looking to be one of my successes at this. In the first iteration, I needed Blender for the noise rendering (the major point of the exercise), Photoshop (!) to flip and convert the RGB OpenEXR height bakes to greyscale TIFF so that I could feed them into GDAL in order to convert to BT format so that I could use Wilbur to create RGB-separated 24-bit color images that I had to feed into POV-Ray to flatten them into equirectangular! Whew. Try saying all that in one breath. A whole lot of work. A whole lot of bouncing around from one app to the next, and at least one expensive, not generally available app in the pipeline. After all that, you still had to use Wilbur to convert the POV-Ray color-hash back into a usable GIS-ready elevation format.

The current iteration does all the hard work of noise generation and rendering of equirectangular maps in one (so far very sloow) procedure inside of Blender (well, one per image). Once that’s out of the way, the color imagery and any 8-bit masks are good to go, out of the box. Just add worldfiles. For the heightmaps you’ll need something like FDRTools to convert the RGB OpenEXR to RGB TIFF (FDRTools doesn’t seem to have any functionality to convert files to grayscale) and GDAL to convert the TIFF into a usable 32-bit GIS format. Then you’re ready to play with your data in QGIS, GRASS, Wilbur, or whatever floats your boat.

Now, this is definitely a work in progress. It’s slow, I think at least in part ’cause my node tree wasn’t all that well thought out. I was kind of floundering as to how to do this to start with, so what work I’ve done is a bit of a mishmash of half-completed ideas. So this’ll probably need a full-on refactoring before I’m done, but (at least until the Blender Cycles engine gives me a way to save full-float bumpmaps directly) it should be adequate for my purposes. Hopefully you will find it useful as well.

Rather than walking you through the creation of this thing, I’m going to post the blend file so you can examine it at leisure. I’ll try to explain the function of the parts, what I was trying to do in building them that way and how they can be used.

First of all, this thing is overly complex and in serious need of a full refactoring. The main idea in the organization of node groups was to divide operations into small “labs” for such things as heightfield generation, land and sea texturing, and distribution of land and sea elements. Ultimately, I’d like to reduce the inputs on each group to those that would be most commonly used, with less often accessed parts being well labelled inside the various group nodes. I’m not there yet.

So let’s go over what I do have, starting at the left side of the network. The Texture Coordinate node provides “Generated” coordinates to all the basic position-dependent nodes (mostly noise textures). This ensures that the coordinates are always 0..1 on x:y:z, regardless of how you move or rotate the object. Important for animation, unless you want the texture to change as the planet moves in its orbit. Very off-putting when that happens…

Next comes the Height Group. This would be my Height Control Lab, the bread and butter of the whole concept. For the core of PlanetCell this hasn’t gotten anywhere near enough attention as yet. In my defense, I wanted to assemble a generally functional whole and then go back to improve the details later.

Let’s start by looking at the control inputs. We’ll do this for all the nodes and groups.

Type controls the kind of Musgrave fractal used. This can have the values “Multifractal,” “fBM,” “Hybrid Multifractal,” “Ridged Multifractal,” or “Hetero Terrain.” These control the appearance of the fractal function. The fBM, or fractional Brownian Motion, type is largely homogeneous, while the others are, in various ways, more heterogeneous. For manual editing work, I like a homogeneous noise for its controllability, but as the generator for a planetary landscape, heterogeneous is usually the way to go in my opinion.

Basis controls the noise basis for the Musgrave fractal used. The possible values are “cell,” “perlin,” “uperlin,” “simplex,” “usimplex,” or “gabor.” Perlin and uperlin, which can also be referred to as “snoise” and “noise” respectively, are, as the names imply, Perlin noise functions. Simplex and usimplex are, similarly, Perlin simplex noise functions, a somewhat faster and, in my opinion, somewhat more attractive noise. Cell noise is simply a discrete grid of random values, rather like the Add Noise filter common to many painting programs.

There is also the option to use a Gabor noise basis. This is significantly slower than the other basis functions, and, because of the way the Musgrave node was written, the various Gabor control parameters are not available. I’m hoping eventually to be able to add various Voronoi-based Worley basis functions as options, but that will require a complete rewrite of the OSL Musgrave node. Definitely a lower priority.

Musgrave generally works best with a basis that provides values in the range -1.0..1.0, so it is better to use a perlin, simplex, cell, or gabor basis than the unsigned uperlin or usimplex basis. The unsigned basis functions remain as options to provide flexibility, and because the programmer was too lazy to bother filtering them out. Why go to the extra effort to remove flexibility that does no harm? Play around; you might get good results, but understand it’ll be harder to get a decent image with the unsigned basis functions. A good guide to this sort of thing is always Ken Musgrave himself.

Dimension is the Hurst exponent, commonly given as H, of the Musgrave fractal. In short, this controls how rough the Musgrave fractal is. A value of 1.0 will give a fairly smooth fractal and 0.0 will give a very rough effect.

Lacunarity is the frequency gap between each octave of the Musgrave fractal (if it seems like most of the inputs in this group control the fractal, then you’ve been paying attention). Typically somewhere around 2.0 is a good value. Higher values will tend to roughen up with fewer octaves (and thus render faster) than smaller values. I suspect that lower values will produce fewer artifacts, but I can’t prove that.

Detail is the number of octaves of compounded noise in the fractal. There is a hard limit of no more than 22.0 octaves. The more octaves you use, the slower the rendering. From the Musgrave article linked above, more than about log2(screen resolution) - 2 octaves are wasted. So for a 4096×2048 map, 10 octaves should be altogether sufficient; add one octave for each doubling of resolution or subtract one for each halving.
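If you want that rule of thumb spelled out, here’s a trivial sketch of the arithmetic (plain Python, function name my own invention):

import math

def recommended_octaves(map_width_px):
    # Musgrave's guideline: octaves beyond log2(resolution) - 2
    # add detail finer than a pixel and are wasted.
    return math.log2(map_width_px) - 2

print(recommended_octaves(4096))   # 10.0 -- enough for a 4096x2048 map
print(recommended_octaves(8192))   # 11.0 -- one more octave per doubling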

Offset and gain are still a bit of a mystery to me. Multifractal and fBM don’t use them. Hybrid Multifractal and Ridged Multifractal use both. Hetero Terrain only uses offset. Just play around with them…

Displace is a vector that allows you to move the center of the fractal in 3d phase space. This allows you to change the character of the fractal without reseeding (which is good, because you can’t reseed). Every time you want a different world, just change the displacement values.

Scale controls the frequency of the noise. Smaller values lead to larger features, larger values lead to smaller features. Because scale is a 3d vector, you can do special effects like using a larger value of scale in the z direction to compress features in the N/S direction. That could be useful for creating a cloud cover map. Typically, the x and y scales should be the same unless you are going for some odd effects… A scale of somewhere around 2.0 in all dimensions is often good for our purposes.

Vector is basically the position vector of each point on the surface of the object. For most purposes, just leave that on the Generated texture coordinates.

Time takes advantage of the fact that we’re using four dimensional noise. If you want to animate the surface of the planet, you can connect that to a driver. I haven’t studied that yet, so you’re on your own…
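That said, the general mechanism for attaching a driver from Python looks something like the following. This is an untested sketch on my part, and the material and node names are placeholders for whatever yours are actually called:

import bpy

mat = bpy.data.materials["PlanetCell"]     # your planet material (assumed name)
height = mat.node_tree.nodes["Height"]     # the Height group node (assumed name)

# Attach a driver to the Time socket and advance it over the timeline.
fcurve = height.inputs["Time"].driver_add("default_value")
fcurve.driver.expression = "frame / 250"   # one full unit of Time per 250 frames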

Exponent, intensity and final offset were early experiments of mine. I’m not sure how useful they may prove. Exponent raises the raw elevation values to a power (by default 1.0), intensity multiplies the powered value by a factor (default of 1.0) and final offset adds a value to that (default of 0.0).
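In other words, the three inputs amount to a simple post-transform on the raw fractal value. As a sketch of the arithmetic (my own illustration, not code from the blend file):

def height_post(height, exponent=1.0, intensity=1.0, final_offset=0.0):
    # With the defaults, the raw elevation passes through unchanged.
    return (height ** exponent) * intensity + final_offset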

The internals of this group are pretty well exposed, except for a couple of hardwired nodes intended to map the output value from -1..1 (theoretically) to 0..1. I may change some boneheaded parts in the future, but for the casual user there really isn’t much to fiddle with under the hood.

The next node is the Land Shader group, which I essentially copied from the Blender Stack Exchange. I’m not using it at present, but in the future I expect to use something very like it to add climate zones (tropical, desert, temperate, land ice, sea ice, etc.). Ultimately, I’ll need to add some sort of noise effect to break up the spatial zones a bit. This is here mainly to give some idea of where I’m going with this.

The Land Coloring and Sea Coloring groups are simply noise-based textures for, as the names imply, the land and the sea. The controls on these are kinda funky, and ultimately I’d like to have similar (but less funky) control groups for multiple climate types. At present, these things are way overcomplicated. I am not entirely unhappy with the land texture, though, and may poach it for a desert shader (maybe). Parts of the sea shader could prove adaptable to a cloud shader when I add an atmosphere in the future.

The Land/Sea Mix group mostly exists to hide the rather spaghetti-like graph I used to control the sea level and distribute the land and sea shader effects accordingly.

The Bump, Diffuse BSDF and Material Output nodes are standard and self-explanatory. The isolated Image Texture node is present to allow for baking textures to UV. For the color imagery, you can simply save to PNG and you’re good to go.

For the bumpmaps, things get a bit more complicated. Since I can’t output directly to grayscale float, I need to use a Color Ramp to create a linear grayscale RGB for output. Since these clamp all inputs outside of the range 0..1, I need to rescale my heights to that range. To prepare for bumpmap output, connect either the Height output from the Height group or the Flatsea Height output from Land/Sea Mix to the bottom Value input of Set Bump Min (Add). Connect the Color output from Test Bump to the Color input of Diffuse BSDF and disconnect Bump from the Normal input of Diffuse BSDF.

Make sure the 3D View is visible and set to Camera View, with Inner Camera set as the camera in Scene properties. Incrementally reduce the value in the upper Value input of Set Bump Min (Add) until some blue appears in the 3D View, then raise it just enough to reduce the blue to a few points (for Flatsea Heights, the entire ocean might turn blue). Once you’ve got the lower limit set to your satisfaction, begin raising the value in the lower input of Set Bump Max (Multiply) until some red appears on the planet surface, and then step it back by small increments until no more than a few isolated points are red. Once you are satisfied with the results, disconnect Test Bump from the Diffuse BSDF node and connect the Color output of Bump Color to the Color input of the Diffuse BSDF. Now you can render your chosen bumpmap or heightfield. For best results, save the bumpmap to OpenEXR as full float.
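For the curious, those two nodes amount to nothing more than a linear rescale ahead of the Color Ramp’s clamp. Numerically (again my own illustration, not code from the blend file):

def rescale_height(height, bump_min_add, bump_max_mult):
    # Set Bump Min (Add) shifts the lowest elevation up to about 0.0;
    # Set Bump Max (Multiply) scales the highest down to about 1.0.
    v = (height + bump_min_add) * bump_max_mult
    return min(max(v, 0.0), 1.0)   # the Color Ramp clamps to 0..1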

The color imagery, as I’ve said, is ready to go out of the box. Just add a pgw worldfile and Robert is one of your parents’ brothers. Because OpenEXR seems to be essentially unheard of in GIS circles, you need to convert the heightmap into a TIFF, then you can feed the TIFF into GDAL to make it into something more GIS-ey. Ordinarily, if I were using Photoshop to do the conversion, I’d also reduce the image to a single-channel greyscale. Because I’m trying to get away from expensive software of limited availability, I’m using FDRTools instead. So far as I can discern, FDRTools doesn’t have an option for greyscale output, and GDAL doesn’t do color. At least not directly. Fortunately, GDAL has the -b option to choose a particular channel or “band” of the input file. The bands are numbered from 1, and we know we have three identical bands in this file, so, to convert to the Virtual Terrain Project’s BT format, the command looks like this:

gdal_translate -b 1 -of BT [Location_of_input_file]/[Name_of_input_file].tif [Location_of_output_file]/[Name_of_output_file].bt
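As an aside, if you’d rather embed the georeferencing in the file itself instead of writing a worldfile, gdal_translate can assign it directly with -a_srs and -a_ullr. Something like this should work for a whole-planet equirectangular map (untested on my part, and it assumes plain WGS84 latitude/longitude, which is my assumption rather than anything the pipeline requires):

gdal_translate -b 1 -a_srs EPSG:4326 -a_ullr -180 90 180 -90 [Name_of_input_file].tif [Name_of_output_file].tif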

Now you can manipulate this to your heart’s content with a variety of GIS tools.
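And if you do go the worldfile route for the color imagery, a pgw is just six lines of plain text: pixel width, two rotation terms, negative pixel height, then the longitude and latitude of the center of the upper-left pixel. For a 4096×2048 whole-planet map in degrees (360/4096 = 0.087890625 degrees per pixel), it would look like this:

0.087890625
0.0
0.0
-0.087890625
-179.9560546875
89.9560546875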

Obviously, this still needs work. I need to implement variation of shading with latitude: ice and snow, deserts, jungles and other such things. I’d like to add atmospheric effects, at least to the beauty shots (low priority). More significantly, this thing is unaccountably sl-ooo-ow. I’m not entirely sure why. Part of it, surely, is my 2008-model MacBook, but LunarCell runs in a couple minutes or less at 4k×2k res, while this takes hours. POV-Ray renderings are pretty quick, too. Texture baking and ordinary renders in Blender seem to be much quicker as well. I may need to compare total time for baking, preparation and map rendering using POV-Ray. I’m thinking it’ll likely be faster, if a bit more laborious. I could probably shave some time off of the baking or equirectangular rendering time if I understood shading a little better, but I think there might be a speed issue with Blender’s equirectangular camera…

Please feel free to post any questions (I try to write clearly, but clearly I fail…), comments (“You’re covrage of this suject is very thought provoking. I will surely be read this of the future,” or, “This is very good approach to subject, but you SEO optimization need work. Read http://www.SEOcrap.com for further helpings,” are likely to fall prey to my cruel and overzealous spam filter. Sorry…), or suggestions (speed optimizations would be very appreciated 🙂 !)

Thank you for appeasing my inner attention hog!
The Astrographer

An equirectangular render of the same planet, with the same set of shaders. This took several hours. I promise a no-prize to the first person who can explain this. Seriously, WTF?!?
