PlanetCell on Blender Cycles – First WIP

 

An ordinary render of my procedurally-shaded planet. This took less than 3 minutes and 5 seconds to render.

For a long time, a very long time, I’ve been impressed by the abilities of the Flaming Pear LunarCell plugin for Photoshop. It quickly generates planetary surfaces that are, if not realistic, at least attractive and plausible to cursory examination. Mostly the former, but that’s not bad.

There are things, though, that I think could be improved. It’s not cheap, to start with. It’s not terribly controllable. I’d like to be able to manually guide creation with input masks. I also want the option of deeper control of the noise generation. I’d like to be able to toggle ice cover. In any case, the sea ice generation has been broken in the last several versions. Perhaps the biggest thing I’d like to get away from is Photoshop.

I like Photoshop perfectly well. Very much, in fact. But for the purposes of this blog, I'd like to stick to applications that could be available to all or most of my readers. This isn't the "Look at all the Cool Junk I Have that You Don't" show, after all. Optimally, I want my readers to be able to benefit from my tutorials with no barriers to entry higher than downloading a few free apps from the internet.

This is looking to be one of my successes at that. In the first iteration, I needed Blender for the noise rendering (the major point of the exercise), Photoshop (!) to flip and convert the RGB OpenEXR height bakes to greyscale TIFF so that I could feed them into GDAL to convert to BT format, so that I could use Wilbur to create RGB-separated 24-bit color images, which I then had to feed into POV-Ray to flatten into equirectangular. Whew. Try saying all that in one breath. A whole lot of work, a whole lot of bouncing around from one app to the next, and at least one expensive, not generally available app in the pipeline. After all that, I still had to use Wilbur to convert the POV-Ray color-hash back into a usable GIS-ready elevation format.

The current iteration does all the hard work of noise generation and rendering of equirectangular maps in one (so far very slow) procedure inside Blender (well, one per image). Once that's out of the way, the color imagery and any 8-bit masks are good to go, out of the box. Just add worldfiles. For the heightmaps you'll need something like FDRTools to convert the RGB OpenEXR to RGB TIFF (FDRTools doesn't seem to have any functionality to convert files to greyscale) and GDAL to convert the TIFF into a usable 32-bit GIS format. Then you're ready to play with your data in QGIS, GRASS, Wilbur or whatever floats your boat.
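To make that last GDAL step concrete, something along these lines should do it. This is only a sketch: the filenames are placeholders, and the -a_srs/-a_ullr georeferencing assumes a whole-planet equirectangular map.

# grab band 1 of the RGB TIFF, write 32-bit floats, and stamp on global lat/long georeferencing
gdal_translate -b 1 -ot Float32 -of GTiff -a_srs EPSG:4326 -a_ullr -180 90 180 -90 heightmap_rgb.tif heightmap_f32.tif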

Now, this is definitely a work in progress. It's slow, I think at least in part because my node tree wasn't all that well thought out. I was kind of floundering as to how to do this to start with, so what work I've done is a bit of a mishmash of half-completed ideas. This'll probably need a full-on refactoring before I'm done, but (at least until the Blender Cycles engine gives me a way to save full-float bumpmaps directly) it should be adequate for my purposes. Hopefully you will find it useful as well.

Rather than walking you through the creation of this thing, I'm going to post the blend file so you can examine it at your leisure. I'll try to explain the function of the parts, what I was trying to do in building them that way, and how they can be used.

First of all, this thing is overly complex and in serious need of a full refactoring. The main idea in the organization of node groups was to divide operations into small "labs" for such things as heightfield generation, land and sea texturing, and distribution of land and sea elements. Ultimately, I'd like to reduce the inputs on each group to those that would be most commonly used, with the less often accessed parts well labelled inside the various group nodes. I'm not there yet.

So let's go over what I do have, starting at the left side of the network. The Texture Coordinate node provides "Generated" coordinates to all the basic position-dependent nodes (mostly noise textures). This ensures that the coordinates are always 0..1 on x:y:z, regardless of how you move or rotate the object. That's important for animation, unless you want the texture to change as the planet moves in its orbit. Very off-putting when that happens…

Next comes the Height Group. This would be my Height Control Lab, the bread and butter of the whole concept. For the core of PlanetCell this hasn’t gotten anywhere near enough attention as yet. In my defense, I wanted to assemble a generally functional whole and then go back to improve the details later.

Let’s start by looking at the control inputs. We’ll do this for all the nodes and groups.

Type controls the kind of Musgrave fractal used. This can have the values, “Multifractal,” “fBM,” “Hybrid Multifractal,” “Ridged Multifractal,” or “Hetero Terrain.” These control the appearance of the fractal function. The fBM, or fractional Brownian Motion, type is largely homogeneous, while the others are, in various ways, more heterogeneous. For manual editing work, I like a homogeneous noise for its controllability, but as the generator for a planetary landscape heterogeneous is usually the way to go in my opinion.

Basis controls the noise basis for the Musgrave fractal used. The possible values are "cell," "perlin," "uperlin," "simplex," "usimplex," or "gabor." Perlin and uperlin, which can also be referred to as "snoise" and "noise" respectively, are, as the names imply, Perlin noise functions. Simplex and usimplex are, similarly, Perlin simplex noise functions, a somewhat faster and, in my opinion, somewhat more attractive noise. Cell noise is simply a discrete grid of random values, rather like the Add Noise filter common to many painting programs. There is also the option to use a Gabor noise basis. This is significantly slower than the other basis functions, and, because of the way the Musgrave node was written, the various Gabor control parameters are not available. I'm hoping eventually to be able to add various Voronoi-based Worley basis functions as options, but that will require a complete rewrite of the OSL Musgrave node. Definitely a lower priority.

Musgrave generally works best with a basis that provides values in the range -1.0..1.0, so it is better to use a perlin, simplex, cell, or gabor basis than the unsigned uperlin or usimplex basis. The unsigned basis functions remain as options to provide flexibility, and because the programmer was too lazy to bother filtering them out. Why go to the extra effort to remove flexibility that does no harm? Play around, you might get good results, but understand that it'll be harder to get a decent image with the unsigned basis functions. A good guide to this sort of thing is always Ken Musgrave himself.

Dimension is the Hurst exponent, commonly given as H, of the Musgrave fractal. In short, this controls how rough the Musgrave fractal is. A value of 1.0 will give a fairly smooth fractal and 0.0 will give a very rough effect.

Lacunarity is the frequency gap between each octave of the Musgrave fractal (if it seems like most of the inputs in this group control the fractal, then you've been paying attention). Typically somewhere around 2.0 is a good value. Higher values will reach a given roughness with fewer octaves (and thus render faster) than smaller values. I suspect that lower values will produce fewer artifacts, but I can't prove that.

Detail is the number of octaves of compounded noise in the fractal. There is a hard limit of 22.0 octaves. The more octaves you use, the slower the rendering. Per the Musgrave article linked above, more than about log2(screen resolution) - 2 octaves are wasted. So for a 4096×2048 map, 10 octaves should be altogether sufficient; add one octave for each doubling of resolution or subtract one for each halving.
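If you don't feel like doing the logarithm in your head, a quick shell one-liner will do it; 4096 below stands in for the width of your map in pixels.

echo "l(4096)/l(2) - 2" | bc -l   # l() is natural log, so this is log2(4096) - 2; prints ~10 octaves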

Offset and Gain are still a bit of a mystery to me. Multifractal and fBM don't use them. Hybrid Multifractal and Ridged Multifractal use both. Hetero Terrain only uses Offset. Just play around with them…

Displace is a vector that allows you to move the center of the fractal in 3D phase space. This allows you to change the character of the fractal without reseeding (which is good, because you can't reseed). Every time you want a different world, just change the displacement values.

Scale controls the frequency of the noise. Smaller values lead to larger features, larger values to smaller features. Because Scale is a 3D vector, you can do special effects like using a larger value in the z direction to compress features in the N/S direction. That could be useful for creating a cloud cover map. Typically, the x and y scales should be the same unless you're going for some odd effects… A Scale of somewhere around 2.0 in all dimensions is often good for our purposes.

Vector is basically the position vector of each point on the surface of the object. For most purposes, just leave that on the Generated texture coordinates.

Time takes advantage of the fact that we’re using four dimensional noise. If you want to animate the surface of the planet, you can connect that to a driver. I haven’t studied that yet, so you’re on your own…

Exponent, Intensity and Final Offset were early experiments of mine. I'm not sure how useful they may prove. Exponent raises the raw elevation values to a power (default 1.0), Intensity multiplies the powered value by a factor (default 1.0), and Final Offset adds a value to that (default 0.0).

The internals of this group are pretty well exposed, except for a couple of hardwired nodes intended to map the output value from -1..1 (theoretically) to 0..1. I may change some boneheaded parts in the future, but for the casual user there really isn't much to fiddle with under the hood.

The next node is the Land Shader group, which I essentially copied from the Blender Stack Exchange. I'm not using it at present, but in the future I expect to use something very like it to add climate zones (tropical, desert, temperate, land ice, sea ice, etc.). Ultimately, I'll need to add some sort of noise effect to break up the spatial zones a bit. This is here mainly to give some idea of where I'm going with this.

The Land Coloring and Sea Coloring groups are simply noise-based textures for, as the names imply, the land and the sea. The controls on these are kinda funky, and ultimately I'd like to have similar (but less funky) control groups for multiple climate types. At present, these things are way overcomplicated. I am not entirely unhappy with the land texture, though, and may poach it for a desert shader (maybe). Parts of the sea shader could prove adaptable to a cloud shader when I add an atmosphere in the future.

The Land/Sea Mix group mostly exists to hide the rather spaghetti-like graph I used to control the sea level and distribute the land and sea shader effects accordingly.

The Bump, Diffuse BSDF and Material Output nodes are standard and self-explanatory. The isolated Image Texture node is present to allow for baking textures to UV. For the color imagery, you can simply save to png and you're good to go.

For the bumpmaps, things get a bit more complicated. Since I can't output directly to grayscale float, I need to use a Color Ramp to create a linear grayscale RGB for output. Since color ramps clamp all inputs outside the range 0..1, I need to rescale my heights to that range. To prepare for bumpmap output, connect either the Height output from the Height group or the Flatsea Height output from Land/Sea Mix to the bottom Value input of Set Bump Min (Add). Connect the Color output from Test Bump to the Color input of Diffuse BSDF and disconnect Bump from the Normal input of Diffuse BSDF.

Make sure the 3D View is visible and set to Camera View, with Inner Camera set as the camera in the Scene properties. Incrementally reduce the value in the upper Value input of Set Bump Min (Add) until some blue appears in the 3D View, then raise it just enough to reduce the blue to a few points (for Flatsea Heights, the entire ocean might turn blue). Once you've got the lower limit set to your satisfaction, begin raising the value in the lower input of Set Bump Max (Multiply) until some red appears on the planet surface, then step it back by small increments until no more than a few isolated points are red. Once you are satisfied with the results, disconnect Test Bump from the Diffuse BSDF node and connect the Color output of Bump Color to the Color input of the Diffuse BSDF. Now you can render your chosen bumpmap or heightfield. For best results, save the bumpmap to OpenEXR as a full float.
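One caveat: what you render this way is the normalized value, Multiply × (height + Add), not the raw height. If you jot down the two numbers you settled on, you can undo the rescaling once the bake has been converted to a 32-bit TIFF. A hedged sketch using GDAL's calculator, where bump_f32.tif is a placeholder filename and 0.5 and -0.2 stand in for whatever Multiply and Add values you actually used:

# invert normalized = mult * (height + add)  =>  height = normalized / mult - add
gdal_calc.py -A bump_f32.tif --outfile=elevations.tif --calc="A/0.5 - (-0.2)" --type=Float32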

The color imagery, as I've said, is ready to go out of the box. Just add a pgw worldfile (there's a sample after the GDAL command below) and Robert is one of your parents' brothers. Because OpenEXR seems to be essentially unheard of in GIS circles, you need to convert that into a TIFF, then you can feed the TIFF into GDAL to make it into something more GIS-ey. Ordinarily, if I were using Photoshop to do the conversion, I'd also reduce the image to a single-channel greyscale. Because I'm trying to get away from expensive software of limited availability, I'm using FDRTools instead. So far as I can discern, FDRTools doesn't have an option for greyscale output, and GDAL doesn't do color-to-greyscale conversion. At least not directly. Fortunately, gdal_translate has the -b option to choose a particular channel or "band" of the input file. The bands are numbered from 1, and we know we have three identical bands in this file, so, to convert to the Virtual Terrain Project's BT format, the command looks like this:

gdal_translate -b 1 -of BT [Location_of_input_file]/[Name_of_input_file].tif [Location_of_output_file]/[Name_of_output_file].bt

Now you can manipulate this to your heart’s content with a variety of GIS tools.
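As for the pgw worldfile promised above (for the color imagery and masks), it's just six numbers in a plain text file with the same base name as the image. For a 4096×2048 whole-planet equirectangular map in degrees it would look something like this (pixel width, two rotation terms, negative pixel height, then the x and y coordinates of the center of the upper-left pixel; rescale the numbers for your own resolution):

0.087890625
0.0
0.0
-0.087890625
-179.9560546875
89.9560546875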

Obviously, this still needs work. I need to implement variation of shading with latitude: ice and snow, deserts, jungles and other such things. I'd like to add atmospheric effects, at least to the beauty shots (low priority). More significantly, this thing is unaccountably sl-ooo-ow. I'm not entirely sure why. Part of it, surely, is my 2008-model MacBook, but LunarCell runs in a couple of minutes or less at 4k×2k resolution, while this takes hours. POV-Ray renderings are pretty quick, too. Texture baking and ordinary renders in Blender seem to be much quicker as well. I may need to compare the total time for baking, preparation and map rendering using POV-Ray; I'm thinking it'll likely be faster, if a bit more laborious. I could probably shave some time off of the baking or the equirectangular render if I understood shading a little better, but I think there might be a speed issue with Blender's equirectangular camera…

Please feel free to post any questions (I try to write clearly, but clearly I fail…), comments ("You're covrage of this suject is very thought provoking. I will surely be read this of the future," or, "This is very good approach to subject, but you SEO optimization need work. Read http://www.SEOcrap.com for further helpings," are likely to fall prey to my cruel and overzealous spam filter. Sorry…), or suggestions (speed optimizations would be very appreciated :) !).

Thank you for appeasing my inner attention hog!
The Astrographer

An equirectangular render of the same planet, with the same set of shaders. This took several hours. I promise a no-prize to the first person who can explain this. Seriously, WTF?!?


Shield Volcanoes and Calderas

Olympus Mons on Mars. A gigantic and striking example of a shield volcano with calderas.

While trying to come up with coherent advice for a fellow who was trying to create an island over on the Cartographers' Guild, I ran across a sometimes-successful method for creating a shield volcano with one or more calderas. This should be good for making your own private Olympus Mons or Hawaii. A variant on this might prove useful in creating impact craters, but I'm not really going to delve into that. Focus!

I'm going to make a quick build of this using nothing but the Mound filter in Wilbur. The effect, I'm sure, could be improved with additional noise and erosion effects. Again, focus!

I’ll start with a 1024×1024 blank map. To keep things sane, I’m just going to have this thing pop up out of a flat plane. No worries about developing the rest of the island or other landmass upon which this volcano may be growing.

Figure 1. Shield Selection

First, using the freehand selection tool with feather set to 1.0, select a roughly circular base for your volcano.

Next, we'll invoke the Mound filter (Ctrl-M or Filter>Fill>Mound).

FirstMound_Settings

Figure 2. Shield Settings

Because I want to give this shield a bit of a lip above the surrounding plains, like the edge of Olympus Mons on Mars, I will set the Minimum Height to 300 meters. I arbitrarily choose a Maximum Height of 15000 meters, making this lower than Olympus Mons, but definitely more of a Martian volcano than a terrestrial one. For a shield volcano on an earthlike planet, 5 or 6 kilometers above sea level would be a really tall shield. Set the Operation to Add and Noise to zero.

Now click the Edit Profile button. Your goal here will be to create a fairly steep-sided, flat-topped profile. I usually start by entering a value like 0.9 in the Non-Linearity box and hitting Apply several times till I like the shape. Next, hand-draw a flatter top to the curve, trying to keep it continuously rounded and monotonically increasing to the right. Repeated applications of Smooth, Non-Linearity and Normalize will help. With luck and a bit of work you'll end up with a curve similar to the one below. With more luck, experience and effort you could get better results. This is pretty quickly thrown together.

Figure 3. Shield Profile

Figure 4. First Caldera Selection

Next, we want to create a caldera. Select a large round area near the top of the previous mound. Invoke the Mound tool with the Minimum Height set to zero and the Maximum Height set to a negative value equal to the desired depth of your caldera; in this case, -3000. Most of the time you'll want a much steeper dropoff around the edge of the caldera than around the edge of the shield. A few more applications of Non-Linearity, perhaps flattening off the top again, followed by more Smoothing and Normalize, should do for that.

Figure 5. First Caldera Settings.

Figure 6. Caldera Profile.

To give an overview of the problem cases of making subsidiary calderas, I’ll go through the process of making a caldera completely within the larger caldera, and one that lies across the edge of the larger caldera. As we go along, it should become clearer what I’m talking about.

First we’ll use the freehand selection tool to select a roughly circular area completely within the larger caldera.

InnerCaldera_Selection

Figure 7. Inner Caldera Selection

I'm going to make this one a bit shallower than the larger caldera, so let's use a Maximum Height of -800. Otherwise, use the same settings as before. For more interest, you could use a different Profile for each caldera; some could be steep, while others are gently sloping. For this exercise, the previously generated Profile will do just fine.

For this exercise, let's make a larger subsidiary caldera that crosses the edge of the first caldera. It'll still be smaller than the main pit, but bigger than the other sub-caldera.

Figure 8. Edge Caldera Selection, with Inner Caldera visible to left.

Before I do anything else, I'll Select>Feather the selection twice with a Sigma of 1.0. After feathering the selection, I'm going to Filter>Blur>Gaussian Blur the area pretty radically, perhaps three times with a Sigma of 8. This will tend to make the existing edge a lot less visible in the final new caldera. You can definitely blur it more for better results, but for now… Hurry, hurry!

Figure 9. Edge Caldera blurred.

Before making our Mound (hole), we want to harden the selection back up. Select>Modify>Binarize with a Threshold of 128, then Select>Feather with a Sigma of 1.0 to bring its hardness back in line with what we've done before.

I'm going to make this caldera deeper than the main one, so set the Maximum Height to -5000. You might have noticed that the maximum and minimum values have been reversed. They should probably have been named something like Outer Height and Inner Height, but the programmer seems to have intended this as a tool for making hills, not holes. I'm just being contrary here.

Figure 10. Result with Edge Caldera in place.

Figure 11. Final Result with embellishments, not all successful.

After doing all this, I tried to embellish here and there, adding inner subsidence to calderas and additional calderas. I even tried to add a smaller cinder cone to the side. When this failed miserably, I made a try at creating a landslip on the side of the shield. You can tell by the results that this is quite a ways short of being ready for primetime.

Figure 13. Greyscale map of elevations. Should be 16-bit…

Figure 12. Perspective view of the final result, prominently showing the attempt at a land slip.

I have also posted the resulting heightmap as a (hopefully) 16-bit png, and a simple OpenGL perspective view.

If you have any comments, complaints, questions, kvetches, kibitzes, or better ideas, please feel free to leave a comment.

If you use this technique, please drop a line to let me know how it worked for you, any rough spots you ran across or any alterations you made to get over those rough spots. I’d love to see what people with greater talent than myself might be able to do with this technique.

Thank you for your attention and whatnot,
The Astrographer.


Cratered Planets

cratered-planet

Today, we’re going to create a cratered world using noise in planetGenesis. This has been a headache for me for some time. The result is far from perfect. I don’t think I’d use it as an elevation map for ground views, but it’s a decent “fractal forgery” from a distance.

First up, we're going to create the basic heightfield in planetGenesis. Right-click, Add Noise>Worley>Summed Worley Modified by Perlin. Select the new node and set Neighbor to 7, Points per Cell to 4, and Height, Width and Depth to about 1. So far, not so good. Next, we want to add a warping noise to reduce the very ordered shape of this noise. Add Noise>Perlin>Perlin's Noise. Leave the settings on this one largely alone, except to alter the Length, Width and Height under Noise Size to about 1. Now, Add Function>Adjustments, setting Scale to about 0.2. Shift-drag a link from the output of the Perlin node to the input on top of the Adjust node. Now Add Combiner>Warp, connect the output of the Worley noise node to the right input on the Warp node, and connect the Adjust node to the left input of the Warp node. To the output at the bottom of the Warp node, connect a Musgrave HeteroFractallize node. The effect here is starting to look interesting.

As much as I like the look so far, I would like to add an effect similar to lunar maria. Add a new Perlin Noise node with Noise Size values of 1,1,1. Add Function>Range; I set the Lower End to -0.1, the Ramp to 1.0 and the Value Above Range to 1.0. Connect the output of the last Perlin node to the input of the Range node. Feed the outputs of the HeteroFractal node and the Range node into a new Multiply node. Feed the output of the Multiply node to the Terrain node and Run. Try opening the result in Wilbur; also add the same image as a texture in Lighting Settings. Let's keep this as our texture.

In the Terrain node, change the image file name. Attach the output from the Range node to a new Adjust node and set the Scale on that to about 9. Attach the output of the new Adjust node to a new Add node. To the other input of the Add combiner, connect the output of the Multiply node. Finally, connect the output of the Add node to the Terrain node. This is basically to force the highlands to a higher elevation than the maria.

Our cratered planet with hillshade rendered in Wilbur.

Now, in Wilbur, we look at the bumpmap we have just created with the previously created image as a texture. For what it is, the effect is pretty decent. For something I came across more or less accidentally and am still trying to refine, it’s pretty awesome. I think it may be possible to enhance the craters a little bit in Photoshop.

There are a lot of controls in there with no obvious sense of what their results might be. I still recommend blind messing about over slavishly following instructions; that's mostly how I found this… The Scale values in the Adjustment nodes, and pretty much everything in the Range nodes, are very sensitive to the values produced by the noise, so they may need rejiggering when the seeds are changed.

For the sake of illustration and to get you started, I’ve saved zipped up pG nodefiles for creating the bumpmap and texture images. Mess around with them, change the seeds to create different planets. Let me know if you come up with anything cool or use this for anything interesting.

Now, let’s look at this “in space.” I’ll start by launching Blender, and deleting the default cube. Next, I’ll create a UV Sphere of Size 3.000. Make sure all faces are Shade Smooth. With the sphere selected, I’ll add a new Material and a new Texture of Type Image or Movie. Load the bumpmap image. Turn Off the Color Influence and turn on the Normal Influence under Geometry.

Now, add another new image texture and load the texture image. Keep the Color Influence. Now render.

For now, I’ll leave it as an exercise for the reader to figure out how to extract the z-buffer to Photoshop and use it to composite the image with a Flaming Pear Glitterato background.

Hopefully this will be useful.

Thank you for your attention,

The Astrographer

EDIT: I forgot to post the planetGenesis nodefiles, so I’m posting them now along with heightfield and texture images rendered to 8k x 4k resolution, a nice composite image created in Blender and a blenderfile showing both how the cratered planet material goes together, and, if you check the nodes view, how the compositing was done. Here it is. Thank you for your patience…


Using “Planet” for 16-bit planetary maps

Astrographer:

Reblogging a two year old post. Oldies but goodies…

Originally posted on Astrographer:

The map on this page got me to thinking about automated terrain generation again. I’ve discussed the inadequacies of most terrain generation before, and I stand by it. On the other hand, as Realmwright says, there are advantages to exploring and filling in an existing map. Much as die rolls serve to spur the imagination in the generation of other planetary parameters, so does a randomly genned map. That’s not to say I’d use a generated map exactly as is. As a geography guy I feel it’s a matter of honor to improve the terrains and maps I’m presented with.

I had originally thought that the map on Realmwright's page was generated using Torben Mogensen's Planet generator, but it turns out that the Donjon generator he links to is actually a working implementation of John Olsson's (johol) old FWMG generator. The cgi on Mr. Olsson's online app no longer…



New Additions

In the menu, under World Builder’s Bookshelf, you’ll find two new additions to the site. With permission from Geoff Eddy, I’m mirroring his Creating an Earthlike Planet and Climate Cookbook pages. Some formatting is lost, but the valuable information is intact and I will try to improve the formatting issues if possible.

While I hope that Mr. Eddy’s pages will find a better home, I’m happy to at least keep them available.

Thanks are due to Geoff Eddy for the creation of such useful resources and the permission to disseminate them. I hope this will prove as useful to others as it has for me.

The Astrographer


Projecting a Map From Imagery: MMPS and GIMP

On Friday I created an equirectangular map (or a known portion of one…) from a partial as-seen-from-space image, using Photoshop and the Flaming Pear Flexify 2 tool. Photoshop is fairly expensive and Flexify isn't exactly cheap, which means they aren't universally available. I try whenever possible to create tutorials using freely available open-source or shareware applications. Today, I will try to do the same thing I did last time using GIMP and Matthew's Map Projection Software (MMPS).

First, I loaded the jpg of my image of Asdakseghzan as seen from space into GIMP.

Zeta-Gold

I resized the canvas into a square using Image>Canvas Size…, setting both the width and height to match the larger of the existing values. In this case the image is wider than it is high, so I set the height to match the width to avoid losing any information. I hit the Center button under Offset, and told it not to resize any layers.

Next, I made a new transparent layer to act as a template for the placement and sizing of the circular area. With that layer selected, I used the ellipse select tool with the aspect ratio fixed at 1:1 to select a circular area centered on the image and covering the maximum possible area. If necessary, use the tool options to set the position to 0,0 and the size to the height and width of the image. Select>Invert and fill the surrounding area with a dark color. Create another layer, fill it with another contrasting color, and move that layer beneath the image layer.

Now select the image layer. Move the layer till the round edge of the planet is close to a round edge of the template. Usually, I use the arrow keys to get this as close as possible. Even if you get part of the edge matched up perfectly, the limb of the planet in the image will probably diverge from the edge of the template. If not, you're golden, but if so you'll need to rescale the layer. This was much easier in Photoshop. I'm not sure you get what you pay for, but there are perks.

Tools>Transform Tools>Scale starts the scaling process. Make sure Keep Aspect is checked. Grab the handle on the side perpendicular to the side where the image touches the template and drag till the limb follows the template. My planet image touched the template on the left side, so I stretched up and down on the top and bottom handles.

Now I had a pretty decent centered, maximally space-filling image, ready for reprojection, so I exported it as a ppm. On my first try, I saved it as a jpeg and made the conversion using the ImageMagick mogrify facility, but that proved unnecessary.
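For the record, the mogrify call in question is a one-liner anyway; the filename is just a stand-in for whatever you exported, and mogrify drops the converted .ppm next to the original:

mogrify -format ppm Zeta-Gold_merged.jpg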
Zeta-Gold_merged

My initial plan was to forgo dealing with my traced "map." It wasn't really all that hot, and the scale-and-shift process in GIMP seemed a bit horrific. Well, with a bit of practice, scaling and adjusting the position of layers didn't seem quite as bad, and more practice seemed like a good idea. So I did it anyway.

Asdakseghzan_merged

 

Asdakseghzan_scaled

I decided to just show the thumbnail, 'cause there's a LOT of whitespace here!

As you can see, the fit isn't quite as precise as with the version I made in Photoshop. With a lot of effort, I could have made it better, but without transparency, sizing is a frustrating endeavor. In order to maintain the tracing as a separable element, I hid the other layers and created a version with just the tracing itself. This is what I would project…

Now we get onto the CLI shell. The commands here assume you’re using some sort of UNIX-based OS like Linux or Mac OS X. Microsoft commands will differ somewhat.

First, I changed my working directory to the location of my images. Next, I gave the following command to reproject the contents of merged_image into map_image:
{location of MMPS}/project -i orthographic -w 2048 -h 1024 -lat -7 -long 7 -turn 2 -f merged_image.ppm > map_image.ppm

Optimally, the app would be on the search path, but the name "project" conflicts with one of my GIS tools, so I have to use the full path. Your mileage may vary.

When MMPS was done projecting the basic composite, this was the result.
Zeta-Gold_map
I also projected the tracing.
Asdakseghzan_projected
I loaded that into GIMP as a layer on top of the other elements. Because the PPM format which MMPS uses doesn't support transparency (so far as I, or apparently MMPS, know), I had to select the empty areas and make them transparent using a layer mask. This was made easy by the high contrast between the background elements and the tracing. If I had been working in black and white, it would have been more involved.
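If you'd rather do that knockout outside GIMP, ImageMagick can manage it in one go, assuming the empty areas of the projected tracing really are a single flat color (white here; the filenames are stand-ins, and the -fuzz allows for a little compression noise):

convert Asdakseghzan_projected.ppm -fuzz 5% -transparent white Asdakseghzan_tracing.png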
Asdakseghzan_map

While the process of positioning and scaling the elements was more difficult, I managed this in about half the time I took with Photoshop. There are a number of reasons for that. I've had more experience with the process, for one. I also did this in a more hurried and slipshod way; I spent a lot of time in Photoshop refining the fit between elements. The major difference, though, comes down to the crashing problem. If I hadn't been required to restart the program and retrace the process from the beginning several times (Photoshop usually crashed on the first attempt to open the Save As dialog after using Flexify), Photoshop would still have been somewhat quicker. Better Transform tools will tell. You may get something for what you pay for, but free is still pretty darned attractive.

Now that we have these existing elements properly projected (more or less), it's time to add in the rest of the world and bring this stuff into more robust mapping tools. That's for the next few posts.

Thank you for reading. Please feel free to comment, leave suggestions or ask questions.
The Astrographer


Projecting a Map From Imagery: Photoshop and Flexify

Today we’re going to try something a little different. Quite some time ago, in the hoary days before Flaming Pear LunarCell had equirectangular map outputs, I created this rather nice planetary image.
Zeta-Gold

Unfortunately, I lost the settings long before those cool map outputs were added to LunarCell. Later, but well before I started learning the art of mapmaking on the Cartographers' Guild, I roughly traced the image to create this interesting, but neither scaled nor properly projected, "map" of a planet I wound up naming Asdakseghzan, sorta-homeworld of the canine-derived Vugoa. Think Traveller Vargr…
Zeta-Mtns
I have no idea what the scale should be, and that “equator” line should probably be ignored in future.

I’d like to create a reasonably projected, fairly well scaled version of the map with some fidelity to the original image.

I started by cropping the space picture down to the circular area covered by the planet in the picture. Then I expanded the canvas size to a square area just large enough to contain the circle. Make sure to save the resulting image.
Zeta-Gold_raw

Merge those layers together. The next thing I did was to load the old, traced "map" I had created into Photoshop. Paste it over the square, cropped version of the beauty shot you just made, reduce the opacity so that you can see the underlying image, and Edit>Transform>Scale the pasted layer so that the traced shorelines match the shorelines in the underlying picture. This might take a bit of jockeying about and messing with the opacity to get a good idea of when everything fits. Now, I made sure to select the underlying picture layer.

Next, I brought up the Flaming Pear Flexify 2 filter to project from an input projection of Orthographic to an output projection of Equitall (to fill the square space). I adjusted the rotation controls till I had the area roughly where I wanted it. I now had at least a credible global projection of the portion of the planet visible in the image. Eventually, I'll be able to use this to work out the scale of the map.

After that, I selected the resized layer with the traced image, and projected that. All of the settings should remain unchanged, otherwise the layers won’t match up properly.

Once I have the image reprojected, I resize it to 2048×1024 to get the proper aspect ratio for an equirectangular projection. Here is the resulting map.
Zeta-Gold_projected

And the composite.

Zeta-Asdakseghzan_composite

Notice the large green area. That is about the area that would have been filled with mapped features if the original image had been a full-on face shot. The distorted rectangle on the west end of that potentially visible area is the portion of the planet that had made it in front of the lens. Part of that is even in darkness. The sacrifices we make for a visually striking picture. Usually NASA doesn't use imagery from the limb of the planet to create composite maps. Unless they have to. Also notice that I said "map"; this is the first image worthy of the term. The red area? That wouldn't have been visible.

Now that I know where this is on the globe, I can set about creating the rest of the world without fear of contradicting this little portion that I've already created so much backstory for. Some details will have to change. The equator, for instance, isn't precisely where I envisioned it, although the match was closer than I'd feared.

What are the potential uses for this? Well, once upon a time, when I was even more of a Trekkie than I am now, I wanted to make maps of some of the planets the Enterprise was shown orbiting. That's even more attractive with the often more convincing planets in the remaster, though I'm not as obsessed as I used to be. All you have to do is grab a screenshot, register the visible portion of the planet to a circular template filling a square canvas, and Bob's your uncle! Getting features shown in multiple disconnected shots into the correct spatial relationship could be an interesting challenge. I'll leave that to the readers to figure out. Please post a link in the comments if you find or create a good method… Anyway, this could be an excellent first step to creating your map of Tatooine or Alderaan or Pandora. Or, as I did, recovering some old tracing you tried to make into a map.

Another use I really hadn’t thought about before, would be to use a technique similar to the one described here to convert scrawlings on an actual physical globe into a map. Yeah, that’s definitely going in my notebook for a future post. Time to make that dry-erase globe I’ve been dreaming of…

For my next trick, I'll try repeating this procedure with MMPS and the GIMP. Photoshop has been an awful crash-monster lately, particularly after I use Flexify. I was also impressed by the latest version of GIMP and want to try it out. Finally, I really want to try using open-source or at least freely available tools as much as possible, to make my solutions as generally useful as possible. I'll post that by Monday. I promise…

Thank you for your attention. Please feel free to ask for clarification, make comments, or give ideas for improvements.
The Astrographer


Foes of the UNSEO: The Hanrul

I've been working on a post using the Mercury6 symplectic integrator to analyze the stability of the Yaccatrice System. It's been a long learning experience, and I think I screwed up entering the initial orbital state vectors. Sky Moon fell into Cintilla in less than nine days, and Yaccatrice continued in a wildly eccentric orbit for several years thereafter. I expected that Yaccatrice might not have been stable, but I find it really unlikely that tidal forces could have pulled the gas giant out of a circular orbit in just a little over one time step (of 8 days). It's even less likely that a moon of that planet would have survived as long as Yaccatrice did. I need to go back and recalculate those position and velocity vectors. This may take a while…

In the meantime, I’ll post an alien species description from my notebook.

Hanrul

The Hanrul began as kind of a cross between Kzinti, Ferengi and Vulcans, with many of the worst aspects of the stereotypical steampunk mad scientist.

Coldly unemotional and ruthlessly exploitative at best. Often cold-blooded sophontocidal cannibalistic psychopaths.

The currently dominant culture of the Hanrul is one of the most technologically advanced societies yet contacted by humanity. In some areas, particularly biotech, the Hanrul are more advanced than humans. Their technology is frequently odd to the point of inefficiency as a result of their Mad Scientist culture. It's important to note that, in spite of the term Mad Scientist, the Hanrul, while technologically advanced, have little concept of science as such.

Hanrul also show little evidence of any concept of other people as independent thinking entities with feelings that matter. Both among themselves and with individuals of other species, the Hanrul are utterly callous and without empathy. Such ethics as the Hanrul have, mostly resemble Ayn Rand’s Objectivism.

Hanrul are very prickly about their byzantine hierarchy of social status. They are quick to take offense and equally quick to offend. Hanrul typically require long, drawn-out introductions in order to fully acquaint others with their status in a large number of overlapping and interlocking social spheres. It's also nearly universal in Hanrul society that listening closely to an introduction is a tacit admission that the other is a superior. Hanrul social norms seem almost designed to make ignorance of status quickly evident. These things, along with oddities of Hanrul reproduction, tend to make life on Sylan short, dangerous and violent. Rising through the ranks by assassination and duels is the norm for Hanrul.

Although horribly xenophobic, with an inhuman(in all the worst senses) psychology, the Hanrul are biologically very similar to humans. This has made humans useful subjects for Hanrul genetic experimentation and vivisection. They do the same to low-status members of their own species.

Economic domination and selfishness are central to Hanrul culture. Their society is cemented together by Exploitation Societies great and minor.

Although humans find Hanrul culture disturbing (with good reason), and believe that individual Hanrul could grow up to be decent and productive members of galactic society if separated from their toxic culture, the experiment has never been tried. The act of removing people from their own society and forcing them to grow up in an alien environment against their will is considered against human ethics. Even given a lapse in ethics, the Hanrul are advanced and strong enough to fiercely resist any such effort.

The United Nations of Sol and polities of many neighboring sapient species resist Hanrul exploitation of other sapient species when possible, but otherwise long-term plans for dealing with the Hanrul are still very much a matter of debate.

The Hanrul didn't have FTL technology until after first contact with the Solar Union.

Physical Appearance

Hanrul look something like an octopodal cross between a glossy patent leather wasp, and a centaur, with vicious crab claws.

The four legs on the Hanrul’s abdomen are used for walking. The two upper arms on the thorax end in hands with three opposed thumbs, and are used for manipulation. The two lower arms are greatly increased in size and end in claws which are used in combat and to help climb.

Reproduction

Large numbers of eggs are laid and cared for in communal hatching grottoes. Modern Hanrul cities often use sewage and other refuse to create a nutrient-rich mulch upon which the young larvae feed. Criminals and low-status Hanrul are frequently dropped into the spawning pits to feed adolescent Hanrul. The young also often fight and eat each other, helping to whittle their numbers down from ridiculously excessive to merely excessive by the time they finally emerge from the pits as young males.

Conflict continues for these males. In the dominant Sylan language, the word for male can also be translated as "cannon fodder." Even with the further whittling of the male population by continuous warfare with other Hanrul and neighboring species, population pressures remain fierce.

Those males who manage to survive and fight to earn the right to mate, will impregnate the female with hundreds of eggs. Laying eggs renders the female terribly hungry, and she will try to eat everything she can catch. Mating chambers are always well-stocked with food, because the female can easily die from the stress of laying so many eggs. She will still try very hard to eat her mate.

Provided the male manages to avoid being devoured by his mate, he will begin to go through a metamorphosis into another female. Hanrul females can live for centuries, and are capable of producing a clutch about twice an Earth year. In practice, they only lay eggs a couple times a decade, because of the immense stress involved.

Sylan Empire

Sylan, the homeworld of the Hanrul, is still divided bitterly into small sovereign entities, often overlapping and with disparate systems of social organization. The global organization, which humans refer to as the Sylan Empire, is at best a loose confederacy consisting of a handful of “international” alliances.

An innovation of the new global culture is a harsh meritocracy that makes it costly and possibly deadly to rise above one’s level of competence.

Since expanding into interstellar space, the Sylan Empire has colonized several worlds and enslaved at least one primitive sapient species.

Although often only tenuously disciplined, the Sylan military is strong, technologically fairly advanced and highly aggressive. The Sylan Empire is widely considered a threat to its neighbors.

Hopefully, more to come soon. Sylan obviously needs some specs and maps…

Thanks,
The Astrographer


Wilbur on the Macintosh

Wilbur, as seen on my Macintosh. Note the Apple in the upper left corner.

While the title sounds like a good name for a fictional English(-ish) village (the Macintosh River seems a bit un-English-y), I'm actually talking about getting Joe Slayton's Wilbur program running on my Mac OS X-based computer.

I’ve been using Boot Camp to boot my computer in Windows XP, which seems to work fine, but I’m not that fond of being bound to Windows till I can reboot. I’m aware of Parallels and similar programs which allow Windows to run in a virtual machine parallel to the Mac OS, but they cost money. If I had any money, I’d spend it on cartography and simulation apps… er, not to mention clothes for the kids and… food and whatnot… ehhh…

Anyway! Whilst pointlessly bouncing about the internets, I discovered this post on using Wineskin to "port" Orbiter to Macintosh. I've had some experience trying to get Wine working, so I wasn't terribly optimistic, but I tried it. After some false starts (X11 doesn't seem to open right on my computer, but I'll go over my workaround if you have the same problem) I had Orbiter running on my computer. As far as I can tell, it works fine, though the simulation is realistic enough that I, at least, can't get a Delta Glider with a ridiculous amount of delta-v into orbit to save my life. I may need to stick to Kerbal Space Program till I get some, um, skillz. But that said, Orbiter is a pretty big and complicated program, and I didn't have too much trouble gettin' it going.

Wilbur, on the other hand, is a relatively simple app, so I went with it. I suppose if I can get Wilbur, SagaGIS and Fractal Terrains running in Wine, I can dump Windows and free up about 180 gigabytes of disk space. Sadly ArcGIS, not surprisingly, doesn’t function on Wine.

The first prerequisite, of course, is having Mac OS on your computer. If you have Windows and you like it, then you don't need my help. If you have Linux, then the vanilla version of Wine should be hunky-dory, but I can't really help you with that.

The second step is to get a copy of the Wilbur installer here. Next, get a copy of Wineskin Winery here and install it.

The first time you open Wineskin Winery you need to install an engine. Simply click the plus sign next to where it says "New engine(s) available!", select the highest-numbered version shown in the pull-down menu and click the "Download and install" button.

Once the new engine is installed, click Update under "Wrapper Version."

Next click "Create New Blank Wrapper" and give it a good name, like "Wilbur 180."

My computer runs Mac OS 10.6.8 with pretty extensive modifications, but if you get a pop-up window that says "The application X11 could not be opened," don't worry, just click Quit. Everything should be golden. Wait a little bit and another window should pop up that says "Wrapper Creation Finished." Go ahead and click "View Wrapper in Finder" and double-click the appropriately titled icon (Wilbur 180, in my case) to open the wrapper.

Don't click "Install Software" just yet. Click Advanced. If you like, you can change the version number to an appropriate value; Wilbur is currently on 1.80 as of publishing this post. Also, because we're using an msi installer, check "Use Start.exe."

Go to the Options tab and check "Option key works as alt" and "Try to shut down nicely." Now click on "Set Screen Options." Under "Other Options," check "Use Mac Driver" and "Use Direct 3D Boost."

Wilbur needs vcomp100.dll from the Visual C++ Redistributable Components to run. I tried using the installer from Microsoft, but that failed. I also tried using the Winetricks tool to load "vc2010 express" under "apps" and "vcrun2010" under "dlls," but that failed, too. Let's close the Wineskin.

Instead, download a copy of vcomp100.dll from here. Scroll down and click the plus icon next to "How to install vcomp100.dll," and scroll down to "Using the zip file." Click "Download vcomp100.zip."

When vcomp100.dll is downloaded and extracted, navigate to "Applications/Wineskin" in your user directory and right-click (or control-click) on the Wilbur 180 icon. Select "Show Package Contents" in the resulting menu. Drill down through "drive_c" and "windows" to "system32." Drag the copy of vcomp100.dll you just downloaded into the system32 directory, and close the package.

Now double-click on the Wilbur 180 Wineskin. Click "Install Software." Click "Choose Setup Executable" and browse to the location of the Wilbur32_setup.msi file. Sadly, as far as I'm aware, Wine can't run 64-bit apps… Wait a bit, and the Installer window should open. Click Next.

I manually entered "C:\Program Files\Wilbur\" for the folder location. Next. Next. Close.

The Choose Executable window, if it comes up, should show "\Program Files\Wilbur\Wilbur1.exe." If so, click OK.

Try a "Test Run." If it succeeds, you're golden.

If not… It took me several tries before I got everything shipshape.

You can either start again from the beginning, or, if you just want to try fixing a parameter that might not have gotten properly set, use Show Package Contents on Wilbur 180, and click Wineskin at the top of the package hierarchy. This will allow you to change any necessary Wineskin parameters or use any of the tools.

In my case, I usually just forgot to check Use Mac Driver in Screen Options.

Once everything checks out, you can open the app just like any Mac app, by double-clicking the appropriate Wineskin icon (Wilbur 180).

I've found that most of Wilbur works pretty well in Wine. The 3D Preview window fails completely and some other windows, like Map Projection, don't resize properly, but otherwise Wilbur is pretty functional, and in my experience runs a bit faster than in Boot Camp. In fact, Wilbur is stable, if very slow, handling 8192×4096 images, which usually crash it pretty promptly in Boot Camp.

I've also successfully ported Fractal Terrains (which had problems with dockable windows, but otherwise seemed pretty good) and SagaGIS (which, so far, works fine), along with a few other programs.

I'm very satisfied with Wineskin, though I may have to keep Boot Camp to run ArcGIS and AutoCAD.

I hope this has helped my more Mac-loving friends add another dimension to their enjoyment of their computers.

Questions and comments, as always, are very welcome!

Thank you for reading,
The Astrographer


MMPS

I may have found another way to flatten imagery and maps onto an equirectangular projection. Matthew's Map Projection Software, created by Matthew Arcus, is a suite of command-line applications for creating and re-projecting maps. At least on my Macintosh, it was quick and easy to compile and link the code using the instructions given on the page. Given that it doesn't need porting for a Mac, I'm confident it would work perfectly for other Unices and Linuces. For Windows, you'd need to install some sort of "make" program, but even without the make utility the compile process doesn't look too terribly complicated.

As always, ImageMagick is strongly recommended, and free. MMPS apps require PPM images for input and return PPM images for output. For those with a make app, the command "make ppms" will convert any jpeg files in the images subdirectory into PPM format. The ImageMagick mogrify command can also perform the conversion to and from a wider variety of file formats.

The thing that caught my eye was the inverse projection option. This allows the user to project from any of the available projections back to equirectangular. Here you can find a basic introduction to the use of inverse projection to create at least a partial map from imagery. And if this page, describing how to convert a four-view into a map, doesn't give you some idea of what I'll be doing in a future post, you haven't been reading much of this blog :). Suffice to say, if you start playing around with a well-placed orthographic camera and a sphere with an unshaded texture in Blender, it probably won't be anything new to you by the time I get it out…

For my purposes, the thing that finally makes MMPS almost a straight-out freeware replacement for Flexify 2 is its ability to easily transform map coordinate systems (recenter the map to a different latitude and longitude and rotate around that center). Flexify has a much more extensive set of projections, but a lot of those are… peculiar, and the names are somewhat uninformative.

For instance, let's say we start with a 2048×1024 pixel equirectangular png (generated in Photoshop with LunarCell) named testmap.png, and we want to center the map over a point at 90ºE, 45ºN, and tilt around that center by 30º counter-clockwise. Start by using ImageMagick to convert to ppm.

convert testmap.png testmap.ppm

Now, we use the following MMPS command to perform the coordinate system transform.

./project -w 2048 -h 1024 latlong -long 90 -lat 45 -tilt 30 -f images/testmap.ppm > images/rotmap.ppm

The resulting image, "rotmap.ppm," will be essentially identical to an image transformed in Flexify 2 with the latitude slider set to 45, the longitude slider set to 90 and the spin slider set to 30. Perfect.

The only unfortunate aspect of the MMPS project tool compared to Flexify is that it apparently can't handle 16-bit imagery. (Note: Okay, maybe it can handle 16-bit. I'll have to recheck that.) Other than that and a slightly more limited selection of projections, it is an excellent substitute.
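And if you want the result back as a png for further editing, one more ImageMagick call does it:

convert images/rotmap.ppm rotmap.png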
