Foes of the UNSEO: The Hanrul

I’ve been working on a post using the Mercury6 symplectic integrator to analyze the stability of the Yaccatrice System. It’s been a long learning experience, and I think I screwed up entering the initial orbital state vectors. Sky Moon fell into Cintilla in less than nine days, and Yaccatrice continued in a wildly eccentric orbit for several years thereafter. I expected that Yaccatrice might not be stable, but I find it really unlikely that tidal forces could have pulled the gas giant out of a circular orbit in just a little over one time step (of 8 days). Even less likely that a moon of that planet would have survived as long as Yaccatrice did. I need to go back and recalculate those position and velocity vectors. This may take a while…

In the meantime, I’ll post an alien species description from my notebook.


The Hanrul began as kind of a cross between Kzinti, Ferengi and Vulcans, with many of the worst aspects of the stereotypical steampunk mad scientist.

Coldly unemotional and ruthlessly exploitative at best. Often cold-blooded sophontocidal cannibalistic psychopaths.

The currently dominant culture of the Hanrul is one of the most technologically advanced societies yet contacted by humanity. In some areas, particularly biotech, the Hanrul are more advanced than humans. Their technology is frequently odd to the point of inefficiency as a result of their Mad Scientist culture. It’s important to note that, in spite of the term Mad Scientist, the Hanrul, while technologically advanced, have little concept of science as such.

Hanrul also show little evidence of any concept of other people as independent thinking entities with feelings that matter. Both among themselves and with individuals of other species, the Hanrul are utterly callous and without empathy. Such ethics as the Hanrul have mostly resemble Ayn Rand’s Objectivism.

Hanrul are very prickly about their byzantine hierarchy of social status. They are quick to take offense and equally quick to offend. Hanrul typically require long, drawn-out introductions in order to fully acquaint others with their status in a large number of overlapping and interlocking social spheres. It’s also nearly universal in Hanrul society that listening closely to an introduction is a tacit admission that the other is a superior. Hanrul social norms seem almost designed to make ignorance of status quickly evident. These things, along with oddities of Hanrul reproduction, tend to make life on Sylan short, dangerous and violent. Rising through the ranks by assassination and duels is the norm for Hanrul.

Although horribly xenophobic, with an inhuman (in all the worst senses) psychology, the Hanrul are biologically very similar to humans. This has made humans useful subjects for Hanrul genetic experimentation and vivisection. They do the same to low-status members of their own species.

Economic domination and selfishness are central to Hanrul culture. Their society is cemented together by Exploitation Societies great and minor.

Although humans find Hanrul culture disturbing (with good reason), and believe that individual Hanrul could grow up to be decent and productive members of galactic society if separated from their toxic culture, the experiment has never been tried. The act of removing people from their own society and forcing them to grow up in an alien environment against their will is considered against human ethics. Even given a lapse in ethics, the Hanrul are advanced and strong enough to fiercely resist any such effort.

The United Nations of Sol and polities of many neighboring sapient species resist Hanrul exploitation of other sapient species when possible, but otherwise long-term plans for dealing with the Hanrul are still very much a matter of debate.

The Hanrul didn’t have FTL technology until after first contact with the Solar Union.

Physical Appearance

Hanrul look something like an octopodal cross between a glossy patent leather wasp, and a centaur, with vicious crab claws.

The four legs on the Hanrul’s abdomen are used for walking. The two upper arms on the thorax end in hands with three opposed thumbs, and are used for manipulation. The two lower arms are greatly increased in size and end in claws which are used in combat and to help climb.


Large numbers of eggs are laid and cared for in communal hatching grottoes. Modern Hanrul cities often use sewage and other refuse to create a nutrient-rich mulch upon which the young larvae feed. Criminals and low-status Hanrul are frequently dropped into the spawning pits to feed adolescent Hanrul. The young also often fight and eat each other, helping to whittle their numbers down from ridiculously excessive to merely excessive when they finally emerge from the pits as young males.

Conflict continues for these males. In the dominant Sylan language, the word for male can also be translated as “cannon fodder.” Even with the further whittling of the male population by continuous warfare with other Hanrul and neighboring species, population pressures remain fierce.

Those males who manage to survive and fight to earn the right to mate will impregnate the female with hundreds of eggs. Laying eggs renders the female terribly hungry, and she will try to eat everything she can catch. Mating chambers are always well-stocked with food, because the female can easily die from the stress of laying so many eggs. She will still try very hard to eat her mate.

Provided the male manages to avoid being devoured by his mate, he will begin to go through a metamorphosis into another female. Hanrul females can live for centuries, and are capable of producing a clutch about twice per Earth year. In practice, they only lay eggs a couple of times a decade, because of the immense stress involved.

Sylan Empire

Sylan, the homeworld of the Hanrul, is still bitterly divided into small sovereign entities, often overlapping and with disparate systems of social organization. The global organization, which humans refer to as the Sylan Empire, is at best a loose confederacy consisting of a handful of “international” alliances.

An innovation of the new global culture is a harsh meritocracy that makes it costly and possibly deadly to rise above one’s level of competence.

Since expanding into interstellar space, the Sylan Empire has colonized several worlds and enslaved at least one primitive sapient species.

Although often only tenuously disciplined, the Sylan military is strong, technologically fairly advanced and highly aggressive. The Sylan Empire is widely considered a threat to its neighbors.

Hopefully, more to come soon. Sylan obviously needs some specs and maps…

The Astrographer


Wilbur on the Macintosh

Wilbur, as seen on my Macintosh. Note the Apple in the upper left corner.


While the title sounds like a good name for a fictional English(-ish) village (the Macintosh River seems a bit un-English-y), I’m actually talking about getting Joe Slayton’s Wilbur program running on my Mac OS X-based computer.

I’ve been using Boot Camp to boot my computer in Windows XP, which seems to work fine, but I’m not that fond of being bound to Windows till I can reboot. I’m aware of Parallels and similar programs which allow Windows to run in a virtual machine parallel to the Mac OS, but they cost money. If I had any money, I’d spend it on cartography and simulation apps… er, not to mention clothes for the kids and… food and whatnot… ehhh…

Anyway! Whilst pointlessly bouncing about the internets, I discovered this post on using Wineskin to “port” Orbiter to Macintosh. I’ve had some experience trying to get Wine working, so I wasn’t terribly optimistic, but I tried it. After some false starts (X11 doesn’t seem to open right on my computer, but I’ll go over my workaround if you have the same problem) I had Orbiter running on my computer. As far as I can tell, it works fine, though the simulation is realistic enough that I, at least, can’t get a Delta Glider with a ridiculous amount of delta-v into orbit to save my life. I may need to stick to Kerbal Space Program till I get some, um, skillz. But that said, Orbiter is a pretty big and complicated program, and I didn’t have too much trouble gettin’ it going.

Wilbur, on the other hand, is a relatively simple app, so I went with it. I suppose if I can get Wilbur, SagaGIS and Fractal Terrains running in Wine, I can dump Windows and free up about 180 gigabytes of disk space. Sadly ArcGIS, not surprisingly, doesn’t function on Wine.

First prerequisite, of course, is having Mac OS on your computer. If you have Windows and you like it, then you don’t need my help. If you have Linux, then the vanilla version of Wine should be hunky-dory, but I can’t really help you with that.

Second step is to get a copy of the Wilbur installer here. Next, get a copy of Wineskin Winery here and install it.

The first time you open Wineskin Winery, you need to install an engine. Simply click the plus sign next to where it says “New engine(s) available!” Select the highest-numbered version shown in the pull-down menu and click the “Download and install” button.

Once the new engine is installed, click Update under “Wrapper Version.”

Next click “Create New Blank Wrapper” and give it a good name, like “Wilbur 180.”

My computer runs Mac OS 10.6.8 with pretty extensive modifications, but if you get a pop-up window that says “The application X11 could not be opened,” don’t worry; just click Quit. Everything should be golden. Wait a little bit and another window should pop up that says “Wrapper Creation Finished.” Go ahead and click “View Wrapper in Finder,” and double-click the appropriately titled icon (Wilbur 180, in my case) to open the wrapper.

Don’t click “Install Software” just yet. Click Advanced. If you like, you can change the version number to an appropriate value; Wilbur is on 1.80 as of publishing this post. Also, because we’re using an msi installer, check “Use Start.exe.”

Go to the Options tab and check “Option key works as alt” and “Try to shut down nicely.” Now click on “Set Screen Options.” Under “Other Options,” check “Use Mac Driver” and “Use Direct 3D Boost.”

Wilbur needs the vcomp100.dll from Visual C++ Redistributable Components to run. I tried using the installer from Microsoft, but that failed. I also tried using the Winetricks tool to load “vc2010 express” in “apps” and “vcrun2010” in “dlls,” but that failed, too. Let’s close the Wineskin.

Instead, download a copy of vcomp100.dll from here. Scroll down and click the plus icon next to “How to install vcomp100.dll,” and scroll down to “Using the zip file.” Click “Download.”

When vcomp100.dll is downloaded and extracted, navigate to “Applications/Wineskin” in your user directory and right-click (or control-click) the Wilbur 180 icon. Select “Show Package Contents” in the resulting contextual menu. Drill down through “drive_c” and “windows” to “system32.” Drag the copy of vcomp100.dll you just downloaded into the system32 directory, and close the package.
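If you’d rather do that copy from the Terminal, something like this sketch should work. The wrapper name and download path here are assumptions — adjust them to match your own setup:

```shell
# Hypothetical paths -- adjust to your actual download location and wrapper name.
SRC="$HOME/Downloads/vcomp100.dll"
DEST="$HOME/Applications/Wineskin/Wilbur 180.app/drive_c/windows/system32"
if [ -f "$SRC" ] && [ -d "$DEST" ]; then
  cp "$SRC" "$DEST/"
  echo "copied vcomp100.dll into the wrapper"
else
  echo "check SRC and DEST -- one of them doesn't exist"
fi
```

The quotes matter, since the wrapper name contains a space.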

Now double-click on the Wilbur 180 Wineskin. Click “Install Software.” Click “Choose Setup Executable,” and browse to the location of the “Wilbur32_setup.msi” file. Sadly, as far as I’m aware, Wine can’t run 64-bit apps… Wait a bit, and the Installer window should open. Click Next.

I manually entered “C:\Program Files\Wilbur\” for the folder location. Next. Next. Close.

The Choose Executable window, if it comes up, should show “\Program Files\Wilbur\Wilbur1.exe.” If so, click OK.

Try a “Test Run.” If it succeeds, you’re golden.

If not… It took me several tries before I got everything shipshape.

You can either start again from the beginning, or, if you just want to try fixing a parameter that might not have gotten properly set, use Show Package Contents on Wilbur 180, and click Wineskin at the top of the package hierarchy. This will allow you to change any necessary Wineskin parameters or use any of the tools.

In my case, I usually just forgot to check Use Mac Driver in Screen Options.

Once everything checks out, you can open the app just like any Mac app, by double-clicking the appropriate Wineskin icon (Wilbur 180).

I’ve found that most of Wilbur works pretty well in Wine. The 3D Preview window fails completely, and some other windows, like Map Projection, don’t resize properly, but otherwise Wilbur is pretty functional and, in my experience, runs a bit faster than in Boot Camp. In fact, Wilbur is stable, if very slow, handling 8192×4096 images, which usually crash it pretty promptly in Boot Camp.

I’ve also successfully ported Fractal Terrains(which had problems with dockable windows, but otherwise seemed pretty good), and SagaGIS(which, so far, works fine), along with a few other programs.

I’m very satisfied with Wineskin, though I may have to keep Boot Camp to run ArcGIS and AutoCAD.

I hope this has helped my more Mac-loving friends add another dimension to their enjoyment of their computers.

Questions and comments, as always, are very welcome!

Thank you for reading,
The Astrographer



I may have found another way to flatten imagery and maps onto the equirectangular projection. Matthew’s Map Projection Software, created by Matthew Arcus, is a suite of command-line applications for creating and re-projecting maps. At least on my Macintosh, it was quick and easy to compile and link the code using the instructions given on the page. Given that it doesn’t need porting for a Mac, I’m confident it would work perfectly on other Unices and Linuces. For Windows, you’d need to install some sort of “make” program, but even without the make utility the compile process doesn’t look too terribly complicated. As always, ImageMagick is strongly recommended and free.

MMPS apps require PPM images for input and return PPM images for output. For those with a make app, the command “make ppms” will convert any jpeg files in the images subdirectory into PPM format. The ImageMagick mogrify command can also perform the conversion to and from a wider variety of file formats.

The thing that caught my eye was the inverse projection option. This allows the user to project from any of the available projections back to equirectangular projection. Here you can find a basic introduction to the use of inverse projection to create at least a partial map from imagery. And if this page, describing how to convert a four view into a map, doesn’t give you some idea of what I’ll be doing in a future post, you haven’t been reading much of this blog :). Suffice to say, if you start playing around with a well-placed orthographic camera and a sphere with an unshaded texture in Blender, it probably won’t be anything new to you by the time I get it out…

For my purposes, the thing that finally makes MMPS almost a straight-out freeware replacement for Flexify 2 is its ability to easily transform map coordinate systems (recenter the map to a different latitude and longitude and rotate around that center). Flexify has a much more extensive set of projections, but a lot of those are… peculiar, and the names are somewhat uninformative.

For instance, let’s say we start with a 2048×1024-pixel equirectangular png (generated in Photoshop with Lunarcell) named testmap.png, and we want to center the map over a point at 90ºE, 45ºN, and tilt around that center by 30º counter-clockwise. Start by using ImageMagick to convert to ppm:

convert testmap.png testmap.ppm

Now, we use the following command in MMPS to perform the coordinate system transform:

./project -w 2048 -h 1024 latlong -long 90 -lat 45 -tilt 30 -f images/testmap.ppm > images/rotmap.ppm

The resulting image, “rotmap.ppm,” will be essentially identical to an image transformed in Flexify 2 with the latitude slider set to 45, the longitude slider set to 90 and the spin slider set to 30. Perfect. The only unfortunate aspect of the MMPS project tool compared to Flexify is that it apparently can’t handle 16-bit imagery. Other than that and a slightly more limited selection of projections, it is an excellent substitute.
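For convenience, the whole round trip — PNG to PPM, transform, back to PNG — can be scripted. This is just a sketch of the commands from the example above; it assumes ImageMagick’s convert and a compiled MMPS project binary in the working directory, and note that I write the intermediate PPM into the images subdirectory so the project command finds it:

```shell
# Round-trip sketch: PNG -> PPM, recenter/tilt with MMPS, PPM -> PNG.
TO_PPM="convert testmap.png images/testmap.ppm"
PROJECT="./project -w 2048 -h 1024 latlong -long 90 -lat 45 -tilt 30 -f images/testmap.ppm"
TO_PNG="convert images/rotmap.ppm rotmap.png"
# Only run each step if the tool is actually present:
command -v convert >/dev/null 2>&1 && $TO_PPM
[ -x ./project ] && $PROJECT > images/rotmap.ppm
command -v convert >/dev/null 2>&1 && $TO_PNG
echo "$PROJECT"
```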


Painting Planets Using Blender’s Texture Paint Tool

I’ve been interested for a while in using the Texture Paint tool from Blender to paint on the globe. There are a few things that you need to know how to do to make this technique work. First, you need a uv-mapped sphere. This can be done in a variety of ways, but I still find the icomap method I’ve used before to be the most reliable and effective. The uv-map image texture doesn’t necessarily have to fit any particular projection if you’re going to use the POV-Ray spherical camera to reproject the globe, but it is best to have a mapping with a minimum of distortion of shape and area. This allows each pixel to reliably represent roughly the same surface area. The icomap projection does a very good job of this, as shown by the Tissot indicatrix.

The Tissot Indicatrix of an icomap. Each of the “circles” on the map represents a true circle on the spherical surface, all of the same size.

The fact that all those little “circles” have nearly the same area and are close to circular is a good indicator that distortion is minimal over nearly all of the surface. Although equirectangular projection (also referred to as “spherical,” “latlong,” “geographical,” “rectangular,” or “plate carrée”) is a good universal interchange format, it is very distorted, and it’s miserable to create from it an appropriate uv map, which is required for the Texture Paint tool. So essentially I’ll paint to an icomap and use POV-Ray to convert that into an equirectangular map.

I’ve been wanting to do a video tutorial for a while, mainly ’cause I’m not sure how clearly I’m describing the things I’m doing. Unfortunately, this being the first ground-up procedure of its kind, things got a little long. I’ve decided to split the procedure into several separate sections. Along with each video, I’ve included a rough transcript. The video is good for showing where things are and roughly what is being done, but when you’ve got that out of the way and just want to refer back to find forgotten keystrokes and such, text is much more efficacious.

NOTE: My limited talents as an actor, the learning curve of video editing software, and problems uploading to Youtube have all conspired to greatly delay this post. With that in mind, I have decided to post the text now and add in links to the videos as I can get them together. In the meantime, Andrew Price has tutorials for pretty much everything I know about Blender so far. If you dig into his tutorials and have a look at a few of the related videos on the Youtube sidebar, the text here should be pretty clear. My videos may even be anticlimactic. Oh well…

Part 1: Setting Up the Spherical Canvas

This video will demonstrate the process of creating and preparing the globe which will be the canvas on which the map will be painted.

Press shift-A to bring up the Add Object menu. Under Mesh, select Icosphere to create our globe.

The default Icosphere, as it turns out, is not a true icosahedron. It is a geodesic spherical shape consisting of triangular faces, but it’s not the classic d20 shape we all know and love. This is perfectly usable with the POV-Ray method for creating equirectangular maps, but I’d like to have a proper icomap straight out of the box. Just personal preference, that…

Hit the T-key with the mouse pointer in the 3D View space to bring up the Tools Panel, and at the bottom there will be a pane that allows editing of the parameters for the last command entered. The top entry for the Add Ico Sphere pane controls Subdivisions. Change it from the default setting of 2 to 1 to get a true icosahedron.

Now go into Edit Mode. Select Mesh>Faces>Shade Smooth in the 3D View footer menu or click the Smooth button under Shading in the Tools Pane. Hit the A-key to deselect all faces and, making sure the select mode is set to Edges, select an edge at the north pole. Holding down shift, select the other four edges around the north pole and all five edges around the south pole. Now select an edge crossing the “waist” of the icosahedron. This is somewhat arbitrary, but if we want a particular orientation to our icomap, it pays to select the edge with care and take note of its direction.

Looking at the icomap generated by Flexify, we see that the edges on either side trend from northwest to southeast. The best edge to select, in that case, would be the one that coincides with the positive Y-axis. The best way to find this edge is to look at the icosahedron in Object View. Later, this information will be used to select an appropriate look_at value for the POV-Ray camera. So make sure to write down your choice of direction.

Wilbur-generated icomaps have the opposite orientation, so the edge passing through the negative Y-axis would be appropriate.

Classic Traveller-style icomaps cut through the middle of a triangle. The best way to reproduce this effect would be to cut the triangle that the negative X-axis passes through using the knife tool. In the accompanying video, I demonstrate the Traveller style, both because it is the most challenging, and because it allows me to introduce a new and very useful tool, the knife. With the appropriate face and the north pole in view (the knife tool interferes with the view controls, a bit of a bug, frankly), hit the K key to use the knife. Click the vertex at the bottom of the triangle and, holding down the control key, select the midpoint of the upper edge. Click again on the north pole vertex and hit return to make the cut.

In the Shading/UVs tab of the Tools panel, under UVs, is, not surprisingly, the UV Mapping section. Below the Unwrap menu button click on the Mark Seam button. If you look at the 3D View canvas, you will see that the selected edges are highlighted in red to show that they are seams. Now, select all vertices of the icosahedron and click Unwrap. In the UV/Image editor, we will find that the unwrapped faces are displayed. In the UVs menu, check snap to pixels and constrain to image bounds. Using the Grab, Rotate, and Scale tools we will center the unwrapped faces and stretch them to almost fill the image space. “Almost,” because, even with large bleed, I had problems with blank spaces where edges were too close to the image boundaries. I’m hoping a bit of a margin will alleviate that.

Next, in the Properties View, give the icosahedron a basic material. Click the material tab, a coppery-colored ball with a faint checkerboard pattern. Under that tab, click the New button to create a new material. Don’t worry about the settings for now. It might be helpful to give the material a more informative name like “planet_surface” if you like. In the Options section of the material, make sure to check the UV Project box.

The last step in preparing the globe which will be the canvas for our map, will be adding a subdivision modifier. In the Properties view, select the tab with the wrench icon. This is the Object Modifiers tab. Under that tab, you will find a menu button which reads Add Modifier. Click on that and select Subdivision Surface under Generate. Under options, uncheck Subdivide UVs. Set Subdivisions in View to about 3 and in Render to about 4. If applied, 3 subdivisions will result in 960 faces and 4 subdivisions would result in about 3840. Keeping those face counts down could speed things up a lot down the line when painting.
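As a sanity check on those face counts — assuming the usual Catmull–Clark behavior, where the icosahedron’s 20 triangles become 60 quads at the first subdivision level and each further level quadruples them:

```shell
# Face counts after Catmull-Clark subdivision of an icosahedron:
# 20 triangles -> 60 quads at level 1, then x4 per additional level.
l1=60
l3=$(( l1 * 4 * 4 ))       # level 3
l4=$(( l1 * 4 * 4 * 4 ))   # level 4
echo "level 3: $l3 faces"   # level 3: 960 faces
echo "level 4: $l4 faces"   # level 4: 3840 faces
```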

Note that, while the particular way of marking seams for a Traveller-style icomap may be suitable for converting maps to equirectangular, the “sphere” that results is pretty badly distorted for display purposes. You can fix this with Spherize in Edit or Object mode(Shift-alt/option-S)!

Now on to Texture Paint!

Part 2: Painting the Texture

First off, in the main menu bar, open File>User Preferences… Look at the Add-Ons tab. Make sure Paint:Paint Palettes, Paint:Texture Paint Layer Manager and Import/Export:Wavefront OBJ format are all checked.

Set the 3D View to Texture Paint mode.

In the Tools panel(T-key), look under the Tools tab for the Color Palette section. This is probably an empty palette to start with. To add a color, click on the plus-icon next to Index and set the color in the usual manner. You can add a name for the color entry in the text field next to the color wheel icon. Repeat this process till you have your desired palette.

To save your new palette, start by clicking on the file folder icon at the bottom of the Color Palette section. This will allow you to select the directory from which to choose existing palettes or in which to save your new palette. At the top of the palette, there will be a pull-down menu saying Preset, find the plus-icon next to that and press it. Enter a name for your palette and press OK. The palette should be saved.

You’ll find the paintbrush settings in the Tools panel, at the top of the Tools tab. In this section, you can set the size and strength of your painting brush. Next to the radius and strength fields, there are buttons which allow control of the attributes by pen pressure. You can also set color here, but the palette will allow you to reliably repeat color selections.

Above the Color Palette section, you’ll find the Curve section. This allows you to set the softness of the edges and the sharpness of the center of the brush.

Finally, near the bottom of the Options tab of the Tools panel, you’ll find a Bleed option. A large bleed will make it less likely to render grey edges on the surface. The larger, the safer. If you want to use the icomap you paint directly, it’s best to leave this at zero. Bleed also makes painting a bit slower…

The next point is the use of Texture Paint layers. Near the bottom of the numeric panel(N-key) are two sections of interest.

The first section is Texture Paint Layers. This allows you to select any materials associated with the object and, below that, any existing textures that are part of the selected material. To edit any given texture, simply click on the paintbrush icon next to the texture’s name. If you don’t see any textures with paintbrush icons, then you need to read the next paragraph.

Beneath Texture Paint Layers, you’ll find the Add Paint Layers section. If you don’t yet have a diffuse color texture, click on the Add Color button to add a new layer. Give it a name and you should find that texture listed in the Texture Paint Layers section above.

At this point just start painting on the globe.

Setting up a bump map layer can be a bit more complicated. While clicking Add Bump is simple, as far as I can tell it creates an 8-bit image. For bump maps, it’s best to use at least 16-bits to avoid stair-stepping. Also, part of the intent of this exercise is to create detailed map data down the line.

With that in mind, we’re still going to create the new bumpmap texture by clicking Add Bump. Now go into the UV/Image Editor View and find the button with the plus sign next to the image browser pulldown. Click that, and in the window that pops up, enter a name and a desired resolution. Make sure to check “32 bit Float” before you hit OK. In the Texture Properties, make sure to select the 32-bit image you just made in the Image section; in the Mapping section, make sure the Coordinates are set to UV and that your UVMap shows in the Map selection area. In the Influence section, make sure Normal, under Geometry, is checked and everything else is unchecked. Make sure the normal influence is a positive value; I’d go with 1.0 while painting. You can adjust the value (probably downward) later, to make it pretty. Your canvas is now ready to paint in the bumps.

For best results, use one of the pointier brush Curves, fairly low Strength and Radius with pressure-sensitivity for both, and set the Blend to Add(to raise) or Subtract(to lower). For most purposes, leaving the color set to white is perfectly good. You should now be prepared to start painting bumps!

If your computer is decently fast, you should use Textured viewport shading. I use a five-year-old, bottom-of-the-line MacBook, so things get a little boggy, but it’s still usually worthwhile to be able to see what my bumpmapping looks like.

Once you’re done, save the color map to png and the bump map to 16 bit TIFF. I’d love to use full 32-bit OpenEXR, but my conversion paths are limited.

Part 3: Flattening to Equirectangular

In the main menu, select File>Export>Wavefront(.obj) to export the globe. Give it a name and save it.

Now open Wings3D. In the menu select File>Import>Wavefront(.obj)…, and find your saved globe object. Now, we’re going to turn right around and export to POV-Ray(File>Export>POV-Ray(.pov)). Wings3D is a capable and highly useful modeling tool, but this time all we’re doing is using it to translate between Blender and POV-Ray. Go figure…

Now we can go to our favorite text editor to change some settings in the generated .pov file. In global_settings, set the ambient_light rgb vector to <0.7,0.7,0.7>. If this proves too dim after rendering, you can increase it later. Set the camera’s location equal to <0,0,0>. Comment out (//) the right, up, angle and sky lines. Set the camera look_at according to the location where you made the waist seam in the uv-mapping stage. Note that Y and Z are reversed in POV-Ray relative to Blender, so if your cut was across the positive y-axis, you’ll want to look at the negative z-axis (<0,0,-1>). For the Traveller-style map, my cut was across the negative x-axis, so in my example I’d set the look_at to <1,0,0>. Comment out the light_source (nest it between /* and */). Add the uv_mapping statement to the texture. Go down to ambient, and comment out the color_rgbf statement. Add image_map {png “name-of-map-image”}. You should be able to render now and save the image to disk.

Finally, we open the resulting image in Photoshop and flip the canvas: Image>Image Rotation>Flip Canvas Horizontal. The analogous command in GIMP is Image>Transform>Flip Horizontally. Save the result, and you have your image as a proper equirectangular map.
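If you’d rather skip Photoshop or GIMP for this step, ImageMagick can do the same horizontal flip from the command line with its -flop operator. A sketch — the file names here are placeholders:

```shell
# Mirror the rendered image horizontally; -flop is ImageMagick's horizontal flip.
FLIP="convert render.png -flop equirectangular_map.png"
# Run only if ImageMagick's convert is available:
command -v convert >/dev/null 2>&1 && $FLIP
echo "$FLIP"
```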

Part 4: Converting the Bumpmap

To do the same for the bumpmap, you need to be able to convert the 32-bit image into something that POV-Ray can render. You could possibly use Landserf to convert the 32-bit single-channel data into an RGBA-separated png image, project that in POV-Ray, then come back to Landserf to recombine. You’d want to save the 32-bit bumpmap to OpenEXR in Blender, use Photoshop to save that to a 32-bit TIFF, then use GDAL to convert the TIFF to something Landserf can read (like BT).
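The GDAL leg of that chain might look like this. A sketch only: it assumes gdal_translate is installed, that your GDAL build reads the 32-bit TIFF, and the file names are placeholders:

```shell
# Convert the 32-bit TIFF to BT, a terrain format Landserf can read.
TIF_TO_BT="gdal_translate -of BT bumpmap_32bit.tif bumpmap.bt"
# Run only if gdal_translate is available:
command -v gdal_translate >/dev/null 2>&1 && $TIF_TO_BT
echo "$TIF_TO_BT"
```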



A Resource for Learning Quantum GIS

I found a nice set of video tutorials for learning the use of QGIS at Mango Map. The first module introduces the QGIS interface. The second module goes over the basics of creating a map. It looks like further posts are being made at roughly weekly intervals(like my own blog… in theory).

Hopefully this will be a good introduction to the use of the program, even if it doesn’t necessarily delve deeply into the particular problems of people trying to use QGIS to create maps of imaginary places. Fantasy mapping is still mapping, so the basics will be useful.

The Astrographer


Big Planet Keep on Rolling


Same planet, slightly better render…

My intended post for last week took so long that I decided to simplify things a bit. I was going to discuss prettifying the tectonics.js output and making a Blender animation of the prettified planet spinning. I’ve learned a lot about QGIS (and Wilbur) while trying to do this, but I’m still groping around. I’m not saying anything against tectonics.js; it’s my fault for pretty much ignoring too many of the useful hints the program gives and misinterpreting too many of the others. I also habitually underestimate just how wide my mountain ranges are. Anyway, for now I’m just going to focus on the animation using the planet I have. Not Earth, that’s just cheating, but the not-altogether-successful planet I tried to create over the last two weeks. I need a quicker workflow; one that doesn’t involve constantly googling for instructions…

I’ll start with a sphere similar to the one I put together for an earlier article on displaying your world. I replaced the bedrock color I previously used for the diffuse and specular color with a hypsometric texture I created in wilbur. My original intent was to create a more realistic satellite view with a simple climate model. That would have been awesome!

I used a 16-bit TIFF image for the bumpmap. Sixteen-bit PNGs seem to fail in Blender, so I used QGIS to convert my PNG to TIFF. I also wanted to create a subtle displacement map, but the sixteen-bit TIFF inflated the planet into a lumpy mess several times as large as the undisplaced sphere, even with a nearly zero displacement influence. I decided to use a more conventional 8-bit version of the map for a separate displacement texture.

First thing I tried was to use the gdal_translate tool to convert my 32-bit floating point BT elevation map into an 8-bit png.

gdal_translate -ot Byte -of PNG ${input_file} ${output_file}

where ${input_file} is the name and path of the input file, and ${output_file} is the desired name and location for the converted file.

Unfortunately, this failed badly. Basically, all the elevations were clipped to below 255 meters. Instead, I used the Raster Calculator to make an intermediate file with the following expression:
${input_elevation_layer} / 32.0
This results in another 32-bit elevation file with values in the range of 0..255. It helped that I started with an elevation range from sea level to less than 8000 meters. The divisor may need to be larger if the range of values is larger, and can be smaller if the range is smaller. I then used the gdal_translate command above to convert that into an 8-bit png.
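The rescaling arithmetic above can be sketched in a few lines of Python; the divisor of 32.0 and the 0..8000 m range are this post's example values, not universal constants:

```python
# A minimal sketch of the Raster Calculator rescale: divide 0..8000 m
# elevations by 32 so they fit the 0..255 range of an 8-bit image.
# The divisor 32.0 assumes a maximum elevation under 8160 m (255 * 32).
def rescale(elevations, divisor=32.0):
    return [min(255, int(e / divisor)) for e in elevations]

print(rescale([0.0, 1600.0, 7999.0]))  # [0, 50, 249]
```

Incidentally, gdal_translate also has a -scale option (e.g. -scale 0 8160 0 255) that can do a similar linear remap in one step, which might save the intermediate file.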

Since all I wanted was a very small relief like that on some globes, the 8-bit was sufficient. Unless you’re using something like Terragen, there’s really no way to make a displacement map in realistic proportions; real planets are proportionally smoother than cue balls.

For the bumpmap I had used a normal influence of 12.0; for the relief texture, I used a displacement influence of just 0.12, even though the 8-bit map’s values were all less than 256.

I decided to discard the clouds and atmospheric effects. Maybe this is a desk globe. Perhaps I should also model a stand for the thing… A slightly less subtle displacement might be in order.

Now that we have a kinda decent globe, let’s animate the thing. I started at frame zero, with the rotation set to zero. In the tool palette to the left of the 3D View (toggle it on and off with the “t” key), I scrolled down to find the keyframes section, clicked “Insert” and selected “Rotation.”

At the bottom of the Timeline editor there are three numeric entry fields. The first two are labelled “Start:” and “End:.” Predictably, these denote the starting and ending frames of the animation; this will be useful later. To the left of these is another numeric field with the current frame number displayed. Click on this and enter a frame number for the next desired keyframe. I chose to put in keyframes every 65 frames: 0, 65, 130, 195, and 260. At each keyframe, I went to the numeric palette to the right of the 3D View (toggled with the “n” key); near the top you’ll find Transformations. I added 180° to the z-axis rotation with each keyframe: 0, 180, 360, 540 and 720.
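The keyframe schedule above (one 180° half-turn every 65 frames, for two full rotations) works out like this; just arithmetic, nothing Blender-specific:

```python
# Frame number and z-rotation (degrees) for each keyframe:
# 65 frames per half-turn, five keyframes for two full rotations.
keyframes = [(i * 65, i * 180) for i in range(5)]
print(keyframes)  # [(0, 0), (65, 180), (130, 360), (195, 540), (260, 720)]
```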

With that done, it was time to go to the Properties editor and select the Render tab. There are sections here controlling the display of renders, resolution, anti-aliasing and the like. I invite you to experiment with other sections, but for this I’ll focus on the Dimensions and Output sections. In Dimensions select the desired resolution and frame rate. I went with a 960 by 800 pixel image size and 16 frames per second. If you change the resolution you may need to (g)rab and (r)otate the camera to restore the composition of your scene. I’ll wait.

Below the X and Y resolution there is an additional percentage field. This allows you to create fast test renders without messing around with the camera every time. This is a pretty simple project, but when you are dealing with more complex scenes and longer render times, it’s nice to be able to take a quick look at what your scene looks like to the camera.

Under the Output section, first select an output path. Since I’m going to render all the frames separately and stitch them together later, I decided to create a directory specifically for my render. Check Overwrite and File Extensions; you may need to redo things…

Below the Placeholders checkbox, which I leave unchecked, there is an output format menu with a number of image and movie formats. You could choose a movie format like MOV, AVI or MPEG, but I’m going with png for individual numbered frames. I believe you can give a C printf-style name template, but I’m not entirely sure.

To render an image press F12, to render an animation sequence, press ctrl-F12. You can also select them under Render in the Info panel menu.

Initially, I set the animation to start at frame 1, the frame after the initial keyframe and to end at frame 260, the last keyframe which returns the globe to its initial rotation. This is supposed to allow looping without hesitation, but when I rendered an avi internally, it seemed like the animation was accelerating up to speed and decelerating at the end. I’m not sure why this was happening, but the render time was a bit long, so I figured I’d render out a full rotation from the middle of the sequence and stitch images together in an outside program. Thus, I set start to 66 and end to 195. Once all the images were rendered and saved under names of the form 0066.png .. 0195.png, it was time for stitching.

From my understanding, ffmpeg is the best free standalone program for stitching together images into movies (and a lot of other movie-related tasks; it’s kind of the ImageMagick of movies).

In my unix terminal I entered the following command:
ffmpeg -r 16 -vsync 1 -f image2 -start_number 0066 -i %04d.png -vcodec copy -qscale 5 ${output_file}

-r 16 sets the speed to 16 frames per second

-f image2 tells it to accept a sequence of images as input.

-start_number 0066 is important. It tells the program to start rendering from an image with frame number 66. Otherwise, if it doesn’t find an image with an index less than five it will assume files are missing and punt out.

-i %04d.png is a format descriptor telling ffmpeg where to look for input files. ${output_file} is the name and format of the desired output movie file.

The rest of the options may or may not matter. I’m not taking chances…
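For the curious, the %04d input pattern expands just like C's printf; a quick illustration (plain Python, no ffmpeg needed):

```python
# How ffmpeg expands "-i %04d.png", starting at -start_number 66.
names = ["%04d.png" % n for n in (66, 67, 195)]
print(names)  # ['0066.png', '0067.png', '0195.png']
```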

Next time, maybe I’ll add sound…

Comments, questions, corrections or suggestions are welcome! Thank you for your patience,
The Astrographer

Posted in Mapping, World Building | Tagged , , , , , | Leave a comment

Geometry for Geographers


Today, I’d like to share a few geometric formulae I’ve found useful in worldbuilding. There are formulae here for determining the distance between two points with known latitudes and longitudes, the inverse problem (latitude and longitude of a destination given a known origin location, a direction and a distance), the area of polygons on a sphere, the distance to the horizon for a planet of a given radius given a viewpoint height, and the area of a circle of given radius on a sphere.

Great Circle Distance Between Two Points on a Sphere

If you know the latitude and longitude of two points on a sphere, you can figure out the arc distance in radians between those points with just a little trigonometry. Point A is at latitude lat_a, longitude lon_a. Point B is at latitude lat_b, longitude lon_b. The difference of longitude is P = lon_b − lon_a.

The arc distance is, GreatCircle_ArcDistance_FORMULA.

Thus distance, GreatCircle_Distance_FORMULA.

Once you know the distance, you can readily calculate the initial bearing from point A to point B: bearing_FORMULA. You can figure out the final bearing by interchanging b and a. This will prove useful in determining the area of spherical polygons. Keep it in mind.
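Since the formula images may not render everywhere, here is a sketch of the standard great-circle arc distance (spherical law of cosines) and initial bearing in atan2 form. I believe this matches the intent of the formulas above, but treat the exact sign conventions as my assumption:

```python
import math

def great_circle(lat_a, lon_a, lat_b, lon_b, R=6371.0):
    """Arc distance (radians), surface distance, and initial bearing A->B.
    All angles in radians; R defaults to Earth's mean radius in km."""
    P = lon_b - lon_a  # difference of longitude
    arc = math.acos(math.sin(lat_a) * math.sin(lat_b)
                    + math.cos(lat_a) * math.cos(lat_b) * math.cos(P))
    bearing = math.atan2(math.sin(P) * math.cos(lat_b),
                         math.cos(lat_a) * math.sin(lat_b)
                         - math.sin(lat_a) * math.cos(lat_b) * math.cos(P))
    return arc, R * arc, bearing

# A quarter of the equator: 90 degrees of arc, heading due east.
arc, dist, brg = great_circle(0.0, 0.0, 0.0, math.pi / 2)
print(arc, dist, brg)
```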

Destination Given Distance and Bearing from Origin Point

Given a known point at lat_a, lon_a, a planet’s radius, R, a bearing, θ, and a distance, d, how do we find the new point lat_b, lon_b? Note: since Mathematica’s implementation of the atan2(y,x) function is apparently functionally identical to its atan(y/x) function, just the same function name overloaded with inverted input order (ArcTan[x,y] == ArcTan[y/x]), I decided to just go with the y/x form. In a Java or Python or, apparently, JS program, you’d use atan2(num, denom) instead.



For further information, check this page out.
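Here is a sketch of the standard destination-point formulas (angles in radians, using atan2); the exact conventions are my assumption, so check them against your own source:

```python
import math

def destination(lat_a, lon_a, bearing, d, R=6371.0):
    """Destination point given a start point, initial bearing and distance d."""
    delta = d / R  # angular distance traveled, in radians
    lat_b = math.asin(math.sin(lat_a) * math.cos(delta)
                      + math.cos(lat_a) * math.sin(delta) * math.cos(bearing))
    lon_b = lon_a + math.atan2(
        math.sin(bearing) * math.sin(delta) * math.cos(lat_a),
        math.cos(delta) - math.sin(lat_a) * math.sin(lat_b))
    return lat_b, lon_b

# Due north from the equator for a quarter circumference: the north pole.
lat_b, lon_b = destination(0.0, 0.0, 0.0, 6371.0 * math.pi / 2)
print(lat_b, lon_b)
```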

Area of Spherical Polygons

The formula for the area of a spherical triangle is pretty simple looking: Spherical_Triangle_AREA. A, B and C are the three inner angles of the triangle, R is the radius of the sphere and S is the surface area of the triangle. For each vertex, use the Great Circle formulas above to determine the distance and bearing to both neighboring vertices. The inner vertex angle is equal to the angle between the bearings to the two neighboring vertices.

The same principle is used to find the area of more complicated polygons. In the general polygon case, though, it’s important to keep track of convex and concave angles. It might be necessary to make diagrams to keep track of which angles are internal.

Spherical_Polygon_AREA, where σ is the sum of angles in radians, and n is the number of sides.
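The spherical excess formula above can be sketched like so; the triangle case is just n = 3:

```python
import math

def spherical_polygon_area(angles, R=1.0):
    """Area S = R^2 * (sum of interior angles - (n - 2) * pi)."""
    n = len(angles)
    return R * R * (sum(angles) - (n - 2) * math.pi)

# An octant of the sphere is a triangle with three right angles:
# its area should be 1/8 of the total surface 4*pi*R^2, i.e. pi/2.
print(spherical_polygon_area([math.pi / 2] * 3))
```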

Distance to the Horizon

Figure 1


As shown in figure 1, point P is our central point of interest, point H is the point on the horizon as viewed from P, point A is the point on the surface directly beneath P, and angle θ is the angle subtended, at the center of the sphere, between points P and H. As before, R is the radius of the sphere.

D, the direct distance between points P and H, is also known as the slant distance. The formula for slant distance is horizon_slant_FORMULA, where h is the distance of the viewing point above the ground(length PA).

The value for θ would be, horizon_theta_FORMULA.

The distance along the arc AH is d=Rθ, with θ in radians. Thus, the arc distance, which I call the map distance, since it would be the distance measured on a map, would be map_distance_FORMULA.

The area of a planet observable from a point at height, h, is, observable_area_FORMULA.

The fraction of a planet observable from that height would be, observable_fraction_FORMULA.

For reference planetary_surface_FORMULA, which is the formula for the total surface area of the planet.
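Pulling the horizon formulas above together in one sketch (h is the viewpoint height, same symbols as the text; the cap-area form is my reading of the observable-area formula):

```python
import math

def horizon(R, h):
    """Slant distance, subtended angle, map distance, observable area and
    observable fraction for a viewpoint at height h above a sphere of radius R."""
    slant = math.sqrt(h * (2.0 * R + h))           # direct distance P to H
    theta = math.acos(R / (R + h))                 # angle at the sphere's center
    map_dist = R * theta                           # arc distance A to H
    area = 2.0 * math.pi * R * R * (1.0 - math.cos(theta))  # spherical cap
    fraction = area / (4.0 * math.pi * R * R)      # share of total surface
    return slant, theta, map_dist, area, fraction

# From a height of one full radius you can see a quarter of the surface.
print(horizon(1.0, 1.0))
```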

Area of a Circle on the Surface of a Sphere

Figure 2


My next formula will be for the surface area of the circular region within a distance, d, of a point, P, on the surface of a sphere of radius, R, as shown in figure 2. From page 128 of the CRC Standard Mathematical Tables, 26th edition (similar information, with 3d figures, here), I find under spherical figures that the zone and segment of one base has a surface area of zone_and_segment_SURF. Incidentally, the volume of this portion of the sphere is zone_and_segment_VOLM, not that we’re using that here. The arc distance from P to the edge of the area is d=Rθ. An examination of the geometry leads us to the conclusion that h = R(1 − cos θ), so the area of the spherical surface within angular distance θ of the center is circle_on_sphere_FORMULA.
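That last formula can be sketched as follows, using the cap height h = R(1 − cos θ) and the zone-of-one-base area 2πRh from the CRC tables:

```python
import math

def circle_on_sphere_area(d, R):
    """Area of the region within arc distance d of a point on a sphere of radius R."""
    theta = d / R                     # angular radius of the circle
    h = R * (1.0 - math.cos(theta))   # height of the spherical cap
    return 2.0 * math.pi * R * h      # zone-of-one-base surface area

# Sanity checks: arc radius pi*R covers the whole sphere (4*pi*R^2),
# and arc radius pi*R/2 covers exactly a hemisphere (2*pi*R^2).
print(circle_on_sphere_area(math.pi, 1.0), circle_on_sphere_area(math.pi / 2, 1.0))
```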

Posted in World Building | Leave a comment

Displaying your Planet in Blender

I love it when a plan comes together!

I love it when a planet comes together!


In the process of making some pictures for recent blog posts, I’ve found myself messing around quite a bit with Blender. There are things I’ve done before in Blender that I had completely forgotten how to do, and there are other things that are somewhat involved to do in Blender that other programs pull off without a hitch.

Some of this comes down to the general-purpose nature of Blender as compared to the more focused purposes of other programs. Displaying a map on a rotating globe is fairly easy for gplates, because that’s one of its core competencies. On the other hand, gplates isn’t capable of displaying raytraced specularity variation across a planet’s surface or showing proper hillshading due to surface topography. Bryce is capable of doing these things, to some degree, and some of them are easier there. Bryce is getting pretty long in the tooth at this point, though, and even fairly simple renders are sloowww. Terragen is pretty sweet: like Google Earth with raytracing and your own world’s terrain. Unfortunately, my family has to eat and stuff, so Terragen is right out…

Creating a Globe

Our first step will be to create the globe we’ll be texturing. On the menubar, select Add>Mesh>UV Sphere. Since we’re not going to do UV-mapping on this one, I’m going to go ahead and smooth the thing. First, go into Edit mode and in the 3D View menu select Mesh>Faces>Shade Smooth. Next, return to Object mode. In Parameters, select the Modifiers tab. Click Add Modifier(wrench icon) and select Subdivision Surface. Set Render to three subdivisions and click Apply Modifier. If you like, you can forego applying and simply leave the modifier in place. Whatever you choose, you now have a globe. Now to texture the thing.

Loading Spherical Image Textures

The first problem to solve is loading equirectangular projection(or “spherical”) images and using them as textures. Surprisingly, this seems easier with UV-mapped icosahedral textures. Although, to be honest, I did all the modeling and UV-mapping in Wings3D. For this purpose, I’ll be using textures generated by the Tectonics.js program. I am aware that these aren’t necessarily suitable as-is for this purpose, but this can be considered sort of an early evaluation prior to spending a lot of time optimizing them.

I’ll start with the bedrock texture. This is the simplest, because it’s simply a color, which is the easiest thing to apply in Blender. Making sure you have your globe selected, go to the Material editing tab(brass ball icon) in Properties. There will be a button that says New. To the left of that will be a pull-down that allows you to browse materials. If you started with an empty scene then the only material in the list will belong to the sphere you made. Select that. Rename it if you wish. For now we’ll leave the settings as-is. It’s hard to tell what effect the various shading options will have till you have the textures onboard.

Bring up the Textures tab(checkerboard square). The currently selected texture will be imaginatively named Tex, and its type will be None. Change the type to Image or Movie, and, if you like, change the name to something more descriptive of its role as surface coloration. While you’re up there set the Preview to Both, so you can see the texture image and get some idea what it’s going to do. Make sure the preview render is on a sphere.

Now, scroll down to Image and click Open. Pick out the desired image from the filesystem. In the preview, you’ll see that the texture is about what we expect, although squeezed into a square. The material preview, however, is going to be disappointing. This is because the projection of the flat texture onto the sphere is wrong. Let us now fix that.

Scroll down to Mapping. The coordinates seem to be fine as Generated, so we’ll leave that be. Let’s change the Projection to Sphere, and have us a look at the preview. The material should be much better.

Let’s make a trial render to see how this came out. Go to the Render tab(camera icon) and scroll down to dimensions. Set the X and Y resolution to whatever you’ll want as your final render size and set the scale to 50% to speed up your trial renders. If your desired resolution is much less than 1000×1000, maybe you should leave scale at something closer to 100%…

Scrolling down to Output, you can set your image format and related parameters. I’m not too worried about that at this stage. I’ll just let the trial renders live in slots within the program till I’m ready for a final render.

Scroll back up to the Render pane. I usually set Display to New Window for convenience, because it defaults to Image Editor and replaces your 3D View window with an Image Editor window. Set that as you like… Click Render or press F12. Not the prettiest thing ever, but the texture seems to work. It seems to me, the seas should have more glare than the land. Let’s see what we can do about that.

Now, previously in Photoshop, I created a Sea mask image by making a magic wand selection of the water in the bedrock image and saving the resulting channel to its own file. I also made a land mask image by saving an inverted version of same. I go back to the Texture tab and select an empty texture slot. Hit New and select Image or Movie. Scroll down to Image, hit open and select the sea mask image. Make sure to uncheck Use Alpha under Image. This image doesn’t have a useful alpha channel, so we want it to use the greyscale rgb as alpha, which is what it uses to control intensities. Set your mapping and such as with the previous texture. You’ll see the black and white image in the texture now, instead of the bedrock colors, but at least it ain’t a white cueball and everything’s in the right place.

Scroll down to Influence. Uncheck Diffuse Color, check Specular Intensity. Maybe check Hardness under Specular, as well. The sea colors seem a bit bright, so you could use this to put a large negative influence on diffuse intensity as well, but, in my limited experience, that is fraught with issues(it tends to brighten the land too much, it’s a bear to adjust, and the color of the water tends to get way too deep and saturated by the time you’ve gotten it dark enough). Best way to adjust colors, for the moment, is probably in the texture itself, using your favorite image editor(not, in my case, by any means, Blender). Try another trial render.

At this point, you should adjust the parameters on the material and textures. This will involve a certain amount of trial and error, jogging between the textures and the material controls and frequent trial renders. Try other controls as well, such as the other texture influences and stuff in the Shading panel of the Materials tab.

Next thing to do is to give the globe a bit of relief. Once again, we select an empty texture slot, create a new image texture, load an image (this time elevations) and set the mapping and such. Uncheck all of the influences except Normal, and reduce the strength of the normal to at most about 0.5, unless of course you want to intentionally exaggerate relief in order to bring out smaller features.

This would be a good time to try a preliminary full render. Take a note of the dimensions of the planet sphere. Once we have the planet surface the way we want it, it’s safest to go up to the Outliner and restrict viewport selection of the planet surface object. Just click on the arrow icon to shadow it; click on it again if you need to change the planet in the future.

Making a Cloudsphere

Now we add a new sphere with the same center as the planet globe to put the clouds on. My notes say that the X/Y/Z dimensions of the globe are 12/12/12, and I want the clouds to hug the planet pretty closely, so I’ll size it to 12.35/12.35/12.35 after smoothing and such. Make sure to smooth and subdivide the clouds sphere as you did the planet. Create a new material, and zero its diffuse, specular and ambient values (at least initially). Check Transparency and set it to Raytrace. Set alpha to zero. Go down to the Options pane and turn Traceable off. Leaving Traceable on always seems to make the planet surface render solid black; I’m not certain why. Do a quick test render to make sure the planet surface is still visible.

Add a new texture for your clouds. Figuring out a noise that looks good for global clouds is a problem I’ve yet to solve, so I’ll leave you to work out the details. I used a Distorted Noise with a Voronoi F2 basis and considerable Improved Perlin distortion. In Mapping, I stretched the size by about three in the z-coordinate. Best results could be attained by loading a real world global cloud map, but these sometimes show evidence of Earthly continent shapes to the wary. An artist could try painting in a cloud map, but my skills aren’t remotely up to that. For now, this will have to do.

I gave the cloud texture influence over diffuse intensity, color and alpha, specular intensity and geometry normal. All of these were close to one, with small adjustments downward.

I put a ramp on the colors. It’s all white, but the alpha is 0.8 on the right and 0.0 on the left. I added another 0.8 alpha stop at the 0.965 position, and another 0.0 alpha stop at position 0.480. The ramp allowed me finer control over cloud cover. A final render with clouds is in order.


Next we add an atmosphere. This is still very much a work in progress. I’m trying for something like a LunarCell atmosphere with more control and realism. I haven’t yet attained the first goal. I’m pretty sure Blender has a way to make volumetric density fall off with distance from the center, but I haven’t figured it out yet. If I can figure out how to make an object presence mask, like I can in Bryce, then I could possibly do something useful with a radial gradient in Photoshop. No dice yet, though. To start with, I’ll just settle for a volumetric ball with some scattering.

So, first we make a nicely smoothed and subdivided sphere with X/Y/Z dimensions of 13/13/13. We create a material for it. Make the material transparent, with density, oh, let’s push it down to 0.1. I’ll rack the scattering up to 1.0, with a -0.5 asymmetry, meaning that more light is back scattered. A test render and… that didn’t come out well. Must remember to uncheck Traceable in the Options pane of the Material. Try again… success! Looks a little extreme, though. Since density should already be pretty subtle, I’ll start by reducing the Scattering values a bit, especially the amount. By the time I’m done with the whole test render (30% size, now, ’cause it’s not quick), adjust, render again process, I have a density of 0.12 and a scattering of 0.3 with 0.0 asymmetry. It looks good, but maybe a little too wide, so I reduce the size of the atmosphere sphere to 12.7/12.7/12.7.

I’m pretty happy with the results. The shaded relief needs work in Wilbur, and, in spite of a lot of fiddling, the cloudmap isn’t nearly as good as what LunarCell can do. Which isn’t actually very good. LunarCell is good for pretty pictures and its mapgen isn’t bad so far as noise-centered generation goes, but its cloudmap generation is socially awkward at best. Sadly, it’s about the best clouds-from-noise I’ve seen… Looks ok from a distance, but it needs work. I’ll probably just have to bite the bullet and use real-life clouds.



Hopefully, this was useful to people. If not, it should at least be a good reference for me. I’ve gotten pretty good with the very basics of Blender, but beyond rendering models as monochromatic plastic toys, materials have had me flummoxed. This should be useful next time I’m trying to texture a spaceship. It should also make a good background.

For my next trick, the real reason why I jumped into Blender with this in the first place, a revolving-head animation of the planet. Now I’m well away from familiar shores!

Thanks for reading all of this, and any comments and advice are very very welcome.
The Astrographer

Posted in Mapping, Planetary Stuff, World Building | Tagged , , , , , , , | 1 Comment

Realistic Plate Tectonics with Tectonics.js


For some time I’ve had an interest in terrain generation using simulated tectonic processes. I’ve successfully used PlaTec, but it’s strictly 2-d and the output is pretty limited. Another one that seemed promising was pytectonics, but since it froze my system dead, I’m not sure how good it might be(sour grapes and all that…).

Recently, I came across a plate tectonic simulator that runs in javascript in the web browser. Surprisingly, given all the trouble I’ve had with compatibility issues lately, it worked and was reasonably fast. Tectonics.js was created by Carl Davidson, who was also the author of the aforementioned pytectonics. I’ve been engaged in a discussion with Mr. Davidson on reddit, and he has been very active and responsive to user suggestions.

The procedure, in a nutshell, will be, first, to create an attractive tectonic simulation, and then, second, to convert that into a decent map.

First, run Tectonics.js at a speed of about 5 Myr/s till the age reaches about a billion years or so. The goal here is to give the model time to reach a reasonable equilibrium without spending forever doing it. Slower speeds, on the other hand, tend to produce more attractive results. I’m using the Safari browser, which isn’t all that fast, but my attempts with Chrome, while much faster, also tend to crash out after roughly the first billion years. If your browser has a significantly faster javascript implementation, your computer is a bit less long in the tooth than mine, or you’re a lot more patient than me, it could pay off to run at smaller time steps. Although it took most of a day, I’ve made runs at as small a time step as 0.25 Myr/s. For the most part, the results were much cleaner.

From about a billion years, reduce the time step or “Speed” to 1 Myr/s. Run it like that till you approach a desired age, perhaps four to five billion years. Make sure you get at least the last half billion years or so at no more than 1 Myr/s time step. If desired, reduce the speed to around 0.25-0.5 Myr/s for the last few hundred million years.

When you’ve reached the desired time or the map is in a configuration you find attractive, reduce the Speed to zero to stop the animation. Personally, I consider the Bedrock view attractive and useful, and the Plates view is a crucial guide to building your world. The Elevation view is less useful than I’d hoped, but it’s still helpful. First, make sure that the projection is set to Equirectangular, and the screen is sized so that some black is showing all around the jagged edges of the map. This can take some window resizing and using the scroll wheel to bring the image in and out. It’s self-explanatory once you try it. Next, set the view to Bedrock and press p to create a screenshot in a new tab. Save the new tab to a png in your working directory. Repeat this process with the view set to Plates, then again for Elevation. You can also save copies in other modes, like temperature and precipitation, but, as of this writing, those are less useful. The program is currently in active development, so those modes may be more useful later.

It can pay off to save intermediate imagery before you reach your desired time. Sometimes the model approaches an attractive configuration, then, in a fit of perversity, quickly morphs irretrievably into an ugly mess. Perhaps, even if you don’t initially intend to model the geological history of the planet, having maps of the earlier continental positions could be useful later. Particularly, if you’d like to model adaptive radiation of local lifeforms and such, having at least a sketchy history of the world’s tectonic drift could be helpful. I’ll deal with geological history in a later post. For now, you just want to pick out one time point that fits your needs.

Now, import the Bedrock image from your chosen time period to Photoshop or your favorite raster editing app. First, select the black background with the Magic Wand tool on zero tolerance. Next, invert the selection and copy. Now create a new image, retaining the default size, and paste from clipboard. In Photoshop, at least, a New image file defaults to a size just big enough to contain the selected area.

If your raster editor doesn’t behave similarly, you can simply crop the image down till it just contains the map area, instead of following the procedure described in the previous paragraph.

If you examine your image, now, you’ll notice two things. First, the edges are jagged.

The prepared equirectangular bedrock map.


Second, the image size is not quite a 2:1 rectangle. I believe these both relate to the fact that the map is composed of discrete cells that don’t conform to the latitude, longitude grid. The easiest way to deal with this is to crop the image down so that the jagged edges don’t show and resample the result to a 2:1 rectangle. This will necessarily reduce precision a bit, but for most purposes it doesn’t matter. You might need to cleanup around the edges to fix shorelines that don’t quite line up. I made an attempt to line up the east and west edges, but they didn’t line up. Instead, I decided to keep the image as it is, use the eyedropper to sample the ocean color, and fill the background layer with ocean color. This works because I could center all the land on the map without overlaps. If it’s impossible to center land on the map such that it doesn’t overlap the edges, you’ll need to connect the land across the boundary somehow.

Now resample the image to a 2:1 rectangle.

The prepared equirectangular map of the tectonic plates.


Repeat for all of the output images. For the Elevation, I fill the ocean background with white to represent sealevel elevation. I then invert the image, because I prefer darker values for lower elevations. It’s a matter of taste, though you have to keep track. I also saved a seamask, based on the selection I created to mask out the blue seas. Except for resizing, I took Plates pretty much as-is, because the edge behavior is continuous, so any boundaries across the problem areas would be a work of imagination.

The prepared equirectangular elevation map.


The imagery is now ready to be applied to gplates, of course. Each of them will have an extent of 90º N to 90º S, and 180º W to 180º E.

Once you have all that loaded into gplates, you can look at it with a nice graticule and varying opacity, so that you can, for instance, compare plate boundaries to continental shorelines, or achieve a variety of other effects.

The final prepared, separated and inverted equirectangular map.


For the picture at the top of the page, I added a photoshop-derived hillshade to give a better sense of what the elevations look like straight out of the box. Photoshop or Wilbur with a judicious bit of well-applied noise could be used to enhance the appearance of the mountains. The vector editing tools in gplates or qgis could be used to mark shorelines, various kinds of plate boundaries, mountainous regions and other data derived from the tectonic simulation. I’ll leave that for a future article. For now, have fun with tectonics!

The Astrographer

Posted in Mapping, World Building | Tagged , , , , , , , , | 1 Comment

Placing Raster Features Using GPlates

Today we’re going to look at using the gplates program to place pre-created raster features on the globe. For minimal distortion, we will begin by placing the raster at the center of the map, where the central meridian crosses the equator. If we were placing raster features taken from particular parts of Earth, we would want to make sure they were in equirectangular (or geographic, or plate carrée, or latlong) projection and place them in the position they were in on the original map (this is good for importing real world data from sources such as the SRTM 90-m database). I am going to give instructions both for the use of real world data and island maps from Amit Patel’s Polygon Map Generation Demo.

A few tips I’ve picked up through previous experimentation: raster layers which are imported into the same location (since we’re dropping unprojected imagery as close to the center as possible to minimize distortion) need to have separate associated vector shapefiles.

In my filesystem, I create a separate directory for each raster. Within that directory, I create a “raster” subdirectory, where I place the raster itself, and a “vector” subdirectory, where I place the associated shapefile. This makes it easier to keep track of everything.
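That per-raster layout is easy to script. Here is a minimal Python sketch; the helper name and the example directory names are mine, purely illustrative:

```python
import os

def make_raster_dirs(base, raster_name):
    """Create the raster/ and vector/ subdirectory pair for one raster."""
    root = os.path.join(base, raster_name)
    raster_dir = os.path.join(root, "raster")
    vector_dir = os.path.join(root, "vector")
    os.makedirs(raster_dir, exist_ok=True)
    os.makedirs(vector_dir, exist_ok=True)
    return raster_dir, vector_dir

# One directory pair per imported raster keeps rasters and their
# associated shapefiles from getting mixed up.
r, v = make_raster_dirs("gplates_project", "notoahu")
print(r)
print(v)
```

Running it once per raster gives each import its own raster/vector pair, matching the layout described above.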

To start with, I created a patch in Photoshop, just an ordinary tiff image. I used tiff to test whether I could import and reproject 16-bit or 32-bit rasters. GPlates choked on the 32-bit tiff, but successfully loaded the 16-bit version. The patch I created was small and silly, so I decided to make its geographic extent small; if this works in 16-bit, I might perhaps use it as a set of elevations for the somewhat outscale raster I imported as a continent earlier.

So how do I set the georeferencing? Nine-meter resolution is pretty common and excellent for moderately close-in work, so I’m using that. To reference this to Earth, the diameter of our planet is close enough to 12,756,000 meters. Given that the circumference of a circle is equal to its diameter times π (about 3.14159265…), that gives us a circumference of about 40,074,156 meters. As back-of-the-envelope as this is getting, 1-meter precision is more than sufficient. The resolution of my image is 1k square (1024×1024), so that’s an extent of 9,216 meters square. A degree comes to about 111,317 meters, so, keeping track of units,

9,216 meters / 111,317 meters/º = 8.279-yada-yada x 10^-2º

I want this centered at the [0,0] point, so divide that by two to get the extents: top latitude of 0.0414º N, bottom latitude of 0.0414º S (-0.0414), left longitude of 0.0414º W (-0.0414) and right longitude of 0.0414º E. Unfortunately, this throws an inescapable exception in gplates. I successfully imported the raster with the extent being 0.2º on a side. That gives me a pixel size of about 21.7 meters (about 71 feet).

Once I got that imported, I digitized a new polygon geometry roughly covering the area of the image. I gave it a classification of gpml:UnclassifiedFeature, a plateID and a name. I also made sure that the checkboxes for Distant Past and Distant Future were filled; not that it matters for what we’re doing here, but whatever… Create and Save to a shapefile in the vector directory associated with the raster.

In the Layers window, click on the arrow next to the Reconstructed Raster you just imported. Under Inputs, find Reconstructed polygons, click on “Add new connection” and select the Reconstructed Geometry you just digitized. Use the Choose Feature tool to select the polygon associated with your raster. You can now use the Modify Reconstruction Pole tool to move the raster to where you want it. In my case, I placed it somewhere in the mountains of the small continent I had placed while practicing to do this. Place it where you want it, hit Apply and hit OK a couple times. I had to jockey mine around a bit to get it right where I wanted it. If all of your edits are done without changing the Time setting, there will only be one entry in the rot-file.
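The back-of-the-envelope arithmetic above can be checked with a short Python sketch. The numbers are the ones from the text; only the variable names are mine:

```python
import math

EARTH_DIAMETER_M = 12_756_000                 # close enough for this estimate
circumference = math.pi * EARTH_DIAMETER_M    # ~40,074,156 m
meters_per_degree = circumference / 360       # ~111,317 m per degree

# A 1024-pixel raster at 9 m/pixel spans 9,216 m on a side.
side_m = 1024 * 9
side_deg = side_m / meters_per_degree         # ~0.08279 degrees total
half = side_deg / 2                           # extents reach +/- this from [0,0]
print(round(side_deg, 5), round(half, 4))     # 0.08279 0.0414

# The 0.2-degree-per-side extent that GPlates actually accepted
# works out to a coarser effective pixel size:
pixel_m = 0.2 * meters_per_degree / 1024      # ~21.7 m per pixel
print(round(pixel_m, 1))                      # 21.7
```

The same three lines of arithmetic (circumference, meters per degree, extent) apply to any planet; swap in your world’s diameter and resolution.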

Speaking of the rot-file, go to File > Manage Feature Collections (cmd-M or ctrl-M), and make sure all changes are saved.

Now, I’m going to load in an island generated on the Polygon Map Generation website. To figure out the extent for this, I looked at a map of the Hawaiian Islands and observed that the Big Island fits into a space a little over a degree on a side. The islands generated by Mr. Patel’s generator just have the feel of being much smaller than Hawaii. I’ve decided to give it an extent of about half a degree each way, and, since the island shapes seem to roughly fit with the look of the other islands, I’ll center it at 21.5º N by 157.5º W. That would be just off Oahu, and maybe just a bit bigger. So I used a top latitude of 21.7º N, a bottom latitude of 21.3º N, a left longitude of 157.7º W (-157.7) and a right longitude of 157.3º W (-157.3). I reduced the extent a little ’cause the island still seemed big.
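That center-plus-size bookkeeping (west and south negative, top/bottom/left/right order) is easy to fumble by hand. A small helper, assuming the ordering the GPlates import dialog asks for; the function name is mine:

```python
def extent_from_center(lat, lon, size_deg):
    """Return (top, bottom, left, right) in degrees for a square extent
    of size_deg centered on (lat, lon).  South latitudes and west
    longitudes are negative, matching the GPlates import dialog."""
    half = size_deg / 2
    return (round(lat + half, 4), round(lat - half, 4),
            round(lon - half, 4), round(lon + half, 4))

# The island above: 0.4 degrees on a side, centered at 21.5 N, 157.5 W
print(extent_from_center(21.5, -157.5, 0.4))   # (21.7, 21.3, -157.7, -157.3)
```

The same call covers the later imports at the map center, e.g. `extent_from_center(0, 0, 0.4)` for the 0.2,-0.2,-0.2,0.2 extent.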

This time, we’ll roughly hug the coast with the shape feature we create. This will minimize the amount of water color we have to clean up later. In this case, I’m just going to play around and pretend like the slop represents shallow water. Once you’ve digitized and created the feature with a unique plateID, save it to a new shapefile. I actually like this thing’s rough location, so I’m going to move it just a little, mostly rotating it. Maybe I’ll plant it over Oahu’s position and call it Notoahu…

Now, I’m going to add another couple of islands, but I’m going to add them both to the center of the map. The first one will re-use the same shapefile as the previous import, and I will locate it at 0.2,-0.2,-0.2,0.2, in the usual order. I’ll digitize an outline of this island and save it to the previously created shapefile. A possibly late word of warning: it’s best to give all of your files easily recognized names. It’s maddening to try to find “planet_island-61462-2AF” somewhere between “planet_island-61462-1TR” and “planet_island-61462-2JL.” Anyway, I wound up using the Pole Rotation tool to place that island somewhere in the space between Maui, Lanai and Molokai on the Earth map.

The next island I placed at 0.3,-0.3,-0.3,0.3. Since the initial location of this island coincides with the previous, it needs its own shapefile, otherwise overlapping will be a problem. Once I’ve got everything digitized and connected, I’ll shift it over with the rest of my little island chain.

For my last trick, I want to move a tile of countryside taken from the SRTM database at TileX 54, TileY 3. I select the GeoTiff radio button before hitting the button marked “Click here to Begin Search.” I note the filename from the download page. The latitude has a minimum of 40º N and a maximum of 45º N; the longitude has a minimum of 85º E and a maximum of 90º E. We also observe that the center point is at latitude 42.5º N by longitude 87.5º E. I chose Data Download (HTTP) to download the tiff.
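The center point is just the midpoint of the tile’s bounds; a two-line Python sketch using the values from the download page (the helper name is mine):

```python
def tile_center(lat_min, lat_max, lon_min, lon_max):
    """Midpoint of a tile's bounding box, in degrees (N and E positive)."""
    return ((lat_min + lat_max) / 2, (lon_min + lon_max) / 2)

# SRTM tile at TileX 54, TileY 3: 40-45 N, 85-90 E
print(tile_center(40, 45, 85, 90))   # (42.5, 87.5)
```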

Sadly, gplates has some serious problems with the 16-bit geotiff. This is really a shame, as moving fragments of real-world elevations and piecing them together is probably the single most useful aspect of this technique. Popping down pictures of islands is all well and good, but not a terribly powerful use-case.

It seems I might need to convince the developers of gplates to implement the import and reconstructed export of multi-byte elevation data. Failing that, the raster import/reconstruction/export abilities of this program are going to be functionally limited to imagery. Shame, really.

Hopefully, this will prove useful.

The Astrographer

Posted in Mapping, World Building | Leave a comment