New Additions

In the menu, under World Builder’s Bookshelf, you’ll find two new additions to the site. With permission from Geoff Eddy, I’m mirroring his Creating an Earthlike Planet and Climate Cookbook pages. Some formatting has been lost, but the valuable information is intact, and I’ll try to fix the formatting issues where possible.

While I hope that Mr. Eddy’s pages will find a better home, I’m happy to at least keep them available.

Thanks are due to Geoff Eddy for the creation of such useful resources and the permission to disseminate them. I hope this will prove as useful to others as it has for me.

The Astrographer

Posted in Links, World Building

Projecting a Map From Imagery: MMPS and GIMP

On Friday I created an equirectangular map (or a known portion of one…) from a partial as-seen-from-space image using Photoshop and the Flaming Pear Flexify 2 plugin. Photoshop is fairly expensive, and Flexify isn’t exactly cheap, so they aren’t universally available. I try whenever possible to create tutorials using freely available open-source or shareware applications. Today, I will try to do the same thing I did last time using GIMP and Matthew’s Map Projection Software (MMPS).
First, I loaded the JPEG of my image of Asdakseghzan as seen from space into GIMP.
I resized the canvas into a square using Image>Canvas Size…, setting both the width and height to the larger of the two existing values. In this case the image is wider than it is high, so I set the height to match the width to avoid losing any information. I hit the Center button under Offset and told it not to resize any layers.
Next, I make a new transparent layer to act as a template for the placement and sizing of the circular area. With that layer selected, I use the ellipse select tool with aspect ratio fixed at 1:1 to select a circular area centered on the image and covering a maximum possible area. If necessary, use the tool options to set the position to 0,0 and the size to the height and width of the image. Select>Invert and fill the surrounding area with a dark color. Create another layer, fill it with another contrasting color, and move that layer beneath the image layer.
Now select the image layer. Move the layer till the round edge of the planet is close to a round edge of the template. Usually, I use the arrow keys to get this as close as possible. Even if you get part of the edge matched up perfectly, the limb of the planet in the image will probably diverge from the edge of the template. If not, you’re golden, but if so you’ll need to rescale the layer. This was much easier in Photoshop. I’m not sure you always get what you pay for, but there are perks.
Use Tools>Transform Tools>Scale to start the scaling process. Make sure Keep Aspect is checked. Grab the handle perpendicular to the side where the image touches the template and drag till the limb follows the template. My planet image touched the template on the left side, so I stretched up and down on the top and bottom handles.
Now I had a pretty decent centered, maximally space-filling image, ready for reprojection, so I exported it as a PPM. On my first try, I saved it as a JPEG and converted it with ImageMagick’s mogrify tool, but that proved unnecessary.
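As an aside, PPM (the format MMPS insists on) is dead simple: a short text header followed by raw RGB bytes, which is why so many small command-line tools use it. Here’s a minimal sketch in Python, standard library only (the file name is just an example):

```python
def write_ppm(path, width, height, pixels):
    """Write 8-bit RGB pixels (a flat list of (r, g, b) tuples,
    row-major) as a binary "P6" PPM file."""
    assert len(pixels) == width * height
    with open(path, "wb") as f:
        # text header: magic number, dimensions, max channel value
        f.write(b"P6\n%d %d\n255\n" % (width, height))
        # then just raw interleaved RGB bytes, row by row
        f.write(bytes(v for px in pixels for v in px))

# a 2x1 image: one red pixel, one blue pixel
write_ppm("test.ppm", 2, 1, [(255, 0, 0), (0, 0, 255)])
```

GIMP’s default “raw” PPM export writes this same P6 flavor, so files round-trip cleanly between GIMP and MMPS.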
My initial plan was to forgo dealing with my traced “map.” It wasn’t really all that hot, and the scale-and-shift process in GIMP seemed a bit horrific. Well, with a bit of practice, scaling and adjusting the position of layers didn’t seem quite as bad, and more practice seemed like a good idea. So I did it anyway.

I decided to just show the thumbnail, ’cause there’s a LOT of whitespace here!

As you can see, the fit isn’t quite as precise as in the version I made in Photoshop. With a lot of effort, I could have made it better, but without transparency, sizing is a frustrating endeavor. In order to keep the tracing as a separable element, I hid the other layers and created a version with just the tracing itself. This is what I would project…

Now we move to the command line. The commands here assume you’re using some sort of UNIX-based OS like Linux or Mac OS X; the Windows equivalents will differ somewhat.

First, I changed my working directory to the location of my images. Next, I gave the following command to reproject the contents of merged_image into map_image:
{location of MMPS}/project -i orthographic -w 2048 -h 1024 -lat -7 -long 7 -turn 2 -f merged_image.ppm > map_image.ppm

Optimally, the app would be on the search path, but the name “project” conflicts with one of my GIS tools, so I have to use the full path. Your mileage may vary.
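Since the composite and the tracing have to be projected with identical settings to stay registered, it helps to script the call rather than retype it. A sketch in Python; the MMPS location and file names are placeholders, and the flags just mirror the command above:

```python
import subprocess

def project_cmd(src, lat, long_, turn, w=2048, h=1024, mmps="."):
    """Build the argument list for an MMPS inverse-orthographic
    projection, mirroring the command used in this post."""
    return [mmps + "/project", "-i", "orthographic",
            "-w", str(w), "-h", str(h),
            "-lat", str(lat), "-long", str(long_),
            "-turn", str(turn), "-f", src]

def project(src, dst, **kw):
    """Run the projection, capturing stdout into dst."""
    with open(dst, "wb") as out:
        subprocess.run(project_cmd(src, **kw), stdout=out, check=True)

# same settings for both layers, so they stay registered, e.g.:
# project("merged_image.ppm", "map_image.ppm", lat=-7, long_=7, turn=2)
# project("tracing.ppm", "tracing_map.ppm", lat=-7, long_=7, turn=2)
```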

When MMPS was done projecting the basic composite, this was the result.
I also projected the tracing.
I loaded that into GIMP as a layer on top of the other elements. Because the PPM format MMPS uses doesn’t support transparency (so far as I, or apparently MMPS, know), I had to select the empty areas and make them transparent using a layer mask. This was made easy by the high contrast between the background elements and the tracing. If I had been working in black and white, it would have been more involved.
While the process of positioning and scaling the elements was more difficult, I managed this in about half the time I took with Photoshop. There are a number of reasons for that. I’ve had more experience with the process, for one. I also did this in a more hurried and slipshod way; I spent a lot of time in Photoshop refining the fit between elements. The major difference, though, comes down to the crashing problem. If I hadn’t been required to restart the program and retrace the process from the beginning several times (Photoshop usually crashed on the first attempt to open the Save As dialog after using Flexify), Photoshop would still have been somewhat quicker. Better transform tools do tell. You may get something for what you pay for, but free is still pretty darned attractive.

Now that we have these existing elements properly projected (more or less), it’s time to add in the rest of the world and bring this stuff into more robust mapping tools. That’s for the next few posts.

Thank you for reading. Please feel free to comment, leave suggestions or ask questions.
The Astrographer

Posted in Building a Worldmap, Mapping, Planetary Stuff, Projects, World Building

Projecting a Map From Imagery: Photoshop and Flexify

Today we’re going to try something a little different. Quite some time ago, in the hoary days before Flaming Pear LunarCell had equirectangular map outputs, I created this rather nice planetary image.
Unfortunately, I lost the settings long before those cool map outputs were added to LunarCell. Later, but well before I started learning the art of mapmaking on the Cartographer’s Guild, I roughly traced the image to create this interesting, but neither scaled nor properly projected, “map” of a planet I wound up naming Asdakseghzan, sorta-homeworld of the canine-derived Vugoa. Think Traveller Vargr…
I have no idea what the scale should be, and that “equator” line should probably be ignored in future.

I’d like to create a reasonably projected, fairly well scaled version of the map with some fidelity to the original image.

I started by cropping the space picture down to the circular area covered by the planet in the picture. Then I expanded the canvas size to a square area just large enough to contain the circle. Make sure to save the resulting image.
Merge those layers together. The next thing I did was to load the old, traced “map” I had created into Photoshop. Paste it over the square, cropped version of the beauty shot you just made, reduce the opacity so that you can see the underlying image, and Edit>Transform>Scale the pasted layer so that the traced shorelines match the shorelines in the underlying picture. This might take a bit of jockeying about and messing with the opacity to get a good idea of when everything fits. Now, I made sure to select the underlying picture layer.

Next, I brought up the Flaming Pear Flexify 2 filter to project from an input projection of Orthographic to an output projection of Equitall (to fill the square space). I adjusted the rotation controls till I had the area roughly where I wanted it. I now had at least a credible global projection of the portion of the planet visible in the image. Eventually, I’ll be able to use this to work out the scale of the map.

After that, I selected the resized layer with the traced image, and projected that. All of the settings should remain unchanged, otherwise the layers won’t match up properly.

Once I had the image reprojected, I resized it to 2048×1024 to get the proper 2:1 aspect ratio for an equirectangular projection. Here is the resulting map.
And the composite.
Notice the large green area. That is about the area that would have been filled with mapped features if the original image had been a full-on face shot. The distorted rectangle on the west end of that potentially visible area is the portion of the planet that actually made it in front of the lens. Part of that is even in darkness. The sacrifices we make for a visually striking picture. Usually NASA doesn’t use imagery from the limb of the planet to create composite maps. Unless they have to. Also notice that I said “map”; this is the first image worthy of the term. The red area? That wouldn’t have been visible.
Now that I know where this is on the globe, I can set about creating the rest of the world without fear of contradicting this little portion that I’ve already created so much backstory for. Some details will have to change. The equator, for instance, isn’t precisely where I envisioned it, although the match was closer than I’d feared.

What are the potential uses for this? Well, once upon a time, when I was even more of a Trekkie than I am now, I wanted to make maps of some of the planets that the Enterprise was shown orbiting. That’s even more attractive with the often more convincing planets in the remaster, though I’m not as obsessed as I used to be. All you have to do is grab a screenshot, register the visible portion of the planet to a circular template filling a square canvas, and Bob’s your uncle! Getting features shown in multiple disconnected shots into the correct spatial relationship could be an interesting challenge. I’ll leave that to the readers to try figuring out. Please post a link in the comments if you find or create a good method… Anyway, this could be an excellent first step to creating your map of Tatooine or Alderaan or Pandora. Or, as I did, recovering some old tracing you tried to make into a map.

Another use I really hadn’t thought about before would be a technique similar to the one described here to convert scrawlings on an actual physical globe into a map. Yeah, that’s definitely going in my notebook for a future post. Time to make that dry-erase globe I’ve been dreaming of…

For my next trick, I’ll try repeating this procedure with MMPS and the GIMP. Photoshop has been an awful crash-monster lately, particularly after I use Flexify. I was also impressed by the latest version of GIMP and want to try it out. Finally, I really want to use open-source or at least freely available tools as much as possible to make my solutions as generally useful as possible. I’ll post that by Monday. I promise…

Thank you for your attention. Please feel free to ask for clarification, make comments, or give ideas for improvements.
The Astrographer

Posted in Building a Worldmap, Mapping, Planetary Stuff, Projects, World Building

Foes of the UNSEO: The Hanrul

I’ve been working on a post using the Mercury6 symplectic integrator to analyze the stability of the Yaccatrice System. It’s been a long learning experience, and I think I screwed up entering the initial orbital state vectors. Sky Moon fell into Cintilla in less than nine days, and Yaccatrice continued in a wildly eccentric orbit for several years thereafter. I expected that Yaccatrice might not be stable, but I find it really unlikely that tidal forces could have pulled the gas giant out of a circular orbit in just a little over one time step (of 8 days). Even less likely that a moon of that planet would have survived as long as Yaccatrice did. I need to go back and recalculate those position and velocity vectors. This may take a while…
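In the meantime, for anyone else fighting with this kind of input: the elements-to-state-vector conversion is the standard textbook transformation. Here’s my own sketch of it in Python (not Mercury6’s code, so double-check units and angle conventions against the manual before trusting it):

```python
import math

def state_vectors(a, e, i, raan, argp, nu, mu):
    """Convert Keplerian elements to Cartesian position and
    velocity.  Angles are in radians; a and mu set the units
    (e.g. AU and AU^3/day^2 for day-based integrations)."""
    p = a * (1.0 - e * e)               # semi-latus rectum
    r = p / (1.0 + e * math.cos(nu))    # current orbital radius
    # position and velocity in the perifocal (orbit-plane) frame
    x_pf, y_pf = r * math.cos(nu), r * math.sin(nu)
    vx_pf = -math.sqrt(mu / p) * math.sin(nu)
    vy_pf = math.sqrt(mu / p) * (e + math.cos(nu))
    # rotate perifocal -> reference frame: Rz(raan) Rx(i) Rz(argp)
    cO, sO = math.cos(raan), math.sin(raan)
    ci, si = math.cos(i), math.sin(i)
    cw, sw = math.cos(argp), math.sin(argp)
    R = [(cO * cw - sO * sw * ci, -cO * sw - sO * cw * ci),
         (sO * cw + cO * sw * ci, -sO * sw + cO * cw * ci),
         (sw * si, cw * si)]
    pos = [rx * x_pf + ry * y_pf for rx, ry in R]
    vel = [rx * vx_pf + ry * vy_pf for rx, ry in R]
    return pos, vel

# sanity check: a circular orbit with mu = a = 1 should give a
# velocity of magnitude 1 at right angles to the radius
```

A quick sanity test like the circular-orbit case in the comment is a good way to catch exactly the sort of bad initial vectors that sent Sky Moon into Cintilla.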

In the meantime, I’ll post an alien species description from my notebook.


The Hanrul began as kind of a cross between Kzinti, Ferengi and Vulcans, with many of the worst aspects of the stereotypical steampunk mad scientist.

Coldly unemotional and ruthlessly exploitative at best. Often cold-blooded sophontocidal cannibalistic psychopaths.

The currently dominant culture of the Hanrul is one of the most technologically advanced societies yet contacted by humanity. In some areas, particularly biotech, the Hanrul are more advanced than humans. Their technology is frequently odd to the point of inefficiency as a result of their Mad Scientist culture. It’s important to note that, in spite of the term Mad Scientist, the Hanrul, while technologically advanced, have little concept of science as such.

Hanrul also show little evidence of any concept of other people as independent thinking entities with feelings that matter. Both among themselves and with individuals of other species, the Hanrul are utterly callous and without empathy. Such ethics as the Hanrul have mostly resemble Ayn Rand’s Objectivism.

Hanrul are very prickly about their byzantine hierarchy of social status. They are quick to take offense and equally quick to offend. Hanrul typically require long, drawn-out introductions in order to fully acquaint others with their status in a large number of overlapping and interlocking social spheres. It’s also nearly universal in Hanrul society that listening closely to an introduction is a tacit admission that the other is a superior. Hanrul social norms seem almost designed to make ignorance of status quickly evident. These things, along with oddities of Hanrul reproduction, tend to make life on Sylan short, dangerous and violent. Rising through the ranks by assassination and duels is the norm for Hanrul.

Although horribly xenophobic, with an inhuman (in all the worst senses) psychology, the Hanrul are biologically very similar to humans. This has made humans useful subjects for Hanrul genetic experimentation and vivisection. They do the same to low-status members of their own species.

Economic domination and selfishness are central to Hanrul culture. Their society is cemented together by Exploitation Societies great and minor.

Although humans find Hanrul culture disturbing (with good reason), and believe that individual Hanrul could grow up to be decent and productive members of galactic society if separated from their toxic culture, the experiment has never been tried. The act of removing people from their own society and forcing them to grow up in an alien environment against their will is considered against human ethics. Even given a lapse in ethics, the Hanrul are advanced and strong enough to fiercely resist any such effort.

The United Nations of Sol and polities of many neighboring sapient species resist Hanrul exploitation of other sapient species when possible, but otherwise long-term plans for dealing with the Hanrul are still very much a matter of debate.

The Hanrul lacked FTL technology until they gained it through first contact with the Solar Union.

Physical Appearance

Hanrul look something like an octopodal cross between a glossy patent-leather wasp and a centaur, with vicious crab claws.

The four legs on the Hanrul’s abdomen are used for walking. The two upper arms on the thorax end in hands with three opposed thumbs, and are used for manipulation. The two lower arms are greatly increased in size and end in claws which are used in combat and to help climb.


Large numbers of eggs are laid and cared for in communal hatching grottoes. Modern Hanrul cities often use sewage and other refuse to create a nutrient-rich mulch upon which the young larvae feed. Criminals and low-status Hanrul are frequently dropped into the spawning pits to feed adolescent Hanrul. The young also often fight and eat each other, helping to whittle their numbers down from ridiculously excessive to merely excessive when they finally emerge as young males from the pits.

Conflict continues for these males. In the dominant Sylan language, the word for male can also be translated as “cannon fodder.” Even with the further whittling of the male population by continuous warfare with other Hanrul and neighboring species, population pressures remain fierce.

Those males who manage to survive and fight to earn the right to mate will impregnate the female with hundreds of eggs. Laying eggs renders the female terribly hungry, and she will try to eat everything she can catch. Mating chambers are always well stocked with food, because the female can easily die from the stress of laying so many eggs. She will still try very hard to eat her mate.

Provided the male manages to avoid being devoured by his mate, he will begin to metamorphose into another female. Hanrul females can live for centuries, and are capable of producing a clutch about twice per Earth year. In practice, they only lay eggs a couple of times a decade, because of the immense stress involved.

Sylan Empire

Sylan, the homeworld of the Hanrul, is still divided bitterly into small sovereign entities, often overlapping and with disparate systems of social organization. The global organization, which humans refer to as the Sylan Empire, is at best a loose confederacy consisting of a handful of “international” alliances.

An innovation of the new global culture is a harsh meritocracy that makes it costly and possibly deadly to rise above one’s level of competence.

Since expanding into interstellar space, the Sylan Empire has colonized several worlds and enslaved at least one primitive sapient species.

Although often only tenuously disciplined, the Sylan military is strong, technologically fairly advanced and highly aggressive. The Sylan Empire is widely considered a threat to its neighbors.

Hopefully, more to come soon. Sylan obviously needs some specs and maps…

The Astrographer

Posted in Aliens, World Building

Wilbur on the Macintosh

Wilbur, as seen on my Macintosh. Note the Apple in the upper left corner.

While the title sounds like a good name for a fictional English(-ish) village (the Macintosh River seems a bit un-English-y), I’m actually talking about getting Joe Slayton’s Wilbur program running on my Mac OS X-based computer.

I’ve been using Boot Camp to boot my computer in Windows XP, which seems to work fine, but I’m not that fond of being bound to Windows till I can reboot. I’m aware of Parallels and similar programs which allow Windows to run in a virtual machine parallel to the Mac OS, but they cost money. If I had any money, I’d spend it on cartography and simulation apps… er, not to mention clothes for the kids and… food and whatnot… ehhh…

Anyway! Whilst pointlessly bouncing about the internets, I discovered this post on using Wineskin to “port” Orbiter to Macintosh. I’ve had some experience trying to get Wine working, so I wasn’t terribly optimistic, but I tried it. After some false starts (X11 doesn’t seem to open right on my computer, but I’ll go over my workaround if you have the same problem), I had Orbiter running on my computer. As far as I can tell, it works fine, though the simulation is realistic enough that I, at least, can’t get a Delta Glider with a ridiculous amount of delta-v into orbit to save my life. I may need to stick to Kerbal Space Program till I get some, um, skillz. But that said, Orbiter is a pretty big and complicated program, and I didn’t have too much trouble gettin’ it going.

Wilbur, on the other hand, is a relatively simple app, so I went with it. I suppose if I can get Wilbur, SagaGIS and Fractal Terrains running in Wine, I can dump Windows and free up about 180 gigabytes of disk space. Sadly, if not surprisingly, ArcGIS doesn’t function in Wine.

The first prerequisite, of course, is having Mac OS on your computer. If you have Windows and you like it, then you don’t need my help. If you have Linux, then the vanilla version of Wine should be hunky-dory, but I can’t really help you with that.

The second step is to get a copy of the Wilbur installer here. Next, get a copy of Wineskin Winery here and install it.

The first time you open Wineskin Winery you need to install an engine. Simply click the plus sign next to where it says, “New engine(s) available!” Select the highest numbered version shown in the pull-down menu and click the, “Download and install,” button.

Once the new engine is installed, click update under, “Wrapper Version.”

Next click, “Create New Blank Wrapper,” and give it a good name, like, “Wilbur 180.”

My computer runs Mac OS 10.6.8 with pretty extensive modifications, but if you get a pop-up window that says, “The application X11 could not be opened,” don’t worry, just click quit. Everything should be golden. Wait a little bit and another window should pop up that says, “Wrapper Creation Finished.” Go ahead and click, “View Wrapper in Finder,” and double click the appropriately titled icon(Wilbur 180, in my case) to open the wrapper.

Don’t click “Install Software” just yet. Click Advanced. If you like, you can change the version number to an appropriate value; Wilbur is currently on 1.80 as of publishing this post. Also, because we’re using an MSI installer, check “Use Start.exe.”

Go to the Options tab and check, “Option key works as alt,” and “Try to shut down nicely.” Now click on, “Set Screen Options.” Under, “Other Options,” check, “Use Mac Driver,” and “Use Direct 3D Boost.”

Wilbur needs the vcomp100.dll from the Visual C++ Redistributable Components to run. I tried using the installer from Microsoft, but that failed. I also tried using the Winetricks tool to load “vc2010 express” under “apps” and “vcrun2010” under “dlls,” but that failed, too. Let’s close the Wineskin.

Instead, download a copy of vcomp100.dll from here. Scroll down and click the plus icon next to “How to install vcomp100.dll,” and scroll down to “Using the zip file.” Click “Download.”

When vcomp100.dll is downloaded and extracted, navigate to “Applications/Wineskin” in your user directory and right-click (or control-click) on the Wilbur 180 icon. Select “Show Package Contents” in the resulting menu. Drill down through “drive_c” and “windows” to “system32.” Drag the copy of vcomp100.dll you just downloaded into the system32 directory, and close the package.

Now double-click on the Wilbur 180 Wineskin. Click “Install Software.” Click “Choose Setup Executable,” and browse to the location of the “Wilbur32_setup.msi” file. Sadly, as far as I’m aware, Wine can’t run 64-bit apps… Wait a bit, and the Installer window should open. Click Next.

I manually entered, “C:\Program Files\Wilbur\,” for the folder location(don’t enter the comma). Next. Next. Close.

The Choose Executable window, if it comes up, should show, “\Program Files\Wilbur\Wilbur1.exe.” If so, click OK.

Try a “Test Run.” If it succeeds, you’re golden.

If not… It took me several tries before I got everything shipshape.

You can either start again from the beginning, or, if you just want to try fixing a parameter that might not have gotten properly set, use Show Package Contents on Wilbur 180, and click Wineskin at the top of the package hierarchy. This will allow you to change any necessary Wineskin parameters or use any of the tools.

In my case, I usually just forgot to check Use Mac Driver in Screen Options.

Once everything checks out, you can open the app just like any Mac app by double-clicking the appropriate Wineskin icon (Wilbur 180).

I’ve found that most of Wilbur works pretty well in Wine. The 3D Preview window fails completely, and some other windows, like Map Projection, don’t resize properly, but otherwise Wilbur is pretty functional, and in my experience runs a bit faster than in Boot Camp. In fact, Wilbur is stable, if very slow, handling 8192×4096 images, which usually crash it pretty promptly in Boot Camp.

I’ve also successfully ported Fractal Terrains (which had problems with dockable windows, but otherwise seemed pretty good) and SagaGIS (which, so far, works fine), along with a few other programs.

I’m very satisfied with Wineskin, though I may have to keep Boot Camp to run ArcGIS and AutoCAD.

I hope this has helped my more Mac-loving friends to add another dimension to their enjoyment of their computers.

Questions and comments, as always, are very welcome!

Thank you for reading,
The Astrographer

Posted in Mapping, Uncategorized


I may have found another way to flatten imagery and maps onto an equirectangular projection. Matthew’s Map Projection Software (MMPS), created by Matthew Arcus, is a suite of command-line applications for creating and re-projecting maps. At least on my Macintosh, it was quick and easy to compile and link the code using the instructions given on the page. Given that it doesn’t need porting for a Mac, I’m confident it would work on other Unices and Linuces. For Windows, you’d need to install some sort of “make” program, but even without the make utility the compile process doesn’t look too terribly complicated.

As always, ImageMagick is strongly recommended and free. MMPS apps require PPM images for input and return PPM images for output. For those with a make app, the command “make ppms” will convert any JPEG files in the images subdirectory into PPM format. The ImageMagick mogrify command can also perform the conversion to and from a wider variety of file formats.

The thing that caught my eye was the inverse projection option. This allows the user to project from any of the available projections back to equirectangular. Here you can find a basic introduction to the use of inverse projection to create at least a partial map from imagery. And if this page, describing how to convert a four-view into a map, doesn’t give you some idea of what I’ll be doing in a future post, you haven’t been reading much of this blog :). Suffice to say, if you start playing around with a well-placed orthographic camera and a sphere with an unshaded texture in Blender, it probably won’t be anything new to you by the time I get it out…

For my purposes, the thing that finally makes MMPS almost a straight-out freeware replacement for Flexify 2 is its ability to easily transform map coordinate systems (recenter the map on a different latitude and longitude and rotate around that center).
Flexify has a much more extensive set of projections, but a lot of those are… peculiar, and the names are somewhat uninformative.

For instance, let’s say we start with a 2048×1024-pixel equirectangular PNG (generated in Photoshop with LunarCell) named testmap.png, and we want to center the map over a point at 90ºE, 45ºN and tilt around that center by 30º counter-clockwise. Start by using ImageMagick to convert to PPM:

convert testmap.png testmap.ppm

Now use the following MMPS command to perform the coordinate system transform:

./project -w 2048 -h 1024 latlong -long 90 -lat 45 -tilt 30 -f images/testmap.ppm > images/rotmap.ppm

The resulting image, “rotmap.ppm,” will be essentially identical to one transformed in Flexify 2 with the latitude slider set to 45, the longitude slider set to 90 and the spin slider set to 30. Perfect.

The only unfortunate aspect of the MMPS project tool compared to Flexify is that it apparently can’t handle 16-bit imagery. Other than that and a slightly more limited selection of projections, it is an excellent substitute.
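Under the hood, that recentering is just three successive rotations of the sphere. Here’s a sketch of the point transform in pure Python; the sign conventions are my own guess and may not match MMPS’s exactly, so verify against a test image before relying on it:

```python
import math

def recenter(lat, lon, lat0, lon0, tilt=0.0):
    """Return the coordinates of the point (lat, lon) after moving
    (lat0, lon0) to the map center and tilting by `tilt` about it.
    All angles in degrees.  Sign conventions are a guess and may
    differ from MMPS."""
    la, lo = math.radians(lat), math.radians(lon)
    # unit vector: x toward (0N, 0E), z toward the north pole
    x = math.cos(la) * math.cos(lo)
    y = math.cos(la) * math.sin(lo)
    z = math.sin(la)
    # 1) spin about z so longitude lon0 moves to 0
    a = math.radians(-lon0)
    x, y = x * math.cos(a) - y * math.sin(a), x * math.sin(a) + y * math.cos(a)
    # 2) pitch about y so latitude lat0 moves to the equator
    b = math.radians(lat0)
    x, z = x * math.cos(b) + z * math.sin(b), -x * math.sin(b) + z * math.cos(b)
    # 3) tilt about the axis through the new map center
    t = math.radians(tilt)
    y, z = y * math.cos(t) - z * math.sin(t), y * math.sin(t) + z * math.cos(t)
    return math.degrees(math.asin(z)), math.degrees(math.atan2(y, x))
```

With this convention, recenter(45, 90, 45, 90) comes back as (0, 0): the chosen point lands at the center of the new map, which is exactly what the latlong transform above does to the whole image.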

Posted in Mapping, World Building

Painting Planets Using Blender’s Texture Paint Tool

I’ve been interested for a while in using the Texture Paint tool from Blender to paint on the globe. There are a few things that you need to know how to do to make this technique work. First, you need a uv-mapped sphere. This can be done in a variety of ways, but I still find the icomap method I’ve used before to be the most reliable and effective.

The Tissot indicatrix of an icomap. Each of the “circles” on the map represents a true circle on the spherical surface, all of the same size.

The uv-map image texture doesn’t necessarily have to fit any particular projection if you’re going to use the POV-Ray spherical camera to reproject the globe, but it is best to have a mapping with a minimum of distortion of shape and area. This allows each pixel to reliably represent roughly the same surface area. The icomap projection does a very good job of this, as shown by the Tissot indicatrix.

The fact that all those little “circles” have nearly the same area and are close to circular is a good indicator that distortion is minimal over nearly all of the surface. Although the equirectangular projection (also referred to as “spherical,” “latlong,” “geographic,” “rectangular,” or “plate carrée”) is a good universal interchange format, it is distorted enough that creating the appropriate uv map the Texture Paint tool requires is miserable, so essentially I’ll paint to an icomap and use POV-Ray to convert that into an equirectangular map.
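To put a rough number on that distortion: every row of an equirectangular map spans the same number of pixels, but the parallels shrink toward the poles, so the ground area each pixel represents falls off with the cosine of latitude. A quick sketch:

```python
import math

def pixel_area_scale(lat_deg):
    """Relative ground area represented by one pixel of an
    equirectangular map at a given latitude (equator = 1.0)."""
    return math.cos(math.radians(lat_deg))

# at 60 degrees each pixel covers half the ground it does at the
# equator; near the poles it covers almost nothing, which is why
# painting directly on an equirectangular canvas goes wrong there
```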

I’ve been wanting to do a video tutorial for a while, mainly ’cause I’m not sure how clearly I’m describing the things I’m doing. Unfortunately, this being the first ground-up procedure of its kind, things got a little long. I’ve decided to split the procedure into several separate sections. Along with each video, I’ve included a rough transcript. The video is good for showing where things are and roughly what is being done, but when you’ve got that out of the way and just want to refer back to find forgotten keystrokes and such, text is much more efficacious.

NOTE: My limited talents as an actor, the learning curve of video editing software, and problems uploading to Youtube have all conspired to greatly delay this post. With that in mind, I have decided to post the text now and add in links to the videos as I can get them together. In the meantime, Andrew Price has tutorials for pretty much everything I know about Blender so far. If you dig into his tutorials and have a look at a few of the related videos on the Youtube sidebar, the text here should be pretty clear. My videos may even be anticlimactic. Oh well…

Part 1: Setting Up the Spherical Canvas

This video will demonstrate the process of creating and preparing the globe which will be the canvas on which the map will be painted.

Press shift-A to bring up the Add Object menu. Under Mesh, select Icosphere to create our globe.

The default Icosphere, as it turns out, is not a true icosahedron. It is a geodesic spherical shape consisting of triangular faces, but it’s not the classic d20 shape we all know and love. This is perfectly usable with the POV-Ray method for creating equirectangular maps, but I’d like to have a proper icomap straight out of the box. Just personal preference, that…

Hit the T-key with the mouse pointer in the 3D View space to bring up the Tools Panel, and at the bottom there will be a pane that allows editing of the parameters for the last command entered. The top entry for the Add Ico Sphere pane controls Subdivisions. Change it from the default setting of 2 to 1 to get a true icosahedron.

Now go into Edit Mode. Select Mesh>Faces>Shade Smooth in the 3D View footer menu, or click the Smooth button under Shading in the Tools pane. Hit the A-key to deselect all faces and, making sure the select mode is set to Edges, select an edge at the north pole. Holding down shift, select the other four edges around the north pole and all five edges around the south pole. Now select an edge crossing the “waist” of the icosahedron. This is somewhat arbitrary, but if we want a particular orientation to our icomap, it pays to select the edge with care and take note of its direction.

Looking at the icomap generated by Flexify, we see that the edges on either side trend from northwest to southeast. The best edge to select, in that case, would be the one that coincides with the positive Y-axis. The best way to find this edge is to look at the icosahedron in Object View. Later, this information will be used to select an appropriate look_at value for the POV-Ray camera. So make sure to write down your choice of direction.

Wilbur-generated icomaps have the opposite orientation, so the edge passing through the negative Y-axis would be appropriate.

Classic Traveller-style icomaps cut through the middle of a triangle. The best way to reproduce this effect would be to cut the triangle that the negative X-axis passes through using the knife tool. In the accompanying video, I demonstrate the Traveller style, both because it is the most challenging, and because it allows me to introduce a new and very useful tool, the knife. With the appropriate face and the north pole in view(the knife tool interferes with the view controls, a bit of a bug, frankly), hit the K key to use the knife. Click the vertex at the bottom of the triangle and, holding down the control key, select the midpoint of the upper edge. Click again on the north pole vertex and hit return to make the cut.

In the Shading/UVs tab of the Tools panel, under UVs, is, not surprisingly, the UV Mapping section. Below the Unwrap menu button click on the Mark Seam button. If you look at the 3D View canvas, you will see that the selected edges are highlighted in red to show that they are seams. Now, select all vertices of the icosahedron and click Unwrap. In the UV/Image editor, we will find that the unwrapped faces are displayed. In the UVs menu, check snap to pixels and constrain to image bounds. Using the Grab, Rotate, and Scale tools we will center the unwrapped faces and stretch them to almost fill the image space. “Almost,” because, even with large bleed, I had problems with blank spaces where edges were too close to the image boundaries. I’m hoping a bit of a margin will alleviate that.

Next, in the Properties View, give the icosahedron a basic material. Click the material tab, a coppery-colored ball with a faint checkerboard pattern. Under that tab, click the New button to create a new material. Don’t worry about the settings for now. It might be helpful to give the material a more informative name like “planet_surface” if you like. In the Options section of the material, make sure to check the UV Project box.

The last step in preparing the globe which will be the canvas for our map, will be adding a subdivision modifier. In the Properties view, select the tab with the wrench icon. This is the Object Modifiers tab. Under that tab, you will find a menu button which reads Add Modifier. Click on that and select Subdivision Surface under Generate. Under options, uncheck Subdivide UVs. Set Subdivisions in View to about 3 and in Render to about 4. If applied, 3 subdivisions will result in 960 faces and 4 subdivisions in 3840. Keeping those face counts down could speed things up a lot down the line when painting.
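The face counts quoted above follow from how Catmull-Clark subdivision treats a triangle mesh: the first level splits each of the icosahedron's 20 triangles into 3 quads, and each further level quadruples the quad count. A quick sanity-check sketch (the function name is mine, not anything in Blender):

```python
# Face count of an icosahedron after n levels of Catmull-Clark
# subdivision: level 1 gives 20 * 3 quads, each later level quadruples.
def subdivided_faces(levels: int) -> int:
    if levels == 0:
        return 20          # the bare icosahedron
    return 20 * 3 * 4 ** (levels - 1)

print(subdivided_faces(3))  # 960
print(subdivided_faces(4))  # 3840
```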

Note that, while the particular way of marking seams for a Traveller-style icomap may be suitable for converting maps to equirectangular, the “sphere” that results is pretty badly distorted for display purposes. You can fix this with Spherize in Edit or Object mode(Shift-alt/option-S)!

Now on to Texture Paint!

Part 2: Painting the Texture

First off, in the main menu bar, open File>User Preferences… Look at the Add-Ons tab. Make sure Paint:Paint Palettes, Paint:Texture Paint Layer Manager and Import/Export:Wavefront OBJ format are all checked.

Set the 3D View to Texture Paint mode.

In the Tools panel(T-key), look under the Tools tab for the Color Palette section. This is probably an empty palette to start with. To add a color, click on the plus-icon next to Index and set the color in the usual manner. You can add a name for the color entry in the text field next to the color wheel icon. Repeat this process till you have your desired palette.

To save your new palette, start by clicking on the file folder icon at the bottom of the Color Palette section. This will allow you to select the directory from which to choose existing palettes or in which to save your new palette. At the top of the palette, there will be a pull-down menu saying Preset, find the plus-icon next to that and press it. Enter a name for your palette and press OK. The palette should be saved.

You’ll find the paintbrush settings in the Tools panel, at the top of the Tools tab. In this section, you can set the size and strength of your painting brush. Next to the radius and strength fields, there are buttons which allow control of the attributes by pen pressure. You can also set color here, but the palette will allow you to reliably repeat color selections.

Above the Color Palette section, you’ll find the Curve section. This allows you to set the softness of the edges and the sharpness of the center of the brush.

Finally, near the bottom of the Options tab of the Tools panel, you’ll find a Bleed option. A large bleed will make it less likely to render grey edges on the surface. The larger, the safer. If you want to use the icomap you paint directly, it’s best to leave this at zero. Bleed also makes painting a bit slower…

The next point is the use of Texture Paint layers. Near the bottom of the numeric panel(N-key) are two sections of interest.

The first section is Texture Paint Layers. This allows you to select any materials associated with the object and, below that, any existing textures that are part of the selected material. To edit any given texture, simply click on the paintbrush icon next to the texture’s name. If you don’t see any textures with paintbrush icons then you need to read the next paragraph.

Beneath Texture Paint Layers, you’ll find the Add Paint Layers section. If you don’t yet have a diffuse color texture, click on the Add Color button to add a new layer. Give it a name and you should find that texture listed in the Texture Paint Layers section above.

At this point just start painting on the globe.

Setting up a bump map layer can be a bit more complicated. While clicking Add Bump is simple, as far as I can tell it creates an 8-bit image. For bump maps, it’s best to use at least 16 bits to avoid stair-stepping. Also, part of the intent of this exercise is to create detailed map data down the line.

With that in mind, we’re still going to create the new bumpmap texture by clicking Add Bump. Now go into the UV/Image Editor View and find the button with the plus sign next to the image browser pulldown. Click that, and in the window that pops up enter a name and a desired resolution. Make sure to check “32 bit Float” before you hit OK. In the Texture Properties, make sure to select the 32-bit image you just made in the Image section, and in the Mapping section, make sure the Coordinates are set to UV and that your UVMap shows in the Map selection area. In the Influence section, make sure Normal, under Geometry, is checked and everything else is unchecked. Make sure the normal influence is a positive value. I’d go with 1.0 while painting. You can adjust the value (probably downward) later, to make it pretty. Your canvas is now ready to paint in the bumps.

For best results, use one of the pointier brush Curves, fairly low Strength and Radius with pressure-sensitivity for both, and set the Blend to Add(to raise) or Subtract(to lower). For most purposes, leaving the color set to white is perfectly good. You should now be prepared to start painting bumps!

If your computer is decently fast you should use Textured viewport shading. I use a 5-year-old bottom-of-the-line MacBook, so things get a little boggy, but it’s still usually worthwhile to be able to see what my bumpmapping looks like.

Once you’re done, save the color map to png and the bump map to 16 bit TIFF. I’d love to use full 32-bit OpenEXR, but my conversion paths are limited.

Part 3: Flattening to Equirectangular

In the main menu, select File>Export>Wavefront(.obj) to export the globe. Give it a name and save it.

Now open Wings3D. In the menu select File>Import>Wavefront(.obj)…, and find your saved globe object. Now, we’re going to turn right around and export to POV-Ray(File>Export>POV-Ray(.pov)). Wings3D is a capable and highly useful modeling tool, but this time all we’re doing is using it to translate between Blender and POV-Ray. Go figure…

Now we can go to our favorite text editor to change some settings in the generated .pov file. In global_settings, set the ambient_light rgb vector to <0.7,0.7,0.7>. If this proves too dim after rendering, you can increase it later. Set the camera location equal to <0,0,0>. Comment out (//) the right, up, angle and sky lines. Set the camera look_at according to the location where you made the waist seam in the UV-mapping stage. Note that Y and Z are reversed in POV-Ray relative to Blender. So, if your cut was across the positive Y-axis, you’ll want to look at the negative Z-axis (<0,0,-1>). For the Traveller-style map, my cut was across the negative X-axis, so in my example, I’d set the look_at to <1,0,0>. Comment out the light_source block (nest it between /* and */). Add a uv_mapping statement to the texture. Go down to ambient, and comment out the color_rgbf statement. Add image_map { png "name-of-map-image" }. You should be able to render now and save the image to disk.

Finally, we open the resulting image in Photoshop and flip the canvas Image>Image Rotation>Flip Canvas Horizontal. The analogous command in GIMP would be Image>Transform>Flip Horizontally. Save the result and you have your image as a proper equirectangular map.

Part 4: Converting the Bumpmap

To do the same for the bumpmap, you need to be able to convert the 32-bit image into something that POV-Ray can render to. You could possibly use Landserf to convert the 32-bit single-channel data into an RGBA-separated png image, project that in POV-Ray, then come back to Landserf to recombine. You’d want to save the 32-bit bumpmap to OpenEXR in Blender, use Photoshop to save that to a 32-bit TIFF, then use GDAL to convert the TIFF to something Landserf can read (like BT).


Posted in Mapping, World Building | Tagged , , , , , , , , , | Leave a comment

A Resource for Learning Quantum GIS

I found a nice set of video tutorials for learning the use of QGIS at Mango Map. The first module introduces the QGIS interface. The second module goes over the basics of creating a map. It looks like further posts are being made at roughly weekly intervals(like my own blog… in theory).

Hopefully this will be a good introduction to the use of the program, even if it doesn’t necessarily delve deeply into the particular problems of people trying to use QGIS to create maps of imaginary places. Fantasy mapping is still mapping, so the basics will be useful.

The Astrographer

Posted in Mapping | Tagged , , , , | Leave a comment

Big Planet Keep on Rolling

Same planet, slightly better render…

My intended post for last week took so long that I decided to simplify things a bit. I was going to discuss prettifying the tectonics.js output and making a Blender animation of the prettified planet spinning. I’ve learned a lot about QGIS (and Wilbur) while trying to do this, but I’m still groping around. I’m not saying anything against tectonics.js; it’s my fault for pretty much ignoring too many of the useful hints the program gives and misinterpreting too many of the others. I also habitually underestimate just how wide my mountain ranges are. Anyway, for now I’m just going to focus on the animation using the planet I have. Not Earth, that’s just cheating, but I’m using the not altogether successful planet I tried to create over the last two weeks. I need a quicker workflow; one that doesn’t involve constantly googling for instructions…

I’ll start with a sphere similar to the one I put together for an earlier article on displaying your world. I replaced the bedrock color I previously used for the diffuse and specular color with a hypsometric texture I created in Wilbur. My original intent was to create a more realistic satellite view with a simple climate model. That would have been awesome!

I used a 16-bit TIFF image for the bumpmap. Sixteen-bit PNGs seem to fail in Blender, so I used QGIS to convert my PNG to TIFF. I also wanted to create a subtle displacement map as well, but the sixteen-bit TIFF inflated the planet into a lumpy mess several times as large as the undisplaced sphere even with a nearly zero displacement influence. I decided to use a more conventional 8-bit version of the map for a separate displacement texture.

First thing I tried was to use the gdal_translate tool to convert my 32-bit floating point BT elevation map into an 8-bit png.

gdal_translate -ot Byte -of PNG ${input_file} ${output_file}
, where ${input_file} is the name and path of the input file,
and ${output_file} is the desired name and location for the converted file.

Unfortunately, this failed badly. Basically, all the elevations above 255 meters were clipped. Instead, I used the Raster Calculator to make an intermediate file with the following expression.
${input_elevation_layer} / 32.0
This will result in another 32-bit elevation file with values in the range of 0..255. It helped that I started with an elevation range from sea level to less than 8000 meters. The divisor may need to be larger if the range of values is larger, and can be smaller if the range is smaller. I then used the gdal_translate command above to convert that into an 8-bit png.
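The Raster Calculator step boils down to dividing the float elevations by a constant, then clamping and truncating to bytes. A pure-Python sketch of the same idea (the list below just stands in for the BT elevation grid; it isn't real data from my map):

```python
# Scale float elevations into 0..255, then quantize to 8-bit values,
# mimicking the Raster Calculator division followed by Byte conversion.
def to_8bit(elevations, divisor=32.0):
    out = []
    for e in elevations:
        v = int(e / divisor)             # e.g. 0..8000 m -> 0..250
        out.append(max(0, min(255, v)))  # clamp into byte range
    return out

print(to_8bit([0.0, 1600.0, 4800.0, 7936.0, 9000.0]))
# [0, 50, 150, 248, 255]
```

Note how the 9000 m sample still clips to 255; that is why the divisor has to be chosen from the actual elevation range.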

Since all I wanted was a very small relief like that on some globes, the 8-bit was sufficient. Unless you’re using something like Terragen, there’s really no way to make a displacement map in realistic proportions; real planets are smoother than cue balls, proportionally.

For the bumpmap I had used a normal influence of 12.0; for the relief texture, I used a displacement influence of 0.12, even with values less than 256.

I decided to discard the clouds and atmospheric effects. Maybe this is a desk globe. Perhaps I should also model a stand for the thing… A slightly less subtle displacement might be in order.

Now that we have a kinda decent globe, let’s animate the thing. I started at frame zero, with the rotation set to zero. In the “tool palette” to the left of the 3d view(toggle it on and off with the “t” key), I scrolled down to find the keyframes section, clicked “insert” and selected “Rotation.”

At the bottom of the Timeline editor there are three numeric entry fields. The first two are labelled “Start:” and “End:.” Predictably, these denote the starting and ending frames of the animation. This will be useful later. To the left of these is another numeric field with the current frame number displayed. Click on this and enter a frame number for the next desired keyframe. I chose to put in keyframes every 65 frames, so 0, 65, 130, 195, and 260. At each keyframe, I went to the numeric palette to the right of the 3D View (toggled with the “n” key); near the top you’ll find “transformations,” where I added 180° to the Z-axis rotation. So 0, 180, 360, 540 and 720.

With that done, it was time to go to the Properties editor and select the Render tab. There are sections here controlling the display of renders, resolution, anti-aliasing and the like. I invite you to experiment with other sections, but for this I’ll focus on the Dimensions and Output sections. In Dimensions select the desired resolution and frame rate. I went with a 960 by 800 pixel image size and 16 frames per second. If you change the resolution you may need to (g)rab and (r)otate the camera to restore the composition of your scene. I’ll wait.

Below the X and Y resolution there is an additional percentage field. This allows you to create fast test renders without messing around with the camera every time. This is a pretty simple project, but when you are dealing with more complex scenes and longer render times, it’s nice to be able to take a quick look at what your scene looks like to the camera.

Under the Output section, first select an output path. Since I’m going to render all the frames separately and stitch them together later, I decided to create a directory specifically for my render. Check Overwrite and File Extensions; you may need to redo things…

Below the Placeholders checkbox, which I leave unchecked, there is an output format menu with a number of image and movie formats. You could choose a movie format like MOV, AVI or MPEG, but I’m going with png for individual numbered frames. I believe you can give a C printf-style name template, but I’m not entirely sure.

To render an image press F12, to render an animation sequence, press ctrl-F12. You can also select them under Render in the Info panel menu.

Initially, I set the animation to start at frame 1, the frame after the initial keyframe and to end at frame 260, the last keyframe which returns the globe to its initial rotation. This is supposed to allow looping without hesitation, but when I rendered an avi internally, it seemed like the animation was accelerating up to speed and decelerating at the end. I’m not sure why this was happening, but the render time was a bit long, so I figured I’d render out a full rotation from the middle of the sequence and stitch images together in an outside program. Thus, I set start to 66 and end to 195. Once all the images were rendered and saved under names of the form 0066.png .. 0195.png, it was time for stitching.

From my understanding, ffmpeg is the best free standalone program for stitching together images into movies (and a lot of other movie-related tasks; it’s kind of the ImageMagick of movies).

In my unix terminal I enter the following command:
ffmpeg -r 16 -vsync 1 -f image2 -start_number 0066 -i %04d.png -vcodec copy -qscale 5 ${output_file}

-r 16 sets the speed to 16 frames per second.

-f image2 tells it to accept a sequence of images as input.

-start_number 0066 is important. It tells the program to start reading from the image with frame number 66. Otherwise, if it doesn’t find an image with an index less than five it will assume files are missing and punt out.

-i %04d.png is a format descriptor telling ffmpeg where to look for input files. ${output_file} (a placeholder for whatever filename you choose) is the name and format of the desired output movie file.

The rest of the options may or may not matter. I’m not taking chances…

Next time, maybe I’ll add sound…

Comments, questions, corrections or suggestions are welcome! Thank you for your patience,
The Astrographer

Posted in Mapping, World Building | Tagged , , , , , | Leave a comment

Geometry for Geographers


Today, I’d like to share a few geometric formulae I’ve found useful in worldbuilding. There are formulae here for determining the distance between two points with known latitudes and longitudes, the inverse function (latitude and longitude of a destination given a known origin location, a direction and a distance), the area of polygons on a sphere, the distance to the horizon for a planet of a given radius and viewpoint height, and the area of a circle of given radius on a sphere.

Great Circle Distance Between Two Points on a Sphere

If you know the latitude and longitude of two points on a sphere, you can figure out the arc distance in radians between those points with just a little trigonometry. Point A is at latitude lat_a, longitude lon_a. Point B is at latitude lat_b, longitude lon_b. The difference of longitude is P = lon_b − lon_a.

The arc distance is θ = arccos(sin(lat_a)·sin(lat_b) + cos(lat_a)·cos(lat_b)·cos(P)).

Thus the distance is d = R·θ.

Once you know the distance, you can readily calculate the initial bearing from point A to point B. The bearing is β = atan2(sin(P)·cos(lat_b), cos(lat_a)·sin(lat_b) − sin(lat_a)·cos(lat_b)·cos(P)). You can figure out the final bearing by interchanging b and a. This will prove useful in determining the area of spherical polygons. Keep it in mind.
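The distance and bearing formulas above are easy to check in code. A sketch in Python (inputs in degrees; the arc distance comes back in radians, so multiply by R for a ground distance; the bearing comes back in degrees clockwise from north):

```python
from math import acos, atan2, cos, sin, radians, degrees

# Great-circle arc distance between A and B, in radians.
def arc_distance(lat_a, lon_a, lat_b, lon_b):
    la, lb = radians(lat_a), radians(lat_b)
    p = radians(lon_b - lon_a)          # difference of longitude, P
    return acos(sin(la) * sin(lb) + cos(la) * cos(lb) * cos(p))

# Initial bearing from A toward B, degrees clockwise from north.
def initial_bearing(lat_a, lon_a, lat_b, lon_b):
    la, lb = radians(lat_a), radians(lat_b)
    p = radians(lon_b - lon_a)
    y = sin(p) * cos(lb)
    x = cos(la) * sin(lb) - sin(la) * cos(lb) * cos(p)
    return degrees(atan2(y, x)) % 360.0

# A quarter of the equator: 90 degrees of arc, heading due east.
print(arc_distance(0, 0, 0, 90))     # ~1.5708 (pi/2 radians)
print(initial_bearing(0, 0, 0, 90))  # 90.0
```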

Destination Given Distance and Bearing from Origin Point

Given a known point at lat_a, lon_a, a planet’s radius, R, a bearing, θ, and a distance, d, how do we find the new point lat_b, lon_b? Note: since Mathematica’s implementation of the atan2(y,x) function is apparently functionally identical to its atan(y/x) function (the same function name overloaded with inverted input order: ArcTan[x,y] == ArcTan[y/x]), I decided to just go with the y/x form. In a Java or Python or, apparently, JS program, you’d use atan2(num, denom) instead.



For further information, check this page out.
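The equation images for this section did not survive, so here is my reconstruction of the standard destination-point formulas in code. It's a sketch, not the exact form from the page linked above: lat_b = asin(sin(lat_a)·cos(d/R) + cos(lat_a)·sin(d/R)·cos(θ)), with the longitude recovered via atan2.

```python
from math import asin, atan2, cos, sin, radians, degrees

# Destination point given origin (degrees), bearing (degrees clockwise
# from north), distance d, and planet radius R (same units as d).
def destination(lat_a, lon_a, bearing, d, R):
    la = radians(lat_a)
    th = radians(bearing)
    delta = d / R                        # angular distance travelled
    lb = asin(sin(la) * cos(delta) + cos(la) * sin(delta) * cos(th))
    lon_b = radians(lon_a) + atan2(
        sin(th) * sin(delta) * cos(la),
        cos(delta) - sin(la) * sin(lb))
    return degrees(lb), degrees(lon_b)

# Due east a quarter-circumference from (0, 0) on an R = 6371 planet:
from math import pi
lat, lon = destination(0, 0, 90, pi / 2 * 6371, 6371)
print(lat, lon)  # should come out at (or extremely near) 0 and 90
```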

Area of Spherical Polygons

The formula for the area of a spherical triangle is pretty simple looking. Just S = (A + B + C − π)·R². A, B and C are the three inner angles of the triangle, R is the radius of the sphere and S is the surface area of the triangle. For each vertex, use the Great Circle formulas above to determine the distance and bearing to both neighboring vertices. The inner vertex angle is equal to the difference between the bearings to the two neighboring vertices.

The same principle is used to find the area of more complicated polygons. In the general polygon case, though, it’s important to keep track of convex and concave angles. It might be necessary to make diagrams to keep track of which angles are internal.

S = (σ − (n − 2)·π)·R², where σ is the sum of the inner angles in radians, and n is the number of sides.
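The spherical-excess formula is short enough to verify directly. The octant triangle (three right angles, like the one bounded by the equator and two meridians 90° apart) should cover exactly one eighth of the sphere:

```python
from math import pi

# Area of a spherical polygon from its interior angles (radians):
# S = (sigma - (n - 2) * pi) * R**2, with sigma the angle sum.
def spherical_polygon_area(angles_rad, R):
    n = len(angles_rad)
    return (sum(angles_rad) - (n - 2) * pi) * R ** 2

R = 1.0
print(spherical_polygon_area([pi / 2] * 3, R))  # ~1.5708: pi/2, 1/8 of 4*pi
```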

Distance to the Horizon

Figure 1

As shown in figure 1, point, P, is our central point of interest, point, H, is the point on the horizon of view from P, point, A, is the point on the surface directly beneath P, angle, θ, is the angle subtended, at the center of the sphere, between points P and H. As before, R is the radius of the sphere.

D, the direct distance between points P and H, is also known as the slant distance. The formula for slant distance is D = sqrt(h·(2R + h)), where h is the distance of the viewing point above the ground (length PA).

The value for θ would be θ = arccos(R / (R + h)).

The distance along the arc AH is d = R·θ, with θ in radians. Thus, the arc distance, which I call the map distance, since it would be the distance measured on a map, would be d = R·arccos(R / (R + h)).

The area of a planet observable from a point at height, h, is A = 2π·R²·h / (R + h).

The fraction of a planet observable from that height would be f = h / (2·(R + h)).

For reference, the total surface area of the planet is 4π·R².
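All of the horizon quantities above fall out of the same right triangle, so they are easy to bundle into one function. A sketch, using an Earth-sized radius purely as a familiar test value:

```python
from math import acos, pi, sqrt

# Horizon quantities for a viewpoint at height h above a sphere of
# radius R: slant distance D, central angle theta, arc ("map")
# distance d, observable cap area, and observable fraction.
def horizon(R, h):
    D = sqrt(h * (2 * R + h))             # slant distance PH
    theta = acos(R / (R + h))             # central angle, radians
    d = R * theta                         # arc distance along the ground
    area = 2 * pi * R ** 2 * h / (R + h)  # observable spherical cap
    frac = area / (4 * pi * R ** 2)       # equals h / (2 * (R + h))
    return D, theta, d, area, frac

# Eye height 2 m on an Earth-sized planet (R = 6371 km):
D, theta, d, area, frac = horizon(6_371_000.0, 2.0)
print(round(D, 1))  # 5048.2 -- about 5 km to the horizon
```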

Area of a Circle on the Surface of a Sphere

Figure 2

My next formula will be for the surface area of the circular region within a distance, d, of a point, P, on the surface of a sphere of radius, R, as shown in figure 2. From page 128 of the CRC Standard Mathematical Tables, 26th edition (similar information, with 3d figures, here), I find under spherical figures that the zone and segment of one base has a surface area of S = 2π·R·h. Incidentally, the volume of this portion of the sphere is V = (π·h²/3)·(3R − h), not that we’re using that here. The arc distance from P to the edge of the area is d = R·θ. An examination of the geometry leads us to the conclusion that h = R·(1 − cos θ), so the area of the spherical surface within angular distance θ of the center is S = 2π·R²·(1 − cos θ) = 2π·R²·(1 − cos(d/R)).
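The cap-area formula has two handy limiting cases that make good sanity checks: at d = πR the circle wraps the whole sphere, and at d = (π/2)·R it covers exactly a hemisphere. A sketch:

```python
from math import cos, pi

# Surface area within arc distance d of a point on a sphere of
# radius R, from the zone formula S = 2*pi*R*h with h = R*(1 - cos(d/R)).
def circle_on_sphere_area(d, R):
    return 2 * pi * R ** 2 * (1 - cos(d / R))

R = 1.0
print(circle_on_sphere_area(pi * R, R))      # 4*pi*R^2: the whole sphere
print(circle_on_sphere_area(pi / 2 * R, R))  # 2*pi*R^2: a hemisphere
```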

Posted in World Building | Leave a comment