Painting Planets Using Blender’s Texture Paint Tool

I’ve been interested for a while in using Blender’s Texture Paint tool to paint on the globe. There are a few things you need to know how to do to make this technique work. First, you need a UV-mapped sphere. This can be done in a variety of ways, but I still find the icomap method I’ve used before to be the most reliable and effective. The UV-map image texture doesn’t necessarily have to fit any particular projection if you’re going to use the POV-Ray spherical camera to reproject the globe, but it is best to have a mapping with a minimum of distortion of shape and area, so that each pixel reliably represents roughly the same surface area. The icomap projection does a very good job of this, as shown by the Tissot indicatrix.

The Tissot indicatrix of an icomap. Each of the “circles” on the map represents a true circle on the spherical surface, all of the same size.

The fact that all those little “circles” have nearly the same area and are close to circular is a good indicator that distortion is minimal over nearly all of the surface. Although the equirectangular projection (also referred to as “spherical,” “latlong,” “geographical,” “rectangular,” or “plate carrée”) is a good universal interchange format, it is so distorted that it’s miserable to create an appropriate UV map from it, which is what the Texture Paint tool requires. So, essentially, I’ll paint to an icomap and use POV-Ray to convert that into an equirectangular map.

I’ve been wanting to do a video tutorial for a while, mainly ’cause I’m not sure how clearly I’m describing the things I’m doing. Unfortunately, this being my first ground-up procedure of this kind, things got a little long, so I’ve decided to split the procedure into several separate sections. Along with each video, I’ve included a rough transcript. The video is good for showing where things are and roughly what is being done, but once you’ve got that out of the way and just want to refer back to find forgotten keystrokes and such, text is much more efficient.

NOTE: My limited talents as an actor, the learning curve of video editing software, and problems uploading to YouTube have all conspired to greatly delay this post. With that in mind, I have decided to post the text now and add links to the videos as I get them together. In the meantime, Andrew Price has tutorials for pretty much everything I know about Blender so far. If you dig into his tutorials and have a look at a few of the related videos in the YouTube sidebar, the text here should be pretty clear. My videos may even be anticlimactic. Oh well…

Part 1: Setting Up the Spherical Canvas

This video will demonstrate the process of creating and preparing the globe which will be the canvas on which the map will be painted.

Press shift-A to bring up the Add Object menu. Under Mesh, select Icosphere to create our globe.

The default Icosphere, as it turns out, is not a true icosahedron. It is a geodesic spherical shape consisting of triangular faces, but it’s not the classic d20 shape we all know and love. This is perfectly usable with the POV-Ray method for creating equirectangular maps, but I’d like to have a proper icomap straight out of the box. Just personal preference, that…

Hit the T-key with the mouse pointer in the 3D View space to bring up the Tools Panel, and at the bottom there will be a pane that allows editing of the parameters for the last command entered. The top entry for the Add Ico Sphere pane controls Subdivisions. Change it from the default setting of 2 to 1 to get a true icosahedron.
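If you prefer the Python console, the same thing can be done in a couple of lines. This is just a sketch; the operator’s keyword arguments vary a bit between Blender versions:

import bpy
# add a true icosahedron: an icosphere with a single subdivision level
bpy.ops.mesh.primitive_ico_sphere_add(subdivisions=1)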

Now go into Edit Mode. Select Mesh>Faces>Shade Smooth in the 3D View footer menu, or click the Smooth button under Shading in the Tools panel. Hit the A-key to deselect all faces and, making sure the select mode is set to Edges, select an edge at the north pole. Holding down Shift, select the other four edges around the north pole and all five edges around the south pole. Now select an edge crossing the “waist” of the icosahedron. This choice is somewhat arbitrary, but if we want a particular orientation to our icomap, it pays to select the edge with care and take note of its direction.

Looking at the icomap generated by Flexify, we see that the edges on either side trend from northwest to southeast. The best edge to select, in that case, would be the one that coincides with the positive Y-axis. The best way to find this edge is to look at the icosahedron in Object View. Later, this information will be used to select an appropriate look_at value for the POV-Ray camera, so make sure to write down your choice of direction.

Wilbur-generated icomaps have the opposite orientation, so the edge passing through the negative Y-axis would be appropriate.

Classic Traveller-style icomaps cut through the middle of a triangle. The best way to reproduce this effect is to cut the triangle that the negative X-axis passes through using the knife tool. In the accompanying video, I demonstrate the Traveller style, both because it is the most challenging, and because it allows me to introduce a new and very useful tool: the knife. With the appropriate face and the north pole in view (the knife tool interferes with the view controls, a bit of a bug, frankly), hit the K-key to use the knife. Click the vertex at the bottom of the triangle and, holding down the Control key, click the midpoint of the upper edge. Click again on the north pole vertex and hit Return to make the cut.

In the Shading/UVs tab of the Tools panel, under UVs, is, not surprisingly, the UV Mapping section. Below the Unwrap menu button, click on the Mark Seam button. If you look at the 3D View canvas, you will see that the selected edges are highlighted in red to show that they are seams. Now, select all vertices of the icosahedron and click Unwrap. In the UV/Image Editor, we will find the unwrapped faces displayed. In the UVs menu, check Snap to Pixels and Constrain to Image Bounds. Using the Grab, Rotate, and Scale tools, center the unwrapped faces and stretch them to almost fill the image space. “Almost,” because, even with a large bleed, I had problems with blank spaces where edges were too close to the image boundaries. I’m hoping a bit of a margin will alleviate that.

Next, in the Properties view, give the icosahedron a basic material. Click the Material tab, a coppery-colored ball with a faint checkerboard pattern. Under that tab, click the New button to create a new material. Don’t worry about the settings for now, though it might be helpful to give the material a more informative name like “planet_surface”. In the Options section of the material, make sure to check the UV Project box.

The last step in preparing the globe that will be the canvas for our map is adding a Subdivision Surface modifier. In the Properties view, select the tab with the wrench icon; this is the Object Modifiers tab. Under that tab, you will find a menu button that reads Add Modifier. Click on that and select Subdivision Surface under Generate. Under the options, uncheck Subdivide UVs. Set Subdivisions in View to about 3 and in Render to about 4. If applied, 3 subdivisions will result in 960 faces and 4 subdivisions in about 3,840. Keeping those face counts down can speed things up a lot down the line when painting.
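For reference, here is the same modifier set up from the Python console, as a sketch against the 2.7-era API:

import bpy
obj = bpy.context.active_object  # the icosahedron
mod = obj.modifiers.new(name="Subsurf", type='SUBSURF')
mod.levels = 3              # Subdivisions in View
mod.render_levels = 4       # Subdivisions in Render
mod.use_subsurf_uv = False  # the Subdivide UVs checkbox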

Note that, while this particular way of marking seams for a Traveller-style icomap may be suitable for converting maps to equirectangular, the “sphere” that results is pretty badly distorted for display purposes. You can fix this with Spherize (Shift-Alt/Option-S) in Edit or Object mode!

Now on to Texture Paint!

Part 2: Painting the Texture

First off, in the main menu bar, open File>User Preferences… and look at the Add-Ons tab. Make sure Paint: Paint Palettes, Paint: Texture Paint Layer Manager, and Import-Export: Wavefront OBJ format are all checked.

Set the 3D View to Texture Paint mode.

In the Tools panel (T-key), look under the Tools tab for the Color Palette section. This is probably an empty palette to start with. To add a color, click on the plus-icon next to Index and set the color in the usual manner. You can add a name for the color entry in the text field next to the color wheel icon. Repeat this process till you have your desired palette.

To save your new palette, start by clicking on the file folder icon at the bottom of the Color Palette section. This will allow you to select the directory from which to choose existing palettes, or in which to save your new palette. At the top of the palette there will be a pull-down menu reading Preset; find the plus-icon next to that and press it. Enter a name for your palette and press OK. The palette should be saved.

You’ll find the paintbrush settings in the Tools panel, at the top of the Tools tab. In this section, you can set the size and strength of your painting brush. Next to the radius and strength fields, there are buttons which allow control of the attributes by pen pressure. You can also set color here, but the palette will allow you to reliably repeat color selections.

Above the Color Palette section, you’ll find the Curve section. This allows you to set the softness of the edges and the sharpness of the center of the brush.

Finally, near the bottom of the Options tab of the Tools panel, you’ll find a Bleed option. A large bleed makes it less likely that grey edges will render on the surface; the larger, the safer. If you want to use the icomap you paint directly, though, it’s best to leave this at zero. Bleed also makes painting a bit slower…

The next point is the use of Texture Paint layers. Near the bottom of the numeric panel (N-key) are two sections of interest.

The first section is Texture Paint Layers. This allows you to select any materials associated with the object and, below that, any existing textures that are part of the selected material. To edit any given texture, simply click on the paintbrush icon next to the texture’s name. If you don’t see any textures with paintbrush icons, then you need to read the next paragraph.

Beneath Texture Paint Layers, you’ll find the Add Paint Layers section. If you don’t yet have a diffuse color texture, click on the Add Color button to add a new layer. Give it a name and you should find that texture listed in the Texture Paint Layers section above.

At this point just start painting on the globe.

Setting up a bump map layer can be a bit more complicated. While clicking Add Bump is simple, as far as I can tell it creates an 8-bit image. For bump maps, it’s best to use at least 16 bits to avoid stair-stepping. Also, part of the intent of this exercise is to create detailed map data down the line.

With that in mind, we’re still going to create the new bumpmap texture by clicking Add Bump. Now go into the UV/Image Editor view and find the button with the plus sign next to the image browser pulldown. Click that, and in the window that pops up, enter a name and the desired resolution, and make sure to check “32 bit Float” before you hit OK. In the Texture properties, make sure the 32-bit image you just made is selected in the Image section, and, in the Mapping section, make sure the Coordinates are set to UV and that your UVMap shows in the Map selection area. In the Influence section, make sure Normal, under Geometry, is checked and everything else is unchecked. Make sure the Normal influence is a positive value; I’d go with 1.0 while painting. You can adjust the value (probably downward) later, to make it pretty. Your canvas is now ready to paint in the bumps.
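Incidentally, the 32-bit float canvas can also be created from the Python console. A minimal sketch, with the name and resolution as placeholders:

import bpy
# float_buffer=True gives the 32-bit float image described above
img = bpy.data.images.new("bump_paint", width=2048, height=2048, float_buffer=True)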

For best results, use one of the pointier brush Curves and a fairly low Strength and Radius, with pressure sensitivity for both, and set the Blend to Add (to raise) or Subtract (to lower). For most purposes, leaving the color set to white is perfectly good. You should now be prepared to start painting bumps!

If your computer is decently fast, you should use Textured viewport shading. I use a five-year-old, bottom-of-the-line MacBook, so things get a little boggy, but it’s still usually worthwhile to be able to see what my bumpmapping looks like.

Once you’re done, save the color map to PNG and the bump map to 16-bit TIFF. I’d love to use full 32-bit OpenEXR, but my conversion paths are limited.

Part 3: Flattening to Equirectangular

In the main menu, select File>Export>Wavefront(.obj) to export the globe. Give it a name and save it.

Now open Wings3D. In the menu select File>Import>Wavefront (.obj)…, and find your saved globe object. Now, we’re going to turn right around and export to POV-Ray (File>Export>POV-Ray (.pov)). Wings3D is a capable and highly useful modeling tool, but this time all we’re doing is using it to translate between Blender and POV-Ray. Go figure…

Now we can go to our favorite text editor to change some settings in the generated .pov file. In global_settings, set the ambient_light rgb vector to <0.7,0.7,0.7>. If this proves too dim after rendering, you can increase it later. Set the camera’s location equal to <0,0,0>. Comment out (//) the right, up, angle and sky lines. Set the camera’s look_at according to the location where you made the waist seam in the UV-mapping stage. Note that Y and Z are reversed in POV-Ray relative to Blender, so, if your cut was across the positive Y-axis, you’ll want to look at the negative Z-axis (<0,0,-1>). For the Traveller-style map, my cut was across the negative X-axis, so in my example I’d set the look_at to <1,0,0>. Comment out the light_source (nest it between /* and */). Add the uv_mapping keyword to the texture. Go down to ambient and comment out the color_rgbf statement, then add image_map { png "name-of-map-image" }. You should be able to render now and save the image to disk.

Finally, we open the resulting image in Photoshop and flip the canvas (Image>Image Rotation>Flip Canvas Horizontal). The analogous command in GIMP would be Image>Transform>Flip Horizontally. Save the result and you have your image as a proper equirectangular map.

Part 4: Converting the Bumpmap

To do the same for the bumpmap, you need to be able to convert the 32-bit image into something that POV-Ray can render to. You could possibly use Landserf to convert the 32-bit single-channel data into an RGBA-separated PNG image, project that in POV-Ray, then come back to Landserf to recombine. You would want to save the 32-bit bumpmap to OpenEXR in Blender, use Photoshop to save that to a 32-bit TIFF, then use GDAL to convert the TIFF to something Landserf can read (like BT).

 


A Resource for Learning Quantum GIS

I found a nice set of video tutorials for learning the use of QGIS at Mango Map. The first module introduces the QGIS interface. The second module goes over the basics of creating a map. It looks like further posts are being made at roughly weekly intervals (like my own blog… in theory).

Hopefully this will be a good introduction to the use of the program, even if it doesn’t necessarily delve deeply into the particular problems of people trying to use QGIS to create maps of imaginary places. Fantasy mapping is still mapping, so the basics will be useful.

Thanks,
The Astrographer


Big Planet Keep on Rolling

Same planet, slightly better render…

My intended post for last week took so long that I decided to simplify things a bit. I was going to discuss prettifying the Tectonics.js output and making a Blender animation of the prettified planet spinning. I’ve learned a lot about QGIS (and Wilbur) while trying to do this, but I’m still groping around. I’m not saying anything against Tectonics.js; it’s my fault for pretty much ignoring too many of the useful hints the program gives and misinterpreting too many of the others. I also habitually underestimate just how wide my mountain ranges are. Anyway, for now I’m just going to focus on the animation using the planet I have. Not Earth, that’s just cheating, but the not-altogether-successful planet I tried to create over the last two weeks. I need a quicker workflow; one that doesn’t involve constantly googling for instructions…

I’ll start with a sphere similar to the one I put together for an earlier article on displaying your world. I replaced the bedrock color I previously used for the diffuse and specular color with a hypsometric texture I created in Wilbur. My original intent was to create a more realistic satellite view with a simple climate model. That would have been awesome!

I used a 16-bit TIFF image for the bumpmap. Sixteen-bit PNGs seem to fail in Blender, so I used QGIS to convert my PNG to TIFF. I also wanted to create a subtle displacement map as well, but the 16-bit TIFF inflated the planet into a lumpy mess several times as large as the undisplaced sphere, even with a nearly zero displacement influence. I decided to use a more conventional 8-bit version of the map for a separate displacement texture.

The first thing I tried was to use the gdal_translate tool to convert my 32-bit floating point BT elevation map into an 8-bit PNG:

gdal_translate -ot Byte -of PNG ${input_file} ${output_file}

where ${input_file} is the name and path of the input file, and ${output_file} is the desired name and location for the converted file.

Unfortunately, this failed badly: all the elevations were clipped to below 255 meters. Instead, I used the Raster Calculator to make an intermediate file with the following expression:

${input_elevation_layer} / 32.0

This results in another 32-bit elevation file with values in the range of 0..255. It helped that I started with an elevation range from sea level to less than 8,000 meters. The divisor may need to be larger if the range of values is larger, and can be smaller if the range is smaller. I then used gdal_translate to convert that into an 8-bit PNG.
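If you’d rather skip the Raster Calculator round-trip, the same rescaling can be scripted with GDAL’s Python bindings. This is only a sketch, normalizing the full elevation range into 0..255 instead of dividing by a hand-picked constant; the filenames are placeholders:

from osgeo import gdal
import numpy as np

src = gdal.Open("elevation.bt")
data = src.GetRasterBand(1).ReadAsArray().astype(np.float32)
lo, hi = data.min(), data.max()
scaled = ((data - lo) / (hi - lo) * 255.0).astype(np.uint8)
# the PNG driver only supports CreateCopy, so stage the bytes in memory first
mem = gdal.GetDriverByName("MEM").Create("", src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Byte)
mem.GetRasterBand(1).WriteArray(scaled)
gdal.GetDriverByName("PNG").CreateCopy("elevation_8bit.png", mem)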

Since all I wanted was a very small relief, like that on some globes, the 8-bit map was sufficient. Unless you’re using something like Terragen, there’s really no way to make a displacement map in realistic proportions; real planets are smoother than cue balls, proportionally.

For the bumpmap I had used a normal influence of 12.0; for the relief texture, I used a displacement influence of 0.12, even with values less than 256.

I decided to discard the clouds and atmospheric effects. Maybe this is a desk globe. Perhaps I should also model a stand for the thing… A slightly less subtle displacement might be in order.

Now that we have a kinda decent globe, let’s animate the thing. I started at frame zero, with the rotation set to zero. In the tool palette to the left of the 3D View (toggle it on and off with the T-key), I scrolled down to find the Keyframes section, clicked Insert and selected Rotation.

At the bottom of the Timeline editor, there are three numeric entry fields. The first two are labelled “Start:” and “End:”; predictably, these denote the starting and ending frames of the animation. This will be useful later. To the left of these is another numeric field with the current frame number displayed. Click on this and enter a frame number for the next desired keyframe. I chose to put in keyframes every 65 frames: 0, 65, 130, 195, and 260. At each keyframe, I went to the numeric palette to the right of the 3D View (toggled with the N-key); near the top you’ll find Transformations. I added 180° to the Z-axis rotation at each keyframe, so 0, 180, 360, 540 and 720.
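If you’d rather script the keyframes than click them in one by one, here’s a sketch for Blender’s Python console (it assumes the globe object is named "Sphere"):

import math
import bpy

obj = bpy.data.objects["Sphere"]
for i, frame in enumerate(range(0, 261, 65)):  # frames 0, 65, 130, 195, 260
    obj.rotation_euler[2] = math.radians(180.0 * i)  # 0, 180, 360, 540, 720 degrees
    obj.keyframe_insert(data_path="rotation_euler", index=2, frame=frame)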

With that done, it was time to go to the Properties editor and select the Render tab. There are sections here controlling the display of renders, resolution, anti-aliasing and the like. I invite you to experiment with other sections, but for this I’ll focus on the Dimensions and Output sections. In Dimensions select the desired resolution and frame rate. I went with a 960 by 800 pixel image size and 16 frames per second. If you change the resolution you may need to (g)rab and (r)otate the camera to restore the composition of your scene. I’ll wait.

Below the X and Y resolution, there is an additional percentage field. This allows you to create fast test renders without messing around with the camera every time. This is a pretty simple project, but when you are dealing with more complex scenes and longer render times, it’s nice to be able to take a quick look at what your scene looks like to the camera.

Under the Output section, first select an output path. Since I’m going to render all the frames separately and stitch them together later, I decided to create a directory specifically for my render. Check Overwrite and File Extensions; you may need to redo things…

Below the Placeholders checkbox, which I leave unchecked, there is an output format menu with a number of image and movie formats. You could choose a movie format like MOV, AVI or MPEG, but I’m going with PNG for individual numbered frames. I’m pretty sure you can give a C printf-style name template, but I’m not entirely sure.

To render a single image, press F12; to render an animation sequence, press Ctrl-F12. You can also select them under Render in the Info panel menu.

Initially, I set the animation to start at frame 1 (the frame after the initial keyframe) and to end at frame 260 (the last keyframe, which returns the globe to its initial rotation). This is supposed to allow looping without hesitation, but when I rendered an AVI internally, it seemed like the animation was accelerating up to speed and decelerating at the end. I’m not sure why this was happening (my best guess is the default Bezier keyframe interpolation, which eases in and out; setting the F-curve interpolation to Linear should give a constant speed), but the render time was a bit long, so I figured I’d render out a full rotation from the middle of the sequence and stitch the images together in an outside program. Thus, I set Start to 66 and End to 195. Once all the images were rendered and saved under names of the form 0066.png .. 0195.png, it was time for stitching.

From my understanding, ffmpeg is the best free standalone program for stitching together images into movies (and a lot of other movie-related tasks; it’s kind of the ImageMagick of movies).

In my unix terminal I enter the following command:
ffmpeg -r 16 -vsync 1 -f image2 -start_number 0066 -i %04d.png -vcodec copy -qscale 5 spinning_planet.mov

-r 16 sets the speed to 16 frames per second.

-f image2 tells it to accept a sequence of images as input.

-start_number 0066 is important. It tells the program to start reading from the image with frame number 66. Otherwise, if it doesn’t find an image with an index less than five, it will assume files are missing and punt out.

-i %04d.png is a format descriptor telling ffmpeg what the input filenames look like: a four-digit, zero-padded frame number followed by .png.

spinning_planet.mov is the name and format of the desired output movie file.

The rest of the options may or may not matter. I’m not taking chances…

Next time, maybe I’ll add sound…

Comments, questions, corrections or suggestions are welcome! Thank you for your patience,
The Astrographer


Geometry for Geographers

Introduction

Today, I’d like to share a few geometric formulae I’ve found useful in worldbuilding. There are formulae here for determining the distance between two points with known latitudes and longitudes; the inverse function (latitude and longitude of a destination given a known origin location, a direction and a distance); the area of polygons on a sphere; the distance to the horizon for a planet of a given radius and a given viewpoint height; and the area of a circle of given radius on a sphere.

Great Circle Distance Between Two Points on a Sphere

If you know the latitude and longitude of two points on a sphere, you can figure out the arc distance in radians between those points with just a little trigonometry. Point A is at latitude $\text{lat}_a$, longitude $\text{lon}_a$. Point B is at latitude $\text{lat}_b$, longitude $\text{lon}_b$. The difference of longitude is $P = \text{lon}_b - \text{lon}_a$.

The arc distance is $\sigma = \arccos(\sin\text{lat}_a \sin\text{lat}_b + \cos\text{lat}_a \cos\text{lat}_b \cos P)$.

Thus the distance is $d = R\sigma$, where $R$ is the radius of the sphere.

Once you know the distance, you can readily calculate the initial bearing from point A to point B: $\theta = \operatorname{atan2}(\sin P \cos\text{lat}_b,\ \cos\text{lat}_a \sin\text{lat}_b - \sin\text{lat}_a \cos\text{lat}_b \cos P)$. You can figure out the final bearing by interchanging b and a. This will prove useful in determining the area of spherical polygons. Keep it in mind.
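Here’s a minimal Python sketch of the distance and initial bearing, with angles in radians (the function names are mine):

import math

def arc_distance(lat_a, lon_a, lat_b, lon_b):
    """Great-circle arc distance between two points, in radians."""
    p = lon_b - lon_a  # difference of longitude
    return math.acos(math.sin(lat_a) * math.sin(lat_b) +
                     math.cos(lat_a) * math.cos(lat_b) * math.cos(p))

def initial_bearing(lat_a, lon_a, lat_b, lon_b):
    """Initial bearing from A toward B, measured clockwise from north."""
    p = lon_b - lon_a
    return math.atan2(math.sin(p) * math.cos(lat_b),
                      math.cos(lat_a) * math.sin(lat_b) -
                      math.sin(lat_a) * math.cos(lat_b) * math.cos(p))

# surface distance on a sphere of radius R is then R * arc_distance(...)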

Destination Given Distance and Bearing from Origin Point

Given a known point at $\text{lat}_a, \text{lon}_a$, a planet’s radius, $R$, a bearing, $\theta$, and a distance, $d$, how do we find the new point $\text{lat}_b, \text{lon}_b$? Note: since Mathematica’s implementation of the atan2(y,x) function is apparently functionally identical to its atan(y/x) function, just the same function name overloaded with inverted input order (ArcTan[x,y] == ArcTan[y/x]), I decided to just go with the y/x form. In a Java or Python or, apparently, JS program, you’d use atan2(num, denom) instead.

$\text{lat}_b = \arcsin(\sin\text{lat}_a \cos(d/R) + \cos\text{lat}_a \sin(d/R) \cos\theta)$

$\text{lon}_b = \text{lon}_a + \operatorname{atan2}(\sin\theta \sin(d/R) \cos\text{lat}_a,\ \cos(d/R) - \sin\text{lat}_a \sin\text{lat}_b)$.

For further information, check this page out.
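Here’s the destination calculation as a Python sketch, using atan2 as discussed above (angles in radians; the distance is in the same units as the radius):

import math

def destination(lat_a, lon_a, bearing, distance, radius):
    """Latitude and longitude reached from (lat_a, lon_a) along a bearing."""
    delta = distance / radius  # angular distance
    lat_b = math.asin(math.sin(lat_a) * math.cos(delta) +
                      math.cos(lat_a) * math.sin(delta) * math.cos(bearing))
    lon_b = lon_a + math.atan2(math.sin(bearing) * math.sin(delta) * math.cos(lat_a),
                               math.cos(delta) - math.sin(lat_a) * math.sin(lat_b))
    return lat_b, lon_b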

Area of Spherical Polygons

The formula for the area of a spherical triangle is pretty simple looking. Just $S = R^2(A + B + C - \pi)$, where $A$, $B$ and $C$ are the three inner angles of the triangle, $R$ is the radius of the sphere and $S$ is the surface area of the triangle. For each vertex, use the Great Circle formulas above to determine the distance and bearing to both neighboring vertices. The inner vertex angle is equal to the difference between the bearings to the two neighboring vertices.

The same principle is used to find the area of more complicated polygons. In the general polygon case, though, it’s important to keep track of convex and concave angles. It might be necessary to make diagrams to keep track of which angles are internal.

$S = R^2(\sigma - (n - 2)\pi)$, where $\sigma$ is the sum of the inner angles in radians, and $n$ is the number of sides.
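Both area formulas are one-liners in practice; a Python sketch, with the inner angles in radians:

import math

def spherical_triangle_area(a, b, c, radius):
    """Area from the spherical excess of the three inner angles."""
    return radius**2 * (a + b + c - math.pi)

def spherical_polygon_area(angles, radius):
    """Area of an n-sided spherical polygon from its inner angles."""
    n = len(angles)
    return radius**2 * (sum(angles) - (n - 2) * math.pi)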

Distance to the Horizon

Figure 1

As shown in figure 1, point P is our central point of interest; point H is the point on the horizon as viewed from P; point A is the point on the surface directly beneath P; and the angle θ is the angle subtended, at the center of the sphere, between points P and H. As before, R is the radius of the sphere.

D, the direct distance between points P and H, is also known as the slant distance. The formula for slant distance is $D = \sqrt{2Rh + h^2}$, where $h$ is the distance of the viewing point above the ground (the length PA).

The value for θ would be $\theta = \arccos\!\left(\frac{R}{R + h}\right)$.

The distance along the arc AH is $d = R\theta$, with θ in radians. Thus, the arc distance, which I call the map distance, since it would be the distance measured on a map, would be $d = R\arccos\!\left(\frac{R}{R + h}\right)$.

The area of a planet observable from a point at height $h$ is $A = 2\pi R^2(1 - \cos\theta) = \frac{2\pi R^2 h}{R + h}$.

The fraction of a planet observable from that height would be $f = \frac{h}{2(R + h)}$.

For reference, $A_{\text{planet}} = 4\pi R^2$, which is the formula for the total surface area of the planet.
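The horizon formulas, collected into a Python sketch ($h$ and the radius in the same units; the function names are mine):

import math

def slant_distance(radius, h):
    """Straight-line distance from a viewpoint at height h to the horizon."""
    return math.sqrt(2.0 * radius * h + h**2)

def map_distance(radius, h):
    """Arc distance from the point beneath the viewer to the horizon."""
    return radius * math.acos(radius / (radius + h))

def observable_fraction(radius, h):
    """Fraction of the planet's surface visible from height h."""
    return h / (2.0 * (radius + h))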

Area of a Circle on the Surface of a Sphere

Figure 2

My next formula will be for the surface area of the circular region within a distance $d$ of a point P on the surface of a sphere of radius $R$, as shown in figure 2. From page 128 of the CRC Standard Mathematical Tables, 26th edition (similar information, with 3D figures, here), I find under spherical figures that the zone and segment of one base has a surface area of $S = 2\pi Rh$. Incidentally, the volume of this portion of the sphere is $V = \frac{\pi h^2}{3}(3R - h)$, not that we’re using that here. The arc distance from P to the edge of the area is $d = R\theta$. An examination of the geometry leads us to the conclusion that $h = R(1 - \cos\theta)$, so the area of the spherical surface within angular distance θ of the center is $S = 2\pi R^2(1 - \cos\theta)$.
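And the last formula as a quick Python sketch:

import math

def circle_area_on_sphere(radius, d):
    """Surface area within arc distance d of a point on a sphere."""
    theta = d / radius  # angular radius of the circle
    return 2.0 * math.pi * radius**2 * (1.0 - math.cos(theta))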


Displaying your Planet in Blender

I love it when a planet comes together!

Intro-Ducktion

In the process of making some pictures for recent blog posts, I’ve found myself messing around quite a bit with Blender. There are things I’ve done before in Blender that I had completely forgotten how to do, and there are other things that are somewhat involved to do in Blender that other programs pull off without a hitch.

Some of this comes down to the general-purpose nature of Blender as compared to the more focused purposes of other programs. Displaying a map on a rotating globe is fairly easy for gplates, because that’s one of its core competencies. On the other hand, gplates isn’t capable of displaying raytraced specularity variation across a planet’s surface or showing proper hillshading due to surface topography. Bryce, on the other hand, is capable of doing these things to some degree, and to some degree some of these things are easier there. Bryce is getting pretty long in the tooth at this point, though, and even fairly simple renders are sloowww. Terragen is pretty sweet. Like Google Earth with raytracing and your own world’s terrain. Unfortunately, my family has to eat and stuff, so Terragen is right out…

Creating a Globe

Our first step will be to create the globe we’ll be texturing. On the menu bar, select Add>Mesh>UV Sphere. Since we’re not going to do UV-mapping on this one, I’m going to go ahead and smooth the thing. First, go into Edit mode and in the 3D View menu select Mesh>Faces>Shade Smooth. Next, return to Object mode. In the Properties view, select the Modifiers tab. Click Add Modifier (wrench icon) and select Subdivision Surface. Set Render to three subdivisions and click Apply. If you like, you can forego applying and simply leave the modifier in place. Whatever you choose, you now have a globe. Now to texture the thing.

Loading Spherical Image Textures

The first problem to solve is loading equirectangular projection (or “spherical”) images and using them as textures. Surprisingly, this seems easier with UV-mapped icosahedral textures. Although, to be honest, I did all the modeling and UV-mapping in Wings3D. For this purpose, I’ll be using textures generated by the Tectonics.js program. I am aware that these aren’t necessarily suitable as-is for this purpose, but this can be considered sort of an early evaluation prior to spending a lot of time optimizing them.

I’ll start with the bedrock texture. This is the simplest, because it’s simply a color, which is the easiest thing to apply in Blender. Making sure you have your globe selected, go to the Material editing tab(brass ball icon) in Properties. There will be a button that says New. To the left of that will be a pull-down that allows you to browse materials. If you started with an empty scene then the only material in the list will belong to the sphere you made. Select that. Rename it if you wish. For now we’ll leave the settings as-is. It’s hard to tell what effect the various shading options will have till you have the textures onboard.

Bring up the Textures tab (checkerboard square). The currently selected texture will be imaginatively named Tex, and its type will be None. Change the type to Image or Movie and, if you like, change the name to something more descriptive of its role as surface coloration. While you’re up there, set the Preview to Both, so you can see the texture image and get some idea what it’s going to do. Make sure the preview render is on a sphere.

Now, scroll down to Image and click Open. Pick out the desired image from the filesystem. In the preview, you’ll see that the texture is about what we expect, although squeezed into a square. The material preview, however, is going to be disappointing. This is because the projection of the flat texture onto the sphere is wrong. Let us now fix that.

Scroll down to Mapping. The coordinates seem to be fine as Generated, so we’ll leave that be. Let’s change the Projection to Sphere, and have us a look at the preview. The material should be much better.

Let’s make a trial render to see how this came out. Go to the Render tab (camera icon) and scroll down to Dimensions. Set the X and Y resolution to whatever you’ll want as your final render size and set the scale to 50% to speed up your trial renders. If your desired resolution is much less than 1000×1000, maybe you should leave the scale at something closer to 100%…

Scrolling down to Output, you can set your image format and related parameters. I’m not too worried about that at this stage. I’ll just let the trial renders live in slots within the program till I’m ready for a final render.

Scroll back up to the Render pane. I usually set Display to New Window for convenience, because it defaults to Image Editor and replaces your 3D View window with an Image Editor window. Set that as you like… Click Render or press F12. Not the prettiest thing ever, but the texture seems to work. It seems to me, the seas should have more glare than the land. Let’s see what we can do about that.

Now, previously in Photoshop, I created a sea mask image by making a Magic Wand selection of the water in the bedrock image and saving the resulting channel to its own file. I also made a land mask image by saving an inverted version of the same. I go back to the Texture tab and select an empty texture slot. Hit New and select Image or Movie. Scroll down to Image, hit Open and select the sea mask image. Make sure to uncheck Use Alpha under Image. This image doesn’t have a useful alpha channel, so we want it to use the greyscale RGB as alpha, which is what it uses to control intensities. Set your mapping and such as with the previous texture. You’ll see the black and white image in the texture now, instead of the bedrock colors, but at least it ain’t a white cueball and everything’s in the right place.

Scroll down to Influence. Uncheck Diffuse Color and check Specular Intensity. Maybe check Hardness, under Specular, as well. The sea colors seem a bit bright, so you could use this to put a large negative influence on Diffuse Intensity as well, but, in my limited experience, that is fraught with issues (it tends to brighten the land too much, it’s a bear to adjust, and the color of the water tends to get way too deep and saturated by the time you’ve gotten it dark enough). The best way to adjust colors, for the moment, is probably in the texture itself, using your favorite image editor (not, in my case, by any means, Blender). Try another trial render.

At this point, you should adjust the parameters on the material and textures. This will involve a certain amount of trial and error, jogging between the textures and the material controls and frequent trial renders. Try other controls as well, such as the other texture influences and stuff in the Shading panel of the Materials tab.

The next thing to do is to give the globe a bit of relief. Once again, we select an empty texture slot, create a new image texture, load an image (this time elevations) and set the mapping and such. Uncheck all of the influences except Normal, and reduce the strength of the normal to at most about 0.5, unless of course you want to intentionally exaggerate relief in order to bring out smaller features.

This would be a good time to try a preliminary full render. Take a note of the dimensions of the planet sphere. Once we have the planet surface the way we want it, it’s safest to go up to the Outliner and restrict viewport selection of the planet surface object. Just click on the arrow icon to shadow it, and click on it again if you need to change the planet in the future.

Making a Cloudsphere

Now we add a new sphere, with the same center as the planet globe, to put the clouds on. My notes say that the X/Y/Z dimensions of the globe are 12/12/12, and I want the clouds to hug the planet pretty closely, so I’ll size it to 12.35/12.35/12.35 after smoothing and such. Make sure to smooth and subdivide the cloud sphere as you did the planet. Create a new material, and zero its diffuse, specular and ambient values (at least initially). Check Transparency, set it to Raytrace, and set the alpha to zero. Go down to the Options pane and turn Traceable off. Leaving Traceable on always seems to make the planet surface render solid black; I’m not certain why. Do a quick test render to make sure the planet surface is still visible.

Add a new texture for your clouds. Figuring out a noise that looks good for global clouds is a problem I’ve yet to solve, so I’ll leave you to work out the details. I used a Distorted Noise with a Voronoi F2 basis and considerable Improved Perlin distortion. In Mapping, I stretched the size by about three in the z-coordinate. Best results could be attained by loading a real world global cloud map, but these sometimes show evidence of Earthly continent shapes to the wary. An artist could try painting in a cloud map, but my skills aren’t remotely up to that. For now, this will have to do.

I gave the cloud texture influence over diffuse intensity, color and alpha, specular intensity, and geometry normal. All of these were close to one, with small adjustments downward.

I put a ramp on the colors. It’s all white, but the alpha is 0.8 on the right and 0.0 on the left. I added another 0.8 alpha stop at the 0.965 position, and another 0.0 alpha stop at position 0.480. The ramp allowed me finer control over cloud cover. A final render with clouds is in order.

Atmosphere

Next we add an atmosphere. This is still very much a work in progress. I’m trying for something like a LunarCell atmosphere with more control and realism. I haven’t yet attained the first goal. I’m pretty sure Blender has a way to make volumetric density fall off with distance from the center, but I haven’t figured it out yet. If I can figure out how to make an object presence mask, like I can in Bryce, then I could possibly do something useful with a radial gradient in Photoshop. No dice yet, though. To start with, I’ll just settle for a volumetric ball with some scattering.

So, first we make a nicely smoothed and subdivided sphere with X/Y/Z dimensions of 13/13/13. We create a material for it. Make the material transparent, with the density pushed down to, oh, let’s say 0.1. I’ll rack the scattering up to 1.0, with a -0.5 asymmetry, meaning that more light is back-scattered. A test render and… that didn’t come out well. Must remember to uncheck Traceable in the Options pane of the material. Try again… success! Looks a little extreme, though. Since the density should already be pretty subtle, I’ll start by reducing the Scattering values a bit, especially the amount. By the time I’m done with the whole test render (30% size now, ’cause it’s not quick), adjust, render again process, I have a density of 0.12 and a scattering of 0.3 with 0.0 asymmetry. It looks good, but maybe a little too wide, so I reduce the size of the atmosphere sphere to 12.7/12.7/12.7.

I’m pretty happy with the results. The shaded relief needs work in Wilbur, and, in spite of a lot of fiddling, the cloudmap isn’t nearly as good as what LunarCell can do, which isn’t actually very good itself. LunarCell is good for pretty pictures, and its mapgen isn’t bad as far as noise-centered generation goes, but its cloudmap generation is socially awkward at best. Sadly, it’s about the best clouds-from-noise I’ve seen… Looks OK from a distance, but it needs work. I’ll probably just have to bite the bullet and use real-life clouds.


Conclusion

Hopefully, this was useful to people. If not, it should at least be a good reference for me. I’ve gotten pretty good with the very basics of Blender, but beyond rendering models as monochromatic plastic toys, materials have had me flummoxed. This should be useful next time I’m trying to texture a spaceship. It should also make a good background.

For my next trick, the real reason why I jumped into Blender with this in the first place: a revolving-head animation of the planet. Now I’m well away from familiar shores!

Thanks for reading all of this, and any comments and advice are very very welcome.
The Astrographer


Realistic Plate Tectonics with Tectonics.js


For some time I’ve had an interest in terrain generation using simulated tectonic processes. I’ve successfully used PlaTec, but it’s strictly 2D and the output is pretty limited. Another one that seemed promising was pytectonics, but since it froze my system dead, I’m not sure how good it might be (sour grapes and all that…).

Recently, I came across a plate tectonic simulator that runs in JavaScript in the web browser. Surprisingly, given all the trouble I’ve had with compatibility issues lately, it worked and was reasonably fast. Tectonics.js was created by Carl Davidson, who was also the author of the aforementioned pytectonics. I’ve been engaged in a discussion with Mr. Davidson on reddit, and he has been very active and responsive to user suggestions.

The procedure, in a nutshell, will be, first, to create an attractive tectonic simulation, and then, second, to convert that into a decent map.

First, run Tectonics.js at a speed of about 5 Myr/s till the age reaches about a billion years or so. The goal here is to give the model time to reach a reasonable equilibrium without spending forever doing it. Slower speeds, on the other hand, tend to produce more attractive results. I’m using the Safari browser, which isn’t all that fast, but my attempts with Chrome, while much faster, also tend to crash out after roughly the first billion years. If your browser has a significantly faster JavaScript implementation, your computer is a bit less long in the tooth than mine, or you’re a lot more patient than me, it could pay off to run at smaller time steps. Although it took most of a day, I’ve made runs at as small a time step as 0.25 Myr/s. For the most part, the results were much cleaner.

From about a billion years, reduce the time step or “Speed” to 1 Myr/s. Run it like that till you approach a desired age, perhaps four to five billion years. Make sure you get at least the last half billion years or so at no more than 1 Myr/s time step. If desired, reduce the speed to around 0.25-0.5 Myr/s for the last few hundred million years.

When you’ve reached the desired time, or the map is in a configuration you find attractive, reduce the Speed to zero to stop the animation. Personally, I consider the Bedrock view attractive and useful, and the Plates view is a crucial guide to building your world. The Elevation view is less useful than I’d hoped, but it’s still helpful. First, make sure that the projection is set to Equirectangular, and the screen is sized so that some black is showing all around the jagged edges of the map. This can take some window resizing and using the scroll wheel to bring the image in and out. It’s self-explanatory once you try it. Next, set the view to Bedrock and press P to create a screenshot in a new tab. Save the new tab to a PNG in your working directory. Repeat this process with the view set to Plates, then again for Elevation. You can also save copies in other modes, like temperature and precipitation, but, as of this writing, those are less useful. The program is currently in active development, so those modes may be more useful later.

It can pay off to save intermediate imagery before you reach your desired time. Sometimes the model approaches an attractive configuration, then, in a fit of perversity, quickly morphs irretrievably into an ugly mess. Perhaps, even if you don’t initially intend to model the geological history of the planet, having maps of the earlier continental positions could be useful later. Particularly, if you’d like to model adaptive radiation of local lifeforms and such, having at least a sketchy history of the world’s tectonic drift could be helpful. I’ll deal with geological history in a later post. For now, you just want to pick out one time point that fits your needs.

Now, import the Bedrock image from your chosen time period to Photoshop or your favorite raster editing app. First, select the black background with the Magic Wand tool on zero tolerance. Next, invert the selection and copy. Now create a new image, retaining the default size, and paste from clipboard. In Photoshop, at least, a New image file defaults to a size just big enough to contain the selected area.

If your raster editor doesn’t behave similarly, you can simply crop the image down till it just contains the map area, instead of following the procedure described in the previous paragraph.

If you examine your image now, you’ll notice two things. First, the edges are jagged.

The prepared equirectangular bedrock map.

Second, the image size is not quite a 2:1 rectangle. I believe these both relate to the fact that the map is composed of discrete cells that don’t conform to the latitude-longitude grid. The easiest way to deal with this is to crop the image down so that the jagged edges don’t show and resample the result to a 2:1 rectangle. This will necessarily reduce precision a bit, but for most purposes it doesn’t matter. You might need to clean up around the edges to fix shorelines that don’t quite line up. I made an attempt to line up the east and west edges, but they didn’t line up. Instead, I decided to keep the image as it is, use the eyedropper to sample the ocean color, and fill the background layer with ocean color. This works because I could center all the land on the map without overlaps. If it’s impossible to center the land on the map such that it doesn’t overlap the edges, you’ll need to connect the land across the boundary somehow.

Now resample the image to a 2:1 rectangle.

The prepared equirectangular map of the tectonic plates.

Repeat for all of the output images. For the Elevation, I fill the ocean background with white to represent sea-level elevation. I then invert the image, because I prefer darker values for lower elevations. It’s a matter of taste, though you have to keep track. I also saved a seamask, based on the selection I created to mask out the blue seas. Except for resizing, I took Plates pretty much as-is, because the edge behavior is continuous, so any boundaries across the problem areas would be a work of imagination.

The prepared equirectangular elevation map.

The imagery is now ready to be applied to gplates, of course. Each of them will have an extent of 90º N to 90º S, and 180º W to 180º E.

Once you have all that loaded into gplates, you can look at it with a nice graticule, varying the opacity so that you can, for instance, compare plate boundaries to continental shorelines, or achieve a variety of other effects.

The final prepared, separated and inverted equirectangular map.

For the picture at the top of the page, I added a Photoshop-derived hillshade to give a better sense of what the elevations look like straight out of the box. Photoshop or Wilbur with a judicious bit of well-applied noise could be used to enhance the appearance of the mountains. The vector editing tools in gplates or QGIS could be used to mark shorelines, various kinds of plate boundaries, mountainous regions and other data derived from the tectonic simulation. I’ll leave that for a future article. For now, have fun with tectonics!

Thanks,
The Astrographer


Placing Raster Features Using GPlates

Today we’re going to look at using the gplates program to place pre-created raster features on the globe. For minimal distortion, we will begin by placing the raster at the center of the map, where the central meridian crosses the equator. If we were placing raster features taken from particular parts of Earth, we would want to make sure they were in equirectangular (or geographic, or plate carrée, or latlong) projection and place them in the position they were in on the original map (this is good for importing real-world data from sources such as the SRTM 90-m database). I am going to give instructions both for the use of real-world data and for island maps from Amit Patel’s Polygon Map Generation demo.

Here are a few tips I’ve picked up through previous experimentation. Raster layers which are imported into the same location (since we’re dropping unprojected imagery as close to the center as possible to minimize distortion) need to have separate associated vector shapefiles.

In my filesystem, I create a separate directory for each raster. Within that directory, I create a “raster” subdirectory, where I place the raster itself and a “vector” directory, where I place the associated shapefile. This will make it easier to keep track of everything.

To start with, I created a patch in Photoshop, just an ordinary TIFF image. I used TIFF to test whether I could import and reproject 16-bit or 32-bit rasters. GPlates choked on the 32-bit TIFF, but successfully loaded the 16-bit version. The patch I created was small and silly, so I decided to make its geographic extent small; if this works in 16-bit I might, perhaps, use it as a set of elevations for the somewhat outscale raster I imported as a continent earlier. So how do I set the georeferencing? Nine-meter resolution is pretty common and excellent for moderately close-in work, so I’m using that. To reference this to Earth, the diameter of our planet is close enough to 12,756,000 meters. Given that the circumference of a circle is equal to its diameter times π (about 3.14159265…), that gives us a circumference of about 40,074,156 meters. As back-of-the-envelope as this is getting, 1-meter precision is more than sufficient. The resolution of my image is 1k square (1024×1024), so that’s an extent of 9,216 meters square. A degree comes to about 111,317 meters, so, keeping track of units,

9,216 meters / 111,317 meters/º = 8.279-yada-yada x 10^-2º

I want this centered at the [0,0] point, so divide that by two to get the extents: top latitude of 0.0414º N, bottom latitude of 0.0414º S (-0.0414), left longitude of 0.0414º W (-0.0414) and right longitude of 0.0414º E. Unfortunately, this throws an inescapable exception in gplates. I successfully imported the raster with the extent being 0.2º on a side. That gives me a pixel size of about 21.7 meters (about 71 feet). Once I got that imported, I digitized a new polygon geometry roughly covering the area of the image. I gave it a classification of gpml:UnclassifiedFeature, a plateID and a name. I also made sure that the checkboxes for Distant Past and Distant Future were filled; not that it matters for what we’re doing here, but whatever… Create and save to a shapefile in the vector directory associated with the raster. In the Layers window, click on the arrow next to the Reconstructed Raster you just imported. Under Inputs, find Reconstructed polygons, click on “Add new connection” and select the Reconstructed Geometry you just digitized. Use the Choose Feature tool to select the polygon associated with your raster. You can now use the Modify Reconstruction Pole tool to move the raster to where you want it. In my case, I placed it somewhere in the mountains of the small continent I had placed while practicing for this. Place it where you want it, hit Apply and hit OK a couple of times. I had to jockey mine around a bit to get it right where I wanted it. If all of your edits are done without changing the Time setting, there will only be one entry in the rot-file.
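The georeferencing arithmetic above is easy to fold into a few lines of Python; a sketch with my numbers plugged in:

# extent, in degrees, of a square raster centered on (0, 0)
pixels = 1024                 # image resolution on a side
meters_per_pixel = 9.0        # desired ground resolution
meters_per_degree = 111317.0  # from the circumference figured above
extent = pixels * meters_per_pixel / meters_per_degree  # ~0.0828 degrees
half = extent / 2.0           # ~0.0414: the top/bottom/left/right extents
print(extent, half)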

Speaking of the rot-file, go to File>Manage Feature Collections (Cmd-M or Ctrl-M), and make sure all changes are saved.

Now, I’m going to load in an island generated on the Polygon Map Generation website. To figure out the extent for this, I looked at a map of the Hawaiian Islands and observed that the Big Island fits into a space a little over a degree on a side. The islands generated by Mr. Patel’s generator just have the feel of being much smaller than Hawaii. I’ve decided to give it an extent of about half a degree each way, and, since the island shapes seem to roughly fit with the look of the other islands, I’ll center it at 21.5º N by 157.5º W. That would be just off Oahu, and maybe just a bit bigger. So I used a top latitude of 21.7º N, a bottom latitude of 21.3º N, a left longitude of 157.7º W (-157.7) and a right longitude of 157.3º W (-157.3). I reduced the extent a little ’cause the island still seemed big.

This time, we’ll roughly hug the coast with the shape feature we create. This will minimize the amount of water color we have to clean up later. In this case, I’m just going to play around and pretend like the slop represents shallow water. Once you’ve digitized and created the feature with a unique plateID, save it to a new shapefile. I actually like this thing’s rough location, so I’m going to move it just a little, mostly rotating it. Maybe I’ll plant it over Oahu’s position and call it Notoahu…

Now, I’m going to add another couple of islands, but I’m going to add them both to the center of the map. The first one will re-use the same shapefile as the previous import, and I will locate it at 0.2, -0.2, -0.2, 0.2, in the usual order. I’ll digitize an outline of this island and save it to the previously created shapefile. A possibly late word of warning: it’s best to give all of your files easily recognized names. It’s maddening to try to find “planet_island-61462-2AF” somewhere between “planet_island-61462-1TR” and “planet_island-61462-2JL”. Anyway, I wound up using the Pole Rotation tool to place that island somewhere in the space between Maui, Lanai and Molokai on the Earth map.

The next island I placed at 0.3, -0.3, -0.3, 0.3. Since the initial location of this island coincides with the previous one, it needs its own shapefile; otherwise overlapping will be a problem. Once I’ve got everything digitized and connected, I’ll shift it over with the rest of my little island chain.

For my last trick, I want to move a tile of countryside taken from the SRTM database at TileX 54, TileY 3. I select the GeoTIFF radio button before hitting the button marked “Click here to Begin Search”. I observe from the download page that the filename is srtm_54_04.zip. The latitude has a minimum of 40º N and a maximum of 45º N; the longitude has a minimum of 85º E and a maximum of 90º E. We also observe that the center point is at latitude 42.5º N by longitude 87.5º E. I chose Data Download (HTTP) to download the TIFF.

Sadly, gplates has some serious problems with the 16-bit geotiff. This is really a shame, as moving fragments of real-world elevations and piecing them together is probably the single most useful aspect of this technique. Popping down pictures of islands is all well and good, but not a terribly powerful use-case.

It seems I might need to convince the developers of gplates to implement the import and reconstructed export of multi-byte elevation data. Failing that, the raster import/reconstruction/export abilities of this program are going to be functionally limited to imagery. Shame, really.

Hopefully, this could prove useful.

Thanks,
The Astrographer


Working with the Conjugate Plate in GPlates

As promised last week, we are now going to demonstrate the usefulness of the conjugate plate in gplates.

I’m going to start with just the two lines preparing plate 100. To that I will add pairs of lines defining new plates 101, 102 and 103, each of which has plate 100 as its conjugate plate:

100  0.0   0.0    0.0    0.0  000 !1
100 150.0   0.0    0.0    0.0  000 !1
101  0.0   0.0    0.0    0.0  100 !Chris
101 150.0   0.0    0.0    0.0  100 !Chris
102  0.0   0.0    0.0    0.0  100 !Tom
102 150.0   0.0    0.0    0.0  100 !Tom
103  0.0   0.0    0.0    0.0  100 !Mary
103 150.0   0.0    0.0    0.0  100 !Mary

I’ll draw up a quick coastline in gplates and give it the PlateID of 100. I will now define three points using the Digitise New Multi-point Geometry (M) tool. I’ll call them Chris, Tom and Mary. Maybe we’re chronicling the travels of three very slow ents…

Now, I’m going to use the Modify Reconstruction Pole (P) tool to describe their changing positions at various times over the next hundred million years. For convenience, I’ll copy the last position’s Euler coordinates to the 150.0 time row each time I make a modification.

Now that we’ve described the slow travels of our friends the slow-moving tree-people across continent 100, we do the same thing for continent 100 itself.

Now, if we run the animation, we’ll see that the movements of our three little friends follow the continent as it moves. To make that clearer, try adding a fourth point, on plate 104, that remains stationary until the last minute. Let’s call this stay-at-home ent Taylor. I’ll have Taylor remain stationary until 125.0, then have it meet up with some of the other points at 150.0. As you see, although there are no moves described for plate 104 until time 125.0, the point follows the movements of the continent.

It might not be clear that Taylor is truly stationary with respect to plate 100, because the interpolation of plate 100’s movement causes some jiggling. So select the menu item Reconstruction>Specify Anchored Plate ID… and set the PlateID to 100. Now rerun the animation.

I hope this demonstration was helpful.
The Astrographer


Tutorial for Forcing Icosahedral Maps onto Flat Maps

Setting up the beauty shot took longer than making the map.

So. A really… really… long time ago, I posted a method for mapping the sort of icosahedral map that RPGs like Traveller are so enamored of onto a sphere in a 3d app. Similar projections were used by many science fiction role-playing games, such as 2300AD, GURPS Space and Space Opera.

Even back in those days of hoary antiquity I was looking for a means to map that surface onto an equirectangular map (plate carrée, geographic, or latlong for the technically inclined). Given the prevalence of apps like G.Projector and Flex Projector, both of which require equirectangular maps as input, this was very desirable. Even the Flexify filter, with its many available input projections, chokes on most interrupted projections on input.

At long last I have found a way to convert icomaps into equirectangular projection.

This isn’t just useful for getting old science fiction RPG maps into a more usable projection. Having seen how these icomaps look projected back onto globes, I have to say this is a dandy little projection for drawing new maps in. Distortion is surprisingly limited. In the hand-drawn map that I am going to use to demonstrate this technique, I placed the island of Korsland very near the north pole. In spite of that, it has no noticeable pinching. That is awesome news for anyone who wants to draw a decent map of an imaginary world. The polar-pinch problem is common for maps drawn on the flat and very difficult to eradicate, but it really isn’t much of a problem with the icomap projection.

This method could also be used to derive a flat map from a texture painted in Blender with Texture Paint or noise effects. A last possibility, if one had an excellent grasp of povray scripting, would be to create maps from scripted combinations of noise in povray.
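Just to sketch that last possibility: here’s a minimal povray scene, a spherical camera sitting at the center of a noise-textured sphere, rendering straight out to an equirectangular map. It’s untested, and the granite color_map values are pure invention, but it shows the shape of the idea:

// Render procedural noise directly to an equirectangular map.
// Render at a 2:1 aspect ratio, e.g. +W2048 +H1024.
camera {
  spherical
  angle 360             // full 360º horizontal; vertical defaults to half that
  location <0, 0, 0>    // camera at the center of the sphere
  look_at <0, 0, 1>
}

sphere {
  <0, 0, 0>, 1
  pigment {
    granite             // any povray noise pattern would do
    color_map {
      [0.00 color rgb <0.0, 0.1, 0.5>]   // deep ocean
      [0.55 color rgb <0.8, 0.75, 0.5>]  // lowlands
      [1.00 color rgb <0.5, 0.4, 0.3>]   // highlands
    }
    scale 0.5
  }
  finish { ambient 1 diffuse 0 }  // self-lit, so no light_source is needed
}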

Looking at Wikipedia, I found this page on Tissot indicatrices. Included on that page was the povray source code used to generate the templates. I puzzled over this for a bit, then I realized the magic of the spherical camera.

Unfortunately, it took some trial and error to figure out how to export a uv-mapped object to povray. Wings3D exports to povray, but the uv-mapping seems to get lost. I finally figured out how to get it to render. I’ll go over it here.

This can be done entirely with free apps. I used Photoshop, but everything that needs to be done here can be done just fine in the GIMP. There are apparently several implementations of povray; I’m using MegaPOV myself, ’cause it comes pre-compiled for the Mac. You’ll also need Wings3D.

I have already successfully used this method to reproject maps of Regina from the World Builder’s Handbook by Digest Group Publications and of Unnight from the GURPS Space worldbook of the same name by Steve Jackson Games. For this demonstration, though, I wanted to use something I myself owned and created. Since I started doing cartography in a serious way, I haven’t used the icomap method very often; I knew that if I drew things out in equirectangular projection, I had a lot of apps that could readily reproject into a wide variety of other projections, and I could readily use the result in 3D apps to create pictures as from space. I do, however, have a few very old icomaps I made in my youth. To avoid getting into disputes about copyright law (a very popular subject on the net), I decided to use one of those here. For private use, it should be perfectly acceptable to reproject proprietary imagery in this manner; the public exhibition of derivative works is… debatable. Private exhibition of derivative works should be completely kosher.

The scanned and stretched UV image I’m using.

The map I’m using today was based very loosely on the map of Craw created by J. Andrew Keith for “A Referee’s Guide to Planetbuilding,” as found on page 25 of “The Best of the JTAS,” Volume 3. There are some significant differences, and the original was, oddly enough, in an equirectangular projection. I’ll call it Wark.

To start, let’s open Wings3D. To create our icosahedron, we’ll right-click somewhere on the screen and select “Icosahedron” from the menu. Now click somewhere on the icosahedron and press the B key to select the body. Now right-click on the body and select “UV Mapping” from the menu.

You can control the view by clicking the middle mouse button and moving the mouse around to rotate. Hold down the middle mouse button and drag to dolly the view. Hit the left mouse button to get out of view control when you’re happy with the view.

The AutoUV screen shown with the vertices nicely aligned with the image.

In the AutoUV Segmenting window, look at the top of the icosahedron. Hit the E button to select edges and click on the five edges around the north pole. Now look at the bottom and select the five edges around the south pole. Select one edge on the midsection of the body to connect one of the five selected edges in the north to one of the five selected edges in the south. Once you have these eleven edges selected, right-click somewhere on the segmenting window and select “Mark Edges for Cut” from the menu. Now right-click again and select “Continue,” then “Unfolding.”

You’ll find in the AutoUV window that, if you have the triangles selected, a right click gives you a menu that includes the options, “Move,” “Scale,” and “Rotate.” Use these to move, scale, and rotate the triangles until they are arranged horizontally, in roughly the orientation of your icomap, and scaled so that they pretty nearly fill the square. Don’t worry too much about getting it perfect. We’ll adjust later.

Now right-click again and select “Create Texture.” For Size, go with the biggest possible (2048×2048), and for Render select “Background” for 0, “Draw Edges” for 1, and “None” for 2. Hit OK.

In the Outliner window on the right of the screen, click on the checkerboard next to an item that says something like “icosahedron1_auv.” The number may vary. Now right click and select “Make External.” Pick out the location where you want to save the image and click Save.

Now in your favorite image editing app, open the image with the scanned map. You’ll want to scale this to match the texture resolution.

In Photoshop, select the menu Image>Image Size…, uncheck “Constrain Proportions,” check “Resample Image:”, and select Bicubic Sharper from the pop-down menu if your original map is smaller in any dimension than the texture resolution. Since the texture is 2048×2048 pixels, that is the Width and Height to which we want to set this image. The Document Size stuff is irrelevant to our purposes. Click OK. Now the image is rescaled.

In gimp, select the menu Image>Scale Image…, click on the chain icon to unconstrain proportions, and set Width and Height to 2048 pixels. Choose the Sinc (Lanczos3) interpolation. Click Scale. Now the image is rescaled.

Now save your rescaled image in bmp format under the same name as the texture image, so as to replace it. For instance “icosahedron1_auv.bmp.”

Back in Wings3D, select the texture image once more. Right click and select “Refresh.” Give it a moment to load and you will find your icosahedron now has the map image projected on its surface. Sort of. Chances are things don’t quite line up. Now we fix that problem.

Let’s go back to the AutoUV window. Now hit the V key for vertex selection mode. For the sake of sanity, hit the spacebar to deselect all the vertices. As necessary, click vertices and drag them to the appropriate triangle corners. Selection is sticky, so if you want to select one vertex at a time (you will), hit the spacebar to deselect before selecting another vertex. To center the view on your vertex, click on the AutoUV window menubar View>Highlight Aim, or just hit the A key (which is much simpler). Zoom in using the scroll wheel. You’ll find that the same controls work in the 3d view and elsewhere. To move the selected vertex or vertices, right-click and select “Move.” You’ll probably want to “Free” move. Once you have all the vertices in the appropriate corners, have a look at the 3d view in the Geometry window. This should look much better now.

Once you get it looking satisfactory (something of a judgement call; if you’re satisfied, it’s satisfactory), save the icosahedron. Now, just as an experiment, select all twenty faces in the 3d view of the Geometry window. A short way to select all faces is to hit the B key to select the entire icosahedron, then hit the F key to change to face select mode. Now right-click and select “Smooth” from the menu, or just tap the S key. Repeat till you have about 960 faces; each smooth roughly quadruples the face count. The next smooth after that jumps abruptly to about 3840 faces, which may be desirable, but for most purposes 960 faces looks pretty darn spherical. Even 240 faces might be sufficient for distant views, and 3840 might need smoothing on extreme closeup. Not that the current texture is terribly suited to extreme closeup viewing. This ends our use of the eye candy, here. For the rest of this tutorial, we’ll be working with the straight icosahedron. Smoothing works for most purposes, and exports beautifully to the Wavefront OBJ format, but smoothing seems to break uv-mapping on povray export. Doesn’t matter, ’cause I think the geometry will still be perfect on the spherical projection.

Reload the uv-mapped icosahedron you saved earlier, and select the menu File>Export>Pov-Ray (.pov), making sure you click the little rectangle at the end. Under Camera, enter a Width of 2048 and a Height of 1024. Move the pull-down menu next to Camera from “Perspective” to “Spherical.” Click OK to export.

Now open your saved pov-file in MegaPov or your selected povray implementation. This needs some alterations. First, use your image editing app to save the texture file as a png.

You can try rendering, but it will likely fail.

First, comment out the #include "rad_def.inc" line. Now comment out the entire global_settings declaration. Change the camera_location to <0,0,0>.

In the camera block comment out the lines beginning with right, up, angle and sky. Change the look_at coordinates to <0,0,0>.

Comment out the light_source block.

In the texture block, add uv_mapping as the new first line.

Replace everything inside the pigment block with

image_map {
  png "icosahedron1_auv.png"
}

In the finish block change the ambient rgb vector to <1,1,1>. This will brighten up the rendered image a bit…

The parameters, as I set them, are available for your perusal here.
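Put together, the business end of the edited scene file should look roughly like this. This is a sketch from memory rather than a verbatim Wings3D export, and your mesh and texture names will almost certainly differ:

camera {
  spherical
  location <0, 0, 0>    // moved to the center of the icosahedron
  look_at <0, 0, 0>
  // right, up, angle and sky lines all commented out
}

// light_source { ... }  commented out entirely

// ...the mesh2 geometry exported by Wings3D goes here...

texture {
  uv_mapping            // the new first line of the texture block
  pigment {
    image_map {
      png "icosahedron1_auv.png"
    }
  }
  finish {
    ambient rgb <1, 1, 1>   // full ambient, since the lights are gone
  }
}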

Now you should get a successful rendering. If the image isn’t saving in MegaPOV, go to Window>Render Preferences. If the pull-down menu under Output File Options says “Don’t Save Image,” pull that down to “PNG.” Try to render again; now you should have an image you can open in Photoshop or gimp.

This is the image hot out of povray. A bit flipped it is.

Last time I did this, I had to flip the canvas vertically because of mirroring; this time I had to flip horizontally. I’m not sure what was different, but if you examine the original icomap and compare it to the reprojected version rendered in povray, you should be able to figure out which way to go.

In Photoshop, you can flip the image by selecting Image>Image Rotation>Flip Canvas Horizontal (or Vertical, if your image is upside down). I then used Filter>Other>Offset… to center my continents. This should only be a horizontal move, with the vertical move always set to zero.

In gimp, you can flip the image by selecting Image>Transform>Flip Horizontally (or Vertically, if your image is upside down). I then used Layer>Transform>Offset to center my continents. This took a bit of trial and error, as the offset isn’t shown until you commit by hitting Offset. This should only be a horizontal move, with the vertical move always set to zero.

This is the final map after flip and offset.

When done, save your map image. You can now import this image into gplates, Flex Projector, G.Projector or, with a suitable pgw file, GRASS or QGIS. If you have Flexify, you can also manipulate the projection in Photoshop.
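In case you need it, a pgw world file is just six lines of plain text: pixel width, two rotation terms, negative pixel height, and then the longitude and latitude of the center of the top-left pixel. For a full-globe 2048×1024 equirectangular image in decimal degrees, something like this should work:

0.17578125
0.0
0.0
-0.17578125
-179.912109375
89.912109375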

This is what Wark looks like in gplates. Over the north pole, with most of the significant inhabited regions in view.

By the third time I did this, it took me about eight minutes to do the uv-mapping nicely. The povray portion of the exercise, including export from Wings3D, editing the script, rendering, flipping, and image editing, took less than six minutes. It took about fifteen minutes to set up the parameters for the Blender beauty shot at the top of this page.

Thank you for reading,
The Astrographer


Working with the Rotations File in GPlates

I’m going to start out with a quick introduction to the rotation file.

First thing, let’s go over the basic data line in the rotation file. Each such line pins down a plate’s position at a single point in time.

100 0.0 0.0 0.0 0.0 000 !1

The first column is the PlateID that is being referenced.

The second column is the time for which this data is valid, in millions of years before present. In this case, this can be considered the start point.

Next, the third, fourth, and fifth columns describe the Euler rotation for this plate: the latitude and longitude of the Euler pole, and the angle of rotation about that pole, in degrees. For our purposes, it will suffice to know that this describes the way in which the feature is moved from its starting point. In this case, three zeroes means that there is no displacement or rotation of the feature or features from their state as defined in the input file. For what we’re doing, it will suffice to always enter zeroes, as any changes of position will be made graphically in gplates. If you intend to do this on a more real-world basis, you should be able to find plenty of help on the internet. Here would be a good starting point.

The sixth column is your conjugate plateID. All movements of this plate will be made relative to the conjugate plate, which can be considered “stationary.” In this case, the value of zero means we are basing the movements of this plate on the default reference coordinate system. More on this later.

The seventh column consists of an exclamation mark (!) followed by descriptive comments of some sort. Perhaps the name of the continent. This information is not processed by gplates and exists primarily for the user’s benefit.

Now this isn’t quite sufficient. gplates tries to animate by interpolating between points in time. It can’t extrapolate. This means we have to add a row describing end conditions. Like so…

100   0.0   0.0    0.0    0.0  000 !1
100 150.0   0.0    0.0    0.0  000 !1

As you see, we’ve added a new line. The only difference is that the time is now 150.0, rather than 0.0. Now if you were to set the date in gplates to 75.0 and use the Modify Reconstruction Pole tool to move a feature with a plateID of 100, gplates would automatically add another row between the two we defined, with date 75.0 and whatever rotations are required to put the feature where we placed it graphically. No muss, no fuss!

Now if you move the time forward, the feature will move back toward its starting point. This might not be desirable; you might want to know how you are moving the feature relative to its last position, not its initial position. In that case, open the rotation file in your trusty text editor after saving it in gplates, and replace the three columns defining the Euler rotation for time 150.0 with the ones that have been created for 75.0. Changing…

100   0.0   0.0    0.0    0.0  000 !1
100  75.0 -37.12 -11.67 -60.04  000 !Calculated interactively by GPlates
100 150.0   0.0    0.0    0.0  000 !1

Into…

100   0.0   0.0    0.0    0.0  000 !1
100  75.0 -37.12 -11.67 -60.04  000 !Calculated interactively by GPlates
100 150.0 -37.12 -11.67 -60.04  000 !1

A simple copy and paste. Now the plates will remain in the last defined position. Wash, rinse and repeat as you make changes…

Let’s say we want to extrapolate to 200.0 Ma. In that case we simply add another row with the time value set to 200.0. Thus…

100   0.0   0.0    0.0    0.0  000 !1
100  75.0 -37.12 -11.67 -60.04  000 !Calculated interactively by GPlates
100 150.0 -37.12 -11.67 -60.04  000 !1
100 200.0 -37.12 -11.67 -60.04  000 !1

And keep on rockin’!

Next Monday I’ll post a silly “toy” example, demonstrating the use of the conjugate or “fixed” plate.

Hopefully this was helpful,
The Astrographer
