A Resource for Learning Quantum GIS

I found a nice set of video tutorials for learning the use of QGIS at Mango Map. The first module introduces the QGIS interface. The second module goes over the basics of creating a map. It looks like further posts are being made at roughly weekly intervals(like my own blog… in theory).

Hopefully this will be a good introduction to the use of the program, even if it doesn’t necessarily delve deeply into the particular problems of people trying to use QGIS to create maps of imaginary places. Fantasy mapping is still mapping, so the basics will be useful.

Thanks,
The Astrographer


Big Planet Keep on Rolling

Same planet, slightly better render…

My intended post for last week took so long that I decided to simplify things a bit. I was going to discuss prettifying the tectonics.js output and making a Blender animation of the prettified planet spinning. I’ve learned a lot about qgis (and wilbur) while trying to do this, but I’m still groping around. I’m not saying anything against tectonics.js; it’s my fault for pretty much ignoring too many of the useful hints the program gives and misinterpreting too many of the others. I also habitually underestimate just how wide my mountain ranges are. Anyway, for now I’m just going to focus on the animation using the planet I have. Not Earth, that’s just cheating, but I’m using the not altogether successful planet I tried to create over the last two weeks. I need a quicker workflow; one that doesn’t involve constantly googling for instructions…

I’ll start with a sphere similar to the one I put together for an earlier article on displaying your world. I replaced the bedrock color I previously used for the diffuse and specular color with a hypsometric texture I created in wilbur. My original intent was to create a more realistic satellite view with a simple climate model. That would have been awesome!

I used a 16-bit tiff image for the bumpmap. Sixteen-bit png’s seem to fail in blender, so I used qgis to convert my png to tiff. I also wanted to create a subtle displacement map, but the sixteen-bit tiff inflated the planet into a lumpy mess several times as large as the undisplaced sphere, even with a nearly zero displacement influence. I decided to use a more conventional 8-bit version of the map for a separate displacement texture.

The first thing I tried was to use the gdal_translate tool to convert my 32-bit floating-point BT elevation map into an 8-bit png.

gdal_translate -ot Byte -of PNG ${input_file} ${output_file}

where ${input_file} is the name and path of the input file, and ${output_file} is the desired name and location for the converted file.

Unfortunately, this failed badly: all the elevations above 255 meters were simply clipped. Instead, I used the Raster Calculator to make an intermediate file with the following expression.
${input_elevation_layer} / 32.0
This results in another 32-bit elevation file with values in the range 0..255. It helped that I started with an elevation range from sea level to less than 8000 meters. The divisor may need to be larger if the range of values is larger, and can be smaller if the range is smaller. I then used gdal_translate, as above, to convert that into an 8-bit png.
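As an aside, newer builds of GDAL can do the rescaling in one step with the -scale option, which maps a source range onto a destination range; something like this should work (check your GDAL version, and adjust the 8000 to your own elevation range):

gdal_translate -ot Byte -of PNG -scale 0 8000 0 255 ${input_file} ${output_file}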

Since all I wanted was a very small relief like that on some globes, the 8-bit version was sufficient. Unless you’re using something like terragen, there’s really no way to make a displacement map in realistic proportions anyway; real planets are smoother, in proportion, than cue balls.

For the bumpmap I had used a normal influence of 12.0; for the relief texture, I used a displacement influence of 0.12, even with the map values limited to less than 256.

I decided to discard the clouds and atmospheric effects. Maybe this is a desk globe. Perhaps I should also model a stand for the thing… A slightly less subtle displacement might be in order.

Now that we have a kinda decent globe, let’s animate the thing. I started at frame zero, with the rotation set to zero. In the “tool palette” to the left of the 3d view(toggle it on and off with the “t” key), I scrolled down to find the keyframes section, clicked “insert” and selected “Rotation.”

At the bottom of the Timeline editor there are three numeric entry fields. The first two are labelled “Start:” and “End:.” Predictably, these denote the starting and ending frames of the animation; this will be useful later. To the left of these is another numeric field showing the current frame number. Click on this and enter a frame number for the next desired keyframe. I chose to put in keyframes every 65 frames, so 0, 65, 130, 195, and 260. At each keyframe, I went to the numeric palette to the right of the 3d view (toggled with the “n” key), where, near the top, you’ll find the transformations; I added 180° to the z-axis rotation with each keyframe, so 0, 180, 360, 540 and 720.
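For anyone who would rather script this, here’s a rough bpy sketch of the same keyframing (my own shorthand, assuming the globe is the active object and a 2.7x-era Blender):

import bpy
from math import radians

globe = bpy.context.object
for i, frame in enumerate(range(0, 261, 65)):     # frames 0, 65, 130, 195, 260
    globe.rotation_euler[2] = radians(180.0 * i)  # 0, 180, 360, 540, 720 degrees about z
    globe.keyframe_insert(data_path="rotation_euler", index=2, frame=frame)

# Blender's default Bezier interpolation eases in and out of every keyframe;
# forcing linear interpolation keeps the spin speed constant.
for fcurve in globe.animation_data.action.fcurves:
    for keyframe in fcurve.keyframe_points:
        keyframe.interpolation = 'LINEAR'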

With that done, it was time to go to the Properties editor and select the Render tab. There are sections here controlling the display of renders, resolution, anti-aliasing and the like. I invite you to experiment with other sections, but for this I’ll focus on the Dimensions and Output sections. In Dimensions select the desired resolution and frame rate. I went with a 960 by 800 pixel image size and 16 frames per second. If you change the resolution you may need to (g)rab and (r)otate the camera to restore the composition of your scene. I’ll wait.

Below the X and Y resolution there is an additional percentage field. This allows you to create fast test renders without messing around with the camera every time. This is a pretty simple project, but when you are dealing with more complex scenes and longer render times, it’s nice to be able to take a quick look at what your scene looks like to the camera.

Under the Output section, first select an output path. Since I’m going to render all the frames separately and stitch them together later, I decided to create a directory specifically for my render. Check Overwrite and File Extensions; you may need to redo things…

Below the Placeholders checkbox, which I leave unchecked, there is an output format menu with a number of image and movie formats. You could choose a movie format like mov, avi or MPEG, but I’m going with png for individual numbered frames. I believe you can give a C printf-style name template, but I’m not entirely sure.

To render an image press F12, to render an animation sequence, press ctrl-F12. You can also select them under Render in the Info panel menu.

Initially, I set the animation to start at frame 1, the frame after the initial keyframe, and to end at frame 260, the last keyframe, which returns the globe to its initial rotation. This is supposed to allow looping without hesitation, but when I rendered an avi internally, the animation seemed to accelerate up to speed at the start and decelerate at the end. I’m not sure why this was happening, but the render time was a bit long, so I figured I’d render out a full rotation from the middle of the sequence and stitch the images together in an outside program. Thus, I set start to 66 and end to 195. Once all the images were rendered and saved under names of the form 0066.png .. 0195.png, it was time for stitching.
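Those render settings can also be made in a few lines of bpy, if you’d rather script them (again a rough sketch against a 2.7x-era Blender; the output path is only a placeholder):

import bpy

scene = bpy.context.scene
scene.render.resolution_x = 960
scene.render.resolution_y = 800
scene.render.resolution_percentage = 100   # drop to 50 for quick test renders
scene.render.fps = 16
scene.frame_start = 66
scene.frame_end = 195
scene.render.filepath = "//render/"        # directory for the numbered frames
scene.render.image_settings.file_format = 'PNG'
scene.render.use_overwrite = True
scene.render.use_file_extension = True

bpy.ops.render.render(animation=True)      # same as pressing ctrl-F12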

From my understanding, ffmpeg is the best free standalone program for stitching images together into movies (and a lot of other movie-related tasks; it’s kind of the ImageMagick of movies).

In my unix terminal I enter the following command:
ffmpeg -r 16 -vsync 1 -f image2 -start_number 0066 -i %04d.png -vcodec copy -qscale 5 spinning_planet.mov

-r 16 sets the speed to 16 frames per second

-f image2 tells it to accept a sequence of images as input.

-start_number 0066 is important. It tells the program to start reading from the image numbered 66. Otherwise, if it doesn’t find an image numbered less than five, it will assume files are missing and bail out.

-i %04d.png is a format descriptor telling ffmpeg where to look for input files.

spinning_planet.mov is the name and format of the desired output movie file.

The rest of the options may or may not matter. I’m not taking chances…
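If the stream-copied mov gives your player trouble, re-encoding is another option. Something along these lines should work with any reasonably recent ffmpeg (the output name is just an example):

ffmpeg -framerate 16 -start_number 66 -i %04d.png -c:v libx264 -pix_fmt yuv420p spinning_planet.mp4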

Next time, maybe I’ll add sound…

Comments, questions, corrections or suggestions are welcome! Thank you for your patience,
The Astrographer


Geometry for Geographers

Introduction

Today, I’d like to share a few geometric formulae I’ve found useful in worldbuilding. There are formulae here for determining the distance between two points with known latitudes and longitudes; the inverse problem (the latitude and longitude of a destination, given a known origin location and a direction and distance); the area of polygons on a sphere; the distance to the horizon for a planet of a given radius, given a viewpoint height; and the area of a circle of given radius on a sphere.

Great Circle Distance Between Two Points on a Sphere

If you know the latitude and longitude of two points on a sphere, you can figure out the arc distance in radians between those points with just a little trigonometry. Point A is at latitude lat_a, longitude lon_a. Point B is at latitude lat_b, longitude lon_b. The difference of longitude is P = lon_b − lon_a.

The arc distance is θ = arccos(sin(lat_a)·sin(lat_b) + cos(lat_a)·cos(lat_b)·cos(P)).

Thus the distance is d = R·θ, where R is the radius of the sphere.

Once you know the distance, you can readily calculate the initial bearing from point A to point B. The bearing is β = atan2(sin(P)·cos(lat_b), cos(lat_a)·sin(lat_b) − sin(lat_a)·cos(lat_b)·cos(P)). You can figure out the final bearing by interchanging b and a. This will prove useful in determining the area of spherical polygons. Keep it in mind.
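Here’s a quick Python sketch of those two formulas, with everything in degrees (the radius is just an Earth-sized example):

from math import radians, degrees, sin, cos, acos, atan2

def arc_distance(lat_a, lon_a, lat_b, lon_b):
    """Great-circle arc distance in radians between two points given in degrees."""
    la, lb = radians(lat_a), radians(lat_b)
    p = radians(lon_b - lon_a)
    return acos(sin(la) * sin(lb) + cos(la) * cos(lb) * cos(p))

def initial_bearing(lat_a, lon_a, lat_b, lon_b):
    """Initial bearing from A toward B, in degrees clockwise from north."""
    la, lb = radians(lat_a), radians(lat_b)
    p = radians(lon_b - lon_a)
    y = sin(p) * cos(lb)
    x = cos(la) * sin(lb) - sin(la) * cos(lb) * cos(p)
    return degrees(atan2(y, x)) % 360.0

R = 6371.0                                        # km, an Earth-sized example
d = R * arc_distance(36.1, -86.7, 33.9, -118.4)   # roughly 2890 km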

Destination Given Distance and Bearing from Origin Point

Given a known point at (lat_a, lon_a), a planet’s radius, R, a bearing, θ, and a distance, d, how do we find the new point (lat_b, lon_b)? Note, since Mathematica’s implementation of the atan2(y,x) function is apparently functionally identical to its atan(y/x) function, being the same function name overloaded with inverted input order (ArcTan[x,y] == ArcTan[y/x]), I decided to just go with the y/x form. In a Java or Python or, apparently, JS program, you’d use atan2(num, denom) instead.

lat_b = arcsin(sin(lat_a)·cos(d/R) + cos(lat_a)·sin(d/R)·cos(θ))

lon_b = lon_a + atan2(sin(θ)·sin(d/R)·cos(lat_a), cos(d/R) − sin(lat_a)·sin(lat_b)).

For further information, check this page out.
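In Python, the destination formulas might look like this (a sketch, with angles in degrees and d in the same units as R):

from math import radians, degrees, sin, cos, asin, atan2

def destination(lat_a, lon_a, bearing, d, R):
    """Point reached by travelling distance d from (lat_a, lon_a) on the given bearing."""
    la, br = radians(lat_a), radians(bearing)
    ang = d / R                               # angular distance in radians
    lat_b = asin(sin(la) * cos(ang) + cos(la) * sin(ang) * cos(br))
    lon_b = radians(lon_a) + atan2(sin(br) * sin(ang) * cos(la),
                                   cos(ang) - sin(la) * sin(lat_b))
    return degrees(lat_b), degrees(lon_b)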

Area of Spherical Polygons

The formula for the area of a spherical triangle is pretty simple looking. Just S = R²·(A + B + C − π). A, B and C are the three inner angles of the triangle (in radians), R is the radius of the sphere and S is the surface area of the triangle. For each vertex, use the Great Circle formulas above to determine the distance and bearing to both neighboring vertices. The inner vertex angle is equal to the difference between the bearings to the two neighboring vertices.

The same principle is used to find the area of more complicated polygons. In the general polygon case, though, it’s important to keep track of convex and concave angles. It might be necessary to make diagrams to keep track of which angles are internal.

S = R²·(σ − (n − 2)·π), where σ is the sum of the inner angles in radians, and n is the number of sides.
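A Python sketch, assuming you’ve already worked the inner angles out from the bearings as described:

from math import pi

def spherical_polygon_area(angles, R):
    """Area of a spherical polygon from its inner angles (in radians) and the sphere radius."""
    return R**2 * (sum(angles) - (len(angles) - 2) * pi)

# Sanity check: a triangle with three 90-degree corners covers one eighth of the sphere,
# so spherical_polygon_area([pi/2, pi/2, pi/2], R) equals pi * R**2 / 2.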

Distance to the Horizon

Figure 1

As shown in figure 1, point P is our central point of interest; point H is the point on the horizon as seen from P; point A is the point on the surface directly beneath P; and θ is the angle subtended, at the center of the sphere, between points P and H. As before, R is the radius of the sphere.

D, the direct distance between points P and H, is also known as the slant distance. The formula for slant distance is D = √(h·(2R + h)), where h is the height of the viewing point above the ground (the length PA).

The value for θ would be θ = arccos(R / (R + h)).

The distance along the arc AH is d = R·θ, with θ in radians. Thus the arc distance, which I call the map distance, since it would be the distance measured on a map, is d = R·arccos(R / (R + h)).

The area of a planet observable from a point at height, h, is A = 2πR²·(1 − cos θ) = 2πR²·h / (R + h).

The fraction of the planet observable from that height would be f = h / (2·(R + h)).

For reference, the total surface area of the planet is A_planet = 4πR².
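Pulling those together into a small Python sketch (h and R just need to be in the same units):

from math import sqrt, acos, pi

def horizon(R, h):
    """Slant distance, map distance, visible area and visible fraction for height h."""
    slant = sqrt(h * (2.0 * R + h))           # straight-line distance to the horizon
    theta = acos(R / (R + h))                 # angle at the planet's center, in radians
    map_dist = R * theta                      # distance measured along the surface
    area = 2.0 * pi * R**2 * h / (R + h)      # surface area visible from the viewpoint
    fraction = h / (2.0 * (R + h))            # fraction of the full 4*pi*R^2
    return slant, map_dist, area, fraction

# For R = 6371 km and h = 0.002 km (standing eye height), the horizon is about 5 km away.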

Area of a Circle on the Surface of a Sphere

Figure 2

My next formula will be for the surface area of the circular region within a distance, d, of a point, P, on the surface of a sphere of radius, R, as shown in figure 2. From page 128 of the CRC Standard Mathematical Tables, 26th edition (similar information, with 3d figures, here), I find under spherical figures that the zone and segment of one base has a surface area of S = 2πRh. Incidentally, the volume of this portion of the sphere is V = (π·h²/3)·(3R − h), not that we’re using that here. The arc distance from P to the edge of the area is d = R·θ. An examination of the geometry leads us to the conclusion that h = R·(1 − cos θ), so the area of the spherical surface within angular distance θ of the center is S = 2πR²·(1 − cos θ).


Displaying your Planet in Blender

I love it when a planet comes together!

Intro-Ducktion

In the process of making some pictures for recent blogs, I’ve found myself messing around quite a bit with Blender. There are things I’ve done before in Blender that I had completely forgotten how to do, and there are other things that are somewhat involved to do in Blender that other programs pull off without a hitch.

Some of this comes down to the general-purpose nature of Blender as compared to the more focussed purposes of other programs. Displaying a map on a rotating globe is fairly easy for gplates, because that’s one of its core competencies. On the other hand, gplates isn’t capable of displaying raytraced specularity variation across a planet’s surface or showing proper hillshading due to surface topography. Bryce, on the other hand, is capable of doing these things to some degree, and some of them are easier there. Bryce is getting pretty long in the tooth at this point, though, and even fairly simple renders are sloowww. Terragen is pretty sweet: like Google Earth with raytracing and your own world’s terrain. Unfortunately, my family has to eat and stuff, so Terragen is right out…

Creating a Globe

Our first step will be to create the globe we’ll be texturing. On the menubar, select Add>Mesh>UV Sphere. Since we’re not going to do UV-mapping on this one, I’m going to go ahead and smooth the thing. First, go into Edit mode and in the 3D View menu select Mesh>Faces>Shade Smooth. Next, return to Object mode. In Properties, select the Modifiers tab. Click Add Modifier (wrench icon) and select Subdivision Surface. Set Render to three subdivisions and click Apply. If you like, you can forego applying and simply leave the modifier in place. Whatever you choose, you now have a globe. Now to texture the thing.
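If you prefer to script the setup, here’s a rough bpy equivalent (written against a 2.7x-era Blender; details may vary by version):

import bpy

# Add a UV sphere at the origin; it becomes the active, selected object.
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 0.0))
globe = bpy.context.object
globe.name = "Planet"

# Same as Mesh > Faces > Shade Smooth.
bpy.ops.object.shade_smooth()

# Subdivision Surface modifier with three render-time subdivisions.
subsurf = globe.modifiers.new(name="Subsurf", type='SUBSURF')
subsurf.levels = 2
subsurf.render_levels = 3

# Uncomment to bake the modifier into the mesh, like clicking Apply:
# bpy.ops.object.modifier_apply(modifier=subsurf.name)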

Loading Spherical Image Textures

The first problem to solve is loading equirectangular projection(or “spherical”) images and using them as textures. Surprisingly, this seems easier with UV-mapped icosahedral textures. Although, to be honest, I did all the modeling and UV-mapping in Wings3D. For this purpose, I’ll be using textures generated by the Tectonics.js program. I am aware that these aren’t necessarily suitable as-is for this purpose, but this can be considered sort of an early evaluation prior to spending a lot of time optimizing them.

I’ll start with the bedrock texture. This is the simplest, because it’s simply a color, which is the easiest thing to apply in Blender. Making sure you have your globe selected, go to the Material editing tab(brass ball icon) in Properties. There will be a button that says New. To the left of that will be a pull-down that allows you to browse materials. If you started with an empty scene then the only material in the list will belong to the sphere you made. Select that. Rename it if you wish. For now we’ll leave the settings as-is. It’s hard to tell what effect the various shading options will have till you have the textures onboard.

Bring up the Textures tab(checkerboard square). The currently selected texture will be imaginatively named Tex, and its type will be None. Change the type to Image or Movie, and, if you like, change the name to something more descriptive of its role as surface coloration. While you’re up there set the Preview to Both, so you can see the texture image and get some idea what it’s going to do. Make sure the preview render is on a sphere.

Now, scroll down to Image and click Open. Pick out the desired image from the filesystem. In the preview, you’ll see that the texture is about what we expect, although squeezed into a square. The material preview, however, is going to be disappointing. This is because the projection of the flat texture onto the sphere is wrong. Let us now fix that.

Scroll down to Mapping. The coordinates seem to be fine as Generated, so we’ll leave that be. Let’s change the Projection to Sphere, and have us a look at the preview. The material should be much better.

Let’s make a trial render to see how this came out. Go to the Render tab(camera icon) and scroll down to dimensions. Set the X and Y resolution to whatever you’ll want as your final render size and set the scale to 50% to speed up your trial renders. If your desired resolution is much less than 1000×1000, maybe you should leave scale at something closer to 100%…

Scrolling down to Output, you can set your image format and related parameters. I’m not too worried about that at this stage. I’ll just let the trial renders live in slots within the program till I’m ready for a final render.

Scroll back up to the Render pane. I usually set Display to New Window for convenience, because it defaults to Image Editor and replaces your 3D View window with an Image Editor window. Set that as you like… Click Render or press F12. Not the prettiest thing ever, but the texture seems to work. It seems to me, the seas should have more glare than the land. Let’s see what we can do about that.

Now, previously in Photoshop, I created a Sea mask image by making a magic wand selection of the water in the bedrock image and saving the resulting channel to its own file. I also made a land mask image by saving an inverted version of same. I go back to the Texture tab and select an empty texture slot. Hit New and select Image or Movie. Scroll down to Image, hit open and select the sea mask image. Make sure to uncheck Use Alpha under Image. This image doesn’t have a useful alpha channel, so we want it to use the greyscale rgb as alpha, which is what it uses to control intensities. Set your mapping and such as with the previous texture. You’ll see the black and white image in the texture now, instead of the bedrock colors, but at least it ain’t a white cueball and everything’s in the right place.

Scroll down to Influence. Uncheck Diffuse Color, check Specular Intensity. Maybe check Hardness under Specular, as well. The sea colors seem a bit bright, so you could use this to put a large negative influence on diffuse intensity as well, but, in my limited experience, that is fraught with issues(it tends to brighten the land too much, it’s a bear to adjust, and the color of the water tends to get way too deep and saturated by the time you’ve gotten it dark enough). Best way to adjust colors, for the moment, is probably in the texture itself, using your favorite image editor(not, in my case, by any means, Blender). Try another trial render.

At this point, you should adjust the parameters on the material and textures. This will involve a certain amount of trial and error, jogging between the textures and the material controls and frequent trial renders. Try other controls as well, such as the other texture influences and stuff in the Shading panel of the Materials tab.

Next thing to do is to give the globe a bit of relief. Once again, we select an empty texture slot, create a new image texture, load an image (this time elevations) and set the mapping and such. Uncheck all of the influences except Normal, and reduce the strength of the normal to at most about 0.5, unless of course you want to intentionally exaggerate relief in order to bring out smaller features.

This would be a good time to try a preliminary full render. Take a note of the dimensions of the planet sphere. Once we have the planet surface the way we want it, it’s safest to go up to the Outliner and restrict viewport selection of the planet surface object. Just click on the arrow icon to shadow it, and click on it again if you need to change the planet in the future.

Making a Cloudsphere

Now we add a new sphere with the same center as the planet globe to put the clouds on. My notes say that the X/Y/Z dimensions of the globe are 12/12/12, and I want the clouds to hug the planet pretty closely, so I’ll size it to 12.35/12.35/12.35 after smoothing and such. Make sure to smooth and subdivide the clouds sphere as you did the planet. Create a new material, and zero its diffuse, specular and ambient values (at least initially). Check Transparent and set it to Raytrace. Set alpha to zero. Go down to the Options pane and turn Traceable off; leaving it on always seems to make the planet surface render solid black, and I’m not certain why. Do a quick test render to make sure the planet surface is still visible.

Add a new texture for your clouds. Figuring out a noise that looks good for global clouds is a problem I’ve yet to solve, so I’ll leave you to work out the details. I used a Distorted Noise with a Voronoi F2 basis and considerable Improved Perlin distortion. In Mapping, I stretched the size by about three in the z-coordinate. Best results could be attained by loading a real world global cloud map, but these sometimes show evidence of Earthly continent shapes to the wary. An artist could try painting in a cloud map, but my skills aren’t remotely up to that. For now, this will have to do.

I gave the cloud texture influence over diffuse intensity, color and alpha, specular intensity and geometry normal. All of these were close to one, with small adjustments downward.

I put a ramp on the colors. It’s all white, but the alpha is 0.8 on the right and 0.0 on the left. I added another 0.8 alpha stop at the 0.965 position, and another 0.0 alpha stop at position 0.480. The ramp allowed me finer control over cloud cover. A final render with clouds is in order.

Atmosphere

Next we add an atmosphere. This is still very much a work in progress. I’m trying for something like a LunarCell atmosphere with more control and realism. I haven’t yet attained the first goal. I’m pretty sure Blender has a way to make volumetric density fall off with distance from the center, but I haven’t figured it out yet. If I can figure out how to make an object presence mask, like I can in Bryce, then I could possibly do something useful with a radial gradient in photoshop. No dice yet, though. To start with, I’ll just settle for a volumetric ball with some scattering.

So, first we make a nicely smoothed and subdivided sphere with X/Y/Z dimensions of 13/13/13. We create a material for it. Make the material transparent, with density, oh, let’s push it down to 0.1. I’ll rack the scattering up to 1.0, with a -0.5 asymmetry, meaning that more light is back-scattered. A test render and… that didn’t come out well. Must remember to uncheck Traceable in the Options pane of the Material. Try again… success! Looks a little extreme, though. Since the density should already be pretty subtle, I’ll start by reducing the Scattering values a bit, especially the amount. By the time I’m done with the whole test render (30% size, now, ’cause it’s not quick), adjust, render again process, I have a density of 0.12 and a scattering of 0.3 with 0.0 asymmetry. It looks good, but maybe a little too wide, so I reduce the size of the atmosphere sphere to 12.7/12.7/12.7.

I’m pretty happy with the results. The shaded relief needs work in Wilbur, and, in spite of a lot of fiddling, the cloudmap isn’t nearly as good as what LunarCell can do. Which isn’t actually very good. LunarCell is good for pretty pictures and its mapgen isn’t bad so far as noise-centered generation goes, but its cloudmap generation is socially awkward at best. Sadly, it’s about the best clouds-from-noise I’ve seen… Looks ok from a distance, but it needs work. I’ll probably just have to bite the bullet and use real-life clouds.


Conclusion

Hopefully, this was useful to people. If not it should probably be a good reference for me. I’ve gotten pretty good with the very basics of Blender, but beyond rendering models as monochromatic plastic toys, materials have had me flummoxed. This should be useful next time I’m trying to texture a spaceship. It should also make a good background.

For my next trick, the real reason why I jumped into Blender with this in the first place, a revolving-head animation of the planet. Now I’m well away from familiar shores!

Thanks for reading all of this, and any comments and advice are very very welcome.
The Astrographer


Realistic Plate Tectonics with Tectonics.js


For some time I’ve had an interest in terrain generation using simulated tectonic processes. I’ve successfully used PlaTec, but it’s strictly 2-d and the output is pretty limited. Another one that seemed promising was pytectonics, but since it froze my system dead, I’m not sure how good it might be(sour grapes and all that…).

Recently, I came across a plate tectonic simulator that runs in javascript in the web browser. Surprisingly, given all the trouble I’ve had with compatibility issues lately, it worked and was reasonably fast. Tectonics.js was created by Carl Davidson, who was also the author of the aforementioned pytectonics. I’ve been engaged in a discussion with Mr. Davidson on reddit, and he has been very active and responsive to user suggestions.

The procedure, in a nutshell, will be, first, to create an attractive tectonic simulation, and then, second, to convert that into a decent map.

First, run Tectonics.js at a speed of about 5 Myr/s till the age reaches about a billion years or so. The goal here is to give the model time to reach a reasonable equilibrium without spending forever doing it. Slower speeds, on the other hand, tend to produce more attractive results. I’m using the Safari browser, which isn’t all that fast; my attempts with Chrome, while much faster, also tend to crash out after roughly the first billion years. If your browser has a significantly faster javascript implementation, your computer is a bit less long in the tooth than mine, or you’re a lot more patient than me, it could pay off to run at smaller time steps. Although it took most of a day, I’ve made runs at as small a time step as 0.25 Myr/s, and the results were, for the most part, much cleaner.

From about a billion years, reduce the time step or “Speed” to 1 Myr/s. Run it like that till you approach a desired age, perhaps four to five billion years. Make sure you get at least the last half billion years or so at no more than 1 Myr/s time step. If desired, reduce the speed to around 0.25-0.5 Myr/s for the last few hundred million years.

When you’ve reached the desired time, or the map is in a configuration you find attractive, reduce the Speed to zero to stop the animation. Personally, I consider the Bedrock view attractive and useful, and the Plates view is a crucial guide to building your world. The Elevation view is less useful than I’d hoped, but it’s still helpful. First, make sure that the projection is set to Equirectangular, and the screen is sized so that some black is showing all around the jagged edges of the map. This can take some window resizing and using the scroll wheel to bring the image in and out. It’s self-explanatory once you try it. Next, set the view to Bedrock and press p to create a screenshot in a new tab. Save the new tab to a png in your working directory. Repeat this process with the view set to Plates, then again for Elevation. You can also save copies in other modes, like temperature and precipitation, but, as of this writing, those are less useful. The program is currently in active development, so those modes may be more useful later.

It can pay off to save intermediate imagery before you reach your desired time. Sometimes the model approaches an attractive configuration, then, in a fit of perversity, quickly morphs irretrievably into an ugly mess. Perhaps, even if you don’t initially intend to model the geological history of the planet, having maps of the earlier continental positions could be useful later. Particularly, if you’d like to model adaptive radiation of local lifeforms and such, having at least a sketchy history of the world’s tectonic drift could be helpful. I’ll deal with geological history in a later post. For now, you just want to pick out one time point that fits your needs.

Now, import the Bedrock image from your chosen time period to Photoshop or your favorite raster editing app. First, select the black background with the Magic Wand tool on zero tolerance. Next, invert the selection and copy. Now create a new image, retaining the default size, and paste from clipboard. In Photoshop, at least, a New image file defaults to a size just big enough to contain the selected area.

If your raster editor doesn’t behave similarly, you can simply crop the image down till it just contains the map area instead of following the procedure described in the previous paragraph.

If you examine your image, now, you’ll notice two things. First, the edges are jagged.

The prepared equirectangular bedrock map.

Second, the image size is not quite a 2:1 rectangle. I believe these both relate to the fact that the map is composed of discrete cells that don’t conform to the latitude, longitude grid. The easiest way to deal with this is to crop the image down so that the jagged edges don’t show and resample the result to a 2:1 rectangle. This will necessarily reduce precision a bit, but for most purposes it doesn’t matter. You might need to cleanup around the edges to fix shorelines that don’t quite line up. I made an attempt to line up the east and west edges, but they didn’t line up. Instead, I decided to keep the image as it is, use the eyedropper to sample the ocean color, and fill the background layer with ocean color. This works because I could center all the land on the map without overlaps. If it’s impossible to center land on the map such that it doesn’t overlap the edges, you’ll need to connect the land across the boundary somehow.

Now resample the image to a 2:1 rectangle.

The prepared equirectangular map of the tectonic plates.

Repeat for all of the output images. For the Elevation, I fill the ocean background with white to represent sealevel elevation. I then invert the image, because I prefer darker values for lower elevations. It’s a matter of taste, though you have to keep track. I also saved a seamask, based on the selection I created to mask out the blue seas. Except for resizing, I took Plates pretty much as-is, because the edge behavior is continuous, so any boundaries across the problem areas would be a work of imagination.

The prepared equirectangular elevation map.

The imagery is now ready to be applied to gplates, of course. Each of them will have an extent of 90º N to 90º S, and 180º W to 180º E.
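If you’d rather have the georeferencing baked into the files themselves, so that qgis (and possibly gplates) can pick it up without being told the extent, one option is GDAL’s gdal_translate, assigning that full-globe extent and a plain lat/long coordinate system (the filenames here are placeholders):

gdal_translate -of GTiff -a_srs EPSG:4326 -a_ullr -180 90 180 -90 bedrock.png bedrock_geo.tif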

Once you have all that loaded into gplates, you can look at it with a nice graticule, varying opacity so that you can, for instance, compare plate boundaries to continental shorelines, or a variety of other effects.

The final prepared, separated and inverted equirectangular map.

For the picture at the top of the page, I added a photoshop-derived hillshade to give a better sense of what the elevations look like straight out of the box. Photoshop or Wilbur with a judicious bit of well-applied noise could be used to enhance the appearance of the mountains. The vector editing tools in gplates or qgis could be used to mark shorelines, various kinds of plate boundaries, mountainous regions and other data derived from the tectonic simulation. I’ll leave that for a future article. For now, have fun with tectonics!

Thanks,
The Astrographer


Placing Raster Features Using GPlates

Today we’re going to look at using the gplates program to place pre-created raster features on the globe. For minimal distortion, we will begin by placing the raster at the center of the map, where the central meridian crosses the equator. If we were placing raster features taken from particular parts of Earth, we would want to make sure they were in equirectangular (or geographic, or plate carrée, or latlong) projection and place them in the position they were in on the original map (this is good for importing real-world data from sources such as the SRTM 90-m database). I am going to give instructions both for the use of real-world data and island maps from Amit Patel’s Polygon Map Generation Demo.

A few tips I’ve picked up through previous experimentation. Raster layers which are imported into the same location(since we’re dropping unprojected imagery as close to the center as possible to minimize distortion) need to have separate associated vector shapefiles.

In my filesystem, I create a separate directory for each raster. Within that directory, I create a “raster” subdirectory, where I place the raster itself and a “vector” directory, where I place the associated shapefile. This will make it easier to keep track of everything.

To start with, I created a patch in photoshop. Just an ordinary tiff image. I used tiff to test whether I could import and reproject 16-bit or 32-bit rasters. GPlates choked on the 32-bit tiff, but successfully loaded the 16-bit version. The patch I created was small and silly, so I decided to make its geographic extent small; if this works in 16-bit I might, perhaps, use it as a set of elevations for the somewhat outscale raster I imported as a continent earlier. So how do I set the georeferencing? Nine-meter resolution is pretty common and excellent for moderately close-in work, so I’m using that. To reference this to Earth, the diameter of our planet is close enough to 12,756,000 meters. Given that the circumference of a circle is equal to its diameter times π (about 3.14159265…), that gives us a circumference of about 40,074,156 meters. As back-of-the-envelope as this is getting, 1-meter precision is more than sufficient. The resolution of my image is 1k square (1024×1024), so that’s an extent of 9,216 meters square. A degree comes to about 111,317 meters, so, keeping track of units,

9,216 meters / 111,317 meters/º ≈ 8.279 × 10^-2 º

I want this centered at the [0,0] point, so divide that by two to get the extents: top latitude of 0.0414º N, bottom latitude of 0.0414º S (-0.0414), left longitude of 0.0414º W (-0.0414) and right longitude of 0.0414º E. Unfortunately, this throws an inescapable exception in gplates. I successfully imported the raster with the extent being 0.2º on a side. That gives me a pixel size of about 21.7 meters (about 71 feet).

Once I get that imported, I digitize a new polygon geometry roughly covering the area of the image. I gave it a classification of gpml:UnclassifiedFeature, a plateID and a name. I also made sure that the checkboxes for Distant Past and Distant Future were filled, not that it matters for what we’re doing here, but whatever… Create and Save to a shapefile in the Vector directory associated with the raster. In the Layers window click on the arrow next to the Reconstructed Raster you just imported. Under Inputs, find Reconstructed polygons, click on “Add new connection” and select the Reconstructed Geometry you just digitized. Use the Choose Feature tool to select the polygon associated with your raster. You can now use the Modify Reconstruction Pole tool to move the raster to where you want it. In my case I placed it somewhere in the mountains of the small continent I had placed while practicing to do this. Place it where you want it, hit Apply and hit OK a couple times. I had to jockey mine around a bit to get it right where I wanted it. If all of your edits are done without changing the Time setting, there will only be one entry in the rot-file.
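The same back-of-the-envelope arithmetic, as a tiny Python sketch (the constants are the Earth-like values used above):

from math import pi

diameter_m = 12756000.0
metres_per_degree = diameter_m * pi / 360.0      # about 111,317 m

def half_extent_deg(pixel_size_m, pixels):
    """Half-extent in degrees for a square raster centered on latitude 0, longitude 0."""
    return (pixel_size_m * pixels / metres_per_degree) / 2.0

print(half_extent_deg(9.0, 1024))    # about 0.0414 degrees each way
print(half_extent_deg(21.7, 1024))   # about 0.1 degrees each way, the extent that actually imported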

Speaking of the rot-file, go to File>Manage Feature Collections(cmd-M or ctrl-M), and make sure all changes are saved.

Now, I’m going to load in an island generated on the Polygon Map Generation website. To figure out the extent for this, I looked at a map of the Hawaiian Islands and observed that the Big Island fit into a space a little over a degree on a side. The islands generated by Mr. Patel’s generator just have the feel of being much smaller than Hawaii. I’ve decided to give it an extent of about half a degree each way, and, since the island shapes seem to roughly fit with the look of the other islands, I’ll center it at 21.5º N by 157.5º W. That would be just off Oahu, and maybe just a bit bigger. So, I used a top latitude of 21.7º N, a bottom latitude of 21.3º N, left longitude of 157.7º W(-157.7) and a right longitude of 157.3º W(-157.3). I reduced the extent a little ’cause the island still seemed big.

This time, we’ll roughly hug the coast with the shape feature we create. This will minimize the amount of water color we have to clean up later. In this case, I’m just going to play around and pretend like the slop represents shallow water. Once you’ve digitized and created the feature with a unique plateID, save it to a new shapefile. I actually like this thing’s rough location, so I’m going to move it just a little, mostly rotating it. Maybe I’ll plant it over Oahu’s position and call it Notoahu…

Now, I’m going to add another couple of islands, but I’m going to add them both to the center of the map. The first one will re-use the same shapefile as the previous import, and I will locate it at 0.2,-0.2,-0.2,0.2, in the usual order. I’ll digitize an outline of this island and save it to the previously created shapefile. A possibly late word of warning: it’s best to give all of your files easily recognized names. It’s maddening to try to find “planet_island-61462-2AF” somewhere between “planet_island-61462-1TR” and “planet_island-61462-2JL.” Anyway, I wound up using the Pole Rotation tool to place that island somewhere in the space between Maui, Lanai and Molokai on the Earth map.

The next island I placed at 0.3,-0.3,-0.3,0.3. Since the initial location of this island coincides with the previous, it needs its own shapefile, otherwise overlapping will be a problem. Once I’ve got everything digitized and connected, I’ll shift it over with the rest of my little island chain.

For my last trick, I want to move a tile of countryside taken from the SRTM Database at TileX 54, TileY 3. I select the GeoTiff radio button before hitting the button marked “Click here to Begin Search.” I observe from the download page that the filename is srtm_54_04.zip. The latitude has a minimum of 40º N and a maximum of 45º N; the longitude has a minimum of 85º E and a maximum of 90º E. We also observe that the center point is at latitude 42.5º N by longitude 87.5º E. I chose Data Download (HTTP) to download the tiff.

Sadly, gplates has some serious problems with the 16-bit geotiff. This is really a shame, as moving fragments of real-world elevations and piecing them together is probably the single most useful aspect of this technique. Popping down pictures of islands is all well and good, but not a terribly powerful use-case.

It seems I might need to convince the developers of gplates to implement the import and reconstructed export of multi-byte elevation data. Failing that, the raster import/reconstruction/export abilities of this program are going to be functionally limited to imagery. Shame, really.

Hopefully, this could prove useful.

Thanks,
The Astrographer


Working with the Conjugate Plate in GPlates

As promised last week, we are now going to demonstrate the usefulness of the conjugate plate in gplates.

I’m going to start with just the two lines preparing plate 100. To that I will add three more pairs of lines defining new plates 101, 102 and 103, which have plate 100 as their conjugate plate.

100  0.0   0.0    0.0    0.0  000 !1
100 150.0   0.0    0.0    0.0  000 !1
101  0.0   0.0    0.0    0.0  100 !Chris
101 150.0   0.0    0.0    0.0  100 !Chris
102  0.0   0.0    0.0    0.0  100 !Tom
102 150.0   0.0    0.0    0.0  100 !Tom
103  0.0   0.0    0.0    0.0  100 !Mary
103 150.0   0.0    0.0    0.0  100 !Mary

I’ll draw up a quick coastline in gplates and give it the PlateID of 100. I will now define three points using the Digitise New Multi-point Geometry M tool. I’ll  call them Chris, Tom and Mary. Maybe we’re chronicling the travels of three very slow ents…

Now, I’m going to use the Modify Reconstruction Pole P tool to describe their changing positions at various times over the next hundred million years. For convenience, I’ll copy the last position Euler coordinates to the 150.0 time row each time I make a modification.

Now that we’ve described the slow travels of our friends the slow-moving tree-people across continent 100, we now do the same thing for continent 100 itself.

Now, if we run the animation, we’ll see that the movements of our three little friends follow the continent as it moves. To make that clearer, try adding a fourth point that remains stationary until the last minute. Let’s call this stay-at-home ent Taylor, and give it a new plateID of 104. I’ll have Taylor remain stationary until 125.0, then I’ll have it meet up with some of the other points at 150.0. As you see, although there are no moves described for plate 104 until time 125.0, the point follows the movements of the continent.

It might not be clear that Taylor is truly stationary with respect to plate 100, because the interpolation of plate 100’s movement causes some jiggling. So select the menu item Reconstruction>Specify Anchored Plate ID… and set the PlateID to 100. Now rerun the animation.

I hope this demonstration was helpful.
The Astrographer


Tutorial for Forcing Icosahedral Maps onto Flat Maps

Setting up the beauty shot took longer than making the map.

So. A really… really… long time ago, I posted a method for mapping an icosahedral map of the sort that RPGs like Traveller are so enamored of, to a sphere in a 3d app. Similar projections were used by many science fiction role-playing games, such as 2300AD, GURPS Space and Space Opera.

Even back in those days of hoary antiquity I was looking for a means to map that surface onto an equirectangular map (plate carrée, geographic or latlong for the technical). Given the prevalence of apps like G.Projector and Flex Projector, both of which require equirectangular maps as input, this was very desirable. Even the Flexify filter, with its many available input projections, chokes on most interrupted projections on input.

At long last I have found a way to convert icomaps into equirectangular projection.

This isn’t just useful for getting old science fiction RPG maps into a more usable projection. Having seen how these icomaps look projected back onto globes, I have to say this is a dandy little projection for drawing new maps in. Distortion is surprisingly limited. In the hand-drawn map that I am going to use to demonstrate this technique, I placed the island of Korsland very near the north pole. In spite of that, it has no noticeable pinching. That is awesome for anyone who wants to draw a decent map of an imaginary world. The polar-pinch problem is common for maps drawn on the flat and very difficult to eradicate, but it really isn’t much of a problem with the icomap projection.

This method could also be used to derive a flat map from a texture painted in Blender with Texture Paint or noise effects. A last possibility, if one had an excellent grasp of povray scripting, would be to create maps from scripted combinations of noise in povray.

Looking at the Wikipedia, I found this page on Tissot Indicatrices. Included on that page was the povray source code used to generate the templates. I puzzled at this, then I realized the magic of the spherical camera.

Unfortunately, it took some trial and error to figure out how to export a uv-mapped object to povray. Wings3D exports to povray, but the uv-mapping seems to be lost. I finally figured out how to get it to render. I’ll go over it again here.

This can be done entirely with free apps. I used Photoshop, but everything that needs to be done here can be done just fine in the GIMP. There are apparently several implementations of povray; I’m using MegaPOV, myself ’cause it comes pre-compiled for the mac. You’ll also need Wings3D.

I have already successfully used this method to reproject maps of Regina from the World Builder’s Handbook by Digest Group Publications and Unnight from the GURPS Space worldbook of the same name by Steve Jackson Games. I wanted to use something I, myself, owned and created. Since I started doing cartography in a serious way, I haven’t really used the icomap method very often. I knew that if I drew things out in equirectangular projection, I had a lot of apps that could readily reproject into a wide variety of other projections. I could also readily use it in 3D apps to create pictures as from space. I do have a few very old maps I made in my youth. To avoid getting into disputes about copyright law (a very popular subject on the net), I decided to use one of my own icomaps for this demonstration. For private use, it should be perfectly acceptable to use proprietary imagery in this manner, while the public exhibition of derivative works is… debatable. Private exhibition of derivative works should be completely kosher.

The scanned and stretched UV image I’m using.

The map I’m using today was based very loosely on the map of Craw created by J. Andrew Keith for “A Referee’s Guide to Planetbuilding,” as found on page 25 of “The Best of the JTAS,” Volume 3. There are some significant differences, and the original was, oddly enough, in an equirectangular projection. I’ll call it Wark.

To start, let’s open Wings3D. To create our icosahedron, we’ll right-click somewhere on the screen and select “Icosahedron” from the menu. Now click somewhere on the icosahedron and press the B key to select the body. Now right-click on the body and select “UV Mapping” from the menu.

You can control the view by clicking the middle mouse button and moving the mouse around to rotate. Hold down the middle mouse button and drag to dolly the view. Hit the left mouse button to get out of view control, when you’re happy about the view.

The AutoUV screen shown with the vertices nicely aligned with the image.

In the AutoUV Segmenting window, look at the top of the icosahedron. Hit the E button to select edges and click on the five edges around the north pole. Now look at the bottom and select the five edges around the south pole. Select one edge on the midsection of the body to connect one of the five selected edges in the north to one of the five selected edges in the south. Once you have these eleven edges selected, right click somewhere on the segmenting window and select “Mark Edges for Cut” from the menu. Now right click again and select “Continue”, now select “Unfolding.”

You’ll find in the AutoUV window that, if you have the triangles selected, a right-click gives you a menu that includes the options “Move,” “Scale,” and “Rotate.” Use these so that the triangles are arranged horizontally, in roughly the orientation of your icomap, and scaled so that they pretty nearly fill the square. Don’t worry too much about getting it perfect. We’ll adjust later.

Now right click again and select “Create Texture.” For Size, go with the biggest possible(2048×2048), and for Render select “Background” for 0, “Draw Edges” for 1, and “None” for 2. Hit OK.

In the Outliner window on the right of the screen, click on the checkerboard next to an item that says something like “icosahedron1_auv.” The number may vary. Now right click and select “Make External.” Pick out the location where you want to save the image and click Save.

Now in your favorite image editing app, open the image with the scanned map. You’ll want to scale this to match the texture resolution.

In Photoshop select the menu Image>Image Size…, uncheck “Constrain Proportions”, check “Resample Image:” and select Bicubic Sharper from the pop-down menu if your original map is smaller on any dimension than the texture resolution. Since the texture is 2048×2048 pixels, that is the Width and Height we want to set this image. The Document size stuff is irrelevant to our purposes. Click OK. Now the image is rescaled.

In gimp, select the menu Image>Scale Image…, click on the chain icon to unconstrain proportions, set Width and Height to 2048 pixels. Choose the Sinc(Lanczos 3) interpolation. Click Scale. Now the image is rescaled.

Now save your rescaled image in bmp format under the same name as the texture image, so as to replace it. For instance “icosahedron1_auv.bmp.”

Back in Wings3D, select the texture image once more. Right click and select “Refresh.” Give it a moment to load and you will find your icosahedron now has the map image projected on its surface. Sort of. Chances are things don’t quite line up. Now we fix that problem.

Let’s go back to the AutoUV window. Now hit the V key for vertex selection mode. For the sake of sanity, hit the spacebar to deselect all the vertices. As necessary, click vertices and drag them to the appropriate triangle corners. Selection is sticky, so if you want to select one vertex at a time(you will), hit the spacebar to deselect before selecting another vertex. To center the view on your vertex click on the AutoUV window menubar View>Highlight Aim, or just click the A key(which is much simpler). Zoom in using the scroll wheel. You’ll find that the same controls work in the 3d view and elsewhere. To move the selected vertex/vertices right click and select “Move.” You’ll probably want to “Free” move. Once you have all the vertices in the appropriate corners, have a look at the 3d view in the Geometry window. This should look much better now.

Once you get it looking satisfactory, something of a judgement call (if you’re satisfied, it’s satisfactory), save the icosahedron. Now, just as an experiment, select all twenty faces in the 3d view of the Geometry window. A short way to select all faces is to hit the B key to select the entire icosahedron, then hit the F key to change to face select mode. Now right-click and select “Smooth” from the menu, or just tap the S key. Repeat till you have about 960 faces. The next smooth after that increases abruptly to about 3840 faces, which may be desirable, but for most purposes 960 faces looks pretty darn spherical. Even 240 faces might be sufficient for distant views, and 3840 might need smoothing on extreme closeup. Not that the current texture is terribly suited to extreme closeup viewing. This ends our use of the eye candy here. For the rest of this tutorial, we’ll be working with the straight icosahedron. Smoothing works for most purposes, and exports beautifully to the Wavefront OBJ format, but smoothing seems to break uv-mapping on povray export. Doesn’t matter, ’cause I think the geometry will still be perfect on the spherical projection.

Reload the uv-mapped icosahedron you saved earlier, and select the menu File>Export>Pov-Ray (.pov), making sure you click the little rectangle at the end. Under Camera, enter a Width of 2048 and a Height of 1024. Move the pull-down menu next to Camera from “Perspective,” to, “Spherical.” Click OK to export.

Now open your saved pov-file in MegaPov or your selected povray implementation. This needs some alterations. First, use your image editing app to save the texture file as a png.

You can try rendering, but it will likely fail.

First, comment out, “#include “rad_def.inc”.” Now comment out the entire “global” declaration. Change the camera_location to <0,0,0>.

In the camera block comment out the lines beginning with right, up, angle and sky. Change the look_at coordinates to <0,0,0>.

Comment out the light_source block.

In the texture block, add uv_mapping as the new first line.

Replace everything inside the pigment block with

image_map {

png “icosahedron_auv.png”

}

In the finish block change the ambient rgb vector to <1,1,1>. This will brighten up the rendered image a bit…

The parameters, as I set them, are available for your perusal here.

Now you should get a successful rendering. If the image isn’t saving in MegaPOV, go to Window>Render Preferences. If the pull-down menu under Output File Options says “Don’t Save Image,” pull that down to “PNG.” Now try to render again. Now you should have an image you can open in Photoshop or gimp.

This is the image hot out of povray. A bit flipped it is.

Last time I did this, I had to flip the canvas vertically because of mirroring, this time I had to flip horizontally. I’m not sure what was different, but if you examine the original icomap and compare to the reprojected version rendered in povray you should be able to figure out which way to go.

In Photoshop, you can flip the image by selecting Image>Image Rotation>Flip Canvas Horizontal(or vertical, if your image is upside down). I then used Filter>Other>Offset… to center my continents. This should only be a horizontal move, with the vertical move always set to zero.

In gimp, you can flip the image by selecting Image>Transform>Flip Horizontally (or vertically, if your image is upside down). I then used Layer>Transform>Offset to center my continents. This took a bit of trial and error, as the offset isn’t shown until you commit by hitting Offset. This should only be a horizontal move, with the vertical move always set to zero.

This is the final map after flip and offset.

When done save your map image. You can now import this image into gplates, Flex Projector, G.Projector or, with a suitable pgw file, GRASS or QGIS. If you have Flexify, you can also manipulate the projection in Photoshop.

This is what Wark looks like in gplates. Over the north pole, with most of the significant inhabited regions in view.

By the third time I did this, it took me about eight minutes to do the uv-mapping. Nicely. The povray portion of the exercise, including export from Wings3D, editing the script, rendering and flip and image editing took less than six minutes. It took about fifteen minutes to set up the parameters for the Blender beauty shot at the top of this page.

Thank you for reading,
The Astrographer


Working with the Rotations File in GPlates

I’m going to start out with a quick introduction to the rotation file.

First thing, let’s go over the basic data line in the rotation file. This describes a point in time and space.

100 0.0 0.0 0.0 0.0 000 !1

The first column is the PlateID that is being referenced.

The second column is the date for which this data is valid. In this case, this can be considered the start point.

Next, the third, fourth, and fifth columns describe the Euler rotation for this plate. For our purposes, it will suffice to know that this describes the way in which the feature is moved from its starting point. In this case, three zeroes means that there is no displacement or rotation of the feature or features from their state as defined in the input file. For what we’re doing, it will suffice to always enter zeroes, as any modification of position will be done graphically in gplates. If you intend to do this on a more real-world basis, you should be able to find plenty of help on the internet. Here would be a good starting point.

The sixth column is your conjugate plateID. All movements of this plate will be made relative to the conjugate plate, which can be considered “stationary.” In this case, the value of zero means we are basing the movements of this plate on the default reference coordinate system. More on this later.

The seventh column consists of an exclamation mark(!) followed by descriptive comments of some sort. Perhaps the name of the continent. This information is not processed by gplates and exists primarily for the user’s benefit.
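
To make the column layout concrete, here is a small Python sketch that picks one of these data lines apart. The field names are my own, and I'm assuming the usual GPlates convention that the three rotation numbers are the pole latitude, pole longitude and rotation angle in degrees.

def parse_rot_line(line):
    data, _, comment = line.partition("!")
    cols = data.split()
    return {
        "plate_id": int(cols[0]),      # column 1: the PlateID being moved
        "time_ma": float(cols[1]),     # column 2: the date this line is valid for
        "pole_lat": float(cols[2]),    # columns 3-5: the Euler rotation --
        "pole_lon": float(cols[3]),    #   pole latitude, pole longitude
        "angle_deg": float(cols[4]),   #   and rotation angle
        "conjugate": int(cols[5]),     # column 6: the "fixed" conjugate plate
        "comment": comment.strip(),    # column 7: whatever follows the bang
    }
print(parse_rot_line("100 0.0 0.0 0.0 0.0 000 !1"))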

Now this isn’t quite sufficient. gplates tries to animate by interpolating between points in time. It can’t extrapolate. This means we have to add a row describing end conditions. Like so…

100 0.0 0.0 0.0 0.0 000 !1

100 150.0 0.0 0.0 0.0 000 !1

As you see we’ve added a new line. The only difference is that the time is now 150.0, rather than 0.0. Now if you were to set the date in gplates to 75.0 and use the modify reconstruction pole to move a feature with a plateID of 100, gplates would automatically add another row between the two we defined with date 75.0 and whatever rotations required to put the feature where we placed it graphically. No muss, no fuss!

Now if you move the time forward, the feature will move back toward its starting point. This might not be desirable; you might want to move the feature relative to its last position, not its initial position. In that case, open the rotation file in your trusty text editor after saving it in gplates and replace the three columns defining the Euler rotation for time 150.0 with the ones that have been created for 75.0. Changing…

100  0.0   0.0    0.0    0.0  000 !1

100 75.0 -37.12  -11.67  -60.04  000 !Calculated interactively by GPlates

100 150.0   0.0    0.0    0.0  000 !1

Into…

100  0.0   0.0    0.0    0.0  000 !1

100 75.0 -37.12  -11.67  -60.04  000 !Calculated interactively by GPlates

100 150.0   -37.12  -11.67  -60.04  000 !1

A simple copy and paste. Now the plates will remain in the last defined position. Wash, rinse and repeat as you make changes…

Let’s say we want to extrapolate to 200.0 Ma. In that case we simply add another row with the time value set to 200.0. Thus…

100  0.0   0.0    0.0    0.0  000 !1

100 75.0 -37.12  -11.67  -60.04  000 !Calculated interactively by GPlates

100 150.0   -37.12  -11.67  -60.04  000 !1

100 200.0   -37.12  -11.67  -60.04  000 !1

And keep on rockin’!

Next monday I’ll post a silly “toy” example, demonstrating the use of the conjugate or “fixed” plate.

Hopefully this was helpful,
The Astrographer

Posted in Mapping, World Building | Tagged , , , , , , , , | Leave a comment

Plate Tectonics with GPlates and QGIS

Last time, in Building a World with a Globe and Paper, I described a simple hands-on model/game for simulating some of the effects of continental drift. Now we're going to play the same game using gplates and qgis on the computer. I, personally, am using an Apple Macintosh, but the apps are open-source and compiled versions are available for major platforms.

I’ll start by creating a rotations file. The process is described in detail in an earlier post. For now, just enter the following code into a plaintext file and name the file, world_rotations.rot.

1000 0.0 0.0 0.0 0.0 0 !first
1000 150.0 0.0 0.0 0.0 0 !first
2000 0.0 0.0 0.0 0.0 0 !second
2000 150.0 0.0 0.0 0.0 0 !second
3000 0.0 0.0 0.0 0.0 0 !third
3000 150.0 0.0 0.0 0.0 0 !third
4000 0.0 0.0 0.0 0.0 0 !fourth
4000 150.0 0.0 0.0 0.0 0 !fourth
5000 0.0 0.0 0.0 0.0 0 !fifth
5000 150.0 0.0 0.0 0.0 0 !fifth
6000 0.0 0.0 0.0 0.0 0 !sixth
6000 150.0 0.0 0.0 0.0 0 !sixth

That will be sufficient for up to six separate plates. While the term, “Euler rotations,” is a bit intimidating, you really don’t need to know the math for what we’re doing here. You can just copy one of the pairs of rows above, just altering the plateID, which is the first column, and, possibly the identifying comment, which is the last column after the exclamation mark or “bang”(!).
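
If typing out those pairs gets tedious, you could just as easily generate the file. A quick Python sketch that writes the same six-plate starter file as above:

names = ["first", "second", "third", "fourth", "fifth", "sixth"]
with open("world_rotations.rot", "w") as f:
    for i, name in enumerate(names, start=1):
        plate_id = i * 1000                # 1000, 2000, ... 6000
        for time in (0.0, 150.0):          # start and end rows for each plate
            f.write(f"{plate_id} {time} 0.0 0.0 0.0 0 !{name}\n")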

Load your fresh new rotation file into gplates by opening the menu File>Open Feature Collection…, and selecting, “world_rotations.rot.”

Now we can start drawing some continental coastline polygons. Click the "Digitise New Polygon Geometry" (G) button. To avoid polar pinching, make sure the view is set to "3D Orthographic." You can place the vertices of your desired polygon by clicking on the globe. You can also rotate the view by holding down the command key(ctrl for Windows) and dragging on the globe. When you're satisfied with the shape of your continent, hit Create Feature near the lower right corner.

A nice view of the farside of the globe, showing, clearly, just how big I made that First continent.

Since this is a polygon representing a continental shoreline, select gpml:Coastline from the list. If we wanted to define the edge of the continental slab(located roughly at the edge of the continental shelf), we could select gpml:ClosedContinentalBoundary, but coastline works for now. Hit Next.

In the next window you only need to change the PlateID to 1000, to associate this polygon with the first two rows in the rotation file, check Distant Past and Distant Future to make the polygon visible for all times, and select a name. For other polygons, you would select a different PlateID if you want them to rotate independently. If, for example, you wanted to add a feature representing a mountain range on this continent, you would also use PlateID 1000 for that feature, or a different ID corresponding to the continent that mountain range is associated with. Features can also be made to follow a given plate while still being able to move independently, but that is beyond the scope of this discussion. Hit Next.

You can examine the existing properties of the new feature in this window. When you’re done with your ogling, hit Next.

In this window select < Create a new feature collection >. You now have a new continental coastline and an unnamed feature collection to contain it.

To keep things simple, we’ll save the feature set and give it a name.

Select the menu File>Manage Feature Collections… Under Actions in the beige area next to New Feature Collection, click Save As; it's the floppy disk icon with a pen. For Format select ESRI shapefile(*.shp); this will allow us to manipulate the features in qgis. Give it a clear name like Coastlines.shp. Hit Save.

Create additional continents as desired, giving them PlateIDs 2000, 3000 and so on, up to 6000. Make sure to save your new features to Coastlines.shp or whatever you decided to name the shapefile. In the Layers window, check "Fill polygons" under "Reconstruction options" for the Coastlines layer. I like to set "Fill opacity" to 0.50 and use the menu View>Choose Background Colour… to give the basic globe a dark unsaturated blue color. Not necessary, but it's why my screenshots look the way they do.

I made some really big continents, so I could only fit PlateIDs 1000 through 4000 on the globe, as shown in the Rectangular View to the right. Since they're so big, I can split those little buggers up without overrunning the PlateIDs I have already defined in the rotations file. But first let's move those things around a bit…

Not much room to move around…

Fritz, set the time to 75.0 Ma in the Time text box in the upper left corner. Now use the Choose Feature (F) tool to select one of your polygons. Now that you have selected a continent to move, use the Modify Reconstruction Pole (P) tool. With this tool you can simply drag around the globe to move a ghost image of the continent around on the globe. Hold down <shift> and drag to rotate the continent around the other axis. How cool is thaaat? When you get the ghost where you want the continent, hit Apply in the lower right.

The only thing you can really change here is the comment that will be inserted to the right of the exclamation mark in the corresponding row of the rot-file. You can leave it as is, or put the name of your continent in there, or some other descriptive comment that would make it easy to find this entry in the rotations file. That could be instructive. Do as thou wilt! Hit OK.

There’s your continent, in its new position. Go ahead and move all your other continents. I decided to create a supercontinent by dragging all my continents together at 75.0 ma. This will serve me well in the next stage. For purposes of the tutorial, at least try to have at least two of your continents collide. You’ll notice I’m not really following the rules of the little game here. I’d like to demonstrate some of the useful features of qgis and gplates, so I’m going to cheat a little. If I were playing the game straight, I’d move in smaller time intervals, perhaps 0.0 to 5.0 to 10.0, etc. making small movements in the directions dictated by the dice roll at each interval. This would leave the resulting rot-file and animation based on it as a record of my moves. That’s cool, too, but for now…

Now that you have your continents in their 75.0 Ma position with at least one collision, select the menu Reconstruction>Export… Press the Select Single Snapshot Instant radio button and set the time to 75.0 Ma.

Hit the Add Export button. Data type is Reconstructed Geometries. Output File Format is Shapefiles(*.shp). Maybe select Wrap polyline and polygon geometries to the dateline??? Hit OK.

Set your Target Directory. I like to have a separate subdirectory for each reconstruction date. That way, if I screw something up, I can throw it away before anyone notices!

Export Snapshot. Close.

Okay. Now, in gplates, I’m going to do something a little wild, here. Call this an advanced project. Especially if it doesn’t work.

Select one of the continents using the Choose Feature tool. Now hit the Edit Feature button in the lower right, or press cmd-E(ctrl-E). Select the gml:validTime property in the list under the Edit Properties tab. Uncheck Distant Past and set the Begin(time of appearance) text field to 75.0. Then click Close. Repeat for each continent. This will cause all of your continent polygons to disappear for all times before 75.0 million years ago.

There’s two ways of making the collided continents polygons into a single polygon. Both of them will involve qgis. I will start with setting up QGIS.

First thing we’re going to do is make sure the Attributes, Digitizing and Advanced Digitizing toolbars are checked in the View>Toolbars menu. If they’re checked then the Attributes toolbar will contain the Select Single Feature QGISActionSelect button. The Digitizing toolbar will contain the Toggle Editing QGISActionToggleEditing button, the Current Edits QGISActionAllEdits button and the very useful Add Feature QGISActionAddPolygon button. The Advanced Digitizing Toolbar will contain the Merge Attributes of Selected Features QGISActionMergeAttributes button and the Merge Selected Features QGISActionMergeFeatures button as well as the Split Features QGISActionSplitFeatures button. Besides the usual panning and zooming tools this is all we’ll be using in qgis today.

For the first method we'll start by importing the reconstructed shapefile for 75.0 Ma that we just exported from gplates into qgis. Now we'll click on the Layers window to select the resulting vector layer. With that layer selected click Toggle Editing to allow editing of the continental polygons.

I want the final supercontinent to be a solid polygon without voids. To do that, I will create a new polygon to cover the internal void areas using the Add Feature tool. Simply click around so that the new polygon covers all the void areas internal to the outer coast shared by all the polygons without allowing any part of the polygon to go outside of the new supercontinent's area. Right click to complete the polygon. We don't give a flip about any of the attributes as these will be discarded anyway, so just click OK.

Now, using the Select Single Feature tool, select one of the collided polygons. Holding down the cmd-button(ctrl for Windows), select the rest of the continent polygons we wish to merge into the new supercontinent.

With all of those selected, let's click Merge Attributes of Selected Features. In the resulting window, select the polygon with the PlateID1 attribute of 1000. Click "Take attributes from selected feature." Now click OK.

This will allow us to merge the polygons together. Click Merge Selected Features to make the magic happen. All the features should have the same set of attributes, so just pick one. I guess this means the last step was unnecessary. Oh well. Voila. You now have a new supercontinent.

To save your changes click the little triangle in the Current Edits button and select Save for All Layers. OK. Now click Toggle Editing to be safe.
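
If you'd rather script the merge than click through it, roughly the same thing can be done outside qgis with geopandas and shapely. This is only a sketch with placeholder filenames; it assumes you've already added a polygon covering any interior voids, as described above, and it keeps the attributes of the first feature for the merged result.

import geopandas as gpd
from shapely.ops import unary_union
coastlines = gpd.read_file("Coastlines_75Ma.shp")    # placeholder: your 75.0 Ma export
supercontinent = unary_union(coastlines.geometry)    # union everything into one shape
attrs = coastlines.drop(columns="geometry").iloc[[0]].reset_index(drop=True)
out = gpd.GeoDataFrame(attrs, geometry=[supercontinent], crs=coastlines.crs)
out.to_file("supercontinent_75Ma.shp")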

Now import that shape into gplates in the usual way. Okay, as it turns out I made some mistakes. I really don't want to re-use PlateID 1000, because at 75.0 Ma it will be moved from its current location(which is its correct location for 75.0 Ma) in the same way that the 1000 continent was moved from its 0.0 Ma location. Not good. PlateIDs 5000 and 6000 are still available, so I'll use 5000 for the supercontinent. After 75.0 Ma, because all the older continents are gone, I can and will re-use the old PlateIDs. Though I will have to add a line to the rotation file for each PlateID, re-zeroing them. I also need to reset the gml:validTime attribute to Distant Past to 75.0 Ma.

The other possible way of creating the supercontinent is to create a new continent manually in gplates by tracing the outer boundary of the collided continents. In some cases, this might be the better setup, especially if you have one or more polygons that cross the edge of the rectangular map. Digitizing a new polygon geometry has already been covered, so I won't review that here. This would also allow you to keep 1000 as the PlateID for the supercontinent, but that will run into trouble later.

QGIS, showing my rift seed. The yellow polygon is the 100.0 Ma location of my supercontinent and the red polygon is the 75.0 Ma location of the same polygon.

Now you can move your supercontinent around a bit, but eventually it will rift and separate. I’ll do that at 100.0 Ma. Set the Time to 100.0. Now I’m going to Export the 100.0 Ma Reconstruction to a shapefile as done before and import it into qgis. This time, I’m going to plant three rift seeds and split up the supercontinent using the rifting method given in my last post.

I will use the Split Features tool in Advanced Digitizing to make the rifts. Two of the three rifts will be made in a single cut through the seed. The third rift will be made in a second cut passing through the seed. Once the cuts are made, I'll toggle editing off and save the changes.
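
For the record, the same kind of cut can be scripted with shapely, if you already know where you want the rifts to run. A sketch with made-up coordinates (a real rift line would come from your own map, and it has to cross all the way through the polygon to split it):

from shapely.geometry import Polygon, LineString
from shapely.ops import split
supercontinent = Polygon([(-40, -30), (40, -30), (40, 30), (-40, 30)])  # stand-in shape
rift = LineString([(0, -40), (5, 0), (0, 40)])                          # made-up rift line
pieces = split(supercontinent, rift)
for i, piece in enumerate(pieces.geoms, start=1):
    print(f"new plate {i}: area {piece.area:.1f}")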

Import that shapefile back into gplates. Reset the valid dates for the supercontinent to 100.0 Ma to 75.0 Ma. Set the valid dates for the new continents to Distant Past to 100.0 Ma. Finally, set the PlateIDs for your new continents to some value that corresponds to a zeroed-out PlateID in the rotation file. I'll use 1000, 2000 and 3000. To make things work properly, make sure the three rotations are set to zero at the date of appearance of the polygon.

This is the way my rifts split.

Go ahead and rotate things around to your heart’s content till you reach 150.0 Ma. I’m actually kind of running time backwards here since I’m going to use the 150.0 Ma shapes and position as “present day.” This could be done a lot more cleanly by running the initial shapes as 150.0 Ma and changing things as you run toward the present time, but this works. In retrospect, there’s a lot of things I could have done more cleanly…

For instance, the slice nearest the south pole has a little bit of a topological problem. It doesn't match the coastline of the existing supercontinent. This is because qgis made different assumptions about the nature of the projection when it made the cut. Basically, it ignored the projection altogether and placed the vertex at the end of the cut on a straight line within the projection. This then got distorted by the projection. You can move the end vertex of one of the neighboring polygons around in gplates to make it as close as possible to the edge of the supercontinent. Do this at the highest magnification possible. Then select the corresponding vertex in the neighboring polygon. This won't be perfect, but flaws should be essentially invisible at all but the highest magnifications. If doing this professionally, you'd want to use topological tools for this, but professional use is well outside the scope of this exercise. After a few millennia of erosion, the pieces wouldn't fit perfectly anyway.
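
If nudging vertices by hand gets tiresome, shapely's snap() does roughly the same cleanup in bulk: it moves vertices of one geometry onto nearby vertices of another within a tolerance. A sketch, with placeholder filenames and a half-degree tolerance you'd need to tune to your own data:

import geopandas as gpd
from shapely.ops import snap
plates = gpd.read_file("rifted_plates.shp")          # placeholder filenames
coast = gpd.read_file("supercontinent_75Ma.shp")
reference = coast.geometry.iloc[0]
# Snap each rifted plate's vertices onto the supercontinent coastline.
plates["geometry"] = plates.geometry.apply(lambda g: snap(g, reference, 0.5))
plates.to_file("rifted_plates_snapped.shp")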

Looking at the animation, I can see one of the pitfalls of cutting corners on this kind of thing. My continents all start out kind of passing through each other like ghosts. This wouldn't really have been an issue if I'd actually played the game, but I've decided to add a few in-between rotations just to get things looking better. The plates are still going to jerk around a lot, but the result should look a lot better.

My last post did… about the same thing… manually, with bits of paper stuck to a globe. The fidelity was very low and I'll be buggered if I'm anywhere near to converting the result into a useable form for mapping. It would practically require surveying instruments. What I had required most of a day's work just to do the thing and take some pictures. Writing it up was extra time…

What I did on the computer today took a few hours, but a lot of that time was spent on writing, wrangling pictures through Photoshop and putting this blog post together. The final result of my work includes shapefiles, animations which, while I can't post them, could be useful, and a final land/sea image which would be quite useful as a basis for a map in itself.

You could run this thing straight into Photoshop or Campaign Cartographer and make a map out of it straightaway. With some guides generated from the animation I could make a very nice heightmap in Wilbur.

I kind of wish I hadn’t made such huge continents to start with, but I think it turned out pretty darned well. Better than the globe. Now to try the actual game with a better set of initial and cut shapes.

I hope this proved useful to you,
The Astrographer

Posted in Mapping, Planetary Stuff, World Building | Tagged , , , , , , , , , | Leave a comment