
Rename las files with Python



I have a bunch of LAS files that have been delivered with different naming conventions. I want to rename the files so they all have the same uniform naming, based on the easting and northing coordinates of each tile's centroid.
I have generated a text file with the old and new names, and am trying to piece together a python script - but am running into this error

Traceback (most recent call last):
  File "C:\LASTools\Test\ReName_Files.py", line 64, in <module>
    newName = newList[indexOldName]
NameError: name 'indexOldName' is not defined

Here's the script. I am very new to Python and so this could take me hours of research and trial and error (it already has).

import os

folder = "C:\\LASTools\\Test\\"
lookupTable = open(folder + "LookupTable.txt")
tableList = lookupTable.readlines()
lookupTable.close()

old = "TIFF"
new = "TILE"

newTable = []
for line in tableList:
    newLine = line.strip()
    newerLine = newLine.strip(",")
    newestLine = newerLine.strip("'")
    almostFinalLine = newestLine.replace('"', "")
    finalLine = almostFinalLine.split(",")
    newTable.append(finalLine)

header = newTable[0]
indexOld = header.index(old) - 1
indexNew = header.index(new) - 1

oldList = []
newList = []
for item in newTable:
    if item != newTable[0]:
        valueOld = item[indexOld]
        valueNew = item[indexNew]
        oldList.append(valueOld)
        newList.append(valueNew)

for file in os.listdir(folder):
    nameAndExt = os.path.splitext(file)
    oldName = nameAndExt[0]
    extension = nameAndExt[1]
    print oldName
    if oldName in oldList:
        indexOldName = oldList.index(oldName)
    newName = newList[indexOldName]
    os.rename(folder + oldName + extension, folder + newName + extension)

I agree with Curlew that this is not a GIS question… however, you appear to be using LAStools (you're not importing it!), which is a GIS library.

Here's some code that might help. Note it's written with Esri arcpy objects, but you can use the laspy part as a reference:

import sys
sys.path.append(r'C:\Python27\ArcGIS10.1\Lib\site-packages')
import os, string, laspy, arcpy
from laspy.file import File

if len(sys.argv) != 5:
    print "Not enough arguments!"
    sys.exit()

InFolder = sys.argv[1]
OutFolder = sys.argv[2]
OutCoSys = sys.argv[3]
IndexName = sys.argv[4]

if not os.path.exists(InFolder):
    print "In folder not found"
    sys.exit()
if not os.path.exists(OutFolder):
    print "Out folder not found"
    sys.exit()

OutShape = "%s\\%s.shp" % (OutFolder, IndexName)
if os.path.exists(OutShape):
    try:
        print "Removing old index"
        arcpy.Delete_management(OutShape)
    except:
        print "Unable to remove, please check for write"
        print "access and permissions"
        sys.exit()

arcpy.CreateFeatureclass_management(OutFolder, IndexName + ".shp", "POLYGON", "", "DISABLED", "DISABLED", OutCoSys)
arcpy.AddField_management(OutShape, "FileName", "TEXT", "", "", "125")
InsCur = arcpy.InsertCursor(OutShape, OutCoSys)

for ThisFile in os.listdir(InFolder):
    FileName, Ext = os.path.splitext(ThisFile)
    if Ext.lower() == ".las":
        LasOpened = False
        try:
            LASfile = File(InFolder + "\\" + ThisFile, mode='r')
            # This is the bit you should get interested in
            # The MinCo and MaxCo is a list/tuple in the order [[X],[Y]]
            MinCo = LASfile.header.get_min()
            MaxCo = LASfile.header.get_max()
            # the centroid is then
            # CenX = (MinCo[0] + MaxCo[0]) / 2
            # CenY = (MinCo[1] + MaxCo[1]) / 2
            # Make your file name from that and then use
            # os.rename(InFolder + "\\" + ThisFile, InFolder + "\\" + NewName)
            # to rename
            LASfile.close()
            LasOpened = True
        except:
            print "Unable to open %s with LasPY" % ThisFile
        if LasOpened:
            Xmin = int(MinCo[0] / 1000) * 1000
            Ymin = int(MinCo[1] / 1000) * 1000
            Xmax = (int(MaxCo[0] / 1000) * 1000) + 1000
            Ymax = (int(MaxCo[1] / 1000) * 1000) + 1000
            Xmin = MinCo[0]
            Ymin = MinCo[1]
            Xmax = MaxCo[0]
            Ymax = MaxCo[1]
            PArray = arcpy.Array()
            NewPol = InsCur.newRow()
            TPnt = arcpy.Point(Xmin, Ymin)
            PArray.add(TPnt)
            TPnt = arcpy.Point(Xmin, Ymax)
            PArray.add(TPnt)
            TPnt = arcpy.Point(Xmax, Ymax)
            PArray.add(TPnt)
            TPnt = arcpy.Point(Xmax, Ymin)
            PArray.add(TPnt)
            TPnt = arcpy.Point(Xmin, Ymin)
            PArray.add(TPnt)
            Poly = arcpy.Polygon(PArray, OutCoSys)
            NewPol.shape = Poly
            NewPol.setValue("FileName", ThisFile)
            InsCur.insertRow(NewPol)
            del PArray
            del NewPol

del InsCur

I've put some comments in there to direct your attention. For Esri users this is also a good way to get a shapefile of the coverage of your LAS files. Note, I wrote this prior to 10.1 and the LAS dataset, so it's mostly redundant now.
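For the renaming itself, here is a minimal sketch in the spirit of the comments above, using the same laspy 1.x calls as the script (the E/N file-name pattern is just an example, not necessarily your convention):

import os
from laspy.file import File

InFolder = r"C:\LASTools\Test"

for ThisFile in os.listdir(InFolder):
    name, ext = os.path.splitext(ThisFile)
    if ext.lower() != ".las":
        continue
    LASfile = File(os.path.join(InFolder, ThisFile), mode='r')
    MinCo = LASfile.header.get_min()   # [x, y, z] minimums
    MaxCo = LASfile.header.get_max()   # [x, y, z] maximums
    LASfile.close()
    CenX = (MinCo[0] + MaxCo[0]) / 2   # tile centroid easting
    CenY = (MinCo[1] + MaxCo[1]) / 2   # tile centroid northing
    NewName = "E%d_N%d%s" % (int(CenX), int(CenY), ext)
    os.rename(os.path.join(InFolder, ThisFile),
              os.path.join(InFolder, NewName))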


if oldName in oldList:
    indexOldName = oldList.index(oldName)
newName = newList[indexOldName]

The problem is that when oldName is never in oldList, indexOldName never gets set to anything (defined), and the error is generated.

Make sure oldName is really in oldList, and set indexOldName to something, such as:

if oldName in oldList:
    indexOldName = oldList.index(oldName)
else:
    indexOldName = 0
newName = newList[indexOldName]
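Note that defaulting the index to 0 will silently rename any unmatched file to the first entry in the list. A safer variant (a sketch reusing the question's variables) builds a dict once and simply skips files with no match:

renameMap = dict(zip(oldList, newList))

for file in os.listdir(folder):
    oldName, extension = os.path.splitext(file)
    if oldName in renameMap:
        os.rename(folder + oldName + extension,
                  folder + renameMap[oldName] + extension)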

Fun with Colorizing Lidar with Imagery

by ArthurCrawford

This blog is developing, just starting and will grow with time:

Over the last few years, I have been publishing lidar colorized point clouds as supplements to the 3D cities, like the St. Louis area, that I have created using the 3D Basemap solution and extracting buildings from lidar. Sean Morrish wrote a wonderful blog on the mechanics of publishing lidar. Lately, I have looked into using colorized lidar point clouds as an effective and relatively cheap way to create 3D basemaps. It has some advantages: most states, counties and cities already have lidar available, most have high resolution imagery, and all have the NAIP imagery needed to create these scenes.

Manitowoc County lidar scenes:

Some counties, cities and states are doing this already. Manitowoc County, WI, has a 3D LiDAR Point Cloud and is also creating scenes with the data. Manitowoc County also did a great story map showing how their lidar is used as a colorized lidar point cloud here; I highly recommend taking a look at it.

The StoryMap shows, with a video, how to capture vertical, horizontal and direct distances of LiDAR ground surfaces at any location countywide, and how to obtain detailed measurements of small LiDAR surface features, such as the depth of ditches relative to the road.

It also shows how to measure LiDAR point cloud distances relative to LiDAR ground surfaces, using house roofs as an example to see the height of a building.

Here's one of Manitowoc's Buildings scene layers, where they had the creative idea of using the same colorized lidar to give the buildings fake sides: the lidar classified as buildings is shown several times over, each copy at a slightly lower elevation by changing the offset in the symbology. Further down in the blog, I show how to do this.

Hats off to Bruce Riesterer of Manitowoc County, who put this all together before retiring, including coming up with the idea of using the building points multiple times with different colors to show the sides of buildings. He now works in private industry; see his new work at RiestererB_AyresAGO.

State of Connecticut 3D Viewer:

I helped the State of Connecticut CLEAR colorize their lidar using NAIP imagery as the first statewide lidar point cloud published to ArcGIS Online. It turned out to be about 650 GB of lidar broken into two scene layer packages. The time spent on it was mainly processing time and loading time. CLEAR sent me the newest NAIP imagery they had, and with all the data on one computer, I just let it run colorizing the lidar. With that layer and other layers CLEAR had online, a web scene was created. A feature class was added that has links to their laz files, to the imagery, DEM and other layers. This allows users to preview the lidar in a 3D viewer before downloading; users can even make line or area measurements in 3D before deciding to download.

Here are several views using different symbology and filters on the published lidar point cloud for Connecticut.

Class code modulated with intensity: modulating with intensity brings out features like roads, sidewalks, details in rooftops, and trees.

Elevation modulated with intensity:

Color filtered to show buildings:

Here are some examples from their blog of how to display it:

Recently, I was asked by the Urban team to help with Chicago, and I created basic 3D building footprints from the Chicago Data Portal building footprints. I used the 8ppm Geiger lidar to create the DSM, nDSM and DTM for the 3D Basemap Solution. I then colorized the lidar using NAIP imagery, and again with the high resolution leaf-off 2018 imagery. I then extracted the points classified as vegetation and used them to replace the high resolution leaf-off points in the scene layer, showing the trees as green while keeping the high resolution imagery for the rooftops.

Above is the high resolution leaf-off imagery used to colorize the lidar in the scene. Below is the same area with the lidar colorized with NAIP for the vegetation (some building sides were classified by the vendor as vegetation in the delivery of the 8ppm Geiger lidar to the USGS). You can see how the trees are much more identifiable using the lidar colorized with NAIP.

This could be used as a 3D Basemap. The 3D buildings do not have segmented roofs (divided by height), but the lidar shows the detail of the buildings. Below is the John Hancock Building identified in blue with a basic multipatch polygon in a scene layer with transparency applied.

Here's a view of the Navy Pier with both the NAIP colorized trees and High Resolution colorized lidar on at the same time.

Picking the Imagery to use:

Using the Chicago lidar, I also built a scene to compare the different types of imagery used to colorize the lidar. It's a guide, using the video below, showing high resolution leaf-on imagery vs. high resolution leaf-off imagery vs. 1m NAIP leaf-on. You can see below how leaf-on imagery is great for showing trees. Your imagery does not have to match the year of the lidar exactly, but the closer the better. Making it visually appealing is important in 3D cartography with colorized point clouds.


Sometimes colorful fall imagery, before the leaves drop, can be great for scenes, but it all depends on what you want to show.

Way to add trees with limbs and points as leaves:

Here's a test I did using the Dead Tree symbology to represent the trunk and branches, while the leaves come from the colorized lidar. My Trees from Lidar tool or the 3D Basemap Solution tree tools can create the trees from lidar, and the points can then be symbolized with the Dead Tree symbol. This image was done in ArcGIS Pro, not a scene.

City of Klaipėda 3D viewer (Lithuania):

The City of Klaipėda in Lithuania has used colorized point clouds of trees, with cylinders to represent the trunks. The mesh here is very detailed. Because the view is zoomed in so far, the points appear fairly large.

Here's another view zoomed out:

And another further zoomed out:

Some of the scenes above use NAIP imagery instead of higher resolution imagery. Why, when higher resolution is available? Often high resolution imagery is somewhat oblique, whereas NAIP is usually collected from a higher altitude. With the higher altitude you get less lean of buildings in the imagery, and often the spacing of the lidar does not support the higher resolution imagery. In these cases NAIP is often preferred, but make point clouds with a couple of las files to see what the differences are.

Here's one over Fairfax County, VA, using their published colorized lidar (from their 3D LiDAR Viewer) and using intensity to show the roofs. With intensity, you can see the roof details better than with the NAIP imagery.

With large buildings, the building sides and roofs using intensity for symbology really shows off the detail. The sides of the buildings are just representative, not showing what the actual sides look like.

Here it is below using just the NAIP-colorized points, without the building sides and the intensity-symbolized roofs.

The missing sides of the buildings make it hard to see where a building actually is, and much of the roof detail is lost. Colorizing with intensity brings out more detail because intensity is a measure, collected for every point, of the return strength of the laser pulse that generated the point. It is based, in part, on the reflectivity of the object struck by the laser pulse. The imagery often does not match the lidar exactly, due to collection being slightly off and the slight oblique angle found in the imagery orthorectification.

Colorizing Redlands 2018 lidar:

Here's an example of colorizing the City of Redlands with the 2018 lidar, which I worked on to support one of the groups here at Esri. I highly recommend taking just one las file to begin with and running through the process all the way to publishing, to make sure there are no issues. I have in the past run through hours of download and colorization, just to learn a projection was wrong or something else was not correct (bad imagery, too dark or too light, etc.).

1. Downloaded the 2018 lidar from the USGS.

a. Downloaded the metadata file with footprints, unzipped it, and added the Tile shapefile to ArcGIS Pro.

b. I got the full path and name of the laz files from the USGS site. Here's an example of the path for the file to download:

c. Added a path field to the Tiles and calculated the path with the individual file name, replacing the original path copied in b. This is the formula I used in Calculate Field:

"ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_" & !Name! & "_LAS_2019.laz"

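That expression mixes VB-style concatenation (&) with Python-style field notation (!Name!); with the Calculate Field tool's Python parser the same expression would be:

"ftp://rockyftp.cr.usgs.gov/vdelivery/Datasets/Staged/Elevation/LPC/Projects/USGS_LPC_CA_SoCal_Wildfires_B1_2018_LAS_2019/laz/USGS_LPC_CA_SoCal_Wildfires_B1_2018_" + !Name! + "_LAS_2019.laz"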
d. Then exported the table to a text file and used a free FTP download tool to mass-download all the USGS lidar laz files for the tiles that intersected the City of Redlands Limits feature class.
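Any FTP client works for this; for what it's worth, Python's standard library can do the mass download directly from the exported list (the file name laz_urls.txt is hypothetical):

import urllib.request

# one ftp:// URL per line, as produced in the export above
with open("laz_urls.txt") as f:
    for line in f:
        url = line.strip()
        if url:
            name = url.rsplit("/", 1)[-1]
            print("Downloading", name)
            urllib.request.urlretrieve(url, name)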

2. Used Convert LAS to convert the data from LAZ format to LAS; LAS is required to colorize the lidar. I got an error, but it was because the projection was not supported. Turning off the rearrange option let it convert from laz to las.

3. Looked at the projection in the reports in the downloaded metadata and found it was not supported because it used meters instead of feet. I modified a projection and then copied it with a simple ModelBuilder tool to give a projection file to each LAS file.

4. Evaluated the lidar by creating and populating a las dataset: ground, bridge decks, water, overlap, etc. were classified; buildings and vegetation were not.

5. Used Classify LAS Building with the aggressive option to classify the buildings. Classifying the buildings allows you to filter them in the future scene. It's also good for extracting building footprints, another operation I commonly do.

6. Used Classify LAS by Height to classify the vegetation: (3) Low Vegetation set to 2m, (4) Medium Vegetation set to 5m, (5) High Vegetation set to 50m. This makes Low Vegetation the non-ground points from 0 to 2m, Medium >2m to 5m, and High Vegetation >5m to 50m. This is done so you can turn off the vegetation in a scene.
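A hedged arcpy sketch of this step; the class code / height pairs mirror the values above, but the dataset name is hypothetical and the parameter names and order are from the Classify LAS By Height tool documentation as best I recall, so verify against your version before running:

import arcpy

arcpy.CheckOutExtension("3D")

# 3 = Low Vegetation (to 2 m), 4 = Medium (to 5 m), 5 = High (to 50 m)
arcpy.ddd.ClassifyLasByHeight(
    "redlands.lasd",             # hypothetical las dataset name
    "GROUND",                    # measure height from points classed as ground
    [[3, 2], [4, 5], [5, 50]],   # class code / upper height bound pairs
    compute_stats="COMPUTE_STATS")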

7. Used the ArcGIS Pro NAIP imagery service with the Template set to None, and then the Split Raster tool to download the area I needed based on the Redlands City Limits.

8. Created a mosaic dataset of the NAIP imagery. Applied a function to resample from 0.6m to 0.2m, and applied a statistics function with a 2-cell circle on it. This takes the coarse 0.6m (or 1m) NAIP imagery and smooths it for better colorization.

9. Colorized the lidar with the Colorize LAS tool. Set it to colorize RGB and infrared with bands 1,2,3,4 and set an output folder. The default puts a "_colorized" suffix on each file; I usually do not do this and simply name the output folder colorlas. Set the output colorized lasd to be one directory above, with the same name.

10. Added the lasd to Pro and reviewed the las files. Found that it covered the area, the lidar was colorized properly, and it had a good look to it. Set the symbology to RGB.

11. Used the Create Point Cloud Scene Layer Package tool with the output lidar to create an slpk file, then added it to ArcGIS Pro to make sure it was working correctly. Change the defaults as needed.

12. Used my ArcGIS Online account, or the Share Package tool, to load the slpk.

13. Publish the scene and view it online.

14. Add the lidar scene layers multiple times and use symbology or properties to show them the way you would like. You can see an example at the Chicago 3D scene. If you open it and save it as your own (logged in), you can access the layers in the scene to see how they are set up with symbology and properties.

Below is some work I helped the State of Kentucky with. The result is similar in some ways to the work of the neo-Impressionists Georges Seurat and Paul Signac, who pioneered the painting technique dubbed Pointillism.

Georges Seurat's Seascape at Port-en-Bessin, Normandy below:

Adding tree trunks in the background makes the point cloud scenes look just a little more real and helps viewers recognize that they are looking at a tree. I recently worked on a park area in Kentucky, supporting the Division of Geographic Information (DGI), to try to create tree trunks like the ones in the City of Klaipėda scene in Lithuania and then add sides of buildings as a test. I did not want the tree trunks to overwhelm the scene, just to be a background item to make the scenery more realistic. In the image and link to the scene, the red arrows point to the tree trunks.

I used raster functions applied to the range of elevation values from LAS Point Statistics as Raster. First I created a Range of Elevation Values raster using a 5 ft sampling value with LAS Point Statistics as Raster. Then I used raster functions to create a raster showing the high points, as shown below: (1) Statistics: 5x5 mean. (2) Minus: Statistics minus Range, Max Of, Intersection Of. (3) Remap: Minimum -0.1 to 0.1, Output 0, change missing values to NoData. (4) Plus: Range plus Max Of, Intersection Of. (5) Remap: 0 to 26 set to NoData.

I then used Raster to Points to create a point layer from the raster. I added a field and calculated it to be Height from gridcode, then added a field TreeTrunkRadius and calculated it to be (!Height!/60) + .5, which gave me a width to use later for the 3D polygons. I placed the points into the 3D Layers and set the Type to Base height, applying the Height field in US feet. I used an expression for the height in the Expression Builder, $feature.Height * 0.66, because I wanted the tree trunks to go up only 2/3 of the height of the trees. I used 3D Layer to Feature Class to create a line feature class and then used it as input to Buffer 3D, using the Field option for distance and TreeTrunkRadius as the input field. I then used Create 3D Object Scene Layer Package to output TreeTrunks.slpk, used Add Item on my ArcGIS Online account, and published it. Once online, I played with the colors to get it brown. This process could also have used the Trees from Lidar tool, but the NAIP imagery was from the fall and did not have the right NDVI reflectivity for that process.

To fill in the building sides, I classified the buildings using Classify LAS Building. I then used Extract LAS, with a filter on the lidar dataset point cloud, to extract only the points with the building class code. I published this and added it 9 times to a group in the scene. The first three building layers I adjusted up 0.5, 0 and -0.5 meters and left colorized with the imagery; this makes the roof look slightly more solid, as there were gaps. Second, I took the remaining 6 layers and set them to intensity color with offsets of -1m, -2m, -3m, -4m, -5m and -6m. This created the walls of the buildings. I changed the range of colors for the -3m layer to put a slightly darker line across the walls. I grouped all the layers together so they would act as one layer. You can also use the filter for buildings and colorize with Class Code, then set the class code color to whatever color you wish for the building sides. If you set it to modulate with intensity, this too changes the appearance, often giving shadowed building sides or what looks like windows at times.

Standard look before adding building sides and thickness to the roof:

With the illusion of walls added and roof thickened (Cumberland, KY):

Here's another view showing the building sides:

The sides of the buildings are an illusion: the colors do not match the real sides of the buildings, and the differences in lidar intensity do not show the true differences in the sides, like the perceived windows above. Like most illusions or representations, it tricks the eyes into thinking you are seeing the sides of the buildings. Sometimes, like most illusions or representations, it does not work from all angles, as with this building with a low number of points:

Overall, I think the representative sides help you identify what is a building and what is not. The colors of the building sides will not match reality, nor do they show true windows or doors. Doing so is usually a very costly and time-consuming process after generating 3D models of the buildings. You could still generate the 3D buildings with the Local Government 3D Basemap Solution for ArcGIS Pro, have them semi-transparent, and use them for selection and analysis without any coloring. You could also use rpks applied to the buildings to represent the 3D models more accurately, but this would again be a representation without very detailed data.

How to get lidar easily for a large area: here's a video on how to download high volumes of lidar tiles easily from the USGS for a project area, including how to get a list of the laz files to download. Once downloaded, use Convert LAS to take the files from LAZ format to LAS. The LAS format is needed to run classification, colorization and manual editing of the point cloud.

Covering the basics of Colorized Lidar Point Clouds, creating scene layer packages and visualizing:

Here's a video where I go through the process of colorizing lidar and what I look for, with timestamps below to jump to certain items (26 minutes long):

0:30: Talking about lidar input and some classification of lidar.

1:05 Adding imagery services and review of the imagery alignment to use for the colorization of lidar.

1:49: Adding the NAIP imagery service.

2:20: High resolution imagery services.

3:35: Talking about the colorizing trees with leaf off vs. leaf on imagery.

4:55: Comparing imagery, Sidewalks coloring the trees with leaf off.

6:00: Using Split raster to download imagery to use from a service.

8:25: Running Mosaic to New Raster on the split images downloaded.

10:50: Checking the downloaded mosaic.

13:52: Talking about the lidar: how big it is, how long it takes to colorize, download times from services using Split Raster, and the size of the scene layer package relative to the uncolorized lidar.

15:40: Adding the colorized lidar and seeing it in 3D.

16:40: Upping the Display Limit to allow more points to be seen in the 3D viewer.

17:00: Adding imagery and reviewing the lidar colorized.

17:56: Increasing Point Size of the points to better see the roofs, trees and sides of buildings in the colorized lidar. Talking about how color does not match for sides of buildings.

19:00: Looking at trees, shadows.

19:25: Looking at roofs and down into the streets.

20:10: Running Create Point Cloud Scene Layer Package.

21:30: Adding the scene layer package to Pro and seeing the increased speed due to its tiling and formatting. Reviewing it this way is faster.

22:30: Looking at it with different symbology settings: Elevation, Class, Intensity, Return (Geiger lidar does not have returns).

24:35: Looking at a single tree.

26:35: Showing the web scene

There is also this video, done a couple of years ago, that covers the topic in a different way.

Loading a higher resolution DTM (DEM) as the ground is often needed. In this link you can turn off the DTM1m_tif layer (the lidar ground DTM) vs. the World Terrain service to see the difference. In the US, NED is usually good enough that your colorized las will be very close, but sometimes your colorized buildings will go into the ground, and the lidar points near the ground will be under it or float on top of it. You can take your las files and, in ArcGIS Pro, see whether the ground points fall below or above the terrain. If the difference is too much, you need to publish your DTM. The 3D Basemap solution (called the Local Government 3D Basemap solution right now) can guide you through this process, and might in the future guide you through the colorization and publication of lidar. Here's a blog that goes through how to do it, and the help. The Local Government 3D Basemaps solution has tasks that can also walk you through publishing an elevation.

Important tools for building colorized lidar packages:

Edit LAS file classification codes - Every lidar point can have a classification code assigned to it that defines the type of object that has reflected the laser pulse. Lidar points can be classified into a number of categories, including bare earth or ground, top of canopy, and water. The different classes are defined using numeric integer codes in the LAS files.

Convert LAS - Converts LAS files between different compression methods, file versions, and point record formats.

Extract LAS - Filters, clips, and reprojects the collection of lidar data referenced by a LAS dataset.

Colorize LAS - Applies colors and near-infrared values from orthographic imagery to LAS points.

Classify LAS Building - Classifies building rooftops and sides in LAS data.

Classify Ground - Classifies ground points in aerial lidar data.

Classify LAS by Height - Reclassifies lidar points based on their height from the ground surface. Primarily used for classifying vegetation.

Classify LAS Noise - Classifies LAS points with anomalous spatial characteristics as noise.

Classify LAS Overlap - Classifies LAS points from overlapping scans of aerial lidar surveys.

Change LAS Class Codes - Reassigns the classification codes and flags of LAS files.

Create Point Cloud Scene Layer Package - Creates a point cloud scene layer package (.slpk file) from LAS, zLAS, LAZ, or LAS dataset input.

Share Package - Shares a package by uploading to ArcGIS Online or ArcGIS Enterprise.

Here's some colorized lidar to look at:

Helsinki point cloud (Finland) Scene Viewer

Barnegat Bay (New Jersey, US) Scene Viewer

City of Denver Point Cloud (Colorado, US) Scene Viewer

City of Redlands (California, US) Scene Viewer - has a comparison of high resolution vs. NAIP colorized lidar.

Kentucky Lidar Test (Cumberland, Kentucky, US) Scene Viewer - fall imagery applied to trees, building sides added, tree trunks added.


gxapi.DH_MASK_APPEND = 0
gxapi.DH_MASK_NEW
gxapi.DH_PLOT_PLAN = 0
gxapi.DH_PLOT_SECTION = 1
gxapi.DH_PLOT_STRIPLOG = 2
gxapi.DH_PLOT_HOLE_TRACES = 3
gxapi.DH_PLOT_3D = 4
gxapi.DH_PLOT_SECTION_STACK = 5
gxapi.DH_PLOT_SECTION_FENCE = 6
gxapi.DH_PLOT_SECTION_CROOKED = 7


Arthur's Feature Extraction from LiDAR, DEMs and Imagery

by ArthurCrawford

7/26/2019: Added how to make a tree inventory 3D without lidar or height (hint DBH).

10/24/2018: Added link and instructions to create 3D buildings using lidar from Microsoft Building Footprints US.

10/4/2018: Added an update to the tools in 3D mapping with Lidar Point Clouds below, called ArthursLidarBuildingExtractionTools_10_04_2018.zip. This fixes issues with the unclassified approach to building collection from lidar, adds a tool to clip extracted buildings to parcels, and adds a Jagged tool which helps to identify jagged buildings that come as a result of regularization. These tools are improvements that I have tested with many cities/counties for building extraction. These updates will eventually be applied to 3D mapping with Lidar Point Clouds. For creating 3D buildings from the extracted 2D building footprints, please use the latest version of the Local Government 3D Basemaps solution.

9/28/2017: Please see 3D mapping with Lidar Point Clouds; these are ArcGIS Pro tools, example data and workshop exercises from the IMF workshop we developed for this year's Imagery and Mapping Forum before the Esri UC. This covers extracting building footprints from classified lidar for buildings, extracting building footprints from unclassified lidar for buildings, and the 3D Basemaps for taking the footprints to 3D and adding trees. This replaces the methodology Geoff Taylor showed in Extracting Building Footprints from Classified LiDAR using some of my older techniques.

12/5/2016: Added extensively to the Regularize Building Footprint tool suggestions.

12/8/2016: Changed 1200 to 12000 in the Regularize Building Footprint tool suggestions.

12/21/2016: Added Clean Extracted Buildings Using Regularize Building Footprint Tool

12/22/2016: Corrected error in models for Clean Extracted Buildings Using Regularize Building Footprint Tool due to bug in Regularize Building Footprint tool (inputting empty featureclass).

01/03/2017: Issue found with Clean Extracted Buildings Using Regularize Building Footprint Tool not working if there are no large buildings (over 25,000 sq. feet); looking into it.

This blog will be dedicated to feature extraction from raster or point clouds to vector. I have been generating content for the World Topo Map Basemaps including the Contours, Spot Elevations, Vegetation and even buildings from Lidar. Lately, I have been focused on extracting buildings and trees from Lidar and/or imagery. I wrote a tool to extract Trees from Lidar 5 years ago and it's published here.

Instructions to create 3D Buildings from Microsoft US Building Footprints:

Here's a link to see it in 3D: Scene Viewer

Semi-automatic extraction of building footprints from Imagery

Attached is the tool for ArcGIS Pro 1.3 for Extract Buildings From Imagery. You will need 3D Analyst and Spatial Analyst for this tool. It takes imagery segmented with the Segment Mean Shift tool and converted to vectors. You then select the vectors and run the tool. It uses a model to convert the raw raster vectors into corrected buildings and append them to a feature class. Below is an example of segmentation vectors selected and then run through the tool to get building footprints. I suggest selecting multiple buildings and then running the tool. The tool is attached at the bottom as ExtractBuildingsFromImageryPro1_3v4.zip. After unzipping the data, be sure to use Define Projection on the building featureclass in the output.gdb to match the projection of your imagery.

There was an issue with the pathways not coming through correctly. I believe this is now fixed, as the workspace is defined as in_memory. If you get an error for a pathway not found, open the model and change the pathways in the model from C:\Users\student\Documents\ArcGIS\Projects\ImageryExtraction\ImageryExtraction.gdb\seg to in_memory\seg. You will have to do this for all the temporary files in the model. Thanks to those who reported this error.

  1. Download the tool from my blog.
  2. Unzip the tool.
  3. Get high resolution imagery of the area you wish to extract buildings from (6 inch is best).
  4. Open ArcGIS Pro (1.3 or better) Map
  5. Add your imagery to the Map
  6. Under the Insert Tab, use Add Toolbox to add the ExtractBuildingsFromImageryPro1_3.tbx toolbox.
  7. Run the Imagery Segmentation to Vector tool against your imagery. This takes time (5 minutes for a 349MB file). If four-band, enter 4 2 1 into the Band Indexes.
  8. Change the symbology of the output to red and put on top of the segment mean shift image. It should look like this:
  9. Turn off the Segmentation image and with the original imagery underneath, select polygons for houses you want to convert to building polygons.
  10. The first time, run Define Projection on the building featureclass to match your imagery.
  11. Open the Segmentation Vectors to feature tool and run after setting your parameters.
  12. Inspect your results:

Regularize Building Footprint Tool - Clean up raw extracted features extracted from lidar or imagery.

Raw lidar- and raster-extracted buildings need to be cleaned up, and the Regularize Building Footprint tool does a good job, but it's difficult to know what to enter in the tool's parameters. I have been working on models to do so, and now have added a model to run the Regularize Building Footprint tool against raster-extracted buildings, called Clean Extracted Buildings Using Regularize Building Footprint Tool.

The Clean Extracted Buildings Using Regularize Building Footprint Tool requires ArcGIS ArcMap 10.4 with ArcGIS for Desktop Advanced and the 3D Analyst extension. This first released version is derived from the processes below. The input for this tool can be rasterized buildings auto-extracted from imagery, digital maps, or hard-copy scanned maps. I have used similar processes to extract buildings from OS maps, rasters generated from lidar, and even imagery classified for buildings. This goes beyond the tool above in cleaning up features using the Regularize Building Footprint tool.

Time to run both operations (Warning: The tool will use most of the computing power on your system):

Intel Core i7-4600U CPU @ 2.10 GHz (2.70 GHz) with 8 GB memory (64-bit): 25 minutes for 10,000 features

Intel Xeon E5-1620 0 CPU @ 3.60 GHz with 32 GB memory (64-bit): 8 minutes for 10,000 features

Below, in white, are the original raw lidar-extracted buildings, and in aqua green the CircleBuildings extracted. The red are the Buildings output. The buildings under the CircleBuildings will need to be deleted.

1. To use this tool, first run a Dissolve on your features with multipart turned off.

2. Download and unzip the attached file Clean Extracted Buildings Using Regularize Building Footprint Tool.zip

Run the "1. Regularize Building Footprints For Circle (Feet Projection)" tool inside first.

3. Run the "2. Regularize Building Footprints All Non-Circle (Feet Projection)" next.

4. Review your CircleBuildings output first; there should not be that many, and you need to check whether they are actually circles. If not, delete them. This should collect tanks, round buildings, gazebos, etc. If a building is not completely round, but is like a hexagon or something similar, try using the generalize button in editing with a 1, 2, or 3 to get the sides straight as a hexagon or octagon. Add additional circle structures you see; often they are near each other. Save when finished.

5. Add your Buildings featureclass and select by location those with their centroid in the CircleBuildings, then delete them.

6. Review your buildings. If you have general extraction issues, please look at the type of building. Go into the "2. Regularize Building Footprints All Non-Circle (Feet Projection)" tool and adjust those settings using the Regularize Building Footprint tool help.

Included is some sample data and the output before being reviewed manually. You can see some stair-stepping in the features comes through. This is due to the tool only being set to handle right and diagonal angles for the Large and Medium building types (see the field called Type). The TotalProcess field in the Buildings featureclass shows you the processes used. Example: "Regularized, Simplified, Simplified, Simplified, Simplified" means that it went through the Regularize Building Footprint tool with an output of Regularized, then was simplified using the Simplify Buildings tool with tolerances 2, 4, 6 and 8.

* Issue found with Clean Extracted Buildings Using Regularize Building Footprint Tool not working if there are no large buildings (over 25,000 sq. feet), looking into how to fix this.

Manual process, outlined to help you better understand what the tool is doing:

First start with extracting the circles from your data. I usually run a county or two at a time with this.

Run the circles first, eight passes through Regularize Building Footprint, with the tolerances and definition queries below:

Tolerance 1.6 with the raw vectors using a definition query of Shape_Area <= 220 AND Shape_Area > 175

Tolerance 1.8 with the raw vectors using a definition query of Shape_Area <= 240 AND Shape_Area > 220

Tolerance 2 with the raw vectors using a definition query of Shape_Area <= 1400 AND Shape_Area > 240

Tolerance 4 with the raw vectors using a definition query of Shape_Area <= 2500 AND Shape_Area > 1400

Tolerance 5 with the raw vectors using a definition query of Shape_Area <= 3000 AND Shape_Area > 2500

Tolerance 6 with the raw vectors using a definition query of Shape_Area <= 8000 AND Shape_Area > 3000

Tolerance 8 with the raw vectors using a definition query of Shape_Area <= 12000 AND Shape_Area > 8000

Tolerance 12 with the raw vectors using a definition query of Shape_Area > 12000

Merge the output to BuildingsCircle and then apply a definition query of STATUS = 0. This should work to auto extract circles.

I suggest reviewing the circles to make sure they are correct, deleting any that are not.

You will want to start editing on the merged circles, select all, and use generalize with an input of 0.1 foot.

This turns the perfect circles into ones with vertices, which are used as input for Local Government Scenes.

Select the raw extracted buildings that are not circles using Select by Location. Run a Dissolve with Create multipart features clicked off.

Divide into three groups by size below and run through Regularize Building Footprint with the values below:

• Small: Shape_Area < 5000 sq. feet. RIGHT_ANGLES, Tolerance 2, Densification 2, Precision 0.25

• Medium: Shape_Area >= 5000 AND Shape_Area <= 25000. RIGHT_ANGLES_AND_DIAGONALS, Tolerance 3, Densification 3, Precision 0.25, Diagonal Penalty 1.5

• Large: Shape_Area >= 25000. RIGHT_ANGLES_AND_DIAGONALS, Tolerance 4, Densification 4, Precision 0.25, Diagonal Penalty 1.5
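To make the three passes concrete, here is a hedged arcpy sketch of this non-circle step. The input name RawNonCircleBuildings is hypothetical, and the keyword names and parameter order for the Regularize Building Footprint tool are from memory, so verify them against the tool reference for your version before running:

import arcpy

arcpy.CheckOutExtension("3D")

# (where clause, method, tolerance, densification, diagonal penalty),
# taken from the Small/Medium/Large settings listed above
groups = [
    ("Shape_Area < 5000", "RIGHT_ANGLES", 2, 2, 1.5),
    ("Shape_Area >= 5000 AND Shape_Area <= 25000",
     "RIGHT_ANGLES_AND_DIAGONALS", 3, 3, 1.5),
    ("Shape_Area >= 25000", "RIGHT_ANGLES_AND_DIAGONALS", 4, 4, 1.5),
]

arcpy.management.MakeFeatureLayer("RawNonCircleBuildings", "bldg_lyr")
for i, (where, method, tol, dens, diag) in enumerate(groups):
    arcpy.management.SelectLayerByAttribute("bldg_lyr", "NEW_SELECTION", where)
    arcpy.ddd.RegularizeBuildingFootprint(
        "bldg_lyr", "RegularizedGroup%d" % i, method,
        tolerance=tol, densification=dens, precision=0.25,
        diagonal_penalty=diag)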

After running these groups, merge the results to BuildingsNonCircle and run Dissolve on the features. Run an Eliminate Polygon Part on the features with at least 20 square feet and Eliminate contained parts only; this gets rid of interior holes in the cleaned polygons. Run a Simplify Buildings with Simplification Tolerance 2, and then again with that output as the input and a Simplification Tolerance of 4.

Merge your circle buildings and the BuildingsNonCirclesb4 to get your finished buildings. Start reviewing buildings against imagery/lidar to find errors (there will always be errors it seems like).

Extracting Buildings and Trees from Lidar

Here is also a tool for Building and Tree Identification from Lidar developed by Joe McGlinchy and Roslyn Dunn. It's a work in progress and only works with ArcGIS Desktop 10.3.1 currently. It collects building footprints and circle trees that can be run through the Regularize Building Footprint Tool above.

Another tool I developed for extracting trees, using lidar and NAIP imagery, is here. A version for Pro is attached (Trees From LIDAR and NAIP_Pro).

Adding Color to Roofs of Buildings for display in ArcGIS Pro (Scene Viewer)

Adding color to roofs in ArcGIS Pro really makes the scenes look more realistic with Local Government Scenes. Over a year ago, I developed a model to add hex colors to buildings as a field to display in ArcGIS Pro. With the Local Government Scenes process, you can use a field with a hex color to get colors added for display in ArcGIS Pro using their layers. I often use two sources of imagery from different times: one for the roof color, then another for the side color (add another field called side color). Though using it for the side color is not ideal, it's an easy way to color buildings en masse.

1. Clip leaf off high resolution imagery to the area where you have buildings you wish to add color.

2. Resample to 3ft or so, this will allow the Segment Mean Shift tool to run much faster.

3. Run Segment Mean Shift on the leaf-off imagery with defaults parameters.

4. Run the Add Hex Color From Imagery to Roofs 10.4 or ArcGIS Pro Add Hex Color to Roofs (attached below).

5. Download the Old Local Government Scenes layer files (attached below zipped) and unzip.

6. Move your buildings to the 3D in scene in Pro and click on properties to set elevation to absolute.

7. In Pro, run the Apply Symbology from Layer using the LOD2BuildingShellsFloors_feet.lyrx or LOD2BuildingShells_meters.lyrx in the OldLocalGovernmentScenesLayerFiles directory that you unzipped. The new version of the Local Government Scenes does not have color for building roofs or building sides, this will likely be changed in future versions.

8. Set the roof color to your hex color field in the symbology and apply.

Below is St. Louis in 3D with color added to the buildings. The Arch was the only feature drawn. All building colors were applied through imagery using the process above. Trees were extracted using the Trees From LIDAR (here). Buildings were segmented using Segment Mean Shift tool on a reclassified DEM clipped to the building footprints and then converted to polygons. This gives the 3D buildings multiple heights for different sections. Add Hex Color From Imagery to Roofs was run on individual polygons for the different roof parts.

Here's another example, over St. Charles County, MO. They recently made their data open data through MSDIS, and the next weekend I processed 160,000 buildings to 3D using Local Government Scenes, with a model around it to run in batch (Scene Viewer). For the building side colors, I used this formula to calculate a color off the roof colors. The side colors are not correct, but it gives a more realistic view.

"#" + hex(int(((int(!HexColor![1:3],16))-random.randrange(30)+20)*-1))[-2:] + hex(int(((int(!HexColor![3:5],16))-random.randrange(30))*-1)+20)[-2:]+ hex(int(((int(!HexColor![5:7],16))-random.randrange(30))*-1)+20)[-2:]

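That expression is hard to read and easy to get wrong in the Field Calculator. Here is a cleaned-up, pure-Python sketch of what I take to be the same idea (darken each RGB channel by a random amount, then lift it back up a bit); the function name and clamping are mine, not part of the original formula:

import random

def jitter_hex(hex_color, shift=30, lift=20):
    # hex_color like "#8f9a6b"; returns a randomly varied "#rrggbb"
    out = "#"
    for i in (1, 3, 5):
        channel = int(hex_color[i:i + 2], 16) - random.randrange(shift) + lift
        channel = max(0, min(255, channel))  # clamp to a valid byte
        out += "%02x" % channel
    return out

print(jitter_hex("#8f9a6b"))  # e.g. "#929f78"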
Another option is to use real building-side colors. To do this, start with an image search in a web browser for houses in the area you are doing. You can then capture the images, georeference them side by side in ArcMap, and mosaic them together. Run Segment Mean Shift on the mosaicked image and then create a featureclass to put fake buildings over the areas you want to get colors from. Run the Add_Hex_Color_From_RGB_Imagery_to_Roofs_10.4 tool on those buildings and then export the result to a table. Look at how many records are in the table, add a field to the buildings called join, and calculate it with a random number up to the number of records in the table (import random in the code block; random.randint(1,100), 100 being the number of records). Then add a field to your buildings called side color and calculate it after doing a join with the table using the objectid and join fields. Use this field as input to the symbology of the sides of the buildings. An example of this is below for Fort Bend, TX, where we extracted the building footprints from lidar, ran them through Local Government Scenes, and then applied color to the roofs from imagery and the side colors randomly.

Fort Bend County, TX, in 3D:

Taking your Tree Inventory and Displaying it in 3D in ArcGIS Pro:

A while back, a local park asked me what to do with the new tree inventory they were presenting to their board, which had been delivered as a point featureclass with genus. I suggested showing it off in 3D using ArcGIS Pro. Below are instructions for adding a height to the tree points from lidar and then calculating a field for the genus from the scientific name. By calculating the height and a field with just the genus, you can display the trees in 3D in Pro easily. Here's a link to all the trees available:

With the trees driven by genus, a Silver Maple (Acer saccharinum) will be shown in the field as just "Acer" to drive symbology. With the link above, you can see that Acer will be shown as a Sugar Maple tree. It's usually close enough to give a good representation of the tree type.

Looking through the research, it seems getting the height of a tree from the DBH does not work great. So I would suggest using the lidar, or the height if it's in the tree inventory. The only issue I came across with this was newly planted trees that were not in the lidar, or were smaller when the lidar was flown.

  1. Review the points; you will likely want to reproject them to the same projection as your lidar.
  2. The first step was to download the LAS files of the area (the lidar data format).
  3. The second step was to create a las dataset with the las files.
  4. Next we used LAS Point Statistics as Raster with the Z_Range method to get a raster of the heights of the trees.
  5. I ran a Focal Statistics tool with a Max of a 3-cell circle to get the spread of the highest point so that it intersects the point.
  6. Ran Add Surface Information to get the Z from the raster onto the tree inventory points; this is the height of the tree.
  7. Added a field called Genus and calculated the field from the scientific name. ArcGIS needs an input of the genus only, so in ArcGIS Pro I used this Python expression on the scientific name: !BOTANICAL!.split(' ', 1)[0], or in 10.4 the VB script: Split([BOTANICAL], " ", 2)(0)
  8. Opened an ArcGIS Pro scene and added the trees as a Preset layer (Realistic Trees) using the Genus field and the height field; this gives a more correct symbology for the trees. It does not work with all of your trees, but many are supported. Again, you can see the supported ones at this link: Supported tree genus types—ArcGIS Pro | ArcGIS for Desktop
  9. Add imagery and review your trees to make sure they look correct. A common error is that the height may be in meters or feet and the wrong one was chosen.

Here's some data I tried it with from Amherst, MA, with the source data and a processed subset attached, called SubsetOfAmherstMATreeInventory2010.zip:

Another one to look at is Tower Grove Park in St. Louis, MO. The scene has over 6,000 trees rendered. Some trees do not come in with the correct genus and show as white.

I received some tree inventory data from Bellefontaine Cemetery here in St. Louis, MO. I used a join with a table of similar trees, mapped to the scientific genus names, that the Missouri Botanical Garden has done for their trees, to render them in 3D. Esri supports about 60 genera of trees for 3D rendering. Then I used the lidar to get a simple formula to convert the DBH (Diameter at Breast Height) to tree height. It's not accurate, but it rendered the trees fairly well. There are very complex formulas out there I might investigate further to tie DBH to height; this might allow many tree inventories to become 3D easily. The table to calculate a similar genus is below with the other downloads, called MissouriBotanicalGardenSubstitutesForTreeGenus.txt.

Scene showing trees made 3D using a formula applied to the DBH ((DBH x 2) + 20). 5,150 of the 9,153 trees' genera were supported among the 63 genus types supported by Esri ArcGIS Pro. 2,120 trees did not match the supported genera for rendering, but a table supplied by the Missouri Botanical Garden was used for similar tree genera. The trees in white were not supported or mapped over for genus symbology; this included 1,883 trees and shrubs. To calculate Genus, I used the SPP field (scientific name) with !SPP!.split(' ', 1)[0]
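As a concrete illustration of the two calculated fields (the function names are mine; the formulas are the ones above):

# Height from DBH, per the (DBH x 2) + 20 rule of thumb
def height_from_dbh(dbh):
    return (dbh * 2) + 20

# Genus from the scientific name in the SPP field
def genus_from_spp(spp):
    return spp.split(' ', 1)[0]

print(height_from_dbh(10))                  # 40
print(genus_from_spp("Acer saccharinum"))   # Acer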

Scene Viewer - Be sure to turn on the lidar to see how close or not close the trees are in height.

Special thanks to St. Charles County, Monica Balestreri, MSDIS, Michael Owen, Town of Amherst (MA), Dan Hedges, Joe McGlinchy, Roslyn Dunn, Thomas Maurer, Hua Wei, Jeff Liedtke, J. D. Overton, Khalid Duri, David Altergott, Tower Grove Park, Andy Watson and the others who have helped me.


Grid Definitions

A grid definition builds upon a spatial reference by also defining the spatial extent of an image. Thus, it provides a complete picture of the geographic positioning of an image or a desired study area. A grid definition is only a definition, not a spatial reference or image.

When you need to co-register a time series of images or create a layer stack of bands with different resolutions, a grid definition specifies the common coordinate system, spatial reference, and spatial extent that they will share.

A complete grid definition can be specified using any of the following combinations:

  • Coordinate system + spatial extent + pixel size
  • Coordinate system + spatial extent + number of rows + number of columns
  • Coordinate system + pixel size + number of rows + number of columns
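To see why each combination above is complete, here is a minimal sketch (hypothetical helpers, not part of any particular API) showing how extent plus pixel size determines the number of rows and columns, and how extent plus rows and columns determines the pixel size:

import math

def shape_from_extent_and_pixel(xmin, ymin, xmax, ymax, pixel_size):
    # extent + pixel size -> number of rows and columns
    cols = int(math.ceil((xmax - xmin) / pixel_size))
    rows = int(math.ceil((ymax - ymin) / pixel_size))
    return rows, cols

def pixel_from_extent_and_shape(xmin, ymin, xmax, ymax, rows, cols):
    # extent + rows/columns -> pixel size in x and y
    return (xmax - xmin) / cols, (ymax - ymin) / rows

print(shape_from_extent_and_pixel(0, 0, 3000, 2000, 10))  # (200, 300)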

The following tools allow you to specify a grid definition using any of these combinations. They can also create a grid definition automatically based on the area where multiple images overlap (geometric intersection) or the combined spatial extent of multiple images (geometric union).


Course file compatibility between Steam and Skytrak TGC2019 version

I am having problems getting a course design I'm working on in the Steam PC version of TGC2019 Designer over to my Skytrak simulator version, so I can play the course on my simulator. This is a LIDAR course initially created using Chad's great tool. I had two reasons for switching to the Steam version: 1) I use a different computer in my garage with my Skytrak device, and 2) I couldn't get Chad's tool to load the course I saved from the Skytrak version. That was my first red flag that the .course file format was not compatible between the two versions of TGC2019.

Now I'm trying to take a draft of the course design from my laptop (Steam version) and import it into my Skytrak version. The Designer in the Skytrak version doesn't list the file that I put in the same directory as a new course file saved from the Skytrak version. So there are definitely compatibility issues.

Has anyone else run into this? If so, is there any way around this? The only other solution I can think of is to work on Chad's code (I am a Python programmer) and try to get it to support both file formats.



Real-time 3D visualization of geospatial data with Blender

This is material for the FOSS4G workshop held in Boston, MA on August 14, 2017.

What if your geospatial data and simulations, like flooding, fire-spread and viewshed computations, could be converted on-the-fly into realistic, interactive and immersive 3D worlds, without the need to deal with overly complicated or proprietary 3D modeling software? In this hands-on workshop we will explore how to automate importing and processing of various types of geospatial data (e.g., rasters, vectors) using Blender, an open-source 3D modeling and game engine software. We will start with a brief and focused introduction to the Blender graphical user interface (GUI), its Python API, and the GIS and Virtual Reality addons. Once we import our GIS data into Blender, we will go over techniques (both with the GUI and the command line) to increase the realism of our 3D world through applying textures, shading, and lighting. To make our work reusable for different projects, we will automate all importing and processing workflows using Python. Finally, we will show how to publish it online to share it with the world.

Part 1. Basics of Blender interface and functionalities

Part 2. Processing, shading and rendering geospatial data

Part 3. Real-time 3D modeling and coupling

Part 4. Publish your work online using Blend4Web

Section Duration
Part 1 8:00-9:45
Break 15 min
Part 2 10:00-11:00
Part 3 11:00-11:30
Part 4 11:30-12:00

I. What is Blender and why use Blender?

Blender is open-source 3D modeling, rendering and game engine software. You can create photorealistic scenes and life-like animations with it. The feature that makes Blender highly suitable for geospatial visualization is its ability to import various georeferenced data, thanks to the BlenderGIS addon. Almost every operation done in the Blender interface can be scripted in the Python scripting environment, allowing you to automate or batch process your 3D modeling workflow. Moreover, using the Blend4Web or Sketchfab addons, you can publish your geospatial models online so that everyone can interactively explore or download your work.

II. Basic components of the Blender interface

Blender has numerous components and features that, thanks to its open-source community, are growing every day. Covering all aspects of the software itself would require several lessons. The purpose of this section is to provide a brief introduction to Blender's graphical user interface and some of the features that are essential for working with geospatial data and will be used throughout this tutorial. We will specifically introduce the following components: Areas, Editors, Tabs, Headers, Panels.

Blender's application window can be flexibly arranged and divided into a number of Areas. An area contains the workspace for a particular type of editor, like a 3D View editor or an Outliner. In the figure above you can see the application window divided into five areas, each assigned to an editor.

Editors are responsible for displaying and modifying different aspects of data. Imagine editors as full-fledged applications, each specialized for specific tasks like changing data properties, image editing, video editing, animation design, and game design. You can assign an area to a specific editor using the Editor Type selector, the first button at the left side of a header (figure below, left). Every area in Blender may contain any type of editor, and it is also possible to open the same type multiple times.

Tabs are overlapping sections in the user-interface. Tabs can be vertical (Tool Shelf) or horizontal (Properties Editor, User Preferences).

Another common feature is the Header, which contains menus and commonly used tools specific to each editor. It is a small horizontal strip with a lighter gray background, sitting either at the top or the bottom of the area.

Finally, the smallest organizational unit in the user interface is a Panel. You can usually collapse panels to hide their contents. They are used in the Properties editor, but also, for example, in the Tool Shelf and the Properties region. In the image below on the right you can see three panels, one expanded and the rest collapsed.

Above: Editor Type selector (left), a toolbar with tabs (middle), toolbar panels (right). Below: a header with tabs.

Now that you have some general ideas about the interface, we will review some of the commonly used editors.

The 3D View is the visual interface to the 3D data and scene, with numerous functionalities for modeling, animation, texture painting, etc. Unlike the 2D environment of GIS software, where you can only navigate in the x and y directions, the 3D viewport allows full control over viewing angle, depth, size, etc. You can press and hold the mouse scroll (middle click) button to change the viewing angle (orbit around), shift-drag to pan, and scroll to zoom in and out.

Now note the panel on the left side of the region, called the Tool Shelf, which has a variety of tools for 3D editing. Newly installed addons also appear in this toolbar. Now notice the bottom Header. It includes menus for adding and editing objects, as well as viewing and shading options.

3D view header (retrieved from Blender manual)

The Header's View menu allows you to select a specific viewpoint such as top, left, or different perspectives. Each of these commands has a keyboard shortcut; for example, you can press Numpad 7 (if you have a full keyboard) to switch to the top view.

The Add menu provides a list of different types of 2D and 3D objects that can be added to a scene.

The Object Interaction mode menu lets you work with different aspects of your data. In this tutorial we focus on Object mode and Edit mode. Edit mode allows you to access the lower-level structure of your object, like faces and vertices. In the examples we complete in this tutorial, we will use some of these options to refine the surface model. It is important to get familiar with the 3 core elements: Faces, Edges and Vertices. You can select these elements by clicking on their corresponding icons.

On the right side of the interaction mode is the viewport Shading mode, which you can use to choose the visualization and viewport rendering method. Solid mode is the default and shows objects with solid faces, but without textures and shading. Texture mode shows the object with textures. Material mode is a fast approximation of the complete material, including texture and shading. Rendered mode enables real-time rendering, which computes the near-to-final product on the fly as you interact with the object (with accurate materials and lighting).

Basic object selection and interaction

Objects are basically everything you see in the 3D View. They include 3D objects, lights, cameras and more. You can select any object in the scene with a right-click. Selected objects are highlighted in orange. Use the 3 axes (i.e., handles) to move the object in your preferred direction. To select multiple objects, press and hold the Ctrl key and right-click objects to add them to your selection. To unselect, hold Shift and right-click the object. You can move (grab) objects by pressing G, rotate them by pressing R, or scale them with the S key. You can constrain any transformation to a specific axis by pressing X, Y, or Z. You can delete an object by selecting it, pressing the Delete key, and confirming.

As its name suggests, the Outliner lists and organizes the scene objects. From there you can set the hierarchy and visibility of the 3D objects, or lock them if needed. You can also select and activate objects by clicking their names in the list. The figure below shows the Outliner listing three objects (Camera, Cube, and Lamp), with the Lamp object selected.


The Python Console is a useful editor for testing and executing short commands, which can then be integrated into larger workflows. The Blender modeling and gaming modules are already loaded in the Python Console, so you can test your code snippets without the extra effort of importing the modules.


Python console (retrieved from Blender manual)

Example 1. Simple object operations using the Python Console.

• Call the Cube object and print its location
  • Copy and paste the individual command lines below into the console and press Enter
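The exact console lines may vary; here is a minimal sketch (bpy is already loaded in the console, and the default Cube object is assumed to exist):

    import bpy                       # already available in the Python Console

    cube = bpy.data.objects['Cube']  # fetch the default Cube object
    print(cube.location)             # print its location vector
    cube.location.x += 1.0           # nudge it one unit along the X axis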

The Text Editor allows you to edit a Python script and run it inside Blender. Press the + icon to start a new file and click Run Script to execute your code. Unlike the Python Console, in the Text Editor you need to import the modeling and gaming modules yourself.

Example 2. Batch processing simple object operations using the Text Editor

• Create a matrix of 20 by 20 cubes with varied size and location.
  • In the Text Editor, click on the + icon to create a new text file
  • Copy and paste the snippet below and click on the Run Script button
  • The results should look like the figure below
• Delete all cube objects, then add a Monkey object and a Plane object (see the second part of the snippet below)
  • Open a new text window or delete the contents of the existing one (select the content with Ctrl + A and press Del)
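A sketch of one way to write this snippet with the Blender 2.7x Python API (the grid spacing and size ranges here are arbitrary choices):

    import bpy
    from random import uniform

    # Build a 20 x 20 matrix of cubes with varied size and location.
    for i in range(20):
        for j in range(20):
            bpy.ops.mesh.primitive_cube_add(location=(i * 3, j * 3, uniform(0, 2)))
            bpy.context.object.scale = (1, 1, uniform(0.5, 2.0))

    # Second part: delete all cube objects, then add a Monkey and a Plane.
    for obj in bpy.data.objects:
        obj.select = obj.name.startswith('Cube')   # 2.7x selection API
    bpy.ops.object.delete()
    bpy.ops.mesh.primitive_monkey_add(location=(0, 0, 0))
    bpy.ops.mesh.primitive_plane_add(radius=10, location=(0, 0, 0))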

The Properties Editor allows you to modify the properties of the scene, rendering settings, object transforms, and material or texture properties. The components that we will work with in the following examples are the Object, Material, and Texture properties.

Note: The Properties Editor's interface changes dynamically according to the selected object. For example, if you select a light, a small sun icon appears for setting the light properties; similarly, you must select a camera to see the camera tab and modify its properties.

The Object properties tab allows you to transform the location, orientation, and scale of the object, along with its display properties. You can use numeric input for the transformation parameters.

Example 3. Basic object transformation using the Properties Editor

• Make sure that the Suzanne object is selected. It should be highlighted in the Outliner
• Go to Properties Editor ‣ Object tab ‣ expand the Transform panel
• Type 3, 2, 4 for the X, Y, Z Location parameters, respectively
• Change the Rotation and Scale parameters to see how they affect the object
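If you prefer the scripting method, here is a minimal sketch of the same transformation (assuming the object is named 'Suzanne'):

    import bpy

    suzanne = bpy.data.objects['Suzanne']
    suzanne.location = (3, 2, 4)            # the X, Y, Z values typed above
    suzanne.rotation_euler = (0, 0, 0.785)  # rotation is given in radians
    suzanne.scale = (1.5, 1.5, 1.5)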

The Materials tab allows you to assign or change an object's material. You can add and remove materials, or use the material browser to assign previously created materials to the object. In this tutorial we briefly introduce two basic components of materials: shaders and textures.

Shading (or coloring) allows you to adjust the base color (as modified by diffuse and specular reflection) and the light intensity. You can also assign a Texture to an object, which is called texture mapping. Texture mapping is often used to add detail to surfaces by projecting images and patterns onto them. Through the following examples we practice simple shading and texture mapping.

        Example 4. Assigning simple Shaders and Textures

        • Shaders
  • From the Outliner, make sure that the Suzanne object is selected
  • Go to Properties Editor ‣ Material tab ‣ click on the + New button to create a new material
  • Double-click on the material name (e.g., Material.001) and change it to Mymat
  • Expand the Preview panel to see a live preview of the material as you change it
  • Change the Color parameter to red
  • Go to the 3D editor's bottom Header ‣ Viewport Shading ‣ Rendered to see the object render in real time
  • Change the color to yellow
  • Click on the Diffuse BSDF field in front of the Surface parameter and select Glass BSDF
  • Now try the Emission BSDF and Glossy BSDF shaders while the viewport shading is on Rendered mode to see their effect on the rendering. Your material preview and scene rendering should look like the figure below

          Left to right: Diffuse BSDF , Glass BSDF, Glossy BSDF, Emission
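The same material setup can be scripted; this sketch (Blender 2.7x / Cycles API) creates the material and swaps the default diffuse shader for a glass shader:

    import bpy

    mat = bpy.data.materials.new('Mymat')
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    nodes.remove(nodes['Diffuse BSDF'])        # drop the default shader
    glass = nodes.new('ShaderNodeBsdfGlass')   # try Emission/Glossy types here too
    out = nodes['Material Output']
    mat.node_tree.links.new(glass.outputs['BSDF'], out.inputs['Surface'])
    bpy.data.objects['Suzanne'].active_material = mat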

          • Textures
• While the shader is still on Glossy BSDF, click on the radio button in front of the Color parameter. A widget with several columns will appear. From the Texture column, select Voronoi to see how the texture impacts the rendering.
• Now try the Gradient texture. Your material preview and scene rendering should look like the left two columns in the figure below.

            Left to right: Gradient texture, Voronoi texture, Glossy BSDF, Mix Shader

For creating more sophisticated materials you can use the Node Editor, a visual interface for viewing and creating an interconnected network of nodes. In this network, each individual node performs some operation on the material, changing how it will appear when applied to the mesh, and passes it on to the next node. In this way, very complex material appearances can be achieved.

Example 5. Set up a Mix Shader using the Node Editor. In this example we mix a glossy shader with a diffuse shader to make a composite material.

• Right-click on the Monkey object (Suzanne) to select it
• Switch the Python Console editor (bottom left area) to the Node Editor (figure below, left)
• In the Node Editor you will see the nodes that are already set up: the Glossy shader node's output is connected to the Surface input of the Material Output node.
  We will now add two other nodes, a diffuse shader and a mix shader.
• From the Node Editor's bottom Header ‣ Add ‣ Shader ‣ Diffuse BSDF
• From the Node Editor's bottom Header ‣ Add ‣ Shader ‣ Mix Shader. You should see both nodes added to your node editor
• Change the color value of the Diffuse node to R: 0.075, G: 0.35, B: 0.50
• Disconnect the Glossy BSDF output from the Surface input
• Connect the BSDF outputs of both the Diffuse and Glossy shaders to the inputs on the left side of the Mix Shader
• Connect the Mix Shader's output (on the right side) to the Surface input of the Material Output node (figure below, right)
• With the Fac parameter, you can adjust the mixture level
• Your material should look like the right column of the figure above. A Python sketch of the same network follows below; see the Blender manual to learn more about nodes
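A Python sketch of the same node network (assuming the material still holds the Glossy BSDF node from Example 4):

    import bpy

    tree = bpy.data.objects['Suzanne'].active_material.node_tree
    glossy = tree.nodes['Glossy BSDF']      # existing node, name assumed
    out = tree.nodes['Material Output']

    diffuse = tree.nodes.new('ShaderNodeBsdfDiffuse')
    diffuse.inputs['Color'].default_value = (0.075, 0.35, 0.50, 1.0)
    mix = tree.nodes.new('ShaderNodeMixShader')

    tree.links.new(diffuse.outputs['BSDF'], mix.inputs[1])   # first Shader input
    tree.links.new(glossy.outputs['BSDF'], mix.inputs[2])    # second Shader input
    tree.links.new(mix.outputs['Shader'], out.inputs['Surface'])
    mix.inputs['Fac'].default_value = 0.5                    # mixture level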

Other complementary resources for learning the Blender interface

            Part 2. Processing, shading and rendering geospatial data

In this section we will learn how to set up the BlenderGIS addon and georeference the scene. We will also review the procedure for importing, processing, and shading vector and raster data. We will proceed through the instructions using a simple viewshed assignment. The goal is to visualize and compare viewshed simulations computed for 4 different locations across a site. The general workflow is as follows.

I) Preparing the scene and lighting
II) Georeferencing the scene
III) Importing and processing the digital surface raster
IV) Importing and processing the viewpoint shapefile
V) Draping the viewshed map and orthophoto on the surface model

Note: A viewshed is a raster map showing the areas of a surface that are visible from a given location.

There are two ways to complete the example: the scripting method (using Blender's text editor) and the GUI (graphical user interface) method. For each step, the GUI procedure is listed as bullet points; below that you can find the code snippet if you would like to follow the scripting procedure. To execute a code snippet, open a new text file in the Text Editor, copy-paste the snippet for each step directly into the editor, and click Run Script.

Method          Duration        Difficulty
GUI             1-2 hours       Complex
Python editor   10-15 minutes   Easy

Note: For better learning, complete the example with both methods, but do not mix and match; follow one method from beginning to end.

The first step is to clean the scene and set up the rendering and lighting parameters.

GUI

            • Run Blender and open the file Example_A.blend.
• Select the default Cube object in the 3D viewport and delete it (right-click on the object ‣ press Delete ‣ OK)
• Make sure that the render engine is set to Cycles. You can find it in the top header; the default is Blender Render
• To increase the Lamp elevation and change the Lamp type to Sun for appropriate lighting:
  • Left-click on the Lamp object in the Outliner to select it
  • Go to Properties Editor ‣ Object (the orange cube icon) ‣ Transform panel ‣ in the Location matrix, change the Z value to 1000 (see the figure below if needed)
  • In Properties Editor ‣ Lamp (two icons to the right of the Object icon) ‣ expand the Lamp panel ‣ change the lamp type to Sun
  • Expand the Nodes panel ‣ click on Use Nodes to enable modifying the Sun parameters
  • Set the Strength parameter to 6.00

              Python editor
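A sketch of the equivalent scripting-method commands (Blender 2.7x API):

    import bpy

    # Switch the render engine to Cycles.
    bpy.context.scene.render.engine = 'CYCLES'

    # Delete the default Cube.
    bpy.data.objects['Cube'].select = True
    bpy.ops.object.delete()

    # Raise the Lamp to Z = 1000 and turn it into a Sun with strength 6.
    lamp_obj = bpy.data.objects['Lamp']
    lamp_obj.location.z = 1000
    lamp = lamp_obj.data
    lamp.type = 'SUN'
    lamp.use_nodes = True
    lamp.node_tree.nodes['Emission'].inputs['Strength'].default_value = 6.0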

              II. Georeferencing the Blender Scene

Setting up the BlenderGIS addon

• Download the BlenderGIS addon (domlysz/BlenderGIS)
• Go to File ‣ User Preferences (Ctrl + Alt + U) ‣ Add-ons ‣ Install from File (bottom of the window)
• Browse and select the "BlenderGIS-master.zip" file
• You should see the addon 3Dview: BlenderGIS added to the list. If not, type "gis" in the search bar while making sure that All is selected in the Categories panel. In the search results you should see 3Dview: BlenderGIS; select it to load the addon
• From the bottom of the preferences window click Save User Settings so the addon is saved and automatically loaded each time you open Blender

Adding a new predefined coordinate reference system (CRS)

Before setting up the coordinate reference system of the Blender scene and configuring the scene projection, you should know the Coordinate Reference System (CRS) and the Spatial Reference Identifier (SRID) of your project. You can look up the SRID for your CRS at http://epsg.io/ or the Spatial Reference website. The example datasets in this exercise use the NAD83(HARN)/North Carolina CRS (SRID EPSG:3358).

• In the Add-ons panel of the User Preferences window, expand the 3D View: BlenderGIS entry
• In the preferences panel find Spatial Reference Systems and click on the + Add button
• In the add window, enter "3358" for Definition and "NAD83(HARN)/North Carolina" for Description, then check Save to addon preferences
              • Select OK and close the User Preferences window

Learn more about georeferencing management in Blender

              Setting up the scene coordinate system

• Find and click on the GIS addon's interface in the 3D viewport's left toolbar. In the Geoscene panel, click on the gear-shaped icon, switch to NAD83(HARN), and click OK.

              III. Importing digital surface model

Rasters can be imported and used in different ways. You can import a raster As DEM to use it as a 3D surface, or as Raw DEM to be triangulated or skinned inside Blender. You can select On Mesh to drape it as a texture on your 3D meshes. In this example, we import a digital surface model (DSM) derived from lidar data points as a 3D mesh using the As DEM method. Note: BlenderGIS imports both Digital Elevation Models (DEM) and Digital Surface Models (DSM) through the As DEM method.

• Go to File ‣ Import ‣ Georeferenced Raster
• On the bottom left side of the window find Mode and select As DEM
• Set Subdivision to Subsurf and select NAD83(HARN) for georeferencing
• Browse to the 'workshop_material' folder and select 'example1_dsm.tif'
• Click on Import georaster in the top right header
• If all the steps are followed correctly, you should be able to see the terrain in the 3D View window

              Python editor
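A sketch of this step for the scripting method. The BlenderGIS operator and argument names below are assumptions that may differ between addon versions; check the addon's source if the call fails:

    import bpy

    # Assumed BlenderGIS operator; adjust the path to your workshop folder.
    bpy.ops.importgis.georaster(
        filepath='/path/to/workshop_material/example1_dsm.tif',
        importMode='DEM',
        subdivision='subsurf',
        rastCRS='EPSG:3358')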

              Surface subdivision and refinement

Usually, when surface or elevation models are imported into Blender they are downsampled to a default subdivision, which smooths out the surface details. The following procedure subdivides the imported mesh into smaller faces to enhance the surface representation; in this step we increase the subdivision units to acquire a more detailed surface.

• Select the surface model (right-click on the object)
• Go to the 3D View editor's bottom toolbar ‣ Object interaction mode ‣ Edit Mode
• Switch to Face select
• If the object is not orange (i.e., nothing is selected), go to Select ‣ (De)select All (or press A) to select all faces (when the object's faces are selected, they turn orange)
• Go to Tools (left toolbar) ‣ Mesh Tools ‣ Subdivide. The Subdivide dialogue should appear at the bottom left of the toolbar. Type "4" in the Number of Cuts field
• Go to the 3D View editor's bottom toolbar ‣ Object interaction mode ‣ Object Mode. You should be able to see the surface details at this point (bottom figure, right image)

              Python editor
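A sketch of the same refinement for the scripting method (the DSM object name is assumed from the import step):

    import bpy

    dsm = bpy.data.objects['example1_dsm']     # assumed object name
    bpy.context.scene.objects.active = dsm
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')   # select all faces
    bpy.ops.mesh.subdivide(number_cuts=4)      # 4 cuts, as in the GUI step
    bpy.ops.object.mode_set(mode='OBJECT')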

              IV. Importing viewpoint shapefile

In this step we will import the viewpoint locations as a point-feature shapefile. With those features we can visualize, on the digital surface, the observer locations from which the viewsheds are computed. The BlenderGIS addon can import shape features along with their attributes. In this example, "vpoints.shp" has Elevation and Name attributes that we will use to assign a height and a unique name to each viewshed point.

• To import the viewpoint shapefile:
  • Go to File ‣ Import ‣ Shapefile
  • Browse to the workshop data directory, select vpoints.shp and click on Import Shp. The shape import dialogue should appear in front of the GIS addon interface
  • Activate Elevation from field and in the field panel select height
  • Activate Separate objects
  • Activate Object name from field and in the field panel select Name. You should be able to see the 4 points on the surface and 4 objects added to the Outliner with the names Viewshed_1, Viewshed_2, Viewshed_3, Viewshed_4
                • Select OK

                Python editor
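A sketch of this step for the scripting method; as with the raster import, the BlenderGIS operator and property names are assumptions to verify against your addon version:

    import bpy

    # Assumed BlenderGIS shapefile importer and property names.
    bpy.ops.importgis.shapefile(
        filepath='/path/to/workshop_material/vpoints.shp',
        fieldElevName='height',     # Elevation from field -> height
        fieldObjName='Name',        # Object name from field -> Name
        separateObjects=True)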

                Creating viewpoint markers

The imported points are 2D vectors that cannot be rendered, as they have no actual surfaces. We now create 4 small spheres and match their locations with the imported points to visualize the observer locations in 3D.

                • To create spheres on the viewpoint location:
  • Go to the 3D viewport's bottom header ‣ Add ‣ Mesh ‣ UV Sphere. The Add UV Sphere dialogue will open on the left side of the Toolbar. Set the Size parameter to 3.000
  • Select the Sphere object (by clicking on it in the Outliner) and press Ctrl+C, Ctrl+V to make a copy of the object; you should see Sphere.001 in the Outliner. Make 3 copies of the sphere
  • In the Outliner, rename the sphere objects to Sphere1, Sphere2, Sphere3, Sphere4. You can do that by clicking on the object name
  • From the Outliner select the object Viewshed_1
  • Go to Properties Editor ‣ Object ‣ Transform ‣ Location to retrieve the viewshed point's coordinates (X, Y, Z)
  • Copy the retrieved coordinates from Viewshed_1 into the Location parameters for Sphere1
  • Add 2.0 extra units to the Z parameter (only for Location) to raise the sphere above the ground
  • Repeat this process for each Viewshed and each Sphere
  • You should now have 4 spheres aligned on the imported viewshed points

                  Python editor
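A sketch of this step for the scripting method, reading each viewpoint's location and spawning a sphere 2 units above it:

    import bpy

    for i in range(1, 5):
        vp = bpy.data.objects['Viewshed_%d' % i]
        bpy.ops.mesh.primitive_uv_sphere_add(
            size=3.0,
            location=(vp.location.x, vp.location.y, vp.location.z + 2.0))
        bpy.context.object.name = 'Sphere%d' % i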

                  Generating 4 copies of the surface and viewpoint spheres

In this step we create 3 additional copies of the surface model and move each of the viewpoint spheres to the corresponding surface.

• Select the DSM object and press Ctrl+C, Ctrl+V to make a copy of the object; you should see the object example1_dsm.001 in the Outliner
• Select the object example1_dsm.001
• Go to Properties Editor ‣ Object (cube icon)
• In the Transform panel ‣ Delta Transform ‣ X: type 750 to move the duplicated surface 750 meters to the east
• Create another copy of the DSM and put -750 for the Y parameter to move the duplicate surface 750 meters to the south
• Create another copy of the DSM and put 750 for the X parameter and -750 for the Y parameter. The final model should look like the figure below
• Repeat the same procedure for the 4 spheres (starting from Sphere1) so that each of them is moved onto one of the surfaces

                  Python editor
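A sketch of the duplication step for the scripting method, using delta transforms as in the GUI procedure:

    import bpy

    dsm = bpy.data.objects['example1_dsm']
    offsets = [(750, 0), (0, -750), (750, -750)]   # east, south, south-east
    for i, (dx, dy) in enumerate(offsets, start=2):
        copy = dsm.copy()
        copy.data = dsm.data.copy()
        bpy.context.scene.objects.link(copy)
        copy.delta_location = (dx, dy, 0)
        # move the matching sphere onto the duplicated surface
        bpy.data.objects['Sphere%d' % i].delta_location = (dx, dy, 0)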

Now we will create a mixed material to combine the orthophoto and viewshed maps. We will use emission shaders to show viewsheds as glowing surfaces. For that purpose we have created grayscale viewshed maps with a black background; the emission shader assigns greater light emission power to the lighter pixels.

• Make sure that the render engine is set to Cycles and the 3D viewport Shading is set to Material
• Change the bottom editor panel to the Node Editor. This can be done by simply changing the Editor Type selector (bottom left-hand side of the window)
• Select the first DSM object "example1_dsm"
• Go to Properties tab ‣ Material ‣ press the + New button to add a material
  • Rename the material to "Viewshed1"
  • Expand the Surface panel and click on the gray square-shaped icon on the right side of the Surface parameter to see a popup window with texture parameters. Select Mix Shader. You should see two shaders added below the Mix Shader
  • Click on the radio button on the right side of the first shader's color field ‣ Texture ‣ Image Texture
  • Click on Open and load "viewshed_1_1.png". You should be able to see the viewshed draped on the DSM surface
  • Change the Strength slider to 1.8 to increase the viewshed's emission power
  • Click on the radio button on the right side of the second shader's color field ‣ Texture ‣ Image Texture
  • Click on Open and load "ortho.png". You should be able to see the orthophoto draped on the DSM surface

Now notice how the material logic and workflow are represented in the Node Editor. You can play with each of the individual nodes, the links between them, and their values.

• Play with the Fac slider on the Mix Shader node to adjust the mixture level
• Repeat the shading procedure for the other 3 objects using "Viewshed_2_1.png", "Viewshed_3_1.png", and "Viewshed_4_1.png":
  • Select the surface object and go to Properties tab ‣ Material ‣ press the whirlpool-shaped button (next to the New button) to browse and load the "Viewshed1" material. Make a new copy of the material by pressing the number button on the left side of the material name field. Rename the new material to "Viewshed2"
  • Now, either from the Node Editor or in the Material tab, change the emission texture to "viewshed_2_1.png"
  • Repeat the same procedure for the two other surfaces

                      Python editor
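A sketch of the mixed material for the scripting method (Cycles node API; the image paths are placeholders for your workshop directory):

    import bpy

    mat = bpy.data.materials.new('Viewshed1')
    mat.use_nodes = True
    tree = mat.node_tree
    diffuse = tree.nodes['Diffuse BSDF']        # default node
    out = tree.nodes['Material Output']

    mix = tree.nodes.new('ShaderNodeMixShader')
    emission = tree.nodes.new('ShaderNodeEmission')
    emission.inputs['Strength'].default_value = 1.8

    vs_tex = tree.nodes.new('ShaderNodeTexImage')
    vs_tex.image = bpy.data.images.load('/path/to/viewshed_1_1.png')
    ortho_tex = tree.nodes.new('ShaderNodeTexImage')
    ortho_tex.image = bpy.data.images.load('/path/to/ortho.png')

    tree.links.new(vs_tex.outputs['Color'], emission.inputs['Color'])
    tree.links.new(ortho_tex.outputs['Color'], diffuse.inputs['Color'])
    tree.links.new(emission.outputs['Emission'], mix.inputs[1])
    tree.links.new(diffuse.outputs['BSDF'], mix.inputs[2])
    tree.links.new(mix.outputs['Shader'], out.inputs['Surface'])

    bpy.data.objects['example1_dsm'].active_material = mat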

Now follow the same workflow to shade the viewpoint spheres, but this time use only a diffuse node (Diffuse BSDF) with a solid orange color.

• Select the first sphere and create a new material using Diffuse BSDF
• Change the surface color to orange
• Assign the material to all sphere objects

Now let's try to run the entire procedure with a Python file using the Text Editor and Python Console.

GUI

• From the top header go to File ‣ New to open a fresh copy of Blender
• Save the Blender file with your preferred name in the workshop directory. Note: This is an important step, since all the paths in the Python code are relative to that directory
• In the top header find Layout (right next to Help) and switch the layout to Scripting. The Scripting layout includes a Text Editor (left), a Python Console (bottom), and a 3D View (right)
• Procedure for the Text Editor:
  • In the Text Editor ‣ Open ‣ go to the workshop directory and find example_a.py
  • Click on Run Script
  • Type the lines below into the console. Note that you need to type in the workshop path on your computer in the first line

                        Python Console>>>
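One common pattern is to run the saved script by its path, roughly as below (the path is a placeholder for your own workshop directory):

    script = '/path/to/workshop/example_a.py'   # first line: your workshop path
    exec(compile(open(script).read(), script, 'exec'))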

                        Python editor

                        Part 3. Real-time 3D modeling and coupling

                        I. Intro to coupling with Modal Timer

In this section we learn the basics of setting up a simple coupling for importing and processing geospatial data in real time. We do that by setting up a monitoring system inside Blender that continuously looks for incoming commands (e.g., through sockets), files (e.g., shapefiles, raster files, etc.), or user interaction (e.g., mouse, joystick, keyboard). In Blender this procedure is handled through a module called the Modal Timer Operator. The reason we focus on this specific module is that routine monitoring libraries like watchdog or threading are not well handled in Blender and often result in crashes: these modules interfere with Blender's ability to run multiple operators at once and to update different parts of the interface as the tool runs.

The data can be transferred locally or over a network, using simple file copies or more advanced methods like sockets. As an example, the following video shows a real-time coupling with GRASS GIS. GRASS GIS itself is paired with a Kinect that scans the elevation and color changes of a physical model. As the user interacts with the physical model, GRASS GIS runs various simulations and exports them as raster and shape formats to a system directory. In Blender, a modal timer continuously monitors the directory and updates the model based on the incoming data types: terrain surface (GeoTIFF), ponding simulation (3D polygon), landcover patches (3D polygon), camera location (3D polyline), and trail (3D polyline).

Let's take a peek at the module's components and functionality in the following example.

• Run the script that is loaded in the text editor
• Select the Monkey object and move it around. You will see that as you move the object, three operations run simultaneously: 1) the RGB values change, 2) a text object updates to show the new RGB values, and 3) the timer text object updates to show the elapsed time in seconds
• Cancel the modal mode using the Esc key
• Take a quick look at the commented code to check the module's components and their functionality. A minimal skeleton of the same pattern is sketched below
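For orientation, a minimal modal-timer skeleton (simplified from Blender's bundled operator template) looks roughly like this; the workshop script's watch-and-update logic would live where the TIMER event is handled:

    import bpy

    class ModalTimerSketch(bpy.types.Operator):
        """Minimal modal timer: wakes up twice a second until Esc is pressed."""
        bl_idname = "wm.modal_timer_sketch"
        bl_label = "Modal Timer Sketch"

        _timer = None

        def modal(self, context, event):
            if event.type == 'ESC':
                self.cancel(context)
                return {'CANCELLED'}
            if event.type == 'TIMER':
                # poll a directory / update scene objects here
                pass
            return {'PASS_THROUGH'}

        def execute(self, context):
            wm = context.window_manager
            self._timer = wm.event_timer_add(0.5, context.window)
            wm.modal_handler_add(self)
            return {'RUNNING_MODAL'}

        def cancel(self, context):
            context.window_manager.event_timer_remove(self._timer)

    bpy.utils.register_class(ModalTimerSketch)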

                        II. Coupling with GIS data

In this example we use a modal timer to monitor a system directory. In the workshop_materials folder you can see two folders named "Watch" and "Scratch". The Scratch folder contains 45 shapefiles and 45 images: the shapefiles represent viewpoints along a path, and the textures represent viewsheds simulated from those locations. The viewsheds are combined with landcover to show the landuse composition of the visible surface. Through a Python script we set up a modal timer that constantly looks for files to import and process. To emulate the geospatial simulation, we set up a second modal timer that copies the geospatial data from the Scratch folder to the Watch folder (see the scheme above). The Python script consists of the following Python classes.

1. The adapt class processes the incoming files and scene objects. Specifically, it performs the following operations:

                        • Imports the viewshed map
                        • Replaces the emission texture of the DSM object with the imported map
                        • Imports the viewpoint shape file
                        • Aligns the location of the viewshed marker (Torus object) with the location of the imported viewpoint.

2. The modal timer looks into the Watch directory, detects the type of each incoming file, sends it to the adapt class, and finally removes the file from the Watch folder.
3. Modal_copy acts as a surrogate for your GIS software: it copies texture and point shapefiles from the Scratch folder to the Watch folder to simulate the condition where your GIS application is automatically sending files over the network or locally.
4. Panel is a small widget with buttons to run the modules (2 and 3).

• Go to File ‣ Preferences ‣ Add-ons ‣ BlenderGIS ‣ Import/Export panel
• Unselect Adjust 3D view and Forced Textured solid shading
• Now run the script loaded into the text editor
• The script adds a new panel in the 3D View's toolbar (left side) with two buttons, Watch mode and Copy files
• First press Watch mode, then press Copy files
• You should be able to see the viewshed maps and the observer location object updating along the path

                        Part 4. Publish your work online using Blend4Web

Blend4Web is a powerful tool for easily publishing and sharing your data online from inside Blender. Especially for non-coders, the addon is a convenient way to create sophisticated interactive models. To take a peek at Blend4Web's functionality, look at the following applications featured in the Blend4Web examples library.
                        Everest
                        Low poly rendering

Example: Isosurfaces

                        In this example we use Blend4Web addon to export a 3D model online. The sample data is borrowed from a project focused on visualizing spatio-temporal patterns of pedestrian behavior using data gathered from public webcams. The data is visualized using an Isosurface created in Paraview. The shape of an isosurface shows the spatio-temporal evolution of pedestrian density. The time axis is represented as a color ramp draped over the isosurface.

                        I. Setting up the Blend4Web addon

• From the workshop_materials directory locate and open isosurface.blend
• To set up the Blend4Web addon:
  • Go to File ‣ User Preferences ‣ Add-ons ‣ Install from File
  • From the workshop_materials directory locate and select blend4web_addon_17_06.zip
  • Make sure that the addon is installed and enabled
  • Click Save User Settings and close the Preferences window

                          II. Exporting blender scene to web format

To export the Blender scene as .html using the GUI:

• Go to File ‣ Export ‣ Blend4Web (.html)
• Name your file and click on the B4W Export HTML button
• Double-click on the HTML file to open it in the browser

To export the Blender scene as .html using the text editor:
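The Blend4Web exporter is also exposed as a Python operator; here is a rough sketch (the operator name is an assumption to verify against your addon version):

    import bpy

    # Assumed Blend4Web HTML export operator; the path is a placeholder.
    bpy.ops.export_scene.b4w_html(filepath='/path/to/isosurface.html')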

This work is built upon the great contributions and support of the Blender team, the BlenderGIS addon developers (domlysz/BlenderGIS), the Center for Geospatial Analytics, NC State's GeoForAll lab, and Garrett Millar.




QGIS 2.10 RPMs for Fedora 21, CentOS 7, Scientific Linux 7

Thanks to the work of Volker Fröhlich and other Fedora/EPEL packagers I was able to create RPM packages of QGIS 2.10 Pisa for Fedora 21, CentOS 7, and Scientific Linux 7 using the great COPR platform.

The following packages can now be installed and tested on epel-7-x86_64 (CentOS 7, Scientific Linux 7, etc.) and Fedora-21-x86_64:

                          • qgis 2.10.1
                          • qgis-debuginfo 2.10.1
                          • qgis-devel 2.10.1
                          • qgis-grass 2.10.1
                          • qgis-python 2.10.1
                          • qgis-server 2.10.1

                          Installation instructions (run as “root” user or use “sudo”):



                          A Digital Elevation Model (DEM) is a 3D representation of a terrain's surface that does not include any objects like buildings or vegetation. DEMs are frequently created by using a combination of sensors, such as LIDAR, radar, or cameras. The terrain elevations for ground positions are sampled at regularly-spaced horizontal intervals. Wikipedia is a good resource for getting more details about DEMs.

The term DEM is just a generic denomination, not a specific format. In fact, a DEM can be represented as a grid of elevations (raster) or as a vector-based triangular irregular network (TIN). Currently, Gazebo only supports raster data in the formats supported by GDAL.

                          The main motivation to support DEMs in Gazebo is to be able to simulate a realistic terrain. Rescue or agriculture applications might be interested in testing their robot behaviors using a simulated terrain that matches the real world.

                          Bring DEM support to Gazebo

                          In order to work with DEM files you should install GDAL libraries.

The DEM file and its definition in SDF format

                          There are several organizations that provide elevation data. As an example, let's download a DEM file of Mount St. Helens before or after its eruption back in the '80s. These files are in public domain and are distributed by USGS.

                          Unzip the file and rename it mtsthelens.dem as follows:

Usually, DEM files have high resolutions that Gazebo cannot handle, so it's a good idea to adjust the resolution of your DEM. The next command will scale the terrain to 129x129 and copy it into the Gazebo media/dem/ directory.
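This scaling is done with GDAL's gdalwarp utility; a rough Python equivalent using the GDAL bindings (paths as in the tutorial, with a GeoTIFF output for portability):

    from osgeo import gdal

    # Downsample the DEM to 129 x 129 cells, like `gdalwarp -ts 129 129 ...`.
    gdal.Warp('/tmp/media/dem/mtsthelens_129.tif', '/tmp/mtsthelens.dem',
              width=129, height=129)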

A DEM file in Gazebo is loaded in the same way that you load a heightmap image; Gazebo automatically detects whether the file is a plain image or a DEM file. Create the file volcano.world and copy the next content into it. Save the file anywhere you want, for example in /tmp.

                          The <heightmap><size> element in the code above tells Gazebo whether to load the DEM with the original dimensions (when <size> is not present) or to scale it (when <size> is present). In case you prefer to scale the DEM, the <size> element tells Gazebo the size in meters that the terrain will have in the simulation. If you want to maintain the correct aspect ratio, be sure to properly calculate its size in all three dimensions. In our example, the DEM will be scaled to a square of 150 x 150 meters and a height of 50 meters. The minimum elevation for this particular DEM file is 685 meters, so in the <pos> element, we translate the entire DEM in negative z direction so that it sits at z=0 in the world.

                          Launch Gazebo with the world containing your DEM file and you should see the volcano. In our case, the file is in the /tmp directory.

                          Try doing the same with the DEM file for Mount St. Helens after the eruption. You should get a heightmap in Gazebo similar to the image below:

                          How do I get a DEM file of my region of interest?

                          Next, we are going to describe one method for obtaining a DEM file of a specific region of interest.

                          Global Land Cover Facility maintains a high-resolution digital topographic database of Earth. Go to its Search and Preview tool and you will see something similar to the image below. Every terrain patch has a unique path and row that you should know before using the tool. We'll use QGIS to discover the path/row of our region of interest.

                          QGIS is a cross-platform open source geographic information system program that provides data viewing, editing, and analysis capabilities. Download QGIS following the instructions detailed on the QGIS website.

• Open up QGIS and click on the left column icon labeled WMS/WMTS Layer. Click on Add default servers, select the Lizardtech server, and press Connect. Select the MODIS value, press Add, and close the pop-up window.
• The next step is to add another layer with all the different patches available. Download this shapefile and decompress it in any folder.
• Go back to QGIS and press Add Vector Layer (left column icon). Press Browse, select your previously uncompressed wrs2_descending.shp file, and press Open in the window that opens.
• You will now see both layers in the main window. To see both at the same time, double-click on the wrs2_descending layer and change its transparency to around 85%.

Use the scroll wheel and left button to navigate to your region of interest. Then click on the icon labeled Identify Features on the top bar. Click on your region of interest and all the terrain patches around the area will be highlighted. A new pop-up window will show the path/row values for each highlighted patch. In the image below you can see the path and row of the DEM patch containing Las Palmas, one of the heavenly places of the Canary Islands, Spain.

Go back to your browser with the GLCF search tool and write the path/row values in the columns labeled Start Path and Start Row. Then click Submit Query, and press Preview and Download to see your results. Choose your terrain file and press Download. Finally, select the file with the .gz extension and decompress it in your favorite folder. Global Land Cover Facility files are in GeoTIFF format, one of the most common formats for DEM files.

                          Preparing DEM data for use in Gazebo

                          DEM data is usually created at very high resolution. Use gdalwarp to reduce the resolution of the terrain to a more manageable size before using it in Gazebo.

                          DEM data often contain "holes" or "void" areas. These sections correspond to areas where data could not be collected while the DEM was created. In the case of a data "hole", the hole will be assigned the minimum or maximum value of the data type that is used in that DEM.

Always try to download "finished" versions of DEM data sets, where the holes have been filled. If your DEM terrain contains holes (also known as NODATA values), try to repair it manually using GDAL tools such as gdal_fillnodata.py.

                          Working with multiple DEMs in Gazebo

Although Gazebo does not directly support multiple DEMs, GDAL has a set of utilities for merging a set of DEMs into a single one. The first step is to download the set of DEMs that you want to merge. Note that the patches can even overlap with one another; GDAL will merge them seamlessly. Assuming that your current directory contains a set of GeoTIFF files ready to be merged, run the merge command (a Python equivalent is sketched below).
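GDAL's gdal_merge.py utility does this on the command line; the same mosaic can be sketched with the Python bindings (filenames are placeholders):

    import glob
    from osgeo import gdal

    # Mosaic all GeoTIFF tiles in the current directory into a single DEM.
    tiles = glob.glob('*.tif')
    gdal.Warp('dem_merged.tif', tiles)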