Hello, pokemon trainers of the world! Today, I would like to explain Geographic Analysis using ideas from the Pokemon GO game that you know only too well. I hope that you will return to the game with a good understanding of the geographic concepts and the geospatial technology behind it.
Save for some serious cheating, you have to move around this thing called THE REAL WORLD with your location-enabled device in order to “catch’em all”. Smartphone manufacturers make it really difficult to manipulate GPS location, because it is such a critical function of your device. So, unless you are truly close to that poke stop, you won’t be able to access its resources: free poke balls, razz berries, etc. In Geography, we often study the location of points-of-interest or services. For example, if you live or work close to a specific shopping mall or hospital, you are likely to use their services at one point or another. Or, if you are far away from a college or university and still choose to pursue higher education, you may have to move in order to be within reach of that institution.
To use a poke stop or gym, or to catch a pokemon, you do not need to be at their exact coordinate locations, but you need them to appear within your proximity circle as you move around. In Geographic Analysis, we often examine this “reach”, or catchment area, that is defined by proximity to locations of interest. For example, when a coffee chain looks to open a new store, Geographers will examine their competitors’ locations and surrounding neighbourhood profiles to determine whether there is a gap in coverage or whether there are catchment areas that include enough people of the right demographic to support an additional cafe. In Retail Geography, we call these areas “trade areas”. That’s why you can find clusters of Tim Horton’s, Second Cup, and/or Starbucks at major intersections where the geodemographics are favourable – yes, this is likely a Geospatial Analyst’s work! And that’s also why you can find clusters of poke stops in some of your favourite busy locations.
To support business decision-making, AKA “location intelligence”, Geographers use data on population, household incomes and employment, the movement of people, and the built environment. If you have ever “watched” pokevision.com for different locations, you will have noticed great variation in the pokemon spawn density and frequency. For example, in our screenshots below you can see tons of pokemon in downtown Toronto, but not a single one in an area of rural Ontario. Similarly, there are dozens of poke stops and several gyms within walking distance in the City but a lone poke stop in rural Ontario. The Pokemon GO vendor, Niantic, seems to be using geodemographics in determining where pokemon will spawn. They make it more likely for pokemon to spawn where there are “clients”: that is, yourselves, the trainers/players.
Fig. 1: poke stop locations and pokemon appearances in downtown Toronto (a, b), compared to rural Ontario (c)
Geographic space is a unique dimension that critically influences our lives and societies. The spatial distribution of people and things is something that Geographers study. Just like the spawning of pokemon in general, the appearance of the different types of pokemon is not randomly distributed either. For example, it has been shown that water-type pokemon are more likely to appear near water bodies. See all those Magikarps near the Toronto lakefront in the screenshot below? A few types of pokemon even seem restricted to one continent, such as Tauros in North America, and won’t appear on another (e.g., Europe). The instructions by “Professor Willow” upon installation of the app actually refer to this regional distribution of pokemon. I also believe that the points-of-interest, such as buildings, that serve as poke stops determine the pokemon types spawning near them. For example, the Ontario Power Building at College St. and University Ave. in Toronto regularly spawns an Electabuzz, as shown in the last screenshot below.
Fig. 2: (a) “Professor Willow” explaining his interest in studying the regional distribution of pokemon (what a great-looking Geographer he is!); screenshots of pokevision.com with (b) Magikarps at the Toronto lakefront and (c) an Electabuzz near the Ontario Power Building
In Environmental Geography, we often analyze (non-pokemon) species distribution, which is also not random. The availability of suitable habitat is critical, just like for pokemon. In addition, spatial interactions between species are important – remember the food chain you learned about in school. I am not sure that different pokemon types interact with one another; maybe that could be the topic of your first course project, as you enter a Geography program at university?
The techniques that we use within Geographic Information Systems (GIS) include suitability mapping, distance and buffer analysis, and distance decay. Distance decay means that it becomes less and less likely to encounter a species as you move away from suitable habitat – or, in the business field, that people become less and less likely to shop at a specific mall the further away from it they live. A buffer is an area within a specified distance around a point, line, or polygon, just like the proximity circle around your pokemon avatar. GIS software can determine whether other features fall within the buffer around a location. Instead of enabling access to poke stops or gyms around your avatar, Geographers would use buffer analysis to determine which residents have access to public transit, e.g., whether they live within a 500 m or 1 km walking distance of a transit stop.
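The buffer-containment check is simple enough to sketch in code. The following Python snippet uses made-up, hypothetical coordinates in a projected metric system (think UTM) and a plain Euclidean distance – real analyses would of course use GIS software and proper transit and census data:

```python
from math import hypot

# Hypothetical projected coordinates in metres (e.g., a UTM zone covering Toronto).
transit_stops = [(630000, 4833000), (631200, 4833500)]
residents = {"A": (630300, 4833100), "B": (633000, 4835000)}

def within_buffer(point, centres, radius_m):
    """True if the point lies inside a buffer of radius_m around any centre."""
    return any(hypot(point[0] - cx, point[1] - cy) <= radius_m
               for cx, cy in centres)

served = {name: within_buffer(loc, transit_stops, 500)
          for name, loc in residents.items()}
print(served)  # resident A lives within 500 m of a stop; B does not
```

This is exactly what the game does with its proximity circle, only with your avatar at the buffer centre and poke stops as the features being tested.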
A final thought about how Pokemon GO has brought Geography to the headlines concerns important professional and societal challenges that Geographers can tackle. These range from map design and online map functionality to the crowdsourcing of geospatial data, as well as the handling of big data, privacy concerns, and ultimately the control of people’s locations and movement. The now-defunct pokevision.com Web map used Esri online mapping technology; Esri is one of the world-leading vendors of GIS software and promoters of professional Geography. Another approach, used by pokemonradargo.com, has trainers (users) report/upload their pokemon sightings in real time. This geospatial crowdsourcing comes with a host of issues around the accuracy of, and bias in, the crowdsourced data, as well as the use of free labour. For example, poke stops were created by players of a previous location-based game called “Ingress” and are now used by Niantic in a for-profit venture – Pokemon GO! Finally, you have all read about the use and misuse of lures to attract people to poke stops at different times of day and night. The City of Toronto recently requested the removal of poke stops near the popular island ferry terminal for reasons of pedestrian control and safety. Imagine how businesses or government could in the future control our movement in real space with more advanced games.
I hope I was able to explain how Pokemon GO is representative of the much larger impact of Geography on our everyday lives and how Geographers prepare and make very important, long-term decisions in business and government on the basis of geospatial data analysis. Check out our BA in Geographic Analysis or MSA in Spatial Analysis programs to find out more and secure a meaningful and rewarding career in Geography. And good luck hunting and training more pokemon!
As a follow-up to my post on “Geospatial Data Preparation for 3D Printed Geographies” (19 Sept 2015), I am providing an update on the different approaches that I have explored with my colleague Dr. Claire Oswald for our one-year RECODE grant entitled “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. The tools that we have used to turn geospatial data into 3D prints include the program heightmap2stl; direct loading of a grey scale image into the Cura 3D modeling software; the QGIS plugin DEMto3D; the script shp2stl.js; and a workflow using Esri’s ArcScene for 3D extraction, saving in VRML format, and translating this file into STL format using the MeshLab software.
The starting point: GIS and heightmap2stl
As a GIS specialist with limited knowledge of 3D graphics or computer-aided design, I rely heavily on the work of others for all of the techniques used to make geospatial data printable, and my understanding of the final steps of data conversion and 3D print preparation is somewhat limited. With this in mind, the first approach to converting geospatial data, specifically a digital elevation model, used Markus Fussenegger’s Java program heightmap2stl, which can be downloaded from http://www.thingiverse.com/thing:15276/#files and used according to the detailed instructions on “Converting DEMs to STL files for 3D printing” by James Dittrich of the University of Oregon. The process from QGIS or ArcGIS project to greyscale map image to printable STL file was outlined in my previous post at http://gis.blog.ryerson.ca/2015/09/19/geospatial-data-preparation-for-3d-printed-geographies/.
Quicker and not dirtier: direct import into Cura
The use of the heightmap2stl program in a Windows environment requires a somewhat cumbersome process at the Windows command line, and the resulting STL files seemed exceedingly large, although I did not investigate this issue systematically. I was therefore very pleased to discover, by accident, that the Cura software, which I use with my Lulzbot Taz 5 printer, can load greyscale images directly.
The following screenshot shows the parameters available after clicking “Load Model” and selecting an image file (e.g., in PNG format, rather than an STL file). The parameters include the height of the model, the height of a base to be created, the model width and depth within the printer’s hardware limits, the direction in which greyscale values are interpreted as height (lighter or darker is higher), and whether to smooth the model surface.
The most ‘popular’ model created using this workflow is our regional watershed puzzle. The puzzle consists of a baseplate with a few small watersheds that drain directly into Lake Ontario along with a set of ten separately printed watersheds, which cover the jurisdiction of the Toronto and Region Conservation Authority (TRCA).
— TRCA Monitoring (@TRCA_Monitoring) April 12, 2016
Controlling geographic scale: QGIS plugin DEMto3D
The first two approaches share a significant limitation for 3D printing of geography: they do not support controlling the geographic scale. To keep track of scale and vertical exaggeration, one has to calculate these values from the geographic extent, the elevation differential, and the model/printer parameters. This is where the neat QGIS plugin DEMto3D comes into play.
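Without such a plugin, the calculation is a quick back-of-the-envelope exercise. A minimal Python sketch, with entirely made-up numbers rather than any of our actual prints:

```python
def print_scale(extent_m, print_size_m):
    """Map scale denominator: real-world extent divided by printed extent."""
    return extent_m / print_size_m

def model_height_mm(relief_m, scale_denom, exaggeration=1.0):
    """Printed height of the terrain relief, in millimetres."""
    return relief_m / scale_denom * exaggeration * 1000

# Hypothetical example: a 40 km wide region printed 20 cm wide,
# with 300 m of relief and a 10x vertical exaggeration.
scale = print_scale(40_000, 0.20)            # scale denominator 200,000 -> 1:200,000
relief_mm = model_height_mm(300, scale, 10)  # 15 mm of printed relief
print(scale, relief_mm)
```

Run the second function with exaggeration 1.0 and you see why exaggeration is needed at all: 300 m of real relief at 1:200,000 would print only 1.5 mm high.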
As can be seen in the following screenshot, DEMto3D allows us to determine a print extent from the current QGIS project or layer extents; set the geographic scale in conjunction with the dimensions of the 3D print; specify vertical exaggeration; and tie the base of the model to a geographic elevation. For example, the current setting of 0 m would print elevations above sea level, while a setting of 73 m would print elevations of the Toronto region in relation to the surface level of Lake Ontario. One shortcoming of DEMto3D is that vertical exaggeration is oddly limited to a factor of 10, which we found not always sufficient to visualize regional topography.
Using DEMto3D, we recently printed our first multi-part geography, a two-piece model of the Oak Ridges Moraine, which stretches over 200 km in the east-west direction to the north of the City of Toronto and contains the headwaters of streams running south towards Lake Ontario and north towards Lake Simcoe and Georgian Bay. To increase the vertical exaggeration for this print from 10x to 25x, we simply rescaled the z dimension in the Cura 3D printing software after loading the STL file.
— Claus Rinner (@ClausRinner) April 21, 2016
Another Shapefile converter: shp2stl
The DEMto3D plugin strictly requires true DEM data (as far as I can tell), so it would not convert a Shapefile with building heights for the Ryerson University campus and surrounding City of Toronto neighbourhoods, which I wanted to print. Converting a greyscale image of campus building heights with one of the first two approaches above did not work either, as the 3D buildings represented in the resulting STL files had triangulated walls.
In looking for a direct converter from Shapefile geometries to STL, I found Doug McCune’s shp2stl script at https://github.com/dougmccune/shp2stl and his extensive examples and explanations in a blog post on “Using shp2stl to Convert Maps to 3D Models”. This script runs within the NodeJS platform, which needs to be installed and understood – the workflow turned out to be a tad too complicated for a time-strapped Windows user. Although I managed to convert the Ryerson campus using shp2stl, I never printed the resulting model due to another, unrelated challenge: I was unable to add a base plate to the model (for my buildings to stand on!).
Getting those walls straight: ArcScene, VRML, and MeshLab
Another surprise find, made just a few days ago, enabled the printing of my first city model from the City of Toronto’s 3D massing (building height) dataset. This approach uses a combination of Esri’s ArcScene and the MeshLab software. Within ArcScene, I could load the 3D massing Shapefile (after clipping/editing it down to an area around campus using QGIS), define vertical extrusion on the basis of the building heights (EleZ variable), and save the 3D scene in the VRML format as a *.wrl (“world”) file. Using MeshLab, the VRML file could then be imported and immediately exported in STL format for printing.
— Claus Rinner (@ClausRinner) April 21, 2016
While this is the only approach included in this post that uses a commercial tool, ArcScene, it is likely that the reader can find an alternative workflow based on free/open-source software to extrude Shapefile polygons and turn them into STL, whether or not this requires the intermediate step through the VRML format.
Ryerson students, faculty, staff, and the local community are invited to explore and celebrate Geographic Information Systems (GIS) research and applications. Keynote presentations will outline the pervasive use of geospatial data analysis and mapping in business, municipal government, and environmental applications. Research posters, software demos, and course projects will further illustrate the benefits of GIS across all sectors of society.
Date: Wednesday, November 18, 2015
Location: Library Building, 4th Floor, LIB-489 (enter at 350 Victoria Street, proceed to 2nd floor, and take elevators inside the library to 4th floor)
- 1:00 Soft kick-off, posters & demos
- 1:25 Welcome
- 1:30-2:00 Dr. Namrata Shrestha, Senior Landscape Ecologist, Toronto & Region Conservation Authority
- 2:00-2:30 posters & demos
- 2:30-3:00 Andrew Lyszkiewicz, Program Manager, Information & Technology Division, City of Toronto
- 3:00-3:30 posters & demos
- 3:30-4:00 Matthew Cole, Manager, Business Geomatics, and William Davis, Cartographer and Data Analyst, The Toronto Star
- 4:00 GIS Day cake!
- 5:00 End
GIS Day is a global event under the motto “Discovering the World through GIS”. It takes place during National Geographic’s Geography Awareness Week, which in 2015 is themed “Explore! The Power of Maps”, and aligns with the United Nations-supported International Map Year 2015-2016.
Event co-hosted by the Department of Geography & Environmental Studies and the Geospatial Map & Data Centre. Coffee/tea and snacks provided throughout the afternoon. Contact: Dr. Claus Rinner
I am collaborating with my colleague Dr. Claire Oswald on a RECODE-funded social innovation project aimed at using “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. Our tweets of the first prototypes printed at the Toronto Public Library have garnered quite a bit of interest – here’s how we did it!
The process from geography to 3D print model includes four steps:
- collect geospatial data
- process and map the data within a geographic information system (GIS)
- convert the map to a 3D print format
- verify the resulting model in the 3D printer software
So far, we have made two test prints of very different data. One is a digital elevation model (DEM) of the Don River watershed; the other represents population density by Toronto Census tract. A DEM for Southern Ontario, created by the Geological Survey of Canada, was downloaded from Natural Resources Canada’s GeoGratis open data site at http://geogratis.gc.ca/. It came at a spatial resolution of 30 m x 30 m grid cells with a vertical accuracy of 3 m.
The Don River watershed boundary from the Ontario Ministry of Natural Resources was obtained via the Ontario Council of University Libraries’ geospatial portal, as shown in the following screenshot.
The population density data and Census tract boundaries from Statistics Canada were obtained via Ryerson University’s Geospatial Map and Data Centre at http://library.ryerson.ca/gmdc/ (limited to research and teaching purposes).
The Don River watershed DEM print was prepared in the ArcGIS software by clipping the DEM to the Don River watershed boundary, selected from the quaternary watershed boundaries. The Don River DEM was visualized in several ways: the “flat” greyscale map with shades stretched between the actual minimum and maximum values, which is needed for the conversion to 3D print format, and the more illustrative “hillshade” technique with a semi-transparent land-use overlay (not used further in our 3D project).
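The min-max stretch that turns elevations into printable grey values is straightforward. A minimal Python sketch of the idea, using a tiny made-up grid rather than our actual Don River data:

```python
def stretch_to_greyscale(dem):
    """Linearly rescale elevation values to the 0-255 greyscale range."""
    flat = [v for row in dem for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1          # avoid division by zero on a flat DEM
    return [[round((v - lo) * 255 / span) for v in row] for row in dem]

# A hypothetical 2x2 DEM (elevations in metres):
dem = [[100, 355],
       [151, 202]]
print(stretch_to_greyscale(dem))  # [[0, 255], [51, 102]]
```

The GIS does the same thing when you symbolize the raster with a greyscale ramp stretched between the minimum and maximum cell values.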
The population density print was prepared in the free, open-source QGIS software. A choropleth map with a greyscale symbology was created, so that the lighter shades represented the larger population density values (yes, this goes against cartographic design principles, but it is needed here). A quantile classification with seven manually rounded class breaks was used, with the first class reserved for zero population density (Census tracts without residential population).
In QGIS’ print composer, the map was completed with a black background, a legend, and a data source statement. The additional elements were kept in dark grey so that they would be only slightly raised over the black/lowest areas in the 3D print.
The key step of converting the greyscale maps from the GIS projects to 3D print-compliant STL file format was performed using a script called “heightmap2stl.jar” created by Markus Fussenegger. The script was downloaded from http://www.thingiverse.com/thing:15276/#files, and used with the help of instructions written by James Dittrich of the University of Oregon, posted at http://adv-geo-research.blogspot.ca/2013/10/converting-dems-to-stl-files-for-3d.html. Here is a sample run with zero base height and a value of 100 for the vertical extent.
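heightmap2stl itself is a Java program, but the core idea is compact: the greyscale image becomes a grid of z values, and each grid cell is covered by two triangles. The following stripped-down Python sketch writes an ASCII STL surface from a height grid; it is an illustration of the principle only (no base plate or side walls, so the output is a bare surface rather than a printable solid):

```python
def heightmap_to_stl(heights, path, z_scale=1.0):
    """Write a minimal ASCII STL surface: two triangles per grid cell.

    heights: 2D list of greyscale/elevation values. No base or side walls
    are generated, so the result is a bare surface, not a watertight solid."""
    def facet(p1, p2, p3):
        return ("facet normal 0 0 0\n outer loop\n"
                + "".join("  vertex {} {} {}\n".format(x, y, z)
                          for x, y, z in (p1, p2, p3))
                + " endloop\nendfacet\n")
    with open(path, "w") as out:
        out.write("solid dem\n")
        for r in range(len(heights) - 1):
            for c in range(len(heights[0]) - 1):
                z = lambda rr, cc: heights[rr][cc] * z_scale
                a = (c,     r,     z(r, c))        # the four cell corners
                b = (c + 1, r,     z(r, c + 1))
                d = (c,     r + 1, z(r + 1, c))
                e = (c + 1, r + 1, z(r + 1, c + 1))
                out.write(facet(a, b, d) + facet(b, e, d))
        out.write("endsolid dem\n")

# A 2x2 height grid yields a single cell, i.e. two triangles.
heightmap_to_stl([[0, 10], [5, 20]], "dem.stl", z_scale=0.1)
```

Real converters additionally extrude a base of the requested height and close the sides, which is where much of heightmap2stl’s file size comes from.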
The final step of pre-print processing involves loading the STL file into the 3D printer’s proprietary software to prepare the print file and check parameters such as validity of the structure, print resolution, fill options for hollow parts, and overall print duration. At the Toronto Public Library, 3D print sessions are limited to two hours. The following screenshot shows the Don River DEM in the MakerBot Replicator 2 software, corresponding to the printer used in the Library. Note that the model shown was too large to be printed in two hours and had to be reduced below the maximum printer dimensions.
The following photo by Claire Oswald shows how the MakerBot Replicator 2 in the Toronto Reference Library’s digital innovation hub prints layer upon layer of the PLA plastic filament for the DEM surface and the standard hexagonal fill of cavities.
The final products of our initial 3D print experiments have dimensions of approximately 10-20cm. They have made the rounds among curious-to-enthusiastic students and colleagues. We are in the process of improving model quality, developing additional models, and planning for their use in environmental education and public outreach.
This text was first posted as a guest contribution to WhyRyerson?, the Undergraduate Admissions and Recruitment blog at Ryerson University. Images were added after the initial posting.
Geography@Ryerson is different. Atlases, globes, and Google Maps are nice pastimes, but we are more interested in OpenStreetMap, CartoDB, and GeoDA. We map global flight paths, tweets, invasive species, and shoplifters. As a student in Geographic Analysis you will gain real-world, or rather real-work, experience during your studies. This degree is unique among Geo programs in Ontario, if not in Canada, for its career focus.
Mapping global flight paths.
(Source: Toronto Star, 24 May 2013)
The BA in Geographic Analysis has a 40-year record of placing graduates in planning and decision-making jobs across the public and private sectors. Jobs include Data Technician, Geographic Information Systems (GIS) Specialist, Geospatial Analyst, Mapping Technologist, GIS Consultant, Environmental Analyst, Market Research Analyst, Real-Estate Analyst, Crime Analyst, and many more. You name the industry or government branch, we’ll tell you what Geographers are doing for them. And these jobs are secure: Many are within government, or, if they are in the private sector, they tend to be in units that make businesses more efficient (and therefore are essential themselves!).
And these are great jobs, too. In November 2013, GIS Specialists were characterized as a low-stress job by CNN Money/PayScale. There were half a million positions in the US, with an expected 22% growth over 10 years, and a median pay of US$53,400 per year. In their previous survey, Market Research Analysts had made the top-10, with over a quarter million jobs, over 40% expected growth, and a median pay of US$63,100. The 2010 survey described GIS Analyst as a stress-free job with a median salary of US$75,000.
Mapping Technologist, one of Canada’s best jobs!
(Source: Canadian Business, 23 April 2015)
Closer to home, in April 2015 Canadian Business magazine put Mapping Technologists among the top-10 of all jobs in Canada! They note that “The explosion of big data and the growing need for location-aware hardware and software has led to a boom in the field of mapping”. With a median salary of CA$68,640, a 25% salary growth, and a 20% increase in jobs over five years, “this class of technology workers will pave the way”. According to Service Canada, “Mapping and related technologists and technicians gather, analyze, interpret and use geospatial information for applications in natural resources, geology, environment and land use planning. […] They are employed by all levels of government, the armed forces, utilities, mapping, computer software, forestry, architectural, engineering and consulting firms”. Based on the excellent reputation of our program in the Toronto area, you can add the many jobs in the business, real-estate, social, health, and safety fields to this list!
Sample applications of Geographic Analysis
(Source: Google image search)
While you may find the perspective of a well-paid, laid-back job in a growing field attractive enough, there is more to being a Ryerson-trained Geographer. Your work will help make important decisions in society. This could be with the City of Toronto or a Provincial or Federal ministry, where you turn geospatial data into maps and decision support tools in fields such as environmental assessment, social policy, parks and forestry, waste management, immigration, crime prevention, natural resources management, utilities, transportation, and more. Or, you may find yourself analysing socio-economic data and crime incidents for a regional police service in order to guide their enforcement officers, as well as crime prevention and community outreach activities. Many of our graduates work for major retail or real-estate companies determining the best branch locations, efficient delivery of products and services, or mapping and forecasting population and competitors. Or you could turn your expertise into a highly profitable free-lance GIS and mapping consultancy.
Geography is one of the broadest fields of study out there, which can be intimidating. Geography@Ryerson however is different, as we provide you with a “toolkit” to turn your interest in the City, the region, and the world, and your fascination with people and the environment, into a fulfilling, secure, laid-back, yet meaningful job!
Minecraft is a fascinating video game that remains popular with the pre-teen, teen, and post-teen crowds. You build and/or exploit a 3D world by manipulating blocks of various materials such as “stone”, “dirt”, or “sand”. In the footsteps of my colleague Pamela Robinson in the School of Urban and Regional Planning, and her student Lisa Ward Mather, I became interested in ‘serious’ applications of Minecraft. Lisa studied the use of the game as a civic engagement tool. Apparently, the blocky 3D nature of Minecraft worlds can be useful in planning to give viewers an idea of planned building volumes while making it clear that preliminary displays are not architectural plans.
Taking a geographic perspective, I am interested in the potential of Minecraft to educate kids about larger areas, say the City of Toronto. In this post, I outline the conversion of a digital elevation model (DEM) into a Minecraft terrain. I imagine the output as a novel way for ‘gamers’ to explore and interact with the city’s topography. Some pointers to related, but not Toronto-specific work include:
- GIS StackExchange discussion on “Bringing GIS data into Minecraft“, including links to the UK and Denmark modeled in Minecraft
- A video conversation about “Professional Minecraft GIS“, where Ulf Mansson combined OpenStreetMap and open government data
- Workflow instructions for converting “Historical Maps into Minecraft” using WorldPainter, which automatically converts DEMs into Minecraft terrain (if I had seen this before I started implementing the Python script outlined below…)
- An extensive webinar on “Geospatial and Minecraft” by FME vendor Safe Software, touching on creating Minecraft worlds from DEMs, GPS, LiDAR, building information management, and the rule-based CityEngine software
The source data for my modest pilot project came from the Canadian Digital Elevation Model (CDEM) by Natural Resources Canada, accessed using the GeoGratis Geospatial Data Extraction tool at http://geogratis.gc.ca/site/eng/extraction. In QGIS, I converted the GeoTIFF file to ASCII Grid format, which has the advantage of being human-readable. I also experimented with clipping parts from the full DEM and/or reducing the raster resolution, since the first attempts at processing would have taken several hours. The QGIS 2.2 raster translate or clip operations ran a GDAL function along the following lines (see http://www.gdal.org/gdal_translate.html and http://www.gdal.org/formats_list.html for details; note that -projwin expects the upper-left and lower-right corners, i.e. xmin ymax xmax ymin, not a min/max bounding box):
gdal_translate -projwin [xmin ymax xmax ymin] -outsize 25% 25% -of AAIGrid [input_file.tif] [output_file.asc]
On the Minecraft side, you need an account (for a small cost), a working copy of the game, and an installation of MCEdit. Player accounts are sold and managed by the game’s developer company, Mojang, see https://minecraft.net/store/minecraft. The Minecraft software itself is launched from the Web – don’t ask about the details but note that I am using version 1.8.7 at the time of writing. MCEdit is a free tool for editing saved Minecraft worlds. It has an option to add functionality through so-called ‘filters’.
The MCEdit filter I wrote is “dem_gen.py”, a Python script that collects a few input parameters from the user and then reads an ASCII GRID file (currently hard-coded to the above-mentioned Toronto area DEM), iterates through its rows (x direction) and columns (y direction in GIS, z in Minecraft), and recreates the DEM in Minecraft as a collection of ‘columns’ (z direction in GIS, y in Minecraft). Each terrain column is made of stone at the base and dirt as the top-most layer(s), or of other user-defined materials.
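For readers who want to play with the format before installing MCEdit, the two core pieces of such a filter – parsing the ASCII Grid and converting elevations into column heights – can be sketched in plain Python. The tiny grid below is made up for illustration; the real filter reads the Toronto CDEM extract and places actual stone and dirt blocks:

```python
def read_ascii_grid(text):
    """Parse an ESRI ASCII Grid: six header lines, then rows of cell values."""
    lines = text.strip().splitlines()
    header = {key.lower(): float(value)
              for key, value in (line.split() for line in lines[:6])}
    rows = [[float(v) for v in line.split()] for line in lines[6:]]
    return header, rows

# A tiny hand-made grid standing in for the real CDEM extract.
sample = """ncols 3
nrows 2
xllcorner 0
yllcorner 0
cellsize 30
NODATA_value -9999
75 80 90
70 76 85"""

header, dem = read_ascii_grid(sample)

# As in the filter: one column of blocks per cell; here 1 block = 5 m vertically.
def column_height(elev_m, metres_per_block=5):
    return round(elev_m / metres_per_block)

heights = [[column_height(v) for v in row] for row in dem]
print(heights)  # [[15, 16, 18], [14, 15, 17]]
```

The filter then stacks stone blocks up to each computed height, swaps the top layer(s) for dirt, and takes care of the axis remapping between the GIS and Minecraft coordinate systems.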
I have freshly uploaded the very first version 0.1 to GitHub, see https://github.com/crinner/mc_dem_gen. (This also serves as my first developer experience with GitHub!) The general framework for an MCEdit filter and the loop creating the new blocks were modified from the “mountain_gen.py” (Mountain Generator) filter found at http://www.mediafire.com/download.php?asfkqo3hk0lkv1f. The filter is ‘installed’ by placing it in the filter subfolder in the MCEdit installation. The process then simply involves creating an empty world (I used a superflat world with only a bedrock layer) and running the DEM Generator filter. To run any filter in MCEdit, select an area of the world, press ‘5’, and select the filter from the list.
Converting the 2,400 by 1,600 pixel CDEM dataset shown in the above screenshot of my QGIS project took about half a day on a middle-aged Dell Latitude E6410 laptop. The screenshot below shows that many data “chunks” are missing from this preliminary result, perhaps an issue when saving the terrain in MCEdit.
With a coarser DEM resolution of 600 by 400 pixels and using a newer Dell XPS 12 tablet (!), the processing time was reduced to 10 or so minutes and the result is promising. In the following screenshots, we are – I believe – looking at the outlets of the Humber River and Don River into Lake Ontario. Note the large vertical exaggeration that results from the horizontal dimensions being shrunk from around 1 block = 20m to 1 block = 80m, while vertically 1 block corresponds to 5m.
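The resulting exaggeration is simply the ratio of the two scales. With the numbers just mentioned (a derived figure, not one I measured in-game):

```python
# Horizontal scale after shrinking: 1 block = 80 m; vertical scale: 1 block = 5 m.
horizontal_m_per_block = 80
vertical_m_per_block = 5
vertical_exaggeration = horizontal_m_per_block / vertical_m_per_block
print(vertical_exaggeration)  # 16.0: the terrain appears 16 times too steep
```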
There remain a number of challenges, including a problem translating the geographic x/y/z coordinate system into the game’s x/-z/y coordinate system – the terrain currently is not oriented properly. More thought also has to be put into the scaling of the horizontal dimensions vis-a-vis the vertical dimension, adding the Lake Ontario water level, and creating signs with geographic names or other means of orientation. Therefore, your contributions to the GitHub project are more than welcome!
Update, 10 June 2015: I was made aware of the #MinecraftNiagara project, which Geospatial Niagara commissioned to students in the Niagara College GIS program. They aim to create “a 1:1 scale representation of Niagara’s elevation, roads, hydrology and wooded areas” to engage students at local schools with the area’s geography. It looks like they used ArcGIS and the FME converter, as described in a section of this blog post: http://geospatialniagara.com/backlog-of-updates/. Two screenshots of the Lower Balls Falls near St. Catharines were provided by @geoniagara’s Darren Platakis (before and after conversion):
The 2015 Annual Meeting of the Association of American Geographers (AAG) in Chicago is long gone – time for a summary of key lessons and notable ideas taken home from three high-energy conference days.
Choosing which sessions to attend was the first major challenge, as there were over ninety (90!) parallel sessions scheduled in many time slots. I put my program together based on presentations by Ryerson colleagues and students (https://gis.blog.ryerson.ca/2015/04/17/ryerson-geographers-at-aag-2015/) and those given by colleagues and students of the Geothink project (http://geothink.ca/american-associaton-of-geographers-aag-2015-annual-meeting-geothink-program-guide/), as well as by looking through the presenter list and finding sessions sponsored by select AAG specialty groups (notably GIScience and Cartography). Abstracts for the presentations mentioned in this blog can be found via the “preliminary” conference program at http://meridian.aag.org/callforpapers/program/index.cfm?mtgID=60.
Upon arrival, I was impressed by the size and wealth of the industrial and transportation infrastructure in Chicago as well as the volume of the central business district, as seen from the airport train and when walking around in the downtown core.
My conference started on Wednesday, 22 April 2015, with Session 2186 “Cartography in and out of the Classroom: Current Educational Practices”. In a diverse set of presentations, Pontus Hennerdal from Stockholm University presented an experiment with a golf-like computer game played on a Mercator-projected world map to help children understand map projections. Pontus also referred to the issue of “world map continuity” using an animated film that is available on his homepage at http://www.su.se/profiles/poer5337-1.188256. In the second presentation, Jeff Howarth from Middlebury College assessed the relationship between students’ spatial thinking skills and their ability to learn GIS. This research was motivated by an anonymous student comment about a perceived split of GIS classes into those students who “get it” vs. those who don’t. Jeff notes that spatial thinking, along with skills in orientation, visualization, and a sense of direction, sets students up for success in STEM (science, technology, engineering, math) courses, including GIS. Next was Cindy Brewer, Head of the Department of Geography at Penn State University, with an overview of additions and changes in the 2nd edition of her Esri Press book “Designing Better Maps”. The fourth presentation was given by David Fairbairn of Newcastle, Chair of the Commission on Education and Training of the International Cartographic Association. David examined the accreditation of cartography-related programs of study globally and, somewhat surprisingly, concluded that cartography may not be considered a profession and that accreditation would bring more disadvantages (incl. management, liability, barriers to progress) than benefits to the discipline. Finally, Kenneth Field of Esri took the stage to discuss perceptions and misconceptions of cartography and the cartographer.
These include the rejection of the “map police” when trained cartographers dare to criticize the “exploratory playful” maps created by some of today’s map-makers (see my post at http://gis.blog.ryerson.ca/2015/04/04/about-quick-service-mapping-and-lines-in-the-sand/).
A large part of the remainder of Wednesday was spent in a series of sessions on “Looking Backwards and Forwards in Participatory GIS“. Of particular note were the presentations by Renee Sieber, professor of many things at McGill and leader of the Geothink SSHRC Partnership Grant (http://www.geothink.ca), and Mike McCall, senior researcher at Universidad Nacional Autonoma de Mexico. Renee spoke thought-provokingly, as usual, about “frictionless civic participation”. She observes how ever easier-to-use crowdsourcing tools are reducing government-citizen interactions to customer relationships: participation is becoming a product delivered efficiently, rather than a democratic process that engages citizens in a meaningful way. Mike spoke about the development of Participatory GIS (PGIS) in times of volunteered geographic information (VGI) and crowdsourcing, arguing for operationalizing VGI within PGIS. The session also included a brief discussion among audience members and presenters about the need for base maps or imagery as a backdrop for PGIS – an interesting question, as my students and I argue that “seed contents” will help generate meaningful discussion, thus going beyond including just a base map. Finally, two thoughts brought forward by Muki Haklay of University College London: Given the “GIS chauffeurs” of early-day PGIS projects, he asked whether we continue to need such facilitators in times of Renee Sieber’s frictionless participation. And he observed that the power of a printed map brought to a community development meeting remains incontestable. Muki’s extensive raw notes from the AAG conference can be found on his blog at https://povesham.wordpress.com/.
In the afternoon, I dropped in to Session 2478, which celebrated David Huff’s contribution to applied geography and business. My colleague Tony Hernandez chaired and co-organized the session, in which Tony Lea, Senior VP Research of Toronto-based Environics Analytics and instructor in our Master of Spatial Analysis (MSA) program, and other business geographers paid tribute to the Huff model for predicting consumers’ spatial behaviour (such as the probability of patronizing specific store locations). Members of the Huff family were also present to remember the man behind the model, who passed away in Summer 2014. A written tribute by Tony Lea can be found at http://www.environicsanalytics.ca/footer/news/2014/09/04/a-tribute-to-david-huff-the-man-and-the-model.
Also on my agenda was a trip to the AAG vendor expo, where I was pleased to see my book – “Multicriteria Decision Analysis in Geographic Information Science” – in the Springer booth!
Thursday, 23 April 2015, began with an 8am session on “Spatial Big Data and Everyday Life“. In a mixed bag of presentations, Till Straube of Goethe University in Frankfurt asked “Where is Big Data?”; Birmingham’s Agnieszka Leszczynski argued that online users are more concerned with controlling their personal location data than with how they are ultimately used; Kentucky’s Matt Wilson showed select examples from half a century of animated maps that span the boundary between data visualization and art; Monica Stephens of the University at Buffalo discussed the rural exclusions of crowdsourced big data and characterized Wikipedia articles about rural towns in the US as Mad Libs based on Census information; and finally, Edinburgh’s Chris Speed conducted an IoT self test, in which he examined the impact of an Internet-connected toilet paper holder on family dynamics…
The remainder of Thursday was devoted to CyberGIS and new directions in mapping. The panel on “Frontiers in CyberGIS Education” was very interesting in that many of the challenges reported in teaching CyberGIS really are persistent challenges in teaching plain-old GIS. For example, panelists Tim Nyerges, Wenwen Li, Patricia Carbajalas, Dan Goldberg, and Britta Ricker noted the difficulty of getting undergraduate students to take more than one or two consecutive GIS courses; the challenge of teaching advanced GIS concepts such as enterprise GIS and CyberGIS (which I understand to mean GIS-as-a-service); and the nature of Geography as a “discovery major”, i.e. a program that attracts advanced students who are struggling in their original subjects. One of the concluding comments from the CyberGIS panel was a call to develop interdisciplinary, data-centred programs – ASU’s GIScience program was named as an example.
Next, I caught the first of two panels on “New Directions in Mapping“, organized by Stamen’s Alan McConchie, Britta Ricker of U Washington at Tacoma, and Kentucky’s Matt Zook. A panel consisting of representatives of what I call the “quick-service mapping” industry (Google, Mapbox, MapZen, Stamen) talked about job qualifications and their firms’ relation to academic teaching and research. We heard that “Geography” has an antiquated connotation and sounds old-fashioned, that the firms use “geo” to avoid the complexities of “geography”, and that geography is considered a “niche” field. My hunch is that geography is perhaps rather too broad (and “geo” even broader), but along with Peter Johnson’s (U Waterloo) comment from the audience, I must also admit that you don’t need to be a geographer to make maps, just like you don’t have to be a mathematician to do some calculations. Tips for students interested in working for the quick-service mapping industry included developing a portfolio, practicing their problem-solving and other soft skills, and knowing how to use platforms such as GitHub (before learning to program). A telltale tweet summarizing the panel discussion:
Realizing that geographers have done a worse job than I thought of explaining the relevance of our work to mappers in industry… #AAG2015
— Emma Slager (@EmmaSlager) April 23, 2015
Thursday evening provided an opportunity to practice some burger cartography. It was time for the “Iron Sheep” hackathon organized by the FloatingSheep collective of academic geographers. Teams of five were given a wild dataset of geolocated tweets and a short 90-or-so-minute time frame to produce some cool & funny map(s) and win a trophy for the best, worst, or in-between product. It was interesting to see how a group of strangers, new to the competition and with no clue how to get started, would end up producing a wonderful map such as this :-)
My last day at AAG 2015, Friday, April 24, took off with a half-day technical workshop on “Let’s Talk About Your Geostack”. The four active participants got a tremendous amount of attention from instructor-consultant @EricTheise. Basically, I went from zero to 100 in terms of having PostgreSQL, PostGIS, Python, NodeJS, and TileMill installed and running on my laptop – catching up within four hours with the tools that some of my students have been talking about, and using, in the last couple of years!
In the afternoon, attention turned to OpenStreetMap (OSM), with a series of sessions organized by Muki Haklay, who argues that OSM warrants its own branch of research, OpenStreetMap Studies. I caught the second session which started with Salzburg’s Martin Loidl showing an approach in development to detect and correct attribute (tag) inconsistencies in OSM based on information contained in the OSM data set (intrinsic approach). Geothink co-investigator Peter Johnson of UWaterloo presented preliminary results of his study of OSM adoption (or lack thereof) by municipal government staff. In eight interviews with Canadian city staff, Peter did not find a single official use of OSM. Extensive discussions followed the set of four presentations, making for a highly informative session. One of the fundamental questions raised was whether OSM is distinct enough from other VGI and citizen science projects that it merits its own research approach. While typically considered one of the largest crowdmapping projects, it was noted that participation is “shallow” (Muki Haklay) with only 10k active users among 2 million registered users. Martin Loidl had noted that OSM is focused on geometry data, yet with a flat structure and no standards other than those agreed-upon via the OSM wiki. Alan McConchie added the caution that OSM contributions only make it onto the map if they are included in the “style” files used to render OSM data. Other issues raised by Alan included the privacy of contributors and questions about authority. For example, contributors should be aware of the visualization and statistics tools developed by Pascal Neis at http://neis-one.org/! We were reminded that Muki Haklay has developed a code of engagement for researchers studying OSM (read the documentation, experience actively contributing, explore the data, talk to the OSM community, publish open access, commit to knowledge transfer). 
Muki summarized the debate by suggesting that academics should act as “critical friends” vis-à-vis the OSM community and project. To reconcile “OSM Studies” with VGI, citizen science, and the participatory Geoweb, I’d refer to the typology of user contributions developed by Rinner & Fast (2014). In that paper, we do in fact single out OSM (along with Wikimapia) as a “crowd-mapping” application, yet within a continuum of related Geoweb applications.
This is a brief account of two “Mapping for Nepal” sessions at Ryerson University’s Department of Geography and Environmental Studies. In an earlier post found at http://gis.blog.ryerson.ca/2015/04/27/notes-for-nepalquake-mapping-sessions-ryersonu-geography/, I collected information on humanitarian mapping for these same sessions.
Mapathon @RyersonU, Geography & Spatial on Monday, 27 April 2015, 10am-2pm. 1(+1) prof, 2 undergrads, 3 MSAs, 1 PhD, 1 alumnus came together two days after the devastating earthquake to put missing roads, buildings, and villages in Nepal on the map using the Humanitarian OpenStreetMap Team’s (HOT) task manager. Thank you to MSA alumnus Kamal Paudel for initiating and co-organizing this and the following meetings.
Mapathon @RyersonU, Geography & Spatial on Sunday, 3 May 2015, 4pm-8pm. Our second Nepal mapathon brought together a total of 15 volunteers, including undergraduate BA in Geographic Analysis and graduate Master of Spatial Analysis (MSA) students along with MSA alumni, profs, and members of the Toronto-area GIS community. On this Sunday afternoon we focused on completing and correcting the road/track/path network and adding missing buildings to the map of Nepal’s most affected disaster zones. Photos via our tweets:
— Claus Rinner (@ClausRinner) May 3, 2015
My observations and thoughts from co-organizing and leading these sessions, and participating in the HOT/OSM editing:
- In addition to supporting the #EqResponseNp in a small way, the situation provided an invaluable learning opportunity for everyone involved. Most participants of our sessions had never contributed to OSM, and some did not even know of its existence, despite being Geography students or GIS professionals. After creating OSM accounts and reading up on the available OSM and Nepal-specific documentation, participants got to map hundreds of points, lines, or polygons within just a couple of hours.
- The flat OSM data model – conflating all geometries and all feature types in the same file – together with unclear or inconsistent tagging instructions for features such as roads, tracks, and paths challenged our prior experience with GIS and geographic data. Students in particular were concerned about the fact that their edits would go live without “someone checking”.
- While the HOT task manager and the general workflow of choosing, locking, editing, and saving an area was a bit confusing at first, the iD editor used by most participants was found to be intuitive and was praised by GIS industry staff as “slick”.
- The most recent HOT tasks were marked as not suitable for beginners after discussions among the OSM community about poor-quality contributions, leaving few options for (self-identified) beginners. It was most interesting to skim over the preceding discussion on the HOT chat and mailing list, e.g. reading a question about “who we let in”. I am not sure how the proponent would define “we” in a crowd-mapping project such as OSM.
- There was a related Twitter #geowebchat on humanitarian mapping for Nepal: “How can we make sure newbies contribute productively?”, on Tuesday, 5 May 2015 (see transcript at http://mappingmashups.net/2015/05/05/geowebchat-transcript-5-may-2015-how-can-newbies-contribute-productively-to-humanitarian-mapping/).
- The HOT tasks designated for more experienced contributors allowed contributors to add post-disaster imagery as a custom background. I was not able to discern whether buildings were destroyed or where helicopters could land to reach remote villages, but I noticed numerous buildings (roofs) that were not included in the standard Bing imagery and therefore missing from OSM.
- The GIS professionals mentioned above included two analysts with a major GIS vendor, two GIS analysts with different regional conservation authorities, a GIS analyst with a major retail chain, and at least one GIS analyst with a municipal planning department (apologies for lack of exact job titles here). The fact that these, along with our Geography students, had mostly not been exposed to OSM is a concern, which however can be easily addressed by small changes in our curricula or extra-curricular initiatives. I am however a bit concerned as to whether the OSM community will be open to collaborating with the #GIStribe.
- With reference to the #geowebchat, I’d posit that newbie != newbie. Geographers can contribute a host of expertise around interpreting features on the ground, even if they have “never mapped” (in the OSM sense of “mapping”). Trained GIS experts understand how features on the ground translate into data items and cannot be considered newbies either. In addition, face-to-face instruction by, and discussion with, experienced OSM contributors would certainly help achieve higher efficiency and quality of OSM contributions. In this sense, I am hoping that we will have more crowd-mapping sessions @RyersonU Geography, for Nepal and beyond.
This is an impromptu collection of information to support a series of meetings at which Ryerson students, faculty, and alumni of the Department of Geography and Environmental Studies get started with OpenStreetMap (OSM) improvements for Nepal. As part of the international OSM community’s response, these contributions may help rescuers and first responders locate victims of the devastating earthquake.
Note that I moved the reports on our mapping sessions out into a separate post at http://gis.blog.ryerson.ca/2015/05/04/notes-from-nepalquake-mapping-sessions-ryersonu-geography/.
Information from local mappers: Kathmandu Living Labs (KLL), https://www.facebook.com/kathmandulivinglabs. KLL’s crowdmap for reports on the situation on the ground: http://kathmandulivinglabs.org/earthquake/
Humanitarian OpenStreetMap Team (HOT): http://hotosm.org/, http://wiki.openstreetmap.org/wiki/2015_Nepal_earthquake
Guides on how to get started with mapping for Nepal:
Communications among HOT contributors worldwide: https://kiwiirc.com/client/irc.oftc.net/?nick=mapper?#hot. Also check @hotosm and #hotosm on Twitter.
Things to consider when mapping:
- When you start editing, you are locking “your” area (tile) – make sure you tag along, save your edits when you are done, provide a comment on the status of the map for the area, and unlock the tile.
- Please focus on “white” tiles – see a discussion among HOT members on the benefits and drawbacks of including inexperienced mappers in the emergency situation, http://thread.gmane.org/gmane.comp.gis.openstreetmap.hot/7540/focus=7615 (via @clkao)
- In the meantime (May 3rd), some HOT tasks have been designated for “more experienced mappers” and few unmapped areas are left in other tasks; you can however also verify completed tiles or participate in tasks marked as “2nd pass” in order to improve on previous mapping.
- Don’t use any non-OSM/non-HOT online or offline datasets or services (e.g. Google Maps), since their information cannot be redistributed under the OSM license.
- Don’t overestimate highway width and capacity; consider all options (including unknown road, track, path) described at http://wiki.openstreetmap.org/wiki/Nepal/Roads. Here is a discussion of the options, extracted from the above-linked IRC channel (check for newer discussions on IRC or the HOT email list):
11:23:18 <ivansanchez> CGI958: If you don’t know the classification, it’s OK to tag them as highway=track for dirt roads, and highway=road for paved roads
11:26:06 <SK53> ivansanchez: highway=road is not that useful as it will not be used for routers, so I would chose unclassified or track
12:31:12 <cfbolz> So track is always preferable, if you don’t have precise info?
12:32:11 <cfbolz> Note that the task instructions directly contradict this at the moment: “highway=road Roads traced from satellite imagery for which a classification has not been determined yet. This is a temporary tag indicating further ground survey work is required.”
Another example of a discussion of this issue: http://www.openstreetmap.org/changeset/30490243
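The gist of this (contested) advice can be condensed into a small decision rule. The following Python sketch is my own reading of the thread, not official HOT guidance; the function and its parameters are hypothetical, and actual tagging should defer to the current task instructions and the Nepal/Roads wiki page.

```python
# Hedged sketch distilling the IRC advice above into a decision rule.
# This is one interpretation of a contested discussion, not HOT policy.

def suggest_highway_tag(classification_known, looks_like_dirt_road):
    """Pick a provisional OSM highway=* value for a road traced
    from satellite imagery."""
    if classification_known:
        # Use the actual class (residential, tertiary, ...) when known.
        return None
    if looks_like_dirt_road:
        return "track"  # dirt road of unknown classification
    # SK53's point: routers tend to ignore highway=road, so prefer
    # unclassified over the task instructions' temporary highway=road.
    return "unclassified"

print(suggest_highway_tag(False, True))   # track
print(suggest_highway_tag(False, False))  # unclassified
```

Note the tension this encodes: the task instructions favoured highway=road as a temporary tag pending ground survey, while the IRC posters favoured tags that routing software will actually use.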
- Map only things that are there, not those that may/could be there. Example: Don’t map a helipad object if you spot an open area that could be used for helicopter landing; create a polygon with landuse=grass instead (thanks to IRC posters SK53 and AndrewBuck).
- Buildings as point features vs. residential areas (polygons): To expedite mapping, use landuse=residential, see IRC discussion below.
More about mapping buildings: http://wiki.openstreetmap.org/wiki/Nepal_remote_mapping_guide
- Be aware that your edits on OSM are immediately “live” (after saving) and become part of the one and only OSM dataset. In addition, your work can be seen by anyone and may be analyzed in conjunction with your user name and locations (and thus potentially with your personal identity).
Note that I am a geographer (sort of) and GIScientist, but not an OpenStreetMap expert (yet). If you have additions or corrections to the above, let me know!