[Skip the first paragraph if you are not interested in the German context of the false-positives issue.]
On June 5, 2020, OVALMedia’s Robert Cibis interviewed the Austrian microbiologist and infectious disease specialist Dr. Martin Haditsch about laboratory tests, specifically the PCR test used globally to detect the SARS-CoV-2 virus in a person. The interview [in German] broached the issue of false-positive test results in the context of a low-prevalence disease and imperfect tests. Two YouTube copies of the one-hour interview have a total of over 100,000 views at the time of writing. The next day, Swiss entrepreneur and YouTuber Samuel Eckert presented a 20-minute summary and explanation of the false-positive issue using an interactive Excel spreadsheet. His video currently boasts over 225,000 views with 15,000 likes. Possibly in response, the German Federal Minister of Health Jens Spahn, a banker by training, said in a brief interview contained in a tweet from public TV channel ARD on June 14 that if COVID-19 prevalence continued to drop while testing was simultaneously expanded into the millions (as has been the case in many Western countries since mid-April), one would eventually obtain more false-positive than true-positive results.
Six weeks later, in a media briefing about the Province of Ontario’s safe reopening of schools, Associate Chief Medical Officer of Health Dr. Barbara Yaffe cautioned against seeing widespread COVID-19 testing as a solution. She went on to state that “in fact, if you’re testing in a population that doesn’t have very much COVID, you’ll get false positives almost half the time.”
To understand the impact of two test characteristics – sensitivity and specificity – we can use a confusion matrix to display true and false positive test results along with true and false negative test results. In Geography, for example, we use confusion matrices to assess the accuracy of the classification of a remotely sensed image. The confusion matrix shows how many pixels of a certain land use class, such as agricultural, were correctly classified as agricultural or misclassified as another land use, such as forest, based on a ground-truthed image for comparison in a limited part of the study area. Similarly, the following spreadsheet estimates how often a positive or negative PCR test result occurs in a true carrier of the virus versus an unaffected person.
The true proportions and counts of “infected” and not “infected” are based on the number of tests completed and the true prevalence of COVID-19 in the population, which is largely unknown. At the tail end of winter, coronaviruses typically cause cold infections in some 5% to 10% of the population. As COVID-19 has tapered off over the summer, a value below 1% seems reasonable, here set to 0.5% as an example. In recent days, there were 40,000 or more tests completed in Ontario, with some 400 new “cases” detected, for a test-positivity rate of around 1%. Note that in conjunction with the PCR test, I like to put “cases” and “infected” in quotation marks, since the test does not distinguish sick people carrying an infectious virus load from healthy, presymptomatic, or asymptomatic people carrying traces of inactive genetic material from the virus.
With a prevalence of, e.g., 0.5%, we expect that 200 of the 40,000 people tested are truly “infected”. If we assume a test sensitivity of 99%, we get 198 correct positives out of the 200 true “infections” that should be detected, while two are missed. These two misses are false-negative results. False negatives are problematic, since potentially infectious persons are told that they don’t pose a risk to their environment. At a low prevalence of the disease, however, these misses are very small, even negligible, in comparison with the correctly identified negatives.
After subtracting the 200 “infected” from the 40,000 tests completed, 39,800 people are left who should test as not “infected”. However, medical tests are usually imperfect in that they can both miss a condition that is present (false negatives, see above) and indicate the presence of a condition when it is not there (false positives). The characteristic that describes the test’s accuracy in this respect is called specificity. It refers to how specifically the test is geared towards its target, here the SARS-CoV-2 virus, rather than picking up other targets. The specificity of the PCR test for SARS-CoV-2 is a bit of a mystery and a moving target, but before I discuss it, I will go through one example to explain the significance of false-positive results in the current phase of the pandemic.
With an assumed specificity of 99.5%, our test would determine 39,601 correct negatives out of the 39,800 persons who are truly not “infected”. An issue arises with the remaining 199 false positives, which are wrongly flagged by the test although they do not carry the virus. While this number is small in proportion to the correct negatives, we need to view it in relation to the 198 correct positives. From the perspective of each person testing positive, the chance of being falsely positive is 199 out of the 397 total positive test results, thus about 50% of positives are false, in line with the warnings by the officials cited above. This proportion is only so large because of the low prevalence of the disease. You can use the spreadsheet link above to examine the effect of increasing or decreasing the prevalence (try 0%!) and modifying the test characteristics.
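The confusion-matrix arithmetic above can be verified in a few lines of Python. The prevalence, sensitivity, and specificity values below are the illustrative assumptions from this example, not measured characteristics of any actual PCR test:

```python
# Expected confusion-matrix counts for the worked example above.
# Prevalence, sensitivity, and specificity are the illustrative
# assumptions from the text, not measured PCR test characteristics.

def confusion_counts(tests, prevalence, sensitivity, specificity):
    """Return expected (true_pos, false_neg, true_neg, false_pos) counts."""
    infected = tests * prevalence           # truly "infected" among those tested
    not_infected = tests - infected         # truly not "infected"
    true_pos = infected * sensitivity       # correctly detected
    false_neg = infected - true_pos         # missed infections
    true_neg = not_infected * specificity   # correctly cleared
    false_pos = not_infected - true_neg     # wrongly flagged
    return true_pos, false_neg, true_neg, false_pos

tp, fn, tn, fp = confusion_counts(40_000, 0.005, 0.99, 0.995)
print(round(tp), round(fn), round(tn), round(fp))  # 198 2 39601 199

# Share of positive results that are false (the ~50% discussed above):
print(fp / (tp + fp))  # about 0.50
```

Rerunning with a higher prevalence, say 5%, shrinks the false share of positives to under 10%, mirroring the experiment suggested with the spreadsheet.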
Due to the significant restrictions imposed on test-positive persons’ lives, 50% false positives is a highly problematic proportion. In addition, a difference between 200 and 400 new “cases” may affect the assessment of the current public health threat, although the raw count of positive PCR tests is almost meaningless without considering who was tested and how many tests were completed, an issue which shall be discussed at another time.
Some critics of the pandemic response use a lower specificity of 98.6% and come to the conclusion that all positive test results in certain jurisdictions such as Germany are false positives and that the pandemic has therefore ended. That value stems from a study of lab results (external PDF, see page 12) conducted by a German accreditation body in April, which found an average of 98.6% correct negative results across over 400 participating labs. Others have noted that the actual proportion of positive test results has gone down to values as low as 0.6% in Germany, 0.3% in Canada, and even 0% in New Zealand, which – given the large numbers of tests completed – would not be possible if the false-positive rate were any higher than these values. This can be explained by the fact that the quality assurance study reported results for individual gene sequences, while the testing protocols in some countries were modified to require testing of at least two gene sequences. In this case, the false-positive rates of 0.5% for the E gene region (which I used in the above example) and, e.g., around 2% for another characteristic gene would have to be multiplied, resulting in an extremely low false-positive rate of 1 in 10,000 or so. However, Public Health Ontario’s in-house test methods target the E gene and clearly state that “Specimens with a single target detected … will be reported as COVID-19 virus detected, which is sufficient for laboratory confirmation of COVID-19 infection.” (emphasis added)
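The two-gene multiplication can be sketched as follows. The single-target rates are the illustrative figures from the paragraph above, and multiplying them assumes the two targets fail independently, which real lab errors need not do:

```python
# Combined false-positive rate when a protocol requires BOTH gene
# targets to be positive. Rates are the illustrative figures from
# the text; independence of the two errors is an assumption.
fp_e_gene = 0.005   # ~0.5% single-target false-positive rate (E gene)
fp_second = 0.02    # ~2% rate assumed for a second characteristic gene

combined_fp = fp_e_gene * fp_second  # independent-errors assumption
print(combined_fp)  # roughly 1e-4, i.e. about 1 in 10,000
```

With a single-target protocol, by contrast, the full single-gene rate applies, which is why the Ontario reporting rule quoted above matters for this example.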
In summary, false-positive PCR test results likely are, or have been, an issue in some jurisdictions during some phases of the corona crisis, and politicians and health officials should be more transparent about it. Questions to be asked by the media include: When did you become aware of the impact of false positives? What is the magnitude of the problem currently? Have lab testing procedures been modified to address the issue? And is widespread testing still meaningful at this point in the pandemic? In fact, just yesterday, the Ontario government changed course by discouraging asymptomatic people from getting tested. Yet this reversal seems more related to preserving lab capacity for symptomatic persons and those with suspected exposure, who need faster test results, than to the fundamental issues with over-testing.
Karina Reiß, Sucharit Bhakdi: Corona Fehlalarm? Zahlen, Daten und Hintergründe [Corona False Alarm? Numbers, Data and Background]. Goldegg, Vienna, Austria. Published 1 June 2020 (eBook, EAN 9783990601907) and 23 June 2020 (paperback, 160 pages, ISBN 978-3-99060-191-4).
Published in the midst of the SARS-CoV-2 pandemic, “Corona Fehlalarm?” (German for “Corona False Alarm?”) gives reason for deep reflection on where humanity stands with respect to rational decision-making, public health, and the social contract. In fact, the authors would argue that we are in a panic rather than a pandemic, and that we are not in the midst but at the end of the COVID-19 curve, though we may only be at the beginning of much worse collateral damage inflicted by the global overreaction to the appearance of the novel coronavirus in December 2019.
Professor Karina Reiß, whose natural sciences doctorate is in cell biology, is a faculty member in the Department of Dermatology and Allergology at the University Hospital of the northern German province of Schleswig-Holstein in Kiel. Her co-author and husband, retired professor Sucharit Bhakdi, holds a medical doctorate and spent his career as a faculty member in the institutes for Medical Microbiology at the universities of Gießen and Mainz in Germany. Dr. Bhakdi was one of the early critics of the lockdown measures in Germany. On March 26, he publicly asked German Chancellor Dr. Angela Merkel five questions about the threat assessment of COVID-19, which remained unanswered. In a June 3 video interview with the alternative news magazine Rubikon, Dr. Bhakdi explained the genesis of “Corona False Alarm?” out of his and his wife’s frustration with the repeated extension of many emergency measures by the German governments.
The book consists of ten chapters framed by an introduction and conclusion. The introduction, subtitled “Start of a Nightmare?”, outlines the first half of 2020 that most of us have experienced as an onslaught of bad news and terrifying images from Wuhan’s hospitals to northern Italy’s morgues, supermarket lineups and empty shelves, isolated seniors trapped in long-term care homes and police drones surveying deserted city streets and parks. The authors briefly outline the discovery of the novel coronavirus SARS-CoV-2 and the associated illness COVID-19. Subsequent chapters discuss the data and state of scientific knowledge concerning the public health threat from SARS-CoV-2; describe the pandemic situation in Germany; explain collateral damage from the lockdown measures; and compare the lockdown with Sweden’s light-handed response. Additional chapters suggest alternative emergency measures that could have been taken and analyze the role of the media in this crisis situation. The book ends with a short chapter asking where we will go from here, a concluding note, and an appendix with 208 numbered online references.
The first substantial chapter analyses the threat level of the so-called “killer virus” based on the relation of fatalities to infections. The authors explain three tremendous challenges with counting infections: (1) use of the non-validated PCR test with unknown false-positive and false-negative rates, which becomes problematic when the (true) prevalence of the infection decreases towards the end of an epidemic; (2) testing being limited to symptomatic patients instead of sampling across the entire population from an early stage; and (3) lack of attention to the fact that the number of tests completed directly influences the number of infections found, potentially resulting in a “lab pandemic”. The authors only present a hypothetical example here; I believe they could have easily used the example of Germany or any other country in which the number of tests conducted was increased significantly as lab capacities became available in the early stages of the corona crisis, resulting in an apparent exponential growth of cases while the percentage of positive test results quickly started to decline.
With respect to COVID-19 fatalities, Drs. Reiß and Bhakdi emphasize that the official guidelines in Germany, the UK, Sweden, the US, and probably most other countries are known to count anyone who has tested positive for the virus as a “corona death”, regardless of the ultimate cause of death. In some countries, a suspected infection was enough to be included in the death count. In addition, the agencies discouraged or prohibited autopsies for fear of endangering the pathologist. Nevertheless, a dissenting pathologist in Hamburg conducted autopsies on over 100 corona-related fatalities and found that all of them had at least one co-morbidity, most frequently cardio-vascular diseases. Similar observations are reported from Switzerland and Italy, casting doubt on the degree to which the SARS-CoV-2 virus caused the patients’ deaths. The authors also place the “corona deaths” in the context of Germany’s regular mortality of 2,500 to 3,000 deaths per day, and specifically of the death rate among people over 80 years of age. The text and graphic are a bit confusing here, but they nevertheless illustrate the minimal impact of COVID-19 even on elderly mortality compared to the big killers: heart disease and cancer.
Still in the same chapter, Drs. Reiß and Bhakdi summarize different ways to compare the risks from COVID-19 and influenza. The infection-fatality rate (IFR) of a normal flu season in Germany is 0.1% to 0.2% with a few hundred deaths. However, in 2017/18, 25,000 patients died from the flu with 330,000 reported cases, resulting in a stunning 8% lethality. Even the original WHO estimate of 3% to 4% IFR for COVID-19 was lower, while current estimates have been revised to 0.4% or less, taking into account a large number of undetected infections. For example, the CDC’s best estimate is now 0.26%, identical to the estimate from a comprehensive population-wide study by Prof. Streeck and team of the corona hotspot Heinsberg county in western Germany. The book’s authors emphasize that this makes COVID-19 comparable to a moderate flu season and dispels the myth of the “killer virus”. They also note that while the elderly are at much higher risk than the young, it is the co-morbidities that cause death, and that many healthy elderly people have survived the infection.
This chapter ends with a selective review of local factors that may have influenced the higher death counts and rates in COVID-19 hotspots in Italy, Spain, Britain, and the US. These factors include different testing regimes, historic underfunding of the hospital and healthcare system, hospital infections, antibiotics resistance, ad-hoc guidelines for medical treatment of COVID-19, classification of fatalities, local funeral logistics, fear and panic generated by media images, age structure, and regional air pollution.
Chapter 3 of “Corona False Alarm?” is a sharp critique of the prevailing expert advice and political decisions in Germany, yet it provides many lessons for other countries. The authors denounce the ever-changing goalposts of the pandemic threat assessment, from the case-doubling rate to the effective reproduction number R, the calculation of which changed several times, to the thresholds on regional counts of new infections per 100,000 population currently in place. An extensive quote from Stanford Professor John Ioannidis is presented in contrast to the seemingly erratic government communications and decisions around mid-March 2020. Fear-mongering with spurious models, best known from the Imperial College group around Prof. Ferguson, and individual narratives by Germany’s “top” virologist Prof. Drosten about exploding cases and triage decisions in an overburdened healthcare system inevitably led to the lockdown decision effective March 23. Among other evidence that the lockdown was ill-advised, the authors present a copy of the infectious disease agency RKI’s estimated R curve, published in mid-April, which shows that the peak of the pandemic was passed in early March, before any measures were taken.
Readers with a critical disposition will already know many of these and the following details, but seeing them organized and summarized in book form gives them additional logic and credibility. The RKI’s R curve was extensively scrutinized by Prof. Homburg of the University of Hannover. Another early warning that the pandemic was “over” came in late March from Dr. Wittkowski, whose testimony could be added to the book. Despite the evidence, the German lockdown was extended and makeshift face coverings were required in some indoor settings such as stores, a move the authors decry as capricious at best. Brainwashing through the mainstream media and the stoking of fear of a “second wave” by Prof. Drosten and others resulted in broad compliance with the lockdown, distancing, and mask regulations. This contrasts with the known seasonality of coronaviruses, illustrated in the book with reference to a 1998 study from Finland. The authors’ frustration is tangible when they report the slow pace of re-opening throughout May, the further extension of many measures until the end of June, and Chancellor Merkel’s recent statements that “we are still at the beginning of the pandemic” …
Chapter 4 discusses the German healthcare system and the occupancy of intensive-care beds and respirators throughout the pandemic. With reference to official data and a model from the “Corona Initiative of German SMEs”, the authors show that the system was nowhere near capacity at any point in time. In addition, they criticize the practice of bringing frail elderly patients, who would have gone into palliative care during normal times, into the ICU and exposing them to futile respirator treatment. The chapter ends with a summative assessment of the COVID-19 situation in Germany, including that there never was exponential growth of infections to begin with, and that government decision-makers declared a pandemic emergency without justification and enforced nonsensical measures instead of living up to their oath of office: to work towards the wellbeing of Germans and protect them from harm. A section on “what the government did right” is left demonstratively empty.
Chapter 5 deals with the collateral damage from the lockdown measures. Reference is made to a leaked crisis management analysis from the German Ministry of the Interior, which suggests that the pandemic may have been a global false alarm and that its “cure” comes with a disproportionate cost in lives (e.g. from deferred surgeries and stroke sufferers avoiding hospitals), wellbeing (e.g. loneliness, depression, violence, abuse), and prosperity (e.g. unemployment, bankruptcies). A particularly cruel side effect of the social distancing requirements was the isolation of seniors. The authors also highlight the impact on children and on the poorest regions of the world, before turning their sights in Chapter 6 to a handful of countries that averted general lockdowns. Given that more specific and proven infectious disease control measures existed, it is not surprising that high-density Japan (with little testing), South Korea (with extensive testing and tracing), Hong Kong, Iceland, and even the (in)famous Sweden have epidemic curves and death rates similar to (or better than) those of the countries with the strictest and longest lockdowns, including France, Italy, and Spain. The authors call out German politicians and media for putting illicit pressure on Swedish decision-makers to follow suit with the Europe-wide lockdowns. Since Sweden has now reached one of the higher death rates in the world, it would be helpful to add details that may explain the – from today’s perspective – mixed results of the Swedish approach. Conversely, an interesting example included in the book is the Czech Republic, where due to a court decision some restrictions were eased much earlier (late April) than elsewhere, without noticeable impact on COVID-19 cases.
According to Drs. Reiß and Bhakdi, consistent protection of the at-risk population, in particular the residents of long-term care homes, would have been the right approach to addressing SARS-CoV-2. Chapter 7 also deconstructs politicians’ claims that the pandemic continues and that normality will not return “until a vaccine is found”. Lockdown sceptics were particularly dismayed when Bill Gates got to make a 9-minute statement on public TV’s 15-minute prime-time news show, decreeing that all 7 billion humans will be immunized with a vaccine developed in a time span compressed from five years to 18 months by skipping some of the required safety checks. The authors explain immunity to coronaviruses on the basis of two natural mechanisms involving antibodies and T cells, noting that T-cell immunity against coronaviruses has been largely ignored in public discourse. The much-cited “herd immunity” for corona and flu viruses is described as a relative concept, which also relies on cross-immunity from earlier virus variants. Existing cross-immunity may very well explain the high percentage of asymptomatic and mild infections with SARS-CoV-2. Importantly, the same virus can never cause a catastrophic second wave, although a new, significantly different variant could. Given these factors, the authors call the aspiration to develop a SARS-CoV-2 vaccine foolish. They note parallels to the 2009 swine flu outbreak and the role of the WHO in determining what constitutes a pandemic. The same government advisors of today warned of a deadly disease then and recommended the purchase of millions of doses of a quickly developed vaccine, which later had to be destroyed. And some of the same critics raised their voices, including physician and health politician Dr. Wodarg and one of our authors, Prof. Bhakdi, competent voices of reason that are again ignored by decision-makers today.
Chapter 8 turns to one of the most disturbing developments in the corona crisis of 2020: the role of the mainstream media, their journalists, and the censorship of social media and the web. The public broadcasters in Germany and many other European countries are considered the fourth branch of societal power, with a mandate to check the legislative, executive, and judiciary branches. They are legally required to be politically independent and to contribute to the formation of public opinion – supposedly by reporting on opposing views regarding major questions and events. The book illustrates the complete failure of Germany’s public broadcasters, along with the private mainstream media (and the parliamentary opposition), to critically monitor government action. The authors outline the fear-mongering on national and regional TV, the uncritical reporting on a limited subset of science and modeling, and the discrediting and silencing of dissenting viewpoints. What could be added here is the emergence of a grassroots democratic resistance movement, whose goal of restoring and protecting the constitution was equally ignored, if not ridiculed, by the mainstream media.
In addition, internal and external “fact checkers” produced inaccurate ratings that flagged alternative perspectives as conspiracy theories and led to shadow banning or complete removal of YouTube videos and Facebook posts, as well as temporary website closures. Meanwhile, the often changing and contradictory messages from governments and the WHO were taken as the only permissible narrative. In interviews, Prof. Bhakdi has repeatedly stated that it should not be considered “courageous” in a democracy to state one’s dissenting opinion. Yet, disturbingly, we have indeed reached this point, both with respect to personal opinions vis-à-vis family members, friends, and neighbours and with respect to expert opinions. The authors of “Corona False Alarm?” take the government, the opposition, the media, and those in the know – here doctors and scientists – to task, and accuse those who remain silent of complicity in the collateral damage done.
The book ends with an even darker concluding Chapter 9 and a brief and faintly hopeful summary. The suspension of the constitutional freedoms of opinion, speech, movement, assembly, exercise of religion, and choice of profession was not proportional to the public health threat from SARS-CoV-2. Germans should have been particularly vigilant when critical journalism went missing, mass hysteria was stoked, and public opinion was constrained to a single narrative. The invitation to snitch on fellow citizens for violations of lockdown regulations is another sign of totalitarian practices established within a few months in what many of us considered a healthy democracy. I concur with Drs. Reiß and Bhakdi that extensive research and inquiry will be needed to learn from the corona crisis. The authors express their hope that the book will help prevent such (this!) history from ever repeating itself.
Although it must have been put together with a red-hot needle (or keyboard?), the book reads well, with a coherent storyline and fitting transitions between chapters. A few inaccuracies, duplications, and omissions are excused by the urgency of publishing this important perspective on the ongoing corona crisis. While the information is often specific to German events and actors, some additions could be made to cover the development of the crisis in German-speaking Austria and Switzerland, which had their own distinct experiences. Translations into other languages would likely require some clarifications, if not the addition of regionally relevant content. Owing to its subject, reading “Corona False Alarm?” could be quite upsetting for the unsuspecting reader, yet it is a must-read for anyone who wants to understand what on earth just happened!
This week, geographers from far and wide will converge on New Orleans, Louisiana, for the 2018 edition of the Annual Meeting of the American Association of Geographers. Ryerson’s geography faculty and graduate students are no exception, and there are even two senior undergraduate students presenting. Here are their research topics and presentation details from the conference program at https://aag.secure-abstracts.com/AAG%20Annual%20Meeting%202018/abstracts-gallery, sorted by abstract title:
Authors: Christopher Daniel*, Centre for the Study of Commercial Activity, Ryerson University; Tony Hernandez, Centre for the Study of Commercial Activity, Ryerson University
Topics: Business Geography, Urban and Regional Planning, Applied Geography
Keywords: Business Geography, Retail Geography, Urban Planning
Session Type: Paper
Scheduler ID: THU-026-8:00 a.m. Day: 4/12/2018 Start/End Time: 8:00 AM / 9:40 AM Room: Bacchus, Marriott, 4th Floor

Authors: Amber Grant*, Ryerson University, Environmental Applied Science and Management; Andrew Millward, Ryerson University, Department of Geography and Environmental Studies; Sara Edge, Ryerson University, Department of Geography and Environmental Studies; Ekow Ashun-Stone, Ryerson University, Department of Geography and Environmental Studies
Topics: Urban Geography, Social Geography
Keywords: urban forestry, tree cover, cities, management plan, equity, justice
Session Type: Paper
Scheduler ID: THU-102-3:20 p.m. Day: 4/12/2018 Start/End Time: 3:20 PM / 5:00 PM Room: Bourbon Room, Astor, Mezzanine

Authors: Stephen Swales*, Ryerson University; K. Forsythe, Ryerson University; Nicole Serrafero, Ryerson University
Topics: Business Geography, Geographic Information Science and Systems, Applied Geography
Keywords: geodemographics, business geography, upscale retail, Canadian cities
Session Type: Paper
Scheduler ID: FRI-026-8:00 a.m. Day: 4/13/2018 Start/End Time: 8:00 AM / 9:40 AM Room: Bacchus, Marriott, 4th Floor

Authors: Tony Hernandez*, Ryerson University
Topics: Business Geography
Keywords: Retail, business model, customer behaviour
Session Type: Paper
Scheduler ID: FRI-026-10:00 a.m. Day: 4/13/2018 Start/End Time: 10:00 AM / 11:40 AM Room: Bacchus, Marriott, 4th Floor

Authors: Joseph Aversa*, Ryerson University; Tony Hernandez, Ryerson University; Sean Doherty, Wilfrid Laurier University
Topics: Business Geography, Marketing Geography, Economic Geography
Keywords: Big Data, Retail Location Planning, Retail Decision Making
Session Type: Paper
Scheduler ID: THU-026-10:00 a.m. Day: 4/12/2018 Start/End Time: 10:00 AM / 11:40 AM Room: Bacchus, Marriott, 4th Floor
The Urban and Regional Information Systems Association (URISA) held its first conference on “Urban Planning Information Systems and Programs” in 1963 at the University of Southern California. Now dubbed “GIS-Pro”, the conference and URISA as an organization are preeminent destinations for the exchange of best practices among Geographic Information Systems (GIS) professionals. This year, Canada, the birthplace of GIS, welcomed URISA back for its 54th annual conference, held at Toronto’s Westin Harbour Castle hotel from Oct 31 to Nov 3, 2016.
The conference drew over 350 participants, with some 200 from Canada (including 150 from Ontario) and most of the remainder from the United States. Representatives from Australia, Barbados, Japan, Malaysia, the Republic of Korea, Saudi Arabia, South Africa, and the United Kingdom rounded out the pre-conference attendee list. URISA is deeply engaged in the professional development of its members, and consequently over 100 participants held the GISP designation. URISA is a founding member of the GIS Certification Institute, which awards the GISP status, and was an exhibitor and workshop organizer at the conference. URISA’s Vanguard Cabinet of young geospatial professionals, its GISCorps of worldwide GIS volunteers, its GIS Management Institute, and its regional chapters were all involved in organizing the conference. In one of the conference highlights, Esri Canada founder and president Alex Miller was inducted into the URISA GIS Hall of Fame. More information about URISA can be found at http://www.urisa.org/main/about-us/.
My former graduate students Justin Pierre and Richard Wen had signed up for a session on open-source geospatial software (https://gispro2016.sched.org/event/6nv7/free-puppies-and-solutions-open-source-and-commercial-software). Justin presented on his Master of Spatial Analysis (MSA) major research paper, “Developing an Argumentation Platform in an Open Source Stack”. His map-based discussion forum on Toronto’s bike lane network runs on Ryerson’s cloud at https://cartoforum.com/bikelanes/, albeit not always as reliably as we would wish. Richard outlined his MSA thesis research on “Using Open Source Python Packages for Machine Learning on Vector Geodata”. He applied the “random forest” algorithm to the task of detecting outliers in OpenStreetMap data, with the goal of developing tools for semi-automated data input and quality control in volunteered geographic data. Richard’s code and thesis are available at https://github.com/rrwen/msa-thesis. Both of these students were part of the Geothink SSHRC Partnership Grant, http://geothink.ca/, which supported their conference participation.
@RyersonGeo also had a booth in the GIS-Pro 2016 exhibit hall. While conference participants were interested in the Department’s programs and research expertise, the main attraction of our booth was an augmented-reality (AR) sandbox. The sandbox was built, set up, and staffed by our collaborators in the GIS team at the Central Lake Ontario Conservation Authority (CLOCA – http://cloca.ca/). CLOCA staff had attended Dr. Oswald’s GeovisUW workshop (https://storify.com/ClausRinner/geovisuw-workshop-ryersongeo) in June 2016 and were inspired by the visit of Ryerson’s Digital Media Experience Lab, which demoed an AR sandbox. In subsequent discussions about public outreach around surface- and groundwater protection, we proceeded with 3D printing of CLOCA’s watershed geography and terrain, while CLOCA staff endeavoured to build the sandbox. The two displays were used by CLOCA at the 2016 Durham Children’s Groundwater Festival in late September. At the GIS-Pro 2016 conference, some participants wondered about combining the two technologies, while others were interested in using the sandbox to model real-world terrain and simulate flooding. While accurate modeling of terrain and water flow may prove difficult, we are indeed planning to test the sandbox with semi-realistic scenarios.
In conclusion, applied GIS researchers and practicing GIS professionals are a friendly, close-knit group. The conference volunteers from our BA in Geographic Analysis, BA in Environment and Urban Sustainability, and MSA in Spatial Analysis programs were given a lot of free time and thoroughly enjoyed the conference. They were truly impressed by the number and variety of GIS applications presented, and left the conference with a greater sense of the professional community. For me, the conference confirmed that research and development of GIS should be led by geographers, within Geography departments, as we are best positioned to understand the professional end-user’s needs, yet also have the technical expertise, at least @RyersonGeo, to contribute to GIS R&D.
The session began with a presentation from the Government of North Carolina, discussing the importance of metadata. They are currently collaborating with a number of agencies to create and share a metadata profile to help others open up their data and understand how to implement the standards suggested. They have produced a living document which can be accessed through their webpage http://nconemap.com/DiscoverGetData/Metadata.aspx.
The next speaker at the session represented Pitkin County in Colorado. The county is an open data success story, with a number of great resources available for download on its website, including high-quality aerial imagery. An important aspect of their open data project was engaging with the local community to understand which data should be opened, and then marketing those datasets that were released.
The Government of Ontario was also present at this session, presenting on the current status of open data for the province. The Ontario Government promotes an Open by Default approach and currently has over 500 datasets from 49 agencies available to download through its portal at https://www.ontario.ca/search/data-catalogue?sort=asc. They are working to further increase the number of open datasets available.
A presentation by MapYourProperty (http://mapyourproperty.com/) provided an interesting perspective from the private sector, using open data to successfully run their business. They depend heavily on visualizing open data to provide a web-based mapping application for the planning and real estate community to search properties, map zoning information, and create a due diligence report based on the information found. This is one of many examples of open data helping to build new companies or helping existing companies thrive.
Lastly, a representative from Esri Canada’s (http://esri.ca/) BC office wrapped up the session by reminding us all of the importance of opening data. This included highlighting the seemingly endless benefits of open data, including providing information to help make decisions, supporting innovation, creating smart cities, and building connections. Of course, open data is big business for Esri too, with the addition of ArcGIS Open Data as a hosted open data catalog to the ArcGIS Online platform.
This session showcased some great initiatives taking place in Canada and the United States that prove the importance of opening up data and show how this can be done successfully. It is exciting to see what has been taking place locally and internationally, and it will be even more exciting to see what happens in the future, as both geospatial and aspatial data products continue to become more openly available.
Hello, pokemon trainers of the World! Today, I would like to explain Geographic Analysis using the ideas of the Pokemon GO game that you know only too well. I hope that you will return to the game with a good understanding of the geographic concepts and the geospatial technology behind it.
Save for some serious cheating, you have to move around this thing called THE REAL WORLD with your location-enabled device in order to “catch’em all”. Smartphone producers make it really difficult to manipulate the GPS location, because it is such a critical function of your device. So, unless you are truly close to that poke stop, you won’t be able to access its resources: free poke balls, razz berries, etc. In Geography, we often study the location of points-of-interest or services. For example, if you live or work close to a specific shopping mall or hospital, you are likely to use their services at one point or another. Or, if you are far away from a college or university and still choose to pursue higher education, you may have to move in order to be within reach of that institution.
To use a poke stop or gym, or to catch a pokemon, you do not need to be at their exact coordinate locations, but you need them to appear within your proximity circle as you move around. In Geographic Analysis, we often examine this “reach”, or catchment area, that is defined by proximity to locations of interest. For example, when a coffee chain looks to open a new store, Geographers will examine their competitors’ locations and surrounding neighbourhood profiles to determine whether there is a gap in coverage or whether there are catchment areas that include enough people of the right demographic to support an additional cafe. In Retail Geography, we call these areas “trade areas”. That’s why you can find clusters of Tim Hortons, Second Cup, and/or Starbucks at major intersections where the geodemographics are favourable – yes, this is likely a Geospatial Analyst’s work! And that’s also why you can find clusters of poke stops in some of your favourite busy locations.
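The game’s proximity circle and a Geographer’s catchment test boil down to the same distance calculation on the Earth’s surface. Here is a minimal Python sketch using the haversine formula; the coordinates and the 80 m interaction radius are illustrative assumptions, not Niantic’s actual values:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    R = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def within_reach(player, pois, radius_m=80):
    """Return the points of interest inside the player's proximity circle."""
    return [name for name, (lat, lon) in pois.items()
            if haversine_m(player[0], player[1], lat, lon) <= radius_m]

# Hypothetical poke stop coordinates in downtown Toronto (illustrative only)
stops = {
    "Fountain": (43.6577, -79.3788),  # ~20 m from the player
    "Statue":   (43.6600, -79.3850),  # several hundred metres away
}
print(within_reach((43.6576, -79.3790), stops))  # → ['Fountain']
```

Replace the poke stops with store or hospital locations and the same function answers the catchment question from the paragraph above.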
To support business decision-making, AKA “location intelligence”, Geographers use data on population, household incomes and employment, the movement of people, and the built environment. If you have ever “watched” pokevision.com for different locations, you will have noticed great variation in the pokemon spawn density and frequency. For example, in our screenshots below you can see tons of pokemon in downtown Toronto, but not a single one in an area of rural Ontario. Similarly, there are dozens of poke stops and several gyms within walking distance in the City but a lone poke stop in rural Ontario. The Pokemon GO vendor, Niantic, seems to be using geodemographics in determining where pokemon will spawn. They make it more likely for pokemon to spawn where there are “clients”: that is, yourselves, the trainers/players.
Fig. 1: poke stop locations and pokemon appearances in downtown Toronto (a, b), compared to rural Ontario (c)
Geographic space is a unique dimension that critically influences our lives and societies. The spatial distribution of people and things is something that Geographers study. Just like the spawning of pokemon in general, the appearance of the different types of pokemon is not randomly distributed either. For example, it has been shown that water-type pokemon are more likely to appear near water bodies. See all those Magikarp near the Toronto lakefront in the screenshot below? A few types of pokemon even seem restricted to one continent, such as Tauros in North America, and won’t appear on another (e.g., Europe). The instructions by “Professor Willow” upon installation of the app actually refer to this regional distribution of pokemon. I also believe that the points-of-interest, such as buildings, that serve as poke stops determine the pokemon types spawning near them. For example, the Ontario Power Building at College St. and University Ave. in Toronto regularly spawns an Electabuzz, as shown in the last screenshot below.
Fig. 2: (a) “Professor Willow” explaining his interest in studying the regional distribution of pokemon (what a great-looking Geographer he is!); screenshots of pokevision.com with (b) Magikarp at the Toronto lakefront and (c) an Electabuzz near the Ontario Power Building
In Environmental Geography, we often analyze (non-pokemon) species distribution, which is also not random. The availability of suitable habitat is critical, just like for pokemon. In addition, spatial interactions between species are important – remember the food chain you learned about in school. I am not sure that different pokemon types interact with one another; maybe that could be the topic of your first course project, as you enter a Geography program at university?
The techniques that we use within Geographic Information Systems (GIS) include suitability mapping, distance and buffer analysis, and distance decay. Distance decay means that encountering a species becomes less and less likely as you move away from suitable habitat. Or, in the business field, people become less and less likely to shop at a specific mall the further away they live from it. A buffer is an area within a specified distance around a point, line, or polygon, just like the proximity circle around your pokemon avatar. GIS software can determine whether other features fall within the buffer around a location. Instead of enabling access to poke stops or gyms around your avatar, Geographers would use buffer analysis to determine which residents have access to public transit, e.g. whether they are within a walking distance of 500 m or 1 km of a transit stop.
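The two ideas can be sketched in a few lines of Python. The buffer test is exact; the decay function and its parameters are illustrative assumptions of mine, not a calibrated retail model:

```python
def distance_decay(d_m, d0=500.0, beta=1.5):
    """Illustrative negative-power decay: the likelihood of a visit falls
    with distance. d0 (half-likelihood distance) and beta (steepness)
    are made-up calibration parameters."""
    return 1.0 / (1.0 + (d_m / d0) ** beta)

def in_buffer(dist_m, buffer_m):
    """Simple buffer test: is a resident within the buffer distance of a stop?"""
    return dist_m <= buffer_m

# Residents at increasing distances from the nearest transit stop (metres)
for d in (100, 500, 1000, 2000):
    print(d, in_buffer(d, 500), round(distance_decay(d), 2))
```

Note how the buffer gives a hard yes/no cut-off at 500 m, while distance decay models the more realistic gradual drop in likelihood.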
A final thought about how Pokemon GO has brought Geography to the headlines concerns important professional and societal challenges that Geographers can tackle. These range from map design and online map functionality to crowdsourcing of geospatial data, as well as the handling of big data, privacy concerns, and ultimately the control of people’s locations and movement. The now-defunct pokevision.com Web map used online mapping technology from Esri, one of the world-leading GIS software vendors and a promoter of professional Geography. Another approach, used by pokemonradargo.com, has trainers (users) report/upload their pokemon sightings in real time. This geospatial crowdsourcing comes with a host of issues around the accuracy of, and bias in, the crowdsourced data, as well as the use of free labour. For example, poke stops were created by players of a previous location-based game called “Ingress” and are now used by Niantic in a for-profit venture – Pokemon GO! Finally, you have all read about the use and misuse of lures to attract people to poke stops at different times of day and night. The City of Toronto recently requested the removal of poke stops near the popular island ferry terminal for reasons of pedestrian control and safety. Imagine how businesses or government could in the future control our movement in real space with more advanced games.
I hope I was able to explain how Pokemon GO is representative of the much larger impact of Geography on our everyday lives and how Geographers prepare and make very important, long-term decisions in business and government on the basis of geospatial data analysis. Check out our BA in Geographic Analysis or MSA in Spatial Analysis programs to find out more and secure a meaningful and rewarding career in Geography. And good luck hunting and training more pokemon!
As a follow-up to my post on “Geospatial Data Preparation for 3D Printed Geographies” (19 Sept 2015), I am providing an update on the different approaches that I have explored with my colleague Dr. Claire Oswald for our one-year RECODE grant entitled “A 3D elevation model of Toronto watersheds to promote citizen science in urban hydrology and water resources”. The tools that we have used to turn geospatial data into 3D prints include the program heightmap2stl; direct loading of a grey scale image into the Cura 3D modeling software; the QGIS plugin DEMto3D; the script shp2stl.js; and a workflow using Esri’s ArcScene for 3D extraction, saving in VRML format, and translating this file into STL format using the MeshLab software.
The use of the heightmap2stl program in a Windows environment requires a somewhat cumbersome process using the Windows command line and the resulting STL files seemed exceedingly large, although I did not systematically investigate this issue. I was therefore very pleased to discover accidentally that the Cura software, which I am using with my Lulzbot Taz 5 printer, is able to load greyscale images directly.
The following screenshot shows the available parameters after clicking “Load Model” and selecting an image file (e.g. PNG format, not an STL file). The parameters include the height of the model, the height of a base to be created, model width and depth within the available printer hardware limits, the direction of interpreting greyscale values as height (lighter or darker is higher), and whether to smooth the model surface.
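The mapping from a greyscale pixel to a printed height is presumably a simple linear interpolation between the base height and the model height. A small sketch of that assumed mapping (the function name and default values are mine, not Cura’s):

```python
def pixel_to_height_mm(pixel, model_height_mm=20.0, base_mm=2.0,
                       lighter_is_higher=True):
    """Map an 8-bit greyscale value (0-255) to a printed height in
    millimetres, mirroring the parameters Cura exposes when loading
    an image: model height, base height, and greyscale direction."""
    v = pixel if lighter_is_higher else 255 - pixel
    return base_mm + (v / 255.0) * model_height_mm

print(pixel_to_height_mm(0))    # black pixel: base plate only → 2.0
print(pixel_to_height_mm(255))  # white pixel: base + full relief → 22.0
```

This also makes clear why the greyscale approach cannot control geographic scale: the pixel values only encode relative, not absolute, elevation.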
The most ‘popular’ model created using this workflow is our regional watershed puzzle. The puzzle consists of a baseplate with a few small watersheds that drain directly into Lake Ontario along with a set of ten separately printed watersheds, which cover the jurisdiction of the Toronto and Region Conservation Authority (TRCA).
The first two approaches share a significant limitation for 3D printing of geographies: they do not support controlling the geographic scale. To keep track of scale and vertical exaggeration, one has to calculate these values on the basis of geographic extent, elevation differential, and model/printer parameters. This is where the neat QGIS plugin DEMto3D comes into play.
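The calculation itself is straightforward. In this Python sketch (my own helper functions, with illustrative numbers), a 100 km wide region printed 200 mm wide yields a 1:500,000 horizontal scale, and printing a 300 m elevation range 15 mm tall then corresponds to a 25x vertical exaggeration:

```python
def print_scale(extent_m, model_width_mm):
    """Horizontal scale denominator: 1 mm on the model represents
    `scale` mm on the ground."""
    return extent_m * 1000.0 / model_width_mm

def vertical_exaggeration(elev_range_m, model_relief_mm, scale):
    """How much the printed relief is stretched relative to true scale."""
    true_relief_mm = elev_range_m * 1000.0 / scale
    return model_relief_mm / true_relief_mm

scale = print_scale(100_000, 200)           # 100 km wide area, 200 mm print
ex = vertical_exaggeration(300, 15, scale)  # 300 m relief printed 15 mm tall
print(scale, ex)  # → 500000.0 25.0
```

At true scale the 300 m relief would print only 0.6 mm tall, which is why regional terrain models need substantial vertical exaggeration to be legible.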
As can be seen in the following screenshot, DEMto3D allows us to determine a print extent from the current QGIS project or layer extents; set the geographic scale in conjunction with the dimensions of the 3D print; specify vertical exaggeration; and set the height at the base of the model to a geographic elevation. For example, the current setting of 0 m would print elevations above sea level, while a setting of 73 m would print elevations of the Toronto region in relation to the surface level of Lake Ontario. One shortcoming of DEMto3D is that vertical exaggeration is oddly limited to a factor of 10, which we did not always find sufficient to visualize regional topography.
Using DEMto3D, we recently printed our first multi-part geography, a two-piece model of the Oak Ridges Moraine, which stretches over 200 km in the east-west direction to the north of the City of Toronto and contains the headwaters of streams running south towards Lake Ontario and north towards Lake Simcoe and Georgian Bay. To increase the vertical exaggeration for this print from 10x to 25x, we simply rescaled the z dimension in the Cura 3D printing software after loading the STL file.
The DEMto3D plugin strictly requires true DEM data (as far as I have found so far), so it would not convert a Shapefile with building heights for the Ryerson University campus and surrounding City of Toronto neighbourhoods, which I wanted to print. Rendering the campus building heights as a greyscale image and applying one of the first two approaches above did not work either, as the 3D buildings in the resulting STL files had triangulated, slanted walls.
In looking for a direct converter from Shapefile geometries to STL, I found Doug McCune’s shp2stl script at https://github.com/dougmccune/shp2stl and his extensive examples and explanations in a blog post on “Using shp2stl to Convert Maps to 3D Models”. This script runs within the NodeJS platform, which needs to be installed and understood – the workflow turned out to be a tad too complicated for a time-strapped Windows user. Although I managed to convert the Ryerson campus using shp2stl, I never printed the resulting model due to another, unrelated challenge of being unable to add a base plate to the model (for my buildings to stand on!).
Getting those walls straight: ArcScene, VRML, and MeshLab
Another surprise find, made just a few days ago, enabled the printing of my first city model from the City of Toronto’s 3D massing (building height) dataset. This approach uses a combination of Esri’s ArcScene and the MeshLab software. Within ArcScene, I could load the 3D massing Shapefile (after clipping/editing it down to an area around campus using QGIS), define vertical extrusion on the basis of the building heights (EleZ variable), and save the 3D scene in the VRML format as a *.wrl (“world”) file. Using MeshLab, the VRML file could then be imported and immediately exported in STL format for printing.
While this is the only approach included in this post that uses a commercial tool, ArcScene, it is likely that the reader can find an alternative workflow based on free/open-source software to extrude Shapefile polygons and turn them into STL, whether or not this requires the intermediate step through the VRML format.
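As a rough illustration of what any such workflow has to produce, extruding a single convex building footprint into an ASCII STL solid takes only a few lines of Python. This toy sketch (not a replacement for the ArcScene workflow, and the footprint and height are made-up values) fan-triangulates the roof and floor and adds two vertical triangles per wall edge, which is exactly what keeps the walls straight:

```python
def extrude_to_stl(footprint, height, name="building"):
    """Extrude a convex 2D footprint (counter-clockwise list of (x, y)
    vertices) into an ASCII STL solid: fan-triangulated roof and floor
    plus two triangles per wall edge. Normals are left at (0, 0, 0);
    most tools recompute them from the vertex winding order."""
    def tri(a, b, c):
        return ("  facet normal 0 0 0\n    outer loop\n"
                + "".join("      vertex {} {} {}\n".format(*p) for p in (a, b, c))
                + "    endloop\n  endfacet\n")
    lo = [(x, y, 0.0) for x, y in footprint]
    hi = [(x, y, float(height)) for x, y in footprint]
    n = len(footprint)
    faces = []
    for i in range(1, n - 1):  # roof (facing up), floor (winding reversed)
        faces.append(tri(hi[0], hi[i], hi[i + 1]))
        faces.append(tri(lo[0], lo[i + 1], lo[i]))
    for i in range(n):         # walls: one quad -> two triangles per edge
        j = (i + 1) % n
        faces.append(tri(lo[i], lo[j], hi[j]))
        faces.append(tri(lo[i], hi[j], hi[i]))
    return "solid {}\n{}endsolid {}\n".format(name, "".join(faces), name)

# A hypothetical 30 m x 20 m building footprint, 12 m tall
stl = extrude_to_stl([(0, 0), (30, 0), (30, 20), (0, 20)], 12)
print(stl.count("endfacet"))  # → 12 triangles
```

A real city model would loop over all footprints in the Shapefile (e.g. read with pyshp) and concatenate the facets, plus a base plate, into one solid.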
Another year has passed, and another annual meeting of the Association of American Geographers (AAG) is about to start in San Francisco this week. The Department of Geography and Environmental Studies at Ryerson is sending its usual strong complement to AAG 2016, although the writer of these lines is sadly staying behind in a cold and rainy Toronto.
Contributions from @RyersonGeo have a traditional focus in Business Geography, with additional abstracts in the areas of urban forest, population health, migration & settlement, local food, renewable energy, and sustainability science. In approximate chronological order of presentation:
Dieter Kogler (University College Dublin), Peter Kedron, Sharmistha Bagchi-Sen (SUNY – Buffalo): From individual to regional networks: patterns and pathways in biofuel innovation. Presented in session on Economic Geography II – Evolutionary Ideas: Agents and Spaces, scheduled on Tuesday, 3/29/2016 at 10:00 AM. Abstract at http://meridian.aag.org/callforpapers/program/AbstractDetail.cfm?AbstractID=74449
In addition to these contributions, Dr. Hernandez also serves as chair, introducer, organizer, and/or panelist of sessions on
BGSG Career Achievement Award: A Conversation with Ken Smith
Connecting Practitioners and Students – Advice on Career Development in the Field of Location Intelligence
Location Intelligence Trends in the Contemporary Omni-channel Retail Marketplace
Retail and Business Geography I & II
Dr. Millward also serves as chair of the session on “Arboriculture and Urban Forestry” and Dr. Steenberg is a panelist in the session entitled “Disrupt Geo 1: new ideas from the front lines of maps, mobile, and big data”.
We wish our colleagues and all participants a productive and enjoyable AAG 2016!
Blog post co-authored by Victoria Fast, Daniel Liadsky, and Claus Rinner
Ryerson’s Department of Geography and Environmental Studies is celebrating two gold medal recipients this fall. The Ryerson Gold Medals are the University’s highest honours, presented annually to one graduate of each Faculty. Victoria Fast (PhD in Environmental Applied Science and Management, supervised by Dr. Claus Rinner) received the Gold Medal for the interdisciplinary programs housed at the Yeates School of Graduate Studies, while Daniel Liadsky (MSA in Spatial Analysis, supervised by Dr. Brian Ceh) received the Gold Medal for the Faculty of Arts.
Victoria’s PhD research investigated the potential of novel geographic information techniques to reshape the interaction of government with community organizations and citizens through crowdsourcing and collaborative mapping. The study applied a VGI systems approach (Fast & Rinner 2014) to actively engage with urban food stakeholders, including regional and municipal government, NGOs, community groups, and individual citizens to reveal and map uniquely local and community-driven food system assets in Durham Region. The Durham Food Policy Council and Climate Change Adaptation Task Force are currently using the results to support informed food policy and program development. Victoria’s research contributes to geothink.ca, a SSHRC Partnership Grant on the impact of the participatory Geoweb on government-citizen interactions.
Daniel’s research in the Master of Spatial Analysis (MSA) examined how dietary intake is mediated by individual, social, and environmental factors. The Toronto-based study was stratified by gender and utilized self-reported data from the Canadian Community Health Survey as well as measures of the food environment derived from commercial retail databases. The results uncovered some of the complex interactions between the food environment, gender, ethnocultural background, and socioeconomic restrictions such as low income and limited mobility. In addition and as part of an unrelated investigation, Daniel undertook a feasibility study into a mapping and data analytics service for the non-profit sector.
Ryerson students, faculty, staff, and the local community are invited to explore and celebrate Geographic Information Systems (GIS) research and applications. Keynote presentations will outline the pervasive use of geospatial data analysis and mapping in business, municipal government, and environmental applications. Research posters, software demos, and course projects will further illustrate the benefits of GIS across all sectors of society.
Date: Wednesday, November 18, 2015
Location: Library Building, 4th Floor, LIB-489 (enter at 350 Victoria Street, proceed to 2nd floor, and take elevators inside the library to 4th floor)
1:00 Soft kick-off, posters & demos
1:30-2:00 Dr. Namrata Shrestha, Senior Landscape Ecologist, Toronto & Region Conservation Authority
2:00-2:30 posters & demos
2:30-3:00 Andrew Lyszkiewicz, Program Manager, Information & Technology Division, City of Toronto
3:00-3:30 posters & demos
3:30-4:00 Matthew Cole, Manager, Business Geomatics, and William Davis, Cartographer and Data Analyst, The Toronto Star
4:00 GIS Day cake!
GIS Day is a global event under the motto “Discovering the World through GIS”. It takes place during National Geographic’s Geography Awareness Week, which in 2015 is themed “Explore! The Power of Maps”, and aligns with the United Nations-supported International Map Year 2015-2016.
Event co-hosted by the Department of Geography & Environmental Studies and the Geospatial Map & Data Centre. Coffee/tea and snacks provided throughout the afternoon. Contact: Dr. Claus Rinner