
A New Narrative for Collecting Statistical Data: Statistics Canada’s Crowdsourcing Project

This is a guest post from Statistics Canada about its new initiative to crowdsource geospatial data.

Statistics Canada’s crowdsourcing project offers an exciting new opportunity for the agency to collaborate with stakeholders and citizens to produce and share open data with the general public — that is to say, data that can be freely used and repurposed.

Data collection is evolving with technology; for example, paper-based and telephone surveys are increasingly replaced with online surveys. With an array of modern technologies that most Canadians can access, such as Web 2.0 and smartphones, a new mechanism for data sharing can be piloted through open data platforms that host online crowds of data contributors. This project provides insight into how Statistics Canada can adapt these modern technologies, particularly open source tools and platforms, to engage public and private stakeholders and citizens to participate in the production of official statistics.

For the pilot project, Statistics Canada’s goal is to collect quality crowdsourced data on buildings in Ottawa and Gatineau. The data include attributes such as each building’s coordinate location, address and type of use. This crowdsourced data can fill gaps in national datasets and produce valuable information for various Statistics Canada divisions.

On September 15, 2016, Statistics Canada launched a web page and communications campaign to inform and motivate the citizens of Ottawa and Gatineau to participate in the pilot project. This pilot project is governed and developed by Statistics Canada’s Crowdsourcing Steering Committee. Statistics Canada’s communications with the local OpenStreetMap (OSM) community and collaboration with stakeholders and municipalities have allowed the pilot project to succeed.

To crowdsource the data, the project uses OpenStreetMap, an open source platform that aims to map all features on the Earth’s surface through user-generated content. OSM allows anyone to contribute data and, under the Open Data Commons Open Database License (ODbL), anyone can freely use, disseminate and repurpose OSM data. In addition to the web page and campaign to encourage participation, Statistics Canada developed and deployed a customized version of OSM’s iD-Editor. This adapted tool allows participants to seamlessly add points of interest (POIs) and polygons on OSM. The platform includes instructions on how to sign up for OSM and how to edit, allowing anyone, whether tech-savvy or not, to contribute georeferenced data (Figure 1).

Figure 1. Snapshot of the customized version of OSM’s iD-Editor. Users can select a building or POI to see the attributes. Users can edit these attributes or they can create an entirely new point or area.
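To make the kind of contribution described above concrete, here is a minimal sketch, in Python, of the attributes a participant might attach to a single building. The tag keys (building, addr:housenumber, addr:street, addr:city) follow standard OpenStreetMap conventions; the specific values and the dictionary layout are illustrative assumptions, not Statistics Canada's actual schema.

```python
# A minimal, hypothetical sketch of one building contribution, using standard
# OpenStreetMap keys (building, addr:*). The coordinates and address values
# below are illustrative, not taken from the pilot project.
building_contribution = {
    "lat": 45.4215,    # coordinate location of the point of interest
    "lon": -75.6972,
    "tags": {
        "building": "house",          # type of use
        "addr:housenumber": "150",    # civic address fields
        "addr:street": "Elgin Street",
        "addr:city": "Ottawa",
    },
}

# An editor such as iD submits this as an OSM node (point) or closed way
# (building polygon); under the ODbL the resulting data can be freely reused.
print(building_contribution["tags"])
```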

Statistics Canada has maintained communications with its stakeholders and participants through outreach, and has monitored contributions through dashboards. Outreach has taken place by communicating with the global and local OSM communities by using mailing lists and having local meetups, as well as by organizing webinars, presenting at local universities and participating in conferences associated with open data. Negotiation and collaboration with the City of Ottawa have also made building footprints and addresses available as open data for contributors to add to the map.

The project has been monitored using an open source dashboard developed by Statistics Canada. The dashboard provides a timeline (currently covering August 2016 to February 15, 2017) that specifies the number of buildings mapped, the number of users and the average number of tags contributed on OSM in each target city. Furthermore, it shows the number of buildings of certain types (e.g., house, residential, commercial) and the percentage of missing address fields (Figure 2). In general, the dashboard highlights the increase in OSM contributions in Ottawa and Gatineau since the initiation of the project.

Figure 2. The open source dashboard monitors the production of data on OSM within the pilot project’s geographic scope of Ottawa and Gatineau. In the image above, both Ottawa and Gatineau have been selected. As seen in the top graph, buildings mapped in both cities have increased since the project’s initiation.
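To illustrate how counts of this kind can be derived from OSM data, the sketch below queries the public Overpass API for the number of features tagged as buildings within an administrative boundary. This is a hypothetical reconstruction for illustration only, not the dashboard's actual code; in particular, the boundary filter (the area name, and whether an admin_level filter is needed to disambiguate) is an assumption.

```python
import requests

# Hypothetical sketch: count OSM features tagged "building" inside a city
# boundary via the public Overpass API. Not Statistics Canada's dashboard code;
# the area filter below is an assumption and may need an admin_level filter
# to disambiguate between places sharing the same name.
OVERPASS_URL = "https://overpass-api.de/api/interpreter"

query = """
[out:json][timeout:60];
area["name"="Ottawa"]["boundary"="administrative"]->.city;
(
  way["building"](area.city);
  relation["building"](area.city);
);
out count;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()

# "out count;" returns a single element whose tags hold the totals.
counts = response.json()["elements"][0]["tags"]
print("Buildings mapped:", counts.get("total"))
```

A query of this kind, run for each target city at regular intervals, could feed the time series and building-type breakdowns shown in the dashboard.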

In the second year of the pilot project, Statistics Canada intends to develop a mobile app that will allow contributors to map on the go. Outreach will be maintained and, as more data are collected, quality assessments will be conducted. Success has been derived through collaborations, learning and sharing ideas, and developing user-friendly open source tools. As the project expands over time, Statistics Canada will uphold these values and approaches to ensure both an open and collaborative environment.

If you are interested in participating in the project, visit Statistics Canada’s Crowdsourcing website for a tutorial or to start mapping. Feel free to contact us at statcan.crowdsource.statcan@canada.ca to subscribe to a distribution list for periodic updates or to ask questions about the project.

Leveraging Open Data: International perspectives presented at URISA’s GIS-Pro 2016 conference

This is a cross-post from Geothink co-applicant Dr. Claus Rinner's website, written by Geothink student Sarah Greene of Ryerson University. Sarah is a candidate in the Master of Spatial Analysis program at Ryerson University. Her research focuses on open data.

By Sarah Greene

This past week, URISA held its 54th annual GIS-Pro conference in Toronto, bringing together GIS professionals and businesses from around the world. The conference provided many interesting sessions including one focused entirely on open data. This session, titled “Leveraging Open Data”, included government as well as private sector perspectives.

The session began with a presentation from the Government of North Carolina, discussing the importance of metadata. They are currently collaborating with a number of agencies to create and share a metadata profile to help others open up their data and understand how to implement the standards suggested. They have produced a living document which can be accessed through their webpage.

The next speaker at the session represented Pitkin County in Colorado, an open data success story with a number of great resources available for download on its website, including high-quality aerial imagery. An important aspect of the county's open data project was engagement with the local community to understand which data should be opened, and then marketing the datasets that were released.

The Government of Ontario was also present at this session, presenting on the current status of open data for the province. The Ontario Government promotes an Open by Default approach and currently has over 500 datasets from 49 agencies available to download through its portal. It is working to continue increasing the number of datasets openly available.

A presentation by MapYourProperty provided an interesting perspective from the private sector, where open data is used to successfully run a business. The company depends heavily on visualizing open data to provide a web-based mapping application for the planning and real estate community to search properties, map zoning information and create a due diligence report based on the information found. This is one of many examples in the private sector of open data helping to build new companies or helping existing companies thrive.

Lastly, a representative from Esri Canada's BC office wrapped up the session by reminding us all of the importance of opening data, highlighting the seemingly endless benefits of open data, including providing information to help make decisions, supporting innovation, creating smart cities and building connections. Of course, open data is big business for Esri too, with the addition of ArcGIS Open Data as a hosted open data catalog to the ArcGIS Online platform.

This session showcased some great initiatives taking place in Canada and the United States that are proving the importance of opening up data and showing how this can be done successfully. It is exciting to see what has been taking place locally and internationally, and it will be even more exciting to see what happens in the future as both geospatial and aspatial data products continue to become more openly available.

A talk at the GIS Pro 2016 conference. Photo credit: Claus Rinner

See the original post here.

Crosspost: In Search of the Mother of GIS? Thoughts on Panel Session 1475 Gender & GIScience at AAG 2016


Female mentors in GIS abound at the 2016 American Association of Geographers (AAG) Annual Meeting.

By Victoria Fast


This post was originally published on GIS2 at Ryerson University: Geographic Information Science and Systems on April 6, 2016. We re-publish it here with permission of Dr. Victoria Fast, who presented at this year's Annual Meeting of the American Association of Geographers (AAG).


Roger Tomlinson has passed, and Mike Goodchild is in (a very active) retirement. So, this panel made me consider: are we searching for a new father of GIS? In fact, do we need a father of GIS? Would a mother of GIS balance the gender scales? It seems all disciplines need leaders, and the powerful panellists in this session (populated with many of my mentors and leaders in the field, including Renee Sieber, Nadine Schuurman, Sarah Elwood, Agnieszka Leszczynski, Britta Ricker, and Matthew Wilson) demonstrate that we indeed have strong leadership in GIScience. This mostly female panel is a reminder that, in fact, there are many influential female scholars. But do we hear these influences? Do we hear them equally? Have we heard them in the past? Based on the discussion in this session, the answer is overwhelmingly ‘no’.

The discussion in this session revolved around the ways in which our science has been heavily masculinized, epitomized by the commonly accepted ‘Father of GIS’ notion. The discipline has been dominated by all-male panels, has focused on programming and physical science, has subdued critical or theoretical work, and has seen “straight up misogyny in GIScience” (Renee Sieber’s words). Female scholars are less frequently cited, underrepresented as researchers in the field, and almost absent from representations of the history of the discipline.

This made me think of the deep-rooted masculinization I have faced in my own GIS journey, as a student and now as an educator. Issues related to working in the ‘old boys club’ aside, masculinization was especially evident when I taught a second-year Cartography course. The textbook “Thematic Cartography and Geovisualization” contains a chapter on the History of Cartography. Without sounding ‘…ist’ myself, the chapter largely recognized the contributions of older, white males. I didn’t feel comfortable teaching my students that narrow history of Cartography, so I instead went looking for my own resources to populate a ‘History of Cartography’ lecture.

I was delightfully surprised that there are so many resources available that show the multi-faceted sides of cartography (and GISci more broadly). These perspectives and resources are often shared via disparate sources: journal articles, blogs, and discussion forums. For example, Monica Stephens has a great publication on Gender and the Geoweb in GeoJournal [2013, 78(6)]. CityLab also has a great series on the Hidden Histories of Maps Made by Women (thanks for sharing, Alan McConchie): http://www.citylab.com/design/2016/03/women-in-cartography-early-north-america/471609/. Unfortunately, they refer to these as the “little seen contributions to cartography”, but panels like this one help make the point that, while these contributions are little seen, they are highly impactful. Over time, these blog posts, journal articles, and conference panels will (hopefully) amass and make their way into more formalized forms of textbook knowledge. (There was a great deal of interest among those attending this session in a published version of these compiled resources. Given the overwhelming response, I’m considering compiling a manuscript… stay tuned.)

I recognize that it is impossible to undo the deep-rooted masculinization that has persisted in GIScience. However, we can change how we address it moving forward. Let’s recognize that we don’t need a father (or mother) of GIS; we need leaders, visionaries, and mentors of all shapes, sizes, colours, backgrounds, and genders. I challenge all those who are GI Professionals in training to look for the untold story, the hidden history of GIS, and the little-seen influences on the discipline. I challenge those who teach GIS to go beyond the ‘truth’ presented in the textbooks. And lastly, I want to conclude by saying thank you to the powerful female mentors on this panel and to those not represented here: mentors who transcend the need for a ‘Mother of GIS’.

Dr. Victoria Fast is a recent doctoral graduate of the Department of Geography and Environmental Studies at Ryerson University. Contact her at vfast (at) ryerson.ca.

Crosspost: Being Philosophical About Crowdsourced Geographic Information

This Geo: Geography and Environment blog post is cross-posted with permission from the authors, Renée Sieber (McGill University, Canada) and Muki Haklay (University College London, UK).
By Renée Sieber and Muki Haklay

Our recent paper, The epistemology(s) of volunteered geographic information: a critique, started from a discussion we had about changes within the geographic information science (GIScience) research communities over the past two decades. We’ve both been working in the area of participatory geographic information systems (GIS) and critical studies of GIScience since the late 1990s, engaging people from all walks of life with the information that is available in GIS. Many times we’d work together with people to create new geographic information and maps. Our goal was to help reflect their point of view of the world and their knowledge about local conditions, not to always aim for universal rules and principles. For example, the image below is from a discussion with the community in Hackney Wick, London, where individuals collaborated to ensure the information to be captured represented their views on the area and its future, in light of the Olympic works that happened on their doorstep. The GIScience research community, by contrast, emphasizes quantitative modelling and universal rules about geographic information (exemplified by frequent mentions of Tobler’s first law of Geography), and it was not especially welcoming of qualitative, participatory mapping efforts, leaving them mostly in the margins of the discipline.


Participatory Mapping in Hackney Wick, London, 2007

Around 2005, researchers in GIScience started to notice that when people used their Global Positioning System (GPS) devices to record where they took pictures, or used online mapping apps to make their own maps, they were generating a new kind of geographic information. Once projects like OpenStreetMap and other user-generated geographic information came onto the scene, the early hostility evaporated and volunteered geographic information (VGI) or crowdsourced geographic information was embraced as a valid, valuable and useful source of information for GIScience research. More importantly, VGI became an acceptable research subject, with topics such as how to assess its quality and what motivates people to contribute.

This about-face was puzzling, and we felt that it justified an investigation of the concepts and ideas that allowed it to happen. Why did VGI become part of the “truth” in GIScience? In philosophical language, questions such as ‘Where does knowledge come from? How was it created? What is the meaning and truth of knowledge?’ belong to epistemology, and our paper evolved into an exploration of the epistemology, or more accurately the multiple epistemologies, inherent in VGI. It’s easy to make the case that VGI is a new way of knowing the world, with (1) its potential to disrupt existing practices (e.g., the way OpenStreetMap provides an alternative to official maps, as shown in the image below) and (2) the way VGI both constrains contributions (e.g., a 140-character limit) and opens contributions (e.g., with its ease of user interface; with its multimedia offerings). VGI affords a new epistemology, a new way of knowing geography, of knowing place. Rather than observing a way of knowing, we were interested in what researchers thought the epistemology of VGI was. They were building it in real time and attempting to ensure it conformed to existing ways of knowing. An analogy would be: instead of knowing a religion from the inside, you construct your conception of it, with your own assumptions and biases, while you are on the outside. We argue that this kind of construction was occurring with VGI.


OpenStreetMap mapping party (Nono Fotos)

We were likewise interested in the way that long-standing critics of mapping technologies would respond to new sources of data and new platforms for those data. Criticism tends to be grounded in the structuralist works of Michel Foucault on power and how it is influenced by wider societal structures. Critics extended traditional notions of volunteerism and empowerment to VGI, without necessarily examining whether or not these were applicable to the new ‘ecosystem’ of geospatial apps companies, code and data. We were also curious why the critiques focussed on the software platforms used to generate the data (e.g., Twitter) instead of the data themselves (tweets). It was as if the platforms used to create and share VGI were embedded in various socio-political and economic configurations, while the data were innocent of any association with those assemblages. Lastly, we saw an unconscious shift in the Critical GIS/GIScience field from the collective to the personal. Historically, in the wider field of human geography, when we thought of civil society mapping together by using technology, we looked at collective activities like counter-mapping (e.g., a community fights an extension to an airport runway by conducting a spatial analysis to demonstrate the adverse impacts of noise or pollution on the surrounding geography). We believe the shift occurred because Critical GIS scholars were never comfortable with community and consensus-based action in the first place. In hindsight, it probably is easier to critique the (individual) emancipatory potential than the (collective) empowerment potential of the technology. Moreover, Critical GIS researchers have shifted their attention away from geographic information systems towards the software stack of geospatial software and geosocial media, which raises questions about what is considered under this term. For all of these reasons and more, we decided to investigate the “world building” of both the instrumentalist scientists and their critics.

We do use some philosophical framing—Borgmann has a great idea called the device paradigm—to analyse what is happening, and we hope that the paper will contribute to the debate in the critical studies of geographical information beyond the confines of GIScience to human geography more broadly.

About the authors: Renée E. Sieber is an Associate Professor in the Department of Geography and the School of Environment at McGill University. Muki Haklay is Professor of Geographical Information Science in the Department of Civil, Environmental and Geomatic Engineering at University College London.

Crosspost: Green Cities and Smart Cities: The potential and pitfalls of digitally-enabled green urbanism


The Vancouver Convention Centre in Vancouver, BC, Canada was the world’s first LEED Platinum-certified convention center. It also has one of the largest green roofs in Canada. Image Credit: androver / Shutterstock.com

This post is cross-posted with permission from Alexander Aylett, from UGEC Viewpoints. Aylett is an Assistant Professor at the Centre on Urbanisation, Culture and Society at the National Institute for Scientific Research (UCS-INRS) in Montreal, Quebec.

By Alexander Aylett

Since its early days, the discourse around “smart cities” has included environmental sustainability as one of its core principles. The application of new digital technologies to urban spaces and processes is celebrated for its ability to increase the well-being of citizens while reducing their environmental impacts. But this engagement with sustainability has been limited to a technocratic focus on energy systems, building efficiency, and transportation. It has also privileged top-down interventions by local government actors. For all its novelty, the smart cities discussion is operating with a vision of urban sustainability that dates from the 1990s, and an approach to planning from the 1950s.

This definition of “urban sustainability” overlooks key facets of a city’s ecological footprint (such as food systems, resource consumption, production-related greenhouse gas emissions, air quality, and the urban heat island effect). It also ignores the ability of non-state actors to contribute meaningfully to the design and implementation of urban policies and programs. But that need not be the case. In fact, if employed properly, new information technologies seem like ideal tools to address some of urban sustainability’s most persistent challenges.

Progress and Lasting Challenges in Local Climate Governance

Let’s take a step back. Discussions of smart cities often begin with an account of the capabilities of specific technologies or interfaces and then imagine urbanism, and urban sustainability, through the lens of those technologies. I’d like to do the opposite: begin with the successes and lasting challenges faced by urban sustainability and interpret the technologies from within that context. To understand the role that “smart” technologies could play in enabling sustainable cities, it’s useful to first look at what we have managed to accomplish so far, and what still needs to be done.

For those of us working on sustainable cities and urban responses to climate change, the past two decades have been a period of both amazing successes and enduring challenges. In the early 1990s a handful of cities began promoting the (at that time) counterintuitive idea that local governments had a key role to play in addressing global climate change. Since then, the green cities movement has won significant discursive, political, and technical battles.

Global inter-municipal organizations like ICLEI or the C40 now have memberships that represent thousands of cities. Two decades of work have created planning standards and tools and an impressive body of “best practice” literature. Through the sustained efforts of groups like ICLEI, cities are now recognized as official governmental stakeholders in the international climate change negotiations coordinated by the United Nations.

But, crucially, real urban emissions reductions are lagging well below what is needed to help keep global CO2 within safe limits. Looking at the efforts of individual cities and the results of a global Urban Climate Change Governance survey that I conducted while at MIT (Aylett 2014, www.urbanclimatesurvey.com) shows why. Apart from a small contingent of charismatic cities like Vancouver, Portland, or Copenhagen, cities are struggling to move beyond addressing the “low hanging fruit” of emissions from municipal facilities (i.e., vehicle fleets, municipal buildings, street lighting, known as “corporate emissions”) to taking action on the much more significant emissions generated by the broader urban community (i.e., business, industry, transportation, and residential emissions).

This problem has been with us since the early days of urban climate change responses. But how we understand it has changed significantly. Where some cities used to inventory only their corporate emissions, this is now rare. Current guidelines cover community-wide emissions and work is underway to create a global standard for emissions inventories that will also engage with emissions produced in the manufacture of the goods and services consumed within cities (see Hoornweg et al. 2011).

Built on the increased scope of our technical understanding of urban emissions is a change in how we understand the work of governing climate change at the local level. A top-down vision of climate action focused on the regulatory powers of isolated local government agencies is being replaced by one that is horizontal, relational, and collaborative. This approach transforms relationships both inside and outside of local governments by linking together traditionally siloized municipal agencies and forging partnerships with civil-society and business actors (Aylett 2015).

The increased prominence of non-state actors in urban climate change governance has led to growing calls for partnerships across the public-private divide (Osofsky et al. 2007; Andonova 2010; Bontenbal and Van Lindert 2008). These partnerships play an important role in overcoming gaps in capacity, translating the climate change impacts and response options into language that is meaningful to different groups and individuals, and accelerating the development of solutions. Follow-up analysis of the 2014 MIT-ICLEI Climate survey shows that these partnerships have an important positive impact on the scope of concrete emissions reductions. Cities with stronger partnerships appear to be more able to create concrete emissions reductions outside of areas directly controlled by the municipality.


The street car in Portland, Oregon, USA. Image Credit: Shutterstock.com

This evolution in approaches to climate change planning follows a broader current in urban planning more generally, which, since the 1960s, has moved away from expert-driven and technocratic processes and created increasing amounts of space for participatory processes and facilitative government.

In a nutshell, an increasingly complex and holistic technical understanding of urban emissions is being matched by an increasingly horizontal and networked approach to governing those emissions. (A similar shift is taking place in the more recent attention to urban adaptation and resilience.)

But plans and programs based on this understanding quickly run into significant barriers: institutional siloization and path dependency, a lack of effective information sharing, challenges of data collection and analysis, and the difficulty of mobilizing collective and collaborative action across multiple diverse and dispersed actors (Aylett 2014). The strength of collaborative multi-stakeholder responses is also their weakness. While effective climate change action may not be possible without complex networks of governance, coordinating these networks is no simple task. The subject of urban climate change governance has been the focus of an expanding body of research (Aylett 2015, 2014, 2013; Betsill & Bulkeley 2004, 2007; Burch 2010; Burch et al. 2013; Romero-Lankao et al. 2013).

“Smart” Urban Climate Governance

Seen from this perspective, the allure of “smart” approaches to green cities is precisely the fact that information technology tools seem so well suited to the challenges that have stalled progress so far. Collecting, sharing and analysing new and existing data, and coordinating complex multi-scalar social networks of collaborative design and implementation are precisely what has drawn attention to new technologies in other sectors.

Disappointingly, current applications of a data-driven and technologically enabled approach to urban sustainability are far from delivering on this potential. Reading through the literature shows that the many interesting works that address the impacts of new technologies on urban governance (for example Elwood 2010, Evans-Cowley 2010, Goldsmith and Crawford 2015, Moon 2002) have nothing to say about the governance of urban sustainability. Work that does address environmental sustainability is dominated by a technocratic focus on energy systems, building efficiency, and transportation that privileges top-down action by municipal experts and planning elites (The Climate Group 2008, Boorsma & Wagener 2007, Kim et al. 2009, Villa & Mitchell 2009). This literature review is ongoing, and I continue to hope to find a body of work that combines a developed understanding of urban sustainability with a detailed reflection on digital governance. As it is, we seem to be working with outdated approaches to both urban sustainability and planning.


An off-shore wind farm near Copenhagen, Denmark. Image Credit: Shutterstock.com

How to update this approach, and use the full potential of data-driven, technologically enabled, and participatory approaches to spur accelerated transitions to sustainable cities is a key question. This research is necessary if we are going to unlock the full potential of the “smart” urbanism to address the necessity of building sustainable cities. It is also important that we avoid rolling back the clock on two decades of “green cities” research by basing our digital strategies around outdated understandings of the urban sustainability challenge.

Conclusions

Cities are responsible for as much as 70 percent of global greenhouse gas emissions and consume 75 percent of the world’s energy (Satterthwaite 2008). These figures are often repeated. But taking action at that scale requires both technological and socio-institutional innovations. Efforts to reduce urban emissions are challenged by the complexity of coordinating broad coalitions of action across governmental, private, and civil-society actors, and the need to effectively collect, share, and analyse new and existing data from across these traditionally siloized sectors.

These complexities have played an important role in keeping actual urban emissions reductions far below what is needed to stabilize global emissions within a safe range. Interestingly, these complexities map directly onto the strengths of emerging information and communications technology (ICT) tools and Geoweb-enabled approaches to urban planning and implementation. Currently, the use of “smart” approaches to address the urban climate challenge has been limited to narrow and technocratic initiatives. But much more is possible. If effective bridges can be built between the ICT and urban sustainability sectors, a profound shift in approaches to the urban governance of climate change could be possible. It is important to increase both sustainability and digital literacy among those involved. Only then will innovations in urban sustainability benefit from a deep understanding of both the new tools at our disposal and the complex challenge to which we hope to apply them.

(A previous version of this was presented as part of the Geothink pre-event at the 2015 American Association of Geographers conference in Chicago. IL. See: www.geothink.ca)

Alexander Aylett is Assistant Professor at the Centre on Urbanisation, Culture and Society at the National Institute for Scientific Research (UCS-INRS) in Montreal, Quebec, Canada.

Crosspost: Canada’s Information Commissioner Tables Recommendations to Overhaul Access to Information Act


The Access to Information Act was first passed by parliament in 1983 (Photo courtesy of en.wikipedia.org).

This post is cross-posted with permission from Teresa Scassa, from her personal blog. Scassa is the Canada Research Chair in Information Law at the University of Ottawa.

By Teresa Scassa

Canada’s Access to Information Act is outdated and inadequate, and has been that way for a long time. Information Commissioners over the years have called for its amendment and reform, but generally with little success. The current Information Commissioner, Suzanne Legault, has seized the opportunity of Canada’s very public embrace of Open Government to table in Parliament a comprehensive series of recommendations for the modernization of the legislation.

The lengthy and well-documented report makes a total of 85 recommendations. This will only seem like a lot to those unfamiliar with the decrepit statute. Taken as a whole, the recommendations would transform the legislation into a modern statute based on international best practices and adapted both to the information age and to the global movement for greater government transparency and accountability.

The recommendations are grouped according to 8 broad themes. The first relates to extending the coverage of the Act to certain institutions and entities that are not currently subject to the legislation. These include the Prime Minister’s Office, offices of Ministers, the bodies that support Parliament (including the Board of Internal Economy, the Library of Parliament, and the Senate Ethics Commissioner), and the bodies that support the operations of the courts (including the Registry of the Supreme Court, the Courts Administration Service and the Canadian Judicial Council). A second category of recommendations relates to the need to bolster the right of access itself. Noting that the use of some technologies, such as instant messaging, may lead to the disappearance of any records of how and why certain decisions are made, the Commissioner recommends instituting a legal duty to document. She also recommends adding a duty to report any unauthorized loss or destruction of information. Under the current legislation, there are nationality-based restrictions on who may request access to information in the hands of the Canadian government. This doesn’t mean that non-Canadians cannot get access – they currently simply have to do it through a Canadian-based agent. Commissioner Legault sensibly recommends that the restrictions be removed. She also recommends the removal of all fees related to access requests.

The format in which information is released has also been a sore point for many of those requesting information. In a digital age, receiving information in reusable digital formats means that it can be quickly searched, analyzed, processed and reused. This can be important, for example, if a large volume of data is sought in order to analyze and discuss it, and perhaps even to convert it into tables, graphs, maps or other visual aids in order to inform a broader public. The Commissioner recommends that institutions be required to provide information to those requesting it “in an open, reusable, and accessible format by default”. Derogation from this rule would only be in exceptional circumstances.

Persistent and significant delays in the release of requested information have also plagued the system at the federal level, with some considering these delays to be a form of deliberate obstruction. The Report includes 10 recommendations to address timeliness. The Commissioner has also set out 32 recommendations designed to maximize disclosure, largely by reworking the current spider’s web of exclusions and exemptions. The goal in some cases is to replace outright exclusions with more discretionary exemptions; in other cases, it is to replace exemptions scattered across other statutes with those in the statute and under the oversight of the Information Commissioner. In some cases, the Commissioner recommends reworking current exemptions so as to maximize disclosure.

Oversight has also been a recurring problem at the federal level. Currently, the Commissioner operates on an ombuds model: she can review complaints regarding refusals to grant access, inadequate responses, lack of timeliness, excessive fees, and so on. However, she can only make recommendations, and has no order-making powers. She recommends that Canada move to an order-making model, giving the Information Commissioner expanded powers to oversee compliance with the legal obligations set out in the legislation. She also recommends new audit powers for the Commissioner, as well as requirements that government institutions consult on proposed legislation that might affect access to information, and submit access to information impact assessments where changes to programs or activities might affect access to information. In addition, Commissioner Legault recommends that the Commissioner be given the authority to carry out education activities aimed at the public and to conduct or fund research.

Along with the order-making powers, the Commissioner is also seeking more significant consequences for failures to comply with the legislation. Penalties would attach to obstruction of access requests, the destruction, altering or falsification of records, failures to document decision-making processes, and failures to report on unauthorized loss or destruction of information.

In keeping with the government’s professed commitments to Open Government, the report includes a number of recommendations in support of a move towards proactive disclosure. The goal of proactive disclosure is to have government departments and institutions automatically release information that is clearly of public interest, without waiting for an access to information request asking them to do so. Although the Action Plan on Open Government 2014-2016 sets goals for proactive disclosure, the Commissioner is recommending that the legislation be amended to include concrete obligations.

The Commissioner is, of course, not alone in calling for reform to the Access to Information Act. A private member’s bill introduced in 2014 by Liberal leader Justin Trudeau also proposes reforms to the legislation, although these are by no means as comprehensive as what is found in Commissioner Legault’s report.

In 2012 Canada joined the Open Government Partnership, and committed itself to an Action Plan on Open Government. This Action Plan contains commitments grouped under three headings: Open Information, Open Data and Open Dialogue. Yet its commitments to improving access to information are focussed on streamlining processes (for example, by making it possible to file and pay for access requests online, creating a virtual library, and making it easier to search for government information online). The most recent version of the Action Plan similarly contains no commitments to reform the legislation. This unwillingness to tackle the major and substantive issues facing access to information in Canada is a serious impediment to realizing an open government agenda. A systemic reform of the Access to Information Act, such as that proposed by the Information Commissioner, is required.

What do you think about Canada’s Access to Information Act? Let us know on twitter @geothinkca.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Crosspost: Looking at Crowdsourcing’s Big Picture with Daren Brabham

This post is cross-posted with permission from the personal blog of Daren C. Brabham, Ph.D. Brabham is a Geothink partner at the University of Southern California Annenberg School for Communication and Journalism, where he was the first to publish scholarly research using the word “crowdsourcing.”

By Daren C. Brabham

In this post, I provide an overview of crowdsourcing and a way to think about it practically as a problem solving tool that takes on four different forms. I have been refining this definition and typology of crowdsourcing for several years and in conversation with scholars and practitioners from diverse fields. Plenty of people disagree with my characterization of crowdsourcing and many have offered their own typologies for understanding this process, but I submit that this way of thinking about crowdsourcing as a versatile problem solving model still holds up.

I define crowdsourcing as an online, distributed problem solving and production model that leverages online communities to serve organizational goals. Crowdsourcing blends bottom-up, open innovation concepts with top-down, traditional management structures so that organizations can effectively tap the collective intelligence of online communities for specific purposes. Wikis and open source software production are not considered crowdsourcing because there is no sponsoring organization at the top directing the labor of individuals in the online community. And when an organization outsources work to another person–even if that work is digital or technology-focused–that is not considered crowdsourcing either because there is no open opportunity for others to try their hands at that task.

There are four types of crowdsourcing approaches, based on the kinds of problems they solve:

1. The Knowledge Discovery and Management (KDM) crowdsourcing approach concerns information management problems where the information needed by an organization is located outside the firm, “out there” on the Internet or in daily life. When organizations use a KDM approach to crowdsourcing, they issue a challenge to an online community, which then responds to the challenge by finding and reporting information in a given format back to the organization, for the organization’s benefit. This method is suitable for building collective resources. Many mapping-related activities follow this logic.

2. The Distributed Human Intelligence Tasking (DHIT) crowdsourcing approach concerns information management problems where the organization has the information it needs in-hand but needs that batch of information analyzed or processed by humans. The organization takes the information, decomposes the batch into small “microtasks,” and distributes the tasks to an online community willing to perform the work. This method is ideal for data analysis problems not suitable for efficient processing by computers.

3. The Broadcast Search (BS) crowdsourcing approach concerns ideation problems that require empirically provable solutions. The organization has a problem it needs solved, opens the problem to an online community in the form of a challenge, and the online community submits possible solutions. The correct solution is a novel approach or design that meets the specifications outlined in the challenge. This method is ideal for scientific problem solving.

4. The Peer-Vetted Creative Production (PVCP) crowdsourcing approach concerns ideation problems where the “correct” solutions are matters of taste, market support, or public opinion. The organization has a problem it needs solved and opens the challenge up to an online community. The online community then submits possible solutions and has a method for choosing the best ideas submitted. This way, the online community is engaged both in the creation and selection of solutions to the problem. This method is ideal for aesthetic, design, or policy-making problems.

This handy decision tree below can help an organization figure out what crowdsourcing approach to take. The first question an organization should ask about their problem solving needs is whether their problem is an information management one or one concerned with ideation, or the generation of new ideas. In the information management direction, the next question to consider is if the challenge is to have an online community go out and find information and assemble it in a common resource (KDM) or if the challenge is to use an online community to process an existing batch of information (DHIT). On the ideation side, the question is whether the resulting solution will be objectively true (BS) or the solution will be one that will be supported through opinion or market support (PVCP).


A decision tree for determining appropriate crowdsourcing approaches for different problems. Source: Brabham, D. C., Ribisl, K. M., Kirchner, T. R., & Bernhardt, J. M. (2014). Crowdsourcing applications for public health. American Journal of Preventive Medicine, 46(2), 179-187.
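As a rough, hypothetical sketch of the logic the decision tree encodes (the function name, parameter names, and example below are mine, not Brabham's), the two questions can be expressed in a few lines of Python:

```python
def choose_crowdsourcing_approach(information_management: bool,
                                  find_new_information: bool = False,
                                  objectively_verifiable: bool = False) -> str:
    """A sketch of the decision tree described above (labels are illustrative).

    information_management: True for information management problems,
        False for ideation problems (generating new ideas).
    find_new_information: for information management problems, True if the
        community must locate and assemble information into a common resource,
        False if it must process an existing batch of information.
    objectively_verifiable: for ideation problems, True if solutions are
        empirically provable, False if the "correct" solution is a matter of
        taste, market support, or public opinion.
    """
    if information_management:
        return ("Knowledge Discovery and Management (KDM)"
                if find_new_information
                else "Distributed Human Intelligence Tasking (DHIT)")
    return ("Broadcast Search (BS)"
            if objectively_verifiable
            else "Peer-Vetted Creative Production (PVCP)")


# Example: an organization wants an online community to propose and vote on
# designs, so the problem is ideation and the answer is a matter of taste.
print(choose_crowdsourcing_approach(information_management=False,
                                    objectively_verifiable=False))
# -> Peer-Vetted Creative Production (PVCP)
```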

I hope this conception of crowdsourcing is easy to understand and practically useful. Given this outlook, the big question I always like to ask is how we can mobilize online communities to solve our world’s most pressing problems. What new problems can you think of addressing with the power of crowds?

Daren C. Brabham, Ph.D., is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He was the first to publish scholarly research using the word “crowdsourcing,” and his work focuses on translating the crowdsourcing model into new applications to serve the public good. He is the author of the books Crowdsourcing (MIT Press, 2013) and Crowdsourcing in the Public Sector (Georgetown University Press, 2015).

Crosspost: Geoweb, crowdsourcing, liability and moral responsibility

This post is cross-posted with permission from Po Ve Sham, Muki Haklay’s personal blog. Muki is a Geothink collaborator at University College London and the co-director of ExCiteS.

By Muki Haklay

Yesterday [March 3rd, 2015], Tenille Brown led a Twitter discussion as part of the Geothink consortium. Tenille opened with a question about liability and wrongful acts that can harm others.

If you follow the discussion (search in Twitter for #geothink) you can see how it evolved and which issues were covered.

At one point, I asked the question:

It is always intriguing and frustrating at the same time when a discussion on Twitter takes on a life of its own and often moves away from the context in which a topic was originally brought up. At the same time, this is the nature of the medium. Here are the answers that came up to this question:

 

 

You can see that the only legal expert around said that it’s a tough question, but of course everyone else shared their (lay) views on the basis of moral judgement and their own worldview rather than on legality, and that’s also valuable. The reason I brought up the question was that during the discussion, we started exploring the duality in the digital technology area between ownership and responsibility, or rights and obligations. It seems that technology companies are very quick to emphasise ownership (expressed in strong intellectual property right arguments) without responsibility over the consequences of technology use (as expressed in EULAs and the general attitude towards users). So the nub of the issue for me was about agency. Software does have agency of its own, but that doesn’t mean that it absolves the human agents from responsibility over what it is doing (be they software developers or the companies).

In ethics discussions with engineering students, the cases of the Ford Pinto or the Thiokol O-rings in the Challenger shuttle disaster come up as useful examples to explore the responsibility of engineers towards their end users. Ethics exist for GIS, too: for example, the code of ethics of URISA, or the material online about ethics for GIS professionals and in Esri publications. Somehow, the growth of the geoweb took us backward. The degree to which awareness of ethics is internalised within a discourse of ‘move fast and break things’, a software/hardware development culture of perpetual beta, a lack of duty of care, and a search for a fast ‘exit’ (and therefore IBG-YBG) makes me wonder which mechanisms we need to put in place to ensure the reintroduction of strong ethical notions into the geoweb. As some of the responses to my question demonstrate, people will accept the changes in societal behaviour and view them as normal…

See the original post here.