
The Future of AR: Negotiating Virtual Space Guided Movements


This is a guest post from Geothink student Wei Jiang at the University of Ottawa, Faculty of Law, under the supervision of Professor Teresa Scassa.


By Wei Jiang

While not everyone is out to catch ‘em all, few people in Canadian cities and in many countries around the world are unaffected by the recent Pokémon Go craze. Alongside the wide range of more or less amusing incidents that have been reported arising out of Pokémon Go, articles have also explored the current legal ramifications of this popular Augmented Reality (AR) app. In this blog post, I explore the possible legal developments that may be necessary in response to the potential explosion of AR apps like Pokémon Go.

Though the Pokémon Go craze appears to be fading, the impact of the popular AR app, which overlays virtual critters (Pokémon) on the geography of the real world, is likely to remain. Already, Niantic and other app developers are working on the next wave of games that redefine how we interact with our physical surroundings. Furthermore, as Virtual Reality (such as Oculus Rift and HTC Vive) and wearable technologies mature, AR apps could see a further boost in popularity.

Currently, legal analyses of Pokémon Go focus mainly on the impacts of the app in terms of the existing legal framework. These include legal actions like trespass, nuisance, and infringements of intellectual property (IP) rights. Homeowners not only face the prospect of trespassers damaging their property, but could also be responsible, under occupiers’ liability, for harm that trespassers sustain on their property. Indeed, with homeowners responsible for the condition of sections of the sidewalk in many Canadian cities, the increase in the number of pedestrians playing Pokémon Go could present a significant risk. At the root of these potential legal actions is one fundamental reality: someone has altered the qualities of a physical space (be it a home, park, or restaurant) by designating it as a virtual landmark known as a “Pokéstop” or “gym”.

In broader terms, the challenge posed by AR apps is who can decide the qualities of the virtual space that overlays the physical world. Although future AR apps may not turn real world locations into “Pokéstops” and “gyms”, the core attraction of AR remains unchanged: the juxtaposition of the real world geography with a set of virtual meanings and rules. Currently, it is Niantic (the company behind the overlaying of virtual materials over physical geography) that asserts the right to determine the meanings associated with virtual space, presumably because the virtual space is a part of an application over which they have IP rights.

There is, however, a danger in applying a purely intellectual property framework to the situation of AR apps. IP ownership is only one aspect of overlaying a virtual space on top of a physical one. Other aspects of this behaviour, chiefly the allocation of risk when something goes wrong, are often separated from the beneficial aspects. Such is the situation with Pokémon Go: while profiting from the IP aspects of Pokéstops and gyms, Pokémon Go’s developers subtly avoid confronting the question of why property owners should bear the increased risks associated with the same act of designating a location as a Pokéstop or gym.

The development history of Pokémon Go’s Pokéstops and gyms illustrates the interest in keeping the IP and risk dimensions of the app separate. Pokémon Go was developed relatively quickly by importing a network of virtual landmarks from Niantic’s previous AR app, “Ingress”. These virtual landmarks were submitted by the users of “Ingress”, but drew little attention because of that app’s relatively small player base. Any risk of legal liability was passed on to the app’s users through the terms of service. With Pokémon Go’s success, however, the developers have begun to monetize their virtual landmarks by selling the right to become a “Pokéstop” or “gym” to businesses. For example, McDonald’s in Japan was the first business to sign on to the “sponsored locations” scheme. In spite of the app’s recent decline in popularity, businesses are still signing on to this model.

Presumably, the logic of sponsored locations is that businesses can leverage the success of Pokémon Go’s brand to increase their own revenues. However, this IP-focused interpretation zeroes in on only the commercial aspect of being designated a virtual landmark and keeps the other, potentially less positive, dimensions separate. In reality, when McDonald’s signed on to the sponsored locations scheme, the full range of consequences was probably considered and accounted for: the increase in occupiers’ liability, the possible nuisance created by swarming players, and the possibility of attracting unwanted app users. People living at or near virtual landmarks imported from Ingress, however, often did not even know that they were affected by the app and thus had no opportunity to negotiate the placement of the marker. Risk was allocated to them without their knowledge or consent.

Indeed, considering that Pokémon Go’s successful system depends on these virtual landmarks, it could even be argued that the company took advantage of someone else’s rights without paying compensation. The problem with this assertion is that there are no rights to the virtual space that exists at a particular location. While some thinkers have begun questioning whether real property rights should extend to the virtual space on top of it, few have explored this idea in detail.

One way to think about this question is to compare the placing of a virtual landmark to the placing of a sign on a physical space: both markers transmit information, impact the physical location, and have value because of the qualities of that physical location. The difference between signs on the internet and these virtual landmarks in an AR app is precisely that AR apps depend on and affect these physical locations.

Unlike advertising on the internet, virtual landmarks, where information is embedded in a location in virtual space as part of an AR app, are intricately bound up with the physical location on which they sit. Pokéstops are often established on top of landmarks and scenic locations because Pokémon Go advertises itself as an application that guides people to explore interesting locations in the real world. In addition, a certain concentration of virtual landmarks is required for the game to function properly (which is part of the reason why Pokémon Go is so difficult to play in rural regions). In both instances, Pokéstops derive value for the game based on attributes of the physical space on top of which they are placed.

Simultaneously, the benefit derived by Pokémon Go from placing these virtual landmarks also has an impact on the underlying physical space. The main impact is the increase in the number of people visiting a particular location, which carries with it associated consequences like increased noise levels, congestion on sidewalks, loitering, and the risk of harm. Only certain kinds of businesses can appropriately leverage the increase in visitors. For most residential areas, the result of being designated a virtual landmark is negative. Indeed, any potentially positive aspects of being designated a virtual landmark, such as possible increases in real estate value, could turn out to be less certain since the app developers can decide to remove the virtual landmark at their discretion.

Finally, the impact of layering information on top of a physical location is not to be underestimated. The Auschwitz Holocaust Museum incident, where a Pokémon Go player snapped a picture of a poison-gas Pokémon inside the museum, is a good example of how losing control of the ability to determine the meaning associated with a property publicly could undermine important aspects of the property, especially those with cultural significance. The Chinese takeover of the Pokémon Gym on top of Japan’s Yasukuni Shrine is another example of how dramatically an AR app could interfere with an owner or community’s ability to determine and preserve the meaning of a physical property. While everyone is free to hold their own opinions about what things mean, the overlaying of information through AR presents a new realm that resides in between the public display and the private mind.

Many of these issues exist because the legal dimensions of AR applications are ill-defined. As AR continues to develop, essential questions to be considered include “what is a virtual object” and “where is a virtual location”? Two legal frameworks come to mind. First, rights to physical space could be extended to the overlaid virtual space. This essentially makes the virtual space on top of a physical location an additional wall or sign area that is available for transmitting information, thus giving owners the ability to bargain for its use. Second, defining aspects of AR applications (such as virtual landmarks) as objects that could interact with the physical world may allow property owners to better defend themselves through the trespass framework, as they could now resist the placement of the virtual objects pre-emptively rather than wait for the scattered trespasses and nuisances that occur as a consequence of the placement of that object.

These developments could come either as a result of legislation or with courts interpreting virtual property into the existing property law frameworks. Another potential development in response to AR is the regulation of public space. With AR apps sending more people onto streets and into public spaces, issues of overcrowding in downtown spaces by AR players may prompt governments to regulate how AR developers guide player movement. As Professor Renee Sieber points out, the algorithms behind Pokémon Go are not objective and contain biases that affect which locations players are drawn to. How the movement aspect of AR apps is regulated can have significant implications not only for issues of discrimination, but also for issues of access to public spaces and the gentrification of space. Developers and regulators should be aware not only of how AR apps create movement and gatherings, but also of whom AR app users are pushing out of particular spaces, so as to avoid doing damage to already marginalized groups.

Wei Jiang is a J.D. student at the University of Ottawa, Faculty of Law. He is a Geothink student under the supervision of Professor Teresa Scassa.

Geothoughts Talks 4, 5, 6, & 7: Four Talks to Remember from the 2016 Summer Institute

Peter Johnson was one of four Geothink Co-Applicants who gave presentations on day two of the 2016 Geothink Summer Institute. Listen to their lectures here as podcasts.

By Drew Bush

Geothink’s Summer Institute may have concluded but, for those of you who missed it, we bring you four talks to remember. These lectures come from day two of the institute when four Geothink faculty members gave short talks on their different disciplinary approaches to evaluating open data.

The lectures feature Peter Johnson, an assistant professor in the University of Waterloo’s Department of Geography and Environmental Management; Teresa Scassa, Canada Research Chair in Information Law at the University of Ottawa; Pamela Robinson, associate professor in Ryerson University’s School of Urban and Regional Planning; and Geothink Head Renee Sieber, associate professor in McGill University’s Department of Geography and School of Environment.

Students at this year’s institute learned difficult lessons about applying actual open data to civic problems through group work and interactions with Toronto city officials, local organizations, and Geothink faculty. The last day of the institute culminated in a writing-skill incubator that gave participants the chance to practice communicating even the driest details of work with open data in a manner that grabs the attention of the public.

Held annually as part of a five-year Canadian Social Sciences and Humanities Research Council (SSHRC) partnership grant, the Summer Institute devotes three days of hands-on learning to topics important to research taking place in the grant. This year, each day of the institute alternated lectures and panel discussions with work sessions where instructors mentored groups one-on-one about the many aspects of open data.

Below we present you with a rare opportunity to learn about open data with our experts as they discuss important disciplinary perspectives for evaluating its value. You can also subscribe to these podcasts by finding them on iTunes.

Geothoughts Talk 4: Reflecting on the Success of Open Data: How Municipal Governments Evaluate Open Data Programs
Join Peter Johnson as he kicks off day two of Geothink’s 2016 Summer Institute by inviting students to dream that they are civil servants at the City of Toronto when the city receives a hypothetical “F” rating for its open data catalogue. From this starting premise, Johnson’s lecture interrogates how outside agencies, academics, and organizations evaluate municipal open data programs. In particular, he discusses problems with current impact studies such as the Open Data 500 and what other current evaluation techniques look like.

Geothoughts Talk 5: The Value of Open Data: A Legal Perspective

Teresa Scassa starts our fifth talk by discussing how those working in the discipline of law don’t usually participate in the evaluation of open data. Although those in law don’t themselves evaluate open data, she argues, legal statutes are often responsible for mandating such valuation. In particular, legal statutes often require specific types of data to be open. Furthermore, provisions in Canadian law such as the Open Courts Principle mean that many aspects of Canada’s legal system can be open by default.

Geothoughts Talk 6: Open Data: Questions and Techniques for Adding Civic Value
Pamela Robinson dispels the notion that open data derives value from economic benefits by instead discussing how such data can be used to fundamentally shift the relationship between civil society and institutions. She elaborates on this idea by noting that not all open data sets are created equal. Right now, she argues, the mixed ways in which open data is released can dramatically impact whether or not it’s useful to civic groups hoping to work with such data.

Geothoughts Talk 7: Measuring the Value of Open Data
In a talk that helps to summarize the previous three presentations, Renee Sieber discusses the different ways in which open data can be evaluated. She details many of the common quantitative metrics used—counting applications generated at a hackathon, the number of citizens engaged, or the economic output from a particular dataset—before discussing some qualitative indicators of the importance of a specific open data set. Some methods can likely capture certain aspects of open data better than others. She then poses a series of questions on how one can actually attach a value to the increased democracy or accountability gained by using open data.

If you have thoughts or questions about these podcasts, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Measuring the Value of Open Government Data – Summer Institute Day 2

Day two of Geothink’s 2016 Summer Institute began with short lectures on specific disciplinary perspectives on open data. Teresa Scassa, Canada Research Chair in Information Law at the University of Ottawa, gave a legal perspective on the value of open data.

By Drew Bush

Day two of the 2016 Summer Institute began with presentations from Geothink’s faculty that aimed to provide different disciplinary approaches to evaluating open data. Armed with this information, students spent the rest of the day working in groups to first create measures to value open data, and, second, role-play how differing sectors might use a specific type of data.

The morning began with 30-minute presentations from members of Geothink’s faculty. Peter Johnson, an assistant professor in the University of Waterloo’s Department of Geography and Environmental Management, led off with a presentation on how municipal governments evaluate the success of their open data programs.

“This is the situation that we sort of find ourselves in when it comes to evaluating open data,” Johnson told students. “There’s this sort of world outside of government that’s bent on evaluating open data. And those are people like me, academics, those are non-profits, those are, you know, private sector organizations who are looking at open data and trying to understand how is it being used. So this is kind of, I think, a sign that open data has arrived a little bit. Right? It’s not just this sort of dusty, sort of nerdy cobweb in the corner of the municipal government basement. It’s something that other people are noticing and other people are taking an interest in.”

Johnson was followed by Teresa Scassa, Canada Research Chair in Information Law at the University of Ottawa, with a legal perspective on the value of open data. Pamela Robinson, associate professor in Ryerson University’s School of Urban and Regional Planning, gave a civic-oriented approach to the value of open data, one that was intentionally at odds with the private sector.

“I’ll be really blunt, I’m not that interested in making money from open data,” Robinson told students in regard to the common municipal reason for opening data. “It’s important but it’s not my thing. As an urban planner, my primary preoccupation is about citizens’ relationships with their government. And I’m interested in the proposition that open data as an input into open government can fundamentally shift the relationship between civil society and institutions.”

Finally, Geothink Head Renee Sieber, associate professor in McGill University’s Department of Geography and School of Environment, provided a summary of the methods for evaluating open data.

Each of these short lectures was part of a comprehensive look at open data during the three-day institute. Students at this year’s institute learned difficult lessons about applying actual open data to civic problems and about how to evaluate the success of an open data program. In between activities on day two, students also heard from a panel of municipal officials and representatives of Toronto-based organizations working with open data.

Held annually as part of a five-year Canadian Social Sciences and Humanities Research Council (SSHRC) partnership grant, the Summer Institute devotes three days of hands-on learning to topics important to research taking place in the grant. This year, each day of the institute alternated lectures and panel discussions with work sessions where instructors mentored groups one-on-one about the many aspects of open data.

But many students struggled not only with thinking about how to evaluate the open data that they were working with, but also with how to determine the impact of any project that utilizes such an information source.

“I think a big challenge that I personally am facing is this idea of it’s supposed to have real improvement for society, it’s supposed to help society,” Rachel Bloom, from McGill University, said. “But we find that a lot of vulnerable populations actually won’t have access to these applications and the technology. So it’s kind of like trying to reconcile this idea of helping while also being aware that like maybe you are not actually reaching the population you are trying to help. Which is kind of what openness is about—is actually engaging the people personally.”

It is for such reasons that evaluating open data can be quite nuanced—an idea reflected in student group presentations on the topic. The presentations varied greatly: some student groups chose metrics based on the things a community might value and then established an outside monitor to observe datasets and report back to the community. Other students established a workflow to harness citizen input to evaluate open data through instruments such as online surveys.

An afternoon panel of local city officials and representatives from groups concerned with open data discussed the practical side of publishing, using, and evaluating open data as it stands today. The panel included Keith McDonald, former open data lead for the City of Toronto; Bryan Smith, co-founder and chief executive officer of ThinkData Works; Marcy Burchfield and Vishan Guyadeen, from The Neptis Foundation; and Dawn Walker and Curtis McCord, Geothink students from the University of Toronto who designed the Citizen’s Guide to Open Data.

Two of the primary concerns shared by panelists were the lack of standards governing how different municipalities provide open data, and the gap between how open data is provided and what businesses or citizens require to actually use it. Smith spoke of how early visions of students and application developers using open data to radically transform life in cities have not scaled up to the national level particularly well.

“What we are seeing, which I don’t think anyone predicted, is the large companies—mostly companies that run a bunch of apps that probably everyone here has on their phones—are the ones who are the biggest purveyors of open data,” Smith told students. Issues with the type and quantity of data (as well as differences between how data is provided in different places) have limited other players and even some of these big developers too.

For more on this discussion, check out an excerpt of the panel discussion below. We pick up the discussion as the panelists talk about standards in relation to the Open Government Partnership.

In role-playing activities, students considered the issues raised by the panel as well as the practical problems citizens or other groups might face in finding the open data they require. Concluding presentations included those from students playing the role of real estate developers, non-profits concerned with democracy, and a bicycle food courier service.

Stay tuned for the full audio of each professor’s talk, presented as podcasts here. Also check back on Geothink for a synopsis of day three, and, of course, watch more of our video clips (which we’ll be uploading in coming days) here.

If you have thoughts or questions about this article or the videos, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Geothink at the 2016 Annual Meeting of the American Association of Geographers

By Drew Bush

From March 29 to April 2, 2016, Geothink’s students, co-applicants, and collaborators presented their research and met with colleagues at the now-concluded 2016 Association of American Geographers (AAG) Annual Meeting in San Francisco, CA. Over the week, Geothinkers gave 11 presentations, organized six sessions, chaired five sessions, and were panellists on four sessions. See who attended here.

“This year’s AAG provided a great opportunity to get geographically diverse Geothinkers together,” Victoria Fast, a recently graduated doctoral student in Ryerson University’s Department of Geography and Environmental Studies, wrote in an e-mail to Geothink.ca. “I can’t think of a better place for a meeting about a special journal issue on open data; there are so many fresh, uncensored ideas flying around the conference, both inside and outside of sessions.”

Of particular note for Fast was Panel Session 1475, Gender & GIScience (see her Geothink.ca guest post here). Panelists in the session included Geothink Head Renee Sieber, associate professor in McGill University’s Department of Geography and School of Environment, and Geothink collaborator Sarah Elwood, a professor in the University of Washington’s Department of Geography.

Others agreed.

“A panel on gender and GIScience was refreshing and enlightening,” Geothink Co-Applicant Scott Bell, a professor of Geography and Planning at University of Saskatchewan, wrote to Geothink.ca.

“My presentation was in a day-long symposium on human dynamism,” he added. “It summarized a recently published Geothink-aligned paper on human mobility tracking and active transportation (published in the International Journal of Geographical Information Science). It seemed to go over pretty well; I’m glad I was in the day-long event, as the room was packed most of the day.”

For others, the high cost of the location meant they couldn’t stay for a full week or attend every single session. Still, they reported good turnout by members of the Geothink team.

“This year we did not organize a specific panel or panels, or specific sessions to showcase Geothink work,” wrote Geothink Co-Applicant Teresa Scassa, Canada Research Chair in Information Law and professor in the Faculty of Law at the University of Ottawa. “This meant that our presentations were dispersed across a variety of different sessions, on different days of the week.”

Many Geothinkers were also intimately involved in running parts of the conference.

“This was a standout AAG for me,” wrote Geothink researcher Alexander Aylett, a professor and researcher at the Institut national de la recherche scientifique, who ran three sessions (find an overview of Aylett’s sessions at www.smartgreencities.org). “In collaboration with Andrés Luque-Ayala from Durham University, we ran a full day of sessions on the overlap between ‘smart’ and ‘sustainable’ cities. We had some excellent presentations—including one from fellow Geothinker Pamela Robinson—and a strong turnout throughout the whole day. (Even at 8 AM, which was a shock to me!)”

For some students, it was the first time they had attended the meeting or presented their own research.

“This was my first time at the AAG,” said Geothink Newsletter Editor Suthee Sangiambut, a master’s student working with Sieber in McGill University’s Department of Geography. “I was quite excited to be at the event and was able to meet all kinds of geographers, all of whom had different ideas on what geography exactly is.”

“It was great to see how global events of the past years were shaping our discussions on the Geoweb, privacy, surveillance, national identity, immigration, and more,” he added. “Those at the Disrupt Geo session were able to hear perspectives from private sector and civil society sides, which was quite refreshing and is something I would like to see more of in the future.”

The AAG annual meeting has been held every year since the association’s founding in 1904. This year’s conference included more than 9,000 attendees.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca. We also want to thank Victoria Fast for her willingness to share photos from the 2016 AAG Annual Meeting.

Please find an abstract for the presentation mentioned in this article below.

Leveraging Sensor Networks to Study Human Spatial Behavior

Abstract:
In the past decade, society has entered a technological period characterized by mobile and smart computing that supports input and processing from users, services, and numerous sensors. The smartphones that most of us carry in our pockets can integrate input from sensors monitoring various external and internal sources (e.g., accelerometer, magnetometer, microphone, GPS, wireless internet, Bluetooth). These relatively raw inputs are processed on the phones to provide us with a seemingly unlimited number of applications. Furthermore, these raw inputs can be integrated and processed in ways that offer novel representations of human behaviour, both disaggregate and aggregate. As a result, new opportunities to examine and better understand human spatial behaviour are available. An application we report here involved monitoring a group of people over an extended period of time, with observations timed at relatively tightly spaced intervals (every 2 minutes). Such a research setting lends itself to both planned and natural experiments, the latter of which emerge as a result of the regular and ongoing nature of data collection. We will report on both a natural experiment and planned observations resulting from 3 separate implementations of our smartphone-based observations. The natural experiment that emerged in the context of our most recent month-long monitoring study of 28 participants using mobile phone-based ubiquitous sensor monitoring will be our focus, but it will be contextualized with related patterns from earlier studies. The implications for public health and transportation planning are discussed.

Citizen Science and Intellectual Property: A Guide for the Perplexed


Citizen scientists help researchers transcribe historical climate records and photograph natural phenomena.

By Naomi Bloch

The concept of the science hobbyist — the backyard astronomer staring up at the sky or the amateur ornithologist taking part in the annual Christmas bird count — is hardly a new one. What is notable today, however, is the scale and scope of new collaborations between research institutions and volunteer citizen scientists. These kinds of citizen science partnerships have inspired a new study by Geothink co-applicant Teresa Scassa and doctoral candidate Haewon Chung, called “Managing Intellectual Property Rights in Citizen Science: A Guide for Researchers and Citizen Scientists.”

Chung, now a Geothink Ph.D. student researcher at the University of Ottawa’s Faculty of Law, is interested in intellectual property (IP) law with a particular focus on digital ethics. Scassa is a Canada Research Chair in Information Law at the University of Ottawa. The two researchers will participate in a panel discussion and launch of their new work at the Wilson Center Commons Lab, in Washington, D.C., on December 10.


Teresa Scassa, Canada Research Chair in Information Law at the University of Ottawa

“Part of the point of the guide is to encourage people to take a proactive approach and think about what they want to get out of the citizen science project, and what they need to get out of the citizen science project,” Scassa said. “For example, if you need to publish your research results, or if you need to keep your data confidential because you’ve got private sector funding that requires that, how do you structure the IP side of things so that you can do that?”

Citizen science, broadly construed, involves the participation of non-professional scientists in scientific data gathering and the production of new scientific knowledge. In the realm of Geothink, this generally takes place in the context of volunteered geographic information (VGI) and contributions to geographic knowledge. Some projects may involve volunteers helping with laborious tasks such as transcribing historical climate data. In other cases, participants may be sharing geocoded photos or video footage, recording audio, or producing text narratives. In countries like the U.S. and Canada, such original, creative efforts are inherently protected by copyright law — something participants themselves may not even realize.

Do you copy?

The 80-page report is divided into three parts, beginning with a concise review of relevant areas of intellectual property law, including copyright, patent, trademark, and trade secret law, as well as specific considerations involved in the protection of traditional knowledge. Though copyright law differs around the world, copyright essentially grants certain exclusive rights to the authors of creative works. These rights usually include the right to control how a work is distributed, reproduced, and re-used. “It’s also significant because it arises inadvertently,” said Scassa.

Unlike other types of intellectual property such as patents, in many countries the creator is automatically granted copyright protections without taking any specific legal actions. “There are going to be copyright issues with respect to any website that’s created, and with respect to many different types of contributions that users might make, whether they’re text-based or photographs or video clips or whatever they might be,” Scassa said. “There are copyright issues with respect to compilations of data. And then, of course, those copyright issues are relevant if, for example, the researcher decides to publish in a closed access journal and the participants want access to those research results, and all these sorts of things.”

When institutional researchers initiate citizen science projects, there are commonly expectations regarding eventual publication of findings, data sharing, posting information online, as well as educational and civic aims. “Depending on the nature of the project, the users may expect to have total access to the research results — to any publications, but maybe also to all of the data that’s been gathered,” said Scassa. “So we encourage the researchers who are creating citizen science research projects to think about what the user community may be expecting from them in terms of the project design.”

Ethics and law in the balance

In the second section of the study, the authors explore some of the ethical issues that arise in light of IP law. This includes everything from appropriate attribution to uses of participants’ contributions as well as research output. “If you’re going to be collecting stories or traditional knowledge from a community, for example, then that’s going to result in some intellectual property,” Scassa said. “And the ethical requirements may be different from the bare legal requirements. Part of it is being aware of what the legal defaults are and how those might need to be altered in the context of the relationship that you have with your participants.”

Scassa notes that researchers’ relationship with citizen scientists is generally one among many. “Researchers at universities have a complex web of relationships,” Scassa said. “Their universities have IP policies; those IP policies might provide that all IP stemming from this research may belong to the university and not the researcher, so they may not be able to promise certain things in their projects. Their funders may have expectations, and their publishers may have expectations. They may also have expectations in terms of the ability, perhaps, at some point in the future to patent some of their research. So they have this complex web of relationships and their relationship with citizen scientists is one of those relationships. We encourage them to think about this web of relationships and these expectations and try and design accordingly.”

To help with this process, the third section of the study guides readers through the various types of licensing options that can be applied. The authors provide diverse examples from real-world citizen science projects both local and global, and a toolset to help project designers as well as participants understand their options. “We don’t want to create barriers,” Scassa said. “It’s a really complex area. We’re trying to make it as accessible and as useful as possible, just to try to get people thinking about these ideas.”

The Wilson Center panel discussion, “Legal Issues and Intellectual Property Rights in Citizen Science,” takes place at the Wilson Center Commons Lab, Washington, D.C., Wednesday, Dec. 10, 11 a.m.–12:30 p.m. ET. There will be a live webcast of the event.

Interested in learning more about intellectual property law and citizen science? Reach out to Teresa Scassa on Twitter: @TeresaScassa.

If you have thoughts or questions about this article, get in touch with Naomi Bloch, Geothink’s digital journalist, at naomi.bloch2@gmail.com.

Journalism: Storytelling in the Geodata Age

By Naomi Bloch

The rise of more accessible geospatial web tools along with expanding sources of open data have fostered a potent—if somewhat techno-utopian—civic vision. For those immersed in understanding this new digital landscape, one question often surfaces: who’s truly putting these resources to use?

The most reliable answer is perhaps an obvious one. “Journalists are making huge use of mapping and geodata for storytelling, for the visualization of stories, and for investigative reporting purposes,” said April Lindgren, an associate professor at Ryerson University’s School of Journalism and founding director of the Ryerson Journalism Research Centre, a Geothink partner organization.

As a scholar, Lindgren’s own research employs data mapping techniques to examine the geography of news coverage and the role of Canadian media in society. “Maps have actually been quite a powerful tool for us to explore patterns of local news and understand how it works. It opened up a whole new way of getting at and understanding the data because we were able to visualize it.

“Before that, it was the old problem of columns and reams of numbers,” Lindgren said. “But being able to map it allowed us to show geographically, yes, most of the news coverage is focused on downtown Toronto. So why is that? And what are the implications of not doing much coverage in other areas of the city? And furthermore, we mapped the types of topics. So what does it mean when most of the news that they publish about certain areas is crime coverage? What does that do in terms of the geographic stereotyping?”

Computer-assisted reporting revisited

Lindgren notes that the use of mapping and data analysis for actual journalistic purposes is not a new phenomenon. Over twenty years ago, in 1993, Miami Herald research editor Steve Doig won a Pulitzer Prize for his investigative coverage of Hurricane Andrew’s aftermath in Florida. The year prior, Doig and his colleagues spent several intensive months processing and evaluating two data sets—one that helped to map out property damage caused by the hurricane and another documenting wind speeds at different locations and times throughout the storm. “They noticed from using mapping that the damage was much more extensive in certain areas than in others, and then they started trying to figure out why that was, because weather-wise it was the same storm,” Lindgren explained.

“What Went Wrong > Miami Herald, December 20, 1992 > Page 1” (originally published Dec. 20, 1992). Flickr photo by Daniel X. O’Neil, licensed under CC BY 2.0

Further investigation unveiled that several different developers had been responsible for real estate construction in different regions. “And it led them to a conclusion and a very powerful piece of journalism showing that it had to do with the building standards of the different developers,” said Lindgren. “So that was one of the early uses of mapping and data journalism, showing what a useful tool it could be.”

As researchers raise questions about the skills and motivations that enable citizen engagement with open data and geospatial technologies, journalism schools are increasingly recognizing the need to integrate a formal understanding of data journalism into the curriculum.

At the 2014 Geothink Annual General Meeting, Lindgren met a fellow researcher with complementary interests—Marcy Burchfield, executive director of the Toronto-based Neptis Foundation. The aim of Neptis has been to apply the unique capabilities of mapping and spatial analysis to help decision makers and the public understand regional issues in the Greater Toronto Area. The Geothink encounter led to the development of a Neptis-led geodata workshop for senior-level students enrolled in Ryerson’s journalism school, exposing students to some statistics basics as well as the various challenges of working with spatial data to develop meaningful stories.

“Getting the data into a usable form, I think, is probably the biggest challenge technically for journalists,” said Lindgren. “Although the skills are rapidly increasing and we’re training our students to do that.”

At Ryerson, undergraduates are required to take an introductory digital journalism course that critically engages with social media and citizen journalism along with new forms of multimedia and alternative storytelling methods. A separate “visualizing facts” elective course aims to provide hands-on experience with various data visualization techniques including mapping, while reinforcing numeracy skills (something that, historically, journalists have not been known for).

Data’s fit for purpose?

CBC News’s crowdsourced, interactive “Pledge to Vote” map, part of their 2015 Canada Votes coverage.

In recent years Canadian data journalists have garnered international attention both for their creative uses of geodata and their involvement in the push for open access to government information. “One of the big problems is the availability of data,” Lindgren said. “What’s available? How good is it? How hard do you have to fight for it? Is it really available through an open data source or do you have to go through Freedom of Information to get it?”

While media outlets are increasingly exploring ways to engage the public in creating crowdsourced content by volunteering geodata, the data sets that journalists tend to be interested in—ideally, data that can support rich, informative stories relevant to the public interest—are not typically collected with the journalist in mind. In particular, government data sources have often been generated to support internal administrative needs, not to address transparency and accountability concerns per se. Data input decisions may not be documented, and agencies may “silently” post-process the information before distributing it to journalists or the greater public. This makes learning how to clean up inconsistent, non-standardized data developed for a very different audience a particularly important skill for journalists to acquire. Only then can a journalist build an understanding of the data’s patterns and the stories they can support.
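The clean-up work described above can be surprisingly mundane. As a minimal sketch—using invented sample data, not any real municipal release—the snippet below shows the kind of normalization a journalist might apply before any mapping or analysis is possible: the same ward appears under several inconsistent labels, and totals are wrong until those labels are reconciled.

```python
import csv
import io
from collections import defaultdict

# Hypothetical sample: property-standards complaints exported with
# inconsistent ward labels, the way FOI releases often arrive.
raw = """ward,complaints
Ward 1,12
ward 01,7
 WARD 1,5
Ward 2,3
"""

def normalize_ward(label):
    # Collapse case and whitespace, strip leading zeros: " WARD 01" -> "ward 1"
    parts = label.strip().lower().split()
    return " ".join((p.lstrip("0") or "0") if p.isdigit() else p for p in parts)

# Aggregate complaints per normalized ward.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[normalize_ward(row["ward"])] += int(row["complaints"])

print(dict(totals))  # {'ward 1': 24, 'ward 2': 3}
```

Without the normalization step, “Ward 1” would appear as three separate wards and any map built from the data would misrepresent where complaints cluster—exactly the kind of silent distortion the verification habits discussed below are meant to catch.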

“You’re only as good as your data,” Lindgren emphasized. “In some ways the act of journalism allows you to test the data and see how good it is. Because the data may be telling you one thing, but then when you go out on the ground and you start interviewing and looking around you may find that what you’re seeing and hearing doesn’t seem to match what the data is telling you.

“So right away, as a journalist you’re going to be suspicious of that. And there are two places where this could be wrong. Either you’re talking to the wrong people or you’re not talking to a broad enough range of people—or there might be something wrong with the data.”

Verifying data accuracy is a time-honoured tradition

Lindgren shared the example of a colleague who was investigating the issue of slum landlords. The reporter asked the municipality to provide data on property standards complaints. Upon receiving and eventually mapping the data, the reporter and his colleagues made a surprising discovery. “They noticed that there was a section of the city that didn’t have any complaints. They thought that was odd, because they knew that there were a lot of rental areas and low-income areas there, with people living in somewhat vulnerable housing situations.”

Ultimately, the dissonance between the story on the ground and the story in the data led the reporter to go back to the city seeking further verification, and the nature of the problem soon revealed itself. It seems that a summer student had been in charge of aggregating and disseminating the data to the journalists when the information was requested, and that student had overlooked one section of the city.

While this particular story reflects human error during the communication phase rather than the data collection phase, Lindgren points out that the strong journalistic traditions of seeking verification and being suspicious of information sources puts the media in a unique position to evaluate data’s quality. “Verification is a fundamental element of journalism. That’s what we do that’s different from anybody who is just commenting out there online. The main issue is: is it verifiable, and what’s the public interest? That’s the starting point.”

Where public and private interests intersect

What constitutes “public interest” is a conversation that still needs to happen. The push for open data, and the fact that personal information is increasingly accessible online, has led parties both within and beyond government to raise concerns about how to strike the balance between privacy and transparency—and what the right balance may be. Data sets often contain personal or identifying information, and cleansing the data of that information is not straightforward. Even when data appear anonymized on the surface, there are ever-increasing opportunities to combine and process seemingly unrelated data sets in ways that can identify individuals and compromise personal information. As Geothink co-applicant researcher Teresa Scassa has addressed more than once in her work, this is not a theoretical problem but a reality that is already occurring.
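The re-identification risk is easy to illustrate. The toy example below (entirely invented names and records) links two releases that each look harmless on their own—a de-identified health data set and a public roll with names—by joining on shared quasi-identifiers. When the combination of attributes is rare enough, the “anonymous” record resolves to one person.

```python
# Toy illustration with invented data: neither data set alone names a
# patient, but joining on shared quasi-identifiers does.
health_release = [  # de-identified: no names
    {"postal": "K1A", "birth_year": 1975, "diagnosis": "diabetes"},
    {"postal": "M5V", "birth_year": 1990, "diagnosis": "asthma"},
]
public_roll = [  # public record: names, no health data
    {"name": "A. Tremblay", "postal": "K1A", "birth_year": 1975},
    {"name": "B. Singh", "postal": "M5V", "birth_year": 1982},
]

def link(records_a, records_b, keys):
    # Join any two record sets on the quasi-identifier columns they share.
    matches = []
    for a in records_a:
        for b in records_b:
            if all(a[k] == b[k] for k in keys):
                matches.append({**a, **b})
    return matches

linked = link(health_release, public_roll, ["postal", "birth_year"])
print(linked)  # one record now pairs a name with a diagnosis
```

A postal prefix plus a birth year is already unique for many individuals, which is why stripping names from a data set does not, by itself, anonymize it.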

Lindgren, however, said she does not see data journalism as giving rise to new types of ethical concerns for the media. “Obviously, a balance has to be struck. But the reality is that oftentimes the data is very generalized. It really depends on what the issue is and what the information is.

“The whole privacy issue is really a red flag, a lot of times, for journalists, because it can be used by governments as a pretext for not releasing information that governments just don’t want the public to know. The two reasons they don’t release information is privacy and violating commercial interests, and then the third reason is political consideration, but they can’t couch it in those terms.”

In terms of how journalists themselves strike that balance, Lindgren said this must be assessed on a case by case basis. “Basically, our job is invading people’s space, quite often. So we have to—and we do—make those judgment calls every day. The data is just another layer of that, or another area where we’d have to think about it and have those discussions.

“What it comes down to is you’re weighing, what’s the public interest in this information? There’s no hard and fast rule. It depends on what the information is.”

If you have any questions for April, reach her on Twitter here: @aprilatryerson

If you have thoughts or questions about this article, get in touch with Naomi Bloch, Geothink’s digital journalist, at naomi.bloch2@gmail.com.

Crosspost: Canada’s Information Commissioner Tables Recommendations to Overhaul Access to Information Act

The Access to Information Act was first passed by parliament in 1983 (Photo courtesy of en.wikipedia.org).

This post is cross-posted with permission from Teresa Scassa, from her personal blog. Scassa is the Canada Research Chair in Information Law at the University of Ottawa.

By Teresa Scassa

Canada’s Access to Information Act is outdated and inadequate – and has been that way for a long time. Information Commissioners over the years have called for its amendment and reform, but generally with little success. The current Information Commissioner, Suzanne Legault, has seized the opportunity of Canada’s very public embrace of Open Government to table in Parliament a comprehensive series of recommendations for the modernization of the legislation.

The lengthy and well-documented report makes a total of 85 recommendations. This will only seem like a lot to those unfamiliar with the decrepit statute. Taken as a whole, the recommendations would transform the legislation into a modern statute based on international best practices and adapted both to the information age and to the global movement for greater government transparency and accountability.

The recommendations are grouped according to 8 broad themes. The first relates to extending the coverage of the Act to certain institutions and entities that are not currently subject to the legislation. These include the Prime Minister’s Office, offices of Ministers, the bodies that support Parliament (including the Board of Internal Economy, the Library of Parliament, and the Senate Ethics Commissioner), and the bodies that support the operations of the courts (including the Registry of the Supreme Court, the Courts Administration Service and the Canadian Judicial Council). A second category of recommendations relates to the need to bolster the right of access itself. Noting that the use of some technologies, such as instant messaging, may lead to the disappearance of any records of how and why certain decisions are made, the Commissioner recommends instituting a legal duty to document. She also recommends adding a duty to report any unauthorized loss or destruction of information. Under the current legislation, there are nationality-based restrictions on who may request access to information in the hands of the Canadian government. This doesn’t mean that non-Canadians cannot get access – they currently simply have to do it through a Canadian-based agent. Commissioner Legault sensibly recommends that the restrictions be removed. She also recommends the removal of all fees related to access requests.

The format in which information is released has also been a sore point for many of those requesting information. In a digital age, receiving information in reusable digital formats means that it can be quickly searched, analyzed, processed and reused. This can be important, for example, if a large volume of data is sought in order to analyze and discuss it, and perhaps even to convert it into tables, graphs, maps or other visual aids in order to inform a broader public. The Commissioner recommends that institutions be required to provide information to those requesting it “in an open, reusable, and accessible format by default”. Derogation from this rule would be permitted only in exceptional circumstances.

Persistent and significant delays in the release of requested information have also plagued the system at the federal level, with some considering these delays to be a form of deliberate obstruction. The Report includes 10 recommendations to address timeliness. The Commissioner has also set out 32 recommendations designed to maximize disclosure, largely by reworking the current spider’s web of exclusions and exemptions. The goal in some cases is to replace outright exclusions with more discretionary exemptions; in other cases, it is to replace exemptions scattered across other statutes with those in the statute and under the oversight of the Information Commissioner. In some cases, the Commissioner recommends reworking current exemptions so as to maximize disclosure.

Oversight has also been a recurring problem at the federal level. Currently, the Commissioner operates on an ombuds model – she can review complaints regarding refusals to grant access, inadequate responses, lack of timeliness, excessive fees, and so on. However, she can only make recommendations, and has no order-making powers. She recommends that Canada move to an order-making model, giving the Information Commissioner expanded powers to oversee compliance with the legal obligations set out in the legislation. She also recommends new audit powers for the Commissioner, as well as requirements that government institutions consult on proposed legislation that might affect access to information, and submit access to information impact assessments where changes to programs or activities might affect access to information. In addition, Commissioner Legault recommends that the Commissioner be given the authority to carry out education activities aimed at the public and to conduct or fund research.

Along with the order-making powers, the Commissioner is also seeking more significant consequences for failures to comply with the legislation. Penalties would attach to obstruction of access requests, the destruction, altering or falsification of records, failures to document decision-making processes, and failures to report on unauthorized loss or destruction of information.

In keeping with the government’s professed commitments to Open Government, the report includes a number of recommendations in support of a move towards proactive disclosure. The goal of proactive disclosure is to have government departments and institutions automatically release information that is clearly of public interest, without waiting for an access to information request. Although the Action Plan on Open Government 2014-2016 sets goals for proactive disclosure, the Commissioner is recommending that the legislation be amended to include concrete obligations.

The Commissioner is, of course, not alone in calling for reform to the Access to Information Act. A private member’s bill introduced in 2014 by Liberal leader Justin Trudeau also proposes reforms to the legislation, although these are by no means as comprehensive as what is found in Commissioner Legault’s report.

In 2012 Canada joined the Open Government Partnership and committed itself to an Action Plan on Open Government. This Action Plan contains commitments grouped under three headings: Open Information, Open Data and Open Dialogue. Yet its commitments to improving access to information are focussed on streamlining processes (for example, by making it possible to file and pay for access requests online, creating a virtual library, and making it easier to search for government information online). The most recent version of the Action Plan similarly contains no commitments to reform the legislation. This unwillingness to tackle the major and substantive issues facing access to information in Canada is a serious impediment to realizing an open government agenda. A systemic reform of the Access to Information Act, such as that proposed by the Information Commissioner, is required.

What do you think about Canada’s Access to Information Act? Let us know on twitter @geothinkca.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Geothink Video Interview 1: Teresa Scassa, University of Ottawa

By Drew Bush

This Geothink Video Interview brings us a closeup look at the work and ideas of Teresa Scassa, Canada Research Chair in Information Law at the University of Ottawa. In particular, we talk with her about her views on Canada’s Action Plan for Open Government 2.0, problems with open access under the plan, the idea of making government data open by default and the role of academics (like those in Geothink) in making government more transparent.

Find the interview below. As always, all thoughts and comments are welcome. And, of course, stay tuned for more videos and podcasts soon on Geothink.ca.

If you have thoughts or questions about the video, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.