Category Archives: In The News

Crosspost: Looking at Crowdsourcing’s Big Picture with Daren Brabham

This post is cross-posted with permission from the personal blog of Daren C. Brabham, Ph.D. Brabham is a Geothink partner at the University of Southern California Annenberg School for Communication and Journalism, where he was the first to publish scholarly research using the word “crowdsourcing.”

by Daren C. Brabham

In this post, I provide an overview of crowdsourcing and a way to think about it practically as a problem solving tool that takes on four different forms. I have been refining this definition and typology of crowdsourcing for several years and in conversation with scholars and practitioners from diverse fields. Plenty of people disagree with my characterization of crowdsourcing and many have offered their own typologies for understanding this process, but I submit that this way of thinking about crowdsourcing as a versatile problem solving model still holds up.

I define crowdsourcing as an online, distributed problem solving and production model that leverages online communities to serve organizational goals. Crowdsourcing blends bottom-up, open innovation concepts with top-down, traditional management structures so that organizations can effectively tap the collective intelligence of online communities for specific purposes. Wikis and open source software production are not considered crowdsourcing because there is no sponsoring organization at the top directing the labor of individuals in the online community. And when an organization outsources work to another person–even if that work is digital or technology-focused–that is not considered crowdsourcing either because there is no open opportunity for others to try their hands at that task.

There are four types of crowdsourcing approaches, based on the kinds of problems they solve:

1. The Knowledge Discovery and Management (KDM) crowdsourcing approach concerns information management problems where the information needed by an organization is located outside the firm, “out there” on the Internet or in daily life. When organizations use a KDM approach to crowdsourcing, they issue a challenge to an online community, which then responds to the challenge by finding and reporting information in a given format back to the organization, for the organization’s benefit. This method is suitable for building collective resources. Many mapping-related activities follow this logic.

2. The Distributed Human Intelligence Tasking (DHIT) crowdsourcing approach concerns information management problems where the organization has the information it needs in-hand but needs that batch of information analyzed or processed by humans. The organization takes the information, decomposes the batch into small “microtasks,” and distributes the tasks to an online community willing to perform the work. This method is ideal for data analysis problems not suitable for efficient processing by computers.

3. The Broadcast Search (BS) crowdsourcing approach concerns ideation problems that require empirically provable solutions. The organization has a problem it needs solved, opens the problem to an online community in the form of a challenge, and the online community submits possible solutions. The correct solution is a novel approach or design that meets the specifications outlined in the challenge. This method is ideal for scientific problem solving.

4. The Peer-Vetted Creative Production (PVCP) crowdsourcing approach concerns ideation problems where the “correct” solutions are matters of taste, market support, or public opinion. The organization has a problem it needs solved and opens the challenge up to an online community. The online community then submits possible solutions and has a method for choosing the best ideas submitted. This way, the online community is engaged both in the creation and selection of solutions to the problem. This method is ideal for aesthetic, design, or policy-making problems.

This handy decision tree below can help an organization figure out what crowdsourcing approach to take. The first question an organization should ask about their problem solving needs is whether their problem is an information management one or one concerned with ideation, or the generation of new ideas. In the information management direction, the next question to consider is if the challenge is to have an online community go out and find information and assemble it in a common resource (KDM) or if the challenge is to use an online community to process an existing batch of information (DHIT). On the ideation side, the question is whether the resulting solution will be objectively true (BS) or the solution will be one that will be supported through opinion or market support (PVCP).


A decision tree for determining appropriate crowdsourcing approaches for different problems. Source: Brabham, D. C., Ribisl, K. M., Kirchner, T. R., & Bernhardt, J. M. (2014). Crowdsourcing applications for public health. American Journal of Preventive Medicine, 46(2), 179-187.
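The branching logic of the decision tree can be sketched in a few lines of code. The function below is a hypothetical illustration of Brabham's typology, not part of the original article:

```python
def choose_crowdsourcing_approach(problem_type, task):
    """Sketch of the decision tree for picking a crowdsourcing approach.

    problem_type: 'information_management' or 'ideation'
    task: for information management, 'find' (collect information from
          the world) or 'process' (analyze an existing batch);
          for ideation, 'objective' (empirically provable solution) or
          'subjective' (a matter of taste, market support, or opinion).
    """
    if problem_type == "information_management":
        if task == "find":
            return "Knowledge Discovery and Management (KDM)"
        return "Distributed Human Intelligence Tasking (DHIT)"
    if task == "objective":
        return "Broadcast Search (BS)"
    return "Peer-Vetted Creative Production (PVCP)"

# A community mapping project: information to be found "out there" -> KDM
print(choose_crowdsourcing_approach("information_management", "find"))
# A logo contest judged by public vote: subjective ideation -> PVCP
print(choose_crowdsourcing_approach("ideation", "subjective"))
```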

I hope this conception of crowdsourcing is easy to understand and practically useful. Given this outlook, the big question I always like to ask is how we can mobilize online communities to solve our world’s most pressing problems. What new problems can you think of addressing with the power of crowds?

Daren C. Brabham, Ph.D., is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He was the first to publish scholarly research using the word “crowdsourcing,” and his work focuses on translating the crowdsourcing model into new applications to serve the public good. He is the author of the books Crowdsourcing (MIT Press, 2013) and Crowdsourcing in the Public Sector (Georgetown University Press, 2015).

An Emergent Field: Making Better Maps By Applying Geographical Science to the Human Brain


Monitoring the human brain. (Photo courtesy of Flickr user Tabsinthe)

By Drew Bush

City planners often use socioeconomic data collected in surveys to identify neighbourhoods that might benefit from improved services. Yet such data can carry a significant margin of error, especially when collected from a relatively small group.

“Especially if you want to look at a small subset of the population—say, kids under 5 who are living in poverty—the uncertainty level is just huge,” Amy Griffin, a senior lecturer in the School of Physical, Environmental, and Mathematical Sciences at Australia’s University of New South Wales told CityLab’s Laura Bliss last November. “A lot of times, people are making decisions based on highly uncertain census data.”

Her research looks at how different kinds of visualizations can affect decision-making, with an emphasis on understanding the cognitive processes behind the use of maps. Others in her field study how the human brain engages with maps in order to improve map-making and the role maps play in municipal governance.

Some in the field study neuroscience from the uniquely geographical perspective of how the human brain reacts to maps with differing standards, visual cues and rules. Sara Irina Fabrikant, head of the Geography Department at the University of Zurich, dedicates her time to understanding how users make inferences from the design elements on a map, and how mapmakers might then design maps that convey data more clearly.

For example, in experiments she conducted to compare a NOAA (National Oceanic and Atmospheric Administration) mass-media weather map versus one of her own design, she found design elements could be used to simplify confusing variables and help users better understand characteristics like wind pressure. Her map included well-defined contour lines for wind pressure and less emphasis on temperature colors.

“Are our rules really good ones?” Fabrikant told CityLab. “If so, how? If they’re not, how can we improve them?”

Amy Lobben, head of the Department of Geography at the University of Oregon, asks a different question. She wants to know what neurological processes are at play as individual brains perform map-related tasks. Her possible goal: to create a map that plays off a given person’s cognitive strengths and weaknesses.

“You could potentially design a map that works with individuals’ innate abilities,” she told CityLab.

If you have thoughts or questions about the article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Open Data Gets Boost from Obama’s 2016 Budget Proposal


Obama’s 2016 federal budget proposal (Photo courtesy of www.philstar.com).

By Drew Bush

Lost in the details of a $4 trillion budget plan proposed by U.S. President Barack Obama are several provisions that seek to increase public access to government data, strengthen government analysis and collection of data and improve data-driven government decision-making, according to a story first reported in the Federal Times.

Released on February 2, Obama’s proposed 2016 budget for the federal government will likely meet strong resistance from a Republican-controlled Congress. Of more interest to the Geothink audience, however, is the administration’s continuing support for programs that help collect, analyze and share the petabytes of data the U.S. Government collects each day.

“The administration is committed to continuing cost-effective investment in federal statistical programs in order to build and support agencies’ capacity to incorporate evidence and evaluation analyses into budget, management and policy decisions,” the budget reads. “The 2016 budget includes a package of proposals that would make additional administrative data from federal agencies and programs legally and practically available for policy development, program evaluation, performance measurement and accountability and transparency efforts.”

In terms of numbers, the proposed budget would increase funding for statistical programs by 2.5 percent, from $4.2 billion in 2015 to $5.2 billion in 2016. One of the largest shares would go to the U.S. Census Bureau, which would receive $10 million to continue building out its collection of datasets and the infrastructure that allows users to collate, analyze and share data.

The funding would also help the federal government acquire state and municipal datasets that could then become accessible to the public. Furthermore, an additional $2 million would raise the General Service Administration’s E-Government initiative to $16 million. This program seeks to “develop and provides direction in the use of Internet-based technologies to make it easier for citizens and businesses to interact with the Federal Government, save taxpayer dollars, and streamline citizen participation.”

The administration has also supported the creation of a legislative commission proposed by Rep. Paul Ryan, R-Wisc., and Sen. Patty Murray, D-Wash., to examine ways to use government data to improve federal operations and support private research and industry.

If you have thoughts or questions about the article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Tracey P. Lauriault on Citizen Engagement (or lack thereof) with Canada’s Action Plan on Open Government 2.0


Tracey is a Postdoctoral Researcher in the new field of Critical Data Studies.

By Drew Bush

More than 1,450 individuals collectively generated 2,010 ideas, comments and questions for the Canadian Government on its Action Plan for Open Government 2.0. But one researcher with The Programmable City project who studies open data and open government in Canada feels these numbers miss the real story.

The process leading up to the “What We Heard” report, issued after the completion of consultations from April 24–October 20, 2014, only reflected the enthusiasm of the open data programming community, she says. A broader engagement with civil society organizations that most need help from the government to accomplish their work was severely lacking.

“They might be really good at making an app and taking near real-time transit data and coming up with a beautiful app with a fantastic algorithm that will tell you within the millisecond how fast the bus is coming,” Tracey Lauriault, a postdoctoral researcher at the National Institute for Regional and Spatial Analysis (NIRSA), said. “But those aren’t the same people who will sit at a transit committee meeting.”

She believes the government has failed to continue to include important civil society groups in discussions of the plan. Those left out have included community-based organizations, cities having urban planning debates, anti-poverty groups, transportation planning boards and environmental groups. She’s personally tried to include organizations such as the Canadian Council on Social Development or the Federation of Canadian Municipalities only to have their opinions become lost in the process.

“There is I think a sincere wish to collect information from the people who attend but then that’s it,” she said.  “There is no follow up with some people or the comments that are made—or even an assessment, a careful assessment, of who’s in the room and what they’re saying.”

“I’m generally disappointed in what I see in most of these documents,” she added. “When they were delivering or working towards open data back in 2004, 2005 it was really about democratic deliberation and evidenced-informed decision-making—making sure citizens and civil society groups could debate on par with the same types of resources government officials had.”

For its part, the government notes that 18 percent of the participants came from civil society groups. But such groups were really just ad-hoc groups that advocate for data or are otherwise involved in aspects of new technology, according to Lauriault. Such input, while useful, is usually limited to requests for datasets, ranking what kind of dataset you’d like to see or choosing what platforms to use to view it, she added.

The report itself notes comments came from the Advisory Panel on Open Government, online forums, in-person sessions, email submissions, Twitter (hashtag #OGAP2), and LinkedIn. In general, participants requested quicker, easier, and more meaningful access to their government, and expressed a desire to be involved in government decision-making beyond consultations.

Some suggested that the Government of Canada could go even further toward improving transparency in the extractives sector. For example, proposed legislation to establish mandatory reporting standards could stipulate that extractives data be disclosed in open, machine-readable formats based on a standard template with uniform definitions.


Major themes to emerge from citizen comments on the “What We Heard Report” (Image courtesy of the Government of Canada).

Find out more about this figure or the “What We Heard” report here.

If you have thoughts or questions about the article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

CODE Hackathon Set to Kick Off as New Report Finds the World’s Governments Slow to Open Governmental Data


A new year for open data? (Photo Credit: Tactical Technology Collective)

By Drew Bush

In the first weeks of the New Year, two important news items for the Geothink audience made headlines. In Toronto, the Canadian federal government got ready to kick off its second annual multi-city Canadian Open Data Experience (CODE), while the World Wide Web Foundation ranked the United States 2nd and Canada 7th for openness of governmental data in its second annual Open Data Barometer.

Canada Ranked 7th

Canada tied with Norway out of 86 countries surveyed on whether government data was “open by default,” as stipulated in the 2013 G8 Open Data Charter. Of more importance, however, was the country’s positive movement in rankings and scores from last year: Canada moved one spot up the index.

The survey examines availability of core government data such as company registers, public sector contracts, land titles, how governments spend money and how well public services perform. The U.K. is considered the global leader for open government data, publishing nearly all of these types of data.

Globally, the authors of the report state “there is still a long way to go to put the power of data in the hands of citizens. Core data on how governments are spending our money and how public services are performing remains inaccessible or pay-walled in most countries.”

That’s because fewer than 8 percent of surveyed countries publish datasets on subjects like government budgets, spending and contracts, and on the ownership of companies, in bulk machine-readable formats and under open re-use licenses.

A few key highlights of the report:

  • Only the U.K. and Canada publish land ownership data in open formats and under open licenses.
  • Only the U.K. and the U.S. publish detailed open data on government spending.
  • Only the U.S., Canada and France publish open data on national environment statistics.
  • Open mapping data is published only in the U.K., the U.S. and Germany (an area where Canada lags).

CODE Hackathon Kicks Off

In Toronto, developers, graphic designers, students, and anyone interested in trying their hand at coding are getting ready to create innovative uses for the Canadian government’s open data and to win up to $15,000 from the Government of Canada. The 48-hour event is set to begin on February 20th.

Innovations developed at hackathons like this could one day fuel improvements in access to government data. The event attracted 927 developers in 2013, and organizers said that number had grown to over 1,000 by the day of this year’s event.

“Open data is a brand new industry,” Ray Sharma, founder of the event and XMG Studios, told CTV News. “We are in an iceberg situation where we’ve only seen the tip of the data that will become available.”

But just what kind of industry it will be is open to debate, as Geothink researchers Peter Johnson and Pamela Robinson examined in a recent paper. Their questions included whether civic hackathons have the potential to replace the traditional ways that government purchases products and services, and whether these events can be considered new vectors for citizen engagement, according to a post Johnson wrote for Geothink.

For more on CODE, you can watch Canada’s President of Treasury Board, Tony Clement here or read more about this year’s event here.

If you have thoughts or questions about the article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Canada Action Plan on Open Government 2.0: Much Still To Do?


Canada recently completed their public consultation on Open Government (Photo source).

By Drew Bush

Introduction

For the savvy traveller headed over Canada’s border this holiday season, Canada’s Action Plan on Open Government 2.0 holds promise. A visit to the site in December 2014 yielded a multi-media list of steps to follow when travelling abroad and even an iOS “Travel Smart” application.

Drafted after the June 2013 G8 Summit, Canada’s plan results from agreements made when it signed on to the summit’s Open Data Charter, which lays the foundation for using open data to promote best practice in government.

As a result, Canadians can now get online help with more than just travel. Ever wanted to know how much tax money you spend on government contracts? Or need information on the fuel consumption of a car you might buy?

The goal of the 65 nations committed to these plans is to increase government transparency and accountability, encourage citizen engagement, and stimulate innovation and economic opportunities.

History

Making this type of data more freely available fits with a long tradition in Canada. When the country began participating in the Open Government Partnership (OGP) in September 2011, it committed to making open data (machine-readable data that can be freely used, re-used and redistributed) available to anyone, subject only to requirements to attribute the source and share alike.

Applications of Web 2.0 technologies and social media allow for these types of interactions with information, datasets and records online. In fact, many modern computer programs incorporate Application Programming Interfaces (APIs) to give users access to datasets.
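As a concrete illustration, many open data portals expose search APIs that return dataset listings as JSON. The sketch below parses a response shaped like the widely used CKAN portal software's `package_search` action; the sample response and dataset shown here are invented for illustration, not taken from an actual portal:

```python
import json

# A sample response shaped like CKAN's /api/3/action/package_search output
# (the dataset below is invented for illustration).
sample_response = """
{
  "success": true,
  "result": {
    "count": 1,
    "results": [
      {"name": "fuel-consumption-ratings",
       "title": "Fuel Consumption Ratings",
       "resources": [{"format": "CSV",
                      "url": "http://example.org/fuel.csv"}]}
    ]
  }
}
"""

def list_datasets(response_text):
    """Return (title, formats) pairs for each dataset in a search response."""
    payload = json.loads(response_text)
    if not payload["success"]:
        raise RuntimeError("API call failed")
    return [(pkg["title"], [r["format"] for r in pkg["resources"]])
            for pkg in payload["result"]["results"]]

print(list_datasets(sample_response))
# [('Fuel Consumption Ratings', ['CSV'])]
```

In practice the same parsing would be applied to the body of an HTTP GET request against a live portal's search endpoint.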

The Open Data Charter recognizes the central role open data plays in improving governance and stimulating innovation in data-driven products and services. It endorses the principle of open by default, an idea also supported by U.S. President Barack Obama’s 2013 Executive Order on open data.

The drafting of the Charter and Obama’s order have elicited praise but also criticism. As Rufus Pollock, Founder of the Open Knowledge Foundation, wrote on his foundation’s blog, “there is still much for the G8, and other countries, to do.” In particular, the early results from an Open Data Census in July 2013 show that G8 countries have a long way to go in opening up essential data.

User Generated Input

Making data and information more available to Canadians isn’t the only goal of the plan. Open government is increasingly becoming a positive force for unity and international cooperation, according to Canada’s President of the Treasury Board, Tony Clement, in his statement “About Open Government”. He claims that open data makes government “more open, accessible, and responsive” by harnessing the “collective ingenuity, drive, and imagination of its people.”

In Canada, this means finding a way for citizens to engage in a two-way dialogue and even contribute datasets. In 2014, the Canadian Open Data Experience appathon again brought together government, industry, academia, and the public to mash up, reuse and remix federal government data. Events like these and communities the plan encourages around interest areas like maps, labour and law help encourage the development of useful, effective applications that use government data.

Short History of Open Government in Canada

  • The Open Government Partnership formally launched on September 20, 2011 when eight founding governments (Brazil, Indonesia, Mexico, Norway, the Philippines, South Africa, the United Kingdom and the United States) endorsed the Open Government Declaration, and announced their country action plans. Canada joined the partnership later that year.
  • On March 18, 2011, the Government of Canada announced its commitment to an open government initiative that focuses on three areas: (1) making information such as records and activity more easily accessible; (2) making raw data available in machine-readable formats to citizens, governments, and non-profit/private sector organizations; and (3) giving citizens an opportunity for dialogue on federal policies.
  • In 2011 the Government of Canada launched an Open Data Portal – data.gc.ca – which now has more than 272,000 datasets from 20 departments and which has already resulted in over 100,000 dataset downloads since its launch.
  • In 2012, all government departments began publishing monthly summaries of completed Access to Information (ATI) requests on their websites.
  • In 2012, the Government of Canada issued its enhanced Values and Ethics Code of conduct for all public officials.
  • A 2013 Government of Canada Social Sciences and Humanities Research Council (SSHRC) partnership grant asks ‘How the Geospatial Web 2.0 is Reshaping Government-Citizen Interactions.’ Geothink now includes 13 team members and 36 collaborators and partners.

If you have thoughts or questions about the article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Re-identification Risk and Proactive Disclosure of Data for Open Government: Lessons from the Supreme Court of Canada?

By Teresa Scassa

One of the challenges with the proactive disclosure of government data, and with open data more generally, is the obligation that governments have to not disclose personal information. This challenge is made more acute by the fact that the definition of “personal information” is, generally speaking, “information about an identifiable individual”.

Courts in Canada have said that identifiability is not considered solely in the context of the particular data set in question – information is personal information if it can lead to the identification of an individual when it is matched with information from virtually any other source.

The Supreme Court of Canada has just released a decision dealing with the issue of whether Ontario’s Ministry of Community Safety and Correctional Services was right to refuse to disclose information relating to the province’s sex offender registry. The concern in this case was that although the applicant sought only data about sex offenders living within particular forward sortation areas, indicated by the first three characters of a postal code, this information could still be matched with other available information to specifically identify and locate individuals. Although the case deals with the province’s access to information regime, lessons can be extracted that are relevant both to the proactive disclosure of government information and to open data.

For more detail, see my blog post about this case, here: http://www.teresascassa.ca/index.php?option=com_k2&view=item&id=159:re-identification-risk-and-proactive-disclosure-of-data-for-open-government-lessons-from-the-supreme-court-of-canada?&Itemid=80

Teresa Scassa, University of Ottawa, Faculty of Law

Hopping the Geofence: A Quick Look at Geofencing Practices

By Matthew Tenney

As we walk, drive, or skip down the road, most of us are actively sharing bits of information about ourselves with anyone who cares to listen. The mobile technology we carry with us nearly every place we go is surrounded by a field of sensors that register where, and who, we are. Often these sensors can talk back to us through email, SMS, or other media, directly to our smartly connected pockets. What they are listening for, and what they do with this information, varies with the kind of hardware being used and the purpose of whoever is listening in the first place.

A strategy known as geofencing uses location-based services (LBS) to define virtual perimeters around real-world geographic areas, delineated by sensor networks. These invisible fences act as both partitions and catchment areas, and they are quite heavily used in today’s digital world, whether you are aware of it or not.

One example of geofencing is commercial: retail centers use it to disseminate marketing materials. A shopping center can create a radius of interest (a guesstimated trade area) around its locations and “watch,” via various LBS, everyone who enters and exits. Sometimes these fences will alert you with an ad about a big fire sale on paperback books, or extend an exclusive deal to you for being in the right place at the right time.
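At its simplest, deciding whether a device is inside such a circular fence is a distance check. The sketch below uses the standard haversine formula for great-circle distance; the shopping-centre coordinates are hypothetical, chosen for illustration:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, center, radius_km):
    """True if the user's (lat, lon) falls within the circular fence."""
    return haversine_km(*user, *center) <= radius_km

# A 2 km fence around a hypothetical shopping centre in downtown Toronto.
mall = (43.6544, -79.3807)
print(inside_geofence((43.6629, -79.3957), mall, 2.0))  # nearby point -> True
print(inside_geofence((43.7615, -79.4111), mall, 2.0))  # ~12 km away -> False
```

Real geofencing services layer entry/exit event detection and battery-efficient sensing on top of this basic containment test.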

Another popular use of geofences is for safety and security, where they act more like their traditional counterpart: a barrier between places or people. Digital childcare services can track the real-time whereabouts of children and offer different levels of safeguard, sending alerts when these borders are crossed. High-security facilities can take advantage of geofences both inside and outside of buildings; when sensitive materials are at risk, geofences act as invisible alarm systems that keep both digital and physical materials from leaving authorized areas.

Civic and community organizations have also been using geofences. School and college campuses offer geofences for secure network access to things like student records and other services. Sporting events can send real-time alerts about the game to fans out in the parking lot. Neighborhoods provide community-wide Wi-Fi to residents or visitors and share community events. Residents can also use these geofenced zones as if they mirrored physically gated neighborhoods, extending heartfelt welcomes, or stern warnings, to those who enter their perimeter. The state of Texas in the U.S. also sends SMS alerts to drivers on the interstate system about accidents, missing persons, and other public-service and emergency announcements.

For social networking, geofences can provide an intranet of connections between people who occupy the same geographic location. By providing a means to share messages with peers and outsiders about events and activities, geofencing can allow people to form co-located digital cliques based on similarities of interest and location.

While geofencing carries plenty of straightforward advantages, it is by definition a procedure of separating people, places, and things through processes of inclusion and exclusion. These processes, whether engineered or naturally formed, define more than geographic regions: they also lay claim to people, services, and resources through the quick and ready use of widespread modern technologies.

|A∪B| = |A| + |B| - |A∩B|
Figure 1: The inclusion-exclusion principle of mathematics. For illustrative purposes only. After all, everyone loves a good equation now and again.
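The principle is easy to verify with a quick sketch, treating the device IDs seen inside two hypothetical overlapping fences as sets (all values below are invented for illustration):

```python
# Hypothetical device IDs observed inside two overlapping geofences.
fence_a = {"d1", "d2", "d3", "d4"}
fence_b = {"d3", "d4", "d5"}

union = fence_a | fence_b          # devices seen in either fence
intersection = fence_a & fence_b   # devices seen in both fences

# |A ∪ B| = |A| + |B| - |A ∩ B|
assert len(union) == len(fence_a) + len(fence_b) - len(intersection)
print(len(fence_a), len(fence_b), len(intersection), len(union))  # 4 3 2 5
```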

Social exclusion refers to processes by which individuals or groups are blocked from rights, opportunities and resources (e.g. housing, employment, healthcare, civic engagement, democratic participation and due process) that are normally available to members of society and that are key to social integration. Social inclusion is the opposite process: offering these things to the people and places to which they belong.

There is an apparent risk that geofencing becomes an updated version of redlining, the discriminatory practice that isolated certain people by socio-demographic traits like race and class, but one that dictates membership dynamically: “now you’re in, now you’re not,” instantaneously.

A geofencing community can lay claim to another geographic area or alter its borders as needed, grabbing a few people here and cutting a few out over there. This can create territorial bounds for community and individual identities while destroying the reputations of others. Social elitists can demarcate the newest hip scene to be seen in while kicking outdated venues or areas to the curb as yesterday’s hangout spots.

The implications of this rapid construction and destruction of identities have yet to be fully understood, but one can wonder: what will the Brooklyn of tomorrow look like? Or, better yet, where will it be tomorrow, and who is “in” now and “out” like yesterday?

Understanding, promoting, and preventing the right and wrong outcomes of geofencing requires a deeper understanding of what kinds of information are being shared, who is using them, and for what purposes. With technology showing no sign of abating the amount of digital information that can be shared through LBS, geofencing is an inevitable concern for all of us. Whichever side of the digital divide we are on, geofencing will likely define how we understand concepts of the city, and of ourselves, in the future.