Geothink Program Guide for the Association of American Geographers (AAG) 2015 Annual Meeting

This year’s Association of American Geographers (AAG) Annual Meeting is in Chicago, Illinois (Photo courtesy of AAG.org).

By Drew Bush

A long line-up of Geothinkers will be presenting at this year’s Association of American Geographers (AAG) Annual Meeting in Chicago next week. You won’t want to miss four of our team members serving as panelists on “Civic technology: governance, equity and inclusion considerations” on Thursday at 8:00 AM. Other highlights include presentations by Geothink Principal Investigator Renee Sieber and by our students, including Cheryl Power and Tenille Brown.

Below we’ve compiled the schedule for all of the project’s team members, collaborators and students who will be presenters, panelists and chairs during the conference. Find a PDF of our guide here. We hope it helps you find the right sessions to join. You can also find the full preliminary AAG program here.

If you’re not able to make the conference, you can follow along on Twitter and use our list of Twitter handles below to join the conversation with our participants.

Join the Conversation on Twitter
Alex Aylett: @openalex_
Tenille Brown: @TenilleEBrown
Jonathan Corbett: @joncorbett
Sarah Elwood: @SarahElwood1
Victoria Fast: @VVFast
Muki Haklay: @mhaklay
Peter Johnson: @peterajohnson
Andrea Minano: @Andrea_Minano
Zorica Nedovic-Budic: @TurasCities
Claus Rinner: @ClausRinner
Pamela Robinson: @pjrplan
Teresa Scassa: @teresascassa
Renee Sieber: @RE_Sieber
Harrison Smith: @Ambiveillance

And remember to use the conference hashtag #AAG2015 and our hashtag #Geothink, or address @geothinkca, when you tweet.

Come to our Sessions at AAG 2015

Tuesday, April 21

Wednesday, April 22

Thursday, April 23

Friday, April 24

Saturday, April 25

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Crosspost: Canada’s Information Commissioner Tables Recommendations to Overhaul Access to Information Act

The Access to Information Act was first passed by parliament in 1983 (Photo courtesy of en.wikipedia.org).

This post is cross-posted with permission from Teresa Scassa, from her personal blog. Scassa is the Canada Research Chair in Information Law at the University of Ottawa.

By Teresa Scassa

Canada’s Access to Information Act is outdated and inadequate – and has been for a long time. Information Commissioners over the years have called for its amendment and reform, generally with little success. The current Information Commissioner, Suzanne Legault, has seized the opportunity of Canada’s very public embrace of Open Government to table in Parliament a comprehensive series of recommendations for the modernization of the legislation.

The lengthy and well-documented report makes a total of 85 recommendations. This will only seem like a lot to those unfamiliar with the decrepit statute. Taken as a whole, the recommendations would transform the legislation into a modern statute based on international best practices and adapted both to the information age and to the global movement for greater government transparency and accountability.

The recommendations are grouped according to eight broad themes. The first relates to extending the coverage of the Act to certain institutions and entities that are not currently subject to the legislation. These include the Prime Minister’s Office, offices of Ministers, the bodies that support Parliament (including the Board of Internal Economy, the Library of Parliament, and the Senate Ethics Commissioner), and the bodies that support the operations of the courts (including the Registry of the Supreme Court, the Courts Administration Service and the Canadian Judicial Council).

A second category of recommendations relates to the need to bolster the right of access itself. Noting that the use of some technologies, such as instant messaging, may lead to the disappearance of any records of how and why certain decisions are made, the Commissioner recommends instituting a legal duty to document. She also recommends adding a duty to report any unauthorized loss or destruction of information. Under the current legislation, there are nationality-based restrictions on who may request access to information in the hands of the Canadian government. This doesn’t mean that non-Canadians cannot get access – they simply have to make the request through a Canadian-based agent. Commissioner Legault sensibly recommends that the restrictions be removed. She also recommends the removal of all fees related to access requests.

The format in which information is released has also been a sore point for many of those requesting information. In a digital age, receiving information in reusable digital formats means that it can be quickly searched, analyzed, processed and reused. This can be important, for example, if a large volume of data is sought in order to analyze and discuss it, and perhaps even to convert it into tables, graphs, maps or other visual aids in order to inform a broader public. The Commissioner recommends that institutions be required to provide information to those requesting it “in an open, reusable, and accessible format by default”. Derogation from this rule would only be in exceptional circumstances.
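To see what is at stake, consider a minimal sketch of data reuse, in Python. The file name and column names below are hypothetical; the point is that a dataset released as a CSV can be summarized in a few lines, where a scanned PDF of the same records could not.

```python
# A quick aggregation over a hypothetically released dataset. The file
# "released_records.csv" and its columns are invented for illustration.
import pandas as pd

records = pd.read_csv("released_records.csv", parse_dates=["date_received"])

# Median days-to-respond by department and year: the kind of analysis an
# open, reusable format makes trivial.
records["year"] = records["date_received"].dt.year
summary = records.groupby(["department", "year"])["days_to_respond"].median()
print(summary.to_string())
```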

Persistent and significant delays in the release of requested information have also plagued the system at the federal level, with some considering these delays a form of deliberate obstruction. The report includes 10 recommendations to address timeliness. The Commissioner has also set out 32 recommendations designed to maximize disclosure, largely by reworking the current spider’s web of exclusions and exemptions. The goal in some cases is to replace outright exclusions with discretionary exemptions; in other cases, it is to consolidate exemptions scattered across other statutes into the Act itself, under the oversight of the Information Commissioner. In some cases, the Commissioner recommends reworking current exemptions so as to maximize disclosure.

Oversight has also been a recurring problem at the federal level. Currently, the Commissioner operates on an ombuds model – she can review complaints regarding refusals to grant access, inadequate responses, lack of timeliness, excessive fees, and so on. However, she can only make recommendations, and has no order-making powers. She recommends that Canada move to an order-making model, giving the Information Commissioner expanded powers to oversee compliance with the legal obligations set out in the legislation. She also recommends new audit powers for the Commissioner, as well as requirements that government institutions consult her on proposed legislation that might affect access to information, and submit access to information impact assessments where changes to programs or activities might affect access to information. In addition, Commissioner Legault recommends that the Commissioner be given the authority to carry out education activities aimed at the public and to conduct or fund research.

Along with the order-making powers, the Commissioner is also seeking more significant consequences for failures to comply with the legislation. Penalties would attach to obstruction of access requests, the destruction, alteration or falsification of records, failures to document decision-making processes, and failures to report unauthorized loss or destruction of information.

In keeping with the government’s professed commitments to Open Government, the report includes a number of recommendations in support of a move towards proactive disclosure. The goal of proactive disclosure is to have government departments and institutions automatically release information that is clearly of public interest, without waiting for an access to information request. Although the Action Plan on Open Government 2014-2016 sets goals for proactive disclosure, the Commissioner is recommending that the legislation be amended to include concrete obligations.

The Commissioner is, of course, not alone in calling for reform to the Access to Information Act. A private member’s bill introduced in 2014 by Liberal leader Justin Trudeau also proposes reforms to the legislation, although these are by no means as comprehensive as what is found in Commissioner Legault’s report.

In 2012 Canada joined the Open Government Partnership and committed itself to an Action Plan on Open Government. This Action Plan contains commitments grouped under three headings: Open Information, Open Data and Open Dialogue. Yet its commitments to improving access to information are focussed on streamlining processes (for example, by making it possible to file and pay for access requests online, creating a virtual library, and making it easier to search for government information online). The most recent version of the Action Plan similarly contains no commitments to reform the legislation. This unwillingness to tackle the major and substantive issues facing access to information in Canada is a serious impediment to realizing an open government agenda. A systemic reform of the Access to Information Act, such as that proposed by the Information Commissioner, is required.

What do you think about Canada’s Access to Information Act? Let us know on Twitter @geothinkca.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Behind the Scenes with Geothink Partner Yvonne Chen: Edmonton’s Open City Initiative

By Drew Bush

Sometime early next week, Edmonton’s City Council will vote to endorse an Open City Initiative that will help cement the city’s status as a smart city alongside cities like Stockholm, Seattle and Vienna. This follows close on the heels of a 2010 decision by city leaders to be the first to launch an open data catalogue and the 2011 awarding of a $400,000 IBM Smart Cities Challenge grant.

Yvonne Chen, Strategic Planner at the City of Edmonton.

“Edmonton is aspiring to fulfill its role as a permanent global city, which means innovative, inclusive and engaged government,” said Yvonne Chen, a strategic planner for the city who has helped orchestrate the Open City Initiative. “So Open City acts as the umbrella that encompasses all the innovative open government work within the city of Edmonton.”

This work has included using advanced analysis of open data streams to enhance crime prevention and enforcement, an “open lab” to provide new products that improve citizen interactions with government, and interactive neighbourhood maps that will help Edmontonians locate and examine waste disposal services, recreational centres, transit information, and capital projects.
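For readers curious about the mechanics, Edmonton’s catalogue exposes datasets over a Socrata-style web API. The Python sketch below shows the general request pattern; the dataset ID is a placeholder, not a real resource.

```python
# Fetch rows from a dataset on Edmonton's open data portal via its
# SODA-style API. The dataset ID below is a placeholder; real IDs are
# listed in the catalogue at data.edmonton.ca.
import requests

BASE = "https://data.edmonton.ca/resource"
DATASET_ID = "xxxx-xxxx"  # hypothetical

resp = requests.get(f"{BASE}/{DATASET_ID}.json", params={"$limit": 100})
resp.raise_for_status()

for row in resp.json():
    print(row)  # field names depend on the dataset
```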

For Chen, last week’s release of the Open City Initiative represents the culmination of years of work.

“My role throughout this entire release was I was helping with the public consultation sessions, I was analyzing information, and I was writing the Open City initiative documents as well as a lot of the policy itself,” she said.

In fact, the document outlines city policy, action plans for specific initiatives, environmental scans (or reviews), and results from public consultations on city initiatives. More than 1,800 Edmontonians commented on the Initiative last October before it was revised and updated for the Council.

“The City of Edmonton’s Open City Initiative is a municipal perspective of the broader open government philosophy,” Chen adds. “It guides the development of innovative solutions in the effort to connect Edmontonians to city information, programs, services and any engagement opportunities.”

Like many provincial and federal open government policies, the document focuses on making Edmonton’s government more transparent, accountable and accessible. What sets Edmonton apart, Chen said, is a focus on including citizens in the design and delivery of city programs and services through deliberate consultations, presentations, public events and online citizen panels. In fact, more than 2,200 citizens sit on the online panel alone, fielding questions from city officials as they design the Initiative’s infrastructure.

So what will it mean to live in one of North America’s smart cities? Edmonton is already beginning to provide free public Wi-Fi around the city, developing a forthcoming 311 application for two-way communication about city services, and working to fully integrate its electronic services across city departments.

“So one of the projects which has been reviewed very, very popularly is we have opened public Wi-Fi on the LRT stations,” Chen said of a project that includes plans for expanding the service to all trains and tunnels. “A lot of commuters are utilizing the service and utilizing the free Wi-Fi provided by the city while waiting for their trains.”

What do you think about Edmonton’s Open City Initiative? Let us know on Twitter @geothinkca.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Torts of the Geoweb: (or the liability question) Part I

Mapping Ottawa’s open data on tobogganing hills (Photo courtesy of ottawastart.com).

By Tenille Brown, PhD student in the Faculty of Law at the University of Ottawa

On March 3rd, as part of the continuing Geothink project, I hosted a Twitter chat about tort liability with Mapping Mashups. Geothink partners and friends joined this online forum, and the primary topic was the role of tort law: how and where it fits in the context of the geoweb, liability and moral responsibility. One active participant was British academic Muki Haklay, a collaborator on the Geothink project more broadly, who later wrote up highlights from the discussion, available here. I have been considering the role of tort liability in multiple contexts for some time now, both before the online discussion and since. I have been thinking of it not so much in a typical “finding a problem” lawyerly way, but more in a “trying to understand the allocation of responsibility” kind of way. From a legal perspective, the question of how we should handle the mountains of data collected and produced by governments and citizens alike jumps out at me. For these reasons, I chose to focus the Twitter chat on tort liability rather than on the challenges of protecting the privacy of personal information, or on copyright issues in geospatial information, which have been discussed elsewhere.

With the increase in platforms and data sources (both government and volunteered) on the geoweb, there is also an increase in opportunities for legal liability to attach to this information. With Canadian cities releasing data sets on everything from proposed roadways to beach water sampling, the liability question is not hypothetical but of increasing importance. Of course, cities are carrying out their due diligence by ensuring personal information does not get released, following the principles of the open government license. Still, some questions remain to be answered, such as: what legal tools are in place to deal with third parties who take government information and use it in a way that causes harm?

One example that immediately comes to mind is the use of open data to create apps for reporting potholes through cities’ 311 services, as happens in Toronto. A more apt example for Ottawa is the recently released information about hills open to tobogganing throughout the city, which was collated in a map here. Does liability attach to this information? If so, would information highlighting any hazards on a hill amount to a defence in a negligence action? And how would we assign liability if citizens were to take government data and create an open data app that contains outdated data?
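As context for how such apps work, many municipal reporting tools follow the Open311 GeoReport v2 pattern, in which a citizen report is posted to a city endpoint. The Python sketch below is illustrative only; the endpoint URL, service code and API key are placeholders rather than any particular city’s real values.

```python
# Submit a pothole report to a hypothetical Open311 (GeoReport v2) endpoint.
import requests

ENDPOINT = "https://open311.example.org/v2"  # placeholder endpoint

report = {
    "api_key": "YOUR_API_KEY",    # issued by the city (placeholder)
    "service_code": "POTHOLE",    # city-specific; discover via /services.json
    "lat": 45.4215,               # example coordinates in Ottawa
    "long": -75.6972,
    "description": "Deep pothole in the curb lane.",
}

resp = requests.post(f"{ENDPOINT}/requests.json", data=report)
resp.raise_for_status()
print(resp.json())  # the created service request, including a tracking ID
```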

In his write-up about the chat, I think Muki Haklay framed this problem correctly as an ethics problem. Haklay writes, “Somehow, the growth of the geoweb took us backward. The degree to which awareness of ethics is internalised within a discourse of ‘move fast and break things’, software / hardware development culture of perpetual beta, lack of duty of care, and a search for fast ‘exit’ (and therefore IBG-YBG) make me wonder about which mechanisms we need to put in place to ensure the reintroduction of strong ethical notions into the geoweb. As some of the responses to my question demonstrate, people will accept the changes in societal behaviour and view them as normal…” In fact, tort liability principles recognize that if a wrong has been committed (sometimes even without intent), then the person who committed the harm might be required to compensate the injured individual. The very basis of tort law is that we ought to provide remedies for those who have been wronged. Based on this aim, the courts don’t always uphold contracts of adhesion (which seek to limit liability).

The principles of tort liability, understood as a matter of ethics and responsibility, provide opportunities for the prevention of harm and for government accountability. This has long been recognized in the New York context, where the law stipulates that should a person trip on the sidewalk (or a pothole), the city is only liable if the hazard has been reported. To ensure reporting, every year the Big Apple Pothole and Sidewalk Corporation maps out the cracks, holes and potholes throughout the city (and here). For its part, Toronto reports it has filled almost 50,000 potholes in 2015 to date, and in recent years there has been a 40% increase in drivers receiving compensation for pothole-induced damage to cars. (The same report does not detail the number of complaints made through the 311 reporting service.)

The Twitter conversation demonstrates that questions of legal analysis, such as who has standing to bring a claim, who bears legal responsibility for information, and which courts have jurisdiction, are only the beginning of the tort law questions. A second level of analysis demands that we understand data in a larger framework that takes into account duties and responsibilities. Focusing on the prevention of harm, we could argue that there should be a core set of activities or areas for which liability cannot be contracted out. These core areas would presumably pertain to the health, safety and well-being of citizens, and would be tailored to protect the interests of those who cannot be expected to know the details of tort liability, nor necessarily how to navigate geoweb activities.

Tenille Brown is a PhD student in the Faculty of Law at the University of Ottawa and a Geothink student member. Her research is in the areas of legal geography, including property, spatial and citizen engagement, in the Ottawa context.

She can be reached on Twitter, @TenilleEBrown, and via email, Tenille.Brown@uottawa.ca.

Studying Public Transport Behavior By Accident: Lessons Learned from a Graduate Class in Computer Science with Scott Bell

Saskatchewan’s bus lockout lasted one month (Photo courtesy of wn.com).

By Drew Bush 

Sometimes the best research grows out of collaboration and study within the confines of a classroom. For Scott Bell, an associate professor of Geography and Planning at the University of Saskatchewan, a graduate class in computer science provided just such an opportunity.

For the past few years, Bell has collaborated with Kevin Stanley, a colleague in Saskatchewan’s Department of Computer Science, to teach a research-based class that gives students hands-on experience using an Android smartphone application to measure human behavior at two-minute intervals. Specifically, the application uses a phone’s accelerometer, GPS, camera, Bluetooth and other sensors to monitor participants’ movements and behaviors.

Last fall, students in the class wanted to examine health beliefs and constructed a survey to administer to participants before the phones were given out. Then, in an event unrelated to the class’s planned content, the City of Saskatoon locked out its bus drivers, halting transit for a month. The lockout began two weeks before class started.

“We didn’t know what was going to happen during the shutdown,” Bell said. “But during our design stage we included in our survey a series of questions about how the participants get to work, and all of our participants were students at the [University of Saskatchewan]… Do they prefer to take public transit? Are they regular transit users? That kind of thing.”

Afterwards, Bell’s class gave these participants the phones and tracked them for a month—with the lockout ending two weeks into this period. Bell and Stanley’s students could then look at how student movement patterns and behaviors changed during and after a time period in which public transport was unavailable.
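To give a flavour of that kind of analysis, the Python sketch below compares time spent away from home during versus after the lockout, assuming location samples every two minutes. The file, column names and cutoff date are hypothetical, not the study’s actual data.

```python
# Estimate minutes away from home per participant-day, split by period.
import pandas as pd

# Hypothetical schema: participant, timestamp, and an at_home flag
# derived from clustering the GPS samples.
samples = pd.read_csv("phone_samples.csv", parse_dates=["timestamp"])
LOCKOUT_END = pd.Timestamp("2014-10-14")  # placeholder date

away = samples[~samples["at_home"]].copy()
away["period"] = away["timestamp"].map(
    lambda t: "during lockout" if t < LOCKOUT_END else "after lockout"
)
away["date"] = away["timestamp"].dt.date

# Each sample stands for roughly two minutes of observed behavior.
minutes = away.groupby(["participant", "date", "period"]).size() * 2
print(minutes.groupby(level="period").mean())
```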

“It was interesting,” Bell said. “And one of the main findings, was that there wasn’t a change in attendance. So when the strike, when the lockout was on, everybody was still coming to the university at about the same rate as they came after the lockout ended, so when transit was fully available again.”

However, Bell and Stanley’s students noticed that when the lockout ended, student trips to and from school actually became quite a bit shorter. They hypothesized that when participants were forced to find alternate means of transportation, they often relied on friends with private vehicles or on their own cars.

This new reliance on private vehicles made additional trips, like running errands on the way home, possible thanks to the flexibility such vehicles allow compared with public transit. While not yet confirmed in this research, Bell calls this type of behavior “trip-chaining,” an idea often mentioned in transportation geography. Once the lockout ended and participants returned to public transport, such behavior ended.

“We did learn in this study that we could use this technique to study transit behavior and we’re thinking a little bit more about that,” Bell said. “About how we can maybe do that with some of the Geothink partners that have more open data policies regarding their transit data and transit use.”

If Bell and Stanley’s students had only examined student attendance rates and arrival times at the University of Saskatchewan, they might have concluded the lockout had little effect on student behavior. By pairing surveys with smartphone data, however, they could establish how stressful the lockout was for students and, perhaps most importantly, how behaviors actually changed on the ground.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Open North’s Inventory: Coming Up With Standards for Open Data

Open data standards are the subject of a new OGP report (Photo courtesy of opensource.com).

By Drew Bush

Open North’s James McKinney, Stéphane Guidoin and Paulina Marczak completed an inventory of global open data standards last week that seeks to establish a global viewpoint on the subject and identify any missing pieces. Their work was completed as part of the Open Government Partnership (OGP) Working Group, a group that aims to support governments seeking transparency through open data.

“The objective…is to promote the use of open data standards to improve transparency, create social and economic value, and increase the interoperability of open data activities across multiple jurisdictions,” the authors write in their report. “Its first deliverable is to complete an inventory of open data standards by type to develop a global view and to identify gaps and overlaps. Its final deliverable is an OGP document outlining baseline standards and best practices for open data, along with guidance for adoption and implementation.”

In their report, the authors used scripts to automatically collect, normalize and analyze data from 40 OGP members’ catalogs. Their goal was to determine how to standardize the ways such data is licensed, how metadata is used, what types of file formats catalogs make use of, and the overall structure of each catalog. As they wrote, they did not seek to “pursue a comprehensive inventory of data standards” but rather to focus on those “most relevant to OGP members.”
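Although the report’s own scripts are not reproduced here, the harvesting pattern is easy to sketch. The Python example below queries a CKAN-style catalogue API of the kind many OGP members run and tallies license and format metadata; the catalogue URL is illustrative.

```python
# Tally license and file-format metadata from a CKAN catalogue's API.
from collections import Counter
import requests

CATALOG = "https://open.canada.ca/data"  # example CKAN-style catalogue

resp = requests.get(f"{CATALOG}/api/3/action/package_search",
                    params={"rows": 100})
resp.raise_for_status()
packages = resp.json()["result"]["results"]

licenses = Counter(p.get("license_id") or "unspecified" for p in packages)
formats = Counter(
    (r.get("format") or "unspecified").upper()
    for p in packages
    for r in p.get("resources", [])
)

print("Licenses:", licenses.most_common(5))
print("Formats:", formats.most_common(5))
```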

Myriad findings result from their analysis. In particular, they found that OGP members’ catalogs share no common structure, that a common vocabulary for metadata (or data about data) is needed, and that there are significant problems with the metadata used to specify licensing in some countries (in “8 out of 24 catalogs, the licenses of over 10 percent of datasets are either not specified or underspecified”).

OGP’s Working Group consists of four streams: Principles, Measurement, Standards and Capacity Building. Each consists of leads from government, the private sector and the nonprofit world who work to identify and share practices that help OGP governments implement commitments and develop more ambitious and innovative open data plans.

McKinney serves as the lead for the Standards theme which promotes the use of open data standards to improve transparency and to increase the interoperability of open data activities across multiple jurisdictions. His organization, the Canadian non-profit Open North, creates online tools for civil society and government to educate and empower citizens to participate actively in Canadian democracy. Open North is also a Geothink partner.

Find the report here.

If you have thoughts or questions about this article, get in touch with Drew Bush, Geothink’s digital journalist, at drew.bush@mail.mcgill.ca.

Crosspost: Looking at Crowdsourcing’s Big Picture with Daren Brabham

This post is cross-posted with permission from the personal blog of Daren C. Brabham, Ph.D. Brabham is a Geothink partner at the University of Southern California Annenberg School for Communication and Journalism, where he was the first to publish scholarly research using the word “crowdsourcing.”

By Daren C. Brabham

In this post, I provide an overview of crowdsourcing and a way to think about it practically as a problem solving tool that takes on four different forms. I have been refining this definition and typology of crowdsourcing for several years and in conversation with scholars and practitioners from diverse fields. Plenty of people disagree with my characterization of crowdsourcing and many have offered their own typologies for understanding this process, but I submit that this way of thinking about crowdsourcing as a versatile problem solving model still holds up.

I define crowdsourcing as an online, distributed problem solving and production model that leverages online communities to serve organizational goals. Crowdsourcing blends bottom-up, open innovation concepts with top-down, traditional management structures so that organizations can effectively tap the collective intelligence of online communities for specific purposes. Wikis and open source software production are not considered crowdsourcing because there is no sponsoring organization at the top directing the labor of individuals in the online community. And when an organization outsources work to another person–even if that work is digital or technology-focused–that is not considered crowdsourcing either because there is no open opportunity for others to try their hands at that task.

There are four types of crowdsourcing approaches, based on the kinds of problems they solve:

1. The Knowledge Discovery and Management (KDM) crowdsourcing approach concerns information management problems where the information needed by an organization is located outside the firm, “out there” on the Internet or in daily life. When organizations use a KDM approach to crowdsourcing, they issue a challenge to an online community, which then responds to the challenge by finding and reporting information in a given format back to the organization, for the organization’s benefit. This method is suitable for building collective resources. Many mapping-related activities follow this logic.

2. The Distributed Human Intelligence Tasking (DHIT) crowdsourcing approach concerns information management problems where the organization has the information it needs in-hand but needs that batch of information analyzed or processed by humans. The organization takes the information, decomposes the batch into small “microtasks,” and distributes the tasks to an online community willing to perform the work. This method is ideal for data analysis problems not suitable for efficient processing by computers.

3. The Broadcast Search (BS) crowdsourcing approach concerns ideation problems that require empirically provable solutions. The organization has a problem it needs solved, opens the problem to an online community in the form of a challenge, and the online community submits possible solutions. The correct solution is a novel approach or design that meets the specifications outlined in the challenge. This method is ideal for scientific problem solving.

4. The Peer-Vetted Creative Production (PVCP) crowdsourcing approach concerns ideation problems where the “correct” solutions are matters of taste, market support, or public opinion. The organization has a problem it needs solved and opens the challenge up to an online community. The online community then submits possible solutions and has a method for choosing the best ideas submitted. This way, the online community is engaged both in the creation and selection of solutions to the problem. This method is ideal for aesthetic, design, or policy-making problems.

The handy decision tree below can help an organization figure out which crowdsourcing approach to take. The first question an organization should ask about its problem solving needs is whether the problem is one of information management or one of ideation, the generation of new ideas. In the information management direction, the next question to consider is whether the challenge is to have an online community go out, find information and assemble it in a common resource (KDM), or to use an online community to process an existing batch of information (DHIT). On the ideation side, the question is whether the resulting solution will be objectively true (BS) or will be supported through opinion or market support (PVCP).

A decision tree for determining appropriate crowdsourcing approaches for different problems. Source: Brabham, D. C., Ribisl, K. M., Kirchner, T. R., & Bernhardt, J. M. (2014). Crowdsourcing applications for public health. American Journal of Preventive Medicine, 46(2), 179-187.
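The same logic is compact enough to express in code. The Python sketch below is an informal rendering of the decision tree, not Brabham’s own implementation; the function name and flags are invented for illustration.

```python
# An informal rendering of Brabham's crowdsourcing decision tree.
def crowdsourcing_approach(information_management: bool,
                           find_new_information: bool = True,
                           objectively_verifiable: bool = True) -> str:
    """Map a problem onto one of the four crowdsourcing approaches."""
    if information_management:
        # Gather new information (KDM) or process an existing batch (DHIT)?
        return "KDM" if find_new_information else "DHIT"
    # Ideation: empirically provable (BS) or a matter of taste (PVCP)?
    return "BS" if objectively_verifiable else "PVCP"

# Example: mapping tobogganing hazards asks a community to find information.
print(crowdsourcing_approach(information_management=True))  # -> KDM
```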

I hope this conception of crowdsourcing is easy to understand and practically useful. Given this outlook, the big question I always like to ask is how we can mobilize online communities to solve our world’s most pressing problems. What new problems can you think of addressing with the power of crowds?

Daren C. Brabham, Ph.D., is an assistant professor in the Annenberg School for Communication & Journalism at the University of Southern California. He was the first to publish scholarly research using the word “crowdsourcing,” and his work focuses on translating the crowdsourcing model into new applications that serve the public good. He is the author of the books Crowdsourcing (MIT Press, 2013) and Crowdsourcing in the Public Sector (Georgetown University Press, 2015).

Crosspost: Geoweb, crowdsourcing, liability and moral responsibility

This post is cross-posted with permission from Po Ve Sham – Muki Haklay’s personal blog. Muki is a Geothink collaborator at University College London and the co-director of ExCiteS.

By Muki Haklay

Yesterday [March 3rd, 2015], Tenille Brown led a Twitter discussion as part of the Geothink consortium. Tenille opened with a question about liability and wrongful acts that can harm others.

If you follow the discussion (search in Twitter for #geothink) you can see how it evolved and which issues were covered.

At one point, I asked the question:

It is always intriguing, and frustrating at the same time, when a discussion on Twitter takes on a life of its own and moves away from the context in which a topic was originally raised. At the same time, this is the nature of the medium. Here are the answers that came up to this question:

You can see that the only legal expert around said that it’s a tough question; everyone else, of course, shared their (lay) views on the basis of moral judgement and worldview rather than legality, and that’s also valuable. The reason I brought up the question was that during the discussion we started exploring a duality in digital technology between ownership and responsibility – or rights and obligations. It seems that technology companies are very quick to emphasise ownership (expressed in strong intellectual property rights arguments) without responsibility for the consequences of technology use (as expressed in EULAs and the general attitude towards users). So the nub of the issue for me was agency. Software does have agency of its own, but that doesn’t absolve the human agents – be they software developers or companies – of responsibility for what it is doing.

In ethics discussions with engineering students, the cases of the Ford Pinto or the Thiokol O-rings in the Challenger shuttle disaster come up as useful examples for exploring the responsibility of engineers towards their end users. Ethical codes exist for GIS – e.g. URISA’s code of ethics, or the material online about ethics for GIS professionals and in Esri publications. Somehow, the growth of the geoweb took us backward. The degree to which awareness of ethics is internalised within a discourse of ‘move fast and break things’, software / hardware development culture of perpetual beta, lack of duty of care, and a search for fast ‘exit’ (and therefore IBG-YBG) make me wonder about which mechanisms we need to put in place to ensure the reintroduction of strong ethical notions into the geoweb. As some of the responses to my question demonstrate, people will accept the changes in societal behaviour and view them as normal…

See the original post here.