YPP Network Description

The MacArthur Research Network on Youth and Participatory Politics (YPP) formed out of recognition that youth are critical to the future of democracy and that the digital age is introducing technological changes that are impacting how youth develop into informed, engaged, and effective actors.

MIT Center for Civic Media

The Four Horsemen of the Free Speech Apocalypse: Emerging Conceptual Challenges for Civil Libertarians

April 4, 2018 - 7:05pm

Last April, I blogged about a talk on trigger warnings I gave as a representative of the Board of the National Coalition Against Censorship (NCAC), a nonprofit whose mission is to promote freedom of thought, inquiry, and expression and to oppose censorship in all its forms. Earlier today, at the request of Executive Director Chris Finan, I presented to the rest of the Board some early thoughts about ascendant challenges and emerging threats for those concerned with the freedom of expression. What follows is a lightly edited version of my notes for that talk. Epistemic status: uncertain, but trying to trace lines to see where they might converge. Extremely interested in feedback.

There is, ironically, a broad consensus that we live in a fractured public sphere. At the level of systems design, people worry about filter bubbles, echo chambers, and information cascades. At the level of ordinary politics, people worry about the ability to get opposing sides to agree on common facts, let alone effective policy. At the level of cultural coherence, canons are being challenged and authority redistributed. Whether you blame liberals or conservatives, the alt-right or snowflake millennials, there is a shared understanding that the questions of who can speak to whom about what are more hotly contested today than they have been in some time.

However, there are more profound risks on the horizon for those invested in traditional conceptions of, and defenses for, free expression. The purpose of this blog post is to briefly outline four interrelated challenges for free expression activists that can't be solved by the old civil libertarian saw of "more speech == better speech." To be clear, when I say these are challenges, I don't mean they are necessarily good or bad developments. I just mean they present thorny problems for existing frameworks about free expression. They are:

  • a growing conviction (that I share) that more speech does not necessarily mean better speech;
  • the economics of attention making it harder to be heard;
  • automated content production swamping human expression; and
  • fake content that is indistinguishable from real content.

Conceptual challenge #1: conversational health and detoxification
The core thesis of this challenge was put nicely by Melissa Tidwell of Reddit, in a New Yorker article on the company's efforts to "detoxify" its community:

Melissa Tidwell, Reddit’s general counsel, told me, "I am so tired of people who repeat the mantra ‘Free speech!’ but then have nothing else to say. Look, free speech is obviously a great ideal to strive toward. Who doesn’t love freedom? Who doesn’t love speech? But then, in practice, every day, gray areas come up....Does free speech mean literally anyone can say anything at any time?” Tidwell continued. “Or is it actually more conducive to the free exchange of ideas if we create a platform where women and people of color can say what they want without thousands of people screaming, ‘Fuck you, light yourself on fire, I know where you live’? If your entire answer to that very difficult question is ‘Free speech,’ then, I’m sorry, that tells me that you’re not really paying attention."

The framework of health and toxicity has also been recently adopted by Twitter, with CEO Jack Dorsey announcing initiatives to research the "overall health" of Twitter, a notable departure from the previously laissez-faire attitude of a company that used to describe itself as the "free speech wing of the free speech party."

In the not-so-distant past, social media companies largely tried to avoid policing what their users posted on their platforms, citing safe harbor provisions and/or libertarian philosophies and praising the Arab Spring as the result of their publishing tools. Today, as companies seek to expand and diversify their userbase (not to mention their engineering workforce), and confront the legal and economic challenges of their most noxious users, many platforms have shifted their own internal value-systems quite rapidly in the direction of a more nuanced understanding of speech beyond the simple (but common) conceit that more == better.

Conceptual challenge #2: the economics of attention overwhelming the economics of publishing
The core thesis of this challenge, argued persuasively by Zeynep Tufekci in her essay It's the (Democracy-Poisoning) Golden Age of Free Speech, is that the relevant scarcity, and therefore the point of vulnerability, for the free expression of ideas is not the inability to speak but the inability to be heard:

Here's how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms...The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all.

In a whitepaper titled Is the First Amendment Obsolete?, Tim Wu argues that this change in communications technology requires rethinking the way we regulate speech, or else risks giving up on Constitutional approaches to improving the public sphere altogether. The paper is especially notable for having been published by the Knight First Amendment Institute itself.

A corollary to this argument observes that, since most publishing is paid for by advertising, i.e. attention/surveillance, platforms are economically incentivized to promote outrageous content. Certainly this is nothing new: yellow journalism and tabloids have turned a profit off this dynamic for decades. However, these processes are now optimized and individualized to a degree of power and precision never before possible. Which brings us to:

Conceptual challenge #3: automated content production
The core thesis of this challenge is that automated content generation, directed by the prenominate economics of attention and advertising, will produce a truly massive volume of toxic, outrageous expression and swamp human expression with the proximately computational. In a haunting essay entitled Something is wrong on the internet, James Bridle falls down the hole of weird YouTube videos that, at least in some cases, appear to be computationally generated at massive volume in order to capitalize on the long tail of advertising dollars.

If smart scripts can reverse-engineer popular titles and keywords, and then mash pixels together to produce cut-ups of pop culture references, then Borgesian libraries of content can be manufactured and posted with no (or nearly no) human intervention. Nor is this dynamic limited to YouTube videos: algorithmic content generation and on-demand production mean that you end up with screen-printed T-shirts that read "KEEP CALM AND RAPE A LOT" by virtue of random pairings of nouns and verbs. As James Grimmelmann writes in The Platform is the Message, in "the disturbing demand-driven dynamics of the Internet today...any desire no matter how perverse or inarticulate can be catered to by the invisible hand of an algorithmic media ecosystem that has no conscious idea what it is doing."
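
To make the mechanics concrete, here is a minimal sketch in Python of the combinatorial generation Bridle and Grimmelmann describe. The templates and wordlists are invented for illustration; a real operation would scrape trending keywords instead.

    import itertools
    import random

    # Hypothetical templates and wordlists standing in for scraped trending keywords.
    templates = ["KEEP CALM AND {verb} {noun}", "{noun} {verb} SURPRISE compilation"]
    verbs = ["LOVE", "PAINT", "CHASE"]
    nouns = ["CATS", "EGGS", "TRAINS"]

    # Every combination becomes a "product" or "video title" with no human review.
    catalog = [t.format(verb=v, noun=n)
               for t, v, n in itertools.product(templates, verbs, nouns)]

    print(len(catalog), "items generated")  # 2 templates x 3 verbs x 3 nouns = 18
    print(random.choice(catalog))

Scale the wordlists to a few thousand entries and the catalog runs into the millions, which is how a generator with no conscious idea what it is doing ends up printing the shirt above.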

When humans create perverse or disturbing content, we chalk it up to sickness or to creativity, and institutionalize or memorialize accordingly. But when computers do it, at the scale and volume made possible by digital reproduction and incentivized by the economics of advertising, the sheer flood of content may overrun the stream that people can produce, drowning distinctions between good and bad and obviating the idea of a "conversation" altogether, except as it occurs through algorithmic feedback.

Conceptual challenge #4: documentation that is fake but indistinguishable from real
The core thesis of this challenge is that new technologies capable of producing fake content indistinguishable from real content will cause a collapse of trust and/or rebuild it through invasive, surveillant technological means. Of all the challenges, I believe this to be the most profound and deeply dangerous. The unholy trinity of technologies that could totally destroy the concept of documentary truth includes:

  • Tacotron 2, Google's new text-to-speech system that is virtually indistinguishable from a human voice
  • Digital doppelgangers, through which researchers have been able to generate convincing speaking faces from pictures and audio to make people "say" things they never in fact said
  • DeepFakes, a software package that allows moving faces to be mapped seamlessly onto body doubles

In a recent post for Lawfare, Bobby Chesney and Danielle Citron recognized the grim national security implications of these technologies. Grimmer still are some of the proposed solutions, like the concept of digital signatures embedded in cameras so as to track and verify the creators of videos, which, even if it worked psychologically (as the author of the linked article admits it might not), risks building an even greater surveillance ecosystem, or undermining real (but unsigned) videos from everyday people.
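
For readers unfamiliar with the mechanism, here is a minimal sketch of the sign-at-capture flow such proposals imply, written in Python with Ed25519 signatures from the cryptography package. The key handling is purely illustrative; the hard part in practice is keeping the private key secure inside camera hardware.

    # Illustrative sketch: a camera's embedded private key signs captured bytes,
    # and anyone holding the published public key can check the file is unaltered.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric import ed25519

    camera_key = ed25519.Ed25519PrivateKey.generate()  # would live in secure hardware
    public_key = camera_key.public_key()               # published for verification

    video_bytes = b"raw captured video data"
    signature = camera_key.sign(video_bytes)           # shipped alongside the file

    try:
        public_key.verify(signature, video_bytes)      # raises if the bytes differ
        print("verified: signed by this camera and unmodified")
    except InvalidSignature:
        print("rejected: altered, or not from this camera")

Note that such a scheme only authenticates provenance; it says nothing about unsigned footage, which is exactly the worry about undermining real videos from everyday people.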

So, these are the four horsemen of the free speech apocalypse. While the current controversies about speech and expression are difficult enough to navigate, to me, these risks seem to approach the existential. People who believe in the value of free expression and free speech must plan to confront these challenges soon or risk having the moral and normative ground melt away beneath their feet.

Tags: free speech, free expression, censorship, activism, social networks
Categories: Blog

A tabletop game on privacy in Costa Rica: Sulá Batsú

April 2, 2018 - 2:17pm

 

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.

Quick facts

Who: Vivian Zúñiga from Sulá Batsú

What: Workshops, game design, research

Mission/vision: To promote local development through solidarity-based social and economic practices in different fields, including information and communication technologies

Where: Costa Rica

Since: 2005

Years of operation (as of February 2018): 13

Works in the fields of: Digital technologies, computer use, digital stories, digital security

Post summary: Sulá Batsú is a cooperative in Costa Rica promoting local development through information and communication technologies; to address the topic of digital security with youth, they designed a tabletop game.

Highlight quote from the interview: “Teaching methodologies need to be adaptive and emergent. When I arrive at a workshop, I don’t have a full show set up. I have learned that things always change and participants have a lot more to say, and I have a lot more to learn. Yes, you must come in with an idea of what needs to be achieved, but nothing that cannot change.”

More resources: Sulá Batsú’s website

Vivian Zúñiga, Sulá Batsú
 

Sulá Batsú is a cooperative that has worked since 2005 to promote local development through solidarity-based social and economic practices. In a globalized world, information and communication technologies (ICT) have become one of the main fields in which they pursue that vision. Vivian Zúñiga has been affiliated with the cooperative for over a decade and has been one of the driving forces behind its efforts on youth and technology.

 

Sulá Batsú’s work with youth started as a partnership with Fundación Telefónica, the foundation of one of the primary telecommunications companies in Costa Rica. They went to different parts of Costa Rica to hold workshops under the ‘Digital stories’ umbrella: from taking good photos with mobile phone cameras to digital security basics. Their interest in youth has influenced Sulá Batsú’s programs more broadly; TIC-as, their program on gender equality and technology in rural areas of Costa Rica, has created spaces and networks for young rural women specifically.

 

In the digital rights and security ecosystem, Sulá Batsú’s work with youth shows an interesting context that, in my experience, organizations and researchers from the global north struggle to contemplate. On the one hand, local civil society deals with the specific legislative challenges of the Cybercrime Law passed in 2012, modeled after the Budapest Convention – which has been criticized by human rights organizations worldwide for enabling intrusive surveillance without institutional safeguards (Rodriguez, 2011). On the other hand, they don’t have access to some of the tools most widely associated in the global north with countersurveillance efforts.

 

“I can’t come into a community and propose solutions that won’t work for us. Signal does not work well in Central America.” Signal is a mobile communications application that has long been promoted in digital security circles for its end-to-end encryption and open source development, as part of advocacy efforts to bring secure communications to less technically savvy users through the most foolproof and usable means available.

 

In the spirit of proposing context-sensitive solutions, Sulá Batsú realized that security workshops in particular can be heavy, and that the topic can be distant from people. After a five-month research process with Fundación Telefónica, they decided to make the learning more fun through a game called Huellas, or footprints. Two to six players have to match online risk scenarios with good practices to accumulate the largest number of tokens. The goal is to “identify scenarios where they know they are at risk of having their rights violated online, so that they can identify and adopt good practices for safe use of the internet”.

 

Why a fun take on security trainings for youth? “We feel that there isn’t much information for youth on this topic. And they can believe that their own information is not important.” Vivian says that their work was motivated partly by prominent cases in which personal data of youth were misused in Costa Rica (as in many of the other contexts described by interviewees in this blog post series), as well as cases in which young people were expelled from schools over privacy-related incidents.

 

For Vivian, work on security with youth does not happen in a vacuum, isolated from other social issues. “When we work with youth, we have to change our language, sometimes for something as simple as complying with the norms of the environment where we meet them. When we go to a school, even saying the word “sex” can be problematic – in Costa Rica, new guides on sex education have come out, annoying the far right movement on the one hand, but raising challenges for educators on the other.”

 

For an organization focused on local development, valuing the local over broader, more global views on technology influences not just the solutions they propose, but also their thematic and pedagogical choices. After working with mothers whose children use ICT, they decided to provide trainings that would address the digital gap they witnessed between the two generations. They focused on giving mothers options to protect their own privacy on devices that were also touched by small, agile hands. They frame these workshops as “Digital technologies”, or as “Computer use” in some communities, depending on the local language.

 

“In our workshops, it’s about listening to people’s realities and adapting to them. With some of these women, we end up taking a computer apart so that they can see where the internet comes from. For them, being able to see where their information is being stored is very eye-opening”.

 

So what is Vivian’s advice for other people who want to work in promoting digital security? “Teaching methodologies need to be adaptive and emergent. When I arrive at a workshop, I don’t have a full show set up. I have learned that things always change and participants have a lot more to say, and I have a lot more to learn. Yes, you must come in with an idea of what needs to be achieved, but nothing that cannot change.”


You can read more about Sulá Batsú on their website.

Tags: youth, activism, digital security, privacy, Latin America
Categories: Blog

Youth and gender lens in countersurveillance work in Paraguay: TEDIC

March 27, 2018 - 12:46pm

 

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.

Quick facts

Who: Eduardo Carrillo from TEDIC

What: Web development, campaigns, workshops, research

Mission/vision: To promote the respect of digital rights and free/open culture

Where: Paraguay

Since: 2012

Years of operation (as of February 2018): 6

Works in the fields of: Digital rights, free/open culture, personal data, countersurveillance, digital security

Post summary: TEDIC is a digital rights and free/open culture organization best known for their campaigns on privacy to resist State and corporate surveillance, and they address youth and gender-based issues in their workshops on digital security.

Highlight quote from the interview: “We [promote critical thinking about online privacy] by talking about concrete cases with local implications. People start to share and dialogue is ignited. It sometimes shows lack of understanding of the technological landscape, but solutions arise from participants themselves too”.

More resources: TEDIC website; EFF blog post on their successful Pyrawebs campaign, and their beautiful website on corporate surveillance in Paraguay at El Surtidor.

Interview with Eduardo Carrillo, TEDIC
 

TEDIC is a digital rights and free/open culture organization in Paraguay. Like other organizations featured in this series, they recognize other social struggles not necessarily as part of their core mission, but as lenses through which they carry out their work: gender; the internet as a space that both replicates violence and offers an opportunity for emancipation; and access to public information. In that same spirit, they have worked with youth throughout the years. Eduardo Carrillo is the youngest TEDIC member, and this is his first job since graduating from university in International Relations. I am grateful to have learned more about TEDIC through his generous interview.
 

TEDIC are best known for one of their big legislative wins: in the face of a data retention bill, they created a campaign, Pyrawebs, that remixed a Guaraní term for the citizen informants who carried out State surveillance during Stroessner’s dictatorship. The campaign, which had strong support from youth groups in Paraguay, eventually succeeded in stopping the bill. You can read the story on EFF’s blog (or, if you are interested in the campaign tactics, I wrote about them in Spanish for InfoActivismo). Interestingly enough, this did not mean that TEDIC would always be an adversarial organization; they were collaborators on the National Cybersecurity Plan in Paraguay, and have participated in the national plans of action of the open government movement.
 

Now their main focus is research on the implications of open government data for personal data management. They partnered with an online outlet, El Surtidor, to create a second Pyrawebs campaign and raise awareness of corporate surveillance. This time, a beautifully illustrated scrolling website explains personal data law and corporate surveillance practices in Paraguay in youth-friendly language.

 


“Over 6000 telephone numbers were reported [for spamming or scamming in Paraguay]. How do these numbers get our data?” One of the illustrations on El retorno de los Pyrawebs, “The comeback of the Pyrawebs”.

 

My purpose in interviewing TEDIC was to understand what it looks like for a digital rights organization in Latin America to contemplate youth and gender issues as part of its agenda. Eduardo said, “More than thinking specifically about youth, we consider it a transversal topic in our work”. This is visible in how they invite participants to their events, for example. The organization reaches out to media channels consumed by young people in Paraguay to invite them to some of their other events, like the ‘Ilústrame la data’ (Illustrate my data) workshop.
 

“We did a data bootcamp to talk about platforms for access to info. We didn’t want a hackathon to promote new platforms but use the existing ones and do storytelling with the available data. We reached out to the student centers in the main universities of the country, especially in journalism schools. I went to leave flyers and speak with school authorities to get permission for students to take the day off to attend”.
 

Eduardo says that this approach also means leveraging their strong alliances with like-minded individuals and organizations in the Latin American digital rights ecosystem to collaborate. TEDIC hosted a feminist technology workshop with Coding Rights and Internet Lab (one of the organizations interviewed for this series) to discuss the gender-based privacy implications of menstrual apps, data collection practices, and GIS. They have also worked with trans women on digital security, response to online harassment, and corporate censorship.
 

In their workshops, they “always do a general overview at first; then we talk about the global context of digital security and espionage. Then we translate it to our particular context so that people understand it from their own experience. Then we go into measures that can be taken to address certain situations. Finally we discuss help channels”. They contemplate “holistic security practices, which aren’t necessarily just digital, but also legal, physical and emotional”.
 

How does TEDIC try to get participants to think critically about privacy? Eduardo says they do so by “talking about concrete cases that have local implications”. In the digital security workshop with trans women, they spent a long time doing critical readings of the terms of service, and the implications of real name policies for trans women in a context like Paraguay. “People start to share and dialogue is ignited. It sometimes shows lack of understanding of the technological landscape, but solutions arise from participants themselves too”.
 

This is a response to what Eduardo describes as a common phenomenon in the digital rights world: “organizations losing their grassroots perspective” in the face of the global implications of internet debates. This approach, however, depends on the climate at the workshop or event. “If people don’t know each other, it doesn’t get personal quickly. If there is a sense of belonging, it becomes easier. In the first case, leaving time for people to approach you after the workshop works.”
 

What are the next steps for TEDIC in the evolution of their privacy work with regard to their focus on youth and gender issues? Expanding their take on what ‘youth’ means. So far, they have focused on individuals aged 18 to 35, not teenagers. And they also hope to make their work more participatory. “In Paraguay, people can feel scared of speaking in large groups”.
 

Ultimately, however, why is an organization like TEDIC invested in the youth implications of digital rights issues? Eduardo describes their underlying concern: “I agree that there is a protectionist, dramatic, sensationalist discourse in regard to youth safety online, like we saw in regard to Sarahah and Blue Whale last year. There is little empowering discourse aimed at youth. And the topic in the background of all this is how youth today are understood as a passive subject in their community, not as a collective that can create change”.


You can read more about TEDIC on their website; about their successful Pyrawebs (v.1) campaign on the EFF blog, and see their beautiful illustrated campaign on corporate surveillance in Paraguay at El Surtidor.

 

Tags: youth, activism, Latin America, privacy
Categories: Blog


Youth-friendly data protection in Mexico: Artículo 12, A.C.

March 19, 2018 - 11:51am

 

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.

Quick facts

Who: Cédric Laurant from Artículo 12, A.C.

What: Litigation, advocacy, materials for youth

Mission/vision: To defend Mexican users’ privacy on and offline through litigation

Where: Mexico

Since: 2013 (but the team had worked together in different spaces since 2011)

Years of operation (as of January 2018): 5

Works in the field of: Data protection, privacy

Post summary: Artículo 12’s data protection program called “Son Tus Datos” carries out litigation and advocacy to close the gap between data protection legislation and practice in Mexico, which places them in the ecosystem of digital rights organizations in the country – and they use their platform to address the implications for youth rights.  

Highlight quote from the interview: “Even on websites and sites that are clearly intended for youth, the [privacy] language is like that of lawyers communicating to adults.”

More resources: Son Tus Datos’ institutional website; and Defensores Digitales, their website for youth

Interview with Cédric Laurant, Artículo 12, A.C.
 

Cédric Laurant is a data lawyer and researcher who had worked in Europe, the United States, Peru and Colombia before he moved to Mexico. There, he started Artículo 12 and its data protection program “Son Tus Datos”. Artículo 12 is an organization that defends users’ privacy on and offline through legal processes. At the time, data protection was not included in the portfolio of the more established non-profit organizations working on transparency, freedom of expression or journalist protection.
 

Artículo 12 is best known for Filtraciones Digitales, a website where corporate employees can blow the whistle on data breaches that would otherwise go unreported. The goal of their work is to have enterprises notify victims when their data is breached. In 2016, they found that a wide spectrum of companies across 7 industrial sectors were not at all prepared to notify the clients, customers, or users affected by their data breaches. “It’s not just about disseminating information on human rights, but actively protecting rights through legal processes. We want to use existing law, interpreting it in a way it has not been interpreted before, to flag companies that do not comply with the law and, if required, denounce them.”
 

Their work in corporate litigation shows that, like some of the other organizations featured in this blog post series, Artículo 12 does not self-identify as a “youth rights” organization, nor does it work primarily with youth. However, they are one of the digital rights organizations in Mexico with projects that address younger audiences.
 

One of these projects was the translation of European Digital Rights’ (EDRi) “Digital Defenders” materials into a book, and the adaptation of its characters into two web games intended to teach digital security to 9-14 year-olds. “EDRi’s guide is interesting: it addresses the topic from the heroes’ perspective, with a narrative adapted to discuss privacy while engaging youth. They presented heroes that, instead of broadly protecting society, protect your privacy. They also presented evil characters. Someone protects your passwords, and there is an enterprise trying to steal your information”.

 

Help the Queen of Passwords choose a secure password – one of the games on DefensoresDigitales.org

 

Artículo 12 seizes the opportunities provided by privacy controversies to advocate for the rights of youth. When a leak in Australian media showed that Facebook had been working with advertisers to target over six million psychologically vulnerable teenagers, they wrote a letter to the Facebook office in Mexico and to the National Institute for Access to Information and Protection of Personal Data to find out whether a similar experiment had been carried out in the country. They also joined organizations led by the Center for Digital Democracy in the United States in writing a letter to Mark Zuckerberg.
 

Why is an organization like Artículo 12 interested in youth at all? “There is a whole discourse that says that kids don’t worry about anything; however, when we take a closer look, youth are often more familiar with digital tools than adults. They know better than their parents how to manage privacy on their Facebook accounts and mobile phones. They have developed ways to avoid family or corporate monitoring. danah boyd has written about the ways youth protect their privacy. So we need to teach them and give them more ways to protect themselves. Help them understand how their personal data can be abused.”
 

“Age-appropriate language would help youth not feel powerless before a privacy notice. Sometimes, youth don’t know what to do or how to complain about what’s happening. Even on websites and sites that are clearly intended for youth, the [privacy] language is like that of lawyers communicating to adults”. Cédric thinks this understanding on the side of youth users is necessary to close the gap between data protection legislation and corporate actions. When users know their rights are violated, they can seek out organizations that will help them challenge corporate practices.
 

One of these organizations is None of Your Business, a new European Union-based non-profit organization that defends the right to privacy through collective actions against enterprises. Their work is essential in the face of the General Data Protection Regulation, which comes into force in the European Union later this year. Cédric mentions them as an organization that inspires the work of Son Tus Datos; however, their working style cannot yet be replicated in Mexico, where class action lawsuits have not been put into practice.
 

Regardless of the limitations of doing privacy work in the Mexican context, the work carried out by Artículo 12 shows a productive way out of the tension between digital rights advocacy and youth rights at large. “The reason why not many organizations want to work on youth issues is that many stakeholders use data protection of minors as a way to limit the freedoms of adults by enabling surveillance practices. For example, in the US there have been legislative attempts to ‘protect kids from pornographic sites’ by asking people to send a copy of their ID to use them. They wanted to protect kids but ended up reducing privacy”. Through their work, Cédric and his team give a different take on the defense of digital rights that does not ignore youth needs.


You can read more on Artículo 12’s data protection program at sontusdatos.org; and on Defensores Digitales, their website for youth.

Categories: Blog

Youth and privacy research in Chile: Derechos Digitales

March 15, 2018 - 5:40am

 

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.

Quick facts

Who: Patricio Velasco from Derechos Digitales

What: Litigation, campaigning, research

Mission/vision: To defend, promote and develop human rights online, through advocacy in public policy and private practices, for a more egalitarian and just region.

Where: Chile

Since: 2005

Years of operation (as of February 2018): 13

Works in the field of: Privacy, data protection

Post summary: Derechos Digitales is a digital rights organization doing qualitative research on youth and privacy as part of its work to improve data privacy legislation in Chile.

Highlight quote from the interview: “Research like this shows interesting tensions in the narratives around youth: challenging the common framing of kids as people who are incapable of understanding the perils and threats of the internet and therefore should be controlled, by showing that, like everyone else, they can contemplate different threats and especially the practical skills needed to deal with them”.

More resources: Derechos Digitales’ website, direct link to their report on youth and privacy in Latin America

Interview with Patricio Velasco, Derechos Digitales
 

Derechos Digitales is one of the first digital rights organizations in Latin America. With over a decade of work, countless publications and campaigns, many human rights organizations in the region point to them as one of the main references for privacy work in the Americas. Patricio Velasco is a researcher at Derechos Digitales and the lead author behind their report on children, youth and privacy in Latin America.
 

Patricio, a sociologist, has always been interested in the configuration of public space and the distinction between public and private. His research motivation for the report was to give a regional take on how youth and children today understand “privacy”, and to what extent it can be construed as a limit of the public sphere. The motivation for the organization, however, was a legislative debate that was sorely in need of different perspectives.

Chile is discussing its personal data legislation, and a common view is that its proposals are insufficient to protect the privacy of youth and children. Derechos Digitales’ question is: “if the protection that the State can give is not as good as the one we want, we need to see what’s happening on the other side: youth. What are the abilities of children and youth to effectively manage their internet resources?” They recognize that risks still exist, and adequate institutional protections are necessary, “but the context still begs the question”.
 

Why are youth at the center of this discussion? “Digital literacies vary by age. What interests me are the grey areas. What is happening with this intermediate generation [today’s parents] is that they are aware of the need for some control over their kids’ use of the internet, but the kids have more skills than their parents, so the parents’ ability to exert control is limited”.
 

Youth privacy, more broadly, is also “related to a bunch of problems we see today. Non-consensual image sharing is an extreme case of this; managing personal data in an environment of big data; bullying in online environments. For Derechos Digitales, it is a deeper concern that goes beyond specific cases or public discussions”.
 

Derechos Digitales’ report on child and youth privacy was born as a research project that intended to use regional data from Global Kids Online, an international research project that collects information on children’s use of the internet. Originally, they wanted to compare Global Kids Online data from three Latin American countries (Brazil, Argentina, and Chile) with that of a benchmark country in Europe. However, Derechos Digitales was unable to secure access to the regional data; they were only able to look at some data from Brazil in a published report, and compare it to Global Kids Online data from Turkey and Poland.
 

With regard to kids’ privacy practices, the Global Kids Online data covers kids’ ability to delete their web history, to block people with whom they do not want to speak, and to change their privacy settings on social networks. In their comparison, Derechos Digitales found a relationship between privacy choices and participants’ household income, and looked at other factors like class and gender.
 

“Research like this shows interesting tensions in the narratives around youth: challenging the common framing of kids as people who are incapable of understanding the perils and threats of the internet and therefore should be controlled, by showing that, like everyone else, they can contemplate different threats and especially the practical skills needed to deal with them”.

 

 

Derechos Digitales’ report on child and youth privacy (available here)

 

For Derechos Digitales, research is the first step that then feeds other forms of advocacy. Patricio thinks the organization might use this research to think about youth capacity building the way their research on gender led them to develop special privacy workshops for women and journalists. For Derechos Digitales, capacity building requires leaving normative approaches aside.
 

“The organization has addressed topics like sex in the online environment, and the message has never been to say ‘you should not have sex online’. We have said that, if it’s a practice you are considering, there are some things you need to have in mind; talking about risks, thinking about the underlying social structures and individual agency, is essential for a truly free choice. Ultimately, the question we ask ourselves is how to enable everyone to control their privacy the way they desire, when not all of us have the same resources”.
 

This approach is consistent with the organization’s overall take on safety online, which is seen, for example, in their work on countersurveillance: “the logic of online protection presupposes more or less total knowledge of existing threats and best practices. Appealing to control relies on defined, limited situations that we get to know only from adult points of view. And it is an erroneous presupposition. To exert control over others tacitly implies that one is aware of all threats, and that aspiration seems laughable.” On the other hand, “data shows that youth do have consciousness of the potential threats”.
 

During the first semester of 2018, Derechos Digitales will be publishing more qualitative research on youth and privacy, carried out in collaboration with Chilean university students. “What we want to understand is what it is that end users (youth) consider, in their own ways of thinking, to be the limits of the public and the private. We have lots of work left to do in promoting those skills.”


You can read Derechos Digitales’ report on youth and privacy here (in Spanish), and all of their research in English here.

Tags: youth, privacy, Latin America, research
Categories: Blog

Launching the Data Culture Project

March 5, 2018 - 8:46am

Learning to work with data is like learning a new language: immersing yourself in the culture is the best way to do it. For some individuals, this means jumping into tools like Excel, Tableau, programming, or RStudio. But what does this mean for a group of people that work together? We often talk about data literacy as if it’s an individual capacity, but what about data literacy for a community? How does an organization learn how to work with data?

About a year ago we (Rahul Bhargava and Catherine D’Ignazio) found that more and more users of our DataBasic.io suite of tools and activities were asking this question — online and in workshops. In response, with support from the Stanford Center on Philanthropy and Civil Society, we’ve worked together with 25 organizations to create the Data Culture Project. We’re happy to launch it publicly today! Visit http://datacultureproject.org to learn more.

The Data Culture Project is a hands-on learning program to kickstart a data culture within your organization. We provide facilitation videos to help you run creative introductions to get people across your organization talking to each other — from IT to marketing to programs to evaluation. These are not boring spreadsheet trainings! Try running our fun activities — one per month works as a brown bag lunch to focus people on a common learning goal. For example, “Sketching a Story” brings people together around basic concepts of quantitative text analysis and visual storytelling. “Asking Good Questions” introduces principles of exploratory data analysis in a fun environment. What’s more, you can use the sample data that we provide, or you can integrate your organization’s data as the topic of conversation and learning.
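
As a taste of what the basic concepts of quantitative text analysis behind an activity like “Sketching a Story” look like in code, here is a minimal Python sketch of the word and bigram counting that WordCounter-style tools perform; the sample text is invented.

    # Minimal word/bigram counting of the kind WordCounter-style tools perform.
    import re
    from collections import Counter

    text = "data culture grows when people talk about data and talk about it often"
    words = re.findall(r"[a-z']+", text.lower())

    top_words = Counter(words).most_common(3)                    # frequent words
    top_bigrams = Counter(zip(words, words[1:])).most_common(3)  # frequent pairs

    print("top words:", top_words)
    print("top bigrams:", top_bigrams)

Counting words and adjacent pairs is enough to start a conversation about what a text emphasizes, which is the point of the activity.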

Developing Together

We built DataBasic.io to help individuals build their data literacy in more creative ways. We’ve baked in design principles that focused on learners (read our paper), argued to tool designers that their web-based tools are in fact informal learning spaces (watch our talk video), documented how our activities are particularly well suited to data literacy learners (read another paper), and focused them on building a data mindset (read our opinion piece).

These activities and tools were designed and iterated on with interested users (with support from the Knight Foundation). We develop all our tools based on the problems organizations bring to us. Our latest grant was a partnership with Tech Networks of Boston, who brought years of experience helping organizations develop their capacity and skills in a variety of ways. We prototyped a first set of videos for the WordCounter “Sketch a Story” activity with them, and tried it out in a local workshop with some of their partners and clients.

Trying Out a Model — the Data Culture Pilot

Based on how that went, we recruited 25 organizations from around the world to help us build the Data Culture Project. Non-profits, newsrooms, libraries, and community groups made up this cohort, and we created a network to help guide our prototyping. Over the last 6 months, each group ran 3 activities within their organization as brown-bag lunches.

It was wonderful to have collaborators who were willing to try out some half-baked things! After each workshop, they shared how it went on a group mailing list. Each month we then hosted an online chat to gather feedback and draw out common insights.

Even in these prototype sessions, the participants shared some wonderful insights. Here are just a few:

  • “It did lead to a pretty significant rethink by the communications director of what is coming out in the spring.”
  • “I hear back from participants regularly about how much they enjoyed the activities and wondering what comes next.”
  • “As they were working through their data sets, they kept coming up with more questions it made them wonder about and more things to consider about those questions.”
  • “They can relate everything back to their own situations / data / organizations.”

We were heartened and excited to see that our design partners were able to see impacts already!

How to Join the Community

We are launching the Data Culture Project today. Here’s how to make the best use of the project and the community:

  • Read about why you don’t need a data scientist; you need a data culture to understand why data literacy needs to be understood as a community capacity, in addition to an individual capacity.
  • Run one or more of the activities listed on the Data Culture Project home page. We found in the pilot that running one per month (and providing pizza) can work to bring people together.
  • Remix and modify the activity to work for you and tell us about it! At the bottom of each activity page, you’ll see a “Learn With Others” comment box where you can tell others what worked for you (à la Internet food recipe sites).
  • Join our mailing list to connect with others working on creative approaches to building capacity in their organizations (and be the first to hear about new activities and projects).


We are grateful to the Stanford Center on Philanthropy and Civil Society for supporting the development of the Data Culture Project. The Data Culture Project is headed by Rahul Bhargava and Catherine D’Ignazio, undertaken as a collaboration between the MIT Center for Civic Media and the Engagement Lab@Emerson College, and with the assistance of Becky Michelson (project manager) and Jon Elbaz (research assistant).

Categories: Blog

Co-design with youth in Argentina: Faro Digital

March 5, 2018 - 7:50am

 

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.


Quick facts

Who: Ezequiel Passeron from Faro Digital

What: Workshops, talks and campaigns

Mission/vision: To promote, through co-design, the responsible use of Information and Communication Technologies (ICT) for a more just society

Where: Argentina

Since: 2016 (but the team had worked together in different spaces since 2011)

Years of operation (as of February 2018): 2

Works in the field of: Responsible use of ICT, digital citizenship

Post summary: Faro Digital works with youth, through co-design workshops and talks, to discuss topics related to digital citizenship and the responsible use of ICT. This work in schools and other spaces helped them create a notable campaign on safe sexting.

Highlight quote from the interview: “To raise awareness in youth, we need to co-work rather than just bring an adult-centric view of responsible use, safety, privacy.”

More resources: Faro Digital’s website, press coverage of their safe sexting campaign

Interview with Ezequiel Passeron, Faro Digital

 

Ezequiel is part of a collective of young communicators who use their understanding of Information and Communication Technologies (ICT) to promote a fair society. This Argentine collective started working together in 2011, and established a non-profit organization called Faro Digital in 2016.

 

Faro Digital (Digital Lighthouse) gives talks and facilitates workshops on the responsible use of ICT in schools, in NGOs, and in spaces where marginalized communities gather. They also train teachers and parents on the ways that they can get involved in youth education in regards to the digital world.

 

As communications professionals, they have always shared a vision with the youth they serve: social media are great tools for communication. But they see that the adults around youth do not always share that view. Faro Digital see a need to explain to adults what is going on with youth in digital spaces, and to promote adult involvement in youth’s digital lives. They want to bridge this intergenerational gap by connecting adult family members with their children, and teachers with their students.

 

Ezequiel argues that connecting adults and youth can allow them both to reflect and learn together about responsibility online, which is the framing that Faro Digital favors to articulate its mission. Yet they are aware that “responsible use of ICT” is not necessarily a catchy term that will get youth excited about a workshop. A lot of their work aims to find shared vocabulary that generates empathy among youth; not to impose formal terms, but to use the terms youth already use.

 

They don’t want youth to feel like they are subjects of study, but rather collaborators, people who are having fun. At the same time, they also want young people to reflect about what they do online. Their overall framing of responsible use of ICT sometimes takes form in conversations about what it means to take care of oneself and others online; about bullying, sexting and non-consensual image sharing.

 

This points to a key element that distinguishes Faro Digital’s work from that of other organizations in the same field: their genuine commitment to co-design throughout the process. “To raise awareness in youth, we need to co-work rather than just bring an adult-centric view of responsible use, safety, privacy.”

 

Their work in co-design was inspired by the failures they saw when youth weren’t involved in the design process. “We failed in our first years. We had quantitative objectives, we wanted to teach and give tools, but then we saw that youth didn’t respond well to the discursive distance between us. Things we cared about didn’t resonate with them. Sometimes they just told us what we wanted to hear. To address this, we needed a structured methodology with goals, but unstructured enough to give youth agency in the process.”

 

Faro Digital’s work in co-design has been largely inspired by the Digitally Connected network initiated by Youth and Media at the Berkman Klein Center for Internet and Society, and by Lionel Brossi’s research and work at the University of Chile. But what does co-design look like in Faro Digital’s work in Argentina? They run two types of sessions: a one-hour format, to fit into the periods that schools grant them, and a longer two-hour format.

 

In the one-hour sessions, they spend fifteen minutes interacting with the youth, asking what they like about the internet, what they don’t like about the internet, and what they would like to learn about it that nobody has taught them. Then they focus on two topics: with younger kids, they focus on grooming and cyberbullying; with older kids, they focus on digital footprint and sexting. They play a couple of videos, and kids are asked to come up with solutions to the problems they identify.

 

In the two-hour sessions, participants are divided into small teams. They start each session by asking youth to map out the online media they consume and the spaces in which they participate: videogames, social networks, influencers. As a consensus-building exercise, each team chooses one of the networks or games it listed, and then it’s time to analyze what they like about it, what they would change about it, and what they find bothersome about it. For Ezequiel, that’s where critical thinking begins.

 

Participants are then asked to pick a problem and think of solutions for it in different formats (campaigns, applications, even emoji). After this exercise, they wrap up the last twenty minutes with a conversation on digital citizenship topics, depending on the age of the group they are working with. With 9-11 year-olds, they talk about grooming; with older youth, they talk about sexting.

 

For Ezequiel, “some topics are always more successful than others, but a lot of it comes down to how you present them. I think audiovisual content is essential in this. When we play videos, there is no one who doesn’t pay attention; they are used to consuming this type of content”.

 

This creative methodology helps address some of the difficulties of working as outsiders in a setting where they have very limited time with the youth. “At first, it’s challenging to build trust and participation. That’s why we speed into creative work: to help them feel comfortable with us and like they belong in this activity. For me, one of the most effective tactics is to pivot fast when there are awkward silences. If we think there is a lack of participation because a topic is painful or boring, we try to improvise and steer the conversation elsewhere”.

 

Through co-design, the responsible use of ICT becomes a discussion on digital citizenship’s hottest issues. But what are the underlying power dynamics that get uncovered through these conversations? For Ezequiel, the conversations that Faro Digital facilitates among youth enable critical thinking on gender oppressions, otherness and intergenerational trust.

 

Gender oppressions become clear as youth start to discuss sexting. When asking about their views on sexting, the Faro Digital facilitators make a point of asking about the gender identity of those in the most widely circulated photos – and why they think that is the case. “At this point we have stopped talking about the internet itself and now we are talking about society.” Talking about famous cases, or the “fad of spreading photos on WhatsApp,” can help warm youth up for this conversation.

 

Relationships with others are also a central topic when Faro Digital facilitates discussions on digital footprint. By discussing what youth find when they look for themselves on a search engine, and what they expect employers will be able to find after high school, they examine the idea of fully controlling the internet as one’s business card – a stance Ezequiel considers unrealistic. “We try to talk about the role of the other -- the impact that sharing photos without consent, or tagging those who don’t want to be tagged, can have on these searches”.

 

Faro Digital’s conversations on grooming don’t rely on stranger-danger narratives; they are about showing that meeting people online is not inherently bad, and about restoring trust in the adults around them. “The objective is for them to understand that adults must take care of them, even if they don’t understand ICT. That sometimes, even if adults’ first reaction when we tell them we are being sextorted is anger, their anger is related to their fear for us. And to not let that anger stop us from asking for help”. In Argentina, there is a public phone line against grooming, as well as legislation.

 

“Lots of kids criticize what they see every day, because it’s hard to ask for an ethical use of ICT when we take a snapshot of society today. We see lots of violence, little empathy. Bullying and cyberbullying happen in most schools in Argentina. It’s by pointing at the unethical that the conversations about ethics begin. They received little education in values and what that meant for their use of technology, and I think that’s something that worries them. We have heard answers from youth saying, ‘I have to behave well, be careful online, but nobody does that. Why am I not going to be part of the mass that insults, that discriminates against others online?’ It’s complex for youth because they are living in a society that shames, that publishes everything all the time. I think this makes ethical use complicated”.

 

Having difficult conversations does not need to be boring, and, according to Ezequiel, the power of the co-design process can be seen in the energy that’s generated in these spaces. “The educators who are there every day tell us that their kids struggle to feel interested, to engage in everyday activities like this. And the high level of interaction we see shows that this is working. We can see interest and genuine answers about what they think”.

 

The lessons they have learned in Argentine schools and youth spaces inspired Faro Digital to create a campaign on sexting. When they saw the disconnect in youth’s perceptions between sexting and non-consensual image sharing, and contrasted it with victim-blaming campaigns against sexting, they decided to take a different stance: remind youth that sexting is their right, but that they should do it carefully, opting for anonymity and for secure messaging and storage.

#SexteáConLaCabeza, Faro Digital’s campaign for safe sexting

“It’s better to cover [your head] now than having to wear costumes later”

This campaign opened new collaboration possibilities for Faro Digital. Movistar Argentina reached out to collaborate with them on a campaign about grooming. The organization will launch a new Facebook campaign focused on not sharing images of others without consent, and is currently working on Google-funded qualitative research on youth uses of digital tools, as well as a campaign on “Convivencia digital” (digital coexistence) with UNICEF Argentina and the Government of Buenos Aires.

 

Though the responsible use of ICT was the core of their mission when they started, their main focus today is generating methodologies to understand, and co-create around, what technology more broadly means for youth. They are interested in children’s creative and innovative uses of technology, as they think that that is where the potential for transformation lies.

 

Outside of workshops and campaigns, what would Faro Digital recognize as the natural progression of their mission? A digital cultural center where kids can go for robotics, code, media-making, and ignite talks; a space where youth can find something that changes their lives. “We have a long way to go in seeing mediators (educators, parents, etc.) use these tools to help kids find their passions”.

 

Ezequiel’s frustration with the Argentine education model is that it’s too structured and insists on labeling youth. “The possible paths are too rigid and most youth don’t fit into them, and the internet could help us find our own and create a world where everyone can thrive”. If you involve youth in the creative process, the responsible use of ICT for a more just society can support broader youth development goals: it can allow you to seize “the potential of self-discovery enabled by access to new technologies”.


You can read more about Faro Digital, and some coverage of their #SexteáConLaCabeza campaign, on their website.

youth, activism, privacy, Latin America
Categories: Blog

Social media helplines for positive and safe school climates - iCanHelpline

February 27, 2018 - 6:43pm

How do youth allies promote young people’s critical thinking on privacy, in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.

Quick facts

Who: iCanHelpline.org (run by The Net Safety Collaborative)

What: A social media helpline for schools

Mission/vision: To help U.S. schools and districts reduce cyberbullying and grow student safety and positive school climates

Where: Based in Salt Lake City and serving schools nationwide

Since: 2015 (based on research since 2000)

Years of operation (as of January 2018): 2.5

Works in the field of: youth online safety

Highlight quote from the interview: “We are trying to humanize the process, get educators to see that it’s not really about technology. Every incident involving young people is unique – as individual as the people involved. We ask administrators who call what’s going on, what platform, if local media have reported on it, if they’ve reported it to the app or internet company, or if they need help with that. [...] In order to do any of this, often they need help from students because they don’t know how to use the app. In most cases students have brought them the issue because they don’t like the negativity either. So we encourage them to work with their students and develop their digital leadership.”

More resources: iCanHelpline.org, NetFamilyNews.org


Interview with Anne Collier, iCanHelpline.org and the Net Safety Collaborative

 

I want to start this post by disclosing that Anne’s views, through her writing and through our conversations, have helped me articulate a vision of youth safety that supports, rather than undermines, youth agency – and I am honored to start this blog post series with her most recent initiative.

 

Anne is a youth advocate; the author of one of the most robust, nuanced, and informative sources on youth and media in the United States (and anywhere, in my Latin American opinion); and a true connoisseuse of the evolution of advocacy and research in the field of youth and technology. Through twenty years of work on youth and online safety, Anne looked for practical approaches that served youth, and it became clear to her that the missing piece in the US was an Internet helpline. In 2015, she founded iCanHelpline – the topic of this interview.

 

iCanHelpline is based in the United States, and run by The Net Safety Collaborative. It is the place “where schools and districts can call or email to get help in resolving problems that surface in social media – problems that threaten students’ safety such as cyberbullying, impersonation, harassment and sexting”. This means that the helpline does not work directly with youth, or even with parents: they work with school staff.

 

For the purposes of this series of interviews, I decided not to focus exclusively on allies that work directly with youth; working with other audiences can be key in youth allyship, and iCanHelpline’s strategic decision to work with schools supports this rationale. “My research through the years has shown me that most messaging has been about kids with engaged parents, and a significant proportion of our kids don’t have that support but need it. Those at-risk kids may or may not have engaged parents, but virtually all of them are in school. It seemed logical to start there”.

 

Why a helpline? Anne recalls that, in 2006, when kids were adopting MySpace and other platforms, a lot of initiatives were undertaken to fill in the knowledge gaps for parents, policymakers and media, from taskforces to Anne’s own writing on Net Family News. To her, “now, at the beginning of this decade, it felt more and more like we could keep writing and trying to guide parents, but it wasn’t going to get us anywhere”. Anne looked at practical approaches that served youth in English-speaking countries, and found early examples of helplines in the United Kingdom, Australia and New Zealand.

 

In addition to these countries, the European Commission had funded helplines in many EU countries; however, the United States did not have what’s called an “Internet helpline” yet. Anne worked with other colleagues from the world of net safety to set up a pilot, modeled after the UK’s internet helpline, and explore different sustainability models.

 

This is a blog series about youth and privacy, but Anne considers her work to be in the field of net safety. In a changing landscape of umbrella concepts, from the media literacies of the last decade to the digital citizenship of today, she argues that being deliberate about how we frame youth safety is key to recognizing youth agency in the process:

 

“In the first 15-or-so years of Internet safety, all the messaging, from politicians to civil society organizations, was along the lines of ‘be careful what you post, as it can come back to haunt you.’ It was all about consequences to oneself, and online in isolation – keeping yourself safe, your personal information private, your ‘digital reputation’ positive. Youth were represented almost entirely as potential victims. There was no focus on you as a stakeholder in a community – online, offline or both – or as a participant in keeping things safe for yourself, your peers and your communities. Agency wasn’t even a component of ‘digital citizenship,’ which, at least in the U.S., has been about good digital behavior, typically for the purpose of ‘classroom management.’”

 

“Thankfully, there’s increased discussion about the importance of resilience in sustained wellbeing, online and offline. For too long in the public discussion about youth online safety, we neglected the development of resilience and other internal safeguards such as media literacy and the skills of social-emotional learning (or “social literacy”) in favor of surveillance and control: parental control tools, rules, policies and laws. These have their place, but there was an inherent imbalance. They’re all external to the child and send the message that only outside forces, never their own and their peers’ resources, are what keep them safe.

 

“Infrastructure certainly plays a safety role too,” she said, referring to one of “seven properties of safety in a digital age” she proposed in 2012, “providing users with the tools and know-how to counter social cruelty, report abuse and take responsibility for their own and each other’s wellbeing and that of their communities.” Through her interaction with major internet companies, she’s seen progress there, but “still not enough support on any adults’ part for young users’ agency. All the stakeholders – parents, educators, companies, policymakers and, through the conditioning of 20 years of Internet safety messaging, even youth – default too much to thinking that adult-to-child instruction about, and adult monitoring and control of, the online part of their lives are how that part stays positive and good.”

 

Bringing fresh perspective and focus to youth agency is recent work on researchers’ part aimed at adding the “digital” piece to the UN Convention on the Rights of the Child, Anne said. She pointed to a special issue of the journal New Media & Society, “Children and young people’s rights in the digital age: An emerging agenda.” The editors, Profs. Sonia Livingstone in the UK and Amanda Third in Australia, “highlight a crucial policy imbalance worldwide where, they write, ‘Efforts to protect [youth] unthinkingly curtail their participation rights in ways they themselves are unable to contest.’ The imbalance they’re referring to,” Anne continues, “is an over-focus on their rights of protection, which jeopardizes their agency.”

 

So “because social media use is as individual as anyone’s social life,” she says, “maybe the most practical way to educate adults about the pluses and minuses of teens’ social media use is on a case-by-case basis, through a helpline that meets caregivers’ need to resolve problems when they arise.” By taking calls from school personnel trying to address cyberbullying and other social cruelty online, Anne and her helpline collaborators “are trying to steer schools away from defaulting to law enforcement and see if they can work on a solution with the students who want the problem resolved”. Anne says nearly two-thirds of the cases they’ve been called about came to administrators from students, who don’t like drama and social cruelty any more than adults do. By working with students, she said, administrators see that students are “part of the solution much more than they’re part of the problem,” honoring their agency and potential for digital leadership. Internet helpline work, she said, is “much more about adolescent development than crime and punishment, and we want to see more and more schools focus on restorative rather than punitive approaches to online problems as well as offline ones.”

 

School personnel call the helpline most often to try to get harassing content taken down. “Which is fine. Let’s meet that need, and in the process send the message that even problems in and with tech are actually more about humanity than technology. That’s not always easy to hear, but our process is simple. We find out what’s going on, what platform’s involved – sometimes it’s more than one – if local media have reported on it, if schools have reported it to the internet companies and if they need help with that. [...] And in order to do that, many times they need help from students because they don’t know how to use the app.”

 

Once schools have reported the problematic content, iCanHelpline leverages its relationships with different internet companies to help expedite the process. Anne says that companies are “typically very responsive” in getting harmful content deleted.

 

This is all part of the complaint escalation model, which has complications: It means that only users who understand the abuse reporting tools or have access to the correct intermediaries are likely to see a prompt response to their reports, largely excluding users outside the global north (Athar, 2015). It also means that intermediaries like iCanHelpline and other helplines can’t always meet callers’ expectations because they can’t themselves act on Internet companies’ Terms of Service or community guidelines; only the companies themselves can. And these intermediaries bear the responsibility placed on them by the public without being able to guarantee a satisfactory outcome and without remuneration from the companies for making their users' experiences safer.

 

But the fact is that, as of today, companies still struggle to respond to requests in a timely manner in the face of masses of user-generated reports, and the role of intermediaries like iCanHelpline is essential both in helping companies address time-sensitive issues more promptly and in helping users understand the reporting options available to them. “Schools and other institutions responsible for user safety don’t understand social media companies and systems, and a lot of the reports companies receive are not actionable because of a lack of context for what’s being reported,” Anne said. “Most of the reports are what the companies call ‘false positives,’ coming in with inaccurate or inadequate information.”

 

In other countries, many of these helplines are largely government-funded, but Anne says she’s not convinced this is the right funding model in the United States – at least not now, in a politically charged environment.

 

“I think, ideally, support comes from a consortium of companies whose users are receiving help from these intermediaries around the world. But it’s complicated,” she adds. “Societies don’t yet understand that there’s this new intermediary layer that helps both users on the ground and the services in the cloud. Users get help and perspective, companies’ moderation teams get pre-screened context – and this is on top of the traditional help layer, with vertical-interest help services for suicide prevention, mental healthcare, support for domestic violence victims and all the other established services for offline issues.

 

“The new middle layer is unprecedented, so the business model has not yet been figured out, and meanwhile these companies, some of which have users in every country on the planet, are getting requests for funding from an unimaginable number of NGOs. They need some solid analysis of the safety ecosystem – the education parts for prevention and the helplines, hotlines and law enforcement agencies for intervention. I’d like to see an international gathering of representatives from both the prevention and intervention sides.” For now, iCanHelpline is operating under the subscription model.

 

With more funds, “we’d do a lot more marketing: it’s all about uptake and letting schools know that this is available to them. But that’s complicated too, because many schools still think of cyberbullying and other problematic digital content as ‘off-campus speech’ that isn’t their responsibility to address,” Collier said. “Another challenge, I think, is that schools are conditioned to believe that nothing can be done about harmful digital content. The social media companies are really stepping up now, but they weren’t as responsive in social media’s first decade, so schools, parents and users in general came to think they were on their own. Not only is social media, not to mention a service like this, hard to wrap their brains around, but they’ve developed stopgap measures that we don’t think really work for social cruelty online – like calling in the tech coordinator, the school resource officer, or the district Title IX lawyer. So little of what happens in social media is about tech, actually, and certainly not something you call the cops about. But people don’t know that”.

 

In an independent evaluation of iCanHelpline’s service, 93% of the educators who called were “extremely satisfied” or “very satisfied.” I personally like to think that these are all users who will never assume that nothing can be done to improve youth’s experiences online.

 

“The public discussion and the news media, with their tremendous negativity toward young people’s use of social media, even when contradicted by research and the new sociology of childhood, do frame youth as victims. It is a framing rooted in old-school consumer media culture – the previous media era, not this one. Not only does it not apply to young media users today, it disempowers them. That should not be,” Anne said. A content takedown request becomes a pretext to promote understanding among educators and to encourage them to collaborate with their students to resolve problems together – and, ultimately, to enable youth to improve their experiences online, rather than leave them alone with their devices or drag their social tools away from them. For Anne, this form of practical work serves all stakeholders, including the youngest ones.

 

You can read more about iCanHelpline at iCanHelpline.org, and find all of Anne Collier’s extensive writing on youth and media at Net Family News.

 

youth, activism, privacy, Americas
Categories: Blog

Youth and privacy in the Americas: a blog post series

February 27, 2018 - 6:39pm

How do allies who work with youth, in informal learning contexts in the Americas, promote critical thinking on privacy?

Before I came to MIT, I worked for five years as a technological capacity builder in nonprofit organizations in Latin America. Through this work, I encountered strong tensions between the missions, visions, and methods of organizations that work on youth and child issues and those that work on digital rights. Youth and child rights organizations seemed to advocate for stronger regulation and law enforcement capacities to keep minors safe online, while digital rights organizations pointed out the ways this infrastructure could be used to silence dissent and harm activists. These tensions are part of a discussion on youth privacy that has not found much common ground between organizations that appeal to moral panic and those whose visions of net freedom are blind to child and youth rights and needs.

In making sense of the different threats to youth privacy, the needs of different populations, and a fast-changing media ecosystem, youth allies who work at the intersections of youth development, digital rights, and online safety face a complex challenge: how to support critical reflection by young people that transcends specific media and particular contexts, and develops into an evolving stance on the ethics of sharing personal information.

In many discussions on privacy, youth are framed as fragile and in need of protection. This argument is then used to justify surveillance systems that help target dissidents. Youth are also framed as mindless media consumers whose relationships with technology pose a threat to democracy. I want us all to instead engage in initiatives where youth are recognized as subjects of rights with specific needs, yes, but also as actors with agency.

This is the motivation behind my Comparative Media Studies graduate thesis, and the basis for this series of interviews with non-profit workers, youth advocates, and allies who work on youth privacy issues in the Americas. This blog post series will feature different organizations: the ways they carry out their work, their takes on youth and privacy, and the ways they support the development of critical thinking and practices.

A note on process: after each interview, I draft a blog post and share it back with the organization. Some have responded with immediate approval or minor changes, and others have taken a more active role in editing the post – co-authoring the post, in a way. The posts in this series therefore reflect this back-and-forth conversation and editorial process.

I will update this post with links to each interview:

youth, activism, privacy, Americas, Latin America
Categories: Blog

Hiring a Media Cloud Researcher / Community Manager

January 29, 2018 - 9:01am

The Media Cloud project (http://mediacloud.org) is seeking a Researcher/Community Manager to conduct research on media manipulation and to work with social change organizations to analyze conversations in digital media. The Media Cloud platform is a set of online tools for monitoring and measuring online media coverage. After becoming proficient in the use of the Media Cloud tool suite, the Researcher/Community Manager will contribute to ongoing research and help partner organizations explore online media coverage of issues related to their goals. The Researcher/Community Manager’s primary responsibilities are to develop a deep understanding of digital media ecosystems; to work with Media Cloud partners to conduct research on their dynamics; and to use those experiences to help inform further development of the Media Cloud tools.
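For a concrete sense of the kind of queries this role involves, here is a minimal sketch of counting matching stories through the Media Cloud public API. The endpoint, parameters, and response shape below are assumptions based on the project's public v2 API documentation, not a confirmed part of this role's toolchain; check http://mediacloud.org for current details.

```python
# Hypothetical sketch: count archived stories matching a query via the
# Media Cloud public API. Endpoint, parameters, and response shape are
# assumptions based on the public v2 API docs; verify before relying on them.
import os
import requests

API_KEY = os.environ["MEDIACLOUD_API_KEY"]  # assumed env var holding your key

def count_stories(query: str) -> int:
    """Return the number of archived stories matching a Solr-style query."""
    resp = requests.get(
        "https://api.mediacloud.org/api/v2/stories_public/count",
        params={"q": query, "key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["count"]

if __name__ == "__main__":
    print(count_stories('text:"net neutrality"'))
```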

The Media Cloud Researcher/Community Manager will work closely with researchers and developers on the Media Cloud team at the MIT Media Lab and at the Berkman Klein Center for Internet & Society at Harvard University. This grant-funded position provides an opportunity for someone to conduct important research into how social mobilization interacts with media, and to help make Media Cloud more useful for non-profits trying to understand the role of media in democratic processes. It is expected that scholarly and popular publications will arise from this research.

Responsibilities
  • Produce original research using Media Cloud tools and publish it in cooperation with the Media Cloud team and partner organizations;
  • Manage communications with multiple external partner organizations;
  • Construct and maintain search queries on various topics using Media Cloud tools;
  • Clearly and concisely communicate research discoveries to partners, and partner needs to the Media Cloud research team;
  • Develop and produce reports and multimedia training materials for partners and sponsor organizations;
  • Represent the needs of end users, participate in design discussions, and help prioritize improvements to the Media Cloud tools;
  • Report to the Media Cloud team on important trends in topic areas critical to partner organizations;
  • Help with Media Cloud outreach efforts by running training sessions and workshops.

This is a one-year appointment with the possibility of extension.
Qualifications

REQUIRED:

  • Bachelor’s degree in the social sciences or a health-related field, and at least one year of research experience;
  • Experience and/or interest in social change topics, particularly in the field of political communication and/or human rights;
  • Active participation in online communities;
  • Experience with creative uses of new technologies for social change;
  • Project management experience;
  • Excellent writing and communication skills;
  • Excels at considering issues from multiple perspectives;
  • Able to work independently and as part of a team;
  • Strong group facilitator;
  • Expertise in explaining complex technical concepts to non-technical audiences.

PREFERRED:

  • Master’s degree in political science, communication, or the social sciences;
  • Fluency in languages other than English.

Apply via the MIT Careers website (Job ID 15618)

Categories: Blog

Why Wholesome Memes Might Be Our Best Hope Against the Nazis

December 6, 2017 - 1:50pm
In Tokyo Boogie-Woogie: Japan’s Pop Era and Its Discontents, the historian Hiromu Nagahara describes a Japanese government meeting convened during the Second World War. A wartime ban had been placed on American popular music, and so officials were serenaded instead by the popular nationalist songs of the day, including "Over There," a 1939 tribute to the bravery of Japan's soldiers—and, unbeknownst to all but a music journalist in attendance, a cover of "Over There," the 1917 American anthem better known by its opening hook "Johnny, get your gun."

Nagahara tells this story as a case of and for transnational optimism, evidence that even mortal enemies could share a deep and common cultural connection under conditions of total war. Of course, the same facts can be read in the opposite way: that two nations could cheerfully hum the same tune while violently slaughtering each other.

I've been thinking about this story since Jason Koebler at Motherboard published an article earlier this summer revealing that many mainstream memes are made by Nazis. And not just the ones featuring fascist frogs, either: dank memes of all kinds often emerge from subcultural territory occupied by the alt-right before circulating throughout the wider web. Jason raised the question of whether it is ethical to share memes manufactured under such conditions, as if they were conflict diamonds.

But it also made me wonder: what does the mutual appreciation of these memes say about us? What common aesthetic allows a Nazi and a non-Nazi to appreciate the same dank meme? Should we worry about that kind of cultural connection? And if so, what should we do about it?

This problem is not new; in fact, it is just a different form of an ancient struggle between fascism and democracy. The common aesthetic of these circulating memes is the nihilist irony that developed as a reaction to WWII and today fuels the alt-right while fatiguing the rest of us. Our best cultural weapon against the advances of the alt-right, then, is wholesome memes, which look much like the memes you know, but are rooted in sincerity and compassion rather than nihilism.

As Jason referenced in his article, for decades academics have been arguing over whether it is ever OK to cite Martin Heidegger. The basic problem is that Heidegger was a brilliant and influential philosopher and also a Nazi. Some people argue he was so brilliant we can't ignore his philosophy; others say no, you shouldn't cite Nazis, because they were Nazis.

The debate was renewed in 2014 with the publication of Heidegger's long-embargoed Black Notebooks, which are somehow even more sinister than their title makes them sound. To save you from reading a thousand pages of a Nazi's diary, what the Notebooks show is that not only was Heidegger a committed Nazi, but that Nazism was baked into his thought, which means his philosophy must not only be discarded but actively opposed.

But how do you fight one of philosophy's most brilliant/damaged thinkers on his own turf? In 1953 Jürgen Habermas, perhaps the most influential democratic theorist of the postwar West, argued that Heidegger was too important to be ignored but too dangerous to rely upon. Instead, he proposed that "it appears to be time to think with Heidegger against Heidegger": to take his critical insights and transpose them democratically in order, as one commentator put it, "to leave space for the citizens themselves to determine and develop their different collective and individual life projects." For the purposes of this essay, we can think of Heidegger as the philosopher of Nazi memes, and Habermas as the philosopher of democratic memes.

And so the time has come to think with memes, against memes: to reappropriate and redesign the meme-form to fight fascism and rebuild democracy.

There are many kinds of memes: dank, spicy, and fresh, just to name a few. But the memes that Jason identified as emerging from the alt-right are often wrapped in layers of irony that insulate the reader from caring about their subject. Who cares about climate change if we're all garbage?

This particular style of nihilism resembles that found in Heidegger's philosophy, specifically his concept of being-toward-death. According to Heidegger, we are thrown into the world, and our uncertain fate makes us feel guilty and anxious. Only the resolute anticipation of our own death can free us by allowing us to see, in our individual absence, our individuality. Our own death makes possible our own life.

Being-toward-death thus liberates us by alienating us. It reminds us that we are individuals, apart from society, and that when we die, our being ceases. Except here is the thing: being an individual waiting/wanting to die is no way to live together in a society, which is precisely the problem we face broadly, now. We sit and scroll and daydream of the day we will each be dead in the ground, freed of our respective responsibilities. Meanwhile, the alt-right, enthusiastically alienated from/by society and without a single fuck to give, is on the march.

The good news is that there is an antidote to this kind of alienation. The bad news (for lovers of nihilist, ironic memes, anyway) is that it's to be brutally, unironically earnest, with others and with yourself. It requires a New Sincerity, but for memes: replacing the ethic/aesthetic of postmodern irony with an earnest wholesomeness that, as Habermas hoped, helps us live together rather than apart.

Postmodernism refers to many things, but can be broadly understood as a philosophical and artistic movement, developing especially after the catastrophe of the Second World War, that questions modern concepts of progress and objectivity. In literature and art, this skepticism was performed especially through irony, which projected expertise while protecting authors from being pinned down to truth-claims. In the decades since, postmodernism spread not only across the humanities and arts but the sciences and now everyday life, with government officials offering alternative facts and Facebook struggling to define fake news.

Yet in recent years a new movement has sought to transcend postmodernism by moving beyond irony and rebuilding the world it once sought to split. If this movement has a manifesto, it might be David Foster Wallace's 1993 essay on television and US fiction. The essay, while nominally a review of contemporary sitcoms, is also a commentary on the postmodern aesthetic of nihilist irony: distant and distancing, and terribly isolating. The title of the article itself (E Unibus Pluram, or "out of one, many") describes both how televisual culture operates and the ultimate effect on a nation subjected to it.

Written while he was drafting Infinite Jest (which, may I remind you, features a germophobic President tanned an unnatural orange whose television stardom gets him improbably elected despite a quixotic campaign of building a border wall and launching trash into Canada to Make America Clean Again), Wallace argued that "irony and ridicule are entertaining and effective [and] at the same time they are agents of a great despair and stasis in U.S. culture."

For Wallace, the nudged ribs, cool smiles, and knowing winks of televisual culture made people laugh, but also made people afraid of being laughed at, and thus simultaneously kept them pacified by entertainment and frozen by fear, afraid of becoming entertainment. This is also why, when people asked Wallace what the massive Infinite Jest was about, he often told them "loneliness."

Against this debilitating irony, Wallace both predicted and prayed for a new post-postmodernism, one that, rather than daring to be skeptical, would dare to be sincere:

"The next real literary ‘rebels’ in this country might well emerge as some weird bunch of anti-rebels...who have the childish gall actually to endorse and instantiate single-entendre principles. Who treat of plain old untrendy human troubles and emotions in U.S. life with reverence and conviction. Who eschew self-consciousness and hip fatigue. These anti-rebels would be outdated, of course, before they even started. Dead on the page. Too sincere. Clearly repressed. Backward, quaint, naive, anachronistic. Maybe that’ll be the point. Maybe that’s why they’ll be the next real rebels."

In late 2016, as the entire world was collapsing, a new kind of meme was exploding. Wholesome memes did not originate on reddit, but they came to be gathered there, aggregated in an eponymous subreddit that called for memes expressing "support, positivity, compassion, understanding, love, affection, and genuine friendship by re-contextualizing classic meme formats, and using them to display warmth and empathy...with no snark or sarcasm." Such memes are not pre-ironic but post-ironic: they are aware of, and actively remix, the expectations and form of more nihilistic genres in order to express authentic sentiment and acknowledge the human connection between author and viewer.

Wholesome memes are effective because they encode, in a spreadable and durable digital form, the kind of emotional labor that picks people up and encourages them to go on, even if it's not clear where they are going or what awaits them. They dare to speak of the ordinary with reverence and conviction. They require vulnerability on the part of both author and viewer, and through that vulnerability build strength. They liberate not by anticipating individual death but by affirming shared life: that, after we pass on, we do not die, but instead live on through the people and institutions that compose the common world we made together.

For these and other reasons wholesome memes are also remarkably Nazi-proof. It's not only that territories controlled by the alt-right don't source wholesome memes, it's that they can't. Or, at least, they haven't yet, and I think it's unlikely they will. The wholesome ethic is egalitarian, antifascist, and resists ironic deployment. Instead, wholesome memes are fundamentally democratic because they build solidarity, indeed are solidarity: in both essence and function artifacts of a democratic consciousness, realized through communicative action, that Habermas has spent his life trying to build after Heidegger and despite postmodernism.

Most of us were raised in a postmodern age, taught first on the schoolyard, and eventually in the schoolhouse, to be cool, distant, safe, and so also passive, weak, complicit. But the challenges we face require a change in our culture, and thus also the media that both bears and transforms it. That change is here in wholesome memes. And it could not have come at a better time.

This change will not be easy. Living wholesomely is hard. Sincerity has risks. Faith can be tested, and can falter, but mustn't fail. Particularly in the present, beset by false facts and fascist frogs, but still trying to forge the path ahead, progressing in a dumb determined animal way, simply because we must. Because authentic wholesomeness, not the wholesomeness of a child but of a monk, not inherited but chosen, not given by grace but earned by hard work, can be mocked, or betrayed, but it can never be corrupted. It has the lasting strength of strength surrendered; "no one takes it from me; I lay it down of my own accord." And in these dark days, it may be the best hope we have.

memes, wholesome, heidegger, habermas, dank, nihilism, democracy, literacy
Categories: Blog

Hiring a Media Cloud Collections Curator

December 1, 2017 - 10:37am

The Media Cloud project is seeking a contract collections curator to help assess the state of our existing media collections and work towards improving the quality and reach of our data to support data-driven research about the role of online media in civic discourse. The curator will collaborate with the research and technology teams to understand the current coverage and health of the different collections and contribute to their improvement. They will assist in the identification of additional web-based news sources and data from different digital platforms, work to improve the presentation and use of existing collections, and collaborate with external partners. The position will be a 6-month part-time contract position based at the Center for Civic Media (at the MIT Media Lab). This is a grant-funded contract position that we hope to extend, or perhaps turn into a staff position.

Media Cloud is a joint project between the Center for Civic Media at MIT and the Berkman Klein Center for Internet & Society at Harvard University. We are an open source project producing research about the networked public sphere, and helping others do their own research about online media. We make available to the public our existing archive of more than 550 million stories, adding more than 40,000 new stories daily. The project is funded by grants from different foundations. We produce both the open platform and research that helps our funders make decisions about how best to influence online civic conversations about democracy, activism, and health.

We are a diverse project of researchers and technologists who love to wrestle with hard questions about online media by using a combination of social, computer, and data sciences. The ideal candidate will work well with all members of the team, from senior faculty to junior developers, and will thrive in an academic atmosphere that privileges constant questioning and validation at all levels of the platform and of our research products.

Minimum Qualifications:

  • Research experience;
  • familiarity with media ecosystems and journalism;
  • experience working with databases and digital research tools;
  • demonstrated ability to work with technical and research teams;
  • good at considering issues from multiple perspectives;
  • able to work independently and as part of a team;
  • interest in working on issues related to democracy, gender, race, health, and globalization.

Duties:

  • assess the current state of Media Cloud’s media collections and document key findings in a report;
  • improve the overall health of our data sources;
  • contribute to the ongoing identification of additional sources and collections;
  • work with the Media Cloud team to facilitate the presentation of media collections to users;
  • work with the technical team to improve data discovery and ingestion tools;
  • assist in the identification of additional data from different media platforms;
  • collaborate with partners and volunteers to improve the coverage of media collections.

Helpful Skills:

  • experience working on big data systems and/or projects investigating online media;
  • global outlook;
  • passion for solving difficult data problems;
  • multilingual.

We strongly encourage women, people of color, people of all ages, and people of any sexual identity to apply.

The job is based in Cambridge, MA, but much of our team is distributed around the world. We are open to alternative working arrangements that include part-time residence in Cambridge. We imagine this position as a 3- or 4-day-a-week engagement over 5 to 6 months, but are open to other approaches.

Apply by sending a cover letter, resume, and portfolio to jobs@mediacloud.org.

Categories: Blog

Data for Black Lives: Automating (In)Justice

November 18, 2017 - 12:33pm

Automating (In)Justice: Policing and Sentencing in the Algorithmic Age

Data for Black Lives (D4BL) is "a group of activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black people." This is a liveblog from the Automating (In)Justice panel at the D4BL 2017 Inaugural Conference. Liveblogging contributed by Rahul Bhargava – apologies for any errors or omissions.

Adam Foss starts by talking about how criminal justice reform has been a hot-button issue. In Boston we incarcerated a generation of black men, and now we are feeling the impact of that “smart on crime” approach. Right now, all along the continuum, people are trying to use data to solve this historical problem of mass incarceration. There’s good to that, and bad to that.

Panelists:

  • Adam Foss
  • Charmaine Arthur
  • Samuel Sinyangwe
  • Kim Foxx
  • Julia Angwin

Charmaine Arthur

Arthur is the Director of Community Programs at Freedom House (in Roxbury, Boston). Their founders were at the forefront of the Boston busing crisis. They’ve started a school for children of color to fight for equitable education. They work with high school and college students to create success and opportunities through coaching, college-level opportunities, and other community work and civic engagement.

Data helps them in a number of ways. It helps them do their work better. It gives them context. It helps them identify who they serve. They measure things like race, sex, grade, graduation, attendance, family base, economics, and more. They use Salesforce for a lot of this. Data allows for some accountability.

This data is a shell.  Until they meet a person they don’t see the life. And they let the students use their own data and be advocates.

Data can also give a false sense of progress and hope. It takes time to work against this. Freedom House survives through funding from foundations, and often the funders dictate how to do the work. The corporatization of non-profits is happening – they’re using the same language as Wall Street. How do you feel about the “return on my investment” in this work? Absolutely not. We don’t talk that way about our young people.

Samuel Sinyangwe

Sinyangwe’s work began with the death of Michael Brown in 2014. Just afterwards, communities that had been experiencing police violence were finally able to say so publicly. Others attempted to shut this down by saying they didn’t have the data, as if your lived experience needed a study to justify it.

They built the most comprehensive database of people killed by police in the US. They showed that police killed 323 black people the year Michael Brown was killed.  Then they began to use data as a tool for accountability.

Then they could have a conversation about why the numbers were the way they were. Why are 1 in 6 homicides in Oklahoma City committed by police? 1 in 3 people killed by strangers in the US are killed by police officers. Over 1,200 people a year for the last five years. How do we make this apparent and accessible to people? Visualization has been critical to help people understand what is going on, and move to some kind of action.

They have national data, and also deeper data about the top 100 departments in the US (through public records requests and other means). In Orlando, FL, they met with police leadership, and the data showed that Orlando is the second highest for people killed by police. When they presented all this, the leadership explained it away: Orlando is a heavy tourist destination, there are lots of folks on the Orange Ave corridor; clearly Orlando is unique and can’t be compared. So Sinyangwe pulled the New Orleans Bourbon St. data, which shut down that conversation.

The people in this room can download the dataset and use it – http://mappingpoliceviolence.com.
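If you want to follow that advice, here is a minimal sketch of one way to start exploring the data with pandas. The file name and column names are hypothetical placeholders for illustration; the real export's schema may differ.

```python
# Hypothetical sketch: summarize police killings by year and race with pandas.
# "mapping_police_violence.csv", "date", and "race" are assumed names for
# illustration; check the actual download from the site for the real schema.
import pandas as pd

df = pd.read_csv("mapping_police_violence.csv")

# Derive the year from an assumed "date" column, then count rows per group.
df["year"] = pd.to_datetime(df["date"]).dt.year
print(df.groupby(["year", "race"]).size().unstack(fill_value=0))
```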

Kim Foxx

Foxx is the State’s Attorney of Cook County (Chicago). They release this data to the public in a very accessible way. There is a sense around mass incarceration that things are “anecdotal.” 86% of the people in Cook County jail are black or brown. 94% of people in the juvenile system are black and brown. In the prosecutor’s office we don’t know how this happens, because the systems are black boxes.

For Foxx it was important to have the public know what she was doing, and how she makes decisions. How do you measure if you are better than your predecessor? She ran on the issue of people in jail being stuck there only because they are not able to afford their bail. Sharing information gives them a benchmark.  Foxx insists that “you can’t fix what you can’t measure.” 

Sharing budget, agenda, and more lets the public know. People can run the datasets themselves. They’ve hired a Chief Data Officer for the prosecutor’s office, and released the last 6 years of data, precisely because they wanted it to be continuing and accountable.

In 2016 their second highest felony offense (after gun possession) was retail theft (shoplifting). They didn’t know that until they dove into the data. Illinois’ felony threshold for retail theft is $300. Indiana’s is $750. Wisconsin’s is $2,500. When you think about the impact of a felony conviction, you can ask a question about what we are doing. At her department, they decided not to charge retail theft as a felony for under $1,000 (they can do that at their discretion). The data helped them see that, and led to that decision. Next year they’ll be able to look at the impact of that on prison and jail populations, and more.

Chicago has an issue of violence, and Foxx has limited resources. If we are about public safety, we must look at it on a continuum. Violence is connected to education, arrests in schools, and more. The highest incidence of violence is in places with under-resourced schools, the places where people returning from convictions live, and more. You can’t arrest your way out of violence. The justice system can’t just be reactive. We can’t put the wrong people in jail.

Julia Angwin

Julia was destined for computer science, but took a turn towards journalism. She covered technology for 15 years. She started writing about criminal justice because she had been writing about the data being collected and wondered how it was being used.

The highest-stakes algorithm judging people is the software used across the country to create “risk assessment scores,” which Angwin wrote about at ProPublica. These scores are used at pre-trial, parole, and sentencing. San Francisco, most of New York, and lots of other places use them. As someone math- and data-literate, she looked for studies justifying the scores. No one was doing these studies; in fact, Eric Holder asked the sentencing commission to have them studied. The only studies were by the companies that created the software. New York State purchased it in 2001 and released numbers in 2012, but they didn’t look at race. She did a FOIA request in Florida, and succeeded in getting two years’ worth of scores.

Angwin looked at the scores and found that for black defendants there were high scores across the board. For white people, almost no one was getting longer sentences. These algorithms are totally biased. The rigorous statistical analysis, after 6 months of work, backed this up.

The computer science community has validated all this work, but the criminal justice community has totally rejected it. It becomes a debate about the definition of fairness. Bringing numbers to the table helped this debate happen.

“You gotta bring numbers to the fight”

 

Discussion

Adam shares that in Chicago the shootings aren’t that far outside the average, but you often can’t get to a hospital in 45 minutes or less, so the homicide numbers are worse. In Boston, they say homicide is way down while the shooting rate is through the roof (because you are at a trauma hospital in 4 minutes).

Samuel shares that in the absence of data you just have assumptions. When you talk about addressing police violence you run into an old script. It says that anything that restricts how police use force endangers police or the community. There is no data to support any of those claims. These are assumptions that are taken as fact, and they couldn’t be challenged well because the data wasn’t there.

They’ve now tested these claims with the data we have, and find they are lies. They looked at use-of-force policies and how restrictive they were, and tested whether there was an increased risk in the departments that are most restrictive. In fact, these were the safest for civilians and police officers. You share that finding in the room with the police union and they have nothing to respond with. This can move those conversations forward.

Foxx asks: what makes you a good prosecutor? How do you measure the outcomes of what you do? If you say that you want to keep communities safe, and give someone a harsh punishment, and then see the person over and over, are you successful? Is this harsh sentencing aiding public safety? We have to look at the aggregate impacts on community; otherwise we’ll continue to do the same thing.

We haven’t defined what “tough on crime” or “smart on crime” means. If you don’t have to own that “tough on crime” means lots of people in prison and decimated neighborhoods, then the data doesn’t matter. The narrative of “personal responsibility” has dominated prosecutorial offices for years. This narrative lets you not care about the impact, and absolves you from the conversation. We cannot afford to do that. In what other place can you invest $500 million on crime and have a 55% recidivism rate?

Adam shares that Foxx was elected on a wave of anti-incumbent prosecutorial elections. Next year there are 1000 DA elections across the country. This is an opportunity. Foxx is a leading example of what can happen when we change.

Foxx shares that 80% of elected prosecutors are white men. Less than 1% are women of color. This is important, because we need people in these positions to push back on this. She is from public housing and a single-mother family – all the risk factors that make her high risk from an algorithmic sentencing point of view. Prosecutors without those connections don’t know the impact of the policies, and that’s a problem.

Arthur shares that there are lots of egos at the table. Yes, you have to bring numbers, but what happens when you are worn out fighting with the numbers, because those numbers are lives? Understanding “why” matters. There has to be action with the communication. It takes time, and we have to keep chipping away. But the funders say: here, you have 3 years to fix it. We just can’t do it. Quality programs are proactive and find youth before they fall off the wall.

Adam asks the panelists – what do you need to do your work better? How is data going to help us?

Arthur shares the story of her kids, who have had different experiences of racism – from shootings to support failures and more. The single story about 18-year-old black males is dangerous for individuals. The information Sam has is information Freedom House can use. They can give youth the tools to advocate for themselves. We need to advocate for our own.

Sinyangwe argues that the field of stopping police violence is new. The data is out there for you. The policy information is out there. Help produce knowledge that communities can use for change. Look at civilian review boards – there is no data to tell you which structure is the most useful. Make this stuff accessible.

Foxx wants to amplify this. We don’t validate why things are happening; we don’t understand them. We have to be cognizant of the nuances in spaces, otherwise we’ll just adopt things because other folks have. They need people in the data/analytical space to come to the criminal justice system. Advocacy from outside is good, but we need help inside it too to figure out what questions to ask. Foxx wants people to work with prosecutors to help.

Angwin has a team of two programmers that she works with. Every industry needs more tech literacy. The most shocking thing about criminal justice risk scores was the amount of forgiveness applied to white defendants. Her analysis of car insurance rates produced the same chart: as risk went up, rates declined in white neighborhoods. People use the word "bias," but what the algorithms have really done is allocate "forgiveness." This is an important re-framing. Can we build in forgiveness for more than one group of people?
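As a minimal sketch of that reframing (all numbers invented, not Angwin's data): "forgiveness" shows up as a gap in false positive rates, i.e., how often each group gets flagged high-risk despite never reoffending.

```python
# Toy illustration of algorithms "allocating forgiveness" (numbers invented).
# Each record is (flagged_high_risk, actually_reoffended).

def false_positive_rate(records):
    flagged_wrongly = sum(1 for flag, reoff in records if flag and not reoff)
    did_not_reoffend = sum(1 for _, reoff in records if not reoff)
    return flagged_wrongly / did_not_reoffend

group_a = [(True, False)] * 40 + [(False, False)] * 60 + [(True, True)] * 50
group_b = [(True, False)] * 20 + [(False, False)] * 80 + [(True, True)] * 50

print(false_positive_rate(group_a))  # 0.4 -- flagged though they never reoffend
print(false_positive_rate(group_b))  # 0.2 -- twice the benefit of the doubt
```

Both groups reoffend at identical rates here; the only difference is who gets mislabeled as dangerous.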

Questions

How do we help advocates use data better?

Angwin shares that people over-collect data before they have a question. You need a targeted, smart question before you start collecting data. Otherwise your data is putting people at risk. You have to plan for the day your data is lost or leaked, because it will happen.

Surveillance is a real risk, reminds Sinyangwe. You have to take steps to protect yourself. The framing of your statistics matters - especially for folks who aren't data literate; they're the ones who need to take these numbers and use them.

What do you say to black communities that don’t feel safe and want more policing and surveillance?

There is not a magic answer to that, says Arthur. We used to depend on our neighbors. We’ve lost trust. We used to have a shared understanding of what the village looked like. Community policing works for some people, but it doesn’t work for everyone. It can build trust.

Those numbers are real people that live and breathe. We need to really remember that. We need the police. Arthur was going to be a cop, but her mama said that her calling was to work with young people.

Foxx hears this question a lot. She goes into neighborhoods and talks with people in forums. The ACLU folks were talking about stop and frisk. A woman stood up and shared that she was scared to go to the bus stop; she didn't want an open-air drug market next to it. Another woman asked about getting rid of the unlicensed snow cone seller. Foxx, who didn't live there, hadn't understood that the problem was the loitering, drug sales, and more that happened around the snow cone seller. People have a deep fear of the police, and a deep fear of the person causing harm.

People want policing that isn't dangerous to them. This narrative can't be lost. Law enforcement has to contend with bad tactics and bad policies in the communities that most need to trust the law (because they are suffering the most).

Is the failure to collect this data a problem of resources, or a deliberate choice not to collect it?

Sinyangwe says it is a combination. The political system responds to crises: the Department of Justice only opened investigations into Ferguson, Baltimore, and Chicago when something big happened. Patrick Sharkey found in a recent study that the crime decline of the past few decades was driven by non-profit organizations: for every 10 NGOs working on these issues, there was a 6% drop in violent crime and a 10% drop in homicide. Yet the only place resourced to respond when you need safety help is the police department. That was a choice; the alternatives were defunded. Other studies show that mass incarceration had zero percent impact on the decline in crime, but that is where all the money goes. Same result for spending on police - very little impact on crime (0% to 5%). We have to shift to community-based responses. Those are the evidence-based responses to this problem.

Angwin attributes this to benign malice (if that exists). Journalism is the last watchdog - people respond to it. And journalism is in crisis: all the money is flowing from newsrooms to Google and Facebook. Journalism needs our support to bring attention to this.

Categories: Blog

Data for Black Lives: Opening Panel Live Blog

November 18, 2017 - 7:54am

Data for Black Lives (D4BL)  is "a group of activists, organizers, and mathematicians committed to the mission of using data science to create concrete and measurable change in the lives of Black people." This is a liveblog from the opening panel for the D4BL 2017 Inaugural Conference. Liveblogging contributed by Rahul Bhargava and Catherine D'Ignazio. They apologize for any errors or omissions.

Yeshimabeit opens with a reminder that data and technologies have far too often been weaponized against black communities.

Panelists:

Cathy O'Neil

Data is presented to us as facts. Cathy found in finance that data had been weaponized (during the last financial crisis). She left for data science, where she saw the same thing happening. Algorithms are opinions embedded in code.

Algorithms are predictions. They use data as input - but you've chosen that data and ignored some of it. Then you train for success, and you have to define success. She uses cooking as an analogy: when she decides what to cook for her kids she uses various criteria. For example, she disregards ramen noodles because she doesn't think they are food (her kids would disagree). She's in charge, so she gets to define success (i.e., whether the kids ate vegetables).

The point is: whoever's in power gets to choose what is relevant data and what success looks like. You optimize for success over time. The definition of success matters a great deal. Behind the algorithmic math are people who define success, but don't "share in the suffering of failure".
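As a toy sketch of that point (records and numbers invented for illustration): the same data, ranked under two different definitions of success, gives opposite answers about who the "best" customer is.

```python
# Same hypothetical records, two definitions of success.
applicants = [
    {"name": "A", "repaid_loan": True,  "fees_collected": 10},
    {"name": "B", "repaid_loan": False, "fees_collected": 90},  # defaulted, but profitable
    {"name": "C", "repaid_loan": True,  "fees_collected": 15},
]

def rank(people, success):
    return [p["name"] for p in sorted(people, key=success, reverse=True)]

# Success = "repaid the loan" favors A and C:
print(rank(applicants, lambda p: p["repaid_loan"]))     # ['A', 'C', 'B']

# Success = "revenue extracted" favors B:
print(rank(applicants, lambda p: p["fees_collected"]))  # ['B', 'C', 'A']
```

Whoever picks the success function decides which of those worlds the model optimizes toward.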

We have had a long history of racist police policy based on broken windows policing. The explicit goal was to arrest people on small charges, which would theoretically reduce worse crimes in the future. But what that looks like in practice is concentrated nonviolent crime charges in black neighborhoods. We do not have crime data. I want to make that point again: we do not have crime data. We have arrest data. We have an enormous amount of missing data on white crime. Black people get arrested five times as often as white people for marijuana possession, whereas usage rates are the same.

We're not predicting crime - we're predicting policing.

A report by the Human Rights Data Analysis Group shows that predictive policing reinforces police bias. It explores the difference between an actual crime and an arrest as a data point. Predictive policing perpetuates the cycle of broken windows policing, even in places that say they are no longer pursuing that strategy.
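A minimal simulation can make the loop concrete. Everything below is invented for illustration - the allocation rule, the numbers, the neighborhoods - but it captures the mechanism the report describes: patrols go where past arrests were highest, and arrests can only be recorded where patrols are.

```python
# Two neighborhoods with an identical true crime rate, but a small
# initial gap in recorded arrests. Each year the "predicted hotspot"
# (whichever neighborhood has more past arrests) gets more patrols.
arrests = {"north": 12, "south": 10}
CRIME_RATE = 0.5  # identical everywhere, by construction

for year in range(1, 6):
    hotspot = max(arrests, key=arrests.get)
    for hood in arrests:
        patrols = 70 if hood == hotspot else 30
        arrests[hood] += int(patrols * CRIME_RATE)  # only patrolled crime is seen
    print(year, arrests)

# Year 5: {'north': 187, 'south': 85}. A 2-arrest gap has become a
# 102-arrest gap, and every new arrest "confirms" the prediction.
```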

Recidivism risk algorithms are being used to get rid of the bail system. This is not simple: they are used in sentencing, parole, and bail. People are assessed on their risk of getting arrested, failing to appear in court, etc. Failing to appear can have legitimate reasons (like picking your kids up from school) - and those risks are higher for some people.

The questionnaires used for these assessments contain questions that are proxies for race and class. "Any prior convictions?" "Do you think the system is rigged against you?" "Did your father go to prison?" "Have you been suspended from school?" - all of these are proxies for asking about race. Many of these questions would be unconstitutional in open court but are used against you in a risk assessment. This creates a pernicious feedback loop of its own: you get a longer sentence if your risk score is higher.
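A toy sketch of why such proxies defeat the claim "we don't use race" (the records and zip codes below are invented): in a segregated city, zip code alone reconstructs the dropped race column.

```python
from collections import Counter, defaultdict

# Invented records for a "race-blind" model that keeps zip code.
people = [
    {"zip": "60653", "race": "black"}, {"zip": "60653", "race": "black"},
    {"zip": "60653", "race": "black"}, {"zip": "60614", "race": "white"},
    {"zip": "60614", "race": "white"}, {"zip": "60614", "race": "white"},
]

# "Learn" the majority race per zip code, using zip alone.
by_zip = defaultdict(Counter)
for p in people:
    by_zip[p["zip"]][p["race"]] += 1
guess = {z: counts.most_common(1)[0][0] for z, counts in by_zip.items()}

accuracy = sum(guess[p["zip"]] == p["race"] for p in people) / len(people)
print(accuracy)  # 1.0 -- zip code recovers race perfectly in this toy city
```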

These risk scores were created to fight human bias. Judges are also racist. But unfortunately the idea of objective, scientific risk scores is not working.


Cathy O'Neil's pernicious feedback loop caused by unjust algorithms.

We need to instill the concept of ethics into data science training. Cathy asked the creator of one of these tools whether he used race. He said no. She asked if he used zip code. He said yes. The data scientists don't feel any responsibility for how their models are used.

She says we should have a Bill of Rights for Data in this country that would explain how these scores work and are being used against us.

Malika Saada Saar

Recently Malika brought leading women's rights defenders to Google to talk about tech's role in gender violence. While talking about tech's impact on women's lives, one advocate said she felt like part of the "resistance" fighting the Empire - but while doing that, the Empire was building another Death Star.

In civil society, government and more we have done the work to demand equality.  We might not have succeeded yet, but we know where to go to demand accountability.  Tech is a new circle of power; a new ecosystem of abuse, violence, subordination, and exclusion.  We have to see this, name it, and hold it accountable.

The other side is that we can use this to advance our rights.  BLM started as an online love letter. The moment of naming and shaming violence against women is only happening because of social media. #MeToo is what has allowed for this powerful echo chamber for naming and shaming what has been done to our bodies as women and girls. Social media has done more for representation of black and brown folk than the Motion Picture Association. We have to recognize that we can use this space for purposes of mobilization and the advancement of our rights.

Malika was trained as a human rights lawyer. Being at Google has been an opportunity to do that work. Her training focused on documenting abuses, so the world knows what is happening. At Google she can document in a way she never imagined before.

You have these smartphones. You can use them to bear witness, to document human rights abuses. Whether you are documenting them in Ferguson or Uganda. You take the digital evidence of those abuses and share them on these global platforms. It is absolutely what has changed the conversation around police brutality.

So much of how abuse happens is in the context of isolation and silence. Almost every genocide, every rape, happens in the context of isolation and silence. How do we use these technologies to surmount the silence and isolation? That is the power and promise of these technologies. The proof of it is in what we have seen around police brutality.

Malika saw another manifestation of this with women in Rwanda. They used smartphones to document abuse at the hands of their husbands and then showed the videos to the judges. We can use these technologies for the purposes of taking back power and protecting ourselves.

Google has been thinking about how to use VR to "scale the prison walls." Abuse in prisons can happen because it stays hidden behind those walls. They created an immersive experience of solitary confinement: for four minutes you are in it, narrated by people who have been there. Now they are doing the same for girls behind bars. Using technology in this way to bear witness is powerful… and there is danger. After four minutes we have the privilege to take the headset off and lay it down.

There are ways that we will use these technologies, reimagine them, to advance our rights. There are also ways that we have to make sure that, not only do we take what others have created for our own purposes, but also we are in the rooms to create the products themselves - for our communities, for our rights, for our safety. But it's also about how we name and recognize problems. We must hold this new space accountable - from the same place of insistence on equality and dignity that we have done in every other circle of power and privilege.

Dr. Atyia Martin

This conference has the power to impact how we change the struggle. This is an opportunity for us… so there is hard work involved. Dr. Martin connects this to the resilience plan for Boston. The strategy has been launched, and it embeds racial equity, social justice, and social cohesion. We have to ask who benefits and who is harmed by a given policy or approach, and how we can address it as an opportunity to work on these social justice domains. We have to see the multiple benefits this strategy can bring.

When we talk about racism we don't always have the same framing and starting point. Our definition used in our resilience strategy is a strong academic definition but I want to share another one:

A historically rooted system of dehumanizing power structures and behavior based on ideologies that reinforce the superiority of white people and the inferiority of people of color while harming both. It is embedded in all of us. We are conditioned to adopt the behavior that fuels racism as a continuous process.

Dr. Martin has been working and re-working this academic definition for seven years. The definition is important for understanding the context of the data we work with. Data is biased: it is human, and bias is part of the human condition. We live in a society that constantly presents messages about who people of color are. How do we turn data into information? Data are the pieces we put together into the story we call information. It starts with collection, continues through how we collate and synthesize, and ends with how we leverage it to make decisions that impact millions of people. Not just in terms of policy - if racism is embedded everywhere, then this applies across the board. We often leave this part off: what is our personal responsibility as individuals?

We have to work on ourselves if we want to call ourselves social justice warriors.  You can't walk someone else through a process you haven't gone through.

Our definition of racism has created two challenges. On the one hand it says that you are a good person if you point out racism. And you're a bad person if you get that pointed finger and you are a racist. So we miss the complexity. It's disrespectful to the struggle and to the people who are living it everyday.

In our culture both people of color and white people all "drink the same Kool-aid" - people of color are taking in the same media as everyone else. We have to manage the underlying things as well.  In most cases we haven't done the work to think about where that comes from. What does that mean for how I navigate the world?

The world assumes we're happy to talk about racism every day, because we live it. We don't have the language for that yet, but data can help us get there.

The other piece is white people who have internalized the other side of it. She wants to contribute some framing around the concept of power which we have been talking about. What does power actually mean?

  • Who gets to make decisions?

  • Who gets to allocate resources?

  • Who gets to establish the norms and standards?

  • Who gets to decide how we paint the picture of what's happening right now?

  • Who is to blame and who are the victors?

  • Who gets to decide the history?

  • Who has the time to engage, to attend meetings? (Time itself is a form of power. People of color are more likely to live in food deserts, more likely to have long commutes.)

Relationships are one of the biggest ways we perpetuate racism and other forms of social injustice. Who we decide to have over for dinner matters. Everything in our society is based on our personal relationships.  How can we leverage data to show this racism, classism, sexism?

When you change the way you look at things, the things you look at change. - Wayne Dyer

Purvi Shah

Purvi comes from the world of law. Law for Black Lives is a sister project of Data for Black Lives. She deferred going to law school because she got engaged in organizing, but the organizers said no - we need you to understand the law.

She had a hard time the first year. She thought she was going to law school to learn about justice, but what she ended up learning were the rules and regulations of oppression. Law has always codified injustice and oppression. Purvi's first example is Johnson v. M'Intosh, the case that created the concept of property rights… and it is about a white person taking land from a native person.

The second story is from August 9, 2014, when Michael Brown was shot and his body lay in the street in Ferguson. She was a lawyer for the Center for Constitutional Rights, glued to Twitter and television, watching the protests. She marched into her office and asked what they were going to do. At that time the organization was suing the President, the Pope, and the NYPD, so they were pretty busy. Her boss said as much - but suggested she go to Ferguson.

She went there a few days after the killing. In Ferguson she noticed that the media was covering the story as if it was all looters, but she saw parents, veterans, and more marching each day as a ritual. Documenting what was happening online, she remembers seeing a family all in orange shirts. They were in town for a family reunion, and just as she was about to tweet about them, they were all teargassed - babies, grandmas, and more. Purvi reminds us that this is the history of how black people are treated when they protest the killing of a family member. "There is an indifference to my life" is the attitude this builds. That's why it is important to assert that "Black Lives Matter."

This was a turning point for her as a lawyer. There was an execution-style killing of Mike Brown, and then hundreds of people were being arrested. One of the challenges was that lawyers did not want to stand with the community: hundreds of lawyers were needed, and there weren't enough to assist. So what they ended up doing was building the Ferguson Legal Defense Committee. They started connecting lawyers across the country, started small, did daily calls, and shared briefs.

Post-Ferguson, there was the grand jury announcement in 2014, and the rallies the day after it sparked national protests. They saw a role for lawyers operating within the system that was creating all these problems. In July 2015, they hosted Law for Black Lives: they wanted a turning point for the legal community. A thousand people were interested, and they hosted two days of conversation about the role of the law in creating a world in which black lives matter.

They discussed policing, the environment, co-ops, and more. Law for Black Lives has mobilized lawyers from Charlottesville to Charleston, supporting victims and organizing. They organize lawyers towards solutions as well. At this point they have 3,500 members, and over 20 people have worked collaboratively to build this over the last 3 years.

Purvi Shah talks about "Movement Lawyering" and asks what are the ways that lawyers partner with the people most impacted by social problems and take an explicit, non-neutral, values-based position in their work.

Purvi wants to share some lessons from her work for the people in the room. She argues that, like data, law has a veneer of neutrality - a false neutrality.

Movement lawyering is about connecting law to social movements (building on a long history). It hasn't always been identified as a clear strategy with supporting theory and practice. What would it mean to be a "movement data" person? For them it meant partnering with the people. This is about creating an atmosphere to support people functioning and moving forward (a quote from Arthur Kinoy).

"It's not about winning cases, it's about shifting power," says Shah.

Solve the problems people want you to solve. The Vision for Black Lives documents this comprehensively; people have tons of ideas for this community to work on. Predictive sentencing, homelessness data, food justice - Purvi encourages us to start there.

She offers a couple of rules: 1) there should be no rogue agents, and 2) there should be no savior complexes. People in this room have a lot of expertise. How do you offer your expertise in partnership with existing groups?

"Partnership means going at the speed of trust."

She calls on this movement to center black leadership – the folks living at the most intersections and the margins. Privilege is complex - this is not about just values and ideas, but also about strategy. People who have the lived experience are the ones most likely to see the solutions.

Emotions >= data. Lawyers, like data people, are very analytical. Both are trained to be neutral.  But emotions are a data point; both for the communities we work in and for ourselves. We need to create space for the trauma we feel, and the secondary trauma of experiencing the communities' trauma.

A Bill of Rights is great but what about a Code of Values? Law for Black Lives has a code of values on their website. They believe in democratizing the law, for example.

We have to: Change ourselves, change our work, shift our institutions and shift our fields.

Questions

What is the main goal for Data for Black Lives over the next few years?

Yeshimabeit: One of the things they are focusing on is building out this network and relationships with the people here and nationally.

Cathy: I joined Occupy in 2011 - six years ago - and we still meet every week. Most people don't know that Occupy still exists, but that's not the point. The people I met then are now working for Senators, integrating themselves into the systems. You build a network, make strong bonds, and then you keep going. It has a certain goal and mission.

Atyia: We need to re-think a lot of things. We need to reframe how we are thinking about the world based on research and information, and remember our critical thinking and bias-checking skills. We need to re-intellectualize - we have become fractured around different interest areas rather than big-picture goals. We get stuck on strategies versus outcomes. We need to put a mirror on ourselves on a regular basis. The research shows that the smarter you think you are, the more dangerous you are in terms of your bias. The Northeast sees a lot of well-meaning folks who haven't done the work on themselves.

Purvi: Center yourself on things that change conditions for people suffering right now.  Work on the problem and the underlying thing.  You have to keep people alive while working on the system.

Malika: How do we make sure there is a constant dialogue between Google, Facebook, and Twitter and what you are doing? It's not just an issue of access. How do we unlock these spaces for criminal justice defenders, for example? The other intersection that has to play out is the divide between rights defenders and tech. What's the intersection between those groups? If they are not talking about predictive policing, then they are not doing their work. There's a difference in knowledge and language, and a lot of it is generational. Ella Baker would have said: it's the younger folk who know how to bring a movement forward.

Cathy spoke about proxies for race and class. Every place has proxies for certain modes of oppression.  How can we work towards identifying and exposing these proxies in lots of fields?

Cathy: I left finance and started with Occupy. I thought the weaponization of math was a finance problem. She was previously deciding who got which advertising when they searched for flights on travel sites. Then a venture capitalist came by, and his vision for the future of ads was one where he got to see flights to Aruba and not ads for the University of Phoenix, because that "was not for people like me." She was ticked off, and started looking at predatory online ads from trashy, private higher-education institutions like U of Phoenix. I felt that I was complicit in a technology that was making people suffer - for the second time, after finance. I'm creating a system that I personally will not suffer from, but others will.

Cathy loves her luxury yarn advertisements, but on the other side it is predatory - payday lenders and for-profit colleges.  It is ruining people's lives and we call it a service.

Malika: Within human rights and civil rights community, we don't know this. It's a form of rights abuse that we don't understand. There's a real need for folks like you and the civil rights communities to be in dialogue to map these things out.

Google was asked by the civil rights community to pull down payday loan ads. They mobilized to bring Google to the table and explain the harm, and Google made the decision to take the ads down. Right now there is a conversation about bail bonds ads; there might be more violations of civil rights.

Atyia: What Mildred just described - the idea that policies and practices have disproportionate outcomes for people of color - is old history. This is the historical context. Every issue has a historical story for why we see the problems today.

The Social Security Act of 1935 did this. They wanted to give every citizen access to money, but the "fine print" excluded agricultural workers and domestic workers ("These are proxies!" chimes in an audience member). These exclusions started off intentionally. This is not new; it just comes in new forms.

Purvi: The new piece is that our data is being collected. How you interact on social media is being used to identify you. Everyday people have to understand what's happening here, and most of it happens in non-transparent ways. How can we shift the ethics and values of how this is done? We can't bring this down from the top levels.

Malika: People of color and women have to be in the room as designers and creators.

Cathy: I agree with all that, but I want to add that this is about power and it is also happening in an extremely secretive environment. We have no access to audit those algorithms, test them, see if they are wrong, or see how they influence people. It's not exactly the same thing as before: it is historically embedded, but the tech has made it possible for people who have power to gain even more power.

Atyia: The vehicle is new, but the methodology is ancient. This is important for data scientists, because you don't need to come up with things from scratch.

Malika: The tech companies talk about being justice-driven. Tech has stood up around things like the travel-ban, the bathroom-ban, DACA and more.

A community organizer from Newport News, VA, who works with youth: working with the black community, you work on lots of issues - mental health, economics, environmental justice, and more. Is there a toolkit to train us and the youth on how to identify what data is important and what we need? The questions we have might not match the data we need to find the trend and disrupt the norm.

Cathy: This is important, and hard to answer. Different fields have different forms of evidence gathering. "Big data" is mostly online data. Kids are being surveilled by the big tech companies; they can go into "incognito mode" to protect themselves. Videos and documentation are important - the ACLU tool can immediately live-stream police interactions.

Purvi: With Law for Black Lives we created chapters. How do you organize yourselves in the local communities and build bridges to people being impacted? How to democratize and build toolkits? There are pieces that are very complex. But how do we take democratization as far as possible? What's the connectivity point in this room to make that toolkit?

Categories: Blog

Increasing Voter Knowledge with Pre-Election Interventions on Facebook

November 14, 2017 - 8:52am

Liveblog of Winter Mason's talk at MIT sponsored by the MIT Gov/Lab on 13 November 2017. All errors are mine.

Moving voter knowledge is hard but possible.

Winter starts by introducing the unusually large research team serving the Civic Engagement products at Facebook. Civic engagement is one of the five major pillars of how Facebook seeks to realize its mission. Zuckerberg has clearly stated that ensuring people have a voice in their government is a priority for the platform and is the mission driving the civic engagement product team. 

Political efficacy is their north star - their definition of "better" - for evaluating product success. They see themselves as addressing a longer-term trend of declining political efficacy in America, as documented by the American National Election Studies (ANES).

The civic engagement team at Facebook also thinks deeply about the values that are driving their work. In their work they reflect on whether they are being selfless, protective, fair, representative, constructive, and conscious. The value of consciousness is about knowing what their impact is whether positive or negative. They want to understand whether they are doing the things they are trying to do and not doing the things they are trying to avoid.

Research Strategy

The research process for the team begins with qualitative research: asking people is the cornerstone to understanding how people think about civics and politics. They have done research in 12 countries and multiple U.S. cities, group interviews with U.S. Senate and Congressional staffers, and interviews with social media managers of world leaders.

They have found that elections are one of the core ways that citizens feel they are heard. Everybody wants to have some way to connect with their representatives. However, people feel that there are few opportunities to be heard—they are skeptical whether individual voices matter. There has also been concern, especially among international interviewees, that there is personal risk for discussing politics online. These insights drive subsequent research and product design ideas.

Quantitative analysis on the platform around representative pages has found that connections between users and politicians do not follow constituencies. They have also noted that discussions are spiky around major events. A topic model of legislator-page discussions, broken down by in-district and out-of-district users, showed that it's hard to know whether to pay attention to certain things (like swearing) when you don't know whether the commenter is a true constituent. This led to the constituent badging feature, which shows who is in the district and helps representatives focus their attention in discussions.

Quantitative analysis has also revealed gaps in political engagement on Facebook. There are differences across ages and between men and women, with women on average contributing far fewer political comments among users ages 20–60. By using surveys, they can also check for political ideology self-reports to understand what biases may exist across the political spectrum among users.

CASE STUDY: 2016 U.S. Election Voting Knowledge

Voting Plan was Facebook's flagship product to enhance people's knowledge of their ballot. It provided the slate of candidates running in a user's district, along with any endorsements that had been made.

Note: All data in this study was anonymized and then deleted after the study. Analysis happened within 30 days of the election; those data are no longer available, and there is no way to go back and check on individuals' preferences now.

They conducted a large-scale survey to measure their impact on both knowledge and key attitudes. The survey was tailored to each respondent's own ballot. Contest knowledge questions asked which offices were being elected this year, and candidate knowledge questions asked who was running for those offices. They also wanted to make sure they were not changing people's political positions, so they measured "affective polarization," a.k.a. how tribal people were feeling toward their own party.

Random control groups were used to causally determine the impact of these products. The treatment group received a newsfeed promo inviting them to use the voting plan, which they could also access through search, bookmark, and friends' posts. And a random 1% of users were in the control group that still had access to the tool through search, bookmark, and posts but did not get the newsfeed promo.

They found a significant lift in knowledge of ballot contests for treatment versus control. This 6% lift is equivalent to the difference between high school and college students. There was no difference in candidate knowledge, which means this is a place for improving the design. There was also no difference in affective polarization (which is good!): they were not having an impact on political attitudes.
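For the mechanics-minded: the talk doesn't spell out the statistical test, but a lift like this is conventionally checked by comparing correct-answer rates between treatment and control with a two-proportion z-test. The counts below are invented to mirror a 6-point lift.

```python
from math import sqrt

def two_proportion_z(correct_t, n_t, correct_c, n_c):
    """Lift and z-statistic for treatment vs. control correct-answer rates."""
    p_t, p_c = correct_t / n_t, correct_c / n_c
    pooled = (correct_t + correct_c) / (n_t + n_c)
    se = sqrt(pooled * (1 - pooled) * (1 / n_t + 1 / n_c))
    return p_t - p_c, (p_t - p_c) / se

# Invented numbers: 58% correct in treatment vs. 52% in control.
lift, z = two_proportion_z(correct_t=5800, n_t=10000, correct_c=5200, n_c=10000)
print(f"lift = {lift:.1%}, z = {z:.1f}")  # lift = 6.0%, z = 8.5 (significant)
```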

CASE STUDY: 2017 French and UK Elections Voting Knowledge

On politician pages, there was a new "Issue" tab where officials could add cards with their positions on different issues. Of course, only the few most hardcore political junkies would go to an issues tab on a politician's page.


See video on Huffingtonpost.fr.

So, the "Election Perspectives" product was introduced in the 2017 French and UK elections so that these issue position cards could be added onto newsfeed items that discussed those particular issues. This allowed people to browse through and compare the positions of different candidates and different parties. Users could then also share different policy cards.

They saw a lot of engagement with these cards. Breaking down clicks and shares, they saw differences between the issues people were most interested in browsing and the ones that sparked a desire to share after browsing the cards.

And they did survey research in UK and France (at both election rounds) that asked about knowledge about candidates as well as perspectives on their own knowledge and the diversity of political information sources. In France, they found that the impact was detectable during Round 2 between treatment and control among those who had the lowest political interest. Of course, this was a small group because those that have low political interest are least likely to interact with the Election Perspectives tool. That said, this is now driving some design work on how they could reach this group in other ways.

In the UK, they ran the same survey as during the two French rounds, although with a much larger sample. There were also two control groups, because the UK has a long-term hold-out group never exposed to Facebook civic engagement products. However, despite the greater statistical power and stronger design, there was no detectable difference in political knowledge from any of their products, even when controlling for political interest.

Conclusion: Increasing voter knowledge is a tough yet worthwhile endeavor. Winter notes that the neutral impacts are still important, because they ensure that they are being responsible in their research and their product design.

Selected/Edited Q&A

Question: How are you acting on the gap in female participation that you illustrated?

Facebook doesn't just want to optimize for engagement with the platform; fairness is a principle they take seriously in practice. In one example, a product changed the privacy settings around sharing political preferences. The stricter privacy model reduced overall participation but increased female participation, which is closer to the true goal.

Question: Is Facebook only committed to thinking about civics in such a high-minded way about voting information and elections, especially considering we are realizing that low-minded, meme-pandering is a huge part of the discourse and is having an impact on elections?

If Facebook found that memes were really effective at getting politicians to listen to their constituents, then they would have to look closer at that. They are completely committed to the goal of people having real voice in government.

There are other teams at Facebook focused on civil discourse. They know that women don't participate online in politics because of the abuse they sustain. That runs against their goal of fairness, so they opt for a higher-minded approach because it is closer to their holistic goals.

Question: The elephant in the room is the problem of fake news and the use of advertising by nefarious actors, the best-known being Russia.

Winter thanks the audience member for disambiguating between Facebook's opinion and his own. First, the Facebook company position is that they need to address bad stuff like fake news and the company has been very public about hiring up on this and will likely continue to do so.

But Winter argues that if Facebook only tried to stamp out the bad stuff and didn't try to promote democracy, then they would be missing a huge opportunity. His high-minded belief is that, in the long term, the focus on things like voting information may help address these problems.

Question: How are these new products improving the quality of political discourse on Facebook?

Studying this is on Winter's to-do list. He knows they have roughly doubled the connections between people and their representatives and doubled the number of interactions between them. But looking at the nature of political discussion before and after the introduction of their new products has not been closely researched yet.

Question: How do you deal with biases in the sample participation on Facebook and on your research? Are you re-weighting them to what the American electorate looks like?

This is something Winter has been looking at, and he says he should probably report it on his slides. It turns out that respondents to their surveys are closer demographically to the American electorate than to the Facebook population.

Question: Could reducing men's commenting produce fairer participation or perhaps we should move toward a model of collective action? And what does this mean about political efficacy?

There is much more to research here. In their analyses, Facebook has found that being connected to your representatives is most closely correlated with political efficacy.

Question: What about products between elections? Would you do something about election promises?

They have been thinking a lot about this, especially accountability ideas, although they don't know exactly how to implement this. The "Town Hall" tool is the start of this to allow everyone to easily follow their representatives after an election. And now there is a way to get a summary of posts from representatives on a regular basis.

Question: You talked about the difference between civics and politics. How are you thinking about civic engagement and grassroots efforts?

They recently sat down with the leaders of groups like Pantsuit Nation and March for Science and asked them about what their needs were and what they wanted to do next. They are really excited to do more thinking about this.

Civic media
Categories: Blog

Data For Equity: The Power of Data to Promote Justice - Liveblog

September 26, 2017 - 3:28pm

This is a live blog account from the Data For Equity: The Power of Data to Promote Justice event.

Barbara Best, Executive Director, Center for Public Leadership, introduces the panel. The moderator is Yeshimabeit Milner, the Executive Director and Founder of Data for Black Lives, which uses data science to create concrete, measurable change in black lives. The panel has been organized by the Black Student Union and the Black Policy Conference. People are tuning in via the livestream.

Yeshi introduces Data for Black Lives. They are building a movement of scientists, activists, and organizers. We can use data and tech to make concrete change in the lives of black people and all people. Data and tech are changing the world fast, and we can look to the past to respond to the present moment. In 1793, Eli Whitney invented the cotton gin, which separated seeds from cotton fiber. Cotton became king in the US; by the 1850s, the US produced the vast majority of the world's cotton. The cotton gin was a game-changing invention, but it had an extraordinary negative impact on the transatlantic slave trade: for millions of enslaved people, the cotton gin helped expand a cruel and violent system. No technology is neutral. For far too long, data and tech have been weaponized against communities. But we also have examples of technology for positive social change. We are seeing advances now in civic analytics and data at all levels of government. Data plays a huge role in the allocation of resources. These tools have a role to play in equity and in elevating voices that have been silenced.

This call to action is more urgent than ever before. How do we use data to expose inequity and hold governments accountable?

The four panelists here have used data in inspiring and different ways to promote justice. They include:

William Isaac, Fellow, Open Society Foundation; Research Advisor, the Human Rights Data Analysis Group
Kelly Jin, Director, Data-Driven Justice at the Arnold Foundation; former Citywide Analytics Manager, City of Boston
Carlos Rojas, Special Projects Consultant, Youth on Board; Founding Member, Boston Education Justice Alliance
Paola Villarreal, Harvard University Fellow, Berkman Klein Center for Internet & Society; former Data Science Fellow, ACLU

Yeshi will use some of the questions submitted online prior to the event to guide the panel discussion. Each panelist will briefly introduce themselves.

William Isaac leads with an introduction of his work. He focuses on algorithms and their role in public decision making. There's an assumption amongst policy-makers that data is good and objective. Through his research he wants to show that data is not objective and that algorithms in and of themselves do not solve those problems. His research tries to illuminate that and build towards something better.

Kelly Jin has worked in many organizations. She feels everyone should tackle one issue: every year we have millions of people cycling through local jails, and it costs a lot of money. When you look closer, many have mental health issues and substance abuse issues. They launched the Data-Driven Justice program at the White House last year to try to address this at the local level. People keep ending up in jail. How do you bring together ER doctors, sheriffs, and local communities to hold government accountable? She previously built the data team at Boston City Hall.

Paola Villarreal says she comes from a corrupt country, and she thought it would be different here. But it happens that it's not - it's just a different kind of corruption, called oppression. There are many pipelines that show that; every state and city has a different type of oppressive pipeline that is biased against people of color. She is here because she started to work on data, analyze it, and show disparities. It is super important to learn about these shocking biases and oppressive systems.

Carlos Rojas says he comes from a country perceived as corrupt - Colombia - and moved here when he was five. They moved to Dudley Square. After six months here, he became undocumented, because they had flown in on a tourist visa. He noticed how black and brown kids interacted with police: he was told to never, ever approach or talk to a police officer, and if you do, be very polite or you might get arrested. As he grew up, he became aware of the ways these problems start within the school system. What does it look like to reform school-level policies? The school-to-prison pipeline in this country is a real thing: it sends young black and brown people from school directly into prison. They believe that young people organizing, in partnership with adults, can make beautiful things happen. Data that corroborates that lived experience is also very important. He has examples of amazing advocacy efforts, but they are also struggling to get the state to hold agencies accountable.

Yeshi says this represents real breadth. One of the first questions from the audience - what have been the pro and con impacts of data-driven decision making in government over the past decade?

Kelly says she wouldn't go that far back; the core of the work around open data has only blossomed in the last five or so years. Cities have done a lot of work to open up their data. It's hard to unlock data, but fundamentally this is public data - that's the first step, how do we unlock it. From there, the engagement of a much broader community is what matters. If you have 300 data sets, it means nothing if no one is using them for policy change or research or recommendations. The policy changes that have happened as a result of people looking at the data are what matter. One question is what tech infrastructure we can build on top of open data to provide value back to citizens - for example, individual health data donated to researchers. On technology: why aren't we using more open source? It's amazing to see the turnout for this event - how can these people get involved?

Paola says that people are here because of openness - this has happened in the last 8-10 years. Open source, open government, open data. It's not just a set of tools but a mission. Openness in general is one of the best things that have happened. But on the other hand, machine learning in the criminal justice system is one of the worst things that has happened. We need to solve that. In the meantime, I would call for an embargo on that.

William agrees that openness is the biggest thing he has seen. We have seen the big coastal cities embrace data; he has seen something different in the Midwest. In Michigan, they faced the Flint water crisis with no digital records of the water pipes. It turns out they actually did have records, but they were on file cards, and one person was responsible for them. Loveland Technologies is a company that then took those cards and digitized them. Data does have a lot of good uses when you are allowed to share it, usually with partners outside government. As for the cons of this movement: there have been some weird things coming out of machine learning. Part of it is the algorithm, but part of it is the institutional decision-making on top of it. Some people in government don't want to use data at all; others say data will solve all our problems and don't like it when you say bad things about data. There has to be some middle ground. It's not just machine learning or algorithms - some places have predictive policing but don't even use the software. Even when they have the tools, if the institutions don't change as well, then nothing changes. We particularly need to focus on accountability mechanisms.

Kelly adds that one thing they have talked about is TQ - what is the technology/data quotient within government, and how do you improve it? Data and tech vendors come into government: what are they doing, and what decisions are they making? How do you ensure data ethics and algorithmic transparency? The algorithms should also possibly be public.

Carlos says that in the school systems it has been striking what data has been capable of, both positively and negatively. No Child Left Behind put schools in a frenzy of data collection around test scores, and what amounted to a toxic culture of high-stakes testing. The policy ended up doing exactly what it was not designed to do - closing schools and leaving many students behind. The biggest predictor of how well you do on a test is how much money your parents make. But on the other hand, youth organizers have been demanding that schools collect more data. In the case of dismantling the school-to-prison pipeline, the state wasn't collecting data on school discipline: we didn't see data on who was being suspended and expelled. For years, we had to rely on personal narrative and anecdote to prove that young people of color were being suspended at egregiously disproportionate levels. We demanded that the state collect disaggregated data, school by school, so that we could see the school-to-prison pipeline better.

Yeshi asks the next question. Openness has made it possible for us to be here today. But one thing we are grappling with: once people get the data, what are they going to do with it? Not everyone can learn R. She got involved with data collection as a youth organizer. How do we scale data literacy and change how we teach about data, so that more women and people of color can make the open data movement more lively and accessible to more people?

Paola says we need more ways to tell stories and show how data impacted communities. We need more community engagement and co-creation. Show communities the data and ask them what they think. Data scientists are not saviors. They collaborate with communities to define the problem.

Carlos states that he has seen the impact when data scientists align with community groups. They have also seen data waste, when researchers create data that just sits on a shelf and isn't used by groups that could benefit from it. They are lucky to work in a city that is rich in data science. Lots of people in Boston are interested in taking direction from people on the ground; they come to them and ask, "What research do you need done to make your work effective?" Then, when young people are in a legislator's office, they have data to back their arguments. We have been very invested in those partnerships. On Oct 19th, they will gather with youth and parent groups to review the Chapter 222 data, talk about what is happening on the ground, and determine how to move forward.

Paola says the most impactful work she has done was in teams of lawyers, community members, advocates and activists.

Kelly wants to talk about the role of media in this. How do we continue to show that there are women and minorities working in this field? For example, the film Hidden Figures. And on "how do we engage": don't have data for data's sake; determine what the questions are, and then figure out what data sets make them easier to answer. What data are we not collecting? There are so many cases where no one is collecting that information. There is a huge piece of catching up to do.

William thinks a lot about how you teach these concepts. For undergrads, they created the InnovateGov program, which teaches data science and then places students in government agencies. They found that you need to work on something you are passionate about - a lot of the work is boring, but the coolest part is that when you possess the knowledge, you can present it to someone to make the case that they should change things. For example, a student team collected surveys about how to reach people involved in foreclosures. For high school students: smaller toy data sets where you can introduce concepts and give them passion for or interest in a topic.

Yeshi asks what are some good examples of cases where agencies and orgs have used data for justice? That will help us after the Q&A.

Carlos talks about Youth on Board. They created surveys with questions that would help paint a picture of the experience of students of color, as well as listening projects where they would go have quick conversations. To engage students, they needed two things: big signs and bags of candy. Their questions were like, "Do you have police in your school? How do they affect the environment? Have you been suspended? Do you think it was fair? Do you think your race had anything to do with it?" They didn't find anything surprising. They passed the Chapter 222 legislation, which said that instead of districts just applying zero tolerance, they had to try other methods before suspension and expulsion. But they had no way of holding schools accountable to this, so they developed the Boston Student Rights app, which summarizes the major changes and allows students to report an equity grievance. It's an incredible tool that has collected 26,000 cases. Students are using it to educate themselves and their teachers, and sometimes to advocate for themselves and prevent their own suspension. All the data goes to the department of equity, which the community group has a good relationship with.

However, now the schools are doing things like dismissing students early, doing emergency removals, and issuing informal no-trespassing notices. These things fly under the radar, so the schools are not held accountable by the state.

Paola discusses her work for the ACLU and its relationship with the City of Boston. Although the entities did not agree, there was an open and transparent process. Data & Society is a great research organization.

Kelly talks about Measures for Justice - they are doing the hard work of data collection and making it open and available. How do philanthropists step up to support that work? The Coding It Forward class at HKS, taught by Nick Sinai, creates partnerships between undergrads and the City of Boston. Finally, Jen Pahlka runs Code for America, which is like Teach for America for technologists, who are placed in local jurisdictions.

William says he has so many examples. Sam Sinyangwe is amazing - part of Black Lives Matter, he started Mapping Police Violence. There are smaller ones in Detroit, like Data Driven Detroit and Future City Detroit - projects that are building the public infrastructure for data. A lot of nonprofits are doing the heavy lifting.

Categories: Blog

Hiring a Media Cloud Contract UX Specialist

September 22, 2017 - 8:21am

Online media is in a state of flux. Twitter, Facebook, blogs, so-called fake news - these are all recent developments that have radically altered the landscape of news and information online. We call this the "networked public sphere", and the Media Cloud project was created to track and understand it. Come help us design easier-to-use data-centric web tools for academic internet researchers and human rights activists that let them investigate coverage and conversations online about topics they care about.

The Media Cloud project is seeking a contract user experience specialist to help assess our existing web-based tools, and design new ones, to support data-driven research about the role of online media in civic discourse. The specialist will begin by designing and leading a process to evaluate the usability of our current suite of web based tools (available at tools.mediacloud.org). They will collaborate with the technology team to understand the current and future features available, focused on how they could be used by media-makers like documentary film producers. They will assist in development of training guides for novice users in the non-profit space. The position will be a 6-month part-time contract position based at the Center for Civic Media (at the MIT Media Lab), but the UX specialist will work closely with members in other institutions as well. This is a grant-funded contract position that we hope to extend, or perhaps turn into a staff position.

Media Cloud is a joint project between the Center for Civic Media at MIT and the Berkman Klein Center for Internet & Society at Harvard University. We are an open source project producing research about the networked public sphere, and helping others do their own research about online media. We make available to the public our existing archive of more than 550 million stories, adding more than 40,000 new stories daily. The project is funded by human rights foundations. We produce both the open platform and research that helps our funders make decisions about how best to influence online civic conversations about democracy, activism, and health.

We are a diverse project of researchers and technologists who love to wrestle with hard questions about online media by using a combination of social, computer, and data sciences. The ideal candidate will work well with all members of the team, from senior faculty to junior developers, and will thrive in an academic atmosphere that privileges constant questioning and validation at all levels of the platform and of our research products. Experience working on big data systems or data-driven interfaces is helpful, as is experience working on projects investigating online media.

Minimum Qualifications

  • at least two years' experience working as a UX designer on web-based products;
  • familiarity with user-centered design and research methodologies;
  • demonstrated ability to translate between technical and non-technical audiences;
  • demonstrated ability to iterate on design ideas quickly;
  • demonstrated ability to use data to validate decisions;
  • experience writing design documents and user guides;
  • interest in working on issues related to democracy, gender, race, health, and globalization.

Duties

  • design and lead a usability study with non-profit partners;
  • document key findings in a report;
  • create and assess mockups for existing and new features;
  • contribute to the ongoing identification of key features to add to the platform;
  • assist in the development of user guides for tools;
  • collaborate with undergraduate interns working on the same project.

Helpful Skills

  • a strong portfolio showing user-centered design approaches applied to data-intensive products;
  • passion for solving difficult engineering and data problems;
  • experience designing data-driven interfaces;
  • experience working with design tools like Sketch, Photoshop and Illustrator;
  • knowledge and interest in social sciences;

Much of our substantive work focuses on issues of gender, race, and globalization. We strongly encourage women, people of color, people of all ages, and people of any sexual identity to apply.

The job is based in Cambridge, MA, but much of our team is distributed around the world. We are open to alternative working arrangements that include part-time residence in Cambridge. We imagine this position as a 2- or 3-day-a-week engagement over 5 to 6 months, but are open to other approaches.

Apply by sending a cover letter, resume, and portfolio to jobs@mediacloud.org.


An Open Letter From Civic Hackers to Puerto Rico & USVI in the Wake of Hurricane Maria

September 19, 2017 - 2:49pm

I am working with a group of civic developers committed to supporting hurricane victims in relief and recovery; we helped with software development and data analysis after Hurricanes Harvey and Irma, primarily in Texas and Florida. In the wake of Hurricane Maria, we want to help Puerto Rico and the U.S. Virgin Islands in the same way. Devastation has already occurred in Puerto Rico and the USVI, and we're here to help in the response to and recovery from Maria.

But, we won’t jump in without your permission. These places have a long history of imperialism, and we refuse to add tech colonialism on top of that.

Here’s how we might be able to help:

Rescue

Sometimes emergency services are overloaded fielding calls and deploying assistance. Remote grassroots groups help take in additional requests through social media and apps like Zello and then help to dispatch local people who are offering to perform rescue services (like the Cajun Navy in Houston after Hurricane Harvey).

Shelter updates

As people seek shelter while communication infrastructure remains spotty, having a way to text or call to find the nearest shelter accepting people becomes useful. We can remotely keep track of which shelters are open and accepting people by calling them and scraping websites, along with extra information such as whether they accept pets and whether they check identification.

Needs matching

As people settle into shelters or return to their homes, they start needing things like first aid supplies and building materials. Shelter managers or community leaders seek ways to pair those offering material support with those in need of the support. We help with the technology and data related to taking and fulfilling these requests, although we don’t fulfill the requests directly ourselves.

If you are interested in this, please let us know by emailing me (bl00 at mit) or finding us on Twitter at @irmaresponse or @sketchcityhou.

Here are other groups lending aid already (maintained by someone else).
If you’re looking to jump in on an existing task, the Humanitarian OpenStreetMap Team already has a tasker active for helping to map the area for responders and coordination.


How Would You Design Crypto Backdoor Regulation? Ed Felten at CITP

September 19, 2017 - 10:43am

Law enforcement officials sometimes argue that they need backdoors to encryption in order to carry out their mission, while cryptographers like Bruce Schneier describe the public cybersecurity risk from backdoors and say that the "technology just doesn't work that way."

I'm here at the Princeton University Center for Information Technology Policy, liveblogging the first public lunch talk of the semester, where Ed Felten shares work in progress on finding a way through this argument. Ed is the director of CITP and a professor of computer science and public affairs at Princeton University. He served at the White House as the Deputy U.S. Chief Technology Officer from June 2015 to January 2017, and was also the first chief technologist for the Federal Trade Commission, from January 2011 until September 2012.

Ed starts out by noting that his talk is work in progress and that he's thinking about the U.S. policy context. His goal is to explore the details of the encryption policy issue, understand the tradeoffs, and imagine effective policies, something he says is rare in debates over encryption backdoors.

Five Equities For Thinking about Encryption Backdoor Policies

People who debate encryption backdoors are often thinking about five "equities," says Ed. Public safety concerns the ability of law enforcement and the intelligence community to protect the public from harm. Cybersecurity is the ability of law-abiding people to protect their systems. Personal privacy is the ability of users to control the data about them. Civil liberties and free expression concern the ability of people to exercise their rights and speak freely. Economic competitiveness is the ability of US companies to compete in international and domestic markets. Across all of these, we care about outcomes over time, not just immediately.

Ed notes that policy debates often come to loggerheads because people weight these equities differently. For example, people often contrast public safety with cybersecurity without considering other factors. Debates also stall when people start with these equities without asking in detail what regulation can and cannot do.

Understanding Policy Pipelines

When we think about policies, Ed encourages us to think about a three-part pipeline. Policymakers start by thinking about regulation, hope that the regulation creates changes in design and user behavior, and then ask what impact those changes have on the equities that matter. In this conversation, Ed is working from an assumption of basic trust in the US rule of law, as well as realism about technology, economics, and policy.

The Nobody But Us Principle (NOBUS)

In the past, signals intelligence agencies have tended to have two goals: undermining the security of adversaries' technologies while strengthening the security of our own. Lately there's been a problem: US adversaries tend to use the same technologies we do, so strengthening or weakening adversaries' security also affects our own security.

The usual doctrine in these situations is to assume that it's better to strengthen encryption, in hopes that one's own country benefits from that strength. But there's an exception: perhaps one could look for methods of access that the US can carry out but adversaries cannot; these methods are NOBUS (nobody but us). Zero-day exploits, for instance, are something that intelligence agencies might think of as NOBUS. Of course, as Ed points out, the NOBUS principle raises important questions about who the "us" are in any policy idea.

NOBUS Test in Crypto Policy

Based on the NOBUS principle, Ed proposes a test: any mandated means of access to encrypted data must be NOBUS with high probability. Several rules fail this test, such as banning all encryption or requiring that encryption be disabled by default.

Why Do People Need Crypto?

Ed offers some basics on cryptography, pointing out that cryptography is used to protect three things. It protects confidentiality, so an unauthorized party can't learn message contents. It protects integrity, so unauthorized parties can't forge or modify messages without detection. And it protects identity, shielding people from impersonation. Ed describes two main scenarios for uses of crypto: storage and communications.

In storage situations, device keys and passcodes are combined to create a storage key that can be used to encrypt and decrypt data on a computer or a phone. Once the key is no longer being used, it is removed, and the data on the device is safe from anyone who cannot re-derive the key.
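
To make the storage scenario concrete, here is a minimal Python sketch of that derive-then-encrypt flow, using the open-source cryptography library. The primitives (HKDF, AES-GCM) and the device_key/passcode names are my illustrative assumptions, not details from Ed's talk; real devices also run passcodes through deliberately slow key-derivation functions, which this sketch skips.

# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

device_key = os.urandom(32)   # stand-in for a unique per-device hardware key
passcode = b"1234"            # stand-in for the user's passcode

# Combine the device key and passcode into a single storage key.
storage_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=device_key, info=b"storage-key",
).derive(passcode)

# Encrypt and decrypt data under the storage key.
aead = AESGCM(storage_key)
nonce = os.urandom(12)
ciphertext = aead.encrypt(nonce, b"contents of the phone", None)
assert aead.decrypt(nonce, ciphertext, None) == b"contents of the phone"

# Erasing storage_key (and the inputs that derive it) leaves the
# ciphertext unreadable.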

Encrypted communications are more complicated. Here is a typical situation: in a handshake phase, two people use long-term identity keys to confirm who they are and agree on a session key. During the data transfer phase, the session key is used to encrypt and decrypt messages between them. They might change the session key from time to time, and when they are done with a key, they delete it. Once they have deleted a session key, an adversary will be unable to decrypt anything that was said during that session. Systems like TLS for secure browsing and the Signal protocol fit within this framework.
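
A rough Python sketch of that handshake-then-transfer pattern, again using the cryptography library: an ephemeral X25519 exchange stands in for the handshake, and the derived session key encrypts the data transfer. This is a simplification on my part; real protocols such as TLS and Signal also authenticate the exchange with long-term identity keys, which is omitted here.

# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Handshake phase: each side generates an ephemeral key pair and
# computes the same shared secret from the other's public key.
alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()
shared = alice.exchange(bob.public_key())
assert shared == bob.exchange(alice.public_key())

# Derive a symmetric session key from the shared secret.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"session",
).derive(shared)

# Data transfer phase: messages are encrypted under the session key.
aead = AESGCM(session_key)
nonce = os.urandom(12)
wire = aead.encrypt(nonce, b"hello", None)

# Deleting session_key afterward makes past traffic undecryptable.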

Trends in the Uses of Crypto

When law enforcement officials make statements about how they're losing access to communications, they're making a claim about trends. We are seeing a move toward more encryption in storage and on devices, says Ed. To understand the actual impact on security, Ed argues, we should ask instead: who can recover the data? If only the user can recover the data, then law enforcement and the intelligence community (LE/IC) may lose access. But if the service provider can recover it, then LE/IC can get access from the provider. To think through this, Ed asks us to imagine email services: messages might be encrypted, but law enforcement can often still compel companies to hand over the data.

Ed predicts that in situations where most users want data recovery as a feature, or where the nature of the service requires the provider to have access, the provider will have access, and law enforcement will be able to access it. This includes most email and file storage. Otherwise, users will have exclusive control, in areas such as private messages and ephemeral data.

Designing a Regulatory Requirement for Crypto Backdoors

Any regulatory requirement needs to work through a series of trade-offs that arise before the technical questions even come up, says Ed. He outlines a series of decisions that need to be made when designing a regulation on crypto backdoors.

The first question to ask is: should we regulate storage only, or storage and communications? Communications are harder because keys change frequently and LE/IC can't assume access to the device. Storage regulations typically assume that LE/IC has access to the device, so this is an important question. Storage-only approaches are simpler, so regulation writers should consider whether or not to stretch for communications. In today's conversation, Ed focuses on storage for simplicity.

The next decision is to ask which services are covered by the regulation. There are many kinds of products that use crypto, and regulators need to decide how much to cover. The broader the range, the more complicated the regulation is, and the greater the burden becomes across the equities. But simple regulations can put many LE/IC requests beyond reach. Ed urges us to stop thinking about the iPhone, a vertically-integrated system run by a single company. Think instead about an Android phone, which involves many different companies from many countries in one device: chip makers, device manufacturers, OS distributors, open source contributors, crypto library distributors, mobile carriers, retailers, and app developers. All of them put technology on the phone, and you have to decide which ones in this supply chain are covered by the regulation.

When deciding who to cover in the regulation, you also need to ask what they're able to do. Chip makers can't control the operating systems. Manufacturers are often foreign. App developers are small teams or individual contractors.

The next decision is to ask how robust encryption backdoors must be. If users attempt to prevent access, how strongly must the system resist? Ed outlines several options. The first option is not to resist user attempts at all. Another option is to make disabling the backdoor at least as hard as jailbreaking the device. A stronger option would be to require that disabling the backdoor take non-trivial modifications to hardware. If you require this, you make it much less likely that adversaries and would-be targets can evade a public safety investigation, but compliance probably requires hardware changes. Legacy systems would be unable to comply, and depending on who you require to comply, they might not be able to: you couldn't ask Google to put hardware backdoors on Android phones, whose hardware it doesn't control.

Next, regulators need to decide how to treat legacy products. Do you allow legacy systems? Do you ban them? If so, how can people tell if their system has a backdoor to comply with the ban, and do you want them to know?

Another decision is to work out what to do with travelers. If someone travels to the U.S. and brings a device that is compliant with their own country's rules but not US policies, what do you do? Do you allow it, so long as the visit is time-limited? Do you prohibit it, detecting and taking away the device? Do you try to reconfigure the device at the border? Manually? Automatically? How would these requirements violate trade agreements?

All of these decisions, says Ed, are decisions you need to make even before discussing the technical details. Next, he talks through the most common technical proposal, key escrow, to show how regulators could reason through these policies.

Technical Example: Key Escrow

Under the key escrow approach, storage systems are required to keep a copy of the storage key, encrypting it so that a "recovery key" is needed to recover it. The storage system creates and stores an escrow package. Recovery is a three-stage process: extract the escrow package from the device, decrypt the escrow package to get the storage key, and use the storage key to decrypt the data.
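
A minimal Python sketch of that escrow package, assuming (my assumption, not a detail from the talk) that the recovery key is an RSA key pair held by some escrow agent:

# pip install cryptography
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The escrow agent's long-term recovery key pair.
recovery_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# The device encrypts its storage key under the recovery public key
# and stores the result as the escrow package.
storage_key = os.urandom(32)
escrow_package = recovery_key.public_key().encrypt(storage_key, OAEP)

# Recovery: extract the package from the device, decrypt it with the
# recovery key to get the storage key, then decrypt the data with it.
assert recovery_key.decrypt(escrow_package, OAEP) == storage_key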

If you use key escrow, you have to decide whether to require physical access. One option is to require that physical access be necessary; another is to allow remote access to the escrow package; or you could leave it to the market. Requiring physical access limits the worst case from a leak of keys; even if the recovery key is compromised, users could protect themselves through physical control. In the US, law enforcement officials have said that they envision using key escrow systems only in cases of physical access and court orders. Relying on a requirement for physical access depends on a technical ability to enforce it, something that is theoretical so far and may be difficult to force hardware supply chains to comply with.

Next Ed shows us a matrix of four policy approaches:

  • The device must include a physical access port for law enforcement
  • The company must hold and provide the escrow package and give it to law enforcement if requested
  • The company must provide the storage key directly when requested from law enforcement
  • The company must provide the data

Lower on the list, the company does more of the work and has more design latitude about how to respond. But the bottom two policy approaches have a NOBUS problem, since they expose users to third-party access. Requiring companies to provide the data and to store the key probably fails the NOBUS test as well. In the top two options, law enforcement needs knowledge about many devices, probably managed through an industry standard.

Maybe there are more options. Ed talks about a number of other possibilities, including working on who holds the recovery keys. Giving all keys to the US government could harm competitiveness and be blocked by other governments. Giving keys to other countries fails the NOBUS test because it gives other governments a competitive advantage.

Another option is to split the keys, giving the keys to multiple parties and requiring them all to participate. Imagine for example that one key is held by the company and one by the FBI. This approach has some advantages. The approach is NOBUS if any one of the key holders is NOBUS, since any key holder can withhold participation. This approach is also more resilient against compromise of recovery keys. Disadvantages are that any key holder can block recovery, availability is harder to ensure, and every key holder learns which devices were accessed.
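
The all-keys-required split can be as simple as XOR secret sharing: every share is random on its own, and only the XOR of all of them reproduces the key. A toy Python sketch (illustrative only; the party names are hypothetical):

import os

def split_all(secret: bytes, n: int) -> list:
    """Split secret into n shares; all n are needed to reconstruct it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    shares.append(last)
    return shares

def combine_all(shares: list) -> bytes:
    """XOR all shares together to recover the secret."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

recovery_key = os.urandom(32)
company_share, fbi_share = split_all(recovery_key, 2)
# Neither share alone reveals anything; both together recover the key.
assert combine_all([company_share, fbi_share]) == recovery_key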

Another split-key model requires that only some subset of the keys, any K of N, be used to access the data (K-of-N keys). The advantages are that the approach is NOBUS if at least N-K+1 of the key holders are NOBUS, and it's more resilient against compromise than a single key. Among the disadvantages, any N-K+1 key holders can block recovery, K key holders learn which devices were accessed, and the system is much less resilient against compromise than a simple split key.
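
The K-of-N variant is classically built with Shamir secret sharing: the key is the constant term of a random degree-(K-1) polynomial, and any K points determine the polynomial by interpolation. A toy Python sketch over a prime field, assuming the secret fits below the prime (not production code):

import secrets

PRIME = 2**127 - 1  # a Mersenne prime; secrets must be smaller than this

def _eval_poly(coeffs, x):
    """Evaluate the polynomial at x (Horner's rule), mod PRIME."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret: int, k: int, n: int):
    """Split an integer secret into n shares; any k reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)     # e.g., a recovery key encoded as an integer
shares = split(key, k=3, n=5)      # five holders, any three can recover
assert combine(shares[:3]) == key
assert combine(shares[2:]) == key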

Where Does This Leave Us?

Ed wraps up by arguing that we can have a policy discussion that moves beyond the impasse people in security policy have reached. He suggests that we think about the entire regulation pipeline, from regulation to response to impact. Regulators also need to think about the full range of products, how they are designed, how they are used, and the impact on the equities. The NOBUS test helps regulators narrow down choices, yet each decision has tradeoffs with pros and cons. Overall, Ed hopes that his talk shows how regulation debates should engage with details and unpack how to think about the policy by working through specific proposals.

Finally, Ed encourages us to take the final step that his talk leaves out: thinking through the impact of policy ideas on the equities in play and how to weigh them.

