The most recent chapter in the debate over net neutrality has been, like previous chapters, cacophonous. One notable difference this time around, though, was the relative quiet of many large tech companies. In previous years, these firms had been outspoken about the issue. What changed? Netflix’s net-neutrality journey is an illuminating example. The reality is that Netflix and other large tech companies, such as Facebook and Google, have grown so dominant that net neutrality has become a nonissue for them.
[Commentary] The open internet has decentralized the media and allowed black activists in a modern movement against police and state violence to bypass discriminatory media gatekeepers and reveal the extent of the state’s abuse. When ordinary people capture shocking video footage of police officers fatally shooting black citizens, for example, it is more difficult for Americans to ignore the realities of racial injustice.
Technology has always been a double-edged sword for black people in America and beyond. On the one hand, it can pose a grave threat; on the other, it can offer great opportunity. Our survival, and our democracy, require us to reject high-tech policing and usher in the strongest net neutrality rules available. The open internet can represent the future of digital democracy, or we can use technology to continue encoding inequality into our modern world.
[Malkia Cyril is the founder and executive director of the Center for Media Justice.]
[Commentary] From the Boston Tea Party to the printing of Common Sense, the ability to dissent—and to do it anonymously—was central to the founding of the United States. Anonymity was no luxury: It was a crime to advocate separation from the British Crown. It was a crime to dump British tea into Boston harbor. This trend persists. Our history is replete with moments when it was a “crime” to do the right thing, and legal to inflict injustice.
The latest crime-fighting tools, however, may eliminate people’s ability to be anonymous. Historically, surveillance technology has tracked our technology: our cars, our computers, our phones. Face recognition technology tracks our bodies. And unlike fingerprinting or DNA analysis, face recognition is designed to identify us from far away and in secret.
[Alvaro Bedoya is the founding executive director of the Center on Privacy & Technology at Georgetown Law.]
[Commentary] There are two big problems with America’s news and information landscape: concentration of media, and new ways for the powerful to game it.
First, we increasingly turn to only a few aggregators like Facebook and Twitter to find out what’s going on in the world, which makes their decisions about what to show us impossibly fraught. Those aggregators draw—opaquely but consistently—from largely undifferentiated sources to figure out what to show us. They are, they often remind regulators, only aggregators rather than content originators or editors.
Second, the opacity by which these platforms offer us news and set our information agendas means that we don’t have cues about whether what we see is representative of sentiment at large, or for that matter of anything, including expert consensus. But expert outsiders can still game the system to ensure disproportionate attention to the propaganda they want to inject into public discourse. Those users might employ bots, capable of numbers that swamp actual people, and of persistence that ensures their voices are heard above all others while still appearing to be humbly part of the real crowd. What to do about it? We must realize that the market for vital information is not merely a market.
[Jonathan Zittrain is a professor at Harvard Law School and the Kennedy School of Government.]
We asked more than two dozen people who think deeply about the intersection of technology and civics to reflect on two straightforward questions: Is technology hurting democracy? And can technology help save democracy? We’ll publish a new essay every day for the next several weeks, beginning with Shannon Vallor’s “Lessons From Isaac Asimov’s Multivac.”
An interview with Francine Berman, a computer-science professor at Rensselaer Polytechnic Institute and a longtime expert on computer infrastructure.
In October, when malware called Mirai took over poorly secured webcams and DVRs, and used them to disrupt internet access across the United States, I wondered who was responsible. Not who actually coded the malware, or who unleashed it on an essential piece of the internet’s infrastructure—instead, I wanted to know if anybody could be held legally responsible. Could the insecure devices’ manufacturers be held liable for the damage their products caused? Right now, in this early stage of connected devices’ slow invasion into our daily lives, there’s no clear answer to that question. That’s because there’s no real legal framework that would hold manufacturers responsible for critical failures that harm others. As is often the case, the technology has developed far faster than policies and regulations.
[Commentary] The larger problem with WikiTribune is this: Someone who is paid for doing journalistic work cannot be considered “equals” with someone who is unpaid. And promoting the idea that core journalistic work should be done for free, by volunteers, is harmful to professional journalism.
The difference between a professional and a hobbyist isn't always measurable in skill level, but it is quantifiable in time and other resources necessary to complete a job. This is especially true in journalism, where figuring out the answer to a question often requires stitching together several pieces of information from different sources—not just information sources but people who are willing to be questioned to clarify complicated ideas.
Pairs of Android apps installed on the same smartphone have ways of colluding to extract information about the phone’s user, and this collusion can be difficult to detect. Security researchers don’t have much trouble figuring out whether a single app is gathering sensitive data and secretly sending it off to a server somewhere. But when two apps team up, neither may show definitive signs of thievery alone. And because of the enormous number of possible app combinations, testing for app collusion is a herculean task. Researchers behind a recently released study developed a new way to tackle this problem—and found more than 20,000 app pairings that leak data.
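The scale of the problem described above is easy to make concrete: with n installed apps, the number of distinct pairs to check grows quadratically, as n(n−1)/2. A minimal sketch (illustrative only; `app_pairs` is a hypothetical helper, not part of the study's tooling):

```python
from math import comb

def app_pairs(n: int) -> int:
    """Number of distinct app pairs to test among n apps: C(n, 2) = n*(n-1)/2."""
    return comb(n, 2)

# A phone with a few dozen apps is tractable; a whole app store is not.
for n in (50, 1_000, 100_000):
    print(f"{n:>7} apps -> {app_pairs(n):>13,} pairs to test")
```

This quadratic growth is why exhaustive pairwise testing across an app marketplace is infeasible, and why the researchers needed a smarter way to narrow down candidate pairs.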