Fifty-six years after technology critics worried that television would revolutionize—and degrade—American politics, Donald Trump is the embodiment of their worst fears: He is a candidate who picks stunts over substance, who deliberately obfuscates rather than clarifies his thinking before the public, and who routinely tells blatant lies as part of a political performance that’s tailor-made for the modern spectacle of broadcast politics.
The ugliness of presidential campaigns predates Trump by generations, but it was never quite like this before.
In the past few years, the devastating effects of hackers breaking into an organization's network, stealing confidential data, and publishing everything have been made clear. It happened to the Democratic National Committee, to Sony, to the National Security Agency, to the cyber-arms manufacturer Hacking Team, to the online adultery site Ashley Madison, and to the Panamanian tax-evasion law firm Mossack Fonseca. This style of attack is known as organizational doxing. The hackers, in some cases individuals and in others nation-states, are out to make political points by revealing proprietary, secret, and sometimes incriminating information. And the documents they leak do that, airing the organizations’ embarrassments for everyone to see.
In all of these instances, the documents were real: the e-mail conversations, still-secret product details, strategy documents, salary information, and everything else. But what if hackers were to alter documents before releasing them? This is the next step in organizational doxing—and the effects can be much worse. It's one thing to have all of your dirty laundry aired in public for everyone to see. It's another thing entirely for someone to throw in a few choice items that aren't real.
In a city or town, a quick look around will tell you the racial makeup of the community you're in. But on a webpage, there’s no easy way of telling who else is visiting. Some sites make it clear that they’re geared toward members of a certain race: The Root, for example, describes itself as a destination for “black news, opinions, politics, and culture.” Elsewhere, visitors have to guess a site’s target audience based on its content—or they may conclude that race doesn’t matter on most of the Internet. But that latter idea is one that a group of academic researchers who study race and the Internet have been pushing back against for decades. Trained in a range of fields—sociology, media studies, Internet culture—they contend that the Internet is far from raceless; in fact, they say, most of the Internet is targeted at one demographic in particular.
Because of its history as a product of technology companies that are staffed overwhelmingly by white employees, the Internet is largely made by, and for, white people, the researchers argue. “Those with the most access and capital are more likely to control the culture of the Internet and reproduce it in their interests,” said Safiya Noble, a professor of information studies at UCLA who has published research examining the role of race in social media and search engines. “The web is a white space and its sensibility otherizes non-whites.” Internet scholars have been kicking around this idea since the early days of the World Wide Web, but it’s a particularly difficult one to test experimentally. Unlike studies that catalog how discrimination leads to generations of segregation in physical spaces—redlining in major American cities, for example—it’s not as easy to detect similar patterns on the web.
The World Wide Web is nearing its end in Iran. The country announced it had completed the first of three stages that will eventually set up a “national Internet”—an intranet, really—controlled by the government, with all of its servers in the country. Iranians will only have access to content, services, and applications that are based in Iran. Iran already blocks access to some overseas-based social media, news outlets, and online stores. A national Internet would tighten the government’s grip on online content even more.
The BBC adds: "The government says the goal is to create an isolated domestic intranet that can be used to promote Islamic content and raise digital awareness among the public. It intends to replace the current system, in which officials seek to limit which parts of the existing internet people have access to via filters—an effort [Iranian Communications and Information Technology minister Mahmoud] Vaezi described as being 'inefficient.'"
Imagine the Earth at night—the vast and curving darkness, splotched with rivulets of light. It is a gorgeous sight, and a familiar one. Today, this image often plays as a beautiful cliché, a pre-metabolized testament to human invention and connectedness, as likely to appear in Koyaanisqatsi as in a Kia commercial. For economists, though, this spectacle is more than a symbol: It is a powerful data set.
For the last few decades, and almost since astronauts first captured images of the nocturnal Earth, researchers have recognized that “night lights” data indirectly indexes the wealth of people producing the light. This econometric power seems to work across the planet: Not only do cities glow brighter than farmland, but American cities outshine Indian cities; and as a country’s GDP increases, so does its nighttime luminosity. Two years ago, a Stanford professor even used night lights data to show that North Korean leaders were passing the costs of international economic sanctions down to farmers and villagers. As foreign governments imposed sanctions, Pyongyang became brighter and light from the hinterlands waned. Night lights, therefore, appear to be an incredible resource. So much so that in countries with poor economic statistics, they can serve as a proxy for a regional wealth survey—except no one has to go house to house, running through a questionnaire. Yet research has also shown this not-a-survey will remain inexact: To a satellite at night, a few well-lit mansions and a dense but poorly lit shantytown can look nearly the same.
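The proxy described above—brighter regions tend to be richer regions—amounts to measuring the correlation between nighttime luminosity and economic output. Here is a minimal sketch of that idea in Python, using invented regional figures; the region names, luminance values, and GDP numbers are all hypothetical, and real studies work from calibrated satellite radiance composites rather than a handful of points:

```python
# Hypothetical night-lights proxy: correlate regional nighttime
# luminosity with reported GDP. All data below are invented for
# illustration only.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# (region, mean night luminance, GDP in $bn) -- hypothetical values
regions = [("A", 12.0, 30), ("B", 45.0, 110), ("C", 3.5, 9), ("D", 60.0, 150)]
luminance = [r[1] for r in regions]
gdp = [r[2] for r in regions]

r = pearson_r(luminance, gdp)  # close to 1.0 for this toy data
```

The shantytown caveat at the end of the paragraph is exactly where a sketch like this breaks down: two regions with the same luminance can sit far apart in actual wealth, so luminosity bounds the estimate rather than pinning it down.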
[Commentary] The prize for the most wasteful post-9/11 initiative arguably should go to FirstNet—a whole new agency set up to provide a telecommunications system exclusively for firefighters, police, and other first responders. They would communicate on bandwidth worth billions of dollars in the commercial market but now reserved by the Federal Communications Commission for FirstNet. FirstNet is in such disarray that 15 years after the problem it is supposed to solve was identified, it is years from completion—and it may never get completed at all. According to the GAO, estimates of its cost range from $12 billion to $47 billion, even as advances in digital technology seem to have eliminated the need to spend any of it.
Cheap smartphones with cameras have brought the power to take documentary evidence to just about anyone, and the credibility of phone-shot video has held up in court and in the news. But a patent awarded to Apple in June hints at a future where invisible signals could alter the images that smartphone cameras capture—or even disable smartphone cameras entirely.
Apple filed for the patent in 2011, proposing a smartphone camera that could respond to data streams encoded in invisible infrared signals. The signals could display additional information on the phone’s screen: If a user points his or her camera at a museum exhibit, for example, a transmitter placed nearby could tell the phone to show information about the object in the viewfinder. A different type of data stream, however, could prevent the phone from recording at all. Apple’s patent also proposes using infrared rays to force iPhone cameras to shut off at concerts, where video, photo, and audio recording is often prohibited. Yes, smartphones are the scourge of the modern concert, but using remote camera-blocking technology to curb their use opens up a dangerous potential for abuse. What happens if someone else can use technology to enforce limits on how you use your smartphone camera, or to alter the images that you capture without your consent? In public spaces in the US, that would be illegal: Courts have generally ruled that the First Amendment protects people’s right to take pictures when they’re in a public area like a park, plaza, or street. Private spaces are a different story entirely.
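The patent describes two kinds of infrared data streams: one that overlays information on the viewfinder and one that blocks recording. A minimal sketch of that command dispatch might look like the following—the command names, payload format, and state keys are all invented for illustration, not taken from Apple's patent:

```python
# Hypothetical sketch of the two IR-triggered behaviors described
# in the patent summary: an "overlay" command (museum exhibit info)
# and a "disable_record" command (concert venue). The protocol
# details here are assumptions, not Apple's actual design.

def handle_ir_payload(payload: dict, camera_state: dict) -> dict:
    """Apply a decoded infrared command to the camera's state."""
    cmd = payload.get("cmd")
    if cmd == "overlay":
        # Show transmitted text on the viewfinder screen.
        camera_state["overlay_text"] = payload.get("text", "")
    elif cmd == "disable_record":
        # Venue transmitter forbids photo/video capture.
        camera_state["recording_allowed"] = False
    return camera_state

state = {"recording_allowed": True, "overlay_text": None}
state = handle_ir_payload({"cmd": "disable_record"}, state)
```

The abuse concern raised in the passage is visible even in this toy version: whoever controls the transmitter, not the phone's owner, decides whether `recording_allowed` stays true.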
Since 2012, Google has been notifying Gmail customers when they come under attack from hackers who may be working for foreign governments. The company has long remained vague about the way it detects and identifies these hackers—“we can’t reveal the tip-off,” the company tells users—and about the number of notifications it routinely sends. Until now.
When these warnings were introduced, they appeared as thin red bars tacked to the top of users’ inboxes. But just a few months ago, Google redesigned the notifications to be considerably more in-your-face: Now, they take up the entire screen, announcing themselves with an angry red flag. “Government-backed hackers may be trying to steal your password,” the alert reads, advising users to enable two-factor authentication. The new alert says that fewer than one in a thousand Gmail users are targeted by foreign hackers—but for a product with more than a billion active users, that could still be a really big number. (0.1 percent of 1 billion is 1 million.) On July 11, Google provided its most precise estimate ever of the number of cyberattacks it detects that target Gmail users. Google Senior Vice President Diane Greene said the company notifies 4,000 users each month of state-sponsored cyberattacks.
[Commentary] In a small number of Silicon Valley conference rooms, decisions are being made about what people should and shouldn't see online -- without the accountability or culture that has long accompanied that responsibility.
This is a pivotal time for our communications ecosystem. As we cede control to governments and corporations -- and as they take it away from us -- we are risking a most fundamental liberty, the ability to freely speak and assemble. Let’s not trade our freedom for convenience.
[Gillmor teaches digital-media literacy and entrepreneurship at Arizona State University]
[Commentary] I have come to believe that advertising is the original sin of the web. An ad supported web has at least four downsides as a default business model.
First, while advertising without surveillance is possible, it’s hard to imagine online advertising without surveillance.
Second, not only does advertising lead to surveillance through the “investor storytime” mechanism, it creates incentives to produce and share content that generates pageviews and mouse clicks, but little thoughtful engagement.
Third, the advertising model tends to centralize the web. Advertisers are desperate to reach large audiences as the reach of any individual channel shrinks.
Finally, even attempts to mitigate advertising’s downsides have consequences. To compensate us for our experience of continual surveillance, many websites promise personalization of content to match our interests and tastes. By giving platforms information on our interests, we are, of course, generating more ad targeting information.
[Zuckerman is director of the Center for Civic Media at MIT and principal research scientist at MIT’s Media Lab]