Facebook’s long-awaited change in how it handles political advertisements is only a first step toward addressing a problem intrinsic to a social network built on the viral sharing of user posts.
The company’s approach, a searchable database of political ads and their sponsors, depends on the company’s ability to sort through huge quantities of ads and identify which ones are political. Facebook is betting that a combination of voluntary disclosure and review by both people and automated systems will close a vulnerability that was famously exploited by Russian meddlers in the 2016 election.
The company is doubling down on tactics that so far have not prevented the proliferation of hate-filled posts or of ads that exploit Facebook’s capability to target particular groups.
As a teenager, I remember being horrified about the possibility of nuclear war. I watched daily news reports about the nuclear arms race between the U.S. and the Soviet Union and listened to music about “what might save us, me and you,” as Sting’s 1985 song “Russians” put it (the answer: “If the Russians love their children too”).
But I especially remember the television event of 1983: “The Day After,” a fictional, made-for-TV movie that imagined a nuclear attack on American soil. The debates and discussions the film spurred make me wonder if a similar sort of high-profile cultural event would serve the country well today.
The water cooler event of the decade
At my junior high school in Southern California, “The Day After” was what everyone was talking about leading up to (and following) the night it aired on ABC on Nov. 20, 1983.
By all measures, it was a major media event. An estimated 100 million viewers tuned in. The White House phone lines were jammed and ABC headquarters in New York received more than 1,000 calls about the movie during its East Coast broadcast.
“The Day After” imagines a scenario in which America’s policy of deterrence fails. It depicts a nuclear attack through the experiences of Midwesterners – doctors, students, children, the pregnant and the engaged – followed by an extended (and, though grim, fairly unrealistic) consideration of post-blast repercussions.
Leading up to the attack, there is quotidian normality, followed by localized shock at the terrifying sight of missiles being launched out of the ground from Kansas missile silos. Panicked anticipation of an incoming nuclear attack follows, replete with period novelties such as huge lines at pay phones.
Although dated and artless in many ways, the representation of the blast remains horrific, if only by virtue of what it forces us to consider: the fire, wind and chaos; the widespread damage and suffering; the desperate need for medical care; and the futile desire for order and assistance.
Society as the characters in the movie knew it – just a day before – was a thing of the past.
“The Day After” was controversial even before it aired, with critics like Tom Shales of The Washington Post deeming it “the most politicized entertainment program ever seen on television.” Reverend Jerry Falwell organized a boycott against the show’s advertisers, and Paul Newman and Meryl Streep both tried (unsuccessfully) to run anti-nuclear proliferation advocacy ads during the program.
In the text that scrolls at the end of the film, “The Day After” declares its intention to “inspire the nations of this earth, their people and leaders, to find the means to avert the fateful day” – to, in essence, scare some sense into anyone tuning in.
Pro- and anti-nuclear groups used the film as a rallying cry for their positions. An Oct. 4, 1983 LA Times article (“‘The Day After’ Creating a Stir”) detailed a “conservative counteroffensive” that attempted to “discredit the film and write it off as a media conspiracy against Ronald Reagan’s strong defense posture.” Reagan supporters also hoped to defuse potential public backlash against American nuclear missile proliferation in Europe.
After the film aired, two simultaneous events at the epicenter of the film’s setting, the University of Kansas, were telling. A Los Angeles Times article titled “‘The Day After’ Viewed Amid Debate, Fear” described how a candlelight vigil in support of nuclear disarmament was joined by counterdemonstrators who “urged peace through military strength.”
As The New York Times’s John Corry wrote, “Champions of the film say it forces us to think intelligently about the arms race; detractors say it preaches appeasement.”
A trigger for serious reflection
Outside of partisan lobbying, “The Day After” opened the door for public debate about nuclear weapons.
Immediately after the movie’s broadcast, Ted Koppel moderated a riveting discussion that featured a formidable group of pundits, including Henry Kissinger, Elie Wiesel, William F. Buckley, Carl Sagan and Robert McNamara. During this special edition of “Viewpoint,” Secretary of State George Shultz also appeared to tell audiences that “nuclear war is simply not acceptable.”
The most prescient and horrifying questions from the audience and responses from the panelists on “Viewpoint” anticipate a future that’s eerily indicative of where we are today – a time of multi-state nuclear capability, where one unstable leader might trigger nuclear catastrophe.
In the weeks after the broadcast, schools and community centers around the country held forums during which people could discuss and debate the issues the film raised. Psychologists and communication scholars were also eager to study the movie’s impact on viewers, from how it influenced their attitudes about nuclear weapons, to its emotional consequences, to whether they felt empowered to try to influence America’s nuclear policies.
That was then, this is now
In the early 1980s, of course, it was the Soviet Union that posed the nuclear threat to America.
Today’s adversaries are more diffuse. The world’s nuclear situation is also much more volatile, with greater destructive potential than “The Day After” imagined.
A modern-day remake of “The Day After” would have to reckon with this bleaker scenario: a world in which there may be no day after.
The bellicose posturing that prevails in the White House today resonates, in some ways, with the public bickering between Soviet Head of State Yuri Andropov and Ronald Reagan in the months leading up to the broadcast of “The Day After.” After the film’s release, New York Times columnist James Reston hoped “the two nuclear giants” would “shut up for a few weeks” – that “some civility or decent manners” might prevail in the wake of public concern about the consequences imagined in ABC’s somber nuclear fable.
But as then-Secretary of State George Shultz pointed out in the Koppel interview, the aim of the Reagan administration was to never have to use nuclear weapons. It was to deter our nuclear adversary and to reduce our nuclear storehouse. Shultz’s words of assurance are a contrast to today’s rhetoric of nuclear one-upmanship that is totally removed from the devastating reality of nuclear war.
Trivializations of nuclear warfare on the order of “my button’s bigger than yours” undermine the grave reality of nuclear cataclysm. Such rhetoric is no longer the domain of farce, as in Stanley Kubrick’s “Dr. Strangelove,” in which erratic, incompetent leaders bumble their way into the apocalypse.
Perhaps some modernized version of “The Day After” could function as a wake-up call for those who have no real context for nuclear fear. If nothing else, “The Day After” got people talking seriously about the environmental, political and societal consequences of nuclear war.
It might also remind our current leaders – Trump, foremost among them – of what modern nuclear war might look like on American soil, perhaps inspiring a more measured stance than has prevailed thus far in 2018.
Voting stand and the notorious “butterfly ballot” from Palm Beach County, used in the disputed 2000 U.S. presidential election. Photo: Infrogmation (Own work) [CC BY 2.5], via Wikimedia Commons
Following the hack of Democratic National Committee emails and reports of a new cyberattack against the Democratic Congressional Campaign Committee, worries abound that foreign nations may be clandestinely involved in the 2016 American presidential campaign. Allegations swirl that Russia, under the direction of President Vladimir Putin, is secretly working to undermine the U.S. Democratic Party. The apparent logic is that a Donald Trump presidency would result in more pro-Russian policies. At the moment, the FBI is investigating, but no U.S. government agency has yet made a formal accusation.
The Republican nominee added unprecedented fuel to the fire by encouraging Russia to “find” and release Hillary Clinton’s missing emails from her time as secretary of state. Trump’s comments drew sharp rebuke from the media and politicians on all sides. Some suggested that by soliciting a foreign power to intervene in domestic politics, his musings bordered on criminality or treason. Trump backtracked, saying his comments were “sarcastic,” implying they’re not to be taken seriously.
Of course, the desire to interfere with another country’s internal political processes is nothing new. Global powers routinely monitor their adversaries and, when deemed necessary, will try to clandestinely undermine or influence foreign domestic politics to their own benefit. For example, the Soviet Union’s foreign intelligence service engaged in so-called “active measures” designed to influence Western opinion. Among other efforts, it spread conspiracy theories about government officials and fabricated documents intended to exploit the social tensions of the 1960s. Similarly, U.S. intelligence services have conducted their own secret activities against foreign political systems – perhaps most notably their repeated attempts to help overthrow communist leader Fidel Castro in Cuba.
Although the Cold War is over, intelligence services around the world continue to monitor other countries’ domestic political situations. Today’s “influence operations” are generally subtle and strategic. Intelligence services clandestinely try to sway the “hearts and minds” of the target country’s population toward a certain political outcome.
What has changed, however, is the ability of individuals, governments, militaries and criminal or terrorist organizations to use internet-based tools – commonly called cyberweapons – not only to gather information but also to generate influence within a target group.
So what are some of the technical vulnerabilities faced by nations during political elections, and what’s really at stake when foreign powers meddle in domestic political processes?
Vulnerabilities at the electronic ballot box
The process of democratic voting requires a strong sense of trust – in the equipment, the process and the people involved.
One of the most obvious, direct ways to affect a country’s election is to interfere with the way citizens actually cast votes. As the United States (and other nations) embraces electronic voting, it must take steps to ensure the security – and more importantly, the trustworthiness – of the systems. Not doing so can endanger a nation’s domestic democratic will and create general political discord – a situation that can be exploited by an adversary for its own purposes.
New technology always comes with some glitches – even when it’s not being attacked. For example, during the 2004 general election, North Carolina’s Unilect e-voting machines “lost” 4,438 votes due to a system error.
But cybersecurity researchers focus on the kinds of problems that could be intentionally caused by bad actors. In 2006, Princeton computer science professor Ed Felten demonstrated how to install a self-propagating piece of vote-changing malware on Diebold e-voting systems in less than a minute. In 2011, technicians at the Argonne National Laboratory showed how to hack e-voting machines remotely and change voting data.
Voting officials recognize that these technologies are vulnerable. Following a 2007 study of her state’s electronic voting systems, Ohio Secretary of State Jennifer L. Brunner announced that
the computer-based voting systems in use in Ohio do not meet computer industry security standards and are susceptible to breaches of security that may jeopardize the integrity of the voting process.
As the first generation of voting machines ages, even maintenance and updating become an issue. A 2015 report found that electronic voting machines in 43 of 50 U.S. states are at least 10 years old – and that state election officials are unsure where the funding will come from to replace them.
Securing the machines and their data
In many cases, electronic voting depends on a distributed network, just like the electrical grid or municipal water system. Its spread-out nature means there are many points of potential vulnerability.
First, to be secure, the hardware “internals” of each voting machine must be made tamper-proof at the point of manufacture. Each individual machine’s software must remain tamper-proof and accountable, as must the vote data stored on it. (Some machines provide voters with a paper receipt of their votes, too.) When problems are discovered, the machines must be removed from service and fixed. Virginia did just this in 2015 once numerous glaring security vulnerabilities were discovered in its system.
Once votes are collected from individual machines, the compiled results must be transmitted from polling places to higher election offices for official consolidation, tabulation and final statewide reporting. So the network connections between locations must be tamper-proof and prevent interception or modification of the in-transit tallies. Likewise, state-level vote-tabulating systems must have trustworthy software that is both accountable and resistant to unauthorized data modification. Corrupting the integrity of data anywhere during this process, either intentionally or accidentally, can lead to botched election results.
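The integrity requirement for in-transit tallies can be illustrated with a keyed hash (a message authentication code). The following is a minimal sketch, assuming a JSON tally format and a single shared key for simplicity; a real election system would use per-machine keys and digital signatures, and the function names here are illustrative, not any actual system's protocol:

```python
import hashlib
import hmac
import json

# Shared secret distributed out of band (illustrative only; real systems
# would use per-machine keys and asymmetric signatures).
SECRET_KEY = b"example-shared-secret"

def sign_tally(tally: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiving office can detect tampering."""
    payload = json.dumps(tally, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"tally": tally, "tag": tag}

def verify_tally(message: dict) -> bool:
    """Recompute the tag over the received tally and compare in constant time."""
    payload = json.dumps(message["tally"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

signed = sign_tally({"precinct": 12, "candidate_a": 410, "candidate_b": 388})
assert verify_tally(signed)

# Any modification in transit invalidates the tag.
signed["tally"]["candidate_a"] = 999
assert not verify_tally(signed)
```

The point of the sketch is the failure mode it rules out: a changed vote count no longer matches its tag, so silent modification of the in-transit tally becomes detectable at the consolidation office.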
However, technical vulnerabilities with the electoral process extend far beyond the voting machines at the “edge of the network.” Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.
And of course, underlying all this is human vulnerability: Anyone involved with e-voting technologies or procedures is susceptible to coercion or human error.
How can we guard the systems?
The first line of defense in protecting electronic voting technologies and information is common sense. Applying the best practices of cybersecurity, data protection, information access and other objectively developed, responsibly implemented procedures makes it more difficult for adversaries to conduct cyber mischief. These are essential and must be practiced regularly.
Sure, it’s unlikely a single voting machine in a specific precinct in a specific polling place would be targeted by an overseas or criminal entity. But the security of each electronic voting machine is essential to ensuring not only free and fair elections but also citizen trust in these technologies and processes – think of the chaos around the infamous hanging chads during the contested 2000 Florida recount. Along these lines, in 2004, Nevada became the first state to mandate that e-voting machines include a voter-verified paper trail, ensuring public accountability for each vote cast.
Proactive examination and analysis of electronic voting machines and voter information systems are essential to ensuring free and fair elections and facilitating citizen trust in e-voting. Unfortunately, some voting machine manufacturers have invoked the controversial Digital Millennium Copyright Act to prohibit external researchers from assessing the security and trustworthiness of their systems.
However, a 2015 exception to the act authorizes security research into technologies otherwise protected by copyright laws. This means the security community can legally research, test, reverse-engineer and analyze such systems. Even more importantly, researchers now have the freedom to publish their findings without fear of being sued for copyright infringement. Their work is vital to identifying security vulnerabilities before they can be exploited in real-world elections.
Because of its benefits and conveniences, electronic voting may become the preferred mode for local and national elections. If so, officials must secure these systems and ensure they can provide trustworthy elections that support the democratic process. State-level election agencies must be given the financial resources to invest in up-to-date e-voting systems. They also must guarantee sufficient, proactive, ongoing and effective protections are in place to reduce the threat of not only operational glitches but intentional cyberattacks.
Democracies endure based not on the whims of a single ruler but the shared electoral responsibility of informed citizens who trust their government and its systems. That trust must not be broken by complacency, lack of resources or the intentional actions of a foreign power. As famed investor Warren Buffett once noted, “It takes 20 years to build a reputation and five minutes to ruin it.”
“The Snowden leaks caused a sea change in the policy landscape related to surveillance,” writes one watchdog group, pointing to everything from the recent passage of the USA Freedom Act to the coming showdown in Congress over Section 702.
“There can be no renewal of Section 702 unless warrantless surveillance of Americans’ private lives is stopped,” declared bipartisan coalition End702. (Photo: Gage Skidmore/cc/flickr)
Three years ago on Monday, the world was shattered by news that the United States was conducting sweeping, warrantless surveillance of people, heads of state, and organizations across the globe.
To mark the anniversary of those revelations, brought forth by a then-unknown contractor working for the National Security Agency (NSA), a coalition of public interest groups has launched a new campaign fighting for the expiration of the law that the government claims authorizes its mass spying.
“This bill is a clear threat to everyone’s privacy and security,” said Neema Singh Guliani, legislative counsel with the ACLU. (Photo: Laura Bittner/flickr/cc)
A draft of a proposed bill mandating companies give, under a court order, the government access to encrypted data is being derided by technology experts as “ludicrous,” as it “ignores technical reality” and threatens everyone’s security.
The bill’s proposers, Senators Richard Burr (R-North Carolina), chair of the Senate Intelligence Committee, and Dianne Feinstein (D-California), the top Democrat on the committee, neither disavowed the document nor confirmed its legitimacy, the Wall Street Journal reports.
Israeli soldiers standing on a Dolphin-class submarine. Photo: Israel Defense Forces (The Chief of Staff Tours Israel’s Naval Bases) [CC BY-SA 2.0], via Wikimedia Commons
Professor Noam Chomsky offered several alarming insights about Israel’s possible true intentions surrounding Iran — and why we all should be concerned. In an interview with AcTVism Munich’s Zain Raza, Chomsky explained what happens to submarines Germany sends to Israel:
“These dolphin class submarines that Germany is providing to Israel are instantly refitted in Israel to have nuclear weapons capacity, and that’s not aimed at defense of Israel. They are meant for attack, that’s what they are. And we know what attack they’re aimed for in the short run: an attack on Iran in the Gulf. That’s a terrible threat, not only to Iranians, but to the world.”
The ongoing fight between Apple and the FBI over breaking into the iPhone maker’s encryption system to access a person’s data is becoming an increasingly challenging legal issue.
With a deadline looming, Apple filed court papers explaining why it is refusing to assist the FBI in cracking a password on an iPhone used by one of the suspects in the San Bernardino shooting. CEO Tim Cook has declared he will take the case all the way to the Supreme Court.
The tech company now wants Congress to step in and define what can be reasonably demanded of a private company, though perhaps it should be careful what it wishes for, considering lawmakers have introduced a bill that compels companies to break into a digital device if the government asks.
But there is an irony to this debate. Government once pushed industry to improve personal data privacy and security – now it’s the tech companies who are trumpeting better security. My own research has highlighted this interplay among businesses, users and regulators when it comes to data security and privacy.
For consumers, who in coming years will see ever more of their lives take place in the digital realm, this heightened attention on data privacy is a very good thing.
The business case for better privacy grows
Not too long ago, everyone seemed to be bemoaning that companies aren’t doing enough to protect customer security and privacy.
The White House, for example, published a widely cited report saying that the lack of online privacy is essentially a market failure. It highlighted that users simply are in no position to control how their data are collected, analyzed and traded. Thus, a market-based approach to privacy will be ineffective, and regulations were necessary to force firms to protect the security and privacy of consumer data.
The tide seems to have turned. Repeated stories on data breaches and privacy invasion, particularly the revelations from former NSA contractor Edward Snowden, appear to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.
Whether it is through its payment software or operating system, Apple has emphasized security and privacy as an important differentiator in its products. Of course, unlike Google or Facebook, Apple does not make money by explicitly using customer data, so it may have more incentive than others to incorporate these features. But it competes directly with Android and naturally plays an important role in shaping market expectations of what a product and service should look like.
These features possibly play an even more critical role outside the U.S. where privacy is under threat not only from online marketers and hackers but also from governments. In countries like China, where Apple sells millions of iPhones, these features potentially are very attractive to end users to keep their data private from prying eyes of authorities.
Regulators hum a different tune
It is clear that Apple is offering strong security to its users – so much so that the FBI accuses it of using security as a marketing gimmick.
It seems we have come full circle in the privacy debate. A few years ago, regulators were lamenting that businesses were invading consumers’ privacy, that firms lacked the proper incentives to protect it, and that markets needed stronger rules to make protection happen. Today, some of the same regulators are complaining that products are too secure and that firms need to relax security in some special cases.
While the legality of this case will likely play out over time, we as end users can feel better that, at least in some markets, companies are responding to growing consumer demand for products that more aggressively protect our privacy. Interestingly, Apple’s mobile operating system, iOS, offers security by default rather than requiring users to opt in, as most other products do. Moreover, these features are available to every user, whether they explicitly want them or not, suggesting we may be moving to a world in which privacy is fundamental.
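The technical crux of the standoff – why even a short passcode can protect data, and why investigators wanted guess-limiting protections disabled – comes down to key derivation plus enforced rate limits. Below is an illustrative sketch using PBKDF2; the parameters and salt are assumptions for the example, not Apple’s actual scheme:

```python
import hashlib

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a short passcode into an encryption key with PBKDF2.
    The high iteration count makes every guess deliberately expensive."""
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = b"per-device-salt"        # unique per device in practice
key = derive_key("0042", salt)   # the user's (weak) 4-digit passcode

# A 4-digit passcode has only 10,000 possibilities. Without a device
# enforcing rate limits or auto-erase, an attacker simply tries them all,
# which is exactly what the guess-limiting protections are meant to stop.
recovered = next(
    p for p in (f"{n:04d}" for n in range(10_000))
    if derive_key(p, salt) == key
)
print(recovered)  # -> 0042
```

In Apple’s real design the derived key is also entangled with a device-unique hardware secret, which is why a brute-force loop like this cannot simply be run offline on another machine – and why the FBI sought Apple’s cooperation rather than attacking the ciphertext directly.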
Data sharing gets complicated
At its core, this debate also points to a larger question over how a public-private partnership should be structured in a cyberworld and how and when a company needs to share details with either the government or possibly with other businesses for the public good.
When Google servers were breached in China in 2010, similar questions arose. United States government agencies wanted access to technical details on the breach so it could investigate the perpetrators more thoroughly to unearth possible espionage attempts by Chinese hackers. The breach appeared to be aimed at learning the identities of Chinese intelligence operatives in the U.S. that were under surveillance.
Information sharing on data breaches and security infiltration is something the government has widely encouraged, last year passing the Cybersecurity Information Sharing Act of 2015 to encourage just that.
Unfortunately, various government agencies themselves have become self-interested parties in this game. In particular, the Snowden disclosures revealed that many government agencies conduct extensive surveillance on citizens, which arguably not only undermines our privacy but compromises our entire information security infrastructure.
These agencies, including the FBI in the current case, may have good intentions, but all of this has finally given profit-maximizing companies the right incentives to do what the regulators once wanted. Private businesses have no desire to get caught up in the kind of bad press that usually follows disclosures like Snowden’s, so it’s no wonder they want to convince their customers that their data are safe and secure, even from the government.
With cybersecurity becoming a tool for government agencies to wage war with other nation-states, it is no surprise that companies want to share less, not more, even with their own governments.
The battle ahead
This case is obviously very specific. I suspect that, in this narrow case, Apple and law enforcement agencies will find a compromise.
But the Apple brand has likely strengthened. In the long run, its loyal customers will reward it for putting them first.
However, this question is not going away. With the “Internet of things” touted as the next big revolution, more and more devices will capture our very personal data – including our conversations.
This case could be a precedent-setting event that reshapes how our data are stored and managed in the future. But at least for now, some of the companies appear to be – or at least say they want to be – on our side in terms of protecting our privacy.
About the Author: Rahul Telang is Professor of Information Systems and Management, Carnegie Mellon University.
A sea of graves spreads across the Fort Snelling National Cemetery landscape. (Photo: MNgranny)
Economic opportunism, or more accurately, profit opportunism, best describes the foundation on which the war machine sustains its existence; and a recent report for ‘defense’ industry investors lays bare this callous reality.
“The Islamic State (ISIS) has become a key threat in Syria, Iraq, and Afghanistan and is involved in exporting terrorism to Europe, Africa, and elsewhere. The recent tragic bombings in Paris, Beirut, Mali, the Sinai Peninsula, and other places have emboldened nations to join in the fight against terrorism,” reads the report from the accounting firm Deloitte. “Several governments affected by these threats are increasing their defense budgets to combat terrorism and address sovereign security matters, including cyber-threats. For defense contractors, this represents an opportunity to sell more equipment and military weapons systems.”
CalECPA is “a landmark win for digital privacy and all Californians.” (Photo: Yuri Samoilov/flickr/cc)
In what privacy advocates are hailing as a landmark victory, California Gov. Jerry Brown has signed into law a sweeping tech privacy bill which will require police in the state to obtain warrants for access to telecommunications data, including emails, text messages, GPS coordinates, and other digital information.
“This is a major win for privacy and for Californians. With so much of our information existing online, it’s important that our communications are protected from government access to the strongest degree possible,” said G.S. Hans, policy counsel and director for the Center for Democracy and Technology (CDT).
“The dream of Internet freedom is… dying,” said attorney and civil liberties expert Jennifer Granick during her keynote speech before a major computer security conference in Las Vegas on Wednesday.
Granick, formerly the civil liberties director at the Electronic Frontier Foundation and now the director of civil liberties at the Stanford Center for Internet and Society, was addressing some of the world’s foremost technology experts attending the annual Black Hat information security event this week.
“Centralization, regulation, and globalization,” Granick said, have wrought havoc on a space once thought of as “a world that would leave behind the shackles of age, of race, of gender, of class, even of law.”
The dream is dying, she said, because “we’ve prioritized things like security, online civility, user interface, and intellectual property interests above freedom and openness.” And governments, for their part, have capitalized on the fear of “the Four Horsemen of the Infocalypse: terrorists, pedophiles, drug dealers, and money launderers” to push for even more regulation and control, she added.
Granick’s dire pronouncement, which echoed similar assertions made by security experts and civil liberties groups, comes just over two years after National Security Agency whistleblower Edward Snowden cracked open the seal on the U.S. government’s online spying capabilities and revealed just how little security and secrecy remain on the World Wide Web.
Late last month, Snowden himself made a direct plea to technologists to build a new Internet specifically for the people.
Meanwhile, the U.S. government continues to push for expanded surveillance capabilities, such as with the Cybersecurity Information Sharing Act (CISA) currently making its way through Congress, which would allow companies to share personal user information with the government if there is a so-called “cybersecurity threat.”
In her keynote address, Granick also took on the undisclosed rules which supposedly enable much of the government’s spying activities. “We need to get rid of secret law. We have secret law in this country and it is an abomination in the face of democracy,” Granick proclaimed, to much applause.
In the future, she further warned, Internet users won’t be aware of the “secret” software-driven decisions directly impacting their rights and privacy.
“Software will decide whether a car runs over you or off a bridge,” she said. “Things will happen and no one will really know why.”
“The Internet will become a lot more like TV and a lot less like the global conversation we envisioned 20 years ago,” Granick said, concluding that if this is the case, “we need to get ready to smash it apart to make something better.”