Tuesday’s votes by GOP committee members, as The Nation’s Ari Berman put it, are “more proof of how the GOP’s real agenda is to make it harder to vote.” (Photo: Keith Ivey/cc/flickr)
Amid national outrage over possible foreign interference in the 2016 election and President Donald Trump’s own lies about so-called voter fraud, House Republicans on Tuesday quietly advanced two bills that “could profoundly impact the way we administer and finance national elections,” watchdogs are warning.
The GOP-dominated Committee on House Administration voted along party lines to approve the Election Assistance Commission (EAC) Termination Act (HR 634), which would abolish the only “federal agency charged with upgrading our voting systems” and “helping to protect our elections from hacking,” as Wendy Weiser, director of the Democracy Program at NYU School of Law’s Brennan Center for Justice, put it.
First of all, let me confess that I shed some tears when David Bowie died. I know all 20+ of his albums by heart, and it felt like a piece of my childhood had disappeared. A few years ago, when Philip Seymour Hoffman died, I also cried. It’s a strange emotional symbiosis that occurs when you mourn for a deceased celebrity, and the point of this article is not to cast aspersions. However, 2016 has basically become known as the year a bunch of celebrities died, so there’s no better time to assess the phenomenon (and make sure it doesn’t distract us from other issues).
Over Christmas weekend, millions of people mourned the loss of George Michael and Carrie Fisher. They were advocates for gay rights and mental health awareness, respectively, and the nation reeled from the passing of two beloved, iconic figures. Earlier this year, music legend Prince passed away, devastating tens of millions of fans for whom the musician represented everything from their adolescence in the 1980s to gender-bending political statements. The list of celebrities who died in 2016 is extensive and, for some, unnerving.
Social media accounts are “gateways into an enormous amount of [users’] online expression and associations, which can reflect highly sensitive information about that person’s opinions, beliefs, identity, and community.” (Photo: The Hamster Factor/flickr/cc)
The U.S. government has quietly started to ask foreign travelers to hand over their social media accounts upon arriving in the country, a program that aims to spot potential terrorist threats but which civil liberties advocates have long opposed as a threat to privacy.
The program has been active since Tuesday, asking travelers arriving to the U.S. on visa waivers to voluntarily enter information associated with their online presence, including “Facebook, Google+, Instagram, LinkedIn, and YouTube, as well as a space for users to input their account names on those sites,” Politico reports.
“Censorship in all its forms reflects official fear of ideas and information,” said U.N. Special Rapporteur on the freedom of opinion and expression, David Kaye. (Photo: Rachel Hinman/flickr/cc)
“Governments are treating words as weapons,” a United Nations expert has warned, previewing a report on the global attack on the freedom of expression.
The report—based on communications with governments stemming from allegations of human rights law violations—reveals “sobering” trends of threats worldwide and “how policies and laws against terrorism and other criminal activity risk unnecessarily undermining the media, critical voices, and activists.”
“Andy Hall has spent years working to protect the rights of marginalized workers in Thailand. He should be commended for his efforts, not fined and sentenced,” said Malaysian Parliament member and Asian Parliamentarians for Human Rights chairperson Charles Santiago. (Photo via UN Human Rights- Asia/Facebook)
Setting a chilling precedent for human rights defenders worldwide, a British activist on Tuesday was convicted of criminal defamation and cyber crimes by a Thai court for his work exposing the abuse of migrant workers at a pineapple processing plant.
Andy Hall, with the Migrant Worker Rights Network, had contributed to the 2013 report Cheap Has a High Price (pdf) by Finnwatch, a Finnish civil society organization, that outlined allegations of serious human rights violations by Natural Fruit Company Ltd.
Voting stand and the notorious “butterfly ballot” from Palm Beach County, used in the disputed 2000 U.S. presidential election. Photo: Infrogmation (Own work) [CC BY 2.5], via Wikimedia Commons
Following the hack of Democratic National Committee emails and reports of a new cyberattack against the Democratic Congressional Campaign Committee, worries abound that foreign nations may be clandestinely involved in the 2016 American presidential campaign. Allegations swirl that Russia, under the direction of President Vladimir Putin, is secretly working to undermine the U.S. Democratic Party. The apparent logic is that a Donald Trump presidency would result in more pro-Russian policies. At the moment, the FBI is investigating, but no U.S. government agency has yet made a formal accusation.
The Republican nominee added unprecedented fuel to the fire by encouraging Russia to “find” and release Hillary Clinton’s missing emails from her time as secretary of state. Trump’s comments drew sharp rebuke from the media and politicians on all sides. Some suggested that by soliciting a foreign power to intervene in domestic politics, his musings bordered on criminality or treason. Trump backtracked, saying his comments were “sarcastic,” implying they’re not to be taken seriously.
Of course, the desire to interfere with another country’s internal political processes is nothing new. Global powers routinely monitor their adversaries and, when deemed necessary, will try to clandestinely undermine or influence foreign domestic politics to their own benefit. For example, the Soviet Union’s foreign intelligence service engaged in so-called “active measures” designed to influence Western opinion. Among other efforts, it spread conspiracy theories about government officials and fabricated documents intended to exploit the social tensions of the 1960s. Similarly, U.S. intelligence services have conducted their own secret activities against foreign political systems – perhaps most notably their repeated attempts to help overthrow Fidel Castro’s communist government in Cuba.
Although the Cold War is over, intelligence services around the world continue to monitor other countries’ domestic political situations. Today’s “influence operations” are generally subtle and strategic. Intelligence services clandestinely try to sway the “hearts and minds” of the target country’s population toward a certain political outcome.
What has changed, however, is the ability of individuals, governments, militaries and criminal or terrorist organizations to use internet-based tools – commonly called cyberweapons – not only to gather information but also to generate influence within a target group.
So what are some of the technical vulnerabilities faced by nations during political elections, and what’s really at stake when foreign powers meddle in domestic political processes?
Vulnerabilities at the electronic ballot box
The process of democratic voting requires a strong sense of trust – in the equipment, the process and the people involved.
One of the most obvious, direct ways to affect a country’s election is to interfere with the way citizens actually cast votes. As the United States – like other nations – embraces electronic voting, it must take steps to ensure the security – and more importantly, the trustworthiness – of the systems. Not doing so can endanger a nation’s domestic democratic will and create general political discord – a situation that can be exploited by an adversary for its own purposes.
New technology always comes with some glitches – even when it’s not being attacked. For example, during the 2004 general election, North Carolina’s Unilect e-voting machines “lost” 4,438 votes due to a system error.
But cybersecurity researchers focus on the kinds of problems that could be intentionally caused by bad actors. In 2006, Princeton computer science professor Ed Felten demonstrated how to install a self-propagating piece of vote-changing malware on Diebold e-voting systems in less than a minute. In 2011, technicians at the Argonne National Laboratory showed how to hack e-voting machines remotely and change voting data.
Voting officials recognize that these technologies are vulnerable. Following a 2007 study of her state’s electronic voting systems, Ohio Secretary of State Jennifer L. Brunner announced that
the computer-based voting systems in use in Ohio do not meet computer industry security standards and are susceptible to breaches of security that may jeopardize the integrity of the voting process.
As the first generation of voting machines ages, even maintenance and updating become an issue. A 2015 report found that electronic voting machines in 43 of 50 U.S. states are at least 10 years old – and that state election officials are unsure where the funding will come from to replace them.
Securing the machines and their data
In many cases, electronic voting depends on a distributed network, just like the electrical grid or municipal water system. Its spread-out nature means there are many points of potential vulnerability.
First, to be secure, the hardware “internals” of each voting machine must be made tamper-proof at the point of manufacture. Each individual machine’s software must remain tamper-proof and accountable, as must the vote data stored on it. (Some machines provide voters with a paper receipt of their votes, too.) When problems are discovered, the machines must be removed from service and fixed. Virginia did just this in 2015 once numerous glaring security vulnerabilities were discovered in its system.
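The software accountability described above is commonly grounded in cryptographic checksums: before a machine is fielded, officials can compare its installed software image against a published known-good digest and pull any machine that fails the check. Here is a minimal Python sketch of that idea – the file path and digest are hypothetical, and this is an illustration of the general technique, not any vendor’s actual certification mechanism:

```python
import hashlib

def verify_image(image_path: str, expected_sha256: str) -> bool:
    """Return True if the file's SHA-256 digest matches the known-good value.

    Illustrative only: real e-voting certification also relies on signed
    images, secure boot, and physical chain-of-custody controls.
    """
    digest = hashlib.sha256()
    with open(image_path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

A mismatch flags the machine for removal from service and repair, in the spirit of Virginia’s 2015 decertification.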
Once votes are collected from individual machines, the compiled results must be transmitted from polling places to higher election offices for official consolidation, tabulation and final statewide reporting. So the network connections between locations must be tamper-proof and prevent interception or modification of the in-transit tallies. Likewise, state-level vote-tabulating systems must have trustworthy software that is both accountable and resistant to unauthorized data modification. Corrupting the integrity of data anywhere during this process, either intentionally or accidentally, can lead to botched election results.
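One standard way to make in-transit tallies tamper-evident is a keyed message authentication code (MAC): the polling place tags the tally using a key shared with the election office, which rejects any report whose tag fails to verify. A hedged Python sketch follows – the field names and shared-key arrangement are illustrative assumptions, not a description of any real state’s system:

```python
import hashlib
import hmac
import json

def sign_tally(tally: dict, key: bytes) -> str:
    """Produce an HMAC-SHA256 tag over a canonical encoding of the tally."""
    payload = json.dumps(tally, sort_keys=True).encode("utf-8")
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_tally(tally: dict, key: bytes, tag: str) -> bool:
    """Check a received tally against its tag using a constant-time compare."""
    return hmac.compare_digest(sign_tally(tally, key), tag)
```

Any modification of the counts in transit changes the payload and invalidates the tag; a real deployment would also need replay protection (such as sequence numbers) and careful key management.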
However, technical vulnerabilities with the electoral process extend far beyond the voting machines at the “edge of the network.” Voter registration and administration systems operated by state and national governments are at risk too. Hacks here could affect voter rosters and citizen databases. Failing to secure these systems and records could result in fraudulent information in the voter database that may lead to improper (or illegal) voter registrations and potentially the casting of fraudulent votes.
And of course, underlying all this is human vulnerability: Anyone involved with e-voting technologies or procedures is susceptible to coercion or human error.
How can we guard the systems?
The first line of defense in protecting electronic voting technologies and information is common sense. Applying the best practices of cybersecurity, data protection, information access and other objectively developed, responsibly implemented procedures makes it more difficult for adversaries to conduct cyber mischief. These are essential and must be practiced regularly.
Sure, it’s unlikely a single voting machine in a specific polling place would be targeted by an overseas or criminal entity. But the security of each electronic voting machine is essential not only to ensuring free and fair elections but also to fostering citizen trust in such technologies and processes – think of the chaos around the infamous hanging chads during the contested 2000 Florida recount. Along these lines, in 2004, Nevada became the first state to mandate that e-voting machines include a voter-verified paper trail to ensure public accountability for each vote cast.
Proactive examination and analysis of electronic voting machines and voter information systems are essential to ensuring free and fair elections and facilitating citizen trust in e-voting. Unfortunately, some voting machine manufacturers have invoked the controversial Digital Millennium Copyright Act to prohibit external researchers from assessing the security and trustworthiness of their systems.
However, a 2015 exception to the act authorizes security research into technologies otherwise protected by copyright laws. This means the security community can legally research, test, reverse-engineer and analyze such systems. Even more importantly, researchers now have the freedom to publish their findings without fear of being sued for copyright infringement. Their work is vital to identifying security vulnerabilities before they can be exploited in real-world elections.
Because of its benefits and conveniences, electronic voting may become the preferred mode for local and national elections. If so, officials must secure these systems and ensure they can provide trustworthy elections that support the democratic process. State-level election agencies must be given the financial resources to invest in up-to-date e-voting systems. They also must guarantee sufficient, proactive, ongoing and effective protections are in place to reduce the threat of not only operational glitches but intentional cyberattacks.
Democracies endure based not on the whims of a single ruler but the shared electoral responsibility of informed citizens who trust their government and its systems. That trust must not be broken by complacency, lack of resources or the intentional actions of a foreign power. As famed investor Warren Buffett once noted, “It takes 20 years to build a reputation and five minutes to ruin it.”
Last week, the White House released a report chronicling the Obama administration’s concerns over Big Data and artificial intelligence. Many prominent thinkers and scientists have come out recently with warnings about the dangers of unchecked artificial intelligence. However, the A.I. the White House report refers to is not of the Terminator ilk — rather, Obama has concerns over algorithmic artificial intelligence operating without human oversight.
The report, “Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights,” catalogs the growing sphere of influence represented by Big Data in society, including employment, higher education, and criminal justice.
The rule would allow a federal judge to issue a warrant for any target using anonymity software like Tor to browse the internet. (Photo: Ben Watkin/flickr/cc)
The U.S. Supreme Court on Thursday quietly approved a rule change that would allow a federal magistrate judge to issue a search and seizure warrant for any target using anonymity software like Tor to browse the internet.
Absent action by U.S. Congress, the rule change (pdf) will go into effect in December. The FBI would then be able to search computers remotely—even if the bureau doesn’t know where that computer is located—if a user has anonymity software installed on it.
“This bill is a clear threat to everyone’s privacy and security,” said Neema Singh Guliani, legislative counsel with the ACLU. (Photo: Laura Bittner/flickr/cc)
A draft of a proposed bill mandating that companies give the government access to encrypted data under a court order is being derided by technology experts as “ludicrous,” as it “ignores technical reality” and threatens everyone’s security.
The bill’s proposers, Senators Richard Burr (R-North Carolina), chair of the Senate Intelligence Committee, and Dianne Feinstein (D-California), the top Democrat on the committee, neither disavowed the document nor confirmed its legitimacy, the Wall Street Journal reports.
The ongoing fight between Apple and the FBI over breaking into the iPhone maker’s encryption system to access a person’s data is becoming an increasingly challenging legal issue.
With a deadline looming, Apple filed court papers explaining why it is refusing to assist the FBI in cracking a password on an iPhone used by one of the suspects in the San Bernardino shooting. CEO Tim Cook has declared he will take the case all the way to the Supreme Court.
The tech company now wants Congress to step in and define what can be reasonably demanded of a private company, though perhaps it should be careful what it wishes for, considering lawmakers have introduced a bill that compels companies to break into a digital device if the government asks.
But there is an irony to this debate. Government once pushed industry to improve personal data privacy and security – now it’s the tech companies that are trumpeting better security. My own research has highlighted this interplay among businesses, users and regulators when it comes to data security and privacy.
For consumers, who in coming years will see ever more of their lives take place in the digital realm, this heightened attention on data privacy is a very good thing.
The business case for better privacy grows
Not too long ago, everyone seemed to be bemoaning that companies weren’t doing enough to protect customer security and privacy.
The White House, for example, published a widely cited report saying that the lack of online privacy is essentially a market failure. It highlighted that users simply are in no position to control how their data are collected, analyzed and traded. Thus, a market-based approach to privacy would be ineffective, and regulation was necessary to force firms to protect the security and privacy of consumer data.
The tide seems to have turned. Repeated stories on data breaches and privacy invasion, particularly the disclosures from former NSA contractor Edward Snowden, appear to have heightened users’ attention to security and privacy. Those two attributes have become important enough that companies are finding it profitable to advertise and promote them.
Whether it is through its payment software or operating system, Apple has emphasized security and privacy as an important differentiator in its products. Of course, unlike Google or Facebook, Apple does not explicitly make money from customer data, so it may have more incentive than others to incorporate these features. But it competes directly with Android and naturally plays an important role in shaping market expectations of what a product and service should look like.
These features possibly play an even more critical role outside the U.S., where privacy is under threat not only from online marketers and hackers but also from governments. In countries like China, where Apple sells millions of iPhones, these features are potentially very attractive to end users seeking to keep their data private from the prying eyes of authorities.
Regulators hum a different tune
It is clear that Apple is offering strong security to its users – so much so that the FBI accuses it of using security as a marketing gimmick.
It seems we have come full circle in the privacy debate. A few years ago, regulators were lamenting that businesses were invading consumers’ privacy, that firms lacked the proper incentives to protect it, and that markets needed stronger rules to make protection happen. Today, some of the same regulators are complaining that products are too secure and that firms need to relax that security in some special cases.
While the legality of this case will likely play out over time, we as end users can feel better that, at least in some markets, companies are responding to a growing consumer demand for products that more aggressively protect our privacy. Interestingly, Apple’s mobile operating system, iOS, offers security by default rather than requiring users to “opt in,” as most other products do. Moreover, these features are available to every user, whether they explicitly want them or not, suggesting we may be moving toward a world in which privacy is fundamental.
Data sharing gets complicated
At its core, this debate also points to a larger question over how a public-private partnership should be structured in a cyberworld and how and when a company needs to share details with either the government or possibly with other businesses for the public good.
When Google servers were breached in China in 2010, similar questions arose. United States government agencies wanted access to technical details on the breach so they could investigate the perpetrators more thoroughly and unearth possible espionage attempts by Chinese hackers. The breach appeared to be aimed at learning the identities of Chinese intelligence operatives in the U.S. who were under surveillance.
Information sharing on data breaches and security infiltration is something the government has widely encouraged, passing the Cybersecurity Information Sharing Act of 2015 last year to promote just that.
Unfortunately, various government agencies themselves have become self-interested parties in this game. In particular, the Snowden disclosures revealed that many government agencies conduct extensive surveillance on citizens, which arguably not only undermines our privacy but compromises our entire information security infrastructure.
These agencies, including the FBI in the current case, may have good intentions, but all of this has finally given profit-maximizing companies the incentives they need to do what regulators once wanted. Private businesses have every reason to avoid the bad press that usually follows disclosures like Snowden’s, so it’s no wonder they want to convince their customers that their data are safe and secure, even from the government.
With cybersecurity becoming a tool for government agencies to wage war with other nation-states, it is no surprise that companies want to share less, not more, even with their own governments.
The battle ahead
This case is obviously very specific. I suspect that, in this narrow case, Apple and law enforcement agencies will find a compromise.
But the Apple brand has likely strengthened. In the long run, its loyal customers will reward it for putting them first.
However, this question is not going away. With the “Internet of things” touted as the next big revolution, more and more devices will capture our very personal data – including our conversations.
This case could be a precedent-setting event that reshapes how our data are stored and managed in the future. But at least for now, some of the companies appear to be – or at least say they want to be – on our side in terms of protecting our privacy.
About the Author: Rahul Telang is Professor of Information Systems and Management, Carnegie Mellon University.