
U.S. Election Vulnerability to Foreign Electronic Interference


Key Judgments:


Election infrastructure

Advancements in technology have allowed foreign actors to interfere in numerous aspects of domestic elections. With respect to election infrastructure, countries including Iran, Russia, and China have used confidentiality, integrity, and availability attacks to disrupt proceedings and obtain access to confidential information. Highly networked and centralized election systems in the preparation (programming) phase present the greatest security threat: malware introduced to these systems can affect entire jurisdictions, districts, and states. To guard against these threats, the Federal Bureau of Investigation can encourage states to:

§ Limit the importation of external files, especially for large, centralized networks

§ Implement “air gaps” in data transmission to stop the spread of malware


Political parties, campaigns, and public officials

Political parties remain susceptible to Distributed Denial of Service (DDoS) attacks by foreign actors seeking to disrupt their websites. Meanwhile, campaigns and public officials are viable targets for botnet attacks, phishing attempts, and “hack and post” attacks that seek to gain access to private accounts. Not only can these attacks alter public opinion and electoral outcomes, but they can also raise public safety concerns when false threats are made. To avoid such consequences, agencies and companies can:

§ Conduct regular security checks to identify software and personnel vulnerabilities

§ Coordinate with internet service providers, technology designers, and private companies to identify threats and implement security measures

§ Encourage NGOs and private companies to engage with global cybersecurity initiatives

§ Advocate for increased funding to the Computer Crime and Intellectual Property Section of the Department of Justice’s Criminal Division and other such groups

§ Advocate to amend the Computer Fraud and Abuse Act to reflect modern cyber terminology and threat assessments


Influence on public opinion and preexisting divisions

As seen in prior election years, foreign intervention poses a significant threat to election security by shaping public perception and opinion. The constantly advancing quality of deep fakes and attack algorithms requires ongoing research to identify, counter, and prevent these threats. Public perceptions of reality and willingness to mobilize are especially susceptible to fabricated controversy, as seen in recent demonstrations. Such threats also undermine faith in U.S. election security, to which the democratic hallmarks of electoral participation and the peaceful transition of power are tied. To counteract these trends, the FBI can:

§ Increase crowdsourcing research to automatically detect falsified news

§ Work with states to promote information literacy within schools

§ Advocate for increased governmental funding for independent journalism

§ Advocate to amend or repeal Section 230 of the Communications Decency Act

§ Exchange research with DARPA regarding Media and Semantic Forensics



Cyber Operations


Targeting Election Infrastructure

The Cybersecurity and Infrastructure Security Agency (CISA) within the Department of Homeland Security continues to play a key role in assessing the security of U.S. election infrastructure.[1] Specifically, the National Risk Management Center (NRMC) conducts thorough cyber risk assessments which examine:

§ System preparation

§ Device networking

§ Centralization of infrastructure

§ Voter registration

§ Voting machines

§ Electronic submission of ballots


Cyber-attacks on election infrastructure can be separated into three categories:

1. Confidentiality Attacks – compromising voter registration data

2. Integrity Attacks – altering a system’s information

3. Availability Attacks – interrupting or disabling a system’s functionality

The damage these attacks can cause to election infrastructure is largely determined by whether the equipment is in use or being programmed, the extent to which systems are connected across jurisdictional boundaries, and the degree of centralization.
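The confidentiality/integrity/availability framing above maps onto concrete defensive checks. As one hedged illustration of detecting an integrity attack, the Python sketch below compares current file hashes in a hypothetical election-data directory against a previously recorded baseline; the directory layout, file names, and baseline format are assumptions for illustration, not any vendor's actual system.

```python
# Minimal sketch: detect integrity attacks by comparing file hashes against a
# trusted baseline recorded earlier. Paths and the baseline file are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_integrity(data_dir: str, baseline_file: str) -> list:
    """Compare current file hashes to a previously recorded baseline mapping."""
    baseline = json.loads(Path(baseline_file).read_text())  # {filename: hash}
    altered = []
    for name, expected in baseline.items():
        if sha256_of(Path(data_dir) / name) != expected:
            altered.append(name)  # contents no longer match the baseline
    return altered

if __name__ == "__main__":
    # Hypothetical usage: flag files changed since the baseline was recorded
    # at the end of the programming phase.
    for name in check_integrity("election_data", "baseline_hashes.json"):
        print(f"WARNING: {name} has been modified since baseline")
```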


System Preparation

Cyber-attacks during the preparation phase of election infrastructure have the potential to cause widespread damage. Malware introduced during a jurisdiction’s or state’s programming phase can affect all machines within that network. Malware introduced to a single machine in use, however, is less likely to affect an entire jurisdiction or state. Election officials and vendors involved in the infrastructure preparation process are therefore encouraged to reduce the number of files imported from external sources (e.g., ballot printers and registration databases), which may represent additional points of vulnerability if connected to the internet.
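Building on the same hashing idea as the sketch above, the following illustrates one way a jurisdiction could gate the few external file imports it does allow: a file is accepted only if its type is expected and its digest appears on an allowlist published out-of-band by the vendor. The file types, names, and allowlist mechanism are hypothetical.

```python
# Minimal sketch: accept an externally produced file (e.g., a ballot-definition
# or registration export) only if its type and SHA-256 digest are expected.
import hashlib
from pathlib import Path

EXPECTED_SUFFIXES = {".csv", ".xml", ".json"}   # formats the jurisdiction expects

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def import_external_file(path: str, approved_hashes: set) -> bool:
    """Accept an external file only if its type and digest are on the allowlist."""
    p = Path(path)
    if p.suffix.lower() not in EXPECTED_SUFFIXES:
        print(f"REJECTED {p.name}: unexpected file type")
        return False
    if sha256_of(p) not in approved_hashes:
        print(f"REJECTED {p.name}: digest not on the approved allowlist")
        return False
    print(f"ACCEPTED {p.name}")
    return True

# Hypothetical usage; the digest set would be distributed out-of-band:
# import_external_file("ballot_definitions.xml", {"<known-good sha256>"})
```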


Networking

While highly interconnected electronic systems promote efficiency by transmitting tabulations and other data at high speed, they also leave the integrity of the system more vulnerable to attack. Expansive networks allow hackers to access a larger number of machines (pollbooks, DRE voting machines, etc.), potentially crossing jurisdictional lines. For this reason, states with extensive networks are encouraged to implement air gaps: breaks in the system that require occasional manual transmission of data.


Centralization

The decentralized nature of national elections provides a baseline degree of security against foreign actors. The U.S. federal structure allows states to exercise discretion in their Election Day processes, making a coordinated attack on the entire nation’s voting infrastructure a difficult task. However, recent legislation intended to create greater election accessibility, organization, and efficiency has ultimately centralized some components of election infrastructure within states, raising security concerns. The 2002 Help America Vote Act (HAVA) created greater vulnerability to outside actors by mandating the creation of statewide voter registration rolls.[2] While statewide rolls reduce voter roll bloat and streamline operations, centralizing sensitive information creates a larger single point of vulnerability.


Options

§ Limit the importation of external files, especially for large, centralized networks

§ Implement more “air gaps” in data transmission to stop the spread of malware



Some organizations, including the Global Cybersecurity Alliance, consider air gaps insufficient to fully protect a system from outside threats.[3] Some systems are considered “air gapped” because firewalls, rather than true disconnections, are used to separate networks. Furthermore, recent evidence demonstrates that air gaps can be “bridged” via sound or malware-infected cameras with infrared capabilities. Figure 1 below demonstrates a security breach within an air-gapped system.[4] In 2019, the Defense Advanced Research Projects Agency (DARPA) began work on the Guaranteed Architecture for Physical Security (GAPS) program, which seeks to design a more secure system of electronic transmission free from cumbersome air gaps.[5] This research may produce a viable alternative to less reliable air gaps.


Figure 1: A security breach within an air-gapped system

Targeting Political Parties, Campaigns, and Public Officials

Foreign attacks on domestic elections are not limited to election infrastructure. In recent years, both state and non-state actors have sought to influence the operations and outcomes of elections by obtaining access to sensitive information from parties, campaigns, and public officials. Obtaining this data often involves the use of:

§ Denial of Service Attacks

§ Botnets

§ Phishing

§ Hack and Post Attacks


Parties

Political parties are perhaps most susceptible to Distributed Denial of Service (DDoS) attacks, in which attackers use botnets to disable a party’s website at critical times. These attacks are particularly difficult to thwart because botnets, though controlled by a single source, are composed of malware-infected computers from across the globe, making identification of the perpetrator more difficult.[6] According to the UK’s National Cyber Security Centre, prior to the 2019 General Election, party websites were overwhelmed by botnet requests, rendering the sites inoperable to citizens seeking information.[7] To prevent these attacks, governments and private companies should regularly update software and devote resources to actively monitoring botnet traffic.[8]
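One element of the prevention advice above is actively monitoring traffic for botnet-like behavior. As a rough illustration only, the sketch below flags client addresses whose request rate within a sliding window exceeds a threshold; the window size, threshold, and demo address are assumptions, and real DDoS mitigation is typically handled upstream by ISPs and content delivery networks.

```python
# Minimal sketch: flag client IPs whose request rate looks botnet-like.
# The window and threshold are illustrative assumptions, not production values.
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 100

_recent = defaultdict(deque)   # client IP -> recent request timestamps

def record_request(client_ip: str, now: float) -> bool:
    """Record a request at time `now` and return True if the client exceeds the limit."""
    window = _recent[client_ip]
    window.append(now)
    # Drop timestamps that have fallen outside the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW

if __name__ == "__main__":
    # Hypothetical usage: simulate a burst of 150 requests in 1.5 seconds
    # from a single (documentation-range) address.
    for i in range(150):
        flagged = record_request("203.0.113.7", now=1000.0 + i * 0.01)
    print("flagged as suspicious:", flagged)
```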


Campaigns

Cyber-attacks on campaigns have received increased attention in recent years. In 2019, Microsoft issued a statement revealing that Phosphorus (also known as APT 35), an Iran-based threat group, executed a series of over 2,700 phishing attempts to obtain account data of users linked to President Trump’s reelection campaign.[9] Microsoft’s Threat Intelligence Center swiftly identified the threat, notifying compromised parties and recommending use of its two-step verification feature in the future. Microsoft has also advised governments, companies, and NGOs to support worldwide cybersecurity by signing the 2018 Paris Call for Trust and Security in Cyberspace. The agreement has been signed by 81 states (including the U.S.), 36 local governments, 390 organizations, and 706 companies.[10] Increased training of campaign workers can also serve as a countermeasure to cyberattacks. Notably, some campaign digital directors have gone a step further, testing their employees’ security savvy by sending out fake phishing attempts and recording rates of successful infiltration.[11]
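The simulated phishing exercises described above usually begin with simple heuristics for spotting suspicious messages. The sketch below scores an email on a few common phishing signals; the keyword list, trusted domain, and threshold are illustrative assumptions and no substitute for vendor tooling or multi-factor authentication.

```python
# Minimal sketch: score an email for common phishing signals. The keywords,
# trusted domain, raw-IP check, and threshold are illustrative assumptions.
import re

TRUSTED_DOMAINS = {"example-campaign.org"}          # hypothetical internal domain
URGENT_PHRASES = ["verify your account", "password expires", "act immediately",
                  "unusual sign-in activity", "confirm your identity"]

def phishing_score(sender: str, subject: str, body: str) -> int:
    score = 0
    domain = sender.rsplit("@", 1)[-1].lower()
    text = (subject + " " + body).lower()
    if domain not in TRUSTED_DOMAINS:
        score += 1                                   # external or look-alike sender
    if any(phrase in text for phrase in URGENT_PHRASES):
        score += 2                                   # urgency / credential bait
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 2                                   # raw-IP link, common in phishing
    if "login" in text and domain not in TRUSTED_DOMAINS:
        score += 1                                   # external message asking to log in
    return score

if __name__ == "__main__":
    s = phishing_score(
        sender="security@examp1e-campaign.org",      # look-alike domain
        subject="Unusual sign-in activity detected",
        body="Please login at http://192.0.2.44/reset to confirm your identity.",
    )
    print("phishing score:", s, "-> quarantine" if s >= 3 else "-> deliver")
```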



Public Officials

Parties and campaigns are not the sole targets of cyberattacks. Foreign actors have targeted several public officials’ email and social media accounts to shape electoral outcomes and access sensitive information. In early 2019, Tampa Mayor Bob Buckhorn was the victim of one such hack and post attack. After gaining access to his personal Twitter account, the perpetrator posted roughly 60 messages with highly inflammatory content, including racist statements, a bomb threat, and child pornography.[12] Though the actor responsible remains unidentified, the importance of securing accounts and personal information on an individual basis remains clear. It is therefore recommended that candidates and public officials choose secure passwords, remain alert to suspicious emails, install malware-detecting software, and segregate their personal and professional accounts. As evidenced by the posts on Mayor Buckhorn’s account, easy access to an official’s account can not only offend the public but also raise serious public safety concerns when false threats are made.


Options

§ Conduct regular security checks to identify software and personnel vulnerabilities

§ Coordinate with internet service providers, technology designers, and private companies to identify threats and implement security measures[13]

§ Encourage NGOs and private companies to engage with global cybersecurity initiatives

§ Increase funding to the Computer Crime and Intellectual Property Section of the Department of Justice’s Criminal Division and other such groups

§ Amend the Computer Fraud and Abuse Act, and support similar proposals, to reflect modern cyber terminology and threat assessments[14]


Covert Operations


Influencing Public Opinion and Exacerbating Division

As media sources have increased exponentially over the years, so too have opportunities for spreading false information. The advent and subsequent popularity of social media have made platforms such as Instagram, Twitter, TikTok, and Reddit prime targets for foreign disinformation campaigns in recent years. Actors including Russia, China, and Iran have sought to shape public opinion and sow division through the creation of a false reality, chiefly via the use of botnets and ‘deep fakes.’


Constructing a False Reality

Political actors, both foreign and domestic, have shown a propensity for using public platforms to create false perceptions of reality in times of upheaval. The 2020 #dcblackout hoax was the product of one such campaign. During the nationwide protests of mid-2020, several botnet tweets went viral claiming that Washington, D.C. was under a government-orchestrated media blackout. These messages were then followed by a series of identical tweets from hacked accounts labeling the first wave of messages “misinformation.” Within the broader wave of 2020 misinformation campaigns, the #dcblackout episode is now seen as another attempt to undermine public trust in media outlets.[15] More recently, skilled hackers have used deep fake technology to shape public perceptions. Deep fakes refer to artificial media “created using techniques in machine learning (ML)—a subfield of AI—especially generative adversarial networks (GANs).”[16] Although most deep fakes today remain noticeable to the naked eye, rapid advancements in the technology will make distinguishing real from fabricated media more difficult.
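To make the GAN technique named in the definition above concrete, here is a minimal, illustrative PyTorch sketch of the adversarial setup: a generator learns to produce samples that a discriminator cannot distinguish from "real" data. The toy one-dimensional data, network sizes, and training settings are assumptions; actual deep fake models operate on images, audio, or video at far larger scale.

```python
# Minimal sketch of a generative adversarial network (GAN). The toy 1-D "real"
# distribution and tiny networks are illustrative assumptions only.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 1, 64

# Generator: maps random noise to a synthetic sample.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: estimates the probability that a sample is real.
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Toy "real" data: a Gaussian standing in for features of genuine media.
    real = torch.randn(batch, data_dim) * 0.5 + 3.0
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator update: push real samples toward 1, generated toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # Generator update: produce samples the discriminator scores as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

print("mean of generated samples:", G(torch.randn(1000, latent_dim)).mean().item())
```

Conceptually, the discriminator half of this loop is what detection efforts, including DARPA's media forensics programs described below, attempt to scale up and generalize.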


To keep pace with advancements in the automated creation of deep fakes, DARPA has been developing automated deep fake detection programs. DARPA’s Media Forensics (MediFor) program, which ended in 2021, was dedicated to developing these algorithms, while the Semantic Forensics (SemaFor) program is designed to expand on MediFor’s work and additionally categorize deep fakes as “benign or malicious” based on their content. Note that the identification of deep fakes may not prevent their destructive effects, and restricting their production could be viewed as a violation of free speech rights.


Exacerbating Public Divisions

Constructing a false reality has proven effective in exacerbating preexisting societal divisions. During a time in which political polarization has driven an ideological wedge between Americans, Russian “active measures” have, on more than one occasion, moved citizens to action.[17] On a smaller scale, in 2016 Russian-operated accounts remotely orchestrated opposing pro- and anti-Muslim demonstrations in Houston.[18] On a much larger scale, Russian cyber attempts to undermine the credibility of the 2020 election results, alongside statements by politicians and groups like QAnon, are regarded as a significant contributing factor in the January 6 Capitol attack.


Thus, cyber capabilities have not only galvanized citizens to demonstration and destructive action but have also undermined faith in the security of election infrastructure. In one NPR/PBS poll, 34% of Americans surveyed indicated that they did not have faith in the accuracy of the 2020 election results.[19] Should this issue persist or worsen, future elections may see decreased turnout, and the peaceful transfer of power may become less certain. This is a particularly salient concern if future presidents decline to formally concede power.


Response from the Technology Community

Following a rise in foreign intervention and the use of online platforms to incite violence and spread disinformation, companies such as Facebook (Meta) and Twitter have taken a more active role in policing posts. Twitter has made concerted efforts to fact-check posts by:

§ Adding mis/disinformation warnings

§ Linking users to verified government sources

§ Screening Tweets and removing violating accounts

§ Using pop-up warnings for users retweeting questionable material[20]


Below, Figures 2 and 3 demonstrate the metrics by which Twitter determines what level of action to take when presented with questionable content.[21][22]


Figure 2


Figure 3
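The figures above summarize how Twitter weighs whether content is misleading or manipulated against its potential for harm. The sketch below encodes one plausible reading of such a matrix as a simple lookup; the categories, harm levels, and resulting actions are assumptions for illustration and should not be read as the platform's exact policy.

```python
# Minimal sketch: a label/warn/remove decision as a lookup over
# (content category, propensity for harm). The matrix is an illustrative
# assumption loosely based on Twitter's published descriptions, not its policy.
from enum import Enum

class Category(Enum):
    MISLEADING = "misleading"      # demonstrably false or manipulated
    DISPUTED = "disputed"          # accuracy is contested
    UNVERIFIED = "unverified"      # unconfirmed at time of posting

class Harm(Enum):
    MODERATE = "moderate"
    SEVERE = "severe"

# Hypothetical action matrix: stricter action as category and harm worsen.
ACTION = {
    (Category.MISLEADING, Harm.SEVERE): "remove",
    (Category.MISLEADING, Harm.MODERATE): "label",
    (Category.DISPUTED, Harm.SEVERE): "warning",
    (Category.DISPUTED, Harm.MODERATE): "label",
    (Category.UNVERIFIED, Harm.SEVERE): "label",
    (Category.UNVERIFIED, Harm.MODERATE): "no action",
}

def decide(category: Category, harm: Harm) -> str:
    """Return the moderation action for a piece of questionable content."""
    return ACTION[(category, harm)]

if __name__ == "__main__":
    print(decide(Category.MISLEADING, Harm.SEVERE))   # -> remove
    print(decide(Category.DISPUTED, Harm.MODERATE))   # -> label
```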



Despite their stated intent to quell unfounded disputes and prevent violence, companies’ efforts at online censorship have not been fully embraced by the political community. Specifically, politicians have called for amendments to, or the outright repeal of, Section 230 of the 1996 Communications Decency Act.[23] Section 230 protects companies like Twitter from legal liability for their users’ actions (such as slander or inciting violence). Some argue in favor of its repeal, while others argue the law allows companies to unfairly censor the opinions of specific groups. However, altering or removing Section 230 could subject private companies to an inordinate number of lawsuits, thereby increasing censorship drastically and raising further questions regarding freedom of speech.[24]



Options

§ Increase crowdsourced research to automatically detect falsified news

§ Create an information literacy campaign for public schools

§ Increase governmental funding for independent journalism

§ Amend or repeal Section 230 of the Communications Decency Act

§ Continue funding DARPA research on Media and Semantic Forensics



Outlook and Implications


Technology and Research

In a world in which online activity is increasing exponentially, cyber-attacks threatening election security are here to stay. Rapid advancements in technology ensure that the algorithms used to commit cyber-attacks and create deep fake media will continue to proliferate and grow more sophisticated. As these algorithms advance, so too must defensive strategies. DARPA has remained on the cutting edge of research in this area with programs designed to advance system security beyond air gaps and to develop deep-fake-detecting algorithms, but connections between private and governmental systems make a united front against cyber threats a necessity. If government agencies continue to invest in security research and work with private industries and contractors to secure their data, U.S. elections are likely to remain secure.


Public Opinion

Countering ongoing foreign influence on public opinion, however, is less straightforward. While research is necessary to identify foreign influence, such as the construction of a false narrative through deep fakes, identification alone may not be enough. As demonstrated by several protests and the attack on the Capitol, false narratives can have destructive, tangible effects in the real world. Although this may prompt calls for increased posting restrictions from the government or from private companies, both suggestions raise concerns over limiting free speech. Therefore, alterations to statutes like Section 230 of the Communications Decency Act have crucial implications beyond preventing the spread of disinformation. Though the solution to foreign influence undermining public confidence in election security is unclear, the consequences of this phenomenon are clear. Should citizens continue to lose faith in the security of elections, voter turnout may fall, and peaceful transitions of power may become less certain.

[1] Cybersecurity and Infrastructure Security Agency. (2020). Critical infrastructure security and resilience note. https://tinyurl.com/3t7c6abx
[2] Help America Vote Act, Pub. L. No. 107-252, 116 Stat. 1666 (2002). https://tinyurl.com/2p99ymc4
[3] Chhillar, S. (n.d.). Common ICS cybersecurity myth #1: The air gap. Global Cybersecurity Alliance. https://tinyurl.com/2p93ufxc
[4] Trend Micro. (2017, Sept 28). A look at the threats to air gapped systems. https://tinyurl.com/3htsnsku
[5] DARPA. (2019, Jan 16). DARPA explores new computing architectures to deliver verifiable data assurances. https://www.darpa.mil/news-events/2019-01-16
[6] Computer Security Resource Center. (n.d.). Botnet. National Institute of Standards and Technology. https://tinyurl.com/5ya384jb
[7] National Cyber Security Centre. (n.d.). Guidance for political parties. https://tinyurl.com/6z3xnacc
[8] DataDome. (2021, Jan 8). How to stop and prevent botnet attacks on your website and server. https://tinyurl.com/2p8t85wp
[9] Burt, T. (2019, Oct 4). Recent cyberattacks require us all to be vigilant. Microsoft. https://tinyurl.com/53dpv93a
[10] France Diplomacy. (n.d.). Cybersecurity: Paris Call of 12 November 2018 for trust and security in cyberspace. https://tinyurl.com/dw5e3nyf
[11] Bond, S. (2020, Jan 28). 2020 political campaigns are trying to avoid a 2016-style hack. NPR. https://tinyurl.com/2p9h9ys2
[12] Altman, H. (2019, Feb 21). Weak password or ‘phishing’ emails may explain takeover of Buckhorn’s tweets, experts say. Tampa Bay Times. https://tinyurl.com/yckbcyvj
[13] Council on Foreign Relations. (2018, Nov). Zero botnets. https://tinyurl.com/2p9am9w4
[14] Taking down botnets: Public and private efforts to disrupt and dismantle cybercriminal networks, 113th Cong. (2014). https://tinyurl.com/4dac48hw
[15] Austermuhle, M., & Parks, M. (2020, June 1). ‘None of this is true’: Protests become fertile ground for online disinformation. NPR. https://tinyurl.com/yc534p8x
[16] Congressional Research Service. (2021, June 8). Deep fakes and national security. https://tinyurl.com/yc57ftue
[17] Ewing, P. (2018, April 25). The Russia investigations: What you need to know about Russian ‘active measures.’ NPR. https://tinyurl.com/2v6p4wdw
[18] Parks, M. (2017, Nov 1). How Russian-backed agitation online spilled into the real world in 2016. NPR. https://tinyurl.com/3s466673
[19] Montanaro, D. (2020, Dec 9). Poll: Just a quarter of Republicans accept election outcome. NPR. https://tinyurl.com/2p95vw6h
[20] Bond, S. (2020, Oct 9). Twitter expands warning labels to slow spread of election misinformation. NPR. https://tinyurl.com/5dzdmwfz
[21] Roth, Y., & Pickles, N. (2020, May 11). Updating our approach to misleading information. Twitter. https://tinyurl.com/2p8bhmt2
[22] Roth, Y., & Achuthan, A. (2020, Feb 4). Building rules in public: Our approach to synthetic and manipulated media. Twitter. https://tinyurl.com/yasjs2ru
[23] Siripurapu, A. (2020, Dec 2). Trump and Section 230: What to know. Council on Foreign Relations. https://tinyurl.com/ye282f7p
[24] Duan, C., & Westling, J. (2020, June 1). Will Trump’s executive order harm online speech? It already did. https://tinyurl.com/bdf9ryyk
