The 2016 U.S. presidential election was a wake-up call that brought the issue of election security front and center for many Americans. Since then, there has been growing concern about the existential risks facing our democratic system. In 2016, we saw a wide range of attacks – from email leaks to social media propaganda to attacks on voting systems in 39 states – that left many wondering what risks we could be facing in the recent 2018 midterm elections.

Although the midterms passed with fewer apparent cyber incidents than in 2016, in reality the extent of any illicit activity may not be known for some time. Only in December 2018 did the National Republican Congressional Committee (NRCC) belatedly reveal that the organization had been breached the previous April and that the email accounts of four top NRCC aides had been compromised.

With the scope and types of attacks constantly evolving, it’s clear there is no end in sight for election interference. Every election that passes raises the stakes of election security and magnifies cyber threats in the lead-up to the 2020 presidential election.
But it’s not all malware,
phishing and breaches. There’s another type of threat that could prove more
detrimental to U.S. elections than cyberattacks: influence operations. These operations – such as those that have flooded Twitter and other social media platforms – rely on accounts that spread propaganda and disinformation with the goal of influencing American opinions and, ultimately, voting decisions.

Inauthentic accounts are typically
at the root of these campaigns. Between April and September 2018, Facebook
alone removed 1.5 billion fake accounts. While social media companies have made
strides in combating fake accounts, the problem persists. So far, these companies have mainly sought to take down accounts on an as-discovered basis, such as in August 2018, when Facebook deleted more than 600 accounts meant to influence politics. More
recently, there’s been increased pressure to build tools that detect and disable
fake accounts before they cause problems, such as tools Instagram
developed to help identify accounts that buy likes and followers.

Other measures meant to help users determine what is true on social media have been in place for several years, such as verification marks for the accounts of celebrities, media publications or brands. However, even these can be faked by
inauthentic profiles. Instagram, Facebook and Twitter all use a white checkmark
with a blue border to indicate an account is verified, which can be easily
mimicked if people aren’t paying close attention.

Influence operations are not
restricted just to social media. During election season, it’s very common for
political candidates and parties to send emails to voter groups, which means it
can be quite easy for inauthentic groups to send false emails in an attempt to
influence these voters. Receiving an email that appears to be from a preferred
candidate – or, potentially worse, an opposing candidate – can easily affect
how a citizen chooses to vote when the polls open, especially if that email is
filled with disinformation. Unfortunately, people often fail to recognize spoofed emails and are more likely to believe phony emails that appear to come from the political parties with which they affiliate. Moreover, both Democrats and Republicans are about 15 percent more likely to believe fake news headlines that align with their own ideology.

Ultimately, what makes
influence operations so difficult to combat is that there are no clear defensive steps or fail-safe products to protect against this type of attack. Only with
increased awareness, education and due diligence will people be able to protect
themselves from the spread of disinformation tailored to influence their
perceptions and thereby their votes. Whether on social media, email or any
other place people are choosing to get information about an upcoming election
and its candidates, it’s important that people exercise critical thinking
skills and remain vigilant in fact-checking the news, posts and messages
they’re consuming on a daily basis.

When we go to the movies, we allow our minds to perform a “suspension of disbelief” trick that switches off our logic and skepticism so we can enjoy the fiction. I recommend that we all practice the opposite, a “suspension of belief,” when reading news or posts about political candidates or elections.

Nick Bilogorskiy, Juniper Networks