For thousands of years, the only two domains of war were land and sea. Nations fought battles with rudimentary weapons that were blunt, inaccurate, or massive, such as siege engines.
It was in World War I that a new domain – air – was added. Forty-three years later, in April 1961, space became the fourth domain when the Soviet Union launched Vostok 1 and Yuri Gagarin.
It would take another 50 years to add the next domain. In 2011, the United States Department of Defense officially incorporated cyberspace as the fifth domain of war. The advance of technology brought the ability to wage war and terrorism to our front door.
But it's the next domain where future wars will begin. It’s a domain not constrained to a single geography, nation, or political party. This domain is shaped over a lifetime, accelerated by rapid technological change, and fueled by recent advances in generative AI.
The next domain is headspace. It’s where countries will wage the war for the mind. And someone will win before the first shots are fired. In this war, shaping the narrative will be as crucial as shaping the battlefield is to military planners. The ability to influence perception will become more valuable than the ability to tell the truth.
Disinformation has emerged as a new kind of warfare. Adversaries are leveraging it to erode truth and influence people to think and act in ways they might not otherwise have considered. They are fueling intimidation of those voicing opinions. And online violence has spilled over into real-world violence on a global scale.
The nature of the threat and the vectors of attack are no longer just about ones and zeros. They're evolving into the manipulation of perception to achieve a goal. It's so vital to national security that the Pentagon was exploring ways to counter opinion and influence actions just three months after 9/11.
Originally called the Office of Strategic Influence, the program was short-lived and never gained traction, especially after allegations that it planted news stories, including false ones, around the globe.
Deceptions like these are not new tactics. During the Cold War, manipulating, influencing, deceiving, coercing, and persuading the press was a staple of intelligence tradecraft, including the use of operatives posing as journalists.
Some might think an idea that bad would have been permanently banished to the graveyard, disregarded with the question of, "What were they thinking?" But that’s not the case.
The Department of Defense now uses the term “Perception Management” to describe its relaunched initiative. The Influence and Perception Management Office has responsibilities that include "overseeing and coordinating the various counter-disinformation efforts being conducted by the military, which can include the U.S.'s own propaganda abroad."
Imagine a scenario where a foreign adversary wants to disrupt the banking system in the United States through a cyberattack. A brute-force attack would likely get repelled, so a higher level of sophistication and more advanced tactics are needed. It’s a mistake for SOC analysts and executives with cyber responsibility to think the next attack will target only endpoints, cloud infrastructure, or authentication technologies.
To break through this higher barrier, the next attack will look to shape the perception of frontline workers, predisposing them to a point of view favorable to the adversary. That could take the form of a combination of widespread social media campaigns, professional networking sites, sponsored webinars, industry events, and the parties surrounding conferences.
Sound ridiculous? It’s not. In fact, it’s become a favored tactic of intelligence officers from China, Russia, and Iran. The head of MI5 in the U.K. said an estimated "20,000 Britons have been approached by Chinese state actors on LinkedIn in hopes of stealing industrial or technological secrets."
Why should they break in when they can buy their way in? The Cyber Safety Review Board analyzed the tactics of hacking group LAPSUS$, and in a review of attacks by the group, there was this notable paragraph:
"The Board learned through attackers' public comments and interviews with targeted entities that attackers can socially engineer, coerce, or bribe telecommunications staff, including those in customer support centers, retail stores, and elsewhere. In comparable industries, such as banking, where employees need to access sensitive personal data to service customers, additional advanced insider threat controls and strong identity verification can be helpful in preventing threat actors from tricking, coercing, or bribing staff to act on their behalf."
If I were an adversary, I would invest more time and resources in influencing, not attacking. Trusted access beats a buffer overflow every day of the week.
In this new domain of war, being behind a keyboard is the same as being on the front line. Here’s an anecdote for the naysayers, the skeptical, and the nonbelievers:
As a detective, I specialized in behavioral analysis and interview and interrogation. I taught at the National Security Agency's National Cryptologic School for a time, instructing damage assessment agents from some of the most notorious espionage cases in U.S. history.
Without exception, access to our most closely guarded secrets wasn't gained solely through some novel technological technique. It was gained by exploiting the four essential components of human influence: Money, Ideology, Compromise, and Ego (MICE).
Threats aren’t confined to cyberspace anymore. Reviewing logs for indicators of compromise (IOCs) may reveal what’s happening inside a company, but not what’s happening inside a mind. Manipulating and persuading groups and individuals who align ideologically to launch an attack against a company is far easier than finding those who align geographically.
The sixth domain of war will look familiar in that it will be based on human error. It will attack human frailty and weakness, manipulating the perceptions and attitudes of the people tasked with defending and protecting our nation and critical infrastructure.
Morgan Wright, chief security advisor, SentinelOne