
Identifying ‘normal’ behavior combines art with science for security teams

Security teams can find it challenging to define "normal" user behavior as more employees work from home or while traveling. (Photo by Leon Neal/Getty Images)

Now that hybrid and remote working models have forever changed IT environments, it has become more challenging than ever for security teams to determine what passes for normal behavior.

In the past, when workforces were mostly office-based or followed regular travel routines, typical behavior was easier to define and abnormal behavior easier to spot: most activity revolved around an on-premises corporate network that security teams could manage and control.

“Today, if an employee logs in from an unusual location, it may be a threat actor — or, it may simply be that the employee has decided to work away from their home,” said Stacy Hughes, senior vice president and CISO at Voya Financial.

As organizations struggle with this changed reality, Hughes said many are leaning on a combination of SIEM tools, endpoint solutions, and cloud posture management that leverages automation and behavior analytics.

“These complexities are challenging — what I would call both an art and a science,” said Hughes. “The science involves existing use cases and established frameworks such as MITRE ATT&CK to assist in overall threat modeling. The art requires partnering with specific business, application, and development teams to fully understand how applications work and what constitutes unusual activity or behavior. Together, the art and science are utilized by information security teams to develop risk-based alerting.” 
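To make the “science” half of that pairing concrete, the sketch below shows one way risk-based alerting can work: a detection mapped to a MITRE ATT&CK technique is weighted by business context supplied by the application teams Hughes describes. The technique IDs are real ATT&CK identifiers, but the base risks, asset tiers, and weights are illustrative assumptions, not Voya Financial’s actual model.

    # Hypothetical risk-based alerting: ATT&CK technique risk x business context.
    TECHNIQUE_BASE_RISK = {
        "T1078": 0.6,   # Valid Accounts
        "T1110": 0.5,   # Brute Force
        "T1567": 0.8,   # Exfiltration Over Web Service
    }
    ASSET_TIER_MULTIPLIER = {"crown_jewel": 1.5, "standard": 1.0, "low": 0.5}

    def alert_priority(technique_id: str, asset_tier: str) -> float:
        # Unknown techniques and tiers fall back to conservative defaults.
        base = TECHNIQUE_BASE_RISK.get(technique_id, 0.3)
        return base * ASSET_TIER_MULTIPLIER.get(asset_tier, 1.0)

    # Example: a Valid Accounts detection on a crown-jewel application
    # outranks the same detection on a low-value test system.
    assert alert_priority("T1078", "crown_jewel") > alert_priority("T1078", "low")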

Voya Financial CISO Stacy Hughes says managing behavioral patterns today has become part science and part art. (Credit: J. Dixx Photography)

At NASA, Mike Witt, CISO for cybersecurity and privacy, said the space agency has been busy implementing network-based flow detection, a tool that monitors traffic and flags risky patterns such as potentially high-risk data exfiltration or enumeration activity.
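Flow-based detections of this kind typically compare a host’s current traffic to its own history. The snippet below is a minimal sketch under assumed inputs — simplified flow records and made-up thresholds — and is not a description of NASA’s tooling.

    # Illustrative flow analysis: flag possible exfiltration or enumeration.
    from collections import defaultdict
    from statistics import mean, stdev

    def flag_risky_hosts(flows, baseline, z_threshold=3.0, scan_threshold=100):
        """flows: dicts like {"src": "10.0.0.5", "dst": "203.0.113.9",
                              "bytes_out": 1200, "dst_port": 443}
           baseline: {src_ip: [historical daily outbound byte totals]}"""
        bytes_out = defaultdict(int)
        targets = defaultdict(set)
        for f in flows:
            bytes_out[f["src"]] += f["bytes_out"]
            targets[f["src"]].add((f["dst"], f["dst_port"]))

        alerts = []
        for src, total in bytes_out.items():
            history = baseline.get(src, [])
            # Exfiltration signal: outbound volume far above this host's history.
            if len(history) >= 2:
                sd = stdev(history)
                if sd > 0 and (total - mean(history)) / sd > z_threshold:
                    alerts.append((src, "unusual outbound volume"))
            # Enumeration signal: one source touching many host/port pairs.
            if len(targets[src]) > scan_threshold:
                alerts.append((src, "possible enumeration"))
        return alerts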

Witt said NASA has also been evaluating a cloud-based SIEM, a tool with a user behavior analytics (UBA) module that would let NASA visualize access patterns based on the user and a subset of collaboration applications.

But even with these new tools, Witt said NASA’s security team still finds it challenging to define “normal” behavior across different organizations and groups of individuals.

“What’s normal for a system administrator is not necessarily normal for an executive assistant,” Witt said. “Further, what’s normal for a system administrator in one part of NASA may be very different than in another. An example of this would be a database administrator versus a system administrator for a flight system in orbit. Using a model trained on a different agency would not necessarily speed the time needed to establish typical patterns, since the work done at NASA is extremely diverse in nature.”

Witt added that defining what’s normal for one user and abnormal for another depends on how users are grouped: by role, geographic location, and responsibilities. Even very basic behaviors, such as normal log-in hours, often deviate from the baseline when employees travel or are working to meet a deadline.
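One way to operationalize that grouping is to baseline each group separately and tolerate occasional deviations rather than alerting on the first one. The following is a hedged sketch; the group definitions, escalation rule, and thresholds are assumptions for illustration only.

    # Per-group log-in-hour baselines with tolerance for one-off deviations.
    from collections import defaultdict

    def build_baselines(events):
        """events: iterable of (group, hour_of_day) from historical log-ins."""
        hours = defaultdict(list)
        for group, hour in events:
            hours[group].append(hour)
        # Baseline = full observed range of log-in hours for each group.
        return {g: (min(h), max(h)) for g, h in hours.items()}

    def score_login(group, hour, baselines, recent_outliers, escalate_after=3):
        # A single out-of-hours log-in (travel, a deadline) is only noted;
        # repeated deviations are escalated for review.
        low, high = baselines.get(group, (0, 23))
        if low <= hour <= high:
            return "normal"
        recent_outliers[group] = recent_outliers.get(group, 0) + 1
        return "review" if recent_outliers[group] >= escalate_after else "note"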

“For general employee behavior across a large scale of users, this is an extremely difficult problem,” Witt said. “Processes configured to specific organizations as part of the implementation process instead of trying to implement a one-size-fits-all solution is a sound approach for how this will be solved in the long-term.”

What’s normal user behavior, anyway?

Mike Britton, CISO for Abnormal Security, said security teams will continue to struggle with these behavioral issues because there’s really not a universal definition of what’s normal.

“What’s normal for an IT employee who works odd hours and can be online unexpectedly in the middle of the night looks different than someone from HR who handles sensitive information on a regular basis,” Britton said. “The key concept is: What's normal for that user? The ability for technology to capture patterns and behaviors can help establish a baseline of ‘normal’ for that particular user and help identify anomalous behavior that could indicate an account takeover or malicious insider threat.”

Behavior-based security is an approach where all relevant activity is monitored so deviations from normal behavior patterns can be flagged and addressed. It’s an advanced concept, which shapes how companies prevent fraud, secure the endpoint, and manage security awareness strategies. Britton said Abnormal does this by determining malicious behavior through thousands of identity and context signals.

“Some of these are visible, such as the tone and context of the message,” Britton said. “Are they requesting a financial payment, or are they using a different level of formality? But many aren’t readily apparent to the human eye, such as how often these two individuals communicate, or the identity signals that correlate to normal behavior. Things like the geolocation of the log-in and which browser and operating system they are accessing the account from provide powerful signals to determine abnormal behavior and likely attacks.”
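Those signals can be reduced to a simple additive score for triage. The sketch below assumes a hypothetical per-user profile and hand-picked weights; it is not Abnormal Security’s actual model, which Britton says draws on thousands of signals.

    # Toy risk score built from a handful of identity and context signals.
    KNOWN_GOOD = {
        "alice@example.com": {
            "countries": {"US"},
            "browser_os": {"Chrome/Windows"},
            "frequent_contacts": {"bob@example.com"},
        }
    }

    def risk_score(user, event):
        """event: dict with 'country', 'browser_os', 'counterparty',
           and 'asks_for_payment' keys."""
        profile = KNOWN_GOOD.get(user, {})
        score = 0.0
        if event["country"] not in profile.get("countries", set()):
            score += 0.4    # log-in geolocation the user has never used
        if event["browser_os"] not in profile.get("browser_os", set()):
            score += 0.2    # unfamiliar browser/operating-system pair
        if event["counterparty"] not in profile.get("frequent_contacts", set()):
            score += 0.2    # these two parties rarely or never communicate
        if event.get("asks_for_payment"):
            score += 0.3    # content signal: unusual financial request
        return score        # e.g., route to review when score >= 0.5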

John Steven, CTO at ThreatModeler, said while there may not be a universal profile of “normal employee behavior” that everyone agrees on, the practices and conventions of an organization and its business units do create clear patterns. Steven said teams follow a certain cadence, a release schedule, and specific practices for ad-hoc actions such as hot fixes or patching.

“Insider threat detection technologies can sometimes follow absolute heuristics, but often work effectively by identifying anomalous behavior relative to that team’s norms,” Steven said. “These questions are foundational for behavior-based security.”
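The difference between those two approaches can be illustrated with a small sketch. The team history and tolerance below are invented for the example; the point is that the relative check uses a team’s own observed cadence rather than a fixed rule.

    # Absolute heuristic vs. a heuristic relative to the team's own norms.
    def is_anomalous_absolute(deploy_hour):
        # Fixed rule: anything outside business hours is suspicious.
        return deploy_hour < 8 or deploy_hour > 18

    def is_anomalous_relative(team, deploy_hour, team_history, tolerance=2):
        # Suspicious only if outside *this* team's observed deployment hours;
        # a team that routinely ships at night stays quiet.
        hours = team_history.get(team, [])
        if not hours:
            return False    # no baseline yet; defer to other signals
        return deploy_hour < min(hours) - tolerance or deploy_hour > max(hours) + tolerance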

Attackers follow the path of least resistance

ThreatModeler’s Steven said attackers continue to follow the path of least resistance: they’ll find and exploit the platforms that are most poorly configured in terms of permissions and verification of commits and signing, and look for opportunities where coarse attribution slows or prevents traceability.

“There’s solid evidence that insider threats actively exploit ad-hoc workflows, such as hot fixes or out-of-band security patching, because during these events the ‘excitement’ might allow them to stuff malware into the deployment while circumventing the verification and validation that occurs on a normal release-promotion process,” Steven said.
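A straightforward countermeasure is to hold ad-hoc workflows to the same verification bar as a scheduled release. The check below is a minimal sketch; the field names and required-check list are assumptions, not a reference to any specific pipeline.

    # Block any deployment, hot fix or not, that skips required checks.
    REQUIRED_CHECKS = {"code_review", "artifact_signature", "ci_tests"}

    def audit_deployment(deploy):
        """deploy: dict like {"id": "hotfix-421", "type": "hotfix",
                              "checks_passed": {"ci_tests"}}"""
        missing = REQUIRED_CHECKS - set(deploy.get("checks_passed", set()))
        if missing:
            return f"{deploy['id']}: blocked, missing {sorted(missing)}"
        return f"{deploy['id']}: ok"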

NASA’s Witt said adversaries continue to lean on social engineering to either initiate or expand attacks. Witt said the industry has seen this in recent public disclosures about attackers spamming push authorization requests until a user approves a log-in, bypassing multi-factor authentication.

“This is one of the reasons the federal government has been moving to phishing-resistant multi-factor authentication as part of moving towards a zero-trust architecture,” said Witt. “The transition to a zero-trust architecture will likely include some behavior-based security tools, but each agency [or organization] will decide for themselves where that best works for them.”
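One common defensive pattern against prompt spamming is to rate-limit push prompts and require a stronger factor once the limit is hit. The sketch below uses made-up thresholds and an in-memory store; it illustrates the pattern rather than any particular vendor’s product.

    # Detect MFA "push bombing": too many prompts in a short window.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 300
    MAX_PROMPTS = 5
    _prompts = defaultdict(deque)   # user -> timestamps of recent push prompts

    def record_push_prompt(user, now=None):
        now = time.time() if now is None else now
        q = _prompts[user]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_PROMPTS:
            # Stop sending pushes and require a phishing-resistant factor
            # (for example, a FIDO2 security key) for this account.
            return "lock_push_and_step_up"
        return "allow"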
