COMMENTARY: It’s no secret that organizations are grappling with whether artificial intelligence (AI) will help or hinder their work. Opinions range widely – from concerns that AI could replace jobs to optimism about enhanced human-machine collaboration or even full automation as the future path.
Regardless of stance, one thing is clear: enticing as today's AI products are, they share a common requirement, a need to build trust in AI systems.
AI has enormous potential to transform cybersecurity, but its promise comes with limitations that make human collaboration essential. Trust in AI cybersecurity hinges not only on its technical capabilities, but also on its ability to reliably analyze data in real-world environments. Today, it's not realistic to hand over full control of security decisions to AI. Instead, AI's strength lies in collaboration with human security experts who can guide, supervise, and evaluate its outputs.
Understand AI’s limits
In understanding the role of human collaboration in AI, we must start with an understanding of where AI stands today. Broadly, we can categorize AI into two types: augmented and autonomous.
Augmented AI assists humans in decision-making. It supports experts by processing vast amounts of data and identifying patterns, making security work faster and more efficient. However, decisions still rest with humans. Autonomous AI, by contrast, can operate independently, making decisions on its own. While this level of AI may be the goal, the cybersecurity industry remains in the augmented phase—meaning we are far from a reality where AI systems can make security decisions without human involvement.
Even with all the progress of the past two years, there's still a lack of confidence in the accuracy of the data that powers AI. AI algorithms rely on vast datasets to identify trends and anomalies, yet these systems are still evolving. To make reliable decisions, AI systems need data accuracy that's consistent and nearly flawless. Until we achieve that level of precision, humans remain necessary as a final check on AI's outputs. Take today's security operations centers (SOCs), for example: AI can flag threats, but it's the analysts who make the final call.
While examples like these show the value of AI, the technology also presents a double-edged sword. Just as security teams use AI to bolster defenses, attackers also adopt AI to amplify their criminal activities. AI can help cybercriminals write sophisticated phishing emails in multiple languages, aid them in navigating networks and identifying sensitive data to exfiltrate, and even mimic someone’s likeness to gain access to sensitive information. This evolution of AI attacks creates a need for an even stronger human presence as a countermeasure.
As both defenders and attackers leverage AI, the human element remains the final line of defense. Security teams must bring human insight, judgment, and intuition to the table – qualities that can outwit even the most advanced AI-driven attacks. The partnership of human and machine is not simply about achieving better security: it's about ensuring resilience in an environment where both good and bad actors wield AI.
What human collaboration with AI looks like
Most security organizations recognize the need for human oversight, and they can take these four practical steps to foster a working relationship between human beings and AI:
Human-AI collaboration in cybersecurity will mature. As AI's reliability and data confidence improve, we may reach a point where we can delegate more security responsibilities to it. Over time, human roles could shift from day-to-day oversight to strategic direction and high-level problem solving.
Imagine a future in which AI autonomously handles a significant portion of threat detection and response, making real-time decisions to thwart attacks without waiting for human input. We can reach this vision of cybersecurity, but it hinges on achieving trust in AI’s abilities – a trust we can only build through careful collaboration, intentional data stewardship, and a commitment to balancing AI’s power with caution.
Phil Calvin, chief product officer, Delinea
SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.