
Agentic AI took over the RSA Conference 2025


COMMENTARY: Across RSA Conference 2025, a deeper shift was underway. The industry no longer frames AI as a passive assistant. Instead, we're seeing the rise of a new kind of system: intelligent agents that don't just recommend actions but take them independently. These technologies navigate complexity, orchestrate tasks, and make decisions with growing confidence, reshaping how enterprises approach security and governance.

This emerging approach, often called Agentic AI, signals a fundamental change.

While banners still shout about AI-driven everything, a closer look reveals something more consequential. From red-teaming autonomous systems to securing AI supply chains, the message is clear: AI has moved off the sidelines and into operational command.


Unlike traditional AI tools that rely on human prompts, Agentic AI refers to systems designed to plan, decide, and act on their own within set goals. Instead of waiting for instructions, they proactively complete tasks — sometimes chaining together multiple steps — and adapt to changing conditions without micromanagement.

Think of it less as a smart assistant, and more as a co-pilot that can navigate complex environments alongside human operators.

Bounded autonomy: opportunity and risk

This move toward autonomous decision-making isn't just technical. It’s strategic. For years, security teams have automated repetitive processes to gain speed and efficiency. Now, RSAC attendees are chasing a more ambitious goal: to reduce noise, respond to emerging threats, and preemptively address risks, sometimes without human prompting.

However, autonomy introduces real challenges. Systems operating at machine speed require a new kind of oversight. It’s no longer enough to review outcomes after the fact. Enterprises must design systems with clear boundaries from the start, shaping what actions AI can take, when, and how.

It's obvious from conversations at this year's conference that security leaders are thinking hard about how to build trust into these systems. Transparency, explainability, and human-in-the-loop design aren't optional anymore. They're critical safeguards. The excitement around autonomous systems is palpable onsite, and so is the understanding that without governance, speed becomes a liability rather than an advantage.

The shift isn't about moving faster at all costs. It's about moving smarter, with resilience built into every decision an agent makes.

Security at machine speed

Beyond the buzzwords we heard on the expo floor, conversations reveal a more measured view. Security teams are waking up to a new reality: defenses operating at human speed can no longer keep pace with threats accelerating at machine speed.

The most compelling demonstrations at RSAC 2025 weren’t about flash. They were about function: silent agents scanning environments for misconfigurations before vulnerabilities emerge, systems autonomously closing compliance gaps, workflows preventing incidents before a human even sounds the alarm.

These systems have the potential to supercharge security operations. But seasoned CISOs are focused on the realities that come with autonomy. They are looking for clear mechanisms to monitor behavior, intervene when mistakes happen, and maintain rigorous audit trails that stand up to regulatory scrutiny.

The best designs do not hand over decision-making blindly. They strike a balance — empowering AI to act, while ensuring human oversight remains a core part of the process.

Autonomy is not a shortcut. It's a responsibility, one that security teams must manage with precision.

Design for reality, not hype

Agentic AI isn't about moving faster. It's about moving differently. It demands a shift in the tools security teams use, as well as the principles they defend. Autonomy without oversight isn't innovation — it's exposure.

In this new era, success won’t come from chasing the loudest promises or racing toward the next big breakthrough. It will come from designing systems where power and responsibility are inseparable, where autonomy operates within human-defined limits and remains open to scrutiny. The future isn’t about eliminating human judgment: it’s about embedding it deeper into the architecture of decision-making itself.

The organizations that thrive won’t be the ones that hand over control or surrender to the speed of technology. They will build clear frameworks, anticipate failure points, and maintain the ability to intervene when it matters most. As intelligent systems take on more responsibility, the real advantage will belong to those who never lose sight of who’s ultimately accountable and design their AI not as replacements, but as extensions of human resilience, trust, and intent.

At next year’s RSA Conference, I look forward to seeing which companies rise to the challenge and set themselves up to thrive in 2026 and beyond.

Gal Ringel, co-founder and CEO, MineOS

SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Each contribution has a goal of bringing a unique voice to important cybersecurity topics. Content strives to be of the highest quality, objective and non-commercial.
