
RSAC 2025: ‘If everything is AI, then nothing is AI’


Much of the thinking around artificial intelligence (AI) doesn't truly reflect what AI is or what it does, two researchers said in different presentations at the BSides SF and RSAC cybersecurity conferences in San Francisco last week.

"The people who are talking about AI are making it up," said Ira Winkler, Field CISO of CYE Security, during a talk at RSAC. "They often don't know what they're talking about, like AI is some magical entity. You're not giving people any advice on how to deal with AI."

Both Winkler and Ian Amit, CEO of Gomboc.ai, stressed that AI is not one single monolithic thing that's going to take over the world. Instead, it's an application of data and processing power to different mathematical algorithms, some of them decades old, that are designed for discrete purposes.

"Anyone generically using the term 'AI'," Winkler said in his presentation slides, "is doing you a disservice."

We already live in an AI world

Besides, Amit and Winkler both said, we've been using AI for many years.

For example, most of the attendees at BSides and RSAC safely got to San Francisco by relying on one of the oldest and most successful forms of AI in existence: the traffic alert and collision avoidance system (TCAS) used by commercial airliners since the late 1980s.

TCAS involves on-board computers aggregating data about aircraft position, altitude, speed and heading, making quick calculations and talking to similar computers on nearby airliners.

The computers don't talk to air traffic controllers, who are too far away. And they communicate to pilots only after the TCAS systems have reached a decision about whether an aircraft should climb, descend or maintain altitude to avoid midair collision.

"Humans are not involved in this process because they are too slow," said Amit during his BSides talk April 26. "This is a classic case of proper AI use."

The TCAS computers may not be "agentic AI" as they don't learn from their mistakes, but they do have agency in that they make life-or-death decisions without human input.

They also mutually cooperate. TCAS systems on different aircraft talk among themselves to make sure that the pilots of each plane get complementary alerts, such as one plane being told to climb while the other descends. Several midair collisions have resulted when pilots ignored those coordinated alerts.
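To make the idea concrete, here's a toy sketch, in Python, of the kind of coordinated, deterministic logic Amit described. This is not the real TCAS algorithm, which is far more involved; the lookahead window, separation threshold and lower-ID-climbs tie-break here are invented purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Aircraft:
        transponder_id: int       # hypothetical stand-in for a Mode S address
        altitude_ft: float
        vertical_speed_fpm: float

    def resolution_advisories(a, b, lookahead_min=0.5, threat_ft=600.0):
        """Issue complementary climb/descend advisories if two aircraft are
        projected to lose vertical separation; otherwise both maintain."""
        # Project each aircraft's altitude a short way into the future.
        alt_a = a.altitude_ft + a.vertical_speed_fpm * lookahead_min
        alt_b = b.altitude_ft + b.vertical_speed_fpm * lookahead_min
        if abs(alt_a - alt_b) > threat_ft:
            return {a.transponder_id: "maintain", b.transponder_id: "maintain"}
        # Deterministic tie-break so both computers reach the same split
        # decision with no human in the loop: the lower-ID aircraft climbs.
        lower, higher = sorted((a, b), key=lambda ac: ac.transponder_id)
        return {lower.transponder_id: "climb", higher.transponder_id: "descend"}

    print(resolution_advisories(Aircraft(101, 31000, 0), Aircraft(202, 30900, -500)))
    # -> {101: 'climb', 202: 'descend'}

Because the tie-break is a fixed rule rather than a sampled guess, both computers independently arrive at the same split decision, every time.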

Other successful applications of AI, Winkler pointed out, include optical character recognition, which lets computers read printed text; voice-recognition software; the Netflix and Amazon recommendation engines; the Alexa and Siri voice assistants; and the predictive text on your smartphone.

"AI is not new," said Winkler. "It just was being applied to finite problems."

In the brick-and-mortar (and life-or-death) world, San Francisco itself is full of Waymo driverless taxis that use AI, GPS and cameras to avoid collisions with pedestrians, light poles or other cars as they zip along the city streets.

We survived our ride in one. The experience would have been even more fun if the Waymo could have taken us to the airport the next morning, past two dozen billboards touting magic-sounding AI solutions for one thing or another.

"If everything is AI," quipped Winkler — and everything at RSAC did indeed seem that way — "then nothing is AI."

Use your AImagination

Is AI going to take away your job? Is it going to take over the world? Will it be a tool for evil?

Maybe, said Winkler, but he added that "all these scary things we use to talk about AI could have been said about computers in general 30 years ago."

In reality, he said, many of the AI algorithms used today were created decades ago. They've just had to wait for data sets large enough, and processors fast enough, to finally make them applicable.

As for those algorithms themselves, Winkler displayed a few formulae on the projection screen during his talk but admitted that he didn't understand them either.

"If you want to try to understand your future," he said, "try to take some college-level math."

How should we think about AI? We need to understand, both Amit and Winkler stressed, that successful AI models are trained on very specific data to do very specific things, using algorithms that feed prior results back into the model to progressively improve its output.
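That feedback of prior results is easier to see in code than in prose. Here's a minimal sketch of the idea using gradient descent; the function being minimized, the learning rate and the step count are all made up for illustration.

    def refine(guess, learning_rate=0.1, steps=50):
        """Minimize f(x) = (x - 3)**2 by repeatedly correcting the last answer."""
        for _ in range(steps):
            gradient = 2 * (guess - 3)                 # slope of f at the current guess
            guess = guess - learning_rate * gradient   # prior result feeds the next step
        return guess

    print(round(refine(guess=10.0), 4))  # converges toward 3.0

Each pass starts from the previous answer and nudges it closer to the target, which is the basic loop underneath far fancier training schemes.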

Amit made a clear distinction between generative and deterministic AI, and Winkler between unsupervised and supervised machine learning.

While the two pairs of opposites don't map onto each other exactly, both speakers made clear that generative AI and unsupervised machine learning can detect patterns that elude humans. A famous, if never fully substantiated, example is the Target marketing algorithm that, based on her previous purchases, mailed coupons for diapers and cribs to a teenage girl whose father didn't know she was pregnant.
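For readers who want Winkler's distinction in concrete terms, here's a rough sketch contrasting supervised and unsupervised learning, assuming scikit-learn is available; the toy shopper data, labels and cluster count are invented for illustration.

    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    # Two made-up features per shopper: weekly visits, average basket size.
    X = [[1, 10], [2, 12], [1, 11], [8, 40], [9, 42], [8, 41]]
    y = [0, 0, 0, 1, 1, 1]   # human-supplied labels: 0 = casual, 1 = stock-up

    # Supervised: the model learns a mapping from features to known labels.
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[9, 39]]))   # -> [1]

    # Unsupervised: no labels at all; the algorithm groups the data itself,
    # the way a marketing model might surface a pattern no analyst asked for.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)               # two clusters, discovered from the data's shape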

The problem, as most of us know, is that generative AI, in its effort to provide useful answers, can also make things up or "hallucinate."

Amit said he asked three different large language models (LLMs) to tell him which kinds of fruit are red on the outside and green on the inside. The correct answer is "none," yet all three models told him the answer was "watermelon." Some even generated images of reverse-color watermelons to prove it.

The share of erroneous answers seems to be growing as LLMs get more sophisticated, AI researchers told the New York Times this week. 

Likewise, Amit was skeptical of generative AI's ability to write flawless computer software, a very real concern when the head of Microsoft says AI is writing as much as 30% of the company's code.

The problem isn't when the code doesn't run, Amit added, but when it does. If you don't know how the AI came up with the original code, how can you properly change the code when it's time to update it? Can we really count on generative AI, he asked, to come up with reliable, factual code that doesn't use nonexistent libraries?

"No," he said. "I need something very specific. I need planes to not crash into each other. I can't rely on something that gives me different answers if I ask it the same question twice."

Pointing in the right direction

What we need, Amit said, is more deterministic AI, or AI tools that are trained on a certain fixed set of data to solve specific problems, even if that problem is how to speedily write accurate software.

"When it comes to precise sciences, like engineering, I need something deterministic and predictable," he said. "I need accurate, correct, deterministic, no-excuses code fix."

And that requires specific, narrowly focused AI tools, with different ways of using them — and different rules governing them. There's no way that one law can regulate self-driving cars, computer vision, data mining, deepfakes, robots, and LLMs, Winkler said.

"You need to understand: What do I need to regulate?" he added.

Unless we recognize AI for what it really is — a series of processes, not a sinister entity — we won't make rational decisions about how to use, manage and control it, and what its limits are.

"You can't just AI your way out of most problems," said Amit. "You actually have to solve them like a human being."

