Technologist Bruce Schneier on security, society and why we need 'public AI' models


In his keynote speech at the Secure Open Source Software (SOSS) Fusion Conference in Atlanta, renowned security expert Bruce Schneier discussed the promises and threats of artificial intelligence (AI) for cybersecurity and society.

Schneier opened by saying, “AI is a complicated word. When I think about how technologies replace people, I think of them as improving in one or more of four dimensions: speed, scale, scope, and sophistication. AIs aren’t better at trading than humans are. They are just faster.” Where it gets interesting is when that speed fundamentally changes the nature of the activity.

For example, he said, “High-frequency trading (HFT) is not just faster trading. It’s a different sort of animal. This is why we’re worried about AI, social media, and democracy. The scope and scale of AI agents are so great that they change the nature of social media.” Indeed, AI political bots are already affecting the US election.

Another concern Schneier raised is that AIs make mistakes that aren’t like those made by people. “AI will make more systematic mistakes,” he warned. “AIs at this point don’t have the common sense baseline humans have.” This lack of common sense could lead to pervasive errors when AI is applied to critical decision-making processes.

That’s not to say AIs can’t be useful — they can be. Schneier gave an example: “AI can monitor networks and do source code and vulnerability scanning. These are all areas where humans can do it, but we’re too slow for when things happen in real time. Even if AI could do a mediocre job at reviewing all of the source code, that would be phenomenal, and there would be a lot of work in all of these areas.”

Specifically, he continued, “I think we’re going to see AI doing the first level of triage with security issues. I see them as forensic assistants helping in analyzing data. We’re getting a lot of data about threat actors and their actions, and we need somebody to look through it.” 

Schneier suggested that AI can help fill this gap. While AIs can’t replace human experts (at least not yet), they can help: “AIs can become our minions. They’re okay. They’re not that smart. But they can make humans more efficient by outsourcing some of the donkey work.”

Security expert Bruce Schneier at SOSS Fusion 2024. (Photo: Steven Vaughan-Nichols/ZDNET)

When it comes to the use of AI in security, Schneier said, “It’s going to be an arms race, but initially, I think defenders will be better. We’re already being attacked at computer speeds. The ability to defend at computer speeds will be very valuable.”

Unfortunately, AI systems have a long way to go before they can help us independently. Schneier said part of the problem is that “we know how human minions make mistakes, and we have thousands of years of history of dealing with human mistakes. But AI makes different sorts of mistakes, and our intuitions are going to fail, and we need to figure out new ways of auditing and reviewing to make sure the AI-type mistakes don’t wreck our work.”

Schneier said the bad news is that we’re terrible at recognizing AI mistakes. However, “we’re going to get better at that, understanding AI limitations and how to protect from them. We’ll get a much better analysis of what AI is good at and what decisions it makes, and also look at whether we’re assisting humans versus replacing them. We’ll look for augmenting versus replacing people.” 

Right now, Schneier noted, “the economic incentives are to replace humans with these cheaper alternatives,” but that’s often not going to be the right answer. “Eventually, companies will recognize that, but all too often at the moment, they’ll put AI in charge of jobs they’re really not up to doing.”

Schneier also addressed the concentration of AI development in the hands of a few large tech corporations. He advocated creating “public AI” models that are fully transparent and built for societal benefit rather than profit. “We need AI models that are not corporate,” Schneier said. “My hope is that the era of burning enormous piles of cash to create a foundation model will be temporary.”

Looking ahead, Schneier expressed cautious optimism about AI’s potential to improve democratic processes and citizen engagement with government. He highlighted several non-profit initiatives working to leverage AI for better legislative access and participation.

“Can we build a system to help people engage their legislators and comment on bills that matter to them?” Schneier asked. “AI is playing a part in that, both in language translation, which is a great win for AI, in bill summarization, and in the back end, summarizing the comments for the system to get to the legislator.”

As AI evolves rapidly, Schneier said, there will be a growing need for thoughtful system design and regulatory frameworks that mitigate risks while harnessing the technology’s benefits. We can’t rely on companies to do it, he argued; their interests aren’t the people’s interests. As AI becomes integrated into critical aspects of security and society, we must address these issues sooner rather than later.




