AI gives a tactical advantage to hackers, but the cost is prohibitive, expert tells IT Brew
Ryan Kalember, Proofpoint's EVP of cybersecurity strategy, has had a career that has taken him around the world, from Peru to London to Geneva to Silicon Valley.
That global perspective is important context for a career in cybersecurity, where threats often come from adversaries backed by nation-state actors.
Kalember has been with Proofpoint for eight years. His work focuses on teaching teams to prevent threat actors from gaining access to systems, he told IT Brew at RSA 2023 in April.
The threat landscape is always evolving, with adversaries looking for any advantage and increasingly turning to techniques that go beyond the normal scope of hacking—including social engineering tricks involving AI. IT Brew asked Kalember about the use of deepfakes and AI, and how that technology is evolving.
This conversation has been edited and condensed for clarity.
What do you see as the role of evolving tech around deepfakes and AI? Live deepfakes are a ways off, but do you think that AI is a real threat? Because if it is, we’re not really hearing a lot about it yet.
Conceptually, it’s a real threat. I’ve heard limited examples of very specific groups…using what we presume is deepfake technology to have an Australian accent, which was interesting. Or they were just really good at faking an Australian accent. Also plausible.
What about live deepfakes, video calls in real time?
They do video calls. And we have seen a trend toward certain threat actors moving to video calls. We actually saw the Iranians do this to some professors in the UK. That never turned into a video call, to our knowledge.
Do you think video is not quite at the point yet, where they can do it live?
You would have to do something extremely custom and extremely expensive. It’s not just “enter it into a prompt and it does it,” like ChatGPT.
So, hypothetically, we could do all this stuff. But is there a point where it’s not worth it?
This is generally true of the threat landscape. If I can just send somebody an HTA file, and they will run malware for me, why do I need to find a remote code exec exploit chain [for] Windows 11? I don’t. I just need to find somebody who will run my code.
If I know that everybody’s using Active Directory, and Active Directory has an entire set of tooling that gets me from any compromised identity to domain admin so I can ransomware a whole environment, why do I bother doing anything else?
We haven’t stopped the vast majority of the stuff. I’m constantly trying to impress this on my own team—you don’t get to move on to the fancy stuff and the shiny objects until you take away the everyday tools.