Fighting AI Cybercrime with AI Security


On August 10th, the Pentagon introduced “Task Force Lima,” a dedicated team working to bring Artificial Intelligence (AI) into the core of the U.S. defense system. The goal is to use AI to improve business operations, healthcare, military readiness, policy-making, and warfare.

Earlier in August, the White House announced a large cash prize for individuals or groups that can create AI systems to defend important software from cyberattacks. This challenge will last two years and award approximately $20 million in prizes. Big Tech companies like Google, Anthropic, Microsoft, and OpenAI are involved and will provide their AI systems for the challenge.

AI versus AI

“Cybersecurity is a race between offense and defense.” These words from Anne Neuberger, Deputy National Security Advisor for Cyber and Emerging Technology, capture the constant tug-of-war in digital defense.

AI has become a weapon of choice for cybercriminals. Their attacks include phishing, advanced persistent threats (APTs), deepfakes, and distributed denial-of-service (DDoS) attacks. A recent VMware survey found that 66% of respondents had witnessed a cyber incident involving deepfakes.

Here are a few examples of real-world AI attacks:

Deepfake CryptoCon:

A fabricated video of Elon Musk peddling a cryptocurrency scam went viral, proclaiming Musk’s ownership of the dubious BitVex trading platform and promising investors a handsome 30 percent return. The result? The scammers stole $243,000 in just over a week.

Holo-Hack Impersonation:

Cybercriminals used AI-generated “hologram” deepfakes to masquerade as Patrick Hillmann, Binance’s Chief Communications Officer, conducting convincing video calls on platforms like Zoom and extending fake offers for crypto listings on Binance.

The scam lured in crypto projects, including BlueBenx, which transferred roughly $200,000 worth of its native BNX tokens. The scammers then swapped those tokens for another cryptocurrency through the exchange’s liquidity pools, a drain that forced BlueBenx to suspend withdrawals and make drastic staff cuts.

AI Fights Back: Defending Against New Threats

Businesses and governments are investing in technology to fend off cybercrime as AI-generated threats rise. Using AI, security teams can track threats, analyze malware, and spot vulnerabilities. Microsoft’s Security Copilot, for example, uses GPT-4 to analyze threat signals and then summarizes possible malicious activity so that human analysts can investigate further.
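
Security Copilot’s internals aren’t public, but the general pattern it illustrates (handing raw threat signals to a large language model and asking for an analyst-ready summary) can be sketched roughly as follows. The alert data, prompt wording, and model name below are illustrative assumptions, not Microsoft’s implementation.

# Illustrative sketch: asking an LLM to summarize raw security alerts for an analyst.
# The alerts, prompt wording, and model name are assumptions for demonstration only.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

alerts = [
    "03:12:44 host=fin-ws-07 event=powershell.exe spawned by winword.exe",
    "03:13:02 host=fin-ws-07 event=outbound connection to 203.0.113.25:443",
    "03:13:05 host=fin-ws-07 event=new scheduled task 'UpdaterSvc' created",
]

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a security analyst. Summarize these alerts, assess the "
                    "likely attack stage, and suggest next steps for investigation."},
        {"role": "user", "content": "\n".join(alerts)},
    ],
)

print(response.choices[0].message.content)  # human-readable triage summary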

Other applications of AI in defense include:

Security Screening:

Human-based security screening is prone to errors caused by fatigue and distraction. The U.S. Department of Homeland Security’s (DHS) AVATAR system uses AI to analyze facial expressions and body gestures, detecting suspicious variations. It compares that data against known indicators of deception and flags travelers for further inspection.
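
AVATAR’s actual models aren’t public, but the underlying idea of fusing several behavioral cues into a single “refer for secondary screening” decision can be sketched like this; the cue names, weights, and threshold below are purely hypothetical.

# Purely hypothetical sketch of fusing behavioral-cue scores into a screening flag.
# Cue names, weights, and the threshold are illustrative assumptions, not DHS's model.
CUE_WEIGHTS = {
    "micro_expression_anomaly": 0.4,
    "gaze_aversion": 0.2,
    "voice_stress": 0.25,
    "gesture_inconsistency": 0.15,
}

def screening_score(cues: dict) -> float:
    """Weighted sum of per-cue anomaly scores, each expected in the range 0.0-1.0."""
    return sum(weight * cues.get(name, 0.0) for name, weight in CUE_WEIGHTS.items())

def flag_for_review(cues: dict, threshold: float = 0.6) -> bool:
    """Flag a traveler for secondary (human) screening when the fused score is high."""
    return screening_score(cues) >= threshold

# In a real system the scores would come from upstream vision and audio models.
print(flag_for_review({"micro_expression_anomaly": 0.9,
                       "gaze_aversion": 0.8,
                       "voice_stress": 0.7}))  # True with these hypothetical scores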

Preventing Phishing:

Machine learning models can scan emails for signs of phishing. These systems learn from vast amounts of data to recognize patterns associated with phishing messages. They can also monitor user actions within emails and send alerts when someone interacts with a suspicious link or shares personal information.
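
As a minimal sketch of that idea, a text classifier can be trained on labeled messages and used to score new ones; this assumes scikit-learn, and the tiny training set below is invented for illustration.

# Minimal phishing-text classifier sketch using scikit-learn.
# The tiny training set is invented; real systems learn from millions of messages
# plus sender, link, and header features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password here immediately",
    "Urgent: confirm your banking details to avoid suspension",
    "Lunch at noon tomorrow to go over the quarterly report?",
    "Minutes from yesterday's project meeting are attached",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

suspicious = "Please verify your password to keep your account active"
print(model.predict_proba([suspicious])[0][1])  # estimated probability of phishing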

Fighting Cybercrime:

Cybercrime costs roughly 1% of global GDP. In the past, controls such as two-factor authentication, firewalls, and antivirus software provided a reasonable level of security, but because cyber threats are constantly evolving, those measures alone are no longer enough. AI-powered systems use deep learning to stay ahead of threats, looking for anything suspicious in logs, real-time messages, and transactions.
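
A rough sketch of that kind of anomaly detection: train an unsupervised model on normal activity, then score new events. The features here (transaction amount and hour of day) and the sample data are invented for illustration.

# Sketch of unsupervised anomaly detection over activity records.
# Features are simplified to [amount, hour_of_day]; real systems use far more signals.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical "normal" transactions: modest amounts during business hours.
rng = np.random.default_rng(0)
normal = np.column_stack([
    rng.normal(120, 40, 500),  # amount in dollars
    rng.normal(13, 3, 500),    # hour of day
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_events = np.array([
    [95, 14],    # typical afternoon purchase
    [9500, 3],   # large transfer at 3 a.m. -- likely scored as anomalous
])
print(detector.predict(new_events))  # 1 = looks normal, -1 = anomaly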

Endpoint Security:

Cybercriminals often target endpoints such as laptops and smartphones. A traditional antivirus program relies on known malware signatures. AI, on the other hand, looks at how malware behaves to identify previously unknown variants.
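
The contrast can be sketched like this: a signature check only matches hashes it has already seen, while a behavior-based check scores what a program actually does. The hash, behavior features, and weights below are illustrative assumptions.

# Sketch contrasting signature matching with behavior-based scoring.
# The hash list, behavior features, and weights are illustrative assumptions.
import hashlib

KNOWN_BAD_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # placeholder signature database

def signature_match(file_bytes: bytes) -> bool:
    """Traditional check: flag only files whose hash is already known."""
    return hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_HASHES

BEHAVIOR_WEIGHTS = {
    "modifies_registry_run_key": 0.3,
    "injects_into_other_process": 0.4,
    "encrypts_many_files": 0.5,
    "contacts_unknown_domain": 0.2,
}

def behavior_score(observed_actions: set) -> float:
    """Behavioral check: score observed actions, even for never-before-seen binaries."""
    return sum(BEHAVIOR_WEIGHTS.get(action, 0.0) for action in observed_actions)

sample_actions = {"injects_into_other_process", "encrypts_many_files"}
print(signature_match(b"new, unseen variant"))  # False: no signature exists yet
print(behavior_score(sample_actions) >= 0.6)    # True: the behavior alone looks malicious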

AI-Powered Threat Detection:

Breaches at organizations that use AI and automation tools cost an average of $3.05 million less than breaches at organizations without them. Using AI, businesses can identify potential problems before they escalate. AI algorithms connect recognized Indicators of Compromise (IoCs) with internal security information, bolstering defenses against emerging threats.

AI-powered threat detection also uncovers network threats using up-to-date data, which cuts down on manual analysis and speeds up finding and responding to problems.
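
A simplified sketch of that correlation step, assuming a feed of known IoCs and a stream of internal log events (both invented here for illustration):

# Sketch of correlating known Indicators of Compromise (IoCs) with internal logs.
# The IoC feed and the log events are invented for illustration.
KNOWN_IOCS = {
    "ip": {"203.0.113.25", "198.51.100.7"},
    "domain": {"update-check.badexample.com"},
}

log_events = [
    {"host": "hr-laptop-12", "ip": "172.16.4.9", "domain": "intranet.corp.local"},
    {"host": "fin-ws-07", "ip": "203.0.113.25", "domain": "update-check.badexample.com"},
]

def correlate(events, iocs):
    """Yield (host, field, value) for every log field that matches a known IoC."""
    for event in events:
        for field, value in event.items():
            if value in iocs.get(field, set()):
                yield event["host"], field, value

for host, field, value in correlate(log_events, KNOWN_IOCS):
    print(f"ALERT: {host} matched known-bad {field} {value}")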

The Road Ahead for Cybersecurity and AI

The U.S. government’s success with these new AI initiatives could rewrite cybersecurity rules. It’s not just about dollars and data—it’s about fortifying the very core of our digital existence. With AI at the forefront, we can safeguard critical software, protecting families, businesses, and society.


About the Author:

Isioma Ogwuda is a content writer and marketer for SaaS and Tech companies. She shines at asking the right questions, diving deep into research, and crafting standout content. 

Editor’s Note: The opinions expressed in this guest author article are solely those of the contributor and do not necessarily reflect those of Tripwire.

 


