Why AI and security pros need to work together to fight cybercrime
Cybercriminals sometimes use AI to their benefit. To defeat them, security pros and AI should each focus on what they do best.
Remember when all that was required for digital security was a good antivirus program? That seems like the distant past. Today's complex cybersecurity offerings have moved well beyond those days, as cybercriminals attack on all fronts: infrastructure, applications, and computers.
However, the underlying challenge has not changed over the years: Cyber bad guys only need to find one way in, whereas those guarding digital assets have to protect every possible entry point. That is a huge task, and the good guys aren’t faring so well. So, when help in the form of artificial intelligence (AI) comes along, there is a great deal of interest.
“There is no other technology [AI] that can keep up,” writes Hari Sivaraman, head of AI content strategy for VentureBeat and CEO of 100Digital.ai, in his VentureBeat article What enterprise CISOs need to know about AI and cybersecurity. “It has the ability to rapidly analyze billions of data points, and glean patterns to help a company act intelligently and instantaneously to neutralize many potential threats.”
SEE: Hiring Kit: Cybersecurity Engineer (TechRepublic Premium)
Sivaraman evokes Moravec's paradox when writing about the conundrum of dividing work between humans and AI. According to Hans Moravec, "It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
Marvin Minsky is quoted as writing, “In general, we’re least aware of what our minds do best. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.”
According to Sivaraman, the optimal and obvious solution is to be aware of what each does best and apply AI technology and/or human intelligence appropriately.
AI is best at detecting security threats
AI is markedly better at security-threat detection if clear guidelines can be turned into training data for AI. “For instance, if there are guidelines on certain kinds of IP addresses or websites that are known for being the source of malicious malware activity, the AI can be trained to look for them, take action, learn from this, and become smarter at detecting such activity in the future,” writes Sivaraman. “When such attacks happen at scale, AI will do a far more efficient job of spotting and neutralizing such threats compared to humans.”
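The kind of detector Sivaraman describes can be illustrated with a minimal sketch. This is a toy example, not his or any vendor's actual system: the seed blocklist IPs are made up, and real products use far richer signals than a simple event counter. The point is the loop he names, in which the system is seeded with known-bad sources, takes action on them, and "learns" by promoting repeat offenders to the blocklist.

```python
# Toy sketch (illustrative only): a detector seeded with known-bad IPs
# that "learns" by blocklisting sources whose suspicious-event count
# crosses a threshold. All addresses below are hypothetical.

KNOWN_BAD = {"203.0.113.7", "198.51.100.23"}  # hypothetical seed blocklist


class ThreatDetector:
    def __init__(self, seed_blocklist, threshold=3):
        self.blocklist = set(seed_blocklist)
        self.threshold = threshold
        self.suspicious_counts = {}

    def observe(self, src_ip, suspicious):
        """Record one event; return True if the source should be blocked."""
        if src_ip in self.blocklist:
            return True  # already known bad: act immediately
        if suspicious:
            count = self.suspicious_counts.get(src_ip, 0) + 1
            self.suspicious_counts[src_ip] = count
            if count >= self.threshold:
                # "Learn": promote the repeat offender to the blocklist
                self.blocklist.add(src_ip)
                return True
        return False
```

At scale, this counting-and-promotion loop is exactly the sort of rote, high-volume pattern matching where software outpaces human analysts.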
Humans excel at abductive reasoning
Humans are much better than AI at judgment-based decisions: those for which training data has not been created.
“For instance, let’s say a particular well-disguised spear phishing email talks about a piece of information, which only an insider ‘could’ have known,” writes Sivaraman. “A vigilant human security expert with that knowledge and intelligence, will be able to connect the dots and detect that this is ‘probably’ an insider attack and flag the email as suspicious.”
The above example is one instance where AI would likely fail to detect the threat. AI, at the present time, is unable to perform abductive reasoning. “Even if you cover some such use cases with appropriate training data, it is nigh on impossible to cover all the scenarios,” explains Sivaraman. “As every AI expert will tell you, AI is not quite ready to replace human general intelligence or what we call ‘wisdom’ in the foreseeable future.”
SEE: Why we must strike a balance with AI to solve the cybersecurity skills gap (TechRepublic)
How cybercriminals use and hack AI
Cybercriminals are likely already on top of this: they are more cutting-edge than they are often given credit for, and capable of turning AI solutions into potent cyberweapons. Sivaraman offers the following examples.
- Cybercriminals versed in programming can hack into the AI itself and alter the training data, which in turn distorts the algorithms and renders the AI program ineffective.
- Cybercriminals can also develop their own AI programs to find vulnerabilities much faster than the developers can find and remove them.
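The first attack above, often called training-data poisoning, can be shown with a toy sketch. This is an illustration of the concept only, not a real attack tool: the detector is a deliberately simple threshold learner, and the scores and samples are made up. Flipping a few labels in the training set shifts the boundary the model learns, so genuinely malicious activity slips past it.

```python
# Toy sketch of label-flipping data poisoning (illustrative only).
# The "model" learns a threshold halfway between the mean benign score
# and the mean malicious score in its labeled training data.

def learn_threshold(samples):
    """samples: list of (score, is_malicious) pairs."""
    benign = [s for s, bad in samples if not bad]
    malicious = [s for s, bad in samples if bad]
    return (sum(benign) / len(benign) + sum(malicious) / len(malicious)) / 2


clean = [(0.1, False), (0.2, False), (0.8, True), (0.9, True)]

# Attacker relabels one malicious sample as benign (label flipping),
# dragging the benign mean upward and raising the learned threshold.
poisoned = [(0.1, False), (0.2, False), (0.8, False), (0.9, True)]

clean_t = learn_threshold(clean)        # midpoint of 0.15 and 0.85
poisoned_t = learn_threshold(poisoned)  # higher threshold than clean_t

# An attack scoring 0.6 is caught by the clean model but missed
# by the poisoned one, since 0.6 now falls below the threshold.
```

The same distortion applies, in far subtler forms, to real machine-learning pipelines, which is why training data itself becomes an asset to defend.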
SEE: 3 ways criminals use artificial intelligence in cybersecurity attacks (TechRepublic)
Fighting cybercrime requires humans and AI
The key, according to Sivaraman, is to have AI and human intelligence join forces to form a formidable defense against cybersecurity threats. “AI, while being a game-changing potent weapon in the fight against cybercrime, cannot be left unsupervised, at least in the foreseeable future, and will always need human assistance by trained, experienced security professionals and a vigilant workforce,” concludes Sivaraman. “This two-factor AI plus human intelligence security, if implemented fastidiously as a policy guideline across the enterprise, will go a long way in winning the war against cybercrime.”