AI-enabled malware could soon be the newest weapon in threat actors' arsenals, a recent report from Malwarebytes warned.
Malwarebytes described AI-enabled malware and cyberattacks as threats that use machine learning and AI to find vulnerable systems, evade detection by security products and enhance social engineering techniques. While there are currently no examples of AI-enabled malware in the wild, the report said such malware "would be better equipped to familiarize itself with its environment before it strikes."
“We are talking about how AI-enabled malware can be harder to detect,” said Adam Kujawa, director of Malwarebytes Labs. “It could deliver more targeted malware, create better spear phishing campaigns, because it’s able to collect big data from social media, and [create more convincing] fake news and clickbait.”
While cybersecurity companies are developing and using AI and machine learning to help detect threats and make security tasks more effective, Kujawa warned that companies can expect to see more use of AI by cybercriminals in the next one to three years.
“The same platforms, frameworks and tools — Google’s got their own open source AI project, for example — are going to be out there and available for cybercriminals to develop on and create their own [malicious] AI that work against us,” he said.
For example, the report said malware authors could use available AI tech to beat CAPTCHA challenges. In such a scenario, Google’s AI could potentially be used to solve its own CAPTCHA technology.
Malicious AI could also potentially be used to trick automated detection systems and carry out serious privacy violations, he added. Advances in AI will also make it far easier to create convincing fake video and audio — dubbed “deepfakes” — which could be used to mount more believable social engineering attacks, the report said.
“Right now we mainly see AI and machine learning, as far as being used by the cyber criminals, as a means to produce malware faster to make it more effective and to make the campaigns used to go after particular targets even more effective because of the data that the AI can collect,” Kujawa said.
How to defend against AI threats
Justin Fier, director of cyber intelligence and analysis at cybersecurity vendor Darktrace, said that while there isn’t any direct evidence of threat actors using malicious AI, the company did find a novel piece of malware that didn’t steal anything — its sole function was “just learning,” he said.
“It was watching and learning patterns of life … it was trying to blend in and look like a human user as much as possible so that when it did call out for command and control, or make its next moves, it wouldn’t raise as many flags,” Fier told SearchSecurity at the recent Gartner Security & Risk Management Summit. “That’s the one that worries me the most — a piece of malware [that] all it does is learns and takes action based off the things that it is learning and actually can pivot and alter its course.”
When it comes to combating AI-enabled malware, Kujawa said, security companies that aren’t already developing AI technology within their own products are behind the curve.
“You do need to make sure that you’re investing in some sort of cybersecurity solution that also utilizes AI because on the opposite side of the coin you’ve got a malicious AI that’s churning out 10 new malware samples every half hour,” Kujawa said. “You need a solution that can identify a malicious activity and make the determination saying, ‘This thing is not legit.’”
Fier advised companies to focus on network traffic “for the sheer reason that it’s a ton of data … it’s a massive data set that changes thousands of times per second.” Investing in anomaly detection is equally crucial, he added, because it catches more than just malware.
“Malware detection is just binary: It’s just good or bad,” Fier said. “Anomaly detection is a little bit of everything: insider threats, configuration error, hardware failure and network outages. It’s so many different things that are not necessarily malicious but truly anomalous and don’t belong.”
But one of the problems with anomaly detection that uses artificial intelligence is tuning, Kujawa said: if it is tuned too low, it won’t catch anything, and if it’s tuned too high, it catches everything, including a lot of false positives.
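The tuning tradeoff Kujawa describes can be sketched with a toy statistical detector. This z-score approach, and all of the sample traffic numbers and thresholds below, are illustrative assumptions for the sake of the example — not a description of how any vendor's product actually works:

```python
# Toy anomaly detector (illustrative only): flag any observation whose
# z-score against a learned baseline exceeds a tunable threshold.
import statistics

def find_anomalies(baseline, observations, threshold):
    """Return observations more than `threshold` standard deviations
    from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [x for x in observations if abs(x - mean) / stdev > threshold]

# Hypothetical "normal" traffic volumes, then new observations that
# include ordinary fluctuation (103, 115) and one blatant outlier (900).
baseline = [100, 110, 95, 105, 98, 102, 107, 99]
observations = [103, 115, 900]

print(find_anomalies(baseline, observations, threshold=500))  # -> []  (tuned too strict: misses even the outlier)
print(find_anomalies(baseline, observations, threshold=3))    # -> [900]  (reasonable: catches only the outlier)
print(find_anomalies(baseline, observations, threshold=0.1))  # -> [103, 115, 900]  (too loose: false positives)
```

The same detector, with the same data, either misses the attack, catches it cleanly, or buries it in false positives — the only thing that changed is the threshold, which is the tuning problem Kujawa is pointing at.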
“That’s an important thing to take into consideration when you are looking at a vendor that utilizes AI: How long have they been doing this, has their software had time to learn, will it result in lots of false positives?” he said. “And then from that point on it’s kind of the same situation of how you would determine which solution to use just to fight malware in general.”