Disruptive AI cyberattacks are inevitable: 4 ways security professionals can prepare for them

When Eric Horvitz, Microsoft’s chief scientific officer, testified on May 3 before the Subcommittee on Cybersecurity of the U.S. Senate Armed Services Committee, he stressed that organizations will certainly face new challenges as cybersecurity attacks become more sophisticated — including through the use of artificial intelligence.

He explained that while AI improves the ability to detect cybersecurity threats, threat actors are also upping the ante.

“While there is scant information to date on the active use of artificial intelligence in cyberattacks, it is widely accepted that AI techniques can be used to scale cyberattacks via various forms of probing and automation…referred to as offensive AI.”

However, it is not only the military that needs to stay ahead of threat actors using AI to scale their attacks and evade detection. As enterprise companies grapple with a growing number of major security breaches, they need to prepare for the incremental evolution of AI-driven cybercrime, experts say.

Attackers want to take a big leap forward with AI

Max Heinemeyer, vice president of cyber innovation at AI cybersecurity firm Darktrace, spoke with VentureBeat. What we’re currently seeing, he said, is “a huge driver in cybersecurity – when attackers want to take a big leap forward, with a game-changing attack that would be hugely disruptive.”

For example, there have been attacks not driven by artificial intelligence, such as the 2017 WannaCry ransomware attack, which used what were considered novel cyber weapons, he explained, while today there is malware in use in the Russia-Ukraine war that had rarely been seen before. “This kind of game-changing attack is where we would expect to see AI,” he said.

So far, the use of artificial intelligence in the Russia-Ukraine war remains limited, publicly at least, to Russia’s use of deepfakes and Ukraine’s use of Clearview AI’s controversial facial recognition software. But security professionals are preparing for a fight: A Darktrace survey last year found that a growing number of IT security leaders are concerned about the potential use of artificial intelligence by cybercriminals. Sixty percent of respondents said human responses are falling behind the pace of cyberattacks, while nearly all (96%) have begun protecting their companies against AI-based threats — mostly related to email, fraud and advanced impersonation.

Threat actors are already using machine learning to carry out more social engineering attacks, said Corey Nachreiner, CSO at WatchGuard, which provides enterprise-grade security products to midmarket customers. If attackers get hold of large data dumps containing many passwords, they can learn the patterns in those passwords to crack them more effectively.
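
A minimal sketch makes that idea concrete. The Python example below is invented purely for illustration (the corpus and candidate passwords are toy data, and real cracking tools are far more sophisticated): it trains a character-level Markov model on a tiny “breach” corpus and scores candidate guesses by how human-like they look.

```python
# Sketch of the pattern-learning idea: a character-level Markov model
# trained on a tiny, invented breach corpus. Illustration only.
from collections import defaultdict, Counter

def train_markov(passwords):
    """Count character-to-character transitions across a password corpus."""
    transitions = defaultdict(Counter)
    for pw in passwords:
        marked = "^" + pw + "$"              # start/end markers
        for a, b in zip(marked, marked[1:]):
            transitions[a][b] += 1
    return transitions

def score(pw, transitions):
    """Rough likelihood of a candidate under the learned transitions.
    Higher scores mean 'looks like passwords people actually choose'."""
    marked = "^" + pw + "$"
    prob = 1.0
    for a, b in zip(marked, marked[1:]):
        counts = transitions.get(a)
        if not counts or counts[b] == 0:
            return 0.0                       # transition never seen in corpus
        prob *= counts[b] / sum(counts.values())
    return prob

# Tiny invented corpus standing in for a real breach dump.
corpus = ["password123", "passw0rd!", "dragon2019", "iloveyou1", "qwerty123"]
model = train_markov(corpus)

# The human-style candidate scores above zero; the random string scores zero,
# so a pattern-aware attacker would try human-style guesses first.
for candidate in ("passw0rd123", "xkQ9#vL2z"):
    print(candidate, score(candidate, model))
```

Candidates that fit learned human patterns score higher, which is why breached corpora make password guessing dramatically more efficient than brute force.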

He said machine learning algorithms will also drive more spear phishing attacks, or highly targeted phishing emails, than in the past. “Unfortunately, it is difficult to train users not to click on spear phishing messages,” he said.

What companies really need to worry about

According to Seth Siegel, North American AI advisory leader at Infosys, security professionals may not yet be thinking explicitly about threat actors’ use of AI, but they are seeing more and faster attacks and can sense an increased use of AI on the horizon.

“I think they see it’s getting really fast and furious out there,” he told VentureBeat. “The threat landscape is really aggressive compared to last year, compared to three years ago, and it’s only getting worse.”

However, he warned that organizations should be concerned about much more than spear phishing attacks. “The question really should be, how can companies deal with one of the biggest risks of AI, which is getting bad data into their machine learning models?” he said.

These efforts will not come from individual attackers, but from nation-state hackers and sophisticated criminal gangs.

“This is where the problem lies – they use the most available technology, the fastest technology, the most cutting-edge technology, because they need to be able to not only get away with their crimes, but to overwhelm departments that are frankly not equipped to deal with this level of malfeasance,” he said. “Basically, you can’t bring a human tool to an AI fight.”

4 ways to prepare for the future of AI cyberattacks

Experts say security professionals should take several key steps to prepare for the future of AI cyberattacks:

1. Provide ongoing security awareness training.

The problem with spear phishing, Nachreiner said, is that because the emails are designed to look like real business messages, they are difficult to block. “You need to have security awareness training, so users know to expect and be suspicious of these emails, even if they seem to come in a business context,” he said.

2. Use AI-driven tools.

Heinemeyer said AI should be embraced by infosec teams as a core part of their security strategy. “They shouldn’t wait to use AI or see it as just the cherry on top – they should anticipate and implement AI themselves,” he explained. “I don’t think they realize how necessary it is at the moment – but once attackers start using more ferocious automation, and maybe more devastating attacks are launched against the West, you really want to have the AI.”

3. Think beyond individual bad actors.

Siegel said companies need to refocus their perspective away from the individual bad actor. “They should think more about nation-state hacking and criminal-gang hacking, be able to take defensive positions, and understand that it’s just something they now need to deal with on a daily basis,” he said.

4. Have a proactive strategy.

Siegel said organizations also need to make sure they are on top of their security postures. “When patches are published, you have to treat them with the level of importance they deserve, and you need to review your data and models to make sure no malicious information has been fed into the models,” he explained.

Siegel added that his organization includes cybersecurity professionals on data science teams, and also trains data scientists in cybersecurity techniques.
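
One basic piece of that data-and-model review can be sketched in code. The example below is a hypothetical setup (Python with scikit-learn, invented data and thresholds) that screens incoming training rows for statistical outliers before they reach a retraining job; it is a first-line guard against crude poisoning, not a complete defense.

```python
# Minimal sketch: flag outlier rows in incoming training data before
# retraining, as one basic defense against data poisoning. All data
# here is synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
trusted = rng.normal(0.0, 1.0, size=(1000, 4))   # vetted historical data
incoming = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 4)),          # normal-looking submissions
    rng.normal(8.0, 0.5, size=(5, 4)),           # suspicious injected cluster
])

# Fit the detector on data you already trust, then score new submissions.
detector = IsolationForest(contamination="auto", random_state=0).fit(trusted)
flags = detector.predict(incoming)               # -1 = outlier, 1 = inlier

clean = incoming[flags == 1]
quarantined = incoming[flags == -1]
print(f"accepted {len(clean)} rows, quarantined {len(quarantined)} for review")
# Only `clean` rows proceed to retraining; quarantined rows get human review.
```

In this arrangement, the detector never trains on unvetted submissions, so an attacker cannot easily shift the baseline of what counts as “normal.”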

The future of offensive artificial intelligence

According to Nachreiner, more “adversarial” machine learning is coming down the pike.

“This is about how we use machine learning to defend — people will use that against us,” he said.

For example, one of the ways organizations use artificial intelligence and machine learning today is to proactively detect malware, because malware now changes so rapidly that signature-based detection no longer catches it reliably. In the future, however, these machine learning models will themselves be vulnerable to attacks by threat actors.
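
A toy example shows why. In the sketch below (synthetic data and made-up features, invented purely for illustration), a simple classifier learns to separate benign from malicious samples, and an “attacker” nudges a malicious sample’s features until the model mislabels it, which is the essence of an evasion attack.

```python
# Toy sketch of model evasion: a detector trained on simple features can
# be flipped by small, deliberate changes to those features. Everything
# here is synthetic; real attacks target far richer feature spaces.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Feature 0 and 1 stand in for, say, obfuscation ratio and string entropy.
benign = rng.normal([0.2, 0.3], 0.05, size=(200, 2))
malicious = rng.normal([0.8, 0.7], 0.05, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = np.array([[0.8, 0.7]])            # clearly "malicious" features
print("before:", clf.predict(sample))      # -> [1] (flagged)

# Gradient-free "attack": nudge features toward the benign region in small
# steps until the label flips, while the payload's actual behavior (not
# modeled here) could remain unchanged.
step = (np.array([[0.2, 0.3]]) - sample) * 0.05
evasive = sample.copy()
while clf.predict(evasive)[0] == 1:
    evasive += step
print("after :", clf.predict(evasive), evasive.round(2))
```

The defender’s countermeasures, such as adversarial training and feature randomization, turn this into exactly the cat-and-mouse dynamic described below.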

Heinemeyer said the AI-driven threat landscape will continue to deteriorate, with increased geopolitical tensions contributing to the trend. He cited a recent study from Georgetown University that examined China and how its nation-state-sponsored hacking is intertwined with AI research at universities. “It tells us a lot about how closely China, like other governments, works with academics, universities and AI research to harness its potential for hacking and cyber operations.”

“As I think about this study and other things that are happening, I think my view of the threats a year from now will be much darker than it is today,” he admitted. However, he noted that the defensive outlook will also improve as more organizations embrace AI. “We’re going to be stuck in this cat-and-mouse game,” he said.
