Fighting The Surge in AI-Assisted Cyberattacks


Written by David Lenz, Vice President, Asia Pacific, Arcserve.

With cyberattacks relentless and their impact often devastating, organisations are constantly looking for ways to enhance their data resilience.

It’s a back-and-forth battle, good guys versus bad, and recently the bad guys have taken a step forward.

They’re using AI to ramp up the frequency and severity of their attacks.

Worse, many newbies are jumping in to try their hand at cybercrime.

Script kiddies with zero coding experience can grab off-the-shelf AI tools and create and deploy malicious software.

Anyone with bad intentions can quickly develop and unleash malware that wreaks havoc on companies.

For instance, readily available AI tools enable even unsophisticated actors to execute denial-of-service attacks, create phishing emails, and launch ransomware.

These attacks can be run simultaneously from numerous systems worldwide, making it nearly impossible for human operators to manually detect all the attacking systems accessing their websites or portals.

Turning AI against the hackers

It’s not all bad news for the good guys. AI and deep learning technologies are also potent weapons in the fight against cybercrime.

AI-driven security solutions with self-learning capabilities can proactively respond to emerging threats and protect against a wide range of attacks—effectively putting the power back in the hands of organisations.

For instance, AI security tools can detect anomalies and patterns indicative of malicious behaviour and stop attacks before they cause harm.
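As a toy illustration of the anomaly-detection idea, here is a minimal sketch that flags unusual spikes in request counts using a robust median-based statistic. This assumes requests-per-minute counts are the only signal; real AI security tools learn from far richer features and models.

```python
# Minimal statistical anomaly detection sketch (illustrative, not a product's method).
from statistics import median

def find_anomalies(counts, threshold=3.5):
    """Return indices of counts that deviate sharply from the median.

    Uses the median absolute deviation (MAD), which stays stable even
    when an extreme outlier would distort the mean and standard deviation.
    """
    med = median(counts)
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # no variation to measure against
    # 0.6745 scales MAD to be comparable to a standard deviation
    # for normally distributed data.
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > threshold]

# Nine minutes of normal traffic, then one burst (e.g. a flood of requests).
traffic = [100, 98, 103, 97, 101, 99, 102, 100, 5000, 98]
print(find_anomalies(traffic))  # [8]: only the burst is flagged
```

The same pattern-over-baseline logic, with learned rather than hand-set thresholds, is what lets AI tools surface an attack before a human operator would notice it.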

This intelligent approach to data protection reduces reliance on reactive measures and empowers organisations to stay one step ahead of cybercriminals.

AI and deep learning protection systems can also adapt and evolve to counter emerging threats.

They can learn from past incidents and continuously improve their defence mechanisms.

By leveraging techniques like transfer learning, these systems can update their knowledge base with the latest threat intelligence and ensure greater resilience against future attacks.

These systems can also take proactive, automated actions based on predefined rules or learned behaviour. For example, upon detecting a security breach or anomaly, the system can automatically trigger measures like isolating affected systems or blocking suspicious traffic.

This automated response reduces the time between detection and remediation, thereby minimising the potential impact of a cyberattack.
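A rule-driven response of this kind can be sketched very simply. The alert types and actions below are illustrative assumptions, not any specific product's API; the point is that mapping detections to predefined actions removes the human delay between detection and remediation.

```python
# Sketch of predefined-rule automated response (names are hypothetical).
RESPONSE_RULES = {
    "ransomware_signature": ["isolate_host", "snapshot_backups"],
    "port_scan": ["block_source_ip"],
    "credential_stuffing": ["block_source_ip", "force_password_reset"],
}

def respond(alert_type):
    """Return the automated actions for a detected alert.

    Unrecognised alerts fall through to a human analyst rather than
    triggering a potentially wrong automated action.
    """
    return RESPONSE_RULES.get(alert_type, ["escalate_to_analyst"])

print(respond("port_scan"))        # ['block_source_ip']
print(respond("unknown_anomaly"))  # ['escalate_to_analyst']
```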

AI in action

Here’s an example of what AI looks like in action.

There is a well-known threat in the cybersecurity industry called a remote administration tool (RAT). A RAT can be embedded into a simple email attachment, such as a JPEG image, allowing cyber attackers to gain unauthorised access to a system.

Antivirus engines typically detect RATs based on their signatures, then distribute an alert to all endpoints to identify and remove the RATs. However, attackers can easily modify their RATs—even slightly—to generate a different signature and evade traditional signature-based detection.
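The brittleness of signature matching is easy to demonstrate. In the sketch below (the payload strings are placeholders, not real malware), changing a single byte produces a completely different SHA-256 digest, so a signature derived from the original file no longer matches the variant.

```python
# Why a one-byte change defeats signature-based detection.
import hashlib

original = b"MALICIOUS_PAYLOAD_v1"
variant  = b"MALICIOUS_PAYLOAD_v2"  # attacker flips a single byte

sig_original = hashlib.sha256(original).hexdigest()
sig_variant  = hashlib.sha256(variant).hexdigest()

# The two digests share no meaningful relationship, so a blocklist
# containing sig_original will not catch the variant.
print(sig_original == sig_variant)  # False
```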

To fight back, AI and deep learning technologies are crucial. Instead of relying solely on static signature matching, modern cybersecurity tools powered by AI can analyse the behaviour of files and processes.

They can observe whether a file is executing specific actions or installing software. By learning and recognising patterns in these activities, AI security tools can flag suspicious behaviour and prevent potentially malicious actions.

This approach is more effective in detecting and stopping emerging threats.
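A behaviour-based check can be caricatured as scoring what a process does rather than what it looks like. The action names and weights here are illustrative assumptions; real tools learn these patterns rather than hard-coding them.

```python
# Toy behaviour scoring: judge a process by its actions, not its signature.
SUSPICIOUS_WEIGHTS = {
    "opens_network_socket": 1,          # common, mildly interesting
    "modifies_registry_run_key": 3,     # persistence mechanism
    "injects_into_process": 4,          # classic RAT behaviour
    "reads_browser_credentials": 4,     # data theft
}

def risk_score(actions):
    """Sum the weights of any suspicious actions observed."""
    return sum(SUSPICIOUS_WEIGHTS.get(a, 0) for a in actions)

def is_suspicious(actions, threshold=5):
    return risk_score(actions) >= threshold

rat_like = ["opens_network_socket", "injects_into_process",
            "reads_browser_credentials"]
print(is_suspicious(rat_like))  # True: score 9 exceeds the threshold
```

Because the score is computed from behaviour, a RAT that mutates its bytes to dodge signature matching still trips the same checks the moment it acts.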

Attackers are constantly developing new methods to evade conventional cybersecurity measures, which makes it essential for organisations to keep pace.

AI and deep learning can play a vital role in analysing actual threats and predicting potentially malicious actions based on observed patterns.

Such a proactive approach enhances the security posture of organisations and helps them protect against evolving cyber threats.

A still-evolving tool

When implementing AI and deep learning tools, it’s essential to consider the challenges they may bring. We’ve discussed the benefits of AI, but it’s crucial to remember that mistakes can occur.

AI is still evolving and is not 100% foolproof. Sometimes, it may misinterpret what is happening, disrupting data or system availability. 

These disruptions might happen when the AI flags what it believes to be malicious activity. For instance, AI tools often work with a reliability score, and an organisation can take preventive action if the score falls below a preset threshold.

However, these preventive actions may be unnecessary, resulting in unplanned downtime.
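The trade-off can be sketched in a few lines. The threshold, host names, and score scale below are illustrative assumptions; the point is that a low reliability score triggers a safe but disruptive action, and a misjudged score means unnecessary downtime.

```python
# Sketch of threshold-driven prevention and its false-positive cost.
ISOLATION_THRESHOLD = 0.6  # illustrative cut-off on a [0, 1] reliability score

def decide_action(host, reliability_score):
    """Isolate a host when the AI's reliability score drops below threshold."""
    if reliability_score < ISOLATION_THRESHOLD:
        # Preventive action: safe if the threat is real, but unplanned
        # downtime if the model misread a benign situation.
        return f"isolate {host}"
    return f"monitor {host}"

print(decide_action("web-01", 0.35))  # isolate web-01
print(decide_action("web-02", 0.92))  # monitor web-02
```

Tuning that threshold is exactly the balance the article describes: set it too aggressively and benign activity causes outages; set it too loosely and real threats slip through.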

As an evolving technology, AI cannot guarantee absolute perfection, and the threat of errors will always exist.

Nonetheless, as more people use the technology and encounter various threats, AI systems will improve and become better at distinguishing real threats from non-threatening situations.

Getting started with AI

Many companies are intrigued by AI’s potential but don’t know how and where to start with the technology.

The easiest way is to work with reliable security solution providers that are well-versed in deep learning and AI and already incorporate the technology into their products.

This approach enables end-users to embrace AI and apply it effectively in data resilience and cybersecurity.

As the technology continues to evolve, we expect to see more in-house AI and deep learning solutions developed and deployed.

However, given AI’s complexity, such in-house solutions will take some years to become mainstream.

In the meantime, the most accessible and straightforward way for organisations to use AI to defend themselves is to engage with solution providers with readily available AI-powered tools that neutralise cyberattacks and protect against data loss.
