Cybercrime grows in step with technology: as technology advances by leaps and bounds, so do the malicious tactics employed by threat actors. They are forever on the lookout for vulnerabilities to exploit so they can gain access to network systems. While present-day cybersecurity strategies such as anti-ransomware and anti-phishing solutions use AI to fight cybercrime, cyber adversaries use the same technology to turn the tables. Hence, it wouldn’t be wrong to say that AI functions as a double-edged sword. Here is how AI can be a boon and a bane simultaneously when it comes to phishing.

Malicious Actors Move With The Times

Phishing is as old as cybercrime itself. Ironically, despite the technological advancements made over the years, phishing remains one of the most effective cyber threats today. The statistics point to some chilling facts.

Over time, scammers have evolved their phishing tactics from simple scam emails to sophisticated techniques such as visual similarity attacks, Distributed Spam Distraction, and even polymorphic attacks.

[Graph: Industries most targeted by phishing attacks (Source: Statista)]

The graph shows that the sectors most affected by phishing are SaaS/webmail providers and financial institutions. Surprisingly, these are also among the institutions that use AI the most in their operations. This brings us to the question of whether AI is more effective at fighting cyberattacks or at helping malicious actors perpetrate them.

 

Use Of AI In Fighting Cyber Attacks

Organizations generally use various anti-phishing measures, such as installing anti-malware, keeping information systems updated, and educating employees to create awareness, to reduce phishing-driven cyberattacks. At times, however, these methods prove inadequate to curb cybercrime, which is why AI is increasingly being used to thwart malicious intrusions. Here are some examples of AI serving as a highly effective anti-phishing solution.

  • AI As A Restricting Force: Malicious actors generally look for vulnerabilities in a network system to infiltrate it. While organizations have improved their cybersecurity strategies, malicious actors keep finding new ways to stay ahead, and recent phishing emails continue to bypass conventional email gateways. AI is therefore being used to study the patterns and modus operandi employed by adversaries and beat them at their own game. Signature detection on its own is a weak defense today because malicious actors can tweak HTML code and slip past phishing filters. AI, capably supported by ML, looks beyond conventional detection by learning the specific patterns and wording these threat actors use, which helps keep such emails out of users’ inboxes (see the first sketch after this list).
  • AI As An Analyzing Force: Malicious actors also go one step further and use social engineering and BEC (business email compromise) scams to glean information, cause data breaches, and commit financial fraud. Such emails appear to originate from the organization’s CEO or other top officials and do not usually contain payloads like malicious attachments or links. Instead, they deceive employees by impersonating their superiors to commit fraud. AI comes to the rescue by analyzing writing style, syntax, grammar, and other user behavior patterns to build a profile of each sender. This method helps organizations reduce spear-phishing and BEC attacks to a considerable extent (see the second sketch after this list).
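
To make the pattern-detection idea concrete, here is a minimal sketch of how a machine-learning filter might learn the wording patterns of phishing emails instead of relying on fixed signatures. It assumes scikit-learn and uses a tiny, made-up set of example subject lines purely for illustration; a production filter would be trained on large labeled corpora of real messages.

```python
# Toy illustration of pattern-based phishing detection (not signature matching):
# the model learns which words and phrases are characteristic of phishing emails.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-made training examples (1 = phishing, 0 = legitimate).
emails = [
    "Urgent: verify your account password immediately or it will be suspended",
    "Your invoice is overdue, click here to avoid penalty charges",
    "Congratulations, you have won a prize, confirm your bank details",
    "Minutes from Tuesday's project meeting attached",
    "Reminder: team lunch at noon on Friday",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 1, 0, 0, 0]

# Word n-grams capture wording patterns even when HTML or spelling is tweaked.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(emails, labels)

incoming = "Please verify your password now to avoid account suspension"
print(model.predict_proba([incoming])[0][1])  # estimated probability the email is phishing
```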
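
Similarly, the writing-style analysis described in the second point could be approximated by building a baseline profile from messages a sender is known to have written and flagging new messages that deviate from it. The sketch below is a simplified, hypothetical approach using character n-gram similarity; real BEC defenses combine many more behavioral signals such as send times, devices, and mail headers.

```python
# Toy stylometry check: compare a new "CEO" email against the CEO's known writing style.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical messages known to be written by the executive (the baseline profile).
known_messages = [
    "Thanks all - let's push the release to Thursday and sync offline.",
    "Good work on the demo. Loop in finance before we sign anything.",
    "I'll be travelling next week; keep decisions moving without me.",
]

# Character n-grams capture habits like punctuation, contractions, and phrasing.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
profile = np.asarray(vectorizer.fit_transform(known_messages).mean(axis=0))

def style_similarity(message: str) -> float:
    """Cosine similarity between a new message and the sender's style profile."""
    vec = vectorizer.transform([message]).toarray()
    return float(cosine_similarity(vec, profile)[0][0])

suspicious = "Kindly process an urgent wire transfer of $45,000 today. Do not discuss."
print(style_similarity(suspicious))  # a low score is worth flagging for review
```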

 


The Other Side Of The Coin – AI For Phishing

While AI can learn from open-source intelligence feeds to upgrade its capabilities and detect the latest phishing threats, adversaries use the same techniques to craft innovative AI-based phishing exploits. The following examples show how AI can help threat actors achieve their malicious objectives.

  • The Use Of Chatbots To Introduce Malware: While BEC or CEO fraud remains one of the preferred attack modes for malicious actors, they increasingly use AI chatbots to trick users into clicking on suspicious links. These cyber adversaries also use AI to monitor the behavior patterns of a CEO or other executive, refining their tactics to carry out more precise and effective phishing attacks. AI scales beyond human capability and can vary its attack modes automatically, which helps perpetrators maintain the unpredictability that humans and even AI-enabled anti-phishing solutions look for when detecting attacks. One such use of AI-driven malware is the keylogger that threat actors install surreptitiously on a victim’s system; it works in the background, collecting information that helps the attacker launch more significant cyberattacks.
  • AI Helps Threat Actors Disguise Themselves: Statistics show that spear phishing is on the rise, with nearly 88% of organizations globally experiencing spear-phishing attempts in 2019. Cyber attackers now use AI to develop malware and hide untraceable malicious applications within an otherwise ordinary data payload. AI techniques can also conceal the conditions that must be satisfied to unlock the payload, making reverse engineering impractical. Such emails can therefore bypass modern anti-malware solutions, and the criminal act becomes nearly impossible to detect. One example of such deception is Generative Adversarial Networks (GANs), the technology behind deepfakes. This adversarial AI makes it possible to impersonate almost anyone while continuously refining the disguise, and there are already instances of malicious actors using GANs to achieve their nefarious objectives (a conceptual sketch of the GAN training loop follows this list).
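
For readers unfamiliar with how GANs work, the sketch below shows the adversarial training loop in miniature: a generator learns to produce samples (here, just numbers from a toy distribution rather than faces or voices) that a discriminator can no longer tell apart from real ones. It assumes PyTorch and is purely conceptual; deepfake systems apply the same generator-versus-discriminator dynamic to images, audio, or video at vastly larger scale.

```python
# Minimal conceptual GAN: a generator learns to mimic a "real" data distribution
# by competing against a discriminator that tries to tell real from fake.
import torch
import torch.nn as nn

def real_batch(n=64):
    # Toy "real" data: samples from a Gaussian centred at 2.0 (stand-in for real media).
    return torch.randn(n, 1) * 0.5 + 2.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train the discriminator to label real samples 1 and generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator to make the discriminator label its output as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# After training, generated samples should cluster near the real mean (~2.0).
print(G(torch.randn(1000, 8)).mean().item())
```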

These instances prove that AI technologies can be dangerous if they fall into the wrong hands.

 

Final Words

From the above discussion, it is clear that AI can wreak havoc on an organization’s information assets when leveraged by threat actors. Deepfakes of senior company executives can be created to lure mid-level and lower-level employees into disclosing critical information that could jeopardize the organization. There could be hundreds of such instances where AI is put to the wrong use. However, this does not diminish the importance of AI or of adopting robust AI-based anti-phishing tools. Besides, employee awareness is as crucial as it has ever been; hence, organizations must prioritize training their employees on basic cyber hygiene practices.