ChatGPT is changing the phishing game for threat actors, who can use it to craft phishing emails and help bypass MFA. This article examines what ChatGPT can do in the hands of phishing actors, how it can be used for email crafting, and how you can protect yourself from AI-powered phishing.
ML (Machine Learning) models and AI (Artificial Intelligence) chatbot technology have come a long way in recent years, and one of the most advanced models is ChatGPT. Making headlines worldwide with its ability to understand and respond to natural language inputs, ChatGPT has become a valuable tool across multiple industries.
However, like two sides of a coin, ChatGPT can significantly impact innocent lives in the hands of threat actors. In this article, we will explore how ChatGPT is changing the phishing game and the potential implications of this technology for both businesses and individuals.
The Emergence of ChatGPT and its Role in Phishing
ChatGPT, OpenAI’s large language model, has brought significant progress to the field of NLP (Natural Language Processing), with applications ranging from customer service and virtual assistants to phishing detection and prevention. Ironically, the same model can also be put to malicious use, helping attackers phish and target innocent individuals with very little effort.
As the technology continues to develop, we can expect to see ChatGPT used in increasingly innovative ways, making it a powerful tool for shaping the future. But we can also expect threat actors to use it to overcome the challenges of crafting phishing emails, leading to more sophisticated campaigns. How exactly does ChatGPT fit into phishing and cyberattacks?
ChatGPT Assisting Phishers in Social Engineering and Email Crafting
Phishing is a common tactic used by cybercriminals to trick individuals into sharing sensitive information, such as login credentials or financial information. However, the phishing game is changing with the emergence of AI chatbot technology like ChatGPT. While ChatGPT can be trained to detect and respond to phishing attempts, making it a valuable asset in the fight against cybercrime, it also removes the obstacles that low-level cybercriminals face when crafting phishing emails.
Threat actors, or individuals who engage in phishing attacks, face several challenges when crafting phishing emails. Crafting a successful phishing email is a complex task that requires a significant amount of skill and knowledge.
One of the main challenges is making the email appear as legitimate as possible to increase the likelihood of the recipient falling for the scam. The social engineering behind it almost always involves creating a sense of urgency or fear so the recipient acts quickly without thinking. ChatGPT can handle this step, continually generating phishing email templates for mass campaigns and enabling threat actors to cause all kinds of harm.
For example, when researchers at Hoxhunt tested how capable the AI chatbot was at crafting phishing emails, they asked it to prepare a BEC (Business Email Compromise) phishing attack impersonating the CEO (Chief Executive Officer) of Standard Oil, a defunct organization. ChatGPT delivered a phishing email in which the CEO asked individuals for their immediate attention, informed them of a financial restructuring, and requested that invoices be redirected to a new account.
Threat actors can use, and are already using, the AI chatbot to craft malicious phishing emails. Just as RaaS (Ransomware as a Service) models transformed ransomware attacks, enabling threat actors to target more organizations for financial gain, ChatGPT can be a similar catalyst for phishing campaigns against individuals and the enterprise workforce. But how exactly is ChatGPT helping threat actors? Let us see.
How Threat Actors can Utilize ChatGPT for Phishing
ChatGPT has advanced coding capabilities that threat actors can abuse for malicious activities. But even limiting the discussion to its writing ability, ChatGPT is both impressive and dangerous. The chatbot improves quickly and can produce emails indistinguishable from those humans write, so phishing actors can use it and similar platforms to create whatever they need to dupe individuals on the Internet, including fake web personas, fake website content, and more.
Here are two areas where ChatGPT can help attackers:
- Crafting Emails in Any Language
ChatGPT officially supports over 20 languages, including English, Chinese, and Korean, but individuals on the Internet have tested nearly 100, and ChatGPT comes through. Now that language is no barrier, anyone can explain to ChatGPT what output they need, and it will provide the writing promptly, even if that writing is a phishing email. And even though the AI chatbot is blocked in Russia, individuals and threat actors have found ways to use it via VPNs (Virtual Private Networks) and foreign phone numbers.
- Bypassing MFA
With the boom in NLP, chatbots like ChatGPT can carry on convincingly human-like conversations and can be used to bypass MFA (Multi-Factor Authentication). In the past, threat actors have used SMSRanger, BloodOTPbot, and similar bots in turbo-charged phishing attacks that automatically follow up on credential-harvesting attacks, asking the victim for the OTP (One-Time Password) code and defeating 2FA (Two-Factor Authentication).
When threat analysts at Hoxhunt asked the chatbot how it could bypass MFA, it said, “These chatbots can engage with people in a human-like manner and trick them into revealing their personal information or MFA credentials. For example, an attacker may use a chatbot to impersonate a trusted individual or organization and request that the victim provide their password or security token.”
Since NLP-enabled and AI chatbots are more intelligent, they can keep up with individuals and move with the flow of the conversation to dupe them out of security codes, helping the threat actor bypass MFA.
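To see why a real-time phishing bot defeats OTP-based MFA, it helps to look at how a TOTP code works: the code is derived from a shared secret and the current 30-second time window, so a code phished and relayed within seconds is still valid. Here is a minimal RFC 6238 sketch in Python (the Base32 secret below is the RFC's published test value, used purely for illustration):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code changes every 30 seconds, but within that window a relayed
# code still works -- which is exactly what OTP bots exploit.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # prints 287082
```

Phishing-resistant factors such as FIDO2/WebAuthn hardware keys avoid this relay problem, because the authentication response is cryptographically bound to the site's origin rather than being a short code a victim can read aloud.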
How to Protect Yourself in the Age of AI-Powered Phishing Campaigns
The legacy advice still works: be cautious of unsolicited messages, and never click links or download attachments from unknown or suspicious sources. Leveraging anti-phishing tools and software, such as email filters and browser extensions, to detect and block phishing attempts adds a further layer of protection. Beyond that, here are some tips for protecting yourself in the age of AI-powered phishing campaigns:
- Offering a simple method for reporting suspicious emails.
- Scrutinizing web traffic through a secure web gateway to safeguard both on-premises and remote users.
- Verifying URLs (Uniform Resource Locators) for malicious content or typosquatting.
- Implementing email security protocols such as DMARC (Domain-based Message Authentication, Reporting, and Conformance), DKIM (DomainKeys Identified Mail), and SPF (Sender Policy Framework) to combat domain spoofing and content tampering.
- Isolating Word documents and other attachments in a sandbox environment to prevent them from accessing corporate networks.
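The URL check above can start as simple edit-distance screening: a lookalike such as paypa1.com sits one character away from the brand it imitates. Here is a minimal sketch, assuming a hypothetical allow-list of domains your organization trusts (the `TRUSTED` set is invented for illustration):

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"paypal.com", "microsoft.com", "google.com"}  # hypothetical allow-list

def typosquat_target(domain, max_distance=2):
    """Return the trusted domain this one appears to imitate, if any."""
    if domain in TRUSTED:
        return None  # exact match: legitimate
    for legit in TRUSTED:
        if edit_distance(domain, legit) <= max_distance:
            return legit
    return None
```

Real typosquat detection also has to handle homoglyphs (e.g. Cyrillic lookalike characters) and confusable top-level domains, so treat this purely as a starting point, not a complete defense.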
AI chatbots like ChatGPT can become powerful tools for threat actors to carry out phishing attacks. They can mimic human behavior and communication patterns to make their phishing attempts more convincing and automate the process to increase their chances of success, which is why it is imperative for organizations to stay informed about the latest phishing tactics and to implement advanced security measures, such as AI-based threat detection and response, to detect and respond to these threats.
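AI-based detection ultimately learns its patterns from labeled mail, but even a toy scorer illustrates the core idea of flagging the urgency cues phishing relies on. The cue list and weights below are invented for illustration; a production system would use a trained model, not a hand-made dictionary:

```python
# Hypothetical urgency/pressure cues and weights, for illustration only.
CUES = {
    "urgent": 2,
    "immediately": 2,
    "verify your account": 3,
    "account suspended": 3,
    "wire transfer": 2,
    "invoice": 1,
}

def phishing_score(email_text):
    """Sum the weights of every cue present in the (lowercased) email body."""
    text = email_text.lower()
    return sum(weight for cue, weight in CUES.items() if cue in text)

def is_suspicious(email_text, threshold=3):
    return phishing_score(email_text) >= threshold

print(is_suspicious("URGENT: verify your account immediately"))  # True
print(is_suspicious("Lunch tomorrow at noon?"))                  # False
```

Keyword scoring alone is exactly what well-written AI-generated phishing can evade, which is why layered defenses (sender authentication, URL checks, user reporting) matter alongside any content classifier.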
Despite the risks ChatGPT poses in the wrong hands, its benefits in transforming the world, and the value of AI chatbots in security, are undeniable, and the technology will continue to play an important role in phishing protection in the future.