
Introduction

We believe in delivering a safe digital journey for our clients by taking care of security functions for them. We offer managed services that provide and run an information security management system, 24/7 monitoring of and response to threats, and more. To maximise the impact we can have, efficiency and scalability are important to us. At the same time, due to the sensitive nature of the work and the trust clients place in us, quality is of high importance as well. So, we’re in a constant innovation push to keep up with or stay ahead of threat actors, to improve our service delivery, and to increase our scalability.

In the past 18 months, AI has become the hottest topic in technology. AI-powered versions of just about everything are hitting the market, and the promises made by AI technology are grand. At the same time, AI isn’t a silver bullet that you can just add to any solution to make it great. We have excellent in-house expertise on security, IT, and AI. We use those combined insights to enhance our solutions in a smart way: by applying AI technology specifically and carefully at the points where it creates a great advantage, and by leveraging old and proven technology and human interaction to have high-quality, high-confidence, scalable end results. This article describes how we did so for automating our handling of phishing email reports. This enables us to scale to have a positive impact on more organisations, and to use the valuable expertise we have for research and development rather than repetitive work. This is how we stay ahead of the curve. We’re publishing this article so that others looking to apply AI to their existing workflows can benefit from our experience, and to show that we keep track of technology trends and manage to integrate those in an intelligent way. We will show how AI can be used to gain efficiency, and how it should be combined with other techniques and with a well-defined way of working to be effective.

The problem

Phishing constitutes a significant cybersecurity threat with far-reaching repercussions, as evidenced by notable financial losses, data breaches, and operational disruptions spanning diverse sectors. Noteworthy incidents include the theft of upwards of $100 million[1] from technology giants such as Facebook and Google, the breach of personal data of 22 million individuals in the attack on the U.S. Office of Personnel Management[2], and consequential operational interruptions exemplified by the ransomware attack on the City of Baltimore, which incurred costs exceeding $18 million[3]. Moreover, during the COVID-19 pandemic, we witnessed an uptick in phishing schemes targeting healthcare institutions, accentuating the adaptability and extensive influence of these threats.

Apart from these general statistics, we observe phishing as one of the three ways in which cyber criminals typically gain access to systems in incidents (the other two being weak/stolen credentials and software vulnerabilities). Many of the ransomware incidents we help victims recover from started with a phishing email. Therefore, while phishing is an everyday occurrence, it is important to take phishing emails seriously and to act upon receiving them. It’s next to impossible to completely rule out phishing or fraud via email, because in the end they are about human interaction. However, with good monitoring and detection technology, good hardening of computer systems against the actions that typically follow successful phishing, and good end-user training, the impact of phishing can be managed well.

Clients of Northwave commonly report encountering dozens to hundreds of potential phishing incidents daily within their infrastructures. These clients are expected to document their interactions with the suspected phishing attempts (e.g., clicking on links, opening attachments, entering credentials) and forward the potential phishing email to us for further analysis. Typically, clients expect that our security desk will evaluate the potential phishing email and provide instructions on the course of action.

So, with such a workload that currently requires timely and manual analysis, can LLMs help us achieve better efficiency and quality?

The approach

Northwave offers managed security services. These include a 24/7 available point of contact for security and privacy related incidents, which we’ll call the “security desk” here. For crisis and incident reports that turn out to warrant immediate action, the Northwave CERT is also on 24/7 standby to assist customers and non-customers alike.

Reports of phishing emails arrive at the security desk as tickets. A sufficiently mature security desk has well-documented, written, step-by-step guides for handling tickets, known as playbooks. In essence, a playbook might look like this:


Playbook for ticket type ‘X’

For any tickets of type 'X',

perform 'A', followed by 'B', verify 'C' and 'D', and then proceed with 'E' and 'F';

if 'G' is present, execute 'H'; otherwise, carry out 'I'.


The hypothetical tasks in the example above fall into one of two categories: (1) those aimed at enhancing understanding of the situation reflected in the ticket and in the environment by running queries on systems and third-party services, or (2) those requiring human comprehension and reasoning. The former, i.e., executing predefined queries, can be readily automated through API calls. The latter, involving human "understanding," might be addressed by testing whether existing LLMs can perform as well as humans, thereby enabling automation. This is precisely what we have undertaken, and our initial findings are extremely promising. In the sections below, we use an essential playbook – the phishing email playbook – as an example, and outline the steps of its triage, analysis, and response stages along with how we automated them.
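To make the two categories concrete, the sketch below shows how a playbook step runner could dispatch each step either to a predefined query or to an LLM-based check. This is a minimal illustration in Python; the names and structure are our own assumptions for this article, not a description of any particular production system.

```python
# A minimal sketch of playbook steps as code. Each step declares whether it is
# a predefined query (API call) or a "reasoning" step (LLM call); the runner
# treats both uniformly. All names here are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class PlaybookStep:
    name: str
    kind: str                     # "query" for API-based, "reasoning" for LLM-based
    run: Callable[[dict], dict]   # takes the ticket context, returns new findings

def execute_playbook(steps: list[PlaybookStep], ticket: dict) -> dict:
    """Run each step in order, merging its findings into the ticket context."""
    context = dict(ticket)
    for step in steps:
        context.update(step.run(context))
    return context

# Example wiring (hypothetical helpers):
# steps = [PlaybookStep("check_cti", "query", query_cti),
#          PlaybookStep("extract_interaction", "reasoning", ask_llm)]
```

The value of this split is that every step keeps a single, auditable interface, regardless of whether a deterministic query or a model answers it.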

 

The Triage Phase

The process starts with triage, to quickly assess the severity of a ticket and, therefore, the priority it should get.

  1. Check whether the customer reported having interacted with the potential phishing email (e.g., by clicking links, opening attachments, or filling in credentials).
  2. Check whether the potential phishing email itself is present (i.e., attached or forwarded) so it can be further analysed.
  3. Based on the results of the previous checks, perform “actions” accordingly. These actions entail setting the incident priority (Low/Medium/High), requesting more information from the user if needed, and forwarding the incident for analysis.

Note that during the triage phase, it is not necessary to conduct an in-depth analysis to determine whether an email is an actual phishing attempt or not. This phase is there to make sure the relevant information is present, to request it if it isn’t, to set the priority accordingly, and to forward the ticket for analysis.
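To illustrate step 3: once the first two checks yield clear answers, the action is a simple mapping, as the sketch below shows. The priority mapping itself is an illustrative assumption, since in practice it follows the agreements made with each client.

```python
# A minimal sketch of triage step 3, assuming steps 1 and 2 produced booleans.
# The priority mapping below is illustrative, not a fixed policy.
def triage_action(user_interacted: bool, email_present: bool) -> tuple[str, str]:
    """Return (priority, next_action) for a reported phishing ticket."""
    if not email_present:
        # Without the original email there is nothing to analyse yet.
        return "Low", "request_original_email"
    if user_interacted:
        # Interaction (click, attachment, credentials) implies possible compromise.
        return "High", "forward_for_analysis"
    return "Medium", "forward_for_analysis"
```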

Automation Considerations. The third step of the triage phase is typically easy to automate IF the first two steps accurately extract "the interaction" and "the presence of the phishing email." This step involves preparing some template emails and changing the status of the analysis. However, automating the first two steps tends to be challenging because it requires an understanding of written text. There is an extensive field of research called Natural Language Processing (NLP) that sits at the crossroads of Artificial Intelligence (AI) and Machine Learning (ML). While many ML/AI models have been quite good, the new Large Language Models (LLMs) are game changers in terms of accessibility and, most importantly, accuracy.

For example, the Generative Pre-trained Transformer (GPT), the backbone of ChatGPT, was designed to generate human-like text by predicting the next word in a sequence based on the preceding words. This capability allows GPT models to perform a wide range of NLP tasks, such as text completion, translation, summarization, question answering, and "text understanding." Therefore, we considered, "why not use GPT to automate these 'minor' human tasks (steps 1 and 2 of the triage phase)?" And that's what we did.

We have spent dozens of hours crafting prompts using GPT 3.5 turbo and GPT 4 through Azure's OpenAI Service. We chose Azure initially because it guarantees data privacy and can run the same models as OpenAI. In the coming months, we plan to also test open-source LLMs (e.g., LLaMA 2, Mixtral, and others).
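To give an impression of what such an automated triage check looks like, here is a minimal sketch using the Azure OpenAI chat completions API. The deployment name, prompt wording, and JSON output schema are illustrative assumptions; our actual prompts are the product of the many hours of refinement mentioned above.

```python
# A minimal sketch (not our production prompt) of using Azure OpenAI to perform
# triage steps 1 and 2: did the user interact, and is the suspect email present?
import json
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",
)

TRIAGE_PROMPT = (
    "You triage phishing reports for a security desk. Given the ticket text, "
    'answer in JSON with exactly two boolean fields: "user_interacted" (did '
    "the reporter click links, open attachments, or enter credentials?) and "
    '"email_present" (is the suspect email attached or forwarded?). '
    "Reply with JSON only."
)

def triage(ticket_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-35-turbo",  # the Azure deployment name of your model
        temperature=0,         # deterministic output for consistent triage
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example: triage("I clicked the link in the attached mail but entered nothing.")
# -> {"user_interacted": True, "email_present": True}
```

Setting the temperature to 0 keeps the output as deterministic as possible, which matters when the answer drives an automated workflow.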

We tested against our database of tickets, previously categorised into 11 different groups. In ALL tests, our automation classified them correctly, achieving 100% accuracy. Moreover, for the phishing-related tickets (which comprised about 40% of the total), it was able to extract 'the interaction' and 'the phishing email' with 100% accuracy as well. After automating the triage, we focused on automating the analysis phase, which is described in the following section.

The Analysis Phase

Below is a hypothetical and non-exhaustive list of steps for analysing a potential phishing email:

  1. Check if the alias of the sender's email matches or is similar to the email address.
  2. Verify if the sender's email corresponds with or resembles the signature at the email's end.
  3. Confirm if the sender's email domain aligns with or is similar to the domain mentioned in the email signature.
  4. Ascertain if the sender's email matches the "reply-to" address.
  5. Evaluate the authentication values of SPF, DKIM, and DMARC.
  6. Inspect the email client of the sender (X-Mailer).
  7. Determine whether the aliases of links correspond to the links themselves.
  8. Compare "Indicators of Compromise (IoC)" with Cyber Threat Intelligence (CTI) databases (e.g., VirusTotal). Examples of IoCs include links/URLs, base64 encoded messages in URLs, domain names, domain name age, IP addresses, redirecting link pages, expanded shortened URLs, links within images, tracking pixels, QR code redirect links, file behaviour, file type versus file extension, and file hashes.
  9. Check if the image containing the brand matches or is similar to the sender's email domain.
  10. Look for social engineering tactics such as urgency, threats, and requests for (personal) information.
  11. Search for indicators of a financial nature to the email.
  12. Review linguistic style and typographical errors.

Finally, based on the results of the triage and the preceding checks, conclude whether the analysed email is a phishing attempt or not.

While many checks contribute to determining whether an email is phishing, some are strong indicators on their own. For instance, when multiple trusted CTI sources flag a URL as malicious or phishing (step 8). Or when an email's alias has no apparent relation to the actual email address (step 1), such as receiving an email purportedly from ‘Jair Santanna’ while the actual address is ‘19875@notwave.nl’. Additionally, we often recognise when an email exerts a sense of urgency or promises a financial reward (steps 10 and 11), though this may not necessarily signify phishing.
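Several of the header checks above (steps 1, 4, and 5) lend themselves to straightforward parsing. The sketch below illustrates them with Python's standard library; the similarity heuristic, and any thresholds you would put on top of it, are assumptions for illustration rather than our production logic.

```python
# A minimal sketch of header checks for steps 1 (alias vs address),
# 4 (reply-to vs sender), and 5 (SPF/DKIM/DMARC verdicts).
from email import message_from_bytes, policy
from email.utils import parseaddr
from difflib import SequenceMatcher

def header_checks(raw_email: bytes) -> dict:
    msg = message_from_bytes(raw_email, policy=policy.default)
    alias, address = parseaddr(msg.get("From", ""))       # display name vs address
    _, reply_to = parseaddr(msg.get("Reply-To", ""))      # reply-to vs sender
    auth = (msg.get("Authentication-Results", "") or "").lower()
    local_part = address.split("@")[0].lower()
    return {
        # e.g. "Jair Santanna" vs "19875" yields a very low similarity ratio
        "alias_similarity": SequenceMatcher(
            None, alias.lower().replace(" ", "."), local_part).ratio(),
        "reply_to_mismatch": bool(reply_to) and reply_to.lower() != address.lower(),
        "spf_pass": "spf=pass" in auth,
        "dkim_pass": "dkim=pass" in auth,
        "dmarc_pass": "dmarc=pass" in auth,
    }
```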

Automation Considerations. First, to automate this analysis, we need to extract certain information from the body and headers of the suspect email. Information like IoCs is typically easy to extract using regular expressions, allowing further checks against CTI databases. Comparisons for text similarity and text interpretation fall into the NLP category, which depends on human-like understanding, as discussed during the triage phase. This led us to consider, "why not try GPT to automate these human-dependent tasks as well?" So that is what we did.
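As an illustration of the regex-plus-CTI part (step 8), the sketch below extracts URLs from an email body and queries VirusTotal's v3 API for an existing analysis. The regex is deliberately simplified, and the error handling a production extractor needs is omitted.

```python
# A minimal sketch of IoC extraction (step 8) and a VirusTotal URL lookup.
import base64
import re
import requests  # pip install requests

URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def extract_urls(email_body: str) -> list[str]:
    """Pull URL IoCs out of the email body with a (simplified) regex."""
    return URL_RE.findall(email_body)

def virustotal_verdict(url: str, api_key: str) -> dict:
    # VirusTotal v3 identifies a URL by its unpadded URL-safe base64 encoding.
    url_id = base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    # e.g. {"malicious": 12, "suspicious": 1, "harmless": 70, ...}
    return resp.json()["data"]["attributes"]["last_analysis_stats"]
```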

We've spent many hours working on API calls to CTI sources, hundreds more preparing and refining prompts for use with GPT 3.5 and GPT 4 via Azure's OpenAI Service, and several additional hours integrating all the modules and testing for consistency. And voilà: we have fully automated the entire analysis of potential phishing. On our dataset of phishing emails, we observed ~1% false positives (three cases) and zero false negatives. Those three false positives were outliers, and we have since adjusted our code to correctly classify those as well. We will continue testing against thousands of phishing-related tickets and will detail our findings to the scientific community and the Anti-Phishing Working Group (APWG) in an academic paper.

The Response Phase

After the triage and analysis of a potential phishing email comes the response phase. This phase heavily relies on the nature of the customer’s interaction with the potential phishing attempt (observed during triage), the outcome of the analysis, and the predetermined agreements with the customer. For instance, if there is confirmed phishing and the customer has interacted with links in the email, some customers may prefer that the security desk take automatic action on their behalf. Others may opt for the security desk to provide guidance to their IT department on the necessary steps to take. 

Automation Considerations. The response phase can be efficiently executed if the triage and analysis phases were well-conducted. Similar to the triage phase, it involves a sequence of predefined automated template emails, some of which include a dynamic paragraph summarising the analysis results. In cases where the customer has predetermined that the security desk should act on their behalf, a set of API calls is triggered to automatically carry out those actions (e.g., resetting passwords, reconfiguring Multi-Factor Authentication, running antivirus on the endpoint device, among others).
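For a Microsoft-based stack like ours, such response actions map onto documented API endpoints. The sketch below shows two of them: revoking a user's sign-in sessions via Microsoft Graph and triggering an antivirus scan via the Defender for Endpoint API. Token acquisition and the wiring into a ticketing system are omitted, and the function names are our own.

```python
# A minimal sketch of two automated response actions, assuming OAuth tokens
# with the appropriate permissions have already been obtained.
import requests

def revoke_sessions(user_id: str, graph_token: str) -> None:
    """Invalidate all refresh tokens for a user (forces re-authentication)."""
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{user_id}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {graph_token}"},
        timeout=30,
    )
    resp.raise_for_status()

def run_antivirus_scan(machine_id: str, mde_token: str) -> None:
    """Trigger a quick antivirus scan on the reporter's endpoint."""
    resp = requests.post(
        f"https://api.securitycenter.microsoft.com/api/machines/{machine_id}/runAntiVirusScan",
        headers={"Authorization": f"Bearer {mde_token}"},
        json={"Comment": "Phishing playbook response", "ScanType": "Quick"},
        timeout=30,
    )
    resp.raise_for_status()
```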

In all of the analysed phishing email cases (including the ~1% false positives from the analysis phase), the response was executed correctly in accordance with the triage and analysis. Once again, we achieved 100% accuracy in this phase.


Lessons learned and vision

In this blog, we explained how Northwave Cybersecurity is well on its way to drastically automating its cybersecurity processes. We showed how we went about automating the triage, analysis and response of our most frequently occurring ticket type at the security desk: phishing emails. Our preliminary results are very promising:

  • Database of tickets tested, 100% accuracy in the triage phase.
  • Phishing sample set tested, 99% accuracy in the analysis phase.
  • Phishing sample set tested, 100% accuracy in the response phase.

Technology-wise, the automation of processes in this project is not solely based on LLM AI models. For example, it required data enrichment using third-party Cyber Threat Intelligence, API calls to our security stack (which is Microsoft-based), and at times, simple functions such as receiving and sending emails. Nonetheless, the use of LLM AI models was crucial to the success of full automation. We can confirm that the current LLMs perform their intended function: "understanding" human writing (with well-crafted and tested prompts) and responding appropriately.

Another valuable insight is that “an inferior” LLM model can, in most cases, perform as well as the “best” model. To illustrate, in March 2024, GPT 3.5 turbo (version 0301) cost $0.0005 per thousand tokens, while GPT 4 (version 1106) cost 20 times more ($0.01 per thousand tokens). In addition, GPT 3.5 turbo took on average 11 seconds to return a response, while version 4 took on average six times longer (around 63 seconds). The prompts used with GPT 3.5 turbo are usually bigger than with GPT 4, because the former requires many more “explanations” of what should be done. Overall, GPT 3.5 proved able to do as much as GPT 4 for our use case. One of our next steps is to experiment with other LLM models (private and open-source), some small language models (SLMs), and eventually to propose our own models.

In this article, we’ve described how we used modern AI models to automate work in our security desk, while keeping the quality level high. This frees up a lot of our experts’ time, which they can now spend on smarter tasks than triaging phishing incidents, asking for follow-up information and performing analysis steps. We’re constantly improving our way of working and our technology stack so that we can detect and respond to new threats, and so that we can scale our way of working to assist more organisations. We were only able to automate the phishing handling process because Northwave’s security desk is mature enough to have a well-documented playbook describing step-by-step what should be done and how.

The way in which we applied AI to this goal shows that AI technology has high potential, and that a lot can be achieved. It also shows that AI technology isn’t a silver bullet and isn’t applied with the flick of a switch: it requires careful analysis, combination with old-school heuristic analysis and good workflows, and the application of different AI models. For another example of this, read our blog on applying AI models to detection technology in our SOC [https://northwave-cybersecurity.com/whitepapers-articles/improving-mdr-ml-ai-driven-enhancements-in-our-soc].
