By: Michael Ivancic, Strategic Security Advisor (CISO)
Published: July 2025

It starts with a phone call no organisation wants to receive. 

A journalist contacts your Communications team, claiming to have a scoop on allegations of workplace misconduct and internal investigations. They have employee names and sensitive details that have never been publicly disclosed. Legal confirms: the information is real.

Panic sets in. How did the breach occur? Who had access? Where did the journalist get this private information?

An internal inquiry reveals that a well-meaning intern copied the confidential notes into a free online AI tool to “polish the language.” However, nothing on the documents marked them as confidential, the intern could access free AI tools from a workplace laptop, and no one had given them any guidance on using those tools. This data breach was therefore caused by a gap in ownership, clarity, and visibility, rather than by a targeted attack, a technical failure, or even the intern.

This scenario could unfold in almost any organisation today. You may not have officially onboarded genAI into the company’s technology stack or written a formal policy, but AI tools are already being used in the workplace. A recent report reveals that 72% of genAI use in enterprise companies consists of shadow IT, and the volume of data sent to these apps in prompts and uploads is at least 30 times greater in 2025 than in 2024.

Although no harm is intended, the consequences can be severe: damage to the company’s brand reputation, loss of competitive advantage, and regulatory non-compliance. Don’t get blindsided by shadow AI. In this three-part blog series, we’ll unpack how CISOs can manage the new risks posed by AI use in the workforce.

You Can’t Govern What You Don’t Understand

First, you should be aware of what makes AI risks unique. While legacy IT tools typically operate within predictable, rule-based structures, generative AI introduces new challenges by acting as an autonomous data consumer: it ingests sensitive inputs and transforms them, so the generated outputs can be difficult to trace back to the source content or to explain.

Another challenge is that large language models (LLMs) are often built or hosted by third parties outside your organisation’s control. The models also learn from the information people feed into them. As a result, a few sentences pasted into a public AI tool can inadvertently expose personal data, intellectual property, or other confidential content. The response itself could include inaccurate details that can cause legal or reputational harm if shared publicly.

Now that you’re aware of the general AI risks, your first steps towards governance involve gaining a clearer picture of how genAI tools are already being used in your organisation and what your actual business risks are.


Step 1: Understand AI Activity Across the Organisation

Many organisations believe they have limited exposure because they assume employees are only using approved AI tools or not using any at all. As mentioned in the introduction, genAI adoption often happens under the radar. So, while mapping internal AI activity, don’t only focus on sanctioned tools. Consider everyday situations: document translation and summarisation, marketing content development, or AI-assisted coding. External consultants, interns, and hybrid workers can easily use public AI platforms without oversight.

To uncover these hidden behaviours, use a mix of interviews, surveys, and technical monitoring. We recommend asking questions such as:

  • What kinds of tasks are employees supporting with AI? (both officially and unofficially)
  • Which tools and platforms do they use? (both sanctioned and shadow)
  • What types of data are involved?

With thousands of tools freely available online, you need an accurate picture of AI use in the workplace. It’s important to encourage honest answers; employees should feel safe from reprimand. This step will help establish clear boundaries for acceptable use, shifting from a reactive to a proactive mindset.
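For the technical-monitoring part of this mix, a simple starting point is scanning an existing web-proxy or secure-gateway log for traffic to public genAI services and grouping the hits by department. The sketch below is only a minimal illustration, not a Northwave tool: the CSV column names, the file name proxy_log.csv, and the domain list are assumptions you would replace with your own gateway or CASB export.

```python
# Minimal sketch: estimate shadow-AI usage from a web-proxy log export.
# Assumptions (not from the article): the log is a CSV with columns
# "timestamp", "user", "department", and "domain"; the domain list is
# illustrative, not exhaustive.
import csv
from collections import Counter, defaultdict

GENAI_DOMAINS = {          # illustrative examples only
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def summarise_genai_traffic(log_path: str) -> None:
    """Print, per department, which public genAI domains were visited and how often."""
    hits = defaultdict(Counter)  # department -> Counter of domains
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in GENAI_DOMAINS:
                hits[row["department"]][row["domain"]] += 1

    for department, domains in sorted(hits.items()):
        print(f"{department}:")
        for domain, count in domains.most_common():
            print(f"  {domain}: {count} requests")

if __name__ == "__main__":
    summarise_genai_traffic("proxy_log.csv")  # hypothetical export path
```

A summary like this does not replace the interviews and surveys; it simply shows where to ask follow-up questions and which teams to involve first.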

Common AI Blind Spots

This assessment process closely aligns with Northwave’s Human Risk Management approach. Instead of relying on assumptions or generic awareness campaigns, we advocate for a measurable, evidence-based understanding of human behaviour in your organisation. Just as we identify behavioural patterns to reduce phishing risk or improve password hygiene, AI usage can be analysed to find real vulnerabilities and close them with targeted solutions.


Step 2: Uncover AI Risks in the Organisation

Now that you know more about AI usage and potential data exposure points in the organisation, the next step is to assess the extent of your AI-related risks. We recommend beginning with these five key risk areas:

  1. Use of Unapproved or Insecure Tools (Shadow AI)
    Staff may use public or unvetted GenAI tools without IT approval, creating security blind spots and bypassing internal controls, monitoring, or data governance.

  2. Data Leakage & Privacy Violations (resulting from risk #1)
    Employees may unintentionally input sensitive, personal, or proprietary data into GenAI tools, risking exposure, non-compliance with privacy laws (e.g., GDPR), or breach of confidentiality agreements.

  3. Employees Use Inaccurate or Misleading Output (Hallucinations)
    GenAI systems can generate factually incorrect or biased outputs, which, if used in decision-making or customer communication, may harm reputation, legal standing, or operational reliability.

  4. Regulatory & Compliance Exposure
    Without proper governance, GenAI use may breach laws such as the EU AI Act, GDPR, or industry-specific regulations by failing to meet transparency, accountability, or human oversight requirements.

  5. Intellectual Property (IP) Infringement
    Generated content might unknowingly infringe on copyrighted materials, or employees may expose internal IP when prompting the AI, creating legal and competitive risks.

These five areas provide a strong starting point for gauging your AI risk posture. However, effective risk management typically requires a more in-depth, tailored assessment aligned to your organisational structure, processes, and actual threat landscape. Then, you can truly start to govern these new risks. 
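If it helps to make that assessment concrete, the five areas can be captured in a lightweight risk register and ranked by likelihood and impact. The sketch below is only an illustration under an assumed 1-5 scoring scale; the scores shown are placeholders, not Northwave benchmarks or the results of an actual assessment.

```python
# Minimal sketch of an AI risk register covering the five areas above.
# The 1-5 likelihood/impact scoring model and the example scores are
# assumptions; replace them with your own assessment.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Shadow AI (unapproved or insecure tools)", 4, 3),
    AIRisk("Data leakage & privacy violations", 3, 5),
    AIRisk("Inaccurate or misleading output (hallucinations)", 3, 3),
    AIRisk("Regulatory & compliance exposure", 2, 5),
    AIRisk("Intellectual property (IP) infringement", 2, 4),
]

# Highest-scoring risks first, to guide where mitigation effort goes.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

Even a simple register like this gives you something to review with the business and to revisit as AI usage changes.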

Next Step: Mitigate AI Risks with Effective Governance

As mentioned, AI risks begin with a lack of visibility. In most organisations, employees aren’t trying to bypass policies. They simply don’t know where the boundaries are. By systematically mapping AI activity and assessing exposure, CISOs gain the clarity needed to turn AI from a shadow risk into a governed asset that can drive innovation. This is just the beginning of your secure AI journey. In our next article in this series, we will explain how to mitigate AI risks through effective governance.


How Northwave Can Help 

If you are ready to start gaining control over your organisation’s AI risks, Northwave’s comprehensive approach has you covered. As part of our Managed Security Office, we address AI-related cyber risks for you by ensuring effective governance and coordinating mitigating measures. Our business consultants can also support you with risk analysis and a roadmap for AI risk mitigation. Target real behavioural risks and ensure safe AI use in the workforce with our data-driven Human Risk Management programme.

Contact us to schedule an introductory meeting and explore the possibilities.

Learn more about our approach to AI security during CyberSec Netherlands 2025. Northwave Strategic Security Advisor (CISO), Michael Ivancic, will deliver a presentation on Securing Your Data in the Age of AI.
