AI Tooling, From Models To Controls

How to Put Data Security at the Forefront of AI Strategy
You don’t need a crystal ball to know that AI governance will be a leading concern for CISOs in 2026. And with good reason. From major data breaches involving large language models (LLMs) over the past year to sweeping regulatory changes such as the EU AI Act, the risk landscape is shifting fast.
While it may be tempting to pull the plug on generative AI across the organisation, that tactic is neither realistic nor effective. The good news: with a proactive, strategic approach you don’t need to block AI at all. Organisations can enable employees to use AI tools safely. It starts with choosing approved models for the workforce, governing approved use cases, and monitoring unapproved use. Here’s what that looks like in practice.
Selecting the User-Facing Large Language Model (LLM)
This article is the third installment in our AI governance series. Before choosing your sanctioned LLM, make sure you have first completed these foundational steps, which are detailed in the linked articles below:
- Map AI use across the organisation, including shadow AI.
- Implement a data classification and handling policy that is understood by the business.
- Establish your governance baseline, including roles, responsibilities, and accountabilities.
Once those basics are in place, you can make a strategic decision about which LLM, and which interface, to approve for your workforce. There are now many possible configurations, each with very different governance implications. New offerings appear almost monthly, but they fall into three broad categories that help narrow the field:
| LLM Category | Description | Typical Use Cases | Governance Strength | Best Fit For |
| --- | --- | --- | --- | --- |
| Public / Consumer LLMs (e.g., ChatGPT Free, Gemini, Claude Free) | Internet-facing models with no enterprise controls. Prompts may be retained or used for model training. | Brainstorming, basic text generation, tasks with non-sensitive information. | Low – no admin controls, no audit trail, uncertain data handling. | Organisations that only permit non-sensitive AI usage and restrict all business data from being entered into AI tools. |
| Enterprise-Managed LLMs (ChatGPT Teams/Enterprise, Microsoft Copilot, Gemini for Workspace) | Hosted by vendors with contractual data protection, no training on prompts, admin governance, logging, and regional data storage. | Drafting, summarising internal docs, customer communication, coding assistance, safe productivity support. | Medium–High – strong access controls, DLP, audit, conditional access. | Organisations that want safe enablement of AI for internal data, protecting sensitive information through governance, audit, and technical controls. |
| Private / On-Prem / VPC LLMs (Llama in VPC, Azure OpenAI in private endpoint, fully self-hosted models) | Organisation-run models with full control over data, logs, retention, and model context. No external data flow. | Highly sensitive workloads, regulated sectors, code analysis, internal R&D/IP generation. | High – full visibility and control, isolation, strict access boundaries. | Organisations with low risk appetite, critical data workloads, strong sovereignty requirements. |

A Note on Northwave’s Approach
At Northwave, our sanctioned enterprise LLM is Microsoft Copilot. For Microsoft-centric environments, it offers the least friction and the most integrated experience, provided your data foundation is sound. Copilot drafts, summarises, and retrieves information only from data the user is already entitled to access. That condition matters. Copilot inherits Entra ID permissions and Purview labels, so the quality of your access model and labels becomes your safety margin.
In short, Copilot is a sound choice for many organisations, but it is not a shortcut around governance.
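To make the "access model as safety margin" point concrete, here is a minimal, purely illustrative Python sketch of permission-trimmed retrieval: the assistant only considers documents the requesting user can already open. The Document class, group-based ACLs, and function names are hypothetical simplifications for this article, not the Copilot or Microsoft Graph API.

```python
# Illustrative sketch only: models the principle that an AI assistant should
# surface a document only if the requesting user already has access to it.
# All names (Document, user_can_access, retrieve_for_assistant) are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Document:
    title: str
    label: str                                   # e.g. "Personal Data", "Public"
    allowed_groups: set[str] = field(default_factory=set)


def user_can_access(user_groups: set[str], doc: Document) -> bool:
    # Access is granted only when the user belongs to a group on the document's ACL.
    return bool(user_groups & doc.allowed_groups)


def retrieve_for_assistant(query: str, corpus: list[Document],
                           user_groups: set[str]) -> list[Document]:
    # Security trimming happens before relevance: documents the user cannot
    # open are never candidates for the assistant's answer.
    accessible = [d for d in corpus if user_can_access(user_groups, d)]
    return [d for d in accessible if query.lower() in d.title.lower()]


if __name__ == "__main__":
    corpus = [
        Document("Q3 payroll overview", "Personal Data", {"HR"}),
        Document("Public product brochure", "Public", {"AllEmployees"}),
    ]
    # A user outside HR never sees the payroll document, however relevant the query.
    print(retrieve_for_assistant("payroll", corpus, {"AllEmployees"}))  # []
    print(retrieve_for_assistant("payroll", corpus, {"HR"}))            # payroll doc
```

The design point is simply that the assistant can only ever be as safe as the permissions and labels underneath it; if the ACLs are too broad, the assistant's answers will be too.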
How to Use Microsoft Purview to Protect Data
With a sanctioned LLM selected, the next challenge is ensuring that data remains protected when employees begin using AI tools. This is where technology plays a supporting role. It does not replace governance, but helps enforce it. Northwave’s Data Security Cycle (see image below) shows how policy, behaviour, and technology must work together as a continuous process.
Governance decisions shape which data needs protection. Classification labels identify that data. Technology enforces the rules. Monitoring and reporting provide evidence of what happened. When something changes, such as an updated retention policy, the cycle must be adjusted again. This is the ongoing nature of data security.
Within this cycle, Microsoft Purview makes it possible to identify data, monitor data and users, and block actions such as sharing certain information with external parties. Think of Purview as the policy engine and the evidence system for how users and AI assistants handle sensitive data across Microsoft 365 and connected services.

Identifying and Labelling Sensitive Data
Purview supports secure AI use by scanning documents and emails for sensitive information and applying labels such as “Personal Data”, “Financial”, “Health Data”, “Contracts”, or “Source Code”. Automatic classification helps organisations label information at scale and keep labels updated for consistent protection.
These labels indicate how the data should be handled: who may access it, whether it can be shared externally or used in an AI tool, and what retention requirements apply. When labels carry protection, applications and AI assistants must adhere to them. With Purview, that is enforced by policy rather than left to individual judgement.
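As a rough illustration of how pattern-based classification and label-driven handling rules fit together, the Python sketch below assigns a label from simple regular expressions and looks up what that label permits. The detectors, labels, and rules are simplified stand-ins invented for this example; they do not reflect how Purview's sensitive information types or sensitivity labels are actually defined.

```python
# Minimal illustration of automatic classification plus label-driven handling rules.
# Patterns, labels, and rules are simplified stand-ins; this is not Purview code.
import re

# Very rough detectors for two sensitive information types (illustrative only).
DETECTORS = {
    "Financial": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),  # card-like number
    "Personal Data": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),          # email address
}

# What each label allows: external sharing and use in a sanctioned AI tool.
HANDLING_RULES = {
    "Financial":     {"share_externally": False, "allow_in_ai_tool": False},
    "Personal Data": {"share_externally": False, "allow_in_ai_tool": True},
    "General":       {"share_externally": True,  "allow_in_ai_tool": True},
}


def classify(text: str) -> str:
    for label, pattern in DETECTORS.items():
        if pattern.search(text):
            return label
    return "General"


def handling_for(text: str) -> dict:
    return HANDLING_RULES[classify(text)]


if __name__ == "__main__":
    print(classify("Invoice paid with 4111 1111 1111 1111"))    # -> Financial
    print(handling_for("Contact jane.doe@example.com please"))  # -> Personal Data rules
```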
Enforcing Data Rules Without Disrupting Productivity
Once data is labelled, Purview ensures those rules are followed. With the right configurations, Microsoft 365 DLP and Endpoint DLP can:
- block uploads of sensitive files to public AI tools
- prevent copying or pasting protected content into unsanctioned apps
- stop the export of sensitive data to unmanaged devices
Adaptive Protection applies stricter controls automatically to users or roles whose behaviour indicates elevated risk. Meanwhile, low-risk users experience minimal disruption. This avoids the all-too-common situation where everything is blocked and employees resort to shadow AI tools.
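The sketch below illustrates the decision logic described above: an enforcement outcome that depends on the document's label, the destination, and the user's current risk level. The labels, destination names, and thresholds are invented for the example; real Purview DLP and Adaptive Protection policies are configured in the Purview portal or via its admin tooling, not written as code like this.

```python
# Sketch of an enforcement decision combining label, destination, and user risk,
# in the spirit of DLP plus Adaptive Protection. All values are illustrative.
from enum import Enum


class Risk(Enum):
    LOW = 1
    ELEVATED = 2
    HIGH = 3


BLOCKED_LABELS_FOR_PUBLIC_AI = {"Personal Data", "Financial", "Health Data", "Source Code"}


def decide(label: str, destination: str, user_risk: Risk) -> str:
    # High-risk users get the strictest treatment regardless of label.
    if user_risk is Risk.HIGH and destination == "public_ai_tool":
        return "block"
    # Sensitive labels never leave for public AI tools.
    if destination == "public_ai_tool" and label in BLOCKED_LABELS_FOR_PUBLIC_AI:
        return "block"
    # Elevated-risk users are warned (and the event is audited) instead of blocked outright.
    if user_risk is Risk.ELEVATED and destination == "public_ai_tool":
        return "warn"
    return "allow"


if __name__ == "__main__":
    print(decide("Financial", "public_ai_tool", Risk.LOW))      # block
    print(decide("General", "public_ai_tool", Risk.ELEVATED))   # warn
    print(decide("General", "sanctioned_llm", Risk.LOW))        # allow
```

The point of layering risk level on top of labels is exactly the trade-off described above: low-risk users keep working with minimal friction, while higher-risk behaviour tightens the controls automatically.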
Monitoring, Alerting, and Reporting
Visibility is essential for security operations, internal accountability, and regulatory requirements. Purview Audit and Insider Risk Management help detect unusual activity, such as repeated attempts to move sensitive data into external tools or interactions with AI that conflict with policy.
When these signals are fed into your Security Operations Centre (SOC) through Microsoft Sentinel, analysts can correlate them with other security events. Purview’s eDiscovery and Data Lifecycle Management capabilities ensure that documents and communication records remain compliant with retention and legal-hold obligations, even as generative AI speeds up content creation.
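As a simplified illustration of the kind of correlation a SOC might run on these signals, the Python sketch below flags users with repeated blocked attempts to move labelled data into external AI tools within a short window. In practice this logic would live in a Sentinel analytics rule over Purview audit data; the event fields, destination name, and threshold here are assumptions made purely for the example.

```python
# Illustrative correlation logic only: flag users with several blocked attempts
# to send data to external AI tools within one hour. Event schema is made up.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)
THRESHOLD = 3  # blocked attempts within the window that should raise an alert


def users_to_alert(events: list[dict]) -> set[str]:
    blocked = defaultdict(list)
    for e in events:
        if e["action"] == "block" and e["destination"] == "public_ai_tool":
            blocked[e["user"]].append(e["time"])

    alerts = set()
    for user, times in blocked.items():
        times.sort()
        for start in times:
            # Count blocked attempts falling within the window starting at this event.
            in_window = [t for t in times if start <= t < start + WINDOW]
            if len(in_window) >= THRESHOLD:
                alerts.add(user)
                break
    return alerts


if __name__ == "__main__":
    now = datetime.now()
    events = [
        {"user": "a.jansen", "action": "block", "destination": "public_ai_tool",
         "time": now + timedelta(minutes=10 * i)}
        for i in range(3)
    ]
    print(users_to_alert(events))  # {'a.jansen'}
```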
Models Are a Choice. Controls Are a Requirement.
Whether your organisation uses ChatGPT Enterprise, Copilot, Gemini, or an internal LLM, the constant requirement is a strong control plane. Purview provides the enforcement needed to protect data, apply governance rules, and maintain visibility.
The organisation remains free to choose the AI tools that best support productivity. Purview ensures the data is handled in line with policy, monitored for misuse, and supported by evidence when needed.


How Northwave can help
The scale of generative AI adoption, and the pace of advances in the technology over the past few years, is something almost no organisation was ready for. Governing it is a major undertaking that requires specialised expertise and attention, at a time when many security teams are already overloaded.
Northwave’s Data Security Essential Implementation can help you gain control of AI governance. We work together with you to:
- define organisational policies and objectives
- assign AI governance ownership
- classify and label data using Microsoft Purview
- configure DLP, retention, and insider-risk policies
With this approach, your controls are real in your tenant, not just on paper. We validate the setup in a three-week simulation period to prove effectiveness without disrupting day-to-day work. Then, we deliver concrete policy, technical, and training recommendations.
Not sure yet if the implementation is the right solution for you? Contact us to learn more and find out if your organisation is eligible for our free data security workshop.
