By: Michael Ivancic, Strategic Security Advisor (CISO)
Published: September 2025

Risk Awareness Without Governance Is Still Negligence

Generative AI has fundamentally altered the security landscape. From familiar tools like ChatGPT and Microsoft Copilot to the thousands of freely available AI tools online, employees now have access to powerful technologies that can boost productivity while also introducing unseen risks. As we detailed in the first part of this article series, Securing Your Data in the Age of AI, the real danger isn’t AI itself; it’s the absence of oversight.

Too often, data breaches tied to AI stem not from sophisticated attackers, but from governance failures that include:

  • no policy on AI tool usage
  • no classification or protection of sensitive information
  • no monitoring or detection of inappropriate AI use
  • no training to ensure staff understand AI risks

Organisations must move from simply identifying AI risks to embedding structures, policies, and controls that turn AI from a liability into a managed innovation opportunity. The new challenge for executive leaders is therefore determining who owns AI governance and taking proactive steps to keep AI from becoming a silent liability.

Build on Trusted Frameworks

The good news is that organisations don’t need to start from scratch. AI governance isn’t a greenfield; it can build on frameworks that have already proven effective in cyber security. ISO/IEC 27001, with its emphasis on data protection and classification, remains a cornerstone. The NIST Cybersecurity Framework 2.0 likewise treats “Govern” and “Protect” as foundational functions, offering CISOs tested baselines for policy and enforcement. Most importantly, ISO/IEC 42001 now introduces a standard built specifically for AI, requiring organisations to maintain an AI asset inventory, define approved use cases and risk management processes, assign roles, and document their governance activities. Compliance with this standard also helps organisations prepare for the EU AI Act.

Put simply: ISO 42001 is to AI what ISO 27001 is to cyber security.
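
To give a feel for what ISO/IEC 42001’s inventory requirement asks of an organisation, here is a minimal sketch of a single inventory entry. The field names and values are our own illustrative assumptions, not a template prescribed by the standard.

```python
from dataclasses import dataclass

@dataclass
class AIAssetRecord:
    """One entry in an AI asset inventory (illustrative fields only)."""
    name: str                       # the AI tool or system
    vendor: str
    approved_use_cases: list[str]   # what staff may use the tool for
    prohibited_data: list[str]      # classifications that must never be entered
    risk_level: str                 # outcome of the documented risk assessment
    owner: str                      # the formally assigned, accountable role
    last_reviewed: str              # governance activities must be documented

inventory = [
    AIAssetRecord(
        name="Microsoft Copilot",
        vendor="Microsoft",
        approved_use_cases=["drafting internal documents", "summarising meetings"],
        prohibited_data=["restricted", "regulated"],
        risk_level="medium",
        owner="AI Governance Lead",
        last_reviewed="2025-09-01",
    ),
]
```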


Determine AI Governance Responsibility

Governance often fails not because rules don’t exist, but because no one is accountable for enforcing them. Assigning responsibility is a key first step in AI governance, yet it’s not always a straightforward one. While CISOs may oversee the ISO/IEC 42001 framework, that doesn’t mean AI governance and compliance should necessarily land on their desk. Many CISOs lack specific knowledge about transparency and accountability in the AI context.

Increasingly, we see organisations appointing a dedicated AI Officer, Compliance Officer or AI Governance Lead. These roles ensure accountability for AI adoption and compliance with standards like ISO/IEC 42001 and the EU AI Act. Alongside this role, Privacy and Security Officers continue to drive data classification and risk assessments, IT teams deploy and maintain technical controls, and HR integrates AI rules into training and disciplinary processes. Legal and Supplier Management functions align governance with contractual agreements.

This matrix of responsibilities isn’t optional. International standards, including ISO/IEC 42001, explicitly require that AI governance responsibilities be formally assigned and documented. Without clarity, accountability dissolves, and governance quickly collapses.


Lay the Groundwork for Effective AI Governance

Once ownership is clear, it’s time to translate governance into policy. Effective governance cannot remain vague: policies must be specific, enforceable, and aligned with regulatory obligations. To comply with the EU AI Act, governance should address both operational safeguards and regulatory requirements.

Although not explicitly mentioned in the AI Act, a strong data classification and handling policy is paramount to success. Most AI-related breaches do not begin with malicious intent; they happen when employees unknowingly expose sensitive data. Without classification, neither people nor systems know what must be protected.

A strong framework for classification rests on three pillars:

  1. Policy rules
  2. Practical examples
  3. Automation

Policy rules define the categories: public, internal, confidential, restricted, and regulated data. They also set the storage, sharing, access, and disposal rules for each. Practical examples make those rules real for employees. For example:

  • Customer PII: confidential
  • Employee health data: regulated
  • Marketing brochures: public
  • Source code: restricted

Such examples allow employees to make the right decision in the moment.
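
These first two pillars can even be captured as policy-as-code, so the rules and examples live next to the controls that enforce them. The sketch below is illustrative only: the category names and examples come from this article, while the handling flags are our own assumptions.

```python
# Illustrative policy-as-code. Category names follow the article; the
# handling rules (pillar 1) and example mappings (pillar 2) are assumptions.
CLASSIFICATION_POLICY = {
    "public":       {"share_externally": True,  "allowed_in_ai_tools": True},
    "internal":     {"share_externally": False, "allowed_in_ai_tools": True},
    "confidential": {"share_externally": False, "allowed_in_ai_tools": False},
    "restricted":   {"share_externally": False, "allowed_in_ai_tools": False},
    "regulated":    {"share_externally": False, "allowed_in_ai_tools": False},
}

# Practical examples, mirroring the list above.
EXAMPLES = {
    "customer PII": "confidential",
    "employee health data": "regulated",
    "marketing brochures": "public",
    "source code": "restricted",
}
```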

The third pillar, automation, ensures that classification scales. Tools like Microsoft Purview and Data Loss Prevention (DLP) systems can automatically detect and tag sensitive information, from PII and financial data to contracts, source code, and intellectual property. By linking policy definitions with technical classifiers, classification becomes more than a set of guidelines: it becomes a live control embedded in enterprise operations, continuously monitoring, adapting, and protecting data as AI use evolves.
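
To make automation concrete, the sketch below shows the kind of pattern-based tagging a DLP system performs at its simplest. It is not the Microsoft Purview API; the regexes and the classify function are our own illustrative assumptions of how policy definitions can be linked to technical classifiers.

```python
import re

# Simple pattern-based classifiers, in the spirit of a DLP rule set.
# Real systems use far richer detection; these regexes are illustrative only.
CLASSIFIERS = [
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "regulated"),  # card-like number
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "confidential"),           # email address (PII)
    (re.compile(r"\bBEGIN (RSA|EC|OPENSSH) PRIVATE KEY\b"), "restricted"),  # secret material
]

def classify(text: str) -> str:
    """Return the most sensitive matching label, defaulting to 'internal'."""
    order = ["public", "internal", "confidential", "restricted", "regulated"]
    best = "internal"
    for pattern, label in CLASSIFIERS:
        if pattern.search(text) and order.index(label) > order.index(best):
            best = label
    return best

print(classify("Contact jane.doe@example.com about the contract"))  # confidential
```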

Secure Innovation Powered by AI Governance

The data risks that AI introduces into an organisation stem primarily from good intentions: people want to work smarter, be more efficient, and innovate. Effective governance creates the conditions for safe, responsible, and scalable AI use. Organisations gain security along with confidence, compliance, and a competitive advantage.

Executive leaders have a clear call to action: ensure secure and value-adding AI use through effective governance. In our third article in this series, we will explore how CISOs can identify AI tools that best align with their security needs and business goals.


How Northwave Can Help 

AI and growing compliance demands are putting more pressure on understaffed security, privacy and IT departments. That’s where Northwave’s Managed Security and Privacy Office can deliver peace of mind. We address AI-related cyber risks for you by managing your security and privacy and coordinating the execution of measures, acting as an extension of your internal security, privacy and IT department.

Our business consultants can also support with risk analysis and a roadmap for AI risk mitigation, while our data-driven Human Risk Management programme targets behavioural risks to ensure safe AI use across the workforce.

Contact us to schedule an introductory meeting and explore the possibilities.
