AI security & governance

Your team is already using AI. Is it safe?

Most NZ businesses are already using AI tools — ChatGPT, Microsoft Copilot, Gemini — often with no governance, no data-handling policies, and no clear picture of what data is leaving the organisation. KIS helps you get ahead of both the risks and the regulations: assessing your current AI usage against the NIST AI RMF, building practical governance aligned to ISO/IEC 42001, and preparing for the EU AI Act.

What's included
  • AI usage audit — what tools are in use and what data is being fed into them
  • Risk assessment across your AI tool landscape (NIST AI RMF)
  • Data classification — what should and shouldn't go into AI tools
  • AI acceptable use policy development for staff
  • EU AI Act applicability assessment — are you in scope and at what risk level?
  • ISO/IEC 42001 readiness assessment and gap analysis
  • AI governance framework design and implementation support
  • Board and leadership briefing on AI risk

Why now? The EU AI Act entered into force in August 2024. Enterprise procurement teams are already asking suppliers for ISO/IEC 42001. And ungoverned AI tool use creates data leakage, privacy breaches, and regulatory exposure today — not at some future date. (Source: EU AI Act Official Journal — eur-lex.europa.eu)
Talk to us about AI governance →
AI frameworks we work with
EU AI Act

In force since August 2024. Applies to NZ businesses whose AI systems affect users in the EU. Classifies AI systems by risk level, with obligations scaled to that level.

ISO/IEC 42001

Published December 2023. The international standard for AI management systems — the ISO 27001 equivalent for AI.

NIST AI RMF + GenAI profile

A practical, voluntary AI risk management framework. The 2024 Generative AI Profile addresses risks specific to tools like ChatGPT, Copilot, and similar generative AI.

NZ Algorithm Charter

A voluntary commitment by NZ government agencies covering transparency, bias mitigation, and human oversight of algorithmic decision-making.