4 min read

Governing AI and APIs in Mission-First Organisations

Policy documents do not stop a rogue agent. Neither does a terms of service. What actually enforces governance is the infrastructure layer — and this is what building it looks like in a mission-first organisation.
Governing AI and APIs in Mission-First Organisations

I spent nine years as Head of Data at the RNLI. For most of that time, the governance conversation was about data: what we collected, how we stored it, who could see it and under what conditions. The technology moved fast. The structures to govern it moved slower. That pattern has not changed. What has changed is the stakes.

AI is not arriving at the frontline of mission-driven organisations in a controlled, well-planned sequence. It is already there, often in forms nobody signed off on. A caseworker summarising a messy client file in ChatGPT because the internal system is too slow. A fundraiser drafting grant applications with a public LLM. A volunteer coordinator running beneficiary lists through a free tool that nobody has checked the data processing terms on. Not malice. Pragmatism.

The governance response to this, almost universally, is a policy document. An acceptable-use policy. A guidance note on AI tools. Sometimes a training session. These are not useless, but they are also not sufficient. A policy document cannot intercept a prompt. It cannot redact a phone number before it leaves the building. It cannot rate-limit an agent that is making a hundred API calls a minute because someone left it running overnight. Policy governs intent. Infrastructure governs behaviour.

The problem I kept running into at the RNLI

Emergency services organisations operate under pressure by definition. The RNLI launches into weather that would ground most aircraft. The people coordinating a shout are not thinking about data governance. They are thinking about getting a crew out safely and getting information to the right person in the right order.

When I was trying to introduce data tooling into that environment, the question I kept getting was a reasonable one: what happens when it goes wrong? Not what is the policy if it goes wrong. What actually stops it going wrong.

The honest answer, for most of the AI tools being deployed in mission-first contexts right now, is: nothing technical. There is a prompt that says be careful, and there is a policy that says use it responsibly, and beyond that the system will do whatever it does.

That is not governance. That is hope.

What the wiring actually needs to do

The thing I have come to think of as the governance wrapper is not complicated conceptually. It is a control plane that sits between AI systems and everything they access. Every API call, every LLM request, every agent tool invocation goes through it.

The wrapper does five things that a policy document cannot.

It redacts before data leaves. Before a prompt hits a public model, PII is stripped at the infrastructure layer. The agent never had the option to send it.

It enforces rate limits in real time. Not a guideline about reasonable use. An actual limit that fires a 429 when exceeded. Useful for controlling costs as well as preventing runaway agent behaviour.

It screens inputs and outputs. Injection attacks, toxicity, off-topic requests — these can be caught before they reach the agent or before the response reaches the user, using classifiers running locally inside the infrastructure.

It gates access by plan or role. The same agent asking the same question gets a different answer depending on who is asking. A free-tier user gets less data. A credentialed clinician gets more. The agent does not reason about this. The gateway decides.

It logs everything with context. Not just that a request happened, but what plan it was on, what was returned, whether it was blocked and why. Auditable by design, not by accident.

Why this matters specifically for charities and public sector bodies

The EU AI Act is coming into force in stages through 2025 and 2026. High-risk AI systems — and several categories directly relevant to mission-first work, including systems used in employment, education, access to public services and emergency response — require logging, traceability and human oversight by law, not by preference.

Most charities and public sector bodies are not ready for this. Not because they are careless but because nobody has told them clearly that the accountability requirement lands at the infrastructure level, not the policy level.

Beyond compliance, there is a trust argument. When you can show a trustee, a regulator or a service user exactly what an AI system accessed, when it accessed it and what it returned, you are having a different conversation than the one most organisations are currently having. That conversation is possible when governance is in the wiring. It is impossible when governance only exists in a document.

This is not as expensive as it sounds

The immediate objection is cost. Infrastructure costs money. Developers cost money. Most charities do not have either in abundance.

Two things worth saying to that. First, the cost of a data breach, a regulatory finding or a serious AI-assisted error in a high-stakes context is substantially higher than the cost of building this properly. Second, this does not have to be built from scratch.

API gateway technology is mature and available at relatively low cost or through programmes aimed specifically at nonprofits. The governance wrapper I am describing is a configuration challenge as much as a build challenge. The technical primitives exist. The gap is usually in knowing what to configure and why.

If you want to see what it looks like running, the Demos page has a walkthrough of a working implementation. It is not a slide deck but a hands on example.


Sam Prodger is Field CTO at Gravitee and spent nine years as Head of Data at the RNLI.

Continue this conversation

Open a pre-loaded prompt in your preferred AI. Edit it before you send.

Pre-loaded with context from this article. Opens in a new tab.