Humans First: how 100 managers wrote their own AI principles
The most memorable moment of the morning didn’t happen on a slide.
At the start of the day, one of the managers told me he “doesn’t do AI, not interested”. By the end of the morning, that same person came up to tell me he’d downloaded ChatGPT on his phone and was excited to give it a go.
That’s what happens when you start with people, not technology. Humans first, working as designed.
A couple of weeks ago I had the privilege of running “Humans First: Making Sense of AI at Pact” for Pact Group, an organisation that supports some of the most vulnerable people in Aotearoa across mental health, housing, addiction recovery and family support. Around 100 managers from across the motu, twelve round tables, three hours.
The brief was simple enough on paper. Get a mixed-capability audience from “I haven’t really touched it” to “I’ve got a clear next step” in a single morning. No tech sales pitch. No doomsday warnings. Just an honest conversation with people who hold the trust of vulnerable communities every day.
The shape that made it work was old-school. People, Process, Technology. In that order. We treated AI as the third thing, not the first.
Three short presentations, each followed by a structured breakout at the tables. Each breakout built on the last. By midday the room had thought through and written down their own AI principles. The ones that protect the work they do for the people they support.
Here’s how the three breakouts worked, and what each one was really for.
Fear and Excitement
The first breakout asked something small, on purpose. What excites you about AI? What worries you? Where in your role could it help, and where would it feel wrong?
I started where the room was. AI expertise was low across most of it, and there was no point pretending otherwise. We worked up from there: comfort first, then understanding, then what any of this actually means for the work they do.
Some came in with genuine optimism. Others with real anxiety. Plenty with a healthy dose of scepticism. All of it valid, all of it worth naming out loud before we went anywhere.
The shift after that first breakout was visible. People had heard each other. Hands were going up before I’d framed the next question. That’s the moment a workshop becomes a conversation.
The Ethical AI Continuum
The second breakout moved from feelings to judgement.
Each table got a set of scenario cards. Real situations, the kind of decisions Pact people face. A care plan suggestion. A risk-flagging tool. An overnight chatbot for clients. AI summarising a client’s file before a case review. Each one had to be sorted: clearly fine, borderline, or clearly not OK. Then the table had to write down the principle that guided their decision.
That last step matters more than the sorting. The point isn’t where any individual scenario lands. The point is that the table can articulate why. A continuum without a principle is just a vibe. A principle gives you something you can apply tomorrow, in a real case, with a real client.
No table was the same as the next. Some agreed quickly. Others were genuinely contested, and that’s where the most interesting thinking happened. “Borderline” turned out to be the most honest category in the room.
Your Guardrails for Pact
The third breakout asked the question the morning had been building toward. Given everything we’ve talked about, what are your top three AI guardrails for Pact?
Every one of the twelve tables said yes to AI, with conditions. Not a single table rejected it outright. That’s a stronger starting point than a lot of organisations I’ve spent time with.
Governance was the connective tissue. Most tables named it as a primary concern, and it showed up again across consent, oversight and training. Without clear policy, the other guardrails don’t hold together. The room was telling its leadership that loud and clear.
The finding I wasn’t expecting, and the most Pact-shaped one, was around accessibility. Several tables landed on AI as a genuine way to reach the people they support. Clients who are non-verbal. Barriers around hearing, vision, understanding social cues. Not as an admin helper. As a way to connect with people who are otherwise hard to reach.
That’s the kind of thinking you only get when you stop framing AI as a productivity tool and let people in community services tell you what it could actually do for the people they serve.
What I’m taking from it
Three things have stuck with me from the day.
Facilitating at this scale is its own discipline. Twelve tables, helpers in zones, lapel mics, table number cards, run sheets the helpers can read at a glance. None of it accidental. All of it in service of one thing: keeping the people in the room at the centre of the conversation.
Starting with the humans is a design choice, not a slogan. The sceptic who downloaded ChatGPT didn’t change his mind because of a slide. He changed it because the morning gave him space to be sceptical, then space to be curious, then space to make his own move. That sequence matters.
The hard work isn’t the technology. The hard work is asking what this organisation actually does, who it serves, where the consent and privacy lines sit, and what “good” looks like in their context. Get that right and the technology question almost answers itself.
Huge thanks to Paul Chamberlain and Ngaire Kitt for trusting Cairn with this, and to the GMs who facilitated alongside me on the day. The principles their team wrote belong to them. They were good ones.