What Happens After 300,000 People Have Their Say?
- Bartholomew Lind

Public consultations are a cornerstone of democratic governance in New Zealand. When Parliament considers a new Bill or a council proposes a policy change, the public gets to have their say. And they do — sometimes in the thousands, sometimes in the hundreds of thousands. The Treaty Principles Bill alone generated over 300,000 written submissions, the largest response to proposed legislation Parliament has ever received. The previous record, set by the Conversion Practices Prohibition Bill, was around 125,000.
The problem is what happens next. Someone has to read all of those submissions, work out what people are actually saying, and turn a mountain of written responses into structured findings that decision-makers can act on. That work is important, and the people who do it well bring years of experience and judgement to the task.
But they also face a practical reality: submission volumes are growing well beyond what traditional methods were designed to handle, timelines are tight, and the tools available haven't kept pace.
We partnered with Global Research, a New Zealand-based research consultancy that specialises in public policy analysis, to build something that could help. The result is a platform called Stoa.
The Challenge
Research teams analysing public consultations face a tension between depth and scale. Deep qualitative analysis — the kind that identifies themes, assesses stances, surfaces evidence, and produces findings worth acting on — takes time and expertise. When submission volumes grow, teams have to make difficult choices about how to allocate that expertise.
General-purpose tools can summarise text, but they don't offer the structure, transparency, or analytical control that serious policy research requires. Traditional qualitative software is thorough but was designed for a different era of data volumes.
Global Research wanted a third option — one that gave them more flexibility in how they approached large consultations without asking them to hand over analytical control to a black box.
What We Built
Stoa is a web-based platform that processes public consultation submissions and produces structured analytical outputs — themes, sentiment, stances, and evidence flags — that research teams can explore, filter, and refine through a browser interface.
It's designed to sit alongside existing analytical practice, not replace it. A team might use Stoa to get an initial structured read on a large dataset, then focus their expert attention where it matters most. Or they might use it to validate their own manual coding against an independent pass. Or they might run it on the highest-volume questions while handling lower-volume questions entirely by hand.
The point is flexibility. Stoa gives research teams another tool in the kit — one that handles volume well and keeps them in control of how the analysis works.
Four Things That Mattered in the Design
The research team owns the analytical framework. The instructions that shape Stoa's analysis are fully visible and editable. Global Research can see exactly how the platform is approaching their data, adjust that approach, and refine it over time. Their domain expertise — the product of years spent working in New Zealand policy research — is what drives the analysis. The technology follows their lead.
Human judgement stays central. Stoa includes a review interface where analysts can examine every analytical decision the platform has made, see the reasoning behind it, and accept, adjust, or override it. Every change is recorded with a full audit trail. This isn't about removing people from the process — it's about giving them better ways to direct their attention across large datasets.
Transparency is built in, not bolted on. Every theme mapping comes with an explanation. Every human edit is recorded. The full history of how an analysis evolved — from initial processing through to final human review — is preserved and exportable. When a client asks "how did you reach this conclusion?", the answer is documented.
The client controls where the data goes. This was a design priority from the start, not an afterthought. Stoa is built to run with powerful frontier models for maximum analytical capability, or with self-hosted open-weight models running on New Zealand-controlled infrastructure — meaning submissions never have to leave the country. Making that second option real required custom architecture and building relationships with New Zealand-based infrastructure providers. It's not something you get with off-the-shelf software. Global Research can choose the right approach for each engagement: frontier models when performance matters most, sovereign deployment when the data demands it. For government consultations on sensitive legislation, that choice is often made for you. Stoa makes sure it's a choice you actually have.
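One way to picture the per-engagement deployment choice is a small routing rule: sovereign infrastructure whenever the data demands it, frontier models otherwise. Again, this is a hypothetical sketch; the backend names and the `sovereign_required` flag are invented for illustration:

```python
from dataclasses import dataclass


@dataclass
class Engagement:
    name: str
    sovereign_required: bool  # e.g. sensitive government legislation


# Hypothetical backends: a frontier-model API, versus open-weight models
# running on New Zealand-controlled infrastructure.
FRONTIER = {"backend": "frontier-api", "data_leaves_nz": True}
SOVEREIGN = {"backend": "open-weights-nz", "data_leaves_nz": False}


def choose_backend(engagement: Engagement) -> dict:
    """Route to sovereign infrastructure whenever the engagement requires it."""
    return SOVEREIGN if engagement.sovereign_required else FRONTIER


picked = choose_backend(Engagement("sensitive-consultation", sovereign_required=True))
```

The point of the sketch is that the decision is explicit and per-engagement, rather than baked into the platform.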
Why It Matters
New Zealand has a strong tradition of public participation in policy-making. Select committees, regulatory consultations, community engagement processes — these all depend on people's contributions being genuinely heard and carefully considered.
As submission volumes grow — and the Treaty Principles Bill showed just how dramatically they can grow — the risk isn't that research teams stop doing good work. It's that the tools available force uncomfortable trade-offs between thoroughness and timeliness. Stoa is designed to ease that tension — to let experienced analysts apply their judgement at a scale that wasn't previously practical, with a transparent record of how every finding was reached.
The Bigger Picture
We started Fridai because we believed intelligent technology should make expert work more effective — not replace the expertise. Stoa is a clear example of that philosophy. Global Research's analysts are still doing the thinking. They're still making the calls. They just have more options for how they approach the work, particularly when the volume is high and the timeline is tight.
If your organisation deals with large volumes of qualitative data — public consultations, community engagement, survey responses, stakeholder feedback — and you're looking for a way to handle that volume without compromising on rigour, we'd like to talk.