Over the last few years, there has been an explosion of survey platforms that promise to do everything. Collect the data. Analyse it. Produce charts. Build dashboards. Generate executive summaries, increasingly with the help of AI. Share results. Job done.
And, to be fair, for some projects, that really is enough.
But for many research teams, particularly agencies and insight teams dealing with complex, recurring, or high-stakes work, the cracks start to show fairly quickly. Not because these platforms are bad, but because the reality of modern research is messier, more varied, and more demanding than any single “end-to-end” tool can comfortably support.
The quiet shift in expectations
One change that often goes unspoken is the pressure now placed on researchers by their clients.
Research buyers increasingly expect:
- results delivered in their preferred formats
- data that can flow into internal systems
- outputs that can be reused, revisited, or re-cut later
- fewer bottlenecks and less friction
- and, often, everything wanted yesterday
Whether or not this is always realistic is almost beside the point. Expectations have shifted.
Clients are less patient with “that’s just how the system works” explanations, particularly when they themselves are under pressure to integrate insights quickly into decisions, dashboards, or planning processes.
This creates a tension. Researchers are expected to be flexible, but many of the tools they rely on are not.
Platforms versus reality
Most platforms are designed around a clear philosophy: keep everything inside one system. That approach has advantages. It reduces choice, limits obvious errors, and works well when projects fit neatly within the platform’s design assumptions. Problems arise when they don’t.
Typical friction points include:
- combining survey data with other sources
- handling tracking studies or multi-country work
- delivering outputs in more than one format
- repeating analysis reliably as new data arrives
- making “small” changes that turn out not to be small at all
At this stage, teams often stop working with the platform and start working around it. Data gets exported to Excel “just to tweak something”, then re-imported. Logic is rebuilt elsewhere. One-off fixes become embedded practice.
This kind of Excel round-tripping is a warning sign. It usually means the system underneath is no longer flexible enough, and it comes with no audit trail, limited transparency, and growing operational risk.
Why a hub mindset matters
This is where a different way of thinking starts to make sense. Rather than asking a single platform to do everything, an alternative approach is to treat analysis and data management as a hub, a central point where data can be shaped, analysed, validated, and prepared for whatever comes next.
In this model:
- Data can come from multiple sources
- The analysis does not assume a single output
- Repetition is expected, not feared
- Flexibility is designed in, not bolted on
Crucially, this doesn’t mean abandoning specialist tools. Quite the opposite.
It means using the best tools for collection, visualisation, reporting, or delivery, without being constrained by the weakest link in the chain.
The role of analysis at the centre
Tabulations, variable generation, weighting, and data restructuring may no longer be the visible “end product” of research, but they remain foundational. If this layer is rigid, everything built on top of it inherits that rigidity.
A central analysis layer that is:
- Powerful
- Repeatable
- Transparent
- Adaptable
gives researchers room to manoeuvre when requirements change, which, in practice, they almost always do.
AI can assist at various points in the process, particularly in summarisation or exploration. But without a robust, flexible analytical core, AI simply accelerates whatever structure, good or bad, already exists underneath.
Not another platform, something different
None of this suggests that platforms are obsolete or that every project needs maximum flexibility. Many projects don’t. But as research becomes more interconnected with wider business systems, and as delivery expectations continue to rise, it becomes risky to rely on tools that assume a single, fixed way of working.
A hub approach doesn’t promise perfection. What it offers instead is resilience: the ability to cope when projects become more complex, when outputs need to change, or when today’s “one-off” quietly turns into tomorrow’s recurring process.
And in modern market research, that resilience is becoming less of a nice-to-have, and more of a necessity.
If you’re finding that your platform works well right up until it suddenly doesn’t, it may be worth thinking about which parts of your process need flexibility, and which tools should sit at the centre.