In theory, tracking studies should get easier over time. The questionnaire is familiar. The client understands the outputs. The analysis framework is established. Everyone knows what “good” looks like. In practice, the opposite often happens.
Tracking studies have a habit of becoming more fragile, more time-consuming, and more stressful the longer they run. When that happens, the cause is rarely the people involved. Much more often, it is the software and processes sitting underneath.
Change is inevitable and underestimated
The biggest misconception about tracking studies is that they are difficult because they are large or long-running. Size is rarely the real issue. Change is. Questionnaires evolve. Response options are added. Routing is tweaked. Coding frames are adjusted. Weighting targets shift, often at different times in different markets.
None of this is unusual. In fact, most of it is entirely reasonable. What is unusual is software that handles these changes calmly and predictably.
Small changes create long shadows
In tracking studies, small changes rarely stay small. A single extra response option can alter historical comparisons. A tweak to routing can change who answers what. A minor coding adjustment can ripple through derived variables, banners, and summary tables.
Weighting is a particularly common example. In multi-country trackers, targets often change at different times in different markets. New demographic splits are introduced. Population updates arrive mid-year. What starts as a “small” adjustment in one country can quickly turn into a complex maintenance task across the entire study.
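One way to keep per-market target changes manageable is to store them as dated rules rather than overwriting a single set of targets. The sketch below is illustrative only; the country codes, wave numbers, and target values are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: weighting targets stored as dated rules per market,
# so a mid-year update in one country never touches the others.

@dataclass(frozen=True)
class TargetRule:
    country: str
    effective_wave: int           # first wave this rule applies to
    targets: dict = field(default_factory=dict)

RULES = [
    TargetRule("UK", effective_wave=1,  targets={"age_18_34": 0.28}),
    TargetRule("UK", effective_wave=14, targets={"age_18_34": 0.30}),  # mid-year update
    TargetRule("DE", effective_wave=1,  targets={"age_18_34": 0.26}),
]

def targets_for(country: str, wave: int) -> dict:
    """Pick the most recent rule in force for this country and wave."""
    candidates = [r for r in RULES if r.country == country and r.effective_wave <= wave]
    return max(candidates, key=lambda r: r.effective_wave).targets

# Wave 13 in the UK still uses the original targets; wave 14 picks up the update.
```

Because old rules are never deleted, re-running an earlier wave reproduces exactly the targets that applied at the time.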
These dependencies are easy to underestimate, especially under pressure to “just make it work for this wave”. But over time they accumulate, and the workload doesn’t grow by 5% or 10%. It can double, treble, or worse.
How tracking studies quietly break
Many platforms cope with change by rewriting data. Old waves are physically recoded to match the latest questionnaire. Historical data is reshaped so that everything appears consistent. Weighting variables are rebuilt again and again as targets evolve. One-off fixes are applied to keep outputs lining up.
This approach can feel efficient in the short term. But it creates a fragile structure underneath. Each rewrite makes the logic harder to follow. The audit trail becomes unclear. Repeating the process for the next wave takes longer. And if something goes wrong, it can be very difficult to unwind what was done, when, and why.
Ironically, the effort invested to “simplify” tracking studies often makes them harder to manage as time goes on.
Rewriting the past is risky business
Physically rewriting data produces a single, neat dataset, but neatness comes at a cost. Errors are harder to detect. Changes are harder to reverse. And the burden of maintaining consistency grows with every wave. In many tracking studies, especially multi-country ones, rewriting data is unnecessary. Different countries may legitimately follow different weighting regimes or questionnaire versions for periods of time. Forcing everything into a single mould often obscures reality rather than clarifying it.
Separate rules from data
The most resilient tracking studies separate what the data is from how it should be interpreted. Instead of forcing historical data to conform to the present, they document change explicitly and apply logic programmatically. Filters, mappings, weighting rules, and variable dependencies are handled through rules rather than physical transformation. That is why a database-driven approach works best: you add rules rather than “fix” data.
This approach does two important things:
- It preserves the original data, reducing risk
- It makes change visible, repeatable, and auditable
When targets change again, as they inevitably will, the logic adapts without the need to rebuild the study from scratch.
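The rules-over-data idea can be shown in a few lines. In this hypothetical sketch (all variable names and code values invented), the raw wave data is never rewritten; a per-wave mapping rule translates historical codes to the current frame at read time.

```python
# Raw responses, stored once and never modified.
RAW = {
    1: [{"q5": 1}, {"q5": 2}],   # wave 1, original code frame
    7: [{"q5": 1}, {"q5": 3}],   # wave 7, after the code frame changed
}

# A rule, not a rewrite: waves before 7 used code 2 for what is now code 3.
CODE_MAPS = {
    "q5": {"before_wave": 7, "map": {2: 3}},
}

def read(wave: int) -> list:
    """Apply mapping rules on the fly; RAW stays intact and auditable."""
    rule = CODE_MAPS["q5"]
    out = []
    for rec in RAW[wave]:
        v = rec["q5"]
        if wave < rule["before_wave"]:
            v = rule["map"].get(v, v)
        out.append({"q5": v})
    return out
```

If the mapping later turns out to be wrong, the fix is a one-line change to the rule, and every wave re-reads correctly; nothing has to be unwound.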
Why automation matters so much for trackers
Tracking studies are unforgiving of manual processes. If every wave requires bespoke fixes (adjusting code lists, reworking weights, rebuilding derived variables, re-checking tables), effort escalates quickly and errors eventually creep in.
Automation, templates, and parameterised logic are not “nice to have” here. They are what keep tracking studies viable. The aim isn’t to eliminate complexity. It’s to contain it: handle it once, apply it consistently, and allow the study to evolve without re-engineering it every time something changes.
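“Handle it once, apply it consistently” can be sketched as a single generic pipeline driven by a small per-wave config. All names here are hypothetical assumptions, not a real product’s API.

```python
# Per-wave differences live in configuration; one pipeline handles every wave.
WAVE_CONFIG = {
    12: {"weight_scheme": "2023_targets", "derived": ["nps_group"]},
    13: {"weight_scheme": "2024_targets", "derived": ["nps_group", "segment"]},
}

def process_wave(wave: int, records: list) -> dict:
    """Run the same steps for every wave; only the parameters change."""
    cfg = WAVE_CONFIG[wave]
    return {
        "wave": wave,
        "n": len(records),
        "weight_scheme": cfg["weight_scheme"],
        "derived_vars": cfg["derived"],
    }

# Adding wave 14 means adding one config entry, not re-engineering the pipeline.
```

The point of the design is that change is expressed as data (a new config entry), so each wave remains a repeatable, auditable run of the same code.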
Tracking studies as a stress test
Tracking studies expose the limits of software faster than almost anything else. If a system struggles with trackers, it will struggle with complexity elsewhere. That doesn’t make it a bad product, but it may not be the right foundation for long-lived, evolving research.
Good tracking software doesn’t promise that change will be easy. It promises that change won’t be catastrophic. And that distinction makes all the difference.
If your tracking studies are getting harder each wave rather than easier, it’s often worth examining whether the problem lies in the study or in the tools supporting it.