30  Tracking the Impact of HR Interventions

30.1 Why Tracking Differs from Monitoring

Monitoring asks whether the intervention is working; tracking asks whether it is still working, and at what cost, after the spotlight has moved.

The previous chapter set out the disciplines of monitoring an HR intervention while it is being delivered. This chapter takes up the longer-running question of tracking the intervention’s impact after the formal monitoring period has ended. Most HR interventions produce their largest, most-watched effects in the first six to twelve months. The interesting analytical question is what happens in the second year, the third year, the fifth — when the spotlight has moved, the budget has been reabsorbed, and the workforce has accommodated to the new reality. Tracking is the discipline that answers that question.

The methodological frame for long-running tracking is given by the longitudinal-research literature. As Ployhart and Vandenberg (2010) set out in their treatment of longitudinal research design and analysis, the questions a tracking programme can answer depend on the temporal structure of the data, the spacing of measurements, and the assumptions the analyst is willing to defend. A pre-and-post comparison answers a different question from a five-cycle growth-curve model, and the dashboard that does not declare the temporal frame leaves the audience to guess at the strength of the inference.

The classical foundations of the field were laid earlier. As Cook and Campbell (1979) argued in their classic work on quasi-experimentation, the threats to internal validity that monitoring controls for in the short term — history, maturation, regression to the mean, instrumentation — operate equally over longer horizons, and a tracking programme has to address them deliberately rather than assume they have stopped mattering. The dashboard is the working surface where those threats are surfaced and contested rather than hidden.

The visualisation lens is what makes a multi-year tracking programme legible. A growth-curve chart shows the trajectory rather than the level. A cohort-survival chart shows whether each cohort retains the gain. A ratio-of-effects chart shows whether the original effect has decayed, persisted, or grown. A page that surfaces all three for an intervention is a page that lets the audience read the longer story rather than only the headline of the first six months.

Tip: The tracking contract
  1. Every HR intervention that earns dashboard space at monitoring also earns a tracking plan that extends past the immediate monitoring window into at least a multi-cycle horizon.
  2. The tracking design is named on the page — pre-and-post, cohort, growth-curve, time-series — so that the audience reads the inference at the strength the design supports.
  3. Tracking surfaces decay, persistence, and re-emergence honestly. A programme whose effect has faded is reported as such, with the implications for the next investment round rendered alongside the chart.

30.2 Longitudinal Tracking Designs

Long-run tracking uses a small set of longitudinal designs, each of which supports different inferences about the persistence and decay of an intervention’s effect. The choice of design depends on what the firm’s routinely collected data allows.

Tip: Four Longitudinal Designs for Tracking

| Design | What it tracks | Visualisation | Strength |
|---|---|---|---|
| Repeated cross-section | The same indicator measured at intervals across the workforce | Trend line with intervention markers | Reveals aggregate change but not individual trajectories |
| Cohort follow-up | Specific groups followed across cycles | Cohort survival or trajectory chart | Reveals whether the gain persists for those who received it |
| Panel study | The same individuals measured across cycles | Growth-curve chart | Supports the strongest individual-trajectory claims |
| Time-series with intervention markers | Long history with multiple interventions noted | Multi-marker time-series chart | Supports comparison across interventions |
Tip: Choosing the design for the question

A repeated cross-section is fine for asking whether a workforce-wide indicator has changed, but it cannot tell you whether the people who experienced the intervention retained the gain. A panel study can answer that, but it requires the firm to have collected matched individual measurements across cycles. A cohort follow-up sits between the two: cheaper to maintain than a panel, more informative than a cross-section. As Ployhart and Vandenberg (2010) argue, the right design is the one whose data the firm already has or can build, paired with the question the audience is actually asking.
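
The contrast between the first two designs can be made concrete with a minimal sketch. The records below are wholly invented, and the field names (`cycle`, `employee`, `cohort`, `score`) are hypothetical stand-ins for whatever the firm actually collects; the point is that the same table supports both an aggregate cross-section read and a per-cohort trajectory.

```python
import pandas as pd

# Invented records: survey cycle, employee id, joining cohort, engagement score.
df = pd.DataFrame({
    "cycle":    [1, 1, 1, 2, 2, 2],
    "employee": ["a", "b", "c", "a", "b", "d"],
    "cohort":   [2021, 2022, 2022, 2021, 2022, 2023],
    "score":    [3.0, 3.5, 4.0, 3.2, 3.6, 3.9],
})

# Repeated cross-section: one aggregate per cycle, whoever happens to be present.
cross_section = df.groupby("cycle")["score"].mean()

# Cohort follow-up: one trajectory per joining cohort, tracked across cycles.
cohort_follow_up = df.groupby(["cohort", "cycle"])["score"].mean().unstack()
```

Note that the cross-section can move simply because the mix of people changed between cycles, which is exactly the inference gap the cohort view closes.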

30.3 Tracking Methods

Three working methods recur in long-run intervention tracking. They differ in the kinds of pattern they surface and in the analytical effort they require, but each is rendered as a chart the dashboard can carry.

Tip: Three Methods for Tracking Impact Over Time

| Method | What it does | Pattern it reveals |
|---|---|---|
| Cohort comparison | Compares cohorts that received the intervention with those that did not | Whether the cohort effect persists, decays, or re-emerges |
| Growth-curve modelling | Fits a trajectory for each individual or unit | Differences in trajectory across groups, conditions, or cohorts |
| Interrupted time series with controls | Compares the trajectory of treated and control units before and after | Whether the treated trajectory diverged sustainably |
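
The third method can be sketched in a few lines. The quarterly retention figures below are invented, with a hypothetical intervention landing after quarter 4; the difference-in-differences on the level shift is the simplest reading an interrupted time series with controls supports.

```python
# Invented quarterly retention rates (%) for treated and control units.
treated = [80, 81, 80, 81, 85, 86, 86, 87]
control = [79, 80, 80, 81, 81, 82, 81, 82]
CUT = 4  # index of the first post-intervention quarter

def mean(xs):
    return sum(xs) / len(xs)

def pre_post_shift(series, cut):
    """Mean level after the intervention minus mean level before it."""
    return mean(series[cut:]) - mean(series[:cut])

# Difference-in-differences on the level shift: the treated units' shift
# net of whatever shift the control units show over the same window.
dd = pre_post_shift(treated, CUT) - pre_post_shift(control, CUT)
```

A fuller analysis would also compare slopes before and after, but the level-shift version is enough to show why the control series matters: without it, the treated shift of 5.5 points would be over-read.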
Tip: The decay-and-persistence question

The most useful single question a tracking programme can answer is whether the intervention’s effect persists, decays, or re-emerges. Persistence is the desired outcome and the rarest. Decay is more common than firms typically admit. Re-emergence — an effect that fades and then returns — is rarer still and usually signals a follow-on intervention or a structural change. As Cook and Campbell (1979) observe, the threats to long-running inference are real but tractable when the design declares its assumptions and the dashboard surfaces them. Render decay and persistence as the same chart, with the same axes, so that the audience reads the trajectory rather than infers it.
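
One way to make the three-way read mechanical is a small classifier over per-cycle effect estimates. This is only a sketch: the 50% threshold for "faded" is an arbitrary working choice, not a standard test, and a production version would account for estimation error.

```python
def classify_effect(effects, floor=0.5):
    """Classify a sequence of per-cycle effect estimates.

    `effects[0]` is the initial effect; a later cycle counts as "faded"
    when its effect drops below `floor` times that baseline (an assumed
    working threshold, not a statistical criterion).
    """
    baseline = effects[0]
    faded = [e < floor * baseline for e in effects[1:]]
    if not any(faded):
        return "persistence"   # never dropped below the floor
    if faded[-1]:
        return "decay"         # below the floor at the latest cycle
    return "re-emergence"      # faded at some point, then recovered
```

The value of even a crude rule like this is that the decay-and-persistence panel applies the same definition to every intervention, so the audience compares like with like.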

30.4 Attribution and the Counterfactual

The longer the tracking horizon, the harder the attribution problem becomes. By year three of a leadership programme, dozens of other interventions have run, the labour market has shifted, and the firm itself has changed. The effect that survives over multiple cycles is increasingly entangled with everything else the workforce has been through, and the dashboard has to render the entanglement honestly.

Tip: Three Strategies for Long-Run Attribution

| Strategy | What it does | When it is useful |
|---|---|---|
| Maintained control group | Continues to measure a comparison group across the tracking horizon | When a non-treated group can be plausibly maintained |
| Synthetic counterfactual | Constructs a counterfactual trajectory from external benchmarks | When the comparison group has been contaminated or lost |
| Bracketing analyses | Renders multiple plausible attribution scenarios on the page | When no single counterfactual is defensible |
Tip: Bracketing as the honest default

When attribution is genuinely uncertain — and over a multi-year horizon it usually is — the most credible move is bracketing. The HR function renders the range of plausible long-run effects given the assumptions it can defend, and the audience reads the chart as a range rather than as a point estimate. Bracketing is harder than a single confident chart, but it earns more trust over the long run because it does not invite the kind of correction that a confident overstatement eventually produces.
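
The arithmetic behind a bracketing panel is deliberately simple; the judgement lives in the scenario shares, not the code. In the sketch below the 6-point gap and the attribution shares are invented illustrations, the kind of numbers the page itself would have to defend.

```python
# Hypothetical: the realised trajectory ended 6 points above the
# extrapolated counterfactual.
observed_gap = 6.0

# Each scenario attributes a different share of the gap to the intervention.
# The shares are judgement calls to be argued on the page, not data.
scenarios = {"strong": 1.0, "moderate": 0.5, "weak": 0.1}

attributed = {name: share * observed_gap for name, share in scenarios.items()}

# The dashboard renders [low, high] as a band rather than a point estimate.
low, high = min(attributed.values()), max(attributed.values())
```

The band [0.6, 6.0] is the honest headline here: anyone quoting only the strong scenario is quoting the chart selectively.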

30.5 Visualising Long-Term Tracking

The tracking dashboard is read more rarely than the monitoring dashboard, but by a more senior audience. The chief people officer, the executive committee, and sometimes the board open the page to read the long story of an investment that may have been agreed years earlier. Five design choices, applied consistently, hold the long story together.

Tip: Five Design Choices for the Tracking Dashboard

| Choice | What it does on the page |
|---|---|
| Trajectory as the headline | The chart shows the trajectory across cycles rather than a level at one cycle |
| Intervention marker | Each intervention is marked on the time axis with date and label |
| Cohort decomposition | Each cohort that received the intervention is rendered separately |
| Counterfactual band | The plausible non-intervention trajectory is rendered as a band |
| Decay-and-persistence panel | A small panel summarises whether the effect has persisted, decayed, or re-emerged |
Tip: The arc of long-running tracking

flowchart LR
  A[Intervention<br/>delivered] --> B[Monitoring<br/>first cycles]
  B --> C[Tracking<br/>multi-cycle horizon]
  C --> D[Decay or persistence<br/>read from the trajectory]
  D --> E[Reinvestment<br/>or retirement decision]
  E --> A
  style A fill:#E8F0FE,stroke:#1A73E8
  style C fill:#E6F4EA,stroke:#137333
  style E fill:#F3E8FD,stroke:#8430CE

Tracking closes the loop on the original intervention by feeding evidence back into the next reinvestment decision. The dashboard’s value is to make the loop visible: when the same intervention is renewed, redesigned, or retired, the audience can see the trajectory that justified the choice. Building the loop deliberately is what turns one-off interventions into a programme of accumulated capability and credibility.

30.6 Hands-On Exercise: Longitudinal Cohort Tracking

Note: Aim, Scenario, Dataset, Deliverable

Aim. Build a longitudinal cohort-tracking analysis that follows joining cohorts across multiple cycles, fits a working growth-curve model, and surfaces the trajectory with intervention markers and a counterfactual band on a Power BI page.

Scenario. You are tracking the long-run impact of an onboarding redesign on first-year retention and ramp time across multiple joining cohorts. The dataset gives you each employee’s tenure cohort, performance, and attrition status, and you treat tenure cohort as a proxy for joining year for the purpose of this lab.

Dataset. The IBM HR Analytics Employee Attrition dataset, available publicly on Kaggle at www.kaggle.com/datasets/pavansubhasht/ibm-hr-analytics-attrition-dataset. Use YearsAtCompany to derive joining cohorts (recent, mid, long-tenure), Attrition as the retention outcome, PerformanceRating as the performance outcome, and JobInvolvement, JobSatisfaction, OverTime as covariates.

Deliverable. A Cohort-Tracking.xlsx workbook with cohort survival and performance trajectories, plus a Cohort-Tracking.pbix Power BI file with the long-running tracking page described below.

30.6.1 Step 1 — Build joining cohorts from tenure

Add a Joining Cohort column derived from YearsAtCompany (for example, 0–2 years, 3–5, 6–10, 10+). Treat each cohort as a wave for the lab; in production this column would carry the actual joining year.
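
For readers who prefer to prototype the binning outside Excel, the same step can be sketched in pandas. The tenure values below are invented, and the bin edges follow the bands the step names, with each upper edge inclusive.

```python
import pandas as pd

# Invented slice of the dataset; only YearsAtCompany matters for this step.
hr = pd.DataFrame({"YearsAtCompany": [0, 1, 2, 3, 5, 6, 10, 11, 25]})

# Bin tenure into the joining cohorts the exercise names: 0-2, 3-5, 6-10, 10+.
# pd.cut uses half-open intervals (low, high], so -1 catches tenure 0.
hr["Joining Cohort"] = pd.cut(
    hr["YearsAtCompany"],
    bins=[-1, 2, 5, 10, float("inf")],
    labels=["0-2", "3-5", "6-10", "10+"],
)
```

In production the column would carry the actual joining year, but the binning pattern is the same.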

30.6.2 Step 2 — Compute cohort-level attrition and performance

Excel Formula (<c> stands for the cohort label):

Cohort Attrition Rate    = COUNTIFS(HR[Joining Cohort], <c>, HR[Attrition], "Yes")
                         / COUNTIF(HR[Joining Cohort], <c>) * 100
Cohort Mean Performance  = AVERAGEIFS(HR[PerformanceRating], HR[Joining Cohort], <c>)

Plot both measures by cohort. Render cohort attrition as a survival curve and cohort mean performance as a trajectory line.
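
The lab is Excel-based, but the same two measures fall out of a pandas groupby for anyone working in Python. The toy records below are invented; the column names follow the lab's schema.

```python
import pandas as pd

# Invented minimal records; in the lab these come from the IBM dataset.
hr = pd.DataFrame({
    "Joining Cohort":    ["0-2", "0-2", "0-2", "3-5", "3-5"],
    "Attrition":         ["Yes", "No", "No", "Yes", "Yes"],
    "PerformanceRating": [3, 3, 4, 3, 4],
})

# Mirrors the COUNTIFS/COUNTIF ratio: share of leavers per cohort, in %.
attrition_rate = (
    hr.groupby("Joining Cohort")["Attrition"]
      .apply(lambda s: (s == "Yes").mean() * 100)
)

# Mirrors AVERAGEIFS: mean performance rating per cohort.
mean_performance = hr.groupby("Joining Cohort")["PerformanceRating"].mean()
```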

30.6.3 Step 3 — Fit a working growth curve

Using the Analysis ToolPak’s regression tool, fit PerformanceRating against tenure (in years) plus JobInvolvement and JobSatisfaction as controls. The slope coefficient on tenure is the working growth-curve estimate.
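
The same working regression can be reproduced outside Excel with an ordinary least-squares solve. The sketch below uses NumPy on invented, noise-free data built from known coefficients, so the fitted tenure slope simply recovers the 0.15 it was constructed with; real data would of course carry noise.

```python
import numpy as np

# Invented, noise-free toy data built from known coefficients.
tenure  = np.array([1, 2, 3, 4, 5, 6], dtype=float)
involve = np.array([2, 3, 3, 2, 3, 4], dtype=float)
satisf  = np.array([3, 3, 4, 2, 4, 3], dtype=float)
perf = 2.0 + 0.15 * tenure + 0.1 * involve + 0.05 * satisf

# Design matrix: intercept, tenure, JobInvolvement, JobSatisfaction.
X = np.column_stack([np.ones_like(tenure), tenure, involve, satisf])
coef, *_ = np.linalg.lstsq(X, perf, rcond=None)

tenure_slope = coef[1]  # the working growth-curve estimate
```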

30.6.4 Step 4 — Add the intervention marker

Pick a tenure cut-off as the proxy intervention point (for example, employees with tenure between 3 and 5 years experienced the redesigned onboarding). Render the trajectory before and after the cut-off, with the cut-off marked on the time axis.

30.6.5 Step 5 — Build the counterfactual band

Construct a counterfactual trajectory by extrapolating the pre-intervention trend forward. Render it as a band beside the realised trajectory.
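
The extrapolation can be sketched as a linear fit to the pre-period. The cycle values below are invented, and the band width rule (two pre-period residual standard deviations) is an assumption to declare on the page, not a statistical guarantee.

```python
import numpy as np

# Invented: six pre-intervention cycles, four realised post cycles.
pre = np.array([70.0, 71.0, 71.5, 72.0, 73.0, 73.5])
post_realised = np.array([76.0, 77.0, 77.5, 78.0])

# Fit a linear pre-trend and extrapolate it over the post window.
t_pre = np.arange(len(pre))
slope, intercept = np.polyfit(t_pre, pre, 1)
t_post = np.arange(len(pre), len(pre) + len(post_realised))
counterfactual = intercept + slope * t_post

# A crude band: widen by twice the pre-period residual spread.
resid_sd = np.std(pre - (intercept + slope * t_pre))
band_low = counterfactual - 2 * resid_sd
band_high = counterfactual + 2 * resid_sd
```

When the realised trajectory sits clearly above the band, as it does with these invented numbers, the page can claim a divergence worth bracketing in Step 6.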

30.6.6 Step 6 — Document attribution and bracketing

On a Bracketing sheet, document three plausible attribution scenarios for the observed difference between the realised and counterfactual trajectories: a strong claim attributing the difference to the intervention, a moderate claim attributing it partly to the intervention and partly to changes in the labour market, and a weak claim attributing little to the intervention. The dashboard surfaces all three.

30.6.7 Step 7 — Promote to Power BI and build the tracking page

Lay out the page using the design choices from Section 30.5.

  • The trajectory chart shows attrition and performance across cohorts as the headline.
  • An intervention marker on the time axis names the date and the redesign.
  • A cohort-decomposition view shows each cohort’s trajectory rendered separately.
  • A counterfactual band overlays the plausible non-intervention trajectory.
  • A decay-and-persistence panel summarises whether the effect has persisted, decayed, or re-emerged across cohorts.

30.6.8 Step 8 — Publish

Publish the report and connect it to the annual workforce-strategy review. Confirm that the bracketing-scenario panel is read alongside the headline trajectory so that the audience reads the inference at the strength the data supports.

Tip: Connect to the Visualisation Layer

The tracking page sits downstream of the monitoring page from Chapter 29. The difference-in-differences estimate from the monitoring page becomes the first cycle’s data point on the tracking page’s longitudinal trajectory. The two pages together form the monitoring-and-tracking block of Module 4.

Tip: Files and Screen Recordings

Cohort-Tracking.xlsx, Cohort-Tracking.pbix, and ch30-tracking-walkthrough.mp4 will be attached at this point in the published edition. The screen recording walks through Steps 1 to 8 with the Excel cohort workbench and the Power BI tracking page shown side by side.

Summary

| Concept | Description |
|---|---|
| **Why Tracking Differs from Monitoring** | |
| Tracking versus monitoring | Monitoring asks whether it is working now; tracking asks whether it is still working later |
| Longer horizon question | The interesting question is what happens after the spotlight has moved |
| Decay-and-persistence read | Persistence, decay, and re-emergence are the patterns to read from a long trajectory |
| Threats to long-run inference | Internal-validity threats operate over years as well as months |
| Reinvestment-and-retirement loop | Tracking feeds back into the decision to renew, redesign, or retire the intervention |
| **Longitudinal Designs** | |
| Repeated cross-section design | Same indicator measured at intervals across the workforce; aggregate-level inference |
| Cohort follow-up design | Specific groups followed across cycles; reveals whether gain persists |
| Panel study design | Same individuals measured across cycles; supports the strongest individual claims |
| Time-series with intervention markers | Long history with multiple interventions noted; supports cross-intervention comparison |
| Choosing design for the question | Choose the design whose data the firm has and that answers the audience question |
| **Tracking Methods** | |
| Cohort comparison method | Compares cohorts that received the intervention with those that did not |
| Growth-curve modelling | Fits a trajectory for each individual or unit and compares across groups |
| Interrupted time series with controls | Compares the trajectory of treated and control units before and after |
| Persistence pattern | The intervention's effect remains over the tracking horizon |
| Decay pattern | The intervention's effect fades over time |
| Re-emergence pattern | An effect that fades and then returns, often signalling a follow-on intervention |
| **Attribution and Counterfactuals** | |
| Maintained control group | A non-treated group continues to be measured across the tracking horizon |
| Synthetic counterfactual | A counterfactual trajectory constructed from external benchmarks |
| Bracketing analyses | Multiple plausible attribution scenarios rendered on the page |
| Bracketing as the honest default | When attribution is genuinely uncertain, render the range rather than a point |
| **Visualising Long-Term Tracking** | |
| Trajectory as the headline | The chart shows the trajectory across cycles rather than a level at one cycle |
| Intervention marker on the time axis | Each intervention is marked on the time axis with date and label |
| Cohort decomposition view | Each cohort that received the intervention is rendered separately |
| Counterfactual band | The plausible non-intervention trajectory is rendered as a band |
| Decay-and-persistence panel | A small panel summarises whether the effect has persisted, decayed, or re-emerged |
| **The Tracking Loop in Practice** | |
| Loop from tracking to reinvestment | The dashboard makes the loop from intervention to reinvestment visible |
| Multi-year audience | The CHRO, executive committee, and sometimes the board read the long story |
| Honest range over confident point | Bracketed ranges earn more trust than confident overstatement |
| Programme of accumulated capability | Long-run tracking turns one-off interventions into accumulated capability |
| Credibility from longer reporting | Honest cycle-after-cycle reporting builds credibility a single chart cannot |