28 Optimising Selection and Promotion Decisions

28.1 Why Optimisation Matters

A selection or promotion decision is rarely the choice of a single best candidate; it is the choice of a rule, applied across many candidates, whose value compounds over time.

The previous module described how to design, validate, and monitor selection and performance models. This chapter addresses the optimisation question specifically: given the validated tools, how does the firm allocate them to maximise the value of selection and promotion decisions across the workforce as a whole? Optimisation is not the act of finding the best candidate for a single role. It is the act of designing the rule by which thousands of decisions are made each year, with each decision contributing a small amount of value that compounds over the firm’s planning horizon.

The economic frame for optimisation has been articulated for decades. As John W. Boudreau & Peter M. Ramstad (2007) set out in their work on the talent decision science underlying the LAMP framework, every selection and promotion decision is implicitly an investment with a cost, an expected return, and a distribution of plausible outcomes. The optimisation question is the same one a finance function asks of any investment: given the validity of the tools, the variance of performance in the role, the cost of the selection process, and the strategic criticality of the position, what allocation maximises the firm’s expected return on its workforce?

The applied vocabulary for that economic frame is utility analysis, and the contemporary treatment is given by Wayne F. Cascio & Herman Aguinis (2019) in their textbook on applied psychology in talent management. Utility analysis combines the validity coefficient of the selection model, the variance in job performance, the criticality of the role, the cost of the selection process, and the time horizon over which the hire’s contribution accrues, into a single estimate of the financial value the selection decision is expected to produce. The dashboard that surfaces utility components is the dashboard that lets the executive committee read selection and promotion as financial decisions rather than as administrative ones.

The visualisation lens is what carries optimisation into the working calendar of the function. A utility analysis without a chart is an internal report. A utility analysis rendered on the dashboard, with the components visible and the trade-offs explicit, becomes a recurring conversation with leadership about where the next selection investment should go. The page is one of the few in the HR analytics estate that translates directly into capital allocation language, and the function that builds it deliberately earns its seat in the capital-allocation conversation.

Tip: The selection-and-promotion-optimisation contract
  1. Every selection or promotion decision design is paired with a utility analysis that combines validity, performance variance, role criticality, cost, and time horizon into a single defensible estimate.
  2. Optimisation is treated as a portfolio problem. The firm allocates selection investment across roles based on expected return, not on tradition or convenience.
  3. The dashboard renders utility components — validity, variance, criticality, cost, horizon — as separate visuals so that the audience can audit the assumptions rather than read only the headline number.

28.2 The Utility Analysis Framework

Utility analysis is the working framework for optimisation. It combines five inputs into a single estimate of the financial value of a selection or promotion decision rule. Each input is measurable, each is auditable, and each is something the dashboard can render as its own visual.

Tip: The Five Inputs of Utility Analysis

| Input | What it captures | Source |
|---|---|---|
| Validity | How well the selection or promotion model predicts performance | Validation study, monitoring data |
| Performance variance | How much performance varies across incumbents in the role | Performance system, paired with rating-quality evidence |
| Role criticality | How much a unit of performance change is worth to the firm | Strategy alignment, financial models |
| Selection cost | What the selection or promotion process costs per decision | Recruitment ledger, internal cost data |
| Time horizon | How long the hire’s contribution accrues | Workforce-planning data, attrition forecasts |
Tip: The compound effect

The intuition behind utility analysis is that small per-decision improvements compound. A modest improvement in validity, applied across thousands of selection decisions, produces a substantial cumulative gain over the planning horizon. As Wayne F. Cascio & Herman Aguinis (2019) note, the evidence base supports this compounding effect strongly, and the firms that take it seriously treat selection-method choice as a high-leverage investment rather than as a procurement decision. The dashboard surfaces the compounding by showing the per-decision gain, the volume of decisions, and the cumulative effect over the planning horizon.
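The compounding arithmetic can be sketched in a few lines. The Python below (standing in for the Excel workbook) multiplies the validity gain between two of the exercise dataset's methods through the frontline hiring volume and tenure horizon; the mean standardised score of those hired (0.80) and the rupee value of one SD of performance are illustrative assumptions taken from the working values, not firm data.

```python
# Back-of-envelope compounding sketch (illustrative assumptions, not firm data).
validity_gain = 0.54 - 0.18      # work-sample test vs unstructured interview (r, exercise dataset)
sd_performance = 100_000         # INR value of one SD of frontline performance (exercise dataset)
mean_z_selected = 0.80           # assumed mean standardised score of those hired

# Per-decision gain from switching to the higher-validity method
per_decision_gain = validity_gain * sd_performance * mean_z_selected

# Compounded across the frontline hiring volume and tenure horizon
annual_hires, horizon_years = 200, 4
cumulative_gain = per_decision_gain * annual_hires * horizon_years

print(f"per-decision gain: INR {per_decision_gain:,.0f}")
print(f"cumulative gain:   INR {cumulative_gain:,.0f}")
```

A per-decision gain of a few tens of thousands of rupees becomes a cumulative figure in the crores once volume and horizon are applied, which is the point the dashboard's compounding view makes visually.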

28.3 Optimising Selection Decisions

Selection optimisation has three working levers: the choice of selection method, the cut-score on the selected method, and the allocation of selection investment across role families. Each lever has its own evidence base, its own trade-offs, and its own visual.

Tip: Three Levers of Selection Optimisation

| Lever | What it does | Optimisation question |
|---|---|---|
| Method choice | Decides which selection methods are used and in what combination | Which combination maximises expected utility for this role |
| Cut-score setting | Decides the threshold above which candidates are selected | What cut-score balances false-positive and false-negative costs |
| Investment allocation | Decides how much selection effort each role receives | Which roles deserve disproportionate investment given their criticality |
Tip: The cut-score trade-off

A higher cut-score selects fewer candidates, raises average performance, and reduces hire volume. A lower cut-score selects more candidates, lowers average performance, and increases volume. The optimal cut-score depends on the cost of a false positive (a hire who underperforms), the cost of a false negative (a strong candidate rejected), and the supply of candidates at each score level. The dashboard renders the trade-off as a curve: cut-score on the x-axis, expected utility on the y-axis, with the supply constraint visible and the recommended cut-score marked.
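The mechanics behind the trade-off can be made concrete with the standard normal distribution: for any cut-score, the selection ratio is the tail area above the cut, and the mean standardised score of those selected is the truncated-normal mean pdf(c) / (1 − cdf(c)). A minimal Python sketch, assuming candidate scores are standard-normal:

```python
# Cut-score mechanics under an assumed standard-normal score distribution.
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, sd 1

def selection_stats(cut):
    """Return (selection ratio, mean z of selected) for a given cut-score."""
    ratio = 1 - z.cdf(cut)              # share of candidates clearing the cut
    mean_selected = z.pdf(cut) / ratio  # truncated-normal mean E[Z | Z > cut]
    return ratio, mean_selected

for cut in (0.0, 0.5, 1.0):
    ratio, mean_sel = selection_stats(cut)
    print(f"cut = {cut:+.1f}: select {ratio:.0%} of candidates, "
          f"mean z of selected = {mean_sel:.2f}")
```

Raising the cut from 0.0 to 1.0 roughly triples the quality premium per hire while cutting the pool of admissible candidates by two thirds, which is the volume-versus-quality tension the curve on the dashboard renders.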

28.4 Optimising Promotion Decisions

Promotion optimisation differs from selection optimisation in one crucial respect: the candidates are already employees, and the firm has direct evidence of their current performance and trajectory. The optimisation problem becomes one of matching incumbents to opportunities, with the additional constraint that promotion decisions also influence the morale, retention, and development of the unselected candidates.

Tip: Three Levers of Promotion Optimisation

| Lever | What it does | Optimisation question |
|---|---|---|
| Slate construction | Defines the pool of candidates considered for each promotion | How wide and how diverse should the slate be |
| Predictor weighting | Decides how to combine current performance, potential, and other evidence | Which combination predicts post-promotion performance best |
| Sequence and timing | Decides the order and timing of promotion moves across the firm | Which sequence balances readiness, retention, and capability flow |
Tip: The promotion-impact externality

Promotion decisions have a side effect that selection decisions do not: they signal to the unselected. A promotion decision that picks a strong candidate but signals an unfair process to the broader workforce can lower retention and engagement among colleagues whose contribution the firm needs to keep. Promotion optimisation that ignores this externality optimises one decision while creating costs for several others. The dashboard surfaces the broader workforce signal — voluntary attrition among unselected candidates, engagement shifts after promotion announcements — so that the externality is measurable rather than only suspected.

28.5 Visualising Optimisation

The optimisation dashboard surfaces utility components, lever choices, and outcomes on the same page so that the audience can read selection and promotion decisions as a coherent investment programme. Five design choices, applied consistently, hold the optimisation logic together.

Tip: Five Design Choices for the Optimisation Dashboard

| Choice | What it does on the page |
|---|---|
| Utility-component panel | Validity, variance, criticality, cost, and horizon are surfaced as separate tiles |
| Cut-score curve | The trade-off curve shows expected utility against cut-score with supply constraint |
| Investment-by-role-family panel | Investment levels are surfaced by role family with criticality colour-coded |
| Promotion-slate breadth view | Slate breadth is surfaced alongside slate quality |
| Externality tracker | Voluntary attrition among unselected candidates is rendered alongside the selected outcomes |
Tip: The arc of an optimised decision

flowchart LR
  A[Validity<br/>method evidence] --> Z[Utility Estimate<br/>combined per decision]
  B[Variance<br/>performance spread] --> Z
  C[Criticality<br/>role weight] --> Z
  D[Cost<br/>selection investment] --> Z
  E[Horizon<br/>contribution period] --> Z
  Z --> Y[Decision Rule<br/>method, cut-score, allocation]
  Y --> X[Realised Outcome<br/>tracked across cycles]
  X --> Z
  style Z fill:#FEF7E0,stroke:#F9AB00
  style Y fill:#E6F4EA,stroke:#137333
  style X fill:#F3E8FD,stroke:#8430CE

The loop closes when realised outcomes feed back into the next cycle’s utility estimate. A function that runs the loop deliberately produces decision rules whose performance improves over time, and a dashboard whose credibility compounds with each cycle. The page is one of the few in the HR analytics estate that earns the executive’s attention through repeated calibration rather than through a single confident headline.

28.6 Hands-On Exercise: Building a Utility-Analysis Workbook

Note: Aim, Scenario, Dataset, Deliverable

Aim. Build a utility-analysis spreadsheet that combines validity, performance variance, role criticality, cost, and horizon into a per-decision and cumulative utility estimate, then surface the cut-score trade-off curve and the investment-by-role-family panel in Power BI.

Scenario. You are evaluating selection-method options for Yuvijen Telecom’s frontline-engineering hiring programme. The function has three candidate methods — a structured work-sample test, a cognitive-ability test, and an unstructured interview — and you need to defend the chosen method on its expected utility across the firm’s planning horizon.

Dataset. A synthetic Yuvijen utility-analysis workbook you will assemble in Excel using the structure below. The values are working assumptions you would replace with the firm’s own evidence in production.

| Sheet | Columns | Working values |
|---|---|---|
| Methods | Method, Validity (r), Cost per Candidate, Time per Candidate (hours) | Work-sample 0.54 / 350 / 4; Cognitive 0.51 / 80 / 1; Unstructured interview 0.18 / 60 / 1 |
| Roles | Role Family, Performance Variance (SD in INR), Criticality Weight, Annual Hires, Tenure Horizon (years) | Frontline 100000 / 1.0 / 200 / 4; Specialist 250000 / 2.0 / 40 / 6; Leader 600000 / 3.0 / 8 / 8 |
| Selection Ratios | Cut-Score Z, Selection Ratio, Mean Z of Selected | Use a normal distribution to derive |

Deliverable. A Yuvijen-Utility-Analysis.xlsx workbook with the per-method, per-role utility computation and the cut-score trade-off curve, plus a Yuvijen-Utility-Analysis.pbix Power BI file with the optimisation page described below.

28.6.1 Step 1 — Set up the three sheets

Create the three sheets named in the dataset specification and populate them with the working values. Convert each range to a Table.

28.6.2 Step 2 — Compute the per-decision utility for each method-role pairing

The Brogden-Cronbach-Gleser utility formula combines the inputs; the workbook's version adds the role-criticality weight from the Roles sheet to the classic form.

Excel formula:

    Per-Decision Utility = Validity * SD_Performance * Criticality * Mean_Z_Selected - Cost_per_Candidate / Selection_Ratio

Compute the value for every method-role combination, with Mean_Z_Selected derived from the standard normal distribution at the cut-score the firm uses.
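As a worked check on one pairing — the work-sample test applied to the frontline role family — the sketch below uses the working values from the dataset table and an assumed cut-score of z = 0.5 (the exercise leaves the firm's actual cut-score open). Python stands in for the Excel formula; the spreadsheet mirrors the same arithmetic.

```python
# One method-role pairing from Step 2: work-sample test x frontline engineering.
# Cut-score of z = 0.5 is an illustrative assumption.
from statistics import NormalDist

z = NormalDist()
cut = 0.5                                   # assumed firm cut-score
selection_ratio = 1 - z.cdf(cut)            # share of candidates above the cut
mean_z_selected = z.pdf(cut) / selection_ratio  # mean standardised score of those hired

validity = 0.54            # work-sample validity (Methods sheet)
sd_performance = 100_000   # frontline performance SD in INR (Roles sheet)
criticality = 1.0          # frontline criticality weight (Roles sheet)
cost_per_candidate = 350   # work-sample cost per candidate (Methods sheet)

per_decision_utility = (
    validity * sd_performance * criticality * mean_z_selected
    - cost_per_candidate / selection_ratio   # cost per hire, not per candidate
)
print(f"per-decision utility: INR {per_decision_utility:,.0f}")
```

Note that the cost term divides by the selection ratio: the firm pays to screen every candidate but realises the gain only on those hired, so a tighter cut raises the effective cost per hire.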

28.6.3 Step 3 — Compute cumulative utility across the horizon

For each method, multiply the per-decision utility by the annual hires and the tenure horizon to surface the cumulative gain across the planning window.

Excel formula:

    Cumulative Utility = Per-Decision Utility * Annual Hires * Tenure Horizon

28.6.4 Step 4 — Build the cut-score trade-off curve

Compute per-decision utility at cut-score values from -1.5 to +2.0 standard deviations in 0.5-SD increments, holding all other inputs constant. Where the escalating cost per hire eventually offsets the rising quality of those selected, the curve peaks at an interior cut-score; render it as a line chart with the recommended cut-score marked.
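The sweep can be sketched in Python (standing in for the Excel computation), using the work-sample/frontline working values. Whether the peak falls inside the sampled range depends on the cost assumptions: with a per-candidate cost as low as 350, the curve is still rising at +2.0 SD, which is exactly why the supply constraint has to be drawn on the chart alongside the utility curve.

```python
# Sketch of the Step 4 sweep: per-decision utility across cut-scores.
# Working values: work-sample test x frontline role family.
from statistics import NormalDist

z = NormalDist()
validity, sd_perf, criticality, cost = 0.54, 100_000, 1.0, 350

def utility_at(cut):
    """Per-decision utility at a given cut-score (standard-normal scores assumed)."""
    ratio = 1 - z.cdf(cut)               # selection ratio: share above the cut
    mean_selected = z.pdf(cut) / ratio   # mean z of those selected
    return validity * sd_perf * criticality * mean_selected - cost / ratio

cuts = [c / 2 for c in range(-3, 5)]     # -1.5 to +2.0 in 0.5-SD steps
for c in cuts:
    print(f"cut = {c:+.1f}  utility = {utility_at(c):>10,.0f}")
```

In the workbook, the same sweep is a column of cut-score values fed through the Step 2 formula, charted with the recommended cut-score and the supply constraint marked.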

28.6.5 Step 5 — Compute investment allocation by role family

Multiply the cumulative utility per method by a candidate-cost-per-hire ratio to derive the implied selection-investment intensity for each role family. The strategic-segment role earns disproportionately larger investment, as the investment-allocation lever in Section 3 of this chapter requires.

28.6.6 Step 6 — Build the externality panel

Add a sheet that records the voluntary-attrition rate among unselected candidates after a recent promotion announcement (use a working assumption for the lab). The page later surfaces the externality alongside the optimisation result.

28.6.7 Step 7 — Promote to Power BI

Load the workbook into Power BI and build the per-method, per-role utility measures. Build the cut-score curve as a line chart and the investment-by-role-family panel as a coloured bar chart.

28.6.8 Step 8 — Lay out the optimisation page

Lay out the page using the design choices from Section 5 of this chapter.

  • The utility-component panel surfaces validity, variance, criticality, cost, and horizon as five separate tiles.
  • The cut-score curve sits in the centre with the recommended cut-score marked and the supply constraint visible.
  • The investment-by-role-family panel sits to the right with criticality colour-coded.
  • The externality tracker sits at the bottom rendering voluntary-attrition signals among unselected candidates.

28.6.9 Step 9 — Publish

Publish the report and connect it to the workforce-strategy review meeting. Confirm that selection-method choices and investment allocations are reviewed against the page rather than discussed in the abstract.

Tip: Connect to the Visualisation Layer

The optimisation page draws inputs from the validity dashboard of Chapter 23 (validity coefficients), the bias-and-prediction page of Chapter 24 (calibration), and the segmentation page of Chapter 21 (role criticality). It feeds the responsible-investment view of Chapter 33 by translating selection optimisation into a portfolio decision.

Tip: Files and Screen Recordings

Yuvijen-Utility-Analysis.xlsx, Yuvijen-Utility-Analysis.pbix, and ch28-utility-walkthrough.mp4 will be attached at this point in the published edition. The screen recording walks through Steps 1 to 9 with the Excel utility workbook and the Power BI optimisation page shown side by side.

Summary

| Concept | Description |
|---|---|
| **Why Optimisation Matters** | |
| Optimisation as rule design | Optimisation designs the rule that thousands of decisions follow rather than picks one candidate |
| Compounding decisions | Small per-decision improvements compound across the planning horizon |
| Investment framing | Selection and promotion are investments with cost, expected return, and distribution |
| Utility analysis as the framework | Utility analysis combines five inputs into a defensible value estimate |
| Components visible for audit | Each utility component is rendered as its own visual for audit |
| **Utility Analysis** | |
| Validity input | How well the selection or promotion model predicts performance |
| Performance variance input | How much performance varies across incumbents in the role |
| Role criticality input | How much a unit of performance change is worth to the firm |
| Selection cost input | What the selection or promotion process costs per decision |
| Time horizon input | How long the hire's contribution accrues to the firm |
| Compound effect across decisions | Modest validity improvements yield substantial cumulative gains over the horizon |
| **Optimising Selection** | |
| Method choice lever | Which selection methods are used and in what combination |
| Cut-score setting lever | The threshold above which candidates are selected |
| Investment allocation lever | How much selection effort each role receives |
| Cut-score trade-off curve | Trade-off between false-positive and false-negative costs as cut-score varies |
| **Optimising Promotion** | |
| Slate construction lever | How wide and how diverse the promotion candidate pool should be |
| Predictor weighting lever | How current performance, potential, and other evidence are combined |
| Sequence and timing lever | The order and timing of promotion moves across the firm |
| Promotion-impact externality | Promotion decisions signal to the unselected and influence their retention |
| Voluntary-attrition signal | Voluntary attrition among unselected candidates after a promotion announcement |
| Engagement-shift signal | Engagement shifts among colleagues following promotion decisions |
| **Visualising Optimisation** | |
| Utility-component panel | Validity, variance, criticality, cost, and horizon as separate tiles |
| Cut-score curve | Cut-score on the x-axis, expected utility on the y-axis, supply constraint visible |
| Investment-by-role-family panel | Investment levels surfaced by role family with criticality colour-coded |
| Promotion-slate breadth view | Slate breadth is surfaced alongside slate quality on the page |
| Externality tracker | Voluntary attrition among unselected candidates rendered alongside selected outcomes |
| **Calibration in Practice** | |
| Loop from realised outcomes back to estimate | Realised outcomes feed back into the next cycle's utility estimate |
| Calibration over time | The dashboard's credibility compounds through repeated calibration |
| Capital-allocation language | Optimisation translates HR decisions into capital-allocation language |
| Executive seat for the function | The page earns the function a seat in the capital-allocation conversation |