flowchart LR
    A[Sourcing] --> B[Screening]
    B --> C[Assessment]
    C --> D[Offer]
    D --> E[Onboarding]
    E --> F[Productive Hire]
    style A fill:#E8F0FE,stroke:#1A73E8
    style F fill:#E6F4EA,stroke:#137333
22 Recruitment and Selection Analytics
22.1 Why Recruitment and Selection Analytics Matters
Selection is the only HR activity whose decisions are taken before there is any direct evidence of the candidate’s performance, and yet it shapes everything that follows.
Recruitment and selection sit at the start of every workforce story. The candidate the firm hires this quarter becomes the manager of the team three years from now, the leader of the function five years from now, and a measurable contributor to attrition, productivity, and culture for as long as they remain. The decisions are taken with imperfect information, under time pressure, and with the human tendency to remember the hires who succeeded and forget the ones who did not. Recruitment-and-selection analytics is the discipline that improves the average quality of those decisions over time and makes their improvement visible.
The case for treating selection as an analytical problem is older than the analytics function. As Frank L. Schmidt & John E. Hunter (1998) set out in their landmark synthesis of eighty-five years of personnel-psychology research, the choice of selection method has measurable, substantial, and durable effects on hire quality, and those effects are large enough to dwarf most of the discretionary spend a firm makes on its workforce. The methods differ in the validity they offer, the cost they impose, and the boundaries within which they apply, and a function that does not engage with the evidence chooses by default — usually the cheapest method, often the worst-performing one.
The picture has not stayed still. As Robert E. Ployhart et al. (2017) note in the centennial review of selection at the Journal of Applied Psychology, a century of evidence has refined the field’s understanding of which selection methods work, in which contexts, with which boundary conditions. The field is no longer a debate about whether structured selection beats unstructured judgement; it is a debate about how to combine validated methods to fit the role, the candidate market, and the firm’s strategic objectives. The analytics function is the working surface on which that combination is designed and refined.
The visualisation lens runs through the funnel. A recruitment-and-selection dashboard is fundamentally a funnel chart with comparison built in: candidates entering at one end, hires emerging at the other, with conversion, time, cost, and quality measured at every stage. The page that surfaces all four dimensions across all stages is the page that lets the recruitment team see where the funnel is leaking, where the leak is worst, and where the next investment will produce the largest improvement in hire quality.
- Every selection method on the dashboard is paired with its evidence of validity, so that the audience reads the method at the strength the evidence supports.
- The recruitment funnel is rendered with conversion, time, cost, and quality at every stage. A funnel that shows only volume is reporting on the busiest activity, not on the most important one.
- Quality of hire is paired with first-year retention and ramp-time-to-productivity on the same page, so that fast and cheap funnels cannot be defended at the cost of the hires they produce.
22.2 The Recruitment Funnel
Every recruitment programme is a funnel. Candidates enter through a sourcing channel, pass through screening and assessment, receive an offer, and join the firm. Each transition is a place where conversion can be measured, time can be tracked, cost can be allocated, and quality can be assessed. A dashboard that surfaces all four dimensions at every stage is the dashboard that earns its place in the operational review.
| Stage | Volume question | Time question | Cost question | Quality question |
|---|---|---|---|---|
| Sourcing | How many candidates entered, by channel | How long did sourcing take | What did each channel cost per applicant | What is the source quality of each channel |
| Screening | How many advanced past initial review | How long did screening take | What does each screening hour cost | How accurate were the screening decisions |
| Assessment | How many cleared the assessments | How long did assessments take | What does each assessment cost | What is the validity of each assessment |
| Offer | How many offers were made and accepted | How long from final interview to offer | What does each offer cost | What is the offer-to-acceptance rate |
| Onboarding | How many ramped successfully in ninety days | How long did ramp take | What does each onboarding cycle cost | What is the ninety-day quality and retention |
Every recruitment dashboard rests on a funnel chart that shows conversion at each stage with a benchmark or target attached. From that single visual, the team can see where the funnel is leaking and where the leak is worst. Layering time, cost, and quality on the same funnel turns a volume diagram into a complete diagnostic, and the team’s investment decisions follow the chart rather than the loudest voice in the meeting.
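The stage-conversion arithmetic behind that funnel can be sketched as follows; the stage names follow the chapter, but the counts and benchmark rates are hypothetical illustrations, not figures from any dataset.

```python
# A minimal sketch of funnel-conversion maths with a benchmark comparison.
# Stage counts and benchmark rates below are hypothetical illustrations.
STAGE_ORDER = ["Sourced", "Screening Pass", "Assessment Pass",
               "Offer Made", "Offer Accepted"]

counts = {"Sourced": 1200, "Screening Pass": 420, "Assessment Pass": 160,
          "Offer Made": 60, "Offer Accepted": 48}

# Illustrative benchmark conversion rates (%) for each transition.
benchmarks = {"Sourced→Screening Pass": 40.0,
              "Screening Pass→Assessment Pass": 45.0,
              "Assessment Pass→Offer Made": 40.0,
              "Offer Made→Offer Accepted": 85.0}

def conversions(counts, order):
    """Conversion rate (%) for each adjacent pair of funnel stages."""
    out = {}
    for prev, nxt in zip(order, order[1:]):
        out[f"{prev}→{nxt}"] = 100.0 * counts[nxt] / counts[prev]
    return out

rates = conversions(counts, STAGE_ORDER)
# The worst leak is the transition furthest below its benchmark.
worst = min(rates, key=lambda k: rates[k] - benchmarks[k])
for k, v in rates.items():
    flag = "  <-- worst leak" if k == worst else ""
    print(f"{k}: {v:.1f}% (benchmark {benchmarks[k]:.0f}%){flag}")
```

The "worst leak" rule here, the transition furthest below its benchmark, is one defensible choice; a real page may instead weight leaks by the cost of the candidates lost at that stage.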
22.3 Validated Selection Methods
The choice of selection method is the single largest controllable determinant of hire quality. Decades of meta-analytic evidence, summarised by Frank L. Schmidt & John E. Hunter (1998) and updated by Robert E. Ployhart et al. (2017), give the analyst a working ranking of methods by their validity in predicting job performance. Knowing the ranking, and using it deliberately, is what separates a credible selection function from one that recruits by tradition.
| Method | What it measures | Evidence strength | Notes for the dashboard |
|---|---|---|---|
| Structured work-sample tests | Direct performance on tasks like those of the job | Among the highest-validity methods | Pair the score with the role-task domain it covers |
| General mental ability tests | Cognitive capacity to learn and solve job problems | High validity across roles | Surface the cut-score and any adverse-impact analysis |
| Structured interviews | Job-related questions with anchored ratings | Substantially higher than unstructured | Render inter-rater agreement on the dashboard |
| Job-knowledge tests | Existing knowledge of the role’s domain | High validity for experienced hires | Distinguish from cognitive-ability tests on the page |
| Integrity tests | Reliability, conscientiousness, counterproductive risk | Moderate validity across roles | Pair with the role-specific risk profile |
| Assessment centres | Multi-method, multi-rater evaluations | Strong but expensive | Show cost-per-candidate alongside validity |
| Unstructured interviews | Holistic conversational judgement | Among the lowest-validity methods | Track separately to compare against structured methods |
| Reference checks | Past-employer testimonial | Low to moderate validity | Use for verification rather than as primary evidence |
No single method is sufficient. The most defensible programmes combine two or three validated methods in a sequence that is designed for the role. A common pattern pairs a cognitive-ability or work-sample test, a structured interview, and a reference check, with each method contributing complementary information. The dashboard surfaces the combination, the validity of each component, and the incremental contribution of the later stages over the earlier ones, so that the team can defend why each candidate ran the gauntlet they did.
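The incremental contribution of a later stage over an earlier one can be illustrated with a small simulation; the data below are synthetic and the coefficients arbitrary, so the R² figures are illustrative of the mechanics only, not estimates of real validities.

```python
# A sketch of incremental validity: how much predictive R² a structured
# interview adds over a work-sample test alone. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(0)
n = 200
work_sample = rng.normal(size=n)
interview = 0.4 * work_sample + rng.normal(size=n)   # partly overlapping signal
performance = 0.5 * work_sample + 0.3 * interview + rng.normal(size=n)

def r_squared(predictors, y):
    """R² of an ordinary-least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_one = r_squared([work_sample], performance)
r2_two = r_squared([work_sample, interview], performance)
print(f"work sample alone:        R² = {r2_one:.3f}")
print(f"+ structured interview:   R² = {r2_two:.3f}")
print(f"incremental contribution: ΔR² = {r2_two - r2_one:.3f}")
```

The same ΔR² calculation, run on the firm's own assessment and performance data, is the number the dashboard surfaces as the incremental contribution of each later stage.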
22.4 Quality of Hire and Its Companions
Quality of hire is the headline outcome of the selection programme, and the most frequently misused metric. A function that measures only how candidates performed on the assessment is optimising for assessment performance rather than for job performance. The credible measurement of quality pairs assessment with on-the-job evidence at three time points: ninety days, one year, and three years.
| Time point | What it captures | Visualisation |
|---|---|---|
| Ninety days | Did the hire ramp on schedule and stay through onboarding | Cohort retention chart with ramp markers |
| One year | Did the hire reach acceptable performance and stay | Cohort performance distribution and first-year attrition |
| Three years | Did the hire develop, contribute, and remain | Long-cohort survival and career-trajectory chart |
Quality of hire is most credible when it is paired with three companion measures on the same page. First-year retention catches the failure mode of fast funnels that produce regretted exits. Ramp time to productivity catches the cost of hires who arrive but cannot do the work. Hiring-manager satisfaction catches the qualitative read that the data alone misses. A function that surfaces all four — quality, retention, ramp, and manager satisfaction — has built a recruitment scorecard the rest of the firm can audit and act on.
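The four-measure scorecard reduces to simple aggregation once the hire-level records exist; the records and field names below are hypothetical stand-ins for whatever the firm's HRIS actually captures.

```python
# A minimal scorecard sketch pairing quality of hire with its three
# companions. Hire records and field names are hypothetical.
hires = [
    {"quality": 4.2, "retained_1yr": True,  "ramp_days": 75,  "mgr_sat": 4},
    {"quality": 3.1, "retained_1yr": False, "ramp_days": 120, "mgr_sat": 2},
    {"quality": 4.8, "retained_1yr": True,  "ramp_days": 60,  "mgr_sat": 5},
    {"quality": 3.9, "retained_1yr": True,  "ramp_days": 95,  "mgr_sat": 4},
]

scorecard = {
    "avg_quality": sum(h["quality"] for h in hires) / len(hires),
    "first_year_retention_pct": 100.0 * sum(h["retained_1yr"] for h in hires) / len(hires),
    "avg_ramp_days": sum(h["ramp_days"] for h in hires) / len(hires),
    "avg_mgr_satisfaction": sum(h["mgr_sat"] for h in hires) / len(hires),
}
for k, v in scorecard.items():
    print(f"{k}: {v:.2f}")
```

Rendering all four numbers in one row, per cohort, is what stops a fast, cheap funnel from being defended at the expense of the hires it produces.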
22.5 Visualising Recruitment and Selection
The recruitment-and-selection dashboard is one of the most operationally read pages in any HR programme. The discipline is to render the funnel, the methods, and the quality together so that the audience can read the trade-offs at a glance. Five design choices, applied consistently, hold the page together.
| Choice | What it does on the page |
|---|---|
| Funnel as the headline | The funnel chart anchors the page with conversion rates at every stage |
| Stage-level four-dimension panel | Time, cost, conversion, and quality are surfaced for each stage |
| Method-validity tooltip | Hovering on a method reveals its evidence base and the inter-rater agreement |
| Quality companion row | Quality, retention, ramp, and manager satisfaction sit on the same row |
| Channel comparison view | Sources are compared on cost-per-hire and quality-per-hire on a scatter |
A well-designed recruitment dashboard reads as a continuous improvement loop rather than as a static report. Each cycle’s funnel feeds the next cycle’s investment. A channel that delivers high quality at high cost prompts a different action than a channel that delivers low quality at low cost. A method whose validity has drifted prompts a redesign rather than a wider rollout. The dashboard is most valuable not when it confirms what the team already believed but when it surfaces the change that the team is being asked to make this cycle, supported by the evidence the page renders together with the change.
22.6 Hands-On Exercise: Building the Recruitment Funnel Page
Aim. Build the recruitment funnel page that anchors any recruitment-and-selection programme: stage-by-stage conversion, time, cost, and quality, paired with a quality-companion row and a channel-comparison view.
Scenario. You are running the recruitment-analytics function for an organisation. The audience is the head of talent acquisition and the COO, who want a single page they can open weekly to see where the funnel is leaking and which channels deserve more or less investment.
Dataset. Recruitment Data (Excel) from the HRMD library. The workbook includes Job Posting Date, Hire Date, Recruitment Cost, Offers Made, Offers Accepted, Source Channel, candidate-stage fields, and ninety-day retention markers.
Deliverable. A Recruitment-Funnel.xlsx workbook with stage-level conversion and four-dimension measures, plus a Recruitment-Funnel.pbix Power BI file with the funnel page described below.
22.6.1 Step 1 — Stage the funnel data
Open the workbook, convert the data to a Table named Recruitment, and confirm the date and numeric fields. Add a Stage lookup table on its own sheet listing the funnel stages — Sourcing, Screening, Assessment, Offer, Onboarding — with the order each stage occupies in the funnel.
22.6.2 Step 2 — Compute conversion at each stage
Code
Excel Formula
Source-to-Screen Conversion = COUNTIF(Recruitment[Stage], "Screening Pass")
/ COUNTIF(Recruitment[Stage], "Sourced") * 100
Screen-to-Assess Conversion = COUNTIF(Recruitment[Stage], "Assessment Pass")
/ COUNTIF(Recruitment[Stage], "Screening Pass") * 100
Assess-to-Offer Conversion = COUNTIF(Recruitment[Stage], "Offer Made")
/ COUNTIF(Recruitment[Stage], "Assessment Pass") * 100
Offer Acceptance Rate = COUNTIF(Recruitment[Stage], "Offer Accepted")
/ COUNTIF(Recruitment[Stage], "Offer Made") * 100
Render the conversion rates as a horizontal funnel chart, the central visual of the page.
22.6.3 Step 3 — Compute the four dimensions at each stage
For each stage, compute volume, time, cost, and quality.
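As a cross-check on the Excel formulas below, the three headline measures can be computed directly; the records and field names here are hypothetical stand-ins for the workbook columns.

```python
# A Python restatement of the Step 3 measures on hypothetical records.
from datetime import date

records = [
    {"posted": date(2024, 1, 5),  "hired": date(2024, 2, 20),
     "cost": 3200, "stage": "Offer Accepted", "retained_90d": True},
    {"posted": date(2024, 1, 12), "hired": date(2024, 3, 1),
     "cost": 2800, "stage": "Offer Accepted", "retained_90d": False},
    {"posted": date(2024, 2, 1),  "hired": None,
     "cost": 900,  "stage": "Assessment Pass", "retained_90d": None},
]

hired = [r for r in records if r["stage"] == "Offer Accepted"]

# Time to Fill: mean days from posting to hire, hires only.
time_to_fill = sum((r["hired"] - r["posted"]).days for r in hired) / len(hired)

# Cost per Hire: all recruitment spend divided by accepted offers.
cost_per_hire = sum(r["cost"] for r in records) / len(hired)

# Ninety-Day Retention: share of hires still in seat at day ninety.
retention_90d = 100.0 * sum(r["retained_90d"] for r in hired) / len(hired)

print(f"time to fill:     {time_to_fill:.1f} days")
print(f"cost per hire:    {cost_per_hire:.0f}")
print(f"90-day retention: {retention_90d:.0f}%")
```

Note that cost per hire divides total spend, including spend on candidates who never joined, by accepted offers; that is what makes a leaky funnel expensive even when each stage looks cheap.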
Code
Excel Formula
Time to Fill = AVERAGE(Recruitment[Hire Date] - Recruitment[Job Posting Date])
Cost per Hire = SUM(Recruitment[Recruitment Cost]) / COUNTIF(Recruitment[Stage], "Offer Accepted")
Ninety-Day Retention = COUNTIFS(Recruitment[Stage], "Offer Accepted", Recruitment[NinetyDayRetained], "Yes")
/ COUNTIF(Recruitment[Stage], "Offer Accepted") * 100
22.6.4 Step 4 — Build the channel-comparison view
On a Channels sheet, pivot cost-per-hire and ninety-day retention by Source Channel. The deliverable is a scatter plot with cost-per-hire on the x-axis and quality (ninety-day retention) on the y-axis, where each point is a channel. The four quadrants of the scatter recommend channel-level actions.
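The quadrant logic behind that scatter can be sketched as follows; the per-channel figures are hypothetical, and the median cut-points are one convenient choice, though a real page may use budget targets instead.

```python
# A sketch of the channel-quadrant actions from the cost-quality scatter.
# Per-channel figures are hypothetical; cut-points are the medians here.
from statistics import median

channels = {
    "Referrals":  {"cost_per_hire": 1800, "retention_90d": 92.0},
    "Job boards": {"cost_per_hire": 2600, "retention_90d": 71.0},
    "Agencies":   {"cost_per_hire": 5200, "retention_90d": 88.0},
    "Campus":     {"cost_per_hire": 1400, "retention_90d": 64.0},
}

cost_cut = median(c["cost_per_hire"] for c in channels.values())
qual_cut = median(c["retention_90d"] for c in channels.values())

def quadrant_action(cost, quality):
    """Map a channel's cost and quality to a recommended action."""
    if quality >= qual_cut:
        return "scale" if cost < cost_cut else "invest"
    return "redesign" if cost < cost_cut else "retire"

for name, c in channels.items():
    print(f"{name}: {quadrant_action(c['cost_per_hire'], c['retention_90d'])}")
```

The mapping follows the four actions named in Step 8: high quality at low cost scales, high quality at high cost earns investment, low quality at low cost gets redesigned, and low quality at high cost is retired.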
22.6.5 Step 5 — Promote to Power BI and build the funnel chart
Load the workbook into Power BI and rebuild the conversion measures as DAX. Use the built-in Funnel visual for the central chart, with the conversion rate visible above each stage.
Conversion Rate =
DIVIDE(
CALCULATE(COUNTROWS(Recruitment), Recruitment[Stage] = "Offer Accepted"),
CALCULATE(COUNTROWS(Recruitment), Recruitment[Stage] = "Sourced")
) * 100
22.6.6 Step 6 — Build the four-dimension panel
Below the funnel, add a small-multiples panel with one chart per stage, each showing volume, time, cost, and quality. Use a consistent colour for each dimension across the four stage charts so the audience can read across the panel.
22.6.7 Step 7 — Build the quality companion row
Below the four-dimension panel, place a row with three companion measures: ninety-day retention by source, ramp time to productivity by source, and hiring-manager satisfaction by source. The row tests whether speed and cost gains compromise quality.
22.6.8 Step 8 — Build the channel scatter
Add the cost-quality scatter from Step 4 as the lower-right quadrant. Label each quadrant with a recommended action (invest, redesign, retire, scale).
22.6.9 Step 9 — Publish and instrument
Publish the report and add it to the weekly recruitment review. Confirm that conversion rates and channel-level actions are reviewed each week, with the dashboard recording the actions taken.
The recruitment funnel page is the operational anchor of the recruitment-and-selection programme. It pairs with the validity-evidence dashboard from Chapter 23, the bias-and-prediction dashboard from Chapter 24, and the optimisation page from Chapter 28 to form the complete selection-analytics block.
Recruitment-Funnel.xlsx, Recruitment-Funnel.pbix, and ch22-recruitment-funnel-walkthrough.mp4 will be attached at this point in the published edition. The screen recording walks through Steps 1 to 9 with the Excel funnel calculations and the Power BI funnel page shown side by side.
Summary
| Concept | Description |
|---|---|
| Why Selection Analytics Matters | |
| Selection shapes everything that follows | Selection decisions are taken before performance evidence and shape years of outcomes |
| Method choice as largest lever | Method choice has measurable, substantial, durable effects on hire quality |
| Funnel as central visual | Every recruitment dashboard rests on a funnel with conversion at every stage |
| Quality paired with retention and ramp | Quality is most credible when paired with retention, ramp, and manager satisfaction |
| Continuous improvement loop | The dashboard is most valuable when it surfaces the change the team is being asked to make |
| The Recruitment Funnel | |
| Sourcing stage | Volume, time, cost, and quality of candidates entering through each channel |
| Screening stage | Conversion, time, cost, and accuracy of the initial review |
| Assessment stage | Conversion, time, cost, and validity of the assessment instruments |
| Offer stage | Offer-to-acceptance rate, time-to-acceptance, cost per offer |
| Onboarding stage | Ninety-day retention and ramp time to productivity |
| Volume, time, cost, and quality at each stage | Each stage is measured on four dimensions, not on volume alone |
| Validated Selection Methods | |
| Structured work-sample test | Direct performance on tasks like those of the job; among the highest validity |
| General mental ability test | Cognitive capacity to learn and solve job problems; high validity across roles |
| Structured interview | Job-related questions with anchored ratings; substantially higher than unstructured |
| Job-knowledge test | Existing knowledge of the role's domain; high validity for experienced hires |
| Integrity test | Reliability, conscientiousness, counterproductive risk; moderate validity |
| Assessment centre | Multi-method, multi-rater evaluations; strong but expensive |
| Unstructured interview | Holistic conversational judgement; among the lowest-validity methods |
| Reference check | Past-employer testimonial; low to moderate validity, best as verification |
| Combining validated methods | The most defensible programmes combine two or three validated methods |
| Quality of Hire | |
| Ninety-day quality | Did the hire ramp on schedule and stay through onboarding |
| One-year quality | Did the hire reach acceptable performance and stay through year one |
| Three-year quality | Did the hire develop, contribute, and remain through three years |
| First-year retention companion | Catches the failure mode of fast funnels that produce regretted exits |
| Ramp time companion | Catches the cost of hires who arrive but cannot do the work yet |
| Hiring-manager satisfaction companion | Catches the qualitative read that the data alone misses |
| Visualising the Programme | |
| Funnel as the headline | The funnel chart anchors the page with conversion rates at every stage |
| Stage-level four-dimension panel | Time, cost, conversion, and quality are surfaced for each stage |
| Method-validity tooltip | Hovering on a method reveals its evidence base and the inter-rater agreement |
| Quality companion row | Quality, retention, ramp, and manager satisfaction sit on the same row |
| Channel comparison view | Sources are compared on cost-per-hire and quality-per-hire on a scatter |