## How to work out the ROI of assessment

Assessment methods, such as ability tests, personality or integrity questionnaires, face-to-face interviews and assessment centres, are good predictors of performance and can improve the return on investment (ROI) of hiring procedures. However, they come at a cost.

Wouldn’t it be great if you were able to apply a quantitative approach to calculate the ROI when choosing costly selection methods and deciding how many people to include in each stage of hiring?

The good news is that such an approach exists, building on a method devised by two American scientists, Taylor and Russell, who in 1939 explored how to calculate the effectiveness of workplace selection tests and assessments.

**How can we use predictive validity?**

Decades on from the Taylor and Russell paper, a working paper by Schmidt, Oh and Shaffer (2016) reports new and more accurate relationships between assessment methods and performance outcomes. The authors model the relationship between assessment scores and performance as linear. By looking at numerous studies on performance and selection methods, they show that an increase in assessment score produces a proportional increase in performance, as the two graphs below illustrate.

When the predictive validity (measured by the ‘r’ value) is high, the relationship between performance and test scores looks more like Graph A, with the data points clustering around an ideal line that represents our model.
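To make the idea concrete, here is a minimal Python sketch, assuming standardised scores that follow a bivariate normal model and that numpy is available; the function name `simulate` and the r values are illustrative, not taken from the article:

```python
# Illustrative simulation: test scores and job performance drawn from a
# bivariate normal distribution with a chosen predictive validity r.
import numpy as np

rng = np.random.default_rng(42)

def simulate(r, n=1000):
    """Draw n (test score, performance) pairs with correlation r."""
    cov = [[1.0, r], [r, 1.0]]
    scores, performance = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    return scores, performance

# High validity (like Graph A): points cluster tightly around the line.
s_hi, p_hi = simulate(r=0.65)
# Low validity: the cloud of points is far more diffuse.
s_lo, p_lo = simulate(r=0.20)

print(f"observed r (high validity): {np.corrcoef(s_hi, p_hi)[0, 1]:.2f}")
print(f"observed r (low validity):  {np.corrcoef(s_lo, p_lo)[0, 1]:.2f}")
```

Plotting the two clouds reproduces the contrast between the graphs: the higher r is, the more the dots gather around the regression line.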

**The problem for many practitioners is that this information seems hard to put into practice. What does an r value of (for example) .65 mean?**

Does it mean that 65% of those selected will be good employees? Will people on the 65th percentile be good employees? Or do we have 65% more good employees using the test? The answer to all those questions is no. What really happens is a little bit more complicated and needs some further explanation. Let’s go back to our linear model.

What we are doing when we set a minimum benchmark is to select only those candidates whose test score is higher than a certain value. We can represent this benchmark as a straight vertical line.

All the candidates represented by dots to the right of the line will pass on to the next stage. All the candidates represented by dots to the left will be excluded. We can see that if we move the line to the right (by increasing the benchmark) there will be fewer successful candidates.

This makes it much clearer to know what we are doing when we set a benchmark, but it still does not give us much information about the quality of the selected candidates. The reason for this is that we need one more element: the **base rate.**

The base rate indicates what proportion of candidates are suitable for the job. Graphically, it can be represented as a straight horizontal line: all candidates who fall below the line are poor performers and all candidates placed above it are good performers. Obviously, if all (or most) of the candidates are suitable for the job, a selection process would not be needed, as good performers would be selected regardless.

In the example below, approximately 50% of candidates would be suitable (those above the horizontal line). But by applying our selection benchmark score to the graph below (the vertical line), we can refine our applicant pool. We can see that we would then be selecting through to the next stage five good performers and only three poor performers, i.e. 62.5% (5 out of 8) of those selected would be good performers.
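The quadrant logic can be sketched in a few lines of Python, assuming numpy and the same bivariate normal model; the cutoffs below are illustrative choices, not the article's figure:

```python
# Illustrative sketch: split simulated candidates into the four quadrants
# formed by a score benchmark (vertical line) and a performance threshold
# (horizontal line), then compute the share of good performers among the selected.
import numpy as np

rng = np.random.default_rng(0)
r = 0.65  # assumed predictive validity
scores, performance = rng.multivariate_normal(
    [0.0, 0.0], [[1.0, r], [r, 1.0]], size=10_000
).T

score_cutoff = 0.0  # benchmark: select the top half on the test
perf_cutoff = 0.0   # base rate: half of all candidates are "good"

selected = scores > score_cutoff      # dots to the right of the vertical line
good = performance > perf_cutoff      # dots above the horizontal line

success_ratio = good[selected].mean() # good performers among those selected
print(f"base rate:     {good.mean():.2f}")
print(f"success ratio: {success_ratio:.2f}")
```

Even with a 50% base rate, selecting above the benchmark enriches the pool: the share of good performers among the selected ends up well above 50%.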

**The Taylor Russell approach**

In order to use the conceptual framework explained above, we need a mathematical model. Fortunately, Taylor and Russell (1939) created a very practical model to help identify how many good performers will be selected by knowing the following: base rate; validity of the selection instrument; and benchmark (in percentile). They created the famous tables that you can find in their article referenced below (see Figure 1).

*Figure 1. Example of a Taylor-Russell table*

To use the tables, you need to follow a three-step process.

- Know the base rate (proportion of employees considered satisfactory). There is a separate table for each value of the base rate from 5% (.05) to 95% (.95). Select the table that is most appropriate for the target role.
- Understand the validity of your instrument. According to the latest updates from Schmidt, Oh and Shaffer (2016), ability tests have a mean r of .65, the interview around .55, and the combination of the two a validity of .75. Match your value of r with the rows of the Taylor-Russell table.
- Match your benchmark. The benchmark (in percentile) needs to be matched with the columns of the table.

The table will tell you what proportion of the selected employees will be good performers: the true positives (T+). This number is also called the ‘sensitivity’ of your instrument.

To know the percentage of false positives (selected people who are not good performers), calculate 1 - T+.
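The tables were built on a bivariate normal model, so T+ can also be computed directly rather than looked up. A sketch, assuming scipy is available (the function name `success_ratio` is mine, not the authors'):

```python
# Illustrative sketch: compute the Taylor-Russell success ratio (T+) directly
# from the bivariate normal model the tables are based on.
from scipy.stats import norm, multivariate_normal

def success_ratio(base_rate, validity, selection_ratio):
    """P(good performer | selected), i.e. T+ in the article's notation."""
    perf_cut = norm.ppf(1 - base_rate)         # horizontal line (performance)
    score_cut = norm.ppf(1 - selection_ratio)  # vertical line (benchmark)
    mvn = multivariate_normal(
        mean=[0.0, 0.0], cov=[[1.0, validity], [validity, 1.0]]
    )
    # P(score > cut AND performance > cut) by inclusion-exclusion on the CDF
    p_both = (1 - norm.cdf(score_cut) - norm.cdf(perf_cut)
              + mvn.cdf([score_cut, perf_cut]))
    return p_both / selection_ratio

# Base rate 50%, ability test (r = .65), top 50% selected.
t_plus = success_ratio(base_rate=0.50, validity=0.65, selection_ratio=0.50)
print(f"T+ = {t_plus:.2f}")
print(f"false positives = {1 - t_plus:.2f}")
```

The result should match the corresponding cell of the published tables up to rounding.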

**Calculating false negatives**

You can also use the Taylor-Russell tables to calculate the percentage of rejected people who would indeed have been poor performers, the true negatives (T-). Look up the tables using:

- Base rate (T-): 1 minus the base rate of satisfactory employees.
- Validity (T-): the same validity score used before.
- Benchmark (T-): 1 minus the benchmark used for selection.

The number in the cell you locate will represent the percentage of true negatives, also called the ‘specificity’.

You can also calculate the percentage of false negatives (rejected people who would have been good employees) using the formula: 1 minus T-.
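The flip-both-inputs recipe can be checked against the same bivariate normal model. A sketch, assuming scipy is available and writing the true-negative rate as T- (the helper `success_ratio` is my name for the direct computation behind the tables):

```python
# Illustrative sketch: specificity (T-), the share of rejected candidates
# who would indeed have been poor performers, via the flipped-inputs recipe.
from scipy.stats import norm, multivariate_normal

def success_ratio(base_rate, validity, selection_ratio):
    """P(good performer | selected) under a bivariate normal model."""
    perf_cut = norm.ppf(1 - base_rate)
    score_cut = norm.ppf(1 - selection_ratio)
    mvn = multivariate_normal(
        mean=[0.0, 0.0], cov=[[1.0, validity], [validity, 1.0]]
    )
    p_both = (1 - norm.cdf(score_cut) - norm.cdf(perf_cut)
              + mvn.cdf([score_cut, perf_cut]))
    return p_both / selection_ratio

base_rate, validity, selection_ratio = 0.50, 0.65, 0.50

# Flip both the base rate and the benchmark, exactly as the recipe says:
t_minus = success_ratio(1 - base_rate, validity, 1 - selection_ratio)
print(f"T- (specificity): {t_minus:.2f}")
print(f"false negatives:  {1 - t_minus:.2f}")
```

By the symmetry of the normal model, flipping both inputs turns "good performers among the selected" into "poor performers among the rejected", which is why the same tables serve both purposes.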

**Calculating T+ with different stages**

In some situations, candidates pass through several stages: for example, an ability test followed by a competency-based interview. In this case, we need a two-step approach to calculate sensitivity. The first step is exactly as explained before, using the population base rate, the validity of the first instrument, and the benchmark for the first stage. In the second step, you need to look again at the Taylor-Russell tables, this time using the following scores:

- Base rate (Step2): T+ (Step 1).
- Validity (Step 2): the validity of the second instrument.
- Benchmark (Step 2): the new benchmark (e.g. in an assessment centre, 25% of the candidates).
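The two-step recipe amounts to chaining the same computation, feeding stage 1's T+ in as stage 2's base rate. A sketch, assuming scipy is available; the validities and benchmarks below reuse the article's example figures for illustration:

```python
# Illustrative sketch: chain two selection stages by using stage 1's
# success ratio (T+) as the base rate for stage 2.
from scipy.stats import norm, multivariate_normal

def success_ratio(base_rate, validity, selection_ratio):
    """P(good performer | selected) under a bivariate normal model."""
    perf_cut = norm.ppf(1 - base_rate)
    score_cut = norm.ppf(1 - selection_ratio)
    mvn = multivariate_normal(
        mean=[0.0, 0.0], cov=[[1.0, validity], [validity, 1.0]]
    )
    p_both = (1 - norm.cdf(score_cut) - norm.cdf(perf_cut)
              + mvn.cdf([score_cut, perf_cut]))
    return p_both / selection_ratio

# Stage 1: ability test (r = .65), population base rate 50%, top 50% pass.
t_plus_stage1 = success_ratio(0.50, 0.65, 0.50)

# Stage 2: interview (r = .55); the base rate is now stage 1's T+,
# and only the top 25% of the remaining candidates are selected.
t_plus_stage2 = success_ratio(t_plus_stage1, 0.55, 0.25)

print(f"T+ after stage 1: {t_plus_stage1:.2f}")
print(f"T+ after stage 2: {t_plus_stage2:.2f}")
```

Each additional valid stage enriches the pool further, which is exactly what the two-step table lookup captures.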

The work by Taylor and Russell was significant in moving forward our ability to calculate the ROI of assessment.

**References:**

Taylor, H. C. & Russell, J. T. (1939). The relationship of validity coefficients to the practical effectiveness of tests in selection: discussion and tables. *Journal of Applied Psychology*, 23(5), 565.

Schmidt, F. L., Oh, I. S. & Shaffer, J. A. (2016). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 100 years of research findings. Working paper.

## About the Author

Davide Cannata