2. Time to fill: Time to fill is typically measured as the number of days between publishing a job opening and hiring a candidate. It is a useful metric for business planning and gives the manager a realistic view of how long it will take to attract a replacement for a departed employee.
3. Time to hire: Time to hire represents the number of days between the moment a candidate applies and the moment the candidate accepts the job. In other words, it represents the time it takes for someone to move through the hiring process once they have applied.
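Both metrics above reduce to simple date arithmetic. A minimal sketch in Python (the dates are invented for illustration):

```python
from datetime import date

def time_to_fill(posted: date, hired: date) -> int:
    # Days between publishing the opening and the candidate being hired.
    return (hired - posted).days

def time_to_hire(applied: date, accepted: date) -> int:
    # Days between the candidate applying and accepting the offer.
    return (accepted - applied).days

# Hypothetical timeline for one vacancy:
posted, applied, accepted = date(2024, 1, 1), date(2024, 1, 20), date(2024, 2, 14)
print(time_to_fill(posted, accepted))   # → 44
print(time_to_hire(applied, accepted))  # → 25
```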
4. Source of hire: Tracking the sources that attract new hires to the organization is one of the most popular recruiting metrics. This metric also helps to keep track of the effectiveness of different recruiting channels. A few examples are job boards, the company's career page, social media, and sourcing agencies.
5. First-year attrition: First-year attrition is a key recruiting metric and also indicates hiring success. Candidates who leave in their first year of work fail to become fully productive and usually cost the firm a great deal of money. This metric can also be inverted and reported as a "retention rate".
6. Quality of hire: Quality of hire is often measured by a new hire's first performance rating and serves as an indicator of early job performance. Candidates who receive high performance ratings indicate hiring success, while the opposite holds true for candidates with low performance ratings.
7. Candidate job satisfaction: Candidate job satisfaction is an excellent way to track whether the expectations set during the recruiting procedure match reality. Low candidate job satisfaction signals mismanaged expectations or incomplete job descriptions.
8. Applicants per opening: Applicants per job opening, or applicants per hire, gauges the job's popularity. A large number of applicants could indicate high demand for jobs in that particular area or a job description that is too broad.
9. Selection ratio: The selection ratio refers to the number of hired candidates compared to the total number of candidates. This ratio is also called the submittals-to-hire ratio. The selection ratio is very similar to applicants per opening: when there is a high number of candidates, the ratio approaches zero. The selection ratio provides information about the value of different assessment tools and can be used to estimate the utility of a given selection and recruitment system.
10. Cost per hire: The cost per hire metric is the total cost invested in hiring divided by the number of hires.
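As defined above, cost per hire divides the total hiring investment by the number of hires; internal costs (recruiter time, referral bonuses) and external costs (agencies, advertising) are usually summed first. A sketch with invented figures:

```python
def cost_per_hire(internal_costs: float, external_costs: float, hires: int) -> float:
    # Total recruiting investment divided by the number of hires in the period.
    return (internal_costs + external_costs) / hires

# Hypothetical: $50,000 internal + $30,000 external spend for 20 hires.
print(cost_per_hire(50_000, 30_000, 20))  # → 4000.0
```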
11. Candidate experience: When we talk about recruiting metrics, candidate experience should not be overlooked. Candidate experience is the way that job seekers perceive an employer's recruitment and onboarding process.
12. Offer acceptance rate: The offer acceptance rate compares the number of candidates who accept a job offer with the number of candidates who received one. A low rate indicates potential compensation problems. When these problems occur often for certain functions, compensation can be discussed earlier in the recruiting process in an effort to minimize the impact of refused job offers.
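The offer acceptance rate is a straightforward ratio; the counts below are hypothetical:

```python
def offer_acceptance_rate(offers_accepted: int, offers_extended: int) -> float:
    # Share of extended offers that candidates actually accept.
    return offers_accepted / offers_extended

# Hypothetical quarter: 24 offers extended, 18 accepted.
print(offer_acceptance_rate(18, 24))  # → 0.75
```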
13. Percent of open positions: The percentage of open positions compared to the total number of positions can be applied to a specific department or the entire organization. A high percentage can indicate high demand or low labor-market supply.
14. Application completion rate: The application completion rate is especially interesting for organizations with elaborate online recruiting systems. Many large corporate firms require candidates to manually input their entire CV into the system before they can apply for a job. Drop-off in this process indicates problems in the procedure.
15. Recruitment funnel effectiveness: By measuring the effectiveness of all the different steps in
the funnel, you can specify a yield ratio per step. This makes for some excellent recruiting
metrics.
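The yield ratio per step described above can be computed directly from stage counts; the funnel numbers below are purely illustrative:

```python
# Candidate counts at each successive funnel stage (hypothetical numbers).
funnel = [("applied", 400), ("screened", 120), ("interviewed", 40),
          ("offered", 10), ("hired", 8)]

def yield_ratios(stages):
    # Yield ratio per step: candidates surviving the step / candidates entering it.
    return {f"{a} -> {b}": nb / na
            for (a, na), (b, nb) in zip(stages, stages[1:])}

for step, ratio in yield_ratios(funnel).items():
    print(f"{step}: {ratio:.0%}")
```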
16. Sourcing channel effectiveness: Sourcing channel effectiveness measures the conversion per channel. By comparing the percentage of applications with the percentage of impressions of the position, we can quickly judge the effectiveness of different channels.
17. Sourcing channel cost: You can also calculate the cost efficiency of your different sourcing channels by including ad spend, the amount of money spent on advertising on each platform. Dividing the ad spend by the number of visitors who successfully applied to the job opening gives the sourcing channel cost per hire.
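Channel cost efficiency can be sketched as ad spend over successful applications; the channel names and figures below are invented:

```python
# Hypothetical ad spend and resulting successful applications per channel.
channels = {
    "job_board": {"spend": 2_000.0, "applicants": 80},
    "social":    {"spend": 1_200.0, "applicants": 150},
}

def cost_per_applicant(spend: float, applicants: int) -> float:
    # Ad spend divided by visitors who successfully applied via the channel.
    return spend / applicants

for name, c in channels.items():
    print(name, cost_per_applicant(c["spend"], c["applicants"]))
```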
18. Vacancy rate: The vacancy rate is the ratio of the total number of open positions to the total number of positions in the organization.
19. First-year turnover rate: The number of employees who left the organization within one year divided by the total number of recruits in that year.
20. Company-initiated attrition during probation: The number of involuntary exits during probation divided by (opening headcount + closing headcount).
21. Performance rating distribution: The number of interviewees with an outstanding rating divided by the total number of interviewees; the same calculation applies to the satisfactory, good, poor, and very poor ratings.
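Metrics 18 through 21 are plain ratios; a sketch with invented counts:

```python
def vacancy_rate(open_positions: int, total_positions: int) -> float:
    # Open positions as a share of all positions in the organization.
    return open_positions / total_positions

def first_year_turnover(left_within_year: int, recruits_that_year: int) -> float:
    # Employees who left within one year over total recruits that year.
    return left_within_year / recruits_that_year

def probation_attrition(involuntary_exits: int,
                        headcount_opening: int, headcount_closing: int) -> float:
    # Involuntary exits over (opening headcount + closing headcount).
    return involuntary_exits / (headcount_opening + headcount_closing)

def rating_share(rated_outstanding: int, total_rated: int) -> float:
    # Share of interviewees with a given rating (here: outstanding).
    return rated_outstanding / total_rated

print(vacancy_rate(12, 240))            # → 0.05
print(first_year_turnover(6, 60))       # → 0.1
print(probation_attrition(4, 95, 105))  # → 0.02
print(rating_share(15, 100))            # → 0.15
```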
Apart from these metrics, there are several others; some of them are listed below:
% closure within 60 days = (No. of BGV cases closed within 60 days) / (Total no. of cases closed); BGV = Background Verification
% RAG status = (No. of red cases closed) / (Total no. of cases closed) [similarly for amber and green cases]
% pre-employment check = (No. of cases where pre-employment checks were carried out for cases initiated in a month) / (No. of BGV cases initiated in that month)
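The BGV formulas above reduce to simple percentages; the monthly case counts here are invented:

```python
def pct(numerator: int, denominator: int) -> float:
    # Express a case count as a percentage of a total.
    return 100.0 * numerator / denominator

# Hypothetical month of background-verification (BGV) cases:
cases_closed_total = 50
cases_closed_within_60_days = 44
red_cases_closed, amber_cases_closed, green_cases_closed = 3, 7, 40

print(pct(cases_closed_within_60_days, cases_closed_total))  # → 88.0
print(pct(red_cases_closed, cases_closed_total))             # → 6.0
print(pct(amber_cases_closed, cases_closed_total))           # → 14.0
print(pct(green_cases_closed, cases_closed_total))           # → 80.0
```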
RAG Status Reporting: This reporting method is used when project managers are asked to indicate, with a series of traffic lights, how well a project is doing. R stands for RED, A stands for AMBER, and G stands for GREEN. RED means there are problems in the ongoing project, AMBER means that the project is going OK, and GREEN shows that the project is going well.
Quality of hire is the same as the success ratio. The success ratio is defined as the ratio of the number of new hires who are successful at work to the total number of recruits. The success ratio is used as an input for recruitment utility analysis, which enables us to calculate an ROI for different selection instruments. We have now reached a point where we can look at the different utility models used.
demonstrate incremental validity by improving on the BR. Fig: 8.4 represents all of the elements of the Taylor-Russell model together. Taylor and Russell used the following ratios in developing their tables.
Base Rate (BR) = (A + D) / (A + B + C + D)
Selection Ratio (SR) = (A + B) / (A + B + C + D)
Success Ratio = A / (A + B)
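The three ratios can be computed directly from the quadrant counts. In the sketch below, the labeling is assumed to match the ratios in the text: A = selected and successful, B = selected and unsuccessful, D = rejected but would have succeeded, C = rejected and unsuccessful; the counts are invented:

```python
def taylor_russell_ratios(A: int, B: int, C: int, D: int):
    # A: selected & successful, B: selected & unsuccessful,
    # D: rejected but would have succeeded, C: rejected & unsuccessful
    # (labeling assumed to match the ratios in the text).
    total = A + B + C + D
    base_rate = (A + D) / total
    selection_ratio = (A + B) / total
    success_ratio = A / (A + B)
    return base_rate, selection_ratio, success_ratio

# Hypothetical counts for 100 applicants:
br, sr, success = taylor_russell_ratios(A=30, B=10, C=40, D=20)
print(br, sr, success)  # → 0.5 0.4 0.75
```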
By specifying the validity coefficient, the base rate, and the selection ratio, and making use of "Tables for finding the volumes of bivariate surface", Taylor and Russell developed their tables. The usefulness of a selection measure can thus be assessed in terms of the success ratio that will be obtained if the measure is used. The gain in utility to be expected from using the instrument (the expected increase in the percentage of successful workers) is derived by subtracting the base rate from the success ratio.
Mathematically,
Gain in utility over base rate = Success Ratio – Base rate.
E.g., given an SR of 0.10, a validity of 0.30, and a BR of 0.50, the success ratio jumps to 0.71; thus a 21% gain in utility over the base rate.
The validity coefficient referred to by Taylor and Russell is, in theory, based on present employees
who have already been screened using methods other than the new selection method.
Perhaps the major shortcoming of this utility model is that it reflects the quality of the resulting hires only in terms of success or failure. When it is reasonable to assume that higher cutoff scores on a selection device lead to a higher level of average job performance among those selected, the Taylor-Russell tables underestimate the actual value of the selection system. That observation led to the development of the next framework for selection utility, THE NAYLOR-SHINE Model.
Z̄yi = rxy (λi / φi)
where Z̄yi is the average criterion score (in standard-score units) of those selected, rxy is the validity coefficient, λi is the ordinate (height) of the normal curve at the predictor cutoff Zxi (expressed in standard-score units), and φi is the selection ratio.
Using the above equation as the building block, Naylor and Shine present a series of tables that specify, for each SR, the standard predictor score that produces that SR, the ordinate of the normal curve at that point, and the quotient λi/φi. The tables can be used to answer several important HR questions:
1. Given a specific SR, what will be the average criterion level of those selected?
2. Given a minimum cutoff score on the selection device above which everyone will be hired, what
will be the average criterion level?
3. Given a desired improvement in the average level of criterion score of those selected, and assuming
a certain validity, what SR and/or predictor score cutoff value (in standard-score units) should be used?
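The Naylor-Shine quantities can also be computed without the printed tables, since λi and φi follow directly from the normal curve. A sketch using only the standard library (the validity and cutoff values are illustrative):

```python
import math

def normal_ordinate(z: float) -> float:
    # Height of the standard normal curve at z (the λ in the model).
    return math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)

def selection_ratio(z_cut: float) -> float:
    # P(Z > z_cut): the φ in the model when everyone above the cutoff is hired.
    return 0.5 * math.erfc(z_cut / math.sqrt(2.0))

def mean_criterion_of_selected(r_xy: float, z_cut: float) -> float:
    # Naylor-Shine: Z̄y = r_xy * (λ / φ).
    return r_xy * normal_ordinate(z_cut) / selection_ratio(z_cut)

# E.g. validity 0.40 with the cutoff one SD above the mean (SR ≈ 0.16):
print(round(mean_criterion_of_selected(0.40, 1.0), 3))  # ≈ 0.61
```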
The Naylor-Shine utility approach is more generally applicable than the Taylor-Russell approach because in many cases an organization expects an increase in average job performance as it becomes more selective using a valid selection process. However, "average performance" is expressed in standard (Z) scores, which are more difficult to interpret than outcomes more closely related to the specific nature of the business, such as dollar volume of sales, units produced or sold, or costs reduced.
Neither the Taylor-Russell nor the Naylor-Shine model formally integrates the cost of the selection system, or monetary gains and losses, into a utility index. Both describe the difference in the percentage of successful employees (T&R) or the increase in average criterion score (N&S), but tell us very little about the benefits to the employer in monetary terms. The BROGDEN-CRONBACH-GLESER Model, discussed next, was designed to address this issue.
average score made by persons selected based on the criterion score (Zyi).
Therefore,
ŷ = b0 + b1x + e ----------------------------------------- eqn(II)
where e = random fluctuation, or error, around the straight line.
Our original formula described points that fall exactly on the straight line, but this formula describes points that fall around the straight line. Fig: 8.5 shows this idea as a straight line passing through an ellipse. The ellipse represents the cloud of score combinations that might occur in an actual group of people, and the line through the middle is the one that comes closest to the maximum number of points in the cloud. If we do not yet know how someone is going to perform on the job (which we cannot know before the person is hired), the best guess of how the employee might perform is the ŷ value obtained by plugging the applicant's x score into eqn(I). The letter 'e' is called error because, although our estimate of ŷ from eqn(I) would be a good guess, it is not likely to equal the actual level of job performance exactly. Therefore, y − ŷ = e.
STEP: II Standardize x:
To get back to the validity coefficient, we need to convert the raw scores on our predictor and criterion to standardized form. Standardizing all applicants' selection-process scores:
Zi = (xi − X̄) / SDX
where X̄ = mean selection-process score, typically of all applicants, obtained in the above sample, and SDX = standard deviation of the xi around X̄:
SDX = √( Σ(Xi − X̄)² / (N − 1) )
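The standardization step can be sketched directly; the five raw scores below are invented:

```python
import statistics

def standardize(raw_scores):
    # Z_i = (x_i - mean) / SD, with SD computed over N - 1 as in the text.
    mean = statistics.fmean(raw_scores)
    sd = statistics.stdev(raw_scores)  # sample SD (divisor N - 1)
    return [(x - mean) / sd for x in raw_scores]

z = standardize([60, 70, 80, 90, 100])
print([round(v, 2) for v in z])  # → [-1.26, -0.63, 0.0, 0.63, 1.26]
```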
individuals selected at random from all applicants. Finally, the value of E(b1) is obtained by using multiple-regression software as
b1 = rxy (SDY / SDX)
Substituting this into the regression equation, the expected criterion value (in monetary terms) of a selected applicant is
ŷ = µ + rxy · SDY · Z̄x
where Z̄x is the average standardized predictor score of those selected.
This represents the total monetary value of each selected applicant. To calculate the expected average improvement in utility, or improvement in monetary value, from using this model, we can subtract the expected value without using the system, which is µ, from both sides of the equation. Because µ is the monetary value of criterion performance the organization expects when it chooses applicants at random, ŷ − µ equals the expected gain in monetary-valued performance from using the selection system, net of selection costs:
ŷ − µ = rxy · SDY · Z̄x − (Na · C / Ns)
where Na is the number of applicants, C is the cost of assessing one applicant, and Ns is the number selected.
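Putting the pieces together, the per-selectee gain implied by the Brogden-Cronbach-Gleser expression can be sketched as follows; every figure below is hypothetical:

```python
def bcg_gain_per_selectee(r_xy: float, sd_y: float, z_x_bar: float,
                          n_applicants: int, cost_per_applicant: float,
                          n_selected: int) -> float:
    # Gain per selectee: r_xy * SD_y * Z̄x - (N_a * C / N_s).
    return r_xy * sd_y * z_x_bar - (n_applicants * cost_per_applicant / n_selected)

# Hypothetical: validity 0.50, SD_y of $10,000, average standardized predictor
# score of selectees 1.0, 100 applicants assessed at $50 each, 10 hired.
print(bcg_gain_per_selectee(0.50, 10_000.0, 1.0, 100, 50.0, 10))  # → 4500.0
```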