
HUMAN RESOURCE ANALYTICS

(Submitted by Aditya Sinha & Ujjawal Gautam)

ANALYTICS IN TALENT ACQUISITION


Talent Acquisition is the process of making a continuous, long term investment to build a high quality
workforce capable of accomplishing the organization’s current and future goals. Recruitment, on the
other hand, is the process of filling up the vacancies in an organization. Therefore recruitment can
be understood as a subset of Talent Acquisition. This is one of the most important processes for any
firm, as Human Resources is the biggest investment they make. In the present competitive
scenario, it is very important that investments in HR be guided not just by “gut feelings” but
by proper data analysis. To carry out this analysis and reach a valid and reliable result,
we must define some recruitment metrics. Recruitment metrics are an essential part of
data-driven hiring and recruitment analytics. They are measurements used to track hiring
success and optimize the process of hiring candidates for an organization. Some of the most
relevant recruitment metrics are listed below.
1. Gender-mix: This metric represents the gender diversity among hired candidates.
Gender-mix = Number of women hired in a time period / Total number of recruits in that time period

2. Time to fill: It is often measured as the number of days between publishing a job opening
and hiring the candidate. It is a great metric for business planning and offers the manager a
realistic view of the time it will take to attract a replacement for a departed employee.
3. Time to hire: Time to hire represents the number of days between the moment a
candidate applies (or is approached) and the moment the candidate accepts the job. In other
words, it represents the time it takes for someone to move through the hiring process once
they have applied.
4. Source of hire: Tracking the sources that attract new hires to the organization is one of the
most popular recruiting metrics. This metric also helps keep track of the effectiveness of
different recruiting channels. A few examples are job boards, the company's career page,
social media, and sourcing agencies.
5. First year attrition: First year attrition is a key recruiting metric that also indicates hiring
success. Candidates who leave in their first year of work fail to become fully productive and
usually cost the firm a lot of money. This metric can also be turned around as
“retention rate”.
6. Quality of hire: It is often measured by someone's performance rating and serves as an
indicator of a candidate's early performance on the job. Candidates who receive high
performance ratings are indicative of hiring success, while the opposite holds true for
candidates with low performance ratings. Mathematically, it can be expressed as the success
ratio: the number of new hires who perform successfully divided by the total number of recruits.
7. Candidate job satisfaction: Candidate job satisfaction is an excellent way to track whether
the expectations set during the recruiting procedure match reality. Low candidate job
satisfaction highlights a mismanagement of expectations or incomplete job descriptions.
8. Applicants per opening: Applicants per job opening, or applicants per hire, gauges the job's
popularity. A large number of applicants could indicate high demand for jobs in that
particular area or a job description that is too broad.
9. Selection ratio: The selection ratio refers to the number of hired candidates compared to the
total number of candidates. This ratio is also called the submittals-to-hire ratio. The
selection ratio is very similar to applicants per opening: when there is a high number of
candidates, the ratio approaches 0. The selection ratio provides information such as the
value of different assessment tools and can be used to estimate the utility of a given
selection and recruitment system.

10. Cost per hire: Cost per hire is the total cost invested in hiring divided by the
number of hires.

11. Candidate experience: When we talk about recruiting metrics, candidate experience
shouldn't be overlooked. Candidate experience is the way that job seekers perceive an
employer's recruitment and onboarding process.
12. Offer acceptance rate: The offer acceptance rate compares the number of candidates who
accept a job offer with the number of candidates who received an offer. A low
rate is indicative of potential compensation problems. When these problems occur often
for certain functions, they can be discussed earlier in the recruiting process in an effort to
minimize the impact of refused job offers.

13. Percent of open positions: The percent of open positions compared to the total number of
positions can be applied to a specific department or the entire organization. A high
percentage can be indicative of high demand or low labor-market supply.

14. Application completion rate: The application completion rate is especially interesting for
organizations with elaborate online recruiting systems. Many large corporate firms require
candidates to manually input their entire CV into the system before they can apply for a job.
Drop-out in this process is indicative of problems in the procedure.
15. Recruitment funnel effectiveness: By measuring the effectiveness of all the different steps in
the funnel, you can specify a yield ratio per step. This makes for some excellent recruiting
metrics.

For example,

 15:1 (750 applicants apply, 50 CVs are screened)


 5:1 (50 screened CVs lead to 10 candidates submitted to the hiring manager)
 2:1 (10 candidate submissions lead to 5 hiring manager acceptances)
 5:2 (5 first interviews lead to 2 final interviews)
 2:1 (2 final interviews lead to 1 offer)
 1:1 (1 offer to 1 hire)
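The yield ratios above can be derived mechanically from the stage counts. A minimal Python sketch using the example figures (the stage names are shorthand for the steps listed above):

```python
from math import gcd

# Candidate counts at each funnel stage (the example figures above).
funnel = [
    ("applicants", 750),
    ("CVs screened", 50),
    ("submitted to hiring manager", 10),
    ("hiring manager acceptances", 5),
    ("final interviews", 2),
    ("offers", 1),
    ("hires", 1),
]

# Yield ratio per step = count at previous stage : count at next stage,
# reduced to lowest terms with the greatest common divisor.
for (prev_stage, prev_n), (stage, n) in zip(funnel, funnel[1:]):
    g = gcd(prev_n, n)
    print(f"{prev_stage} -> {stage}: {prev_n // g}:{n // g}")
```

Running this reproduces the ratios 15:1, 5:1, 2:1, 5:2, 2:1, and 1:1 listed above.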

16. Sourcing channel effectiveness: Sourcing channel effectiveness helps to measure the
conversion per channel. By comparing the percentage of applications with the percentage
of impressions of the position, we can quickly judge the effectiveness of different channels.
17. Sourcing channel cost: You can also calculate the cost efficiency of your different sourcing
channels by including ad spend, the amount of money spent on advertising on each
platform. By dividing the ad spend by the number of visitors who successfully applied to
the job opening, you can measure the sourcing channel cost per hire.

18. Vacancy rate: Vacancy rate is the ratio of total number of open positions to total number of
positions in the organization.
19. First year turnover rate: Employees who left the organization within one year divided by
total number of recruits in that year.
20. Company initiated attrition during probation: Number of involuntary exits during probation
divided by (opening headcount + closing headcount).
21. Performance rating distribution: Number of new hires who have an outstanding rating
divided by the total number of new hires; the same applies to the satisfactory, good, poor,
and very poor ratings.
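Most of the metrics above are simple ratios and are easy to compute once the underlying counts are tracked. A minimal Python sketch, with purely illustrative figures for one quarter:

```python
# Hypothetical recruitment figures for one quarter (illustrative only).
women_hired = 18
total_hires = 45
total_applicants = 900
offers_made = 50
offers_accepted = 45
open_positions = 12
total_positions = 240
left_within_first_year = 5

gender_mix = women_hired / total_hires                      # metric 1
selection_ratio = total_hires / total_applicants            # metric 9
offer_acceptance_rate = offers_accepted / offers_made       # metric 12
vacancy_rate = open_positions / total_positions             # metric 18
first_year_turnover = left_within_first_year / total_hires  # metric 19

print(f"Gender mix:            {gender_mix:.0%}")
print(f"Selection ratio:       {selection_ratio:.1%}")
print(f"Offer acceptance rate: {offer_acceptance_rate:.0%}")
print(f"Vacancy rate:          {vacancy_rate:.0%}")
print(f"First-year turnover:   {first_year_turnover:.1%}")
```

With these figures the gender mix is 40%, the selection ratio 5%, and the offer acceptance rate 90%.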

Apart from these metrics we also have some other metrics. Some of these are listed below:
 % closure within 60 days = No. of BGV cases closed within 60 days / Total no. of cases closed
   (BGV = Background Verification)
 % RAG status = No. of red cases closed / Total no. of cases closed [similar for amber and green cases]

 % pre-employment checks = No. of cases where pre-employment checks were carried out for cases initiated in a month / No. of BGV cases initiated in that month

 Employee Referral (ER) process performance = No. of candidates who joined within 14 days of profile submission / Total no. of joinings via ER in that month

 Fulfilment Ratio: This is calculated in two parts.
 Demand Universe / Joining ratio = Total joinings made in the month / (Total demand due or overdue for that month + joinings made not against demand due or overdue in that month)
 Fulfilment Universe / On-time Joining ratio = Joinings taking place within the service level agreement / Total joinings in that month

A few important points to be noted:


 In Cost per Hire, total recruitment cost = External Cost + Internal Cost

   EXTERNAL COST              INTERNAL COST
   Advertising cost           Time spent by recruiter
   Agency fees                Time spent by manager
   Candidate's expenses       New hire onboarding time
   New hire training cost     Lost productivity
   Other external costs       Other internal costs
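Cost per hire then follows by summing the two cost columns and dividing by the number of hires. A small Python sketch; every cost figure below is a hypothetical illustration, not data from the text:

```python
# Hypothetical cost components in Rs (all figures illustrative).
external_cost = {
    "advertising": 200_000,
    "agency_fees": 350_000,
    "candidate_expenses": 50_000,
    "new_hire_training": 150_000,
}
internal_cost = {
    "recruiter_time": 120_000,
    "manager_time": 80_000,
    "onboarding_time": 60_000,
    "lost_productivity": 90_000,
}
hires = 25

# Total recruitment cost = External Cost + Internal Cost
total_recruitment_cost = sum(external_cost.values()) + sum(internal_cost.values())
cost_per_hire = total_recruitment_cost / hires

print(f"Total recruitment cost: Rs {total_recruitment_cost:,}")
print(f"Cost per hire:          Rs {cost_per_hire:,.0f}")
```

Here the total is Rs 1,100,000, giving a cost per hire of Rs 44,000.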
 The employee life-cycle curve depicts that hiring someone who is better suited for the job has
the potential to create an enormous return on investment (ROI).

 RAG Status Reporting: This reporting method is used when project managers are asked to indicate
how well a project is doing in terms of a series of traffic lights. R stands for RED; A stands for AMBER;
and G stands for GREEN. RED means there are problems in the ongoing project, AMBER
means that the project is going OK, and GREEN shows that the project is going well.

 The quality of hire is the same as the success ratio. The success ratio is defined as the ratio of the
number of new hires who are successful at work performance to the total number of recruits. The success
ratio is used as an input for recruitment utility analysis. This analysis enables us to calculate an
ROI for different selection instruments. We have now reached a point where we can look
at the different utility models used.

FRAMING HUMAN CAPITAL DECISIONS THROUGH THE LENS OF UTILITY ANALYSIS


Utility analysis is a framework to guide decisions about investments in human capital. It is the
determination of the institutional gain or loss (outcomes) anticipated from various courses of action. At its
core, utility analysis considers three parameters: quantity, quality, and cost. The utility
of a selection device is the degree to which its use improves the quality of the individuals selected beyond
what would have occurred had that device not been used. In the context of employee selection, three of
the best-known models are discussed in detail.
1. The Taylor-Russell Model:
Many decision makers think that if candidates' ratings on a selection device (such as a test or interview)
are highly associated with their later job performance, the selection device must be worth investing
in. However, if a pool of candidates contains very few unacceptable candidates, or generates so few
candidates that the firm must hire almost all of them, better testing will be of little or no use. Taylor and
Russell translated these observations into a system of measuring tradeoffs, suggesting that the overall utility
or practical effectiveness of a selection device depends on more than just the validity coefficient. Rather,
it depends on three parameters:
a) The Validity coefficient (r): the correlation between a predictor of job performance and a
criterion measure of actual job performance.
b) The Selection Ratio (SR): the proportion of applicants selected.
c) The Base Rate (BR): the proportion of candidates who would be successful without the selection
procedure.
They defined the value of a selection system as the “success ratio”: the ratio of the number of
hired candidates who are judged successful to the total number of hired candidates. There are three
key underlying assumptions of this model:
1. It assumes fixed-treatment selection.
2. The model doesn't account for those rejected candidates who would have been successful at work.
3. The model classifies individuals into successful and unsuccessful groups. All individuals within
each group are regarded as the same.
The Taylor and Russell model demonstrates convincingly that even selection procedures with low
validity can increase the success ratio substantially if the selection ratio is low (lots of candidates to
choose from) and the base rate is near 50% (about half the candidates would succeed without any
further testing). A selection ratio of 1 means that all the candidates are hired, so testing is of no value.
The closer the selection ratio is to 1, the harder it is for better selection to pay off. The wide-ranging
effect that SR may exert on a predictor with a given validity is illustrated in fig: 8.2. In each case Xc
represents the predictor cutoff score and Yc represents the minimum level of job performance
(criterion cutoff score) necessary for success. As can be seen in fig: 8.3, even predictors with low
validity can be useful if the SR is low, so that the organization needs to choose only the cream of the crop.
Conversely, with a high selection ratio the procedure must have high validity in order to increase the
success ratio. It might appear that, because a predictor with a given validity is
more valuable at a low selection ratio, one should always opt to reduce the SR. However, the optimal
strategy is not that simple. When the organization must hire a minimum number of individuals,
lowering the SR means the organization must increase the number of available applicants. This means
expanding recruiting, and the selection effort may thus become too costly to implement. Utility,
according to Taylor and Russell, is also affected by the base rate. To be of any use in selection, the measure must
demonstrate incremental validity by improving on the BR. Fig: 8.4 brings all of the elements of the
Taylor-Russell model together. The following ratios were used by Taylor and Russell in developing
their tables.
 Base Rate = (A + D) / (A + B + C + D)
 Selection Ratio = (A + B) / (A + B + C + D)
 Success Ratio = A / (A + B)
By specifying the validity coefficient, the base rate, and the selection ratio, and making use of
“Tables for finding the volumes of the bivariate surface,” Taylor and Russell developed their tables. The
usefulness of a selection measure can thus be assessed in terms of the success ratio that will be obtained
if the selection measure is used. The gain in utility to be expected from using the instrument (the
expected increase in the percentage of successful workers) is derived by subtracting the base rate from
the success ratio. Mathematically,
Gain in utility over base rate = Success Ratio − Base Rate.
E.g., given an SR of 0.10, a validity of 0.30, and a BR of 0.50, the success ratio jumps to 0.71; thus a 21%
gain in utility over the base rate.
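The three ratios can be checked numerically from the quadrant counts of fig: 8.4. A small Python sketch, using hypothetical counts constructed so that they reproduce the worked example (SR = 0.10, BR = 0.50, success ratio = 0.71):

```python
# Quadrant counts from a hypothetical validation sample of 1,000 applicants.
# A = selected & successful, B = selected & unsuccessful,
# C = rejected & unsuccessful, D = rejected but would have been successful.
A, B, C, D = 71, 29, 471, 429
total = A + B + C + D

base_rate = (A + D) / total         # would succeed regardless of selection
selection_ratio = (A + B) / total   # proportion of applicants hired
success_ratio = A / (A + B)         # hired candidates judged successful
gain = success_ratio - base_rate    # gain in utility over the base rate

print(f"BR = {base_rate:.2f}, SR = {selection_ratio:.2f}, "
      f"Success ratio = {success_ratio:.2f}, Gain = {gain:.2f}")
```

With these counts the output matches the example: BR = 0.50, SR = 0.10, success ratio = 0.71, gain = 0.21.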
The validity coefficient referred to by Taylor and Russell is, in theory, based on present employees
who have already been screened using methods other than the new selection method.
Perhaps the major shortcoming of this utility model is that it reflects the quality of the resulting hires
only in terms of success or failure. When it is reasonable to assume that the use of higher cutoff scores on
a selection device will lead to higher levels of average job performance by those selected, the Taylor-
Russell tables will underestimate the actual value gained from the selection system. That
observation led to the development of the next framework for selection utility, THE NAYLOR-
SHINE Model.

2. THE NAYLOR-SHINE MODEL:


This model does not require employees to be split into satisfactory and unsatisfactory groups.
It defines utility as the increase in the average criterion score (e.g., the average level of job
performance of those selected) expected from the use of a selection process with a given validity and
selection ratio. The quality of those selected is now defined as the difference between the average
quality of the group that is hired and the average quality of the original group of candidates. Like Taylor
and Russell, this model assumes that both scores on the selection device and performance scores are
normally distributed and linearly related. The Naylor-Shine approach assumes that the relationship
between validity and utility is linear: the higher the validity, the greater the increase in the average
criterion score achieved by the selected group. The basic equation underlying
the Naylor-Shine model is shown below:

Z̄yi = rxy · (λi / φi)

where Z̄yi is the average criterion score (in standard-score units) of those selected, rxy is the validity
coefficient, λi is the height (ordinate) of the normal curve at the predictor cutoff Zxi (expressed in
standard-score units), and φi is the selection ratio.

Using the above equation as the building block, Naylor and Shine present a series of tables that specify,
for each SR, the standard predictor score that produces that SR, the ordinate of the normal curve at that
point, and the quotient λi / φi. The tables can be used to answer several important HR questions:
1. Given a specific SR, what will be the average criterion level of those selected?
2. Given a minimum cutoff score on the selection device above which everyone will be hired, what
will be the average criterion level?
3. Given a desired improvement in the average level of criterion score of those selected, and assuming
a certain validity, what SR and/or predictor score cutoff value (in standard-score units) should be used?
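Today the Naylor-Shine quantity can be computed directly rather than read from tables. A minimal Python sketch using only the standard library; the validity of 0.40 and the 30% selection ratio are illustrative assumptions:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def mean_criterion_gain(validity: float, selection_ratio: float) -> float:
    """Naylor-Shine: average standardized criterion score of those
    selected, Z̄y = r_xy * (lambda / phi), where lambda is the normal
    ordinate at the predictor cutoff and phi is the selection ratio."""
    cutoff = nd.inv_cdf(1 - selection_ratio)  # z_x that yields this SR
    ordinate = nd.pdf(cutoff)                 # height of the normal curve
    return validity * ordinate / selection_ratio

# e.g. validity 0.40, hiring the top 30% of applicants:
print(f"{mean_criterion_gain(0.40, 0.30):.3f}")
```

With these figures the average criterion score of those selected is about 0.46 standard deviations above the applicant-pool mean.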

The Naylor-Shine utility approach is more generally applicable than the Taylor-Russell approach because in
many cases an organization expects an increase in average job performance as it becomes more selective using a
valid selection process. However, “average performance” is expressed in standard (Z) scores, which
are more difficult to interpret than outcomes more closely related to the specific nature of the business, such as
dollar volume of sales, units produced or sold, or costs reduced.
Neither the Taylor-Russell nor the Naylor-Shine model formally integrates the cost of the selection system,
or monetary gains and losses, into a utility index. Both describe differences in the percentage of successful
employees (T&R) or the increase in average criterion score (N&S), but tell us very little about the benefits
to the employer in monetary terms. The BROGDEN-CRONBACH-GLESER Model, discussed next, was
designed to address this issue.

3. THE BROGDEN-CRONBACH-GLESER MODEL:


Brogden showed that, under certain conditions, the validity coefficient (r) is a direct index of “selective
efficiency”. That is, if the criterion and predictor are expressed in standard-score units, rxy represents
the ratio of the average criterion score made by persons selected on the basis of their predictor scores (Z̄y)
to the average criterion score made by persons selected on the basis of their criterion scores (Z̄yi).

Therefore

rxy = Z̄y / Z̄yi


Note: this assumes that the regression is linear and the SR is constant.
If selecting applicants based on their actual behavior on the job would save an organization
Rs 30,000/year over random selection, a selection device with a validity of 0.5 could be expected to
save Rs 15,000/year. That is, utility is a direct linear function of validity. As our ultimate goal is
to identify the monetary payoff, let's assume we could construct a criterion measure expressed in
monetary terms. We symbolize it as y$. Examples of this might include the sales made during a
week/month/year by each salesperson on a certain job. If we call the criterion measure
y, then here is the plain-English and mathematical description of Brogden's approach.
STEP I: Express the predictor-criterion relationship as the formula for a straight line.

y = a + bx ............ eqn (I)

where y = dependent variable or criterion (such as a job-performance measure)
x = independent variable that we hope predicts our criterion

Now, let's modify the equation:

y$ = b0 + b1x + e ............ eqn (II)

where e = random fluctuation or error around the straight line.

Our original formula described points that fall exactly on the straight line, but this formula describes
points that fall around the straight line. Fig: 8.5 shows this idea as a straight line passing
through an ellipse. The ellipse represents the cloud of score combinations that might occur in an
actual group of people, and the line in the middle is the one that comes closest to the maximum number
of points in the cloud. If we don't yet know how someone is going to perform on the job (which we
can't know before the person is hired), the best-guess estimate of how the employee might perform on
the job is the y$ value obtained by plugging the applicant's x score into the regression line of eqn (II).
The letter e is called error because, although our estimate of y$ from the regression line is a good guess,
it is not likely to be exactly the actual level of job performance. Therefore, y − y$ = e.
STEP II: Standardize x.

To get back to the validity coefficient, we need to convert the raw scores on our predictor and
criterion to standardized form. Standardizing all applicants' selection-process scores:

Zi = (xi − X̄) / SDx

where xi = selection-process score earned by applicant i

Zi = “standard” or Z score corresponding to the xi score of applicant i

X̄ = mean selection-process score, typically of all applicants in the above sample

SDx = standard deviation of the xi scores around X̄, i.e. SDx = sqrt( Σ(xi − X̄)² / (N − 1) )

Hence eqn (II) is modified as: y$ = b0 + b1z

STEP III: Express the equation in terms of the validity coefficient.

Let's modify this equation in terms of the validity coefficient. Taking expected values:

E(y$) = E(b0) + E(b1)·E(zs)

where E(y$) means “the expected value of the criterion, y, in monetary terms”; the subscript s in zs
shows that the standard scores come from the group of selected applicants. Remember, an expected
value is simply an average. Now, by definition, the average of all standard scores in the full applicant
sample is 0; so when z̄s = 0, E(b1)·E(zs) = 0 and E(y$) = E(b0). E(b0) is therefore the average monetary
value of the criterion for individuals selected at random from all applicants. Finally, the value of b1 is
obtained (e.g., using regression software) as:

b1 = rxy·(SDy / SDx)

Note: b1 is also called the standardized regression coefficient.

Also, because x has been standardized, SDx = 1, always.
Therefore b1 = rxy·SDy.

Substituting µ for E(b0) and b1 = rxy·SDy, we get:

y$ = µ + rxy·SDy·z̄s

This represents the expected monetary value per selected applicant, where z̄s is the average standard
predictor score of those selected. To calculate the expected average improvement in utility, or
improvement in monetary value, from using this model, we can subtract the expected value without
using the system, which is µ, from both sides of the equation. Because µ is the monetary value of
criterion performance the organization expects when it chooses applicants at random, y$ − µ equals
the expected gain in monetary-valued performance from using the selection instrument. This term is
also called the change in utility (ΔU).

STEP IV: Subtract the costs of the selection process.

y$ − µ = rxy·SDy·z̄s − (Na·C / Ns)

where Na = no. of applicants, C = cost of the selection process per candidate, and Ns = no. of
candidates selected.
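Putting Steps I-IV together, the per-selectee utility gain can be sketched in Python. Under top-down selection from a normal applicant pool, z̄s equals the normal ordinate at the cutoff divided by the SR (the same quantity used by Naylor and Shine). All figures below (validity, SDy, SR, cost per applicant) are illustrative assumptions:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def bcg_utility_per_hire(validity: float, sd_y: float,
                         selection_ratio: float,
                         cost_per_applicant: float) -> float:
    """Brogden-Cronbach-Gleser gain per selectee (Step IV):
    y$ - mu = r_xy * SD_y * z̄_s - (N_a * C / N_s),
    where N_a / N_s = 1 / SR and z̄_s = ordinate / SR under
    top-down selection from a normal applicant pool."""
    cutoff = nd.inv_cdf(1 - selection_ratio)
    z_s = nd.pdf(cutoff) / selection_ratio          # mean z of selectees
    testing_cost = cost_per_applicant / selection_ratio  # Na*C/Ns
    return validity * sd_y * z_s - testing_cost

# Illustrative figures: validity 0.5, SD_y = Rs 40,000, hire the top 20%,
# Rs 500 spent per applicant on the selection process.
gain = bcg_utility_per_hire(0.5, 40_000, 0.2, 500)
print(f"Expected utility gain per hire: Rs {gain:,.0f}")
```

With these assumptions the selection system yields roughly Rs 25,500 of extra criterion value per hire after testing costs, illustrating why utility grows linearly with validity and SDy.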

Vous aimerez peut-être aussi