
How to Measure the Relevance and Accuracy of OHS Information

Diploma in Occupational Health and Safety

Measuring the Accuracy and Relevance of OHS Information and Data

For information to be accurate it must be reliable, valid and current. It must also be complete, with no gaps that would affect its accuracy. Reliability* refers to the consistency or repeatability of the information, while validity* concerns whether the information or measure actually captures what it is intended to measure. Reliability may be affected by factors such as sample size or the period over which information is collected. For some information, such as occurrence* statistics*, data must be collected over a sufficient period of time to obtain reliable figures.
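To make the sample-size point concrete, the short simulation below shows how an injury rate measured over a few days swings around the true underlying risk, while a year of data is far more repeatable. All figures are hypothetical assumptions, not real workplace data.

import random

random.seed(42)

TRUE_DAILY_INJURY_PROB = 0.01  # assumed underlying risk: 1 injury per 100 worker-days
WORKERS = 50                   # assumed workforce size

def observed_rate(days: int) -> float:
    """Observed injuries per 100 worker-days over the given period."""
    worker_days = WORKERS * days
    injuries = sum(random.random() < TRUE_DAILY_INJURY_PROB
                   for _ in range(worker_days))
    return 100 * injuries / worker_days

# One working week at a time: the measured rate typically swings widely
# around the true value of 1.0, so a single short sample is unreliable.
print([round(observed_rate(5), 2) for _ in range(5)])
# A full year of data (250 working days): values cluster near 1.0.
print([round(observed_rate(250), 2) for _ in range(5)])

This is why occurrence statistics collected over a short window should be treated with caution.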

The use of positive, or lead, indicators* to measure OHS performance is an increasing trend in OHS management. They are useful as performance drivers because they measure the activities that drive good OHS performance. However, they are also an example of the importance of evaluating reliability and validity. Consider the following examples of performance measures given by the Minerals Council of Australia.

Activity measures focusing on safety commitment and effort in safety:
- scheduled safety meetings;
- audits completed as a percentage of those scheduled; and
- response to audit findings: percentage closed out.

Measures focusing on main areas of risk:
- exposures exceeding the standard (eg, noise and dust); and
- isolation deviations.

Measures of achievement of action plans:
- percentage of supervisors attending training; and
- percentage of injured employees rehabilitated.

(Minerals Council of Australia, no date)

What factors need to be considered when evaluating the accuracy (ie, reliability and validity) of these measures? The questions below highlight limitations in the reliability (ie, repeatability) and validity (ie, applicability) of each of the above measures.

Activity measures focusing on safety commitment and effort in safety

Scheduled safety meetings
- Reliability: What should be counted as a meeting? Safety meetings may be scheduled, but are they held? What level of attendance is required for the meeting to count?
- Validity: What are you actually measuring when you count the number of meetings? Do more meetings mean more management commitment and more achievement?

Audits completed as a percentage of those scheduled
- Reliability: If the same auditor repeated the audit in the same organisation using the same audit tool, would they get the same result? Would two auditors conducting an audit at the same time using the same tool get the same result? For reliability across organisations: what is defined as an audit? Who has to conduct the audit for it to be counted? Is the scope of the audit defined?
- Validity: What do audits actually measure? Are they a good measure of management commitment and OHS performance? Should this be linked with other measures to give a meaningful measure of commitment?

Response to audit findings: percentage closed out
- Reliability: What is "close out"? What are the criteria for close out? If different people review the same actions, will they reach the same decisions about which actions are closed out?
- Validity: Are the corrective actions determined according to defined criteria (ie, the hierarchy of control)? Could a closed-out action of "provide training" rate equally with an engineering control?

Measures focusing on main areas of risk

Exposures exceeding the standard (eg, noise and dust)
- Reliability: How often, and when, are the exposure measurements conducted? What conditions/work is occurring at the time of the measurement? How are the conditions standardised, or changes allowed for? Is the protocol for measurement defined?
- Validity: Are the actual standards a good measure of risk (eg, a full-shift dose compared with a grab sample; see the sketch after these questions)? To what extent are the exposure levels exceeded?

Isolation deviations
- Reliability: Are all deviations reported?
- Validity: Are deviations assessed for potential severity?

Measures of achievement of action plans

Percentage of supervisors attending training
- Reliability: What constitutes attendance? If a supervisor arrives late to a training session and leaves early, does this count as attendance?
- Validity: Are the learning objectives defined? Are the supervisors assessed for competence, or is the measure attendance only?

Percentage of injured employees rehabilitated
- Reliability: What is the definition of "rehabilitated"? Return to the previous job, or just return to work? Full time or modified hours?
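The dose-versus-grab-sample question above can be made concrete. The sketch below uses hypothetical dust concentrations; the figures and the conventional 8-hour averaging are illustrative assumptions, not a sampling protocol.

# Hypothetical dust-exposure figures illustrating why a full-shift
# time-weighted average (a "dose") differs from a single grab sample.
samples = [  # (concentration in mg/m3, hours at that concentration)
    (0.5, 4.0),  # morning: routine work, low dust
    (6.0, 1.0),  # one hour on a dusty task
    (0.5, 3.0),  # afternoon: routine work, low dust
]

# Conventional 8-hour time-weighted average: sum of (concentration x time) / 8
twa = sum(c * t for c, t in samples) / 8.0
print(f"8-hour TWA = {twa:.2f} mg/m3")  # prints 1.19 mg/m3

# A grab sample during the dusty task would read 6.0 mg/m3; one taken
# in the morning would read 0.5 mg/m3. Neither reflects the shift-long
# dose that an exposure standard is meant to control.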

Some weaknesses of positive performance measures* are summarised in the report on measuring performance in the construction industry (Safe Work Australia, formerly ASCC/NOHSC, 1999). It is clear that for any performance indicator to be accurate, three factors must be defined (drawn together in the sketch after the list below):

- the measure itself;
- the method of collecting the data; and
- the link to OHS performance.
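As a minimal illustration, these three elements might be recorded together for each indicator. The structure, field names and example text below are hypothetical assumptions, not a prescribed format.

from dataclasses import dataclass

@dataclass
class IndicatorDefinition:
    measure: str            # the measure itself
    collection_method: str  # how the data are collected
    performance_link: str   # why the measure reflects OHS performance

audit_indicator = IndicatorDefinition(
    measure="Audits completed as a percentage of those scheduled",
    collection_method="Count audits with a completed report against the annual schedule, reviewed monthly",
    performance_link="Shows whether planned assurance activity actually occurs",
)
print(audit_indicator.measure)

Writing the three elements down together makes gaps obvious: an indicator with no defined collection method, or no stated link to performance, cannot be evaluated for accuracy.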

Good performance indicators are often described as SMART:

1. Specific: they relate directly to what is being measured.
2. Measurable: data can be collected that is accurate and complete.
3. Actionable: they are easy to understand and highlight the areas where action is required.
4. Relevant: they measure what is important in determining performance.
5. Timely: data can be obtained when it is needed, and the data collected reflects current status.

Performance indicators are necessary to provide information on what is happening. Performance measures may be outcome-based (eg, rate of injury) or input-based (eg, number of workplace inspections). Outcome-based measures are also called negative* or lag indicators*, as changes in these indicators usually lag well behind the workplace changes that produce them. Input measures, or process drivers, are also called positive or leading indicators.
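As a worked example of an outcome-based (lag) measure, the lost time injury frequency rate (LTIFR), widely used in Australian industry, expresses lost time injuries per million hours worked. The figures below are hypothetical.

# Hypothetical figures for a common outcome-based (lag) indicator:
# LTIFR = lost time injuries per one million hours worked.
lost_time_injuries = 3      # assumed count for the period
hours_worked = 480_000      # assumed total hours worked in the period

ltifr = lost_time_injuries * 1_000_000 / hours_worked
print(f"LTIFR = {ltifr:.1f}")  # 6.2 lost time injuries per million hours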

This terminology may have detracted from the critical and useful discussion of the most appropriate measures of OHS performance. Negative implies bad, but it is not the performance measure that is bad: it may measure failure, and awareness of failure is essential to correcting the conditions that led to it. The terms lead and lag are more descriptively correct (Ruschena, 2005), although this author prefers the terms driver and output. However, in line with current use, the terms positive and negative will be used in this learning guide.

Accuracy

OHS information is also affected by the authority of its source. Information may be verified by talking with people or by checking another source. The most common place to find unreliable information is the Internet: much of what is on the net is opinion, not fact. You need to check that information taken from the Internet comes from a reliable source (eg, electronic journals or OHS regulatory authority websites).

Currency

OHS information should also be checked to ensure it is up to date. Have there been changes to legislation, exposure standards, the workplace, workforce structure or the organisation of work that affect the accuracy and relevance of the information?

