
S. No | Level | KPIs | Metrics
1 | Strategic | Cost - Productivity | Test Cost Optimisation
2 | Strategic | Quality - Release Stability | Test Effectiveness
3 | Strategic | Test Cycle Time | Test Velocity
4 | Strategic | Cost, Quality, Time to Market | Test Scorecard
5 | Tactical | Test Cycle Time | Environment Downtime
6 | Tactical | Test Cycle Time | Test Data Index
7 | Tactical | Test Cycle Time | Service Virtualisation Index
8 | Tactical | Test Cycle Time, Quality | Automation - Regression
9 | Tactical | Test Cycle Time, Cost | Quality of Input from Development
10 | Tactical | Test Cycle Time, Cost | Static Testing Index
11 | Operational | Cost - Quality | Test Quality
12 | Operational | Test Cycle Time | Defect Density
13 | Operational | Quality | Defect Removal Efficiency (into UAT)
14 | Operational | Test Cycle Time | Defect Rejection Index
15 | Operational | Test Cycle Time | Defect Turnaround Time - Development
16 | Operational | Test Cycle Time | Defect Turnaround Time - Test
17 | Operational | Quality | Defect Severity Index
18 | Operational | Quality | Defects not linked to Test Cases
19 | Operational | Quality | Test Cases not linked to Defects

Agile Metrics
20 | Strategic | Cost | Effort Slippage
21 | Operational | Test Cycle Time | Test Velocity
22 | Operational | Test Cycle Time | Burndown Chart
23 | Operational | Cost | Burnup Chart
Objective
1. Measure the vendor cost trend across releases to check that the testing service is providing value for money.
2. Test Effectiveness is used to determine the effectiveness of the testing team's defect removal efforts. It compares the defect count identified before production (SIT + E2ET) with the defect count found in production.
3. Measure that more requirements / stories are delivered within a defined timeline.
4. Measure performance against the key objectives of the Application Development services.
5. Ensure minimal unplanned downtime so that Test Velocity can improve.
6. Ensure minimal loss of test time or developer effort due to test data issues so that Test Velocity can improve.
7. Ensure minimal loss of test time due to Service Virtualisation issues so that Test Velocity can improve.
8. Automation ensures that more test cycles can be run to check whether any functionality has regressed, thus improving quality and decreasing test cycle time.
9. Ensure that not too many Unit Test / System Test defects are passed into the SIT phase.
10. Ensure that requirement and design defects are identified early in the cycle.
11. Measure the cost required to find a defect in test.
12. Measure that the test scenarios written are identifying defects and that RBT has been applied before test execution.
13. An indicator to measure the effectiveness of detecting defects at the various stages of testing. It compares the number of defects leaked from Pre-UAT (SIT + E2ET) into UAT.
14. Defect Rejection Index (DRI) is the ratio of the number of defects rejected to the total number of defects reported. It is an indicator of the quality of the defects reported.
15. Defect turnaround time in days is the number of days elapsed between a defect being first reported and the defect being fixed (P1, P2).
16. Defect turnaround time in days is the number of days elapsed between a defect being ready for retest and the defect being closed (P1, P2).
17. An indicator which provides a measurement of the quality of the product, specifically reliability, fault tolerance and stability.
18. Percentage of defects not linked to corresponding test cases in a given project.
19. Percentage of test instances not linked to corresponding defects in a given project.
20. To measure the effectiveness of the Scrum team / iteration planning.
21. Measure the number of story points in each iteration.
22. To check whether the project team is behind / on / ahead of schedule in each iteration.
23. To check the productivity improvement of the Scrum team.


TAD = Test Assurance Database

Definition | Benchmark
1. TCO = Total Cost / Number of Requirements delivered | 10% YoY
2. TE = (Number of Defects in Production / (Number of Defects found in the Pre-Production phase (SIT + E2ET) + Production phase)) * 100, for P1 & P2 | <= 5%
3. Monitor the trend of requirements / stories delivered over the defined timeline | Upward trend; 20% increase within 6 months
4. % allocation for each KRA and total attainment | = 85%
5. ED = Average (Environment Down End time - Environment Down Start time) / Total Test Window time | <= 5%
6. Number of test cases failed due to test data issues / Total number of test data requests serviced | <= 5%
7. Number of test cases failed due to service virtualisation issues / Total number of Service Virtualisation requests serviced | <= 5%
8. Number of regression test cases automated / Total number of regression test cases that can be automated | >= 50% by end of Year 1; >= 70% by end of Year 2
9. QOI = Number of Defects which should have been identified in Development / Number of Requirements delivered | <= 20%
10. STI = Number of Defects identified in the early phase / (Number of Defects identified during testing + Number of Defects identified in the early phase), with RCA = Requirement or Design | <= 20%
11. TQ = Total Cost / Number of Acknowledged Defects pre-UAT (SIT + E2ET) | Downward trend
12. DD = Number of valid Defects detected / Total number of Test Cases executed | >= 20%
13. DRE = (No. of Defects found in UAT / (No. of Defects found in Pre-UAT (SIT + E2ET) + UAT)) * 100 | <= 15%
14. DRI = (No. of Defects rejected / Total No. of Defects reported) * 100 | <= 5%
15. DTT - Dev = Defect Fixed Date - Defect Reported Date | < 5 days
16. DTT - Test = Defect Closed Date - Defect Ready for Retest Date | < 3 days
17. DSI = [(4 * No. of Critical Defects) + (3 * No. of Major Defects) + (2 * No. of Minor Defects) + (1 * No. of Trivial Defects)] / Total No. of Defects found | <= 2.0
18. DNLT = (No. of defects not linked to test cases / Total No. of defects) * 100 | < 5%
19. (No. of test instances not linked to defects / Total No. of test instances) * 100 | < 5%
20. ES = Number of stories planned for the iteration but not completed / Number of stories planned for the iteration | < 10%
21. ATE = Total number of stories executed in each iteration | Upward trend; 20% increase within 6 months
22. The burndown chart indicates the amount of work (in hours) completed per day; the outstanding work (measured in hours) is plotted on the vertical axis, with time on the horizontal axis | Outstanding work <= 10%
23. Burnup charts are plotted with the cumulative story points achieved in each iteration (Y axis) against the iterations (X axis) | An exponential (upward-accelerating) curve indicates productivity improvement of the Scrum team
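The defect-based definitions above are simple ratios. Purely as a worked illustration, the Python sketch below computes a few of them (TE, DRE, DRI, DSI) and compares the results with their benchmarks; the function names and the sample counts are assumptions made for the example, not figures from the source.

```python
# Minimal sketch of the defect-based formulas defined above (TE, DRE, DRI, DSI).
# The input counts are hypothetical; in practice they would come from the
# HP ALM / JIRA extracts listed under "Data Collection Mechanism".

def test_effectiveness(prod_defects: int, pre_prod_defects: int) -> float:
    """TE = Defects in Production / (Pre-Production (SIT + E2ET) + Production) * 100."""
    return prod_defects / (pre_prod_defects + prod_defects) * 100

def defect_removal_efficiency(uat_defects: int, pre_uat_defects: int) -> float:
    """DRE = Defects found in UAT / (Pre-UAT (SIT + E2ET) + UAT) * 100."""
    return uat_defects / (pre_uat_defects + uat_defects) * 100

def defect_rejection_index(rejected: int, reported: int) -> float:
    """DRI = Defects rejected / Total defects reported * 100."""
    return rejected / reported * 100

def defect_severity_index(critical: int, major: int, minor: int, trivial: int) -> float:
    """DSI = (4*Critical + 3*Major + 2*Minor + 1*Trivial) / Total defects found."""
    total = critical + major + minor + trivial
    return (4 * critical + 3 * major + 2 * minor + 1 * trivial) / total

if __name__ == "__main__":
    # Hypothetical release figures, used only to show the arithmetic.
    print(f"TE  = {test_effectiveness(prod_defects=4, pre_prod_defects=96):.1f}%  (target <= 5%)")
    print(f"DRE = {defect_removal_efficiency(uat_defects=12, pre_uat_defects=88):.1f}% (target <= 15%)")
    print(f"DRI = {defect_rejection_index(rejected=3, reported=80):.1f}%  (target <= 5%)")
    print(f"DSI = {defect_severity_index(critical=2, major=8, minor=25, trivial=15):.2f}  (target <= 2.0)")
```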
Data Collection Mechanism | Presentation Layer
TAD / HP ALM | Power BI
HP ALM | Power BI
TAD | Power BI
TEM / HP ALM | Power BI
TAD / HP ALM | Power BI
TAD / HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
TAD / HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
HP ALM | Power BI
JIRA / Zephyr / TAD | Power BI
JIRA / Zephyr / TAD | Power BI
JIRA / Zephyr / TAD | Power BI
JIRA / Zephyr / TAD | Power BI
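For the agile rows the collection source is a JIRA / Zephyr / TAD extract. Purely as an illustration of how such an extract could be turned into the figures defined earlier (Effort Slippage, stories executed per iteration, and the story points that feed the burnup chart), here is a small Python sketch; the file name and column names (iteration, story_points, planned, completed) are assumptions, not the actual export schema.

```python
import csv
from collections import defaultdict

# Illustrative only: assumes a flat story export "stories.csv" from JIRA / Zephyr,
# with made-up columns: iteration, story_points, planned (Y/N), completed (Y/N).
def agile_summary(path: str) -> dict:
    planned = defaultdict(int)    # stories planned per iteration
    slipped = defaultdict(int)    # planned but not completed
    executed = defaultdict(int)   # stories completed per iteration (ATE)
    points = defaultdict(int)     # story points completed per iteration (burnup input)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            it = row["iteration"]
            if row["planned"] == "Y":
                planned[it] += 1
                if row["completed"] != "Y":
                    slipped[it] += 1
            if row["completed"] == "Y":
                executed[it] += 1
                points[it] += int(row["story_points"])
    return {
        # ES = stories planned but not completed / stories planned (benchmark < 10%)
        "Effort Slippage %": {it: round(slipped[it] / planned[it] * 100, 1) for it in planned},
        # ATE = total stories executed in each iteration (upward trend expected)
        "Stories executed": dict(executed),
        # Story points per iteration, accumulated for the burnup chart
        "Story points": dict(points),
    }

if __name__ == "__main__":
    print(agile_summary("stories.csv"))
```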


Venu's comments
1. Need to add an additional status called 'Delivered', with the status as in SIMP.
2. Production defects are not captured in ALMQC.
3. The 'Delivered' status will provide the data.
4. Not in ALMQC or JIRA scope, based on the definition.
5. Defect type can capture data for environment downtime.
6. Test data issues can be tracked using 'Defect root cause' or can be added to 'Defect type' as a list value (issue vs. task/request should be discussed).
7. Same as point 6 above.
8. Workflow setup is there in Simplification; it can be used as STD.
9. Need to discuss with Samir.
10. 'Defected in test phase' can provide data for this KPI.
11. Data can be extracted via test cycles.
12. Simple data.
13. Data can be extracted via test cycles and defects data.
14. Simple data.
15. Simple data from the audit trail.
16. Simple data from the audit trail.
17. Simple data.
18. Simple data.
19. To be discussed with Samir.
20. It can also be used for the classic development cycle.
21. It can also be used for the classic development cycle.
