
THESIS

To obtain the degree of

DOCTOR OF THE UNIVERSITÉ GRENOBLE ALPES

prepared under a cotutelle (joint supervision) agreement between the Université Grenoble Alpes and the Federal University of Santa Catarina
Speciality: Industrial Engineering
Ministerial decrees: 6 January 2005 and 7 August 2006

Presented by
Francielly Hedler Staudt

Thesis supervised by Maria Di Mascolo and Carlos M. Taboada Rodriguez
and co-supervised by Gülgün Alpan

prepared at G-SCOP (France) and LDL (Brazil)
in the doctoral schools IMEP2 and PPGEP

Global warehouse management: a methodology to determine an integrated performance measurement

Thesis publicly defended on 15 October 2015,
before the jury composed of:

Mr. AZEVEDO Jovane Medina
Associate Professor, HDR, State University of Santa Catarina, Reviewer
Ms. SAHIN Evren
Professor, École Centrale de Paris, Reviewer
Mr. CURSI José Eduardo de Souza
Professor, INSA Rouen, Examiner
Ms. SILVA Vanina M. Durski
Associate Professor, Federal University of Santa Catarina, Examiner
Mr. FOLLMANN Neimar
Associate Professor, Technological University of Paraná, Examiner
Ms. DI MASCOLO Maria
Research Director, CNRS, Thesis supervisor
Mr. RODRIGUEZ Carlos M. Taboada
Professor, Federal University of Santa Catarina, President of the jury, Thesis supervisor
Ms. ALPAN Gülgün
Associate Professor, HDR, Grenoble INP, Thesis co-supervisor
Contents

Front page
Table of Contents
Acknowledgements
List of Acronyms
1 Introduction
1.1 Context of the study
1.2 Research Problem
1.2.1 Research Gap
1.2.2 Complexity
1.3 Dissertation Objectives
1.3.1 General Objective
1.3.2 Specific Objectives
1.4 Methodology and Development
1.5 Research Delimitations
1.6 Thesis Structure

2 Literature Review on Warehouse Performance
2.1 Introduction
2.2 Research methodology and delimitations
2.3 Results of Content Analysis
2.3.1 Based on geographical and journal representation
2.3.2 Based on the work methodology
2.3.3 Application area of works
2.3.4 Warehouse activities
2.3.5 Warehouse Management tools
2.4 Direct Warehouse Performance Indicators
2.4.1 Time related performance indicators
2.4.2 Quality related performance indicators
2.4.3 Cost related performance indicators
2.4.4 Productivity related performance indicators
2.5 Indirect Warehouse Performance Indicators
2.6 Classification of the Warehouse Performance Indicators
2.6.1 Specific and Transversal Indicators
2.6.2 Resource Related Indicators
2.7 Conclusions

3 Literature on Performance Integration and Tools
3.1 Introduction
3.2 Literature Review
3.2.1 Literature on indicator relationships and indicators aggregation
3.2.2 Literature on Performance Integration
3.3 Overview on mathematical tools used for performance integration
3.3.1 The choice of the dimension-reduction statistical tool
3.3.2 Principal Component Analysis - PCA
3.3.2.1 Objective
3.3.2.2 Data characteristics
3.3.2.3 Basic principles
3.3.2.4 Main outcomes
3.3.2.5 Interpretation of the results
3.3.3 Factor Analysis - FA
3.3.3.1 Objective
3.3.3.2 Data characteristics
3.3.3.3 Basic principles
3.3.3.4 Main outcomes
3.3.3.5 Interpretation of the results
3.3.4 Canonical correlation analysis - CCA
3.3.4.1 Objective
3.3.4.2 Data characteristics
3.3.4.3 Basic principles
3.3.4.4 Main outcomes
3.3.4.5 Interpretation of the results
3.3.5 Structural Equation Modeling - SEM
3.3.5.1 Objective
3.3.5.2 Data characteristics
3.3.5.3 Basic principles
3.3.5.4 Main outcomes
3.3.5.5 Interpretation of the results
3.3.6 Dynamic Factor Analysis - DFA
3.3.6.1 Objective
3.3.6.2 Data characteristics
3.3.6.3 Basic principles
3.3.6.4 Main outcomes
3.3.6.5 Interpretation of the results
3.4 Conclusions

4 Methodology to define an Integrated Warehouse Performance
4.1 Introduction - General methodology presentation
4.2 Conceptualization - The analytical model of performance indicators
4.3 Modeling
4.3.1 Data acquisition
4.3.2 Theoretical model of indicator relationships
4.3.3 Statistical tools application
4.4 Model Solution
4.4.1 Integrated Performance proposition
4.4.2 Scale definition
4.5 Implementation and Update
4.5.1 Integrated model implementation
4.5.2 Model update
4.6 Methodology implementation on this thesis
4.7 Conclusions

5 Conceptualization
5.1 Introduction - the Standard Warehouse
5.1.1 Warehouse Layout
5.1.2 Measurement Units of Performance Indicators
5.2 Analytical model of Indicator Equations
5.2.1 Definition of the metric set
5.2.2 Transformation of Indicator Definitions in Equations
5.2.3 Notation to describe Indicator Equations
5.2.4 Time Indicators
5.2.5 Productivity Indicators
5.2.6 Cost Indicators
5.2.7 Quality Indicators
5.3 Complete Analytical Model of Performance Indicators and Data
5.3.1 The Construction of Data Equations
5.3.2 Analytical model assumptions
5.4 Conclusions

6 Modeling
6.1 Introduction
6.2 Data generation for the Standard Warehouse
6.2.1 Assumptions in data generation
6.2.2 The global warehouse scenario
6.2.3 The internal warehouse scenario
6.2.4 Data characteristics
6.3 Theoretical model of Indicator relationships
6.3.1 The data associations
6.3.2 Determination of the independent data
6.3.3 Data versus indicator relationships
6.3.4 Analysis of indicator relationships
6.4 Statistical Tools Application
6.4.1 Data normality test
6.4.2 Correlation measurement
6.4.3 Principal Component Analysis
6.4.3.1 PCA for indicator dimensions
6.4.3.2 PCA with all 40 indicators
6.5 Conclusions

7 Model Solving, Implementation and Update
7.1 Introduction
7.2 Analysis of Jacobian and Correlation matrix to improve PCA results
7.3 Integrated performance model proposition
7.4 Scale for the Integrated Indicator
7.4.1 The analytical model adjustment
7.4.1.1 Equations standardizing indicator values
7.4.1.2 Equations to limit optimization search space
7.4.2 Objective function definition
7.4.3 The choice of the optimization algorithm
7.4.4 The setting of the optimization tool
7.4.5 The integrated indicator scale
7.5 Integrated Model Implementation
7.6 Model Update
7.7 Conclusions

8 Conclusions and suggestions for future research
8.1 Conclusions
8.2 Future Research Directions
8.2.1 Short-term Research Directions
8.2.2 Long-term Research Directions

Bibliography

A Complete Analytical Model of Performance Indicators and Data
A.1 Time indicator model
A.2 Productivity indicator model
A.3 Cost indicator model
A.4 Quality indicator model

B Data Generation
B.1 Receiving data
B.1.1 Equations of Receiving data
B.2 Storage data
B.3 Replenishment data
B.4 Picking data
B.5 Shipping data
B.6 Delivery data
B.7 Warehouse and Inventory data

C Manual Procedure to determine indicator relationships
C.1 The Manual Procedure
C.2 The indicator relationships schema for the manual procedure

D List of independent input values

E Theoretical Framework of indicator relationships

F Results of Dynamic Factor Analysis application

G Results of Anderson Darling Test

H Optimization model

I Mean and standard deviation values of indicators

J Optimization results

K Abstracts
K.1 Contexte du Problème de Recherche
K.2 Objectif Général
K.2.1 Objectifs Spécifiques
K.3 Étude bibliographique pour baser le développement de la méthodologie
K.4 Méthodologie pour mesurer la performance de l'entrepôt logistique de façon agrégée
K.5 Application de la méthodologie sur un entrepôt théorique
K.5.1 Détermination analytique des relations entre les indicateurs de performance
K.5.2 Agrégation d'indicateurs de performance par des outils statistiques - premiers résultats
K.5.3 Le modèle final de performance agrégée
K.5.4 Création d'une échelle pour l'indicateur de performance agrégée
K.5.5 Exemple d'utilisation du modèle agrégé
K.6 Conclusions
To my beloved family...
Acknowledgements

Life is made of moments. So, I would like to sincerely thank everybody who shared moments with me during my PhD. As my relationships were made in languages other than English, I prefer to thank everyone in their own language.

Em primeiro lugar, eu gostaria de agradecer ao meu amado marido Tiago por estar

sempre ao meu lado durante este período. Obrigada por acreditar no meu potencial e

sempre me animar novamente nos momentos em que não acreditava ser possível, por ser

meu incentivador, meu professor, meu conselheiro, meu companheiro, meu amante. Viver

estes momentos ao teu lado foi maravilhoso!

Agradeço ao meus pais, Rolf e Karin, pelo apoio no momento em que decidimos investir

nesta louca jornada. Não foi fácil, mas o suporte, auxílio e paciência de vocês, principalmente na fase final de redação da tese, foram importantíssimos. Amo vocês! Agradeço

ao resto da minha família, minha mana Luanna, Darcísio, Sirlei, Tati, Fabiano por todo o

apoio e bons momentos compartilhados ao longo destes anos.

Agradeço aos amigos Mauricio, Lidiane, Fábio, Simoni e Marconi pelos inúmeros mo-

mentos de alegria que passamos. Com certeza o fardo foi menos pesado com a companhia

de vocês!

Agradeço ao Professor Taboada por tantos momentos vivenciados, mesmo antes do

doutorado. Muito obrigada por acreditar na ideia do meu trabalho e da cotutela, por

brigar por mim quando foi preciso, por me ensinar que devemos acreditar no potencial

das pessoas.

Agradeço ao Neimar, Marina, Dimas, Maria e Marisa e a todos os outros membros do

Laboratório de Desempenho Logístico (LDL) pelas saudáveis trocas de experiências e de

conhecimento que sempre tivemos.

Agradeço ao lado brasileiro da banca de doutorado, Professor Neimar, Professora

Vanina, e especialmente ao Professor Jovane Medina por ter aceitado realizar o relatório

da tese.

Je voudrais remercier Maria et Gülgün pour avoir fait l'encadrement rester très proche

même quand on était loin. J'ai appris beaucoup avec vous et j'espère vraiment qu'on puisse

continuer à travailler ensemble. Merci de m'avoir accepté lors de ma première visite au

G-SCOP, même quand mon travail n'était qu'une idée. Le temps que je suis restée au

G-SCOP je ne vais jamais oublier. En parlant du labo G-SCOP, je voudrais remercier

Marie-Jo, Fadila, Kevin et le personnel administratif pour l'accueil et compétence. Par

ailleurs, à tous les collègues doctorants et professeurs pour les bons moments de jeux,

discussions à la Kfèt et soirées festives. Merci à mes co-bureaux Mahendra, Quentin,

Maxime et Yohann pour les moments de partage sur le travail, mais aussi sur la vie en

général.

Je voudrais remercier le côté français du jury de thèse, Professor Eduardo Cursi et

Professor Evren Sahin pour votre disponibilité de participer à la soutenance. De plus, je

remercie M. Xavier Brunotte de l'entreprise Vesta System pour avoir fourni une licence du

logiciel CADES pour l'exécution de ce travail. Je remercie aussi Frédéric Wurtz pour tout

le support administratif donné à moi et Tiago lors de notre arrivée et aussi pendant notre

séjour à Grenoble.

Merci à Rodrigo, Dyenny, William, Douglas, Angelica, Diego, Lucas, Guilherme, Paula,

Vinicius, Juliana, Marcelo, Thiago, Poliana, Vincent F., Jonathan et Savana, avec qui j'ai

partagé des moments très agréables!

À mes grands amis Pauline, Laura, Julie, Yohann, Lucie, merci pour tous les enseigne-

ments de français, sur la France, sur les montagnes, sur la vie! Vous êtes des personnes

incroyables et je suis chanceuse de vous avoir trouvé! Mon séjour en France était spéciale

à cause de vous!

Finally, for the financial support, I would like to thank CAPES and the Région Rhône-Alpes.
List of Acronyms

AHP Analytic Hierarchy Process

CADES Component Architecture for the Design of Engineering Systems

CBR Case-Based Reasoning

CCA Canonical Correlation Analysis

CFA Confirmatory Factor Analysis

CSEM Covariance Structural Equation Modeling

DC Distribution Center

DEA Data Envelopment Analysis

DEMATEL DEcision-MAking Trial and Evaluation Laboratory Method

DFA Dynamic Factor Analysis

DMU Decision Making Unit

DSS Decision Support System

DWMS Digital Warehouse Management System

EDCs European Distribution Centers

EFA Exploratory Factor Analysis

FAHP Fuzzy Analytic Hierarchy Process

FA Factor Analysis

FR Fuzzy Reasoning

GI Goods Inwards

GP Global Performance

ICO Internet Connected Objects

IoT Internet of Things

ISO International Organization for Standardization

IT Information Technology

JIT Just in Time

KF Kalman Filter

KPI Key Performance Indicator

MACBETH Measuring Attractiveness by a Categorical-Based Evaluation TecHnique

MARSS Multivariate Autoregressive State Space Model

MLE Maximum Likelihood Estimation

NIPALS Nonlinear Iterative Partial Least Squares

NS Normal Scale

OS Optimized Scale

PCA Principal Component Analysis

PCTM KPI Cost Transformation Matrix

PC Principal Component

PLS Partial Least Squares regression

PMS Performance Measurement Systems

PPS Production Possibility Set

QFD Quality Function Deployment

QMPMS Quantitative Model for Performance Measurement System

RFID Radio Frequency Identification

SCOR Supply Chain Operations Reference-model

SEM Structural Equation Modeling

SI International System of Units

SKU Stock Keeping Unit

SQP Sequential Quadratic Programming

TOC Theory of Constraints

TQM Total Quality Management

V-A-T Value-added Tax

VAL Value Adding Logistic

WLI Warehouse Logistics Index

WMS Warehouse Management System


Chapter 1
Introduction

Science, my boy, is made up of mistakes, but they are


mistakes which it is useful to make, because they lead little
by little to the truth.
Jules Verne

Contents

1.1 Context of the study
1.2 Research Problem
1.2.1 Research Gap
1.2.2 Complexity
1.3 Dissertation Objectives
1.3.1 General Objective
1.3.2 Specific Objectives
1.4 Methodology and Development
1.5 Research Delimitations
1.6 Thesis Structure

Abstract
This chapter presents the context of the study and the research gaps which form the basis for the work's objectives. In addition, the dissertation proposal is detailed, presenting the research methodology and the steps carried out to achieve these objectives. Finally, the research delimitations are discussed and the thesis structure is reported.

1.1 Context of the study

The literature about performance measurement is vast. Performance measurement, or organizational performance, has become an important issue in companies due to the pressure to deliver results (Kennerley and Neely, 2002). The performance indicators, which form the performance measurement system, provide a tool to compare current results with the stated objectives and thus, when needed, to launch the actions necessary to reach these objectives (Berrah et al., 2000). Summarizing the literature of the last 30 years, it is possible to identify four main phases in the performance measurement area (Neely, 2005).

First, the 1980s saw the problem identification phase, where the dominant theme was a discussion of the problems of performance measurement systems. Kennerley and Neely (2002) state that in this stage, there was a growing realization that, given the increased complexity of organizations and the markets in which they compete, it was no longer appropriate to use financial measures as the sole criteria for assessing success. The financial measures are concerned with cost elements and quantify performance solely in financial terms, but many enhancements are difficult to quantify monetarily, such as lead-time reduction, quality improvements and customer service (Tangen, 2004). So, there has been growing criticism of traditional performance measurement systems, which tend to focus only on financial results (Coskun and Bayyurt, 2008). The main reason is, according to Fernandes (2006), that organizations compete not just on financial efficiency, but also on social legitimacy. A company does not want just to maximize financial revenues, but also to be recognized and accepted in its environment.

By the early 1990s, the second phase, "potential solutions", proposed measurement frameworks such as the balanced scorecard (Neely, 2005). Following these developments, researchers started to suggest other performance measures, since financial indicators could not meet the expectations of all stakeholders and a good organizational performance should balance all related organizational dimensions (Fernandes, 2006). The third phase, "methods of application", involved the search for ways in which the proposed frameworks could be used (Neely, 2005).

The beginning of the 2000s was marked by the empirical investigation phase, in which people began to look for more robust empirical and theoretical analyses of performance measurement frameworks and methodologies. The objective was to develop dynamic rather than static measurement systems and to ensure an appropriate focus on enterprise performance management, rather than simply performance measurement (Neely, 2005). The performance measurement system is ultimately responsible for maintaining alignment and coordination. Alignment deals with the maintenance of consistency between the strategic goals and metrics as plans are implemented and restated as they move from the strategic through the tactical and operational levels (Melnyk et al., 2004).

Nowadays, we are in the information era. The Internet has changed the way people and companies relate to each other. This situation also has an impact on performance management methods. Lam et al. (2011) argue that information systems, such as the warehouse management system (WMS), are recognized as useful means to manage resources in the warehouse. Information technology enables, for example, product tracking from raw materials production up to customer acquisition or products' end-of-life. The Internet of Things (IoT) is often considered to be part of the Internet of the future, consisting of billions of intelligent communicating things or Internet Connected Objects (ICO) which will have sensing, actuating, and often data-processing capabilities (Ng et al., 2013).

One of the changes coming from communication development is the conversion of local competition into global competition. Companies seek constant improvement of their products and services to satisfy customers while trying to reduce costs. This has led companies to decentralize their production systems all over the world. Thus, supplying the correct product, at the right time and in the right quantity has become a challenge, requiring very good management of all company areas. Logistics plays an important role by adding value to products, and it has become a critical factor in obtaining competitive advantages. Manufacturing logistics chains consist of complex interconnections among several suppliers, manufacturing facilities, warehouses, retailers and logistics providers. Performance modeling and analysis become increasingly more important and difficult in the management of such complex manufacturing logistics networks (Wu and Dong, 2007).

One of the important aspects under the responsibility of the logistics sector is the warehouse, where the main logistics operations take place: transportation, warehousing and stocking. Not only is the number of warehouses increasing substantially, but their functionality is also changing. Whereas in the past many European Distribution Centers (EDCs) primarily served as a warehouse with a distribution function, some of the current EDCs host European headquarters, call-centers, service centers or manufacturing facilities as well (De Koster and Warffemius, 2005). The connection of these activities in one place makes performance measurement in the warehouse a key factor for the overall performance of the logistics operations.

The growing complexity of warehouse operations and easy access to information have led companies to adopt a large number of indicators, making their management increasingly difficult. The reason is the misunderstandings managers may face when assessing global warehouse performance, since different indicator characteristics make the evaluation of their structural relationships difficult. Moreover, today managers are confronted with greater uncertainty and unpredictability, complicating decision making; wrong decisions can thus be more disastrous (Sardana, 2008).

Regarding the quantity of indicators used to manage performance, managers have to choose between many indicators (having a complete set of information to make decisions) or few indicators (e.g., the KPIs, Key Performance Indicators). In the first case it is hard to evaluate the global performance with so much data; but if the manager chooses few indicators, the global evaluation is simplified and some important information can be lost. In both cases, there will be indicators with different objectives (e.g., the level of a cost indicator shall be minimized, while a quality indicator level shall be maximized). This fact may increase the difficulty of the analysis performed by the manager while evaluating the warehouse's global performance, whether he chooses many or few indicators. Cai et al. (2009) confirm this conclusion, affirming that it is difficult to figure out the intricate relationships among different KPIs and the order of priority for the accomplishment of individual KPIs.

Nevertheless, even if managers would like to evaluate just a few indicators, the more complex the process, the more numerous and varied the indicators needed (Melnyk et al., 2004). Thus, the aggregation of indicators can considerably simplify the analysis of a system, summarizing the information of a given set of sub-indicators (Franceschini et al., 2006).

Therefore, the main motivation of this work is to support manager decisions on the global warehouse performance in an effective way, considering the existing indicators of the warehouse activities and knowing that there are limits to the decision-maker's ability to process large sets of performance expressions (Clivillé et al., 2007). In this context, the research proposal is to define an integrated warehouse performance measurement system which aggregates indicators, giving a summarized feedback about the overall performance of the warehouse considering all relevant information.

It is important to highlight that, in this dissertation, this global performance is related to the aggregation of operational indicators of the warehouse, since this area has the greatest quantity of indicators in use.

Interestingly, the term performance aggregation has different meanings in the literature. For example, Böhm et al. (2007) state that performance information used at higher decision levels is more aggregated than that employed at lower levels for various reasons (data availability, error minimization, etc.). In this dissertation, we consider performance aggregation as the mathematical union of several performance indicators in order to achieve one measure representing all the performance indicators of the system. This definition is confirmed by Clivillé et al. (2007), who state that the aggregation of the performance expressions is an operation that synthesizes the elementary performance expressions into a global performance expression.
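To make this definition concrete, here is a minimal sketch of such a mathematical union (illustrative only; the bounds $p_i^{\min}$, $p_i^{\max}$ and weights $w_i$ are assumptions, and the actual model is developed in later chapters). If each indicator $p_i$ is first rescaled so that larger is always better, a global expression can be written as a weighted sum:

\[
\tilde{p}_i =
\begin{cases}
\dfrac{p_i - p_i^{\min}}{p_i^{\max} - p_i^{\min}} & \text{if } p_i \text{ is to be maximized (e.g., quality)} \\[8pt]
\dfrac{p_i^{\max} - p_i}{p_i^{\max} - p_i^{\min}} & \text{if } p_i \text{ is to be minimized (e.g., cost)}
\end{cases}
\qquad
P_{\text{global}} = \sum_i w_i\,\tilde{p}_i, \quad \sum_i w_i = 1.
\]

The difficulty addressed in this thesis is precisely how to obtain such weights and bounds objectively, rather than by judgment.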

The next section presents the literature supporting the research gaps that this dissertation proposes to fulfill.

1.2 Research Problem


This section is divided into two subsections. First, we present the research gaps reported by previous works, explaining for which problems we propose solutions. Second, the complexity of the subject and the proposed solution are detailed.

1.2.1 Research Gap


Warehouse performance assessment has been largely ignored in the literature (Dotoli et al., 2009; Johnson and McGinnis, 2011). While there are widely accepted benchmarks for individual warehouse functions such as order picking, little is known about the overall efficiency of warehouses (Johnson and McGinnis, 2011). Gu et al. (2010) present a review of design and performance evaluation of warehouses. The authors address important future directions for the warehouse research community, stating that total warehouse performance assessment models are themselves a considerable development challenge. Indeed, we found very few papers analyzing warehouse performance relationships and proposing frameworks to evaluate the global performance. The two main approaches used in the literature can be summarized as follows.

First, Sohn et al. (2007) evaluate relationships among various influential factors to develop an Air Force Warehouse Logistics Index (WLI). This index evaluates the logistics support capability of ROKAF (Republic of Korea Air Force) warehouses. The authors apply questionnaires to warehouse workers, obtaining the database necessary to perform a Structural Equation Modeling analysis to find relationships among the predefined factors.

The group of works to which Sohn et al. (2007) belongs presents as its main characteristic the acquisition of data from questionnaires in order to apply mathematical tools. After interviewing people related to the subject, the papers can use several statistical tools to confirm, or not, the proposed relationships. In most cases, the questionnaires do not contain indicator information, and in the cases where there are indicators, they are evaluated qualitatively.

The second approach evaluates the global warehouse performance without subjective judgments. These papers basically use the DEA (Data Envelopment Analysis) tool. For example, Johnson et al. (2010) investigate the factors that impact warehouse performance (using a correlation method) and evaluate warehouses with regard to technical efficiency (i.e., inputs and outputs).

The DEA tool is usually used for benchmarking, and the database needed to run it is related to production inputs and outputs. Moreover, indicators such as customer satisfaction or perfect orders (related to more than one activity) are not included in the model.
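For readers unfamiliar with DEA, the following minimal sketch illustrates the input-oriented CCR multiplier model on which such benchmarking studies build. All figures (five warehouses, two inputs, one output) are invented for illustration and do not come from the cited studies.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical monthly data for 5 warehouses (DMUs), invented for illustration:
# inputs  = [labor hours, storage area in m^2], output = [order lines shipped]
X = np.array([[100., 2000.],
              [120., 1800.],
              [ 90., 2500.],
              [150., 2200.],
              [110., 2100.]])
Y = np.array([[5000.],
              [5200.],
              [4800.],
              [6000.],
              [5100.]])

def ccr_efficiency(k):
    """Input-oriented CCR efficiency of warehouse k (multiplier form)."""
    n_dmu, n_in = X.shape
    n_out = Y.shape[1]
    # Variables: output weights u (n_out entries), then input weights v (n_in)
    c = np.concatenate([-Y[k], np.zeros(n_in)])        # maximize u . y_k
    A_ub = np.hstack([Y, -X])                          # u . y_j - v . x_j <= 0 for all j
    b_ub = np.zeros(n_dmu)
    A_eq = np.concatenate([np.zeros(n_out), X[k]]).reshape(1, -1)  # v . x_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None))
    return -res.fun

for k in range(X.shape[0]):
    print(f"warehouse {k + 1}: technical efficiency = {ccr_efficiency(k):.3f}")
```

A warehouse with efficiency 1.0 lies on the efficient frontier; values below 1.0 indicate how much the inputs could be contracted. Note that the model handles only quantifiable inputs/outputs, which is exactly the limitation raised above.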

We observe that the literature on the warehouse subject does not provide an aggregated model to measure warehouse performance with a view to its periodic management. Therefore, we also examine the literature concerning the aggregation of performance measurement systems (PMS) in enterprises.

Several authors discuss the aggregation of performance indicators and their relationships.

Rodriguez et al. (2009) state that performance indicators provide information as to whether the upstream objectives are being reached or not. However, no further information about the causes is provided by these KPIs (Key Performance Indicators). For these authors, discovering relationships between KPIs is potentially much more profitable for an organization if it is possible to uncover the latent relationships that occur between objectives of the PMS. Then, cause-effect relationships between objectives could be explained and managers would have additional decision-making information. For Melnyk et al. (2004), while there are numerous examples of the use of various metrics, there are relatively few studies in operations management that have focused on the effects of metrics within either the operations management system or the supply chain.

Lauras et al. (2010) affirm that each KPI should be examined separately and then in related groups of indicators. Analysts such as the task leader or senior manager must simultaneously consider all these factors. Regarding the number of indicators analyzed simultaneously, Lohman et al. (2004) state that it is impossible for a manager to make decisions on the basis of 100 unstructured metrics. Furthermore, Melnyk et al. (2004) present the complexity of an individual's metrics set as a load imposed upon a person's finite mental capacity.

According to Lohman et al. (2004), a possible solution is to cluster the metrics into perspectives to facilitate the manager's interpretation. Franceschini et al. (2008) assert that if the performance measurement area includes different processes, it is possible to define an aggregate indicator which synthesizes the performance of the set of indicators. For Vascetta et al. (2008), the aggregated indicator is an informative tool, able to provide general background in a format that is easy to create and to update. In addition, it should have an attractive and understandable format to be considered helpful for people of all sectors. Lauras et al. (2010) reinforce that an advantage of an aggregated indicator is to provide an immediate and global overview of the performance situation, interpretable by an entity not familiar with the details of the activities.

Even if several authors have discussed the need for an aggregate measure, few works have tried to accomplish it. Thus, the main research gaps which this dissertation proposes to fulfill are the following. "Using a set of ratio measures can lead to confusion; if some measures are good and some are poor, is the warehouse performing well?" (Johnson et al., 2010). The challenge is to design a structure for the metrics (i.e., grouping them together) and to extract an overall sense of performance from them (i.e., being able to address the question "Overall, how well are we doing?") (Melnyk et al., 2004). In the same way, Lohman et al. (2004) affirm that a conceptual question is still not answered: what are the effects of combining several measures into an overall score?

Even if some of these questions were asked more than 10 years ago, they are still valid since there are many developments to be made on this subject. One confirmation is the statement of Clivillé et al. (2007), pointing out that as soon as managers use more than one KPI, problems of comparison and aggregation of the performance expressions will exist.

After the works of Melnyk et al. (2004) and Lohman et al. (2004), some papers have studied ways to aggregate performance. These studies usually use a mathematical tool based on managers' opinions or subjective judgments (e.g., fuzzy logic, AHP - Analytic Hierarchy Process) to achieve this objective (see, for example, Luo et al. (2010)). Also, several works analyze relationships among enterprise areas/departments using questionnaires (see Fugate et al. (2010)). Unlike these earlier works, we propose, in this dissertation, a methodology to measure the integrated warehouse performance objectively, without considering experience or subjective judgments inside the mathematical tools. For that, analytical models and statistical tools are used to relate and aggregate indicators, including all relevant indicators in the model.

The work of Rodriguez et al. (2009) is the closest we found to our proposition. Rodriguez et al. (2009) develop a methodology to define aggregated indicators without judgments, using the time series of indicators to measure their correlations and combine them into factors. The main goal of that work is to relate the aggregated performance indicators upstream towards the strategic objectives of the company, to analyze objective achievement.

This dissertation differs from Rodriguez et al. (2009) in the following points: our purpose with the performance aggregation is to provide insights about warehouse performance management at the operational level instead of the strategic level; the application area of our work is warehouses instead of enterprise administration; the statistical tool used by Rodriguez et al. (2009) is just one part of the analysis performed in our work, since in this dissertation relationships are also determined analytically; and we also develop a scale for the integrated performance, which can be used for comparison purposes.

The proposed work is relevant from both theoretical and practical points of view: this subject has received little attention and this dissertation brings new insights into the theme; companies can assess their global performance by implementing the proposed methodology and make their warehouse management more efficient.

The complexity and difficulty of reaching this solution to aggregate warehouse performance are detailed in the next section.

1.2.2 Complexity
The complexity of this theme is addressed in different ways by the literature.

Caplice and Sheffi (1994) report some trade-offs involved in the choice of indicators. One of them is usefulness versus integration of indicators. This trade-off indicates that as a metric becomes more aggregated it loses its direct usefulness. Moreover, if an indicator captures all of the details of a process, it tends to become more complex and thus harder to understand.

Franceschini et al. (2006) state that the effectiveness of an aggregated indicator strongly depends on the aggregation rules, because sometimes its result can be questionable or even misleading. Two years later, Franceschini et al. (2008) confirm that the aggregation of several indicators into an aggregated indicator is not always easily achievable, especially when the information to synthesize is assorted.

Vascetta et al. (2008) assert that aggregation using mathematical equations necessarily requires many assumptions and simplifications which could lead to incorrect or uncertain analyses, misunderstandings and distortions of data, sometimes making experts reluctant to use and promote the indexes among decision-makers.

Beyond the strong criticism of indicator usefulness and the possible reluctance of managers to utilize aggregated indicators, the main challenge is to provide trustful relationships among indicators. We believe that once this last problem is solved, the others will be considerably minimized. Thus, the proposition of this thesis is to relate indicators considering just the indicator equations and the time series of their results, without human judgments. Two different quantitative methods (an analytical model and a statistical tool) are applied, and an analysis of their different results builds the solution.

It is hard to model indicator relationships since several factors influence their results. De Koster and Balk (2008) exemplify this situation, affirming that common measures used in warehouses (e.g., order lines picked per person per hour, picking or shipment error rates, order throughput times) are not mutually independent and, additionally, each of them can depend on multiple inputs. The result is that the indicators not only influence one another (e.g., order lines picked per person per hour and order throughput time), but they can also be influenced by other warehouse parameters, such as system automation, the assortment size, and the size of the warehouse.
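A small synthetic illustration of this mutual dependence (the figures are invented, not taken from De Koster and Balk (2008)): two indicators that share the same underlying inputs are automatically correlated, even when those inputs vary independently.

```python
import numpy as np

rng = np.random.default_rng(1)
hours = rng.normal(2000, 150, size=36)    # monthly picking labor hours (synthetic)
lines = rng.normal(55000, 4000, size=36)  # monthly order lines picked (synthetic)

productivity = lines / hours              # order lines picked per labor hour
time_per_line = 60.0 * hours / lines      # minutes of labor per order line

# Both indicators are built from the same two inputs, so they come out strongly
# (negatively) correlated even though hours and lines were drawn independently.
print(np.corrcoef(productivity, time_per_line)[0, 1])
```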

Another potential problem is how to provide a general solution covering many different kinds of warehouses. Here, the first issue is to define the set of indicators to measure warehouse performance. Clivillé et al. (2007) confirm that one major problem in the design of PMS (Performance Measurement Systems) concerns the determination of performance expressions which are useful for control decision-making.

Finally, the aggregated performance result must have a meaning that managers can interpret. As elementary performance expressions associated with various heterogeneous indicators are brought into a common reference, it is necessary to create a new scale that provides information about the current warehouse situation and how far it is possible to go. The complexity lies especially in the determination of the scale boundaries, since they are usually related to the companies' goals.

The next sections present the dissertation objectives and the development required to achieve the proposed results.

1.3 Dissertation Objectives


1.3.1 General Objective
The main goal of this dissertation is to develop a methodology for an integrated warehouse performance evaluation through indicator aggregation.

1.3.2 Specific Objectives

To reach this objective, it is necessary to balance different indicators using mathematical tools in order to consider the particularities of each of them. From the general objective presented, the following specific objectives are proposed:

• Definition and classification of warehouse performance indicators;

• Development of an analytical model of performance indicators and data equations;

• Creation of a methodology to determine an integrated warehouse performance measurement;

• Discovery of a method to determine indicator relationships analytically;

• Determination of an optimization model to design a scale for the integrated performance.

Each of these specific objectives represents a contribution of this work. The next section details all the steps taken to attain the objectives presented.

1.4 Methodology and Development


The general research methodology applied in this dissertation is quantitative model-based research. According to Bertrand and Fransoo (2002), this methodology is based on the assumption that we can build models which explain (part of) the behavior of real-life operational processes, or that can capture (part of) the decision-making problems that are faced by managers in real-life operational processes.

Regarding the specific steps of this work, it is possible to define two further sub-methodologies. The first one consists of normative empirical quantitative research, defined as research in which policies, strategies and actions are developed (Bertrand and Fransoo, 2002). This methodology extends from step one of Figure 1.1 (Searches on Databases. Keyword: warehouse performance) up to the methodology development (Methodology to determine an integrated performance measurement). The second methodology, encompassing the rest of the work phases (Figure 1.1), corresponds to descriptive empirical research, which is primarily interested in creating a model that adequately describes the causal relationships that may exist in reality, leading to an understanding of the processes going on (Bertrand and Fransoo, 2002).


1.4. Methodology and Development 9

The research is conducted as described in Figure 1.1, which shows a structured division of the work into three main columns: bibliographic research, development and outcomes. The bibliographic research steps in the left column of Figure 1.1 relate specifically to the knowledge taken from the literature. This knowledge is used as a basis for the development area (middle column). Finally, the outcomes are the results of the developments carried out in this dissertation, also called the main contributions of the work.

[Figure omitted in this extraction: a three-column flowchart (Literature, Development, Outcomes) linking the database searches and literature analyses to the developments (indicator equations, evaluation of statistical tools, data generation, Jacobian and correlation analyses, scale development) and to the resulting contributions of the work.]

Figure 1.1: Research steps.

Figure 1.1 starts with a deep literature review, carried out in order to identify the set of performance indicators used for warehouse performance measurement. We find that the literature does not provide a clear classification of these warehouse indicators with respect to their definitions. Thus, the first outcome of this dissertation is the classification and definition of the warehouse performance indicators. From this result, indicator definitions are transformed into measurable equations.
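As an illustration of this transformation (the indicator and notation here are simplified stand-ins for the full model of Chapter 5), a textual definition such as "picking productivity: order lines picked per labor hour spent in picking" becomes a measurable equation:

\[
\text{PickProd} = \frac{l_{\text{pick}}}{h_{\text{pick}}},
\]

where $l_{\text{pick}}$ is the number of order lines picked in the period and $h_{\text{pick}}$ is the labor hours spent on picking in the same period.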

After establishing which warehouse performance indicators will be aggregated, research on different themes is carried out to develop the methodology to determine an integrated warehouse performance measurement (the main contribution of this dissertation, and its second outcome). The literature offers some papers treating the performance aggregation subject, and also discussing the statistical tools adequate for aggregating indicators and the way this should be done. To simplify the interpretation of the integrated warehouse performance, it is also necessary to develop a reference scale allowing the evaluation of performance results. These three themes (performance aggregation, statistical tools and scale generation) together structure the proposed methodology, generating knowledge in this area.

The methodology is applied to a theoretical case. We define a standard warehouse, which contains the main processes/activities usually found in real warehouses. The performance indicators used for warehouse management are defined based on the literature review findings, and an analytical model of indicator and data equations is generated (third outcome).

To apply the mathematical tools and to find indicator relationships, a historical time series of indicators is necessary. For that, data are generated representing the warehouse dynamics, with indicator results changing monthly. From these data, two different analyses are performed to propose an integrated performance model (fifth outcome). The first analysis utilizes the analytical model and the generated data to verify indicator relationships from the Jacobian matrix result. From this development we obtain the fourth outcome, a method to determine the indicator relationships analytically. The second analysis is the application of statistical tools to aggregate indicators into components. Both results are analyzed carefully to determine the indicators which will be part of the integrated performance model.
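A minimal sketch of these two analyses, assuming a toy model with two indicators and three data variables (the real model of Chapters 5 and 6 involves 40 indicators and many more data equations):

```python
import numpy as np
import sympy as sp
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# --- Analytical side: Jacobian of indicator equations w.r.t. the data ---
# Toy data variables: labor hours (h), order lines picked (l), picking cost (c)
h, l, c = sp.symbols('h l c', positive=True)
indicators = sp.Matrix([l / h,   # picking productivity (lines per labor hour)
                        c / l])  # picking cost per order line
J = indicators.jacobian(sp.Matrix([h, l, c]))
# A nonzero entry J[i, j] means indicator i depends on data variable j;
# indicators sharing a nonzero column are therefore analytically related.
print(J)

# --- Statistical side: PCA on a monthly time series of indicator values ---
rng = np.random.default_rng(0)
series = rng.normal(size=(36, 5))   # 36 months x 5 indicators (synthetic)
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(series))
print(scores.shape)                 # (36, 2): two aggregated components per month
```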

Finally, a scale is developed for the proposed integrated model. This scale is the result of an optimization model based on the analytical equations of indicators and data, as well as on the data generated to implement the methodology. As this kind of analysis is quite new for designing scales, the last outcome is the optimization model to define the integrated indicator scale. The conclusion of this dissertation, with all its developments, returns to the literature as new knowledge to be used by academics and practitioners.
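A hedged sketch of the scale idea (the actual objective function and constraints come from the complete analytical model in Chapter 7 and Appendix H; the single toy indicator and the variable bounds below are invented for illustration): the end points of the scale are obtained by maximizing and minimizing the aggregated expression over the admissible ranges of the underlying data.

```python
import numpy as np
from scipy.optimize import minimize

# Toy aggregated indicator built from two data variables (figures invented):
# x[0] = order lines picked per month, x[1] = picking labor hours per month
def integrated(x):
    productivity = x[0] / x[1]   # order lines per labor hour
    return productivity / 60.0   # arbitrary rescaling for the sketch

bounds = [(20000.0, 80000.0),    # admissible monthly order lines
          (1000.0, 3000.0)]      # admissible monthly labor hours
x0 = np.array([50000.0, 2000.0])

# Upper end of the scale: best performance reachable within the bounds
best = minimize(lambda x: -integrated(x), x0, bounds=bounds)
# Lower end of the scale: worst performance within the same bounds
worst = minimize(integrated, x0, bounds=bounds)

print(f"integrated indicator scale: [{integrated(worst.x):.2f}, "
      f"{integrated(best.x):.2f}]")
```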

1.5 Research Delimitations


The delimitations of this research and its results are divided into four groups: methodology delimitations, theoretical case, indicators and scale.

For the proposed methodology, there are three main delimitations.

First, the research boundaries are characterized by the performance analysis of an individual warehouse. This consists of the evaluation of one warehouse over a time period, measuring its own performance periodically. Thus, this dissertation does not encompass benchmarking and comparisons among warehouses.

Second, the indicator set used in the standard warehouse (theoretical case) is taken from the literature. In a real case, warehouses derive indicators from company goals. As several frameworks have been developed to help managers with the choice of indicators (e.g., Franceschini et al. (2008)), this dissertation does not address this subject. Thus, to apply the methodology, it is considered that the selected indicators are the ones defined by the company.

Finally, the methodology is developed for operational performance measurement, and the results depend on warehouse characteristics and indicators. Even if there is no limitation to using the methodology for indicators of higher levels or for warehouses with different characteristics, the method has not been tested/verified on different applications in this work.

The theoretical case study provides results that, initially, cannot be generalized. The numerical results obtained in the integrated performance model are limited by the considerations made in data generation. However, regarding the analytical model, it is possible to adapt it to similar warehouse situations, as long as the characteristics and operations presented in the standard warehouse are the same.

Regarding the metric set definition and indicator relationships, the delimitations are as follows:

• The non-linear relationships among indicators are not measured in this work;

• Indicators for human resources performance measurement are not included in the metric set. Only the indicators that relate persons to operations (e.g., productivity indicators) are used;

• Indicators related to sustainable practices and reverse logistics activities are also not considered in the performance metric system.

Lastly, the numerical result of the developed scale cannot be used in other warehouses, since its creation requires defining low and high limits for data and indicators according to the warehouse conditions. In this dissertation, some limits are defined based on the restrictions proposed by the standard warehouse, such as the maximum and minimum number of products processed by the warehouse per month, whereas other limits are determined from indicator time series. However, the methodology to create the scale remains a contribution of this work, since it can be used once the analytical model and the limits are adapted.

1.6 Thesis Structure


Following this first chapter, which has presented the work proposition with its complexity and delimitations, the next chapters are structured as follows.

Chapter 2 introduces the literature on warehouse performance measurement. A structured method is used to classify papers and to obtain the main characteristics of the literature concerning this subject. Furthermore, the indicators used for warehouse performance assessment are collected from the papers and classified according to their dimensions of measure.

Chapter 3 provides a literature review on integrated performance measurement and indicator relations. The main focus is on papers using mathematical tools to assess global performance. These mathematical tools are classified and detailed, providing a discussion about their usefulness as well as their application restrictions.

Chapter 4 presents the methodology to determine the integrated performance measurement, detailing the steps to follow to achieve it.

Chapter 5 describes the standard warehouse for which the performance measurement is assessed. The warehouse activities, layout and the units of measure of the indicators are detailed to allow the development of indicator equations. Furthermore, we develop an analytical model of indicator and data equations.

Chapter 6 uses mathematical methods to find indicator relationships. For that, we generate a database for the theoretical warehouse, which is used for illustration purposes. After the database generation, the relationships among indicators are calculated using the Jacobian matrix, the correlation matrix and Principal Component Analysis.

Chapter 7 analyzes the results of the mathematical tools application and proposes an integrated performance model. Also, a scale to evaluate the results of the integrated performance model is developed and tested for two different warehouse performance situations.

Chapter 8 presents the conclusions drawn from the work results, highlighting the main contributions and future research directions.


Chapter 2
Literature Review on Warehouse Performance

Start by doing what is necessary, then do what is possible, and you will achieve the impossible without noticing it.

François d'Assise

Contents

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2 Research methodology and delimitations . . . . . . . . . . . . . . 14
2.3 Results of Content Analysis . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.1 Based on geographical and journal representation . . . . . . . . . . . . . 17
2.3.2 Based on the work methodology . . . . . . . . . . . . . . . . . . . . . . . 18
2.3.3 Application area of works . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3.4 Warehouse activities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.5 Warehouse Management tools . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4 Direct Warehouse Performance Indicators . . . . . . . . . . . . . 23
2.4.1 Time related performance indicators . . . . . . . . . . . . . . . . . . . . . 24
2.4.2 Quality related performance indicators . . . . . . . . . . . . . . . . . . . 24
2.4.3 Cost related performance indicators . . . . . . . . . . . . . . . . . . . . . 25
2.4.4 Productivity related performance indicators . . . . . . . . . . . . . . . . 26
2.5 Indirect Warehouse Performance Indicators . . . . . . . . . . . 26
2.6 Classification of the Warehouse Performance Indicators . . 30
2.6.1 Specific and Transversal Indicators . . . . . . . . . . . . . . . . . . . . . 33
2.6.2 Resource Related Indicators . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

Abstract
This chapter carries out a deep literature review on warehouse performance. We perform a descriptive analysis of the selected articles using the content analysis method. The performance indicators extracted from these papers are initially divided into direct and indirect indicators. The indirect indicators are related to broader concepts, and there is no single, simple equation to express them. The direct indicators are measured by simple equations such as ratios, and are further classified according to the dimensions of time, quality, cost and productivity. In order to clarify the boundaries of the direct indicators, we provide a framework positioning the measures according to the activity and dimension classification. Some conclusions drawn from this structured literature review are also presented.

2.1 Introduction
As the main objective of this dissertation is to study warehouse performance in an integrated way, a deep literature review is performed in this chapter to identify the main developments made by researchers as well as the research gaps on this subject. Furthermore, this review synthesizes past works to recognize which kinds of measures are mostly used in warehouse performance management. Due to the different kinds of indicators found in the literature, some classifications are performed to organize them according to what they measure (e.g. the performance of a specific activity) and how they do it (the mathematical tool used to calculate the performance).

In this work, we refer to warehouse performance management as a short-term analysis of the warehouse performance, usually done at short and regular time intervals (such as months). These periodic results are used by managers to verify the evolution of the performance over time and to take actions that lead to better results. We refer to performance analysis as the measurement and comparison of actual levels of achievement with specific objectives, measuring the efficiency and the outcome of the corporation (Lu and Yang, 2010).

In the following discussion, the terms metric, performance measure and performance indicator are used as synonyms, as commonly done in the literature (Franceschini et al., 2006).

The existing reviews on warehouse subjects address technical issues such as storage capacity and assignment policies (Cormier and Gunn, 1992; Gu et al., 2007), order picking problems (Cormier and Gunn, 1992; De Koster et al., 2007; Gu et al., 2007, 2010), routing problems (De Koster et al., 2007; Gu et al., 2007), and layout design (Cormier and Gunn, 1992; De Koster et al., 2007; Gu et al., 2010). Only the work of Gu et al. (2010) addresses the performance subject, but it does so in the sense of long-term decision making.

The next sections present the methodology used for selecting and analyzing the papers, together with the results of the content analysis.

2.2 Research methodology and delimitations


The process of collecting and selecting the papers is described in Figure 2.1. In the Initial Search phase, we defined a list of relevant keywords for the database search, as shown in the three parts of Table 2.1. The first part of Table 2.1 lists the databases searched and the second part shows the main keywords utilized. The third part of Table 2.1 presents all 24 possibilities of keyword combinations tested in all databases. The initial search did not limit publication year or document type; the only restriction was to results published in English. This initial search resulted in 1500 articles, of which 1090 were from journals and 410 from conferences, magazines and reports. We focus on journal publications, keeping only this type of paper.

Analyzing the articles' publication years, we found that the first publication about warehouse performance appeared in the 1970s with the work of Lynagh (1971), but relevant papers available in databases up to 1990 are really rare; we can cite just the works of Khan (1984) and Svoronos and Zipkin (1988) as examples. To be sure that the literature review contains the majority of articles during a range of years, this study was restricted

Figure 2.1: Bibliography research scheme.

Table 2.1: Databases and keywords used for the paper search.

Databases: Scopus (scopus.com); Emerald (emeraldinsight.com); EBSCO (ebscohost.com); Wiley (onlinelibrary.wiley.com); Science Direct (sciencedirect.com); Web of Science (webofknowledge.com); Compendex (engineeringvillage2.com).

Keywords: Warehouse / Distribution Center; Facility Logistics / Logistics Platforms; Performance / Efficiency; Evaluation / Measurement / Assessment; Logistics / Logistics audit; Operation Management; Metrics / index / KPI.

Keyword combinations:
Performance Measur* / Assessment & warehouse / distribution center / logistics platform
Performance Measur* / Assessment & warehous* / DC & logist*
Performance Evaluation & distribution center / logistics platform
warehouse / distribution center / logistics platforms & performance
warehouse operations management
warehouse / distribution center & logistics index
warehouse efficiency & measur*
performance & metric & warehouse
warehouse overall performance
warehouse management & logistics
warehouse & logistics KPI
logistics performance

to publications from 1991 up to 2012. This range of years offers sufficient support to draw conclusions from the results of the descriptive analysis regarding their representativeness.
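As a side note, the expansion of the slash-separated alternatives in Table 2.1 into individual database queries can be sketched as follows. The groups below reproduce only the first combination line, and the assumption that the slash denotes alternatives joined by "&" is ours:

from itertools import product

# First keyword-combination line of Table 2.1: each group lists alternatives,
# and the cartesian product yields the individual queries to run.
groups = [
    ["Performance Measur*", "Performance Assessment"],
    ["warehouse", "distribution center", "logistics platform"],
]
queries = [" AND ".join(terms) for terms in product(*groups)]
for q in queries:
    print(q)   # 2 x 3 = 6 queries for this line alone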

Following the steps presented in Figure 2.1, in the third phase the journal articles are filtered by requiring that their titles contain the following keywords: (i) warehouse or similar terms (distribution center, facility logistics, logistics platform, cross-docking); (ii) the words performance, management or evaluation together with a warehouse area/activity; (iii) logistics management and logistics performance measurement. During this selection, review papers in the warehouse area are also considered. At this stage, the database is narrowed down to 461 papers.

Finally, the abstract of each article is analyzed. In this phase, the papers are filtered according to their relationship to warehouse performance. In case of doubt about a paper's content, the full text was also verified. Note that the final database (43 articles) does not include works that are directly related to:



• Economic analysis of warehouse construction and/or investment;

• WMS (Warehouse Management System) evaluation (technical features) and implementation;

• Warehouse design;

• Warehouse location;

• Supply chain optimization (two or three echelons);

• Storage and picking policies evaluation;

• Distribution optimization.

The justification for not including the subjects cited above is that they treat strategic and tactical decision making (e.g. warehouse location, design) rather than operational performance management (e.g. unloading time, labor productivity), which is the main focus of our literature review.

Only the works using decision making for operational warehouse management are taken into account. As decision support tools are considered means to manage the performance, the articles presenting decision support systems (DSS) to help warehouse managers' decisions (Lam et al., 2011; Lao et al., 2011, 2012) and the articles treating the system influence on enterprise performance (Autry et al., 2005; Karagiannaki et al., 2011) were included in this review as well.

The final database is used to carry out two different analyses, as shown in Figure 2.2. First, we provide a descriptive analysis of all 43 papers in Section 2.3, that is, a quantitative evaluation of the general characteristics of the articles. The second analysis, presented in Sections 2.4 and 2.5, focuses on the performance indicators used in warehouses. In the final database, only 35 articles present performance indicators. Among these 35 papers, 32 articles discuss performance indicators which can be expressed by simple equations and are thus measured directly. We qualify them as direct indicators and address them in Section 2.4. There are 16 articles among the 35 that assess performance indicators in an indirect way. This means that these indicators represent more complex concepts which are difficult to measure by simple expressions like ratios; therefore, more sophisticated statistical tools (e.g. regression analysis) are used to assess them. These performance indicators are named indirect indicators, and an analysis of them is provided in Section 2.5.

The papers of the final database are explored based on the content analysis research method. Content analysis is an observational research method that is used to systematically evaluate the literature in terms of various categories, transforming original texts into analyzable representations (Pokharel and Mutha, 2009; Krippendorff, 2004).

Content analysis can be carried out in two steps: the definition of the variables analyzed, and their unitization. The definition of the variables depends on the research objectives. In this dissertation, the variables extracted from the papers are: work methodology, mathematical tools utilized, warehouse activities, and indicators used to assess performance. The second step to be performed is the unitization. Krippendorff (2004) defines unitizing as the systematic distinguishing of segments of text that are of interest to an analysis. That is,

Figure 2.2: Analyses carried out in this chapter.

in the final paper database we look for the variables, and when they are not explicit in the text, some predefined rules are used to classify the information acquired from the text. In order to maintain consistency in this procedure and to avoid biases, this step is conducted by the author of this thesis (a procedure usually adopted when performing content analysis, according to Krippendorff (2004)). This principal reader has filled in the variables as presented in each study on a spreadsheet. This master listing of findings is then analyzed by the people involved in this research.
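The tallying of the coded spreadsheet can be pictured with a small sketch; the coded entries below are invented placeholders, not actual rows from the master listing:

from collections import Counter

# Each paper becomes one row of coded variables (values here are invented).
coded_papers = [
    {"methodology": "survey/mathematical", "tool": "regression analysis", "activity": "picking"},
    {"methodology": "case study/mathematical", "tool": "simulation", "activity": "receiving"},
    {"methodology": "survey/mathematical", "tool": "DEA", "activity": "all"},
]

# Tallying each variable across the database yields the kind of counts
# reported in the tables of Section 2.3.
for variable in ("methodology", "tool", "activity"):
    print(variable, dict(Counter(p[variable] for p in coded_papers)))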

The results of the spreadsheet analysis are given in the next sections.

2.3 Results of Content Analysis


This section shows the content analysis by using tables which present some quantitative outcomes resulting from the papers' classification. They present patterns identified from the data, allowing us to categorize the warehouse performance literature. More specifically, Section 2.3.1 shows the number of publications per continent and per journal, Section 2.3.2 introduces the paper methodologies, Section 2.3.3 shows their classification by application areas, Section 2.3.4 presents the warehouse activities most studied in the works, and Section 2.3.5 summarizes the tools developed for helping managers in warehouse management.

2.3.1 Based on geographical and journal representation


Figure 2.3 shows one of the results from the article analysis: the number of publications over the years per continent. We note that the sum of the number of publications per continent/year can be more than the total curve value because some papers are co-authored by people from different continents and are counted more than once. From Figure 2.3, several inferences can be made. First, it is apparent that research on warehouse performance has increased in recent years, demonstrating the subject's relevance. Second, the representation of European papers has also increased substantially in recent years. The main European publishing countries are The Netherlands, Greece and Italy, with four, three and three publications, respectively. America, on the other hand, maintains almost the same number of publications over the years, with the United States being the country with the most publications (16 papers) of all continents. Third, the number of papers produced through international cooperation amounts to 10 publications, almost one fourth of our database. Europe is the continent with the highest international co-authoring (7 papers), followed by America (6 papers).

Figure 2.3: Number of publications over the years, per continent.

Table 2.2: Journal publications (of 43 total papers).

Journal                                           NP     %
European Journal of Operational Research           5  11.6
Journal of Business Logistics                      3   7.0
Journal of Manufacturing Technology Management     3   7.0
International Journal of Production Research       2   4.7
International Journal of Production Economics      2   4.7
TOTAL                                             15  35.0
NP: number of publications.

In response to the question of where warehouse performance management is most addressed, Table 2.2 lists the journals that publish most in the area. The results show that publications are very widespread, since the journals with one publication represent more than 60% of the selected articles; we can therefore conclude that this area is very interdisciplinary. The European Journal of Operational Research has the highest concentration, with five articles. It is interesting to highlight that four among these five publications are literature reviews, showing the general interest in this subject area.

2.3.2 Based on the work methodology


Other data points acquired by the descriptive analysis capture the articles' methodology. The articles are classified based on five research methodologies (see Seuring and Muller (2008)): mathematical, conceptual, case study, survey, and review papers. A paper is classified as a quantitative/mathematical work if simple tools (e.g. mean, percentage, standard deviation) or more sophisticated tools (e.g. linear regression, analytical models, simulation) are used. To be classified as conceptual, the work needs to be presented as a theoretical concept, with no practical application or results implemented in practice. A case study is a work that develops a theory and verifies the results in practice, or a paper solving some specific problem verified in practice. A survey is a research paper that uses a questionnaire to draw conclusions about a subject. Each paper can be classified under more than one methodology, depending on its characteristics. The exception is the review papers, which were kept separate because of their relevance. The results of this classification are given in Table 2.3.



Table 2.3: Work methodology (total: 43 articles).

Methodology                  NP     %   Articles
Survey + Mathematical        17  39.5   Kiefer and Novack (1999); Ellinger et al. (2003); Autry et al. (2005);
                                        De Koster and Warffemius (2005); Voss et al. (2005); Sohn et al. (2007);
                                        De Koster and Balk (2008); Park (2008); O'Neill et al. (2008);
                                        Menachof et al. (2009); Forslund and Jonsson (2010); Lu and Yang (2010);
                                        De Marco and Giulio (2011); Johnson and McGinnis (2011);
                                        Markovits-Somogyi et al. (2011); Banaszewska et al. (2012);
                                        Yang and Chen (2012)
Case Study + Mathematical    13  30.2   Wu and Hou (2009); Manikas and Terry (2010); Matopoulos and
                                        Bourlakis (2010); Wang et al. (2010); Johnson et al. (2010);
                                        Cagliano et al. (2011); Lam et al. (2011); Goomas et al. (2011);
                                        Karagiannaki et al. (2011); Lao et al. (2011); Sellitto et al. (2011);
                                        Lao et al. (2012); Ramaa et al. (2012)
Review                        5  11.6   Cormier and Gunn (1992); van den Berg and Zijm (1999);
                                        De Koster et al. (2007); Gu et al. (2007, 2010)
Case Study                    3   7.0   Spencer (1993); Gunasekaran et al. (1999); Gallmann and Belvedere (2011)
Conceptual                    3   7.0   Mentzer and Konrad (1991); Rimiene (2008); Bisenieks and Ozols (2010)
Conceptual + Mathematical     2   4.7   Yang (2000); Saetta et al. (2012)
TOTAL                        43 100.0
NP: number of publications.

The quantitative works represent 74.4% of the total papers (i.e. survey/mathematical (39.5%), case study/mathematical (30.2%) and conceptual/mathematical (4.7%)). Due to their significance, we detail the quantitative works according to the type of method used (see Table 2.4). The basic statistics are further detailed as ANOVA and F test; p value and σ; and others. We note that some papers use more than one mathematical tool. In such papers, most of the time, the basic statistics are combined with other tools. For example, factor analysis or regression analysis is combined with basic statistics to describe relations among warehouse activities (10 out of 32 papers). Another example is the use of statistics to compare simulation results. The next subsections present which kinds of industries and warehouse activities were most representative in the database.

2.3.3 Application area of works


To verify the most relevant application areas, we classify the articles based on the position of the application point in the supply chain. Table 2.5 shows three major classes: (1) manufacturing industries (with their respective distribution centers - DC); in this category, the articles are further classified as one industry or several industries depending on whether the application is on a single or on several industries; (2) retailers; and (3) third party logistics. We classify as Other the works which are not related to any industrial activity, like the Air Force (see Sohn et al. (2007)), and as Not Specified those whose application areas are not mentioned. The main area appearing in the papers is the food industry, with a total of 8 works (5 performed in retailer companies and 3 in manufacturing). The results presented in Table 2.5 show that 13 out of the 19 articles related to a manufacturing domain cover several industries.



Table 2.4: Mathematical tools.

Math tool                      NP    %
(1) Basic statistics           20   40
    (1.1) ANOVA and/or F test   8   16
    (1.2) σ, p value            7   14
    (1.3) Others                5   10
(2) Regression analysis         6   12
(3) Factor analysis             5   10
(4) DEA                         5   10
(5) Analytical model            4    8
(6) Simulation                  4    8
(7) Others                      6   12
Total                          50  100
NP: number of publications; σ: standard deviation; DEA: Data Envelopment Analysis.

Table 2.5: Publication areas.

Area                           NP     %
(1) Manufacturer and its DC    19  44.2
    (1.1) One industry          6
    (1.2) Several industries   13
(2) Retailers                   9  20.9
(3) Third Party Logistics       6  14.0
(4) Other                       1   2.3
(5) Not Specified               8  18.6
Total                          43 100.0
NP: number of publications.

This is not very surprising when we cross-check with Table 2.3: there are many survey papers providing performance comparisons among enterprises, and such articles analyze different industry segments at the same time.

We have also analyzed the kind of facility studied in the selected articles (warehouse or distribution center (DC)), but it is difficult to provide reliable statistics on this subject. Even though Manikas and Terry (2010) highlight that main differences exist between these two, defining "a DC as a warehouse that emphasizes the rapid movement of goods", the same authors also state that "a distribution center could be similar to a warehouse in terms of layout and operations management". In fact, in the related literature the terms DC and warehouse are often used as synonyms (van den Berg and Zijm, 1999; Dotoli et al., 2009). Therefore, in this work, we consider all indicators and management practices realized in warehouses and distribution centers as equivalent.

2.3.4 Warehouse activities


Warehouses can have different activities according to product specifications, customer requirements and the service levels offered. For De Koster and Warffemius (2005), the complexity of the warehouse activities depends mainly on: (i) the number and variety of items to be handled; (ii) the amount of daily workload to be done; and (iii) the number, the nature and the variety of processes necessary to fulfill the needs and demands of the customers and suppliers.

Even though differences may exist among warehouse activities, they were defined as: receiving, storage, order picking and shipping (van den Berg and Zijm, 1999). In what follows we will use this generic classification. Some studies related to warehouse performance also mention the delivery process (5 articles are identified). In some cases, the delivery can be considered a warehouse responsibility in the metrics sense. This is why delivery is also considered a warehouse activity in our analysis.

However, we did not include other warehouse activities such as replenishment (the transfer of products from the reserve storage to the picking area (Manikas and Terry, 2010)) and sorting (if the picking is performed in batches, the products can be sorted before packing) in this analysis, because the database papers do not present performance indicators for these activities.

As each of these five activities can be divided into several sub-activities, we consider the following definitions and boundaries in our analysis:

• Receiving: operations that involve the assignment of trucks to docks, and the scheduling and execution of unloading activities (Gu et al., 2007);

• Storing: the movement of material from the unloading area to its designated place in inventory (Yang and Chen, 2012; Mentzer and Konrad, 1991);

• Order Picking: the process of obtaining the right amount of the right products for a set of customer orders (De Koster et al., 2007). This is the main and the most labor-intensive activity of warehouses (Dotoli et al., 2009);

• Shipping: the execution of packing and truck loading after picking, also involving the assignment of trucks to docks (Gu et al., 2007);

• Delivery: the transit time for transportation from the warehouse to the customer.

Based on the above warehouse activities, the selected articles are analyzed and classified as in Table 2.6. This table helps identify the major research areas by warehouse activity.

A major observation we make from Table 2.6 is that almost 40% of the articles consider all major activities of the warehouse at the same time (rows 1 and 2 of Table 2.6). The articles mentioned in the second row (except Mentzer and Konrad (1991)) are on employee performance. According to van den Berg and Zijm (1999) and Mentzer and Konrad (1991), labor tasks impact all warehouse activities; therefore, we choose to classify these papers as impacting all activities.

Another interesting insight is the fact that the majority of the articles include the picking activity in their studies. This is quite consistent with industrial observations and shows a certain maturity in the works undertaken. The order picking process is the most costly among all warehouse activities, because it tends to be either very labor intensive (manual picking) or very capital intensive (automatic picking). More than 60% of all operating costs in a typical warehouse can be attributed to order picking (van den Berg and Zijm, 1999; Gu et al., 2007; Manikas and Terry, 2010).

In the final database, we find some works which explore warehouse management systems for decision aid and performance management. As these warehouse management tools are important supports for performance evaluation, we give a descriptive analysis of them in the following subsection.

2.3.5 Warehouse Management tools


The early works on warehouse management first focused on examining the processes and identifying areas where efficient management could improve the performance of the warehouse. For example, Spencer (1993) presents a method based on V-A-T analysis and the Theory of Constraints (TOC) to identify such critical process points; Gunasekaran et al. (1999) study the problems in the Goods Inwards (GI) area and provide solutions to increase the performance of warehousing operations using Just in Time (JIT) and Total Quality Management (TQM).

Table 2.6: Warehouse activities studied.

Each row marks with an X the combination of activities (among receiving, storage, picking, shipping and delivery) studied by the listed works.

Activities     NP     %   Articles
X X X X        12  27.9   Cormier and Gunn (1992); van den Berg and Zijm (1999); Gunasekaran et al.
                          (1999); Kiefer and Novack (1999); Gu et al. (2007); Rimiene (2008);
                          Karagiannaki et al. (2011); Cagliano et al. (2011); Gallmann and Belvedere
                          (2011); Lao et al. (2012); Yang and Chen (2012); Ramaa et al. (2012)
X X X X X       5  11.6   Mentzer and Konrad (1991); Ellinger et al. (2003); Wu and Hou (2009);
                          Lu and Yang (2010); Sellitto et al. (2011)
X X             5  11.6   Spencer (1993); Autry et al. (2005); De Koster and Balk (2008);
                          Johnson et al. (2010); Johnson and McGinnis (2011)
X X X           3   7.0   De Koster and Warffemius (2005); O'Neill et al. (2008); Saetta et al. (2012)
X               3   7.0   De Koster et al. (2007); Lam et al. (2011); Goomas et al. (2011)
X X             2   4.7   Bisenieks and Ozols (2010); Gu et al. (2010)
X X X           2   4.7   Manikas and Terry (2010); Wang et al. (2010)
X X X           2   4.7   Menachof et al. (2009); De Marco and Giulio (2011)
X               2   4.7   Sohn et al. (2007); Park (2008)
X X X           1   2.3   Matopoulos and Bourlakis (2010)
X X             1   2.3   Voss et al. (2005)
X               1   2.3   Forslund and Jonsson (2010)
X               1   2.3   Markovits-Somogyi et al. (2011)
X X X           1   2.3   Banaszewska et al. (2012)
X X             1   2.3   Yang (2000)
X X             1   2.3   Lao et al. (2011)
Total          43 100.0
NP: number of publications.

These early techniques do not necessarily need extensive Information Technology (IT) tools.

In the last decade, however, we observe an increasing complexity in warehouse operations. This complexity is very well demonstrated by the implementation of sophisticated IT tools in warehouses and DCs. Since 2000, more complicated algorithms and simulations have also started to appear in publications on warehouse management. These articles follow the same trend and propose the utilization or development of decision support systems for performance evaluation and performance improvement in warehouses. Information systems, such as the Warehouse Management System (WMS), are recognized as useful means to manage resources in the warehouse (Lam et al., 2011). Wang et al. (2010) propose a Digital Warehouse Management System (DWMS) based on RFID (Radio Frequency Identification) to help managers achieve better inventory control, as well as to improve operational efficiency. Cagliano et al. (2011) model the warehouse processes using System Dynamics and develop a dynamic decision support tool to assign employees to counting tasks. Lam et al. (2011) develop a Decision Support System (DSS) to facilitate warehouse order fulfillment: when there is an incoming customer order, previous similar cases are retrieved as a reference solution for the new incoming order. Lao et al. (2011) develop a real-time inbound decision support system with three modules, which integrates RFID technology, Case-Based Reasoning (CBR) and Fuzzy Reasoning (FR) techniques to help monitor food quality activities. Lao et al. (2012) propose an RFID-based system to facilitate the food safety control activities in the receiving area, by generating a proper safety plan.

To evaluate the technology investment, Autry et al. (2005) design a method to determine whether the investment in WMS-oriented operations results in desirable performance outcomes for warehouses or not. More recently, Yang and Chen (2012) and Ramaa et al. (2012) study the impact of information systems on warehouse performance. These studies conclude that the introduction of new technologies like RFID and WMS permits the integration of decision support tools in warehouse management and improves managers' decisions.

In the next section, we present further analysis of the selected articles, this time focusing more specifically on the indicators used to assess warehouse performance.

2.4 Direct Warehouse Performance Indicators


The traditional logistics performance measures include hard and soft metrics. The first treat quantitative measures such as order cycle time, fill rates and costs; the second deal with qualitative measures like managers' perceptions of customer satisfaction and loyalty (Chow et al., 1994; Fugate et al., 2010). The hard metrics are computable with some simple mathematical expressions, while the soft metrics require more sophisticated measurement tools (e.g. regression analysis, fuzzy logic, Data Envelopment Analysis). This work will refer to the hard metrics as direct indicators and the soft ones as indirect indicators. The first group is presented in this section, and the second one is described in Section 2.5.
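To make the distinction concrete, a minimal sketch of a few direct indicators computed as simple ratios; all input figures are hypothetical monthly values:

# Hypothetical monthly figures for one warehouse.
orders_delivered_on_time = 940
orders_delivered = 1000
order_lines_shipped = 4850
order_lines_requested = 5000
units_processed = 120_000
labor_hours = 4_000

# Quality dimension
on_time_delivery = orders_delivered_on_time / orders_delivered
order_fill_rate = order_lines_shipped / order_lines_requested
# Productivity dimension
labor_productivity = units_processed / labor_hours   # units per labor hour

print(f"on-time delivery:   {on_time_delivery:.1%}")
print(f"order fill rate:    {order_fill_rate:.1%}")
print(f"labor productivity: {labor_productivity:.0f} units/hour")

An indirect indicator such as "warehouse capability" admits no such one-line ratio, which is precisely why the more sophisticated tools of Section 2.5 are needed.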

For the purpose of the analysis, all direct indicators are extracted from the papers and classified according to four performance evaluation dimensions commonly used in industry. These are: time (Mentzer and Konrad, 1991; Spencer, 1993; Neely et al., 1995; Frazelle, 2001; Chan and Qi, 2003; Gunasekaran and Kobu, 2007; Gallmann and Belvedere, 2011), quality (Neely et al., 1995; Stainer, 1997; Frazelle, 2001; Gallmann and Belvedere, 2011), cost (Neely et al., 1995; Mentzer and Konrad, 1991; Beamon, 1999; Chan and Qi, 2003; Cai et al., 2009; Keebler and Plank, 2009), and productivity (Stainer, 1997; Frazelle, 2001; Chan and Qi, 2003; Keebler and Plank, 2009; Gallmann and Belvedere, 2011). We note that some works prefer to use flexibility instead of productivity as the fourth dimension (Neely et al., 1995; Stainer, 1997; Beamon, 1999; Gunasekaran and Kobu, 2007), defining it as the ability to respond to a changing environment (Beamon, 1999). However, Gunasekaran and Kobu (2007) state that flexibility may be intangible and difficult to measure in some cases. We show in Section 2.5 that flexibility is preferably measured indirectly rather than directly. Consequently, in this section productivity is used as a dimension for direct warehouse performance indicators.

The following procedure is used for the classification. Initially, all the direct indicators found in the selected papers are listed. Once the list is completed, two types of aggregation are made: (i) similar indicators are regrouped; (ii) very specific metrics are included in more generic ones. One example of this second group is the work by Manikas and Terry (2010), which mentions the indicator "time of quality control in receiving". This can be considered as a portion of the receiving operation time, so we include this indicator in the class of indicators called "receiving operation time". Finally, the indicators are organized according to what they measure (time, quality, cost or productivity). We note that, for the sake of uniformity throughout this work, the classifications presented here are based on our interpretation, instead of the original category proposed by the selected papers. For example, Banaszewska et al. (2012) consider the "number of consignments processed per warehouse employee" as a productivity indicator. Indeed, the measure is a productivity indicator; in this review we propose a sub-category called "labor productivity", and Banaszewska et al. (2012) appears in it (see Table 2.10). Another example is the article by Saetta et al. (2012), where the authors measure customer satisfaction as the percentage of orders on time; we classify the article under a broader indicator, which is "on-time delivery" (see Table 2.8). The classifications resulting from this analysis are given in Tables 2.7, 2.8, 2.9 and 2.10. We present a discussion of each class in the following sections.

2.4.1 Time related performance indicators


Table 2.7 shows the results for time-related indicators. The most used metrics are order lead time, receiving operation time and order picking time, respectively. Surprisingly, order picking time is in the third position, even though Gu et al. (2007) state that past research has focused strongly on order picking, since this activity has a large impact on warehouse performance. One reason could be that, in the literature, order picking time is more specifically treated in optimization works, which are not considered in this review.

Analyzing the time spent by a product in the warehouse through all activities, the indicators found in Table 2.7 encompass almost all time components (receiving, putaway, picking, shipping and delivery). The exceptions are the replenishment and inventory times: no paper uses an indicator like inventory coverage or replenishment time to measure them. Mentzer and Konrad (1991) present indicators covering most of the activities in a descriptive way; however, no measurement is done. Another interesting point is that no author has measured the entire time spent by a product in the warehouse (from receiving up to delivery) using just one indicator.

Regarding the warehouse activities covered by indicators, for the inbound processes there are receiving and putaway times, and for the outbound processes picking, shipping and delivery times. Interestingly, these five indicators could be represented by just two: dock to stock time (for the inbound process) and order lead time (for the outbound process). In the case of order lead time, this indicator also encompasses administrative time beyond the activities presented (picking, shipping and delivery), since its definition expresses, according to Kiefer and Novack (1999), that order lead time starts to be measured at the time the customer places an order.
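A minimal sketch of how these two transversal time indicators could be computed from event timestamps; the timestamps below are hypothetical:

from datetime import datetime

# Hypothetical timestamps for one inbound pallet and one customer order.
truck_arrival   = datetime(2015, 3, 2, 8, 0)
stored_in_rack  = datetime(2015, 3, 2, 11, 30)
order_placed    = datetime(2015, 3, 3, 9, 0)
order_delivered = datetime(2015, 3, 4, 15, 0)

# Dock to stock time spans receiving and putaway (inbound process).
dock_to_stock = stored_in_rack - truck_arrival
# Order lead time runs from order placement to delivery (outbound process),
# so it also absorbs the administrative time noted above.
order_lead_time = order_delivered - order_placed

print(dock_to_stock)     # 3:30:00
print(order_lead_time)   # 1 day, 6:00:00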

2.4.2 Quality related performance indicators


Different from the time dimension, quality embraces measures linked with customer satisfaction (external) and operations quality (internal). Table 2.8 illustrates the indicators used in the selected papers. We observe that the emphases are on on-time delivery, customer satisfaction and order fill rate.

Table 2.7: Warehouse time indicators found in the literature.

Each X marks a time indicator used by the work. The indicators covered are order lead time (the most used, 9 works), receiving time (5) and order picking time (4); delivery lead time, dock to stock time, putaway time, equipment downtime, queuing time and loading time together account for the remaining 12 mentions.

Mentzer and Konrad (1991)         X X X X X X
Kiefer and Novack (1999)          X
Yang (2000)                       X
Gu et al. (2007)                  X X X X
O'Neill et al. (2008)             X
Rimiene (2008)                    X
Menachof et al. (2009)            X
Manikas and Terry (2010)          X
Matopoulos and Bourlakis (2010)   X X
Wang et al. (2010)                X
Cagliano et al. (2011)            X X X
Lam et al. (2011)                 X
Gallmann and Belvedere (2011)     X
Karagiannaki et al. (2011)        X
Lao et al. (2012)                 X
Yang and Chen (2012)              X X
Ramaa et al. (2012)               X X

This result corroborates the statement of Forslund and Jonsson (2010) that the perfect order measures supplier delivery performance in a more comprehensive way, but seems not to be as widely applied as on-time delivery.
The inventory, the physical area of the warehouse in which products remain until they are picked, is also considered an important part of management for achieving high warehouse performance. Gallmann and Belvedere (2011) state that companies regard inventory management as a key to reaching excellent service levels. Although inventory is not an activity, its indicators (represented in Table 2.8 by physical inventory accuracy) were included in this work due to their importance in warehouse management.

2.4.3 Cost related performance indicators


The results for the cost dimension are presented in Table 2.9. It is interesting to note that fewer works are recorded for cost indicators compared to the other dimensions. This could be explained by the affirmation of Gunasekaran and Kobu (2007) that operational-level performance evaluation is mostly based on non-financial indicators, although it always depends on the company's characteristics and choices. Despite their strategic importance in the supply chain, warehouses have most of their activities at the operational level.

Table 2.9 also shows that the majority of the works mentioning cost metrics use the inventory cost indicator. From this, it is apparent that what really interests managers regarding warehouse management costs is the inventory. The inventory is a cost generator by nature: according to Kassali and Idowu (2007), inventory is a business that

Table 2.8: Warehouse quality indicators found in the literature.

Each X marks a quality indicator used by the work. The most used indicators are on-time delivery (10 works), customer satisfaction (8) and order fill rate (7); cargo damage rate, orders shipped on time, shipping accuracy, delivery accuracy, physical inventory accuracy, storage accuracy, picking accuracy, perfect orders, stock-out rate and scrap rate together account for the remaining 24 mentions.

Mentzer and Konrad (1991)         X
Gunasekaran et al. (1999)         X X
Kiefer and Novack (1999)          X X X X X
De Koster and Warffemius (2005)   X
Voss et al. (2005)                X X X X X
De Koster and Balk (2008)         X
Rimiene (2008)                    X X X
Menachof et al. (2009)            X
Forslund and Jonsson (2010)       X
Lu and Yang (2010)                X X
Wang et al. (2010)                X
De Marco and Giulio (2011)        X
Lam et al. (2011)                 X
Gallmann and Belvedere (2011)     X
Johnson and McGinnis (2011)       X
Lao et al. (2011)                 X X X
Banaszewska et al. (2012)         X X
Lao et al. (2012)                 X X X
Saetta et al. (2012)              X X
Yang and Chen (2012)              X X X X X X X
Ramaa et al. (2012)               X X X X X

involves costs and risk. The risks may come from probable product losses (e.g. quality

deterioration) or price uncertainty.

2.4.4 Productivity related performance indicators


Another important dimension for warehouse management is productivity. Productivity can be defined as the level of asset utilization (Frazelle, 2001), or how well resources are combined and used to accomplish specific, desirable results (Neely et al., 1995).

It can be seen from Table 2.10 that labor productivity and throughput are the most employed metrics in warehouses. This result reinforces the fact that these are the main areas where warehouses are pressured for results.

2.5 Indirect Warehouse Performance Indicators


In the past, distribution centers (DCs) primarily served as warehouses with distribution functions. Nowadays, DCs may also host international headquarters, call centers, service centers or even manufacturing facilities (De Koster and Warffemius, 2005). This evolution is the outcome of a need to provide tailored services for the customers and to gain competitive advantage.

Table 2.9: Warehouse cost indicators found in the literature.

Each X marks a cost indicator used by the work. Inventory cost is the most used (7 works); labor cost, order processing cost, cost as a % of sales, maintenance cost and distribution cost together account for the remaining 11 mentions.

Mentzer and Konrad (1991)         X
Kiefer and Novack (1999)          X
Yang (2000)                       X X
Ellinger et al. (2003)            X
Rimiene (2008)                    X X
Johnson et al. (2010)             X
Lu and Yang (2010)                X X X
De Marco and Giulio (2011)        X
Cagliano et al. (2011)            X X
Gallmann and Belvedere (2011)     X
Saetta et al. (2012)              X
Ramaa et al. (2012)               X X

These new services require additional indicators to measure the related performance. Oftentimes, the indicators are complex; either the equations are not available or they are too difficult to calculate. The warehouse capability (Sohn et al., 2007), the supervisory coaching behavior (Ellinger et al., 2003), the relation between front-line employee performance and interdepartmental customer orientation (Voss et al., 2005), etc. are some examples of these indicators. In this dissertation, we call such indicators indirect indicators. Instead of simple and straightforward equations, some structured mathematical tools are needed to calculate the value of these indicators. Normally, these mathematical tools evaluate different kinds of information and extract correlations and/or performances from databases. Some examples of such tools used in the literature are: SEM (Structural Equation Modeling), DEA (Data Envelopment Analysis), regression analysis, and the canonical matrix.
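As an illustration of the regression-analysis route (one of the tools listed above), the sketch below fits a synthetic "flexibility" score to two observable measures. All variable names and data are invented for the example:

import numpy as np

# Synthetic data: an indirect "flexibility" score explained by two observables.
rng = np.random.default_rng(1)
n = 40
urgent_orders_handled = rng.uniform(0.5, 1.0, n)       # share handled on time
volume_variation_absorbed = rng.uniform(0.3, 0.9, n)   # share of peaks absorbed
flexibility_score = (2.0 * urgent_orders_handled
                     + 1.5 * volume_variation_absorbed
                     + rng.normal(0.0, 0.1, n))        # survey-style score

# Ordinary least squares: flexibility ~ b0 + b1*urgent + b2*volume
A = np.column_stack([np.ones(n), urgent_orders_handled, volume_variation_absorbed])
coeffs, *_ = np.linalg.lstsq(A, flexibility_score, rcond=None)
print(np.round(coeffs, 2))   # estimated intercept and weights

The fitted weights play the role of the "equation" that the indirect indicator lacks: they let the concept be scored from measurable inputs.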

The papers presenting indirect indicators are listed in Table 2.11. Next, we give some details on these papers.

• Maintenance: Sohn et al. (2007) performed a survey based on warehouse characteristics in order to assess the capability of each warehouse taking part in the study. Facility management is defined by the authors as: (i) maintenance and repair of warehouse facilities, (ii) cooperation with facilities-related departments, (iii) new construction of modern warehouses, and (iv) full equipment for protecting facilities against fire. As a result of the study, Sohn et al. (2007) conclude that facility management has the second highest impact on warehouse capability, after manpower management.

• Flexibility: we can verify that flexibility measures are usually associated with other performance components such as time, volume and delivery.

Table 2.10: Warehouse productivity indicators found in the literature.

Each X marks a productivity indicator used by the work. Labor productivity and throughput are the most used (11 works each); receiving productivity, shipping productivity, picking productivity, warehouse utilization, inventory space utilization, outbound space utilization, transport utilization and turnover together account for the remaining 28 mentions.

Mentzer and Konrad (1991)         X X X X
Gunasekaran et al. (1999)         X
Kiefer and Novack (1999)          X X X
De Koster and Warffemius (2005)   X
Voss et al. (2005)                X
Gu et al. (2007)                  X
De Koster and Balk (2008)         X X
O'Neill et al. (2008)             X X X
Rimiene (2008)                    X X X X X
Johnson et al. (2010)             X X X X
Manikas and Terry (2010)          X X X
Matopoulos and Bourlakis (2010)   X X X
Wang et al. (2010)                X X
De Marco and Giulio (2011)        X X X
Cagliano et al. (2011)            X
Goomas et al. (2011)              X
Johnson and McGinnis (2011)       X X X
Karagiannaki et al. (2011)        X
Markovits-Somogyi et al. (2011)   X
Banaszewska et al. (2012)         X X X
Yang and Chen (2012)              X
Ramaa et al. (2012)               X X X

For example, Lu and Yang (2010) measure flexibility in terms of operation flexibility, rapid response to customer requests, delivery time flexibility and volume flexibility. Yang and Chen (2012) consider flexibility as urgent order handling, and De Koster and Balk (2008) consider flexibility as the capacity to cope with internal and external changes.

• Labor: the results in Table 2.11 demonstrate the importance of employee performance in warehouses, with numerous articles in the area. Ellinger et al. (2003) integrate the perception of supervisors to examine employee performance (as seen by the supervisors). Voss et al. (2005) show that front-line employee performance and interdepartmental customer orientation have a positive effect on DC services. In their study, the authors consider the following variables to measure employee performance: proper data recording, efficient trailer loading, storing products in proper locations, effective distribution operations, minimal product loss, minimal product damage, high productivity, and high performance. Wu and Hou (2009) propose a model for the analysis of employee performance trends; this model is intended to determine which employees to reward or to train. Goomas et al. (2011) evaluate the order selectors' performance after the implementation of an overhead scoreboard that displays the number of completed tasks, the number of tasks in queue, and the team performance against the engineered labor standards.



Table 2.11: Indirect indicators measured in the papers.

Each X marks an indirect indicator assessed by the work. Labor is the most assessed area (7 works); flexibility, customer perception, VAL (Value Added Logistics) activities, inventory management, warehouse automation and maintenance together account for the remaining 19 mentions.

Kiefer and Novack (1999)          X
Ellinger et al. (2003)            X
Voss et al. (2005)                X
Sohn et al. (2007)                X X X
De Koster and Balk (2008)         X X X X X
Park (2008)                       X
O'Neill et al. (2008)             X
Wu and Hou (2009)                 X
Lu and Yang (2010)                X X X X
Wang et al. (2010)                X
Gallmann and Belvedere (2011)     X
Goomas et al. (2011)              X
Johnson and McGinnis (2011)       X
Banaszewska et al. (2012)         X X X
Yang and Chen (2012)              X X

Park (2008) studies the relationship between store-level performance and the composition of the workforce, expressed in terms of full-time and part-time employees.

• Customer Perception: customer relationship and customer satisfaction are considered the most satisfactory performance variables by managers (Lu and Yang, 2010). Accordingly, Kiefer and Novack (1999) state that understanding the influence of some measures on customers' reactions is far more important than any internal measure alone.

De Koster and Balk (2008) measure customer perception by using DEA. The authors verify the contribution of some activities (like cross-docking, cycle counting and return handling) to the increase of customer perception. Lu and Yang (2010) consider customer response as an attribute of logistics service capabilities; it encompasses pre-sale customer service, post-sale customer service and responsiveness to customers. As a result, the companies that are customer-response-oriented have the best performance among DCs in Taiwan.

• Value Adding Logistics (VAL) Activities: can be measured by the number of VAL activities offered by the company and performed in warehouses. De Koster and Balk (2008) divide VAL activities into low and high levels. The activities adding low value to the product include labeling, adding manuals and kitting, whereas high VAL activities consist of sterilization, final product assembly, product installation, etc.



For Gu et al. (2007), the roles of VAL activities also include: buffering the material flow along the supply chain to accommodate variability caused by factors such as product seasonality and/or batching in production and transportation; and consolidating products from various suppliers for combined delivery to customers. The survey of O'Neill et al. (2008) confirms that VAL activities have become common in warehouses. However, on average only 5 per cent of floor area is dedicated to these activities, indicating that VAL activities are minor in nature.

• Inventory Management: this is an area where automation support for activities has increased, and inventory management and warehouse automation are becoming increasingly intertwined. Wang et al. (2010) propose a digital warehouse management system (DWMS) based on RFID to help managers achieve better inventory control. Yang and Chen (2012) examine the impact of information systems on DC performance; among the results, they find a positive correlation between warehousing and inventory management and emergent order handling. In Sohn et al. (2007), the issues related to inventory management and the accuracy of logistics information (considered in Table 2.11 as warehouse automation) are also discussed.

• Warehouse Automation: De Koster and Balk (2008) measure the degree of warehouse automation according to the level of technology used (use of a computer or WMS is a low level; RFID, barcoding or robots are high levels). Banaszewska et al. (2012) assess information technology in warehouses by the number of available information systems. The impact of the use of warehouse automation on performance has also been addressed: Yang and Chen (2012) conclude that high levels of information systems utilization in the order selection activity should have positive influences on delivery.

2.6 Classification of the Warehouse Performance Indicators


Throughout the classification process of the direct indicators, we have observed that it is neither easy to draw straightforward frontiers for them, nor are the measurements clearly defined. For example, we could see two indicators with different names but measured in the same way. Conversely, some metrics have the same name but are measured differently. Moreover, while in some papers the measurements are explicit, in others only the indicator names are given.

In order to provide well-defined boundaries for the direct warehouse indicators, the results presented previously in this chapter are analyzed using an activity-based framework. The indicators that are classified in Section 2.4 according to the quality, cost, time and productivity dimensions are now also classified in terms of the warehouse activities described in Section 2.3.4. The result of this new classification is illustrated by Table 2.12.

In order to classify the direct indicators with respect to the warehouse activities, we defined three types of direct indicators:

• Specific indicators: defined specifically for one activity.

• Transversal indicators: defined for a process rather than a unique activity; therefore, their boundaries cover a group of activities.

• Resource related indicators: some indicators are related to the resources used in the warehouses. We divide them into two distinct categories: labor and equipment/building.
Table 2.12: Direct indicators classified according to dimension and activity boundaries.

Activity-specific indicators:
Time: receiving time (receiving); putaway time (storing); order picking time (picking); shipping time (shipping); delivery lead time (delivery).
Quality: storage accuracy (storing); physical inventory accuracy and stock-out rate (inventory); picking accuracy (picking); shipping accuracy and orders shipped on time (shipping); delivery accuracy, on-time delivery and cargo damage rate (delivery).
Cost: inventory cost (inventory); distribution cost (delivery).
Productivity: receiving productivity (receiving); inventory space utilization and turnover (inventory); picking productivity (picking); shipping productivity (shipping); transport utilization (delivery).

Process transversal indicators:
Time: dock to stock time (inbound); order lead time (outbound); queuing time (global).
Quality: order fill rate and perfect orders (outbound); customer satisfaction and scrap rate (global).
Cost: order processing cost (outbound); cost as a % of sales (global).
Productivity: outbound space utilization (outbound); throughput (global).

2.6.1 Specific and Transversal Indicators


In Table 2.12 we propose a mapping of both the specific indicators (the upper half of the table) and the transversal indicators (the lower half) over the warehouse activities. The activities are given in the columns. Although inventory is not a warehouse activity, we choose to include it in Table 2.12 due to its importance in warehouse management: Gallmann and Belvedere (2011) state that companies consider inventory management as a key to achieving excellent service levels, and we also observe numerous metrics treating the subject (see Section 2.4). In the rows of Table 2.12, it is possible to observe the previous classification dimensions (time, quality, cost and productivity). Each direct indicator is then placed in the related cell of the table. For example, order picking time is a time indicator which is specific to the picking activity.

The lower half of Table 2.12 illustrates the direct transversal indicators. Chan and Qi (2003) have defined that inbound logistics concern both materials transportation and storage, while outbound logistics involve the outbound warehousing tasks, transportation and distribution. Based on this idea, the inbound process covers both the receiving and storage activities, named Inbound Processes in Table 2.12, while the picking, shipping and delivery activities are regrouped under Outbound Processes. Inventory is considered an internal process in this case, linking inbound to outbound processes. The indicators are then placed according to the extent of their boundaries. For example, the transversal indicator dock to stock time is classified as an inbound indicator encompassing the receiving and storing activities, while order lead time is an outbound indicator covering the picking, shipping and delivery activities. Moreover, there are global transversal indicators that cannot be assigned to specific activities. That is the case, for example, of cost as a % of sales, defined as global to all warehouse activities since its measure represents a sum of warehouse activity efforts. Similarly, the throughput indicator is classified as a global measure inside the warehouse, since it assesses the quantity of products that are processed by the warehouse in items per hour (Voss et al., 2005), not including the delivery.

We note that the boundaries of the indicators as described in Table 2.12 depend on the warehouse production processes. Table 2.12 is created following a make-to-stock environment. A warehouse which operates on a no-storage strategy (e.g. cross-docking) may define the boundaries of the indicators differently. The operating strategies mainly impact the transversal indicators. One example is the order lead time: if a make-to-order system is considered, the customer order would start upstream (in the supply process), not at the picking activity.

Some remarks can be made on Table 2.12 based on the empty fields shown. First of all, it is important to note that the empty cells in Table 2.12 do not mean that there are no indicators to measure the activity/process; it signifies that, in the literature review, no analyzed paper has used an indicator related to that activity/process. In Table 2.12, it can be seen that the receiving and storage activities are less covered than the outbound areas. This shows that the statement of Gu et al. (2007) that research on receiving is limited is still valid. The number of outbound indicators is higher than the number of indicators for the inbound processes. This is not very surprising, as warehouse activities are becoming more and more customer oriented. So, it is possible to conclude that the outbound processes are considered more critical than the inbound ones and hence are subject to more control. The same discussion also holds for the inventory.

2.6.2 Resource Related Indicators


Some indicators are directly related to the resources used in the warehouse. Such indicators impact all warehouse activities. Therefore, instead of presenting them in Table 2.12, we choose to classify them as resource related indicators. There are two major resources: labor and equipment. The facilities are considered in the same group as equipment. The related indicators are given in Table 2.13.

Table 2.13: Indicators categorized according to dimensions and support areas.

Dimension      Labor                Equipment and building
Time           -                    equipment downtime
Quality        -                    -
Cost           labor cost           maintenance cost
Productivity   labor productivity   warehouse utilization

Analyzing Table 2.13, we note some empty cells for the time and quality dimensions. The first empty cell is labor time, which is usually used as data rather than as an indicator: labor time serves to compute several productivity indicators, and thus it is not used in warehouses for performance indicator purposes. The empty cell for quality versus labor is expected, because the quality of work is usually measured for each activity separately (e.g. picking or shipping accuracy; see Table 2.12) rather than in a general way. The quality of equipment is already captured by the equipment downtime indicator.

2.7 Conclusions
Some conclusions can be made from the reported results.

Warehouse performance evaluation has been explored in different ways by researchers. In general, the works vary a lot in terms of the performance area evaluated and the measurement tool used for it. The warehouse area means the evaluation of one or various types of warehouse, with a focus on one or several warehouse activities. The papers' results are usually very specific to one kind of situation. For example, works related to a tobacco industry warehouse (Wang et al., 2010), a DC of fresh products (Manikas and Terry, 2010) or an air force warehouse (Sohn et al., 2007) have used different mathematical tools and indicators to evaluate performance. Other differences lie in the type of warehouse studied (e.g. distribution center (DC), industrial warehouse, warehouse dedicated to cross-docking operations, third-party warehouses), which require specific configurations because of their product particularities and hence demand different tools to solve problems.

According to Section 2.3.2, the majority of the papers in our database perform surveys to treat the warehouse performance subject. This shows a new tendency of studies in two directions: finding relationships among different warehouse performance areas (e.g. the degree of automation influencing warehouse productivity (De Koster and Balk, 2008)); and evaluating concepts not usually expressed as ratios and, therefore, not measured yet (e.g. VAL activities (De Koster and Warffemius, 2005)).



From these papers, it can be concluded that a high degree of automation has a positive impact on delivery accuracy and total cost (incl. depreciation and maintenance). This result was expected; otherwise, it would be more efficient to work with people and low automation. Regarding the use of metrics to manage the information systems (WMS, RFID), we could see that such metrics are not applied in warehouses. The indicators about information systems are usually designed to evaluate systems in the implementation phase (based on time/resource savings). After that, managers generally use the indexes provided by the system to evaluate all other warehouse areas. For VAL activities, the studies evaluate their growing importance in warehouse operations and determine the low-value and high-end activities.

Human resources management in warehouses is an area that has attracted increasing attention in the literature. Several papers in our database treat operational labor performance. Measured directly or indirectly, it is an important area for achieving productivity goals and customer satisfaction. One reason for the importance of this subject is reported by Park (2008), who highlights that front-line distribution center personnel may be responsible for any task in moving products inside the distribution center. Any service failure or inefficient performance directly increases the customer order cycle time and negatively impacts the level of service as perceived by the customers. It can be seen from the papers' application areas that the majority of studies have considered manufacturing companies, which usually employ people to execute the warehouse activities, since automation is a high investment for enterprises that do not have their focus on logistics.

Even if there is a tendency toward "indirect measures", they are not used for daily management since they require a great quantity of data that is sometimes difficult to obtain. So, direct indicators continue to be the basis for warehouse performance measurement.

The direct indicators total 38 measures: 9 related to time, 13 to quality, 6 to cost and 10 to productivity. There are indicators related to one activity/area (e.g. shipping productivity) or to several (e.g. dock-to-stock time, cost as a % of sales). Analyzing the application area of the indicators (i.e. the activity measured by the indicator), we can conclude that half of them are related to outbound activities (i.e. picking, shipping and delivery). This reveals that the outbound processes/activities are considered more critical than the inbound ones and hence are subjected to more control.

An activity-based framework is proposed to help clarify the boundaries of the indicators. In this framework, we classify indicators not only according to the quality, cost, time and productivity dimensions, but also in terms of warehouse activities (receiving, storage, picking, shipping and delivery). The result of this classification shows that the number of outbound indicators is much higher than the number of inbound indicators. This is not very surprising as warehouse activities are getting more and more customer oriented.

An important piece of evidence we can highlight is that the literature about the performance analysis and management of Distribution Center (DC) operations is not as abundant as that on location and cooperation problems (Dotoli et al., 2009). Indeed, we have not found literature reviews focusing specifically on warehouse performance management and its indicators.

The low attention given to the warehouse performance subject leaves several gaps that should be further investigated. A complete list of them is reported in the conclusions, Chapter 8. Regarding this dissertation's goal, to develop a methodology for an integrated performance evaluation, there is no work concerning the aggregation of warehouse performance measures or developing an integrated performance measurement model for warehouses. Only some works evaluating the influence of indicators on warehouse performance (e.g. Voss et al. (2005); De Koster and Balk (2008)) can be reported. The next chapter details these works regarding indicator or process relationships, since this dissertation also measures indicator relations. Additionally, works about performance aggregation are presented to verify the main developments made in this theme and the mathematical tools used to attain this goal.


Chapter 3
Literature on Performance Integration and Tools

No great discovery was ever made without a bold hypothesis.
Isaac Newton

Contents

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2.1 Literature on indicator relationships and indicators aggregation . . . . . 38
3.2.2 Literature on Performance Integration . . . . . . . . . . . . . . . . . . . . 40
3.3 Overview on mathematical tools used for performance integration . . . . . . . . . 43
3.3.1 The choice of the dimension-reduction statistical tool . . . . . . . . . . . 43
3.3.2 Principal Component Analysis - PCA . . . . . . . . . . . . . . . . . . . . 44
3.3.3 Factor Analysis - FA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.3.4 Canonical correlation analysis - CCA . . . . . . . . . . . . . . . . . . . . 50
3.3.5 Structural Equation Modeling - SEM . . . . . . . . . . . . . . . . . . . . 52
3.3.6 Dynamic Factor Analysis - DFA . . . . . . . . . . . . . . . . . . . . . . . 54
3.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

Abstract
We first present the results of the literature review about indicator relationships and
performance integration. The gaps are identified, as well as the mathematical tools
used to associate indicators. The general characteristics of the main techniques are
presented to provide a theoretical basis for the methodology development.

3.1 Introduction
The objective of this chapter is twofold: to describe works related to indicator relationships and/or performance integration, and to present the mathematical tools used in these papers. To reach the first objective, a non-exhaustive literature search is performed on online databases. The keywords used are related to performance integration, performance aggregation, performance relationships and indicator relationships. Moreover, all kinds of publications (journal articles, conference proceedings, etc.) are included in the database search. Due to the great variation of objectives and applications in the papers found, we only present the works most closely related to this dissertation in the next sections.

The second goal of this chapter is to present the mathematical tools used in the earlier works to relate indicators or aggregate performance measures. From the articles analyzed, it is possible to identify some groups of tools utilized with distinct objectives. Therefore, a general presentation of these groups is made, with special attention to the statistical tools used for dimension reduction, which allow indicator aggregation.

3.2 Literature Review


3.2.1 Literature on indicator relationships and indicators aggregation
Papers that define indicator relationships are not new. It is possible to identify two main development periods on this theme. First, papers try to identify whether there are indicator relationships; then, these relationships are measured. This measurement is made either qualitatively (using decision-making tools such as AHP (Analytic Hierarchy Process)) or quantitatively. An example of the first period is the work of Bititci (1995), which uses a QFD (Quality Function Deployment) matrix to display how measures of different levels (strategic, tactical, operational) influence each other according to managers' perception. In the same work, the author models the process for each strategic measure, defining a cause-and-effect diagram to control the interactions between operations and performance results.

In the relationship measurement period, the work of Suwignjo et al. (2000) develops the Quantitative Model for Performance Measurement System (QMPMS) to quantify the effects of factors on performance through the use of AHP, which is based on managers' opinions. The three main steps of QMPMS are: (i) identifying the factors that affect performance and their relationships, (ii) structuring the factors hierarchically, (iii) quantifying the effect of the factors on performance. The authors note that even if the methodology seems intuitive, one of the problems in measuring relationships quantitatively is the qualitative nature of some measures, for example, management commitment (Suwignjo et al., 2000).

An approach to overcome this issue started to be extensively used in the performance management literature some years later: statistical techniques of measurement, which allow the quantitative evaluation of relationships between qualitative measures. For example, the work of Fugate et al. (2010) investigates the influence of logistics performance on organizational performance using Structural Equation Modeling (SEM) (Figure 3.1). Logistics performance is decomposed into efficiency, effectiveness and differentiation, and the authors assume that all three are related.

Figure 3.1: Relationships among logistics variables (logistics effectiveness, efficiency and differentiation composing logistics performance, which relates to organizational performance). Source: Fugate et al. (2010)

A questionnaire is administered to industry managers to obtain the database necessary for the tool application. In the end, the results suggest that the overall performance of the logistics function should produce high levels of logistics effectiveness, efficiency, and differentiation, positively affecting organizational performance.

Another example is Cai et al. (2009), who propose a framework to analyze and select the right key performance indicators (KPIs) to improve supply chain performance. The framework assigns priorities to different KPIs and uses the PCTM (KPI cost transformation matrix) to verify the cost incurred for each KPI's accomplishment, also considering the extra cost caused in all other dependent KPIs. The authors interview managers and employees, identifying 20 different KPIs and defining their coupled relationships. Then, the cost of each KPI's accomplishment, with its relationships, is estimated from interviews with managers. The relationships between the accomplishment costs of two dependent KPIs are measured quantitatively using the following classification: weak (0.05), neutral (0.25), and strong (0.5).

Coskun and Bayyurt (2008) determine the effects of the indicator measurement frequency on managers' satisfaction with corporate performance. A questionnaire covering 500 enterprises is administered to acquire opinions about indicator measurement frequency and overall corporate performance. An Exploratory Factor Analysis (EFA) aggregates performance indicators into groups according to the Balanced Scorecard dimensions (Financial Measures, Customer Measures, Process Measures, Learning and Growth Measures). The relations between the measurement frequency of performance indicators and corporate performance satisfaction are analyzed using canonical correlation analysis.

At this point, some researchers start to measure indicator relationships without human judgment. That is the case of Rodriguez et al. (2009), who propose a methodology to identify KPI relationships and project them onto strategic objectives, in order to know whether the upstream objectives are being reached or not. Principal Component Analysis (PCA) is performed to quantify indicator relationships and group the indicators according to these relations. Finally, a framework of these relationships with respect to their strategic objectives is outlined.

Patel et al. (2008) develop a methodology to demonstrate the cause-and-effect relationships between the components of a performance rating system. Using Structural Equation Modeling, a causal-loop diagram showing the cause-and-effect relationships between 16 common performance indicators is constructed based on a data set spanning two years. These relationships are used to draw scenarios regarding an organization's future performance.

Johnson et al. (2010) identify the operational policies, design characteristics, and attributes of warehouses that are correlated with greater technical efficiency, i.e. the factors that impact warehouse performance. The variables correlated with high efficiency are identified using a regression model solved by ordinary least squares. Another work using a regression model to assess performance is by Kassali and Idowu (2007), which defines the factors determining the operational efficiency of onion storage and uses statistical inference to establish the relationships among factors.

Regarding the nature of indicator relationships, it is important to highlight some classifications. Bititci (1995) states that indicators may have simple or complex relationships; in other words, if one indicator changes, this may alter one or more data items elsewhere in the information system. Suwignjo et al. (2000) refine the classification of indicator relations into: direct (vertical) effects (an indicator influences another of a higher level), indirect (horizontal) effects (an indicator influences another indicator of the same level), and self-interaction effects (the indicator influences itself). Cai et al. (2009) classify the relationships into three categories: parallel, sequential and coupled. In a parallel relationship, two KPIs are independent of each other, i.e. the efforts of accomplishing these two KPIs are not related. A sequential relationship usually implies a simple cause-effect relationship, but the reverse dependence does not always hold. Finally, a coupled relationship means that both KPIs are dependent on each other.

3.2.2 Literature on Performance Integration


To the best of our knowledge, the term performance integration is interpreted in two different manners in the literature. Some researchers consider integrated performance as an indicator system framework which links the measures to strategy. One such example, as formulated by Chenhall and Langfield-Smith (2007), considers a pyramidal analysis with different aspects of an organization's performance (e.g. the Tableau de Bord) that feeds the three levels of management (strategy, management and operations). This aggregation usually deals with the translation of all the elementary performance expressions associated with the various heterogeneous criteria into a common reference (cost or degree of satisfaction) (Clivillé et al., 2007). In these works, the number of indicators at higher levels is usually reduced to allow managers to control just the key parameters, i.e. the key performance indicators. The literature about this kind of performance integration is significant, with several methods proposing the establishment of a performance indicator group (e.g. the SCOR model - Supply Chain Operations Reference model) or defining how the indicators should be chosen with regard to the company's strategy.

The second kind of performance integration, which is studied in this dissertation, refers to performance measurement from a global view, not excluding indicators but aggregating them to find out the total performance of an area or enterprise. Franceschini et al. (2008) refer to performance integration as the association of information from one or more sub-indicators into just one aggregated and synthesized indicator. The number of papers studying performance integration according to this perception is less significant in the literature when compared to the first interpretation. The following papers are related to this second definition.

Chan and Qi (2003) develop a process-based model to measure the holistic performance of complex supply chains. They consider productivity, efficiency and utilization as composite measures since they relate inputs and outputs. A group representing various management areas of the supply chain is formed and the expert opinions are incorporated in a fuzzy model as relative weights to assess the aggregated performance.

Lohman et al. (2004) present a prototype system that is basically a balanced scorecard tailored to the needs of the company studied. After determining the performance indicator system, they suggest aggregating the indicators into one number. As each individual metric has a different dimension, the authors suggest a method for normalizing the metrics linearly.
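Lohman et al. (2004) do not detail their normalization routine here, but the idea of linearly mapping heterogeneous metrics onto a common scale before aggregation can be illustrated with a short sketch. The following Python fragment is only a generic illustration: the metric names, bounds and weights are hypothetical, not taken from their paper.

```python
import numpy as np

def normalize_linear(value, worst, best):
    # Map a raw metric onto [0, 1], where 0 is the worst and 1 the best
    # level. Passing worst > best handles 'lower is better' metrics.
    return (value - worst) / (best - worst)

# Hypothetical metrics with different units and orientations
on_time_delivery = normalize_linear(0.93, worst=0.80, best=1.00)  # ratio, higher is better
order_lead_time = normalize_linear(36.0, worst=72.0, best=24.0)   # hours, lower is better

# One aggregated score as a weighted sum of the normalized metrics
weights = np.array([0.6, 0.4])
scores = np.array([on_time_delivery, order_lead_time])
print(round(float(weights @ scores), 2))  # 0.69
```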

In Sohn et al. (2007), the authors develop an Air Force Warehouse Logistics Index (WLI) to evaluate the logistics support capability of ROKAF (Republic of Korea Air Force) warehouses. Even if the main goal is not performance measurement, the constructed index takes into account relationships among various influential factors for warehouse capability. The dataset is obtained through interviews with warehouse employees and the answers are related to latent variables using Structural Equation Modeling (SEM). The six latent variables sj, with j = 1, . . . , 6, influence the WLI, which contributes to logistics support capability and warehouse modernization. The relationship between the overall logistics index ηi and the six observed variables yij, with i referring to each respondent, is (Equation 3.1):

ηi = s1 × yi1 + s2 × yi2 + s3 × yi3 + s4 × yi4 + s5 × yi5 + s6 × yi6 (3.1)

Luo et al. (2010) propose a hierarchical model of performance factors to assess the general logistics performance of an agricultural products distribution center. First, FAHP (Fuzzy Analytic Hierarchy Process) is used to calculate the index weights; then, a fuzzy comprehensive evaluation method is used to obtain the total logistics performance.

The work of Jiang et al. (2009) develops a theoretical indicator system of logistics performance with the objective of analyzing the interactions among these performance measures and optimizing them. The dimensions of logistics performance measurement are time, quality, cost and flexibility (see Figure 3.2), and each dimension includes several indicators.

Figure 3.2: Framework to evaluate logistics performance in supply chains (dimensions: logistics time performance, service quality, logistics costs and logistics flexibility). Source: Jiang et al. (2009)

The DEMATEL (DEcision-MAking Trial and Evaluation Laboratory) method is the tool used to optimize the index system and delete the indexes with a small relational grade. Finally, DEMATEL is also applied to evaluate the weight of each index and the total performance of the enterprises (Jiang et al., 2009).

Clivillé et al. (2007) use the MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation TecHnique) methodology as a global framework to define multi-criteria industrial performance expressions. The MACBETH procedure allows commensurate elementary performances and the relative weights of the performance measures to be expressed from the decision-maker's knowledge, and then the elementary performances to be aggregated. Clivillé et al. (2007) use MACBETH with Choquet integral operators to take into account the interactions among performances when defining the aggregated performance.

Some works try to achieve an aggregated performance measurement for benchmarking purposes. Benchmarking is essentially the process of identifying the highest standards of excellence for products, services, or processes, and then making the improvements necessary to reach those standards, commonly called best practices (De Koster and Balk, 2008). Regarding warehouses, benchmarking is seen as the process of systematically assessing the performance of a warehouse, identifying inefficiencies, and proposing improvements (Gu et al., 2010). In these cases, DEA (Data Envelopment Analysis) is probably the most widely used mathematical approach for the benchmarking of organizational units (Jha et al., 2008).

Data Envelopment Analysis (DEA) is regarded as an appropriate tool for this task because of its capability to capture simultaneously all the relevant inputs (resources) and outputs (performances) using one single performance factor, to construct the best performance frontier, and to reveal the relative shortcomings of inefficient warehouses (Gu et al., 2010).

Some examples of such works are by Schefczyk (1993), Ross and Droge (2002) and Johnson et al. (2010). The recent work of Andrejić et al. (2013) proposes to benchmark DCs using PCA (Principal Component Analysis) before the DEA. The PCA is applied to inputs and outputs separately to reduce the number of variables for the DEA model.

It is important to highlight two main characteristics of the papers presented in this literature review. First, the majority of the works develop a methodology for performance aggregation using statistical tools; however, indicator aggregation is not included as a step before attaining the global performance. Only the work of Jiang et al. (2009) achieves global performance through indicator relationships; however, these relations are defined based on expert judgments. This situation demonstrates the second characteristic: the works proposing aggregated indicators to represent the global performance usually utilize methods based on expert judgments. One exception is the work of Rodriguez et al. (2009), which aggregates performance indicators into factors without human judgment. However, these factors are not yet transformed into a global performance. Hence, this dissertation comes to fill this gap, providing a global warehouse performance through the indicators' aggregation.

In the next sections, we present an overview of the mathematical tools used in the most relevant papers. Special attention is given to the statistical tools which allow the aggregation of performance indicators.

3.3 Overview on mathematical tools used for performance integration
The goal of this section is twofold: (i) to identify the most appropriate mathematical tools to attain indicator aggregation without human judgment; (ii) to provide a basic overview of the chosen mathematical tools, focusing on the requirements for their application and the interpretation of their results.

3.3.1 The choice of the dimension-reduction statistical tool


From the papers presented in the above section, we note that different kinds of mathematical tools are used to assess performance. It is possible to divide the tools into different groups: decision-making tools, DEA techniques, and dimension-reduction statistical tools.

There is a vast literature and numerous tools to help decision makers. Several papers treat the relationships among indicators using decision support systems. The majority of these tools translate the manager's opinion about indicator relationships and weights into a quantitative measure. According to Rodriguez et al. (2009), the weakness of decision-aid methods such as AHP is that they have judgments as inputs, which can be incongruent with managerial cognitive limitations. Moreover, the objective of this dissertation is to find out relationships from the indicator equations and their data collected periodically, without manager judgment. Thus, methods which incorporate managers' opinions, like AHP, FAHP (Fuzzy Analytic Hierarchy Process), DEMATEL (DEcision-MAking Trial and Evaluation Laboratory method), MACBETH (Measuring Attractiveness by a Categorical-Based Evaluation TecHnique) and fuzzy approaches, are not considered in our analysis.

The DEA technique is a non-parametric linear programming approach which enables the comparison of different DMUs (Decision Making Units), based on multiple inputs and outputs. In the DEA approach, essential input and output data are selected and the set of observed data is used to approximate the Production Possibility Set (PPS). The PPS represents all input and output combinations that can actually be achieved. The boundary of the PPS is called the efficient frontier and characterizes how the most efficient warehouses trade off inputs and outputs (Johnson et al., 2010). The efficiency is relative and refers to the set of units within the analysis, i.e. the warehouses are efficient among the other units (Andrejić et al., 2013).

Even if it is possible to use DEA to analyze just one DMU over time (another application besides benchmarking), it does not satisfy our objectives in some aspects. Firstly, we want to define the indicator relationships to provide managers with additional information about the impacts of the decisions that are going to be taken based on performance results; DEA does not give information about input and output relationships. Secondly, the dataset (inputs and outputs of the model) used for efficiency analysis consists of operational data, and not the indicator results as we intend to use in this work. Therefore, DEA is also not utilized in this dissertation.

Looking at the statistical literature, multivariate analysis has the potential to identify relationships between variables over time, clustering them according to these relationships. Additionally, these tools can aggregate variables, determining their weights and reducing the dimension of the analysis to help managers in decision-making situations.

The techniques presented in the next sections are: Principal Component Analysis, Factor Analysis, Canonical Correlation Analysis, Structural Equation Modeling and Dynamic Factor Analysis. Among these tools, only Dynamic Factor Analysis is specially designed for time series data, whereas the others give better results with other kinds of data. As an example, Hoyle (2012) mentions that standard SEM approaches use variables measured on a continuous or quasi-continuous scale (e.g. 5- or 7-point response scales), or sometimes categorical data (e.g. true-false). The use of these tools with time series data is not forbidden, but in some cases adaptations need to be made for their application.

As these dimension-reduction tools are associated with this dissertation's proposal, they

will be analyzed further in the next sections.

3.3.2 Principal Component Analysis - PCA


3.3.2.1 Objective
Principal Component Analysis (PCA) is one of the most common multivariate methods for identifying association patterns between variables (Katchova, 2013). A PCA often uncovers unsuspected relationships, allowing the data to be interpreted in a new way (Minitab Inc., 2009). Its main purpose is to reduce the information of many observed variables into a small group of artificial variables named components (Manly, 2004). In PCA, the components empirically aggregate the variables without a presumed theory (Wainer, 2010).

3.3.2.2 Data characteristics


There is no specific requirement about the kind of data that should be used to perform PCA. The normality of the data (usually required in statistical applications) is not a strict requirement, especially when PCA is used for data reduction or exploratory purposes. However, some authors suggest that PCA can provide better results if the data follow a normal distribution.

The sample (dataset) is an n × p matrix, with n observations for each of the p variables. Usually, the inputs come from questionnaires (each observation is a different person), but nothing prohibits the use of other types of data such as, for example, time series.

There are some conditions that the dataset should satisfy (Manly, 2004):

• the sample must be bigger than the number of variables included;

• the sample must have more than 30 observations;

• there must exist correlation among variables.

If the number of variables is greater than the number of observations, as in some practical cases within the performance management context, the application of classic PCA presents problems. The solution can be to apply the NIPALS (Nonlinear Iterative Partial Least Squares) algorithm to estimate the different principal components (Rodriguez-Rodriguez et al., 2010).

Besides the sample size, PCA is sensitive to great numerical differences among variables. Therefore, after the acquisition of the minimum number of observations required, it is often convenient to standardize each observation (Zuur et al., 2003a). The standardization is detailed in Chapter 4.
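The standardization used in this thesis is the one detailed in Chapter 4; purely as a generic illustration, a common choice is the z-score transformation, sketched below in Python on hypothetical data.

```python
import numpy as np

# Hypothetical dataset: 36 observations (rows) of 5 indicators (columns)
# deliberately put on very different numerical scales
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 5)) * np.array([1, 10, 100, 0.1, 5])

# z-score standardization: zero mean and unit variance per indicator,
# so no indicator dominates the PCA merely because of its scale
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0).round(6))  # ~0 for every column
print(X_std.std(axis=0).round(6))   # 1 for every column
```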

3.3.2.3 Basic principles


The principal components are defined in order to capture the greatest variance of the dataset. They are calculated by finding the eigenvalues and eigenvectors of the covariance matrix of the p variables. The eigenvalues are a numeric estimation of the variable variation explained by each component (Wainer, 2010). In the case of PCA, all the variance of the observed variables is analyzed (shared, unique and error variances) (Manly, 2004). Moreover, PCA assumes that the variables comprise only linear relationships.

The PCA method essentially defines the same number of components as there are variables. Since each component is perpendicular to the others, the components form a p-dimensional space. As explained in the sequel, the number of components needed to explain the total dataset variance can be less than the total number of variables, depending on the data characteristics.

Let us consider that Figure 3.3 shows the scatter plot of indicators measured monthly. The X axis represents time and the Y axis the indicator values. The points in the graphic are the observations (indicator values) over all periods of time. In Figure 3.3, the first and second principal components are u and v, representing the first and the second greatest variance of the dataset, respectively. The u and v components are orthogonal, showing that they are uncorrelated with each other. The same holds for all components (Wainer, 2010).

Figure 3.3: Scatter plot of the dataset with the first and second principal components.

3.3.2.4 Main outcomes


From the p variables X1, X2, . . . , Xp, each principal component C1, C2, . . . , Cp describes a dimension of the data variation (Manly, 2004). Since each component is a linear combination of the observed variables, the principal components (Ci), combining the variables X1, X2, . . . , Xp, have the form (Equation 3.2) (Manly, 2004):

Ci = ai1 × X1 + ai2 × X2 + . . . + aip × Xp (3.2)

The outcome of PCA is a set of principal components of the form of Equation 3.2, since the maximum number of components extracted always equals the number of variables (Minitab Inc., 2009). It is important to note that not all variables are necessarily included in every principal component: just the original variables that account for the data variance explained by Ci are included in its equation.

The principal components resulting from PCA are ranked in descending order of importance, such that Var(C1) ≥ Var(C2) ≥ . . . ≥ Var(Cp), where Var(Ci) denotes the variance of Ci (Manly, 2004).


The aij in Equation 3.2 are the coefficients of the variables, with i = 1, 2, . . . , p and j = 1, 2, . . . , p. These coefficients express the correlation between the original variables and the component. They can also be interpreted as the relative weight of each variable in the component Ci. Thus, the bigger the absolute value of the coefficient, the more important the corresponding variable is in constructing the component (Minitab Inc., 2009). These loadings (the notation used in this thesis) are optimally defined in the PCA analysis to produce the best set of components, explaining the maximum variation of the observed variables.

The loadings satisfy the constraint presented in Equation 3.3. The squared loadings indicate the percentage of variance of an original variable explained by a component.

ai1² + ai2² + . . . + aip² = 1 and aij ∈ ℝ (3.3)

In summary, the procedure to implement PCA is as follows (a minimal sketch is given after the list):

• Data acquisition and standardization;

• Enter the data (in the form of a covariance or correlation matrix) into software which performs PCA (e.g. Minitab, AMOS, R (free software));

• Run the model to obtain the components;

• Interpret the results.
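As a minimal sketch of this procedure, the following Python fragment runs PCA with the scikit-learn library on a hypothetical standardized dataset (in practice the software cited above, such as Minitab or R, would be used):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: 36 monthly observations of 6 indicators
rng = np.random.default_rng(1)
X = rng.normal(size=(36, 6))

X_std = StandardScaler().fit_transform(X)  # acquisition and standardization
pca = PCA().fit(X_std)                     # run the model

print(pca.explained_variance_)                 # eigenvalues (variance of each component)
print(pca.explained_variance_ratio_.cumsum())  # cumulative variance explained
print(pca.components_[0].round(2))             # loadings a_1j of the first component
```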

3.3.2.5 Interpretation of the results


An important part of PCA is the interpretation of the results, and its main task is to determine the number of principal components that will be retained to represent the data. An appropriate number of components has to be retained based on the trade-off between simplicity (retaining as few as possible) and completeness (explaining most of the data variation) (Katchova, 2013). Usually, the first few principal components are chosen to represent the original data (Gentle, 2007).

One of the PCA objectives is to explain the maximum amount of the variables' variance with a small number of components. If a component's variance is low, it is possible to neglect this component. However, the results are not always easily interpretable. To help with this decision, there is Kaiser's criterion. Kaiser's rule determines that principal components with eigenvalues bigger than 1 (λ > 1) should be retained. The eigenvalues of the correlation matrix are equal to the variances of the principal components; thus, the eigenvalues measure the amount of variation represented by each component.



The scree plot can also help in PCA interpretation. This graphic shows the variance of the data (y axis) explained by each component (x axis) (see Figure 3.4 for an example). The principal components are sorted in decreasing order of variance, so the most important principal component is always listed first. The objective is to help analysts visualize the relative importance of the components, easily identifying the sharp drop in the plot as a signal that subsequent components can be ignored. Thus, in the example of Figure 3.4, components 1 up to 4 have a significant contribution to the explanation of the data variance.

Figure 3.4: Scree Plot example.

It is also possible to decide on the number of principal components based on the amount of explained variance. For example, one may retain the components that cumulatively explain 90% of the variance.

Finally, the decision on the number of principal components retained can be based on either of the two techniques presented above, or even on a combination of them.
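Both criteria are simple to apply once the eigenvalues are known. The short sketch below, on hypothetical eigenvalues, applies Kaiser's rule and a 90% cumulative-variance threshold:

```python
import numpy as np

# Hypothetical eigenvalues of a correlation matrix (6 components)
eigenvalues = np.array([2.9, 1.6, 0.7, 0.4, 0.25, 0.15])

kaiser = int(np.sum(eigenvalues > 1))                 # Kaiser's rule: keep lambda > 1
cum_var = np.cumsum(eigenvalues) / eigenvalues.sum()  # cumulative explained variance
by_variance = int(np.argmax(cum_var >= 0.90)) + 1     # smallest k reaching 90%

print(kaiser, by_variance)  # here: 2 components by Kaiser, 4 by the 90% rule
```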

Even if the presented techniques provide a useful basis to choose the number of components, the analyst should know that all components must be interpretable (Rodriguez-Rodriguez et al., 2010). Since the components are synthetic variables which do not have a specific unit of measurement, it is important to find their meaning in the analysis carried out.

3.3.3 Factor Analysis - FA


3.3.3.1 Objective
Factor Analysis (FA) is widely used to analyze data because users find the results useful for gaining insight into the structure of multivariate data (Manly, 2004). Factor Analysis has aims that are similar to those of Principal Component Analysis, i.e. to describe the data in a far smaller number of dimensions than the original number of variables. Essentially, both Factor Analysis and Principal Component Analysis summarize variables considering linear relationships between them.

The main difference between PCA and FA is that PCA is not based on any particular statistical model whereas FA is (Manly, 2004). This means that Factor Analysis assumes the existence of a few common factors driving the data variation, whereas Principal Component Analysis makes no such assumption (Katchova, 2013). Moreover, PCA uses all types of variance to estimate the components, whereas FA utilizes only the shared variance to define the factors.

The two most common factor analysis methods are EFA (Exploratory Factor Analysis) and CFA (Confirmatory Factor Analysis). EFA is used to search for possible underlying structures in the variables, while CFA's goal is to confirm, with the data, a predefined structure based on theoretical hypotheses.

An extension of FA is multiple factor analysis, which analyzes several data tables at the same time. These tables measure sets of variables collected on the same observations, or the same variables measured on different sets of observations (for details, see Abdi et al. (2013)).

3.3.3.2 Data characteristics


There are some conditions to perform CFA:

• a sample bigger than 150 observations for each variable, or a sample size of at least 5 times the number of variables;

• no missing values (observations);

• the data distribution should be normal;

• the observations (within the same variable) should be independent.

The last condition limits the utilization of time series as inputs in the model.

3.3.3.3 Basic principles


The FA model postulates that the observable random variable vector X (with p components) is linearly dependent upon a few unobservable random factors F1, F2, . . . , Fm and p additional sources of variation ε1, ε2, . . . , εp called errors or specific factors (Johnson and Wichern, 2002).

The dimensions (or factors) are formed by the combination of highly correlated observed variables. The objective is to identify the latent dimensions contained in the data, i.e. to group the variables into the dimensions that represent them. The degree to which each variable is explained by each dimension is determined by the factor loadings.

3.3.3.4 Main outcomes


The representation of the factors is given by Equation 3.4 (Johnson and Wichern, 2002).

Xi = bi1 × F1 + bi2 × F2 + . . . + bim × Fm + εi (3.4)

where Fj are the common factors, with j = 1, 2, . . . , m; bij are the factor loadings of the ith variable on the jth factor; Xi are the variables, with i = 1, 2, . . . , p; and εi is the variation of Xi that is not explained by the factors Fj.

The factor loadings are estimated by the FA model and represent how much a factor explains a variable. High loadings (positive or negative) indicate that the factor strongly influences the variable, whereas low loadings (positive or negative) indicate a weak influence. It is necessary to examine the loading pattern to determine on which factor each variable loads. Some variables may load on multiple factors (Minitab Inc., 2009).

The communality (represented by hi² in Equation 3.5) is the proportion of the variance in Xi attributable to the common factors (Katchova, 2013), i.e. it assesses the quality of the measurement model for each variable (Krizman and Ogorelc, 2010). The communality is measured by the sum of the squared loadings of the ith variable on the m common factors (Equation 3.5) (Johnson and Wichern, 2002):

hi² = bi1² + bi2² + . . . + bim² (3.5)

The higher the communality value, the more the variable is explained by the common factors. This parameter is used in the analysis of FA results, as are the loadings. For example, Krizman and Ogorelc (2010) state in their paper that variables with a loading of less than 0.75 and a communality of less than 0.40 were discarded.

Some steps to perform Factor Analysis are, according to Costa (2006), as follows (a sketch is given after the list):

• Input the data (which should be standardized).

• Calculate the correlation matrix of the variables.

• When the variables' behavior is not known, first perform a PCA and verify the number of factors that should be used (analyzing what kind of data each factor represents).

• Rotate the factor loadings. This procedure (using e.g. Varimax rotation) rotates the factors to obtain the highest possible factor loadings. It helps to interpret the results, clarifying which variables should be part of each factor.
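As a sketch of these steps, the Python fragment below fits a two-factor model with Varimax rotation using scikit-learn's FactorAnalysis and computes the communalities of Equation 3.5; the dataset is hypothetical and the number of factors is arbitrary.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: 200 observations of 6 variables
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 6))
X_std = StandardScaler().fit_transform(X)  # data input (standardized)

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X_std)

loadings = fa.components_.T                  # b_ij: one row per variable, one column per factor
communalities = (loadings ** 2).sum(axis=1)  # h_i^2 of Equation 3.5
print(loadings.round(2))
print(communalities.round(2))
```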

3.3.3.5 Interpretation of the results


A common tool used to provide visual information about the factors is the scree, or eigenvalue, plot (a graph of the factors versus the corresponding eigenvalues). From this plot, one can determine how well the chosen number of factors fits the data.

Furthermore, if there is a subgroup of variables already known (e.g. individuals, products, enterprises), the factor analysis can be performed separately for each group; this can avoid assigning variables of different natures to the same factor.

Finally, the difficulty of interpreting the variable clusters from the unrotated factor loadings can be overcome with rotation, which simplifies the loading structure, allowing the analyst to more easily interpret the results. The goal of factor rotation is to find clusters of variables that, to a large extent, define only one factor (Katchova, 2013).

There are two kinds of rotation: orthogonal and oblique. The orthogonal rotation preserves the perpendicularity of the axes (rotated factors remain uncorrelated). The oblique rotation allows correlation between the rotated factors; the main method is Promax rotation (Katchova, 2013). It corresponds to a nonrigid rotation of the coordinate axes leading to new axes that are not perpendicular (Johnson and Wichern, 2002).

There are four methods to orthogonally rotate the initial factor loadings (Minitab Inc., 2009):

• Equimax - maximizes the variance of the squared loadings within both variables and factors.

• Varimax - maximizes the variance of the squared loadings within factors (i.e. simplifies the columns of the loading matrix); the most widely used rotation method. This method attempts to make the loadings either large or small to ease interpretation.

• Quartimax - maximizes the variance of the squared loadings within variables (i.e. simplifies the rows of the loading matrix).

• Orthomax - a rotation that comprises the above three depending on the value of the parameter gamma (0-1).

Nevertheless, Johnson and Wichern (2002) affirm that the choice of the type of rotation is a less crucial decision. For them, the most satisfactory factor analyses are those in which rotations are tried with more than one method and all the results substantially confirm the same factor structure.

3.3.4 Canonical correlation analysis - CCA


3.3.4.1 Objective
Canonical Correlation Analysis (CCA) is a method for exploring the relationships between two multivariate sets of variables. CCA is similar to multiple regression in assessing variable relationships. The main difference is that multiple regression allows only a single dependent variable, whereas CCA analyzes multidimensional relations between multiple dependent and independent variables (Coskun and Bayyurt, 2008). Therefore, CCA has as its main objective to measure the relationships within each variable set, independent and dependent, and also between the two sets (Voss et al., 2005). For the purposes of this thesis, we are interested in the measurement of the relationships between the variable sets.

3.3.4.2 Data characteristics


Canonical correlation analysis is not recommended for small samples. Moreover, multivariate normal distribution assumptions are required for both sets of variables (UCLA, 2012). Unlike Principal Component Analysis, standardizing the data has no impact on the canonical correlations.

3.3.4.3 Basic principles


The aim of CCA is to find a linear combination of the independent (or predictor) variables such that the outcome has the maximum correlation with the dependent (or criterion) variables (Johnson and Wichern, 2002).

To demonstrate how this result is attained, let us consider two sets of variables, X and Y, with p variables in X and q variables in Y. As in Principal Component Analysis, the objective is to look at linear combinations of the data, named U and V. U corresponds to the linear combinations of the first set of variables, X (Equation 3.6), and V corresponds to those of the second set of variables, Y (Equation 3.7) (PennState, 2015b). For computational convenience, it is assumed that p ≤ q.



U1 = a11 × X1 + a12 × X2 + . . . + a1p × Xp
U2 = a21 × X1 + a22 × X2 + . . . + a2p × Xp
. . .
Up = ap1 × X1 + ap2 × X2 + . . . + app × Xp (3.6)

V1 = b11 × Y1 + b12 × Y2 + . . . + b1q × Yq
V2 = b21 × Y1 + b22 × Y2 + . . . + b2q × Yq
. . .
Vq = bq1 × Y1 + bq2 × Y2 + . . . + bqq × Yq (3.7)

Each member of U will be paired with a member of V, forming the canonical variates. Canonical dimensions, also known as canonical variates, are latent variables that are analogous to the factors obtained in factor analysis. In general, the number of canonical dimensions is equal to the number of variables in the smaller set; however, the number of significant dimensions may be even smaller (UCLA, 2012).

For example, (U1, V1) is the first canonical variate pair, and the objective is to find the coefficients (ai1, ai2, . . . , aip and bi1, bi2, . . . , biq) of the linear combinations that maximize the correlations between the members of each canonical variate pair (PennState, 2015b). The canonical correlation (Rc) for the ith canonical variate pair is given by the covariance (cov) of the canonical variate pair divided by the square root of the product of the variances (var) of Ui and Vi (Equation 3.8):

Rc = cov(Ui, Vi) / √(var(Ui) × var(Vi)) (3.8)
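A minimal sketch of these computations in Python, using scikit-learn's CCA on hypothetical data, where the canonical correlations Rc are obtained exactly as in Equation 3.8:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical sets: X with p = 3 predictors, Y with q = 4 criteria, n = 100
rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
Y = rng.normal(size=(100, 4))

cca = CCA(n_components=3).fit(X, Y)  # at most min(p, q) canonical variate pairs
U, V = cca.transform(X, Y)           # canonical variates U_i and V_i

# Canonical correlation R_c of each variate pair (Equation 3.8)
Rc = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(3)]
print(np.round(Rc, 3))
```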

3.3.4.4 Main outcomes


The output of canonical correlation consists of two parts: canonical functions and canonical variates. Each canonical function is composed of two canonical variates, one independent and one dependent. The independent and the dependent canonical variates each represent the optimal, linear and weighted combination of the variables that correlate highly (Voss et al., 2005).

The correlation between the independent and dependent variates in each function is assessed by the canonical correlation coefficient (Rc), and the shared variance between the functions is assessed by the squared canonical correlation coefficient (Rc²). Multiple canonical functions are then derived that maximize the correlation between the independent and dependent canonical variates, such that each function is orthogonal to all the others (Voss et al., 2005).

3.3.4.5 Interpretation of the results


Two main analytical findings can be secured from the CCA results: (i) the evaluation of how many dimensions (canonical variates) are necessary to understand the association between the two sets of variables; (ii) the exploration of the associations among dimensions and how much variance is shared between them (PennState, 2015b).

To interpret each dimension, we must compute the coefficients (also named loadings) between each observed variable and the corresponding canonical variate (UCLA, 2012). The magnitudes of the loadings give the contributions of the individual variables to the corresponding canonical variable.

3.3.5 Structural Equation Modeling - SEM


3.3.5.1 Objective
Structural Equation Modeling (SEM) is a growing family of statistical methods for modeling the relations between variables. The method is also known as Covariance Structural Equation Modeling (CSEM), Analysis of Covariance Structures, or Covariance Structure Analysis (Hoyle, 2012).

SEM is appropriate for complex, multivariate data and for testing hypotheses regarding relationships among observed and latent variables, the two broad classes of variables in SEM (Kline, 2011).

3.3.5.2 Data characteristics


To perform SEM, it is necessary to be aware that the sample size and the number of parameters to be estimated can make SEM inadvisable. Several estimation issues arise in SEM when the number of measurement occasions, T, exceeds the number of participants, N, and some alternatives have been developed to handle this kind of data (Chow et al., 2010). There is no firm decision rule for the minimum sample size for SEM, but several authors suggest that at lower sample sizes, typically below 150, structural models with latent variables become unreliable. Furthermore, there is similar advice against the use of SEM in cases where the ratio of sample size to estimated parameters is less than 10 (Autry et al., 2005). In cases where the sample size is relatively small, the threshold values for factor loadings and communalities are sometimes increased, and Partial Least Squares Regression (PLS) is usually employed to assess the measurement model (Krizman and Ogorelc, 2010).

Some dataset requirements to apply SEM are (Bentler and Chou, 1987):

• Independence of observations - otherwise, there is serial correlation among the responses;

• Identical distribution of observations;

• Simple random sampling - each of the units or cases has the same probability of being included in the sample to be studied;

• Functional form - all the relations among variables are linear.



Given these requirements, estimating a structural equation model using time series data raises the issue of autocorrelated errors. There are methods for accommodating autocorrelated errors in structural equation models, but they are complex and are outside the scope of this dissertation.

3.3.5.3 Basic principles


SEM comprises the ability to construct latent variables: variables which are not measured directly, but are estimated in the model from several measured variables. SEM requires a theoretical model specification before its application. Thus, as in Confirmatory Factor Analysis (CFA), an accurate estimation of the latent variables depends on the quality of the theoretical model constructed. The test of the structural model constitutes a confirmatory assessment of the hypothesized causal relationships among the constructs (Krizman and Ogorelc, 2010).

The theoretical model of SEM can have numerous configurations. First, the model can have variables that are dependent and independent at the same time. For instance, a set of observed variables might be used to predict a pair of constructs (or latent variables) that are correlated, uncorrelated, or related in such a way that one forms the other. In the latter case, one of the dependent variables is also an independent variable, since it is used to predict another dependent variable (Hoyle, 2012).

Another configuration of theoretical models regards the construct specification (i.e. the aggregation method used to define the latent variables), which can be classified as a reflective or a formative measurement model (Jung, 2013). A formative construct refers to an index of a weighted sum of variables, i.e. the measured variables cause the construct. In a reflective construct, the latent variable causes the measured variables.

Figure 3.5 shows a reflective construct model represented by a path diagram, which is a graphical representation of the direct and indirect effects of observed and latent variables. In this model, Y and X are the latent variables operationally defined by the measured variables y1, y2, y3 and x1, x2, x3, x4, respectively. The parameters to be estimated are denoted by asterisks (Hoyle, 2012).

Figure 3.5: SEM model example (Hoyle, 2012).
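As an illustration only, a reflective model like the one in Figure 3.5 could be specified and estimated with the third-party Python package semopy (assumed available; AMOS and LISREL are more common in this literature). The data below are hypothetical.

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package, assumed installed

# Hypothetical dataset with the observed variables of Figure 3.5
rng = np.random.default_rng(4)
data = pd.DataFrame(rng.normal(size=(300, 7)),
                    columns=["x1", "x2", "x3", "x4", "y1", "y2", "y3"])

# Reflective measurement models for the latent variables X and Y,
# plus the structural relation X -> Y (lavaan-style syntax)
desc = """
X =~ x1 + x2 + x3 + x4
Y =~ y1 + y2 + y3
Y ~ X
"""

model = Model(desc)
model.fit(data)
print(model.inspect())  # loadings and the X -> Y path coefficient
```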



3.3.5.4 Main outcomes


Path analysis essentially: (i) helps in the understanding of correlation patterns among the variables; (ii) explains as much of the variable variation as possible with the specified model. In summary, after the definition of the theoretical model, it is tested using the dataset of the observed variables, and the results indicate whether the whole hypothesized model should be rejected, modified, or accepted (Chen, 2011).

There are two general causal modeling approaches to model measurement: the covariance-based method and partial least squares (PLS). Covariance-based methods are more appropriate for confirming theory and for parameter estimation, and they require large sample sizes with normal distribution. PLS, in contrast, is more appropriate for prediction purposes, when theory is lacking regarding the nature of the relationships among constructs, dimensions and their indicators (Fugate et al., 2010).

3.3.5.5 Interpretation of the results


The interpretation of path coefficients cannot be done straightforwardly (Kline, 2011). The higher the correlation among multiple indicators of a given construct, the more consistent, i.e. reliable, the measures. However, path coefficients are not correlation coefficients. Suppose we have a network with a path connecting region A to region B. The meaning of the path coefficient θ (e.g., -0.16) is the following: if region A increases by one standard deviation from its mean, region B would be expected to decrease by 0.16 of its own standard deviation from its own mean, while holding all other relevant regional connections constant.

3.3.6 Dynamic Factor Analysis - DFA


3.3.6.1 Objective
Dynamic Factor Analysis is a special case of the MARSS model (Multivariate Autoregressive State Space model). State-space modeling techniques were originally developed as single-subject time series estimation tools (Chow et al., 2010) for studying linear stochastic dynamic systems (Holmes et al., 2014).

DFA can be looked at as a super regression model especially designed for time series data, with the outcomes of dimension-reduction techniques (Zuur et al., 2003b). Instead of examining correlates of a single summary metric (i.e. an output), DFA can provide information on correlates (explanatory variables) of patterns that emerge over time (Hasson and Heffernan, 2011). Thus, DFA explains the temporal variation of a set of n observed time series (variables) using linear combinations of m hidden trends (or common trends), where m << n (Holmes et al., 2014).

3.3.6.2 Data characteristics


Although DFA has potential as a useful analysis technique, it often takes an unusually long time to converge (often exceeding several hours, the larger the dataset and the number of common trends). The results also tend to become inconsistent with such large datasets (Holmes et al., 2014). Therefore, DFA gives good results when n (the number of observed variables) is big and the number of time observations is small.



Besides handling short datasets, DFA also accepts non-stationary time series with missing values (Zuur et al., 2003b).

3.3.6.3 Basic principles


Dynamic Factor Analysis manages to combine, from a descriptive (not probabilistic) point of view, the cross-section analysis through Principal Component Analysis (PCA) and the time series dimension of the data through a linear regression model (Federici and Mazzitelli, 2005). DFA models observations in terms of a trend, seasonal effects, a cycle, explanatory variables and noise (Zuur et al., 2003a).

A limitation of DFA is that the common trends are combined in a linear fashion, and the explanatory variable regressions are linear as well. Therefore, nonlinear interactions between the components of the model are ignored (Hasson and Heffernan, 2011).

3.3.6.4 Main outcomes


A DFA model has the following structure (Holmes et al., 2014):

x_t = x_{t-1} + w_t,   where w_t ~ MVN(0, Q)
y_t = Z x_t + a + v_t, where v_t ~ MVN(0, R)        (3.9)
x_0 ~ MVN(π, Λ)

The general idea presented in Equation 3.9 is that the observed variables (y) are modeled as a linear combination of hidden trends (x). Then, the data entered into the model (y) is explained by some common trends (x). The factor loadings (Z) are used, as in PCA, to determine the variables that will be aggregated in each common trend. The other terms in Equation 3.9 are matrices with the following definitions (see Holmes et al. (2014) for a detailed explanation):

w is an m × T matrix of the process errors. The process errors at time t are multivariate normal (MVN) with mean 0 and covariance matrix Q.
v is an n × T matrix of the non-process (observation) errors. The observation errors at time t are multivariate normal (MVN) with mean 0 and covariance matrix R.
a are parameters and are n × 1 column vectors.
Q and R are parameters and are m × m and n × n variance-covariance matrices.
π is either a parameter or a fixed prior. It is an m × 1 matrix.
Λ is either a parameter or a fixed prior. It is an m × m variance-covariance matrix.

There are three ways of estimating factor loadings in DFA: (i) using maximum likelihood estimation (MLE) and the Kalman filter (KF); (ii) using principal components extraction; (iii) a combination of the first two. According to Montgomery and Runger (2003), MLE is one of the best methods for obtaining a point estimator of a parameter: the estimator is the value of the parameter that maximizes the likelihood function.

The Kalman filter is an algorithm for calculating the expected means and covariances of the observed values for a whole time series in the presence of observation and process error. In its original form, it works only for models that are linear (exponential increase or decrease, or expected constant population size over time) with multivariate normal error; the extended Kalman filter uses an approximation that works for nonlinear population dynamics (Bolker, 2007).
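To make the estimation step concrete, the following is a minimal sketch of fitting a one-trend dynamic factor model in Python. The literature cited above works with the MARSS framework (implemented in R); the statsmodels DynamicFactor class is used here only as a stand-in, and the synthetic dataset (four indicators driven by one hidden trend) is purely illustrative.

import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.dynamic_factor import DynamicFactor

rng = np.random.default_rng(0)
trend = np.cumsum(rng.normal(size=36))             # one hidden common trend
indicators = pd.DataFrame(
    {f"I{k}": 0.8 * trend + rng.normal(size=36) for k in range(1, 5)})

# One common trend (k_factors=1) with first-order autoregressive dynamics
mod = DynamicFactor(indicators, k_factors=1, factor_order=1)
res = mod.fit(disp=False)     # MLE computed with the Kalman filter
print(res.summary())          # the factor loadings play the role of Z in Eq. 3.9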

3.3.6.5 Interpretation of the results


Finally, the interpretation of DFA results may not be straightforward. The DFA model uses hypothetical latent variables (the common trends) that are deemed to be responsible for the observed patterns; however, no information is provided as to what these variables are. Adding explanatory variables to the model could help with interpretation, but this increases complexity and does not always improve the model. In general, one must keep in mind that when using advanced techniques such as DFA, extra care may be needed when interpreting results (Hasson and Heffernan, 2011).

3.4 Conclusions
This chapter is divided in two parts: the presentation of the literature about indicator relationships and aggregated performance, and the explanation of mathematical tools to aggregate indicators.

From the literature, the main conclusions we can draw are the following: the works carrying out indicator aggregation do not use their results to assess global performance, and papers usually aggregate performance using tools that incorporate human judgments. Furthermore, we have seen a tendency in papers to combine different mathematical tools to reach their objectives.

These conclusions demonstrate a clear gap in the literature, which this dissertation seeks to fill by providing an integrated warehouse performance measurement through the aggregation of indicators.

To achieve our goal, it is necessary to investigate the statistical tools that are used for dimension reduction. They are introduced in a summarized manner, since the objective of the explanations is to allow the reader to recognize the characteristics of each technique and the requirements to apply it. In Chapter 4, these methods are evaluated against the requirements of the proposed methodology, determining the ones that can be used for our studied problem.

The knowledge basis constructed in this chapter is used to develop our proposed methodology, which is presented in the next chapter.


Chapter 4
Methodology to define an Integrated Warehouse Performance

Science is not about making predictions or performing


experiments. Science is about explaining.
Bill Gaede

Contents

4.1 Introduction - General methodology presentation . . . . . . . 58
4.2 Conceptualization - The analytical model of performance indicators . . . . . . . 60
4.3 Modeling . . . . . . . 62
4.3.1 Data acquisition . . . . . . . 62
4.3.2 Theoretical model of indicator relationships . . . . . . . 63
4.3.3 Statistical tools application . . . . . . . 64
4.4 Model Solution . . . . . . . 67
4.4.1 Integrated Performance proposition . . . . . . . 67
4.4.2 Scale definition . . . . . . . 69
4.5 Implementation and Update . . . . . . . 69
4.5.1 Integrated model implementation . . . . . . . 69
4.5.2 Model update . . . . . . . 70
4.6 Methodology implementation on this thesis . . . . . . . 71
4.7 Conclusions . . . . . . . 72

Abstract
This chapter presents the methodology to assess an integrated warehouse performance. The methodology is divided into four main phases (conceptualization, modeling, model solving, implementation and update), which are introduced in this chapter.

4.1 Introduction - General methodology presentation


According to Suwignjo et al. (2000), with the large number of multidimensional factors affecting performance, it is impossible to manage a separate scale system for each dimension of measurement. Integrating those multidimensional effects into a single unit can therefore facilitate the trade-off between different measures.

The proposed methodology, presented throughout this chapter, provides an integrated performance model to overcome issues related to the interpretation of the large quantity of indicators measured in warehouses for performance management. Initially, the methodology is introduced from a general point of view; it is then detailed in depth throughout the sections.

In Chapter 1, the dissertation's methodology is classified as quantitative modeling research. For this kind of research, Mitroff et al. (1974) propose a work development in four phases: conceptualization, modeling, model solution and implementation. We use the same four phases to present our methodology (Figure 4.1).

It is apparent in Figure 4.1 that the proposed methodology is dynamic. The implementation and update phase can be seen, at first glance, as the end of the methodology application. However, if a situation changes in the warehouse, the proposed model needs to be reviewed, and the methodology starts again from the conceptualization phase, closing the loop.

Figure 4.1: The proposed methodology phases with their main steps (Conceptualization: analytical model of performance indicators; Modeling: theoretical model of indicator relationships, model for indicators' aggregation; Model Solving: determination of an integrated performance model, scale definition; Implementation & Update: integrated model implementation, model update). Source: Adapted from Mitroff et al. (1974).

Figure 4.1 shows the main outcomes of each phase on the way to achieving and implementing the integrated performance model. The Conceptualization phase results in an analytical model of the warehouse performance indicators. Once the analytical model is defined, the Modeling phase defines the relationships among indicators and how they can be aggregated using different mathematical tools. Then, the Model Solving phase analyzes the results obtained in the previous phase, proposing an integrated performance model with a scale to evaluate and interpret the results. The last phase, Implementation and Update, describes the integrated model implementation in a company as well as how to update it.

To carry out the proposed methodology, Figure 4.2 shows the process flow, detailing the steps performed in each methodology phase (the dotted rectangles of Figure 4.2). Each step is explained in the next sections.

In summary, the first phase, Conceptualization, comprises the determination of the boundaries of the methodology application, i.e. in which warehouse areas the performance will be measured and the indicators used for that. This means that, to perform the methodology, it is necessary to define the areas where the performance will be assessed and the indicator set used by the company to assess it. These indicators need to be known in terms of their equations, since the analytical model is formed basically by this group of equations.

Figure 4.2: Methodology steps flow (Conceptualization: definition of the scope of performance measurement, definition of the indicator set, determination of indicator and data equations, analytical model of performance indicators; Modeling: indicator time series acquisition, assessment of the Jacobian matrix and statistical tools application, yielding the theoretical model of indicator relationships and the model for indicators' aggregation; Model Solving: analysis of the mathematical results, determination of the integrated performance model, scale definition; Implementation and Update).

Once the analytical model is developed (the last step of conceptualization in Figure 4.2), it is necessary to acquire data from the indicators. This data is the time series of indicator results, which are measured periodically in the enterprise. From this step, two analyses can be carried out in parallel: the theoretical determination of indicator relationships and the use of historical data to perform the indicators' aggregation. The theoretical model is defined from the Jacobian matrix measurement, which is detailed in Section 4.3.2, and the indicators' aggregation is achieved with dimension-reduction statistical tools (Section 4.3.3).

From the results of the mathematical tools application, a quantitative model of indicator relationships is constructed. It is denominated the aggregated performance model, and it provides the global warehouse performance as its outcome. Because the performance values obtained from these aggregated indicators cannot be interpreted straightforwardly, it is necessary to create a scale for them. Finally, the implementation step demonstrates the model utilization for periodic warehouse management, and the update step defines when the methodology needs to be revised.

The following sections describe how to perform each step in detail.

4.2 Conceptualization - The analytical model of performance indicators

The conceptualization phase involves the definition of the performance measurement system. For the purposes of this methodology, it is necessary to perform three steps: the definition of the scope of measurement, the definition of a metric set, and the determination of the indicator equations, which together create the analytical model (see Figure 4.2).

It is really difficult to determine an evaluation model for distinct objectives, since each enterprise (and, consequently, its warehouse) has specificities linked to different processes and activities. Moreover, performance measurement has become a strategic tool for corporations to observe their weaknesses and act to minimize them; so, it needs to be designed and evaluated in a consistent way to be effectively managed (Rodriguez et al., 2009). Warehouse objectives are usually defined to improve the performance of the whole supply chain, and this makes the choice of an evaluation model crucial in a networked organization. In fact, Fabbe-Costes (2002) states that all actors should create value for chain partners; however, this is sometimes difficult to achieve because the actors use different performance evaluation systems that are almost impossible to reconcile.

Besides covering the different warehouse objectives and processes, performance measurement systems should also satisfy conditions such as (Manikas and Terry, 2010): inclusiveness (measurement of all related aspects), universality (allowing comparison under various operating conditions), measurability (the data required are measurable) and consistency (measures consistent with organization goals).

There are methodologies in the literature for defining a set of performance indicators based on strategic goals (Fernandes, 2006). Since the literature on this subject is vast, and the amount of indicators utilized in the process must be carefully determined, the definition of the indicators forming the warehouse metric system is out of this dissertation's scope. In order to keep a large spectrum of applications for our methodology, we consider that the indicators utilized for warehouse management are derived from the enterprise's strategy, which is sufficient to perform the methodology.

Regarding the steps of the conceptualization phase, we describe the approach as follows:

Step 1: The scope of measurement is related to the warehouse activities/areas where the performance will be measured. Kiefer and Novack (1999) state that the complexity of measurement systems increases as the number of activities performed by the warehouse increases. The proposed methodology considers that all warehouse activities can be included in the measurement scope. However, the manager may have no interest in evaluating some specific activities in an aggregated manner, which denotes the importance of the manager's participation in the definition of the measurement scope.

Step 2: After the definition of the boundaries of the methodology application, it is necessary to determine the indicator set used for performance measurement. According to Melnyk et al. (2004), the term metric is often used to refer to one of three different constructs: (i) the individual metric; (ii) the metric set; and (iii) the overall performance measurement system. For the methodology application, the metric set is the group of indicators already used by the warehouse to manage its activities.

Steps 3 & 4: Even if some indicators from higher levels are generally related to the ones of lower levels (Böhm et al., 2007), in this thesis we aggregate only the operational metric set (i.e. the set of individual operational indicators). As our objective is to find a good statistical representation of indicator relationships based on internal warehouse data, it is important to consider indicators mostly influenced by other internal indicators. The same does not hold for tactical and strategic indicators, which are usually related to financial aspects, market tendencies and customer demand.

There is no limit on the number of performance indicators considered for aggregation, but some constraints must be satisfied: the indicators need to be measured in a quantitative way, i.e. there must be equations to describe them; and historical measurement data is necessary for an indicator to be considered in the methodology, since this data will be used to model the indicators' aggregation.

Example: Although this methodology is generic, it is better explained through an example. Let us consider that a warehouse measures six indicators I1, I2, I3, I4, I5, I6, which are defined quantitatively by the equations:

I1 = A + B;  I2 = C + D;  I3 = E - F;  I4 = G/A;  I5 = C/B;  I6 = J/H        (4.1)

where A, B, C, D, E, F, G, H, J are quantitative data measured periodically in the warehouse.

These quantitative indicators described in the form of equations represent one part of the analytical model. The second part comes from data equations. It is necessary to define data equations because collected data are sometimes calculated from other subdata, and this information will be necessary to find the relationships between indicators theoretically. In our example, we consider that the data J is calculated according to Equation 4.2, and all other data have no relation with each other.

J = A + G        (4.2)

Thus, the final analytical model for this example comprises Equations 4.1 and 4.2.

The next section presents the modeling phase, which includes data acquisition, the definition of indicator relationships and their aggregation.



4.3 Modeling
4.3.1 Data acquisition
The required data to apply the proposed methodology are time series of indicators, i.e. indicator values measured periodically by the warehouse. We define time series data as a moderate number of measurements made on a single individual in a repeated context (du Toit and Browne, 2007). Initially, the number of measurements collected for the same indicator (i.e. the dataset size) should be as long as possible and available in the company. For instance, for the example described in Section 4.2, the indicator time series are measured monthly in the warehouse as shown in Table 4.1, with the unit of each indicator given in parentheses.

Table 4.1: Examples of indicator time series

Month   I1 (min)   I2 (%)   I3 ($/order)   I4 (order/h)   I5 (%)   I6 ($/month)
1       10         98       10             5              2        100
2       12         97       9              6              1.5      120
3       14         99       9              8              1        130
4       12         97       11             6              2        110
5       15         98       12             7              1.5      100
6       13         99       10             8              2        120
...     ...        ...      ...            ...            ...      ...

Since the indicators are very heterogeneous with regard to their measurement units ($, time, %, etc.), Rodriguez et al. (2009) suggest three operations to be applied to the raw data: filtering, homogenization and standardization. Filtering analyzes abnormal behavior in the dataset; homogenization puts all data at the same temporal frequency (necessary when some indicators are measured in weeks and others in months, for instance); standardization provides auto-scaled and dimensionless data. A usual technique to standardize data is shown in Equation 4.3 (Gentle, 2007):

X_new = (X_actual - X_mean) / σ_X        (4.3)

where X_new is the new value of the variable, X_actual is the real variable value, X_mean is the mean of the variable's time series, and σ_X is the standard deviation of the variable's time series.

The final dataset forms a matrix of data that is filtered, homogenized in frequency and standardized, ready for the application of the mathematical techniques for identifying relationships between indicators. The matrix is similar to Table 4.1, with the measurement dates in rows and the indicators in columns.
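As an illustration, the following is a minimal sketch of the standardization step in Python, assuming a hypothetical pandas DataFrame with one row per measurement date and one column per indicator (the values reproduce the first two columns of Table 4.1; the column names are illustrative).

import pandas as pd

raw = pd.DataFrame({
    "I1_min": [10, 12, 14, 12, 15, 13],   # I1 in minutes
    "I2_pct": [98, 97, 99, 97, 98, 99],   # I2 in percent
})

# Standardization (Equation 4.3): subtract the mean, divide by the std dev
standardized = (raw - raw.mean()) / raw.std()
print(standardized.round(2))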

Depending on the statistical tool utilized (Section 4.3.3), the dataset may have to satisfy some additional conditions.

For instance, some statistical tools may require the dataset to follow a normal distribution. To verify this, Newsom (2015) suggests examining the skew and kurtosis of the univariate distributions. Kurtosis is usually a greater concern than skewness, but the literature only recommends special analysis if skewness > 2 and kurtosis > 7. If the univariate distributions are non-normal, the multivariate distribution will also be non-normal. One reason for non-normality is the presence of outliers in the dataset. In this case, the cause of the outliers must be examined, in order to eliminate the ones generated by typing errors, for instance.

Another requirement of some statistical tools is the absence of missing values. As a first approach, it is not recommended to fill in missing values with generated ones (there is, for instance, a technique in which a missing value is replaced by the time series mean). The software packages that implement the statistical methods usually handle this issue automatically, deleting the matrix row containing the missing value.

Once collected and treated, the data are ready to be the inputs of the mathematical techniques described in the next sections.

4.3.2 Theoretical model of indicator relationships


The quantitative relationships among indicators result from different variations and effects of warehouse processes occurring at the same time. We can identify two main forms of relationship: the effect of chained processes and the effect of data shared among indicators.

The effect of chained processes is the impact of one performance indicator on another one corresponding to the next activity in the process chain. For example, if an order is shipped with delay, the delivery indicators (like delivery on time) will probably be influenced by this problem. So, one intervention in the system can cause a chain of delays in the rest of the process. However, the delay can be compensated by a high productivity in the next operations, so that at the end the order is delivered on time. Due to the variability of such cases, this kind of relationship is not considered in the theoretical model construction.

The effect of data shared among indicators considers that two indicators are related through the data they have in common. The main idea of this effect is that if two indicators have one or more data in common, they have some kind of relationship, because once the data change, both indicators will be impacted and change in some way. This circumstance defines a relationship between two indicators. For example, labor productivity and scrap rate use the same data, products processed, in their measurement (Section 5.2 shows the indicator equations). If products processed changes, both indicators will also change. It is important to note that the intensity of the variation is not necessarily the same for the concerned indicators. So, the data shared by indicators just suggest indicator relationships, not their intensity.

Through the analytical model, the data (and subdata) used in all indicator equations can be easily verified. Thus, the analytical model defined in Section 4.2, with indicator and data equations, is used as an input to assess indicator relations based on data sharing. To identify the indicators that share the same data, we calculate the Jacobian matrix.

The Jacobian is a matrix of partial derivatives that is used to determine the output/input relationship (Montgomery and Runger, 2003). In other words, the Jacobian is the matrix of partial derivatives of the n outputs with respect to the m inputs. Each matrix cell gives the sensitivity of an output with respect to the variation of one input, keeping the other inputs constant.

So, for a function f: S ⊂ R^m → R^n, we define ∂f/∂x to be the n × m matrix (Gentle, 2007). To meet the methodology's purpose, we differentiate all functions f (indicator equations) with respect to their data inputs x, as shown in Equation 4.4.

J = ∂f/∂x =

    [ ∂f1/∂x1   ∂f1/∂x2   ...   ∂f1/∂xm ]
    [ ∂f2/∂x1   ∂f2/∂x2   ...   ∂f2/∂xm ]
    [    .          .      .        .   ]        (4.4)
    [ ∂fn/∂x1   ∂fn/∂x2   ...   ∂fn/∂xm ]

where the rows correspond to the outputs (indicators) and the columns to the inputs (data).

Equation 4.4 results in an n × m matrix, where n is the number of outputs (indicators) and m the number of inputs (independent data). The independent data refers to the non-combined data used to calculate the indicators. In the example defined in Section 4.2, the independent data inputs used to assess the indicators (outputs) are A, B, C, D, E, F, G, H. The data J is out of this list because it is calculated as the sum of A and G (Equation 4.2), making it an aggregate data. For this example, the final Jacobian matrix is (after calculating the partial derivatives of the indicator equations I):

         A         B        C     D    E    F      G          H
I1 [     1         1        0     0    0    0      0          0       ]
I2 [     0         0        1     1    0    0      0          0       ]
I3 [     0         0        0     0    1   -1      0          0       ]
I4 [  -G/A^2       0        0     0    0    0     1/A         0       ]        (4.5)
I5 [     0      -C/B^2     1/B    0    0    0      0          0       ]
I6 [    1/H        0        0     0    0    0     1/H    -(A+G)/H^2   ]

In the Jacobian matrix detailed in Equation 4.5, a non-zero cell signifies that a change in the data (input) will impact the corresponding indicator (output). Therefore, it is possible to identify the indicators that share data by analyzing each matrix column. For example, column A has three non-zero cells, a11, a41 and a61, representing that this data influences indicators 1, 4 and 6, respectively. Since these three indicators share the data A, we conclude that indicators 1, 4 and 6 have some kind of relationship.
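To illustrate this step, the following is a minimal sketch of deriving the example's Jacobian symbolically in Python with sympy. (In this thesis the Jacobian is computed with the CADES software, as described in Section 4.6; sympy is used here only as an illustrative stand-in.)

import sympy as sp

A, B, C, D, E, F, G, H = sp.symbols("A B C D E F G H")
J_data = A + G                        # data equation (4.2)
indicators = sp.Matrix([A + B,        # I1
                        C + D,        # I2
                        E - F,        # I3
                        G / A,        # I4
                        C / B,        # I5
                        J_data / H])  # I6 = J/H with J = A + G
inputs = sp.Matrix([A, B, C, D, E, F, G, H])
jacobian = indicators.jacobian(inputs)   # the matrix of Equation 4.5
sp.pprint(jacobian)   # non-zero columns reveal which indicators share data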

Analyzing all data columns of the Jacobian matrix provides insights about indicator relationships in an innovative way, relying neither on human judgments nor on possibly imperfect datasets used in statistical analyses of relationships (since a dataset can contain imperfections such as outliers or bias). Chapter 6 demonstrates in detail, with an application, how to perform the Jacobian matrix analysis.

Following the methodology steps of Figure 4.2, the development of the theoretical analysis of indicator relations occurs in parallel with the model for the indicators' aggregation. The latter is discussed in the next section.

4.3.3 Statistical tools application


The objective of applying statistical tools is to group indicators based on their correlation and data variation. The statistical tools available to achieve this dimension-reduction objective are the ones presented in Chapter 3.

Before the application of dimension-reduction methods, the correlation matrix of the indicators is calculated from the standardized time series data. The objective is twofold: the correlation matrix is used as input to the dimension-reduction tools, and it gives a first impression of the strength of the indicator relationships.

It is important to emphasize that the relationships between the variables are described as causal, meaning that it is explicitly recognized that a change of value in one variable will lead to a change in another variable (Bertrand and Fransoo, 2002). However, it is not possible to identify crossed relationships from correlation results, as this technique carries out pair-wise comparisons between indicators instead of analyzing all indicators at the same time (Rodriguez et al., 2009).
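As a minimal sketch, the correlation matrix is obtained in one line from the standardized dataset; the DataFrame below is a hypothetical stand-in for the warehouse indicator time series.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(24, 6)),
                    columns=[f"I{k}" for k in range(1, 7)])
corr = data.corr()       # pair-wise Pearson's r between indicators
print(corr.round(2))     # |r| below roughly 0.3 suggests a weak relationship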

After obtaining the correlation matrix, statistical tools are applied to reduce the data dimensionality, creating factors/components/trends (the denomination depends on the method used) which represent groups of indicators. As presented in Chapter 3, each statistical tool has some requirements for its utilization. To match the data characteristics with the requirements of the mathematical tools, Table 4.2 is built. It is divided in two parts: the right side lists the requirements demanded by each method, in terms of the data and sample characteristics enumerated on the left side. The objective of Table 4.2 is to evaluate which tools are suitable for the methodology, as shown in the last column of the right side.

Table 4.2: Mathematical tools evaluation

Data characteristics:
1. Data is a time series            6. Small sample size
2. There are no missing values      7. Data is categorical
3. Data is non-stationary           8. Standardized data
4. Normality of data                9. Independence of observations
5. Big sample size (> 150)

Math tool   Requisites to be applied   Could be used in the proposed methodology?
FA          Items 2, 4, 5, 7, 8, 9     NO
SEM         Items 4, 5, 7, 9           NO
CCA         Items 4, 5                 NO
PCA         Items 2, 8                 YES
DFA         Items 1, 3, 6              YES

The mathematical tools analyzed in Table 4.2 are Factor Analysis (FA), Structural

Equation Modeling (SEM), Principal Component Analysis (PCA), Canonical Correlation

Analysis (CCA) and Dynamic Factor Analysis (DFA).

FA and SEM impose many requirements on the data, but the main issues impeding their utilization are the big sample size (Item 5) and the independence of observations (Item 9). Regarding the sample size, Rodriguez-Rodriguez et al. (2010) affirm that indicators are dynamic, as is the PMS (Performance Measurement System), and enterprises usually do not have large stores of data: they may keep financial registers for many years, but this is not a common practice for other PMS measures. The SEM method has a measurement model that is less restrictive regarding the sample size. Fugate et al. (2010) state that PLS (Partial Least Squares Regression, the SEM measurement model) is often applied for analyzing constructs because it accepts small sample sizes and imposes no data distribution requirements, such as normality. However, our methodology proposes the use of time series as model inputs, and this kind of data cannot satisfy the condition of independence of observations.

Moreover, to apply the Factor Analysis and Structural Equation Modeling methods, it is also necessary to specify an initial model, i.e. to establish which are the observed variables and the error terms, as well as their possible relationships (Rodriguez et al., 2009).

Therefore, FA and SEM are not initially considered as options for our methodology. Even though there exist mathematical adjustments to the FA and SEM models that overcome the independence-of-observations problem and allow their utilization with time series data (see, for instance, Choo (2004); du Toit and Browne (2007); Wang and Fan (2011)), these applications are left for future research.

In the case of CCA, the variables are initially classified into specific groups, and then the correlations between variables and groups are calculated in order to obtain the most highly correlated linear combinations of variables (Westfall, 2007). Even if it is possible to roughly state that the CCA results are quite similar to those of PCA and DFA (i.e. grouping variables according to their similarities), we do not include CCA as an option for our methodology, since there is no a priori indication of which variables (in our case, indicators) should be classified into the dependent and independent groups. Also, as big samples are required to apply CCA, its utilization in our methodology is limited for the same reasons presented for FA and SEM.

The techniques that can be used in our methodology are PCA and DFA. Both of them can be used to determine indicator groups, even though the techniques display some differences. PCA is optimal for finding linear combinations that represent the original set of variables as well as possible, capturing the maximum amount of variance from the original variables (Westfall, 2007). DFA, besides being particularly designed for short and non-stationary time series, models the time series (variables) in terms of a trend, seasonal effects, a cycle, explanatory variables and noise (Zuur et al., 2003a), and the variables with similarities in these aspects are grouped together (in so-called common trends).

Getting back to the generic example started in Section 4.2, a possible result of the dimension-reduction tool application (PCA or DFA) is illustrated in Figure 4.3. Taking the indicators defined in Section 4.2, the dimension-reduction tool separates them according to their correlations.

Figure 4.3: Hypothetical PCA or DFA result: indicators grouped in components/trends.

Figure 4.3 shows a hypothetical result in which the indicators are aggregated into two different trends or components (the name depends on the tool applied). The coefficient of each indicator (α, β, γ, θ, η, µ) represents the relative weight between the original variable (I1, ..., I6) and the component/trend. The results of Figure 4.3 can be described in the form of Equation 4.6.

Component/Trend_1 = αI1 + βI4 - γI6
Component/Trend_2 = θI3 - ηI5 + µI2        (4.6)
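The following is a minimal sketch of this PCA step in Python with scikit-learn, on a synthetic standardized dataset of six indicators (the data and the names I1..I6 are illustrative, not the thesis dataset). The rows of pca.components_ give the loadings, i.e. the coefficients α, β, ... of Equation 4.6.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
latent = rng.normal(size=(24, 2))                  # two hidden drivers
data = pd.DataFrame(latent @ rng.normal(size=(2, 6))
                    + 0.3 * rng.normal(size=(24, 6)),
                    columns=[f"I{k}" for k in range(1, 7)])
std = (data - data.mean()) / data.std()            # Equation 4.3

pca = PCA(n_components=2)
scores = pca.fit_transform(std)                    # component values per period
loadings = pd.DataFrame(pca.components_, columns=std.columns,
                        index=["Comp1", "Comp2"])
print(loadings.round(2))                 # relative weight of each indicator
print(pca.explained_variance_ratio_)     # variance captured per component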

The analysis of this result is discussed in the next section.

4.4 Model Solution


4.4.1 Integrated Performance proposition
This step of the proposed methodology comprises the analysis of all the mathematical results, yielding an integrated performance model at the end. The results analyzed are the Jacobian matrix, the correlation matrix and the PCA or DFA outcome.

Initially, the main objective is to define which indicators should be part of the aggregated model and which ones should be discarded. The theoretical model of indicator relationships (the Jacobian matrix) and the correlation matrix can be evaluated together to identify indicators that should be excluded because they will not fit the dimension-reduction statistical tool well. For instance, if the Jacobian matrix demonstrates that an indicator does not share data with any other and, in the correlation matrix, its correlation coefficients r (Pearson's r) are low (e.g. values lower than 0.3), we can conclude that the indicator should be excluded from the model. After the exclusion of an indicator, it is suggested to perform the PCA or DFA once again on the new indicator group.

In our theoretical example, the mathematical results analyzed are the Jacobian matrix (Equation 4.5) and the PCA/DFA result (Equation 4.6). From the Jacobian matrix, we conclude that indicator I3 has no relation with the indicator group, since it does not share any data with the other indicators. Moreover, let us consider that the correlation matrix presents, as the maximum correlation coefficient (Pearson's r) for indicator I3, r = 0.3. Therefore, both results (Jacobian and correlation) recommend discarding this indicator, because it does not belong to the group of indicators that relate among themselves.

The exclusion of I3 requires a new application of the PCA method (Figure 4.3 shows the first result), which hypothetically yields the following new outcome (Figure 4.4 and Equation 4.7). The exclusion of I3 has improved the result, since in the new outcome all the indicators are explained by just one component/trend. Equation 4.7 shows the final result. It is important to note that the indicators' coefficients have also changed, due to the modification of the dataset with the exclusion of I3.

Figure 4.4: Hypothetical result after exclusion of indicator I3.

Component/Trend_1 = βI1 + µI2 - γI4 + ρI5 - αI6        (4.7)

Equation 4.7 represents the integrated performance model for the example carried out throughout this chapter. However, if the number of indicators is high, it is usually difficult to aggregate all measures into just one component. Generalizing the result of Equation 4.7, a generic model representing several components can be described by Equation 4.8, from Manly (2004):

C_i = Σ_{j=1}^{m} b_ij X_j        ∀ i = 1, ..., n        (4.8)

where:
C_i = principal components
b_11 ... b_nm ∈ ℝ = relative weight of each variable (X_1, ..., X_m) in the corresponding component (C_1, ..., C_n)
X_1 ... X_m = performance indicators.

The integrated performance model presented in Equation 4.8 can be implemented in the company and used for daily management. For that, each component needs a scale to allow the interpretation of results, since the inputs are normalized indicators, producing components without units. Analyzing the global performance of a warehouse using different components without physical units can be difficult for managers because of their subjectivity.

Therefore, we propose an aggregated expression for the component equations. There are several methods to achieve this global expression. Lohman et al. (2004) state that aggregation can be done directly if the underlying metrics are expressed in the same units of measure, which can be achieved after a normalization, for example. Clivillé et al. (2007) cite examples of methods such as the weighted mean, which is the most common aggregation operator; the weighted arithmetic mean; and the Choquet integral aggregation operator, which generalizes the weighted mean by taking mutual interactions between criteria into account.

Taking the components of Equation 4.8 and aggregating them using the weighted mean, the result is shown in Equation 4.9:

GP = a_1 × C_1 + a_2 × C_2 + a_3 × C_3 + ... + a_n × C_n        (4.9)

where:
GP = global performance (integrated indicator)
a_1 ... a_n ∈ ℝ = component weights
C_1 ... C_n = principal components, which group X_1 up to X_m in linear combinations.
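As a minimal numeric sketch of Equations 4.8 and 4.9, assume two components with hypothetical loadings b_ij, hypothetical weights a_i and one period of standardized indicator values:

import numpy as np

b = np.array([[0.5, 0.4, -0.3, 0.0, 0.0, 0.0],    # loadings of component C1
              [0.0, 0.0, 0.0, 0.6, -0.4, 0.5]])   # loadings of component C2
a = np.array([0.6, 0.4])                           # component weights
x = np.array([0.8, -0.2, 0.1, 1.1, -0.5, 0.3])     # standardized indicators

components = b @ x      # Equation 4.8: C_i = sum_j b_ij * X_j
gp = a @ components     # Equation 4.9: weighted mean of the components
print(components, gp)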

The determination of the component weights depends on several factors. First, it depends on the aggregation formula: for example, the criteria in a weighted mean and in a weighted geometric mean would not be the same (Clivillé et al., 2007). Second, the relative importance of the indicators should be considered. Each warehouse will have different results of indicator aggregation, which requires an analysis of the indicators grouped in each component to define their weights (some indicators could be more important than others). Lastly, the weights depend on the warehouse strategy: each warehouse has different objectives and can rank the component weights according to its priorities.

The company can choose to stop the solution method at the component level (Equation 4.8) or to build the integrated indicator (Equation 4.9). In both situations, the results of the component expressions or of the integrated indicator must be interpretable. One way to achieve this is to create a scale, which is presented in the next section.

4.4.2 Scale definition


A scale determines the maximum and minimum values reached by a variable. It can be used to develop interview instruments in an organized way, verifying hypotheses from the data. For example, Chen (2008) uses a six-item scale to measure the operational performance of a manufacturing plant under different levels of lean manufacturing practice. In our work, however, the scale is used as a reference point to evaluate the results of given variables, namely the principal components (Equation 4.8) and the integrated indicator (Equation 4.9).

Jung (2013) states that the four main types of scale are: nominal scales (categorical: attributes are only named), ordinal scales (rankings: attributes can be ordered), interval scales (equal distances corresponding to equal quantities of the attribute), and ratio scales (equal distances corresponding to equal quantities of the attribute, where the value of zero corresponds to none of the attribute). The scale developed for our purpose is the interval one, as there is no fixed zero and ratios cannot be expressed.

Regarding the different measurement units of the indicators (time, %, etc.), Rodriguez et al. (2009) propose the auto-scaled technique, which combines centering and standardization. The scale is built for each variable independently, using its mean and standard deviation to define the lower and upper scale limits. One potential problem of the auto-scaled technique is that it does not allow comparison among different variables, because each of them has a distinct scale.

The work of Lohman et al. (2004) proposes a normalization method to create the same scale range for different indicators. The authors determine a linear 0-10 scale. Two steps need to be taken to normalize the metric scores (Lohman et al., 2004): (1) the definition of the metric score range that corresponds to the 0-10 scale; (2) the normalization of the scores to the 0-10 scale, since the values 0-10 should always have the same meaning, regardless of the metric observed.
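A minimal sketch of this kind of linear 0-10 normalization, assuming hypothetical worst/best scores defined by the manager for a given metric:

def normalize(score: float, worst: float, best: float) -> float:
    """Linearly map a metric score onto a 0-10 scale (0 = worst, 10 = best)."""
    scaled = 10.0 * (score - worst) / (best - worst)
    return max(0.0, min(10.0, scaled))   # clamp scores outside the defined range

# e.g. an on-time rate of 97% on a 90%-100% range maps to 7.0
print(normalize(97.0, worst=90.0, best=100.0))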

For the component expressions (Equation 4.8), it is not possible to use this kind of procedure, since indicators may have opposite objectives (e.g. productivity aims for high values whereas time aims for low ones), complicating the definition of targets.

One possible solution is the use of optimization methods to define the best warehouse performance. This facilitates the inclusion of different indicator goals in the same model, as well as of all the warehouse operation constraints.

The proposed scale using optimization seems a good option to evaluate the integrated indicator against an objective/goal. The development of this scale is presented in Chapter 7.

4.5 Implementation and Update


4.5.1 Integrated model implementation
The implementation consists of establishing the equations that should be maintained and refreshed for periodic management, and of explaining how the integrated results should be interpreted.

The expressions used by the manager for warehouse performance measurement are Equations 4.8 and 4.9. The other equations of the analytical model are no longer used once the aggregated model is obtained. It is important to note that the coefficients a_i and b_ij are real constants in Equations 4.8 and 4.9.

The objective of the integrated model is to be used like any other indicator system, being measured periodically and analyzed against a given objective. To attain this, the integrated model should be refreshed as follows:

1. Calculate the indicator values in their original units of measure;
2. Standardize these indicator values according to Equation 4.3;
3. Replace these standardized indicators in the component Equations 4.8, obtaining the component values;
4. Use these component results in Equation 4.9 to obtain the integrated indicator value.

These steps can easily be automated in a spreadsheet, or in a short script such as the sketch below, to facilitate the manager's work.
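A minimal sketch wrapping steps 1-4 in a reusable function, assuming that the historical means and standard deviations of the indicators, as well as the constants b_ij and a_i, were stored when the model was built (all values here are illustrative):

import numpy as np

MEAN = np.array([12.7, 98.0, 10.2, 6.7, 1.7, 113.0])  # historical means
STD = np.array([1.8, 0.9, 1.1, 1.2, 0.4, 11.5])       # historical std devs
B = np.array([[0.5, 0.4, -0.3, 0.0, 0.0, 0.0],        # loadings b_ij
              [0.0, 0.0, 0.0, 0.6, -0.4, 0.5]])
A = np.array([0.6, 0.4])                               # weights a_i

def refresh(raw_indicators: np.ndarray) -> float:
    """Return the integrated indicator for one period of raw measurements."""
    z = (raw_indicators - MEAN) / STD   # step 2 (Equation 4.3)
    components = B @ z                  # step 3 (Equation 4.8)
    return float(A @ components)        # step 4 (Equation 4.9)

# step 1: the period's indicator values in their original units of measure
print(refresh(np.array([13.0, 99.0, 10.0, 8.0, 2.0, 120.0])))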

This procedure should be performed periodically (preferably with the same periodicity as the performance indicator measurements), allowing the evolution of the integrated indicator to be followed over time. As all operational performance indicators are also measured, it is possible to identify significant changes in indicators that alter the aggregated one. Moreover, the developed scale provides the warehouse performance limits; if the manager evaluates the integrated indicator periodically, he is aware of the warehouse performance progress.

Before the implementation, it is important to confirm with the managers that the results of the aggregated model and the scale fit the warehouse reality. If this is confirmed, the analytical model and the aggregated performance expressions are validated against reality.

4.5.2 Model update


The aggregated model cannot be considered a static entity: it must be maintained and updated to remain relevant and useful for the organization (Lohman et al., 2004). However, some authors note that the literature has not yet satisfactorily addressed the issue of how performance measures should evolve over time (i.e. be flexible) in order to remain relevant with the constant evolution of organizations (Kennerley and Neely, 2002; Neely, 2005).

Regarding this situation, our methodology proposes a periodic reevaluation of the integrated performance model. This reevaluation mainly encompasses the selection of the metrics with their equations, the application of the statistical tools (PCA, DFA) on a new dataset to compare the results, and the revision of the component weights in the integrated indicator equation.

The aggregated performance model, which emerges from the proposed methodology, has a life cycle and is only valid as long as the internal and external environment remains stable. For example, new business areas or new challenges require a revision of the model, and a periodic revision of the model can help with this identification. It is important to recognize these changes as soon as possible, in order to redefine the quantitative basis of the model. This practice is also adopted in other PMS proposed in the literature (e.g. Suwignjo et al. (2000)). The model redefinition encompasses the comparison of the desired performance indicators with the existing measures, to identify which current measures are kept, which existing measures are no longer relevant, and which gaps exist so that new measures are needed (Lohman et al., 2004).

4.6 Methodology implementation on this thesis


The methodology presented in this chapter explains the steps to be carried out to attain an integrated performance model. In order to provide a general methodology, different alternative mathematical tools/methods are presented without determining, in some cases, a specific one to be applied. That is the case for: the definition of the indicator equations (each warehouse should define its own indicator set); the dimension-reduction statistical tool (PCA or DFA); the final aggregated model (using just the component equations or also the aggregated indicator); and the criterion for the scale optimization (the constraints depend on the warehouse situation). Therefore, the methodology provides a customized integrated model for each warehouse, according to its choices.

Before starting the implementation of the methodology in the next chapters, we present an overview of the methodology application in this thesis. Figure 4.5 depicts the activities performed (numbered from 1 to 21) to achieve the aggregated model for the studied warehouse.

Figure 4.5 shows the phases of the methodology in green dotted rectangles (Conceptualization, Modeling, Model Solving, Implementation and Update). The blue rectangles highlight the main outcomes of each phase, with their activities written in black.

The implementation of this methodology is performed in a theoretical manner, meaning that we define a standard warehouse (activity 1) as the object of the study, and the performance indicators used to manage this fictitious warehouse are taken from the literature. The equations for these indicators result from the interpretation of the literature, based on the metrics' definitions (activity 2). The final group of equations forms the analytical model, which is coupled with the CADES® software (Component Architecture for the Design of Engineering Systems). We use this software to analyze the analytical model, identifying the independent inputs and outputs (activity 3) and calculating the Jacobian matrix (activity 6).

To apply the statistical tool, a dataset is necessary. We generate data to calculate the indicators periodically, reproducing the warehouse dynamics (activity 4). This data is used for the next steps: the theoretical model of indicator relations and the aggregated model. For the first one, just a sample is used to calculate the Jacobian matrix automatically using CADES® (activity 5). For the second one, the dataset created is standardized (activity 9) to be used as input to the dimension-reduction statistical tool.

Regarding the application of statistical tools, the correlation matrix is calculated (activity 8) and the PCA method is applied (activity 10). Both results, together with the indicator relationships matrix (activity 7), are analyzed to provide insights about the behavior of the indicators. Both PCA and DFA have been tested to aggregate indicators. For DFA, the results do not fit the objective of this thesis; further study needs to be developed, and it is proposed as future research (for details about the first results obtained, see Appendix F).

Figure 4.5: Methodology application in this dissertation (activities 1-21: Conceptualization - 1. definition of a standard warehouse, 2. determination of indicator and data equations based on the literature, 3. coupling of the analytical model with the CADES software to identify inputs and outputs; Modeling - 4. data generation for the predefined indicators, 5. determination of input values, 6. Jacobian matrix calculation using CADES, 7. matrix of indicators' relationships based on the data in common, 8. assessment of the indicators' correlation matrix, 9. indicator standardization, 10. statistical tools application (PCA); Model Solving - analysis of the partial results (7, 8, 10): 11. indicators eliminated, 12. factors retained, 13. set of related indicators; scale definition: 14. constraints for optimization, 15. factor weights, 16. objective function, 17. adjustment of the analytical model for optimization, 18. input/output limits, 19. optimization tool choice, 20. integrated indicator scale; Implementation and Update - 21. explanation of how to implement and interpret the results obtained, as well as of when to update the model).

In the case of PCA, good results are attained, and it is the method chosen to determine the indicator groups. Moreover, it is simple to apply and interpret, which are interesting characteristics for industrial applications.

The partial results (7, 8 and 10) are analyzed to define the integrated performance model with a global indicator (activities 11, 12, 13). The integrated indicator scale is defined using an optimization model (activities 14 to 20). The application finishes with an explanation of the model utilization for periodic management, and of when its update is necessary (activity 21).

4.7 Conclusions
This chapter presents a methodology to define an integrated warehouse performance model. It consists of several steps that analyze the indicator relationships from different points of view, using distinct mathematical tools to group these indicators according to their correlation, and proposing an expression that aggregates them into a unique measure.

The proposed methodology encompasses different disciplines to achieve the aggregated model: the analytical model and the Jacobian matrix measurement to analyze indicator relationships; the statistical tools to propose indicator groups; and the optimization method to develop the scale for the integrated indicator. This multidisciplinary approach permits the construction of a good model to manage warehouse performance.

The methodology is general; it offers several alternatives from which one can choose when developing the integrated model. Each warehouse can present different objectives, processes and particularities, and the fact that the tools are not fully specified allows the adaptation of the methodology to specific situations.

The next chapters detail the methodology implementation.


Chapter 5
Conceptualization

The question is not what you look at, but what you see.
Henry David Thoreau

Contents

5.1 Introduction - the Standard Warehouse . . . . . . . 76
5.1.1 Warehouse Layout . . . . . . . 76
5.1.2 Measurement Units of Performance Indicators . . . . . . . 77
5.2 Analytical model of Indicator Equations . . . . . . . 77
5.2.1 Definition of the metric set . . . . . . . 77
5.2.2 Transformation of Indicator Definitions into Equations . . . . . . . 80
5.2.3 Notation to describe Indicator Equations . . . . . . . 81
5.2.4 Time Indicators . . . . . . . 81
5.2.5 Productivity Indicators . . . . . . . 84
5.2.6 Cost Indicators . . . . . . . 88
5.2.7 Quality Indicators . . . . . . . 89
5.3 Complete Analytical Model of Performance Indicators and Data . . . . . . . 92
5.3.1 The Construction of Data Equations . . . . . . . 92
5.3.2 Analytical model assumptions . . . . . . . 94
5.4 Conclusions . . . . . . . 95

Abstract
This chapter performs the Conceptualization phase of the methodology. It begins by presenting the studied standard warehouse, with its characteristics and processes (i.e., the scope of the work). Thereafter, the metric system used to measure the warehouse performance is defined, based on the literature review. To determine the first part of the analytical model, formed by the indicator equations, the indicator definitions are interpreted in detail. Finally, the data of all indicators are expanded into new equations; the complete analytical model is then constituted of the indicator and data equations.

5.1 Introduction - the Standard Warehouse


This chapter is the starting point for implementing the methodology presented in Chapter 4, establishing the basis of an integrated performance model. All the steps needed to develop an analytical model of performance indicators are carried out: the definition of the performance measurement scope, the determination of the indicator set, and the formulation of the indicator and data equations.

Warehouses can have different configurations according to product specifications, customer requirements, the service level offered, etc. The scope of this implementation is a hypothetical warehouse, named the standard warehouse (shown in Figure 5.1). The denomination standard is due to the processes carried out in it. We consider the main operational activities performed by the majority of warehouses (Section 2.3.4 presents their definitions): receiving, storing, internal replenishment, order picking, shipping and delivery. Thus, the performance measurement is carried out on the warehouse shop floor, also including the delivery activity.

Figure 5.1 details not only the boundaries of the activities carried out in the standard warehouse but also its layout and the measurement unit limits of the performance indicators, both explained in Sections 5.1.1 and 5.1.2.

Figure 5.1: The standard warehouse.

5.1.1 Warehouse Layout


The layout of the standard warehouse is shown in the middle part of Figure 5.1, with the following regions: receiving docks for truck assignment, unloading area, inventory area, packing and shipping area, and delivery docks.

Since the majority of warehouses have intensive handling activities in order picking (De Koster et al., 2007), this warehouse follows a manual system for storing and picking products. In the manual system, the order picker/forklift driver has to store products in a proper location (in the case of the storage activity) or locate and pick the requested products from the racks (in the case of order picking).

We consider that this facility supplies the market with a make-to-stock production. In a make-to-stock operation, a customer order launches a process in the picking area, which goes on up to the delivery of the product to the client.

The inventory area of Figure 5.1 comprises the reserve storage area and the forward picking area. The reserve area contains the bulk stock and is located in the upper rack levels. The forward picking area is situated in the same racks as the bulk stock, but in the lower levels, to facilitate the order picking process. This configuration thus implies regular internal replenishments from the reserve to the forward picking area.

The inbound area of the warehouse encompasses the activities from the receiving of trucks up to the storage of products in the inventory area, and the outbound area comprises the activities from the internal replenishment performed from the inventory area up to the delivery of the product to the client.

5.1.2 Measurement Units of Performance Indicators


The top of Figure 5.1 shows the boundaries of the measurement units used to calculate the warehouse performance indicators in this dissertation. The units are: pallets, order lines and orders.

A customer order, or simply order (as described in this work), is an individual customer request to be fulfilled by the warehouse. It generally includes product specificities and the quantity of each product (Johnson et al., 2010). Order lines are the number of different product types in a customer order; each line designates a unique product or stock keeping unit (SKU) in a certain quantity (De Koster et al., 2007). A pallet refers to the products transported on it, with the quantity and kind of products varying from one pallet to another.

Each measurement unit described at the top of Figure 5.1 is related to the indicator units in one or more warehouse activities. For example, in receiving, storage and internal replenishment, the operations are measured in pallets. Similarly, order lines is the unit for the picking indicators, and order is the standard measure for the delivery indicators.

The exception is the shipping activity, where both order lines and order are used

to measure shipping indicators. Packing and shipping are transition areas, in which some

indicators are related to internal operations (e.g. labor performance in shipping activity)

whereas others are customer-oriented (e.g. orders shipped on time).

As each part of the warehouse uses a specific unit of measure (for instance, pallets or orders), we also define a smaller unit related to a single item, named product or SKU (stock keeping unit). This distinct notation is used in more general indicators, measuring several activities (e.g., Stock out rate, Equation 5.40) or the whole warehouse (e.g., Labor productivity, Equation 5.9).

5.2 Analytical model of Indicator Equations


5.2.1 Denition of the metric set
After the definition of the warehouse characteristics, the metric system used for performance measurement needs to be defined. Keebler and Plank (2009) study the logistics measures most commonly used by managers in the US industry. The results show some preference for indicators such as the outbound freight cost, the inventory count accuracy, the finished goods inventory turn and the order fill. However, the authors conclude that there is no consensus on a group of measures used to assess warehouse performance.

The methodology presented in Chapter 4 determines that the indicators used to develop the integrated model need to come from the strategic goals of the enterprise. As our standard warehouse is theoretical, we consider that its operational metric system comes from the analysis of strategic goals.

Regarding the indicator requisites, the methodology defines that they need to be quantitatively measured, i.e. it is necessary to describe them with equations. Thus, the metric system is defined from the direct indicators resulting from the literature review, which are presented in Tables 2.12 and 2.13 of Chapter 2.

Comparing Tables 2.12 and 2.13 with the warehouse characteristics, we can see that not all warehouse areas contain indicators (e.g. there are no indicators related to replenishment). Moreover, some indicators related to specific activities are missing. That is the case of productivity indicators, for example: Table 2.12 shows productivity indicators related only to the receiving, picking and shipping activities.

Therefore, we make some adjustments to the initial group of indicators taken from the literature, which result in the indicators presented in Table 5.1. To maintain consistency among the warehouse activities, indicators related to internal replenishment, not found in the literature review but nevertheless important for warehouse management, are added to the metric set. Furthermore, quality and productivity indicators for the receiving and storage activities are also considered in the final indicator group.

From the literature, we can infer that cost indicators are not as frequently used for warehouse management as quality or productivity indicators. The cost indicators found in papers are more global and usually related to several activities, showing that costs are analyzed at managerial levels. One reason could be that the operational objectives of the warehouse are usually related to process performance due to the intensive work-handling (e.g. lead time reduction, quality improvements) rather than to cost measures. For these reasons, we have not included new cost indicators related to specific activities as we did for the quality, time and productivity dimensions.


Table 5.1: Final warehouse performance indicators group.

Activity-specific indicators, by dimension:
Time | Rect (receiving), Putt (storing), Rept* (replenishment), Pickt (picking), Shipt (shipping), Delt (delivery)
Quality | Recq* (receiving), Stoq (storing), Invq and StockOutq (inventory), Repq* (replenishment), Pickq (picking), Shipq and OTShipq (shipping), Delq and OTDelq (delivery)
Cost | Invc (inventory), Trc (delivery)
Productivity | Recp (receiving), Stop* (storing), InvUtp and TOp (inventory), Repp* (replenishment), Pickp (picking), Shipp (shipping), Delp* and TrUtp (delivery)

Process-transversal indicators (inbound and outbound processes):
Time | DSt (inbound process), OrdLTt (outbound process)
Quality | OrdFq, PerfOrdq, CustSatq, Scrapq
Cost | OrdProcc, CSc, Labc, Maintc
Productivity | Thp, Labp, WarUtp, EqDp

* The symbol denotes indicators added after the literature review analysis.

Contrary to these indicator additions, some other indicators listed in the metric system of Chapter 2 are not included in Table 5.1, since more general metrics encompass them. That is the case of Queuing time and Outbound space utilization: Queuing time is comprised in the data equations of the time indicators (Appendix A shows this parameter inside the time indicator equations), and Outbound space utilization is considered in the Warehouse utilization equation (Equation 5.19, Appendix A).

The final warehouse metric system analyzed in this thesis has 41 indicators. Table 5.1 shows these indicators using the same table format presented in Chapter 2. The only difference is that, besides the metrics added (highlighted with the symbol *), the resource-related indicators (Labor cost 'Labc', Labor productivity 'Labp', Equipment downtime 'EqDp', Maintenance cost 'Maintc' and Warehouse utilization 'WarUtp', presented in Table 2.13) are also included in Table 5.1, being classified as transversal indicators.

The notation used in Table 5.1 to describe indicators is a standard created in this thesis to represent indicator names. This notation is detailed in Section 5.2.3.

Finally, it is important to underline that this group of indicators does not provide an exhaustive analysis of warehouse performance. Thus, in real situations warehouses may measure other indicators that are not included in Table 5.1.

5.2.2 Transformation of Indicator Definitions into Equations

After the determination of the final group of indicators, their definitions are used as a basis to establish the indicator equations. While some definitions are easily transformed into equations, others do not have such a straightforward interpretation. The definitions come basically from the same paper database as the literature review. In the cases where the indicators are not defined in papers, we look for these definitions in a supplementary database. Tables 5.2, 5.4, 5.6 and 5.8 present three kinds of indicators distinguished by the superscripts a, b and c. The indicators marked with a need an interpretation of their definitions in order to transform them into equations. One example is the receiving time indicator, defined as the unloading time (see Table 5.2). We determine its equation as the total unloading time divided by the number of pallets unloaded in a month (Equation 5.1). The indicators marked with b are the ones for which neither the definition nor the measurement is found in the literature. We define these indicators based on the best common sense that we could infer from the literature. The superscript c is attributed to the maintenance cost indicator (Table 5.6), the only metric defined by the union of two distinct definitions (from De Marco and Giulio (2011) and Johnson et al. (2010)). In the cases where there is more than one definition, all of them are shown in the table (e.g. order lead time in Table 5.2) and the alternatives are discussed in the respective section.

All other indicators, described in Tables 5.2, 5.4, 5.6 and 5.8 without superscripts, have their measurement given directly by their definition (e.g. lead time to pick an order line, total of products stored per labor hour storing, etc.). Some of these definitions are just adjusted to the measurement unit used in this work. For example, picking accuracy is defined as orders picked correctly per orders picked, but we changed the unit from order picked to order line picked.

5.2.3 Notation to describe Indicator Equations


The final metric system encompasses Equation 5.1 to Equation 5.41. To better illustrate the results, we show the equation outcomes in parentheses, even if they are not units derived from the International System of Units (SI). For example, we define "pallet" as a pseudo-unit indicating the number of pallets. To define the data used in each indicator's equation, Tables 5.3, 5.5, 5.7 and 5.9 describe the data meanings and their measurement units (in parentheses). The time base used in this work is the month, and the measurement units follow the description made in Section 5.1.2.

The indicator notations presented in this chapter are used all along the thesis. All indicator names are written in bold format (for instance, Rect) and the data used in the indicator equations are in sans serif style (e.g. Pal Unlo). Moreover, each indicator name ends with a letter designating its classification: t for time, p for productivity, c for cost and q for quality.

The next sections present the indicator equations separated in terms of time, productivity, cost and quality indicators.

5.2.4 Time Indicators


The time indicator equations are elaborated from the interpretation of the indicator definitions given in Table 5.2. The data used in these time indicators (Equation 5.1 up to Equation 5.8) are explained in Table 5.3.

Table 5.2 presents two indicators with more than one interpretation: order lead time and dock to stock time. Analyzing the order lead time definition from the customer's perspective, it should encompass the interval from the time when the customer order is placed up to the time when the customer receives the order, and not only until the product is shipped by the warehouse. Thus, all parts of the supply chain involved in the accomplishment of this task should be included in this indicator. For dock to stock time, it is important to note that some definitions could be misleading. The definition of Ramaa et al. (2012) could be interpreted as if the indicator comprehends the inventory and replenishment times (the time from the storage up to the moment the product is picked), but this is not the case. The authors consider that the product is available for order picking at the moment of storing. Therefore, dock to stock is the time from supply arrival up to the storage on the inventory floor.

Usually, the activities performed in a warehouse are sequential, i.e. the shipping starts after the picking is finished. As the time indicators are measured in terms of the mean time one activity takes, it is possible to depict all these measures on a timeline, as shown in Figure 5.2. Some events are pointed out on the timeline to show exactly the beginning and the end of each measurement, according to the definitions described in Table 5.3. It is important to highlight that ∆t(Rec) also encompasses the inspection activity, which takes some time after the unloading finishes to enable the pallets to be stored.

The time indicators (Equation 5.1 up to Equation 5.8) are measured monthly, so the sum operator in all equations relates to the activities performed during a whole month. The indices p, l and o in the indicator equations correspond to pallets, order lines and orders, respectively.

Table 5.2: Warehouse time indicator definitions.

Rect | Receiving time | unloading time (Gu et al., 2007; Matopoulos and Bourlakis, 2010) | (5.1)a
Putt | Putaway time | lead time between the moment the product(s) is unloaded and available to be stored and its effective storage in a designated place (Mentzer and Konrad, 1991; De Koster et al., 2007; Yang and Chen, 2012) | (5.2)a
DSt | Dock to stock time | lead time from supply arrival until the product is available for order picking (Ramaa et al., 2012); the amount of time it takes to get shipments from the dock to the inventory floor without inspection (Yang and Chen, 2012) | (5.3)a
Rept | Replenishment time | lead time to transfer products from the reserve storage area to the forward pick area (Manikas and Terry, 2010) | (5.4)
Pickt | Order picking time | lead time to pick an order line (Mentzer and Konrad, 1991) | (5.5)
Shipt | Shipping time | lead time to load a truck per total orders loaded (Campos, 2004) | (5.6)
Delt | Delivery lead time | total time of distributions per total orders distributed (Campos, 2004) | (5.7)
OrdLTt | Order lead time | lead time from customer order to customer acceptance (Mentzer and Konrad, 1991; Kiefer and Novack, 1999; Rimiene, 2008; Menachof et al., 2009; Yang and Chen, 2012); lead time from order placement to shipment (Yang, 2000; Ramaa et al., 2012) | (5.8)a

a Interpretation of the indicator definition.

Figure 5.2: Timeline for the time indicator data. (The timeline shows ∆t(Rec), ∆t(Sto), ∆t(Rep), ∆t(Pick), ∆t(Ship) and ∆t(Del) in sequence, with ∆t(DS) and ∆t(Ord) spanning several activities.)

\[
\mathrm{Rec}_t = \frac{\sum_{p=1}^{\text{Pal Unlo}} \Delta t(\mathrm{Rec})_p}{\text{Pal Unlo}} \quad \left(\tfrac{\text{hour}}{\text{pallet}}\right) \tag{5.1}
\]
\[
\mathrm{Put}_t = \frac{\sum_{p=1}^{\text{Pal Sto}} \Delta t(\mathrm{Sto})_p}{\text{Pal Sto}} \quad \left(\tfrac{\text{hour}}{\text{pallet}}\right) \tag{5.2}
\]
\[
\mathrm{DS}_t = \frac{\sum_{p=1}^{\text{Pal Sto}} \Delta t(\mathrm{DS})_p}{\text{Pal Unlo}} \quad \left(\tfrac{\text{hour}}{\text{pallet}}\right) \tag{5.3}
\]
\[
\mathrm{Rep}_t = \frac{\sum_{p=1}^{\text{Pal Moved}} \Delta t(\mathrm{Rep})_p}{\text{Pal Moved}} \quad \left(\tfrac{\text{hour}}{\text{pallet}}\right) \tag{5.4}
\]
\[
\mathrm{Pick}_t = \frac{\sum_{l=1}^{\text{OrdLi Pick}} \Delta t(\mathrm{Pick})_l}{\text{OrdLi Pick}} \quad \left(\tfrac{\text{hour}}{\text{order line}}\right) \tag{5.5}
\]
\[
\mathrm{Ship}_t = \frac{\sum_{l=1}^{\text{OrdLi Ship}} \Delta t(\mathrm{Ship})_l}{\text{OrdLi Ship}} \quad \left(\tfrac{\text{hour}}{\text{order line}}\right) \tag{5.6}
\]
\[
\mathrm{Del}_t = \frac{\sum_{o=1}^{\text{Ord Del}} \Delta t(\mathrm{Del})_o}{\text{Ord Del}} \quad \left(\tfrac{\text{hour}}{\text{order}}\right) \tag{5.7}
\]
\[
\mathrm{OrdLT}_t = \frac{\sum_{o=1}^{\text{Ord Del}} \Delta t(\mathrm{Ord})_o}{\text{Ord Del}} \quad \left(\tfrac{\text{hour}}{\text{order}}\right) \tag{5.8}
\]

Table 5.3: Explanation of the data used in the time indicators.

∆t(Rec) = time between the truck assignment to a dock and the moment when the unloading finishes and the pallet is available to be stored (hour/Pal Unlo)
∆t(Sto) = time between the instant when the pallet is available to be stored and its effective storing (hour/Pal Sto)
∆t(DS) = time between the truck assignment to a dock and the storing of the pallet (hour/Pal Unlo)
∆t(Rep) = time to transfer a pallet from the reserve storage area to the forward picking area (hour/Pal Moved)
∆t(Pick) = time between the instants when the operator starts to pick an order line and when the picking finishes (hour/OrdLi Pick)
∆t(Ship) = time between the instants when the order picking finishes and when the order line is loaded in the truck (hour/OrdLi Ship)
∆t(Del) = time between the truck leaving the warehouse and the customer acceptance of the product (hour/Ord Del)
∆t(Ord) = time between the customer ordering and the customer acceptance of the product (hour/Ord Del)
Pal Unlo = number of pallets unloaded per month (pallets/month)
Pal Sto = number of pallets stored per month (pallets/month)
Pal Moved = number of pallets moved during the replenishment operation per month (pallets/month)
OrdLi Pick = number of order lines picked per month (order lines/month)
OrdLi Ship = number of order lines shipped per month (order lines/month)
Ord Del = number of orders delivered per month (orders/month)
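To make the monthly measurement concrete, the short sketch below computes two of these time indicators from raw activity durations. It is a minimal illustration written in Python (an illustrative choice; the computations in this thesis are performed with a spreadsheet and CADES), with hypothetical sample values and variable names mirroring Table 5.3.

# Minimal sketch of two monthly time indicators (Equations 5.1 and 5.5).
# All input values are hypothetical; names follow Table 5.3.

delta_t_rec = [0.21, 0.25, 0.19, 0.23]   # hours per pallet unloaded this month
delta_t_pick = [0.05, 0.04, 0.06]        # hours per order line picked this month

pal_unlo = len(delta_t_rec)              # Pal Unlo: pallets unloaded per month
ordli_pick = len(delta_t_pick)           # OrdLi Pick: order lines picked per month

rec_t = sum(delta_t_rec) / pal_unlo      # Rec_t (hour/pallet), Equation 5.1
pick_t = sum(delta_t_pick) / ordli_pick  # Pick_t (hour/order line), Equation 5.5

print(f"Rec_t = {rec_t:.3f} h/pallet, Pick_t = {pick_t:.3f} h/order line")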

5.2.5 Productivity Indicators


Productivity can be defined as the level of asset utilization (Frazelle, 2001), or how well resources are combined and used to accomplish specific, desirable results (Neely et al., 1995). Productivity is a relationship, usually a ratio or an index, between outputs of goods, work completed and/or services produced, and the quantities of inputs or resources utilized to produce the output (Bowersox et al., 2002).

One of the most commonly used productivity measures is labor productivity. Indeed, warehouses usually have many handling-intensive activities. Bowersox et al. (2002) affirm that logistics executives are very concerned with labor performance; in fact, the number of papers found concerning this theme confirms this statement. There are several ways to measure labor productivity, and two definitions are presented in Table 5.4. The first labor productivity indicator (from De Marco and Giulio (2011)) measures the workers' efficiency, verifying the production during the real time used to execute the tasks. The second definition (from Frazelle (2001)) produces a measure based on the work done during the available working time, e.g. measuring the number of items processed during a day. We use the latter indicator in our work because, among the two presented, it is the most commonly used in warehouses.

It is interesting to make a remark about the interpretation of labor productivity. The definition in Table 5.4 and Equation 5.9 show that this indicator does not measure the employee efficiency directly; it focuses on time usefulness. It means that all the incoming flow (the number of products processed per month, in our case) will be divided by the total number of hours available to work.

Table 5.4: Warehouse productivity indicator definitions.

Labp | Labor productivity | ratio of the total number of items managed to the amount of item-handling working hours (De Marco and Giulio, 2011); total produced per total man-hour (Frazelle, 2001) | (5.9)
Recp | Receiving productivity | number of vehicles unloaded per labor hour (Mentzer and Konrad, 1991) | (5.10)
Stop | Storage productivity | total number of products stored per labor hour in the storage activity (our definition) | (5.11)b
Repp | Replenishment productivity | total number of pallets moved per labor hour in the replenishment activity (our definition) | (5.12)b
Pickp | Picking productivity | total number of products picked per labor hour in the picking activity (Kiefer and Novack, 1999; Manikas and Terry, 2010; Yang and Chen, 2012) | (5.13)
Shipp | Shipping productivity | total number of products shipped per time period (Mentzer and Konrad, 1991; Kiefer and Novack, 1999; De Koster and Waremius, 2005) | (5.14)
Delp | Delivery productivity | total number of orders delivered per labor hour in the delivery activity (our definition) | (5.15)b
InvUtp | Inventory utilization | rate of space occupied by storage (Ramaa et al., 2012; Ilies et al., 2009) | (5.16)
TOp | Turnover | ratio between the cost of goods sold and the average inventory (Johnson and McGinnis, 2011; Yang and Chen, 2012) | (5.17)
TrUtp | Transport utilization | vehicle fill rate (O'Neill et al., 2008; Matopoulos and Bourlakis, 2010) | (5.18)
WarUtp | Warehouse utilization | rate of warehouse capacity used (Bowersox et al., 2002) | (5.19)
EqDp | Equipment downtime | percentage of hours that the equipment is not utilized (Bowersox et al., 2002) | (5.20)
Thp | Throughput | items per hour leaving the warehouse (Mentzer and Konrad, 1991; Kiefer and Novack, 1999; De Koster and Waremius, 2005; Voss et al., 2005; Gu et al., 2007; Gunasekaran and Kobu, 2007) | (5.21)

b This indicator is not explicitly defined in the literature and we consider the definition presented in this table for the purpose of this work.

If there are some periods with no product to process, this reduces the indicator result even if the employees have worked well.

The indicator Equipment downtime, EqDp (Equation 5.20), was initially identified in the work of Mentzer and Konrad (1991) and defined as a period in which a piece of equipment is not functional, i.e. the downtime incurred for repairs. Since this is a time measure, it was classified in the time dimension in Section 2.4.1. However, the definition of Bowersox et al. (2002) presented in Table 5.4 produces an indicator with more information, relating the time in which the equipment is not functional to the total available time. For this reason, Equipment downtime is transformed and used as a productivity indicator in this thesis.

The productivity indicators are described in Equation 5.9 up to Equation 5.21. It is worth highlighting that the pseudo-unit "times", in Equation 5.17, signifies the number of times that the inventory turns over in a month.

\[
\mathrm{Lab}_p = \frac{\text{Prod Proc}}{\text{WH}} \quad \left(\tfrac{\text{products}}{\text{hour}}\right) \tag{5.9}
\]
\[
\mathrm{Rec}_p = \frac{\text{Pal Unlo}}{\text{WH Rec}} \quad \left(\tfrac{\text{pallets}}{\text{hour}}\right) \tag{5.10}
\]
\[
\mathrm{Sto}_p = \frac{\text{Pal Sto}}{\text{WH Sto}} \quad \left(\tfrac{\text{pallets}}{\text{hour}}\right) \tag{5.11}
\]
\[
\mathrm{Rep}_p = \frac{\text{Pal Moved}}{\text{WH Rep}} \quad \left(\tfrac{\text{pallets}}{\text{hour}}\right) \tag{5.12}
\]
\[
\mathrm{Pick}_p = \frac{\text{OrdLi Pick}}{\text{WH Pick}} \quad \left(\tfrac{\text{order lines}}{\text{hour}}\right) \tag{5.13}
\]
\[
\mathrm{Ship}_p = \frac{\text{OrdLi Ship}}{\text{WH Ship}} \quad \left(\tfrac{\text{order lines}}{\text{hour}}\right) \tag{5.14}
\]
\[
\mathrm{Del}_p = \frac{\text{Ord Del}}{\text{WH Del}} \quad \left(\tfrac{\text{orders}}{\text{hour}}\right) \tag{5.15}
\]
\[
\mathrm{InvUt}_p = \frac{\text{Inv CapUsed}}{\text{Inv Cap}} \times 100 \quad (\%) \tag{5.16}
\]
\[
\mathrm{TO}_p = \frac{\text{CGoods}}{\text{Ave Inv}} \quad (\text{times}) \tag{5.17}
\]
\[
\mathrm{TrUt}_p = \frac{\text{Kg Tr}}{\text{Kg Avail}} \times 100 \quad (\%) \tag{5.18}
\]
\[
\mathrm{WarUt}_p = \frac{\text{War CapUsed}}{\text{War Cap}} \times 100 \quad (\%) \tag{5.19}
\]
\[
\mathrm{EqD}_p = \frac{\text{HEq Stop}}{\text{HEq Avail}} \times 100 \quad (\%) \tag{5.20}
\]

Table 5.5: Explanation of the data used in the productivity indicators.

Ave Inv = average warehouse inventory per month ($/month)
CGoods = cost of all products sold by the warehouse per month ($/month)
Inv CapUsed = average number of pallets in inventory per month (pallets/month)
Inv Cap = total amount of pallet spaces (pallets)
Kg Tr = total kilograms transported per month (kg/month)
Kg Avail = delivery capacity in kilograms per month (kg/month)
HEq Stop = total number of hours during which equipment is stopped per month (hour/month)
HEq Avail = total number of hours during which equipment is available to work per month (hour/month)
OrdLi Pick = number of order lines picked per month (order lines/month)
OrdLi Ship = number of order lines shipped per month (order lines/month)
Ord Del = number of orders delivered per month (orders/month)
Pal Unlo = number of pallets unloaded per month (pallets/month)
Pal Sto = number of pallets stored per month (pallets/month)
Pal Moved = number of pallets moved during the replenishment operation per month (pallets/month)
Prod Ship = number of products shipped per month (nb/month)
Prod Proc = number of products processed by the warehouse per month; this refers to the number of products shipped by the warehouse (products/month)
WH = total item-handling working hours for all warehouse activities per month; in this thesis, WH is calculated as the sum of WH Rec, WH Sto, WH Rep, WH Pick and WH Ship (hour/month)
War CapUsed = total warehouse floor area occupied by activities per month (m2/month)
War Cap = total warehouse floor capacity (m2)
WH Rec = total employee labor hours available for the receiving activity per month (hour/month)
WH Sto = total employee labor hours available for the storing activity per month (hour/month)
WH Rep = total employee labor hours available for the replenishment activity per month (hour/month)
WH Pick = total employee labor hours available for the picking activity per month (hour/month)
WH Ship = total employee labor hours available for the shipping activity per month (hour/month)
WH Del = total employee labor hours available for the delivery activity per month (hour/month)
War WH = total number of hours during which the warehouse is open per month (hour/month)
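As an illustration of how the productivity ratios above are obtained from shop-floor data, the sketch below evaluates three of them in Python with hypothetical monthly inputs named after Table 5.5; it is a minimal example, not part of the thesis toolchain.

# Minimal sketch of selected productivity indicators (Equations 5.9, 5.19, 5.20).
# Hypothetical monthly inputs; names follow Table 5.5.

prod_proc = 28000        # products processed per month
wh = 1800.0              # total item-handling working hours per month
war_cap_used = 3900.0    # floor area occupied by activities (m^2)
war_cap = 5000.0         # total warehouse floor capacity (m^2)
heq_stop = 12.0          # hours the equipment is stopped per month
heq_avail = 320.0        # hours the equipment is available per month

lab_p = prod_proc / wh                   # Lab_p (products/hour), Eq. 5.9
warut_p = 100 * war_cap_used / war_cap   # WarUt_p (%), Eq. 5.19
eqd_p = 100 * heq_stop / heq_avail       # EqD_p (%), Eq. 5.20

print(f"Lab_p = {lab_p:.1f}, WarUt_p = {warut_p:.1f}%, EqD_p = {eqd_p:.1f}%")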

Table 5.6: Warehouse cost indicator definitions.

Invc | Inventory costs | the holding cost and the stock out penalty (Li et al., 2009); total storage costs per unit (Rimiene, 2008); inventory level, measured monetarily (Cagliano et al., 2011; Gallmann and Belvedere, 2011) | (5.22)a
Trc | Transportation costs | amount of dollars spent per order delivered (Bowersox et al., 2002) | (5.23)
OrdProcc | Order processing cost | total processing cost of all orders per number of orders (Campos, 2004) | (5.24)
CSc | Cost as a % of sales | total warehousing cost as a percentage of total company sales (Bowersox et al., 2002; Ilies et al., 2009; Ramaa et al., 2012) | (5.25)
Labc | Labor cost | cost of the personnel involved in warehouse operations (Cagliano et al., 2011) | (5.26)
Maintc | Maintenance cost | costs of building maintenance (De Marco and Giulio, 2011) and equipment maintenance (Johnson et al., 2010) | (5.27)c

a Interpretation of the indicator definition or aggregation of several indicators. c Union of two distinct definitions.

\[
\mathrm{Th}_p = \frac{\text{Prod Ship}}{\text{War WH}} \quad \left(\tfrac{\text{products}}{\text{hour}}\right) \tag{5.21}
\]

5.2.6 Cost Indicators


In Table 5.6 there are three different definitions for inventory costs. Analyzing the results of the literature review, the inventory level assessed monetarily is the metric most employed in papers. It is true that some expenses like depreciation and insurance could be included in the total warehouse costs and not necessarily in the inventory costs. However, considering just the inventory level seems an incomplete way of measurement, since other expenses like the holding cost and the stock out penalty are also taken into account by other authors like Rimiene (2008) and Li et al. (2009). So, the inventory cost definition used in this work follows Li et al. (2009).

The final group of cost indicators is presented in Equation 5.22 up to Equation 5.27, with the meaning of the data utilized in the cost indicators described in Table 5.7.

\[
\mathrm{Inv}_c = \text{InvC} + \text{LostC} \quad (\$) \tag{5.22}
\]
\[
\mathrm{Tr}_c = \frac{\text{TrC}}{\text{Ord Del}} \quad \left(\tfrac{\$}{\text{order}}\right) \tag{5.23}
\]

Table 5.7: Explanation of the data used in the cost indicators.

InvC = financial cost to maintain the inventory in the warehouse per month ($/month)
LostC = penalty measured by the company as a cost when the customer makes an order and the product is not available, per month ($/month)
TrC = total transportation cost, which is the sum of asset, oil, maintenance and labor costs per month ($/month)
Ord Del = number of orders delivered per month (nb/month)
Ord ProcC = sum of office and employee costs to process orders per month ($/month)
Cust Ord = number of customer orders per month (nb/month)
WarC = sum of all activity costs that the warehouse is in charge of per month ($/month)
Sales = total revenue from sales per month ($/month)
Salary = total salaries of all warehouse employees per month ($/month)
Charges = total charges paid over salary for all warehouse employees per month ($/month)
BuildC = total cost to maintain the warehouse building per month ($/month)
EqMaintC = total equipment maintenance costs per month ($/month)
Others = other costs not defined in the formulas, per month ($/month)

\[
\mathrm{OrdProc}_c = \frac{\text{Ord ProcC}}{\text{Cust Ord}} \quad \left(\tfrac{\$}{\text{order}}\right) \tag{5.24}
\]
\[
\mathrm{CS}_c = \frac{\text{WarC}}{\text{Sales}} \times 100 \quad (\%) \tag{5.25}
\]
\[
\mathrm{Lab}_c = \text{Salary} + \text{Charges} + \text{Others} \quad \left(\tfrac{\$}{\text{month}}\right) \tag{5.26}
\]
\[
\mathrm{Maint}_c = \text{BuildC} + \text{EqMaintC} + \text{Others} \quad \left(\tfrac{\$}{\text{month}}\right) \tag{5.27}
\]
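A brief sketch, with hypothetical monetary inputs, showing how the cost indicators of Equations 5.22 to 5.25 combine the data of Table 5.7; Python is used here purely for illustration.

# Minimal sketch of the cost indicators (Equations 5.22-5.25); all values hypothetical.
inv_c = 41000 + 2500              # Inv_c = InvC + LostC ($), Eq. 5.22
tr_c = 90000 / 1400               # Tr_c = TrC / Ord Del ($/order), Eq. 5.23
ordproc_c = 21000 / 1450          # OrdProc_c = Ord ProcC / Cust Ord ($/order), Eq. 5.24
cs_c = 100 * 230000 / 3_100_000   # CS_c = WarC / Sales (%), Eq. 5.25

print(f"Inv_c = ${inv_c}, Tr_c = ${tr_c:.2f}/order, "
      f"OrdProc_c = ${ordproc_c:.2f}/order, CS_c = {cs_c:.1f}%")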

5.2.7 Quality Indicators


The quality indicators are presented in Equation 5.28 up to Equation 5.41, derived from the metric definitions (Table 5.8). These indicators measure characteristics of the products and of the work performed in a quantitative way. The indicator data are described in Table 5.9.

The distinction between the indicators on time delivery and orders shipped on time (see Table 5.8) resides in what is considered as the final monitoring point. On time delivery is a measurement that covers the process up to the delivery of the product to the customer; in other words, if the warehouse monitors the delivery activity, it will use the indicator on time delivery. The indicator orders shipped on time does not include the delivery activity: if the warehouse measures its indicators up to the shipping activity (i.e. the moment when the products leave the warehouse), it will use the indicator orders shipped on time. In this work both measures are kept in the metric system to evaluate their interaction with other indicators.

\[
\mathrm{Rec}_q = \frac{\text{Cor Unlo}}{\text{Pal Unlo}} \times 100 \quad (\%) \tag{5.28}
\]

Table 5.8: Warehouse quality indicator definitions.

Recq | Receiving accuracy | pallets unloaded without incidents (our definition) | (5.28)b
Stoq | Storage accuracy | storing products in proper locations (Voss et al., 2005; Rimiene, 2008) | (5.29)a
Repq | Replenishment accuracy | movement of the right product from the storage area to the right place in the forward pick area, without damages (our definition) | (5.30)b
Invq | Physical inventory accuracy | the physical counts of inventory agree with the inventory status reported in the database (Bowersox et al., 2002) | (5.31)a
Pickq | Picking accuracy | number of orders picked correctly per orders picked (Bowersox et al., 2002) | (5.32)a
Shipq | Orders shipped accuracy | number of error-free orders shipped (De Koster and Waremius, 2005; De Koster and Balk, 2008) | (5.33)a
Delq | Delivery accuracy | number of orders distributed without incidents (Campos, 2004) | (5.34)a
OTDelq | On time delivery | number of orders received on time or before the committed date (Voss et al., 2005; Forslund and Jonsson, 2010; Lu and Yang, 2010; Yang and Chen, 2012) | (5.35)
OTShipq | Orders shipped on time | number of orders shipped on time per total orders shipped (Kiefer and Novack, 1999) | (5.36)
OrdFq | Order fill rate | number of orders filled completely on the first shipment (Ramaa et al., 2012) | (5.37)
PerfOrdq | Perfect order | number of orders delivered on time, without damage and with accurate documentation (Kiefer and Novack, 1999) | (5.38)
CustSatq | Customer satisfaction | number of customer complaints per number of orders (Voss et al., 2005; Lao et al., 2011; Lao et al., 2012) | (5.39)
StockOutq | Stockout rate | number of stock products out of order (Lao et al., 2011; Yang and Chen, 2012; Lao et al., 2012) | (5.40)a
Scrapq | Scrap rate | rate of product loss and damage (Voss et al., 2005) | (5.41)a

a Interpretation of the indicator definition. b This indicator is not explicitly defined in the literature and we consider the definition presented in this table for the purpose of this work.

Table 5.9: Explanation of the data used in the quality indicators.

Complet 1st Ship = number of orders delivered complete in one shipment per month (orders/month)
Cor Unlo = number of pallets unloaded correctly per month (pallets/month)
Cor Sto = number of pallets stored correctly per month (pallets/month)
Cor Rep = number of pallets moved correctly from the reserve storage to the forward picking area per month (pallets/month)
Cor OrdLi Pick = number of order lines picked correctly per month (order lines/month)
Cor OrdLi Ship = number of order lines shipped correctly per month (order lines/month)
Cor Del = number of orders delivered correctly per month (orders/month)
Cust Complain = number of orders with customer complaints regarding logistics aspects per month (orders/month)
Prod noAvail = number of products per month that are not available in stock when the customer makes an order (products/month)
Nb Scrap = number of scraps occurring in warehouse operations per month (products/month)
OrdLi Pick = number of order lines picked per month (order lines/month)
OrdLi Ship = number of order lines shipped per month (order lines/month)
Ord Ship = number of orders shipped per month (orders/month)
Ord Del = number of orders delivered per month (orders/month)
Ord Del OT = number of orders received by the customer on or before the deadline per month (orders/month)
Ord Ship OT = number of orders shipped on or before the deadline per month (orders/month)
Ord OT, ND, CD = number of orders received by the customer on time (OT), with no damages (ND) and correct documentation (CD) per month (orders/month)
Prob data = number of pallets with inaccuracies between the physical inventory and the system per month (pallets/month)
Prod Out = number of products taken out of the inventory per month (products/month)
Pal Unlo = number of pallets unloaded per month (pallets/month)
Pal Sto = number of pallets stored per month (pallets/month)
Pal Moved = number of pallets moved during the replenishment operation per month (pallets/month)

\[
\mathrm{Sto}_q = \frac{\text{Cor Sto}}{\text{Pal Sto}} \times 100 \quad (\%) \tag{5.29}
\]
\[
\mathrm{Rep}_q = \frac{\text{Cor Rep}}{\text{Pal Moved}} \times 100 \quad (\%) \tag{5.30}
\]
\[
\mathrm{Inv}_q = \frac{\text{Pal Unlo} + \text{Pal Sto} + \text{Pal Moved} - \text{Prob data}}{\text{Pal Unlo} + \text{Pal Sto} + \text{Pal Moved}} \times 100 \quad (\%) \tag{5.31}
\]
\[
\mathrm{Pick}_q = \frac{\text{Cor OrdLi Pick}}{\text{OrdLi Pick}} \times 100 \quad (\%) \tag{5.32}
\]
\[
\mathrm{Ship}_q = \frac{\text{Cor OrdLi Ship}}{\text{OrdLi Ship}} \times 100 \quad (\%) \tag{5.33}
\]
\[
\mathrm{Del}_q = \frac{\text{Cor Del}}{\text{Ord Del}} \times 100 \quad (\%) \tag{5.34}
\]
\[
\mathrm{OTDel}_q = \frac{\text{Ord Del OT}}{\text{Ord Del}} \times 100 \quad (\%) \tag{5.35}
\]
\[
\mathrm{OTShip}_q = \frac{\text{Ord Ship OT}}{\text{Ord Ship}} \times 100 \quad (\%) \tag{5.36}
\]
\[
\mathrm{OrdF}_q = \frac{\text{Complet 1st Ship}}{\text{Ord Ship}} \times 100 \quad (\%) \tag{5.37}
\]
\[
\mathrm{PerfOrd}_q = \frac{\text{Ord OT, ND, CD}}{\text{Ord Del}} \times 100 \quad (\%) \tag{5.38}
\]
\[
\mathrm{CustSat}_q = \frac{\text{Ord Del} - \text{Cust Complain}}{\text{Ord Del}} \times 100 \quad (\%) \tag{5.39}
\]
\[
\mathrm{StockOut}_q = \frac{\text{Prod noAvail}}{\text{Prod Out}} \times 100 \quad (\%) \tag{5.40}
\]
\[
\mathrm{Scrap}_q = \frac{\text{Nb Scrap}}{\text{Prod Proc}} \times 100 \quad (\%) \tag{5.41}
\]
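The quality indicators are all ratios of correct outcomes to totals. As a small worked illustration, the sketch below evaluates Equations 5.31 and 5.38 in Python with hypothetical monthly counts named after Table 5.9.

# Minimal sketch of two quality indicators (Equations 5.31 and 5.38); values hypothetical.
pal_unlo, pal_sto, pal_moved, prob_data = 700, 695, 680, 12
handled = pal_unlo + pal_sto + pal_moved
inv_q = 100 * (handled - prob_data) / handled   # Inv_q (%), Eq. 5.31

ord_ot_nd_cd, ord_del = 1310, 1400              # perfect orders vs. orders delivered
perford_q = 100 * ord_ot_nd_cd / ord_del        # PerfOrd_q (%), Eq. 5.38

print(f"Inv_q = {inv_q:.1f}%, PerfOrd_q = {perford_q:.1f}%")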
Prod Proc

5.3 Complete Analytical Model of Performance Indicators and Data

5.3.1 The Construction of Data Equations

The first part of the analytical model encompasses the indicator equations presented in the previous sections. To achieve the complete analytical model, we elaborate quantitative expressions for the indicator data in order to find the indicator relationships theoretically (performed in Section 6.3). The purpose of creating data equations is to verify the relationships among data, identifying the independent and the combined data. A combined data is measured from other data; e.g. data J in Equation 4.2, Chapter 4, is a combined data since it is calculated from the sum of the A and G data.

The independent data are the real inputs of the system, i.e. they are not calculated from any other data (e.g. A and G in Equation 4.2 are independent).

The complete analytical model, presented in Appendix A, has one more data format besides the ones already presented (indicator names are in bold, as Rect, and data used in indicator equations are in sans serif style, e.g. Pal Unlo): the components inside a data equation are in slanted style, like Prob Rep. In the cases where the same component is used in an indicator equation and in a data equation, we choose to format it at the higher level. For instance, the term OrdLi Ship is used as indicator data in Equation 5.14 and also as data in Equation 5.42; so, it is formatted in sans serif style.

To illustrate the construction of data equations and the identification of independent and combined data, let us analyze some indicators already defined:

\[
\mathrm{Lab}_p = \frac{\text{Prod Proc}}{\text{WH}} \quad \left(\tfrac{\text{products}}{\text{hour}}\right) \tag{5.9}
\]
\[
\mathrm{Ship}_p = \frac{\text{OrdLi Ship}}{\text{WH Ship}} \quad \left(\tfrac{\text{order lines}}{\text{hour}}\right) \tag{5.14}
\]
\[
\mathrm{Lab}_c = \text{Salary} + \text{Charges} + \text{Others} \quad \left(\tfrac{\$}{\text{month}}\right) \tag{5.26}
\]

At first sight, analyzing these equations, one could infer that all these seven different data are independent because it is not possible to calculate one in terms of another. However, there are just two independent data: OrdLi Ship and Others. The term Others is independent and has no relationship with any data presented. The other six data form two different groups: Prod Proc is calculated from OrdLi Ship, and WH, WH Ship, Salary and Charges are related to each other. The relationships among data (and consequently among indicators) are developed from the data equations, presented in Equations 5.42, 5.43, 5.44, 5.45 and 5.46.

Equation 5.42 is developed from the data definition described in Table 5.5, where Prod Proc is calculated as the number of products shipped, Prod Ship.

\[
\text{Prod Proc} = \text{Prod Ship} = \text{OrdLi Ship} \times \text{Prod Line} \tag{5.42}
\]

where Prod Proc is the number of products processed in the warehouse, represented by the shipped products, OrdLi Ship is the number of order lines shipped, Prod Ship is the number of products shipped and Prod Line is the average number of products in a shipping order line. From this equation we conclude that OrdLi Ship and Prod Line are independent data.

Analyzing the data equations of the other four data (WH, WH Ship, Salary and Charges), we have:

\[
\text{WH} = \text{WH Rec} + \text{WH Sto} + \text{WH Rep} + \text{WH Pick} + \text{WH Ship} + \text{WH Others} \tag{5.43}
\]

where WH is the total available working hours for all warehouse activities (WH Rec, WH Sto, WH Rep, WH Pick, WH Ship, WH Others). The available working hours for a specific activity (e.g. WH Ship, used in the indicator Equation 5.14) are calculated as the average number of employees working in that activity (nb of employees) times the total number of hours the warehouse is open in a month (WarWH) (see Equation 5.44).

\[
\text{WH Ship} = \text{nb of employees} \times \text{WarWH} \tag{5.44}
\]

\[
\begin{aligned}
\text{Salary} ={}& \$/h_{rec} \times \text{WH Rec} + \$/h_{sto} \times \text{WH Sto} + \$/h_{rep} \times \text{WH Rep} \\
&+ \$/h_{pick} \times \text{WH Pick} + \$/h_{ship} \times \text{WH Ship} \\
&+ \$/h_{admin} \times (1 - \beta_{ord}) \times \text{WH Admin} + \$/h_{other} \times \text{WH Others}
\end{aligned} \tag{5.45}
\]
\[
\text{Charges} = \alpha \times \text{Salary}, \quad 0 < \alpha < 1 \tag{5.46}
\]

where Salary encompasses the total amount paid to all the shop floor employees of each activity. $/h is the remuneration value per hour for each activity ($/h_rec, $/h_sto, $/h_rep, $/h_pick, $/h_ship). β_ord is an index representing the percentage of the total available labor hours that the employees dedicate to customer order administration. These customer order working hours are included in the OrdProcc indicator (Equation 5.24), and the remaining working hours are considered in the Salary equation. α is an index representing the fraction of the Salary paid as Charges.

It is possible to see from Equations 5.43 to 5.46 that WH, WH Ship, Salary and Charges are combined data, since they are computed from other information. The real inputs of these equations are: the $/h of all activities, the nb of employees of each activity, WarWH, β_ord and α.

Therefore, Equations 5.9, 5.14 and 5.26 have as real inputs (independent data): OrdLi Ship, Prod Line, the $/h of all activities, the nb of employees of each activity, WarWH, β_ord, α and Others.
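To make the distinction between independent and combined data concrete, the following sketch rebuilds the chain of Equations 5.43 to 5.46 and 5.26 from the independent inputs listed above. It is a minimal Python illustration with hypothetical values, not part of the thesis toolchain; the admin and "Others" terms of Equation 5.45 are set to zero for brevity.

# Independent data (hypothetical): wage rates, employees per activity, WarWH, alpha.
wage = {"rec": 14.0, "sto": 14.0, "rep": 13.5, "pick": 15.0, "ship": 15.0}  # $/h
n_emp = {"rec": 3, "sto": 2, "rep": 2, "pick": 5, "ship": 3}
war_wh = 176.0   # hours the warehouse is open per month
alpha = 0.35     # fraction of Salary paid as Charges (0 < alpha < 1)

# Combined data: computed from the independent inputs above.
wh_act = {a: n * war_wh for a, n in n_emp.items()}   # Eq. 5.44, per activity
wh = sum(wh_act.values())                             # Eq. 5.43 (WH Others = 0)
salary = sum(wage[a] * wh_act[a] for a in wage)       # Eq. 5.45 (admin terms omitted)
charges = alpha * salary                              # Eq. 5.46
lab_c = salary + charges                              # Eq. 5.26 (Others = 0)

print(f"WH = {wh:.0f} h/month, Lab_c = ${lab_c:,.2f}/month")

Changing a single independent input (for instance, the number of picking employees) propagates through WH, Salary and Charges at once, which is exactly the dependency the data equations are meant to expose.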
As demonstrated here through an example, we have elaborated expressions for the data of the 41 indicators. The complete analytical model derived from the data and indicator equations is exhibited in Appendix A.

5.3.2 Analytical model assumptions


The analytical model should be developed according to the context of the studied warehouse, since the specificities of each warehouse result in different equations. Therefore, the developed analytical model refers to the standard warehouse presented throughout this chapter.

Even if it is not possible to generalize the analytical model, the proposed equations can help with the development of analytical models in other warehouse contexts. For this reason, the term Others is included in some equations to allow their adjustment if necessary.

The main assumptions impacting the equation definitions are as follows:
The main assumptions made which impact equation denitions are as follows:

• The picking process is performed manually;

• The inventory cost is not part of the total warehouse costs. The reason is that inventory costs are usually charged to the enterprise as a whole, and the warehouse just manages the inventory;



• The distribution cost (Equation 5.23) is not part of the total warehouse costs (Equation A.48), even if delivery is considered as part of the warehouse management. As the costs incurred for the delivery activity usually have no relation with the internal warehouse activities, managers prefer to treat these costs separately;

• The trucks used for delivery are the enterprise's assets. Therefore, the distribution costs include truck maintenance. If the company has outsourced its distribution, all these components are replaced by the monthly value paid to the third party logistics company which carries out the delivery activity;

• The warehouse building is an enterprise asset, impacting mainly the assessment of warehouse costs;

• Each quality data is defined as the sum of a process performed correctly and a process performed with problems; this division allows the identification of quality problems throughout the process. Consider the delivery accuracy (Equation 5.34), which is measured as orders delivered correctly per total orders delivered. Of the total orders delivered, a portion may be delivered correctly while the other part may not. This other part is named orders delivered with problems, Prob Del. It does not mean that the order could not be delivered; it just means that this order is recorded with quality issues. For example, the orders not delivered on time are counted in Prob Del even if they arrive at the client;

• Two data are differentiated even if their results can occasionally be the same. For example, the number of orders delivered, Ord Del, is not considered equal to the customer orders, Cust Ord. Even if these numbers are close to each other, usually there are orders in process inside the warehouse at the end of the month, when the data is collected to measure the indicators. Some orders have already been processed by the administration but not delivered yet. Thus, to calculate the order processing cost indicator, OrdProcc, the total customer orders are taken into account, while for the order lead time indicator, OrdLTt, the orders delivered are considered. Other similar examples are explained in Appendix A.

5.4 Conclusions

The main objective of this chapter is to develop an analytical model of performance indicators and data.

The chapter starts with the presentation of the theoretical warehouse studied (named the standard warehouse), with its layout and activities. Afterwards, the metric system to assess the warehouse performance is defined, based firstly on the literature review. A total of 41 indicators compose the metric system, representing all the activities that the standard warehouse is in charge of.

In order to create the analytical model, the indicator definitions are first interpreted to build the indicator equations. From these results, data equations are developed, expanding the indicator equations and providing information about the kind of inputs used in the analytical model. The complete analytical model demonstrates all the relations among data, and in the next chapter it is used to determine the indicator relationships theoretically.
Chapter 6
Modeling

Measurement is complex, frustrating, difficult, challenging, important, abused and misused.
Sink, 1991.

Contents
6.1 Introduction 98
6.2 Data generation for the Standard Warehouse 98
6.2.1 Assumptions in data generation 98
6.2.2 The global warehouse scenario 99
6.2.3 The internal warehouse scenario 100
6.2.4 Data characteristics 101
6.3 Theoretical model of Indicator relationships 102
6.3.1 The data associations 103
6.3.2 Determination of the independent data 103
6.3.3 Data versus indicator relationships 105
6.3.4 Analysis of indicator relationships 107
6.4 Statistical Tools Application 111
6.4.1 Data normality test 111
6.4.2 Correlation measurement 112
6.4.3 Principal Component Analysis 115
6.5 Conclusions 123

Abstract
In this chapter, a scenario representing the flow of products between processes for the standard warehouse is designed. This scenario is used to generate shop-floor monthly data, which are utilized to measure the performance indicators. The dataset formed by the performance indicators measured monthly constitutes the input of the mathematical tools used to model the indicator relationships. Firstly, the Jacobian matrix is assessed and the results give some insights about the relationships between indicators based on their equations. Secondly, statistical tools are applied to propose a model for the indicators' aggregation. The first results suggest that the relationships between indicators are mainly based on their measurement domain, i.e. the indicators are aggregated according to warehouse activities.

6.1 Introduction

This chapter performs the modeling phase of the methodology. The main objectives of this phase are to provide the theoretical model of indicator relationships and to apply the statistical tools, obtaining the first insights about the indicators' aggregation.

To reach these objectives, a dataset is necessary. In a real context, data from the warehouse shop floor exists in databases or can be collected. However, as our studied warehouse is theoretical, we generate data for the standard warehouse, representing its flow of products between processes. This initial dataset is used to calculate the performance indicators monthly, creating indicator time series that are coupled with the mathematical tools.

Following the data generation, we demonstrate a method to find the indicator relationships from the assessment of the Jacobian matrix. Finally, some statistical tools are applied (normality tests, correlation measurement and principal component analysis) to analyze the data characteristics and their possible aggregation.

6.2 Data generation for the Standard Warehouse

6.2.1 Assumptions in data generation

The main scenario created for data generation occurs on the shop floor of the standard warehouse presented in Figure 5.1. Instead of generating indicator measures directly, we preferred to generate the data used to calculate the indicators, which are the ones presented in the analytical model of Chapter 5. The reason for this choice is that there is a great quantity of relationships among all the data which directly impact the indicator results (for instance, the same data can be used to calculate another data and some indicators; see the example in Section 5.3.1). If the indicator results were generated directly, these relationships might be lost (e.g. it might not be possible to see the impact of a data change on the indicator results). Thus, it could become more difficult to group the indicators according to their relationships.

There are many methods for data generation. In this work, a spreadsheet in the Excel software is developed. The spreadsheet is elaborated to create data following normal and uniform random functions and to represent the effect of chained processes (as discussed in Chapter 4, Section 4.3.2). Due to the difficulty of representing reality, some assumptions are made for the data generation:

• The queuing time is zero for all activities;

• All terms described as "Others" in the equations of Chapter 5 are considered equal to zero;

• The supplier orders always have the same quantity: a truck of 10 tons with 25 pallets;

• The number of employees is constant over time;

• An order cannot present two different errors within the same month;

• The warehouse processes only one product and it is possible to put 40 products on a pallet;

• There is no inspection during the shipping activity; thus, Insp2 = 0;

• The indicators Perfect order, PerfOrdq, and Delivery accuracy, Delq (Equations 5.38 and 5.34, Section 5.2.7), consider that an order is perfect (and, consequently, correct) if it is on time, with no damages, with the right quantity and the right documents. Due to this consideration, the number of correct orders delivered (Cor Del) and the number of perfect orders (Ord OT, ND, CD) are equal, so both indicators end up with the same equation. Thus, delivery accuracy is eliminated from the metric set, and the final group encompasses 40 indicators.

It is important to discuss the assumption that the warehouse manages and delivers just one product. Even if it seems a restrictive assumption, the data created with one or several products do not substantially change the indicator results, which are calculated from the generated data. The two following examples demonstrate the impact of this decision on indicators from the inbound and outbound areas.

The inbound operations usually use the unit "pallet" to measure indicators. In some cases (e.g. the indicator Labor productivity, Labp), it is also necessary to know the number of products on the pallets. Even if there are different products on a pallet, the interest is in the total number of products received on pallets, which will not change whether there are one or several kinds of products. Consequently, the scenario considering one product does not modify the final indicator results for the inbound operations.

For the outbound operations, the assumption that an order has just one kind of product results in the "number of orders" and the "number of order lines" having the same quantity, since all orders have just one line. However, this situation impacts only the productivity and time indicators of the picking and shipping activities (a total of 4 indicators) out of the 40 indicators included in the metric set.

Therefore, we consider that these data can be used to represent a warehouse operation and to validate the methodology application performed in this dissertation.

6.2.2 The global warehouse scenario

Figure 6.1 shows the global scenario of the standard warehouse. The main information presents characteristics related to the physical inventory and the product processing capacity. We assume that the warehouse has 5000 m2 of area, operates eight hours per day and can store 1000 pallets. The information about the proportion of pallet capacity in a warehouse area of 5000 m2 is acquired from specialized websites about warehouse construction (e.g. www.spartanwarehouse.com/warehouse-space-calculator).

From these characteristics, the quantity of products entering and leaving the warehouse every month is, on average, 28000 units. Figure 6.1 depicts products arriving in trucks of 10 tons (with 25 pallets per truck and 40 products per pallet) and orders leaving the warehouse in 5-ton trucks (capacity of 12 pallets) three times per day.

The objective of creating this scenario is to measure the warehouse performance for all

activities, i.e. from the product arrival at the dock to be unloaded up to order delivery to

the client. For indicators' measurement, we assume that this warehouse collects data once

a month, commonly in the last working day, and these data signify all eorts made during
100 6. Modeling

Inbound: Outbound:
Average of 28 procurements / month Truck capacity = 5 tons = 12 pallets
1 pallet = 40 products Delivery travels per truck = 3 / day
Truck capacity = 10 tons = 25 pallets Average of 20 products per order
Receive in average 28000 products/month Average of 28000 products processed/ month

Warehouse:
Warehouse area = 5000 m2
Inventory capacity = 1000 pallets
Warehouse working hours = 8h/day

Figure 6.1: Main informations about warehouse, inbound and outbound activities.

Hence, the generated data represent a summary of all that has been processed by the warehouse during the month.

6.2.3 The internal warehouse scenario

The detailed warehouse scenario is shown in Figure 6.2, representing a picture of the warehouse activities at the end of the last working day of the month. This figure represents the product and information flows that occurred during the month; these data are obtained to assess the indicators.

There are three kinds of symbols in Figure 6.2: dashed arrows illustrate the flow of products inside the warehouse with their associated information; dotted arrows show the information flow in an activity or between warehouse areas; dotted lines ending in a bullet indicate the internal data inputs (IntInput) and outputs (IntOutput) used to measure the indicators related to a specific activity. The notation used in the inputs and outputs of the activities is the same as the one presented in the complete analytical model in Appendix A.

Figure 6.2: Flow of products and information throughout the warehouse activities. (The figure chains receiving, storage, replenishment, picking, shipping and delivery, from the supplier orders to the customer orders; each activity outputs correct and problem units (e.g. Cor Sto and Prob Sto), with "No Proc" denoting products not processed and "inProcess" the monthly carry-over. Legend: product flow with its information; information flow; local data; reserve stock area; forward picking area.)

In Figure 6.2, the product flows throughout the warehouse areas are represented by the inputs and outputs of each activity.



The inputs vary among the activities. The storage, shipping and delivery activities have as main inputs the products processed by the previous activity, while the receiving depends on the number of supplier orders requested in the month. For the replenishment, the products are taken out of the reserve stock area according to the total number of orders picked (Cor Pick + Prob Pick), since the forward picking area needs to have space to receive the replenishment. Finally, the picking activity takes products out of the forward picking area according to the number of customer orders received (Cust Ord).

The outputs of all activities are the totals of products processed correctly and with problems (for instance, in the storage activity, the outputs are the correct pallets stored, Cor Sto, and the pallets stored with problems, Prob Sto). The outputs with problems are divided into two categories: the problems totally solved during the month, allowing the products to advance to the next process (as demonstrated by the arrow added to the correct products); and the problems that have not been solved yet, which are added in the next month to the number of products that should be processed (information arrow added to "No Proc"). Therefore, the unsolved problems impact the product flows (e.g. scrap), while the others (considered as solved) are just registered for the indicators' measurement but do not impede the product flows (e.g. a data information error). As the solved problems are part of the products processed, the two outputs (activity performed correctly and with solved problems) become the input of the next activity.

Some activities have, at the end of the month, products that are not yet processed (denoted "No Proc" in Figure 6.2). It means that not all supplier and customer orders received in the month have already been completely processed. The sum of the products with unsolved problems (defined above) and the products not processed results in the products in process (e.g. Sto inProcess, Figure 6.2). These products in process are not considered in the performance measures, but they are included as inputs to the activity in the next month.

For simplification, the receiving and delivery activities do not have "No Proc" products. As demonstrated in Figure 6.2, these activities have no unprocessed products, which means that there are no more trucks to unload (in receiving) and all pallets loaded during the day were delivered (in delivery). For both activities, just the products with unsolved scrap problems are aggregated to the production of the following month.
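The carry-over logic described above can be summarized in a small sketch. The function below is a hypothetical Python illustration of one activity's monthly flow (the input splits into correct and problem outputs, with unsolved problems and unprocessed units becoming the "inProcess" carry-over); it is not a reproduction of the actual spreadsheet, and all rates and values are invented.

# Sketch of one activity's monthly flow in the chained-process scenario of Figure 6.2.
def process_month(inputs, in_process, correct_rate, solved_rate, capacity):
    to_process = inputs + in_process          # new units plus last month's carry-over
    processed = min(to_process, capacity)     # the activity's monthly capacity
    correct = round(processed * correct_rate)
    problems = processed - correct
    solved = round(problems * solved_rate)    # solved problems still flow onward
    unsolved = problems - solved
    no_proc = to_process - processed          # units not processed this month
    next_input = correct + solved             # becomes the input of the next activity
    next_in_process = unsolved + no_proc      # "inProcess" carried to next month
    return next_input, next_in_process

out, carry = process_month(inputs=700, in_process=15,
                           correct_rate=0.97, solved_rate=0.8, capacity=690)
print(out, carry)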

From this scenario, data are built for each warehouse activity, as shown in Appendix B. The next section summarizes the different kinds of data generated and presents some examples.

6.2.4 Data characteristics

As stated earlier, a spreadsheet is designed to represent the activities described in Figure 6.2. Due to the complexity of the warehouse scenario, different categories of data are necessary to better represent the process variabilities. They are distinguished as fixed, uniform and normal data.

The fixed data are established values that will not change over time, e.g. the warehouse space, the number of equipments, the warehouse opening hours per day, the number of employees.

The "uniform data" are random numbers generated from a uniform probability distribution with pre-defined limits (function "randbetween" in Excel). These limits can be fixed (for instance, the number of days per month that the warehouse operates varies between 20 and 25) or variable (if the limits are determined by other variables). As an example of this last case, the number of products stored correctly cannot be higher than the total number of products processed in the receiving activity. Hence, the number of products stored has its limits defined by the outputs of the receiving process. These kinds of limits are applied to all the warehouse activities, representing the effect of chained processes.

Finally, the normal function generates values from a normal distribution with a given mean and standard deviation (function "norminv" in Excel). This function is utilized in different situations along the warehouse data generation. For instance, the quantity of products received and delivered in a month follows a normal distribution, with a mean of 28000 products and a standard deviation of 2000 products. Moreover, the number of products per order uses the same function with a mean of 20 products and a standard deviation of 2.
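For illustration, the sketch below reproduces the three data categories in Python with NumPy, mimicking the Excel functions "randbetween" and "norminv"; the seed, and the bound tying the storage draw to the receiving output, are hypothetical simplifications of the spreadsheet logic.

# Sketch of the three data categories (fixed, uniform, normal) used in the generation.
import numpy as np

rng = np.random.default_rng(42)               # hypothetical seed for reproducibility

warehouse_area_m2 = 5000                      # fixed data: never changes over time
working_days = rng.integers(20, 26)           # uniform data with fixed limits (20 to 25)

prod_received = rng.normal(28000, 2000)       # normal data: mean 28000, std 2000
# uniform data with variable limits: storage bounded by the receiving output
pal_sto = rng.integers(0, int(prod_received / 40) + 1)   # 40 products per pallet

print(working_days, int(prod_received), pal_sto)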

The complete list is presented in Appendix B, where all the equations used to generate data are described separately for each warehouse activity.

Once the dataset is available, it is possible to calculate the indicators representing the products processed in the warehouse during a whole month. These indicators, assessed monthly, are used as inputs of the mathematical tools to find the indicator relationships. In the next section, the theoretical model introduced in Chapter 4, Section 4.3.2, is implemented for the set of 40 indicators.

6.3 Theoretical model of Indicator relationships


The complete analytical model dened in Chapter 5 demonstrates that the relations among

data are complex, making the global performance hard to evaluate taking into account the

data dependency. So, it is crucial to understand these relationships to better evaluate the

warehouse performance.

Section 4.3.2 has presented how to verify indicator relationships by analyzing indicator equations. In this section we perform this analysis for the complete analytical model of 40 indicators with their data equations. Initially, we carried out a manual procedure to define indicator relationships, which is presented in Appendix C. However, the results achieved are not exhaustive; not all data relationships are taken into account. Thus, we demonstrate in this section an exhaustive procedure, composed of two main steps:

1. Evaluation of data associations;

2. Determination of the number of data shared by indicators.

First of all, the data equations from the complete analytical model (see Appendix A) are studied to differentiate the independent data from the combined data (as defined in Section 5.3.1, the combined data is measured from other data, whereas the independent ones are the real inputs of the system). Once the independent data are identified, we verify the total number of indicators related by one or more data inputs. For that we use the partial derivative matrix of the indicator equations. Finally, the indicator relationships are discussed.

To obtain the results of this exhaustive procedure, we use the CADES®¹ software (Component Architecture for the Design of Engineering Systems). CADES has three main modules dedicated to the simulation and optimization of systems.

The first module, CADES Generator, allows the analytical model of equations to be coded in the sml language (System Modeling Language) (Enciu et al., 2010). The model equations that can be implemented in sml are analytical and/or semi-analytical. When CADES compiles a model written in sml, it automatically calculates its gradient by using derivation techniques, and the result is an icar component containing the model output functions in terms of the inputs (Staudt, 2015). The Jacobian matrix of the system is calculated in CADES Calculator, the second module, using the exact derivatives obtained in CADES Generator. Finally, the third module, CADES Optimizer, allows the icar component to be coupled directly to optimization algorithms (more details of this module are presented in Section 7.4.3).

6.3.1 The data associations


In Section 5.3.1, an example demonstrated how highly connected the data are, with some data forming part of more general ones. To depict this situation, Figure 6.3 shows the combined data with their main elements for the majority of the indicator set. The rectangle colors have no special meaning; in Figure 6.3, the data in the external rectangles encompass the data in the internal ones. For example, Equation A.3 shows that unloaded pallets can be divided into pallets unloaded correctly and pallets unloaded with problems. The first rectangle on the upper left side of Figure 6.3 represents this equation: the blue rectangle concerns all pallets unloaded (the sum of the data), and inside it there are two other rectangles corresponding to the pallets unloaded correctly ("Cor Unlo") and the pallets unloaded with problems ("Prob Unlo"). In turn, "Prob Unlo" contains two other data, represented by rectangles 1 and 2, signifying respectively the scraps and the data system errors during unloading.

Figure 6.3 is divided into four areas: inbound, outbound, resource and general. The inbound and outbound areas contain data regarding the activities executed in those warehouse areas. The resource data are related to capacity, and the general data concern several warehouse activities, which is why they are kept separate from the other data.

We can infer from Figure 6.3 that, with so many relations among the data, it is hard to identify the independent data. Thus, the next section determines the independent data using the CADES® software.

6.3.2 Determination of the independent data


After the identification of the data associations, we want to obtain a list of the independent data necessary to assess the set of 40 indicators. Due to the large quantity of information to evaluate (all equations of the complete analytical model in Appendix A), it is difficult to manually perform the same analysis as in Section 5.3.1 in order to define the independent and combined inputs of the system. Therefore, the complete analytical model (without the data components named `Others') is coupled with the CADES Generator software to obtain all the inputs (independent data) and outputs (indicators) of the equations.

¹ http://www.vesta-system.fr/fr/produits/cades/

[Figure 6.3: Data relationships. The diagram groups the combined data into four areas: inbound data (pallets unloaded, pallets stored), outbound data (pallets replenished, order lines picked, orders shipped, orders delivered/customer orders), resource data (warehouse and inventory capacity, equipment hours) and general data (scraps, data system errors, costs). External rectangles encompass the data in their internal rectangles.]



The compilation provides as outputs the 40 indicators studied in this work, and the input list contains 81 independent data, as shown in Table 6.1. The meaning of each data input is given in Appendix A.

Table 6.1: Analytical model data inputs

α | deprec2 | kg Prod | Profit
β_del | empl Admin | l_used | Rate
β_ord | empl Del | mean_Insp | Remain_Inv
β_pick | empl Pick | nbMachine | scrap1
β_rec | empl Rec | nb_travel | scrap2
β_rep | empl Rep | NoComplet Ord Ship | scrap3
β_ship | empl Ship | Ord Del OT | scrap4
β_sto | empl Sto | Ord Ship OT | scrap5
BuildC | EqMaintC | pal_truck | scrap6
cap | error data system1 | pallet_area | Truck Maint C
Cor OrdLi Pick | error data system2 | Prob OrdLi Pick | War Cap
Cor OrdLi Ship | error data system3 | Prob OrdLi Ship | war used area
Cor Del | HAdmindel | Prob Del | War WH
Cor Rep | HAdminpick | Prob Rep | $/hadmin
Cor Sto | HAdminrec | Prob Sto | $/hdel
Cor Unlo | HAdminrep | Prob Unlo | $/hpick
Cust Ord | HAdminship | Prod Ord | $/hrec
Cust Complain | HAdminsto | Prod pal | $/hrep
ΔT(Insp)2 | HEq Stop | Prod noAvail | $/hship
deprec1 | Inv Cap | Prod Cost | $/hsto
$ oil |  |  |

After the determination of this final data list, we proceed with the verification of the indicator relationships.

6.3.3 Data versus indicator relationships


To check all indicators that are related through the use of the same data, we use the Jacobian matrix, defined in Section 4.3.2. In our case, we derive all functions f (the indicator equations, from Equation 5.1 to Equation 5.41, except Delq) with respect to their data inputs x (presented above in Table 6.1). The final partial derivative matrix thus has size 40 × 81 (n × m), where n is the number of indicators and m the number of data inputs.
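To illustrate the principle on a toy example (CADES® itself differentiates the sml model exactly, whereas the sketch below uses numerical differentiation via the numDeriv package), consider two invented indicator equations sharing some inputs:

```r
# Toy illustration of the Jacobian-based relationship check; the two
# "indicator" equations and the input values are invented for the example.
library(numDeriv)

indicators <- function(x) {
  # x = c(pallets_unloaded, employees_rec, hours_worked, hourly_rate)
  c(x[1] / (x[2] * x[3]),     # a receiving-productivity-like indicator
    x[2] * x[3] * x[4])       # a labor-cost-like indicator
}

x0 <- c(1500, 8, 176, 12)     # illustrative monthly input values
J <- jacobian(indicators, x0) # 2 x 4 matrix of partial derivatives
# Columns where both rows are non-zero flag inputs shared by the two
# indicators, i.e. a potential relationship between them
shared_inputs <- which(J[1, ] != 0 & J[2, ] != 0)
```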

Due to the substantial size of the partial derivative matrix, we also automate the Jacobian generation using the software module CADES Calculator.

Before obtaining the results of the Jacobian matrix, it is necessary to provide initial values for the inputs. The assigned values correspond to the first month of the data generated for our warehouse scenario, presented in the beginning of this chapter (see Appendix D for the complete list of initial input values). Afterwards, CADES® computes and returns the numerical results of the Jacobian matrix for the supplied input data set. Figure 6.4 shows the software interface with the inputs, outputs, and the Jacobian matrix result.

[Figure 6.4: Interface of the CADES® software, showing the inputs area (81 independent data), the outputs area (all 41 indicators) and the Jacobian matrix area (41 × 81).]

The calculated Jacobian matrix is initially analyzed with respect to its columns. We observe that there are two main kinds of inputs (columns of the matrix): those related to only one output (see Table 6.2) and those linked to several outputs (see Table 6.3). For illustration purposes, only some parts of the matrix are shown in Tables 6.2 and 6.3. Each cell, in both tables, contains the partial derivative value of the output with respect to the corresponding input data. The partial derivative value can be interpreted as the variation of the output when the corresponding input varies, with all other inputs held constant.

Of the 81 data inputs, 27 are associated with one output and 54 with two or more outputs. In Tables 6.2 and 6.3, the most significant values influencing the indicators positively and negatively are highlighted in red and green, respectively.

Table 6.3 presents the basis used to determine indicator relationships. The preliminary hypothesis assumed in this thesis is that two indicators with non-zero partial derivatives for the same input might have a relationship between them.

Table 6.2: Partial area of the Jacobian matrix with inputs related to just one output.

 | Cust_Ord | CustComplain | ErrorDataSystem3 | nbMachine | NoComplet_OrdShip | OTDel_ord | OTShip_ord | palSpace
CSc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
CustSatq | 0 | -0.07496 | 0 | 0 | 0 | 0 | 0 | 0
Delp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Delt | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
DSt | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
EqDp | 0 | 0 | 0 | -2.1 | 0 | 0 | 0 | 0
Invc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Invq | 0 | 0 | -0.05006 | 0 | 0 | 0 | 0 | 0
InvUtp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -0.07192
Labc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Labp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Maintc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OrdFq | 0 | 0 | 0 | 0 | -0.07402 | 0 | 0 | 0
OrdLTt | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OrdProcc | -0.00155 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OTDelq | 0 | 0 | 0 | 0 | 0 | 0.07496 | 0 | 0
OTShipq | 0 | 0 | 0 | 0 | 0 | 0 | 0.07402 | 0
PerfOrdq | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0

Table 6.3: A partial view of the Jacobian matrix with inputs related to two or more outputs.

 | alpha | beta_del | beta_ord | CorDel | CorRep | CorSto | CorUnlo | emplPick | emplRec | emplRep | emplShip | emplSto
CSc | 0.24550 | 0 | 0.00000 | -0.00038 | 0 | 0 | 0 | 0.02593 | 0.02593 | 0.02593 | 0.02593 | 0.02593
CustSatq | 0 | 0 | 0 | 0.00101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Delp | 0 | 0 | 0 | 0.00298 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Delt | 0 | 0.25190 | 0 | -0.00021 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
DSt | 0 | 0 | 0 | 0 | 0 | 0 | -0.00067 | 0 | 0.20400 | 0 | 0 | 0.20400
EqDp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Invc | 0 | 0 | 0 | 0 | 0 | 199.8 | 0 | 0 | 0 | 0 | 0 | 0
Invq | 0 | 0 | 0 | 0 | 0.00013 | 0.00013 | 0.00013 | 0 | 0 | 0 | 0 | 0
InvUtp | 0 | 0 | 0 | 0 | 0 | 0.05000 | 0 | 0 | 0 | 0 | 0 | 0
Labc | 9988.0 | 0 | -5292.0 | 0 | 0 | 0 | 0 | 1260.0 | 1260.0 | 1260.0 | 1260.0 | 1260.0
Labp | 0 | 0 | 0 | 0 | 0 | 0 | 0 | -1.5 | -1.5 | -1.5 | -1.5 | -1.5
Maintc | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OrdFq | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OrdLTt | 0 | 0.25190 | 0.37780 | -0.00101 | 0 | 0 | 0 | 0.11960 | 0 | 0 | 0.11960 | 0
OrdProcc | 1.4 | 0 | 3.7 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
OTDelq | 0 | 0 | 0 | -0.07367 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
Putt | 0 | 0 | 0 | 0 | 0 | -0.00036 | 0 | 0 | 0 | 0 | 0 | 0.21120
Recp | 0 | 0 | 0 | 0 | 0 | 0 | 0.00595 | 0 | -4.2 | 0 | 0 | 0
Recq | 0 | 0 | 0 | 0 | 0 | 0 | 0.00184 | 0 | 0 | 0 | 0 | 0
Rect | 0 | 0 | 0 | 0 | 0 | 0 | -0.00033 | 0 | 0.20400 | 0 | 0 | 0
Repp | 0 | 0 | 0 | 0 | 0.00595 | 0 | 0 | 0 | 0 | -3.7 | 0 | 0

Evaluating two different rows of Table 6.3 (i.e. two indicators), we observe several common inputs. This is the case for Labc and OrdLTt, which have three inputs in common (β_ord, emplPick, emplShip), denoting a relationship between them. Therefore, by comparing two rows of Table 6.3 at a time, we check all possible relations among indicators.
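This pairwise comparison can be automated from the sparsity pattern of the Jacobian; a minimal R sketch, assuming J holds the 40 × 81 matrix with one row per indicator:

```r
# Number of data inputs shared by each pair of indicators, read off the
# Jacobian's sparsity pattern (J assumed to be the 40 x 81 matrix).
S <- (J != 0) * 1          # 1 where an indicator depends on an input
shared <- S %*% t(S)       # entry (i, j) = inputs shared by i and j
diag(shared) <- 0          # ignore each indicator's count with itself
# 'shared' reproduces the structure of Table 6.4
```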

The interpretation of the indicator relationships is explained in the next section.

6.3.4 Analysis of indicator relationships


The results presented by the Jacobian matrix (Table 6.3) are analyzed in terms of the number of data shared by two indicators and the numerical values of the partial derivatives. The main objective of both analyses is to try to figure out the intensity of the indicator relationships.

Table 6.4 shows the number of shared data between indicators for the complete Jacobian matrix, where the colors represent: red for 1 shared data, blue for 2, and green for 3 or more shared data. From this table, three main results are interesting to discuss:

• Indicators with no data in common;

• Indicators with few data in common (1 or 2);

• Indicators with several data in common (3 or more).

The white cells with zero values indicate that the indicators have no data in common, which makes their interpretation easy. Indicators that do not share data with others should have no relationships and, consequently, may not be part of the indicator group which will form the aggregated performance.

In contrast to the white cells, the green ones show indicators sharing three or more data. One may deduce that a greater number of shared data determines a stronger indicator relationship. Taking the first column, of the CSc indicator, it is possible to see that it shares data with 11 other indicators. The most expressive numbers of shared data are 15 with Labc and 7 with Labp and OrdLTt. From this result we might conclude that these indicators have strong relationships, especially between CSc and Labc. However, the correlation between CSc and Labc is only 0.55, a medium value, whereas between CSc and Labp it is -0.96, denoting a very high correlation (Section 6.4.2 presents the complete correlation matrix). Therefore, the hypothesis that a great number of shared data signifies a high correlation is not sustained.

Given this conclusion for indicators with several data in common, the indicators with few shared data (the red and blue cells of Table 6.4, which are the majority of cases) are even more difficult to interpret.

Another aspect that seems to impact the final relationship between indicators is the numerical value of the partial derivatives. Analyzing the column beta_ord of Table 6.3, the rows for Labc and OrdProcc show expressive partial derivative values (-5292 and 3.7, respectively), which might suggest the intensity of the relationships. However, as can be noticed in Tables 6.2 and 6.3, the numerical values of the partial derivatives may differ substantially from one another. Here it is worth recalling that the input data may have different units and their values may be on distinct scales. For example, the number of employees is often a small number compared to the average number of products in inventory, which is usually a large quantity. Moreover, the Jacobian matrix is calculated by considering the monthly input data set, which means that the Jacobian matrix can change slightly from month to month, depending on the actual variation of the input parameters. Due to the dynamic nature of the input data and also the numerical differences they might have (due to their units), it is hard to directly define the intensity of indicator relationships from the partial derivative results.

Therefore, it is not possible to infer the intensity of indicator relationships from the results obtained. The use of the Jacobian matrix to define the strength of indicator relationships requires a deeper study, which is proposed as a future research direction. In this thesis, we use the results of the Jacobian matrix to give a preliminary overview of indicator relationships in a qualitative sense, and to support the choice of the final indicator group used in the integrated model (Section 7.2).

From the exhaustive relationship matrix presented in Table 6.4, it is possible to create the same framework as presented in Appendix C, Figure C.3. However, due to the great quantity of indicator relations, the result is not as easily interpretable as Figure C.3. For that reason, this final framework is placed in Appendix E just for illustration.
Table 6.4: Indicator relation matrix with the number of shared data.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
1 CSc 0
2 CustSatq 2 0
3 Delp 3 2 0
4 Delt 3 2 4 0
5 DSt 3 0 1 1 0
6 EqDp 1 0 1 1 1 0
7 Invc 3 0 0 0 0 0 0
8 Invq 0 0 0 0 2 0 2 0
9 InvUtp 0 0 0 0 0 0 4 2 0
10 Labc 15 0 1 1 3 1 0 0 0 0
11 Labp 7 0 1 1 3 1 1 0 0 6 0
12 Maintc 2 0 0 0 0 0 0 0 0 0 0 0
13 OrdFq 0 0 0 0 0 0 0 0 0 0 2 0 0
14 OrdLTt 7 2 4 6 1 1 0 0 0 5 3 0 0 0
15 OrdProcc 6 0 1 1 1 1 0 0 0 5 1 0 0 3 0
16 OTDelq 2 2 2 2 0 0 0 0 0 0 0 0 0 2 0 0
17 OTShipq 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0
18 PerfOrdq 2 2 2 2 0 0 0 0 0 0 0 0 0 2 0 2 0 0
19 Pickp 2 0 1 1 1 1 2 0 0 2 2 0 0 2 1 0 0 0 0
20 Pickq 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 0
21 Pickt 2 0 1 1 1 1 2 0 0 2 2 0 0 4 1 0 0 0 4 2 0
22 Putt 2 0 1 1 4 1 2 2 2 2 2 0 0 1 1 0 0 0 1 0 1 0
23 Recp 2 0 1 1 4 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 0
24 Recq 0 0 0 0 2 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0
25 Rect 2 0 1 1 8 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 4 2 0
26 Repp 2 0 1 1 1 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0
27 Repq 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0
28 Rept 2 0 1 1 1 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 1 0 1 4 2 0
29 Scrapq 1 0 0 0 0 0 2 0 1 0 4 0 2 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0
30 Shipp 2 0 1 1 1 1 0 0 0 2 4 0 2 2 1 0 2 0 1 0 1 1 1 0 1 1 0 1 2 0
31 Shipq 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 2 0

32 Shipt 2 0 1 1 1 1 0 0 0 2 4 0 2 5 1 0 2 0 1 0 1 1 1 0 1 1 0 1 2 4 2 0
33 StockOutq 1 0 0 0 0 0 5 0 0 0 1 0 0 0 0 0 0 0 2 2 2 0 0 0 0 0 0 0 1 0 0 0 0
34 Stop 2 0 1 1 2 1 2 2 2 2 2 0 0 1 1 0 0 0 1 0 1 4 1 0 1 1 0 1 0 1 0 1 0 0
35 Stoq 0 0 0 0 0 0 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 0
36 Thp 2 0 1 1 1 1 1 0 0 1 5 0 2 1 1 0 2 0 1 0 1 1 1 0 1 1 0 1 4 3 2 3 1 1 0 0
37 TOp 4 2 2 2 0 0 5 2 4 0 1 0 0 2 0 2 0 2 0 0 0 2 0 0 0 0 0 0 2 0 0 0 1 2 2 1 0
38 Trc 4 2 4 4 1 1 0 0 0 2 1 0 0 4 2 2 0 2 1 0 1 1 1 0 1 1 0 1 0 1 0 1 0 1 0 1 2 0
39 TrUtp 4 2 2 2 0 0 1 0 0 0 1 0 0 2 0 2 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 4 3 0
40 WarUtp 0 0 0 0 0 0 4 2 4 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 1 0 0 0 0 2 2 0 4 0 0 0

6.4 Statistical Tools Application


This section presents the application of statistical tools to analyze indicator relationships, proposing ways to aggregate them based on significant correlations.

The data matrix used with the statistical tools is 100 × 40, where the rows present the different values taken by the indicators over time and the columns represent the different indicators. Each cell contains the indicator value for a specific month. The choice of generating data for 100 months comes from the requirements of the PCA tool, which specifies that the sample must be larger than the number of variables. Using this database generated for 100 months, we first perform a normality test to describe the characteristics of the data. Afterwards, we standardize the data according to Gentle (2007) (this equation has been presented in Section 4.3.1):

X_new = (X_actual − X_mean) / σ_X    (4.3)

where X_new is the new value of the variable, X_actual is the real variable value, X_mean is the time-series mean of the variable dataset, and σ_X is the standard deviation of the variable time series.
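In R, this standardization can be applied in one call; a sketch assuming X is the 100 × 40 matrix of monthly indicator values:

```r
# Column-wise standardization (Equation 4.3) of the indicator matrix X
X_std <- scale(X, center = TRUE, scale = TRUE)
# equivalent to: apply(X, 2, function(x) (x - mean(x)) / sd(x))
```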

Once the standardization is done, the indicator correlations are measured and the principal component analysis is performed, completing the set of information that will be used to define the integrated model in Chapter 7.

In addition to PCA, dynamic factor analysis (DFA) is also studied to aggregate indicators. However, the best results obtained exclude a great quantity of indicators from the model (of the initial 40 indicators, only 11 remain after performing DFA). As our objective is to keep the majority of indicators to evaluate the global performance, we do not use this result in our integrated model, and we leave the application of dynamic factor analysis for this purpose to future research. The initial results obtained are reported in Appendix F.

6.4.1 Data normality test


The objectives of testing data normality are to know the data characteristics and to verify whether there are outliers in the dataset. The data characteristics are sometimes useful to justify the results obtained, especially when the data are used in statistical tools. Concerning outliers, according to Section 3.3.2, PCA is sensitive to great differences among variables. Even if the data is normalized before the PCA application, it is important to identify the existence of outliers. For this purpose, the skewness and kurtosis are measured for the variables. As stated in Section 4.3.1, if the skewness is higher than 2 or the kurtosis is higher than 7, a special analysis of the time series should be made (Newsom, 2015). If these limits are exceeded, it is necessary to look for outliers in the time series, fixing the wrong values or excluding inconsistencies.

To evaluate the normality of the data, we use the Minitab® software to run the Anderson-Darling test for each indicator time series. The skewness and kurtosis are also provided by the software.

The Anderson-Darling test measures how well the data follow a particular distribution, with the null hypothesis that the data follow a normal distribution. The null hypothesis is rejected if the p-value is smaller than a chosen alpha (usually 0.05 for 95% confidence and 0.01 for 99% confidence). We chose to reject the null hypothesis (i.e. to consider that the data have a non-normal distribution) for p-values < 0.01.
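A minimal R sketch of this screening for one indicator series x (the thesis uses Minitab®; here the Anderson-Darling statistic comes from the nortest package and the moment measures from e1071, which reports excess kurtosis):

```r
# Normality screening for one indicator time series x (a sketch; the
# thesis runs these tests in Minitab).
library(nortest)   # ad.test: Anderson-Darling test
library(e1071)     # skewness, kurtosis (excess kurtosis)

ad <- ad.test(x)                     # H0: the data follow a normal law
non_normal <- ad$p.value < 0.01      # rejection rule adopted above
skew_flag <- abs(skewness(x)) > 2    # outlier-screening thresholds
kurt_flag <- kurtosis(x) > 7         # (Newsom, 2015)
```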
Figure 6.5 presents some examples of these tests (Anderson-Darling, skewness and kurtosis) for the indicators Cost as a % of Sales (CSc), Labor costs (Labc) and Inventory quality (Invq). The results are highlighted by red rectangles in the figure. For the skewness and kurtosis, none of the results is greater than 2 or 7, respectively. However, the Anderson-Darling test has p-values smaller than 0.01 for Labor costs and Inventory quality, denoting non-normal distributions. Indeed, the histograms show these variables with distributions markedly different from the normal curve.

These tests are carried out for all 40 indicators. For the skewness and kurtosis measurements, we do not identify values higher than the determined limits. For the Anderson-Darling test, the results show 14 indicators with non-normal distributions among the 40 variables analyzed (see Appendix G for all test results). Nevertheless, this result does not impede the application of statistical tools such as correlation and Principal Component Analysis. Since in practical situations warehouses do not always provide normal data, we consider these data characteristics close enough to reality to perform the aggregation analysis.

6.4.2 Correlation measurement


The correlation measurement results are evaluated in parallel with the theoretical model of relationships (the Jacobian matrix) to define the indicators that should be discarded from the analysis and those that will be part of the integrated model.

The correlation matrix, calculated using the standardized data, is presented in Table 6.5; the numbers inside it are the correlation coefficients, known as Pearson's r (or just r). All highlighted cells present a significant correlation, with p-value < 0.01. The blue cells show medium correlations, with absolute values between 0.4 and 0.59, and the pink cells show high correlations, with absolute values from 0.6 to 1.
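A sketch of this computation in R, assuming X_std is the standardized 100 × 40 matrix (rcorr from the Hmisc package returns both the coefficients and their p-values):

```r
# Pearson correlation matrix with significance flags (a sketch)
library(Hmisc)
rc <- rcorr(as.matrix(X_std), type = "pearson")
r   <- rc$r                                       # Pearson's r, 40 x 40
sig <- rc$P < 0.01                                # highlighted cells
medium <- sig & abs(r) >= 0.4 & abs(r) <= 0.59    # blue cells of Table 6.5
high   <- sig & abs(r) >= 0.6                     # pink cells of Table 6.5
```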

We can verify that some indicators in Table 6.5 have only weak, or a few medium, correlations. For example, EqDp, Invq and Maintc have no correlations with |r| ≥ 0.4. These indicators might be problematic to incorporate in the PCA results, since the components are arranged based on the correlations between variables. This result is evaluated in Chapter 7 together with the complete set of information coming from the application of the mathematical tools.



Figure 6.5: Anderson-Darling normality test for three indicators.


Table 6.5: Data correlation matrix.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
1 CSc 1
2 CustSatq -0,1 1
3 Delp -0,6 0 1
4 Delt 0,65 -0 -0,98 1
5 DSt 0,02 -0,1 -0,16 0,13 1
6 EqDp -0,1 0,1 0,07 -0,1 -0,2 1
7 Invc 0,1 -0,2 -0,04 0,03 0,16 -0 1
8 Invq -0,1 -0,1 0,07 -0,1 -0,1 -0 0,09 1
9 InvUtp 0,11 -0,2 -0,02 0,02 0,16 -0 0,97 0,1 1
10 Labc 0,55 -0,2 -0,52 0,52 0,1 -0 0,14 -0 0,12 1
11 Labp -0,96 0,1 0,67 -0,7 -0,1 0,1 -0,13 0,1 -0,1 -0,7 1
12 Maintc -0 -0,1 0,11 -0,1 -0 -0 0,03 -0 0,04 0,21 0,07 1
13 OrdFq 0,04 -0,2 0 0,01 0 -0 0,11 0,1 0,1 0,06 -0 -0 1
14 OrdLTt 0,65 -0 -0,98 1,0 0,13 -0 0,03 -0 0,02 0,52 -0,7 -0,1 0 1
15 OrdProcc 0,60 0 -0,97 0,99 0,14 -0 0,04 -0 0,02 0,43 -0,6 -0,1 0 0,99 1
16 OTDelq -0 0,6 -0,15 0,12 0,05 -0 -0,05 0 -0 -0 -0 -0 -0,2 0,12 0,12 1
17 OTShipq 0,07 -0,2 -0,03 0,04 -0 -0 0,03 0 0,03 0,06 -0 -0 0,9 0,04 0,05 -0,2 1
18 PerfOrdq -0 0,7 -0,03 0,03 -0 0,1 -0,03 -0 -0 -0 -0 -0 -0,1 0,03 0,02 0,9 -0,2 1
19 Pickp -0,6 0 1 -0,97 -0,2 0,1 -0,05 0,1 -0 -0,5 0,66 0,11 0 -0,97 -0,97 -0,1 -0 -0 1
20 Pickq -0,1 0,1 0,07 -0,1 -0 0,1 -0,18 -0 -0,1 -0,1 0,11 -0,1 -0,2 -0,1 -0,1 0,1 -0,1 0,1 0,04 1
21 Pickt 0,63 -0 -0,97 1,00 0,13 -0 0,03 -0 0,02 0,51 -0,7 -0,1 0 1,0 0,99 0,1 0 0 -0,98 -0,1 1
22 Putt 0,38 -0,1 -0,32 0,33 -0,2 -0 -0,12 -0 -0,1 0,74 -0,5 0,12 0 0,33 0,24 -0,1 0,1 -0,1 -0,3 -0 0,31 1
23 Recp -0,39 0,1 0,34 -0,3 0,15 0,2 0,11 0,1 0,14 -0,76 0,51 -0,1 0 -0,3 -0,3 0,1 -0,1 0,1 0,33 0,01 -0,3 -1 1
24 Recq 0,08 0,1 -0,21 0,19 0,1 -0 0,01 0,2 0,04 0 -0,1 -0,1 -0,1 0,19 0,22 0,1 -0,1 0,1 -0,2 0,09 0,21 -0,1 0,04 1
25 Rect -0 -0,1 -0,12 0,1 1,00 -0 0,17 -0 0,17 0,02 -0 -0 0 0,1 0,11 0,1 -0 -0 -0,1 -0 0,1 -0,3 0,24 0,1 1
26 Repp -0,95 0,1 0,69 -0,7 -0,1 0,1 -0,14 0,1 -0,1 -0,69 0,98 0,07 -0 -0,7 -0,7 0 -0,1 0 0,68 0,08 -0,7 -0,5 0,5 -0,1 -0 1
27 Repq -0,1 -0 0,06 -0 0,03 -0 0,18 0,1 0,19 -0,1 0,11 0,07 0,1 -0 -0 0 0,1 -0 0,06 -0,1 -0 -0,1 0,13 0,1 0,04 0,04 1
28 Rept 0,96 -0,1 -0,7 0,73 0,06 -0 0,13 -0 0,12 0,69 -0,98 -0,1 0 0,73 0,68 -0 0,1 -0 -0,7 -0,1 0,72 0,47 -0,5 0,1 0,01 -0,99 -0 1
29 Scrapq 0,08 -0,3 0,03 -0 -0 0 -0,02 0,1 -0 0,12 -0,1 0,09 -0,2 -0 -0 -0,4 -0,3 -0,4 0,04 -0,3 -0 0,12 -0,1 -0,4 -0 -0 -0,41 0,03 1
30 Shipp -0,6 0 1,0 -0,98 -0,2 0,1 -0,05 0,1 -0 -0,5 0,67 0,11 -0 -0,98 -0,97 -0,1 -0,1 -0 1,0 0,07 -0,97 -0,3 0,34 -0,2 -0,1 0,69 0,06 -0,7 0,04 1
31 Shipq 0,05 -0,1 0,03 -0 -0 -0 0,07 -0 0,08 0,06 -0 0 0,8 -0 -0 -0,1 0,8 -0 0,05 -0,2 -0 0,02 -0 -0,2 -0 -0 0,03 0,02 -0,3 0,01 1
32 Shipt 0,65 -0 -0,98 1,00 0,13 -0 0,03 -0 0,02 0,52 -0,7 -0,1 0 1,00 0,99 0,1 0,1 0 -0,97 -0,1 1 0,33 -0,3 0,2 0,09 -0,7 -0 0,73 -0 -0,98 -0 1
33 StockOutq 0,06 -0,1 -0,04 0,03 0,08 -0 0,4 0 0,19 0,11 -0,1 -0,1 0,1 0,03 0,02 -0,1 0,1 -0 -0 -0,5 0,01 0,04 -0,1 -0,1 0,07 -0,1 0,09 0,08 0,05 -0 0,07 0,03 1
34 Stop -0,4 0,1 0,32 -0,3 0,16 0,2 0,14 0,1 0,16 -0,7 0,48 -0,1 -0 -0,3 -0,2 0,1 -0,1 0,1 0,31 0,01 -0,3 -0,99 0,99 0,1 0,25 0,48 0,14 -0,47 -0,1 0,32 -0 -0,3 -0 1
35 Stoq -0 0,2 0,06 -0,1 -0,2 -0 -0,15 0,2 -0,1 -0,2 0,04 -0,2 0,2 -0,1 -0 0,1 0,1 0 0,06 0,01 -0 -0,1 0,18 0,1 -0,1 0,05 0,01 -0,04 -0,46 0,06 0,2 -0 -0,1 0,14 1
36 Thp -0,96 0,1 0,67 -0,7 -0,1 0,1 -0,13 0,1 -0,1 -0,69 1 0,07 -0 -0,7 -0,6 -0 -0 -0 0,66 0,11 -0,7 -0,5 0,51 -0,1 -0 0,98 0,11 -0,98 -0,1 0,67 -0 -0,7 -0,1 0,48 0,04 1

37 TOp -0,4 0,1 0,17 -0,2 -0,1 0 -0,88 -0 -0,91 -0,1 0,38 0,08 -0,1 -0,2 -0,2 0 -0,1 0 0,17 0,13 -0,2 0,14 -0,1 -0,1 -0,1 0,39 -0,2 -0,38 0,04 0,17 -0,1 -0,2 -0,2 -0,2 0,07 0,38 1
38 Trc 0,59 0 -0,96 0,98 0,14 -0 0,01 -0 -0 0,41 -0,6 -0,1 -0 0,98 0,98 0,1 0 0 -0,96 -0,1 0,98 0,24 -0,3 0,2 0,11 -0,6 -0 0,66 -0 -0,97 -0 0,98 0 -0,2 -0 -0,6 -0,2 1
39 TrUtp -0,5 0,2 0,4 -0,4 -0,1 0,2 -0,08 0,2 -0,1 -0,96 0,64 -0,2 -0 -0,4 -0,3 0,1 -0 0,1 0,37 0,07 -0,4 -0,8 0,77 0,1 0,02 0,62 0,09 -0,62 -0,1 0,4 -0,1 -0,4 -0,1 0,75 0,18 0,64 0,04 -0,3 1
40 WarUtp -0,1 -0,1 0,01 -0,1 0,12 -0 0,6 0,1 0,61 0,14 0,01 0,01 0,1 -0,1 -0,1 -0,1 0 -0,2 0,01 -0,1 -0 0,06 -0,1 0,1 0,11 0,03 0,04 -0,04 0,04 0,01 0,12 -0,1 0,2 -0,1 -0,1 0,01 -0,5 -0,1 -0 1

6.4.3 Principal Component Analysis


This section performs the first PCA tests considering all variables in the model, both those with low and those with high correlations.

The free software R is used to obtain the results. There are two functions available in R to perform PCA: princomp and prcomp. In princomp, the calculation is done with the eigenvalues of the correlation or covariance matrix, using the divisor N (the number of observations). The prcomp function, on the other hand, calculates a singular value decomposition of the (centered and possibly scaled) data matrix, using the usual divisor N − 1. According to the R documentation, prcomp is the preferred method for numerical accuracy; thus, we use it to perform our analysis.
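A minimal sketch of this analysis pipeline with prcomp (X_std as above; the retention criterion and cut-off are the ones used in this section):

```r
# PCA on the standardized indicator matrix (a sketch)
pca <- prcomp(X_std, center = TRUE, scale. = TRUE)
summary(pca)        # standard deviation, proportion of variance and
                    # cumulative proportion (bottom tables of the figures)
loadings <- pca$rotation                      # indicator versus component tables
screeplot(pca, type = "lines")                # scree plot
retained <- which(pca$sdev > 1)               # components with sigma > 1
in_comp <- abs(loadings[, retained]) >= 0.3   # cut-off used per dimension
```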

Two main analyses are made in this first phase: an analysis of indicators separated by their dimensions of cost, quality, time and productivity (Section 6.4.3.1), and a global PCA with all 40 indicators (Section 6.4.3.2). The objective is to verify the indicators' behavior in aggregation situations, providing more elements to define the final group that will be part of the integrated model.

As justified above (Section 6.4.1), the data is standardized before its use in PCA due to the sensitivity of the model to high data variation.

6.4.3.1 PCA for indicator dimensions


Initially, the PCA results for indicators separated by dimensions are shown in Figure 6.6 for quality, Figure 6.7 for productivity, Figure 6.8 for time and Figure 6.9 for cost. All figures are divided into three parts (as is Figure 6.10, showing the PCA result for all 40 indicators): a table presenting the standard deviation, proportion of variance and cumulative proportion for the main components (at the bottom of the figure); a table of indicators versus components (located on the upper left side of the figures); and the scree plot on the right side of the figures. Each of these three parts is explained as follows.

The tables at the bottom of the figures provide three different pieces of information to analyze. First, a standard deviation higher than one for a principal component is used as one of the criteria to define the number of components to retain. As an example, in Figure 6.6 there are 5 components (PC1 up to PC5) with standard deviations higher than one, indicating that these five components should be considered in the representation of all quality indicators. The second piece of information, the proportion of variance, gives the contribution of each component to explaining the data variance, whereas the cumulative proportion (third line) presents the sum of the component variances. For Figure 6.6, the cumulative proportion is 76.9%, signifying that the first five components explain 76.9% of the total quality indicator variance.

The indicator versus component tables show in their cells the loadings a_ij, giving the weight of each indicator in the respective component. The highlighted cells are those with |loading| ≥ 0.3, denoting the indicators considered in each component. For example, Figure 6.9 shows PC1 and PC2 (both with standard deviations higher than one) formed by the following indicators: CSc, Labc, OrdProcc and Trc for PC1, and Invc, Labc and Maintc for PC2. The linear combinations of indicators obtained from this table are shown in Equation 6.1 and Equation 6.2. The signs of the loadings are arbitrary and, according to the R documentation, they may differ between different PCA programs or even between different builds of R.

PC1 = −0.48 × CSc − 0.40 × Labc − 0.55 × OrdProcc − 0.55 × Trc    (6.1)

PC2 = 0.42 × Invc + 0.44 × Labc + 0.74 × Maintc    (6.2)

Finally, the scree plot shows the variance of the data (y axis, measured by the square of the standard deviation, σ²) explained by each component (x axis). The principal components are presented in decreasing order of importance, with the objective of helping analysts easily visualize the sharp drop in the plot, which is also used as a signal that subsequent components can be ignored.

One might expect from the PCA performed that each indicator dimension would be represented by one component (a total of 4 components for all indicators), since indicators of the same dimension could be more related among themselves than indicators of different dimensions. Nevertheless, the results obtained do not confirm this hypothesis. The numbers of components to include in the model (using the criterion of a standard deviation higher than one to retain components) are two for time and cost indicators, whereas for productivity and quality they are three and five, respectively. This means that, if we wanted to represent all 40 indicators using these results, the number of components would be 12 (2 for time + 2 for cost + 3 for productivity + 5 for quality) instead of the 4 components initially expected. Since the objective of PCA is to represent variables in a small number of principal components, we can infer that 12 components is not a good result. Moreover, the cumulative proportions of data variance explained by these 2 components of cost and 5 of quality are still low, at 67% and 76.9%, respectively.

Looking at the indicator versus component tables in Figures 6.6, 6.7, 6.8 and 6.9, specifically at the columns of principal components with standard deviations > 1, it is possible to see that a great quantity of indicators is allocated to more than one component, which is not desirable in PCA results. The worst results are seen for the quality and productivity dimensions, with more than half of the indicators allocated to at least two components. Therefore, we conclude that indicators are not related just by their dimensions.
[Figure 6.6: PCA results for quality indicators. The figure contains the loadings table for the 13 quality indicators against PC1–PC10, the component summary table (PC1–PC5 have standard deviations higher than one, with a cumulative proportion of 76.9%) and the scree plot.]
[Figure 6.7: PCA results for productivity indicators. Loadings table for the 13 productivity indicators against PC1–PC8, component summary table (PC1–PC3 have standard deviations higher than one, with a cumulative proportion of 80.6%) and scree plot.]
[Figure 6.8: PCA results for time indicators. Loadings table for the 8 time indicators against PC1–PC8, component summary table (PC1 and PC2 have standard deviations higher than one, with a cumulative proportion of 85.7%) and scree plot.]
[Figure 6.9: PCA results for cost indicators. Loadings table for the 6 cost indicators against PC1–PC6, component summary table (PC1 and PC2 have standard deviations higher than one, with a cumulative proportion of 67.1%) and scree plot; the PC1 and PC2 loadings yield Equations 6.1 and 6.2.]

6.4.3.2 PCA with all 40 indicators


Another PCA is performed with all 40 indicators together, and the results are shown in Figure 6.10. The information presented in Figure 6.10 is the same as described previously for each dimension. The difference is in the indicator versus component table, which shows only the columns of principal components (PC) with standard deviations higher than one (the criterion used to choose the number of components to retain). Moreover, the minimum loading value is reduced to 0.2 (|loading| ≥ 0.2). This limit is empirically chosen based on the loading results for the first component.
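The designation of indicators to components can be read directly off the rotation matrix; a sketch, assuming pca is the prcomp result for all 40 indicators:

```r
# Listing the indicators designated to each retained component (a sketch)
cutoff <- 0.2                        # minimum absolute loading used here
L <- pca$rotation
for (k in which(pca$sdev > 1)) {     # the retained components
  cat(colnames(L)[k], ":",
      rownames(L)[abs(L[, k]) >= cutoff], "\n")
}
```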

Defining the number of components to retain by the scree plot (on the right side of Figure 6.10), one could choose the first two, PC1 and PC2. Indeed, these are the components that best explain the variables' variance, and the sharp drop in the plot occurs at that point. From the standard deviation perspective, there are 10 components with standard deviations higher than one, suggesting the use of all of them to represent the indicators. Comparing the results for two versus ten components, we see that with 2 components 19 indicators are excluded from the analysis, whereas with 10 components none of them is excluded. Regarding the number of indicators designated to several components, with 2 components there is no indicator repetition, while with 10 components 17 indicators are allocated to more than one PC. Moreover, two components explain 44% of the data variation whereas ten components represent 86%. This situation establishes a trade-off between the two options.

As the analysis carried out in this thesis aims to aggregate as many indicators as possible, we initially consider the 10 principal components in the model. The main reason for this choice is that this result can be improved in Section 7.2 to obtain the final integrated model. However, in situations where there is doubt about the number of components to retain, it is very important to analyze whether the components make sense and are in accordance with the warehouse reality. A framework illustrating the results of the indicator versus component table of Figure 6.10 is built in Figure 6.11.

The names inside the blue rectangles are chosen according to the activities of the most relevant set of indicators that the component encompasses. For example, C1 (which is derived from the PC1 column of Figure 6.10) is named `Outbound Performance' because the majority of the indicators in this component are related to the replenishment, picking, shipping and delivery activities. Likewise, C3 (representing the PC3 column of Figure 6.10) is named `Inventory Utilization' because indicators related to stocks and space utilization are comprised in the component. The exception is C2, named `Mixed Performance' because half of its indicators are linked to delivery and the other half to inbound activities. This component in particular does not present a good result, since there is no physical relation among these outbound and inbound indicators (as can be seen in the Jacobian and correlation matrices). This probably happens because some indicators have only very low correlations, and their data confuse the PCA tool during the establishment of indicator relationships. In Section 7.2 this result is analyzed again, and these indicators will probably be discarded from the analysis to improve the final PCA result.

From Figure 6.11, we note a tendency in the indicators' aggregation: the majority of indicators are grouped in components according to their measurement domain. This means that indicators are usually grouped with others from different dimensions, but all metrics are from the same warehouse area.

[Figure 6.10: Result of principal component analysis for all 40 indicators. Loadings table for the 40 indicators against PC1–PC10 (the components with standard deviations higher than one) and component summary table (cumulative proportion of 86.5% for these ten components), with the scree plot.]



[Figure 6.11: Framework of the PCA result for all 40 indicators (cut-off level of 0.2 in absolute value). The ten components are named after the activities of their indicators: C1 Outbound Performance; C2 Mixed Performance; C3 Inventory Utilization; C4 Ship Quality; C5 Order Quality; C6 Inbound Time; C7 Outbound Productivity; C8 Inventory Availability; C9 Product Movement Quality; C10 Stock Quality. Indicators are colour-coded by their cost, quality, time or productivity dimension.]

C1, for example, is formed of productivity, time and cost indicators, all of them related to outbound activities. There are some exceptions among the quality indicators: C4, C5 and C8 are components containing only quality indicators. This is particularly interesting since these indicators also share data with indicators of other dimensions (such as cost, time and productivity).

Comparing the results of the PCA performed for each dimension with the PCA for all indicators at once, we conclude that the second analysis provides better outcomes when the components are compared in a practical sense. That is, the indicators aggregated in components without dimension distinction seem to be more consistent with reality. Thus, the PCA result for all 40 indicators is used in the next chapter as the basis for the integrated model development. As indicators with low correlations are also included in the framework presented, the next chapter analyzes which indicators should be excluded from the group to improve the PCA outcome.

6.5 Conclusions
In this chapter, we have created a scenario for the standard warehouse to generate the data used to calculate indicators. This scenario represents the warehouse shop floor with its flow of products throughout the processes. An Excel® spreadsheet is elaborated with data following uniform and normal distributions, which demonstrate the effect of chained processes. This initial dataset is used to calculate the performance indicators, which are employed in the mathematical tools.



A data sample of one month and the complete analytical model are coupled with the CADES® software to calculate the Jacobian matrix. The assessment of the Jacobian matrix is part of an exhaustive procedure developed to infer indicator relationships, which calculates the partial derivative matrix of the complete analytical model, encompassing the indicator and data equations.

From the results attained, we can conclude that it is very hard to quantitatively determine the intensity of the relationship between indicators from the partial derivatives. The procedure described in this chapter is, therefore, used to qualitatively analyze their interactions, providing a preliminary view of indicator relationships and verifying whether the results are coherent from an analytical point of view.

Further, the whole dataset (100 months) of indicator measures is used to apply the statistical tools. The correlation matrix and the principal component analysis are the main tools used to determine indicator relationships quantitatively and how the indicators could be aggregated to estimate the integrated performance. The PCA does not provide good results either for the dimensions aggregated separately or for the total group of indicators. The problems are mainly related to inconsistencies in the indicator groups (some indicators of the same component have no relationship among them) and to the great quantity of indicators designated to more than one component. One reason for these problems may be the variables not correlated with others, which can mislead the statistical model. Thus, the next chapter evaluates these variables, proposing an improved integrated model for warehouse performance measurement.


Chapter 7
Model Solving, Implementation and Update

If it were easy it would have been done already.


Unknown

Contents

7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126


7.2 Analysis of Jacobian and Correlation matrix to improve PCA results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
7.3 Integrated performance model proposition . . . . . . . . . . . . 133
7.4 Scale for the Integrated Indicator . . . . . . . . . . . . . . . . . . 136
7.4.1 The analytical model adjustment . . . . . . . . . . . . . . . . . . . . . . 136
7.4.2 Objective function definition . . . . . . . . . . . . . . . . . . . . . . . 138
7.4.3 The choice of the optimization algorithm . . . . . . . . . . . . . . . . . . 139
7.4.4 The setting of the optimization tool . . . . . . . . . . . . . . . . . . . . . 139
7.4.5 The integrated indicator scale . . . . . . . . . . . . . . . . . . . . . . . . 142
7.5 Integrated Model Implementation . . . . . . . . . . . . . . . . . . . 145
7.6 Model Update . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Abstract
This chapter proposes the integrated model to evaluate warehouse performance. To attain this objective, the results obtained from different sources are analyzed to determine the best number of components to consider in the model. Moreover, a scale is developed for the integrated model using an optimization tool, which defines the upper and lower limits of the scale from the maximization and minimization results. The integrated model with its scale is tested in two different warehouse performance situations, verifying that the integrated model can help managers better evaluate the warehouse as a whole.

7.1 Introduction
The last chapter has presented the application of some methods to analyze indicator relationships. The results obtained with the measurement of relationships using the Jacobian matrix, the correlation matrix and the PCA method are analyzed to propose an integrated performance model.

The objective of this model is to be used by managers in their periodic warehouse performance evaluations. In order to help the interpretation of the integrated model results, a scale is also proposed, using the analytical model as a basis to perform an optimization which defines the upper and lower limits of the integrated indicator.

Afterwards, the use of the final model with the developed scale is detailed, along with a discussion of how to update the model when necessary.

7.2 Analysis of Jacobian and Correlation matrix to improve PCA results
Chapter 6 presented indicator relationships measured by the Jacobian matrix, the correlation matrix and the principal component analysis. To attain the final integrated model, the Jacobian and correlation matrices are used as decision support to improve the PCA result, which defines the basis of the aggregated model. From the PCA performed for all 40 indicators, presented in Section 6.4.3.2, we verified that some indicators do not fit the model well, probably because of their low correlation with other indicators. Moreover, the retention of 10 principal components can be seen as a high number considering the 40 input variables. In such cases, the analyst should find the best balance between simplicity (retaining as few components as possible, which causes the exclusion of indicators) and completeness (explaining most of the data variation).

In this thesis, the initial suggestion of which indicators should be discarded from the model comes from an analysis of the Jacobian and correlation matrices. First, we list the worst outcomes obtained in the Jacobian (using Table 6.4) and in the correlation matrix (using Table 6.5). For the Jacobian, the worst results are represented by the indicators with the lowest number of shared data; for the correlation matrix, the indicators with the lowest correlation values are the worst results. Second, the two lists of worst results are compared to suggest which indicators should be discarded from the model. Table 7.1 summarizes these results in three parts: the top of the table presents the indicators with bad results in both analyses; the middle of the table lists indicators with bad correlation results together with their corresponding number of shared data (from Table 6.4) in the right column; the bottom of the table shows the opposite: the indicators with a low number of shared data (from Table 6.4) are listed with their correlation measurements (from Table 6.5).

The analysis of Table 7.1, which suggests a priority order of indicators to discard, is presented as follows.

From this initial list of 15 indicators presented in Table 7.1, we can see that 4 of them have no correlations higher than 0.4 (|r| < 0.4). As PCA does not fit a good model with variables having no significant correlations, these indicators (EqDp, Maintc, Recq and Invq) are the first candidates to be discarded.

Table 7.1: The indicators with the worst Correlation and Jacobian results.

Indicator | Correlation worst results | Jacobian worst results
EqDp | |r| ≤ 0.4 with all indicators | Shares 1 data with 20 indicators
Maintc | |r| ≤ 0.4 with all indicators | Shares 2 data with CSc
Recq | |r| ≤ 0.4 with all indicators | Shares 2 data with 4 indicators
Repq | |r| = 0.41 with Scrapq | Shares 2 data with 3 indicators
Pickq | |r| = 0.5 with StockOutq | Shares 2 data with 4 indicators

Indicator | Correlation worst results | Jacobian results
Invq | |r| ≤ 0.4 with all indicators | Shares 2 data with 14 indicators
Stoq | |r| = 0.46 with Scrapq | Shares 2 data with 7 indicators
StockOutq | |r| = 0.5 with Pickq and |r| = 0.4 with Invc | Shares 1 data with 6 indicators, 2 data with 3 indicators, 5 data with Invc
Scrapq | |r| = 0.4 with Recq, |r| = 0.41 with Repq and |r| = 0.46 with Stoq | Shares 1 data with 5 indicators, 2 data with 7 indicators, 4 data with Thp and Labp

Indicator | Correlation results | Jacobian worst results
Shipq | |r| = 0.8 with OrdFq and OTShipq | Shares 2 data with 7 indicators
OTShipq | |r| = 0.9 with OrdFq and |r| = 0.8 with Shipq | Shares 2 data with 7 indicators
OrdFq | |r| = 0.9 with OTShipq and |r| = 0.8 with Shipq | Shares 2 data with 7 indicators
CustSatq | |r| = 0.6 with OTDelq and |r| = 0.7 with PerfOrdq | Shares 2 data with 9 indicators
OTDelq | |r| = 0.6 with CustSatq and |r| = 0.9 with PerfOrdq | Shares 2 data with 9 indicators
PerfOrdq | |r| = 0.7 with CustSatq and |r| = 0.9 with OTDelq | Shares 2 data with 9 indicators

However, Invq shares data with a great quantity of indicators, demanding a deeper analysis. To determine the sequence of exclusion for the indicators, we use the decreasing order presented in Table 7.1 (i.e. the indicators with no correlations higher than 0.4 and few shared data are deleted first).

The exclusion of each indicator is confirmed if a better PCA outcome is attained. Five aspects are considered in the analysis of the PCA results: (i) the number of principal components (PC) with σ > 1 should be as small as possible; (ii) the cumulative proportion of data explained by the PCs should be as high as possible; (iii) the number of indicators designated to more than one component should be as low as possible; (iv) the loading signs should be in accordance with the indicators' objectives; (v) the indicators grouped in each component should have a physical explanation in a warehouse context. These five criteria come from the literature on PCA application. The first three aspects are quantitative and are used throughout the analysis of indicator exclusion. The last two are evaluated when the exclusion of an indicator produces only small changes in the quantitative aspects.

In the cases where the exclusion of an indicator does not improve the PCA result, the indicator is kept in the model and the next one on the list is tested. In this manner, all indicators of Table 7.1 are tested one by one.
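One evaluation step of this procedure can be sketched in R as follows (function and variable names are illustrative); it recomputes the three quantitative criteria after dropping a candidate indicator:

```r
# Re-evaluate the PCA quality criteria after dropping one candidate
pca_quality <- function(X, cutoff = 0.2) {
  p <- prcomp(X, center = TRUE, scale. = TRUE)
  keep <- which(p$sdev > 1)
  hits <- abs(p$rotation[, keep]) >= cutoff
  list(n_pc   = length(keep),                         # criterion (i)
       cumvar = sum(p$sdev[keep]^2) / sum(p$sdev^2),  # criterion (ii)
       multi  = sum(rowSums(hits) > 1))               # criterion (iii)
}
for (cand in c("EqDp", "Maintc", "Recq")) {  # first candidates of Table 7.1
  print(pca_quality(X_std[, colnames(X_std) != cand]))
}
```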

Table 7.2 shows the outcomes for each PCA, detailing the three quantitative parameters

used to analyze the quality of the result.



Table 7.2: Steps performed to attain the final indicator group.

Step | Indicator eliminated | Indicators left | PCs with σ > 1 | Indicators in more than one component | Cumulative proportion of retained PCs
0 | — | 40 | 10 | 17 | 86.5%
1 | EqDp, Maintc | 38 | 10 | 18 | 89%
2 | Recq | 37 | 9 | 14 | 88.6%
3 | Repq | 36 | 9 | 14 | 90.5%
4 | Stoq | 35 | 8 | 12 | 89.3%
5 | Pickq | 34 | 7 | 15 | 87.5%
6 | StockOutq | 33 | 7 | 15 | 89.8%

Only the exclusions that improved the PCA results are shown in Table 7.2. The PCA result after the exclusion of the three worst indicators (EqDp, Maintc, Recq) is improved, with one PC less than in step zero (see Table 7.2), fewer indicators designated to more than one component than before, and a data explanation of 88,6%, compared with 86,5% in step zero.

In the end, the exclusion of the majority of the indicators with low or medium correlations in Table 7.1 (EqDp, Maintc, Recq, Repq, Stoq, Pickq and StockOutq) improves the PCA result, providing a higher cumulative proportion of data explanation and a decrease in the number of PCs (from 10 to 7). Invq and Scrapq are the only exceptions, being kept in the model because their exclusion causes worse results.

Even if the detailed PCA outcome for step 5 is not shown, we highlight that StockOutq is excluded from the final group because it is not designated to any PC, i.e., its loadings for all PCs are lower than 0.2 (|loading| < 0.2).

Analyzing the indicators listed in Table 7.1 but not excluded from the analysis, we might conclude that the information provided by the correlation and the Jacobian are complementary, because some indicators with low correlations have a great quantity of shared data (e.g. Scrapq), preventing their exclusion.

Finally, the group of indicators considered for the aggregated model comprises 33 of the initial 40, and the PCA result is detailed in Figure 7.1.

Figure 7.1 shows that PC1 explains 40% of the data variance (table at the bottom of the figure) and incorporates almost half of the indicators (14 of 33 in total; see the first column of the indicator versus PC table).

Initially, the indicators are considered in a component when the loadings are higher than 0.2 in absolute value (|loading| ≥ 0.2). Nevertheless, this minimum loading value causes some problems in component 2. The first inconsistency is the inclusion of the TrUtp and OTDelq indicators in component two, where the majority of indicators are related to inbound activities. The second problem is the sign of Rect, which should be negative instead of positive. As the absolute loading values in component one are at least 0.22, we define this value as the new cut-off level (|loading| ≥ 0.22). According to PennState (2015a), the definition of which number is considered a large or small loading is a subjective decision. In the work of Lu and Yang (2010), the authors include in the model only loadings higher than 0.5; however, they consider their criterion very conservative.
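For illustration, the designation of indicators to components for a given cut-off can be checked directly on the loading matrix; a minimal R sketch, assuming pca holds the prcomp result for the 33 indicators:

    cutoff <- 0.22
    designated <- abs(pca$rotation[, 1:7]) >= cutoff  # indicator x PC logical matrix
    sum(rowSums(designated) > 1)                      # indicators designated to more than one PC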

Switching the absolute cut-off level to 0.22, a new PCA result is obtained (see Figure 7.2), with two main differences from the previous result (Figure 7.1). Firstly, the indicator TrUtp continues to be inappropriately designated to PC2, since it refers to the utilization of the delivery truck while all other indicators in this component are related to inbound activities. However, if this indicator is eliminated, the global results of the other components become worse. Therefore, the indicator is maintained in the final model. Secondly, changing the cut-off level reduces to 8 the number of indicators designated to more than one component, improving the final result.

The signs of the loadings in Figure 7.2 should be in accordance with the indicator objectives. For cost and time indicators, the sign must be negative, whereas for productivity and quality ones, the sign must be positive to represent a better performance. When a component equation (presented in the next section) shares both types of loadings, it should be interpreted considering that the greater the resulting value, the better the performance.



PC1 PC2 PC3 PC4 PC5 PC6 PC7


CSc -0,22 -0,05 0,08 -0,05 0,07 -0,09 0,38
CustSatq 0,02 0,17 -0,22 -0,01 0,40 0,00 -0,04
Delp 0,25 -0,14 0,06 -0,04 0,07 0,01 0,12
Delt -0,26 0,13 -0,06 0,05 -0,07 -0,03 -0,09
DSt -0,03 0,18 0,12 -0,07 -0,13 0,61 0,05
Invc -0,03 0,10 0,43 -0,17 0,13 -0,07 -0,18
Invq 0,03 0,05 0,07 0,03 -0,08 -0,31 -0,13
InvUtp -0,02 0,11 0,44 -0,17 0,15 -0,08 -0,16
Labc -0,20 -0,24 0,05 -0,07 0,10 0,18 -0,10
Labp 0,24 0,08 -0,07 0,07 -0,09 0,02 -0,29
OrdFq -0,01 -0,05 0,22 0,51 0,04 0,04 0,01
OrdLTt -0,26 0,13 -0,06 0,05 -0,07 -0,03 -0,09
OrdProcc -0,24 0,17 -0,06 0,06 -0,10 -0,06 -0,10
OTDelq -0,02 0,21 -0,18 -0,03 0,46 0,09 -0,09
OTShipq -0,02 -0,07 0,18 0,53 0,03 0,03 0,04
PerfOrdq 0,00 0,18 -0,17 -0,04 0,51 0,08 0,02
Pickp 0,25 -0,14 0,06 -0,04 0,08 0,02 0,13
Pickt -0,25 0,14 -0,06 0,05 -0,08 -0,04 -0,10
Putt -0,15 -0,37 -0,07 -0,02 0,09 0,07 -0,22
Recp 0,15 0,36 0,07 0,03 -0,09 -0,07 0,22
Rect -0,02 0,21 0,12 -0,07 -0,14 0,59 0,07
Repp 0,24 0,07 -0,07 0,06 -0,08 0,03 -0,28
Rept -0,25 -0,07 0,07 -0,04 0,07 -0,04 0,28
Scrapq -0,01 -0,14 0,02 -0,24 -0,34 -0,06 0,06
Shipp 0,25 -0,13 0,05 -0,05 0,07 0,01 0,12
Shipq 0,00 -0,05 0,20 0,50 0,12 0,07 0,03
Shipt -0,26 0,13 -0,05 0,06 -0,07 -0,04 -0,09
Stop 0,15 0,37 0,08 0,01 -0,09 -0,07 0,21
Thp 0,24 0,08 -0,07 0,07 -0,09 0,02 -0,29
TOp 0,07 -0,13 -0,41 0,15 -0,15 0,17 -0,06
Trc -0,24 0,18 -0,08 0,06 -0,09 -0,05 -0,11
TrUtp 0,17 0,29 -0,04 0,08 -0,11 -0,20 0,06
WarUtp 0,00 -0,01 0,33 -0,11 0,05 0,01 -0,41

                         PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8
Standard deviation       3,65   2,01   1,94   1,65   1,54   1,37   1,25   0,94
Proportion of Variance   0,40   0,12   0,11   0,08   0,07   0,06   0,05   0,03
Cumulative Proportion    0,40   0,53   0,64   0,72   0,79   0,85   0,90   0,92

Figure 7.1: PCA result for the final group of 33 indicators with |loadings| ≥ 0.2.

PC1 PC2 PC3 PC4 PC5 PC6 PC7


CSc -0,22 -0,05 0,08 -0,05 0,07 -0,09 0,38
CustSatq 0,02 0,17 -0,22 -0,01 0,40 0,00 -0,04
Delp 0,25 -0,14 0,06 -0,04 0,07 0,01 0,12
Delt -0,26 0,13 -0,06 0,05 -0,07 -0,03 -0,09
DSt -0,03 0,18 0,12 -0,07 -0,13 0,61 0,05
Invc -0,03 0,10 0,43 -0,17 0,13 -0,07 -0,18
Invq 0,03 0,05 0,07 0,03 -0,08 -0,31 -0,13
InvUtp -0,02 0,11 0,44 -0,17 0,15 -0,08 -0,16
Labc -0,20 -0,24 0,05 -0,07 0,10 0,18 -0,10
Labp 0,24 0,08 -0,07 0,07 -0,09 0,02 -0,29
OrdFq -0,01 -0,05 0,22 0,51 0,04 0,04 0,01
OrdLTt -0,26 0,13 -0,06 0,05 -0,07 -0,03 -0,09
OrdProcc -0,24 0,17 -0,06 0,06 -0,10 -0,06 -0,10
OTDelq -0,02 0,21 -0,18 -0,03 0,46 0,09 -0,09
OTShipq -0,02 -0,07 0,18 0,53 0,03 0,03 0,04
PerfOrdq 0,00 0,18 -0,17 -0,04 0,51 0,08 0,02
Pickp 0,25 -0,14 0,06 -0,04 0,08 0,02 0,13
Pickt -0,25 0,14 -0,06 0,05 -0,08 -0,04 -0,10
Putt -0,15 -0,37 -0,07 -0,02 0,09 0,07 -0,22
Recp 0,15 0,36 0,07 0,03 -0,09 -0,07 0,22
Rect -0,02 0,21 0,12 -0,07 -0,14 0,59 0,07
Repp 0,24 0,07 -0,07 0,06 -0,08 0,03 -0,28
Rept -0,25 -0,07 0,07 -0,04 0,07 -0,04 0,28
Scrapq -0,01 -0,14 0,02 -0,24 -0,34 -0,06 0,06
Shipp 0,25 -0,13 0,05 -0,05 0,07 0,01 0,12
Shipq 0,00 -0,05 0,20 0,50 0,12 0,07 0,03
Shipt -0,26 0,13 -0,05 0,06 -0,07 -0,04 -0,09
Stop 0,15 0,37 0,08 0,01 -0,09 -0,07 0,21
Thp 0,24 0,08 -0,07 0,07 -0,09 0,02 -0,29
TOp 0,07 -0,13 -0,41 0,15 -0,15 0,17 -0,06
Trc -0,24 0,18 -0,08 0,06 -0,09 -0,05 -0,11
TrUtp 0,17 0,29 -0,04 0,08 -0,11 -0,20 0,06
WarUtp 0,00 -0,01 0,33 -0,11 0,05 0,01 -0,41

                         PC1    PC2    PC3    PC4    PC5    PC6    PC7    PC8
Standard deviation       3,65   2,01   1,94   1,65   1,54   1,37   1,25   0,94
Proportion of Variance   0,40   0,12   0,11   0,08   0,07   0,06   0,05   0,03
Cumulative Proportion    0,40   0,53   0,64   0,72   0,79   0,85   0,90   0,92

Figure 7.2: PCA result for the 33 indicators with |loadings| ≥ 0.22.

[Figure 7.3: The indicators eliminated from the analysis and a framework of the final group with 33 indicators. The top of the figure lists the 7 variables excluded from the first PCA result (EqDp, Maintc, Recq, Repq, Stoq, Pickq, StockOutq); the center shows the 33 remaining indicators, color-coded by dimension (productivity, cost, quality, time) and grouped into six components with a cut-off level of 0,22 in absolute value: C1 Outbound Performance, C2 Inbound Performance, C3 Shipping Quality, C4 Order Quality, C5 Inventory Utilization and C6 Inbound Time.]

Regarding the number of PCs to use in the aggregated model, the scree plot suggests that 2 components are a good trade-off between the variance explained and the number of components (the sharp drop point in the plot). However, we want to maintain the same number of indicators in the model. Analyzing the indicator versus PC table of Figure 7.2, we can see that PC7 is just a repetition of indicators already designated to previous components. Thus, the performance indicators are aggregated in the first six components (PC1 up to PC6). Figure 7.3 summarizes the results, showing at the top of the figure the indicators eliminated from the model and at the center the final framework with six components (named C1 up to C6).

Analyzing the loading signs, we can see that some of the cost and time indicators do not have negative signs as expected, and the same happens for some quality and productivity indicators. Among the six components, the loadings of C1, C2, C4 and C5 have the right signs, while those of C3 and C6 present signs opposite to the indicators' objectives. The R documentation states that the signs are defined arbitrarily and that, if it is necessary to change them, the change should be made for all loadings of the component. Therefore, the signs of the indicators in components three and six are inverted when the component equations are used to find a scale for the integrated model interpretation.
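In R, this inversion is a one-line operation on the loading matrix; a sketch, assuming pca is the prcomp result:

    # Invert all loading signs of components 3 and 6 (signs within a PC are arbitrary)
    pca$rotation[, c(3, 6)] <- -pca$rotation[, c(3, 6)]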

The next section presents the final integrated model for warehouse performance management.

7.3 Integrated performance model proposition


Section 4.4.1 presents a generic group of equations to describe the integrated performance model. In this section, these equations are rewritten according to the result obtained in Figure 7.2. Equation 7.1 up to Equation 7.6 present the six chosen components with their loadings. We recall that the signs of C3 and C6 are modified as explained in the previous section. The modified signs are highlighted in red in the equations.

C1 = −0,22 × CSc + 0,25 × Delp − 0,26 × Delt + 0,24 × Labp − 0,26 × OrdLTt
     − 0,24 × OrdProcc + 0,25 × Pickp − 0,25 × Pickt + 0,24 × Repp
     − 0,25 × Rept + 0,25 × Shipp − 0,26 × Shipt + 0,24 × Thp − 0,24 × Trc    (7.1)

C2 = −0,24 × Labc − 0,37 × Putt + 0,36 × Recp + 0,37 × Stop + 0,29 × TrUtp    (7.2)

C3 = +0,22 × CustSatq − 0,43 × Invc − 0,44 × InvUtp + 0,41 × TOp − 0,33 × WarUtp    (7.3)

C4 = +0,51 × OrdFq + 0,53 × OTShipq − 0,24 × Scrapq + 0,50 × Shipq    (7.4)

C5 = +0,40 × CustSatq + 0,46 × OTDelq + 0,51 × PerfOrdq − 0,34 × Scrapq    (7.5)

C6 = −0,61 × DSt + 0,31 × Invq − 0,59 × Rect    (7.6)

It is important to highlight that the indicator values entered in Equation 7.1 up to 7.6 must be standardized before their inclusion in the equations (see Section 6.4).

Once the standardized indicator results are inserted in the equations, their variance is reduced, making it possible to verify which indicators most influence the component result through the loading values. For example, in Equation 7.5 the indicators OTDelq and PerfOrdq have the highest loading values, demonstrating that they are more important in C5 than CustSatq and Scrapq. However, not all components present this distinction between indicators. For instance, in the first component equation (C1, Equation 7.1) the loading values are very similar for all indicators, resulting in nearly the same absolute numerical impact on the C1 result.

Equation 7.1 up to Equation 7.6 constitute the model to measure the integrated performance with six component equations. Depending on the manager's objectives, it is possible to choose just one component to evaluate performance, probably the most important one for the company's goals. In this case, the aggregation stops here and the manager loses the great quantity of information considered in the other components. If the six component equations are used to analyze the warehouse performance, it is necessary to develop a scale for each component, allowing the manager to evaluate each group of indicators separately. However, this does not seem a practical choice if the objective is to analyze the global warehouse performance. The component results are very subjective and difficult to compare with other components, even if there is a scale for each one to help this interpretation.

As the main idea of this work is to define a model which aggregates all indicators to facilitate the global performance interpretation, we propose summing all principal components into a unique measure, defining a global indicator as described in Equation 7.7.

GP = Σ_{i=1}^{m} n_i × C_i    (7.7)

where GP is the global performance, C_i is principal component i, with i = 1, ..., m, and n_i is the weight defined for component i.


In this dissertation, the weight of each component is considered equal, and each n_i of Equation 7.7 is defined as 1/m (m = 6 in our case). Nevertheless, the manager can adjust each weight according to the company's goals and strategy, defining some of them as more or less important than the others.
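The chain from raw indicator values to GP can be sketched compactly in R; here x, mu and sigma are hypothetical named vectors (raw monthly values and the fixed means and standard deviations of Appendix I), and loads collects the loading vectors of Equations 7.1 up to 7.6:

    standardize <- function(x, mu, sigma) (x - mu) / sigma

    global_performance <- function(x, mu, sigma, loads) {
      z <- standardize(x, mu, sigma)
      comps <- sapply(loads, function(l) sum(l * z[names(l)]))  # C1, ..., C6
      mean(comps)                                               # equal weights n_i = 1/m
    }

    # Example entry of 'loads', taken from Equation 7.6:
    loads <- list(C6 = c(DSt = -0.61, Invq = 0.31, Rect = -0.59))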

Finally, the integrated performance measurement model comprises Equation 7.1 up to Equation 7.7. Figure 7.4 shows the framework with the indicators aggregated into components (left side of the figure) and the components composing the global performance, GP (right side of the figure).

To interpret the GP result it is necessary to formulate a scale, which is developed in the next section.


[Figure 7.4: The integrated performance measurement model comprises: (1) the final aggregated model for warehouse indicators, i.e., the 33 indicators grouped into the six components C1 Outbound Performance, C2 Inbound Performance, C3 Shipping Quality, C4 Order Quality, C5 Inventory Utilization and C6 Inbound Time (cut-off level of 0,22 in absolute value); and (2) the proposed global indicator GP, which combines the six components.]

7.4 Scale for the Integrated Indicator


The procedure performed in this section can be used for a single component (if the manager considers just one) as well as for the proposed global performance GP (Equation 7.7). In our case, we present a scale for GP.

In summary, it is necessary to define the following aspects to obtain a scale by using optimization (presented in Figure 4.5):

1. Analytical model adjustment;

2. Objective function;

3. Optimization algorithm.

Each of these aspects is presented in the following sections.

7.4.1 The analytical model adjustment


The first analytical model, used for the Jacobian matrix assessment, needs to be adjusted to perform the optimization. The adjustments mainly consist of the inclusion of new equations in the model, namely:

• Component equations, i.e., Equation 7.1 up to 7.6;

• Equations standardizing indicator values;

• Equations to limit the optimization search space.

The last two kinds of equations are presented in the next sections.

7.4.1.1 Equations standardizing indicator values


Equations 7.1 up to 7.6 require standardized indicator values to calculate the components. As the data inputs of the analytical model are not standardized, it is necessary to include equations which shift the indicator values to standardized ones. Thus, 33 equations like Equation 7.8 (i.e., one for each indicator) are added to the model. The mean and standard deviation values inserted for each indicator are taken from their data generation. The complete list is presented in Appendix I.

OrdFq_NORM = (OrdFq − Mean_OrdFq) / σ_OrdFq    (7.8)

7.4.1.2 Equations to limit optimization search space


Some equations defining data dependencies are included in the optimization model to limit the optimization search space, so that the results fall within reasonable practical values. This is done by constraining some additional variables. These equations have also been defined in the spreadsheet used for data generation.

The complete list of equations and the optimization model are presented in Appendix H.

As an example, let us analyze Equation A.3. If it is not included in the model, the optimization algorithm treats the variables Cor Unlo and Prob Unlo as independent. However, in practice, they must respect the relationship defined by Equation A.3.

Pal Unlo = Cor Unlo + Prob Unlo    (A.3)

Other equations limiting the optimization search space are identified by the prefix Ctrl. One example is Ctrl_0 (Equation 7.9), which determines that the total effective working hours of the administrative employees can not be higher than the total number of administrative working hours available in a month.

Ctrl_0 → WH Admin ≥ H Adminsto + H Adminrep + H Adminpick + H Adminship + H Admindel + H Adminorders    (7.9)

Other examples are related to the warehouse product flow, preventing one activity from processing more products than the previous one. For instance, Ctrl_1 (Equation 7.10) defines that the number of pallets stored can not be higher than the total of pallets unloaded in the whole month. Other constraints similar to Ctrl_1 are Ctrl_2, Ctrl_3, Ctrl_4 and Ctrl_5 (Equations 7.11, 7.13, 7.14 and 7.16, respectively). Some terms used in these equations are defined in Appendix A.

Ctrl_1 → Pal Unlo ≥ Pal Sto    (7.10)

The replenishment is the activity of reallocating pallets from the bulk storage area to the forward picking area. Due to its characteristics, there are two constraints related to this activity (Ctrl_2 and Ctrl_2A). As the forward picking stock usually has a limited space, the products are not replenished if there are no orders to be fulfilled (Ctrl_2, Equation 7.11). Similarly, the total number of pallets moved to the forward area can not exceed the number of pallets stored plus the inventory remaining from the previous month (named `Remain inv') (Ctrl_2A, Equation 7.12).

Ctrl_2 → (Cust Ord × Prod Ord) / Prod pal ≥ Pal Moved    (7.11)

Ctrl_2A → Pal Sto + Remain inv / Prod pal ≥ Pal Moved    (7.12)

Regarding the number of orders picked during a month, Ctrl_3 states that it can not be higher than the number of customer orders received (Cust Ord). In Equation 7.13, Line Ord means the average number of lines per customer order, and is used to convert the number of order lines picked (OrdLi Pick) into the same unit as customer orders.

Ctrl_3 → Cust Ord ≥ OrdLi Pick / Line Ord    (7.13)

Ctrl_4 and Ctrl_4A have the same meaning; only the units are different. In Equation 7.14, the number of orders shipped can not exceed the total of orders picked, and Equation 7.15 expresses the same limit in terms of number of products.

Ctrl_4 → OrdLi Pick / Line Ord ≥ Ord Ship    (7.14)

Ctrl_4A → OrdLi Pick × Prod Line ≥ Prod Proc    (7.15)

As presented for the previous warehouse activities, Ctrl_5 represents the limitations imposed by the activity flows. Equation 7.16 determines that the number of orders shipped (Ord Ship) is higher than or equal to the number of orders delivered (Ord Del).

Ctrl_5 → Ord Ship ≥ Ord Del    (7.16)

Finally, Ctrl_6 defines that the number of orders delivered on time (Ord Del OT) is always greater than or equal to the number of orders delivered on time, without damages and with correct documents (Ord OT, ND, CD), since this last one demands more order requirements than just on-time delivery.

Ctrl_6 → Ord Del OT ≥ Ord OT, ND, CD    (7.17)
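As a sketch, these flow controls can be expressed as boolean checks on a named list v of monthly model variables (hypothetical names, following the terms of Appendix A):

    ctrl_ok <- with(v, c(
      ctrl_1  = Pal_Unlo >= Pal_Sto,                            # Equation 7.10
      ctrl_2A = Pal_Sto + Remain_inv / Prod_pal >= Pal_Moved,   # Equation 7.12
      ctrl_3  = Cust_Ord >= OrdLi_Pick / Line_Ord,              # Equation 7.13
      ctrl_5  = Ord_Ship >= Ord_Del,                            # Equation 7.16
      ctrl_6  = Ord_Del_OT >= Ord_OT_ND_CD                      # Equation 7.17
    ))
    stopifnot(all(ctrl_ok))  # every flow constraint must hold for a feasible month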

After the definition of the optimization limits by these equations, only the definition of the objective function is missing; it is presented in the next section.

7.4.2 Objective function definition


The objective function is determined by the GP equation, Equation 7.7, which calculates a weighted mean of all components defined in the PCA. The maximization (Equation 7.18) and the minimization (Equation 7.19) of GP achieve the best and worst possible performances, respectively, which are considered the upper and lower limits of the scale. As defined in Section 7.3, we assume that the weights are equal for all components in the GP equation.

It is important to note that these best and worst performances are only related to the warehouse studied, and can not be generalized to other warehouses. The main reason is that the optimization search space is established according to the warehouse conditions (e.g. processing capacity, number of employees).

max GP = (1/6) × [C1 + C2 + C3 + C4 + C5 + C6]    (7.18)

min GP = (1/6) × [C1 + C2 + C3 + C4 + C5 + C6]    (7.19)
After determining the analytical model and the objective function, we define, in the next section, the chosen optimization algorithm and the results obtained.

7.4.3 The choice of the optimization algorithm


The analytical model that has been created has many outputs that must be constrained in order to solve the problem. Thus, we are interested in algorithms able to deal with several constraints. To that end, the fast and deterministic SQP (Sequential Quadratic Programming) algorithm has been chosen.

The main reason for this choice is the possibility to manage tens, hundreds or even thousands of unknown parameters in a constrained-output problem. Coupling the model with the SQP requires the determination of the Jacobian matrix associated with the model outputs. The CADES Component Optimizer® has the SQP algorithm built into the software, and it is used for the optimization.

7.4.4 The setting of the optimization tool


The optimization model implemented in CADES comprises: the analytical model, the objective function, the component equations, the 33 equations to standardize indicators and the equations used to limit the optimization search space.

When the model is compiled, it generates an icar component containing the input and output relationships and the associated Jacobian matrix. In order to use the SQP to solve the problem, the inputs must be set with an initial value. Additionally, the inputs can also be left free to vary in a range, defining the optimization search space. The outputs can be left free to vary, have a fixed value assigned, or be constrained in a range. Figure 7.5 illustrates the setting of the inputs and outputs.

One of the potential problems that may arise from the utilization of the SQP is that the solution may depend on the starting values of the inputs (local minimum). Therefore, it is good practice to test several combinations of these initial values in order to increase the possibility of finding a global optimum. Such an investigation is made to define the initial values of the inputs used in the optimization study presented in this chapter.
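A sketch of this multi-start practice is given below. The function run_sqp stands in for a call to the SQP solver and is stubbed only so that the example executes; lb and ub are toy input bounds.

    run_sqp <- function(start) list(GP = -sum((start - 0.5)^2))   # stub solver
    lb <- rep(0, 5); ub <- rep(1, 5)
    starts <- replicate(20, runif(length(lb), lb, ub), simplify = FALSE)
    gp_values <- vapply(starts, function(s) run_sqp(s)$GP, numeric(1))
    best_start <- starts[[which.max(gp_values)]]   # keep the best optimum found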

Regarding the limits proposed for the variables, they need to fit the conditions of the studied warehouse. It is important to incorporate the manager's opinion in the definition of the possible upper and lower limits that the warehouse can attain, in order to develop achievable scale boundaries. In our case, the variable limits are established based on some predefined warehouse characteristics (e.g. warehouse capacity) and according to the limits presented by the generated data.

The constraints defined in Section 7.4.1.2 must be adapted to be used in CADES. CADES requires the definition of a minimum and a maximum value for each constraint output. Therefore, the inequality equations must be rewritten. For example, Equation 7.16 is modified into the following form to define the minimum value:

Ctrl_5 → Ord Ship − Ord Del ≥ 0    (7.20)

The maximum limit set in CADES for Equation 7.20 is determined by the maximum allowed value of Ord Ship. All the constraints are defined in CADES using the same principle.

[Figure 7.5: The options provided by CADES Component Optimizer® for input and output variables.]

The variables are classified as inputs, intermediate outputs and outputs. The input limits shown in Table 7.3 are separated by type of data, and the unit of each variable is presented in brackets. Some variables are considered fixed in the optimization, such as the product cost (Prod Cost) and the oil price ($ oil).


To establish the limits presented in Table 7.3 and Table 7.4, we consider that the standard warehouse has the capacity to process up to 40.000 products per month (the mean value defined in the data generation is 28.000, with a standard deviation of 2.000) and a maximum of 3.000 orders. Transforming the 40.000 products into number of pallets (each pallet holds 40 products), we have 1.000 pallets as the inbound capacity for the unloading and storing activities. For replenishment, the limit is 2.000 pallets because we consider the sum of the stock capacity (1.000 pallets) and the inbound capacity (1.000 pallets).

We note that the variables Prob OrdLi Pick, Prob Rep, Prob Sto and Prob Unlo (see Table 7.4) have limits smaller than the ones defined for the shipping and delivery activities (20 pallets for Prob Sto and Prob Unlo instead of 1.000 pallets; 40 orders for Prob OrdLi Pick and Prob Rep instead of 3.000 orders). The reason for these limits is the absence of quality indicators related to these activities in the component equations; consequently, the optimization model does not maximize or minimize these inputs. Therefore, we establish 2% of the total capacity as the maximum quantity of problems each activity can have (as done in the data generation).



Table 7.3: Input limits for optimization.

INPUT LIMITS

Time data [unit] (limits in hours)            Max            Min
β_del                                         1              0,3
β_ord                                         1              0,3
β_pick                                        1              0,3
β_rec                                         1              0,3
β_rep                                         1              0,3
β_ship                                        1              0,3
β_sto                                         1              0,3
HAdmindel [hour]                              210            1
HAdminpick [hour]                             210            1
HAdminrec [hour]                              210            1
HAdminrep [hour]                              210            1
HAdminship [hour]                             210            1
HAdminsto [hour]                              210            1

Picking, Shipping and Delivery data [unit]    Max            Min
Prod noAvail [orders]                         3000           0
No_OT_del [orders]                            3000           0
No_OT_ship [orders]                           3000           0
No Cust Complain [orders]                     3000           0
NoComplet Ord Ship [orders]                   3000           0
Other_Prob_pick [orders]                      40             0
Other_Prob_del [orders]                       3000           0
Other_Prob_ship [orders]                      3000           0
Cor OrdLi Pick [orders]                       3000           0
Cor OrdLi Ship [orders]                       3000           0
Cor Del [orders]                              3000           0
scrap4 [orders]                               40             0
scrap5 [orders]                               3000           0
scrap6 [orders]                               3000           0

Replenishment data [unit]                     Max            Min
Cor Rep [pallet]                              2000           0
error data system 3 [pallet]                  40             0
scrap3 [pallet]                               40             0
Other_Prob_rep [pallet]                       40             0

Unloading and Storing data [unit]             Max            Min
Cor Sto [pallet]                              1000           0
Cor Unlo [pallet]                             1000           0
scrap1 [pallet]                               20             0
scrap2 [pallet]                               20             0
Other_Prob_sto [pallet]                       20             0
Other_Prob_unlo [pallet]                      20             0
error data system 1 [pallet]                  20             0
error data system 2 [pallet]                  20             0

Cost data (limits in $)                       Max            Min
Maintc                                        R$ 50 000,0    R$ 1 000,0
Truck Maint C                                 R$ 200 000,0   R$ 50,0

Fixed data [unit]                             Value
N [nb of components]                          6
pal_truck [pallet]                            25
Prod Cost [R$]                                R$ 100,00
$ oil [R$]                                    R$ 2,20

Other data [unit]                             Max            Min
War WH [hour]                                 210            80
Prod Ord [product]                            30             10
war used area [m2]                            4000           1000
nb_Travel [travels]                           300            1
mean_Insp [h]                                 1              0,1
Cust Ord [orders]                             3000           10

Table 7.4: Limits for intermediate outputs.

INTERMEDIATE OUTPUT LIMITS

Constraints [unit]          Max      Min
CTRL_0 [hour]               210      0,1
CTRL_1 [pallet]             1000     0
CTRL_2 [pallet]             2000     0
CTRL_2A [pallet]            2000     0
CTRL_3 [order]              3000     0
CTRL_4 [order]              3000     0
CTRL_4A [product]           50000    0
CTRL_5 [order]              3000     0
CTRL_6 [order]              3000     0

Component equations         Limits
C1 up to C6                 FREE

Data [unit]                 Max      Min
aveinv [product]            80000    1
Prob Data [pallet]          80       0
Cust Complain [orders]      3000     0
ΔT(Insp) [hour]             FREE
nb_trucks [trucks]          FREE
Prod noAvail [products]     50000    0
Ord Del OT [orders]         3000     0
Ord OT, ND, CD [orders]     3000     0
Ord Ship OT [orders]        3000     0
PalProcInv [pallets]        4000     0
Prob OrdLi Pick [orders]    40       0
Prob OrdLi Ship [orders]    3000     0
Prob Del [orders]           3000     0
Prod Proc [products]        40000    0
Prob Rep [pallet]           40       0
Prob Sto [pallet]           20       0
Prob Unlo [pallet]          20       0
Remain_Inv [products]       40000    0
WarCapUsed                  5000     500
Pal Sto [pallet]            1000     300
Pal Unlo [pallet]           1000     300
Pal Moved [pallet]          2000     500
OrdLi Pick [orders]         3000     700
Ord Ship [orders]           3000     700
Ord Del [orders]            3000     700

The final output limits are presented in Table 7.5. The ranges of the indicator values are defined very wide, and the cost indicators are left free. The cost indicators are not constrained since their possible results are a consequence of several other variables.

The results for the maximization and minimization are presented in the next section.

7.4.5 The integrated indicator scale


The maximization and minimization results for the final outputs are shown in Table 7.6. The corresponding results for the inputs and the intermediate outputs are presented in Appendix J. It is worth making some remarks about the optimization outcomes.

We establish that the number of warehouse working hours, War WH, can vary between 80 and 210 hours per month (see Table 7.3). In the maximization, War WH converges to 80 hours (equivalent to 10 working days in a month), whereas the minimization results in 210 hours, equivalent to 25 working days in a month (see the last table at the bottom of Appendix J). In the 80 hours, 40.000 products are shipped (Prod Proc); in the 210 hours, just 8.790 products. It means that if time is efficiently used, the excess capacity becomes apparent.

Table 7.5: Limits for final outputs.

FINAL OUTPUT LIMITS

Time indicators [unit]              Max        Min
OrdLTt [h/order]                    500        0,05
DSt [h/pallet]                      200        0,02
Delt [h/order]                      200        0,02
Pickt [h/order]                     200        0,02
Putt [h/pallet]                     200        0,02
Rect [h/pallet]                     200        0,02
Rept [h/pallet]                     200        0,02
Shipt [h/order]                     200        0,02

Productivity indicators             Max        Min
Thp                                 1500       0
Labp                                200        0,1
Delp                                200        0,01
Pickp                               200        0,01
Recp                                200        0,01
Repp                                200        0,01
Shipp                               200        0,01
Stop                                200        0,01
TOp                                 50         0
TrUtp                               100%       0%
InvUtp                              105%       0%
WarUtp                              100%       0%

Quality indicators (limits in %)    Max        Min
CustSatq                            100%       0%
Invq                                100%       0%
OrdFq                               100%       0%
OTDelq                              100%       0%
OTShipq                             100%       0%
PerfOrdq                            100%       0%
Shipq                               100%       0%
Scrapq                              100%       0%

Cost indicators (limits in $)       Max        Min
CSc                                 R$ 1,00    0,00
Invc                                FREE
Labc                                FREE
OrdProcc                            FREE
Trc                                 FREE

Global performance                  Max        Min
GP                                  150,0      -150,0

Table 7.6: Output results after maximization and minimization.

FINAL OUTPUT RESULTS

Time indicators [unit]     Maximization     Minimization
OrdLTt [h/order]           0,09             2,79
DSt [h/pallet]             0,05             1
Delt [h/order]             0,02             0,6
Pickt [h/order]            0,03             1,2
Putt [h/pallet]            0,02             0,69
Rect [h/pallet]            0,03             0,32
Rept [h/pallet]            0,03             0,422
Shipt [h/order]            0,03             0,9

Productivity indicators    Maximization     Minimization
Thp                        500              41,8
Labp                       55,5             4,65
Delp                       18,75            1,67
Pickp                      9,37             0,83
Recp                       25               3,43
Repp                       12,5             2,38
Shipp                      12,5             1,1
Stop                       25               3,43
TOp                        2                0,88
TrUtp                      100%             5,9%
InvUtp                     50%              25%
WarUtp                     32%              86%

Quality indicators         Maximization     Minimization
CustSatq                   100%             0%
OrdFq                      100%             100%
OTDelq                     100%             0%
OTShipq                    100%             0%
PerfOrdq                   100%             0%
Invq                       100%             99,2%
Shipq                      100%             0%
Scrapq                     0%               37,6%

Cost indicators            Maximization     Minimization
CSc                        R$ 0,09          R$ 1,00
Invc                       R$ 200 000,00    R$ 100 000,00
Labc                       R$ 5 987,20      R$ 15 718,50
OrdProcc                   R$ 0,15          R$ 0,54
Trc                        R$ 0,70          R$ 292,80

Global performance         Maximization     Minimization
GP                         15,35            -123,27

As expected, the maximization results for the time indicators are low and those for the productivity indicators are high (see Table 7.6). The capacity measures InvUtp and WarUtp have low values, demonstrating that the warehouse can process more products thanks to its extra capacity. The InvUtp values (see Table 7.6) show the maximization having higher results than the minimization. The reason is the quantity of products processed in each situation. As described above, the number of products shipped in the minimization is almost 5 times smaller than in the maximization, which reduces the number of products that pass through the inventory (see Appendix J). Consequently, the same occurs for the Invc indicator, since in the minimization the average inventory is 10.000 products and in the maximization 20.000 products.

The Invq and OrdFq indicators present minimization values near the maximum (see Table 7.6). In the OrdFq case, the optimizer prioritizes the reduction of the indicators with the highest loadings in the component equations. Another point is an optimization model restriction, which prevents an order from having more than one kind of problem. As the loading of OrdFq is 0,51 and that of OTShipq is 0,53 (Equation 7.4), the software prefers to make all orders shipped late but complete. In the case of Invq, the reason is the 2% established as the maximum number of problems for the unloading, storing, replenishment and picking activities. This decision is also reflected in the Scrapq indicator.

Finally, it is important to discuss the GP results. As Table 7.6 shows, the variation range is 138,65, with a maximum of 15,35 and a minimum of -123,3. The reason for this expressive difference between the positive and negative values comes from the means and standard deviations established for the performance indicators. As these fixed values are used to standardize the indicators included in the component equations, when an indicator value in a month is lower than its mean, the result of the standardized indicator is negative. For example, Equation 7.8 presents the standardization of OrdFq. Considering that the average of OrdFq is 97% and that, in a given month, the OrdFq value is 95%, the standardized indicator has a negative sign because the performance is lower than the average. Analyzing Table I.1 in Appendix I, it is possible to see that the majority of quality indicators have means equal to or higher than 99%. It means that there is little room for performance improvement, which is reflected by the low value of 15,35 as the upper scale limit.

To support the scale interpretation, we transform the scale limits from [-123,3; 15,35] to [0; 100] (see Figure 7.6).

Using traditional scale transformation rules, Equation 7.21 converts values from the optimized scale (OS) to the normal scale (NS).

(OS + 53,975) / 138,65 = (NS − 50) / 100  →  NS = 100 × (OS + 53,975) / 138,65 + 50    (7.21)
To exemplify the use of Equation 7.21, let us verify the corresponding value in the normal scale (NS) for the zero value in the optimized scale (OS). The zero value in OS signifies that all indicators are equal to their means. Applying Equation 7.21, the result in the normal scale is 88,93. We can infer from this result that, globally, the warehouse already has a good performance.
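Equation 7.21 translates into a one-line conversion function; a sketch in R:

    os_to_ns <- function(os) 100 * (os + 53.975) / 138.65 + 50
    os_to_ns(0)        # 88.93: all indicators equal to their means
    os_to_ns(15.35)    # 100: best attainable performance
    os_to_ns(-123.3)   # 0: worst attainable performance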

The next section explains how the integrated model and scale are implemented and how they should be used in practice.

Optimized Scale (OS)    Normal Scale (NS)
+15,35                  100
-53,975                 50
-123,3                  0

Figure 7.6: Scale transformation.

7.5 Integrated Model Implementation


Having finished the model and scale development, we present in this section how to use the integrated performance for periodic management.

The model parts used for periodic management are: the 33 indicator equations (presented in Chapter 5); the 6 component and GP equations (Equations 7.1 up to 7.7); and the optimized scale with its transformation to the normal scale (Equation 7.21). These equations can be included in a spreadsheet to facilitate data updates. Every month the indicator values are updated and all other formulas can be automatically calculated.

For example, Table 7.7 shows the results for all 33 indicators in two different months. The component and GP values for each month (in the optimized scale, OS, and the normal scale, NS) are given in Table 7.8.

The indicator values are established in order to evaluate warehouse performance in two different situations. In month 1, we consider that the inbound activities have some performance problems, affecting their indicators of time, productivity and quality (they are lower than the average). In this example, the indicators related to the replenishment activity are also considered to have problems. The outbound indicators, on the other hand, have very good results, higher than the average. In month 2 the opposite situation is established: the inbound indicators have good performance whereas the outbound indicators have bad results.

It is interesting to note that the global performance of the first month is better than that of the second one. This result could support some managers' practice of preferring to improve the outbound activities.

To attain a performance result in accordance with the warehouse reality, it is imperative to use an updated model. Usually, new situations in the market impact the enterprises (and also their warehouses), requesting a reevaluation of the initial model. The next section describes when and how to update the model.

Table 7.7: Indicator values for two different months.

Indicator    Month 1    Month 2
CSc          0,3        0,3
CustSatq     100        92
Delp         6          3,5
Delt         0,15       0,35
DSt          0,95       0,4
Invc         250000     70000
Invq         95         100
InvUtp       75         55
Labc         10500      16000
Labp         15         13
OrdFq        100        98
OrdLTt       1          1,8
OrdProcc     0,6        0,92
OTDelq       100        94
OTShipq      100        94
PerfOrdq     100        90
Pickp        2,8        1,8
Pickt        0,3        1
Putt         0,2        0,08
Recp         5          10
Rect         0,8        0,4
Repp         3          5
Rept         0,3        0,1
Scrapq       2          7
Shipp        5          1,5
Shipq        99,5       96
Shipt        0,3        0,45
Stop         5          10
Thp          170        100
TOp          1,5        0,9
Trc          2,4        5
TrUtp        87         75
WarUtp       50         55

Table 7.8: GP result for two different months.

C          Month 1    Month 2
C1         6,24       -4,02
C2         -5,16      3,41
C3         -0,99      -3,73
C4         2,69       -11,48
C5         2,99       -23,79
C6         -19,50     5,03
GP (OS)    -2,29      -5,76
GP (NS)    87,28      84,77
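The NS values of Table 7.8 follow directly from the OS results through Equation 7.21; reusing the conversion sketch from Section 7.4.5:

    os_to_ns <- function(os) 100 * (os + 53.975) / 138.65 + 50
    os_to_ns(c(-2.29, -5.76))   # 87.28 and 84.77, matching Table 7.8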

7.6 Model Update


It is difficult to establish a fixed period of time to review the integrated model. It depends on the variability of the market and on changes in warehouse capacity (structural and human) or goals.

In this work the updates are classified as minor or major. The minor revisions are related to small changes requiring a new optimization to update the scale. Examples are:

• the component weights in the GP equation can be reconsidered after changes in strategic goals (e.g. the warehouse wants to be faster than its competitors);

• the fixed warehouse conditions (e.g. number of pallet spaces, employees, equipment) should be updated in the indicator equations and in the scale model.

The major updates usually require the remodeling of the entire methodology. Some examples are: changes in the indicator equations; modification of the variable limits in the optimization model due to big changes in capacity or processes. Moreover, the indicator relationships can change over time, and the manager needs to revise the model when he observes this tendency.

7.7 Conclusions
This chapter presents the final integrated performance model, with a scale used to analyze the integrated indicator results.

To determine the final integrated performance model, an analysis of the Jacobian and correlation results is carried out in order to improve the PCA outcome. The main objectives are to keep as many indicators as possible with a minimum number of principal components. The comparison of the worst results obtained from the Jacobian and correlation matrices establishes the order in which indicators should be excluded.

In the end, seven indicators are eliminated from the analysis and the remaining 33 are designated to six different principal components. It is interesting to note that, of the seven indicators, five are related to quality measures in the receiving, storing, replenishment and picking activities.

The six component equations compose the global performance measure. The GP is optimized to obtain the upper and lower values of the GP scale. The method used to define the optimization model can be generalized; however, each warehouse should construct its own optimization model, since it is necessary to define the variable limits according to the warehouse reality. The optimized scale, OS, is transformed into a so-called normal scale, NS, to facilitate the interpretation of the aggregated indicator.

Finally, the utilization of the aggregated model is tested by simulating two different warehouse performances. In the first situation, the outbound indicators have their performance improved and the inbound measures have bad results. For the second test we define the opposite: the outbound indicators have bad results whereas the inbound indicators are great. The global performance indicator provides a better result when the outbound indicators are better.

Regarding the exclusion of quality indicators of some warehouse activities (during the PCA analysis) and the result of the test considering different indicator results, these findings might confirm that time and productivity are the essential performance axes for the majority of internal warehouse activities, and that the quality level must be guaranteed at the end of the process chain, with measures related to customer satisfaction. However, this hypothesis needs to be tested in different kinds of warehouses to allow us to make such inferences.


Chapter 8
Conclusions and suggestions for future research

If we attribute unexplained phenomena to chance, it is only because of the gaps in our knowledge.
Pierre Simon de Laplace

Contents

8.1 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150


8.2 Future Research Directions . . . . . . . . . . . . . . . . . . . . . . . 153
8.2.1 Short-term Research Directions . . . . . . . . . . . . . . . . . . . . . . . 153
8.2.2 Long-term Research Directions . . . . . . . . . . . . . . . . . . . . . . . . 154

Abstract
This chapter is divided into two main sections: firstly, the general conclusions about the developments carried out throughout this thesis are discussed with regard to the objectives presented in Chapter 1; secondly, research directions are proposed in two subsections, which split the suggestions by their complexity into short-term and long-term future research.

8.1 Conclusions
A dissertation is developed to attain predefined objectives. The conclusions serve as a check of the accomplishments against these goals, closing the loop. In the following items we review the objectives presented in Chapter 1 and discuss the outcomes achieved.

• Definition and classification of warehouse performance indicators: From the structured literature review on warehouse performance carried out in this thesis, the warehouse performance indicators extracted from papers are classified as direct or indirect measures. Direct indicators are usually expressed by simple mathematical expressions whereas indirect indicators consist, in many cases, of a concept measure. Even if there is a tendency in the literature to develop "indirect measures", they are not used for daily management since they require a great quantity of data, which are sometimes difficult to obtain. Therefore, we can conclude that direct indicators continue to be the basis for warehouse performance measurement.

The main insight coming from the literature analysis is that, for the direct indicators, there is not always a consensus on the definitions of some of the indicators and their boundaries across the warehouse, resulting in different measures for the same metric. Therefore, we present indicator definitions based on the paper database when the definitions are given, or based on the best common sense when they are not provided.

An activity-based framework is developed to clarify the boundaries of the indicators obtained from the literature. In this framework we classify indicators not only according to the quality, cost, time and productivity dimensions, but also in terms of warehouse activities (receiving, storage, picking, shipping and delivery). The most frequently used indicators are labor productivity, throughput, on-time delivery, order lead time and inventory costs. The result of this classification shows that the number of outbound indicators is much higher than the number of inbound indicators. This is not very surprising, as warehouse activities are getting more and more customer oriented. It reveals that the outbound processes/activities are considered more critical than the inbound ones and hence are subject to more control.

• Creation of a methodology to determine an integrated warehouse performance measurement: It consists of four main steps executed to achieve the best aggregation of the indicator set according to their relationships. The main outcomes are a few (or just one) equation(s) used to measure the global performance, with a scale allowing the interpretation of the results.

The proposed methodology encompasses different disciplines to achieve the aggregated model: the analytical model and the Jacobian matrix measurement to analyze indicator relationships; the statistical tools to propose indicator groups; the optimization model to develop the scale for the integrated indicator. This multidisciplinary approach permits the construction of a sound model to manage warehouse performance. Moreover, the methodology can be viewed as general; it gives some alternatives that one can choose from when developing one's own integrated model. Each warehouse can present different objectives, processes and particularities, and the fact of not specifying all parameters allows the adaptation of the methodology to specific situations.


• Development of an analytical model of performance indicators and data equations: This is the first step of the methodology application, and it is considered an outcome of the thesis because performance measurement usually does not evaluate how the indicators are measured.

To apply the methodology, it is necessary to identify the indicator set that will be used to evaluate warehouse performance. In our application, performed on a theoretical warehouse, the metric system to assess the standard warehouse performance is defined, firstly based on the literature review. After some adjustments, a total of 41 indicators compose the metric system, representing all activities that the standard warehouse has in charge.

Even if the analytical model can not be generalized, it could be adapted to warehouses with similar operations or serve as a reference for the development of further models.

The most interesting kind of indicators not found in the literature are the ones related to the replenishment activity. Indeed, we have not found any indicator dimension related to this activity. The inclusion of replenishment indicators in our analytical model brings new information for managers to better evaluate the warehouse performance.

• Discovery of a method to determine indicator relationships analytically: The use of the Jacobian matrix to identify indicator relationships is one of the most innovative contributions of this thesis, even if further developments should be done to allow its sole utilization to support decisions.

The Jacobian matrix contains the partial derivatives of the outputs with respect to the independent inputs. To identify the independent inputs of the indicator equations, the latter are expanded, creating the data equations. This group of equations builds the complete analytical model, which describes analytically all relations among data. The utilization of the Jacobian matrix in this thesis is characterized as an exhaustive procedure from which we can make inferences about indicator relationships. An evaluation of the results provided by the Jacobian matrix (indicators x data) permits the development of a square matrix (indicators x indicators) which informs, in its cells, the number of data shared among performance indicators.

This result is compared with the correlation matrix of the indicators. We note that the majority of indicators with very low correlations corroborate the indicators sharing the least amount of data in the Jacobian matrix. However, the results are not conclusive, since there are exceptions, and the number of shared data can not define the strength of a relationship as the correlation matrix does. We merely verify that the information provided by the correlation and the Jacobian seems to be complementary, since some indicators with a great quantity of shared data but very low correlations are maintained in the integrated model.

Finally, we conclude that it is very hard to quantitatively determine from the partial derivatives the intensity of the relationship between indicators. The procedure described is only used in this thesis to quantify the number of shared data, which provides a preliminary view of indicator relationships and verifies whether the results are coherent from an analytical point of view.

• Determination of an optimization model to design a scale for the integrated performance: The literature about scale definition is vast, but a scale is usually defined for a single variable. For instance, the quality and productivity performance indicators are evaluated using different scales. Thus, the development of a scale for several variables is less common. There are some propositions in the literature to overcome this issue. In this thesis, we use an optimization approach to obtain the upper and lower limits of the performance scale.

The optimization model contains the integrated performance model (composed of six component equations), the analytical model with indicator and constraint equations, and the global performance indicator (which is the aggregation of the components into one measure). The method used to define the optimization model can be generalized; however, each warehouse should construct its own optimization model since it is necessary to define the variable limits according to the warehouse reality.

The algorithm used to perform the optimization is the SQP, which can handle several constraints. However, it is very sensitive to the starting values defined for the inputs. Tests are made to reduce the chances of getting stuck in a local minimum, but other kinds of tests to verify the results are not done. We believe that this first optimization attained reasonable results regarding the purpose of this thesis, facilitating the interpretation of the aggregated indicator.

Since the specific objectives are achieved, we conclude that the same happens for the general one: Development of a methodology for an integrated warehouse performance evaluation through indicators' aggregation.

The methodology application achieves an integrated model which keeps the majority of the indicators initially proposed, using a minimum number of principal components to represent them. This is a very good result, since one of the objectives of this thesis is to develop a tool that helps managers in the evaluation of a great quantity of information. The usability of the integrated model with its scale is tested with indicator values of two different months. In the first month the outbound indicators have their performance improved and the inbound measures are worse; in the second month the opposite is simulated. The case in which the outbound indicators are prioritized attains the better global performance.

Finally, we conclude that the methodology proposed in this thesis achieves the objective of providing insights about indicator relationships, the global warehouse performance and its relative evaluation through the utilization of a performance scale.

In summary, the main contributions provided by this thesis are:

1. the clarification of warehouse indicator concepts, defining their boundaries;

2. the framework to classify performance indicators according to their dimensions and warehouse activities;

3. the transformation of indicators' definitions into equations;

4. the development of the complete analytical model with indicator and data equations;

5. the use of the Jacobian matrix to verify indicator relationships;

6. the model used to generate the database for the standard warehouse, including data variability and chained processes;

7. the global performance indicator, a unique measure aggregating several indicators from different dimensions;

8. the scale development using an optimization approach;

9. the aggregation of several different methods (basic statistical tools, partial derivative analysis, optimization tools, dimension-reduction methods) in a unique methodology.

8.2 Future Research Directions


This section is divided into two subsections because the suggestions presented here differ considerably in development time. The short-term research directions treat new studies on the warehouse performance subject and possible applications of the methodology. The long-term research directions are, from our point of view, new developments that demand more study and time to be accomplished.

8.2.1 Short-term Research Directions


In this section, we report some new developments that could improve the results obtained in this dissertation.

• The first one is the application of the proposed methodology in a real warehouse, comparing the results obtained in theory with the practice.

• In future studies, it will be interesting to incorporate into the analysis other indicators that are not considered in this work, for example, measures related to reverse logistics activities, administrative productivity and sustainable practices.

• The SEM (Structural Equation Modeling) method is usually used to verify if a predefined model (i.e. a framework defining variable relationships) fits the data. As our study is exploratory (we did not know how the indicators would be aggregated), we did not use this method in the thesis. However, from the proposed integrated model it is possible to make a confirmatory test using SEM. It is important to note that the application of SEM using autocorrelated data (e.g. time series) requires special mathematical manipulations.

• An interesting study consists in the utilization of different dimension-reduction statistical tools to compare with the indicator relationships obtained with the PCA method. Among these tools, DFA (Dynamic Factor Analysis) theory suggests it as the best suited for our study purpose due to the data characteristics. An initial test is performed in this thesis (Appendix F), but the results are not consistent and reliable, indicating that more studies should be carried out for DFA utilization.

• The investigation of using the Jacobian matrix to measure the strength of indicator relationships is another point for improvement. The suggestion here is to find a manner to transform the partial derivatives into coefficients interpreted similarly to those of the correlation matrix. One suggestion could be to standardize the input data and calculate the Jacobian to analyze the relationships, verifying which are strong or weak. However, to be sure that the results are reliable for defining relation strengths, it is necessary to determine a standard Jacobian matrix. The complexity of constructing the standard Jacobian resides in the input data used to calculate the partial derivatives. As they come from the time series, which change each new period, the Jacobian result consequently also changes over time.

8.2.2 Long-term Research Directions


Warehouses are essential for logistics operations and they have been extensively studied in the literature. However, the research effort focusing on warehouse performance measurement is not as abundant as for logistics performance. Based on the tendencies identified in the selected papers, we highlight several future research directions in warehouse management, as follows:

• Regarding the kinds of problems treated by the literature on the warehouse performance subject, we identify new study tendencies in two main directions: the assessment of relationships among different warehouse performance areas (e.g. the degree of automation influencing warehouse productivity (De Koster and Balk, 2008)); and the evaluation of concepts not usually expressed as ratios and, therefore, not yet measured (e.g. VAL activities (De Koster and Waremius, 2005)).

• There are different types of warehouses. For instance, a manufacturing company can own a warehouse in which only its own products are processed. A warehouse could be a distribution center, or be owned by a third-party logistics provider and treat products coming from several different suppliers. Or, a warehouse could be a retailer's warehouse. In all these cases, the key performance issues can differ since the goals may differ. Similarly, the management policies within a warehouse may also affect the way performance needs to be measured. For instance, for a warehouse implementing crossdocking techniques, time-related performance measures are more crucial than for those that do not implement this technique. One future research direction is to investigate to what extent the warehouse type influences the choice of indicators for performance evaluation.

• The performance of administrative personnel in warehouse operations is another point for analysis. The indicators found in the papers usually focus on operational labor. However, the administrative process also plays an important role in warehouse performance. For instance, indicators like order lead time and the number of perfect orders are directly impacted by administrative task performance. Nevertheless, the performance of the warehouse administration is not measured separately, and its impact on the other performance indicators is rarely investigated. This could be another research direction to improve global warehouse performance.



• Indicators about reverse logistics have already been developed, for example to evaluate backorder operations. The productivity and costs of these operations are important for the enterprise as a whole since they involve customer satisfaction. However, papers integrating these operations with the main warehouse performance indicators are still missing. Papers regarding the impact of returns on forward warehouse processes could bring some insights into this issue.

• An important subject in progress is the issue of sustainability in logistics. Sellitto et al. (2011) measure the environmental performance of logistics operations, comparing emissions and waste indicators with the maximum levels allowed by ISO 14001. Matopoulos and Bourlakis (2010) go further, including indicators of the three pillars of sustainability (economic, environmental, social) to evaluate warehouses. Sustainable operations have been widely studied in past years, but the inclusion of such metrics in warehouse management still offers a fruitful area for examination.

Regarding specifically the methodology proposed in this dissertation, other studies can be suggested.

Firstly, we propose a study verifying the applicability of the proposed methodology to strategic areas (e.g. enterprise performance). One important point that should be verified is the set of indicators used in the analytical model. Since strategic performance encompasses other actors of the supply chain (e.g. suppliers, third-party logistics providers, stakeholders) besides the focal company, the inclusion of indicators strongly influenced by external factors can make the evaluation of performance difficult, because it restricts the actions that could improve the results.

Secondly, the generalization of the proposed scale is another point for development. A suggestion is to define it through a benchmarking study, evaluating the best practices among companies of the same sector and determining the scale from the results obtained. This development presents considerable difficulties, such as determining a single analytical model for all companies (which may compete in the same sector but with different strategies) and defining the optimization limits, given the diverse situations found among enterprises.

Finally, we have observed, over the last decade, an increasing complexity in warehouse operations. This complexity is well demonstrated by the implementation of sophisticated IT tools in warehouses and DCs. Since 2000, more complex algorithms and simulations have started to appear in publications on warehouse management, usually proposing the utilization or development of decision support systems for performance evaluation and performance improvement in warehouses. Information systems, such as the warehouse management system (WMS), are recognized as useful means to manage resources in the warehouse (Lam et al., 2011). The use of information systems in warehouse management is a growing tendency, and the related new technologies (e.g. augmented reality, RFID, Internet of Things) will certainly influence the way performance is measured and used for decision making in the future. Therefore, studies regarding the impact and use of these new technologies to measure and evaluate warehouse performance are welcome.


Bibliography

Abdi, H., Williams, L. J., and Valentin, D. (2013). Multiple factor analysis: principal component analysis for multitable and multiblock data sets. Wiley Interdisciplinary Reviews: Computational Statistics, 5(2):149–179.

Andrejić, M., Bojović, N., and Kilibarda, M. (2013). Benchmarking distribution centres using Principal Component Analysis and Data Envelopment Analysis: A case study of Serbia. Expert Systems with Applications, 40:3926–3933.

Autry, C. W., Griffis, S. E., Goldsby, T. J., and Bobbitt, L. M. (2005). Warehouse Management Systems: Resource Commitment, Capabilities, and Organizational Performance. Journal of Business Logistics, 26(2):165–183.

Banaszewska, A., Cruijssen, F., Dullaert, W., and Gerdessen, J. C. (2012). A framework for measuring efficiency levels - The case of express depots. International Journal of Production Economics, 139(2):484–495.

Beamon, B. M. (1999). Measuring supply chain performance. International Journal of Operations & Production Management, 19(3):275–292.

Bentler, P. M. and Chou, C.-P. (1987). Practical Issues in Structural Modeling. Sociological Methods & Research, 16(1):78–117.

Berrah, L., Mauris, G., Haurat, A., and Foulloy, L. (2000). Global vision and performance indicators for an industrial improvement approach. Computers in Industry, 43(3):211–225.

Bertrand, J. W. M. and Fransoo, J. C. (2002). Operations management research methodologies using quantitative modeling. International Journal of Operations & Production Management, 22(2):241–264.

Bisenieks, J. and Ozols, E. (2010). The problem of warehouse operation, its improvement and development in company's logistics system. Human Resources: The Main Factor of Regional Development, (3):206–213.

Bititci, U. S. (1995). Modelling of performance measurement systems in manufacturing enterprises. International Journal of Production Economics, 42:137–147.



Böhm, A. C., Leone, H. P., and Henning, P. (2007). Industrial Supply Chains: Performance Measures, Metrics and Benchmarks. In Plesu, V. and Agachi, P. S., editors, 17th European Symposium on Computer Aided Process Engineering - ESCAPE17, pages 757–762.

Bolker, B. (2007). Dynamic models. http://ms.mcmaster.ca/bolker/emdbook/, (Date Accessed: 2014-12-10):pages 1–30.

Bowersox, D. J., Closs, D. J., and Cooper, M. B. (2002). Supply Chain Logistics Management. McGraw-Hill, Michigan State University, first edition.

Cagliano, A. C., De Marco, A., Rafele, C., and Volpe, S. (2011). Using system dynamics in warehouse management: a fast-fashion case study. Journal of Manufacturing Technology Management, 22(2):171–188.

Cai, J., Liu, X., Xiao, Z., and Liu, J. (2009). Improving supply chain performance management: A systematic approach to analyzing iterative KPI accomplishment. Decision Support Systems, 46(2):512–521.

Campos, A. J. C. (2004). Metodologia para elaboração de sistema integrado de avaliação de desempenho logístico. DEPS - Departamento de Engenharia de Produção e Sistemas. Available at: www.bu.ufsc.br, page 308.

Caplice, C. and Sheffi, Y. (1994). A Review and Evaluation of Logistics Metrics. The International Journal of Logistics Management, 5(2):11–28.

Chan, F. T. and Qi, H. (2003). An innovative performance measurement method for supply chain management. Supply Chain Management: An International Journal, 8(3):209–223.

Chen, C.-C. (2008). An objective-oriented and product-line-based manufacturing performance measurement. International Journal of Production Economics, 112(1):380–390.

Chen, G. (2011). Structural Equation Modeling (SEM) or Path Analysis. http://afni.nimh.nih.gov/sscc/gangc/PathAna.html/document_view, (Date Accessed: 2015-03-29).

Chenhall, R. H. and Langfield-Smith, K. (2007). Multiple Perspectives of Performance Measures. European Management Journal, 25(4):266–282.

Choo, S. (2004). Aggregate Relationships between Telecommunications and Travel: Structural Equation Modeling of Time Series Data. PhD thesis, Hanyang University.

Chow, G., Heaver, T. D., and Henriksson, L. E. (1994). Logistics Performance: Definition and Measurement. International Journal of Physical Distribution & Logistics Management, 24(1):17–28.

Chow, S.-M., Ho, M.-h. R., Hamaker, E. L., and Dolan, C. V. (2010). Equivalence and Differences Between Structural Equation Modeling and State-Space Modeling Techniques. Structural Equation Modeling: A Multidisciplinary Journal, 17(2):303–332.



Clivillé, V., Berrah, L., and Mauris, G. (2007). Quantitative expression and aggregation of performance measurements based on the MACBETH multi-criteria method. International Journal of Production Economics, 105(1):171–189.

Cormier, G. and Gunn, E. A. (1992). A review of warehouse models. European Journal of Operational Research, 58(1):3–13.

Coskun, A. and Bayyurt, N. (2008). Measurement Frequency of Performance Indicators and Satisfaction on Corporate Performance: A Survey on Manufacturing Companies. European Journal of Economics, Finance and Administrative Sciences, (13):79–87.

Costa, G. G. d. O. (2006). An inferential procedure for Factor Analysis using Bootstrap and Jackknife techniques: construction of confidence intervals and tests of hypotheses. PhD thesis, Pontifícia Universidade Católica do Rio de Janeiro.

De Koster, M. B. M. and Balk, B. M. (2008). Benchmarking and Monitoring International Warehouse Operations in Europe. Production and Operations Management, 17(2):175–183.

De Koster, M. B. M. and Warffemius, P. M. J. (2005). American, Asian and third-party international warehouse operations in Europe - A performance comparison. International Journal of Operations & Production Management, 25(7-8):762–780.

De Koster, R., Le-Duc, T., and Roodbergen, K. J. (2007). Design and control of warehouse order picking: A literature review. European Journal of Operational Research, 182(2):481–501.

De Marco, A. and Giulio, M. (2011). Relationship between logistic service and maintenance costs of warehouses. Facilities, 29(9-10):411–421.

Dotoli, M., Fanti, M. P., Iacobellis, G., Stecco, G., and Ukovich, W. (2009). Performance analysis and management of an Automated Distribution Center. In 2009 35th Annual Conference of IEEE Industrial Electronics, pages 4371–4376, Porto. IEEE.

du Toit, S. H. C. and Browne, M. W. (2007). Structural Equation Modeling of Multivariate Time Series. Multivariate Behavioral Research, 42(1):67–101.

Ellinger, A. D., Ellinger, A. F., and Keller, S. B. (2003). Supervisory Coaching Behavior, Employee Satisfaction, and Warehouse Employee Performance: A Dyadic Perspective in the Distribution Industry. Human Resource Development Quarterly, 14(4):435–458.

Enciu, P., Wurtz, F., and Gerbaud, L. (2010). Proposal of a Language for Describing Differentiable Sizing Models for Electromagnetic Devices Design. In 14th Biennial IEEE Conference on Electromagnetic Field Computation - CEFC, pages 1–2, Chicago.

Fabbe-Costes, N. (2002). Évaluer la création de valeur du Supply Chain Management. Logistique & Management, 10(1):29–36.

Federici, A. and Mazzitelli, A. (2005). Dynamic Factor Analysis with STATA. In Italian STATA User Group meeting, pages 1–13, Milan, Italy.

Fernandes, B. H. R. (2006). Competências e desempenho organizacional: o que há além do Balanced Scorecard. Saraiva, São Paulo.

Forslund, H. and Jonsson, P. (2010). Integrating the performance management process of on-time delivery with suppliers. International Journal of Logistics-Research and Applications, 13(3):225–241.

Franceschini, F., Galetto, M., Maisano, D., and Mastrogiacomo, L. (2008). Properties of performance indicators in operations management: A reference framework. International Journal of Productivity and Performance Management, 57(2):137–155.

Franceschini, F., Galetto, M., Maisano, D., and Viticchi, L. (2006). The Condition of Uniqueness in Manufacturing Process Representation by Performance/Quality Indicators. Quality and Reliability Engineering International, 22:567–580.

Frazelle, E. (2001). World-Class Warehousing and Material Handling. McGraw-Hill, New York, NY, USA, first edition.

Fugate, B. S., Mentzer, J. T., and Stank, T. P. (2010). Logistics Performance: Efficiency, Effectiveness, and Differentiation. Journal of Business Logistics, 31(1):43–63.

Gallmann, F. and Belvedere, V. (2011). Linking service level, inventory management and warehousing practices: A case-based managerial analysis. Operations Management Research, 4(1-2):28–38.

Gentle, J. E. (2007). Matrix Algebra - Theory, Computations, and Applications in Statistics. Springer, New York, NY, USA.

Goomas, D. T., Smith, S. M., and Ludwig, T. D. (2011). Business activity monitoring: Real-time group goals and feedback using an overhead scoreboard in a distribution center. Journal of Organizational Behavior Management, 31(3):196–209.

Gu, J., Goetschalckx, M., and McGinnis, L. F. (2007). Research on warehouse operation: A comprehensive review. European Journal of Operational Research, 177(1):1–21.

Gu, J. X., Goetschalckx, M., and McGinnis, L. F. (2010). Research on warehouse design and performance evaluation: A comprehensive review. European Journal of Operational Research, 203(3):539–549.

Gunasekaran, A. and Kobu, B. (2007). Performance measures and metrics in logistics and supply chain management: a review of recent literature (1995–2004) for research and applications. International Journal of Production Research, 45(12):2819–2840.

Gunasekaran, A., Marri, H. B., and Menci, F. (1999). Improving the effectiveness of warehousing operations: a case study. Industrial Management & Data Systems, 99(8):328–339.

Hasson, C. J. and Heffernan, K. S. (2011). Dynamic factor analysis and the exercise sciences. Pediatric Exercise Science, 23(1):17–22.



Holmes, E. E. (2015). An introduction to multivariate state-space models. https://catalyst.uw.edu/workspace/sh203/35553/243771, (Date Accessed: 2015-08-20):pages 4–6.

Holmes, E. E., Ward, E. J., and Scheuerell, M. D. (2014). Analysis of multivariate time-series using the MARSS package. Technical report, Northwest Fisheries Science Center, NOAA, Seattle, USA.

Hoyle, R. H. (2012). Introduction and Overview. In Hoyle, R. H., editor, Handbook of Structural Equation Modeling, chapter 1, pages 3–16. Guilford Publications, New York, NY.

Ilies, L., Turdean, A.-M., and Crisan, E. (2009). Warehouse Performance Measurement - A Case Study. Economic Science Series, 18(4):307–312.

Jha, D. K., Yorino, N., and Zoka, Y. (2008). Analyzing performance of distribution system in Nepal and investigating possibility of reorganization of distribution centers. In 2008 Third International Conference on Electric Utility Deregulation and Restructuring and Power Technologies, Vols 1-6, pages 1312–1317, Nanjing, China.

Jiang, J., Chen, H., and Zhang, X. (2009). Index System of Logistics Performance in Supply Chain. In International Conference on Transportation Engineering 2009, volume 2009, pages 2851–2856.

Johnson, A., Chen, W. C., and McGinnis, L. F. (2010). Large-scale Internet benchmarking: Technology and application in warehousing operations. Computers in Industry, 61(3):280–286.

Johnson, A. and McGinnis, L. (2011). Performance measurement in the warehousing industry. IIE Transactions, 43(3):220–230.

Johnson, R. A. and Wichern, D. W. (2002). Factor Analysis and Inference for Structured Covariance Matrices. In Applied Multivariate Statistical Analysis, chapter 9, pages 477–529. Prentice Hall, Upper Saddle River, NJ - USA, 5th edition.

Jung, H. W. (2013). Investigating measurement scales and aggregation methods in SPICE assessment method. Information and Software Technology, 55(8):1450–1461.

Karagiannaki, A., Papakiriakopoulos, D., and Bardaki, C. (2011). Warehouse contextual factors affecting the impact of RFID. Industrial Management and Data Systems, 111(5):714–734.

Kassali, R. and Idowu, E. O. (2007). Economics of Onion Storage Systems Under Tropical Conditions. International Journal of Vegetable Science, 13(1):85–97.

Katchova, A. (2013). Principal Component Analysis and Factor Analysis. https://sites.google.com/site/econometricsacademy/econometrics-models/principal-component-analysis, (Date Accessed: 2015-03-28):pages 1–10.

Keebler, J. S. and Plank, R. E. (2009). Logistics performance measurement in the supply chain: a benchmark. Benchmarking: An International Journal, 16(6):785–798.



Kennerley, M. and Neely, A. (2002). A framework of the factors affecting the evolution of performance measurement systems. International Journal of Operations & Production Management, 22(11):1222–1245.

Khan, M. R. (1984). Efficiency measurement model for a computerized warehousing system. International Journal of Production Research, 22(3):443–452.

Kiefer, A. W. and Novack, R. A. (1999). An empirical analysis of warehouse measurement systems in the context of supply chain implementation. Transportation Journal, 38(3):18–27.

Kline, R. B. (2011). Introduction. In Principles and Practice of Structural Equation Modeling, chapter 1, pages 1–18. The Guilford Press, New York, NY, 3rd edition.

Krippendorff, K. (2004). Content Analysis: An Introduction to Its Methodology. Sage, Thousand Oaks, CA, 2nd edition.

Krizman, A. and Ogorelc, A. (2010). Impact of Disturbing Factors on Cooperation in Logistics Outsourcing Performance: The Empirical Model. Promet - Traffic & Transportation, 22(3):209–218.

Lam, C. H. Y., Choy, K. L., and Chung, S. H. (2011). A decision support system to facilitate warehouse order fulfillment in cross-border supply chain. Journal of Manufacturing Technology Management, 22(8):972–983.

Lao, S. I., Choy, K. L., Ho, G. T. S., Tsim, Y. C., and Lee, C. K. H. (2011). Real-time inbound decision support system for enhancing the performance of a food warehouse. Journal of Manufacturing Technology Management, 22(8):1014–1031.

Lao, S. I., Choy, K. L., Ho, G. T. S., Tsim, Y. C., Poon, T. C., and Cheng, C. K. (2012). A real-time food safety management system for receiving operations in distribution centers. Expert Systems with Applications, 39(3):2532–2548.

Lauras, M., Marques, G., and Gourc, D. (2010). Towards a multi-dimensional project Performance Measurement System. Decision Support Systems, 48(2):342–353.

Li, J., Sava, A., and Xie, X. (2009). An analytical approach for performance evaluation and optimization of a two-stage production-distribution system. International Journal of Production Research, 47(2):403–414.

Lohman, C., Fortuin, L., and Wouters, M. (2004). Designing a performance measurement system: A case study. European Journal of Operational Research, 156(2):267–286.

Lu, C.-S. and Yang, C.-C. (2010). Logistics service capabilities and firm performance of international distribution center operators. The Service Industries Journal, 30(2):281–298.

Luo, S.-q., Liu, L., and Shu-quan, L. (2010). Comprehensive Evaluation of Logistics Performance for Agricultural Products Distribution Center. In 2010 2nd International Conference on E-business and Information System Security, pages 1–4. IEEE.

Lynagh, P. M. (1971). Measuring Distribution Center Effectiveness. Transportation Journal, (winter):21–33.

Manikas, I. and Terry, L. A. (2010). A case study assessment of the operational performance of a multiple fresh produce distribution centre in the UK. British Food Journal, 112(6):653–667.

Manly, B. F. J. (2004). Principal Component Analysis. In Multivariate Statistical Methods: A Primer, chapter 6, pages 75–90. Chapman & Hall/CRC, Boca Raton, Florida, USA, 3rd edition.

Markovits-Somogyi, R., Gecse, G., and Bokor, Z. (2011). Basic efficiency measurement of Hungarian logistics centres using data envelopment analysis. Periodica Polytechnica Social and Management Sciences, 19(2):97–101.

Matopoulos, A. and Bourlakis, M. (2010). Sustainability practices and indicators in food retail logistics: Findings from an exploratory study. Journal on Chain and Network Science, 10(3):207–218.

Melnyk, S. A., Stewart, D. M., and Swink, M. (2004). Metrics and performance measurement in operations management: dealing with the metrics maze. Journal of Operations Management, 22(3):209–217.

Menachof, D. A., Bourlakis, M. A., and Makios, T. (2009). Order lead-time of grocery retailers in the UK and Greek markets. Supply Chain Management, 14(5):349–358.

Mentzer, J. T. and Konrad, B. P. (1991). An Efficiency/Effectiveness approach to logistics performance analysis. Journal of Business Logistics, 12(1):33–61.

Minitab Inc. (2009). Help of Minitab Statistical Software. Software Minitab, Release 16 for Windows, www.minitab.com.

Mitroff, I. I., Betz, F., Pondy, L. R., and Sagasti, F. (1974). On Managing Science in the Systems Age: Two Schemas for the Study of Science as a Whole Systems Phenomenon. Interfaces, 4(3):46–58.

Montgomery, D. C. and Runger, G. C. (2003). Applied Statistics and Probability for Engineers. John Wiley & Sons, Inc., New York, NY, USA, 3rd edition.

Neely, A. (2005). The evolution of performance measurement research: Developments in the last decade and a research agenda for the next. International Journal of Operations & Production Management, 25(12):1264–1277.

Neely, A., Gregory, M., and Platts, K. (1995). Performance measurement system design: A literature review and research agenda. International Journal of Operations & Production Management, 15(4):80–116.

Newsom, J. (2015). Practical Approaches to Dealing with Nonnormal and Categorical Variables. http://www.upa.pdx.edu/IOA/newsom/semclass/ho_estimate2.pdf, (Date Accessed: 2015-03-29):pages 1–4.

Ng, I., Scharf, K., Progrebna, G., and Maull, R. (2013). Contextual variety, Internet-of-things and the choice of tailoring over platform: mass customisation strategy in supply chain management. International Journal of Production Economics, 159:76–87.

O'Neill, P., Scavarda, A. J., and Zhenhua, Y. (2008). Channel performance in China: a study of distribution centers in Fujian Province. Journal of Chinese Entrepreneurship, 1(1):21–39.

Park, T. A. (2008). Evaluating labor productivity in food retailing. Agricultural and Resource Economics Review, 37(2):288–300.

Patel, B., Chaussalet, T., and Millard, P. (2008). Balancing the NHS balanced scorecard! European Journal of Operational Research, 185(3):905–914.

PennState, E. C. o. S. (2015a). Lesson 7.4 - Interpretation of the Principal Components. STAT 505. Available at: https://onlinecourses.science.psu.edu/stat505/node/54, (Date Accessed: 2015-08-22).

PennState, E. C. o. S. (2015b). Lesson 8: Canonical Correlation Analysis. STAT 505. Available at: https://onlinecourses.science.psu.edu/stat505/node/63, (Date Accessed: 2015-08-02):pages 1–2.

Pokharel, S. and Mutha, A. (2009). Perspectives in reverse logistics: A review. Resources, Conservation and Recycling, 53(4):175–182.

Ramaa, A., Subramanya, K., and Rangaswamy, T. (2012). Impact of Warehouse Management System in a Supply Chain. International Journal of Computer Applications, 54(1):14–20.

Rimiene, K. (2008). The design and operation of warehouse. Economics and Management, 13:136–137.

Rodriguez, R. R., Saiz, J. J. A., and Bas, A. O. (2009). Quantitative relationships between key performance indicators for supporting decision-making processes. Computers in Industry, 60(2):104–113.

Rodriguez-Rodriguez, R., Saiz, J. J. A., Bas, A. O., Carot, J. M., and Jabaloyes, J. M. (2010). Building internal business scenarios based on real data from a performance measurement system. Technological Forecasting and Social Change, 77(1):50–62.

Ross, A. and Droge, C. (2002). An integrated benchmarking approach to distribution center performance using DEA modeling. Journal of Operations Management, 20(1):19–32.

Saetta, S., Paolini, L., Tiacci, L., and Altiok, T. (2012). A decomposition approach for the performance analysis of a serial multi-echelon supply chain. International Journal of Production Research, 50(9):2380–2395.

Sardana, G. D. (2008). Measuring business performance: A conceptual framework with focus on improvement. Performance Improvement, 47(7):31–40.



Schefczyk, M. (1993). Industrial benchmarking: A case study of performance analysis techniques. International Journal of Production Economics, 32:1–11.

Sellitto, M. A., Borchardt, M., Pereira, G. M., and Gomes, L. P. (2011). Environmental performance assessment in transportation and warehousing operations by means of categorical indicators and multicriteria preference. Chemical Engineering Transactions, 25:291–296.

Seuring, S. and Müller, M. (2008). From a literature review to a conceptual framework for sustainable supply chain management. Journal of Cleaner Production, 16(15):1699–1710.

Sohn, S., Han, H., and Jeon, H. (2007). Development of an Air Force Warehouse Logistics Index to continuously improve logistics capabilities. European Journal of Operational Research, 183(1):148–161.

Spencer, M. S. (1993). Warehouse Management Using V-A-T Logical Structure Analysis. The International Journal of Logistics Management, 4(1):35–48.

Stainer, A. (1997). Logistics - a productivity and performance perspective. Supply Chain Management: An International Journal, 2(2):53–62.

Staudt, T. (2015). Brushless Doubly-Fed Reluctance Machine Modeling, Design and Optimization. PhD thesis, Université Grenoble Alpes, pages 1–355.

Suwignjo, P., Bititci, U. S., and Carrie, A. S. (2000). Quantitative models for performance measurement system. International Journal of Production Economics, 64(1-3):231–241.

Svoronos, A. and Zipkin, P. (1988). Estimating the performance of Multi-level Inventory Systems. Operations Research, 36(1):57–72.

Tangen, S. (2004). Performance measurement: from philosophy to practice. International Journal of Productivity and Performance Management, 53(8):726–737.

UCLA, S. C. G. (2012). R Data Analysis Examples: Canonical Correlation Analysis. Available at: http://www.ats.ucla.edu/stat/r/dae/canonical.htm, (Date Accessed: 2015-08-02).

van den Berg, J. and Zijm, W. (1999). Models for warehouse management: Classification and examples. International Journal of Production Economics, 59(1-3):519–528.

Vascetta, M., Kauppila, P., and Furman, E. (2008). Aggregate indicators in coastal policy making: Potentials of the trophic index TRIX for sustainable considerations of eutrophication. Sustainable Development, 16:282–289.

Voss, M. D., Calantone, R. J., and Keller, S. B. (2005). Internal service quality: Determinants of distribution center performance. International Journal of Physical Distribution & Logistics Management, 35(3):161–176.

Wainer, J. (2010). Principal Components Analysis. Available at: http://www.ic.unicamp.br/wainer/cursos/1s2013/ml/Lecture18_PCA.pdf, (Date Accessed: 2015-07-27):pages 1–18.



Wang, H., Chen, S., and Xie, Y. (2010). An RFID-based digital warehouse management system in the tobacco industry: a case study. International Journal of Production Research, 48(9):2513–2548.

Wang, Y.-F. and Fan, T.-H. (2011). A Bayesian analysis on time series structural equation models. Journal of Statistical Planning and Inference, 141(6):2071–2078.

Westfall, P. (2007). Comparison of Principal Components, Canonical Correlation, and Partial Least Squares for the Job Salience/Job Satisfaction data analysis. http://courses.ttu.edu/isqs6348-westfall/images/6348/PCA_CCA_PLS.pdf, (Date Accessed: 2015-04-11):pages 1–2.

Wu, Y. and Dong, M. (2007). Combining multi-class queueing networks and inventory models for performance analysis of multi-product manufacturing logistics chains. The International Journal of Advanced Manufacturing Technology, 37(5-6):564–575.

Wu, Y.-J. and Hou, J.-L. (2009). A model for employee performance trend analysis of distribution centers. Human Factors and Ergonomics in Manufacturing, 19(5):413–437.

Yang, K. K. (2000). Managing a single warehouse, multiple retailer distribution center. Journal of Business Logistics, 21(2):161–172.

Yang, L.-r. and Chen, J.-h. (2012). Information Systems Utilization to Improve Distribution Center Performance: from the Perspective of Task Characteristics and Customers. Advances in Information Sciences and Service Sciences, 4(1):230–238.

Zuur, A. F., Fryer, R. J., Jolliffe, I. T., Dekker, R., and Beukema, J. J. (2003a). Estimating common trends in multivariate time series using dynamic factor analysis. Environmetrics, 14(7):665–685.

Zuur, A. F., Tuck, I. D., and Bailey, N. (2003b). Dynamic factor analysis to estimate common trends in fisheries time series. Canadian Journal of Fisheries and Aquatic Sciences, 60(5):542–552.
Appendix A
Complete Analytical Model of Performance Indicators and Data

This section describes the full set of equations composing the complete analytical model. The analytical model is presented according to the indicator equations given in Chapter 5. The division of indicators by their dimensions (time, productivity, cost, quality) is also used here. Table A.2, Table A.4, Table A.6 and Table A.8 present the data equations in the right column, whereas the indicator equations (Sections 5.2.4, 5.2.5, 5.2.6, 5.2.7) are repeated in the left column. For example, the first indicator presented in Table A.2 is Rec_t (Equation 5.1), which is measured as the ratio of Σ_{p=1}^{Pal Unlo} Δt(Rec)_p to Pal Unlo. These data are defined on the right side of the table by Equations A.1 and A.3, respectively.

The definitions of the components inside the data equations are shown in Tables A.1, A.3, A.5 and A.7, with the data units in parentheses, following the same logic as presented for the indicator measures. In these tables, only the data from the right-side equations are detailed; indicator names and data which have already been defined in Section 5.2 are not repeated. Moreover, a datum used in several indicator equations has its equation repeated as many times as necessary. As there are many data definitions in each table, the data are listed in alphabetical order to facilitate the analysis.

There are three distinct formats in this complete analytical model, which can be viewed as hierarchical levels of data detail (presented in decreasing order): indicator names are in bold, such as Rec_t; data used in indicator equations are in sans serif style (e.g. Pal Unlo); and the components inside data equations are in slanted style, like Prob Rep. In the cases where the same component is used in an indicator equation and in a data equation, we choose to format it at the higher level. For instance, the term Cor Unlo is used as indicator data in Equation 5.28 and also as data in Equation A.3; so it is formatted in sans serif style.

A.1 Time indicator model


The time data equations are presented on the right side of Table A.2, and the meaning of the new equation terms is explained in Table A.1.

In practice, the total time of an activity is usually acquired as the difference between the beginning and the end of the process, independently of the tasks performed inside it. But in this study, it is necessary to define time components for the relationship analysis. To that end, the time component equations describe the main tasks performed by each activity. For example, Equation A.1 details the arrival of a supplier order as: the time used by the administration area to assign trucks to docks and verify documentation (H Admin_rec); the inspection time (Δt Insp); the effective time used to unload products (represented by WEfRec); and the queuing time (Δt Queue_rec), which is not a task but exists in practice when the total time is obtained. It is important to note that the unit of each detailed task already represents the total time to perform it in a month; e.g. Δt Insp is the total time of all pallets inspected in a month.

The interpretation of the other time equations is similar to the one explained for receiving. The terms Δt Others refer to other tasks executed by a specific warehouse.

Comparing the time data with the productivity data, we can conclude that terms like WEfRec constitute the major part of WH Rec, in some cases even attaining equality.
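To make the decomposition concrete, the short sketch below evaluates Equations 5.1 and A.1-A.3 for one hypothetical month; all figures are illustrative and are not taken from the case study.

```python
# Worked numeric sketch of Rec_t (Equation 5.1) from its time components.
w_ef_rec = 140.0   # WEfRec: effective unloading hours (A.2: beta_rec * WH Rec)
h_admin  = 20.0    # H Admin_rec: dock assignment and document checks (h)
queue    = 15.0    # Delta t Queue_rec: waiting time (h)
insp     = 10.0    # Delta t Insp_1: inspection time (h)
others   = 5.0     # Delta t Others_1: remaining tasks (h)

cor_unlo, prob_unlo = 950, 50          # A.3: Pal Unlo = Cor Unlo + Prob Unlo
pal_unlo = cor_unlo + prob_unlo

total_rec_time = w_ef_rec + h_admin + queue + insp + others  # Equation A.1
rec_t = total_rec_time / pal_unlo                            # Equation 5.1
print(f"Rec_t = {rec_t:.3f} hour/pallet")  # 190 h / 1000 pallets = 0.190
```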

Table A.1: Time data definitions

β = index representing how many hours of the total available labor hours the employees are effectively working; β_rec, ..., β_del are distinguished because they can differ for each activity.
β_ord = index representing how many hours of the total available labor hours the employees dedicate to customer order administration.
Δt(Insp) = total time for pallet inspection on arrival, or total time for order inspection on dispatch, per month (hour/month).
Δt(Queue) = total time that the pallet/order line/order (a different unit is used depending on the activity performed) waits to be processed per month; Δt(Queue) can be divided by activity: Δt Queue_rec, Δt Queue_sto, Δt Queue_rep, Δt Queue_pick, Δt Queue_ship, Δt Queue_del (hour/month).
Δt(Others)_{1-6} = total time for other activities/situations not considered in the previous equation terms, per month (hour/month).
Cor Del = number of orders delivered correctly per month (orders/month).
Cor OrdLi Pick = number of order lines picked correctly per month (orderlines/month).
Cor OrdLi Ship = number of order lines shipped correctly per month (orderlines/month).
Cor Rep = number of pallets moved correctly from reserve stock to the picking inventory area per month (pallets/month).
Cor Sto = number of pallets stored correctly per month (pallets/month).
Cor Unlo = number of pallets unloaded correctly per month (pallets/month).
H Admin = time effectively used to perform administrative operations per month. H Admin can be divided by activity: H Admin_rec, H Admin_sto, H Admin_rep, H Admin_pick, H Admin_ship, H Admin_del, H Admin_orders. H Admin_orders refers to the total time between the receipt of a customer order and the assignment of the order for picking (hour/month).
Prob Del = number of orders with problems during the delivery activity per month (orders/month).
Prob OrdLi Pick = number of order lines with problems during the picking activity per month (orderlines/month).
Prob OrdLi Ship = number of order lines with problems during the shipping activity per month (orderlines/month).
Prob Rep = number of pallets with problems in the replenishment operation per month (pallets/month).
Prob Sto = number of pallets stored with problems per month (pallets/month).
Prob Unlo = number of pallets unloaded with problems per month (pallets/month).
WEfDel = total effective working hours in the delivery activity per month (hour/month).
WEfPick = total effective working hours in the picking activity per month (hour/month).
WEfRec = total effective working hours in the receiving activity per month (hour/month).
WEfRep = total effective working hours in the replenishment activity per month (hour/month).
WEfSto = total effective working hours in the storage activity per month (hour/month).
WEfShip = total effective working hours in the shipping activity per month (hour/month).
WH Del = total employee labor hours available for the delivery activity per month (hour/month).
WH Pick = total employee labor hours available for the picking activity per month (hour/month).
WH Rec = total employee labor hours available for the receiving activity per month (hour/month).
WH Rep = total employee labor hours available for the replenishment activity per month (hour/month).
WH Sto = total employee labor hours available for the storing activity per month (hour/month).
WH Ship = total employee labor hours available for the shipping activity per month (hour/month).
Table A.2: Time data equations
(Here and in Tables A.4, A.6 and A.8, each indicator equation is followed by its corresponding data equations, indented.)

Rec_t = (Σ_{p=1}^{Pal Unlo} Δt(Rec)_p) / Pal Unlo (hour/pallet) (5.1)
  Σ_{p=1}^{Pal Unlo} Δt(Rec)_p = WEfRec + H Admin_rec + Δt Queue_rec + Δt Insp_1 + Δt Others_1 (A.1)
  WEfRec = β_rec × WH Rec (A.2)
  Pal Unlo = Cor Unlo + Prob Unlo (A.3)

Put_t = (Σ_{p=1}^{Pal Sto} Δt(Sto)_p) / Pal Sto (hour/pallet) (5.2)
  Σ_{p=1}^{Pal Sto} Δt(Sto)_p = WEfSto + H Admin_sto + Δt Queue_sto + Δt Others_2 (A.4)
  WEfSto = β_sto × WH Sto (A.5)
  Pal Sto = Cor Sto + Prob Sto (A.6)

DS_t = (Σ_{p=1}^{Pal Sto} Δt(DS)_p) / Pal Unlo (hour/pallet) (5.3)
  Σ_{p=1}^{Pal Sto} Δt(DS)_p = Δt(Rec) + Δt(Sto) (A.7)
  Pal Unlo = Cor Unlo + Prob Unlo (A.3)

Rep_t = (Σ_{p=1}^{Pal Moved} Δt(Rep)_p) / Pal Moved (hour/pallet) (5.4)
  Σ_{p=1}^{Pal Moved} Δt(Rep)_p = WEfRep + H Admin_rep + Δt Queue_rep + Δt Others_3 (A.8)
  WEfRep = β_rep × WH Rep (A.9)
  Pal Moved = Cor Rep + Prob Rep (A.10)

Pick_t = (Σ_{l=1}^{OrdLi Pick} Δt(Pick)_l) / OrdLi Pick (hour/orderline) (5.5)
  Σ_{l=1}^{OrdLi Pick} Δt(Pick)_l = WEfPick + H Admin_pick + Δt Queue_pick + Δt Others_4 (A.11)
  WEfPick = β_pick × WH Pick (A.12)
  OrdLi Pick = Cor OrdLi Pick + Prob OrdLi Pick (A.13)

Ship_t = (Σ_{l=1}^{OrdLi Ship} Δt(Ship)_l) / OrdLi Ship (hour/orderline) (5.6)
  Σ_{l=1}^{OrdLi Ship} Δt(Ship)_l = WEfShip + H Admin_ship + Δt Queue_ship + Δt Insp_2 + Δt Others_5 (A.14)
  WEfShip = β_ship × WH Ship (A.15)
  OrdLi Ship = Cor OrdLi Ship + Prob OrdLi Ship (A.16)

Del_t = (Σ_{o=1}^{Ord Del} Δt(Del)_o) / Ord Del (hour/order) (5.7)
  Σ_{o=1}^{Ord Del} Δt(Del)_o = WEfDel + H Admin_del + Δt Queue_del + Δt Others_6 (A.17)
  WEfDel = β_del × WH Del (A.18)
  Ord Del = Cor Del + Prob Del (A.19)

OrdLT_t = (Σ_{o=1}^{Ord Del} Δt(Ord)_o) / Ord Del (hour/order) (5.8)
  Σ_{o=1}^{Ord Del} Δt(Ord)_o = Δt(Pick) + Δt(Ship) + Δt(Del) + H Admin_ord (A.20)
  H Admin_ord = β_ord × WH Admin (A.21)
  Ord Del = Cor Del + Prob Del (A.19)

A.2 Productivity indicator model


The productivity indicators can be classified into two main groups (shown in Table A.4): indicators related to labor activities (Equations 5.9 - 5.15) and indicators associated with warehouse capacity and productivity (Equations 5.16 - 5.21).

The first group of indicators is related to specific activities. As defined in Section 5.2.5, Equations 5.9 - 5.15 have the objective of evaluating the employees' productivity considering all the time available to work, measured as the total hours during which the warehouse is open (War WH).

Regarding the number of employees working in a warehouse, the employees are usually not dedicated to a single activity. For example, the warehouse may receive all its deliveries in the morning. In this case, the manager assigns many people to the receiving dock during this period, and after the activity is finished the employees are designated to another task. To model this situation, we consider that the number of employees working in an activity is the average number of employees that would have to work all day long to execute the same task.

The global labor productivity is presented in Equation A.22. We note that the delivery productivity is not encompassed by Equation A.22, which is limited to the warehouse boundaries. Even though this work considers the delivery activity as part of warehouse management, the indicators are maintained according to their original definitions.

The second group of indicator equations is related to capacity utilization (e.g. warehouse utilization, Equation 5.19) and global warehouse productivity (represented by Turnover, Equation 5.17, and Throughput, Equation 5.21). We remark three details about the capacity indicators: (i) Equation A.30 shows that Inv Cap is measured in the number of pallet spaces available, but depending on the product characteristics an alternative is to use the unit m³; (ii) the inventory capacity used, Inv CapUsed, given in Equation A.29, is also part of the warehouse used areas in Equation A.35, since the inventory area is an important part of the warehouse space; Inv CapUsed just needs to be converted to m² to stay in accordance with the indicator unit; (iii) the kilograms available, Kg Avail, in Equation A.34, are calculated in a dynamic way, since the equation considers the number of travels that a truck can make in a month. An alternative is to determine Kg Avail in a static way, by summing up the total truck capacity.

With respect to the warehouse productivity indicators, it is important to note that turnover, Equation 5.17, is measured in financial terms because the data available in the company are usually in this format. Indeed, the company retrieves the data CGoods and Ave Inv directly from the information system, without needing to compute them. Nevertheless, the CGoods and Ave Inv equations are presented in A.31 and A.32, respectively. The Cost of Goods is part of both turnover, Equation 5.17, and sales, Equation A.49. A product is considered sold when it is delivered to the client. So, CGoods is measured by the number of products delivered times their costs. As the average inventory is defined in products and not in orders, the number of orders delivered, Ord Del, is also multiplied by the average number of products per order, Prod Ord.
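As an illustration of this computation, the sketch below evaluates Equations A.31, A.32 and 5.17 for three hypothetical SKUs. The per-SKU share of deliveries is an assumption introduced here only to split (Ord Del × Prod Ord) across SKUs, and all figures are invented.

```python
# Turnover sketch: TO_p = CGoods / Ave Inv (Equation 5.17).
ord_del   = 400                      # orders delivered in the month
prod_ord  = 5                        # average products per order
prod_cost = [12.0, 30.0, 8.0]        # purchasing price per SKU ($/product)
ave_inv   = [900.0, 350.0, 2000.0]   # average inventory per SKU (products)
share     = [0.5, 0.2, 0.3]          # assumed share of deliveries per SKU

# A.31: CGoods = sum_i (Ord Del x Prod Ord)_i x Prod cost_i
cgoods = sum(ord_del * prod_ord * s * c for s, c in zip(share, prod_cost))
# A.32: Ave Inv = sum_i ave inv_i x Prod cost_i
ave_inv_value = sum(a * c for a, c in zip(ave_inv, prod_cost))

to_p = cgoods / ave_inv_value        # turnover, in "times" per month
print(f"TO_p = {to_p:.2f} times")
```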



Table A.3: Productivity data definitions

ave inv = average number of products in inventory (products/month).
area used war = warehouse floor area occupied (m²).
cap = capacity in kg of each truck (kg/truck).
Cor Del = number of orders delivered correctly per month (orders/month).
Cor OrdLi Pick = number of order lines picked correctly per month (orderlines/month).
Cor OrdLi Ship = number of order lines shipped correctly per month (orderlines/month).
Cor Rep = number of pallets moved correctly from bulk stock to the picking inventory area per month (pallets/month).
Cor Sto = number of pallets stored correctly per month (pallets/month).
Cor Unlo = number of pallets unloaded correctly per month (pallets/month).
days month = total number of working days in the month (days/month).
empl = average number of employees working in an activity per month. It is divided by activity: empl Rec, empl Sto, empl Rep, empl Pick, empl Ship. empl Del is a fixed number during all the available time because these employees only work in the delivery activity (employees).
HEq Stop = total number of hours during which the equipment is stopped per month (hours/month).
HEq Work = total number of hours during which the equipment is working per month (hours/month).
HWarOperate = total number of hours during which the warehouse operates per day (hours/day).
kg Prod = weight of each product (kg/product).
nb_travel = number of travels made per truck for delivery in a month (travels/month).
Prob Del = number of orders with problems during the delivery activity per month (orders/month).
Prob OrdLi Pick = number of order lines with problems during the picking activity per month (orderlines/month).
Prob OrdLi Ship = number of order lines with problems during the shipping activity per month (orderlines/month).
Prob Rep = number of pallets with problems in the replenishment operation per month (pallets/month).
Prob Sto = number of pallets stored with problems per month (pallets/month).
Prob Unlo = number of pallets unloaded with problems per month (pallets/month).
Prod Cost = cost of products arriving in the warehouse, i.e. the purchasing price ($/product).
Prod Line = average number of products per order line (products/orderline).
Prod Ord = average number of products per customer order (products/order).
Prod pal = average number of products stocked per pallet (products/pallet).
Prod Proc = total number of products processed by the warehouse per month (products/month).
War WH = total number of hours during which the warehouse is open per month (hour/month).
WH Del = total employee labor hours available for the delivery activity per month (hour/month).
WH Others = sum of employee labor hours spent on other activities (hour/month).
WH Pick = total employee labor hours available for the picking activity per month (hour/month).
WH Rec = total employee labor hours available for the receiving activity per month (hour/month).
WH Rep = total employee labor hours available for the replenishment activity per month (hour/month).
WH Sto = total employee labor hours available for the storing activity per month (hour/month).
WH Ship = total employee labor hours available for the shipping activity per month (hour/month).
Table A.4: Productivity data equations

Lab_p = Prod Proc / WH (products/hour) (5.9)
  Prod Proc = Prod Ship = OrdLi Ship × Prod Line (5.42)
  WH = WH Rec + WH Sto + WH Rep + WH Pick + WH Ship + WH Others (A.22)

Rec_p = Pal Unlo / WH Rec (pallets/hour) (5.10)
  Pal Unlo = Cor Unlo + Prob Unlo (A.3)
  WH Rec = empl Rec × War WH (A.23)

Sto_p = Pal Sto / WH Sto (pallets/hour) (5.11)
  Pal Sto = Cor Sto + Prob Sto (A.6)
  WH Sto = empl Sto × War WH (A.24)

Rep_p = Pal Moved / WH Rep (pallets/hour) (5.12)
  Pal Moved = Cor Rep + Prob Rep (A.10)
  WH Rep = empl Rep × War WH (A.25)

Pick_p = OrdLi Pick / WH Pick (orderlines/hour) (5.13)
  OrdLi Pick = Cor OrdLi Pick + Prob OrdLi Pick (A.13)
  WH Pick = empl Pick × War WH (A.26)

Ship_p = OrdLi Ship / WH Ship (orderlines/hour) (5.14)
  OrdLi Ship = Cor OrdLi Ship + Prob OrdLi Ship (A.16)
  WH Ship = empl Ship × War WH (A.27)

Del_p = Ord Del / WH Del (orders/hour) (5.15)
  Ord Del = Cor Del + Prob Del (A.19)
  WH Del = empl Del × War WH (A.28)

InvUt_p = (Inv CapUsed / Inv Cap) × 100 (%) (5.16)
  Inv CapUsed = Σ_{i=1}^{n} (ave inv_i / Prod pal), i = 1, ..., n = SKUs (A.29)
  Inv Cap = total amount of pallet space (A.30)

TO_p = CGoods / Ave Inv (times) (5.17)
  CGoods = Σ_{i=1}^{n} ((Ord Del × Prod Ord)_i × Prod cost_i) (A.31)
  Ave Inv = Σ_{i=1}^{n} (ave inv_i × Prod cost_i), i = 1, ..., n = SKUs (A.32)

TrUt_p = (Kg Tr / Kg Avail) × 100 (%) (5.18)
  Kg Tr = Σ_{i=1}^{n} (Ord Del × Prod Ord)_i × kg Prod_i, i = 1, ..., n = SKUs (A.33)
  Kg Avail = Σ_{a=1}^{m} cap_a × nb_travel_a, a = 1, ..., m = number of trucks (A.34)

WarUt_p = (War CapUsed / War Cap) × 100 (%) (5.19)
  War CapUsed = Σ_{b=1}^{war area} war used area_b, b = 1, ..., war area, where war area = areas utilized in warehouse activities (A.35)
  War Cap = total useful warehouse area (A.36)

EqD_p = (HEq Stop / HEq Avail) × 100 (%) (5.20)
  HEq Avail = HEq Stop + HEq Work = Σ_{c=1}^{z} HEq Stop_c + Σ_{c=1}^{z} HEq Work_c, c = 1, ..., z = number of equipments (A.37)

Th_p = Prod Ship / War WH (products/hour) (5.21)
  Prod Ship = OrdLi Ship × Prod Line (A.38)
  War WH = HWarOperate × days month (A.39)

A.3 Cost indicator model


The cost equations are presented in Table A.6, whereas their definitions are in Table A.5.

The distribution costs (Equation 5.23) are measured but not included in the total warehouse costs (Equation A.48). The salary costs of the delivery employees are also included in Equation 5.23, instead of being considered in the labor cost indicator (Equation 5.26).

Regarding the labor cost indicator (Equation 5.26), only the employees working inside the warehouse are taken into account. The time of the administrative employees is divided into hours dedicated to customer orders and hours dedicated to other warehouse activities. The first part, hours dedicated to customer orders, is included in the order processing costs (Equation 5.24), and the second part, hours dedicated to other warehouse activities, is included in the labor cost (Equation 5.26). When the total warehouse costs (Equation A.48) are assessed, the order processing cost and the labor cost are summed up, so the administrative costs are entirely considered.

The interpretation of LostC, Equation A.41, could lead to misunderstandings. LostC should be interpreted as the amount of profit lost due to the absence of inventory to fulfill customer orders. The lack of stock is measured by the quality indicator stockout (Equation 5.40). This percentage of missing stock is multiplied by the total number of products picked in a month (named Prod Out, Equation A.47) and the average profit gained with each product sold.
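A small numeric sketch of this lost-profit term (Equations A.41, A.42 and A.47) is given below; all figures are illustrative.

```python
# LostC sketch: profit lost due to stockouts.
stockout_q = 2.0                  # StockOut_q (%), from Equation 5.40
sl = 1 - stockout_q / 100         # A.42: service level offered
profit = 4.0                      # average gross profit per product ($)
ordli_pick, prod_line = 3000, 2   # order lines picked, products per line
prod_out = ordli_pick * prod_line          # A.47: products taken from stock

lost_c = (1 - sl) * profit * prod_out      # A.41: profit lost to stockouts
print(f"LostC = ${lost_c:.2f} per month")  # 0.02 * 4 * 6000 = 480.00
```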

Table A.5: Cost data definitions

α = index representing the fraction of the Salary paid as Charges per month.
β_ord = index representing how many hours of the total available labor hours the employees dedicate to customer order administration.
$ oil = oil price per liter ($/l).
$/h = cost per hour worked in each activity. It is divided by activity: $/h_rec, $/h_sto, $/h_rep, $/h_pick, $/h_ship, $/h_del, $/h_admin, $/h_other ($/hour).
Ave Inv = average inventory in the warehouse ($/month).
CGoods = total cost of the items sold ($).
Cor Del = number of orders delivered correctly per month (orders/month).
deprec_{1-2} = depreciation costs of the company assets used in the activities per month ($/month).
l_used = average number of oil liters used by trucks for one travel (liter/travel).
nb_travel = number of travels made per truck for delivery in a month (travels/month).
Other_{1-2} = other costs not considered in the equation ($/month).
Prob Del = number of orders with problems during the delivery activity (orders/month).
Prod Cost = cost of products arriving in the warehouse, i.e. the purchasing price ($/product).
Prod Out = number of products taken out of the inventory (products/month).
Profit = average gross profit of the products sold ($/product).
Rate = monthly financial rate (%).
Charges_tr = labor charges paid over the salary value ($/month).
SL = service level offered to the customer (%).
Salary_tr = total salaries of the delivery employees per month ($/month).
Truck MaintC = total cost of truck maintenance ($/month).
WH Admin = total employee labor hours available in the administration activity (hour/month).
WH Del = total employee labor hours available for the delivery activity per month (hour/month).
WH Others = sum of employee labor hours spent on other activities (hour/month).
WH Pick = total employee labor hours available for the picking activity per month (hour/month).
WH Rec = total employee labor hours available for the receiving activity per month (hour/month).
WH Rep = total employee labor hours available for the replenishment activity per month (hour/month).
WH Sto = total employee labor hours available for the storing activity per month (hour/month).
WH Ship = total employee labor hours available for the shipping activity per month (hour/month).
Table A.6: Cost data equations

Inv_c = InvC + LostC ($) (5.22)
  Ave Inv = Σ_{i=1}^{n} (ave inv_i × Prod cost_i) (A.32)
  InvC = Ave Inv × Rate (A.40)
  LostC = (1 − SL) × Profit × Prod Out (A.41)
  SL = 1 − (StockOut_q / 100) (A.42)

Tr_c = TrC / Ord Del ($/order) (5.23)
  TrC = Truck MaintC + ($ oil × l_used × nb_travel) + Salary_tr + Charges_tr + deprec_1 + Other_1 (A.43)
  Salary_tr = $/h_del × WH Del (A.44)
  Charges_tr = α × Salary_tr, with 0 < α < 1 (A.45)
  Ord Del = Cor Del + Prob Del (A.19)

OrdProc_c = Ord ProcC / Cust Ord ($/order) (5.24)
  Ord ProcC = $/h_admin × β_ord × WH Admin + Charges_admin + deprec_2 + Other_2 (A.46)
  Cust Ord = number of customer orders per month (A.47)

CS_c = War Cost / Sales (%) (5.25)
  War Cost = (Ord ProcC × Cust Ord) + Lab_c + Maint_c (A.48)
  Sales = CGoods + (Profit × Ord Del × Prod Ord) (A.49)
  CGoods = Σ_{i=1}^{n} ((Ord Del × Prod Ord)_i × Prod cost_i), i = 1, ..., n = SKUs (A.31)

Lab_c = Salary + Charges + Others ($/month) (5.26)
  Salary = $/h_rec × WH Rec + $/h_sto × WH Sto + $/h_rep × WH Rep + $/h_pick × WH Pick + $/h_ship × WH Ship + $/h_admin × (1 − β_ord) × WH Admin + $/h_other × WH Others (A.50)
  Charges = α × Salary, with 0 < α < 1 (5.46)

Maint_c = BuildC + EqMaintC + Others ($/month) (5.27)
  BuildC = building maintenance costs (A.51)
  EqMaintC = maintenance cost of all equipment (A.52)

A.4 Quality indicator model


The expressions for the quality problems presented in Equations A.53 - A.59 are inequalities. The objective of these expressions is to show the main data shared by different quality indicators. For example, the number of order lines picked with problems (Equation A.57) contains as the main errors: scraps, data errors and order lines not available. The problems represented by scraps and items not available are also used in the Scrap_q and StockOut_q quality indicators, respectively.

Regarding the inequality, the total number of order lines picked with problems is equal to or smaller than the sum of the problems, since in a real situation an order line can have more than one problem at the same time. The correct orders are the ones with no problem in any analyzed component (e.g. punctuality, correctness). To be a correct order, it must fulfill all the requirements made by the warehouse.

An important consideration about the scraps inserted in the indicator equations is that they do not impact the final number of orders processed: it is assumed that these scraps are the ones that have been replenished during the same month. As the opposite situation can happen in practice (scraps not replenished in the same month), we include unsolved scraps in the data generation, presented in Section 6.2.

The new terms introduced on the right side of Table A.8 are presented in Table A.7.
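To illustrate the role of the inequality, the short sketch below computes the picking quality indicator (Equation 5.32) and checks Equation A.57 for one hypothetical month; all counts are invented.

```python
# Picking quality sketch: Pick_q (Equation 5.32) and inequality A.57.
scrap4, data_error, ordli_noavail, others = 12, 8, 25, 3
prob_ordli_pick = 40     # <= 12 + 8 + 25 + 3 = 48: one order line can carry
                         # several problems at once, hence the inequality
cor_ordli_pick = 2960
ordli_pick = cor_ordli_pick + prob_ordli_pick        # Equation A.13

assert prob_ordli_pick <= scrap4 + data_error + ordli_noavail + others  # A.57
pick_q = 100 * cor_ordli_pick / ordli_pick           # Equation 5.32 (%)
print(f"Pick_q = {pick_q:.1f}%")                     # 98.7%
```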

Table A.7: Quality data definitions

Cor Del = number of orders delivered correctly per month (orders/month).
Cor OrdLi Pick = number of order lines picked correctly per month (orderlines/month).
Cor OrdLi Ship = number of order lines shipped correctly per month (orderlines/month).
Cor Rep = number of pallets moved correctly from the reserve storage to the forward picking area per month (pallets/month).
Cor Sto = number of pallets stored correctly per month (pallets/month).
Cor Unlo = number of pallets unloaded correctly per month (pallets/month).
data error = number of products with data system errors from the outbound area per month (products/month).
error data system_{1-3} = number of pallets with data system errors from the activities unloading, storing and replenishment. It is the complement of the correct data in the system (orders/month), and the sum of all these errors results in Prob Data (orders/month).
NoComplet Ord Ship = number of orders shipped incomplete on the first shipment per month (orders/month).
Ord Ship = number of orders shipped per month (orders/month).
others = number of other, undefined problems per month (nb/month).
ord late = number of orders with delays per month; the opposite of orders on time (orders/month).
OrdLi noAvail = number of order lines per month that are not available in stock when the customer places an order (orderlines/month).
Prob data = number of pallets with inaccuracies between the physical inventory and the system per month (pallets/month).
Prob Del = number of orders with problems during the delivery activity per month (orders/month).
Prod Line = average number of products per order line (products/orderline).
Prob OrdLi Pick = number of order lines with problems during the picking activity per month (orderlines/month).
Prob OrdLi Ship = number of order lines with problems during the shipping activity per month (orderlines/month).
Prod pal = average number of products stocked per pallet (products/pallet).
Prod Proc = number of products processed by the warehouse per month; products processed refers to the number of products shipped by the warehouse (products/month).
Prob Rep = number of pallets with problems in the replenishment operation per month (pallets/month).
Prob Sto = number of pallets stored with problems per month (pallets/month).
Prob Unlo = number of pallets unloaded with problems per month (pallets/month).
Prod noAvail = number of products per month that are not available in stock when the customer places an order (products/month).
Prod Ord = average number of products per customer order (products/order).
Prod Out = number of products taken out of the inventory per month (products/month).
scrap_{1-3} = number of pallets lost through handling problems or accidents per month (pallets/month); scrap_{4-5} has the same meaning but is measured in (orderlines/month); scrap_6 is measured in (orders/month).
Table A.8: Quality data equations

Recq = (Cor Unlo / Pal Unlo) × 100 (%)   (5.28)
    Pal Unlo = Cor Unlo + Prob Unlo   (A.3)
    Prob Unlo ≤ scrap1 + error data system1 + others   (A.53)

Stoq = (Cor Sto / Pal Sto) × 100 (%)   (5.29)
    Pal Sto = Cor Sto + Prob Sto   (A.6)
    Prob Sto ≤ scrap2 + error data system2 + others   (A.54)

Repq = (Cor Rep / Pal Moved) × 100 (%)   (5.30)
    Pal Moved = Cor Rep + Prob Rep   (A.10)
    Prob Rep ≤ scrap3 + error data system3 + others   (A.55)

Invq = ((Pal Unlo + Pal Sto + Pal Moved − Prob Data) / (Pal Unlo + Pal Sto + Pal Moved)) × 100 (%)   (5.31)
    Pal Unlo = Cor Unlo + Prob Unlo   (A.3)
    Pal Sto = Cor Sto + Prob Sto   (A.6)
    Pal Moved = Cor Rep + Prob Rep   (A.10)
    Prob Data = Σ (m = 1..3) error data system_m   (A.56)

Pickq = (Cor OrdLi Pick / OrdLi Pick) × 100 (%)   (5.32)
    OrdLi Pick = Cor OrdLi Pick + Prob OrdLi Pick   (A.13)
    Prob OrdLi Pick ≤ scrap4 + data error + OrdLi noAvail + others   (A.57)

Shipq = (Cor OrdLi Ship / OrdLi Ship) × 100 (%)   (5.33)
    OrdLi Ship = Cor OrdLi Ship + Prob OrdLi Ship   (A.16)
    Prob OrdLi Ship ≤ scrap5 + data error + No OT Ship + NoComplet Ord Ship + others   (A.58)

Delq = (Cor Del / Ord Del) × 100 (%)   (5.34)
    Ord Del = Cor Del + Prob Del   (A.19)
    Prob Del ≤ scrap6 + data error + ord late + no complete ord + others   (A.59)

OTDelq = (Ord Del OT / Ord Del) × 100 (%)   (5.35)
    Ord Del OT = Ord Del − No OT Del   (A.60)
    Ord Del = Cor Del + Prob Del   (A.19)

OTShipq = (Ord Ship OT / Ord Ship) × 100 (%)   (5.36)
    Ord Ship OT = Ord Ship − No OT Ship   (A.61)
    Ord Ship = Σ (p = 1..OrdLi Ship) OrdLi Ship_p   (A.62)

OrdFq = (Complet 1st Ship / Ord Ship) × 100 (%)   (5.37)
    Complet 1st Ship = Ord Ship − NoComplet Ord Ship   (A.63)
    Ord Ship = Σ (p = 1..OrdLi Ship) OrdLi Ship_p   (A.62)

PerfOrdq = (Ord OT, ND, CD / Ord Del) × 100 (%)   (5.38)
    Ord OT, ND, CD = orders on time, with no damages and correct documents
    Ord Del = Cor Del + Prob Del   (A.19)

CustSatq = ((Ord Del − Cust Complain) / Ord Del) × 100 (%)   (5.39)
    Cust Complain = customer complaints regarding warehouse processes   (A.64)
    Ord Del = Cor Del + Prob Del   (A.19)

StockOutq = (Prod noAvail / Prod Out) × 100 (%)   (5.40)
    Prod noAvail = products not available in stock
    Prod Out = OrdLi Pick × Prod Line   (A.47)

Scrapq = (Nb Scrap / Prod Proc) × 100 (%)   (5.41)
    Nb Scrap = (scrap1 + scrap2 + scrap3) × Prod pal + (scrap4 + scrap5) × Prod Line + scrap6 × Prod Ord   (A.65)
    Prod Proc = Prod Ship = OrdLi Ship × Prod Line   (5.42)


Appendix B
Data Generation

This appendix details how data is created for the standard warehouse. The next sections present separately the product flow and the data equations for each warehouse operation, explaining the considerations made for each activity.

B.1 Receiving data

The receiving activity is detailed in Figure B.1, which is divided in five parts: four rectangles with data equations and one activity flow schema in the upper right side of the figure. The four rectangles show, respectively: the global variables; the internal inputs, named `IntInput'; the `Outputs' and internal outputs, `IntOutput'; and the number of problems occurred during the month.

Receiving

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day
    nb pallets/truck = 25
    Supplier Ord[t] = NORM.INV(pr(); 28; 2)

Flow schema: incoming pallets, plus the scraps carried over from [t-1], split into Cor Unlo and Prob Unlo.

IntInput - Internal Inputs:
    nb PalRec[t] = Supplier Ord[t] * nb pallets/truck
    WHRec[t] = emplRec * HWarOperate * nb_days/month[t]
    emplRec = 0.5
    Insp time[t] = NORM.INV(pr(); 0.5; 0.1)
    β_rec = 0.85

Outputs:
    Cor Unlo[t] = RANDBETWEEN((nb PalRec[t] + Scrap Unlo1[t-1]) * 0.98; (nb PalRec[t] + Scrap Unlo1[t-1]) * 1)
    Prob Unlo[t] = nb PalRec[t] + Scrap Unlo1[t-1] − Cor Unlo[t]

IntOutput - Internal Outputs:
    Scrap Unlo1[t] = RANDBETWEEN(0; Prob Unlo[t])
    Δt Insp[t] = Insp time[t] * Supplier Ord[t]
    WEfRec[t] = β_rec * WHRec[t]
    Δt Admin_rec[t] = 1 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Unlo[t] = RANDBETWEEN(Scrap Unlo1[t]; Prob Unlo[t])
    Error DataInb1[t] = RANDBETWEEN(0; Prob Unlo[t] − Scrap Unlo[t])
    Other Errors rec = Prob Unlo[t] − Scrap Unlo[t] − Error DataInb1[t]

Figure B.1: Receiving flows and data equations.

The Global variables are general information that can be used in any part of the warehouse to calculate other data or indicators. The number of days worked in a month, `nb_days/month', for example, varies every month between 20 and 25 days, following a uniform probability distribution. Once the number of days is defined for a month, this information is used for all data and indicators in that month. To simplify the figure, we illustrate only the global variables related to the receiving operation and used to calculate its inputs or outputs.

The internal inputs `IntInput' and internal outputs `IntOutput' comprise data related specifically to the receiving performance indicators.

The `Outputs' are also data used in performance indicators, but the difference is that these outputs are also the inputs of the next activity, representing the product flow in the warehouse, which also drives the indicator interactions. Finally, the rectangle at the bottom of Figure B.1 gives the total `Number of problems occurred during the month'. These data are the sum of all problems occurred during the month in the activity (solved or not), and some of this information is also used in the indicator equations.

The design of Figure B.1 and the information inside the rectangles are used as the standard for all other warehouse activities presented in the next sections. Moreover, the notation of the equations inside the rectangles is the same as in the complete analytical model described in Appendix A.

The equations presented in Figure B.1 are explained in detail as follows.

B.1.1 Equations of Receiving data

In the receiving flow schema of Figure B.1, the number of supplier orders `Supplier Ord' arriving at the warehouse is a random number following a normal distribution with mean 28 and standard deviation 2. As the receiving performance indicators are measured in pallets, we assess the number of pallets received, `nb PalRec[t]' (first equation of `IntInput'), by multiplying the number of supplier orders received in month t by the number of pallets per truck, `nb pallets/truck'. This equation shows that we consider all supplier orders arriving with the same quantity: a complete 10-ton truck loaded with 25 pallets.

The number of labor hours available in a month, `WHRec[t]', changes according to the working days and the number of employees performing the activity. As stated before, in this scenario the number of employees is considered constant over time for all activities.

The last two `IntInput' equations correspond to the time to perform product quality inspections, Insp time[t], and to β_rec, the index representing how many of the total available labor hours the employees effectively spend receiving. `Insp time' uses the normal function to define the time, in hours, taken by administrative employees to inspect one supplier order, which is defined as 30 min (0.5 hour) on average with a standard deviation of 6 minutes (0.1 hour). Insp time[t] and β_rec are used to calculate the total inspection time and the effective receiving hours in month [t], named Δt Insp[t] and WEfRec[t], respectively. The equations are shown in the `IntOutput' area of Figure B.1.

The last formula of IntOutput is Δt Admin_rec[t], the time taken by administrative personnel to execute activities related to receiving and supplier orders. This time is fixed at one hour per day.

The types of receiving `problems' occurring in a month are not exhaustively detailed. Scrap Unlo[t] and Error DataInb1[t] are given separately because their values are used in the ScrapRateq and Invq indicators, respectively. All other possible errors are aggregated in the equation `Other Errors rec', although its value is not used for indicator measurement. It is important to note that, according to the equations, Error DataInb1[t] is bounded by the number of problems minus the number of scrapped pallets. Thus, another constraint of the model is that an order cannot have two different errors in the same month.

The outputs of receiving, Cor Unlo[t] and Prob Unlo[t], vary every month between 98% and 100% of the total pallets unloaded for Cor Unlo[t], and between 0% and 2% for Prob Unlo[t]. According to Figure B.1, Cor Unlo[t] is drawn with a uniform random probability between 98% and 100% of the total inputs, which are the total pallets received, nb PalRec[t], plus the scraps not solved in the previous month (Scrap Unlo1[t-1]). Prob Unlo[t], in contrast, is simply the difference between the total inputs and the pallets unloaded correctly, Cor Unlo[t]. Therefore, the inputs of the storage activity (presented in the next section) are the result of the equation Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t].

All other activities have their equations developed following the same logic presented here for the receiving activity. Thus, only particularities not yet discussed are presented in the next sections.
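A minimal Python sketch of one month of this receiving generation, with numpy equivalents of the spreadsheet functions RANDBETWEEN and NORM.INV (a continuous approximation of the original spreadsheet draws):

import numpy as np

rng = np.random.default_rng(0)

def receiving_month(scrap_unlo1_prev=0.0):
    # One month [t] of receiving data, following Figure B.1.
    nb_days = rng.integers(20, 26)                # RANDBETWEEN(20; 25)
    supplier_ord = rng.normal(28, 2)              # NORM.INV(pr(); 28; 2)
    nb_pal_rec = supplier_ord * 25                # 25 pallets per truck
    total_in = nb_pal_rec + scrap_unlo1_prev      # scraps carried from [t-1]
    cor_unlo = rng.uniform(0.98, 1.0) * total_in  # 98%..100% unloaded correctly
    prob_unlo = total_in - cor_unlo
    scrap_unlo1 = rng.uniform(0, prob_unlo)       # scraps not solved this month
    wh_rec = 0.5 * 8 * nb_days                    # emplRec * HWarOperate * days
    wef_rec = 0.85 * wh_rec                       # beta_rec * WHRec
    return cor_unlo, prob_unlo, scrap_unlo1, wef_rec

print(receiving_month())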

B.2 Storage data

The data equations used in the storage activity are presented in Figure B.2.

In the storage activity, the outputs Cor Sto[t] and Prob Sto[t] vary every month between 96% and 98% of the total pallets stored for Cor Sto[t], and between 0% and 2% for Prob Sto[t]. As a result, in some months a number of products may not all be processed, remaining as `Sto inProcess' for the next month. `Sto inProcess' is the sum of the products with unsolved problems (the information arrow leaving Prob Sto and entering `No Proc') and the products not processed, `No Proc'. It is interesting to note that the unsolved problems are the scraps not replaced during the month, represented by Scrap Sto1[t].

B.3 Replenishment data

The data equations used in the replenishment activity are shown in Figure B.3. The replenishment activity consists of the movement of pallets from the reserve storage area to the forward picking area. As this activity aims to replenish the picking inventory, the number of pallets to move depends on the quantity of products picked (represented by Cor Pick[t] + Prob Pick[t] in Figure B.3).

We note that the replenishment indicators are measured in pallets, whereas Cor Pick[t] and Prob Pick[t] are measured in order lines. Thus, the equations presented in Figure B.3 also convert these different kinds of information into the same unit, as sketched below.
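A one-line sketch of that conversion, following the Figure B.3 averages (Prod_Ord products per picked unit, nb prod pal = 40 products per pallet; the call values are illustrative):

def orderlines_to_pallets(picked, prod_ord, nb_prod_pal=40):
    # Picked order lines converted into pallets to replenish (Figure B.3).
    return picked * prod_ord / nb_prod_pal

print(orderlines_to_pallets(picked=1400, prod_ord=20))  # 700.0 pallets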


Storage

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day

Flow schema: inputs Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t], plus `No Proc' carried from [t-1], split into Cor Sto and Prob Sto.

IntInput - Internal Inputs:
    WHSto[t] = emplSto * HWarOperate * nb_days/month[t]
    emplSto = 0.5
    β_sto = 0.85

Outputs:
    Cor Sto[t] = RANDBETWEEN((Sto inProcess[t-1] + Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t]) * 0.96; (Sto inProcess[t-1] + Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t]) * 0.98)
    Prob Sto[t] = RANDBETWEEN(0; (Sto inProcess[t-1] + Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t]) * 0.02)
    Sto inProcess[t] = Sto inProcess[t-1] + Cor Unlo[t] + Prob Unlo[t] − Scrap Unlo1[t] − Cor Sto[t] − Prob Sto[t] + Scrap Sto1[t]

IntOutput - Internal Outputs:
    Scrap Sto1[t] = RANDBETWEEN(0; Prob Sto[t])
    WEfSto[t] = β_sto * WHSto[t]
    Δt Admin_sto[t] = 1 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Sto[t] = RANDBETWEEN(Scrap Sto1[t]; Prob Sto[t])
    Error DataInb2[t] = RANDBETWEEN(0; Prob Sto[t] − Scrap Sto[t])
    Other Errors sto = Prob Sto[t] − Scrap Sto[t] − Error DataInb2[t]

Figure B.2: Storage flows and data equations.

Replenishment

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day
    Prod_Ord[t] = NORM.INV(pr(); 20; 2)
    nb prod pal = 40

Flow schema: demand Cor Pick[t] + Prob Pick[t], plus `No Proc' carried from [t-1], split into Cor Rep and Prob Rep.

IntInput:
    WHRep[t] = emplRep * HWarOperate * nb_days/month[t]
    emplRep = 1
    β_rep = 0.8

Outputs:
    Cor Rep[t] = RANDBETWEEN((((Cor Pick[t] + Prob Pick[t]) * Prod_Ord[t] / nb prod pal) + Rep inProcess[t-1]) * 0.96; (((Cor Pick[t] + Prob Pick[t]) * Prod_Ord[t] / nb prod pal) + Rep inProcess[t-1]) * 0.98)
    Prob Rep[t] = RANDBETWEEN(0; (((Cor Pick[t] + Prob Pick[t]) * Prod_Ord[t] / nb prod pal) + Rep inProcess[t-1]) * 0.02)
    Rep inProcess[t] = ((Cor Pick[t] + Prob Pick[t]) * Prod_Ord[t] / nb prod pal) − Cor Rep[t] − Prob Rep[t] + Scrap Rep1[t] + Rep inProcess[t-1]

IntOutput:
    Scrap Rep1[t] = RANDBETWEEN(0; Prob Rep[t])
    WEfRep[t] = β_rep * WHRep[t]
    Δt Admin_rep[t] = 1 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Rep[t] = RANDBETWEEN(Scrap Rep1[t]; Prob Rep[t])
    Error DataInb3[t] = RANDBETWEEN(0; Prob Rep[t] − Scrap Rep[t])
    Other Errors rep = Prob Rep[t] − Scrap Rep[t] − Error DataInb3[t]

Figure B.3: Replenishment flows and data equations.



B.4 Picking data

The data equations used in the picking activity are depicted in Figure B.4.

Picking

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day
    Prod_Ord[t] = NORM.INV(pr(); 20; 2)
    Demand[t] = NORM.INV(pr(); 28000; 2000)

Flow schema: Cust Ord[t] = Demand[t] / Prod_Ord[t]; inputs Cust Ord[t], Scrap Del1[t-1] and `No Proc' carried from [t-1], split into Cor Pick and Prob Pick.

IntInput:
    WHPick[t] = emplPick * HWarOperate * nb_days/month[t]
    emplPick = 4
    β_pick = 0.95

Outputs:
    Cor Pick[t] = RANDBETWEEN((Scrap Del1[t-1] + Pick inProcess[t-1] + Cust Ord[t]) * 0.96; (Scrap Del1[t-1] + Pick inProcess[t-1] + Cust Ord[t]) * 0.98)
    Prob Pick[t] = RANDBETWEEN(0; (Scrap Del1[t-1] + Pick inProcess[t-1] + Cust Ord[t]) * 0.02)
    Pick inProcess[t] = Scrap Del1[t-1] + Pick inProcess[t-1] + Cust Ord[t] − Cor Pick[t] − Prob Pick[t] + Scrap Pick1[t] + ProdnoAvail1[t]

IntOutput:
    Scrap Pick1[t] = RANDBETWEEN(0; Prob Pick[t])
    ProdnoAvail1[t] = RANDBETWEEN(0; Prob Pick[t] − Scrap Pick1[t])
    WEfPick[t] = β_pick * WHPick[t]
    Δt Admin_pick[t] = 1 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Pick[t] = RANDBETWEEN(Scrap Pick1[t]; Prob Pick[t] − ProdnoAvail1[t])
    ProdnoAvail[t] = RANDBETWEEN(ProdnoAvail1[t]; Prob Pick[t] − Scrap Pick[t])
    Other Errors pick = Prob Pick[t] − Scrap Pick[t] − ProdnoAvail[t]

Figure B.4: Picking flows and data equations.

B.5 Shipping data

Figure B.5 presents the shipping activity with its equations. The indicator Order Fill rate (Equation 5.37) measures the number of orders delivered complete. Instead of generating the number of complete orders, we generate the number of partial orders shipped, represented by NoComplet_Ord Ship[t].

B.6 Delivery data


Figure B.6 shows the delivery activity with its equations.

Shipping

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day

Flow schema: inputs Cor Pick[t] + Prob Pick[t] − Scrap Pick1[t] − ProdnoAvail1[t], plus `No Proc' carried from [t-1], split into Cor Ship and Prob Ship.

IntInput:
    WHShip[t] = emplShip * HWarOperate * nb_days/month[t]
    emplShip = 3
    β_ship = 0.95

Outputs:
    Cor Ship[t] = RANDBETWEEN((Ship inProcess[t-1] + Cor Pick[t] + Prob Pick[t] − Scrap Pick1[t] − ProdnoAvail1[t]) * 0.96; (Ship inProcess[t-1] + Cor Pick[t] + Prob Pick[t] − Scrap Pick1[t] − ProdnoAvail1[t]) * 0.98)
    Prob Ship[t] = RANDBETWEEN(0; (Ship inProcess[t-1] + Cor Pick[t] + Prob Pick[t] − Scrap Pick1[t] − ProdnoAvail1[t]) * 0.02)
    Ship inProcess[t] = Ship inProcess[t-1] + Cor Pick[t] + Prob Pick[t] − Scrap Pick1[t] − ProdnoAvail1[t] − Cor Ship[t] − Prob Ship[t] + Scrap Ship1[t]

IntOutput:
    Scrap Ship1[t] = RANDBETWEEN(0; Prob Ship[t])
    WEfShip[t] = β_ship * WHShip[t]
    Δt Admin_ship[t] = 1 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Ship[t] = RANDBETWEEN(Scrap Ship1[t]; Prob Ship[t])
    NoComplet_Ord Ship[t] = RANDBETWEEN(0; Prob Ship[t] − Scrap Ship[t])
    Other Errors ship = Prob Ship[t] − Scrap Ship[t] − NoComplet_Ord Ship[t]
    OTShip[t] = RANDBETWEEN(Cor Ship[t]; Cor Ship[t] + Prob Ship[t] − Scrap Ship1[t])

Figure B.5: Shipping flows and data equations.

Delivery

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day

Flow schema: inputs Cor Ship[t] + Prob Ship[t] − Scrap Ship1[t], split into Cor Del and Prob Del.

IntInput:
    WHDel[t] = emplDel * HWarOperate * nb_days/month[t]
    emplDel = 2
    β_del = 0.90

Outputs:
    Cor Del[t] = RANDBETWEEN((Cor Ship[t] + Prob Ship[t] − Scrap Ship1[t]) * 0.98; (Cor Ship[t] + Prob Ship[t] − Scrap Ship1[t]) * 1)
    Prob Del[t] = Cor Ship[t] + Prob Ship[t] − Scrap Ship1[t] − Cor Del[t]
    Scrap Del1[t] = RANDBETWEEN(0; Prob Del[t])

IntOutput:
    WEfDel[t] = β_del * WHDel[t]
    Δt Admin_del[t] = 2 h/day * nb_days/month[t]

Number of problems occurred during the month:
    Scrap Del[t] = RANDBETWEEN(Scrap Del1[t]; Prob Del[t])
    Cust Complain[t] = RANDBETWEEN(0; Prob Del[t])
    Other Errors del = Prob Del[t] − Scrap Del[t]
    OTDel[t] = RANDBETWEEN(Cor Del[t]; Cor Del[t] + Prob Del[t] − Scrap Del1[t])
    OT_ND_DC[t] = Cor Del[t]

Figure B.6: Delivery flows and data equations.



B.7 Warehouse and Inventory data

This section presents the equations related to the warehouse as a whole (Figure B.7), with emphasis on the inventory area in Figure B.8.

The warehouse building and the truck are part of the company assets, which means that all costs associated with their maintenance are taken into account in the cost indicators.

The charges, i.e. the total paid on top of salaries for all employees, are considered as 50% of the salary value. The average fuel consumption is 2 liters per travel, considering that each travel is 10 km long and that 5 km are covered with one liter. The depreciations (deprec1 and deprec2) are considered fixed values over time.

Figure B.7: Warehouse flows and data equations.

Figure B.8 shows, in the IntOutput rectangle, the equations inv_end[t] and aveinv. The equation inv_end[t] gives the inventory on hand at the end of a given period. It is calculated as the inventory from the previous period (inv_end[t-1]), plus the products entering stock (Cor Sto[t] + Prob Sto[t] − Scrap Sto1[t]), minus the demand of the period (Cor Pick[t] + Prob Pick[t]). The average stock over an entire month, a value used in some indicators, is obtained with the equation aveinv.
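A minimal sketch of this inventory balance, assuming the figure's averages (40 products per pallet, Prod_Ord products per order; the numbers in the call are illustrative):

def inventory_month(inv_end_prev, cor_sto, prob_sto, scrap_sto1,
                    cor_pick, prob_pick, prod_ord, nb_prod_pal=40):
    # inv_end[t] and aveinv as defined in Figure B.8.
    stocked = (cor_sto + prob_sto - scrap_sto1) * nb_prod_pal  # products in
    picked = (cor_pick + prob_pick) * prod_ord                 # products out
    inv_end = inv_end_prev + stocked - picked
    aveinv = (inv_end_prev + inv_end) / 2
    return inv_end, aveinv

print(inventory_month(20000, 680, 10, 5, 1400, 20, prod_ord=20))  # (19000, 19500)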

Inventory

Global variables:
    nb_days/month[t] = RANDBETWEEN(20; 25)
    HWarOperate = 8 h/day
    Rate = 10%/month
    nb prod/pal = 40
    palSpace = 1000
    $/hour admin = 7
    deprec2 = $200/month

Flow schema: inventory inputs Cor Sto[t] + Prob Sto[t] − Scrap Sto1[t]; inventory outputs Cor Pick[t] + Prob Pick[t].

IntInput:
    Cust Ord[t] = Demand[t] / Prod_Ord[t]
    WHAdmin[t] = emplAdmin * HWarOperate * nb_days/month[t]
    emplAdmin = 2
    β_ord = 0.55
    Prod_Ord[t] = NORM.INV(pr(); 20; 2)

IntOutput:
    inv_end[t] = inv_end[t-1] + (Cor Sto[t] + Prob Sto[t] − Scrap Sto1[t]) * nb prod/pal − (Cor Pick[t] + Prob Pick[t]) * Prod_Ord[t]
    aveinv = (inv_end[t-1] + inv_end[t]) / 2
    Ord Procc[t] = (WHAdmin[t] * β_ord * $7/hour + 0.5 * (WHAdmin[t] * β_ord * $7/hour) + deprec2) / Cust Ord[t]
    HAdmin_ord = β_ord * WHAdmin[t]

Number of problems occurred during the month:
    ProdnoAvail[t] = ProdnoAvail1[t] * Prod_Ord[t] + IF((inv_end[t-1] + (Cor Sto[t] + Prob Sto[t] − Scrap Sto1[t]) * nb prod/pal − (Cor Pick[t] + Prob Pick[t] − Scrap Pick[t]) * Prod_Ord[t]) > 0; 0; ABS((Cor Sto[t] + Prob Sto[t] − Scrap Sto1[t]) * nb prod/pal − (Cor Pick[t] + Prob Pick[t] − Scrap Pick[t]) * Prod_Ord[t]))

Figure B.8: Inventory flows and data equations.


Appendix C
Manual Procedure to determine indicator
relationships

This appendix demonstrates the initial analysis performed to determine the indicator relationships manually.

Initially, we construct a schema (Figure C.1) showing all 40 indicators and the main data used to measure them (data from the indicator equations of Sections 5.2.4, 5.2.5, 5.2.6 and 5.2.7). The indicators are represented by ellipses and the data by rectangular blocks. The lines represent the connection between a data element and an indicator. For example, the indicator EqDp (in the upper left corner of Figure C.1) is calculated as HEq Stop divided by HEq Avail (two green rectangles), so lines connect both data with the indicator EqDp.

[Figure C.1 is a schema connecting the 40 indicators (ellipses) to the main data used to measure them (rectangular blocks). Legend: RP - right product; RQ - right quantity; RT - right truck; CD - correct document; OT - on time; ND - no damage; P - Profit; CG - Cost of Goods.]

Figure C.1: Indicator relationships based on data.

In Figure C.1 we present each data element just once, to simplify the interpretation. This means that if a data element is used in two or more indicator equations with different units, it appears in just one rectangle. That is the case, for example, of Ave Inv, which is measured in units for InvUtp (Equation 5.16) and in dollars for Invc (Equation 5.22) and TOp (Equation 5.17).

The violet blocks referring to pallets unloaded Pal Unlo, pallets stored Pal Sto, pallets moved Pal Moved, order lines picked OrdLi Pick, order lines shipped OrdLi Ship and orders delivered Ord Del represent the total of products processed in each activity. For these data, we distinguish the main data parts to clarify what is being used to calculate the indicators. For receiving, storage, replenishment and picking there are just two divisions: correct `Cor' and problem `Prob'. In the case of order lines shipped and orders delivered, the acronyms mean, respectively: RP, right product; RQ, right quantity; RT, right truck; ND, no damage; CD, correct documents; OT, on time. Finally, the red rectangular block denoting sales (Equation A.49) is calculated as the sum of the profit (red block P) and the cost of goods (red block CG).

The colors denote the classification of indicators and data according to their dimensions. The green figures refer to data and indicators of time, the red ones to cost, orange to productivity, blue to capacity data, and violet to product and order quantities together with their quality.

Figure C.1 shows that the majority of indicators are related to at least one other indicator, forming a big cloud of relationships. The exceptions are equipment downtime and warehouse utilization, EqDp and WarUtp.

Analyzing the interconnections, it is possible to visualize some groups formed by these relations. Taking the left side of Figure C.1, we observe that the violet rectangles (e.g., Pal Unlo) connect essentially indicators of time, quality and productivity. On the right side of Figure C.1, a distinct group of indicators mainly associated with costs can be identified.

In order to clarify the indicator relations, the next section presents a manual procedure to determine a framework where just the indicator relations are exhibited.

C.1 The Manual Procedure

After the identification of the indicator relations in Figure C.1, we use a simple procedure to obtain a new schema without data on it.

To construct the relationship framework, all indicators are listed and their relations are identified by means of structures like the one presented in Figure C.2. The indicator under analysis is located in the center, and the ones related to it are connected by arrows. The number on an arrow represents the number of data elements shared by the two indicators. Taking one of the four examples demonstrated in Figure C.2, shipping quality Shipq shares one data element with Shipt, Shipp and Thp, and two data elements with OrdFq and OTShipq.
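A minimal sketch of this counting, where each indicator is represented by the set of data elements in its equation (the sets below are illustrative fragments of the model, not the full lists):

from itertools import combinations

indicator_data = {
    "Shipq": {"Cor OrdLi Ship", "Prob OrdLi Ship"},
    "OTShipq": {"Ord Ship OT", "Cor OrdLi Ship", "Prob OrdLi Ship"},
    "OrdFq": {"NoComplet Ord Ship", "Cor OrdLi Ship", "Prob OrdLi Ship"},
}

# Count the shared data elements for every pair of indicators.
for a, b in combinations(indicator_data, 2):
    shared = indicator_data[a] & indicator_data[b]
    if shared:
        print(a, b, len(shared))   # e.g. Shipq OTShipq 2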

C.2 The indicator relationships schema for the manual procedure

After the construction of this structure for all indicators, the framework is produced by connecting the indicators with different lines depending on the number of data elements shared. The result is shown in Figure C.3.
C.2. The indicator relationships schema for the manual procedure 31

Shipt
1

1 1
Shipp Shipq Thp
2 2

OrdFq OTShipq

1 Pickq 1
Pickp Pickt
Recp DSt

Rect 1
1 Put
1
1
1
Pickp CSc
Recq Invq Stop
1 1
1 1
1 1
1 Stop Labc Repp
1
Repq 1 1
Stoq

Rept Repp Recp Shipp

Figure C.2: Manual procedure to construct the indicator's framework.

[Figure C.3 is the relationship framework: the 40 indicators connected by two line styles, one for pairs sharing two data elements and one for pairs sharing one. Legend: productivity, cost, quality and time indicators; relationships: Two Data, One Data.]

Figure C.3: Direct indicator relations.


Looking at Figure C.3, the first impression could be that the majority of indicators form one big group of relations. But analyzing Figure C.3 in detail, it is possible to observe that the indicators are arranged in clusters. The most visible cluster, on the right side of Figure C.3, consists mainly of indicators about the delivery process and order quality. The second group of measures is related to the shipping activity and is located at the bottom of the figure. The three indicators of the picking activity constitute a small group in the center of the figure. The inbound area, on the left side of the figure, could be viewed as another important relationship group; however, the relations among inbound indicators do not seem to be as strong as for the delivery cluster. The last group of measures is located at the top of the figure, aggregating mainly cost and capacity measures.

It is apparent from Figure C.3 that indicators are connected to others by their processes rather than by their dimensions. In other words, the indicator relationships seem to be established per warehouse process, instead of by the dimensions of quality, cost, time and productivity.

There are two types of lines in Figure C.3: one representing that the indicators share one data element, and the other representing that they share two. We could assume that indicators with two shared data elements have a stronger relationship than the ones with just one. However, more information needs to be analyzed before reaching this kind of conclusion; it is discussed later, in Chapter 7, with more information available.

Figure C.3 shows the main relations, but the procedure performed is not exhaustive. The analytical model has shown that the data are highly connected, with some data elements being part of more general ones. For example, WH is the sum of all WH Activities (that is, the sum of WHRec, WHSto, etc.), as presented in Equation A.22. This situation was not taken into account in this section; indeed, Figure C.1 presents WH and WH Activities separately. To take all data associations into account, the next section presents the exhaustive procedure using the Jacobian matrix.


Appendix D
List of independent input values

Input Value Input Value


α 0.5 mean_Insp 0.5
β_del 0.9 nbMachine 2.0
β_ord 0.55 nb_travel 3.0
β_pick 0.95 NoComplet Ord Ship 17.0
β_rec 0.85 Ord Del OT 1311.0
β_rep 0.8 Ord Ship OT 1334.0
β_ship 0.95 pal_truck 25.0
β_sto 0.85 pallet_area 1.2
BuildC 1988.0 Prob OrdLi Pick 24.0
cap 5000.0 Prob OrdLi Ship 17.0
Cor OrdLi Pick 1367.0 Prob Del 23.0
Cor OrdLi Ship 1334.0 Prob Rep 4.6
Cor Del 1311.0 Prob Sto 2.0
Cor Rep 617.0 Prob Unlo 9.0
Cor Sto 674.0 Prod Ord 18.4
Cor Unlo 691.0 Prod pal 40.0
Cust Ord 1417.0 Prod noAvail 275.0
Cust Complain 18.0 Prod Cost 99.9
ΔT(Insp)2 1.0 Profit 100.0
deprec 1 500.0 Rate 0.1
deprec 2 200.0 Remain_Inv 30500.0
empl Admin 3.0 scrap1 23.0
empl Del 2.0 Scrap_Del1 13.0
empl Pick 4.0 scrap2 5.0
empl Rec 1.0 Scrap_Pick1 4.0
empl Rep 1.0 scrap3 4.0
empl Ship 3.0 scrap4 17.0
empl Sto 1.0 Scrap_Ship1 17.0
EqMaintC 4118.0 scrap5 1.0
error data system1 1.0 scrap6 7.0
error data system2 3.0 Truck Maint C 1165.0
error data system3 1.0 War Cap 5000.0
HAdmindel 63.0 war used area 3800.0
HAdminpick 21.0 War WH 168.0
HAdminrec 21.0 $/hadmin 7.0
HAdminrep 21.0 $/hdel 5.0
HAdminship 21.0 $/hpick 5.0
HAdminsto 21.0 $/hrec 5.0
HEq Stop 14.4 $/hrep 5.0
Inv Cap 1000.0 $/hship 5.0
kg Prod 10.0 $/hsto 5.0
l_used 2.0 $ oil 2.39

Figure D.1: Independent input values used for Jacobian assessment.
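As a quick cross-check of these inputs against the quality equations of Appendix A (Table A.8), two hedged worked examples:

# Recq (Equation 5.28) from the Appendix D inputs:
cor_unlo, prob_unlo = 691.0, 9.0
rec_q = cor_unlo / (cor_unlo + prob_unlo) * 100      # = 98.71 %

# Shipq (Equation 5.33) from the same list:
cor_ship, prob_ship = 1334.0, 17.0
ship_q = cor_ship / (cor_ship + prob_ship) * 100     # = 98.74 %
print(round(rec_q, 2), round(ship_q, 2))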


Appendix E
Theoretical Framework of indicator relationships

Here we show the theoretical framework of indicator relationships resulting from the Jacobian analysis. To create this schema we follow the same procedure as the manual one presented in Appendix C.

[Figure E.1 connects all 40 indicators, with four line styles denoting the number of shared data elements. Legend: productivity, cost, quality and time indicators; data shared: More than Six Data, Three up to Six Data, Two Data, One Data.]

Figure E.1: Indicator relations according to the number of shared data.


Appendix F
Results of Dynamic Factor Analysis application

This appendix reports the initial results obtained with the Dynamic Factor Analysis application. The R code and the procedure to perform DFA in R are from Holmes (2015), available at the website: http://faculty.washington.edu/eeholmes/

The R code is applied to the 50-month time series of the 40 standardized indicators. The main reason to reduce the dataset to 50 months is that a bigger dataset does not allow the convergence of the model. As presented in Chapter 3, Equation 3.9, the objective is to obtain the Z values, which correspond to the loadings of the PCA method.
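For reference, a standard DFA state-space form consistent with this description (a sketch; the exact error structure is the one defined by Equation 3.9 in Chapter 3):

x_t = x_{t-1} + w_t,       w_t ~ MVN(0, Q)
y_t = Z x_t + a + v_t,     v_t ~ MVN(0, R)

where y_t stacks the 40 standardized indicators, x_t holds the m hidden trends, Z contains the loadings and R is the covariance matrix of the observation errors.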

Table F.1 demonstrates the DFA results for two different R matrix propositions, with the number of trends, m, varying from 1 up to 8. The R matrix is the covariance matrix of the observation errors. It can be specified under four error conditions: diagonal and equal, diagonal and unequal, equal variance-covariance, and unconstrained. Only two conditions are shown in Table F.1 because they give the best results for our database.

The logLik (log-likelihood) and the AICc (Akaike Information Criterion with a correction for finite sample sizes) are the measures used to evaluate the quality of the results; the lower the AICc value, the better the model. The column K shows the number of parameters in the model, and m represents the number of trends used to represent the data.
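For completeness, the small-sample criterion in its usual form, with L the maximized likelihood, K the number of parameters and n the sample size:

AICc = -2 ln L + 2K + 2K(K + 1) / (n - K - 1)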

R m logLik K AICc
diagonal and unequal 1 -2383,03 80,00 4932,81
diagonal and unequal 2 -2096,71 119,00 4446,61
diagonal and unequal 3 -2043,87 157,00 4428,67
diagonal and unequal 4 -1684,87 194,00 3799,65
diagonal and unequal 5 -1542,66 230,00 3605,39
diagonal and unequal 6 -1380,38 265,00 3372,07
diagonal and unequal 7 -1261,68 299,00 3226,89
diagonal and unequal 8 -1200,32 332,00 3197,27
unconstrained 1 70,60 860,00 2878,99
unconstrained 2 112,88 899,00 3043,34
unconstrained 3 166,17 937,00 3196,85
unconstrained 4 205,20 974,00 3390,58
unconstrained 5 236,82 1010,00 3611,29
unconstrained 6 256,13 1045,00 3869,29
unconstrained 7 277,26 1079,00 4136,78
unconstrained 8 295,63 1112,00 4423,39

Table F.1: DFA results for 40 indicators.

The bold line in Table F.1 shows the best result of these tests: a model with just one trend. Table F.2 shows the loading values obtained; the highlighted values have |value| > 0.15. It is possible to see that many loadings are really low, so the corresponding indicators cannot be kept in the model. According to Table F.2, only 11 indicators of the initial 40 would be included in the aggregated model.

Indicator Loading
CSc -0,190
CustSatq -0,029
Delp 0,107
Delt -0,109
DSt -0,014
EqDp -0,029
Invc -0,049
Invq 0,006
InvUtp 0,243
Labc -0,118
Labp 0,200
Maintc 0,017
OrdFq 0,188
OrdLTt -0,087
OrdProcc -0,117
OTDelq 0,039
OTShipq 0,145
PerfOrdq 0,028
Pickp 0,106
Pickq 0,012
Pickt -0,022
Putt 0,019
Recp 0,151
Recq 0,022
Rect -0,031
Repp 0,179
Repq 0,077
Rept -0,002
Scrapq -0,123
Shipp 0,105
Shipq 0,209
Shipt 0,004
StockOutq -0,086
Stop 0,145
Stoq 0,007
Thp 0,192
TOp -0,174
Trc -0,084
TrUtp 0,196
WarUtp 0,163

Table F.2: Loadings for DFA result of m=1 and R = unconstrained.

Several other tests have been performed, but the best results according to the logLik and AICc values are always obtained for m = 1, which excludes a great quantity of indicators from the model. As our objective is to keep the majority of the indicators to evaluate the global performance, we do not use this result in our integrated model.
Appendix G
Results of Anderson Darling Test

The statistical analysis is performed for each indicator using the software Minitab 16®. Each graphic summarizes the Anderson-Darling test, skewness and kurtosis measurements for all 40 performance indicators. Moreover, the mean and standard deviation of the 100-month time series are shown in each figure.

These mean and standard deviation values are used in the optimization model to calculate the standardized indicator values.

Figure G.1: Cost indicator data test.
Figure G.2: Time indicator data test.
Figure G.3: Cost indicator data test.
Figure G.4: Time indicator data test.
Figure G.5: Quality indicator data test.
Figure G.6: Productivity indicator data test.
Figure G.7: Quality indicator data test.
Figure G.8: Productivity indicator data test.
Figure G.9: Quality indicator data test.
Figure G.10: Productivity indicator data test.
Appendix H
Optimization model

This appendix presents the optimization model coupled with CADES Component Optimizer®.

OBJECTIVE FUNCTION

GP = (1/N) * C1 + (1/N) * C2 + (1/N) * C3 + (1/N) * C4 + (1/N) * C5 + (1/N) * C6, with N = 6 components

COMPONENT EQUATIONS

C1 = -0.22 * CSc_NORM + 0.25 * Delp_NORM - 0.26 * Delt_NORM + 0.24 * Labp_NORM - 0.26 * OrdLTt_NORM - 0.24 * OrdProcc_NORM + 0.25 * Pickp_NORM - 0.25 * Pickt_NORM + 0.24 * Repp_NORM - 0.25 * Rept_NORM + 0.25 * Shipp_NORM - 0.26 * Shipt_NORM + 0.24 * Thp_NORM - 0.24 * Trc_NORM

C2 = -0.24 * Labc_NORM - 0.37 * Putt_NORM + 0.36 * Recp_NORM + 0.37 * Stop_NORM + 0.29 * TrUtp_NORM

C3 = 0.22 * CustSatq_NORM - 0.43 * Invc_NORM - 0.44 * InvUtp_NORM + 0.41 * TOp_NORM - 0.33 * WarUtp_NORM

C4 = 0.51 * OrdFq_NORM + 0.53 * OTShipq_NORM - 0.24 * Scrapq_NORM + 0.5 * Shipq_NORM

C5 = 0.4 * CustSatq_NORM + 0.46 * OTDelq_NORM + 0.51 * PerfOrdq_NORM - 0.34 * Scrapq_NORM

C6 = -0.61 * DSt_NORM + 0.31 * Invq_NORM - 0.59 * Rect_NORM
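A minimal Python sketch of how the objective is evaluated, assuming N = 6 (one term per component); the c6 coefficients are the ones listed above, and the standardized values in the call are hypothetical:

def c6(dst_n, invq_n, rect_n):
    # Component C6 from standardized indicators (coefficients above).
    return -0.61 * dst_n + 0.31 * invq_n - 0.59 * rect_n

def global_performance(components):
    # GP = (1/N)*C1 + ... + (1/N)*CN, i.e. the plain average.
    return sum(components) / len(components)

print(global_performance([c6(-0.5, 0.2, -0.3), 0.8, -0.2, 0.4, 0.1, 0.3]))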

STANDARDIZED INDICATOR EQUATIONS

intern Rect_NORM = (Rect - Mean_Rect)/STD_Rect

intern Putt_NORM = (Putt - Mean_Putt)/STD_Putt

intern DSt_NORM = (DSt - Mean_DSt)/STD_DSt

intern Rept_NORM = (Rept - Mean_Rept)/STD_Rept

intern Pickt_NORM = (Pickt - Mean_Pickt)/STD_Pickt

intern Shipt_NORM = (Shipt - Mean_Shipt)/STD_Shipt

intern Delt_NORM = (Delt - Mean_Delt)/STD_Delt



intern OrdLTt_NORM = (OrdLTt - Mean_OrdLTt)/STD_OrdLTt

intern Labp_NORM = (Labp - Mean_Labp)/STD_Labp

intern Recp_NORM = (Recp - Mean_Recp)/STD_Recp

intern Stop_NORM = (Stop - Mean_Stop)/STD_Stop

intern Repp_NORM = (Repp - Mean_Repp)/STD_Repp

intern Pickp_NORM = (Pickp - Mean_Pickp)/STD_Pickp

intern Shipp_NORM = (Shipp - Mean_Shipp)/STD_Shipp

intern Delp_NORM = (Delp - Mean_Delp)/STD_Delp

intern InvUtp_NORM = (InvUtp - Mean_InvUtp)/STD_InvUtp

intern WarUtp_NORM = (WarUtp - Mean_WarUtp)/STD_WarUtp

intern Thp_NORM = (Thp - Mean_Thp)/STD_Thp

intern TOp_NORM = (TOp - Mean_TOp)/STD_TOp

intern TrUtp_NORM = (TrUtp - Mean_TrUtp)/STD_TrUtp

intern Invc_NORM = (Invc - Mean_Invc)/STD_Invc

intern Trc_NORM = (Trc - Mean_Trc)/STD_Trc

intern OrdProcc_NORM = (OrdProcc - Mean_OrdProcc)/STD_OrdProcc

intern Labc_NORM = (Labc - Mean_Labc)/STD_Labc

intern CSc_NORM = (CSc - Mean_CSc)/STD_CSc

intern Invq_NORM = (Invq - Mean_Invq)/STD_Invq

intern Shipq_NORM = (Shipq - Mean_Shipq)/STD_Shipq

intern OTShipq_NORM = (OTShipq - Mean_OTShipq)/STD_OTShipq

intern OrdFq_NORM = (OrdFq - Mean_OrdFq)/STD_OrdFq

intern OTDelq_NORM = (OTDelq - Mean_OTDelq)/STD_OTDelq

intern PerfOrdq_NORM = (PerfOrdq - Mean_PerfOrdq)/STD_PerfOrdq

intern CustSatq_NORM = (CustSatq - Mean_CustSatq)/STD_CustSatq

intern Scrapq_NORM = (ScrapRate - Mean_ScrapRate)/STD_ScrapRate

EQUATIONS RELATING DATA

1. EQUATIONS ALREADY USED IN THE FIRST ANALYTICAL MODEL

intern WEfDel = beta_del * WHDel

intern WEfShip = beta_ship * WHShip

intern WEfPick = beta_pick * WHPick

intern WEfRep = beta_rep * WHRep

intern WEfSto = beta_sto * WHSto

intern WEfRec = beta_rec * WHRec

intern HAdmin_ord = beta_ord * WHAdmin

DeltaT_Insp = mean_Insp * nb_trucks

nb_trucks = Total_unlo / pal_truck

intern avepallet = aveinv / Prod_pal

intern Good_sold = (Total_del) * Prod_Ord



intern Kg_Tr = (Total_del) * Prod_Ord * kg_Prod

Product_Ship = (Cor_OrdLiShip + Prob_OrdLiShip)* Prod_Ord

WarCapUsed = (avepallet * pallet_area) + CapUsedAreas

aveinv = ((Total_sto * Prod_pal) + Remain_Inv) /2

Remain_Inv = Total_sto * Prod_pal - Total_pick * Prod_Ord

intern Sales = (ProductCost + Profit) * Good_sold

PalProcInv = Total_unlo + Total_sto + Total_rep

ErrorDataSystem = ErrorDataSystem1 + ErrorDataSystem2 + ErrorDataSystem3

2. EQUATIONS INCLUDED FOR OPTIMIZATION

Pal_Unlo = CorUnlo + ProbUnlo

ProbUnlo = Scrap_Unlo + ErrorDataSystem1 + Other_Prob_unlo

Pal_Sto = CorSto + ProbSto

ProbSto = Scrap_Sto + ErrorDataSystem2 + Other_Prob_sto

Pal_moved = CorRep + ProbRep

ProbRep = Scrap_Rep + ErrorDataSystem3 + Other_Prob_rep

Ord_LiPick = Cor_OrdLiPick + Prob_OrdLiPick

Prob_OrdLiPick = Scrap_Pick + ItemnoAvail_ord + Other_Prob_pick

ItemnoAvail = ItemnoAvail_ord * Prod_Ord

Ord_Ship = Cor_OrdLiShip + Prob_OrdLiShip

Prob_OrdLiShip = Scrap_Ship + No_OT_ship + NoComplet_OrdShip + Other_Prob_ship

Ord_Ship_OT = Ord_Ship - No_OT_ship

OTDel_ord = Ord_Del - No_OT_del

Ord_Del = CorDel + ProbDel

ProbDel = Scrap_Del + No_OT_del + Other_Prob_del

Ord_OT_ND_CD = CorDel

CustComplain = Ord_Del - NoComplain_ord

CONSTRAINTS

Ctrl_0_WHAdmin_and_SumAdmins = WHAdmin - WEfAdmin

Ctrl_1_TotalUnlo_and_TotalSto = Pal_Unlo - Pal_Sto

Ctrl_2_TotalOrder_and_TotalRep = ((Cust_Ord * Prod_Ord )/ Prod_pal ) - Pal_moved

Ctrl_2A_TotalOrder_and_TotalRep = (Pal_Sto + (Remain_Inv/ Prod_pal) ) -

Pal_moved

Ctrl_3_Cust_Ord_and_Total_pick = Cust_Ord - (Ord_LiPick/ Line_Ord)

Ctrl_4_TotalShip_and_TotalPick = (Ord_LiPick/ Line_Ord) - Ord_Ship

Ctrl_4A_Product_Out_and_Prod_Ship = (Ord_LiPick * Prod_Ord) - Product_Ship



Ctrl_5_TotalDel_and_TotalShip = Ord_Ship - Ord_Del

Ctrl_6_OT_ND_DC_and_OTDel_ord = OTDel_ord - Ord_OT_ND_CD

TIME INDICATORS

Rect = (WEfRec + HAdmin_rec + DeltaT_QueueRec + DeltaT_Insp + DeltaT_Others1) / (CorUnlo + ProbUnlo)

Putt = (WEfSto + HAdmin_sto + DeltaT_QueueSto + DeltaT_Others2) / (CorSto + ProbSto)

DSt = (WEfRec + WEfSto + HAdmin_rec + HAdmin_sto + DeltaT_QueueRec + DeltaT_QueueSto + DeltaT_Insp + DeltaT_Others1 + DeltaT_Others2) / (CorUnlo + ProbUnlo)

Rept = (WEfRep + HAdmin_rep + DeltaT_QueueRep + DeltaT_Others3) / (CorRep + ProbRep)

Pickt = (WEfPick + HAdmin_pick + DeltaT_QueuePick + DeltaT_Others4) / (Cor_OrdLiPick + Prob_OrdLiPick)

Shipt = (WEfShip + HAdmin_ship + DeltaT_QueueShip + DeltaT_Insp2 + DeltaT_Others5) / (Cor_OrdLiShip + Prob_OrdLiShip)

Delt = (WEfDel + HAdmin_del + DeltaT_QueueDel + DeltaT_Others6) / (CorDel + ProbDel)

OrdLTt = (WEfPick + HAdmin_pick + DeltaT_QueuePick + DeltaT_Others4 + WEfShip + HAdmin_ship + DeltaT_QueueShip + DeltaT_Insp2 + DeltaT_Others5 + WEfDel + HAdmin_del + DeltaT_QueueDel + DeltaT_Others6 + HAdmin_ord) / (CorDel + ProbDel)

PRODUCTIVITY INDICATORS

Labp = Product_Ship / WH

Recp = (CorUnlo + ProbUnlo) / WHRec

Stop = (CorSto + ProbSto) / WHSto

Repp = (CorRep + ProbRep) / WHRep

Pickp = (Cor_OrdLiPick + Prob_OrdLiPick) / WHPick

Shipp = (Cor_OrdLiShip + Prob_OrdLiShip) / WHShip

Delp = (CorDel + ProbDel) / WHDel

InvUtp = (avepallet / palSpace) * 100

TOp = Good_sold / aveinv

TrUtp = (Kg_Tr / (capTruck * nbTravel)) * 100

Thp = Product_Ship / WarWH

WarUtp = (WarCapUsed / WarCap) * 100

COST INDICATORS

Invc = (aveinv * ProductCost * rate) + (ItemnoAvail * Profit)

Trc = (TruckMaint + (value_oil * liter_used_travel * nbTravel) + (value_h_del * WHDel) + alpha * (value_h_del * WHDel) + Deprec1 + Other1) / (CorDel + ProbDel)

OrdProcc = ((beta_ord * WHAdmin * value_h_admin) + alpha * (beta_ord * WHAdmin * value_h_admin) + Deprec2 + Other2) / Cust_Ord

Labc = WHRec * value_h_rec + WHSto * value_h_sto + WHRep * value_h_rep + WHPick * value_h_pick + WHShip * value_h_ship + ((1 - beta_ord) * WHAdmin * value_h_admin) + WHOthers * value_h_others + alpha * (WHRec * value_h_rec + WHSto * value_h_sto + WHRep * value_h_rep + WHPick * value_h_pick + WHShip * value_h_ship + ((1 - beta_ord) * WHAdmin * value_h_admin) + WHOthers * value_h_others)

CSc = (((OrdProcc * Cust_Ord) + Labc + Maintc) / Sales) * 100

QUALITY INDICATORS

Invq = ((PalProcInv - ErrorDataSystem) / PalProcInv) * 100

Shipq = (Cor_OrdLiShip / (Cor_OrdLiShip + Prob_OrdLiShip)) * 100

OTShipq = (OTShip_ord / (Cor_OrdLiShip + Prob_OrdLiShip)) * 100

OrdFq = (((Cor_OrdLiShip + Prob_OrdLiShip) - NoComplet_OrdShip) / (Cor_OrdLiShip + Prob_OrdLiShip)) * 100

OTDelq = (OTDel_ord / (CorDel + ProbDel)) * 100

PerfOrdq = (OT_ND_DC_ord / (CorDel + ProbDel)) * 100

CustSatq = (((CorDel + ProbDel) - CustComplain) / (CorDel + ProbDel)) * 100

ScrapRate = ((((Scrap_Unlo + Scrap_Sto + Scrap_Rep) * Prod_pal) + ((Scrap_Pick + Scrap_Ship + Scrap_Del) * Prod_Ord)) / Product_Ship) * 100


Appendix I
Mean and standard deviation values of indicators

A complete list of the mean and standard deviation values for all indicators is given in this appendix, in Table I.1. The input dataset used to obtain this list is the 100-month time series of each indicator. These values are included as fixed variables in the optimization model.

Indicator Mean Standard deviation
CSc 0,36 0,02
CustSatq 99,53 0,39
Delp 3,93 0,46
Delt 0,28 0,03
DSt 0,84 0,10
Invc 221207,91 32507,65
Invq 99,87 0,08
InvUtp 53,51 7,66
Labc 12902,27 810,65
Labp 17,40 1,35
OrdFq 99,27 0,46
OrdLTt 1,25 0,15
OrdProcc 0,90 0,11
OTDelq 99,29 0,41
OTShipq 99,28 0,43
PerfOrdq 99,05 0,50
Pickp 1,98 0,24
Pickt 0,51 0,06
Putt 0,14 0,01
Recp 7,96 0,59
Rect 0,70 0,10
Repp 3,96 0,32
Rept 0,24 0,02
Scrapq 4,27 1,05
Shipp 2,64 0,31
Shipq 99,02 0,52
Shipt 0,38 0,05
Stop 7,96 0,57
Thp 156,64 12,19
TOp 1,33 0,21
Trc 3,26 0,34
TrUtp 84,07 5,43
WarUtp 50,28 2,81

Table I.1: The variable's mean and standard deviation.
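A minimal sketch of how these values standardize a monthly observation (Shipq taken from Table I.1 as the example; the observed 98.5% is hypothetical):

def standardize(value, mean, std):
    # z-score used for the *_NORM variables of the Appendix H model.
    return (value - mean) / std

print(standardize(98.5, 99.02, 0.52))   # = -1.0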


Appendix J
Optimization results

The results of the optimization for the inputs and the intermediate outputs are presented, respectively, in Tables J.1 and J.2.

Table J.1: Input results after maximization and minimization.

INPUT RESULTS

Time data [unit] - Limits in hours (Maximization; Minimization):
    β_del: 0,34; 1,00
    β_ord: 0,30; 0,30
    β_pick: 0,48; 1,00
    β_rec: 0,53; 1,00
    β_rep: 0,41; 1,00
    β_ship: 0,48; 1,00
    β_sto: 0,44; 1,00
    HAdmindel [hour]: 4,9; 1,0
    HAdminpick [hour]: 1,0; 1,0
    HAdminrec [hour]: 1,0; 1,0
    HAdminrep [hour]: 1,0; 1,0
    HAdminship [hour]: 1,0; 1,0
    HAdminsto [hour]: 2,6; 141,9

Picking, Shipping and Delivery data [unit] - Limits (Maximization; Minimization):
    Prod noAvail [orders]: 3000; 0
    No_OT_del [orders]: 0; 700
    No_OT_ship [orders]: 0; 700
    No Cust Complain [orders]: 3000; 0
    NoComplet Ord Ship [orders]: 0; 0
    Other_Prob_pick [orders]: 2; 0
    Other_Prob_del [orders]: 0; 0
    Other_Prob_ship [orders]: 0; 0
    Cor OrdLi Pick [orders]: 3000; 660
    Cor OrdLi Ship [orders]: 3000; 0
    Cor Del [orders]: 3000; 0
    scrap4 [orders]: 0; 40
    scrap5 [orders]: 0; 0
    scrap6 [orders]: 0; 0

Replenishment data [unit] - Limits (Maximization; Minimization):
    Cor Rep [pallet]: 996; 460
    error data system 3 [pallet]: 0; 0
    scrap3 [pallet]: 0; 40
    Other_Prob_rep [pallet]: 4; 0

Unloading and Storing data [unit] - Limits (Maximization; Minimization):
    Cor Sto [pallet]: 1000; 340
    Cor Unlo [pallet]: 1000; 340
    scrap1 [pallet]: 0; 15
    scrap2 [pallet]: 0; 18
    Other_Prob_sto [pallet]: 1,14; 0
    Other_Prob_unlo [pallet]: 0,5; 0
    error data system 1 [pallet]: 0; 4,5
    error data system 2 [pallet]: 0; 2

Cost data - Limits in $ (Maximization; Minimization):
    Maintc: R$ 1 000,0; R$ 1 000,0
    Truck Maint C: R$ 50,0; R$ 200 000,0

Other data [unit] - Limits (Maximization; Minimization):
    War WH [hour]: 210; 80
    Prod Ord [product]: 13,3; 12,6
    war used area [m2]: 1000; 4000
    nb_Travel [travels]: 80; 300
    mean_Insp [h]: 0,27; 0,5
    Cust Ord [orders]: 3000; 1593

Table J.2: Intermediate output results after maximization and minimization.

INTERMEDIATE OUTPUT RESULTS

Constraints [unit] (Maximization; Minimization):
    CTRL_0 [hour]: 45; 0,10
    CTRL_1 [pallet]: 0; 0
    CTRL_2 [pallet]: 0; 0
    CTRL_2A [pallet]: 0; 0
    CTRL_3 [order]: 0; 893
    CTRL_4 [order]: 0; 0
    CTRL_4A [product]: 0; 0
    CTRL_5 [order]: 0; 0
    CTRL_6 [order]: 0; 0

Component equation results (Maximization; Minimization):
    C1: 49,40; -184,56
    C2: 24,42; -29,05
    C3: 3,42; -50,04
    C4: 3,49; -193,39
    C5: 3,52; -282,76
    C6: 7,59; 0,04

Data [unit] (Maximization; Minimization):
    aveinv [product]: 20000; 10000
    Prob Data [pallet]: 0; 6,24
    Cust Complain [orders]: 0; 700
    ΔT(Insp) [hour]: 10,9; 7,7
    nb_trucks [trucks]: 40; 14,39
    Prod noAvail [products]: 0; 0
    Ord Del OT [orders]: 3000; 0
    Ord OT, ND, CD [orders]: 3000; 0
    Ord Ship OT [orders]: 3000; 0
    PalProcInv [pallets]: 3000; 1219
    Prob OrdLi Pick [orders]: 2,3; 40
    Prob OrdLi Ship [orders]: 0; 700
    Prob Del [orders]: 0; 700
    Prod Proc [products]: 40000; 8790
    Prob Rep [pallet]: 4,3; 40
    Prob Sto [pallet]: 1,2; 20
    Prob Unlo [pallet]: 0,5; 20
    Remain_Inv [products]: 0; 5605
    WarCapUsed: 1600; 4300
    Pal Sto [pallet]: 1000; 360
    Pal Unlo [pallet]: 1000; 360
    Pal Moved [pallet]: 1000; 500
    OrdLi Pick [orders]: 3000; 700
    Ord Ship [orders]: 3000; 700
    Ord Del [orders]: 3000; 700
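From these component results, a hedged back-of-the-envelope check of the global performance scale, assuming GP is the plain average of the six components (N = 6 in the Appendix H objective):

c_max = [49.40, 24.42, 3.42, 3.49, 3.52, 7.59]              # maximization column
c_min = [-184.56, -29.05, -50.04, -193.39, -282.76, 0.04]   # minimization column

gp_max = sum(c_max) / 6   # upper bound of the scale, ~15.31
gp_min = sum(c_min) / 6   # lower bound, ~-123.29
print(round(gp_max, 2), round(gp_min, 2))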
Appendix K
Abstracts

Global warehouse management: a methodology to determine an integrated performance measurement

ABSTRACT: The growing warehouse operation complexity has led companies to adopt a large number of indicators, making their management increasingly difficult. Besides the great quantity of information, it may be hard for managers to assess the interdependence of indicators with distinct objectives (e.g. the level of a cost indicator shall decrease, whereas a quality indicator level shall be maximized), which makes the evaluation of the overall performance of logistic systems, including the warehouse, complex.

In this context, this thesis develops a methodology to achieve an integrated warehouse performance measurement. It encompasses four main steps: (i) the development of an analytical model of the performance indicators usually used for warehouse management; (ii) the definition of the indicator relationships, analytically and statistically; (iii) the aggregation of these indicators in an integrated model; (iv) the proposition of a scale to assess the evolution of the warehouse performance over time according to the integrated model results.

The methodology is applied to a theoretical warehouse to demonstrate its use. The indicators used to evaluate the warehouse come from the literature, and a database is generated to run the mathematical tools. The Jacobian matrix is used to define the indicator relationships analytically, and principal component analysis to achieve the indicator aggregation statistically. The final aggregated model comprises 33 indicators, assigned to six different components, which compose the global performance indicator equation by means of a weighted average of the components. A scale is developed for the global performance indicator using an optimization approach to obtain its upper and lower boundaries.

After some tests to verify the usability of the integrated model, we conclude that the proposed methodology reaches its objective, providing a decision support tool for managers so that they can be more efficient in global warehouse performance management without neglecting important information from indicators.

Keywords: performance evaluation, warehouse performance, performance indicator, aggregated indicators, logistics.
Global warehouse management: a methodology to measure performance in an aggregated way

RESUMO: The growing complexity of warehouse operations has led companies to adopt a large number of performance indicators, which has made their management increasingly difficult. Besides the volume of information, the indicators usually have interdependences and distinct, sometimes even opposite, objectives (for example, the cost indicator must be reduced while the quality indicator must always be increased), making it complex for the manager to evaluate the global logistics performance of the system, including the warehouse.

Within this context, this thesis develops a methodology to obtain an aggregated measure of the global warehouse performance. The methodology is composed of four main steps: (i) the development of an analytical model of the performance indicators already used for warehouse management; (ii) the definition of the relationships between the indicators, analytically and statistically; (iii) the aggregation of these indicators in an integrated model; (iv) the proposition of a scale to evaluate the evolution of the global warehouse performance over time, according to the integrated model results.

The methodology is applied to a theoretical warehouse to demonstrate its applicability. The indicators used to evaluate the warehouse performance come from the literature, and a database is generated to allow the use of the mathematical tools. The Jacobian matrix is used to define the relationships between the indicators analytically, and a principal component analysis is performed to aggregate the indicators statistically. The final aggregated model comprises 33 indicators, divided into six different components, and the global performance indicator equation is obtained from the weighted average of the six components. A scale is developed for the global performance indicator using an optimization model to obtain its upper and lower bounds.

After tests with the integrated model, we conclude that the proposed methodology reaches its objective by providing a decision support tool for managers, allowing them to be more effective in the global management of the warehouse without neglecting the important information provided by the indicators.

Keywords: performance evaluation, warehouse performance, performance indicator, aggregated indicators, logistics.
Global warehouse management: a methodology to measure performance in an aggregated way

RÉSUMÉ: The growing complexity of warehouse operations has led companies to adopt a large number of performance indicators, which makes their management increasingly difficult. Moreover, as these numerous indicators are often interdependent, with different, sometimes opposite, objectives (for example, the result of a cost indicator must decrease, while a quality indicator must be maximized), it is often very difficult for a manager to evaluate the global performance of logistic systems, including the warehouse.

In this context, this thesis develops a methodology to reach an aggregated measure of the warehouse performance. It comprises four main steps: (i) the development of an analytical model of the performance indicators usually used for warehouse management; (ii) the definition of the relationships between the indicators, analytically and statistically; (iii) the aggregation of these indicators in an integrated model; (iv) the proposition of a scale to follow the evolution of the warehouse performance over time, according to the results of the aggregated model.

The methodology is illustrated on a theoretical warehouse to demonstrate its applicability. The indicators used to evaluate the warehouse performance come from the literature, and a database is generated to allow the use of the mathematical tools. The Jacobian matrix is used to define the relationships between the indicators analytically, and a principal component analysis is performed to aggregate the indicators statistically. The final aggregated model comprises 33 indicators, divided into six different components, and the global performance indicator equation is obtained from the weighted average of these six components. A scale is developed for the global performance indicator using an optimization approach to obtain its upper and lower bounds.

After tests performed with the integrated model, we conclude that the proposed methodology reaches its objective by providing a decision support tool for managers so that they can be more effective in the global management of the warehouse performance, without neglecting the important information provided by the indicators.

Keywords: performance evaluation, logistics warehouse performance, performance indicator, aggregated indicator, logistics.
RÉSUMÉ ÉTENDU
Gestion globale des entrepôts logistiques: une
méthodologie pour mesurer la performance de façon
agrégée

K.1 Context of the Research Problem

The increasing complexity of warehouse operations has led companies to use a large number of performance indicators, which makes their management increasingly difficult. Furthermore, since these numerous indicators are often interdependent and may have different, sometimes conflicting objectives (for example, the result of a cost indicator should decrease, whereas a quality indicator should be maximized), it is often very difficult for a manager to evaluate the global performance of logistics systems, including the warehouse. In this context, aggregating performance indicators can considerably simplify the analysis of the overall system by summarizing the information of a set of sub-indicators (Franceschini et al., 2006). The main motivation of this work is to support the manager's decisions on global warehouse performance in an effective way, knowing that people have a limited capacity to process a large number of performance expressions (Clivillé et al., 2007). Consequently, this thesis proposes a system that aggregates the indicators and gives a summarized evaluation of the global warehouse performance, taking all the relevant information into account. In the literature, several authors have discussed the need for a global measure, but very few works have tried to reach this objective. Thus, the main research gaps this thesis proposes to fill are: in a set of measures, if some are good and others are bad, how can the overall performance be known? (Johnson et al., 2010). The challenge is to design a structure of measures (for instance, to group them) and to extract from it a global sense of performance, that is, to be able to answer the question "Overall, where do we stand?" (Melnyk et al., 2004). Similarly, Lohman et al. (2004) state that a conceptual question remains unanswered: what are the effects of combining several measures into an overall score? Beyond the criticism about the usefulness of a global indicator and the possible reluctance of managers to use aggregated indicators, the main challenge is to provide reliable relationships among the indicators.

It is difficult to model the relationships among the indicators, since several factors influence their values. De Koster and Balk (2008) illustrate this situation by stating that common measures used in warehouses (for example, order lines picked per person per hour, delivery error rates, order fulfillment lead times) are not mutually independent and, moreover, each of them may depend on multiple inputs. As a result, indicators are not only influenced by other indicators (for example, the order lines picked per person per hour influence the quantity of late orders), but they can also be influenced by other warehouse parameters, such as the level of system automation, the assortment size, the warehouse size, etc.

From this context, the objectives of the work are presented.


K.2 General Objective

The main objective of this thesis is to develop a methodology for the evaluation of warehouse performance in an aggregated way.

K.2.1 Specific Objectives

From the general objective presented, the specific objectives are proposed as follows:

• Definition and classification of warehouse performance indicators;

• Development of an analytical model relating the performance indicators to their data;

• Creation of a methodology to determine the warehouse performance measurement in an aggregated way;

• Proposal of a method to verify analytically the links among the performance indicators;

• Determination of an optimization model to create a scale for the aggregated performance.

The steps that allowed these objectives to be reached during our thesis work are presented in the next sections.

K.3 Literature study underpinning the development of the methodology

To develop the methodology proposed in this thesis, several subjects were analyzed in the literature, as illustrated by Figure K.1.

Figure K.1: The subjects studied to develop the methodology proposed in this work.

First, a structured literature review on warehouses was carried out in order to identify the latest developments and possible research gaps. In addition, the results obtained highlighted the performance indicators most used by warehouses to evaluate their performance. Owing to the different types of indicators found in the literature, some classifications are performed. First of all, we differentiated direct indicators from indirect indicators: direct indicators are defined by simple mathematical expressions, whereas indirect indicators require more sophisticated measurement tools (for example, regression analysis, fuzzy logic, DEA, etc.). After this step, the direct indicators are classified along two axes (the result is presented in Table K.1): (i) the rows of the table classify them according to the quality, cost, time and productivity dimensions; (ii) the columns classify the measures according to the activities performed by the warehouse. Note that some indicators measure several activities at the same time (for example, the order lead time indicator); in that case they are classified in the table as transversal indicators.

After this literature review, Figure K.1 shows the other subjects studied to develop the general methodology. The literature on aggregated performance allowed us to examine how past works managed to propose a global performance measure, in any application domain. In parallel, the works on the grouping of performance indicators showed the mathematical tools most used for this aggregation. To apply these mathematical tools, it is necessary to understand how they work and what their constraints are. Finally, to reach the objective of proposing a scale for the aggregated indicator, methods to generate such a scale were studied.

From this bibliographical basis it is possible to develop the methodology to measure warehouse performance in an aggregated way, which is presented in the next section.


Table K.1: Classification of direct indicators according to the dimensions and activity boundaries.

Activity-specific indicators (by dimension):
• Time: receiving time (Receiving); putaway time (Storing); order picking time (Picking); shipping time (Shipping); delivery lead time (Delivery)
• Quality: storage accuracy (Storing); physical inventory accuracy, stock-out rate (Inventory); picking accuracy (Picking); shipping accuracy, orders shipped on time (Shipping); delivery accuracy, on-time delivery, cargo damage rate (Delivery)
• Cost: inventory cost (Inventory); distribution cost (Delivery)
• Productivity: receiving productivity (Receiving); inventory space utilization, turnover (Inventory); picking productivity (Picking); shipping productivity (Shipping); transport utilization (Delivery)

Process-transversal indicators (by dimension):
• Time: dock-to-stock time (Inbound); order lead time (Outbound); queuing time (Global)
• Quality: order fill rate, perfect orders (Outbound); customer satisfaction, scrap rate (Global)
• Cost: order processing cost (Outbound); cost as a % of sales (Global)
• Productivity: outbound space utilization (Outbound); throughput (Global)
K.4 Methodology to measure warehouse performance in an aggregated way

The methodology developed to attain an aggregated measure of warehouse performance comprises four main phases (see Figure K.2): (i) conceptualization, (ii) modeling, (iii) model solving, (iv) implementation and update.

Figure K.2 shows the four phases of the methodology with their main outputs. The conceptualization yields the analytical model of the performance indicators usually used for warehouse management. From the analytical model, the modeling phase measures the links among the performance indicators theoretically and aggregates them using statistical tools. These results serve as the basis for the model solving phase, in which the final aggregated performance model is defined. In addition, a scale is created in this phase to help track the evolution of warehouse performance over time, according to the results of the aggregated model. Finally, the implementation phase presents how the methodology should be applied and in which situations the aggregated model has to be updated.

Figure K.2: The phases of the proposed methodology with their main outputs (conceptualization: analytical model of performance indicators; modeling: theoretical model of indicator relationships, model for indicators aggregation; model solving: integrated model, scale definition; implementation and update: determination of an integrated performance model, model update). Source: adapted from Mitroff et al. (1974).

Figure K.3 details the steps to follow in each phase of the methodology (dashed rectangles). The first phase, conceptualization, comprises the determination of the application boundaries of the methodology, namely, in which warehouse sectors the performance will be measured and which indicators will be used for that purpose. This means that, to apply the methodology, it is necessary to define the sectors where the performance will be evaluated and the set of indicators to be used. These indicators must be known in terms of equations, since the analytical model is essentially formed by this group of equations.

Once the analytical model is developed (last step of the conceptualization, see Figure K.3), it is necessary to acquire data from the indicators. These data are time series obtained from the indicator results, which are evaluated periodically in the warehouse. From this step, two analyses can be conducted in parallel: the determination of the theoretical relationships among the indicators, and the use of historical data to perform the indicator aggregation.
Figure K.3: The steps to apply the methodology (conceptualization: definition of the scope of performance measurement, definition of the indicator set, determination of indicator and data equations, analytical model of performance indicators; modeling: indicator time series acquisition, assessment of the Jacobian matrix and statistical tools application, theoretical model of indicator relationships and model for indicators aggregation; model solving: analysis of the mathematical results, determination of the integrated performance model, scale definition; implementation and update).

The theoretical model is defined from the computation of the Jacobian matrix, and the indicator aggregation is obtained through statistical tools that reduce the dimensions of a group of variables.

Then, from the results of the application of the mathematical tools, a model of quantitative relationships among the indicators is built. It is called the "aggregated performance model", and its result is the global warehouse performance. Since the values obtained from these aggregated indicators cannot be interpreted freely, it is necessary to create a scale for them, represented by the scale definition step in Figure K.3. Finally, the implementation step shows the use of the model for the periodic management of a warehouse, and the update step defines when the methodology has to be revised.
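To make the scale-definition step concrete, here is a minimal sketch, assuming (hypothetically) that the aggregated score is a weighted sum of standardized indicators, each bounded between a worst and a best value; the lower and upper limits of the scale are then found by minimizing and maximizing the score over those bounds, which is a linear program. The weights and bounds below are purely illustrative, not the thesis's actual model.

import numpy as np
from scipy.optimize import linprog

# Hypothetical aggregated score: weighted sum of three standardized
# indicators, each bounded between a worst and a best observed value.
weights = np.array([0.5, 0.3, 0.2])   # illustrative component weights
bounds = [(-2.0, 2.0)] * 3            # illustrative per-indicator limits

# linprog minimizes c @ x: the minimum gives the scale's lower limit,
# and the minimum of the negated objective gives the upper limit.
lower = linprog(c=weights, bounds=bounds).fun
upper = -linprog(c=-weights, bounds=bounds).fun
print(f"aggregated performance scale: [{lower:.2f}, {upper:.2f}]")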

The next sections present the application of the methodology developed in this thesis.
K.5 Application of the methodology to a theoretical warehouse

The methodology is illustrated on a theoretical warehouse to demonstrate its applicability. Figure K.4 shows the theoretical warehouse studied, named the standard warehouse. The name "standard" comes from the operational activities performed in the warehouse, which are found in most warehouses: receiving, storage, internal replenishment, order picking, shipping and delivery. Thus, the performance measurement is carried out on the operational activities, including also the distribution activity. Figure K.4 details not only the boundaries of the activities but also their location in the warehouse, and the units of measure for the performance indicators. The indicators used to evaluate the warehouse performance come from the literature, and a database is generated to allow the use of the mathematical tools.

Figure K.4: The standard warehouse.

After defining the boundaries for the application of the methodology and the performance indicators to be used, it is necessary to build the analytical model. First, the equations of the performance indicators (41 in total) are defined. Since much of the data used to measure the indicators is itself interrelated, quantitative expressions also had to be developed for the data, in order to find all the possible relationships among the indicators. The final group of expressions, named the analytical model, contains 106 equations of performance indicators and their data.

To obtain the first insights on the indicator aggregation using mathematical tools, a data set is necessary. In a real context, data on the warehouse activities already exist and can be collected. However, as the warehouse studied here is theoretical, a database for the analytical model is generated, representing the product flow between the processes. This database is used to compute the performance indicators on a monthly basis, which generates an indicator time series to be coupled with the mathematical tools.
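As an illustration of this data-generation step, the following Python sketch builds a hypothetical monthly product-flow database and derives two indicator time series from it; all names and values (products_received, labor_hours, the error probability) are illustrative assumptions, not the actual figures of the thesis.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
months = pd.period_range("2013-01", periods=36, freq="M")

# Simulated monthly product flow between processes (illustrative values)
products_received = rng.normal(10000, 800, len(months)).round()
picking_errors = rng.binomial(products_received.astype(int), 0.002)
labor_hours = rng.normal(3200, 150, len(months))

# Monthly indicator time series computed from the simulated data
indicators = pd.DataFrame({
    "Labp": products_received / labor_hours,          # labor productivity
    "Pickq": 1 - picking_errors / products_received,  # picking accuracy
}, index=months)

print(indicators.head())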

The first mathematical tool used is the Jacobian matrix, and the results obtained are presented in the following section.

K.5.1 Analytical determination of the relationships among the performance indicators

The quantitative relationship among the indicators is the result of different variations and effects occurring at the same time in the warehouse activities. Two main forms of relationship can be identified: the effects of the chaining of activities, and the data shared by the indicators. The chaining effect is the impact of one performance indicator on another one corresponding to the next activity in the chain of activities performed by the warehouse. The shared-data effect, in turn, establishes that two indicators are related through the data they have in common. The main idea of this effect is that if two indicators share one or more data items, they have some kind of relationship, because if a data item changes, both indicators will be affected and will change in some way. For example, the indicators labor productivity and scrap rate use the same data item, quantity of processed products, in their equations. If the quantity of processed products changes, both indicators will also change, each with a different intensity.
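A minimal sketch of this shared-data effect, with two deliberately simplified (hypothetical) indicator equations sharing the input processed_qty; perturbing that single input moves both indicators, each with a different intensity:

# Two hypothetical indicators sharing the data item 'processed_qty'
def labor_productivity(processed_qty, labor_hours):
    return processed_qty / labor_hours

def scrap_rate(scrapped_qty, processed_qty):
    return scrapped_qty / processed_qty

# Changing the shared input affects both indicators at once
for qty in (10000, 11000):
    print(qty, labor_productivity(qty, 3200), scrap_rate(50, qty))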

Based on the fact that the links among the indicators come from data sharing, the Jacobian matrix of the analytical model can be computed. The Jacobian matrix is used to define the relationships among the indicators analytically. The Jacobian is a matrix of partial derivatives that is used to determine the relationship between outputs and inputs (Montgomery and Runger, 2003). In other words, the Jacobian matrix contains the partial derivatives of the n outputs (performance indicators) with respect to the m inputs (indicator data). Each cell of the matrix gives the value of a partial derivative, which can be interpreted as the variation of the output when the corresponding input varies, keeping the other inputs constant.
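Writing the indicators as functions I_i of the data inputs d_j (the symbols are ours, for illustration), the Jacobian used here reads:

\[
J =
\begin{pmatrix}
\dfrac{\partial I_1}{\partial d_1} & \cdots & \dfrac{\partial I_1}{\partial d_m}\\
\vdots & \ddots & \vdots\\
\dfrac{\partial I_n}{\partial d_1} & \cdots & \dfrac{\partial I_n}{\partial d_m}
\end{pmatrix},
\qquad
J_{ij} = \frac{\partial I_i}{\partial d_j},
\]

with n = 41 performance indicators and m = 81 data inputs; each cell J_ij gives the variation of indicator I_i when input d_j varies, all other inputs held constant.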

In this thesis, the CADES® software was used to compute the Jacobian matrix. Figure K.5 shows its interface; CADES® computes and returns the numerical results of the Jacobian matrix for the provided set of input data.

The computed Jacobian matrix is first analyzed with respect to its columns. There are mainly two types of inputs (the columns of the matrix): those related to a single output, and those related to several outputs. For illustration purposes, only the parts of the matrix where the inputs are related to two or more outputs are presented in Table K.2, given that this is the most important result for determining the links among the indicators. Each cell in Table K.2 contains the value of the partial derivative of the considered output with respect to the corresponding input data.

The hypothesis presented above, on the shared-data effect, means that if the partial derivatives of two indicators are non-zero for the same input, there is a relationship between these indicators. Table K.2 shows that several indicators have common inputs, which means a link between them. For example, the indicators Labc and OrdLTt have three common inputs (βord, emplPick, emplShip), which denotes a relationship. To identify all the links existing among the performance indicators, it is necessary to compare all the rows of Table K.2, two by two.
Figure K.5: The CADES® software interface: the inputs (81 independent data items), the outputs (all 41 indicators), and the resulting 41 × 81 Jacobian matrix.
Table K.2: Partial view of the Jacobian matrix, showing the inputs related to two or more outputs.

alpha beta_del beta_ord CorDel CorRep CorSto CorUnlo emplPick emplRec emplRep emplShip emplSto
CSc 0,24550 0 0,00000 -0,00038 0 0 0 0,02593 0,02593 0,02593 0,02593 0,02593
CustSatq 0 0 0 0,00101 0 0 0 0 0 0 0 0
Delp 0 0 0 0,00298 0 0 0 0 0 0 0 0
Delt 0 0,25190 0 -0,00021 0 0 0 0 0 0 0 0
DSt 0 0 0 0 0 0 -0,00067 0 0,20400 0 0 0,20400
EqDp 0 0 0 0 0 0 0 0 0 0 0 0
Invc 0 0 0 0 0 199,8 0 0 0 0 0 0
Invq 0 0 0 0 0,00013 0,00013 0,00013 0 0 0 0 0
InvUtp 0 0 0 0 0 0,05000 0 0 0 0 0 0
Labc 9988,0 0 -5292,0 0 0 0 0 1260,0 1260,0 1260,0 1260,0 1260,0
Labp 0 0 0 0 0 0 0 -1,5 -1,5 -1,5 -1,5 -1,5
Maintc 0 0 0 0 0 0 0 0 0 0 0 0
OrdFq 0 0 0 0 0 0 0 0 0 0 0 0
OrdLTt 0 0,25190 0,37780 -0,00101 0 0 0 0,11960 0 0 0,11960 0
OrdProcc 1,4 0 3,7 0 0 0 0 0 0 0 0 0
OTDelq 0 0 0 -0,07367 0 0 0 0 0 0 0 0
Putt 0 0 0 0 0 -0,00036 0 0 0 0 0 0,21120
Recp 0 0 0 0 0 0 0,00595 0 -4,2 0 0 0
Recq 0 0 0 0 0 0 0,00184 0 0 0 0 0
Rect 0 0 0 0 0 0 -0,00033 0 0,20400 0 0 0
Repp 0 0 0 0 0,00595 0 0 0 0 -3,7 0 0

The final result of this analysis is presented in Table K.3. The different colors represent the number of data items shared by two performance indicators (red for 1 shared data item, blue for 2, green for 3 or more shared data items). It is important to note that the number of data items shared by indicators gives an indication of the relationships existing between them, but not of the intensity of these relationships.
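This pairwise comparison can be mechanized directly from the Jacobian: for each pair of indicators (rows), count the inputs (columns) on which both partial derivatives are non-zero. A sketch, with a small hypothetical Jacobian standing in for the full 41 × 81 matrix:

import numpy as np

# Hypothetical 3-indicator x 4-input Jacobian (rows: indicators)
J = np.array([[0.25, 0.0, 1260.0, 0.0],
              [0.00, 0.3, 0.0, 0.1],
              [9988., 0.0, 1260.0, 0.1]])

nonzero = (J != 0).astype(int)

# shared[i, j] = number of inputs on which indicators i and j both
# depend, i.e. the quantity color-coded in Table K.3
shared = nonzero @ nonzero.T
np.fill_diagonal(shared, 0)
print(shared)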

The next section applies the statistical tools to evaluate the relationships among the indicators and to propose a first aggregated model for the warehouse performance.


Table K.3: The matrix of links among the indicators and the number of shared data items.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
1 CSc 0
2 CustSatq 2 0
3 Delp 3 2 0
4 Delt 3 2 4 0
5 DSt 3 0 1 1 0
6 EqDp 1 0 1 1 1 0
7 Invc 3 0 0 0 0 0 0
8 Invq 0 0 0 0 2 0 2 0
9 InvUtp 0 0 0 0 0 0 4 2 0
10 Labc 15 0 1 1 3 1 0 0 0 0
11 Labp 7 0 1 1 3 1 1 0 0 6 0
12 Maintc 2 0 0 0 0 0 0 0 0 0 0 0
13 OrdFq 0 0 0 0 0 0 0 0 0 0 2 0 0
14 OrdLTt 7 2 4 6 1 1 0 0 0 5 3 0 0 0
15 OrdProcc 6 0 1 1 1 1 0 0 0 5 1 0 0 3 0
16 OTDelq 2 2 2 2 0 0 0 0 0 0 0 0 0 2 0 0
17 OTShipq 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 0
18 PerfOrdq 2 2 2 2 0 0 0 0 0 0 0 0 0 2 0 2 0 0
19 Pickp 2 0 1 1 1 1 2 0 0 2 2 0 0 2 1 0 0 0 0
20 Pickq 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 0
21 Pickt 2 0 1 1 1 1 2 0 0 2 2 0 0 4 1 0 0 0 4 2 0
22 Putt 2 0 1 1 4 1 2 2 2 2 2 0 0 1 1 0 0 0 1 0 1 0
23 Recp 2 0 1 1 4 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 0
24 Recq 0 0 0 0 2 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0
25 Rect 2 0 1 1 8 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 4 2 0
26 Repp 2 0 1 1 1 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 1 0 1 0
27 Repq 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0
28 Rept 2 0 1 1 1 1 0 2 0 2 2 0 0 1 1 0 0 0 1 0 1 1 1 0 1 4 2 0
29 Scrapq 1 0 0 0 0 0 2 0 1 0 4 0 2 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0
30 Shipp 2 0 1 1 1 1 0 0 0 2 4 0 2 2 1 0 2 0 1 0 1 1 1 0 1 1 0 1 2 0
31 Shipq 0 0 0 0 0 0 0 0 0 0 2 0 2 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 2 0
32 Shipt 2 0 1 1 1 1 0 0 0 2 4 0 2 5 1 0 2 0 1 0 1 1 1 0 1 1 0 1 2 4 2 0
33 StockOutq 1 0 0 0 0 0 5 0 0 0 1 0 0 0 0 0 0 0 2 2 2 0 0 0 0 0 0 0 1 0 0 0 0
34 Stop 2 0 1 1 2 1 2 2 2 2 2 0 0 1 1 0 0 0 1 0 1 4 1 0 1 1 0 1 0 1 0 1 0 0
35 Stoq 0 0 0 0 0 0 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 2 0
36 Thp 2 0 1 1 1 1 1 0 0 1 5 0 2 1 1 0 2 0 1 0 1 1 1 0 1 1 0 1 4 3 2 3 1 1 0 0
37 TOp 4 2 2 2 0 0 5 2 4 0 1 0 0 2 0 2 0 2 0 0 0 2 0 0 0 0 0 0 2 0 0 0 1 2 2 1 0
38 Trc 4 2 4 4 1 1 0 0 0 2 1 0 0 4 2 2 0 2 1 0 1 1 1 0 1 1 0 1 0 1 0 1 0 1 0 1 2 0
39 TrUtp 4 2 2 2 0 0 1 0 0 0 1 0 0 2 0 2 0 2 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 1 4 3 0
40 WarUtp 0 0 0 0 0 0 4 2 4 0 0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 1 0 0 0 0 2 2 0 4 0 0 0
K.5.2 Aggregation of performance indicators using statistical tools - first results

Among the statistical tools that can be used to aggregate the indicators, PCA (Principal Component Analysis) was chosen in this work because it has no constraints on the use of time series as input data, and also for its ease of application. Since PCA aggregates the indicators based on their statistical correlations, the computation of the correlation matrix among the indicators is presented first, in Table K.4.

The numbers inside Table K.4 are the correlation coefficients, named Pearson's r (or simply r). All the selected cells present a significant correlation, with p < 0.01. The blue cells show medium correlations, with absolute correlation coefficient values between 0.4 and 0.59; the pink cells show high correlations, with absolute correlation coefficient values between 0.6 and 1.
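A sketch of how such a matrix and its color coding can be produced, assuming the monthly indicator series sit in a pandas DataFrame (random placeholder data stand in for the real series):

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
indicators = pd.DataFrame(rng.normal(size=(36, 4)),
                          columns=["Labp", "Pickq", "Shipt", "OrdLTt"])

corr = indicators.corr(method="pearson")  # full r matrix, as in Table K.4

# Keep only pairs significant at p < 0.01, then classify their strength
for i, a in enumerate(indicators.columns):
    for b in indicators.columns[i + 1:]:
        r, p = stats.pearsonr(indicators[a], indicators[b])
        if p < 0.01 and abs(r) >= 0.4:
            level = "high" if abs(r) >= 0.6 else "medium"
            print(f"{a} ~ {b}: r = {r:.2f} ({level})")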

Table K.4 shows that some indicators only have weak correlations, or just a few medium ones. For example, EqDp, Invq and Maintc have no correlations above 0.4 (|r| ≥ 0.4). This kind of result shows that these indicators may be problematic when incorporated into the PCA results, since the components are built on the correlations among the variables.

In parallel with the correlation matrix, PCA is applied to the whole set of 40 performance indicators. Before inserting the data into the PCA tool, the data were standardized, because of the sensitivity of the model to the large variations the data may present. The objective of this PCA is to examine the behavior of the indicators in aggregation situations, which provides more elements to define the final group of indicators that will be part of the aggregated model. The result of this first analysis is presented in Figure K.6.
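A sketch of this PCA step with scikit-learn, on placeholder data standing in for the 40 standardized indicator series; the retention rule (component standard deviation above one) and the cumulative-variance check mirror the analysis of Figure K.6 described below:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 40))    # 36 monthly observations, 40 indicators

# Standardization first: PCA is sensitive to the scale differences
# between indicators (costs vs. rates vs. times)
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)

std_devs = np.sqrt(pca.explained_variance_)         # per-component std. dev.
cum_var = np.cumsum(pca.explained_variance_ratio_)  # cumulative variance

kept = int((std_devs > 1).sum())  # retain components with std. dev. > 1
print(f"components retained: {kept}, "
      f"cumulative variance: {cum_var[kept - 1]:.2%}")

loadings = pca.components_.T  # indicators x components weight table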

Figure K.6, like the following figures showing a PCA result, is divided into three parts: a table showing the standard deviation, the proportion of variance and the cumulative proportion of variance for the principal components (at the bottom of the figure); a table of indicators versus components (at the top left of the figure); and the scree plot on the right-hand side of the figure. Each of these three parts is explained in what follows.

The table at the bottom of Figure K.6 carries three different pieces of information to analyze. First, the standard deviation of each principal component, when greater than one, is defined as one of the criteria for choosing the components to retain. As an example, in Figure K.6 there are 10 components (PC1 to PC10) with a standard deviation greater than one, which indicates that these ten components should be considered in the representation of the performance indicator set. The second piece of information, the proportion of variance, shows the contribution of each component to explaining the data variance, whereas the cumulative proportion (third row) presents the running sum of these variance proportions over the components. With the choice of ten components, the cumulative proportion is 86.55%, which means that the ten components explain 86.55% of the total variance of the indicators.


Table K.4: Matrix of statistical correlations among the indicators.

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
1 CSc 1
2 CustSatq -0,1 1
3 Delp -0,6 0 1
4 Delt 0,65 -0 -0,98 1
5 DSt 0,02 -0,1 -0,16 0,13 1
6 EqDp -0,1 0,1 0,07 -0,1 -0,2 1
7 Invc 0,1 -0,2 -0,04 0,03 0,16 -0 1
8 Invq -0,1 -0,1 0,07 -0,1 -0,1 -0 0,09 1
9 InvUtp 0,11 -0,2 -0,02 0,02 0,16 -0 0,97 0,1 1
10 Labc 0,55 -0,2 -0,52 0,52 0,1 -0 0,14 -0 0,12 1
11 Labp -0,96 0,1 0,67 -0,7 -0,1 0,1 -0,13 0,1 -0,1 -0,7 1
12 Maintc -0 -0,1 0,11 -0,1 -0 -0 0,03 -0 0,04 0,21 0,07 1
13 OrdFq 0,04 -0,2 0 0,01 0 -0 0,11 0,1 0,1 0,06 -0 -0 1
14 OrdLTt 0,65 -0 -0,98 1,0 0,13 -0 0,03 -0 0,02 0,52 -0,7 -0,1 0 1
15 OrdProcc 0,60 0 -0,97 0,99 0,14 -0 0,04 -0 0,02 0,43 -0,6 -0,1 0 0,99 1
16 OTDelq -0 0,6 -0,15 0,12 0,05 -0 -0,05 0 -0 -0 -0 -0 -0,2 0,12 0,12 1
17 OTShipq 0,07 -0,2 -0,03 0,04 -0 -0 0,03 0 0,03 0,06 -0 -0 0,9 0,04 0,05 -0,2 1
18 PerfOrdq -0 0,7 -0,03 0,03 -0 0,1 -0,03 -0 -0 -0 -0 -0 -0,1 0,03 0,02 0,9 -0,2 1
19 Pickp -0,6 0 1 -0,97 -0,2 0,1 -0,05 0,1 -0 -0,5 0,66 0,11 0 -0,97 -0,97 -0,1 -0 -0 1
20 Pickq -0,1 0,1 0,07 -0,1 -0 0,1 -0,18 -0 -0,1 -0,1 0,11 -0,1 -0,2 -0,1 -0,1 0,1 -0,1 0,1 0,04 1
21 Pickt 0,63 -0 -0,97 1,00 0,13 -0 0,03 -0 0,02 0,51 -0,7 -0,1 0 1,0 0,99 0,1 0 0 -0,98 -0,1 1
22 Putt 0,38 -0,1 -0,32 0,33 -0,2 -0 -0,12 -0 -0,1 0,74 -0,5 0,12 0 0,33 0,24 -0,1 0,1 -0,1 -0,3 -0 0,31 1
23 Recp -0,39 0,1 0,34 -0,3 0,15 0,2 0,11 0,1 0,14 -0,76 0,51 -0,1 0 -0,3 -0,3 0,1 -0,1 0,1 0,33 0,01 -0,3 -1 1
24 Recq 0,08 0,1 -0,21 0,19 0,1 -0 0,01 0,2 0,04 0 -0,1 -0,1 -0,1 0,19 0,22 0,1 -0,1 0,1 -0,2 0,09 0,21 -0,1 0,04 1
25 Rect -0 -0,1 -0,12 0,1 1,00 -0 0,17 -0 0,17 0,02 -0 -0 0 0,1 0,11 0,1 -0 -0 -0,1 -0 0,1 -0,3 0,24 0,1 1
26 Repp -0,95 0,1 0,69 -0,7 -0,1 0,1 -0,14 0,1 -0,1 -0,69 0,98 0,07 -0 -0,7 -0,7 0 -0,1 0 0,68 0,08 -0,7 -0,5 0,5 -0,1 -0 1
27 Repq -0,1 -0 0,06 -0 0,03 -0 0,18 0,1 0,19 -0,1 0,11 0,07 0,1 -0 -0 0 0,1 -0 0,06 -0,1 -0 -0,1 0,13 0,1 0,04 0,04 1
28 Rept 0,96 -0,1 -0,7 0,73 0,06 -0 0,13 -0 0,12 0,69 -0,98 -0,1 0 0,73 0,68 -0 0,1 -0 -0,7 -0,1 0,72 0,47 -0,5 0,1 0,01 -0,99 -0 1
29 Scrapq 0,08 -0,3 0,03 -0 -0 0 -0,02 0,1 -0 0,12 -0,1 0,09 -0,2 -0 -0 -0,4 -0,3 -0,4 0,04 -0,3 -0 0,12 -0,1 -0,4 -0 -0 -0,41 0,03 1
30 Shipp -0,6 0 1,0 -0,98 -0,2 0,1 -0,05 0,1 -0 -0,5 0,67 0,11 -0 -0,98 -0,97 -0,1 -0,1 -0 1,0 0,07 -0,97 -0,3 0,34 -0,2 -0,1 0,69 0,06 -0,7 0,04 1
31 Shipq 0,05 -0,1 0,03 -0 -0 -0 0,07 -0 0,08 0,06 -0 0 0,8 -0 -0 -0,1 0,8 -0 0,05 -0,2 -0 0,02 -0 -0,2 -0 -0 0,03 0,02 -0,3 0,01 1
32 Shipt 0,65 -0 -0,98 1,00 0,13 -0 0,03 -0 0,02 0,52 -0,7 -0,1 0 1,00 0,99 0,1 0,1 0 -0,97 -0,1 1 0,33 -0,3 0,2 0,09 -0,7 -0 0,73 -0 -0,98 -0 1
33 StockOutq 0,06 -0,1 -0,04 0,03 0,08 -0 0,4 0 0,19 0,11 -0,1 -0,1 0,1 0,03 0,02 -0,1 0,1 -0 -0 -0,5 0,01 0,04 -0,1 -0,1 0,07 -0,1 0,09 0,08 0,05 -0 0,07 0,03 1
34 Stop -0,4 0,1 0,32 -0,3 0,16 0,2 0,14 0,1 0,16 -0,7 0,48 -0,1 -0 -0,3 -0,2 0,1 -0,1 0,1 0,31 0,01 -0,3 -0,99 0,99 0,1 0,25 0,48 0,14 -0,47 -0,1 0,32 -0 -0,3 -0 1
35 Stoq -0 0,2 0,06 -0,1 -0,2 -0 -0,15 0,2 -0,1 -0,2 0,04 -0,2 0,2 -0,1 -0 0,1 0,1 0 0,06 0,01 -0 -0,1 0,18 0,1 -0,1 0,05 0,01 -0,04 -0,46 0,06 0,2 -0 -0,1 0,14 1
36 Thp -0,96 0,1 0,67 -0,7 -0,1 0,1 -0,13 0,1 -0,1 -0,69 1 0,07 -0 -0,7 -0,6 -0 -0 -0 0,66 0,11 -0,7 -0,5 0,51 -0,1 -0 0,98 0,11 -0,98 -0,1 0,67 -0 -0,7 -0,1 0,48 0,04 1
37 TOp -0,4 0,1 0,17 -0,2 -0,1 0 -0,88 -0 -0,91 -0,1 0,38 0,08 -0,1 -0,2 -0,2 0 -0,1 0 0,17 0,13 -0,2 0,14 -0,1 -0,1 -0,1 0,39 -0,2 -0,38 0,04 0,17 -0,1 -0,2 -0,2 -0,2 0,07 0,38 1
38 Trc 0,59 0 -0,96 0,98 0,14 -0 0,01 -0 -0 0,41 -0,6 -0,1 -0 0,98 0,98 0,1 0 0 -0,96 -0,1 0,98 0,24 -0,3 0,2 0,11 -0,6 -0 0,66 -0 -0,97 -0 0,98 0 -0,2 -0 -0,6 -0,2 1
39 TrUtp -0,5 0,2 0,4 -0,4 -0,1 0,2 -0,08 0,2 -0,1 -0,96 0,64 -0,2 -0 -0,4 -0,3 0,1 -0 0,1 0,37 0,07 -0,4 -0,8 0,77 0,1 0,02 0,62 0,09 -0,62 -0,1 0,4 -0,1 -0,4 -0,1 0,75 0,18 0,64 0,04 -0,3 1
40 WarUtp -0,1 -0,1 0,01 -0,1 0,12 -0 0,6 0,1 0,61 0,14 0,01 0,01 0,1 -0,1 -0,1 -0,1 0 -0,2 0,01 -0,1 -0 0,06 -0,1 0,1 0,11 0,03 0,04 -0,04 0,04 0,01 0,12 -0,1 0,2 -0,1 -0,1 0,01 -0,5 -0,1 -0 1
The indicators-versus-components table shows, in its cells, the loadings a_ij that give the weight of each indicator in the respective component. The highlighted cells are those verifying |loading| ≥ 0.3, denoting the indicators taken into