
CHAPTER III

RESEARCH METHODOLOGY

3.1 Introduction

The preceding chapters, I and II, presented the background of the study and the literature review pertaining to it. In this chapter, the components of the research methodology are discussed. The design of any research project requires considerable attention to the research methods, so the research design is described in the first section. The chapter then describes the sampling technique, the data collection methodology, and the tools and techniques used to test the hypotheses. The relevant hypotheses are also presented.

3.2 Research design

A research design is the arrangement of conditions for the collection and analysis of data in a manner that aims to combine relevance to the research purpose with economy in procedure. It constitutes the blueprint for the collection, measurement and analysis of data, and details the procedures necessary for obtaining the information required to structure or solve research problems. The research design adopted in this study is conclusive in nature: it tests specific hypotheses and examines relationships. A conclusive design is meant to provide information that is useful in reaching conclusions or making decisions.

3.3 Sampling frame, Sampling method and Sample Size

A sampling frame is a listing of all the elements in the population from which the sample is drawn. The Prowess database maintained by the Centre for Monitoring Indian Economy was used to identify the companies listed on the Bombay Stock Exchange with registered offices in Tamil Nadu. This yielded 312 companies. Service-sector companies belonging to the hotels and hospitals categories were eliminated because the risks they face, and their strategies and practices, are different. After eliminating the service-sector companies, the list comprised 302 companies.

The universal sampling method was used: the questionnaire was circulated to all the companies in the resulting list. After repeated reminders, continuous follow-ups and personal, mail and phone contacts, 176 companies responded, which is nearly 58% of the population. The achieved response rate is satisfactory for statistical generalisation, because the responding companies did not differ significantly from the non-responding companies and can be taken to represent their views and opinions. Statistically, a sample of 30 or more observations is considered large, whereas the sample size in the present study is 176, nearly 58% of the population, and is an adequate response rate in comparison to previous studies (the response rate of the 1998 Wharton Survey of Derivatives, as reported in Bodnar et al. (1998), is 21%; Berkman et al. (1996), 64%; De Ceuster et al. (1997), 22%; Bodnar et al. (1998), 50%).

3.4 Data Collection

Data can be collected from primary or secondary sources. Primary data refers to information obtained first-hand by the researcher on the variables of interest for the specific purpose of the study. Secondary data refers to information gathered from already existing sources.

3.4.1 Primary Data

Primary data refers to first-hand data obtained from the respondents. Primary data was used to gather first-hand information about the risk management objectives, measurement tools, risk tackling techniques and risk management practices adopted by the organisations. In order to fulfil the objectives set, a questionnaire was devised and administered to the respondents. The use of survey questionnaires is common in risk management studies (Yazid et al., 2008; Beasley et al., 2006, 2007; Ghaleb Abbas et al., 2007; Liebenberg & Hoyt, 2003). The contents of the questionnaire are primarily based on the style of the risk management questionnaires of the Audit Office of New South Wales, the Economic Intelligence Survey and the KPMG Survey. The questionnaire was circulated to a few leading industrial experts in Coimbatore for pilot testing, and the necessary corrections were made as per their suggestions. The reliability result is presented in the next section. The questionnaires were addressed to the Finance Directors. Because of the wide geographical area involved, the following techniques were adopted to collect data:

1. Personal contact.

2. Mailing the questionnaire: the questionnaire was hosted on the Web, and the URL link to it was given in the mail.

3. Posting the questionnaire with a self-addressed return envelope.

To encourage their willingness to participate in the survey, respondents were promised a copy of the results, with due permission from the University. This helped to increase the response rate. After continuous follow-up, 176 responses were received, which is nearly 58% of the population. Since the present study focuses on corporate risk management practices across different sectors, the 176 responding companies belonged to various sectors such as Automotive, Manufacturing, Chemicals, FMCG, Resources, IT & Technology and Construction. A specimen of the questionnaire given to the sample respondents is presented in Appendix I of the thesis.

3.4.1.1 Reliability Analysis

Reliability refers to random error in measurement: it indicates the accuracy or precision of the measuring instrument (Norland, 1990). It is the degree to which the observed variable measures the true value and is error free (Hair et al., 1998). The reliability of this questionnaire was tested by calculating Cronbach's alpha, an established method of assessing internal consistency. Cronbach's alpha provides a coefficient of inter-item correlations, that is, the correlation of each item with the sum of all the other items. The higher the coefficient, the better the measuring instrument, and the closer Cronbach's alpha is to 1, the higher the internal consistency. A reliability value of at least 0.7 is acceptable (Nunnally, 1978). Further, Bagozzi and Yi (1998) state that a scale is said to be reliable if the reliability value of each construct is greater than 0.60. The reliability score of the construct is tabulated below.

TABLE 3.1 Results of Reliability Analysis

S.No.  Construct                             Cronbach's Alpha  No. of items
1      Effective Risk Management practices   0.879             20

Table 3.1 presents the results of the reliability analysis. The Cronbach's alpha value is found to be close to 1 (i.e., 0.879); a value of 0.7 and above is usually acceptable (Nunnally, 1978). The results show that the instrument is highly reliable.
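The coefficient can be reproduced directly from the item-level data. The following is a minimal sketch in Python, assuming a (respondents x items) matrix of Likert-scale ratings; the simulated data are purely illustrative, not the actual survey responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative use with simulated 5-point ratings (176 respondents, 20 items,
# mirroring the construct in Table 3.1; the data are made up, so the value
# will not match the 0.879 reported there).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(176, 20))
print(round(cronbach_alpha(ratings), 3))
```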



3.4.2 Secondary Data

The primary data were supplemented by a range of secondary sources. The secondary data related to the study were gathered from leading journals such as the Journal of Risk Finance, The Review of Financial Studies, Risk Management, the Journal of Finance and Financial Analyst. A number of textbooks were studied to obtain pertinent literature on corporate risk management. Other sources of secondary data include monographs, working papers, conference and seminar proceedings, research publications and interactions with subject experts.

3.5 Hypotheses Testing

A hypothesis is usually considered the principal instrument in research. A hypothesis can be defined as a logically conjectured relationship between two or more variables, expressed in the form of a testable statement. In classical tests of significance, two kinds of hypotheses are used; the null hypothesis, a statement that no difference exists between the parameter and the statistic being compared to it, is the one used for testing. The hypotheses formulated in this study were:

H01: No difference exists between the average score of the likelihood of occurrence of risks and the mean level of all types of risks.

H02: No difference exists between the average score of the impact of occurrence of risks and the mean level of all types of risks.

H03: No difference exists between the average risk score and the mean scores of all types of risks.

H04: No relationship exists between paid-up capital and the likelihood of occurrence of all risks.

H05: No relationship exists between paid-up capital and the impact of occurrence of all risks.

H06: No relationship exists between paid-up capital and risk score.

H07: No difference exists among the mean scores of likelihood of occurrence of risks based on the nature of industry.

H08: No difference exists among the mean scores of impact of occurrence of risks based on the nature of industry.

H09: No difference exists between the mean values of risk scores based on the nature of industry.

H010: No difference exists among the overall risk scores based on the size of the company in terms of employees.

H011: No difference exists in the tools used to manage foreign exchange risks based on paid-up capital.

H012: No difference exists in the tools used to manage foreign exchange risk based on sales.

H013: No difference exists in overall risk score means between the respondents with and without a risk register database.

H014: No association exists between the credit risk measuring tool and the nature of industry.

H015: No association exists between the market risk measuring tool and the nature of industry.

H016: No association exists between budgeted expenses for risk management and the size of the company in terms of sales, employees, paid-up capital, nature of industry and years of existence.

H017: No association exists between a documented risk policy and the size of the company in terms of sales, employees, paid-up capital, nature of industry and years of existence.

H018: No association exists between the presence of an organisational communication strategy and the size of the company in terms of sales, employees, paid-up capital, nature of industry and years of existence.

H019: No relationship exists between holding a risk register database and the size of the company in terms of sales, employees, paid-up capital, nature of industry and years of existence.

H020: No relationship exists between the nature of industry and credit risk management tools.

H021: No relationship exists between the nature of industry and operating risk management tools.

H022: No consistency exists in the ranking of the objectives of risk management among the respondents.

3.6 Tools used in the study

Statistical tools play an important role in analysing the data and drawing inferences therefrom. Given the mass of data obtained from the research survey, together with the data collected from secondary sources and presented in this report, a descriptive and analytical approach was considered most appropriate for the study. The tools used in the present study were the mean, standard deviation, chi-square test, ANOVA, one-sample t-test, correlation, Kruskal-Wallis test, independent t-test, factor analysis and Structural Equation Modeling (SEM).

3.6.1 Mean

The most popular and widely used measure for representing an entire data set by a single value is the mean. Its value is obtained by adding together all the items and dividing the total by the number of items:

$$\bar{X} = \frac{X_1 + X_2 + X_3 + \cdots + X_n}{N} = \frac{\sum X}{N}$$

where $\bar{X}$ = arithmetic mean, $\sum X$ = sum of all the values of the variable X, and N = number of observations.

Mean values were calculated for the likelihood of occurrence and the impact of risks of all categories; a short computational sketch for the mean and the standard deviation follows the next subsection.

3.6.2 Standard deviation

The concept of the standard deviation was introduced by Karl Pearson in 1893. It is the most widely used measure of dispersion. The standard deviation is also known as the root mean square deviation from the mean. It helps in judging the representativeness of the mean: the higher the standard deviation, the lower the dependability of the mean, and vice versa. It is represented by $\sigma$:



$$\sigma = \sqrt{\frac{\sum (x - \bar{x})^2}{N}}$$

The standard deviation was calculated for the likelihood of occurrence and the impact of occurrence of all risk categories.
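As a minimal illustration of the two formulas above (the mean from Section 3.6.1 and the standard deviation just given), the following Python sketch computes both for a hypothetical vector of likelihood ratings; the numbers are invented for illustration.

```python
import numpy as np

# Hypothetical 5-point likelihood-of-occurrence ratings for one risk category.
likelihood = np.array([3, 4, 2, 5, 4, 3, 4], dtype=float)

mean = likelihood.sum() / likelihood.size            # sum of items / N
sigma = np.sqrt(((likelihood - mean) ** 2).mean())   # root mean squared deviation

print(mean, sigma)  # same results as np.mean(likelihood), np.std(likelihood)
```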

3.6.3 Pearson Coefficient of Correlation

Correlation is the statistical analysis which measures the degree or extent to which two variables fluctuate with reference to each other; it measures the closeness of the relationship between the variables. The Pearson correlation is denoted by r, and the formula for computing it is as follows:

$$r = \frac{\sum xy}{N \sigma_x \sigma_y}$$

where r = correlation coefficient, x and y = deviations of the two variables from their respective means, $\sigma_x$, $\sigma_y$ = their standard deviations, and N = number of pairs of observations.

The coefficient of correlation describes not only the magnitude of the correlation but also its direction: a plus sign indicates a positive relationship and a minus sign a negative one. To interpret the strength of the correlation, the following scale is used:

0.0 to 0.2  - Slight relationship

0.2 to 0.4  - Definite relationship

0.4 to 0.6  - Marked relationship

0.6 to 0.8  - High correlation

0.8 to 0.99 - Very high correlation

1           - Perfect correlation



The degree of relationship between paid-up capital and sales on the one hand, and the likelihood of occurrence of risks, the impact of risks and the risk scores on the other, was calculated.
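A minimal sketch of such a correlation test in Python using scipy, with invented paired observations standing in for paid-up capital and risk score:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations (illustrative values only).
paid_up_capital = np.array([12.0, 45.5, 8.2, 30.1, 22.7, 18.3])
risk_score      = np.array([ 2.1,  3.8, 1.9,  3.0,  2.6,  2.4])

r, p_value = stats.pearsonr(paid_up_capital, risk_score)
print(f"r = {r:.3f}, p = {p_value:.3f}")  # the sign of r gives the direction
```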

3.6.4 Chi-square Test

Probably the most widely used nonparametric test of significance is the chi-square test. It is particularly useful in tests involving nominal data, but can be used for higher scales as well. The chi-square ($\chi^2$) test was used, and the formula is given below:

$$\chi^2 = \sum \frac{(O - E)^2}{E}$$

where O = observed frequency and E = expected frequency.

The calculated value of $\chi^2$ is compared with the table value of $\chi^2$ for the given degrees of freedom at a specified level of significance. If the calculated value of $\chi^2$ is greater than the table value, the formulated hypothesis is rejected, and vice versa.

The chi-square test was used to find the association between risk measurement tools and the nature of industry, and between ERM practices (such as documenting a risk policy and maintaining a risk register database) and company size in terms of employees, sales, paid-up capital, etc.
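A minimal sketch of such an association test in Python, using a hypothetical contingency table of industry (rows) against the risk measuring tool used (columns):

```python
import numpy as np
from scipy import stats

# Hypothetical counts: three industries x three measuring tools.
observed = np.array([[20, 15,  5],
                     [10, 25, 10],
                     [ 8, 12, 20]])

chi2, p, dof, expected = stats.chi2_contingency(observed)
# Reject the null hypothesis of no association when p < 0.05.
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```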

3.6.5 ANOVA

Analysis of variance (ANOVA) is the statistical method for testing the null hypothesis that the means of several populations are equal. It uses a single-factor, fixed-effects model to compare the effects of one factor on a continuous dependent variable. ANOVA works with squared deviations, so the distances of the individual data points from their own group mean or from the grand mean can be summed. In an ANOVA model, each group has its own mean and values that deviate from that mean. The total deviation is the sum of squared differences between each data point and the overall mean, and the deviation of any particular data point may be partitioned into between-groups variance and within-groups variance. The between-groups variance represents the effect of the treatment or the factor: differences between group means imply that each group was treated differently, and the treatment effects appear as deviations of the sample means from the grand mean. The within-groups variance describes the deviations of the data points within each group from their sample mean; it arises from differences among subjects and from random variation, and is often called error.

The test statistic for ANOVA is the F ratio, which compares the variance from these two sources:

$$F = \frac{\text{Between-groups variance}}{\text{Within-groups variance}} = \frac{\text{Mean square between}}{\text{Mean square within}}$$

where

$$\text{Mean square between} = \frac{\text{Sum of squares between}}{\text{Degrees of freedom between}}, \qquad
\text{Mean square within} = \frac{\text{Sum of squares within}}{\text{Degrees of freedom within}}$$

In this study, the F ratio is used to judge whether significant differences exist among the mean scores of the likelihood of occurrence of risks, the impact of occurrence of risks and the risk scores based on the nature of industry. If the calculated F value is less than the table value, the difference is taken as insignificant; if the calculated F value is greater than the table value, the difference is considered significant.
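A minimal sketch of a one-way ANOVA in Python, with invented risk scores grouped by industry (the group names and values are illustrative assumptions):

```python
from scipy import stats

# Hypothetical overall risk scores grouped by nature of industry.
automotive = [2.8, 3.1, 2.9, 3.4, 3.0]
chemicals  = [3.6, 3.9, 3.2, 3.8, 3.5]
it_tech    = [2.1, 2.4, 2.6, 2.3, 2.5]

f_ratio, p = stats.f_oneway(automotive, chemicals, it_tech)
# p < 0.05 (equivalently, F above the table value) => group means differ.
print(f"F = {f_ratio:.2f}, p = {p:.4f}")
```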

3.6.6 Kruskal-Wallis Test

The Kruskal-Wallis test is appropriate for data collected on an ordinal scale or for interval data. It is a one-way analysis of variance by ranks, and assumes independent samples and an underlying continuous distribution. Data are prepared by converting ratings or scores to ranks for each observation, the ranks running from the highest to the lowest across all data points in the aggregated samples. The ranks are then tested to decide whether the samples come from the same population.

$$H = \frac{12}{n(n+1)} \sum_{i=1}^{k} \frac{R_i^2}{n_i} - 3(n+1)$$

where H = Kruskal-Wallis test statistic, n = total number of observations in all samples, $n_i$ = number of observations in the i-th sample, and $R_i$ = sum of the ranks in the i-th sample.

In this study, the Kruskal-Wallis test is used to test whether the nature of industry has an influence on the type of risk management tool used.
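A minimal sketch in Python, with hypothetical ordinal ratings from three industry groups:

```python
from scipy import stats

# Hypothetical ordinal ratings of a risk management tool by industry group.
group_a = [3, 4, 2, 5, 4]
group_b = [1, 2, 2, 3, 1]
group_c = [4, 5, 5, 3, 4]

h_stat, p = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h_stat:.2f}, p = {p:.4f}")  # small p => groups differ in ranks
```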

3.6.7 One-Sample t-test

The one-sample t-test compares the mean score of a sample to a known value, usually the population mean (the average for the outcome in some population of interest). The basic idea of the test is a comparison of the sample average (observed average) with the population average (expected average), with an adjustment for the number of cases in the sample and the standard deviation of the average.

$$t = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}$$

where t = one-sample t-test statistic, $\mu$ = population mean, $\sigma$ = standard deviation, $\bar{x}$ = sample mean, and n = number of observations in the sample.

The one-sample t-test was used to test whether the likelihood of occurrence of the different risk categories, the impact of the different risk categories and the risk scores are around the calculated average score.
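A minimal sketch in Python, testing hypothetical likelihood scores against an assumed average of 3.0 (both the data and the reference value are illustrative):

```python
from scipy import stats

# Hypothetical likelihood-of-occurrence scores for one risk category.
scores = [3.2, 2.8, 3.5, 3.1, 2.9, 3.4, 3.0]

t_stat, p = stats.ttest_1samp(scores, popmean=3.0)
print(f"t = {t_stat:.2f}, p = {p:.4f}")  # large p => mean is near 3.0
```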

3.6.8 Independent t-test

The independent t-test is used to test for a difference between two


independent groups on the means of a continuous variable. It is an
inferential statistical test that determines whether there is a statistically
significant difference between the means in two unrelated groups.

$$t = \frac{\bar{X}_1 - \bar{X}_2}{S_{X_1 X_2} \cdot \sqrt{\tfrac{2}{n}}}, \qquad
S_{X_1 X_2} = \sqrt{\frac{S_{X_1}^2 + S_{X_2}^2}{2}}$$

where $S_{X_1 X_2}$ is the pooled standard deviation, subscript 1 denotes group one and subscript 2 denotes group two.



The denominator of t is the standard error of the difference between the two means. For significance testing, the degrees of freedom for this test are 2n − 2, where n is the number of participants in each group. The independent t-test was used to compare the mean values between the respondents who have a risk register database and those who do not.
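A minimal sketch in Python with invented risk-score samples for the two groups:

```python
from scipy import stats

# Hypothetical overall risk scores with and without a risk register database.
with_register    = [3.4, 3.1, 3.8, 3.5, 3.2]
without_register = [2.6, 2.9, 2.4, 2.8, 3.0]

t_stat, p = stats.ttest_ind(with_register, without_register)
print(f"t = {t_stat:.2f}, p = {p:.4f}")  # df = 2n - 2 for equal group sizes
```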

3.6.9 Kendall's Coefficient of Concordance (W)

Kendall's coefficient of concordance, represented by the symbol W, is an important non-parametric measure of relationship. It is used for determining the degree of association among several (k) sets of rankings of N objects or individuals. In other words, Kendall's coefficient of concordance for rankings (W) measures the agreement between three or more rankers as they rank a number of subjects according to a particular characteristic.

$$W = \frac{s}{\frac{1}{12} k^2 (N^3 - N)}$$

where $s = \sum_j (R_j - \bar{R}_j)^2$, k = number of sets of rankings, N = number of objects ranked, and $\frac{1}{12} k^2 (N^3 - N)$ = maximum possible sum of the squared deviations.

Kendall's coefficient of concordance is used to test whether there is any agreement among the respondents' rankings of their objectives in managing risk.
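Since scipy has no direct routine for Kendall's W, the statistic can be computed straight from the formula above. A minimal sketch, assuming a matrix of ranks with one row per ranker (the example ranks are invented):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for a (k rankers x N objects) matrix of ranks."""
    ranks = np.asarray(ranks, dtype=float)
    k, N = ranks.shape
    Rj = ranks.sum(axis=0)                 # total rank of each object
    s = ((Rj - Rj.mean()) ** 2).sum()      # s = sum over j of (R_j - mean R)^2
    return s / (k**2 * (N**3 - N) / 12)    # divide by the maximum possible s

# Hypothetical example: 3 respondents ranking 4 risk management objectives.
ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print(round(kendalls_w(ranks), 3))         # W near 1 => strong agreement
```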



3.6.10 Factor analysis

Factor analysis isolates the underlying factors that explain the data. It is an interdependence technique in which the complete set of interdependent relationships is examined; there is no specification of dependent variables, independent variables or causality. Factor analysis assumes that the rating data on different attributes can be reduced to a few important dimensions; this reduction is possible because the attributes are related to one another. The statistical algorithm deconstructs each rating (called a raw score) into its various components and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading. There are two approaches to factor analysis: principal component analysis, in which the total variance in the data is considered, and common factor analysis, in which only the common variance is considered.

Note that principal component analysis and common factor analysis differ in their conceptual underpinnings. The factors produced by principal component analysis are conceptualised as linear combinations of the variables, whereas the factors produced by common factor analysis are conceptualised as latent variables. Computationally, the only difference is that in common factor analysis the diagonal of the relationship matrix is replaced with communalities. The key statistics associated with factor analysis are Bartlett's test of sphericity, the correlation matrix, communalities (h²), eigenvalues, factor loadings, the factor loading plot, the factor matrix and scores, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, and the percentage of variance explained.



Factor analysis is employed to reduce the number of variables in a research problem to a smaller and more manageable number by combining related variables into factors. The primary decision in the first stage is how many factors to extract from the data; the rule of thumb normally used is that all factors with an eigenvalue of 1 or more should be extracted. Next, the rotated component matrix is viewed column-wise: for each column, the variables with high loadings are identified and a combined meaning for the factor is found. The sketch below illustrates this workflow.
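A hedged sketch of that workflow in Python using the third-party factor_analyzer package (an assumption: the thesis itself ran the analysis in SPSS, and the input file and column layout here are hypothetical):

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Hypothetical file of item ratings (respondents x attributes).
ratings = pd.read_csv("risk_practice_items.csv")

chi2, p = calculate_bartlett_sphericity(ratings)    # Bartlett's test of sphericity
kmo_per_item, kmo_overall = calculate_kmo(ratings)  # KMO sampling adequacy

# Stage 1: extract factors and keep those with eigenvalue > 1 (Kaiser criterion).
fa = FactorAnalyzer(rotation=None)
fa.fit(ratings)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1).sum())

# Stage 2: refit with varimax rotation and read the loadings column-wise.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(ratings)
print(pd.DataFrame(fa.loadings_, index=ratings.columns))
```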

3.6.11 Structural Equation Modeling (SEM)

SEM uses various types of models to depict relationships among observed variables, with the basic goal of providing a quantitative test of a hypothesised theoretical model. The goal of SEM is to determine the extent to which the theoretical model is supported by the sample data. SEM is a comprehensive statistical approach to testing hypotheses about relations among observed and latent variables (Hoyle, 1995), and a methodology for representing, estimating and testing a theoretical network of (mostly) linear relations between variables (Rigdon, 1998). SEM is one of the most powerful multivariate data analysis techniques and obtains more appropriate results than traditional methods such as regression; it is an applicable statistical tool for testing the relationships proposed in a parsimonious model (Saghei and Ghasemi, 2009). SEM has been shown to perform better than other multivariate techniques, including multiple regression, path analysis and factor analysis. Contrary to other statistical tools such as regression, SEM enables researchers to answer a set of interrelated research questions. The method is based on modeling the relationships among multiple independent and dependent constructs simultaneously. This simultaneous capability differs greatly from methods such as linear regression, LOGIT, ANOVA and MANOVA, which can analyse only one layer of linkages between dependent and independent variables at a time (Eddie W. L. Chang, 2001; David et al., 2000). There are two approaches to estimating SEM, namely covariance-based SEM and PLS path modeling. Different tools are available to perform covariance-based SEM, such as LISREL, AMOS and EQS; LISREL is sometimes used as a synonym for covariance-based SEM, which is one of the popular methods used in social science. Meanwhile, the limited usage of PLS in past decades can be explained to a considerable degree by the lack of progress regarding the software's usability and methodological options. This situation has now changed tremendously: researchers can currently choose between several alternative software solutions (Visual PLS, SmartPLS, PLS-Graph) which provide a clear improvement, especially in terms of user friendliness (Dirk Temme, 2006). PLS is typically recommended where the sample size is small, whereas covariance-based SEM is generally advisable with a minimum sample size of 200 (Marsh et al., 1998).
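As an illustration of the covariance-based branch, here is a minimal SEM sketch in Python using the open-source semopy package (an assumption: the thesis itself used Visual PLS, and the model specification, variable names and data file below are hypothetical):

```python
import pandas as pd
from semopy import Model

# Hypothetical specification: two latent variables, each measured by
# three observed survey items, with one structural path between them.
desc = """
RiskPractices =~ p1 + p2 + p3
Performance   =~ q1 + q2 + q3
Performance ~ RiskPractices
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical response file
model = Model(desc)
model.fit(data)
print(model.inspect())  # parameter estimates with standard errors and p-values
```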

3.7 Software used

The following software packages were used to perform the analysis:

1. SPSS 16

2. Visual PLS 4.05b

Visual PLS is a graphical user interface for LVPLS (Latent Variables Path Analysis with Partial Least Squares) which runs on the Windows platform and enables the analysis of raw data only. The results are offered as LVPLS output (a plain text file) as well as in HTML/Excel format. An HP ProBook 6450b computer was used.
