This manuscript has been reproduced from the microfilm master. UMI films
the text directly from the original or copy submitted. Thus, some thesis and
dissertation copies are in typewriter face, while others may be from any type of
computer printer.
In the unlikely event that the author did not send UMI a complete manuscript
and there are missing pages, these will be noted. Also, if unauthorized
copyright material had to be removed, a note will indicate the deletion.
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
A Thesis in
Public Administration
by
Helaiel Almutairi
Doctor of Philosophy
May 2001
Copyright 2001 by
Almutairi, Helaiel M. F.
UMI
UMI Microform 3014588
Copyright 2001 by Bell & Howell Information and Learning Company.
All rights reserved. This microform edition is protected against
unauthorized copying under Title 17, United States Code.
Date of Signature
Rupert F. Chisholm
Professor of Management
Thesis Advisor
Chair of Committee
Coordinator for Graduate Programs in Public
Administration
Mehdi Khosrowpour
Associate Professor of Information Systems
Robert F. Munzenrider
Associate Professor of Public Administration
Piar
of Public Policy and Administration
Abstract
In this study, the relationships in the DeLone and McLean model were tested. Six
Kuwaiti public organizations were randomly selected as the study's sample. A survey
methodology was chosen to collect data. A total of 363 usable questionnaires were
obtained. Factor analysis, correlation analysis, regression analysis, and path analysis were
used to analyze the study's model.
Initial findings of this study did not support the DeLone and McLean model as it
was originally proposed. The findings indicated that information systems success is a
three-variable model. This model proposes that Satisfaction affects Individual Impact
which, in turn, affects Organizational Impact. Also, Satisfaction directly affects
Organizational Impact. Based on the research findings, several implications for public
administration theory and management and future research are stated and proposed in the
conclusion.
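The three-variable structure described above can be estimated as two ordinary least-squares regressions, which is one common way to carry out a path analysis. The sketch below uses synthetic data; the variable names and coefficients are illustrative assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 363  # same sample size as the study; the data themselves are synthetic

# Synthetic scores (illustrative only, not the study's data)
satisfaction = rng.normal(size=n)
individual_impact = 0.6 * satisfaction + rng.normal(scale=0.8, size=n)
org_impact = (0.4 * satisfaction + 0.5 * individual_impact
              + rng.normal(scale=0.8, size=n))

def ols(y, X):
    """Return OLS coefficients (intercept first) for y regressed on X."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path 1: Satisfaction -> Individual Impact
b1 = ols(individual_impact, satisfaction.reshape(-1, 1))
# Path 2: Satisfaction and Individual Impact -> Organizational Impact
b2 = ols(org_impact, np.column_stack([satisfaction, individual_impact]))
print(b1, b2)
```

The slopes from the two regressions are the path coefficients: the direct effect of Satisfaction on Organizational Impact and the indirect effect routed through Individual Impact.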
TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS

Chapter 1
INTRODUCTION

Chapter 2
LITERATURE REVIEW
System Quality
Measures of System Quality
Information Quality
Measures of Information Quality
System Use
Measures of System Use
User Satisfaction
Measures of User Satisfaction
Individual Impact
Measures of Individual Impact
Organizational Impact

Chapter 3
RESEARCH METHODOLOGY
Model Formulation
Model to be Tested
Research Question and Hypothesis
Operationalization
System Quality and Information Quality
System Use
User Satisfaction
Individual Impact
Organizational Impact

Chapter 4
RESEARCH FINDINGS
Respondent Characteristics
Age and Education
Gender
Correlation Analysis
Factor Analysis
Factor Analysis of the Independent Variables (SQ, IQ, US, SU)
The System Quality Scale
Scales Reliabilities
Regression Analysis
First Regression Analysis: Regressing Individual Impact on Satisfaction
Path Analysis

Chapter 5

BIBLIOGRAPHY

APPENDIXES
Appendix A
Appendix B
Appendix C
Appendix D
Appendix E
Appendix F
Appendix G
Appendix H
Appendix I
LIST OF FIGURES
LIST OF TABLES
Eigenvalue of Factors
Eigenvalue of Factors
List of Abbreviations
IS: Information System
SQ: System Quality
IQ: Information Quality
SU: System Use
US: User Satisfaction
II: Individual Impact
OI: Organizational Impact
OB: Organizational Boundary
Chapter 1
INTRODUCTION
Information systems are widely used in public organizations. These systems are
particularly appropriate because public organizations are, by their nature, information
intensive. As such, they need information management systems to collect, store, and
retrieve large volumes of information. Consequently, many public organizations have
invested substantial resources in information management systems.
The use and investment in information management systems by public organizations
will continue to increase for two reasons. First, today, virtually everyone is using some type
of information system in their day-to-day activities. Public organizations cannot afford to be
left behind technologically, since many private citizens are using these systems to manage
their personal information and, at the same time, these consumers expect to use these same
technologies to communicate with the government agencies they interact with. Second,
virtually every effort to enhance the effectiveness and efficiency of public organizations
mandates the use of information systems to improve service delivery and reduce costs (e.g.,
reinventing government movement).
With the universal use of, and investment in, information systems, one would expect
there to be an extensive body of literature concerning research into the use of information
systems in the public sector. However, this is not the case. The first authors to articulate a
case for a separate line of research for the use of information systems in public organizations
were Bozeman and Bretschneider (1986). Bozeman and Bretschneider justified this separate
line of research based on the argument that MIS literature in the private sector overlooks the
effect of external environmental variables on information systems, which is a distinguishing
characteristic of public organizations. They proposed a new line of research to be called
Public Management Information Systems (PMIS) and, since the 1980s, there have been
many researchers who have contributed to this field.
As the field grew, however, the PMIS literature did not mature to meet the needs of
practice. One particular area that is in urgent need of further exploration is the evaluation of
information systems currently in place in public organizations.
With the substantial investment in information systems and the push to develop
performance-based public organizations, public sector managers are handicapped by a lack
of appropriate instruments to measure the success of their information systems and, in turn,
are unable to justify investment in existing and future information systems. This is
supported in Caudle, Gorr, and Newcomer (1991) and Swain (1995), whose investigations
of key issues facing public sector managers found that the need to be able to measure
effectiveness was ranked highly.
The current contribution to PMIS research in this area is limited to several theoretical
studies (Stevens & McGowan, 1985; Bozeman & Bretschneider, 1986; Newcomer, 1991).
Researchers in this area argued that external players must be taken into account when
evaluating information systems. Valid measures, however, are in short supply, if they exist
at all. The public information system management literature must mature more quickly to
afford public sector managers the necessary instruments to measure their information
systems.
Chapter 2
LITERATURE REVIEW
This chapter presents three bodies of literature. Section one presents studies that
have investigated the relationship between the external environment and information
systems within public organizations and the implications of this relationship on evaluating
information systems in the public sector. The common denominator of these studies is the
emphasis on the importance of external variables in evaluating information systems in the
public sector.
Section two presents studies that have evaluated information system success. Most
of these studies were conducted in the private sector. These studies investigated and
analyzed different dimensions of information system success and how these dimensions are
related to other organizational variables (e.g., task characteristics, race, user participation,
job satisfaction, etc.). Most of these studies focused on one or two dimensions of evaluating
information system success.
Section three presents studies that have attempted to develop comprehensive models
for evaluating information systems success by integrating the dimensions identified in the
studies in Section two.
is called the general environment, and includes all of the external actors that operate in the
public organization environment, such as economic variables, technology variables, and
demographic variables. In the third type of external environment (the remote environment)
the writers included intangible factors that a public organization manager deals with when
performing his functions, such as uncertainty, complexity, and threats.
The role the information system plays in the public sector organization is greatly
influenced by these external and internal subsystems. According to the researchers, when
public organization managers do strategic planning, they must take into account the
expectations of major outside and inside interests. One approach is to develop a database
that incorporates these expectations.
Another example is that in the operational environment there are legislative,
executive, judicial, and financial/budgetary controllers who impose certain authority and
financial standards on public organizations. For example, often, public organizations are
obliged to follow several legislative statutes (e.g., paper reduction acts) intended to improve
the internal operation of these organizations. In response to these standards, public
organization information structures should be able to generate relevant information for both
external reporting and internal control.
Stevens and McGowan (1985) identified several criteria to use to evaluate
information systems in public organizations: 1) accuracy and applicability of information
provided to managers and users, 2) timeliness of information, 3) User Satisfaction, and 4)
acceptance by managers and users. These researchers also proposed that these criteria could
be applied to the internal, operational, and control objectives, as well as the analysis of the
environmental influences that may directly affect internal organizational functions (p. 141).
In other words, these criteria could be used to assess the success of an information system
from the perspectives of both internal and external users.
Bozeman and Bretschneider (1986) also attempted to develop a model for the Public
Management Information System (PMIS). These researchers strongly believed that external
factors, or what they called the distal environment (e.g., political and economic authorities),
influenced the internal factors in an organization, or what they called the proximate
environment; which include variables that are related to the work context and the attitudes
and behaviors of individuals in an organization. According to the researchers, this strong
external influence on the internal factors is what makes information systems within public
organizations different from those in private organizations.
Consequently, the researchers argued that MIS performance measures should reflect
the unique characteristics of public organizations. According to the researchers in both the
public and private sectors, accountability is important; however, this concept in public
organizations has greater importance as a result of the nature of the distal environment. For
example, public organization managers are more accountable to individuals and groups
outside the organization. Consequently, measurements of performance of information
systems should reflect the system's ability to
...handle special queries that aggregate data in unanticipated ways, and
produce special reports and analysis. These non-routine forms of analysis
will have extremely short time frames, thus adding the dimension of
timeliness to the measurement of accountability (Bozeman & Bretschneider,
1986, p. 482).
Furthermore, the researchers added that timely responses to external requests for data
are concerns when evaluating information systems in public organizations at the
environmental level. According to the researchers, during budget cycles, external political
players such as executive branch agencies and legislatures require data that enable these
external actors to evaluate public organizations. These researchers argued that the degree
to which an organization responds to external data requests in a timely fashion with
appropriate and accurate data can have either positive or negative effects on MIS within the
organization (Bozeman & Bretschneider, 1986, p. 482).
In an empirical study, Bugler and Bretschneider (1993) studied the adoption of
information systems in public organizations and found that there is a relationship between
external relationships and the adoption of information systems. Organizations that have
closer external relationships are found to have a higher interest in adopting information
systems for the purpose of improving these relationships.
In another empirical study, Bretschneider (1990) tested the following hypotheses:
(1) Public Management Information System managers must contend with a greater level of
interdependency across organizational boundaries than do private MIS managers, and (2)
Public Management Information Systems planning is more concerned with extraorganizational linkages, while private MIS is more concerned with internal coordination.
After testing these propositions, Bretschneider (1990) concluded
The environment of PMIS differs from that of its private sector counterpart.
The difference is primarily in the form of greater interdependencies, leading,
at least in part, to increased accountability, procedural delays, and red tape.
Secondly, within these more constrained environments, traditional MIS
prescriptions are not automatically adopted. This suggests that the
environment of public organizations has led to adaptation of standard
management practices. In other words, the organizational environment
affects or tailors the nature of management action (p. 543).
Rocheleau (1999) reviewed several cases of information system implementation
projects in several public organizations and concluded that political factors are often the
"most crucial in determining how successful information technology is" (p. 23). Rocheleau
recommended that "Managers [of information systems] will often have to be involved in
exerting political influence and engage systems outside their direct control in order to assure
a successful outcome" (p. 31). Including outside representatives in the evaluation process is
one form of engaging outside systems.
In studying the adoption of microcomputers in both private and public sectors,
Bretschneider and Wittmer (1993) found that organizational environment (i.e., greater levels
of interdependence across organizational boundaries and higher levels of red tape) and task
environment (i.e., the nature and characteristics of tasks) play major roles in innovation and
adoption of information technology. Thus, these researchers strongly recommended taking
into account the nature of these environments in the management of information systems.
In an empirical study, Mansour and Watson (1980) tested the applicability of the
private sector computer-based information system models in the public sector. The model
tested was:

CBIS performance = f(Behavioral factors, Structural factors, Environmental factors)
Under each category, there were several specific variables. Under CBIS
Performance, there were applications performance, the degree of integration in the
database, the decision function provided by decision models, the organizational levels
served, and the interfaces between system elements. The Behavior category included
degree of top management involvement in systems development, the effectiveness of
relationships between computer specialists and other organizational personnel, the amount
of resistance to change by organizational personnel, and the quality and quantity of
computer specialists. Under the Structural category, there were the organizational
placement of the data processing function, the frequency with which educational programs
are offered to end users, the availability of interactive computing, and the length of time the
organization has operated a CBIS. Finally, the Environment category incorporated the
amount of competition the organization faces in the marketplace, the variety of products or
services offered by the organization, the frequency with which the organization offers new
products or services, the amount of customer requirements, and the amount of external
regulation.
The variables for each category were selected based on the outcome of a
comprehensive survey of the literature, which identified a list of variables in each category.
Second, a panel of IS experts reviewed the list. Variables were included in the final list
based on the weights that these experts assigned to the variables. The final list of variables
was tested on both private and public organizations, although the Environmental variables
were excluded in the public organization case. The researchers argued that this is due to
"lack of relevance [of the environmental variables] for governmental organizations, given
the way the environmental variables were defined. Governmental organizations function in
an environment that is much different from that faced by private business organizations" (p.
525). According to these researchers, even among government organizations, there are
considerable differences in the external environment. Consequently, Mansour and Watson
(1980) proposed that
In order to explore fully the impact of environmental variables on CBIS
performance in governmental organizations, it would be necessary to
categorize the different types of governmental organizations, develop
appropriate environmental variables for each category, and collect data from
organizations in the different categories (p. 526).
The researchers did not undertake this effort, but it is certainly a possible area for
future research.
Newcomer (1991) argued that users of information systems in public organizations
are not only the members of the organization, but also users that exist in the extended
environment such as legislative, central management and oversight agencies, program
clients, other governmental agencies, suppliers, and media. Thus, Newcomer argued that
these users should be taken into account when evaluating information systems.
Moreover, Newcomer proposed specific information system success indicators in
public organizations. These indicators included usefulness and reliability, ease of use,
error-resistant operations, authorized-use controls, protected system and operations, time
savings, system economic payoff or cost result, user acceptance, and contextual
considerations (which include, among other things, the unique nature of public-sector
access and accountability). Regarding the last indicator, Newcomer (1991) stated,
"Public-sector information system evaluation must consider how well information systems
meet numerous legislative requirements" (p. 383).
Bozeman and Straussman (1990) also suggested taking into account the external
environment in evaluating information systems in public organizations. The researchers
stated
Public officials' satisfaction (a surrogate for citizens' satisfaction) with the
final set of goods and services is one measure of PMIS...Such measures are
important indicators of technological success of PMIS (p. 123).
In summary, the studies reviewed in this section of the literature review indicate that
there is close interdependency between information systems in public organizations and the
account all six dimensions of information system success and the relationships among these
dimensions (Figure 1). Although DeLone and McLean (1992) argued that contingency
variables such as the environment of the organization being studied should be taken into
account, these variables were not a main dimension of their model.
In the private sector information system literature, DeLone and McLean's (1992)
taxonomy was described as being comprehensive enough to take into account all dimensions
of information systems success (Seddon, 1997; Ballantine et al., 1996). As such, in the
following subsections, DeLone and McLean's taxonomy will be used to organize findings of
studies that investigated information system success in private organizations. The review
will focus on two things: (1) identifying key variables and relationships among them, and
(2) how the variables were operationalized and measured.
System Quality
Studies examining system quality used features of the systems themselves to assess
quality. Some studies evaluated information systems by investigating how information
systems utilized organizational resources such as materials and financial resources. For
example, Kriebel and Raviv (1980,1982) used microeconomics to develop and test a
mathematical model for evaluating the efficiency of computer services supply in
organizations. They attempted to model the input resources required and the output
products or services provided by the information system department.
[Figure 1: System Quality, Information Quality, User Satisfaction, Individual Impact, Organizational Impact]
In the same vein, Conklin, Gotterer, and Rickman (1982) studied the impact of
background jobs on response times. In this study, the terminal response time was defined as
the interval from the time the operator depressed the transmit key until the response
character appeared on the screen, using a stopwatch to measure the response time. Since
Conklin and colleagues found that longer response time related to decreased user
satisfaction with the system, this study supports the importance of the user's perception of
system quality.
Using a different approach, a number of studies evaluated the quality of information
systems by examining the organizational effectiveness (i.e., how well the users of the system
are accomplishing their organizational goals) and identifying factors that should exist in an
organization in order to ensure a high quality information system. For example, several
researchers have examined the relationship between user participation in the development of
information systems and system quality (Glorfeld, 1994). Edstrom (1977) investigated the
relationship between users' influence in the different phases of the system development
process and information system success and found that there is a positive relationship
between users' influence in the initiation phase and the perceived success of the system.
The participants in this study were asked to rate the implemented information system on a
7-point Likert-type scale from complete failure to complete success.
Franz and Robey (1986) investigated the relationship between user involvement in
information system development and perceived system usefulness. The study was
conducted on 118 user managers from 34 companies. The researchers found that greater
user involvement in all information system development stages is related to greater
perceived usefulness (surrogate measure of system quality). In the same vein, Kaiser and
Srinivasan (1980) used the perceived worth of the information system as a measure of
system quality. The researchers found that there is a relationship between user involvement
and group process skills, such as the ability to adapt to change, communication skills, level
of conflict and agreement, and information technology effectiveness. The researchers stated
clearly, "user involvement with the activities of the system leads to higher measures of
perceived worth of the system" (p. 202).
In an experimental study, King and Rodriguez (1981) investigated the relationship
between participation and the user's perception of the worth of the system (surrogate
measure of system quality). The researchers found support for the relationship between
participation and perceived worth of the system, but that participation did not lead to an
increase in system usage.
Similarly, Tait and Vessey (1988) investigated the relationship between user
involvement and system success. System success was measured using an instrument
developed by Bailey and Pearson (1983), which included several items that measured
system quality. Although the researchers did not find any support for this relationship, they
found that system complexity, time, and financial resource constraints have strong direct and
indirect effects on system success through user involvement.
The interest in the relationship between user involvement and information success
led Torkzadeh and Doll (1994) to develop a measurement for user involvement. The
researchers assessed the short-range and long-range stability of the items that measure
perceived involvement, desired involvement, and involvement congruence using the
test-retest method. The researchers concluded that the instruments are internally consistent,
stable, and should be used with confidence in user involvement research without concern
about a reactivity effect.
Goslar (1986) investigated the usefulness of several decision support system features
(used as surrogate measures of system quality) for marketing problem solving. Features
examined in this study were interrogation (e.g., what-if analysis, impact analysis,
sensitivity analysis), computation (e.g., standard arithmetic calculation, complex
mathematical models, cost-benefit ratio), forecasting (e.g., moving average, regression,
polynomial fit), range analysis (e.g., normal distribution, uniform distribution, general
cumulative distribution), and simulation analysis. Goslar found that interrogation features,
computational features, and forecasting models were considered most useful by DSS users,
while range analysis features were considered the least useful.
Davis (1989), in several empirical studies, found that perceived usefulness (the
effects of the system on work) and perceived ease of use (whether it is easy to use and
interact with the system), which are two surrogates of system quality, are associated with
system acceptance (current and future usage), with correlation coefficients ranging from
.45 to .85, respectively. Davis also found that usefulness and perceived ease of use are
significantly correlated with each other (r = .69).
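Correlation coefficients of this kind can be computed directly from paired scale scores. A minimal sketch with hypothetical 7-point Likert ratings (the numbers are made up for illustration, not Davis's data):

```python
import numpy as np

# Hypothetical 7-point Likert ratings from ten respondents
usefulness = np.array([6, 5, 7, 4, 6, 5, 7, 3, 6, 5])
ease_of_use = np.array([5, 5, 6, 4, 6, 4, 7, 3, 5, 5])

# Pearson correlation between the two perception scales
r = np.corrcoef(usefulness, ease_of_use)[0, 1]
print(round(r, 2))
```

A coefficient near 1 indicates that respondents who rate the system as useful also tend to rate it as easy to use, which is the pattern Davis reports.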
In the context of testing the technological acceptance model, Karahanna and Straub
(1999) found that usefulness (the belief that an information system is useful in job
performance) is affected by perceived ease of use (the extent that an information system is
friendly).
Yuthas and Young (1998) conducted a study to test whether user satisfaction and
system usage are appropriate indicators of decision-making effectiveness (system quality).
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
19
System usage was defined as the extent and nature of use of the information system.
Satisfaction was defined as the extent of improvement in decision-making outcomes. Yuthas
and Young concluded that user satisfaction and system usage measures are not acceptable
alternatives to direct performance measurement.
Researchers have used many surrogate measures for system quality, ranging from
single-item scales to multi-item measurements. For example, Barki and Huff (1985) used a
single semantic differential item to measure overall user satisfaction regarding decision
support systems. Similarly, Edstrom (1977) measured the success of the information system
through one question by which users rated the implemented system. The multi-item
instruments measured system quality through perceived value or worth, usefulness, and
perceived ease of use. For example, Davis (1989) developed and validated two
measurements for perceived usefulness and perceived ease of use. Each instrument consists
of six items.
Bailey and Pearson (1983) developed and validated instruments to measure general
user satisfaction. Seven items from this instrument were assigned to measure system
quality. This instrument has been validated by several researchers (Ives, 1983; Baroudi &
Orlikowski, 1988; Iivari & Ervasti, 1994; Mahmood & Becker, 1985, 1986) and has become
a standardized measure in the MIS field.
Doll and Torkzadeh (1988) developed an instrument to measure end user computing
satisfaction (EUCS). The instrument merged items that measure the quality of information
(content, format, and timeliness) with items that measure the quality of the system
(accuracy, ease of use). In the EUCS, there are 13 items, four o f which were designed to
measure system quality (ease of use and accuracy). Torkzadeh and Doll (1991) and
Hendrickson, Glorfeld, and Cronan (1994) validated this instrument. Hendrickson and
colleagues conducted their study on public organizations and found that the EUCS measure
is valid and stable over time.
Information Quality
system/organization (p. 406). The second model is the I/O model, which presents the information
system from the viewpoint of the user. The third is the C/O model, which determines the
internal structure and action of an information system. This study is relevant to the P/O model,
which takes into account the viewpoints of external users of the information system, such as
interest groups. However, the researchers did not provide the means to measure the effect of
these external players on the quality of information systems. Iivari and Koskela (1987)
justify this by stating:

It is difficult to provide effectiveness criteria (schemas) of wide applicability.
Due to the diversity of potential effects, the principle of many points of view
should be applied to their identification reflecting the various interests
involved and taking into account not only the economic effects...but also
various social, technical, and managerial effects (pp. 414-415).
In the same vein, Mahmood and Medewitz (1985) investigated the relationship
between the selection of a DSS design method and its ultimate success. DSS success was
measured through DSS usage, user satisfaction, and user attitude and perception criteria.
Data were collected from managers, intermediaries, and designers. Among the most highly
rated DSS successes were several items related to information quality, such as accuracy
of DSS reports, useful output reports, and better types of output reports. Consequently, this
study notes the connection between system design and information quality.
Blaylock and Rees (1984) tested the relationship between a decision-maker's
cognitive style and the output of the information system: information. The researchers used
Larcker and Lessig's (1980) questionnaire, which measures usefulness of information by
examining two components: importance of information and usefulness of information. The
first term is defined as "the quality that causes a particular information set to acquire
relevance to the decision maker" (p. 123). Usefulness is defined as the information quality
"that allows a decision maker to utilize the [information] set as an input for problem solution"
(p. 123). The researchers found a strong correlation between cognitive style and usefulness
of information.
In an exploratory field study of five senior executives, Jones and McLeod (1986)
examined where and how senior executives get their decision-making information. The
study's findings indicated that executives obtain a great deal of information from both the
environment and informal information sources, and that formal computer-based
information systems do not seem to provide much information directly to the executive.
These researchers suggested that executive information systems be conceptualized for
design "in the broadest terms possible to include internal and external information sources,
personal and impersonal sources, and a broad spectrum of media (meetings, computer and
non-computer reports, telephone, etc.) that vary in information richness" (p. 244). This
study showed how external sources of information are important and related to the
quality of information used by an organization's members.
As with the preceding dimension, researchers have used many surrogate measures for
information quality. For example, Bailey and Pearson (1983) developed a user satisfaction
instrument, which included nine items that measure information quality: accuracy,
timeliness, precision, reliability, currency, completeness, format of the output, volume of
output, and relevancy. This instrument has been validated by several researchers (Ives et al.,
1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994; Mahmood & Becker,
1985/1986), and has become a standardized measure in the MIS field.
Gallagher's (1974) multi-item measurement assessed information quality by utilizing
two measures of perceived value. The first was an estimated dollar value in response to the
following question:

Assume that your company plans to eliminate all data processing and to
obtain this report from another firm on an annual subscription basis. What is
the maximum amount you would recommend paying for this report for you?
(Gallagher, 1974, p. 48)
The second was a set of fifteen 7-point semantic differential bipolar adjective pairs
on which the respondent was asked to indicate his opinion of the report. The 7-point scale
ranged from -3 (extremely unfavorable) to +3 (extremely favorable). The score on this
measure of perceived value is the average of the responses to all 15 adjective pairs.
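Gallagher's scoring rule is simply the mean of the fifteen -3..+3 ratings. A minimal sketch in Python follows; the function name and the example ratings are our own invention for illustration, not part of Gallagher's instrument:

```python
# Score Gallagher's (1974) semantic differential measure of perceived value:
# each of 15 bipolar adjective pairs is rated from -3 (extremely unfavorable)
# to +3 (extremely favorable); the score is the mean of the 15 responses.

def perceived_value(responses):
    """Return the mean of the fifteen 7-point (-3..+3) ratings."""
    if len(responses) != 15:
        raise ValueError("expected responses to all 15 adjective pairs")
    if any(r < -3 or r > 3 for r in responses):
        raise ValueError("each response must lie between -3 and +3")
    return sum(responses) / len(responses)

# A hypothetical respondent's ratings of one report:
ratings = [2, 3, 1, 2, 2, 0, 1, 3, 2, 1, 2, 2, 1, 0, 2]
print(round(perceived_value(ratings), 2))  # → 1.6
```

Because every pair contributes equally, a single extreme rating can shift the score by at most 6/15 of a point in either direction.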
Doll and Torkzadeh's (1988) EUCS instrument included eight items that assessed
information quality. The eight items measured information quality through its content,
format, and timeliness. Each item was scored on a 5-point Likert-type scale.
System Use
by the MIS function to users of the system, the use of the information system and its effect
on user organizational processes and performance, and the effect of the information system
on organizational performance. Hamilton and Chervany argued that within each type of
objective there is interdependence among the objectives. In other words, each objective
affects the objective that follows. The linkage between the objectives of the two
perspectives, according to the writers, takes place when the organizational performance
objective (effectiveness perspective) affects the environment, which, in turn, affects the
resource investment objective (efficiency perspective). The writers argued that system
usage could be a measure of information system effectiveness because effects on
organizational objectives and performance do not follow directly and immediately, but
rather result from use of the information system (Hamilton & Chervany, 1981, p. 58).
Hamilton and Chervany (1981) made another interesting recommendation to extend
the evaluation of the information system process to include not only the primary user of the
information system but also other people involved in the achievement of information system
objectives, both from the efficiency and effectiveness perspectives.
A number of empirical studies have been conducted to investigate the relationship
between information system usage and other organizational variables. For example, King
and Rodriguez (1978) investigated the relationship between user involvement and system
usage. Their experimental study was conducted with managers enrolled in a part-time MBA
program who had completed virtually all of the program requirements. The researchers did
not find a relationship between user involvement and system usage.
In the same vein, Kim and Lee (1986) investigated the relationship between user
participation and degree of MIS usage. They proposed a four-dimensional model for this
2. the more satisfied users are with an IS, the greater the use; and
3. the less their satisfaction, the lower the level of use.
Karahanna and Straub (1999) studied 100 e-mail system users and found that
system use is affected by the medium's usefulness, which is in turn affected by perceptions of
ease of use. LISREL 7 was used to analyze the relationships between the variables. The
goodness-of-fit index for the model of these relationships was .96. In this study, usefulness
is defined as the belief that an information system is useful in the job, while ease of use
is defined as the extent to which an information system is friendly.
Baroudi et al. (1986) gave empirical evidence that system usage and user satisfaction
are linked. The researchers noted that "user information satisfaction is an attitude toward the
information system, while system usage is a behavior" (p. 234). The study provided
evidence that user satisfaction is related to greater system usage (r = .28), although the study
did not identify the direction of this relationship:

Satisfaction -> Usage
versus
Usage -> Satisfaction
Ein-Dor, Segev, & Steinfeld, 1981), to self-reported perception of past usage (e.g., Lucas,
1975c).
Kim and Lee's (1986) study developed a measure of usage that took into account the
voluntary aspect of usage. Kim and Lee's measurements took into account the
frequency of use and the voluntariness of use. Each was measured on a single-item,
7-point Likert-type scale from 1 (much less frequent use) to 7 (very frequent use). The scale
associated with voluntariness was anchored by 1 (completely mandatory use) and 7
(completely voluntary use). To compute the system usage index, the responses to the two
items are multiplied (thus, the range is from 1 to 49) and the square root of the product is
taken for the purpose of normalizing the scale.
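Kim and Lee's two-item index can be sketched as follows; the function name and the example responses are ours, invented for illustration:

```python
import math

def usage_index(frequency, voluntariness):
    """Kim and Lee's (1986) system usage index: multiply the two 7-point
    responses (the product ranges from 1 to 49) and take the square root,
    which normalizes the index back onto a 1..7 scale."""
    for r in (frequency, voluntariness):
        if not 1 <= r <= 7:
            raise ValueError("each item is scored on a 1..7 scale")
    return math.sqrt(frequency * voluntariness)

# A hypothetical respondent reporting very frequent (7) but
# completely mandatory (1) use:
print(round(usage_index(7, 1), 2))  # → 2.65
```

The geometric-mean form means a high score requires use that is both frequent and voluntary; either item at its floor pulls the index down sharply.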
Building on Igbaria (1992) and Igbaria, Pavri, and Huff (1989), Anakwe,
Anandarajan, and Igbaria (1998) measured usage through four indicators: actual
daily use of the computer, frequency of use, number of packages used by participants, and
number of tasks the system is used for. Their study was conducted on nine organizations in
Nigeria.
Doll and Torkzadeh (1998) developed a multidimensional measure of how
extensively information technology is utilized in an organizational context for decision
support, work integration, and customer service functions. The instrument consists of 74
items, 62 of which measured System Use, while 12 items measured the impact of IT on
work. Using a pilot sample of 89 usable interviews, the two researchers validated the
instrument.
User Satisfaction
User satisfaction is the measure of the successful interaction between the information
system itself and its users (Glorfeld, 1994). DeLone and McLean (1992) argued that user
satisfaction has been widely used for the following reasons:
First, satisfaction has a high degree of face validity. It is hard to deny the
success of a system which its users say they like. Second, the development
of the Bailey and Pearson instrument and its derivatives has provided a
reliable tool for measuring satisfaction and for making comparisons among
studies. The third reason for the appeal of satisfaction as a success measure
is that most of the other measures are so poor; that is, they are either conceptually
weak or empirically difficult to obtain (p. 69).
Many researchers have studied user satisfaction and how it is related to other
variables. For example, Mahmood and Becker (1985/1986) tested the relationship between
end users' satisfaction and the organizational maturity of the information system. User satisfaction
was measured using Pearson's instrument. The organizational maturity of the information
system was measured using Nolan's stage model. Nolan's model consists of six stages
(initiation, contagion, control, integration, administration, and maturity), and under each stage
there are several variables that distinguish it. For example, among the variables that
distinguish the maturity stage are "tracks rate of sales growth" in the area of data processing
expenditure and "application integration mirroring information flows" in the area of the
applications portfolio. The researchers found a weak direct correlation between
variables in the maturity stage and the level of user satisfaction.
Ginzberg (1981) investigated the relationship between users' expectations and user
satisfaction. A single item measuring overall satisfaction with the information
system was used to measure user satisfaction. The study's findings indicated that users who maintained
realistic expectations prior to implementation were more satisfied with the system and used
the system more than users whose pre-implementation expectations were unrealistic.
Lu and Wang (1997) tested the relationship between user satisfaction, management
styles, and user participation. The study was conducted on IS managers who work in
companies in Taiwan. The researchers found that user participation is not always
significantly correlated with user satisfaction. Regarding management styles, the
researchers found that management style should be adapted to the IS stage. At the initiation
stage, a people-oriented management style has a connection with user involvement, but not
with user satisfaction. At the development stage, both people-oriented and task-oriented styles
are related to user participation and user satisfaction. At the maturity stage, management
styles have no connection to user involvement, but have a significant correlation with user
satisfaction.
Woodroof and Kasper (1998) integrated three organizational behavior theories of
motivation (equity, expectancy, and needs) with user satisfaction. Their argument is based
on the notion that the satisfaction construct is different from the dissatisfaction construct and
that the process of an information system is not like the outcome of an information system.
Accordingly, the writers proposed including four variables in the DeLone and McLean
model: process user dissatisfaction, outcome user dissatisfaction, process user satisfaction,
and outcome user satisfaction. These four variables, according to the writers, separately and
jointly affect usage and satisfaction in the DeLone and McLean model.
Baroudi and colleagues (1986) tested the relationship between user satisfaction and
system usage. User satisfaction was measured through the use of the Bailey and Pearson
instrument. The researchers found a positive relationship between the two variables (r = .28);
however, the causal ordering of this relationship could not be identified.
Khalil and Elkordy (1999) investigated the relationship between user satisfaction and
system usage using a sample of Egyptian banks. To measure user satisfaction, the
researchers used the short version of the user satisfaction instrument originally developed
by Bailey and Pearson (1983). The researchers tested the reliability of this instrument. The
overall reliability coefficient of the instrument was 0.82, which means that the total score of the
instrument is reliable as a measure of the level of user satisfaction. Moreover, the reliability
coefficients for each of the basic elements in the instrument were calculated using factor
analysis. The reliability coefficients were: relationship with IS staff and systems (0.81),
quality of systems output (0.64), and users' understanding of systems and users'
involvement in systems development (0.67). Regarding the relationship between user
satisfaction and usage, the researchers found a positive correlation between the two concepts
(r = .36).
While some studies did identify a positive relationship between usage and user
satisfaction, several studies did not find such a relationship (e.g., Schewe, 1976; Cheney &
Dickson, 1982; Srinivasan, 1985). Kim, Suh, and Lee (1998) argued that contingency
variables (task variability and task analyzability) have an effect on usage and a moderating
effect on the relationship between usage and user satisfaction. An empirical study
conducted on several companies in Korea was used to give evidence for the effect of the
contingency variables. In this study, user satisfaction was measured through six items
adapted from Maish (1979), Ginzberg (1981), Sanders (1984), and Lee and Kim (1992).
The Cronbach's alpha for the six items was 0.874.
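Reliability coefficients like the 0.874 reported here are conventionally computed as Cronbach's alpha. A minimal sketch of the standard formula follows; the function and the three-item, five-respondent data set are invented for illustration, not taken from any of the studies above:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns.
    items: list of k lists, each holding one item's scores across n respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical responses of five users to three satisfaction items:
items = [[4, 5, 3, 4, 2],
         [5, 5, 3, 4, 2],
         [4, 4, 2, 5, 1]]
print(round(cronbach_alpha(items), 3))  # → 0.941
```

Alpha rises when the items covary strongly relative to their individual variances, which is why it is read as evidence that the items tap a single underlying construct.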
In an exploratory study, Ang and Soh (1997) examined the relationships between
user satisfaction, job satisfaction, and users' computer background. The researchers found
that user information satisfaction (UIS) provides a sound indication of job satisfaction;
however, there was no relationship between UIS and computer background.
Palvia and Palvia (1999) investigated the variables that influence user satisfaction in
small businesses. The variables the researchers tested were gender, age, race, education, and
computing skills. Among these, gender and age were the only variables that had a
significant association with user satisfaction.
Individual Impact
Individual impact refers to the effect of information on the behavior of the recipient
of the information (DeLone & McLean, 1992). DeLone and McLean indicated that the
performance of users of an information system and individual impact are closely related.
Improving performance indicates that the information system has a positive impact.
Millman and Hartwick (1987) found that office automation has led to positive effects
in the workplace. The 75 managers in the sample reported that automation improved
their effectiveness, as well as the effectiveness of their organization. Similarly,
Bikson, Stasz, and Mankin (1985) studied the impact of automation on individuals' work.
These researchers found that the majority of people employed in automated offices felt that
information systems enriched their work.
Marcolin, Munro, and Campbell (1997) investigated the relationships among job
characteristics (feedback, autonomy, task identity, and skill variety), individual traits
(computer anxiety and locus of control), individual beliefs surrounding technology usage
(perceived relative advantage and perceived ease of use), and user ability to employ
information systems. The findings indicated that skill variety, computer anxiety, and
relative advantage of information systems were important in identifying users with higher
and lower abilities. The regression coefficients of these variables ranged from .10 to -.47.
Igbaria and Tan (1997) investigated the implications and consequences of IT
acceptance by examining the relationship between IT acceptance and its impact on
individual users. The research model involved three components: user satisfaction, system
usage, and individual impact. The findings indicated that user satisfaction is an important
factor affecting system usage, and that user satisfaction has the strongest direct effect on
individual impact.
Millman and Hartwick (1987) used a questionnaire to assess the impact of office
automation on middle management. Managers were asked whether office automation had
increased, decreased, or had no effect on 15 different aspects of these managers' jobs and
work (e.g., importance of the job, amount of work required on the job, accuracy demanded on the
job, skill needed on the job, how interesting the job is).
Doll and Torkzadeh (1998) used 12 items as part of their multidimensional measure
to test the impact of IT on task productivity, task innovation, customer satisfaction, and
management control. Torkzadeh and Doll (1999) further validated the same 12 items for the
purpose of developing an instrument for measuring the impact of information technology on
work. The reliability scores were 0.93, 0.95, 0.96, and 0.93 for task productivity, task
innovation, customer satisfaction, and management control, respectively. The overall
reliability for the instrument was 0.92.
Organizational Impact
overall organizational expenses and how they related to information systems. The
researchers found that companies that lease information systems tended to have higher
organizational expenses.
Mahmood and Soon (1991) attempted to develop a comprehensive model to measure
the effects of information systems on organizations by integrating several internal
organizational variables and external variables. The variables included in the model were:
new entrants, entry barriers, buyers and consumers, competitive rivalry, suppliers, search
cost and switching costs, products and services, economics of production, internal
organizational efficiency, inter-organizational efficiency, and pricing.
The type of variables on which each researcher focused influenced how the
impact of information systems on an organization was measured. Researchers who focused
on internal variables used financial measures such as return on investment (Vasarhelyi,
1981) and cost/benefit analysis (Johnston & Vitale, 1988). Other researchers included
non-financial measures. For example, Jenster (1987) examined productivity, innovations, and
product quality.
Mahmood and Soon (1991) developed a measure to assess the impact of information
systems on several of the strategic variables mentioned in the previous section. The
instrument included 50 items beginning with the phrase, To what extent do you think
information technology..., measured on a 5-point Likert-type scale from 1 (no extent) to 5
(very great extent). Sabherwal (1999) developed and tested a measure of the impact of
1. System Quality
2. Information Quality
3. Information Use
4. User Satisfaction
5. Individual Impact
6. Organizational Impact.
Next, the researchers developed a model for IS success. DeLone and McLean
argued that this model "recognizes success as a process construct which must include both
temporal and causal influence in determining I/S success" (p. 83).
As illustrated in Figure 1, DeLone and McLean arranged the six information system
success categories (dimensions) listed above to suggest two things: (1) the interdependence
between these dimensions; and (2) the time sequence or causal ordering of these dimensions.
The DeLone/McLean model proposes that SYSTEM QUALITY and INFORMATION
QUALITY singularly and jointly affect both SYSTEM USE and USER SATISFACTION.
Additionally, the amount of SYSTEM USE can affect the degree of USER
SATISFACTION - positively or negatively - and the degree of USER SATISFACTION
also affects SYSTEM USE. SYSTEM USE and USER SATISFACTION are direct
antecedents of INDIVIDUAL IMPACT. Lastly, this IMPACT on individual performance
should eventually have some ORGANIZATIONAL IMPACT (DeLone & McLean, 1992, p.
82, 87).
To use this model effectively, the researchers suggested two things. First, individual
measures from the information system success categories (dimensions) should be
systematically combined to create a comprehensive measurement instrument. Second,
contingency variables (such as the structure, size, and environment of the organization)
should be taken into account when selecting an information system measure of success.
DeLone and McLean called for further development and validation of their model.
This call motivated many researchers to test, expand, and modify DeLone and McLean's
model. In fact, most of the studies that attempted to develop a comprehensive or
partial model of information system success were based on DeLone and McLean's model.
Myers, Kappelman, and Prybutok (1997) note that DeLone and McLean's model is
the most comprehensive IS assessment model offered by existing IS research. However, as
noted earlier in the chapter, the relationship between an IS and its external environment is
not conceptually included in the model.
Seddon and Kiew (1994) tested part of DeLone and McLean's model. The
researchers proposed causal paths among the six variables in the model, as illustrated in
Figure 2. The researchers tested the relationships among the four variables in the box after
replacing use with usefulness and adding a new variable called user involvement. They
found support for the relationships between the specified variables. The correlation analysis
in Seddon and Kiew's study indicated that the four variables are directly associated, with
Pearson correlation coefficients ranging from .5468 to .7302.
[Figure 2. The causal paths proposed by Seddon and Kiew (1994) among the DeLone and McLean variables: System Quality, Information Quality, Use, User Satisfaction, Individual Impact, and Organizational Impact.]
Glorfeld (1994) represented the relationships among the variables in DeLone and
McLean's model as:

IT effectiveness = f(SQ, IQ, SU, II, OI)
SU = f(SQ, IQ, US)
US = f(SQ, IQ, SU)
II = f(SU, US)
OI = f(II)

After combining User Satisfaction, System Quality, and Information Quality into one
variable called satisfaction, Glorfeld tested the model (see Figure 3). His findings
supported the relationships among the variables except the relationship between individual
impact and organizational impact; there was a significant negative relationship between
these two variables. Glorfeld argued that this could be due to the small sample size or to the
composition of the sample.
Garrity and Sanders (1998) extended the user satisfaction variable in DeLone and
McLean's model, proposing that task support satisfaction, quality of work life satisfaction,
interface satisfaction, and decision-making satisfaction are the constructs that underlie any
measurement of user satisfaction.
Seddon (1997) modified and extended DeLone and McLeans model by discussing
more deeply the meaning of information system use and adding four new variables
(Expectations, Consequences, Perceived Usefulness, and Net Benefits to Society) to the
model.
[Figure 3. Glorfeld's (1994) tested model, relating Satisfaction, System Usage, Individual Impact, Organizational Impact, and IT Effectiveness.]
[Figure: Seddon's (1997) respecified model, combining a partial behavioral model of IS use (expectations about the net benefits of future IS use, informed by observation, personal experience, and reports from others, lead to IS use) with the IS success model (System Quality and Information Quality lead to Perceived Usefulness and User Satisfaction, yielding net benefits to individuals, organizations, and society); feedback provides a partial basis for revised expectations. In the original figure, rectangular boxes denote the IS success model, rounded boxes the partial behavioral model of IS use, solid-line arrows causality, and the dotted-line arrow a non-causal influence.]
Ballantine and colleagues proposed a model to overcome the perceived weaknesses in DeLone and McLean's
model. They called their model the 3-D Model of Information Systems Success. In this model, they took into
account external factors, based on their belief that "Information gained from systems is
more likely to be used in the wider context of supporting value chain activities and more
open management than for purely internal consumption" (Ballantine et al., p. 10).
The 3-D model includes three levels and three filters between the three levels. First,
there is the development level, which includes variables such as user involvement and
system type. Next is the deployment level, which includes variables such as user
satisfaction, user skills, and task impact. Last is the delivery level, which includes variables
such as use of output, benefits management, and support of a champion. Between these levels
are three filters that affect the three levels. The implementation filter is between the
development and deployment levels. The integration filter is between the deployment and
delivery levels. Finally, there is the environment filter, which comes after the delivery level.
The researchers argued that information system success is influenced by factors that
exist in the environment, such as competitor movement and political, social, and economic
factors. These factors are not in the control of the organization. The researchers explained
that the environmental filter was added to the model because it has implications for the
measurement of success. For example, the ability of an information system to reach its
organizational goals could be hindered by factors outside the organization.
No follow-up conceptual or empirical studies have been conducted to extend or
validate the 3-D Model of information systems success. This may be due to the complexity
of this model, which makes empirical testing very difficult.
The literature review chapter contained three sections. The following is a summary
and assessment of the three sections.
The first section of the literature review dealt with the relationship between the
external environment and information systems within public organizations, including several
studies that addressed the implications of organizational dependency on the external
environment for the evaluation of information systems in public organizations (see Bozeman
& Straussman, 1990; Newcomer, 1991). Studies that investigated the relationships between
information systems in public organizations and the external environment have concluded
that there is very close interdependency between information systems in public
organizations and the external environment (see Stevens & McGowan, 1985; Bozeman &
Bretschneider, 1986).
Several researchers have empirically tested the interdependency between information
systems in public organizations and the external environment. The findings of these studies
indicate that information systems in public organizations are more dependent on the external
environment than those in private organizations (see Bretschneider & Wittmer, 1993;
Bretschneider, 1990). Some researchers argue that failure to recognize this interdependency
could lead to catastrophic results (Bretschneider & Wittmer, 1993).
Mansour and Watson's (1980) empirical study tested the applicability of private
sector IS models to the public sector. These researchers concluded that several external
variables in private sector IS models (e.g., amount of competition, variety of products
offered by the organization, the frequency with which the organization offers new products,
etc.) are not applicable to public sector organizations because public organizations function
in a different environment than the one faced by private sector organizations.
Some researchers have investigated the implications of this dependency on the
external environment for the evaluation of information systems in public organizations.
These writers argued that evaluation of information systems in public organizations must be
extended to include those actors in the external environment who can influence these
systems (see Bozeman & Straussman, 1990; Newcomer, 1991).
Many measures have been offered to evaluate information systems in public organizations.
These measures include accuracy, applicability, timeliness, user satisfaction, and the
attitudes of both managers and users (Stevens & McGowan, 1985); timely and accurate response to
external requests (Bozeman and Bretschneider, 1986); usefulness and reliability, ease of use,
time saving, user acceptance, meeting legislative requirements (Newcomer, 1991); and
public official satisfaction (Bozeman & Straussman, 1990).
DeLone and McLean's (1992) taxonomy was used to organize the studies in this
section. Their taxonomy includes six dimensions of information systems success (system
quality, information quality, system use, user satisfaction, individual impact, and
organizational impact). Under each of these dimensions, many studies were discussed in
terms of what variables were included and how the dimension was measured.
In terms of the system quality variable, several studies found a relationship between
system quality and user involvement. Many researchers have developed instruments to
measure system quality, some of which are in wide use because they have been shown
to be reliable and valid across several studies (see Doll & Torkzadeh,
1988; Bailey & Pearson, 1983). Moreover, system acceptance was found to relate to ease of
use and usefulness. User satisfaction and system usage were not found to be indicators of
system quality.
Many variables were found to relate to the Information Quality variable (e.g., user
participation in the system design, cognitive style, and external sources). As with the
preceding variable, information quality was measured using both single-item and multi-item
scales.
Under the system use variable, individual perceptions, user involvement, situational
variables, ease of use of the system, degree of social influence exerted by supervisors,
perceptions of the social presence of the system, and user satisfaction were found to relate to
system use (e.g., King & Rodriguez, 1978; Kim & Lee, 1986; Lucas, 1975b; Baroudi et al.,
1986). System use was measured through actual daily use of the computer, frequency of
use, the number of software applications used by the participants, the number of tasks the
system is used for (Igbaria, 1992; Anakwe, Anandarajan, & Igbaria, 1998), how many times
the system was used and willingness of use (Kim & Lee, 1986), and how extensively
information technology is utilized in an organizational context for decision support, work
integration, and customer service functions (Doll & Torkzadeh, 1998).
The third section of the literature review dealt with studies that have attempted to
integrate the studies in the second section into comprehensive models of information system
success. Most of the models in this part were based on DeLone and McLean's model.
Several researchers added new variables (Seddon, 1997; Seddon & Kiew, 1994),
combined existing variables (Glorfeld, 1994), or changed the causal paths (Seddon & Kiew,
1994; Glorfeld, 1994) in the DeLone and McLean model. Some studies identified
conflicting results regarding relationships among DeLone and McLean's six variables. For
example, Glorfeld (1994) found a positive relationship between user satisfaction and
individual impact. Teo and Wong (1998), however, did not find a relationship between the
same variables. Several researchers argued that this may be due to the small sample size or
the composition of the sample (Glorfeld, 1994) and to differences in the measurement of the
concepts involved (Teo & Wong, 1998). All of these studies were conducted in the private
sector. Furthermore, no study has attempted to validate the whole DeLone
and McLean model in its original form.
Most of the models in this part either did not include the external environment as a core
dimension of information system success or included external environment variables
that are not relevant to public organizations. Even those models that incorporated external
variables were theoretically complex and difficult to test empirically. As such, no study has
attempted to build on these models or empirically test them.
From the preceding three bodies of literature, it appears that there is a need to
develop a comprehensive model for assessing information systems in public organizations.
Unfortunately, there have been few studies (empirical or conceptual) conducted on public
organizations that could be used as a base for building a comprehensive model for assessing
information systems in the public sector. However, models developed to assess information
system effectiveness in private organizations can be modified for use in public
organizations. Of the available concepts, DeLone and McLean's (1992) model is the most
appropriate basic building block for developing a comprehensive model for assessing
information systems in the public sector, since it is considered the most comprehensive
information system assessment model available in the information system literature
(Myers, Kappelman, & Prybutok, 1997). As such, DeLone and McLean's model
has gained wide acceptance among information system researchers who attempted to test
and validate the usefulness of different parts of this model (e.g., Seddon & Kiew, 1994;
Glorfeld, 1994; Igbaria & Tan, 1997; Seddon, 1997; Teo & Wong, 1998; Garrity & Sanders,
1998). This suggests that DeLone and McLean's model has gained strong theoretical and
some empirical support as a unified model for assessing information system success in the
information system literature.
The present study will use DeLone and McLean's model as the foundation for
building a comprehensive model for evaluating information systems in the public sector.
The first logical step in the process of developing a comprehensive model of
information system success is to test the applicability of DeLone and McLean's
model in the public sector. This step is essential because studies that have
investigated information systems in the public sector, including those that have
specifically investigated the evaluation of information systems within public
organizations, have tested neither the applicability of the DeLone and McLean model
as a whole nor any of the six variables and relationships proposed in this model.
Thus, testing the applicability of DeLone and McLean's model before using it as the
foundation for building a comprehensive model for evaluating information systems in
the public sector is essential. Once the model has been tested and validated for use,
the external environment variables can be added to the model.
Chapter 3
RESEARCH METHODOLOGY
This chapter presents the research methodology in eight sections. The first section
describes the steps that were taken to develop a comprehensive model for evaluating
information systems in public organizations, the model tested in this study, and the research
question and hypothesis. The second section describes how measurements used in this study
were operationalized. The third section describes the population and sample. The fourth
section describes how the data were collected. The fifth section describes the translation and
pilot study. The sixth section describes the data screening and first round of reliability
analysis. The seventh section presents an overview of the data analysis plan. The eighth
section presents the limitations of this study.
Model Formulation
The empirical findings concerning the relationships among the variables of DeLone
and McLean's model further support the inclusion of these relationships in the model
proposed by this study as a second step.
The third step is the incorporation of three frames in the model. One is called the
general environment, the second is called task environment, and the third is called
organizational boundary. The concept for these frames was adopted from several studies in
the organization theory literature, the information system literature, and the public
management information system literature.
In the organization theory literature, Thompson (1967) elaborated on the concept of
the external environment, asserting that there is a part of the external environment that is
most relevant to an organization, called the task environment. Thompson (p. 27) defines the
task environment as "those parts of the environment which are relevant or potentially
relevant to goal setting and goal attainment." This includes, for example, suppliers of raw
materials that represent the input for the organization, customers that buy the organization's
products or services, regulatory agencies, and other organizations that directly affect the
operations of the organization. Thus, Thompson indicates that conceptually there are two
types of external environment: the task environment and the residual general environment.
[Figure 5 appears here: the proposed model, showing System Quality, Information Quality,
System Use, User Satisfaction, Individual Impact, and Organizational Impact within the
Organizational Boundary frame, enclosed by the General Environment.]
Hall (1972) and Miles (1980) also proposed two types of external environment. Hall
mentioned two types of environmental conditions: general conditions (those conditions of
concern to all organizations, such as the economy and demographic changes) and specific
environmental conditions (specific environmental influences on the organization, such as
other organizations with which it interacts or particular individuals who are crucial to it).
Hall noted that interactions in the specific environment are direct, while the general
environment is not a concrete entity in interaction, but rather comprises conditions that
must be grappled with (p. 298).
Miles (1980) agreed with the concepts of general environment and specific
environment. Miles includes those conditions that are important for whole classes of
organizations (e.g., technological conditions, legal conditions, political conditions, etc.) in
the general environment, asserting that these conditions are potentially relevant for an
organization but do not involve day-to-day interaction with the organization. Miles explains
that the general environment has an impact on both the organization and its specific
environment. On the other hand, Miles notes that conditions in the specific environment
have immediate relevance to the organization and direct interaction with it. This is
equivalent to Thompson's concept of the task environment.
In the information system literature, Ives and Davis (1980) proposed a model for IS
research using two information system environments: the external environment and the
organizational environment. Ives and Davis defined external environment as including
legal, social, political, cultural, economic, educational, resource, and industry/trade
considerations. Variables in the external environment can affect information systems within
organizations through the resources and constraints that these variables can impose or offer.
For example, legislative budgetary requirements could impose constraints on the resources
available for IS development.
According to Ives and Davis, the organizational environment is marked by the
organizational goals, tasks, structure, volatility, and management philosophy/style. These
variables can affect IS development and management. For example, the centralization or
decentralization of the organizational structure can affect how information is developed
and managed.
In the public management information system literature, Bozeman and Bretschneider
(1986) proposed frames similar to those proposed by Ives and Davis (1980). Bozeman and
Bretschneider (1986) maintain that the frame for public management information system
research consists of three levels: society, organization, and individual. The society level
includes environmental variables that define resources and constraints on MIS; the
organizational level includes variables within the organizational context that affect
information systems, such as size, structure, time frame, organizational resources, and
organizational maturity; and the individual context reflects characteristics of individual
actors within an organization, including cognitive style, level of satisfaction with MIS, and
other such personal and demographic information (pp. 475-478).
Bozeman and Bretschneider (1986) further elaborated on their frame for public
management information systems by combining the previous variables into four models of
publicness and proposing two types of environment. The environmental variables were
included in two models (the economic authority model and the political authority model), which
include the unique economic and political characteristics of public organizations. The
organizational variables were included in a third model (work context model) and the
individual variables were included in a fourth model (personnel and personnel system
model).
Bozeman and Bretschneider (1986) contend that the four models are located in two
types of environment: the economic authority model and the political authority model are
located in the distal environment, and the work context model and the personnel and
personnel system model are located in the proximate environment. Bozeman and
Bretschneider (pp. 480-481) stated:
[T]he models are interrelated because they stand in hierarchical relation. The
Political Authority and Economic Authority models comprise the distal
environment and introduce constraints which are broad and sweeping (e.g.,
market failures, public interest) and these remote factors of the distal
environment can be viewed as directly influencing the "proximate"
environment (i.e., the Work Context Model), which in turn directly
influences the attitudes and behaviors of individuals in organizations (e.g.,
the Personal Model).
Thus, we have explained that there are three types of environment in which an
information system exists. The first environment includes variables that exist within the
organization, the second environment includes external variables that have immediate
relevance and direct interactions with the organization, and the third environment includes
external variables that have potential relevance and do not have direct interaction with the
organization. Although different researchers have different names for these types of
environments, the different terms ultimately mean the same types of environment.1
1 It is interesting to note that some researchers have used the term organizational environment to refer to
conditions that exist within the organization. From a systems theory viewpoint, the term organizational
environment denotes everything that exists outside the organizational boundaries. This situation has led to
some confusion regarding the variables that exist within this type of environment. In order to prevent further
confusion regarding the variables that exist within each frame, the inner frame in the model in Figure 5 will be
called the organizational boundary.
Thus, the three environments will be incorporated in DeLone and McLean's model
as three frames within each other. The organizational boundary frame includes all internal
variables that exist within the organizational boundaries. The middle frame, task
environment, includes those external variables that have immediate relevance and direct
interactions with the organization. The outer frame, general environment, includes those
external variables that have potential relevance and do not have direct interaction with the
organization. These titles were chosen because of their familiarity in the literature.
As step four in the process of developing the Seven-Dimension model, a seventh
dimension was added to DeLone and McLean's model. Based on the first section of the
literature review, this seventh dimension is called External Environment Satisfaction (EES).
EES denotes the satisfaction of external stakeholders that use an information system or its
outputs and who could directly or indirectly influence the information system. This influence
could occur, for example, through the many constraints that the external environment can
impose on public organizations (e.g., legal and budgetary constraints).
Figure 5 represents the final product after completing all the steps. The causal paths
among the seven dimensions in the model are represented mathematically as:
OI = f(IM, OB, EES)           (3.A.1)
IM = f(SU, US, OB, EES)       (3.A.2)
SU = f(SQ, IQ, US, OB, EES)   (3.A.3)
US = f(SQ, IQ, SU, OB, EES)   (3.A.4)
SQ = f(OB, EES)               (3.A.5)
IQ = f(OB, EES)               (3.A.6)
where SQ, IQ, SU, US, IM, and OI represent System Quality, Information Quality, System
Use, User Satisfaction, Individual Impact, and Organizational Impact. EES represents
External Environment Satisfaction. OB represents the effects of factors within the
Organizational Boundary that affect the previous six variables, such as size of the
organization and control of the information system (centralized vs. decentralized). The
model operationalization section presents the definitions and how these variables are
measured.
Equation 3.A.1 suggests that Organizational Impact is determined directly by
Individual Impact and indirectly by the rest of the variables in the model through their effect
on Individual Impact; moreover, factors within the Organizational Boundary and External
Environment Satisfaction determine Organizational Impact.
Equation 3.A.2 suggests that Individual Impact is determined directly by System Use
and User Satisfaction; and indirectly by System Quality and Information Quality through
affecting System Use and User Satisfaction. Moreover, factors within the Organizational
Boundary and External Environment Satisfaction determine Individual Impact.
Equation 3.A.3 suggests that System Quality, Information Quality, User Satisfaction,
factors within the Organizational Boundary, and External Environment Satisfaction
determine System Use.
Equation 3.A.4 suggests that System Quality, Information Quality, System Use,
External Environment Satisfaction, and factors within the Organizational Boundary
determine User Satisfaction.
Equation 3.A.5 suggests that factors within the Organizational Boundary and
External Environment Satisfaction determine System Quality.
Equation 3.A.6 suggests that factors within the Organizational Boundary and
External Environment Satisfaction determine Information Quality.
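To make the causal structure of equations 3.A.1 through 3.A.6 concrete, the following sketch (an illustration added here, not part of the original study) encodes each dimension's direct predictors and derives indirect influence by reachability, confirming the claim that Organizational Impact is determined directly by Individual Impact and indirectly by every other variable in the model.

```python
# Direct predictors implied by equations 3.A.1-3.A.6.
# OB (Organizational Boundary factors) and EES (External Environment
# Satisfaction) are exogenous: they predict but are never predicted.
predictors = {
    "OI": {"IM", "OB", "EES"},
    "IM": {"SU", "US", "OB", "EES"},
    "SU": {"SQ", "IQ", "US", "OB", "EES"},
    "US": {"SQ", "IQ", "SU", "OB", "EES"},
    "SQ": {"OB", "EES"},
    "IQ": {"OB", "EES"},
}

def influences(target: str) -> set:
    """All variables that affect `target` directly or indirectly."""
    seen = set()
    frontier = [target]
    while frontier:
        node = frontier.pop()
        for p in predictors.get(node, set()):
            if p not in seen:
                seen.add(p)
                frontier.append(p)
    return seen
```

For example, `influences("OI")` returns the other six dimensions plus OB and EES, matching the prose description of equation 3.A.1.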
Model to be Tested
model. The incorporation of the external environment satisfaction dimension will be left for
future research.
Figure 6 illustrates the model to be empirically tested in this study. The relationships
in this model are:
OI = f(SQ, IQ, SU, US, IM)   (3.B.1)
IM = f(SU, US)               (3.B.2)
SU = f(SQ, IQ, US)           (3.B.3)
US = f(SQ, IQ, SU)           (3.B.4)
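As an illustration of how relationships of this functional form can be estimated, the sketch below fits equation 3.B.2, IM = f(SU, US), by ordinary least squares on simulated data. The data-generating coefficients (0.4 and 0.5) and the noise level are hypothetical; only the sample size of 363 echoes the study's usable questionnaires.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 363  # same size as the study's usable sample (data itself is simulated)

# Simulated standardized scores for System Use and User Satisfaction.
SU = rng.normal(size=n)
US = rng.normal(size=n)
# Individual Impact generated from an assumed linear relationship plus noise.
IM = 0.4 * SU + 0.5 * US + rng.normal(scale=0.3, size=n)

# Ordinary least squares on [intercept, SU, US].
X = np.column_stack([np.ones(n), SU, US])
beta, *_ = np.linalg.lstsq(X, IM, rcond=None)
```

With a sample of this size the recovered coefficients land close to the generating values, which is the sense in which a survey of 363 respondents can support estimating these paths.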
This study is being conducted to answer the following question: To what extent is
DeLone and McLean's (1992) model useful in evaluating information systems in the public
sector? In order to answer this research question, an empirical study will be conducted to
test the relationships among the variables in DeLone and McLean's model. In the proposed
study, the following null hypothesis will be tested:
H0:
[Figure 6 appears here: the model to be tested, comprising System Quality, Information
Quality, System Use, User Satisfaction, Individual Impact, and Organizational Impact.]
Model Operationalization
In order to empirically test DeLone and McLean's model, all variables in the model
must be operationalized. Existing measures of information system success that have
acceptable psychometric qualities will be used. Questionnaires have been developed for this
purpose, taking into account the necessary cultural factors. Appendix A includes the end
users' questionnaire, and Appendix B includes the questionnaire for supervisors, heads of
departments, and general managers.
Items from Bailey and Pearson (1983) will be used to operationalize System Quality
and Information Quality. System Quality is concerned with whether or not there are bugs in
the system, the consistency of the user interface, ease of use, response rates in interactive
systems, documentation, and, sometimes, the quality and maintainability of the program code
(Seddon & Kiew, 1994, p. 101). Seven items were used to operationalize the System Quality
variable.
Information Quality is concerned with such issues as timeliness, accuracy, relevance,
and format of information generated by an information system (Seddon & Kiew, 1994,
p. 101). Nine items were used to operationalize the Information Quality dimension. Bailey
and Pearson's instrument is widely accepted, has been tested for reliability and validity by
several researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994;
Mahmood & Becker, 1985/1986; Li, 1997; Khalil & Elkorody, 1997), and has become a
standard instrument in the MIS field.
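In practice, operationalizing a multi-item dimension reduces to computing a composite score per respondent. The sketch below, using hypothetical 1-5 Likert responses rather than the study's data, averages the seven System Quality items; the nine Information Quality items would be scored the same way.

```python
import numpy as np

# Hypothetical 1-5 Likert responses: rows are respondents,
# columns are the seven System Quality items.
responses = np.array([
    [5, 4, 5, 4, 5, 3, 4],
    [2, 3, 2, 3, 2, 2, 3],
])

# Composite System Quality score = mean of a respondent's item scores.
sq_scores = responses.mean(axis=1)
```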
System Use
System usage examines the actual use of information systems, the extent of
information system use in users' jobs, and the number of information system packages used
in users' jobs. Igbaria, Pavri, and Huff (1989) developed a four-item scale to measure
this variable, and the instrument has been proven reliable and valid (Igbaria, 1990, 1992;
Anakwe, Anandarajan, & Igbaria, 1998). This scale will be used to operationalize System
Use in this study.
User Satisfaction
Individual Impact
operationalize the Individual Impact variable. Torkzadeh and Doll (1999) further validated
the same 12 items for the purpose of developing an instrument for measuring the impact of
information technology on work. The instrument measures the impact on four work aspects
(task productivity, task innovation, customer satisfaction, and management control). The
reliability scores were 0.93, 0.95, 0.96, and 0.93 for task productivity, task innovation,
customer satisfaction, and management control, respectively. The overall reliability for the
instrument was 0.92.
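The reliability figures quoted above are Cronbach's alpha coefficients. For reference, alpha can be computed from an item-score matrix with the standard formula below; the function is a generic sketch shown on hypothetical data, not code from the study.

```python
import numpy as np

def cronbach_alpha(items) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    # Sum of individual item variances versus variance of the total score.
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```

Perfectly parallel items yield alpha of exactly 1.0; values in the 0.92 to 0.96 range reported above indicate highly consistent items.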
Organizational Impact
The main focus of this study is to assess the usefulness of DeLone and McLean's
model in evaluating information systems in public organizations. Thus, the unit of analysis
is the information system function in a public organization. Information system function is
defined as all IS groups and departments within the organization (Saunders & Jones, 1991,
p. 2).
The study was conducted in the State of Kuwait. The State of Kuwait is a
constitutional monarchy that has been ruled by the al-Sabah family since the mid-18th
century. It is located in the Middle East, at the northern end of the Arabian (Persian) Gulf,
bordered by Iraq on the north and Saudi Arabia on the south. The national language is
Arabic, and Islam is the state religion. The State of Kuwait has a total area of 17,818 square
km (6,880 square miles), and its population was estimated at 2,031,000 in the year
2000 (Anonymous, 2001). The public sector in the State of Kuwait consists of ministries,
partially owned independent organizations, and fully owned independent organizations.
In Kuwait, the term public organization is usually used to denote the 18
government ministries. Thus, the population in the context of this study will be the
information system end users in the 18 ministries in the State of Kuwait.
The researcher used a simple random (lottery) method to choose 6 of the 18 Kuwaiti
ministries for participation in this study: Ministry of Interior, Ministry of Communications,
Ministry of Treasury, Ministry of Electricity and Water, Ministry of Social Work, and
Ministry of Justice. After identifying the six ministries, the researcher obtained the
necessary approvals to conduct the study. This included getting the approval of the Office
making. After reviewing the modified management questionnaire, the two professors did
not suggest additional changes.
Once the questionnaires were finalized, they were pre-tested in a pilot study in two
Kuwaiti ministries (Ministry of Electricity and Ministry of Communications). Formal
approval to conduct the pilot study in the two ministries was obtained as part of the overall
approval to conduct the study.
The researcher distributed 20 questionnaires in each of the two ministries, 10
questionnaires for end users and 10 questionnaires for managers. To encourage individuals
to participate in the pilot study, the researcher visited the two ministries and met with the
subjects and their superiors. In these meetings, the researcher explained the goal of the
study and reviewed the questions with them. Moreover, the researcher encouraged the
participants to comment on and discuss any part of the questionnaire they might consider to
be ambiguous. The participants were also encouraged to write down any comments about
any questions that might be unclear.
A total of 35 questionnaires were collected. The researcher reviewed each section of
the questionnaires, including both wording and content. The responses to each question
were evaluated. Overall, the pilot study participants indicated that the questionnaires were
understandable. Most of the participants agreed that the items "the information system helps
me create new ideas," "the information system helps me come up with new ideas," and "the
information system helps me try out innovative ideas" have the same meaning, and they
suggested combining them into one item. Other participants suggested a few minor wording
changes and clarifications. For example, the term "information system" was not clear enough for many
of the participants, so the term "computer" and its Arabic translation were added to the
questionnaires.
After discussing these suggestions with several professors, the researcher modified
the questionnaires. Once the changes were complete, the researcher informally discussed
the second versions of the questionnaires with several participants. These participants
confirmed that the new questionnaires were clearer than the first version and did not suggest
any further changes.
Data Collection Method
The study uses two surveys to collect data. The end user survey collected data about
the Information Quality, System Quality, System Usage, User Satisfaction, and Individual
Impact variables. The management survey collected data about the Organizational Impact
variable and was distributed to employees who are supervisors, department heads, and/or
general managers. The logic behind designing two surveys is that end users interact with
information systems on a daily basis, so they have the necessary knowledge to evaluate
variables that are directly related to information systems and their productivity.
Management, on the other hand, should have knowledge about the overall performance
of the organization, so employees at this level should be able to evaluate whether
information systems have a positive or negative effect on overall organizational
performance.
Once the instruments and procedures were approved, the questionnaires were
administered to the employees in their workplace. The administration of the questionnaires
started with an initial contact with the managers of the government units to explain the goal
and significance of the research, the importance of their participation, and to set a date and
time for the participants to complete the questionnaires. Next, the researcher went to these
organizations and administered the instruments. At the beginning of each survey
administration session, the researcher thanked the participants for their interest and
cooperation and briefly introduced the goal and significance of the research and the
importance of their participation. Subjects were told that the study represents a doctoral
dissertation attempting to develop a comprehensive model for evaluating information
systems in the public sector. Furthermore, the researcher emphasized that participation in
the survey was completely voluntary and advised the subjects that all responses would be
kept confidential. Finally, participants were instructed that there were no right or wrong
answers and that they needed only to record their first perceptions after reading each question.
The participants were given all the time they needed to complete the survey.
Envelopes were also provided in which the participants could place their completed
questionnaires.
Some organizations agreed to return the completed forms to the researcher on the
same day that they were distributed; other organizations requested a week to collect the
completed questionnaires because their employees were extremely busy. Thus, in order to
give the participants all the time they needed to complete the questionnaires and ensure that
the answers on the questionnaires reflect the real perceptions of the participants, the
researcher agreed to come back on the selected day.
In the organizations requiring more than one day to process the survey, the
researcher used a three-step follow-up. First, when the researcher delivered the surveys, he
reminded the participants that his telephone number and e-mail address were on the cover
letter (Appendices H and I), and that they should not hesitate to call him anytime if they had
any questions regarding the survey. Second, three days after the survey was distributed,
researcher visited these organizations for the purpose of answering any questions that the
participants might have and to remind them about the day the questionnaires were to be
collected. Third, on the day before the questionnaires were to be collected, the researcher
telephoned the participants. Those he reached on their office phones were given the
opportunity to ask any questions they might have and were reminded when the
questionnaires would be collected. In addition, the researcher asked them to share this
message with any of their colleagues who could not be reached by phone.
A total of 500 questionnaires were distributed, 350 questionnaires to end users and
150 to management employees. A total of 390 questionnaires (78%) were returned (see
Table 1); 298 were end-user questionnaires and 92 were management questionnaires. This
makes the response rate 85% for end users and 61% for management. According to several
public organization managers and Kuwaiti researchers, the lower response rate for the
management questionnaire might be due to the summer holidays and the preoccupation of
managers with the end of the budget cycle, which coincided with the distribution of the
questionnaires.
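The response rates quoted above are simple ratios of returned to distributed questionnaires; as a quick arithmetic check (an illustrative sketch, not part of the original analysis):

```python
# Returned and distributed counts reported in the text
returned = {"end_users": (298, 350), "management": (92, 150)}

# Per-group response rates, rounded to whole percentages
rates = {group: round(100 * collected / distributed)
         for group, (collected, distributed) in returned.items()}

overall = round(100 * (298 + 92) / 500)   # -> 78
```

This reproduces the 85% (end users), 61% (management), and 78% (overall) figures reported above.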
Of the 390 questionnaires, 27 were eliminated because the participants left one or
more of the questions that assessed Information Quality, System Quality, User Satisfaction,
System Usage, Individual Impact, or Organizational Impact unanswered. This makes the
total number of usable questionnaires 363 (73%), of which 278 (79%) were from end users
and 85 (57%) from management.
Table 1
Questionnaire Distribution and Response for Six Ministries

                             Total    Top management                     End users                          Total
Ministry                     dist.    Dist.  Coll.  Usable  Usable %    Dist.  Coll.  Usable  Usable %    Coll.  Usable
Ministry of Communications    83       25     15     13      52%         58     53     49      84.48%      68     62
Ministry of Electricity       83       25     13     12      48%         58     42     39      67.24%      55     51
Ministry of Finance           83       25     14     11      44%         58     48     43      74.13%      62     54
Ministry of Interior          85       25     12     13      52%         60     46     44      73.33%      58     57
Ministry of Justice           83       25     18     17      68%         58     55     51      87.93%      73     68
Ministry of Social Work       83       25     20     19      76%         58     54     52      89.66%      74     71
Total                        500      150     92     85      57%        350    298    278     79%         390    363
One of the primary goals of data analysis is to verify that the data are accurately
coded and to ensure that the responses are valid (Tabachnick & Fidell, 1989). In this
research, several steps were taken to maximize the reliability of the data. First, the returned
questionnaires were checked for completeness. Incomplete questionnaires (one or more
unanswered questions) were dropped from the data set. The exceptions were the
demographic questions, since answers to these questions do not affect the tests involving
any key variables in the study. All acceptable questionnaires were assigned an identification
number.
Second, the data were coded and entered into a computer data file using the SPSS
for Windows (release 10) software package. The researcher did the data entry himself. Third, the
raw data were checked for entry errors through the use of the FREQUENCY procedure in
SPSS. When errors were found, the data were compared to the original surveys and
codebook, and mistakes were corrected.
Fourth, after checking for errors and cleaning the data, the reliability of the
instruments was checked. Reliability refers to "the accuracy or precision of a measuring
instrument" (Kerlinger, 1986, p. 405). In other words, reliability is the extent to which an
experiment, test, or any measurement procedure yields the same results on repeated trials.
Cronbach's alpha is the most popular method used to test reliability. The value of alpha
ranges from 0 to 1; as alpha approaches 1, the reliability of the instrument increases.
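The study computed these reliabilities in SPSS; the statistic itself can be sketched as follows (a Python illustration on hypothetical data, not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)        # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# three hypothetical respondents answering two perfectly consistent items
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])   # -> 1.0
```

Perfectly consistent items yield an alpha of 1; uncorrelated items drive it toward 0.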
Table 2
Reliability of Measurement Instruments
Instrument                Reliability (Cronbach's alpha)
System Quality            .91
Information Quality       .85
System Usage              .75
User Satisfaction         .95
Individual Impact         .95
Organizational Impact     .89
Once the errors were identified and the data cleaned, the COMPUTE procedure in
SPSS was used to create the composite variables. The composite variables were created by
averaging the responses included for each variable.
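In other words, each composite is a row-wise mean over that variable's items. A sketch of the equivalent computation (hypothetical item scores, not the study's data):

```python
import numpy as np

def composite(item_scores):
    """Average each respondent's item scores into a single composite score."""
    return np.asarray(item_scores, dtype=float).mean(axis=1)

# two hypothetical respondents answering three items of one scale
scores = composite([[4, 5, 3],
                    [2, 2, 2]])   # -> [4.0, 2.0]
```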
The population for this study includes public organizations in the State of Kuwait. A
sample of 500 participants was drawn from this population using a simple random approach.
Subjects in the sample came from six of the eighteen ministries in the State of Kuwait. The
six ministries were also selected using a simple random approach. Various personal and
professional characteristics were represented in the sample in terms of gender, age,
education, and organizational level.
Because of the nature of the sample, the methods used to select the study's
participants and ministries, and the demographic variety these produced, the findings of this
study should be generalizable to the study's population, that is, government ministries in the
State of Kuwait. Because the focus is limited to Kuwait's public sector, the external validity
of the study's findings is limited to ministries in the State of Kuwait. External validity refers
to the generalizability of a research finding to other populations, settings, treatment
arrangements, and measurement arrangements (Kidder & Judd, 1986). Consequently, the
findings of this study may not be generalizable to other types of organizations in the State of
Kuwait, or to other countries.
Another limitation of this study is related to the data collection method. The survey
questionnaire was the only instrument used to collect data from the study's subjects. Thus, a
large part of the reliability of the collected data depends on the respondents' attention to
detail when answering the questions. Although the researcher took reasonable precautions to
protect the reliability of the data (e.g., meeting with the respondents before the
questionnaires were distributed, giving the respondents all the time they needed to complete
the questions, and obtaining the advance approval of upper management), it is impossible to
guarantee the reliability of the data when a survey questionnaire is the only instrument used
to collect it.
This study used a five-step plan to analyze the data. First, the study used descriptive
statistics to show the distribution of responses. Second, a correlation analysis was used to
examine the strength and direction of the associations among the variables in the study.
Third, a factor analysis was used to check the unity and number of concepts and variables in
the study. After completing factor analysis, a second round of correlation analysis and
reliability analysis were conducted. Fourth, variables found to be significant in the second
round of correlation analysis were included in a regression analysis to examine the strength
of the relationships among the dependent variables and independent variables in the study.
Fifth, variables found to be significant in the regression analysis were included in the path
analysis. Path analysis is used to uncover the direct and indirect effects of the independent
variables on the dependent variables and on other independent variables. As such, it
represents a confirmatory tool for the regression analysis conducted in the fourth step.
Chapter 4
RESEARCH FINDINGS
The findings are presented in eight sections. Section one describes the participants'
demographic profile. Section two presents the findings of the first round of correlation
analysis. Section three presents the findings of the factor analysis. Section four presents the
findings of the second round of reliability analysis of the measurements used in this study.
Section five presents the findings of the second round of correlation analysis. Section six
presents the findings of the regression analysis. Section seven presents the findings of the
path analysis. Section eight compares the findings of the regression analysis with the
findings of the path analysis.
Respondent Characteristics
This section presents the first step in the data analysis plan: descriptive statistics of
the study's sample. Tables 3 and 4 present a profile of the survey respondents with regard
to gender, age, education, government career, length of service in the current organization,
and knowledge of information systems. The characteristics of both employees and
managers are discussed for each category.
Most of the employee respondents (92%) were 20-39 years old. The manager
respondents were older, with 84% between 30 and 50 years of age. Moreover, the manager
respondents had higher education levels than the employee respondents. Over half of the
managers (56%) had a bachelor's degree or higher, while only 26% of the employee
respondents had the same education level.
The age and education levels indicate that, as one goes higher in the government
hierarchy, both age and education levels increase. However, considering that 92% of the
employees and 75% of the managers are between 20 and 39 years of age, the age
differences between employees and managers in Kuwaiti public organizations are not very
significant. The employee respondents' education levels are high, with almost a third of the
respondents (32%) having completed high school and a technical institution (see Table 3),
indicating that these employees have at least one year of post-secondary technical training.
Gender
More than half (56%) of the employee respondents were female, while almost
three-quarters (71%) of the manager respondents were male.
Table 3
Respondent Profile: Personal Characteristics
Characteristic   Value                  Employees            Managers
                                        Freq.     %          Freq.     %
Age              <20 years              2         0.7        0         0
                 20-29 years            149       52.1       —         10.5
                 30-39 years            114       39.9       49        64.5
                 40-49 years            18        6.3        15        19.7
                 >50 years              —         1.0        —         5.3
Gender           Male                   122       43.3       54        71.1
                 Female                 160       56.7       22        28.9
Education        <High school           58        20.4       —         4.0
                 High school            61        21.5       10        13.3
                 Technical institute    92        32.4       20        26.7
                 Bachelor               71        25.0       37        49.3
                 Masters                —         —          —         5.3
                 Doctorate              —         0.7        —         1.3
The manager respondents had served in their current organizations longer than the
employee respondents (69% of manager respondents had been with their current
organizations for 6-20 years, while 86% of employee respondents had been with their
current organization for 1-15 years). The distributions of government career length and
years of service in the current organization for both employees and managers indicate that
the individuals in the sample have enough experience with the information systems in their
organizations to evaluate the different dimensions of those systems.
Over half of the employee respondents (51%) had from 1 to 5 years' experience
using information systems (see Table 4). The second largest group (24%) had from 6 to 10
years' experience. Several employees (7%) had 16 to 20 years' experience, and a few (2%)
had 21-25 years' experience. Only 16% of the employees had less than one year's
experience with information systems. The manager respondents had a similar distribution of
experience levels. Almost two-thirds (66%) of the manager respondents had between 1 and
10 years' experience using information systems (Table 4).
These statistics indicate that both the employees and the managers in the sample
have sufficient background in information systems to enable them to evaluate the different
dimensions of information systems.
Table 4
Respondent Profile: Professional Characteristics
Characteristic           Value           Employees            Managers
                                         Freq.     %          Freq.     %
Length of government     <1 year         23        8.6        2         2.9
career                   1-5 years       103       38.4       —         4.3
                         6-10 years      84        31.3       12        17.1
                         11-15 years     38        14.2       23        32.9
                         16-20 years     14        5.2        17        24.9
                         21-25 years     —         2.2        —         10.0
                         >26 years       —         —          —         8.6
Experience with          <1 year         39        15.9       —         9.7
information systems      1-5 years       126       51.2       22        35.5
                         6-10 years      58        23.6       19        30.6
                         11-15 years     16        6.5        —         12.9
                         16-20 years     —         2.4        —         8.1
                         21-25 years     —         0.4        —         1.6
                         >26 years       —         —          —         1.6
Years of service with    <1 year         25        8.8        —         2.7
current organization     1-5 years       132       46.5       13        17.3
                         6-10 years      76        26.8       24        32.0
                         11-15 years     37        13.0       19        25.3
                         16-20 years     —         3.2        —         12.0
                         21-25 years     —         1.8        —         5.3
                         >26 years       —         —          —         5.3
Correlation Analysis
This section presents the second step in the data analysis plan: examining the
strength and direction of the associations among the variables in the study. Pearson's
correlation coefficient was used for this purpose. Table 5 shows the results of the
correlation analysis.
Table 5
Pearson's Correlation Matrix of the Six Variables in the Study

                           1       2       3       4       5       6
1. Information quality     1.00
2. Organizational impact   .62**   1.00
3. System usage            .52**   .57**   1.00
4. Individual impact       .71**   .80**   .57**   1.00
5. User satisfaction       .76**   .74**   .50**   .67**   1.00
6. System quality          .77**   .81**   .63**   .83**   .77**   1.00

N = 287
** p < 0.01
The Pearson correlation coefficients in Table 5 clearly indicate that there are strong
direct associations among the variables in the study. The largest correlation coefficients are
between System Quality and Individual Impact (r = 0.83) and between System Quality and
Organizational Impact (r = 0.81). The smallest correlation coefficient is between User
Satisfaction and System Usage (r = 0.50). The rest of the correlation coefficients fall
between .50 and .80. All correlation coefficients are statistically significant at the .01 level.
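A correlation matrix such as Table 5 can be computed from the composite variables in a single call; as a sketch (the toy matrix below stands in for the study's data, which is not reproduced here):

```python
import numpy as np

def pearson_matrix(data):
    """Pairwise Pearson correlations for a (cases x variables) array."""
    return np.corrcoef(np.asarray(data, dtype=float), rowvar=False)

# toy data: variable 2 is twice variable 1, variable 3 is its negative
toy = [[1.0, 2.0, -1.0],
       [2.0, 4.0, -2.0],
       [3.0, 6.0, -3.0],
       [4.0, 8.0, -4.0]]
r = pearson_matrix(toy)   # r[0, 1] -> 1.0, r[0, 2] -> -1.0
```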
The high Pearson coefficients in Table 5 are in line with other studies in the
literature. For example, Seddon and Kiew (1994) tested part of DeLone and McLean's
model. The researchers proposed the causal paths among the six variables of the model as
illustrated in Figure 2. They tested the relationships between the four variables in the box
after replacing "use" with "usefulness" and adding a new variable called "user
involvement." The correlation analysis in Seddon and Kiew's study indicated that the four
variables are directly associated, with Pearson correlation coefficients ranging from .55 to
.739. Seddon and Kiew (1994, p. 109) commented on these high correlations:

    Such high correlation of multi-factor measures and overall satisfaction
    measures are not uncommon. Bailey and Pearson (1983, p. 536) report a
    correlation of .79 between their normalized importance-weighted measure of
    User Satisfaction (based on up to 39 questions) and their single-scale measure
    of overall ... satisfaction.

Likewise, in several empirical studies and in the context of developing new
measures of usefulness and perceived ease of use, Davis (1989) found that usage is directly
associated with usefulness and perceived ease of use, with Pearson correlation coefficients
ranging from .45 to .85.
Nevertheless, the high Pearson correlation coefficients in Table 5 raise serious
concern that there may be two problems among the variables. First, there may be two or
more variables that measure the same concept. In other words, there is concern regarding
the unity and number of concepts and variables in this study. The second problem is
multicollinearity, which exists when there are high correlations among the independent
variables.
Consequently, these two problems have to be investigated before going further in the
data analysis. The first problem will be investigated using factor analysis. The second
problem will be investigated using the variance inflation factor (VIF), a measure
specifically designed to test for multicollinearity.
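For reference, VIF_j = 1/(1 - R²_j), where R²_j comes from regressing predictor j on the remaining predictors. The computation can be sketched as follows (an illustration, not the study's implementation):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of a (cases x predictors) matrix."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        # regress predictor j on an intercept plus the other predictors
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        factors.append(1.0 / (1.0 - r2))
    return factors

# two orthogonal (uncorrelated) predictors: VIF of 1 for both
v = vif([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
```

Values near 1 indicate no inflation; values above roughly 10 are a common warning sign of multicollinearity.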
Factor Analysis
This section presents the third step in the data analysis plan: checking the unity and
number of concepts and variables in the study. The results are reported here for the factor
analysis that investigated whether multiple variables measured the same concept. In factor
analysis, this is accomplished by examining the loading of each item on the factors
produced by the analysis. In the literature, there is no agreement on the loading cutoff for
including an item under a specific factor. For example, Churchill (1987) argued for a cutoff
of 0.35 or 0.30. On the other hand, Rencher (1998) argued that a cutoff of 0.30 is
unacceptable. Hair, Anderson, Tatham, and Black (1992) argued that loadings greater than
0.50 are considered very significant. Because this is the first study conducted to evaluate
information systems in public organizations and there are no established measures in this
sector, this study uses 0.60 as the cutoff for item loadings and an eigenvalue cutoff of 1.
In factor analysis, when a group of items loads highly on one factor, those items are
considered the items that measure this factor. In some cases, the factors produced, and the
items loading on them, correspond perfectly to the variables and the items used to measure
those variables. However, in other cases, this correspondence does not occur. To solve this
problem, the researcher might change the variables he is using and create new variables.
The new variables will be the factors produced by the factor analysis and the items that
loaded highly on them. However, in creating the new variables, statistical reasons should
not be the only rationale; conceptual considerations should also be taken into account (Lich,
1998). In other words, the researcher has to go back to the literature and see whether the
items that loaded highly on one factor have been used, in the literature, to measure similar
concepts. If the answer is yes, then grouping these items is conceptually and statistically
correct. However, if the answer is no, then grouping these items is statistically correct but
theoretically incorrect. In this study, the researcher paid attention to both statistical and
theoretical considerations.
Two factor analyses were conducted in this study. The first analysis included all
items that measure the independent variables (System Quality, Information Quality, System
Usage, and User Satisfaction), while the second analysis included the items that measure the
two dependent variables (Individual Impact and Organizational Impact). An iterative
approach was used to conduct the factor analysis. Items that did not make the loading cutoff
and/or items that loaded on more than one factor were dropped from the analysis. The
remaining items were then resubmitted to another round of factor analysis. This process
continued until the researcher reached a meaningful factor structure.
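The screening rule inside this iterative approach can be sketched as a single pass over the loading matrix (an illustration with hypothetical loadings; the study ran the analysis in SPSS):

```python
import numpy as np

def purge_items(loadings, names, cutoff=0.60):
    """One screening pass: keep an item only if it loads at or above the
    cutoff on exactly one factor; otherwise flag it for removal."""
    L = np.abs(np.asarray(loadings, dtype=float))
    kept, dropped = [], []
    for name, row in zip(names, L):
        hits = int((row >= cutoff).sum())
        (kept if hits == 1 else dropped).append(name)
    return kept, dropped

# hypothetical loadings of three items on two factors
loadings = [[0.87, 0.10],   # clean loading on factor 1 -> kept
            [0.40, 0.35],   # never reaches the cutoff  -> dropped
            [0.65, 0.70]]   # cross-loads on two factors -> dropped
kept, dropped = purge_items(loadings, ["A", "B", "C"])
```

The kept items would then be resubmitted to another round of factor analysis, repeating until no item is flagged.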
Table 6 shows the eigenvalue of each of the four factors that were extracted.
Table 6
Eigenvalue of Factors
Factor     Eigenvalue   % Variance   Cumulative %
Factor 1   11.50        28.75        28.75
Factor 2   3.84         9.59         38.34
Factor 3   2.63         6.58         44.92
Factor 4   2.15         5.38         50.30
Eigenvalue refers to the amount of variance that a factor accounts for. It is clear
from Table 6 that all four factors have eigenvalues greater than 1.0, which is the cutoff
adopted in this study. Factor 1 has the largest eigenvalue (11.50) and explains 28.75% of
the variance. In total, the four factors explain 50.30% of the variance.
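Because the analysis is run on a correlation matrix, the total variance equals the number of items entered (here 40: 7 System Quality, 9 Information Quality, 20 System Usage, and 4 User Satisfaction items), so each factor's share is its eigenvalue divided by 40. A quick check against Table 6 (figures agree up to rounding):

```python
eigenvalues = [11.50, 3.84, 2.63, 2.15]
n_items = 40   # total items entered into the first factor analysis

pct_variance = [100 * e / n_items for e in eigenvalues]
# -> approximately [28.75, 9.60, 6.58, 5.38], matching Table 6
cumulative = sum(pct_variance)   # -> approximately 50.30
```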
Table 7 shows the factor loadings after varimax rotation. The following is a
discussion of the loading findings.
System Quality

This scale consists of seven items that were borrowed from Bailey and Pearson
(1983). The scale asks the user about the time lapse between a request for data and the
response to that request, the ease or difficulty of using the system, the ease or difficulty of
the sentences and words used in the system, the balance between costs and benefits, trust in
the system, the flexibility of the system, and the connectivity of the system. Six items (SQ1, SQ3, SQ4,
SQ5, SQ6, SQ7) loaded highly on Factor 1, with loadings ranging from .72 to .87. Item
SQ2 did not make the cutoff, so it was dropped from further analysis.
Information Quality

This scale consists of nine items borrowed from Bailey and Pearson (1983). The
scale asks the user about information correctness, information availability, output
variability, information consistency, age of information, information comprehensiveness,
display of output, amount of information, and the degree of congruence between what the
users need and the output. Six items (IQ1, IQ2, IQ4, IQ5, IQ7, IQ9) loaded highly on
Factor 1, with loadings ranging from .60 to .80. Items IQ3, IQ6, and IQ8 did not make the
cutoff, so they were dropped from further analysis.
System Usage

This scale consists of 20 items that were borrowed from Igbaria, Pavri, and Huff
(1989). The scale measures usage through actual daily use of the computer, frequency of
use, the number of packages used by the participants, and the number of tasks the system is
used for. The 20 items on this scale did not load on a single factor; rather, they loaded on all
four factors. Item SU2 loaded on Factor 1 with a loading of .60. Items SU6, SU7, SU8, SU9,
and SU10 loaded highly on Factor 2, with loadings ranging from .78 to .85. Items SU12,
SU16, and SU17 loaded highly on Factor 3, with loadings ranging from .60 to .68. Items
SU3, SU4, and SU5 loaded on Factor 4, with loadings ranging from .68 to .75. Items SU1, SU2,
Table 7
Factors of Independent Variables: Rotated Factor Matrix
Item                            Factor 1   Factor 2   Factor 3   Factor 4
SQ1                             0.87       0.10       0.01       0.01
SQ3                             0.85       0.13       0.00       -0.02
SQ4                             0.82       0.04       -0.09      -0.08
SQ5                             0.85       0.08       0.01       0.02
SQ6                             0.72       0.05       0.06       0.02
SQ7                             0.77       0.07       -0.10      -0.08
IQ1 (Information correctness)   0.67       -0.03      0.11       -0.12
IQ2                             0.70       -0.07      0.16       0.03
IQ4 (Information consistency)   0.72       0.00       0.13       -0.11
IQ5                             0.80       -0.07      0.09       0.10
IQ7                             0.61       -0.09      0.08       0.14
IQ9                             0.65       0.05       -0.09      -0.04
US1                             0.80       0.01       0.09       0.00
US2                             0.84       0.03       0.07       -0.02
US3                             0.87       0.03       0.08       -0.07
US4                             0.85       0.04       0.02       0.00
SU3                             0.07       0.16       -0.02      0.68
SU4                             0.01       0.35       0.00       0.75
SU5                             -0.03      0.28       0.01       0.75
SU6                             0.09       0.79       0.16       0.19
SU7                             -0.02      0.83       0.07       0.10
SU8                             0.04       0.81       0.20       0.03
SU9                             0.05       0.85       0.13       0.06
SU10                            -0.01      0.83       0.10       0.13
SU12                            0.03       0.07       0.60       -0.11
SU16                            0.13       0.09       0.68       0.05
SU17                            -0.06      0.09       0.61       -0.07
SU11, SU13, SU14, SU15, SU18, SU19, and SU20 did not make the cutoff, so they were
dropped from further analysis.
User Satisfaction

This scale consists of four items that were borrowed from Seddon and Yip (1992).
The scale measures system adequacy, system efficiency, system effectiveness, and general
satisfaction with the system. All four items loaded highly on Factor 1, with loadings ranging
from .80 to .87.
Table 8 summarizes the items eliminated from further analysis because they did not
make the cutoff. Table 9 summarizes the findings of the principal component factor analysis
on the independent variables after dropping those items.
Table 8
Summary of Items Eliminated from Further Analysis
SQ2
IQ3
IQ6 (Information comprehensiveness)
IQ8 (Amount of information)
SU1 (Time spent, in hours, using the system during working hours)
SU2
SU11
SU13
SU14
SU15
SU18
SU19
SU20
Table 9
Summary of Item Loadings
Item                            Factor 1   Factor 2   Factor 3   Factor 4
SQ1                             0.87       0.10       0.01       0.01
SQ3                             0.85       0.13       0.00       -0.02
SQ4                             0.82       0.04       -0.09      -0.08
SQ5                             0.85       0.08       0.01       0.02
SQ6                             0.72       0.05       0.06       0.02
SQ7                             0.77       0.07       -0.10      -0.08
IQ1 (Information correctness)   0.67       -0.03      0.11       -0.12
IQ2                             0.70       -0.07      0.16       0.03
IQ4 (Information consistency)   0.72       0.00       0.13       -0.11
IQ5                             0.80       -0.07      0.09       0.10
IQ7                             0.61       -0.09      0.08       0.14
IQ9                             0.65       0.05       -0.09      -0.04
US1                             0.80       0.01       0.09       0.00
US2                             0.84       0.03       0.07       -0.02
US3                             0.87       0.03       0.08       -0.07
US4                             0.85       0.04       0.02       0.00
SU6                             0.09       0.79       0.16       0.19
SU7                             -0.02      0.83       0.07       0.10
SU8                             0.04       0.81       0.20       0.03
SU9                             0.05       0.85       0.13       0.06
SU10                            -0.01      0.83       0.10       0.13
SU12                            0.03       0.07       0.60       -0.11
SU16                            0.13       0.09       0.68       0.05
SU17                            -0.06      0.09       0.61       -0.07
SU3                             0.07       0.16       -0.02      0.68
SU4                             0.01       0.35       0.00       0.75
SU5                             -0.03      0.28       0.01       0.75
These loadings posed two dilemmas for the researcher. Should he combine all items
that loaded highly on Factor 1 into one variable? Should he deconstruct the usage variable
into three constructs according to the item loadings? If so, would this coincide with the
literature; that is, have other writers deconstructed the usage variable into three constructs?
The loading of the items that measure Information Quality, System Quality, and User
Satisfaction on one factor is not uncommon. Several writers have reached the same results
using factor analysis and examining two or more of these variables. For example, Ishman
(1996) found that the item that measured User Satisfaction loaded with the composite
measure that was used to measure Information and System Quality. Ishman said that "it
might be concluded from this result that this single-item [User Satisfaction] is measuring the
same dimension of information success as the eight items it loads with" (p. 25). McHaney et
al. (1999) tested the reliability of the end-user computing satisfaction (EUCS) measure. This
scale is a composite of several items that measure Information Quality and System Quality
(e.g., items SQ1, SQ3, SQ5, IQ1, IQ2, IQ5, IQ7, IQ9). Using factor analysis, the
writers found that all items loaded on one factor, with loading values ranging from .76 to
.94. Glorfeld (1994) combined System Quality, Information Quality, and Satisfaction into
one variable, which he called Satisfaction (Figure 3). This was done after conducting a
factor analysis in which all items that measure the three variables loaded on one factor.
Accordingly, supported by the statistical evidence found in this study through the use
of principal component factor analysis with varimax rotation, and by the conceptual evidence
found in the work of other researchers on the same variables, the researcher decided to
combine all items that loaded highly on Factor 1 into one variable. The only exception was
Item SU1, because this item measures the average use of the information system; thus, the
conceptual base of this item does not coincide with that of the rest of the items. The new variable
was called Satisfaction (as it was called by Glorfeld) to contribute to building unified
concepts in the information system and public management information system fields.
Regarding System Usage, several researchers have dealt with System Usage as a
multi-dimensional concept (Igbaria, 1992; Igbaria et al., 1989; Kim & Lee, 1986). The
dimensions these researchers identified included actual daily use, frequency of use, the
number of packages used, the level of sophistication of usage, and the inclusion of computer
analysis in decision making, as measured by the number of tasks the system is used in.
None of these studies, however, used factor analysis to analyze the inter-correlations of the
items included under each dimension.
The statistical evidence in this study indicated that usage is not a unitary construct
and could be deconstructed into three constructs. These three constructs correspond to
several dimensions identified by other researchers. For example, SU6, SU7, SU8, SU9, and
SU10 correspond to the inclusion of computer analysis in the decision-making dimension;
SU12, SU16, and SU17 correspond to the number of packages used; and SU3, SU4, and
SU5 could be considered a subset of the inclusion of computer analysis in the
decision-making dimension.
Because this is the first study to evaluate information systems in the public sector
that attempts to develop a comprehensive model, and in order to avoid further complicating
the investigation and analysis of the study's model, the researcher chose to consolidate all
usage items that loaded on Factors 2, 3, and 4 into one variable called System Usage.
Deconstructing the System Usage variable, and examining how this could affect the
relationships with other variables in the model, is left to future research.
This subsection presents the findings of the factor analysis conducted on the two
dependent variables, Individual Impact and Organizational Impact. In total, 18 items were
used to measure the two dependent variables. These 18 items were entered into a principal
component factor analysis with varimax rotation. Table 10 shows the eigenvalue of each of
the two factors that were extracted.
Table 10
Eigenvalue of Factors
Factor     Eigenvalue   % Variance   Cumulative %
Factor 1   6.27         36.90        36.90
Factor 2   5.02         29.56        66.46
Table 10 indicates that both factors have eigenvalues greater than 1.0, which is the
cutoff point for this study. Factor 1 has the larger eigenvalue (6.27) and explains 36.90%
of the variance. In total, the two factors explain 66.46% of the variance.
Table 11 shows the factor loadings after varimax rotation. The following is a
discussion of the loading findings.
Individual Impact

This scale consists of ten items that were borrowed from Doll and Torkzadeh (1998).
The instrument measures the impact of the information system on four aspects of work (task
productivity, task innovation, customer satisfaction, and management control). All ten items
loaded highly on Factor 1, with loadings ranging from 0.69 to 0.79. These loadings exactly
coincide with the conceptual grouping provided in Chapter 3.
Organizational Impact

This scale consists of eight items. Five items were borrowed from Sabherwal (1999)
and three items were borrowed from Mahmood and Soon (1991). Seven items of this scale
loaded on one factor, with loadings ranging from 0.63 to 0.79. Item OI2, which measures
the impact of the information system on reducing administrative costs, did not make the
cutoff. Thus, it was eliminated from the analysis in the second round of factor analysis.
Table 11
Factors of Dependent Variables: Rotated Factor Matrix
Item     Factor 1   Factor 2
IM1      0.69       0.39
IM2      0.77       0.34
IM3      0.70       0.36
IM4      0.77       0.16
IM5      0.74       0.44
IM6      0.72       0.50
IM7      0.73       0.51
IM8      0.75       0.40
IM9      0.79       0.28
IM10     0.75       0.39
OI1      0.30       0.66
OI3      0.24       0.78
OI4      0.36       0.64
OI5      0.32       0.79
OI6      0.45       0.70
OI7      0.26       0.63
OI8      0.35       0.74
This section presents the implications of the factor analysis results for the study's
model, in terms of modifying the relationships in the model, the study's research question,
and the study's hypothesis. Figure 7 depicts the 4-factor model that was produced by the
factor analysis.
[Figure 7. The modified 4-factor model produced by the factor analysis.]
Based on the outcome of the factor analysis, the conceptual groupings of all
variables in the study have changed. Thus, it is expected that associations among the study's
variables have also changed. The equations in Chapter 3 that presented the relationships in
DeLone and McLean's model have been modified to reflect the new information resulting
from the factor analysis (Figure 7). The modified equations are:
OI = f (SU, STIS, IM)    (4.A.1)
IM = f (SU, STIS)    (4.A.2)
SU = f (STIS)    (4.A.3)
STIS = f (SU)    (4.A.4)
where STIS, SU, IM, and OI represent Satisfaction, System Usage, Individual Impact, and
Organizational Impact, respectively.
Equation 4.A.1 suggests that Organizational Impact is determined directly by
Individual Impact and indirectly by the rest of the variables in the model through their
effects on Individual Impact. Equation 4.A.2 suggests that Individual Impact is determined
directly by Satisfaction and System Usage. Equation 4.A.3 suggests that System Usage is
determined directly by Satisfaction. Equation 4.A.4 suggests that Satisfaction is determined
directly by System Usage.
The research question and hypothesis are expected to change to reflect the change in
the relationships in the model and the subsequent changes in the equations that represent these
relationships. Consequently, the study's research question has been modified to: To what
extent is the modified (four-factor) model useful in evaluating information systems in the public
sector?
The following null hypothesis will be tested: The relationships indicated in
the 4.A equations do not exist.
Scales Reliabilities
As a result of the factor analysis, most of the measures used in this study have been
modified. Consequently, the reliabilities of these measures had to be determined again.
The reliability analyses for these measures are contained in Tables 12 to 15.
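The reliability statistic reported in Tables 12 to 15 is Cronbach's alpha. A minimal sketch of the computation, using hypothetical 5-point item scores rather than the study's data:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]          # per-respondent scale totals
    sum_item_var = sum(pvariance(col) for col in items)       # variance of each item
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Hypothetical responses: three 5-point items from five respondents.
items = [
    [5, 4, 3, 4, 2],
    [5, 4, 3, 5, 2],
    [4, 4, 3, 4, 1],
]
alpha = cronbach_alpha(items)  # high, since the items move together
```

Alpha rises when items covary strongly, which is consistent with a 16-item scale that loads on a single factor yielding a value as high as .95.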
Table 12
Scale Reliability of the Satisfaction Variable (N = 287)

Item                                                                   Corrected Item-Total    Alpha if Item
                                                                       Correlation             Deleted
1. Time between request and the fulfillment of request                  .84                     .95
2. Sentences and words used to interact with the system                 .82                     .95
3. Balance between cost and benefit                                     .77                     .95
4. Trust in the system output                                           .81                     .95
5. System ability to change                                             .67                     .95
6. System ability to connect to other organizations                     .72                     .95
7. Information correctness                                              .65                     .95
8. Time information is available compared to the time it is needed      .68                     .95
9. Information consistency                                              .72                     .95
10. Age of the information                                              .78                     .95
11. Material design of the display of the output                        .56                     .95
12. Degree of congruence between what the user wants and the output     .59                     .96
13. How adequately the system meets the information needs               .78                     .95
14. How efficient is the system?                                        .82                     .95
15. How effective is the system?                                        .86                     .95
16. General satisfaction with the information system                    .84                     .95

Cronbach's Alpha for Satisfaction = .95
Table 13
Scale Reliability of the System Usage Variable (N = 287)

Item                                                           Corrected Item-Total    Alpha if Item
                                                               Correlation             Deleted
1. Extent of use in historical references task                  .35                     .83
2. Extent of use in looking for trends task                     .54                     .82
3. Extent of use in finding problems and alternatives task      .47                     .82
4. Extent of use in planning                                    .71                     .80
5. Extent of use in budgeting                                   .68                     .80
6. Extent of use in communication                               .67                     .80
7. Extent of use in controlling and guiding activities task     .71                     .80
8. Extent of use in decision making task                        .71                     .80
9. Package used in the job (word processing)                    .11                     .84
10. Package used in the job (graphics)                          .22                     .84
11. Package used in the job (communication)                     .16                     .84
Table 14
Scale Reliability of the Individual Impact Variable (N = 287)

Item                                                                Corrected Item-Total    Alpha if Item
                                                                    Correlation             Deleted
1. Accomplish more work using the information system                 .76                     .95
2. Information system leads to increasing productivity               .79                     .95
3. Information system saves time                                     .74                     .95
4. Information system helps in applying new methods to do the job    .67                     .95
5. Information system helps in meeting customer needs                .84                     .94
6. Information system led to increasing customer satisfaction        .85                     .94
7. Information system led to improving customer service              .87                     .94
8. Information system helps management control the work process      .81                     .94
9. Information system improves management control                    .77                     .95
10. Information system helps management control performance          .79                     .95
Table 15
Scale Reliability of the Organizational Impact Variable (N = 287)

Item                                                         Corrected Item-Total    Alpha if Item
                                                             Correlation             Deleted
1. Distinguish the organization from other organizations      .63                     .88
2.                                                            .72                     .87
3.                                                            .64                     .88
4.                                                            .76                     .86
5.                                                            .75                     .87
6.                                                            .60                     .88
7.                                                            .74                     .87
The results of the reliability analysis show that the four variables have alpha
coefficients higher than the minimum acceptable alpha value (0.70) used in this study, and
the internal consistency reliabilities of the measurement instruments that were not greatly
modified are within the range of the studies reviewed in Chapter 2.
After finishing the factor and reliability analyses, the COMPUTE procedure in SPSS
was used to create the composite variables. The composite variables were created by
averaging the items that loaded highly on each factor.
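The averaging step mirrors SPSS's COMPUTE with a mean function. A minimal sketch (the item labels follow Table 11, but the scores shown are hypothetical):

```python
def composite(record, item_names):
    """Average a respondent's scores on the items that loaded highly on one factor."""
    return sum(record[name] for name in item_names) / len(item_names)

# Hypothetical respondent: IM* items feed the Individual Impact composite,
# O* items feed the Organizational Impact composite.
respondent = {"IM1": 4, "IM2": 5, "IM3": 4, "O13": 3, "O14": 4}
individual_impact = composite(respondent, ["IM1", "IM2", "IM3"])
organizational_impact = composite(respondent, ["O13", "O14"])
```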
As a result of the factor analysis and the resulting change in the theoretical grouping
of the variables, a second round of correlation analysis was conducted. Table 16 shows the
results of the correlation analysis.
Table 16
Pearson Correlation Matrix of the Four Variables in the Study (N = 287)

Variable                     1        2        3        4
1. Satisfaction              1.00
2. System usage              .07      1.00
3. Individual impact         .78**    .06      1.00
4. Organizational impact     .76**    .05      .78**    1.00

** p < .01
The Pearson correlation coefficients in Table 16 show that there are strong direct
associations among some of the variables in the study and no associations among
others. There is a strong direct association between Satisfaction and Individual
Impact (r = 0.78), between Satisfaction and Organizational Impact (r = 0.76), and between
Individual Impact and Organizational Impact (r = 0.78). However, the correlation analysis
indicated that usage is not associated (directly or inversely) with any variable. These
correlation coefficients are statistically significant at the .01 level.
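For reference, the Pearson coefficient underlying Table 16 is the covariance of two variables scaled by their standard deviations. A self-contained sketch with hypothetical composite scores (not the study's data):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical composite scores for five respondents.
satisfaction = [3.1, 4.2, 2.5, 4.8, 3.6]
individual_impact = [3.0, 4.5, 2.2, 4.9, 3.4]
r = pearson_r(satisfaction, individual_impact)  # strongly positive
```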
Regression Analysis
This section presents the fourth step in the data analysis plan: the results of the
regression analyses that were conducted to examine the strength of the relationships among
the dependent variables and independent variables in the study. Only those independent
variables found to be statistically significant in the second round of correlation analysis were
entered into the regression analyses. Consequently, three regression analyses were
conducted.
In the first regression analysis, Individual Impact was the dependent variable.
According to Equation 4.A.2 [IM = f (SU, STIS)], Individual Impact was hypothesized to
have direct relationships with Satisfaction and System Usage. However, the correlation
analysis findings indicated that the satisfaction variable was the only variable found to have
a statistically significant direct association with Individual Impact. As such, the satisfaction
variable was the only variable entered in the first regression analysis as the independent
variable.
In the second regression analysis, Organizational Impact was the dependent variable.
According to Equation 4.A.1 [OI = f (SU, STIS, IM)], Individual Impact was hypothesized to
have a direct positive relationship with Organizational Impact, while Satisfaction and Usage
were hypothesized to have indirect positive relationships with it. Because the indirect
relationships could not be assessed through regression analysis, the second regression was
conducted to test for a direct positive relationship between Individual Impact and
Organizational Impact. Therefore, Individual Impact was entered in the second analysis as
the independent variable.
Examining the Pearson correlation matrix in Table 16, the researcher found a positive
association between Satisfaction and Organizational Impact (r = 0.76). The high correlation
coefficient of this association raises the question of whether there is a direct positive
relationship between Satisfaction and Organizational Impact. Looking back through the
literature to answer this question, the researcher found that other researchers had not
investigated this relationship. Thus, to explore this relationship, a third multiple
regression analysis was conducted to assess whether there is a direct relationship between
Satisfaction and Organizational Impact. In this analysis, the Organizational Impact variable
was the dependent variable. The Satisfaction variable and the Individual Impact variable
were the independent variables.
Equations 4.A.3 [SU = f (STIS)] and 4.A.4 [STIS = f (SU)] were not tested in the
regression analysis because the correlation analysis findings indicated that the association
between Satisfaction and System Usage was not statistically significant. In fact, System
Usage did not correlate with any variable in the model. Thus, it was dropped from the
regression analysis.
This section is divided into four subsections. The first three subsections present the
findings of the three regression analyses. SPSS for Windows, release 10, was used to conduct
the three regression analyses. The fourth subsection summarizes the changes that took place
in the study's model as a result of the regression analysis findings.
In the first regression analysis, Individual Impact was the dependent variable while
Satisfaction was the independent variable. Table 17 shows the results of the first regression
analysis.
Table 17
Regression Analysis for Variable Predicting Individual Impact (N = 287)

Variable        B      SE B     β       t
Satisfaction    .67    .032     .78     21.20*

R² = .61; R² (Adjusted) = .61; F = 449.47*
* p < .01
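With a single predictor, the standardized beta equals the Pearson correlation and R² is its square, which is why the figures in Table 17 are internally consistent with Table 16. A one-line check:

```python
# One-predictor regression identities:
#   standardized beta = Pearson r;  R-squared = r**2
r = 0.78             # Satisfaction-Individual Impact correlation (Table 16)
beta = r             # matches the standardized beta reported in Table 17
r_squared = r ** 2   # 0.6084, i.e. the .61 reported for R-squared
```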
In the second regression analysis, Organizational Impact was the dependent variable
while Individual Impact was the independent variable. Table 18 shows the results of the
second regression analysis.
Table 18
Regression Analysis for Variable Predicting Organizational Impact (N = 287)

Variable             B      SE B     β       t
Individual impact    .96    .04      .78     20.94*

R² = .60; R² (Adjusted) = .60; F = 438.43*
* p < .01
In the third regression analysis, Organizational Impact was the dependent variable
while Individual Impact and Satisfaction were the independent variables. Table 19 shows
the results of the third regression analysis.
Table 19
Regression Analysis for Variables Predicting Organizational Impact (N = 287)

Variable             B      SE B     β       t
Satisfaction         .42    .06      .40     7.272*
Individual impact    .58    .078     .47     8.493*

R² = .67; R² (Adjusted) = .67; F = 285.583*
* p < .01
Satisfaction and Individual Impact were found to be significant predictors of
Organizational Impact. The two variables accounted for 67% of the variation in
Organizational Impact. The calculated F of 285.583 was significant at an alpha level of
0.01. The standardized beta values for the two variables indicate that both have
positive relationships with Organizational Impact. Furthermore, examining the standardized
beta values shows that Individual Impact has a stronger effect on
Organizational Impact than Satisfaction: the beta coefficient for Individual Impact was .47
while the beta coefficient for Satisfaction was .40. Both beta coefficients are significant
(p < .01).
Because the two independent variables had a high correlation coefficient (r = 0.78),
the data were tested for multicollinearity using the Variance Inflation
Factor (VIF). This measure shows the degree to which an independent variable is
explained by the other independent variables in the regression equation (Hair et al., 1992). In
the literature, a VIF cutoff of 10 is usually used to indicate whether a multicollinearity
problem exists (Hair et al.). This study uses this cutoff (10) for the VIF. The VIF value
for the two variables is 2.57, which indicates that there is no multicollinearity problem in the
third regression analysis. Thus, the statistical results from this regression analysis do not
include biases due to multicollinearity.
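With exactly two predictors, the auxiliary R² in the VIF formula reduces to the squared correlation between them, so the reported VIF can be checked directly (using r ≈ .782, the three-decimal correlation used in the total-effect calculation later in this chapter):

```python
# VIF_j = 1 / (1 - R²_j); with two predictors, R²_j is the squared
# correlation between them.
r = 0.782                 # Satisfaction-Individual Impact correlation
vif = 1 / (1 - r ** 2)    # close to the 2.57 reported above
```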
In summary, after completing the regression analysis, all relationships tested were
found to be statistically significant. Figure 8 depicts the model resulting from the
regression analysis. Figure 8 shows that the relationship between Satisfaction and
Individual Impact is positive and significant, with a beta coefficient of 0.78; and the
relationship between Individual Impact and Organizational Impact is also positive and
significant, with a beta coefficient of 0.47 after controlling for the effect of Satisfaction and a
beta coefficient of 0.78 without controlling for the effect of Satisfaction. Furthermore, this
study found a direct positive relationship between Satisfaction and Organizational Impact,
with a beta coefficient of 0.40.
The model in Figure 8 was further tested using path analysis.
[Figure 8. The model produced by the regression analysis.]
Path Analysis
This section presents the fifth step in the data analysis plan: path analysis. There are
several advantages of path analysis over regression analysis. First, it enables the researcher
to confirm the model produced by regression analysis. Pedhazur (1997) argued that path
analysis is intended to shed light on the tenability of the causal models a researcher
formulates (p. 770). Second, path analysis enables the researcher to determine the indirect
effects of the variables in his model. In other words, path analysis allows for decomposing
the effects of variables into direct effects, indirect effects, and total effects. Thus, the
researcher is able to determine the actual impacts of the variables in his model.
Third, conducting path analysis through the use of computer software (e.g., AMOS,
LISREL) allows the researcher to determine the goodness of fit of his model; that is,
how well the model fits the data collected. There are several measures of the degree of fit (e.g.,
Goodness of Fit Index, Adjusted Goodness of Fit Index, Root Mean Square Residual).
Because of the above advantages, path analysis is used in this study. The following
subsection discusses the results of the path analysis.
This subsection presents the results of the path analysis conducted on the study's model.
This includes the resulting path coefficients and their significance, the effects of each variable in the
model (direct, indirect, and total), and measures of the goodness of fit of the study's model.
The study model resulting from the regression analysis was the starting point for the path
analysis. Figure 8 depicts the model resulting from the regression analysis.
Using AMOS, this model was entered into the path analysis. Figure 9 depicts the
model produced by the path analysis, including the path coefficients. Table 20 summarizes
the path coefficients for each path in the model produced. Path coefficients can be
reported as standardized or non-standardized (Norris, 1997). Several researchers, however,
recommended using standardized coefficients if the intention is to compare the magnitude of
each path in the model (Norris, 1997; Asher, 1976; Retherford & Choe, 1993; Loehlin,
1992). Thus, this study reports the path analysis results as standardized coefficients.
[Figure 9. The model produced by the path analysis, with standardized path coefficients.]
Table 20
Summary of Standardized Path Coefficients of Paths in the Model
Produced by the Path Analysis

Path           Path coefficient (standardized)    SE     p-value
Satis → IM     .78                                .03    .000
IM → OI        .47                                .06    .000
Satis → OI     .40                                .07    .000

Satis = Satisfaction; IM = Individual Impact; OI = Organizational Impact
Table 20 shows that all path coefficients are significant. The strongest direct effect on
Organizational Impact comes from Individual Impact, with a path coefficient of 0.47.
Satisfaction affects Organizational Impact with a path coefficient of 0.40. The path from
Satisfaction to Individual Impact has a coefficient of 0.78.
The complete magnitude of the effects of these variables, however, can only be
assessed through knowing the indirect effects of these variables. As part of the final output
of path analysis, AMOS release 4.01 provides the direct, indirect, and total effects of each
variable in the tested model. Table 21 summarizes the effects for each variable in the
model.
Table 21
Summary of Direct, Indirect, and Total Effects of Research Model Variables

Effect                                         Direct    Indirect    Total
Satisfaction on Individual Impact               .78       —           .78
Individual Impact on Organizational Impact      .47       —           .47
Satisfaction on Organizational Impact           .40       .37         .76
The results in Table 21 shed more light on the magnitude of the effects of some of the
variables in the model. Satisfaction has a path coefficient of 0.40 with Organizational
Impact. However, taking into consideration the effect of Satisfaction on Individual Impact,
which in turn affects Organizational Impact, the total effect of Satisfaction, including the
direct and indirect effects, is 0.76. Therefore, by examining the total effects of the different
variables, Individual Impact is no longer the most important factor affecting
Organizational Impact (path coefficient = 0.47). Rather, Satisfaction becomes the most
important factor affecting Organizational Impact (total effect = 0.76). Consequently,
Satisfaction is the most important factor affecting both Individual Impact (path
coefficient = 0.78) and Organizational Impact. The other variables in the model retain
the same path coefficients.
One of the advantages of conducting path analysis using computer statistical
packages, such as AMOS, is that the significance of the model as a whole can be assessed
through measures of fit, which assess the goodness of fit of a model to the data
collected. These measures are provided by the statistical packages as part of the final
output of the path analysis. This study used several measures of fit. These measures include
the Goodness of Fit Index (GFI), Root Mean Square Residual (RMR), Incremental Fit Index
(IFI), and Comparative Fit Index (CFI). Using multiple measures of goodness of fit
increases confidence in the fit of the study's model. Table 22 shows the results of the four
measures of fit used in this study to assess the goodness of fit of the model produced by the
path analysis.
Table 22
Measures of Goodness of Fit for the Model Produced by the Path Analysis

Measure of fit    Cutoff (degree of fit)
GFI               closer to 1
IFI               closer to 1
CFI               closer to 1
RMR               closer to 0
Table 22 pairs the cutoff of the degree of fit for each measure with the degree of fit
achieved by the model produced by the path analysis on that measure. Based on the results
of the measures of fit, the researcher concluded that the model produced by the path analysis
is significant, because the model met all the cutoffs of the measures of fit used in this study.
Reproduced with permission o f the copyright owner. Further reproduction prohibited without permission.
120
This section presents a comparison between the statistics produced by the regression
analysis and the path analysis. The main goal of this comparison is to determine whether the
two different statistical methods produced the same outcomes. If the two statistical methods
show the same statistics, then we can conclude that the model produced by the regression
analysis is statistically significant, and vice versa.
The comparison covered two areas: first, the significant and non-significant
relationships among the variables in the study's model; second, the actual magnitude of the
effects of the variables in the study's model. The latter covered the direct, indirect, and total
effects found using the two statistical methods.
Table 23 summarizes the results of testing the relationships among the variables
in the study's model using regression analysis and path analysis. The statistics in Table 23
clearly indicate that the two statistical methods produced the same significant relationships.
Table 23
Summary of Relationships Found among the Variables in the Study's Model Using
Regression Analysis and Path Analysis

                 Path analysis                            Regression analysis
Relationship     Path coefficient      Significant       Regression coefficient    Significant
                 (standardized)                          (beta)
Satis → IM       .78                   Yes               .78                       Yes
IM → OI          .47                   Yes               .47                       Yes
Satis → OI       .40                   Yes               .40                       Yes

Satis = Satisfaction; IM = Individual Impact; OI = Organizational Impact
Table 24 summarizes the different types of effects for each variable in the study's model,
as determined by regression analysis and path analysis. In the regression
analysis, the total effect of Satisfaction on Organizational Impact was calculated through the
following equation:
Total effect = B31 + (B21 × B32)
             = direct effect + indirect effect
             = .399 + (.782 × .466)
             = .399 + .365
             = .764
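The same decomposition can be checked arithmetically, using the unrounded coefficients reported in this chapter:

```python
# Effect decomposition for Satisfaction -> Organizational Impact.
b31 = 0.399            # direct: Satisfaction -> Organizational Impact
b21 = 0.782            # Satisfaction -> Individual Impact
b32 = 0.466            # Individual Impact -> Organizational Impact

indirect = b21 * b32   # close to the .365 reported
total = b31 + indirect # close to the .764 reported
```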
Table 24
Summary of Direct, Indirect, and Total Effects of Research Model Variables
Determined by Regression Analysis and Path Analysis

                                              Direct                Indirect              Total
Effect                                        Regression    Path    Regression    Path    Regression    Path
Satisfaction on Individual Impact              .78           .78     —             —       .78           .78
Individual Impact on Organizational Impact     .47           .47     —             —       .47           .47
Satisfaction on Organizational Impact          .40           .40     .37           .37     .76           .76
The statistics in Table 24 clearly indicate that regression analysis and path
analysis produced the same coefficients for the different types of effects. Thus, the
variable that was important in the model produced by the regression analysis was also
important in the model produced by the path analysis: Satisfaction was the important
variable in both analyses.
Based on the preceding comparison, the researcher concluded that the model
produced by the regression analysis is significant. First, when the model was entered into
the path analysis, the path analysis produced the same statistics as the regression analysis,
confirming the results of the regression analysis. Second, as part of the path analysis output,
the measures of goodness of fit provided further evidence that the model was significant:
the study's model met all the cutoffs of the measures of fit.
Figures 8 and 9 depict the study's model that was tested and validated. In this
model, the relationship between Satisfaction and Individual Impact is positive and
significant, with a beta coefficient of 0.78; the relationship between Individual Impact and
Organizational Impact is also positive and significant, with a beta coefficient of 0.47 after
controlling for the effect of Satisfaction and a beta coefficient of 0.78 without controlling for
the effect of Satisfaction; and the relationship between Satisfaction and Organizational
Impact is positive and significant, with a direct effect of 0.40 and an indirect effect
of 0.37.
Chapter 5
This chapter presents the conclusions of this study. It is divided into three sections.
The first section presents an overview of the purpose of the study, and summarizes and
interprets the results. The second section discusses the implications and contributions of this
study to public administration theory and management. The final section discusses possible
avenues of future research based on the results of this study.
As public organizations in Kuwait enter the new information age, their investment in
and usage of information systems are expected to be substantial. In order to obtain the
resources needed to invest in these systems, public organizations need to justify the expense
by explaining the projected outcomes and providing clear evidence that these systems will
increase the efficiency and effectiveness of these organizations in delivering services to the
public. This need requires an examination of management information systems in
public organizations. A more specific question that requires exploration is how to evaluate
the success of information systems in public organizations. That is where this study comes
into play.
By focusing on how to evaluate information systems in public organizations, this
study has helped fill a gap in the research literature. This research conducted a […]
include any external actors in the evaluation process. A seventh variable, External
Environment Satisfaction, was added to the DeLone and McLean model to denote the
satisfaction of external actors.
The study's research question was: To what extent is DeLone and McLean's model
useful in evaluating information systems in the public sector? Several equations that
represent the relationships in the study's model were formulated as the hypotheses for
this study. Six Kuwaiti public organizations were randomly selected as the study's sample.
A survey methodology was chosen to collect data. A total of 363 usable questionnaires
were obtained. Factor analysis, correlation analysis, regression analysis, and path analysis
were used to analyze the study's model.
The initial findings of this study did not support the DeLone and McLean model as it was
originally proposed. Factor analysis of the 40-item questionnaire measuring Information
Quality, System Quality, System Usage, and User Satisfaction resulted in a two-factor
solution. The first factor was labeled Satisfaction; the items included under this factor were
most of the items measuring Information Quality, System Quality, and User Satisfaction.
The second factor was labeled Usage; the items included under this factor were the items
measuring System Usage. Thus, the study findings led to respecifying the study's model.
Under the revised model, DeLone and McLean's six-variable model became a
four-variable model, as indicated in Figure 7. This revised model proposes that Satisfaction
and System Usage affect each other and Individual Impact; Individual Impact, in turn,
affects Organizational Impact. In other words, when users perceive that their information
systems are high quality, produce high quality information, and are increasingly used, their
perception that these systems are making them more productive, by providing timely and
needed information for their work-related responsibilities, increases. This in turn increases
the perception that information systems enhance the effectiveness of the organization.
Moreover, the study findings indicated that when the usage of information systems
increases, the perception that these systems are high quality and produce high quality
information increases, and vice versa. The respecification of the study's model led to
modifying the research question and hypotheses. The research question became: To what
extent is the modified (four-factor) model useful in evaluating information systems in the
public sector?
Correlation analysis was first used to analyze the revised model. The findings indicated
that there were positive direct associations among the variables in the model, except for the
System Usage variable, which did not relate directly to any of the other variables in the
model. The instrument used to measure System Usage could explain this result. This
instrument consists of twenty items that measure four dimensions of usage: (1) actual daily
use, (2) frequency of use, (3) total tasks, and (4) total applications. The numbers of items
that measure each dimension are one item (SU1), one item (SU2), eight items (SU3-SU10),
and ten items (SU11-SU20), respectively. The factor analysis led to eliminating
the two items that measure the actual-daily-use and frequency-of-use dimensions.
Thus, the lack of measures for these two dimensions might be a possible cause of the lack of
associations with the other variables in the study model. This is especially plausible
considering that, during the pilot study and consultation process, several participants and
Kuwaiti professors indicated that information systems had been only recently introduced in
their organizations, were not widely used in all work-related tasks, and involved only a
limited number of software packages. In other words, in Kuwaiti public
organizations, there is a need for a measure of information system use that relies on the
actual-daily-use and frequency-of-use dimensions more than on the other
dimensions of usage.
An interesting and unexpected finding of the correlation analysis is the significant
direct association between Satisfaction and Organizational Impact. Previous research gave
no indication of such a relationship between the two variables. Because of this surprising
and potentially important result, the researcher decided to extend the analysis to investigate
this relationship in the subsequent regression analysis and path analysis.
Consequently, the lack of associations with the System Usage variable led to
respecifying the four-variable model (Figure 7) as a three-variable model (Figure
8). The revised model proposes that Satisfaction affects Individual Impact, which, in turn,
affects Organizational Impact; Satisfaction also directly affects Organizational Impact.
In other words, when users perceive that their information systems are high
quality and produce high quality information, their perception that these systems are making
them more productive, by providing timely and needed information for their work-related
responsibilities, increases. This in turn increases the perception that information systems
enhance the effectiveness of the organization. Moreover, the study findings indicated that as
the perception of having high quality information systems that produce high quality
information increases, the perception that information systems enhance the effectiveness of
the organization increases.
The three-variable model was tested using regression analysis and path analysis. Both analyses supported the above relationships. The path analysis findings indicated that the model fit the data, and both analyses showed that Satisfaction had a significant positive impact on both Individual Impact and Organizational Impact. Thus, the results of this study provide support for the three-variable model of evaluating information system success (Figures 8 and 9).
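A recursive path model of this kind can be estimated as a pair of standardized regressions. The sketch below uses simulated variables whose names mirror the model (Satisfaction, Individual Impact, Organizational Impact); the data and effect sizes are illustrative assumptions, not the study's results.

```python
import numpy as np

def standardize(x):
    """Center and scale a variable so regression weights are path (beta) coefficients."""
    return (x - x.mean()) / x.std()

def path_coefficients(sat, ind, org):
    """Estimate the paths of the three-variable model:
    Satisfaction -> Individual Impact, and
    {Satisfaction, Individual Impact} -> Organizational Impact."""
    sat, ind, org = map(standardize, (sat, ind, org))
    # Path a: regress Individual Impact on Satisfaction alone.
    a = np.linalg.lstsq(sat[:, None], ind, rcond=None)[0][0]
    # Paths b, c: regress Organizational Impact on both predictors jointly,
    # so the direct Satisfaction -> Organizational Impact path is estimated
    # while controlling for Individual Impact.
    X = np.column_stack([sat, ind])
    b_c = np.linalg.lstsq(X, org, rcond=None)[0]
    return {"sat->ind": a, "sat->org": b_c[0], "ind->org": b_c[1]}

# Simulated data consistent with the revised model's structure.
rng = np.random.default_rng(1)
sat = rng.normal(size=400)
ind = 0.6 * sat + rng.normal(scale=0.8, size=400)
org = 0.3 * sat + 0.4 * ind + rng.normal(scale=0.8, size=400)
paths = path_coefficients(sat, ind, org)
```

Because all variables are standardized, the recovered coefficients are directly comparable across the two equations, which is what allows the direct and indirect effects of Satisfaction to be read off the path diagram.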
In summary, the three-variable model of evaluating information systems emerged from the original six-variable model of DeLone and McLean and the four-variable model through several respecification steps arising from previous stages of analysis. Unlike the two models that precede it, the three-variable model proposes that information systems success is three-dimensional, with the relationships between these dimensions as indicated in Figures 8 and 9. As such, this study has provided a new, empirically tested model of information system success in public organizations.
The original six-variable model of DeLone and McLean and the four-variable model were nevertheless useful in providing a general frame that includes the possible dimensions of information system success that could exist within organizational boundaries. Based on the preceding, the study's research question was answered.
The study of information systems in public organizations will become very important as we enter the information age, in which usage of and investment in public organization information systems are expected to increase greatly. However, the study of information systems in public organizations is still in its infancy in both the public administration theory and public management literatures. An emerging line of research called public management
knowledge (Kaboolian, 1996) or practical theory model (Harmon & Mayer, 1994). Kaboolian (1996) defined dual relevance knowledge as knowledge that benefits both the theoretical side of the field and the practical side. Harmon and Mayer (1994) defined a practical theory as one that either illuminates possibilities for action that would not otherwise be apparent or stimulates greater understanding of what the person has already been doing.
According to Kaboolian (1996), "one way to go about this [creating dual relevance knowledge] is to ask questions of relevance to practitioners and test, evaluate, and develop the insights of the disciplines in the course of answering those questions" (p. 80).
The three-variable model is based on the six-variable model of DeLone and McLean, who developed their model by conducting a comprehensive review of the relevant empirical literature. Consequently, the three-variable model belongs to the type of theoretical models described by both Kaboolian (1996) and Harmon and Mayer (1994).
Thus, the findings of this study can assist public managers in dealing with the challenges of the information age. The three-variable model and the instruments developed and validated in this study can be used to measure how successfully existing information systems increase the effectiveness and efficiency of both individual and organizational performance. Using the three-variable model and instruments, public managers could draw on the results of the evaluation to provide empirical evidence to overseeing actors about the level of success of their information systems and, in turn, justify the public resources invested in these systems.
The findings of this study, furthermore, provide guidance on how public managers may influence the success of information systems within their organizations. They indicate that Satisfaction is a key variable in the three-variable model. The empirical
evidence in this study suggests that as Satisfaction increases, Individual Impact and Organizational Impact also increase. The Satisfaction variable measures users' satisfaction with the quality of information, the quality of the systems, and their overall satisfaction. Thus, public managers may positively influence the success of information systems by increasing the quality of both the information and the systems themselves.
Looking deeper into the Information Quality and System Quality variables, we could argue that Information Quality reflects users' needs for the information necessary to accomplish their work, while System Quality reflects the technical needs and conditions that should be in place for information systems to have high quality, such as the type of cables and cooling systems used in buildings and the level of electric power available.
Thus, satisfying the needs of the technical subsystem (the information systems) and the social subsystem (the users) could lead to a higher level of satisfaction. Public managers could satisfy these needs using different methods; for example, they could allocate more resources toward buying more powerful information systems.
However, a more effective and efficient method is to address these needs at the design stage of information systems, taking into account the needs of both subsystems (technical and social). In other words, to design a successful information system, public managers should not follow the technological imperative model, which views technology as the independent variable that determines other dimensions in an organization. According to this approach, the introduction of high technology leads to increased organizational productivity. This design approach could increase productivity in the short run; in the long run, however, it is doomed to fail. A logical explanation for why the increase in productivity could occur only in the short run is offered by Chisholm (1988), who explained that suboptimal designs occur because "the employees who operate most high-technology systems bear the consequences of design decisions and must make the system work regardless of its designs" (p. 41).
The sociotechnical systems approach provides a way of achieving the joint optimization of both the technical and social systems within an organization. Several writers have proposed and used this approach to design information systems with great success (e.g., Chisholm, 1988; Hogan, 1993; Sharma et al., 1991; Shani & Sena, 1994; Purser, 1991; Terlage, 1994). For example, Chisholm (1988) stated:
The advanced information technology requires new strategies...and different organizational and workplace designs that emphasize the human attributes of learning, questioning, and deciding to reach the technology's potential for contributing to organization effectiveness...The sociotechnical systems (STS) approach provides an effective way of working to improve total system performance through improved links between the human system and technology. (p. 45)
study, consulting with experts in information systems and public administration, factor analysis, and tests of internal consistency.
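The internal-consistency test mentioned here is typically computed as Cronbach's alpha, the ratio-based coefficient alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below uses simulated responses, not the study's instrument data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars / total_var)

# Three items driven by one underlying trait should be internally consistent.
rng = np.random.default_rng(2)
trait = rng.normal(size=300)
scale = np.column_stack([trait + rng.normal(scale=0.5, size=300) for _ in range(3)])
alpha = cronbach_alpha(scale)
```

Items that genuinely measure one construct push alpha toward 1, while unrelated items pull it toward 0, which is why the coefficient serves as a reliability check on each multi-item scale.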
A fourth contribution of this study is testing a model, instruments, and a research process that are based on prior research in the United States in a Middle Eastern country (Kuwait). This study reached findings similar to those found in the United States in terms of the relationships in the study model and the results of the instrument validation. Thus, the external validity of the model, concepts, and instruments was enhanced by this study.
Several avenues of future research are suggested by the findings of this study. First, while this study has provided much-needed empirical support for the three-variable model of information system success, broader empirical support for this model is still needed. Thus, future research should test the applicability of this model in different types of public organizations (e.g., non-profit organizations) and in other societies.
Second, as stated throughout this study, this study represents a first step in developing a comprehensive model for evaluating information systems in the public sector. Thus, a logical extension of this study is to add external actors to the three-variable model of evaluating information systems. The equations in Chapter Three could be used as the basic hypotheses for this future research.
Third, because this study employed quantitative methods and only questionnaires to collect data, future research should also employ qualitative methods. For example, actually observing users of information systems or interviewing them may give valuable insights regarding their satisfaction with these systems, rather than just asking them questions about their perceived satisfaction. Likewise, reviewing secondary data such as individual and overall organizational productivity reports could provide additional insights regarding the Individual Impact and Organizational Impact variables. In other words, the study would greatly benefit from some type of triangulation in the data collection method.
Fourth, the findings of this study did not show positive associations between System Usage and the other variables in the model. As stated in this study, a possible reason could be the measure used to test for System Usage, which was adopted from Igbaria et al. (1989). This measure relies most heavily on two dimensions: first, the number of organizational functions in which information systems are used; second, the number of software packages used in work-related responsibilities. During the pilot study and consultation process, several participants and Kuwaiti professors indicated that information systems had been introduced only recently in their organizations and that these systems were not widely used in all work-related tasks. This could explain the low usage of information systems and the lack of associations between this variable and the other variables in the study model, which in turn led to dropping System Usage from the study model. Thus, future research should use other measures of System Usage that do not have such reliance (e.g., Kim & Lee, 1986; Sherman, 1997). This might lead to including System Usage in the model.
BIBLIOGRAPHY
Ang, J. & Soh, P. H. (1997, October). User information satisfaction, job satisfaction and computer background: An exploratory study. Information & Management, 32(5), 255-266.
Ballantine, J., Bonner, M., Levy, M., Martin, A., Munro, L., & Powell, P. L. (1996). A 3-D model of information systems success: The search for the dependent variable. Information Resources Management Journal, 9(4), 5-14.
Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29(3), 232-238.
Blaylock, B. K. & Rees, L. P. (1984, Winter). Cognitive style and the usefulness of information. Decision Sciences, 15(1), 74-91.
Caudle, S., Gorr, W., & Newcomer, K. (1991). Key information systems issues for the public sector. MIS Quarterly, 15, 171-188.
Conklin, J. H., Gotterer, M. H., & Rickman, R. (1982, August). On-line terminal response time: The effects of background activity. Information & Management, 5(3), 12-20.
Edstrom, A. (1977, July). User influence and the success of MIS projects: A contingency approach. Human Relations, 30(7), 589-607.
Ein-Dor, P. & Segev, E. (1978, June). Organizational context and the success of management information systems. Management Science, 24(10), 1064-1077.
Ein-Dor, P., Segev, E., & Steinfeld, A. (1981, December). Use of management information systems: An empirical study. Proceedings of the Second International Conference on Information Systems, Cambridge, MA, 215-228.
Hair, J., Anderson, R. E., Tatham, R. L., & Black, W. (1992). Multivariate data analysis. Englewood Cliffs, NJ: Prentice-Hall.
Hogan, C. (1993). How to get more out of videoconference meetings: A sociotechnical approach. Training and Management Development Methods, 7(1), 5-21.
Iivari, J. & Koskela, E. (1987, September). The PIOCO model for information systems design. MIS Quarterly, 11(3), 401-419.
Ishman, M. (1998). Measuring information success at the individual level in cross-cultural environments. In E. J. Garrity & G. L. Sanders (Eds.), Information systems success measurement. Hershey, PA: Idea Group Publishing.
Ives, B., Hamilton, S., & Davis, G. B. (1980). A framework for research in computer-based management information systems. Management Science, 26(9), 910-934.
Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793.
Ives, B. & Olson, M. H. (1984, May). User involvement and MIS success: A review of research. Management Science, 30(5), 586-603.
Jones, J. W. & McLeod, R., Jr. (1986, Spring). The structure of executive information systems: An exploratory analysis. Decision Sciences, 17(2), 220-249.
Kim, Y. & Kim, Y. (1999, October/December). Critical issues in the network era. Information Resources Management Journal, 4 141, 14-23.
Kim, C., Suh, K., & Lee, J. (1998). Utilization and user satisfaction in end-user computing: A task contingent model. Information Resources Management Journal, 11(4), 11-24.
Lu, H.-P. & Wang, J.-Y. (1997, April). The relationship between management styles, user participation, and system success over MIS growth stages. Information & Management, 32(3), 203-213.
Marcolin, B. L., Munro, M. C., & Campbell, K. G. (1997, Summer). End user ability: Impact of job and individual differences. Journal of End User Computing, 9(3), 3-12.
Meador, C. L., Guyote, M. J., & Keen, P. G. W. (1984, June). Setting priorities for DSS development. MIS Quarterly, (2), 117-129.
Munro, B. H. (1997). Statistical methods for health care research (3rd ed.). New York, NY: Lippincott.
Palvia, P. C., Palvia, S. C., & Zigli, R. M. (1992). In M. Khosrowpour (Ed.), Global information technology management. Harrisburg, PA: Idea Group Publishing.
Retherford, R. D. & Choe, M. K. (1993). Causal analysis. New York, NY: John
Wiley & Sons.
Seddon, P. B. & Kiew, M. Y. (1994). A partial test and development of the DeLone and McLean model of IS success. Proceedings of the International Conference on Information Systems (ICIS 94), Vancouver, BC, Canada, 99-110.
Shani, A. B. & Sena, J. A. (1994, June). Information technology and the integration of change: Sociotechnical system approach. The Journal of Applied Behavioral Science, 30(2), 247-291.
Tait, P. & Vessey, I. (1988, March). The effect of user involvement on system success: A contingency approach. MIS Quarterly, 12(1), 91-108.
Torkzadeh, G. & Doll, W. J. (1999). The development of a tool for measuring the perceived impact of information technology on work. Omega, 27, 327-339.
Appendixes
Appendix A
English Version of the End Users' Questionnaire
Section I: Please circle the most appropriate answer that describes your perception of the System Quality.
Scale: 1 = Extremely X; 2 = Quite X; 3 = Slightly X; 4 = Neutral, does not apply; 5 = Slightly Y; 6 = Quite Y; 7 = Extremely Y
1. How do you evaluate the elapsed time between a user-initiated request for service or action and the reply to that request?
Fast: 1...2...3...4...5...6...7: Slow
Consistent: 1...2...3...4...5...6...7: Inconsistent
2. How do you evaluate the ease or difficulty of utilizing the capability of the computer system?
Simple: 1...2...3...4...5...6...7: Complex
Easy-to-use: 1...2...3...4...5...6...7: Hard-to-use
3. How do you evaluate the set of vocabulary, syntax, and grammatical rules used to interact with the computer system?
Simple: 1...2...3...4...5...6...7: Complex
Easy-to-use: 1...2...3...4...5...6...7: Hard-to-use
4. How do you evaluate the relative balance between the cost and the considered usefulness of the computer-based information products or services that are provided? The costs include any costs related to providing the resource, including money, time, manpower, and opportunity. The usefulness includes any benefits that the user believes to be derived from the support.
Positive: 1...2...3...4...5...6...7: Negative
Good: 1...2...3...4...5...6...7: Useless
5.
6. How do you evaluate the capacity of the information system to change or to adjust in response to new conditions, demands, or circumstances?
Flexible: 1...2...3...4...5...6...7: Rigid
High: 1...2...3...4...5...6...7: Low
7. How do you evaluate the ability of systems to communicate/transmit data between systems servicing different functional areas?
Sufficient: 1...2...3...4...5...6...7: Insufficient
Good: 1...2...3...4...5...6...7: Bad
Section II: Please circle the most appropriate answer that describes your perception of the Information Quality.
Scale: 1 = Extremely X; 2 = Quite X; 3 = Slightly X; 4 = Neutral, does not apply; 5 = Slightly Y; 6 = Quite Y; 7 = Extremely Y
1.
2. How do you evaluate the availability of the output information at a time suitable for its use?
Timely: 1...2...3...4...5...6...7: Untimely
Consistent: 1...2...3...4...5...6...7: Inconsistent
3. How do you evaluate the variability of the output information from that which it purports to measure?
Consistent: 1...2...3...4...5...6...7: Inconsistent
High: 1...2...3...4...5...6...7: Low
4.
5.
6.
7. How do you evaluate the material design of the layout and display of the output contents?
Good: 1...2...3...4...5...6...7: Bad
Readable: 1...2...3...4...5...6...7: Unreadable
8. How do you evaluate the amount of information conveyed to you from the computer-based systems?
Concise: 1...2...3...4...5...6...7: Redundant
Necessary: 1...2...3...4...5...6...7: Unnecessary
9. How do you evaluate the degree of congruence between what you want or require and what is provided by the information products and services?
Relevant: 1...2...3...4...5...6...7: Irrelevant
Good: 1...2...3...4...5...6...7: Bad
Section III: Please circle the most appropriate answer that describes your usage of the
information system.
1. On an average working day that you use a computer, how much time do you spend on the system?
(1) Almost never  (2) Less than ½ hour  (3) From ½ hour to 1 hour  (4) 1-2 hours  (5) 2-3 hours  (6) More than 3 hours
2.
3. With respect to the requirements of your current job, please indicate to what extent you use the computer to perform the following tasks (please circle one; 1 = Not at all ... 5 = To a great extent):
1. Historical reference
2. Looking for trends
3. Finding problems/alternatives
4. Planning
5. Budgeting
8. Making decisions
4. With respect to the requirements of your current job, please indicate the number of packages you use from the following (please check):
(1) Spreadsheets ( )  (2) Word processing ( )  (3) Data management packages ( )  (4) Modeling systems ( )  (5) Statistical systems ( )  (6) Graphical packages ( )  (7) Communication packages ( )  (8) Own programming ( )  (9) 4GL ( )  (10) Others ( )
Section IV: On the following, please circle the number which best reflects your overall satisfaction.
Scale: 1 = Extremely; 2 = Quite; 3 = Slightly; 4 = Neutral, does not apply; 5 = Slightly; 6 = Quite; 7 = Extremely
(1) How adequately do you feel that the system meets the information processing needs of your area of responsibility?
Adequate: 1...2...3...4...5...6...7: Inadequate
(2) How efficient is the system?
Efficient: 1...2...3...4...5...6...7: Inefficient
(3) How effective is the system?
Effective: 1...2...3...4...5...6...7: Ineffective
(4) Overall, are you satisfied with the system?
Dissatisfied: 1...2...3...4...5...6...7: Satisfied
Section V: Please indicate the extent to which information systems have impacted your job in the following:
Scale: 1 = Not at All; 2 = A Little; 3 = Moderately; 4 = Much; 5 = A Great Deal
Task productivity:
(1) Information system allows me to accomplish more work than would otherwise be possible.  1  2  3  4  5
(2) Information system increases my productivity.  1  2  3  4  5
(3) Information system application saves my time.  1  2  3  4  5
Task innovation:
(4) Information system helps me try out innovative ideas.  1  2  3  4  5
Customer satisfaction:
(5) Information system helps me meet customer needs.  1  2  3  4  5
(6) Information system improves customer satisfaction.  1  2  3  4  5
(7) Information system improves customer service.  1  2  3  4  5
Management control:
(8) Information system helps the management to control the work process.  1  2  3  4  5
(9) Information system improves management control.  1  2  3  4  5
(10) Information system helps management control performance.  1  2  3  4  5
End of Questionnaire
Appendix B
English Version of the Management Questionnaire
Section I: Please indicate the extent to which an information system has helped your institution in the following:
Scale: 1 = Not Much ... 7 = Extensively
End of Questionnaire
Appendix C
Letter of Approval from the Human Subjects Committee at Pennsylvania State
University
Penn State
The Pennsylvania State University
Office for Regulatory Compliance, Vice President for Research
University Park, PA

Date:
From: Director of Regulatory Affairs

The Behavioral and Social Sciences Committee of the Institutional Review Board has reviewed and approved your proposal for use of human subjects in your research. This approval has been granted for a one-year period.

Approval for use of human subjects in this research is given for a period covering one year from today. If your study extends beyond this approval period, you must contact this office to request an annual review of this research.

Subjects must receive a copy of any informed consent documentation that was submitted to the Compliance Office for review.

By accepting this decision you agree to notify the Compliance Office of (1) any additions or procedural changes that modify the subjects' risks in any way and (2) any unanticipated subject events that are encountered during the conduct of this research. Prior approval must be obtained for any planned changes to the approved protocol. Unanticipated subject events must be reported in a timely fashion.

On behalf of the committee and the University, I thank you for your efforts to conduct your research in compliance with the federal regulations that have been established for the protection of human subjects.

cc: K. English, R. Chisholm, S. Peterson, H. Sachs

An Equal Opportunity University
Appendix D
Letters of Approval from Participating Ministries
KUWAIT UNIVERSITY
4 m V>
r.j
:. ,.t < tt-.ui i . .
ac=d*r3tihth
sllUXlvtl
' V/*1, t-
ijru:
_ 2 ^ _ j UK2Wi K
1000/6/13 \^ ix
j >u ji
j l l S jI j j
<(( J uy a t^ J e ^ '
JJ* / Jujl
^ Jlj ,.ljj
2j_f> ^C-
Qjy
S Ij J < J L k -U li ^ U I o f y j J
^JuJI hlotJI
>!! i
i-U
<ij\}}\
w.U-11
pil
L S '. s p ^ i ^
ji),
Jy
>
> >
'jjy j i S j '
("*'< v tr
V > ^
. . . V \ V ''
c - jill H 05S ; u - j l *163
lJ .
^
[<> - '
:a h c
T '5 ^ .U t TT5M
T T 5 5 \ JlUjTt
a n j r raa\ \ V
5 \XOjiE
jUC
TTAtVVuJiU.T-
Tel. 251(11(11 - Operator 2523911 Ext. 3001 - Fix. 2528477 - P.(V Hot 5486 131155 Kuwait
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
iJU-IjJl Sjljj
MINISTRY OF INTERIOR
GmmwlA4mU*nttm DtpartoMt
U U il ijb ? l
D W ii
.T*** ^
RaCNa.
f j2 * ll
Irfjljll
11
jH
Uiti
I La_iL
^ I t UAI^lf | C t U iM J i j l l j | l^ iU
^1*
m J ^ U JI ^> m JI
* ;
^j^ll
:C<jU'
o v ^ N o
^ aU JI
MLa|Ja|l
y e ltil ^LrikHlfl
IV
C e^^b
jll* iT i % / \ / \ X
u*
t/J* jp b ^ b UU1flj ^UJLf J.A I *JI
,U i1 ^ L J I 4^l^2 ^ iJ I o U i^ J I U l i J - J i i j (JCSjU
1*1
. i-j-uai i-ij^u
t<
fLt
<-j IjVI J j ^ L I UUJI ijbVI
.i| . n . i i : r i r *)
O lU ^ iJ
iklU ^a
l ^ u t y lS fJ I
L L a i i ik U ljil
/u ^
Opr: 4144173 - 4M4613 - FU.4M2697 IAmAV:i r * U .lA U W Y .tA ttl \ r J1UJI
Reproduced with permission o f the copyright owner. Further reproduction prohibited without permission
SJL>UJI ijfjj
MINISTRY OFJNTERIOR
fJ3ai
c*' 'S'
U .p 4 J I I j l
mU # J m |I *!
.i ~ L * J I
:I
J-f-JI
i i J#-* jUffSall dl m |I
II
U S h a . g ^ ll i > !f ' t t t f i l t
U l l j i l ( U U m i j A j M j ) *jJI*
ilA f t > /y \T
j
c ^ L J i j i l j C e J j i v A l J + O -*
,jt- J I 13+*
fjl*
i* tP J U p i . . -. ..VI Ilk t i i j i
.U il
l ^ U I I i j b f i f L ^ J u .
cp.-.
wl
i , t j ,
y e ltil jjljj* lll ^t>5
< J* * k 2
'
^All
J i i y i j U L * lj ,0*11* J .1 A -J I ^
^L + JU JI
U li
'i i ''
J - J H j ^ S ijU l
. U ^ IU ll U O j+ U
MIMt5<U*Oll > 'j Jj*** ljl..Aij
IUUil JjW I
*)
l A t T I ^ V l p - f U . t A U W T . t A t t l i r DUJI
with permission o, the copyright owner. Former reproduction prohibited w ithout permission
MINISTRY OF INTERIOR
Gmmi Arfmliton tlMi D m
-u W -U l S jljj
I i-U )l Sjb>l
Data:
! j W TY
a ^ \ o
f> a i
tj
u iuji ijijVi fu j- j-
i y i
^1
j ^ uji
jh*_ji
>fiijti n ! jl
al
||j ^ l
j Ij VI
jjpW*
Uil^M ( C a
a ijlj>JL
^ ^ k JI
r^ e v
U Im
c w jU I
L jk a L
} <JU
^UmTaaa/^/\T
igjj*inll (^1 4Ijl ^ 1* yUJI li+f
J iljiij u u v i j (OftiL j* m ii
IIIM
>*J J j - * - W'Aitj
fU j - J - ^
o j L y i j j ^ u j i-u jiijL v i
h O
z z 'j-* >
. t i l U j . . , C U . j j i l v li J I > U _ J : d . U * ^ l l
aStlU^aJjJ - Al i
Ifl*#4 /
. i _ _ L J I / l_ i i
l A t m V ^ l i . t A U W T . I A t f W r illx JI
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
iJU-IjJl i j l j j
MINISTRY OF INTERIOR
Oiiit
BetNe.
T*** ^
TV
* N f \ U
f*jJ*1
j j j U 1 UJI SjIjVI f U
rJ erJ t
j - j - J^bliJI : -
^>*lll Mf
#>i^; B J#-*
| | ! <|l
Jl J -
Cj^
1 ^ Ifl
' <
U . U J I C ,U tS ~ il
1- - 1S
U iljU ( < i U
^ i J M
y ^ J i j l l u U -* 5 VI I i - C ijji
Ui~
'j ) * ^ u
y U f T .. . / V 'T
Cg lj* n II
l2 J I lifd
J i l j i l j l i L - V l j fU ll. J . ^ i - . n
~ .LSI ^ u j i -.!> ; ^ u i a U x j i u u
'S + J * I J J
J - J i 5 j f iS j W
. L *L ai U jjii
l l l l | | l . ' h
J j^
Ij l . i f i j
fU J - J - . / v
( ^ 7 3 7 1 ^ 0 U U I ijLVI
l* tM ! illlA ijll
y jJ J it lijJ I j y ijA iJ I
.Jk_UJI/Ui-A
f-t A* -*
.____________________________________ J
Opt.:4144172 4144(1)Fu:4l42697
l A m W . ^ l i . i A t l W T . l A l f W r 3LU
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
2y^Ol3Lel^
KmnttUntanttj
C*MAtt*MwldM
:**
in JweLwlwSL^
r
*
Cajjt* Juu
^Jail Ji* / JmII irfiil otLlaaKt gjj J* &IjU
^ ^
U^J<
J s v - f . j * - ^ - > J I U .W : ^ j b > \ p U l S j T - i - U I i j b V l , 1
li,
_ > tj jl) * JL Ja i
i - * i
.Uj-il
<<
tl* J l w
2j ^
j j ^ j U
^ jTj/U
l jyP
j-** li/*
j j Sj H
<siC
nu>t 13068
ju-ji tut
. rmn ui^i^pB
T* 2323911 P.O.Boa S4M SMI,Cod*No. 13066 Kunl
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
KUWAIT UMVEKSTTY
CdhyrflihtrHwSriif
M*b>
W jM
Fjfefl
O tB ttaf tfca DM
imW i n*
(>
*&J
Jjljj j
j>
((( Jwj i J f
m j "-* jyw
^Jtll Ji* / jjJl
otLlVl gj/
j^H (/y
- *#ui tjbyi |m *
j j y Laf*< I p j ll j ^ / l y
.Uj-lt
<i
iJSJl
Ua*
C>U110SSUi^JilA'\v^ .. T*TAIWw^(U.r"t^UiT*mnUir.T\-tM
Tt 3910101 O p m w 2323911 E it 3001 F o . 232*477 P.O. Box MM 13033 Kawtil
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
175
KUWAIT UNIVERSITY
College of Administrative Sciences
Office of the Dean

[Letter in Arabic on Kuwait University letterhead; the Arabic text is not legible in this reproduction.]
KUWAIT UNIVERSITY
College of Administrative Sciences
Office of the Dean

[Letter in Arabic on Kuwait University letterhead; the Arabic text is not legible in this reproduction.]
KUWAIT UNIVERSITY
College of Administrative Sciences

[Letter in Arabic on Kuwait University letterhead; the Arabic text is not legible in this reproduction.]
179
Appendix E
Signed Letters from the Translators
[Signed letters from the translators; the handwritten text and signatures are not legible in this reproduction.]
182
Appendix F
Arabic Version of the End Users' Questionnaire
[Arabic version of the end users' questionnaire. The Arabic text is not legible in this reproduction.]
193
Appendix G
[Arabic questionnaire text of Appendix G. The Arabic text is not legible in this reproduction.]
196
Appendix H
197
How do you know whether the information system in your organization is successful or not? What measures do you use to assess that success? And are you sure that the measures you use are the most appropriate ones?

For my dissertation research, I am investigating how to evaluate information systems in the public sector, and thus attempting to answer the above questions. Conducting the study in your organization has been approved by top management.

The enclosed survey is designed to gather information about the various measures that are used to evaluate the success of information systems. The collected information will play a major role in developing a comprehensive model for evaluating information systems in the public sector.

Consequently, your participation in this study is essential to its success in developing the comprehensive model. Participation in this study is voluntary. If you decide to participate, please carefully read the instructions in each section and answer all questions without discussing them with anyone. There are no right or wrong answers to these questions; usually, your first reaction to a question is a good indication of how you feel. Mark the response that best indicates your reaction, and do not spend too much time on any one item. After completing the survey, please return it to me. Completing the survey will take 15-25 minutes.

Your answers will be kept strictly confidential. No one other than me will be allowed access to them. Individual responses will be anonymous; the data will be aggregated and analyzed only on a group basis.

The study's findings will be provided to you upon request. If you have any questions or comments, please feel free to ask me. Thank you in advance for your participation in this study.

Sincerely,
Helaiel Almutairi
Doctoral student
E-mail: hmall6@psu.edu
198
Appendix I
199
[Arabic version of the cover letter. The Arabic text is not legible in this reproduction; the letter closes with the author's name and e-mail address, hmall6@psu.edu.]
VITA
AUTHOR:
Helaiel Almutairi
PROFESSIONAL EXPERIENCE:
Administrative researcher, Department of Nationality and Passports, Ministry of
Interior, Kuwait, September 1988 - July 1994