
International Research Journal of Finance and Economics

ISSN 1450-2887 Issue 36 (2010)


© EuroJournals Publishing, Inc. 2010
http://www.eurojournals.com/finance.htm

A Study of User Information Satisfaction on Financial Management Information System (‘Sistem Maklumat Pengurusan Kewangan’)

Hasri Mustafa @ Abdul Razak
Department of Accounting and Finance, UPM
E-mail: m.hasri@econ.upm.edu.my

Zulkarnain Muhamad Sori
Department of Accounting and Finance, UPM
E-mail: zms@econ.upm.edu.my

Ayoib Che Ahmad
School of Accountancy, UUM
E-mail: ayoib@uum.edu.my

Norlida Mahussin
Faculty of Science & Technology, USIM
E-mail: norlida@usim.edu.my

Abstract

Information technology support service is likely to be a fundamental requirement for the sustained existence of future organizations. Many corporate organizations are now moving towards establishing IT departments, especially in the area of accounting information systems. As a result, these organizations are in great need of accurate information on the current system and end-users’ needs in order to be more effective and efficient in their operations. The present paper examines the Sistem Maklumat Pengurusan Kewangan (SMPK), or Financial Management Information System, in one of the oldest universities in Malaysia and tries to explain User Information Satisfaction (UIS) in terms of both system needs and system performance. While the system needs-performance gap analysis highlights the importance of SMPK’s features from the end-users’ perspective, a Snake Diagram is used to visualize the ‘size’ of SMPK’s UIS. Results from the factor analysis are then used for further investigation in the regression analysis. The multivariate results showed that User-Self Confidence, System Output, User-Self Development, User Support, System Operation, User-Self Reliance and Problem Response Time are significantly associated with UIS. Hence, any improvements to SMPK’s features must be in line with these factors. This study is likely to benefit Information Technology Division staff by providing a future reference for better internal control of SMPK.

Keywords: Information Technology, Organization, System, Accounting Information System, University

Introduction
It would be a great mistake if accountants were to leave the information system to be handled by end-users without prior auditing support during the System Development Life Cycle (SDLC). In fact, information technology support service is likely to be a fundamental requirement for the sustained existence of future organizations. Many corporate organizations are now moving towards establishing IT departments, especially in the area of accounting information systems. As a result, these organizations are in great need of accurate information on the current system and end-users’ needs in order to be more effective and efficient in their operations.
The major purpose of this study is to examine the effectiveness of the Sistem Maklumat Pengurusan Kewangan (SMPK)1, or Financial Management Information System. The study focuses on system needs and system performance to reveal which SMPK features users need most and which need improvement. While the system needs-performance gap analysis highlights the importance of SMPK’s features from the end-users’ perspective, a Snake Diagram is used to visualize the ‘size’ of SMPK’s UIS. Results from the factor analysis are then used for further investigation in the regression analysis. This paper also provides survey results on the supporting, handling and documenting of SMPK by the Information Technology Division. The results of the study should serve as a good reference for information technology administrators and accountants after the SDLC process.

2.0. Motivation of the Study
With the advent of new technology, organizations are learning to continuously adapt and change to meet future challenges. The conventional approach of depending on input quantities will change rapidly to input qualities as the economy moves towards the k-economy. The contribution of input qualities has been shown to reduce cost in some organizations while increasing growth at the same time. This can be achieved through the efficient utilization of knowledge workers. Many end-users are now well prepared to contribute ideas to the development process (Shaw et al., 1996). The study of UIS from the end-user’s perspective is indeed vital for a company’s system development process and the system’s survival, as UIS studies help evaluators realize the social and economic benefits of investing in information technology (Mirani and King, 1994; Shaw et al., 1996; Essex and Magal, 1998; Whyte and Bytheway, 1996; and Doll and Torkzadeh, 1991).
According to Redman (1997), UIS can help an organization achieve a happier and more productive workforce, since it is used to measure end-users’ level of understanding of the needs of a business unit. Previous studies on UIS have also found the instrument applicable for analyzing accounting software (Seddon et al., 1992; Seddon and Yip, 1992; Yuthas and Eining, 1995) and network systems in several organizations or industries (Woo and Fock, 1999; Remenyi, 1996). Accountants, in fact, can use the UIS instrument to measure whether an accounting system’s features really match firm and end-user requirements. According to IFAC (IEG-11, 1995), accountants must be able to perform the roles of end-user, designer, manager and auditor in maintaining the system’s effectiveness and efficiency.

3.0. User Information Satisfaction
The growing role of end-users in performing Information System (IS) development tasks has encouraged many scholars to focus on UIS as an evaluation tool for measuring the effectiveness of IS (Yuthas and Eining, 1995; Doll and Torkzadeh, 1988; Essex and Magal, 1998). The measurement of UIS in the IS literature has undergone tremendous evolution since the 1970s. Research in UIS can be categorized into several groups, such as system characteristics (Ives et al., 1983; Doll and Torkzadeh, 1988; Baroudi and

1 The information about SMPK is based on interviews with Information Technology Division staff and examination of archival documents.

Orlikowski, 1988), system service quality (Parasuraman et al., 1985, 1988) or system expectation-performance analysis (Kim, 1990; Zeithaml et al., 1990).
In the accounting systems literature, instruments developed by Ives et al. (1983), Doll and Torkzadeh (1988) and Baroudi and Orlikowski (1988) have been used in many IS studies. Recent studies found Doll and Torkzadeh’s (1988) end-user computing satisfaction instrument to be the most stable and reliable for measuring UIS with general ledger systems (Seddon et al., 1992; Downing, 1997) and computer simulation applications in decision support systems (McHaney and Cronan, 1998).
In the marketing research literature, the development of the SERVQUAL instrument by Parasuraman et al. (1988) has encouraged IS researchers to use the instrument to test the quality of system services. Some studies (see, for example, Van Dyke et al., 1999; Pitt and Watson, 1995; Kettinger and Lee, 1994) indicated that the IS version of the SERVQUAL instrument produces better results in predicting the ‘level’ of UIS, as proposed by Kim (1990) and Zeithaml et al. (1990). This study combines the above three instruments to measure the effectiveness of the Financial Management Information System, better known as Sistem Maklumat Pengurusan Kewangan (SMPK). The scope of the study is divided into system needs and system performance, as proposed by Hall (2001).

4.0. Method of the Study

Sample and Instrumentation
A total of 74 SMPK users from 13 faculties were asked to complete a full set of questionnaires within a three-week period. 51 users completed and returned the questionnaires, a 68.92% response rate. Even though the questionnaire used in this study was replicated from Remenyi’s (1996) survey, the original instruments were developed by Doll and Torkzadeh (1988), Parasuraman et al. (1988), and Zeithaml et al. (1990). Before the questionnaires were given to the respondents, a brief explanation of five to ten minutes was given to each of them. Two sets of questions were used to analyze the effectiveness of SMPK. While the questions in the first part (Part A) were used to measure SMPK end-users’ needs, the questions in the second part (Part B) were used to evaluate SMPK’s current performance. Both parts of the questionnaire used a five-point Likert scale. For Part A, the scale uses the anchors Irrelevant, Not Important, Less Important, Important and Critical (scale of 1 - 5) to identify users’ needs. For Part B, the scale uses the anchors Very Poor, Poor, Fair, Good and Excellent (scale of 1 - 5) to identify users’ perception of SMPK’s performance. After completing all the questions in Parts A and B, the respondents were asked a single question in Part C to identify the overall service satisfaction score; the interval scale of Part B was used for this purpose. Additionally, another set of questions in Part D was designed to obtain demographic data.
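As an illustration of this scoring scheme, the sketch below computes per-attribute needs, performance and gap scores from a small hypothetical response matrix; the data are invented for demonstration, not taken from the survey:

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = attributes.
# Part A: importance of each attribute (1 = Irrelevant ... 5 = Critical).
# Part B: perceived performance (1 = Very Poor ... 5 = Excellent).
part_a = np.array([[5, 4, 3],
                   [4, 4, 2],
                   [5, 3, 3]])
part_b = np.array([[3, 4, 3],
                   [2, 4, 3],
                   [3, 3, 4]])

need_mean = part_a.mean(axis=0)         # per-attribute system-needs score
performance_mean = part_b.mean(axis=0)  # per-attribute system-performance score
gap = performance_mean - need_mean      # Part B - Part A, as in Table 1.0

print(need_mean, performance_mean, gap)
```

The per-attribute means and gaps computed this way are what Table 1.0 reports for the 25 SMPK attributes.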

5.0. Research Findings

System Needs-Performance Analysis
Table 1.0 shows Pearson correlation coefficients between system-needs scores and system-performance scores. Four main columns are provided. The first and second columns represent all of the questions in Part A and Part B respectively. The third column is derived by subtracting the system-needs scores from the system-performance scores (Part B - Part A). The last column is the correlation between the overall satisfaction scores (Part C) and the gap scores in the third column. Basic analysis of SMPK in Table 1.0 shows, amongst others, that:
i) Low percentage down time ranked second in system needs but lowest in system performance.
ii) User’s understanding ranked high for both system needs and system performance.
iii) Time-to-time sessions, useful output format, flexibility in producing professional reports and accessibility of external databases received the lowest attention in terms of perceived users’ needs.
iv) Twelve attributes received gap values below –0.50: low percentage down time, user training, system response time, timely and up-to-date information, problem response time, system documentation, security and privacy, computing facilities, responsiveness to users’ needs, accuracy of information, users’ participation in planning and time-to-time conferences.
v) User’s learning curve and personal control showed the smallest gap values.

Gap Analysis
The extent to which SMPK meets the level of UIS is explained by a positive, negative or zero gap value. According to Remenyi (1996):
i) Attributes which display a gap greater than zero (i.e. a positive gap) indicate that the system is overperforming.
ii) Attributes which display a gap lower than zero (i.e. a negative gap) indicate that the system is underperforming.
iii) Where the gap is exactly zero (i.e. no gap), there is an exact match between expectations and performance.
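Remenyi’s three cases can be expressed as a small helper; the sample value is the Low Percentage Down Time gap from Table 1.0:

```python
def interpret_gap(gap: float) -> str:
    """Classify a needs-performance gap following Remenyi (1996)."""
    if gap > 0:
        return "overperforming"   # performance exceeds expectation
    if gap < 0:
        return "underperforming"  # performance falls short of expectation
    return "exact match"          # expectation equals performance

# Gap score (Part B mean - Part A mean) for Low Percentage Down Time:
print(interpret_gap(-1.3529))  # -> "underperforming"
```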

Table 1.0: Univariate Analysis of SMPK

No. Attribute | System Needs (Part A): Rank, Mean, Std Dev | System Performance (Part B): Rank, Mean, Std Dev | Gap (B - A): Mean, Std Dev | Gap Correlation with Overall Satisfaction (Part C)1
17 User's Understanding | 1, 4.4118, 0.6059 | 2, 3.9412, 0.7593 | -0.4706, 0.9665 | 0.9900
5 Low Percentage Down Time | 2, 4.3725, 0.7736 | 25, 3.0196, 0.9053 | -1.3529, 1.1802 | 0.2040
13 Problem Response Time | 3, 4.3725, 0.8237 | 12, 3.6078, 0.8736 | -0.7647, 0.9075 | 0.1710
3 Accuracy of Information | 4, 4.3529, 0.5941 | 8, 3.7647, 0.8146 | -0.5882, 0.9418 | 0.2430
25 Timely and Up-to-Date Information | 5, 4.2941, 0.8785 | 16, 3.4314, 0.9001 | -0.8627, 1.0202 | 0.2700
2 Ease of Use | 6, 4.2941, 0.5018 | 3, 3.9216, 0.5947 | -0.3725, 0.8709 | 0.2290
16 Positive Attitude from System Staffs | 7, 4.2941, 0.6097 | 5, 3.8824, 0.7911 | -0.4117, 0.8984 | 0.4240**
10 Security & Privacy | 8, 4.2745, 0.7233 | 11, 3.6275, 0.6917 | -0.6470, 0.9965 | 0.1840
11 System Response Time | 9, 4.2353, 0.7896 | 20, 3.3137, 0.8364 | -0.9216, 1.2140 | 0.3480*
12 User Training | 10, 4.2157, 0.5767 | 22, 3.2745, 0.8962 | -0.9412, 1.1029 | 0.0280
19 Personal Productivity | 11, 4.1961, 0.6934 | 1, 3.9608, 0.8237 | -0.2353, 0.8852 | -0.1760
14 User Participation in Planning | 12, 4.1373, 0.6639 | 13, 3.5882, 1.2029 | -0.5491, 1.1192 | 0.3150*
1 Computing Facilities | 13, 4.1373, 0.8490 | 15, 3.5098, 0.8803 | -0.6275, 1.3109 | 0.3100*
7 User Confidence | 14, 4.0980, 0.7670 | 6, 3.8627, 0.8251 | -0.2353, 0.8623 | -0.0930
22 System Documentation | 15, 4.0588, 0.7593 | 19, 3.3725, 0.8237 | -0.6863, 0.8600 | 0.2280
20 User Learning | 16, 4.0588, 0.7046 | 4, 3.9216, 0.7961 | -0.1372, 0.9385 | -0.0660
9 Responsiveness to Users' Needs | 17, 4.0196, 1.0861 | 18, 3.3922, 0.7233 | -0.6274, 1.1128 | 0.2880*
8 Personal Control | 18, 4.0196, 0.7068 | 7, 3.8431, 0.5787 | -0.1765, 0.7670 | 0.0600
6 Technical Competence from System Professionals | 19, 4.0196, 0.5828 | 10, 3.6471, 0.5224 | -0.3725, 0.7200 | 0.0400
18 Cost Effectiveness | 20, 3.9804, 0.6161 | 9, 3.7451, 0.8208 | -0.2353, 0.9505 | 0.1510
23 Database Development | 21, 3.9020, 0.9435 | 17, 3.4118, 0.7791 | -0.4902, 1.0074 | 0.1690
24 User Time-to-Time Conference | 22, 3.7843, 0.9862 | 21, 3.2745, 0.9608 | -0.5098, 1.1726 | 0.2450
21 Useful Output Format | 23, 3.7843, 0.9014 | 14, 3.5686, 0.8545 | -0.2157, 1.1369 | 0.0130
15 Flexibility in Producing Professional Reports | 24, 3.4706, 1.2861 | 24, 3.1569, 1.2550 | -0.3137, 1.4070 | 0.0050
4 Accessibility of External Database (Import & Export File) | 25, 3.4510, 1.1192 | 23, 3.1569, 1.1022 | -0.2941, 1.1540 | 0.1990
Note:
1 Pearson Correlation Coefficients
* Correlation is significant at the 0.05 level (2-tailed)
** Correlation is significant at the 0.01 level (2-tailed)

Figure 1.0 exhibits a graphical view of the gap analysis. The X-axis of the Snake Diagram in Figure 1.0 represents all 25 attribute numbers, while the Y-axis shows the mean scores from both system needs (Part A) and system performance (Part B) as well as the gap analysis (Part B - Part A).
The mean and gap scores are taken directly from the data in Table 1.0. As exhibited, even though user training (attribute number 12) received high expectations from end-users, the system was perceived to provide inadequate training. With a gap score of –0.9412, it can be inferred that SMPK’s end-users have received insufficient training from the management. In addition, Low Percentage Down Time (attribute number 5), System Response Time (attribute number 11) and Problem Response Time (attribute number 13) are also deficient, with gap scores below –0.7500. These problems should receive rigorous attention from the responsible staff of the Information Technology Division because they affect the issuance of timely and up-to-date information (attribute number 25).

Figure 1.0: Snake Diagram on SMPK’s UIS

[Line chart; X-axis: Attribute Number/Feature (1-25); Y-axis: Gap/Score (-2.0000 to 5.0000); series: System Needs, System Performance, System Analysis.]
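A Snake Diagram of this kind can be sketched with matplotlib; the three illustrative attributes below use the mean scores from Table 1.0 for attributes 5, 12 and 17 only, not the full data set:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Illustrative subset of Table 1.0 mean scores (attributes 5, 12, 17).
attributes = [5, 12, 17]
needs = [4.3725, 4.2157, 4.4118]        # Part A means
performance = [3.0196, 3.2745, 3.9412]  # Part B means
gap = [p - n for p, n in zip(performance, needs)]  # system analysis (B - A)

fig, ax = plt.subplots()
for label, series in [("System Needs", needs),
                      ("System Performance", performance),
                      ("System Analysis", gap)]:
    ax.plot(attributes, series, marker="o", label=label)
ax.axhline(0.0, linewidth=0.5)  # zero-gap reference line
ax.set_xlabel("Attribute Number/Feature")
ax.set_ylabel("Gap/Score")
ax.legend()
fig.savefig("snake_diagram.png")
```

Plotting all 25 attributes from Table 1.0 would reproduce the full diagram.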

Factor Analysis
Table 2.0 below provides the factor analysis of the system-needs scores, and Table 3.0 that of the system-performance scores. Eight factors in system needs and six factors in system performance were retained on the basis of their Eigenvalues2, with Kaiser-Meyer-Olkin (KMO)3 measures of sampling adequacy of 0.643 and 0.649 respectively. The total variance explained by all selected factors is 77.491% in system needs and 73.548% in system performance. Based on the factor analysis, eleven factors or variables from the two tables were selected and included in the regression model for further statistical analysis to examine the determinants of UIS.

2 Eigenvalues determine how many factors should be retained in the analysis. According to Kim and Mueller (1978), factors that have low Eigenvalues contribute little to the explanation of variance in the model and may be ignored as redundant.
3 Small values of KMO indicate that factor analysis is not a good choice: they mean that the correlations between pairs of variables cannot be explained by the other variables. As a rule of thumb, a KMO below 0.50 is unacceptable (Kim and Mueller, 1978).
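The two retention criteria described in the footnotes can be sketched as follows. The correlation matrix is hypothetical, and the KMO computation (squared correlations against squared partial correlations) follows the standard definition rather than anything specific to this study:

```python
import numpy as np

def kmo(corr: np.ndarray) -> float:
    """Kaiser-Meyer-Olkin measure of sampling adequacy from a correlation matrix."""
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                        # partial correlations
    off = ~np.eye(corr.shape[0], dtype=bool)  # off-diagonal mask
    r2 = (corr[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

# Hypothetical 3-variable correlation matrix for illustration.
corr = np.array([[1.0, 0.6, 0.5],
                 [0.6, 1.0, 0.4],
                 [0.5, 0.4, 1.0]])

eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
retained = (eigenvalues > 1.0).sum()          # Kaiser criterion: keep eigenvalues > 1
print(retained, round(kmo(corr), 3))
```

With this matrix only one eigenvalue exceeds 1, and the KMO comes out in the mid-0.6 range, comparable to the 0.643 and 0.649 reported in Tables 2.0 and 3.0.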
Table 2.0: Factor Analysis of System Needs Scores

Attribute (% of variance on each factor line)    Factor Loading    Gap Correlation With Satisfaction1
Factor 1: User-Self Development 14.812%
A 06 Technical Competence From System Staffs 0.526 0.0400
A 12 User Training 0.657 0.0280
A 17 User Understanding 0.684 0.9900
A 18 Cost Effectiveness 0.688 0.1510
A 19 Personal Productivity 0.864 -0.1760
A 20 User Learning 0.778 -0.0660
Factor 2: User Support 13.348%
A 07 User Confidence 0.607 -0.0930
A 22 System Documentation 0.868 0.2280
A 23 Database Development 0.719 0.1690
A 24 User Time-to-Time Conference 0.694 0.2450
A 25 Timely & up-to-date Information 0.856 0.2700
Factor 3: User-Joint Development # 11.404%
A 09 Responsiveness to User Needs 0.796 0.2880*
A 14 User Participation in Planning 0.534 0.3150*
A 15 Flexibility in Producing Professional Reports 0.724 0.0050
A 16 Positive Attitude From System Staffs 0.687 0.4240**
A 24 User Time-to-Time Conference 0.550 0.2450
Factor 4: Quality of Services # 9.859%
A 02 Ease of Use 0.786 0.2290
A 03 Accuracy of Information 0.560 0.2430
A 05 Low Percentage Down Time 0.712 0.2040
A 11 System Response Time 0.608 0.3480*
A 13 Problem Response Time 0.502 0.1710
Factor 5: Data Maintenance # 7.760%
A 03 Accuracy of Information 0.676 0.2430
A 10 Security and Privacy 0.829 0.1840
Factor 6: Analysis-Decision Facilities # 7.737%
A 01 Computing Facilities 0.714 0.3100*
A 04 Accessibility of External Database (Import/Export) 0.790 0.1990
A 13 Problem Response Time 0.508 0.1710
Factor 7: User-Self Confidence # 6.747%
A 07 User Confidence 0.601 -0.0930
A 08 Personal Control 0.871 0.0600
Factor 8: System Output # 5.824%
A 21 Useful Output Format 0.847 0.0130
KMO = 0.643
Note: # denotes variables selected for the Regression Analysis
1 Pearson Correlation Coefficients
* Correlation is significant at the 0.05 level (2-tailed)
** Correlation is significant at the 0.01 level (2-tailed)
Table 3.0: Factor Analysis of System Performance Scores

Attribute (% of variance on each factor line)    Factor Loading    Gap Correlation With Satisfaction1
Factor 1: User-Self Development # 20.520%
B 14 Users Participation in Planning 0.512 0.3150*
B 16 Positive Attitude from System Staffs 0.832 0.4240**
B 17 User Understanding 0.884 0.9900
B 18 Cost Effectiveness 0.836 0.1510
B 19 Personal Productivity 0.783 -0.1760
B 20 User Learning 0.856 -0.0660
B 21 Useful Output Format 0.753 0.0130
Factor 2: User Support # 18.039%
A 04 Accessibility of External Database (Import/Export) 0.582 0.1990
A 07 User Confidence 0.714 -0.0930
A 12 User Training 0.549 0.0280
A 22 System Documentation 0.861 0.2280
A 23 Database Development 0.860 0.1690
A 24 User Time-to-Time Conference 0.635 0.2450
A 25 Timely & up-to-date Information 0.805 0.2700
Factor 3: System Operation # 12.610%
A 05 Low Percentage Down Time 0.744 0.2040
A 09 Responsiveness to User Needs 0.625 0.2880*
A 10 Security and Privacy 0.775 0.1840
A 11 System Response Time 0.743 0.3480*
Factor 4: User-Self Reliance # 8.976%
A 02 Ease of Use 0.772 0.2290
A 08 Personal Control 0.686 0.0600
Factor 5: User-Joint Development 7.502%
A 14 User Participation in Planning 0.699 0.3150*
A 15 Flexibility in Producing Professional Reports 0.677 0.0050
Factor 6: Problem Response Time # 5.900%
A 13 Problem Response Time 0.621 0.1710
KMO = 0.649
Note: # denotes variables selected for the Regression Analysis
1 Pearson Correlation Coefficients
* Correlation is significant at the 0.05 level (2-tailed)
** Correlation is significant at the 0.01 level (2-tailed)

Regression Model - Theoretical Development and Methodology
Many scholars (Haksever et al., 1997; Downing, 1997; Stratigos, 1999) support the view that the longer users are exposed to a system, the more satisfied they are, since over time end-users become familiar with the system’s features. However, a more plausible theory proposed by Lawrence and Low (1993) holds that UIS may actually diminish over a certain period of time as IS quality comes to dominate the user’s requirements. When the system specifications need to be changed in order to cope with numerous demands, the quality of the IS may no longer satisfy the user’s requirements. This is consistent with the notion of ‘IS obsolescence’, which results from improvements in technology, changes in organizational needs and user requirements, or an expansion of business operations through merger, acquisition and diversification (Summers, 1989). Four (4) independent variables, namely years of working in the university (YEARUNIV), years of experience in using a personal computer (YEARPC), years of experience in using a networked personal computer (YEARNPC) and years of experience in using SMPK (YEARSMPK), were used to determine how far the results support Lawrence and Low’s (1993) argument.
Indeed, UIS itself emerges from a process of user confirmation that is associated with uncertainty and ambiguity (Hartzel and Flor, 1995). Users who are unsure about the level of system needs at their level of understanding are expected to face more uncertainty and ambiguity. Other things being equal, the more educated the person, the better his/her understanding (Robbins, 1998). It can be argued that the lower the level of understanding of the IS, the less confident the user is and, therefore, the less satisfied in future IS usage. Education background (EDUCATION) for each user of SMPK is taken as another independent variable to see whether academic qualification has an effect on the UIS level. The value of one (1) is assigned to end-users who have an education level higher than STPM, and zero (0) otherwise.
Interestingly, Robbins (1998) claims that there is no significant difference in job performance between men and women. In the IS literature, studies by McGill et al. (1998) and Downing (1997) revealed that gender is not a significant factor influencing user satisfaction level. However, gender (GENDER) is also included in the regression model to test for its effect (if any). The variable is coded one (1) if the user is male and zero (0) if female.
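The two dummy codings can be sketched as below; the respondent records and the set of qualifications counted as ‘higher than STPM’ are assumptions for illustration:

```python
# Hypothetical respondents; the coding rules follow the text above:
# GENDER = 1 for male, 0 for female;
# EDUCATION = 1 if academic qualification is higher than STPM, 0 otherwise.
respondents = [
    {"gender": "male",   "qualification": "Degree"},
    {"gender": "female", "qualification": "STPM"},
    {"gender": "female", "qualification": "Masters"},
]

ABOVE_STPM = {"Diploma", "Degree", "Masters", "PhD"}  # assumed qualification labels

GENDER = [1 if r["gender"] == "male" else 0 for r in respondents]
EDUCATION = [1 if r["qualification"] in ABOVE_STPM else 0 for r in respondents]

print(GENDER, EDUCATION)  # [1, 0, 0] [1, 0, 1]
```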
Table 4.0 below provides the description of each variable and its expected direction in the regression estimations.

Table 4.0: Description of All Variables

Variable      Variable Measurement                                              Predicted Sign
Dependent
UIS Score     Index of User Information Satisfaction based on Part C            (n.a.)
              of the questionnaire.
Independent
Non-Gap Variables:
YEARUNIV      Years of working in the university                                (-)
YEARPC        Years of experience in using a personal computer                  (-)
YEARNPC       Years of experience in using a networked personal computer        (-)
YEARSMPK      Years of experience in using SMPK                                 None
GENDER        Dummy variable coded 1 for male end-users, 0 otherwise.           (+)
EDUCATION     Dummy variable coded 1 for end-users with academic                (+)
              qualification higher than STPM, 0 otherwise.
Gap Variables:
Factor 3.0    User-Joint Development*                                           (+)
Factor 4.0    Quality of Services*                                              (+)
Factor 5.0    Data Maintenance*                                                 (+)
Factor 6.0    Analysis-Decision Facilities*                                     (+)
Factor 7.0    User-Self Confidence*                                             (+)
Factor 8.0    System Output*                                                    (+)
Factor 1.0    User-Self Development**                                           (+)
Factor 2.0    User Support**                                                    (+)
Factor 3.0    System Operation**                                                (+)
Factor 4.0    User-Self Reliance**                                              (+)
Factor 6.0    Problem Response Time**                                           (+)
Note: * Factors selected from Table 2.0
** Factors selected from Table 3.0
(Variable selection depends on the % of variance described earlier)

Regression Model
For the regression analysis, the non-Gap and Gap variables are estimated one at a time due to data constraints. Note that the Gap variables were determined from the results of the factor analysis explained earlier (see Table 2.0 and Table 3.0). The description and predicted sign of each demographic/non-gap variable and gap variable in Table 4.0 translate into the following regression models:
UIS(1) = α0 + b1G1 + b2G2 + b3G3 + b4G4 + b5G5 + b6G6 + b7G7 + b8G8 + b9G9 + b10G10 + b11G11 + ε
Where;
G1 = User-Joint Development
G2 = Quality of Services
G3 = Data Maintenance
G4 = Analysis-Decision Facilities
G5 = User-Self Confidence
G6 = System Output
G7 = User-Self Development
G8 = User Support
G9 = System Operation
G10 = User-Self Reliance
G11 = Problem Response Time
UIS(2) = α0 + b1NG1 + b2NG2 + b3NG3 + b4NG4 + b5NG5 + b6NG6 + ε
Where;
NG1= YEARUNIV
NG2 = YEARPC
NG3 = YEARNPC
NG4 = YEARSMPK
NG5 = GENDER
NG6 = EDUCATION
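A minimal sketch of estimating UIS(1) by ordinary least squares follows, using synthetic data in place of the 51 responses; the factor scores and coefficients are randomly generated for illustration, not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the survey data: 11 gap-variable factor
# scores (G1..G11) and an overall-satisfaction score (Part C).
n, k = 51, 11
G = rng.normal(size=(n, k))
uis = 2.0 + (G @ rng.normal(size=k)) * 0.5 + rng.normal(scale=0.3, size=n)

# UIS(1) = a0 + b1*G1 + ... + b11*G11 + e, estimated by least squares.
X = np.column_stack([np.ones(n), G])          # prepend intercept column
coef, *_ = np.linalg.lstsq(X, uis, rcond=None)

fitted = X @ coef
ss_res = ((uis - fitted) ** 2).sum()
ss_tot = ((uis - uis.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot               # reported as 0.534 in Table 5.0

print(coef[0], r_squared)
```

The same template applies to UIS(2) with the six non-gap variables in place of G.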

Regression Results and Discussions

The regression results in Table 5.0 show that 53.4% of the variance in User Information Satisfaction can be explained by the gap variables. The results also show that the UIS(1) model above is statistically significant (F = 3.667).

Table 5.0: Result of Full Regression Analysis

                               Correlation with Satisfaction    Regression Coefficient    Significance
Constant 0.962 0.186
Gap Variables:
User-Joint Development 0.085 -1.617 0.101
Quality of Services 0.075 - 0.780 0.674
Data Maintenance 0.116 3.874 0.232
Analysis-Decision Facilities 0.130 - 0.0284 0.987
User-Self Confidence 0.329* 4.716 0.146
System Output 0.360* 4.293 0.282
User-Self Development 0.380** 1.272 0.057
User Support 0.433** 0.389 0.729
System Operation 0.572** 4.620 0.010
User Self-Reliance 0.441** - 4.819 0.299
Problem Response Time 0.350* 7.080 0.089
* Correlation is significant at the 0.05 level (2-tailed)
** Correlation is significant at the 0.01 level (2-tailed)
F = 3.667*
R-squared = 0.534

Table 5.0 above indicates that:

i) User-Self Confidence, System Output, User-Self Development, User Support, System Operation, User-Self Reliance and Problem Response Time are positively and significantly correlated with UIS.
ii) User-Self Development, System Operation and Problem Response Time are significantly associated with total UIS. Any improvements to SMPK’s features must be in line with these three factors. The regression coefficients also indicate that to increase the level of UIS, these factors need to be taken into account.
Table 6.0 shows the correlations between UIS and the non-Gap variables. As can be seen below, even though the non-gap variables YEARUNIV, YEARPC, YEARNPC and YEARSMPK were found to be highly interrelated among themselves, none of them was significantly correlated with UIS. Furthermore, the regression results did not show any significant association between those variables and the dependent variable (i.e. UIS) and are therefore not presented.

Table 6.0: Non-Gap Variables Correlation With Satisfaction

UIS  YEARUNIV  YEARPC  YEARNPC  YEARSMPK  GENDER  EDUCATION


UIS 1.00
YEARUNIV 0.184 1.00
YEARPC -0.008 0.542** 1.00
YEARNPC -0.029 0.449** 0.811** 1.00
YEARSMPK -0.007 0.431** 0.778** 0.966** 1.00
GENDER -0.196 0.323* 0.258 0.127 0.062 1.00
EDUCATION -0.203 -0.409** -0.168 -0.165 -0.222 0.218 1.00
** Correlation is significant at the 0.01 level (2-tailed)
* Correlation is significant at the 0.05 level (2-tailed)

Overall, the results support the findings of Remenyi (1996), Downing (1997) and McGill et al. (1998) on the non-significance of demographic factors (job status, tenure, experience and education level) in influencing the level of UIS. These findings are basically consistent with many IS researchers’ conclusions regarding the ‘sensitivity and subjectivity’ in the measurement of UIS (Galletta and Lederer, 1989; Hawk and Raju, 1991; Essex and Magal, 1998; Etezadi-Amoli and Farhoomand, 1991; Whyte and Bytheway, 1996).

6.0. Limitations, Future Research and Conclusion
Further studies are encouraged to look at other factors that may be indirectly associated with end-users. Factors like changes in users’ needs, job rotation and cultural implications may be important in determining the level of UIS. Further research should also determine possible internal and external factors in order to identify the ‘correct’ size of UIS. Another limitation arises from the fact that SMPK is the only system that university staff rely on for routine transactions; some staff might have been too apprehensive to tick the ‘true’ answer. In addition, the definition of UIS itself may not be identical for every end-user. Hence, it might be difficult to measure UIS consistently, since it could reflect different needs for different end-users.
Research in UIS can also be expanded to different industries. If possible, research on UIS should also be applied to evaluate the quality of services in service sectors such as hospitals, security firms and transportation companies, or in the government sector. Hopefully, future research will be able to educate accountants and IS staff to be more responsible in providing better information to existing and potential end-users.
Studies of UIS should not merely focus on the prediction of behavior (satisfaction) by attitude (usage). According to Doll and Torkzadeh (1991), the measurement of UIS helps managers identify the social and economic benefits of investment in information technology. In today’s environment, the latest advances in IS can quickly diminish current end-users’ preferences. End-users are becoming more educated and more exposed to many alternative, similar systems. As business activities change from business-to-customer to business-to-business, companies are no longer selling hardware products; their revenues are likely to depend on how well they are able to ‘sell’ information. The study of UIS provides valuable information for management strategy.

References
[1] Baroudi, J. J. and Orlikowski, W. J. (1988), A Short Measure of User Information Satisfaction, Journal of Management Information Systems, Vol. 4 (4), pp. 44-59
[2] DeLone, W. H. and McLean, E. R. (1992), Information Systems Success: The Quest for the Dependent Variable, Information Systems Research, Vol. 3 (1), pp. 60-95
[3] Doll, W. J. and Torkzadeh, G. (1988), The Measurement of End-User Computing Satisfaction, MIS Quarterly, Vol. 12 (2), pp. 259-274
[4] Doll, W. J. and Torkzadeh, G. (1991), The Measurement of End-User Computing Satisfaction: Theoretical and Methodological Issues, MIS Quarterly, Vol. 15 (1), pp. 5-10
[5] Downing, C. E. (1997), An Empirical Examination of User Satisfaction with an Information System Implementation, Operations and Strategic Management Department, Wallace E. Carroll School of Management, Boston College, www.hsb.baylor.edu/ramsower/ais.ac97/papers/downing.htm
[6] Essex, P. A. and Magal, S. R. (1998), Determinants of Information Center Success, Journal of Management Information Systems, Vol. 15 (2), pp. 95-117
[7] Etezadi-Amoli, J. and Farhoomand, A. F. (1991), On End-User Computing Satisfaction, MIS Quarterly, Vol. 15 (1), pp. 1-4
[8] Galletta, D. F. and Lederer, A. L. (1989), Some Cautions on the Measurement of User Information Satisfaction, Decision Sciences, Vol. 20 (3), pp. 419-438
[9] Hall, J. A. (2001), Accounting Information Systems: The System Development Process, 3rd edition, Thomson Learning/South-Western, US, pp. 642-678
[10] Haksever, C., Cook, R. and Chaganti, R. (1997), Service Quality for Small Firms: Can the Gaps Model Help?, www.sbaer.uca.edu/sbaer/proceeding/97proceedings.html
[11] Hartzel, K. S. and Flor, P. R. (1995), Expectation Formation in the Information System Development Process, The Joseph M. Katz Graduate School of Business, University of Pittsburgh, PA 15260, http://hsb.baylor.edu/ramsower/acis/paper/hartzel.htm
[12] Hawk, S. R. and Raju, N. S. (1991), Test-Retest Reliability of User Information Satisfaction: A Comment, Decision Sciences, Vol. 22 (5), pp. 1165-1170
[13] IFAC Education Committee (1995), New IFAC Project – Super IT Programs in Accounting Education: Information Technology in the Accounting Curriculum (IEG-11), www.ifac.org
[14] Ives, B., Olson, M. H. and Baroudi, J. J. (1983), The Measurement of User Information Satisfaction, Communications of the ACM, Vol. 26 (10), pp. 785-793
[15] Kettinger, W. J. and Lee, C. C. (1994), Perceived Service Quality and User Satisfaction with the Information Services Function, Decision Sciences, Vol. 25 (5/6), pp. 737-766
[16] Kim, J. and Mueller, C. W. (1978), Introduction to Factor Analysis: What It Is and How to Do It, Sage University Paper, Sage, London
[17] Kim, K. K. (1990), User Information Satisfaction: Towards Conceptual Clarity, Proceedings of ICIS, Copenhagen
[18] Lawrence, M. and Low, G. (1993), Exploring Individual User Satisfaction Within User-Led Development, MIS Quarterly, Vol. 17 (2), pp. 195-208
[19] Magal, S. R. (1991), A Model for Evaluating Information Center Success, Journal of Management Information Systems, Vol. 8 (1), pp. 91-106
[20] McGill, T. J., Hobbs, V. J., Chan, R. and Khoo, D. (1998), User Satisfaction as a Measure of Success in End User Application Development: An Empirical Investigation, Research Working Paper IT/98/02, Murdoch University
[21] McHaney, R. and Cronan, T. P. (1998), Computer Simulation Success: On the Use of the End-User Computing Satisfaction Instrument: A Comment, Decision Sciences, Vol. 29 (2), pp. 525-536
[22] Mirani, R. and King, W. R. (1994), Impacts of End-User and Information Center Characteristics on End-User Computing Support, Journal of Management Information Systems, Vol. 11 (1), pp. 141-166
[23] Mirani, R. and King, W. R. (1994), The Development of a Measure for End-User Computing Support, Decision Sciences, Vol. 25 (4), pp. 481-498
[24] O'Brien, J. A. (1999), Management Information Systems: Managing Information Technology in the Internetworked Enterprise, 4th edition, Irwin McGraw-Hill, USA
[25] Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1985), A Conceptual Model of Service Quality and Its Implications for Future Research, Journal of Marketing, Vol. 49 (4), pp. 41-50
[26] Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1988), SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality, Journal of Retailing, Vol. 64 (1), pp. 12-40
[27] Pitt, L. F. and Watson, R. T. (1995), Service Quality: A Measure of Information Systems Effectiveness, MIS Quarterly, Vol. 19 (2), pp. 173-187
[28] Redman, W. (1997), Keeping Those Users Satisfied, Communications Week, Issue 669, pp. 63-64
[29] Remenyi, D. (1996), A Holistic Approach to IT Function Evaluation, in Willcocks, L. (ed.), Investing in Information Systems: Evaluation and Management, Chapman & Hall, London
[30] Robbins, S. P. (1998), Organizational Behavior: Concepts, Controversies, Applications, Prentice Hall, New Jersey
[31] Seddon, P., Wong, M. and Yip, S. K. (1992), Computer-Based General Ledger Systems: An Exploratory Study, Journal of Information Systems, Vol. 6 (1), pp. 93-110
[32] Seddon, P. and Yip, S. K. (1992), An Empirical Evaluation of User Information Satisfaction (UIS) Measures for Use with General Ledger Accounting Systems, Journal of Information Systems, Vol. 6 (1), pp. 75-92
[33] Shaw, N. C., Ang, J. S. K. and Patridge, J. E. L. (1996), Technological Frames and End-User Computing, National University of Singapore, http://hsb.baylor.edu/ramsover/ais.ac.96/papers/shaw.htm
[34] Spreng, R. A. and Mackoy, R. D. (1996), An Empirical Examination of a Model of Perceived Service Quality and Satisfaction, Journal of Retailing, Vol. 72 (2), pp. 201-214
[35] Stratigos, A. (1999), Measuring End-User Loyalty Matters, Online, Vol. 23 (6), pp. 74-78
[36] Van Dyke, T. P., Prybutok, V. R. and Kappelman, L. A. (1999), Cautions on the Use of the SERVQUAL Measure to Assess the Quality of Information System Services, Decision Sciences, Vol. 30 (3), pp. 877-891
[37] Whyte, G. and Bytheway, A. (1996), Factors Affecting Information Systems' Success, International Journal of Service Industry Management, Vol. 7 (1), pp. 74-93
[38] Willcocks, L. (1996), Investing in Information Systems: Evaluation and Management, Chapman & Hall, London
[39] Woo, K. and Fock, H. (1999), Customer Satisfaction in the Hong Kong Mobile Phone Industry, The Service Industries Journal, Vol. 19 (3), pp. 162-174
[40] Yuthas, K. and Eining, M. M. (1995), An Experimental Evaluation of Measurements of Information System Effectiveness, Journal of Information Systems, Vol. 9 (2), pp. 69-84
[41] Zeithaml, V. A., Parasuraman, A. and Berry, L. L. (1990), Delivering Quality Service: Balancing Customer Perceptions and Expectations, Free Press, New York