Norlida Mahussin
Faculty of Science & Technology, USIM
E-mail: norlida@usim.edu.my
Abstract
Introduction
It would be a great mistake for accountants to leave the information system entirely to end-users without auditing support during the System Development Life Cycle (SDLC). In fact, information technology support services are likely to be a fundamental requirement for the sustained existence of future organizations. Many corporate organizations are now moving towards establishing IT departments, especially in the area of accounting information systems. As a result, these organizations are in great need of accurate information on the current system and end-users' needs in order to operate more effectively and efficiently.
The major purpose of this study is to examine the effectiveness of Sistem Maklumat
Pengurusan Kewangan (SMPK)1, or Financial Management Information System. The study focuses on
system needs and system performance to reveal which SMPK features end-users need most and which
need improvement. While the results of the system needs-performance gap analysis highlight the
importance of SMPK's features from the end-users' perspective, a Snake Diagram is used to visualize
the 'size' of SMPK's UIS. The results of the factor analysis, in turn, are used for further investigation in
the regression analysis. This paper also provides survey results on the supporting, handling and
documenting of SMPK by the Information Technology Division. The results of the study should serve
as a good reference for information technology administrators and accountants after the SDLC process.
1 The information about SMPK is based on interviews with Information Technology Division staff and an examination of archival documents.
123 International Research Journal of Finance and Economics - Issue 36 (2010)
Orlikowski, 1988), system service quality (Parasuraman et al., 1985, 1988) or system expectation-
performance analysis (Kim, 1990; Zeithaml et al., 1990).
In the accounting systems literature, instruments developed by Ives et al. (1983), Doll and
Torkzadeh (1988) and Baroudi and Orlikowski (1988) have been used in many IS studies. Recent studies
found Doll and Torkzadeh's (1988) end-user computing satisfaction instrument to be the most
stable and reliable for measuring UIS with general ledger systems (Seddon et al., 1992; Downing,
1997) and with computer simulation applications in decision support systems (McHaney and Cronan, 1998).
In the marketing research literature, the development of the SERVQUAL instrument by Parasuraman
et al. (1988) has encouraged IS researchers to use the instrument to test the quality of system
services. Some studies (see, for example, Van Dyke et al., 1999; Pitt and Watson, 1995; Kettinger and
Lee, 1994) indicated that the IS version of the SERVQUAL instrument produces a better result
in predicting the 'level' of UIS, as proposed by Kim (1990) and Zeithaml et al. (1990).
This study combines the above three instruments to measure the effectiveness of the Financial
Management Information System, better known as Sistem Maklumat Pengurusan Kewangan
(SMPK). The scope of the study is divided into system needs and system performance, as
proposed by Hall (2001).
iv) Twelve attributes received a gap value below –0.50: low percentage down time,
user training, system response time, timely and up-to-date information, problem response time,
system documentation, security and privacy, computing facilities, responsiveness to users'
needs, accuracy of information, user participation in planning, and time-to-time conferences.
v) User's learning curve and personal control gained the smallest gap values.
Gap Analysis
The analysis of SMPK in meeting the level of UIS is explained by a positive, negative
or zero gap value. According to Remenyi (1996):
i) Attributes which display a gap greater than zero (i.e. a positive gap) indicate that the system is
overperforming.
ii) Attributes which display a gap lower than zero (i.e. a negative gap) indicate that the system is
underperforming.
iii) Where the gap is actually zero (i.e. no gap), there is an exact match between expectations and
performance.
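Remenyi's three-way classification above can be sketched in a few lines of code. This is an illustrative example only: the helper name `classify_gap` and the sample scores are assumptions, not the paper's data, and the gap is computed as performance minus needs as the paper defines it.

```python
# Sketch of Remenyi's (1996) gap classification: gap = performance - needs.
# Function name and sample values are illustrative, not from the paper.
def classify_gap(gap: float) -> str:
    """Positive gap -> overperforming; negative -> underperforming; zero -> exact match."""
    if gap > 0:
        return "overperforming"
    if gap < 0:
        return "underperforming"
    return "exact match"

# Example: a needs score of 4.9412 against a performance score of 4.0
# yields the -0.9412 user-training gap quoted later in the text.
print(classify_gap(4.0 - 4.9412))
```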
Figure 1.0 exhibits a graphical view of the gap analysis. The X-axis of the Snake Diagram in Figure
1.0 represents all 25 attribute numbers, while the Y-axis shows the mean scores for both system
needs (Part A) and system performance (Part B), as well as the gap/system analysis (Part B – Part A).
The mean and gap scores are taken directly from the data in Table 1.0. As exhibited, even though user
training (attribute number 12) received high expectations from end-users, the system was perceived to
provide inadequate training. With a gap score of –0.9412, it can be inferred
that SMPK's end-users have received insufficient training from the management. In addition, Low
Percentage Down Time (attribute number 5), System Response Time (attribute number 11) and
Problem Response Time (attribute number 13) are also deficient, with gap scores below –0.7500.
These problems should receive rigorous attention from the responsible staff of the Information Technology
Division because they affect the issuance of timely and up-to-date information (attribute number
25).
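The screening step described above can be expressed as a simple threshold filter. Only the –0.9412 user-training gap is quoted exactly in the text; the other gap scores below are assumptions chosen to be consistent with the statement that they fall below –0.75.

```python
# Illustrative sketch: flagging attributes whose needs-performance gap
# falls below -0.75. Only the user-training value is quoted in the paper;
# the other figures are assumed for this example.
gap_scores = {
    "Low Percentage Down Time": -0.7800,   # assumed (text says below -0.75)
    "System Response Time": -0.7600,       # assumed (text says below -0.75)
    "User Training": -0.9412,              # quoted in the text
    "Problem Response Time": -0.7900,      # assumed (text says below -0.75)
    "Personal Control": -0.1000,           # assumed small gap
}

def deficient_attributes(scores: dict, threshold: float = -0.75) -> list:
    """Return attribute names whose gap score is below the threshold, sorted."""
    return sorted(name for name, gap in scores.items() if gap < threshold)

print(deficient_attributes(gap_scores))
```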
[Figure 1.0: Snake Diagram. X-axis: Attribute Number/Feature (1 to 25); Y-axis: Gap/Score (–2.0000 to 5.0000).]
Factor Analysis
Table 2.0 below provides the factor analysis of the system-needs scores. Eight factors in system needs and six
factors in system performance were found to have Eigenvalues2 greater than one; the Kaiser-Meyer-
Olkin (KMO)3 measure of sampling adequacy was 64%. The total variance explained by all selected factors is 77.491% in
system needs and 73.548% in system performance. Based on the factor analysis, eleven factors or variables
from both tables were selected and included in the regression model for further statistical analysis to
examine the determinants of UIS.
2 Eigenvalues determine how many factors should be retained in the analysis. According to Kim and Mueller (1978), factors that have low Eigenvalues will contribute little to the explanation of variances in the model, and may be ignored as redundant.
3 Small values of KMO indicate that factor analysis is not a good choice; they mean that the correlation of pairs of variables cannot be explained by the other variables. As a rule of thumb, a KMO below 0.50 is unacceptable (Kim and Mueller, 1978).
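The two retention rules in the footnotes can be sketched concretely. The example below is an assumption-laden illustration: it uses the closed-form eigenvalues 1 + r and 1 - r of a 2x2 correlation matrix (not the paper's 25-attribute data), retains factors with eigenvalue above one, and applies the Kim and Mueller (1978) KMO cutoff of 0.50.

```python
# Illustrative sketch of the retention rules in footnotes 2 and 3.
# A 2x2 correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + |r| and 1 - |r|.
def retained_factors(r: float, threshold: float = 1.0) -> int:
    """Count eigenvalues strictly above the threshold (Eigenvalue > 1 rule)."""
    eigenvalues = [1.0 + abs(r), 1.0 - abs(r)]
    return sum(ev > threshold for ev in eigenvalues)

def kmo_acceptable(kmo: float) -> bool:
    """Rule of thumb from Kim and Mueller (1978): KMO below 0.50 is unacceptable."""
    return kmo >= 0.50

print(retained_factors(0.64))  # with r = 0.64, only one eigenvalue exceeds 1
print(kmo_acceptable(0.64))    # a 64% KMO clears the 0.50 cutoff
```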
Table 2.0: Factor Analysis of System Needs Scores
It can be said that the lower the level of understanding of IS, the less confident the user is and,
therefore, the less satisfied in future IS usage. The education background (EDUCATION) of each SMPK
user is taken as another independent variable to see whether academic qualification has an effect
on the UIS level. A value of one (1) is assigned to end-users with an education level higher than
STPM, and zero (0) otherwise.
Interestingly, Robbins (1998) claims that there is no large difference in job performance
between men and women. In the IS literature, studies by McGill et al. (1998) and Downing (1997)
revealed that gender is not a significant factor influencing the user satisfaction level. However,
gender (GENDER) is also included in the regression model to test for its effect (if any). The variable is
coded one (1) for a male end-user and zero (0) for a female end-user.
Table 4.0 below provides the descriptions of the variables and their expected directions in the
regression estimations.
Table 4.0: Variable Descriptions and Expected Signs

Non-Gap Variables:
YEARUNIV: Years of working in the University (-)
YEARPC: Years of experience in using a personal computer (-)
YEARNPC: Years of experience in using a network personal computer (-)
YEARSMPK: Years of experience in using SMPK (None)
GENDER: Dummy variable coded 1 for a male end-user, 0 otherwise (+)
EDUCATION: Dummy variable coded 1 for end-users with an academic qualification higher than STPM, 0 otherwise (+)

Gap Variables:
Factor 3.0: User-Joint Development* (+)
Factor 4.0: (+)
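The dummy coding in Table 4.0 can be made explicit with two small helpers. The function names and input representations ("male"/"female" strings, a boolean for qualification above STPM) are assumptions for this illustration; the 1/0 assignments follow the table.

```python
# Sketch of the Table 4.0 dummy coding. Function names and input
# representations are illustrative assumptions.
def code_gender(gender: str) -> int:
    """GENDER: 1 for a male end-user, 0 otherwise."""
    return 1 if gender.lower() == "male" else 0

def code_education(qualification_above_stpm: bool) -> int:
    """EDUCATION: 1 for academic qualification higher than STPM, 0 otherwise."""
    return 1 if qualification_above_stpm else 0

print(code_gender("Male"), code_education(True))     # 1 1
print(code_gender("female"), code_education(False))  # 0 0
```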
Regression Model
For the regression analysis, the non-Gap and Gap variables are estimated one at a time due to data
constraints. Note, however, that the Gap variables were determined from the results of the factor analysis
explained earlier (see Table 2.0 and Table 3.0). The variable descriptions and predicted signs in Table
4.0, for both the demographic factors/non-gap variables and the gap variables, translate into the following
regression models:
UIS(1) = α0 + b1G1 + b2G2 + b3G3 + b4G4 + b5G5 + b6G6 + b7G7 + b8G8 + b9G9 + b10G10 + b11G11 + ∈
Where;
G1 = User-Joint Development
G2 = Quality of Services
G3 = Data Maintenance
G4 = Analysis-Decision Facilities
G5 = User-Self Confidence
G6 = System Output
G7 = User-Self Development
G8 = User Support
G9 = System Operation
G10 = User Self-Reliance
G11 = Problem Response Time
UIS(2) = α0 + b1NG1 + b2NG2 + b3NG3 + b4NG4 + b5NG5 + b6NG6 + ∈
Where;
NG1= YEARUNIV
NG2 = YEARPC
NG3 = YEARNPC
NG4 = YEARSMPK
NG5 = GENDER
NG6 = EDUCATION
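To make the estimation step concrete, here is a minimal ordinary least squares sketch. The real UIS(2) model regresses UIS on all six non-gap variables at once; this example fits only a single regressor (labelled YEARSMPK) with an intercept, using the textbook closed-form slope and intercept. The data points are invented for illustration.

```python
# Minimal OLS sketch for one regressor of the UIS(2) model.
# Data points are invented; the full model uses six non-gap variables.
def ols_slope_intercept(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope

years_smpk = [1, 2, 3, 4, 5]           # hypothetical YEARSMPK values
uis = [3.0, 3.2, 3.4, 3.6, 3.8]        # hypothetical UIS scores
alpha, beta = ols_slope_intercept(years_smpk, uis)
print(round(alpha, 4), round(beta, 4))
```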
Overall, the results support the findings of Remenyi (1996), Downing (1997) and McGill et al.
(1998) on the non-significance of demographic factors (job status, tenure, experience and
education level) in influencing the level of UIS. These findings are basically consistent with many
IS researchers' conclusions regarding the 'sensitivity and subjectivity' of the measurement of UIS
(Galletta and Lederer, 1989; Hawk and Raju, 1991; Essex and Magal, 1998; Etezadi-Amoli and
Farhoomand, 1991; Whyte and Bytheway, 1996).
more educated and have more exposure to many similar alternative systems. As business activities
change from business-to-customer to business-to-business activities, companies are no longer selling
hardware products. Their revenues are likely to depend on how well they are able to 'sell'
information. The study of UIS provides valuable information for management strategy.
References
[1] Baroudi, J. J. and Orlikowski, W. J. (1988), A Short Measure of User Information Satisfaction,
Journal of Management Information Systems, Vol. 4 (4), pp. 44-59
[2] DeLone, W. H. and McLean, E. R. (1992), Information Systems Success: The Quest for the
Dependent Variable, Information Systems Research, Vol. 3 (1), pp. 60-95
[3] Doll W. J. and Torkzadeh G. (1988), The Measurement of End-User Computing Satisfaction,
MIS Quarterly, Vol. 12, (2), pp.259-274
[4] Doll, W. J. and Torkzadeh, G. (Mar 1991), The Measurement of End-User Computing Satisfaction:
Theoretical and Methodological Issues, MIS Quarterly, Vol. 15 (1), pp. 5-10
[5] Downing, C. E. (1997), An Empirical Examination of User satisfaction with an Information
System Implementation, Operations and Strategic Management Department Wallace E. Carroll
School of Management, Boston College,
www.hsb.baylor.edu/ramsower/ais.ac97/papers/downing.htm
[6] Essex, P. A. and Magal, S. R. (1998), Determinants of Information Center Success, Journal of
Management Information Systems, Vol. 15 (2), pp.95-117
[7] Etezadi-Amoli, J. and Farhoomand, A. F. (Mar 1991), On End-User Computing Satisfaction,
MIS Quarterly, Vol. 15 (1), pp. 1-4
[8] Galletta, D. F. and Lederer, A. L. (1989), Some Cautions on the Measurement of User
Information Satisfaction, Decision Sciences, Vol. 20 (3), pp. 419-438
[9] Hall, J. A. (2001), "Accounting Information System: The System Development Process",
Thomson Learning (3rd edition), pp. 642-678, South-Western US
[10] Haksever C., Cook R., and Chaganti R. (1997), Service Quality for Small Firms: Can the Gaps
model Help?, www.sbaer.uca.edu/sbaer/proceeding/97proceedings.html
[11] Hartzel K. S. & Flor P. R. (1995), Expectation Formation in the Information System
Development Process. The Joseph M. Katz Graduate School of Business, University of
Pittsburgh, PA 15260, http://hsb.baylor.edu/ramsower/acis/paper/hartzel.htm
[12] Hawk, S. R. and Raju, N. S. (1991), Test-Retest Reliability of User Information Satisfaction: A
Comment, Decision Sciences, Vol. 22 (5), pp. 1165-1170
[13] IFAC Education Committee (1995) New IFAC Project – Super IT Programs in Accounting
Education: Information Technology in the Accounting Curriculum (IEG-11), www.ifac.org
[14] Ives B., Olson M. H., and Baroudi J. J. (1983), The measurement of User Information
Satisfaction, Communication of ACM, Vol. 26 (10), pp.785-793
[15] Kettinger, W. J. and Lee, C. C. (1994), Perceived Service quality and User Satisfaction With
the Information Services Function, Decision Sciences, Vol. 25 (5/6), pp. 737-766
[16] Kim, J. and Mueller, C. W. (1978), Introduction to Factor Analysis: What It Is and How To Do It,
University Paper, Sage, London
[17] Kim, K. K. (1990), User Information Satisfaction: Towards Conceptual Clarity, Proceedings of
ICIS, Copenhagen
[18] Lawrence, M. and Low, G. (1993), Exploring Individual User Satisfaction Within User-Led
Development, MIS Quarterly, Vol. 17 (2), pp. 195-208
[19] Magal, S. R. (1991), A Model for Evaluating Information Center Success, Journal of
Management Information Systems, Vol. 8 (1), pp. 91-106
[20] McGill, T. J., Hobbs, V. J., Chan, R., and Khoo, D. (Feb 1998), User Satisfaction as a Measure of
Success in End User Application Development: An Empirical Investigation, Research Working
Paper IT/98/02, Murdoch University
[21] McHaney, R. and Cronan, T. P. (1998), Computer Simulation Success: On the Use of the End-
User Computing Satisfaction Instrument: A Comment, Decision Sciences, Vol. 29 (2), pp. 525-
536
[22] Mirani R., King W. R. (1994), Impacts of End-User and Information Center Characteristics on
End-User Computing Support, Journal of Management Information Systems, Vol. 11 (1), pp.
141-166
[23] Mirani, R. and King, W. R. (1994), The Development of a Measure for End-User Computing
Support, Decision Sciences, Vol. 25 (4), pp. 481-498
[24] O’Brien J. A. (1999), Management Information Systems: Managing Information Technology in
the Internetworked Enterprise, Fourth Ed., Irwin McGraw-Hill, USA
[25] Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1985), A Conceptual Model of Service
Quality and its Implications for Future Research, Journal of Marketing, Vol. 49 (4), pp. 41-50
[26] Parasuraman, A., Zeithaml, V. A. and Berry, L. L. (1988), SERVQUAL: A Multiple Item Scale
Measuring Consumer Perceptions of Quality, Journal of Retailing, Vol. 64 (1), Spring, pp. 12-
40
[27] Pitt, L. F. and Watson, R. T. (1995), Service Quality: A Measure of Information Systems
Effectiveness, MIS Quarterly, Vol. 19 (2), pp.173-187
[28] Redman W. (1997), Keeping Those Users Satisfied, Communications Week, Issue 669, pp. 63-
64
[29] Remenyi D. (1996), A Holistic Approach to IT Function Evaluation, Leslie Willcocks,
Chapman & Hall, London
[30] Robbins, S. P. (1998), Organizational Behavior: Concepts, Controversies, Applications, New
Jersey, Prentice Hall, Inc.
[31] Seddon, P. , Wong, M. and Yip, S. K. (1992), Computer-Based General Ledger Systems: An
Exploratory Study, Journal of Information Systems, Vol. 6 (1), pp. 93-110
[32] Seddon, P. and Yip, S. K. (1992), An Empirical Evaluation of User Information Satisfaction
(UIS) Measures for Use With General Ledger Accounting Systems, Journal of Information
Systems, Vol. 6 (1), pp. 75-92
[33] Shaw, N. C., Ang, J. S. K., and Patridge, J. E. L. (1996), Technological Frames and End-User
Computing, National University of Singapore,
http://hsb.baylor.edu/ramsover/ais.ac.96/papers/shaw.htm
[34] Spreng, R. A. and Mackoy, R. D. (1996), An Empirical Examination of a Model of Perceived
Service Quality and Satisfaction, Journal of Retailing, Vol. 72 (2), pp. 201-214
[35] Stratigos, A. (Nov/Dec 1999), Measuring End-User Loyalty Matters, Online, Wilton, Vol. 23
(6), pp. 74-78
[36] Van Dyke, T. P. , Prybutok, V. R. and Kappelman, L. A. (1999), Cautions on the Use of the
SERVQUAL Measure to Assess the Quality of Information System Services, Decision
Sciences, Vol. 30 (3), pp. 877-891
[37] Whyte G. & Bytheway A. (1996), Factors affecting information system’s success. International
Journal of Service Industry Management, Vol. 7, No. 1, pp. 74 – 93
[38] Willcocks, L. (1996), Investing in Information Systems: Evaluation and Management,
Chapman & Hall, London
[39] Woo K., and Fock H. (1999), Customer Satisfaction in the Hong Kong Mobile Phone Industry,
The Service Industries Journal, Vol. 19 (3), pp. 162-174
[40] Yuthas K., and Eining M.M. (1995), An Experimental Evaluation of Measurements of
Information System Effectiveness. Journal of Information Systems, Vol. 9 (2), pp. 69-84
[41] Zeithaml, V. A., Parasuraman, A., and Berry, L. L. (1990), Delivering Quality Service: Balancing
Customer Perceptions and Expectations, Free Press, New York