
The current issue and full text archive of this journal is available at

www.emeraldinsight.com/0968-4883.htm

Measuring student satisfaction at a UK university

Jacqueline Douglas, Alex Douglas and Barry Barnes
Faculty of Business and Law, Liverpool John Moores University, Liverpool, UK

Abstract
Purpose - The purpose of this paper is to report on the design and use of a questionnaire to measure student satisfaction at Liverpool John Moores University's Faculty of Business and Law.
Design/methodology/approach - The paper utilised the concept of the service-product bundle to design the survey questionnaire and then used SPSS and quadrant analysis to analyse the results to determine which aspects of the University's services were most important and the degree to which they satisfied the students.
Findings - The most important aspects were those associated with teaching and learning, while the least important were those associated with the physical facilities.
Practical implications - The concept of the service-product bundle is a valid and reliable tool for the design of a satisfaction survey and segments a University's service offering in such a way as to allow management to target resources at those areas perceived to be of low satisfaction and high importance. The questionnaire can be utilised in most education establishments.
Originality/value - Utilising the concept of the service-product bundle places responsibility for questionnaire content and design firmly on the service provider rather than the user.
Keywords Service levels, Higher education, Students, Surveys, United Kingdom
Paper type Research paper

Introduction
Students' opinions about all aspects of academic life are now sought by educational
institutions worldwide, generally in the form of a satisfaction feedback questionnaire.
It is this student satisfaction survey, within the context of Liverpool John Moores
University's Faculty of Business and Law, that this paper addresses.
In the UK, Higher Education (HE) students were considered to be the primary
customers of a University (Crawford, 1991), even before they were liable for the
payment of up-front tuition fees. Students are the direct recipients of the service
provided, i.e. a three year degree programme made up of a number of modules at each
level. As if to confirm this status of the student as customer, the Higher Education
Funding Council for England (HEFCE) has introduced a National Student Survey. This
survey is aimed at final year students to seek their views on a number of aspects of
teaching, assessment and support provided by their university and its courses. The
results will ultimately be used by Government and Funding Bodies to produce league
tables of university performance. The position of a university in any league tables will
impact ultimately on its image. Image has a strong impact on the retention of current
students and the attraction of potential students (James et al., 1999). Indeed recruitment
and retention of students have been moved to the top of most universities' agendas by
HEFCE due to their desire to increase the UK student population in line with
Government targets. Poor retention rates may have adverse funding consequences for
institutions (Rowley, 2003a). This paper takes the view that student satisfaction,
retention and recruitment are closely linked. Thus student satisfaction has become an
extremely important issue for universities and their management. The aim is to
maximise student satisfaction, minimise dissatisfaction and therefore retain students
and so improve the institution's performance across a number of league tables.

Quality Assurance in Education, Vol. 14 No. 3, 2006, pp. 251-267
© Emerald Group Publishing Limited, 0968-4883
DOI 10.1108/09684880610678568
A number of previous research studies (see, for example, Galloway, 1998; Banwet and
Datta, 2003) into student perceptions of quality/satisfaction have utilised
the SERVQUAL framework (Parasuraman et al., 1988). However, SERVQUAL has
been much criticised over the years (see, for example, Buttle, 1996; Asubonteng et al.,
1996; Pariseau and McDaniel, 1997; Aldridge and Rowley, 1998). Taking these
criticisms into consideration the questionnaire used in the satisfaction survey asked
only for perceptions of performance of a range of service aspects (as well as
importance) but did not aim to collect data associated with expectations. Indeed, the
survey questionnaire was designed around the concept of the service-product bundle.
This concept is discussed in the next section.
The service-product bundle
The outcome of service delivery is a tangible product and a bundle of goods and
services as the product offering (Sasser et al., 1978). The service-product bundle refers
to the inseparable offering of many goods and services, including what Liverpool John
Moores University has to offer its students. This bundle consists of three elements:
(1) the physical or facilitating goods;
(2) the sensual service provided - the explicit service; and
(3) the psychological service - the implicit service.
For a university the facilitating goods include the lectures and tutorials, presentation
slides, supplementary handout documents/materials and the recommended module
text. It also includes the physical facilities such as the lecture theatres and tutorial
rooms and their level of furnishing, decoration, lighting and layout as well as ancillary
services such as catering and recreational amenities.
The explicit service includes the knowledge levels of staff, staff teaching ability, the
consistency of teaching quality irrespective of personnel, ease of making appointments
with staff, the level of difficulty of the subject content and the workload.
The implicit service includes the treatment of students by staff, including
friendliness and approachability, concern shown if the student has a problem, respect
for feelings and opinions, availability of staff, capability and competence of staff. It
also includes the ability of the university's environment to make the student feel
comfortable, the sense of competence, confidence and professionalism conveyed by the
ambience in lectures and tutorials, the feeling that the student's best interests are being
served and a feeling that rewards are consistent with the effort put into
coursework/examinations. All of the above are based on students' perceptions of
the various parts of the service, and the data are usually collected via some form of
feedback questionnaire.
Why collect student feedback?
Rowley (2003b) identified four main reasons for collecting student feedback:
(1) to provide auditable evidence that students have had the opportunity to pass
comment on their courses and that such information is used to bring about
improvements;

(2) to encourage student reflection on their learning;


(3) to allow institutions to benchmark and to provide indicators that will contribute
to the reputation of the university in the marketplace; and
(4) to provide students with an opportunity to express their level of satisfaction
with their academic experience.
The last of these reasons was the rationale behind the survey undertaken for the
particular research project described in this paper.
Keeping customers satisfied is what leads to customer loyalty. Research conducted
by Jones and Sasser Jr (1995) into thirty organisations from five different markets
found that where customers have choices the link between satisfaction and loyalty is
linear; as satisfaction rises, so too does loyalty. However, in markets where competition
was intense they found a difference between the loyalty of satisfied and completely
satisfied customers. Put simply, if satisfaction is ranked on a 1-5 scale from "completely
dissatisfied" to "completely satisfied", the 4s, though satisfied, were six times more
likely to defect than the 5s.
Customer loyalty manifests itself in many forms of customer behaviour. Jones and
Sasser Jr (1995) grouped ways of measuring loyalty into three main categories:
(1) intent to re-purchase;
(2) primary behaviour - organisations have access to information on various
transactions at the customer level and can track five categories that show actual
customer re-purchasing behaviour, viz. recency, frequency, amount, retention,
and longevity; and
(3) secondary behaviour - e.g. customer referrals, endorsements and spreading the
word are all extremely important forms of consumer behaviour for an
organisation.
Translating this into university services, this covers intent to study at a higher level
within the same institution, how frequently and recently a student used ancillary
services, such as the library, catering and IT services, and lastly the willingness to
recommend the institution to friends, neighbours and fellow employees.
Issues impacting on student satisfaction
Price et al. (2003) recently reported on the impact of facilities on undergraduate students'
choice of university. They surveyed a number of universities over two years in order to
determine students' reasons for selecting a particular university. The average results
for the two years were fairly similar, the top eight reasons being: it had the right
course, availability of computers, quality of library facilities, good teaching reputation,
availability of quiet areas, availability of areas for self-study, quality of public
transport in the town/city and a friendly attitude towards students. Clearly, students'
perceptions of a university's facilities are one of the main influences on their decision to
enrol.
Coles (2002) found that student satisfaction is decreased when class sizes are larger
in earlier cohorts, and when students are taking compulsory core modules rather than
optional modules.


The quality of any of the service encounters, or "moments of truth" (Carlzon, 1989),
experienced by customers forms part of their overall impression of the whole service
provided (Dale, 2003) and, by implication, their impression of the organisation itself.
As Deming (1982) commented, most people form their opinions based on the people
that they see, and they are either dissatisfied or delighted, or some other point on the
continuum in between. In order to deliver high-quality services to students, universities
must manage every aspect of the student's interaction with all of their service offerings,
and in particular those involving their people. Services are delivered to people by people,
and the moments of truth can make or break a university's image (Banwet and Datta,
2003). In order to deliver total student satisfaction, all employees of a university should
adhere to the principles of quality customer service, whether they are front-line contact
staff involved in teaching or administration, or non-contact staff in management or
administrative roles (Gold, 2001; Low, 2000, cited in Banwet and Datta, 2003).
In a recent survey of 310 all-male Saudi Arabian students attending
the King Fahd University of Petroleum and Minerals, Sohail and Shaikh (2004) found
that contact personnel were the most influential factor in students' evaluation of
service quality. However, the physical environment, layout, lighting, classrooms,
appearance of buildings and grounds and the overall cleanliness also significantly
contributed to students' concepts of service quality.
Galloway (1998) studied the effect of the faculty administration office in one UK
university on student perceptions of service quality. He found that it impacted directly
on students and influenced their perceptions of the quality of the whole institution. The
office performance also had a direct impact on academic and technical staff within the
faculty. These front-line staff in their turn had a direct impact on students, potential
students and other clients. The main predictors of quality for students were found to
be:
. office has a professional appearance;
. staff dress smartly;
. never too busy to help; and
. opening hours are personally convenient.
Banwet and Datta (2003) believed that satisfied customers are loyal, and that satisfied
students were likely to attend another lecture delivered by the same lecturer or opt for
another module or course taught by her/him. In their survey of 168 students who
attended four lectures delivered by the same lecturer, covering perceived service
quality, importance and post-visit intentions, they found that students placed more
importance on the outcome of the lecture (knowledge and skills gained, availability of
class notes and reading material, coverage and depth of the lecture and the teacher's
feedback on assessed work) than on any other dimension. This supports the findings of
Schneider and Bowen (1995), who deduced that the quality of the core service influences
the overall quality of the service perception. For universities the core service delivery
method is still the lecture. Overall, Banwet and Datta (2003) found that students'
intentions to re-attend or recommend lectures were dependent on their perceptions of
quality and the satisfaction they got from attending previous lectures. This is
supported by the research of Hill et al. (2003) who utilised focus groups to determine
what quality education meant to students. The most important theme was the quality

of the lecturer including classroom delivery, feedback to students during the session
and on assignments, and the relationship with students in the classroom.
Research by Tam (2002) to measure the impact of Higher Education (HE) on
students' academic, social and personal growth at a Hong Kong university found that
as a result of their university experience students had changed intellectually, socially,
emotionally and culturally. This growth was evidenced as students progressed from
one year to another as their university career developed. Is this also the case with
student perceptions of service quality and satisfaction? A number of researchers have
suggested that this might indeed be the case (Hill, 1995; O'Neil, 2003), although
obtaining valid and reliable data to support such a stance is difficult. This study aims
to determine if there are differences in those aspects of a university service that
students consider important, as well as their satisfaction levels, associated with their
year/level of study, i.e. first, second and third.
Methodology
A quantitative survey was designed to elicit student satisfaction levels across the
University's service offerings. The questionnaire consisted of 60 questions informed by
previous research studies and subdivided into the various categories of the
service-product bundle, including lecture and tutorial facilities, ancillary facilities,
the facilitating goods, the explicit service and the implicit service. At the end students
were asked for their overall satisfaction rating and whether they would recommend the
University to a prospective student. The satisfaction questions were preceded by a
series of demographic questions that would allow the sample population to be
segmented. These included, inter alia, questions regarding gender, age, level of study,
mode of study and country of origin.
Participation in the survey was entirely voluntary and anonymous. The length and
complexity of the questionnaire was influenced, in part, by the balance between the
quest for data and getting students to complete the survey.
The questionnaire was piloted among 100 undergraduate volunteers. The length of
time it took them to complete the survey was noted and at the end they were asked for
any comments regarding the validity and reliability of individual questions. They were
also asked if there was anything missing from the questionnaire. Based on the
feedback received a number of questions were amended and the design of the
questionnaire altered slightly. It took on average 12 minutes to complete the
questionnaire.
In order to get as large and representative a sample as possible, core modules from
programmes in all five Schools within the Faculty of Business and Law at all three
undergraduate levels were targeted. Staff teaching these modules were approached
and permission sought to utilise 15 minutes of their lecture time in order to explain the
rationale behind the survey and to persuade students to complete the survey in class.
Generally this personal touch was successful in eliciting a good response. Over the
course of the two weeks the survey was undertaken, only one person refused to
complete the questionnaire.
Researchers are divided as to whether or not determinants of satisfaction should be
weighted by their importance because different attributes may be of unequal
importance to different people (Angur, 1998; Harvey, 1995; Patterson and Spreng,
1997). In this study both satisfaction and importance were measured.


There is no such thing as the perfect rating scale. However, some produce more
reliable and valid results than others. Devlin et al. (1993) determined that a good rating
scale should have, inter alia, the following characteristics:
. minimal response bias;
. discriminating power;
. ease of administration; and
. ease of use by respondents.
In order to accommodate these characteristics, the rating scale contained five points
with well-spaced anchor points representing the possible range of opinions about the
service. The scale contained a neutral category and the negative categories were
presented first (to the left).
Thus, undergraduates were required to respond utilising a 5-point Likert scale of 1
to 5, where 1 is very unsatisfactory, 2 is unsatisfactory, 3 is neutral (neither satisfactory
nor unsatisfactory), 4 is satisfactory and 5 is very satisfactory. This type of scale
provides a common basis for responses to items concerned with different aspects of the
University experience. The importance that students place on each criterion was
measured utilising a 5-point Likert scale, where 1 is very unimportant, 2 is
unimportant, 3 is neutral (neither important nor unimportant), 4 is important and 5 is
very important. Respondents were asked to tick the box next to the number that
represented their opinion on each item.
A sample of 865 students from a total within the Faculty of 3800 was surveyed.
The questionnaires were analysed using SPSS v. 11 and Quadrant Analysis
conducted in order to determine those areas perceived as being the least satisfactory
with the greatest importance rating.
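The scoring step behind this analysis can be sketched in a few lines. The following is an illustrative reconstruction in Python, not the authors' SPSS procedure; the aspect names and Likert responses are invented for demonstration (the real study had 60 aspects and up to 865 respondents):

```python
from statistics import median

# Hypothetical 5-point Likert satisfaction responses for three illustrative aspects.
satisfaction = {
    "Teaching ability of staff": [5, 4, 4, 5, 3, 4],
    "Vending machines": [3, 2, 3, 4, 2, 3],
    "IT facilities": [4, 5, 4, 3, 3, 4],
}

def top_two_box(scores):
    """Percentage of respondents answering 4 (satisfied) or 5 (very satisfied)."""
    return 100.0 * sum(1 for s in scores if s >= 4) / len(scores)

# Top-two-box percentage per aspect, then the median split used by quadrant analysis.
pct = {aspect: top_two_box(scores) for aspect, scores in satisfaction.items()}
cutoff = median(pct.values())

for aspect, p in sorted(pct.items(), key=lambda kv: -kv[1]):
    position = "above" if p > cutoff else "at/below"
    print(f"{aspect}: {p:.1f}% satisfied ({position} median)")
```

The same computation would be repeated for the importance ratings, giving each aspect a pair of percentage scores relative to the two medians.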
Finally, respondent focus groups were assembled to discuss some of the issues that
required more in-depth analysis and which, due to constraints of space and time, were
not explicitly asked about in the original survey.
Results
A total of 864 questionnaires were returned, although not all had complete data sets.
Table I details the demographic mix of the respondents.
Based on all student responses, the most important (i.e. list of the top ten starting
from the highest value) and least important (i.e. list of the bottom ten starting from the
lowest value) aspects of the University service are shown in Table II.
As can be seen from Table II the most important areas of the University services are
those associated with learning and teaching. Interestingly, given the recommendations
of a Government White Paper (HEFCE et al., 2003) that from 2006 all newly recruited
university teaching staff should obtain a teaching qualification that incorporates
agreed professional standards, the most important aspect of the service is the teaching
ability of staff, closely followed by their subject expertise. The consistency of teaching
quality irrespective of the teacher is also considered by the respondents as important,
recognising that teaching quality can be variable. The students also recognise the
importance of the lecture and tutorial, which is not surprising given that for most
universities that is still the core service offering and is very much linked to the teaching
ability and subject knowledge of staff. Teaching and learning support materials were

also ranked highly, particularly supplementary handout materials and the use of
Blackboard for enhancing student learning. These are mostly associated with the
explicit service delivered to the students and the facilitating goods.

With regard to facilities, students ranked the importance of IT facilities very
highly, reflecting the usefulness of connection to the Internet for research purposes and
of software packages for producing high-quality word-processed documentation for
coursework assignments and dissertations. This links well with the high ranking of the
Learning Resources Centre, where IT facilities can be accessed and books and journals
sourced in hard copy or electronic copy.

Table II also shows those areas of the service that students find relatively
unimportant. These are mostly associated with the lecture and tutorial facilities and
the ancillary services, for example, layout and decoration of lecture and tutorial
facilities, catering facilities and vending machines.

A further analysis was undertaken to determine whether different segments of the
respondent population had similar or different rankings of the University's service
attributes with regard to importance and unimportance.

Table I. Demographic mix of respondents (percentages)

Gender: Male 46; Female 54
Nationality: Home (UK) 89; European Union (EU) 4; International (non-EU) 7
Mode of study: Full-time 80; Part-time 8; Sandwich(a) 12
Level of study: Level 1 30; Level 2 48; Level 3 22

Note: (a) Sandwich students are those whose programme of study includes a year in industry

Table II. Most important and least important aspects of service

Most important (from the highest value):
1. Teaching ability of staff
2. Subject expertise of staff
3. IT facilities
4. Lectures
5. Supplementary lecture materials
6. Tutorials
7. Consistency of teaching quality irrespective of teacher
8. Blackboard(a)
9. The Learning Resources Centre
10. The approachability of teaching staff

Least important (from the lowest value):
1. Decoration in lecture facilities
2. Vending machines
3. Decoration in tutorial rooms
4. Furnishings in lecture facilities
5. Recreational facilities
6. Availability of parking
7. The layout of tutorial/seminar rooms
8. The layout of lecture facilities
9. The on-campus catering facilities
10. The quality of pastoral support

Note: (a) Blackboard is a virtual learning environment that students can access off and on campus


With regard to mode of study, Table III shows the rankings for students studying
full-time with the University. Whilst acknowledging the fact that 80 per cent of the
sample population were full-time students, the rankings of those service aspects
considered most important are very similar to those for the sample population as a
whole, the only difference being that supplementary tutorial materials replace the
approachability of staff. Once again the majority of aspects considered least
important are associated with the facilities and ancillary services.

When the views of part-time students are considered, a number of interesting
differences in their priorities are worthy of discussion. Table IV shows the rankings of
service aspects for part-time students. IT facilities drop from third to tenth in
their importance rankings, perhaps indicative of the fact that part-time students have
access to IT facilities at work and/or at home, thus rendering them less important
relative to other aspects of the service. Blackboard (a virtual learning environment that
allows teaching staff to make learning and other material available via the internet),
on the other hand, rises from tenth to seventh in importance, indicating its usefulness
as a teaching aid for students who do not attend the University on a daily basis and
who may miss classes due to work or family commitments. Interestingly, the
helpfulness of technical staff is considered unimportant, again reflecting part-time
students' access to such help at work, or a greater level of expertise on their part
through working with IT on a daily basis.

Table III. Most important and least important service aspects for full-time students

Most important:
1. Teaching ability of staff
2. Subject expertise of staff
3. IT facilities
4. Lectures
5. Tutorials
6. Supplementary lecture materials
7. Consistency of teaching quality irrespective of teacher
8. The Learning Resources Centre
9. Supplementary tutorial materials
10. Blackboard

Least important:
1. Decoration in lecture facilities
2. Decoration in tutorial rooms
3. Vending machines
4. Furnishings in tutorials
5. Furnishings in lectures
6. Availability of parking
7. Recreational facilities
8. The layout of tutorial/seminar rooms
9. The on-campus catering facilities
10. The layout of lecture facilities

Table IV. Most important and least important service aspects for part-time students

Most important:
1. Teaching ability of staff
2. Subject expertise of staff
3. Consistency of teaching quality irrespective of teacher
4. Teaching and learning equipment in lectures
5. The Learning Resources Centre
6. Lectures
7. Blackboard
8. Supplementary lecture materials
9. Supplementary tutorial materials
10. IT facilities

Least important:
1. Recreational facilities
2. Vending machines
3. Decoration in lecture facilities
4. Furnishings in lecture facilities
5. Decoration in tutorial rooms
6. Quality of pastoral support
7. The on-campus catering facilities
8. The layout of tutorial/seminar rooms
9. Helpfulness of technical staff
10. The lecture facilities overall

The data were next segmented on the basis of student nationality. The ranking of
service aspects for UK (home-based) students was similar to the rankings for full-time
students; however, this may be due to the predominance of UK students in the study.
With regard to the European Union (EU) students, a number of differences in rankings
merit discussion. The responsiveness of teaching staff to requests was ranked the
second most important service aspect. This probably reflects the need for more support
among students whose first language is not English, as well as their unfamiliarity with
the UK Higher Education system. Ranked fourth most important was textbook
availability within the LRC, perhaps reflecting a lack of financial resources to
purchase set texts, or a need for the text as a support for learning.
EU students, mainly Erasmus exchange students from France, Germany, Spain and
Italy, ranked as unimportant service aspects similar to those ranked by UK students.
International (non-EU) students, mostly Asian from China, India and Pakistan,
ranked textbook availability within the LRC as the most important aspect of the
service. This is indicative of the learning style of many international students.
Experience and anecdotal evidence from a number of Chinese international teaching
colleagues suggests that the Chinese education system is very textbook centred, with
the teaching structured round the textbook and the examination also based on the
textbook. Depth of knowledge on the one text is expected, rather than breadth of
knowledge from a number of sources.
With regard to the views of students studying at different levels in the University,
the main difference between the years was that level three students
ranked fourth most important "the feeling that rewards gained are consistent with the
effort you put into assessment" and fifth most important "the approachability of
teaching staff". Neither of these aspects appeared in the rankings by the other two
levels. Both reflect the important contribution final year assessment marks
make to the achievement of students' honours degree classification.
Finally, overall 88 per cent of the students surveyed would recommend the
University to a friend or neighbour. This figure decreased to 78 per cent for
respondents currently in their final year of undergraduate study, significantly lower
than for levels 1 and 2. However, at the time this study was undertaken no comparative
data from other universities existed as the HEFCE survey had yet to be conducted.
Importance-performance analysis was first applied to elements of a marketing
programme by Martilla and James (1977). This so-called "quadrant analysis" is a
graphic technique used to analyse importance and attribute ratings (Dillon et al., 1993).
It produces a grid that shows which attributes are important among those that a
service delivers. The analysis allows for the determination of whether the aspects of a
particular service provision are the aspects that respondents value as being important.
In this study quadrant analysis was conducted on the top two questionnaire box scores
for both importance (i.e. "very important" and "important") and satisfaction (i.e.
"very satisfied" and "satisfied"). Thus for every service aspect the percentage scores
of those respondents who were either satisfied or very satisfied were identified, as were
the percentages of respondents who considered that aspect important or very
important. Once all 60 questions had been similarly analysed, the median score was
determined for both importance and satisfaction. Then for each percentage score its
distance from the median was calculated. This could be positive or negative. The
importance scores were the Y-axis scores and the satisfaction scores the X-axis scores
for the purposes of plotting the results on the grid in Figure 1. Quadrant analysis has
typically been used to analyse student feedback data in UK universities for a number
of years. Indeed Harvey (1995) reports on just such a study at a UK university.

Figure 1. Satisfaction and importance grid
Figure 1 illustrates the 2 × 2 matrix or grid linking the perceived degree of student
satisfaction with an attribute to its perceived importance: the Satisfaction and
Importance Grid. Responses can fall into one of the four areas on the grid depending on
whether they are considered high importance and high satisfaction (B), high
importance and low satisfaction (A), low importance and low satisfaction (C) or low
importance and high satisfaction (D).
From a management point of view the area of most interest is those service areas
that are considered to be of high importance but are rated low as regards satisfaction:
Quadrant A on the Grid. These are the priority areas for improvement. Service areas in
Quadrant B, high importance and high satisfaction, are those areas where quality
should be maintained by "keeping up the good work". Quadrant C will contain service
aspects that are low priority because, despite underperforming, they are deemed to be
of little importance by students, whilst Quadrant D will have to be examined due to
possible "overkill", in that despite low importance the University may be
overperforming.
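These four rules amount to a simple sign test on each aspect's distance from the two medians. A minimal sketch in Python, using hypothetical aspect names and made-up distances rather than the study's actual figures:

```python
def quadrant(importance_dist, satisfaction_dist):
    """Classify a service aspect by its distance (in percentage points)
    from the median importance and median satisfaction scores.
    Positive distance = above the median."""
    high_imp = importance_dist > 0
    high_sat = satisfaction_dist > 0
    if high_imp and high_sat:
        return "B"   # keep up the good work
    if high_imp and not high_sat:
        return "A"   # priority area for improvement
    if not high_imp and not high_sat:
        return "C"   # low priority
    return "D"       # possible overkill

# Hypothetical aspects: (importance - median, satisfaction - median).
aspects = {
    "Feedback on performance": (12.0, -8.0),
    "Teaching ability of staff": (20.0, 15.0),
    "Decoration in lecture facilities": (-18.0, -5.0),
    "Lecture room cleanliness": (-10.0, 9.0),
}
for name, (imp, sat) in aspects.items():
    print(f"{name} -> Quadrant {quadrant(imp, sat)}")
```

The importance distance drives the vertical (Y) position on the grid and the satisfaction distance the horizontal (X) position, so the same two numbers serve for both classification and plotting.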
Table V shows where each of the questions asked about various aspects of the
University's services is situated on the Satisfaction and Importance Grid, based on the
views of all respondents.
From the University teaching staff and management standpoints the results are
encouraging. Based on the views of all respondents, the ten service aspects deemed
most important are all in Quadrant B. These are those aspects that are considered high
importance and are rated high in satisfaction terms. The quadrant analysis also
confirms the lack of importance of the physical facilities. However, a number of areas
have been identified as high importance and low satisfaction. The textbooks used on
courses are poorly rated, as is the feedback given to students on their performance. The
availability of teaching staff and their responsiveness to requests also give cause for
concern.
The degree of concern can be estimated when the views of various segments of the respondent population are considered. International students in particular rank textbook availability in the Learning Resources Centre as the most important aspect of the service, whilst EU students rank the responsiveness of staff to requests as the second most important aspect of the University's service offering. With regard to the availability of staff, level three students considered this more important than level one or two students did. This may be partly explained by the fact that level three students have to research and submit a dissertation and are assigned a personal supervisor to guide them in this task. Lack of availability of a personal supervisor may give cause for concern and lead to dissatisfaction, especially if their peers have personal supervisors who are seen to be more available.

Table V. Importance-satisfaction grid for all students

Quadrant A (high importance and low satisfaction):
Textbook value for money
Promptness of feedback on performance
Usefulness of feedback on performance
Availability of staff
Way timetable is organised
Course workload
Textbook availability within the Learning Resources Centre
Textbook usefulness in enhancing understanding of modules
Responsiveness of teaching staff to requests

Quadrant B (high importance and high satisfaction):
The teaching and learning equipment
The Learning Resources Centre
The IT facilities
The lectures
The tutorials
The PowerPoint/slide presentations
Supplementary lecture materials, e.g. handouts
Supplementary tutorial materials, e.g. handouts
The recommended course textbooks overall
Textbook availability in local bookstores
Blackboard overall
Subject expertise of staff
Teaching ability of staff
The consistency of teaching quality irrespective of the teacher
The appropriateness of the method of assessment (i.e. coursework and/or exam)
The friendliness of teaching staff
The approachability of teaching staff
Concern shown when you have a problem
The competence of staff
The feeling that your best interests are being served
The feeling that rewards gained are consistent with the effort put into assessment

Quadrant C (low importance and low satisfaction):
The lecture rooms overall
The lecture room layout
The lecture room decoration
The lecture room furnishings
The lecture class sizes
The tutorial rooms overall
The tutorial room level of cleanliness
The tutorial room layout
The tutorial room decoration
The tutorial room furnishings
The tutorial class sizes
The on-campus catering facilities overall
The vending machines overall
The toilet facilities overall
The recreational facilities overall
The availability of parking
The appropriateness of the style of assessment (i.e. individual and/or groupwork)
The helpfulness of technical staff
The helpfulness of administrative staff
The quality of pastoral support
The respect for your feelings, concerns and opinions

Quadrant D (low importance and high satisfaction):
The lecture room level of cleanliness
The lecture room lighting
The tutorial teaching and learning equipment
The tutorial room lighting
The level/difficulty of subject content
The appropriateness of the quantity of assessment
The University environment's ability to make you feel comfortable
The sense of competence, confidence and professionalism conveyed by the ambience in lectures
The sense of competence, confidence and professionalism conveyed by the ambience in tutorials
The focus groups confirmed and enhanced much of the information gathered from the questionnaire. The students considered "the teaching" to be much more important than "wobbly tables". This confirmed the low importance status of the physical facilities, and in particular the furnishings and decoration, although there was a limit to students' tolerance of deficiencies in these aspects.
The focus group participants also indicated that, when it came to selecting elective modules, their choice was influenced by which tutor was teaching particular modules.
With regard to tutorial classes, they commented that if there were too many students in a class they did not receive enough individual attention. They considered classes with more than 20 students to be too large.
The issue of textbook costs and availability raised some interesting comments. Students could pay £40 for a textbook and hardly use it, which was considered a waste of money. The availability of textbooks was confirmed as "not good".
Staff responsiveness and availability were considered variable, with some staff always available and quick to respond to e-mails, while others were never around and would take weeks to respond. This was a major cause of dissatisfaction. A reasonable time to acknowledge e-mails was put at between 24 and 48 hours. This aspect links to the promptness and usefulness of feedback on assessments. Feedback was considered too slow. A reasonable timescale for returning coursework was put at three weeks, but this was recognised as being dependent upon the number of students taking a module. Feedback was considered useless if it concentrated on the work done (which would usually not be repeated) rather than on future work. There were also problems deciphering some staff members' handwriting, and some Schools/modules gave no feedback at all.
The way timetables were organised also came in for much criticism, as many
students had to fit part-time employment around their timetables.
With regard to non-teaching contact staff, the helpfulness of office or administration staff and of IT technical staff, although considered to be of low importance, was rated low in satisfaction in the questionnaire returns. The focus groups did comment on a perceived lack of helpfulness: the office was "always busy" with long queues. Technical staff, however, were considered helpful, as were the LRC staff generally.
Conclusions
Based on the results of this comprehensive study of students within the Faculty of Business and Law at Liverpool John Moores University (LJMU), it is clear that many of the physical aspects of the University's services are not important with regard to student satisfaction. This finding supports previous findings by Schneider and Bowen (1995), Banwet and Datta (2003) and Hill et al. (2003), all of whom found that the most important aspects of a university's service offerings were associated with the core service, i.e. the lecture, including the attainment of knowledge, class notes and
materials and classroom delivery. Furthermore, the findings also confirm the research of Price et al. (2003), in that the University's physical facilities appear to influence students' choice of university. However, once enrolled, it is the quality of the teaching and learning experience that is of importance. Fortunately, LJMU has a state-of-the-art Learning
Resource Centre equipped with scores of computer stations fitted with the latest
software and linked to the Internet. The Faculty of Business and Law also has a new
technologically advanced Lecture Theatre and a large IT suite. These aspects of the
facilities can be, and indeed are, used to attract students to the Faculty of Business and
Law, for example during Open Days. However, once students have enrolled, it is the quality of the teaching and learning that will cause satisfaction or dissatisfaction, and they are prepared to tolerate, to a large extent, "wobbly tables" and paint flaking off walls as long as the teaching they receive is at an acceptable level. This may have
implications for management responsible for resource allocations to various areas of
the University services and infrastructure.
Student feedback tends to confirm that students do receive high-quality teaching from staff with high levels of expertise in their various academic disciplines. The lecture and the tutorial are the core services provided by the University, and it is what goes on in these classrooms that determines student satisfaction with the explicit service. Students are prepared to tolerate, to a large extent, deficiencies in the physical aspects of the facilities as long as the teaching they receive is perceived to be at an acceptable level.
The focus groups confirmed the findings of Banwet and Datta (2003) that students "vote with their feet" based on their experiences in lectures and are more likely to enrol on an optional module delivered by a teacher perceived as providing good teaching. However, in line with Coles (2002), large classes are likely to cause dissatisfaction.
The explicit service is aided (and abetted) by facilitating goods such as PowerPoint presentation slides, supplementary handout materials and the recommended textbooks. Given the large number of students taking most modules, there will never be sufficient textbooks available in the LRC to satisfy demand.
With regard to quality improvement, it may be worthwhile introducing explicit standards of service for various aspects of the University's services. These would cover most "moments of truth". For example, teaching staff would undertake to respond to all student e-mails within 48 hours and aim to provide feedback on coursework assignments within 15 working days. Similar standards could be introduced in the LRC and administration offices, and could encompass internal and external service-level agreements. Management can also be involved in such service standards, for example by guaranteeing that tutorial classes will not have more than 20 students. It is also the responsibility of management to provide the resources necessary to meet any such standards.
Universities worldwide are now competing for students both nationally and internationally. In order to recruit and retain students, they should aim to enhance student satisfaction and reduce student dissatisfaction. This can only be achieved if all the services that contribute to academic life are delivered to a suitable standard. The students are the sole judges of whether or not this has been achieved; therefore, student satisfaction surveys should be undertaken on a regular basis and a university's service offering adapted accordingly.

Glossary

EU      European Union
HE      Higher Education
HEFCE   Higher Education Funding Council for England
IT      Information Technology
LJMU    Liverpool John Moores University
LRC     Learning Resources Centre

References
Aldridge, S. and Rowley, J. (1998), "Measuring customer satisfaction in higher education", Quality Assurance in Education, Vol. 6 No. 4, pp. 197-204.
Angur, M.G. (1998), "Service quality measurements in a developing economy: SERVQUAL versus SERVPERF", Journal of Customer Service in Marketing Management, Vol. 4 No. 3, pp. 47-60.
Asubonteng, P., McCleary, K.J. and Swan, J.E. (1996), "SERVQUAL revisited: a critical review of service quality", The Journal of Services Marketing, Vol. 10 No. 6, pp. 62-81.
Banwet, D.K. and Datta, B. (2003), "A study of the effect of perceived lecture quality on post-lecture intentions", Work Study, Vol. 52 No. 5, pp. 234-43.
Buttle, F. (1996), "SERVQUAL: review, critique, research agenda", European Journal of Marketing, Vol. 30 No. 1, pp. 8-32.
Carlzon, J. (1989), Moments of Truth, HarperCollins, New York, NY.
Coles, C. (2002), "Variability of student ratings of accounting teaching: evidence from a Scottish business school", International Journal of Management Education, Vol. 2 No. 2, pp. 30-9.
Crawford, F. (1991), Total Quality Management, Committee of Vice-Chancellors and Principals, occasional paper (London, December), cited in Hill, F.M. (1995), "Managing service quality in higher education: the role of the student as primary consumer", Quality Assurance in Education, Vol. 3 No. 3, pp. 10-21.
Dale, B.G. (2003), Managing Quality, 4th ed., Blackwell Publishing, Oxford.
Deming, W.E. (1982), Out of the Crisis, Massachusetts Institute of Technology, Cambridge, MA.
Devlin, S.J., Dong, H.K. and Brown, M. (1993), "Selecting a scale for measuring quality", Marketing Research, Vol. 5 No. 3, pp. 12-17.
Dillon, W.R., Madden, T.J. and Firtle, N.H. (1993), Essentials of Marketing Research, Irwin, Boston, MA.
Galloway, L. (1998), "Quality perceptions of internal and external customers: a case study in educational administration", The TQM Magazine, Vol. 10 No. 1, pp. 20-6.
Gold, E. (2001), "Customer service: a key unifying force for today's campus", Netresults, National Association of Student Personnel Administrators, 22 January, available at: www.naspa.org/netresults, cited in Banwet, D.K. and Datta, B. (2003), "A study of the effect of perceived lecture quality on post-lecture intentions", Work Study, Vol. 52 No. 5, pp. 234-43.
Harvey, L. (1995), "Student satisfaction", The New Review of Academic Librarianship, Vol. 1, pp. 161-73.
HEFCE, UUK and SCP (2003), Final Report of the TQEC on the Future Needs and Support for Quality Enhancement of Learning and Teaching in Higher Education, TQEC, London.
Hill, F.M. (1995), "Managing service quality in higher education: the role of the student as primary consumer", Quality Assurance in Education, Vol. 3 No. 3, pp. 10-21.
Hill, Y., Lomas, L. and MacGregor, J. (2003), "Students' perceptions of quality in higher education", Quality Assurance in Education, Vol. 11 No. 1, pp. 15-20.
James, D.L., Baldwin, G. and McInnis, C. (1999), Which University? The Factors Influencing the Choices of Prospective Undergraduates, Centre for the Study of Higher Education, Melbourne.
Jones, T. and Sasser, W.E. Jr (1995), "Why satisfied customers defect", Harvard Business Review, November-December, pp. 88-99.
Low, L. (2000), Are College Students Satisfied? A National Analysis of Changing Expectations, Noel-Levitz, Iowa City, IA, cited in Banwet, D.K. and Datta, B. (2003), "A study of the effect of perceived lecture quality on post-lecture intentions", Work Study, Vol. 52 No. 5, pp. 234-43.
Martilla, J.A. and James, J.C. (1977), "Importance-performance analysis", Journal of Marketing, January, pp. 77-9.
O'Neill, M. (2003), "The influence of time on student perceptions of service quality: the need for longitudinal measures", Journal of Educational Administration, Vol. 41 No. 3, pp. 310-24.
Parasuraman, A., Zeithaml, V.A. and Berry, L.L. (1988), "SERVQUAL: a multiple-item scale for measuring consumer perceptions of service quality", Journal of Retailing, Vol. 64 No. 1, pp. 12-24.
Pariseau, S.E. and McDaniel, J.R. (1997), "Assessing service quality in business schools", International Journal of Quality and Reliability Management, Vol. 14 No. 3, pp. 204-18.
Patterson, P.G. and Spreng, R.A. (1997), "Modelling the relationship between perceived value, satisfaction and repurchase intentions in a business-to-business service context: an empirical examination", International Journal of Service Industry Management, Vol. 8 No. 5, pp. 414-34.
Price, I., Matzdorf, F., Smith, L. and Agahi, H. (2003), "The impact of facilities on student choice of university", Facilities, Vol. 21 No. 10, pp. 212-22.
Rowley, J. (2003a), "Retention: rhetoric or realistic agendas for the future of higher education", The International Journal of Educational Management, Vol. 17 No. 6, pp. 248-53.
Rowley, J. (2003b), "Designing student feedback questionnaires", Quality Assurance in Education, Vol. 11 No. 3, pp. 142-9.
Sasser, W.E., Olsen, R.P. and Wyckoff, D.D. (1978), Management of Service Operations, Allyn and Bacon, Boston, MA.
Schneider, B. and Bowen, D.E. (1995), Winning the Service Game, Harvard Business School Press, Boston, MA.
Sohail, M.S. and Shaikh, N.M. (2004), "Quest for excellence in business education: a study of student impressions of service quality", The International Journal of Educational Management, Vol. 18 No. 1, pp. 58-65.
Tam, M. (2002), "Measuring the effect of higher education on university students", Quality Assurance in Education, Vol. 10 No. 4, pp. 223-8.

Appendix
Questionnaire design
The questionnaire was designed around the concept of the service-product bundle described earlier. It therefore contained questions relating to the physical facilities/facilitating goods, the explicit service and the implicit service (see Figure A1).
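As a rough illustration, this three-part structure can be expressed as a simple data structure. The grouping below is an assumption for demonstration purposes: the items are drawn from the survey aspects listed in Table V, but their assignment to bundle components is a plausible reading, not the authors' published mapping.

```python
# Illustrative grouping of survey items by service-product bundle
# component. The item-to-component assignment is an assumption,
# not the authors' published questionnaire mapping.

service_product_bundle = {
    "physical facilities/facilitating goods": [
        "The lecture rooms overall",
        "The Learning Resources Centre",
        "The recommended course textbooks overall",
    ],
    "explicit service": [
        "Teaching ability of staff",
        "Promptness of feedback on performance",
        "The appropriateness of the method of assessment",
    ],
    "implicit service": [
        "The friendliness of teaching staff",
        "The feeling that your best interests are being served",
    ],
}

# A questionnaire built this way can be iterated component by
# component, keeping related satisfaction/importance items together.
for component, items in service_product_bundle.items():
    print(f"{component}: {len(items)} example items")
```

Structuring the instrument around the provider's own service model, rather than around ad hoc user suggestions, is what places responsibility for content and design on the service provider.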
Figure A1. The implicit service