
A STRUCTURAL CONTINGENCY THEORY MODEL OF LIBRARY AND TECHNOLOGY PARTNERSHIPS WITHIN AN ACADEMIC LIBRARY INFORMATION COMMONS

Cameron K. Tuai

Purpose – The integration of librarians and technologists to deliver information services represents a new and costly organizational challenge
for many library administrators. To understand how to control the costs
of integration, this study uses structural contingency theory to study the
coordination of librarians and technologists within the information
commons.
Design/methodology/approach – This study tests the structural
contingency theory expectation that an organization will achieve higher
levels of performance when there is a positive relationship between the
degree of workflow interdependence and the complexity of coordinative
structures necessary to integrate these workflows. This expectation was
tested by (a) identifying and collecting a sample of information commons;
(b) developing and validating survey instruments to test the proposition;
and (c) quantitatively analyzing the data to test the proposed contingency
theory relationship.

Advances in Library Administration and Organization, Volume 31, 1–87


Copyright © 2012 by Emerald Group Publishing Limited
All rights of reproduction in any form reserved
ISSN: 0732-0671/doi:10.1108/S0732-0671(2012)0000031004

Findings – The contingency theory expectations were confirmed by finding both a positive relationship between coordination and interdependence and a positive relationship between perceptions of performance and degree of congruency between interdependence and coordination.
Limitations – The findings of this study are limited to both the context of
an information commons and the structures tested. Future research should
seek to both broaden the context in which these findings are applicable,
and test additional structural relationships as proposed by contingency
theory.
Practical implications – This study contributes to the library profession in
a number of ways. First, it suggests that managers can improve IC
performance by matching coordination structures to the degree of
interdependence. For instance, when librarians and technologists are
strictly co-located, managers should coordinate workflows using less
resource-intensive policies rather than meetings. Second, the instruments
developed in this study will improve the library manager’s ability to
measure and report unit interdependence and coordination in a valid and
reliable manner. Lastly, it also contributes to the study of structural
contingency theory by presenting one of the first empirical confirmations
of a positive relationship between interdependence and coordination.
Originality/value – This study represents one of the first empirical
confirmations of the structural contingency theory expectations of both a
positive relationship between workflow interdependence and coordination,
and a positive relationship between performance and coordination’s fit to
workflow interdependence. These findings are of value to both organiza-
tional theorists and to administrators of information commons.

Keywords: Information commons; structural contingency theory; integration; cooperation; coordination; workflow

The integration of librarians and technologists to deliver information services represents a new and costly organizational challenge for many
library administrators. To understand how to control the costs of
integration, this chapter will use structural contingency theory to study the
coordination of librarians and technologists within the information
commons (ICs). Contingency theory seeks to optimize organizational
performance by proposing a positive relationship between the degree of
workflow interdependence and the complexity of coordinative structures
necessary to integrate these workflows. To test this theory, the chapter identified a sample IC population, developed survey instruments, and
quantitatively analyzed the resulting data. The chapter confirmed con-
tingency theory expectations by finding both a positive relationship between
coordination and interdependence, and a positive relationship between
perceptions of performance and degree of congruency between interdepen-
dence and coordination. Note that these findings are limited to the context
of an IC. Future research opportunities include extending the context, or
examining additional variables, such as technology. This chapter contributes
to the library profession in two ways. First, it suggests that managers can
improve IC performance by matching coordination structures to the degree
of interdependence. For instance, when librarians and technologists are
strictly co-located, managers should use policies, not meetings, to coordinate
workflows. Second, it improves the library manager’s ability to validly and
reliably measure and report unit interdependence and coordination. This
chapter also contributes to organizational theory and structural contingency
theory by presenting one of the first empirical confirmations of a positive
relationship between interdependence and coordination.
The growing use of partnerships between librarians and technologists to
deliver information services represents a new organizational challenge for
many library administrators. Integrating these two culturally different
partners is a complex undertaking, which likely falls outside the collective
knowledge of many library administrators. In this context, much can be
learned from examining the integration of librarians and technologists.
Since the widespread introduction of personal computing into the
academy, librarians have discussed the potential for improving information
services by combining library and computing services. A sampling of voices
from the 1980s finds librarians pondering, ‘‘With the changes that have
taken place during the past fifteen years in the library and in the computer
center … does one dare ask about the next fifteen years to 2000 AD?’’ (Neff,
1986, p. 19). Others worrying, ‘‘A multiplicity of issues must be considered
as we take the best from … libraries and computing – and move toward the
integrated information support system of the future’’ (Molholt, 1985,
p. 288). And yet others, prognosticating ‘‘For the sake of scholarship and
research, the two [libraries and computing centers] must devise an integrated
approach to delivering the common commodity’’ (Jones, 1984a, p. 32).
Some 25 years later, although librarians still ponder, worry, and
prognosticate on how information technology (IT) will affect library
services, what has changed is that the integration of public access to library
and computing services has largely come to pass within public service units
such as the ICs. Unfortunately, a review of the literature shows that many of
the issues that concerned librarians in the past have yet to be resolved. More
specifically, although an extensive body of literature exists on the topic, the
majority of it is merely ‘‘surveys of practice, speculation about practice, and
recommendations regarding suitable organizational and management
strategies’’ (Lynch, 1990, p. 218). This literature may be ideal for identifying
and describing administrative issues, but the absence of methodological
rigor limits its generalizability and value in the design and operation of an
integrated IC. Kirk (2008) neatly summarizes the approach this researcher
has taken to resolve these issues regarding the integration of libraries and
computer centers:

I believe it is more important to talk about the relationship between technology-based units and library services and to conceive of them as ‘‘collaborating organizations’’ that
may take on a number of structures. A specific structure is not the destination. The
critical issue in thinking about a merged organization is not to find a model to apply in a
particular institution but rather to understand the dynamics of coordination and
collaboration and how structures are suited to support those dynamics. (p. 3)

The proposed research takes up Kirk’s challenge by using theoretical and empirical methods drawn from the field of organizational research, in
particular the ideas from structural contingency theory. This theory is ideally
suited to the proposed research area because its primary focus is on
understanding the relationship between organizational context and structure.
Five steps will support this effort to understand the dynamics of
coordination and integration:

1. Review the theoretical literature in order to define a conceptual framework for the research.
2. Situate the conceptual frameworks into the empirical literature in order
to define and propose the relationships that form the research questions.
3. Analyze the theoretical and empirical literature’s methodological
approaches to the research questions in order to create a research
instrument.
4. Gather a sample of ICs and develop measures.
5. Analyze and report the applicability of structural contingency theory
expectations with respect to the ICs.

The work aims to develop, test, and examine the mechanics of coordination and integration within the IC. The findings should allow IC
managers to reduce the costs of collaboration and give information science
researchers the tools to address questions concerning library and computing center integration.

LITERATURE REVIEW

This literature review describes the boundaries that define the areas of
concern included and excluded from the research area. Drawing from
structural contingency theory literature, the empirical literature, and the
library literature, the conceptual framework will describe the concepts and
variables concerned with the integration of collaborative workflows within
an IC. In particular, the conceptual framework will focus on the variables
and relationships of workflow interdependence, coordination, behavioral
differentiation, and performance.
Structural contingency theory, or contingency theory for short, defines
organizations as ‘‘collectivities oriented to the pursuit of relatively specific
goals and exhibiting relatively highly formalized social structures’’ (Scott,
1992, p. 23). Within this definition, contingency theorists describe organi-
zations in terms of four structural features: centralization, formalization,
division of labor, and coordination. These organizational structures are
dependent upon three contexts or contingencies: size, technology, and
interdependence. Given the relationships among the independent contin-
gencies and the dependent structures, researchers generally use contingency
theory within an intraorganizational unit of analysis. This includes both the
structures internal to a particular unit and the structures external to it.
Contingency theory normally does not examine the individual in isolation,
nor an organization’s interaction with its environment or other organiza-
tions. Therefore, researchers will generally not apply contingency theory to
study the social or psychological levels of the organization’s effects on
individuals, nor will they apply it to investigate the ecological level of
organizations or classes of organizations interacting with their environments
(Scott, 1992).
The underlying premise of contingency theory is that no one best way
exists to organize, but not all ways of organizing are equally effective
(Galbraith, 1973). Given this supposition, contingency theorists endeavor to
identify the optimal organizational structure for a given organizational
contingency or context. Within a collaborative information service context,
numerous contingencies exist; the area of concern for the proposed research
is the integration of librarians and technologists within an IC. Contingency
theory defines integration as ‘‘the process of achieving unity of effort among
the various subsystems in the accomplishment of the organization’s task’’ (Lawrence & Lorsch, 1967a, p. 4). Donaldson (2001) refines this definition
by stating that integration is the product of the relationship between
interdependence and coordination. Combining these two definitions allows
one to describe the conceptual framework for this research in terms of the
coordination of the interdependent service workflows of librarians and
technologists within the information service unit of the ICs.
Contingency theory is a widely accepted organizational theory in the field
of management. Within library and information science management, it is
similarly accepted both in textbooks (Jones, 1984b; Stueart & Moran, 2007)
and the journal literature (Kirk, 2004; Moran, 1978; Weiner, 2003). A study
of particular relevance from the library literature is Weng’s (1997b)
dissertation, which is the only study to apply contingency theory empirically
to a library setting. Weng uses a divisional unit of analysis and focuses
largely upon the relationship between technology and organizational-level
structures. Although her research examines intra-unit levels of interdepen-
dence, she does not measure coordination in terms of service workflows.
Further, she calculates unit-level interdependence by summing individual
surveys rather than taking the mean of the individuals within the particular
unit. This approach is similar to that of other studies that have calculated unit scores and then compared these scores as representing the characteristics of the unit as a whole (Perrow, 1967). This work builds upon Weng’s research by focusing specifically on the coordination of workflow interdependence at an intra-unit level of analysis, but it uses unit-level means to represent the department as the ‘‘unit of analysis’’ rather than individual employees (Scott, 1992).
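
To make the difference between these two aggregation rules concrete, the following minimal sketch (in Python, using hypothetical ratings rather than data from any study cited here) contrasts summed unit scores with unit-level means.

from statistics import mean

# Hypothetical 5-point interdependence ratings from staff in two ICs of
# different sizes.
responses = {
    "ic_a": [3, 4, 3],        # three respondents
    "ic_b": [3, 4, 3, 4, 3],  # five respondents
}

for unit, scores in responses.items():
    # A summed score grows with the number of respondents; the mean does not.
    print(f"{unit}: sum={sum(scores)}, mean={mean(scores):.2f}")

# ic_a: sum=10, mean=3.33; ic_b: sum=17, mean=3.40. The sums differ mainly
# because of unit size, while the means describe the unit-level construct.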
The following literature review examines the three variables of interest:
interdependence, coordination, and behavioral differentiation. It then
introduces how these variables relate in terms of a fit expectation and how
the fit or non-fit expectations affect performance. Lastly, this review presents
critiques of contingency theory and its broader ontological assumptions.

Interdependence

Interdependence is the contingency that describes the connection between activities within a particular work process. In his book Organizations in
Action, Thompson (1967) describes three widely cited degrees of increasing
interdependence: pooled, sequential, and reciprocal. Pooled interdepen-
dence occurs when an organization’s various operations contribute to the
organization but do not require any mutual interactions. Thus a failure
within one unit’s work processes does not directly affect other units’ work
processes. For example, a failure in a library’s circulation unit does not
directly affect the serials cataloging unit. The next higher degree of
interdependence is sequential, which occurs when unit A’s outputs inform
unit B’s inputs. Thus a failure of A directly affects B, but not vice versa. For
example, a failure in acquisitions will affect cataloging, but a failure in
cataloging will not affect acquisitions. The highest degree of interdepen-
dence is reciprocal, which occurs when inputs and outputs move back and
forth between operations. Failure of either operation results in the failure of
the other. Within the library, this relationship can be seen between
circulation and shelving. Circulation’s receipt of books forms the input for
shelving; conversely, shelving’s proper placement of books in the stacks
forms the input for circulation. Thus, failure of either partner will result in
problems for the other.
The challenge in examining the predictor variable of interdependence
within the professional information services literature is that it is rarely, if
ever, explicitly mentioned. This is likely because interdependence, as defined by the relationship between workflow actions (Thompson, 1967; Van de Ven, Delbecq, & Koenig, 1976), requires a level of detail that is rare
in the library case study literature. Although explicit mentions of inter-
dependence are infrequent, Thompson’s observation of increasing degrees of
interdependence within units at lower hierarchical levels suggests that some
degree of interdependence must exist between librarians and technologists
within a collaborative information service unit. To situate Thompson’s theoretical categories of interdependence in terms of an IC’s workflow interdependence, one can extrapolate definitions and examples from the library literature.
The professional library literature on integrated library and computing
centers describes interdependence theoretically and practically in various
ways. Bailey and Tierney’s (2008) handbook on the learning commons
summarizes some of the common conceptions of interdependence within an
IC setting, such as seamless integration of technology into the construction
of individual and shared knowledge, integration of a continuum of library
services and technology, and integration of a facility that focuses on complete
service to users. One challenge in analyzing the professional literature is to
move beyond broad descriptions of services or strategies to the more specific
level of service workflows. Beagle (1999) and Lippincott (2009) present a
number of good examples describing levels of interdependent workflows and
coordination that correspond to Thompson’s (1967) categories.
At the pooled level of interdependence, Beagle (1999) describes IC services as ‘‘walk-through consultancy’’ (p. 84), which involves librarians not only
providing traditional library reference services but also assisting to the best
of their abilities with patron demands for the digital processing of
information. Lippincott (2009) describes a similar level of service occurring
in a situation of co-located services, which she defines as librarians and
technologists located in the same physical space for the purpose of
simplifying patron referral to the help desk. These two models of IC service
describe different divisions of labor; in both cases, however, each partner is
largely unaware of the services offered by the other and the IC provides no
new services other than convenience of access to both library and
technology support. From an interdependence perspective, the workflow
required to deliver convenience of access requires little to no interaction
between librarians and technologists. In reviewing the case literature, the
services Beagle and Lippincott define at the pooled level of interdependence
are described in (a) co-located desks (Franks & Tosko, 2007; Nikkel, 2003;
Spencer, 2007); (b) high division of labor (Baker & Kirk, 2007; Fitzpatrick,
Moore, & Lang, 2008; Griffin, 2000); and (c) low division of labor as
represented by librarians taking over the majority of IC functions, to the
exclusion of the IT partner (Alexander, Lassalle, & Steib, 2005; Kent &
McLennan, 2007; Nikkel, 2003). Researchers should keep in mind two
points in inferring degrees of interdependence from these definitions.
First, evidence of lower levels of interdependence (e.g., pooled) does
not necessarily mean the absence of higher levels of interdependence
(e.g., sequential) within the same unit (Thompson, 1967). Second, the degree
of interdependence within a particular IC workflow may differ from that of
other IC workflows. Therefore, grouping all IC services into one category of
interdependence may not accurately reflect the actual level of workflow
interdependencies.
The second level of interdependence, sequential, can be seen in Beagle’s
(1999) referral consultancy, which occurs when librarians diagnose and refer
patrons to the appropriate technology staff members or vice versa.
Lippincott’s (2009) category of inter-unit cooperative services – which
occurs when mutual understanding allows for development of an overall
service plan, informed referrals, and planning of new types of service –
describes a sequential level of interdependence. Both definitions describe a
situation where the level of mutual understanding is sufficient for the
partners to shape their services to better fit their partners. This could occur
when employees understand both library and technology aspects so that
they can provide a modest level of service before referring the patron to the
other partner. For instance, reference librarians will more likely conduct
reference interviews for technology questions if they have an understanding of
the synergies offered by delivering information in a digitally integrated
environment. The case study literature describes these types of services in
terms of informed referrals (Crockett, McDaniel, & Remy, 2002); cross-
functional assistance (Church, 2005; Cowgill, Beam, & Wess, 2001; Spencer,
2007); and tiered service (Fitzpatrick et al., 2008). In comparison, inter-
dependence is at only a pooled level if IC staff members break questions into
either library or technology issues without offering additional assistance.
The last level of interdependence, reciprocal, is captured by Beagle’s
(1999) idea of case management and Lippincott’s (2009) definition of
collaborative services. Beagle defines case management as librarians and
computing center staff teaming together to resolve patron information
needs. Lippincott defines reciprocal interdependence as a collaborative level
of service found when technologists and librarians develop common goals
and programs. In both cases the authors speculate that the IC has yet to
reach this level of service. A review of the case literature largely supports this
speculation, finding little evidence of services requiring reciprocal levels of
interdependence. One of the few examples of services that require a
reciprocal level of interdependence comes from Earlham College which
offers services such as ‘‘training users to find and utilize podcasts, customize
course management systems, create web sites, develop multimedia
presentations, and other software tasks’’ (Baker & Kirk, 2007, p. 385).
Empirical research on interdependence within this context comes
primarily from Weng (1997a) who measures Thompson’s (1967) three
categories of interdependence using Van de Ven et al.’s (1976) instrument.
Two other studies of interest are Lynch (1974) and Tushman (1979) who
define interdependence as a sub-scale of technology, similar to Lawrence
and Lorsch’s (1967a) approach, rather than as a measure of workflow
interdependence separate from technology, as Thompson proposed. Both
Lynch and Tushman measure intradepartmental dependence using an
instrument similar to Mohr’s (1971) in that interdepartmental relations are
nonspecific and composed of a single measure in contrast to Thompson’s
three coordination measures. Support for this researcher’s use of Thompson’s definition of interdependence comes from Lynch’s conclusion that
interdependence is likely not a technology variable but rather a structural
variable. She supports this conclusion through her factor analysis, which
loads task interdependence with the coordinative variable of rules. This
loading supports Thompson’s idea of a relationship between interdepen-
dence and coordination.

Coordination

Contingency theorists define coordination as ‘‘the means of integrating or linking together different parts of an organization to accomplish a collective
set of tasks’’ (Van de Ven et al., 1976, p. 322). March, Simon, and Guetzkow
(1958) explain the relationship between interdependence and coordination
as matching the degree of uncertainty associated with a given degree of
interdependence with the appropriate level of flexibility associated with a
given degree of coordination. For example, within pooled interdependence,
coordination is relatively simple because the unit is largely dependent upon
its own actions and therefore it faces a relatively small number of
contingencies. In comparison, in reciprocal interdependence, coordination
is more complex because a unit’s actions depend upon the other partner and
thus a greater number of workflow contingencies exist.
Thompson (1967) posits three levels of coordination: coordination by
standardization, coordination by plan, and coordination by mutual
adjustment. Each of these relates to his definitions of interdependence,
with standardization coordinating pooled interdependence, planning coor-
dinating sequential interdependence, and mutual adjustment coordinating
reciprocal interdependence. The lowest degree of coordination, coordina-
tion by standardization, integrates pooled interdependence through the
establishment of routines and rules that constrain the actions of the inter-
dependent partners. The second level of coordination, planning, integrates sequential workflows through standardized programs. For example,
managers will set targets or goals that allow the partners to coordinate
independently within the parameters of a goal or target. Lastly, coordina-
tion through mutual adjustment involves personal communications either
laterally between the partners or vertically within the hierarchy and is
appropriate in cases of reciprocal interdependence between units.
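
Thompson’s pairing of interdependence and coordination can be summarized as a simple lookup; the sketch below is a hypothetical illustration of that pairing, not a component of the instrument developed in this study.

# Thompson's (1967) expected pairing of interdependence and coordination.
EXPECTED_COORDINATION = {
    "pooled": "standardization",        # rules and routines
    "sequential": "planning",           # schedules, targets, goals
    "reciprocal": "mutual adjustment",  # personal communication
}

def expected_fit(interdependence: str, coordination: str) -> bool:
    """Return True when the coordination mode matches the theoretical
    expectation for the given degree of interdependence."""
    return EXPECTED_COORDINATION[interdependence] == coordination

# A co-located desk (pooled) coordinated mainly through meetings would be
# flagged as a misfit; reciprocal workflows coordinated through meetings fit.
print(expected_fit("pooled", "mutual adjustment"))      # False
print(expected_fit("reciprocal", "mutual adjustment"))  # True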
The organizational theory literature includes a number of refinements to
Thompson’s original categorization of coordination. Van de Ven et al.
(1976) expand Thompson’s definition of coordination through standardiza-
tion to include both formal and informal policies and procedures. Galbraith
(1973) offers an intermediate level of coordination between Thompson’s first
and second categories in the form of coordination through hierarchical
referral which occurs when a manager directly coordinates a workflow
contingency that falls outside of policies and programs. Van de Ven et al.
(1976) refine coordination by mutual adjustment by defining personal
communications along two axes: planned or unplanned, and one-to-one or
group. Lawrence and Lorsch (1967a) also expand coordination through
mutual adjustment by describing coordination through a facilitator who negotiates between interdependent units. The literature further calibrates the
concept of coordination by breaking it into impersonal or personal, formal
or informal (Van de Ven et al., 1976).
Examples of coordination as discussed within the library literature help to
illustrate how the theoretical categories of coordination fit into an IC
context. The simplest form of coordination found within the IC literature is
the formal documents that define the roles and responsibilities of the
librarians and technologists (Duncan, 1998; Trawick & Hart, 2000;
Woodsworth, 1988). The second level of coordination, coordination by
planning, manifests in two different forms: in documents that provide some
degree of flexibility in constraining staff actions such as schedules or work
plans (Thompson, 1967; Van de Ven et al., 1976); and in formal personal
communications, such as in hierarchical referrals, goal setting, scheduled
meetings, and task forces (Daft, 2001; Galbraith, 1973). An information
service seldom has a counterpart to coordination through schedules or work
plans, coordinative structures that generally occur in a manufacturing
context. However, information service researchers describe a number of
examples of coordination through the second form of planning, including
cross-representation on departmental team meetings (Foley, 1997; Frand &
Bellanti, 2000; Todd, Mardis, & Wyatt, 2005), managers acting as liaisons
(Barton & Weismantel, 2007; Fitzpatrick et al., 2008; Fox, Fritz, Kichuk, &
Nussbaumer, 2001), and establishment of common goals through manager-
level committees (Alexander et al., 2005; Dallis & Walters, 2006; Morales &
Sparks, 2006). Within the information services literature, a common feature
of this second form of coordination is the communication of each partner’s
interpretation of how formal and informal rules constrain actions. For this
research, coordination by plan is defined as the constraining structures
resulting from commonly held beliefs or norms that coordinate service
workflows. In comparison to coordination by standardization, coordination
by plan is more flexible and relies more upon descriptive rather than
prescriptive constraints.
Thompson’s (1967) highest level of coordination, coordination by mutual
adjustment, allows the greatest level of flexibility. Partners, through face-to-
face exchanges of information, may change the rules that constrain their
actions based upon their mutual interpretation of environmental change.
The IC case study literature describes coordination by mutual adjustment
primarily in terms of informal contact (Baker & Kirk, 2007; Church,
2005; Dallis & Walters, 2006; Greenwell, 2007), through activities that
promote informal contact such as exchange of personnel (Samson,
Granath, & Pengelly, 2000), cross-functional teams (Foley, 1997; Fox et al.,
2001), and cross-training (Kent & McLennan, 2007; Metzger, 2006; Wolske,
Larkee, Lyons, & Bridgewater, 2006). In reviewing the IC literature
concerning coordination by mutual adjustment, it is difficult to ascertain
whether these coordinative structures influence workflows to the degree
Thompson implied. The integrative effects of lateral communication in
information services seem to fit the degree of uncertainty linked to
sequential interdependence better than that of reciprocal interdependence.
For instance, Baker and Kirk (2007) describe the benefits of face-to-face
communication in terms of the opportunity to ‘‘gain a finer understanding
of their colleagues’ work’’ (p. 382); Greenwell (2007) says it ‘‘allows service desk personnel to make better referral’’ (p. 40); and Dallis and Walters (2006) report that ‘‘the key factor in helping students successfully navigate the suite of IC services is the atmosphere of cooperation’’ (p. 253). These
statements better describe the sequential level of interdependence associated with Beagle’s (1999) definition of IC services at a referral consultancy level, although they fit somewhat better with Lippincott’s (2009) broader idea of cooperative services as the development of mutual goals.
Empirically, Weng (1997a) measures coordination using the degree of
hierarchical authority present within the library unit. For instance, higher
levels of hierarchical authority would be similar to coordination through
policies and procedures; and lower levels of hierarchical authority would be
similar to coordination through mutual adjustment. A closer examination of
Weng’s measure presents an example of the challenges in measuring
coordination, especially in terms of maintaining consistency in the unit of
analysis. Weng (1997a) identifies four items in decreasing hierarchy of
authority: (a) upper management outside your department, (b) department
heads, (c) individuals within the unit, and (d) groups within the unit. To
measure centralization/decentralization, she defines upper management and department heads as representing centralization, while individuals and groups represent decentralization. This definition is problematic because
hierarchy of authority is dependent upon the unit of analysis. For example,
‘‘upper management outside your department’’ will always represent a
centralized authority; however, the ‘‘department head’’ is a centralized
authority at an intradepartmental level of analysis but not at the
interdepartment level. Another way to see this is that staff within the
department (intradepartmental) will view the department head as a
centralizing force; however, other department heads (interdepartmental)
view themselves as a decentralized force relative to upper management.
By using only one measure, Weng confounds her assessment of centralization of authority because she considers department heads as representing
centralization at an interdepartmental unit of analysis. This methodological
error is subtle, but it demonstrates the ease with which issues arise with
regard to the measurement of coordination and consistency in unit of
analysis. Other researchers who have measured coordination within libraries
include Vorwerk (1970) and Hook (1980), who define coordination as the
degree to which a department can define its own objectives and make
changes to its activities, similar to Lawrence and Lorsch’s (1967b, p. 250)
measurement of coordination.

Differentiation

The IC literature frequently discusses theoretical and anecdotal differences between librarians and technologists. Authors of these papers will often note
how differences between partners can create barriers to cooperation and
how library IC managers can minimize this through actions such as retreats,
cross-training, or informal gatherings (Blain, 2000; Sharrow, 1995; Vose,
2008). For the present analysis, the issue of cooperation is similar to issues
concerning coordination; the literature seems to reflect a perception that
differences between librarians and technologists act as a positive moderating
force on the relationship between workflow interdependence and coordina-
tion. Within the organizational theory literature, Lawrence and Lorsch
(1967a, 1967b) most clearly address this issue in terms useful for this
research.
Lawrence and Lorsch (1967a, 1967b) examine the relationship between
behavioral differentiation and integration. They measure differentiation
between units in terms of (a) the unit’s goal orientation, (b) interpersonal
orientation, and (c) time orientation (1967a), and define integration as ‘‘the
process of achieving unity of effort among the various subsystems in
the accomplishment of the organization’s task’’ (1967a, p. 4). They focus
on behavioral differentiations as found in the organizational units for
R&D, sales, and production units in six chemical processing plants. The
researchers report that greater levels of differentiation between departments
require more complex integrative structures in order to achieve effectiveness.
Examples of these integrative devices range from simple coordinative
structures such as hierarchy and rules to more complex structures such as
integrating individuals or integrating departments. Lawrence and Lorsch’s
theory is key to understanding behavioral differentiation; other researchers
have refined their research by extending the definition of behavioral differentiation in terms of concepts such as idiosyncratic norms, values, and
language research (Allen & Cohen, 1969; Blau, 1972; Daft, 1978; Daft &
Macintosh, 1981; Dearborn & Simon, 1967).
Lawrence and Lorsch’s (1967b) definitions of behavioral differentiations
can be reasonably interpreted in terms of IC managers’ perceptions of the
differences between librarians and technologists. Turning first to goal
orientation, the IC literature discusses both explicit differences (Blain, 2000;
Sharrow, 1995; Vose, 2008) and implicit differences through statements
regarding traditional roles (Baker & Kirk, 2007; Cowgill et al., 2001; Foley,
1997) and service models (McKinstry & McCracken, 2002; McLean, 1997;
Tucker & McCallon, 2008). Similarly, the literature also discusses inter-
personal differences, both explicitly (Frand & Bellanti, 2000; Wolske
et al., 2006; Yohe & AmRhein, 2005) and implicitly, through comparison
of librarians with technologists in terms of ‘‘shushing bookworms’’ versus
‘‘speakers of a foreign language’’ (Telatnik & Cohen, 1993), passive-
aggressive versus aggressive-abrasive (Flowers & Martin, 1994), or female
versus white male (Channing & Dominick, 2000). Lastly, the literature also
mentions behavioral differentiation in terms of time, with librarians
described as having a long-term perspective and thus favoring gradual
change and predictability while technologists have a short-term perspective
and favor rapid change and risk taking (Foley, 1997; Nielsen, Steffen, &
Dougherty, 1995). Foley (p. 100), who empirically tested for differences in time orientation through an internal survey, found that librarians perceive
the environment as ‘‘stable and calm’’ while the computing staff perceive the
environment as ‘‘dynamic and chaotic.’’ A caveat in regard to these
descriptions of time orientation is that both examples are over ten years old
and perceptions may have changed.
The empirical library literature concerning behavioral differentiation
consistently shows the difficulty in capturing this measure. Hook’s (1980)
and Vorwerk’s (1970) replications of Lawrence and Lorsch’s (1967a) work demonstrate this challenge. Hook and Vorwerk each chose public services
and technical services as two of their differentiated divisions, with Hook
also sampling systems office, and Vorwerk sampling library administration.
Their sampling instruments are largely duplicates of Lawrence and Lorsch’s
with slight modifications. The goal in each study was to measure
coordination, integration, and differentiation, to confirm that different
library divisions will develop both different organizational structures and
different staff member behavioral orientations. Further, the researchers
expected a positive relationship between behavioral differentiation and the
complexity of integrating, or coordinating, activities. Unfortunately, neither Vorwerk nor Hook was able to replicate Lawrence and Lorsch’s study
because both were unable to differentiate behavioral orientations within
their subpopulations. Analysis of Vorwerk’s and Hook’s method provides
some insight into the challenges of measuring differentiation within
libraries.
Comparing Lawrence and Lorsch’s (1967a) population to Vorwerk’s
(1970) and Hook’s (1980) suggests a methodological problem in the proper
stratification of the population. The underlying assumption in stratified
sampling is that subsets will be homogeneous within and heterogeneous
across the population (Babbie, 2004). Examination of Vorwerk’s data
(which is similar to Hook’s) reveals that this assumption does not hold for
the variables within his study. For example, Lawrence and Lorsch describe
a wide range of technological certainty spread between 3.5 and 9 on a
scale of 3 to 9. In comparison, Vorwerk reports only a narrow range of
technological certainty – between 8.0 and 9.9 on a scale of 3–21. The narrow
range of technological differentiation results in considerable homogeneity
of technological certainty between their stratification units of library
divisions. The problem of homogeneous subsets carries forward into the
variables that are dependent on technology. For example, Vorwerk reports
structural formality ranging between 11.5 and 15 on a scale of 5–20, in
comparison to Lawrence and Lorsch’s study, where formality ranged
from 10.8 to 19.5 on a scale of 6–24. Ultimately, both Vorwerk and Hook
found almost no differences among their subunits in terms of differentia-
tion between divisional staff’s interpersonal orientation, time orientation,
and goal orientation; the researchers, therefore, were unable to explore
Lawrence and Lorsch’s posited relationships among context, structure, and
coordination.
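
The stratification assumption at issue here can be inspected numerically; the following minimal sketch uses hypothetical scores (not Vorwerk’s or Hook’s data) to compare within-division and between-division spread on a single variable.

from statistics import mean, pstdev

# Hypothetical technological-certainty scores for two library divisions.
divisions = {
    "public_services": [8.2, 8.5, 8.9, 9.1],
    "technical_services": [8.0, 8.4, 8.8, 9.0],
}

division_means = {name: mean(scores) for name, scores in divisions.items()}
within_spread = mean(pstdev(scores) for scores in divisions.values())
between_spread = pstdev(division_means.values())

print(division_means)
print(f"within-division spread:  {within_spread:.2f}")
print(f"between-division spread: {between_spread:.2f}")

# When the between-division spread is no larger than the within-division
# spread, the strata are effectively homogeneous on that variable and cannot
# support the comparisons Lawrence and Lorsch's design requires.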
Weng’s (1997a, 1997b) study of 355 department heads or unit supervisors
from 136 academic libraries also investigates differentiation. Like Hook
(1980) and Vorwerk (1970), Weng stratifies her library population using the
organizational divisions of public services and technical services as units of
analysis. Also like Hook and Vorwerk, she has a methodological problem in
regard to stratification. She is able to differentiate between the two divisions
on one of her six dependent measures (degree of participation) and only one
of her three independent measures (technology). Weng (1997b) comments
on the methodological issue of homogeneity between her subsets, noting
that, ‘‘comparing specific departments instead of approximate divisions of
public and technical services may yield more significant results and provide
better understanding of structural and technological differences in library
departments’’ (p. 126). Lynch’s (1974) work in developing and validating an instrument to measure Perrow’s (1967) construct of technology in libraries
also discusses issues of differentiation and notes that, although she is able to
identify technological variations, they are ‘‘not very large.’’ She concludes
that the nature of the work in libraries may result in low interdepartmental
differentiation with regard to technology, although she concedes that the
problem could also lie in her instrument.
The inability of these reports to differentiate among library divisions
likely lies in their methodological assumption that the population subsets of
public services, technical services, and administration/systems are internally
homogeneous. This assumption has low face validity; for example, within
the public service division, the units of reference and circulation would
intuitively seem to have different types of technologies and structures.

Performance

The assessment of performance represents a significant challenge within organizational theory. Thompson (1967) writes, ‘‘assessment involves some
standard of desirability against which actual or conceivable effects of causal
actions can be evaluated,’’ but goes on to note ‘‘there is nothing automatic
about standards of desirability nor is knowledge of effects always easily
come by’’ (p. 84). Daft (2001) enumerates some of the challenges by
observing that organizations are large, diverse, and fragmented, perform
many activities simultaneously, pursue multiple goals, and generate both
intended and unintended outcomes. Fry and Slocum (1984) offer some
insight into the issues of measuring performance in their study on police
departments. These researchers measure police department performance
using an instrument designed for assessing performance within government
agencies. Performance was the summation of productivity, flexibility, and
adaptability. Unfortunately, their analysis showed that subjects were unable
to differentiate between these criteria. Fry and Slocum conclude that a single
criterion for defining performance in a complex organization presents the
possibility that ‘‘one criterion may be met at some expense of the other
because both cannot be completely satisfied simultaneously’’ (p. 242).
Contingency theory researchers have taken a variety of approaches to
define and measure performance. Cheng (1983) defines the performance of
the research units in his study through the quality and quantity of their
outputs. Output quality is a self-reported measure of the unit’s contribution
to the generation of both new and existing knowledge and to the field as a
whole; output quantity is the number of journal articles published. Mark (1985) measures patient care effectiveness in terms of treatment response,
activities of the treatment program, staff qualifications, quality of treatment
plan, and assessment of treatment outcomes. Of interest to this research
is that both instruments use a mix of industry-specific, subjective, and
objective measures. For instance, promptness-of-care was objectively
measured as the percentage of patients seen by a doctor within 15 minutes;
quality-of-nursing-care was the subjective measure of the quality-of-
nursing-care within a unit relative to similar units. Tushman (1978) takes
a more generic approach in measuring performance of an R&D laboratory;
citing Lawrence and Lorsch (1967b), he defines performance as the
manager’s evaluation of all the projects with which he or she was familiar.
Similarly, David, Pearce, and Randolph (1989) define performance in banks
using only the subjective measures of the manager’s opinion regarding staff
levels of cooperation, quantity, quality, competence, leadership, effective-
ness, initiative, dependability, communication, and overall job performance.
Gresov (1990), on the other hand, strictly uses objective measures to assess
performance in government employment security offices as claims taken,
processed, or resolved. Lastly, Kauser and Shaw’s (2004) study of strategic
alliances provides suggestions for how performance might be defined in
a partnership. Kauser and Shaw measure performance using traditional
for-profit metrics such as profitability, market share, and sales growth; they
also recognize the uniqueness of organizations formed through strategic
alliances, defining performance as satisfaction with coordination of activi-
ties, interaction between managers, compatibility of activities, participation
in decision making, level of commitment, information sharing, management
activities, and level of honesty.
Measuring performance within the context of a library is complex because
the value of the services involves influencing ‘‘what we know, what we believe,
and our attitudes’’ (Buckland, 2003, p. 3). This intangibility leads to the
intertwining of the library’s activities with other institutional activities making
it difficult to disaggregate library contributions (Weiner, 2009). Nonetheless,
a considerable body of literature attempts to overcome problems of assess-
ment, especially with the recent trend of academic institutions demanding
greater levels of accountability from libraries (Blandy, 1996; de Jager, 2002;
Hernon & Dugan, 2006). The empirical library literature reveals three
approaches to assessing performance: (a) from a library perspective using
simple quantification of inputs, processes, and outputs (such as gate clicks or
resource expenditures); (b) from a customer perspective, measuring customer
satisfaction; and (c) from an institutional perspective, examining the effect of
service on educational achievement (Dugan & Hernon, 2002). Quantification of input, process, and output measures remains a common means to assess
libraries, but within the literature there is an overall sentiment that these
measures are inadequate (Dugan & Hernon, 2002; Hernon & Whitman, 2001;
Lindauer, 1998). Therefore, there has been considerable work done to
determine how to measure library performance through assessment of patron
satisfaction and education outcomes.
Performance as assessed through patron satisfaction is often associated
with measures of service quality (Hernon & Whitman, 2001; Roszkowski,
Baky, & Jones, 2005; Thompson, Cook, & Kyrillidou, 2005) such as those
associated with the user survey instrument LibQUAL+ (Edgar, 2006; Kayongo & Jones, 2008; Thompson, Kyrillidou, & Cook, 2008). The popularity of LibQUAL+ is evident in Thompson et al.’s (2008) use of close to 300,000 LibQUAL+ surveys in their analysis. LibQUAL+ is one of the
few library assessment tools that draws from a theoretical foundation
(McDonald & Micikas, 1994; Shi & Levy, 2005) and has been subject to
both reliability and validity testing (Heath, Cook, Kyrillidou, & Thompson,
2002; Heath et al., 2002). It assesses performance by examining patron
satisfaction in terms of service expectations, perceptions of service, and
minimum levels of expected service. Libraries use these three measures to
evaluate patron satisfaction in terms of affect of service, information
control, and library as place. The popularity of this instrument is
undeniable, but the instrument’s criterion and content validity have been
criticized. Roszkowski et al. (2005) find that asking patrons directly about
their level of satisfaction is better correlated with the LibQUAL+ measure of ‘‘desired service level’’ than LibQUAL’s actual measure of satisfaction, ‘‘desired service level’’ minus ‘‘perceived service level.’’ Shi and Levy (2005)
echo this concern, noting that the measure of a patron’s desired and
minimum levels of service lacks clarity with regard to their ‘‘positions and
propositions’’; they conclude that ‘‘LibQUAL+ is not yet an adequately
developed tool to measure and represent a dependable library service
assessment result’’ (p. 272). Edgar’s (2006) critique also concerns the content
validity of the LibQUAL+ measure of satisfaction. He argues that
assessing library performance through patron satisfaction must include not
only service delivery but also the benefit derived from the consumption of
the service, such as letter grade, chance of graduation, or economic benefit.
The Association of College and Research Libraries (ACRL, 1998) also notes
that assessing library performance in terms of satisfaction is a ‘‘facile
outcome … too often unrelated to more substantial outcomes that hew more
closely to the missions of libraries and the institutions they serve’’ (p. 3).
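
The gap arithmetic underlying these critiques can be illustrated briefly; the ratings below are hypothetical, and the variable names are illustrative rather than LibQUAL+’s own.

# Hypothetical ratings on a 1-9 scale for a single survey item.
perceived = 7.2  # perception of the service actually received
desired = 8.5    # desired level of service
minimum = 6.0    # minimum acceptable level of service

# The derived satisfaction measure criticized above: desired minus perceived.
gap_from_desired = desired - perceived
# How far perceived service sits above the acceptable floor.
gap_above_minimum = perceived - minimum

print(f"gap from desired service: {gap_from_desired:.1f}")   # 1.3
print(f"gap above minimum level:  {gap_above_minimum:.1f}")  # 1.2

# Roszkowski et al.'s point is that a single, directly asked satisfaction
# rating tracks the desired-service rating more closely than this derived gap.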
Assessment of library performance through its contribution to the educational process is relatively scattered in comparison with the measure
of satisfaction centered on LibQUAL+. One definition of performance as
measured by library service outcomes is ‘‘the ways in which library users
are changed as a result of their contact with the library’s resources and
programs’’ (ACRL, 1998, p. 3). The research on performance generally
takes a patron perspective, a library perspective, or an institutional
perspective (Dugan & Hernon, 2002). For instance, the Association of
College and Research Libraries (ACRL) lists objectives of library outcomes
as ‘‘do students improve their chances of having a successful career?’’ (1998,
p. 4). Although attaining this objective would benefit all three perspectives,
it likely provides the most direct benefit to the patron. On the other hand,
ACRL (1998) proposes asking ‘‘does the library’s bibliographic instruction
program result in a high level of information literacy among students?’’
(p. 4), which suggests a library perspective. Finally, examples of an
institutional perspective include library expenditures and a university’s
reputation (Weiner, 2009). Unfortunately, the library assessment literature
presents a largely conflicting picture (de Jager, 2002; Matthews, 2007; Weng,
1997a). De Jager’s (2002) review of the empirical library assessment research
finds that research into the effects of library actions such as resource
expenditures, time spent in the library, and materials borrowed, both
confirms and fails to confirm changes in GRE scores, grades, and student
retention. De Jager also finds conflicting evidence with regard to library
instruction and information literacy on student outcomes such as grades and
GPA. The inconsistency of the research findings concerning library service
assessment suggests that it is ‘‘very difficult to show empirically and
conclusively whether students benefit from [library] services, or the extent to
which student development may be attributed to the library’’ (p. 140).
Interestingly, de Jager also observes:
I think that students who study in the Knowledge Commons are ideal subjects for
exploring the kinds of difference that the library makes to their university experience and
their academic achievement. (p. 144)

De Jager contends that it is easier to assess library performance in terms of holistic services, such as those offered in the knowledge commons, than to
disentangle and isolate the performance of individuals.
Weng (1997b) measures performance in academic libraries using both
objective and subjective measures. Her objective measures, drawn from the
Integrated Postsecondary Education Data System Academic Libraries
Survey, quantify public service performance through circulation, interlibrary
loan, and general reference counts such as transactions, service hours, and number of presentations. Her subjective measures come from three
self-reporting questions of a unit’s quantity, quality, and efficiency relative to
a similar unit. Weng defines performance as ‘‘how well service satisfies
the demands placed upon it by its users’’ and measures it by combining
objective and subjective measures (p. 71). She finds no correlation between
the objective and subjective measures of performance.

Interdependence and Coordination

Van de Ven et al.’s (1976) research article, ‘‘The Determinants of Coordination Modes within Organizations,’’ is key to understanding the
relationship between interdependence and coordination. This article is
particularly valuable in the current research because it uses Thompson’s
(1967) measure of workflow coordination versus organizational coordina-
tion as assessed using centralization, formalization, and division of labor.
Building upon the literature, Van de Ven et al. (1976) define three types of
increasingly complex coordination structures: impersonal, personal, and
group. Their definition of impersonal coordination is drawn directly from
Thompson’s definition of coordination through standardization. Personal
coordination, their second level of coordinative complexity, draws from
a number of sources that describe both vertical communication (Blau,
1972; Hickson, Pugh, & Pheysey, 1969; Thompson, 1967) and horizontal
communication between line staff (Lawrence & Lorsch, 1967a). Lastly, they
categorize group levels of coordination into either scheduled or unscheduled
(Hage, Aiken, & Marrett, 1971). Van de Ven and colleagues use Thompson’s
definitions of pooled, sequential, and reciprocal interdependence to define
this concept.
Working with 197 work units drawn from 16 large state employment security agencies, Van de Ven et al. find support for the contingency expectation of a positive relationship between interdependence and coordination. Later
research (Cheng, 1983; Gresov, 1990; Tushman, 1979) generally supports
their findings, although some inconsistencies do exist. In particular, the
empirical support for a positive relationship between interdependence and
coordination becomes weaker at low levels of interdependence (Cheng, 1983;
Van de Ven et al., 1976; Weng, 1997b). Van de Ven and colleagues (p. 330)
note the challenges in measuring the relationship between interdependence
and coordination, concluding that their research into interdependence and
coordination raises more questions than answers.

Differentiation and Coordination

Contingency theory suggests that under conditions of high interdependence there will be a positive relationship between differentiation and coordination
(Donaldson, 2001; Hall & Tolbert, 2005; Scott & Davis, 2007). Lorsch and
Lawrence (1972) describe this relationship in terms of differentiation adding
to the level of coordination beyond that normally required by the level of
workflow interdependence. Necessarily this occurs only at high levels of
interdependence because units with a low level of workflow interdependence
require simple forms of coordination regardless of their degree of behavioral
differentiation. Lorsch and Lawrence (1972) support this supposition with
findings from units with a low degree of interdependence, where variations
in differentiation had little impact on the types of coordinative structures
employed. For example, an interlibrary loan department has little workflow
interdependence with serials and cataloging, and, thus, variations in
behavioral differentiation between the two units will have little effect on
the coordinative structures employed. Conversely, within an integrated
information service, reference may have a high level of workflow inter-
dependence with technologists and differentiation will thus have a positive
effect on coordination. Given a high level of workflow interdependence and
high differentiation, Lorsch and Lawrence identify a coordinative structure
of individuals or groups whose purpose is to integrate the actions of
interdependent partners within highly differentiated units. Later researchers
both confirm and elaborate on the role of these mediators (Ito & Peterson,
1986; Tushman, 1977; Tushman & Scanlan, 1981) and the importance of
coordinative structure (Leifer & Huber, 1977).

Fit and Performance

Donaldson (2001) describes the relationship between interdependence and coordination as ‘‘the combination of contingency and structure that is held to be a fit causes high performance and that the combination held to be a misfit causes low performance’’ (p. 7). For this research project, the thrust of contingency theory is that an IC will achieve optimal performance through the fit, or congruency, of coordination with the librarians’ and technologists’ workflow interdependence.
In reviewing the contingency theory literature on fit and performance, the
primary empirical work of interest is by Tushman (1978, 1979). Tushman
measures his contingency variable of interdependence and his structural
variable of coordination at a unit, divisional, and organizational level of
analysis. He hypothesizes that high-performing units will have a positive
relationship between interdependence and the extent of coordination
through oral communications. His research confirms this hypothesis,
finding that high-performing units at both the intradivisional and
interdivisional levels of analysis have a positive relationship between
interdependence and the mean amount of communication. Of particular
interest is his failure to confirm the hypothesis at an intra-unit level where he
found that low (not high) levels of performance were associated with a
positive relationship between interdependence and coordination (1979).
Tushman (1978) suggests that, because intra-unit members share ‘‘common
norms, values, and coding schemes,’’ (p. 641) there is less need for complex
personal communications to coordinate workflows. Therefore, a positive
relationship between interdependence and coordination within a behavio-
rally homogeneous environment, such as those found at the intra-unit level,
is unnecessary and can lead to lower levels of performance. Tushman (1979)
tests this supposition by measuring the centralization of communication
structures as a ratio of vertical to horizontal communications. Note that his
definition of centralization of communication is more akin to Lawrence and
Lorsch’s (1967b) concept of facilitation than to the simpler coordination
structure of hierarchical control. Using this measure, Tushman found a
positive fit between decentralized communication and interdependence at an
inter-unit level of analysis and a negative fit between a centralized
communication and interdependence at an intra-unit level of analysis. He
writes that low behavioral differentiation allows member-to-member coordi-
nation but that high behavioral differentiation requires a more complex and
resource-intensive, integrator-to-member coordination. This finding also
supports Donaldson’s (2001, pp. 44–47) supposition that the variability of
the character of integration mechanisms depends upon the degree of
differentiation between interdependent partners.
Turning to the empirical literature, Weng (1997b) provides some insight
into the measurement of fit and performance within libraries. She finds a
significant correlation between formalization, as measured by the number of
formal documents, and performance within low and medium levels of
interdependence. Of particular interest is the positive correlation between
performance and formalization under conditions of low interdependence, in
line with the contingency theory’s expectations. She also reports a
nonsignificant correlation at the highest level of interdependence. Van de
Ven et al. (1976, p. 330) offer some degree of support to these findings,
reporting a convex relationship between interdependence and the use of
rules and plans. Cheng (1983), examining a different context, supports
Weng’s results, finding a significant positive correlation between high and
medium levels of interdependence and the structural variable of coordina-
tion. He also finds a significant positive relationship between both
interdependence and coordination congruency and his performance vari-
ables. Unfortunately, Cheng’s measures of interdependence and coordina-
tion differ substantially from Weng’s. Cheng uses Mohr’s (1971) instrument,
which measures interdependence in two categories; Weng uses Van de Ven
et al.’s (1976) instrument, which measures interdependence in four catego-
ries. Additionally, Cheng draws his measure of coordination from the
Handbook of Organizational Measures (1972), which measures coordination
in terms of organizational actions such as corrective, preventive, and
regulatory types of coordination. Researchers do not commonly use these
measures, and, therefore, their ability to contribute to construct validity is
limited. Weng measures coordination using Van de Ven and Ferry’s (1980)
instrument, which categorizes coordination into hierarchy of authority,
degree of participation, and formalization. Although these measures are
more applicable to organizational structures, there is some parallel between
these measures and Thompson’s (1967) coordination measures.
Fit and performance represent a complex relationship, and researchers
have taken significantly different methodological approaches to its study.
Little agreement exists within the literature concerning concepts such as
research design, statistical analysis, or even definitions. Perhaps it is
inevitable that research into the subjective concept of performance leads to
empirical challenges. Nonetheless, fit and performance are essential in
establishing congruency propositions regarding contingency and structure.

Critiques of Structural Contingency Theory

A review of the information science literature finds both support for and
opposition to applications of contingency theory, with the primary critique
relating to contingency theory’s assumption of organizational rationality
(Moran, 1978; Rayward, 1969; Weng, 1997a). For many organizational
studies’ researchers, organizations are subject to both the subjective and
unpredictable humanistic tendencies of their members, and the rational
forces of efficiency. Scott (1992) labels the focus on the role of human nature
within organizations as a natural systems perspective. He defines the natural
systems perspective of organizations as ‘‘collectivities whose participants
share a common interest in the survival of the system and who engage in
collective activities, informally structured, to secure this end’’ (p. 25). This
definition of organization emphasizes the role of the participants’ goals and
informal structures, in comparison to the rational perspective’s focus on the
role of the organization’s goals and formalized structures. The natural
systems’ critique of contingency theory thus highlights the degree to which
collectivities are oriented to the pursuit of personal goals that may or may
not align with the rational goals of the organization (Lynch, 1990; Weng,
1997a). Thus, the central criticism of contingency theory is its positivist
definition of rational forces such as technology, size, or workflow inter-
dependence, as determinants of organizational structures. Within the field of
information science, social informatics provides an alternative approach to
the rational assumptions of contingency theory.
Social informatics is the ‘‘interdisciplinary study of the design, uses, and
consequences of information technologies that take into account their
interaction with institutional and cultural contexts’’ (Kling, 2007, p. 205).
It aligns with this research project in its focus on information and
communication technologies (ICTs) within organizational structures. Social
informatics defines ICTs as ‘‘the artifacts and practices for recording,
organizing, storing, manipulating, and communicating information’’ (Kling,
Rosenbaum, & Sawyer, 2005, p. 11). Social informatics’ critique of structural
contingency theory is similar to Scott’s (1992) in that it takes issue with
contingency theory’s assumption of technological determinism (Henfridsson,
Holmström, & Söderholm, 1997; Kling, 1980, 2007). It proposes that any
study of the role of ICTs in organizations take into account the social,
technical, and the institutional context in which the ICT is employed. The
advantage of this approach is that it leads ‘‘to a broader understanding of
how computerization is engaged and what its effects are’’ (Kling et al., 2005,
p. 15). The social informatics literature encompasses a wide range of
disciplines and offers a number of perspectives on understanding ICTs within
organizations. There are two critiques of contingency theory which illustrate
some of the issues presented within social informatics. The first critique
comes from strategic contingency theory which examines the role of power or
other social forces in influencing the contingency theory relationship between
technology and structure. The second critique concerns the duality of
technology, arguing that the relationship between technology and structure
is recursive and therefore researchers cannot assign causality exclusively to
either variable.
Child (1972) builds strategic contingency theory on the premise that,
because individuals interpret organizational goals in terms of their own self
interests, they will not pursue the organization’s goals to the extent assumed
under a rational perspective (Jones, 1984b; Oulton, 1991; Rayward, 1969).
Strategic contingency theory loosens structural contingency theory’s
deterministic ontology by suggesting that technology constrains rather than
determines decisions regarding organizational structure. Therefore, man-
agers who are empowered to make structural choices will allow the ‘‘light of
ideological values’’ to influence their decision within this constrained set
(Child, 1972, p. 16). In the library literature, only Crawford (1997) and Lim
(2004) have applied strategic contingency theory in libraries. Unfortunately,
these papers have limited value to the focus in this undertaking on
interdependence as a determinant of coordination structures: Crawford uses
structure to examine the development of power and Lim does not examine
structure at all.
An empirical example of strategic contingency theory is Barley’s (1986)
seminal study on the role of CT scanners on the organizational structures of
two radiology departments. Although Barley confirms the expected positive
relationship between the CT-scanner-induced technological uncertainty and
decentralization, he finds that the degree of decentralization differs between
units. He explains these differences by suggesting that ‘‘technologies do
influence organizational structures in orderly ways, but their influence
depends on the specific historical process in which they are embedded’’
(p. 107). In other words, both technology and other factors, such as the
cultural and social background of an organization, can influence an
organization’s structure.
The second critique of contingency theory’s assumption of organizational
rationality develops from the assertion that technology is both an
independent and a dependent variable in relation to structure. Although a
number of theories take this position, this brief account will focus on the
concept of the duality of technology, which captures much of the thinking in
this area and is prominently mentioned as a critique by Scott (2007).
Duality of technology represents a criticism of structural contingency
theory by positing a recursive, rather than deterministic relationship among
technology, human behavior, and organizational structure. Orlikowski
(1992, 2000) and Orlikowski and Robey (1991) describe the duality of
technology as a series of relationships that involve (a) human behavior
influencing technology, (b) technology influencing human behavior, (c)
organizational properties influencing human behavior, and (d) technology
influencing organizational properties (1992). She links human behavior,
technology, and organizational properties by examining their relationships
over time; this allows her to see each element influence the other. In other
words, over time each element can become a determinant of any other
element. Orlikowski notes that technological duality ‘‘in contrast to models
that relate elements linearly, assumes that elements interact recursively, may
be in opposition, and that they may undermine each other’s effects’’ (p. 412).
One means to approach the critiques of structural contingency theory is
through Markus and Robey’s (1998) framework for describing the nature
and direction of organizational causality. Markus and Robey frame the
nature and direction of causality within an organization by categorizing the
influence of IT on organizational life. The idea of duality of technology
provides an emergent perspective in which ‘‘the uses and consequences of
information technology emerge unpredictably from complex social interac-
tions’’ (p. 588). Further, although technological duality is necessary for a
particular structure to emerge, it alone is insufficient to cause the structure.
In other words, the duality of technology approach would support the
contingency relationship that a complex form of technology is necessary for
a complex structure to emerge, but complex technology does not inevitably
lead to a complex structure because other factors may interfere. In
comparison, Markus and Robey would suggest that structural contingency
theory posits technology as an exogenous force that determines or strongly
constrains the behavior of organizational structures. They also contend that
technology is both necessary and sufficient for the emergence of a particular
organizational structure. Strategic contingency theory falls somewhere
between a contingency perspective and a technological duality, in that
technology constrains structure to a range of choices that generally is
necessary and sufficient to cause such a structure to appear.
In reviewing the critiques of structural contingency theory, it is important
to note that they do not necessarily refute the proposition that organiza-
tional structures are subject to rational forces such as workflow
interdependence or technology. Rather, these critiques seek to broaden and
acknowledge additional complexities of organizational contingencies and
structures beyond the rational assumptions of structural contingency
theory. Information science supports this broader approach by noting that
understanding of organizational structures requires a number of theoretical
approaches, including those drawn from both rational and humanistic
perspectives (Kling, 2007). The exclusive use of structural contingency
theory for this research project is justifiable in two ways. First, some
allowances for the structural influences can be found within the natural
systems perspective such as Lawrence and Lorsch’s (1967a) instrument for
measuring behavioral differentiation. Second, the structural contingency
theory literature suggests that the influence of social forces in determining
structural variance is negatively associated with the size of the unit of
analysis. Because this context is an operational-level unit, the smallest
organizational unit of analysis, natural forces would be expected to be
minimal. Thompson (1967) explains the nature of a unit’s hierarchical
position on the influence of humanistic forces in terms of a positive
relationship between technological rationality and smaller hierarchical units
of analysis. He suggests that organizations purposefully create line-level
units (the lowest hierarchical entity) to shield them from the effects of
environmental uncertainty. One would therefore expect a more rational fit
between the independent variables and structural design for line-level units,
thus diminishing the influence of social forces. Child (1972) provides an
example of this hierarchically based rationality, suggesting that the degree of
strategic choice available to a decision maker decreases as the means to
measure performance rationally increases. From Thompson’s perspective, a
lower level unit’s rationalized environment includes a relatively high
capacity to measure performance and is thus subject to a smaller amount
of social influence as described by Child. Comstock and Scott (1977)
empirically support this rationale, finding that the smaller units of analysis
of workflow coordination and interdependence act as better determinants of
subunit structure than does the larger unit of analysis of a subunit’s task
technology. Finally, Scott (1992) recognizes the positive relationship
between the deterministic power of sociotechnical forces and organizational
size: ‘‘when we shift from technology to technical systems, the opportunity
for social and political forces to operate is greatly enlarged’’ (p. 246).

RESEARCH METHODS

In his monograph, The Contingency Theory of Organizations, Donaldson
(2001) writes that the core paradigms of contingency theory are (a) there
is an association between contingency and organizational structure,
(b) contingency determines organizational structure, and (c) a good fit
between an organization’s contingency and its structure leads to better
performance than a poor fit between contingency and structure. To test
the applicability of contingency theory within the academic library unit of
the IC, the researcher will test the following hypotheses:
H1. There will be a positive relationship between interdependence and
coordination.
H2. There will be a positive relationship between differentiation and
coordination.
H3. There will be a positive moderating effect of behavioral differentia-
tion on the relationship between interdependence and coordination.
H4. There will be a positive relationship between performance and the
degree to which coordination fits interdependence.
These hypotheses will be explored at a group or unit level of analysis. In
order to quantify the values needed, the analysis will use the mean of the
individual employee’s responses as proxies for the IC characteristic of
interest. This assumes homogeneity of unit members’ perceptions and
heterogeneity of the mean of individual unit members across units. For this
assumption to hold, the influence of the measured characteristic within
the unit must influence the unit’s members’ perceptions of the characteristic
to a greater degree than the individual perception of the characteristic
independent of the unit. To test this relationship, the researcher will compare
the homogeneity of the individual employee’s perception of the characteristic
against the mean of the perception across units, with the expectation that
there will be perceptual homogeneity of the characteristic within the unit
and heterogeneity across units (Klein, Dansereau, & Hall, 1994) (Table 1).
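To make this aggregation test concrete, the following minimal sketch (in Python; the file and column names such as ‘‘unit’’ and ‘‘interdependence’’ are hypothetical stand-ins for the study’s actual variables) groups individual responses by IC and applies a one-way ANOVA; a significant F statistic would indicate greater heterogeneity between units than within them, supporting the use of unit means as proxies.

    # Illustrative sketch only: file and column names are hypothetical,
    # not the study's actual data.
    import pandas as pd
    from scipy import stats

    responses = pd.read_csv("staff_responses.csv")   # one row per respondent

    # Collect each IC's individual perceptions of the characteristic.
    groups = [g["interdependence"].dropna().values
              for _, g in responses.groupby("unit")]

    # One-way ANOVA: a significant F suggests between-unit heterogeneity
    # relative to within-unit homogeneity, supporting aggregation.
    f_stat, p_value = stats.f_oneway(*groups)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")

    # If aggregation is justified, the unit mean serves as the proxy measure.
    unit_means = responses.groupby("unit")["interdependence"].mean()
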
Conducting the survey entailed the creation of two online questionnaires
and their simultaneous administration. One survey examined perceptions
of performance from managers; the second examined perceptions of
interdependence, differentiation, and coordination from line-level staff
(Appendix A). The purpose in splitting the measurement of performance
from the measurement of the operational variables is to guard against
common rater method bias (Podsakoff, MacKenzie, Podsakoff, & Lee,
2003). This form of method bias occurs when the respondent is the source for
both an operational-level measure and a performance measure. In such
situations, respondents may be tempted to match their operational actions
to an internal expectation of performance. For instance, respondents may
feel that high levels of horizontal coordination coupled with complex forms
of interdependence should lead to higher levels of performance. This
expectation could taint their responses concerning their perception of
performance. To minimize the chances of respondents reporting what they
believe should occur rather than what is actually occurring, line-level
employees responded to the survey concerning operational-level actions, and
managers responded to questions concerning performance.

Table 1. Measures Used to Test and Explore Hypotheses.

                    H1            H2            H3            H4

Interdependence     Independent
Differentiation                   Independent   Independent
Coordination        Dependent     Dependent
Performance                                                   Dependent
Fit^a                                           Dependent     Independent

^a f(Fit) = [Interdependence, Coordination].

The majority of the items on the instrument were drawn from the
organizational assessment instrument (OAI) (Van de Ven & Ferry, 1980).
Three broad modifications were made in order to fit the instrument into the
context of an IC. The first change was to unify, when appropriate, Likert-
type answer categories around the anchors of ‘‘To No Extent’’ and ‘‘To a
Great Extent.’’ Babbie (2004, pp. 253–254) notes the advantage of single-
response categories, in that respondents can compare the strength of earlier
answers against later answers, thus providing the respondent and researcher
with greater comparability among questions. The disadvantage is that
respondents may develop a pattern in answering questions based upon
previous answers, creating a common method bias (Podsakoff et al., 2003).
Because the questionnaire is relatively short, it was felt that this bias would
be marginal. The second change to the original questions was to increase the
number of choices available from five to seven. This change increases the
sensitivity of the instrument while keeping the number of choices within a
reasonable range (Spector, 1992). The last change to the instrument was to
use context-specific nouns instead of pronouns. For example, ‘‘to what
extent has this unit carried out its responsibilities …’’ was replaced with
‘‘to what extent have the computing consultants carried out their responsi-
bilities ….’’ This use of context-specific nouns is similar to the approach
taken within the study of nursing (Leatt & Schneck, 1981; Velasquez, 2007;
Zinn, Brannon, Mor, & Barry, 2003).
The instrument was pretested in two ways. First, library and technology
managers at the Indiana University ICs completed the survey, and follow-up
interviews were conducted to gather suggestions for improvement. Second,
six librarians and six technologists completed the survey with follow-up
interviews. The revised instrument was included in the final application
for approval from the Indiana University Institutional Review Board
(Appendix B). Approval was received in March 2009. Subjects for the study
were recruited using purposive sampling. The researcher identified eligible
ICs as those formed through a partnership between the libraries and
university computing, through a review of the literature, web searches, and
posting an invitation to participate in the listserv INFOCOMMON-L
(a listserv for the public discussion of issues regarding information commons
or information services). Initial contact with potential respondents was
through a recruitment letter or email (Appendix C).
Regression analysis is the accepted approach to analyze the relationship
between the independent variables of interdependence and differentiation
and the dependent variable of coordination. Although regression of
workflow interdependence and coordination is reasonably straightforward,
the measurement of performance as a function of fit poses a number of
methodological challenges. The introduction of performance as a measure
of the congruence between structure and contingency is particularly
problematic due to ‘‘confusion in research findings from the somewhat
misplaced creativity of researchers who seem reluctant to replicate
definitions of measures’’ (Comstock & Scott, 1977, p. 177). The literature
shows wide variation in both definitions of performance and the statistical
approaches to measuring fit. Compounding the challenge for the proposed
research is the paucity of research into fit relationships among interdepen-
dence, coordination, and differentiation (David, Pearce, & Randolph, 1989;
Fry & Slocum Jr., 1984; Weng, 1997b). The methodological approaches to
measuring fit as a function of performance reveals a high level of diversity;
Donaldson (2001) alone describes six different approaches to this measure.
Weng (1997b) measures fit by first splitting her population into low,
medium, and high levels of interdependence. She then correlates coordina-
tion with performance for each group. Given the expectation of a positive
relationship between interdependence and coordination, she posits that
there will be a positive correlation in the subpopulation with high inter-
dependence and a negative correlation in the subpopulation with low
interdependence. Cheng (1983) uses a similar methodological approach,
also dividing his population into three degrees of interdependence, then
correlating structure and performance within these three subgroups. A
different approach involves splitting the population into high- and low-
performing units, and then correlating the contingency variable against the
structural variable within each subpopulation. With this approach,
researchers have found a higher level of correlation between expected
contingency relationships in the high-performing subpopulation than in the
low-performing subpopulation (Khandwalla, 1973; Mark, 1985; Mohr,
1971). The methodological literature generally supports this simpler, binary
subgroup analysis along performance measures (Arnold, 1982; Donaldson,
2001; Podsakoff, personal conversation, 2007). Another approach to binary
subgroup analyses is to divide the population into fits or misfits and
then compare the means for differences in performance (Ghoshal & Nohria,
1989; Gresov, 1990; Mohr, 1971). Beyond correlation analysis, researchers
have also examined fit relationships using moderated regression analysis
(Argote, 1982; Cheng, 1983; Mark, 1985). Arnold (1982) argues that this
provides superior statistical validity over a correlation analysis approach.
Donaldson rebuts this suggestion, writing that moderated regression
analysis ‘‘does not reflect the concept of fit as congruence and so it is not
an operationalization of fit as that concept has been meant in contingency
theory’’ (p. 210). The literature reveals mixed results regarding moderated
regression analysis, with most researchers reporting conflicting findings
(Argote, 1982; Fry & Slocum Jr., 1984; Mark, 1985).
In consultation with Near (personal conversation, 2011), the researcher will
calculate the effect of congruence on performance by standardizing the values
of interdependence and coordination using the standard score z = (x - m)/s,
where m is the mean and s the standard deviation. The analysis will then
take the absolute value of the difference and correlate these values with the
performance measures. The assumption is that, as the absolute value of
the differences increases, there should be a negative correlation with perfor-
mance. In other words, as the standardized value of interdependence moves
away from the standardized value of coordination, the IC is either over-
coordinating or under-coordinating. In either case, their perception of perfor-
mance should decrease.
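A minimal sketch of this congruence calculation, assuming a unit-level data frame with hypothetical columns for interdependence, coordination, and performance, is given below:

    # Illustrative only; file and column names are hypothetical.
    import pandas as pd
    from scipy import stats

    units = pd.read_csv("unit_level_measures.csv")

    def z(x):
        # Standardize: subtract the mean and divide by the standard deviation.
        return (x - x.mean()) / x.std()

    # Misfit is the distance between standardized interdependence and
    # standardized coordination; larger values indicate over- or
    # under-coordination relative to workflow interdependence.
    units["misfit"] = (z(units["interdependence"])
                       - z(units["coordination"])).abs()

    # Expectation: misfit correlates negatively with perceived performance.
    r, p = stats.pearsonr(units["misfit"], units["performance"])
    print(f"r = {r:.2f}, p = {p:.3f}")
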

Manager Survey

The information service manager survey seeks to measure perceptions of
performance. The literature reveals little agreement on measuring perfor-
mance within libraries (Bailey & Tierney, 2008; Kuruppu, 2007; Miller,
2008). Precedent for the use of a perceptual measure comes from within the
relevant library literature (Stemmer, 2007; Vorwerk, 1970; Weng, 1997a).
Van de Ven and Ferry (1980) note that questions dealing with organiza-
tional goals and effectiveness criteria are inherently subjective and largely
reflect the basic values or gut feelings of the individual decision-maker. They
further note that the determination of the basic questions regarding the
unit’s desired results is normative and relies on the decision maker’s
introspection with regard to goals, criteria, and standards of effectiveness.
The library literature as a whole generally supports the use of perceptual
measures by acknowledging the absence of an objective measure that is both
valid and reliable in assessing performance between subpopulations
(Dugan & Hernon, 2002; Jääskeläinen & Lönnqvist, 2009; Lindauer,
1998; S. Weiner, 2009). Given the limitations and challenges of measuring
information service performance objectively, the researcher used multiple
subjective questions drawn from instruments with proven validity and
reliability to triangulate an estimate of unit performance. The manager
survey consists of three demographic questions and four performance
questions with multiple parts (Table 2).

Table 2. Questions in Manager Survey.

Question   Description                                Question Type   Anchors

Q1         Name of college/university                 Filter          n/a
Q2         Number of full- and part-time staff        Control         n/a
Q3         Length of time unit has been open          Control         n/a
Q4: a–h    Perceived unit performance                 Performance     Summated rating scale: 7 response choices
           (Van de Ven &amp; Ferry, 1980, p. 405)                         from ‘‘To no extent’’ to ‘‘To a great extent’’
Q5: a–e    Perceived effectiveness of inter-unit      Performance     Summated rating scale: 7 response choices
           relationship (Van de Ven et al.,                           from ‘‘To no extent’’ to ‘‘To a great extent’’
           1976, p. 417)
Q5: f      Perceived effectiveness of inter-unit      Performance     Summated rating scale: 7 response choices
           relationship (Van de Ven et al.,                           from ‘‘We get much less than we ought’’ to
           1976, p. 417)                                              ‘‘We get much more than we ought’’
Q6: a–d    Inter-unit consensus/conflict              Performance     Summated rating scale: 7 response choices
           (Van de Ven &amp; Ferry, 1980, p. 411)                         from ‘‘To no extent’’ to ‘‘To a great extent’’
Q7: a–c    Perceived funding levels                   Control         Summated rating scale: 7 response choices
                                                                      from ‘‘To no extent’’ to ‘‘To a great extent’’

Q1. Name of college/university: Contained in both the manager and staff


surveys, this question allows for the association of the managers’
academic affiliation with that of their staff. As such it serves as a filter
question for methodological purposes but is not part of the analysis
itself.
Q2. Number of full- and part-time staff: This question has two purposes.
First, when matched with the number of staff returns from a specific
academic institution, it allows the researcher to identify and target reminder
emails to institutions with low survey return rates. Second, it
serves as a control variable.
Q3. Length of time unit has been open: Control variable. Units which have
been open for a longer period of time may have developed
systematically different methods of coordination than others, so it is
necessary to control for the ‘‘age’’ of the unit in order to assess the
effects associated with interdependence.
Q4. Perceived unit performance: Instructions – ‘‘To what extent do you
think the information commons was successful in the following areas
during the past year?’’ Original instructions ‘‘In relation to other
comparable organizational units, how did your unit rate on each of
the following factors during the past year’’ (Van de Ven et al., 1976,
p. 455). The researcher removed the comparison aspect of the question
because pre-test interviews revealed confusion regarding selection of
comparable units. IC managers had difficulty identifying comparable
internal units because they felt the IC was unique within the library.
Other respondents interpreted the question to mean comparison with
ICs in other institutions and felt they did not have enough information
to answer the question. Therefore the question was rephrased to focus
on perceptions of internal performance rather than comparable
performance. This question consists of sub-questions a–h, drawn in
their original order from the OAI, except Q4-h:
 Q4-h. Patron satisfaction: Patron satisfaction is a common perfor-
mance catch-phrase found in the information service literature
(Roszkowski et al., 2005); it provides a holistic rather than a criteria-
specific measure of performance and it provides one additional
measure of performance.
Q5. Perceived effectiveness of inter-unit relationship (Van de Ven & Ferry,
1980, p. 417): Van de Ven and Ferry, and Vorwerk (1970) describe this
question as measuring the extent to which the parties subjectively
believe that each unit carries out its commitments and feel their
relationship is equitable, worthwhile, productive, and satisfying. They
expect a positive relationship between the degree of perceived
effectiveness and performance. This question consists of sub-questions
a–g, drawn in their original order from the OAI (p. 497).
Q6. Inter-unit consensus and conflict (Van de Ven & Ferry, 1980, p. 411):
This question measures performance by assuming a positive relation-
ship between performance and partner consensus regarding goals,
means, and terms of relationship. Lawrence and Lorsch (1967a)
describe this relationship in terms of the degree of partner differentia-
tion and the complexity of the coordinative structures necessary to
achieve integration. They posit that under conditions of high
interdependence and high degree of differentiation, there will be high
degrees of conflict between partners. Successful resolutions of these
conflicts through coordinative structures lead to higher levels of
performance; unsuccessful resolutions lead to lower levels of perfor-
mance. One would thus expect higher levels of performance in cases of
high levels of consensus and low conflict. This question consists of sub-
questions a–c, drawn in their original order from the OAI (p. 491),
except Q6-d:
 Q6-d. Inter-unit consensus and conflict (resource expenditures): This
question asks about the extent to which the partners agree about
resources expenditures. The question contributes to the measure of
consensus and conflict.
Q7. Funding levels: This question is a control variable that measures access
to resources. This question consists of sub-questions Q7: a–c. It is used
as a control because IC units with more resources should have higher
levels of performance, other things being equal.

Staff Survey

The information service staff survey seeks to measure the variables
of interdependence, coordination, and behavioral differentiation. The
researcher based the survey questions regarding interdependence and
coordination on the OAI (Van de Ven & Ferry, 1980) and Van de Ven
et al.’s (1976) article, ‘‘Determinants of Coordination Modes within
Organizations.’’ Analysis of the pre-test follow-up interviews revealed a
wide variety of interpretations of the categories associated with inter-
dependence and coordination. These discussions, plus a review of the
literature regarding the categorization of interdependence and coordination
(Daft, 2001; Donaldson, 2001; Galbraith, 1973; Scott, 1992; Thompson,
1967) shaped the final wording of these questions. The measures regarding
behavioral differentiation come from Lawrence and Lorsch (1967a), who
define differentiation in terms of time orientation, goal orientation, and
interpersonal orientation. The survey used the interpersonal orientation
question verbatim as measured through the Least Preferred Coworker
(LPC) test, but modified goal orientation to fit the context of the
information service unit better (Table 3).
The staff survey consists of five questions with multiple parts.

Q1. Name of college/university: Contained in both the manager and staff


surveys; this question allows the researcher to connect the manager’s
academic affiliation with that of the staff from their institution.
Q2. Workflow interdependence within unit (Van de Ven & Ferry, 1980,
pp. 451–452): One of the anticipated issues associated with inter-
dependence is the expectation that only a small degree of variance will
be found. A number of facts support this supposition. First, the survey
will draw all of its data exclusively from information service units.
These units are generally providing the same service, as evidenced by
the information service literature, so the variance in interdependence

Table 3. Questions in the Staff Survey.

Question   Description                                Question Types                    Anchors

Q1         Name of college/university                 Filter                            n/a
Q2: a–c    Workflow interdependence within unit       Interdependence                   Summated rating scale: 7 response choices
           (Van de Ven &amp; Ferry, 1980, p. 402)                                           from ‘‘Almost none’’ to ‘‘Almost all’’
Q3: a–f    Coordination (Van de Ven et al.,           Coordination                      Summated rating scale: 7 response choices
           1976, p. 327)                                                                from ‘‘To no extent’’ to ‘‘Great extent’’
Q4: a–r    Least Preferred Coworker (LPC)             Differentiation – Interpersonal   n/a
           (Lawrence &amp; Lorsch, 1967b, pp. 33–34)
Q5: a–h    Goal orientation (Lawrence &amp;               Differentiation – Goal            Summated rating scale: 7 response choices
           Lorsch, 1967b, pp. 36–39)                                                    from ‘‘Least important’’ to ‘‘Most important’’

will likely be small. Second, analysis of research on interdependence


reveals considerable clustering around a single point. Kim and
Umanath (1992), working with software development subunits, found
that 83% of their interdependence observations fell within the range of
low interdependence. Overton, Schneck, and Hazlett (1977), whose
research context was nursing subunits, report that the degree of
interdependence is homogeneous within each nursing subunit. Van de
Ven et al. (1976) report over two-thirds of their interdependence
observations falling within the ‘‘pooled’’ category. Given the likelihood
of low variance of interdependence within information service units, the
researcher modified the OAI instrument in an attempt to increase its
sensitivity. First, using an approach similar to Overton et al. (1977), the
categories of interdependence were described to fit the workflow
contexts of an information service unit. This included shrinking the
number of categories of interdependence from four to three but
broadening their conceptual definitions. Few examples of reciprocal
levels of interdependence are expected, so the OAI’s expansion of
Thompson’s (1967) categorization of three levels of interdependence
to include a fourth, higher, level – team-work – seemed unnecessary.
Further, the instrument illustrated the three categories of interdepen-
dence with specific information service examples: pooled interdepen-
dence to direct referral; sequential interdependence to informed
referral; and reciprocal interdependence to anything requiring higher
levels of interdependence. The second modification to the OAI
instrument broadens the IC’s context in order to increase the frequency
of responses indicating reciprocal levels of interdependence.

The broadening of the IC’s context is in response to pre-test interviews


where respondents generally associated reciprocal levels of interdependence
with services incidental to the unit’s purpose, for example, dealing with
problem patrons or recommending a good restaurant. More broadly
speaking, reciprocal-level interdependence occurred when a situation both
fell outside of the staff’s collective experiences and was not linked to either
profession’s particular domain. Although the case study literature does not
reflect these types of situations, it was felt that the one-stop-shopping nature
of information service units necessarily includes these non-library/technol-
ogy services. To encourage respondents to think more broadly about
information services, and to increase the capture of reciprocal levels of
interdependence, the question’s instructions included the phrase, ‘‘The
operation of the information commons involves numerous activities
including not only library/technology services but also tasks such as
enforcing food and drink policy or use of group space.’’ Other modifications
to the instrument included dropping the OAI graphics associated with
interdependence, because respondents indicated that they were either
confusing or provided no additional insight. One respondent’s comment
mentioned that a patron’s problem may start at one point in time but be
resolved later. For instance, a patron may ask a librarian to assist in searching
a database; leave to conduct the search; then return to have a technologist
help process the information into a PowerPoint presentation. Under these
circumstances, the librarian and the technologist will record a pooled level of
interdependence, when it is in fact a sequential level. The literature does not
mention this phenomenon so it is difficult to estimate the degree to which such
scenarios occur. Intuitively, this type of service workflow does not seem
unreasonable, suggesting that the survey may over-represent pooled inter-
dependence and under-represent higher degrees of interdependence.

Q3. Coordination (Van de Ven et al., 1976): The measure for coordination
is based upon work by Van de Ven et al. (1976). Using pre-test
interviews and a review of the literature (Daft, 2001; Galbraith, 1973;
Thompson, 1967), the researcher modified the instrument to customize
it to the information service unit context. The modifications included
dropping four of the nine categories in the original instrument and
adding one category. Unless otherwise noted, the survey adopts the
remaining categories verbatim. Modifications to the original instru-
ment are as follows:
 Dropped the response that described coordination through ‘‘work
plans’’ or ‘‘work schedules.’’ This type of coordination is generally
associated with a manufacturing context where sequential inter-
dependence requires specification of quantities and delivery dates in
order for one step in the work process to coordinate with the next.
Pre-test interviews revealed that respondents interpreted this form of
coordination to mean shift schedules, which is better associated with
rules and procedures – a lower form of coordinative structure.
 Dropped assistant supervisors as coordinators. The survey merged
this category with coordination through senior administrators. Pre-
test interviews revealed confusion as to the difference between
assistant and senior administrators because there were often no
assistant supervisors within the unit.
 Added Galbraith’s coordinative structure of ‘‘service statements
or goals’’ as a form of Thompson’s (1967) coordination through
planning. The researcher justifies adding this form of coordination
because it is often mentioned in the information service case
literature.
 Dropped formally designated work coordinators because this type of
coordination generally occurs with larger numbers of personnel. Pre-
test interviews revealed confusion regarding this question.
 Dropped coordination through standing committee and merged into
coordination through regularly scheduled meetings. The initial
source of this question defines a standing committee as a regularly
scheduled meeting between senior administrators. Pre-test interviews
found that respondents were uncertain as to the definition of a
standing committee or were unsure of their existence.
Q4. Interpersonal orientation (Lawrence & Lorsch, 1967b): Lawrence and
Lorsch measure interpersonal differentiation using Fiedler’s (1964)
‘‘Least Preferred Coworker’’ instrument, which was adopted verba-
tim. This instrument tests for agreement among coworkers about the
interpersonal orientations that they prefer; lack of agreement
indicates differentiation among coworkers.
Q5. Goal orientation (Lawrence & Lorsch, 1967b, pp. 257–258): Similar
to other applications of this instrument (Hook, 1980; Vorwerk, 1970)
the wording of this question was modified to fit the context. Lawrence
and Lorsch define three functions within an organization and
associate each with a specific type of goal: Sales units are associated
with market goals; manufacturing units with technoeconomic goals
(costs and quality); and research and development with scientific
advancement goals. Vorwerk kept the same thematic goals, adjusting
each to fit his organizational units of public services to market goals,
technical services to technoeconomic, and systems to scientific
advancement. Of particular relevance here is his finding that public
services goals centered on markets, while systems goals centered on
either market or technoeconomic. Vorwerk’s finding, in conjunction
with a review of the literature on the differences between librarians
and technologists, plus the descriptions of goals provided by
Lawrence and Lorsch, suggests that library staff will have a market
orientation and technology staff will have a technoeconomic
orientation. Because the information service unit has no equivalent
of a research and development staff, the survey dropped the goal
orientation of scientific advancement. Additionally, the survey
changed the instrument from a forced-scale to an ordinal response
in order to improve the statistical analyses.

ANALYSIS AND FINDINGS

The data were collected using an online survey administered from April
2010 to November 2010. The initial population of 112 organizations
produced n = 62 unique institutional responses. Participants provided 435
surveys, of which n = 315 were usable (Table 4); responses that contained at
least one completed survey section were deemed usable.
To describe the characteristics of the institutions participating in the
survey, the researcher downloaded demographic data for 2007–2008
(Department of Education [DoE], 2008) from the National Center for
Educational Statistics (NCES). Because NCES collects data only on U.S.
schools, Table 4 does not reflect Canadian institutions. The ‘‘# of Years IC
opened’’ was gathered from the survey (Table 5).
The analysis uses the mean responses of each IC’s members as proxies for
the IC characteristics of interest. To test the applicability of this approach
an analysis of variance (ANOVA) test will check for within group
homogeneity and between group heterogeneity. If within group homo-
geneity is significantly different from between group homogeneity at α &lt; .05,
then the analysis will accept an IC member’s mean answers as a proxy for
the unit characteristics in question.

Table 4. Survey Returns by Profession and Position Characteristics.

                        Staff   Manager   Total Individual   Total Units

Librarian               103     71        174                48
Computing consultant    90      51        141                52
Total                   193     122       315                62^a

^a Unique units.

Table 5. IC Sample Characteristics.

                                N    Mean        Standard Deviation

Total library staff             48   149         134
Total library holdings          48   7,617,989   2,200,785
Total reference transactions    48   3,070       855
Total student population^a      48   39,432      15,579
# of Years IC opened            58   8           3

^a Student full-time equivalent.

Measurement Development

Because theoretical expectations may not hold when extended to this new
area of research, principal component analysis (PCA) was used to factor
instrument items into overall measures. To maximize correlation between
items after the initial extraction, the analysis employed a varimax ortho-
gonal rotation, the most common and recommended rotation approach
(Tabachnick & Fidell, 2007). The factorability of items was determined by
confirming an inter-item correlation of R &gt; .30, and that the Kaiser–Meyer–
Olkin measure of sampling adequacy was greater than .60. To test the
measure’s reliability, the analysis looked for Cronbach’s alpha of α &gt; .70.
The value for each measure will be the mean of its items. This approach
allows the inclusion of surveys where some items are missing within the
measure. Additionally, this makes it easier to interpret the analysis because
most items in the survey are on a scale of 1–7, which will mirror the
analysis’s findings.
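The sketch below illustrates these screening and scoring steps (in Python; the item column names are hypothetical stand-ins for the survey items, the varimax rotation itself is omitted, and the KMO statistic would normally come from a dedicated package rather than this sketch):

    # Illustrative sketch of the measurement-development criteria only.
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA

    # Hypothetical item names; substitute the actual survey items.
    item_cols = ["item_a", "item_b", "item_c", "item_d", "item_e", "item_f"]
    raw = pd.read_csv("staff_responses.csv")
    items = raw[item_cols].dropna()

    # Factorability screen: each item should correlate > .30 with at least
    # one other item.
    corr = items.corr().abs()
    np.fill_diagonal(corr.values, 0)
    factorable = bool((corr.max() > 0.30).all())

    # Principal component extraction on standardized items; retain
    # components with eigenvalues greater than 1 (rotation omitted here).
    standardized = (items - items.mean()) / items.std()
    n_factors = int((PCA().fit(standardized).explained_variance_ > 1).sum())

    # Cronbach's alpha as the reliability criterion (alpha > .70 sought).
    k = len(item_cols)
    alpha = (k / (k - 1)) * (1 - items.var().sum() / items.sum(axis=1).var())

    # The measure itself is the mean of its items, tolerating missing values.
    measure = raw[item_cols].mean(axis=1)
    print(factorable, n_factors, round(alpha, 2))
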

Interdependence

Workflow interdependence was calculated by combining pooled, sequen-


tial, and reciprocal interdependence into a single Guttman, or additive-type
scale (Thompson, 1967), an approach taken by a number of similar studies
(Van de Ven & Ferry, 1980; Weng, 1997b). Measurement development did
not require factor analysis because the items used to estimate interdepen-
dence represent only a single conception of interdependence. Calculation
of an IC’s level of interdependence used the same weights as Van de Ven
and Ferry (1980) used in their measure of interdependence. They adopted
these weights in order to create a measure of interdependence that
would increase when levels of reciprocal or sequential interdependence
were high.
Interdependence = average[(pooled interdependence × 0)
                  + (sequential interdependence × 0.3)
                  + (reciprocal interdependence × 0.6)]
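As a rough illustration of this weighting (the file and column names below are hypothetical, and the weights simply mirror the values reported above), the unit-level score could be computed as:

    # Illustrative only; column and file names are hypothetical placeholders.
    import pandas as pd

    staff = pd.read_csv("staff_responses.csv")

    # Weighted respondent-level interdependence score.
    staff["interdependence"] = (staff["pooled"] * 0.0
                                + staff["sequential"] * 0.3
                                + staff["reciprocal"] * 0.6)

    # The IC-level value is the average of its members' weighted scores.
    ic_interdependence = staff.groupby("unit")["interdependence"].mean()
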

Table 6. Level of Interdependence.


Mean Standard Deviation Minimum Maximum Valid N

Information commons total 3.32 1.04 0.99 6.93 49

The use of these weights resulted in the measure for interdependence
ranging from 1 to 7 (Table 6).
Analysis of within and between group variance failed to reveal a reliable
effect of a unit’s interdependence on its members’ perceptions of inter-
dependence, F(48,137) = 1.06, p = .39. This suggests either that perceptions
of intra-unit interdependence were heterogeneous or those of inter-unit
interdependence were homogeneous. In other words, either the IC’s level
of interdependence has a weak influence on individual members’ perceptions
of interdependence, or all units had the same level of interdependence.
The researcher elected to use this measure of interdependence in spite of
the issue of inter-unit homogeneity because previous studies also report
clustering of interdependence around sequential interdependence. Weng
(1997b) reports a μ = 11.79, σ = 2.81 on a scale of 1–20; and Van de Ven
and Ferry (1980) report values between μ = 3.95 and μ = 5.14 and σ ≈ 1.60
on a scale of 2–10. These findings support the idea that all library work
units, as in the case of Weng’s study, and other units, tend to have sequential
levels of workflow regardless of the service offered. The similarities of these
findings to those reported here also lend construct validity to the measure
adopted. Additional support for the use of the measure of interdependence
comes from the nursing literature, where Alexander and Randolph (1985)
suggest that high levels of standard deviation, which unfortunately they do
not report, offer enough variance to justify use of homogeneous measures.

Coordination

The survey measured coordination using six items and confirmed the
applicability of factor analysis by finding that all coordinative measures
correlated at r(178) &gt; 0.30, p &lt; .01, to at least one other measure, and a
Kaiser–Meyer–Olkin (KMO) &gt; 0.60, which is the accepted minimum for
this measure (Tabachnick &amp; Fidell, 2007). Exploratory PCA of the
coordination variables using an unspecified solution (Eigenvalue &gt; 1)
extracted two factors accounting for 58% of the variance across variables.
Communalities, or the amount of variance accounted for in the variable by
the solution, were all &gt; .30, which supports the applicability of factor
analysis (Neill, 2008). Orthogonal varimax rotation with Kaiser Normal-
ization of the initial factor matrix found two factors (Table 7), with Factor 1
loading with policy, goals, manager, meetings, cross-training, and teams
at &gt; .60, and Factor 2 loading with observation and direct contact, also
loading at &gt; .60, which suggests very good representation (Tabachnick &amp;
Fidell, 2007). A challenge to finding a simple solution was cross-training and
teams, which cross-loaded onto Factor 2 at &gt; .40. The factoring of the items
largely conforms to the theoretical expectation with policies, goals,
manager, meetings, cross-training, and teams relating to formal methods
of coordination, and observation and direct contact relating to informal
methods of coordination (Van de Ven et al., 1976). This logical grouping
lends support to the construct validity of the measure. The analysis finds
further support for this measure’s construct validity in a forced three-factor
solution where the factors load in increasing order of coordinative
complexity: Factor 1 loading with policies, goals, and manager; Factor 2
loading with meetings, cross-training, and teams; and Factor 3 loading with
observation and direct contact. Interpreting these factors would suggest
Factor 1 loads formal written forms of coordination, Factor 2 loads formal
face-to-face coordination, and Factor 3 loads informal face-to-face
coordination.

Table 7. Rotated Confirmatory Factor Matrix of Coordination Items.

Item                          Factor 1   Factor 2

Policy                        .773
Goals                         .804
Manager                       .701
Meetings                      .606
Observation                              .715
Direct contact                           .814
Cross-training                .604       .531
Teams                         .680       .400
Eigenvalue                    3.60       1.06
% Total variance explained    45.04      58.28
Cronbach’s alpha              .83        .49

Extraction method: Principal Component Analysis.
Factor extraction = 3.
Factor loadings &lt; 0.4 are suppressed.
Rotation method: Varimax with Kaiser Normalization.

Co-location of staff occurred in spite of an overall lack of reciprocal
interdependence, and likely confounds the identification of the three distinct
levels of coordination Thompson predicts. That is, co-location is likely to
inflate levels of mutual adjustment beyond those needed by the level of task
interdependence. Van de Ven and Ferry (1980) also run into problems in
factoring coordination; they report a smaller number of coordination
factors than predicted in their factoring intra-unit information flows. They
expected three factors: Written, Personal, and Group communications;
instead they were able to factor only two, Written and Oral. In the
discussion they note that their factor analysis indicates a need for future
research into the grouping of items that constitute coordination, a concern
that they reiterate in reporting their coordination measure’s low level of
reliability, α = .68.
The analysis dropped Factor 2, which loaded with items observation and
direct contact, due to its low level of reliability. Deletion of items found no
increase in reliability. Analysis of within and between group variance of
Factor 1, as loaded with policy, goals, manager, meetings, cross-training,
and teams, found both an acceptable level of reliability and a significant
effect of a unit’s coordination on its members’ perceptions of coordination,
F(47,134) = 1.59, p &lt; .05. The analysis calculated coordination by first taking
the mean of the individual unit employee’s perceptions of policy, goals,
manager, meetings, cross-training, and teams then taking the mean value
of the IC members as representation of coordination for the IC. Therefore,
the scale for Coordination is from 1 (to no extent) to 7 (to a great extent)
(Table 8).

Table 8. Level of Coordination.

             Mean   Standard Deviation   Minimum   Maximum   Valid N

Unit total   3.64   1.02                 1.50      6.50      51

Behavioral Orientation

The construct of behavioral orientation was evaluated through two mea-
sures: Interpersonal Orientation using the LPC scale and Goal Orientation
through the importance placed upon market and technoeconomic goals. The
LPC score is the sum of 18 behavioral adjectives with positive anchors being
reverse scored: higher scores indicating a greater level of relationship
orientation and lower values a greater level of task orientation. The analysis
calculated the LPC by taking the average score for each respondent, with
the unit score being the average of the respondents within each IC. Because
the items that compose the LPC are not multiple measures, factor analysis
was not necessary. Cronbach’s alpha was calculated at α = .91 (n = 110)
indicating a good level of reliability (Table 9).
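A short sketch of this scoring (in Python; the item names and the subset of positively anchored adjectives to reverse-score are hypothetical placeholders, and the 1–8 LPC response range is assumed) is shown below:

    # Illustrative only; item names and the reverse-scored subset are
    # hypothetical, and the 1-8 response range is an assumption.
    import pandas as pd

    staff = pd.read_csv("staff_responses.csv")
    lpc_items = [f"lpc_{i}" for i in range(1, 19)]        # 18 adjective pairs
    positive_anchor_items = ["lpc_2", "lpc_5", "lpc_8"]   # assumed subset

    # Reverse-score the positively anchored adjectives (1-8 scale).
    staff[positive_anchor_items] = 9 - staff[positive_anchor_items]

    # Respondent LPC is the mean item score; the unit LPC is the mean
    # across that unit's respondents.
    staff["lpc"] = staff[lpc_items].mean(axis=1)
    unit_lpc = staff.groupby("unit")["lpc"].mean()
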
Analysis of within and between group variance for behavioral differentia-
tion assumes that the LPC will be influenced both by librarians’ or
technologists’ cultural differences and by the IC in which they work. Analysis of within and
between group variance did not find a reliable effect of a unit’s LPC
characteristics on its members’ perceptions of the LPC, for either librarians
or technologists. Analysis of within and between group variance of the staff
regardless of their profession also did not find a reliable effect of the unit’s
LPC characteristic on its members’ perceptions of the LPC. Only analysis
of LPC differences between librarians and technologists regardless of the
unit in which they worked found a reliable difference on LPC (Table 10).
This suggests that librarians and technologists differ on the LPC because

Table 9. Least Preferred Coworker.


Mean Standard Deviation Minimum Maximum Valid N

Librarian 3.22 1.10 1.39 8.00 43


Technologist 3.58 1.04 1.47 6.06 29
Unit 3.31 0.95 1.39 6.06 46^a

^a Unique units.

Table 10. ANOVA for Least Preferred Coworker.


df^a F p

Unit by librarian (42, 52) 1.11 0.36


Unit by technologist (29, 50) 1.00 0.48
Unit (46, 128) 1.06 0.39
Profession (1, 173) 5.66 0.02
a
(Between Group, Within Group).

of individual characteristics and not the IC in which they work (Klein et al.,
1994). The measurement of LPC differs from Vorwerk's (1970) findings in
that he is largely unable to differentiate between public services and systems
staff, reporting μ = 4.9 for public services and μ = 4.5 for systems on a scale
of 1–8; his analysis does not include an ANOVA or report standard
deviations. The other difference is that he finds public services staff to have a
higher level of relationship orientation than systems staff, whereas this
analysis found technologists to have a higher level of relationship orientation
than librarians. This last finding is admittedly surprising, given the
stereotype of technologists as more task oriented. One possible explanation
is that IC technology staff are the equivalent of technology help desk staff
and are composed primarily of students, who may differ from full-time
systems staff; this could explain the higher than expected relationship score.
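
The within- and between-group checks reported here amount to one-way ANOVAs with respondents grouped either by IC or by profession. A minimal sketch, under assumed column names (not the study's own code):

    import pandas as pd
    from scipy.stats import f_oneway

    def anova_by(responses: pd.DataFrame, score_col: str, group_col: str):
        """One-way ANOVA of an individual-level score across the levels of group_col."""
        groups = [g[score_col].dropna().to_numpy()
                  for _, g in responses.groupby(group_col)]
        groups = [g for g in groups if len(g) > 1]   # keep groups with at least two cases
        return f_oneway(*groups)                     # returns the F statistic and p value

    # Grouping by IC tests the unit effect; grouping by profession tests the
    # librarian/technologist difference (cf. Table 10). Column names are assumptions.
    # anova_by(survey_df, "lpc", "ic_id"); anova_by(survey_df, "lpc", "profession")
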
The second behavioral measure was the sum of the eight questions
concerning goal orientation. The applicability of factor analysis was
supported by seven of the eight items correlating with at least one other item
at r(149) > 0.30, p < .01, and KMO = 0.71; self-service was the exception.
Exploratory PCA using orthogonal varimax rotation with Kaiser
Normalization and an unspecified solution (eigenvalue > 1) extracted three
factors accounting for 61% of the variance across items. Examination of
Factor 3 found that two of its three items cross-loaded at > .30 and that its
third item was self-service. Dropping self-service and re-running the factor
analysis found two factors, with Factor 1 loading onto service, satisfaction,
instruction, and usage, and Factor 2 loading onto security and cost. The
analysis dropped integration because it cross-loaded onto each factor at
> .40. The resulting rotated solution found two factors accounting for 56%
of the variance, with no cross-loadings. Internal consistency of the two
factors as measured through Cronbach's alpha was only moderate for
Factor 1 (α = .66) and low for Factor 2 (α = .53). Elimination of items did
not increase reliability. The factors largely conform to the theoretical
expectation, with service, satisfaction, instruction, and usage relating to
marketing-type goals, and security and cost to operational-type goals
(Lawrence & Lorsch, 1967b). The analysis dropped Factor 2 because it
showed a low level of reliability. The analysis calculated the behavioral
orientation of Factor 1, which closely resembles Lawrence and Lorsch's
(1967a) Market Goals, by taking the mean of service, satisfaction,
instruction, and usage (Table 11).
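
A sketch of this extraction-and-rotation step, using the third-party factor_analyzer package rather than whatever software the study actually used, with the data frame of goal items and its column names assumed:

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_kmo

    def rotated_pca_loadings(items: pd.DataFrame, n_factors: int = 2) -> pd.DataFrame:
        """Principal-components extraction with varimax rotation; loadings < .40 blanked."""
        _, kmo_total = calculate_kmo(items)             # sampling-adequacy check
        print(f"Overall KMO = {kmo_total:.2f}")
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
        fa.fit(items)
        loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                                columns=[f"Factor {i + 1}" for i in range(n_factors)])
        return loadings.where(loadings.abs() >= 0.40)   # mirror the suppressed-loading tables
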
The Market Behavioral Orientation score is the sum of service,
satisfaction, instruction, and usage. The analysis calculated the Market
Behavioral score by taking the average score for each respondent, with the

Table 11. Rotated Exploratory Factor Matrix of Goal Items.

Item                          Factor 1   Factor 2
Service                       .607
Satisfaction                  .729
Instruction                   .763
Usage                         .663
Security                                 .838
Cost                                     .756
Eigenvalue                    2.26       1.11
% Total variance explained    37.70      56.34
Cronbach's alpha              .66        .53

Extraction method: Principal Component Analysis.
Rotation method: Varimax with Kaiser Normalization.
Factor loadings < 0.4 are suppressed.

Table 12. Marketing Goal Orientation.

               Mean   Standard Deviation   Minimum   Maximum   Valid N
Librarian      5.62   0.64                 4         6.83      42
Technologist   5.35   0.78                 4         7         28
Unit           5.55   0.65                 4         7         46a

a Unique units.

unit score being the average of the respondents within each IC. The scale for
Marketing Goal Orientation is from 1 (least important) to 7 (most important)
(Table 12).
Analysis of within- and between-group variance for behavioral differentiation
assumes that both the IC staff's profession and the ICs to which they
belong will influence their goal orientation. Analysis of within- and between-
group variance of librarians did find a reliable effect of a unit's marketing
orientation on its members, suggesting that the unit influences the
librarians' perceptions of the unit's market orientation. Similarly, analysis of
within- and between-group variance of technologists also found a reliable
effect of a unit's marketing orientation on its members, suggesting that the
unit influences the technologists' perceptions of the unit's market
orientation. Lastly, analysis by unit and by profession also found
a reliable effect on marketing orientation, suggesting that both the unit and

Table 13. ANOVA for Marketing Behavioral Orientation.

                       df a        F       p
Unit by librarian      (41, 49)    2.10    0.01
Unit by technologist   (29, 48)    1.72    0.05
Unit                   (47, 121)   1.77    0.01
Profession             (1, 167)    11.00   0.00

a (Between Group, Within Group).

the profession of the staff influence their perception of their goals regarding
a marketing orientation (Table 13).

Perceptions of Performance

The IC managers responded to 18 items that measure perceptions of
performance. The applicability of factor analysis was supported by all 18
items correlating with at least one other item at r(102) > .30, α < .01, and a
KMO = .91. Exploratory PCA of the performance variables using an
unspecified solution (eigenvalue > 1) extracted three factors accounting for
67% of the variance across variables, with communalities all > .30. PCA
using an orthogonal varimax rotation with Kaiser Normalization found that
Factor 3 presented significant concerns, with four of its five items cross-
loading at > .40. To create the simplest factor structure possible, the
items associated with Factor 3 that cross-loaded at > .40 – them-us, us-
them, quality, and innovation – were dropped, as was quantity, the factor's
single remaining item. Re-running the PCA using an orthogonal varimax
rotation with Kaiser Normalization found a two-factor solution accounting
for 68% of the variance across variables, with communalities all > .30 and
no cross-loadings (Table 14).
The two factors relate to the two theoretical measures of performance,
with Factor 1 linking to Internal Perceptions of Success and Factor 2
linking to Perceptions of Relationship between librarians and technologists,
thus supporting the construct validity of the measure. The researcher had
expected to receive only one survey from each senior information service
manager, but 16 ICs sent multiple manager surveys. This allowed
the analysis to check for the influence of the individual ICs on their
managers’ perceptions of performance. The measure Internal Perceptions
of Success (Factor 1) showed a reliable effect of a unit’s perception of

Table 14. Rotated Exploratory Factor Matrix of Performance Items.

Item                          Loading
Reputation                    .785
Goals                         .829
Efficiency                    .766
Morale                        .792
Patron satisfaction           .655
Productive                    .842
Effort                        .751
Satisfaction                  .794
Relative payoff               .780
Goal                          .843
Service                       .800
Relate                        .827
Spend                         .747

                              Factor 1   Factor 2
Eigenvalue                    1.74       7.15
% Total variance explained    68.44      41.65
Cronbach's alpha              .85        .93

Extraction method: Principal Component Analysis.
Rotation method: Varimax with Kaiser Normalization.
Factor loadings < 0.4 are suppressed.

performance on managers, F(53, 64) = 1.16, p < .01. Unfortunately, only two
ICs sent back multiple responses for the measure Perception of Relationships,
and as such it was not possible to conduct an ANOVA on this measure.
Cronbach’s alpha showed good reliability in both factors. An ANOVA
test explored whether the IC’s performance characteristic affected the
manager's perception of performance. Analysis of Perceptions of Performance
found a reliable effect of a unit's performance on its members' perceptions
of performance, F(55, 66) = 1.64, p = .03. Analysis of Perceptions of
Relationship did not find a reliable effect of a unit's Perception of
Relationship on its members' perceptions, F(54, 63) = 1.40, p = .10.
Individual values for both Internal Perception of Performance and
Perception of Relationship were calculated by taking the mean of the
individual IC survey items associated with the measure. The scale for
Perception of Performance and Perception of Working Relationship is from
1 (to no extent) to 7 (to a great extent) (Table 15).

Table 15. Performance Measures.

                            Mean   Standard Deviation   Minimum   Maximum   Valid N
Perception of performance   5.54   0.72                 3.47      6.60      56
Working relationship        4.67   0.89                 2.63      6.50      55

Table 16. Exploratory Factor Matrix of Funding Items.

Item                          Factor 1
Day-to-day                    .839
Current                       .883
New                           .668
Eigenvalue                    2.39
% Total variance explained    79.66
Cronbach's alpha              .87

Extraction method: Principal Component Analysis.
Rotation method: Varimax with Kaiser Normalization.
Factor loadings < 0.4 are suppressed.

Funding

Three items measured the control variable, Perceptions of Funding. The
applicability of factor analysis was supported: all three items correlated
with at least one other item at r(113) > .30, α < .01, and KMO = 0.67.
Exploratory PCA of the funding items using an unspecified solution
(eigenvalue > 1) extracted one factor accounting for 79% of the variance
across variables, with communalities all > 0.30. The extraction of only one
factor meant that rotation of the solution was not necessary. Examination
of the Cronbach's alpha showed good reliability (Table 16).
Perception of Funding was calculated by taking the mean of the survey
items associated with the measure. The unit value for Perception of Funding
is the mean of the IC responses to the measure. As such the scale for
Perception of Funding is from 1 (to no extent) to 7 (to a great extent).
Analysis of Perception of Funding did not find a reliable effect of a unit's
Funding on its managers' Perception of Funding, F(54, 61) = 1.63, p = .08
(Table 17).

Table 17. Perception of Funding.

             Mean   Standard Deviation   Minimum   Maximum   Valid N
Unit total   5.18   0.93                 3.00      7.00      55

Inferential Statistics

Linear regression was used to measure the relationship between the
predictor variables and the criterion variables. The researcher checked the
appropriateness of linear regression using a Durbin–Watson test, which
checks for independence of errors. Values between 1 and 2 generally indicate
independence of errors, with values greater than 2 indicating an
underestimation of the level of statistical significance. Calculation of the
regression model included only a single control variable, organization size,
because the maximum number of ICs included in the analysis was n = 49,
which precludes use of more than the control variable plus the predictor. In
all analyses the researcher attempted to use no more than one predictor
variable for every 10 cases; this ratio of predictor variables to number of
cases should produce relatively reliable regression results, whereas a larger
ratio might result in unreliable estimates. Simple correlation of unit
variables finds support for the expected positive relationship between
interdependence and coordination (Table 18).
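
The hierarchical (two-step) regressions reported below can be sketched with statsmodels as follows; the unit-level data frame and its column names are assumptions for illustration, not the author's code:

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    def two_step_regression(units: pd.DataFrame, y: str, control: str, predictor: str):
        """Step 1: control only; Step 2: control plus predictor.
        Returns both fits, the change in R-squared, and the Durbin-Watson statistic."""
        data = units[[y, control, predictor]].dropna()
        step1 = sm.OLS(data[y], sm.add_constant(data[[control]])).fit()
        step2 = sm.OLS(data[y], sm.add_constant(data[[control, predictor]])).fit()
        return step1, step2, step2.rsquared - step1.rsquared, durbin_watson(step2.resid)

    # Hypothetical usage mirroring H1 (column names assumed):
    # s1, s2, delta_r2, dw = two_step_regression(unit_df, "coordination", "sfte",
    #                                            "interdependence")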

Predictors of Coordination

H1. There will be a positive relationship between interdependence and
coordination.
Step 1 of testing the first hypothesis regressed the first control variable: the
number of full-time students enrolled (SFTE) at the IC's institution. Drawn
from the National Center for Education Statistics' Academic Libraries data
set, this value is the number of undergraduate and post-baccalaureate
students enrolled full-time, plus one-third of those enrolled part-time, over a
12-month enrollment period (DoE, 2008). The analysis found a significant
relationship between the control variable SFTE and coordination. Step 2
introduced the predictor variable interdependence into Step 1 and found a
significant relationship, with R² = 23% of the variance in coordination
explained, a 13% increase over the control variable. Examination of the
Durbin–Watson statistic

Table 18. Correlation among Unit-Level Independent, Dependent, and Control Variables.

                              1      2      3      4      5      6
1. Interdependence          1.00
                             (49)
2. Coordination              .39   1.00
                             (49)   (51)
3. LPC difference            .08    .11   1.00
                             (26)   (26)   (26)
4. Market goal difference    .47    .15    .33   1.00
                             (24)   (24)   (24)   (24)
5. Years opened              .11    .08    .14    .28   1.00
                             (47)   (49)   (26)   (24)   (58)
6. # of Students enrolled    .23    .32    .08    .27    .20   1.00
                             (46)   (48)   (26)   (24)   (47)   (48)

N in parentheses.
*p < .05; **p < .01.

(d = 2.1) supports the assumption of independence of errors. The difference
between R² and the adjusted R² indicates that 3.6% of the variance would be
lost if the model were derived from the population rather than the sample.
Examination of the ANOVA table finds that the model significantly
improves the ability to predict coordination, F(2, 43) = 6.34, p < .01.
Table 19 shows that Model 2 significantly influences coordination, with
B = 0.36 (SE = 0.13) and β = 0.36 indicating the number of standard
deviations of change in coordination for every standard deviation change in
interdependence.
Turning to the second control variable, years opened, testing the hypothesis
found no significant relationship between the control variable and
coordination. The second step introduced the predictor variable
interdependence and found a significant relationship that predicted 18% of
the variance in coordination, a 17% increase over the control variable.
Examination of the Durbin–Watson statistic (d = 1.9) supported the
assumption of independence of errors. The difference between R² and the
adjusted R² indicates that 3.7% of the variance would be lost if the model
were derived from the population rather than the sample. The ANOVA
finds that the model significantly improves the ability to predict
coordination, F(2, 44) = 4.93, p < 0.05, versus predicting coordination using
the mean. As indicated in Table 20, only interdependence significantly
influences coordination.

Table 19. Regression of Interdependence as a Predictor of Coordination (n = 43).

                    B      SE B   β
Step 1
  Constant          3.21   0.22
  SFTE              0.00   0.00   0.32
Step 2
  Constant          2.19   0.44
  SFTE              0.00   0.00   2.36
  Interdependence   0.36   0.13   0.36

Note: R² = 0.10 for Step 1; ΔR² = 0.04 for Step 2 (p < .05).
*p < .05.

Table 20. Regression of Interdependence as a Predictor of Coordination (n = 43).

                    B      SE B   β
Step 1
  Constant          3.84   0.38
  Time open         0.03   0.05   0.11
Step 2
  Constant          2.65   0.53
  Time open         0.05   0.04   0.16
  Interdependence   0.39   0.13   0.42

Note: R² = 0.09 for Step 1; ΔR² = 0.28 for Step 2 (p < .05).
*p < .01.

The positive relationship between interdependence and coordination is
somewhat supported by Weng's (1997b) report of a significant positive
correlation between intradepartmental dependence and her three measures
of organizational structure: Hierarchy of Authority (r = .13), Participation
(r = .28), and Formalization (r = .25). The caveat in comparing Weng's
measures to these findings relates to methodological concerns about her
study, specifically her use of organizational-level measures of coordination
at a group level of analysis, applying a fixed definition of Hierarchy of
Authority at both an intra- and an interdepartmental unit of analysis, and
representing group-level characteristics through the summation of
individual responses instead of a group mean.

Analysis of the data supports H1 by finding a significant positive
relationship between interdependence and coordination within the IC.

H2. There will be a positive relationship between differentiation and
coordination.

Step one of testing this hypothesis duplicates Step one of H1
by regressing the control variable SFTE against coordination and found a
significant relationship. Step two introduced the predictor variable of
behavioral differentiation as measured through the LPC and found no
influence of LPC on coordination. Introduction of the control variables
found no significant relationship to coordination.
Turning to the second behavioral differentiation measure, Difference in
Market Orientation, examination of the control variable SFTE found
significant influence of this variable on coordination. Introduction of
Market Orientation into the model found no significant relationship with
coordination. Using the control variable Time Open found no significant
influence of this variable on coordination, either alone or in the second
model, which included the measure Marketing Orientation.
These findings do not support H2, which suggests that there will be a positive
relationship between differentiation and coordination.

H3. There will be a positive moderating effect of behavioral differentiation
on the relationship between interdependence and coordination.

To test the third hypothesis, the researcher first entered interdependence,
with the more exploratory behavioral differentiation measures entered into
the second model. Examination of the second model, which introduced
LPC, found no significant influence on coordination. Similarly, Marketing
Goal Orientation in the second model had no significant influence on
coordination.
These findings do not support H3, which suggested that there would be a
positive moderating effect of behavioral differentiation on the relationship
between interdependence and coordination.

Interdependence and Coordination Fit Effects on Performance

H4. There will be a positive relationship between performance and the
degree to which coordination fits interdependence.

Step one of testing the fourth hypothesis regressed the control measure
Perceptions of Funding against the dependent variable Perceptions of
Performance and found no significant influence of the control on
performance. To test the influence of congruency, or fit, on the expected
relationship between interdependence and coordination, the analysis
standardized these measures using a z score, z = (x − μ)/σ, and then took the
absolute value of interdependence minus coordination. This actually yields
a "misfit score," because a higher number indicates misfit rather than fit. To
transform this into a "fit score," the analysis multiplied the misfit score by
negative one (Tables 21 and 22).
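
A minimal sketch of this transformation, assuming the unit-level scores are held in pandas Series (the names are illustrative, not the author's variables):

    import pandas as pd

    def fit_score(interdependence: pd.Series, coordination: pd.Series) -> pd.Series:
        """z-standardize both unit-level measures, take the absolute difference
        (a misfit score), and negate it so that higher values mean better fit."""
        z_i = (interdependence - interdependence.mean()) / interdependence.std(ddof=1)
        z_c = (coordination - coordination.mean()) / coordination.std(ddof=1)
        return -(z_i - z_c).abs()
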
Introduction of the predictor variable of Fit between interdependence
and coordination found a statistically significant relationship that predicted
R² = 20% of the variance in performance, a 13% increase over the control
variable. Examination of the Durbin–Watson statistic (d = 1.7) supports the
assumption of independence of errors. The difference between R² and the
adjusted R² indicates that 4.1% of the variance would be lost if the model
were derived from the population rather than the sample. Examination of
the ANOVA table

Table 21. Correlation among Unit-Level Control and Predictor Variables of Performance.

                                 1      2      3      4
1. Perception of Performance   1.00
                                (56)
2. Perception of Relationship  0.58   1.00
                                (55)   (55)
3. Fit                         0.38   0.20   1.00
                                (43)   (43)   (49)
4. Perception of Funding       0.25   0.30   0.07   1.00
                                (55)   (55)   (43)   (55)

N in parentheses.
*p < .05; **p < .01.

Table 22. Unit-Level Control and Predictor Variables of Performance.

       Mean   Standard Deviation   Minimum   Maximum   N
Fit    0.86   0.72                 0.01      2.55      49

Table 23. Regression-of-Fit Expectation as a Predictor of Performance (n = 43).

              B      SE B   β
Step 1
  Constant    4.68   0.55
  Funding     0.18   0.11   0.25
Step 2
  Constant    5.03   5.31
  Funding     0.16   0.10   0.23
  Fit         0.34   0.13   0.37

Note: R² = 0.06 for Step 1; ΔR² = 0.14 for Step 2 (p < .01).
*p < .01.

finds that the model significantly improves the ability to predict Perceptions
of Performance, F(2, 40) = 4.89, p < .01. As indicated in Table 23,
congruency significantly predicts Perception of Performance, with B = 0.34
or β = 0.37 indicating the number of standard deviations of change in
Perceptions of Performance for every standard deviation change in fit.
This finding is somewhat supported by Weng's (1997b) report that
at low and medium levels of intradepartmental dependence, there is a
significant positive correlation between formalization and performance. In
other words, at low and medium levels of interdependence, higher levels of
formalization led to higher levels of performance. A further review of her
findings reveals no other significant correlation to performance in her
interdependence and coordination fit measures, and none of her regression
coefficients regarding fit and performance was significant.
These findings support H4, which suggests that there will be a positive
relationship between performance and the degree to which coordination fits
interdependence (Table 23).

CONCLUSION

The integration of library and campus computing partnerships to deliver
information services represents a growing organizational challenge for many
library administrators. A review of the library literature finds a long history
of interest in this topic, although the majority of the materials are anecdotal
or speculation on best practice. The researcher’s goal in the work reported
here has been to use structural contingency theory to explore empirically the
integration of librarians and technologists. By testing the contingency
theory expectations regarding workflow interdependence, coordination, and
performance, this work makes two broad contributions to library manage-
ment and organizational theory. The first contribution is the empirical
testing and confirmations of Thompson’s (1967) proposed relationship
between workflow interdependence and coordination.
Although the literature reveals a number of studies that empirically test
Thompson’s proposed relationship between interdependence and coordina-
tion, they all fail to measure these items as originally defined. The work
presented here is one of the first empirical studies to duplicate Thompson’s
propositions and confirm his findings. This contribution was unanticipated
because the literature generally presents the relationship between workflow
interdependence and coordination in a largely factual manner (Daft, 2001;
Donaldson, 2001; Scott, 1992). Yet a careful review finds consistent
deviation from Thompson’s definition of either workflow interdependence
or coordination. For instance, Van de Ven et al. (1976) point out in their
seminal article on workflow interdependence and coordination that their
measure of interdependence does not allow them to describe this item in
terms of a Guttman scale (p. 335), a central characteristic of Thompson’s
definition of interdependence. Thus, Van de Ven and colleagues measure
interdependence as a single item, instead of an aggregate of three. This
deviation from Thompson’s definition of interdependence as a Guttman
scale of three items is also found in both Tushman’s (1979) and Lynch’s
(1974) studies, which purposefully duplicate Mohr’s (1971) single-item
definition of interdependence. Even the studies that do measure inter-
dependence as defined by Thompson deviate from the author’s definition of
coordination, confounding these studies’ efforts to duplicate the original
proposition. For instance, Cheng (1983) uses a completely different
definition of coordination, and Lynch (1974) and Weng (1997b) define
coordination at the organizational level rather than at the workflow level of
analysis that Thompson had originally intended.
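
To make the Guttman-scale point concrete, a cumulative three-item measure can be sketched as follows; this is an illustration of the scale's logic only, not Thompson's or the study's actual instrument:

    def guttman_interdependence(pooled: bool, sequential: bool, reciprocal: bool) -> int:
        """Cumulative (Guttman-type) score: each level presupposes the ones below it.
        0 = none, 1 = pooled only, 2 = pooled and sequential, 3 = all three."""
        score = 0
        for present in (pooled, sequential, reciprocal):
            if not present:
                break
            score += 1
        return score
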
By empirically testing workflow interdependence and coordination, this
study makes other contributions regarding the effect of behavioral
differentiation. The failure to confirm behavioral differentiation as a
moderating force in the relationship between workflow interdependence
and coordination contributes to IC practitioners’ understanding; it refutes a
common perception that IC managers can improve performance by
minimizing the behavioral differences between librarians and technologists.
This conclusion comes from two findings: (a) that differences between
librarians' and technologists' marketing orientation had no influence on
perceptions of performance; and (b) staff’s task or member orientation is
outside the control of the manager.
The researcher’s second broad contribution is the correction of problems
concerning group-level analysis – a problem that is particularly acute in the
library contingency theory literature. The issue of conducting research at a
group level of analysis manifests as both a problem in collecting the sample
and in measuring characteristics. At the root of these issues is the conceptual
problem that groups exist only as abstractions. Therefore, researchers can
measure group characteristics only by collecting and aggregating the staff’s
perceptions of the characteristic. The collecting of staff perceptions, and
then aggregating these perceptions into a proxy for a particular group’s
characteristic, represents one of the challenges of working at a group level of
analysis. For instance, Weng (1997b) collects data on ‘‘perceptions of
centralization’’ from public service staff located in different institutions. Her
error arises when, instead of aggregating staff from a particular institution as a
proxy for that institution's level of centralization, she incorrectly aggregates
all the public service staff data, regardless of the institution of origin. The
problem is that, by not including home institution data in her calculations,
she conceptually claims that the institution has no influence on staff
members’ perceptions of centralization, thus making centralization an
individual characteristic. This attribution of the group characteristic of
centralization to an individual is not only nonsensical (how can an
individual have a characteristic of centralization), but also confounds her
findings. A related problem in conducting group-level research is selecting
an instrument that will validly measure the group characteristic at the
correct unit of analysis.
The library contingency theory literature (Hook, 1980; Vorwerk, 1970;
Weng, 1997b) shows consistent construct validity problems, with the
researchers' instruments not measuring the concept described. This results
largely from instruments measuring a group characteristic at a unit of
analysis different from that intended. For example, Hook and Vorwerk both
attempted to measure a library division's degree of formalization. To do so,
they both surveyed individual staff members within the departments that form
the divisions.
the divisions. The problem of construct validity arises because these
departments also have the characteristic of formalization; thus, staff
members’ perceptions of formalization would likely reflect their depart-
ments’ levels of formalization and not the division’s, as Hook and Vorwerk
expected. In other words, within the public services division, the circulation
department will likely have a high level of formalization, while the reference
department will likely have a low level of formalization. Aggregating this
data will have the following effects: (a) because the two departments have
different levels of formalization, they will report a heterogeneous perception
of the division’s degree of formalization; and (b) because the departments
that compose the division are heterogeneous, the aggregation of these
heterogeneous scores as a proxy for the division’s characteristic will likely
result in all division scores moving toward a median or homogeneous value.
This situation presents two problems. First, Hook's and Vorwerk's divisional
data actually reflect departmental level data, a construct validity problem.
Second, by having heterogeneity within their intradivisional data, and
homogeneity within their interdivisional data, they are unable to test their
contingency theory expectations because their data have no variance at their
divisional unit of analysis.

Background

The research is situated within work in information commons (ICs), a
relatively new public service unit whose purpose is to provide an integrated
information and technology research environment. Primarily found within
an academic setting, ICs are a natural extension of the ongoing relationship
libraries have had with computing clusters and reference floor computing.
The IC’s general objective is to integrate information resources and
technology into a seamless research environment. Primarily catering to
undergraduates, ICs have become an increasingly common service offering
within academic libraries and most libraries report significant increases in
gate counts as evidence of their effectiveness. To begin a more robust
examination of the administrative challenges of managing the ICs, the
researcher employed structural contingency theory to frame and define the
process of integration.
Contingency theory is a relatively mature theory within the field of
organizational sciences. Its central proposition is that an organization’s
structures are dependent upon its contingencies. Within the framework of
the dissertation, the contingency of interest is the degree of workflow
interdependence between librarians and technology staff. The research
question involves how the contingency of workflow interdependence
influences the coordination structures the organization employs in order
to integrate these workflows. Contingency theory posits that there will be a
positive relationship between workflow interdependence and coordination
structures. It further suggests that when the level of coordination fits the
degree of workflow interdependence, there will be an improvement in
performance. These two expectations form the two hypotheses of the
dissertation. Contingency theory also posits that differences in behavioral
characteristics will act as a positive moderating force on the relationship
between coordination and interdependence. This relationship is of particular
interest because the IC literature commonly addresses issues of cultural
differences between librarians and technologists as a factor in partnership
integration. The researcher adopted an exploratory approach to under-
standing the relationship between cultural or behavioral differences and
coordination and interdependence because the empirical research on this
relationship is relatively small.
The researcher hopes the work will contribute to information science by
applying contingency theory carefully in the library, particularly in the ICs.
Although a few other studies have applied contingency theory in libraries,
the dissertation builds upon this research by using a work-group level of
analysis rather than a division level of analysis. This smaller unit of analysis
allows one to examine the relationship between interdependence and
coordination as integration of service workflows, rather than the integration
of organizational-level processes. Additionally, it minimizes the methodo-
logical problem of heterogeneity within, or homogeneity between, units of
analysis.

Research Methods

The researcher sought to establish the reliability and validity of the measures
by customizing existing organizational science instruments to the context of
the IC. Contingency theory, generally, categorizes interdependence and
coordination into three categories of increasing complexity. Using these
categories, the researcher measured, in order of increasing complexity,
interdependence as pooled, sequential, and reciprocal; and coordination as
standardization, planning, and mutual adjustment. To customize these
categories to an IC context, the researcher analyzed approximately 80 IC
case studies to identify common practices that matched the level of
complexity associated with each category. The work associated pooled
interdependence with the IC service of collocation of library and technology
services within a common space. At this level of interdependence, librarians
and technologists are largely independent with little workflow occurring
between partners. Managers coordinate this level of interdependence
through the standardization of policies and procedures. Less evident within
the literature is sequential interdependence, which the researcher associated
with the service of informed referrals. Informed referrals involve a higher
level of interdependence because both librarians and technologists must
have some understanding of each other’s services in order to ensure a
smooth transition between partners. To coordinate this level of inter-
dependence, managers commonly relied upon formal face-to-face interac-
tions, such as meetings or cross-training. The most complex form of
interdependence, reciprocal, was largely absent in the literature, although
some authors speculated about future services, such as joint instructional
programs. Interestingly, managers commonly describe employing the most
complex form of coordination, mutual adjustment, as manifested in the
form of teams and informal gatherings. Using these examples to customize
existing instruments to an IC context, the researcher sought to improve the
instruments through pre-testing at the Indiana University ICs.
The analysis identified an initial sample of 112 ICs that met the criteria of
being a partnership between library and campus computing. Responses were
received from 62 individual ICs, comprising 174 library staff surveys and 141
technology staff surveys. Because the unit of analysis is the IC, the
researcher used the mean of the IC’s staff responses as proxies for the IC
degree of workflow interdependence, coordination, and behavioral differ-
entiation. Development of the measures from the sample used PCA to factor
the data, with a varimax rotation to simplify the factor loadings.

Analysis

In spite of efforts to increase the sensitivity of the measure of
Interdependence, analysis of variance (ANOVA) found homogeneity between
units. This finding suggests, unsurprisingly, that most ICs have similar levels
of workflow interdependence. This observation is bolstered by the literature,
which finds a high level of homogeneity in IC service offerings. Development
of the measure of Coordination found, similar to other studies, a lower than
expected number of factors, with Coordination loading onto a single
factor. The literature offers one explanation for this finding: the increasingly
common suggestion that complex forms of coordination, such as teams,
retreats, and other structures that lower partner behavioral differences, will
universally increase performance. If IC managers believe this, then the
finding of a single measure of coordination would indicate that managers
view all types of coordination similarly, regardless of the level of complexity.
Results obtained here suggest that this may produce more coordination –
and greater costs for ICs – than is needed.
Development of the behavioral measures presented issues with the LPC
showing inter-unit homogeneity. Behavioral differentiation as expressed
through goal orientations also presented a number of challenges with both
market and technoeconomic goals showing low levels of reliability. The
researcher included Market Goal Orientation in the analysis because it had
good capability to distinguish between librarians and technologists, and
among the ICs in which they worked. Lastly, the performance factors,
Perceptions of Performance and Perception of Working Relationship,
showed strong reliability and good inter-unit heterogeneity.
Analysis of the data confirmed both hypotheses. First, it established that
there is a positive relationship between workflow interdependence and
coordination. Examination of the control variables found that the number
of full-time equivalent students has a significant positive relationship with
coordination, although the introduction of interdependence negates its
influence. This suggests that although larger schools will have higher levels
of coordination, it is not size that influences coordination, but rather the
higher levels of interdependence that exist in large schools lead to increases
in coordination. Additionally, there was some concern that the aggregation
of simple and complex coordination items into a single measure would not
accurately reflect the hypothesis of a positive relationship between inter-
dependence and coordination complexity. To check that increasing work-
flow interdependence did lead to increased coordination complexity, the
analysis weighted coordination in a manner similar to interdependence
and confirmed a significant positive correlation. Comparison of coordina-
tion with weights and without weights showed that weighting coordination
increased the strength of the correlation. The researcher kept coordination
unweighted for ease of comparison to the literature, which does not use
weights. The analysis confirmed that fit between interdependence and
coordination will lead to higher levels of performance, with the control
variable of funding providing no significant influence on performance.
Analysis of behavioral differences’ influence on coordination and fit found
no significant relationship. This finding was not unexpected because of the
low level of workflow interdependence reported by ICs and the expectation
that the influence of behavioral differentiation on coordination diminishes
as workflow interdependence decreases.

Interpretation of Findings

A review of the information science literature finds a paucity of theoretical
and empirical research on the management of information organizations.
Self-descriptions by the journals that focus on library or information-based
organizations all emphasize their focus on practice, which results in a relative
absence of theory development. Perhaps the demand for higher quality
research on information organization management would be stronger if
library and information science managers were aware of the benefit of
management theory. Yet, although library management is a core course in
most library and information science schools, it seems that students
graduate with little ambition of pursuing administrative positions, let alone
improving library management through the application of administrative
sciences. For those who do find themselves in administration, the trouble lies
not in their almost exclusive reliance on experience in making managerial
decisions, but in an almost hostile opinion of management sciences. The
source of this attitude likely comes from the all too common library
experience with poorly implemented managerial initiatives, which is
perpetuated in the literature through its lack of empirical and theoretical
material. This criticism of the state of library management would be easier
to accept if similar professions faced the same issues, yet a review of the
nursing and K-12 administration literature finds a significantly better caliber
of work in regard to the application of managerial sciences. In presenting
this criticism of the state of library management, the researcher does not
propose this dissertation as the starting point for the creation of a library
management discipline. Rather, it is important to reopen a largely lost
discussion of the need, possibility, and benefit of empirical research into
library management.
The goal with this undertaking was to apply structural contingency theory
to examine the integration of library and technology partnerships in the
context of an academic library IC. This goal is admittedly narrow in
application, but the author suggests that the study offers a number of
benefits. One such benefit lies within the instruments developed. These will
allow library managers to use theory-based definitions to describe
organizational actions and consequences in a valid and reliable manner.
Such instruments will improve library managers’ ability to connect their
actions to the broader organizational science literature. Intertwined with
this benefit is the potential for the instrument to provide a means to
standardize descriptions of library administrative actions. Standardization
will allow managers to begin to report their actions in a manner that will
increase the generalizability of their findings, thus overcoming a significant
weakness in the current case study literature. For instance, nursing
managers commonly report the technological complexity of a particular
service using a standardized measure in the nursing literature. This allows
other nursing managers to use the same measure and compare their actions
to those of the case study. Further, if nursing managers as a body begin to
report their actions in terms of standardized measures, best practices can
emerge around these common definitions, actions, and consequences; this
ultimately allows nursing managers to improve their decision-making
capability beyond their local experience and history. This standardization
and the eventual emergence of best practices benefit not only practitioners
but also educators.
If the information science community is to improve the state of library
management, one area of importance is the education of library manage-
ment students. In teaching library management, library educators almost
exclusively teach from materials that are not library specific. Examination of
library management textbooks supports this critique: the majority of the
materials in library management textbooks simply reflect materials taught in
schools of business. The researcher suggests that one solution to this
problem lies in reporting library actions through standardized instruments
such as those found within this dissertation. An example of how this benefit
would manifest can be found within the literature concerning process
improvement. Within the library management literature, there appears to be
an increase in the use of process improvement. The problem for library
management educators teaching process improvement is that the literature
offers no insight into selecting the library context best suited for process
improvement. The process improvement case study literature reveals
variation in the degree of success, and, because process improvement
implementations occur within different contexts, even a careful reading of
the cases would offer only anecdotal evidence as to the optimal context for
library process improvement. With standardized instruments, managers
who conduct a process improvement implementation could describe their
unit characteristics in standardized terms related to service complexity as
defined by workflow interdependence, and integration complexity as defined
through coordination. These standardized measures would allow library
management instructors to educate students on how to use these measures
to generalize case study contexts into other library situations, thus moving
the instruction of process improvement from a broad managerial setting
into a library-specific setting. Standardized measures also provide an
additional benefit in that they help create best practices. As noted earlier
with the nursing example, once managers begin to report their actions in
terms of standardized measures, best practices soon emerge around these
common definitions, actions, and consequences. This
standardization and the eventual emergence of best practices benefit not
only practitioners but also educators.

Limitations

A number of limitations to the findings suggest some cautions regarding
what can be inferred about the integration of library and technology
partners within an academic library IC. First, one cannot extend these
findings beyond the contexts, measures, and relationships described.
Further, even within these parameters, the research can explain only a
minority of the variance within the criterion variable of coordination;
therefore, the predictive power of its model is limited. In particular, the poor
reliability of the measures of behavioral differentiation hampered the model.
This problem, in turn, contributed to the inability to confirm an expected
positive moderating effect of behavioral differentiation on coordination.
At a broader level, the use of a cross-sectional approach to data collection
also limits the findings. The lack of data concerning how interdependence
affects coordination over time is of some concern because the use of
particular forms of coordination continues to change. In particular, the use
of teams and other complex forms of coordination appears to be increasing.
The growing popularity of face-to-face coordination may lessen the
relationship between workflow interdependence and coordination as
managers increase the importance of criteria other than interdependence
in their decisions regarding coordination structures.
Lastly, the difficulties inherent in understanding group-level activities also
present a limitation on the findings. As has been discussed, problems occur
from measuring group behavior as an average of the group, which is in turn
an average of the individual members within that group. This reductionist
approach to understanding group behavior presents legitimate epistemologi-
cal challenges to the rational assumptions of structural contingency. So while
the researcher stands by the reliability and validity of these findings, he also
acknowledges that other methods, especially those associated with a social
constructivist epistemology, will likely offer additional insight into coordina-
tion beyond those reported in this study.

Suggestions for Future Research

The management literature contains many references to SWOT, a planning
tool that looks at an organization's Strengths, Weaknesses, Opportunities, and
Threats. Some writers suggest that weaknesses and threats are really just
opportunities to create new strengths. Similarly, one perspective on a
dissertation’s limitations is that they are simply suggestions for future
research. Adopting this perspective reveals a number of potential research
questions. One area is the expansion of the context beyond library
partnerships with technologists.
Research into how libraries work with other units presents an excellent
research opportunity because there is currently a strong trend within the
academy to use partnerships to leverage individual unit capabilities. Examples
of other library partnerships of interest include those with vendors, teaching
and research units, and other university service units. Alternatively, future
researchers could expand beyond the dissertation’s context of the IC to
include other multiunit collaborations, for example, with digital repositories,
cyber-infrastructures, or digitization programs. A second research opportu-
nity is the expansion of predictor variables of workflow interdependence to
include technology. Similarly, one could expand upon the criterion variable of
coordination to include other structural measures, such as centralization,
formalization, and division of labor. Necessarily, analysis of these alternative
structural variables would also imply shifting the unit of analysis from a
group to an organizational level of analysis. Another potential area for future
research opportunities is to extend the rational assumptions into social factors
such as power or politics.
The dissertation was naturally constrained in its investigation of
coordination by its use of structural contingency theory, which is largely
rationally based. Future research could develop a broader understanding of
coordination by adopting a constructivist approach. This shift in
epistemological assumptions would allow researchers to account more
completely for human factors. Further, the shift from a rational to a
constructivist perspective would allow researchers to use qualitative
methods more easily, thus improving the ability to understand the complex
social interactions that influence the success and failure of library partner-
ships. Qualitative methods also represent a significant opportunity to
improve upon the dissertation’s measures of behavioral differentiation.
Beyond the methodological opportunities, a constructivist approach would
allow researchers to connect more readily with the theories and ideas
generally adopted in information science. Social informatics’ success with
the Socio-Technical Interaction Network model is one example of an
information science approach that could expand upon the findings reported
here. Because many researchers within this discipline draw upon organiza-
tional science to build upon and advance their ideas, information science
offers the opportunity to expand libraries’ understanding of partnerships
without completely abandoning the strengths of the quantitative approach
taken in this dissertation.
In speculating about future research possibilities, it is easy to see how the
proper nurturing of small beginnings can lead to significant contributions to
the profession. In beginning an almost six-year research effort to improve
understanding of library and technology partnerships within the academic
ICs, it was often difficult to see how anything could come from this
monumental effort. But as the end draws near, it is with immense satisfaction
that the author can report that the dissertation does make some level of
contribution to practice, both within its immediate findings and in its
suggestions for future research. If history is of any value in predicting the
future, libraries will always remain intertwined with technology and partner-
ships. Therefore, the research area addressed in this dissertation has some
chance of forwarding the work of the author and others, which is perhaps
a hope common to all academics. In concluding the dissertation, the
author offers one last reflection: shortly before leaving practice to pursue the
PhD, a close colleague suggested that the measure of success for a library
and information science scholar is simply the capability to predict anything.
In applying this measure to the dissertation, it is the author's belief
that the work does meet this standard and thus represents a successful
scholarly effort.

REFERENCES
ACRL. (1998). Task force on academic library outcomes assessment report. Retrieved from
http://www.ala.org/ala/mgrps/divs/acrl/publications/whitepapers/taskforceacademic.
cfm. Accessed on December 14, 2009.
Alexander, D. E., Lassalle, C. C., & Steib, L. C. (2005). Manning the boat with a diverse (non-
traditional) crew. Paper presented at the Proceedings of the 33rd Annual ACM
SIGUCCS Conference on User Services, Monterey, CA.
Alexander, J. W., & Randolph, W. A. (1985). The fit between technology and structure as a
predictor of performance in nursing subunits. Academy of Management Journal, 28(4),
844–859.
Allen, T. J., & Cohen, S. I. (1969). Information flow in research and development laboratories
(technical communication patterns in R&D laboratories, discussing effects of work
structure, social relations, etc). Administrative Science Quarterly, 14(1), 12–19.
Argote, L. (1982). Input uncertainty and organizational coordination in hospital emergency
units. Administrative Science Quarterly, 27(3), 420–434.
Arnold, H. J. (1982). Moderator variables: A clarification of conceptual, analytic, and
psychometric issues. Organizational Behavior and Human Performance, 29(2), 143–174.
Babbie, E. (2004). The practice of social research. Belmont, CA: Wadsworth/Thomson Learning.

Bailey, D. R., & Tierney, B. (2008). Transforming library service through information commons:
Case studies for the digital age. Chicago, IL: American Library Association.
Baker, N., & Kirk, T. G. (2007). Merged service outcomes at Earlham college. Reference
Services Review, 35(3), 379–387.
Barley, S. R. (1986). Technology as an occasion for structuring: Evidence from observations of
CT scanners and the social order of radiology departments. Administrative Science
Quarterly, 31(1), 78–108.
Barton, E., & Weismantel, A. (2007). Creating collaborative technology-rich workspaces in an
academic library. Reference Services Review, 35(3), 395–404.
Beagle, D. R. (1999). Conceptualizing an information commons. Journal of Academic
Librarianship, 25(2), 82–89.
Blain, A. (2000). A partnership for future information technology support at a community
college. In L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers
in academic institutions (pp. 189–198). Chicago, IL: American Library Association.
Blandy, S. G. (1996). Introduction. In S. G. Blandy, L. Martin & M. Strife (Eds.), Assessment
and accountability in reference work (pp. 1–3). New York, NY: The Haworth Press.
Blau, P. M. (1972). Interdependence and hierarchy in organizations. Social Science Research,
1(1), 1–24.
Buckland, M. K. (2003). Five grand challenges for library research. Library Trends, 51(4),
675–686.
Channing, R. K., & Dominick, J. L. (2000). Wake forest university. In L. L. Hardesty (Ed.), Books,
bytes, and bridges: Libraries and computer centers in academic institutions (pp. 137–141).
Chicago, IL: American Library Association.
Cheng, J. L. C. (1983). Interdependence and coordination in organizations: A role-system
analysis. Academy of Management Journal, 26(1), 156–162.
Child, J. (1972). Organizational structure, environment and performance: The role of strategic
choice. Sociology, 6(1), 1–22.
Church, J. (2005). The evolving information commons. Library Hi Tech, 23(1), 75–81.
Comstock, D. E., & Scott, W. R. (1977). Technology and the structure of subunits:
Distinguishing individual and workgroup effects. Administrative Science Quarterly,
22(2), 177–202.
Cowgill, A., Beam, J., & Wess, L. (2001). Implementing an information commons in a
university library. Journal of Academic Librarianship, 27(6), 432–439.
Crawford, G. A. (1997). Information as a strategic contingency: Applying the strategic
contingencies theory of intraorganizational power to academic libraries. College &
Research Libraries, 58(2), 145–155.
Crockett, C., McDaniel, S., & Remy, M. (2002). Integrating services in the information
commons: Toward a holistic library and computing environment. Library Administration
and Management, 16(4), 181–186.
Daft, R. L. (1978). System influence on organizational decision-making: Case of resource-
allocation. Academy of Management Journal, 21(1), 6–22.
Daft, R. L. (2001). Organizational theory and design (7th ed). Cincinnati, OH: Thomson Learning.
Daft, R. L., & Macintosh, N. B. (1981). A tentative exploration into the amount and
equivocality of information processing in organizational work units. Administrative
Science Quarterly, 26(2), 207–224.
Dallis, D., & Walters, C. (2006). Reference services in the common environment. Reference
Services Review, 34(2), 248–260.

David, F. R., Pearce, J. A., & Randolph, W. A. (1989). Linking technology and structure to
enhance group performance. Journal of Applied Psychology, 74(2), 233–241.
de Jager, K. (2002). Successful students: Does the library make a difference? Performance
Measurement and Metrics, 3(3), 140–144.
Dearborn, D. C., & Simon, H. A. (1967). Selective perception: A note on the departmental
identifications of executives. Sociometry, 21(2), 140–144.
Department of Education, Institute of Educational Science. (2008). Library statistics program:
Academic libraries, 2008 [Data File]. Retrieved from National Center for Educational
Statistics Website, http://nces.ed.gov
Donaldson, L. (2001). The contingency theory of organizations. Thousand Oaks, CA: Sage
Publications Inc.
Dugan, R. E., & Hernon, P. (2002). Outcomes assessment: Not synonymous with inputs and
outputs. The Journal of Academic Librarianship, 28(6), 376–380.
Duncan, J. M. (1998). The information commons: A model for (physical) digital resource
centers. Bulletin of the Medical Library Association, 86(4), 576–582.
Edgar, W. B. (2006). Questioning LibQUAL+™. portal: Libraries and the Academy, 6(4), 445–465.
Fiedler, F. E. (1964). A contingency model of leadership effectiveness. Advances in Experimental
Social Psychology, 1, 149–190.
Fitzpatrick, E. B., Moore, A. C., & Lang, B. W. (2008). Reference librarians at the reference
desk in a learning commons: A mixed methods evaluation. The Journal of Academic
Librarianship, 34(3), 231–238.
Flowers, K., & Martin, A. (1994). Enhancing user services through collaboration at Rice
University. CAUSE/EFFECT, 17(3), 19–25.
Foley, T. J. (1997). Combining libraries, computing, and telecommunications: A work in
progress. Paper presented at the Proceedings of the 25th Annual ACM SIGUCCS
Conference on User Services: Are You Ready?, Monterey, CA.
Fox, D., Fritz, L., Kichuk, D., & Nussbaumer, A. (2001). University of Saskatchewan
information commons: Reconfiguring the learning environment. Retrieved from http://
library2.usask.ca/~fox/ic.pdf. Accessed on July 21, 2009.
Frand, J., & Bellanti, R. (2000). Collaborative convergence: Merging computing and library
services at the Anderson graduate school of management at UCLA. Journal of Business
& Finance Librarianship, 6(2), 3–25.
Franks, J. A., & Tosko, M. P. (2007). Reference librarians speak for users: A learning commons
concept that meets the needs of a diverse student body. The Reference Librarian, 47(97),
105–118.
Fry, L. W., & Slocum, J. W., Jr. (1984). Technology, structure, and workgroup effectiveness:
A test of a contingency model. Academy of Management Journal, 27(2), 221–246.
Galbraith, J. R. (1973). Designing complex organizations. Boston, MA: Addison-Wesley.
Ghoshal, S., & Nohria, N. (1989). Internal differentiation within multinational corporations.
Strategic Management Journal, 10(4), 323–337.
Greenwell, S. (2007). Around the world to the technology at the Hub@WT's, the University of
Kentucky's information commons. Library Hi Tech News, 24(6), 40–42.
Gresov, C. (1990). Effects of dependence and tasks on unit design and efficiency. Organization
Studies, 11(4), 503–529.
Griffin, R. (2000). Technology planning: Oregon State University's information commons. OLA
Quarterly, 6(3), 12–13.
Hage, J., Aiken, M., & Marrett, C. B. (1971). Organization structure and communications.
American Sociological Review, 36(5), 860–871.
Hall, R. H., & Tolbert, P. S. (2005). Organizations: Structures, processes, and outcomes. Upper
Saddle River, NJ: Prentice Hall.
Heath, F., Cook, C., Kyrillidou, M., & Thompson, B. (2002). ARL Index and other validity
correlates of LibQUAL+ scores. portal: Libraries and the Academy, 2(1), 27–42.
Heath, F., Cook, C., & Thompson, R. (2002). Reliability and structure of LibQUAL+ scores:
Measuring perceived library service quality. portal: Libraries and the Academy, 2(1),
3–12.
Henfridsson, O., Holmström, J., & Söderholm, A. (1997). Beyond the common-sense of
practice: A case for organizational informatics. Scandinavian Journal of Information
Systems, 9(1), 47–56.
Hernon, P., & Dugan, R. E. (2006). Institutional mission-centered student learning. In
P. Hernon, R. E. Dugan & C. Schwartz (Eds.), Revisiting outcomes assessment in higher
education (pp. 1–12). Westport, CT: Libraries Unlimited Inc.
Hernon, P., & Whitman, J. R. (2001). Delivering satisfaction and service quality: A customer-
based approach for libraries. Chicago, IL: American Library Association.
Hickson, D. J., Pugh, D. S., & Pheysey, D. C. (1969). Operations technology and organization
structure: An empirical reappraisal. Administrative Science Quarterly, 14(3), 378–397.
Hook, R. D. (1980). A comparative study of three medium-sized academic libraries using a
contingency theory of management. Unpublished Ph.D. thesis, University of Southern
California, Los Angeles, CA.
Ito, J. K., & Peterson, R. B. (1986). Effects of task difficulty and interunit interdependence on
information processing systems. Academy of Management Journal, 29(1), 139–149.
Jääskeläinen, A., & Lönnqvist, A. (2009). Designing operative productivity measures in public
services. VINE, 39(1), 55–67.
Jones, C. L. (1984a). Academic libraries and computing: A time of change. EDUCOM Bulletin,
20(1), 9–12.
Jones, K. H. (1984b). Conflict and change in library organizations: People, power, and service.
London: Clive Bingley Ltd.
Kauser, S., & Shaw, V. (2004). The influence of behavioural and organisational characteristics on
the success of international strategic alliances. International Marketing Review, 21(1), 17.
Kayongo, J., & Jones, S. (2008). Faculty perception of information control using
LibQUAL+™ indicators. The Journal of Academic Librarianship, 34(2), 130–138.
Kent, P. G., & McLennan, B. (2007). Developing a sustainable staffing model for the learning
commons: The Victoria University experience. Paper presented at the International
Conference on Information and Learning Commons: Enhancing its Role in Academic
Learning and Collaboration.
Khandwalla, P. N. (1973). Viable and effective organizational designs of firms. The Academy of
Management Journal, 16(3), 481–495.
Kim, K. K., & Umanath, N. S. (1992). Structure and perceived effectiveness of software
development subunits: A task contingency analysis. Journal of Management Information
Systems, 9(3), 157–181.
Kirk, T. (2004). The role of management theory in day-to-day management practices of a
college library director. Library Administration and Management, 18(1), 35–38.
Kirk, T. (2008). The merged organization: Confronting the service overlap between libraries
and computer centers. Library Issues: Briefings for Faculty and Administrators, 28(5), 1–4.
Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theory development, data
collection, and analysis. The Academy of Management Review, 19(2), 195–229.
Kling, R. (1980). Social analyses of computing: Theoretical perspectives in recent empirical
research. Computing Surveys, 12(1), 61–110.
Kling, R. (2007). What is social informatics and why does it matter? The Information Society,
23(4), 205–220.
Kling, R., Rosenbaum, H., & Sawyer, S. (2005). Understanding and communicating social
informatics: A framework for studying and teaching the human contexts of information and
communication technologies. Medford, NJ: Information Today.
Kuruppu, P. U. (2007). Evaluation of reference services – A review. The Journal of Academic
Librarianship, 33(3), 368–381.
Lawrence, P. R., & Lorsch, J. W. (1967a). Differentiation and integration in complex
organizations. Administrative Science Quarterly, 12(1), 1–47.
Lawrence, P. R., & Lorsch, J. W. (1967b). Organization and environment: Managing
differentiation and integration. Boston, MA: Harvard University Press.
Leatt, P., & Schneck, R. (1981). Nursing subunit technology: A replication. Administrative
Science Quarterly, 26(2), 225–236.
Leifer, R., & Huber, G. P. (1977). Relations among perceived environmental uncertainty,
organization structure, and boundary-spanning behavior. Administrative Science
Quarterly, 22(2), 235–247.
Lim, S. (2004). Power of systems offices in academic library organizations. University of
Wisconsin-Madison.
Lindauer, B. G. (1998). Defining and measuring the library’s impact on campus wide outcomes.
College and Research Libraries, 59, 546–571.
Lippincott, J. K. (2009). Information commons: Surveying the landscape. In C. Forrest &
M. Halbert (Eds.), A field guide to the information commons. Lanham, MD: Scarecrow
Press.
Lorsch, J. W., & Lawrence, P. R. (1972). Environmental factors and organizational integration.
In J. W. Lorsch & P. R. Lawrence (Eds.), Organization planning: Cases and concepts
(pp. 38–48). Homewood, IL: The Dorsey Press.
Lynch, B. P. (1974). An empirical assessment of Perrow. Administrative Science Quarterly,
19(3), 338–356.
Lynch, B. P. (1990). Management theory and organizational structure. In M. J. Lynch (Ed.),
Academic libraries: Research perspectives (Vol. 165, pp. 215–234). Chicago, IL: American
Library Association.
March, J. G., Simon, H. A., & Guetzkow, H. S. (1958). Organizations. New York, NY: Wiley.
Mark, B. (1985). Task and structural correlates of organizational effectiveness in private
psychiatric hospitals. Health Services Research, 20(2), 199–224.
Markus, M. L., & Robey, D. (1988). Information technology and organizational change:
Causal structure in theory and research. Management Science, 34(5), 583–598.
Matthews, J. R. (2007). Library assessment in higher education. Westport, CT: Libraries
Unlimited.
McDonald, J. A., & Micikas, L. B. (1994). Academic libraries: The dimensions of their
effectiveness. Westport, CT: Greenwood Press.
McKinstry, J., & McCracken, P. (2002). Combining computing and reference desks in an
undergraduate library: A brilliant innovation or a serious mistake? Libraries and the
Academy, 2(3), 391–400.
McLean, N. (1997). Convergence of libraries and computing services: Implications for reference
services. LASIE, 28(3), 5–9.
Metzger, M. C. (2006). Enhancing library staff training and patron service through a cross-
departmental exchange. Technical Services Quarterly, 24(2), 1–7.
Miller, J. (2008). Quick and easy reference evaluation: Gathering users’ and providers’
perspectives. Reference & User Services Quarterly, 47(3), 218–222.
Mohr, L. B. (1971). Organizational technology and organizational structure. Administrative
Science Quarterly, 16(4), 444–459.
Molholt, P. (1985). On converging paths: The computing center and the library. Journal of
Academic Librarianship, 11(5), 284–288.
Morales, S., & Sparks, T. (2006). Creating synergy to make it happen. Paper presented at the
Proceedings of the 34th Annual ACM SIGUCCS Conference on User Services,
Edmonton, Alberta, Canada.
Moran, R. F. (1978). Contingency theory and its implications for the structure of an academic
library. ERIC Document Reproduction Service No. ED163949, East Lansing, MI.
Neff, R. K. (1986). Merging libraries and computer centers: Manifest destiny or manifestly
deranged? Information Reports and Bibliographies, 15(3), 17–20.
Neill, J. (2008). Writing up a factor analysis. University of Canberra, Centre for Applied
Psychology.
Nielsen, B., Steffen, S. S., & Dougherty, M. C. (1995). Computing center/library cooperation in
the development of a major university service: Northwestern’s electronic reserve system.
Paper presented at the Realizing the Potential of Information Resources: Information,
Technology, and Services – Proceedings of the 1995 CAUSE Annual Conference,
New Orleans, LA.
Nikkel, T. (2003). Implementing the Dalhousie learning commons. Feliciter, 49(4), 212–214.
Orlikowski, W. (1992). The duality of technology: Rethinking the concept of technology in
organizations. Organization Science, 3(3), 398–426.
Orlikowski, W. J. (2000). Using technology and constituting structures: A practice lens for
studying technology in organizations. Organization Science, 11(4), 404–429.
Orlikowski, W. J., & Robey, D. (1991). Information technology and the structuring of
organizations. Information Systems Research, 2(2), 143–169.
Oulton, A. J. (1991). Strategies in action: Public library management and public expenditure
constraints. London, UK: Library Association Publishing.
Overton, P., Schneck, R., & Hazlett, C. B. (1977). An empirical study of the technology of
nursing subunits. Administrative Science Quarterly, 22(2), 203–219.
Perrow, C. (1967). A framework for the comparative analysis of organizations. American
Sociological Review, 32(2), 194–208.
Podsakoff, P. M., MacKenzie, S. B., Podsakoff, N. P., & Lee, J.-Y. (2003). Common method
biases in behavioral research: A critical review of the literature and recommended
remedies. Journal of Applied Psychology, 88(5), 879–903.
Price, J. L. (1972). Handbook of organizational measurement. Lexington, MA: D.C. Heath and
Company.
Rayward, W. B. (1969). Libraries as organizations. College and Research Libraries, 30(4),
312–326.
Roszkowski, M. J., Baky, J. S., & Jones, D. B. (2005). So which score on the LibQUAL+™ tells
me if library users are satisfied? Library & Information Science Research, 27(4), 424–439.
Samson, S., Granath, K., & Pengelly, V. (2000). Service and instruction: A strategic focus. In
L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers in
academic institutions (pp. 153–163). Chicago, IL: American Library Association.
Scott, W. R. (1992). Organizations: Rational, natural, and open systems (3rd ed). Upper Saddle
River, NJ: Prentice-Hall.
Scott, W. R., & Davis, G. F. (2007). Organizations and organizing: Rational, natural, and open
systems perspectives. Upper Saddle River, NJ: Pearson Prentice Hall.
Sharrow, M. J. (1995). Library and IT collaboration projects: Nine challenges. CAUSE/
EFFECT, Winter, 55–56.
Shi, X., & Levy, S. (2005). A theory-guided approach to library services assessment. College and
Research Libraries, 66(3), 266–277.
Spector, P. E. (1992). Summated rating scale construction: An introduction (Vol. 82). Newbury
Park, CA: Sage University Paper Series.
Spencer, M. E. (2007). The state-of-the-art: NCSU libraries learning commons. Reference
Services Review, 35(2), 310–321.
Stemmer, J. K. (2007). The perceptions of effectiveness in merged information services
organizations: Combining library and information technology services at liberal arts
institutions. Ohio University.
Stueart, R. D., & Moran, B. B. (2007). Library and information center management (7th ed).
Westport, CT: Libraries Unlimited.
Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed). Boston, MA:
Pearson.
Telatnik, G. M., & Cohen, J. A. (1993). Working together: The library and the computer center.
Paper presented at the Proceedings of the 21st Annual ACM SIGUCCS Conference on
User Services, San Diego, CA.
Thompson, B., Cook, C., & Kyrillidou, M. (2005). Concurrent validity of LibQUAL+™ scores:
What do LibQUAL+™ scores measure? The Journal of Academic Librarianship, 31(6),
517–522.
Thompson, B., Kyrillidou, M., & Cook, C. (2008). Library users' service desires: A LibQUAL+
study. The Library Quarterly, 78(1), 1–18.
Thompson, J. (1967). Organizations in action. New York, NY: McGraw Hill.
Todd, K., Mardis, L., & Wyatt, P. (2005). Synergy in action: When information systems
and library services collaborate to create successful client-centered computing labs.
Paper presented at the Proceedings of the 33rd Annual ACM SIGUCCS Conference on
User Services, Monterey, CA.
Trawick, T. A., & Hart, J. (2000). The computing center and the library at a teaching university:
Application of management theories in the restructuring of information technology. In
L. L. Hardesty (Ed.), Books, bytes, and bridges: Libraries and computer centers in
academic institutions (pp. 178–188). Chicago, IL: American Library Association.
Tucker, J. M., & McCallon, M. (2008). Abilene Christian University: Margaret and Herman
Brown Library. In D. R. Bailey & B. Tierney (Eds.), Transforming library service through
information commons: Case studies for the digital age (pp. 99–103). Chicago, IL:
American Library Association.
Tushman, M. L. (1977). Special boundary roles in the innovation process. Administrative
Science Quarterly, 22(4), 587–605.
Tushman, M. L. (1978). Technical communication in R&D laboratories: The impact of project
work characteristics. The Academy of Management Journal, 21(4), 624–645.
Tushman, M. L. (1979). Work characteristics and subunit communication structure: A
contingency analysis. Administrative Science Quarterly, 24(1), 82–98.
Tushman, M. L., & Scanlan, T. J. (1981). Boundary spanning individuals: Their role in
information transfer and their antecedents. Academy of Management Journal, 24(2),
289–305.
Van de Ven, A. H., Delbecq, A. L., & Koenig, R. (1976). Determinants of coordination modes
within organizations. American Sociological Review, 41(2), 322–338.
Van de Ven, A. H., & Ferry, D. L. (1980). Measuring and assessing organizations. New York,
NY: Wiley.
Velasquez, D. (2007). The development and testing of a questionnaire to measure complexity of
nursing work performed in nursing homes. Geriatric Nursing, 28(2), 90–98.
Vorwerk, R. J. (1970). The environmental demands and organizational states of two academic
libraries. Unpublished Ph.D. thesis, Indiana University, Bloomington, IN.
Vose, D. S. (2008). Binghamton University, State University of New York: Glenn G. Bartle
Library. In D. R. Bailey & B. Tierney (Eds.), Transforming library service through
information commons: Case studies for the digital age (pp. 29–34). Chicago, IL: American
Library Association.
Weiner, S. (2009). The contribution of the library to the reputation of a university. The Journal
of Academic Librarianship, 35(1), 3–13.
Weiner, S. G. (2003). Resistance to change in libraries: Application of communication theories.
portal: Libraries and the Academy, 3(1), 69–78.
Weng, H. (1997a). A contingency approach to explore the relationship among structure,
technology, and performance in academic library departments. In D. E. Williams &
E. D. Garten (Eds.), Advances in library administration and organization (Vol. 15,
pp. 249–317). Greenwich, CT: JAI Press.
Weng, H. (1997b). A contingency approach to explore the relationships among structure,
technology, and performance in academic library departments. Unpublished Ph.D. thesis,
Rutgers University.
Wolske, M., Larkee, B., Lyons, K., & Bridgewater, K. (2006). Lessons learned from the library:
Building partnerships between campus and departmental it support. Paper presented at
the Proceedings of the 34th Annual ACM SIGUCCS Conference on User Services,
Edmonton, Alberta, Canada.
Woodsworth, A. (1988). Computing centers and libraries as cohorts: Exploiting mutual
strengths. Journal of Library Administration, 9(4), 21–34.
Yohe, M., & AmRhein, R. (2005). It's not your parents' library: No box required. Paper
presented at the Proceedings of the 33rd Annual ACM SIGUCCS Conference on User
Services Monterey, CA.
Zinn, J. S., Brannon, D., Mor, V., & Barry, T. (2003). A structure-technology contingency analysis
of caregiving in nursing facilities. Health Care Management Review, 28(4), 293–306.
APPENDIX A: INFORMATION COMMONS SURVEY

APPENDIX B: INSTITUTIONAL REVIEW BOARD APPROVAL

APPENDIX C: RECRUITMENT LETTER

Dear ____________

I am contacting you to ask for your participation in my PhD dissertation
research project concerning the management of information service units;
specifically those that involve collaboration between the libraries and a
university computing unit. This research project surveys both library staff
and managers, and computing consultant staff and managers. The supervisor
survey concerns perceptions of performance. The staff survey concerns
service workflows between staff members. Pre-testing of the survey suggests
that the survey will take between 10 and 15 minutes to complete.
Your answers and those of your staff will be especially helpful in
understanding how units in the information commons are coordinated and
in identifying how to improve inter-unit relationships. Moreover, we hope
that your answering the questions will help you step back and evaluate for
yourself how your unit coordinates with other information service units.
What I am asking is that you fill out the appropriate questionnaire and then
recommend and forward the survey link to the rest of your information
service unit. A link to these surveys is located at the bottom of this page and
can be cut and pasted into an email.
Your answers are strictly confidential. No report will identify any
individual person, information service unit, or academic institution. Please
read the study information sheet (IRB Study #09-11000774) linked from
the survey site. If you have any questions, please feel free to contact me either
by email at ctuai@indiana.edu or by phone at (812) 855-2018.

URL to survey, "Partnerships in the Delivery of Information Services":
http://www.surveymonkey.com/s/iu_survey

Best Wishes,
Cameron