
Report of the Workshop on Collaboration Engineering

Monday, January 7th, 2008

Robert O. Briggs, Gert-Jan de Vreede, Gwendolyn Kolfschoten

Program
9.00  Welcome
9.05  Collaboration Engineering introduction (Vreede, Briggs, Kolfschoten)
9.20  A Measurement Framework for Patterns of Collaboration (Gwendolyn Kolfschoten, Paul Lowry, Mehruz Kamal, Doug Dean)
9.50  Session 1: Introduction & brainstorm
10.15 Break
10.45 Session 1: New GSS functionalities
12.00 Lunch
13.00 Wrap-up of Session 1
13.45 Measures for Groupthink and Disciplinary Ethnocentrism in Interdisciplinary Collaboration (John Murphy, Justin M. Yurkovich, Alanah J. Davis, Robert O. Briggs)
14.15 Break
14.45 Koncero Open Source Collaboration Project (Joel Helquist)
15.15 Designing and Evaluating thinkLets for Convergence (Alanah Davis, Gert-Jan de Vreede, John Murphy)
15.45 Summary of workshop results
16.00 End

Brainstorm on new functionalities for future Group Support Systems


New GSS are being developed rapidly; take for example ThinkTank (GroupSystems), NEMO2, Koncero, TeamSupport, and Tinkature. Furthermore, social software such as wikis, SharePoint, and Sametime increasingly offers functionalities also found in Group Support Systems. While each of these systems has some new and unique functionalities, none is revolutionarily different from the first Group Support Systems introduced around 1990. In this year's workshop we therefore brainstormed new functionalities and visions for the design of Group Support Systems.

'Old' vs. 'New':
- Brainstorming in text → Brainstorming with graphics
- Organizing in clusters → Organizing on maps or models
- Laptops / LAN → SmartBoardRooms
- Online → Mobile/Virtual
- Decision Support → Vision support

Two groups worked on this challenge and came up with different ideas.

Group 1: Gert-Jan de Vreede, Triparna Gangopadhyay, Doug Druckermiller, Paul Lowry, Luke Steinhauser, Gwendolyn Kolfschoten, Marco Wilde

The Group Support System of the future consists of 'pluglets': small collaboration tools that can be combined in a group support suite, but that are also integrated into regularly used desktop tools such as Outlook. The pluglets fall into three categories: content tools, logistic support tools, and temperature tools. The content tools support brainstorming, editing, reviewing, voting, spreadsheet analysis, voice chat, clustering, modeling, graphical annotation (emoticons), and graphical design. The logistic tools enable the planning of the collaboration process and contain an agenda, a process overview, connections with personal agendas, participant lists, to-do lists, progress indication, and planning/scheduling tools. The third category contains tools to monitor and evaluate group dynamics, especially in a distributed setting: process evaluation tools, tools to move the group to a next activity, 'facilitation in the box' integrated instructions, attention-drawing tools, turn-taking tools, process chat tools, and artificial facilitation intelligence that highlights problems in group dynamics or process to the facilitator based on analysis of results.

Group 2: Robert Briggs, Alanah Davis, John Murphy, Barry Bohm, Joel Helquist

What new capabilities would be useful to enhance practitioners' ability to build consensus? Things that are hard about building consensus:
- Understanding a proposal
- Assuring that others understand the proposal the same way I do
- Projecting the outcomes of implementing a proposal
- Understanding what opportunities must be sacrificed if we commit to a proposal
- Accountability for commitments made
- Understanding and mitigating risks associated with proposals
- Dealing with differences in risk aversion/risk taking
- Seamless integration of supporting technology

A Measurement Framework for Patterns of Collaboration


Gwendolyn L. Kolfschoten, Systems Engineering Department, Faculty of Technology, Policy and Management, Delft University of Technology, G.L.Kolfschoten@tudelft.nl

Paul Benjamin Lowry, Information Systems Department, Marriott School of Management, Brigham Young University, Paul.Lowry.PhD@gmail.com

Douglas L. Dean, Information Systems Department, Marriott School of Management, Brigham Young University, doug_dean@byu.edu

Mehruz Kamal, Information Systems and Quantitative Analysis Department, College of Information Science and Technology, University of Nebraska at Omaha, mkamal@mail.unomaha.edu

Working paper, May 2007. Please do not cite without permission.
Copyright 2007 Gwendolyn L. Kolfschoten, Mehruz Kamal, Paul Benjamin Lowry, and Douglas L. Dean. All Rights Reserved.

Abstract
Collaboration Engineering is an approach to designing collaborative work practices for high-value recurring tasks, and deploying those designs for practitioners to execute for themselves without ongoing support from professional facilitators. Collaboration Engineers design collaboration processes as combinations of fundamental patterns of collaboration. This paper contains the results of a meta-analysis of literature related to the following six fundamental patterns of collaboration: generate, reduce, clarify, organize, evaluate, and build consensus. We describe each fundamental pattern of collaboration together with common subtypes of these patterns. Ideation, gathering, and reflecting are subtypes of generation. Filtering, summarizing, and abstraction are subtypes of reduction. Sense making and building shared understanding are subtypes of clarification. Categorizing, outlining, sequencing, creating causal decomposition, and modeling are subtypes of organization. Choice and preference communication are subtypes of evaluation. Building agreement and building commitment are subtypes of consensus building. This article clarifies these patterns and describes their common subtypes. It also describes the key interventions required to evoke them and the constructs used in the literature to measure them. Together, the fundamental patterns, subtypes, and constructs used to measure these subtypes are combined into a framework for the explication and measurement of interventions in collaborative work.

Keywords
Collaboration Engineering, measurement, metrics, patterns of collaboration, GSS, collaborative software, group support systems

1. Introduction
Collaboration Engineering (CE) is an approach to designing collaborative work practices for high-value recurring tasks, and deploying those designs for practitioners to execute for themselves without ongoing support from professional facilitators [1]. To successfully design collaboration processes (CPs) with predictable outcomes, the designer must know how the implementation of collaborative patterns will produce specific outcomes [1]. By predictable group interventions we mean an understanding of how a specific intervention, which evokes specific patterns of collaboration, will result in specific outcomes. Facilitation interventions for CE may be captured as design patterns called thinkLets [1], which are scripted, reusable, and transferable collaborative activities. ThinkLets are best practices and therefore to some extent predictable; however, the predictability of thinkLets can improve considerably if we gain more understanding of the relation between facilitation interventions and the patterns of collaboration that emerge in the group as a result of these interventions.

Two key objectives of theory are to explain and predict [2]. Further, logical positivism requires that theoretical propositions be subjected to empirical measurement and verification [3]. Explanation, prediction, and measurement, however, can be applied to different units of analysis. For example, one way to study a collaborative process, such as risk analysis, is to study the overall process as a whole. Another way to study risk management, or any other process, would be to break the overall process down into fundamental patterns of collaboration and then to study the inputs, process, and outcomes of each pattern. The six fundamental collaborative patterns described by Briggs et al. [1] are the following: generate, reduce, clarify, organize, evaluate, and build consensus. Since these six patterns make up most, if not all, collaborative work, they are essential topics of study in their own right.

Three additional distinctions are important regarding research of these patterns. First, there are multiple approaches, or processes, for how each pattern can be accomplished. For example, in the case of the generate pattern, past research has shown that there are many interventions that stimulate a group to produce different outcomes, including different types of questions [4, 5], other stimuli [6], processes [7, 8], and motivational approaches [9]. If research focuses on outcomes of multi-pattern processes (MPPs) in which generation is one step, it is difficult to study the specific effects of these variations both on the MPPs and on the individual constituent patterns. Second, even when the same process is used, the process may be supported by different types of media and tools such as flip charts, paper, sticky notes, a single computer, or some form of collaborative software. If the effect of different supporting tools is evaluated for MPPs (e.g., problem solving, risk assessment), researchers are unable to study the specific merits of each approach for each pattern. Finally, single-user computer applications and collaborative work environments, such as GSS and online communities, are not all created equal [10]. Rather, different tools and different features of these tools offer different forms of support. It would be easier to understand the effects of these tools and features if researchers studied these features in relation to each specific pattern rather than for MPPs.
In summary, by studying each fundamental pattern as the primary unit of analysis, researchers will be better able to discover the effects that different processes, media, and tools have on outcomes. This understanding will help produce more predictable, effective outcomes for each pattern of collaboration, allowing collaboration engineers to put these together to form more effective and predictable MPPs. Having said this, there are both advantages and disadvantages to studying an overall process as opposed to studying its fundamental components. There are two primary advantages of studying MPPs instead of fundamental patterns. First, larger overall processes are sometimes more interesting to practitioners and organizations and thus can be more interesting to researchers. Second, in terms of adoption by organizations, the overall process is typically the unit of analysis that managers manage and measure. The disadvantage of using the overall process as the unit of analysis is that components of the process can be poorly understood and measured. Returning to the risk management example, it is common to generate a list of potential risks and to reduce this list to a smaller set of key risks that the organization should further analyze. Next, mitigation strategies for these risks may be generated and prioritized. Finally, these mitigation approaches may need to be elaborated on and adapted to the organization. While studying such an overall process can be beneficial, it can also obscure a clearer understanding of each of the parts. An understanding of the parts, however, is essential if designers of collaboration activities are to understand the pros and cons of different approaches for each pattern. The only way to assess these outcomes at the pattern level is through appropriate measurement. Unless research and measurement are conducted at the pattern level, it is virtually impossible to predict how the application of different processes and support approaches will benefit or impede MPPs.

A greater understanding of how implementations of each pattern produce specific outcomes can provide a many-fold benefit, because a set of improved, understood implementations could be used in a myriad of larger end-to-end processes while making the overall process easier to manage, repeat, and optimize. Study at the individual pattern level will also help researchers focus on and refine measures for each pattern. Accordingly, we call for researchers to study implementations of fundamental patterns of collaboration.

This paper provides three main contributions that extend previous research. First, we more clearly define the fundamental patterns of collaboration. Second, we describe common subpatterns that are instantiations of each pattern. Third, we describe constructs that are measured by researchers during the study of these subpatterns. All of these are combined into a framework for future research. This framework will help researchers better understand choices among implementations and what measures will best assess the key outcomes of these implementations.

The remainder of this paper is organized as follows. First, we summarize background research. Next, we describe the research approach. We then discuss our findings for each of the patterns of collaboration and describe common subpatterns for each pattern. Based on these findings we present a framework to measure patterns of collaboration. The paper concludes with recommendations for further research.

2. Background
Briggs et al. [1] described the six fundamental patterns of collaboration that make up common collaborative activities. These are shown in Table 1. A number of meta-analyses have reported outcomes studied in collaborative or group settings [11-17]. Fjermestad and Hiltz offer the most complete overview of outcome constructs that have been studied. Outcomes used to describe the success of collaborative activities are categorized as efficiency, effectiveness, satisfaction, and consensus. For efficiency, the time spent on the task is measured; effectiveness reflects different factors related to the quantity and quality of outcomes and factors such as communication, focus, and understanding. Satisfaction includes factors reflecting the perception of participants about participation and outcomes. Finally, consensus and commitment are sometimes reported as outcomes.

Table 1. Fundamental Collaborative Patterns
1. Generate: Move from having fewer to having more concepts in the pool of concepts shared by the group.
2. Reduce: Move from having many concepts to a focus on fewer concepts that the group deems worthy of further attention.
3. Clarify: Move from having less to having more shared understanding of concepts and of the words and phrases used to express them.
4. Organize: Move from less to more understanding of the relationships among concepts the group is considering.
5. Evaluate: Move from less to more understanding of the relative value of the concepts under consideration.
6. Build consensus: Move from having fewer to having more group members who are willing to commit to a proposal.

Most of these outcome factors pertain to the results of MPPs instead of a single pattern of collaboration. A notable exception is the generate pattern, which, while frequently used in MPPs, has also been much studied as a stand-alone pattern. While some of the factors can be used to measure fundamental patterns of collaboration, most of the fundamental patterns are not covered or are only partially covered by these factors. Table 2 shows a mapping between the six patterns of collaboration and the outcome factors reported by Fjermestad and Hiltz [11]. For reduce and organize, no outcome factors are reported. This is likely because these patterns are often interim steps in MP collaborations [18]. Also, one reason reduction has received so little research attention is that it has sometimes been confused with convergence but, as we will explain later, it is not the same thing. Measurement of organization is likely omitted in the literature because it can be accomplished in many ways.

Table 2. Outcome Factors from Fjermestad and Hiltz
- Generate: number of comments; idea quality; creativity/innovation
- Reduce: [not addressed]
- Clarify: level of understanding
- Organize: [not addressed]
- Evaluate: decision quality; depth of evaluation
- Build consensus: commitment to results; decision agreement; commitment

This overview shows that much GSS research has focused on MPPs. This is also visible if we look at some of the research models used in GSS research, such as Nunamaker's input-process-output model [19], adaptive structuration theory [20], and the task-technology fit model of Zigurs and Buckland [21]. While very useful, each of these models focuses on the effect of collaboration support on an entire task rather than on a specific collaborative pattern.

3. Research approach
To study ways in which fundamental patterns have been implemented, studied, and measured, we searched for articles that studied these patterns. Collaboration is a broad field, studied in a variety of disciplines such as GSS research, human-computer interaction, organizational science, psychology, social psychology, decision science, education, and many others. To find articles related to patterns of collaboration, we searched for articles based on the names of the six fundamental patterns and synonyms for these patterns. We used a variety of search indices including Google Scholar, Scopus, IEEE Xplore, PsycINFO, ACM Digital Library, EBSCO, and Web of Science. Where possible, we used articles that offered an overview of research regarding the pattern. In other instances, we had to explore different studies to derive subpatterns and the different approaches used to create the pattern. We kept collecting articles until no new subpatterns and metrics were found. Furthermore, we examined the citations in the located papers to find additional papers describing similar interventions and the metrics used. This search produced 97 articles. From the articles we captured the definition of the phenomenon of interest, which reflected the pattern or subpattern; the approach used to evoke the pattern; and the metrics used to assess the outcomes. This approach was used for all patterns except the generate pattern, because one of the authors had recently published a comprehensive meta-analysis of measurement approaches for this pattern that included 90 articles in top journals from the 1990 to 2005 time period [4]. We now discuss our findings.

4. Measuring patterns of collaboration


Generate Pattern of Collaboration

The generate pattern is defined as moving from having fewer to having more concepts in the pool of concepts shared by a group. Generate is a pattern in which the group creates content. Briggs et al. [10] initially called this pattern diverge because it was conceptualized in the context of creativity and ideation.

Subsequent discussion between the current authors and Briggs led to the renaming of the concept as generate because, beyond ideation, content is often generated by a group when collecting information. There are two types of generation: (1) new or creative ideas (solutions, problem solving); and (2) collection of expertise, facts, and explanations (knowledge sharing).

Ideation is by far the most researched form of generate. Early idea-generation research used quantity as a measure of quality, assuming that if a sufficient number of ideas were produced, the resulting idea pool would be more likely to contain high-quality ideas [22]. Briggs and Reinig [23] state that the ratio between quality and quantity is especially important when assessing the success of creativity. A positive correlation between quantity and quality has been found in some studies [24-26], but a negative correlation has been found in others [27, 28]. Studies that go beyond merely enumerating ideas require researchers to select a definition of one or more of the three constructs that are typically operationalized as the dependent variable(s): (1) idea quality; (2) idea novelty, which is sometimes referred to as rarity or unusualness; and (3) idea creativity. Dean et al. [4] found that more recent studies tend to go beyond just counting the number of ideas produced by a group. Instead, they both count ideas and assess ideas on a number of dimensions. Recent meta-analyses have drawn a distinction between two fundamental views of creativity: the novelty-centric view, where ideas are considered creative if they are novel or rare, and the quality-centric definition of creativity, where ideas must not only be novel but must also be feasible, effective in solving the problem, relevant to the problem, and clearly defined [4, 29]. Moreover, Dean et al. [4] found that some studies have focused on quality indicators such as effectiveness, relevance, and clarity but have excluded novelty. The terms quality ideas, novel ideas, and creative ideas should be reserved for ideas that meet specific thresholds of the constructs included in these measures. To identify novel ideas, quality ideas, and creative ideas, Dean et al. [4] recommend a threshold approach similar to that introduced in [24] and later used by [30]. In this approach, quality ideas should meet a specific threshold on workability and relevance. Creative ideas should meet a specific threshold on novelty, workability, and relevance. These threshold measures are noncompensatory in the sense that strength in one quality indicator should not compensate for weakness in another area. Finally, total quality, total novelty, and total creativity can be defined by summing the scores for each of these dimensions.

Gathering or elaboration is the second form of the generate pattern. The objective of this subpattern is to accumulate relevant information, as opposed to finding new or unique solutions. In this subpattern the amount of information is less important than getting the relevant information required for the purpose of the decision at hand. Constructs used to assess gathering include relevance, clarity, accuracy, reliability, timeliness, and the intrinsic quality of the information gathered (purposefulness, usefulness) [31]. Which of these factors are most relevant depends on the purpose of the data gathering. Table 3 summarizes the subpatterns and measurement constructs for the generate pattern.

Table 3. Subpatterns and Measurement Constructs for Generate
- Creativity: # contributions (per participant/time interval); # unique contributions; # off-topic/outside scope; # double; # purposeful; quality; uniqueness; creativeness; ratio of quality/creativity/uniqueness/double/purposeful/off-topic contributions vs. # total or another criterion
- Gathering: completeness; relevance; clarity; accuracy; reliability; timeliness; purposefulness; usefulness
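To make the threshold approach concrete, here is a minimal sketch of the noncompensatory classification recommended by Dean et al. [4]: an idea counts as a quality idea only if it meets thresholds on both workability and relevance, and as a creative idea only if it also clears a novelty threshold. The 1-5 rating scale, the cutoff value, and all names are illustrative assumptions, not values prescribed by the paper.

```python
# A sketch of noncompensatory threshold classification of ideas.
from dataclasses import dataclass

@dataclass
class IdeaRatings:
    novelty: float      # e.g., judges' mean rating on an assumed 1-5 scale
    workability: float
    relevance: float

THRESHOLD = 3.5  # assumed cutoff on the illustrative 1-5 scale

def is_quality(idea: IdeaRatings) -> bool:
    # Noncompensatory: high relevance cannot offset low workability.
    return idea.workability >= THRESHOLD and idea.relevance >= THRESHOLD

def is_creative(idea: IdeaRatings) -> bool:
    # Creative ideas must additionally clear the novelty threshold.
    return is_quality(idea) and idea.novelty >= THRESHOLD

def totals(ideas: list[IdeaRatings]) -> dict:
    # "Total" scores sum each dimension over the idea pool; the exact
    # summation is our reading of "summing the scores".
    return {
        "total_novelty": sum(i.novelty for i in ideas),
        "total_quality": sum(i.workability + i.relevance for i in ideas),
        "total_creativity": sum(i.novelty + i.workability + i.relevance for i in ideas),
    }
```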

Reduce Pattern of Collaboration

The reduce pattern of collaboration deals with moving from having many concepts to a focus on fewer concepts that a group deems worthy of further attention. Although reduction has been studied less than generation, it is an essential pattern because it allows a group both to lessen and to simplify the concepts that remain under consideration. In part, reduction is necessary because generation can produce a large volume of content of varying relevance. Moreover, generated content can be at multiple levels of abstraction and granularity. Reduction has sometimes been confused with convergence, but it is not the same pattern. Briggs et al. [10] described convergence as both reduction and the establishment of shared meaning. Briggs et al. [1] later separated these two concepts into reduction and clarification. Reduction can be a step that leads to further discussion, clarification, and evaluation, all of which are steps that a group can use to move towards convergence. The distinction between reduction and clarification is important because each can be accomplished independently. For example, it is possible to reduce information without achieving clarification, and it is possible to accomplish clarification without reduction.

The first approach to reduction, filtering, aims to keep only the information that meets a specific criterion or criteria. This can be achieved either by removing what is not wanted or by selecting what is wanted. Metrics for this include the preciseness or match of the selected items with respect to the information need, and the ratio between the amount of overall information and the amount of information that meets the information needs [32]. Another metric is the ratio between the amount of input information and the amount of information after reduction; in this case criteria similar to those in the generate pattern can be used (e.g., number of unique contributions before and after reduction, number of high-quality contributions before and after, etc.) [33].

A second approach to reduction is a form of summarization. The aim of this approach is not to select part of the information, but rather to capture the essence of the information with fewer information elements. Namely, all redundant information is removed, but no choices are made in regard to which remaining contributions are preferred over other contributions. There are several approaches to summarizing. First, summarizing can be accomplished when only the unique information is selected. Second, similar contributions can be merged, to keep only unique contributions. Third, an instance of similar contributions can be selected to represent multiple instances. Last, extraction of the most important contributions can also be used as a summarizing method [34]. In this instance, an evaluation pattern is used to accomplish reduction. Outcome metrics for the quality of summarization assess how well the summary represents the original set of contributions. Other key indicators are conciseness and consistency. An ideal summarization would be both parsimonious and complete.

A third approach to reduction is to reduce information through abstraction. The purpose of abstraction is to make content more intellectually manageable by allowing group members to pay attention to relevant information and to ignore other details. Smith et al. [35] describe two approaches for abstraction: generalization and aggregation.
Generalization refers to an abstraction in which a set of similar objects is regarded as a generic object, such as employed persons being abstracted as a generic object "employee". With this form of abstraction, individual differences between employees, such as different names, job positions, and ages, can be ignored. Aggregation refers to an abstraction in which a relationship between objects is regarded as a higher-level object. In such an abstraction many details of the relationship may be ignored. For example, a certain relationship between a person, an airplane, and a date can be abstracted as a reservation without bringing to mind all of the details of the underlying relationship, for example the flight number, the date and time of the flight, and the airline. Metrics for abstraction should assess the appropriateness of the abstraction: whether it retains the relevant information while omitting unneeded details, in the form of aggregation or generalization. Moreover, abstractions should be complete in terms of the purpose they are intended to serve, consistent, unambiguous, and without overlap. The difference between summarizing and abstraction is that in summarizing the same information is represented with fewer information elements, while in abstraction the information is characterized by high-level concepts that encompass the relevant information in the original set. Table 4 summarizes the subpatterns and measurement constructs for the reduce pattern.

Table 4. Subpatterns and Measurement Constructs for Reduction
- Filtering: relevance of selected information; amount of information selected; amount of relevant information vs. total amount selected; amount of information selected vs. initial amount of information; amount of relevant information in the selection vs. amount of relevant information in the initial set
- Summarizing: representativeness; conciseness; consistency; completeness; parsimoniousness
- Abstracting: representativeness; completeness; amount of ambiguity; amount of overlap
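As an illustration of the filtering metrics above, the following sketch computes the three ratio measures from Table 4 on toy data, assuming the relevant items can be identified (e.g., by expert judges). The function name and the numbers are ours, not from the paper.

```python
# Illustrative computation of filtering (reduce) metrics on toy idea sets.

def filtering_metrics(initial: set, selected: set, relevant: set) -> dict:
    return {
        # Precision-like: share of selected items that meet the information need
        "relevant_vs_selected": len(selected & relevant) / len(selected),
        # Reduction ratio: how much the pool shrank
        "selected_vs_initial": len(selected) / len(initial),
        # Recall-like: share of the relevant items that survived reduction
        "relevant_kept_vs_relevant_initial":
            len(selected & relevant) / len(initial & relevant),
    }

initial = {f"idea{i}" for i in range(20)}       # 20 brainstormed ideas
relevant = {f"idea{i}" for i in range(8)}       # 8 deemed relevant by judges
selected = {f"idea{i}" for i in range(5, 15)}   # 10 kept after filtering

print(filtering_metrics(initial, selected, relevant))
# relevant_vs_selected = 3/10, selected_vs_initial = 10/20,
# relevant_kept_vs_relevant_initial = 3/8
```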

Clarify Pattern of Collaboration

The clarify pattern of collaboration deals with moving from having less to having more shared understanding of concepts, words, problems, and possible solutions. The clarify pattern can best be understood in terms of two subpatterns: development of shared understanding and sense making. Mulder, Swaak, and Kessels [36] explain the notion of shared understanding as implying mutual knowledge, mutual beliefs, and mutual assumptions. Groups achieve shared understanding when the group comes to a common understanding of concepts and words that are germane to the task at hand. Sense making usually requires some development of shared understanding of concepts and terms but also includes the development of a common understanding of the problem, the context of the problem, and the possible actions the group might take to solve the problem. In other words, sense making is done to prepare the group to act in a principled and informed manner [37]. Sense-making tasks often involve searching for ideas and input made by others in a group that are relevant for the purpose at hand and then extracting and reformulating the information so that it can be used [38]. Activities such as paraphrasing, summarizing, and explaining are part of developing shared understanding and sense making.

In line with shared understanding, a number of studies have investigated the nature of mental models in order to understand how learning takes place. In the context of the clarify pattern, mental models provide a means to specify relevant knowledge as well as the relationships between knowledge components through the use of concept relatedness ratings [39, 40]. The same constructs can be used to assess both shared understanding and sense making. Table 5 summarizes the subpatterns and measurement constructs for the clarify pattern.

Table 5. Subpatterns and Measurement Constructs for Clarify
- Building shared understanding / Sense making: conceptual learning; validity; clarity; motivation; relevance
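One way to operationalize the concept relatedness ratings mentioned above [39, 40] is sketched below: each participant rates how related every pair of task concepts is, and the correlation between two participants' rating vectors serves as a shared-understanding score. The pairing scheme, the use of Pearson correlation, and the toy data are our assumptions, not a method prescribed by the cited studies.

```python
# A sketch of a shared-mental-model score from concept relatedness ratings.
from itertools import combinations
from statistics import correlation  # Pearson correlation, Python 3.10+

concepts = ["risk", "mitigation", "cost", "schedule"]
pairs = list(combinations(concepts, 2))  # all 6 unordered concept pairs

# Relatedness ratings on an assumed 1-5 scale, one value per concept pair.
alice = {("risk", "mitigation"): 5, ("risk", "cost"): 3, ("risk", "schedule"): 2,
         ("mitigation", "cost"): 4, ("mitigation", "schedule"): 2, ("cost", "schedule"): 4}
bob   = {("risk", "mitigation"): 4, ("risk", "cost"): 2, ("risk", "schedule"): 1,
         ("mitigation", "cost"): 5, ("mitigation", "schedule"): 3, ("cost", "schedule"): 3}

# Higher correlation = more similar relatedness structures = more shared understanding.
similarity = correlation([alice[p] for p in pairs], [bob[p] for p in pairs])
print(f"shared-understanding score: {similarity:.2f}")
```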

Organize Pattern of Collaboration

The organize pattern involves moving from less to more understanding of the relationships among concepts the group is considering. In other words, to organize means to make complex ideas understandable [41]. Briggs et al. [10] elaborate on the organize pattern as follows: "The point of reducing complexity and making things more understandable is to reduce the effort of a follow-on activity."

When a group organizes a set of concepts, they develop an understanding of the relationships among the concepts. They consider possible relationships among concepts and determine which relationships exist among which concepts. Briggs et al. provide the example that a group might organize a mixed list of ideas into a number of categories, or arrange them into a tree. They also emphasize that an organize pattern is typically not the end result of a group task; organization is typically done prior to further divergence or elaboration.

Not surprisingly, categorization, sometimes referred to as classification, is the most common form of organize [e.g., 42], and is often used as a step after brainstorming. Categorization and classification are synonymous and represent the basic cognitive process of arranging into classes or categories. Collaborative software is explicitly used to represent the classes or categories. The quality of a categorization effort can be measured using several items, such as the number of ideas organized, the number of categories created, and the number of repeated items (or duplicates). Chen et al. [43] provided an AI algorithm that tried to classify the most relevant ideas in a brainstorming session; they measured their attempts to classify automatically in terms of concept recall (the percentage of relevant meeting ideas that were properly captured) and concept precision (the percentage of the concepts in the lists that are deemed relevant to the actual meeting topics). Whether more categories are better than fewer categories depends on the task; accordingly, there are two major types of categorization tasks: ones, such as brainstorming, where the number of categories is not predetermined, and others, such as software evaluation tasks, where the number of categories is predetermined. Regardless, duplicates are counterproductive in virtually every group task.

Search for items that meet specific criteria is another form of organization. For example, research in software evaluation has focused primarily on software inspections [44, 45] and heuristic evaluation [46, 47]. In both tasks, users are trying to find defects or violations of predetermined rules of how code should be written (software inspections) or violations of interface usability heuristics (heuristic evaluation). The key difference with these types of tasks is that the categorization of potential violations is fixed and known in advance, as it is determined by predetermined rules and heuristics. Given this, these tasks involve search and the precise determination of whether a violation is correctly reported (correct reports) or incorrectly reported (false positives).

The third organize subpattern that is common throughout the literature involves creating outlines with the purpose of creating an overall logical flow between the elements, not simple categorization. In other words, the order of the categories and subcategories matters greatly in an overall thought-organization process. This subpattern is most commonly seen in the outlining involved in collaborative writing [e.g., 48-52]. Collaborative writing is a complex process that involves multiple collaboration engineering patterns [49]. However, a key part of this task that has been included in collaborative writing research involves outlining the group's paper. In this type of organize task, the concern is not the correct categorization of items, but the logical flow and organization of related content.
Thus assessment of document quality is the predominant measure when dealing with collaborative writing; other measures include the length of the document and the time spent outlining.

The fourth organize subpattern pertains to developing sequence relationships among concepts; in other words, the sequencing of information and tasks (which is more common). This pattern of collaboration is generally found in workflow management [53], collaborative scheduling [54], collaborative project planning [55], and collaborative process planning [56]. Outcomes are often focused not on the sequencing task itself but on its effect on planning, such as flexibility, number of scheduling conflicts, number of conflicts resolved, number of disturbances, responsiveness to disturbances, and schedule performance.

A fifth organize subpattern covers the causal relations between concepts. Causal modeling in a collaborative setting is researched in the Group Model Building literature [57] and in soft systems methodology (strategic options development and analysis) [58, 59]. However, both streams mostly look at the effects of the entire approach to problem solving, and an interpretive research approach is used in most of these studies. Limited information is therefore available on the specific outcomes and metrics used to evaluate the collaborative causal modeling activity. Besides the metrics for collaborative modeling outcomes in general, Rouwette [57] reports the size of the resulting model and its dynamic complexity (number of feedback loops).

The sixth and last organize subpattern is found in the literature on collaborative modeling. Collaborative modeling and Group Model Building are researched in different GSS research communities and in System Dynamics research [60, 61]. Collaborative modeling is, like clustering, used to create shared understanding and to reduce complexity; however, since a modeling language is used, one of the quality dimensions of collaborative modeling is the consistency of the resulting model with modeling conventions or rules [60]. Completeness and validity are other quality indicators [60, 61]. Further outcome measures are individual user comprehension of the models and the resulting efficacy of shared mental models (e.g., whether every group member has the same comprehension). These are very important because the purpose of modeling is to convey and facilitate meaning with multiple stakeholders, many of whom are not technical or capable of creating the models but need to understand them, such as business process managers, executives, and customers [e.g., 62, 63, 64]. However, in collaborative modeling, strong individual user comprehension is not sufficient; the degree to which shared cognition or shared mental models accurately occur is also a critical measure to evaluate, that is, the degree to which the stakeholders in the modeling effort are in alignment in their shared understanding of the model [65]. Better task performance results when individual group members' mental models are aligned with task requirements into a cohesive, shared mental model [65]. Table 6 summarizes the subpatterns and measurement constructs for the organize pattern.

Table 6. Subpatterns and Measurement Constructs for Organize
- Categorizing: number of ideas organized; number of categories created; number of duplicates; number of correctly categorized items; number of relevant items; correct reports; false positives
- Outlining: externally judged document quality; participant-perceived document quality; length of document; time spent outlining
- Sequencing: flexibility; # scheduling conflicts; # conflicts resolved; # disturbances; responsiveness to disturbances; schedule performance
- Building causal decomposition: model size; dynamic complexity
- Modeling: arbitrariness; clarity; complexity; consistency; completeness/parsimony; validity; user comprehension; modeling self-efficacy; shared mental models
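For the categorizing subpattern, the concept recall and concept precision measures used by Chen et al. [43] reduce to simple set ratios; the sketch below works one toy example. The concept lists are illustrative, not data from their study.

```python
# Worked example of concept recall and concept precision for a
# categorization/classification effort.

relevant_concepts = {"budget", "staffing", "deadline", "scope", "quality"}
captured_concepts = {"budget", "deadline", "scope", "vendor"}

# Recall: share of relevant meeting concepts that were properly captured.
recall = len(captured_concepts & relevant_concepts) / len(relevant_concepts)
# Precision: share of captured concepts that are relevant to the meeting topics.
precision = len(captured_concepts & relevant_concepts) / len(captured_concepts)

print(f"concept recall = {recall:.2f}")      # 3/5 = 0.60
print(f"concept precision = {precision:.2f}")  # 3/4 = 0.75
```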

Evaluate Pattern of Collaboration

The evaluate pattern involves movement from less to more understanding of the relative value of the concepts under consideration. The two purposes for evaluation are to support decision making and to support group communication [66]. Evaluation can be done through a voting or rating mechanism as well as through evaluative dialog. Voting and rating are the key approaches to non-dialog evaluation. Both voting and rating include two key steps: collecting individual preferences and aggregating these into some form of group preference [67, 68]. The terms voting and rating overlap considerably because of the variety of voting and rating approaches. For example, group members may vote and be able to select one or more solutions from a larger set. Or group members may rate a set of items on one or more dimensions, so that the highest-rated items move to the top. Depending on the situation, the ultimate decision can be made by the group, or the preferences of the group can be used to inform a decision maker. Evaluation is commonly used to support communication as a means of surfacing preferences, assumptions, agreements, and disagreements. This allows groups to explore the reasons for the agreements and disagreements while working towards some ultimate decision.

Many aggregation methods are found in the literature [e.g., 67-70]. The various methods fall into one of the following three categories: (1) majority rule, (2) consensus rule, or (3) a selection based on expertise. Outcomes of voting are also varied. An outcome can be in the form of a preference indication, a decision, or a measure of some property (e.g., the feasibility of a solution) [69, 70]. In the case of rational choice, correctness and the appropriate representation of participants' expertise are important metrics. When consensus is a group's objective, fairness, acceptance, and lack of post-decision regret are important [70]. Consensus is often a metric of divergence of preference, as will be discussed. In both rational choice and consensus decisions, participant satisfaction and the robustness of the result with respect to meeting some objective are important. Other important indicators of evaluate are the complexity of the voting task [69] and the complexity of the aggregation method [71]. Complexity of the voting task drives the effort required of participants; too many items or too many dimensions can make voting tedious [69]. Moreover, the ability to combine votes in ways that are understood and meaningful for group members is important [72].

Besides voting, qualitative evaluation can be performed by having group members textually comment on the desirability of different options. In effect, group members state the pros and cons of different potential solutions. As opposed to generate, which may be focused on generating solutions, the focus in this pattern is to evaluate solutions that are already produced. The generate pattern in this case has a more evaluative or reflective character that can help a group come to a more common view of the relative desirability of different options. This pattern is different from the gathering pattern in the sense that the gathering pattern is used for knowledge construction or sharing, while qualitative evaluation is focused on reflection [48] or illumination and verification [73]. Qualitative evaluation is not about eliciting preference directly, but rather about explaining specific qualities of the concepts under consideration as a basis for (mostly) rational choice, such as evaluating the feasibility of solutions. When people review or offer feedback, a criterion often considered is that such feedback is constructive, thorough, and knowledgeable [74]. Completeness and clarity are also important indicators. Table 7 summarizes the subpatterns and measurement constructs for the evaluate pattern.

Table 7. Subpatterns and Measurement Constructs for Evaluate
- Choice (social choice / consensus): fairness; satisfaction; acceptance of method; lack of post-decision regret; robustness of strategy; robustness of weight factors; complexity of aggregation; effort/complexity of voting
- Choice (rational choice): correctness; expertise; robustness of weight factors; complexity of aggregation; effort/complexity of voting
- Communication of preference: mutual understanding; complete representation of preferences in results; insight into disagreement/conflict
- Qualitative evaluation: constructiveness; thoroughness or completeness; accuracy; clarity
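The two voting/rating steps described above (collect individual preferences, then aggregate them into a group preference) can be illustrated with a short sketch. Mean aggregation with a standard-deviation column to surface disagreement is one common GSS-style display; the function name and the ballots are our assumptions, not a method prescribed by the paper.

```python
# A minimal sketch of collecting ratings and aggregating them into a ranking.
from statistics import mean, stdev

def aggregate_ratings(ballots: dict[str, dict[str, int]]) -> list[tuple]:
    """ballots: participant -> {item: rating}. Returns items ranked by mean."""
    items = next(iter(ballots.values())).keys()
    rows = []
    for item in items:
        scores = [b[item] for b in ballots.values()]
        # Mean drives the ranking; stdev surfaces disagreement for discussion.
        rows.append((item, mean(scores), stdev(scores)))
    return sorted(rows, key=lambda r: r[1], reverse=True)

ballots = {
    "p1": {"option A": 5, "option B": 2, "option C": 4},
    "p2": {"option A": 4, "option B": 3, "option C": 1},
    "p3": {"option A": 5, "option B": 2, "option C": 5},
}
for item, m, sd in aggregate_ratings(ballots):
    print(f"{item}: mean={m:.2f} sd={sd:.2f}")
# option A ranks first with low disagreement; option C's high sd flags
# a divergence of preference worth discussing before deciding.
```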

Consensus Building Pattern of Collaboration

Consensus is usually defined as an agreement, acceptance, lack of disagreement, or some other indication that stakeholders commit to a proposal [66]. Methods to achieve consensus work either through the aggregation of preferences, as in social choice, or through resolving different aspects of disagreement or conflict through negotiation, the creation of shared understanding, or the inclusion or exclusion of additional factors/stakes. Metrics for disagreement or divergence of preference are numerous and range from the standard deviation of voting results [75] to fuzzy sets [76] and Saaty's consistency ratio [77]. Another approach to measuring consensus is to measure the commitment or buy-in of the different stakeholders [78]. Table 8 summarizes the subpatterns and measurement constructs for the consensus building pattern.

Table 8. Subpatterns and Measurement Constructs for Consensus Building
- Choice: see evaluate (choice)
- Building agreement (visualizing divergence of preference): standard deviation; Kendall's coefficient; fuzzy consensus; Saaty's consistency ratio
- Building commitment: perception of agreement/consensus/commitment; number of participants that commit; % of participants that commit; satisfaction of participants
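As a concrete instance of the divergence-of-preference metrics in Table 8, the sketch below computes Kendall's coefficient of concordance W from each participant's ranking of the options: W = 1 indicates perfect agreement, W = 0 no agreement. The rankings are toy data, and tied ranks are not handled in this sketch.

```python
# Kendall's W: agreement among m participants ranking n items (1 = best).

def kendalls_w(rankings: list[list[int]]) -> float:
    m, n = len(rankings), len(rankings[0])          # m judges, n items
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2                      # expected rank sum per item
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Three participants rank four proposals (1 = most preferred).
rankings = [
    [1, 2, 3, 4],
    [1, 3, 2, 4],
    [2, 1, 3, 4],
]
print(f"Kendall's W = {kendalls_w(rankings):.2f}")  # 0.78: substantial agreement
```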

5. Patterns of collaboration: effects and causes


Briggs et al. [1] provided a first decomposition of the patterns of collaboration into subpatterns. Based on the current research we have produced a more refined set of subpatterns, as summarized in Table A1 (Appendix 1). Table A1 summarizes the subpatterns we identified and defines these patterns in terms of the effect created in the collaborative pattern of the group effort, in line with the general pattern they belong to. Note that these definitions often do not match the definition of the concept indicated by the pattern label; they only define the pattern of collaboration described by that label. Based on the patterns created in the group and the outcomes evoked, we show the key facilitation interventions required to create these patterns. We define these interventions as rules, which offer the basis for the thinkLets [1, 79, 80] used in Collaboration Engineering. Last, the table lists the metrics for the key outcomes that should be achieved. With this overview, the table offers collaboration engineers a detailed view of the effects of the interventions they make, both on the outcome and on the pattern evoked. Furthermore, each pattern is now described as a cause and an effect, which offers a first basis for theory building in the tradition of logical positivism.

6. Discussion
The six fundamental collaborative patterns described by Briggs et al. [1], namely generate, reduce, clarify, organize, evaluate, and build consensus, make up most if not all collaborative work. To design collaboration processes with predictable outcomes, designers must know how the implementation of collaborative patterns will produce specific outcomes [79, 81]. While these six collaboration patterns are well known in Collaboration Engineering, more research needs to be done to better understand the subpatterns involved in these patterns and their primary constructs and measures. Without constructs and measures for these subpatterns, it is virtually impossible for logical positivists to conduct empirical theoretical work that meaningfully explains and predicts outcomes for different collaboration patterns and contexts. This paper addressed this gap in the literature.

The first major contribution of this paper is that we offer a framework of the subpatterns that are common instantiations of each of the fundamental collaborative patterns. We presented a set of interventions that are likely to produce the patterns of collaboration and the outcomes intended in these patterns. Our second major contribution is that we describe the constructs that are measured in the literature during the study of these subpatterns. Together, these contributions result in a framework for future research that includes the patterns, subpatterns, and constructs that can be studied and measured by empirical logical positivists. This framework will help researchers to create more specific effects in collaborative settings and to better understand the variations in those effects and the interventions causing them. A better understanding of the choices among implementations, and of which measures will best describe the outcomes of these implementations, will also help collaboration engineers to design more predictable collaboration processes.

From Table A1 we can draw some interesting insights. Some interventions require only an instruction to the group, while others require managing discussion and gaining commitment with respect to a group outcome, rather than simply visualizing or combining the results of the individual contributions of the group members. The former is particularly the case in the generate pattern: while some generation patterns support participants in inspiring each other or in helping each other strive for completeness or quality, they do not require the group to commit to a group result upon completion of the pattern. Similarly, the clarify and organize patterns are used to gain more understanding of the concepts under discussion and to gain insight into their relations. However, the focus of these patterns is increased understanding among the group members as opposed to a group outcome. Only in sense making can shared, negotiated meaning result as a group outcome. In the reduce, evaluate, and consensus building patterns, choices are made and group members need to commit to the outcome of the pattern in order to gain a successful result. In these patterns the facilitation interventions are also less specific. Instead of a clear creativity instruction (generate pattern) such as "let the participants add ideas", these interventions often prescribe discussion based on some proposal that involves a choice. Such choices can be about the completeness of a summary, or they can require consensus on a decision. However, when one or more group members reject the proposal, the outcome of the intervention is negative, or the activity is not finished. Naturally, these patterns are therefore the most demanding for a facilitator, as was also concluded by den Hengst et al. [82].

7. Conclusion and further research


In this paper, we presented a framework that describes the facilitation interventions that cause collaborative outcomes and the patterns that describe the group process that emerges as an effect of these interventions. This framework can be used by collaboration engineers to design predictable interventions in group processes but, more importantly, it can be used by researchers to drill down one level and study collaboration at the elementary level of group interaction and single activities. At this level of analysis we can evaluate different methods, styles, and approaches to intervention, enabling us to resolve some of the paradoxes in collaboration and GSS research [83]. GSS tools offer means to support groups in their collaborative activities. Many discussions have focused on the appropriate use [20, 84] of GSS and its effect on group effort [11, 12]. Our framework does not focus on any technology. In our opinion, tools should be instrumental to the processes they support and should not determine the choice of methods to execute a task [79]. For some of the processes presented here, tool support is limited and could be further developed. Others can be supported by simple functionalities that are offered in a variety of tools. We hope that GSS developers use this overview of collaborative patterns and further expand their tools to include functionalities that support each of the patterns discussed. We further hope that this framework will serve as the basis for a new area of collaboration research in which researchers focus on collaborative activities at a deeper level, to gain understanding of the distinct collaborative activities that groups perform to achieve their goals and of the specific interventions required to evoke a more precise and predictable effect in collaborative effort.

Acknowledgements
We appreciate general funding provided by the Information Systems Department and Kevin and Debra Rollins Center for e-Business, both at the Marriott School, Brigham Young University. We appreciate research assistance provided by James Gaskin and Aaron Bailey.

References
[1] R. O. Briggs, G. L. Kolfschoten, G. J. d. Vreede, and D. L. Dean, "Defining Key Concepts for Collaboration Engineering," in Americas Conference on Information Systems , Acapulco, Mexico, 2006, pp. 121-128. R. Dubin, Theory Building. New York, New York, USA: Free Press, 1978. A. E. Blumberg and H. Feigl, "Logical Positivism," Journal of Philosophy, vol. 28, pp. 281-296, 1931. D. L. Dean, J. M. Hender, T. L. Rodgers, and E. Santanen, "Identifying quality, novel, and creative ideas: constructs and scales for idea evaluation," Journal of the Association for Information Systems, vol. 7, pp. 646-698, 2006. E. L. Santanen, G. J. d. Vreede, and R. O. Briggs, "Causal Relationships in Creative Problem Solving: Comparing Facilitation Interventions for Ideation," Journal Of Management Information Systems, vol. 20, pp. 167-197, 2004. J. M. Hender, D. L. Dean, T. L. Rodgers, and J. F. Nunamaker, Jr.,, "An Examination of the Impact of Stimuli Type and GSS Structure on Creativity: Brainstorming Versus Nonbrainstorming Techniques in a GSS Environment," Journal of Management Information Systems, vol. 18, pp. 59-85, 2002. A. R. Dennis and J. S. Valacich, "Group, Sub-Group, and Nominal Group Idea Generation: New Rules for a New Media?," Journal of Management, vol. 20, pp. 723736, 1994. A. Pinsonneault, H. Barki, R. B. Gallupe, and N. Hoppen, "Electronic Brainstorming: The Illusion of Productivity," Information Systems Research, vol. 10, pp. 110-133, June 1999. M. M. Shepherd, R. O. Briggs, B. A. Reinig, J. Yen, and J. Jay F. Nunamaker, "Invoking social comparison to improve electronic brainstorming: beyond anonymity," Journal of Management Information Systems, vol. 12, pp. 155-170, 1995-96. R. O. Briggs, G. J. d. Vreede, and J. F. J. Nunamaker, "Collaboration Engineering with ThinkLets to Pursue Sustained Success with Group Support Systems," Journal of Management Information Systems, vol. 19, pp. 31-63, 2003. J. Fjermestad and S. R. Hiltz, "An Assessment of Group Support Systems Experimental Research: Methodology and Results," Journal of Management Information Systems, vol. 15, pp. 7-149, 1999. J. Fjermestad and S. R. Hiltz, "A Descriptive Evaluation of Group Support Systems Case and Field Studies," Journal of Management Information Systems, vol. 17, pp. 115-159, 2001. P. L. McLeod, "An Assessment of the Experimental Literature on Electronic Support of Group Work: Results of a Meta-Analysis," Human-Computer Interaction, vol. 7, pp. 257280, 1992. B. B. Baltes, M. W. Dickson, M. P. Sherman, C. C. Bauer, and J. S. LaGanke, "Computer-mediated communication and group decision making: A meta-analysis," Organizational Behavior and Human Decision Processes, vol. 87, pp. 156179, 2002. M. I. Hwang, "Did Task Type Matter in the Use of Decision Room GSS? A Critical Review and a Meta-analysis," International Journal of Management Science, vol. 26, pp. 1-15, 1998. C. K. Tyran and M. Sheperherd, "GSS and Learning Research: A Review and Assessment of the Early Studies," in Hawaii International Conference on System Science, Hawaii, USA, 1998, pp. 1-10.

[2] [3] [4]

[5]

[6]

[7]

[8]

[9]

[10]

[11]

[12]

[13]

[14]

[15]

[16]

[17]

[18]

[19]

[20] [21] [22] [23]

[24]

[25]

[26]

Measures for Groupthink and Disciplinary Ethnocentrism in Interdisciplinary Collaboration


John D. Murphy, Justin M. Yurkovich, Alanah J. Davis, Robert O. Briggs
Institute for Collaboration Science, University of Nebraska at Omaha
[jmurphy, jyurkovich, alanahdavis, rbriggs]@mail.unomaha.edu

Abstract
This paper summarizes our presentation at the HICSS 2008 workshop where we discussed the development of an instrument designed to measure perceptions of groupthink and disciplinary ethnocentrism in interdisciplinary collaborative groups.

1. Introduction
Collaboration Engineering (CE) has been shown to improve group results, and interdisciplinary groups increasingly seek out CE services. Previous work has identified groupthink and disciplinary ethnocentrism as two specific issues that should be considered when developing a CE solution for such groups [1]. The same study proposed a session design; however, the question remains how effective that design was at addressing groupthink and disciplinary ethnocentrism. This paper presents the development of an instrument designed to measure participant perceptions of groupthink and disciplinary ethnocentrism in interdisciplinary collaborative groups, for future studies to use in evaluating CE session designs for interdisciplinary collaboration. The following sections briefly present the background and the development of specific measures to help evaluate how effective CE session designs are at addressing groupthink and disciplinary ethnocentrism. The paper concludes with a short discussion of future work to apply and validate these measures.

2. Background
This section presents the theoretical background on the two phenomena of interest: groupthink and disciplinary ethnocentrism.

2.1. Groupthink
The term groupthink was first coined by Janis [2]. Groupthink describes the tendency for a group to avoid negatively-perceived social consequences when evaluating contributions. For example, someone may choose to not question or criticize a possible solution for fear of being perceived as not being a team player. Groups that experience groupthink will seek to maintain unanimity and consensus regardless of possible errors in direction or effort [3]. Classic examples of this phenomenon are the decisions made surrounding the Bay of Pigs Invasion in 1961 and the NASA Space Shuttle Challenger explosion in 1986. In both cases, incorrect actions were not challenged or questioned due to a desire to maintain consensus within a group [3, 4]. The manifestation of groupthink is rooted in an inadequate effort to reasonably appraise alternate courses of action [5]. Consequently, if groups are to prevent groupthink, alternative suggestions must be judged objectively and sufficient time must be spent on the evaluation process so that potential flaws or drawbacks are not overlooked.

2.2. Disciplinary Ethnocentrism


Campbell [6] develops the notion of disciplinary ethnocentrism by drawing parallels to the phenomenon that occurs when nationalistic or tribalistic tendencies cause one group to shun members or ideas from another group. Building upon this concept, he described disciplinary ethnocentrism as a tendency of disciplines to look within themselves for solutions rather than including people from other disciplines. Disciplinary ethnocentrism is characterized by a redundant buildup of highly similar specialized knowledge by individuals within departments or fields. Because of this buildup, the gaps between the various discipline clusters tend to be exaggerated, and any overlap that may occur between clusters goes unnoticed. Conversely, using collective communication and competence to more fully address a given issue integrates valuable multi-disciplinary perspectives.

3. Instrument Development Process


The general approach to developing measures for these phenomena of interest was to examine extant research in order to identify antecedent conditions that already have established measures. This approach allowed us to inherit the validity and reliability of those existing measures. When appropriate pre-existing measures could not be found, we developed measures by building items that focused on specific aspects of the definition of the antecedent construct.

3.1. Construct Identification


Janis [2, 3] originally identified group cohesion, structural faults, and the decision-making context as the antecedents of groupthink, with group cohesion proposed as the primary factor. Subsequent empirical studies yielded conflicting results in which some of those antecedents were only loosely supported and new, more precise instantiations of the constructs emerged [4, 7-9]. The more precise statements of antecedent constructs generally emerged from Janis' structural faults construct; the specific constructs identified were directive leadership, stress/time pressure, and insulation. Later conceptual and empirical work identified collective efficacy as another potential antecedent of groupthink, one which possibly subsumes some of the other antecedent constructs [10, 11]. Due to the muddled results of this previous work, we chose to develop measures for all of these antecedent constructs (as shown in Figure 1) and will determine which apply best in later empirical work.

Figure 1: Groupthink Antecedents

Disciplinary ethnocentrism is a comparatively new construct, so we were not able to identify any existing antecedent constructs for it. As such, we chose to develop items which we propose will measure it directly.

3.2. Pre-existing Measures

Group cohesion has been rather extensively studied over the years. Unfortunately, none of the previously-cited empirical studies provided their measures of this construct. However, Bollen and Hoyle [12] developed specific measures for a large social study, which Chin et al. [13] adapted to the small-group context and Salisbury et al. [14] re-validated in the virtual team context. Those measures are shown in Table 1.

I feel that I belong to this group
I am happy to be part of this group
I see myself as part of this group
This group is one of the best anywhere
I feel that I am a member of this group
I am content to be part of this group

Table 1: Group Cohesion Measures

Directive leadership has also been extensively studied over the years. Ohio State University developed one of the most extensive collections of measures for a wide range of leader behaviors [15]. We extracted measures from its domination sub-category as the closest match to the directive leadership construct. Table 2 shows the measures for directive leadership.

He/she encourages members to express their ideas.
He/she has members share in decision making.
He/she gets group approval on important matters before going ahead.
He/she invites criticism of his/her ideas.

Table 2: Directive Leadership Measures

3.3. Developed Measures


The research team used an electronic team meeting format to generate items for each construct that needed new measures. The general process was to jointly review the construct definition and then have each individual brainstorm as many potential measures for that construct as possible. The team then jointly discussed each of the proposed measures, comparing it back to the construct definition and evaluating it for consistency and faithfulness with the full body of literature relevant to that construct. The team then used a multi-vote procedure in which each member identified the four measures he/she considered most appropriate based on the team discussions and his/her understanding of the literature. Each item receiving votes was again discussed and evaluated jointly by the team, with subsequent re-votes as necessary until the team arrived at an agreed set of 4-5 items per construct (a sketch of one voting round appears after Table 3). For the stress/time pressure construct, we derived the following definition from Janis' original work [3] and later work by Moorhead et al. [4]: the degree to which losses are expected no matter which alternative is selected and there is low hope of finding a better solution, possibly exacerbated by a lack of time to properly consider all alternatives. The measures for this construct, which will be reverse coded to match the intent of the other scales, are shown in Table 3.

I do not think the group came up with the best solution.
There is little hope of finding a better solution than what our group concluded with.
Time constraints kept the group from performing to the best of its ability.
Coming to a decision was emphasized more than making the best decision.

Table 3: Stress/Time Pressure Measures
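As an illustration of the multi-vote procedure described above, the following is a minimal Python sketch of a single voting round. The ballots and item texts are hypothetical; the actual team process was conducted interactively, not in code.

```python
from collections import Counter

# Hypothetical ballots: each team member lists the four candidate
# items he/she considers most appropriate for the construct.
ballots = [
    ["item A", "item B", "item C", "item D"],
    ["item A", "item C", "item E", "item F"],
    ["item B", "item C", "item D", "item E"],
]

def multivote_round(ballots):
    """Tally one multi-vote round; every item receiving at least one
    vote is carried forward for joint discussion and a possible re-vote."""
    tally = Counter(item for ballot in ballots for item in ballot)
    return tally.most_common()  # [(item, votes), ...] sorted by votes

for item, votes in multivote_round(ballots):
    print(f"{votes} vote(s): {item}")
```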

Janis [3] described insulation as the degree to which the group is insulated from the judgment of qualified associates until after a final decision has been made. However, specific measures for this construct were not included in Janis' study or any of the later empirical studies. As such, the newly derived measures for insulation are shown in Table 4.

Individuals outside of this group were aware of what was going on throughout the entire process
I believe that others outside this group will judge the end result harshly (r)*
I felt that the group had adequate access to information crucial to our success
The advice of knowledgeable participants was ignored in the group's solution (r)

Table 4: Insulation Measures
* (r) denotes a reverse-coded item

Whyte [10] defined collective efficacy as the degree to which the group holds a collective belief about the group's ability to successfully perform some task. Measures for this construct were not found in previous research. As such, the newly derived measures for collective efficacy are shown in Table 5.

I perceive our group as one of the most capable that I have ever worked with.
Tasks like this are very easy for this group
I was comfortable accepting suggestions from my team members
We accomplished our task smoothly and effectively

Table 5: Collective Efficacy Measures

3.4. Scale Development


To ensure that we can fully inherit the existing validity and reliability of the measures we have adopted from other sources, we will continue to use their original scales. All of the newly-developed items use a 7-point Likert scale. This range provides adequate sensitivity to detect differences in participant perceptions without overwhelming respondents with too many response options.
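To make the scoring concrete, here is a minimal sketch of how reverse-coded items on a 7-point scale are typically folded into a construct score. The item keys and responses are hypothetical and not taken from the paper.

```python
# Hypothetical responses to the four insulation items on a 7-point
# Likert scale (1 = strongly disagree ... 7 = strongly agree).
responses = {"ins1": 5, "ins2": 6, "ins3": 4, "ins4": 2}
reverse_coded = {"ins2", "ins4"}  # items marked (r) in Table 4

def construct_score(responses, reverse_coded, scale_max=7):
    """Average the items, flipping reverse-coded ones so a high score
    consistently means more of the construct (8 - x on a 7-point scale)."""
    flip = lambda x: (scale_max + 1) - x
    scores = [flip(v) if k in reverse_coded else v
              for k, v in responses.items()]
    return sum(scores) / len(scores)

print(construct_score(responses, reverse_coded))  # 4.25 for this example
```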

4. Conclusion and Future Research


This paper presented the development of an instrument designed to measure perceptions of groupthink and disciplinary ethnocentrism in interdisciplinary collaborative groups. The work presented here is intended as an incremental step toward the application of these scales in sessions designed and conducted for interdisciplinary teams. The immediate goal is to gather enough data to establish the reliability and validity of these measures. Through that application, the expectation is that some of the antecedents will be statistically validated and others will not. At that point, the theoretical foundations for these measures will need to be reexamined and adjusted to better explain why some antecedents hold and others do not.

5. References
[1] J. D. Murphy and J. Yurkovich, "Collaboration engineering for interdisciplinary teams," in Thirteenth Americas Conference on Information Systems, Keystone, Colorado, USA: Association of Information Systems eLibrary, 2007.
[2] I. L. Janis, "Groupthink," Psychology Today, pp. 43-46, 74-76, November 1971.
[3] I. L. Janis, Groupthink: Psychological Studies of Policy Decisions and Fiascoes, 2nd ed. Boston: Houghton Mifflin, 1982.
[4] G. Moorhead, R. Ference, and C. P. Neck, "Group decision fiascoes continue: Space shuttle Challenger and a revised framework," Human Relations, vol. 44, pp. 539-550, 1991.
[5] B. Mullen, T. Anthony, E. Salas, and J. E. Driskell, "Group cohesiveness and quality of decision making: An integration of tests of the groupthink hypothesis," Small Group Research, vol. 25, pp. 189-204, 1994.
[6] D. T. Campbell, "Ethnocentrism of disciplines and the fish-scale model of omniscience," in Interdisciplinary Collaboration: An Emerging Cognitive Science, S. J. Derry, C. D. Schunn, and M. A. Gernsbacher, Eds. Mahwah: Lawrence Erlbaum, 2005, pp. 3-23.
[7] C. P. Neck and G. Moorhead, "Groupthink remodeled: The importance of leadership, time pressure, and methodical decision-making procedures," Human Relations, vol. 48, pp. 537-557, May 1995.
[8] M. L. Flowers, "A laboratory test of some implications of Janis's groupthink hypothesis," Journal of Personality and Social Psychology, vol. 35, pp. 888-896, Dec 1977.
[9] G. Moorhead and J. R. Montanari, "An empirical investigation of the groupthink phenomenon," Human Relations, vol. 39, pp. 399-410, May 1986.
[10] G. Whyte, "Recasting Janis's groupthink model: The key role of collective efficacy in decision fiascoes," Organizational Behavior and Human Decision Processes, vol. 73, pp. 185-209, Feb/Mar 1998.
[11] C. P. Koerber and C. P. Neck, "Groupthink and sports: An application of Whyte's model," International Journal of Contemporary Hospitality Management, vol. 15, pp. 20-28, 2003.
[12] K. A. Bollen and R. H. Hoyle, "Perceived cohesion: A conceptual and empirical examination," Social Forces, vol. 69, pp. 479-504, Dec 1990.
[13] W. W. Chin, W. D. Salisbury, A. W. Pearson, and M. J. Stollak, "Perceived cohesion in small groups: Adapting and testing the perceived cohesion scale in a small-group setting," Small Group Research, vol. 30, pp. 751-766, Dec 1999.
[14] W. D. Salisbury, T. Carte, and L. Chidambaram, "Cohesion in virtual teams: Validating the perceived cohesion scale in a distributed setting," Database for Advances in Information Systems, vol. 37, pp. 147-155, Spring 2006.
[15] J. K. Hemphill and A. E. Winer, "Development of the leadership behavior development questionnaire," in Leader Behavior: Its Description and Measurement, vol. 9, R. M. Stogdill and A. E. Coons, Eds. Columbus, OH: Bureau of Business Research, 1957, p. 168.

Koncero Open Source Collaboration Project


Joel H. Helquist
Utah Valley State College
joel.helquist@uvsc.edu

John Kruse
The MITRE Corporation
john@kruser.org

Mark Adkins
AK Collaborations
mark@akcollaborations.com

Abstract

Group support systems (GSS) can increase group productivity by effectively harnessing the knowledge, abilities, and skills of group members. Current collaboration tools force a tradeoff between the complexity of the collaborative engagement and the number of active participants. Tools that can handle large numbers of active users primarily support simple collaboration (e.g., chat), while those that support complex interactions (e.g., whiteboard) are quite limited in the number of users that can actively participate in parallel. This paper proposes a new collaboration approach that can accommodate larger groups and improve flexibility and agility. This effort leverages the proven Group Support Systems pattern while addressing the problems that have kept such systems from moving into the mainstream. A new collaborative tool, Koncero, is presented.

1. Introduction

Group support systems (GSS) can increase group productivity by effectively harnessing the knowledge, abilities, and skills of group members [1-2]. According to Nunamaker et al. [3], GSS increase productivity through such things as increased information, objective evaluation, and synergy. However, while current collaborative tools are good in certain contexts, they are not readily able to address complex collaborations in a variety of contexts. Current GSS tools present several bottlenecks that limit their effectiveness and widespread dispersion within organizations. By removing the process bottlenecks that afflict current collaborative systems, we can investigate new ways to build collaborative processes. This paper presents some of the current GSS bottlenecks, followed by a new approach to GSS-style collaboration. The final section introduces a new GSS tool, Koncero, which facilitates the new collaborative framework.

2. Collaboration bottlenecks

Current collaborative GSS processes include some inherent bottlenecks that potentially limit the effectiveness of collaborative groups. Likewise, these bottlenecks limit the number of different contexts within which a group can successfully use GSS tools and methodologies. The facilitator often presents one of the largest bottlenecks to GSS collaboration efforts. Previous research has illustrated the various problems presented by reliance on an expert facilitator [4]. An expert facilitator is often a prohibitive expense that many organizations cannot justify. Without a dedicated facilitator, the GSS tools and collaborative approaches either fall into disuse or are used in an inefficient or unproductive manner. Moreover, the expert facilitator possesses a limited set of cognitive abilities and limited cognitive bandwidth. As the size of a group grows, the facilitator becomes increasingly taxed in guiding the group through the collaborative workflows. The quantity of participants, and the volume of information generated and dispersed, can lead to cognitive and information overload for the facilitator.

Second, some collaborative activities are executed in a serial workflow. During a parallel workflow, the users are able to work autonomously and independently at a workstation. One example is generating brainstorming ideas: each participant is able to brainstorm in parallel without having to take turns or wait for other participants. In a serial workflow, only one participant can contribute at a time. The result is that the rest of the collaborative group is forced to take turns, increasing the risk of participants becoming completely disengaged and disinterested. One example of a serial activity is the clarification and synthesis of brainstorming ideas. Typically, these activities are conducted through an expert facilitator. The participants work through the facilitator to perform these activities, limiting the ability of each participant to work in parallel.

These bottlenecks are exacerbated when the collaborative context is changed. Factors such as synchronicity, proximity, flexibility, and scalability can further complicate the collaborative process and amplify the bottlenecks. The end result is a decline in the value of the collaborative tool as the number of active participants increases (see Figure 1).

Figure 1: Collaborative value

3. Current collaborative environment

Current collaborative tools accommodate varying levels of complexity. Simple collaboration is often executed through simple, generalized collaborative software. These general tools include such things as chat, whiteboards, email, and application sharing. This set of collaborative tools supports collaboration among relatively small groups of users. These applications are not necessarily equipped to handle more complex tasks or larger groups of participants. Complex collaborative efforts, on the other end of the spectrum, often utilize specialized software that has been developed specifically for structured and well-understood decisions. The problem with this class of collaborative software is that the tools are somewhat limited in their ability to be agile or flexible. One example of this class is GSS tools, which provide excellent support for specific collaborative efforts. These GSS tools are generally powerful but include a high level of overhead and complexity. Additionally, the tools may not be as flexible or dynamic as collaborative efforts may require.

4. Modified GSS workflow

Participant-driven Group Support Systems (PD-GSS) provide a framework to enable successful and highly dynamic collaboration between distributed, asynchronous groups. PD-GSS seeks to leverage the skills and abilities of each group member to reduce the burden of, or dependence on, the facilitator. The name participant-driven refers to the notion that the participants collectively perform more of the evaluative, subjective, and process control tasks autonomously and in parallel [5-7]. The premise behind PD-GSS is that the collaborative workflow can be deconstructed into discrete modules that can be worked on iteratively by the participants. Each participant is able to contribute effort to the group; the sum of the individual efforts moves the group toward the end collaborative objective. The core idea is that by aggregating the small judgments and efforts of the participants, a system can mimic the efforts of a human facilitator and overcome the collaborative bottlenecks that may suppress innovation and effectiveness. The end goal of this modified workflow is a collaborative workflow that is more flexible and agile than previous collaborative efforts. However, the current set of GSS tools does not provide the functionality and flexibility necessary to implement this more dynamic approach.

5. Koncero

A solid foundation for a new GSS tool has been developed by Professor Conan Albrecht at Brigham Young University. Koncero is an open-source (GPLv2) collaborative architecture that enables rapid development of GSS functionalities. Koncero provides many advantages over existing GSS tools. First, Koncero is a lightweight, browser-based application. Koncero is written in Python and DHTML. No special plugins or extra software are required to participate in a collaborative session. The web-centric approach reduces the technical requirements to participate and reduces the complexity associated with setting up, upgrading, and supporting a GSS session. Another related feature is reliability. The simplicity of the Koncero architecture allows it to avoid the complexity that might make it more error prone. As it stands, Koncero has run with over fifty participants in sessions that lasted several days. The system tends to run with few problems for weeks at a time and has been used as the foundation for a project tracker and a collaborative wargame. The saving of data is fast and quite stable. The primary design goal of Koncero is ease of use, both in terms of system administration and collaboration. Koncero has a small footprint, and it can be installed and administered with little training. From a developer's standpoint, Koncero utilizes a model-view-controller (MVC) architecture to expedite modular development of new GSS functionalities and modification of existing ones. Additionally, all of the data from collaborative sessions is stored in a highly flexible tree data structure. Each meeting is represented by a node with child nodes for each collaborative activity, comments, etc. This node hierarchy is extremely flexible and allows for the storage of any kind of data the group requires (e.g., text and graphics); see Figure 2.

Figure 2: Koncero data structure
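As a rough illustration of this node hierarchy, the following Python sketch shows how such a meeting tree might be modeled. The class and field names are hypothetical and do not reflect Koncero's actual internal API.

```python
class Node:
    """One node in the meeting tree: a meeting, an activity, a
    contribution, or a comment. The payload is arbitrary data
    (text, graphics, ...), mirroring the flexible storage described above."""
    def __init__(self, kind, payload=None):
        self.kind = kind
        self.payload = payload
        self.children = []

    def add(self, kind, payload=None):
        child = Node(kind, payload)
        self.children.append(child)
        return child

# A meeting with one brainstorming activity and a comment on an idea.
meeting = Node("meeting", "Q1 planning")
brainstorm = meeting.add("activity", "brainstorm")
idea = brainstorm.add("contribution", "Reduce facilitator dependence")
idea.add("comment", "Relates to PD-GSS goals")
```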

6. Conclusion

Koncero provides a GSS platform that is easily adaptable to various collaborative activities. Koncero is lightweight (less than 10MB installed), extensible (the Commenter activity is approximately 600 lines of code), and responsive (low bandwidth and browser based). Koncero is open source and free, providing an avenue for practitioners and academics to freely examine and experiment with alternative GSS methodologies. Koncero can accommodate the more traditional, facilitated GSS session in addition to other approaches, including thinkLets and PD-GSS.

7. References

[1] M. Adkins, M. Burgoon, and J. F. Nunamaker, Jr., "Using group support systems for strategic planning with the United States Air Force," Decision Support Systems, vol. 34, pp. 315-337, 2003.
[2] J. F. Nunamaker, Jr., A. R. Dennis, J. S. Valacich, D. R. Vogel, and J. F. George, "Electronic meeting systems to support group work," Communications of the ACM, vol. 34, no. 7, pp. 40-61, 1991.
[3] J. F. Nunamaker, Jr., R. O. Briggs, and D. D. Mittleman, "Electronic meeting systems: Ten years of lessons learned," in Groupware: Technology and Application, D. Coleman and R. Khanna, Eds. Upper Saddle River, NJ: Prentice Hall, 1995, pp. 146-299.
[4] R. O. Briggs, G. J. de Vreede, and J. F. Nunamaker, Jr., "Collaboration engineering with thinkLets to pursue sustained success with group support systems," Journal of Management Information Systems, vol. 19, pp. 31-64, 2003.
[5] J. H. Helquist, E. L. Santanen, and J. Kruse, "Participant-driven GSS: Quality of brainstorming input and allocation of participant resources," in 40th Annual Hawaii International Conference on System Sciences, Waikoloa, HI, 2007.
[6] J. H. Helquist, J. Kruse, and M. Adkins, "Developing large scale participant-driven group support systems: An approach to facilitating large groups," in Proceedings of the First HICSS Symposium on Field and Case Studies of Collaboration, Los Alamitos, CA, 2006.
[7] J. H. Helquist, J. Kruse, and M. Adkins, "Group support systems for very large groups: A peer review process to filter brainstorming input," in Proceedings of the Americas Conference on Information Systems, Acapulco, Mexico, 2006.

An Exploratory Evaluation of a Convergence thinkLet


Alanah Davis, Gert-Jan de Vreede, John Murphy, Robert Briggs
Institute for Collaboration Science, University of Nebraska at Omaha
{alanahdavis, gdevreede, jmurphy, rbriggs}@mail.unomaha.edu

Abstract


This paper summarizes our research on the design and evaluation of convergence thinkLets. We describe an exploratory study to evaluate a convergence thinkLet, FocusBuilder, according to previous research that presented a convergence thinkLet selection guide. This effort is a first step towards the further exploration and evaluation of convergence thinkLets.

1. Introduction
Convergence is one of the five original patterns of collaboration from Collaboration Engineering (CE) research. In the process of convergence, participants move from having many concepts to a focus on and understanding of a few deemed worthy of further attention [1]. The goal of convergence is to reduce a group's cognitive load in order to: 1) address all relevant concepts, 2) conserve resources, 3) have less to think about, and 4) achieve shared meaning [2]. The process of converging is led by facilitators using thinkLets. ThinkLets are codified best facilitation practices that describe all information required to create a predictable, repeatable pattern of collaboration among people working together toward a joint goal. However, the motivation for this research lies in the fact that, while successful, these convergence thinkLets are not fully understood. Specifically, researchers and facilitators have found that convergence is time consuming, slow, and painful [3, 4]. Therefore, there is benefit in improving the process of converging. The motivation for this research is also based on the fact that convergence is one of the most critical patterns in collaboration processes, as it lays the foundation for shared understanding and the overall advancement of the group's task [5]. Therefore, a complete understanding of convergence is important for three main reasons. First, an understanding is necessary to adequately handle the results of productive generation tasks. Second, an understanding is necessary to better train and support facilitators and practitioners who consider convergence the most challenging pattern of collaboration. Finally, an understanding is necessary to restore some balance in the literature on patterns of collaboration. It has been suggested that there is an abundance of research on the generation (i.e., brainstorming) pattern of collaboration, but not as much on convergence and other patterns [2, 6]. This research seeks to explore the design and evaluation of convergence thinkLets. In the following section we present previous research on convergence, followed by existing convergence thinkLets that have been designed. We then present an exploratory study to evaluate the convergence thinkLet FocusBuilder according to previous research which presented a thinkLet selection guide [2]. This effort is a first step in the direction of further exploration and evaluation of convergence thinkLets.

2. Previous Research
Research on convergence suggests that there are four sub-processes that can be combined to create useful variations [1]. The first sub-process is filtering: selecting a subset from a pool of concepts. The second is generalizing: reducing the number of concepts under consideration through generalization, abstraction, or synthesis. The third is establishing shared meaning: agreeing on the connotation of the labels used to communicate various concepts. The final sub-process found in the research is judging: identifying which existing concepts merit further attention. However, like Davis, de Vreede, and Briggs [2], we remove judging, as the other sub-processes all involve some element of judging.

Research proposes different approaches for converging in collaboration. One approach suggests that face-to-face convergence is necessary because verbal communication is the key to developing a shared understanding (i.e., one of the primary goals of convergence) [7]. This work draws on media synchronicity theory (MST), which suggests that individuals prefer highly synchronous communication media (e.g., face-to-face) for convergence (i.e., shared understanding) and less synchronous media (e.g., email) for conveyance (i.e., facts) [7]. Another approach suggests that a combination of electronic and verbal modes is sufficient [1]. However, even with these different approaches, previous research has found that the pattern of convergence in collaboration is still complicated [3]. This complication is attributed to information overload in convergence, the cognitive effort required to narrow brainstormed ideas, and the difficulty in establishing a meeting memory, or storing the ideas for future reference, when shared understanding takes place verbally [3].

3. Design of Convergence thinkLets


Previous research has described eleven existing thinkLets which have been used by facilitators in a number of collaboration sessions [2]. These convergence thinkLets include: 1) FastFocus, 2) OneUp, 3) BucketBriefing, 4) DimSum, 5) Pin the Tail on the Donkey, 6) BroomWagon, 7) GoldMiner, 8) ExpertChoice, 9) GarlicSqueezer, 10) ReviewReflect, and 11) RichRelations. The same research then presented two categories of performance criteria for evaluating convergence thinkLets: results-oriented measures of speed, level of comprehensiveness, level of shared understanding, level of reduction, and level of refinement of outcomes, as well as process- or experience-oriented measures of acceptance by participants, ease of use for the facilitator, ease of use for participants, satisfaction with the thinkLet by the facilitator, and satisfaction with the thinkLet by participants [2].

Based on these performance criteria, two new convergence thinkLets were designed [2]. The first new thinkLet is called FastHarvest. With FastHarvest, participants form subgroups that are responsible for converging on particular brainstorming subtopics. Each subgroup then extracts concise versions of ideas that relate to its assigned subtopic. At the end, each subgroup presents its findings to the whole group and clarifies the meaning (not merit) of its extractions. The second new convergence thinkLet is called FocusBuilder. With FocusBuilder, brainstormed ideas are divided into as many subsets as there are participants. Participants then extract the critical ideas and formulate them in a clear and concise manner. Participants are then paired to share and combine their ideas into a new list consisting of non-overlapping ideas. This pairing continues until there are two subgroups that present their results to one another. The final two lists are combined into a single list with non-redundant ideas. FocusBuilder is evaluated in an exploratory study presented in the following section.
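To make the FocusBuilder flow concrete, here is a minimal Python sketch of its pairwise reduction, under the simplifying assumption that overlap can be detected as exact duplicates; in practice, extraction and merging are judgment calls made by the participants themselves.

```python
def merge(list_a, list_b):
    """Combine two idea lists, keeping only non-overlapping ideas.
    Overlap is naively modeled as exact duplicates here; real
    participants judge semantic overlap when they combine lists."""
    combined = list(list_a)
    combined += [idea for idea in list_b if idea not in combined]
    return combined

def focus_builder(subsets):
    """Pair up the participants' lists and merge each pair, repeating
    until a single consolidated, non-redundant list remains."""
    lists = [list(s) for s in subsets]
    while len(lists) > 1:
        lists = [merge(lists[i], lists[i + 1]) if i + 1 < len(lists)
                 else lists[i]
                 for i in range(0, len(lists), 2)]
    return lists[0]

# Four participants, each converging on a subset of the brainstorm.
print(focus_builder([["a", "b"], ["b", "c"], ["c", "d"], ["a", "d"]]))
# -> ['a', 'b', 'c', 'd']
```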

4. Evaluation of Convergence thinkLets


Davis, de Vreede, and Briggs [2] recommend a design science program of research to assess the performance of convergence thinkLets using a combination of research methods and designs. In this study we designed an exploratory laboratory experiment to assess the performance of FocusBuilder with respect to participant satisfaction. Our exploratory research question asked: "Does ownership (i.e., whether or not the participants generated the brainstormed ideas) of the divergence task that precedes a convergence task affect the resultant participant satisfaction in a facilitated GSS session?" The study included four groups, with four members each. Two of the groups were comprised of undergraduate students and the other two of graduate students. The students were motivated to participate in the study for a variety of reasons. First, they were interested in experiencing the use of collaboration technology. Second, the task addressed campus issues in which they had a vested interest. Finally, they were provided modest monetary compensation for their time. The participants were led through a process and used the tool GroupSystems. The independent variable in the study was the design of the process. One undergraduate group and one graduate group began with a divergence activity before the convergence activity, using the FocusBuilder thinkLet. The remaining undergraduate and graduate groups instead used the pre-defined divergence results from the first two groups before the convergence activity. Participant satisfaction was measured at the end of the session using measures from Gallupe et al. [8].

5. Findings

The evaluation of the FocusBuilder thinkLet showed that groups who rely on pre-defined ideas when using the FocusBuilder thinkLet are more satisfied than groups who first brainstorm their own ideas. Specifically, the results suggest that not having ownership of, or accountability for, the ideas leads to increased participant satisfaction. These findings are limited by the fact that this was an exploratory study with a small overall sample size.

6. Future Directions
This research has presented a background on convergence thinkLet design as well as an evaluation of one such thinkLet. Future research should continue to validate the convergence thinkLet selection guide from Davis, de Vreede, and Briggs [2]. This can be done with a structured, in-depth evaluation of each thinkLet along the criteria presented in this paper. Future research should also focus on understanding the causal mechanisms that underlie different performance characteristics, as well as on assessing the value of the design science paradigm in Collaboration Engineering.

7. References
[1] G.-J. de Vreede and R. O. Briggs, "Collaboration engineering: Designing repeatable processes for high-value collaborative tasks," in 38th Annual Hawaii International Conference on System Sciences, Los Alamitos, 2005.
[2] A. Davis, G.-J. de Vreede, and R. O. Briggs, "Designing thinkLets for convergence," in 13th Annual Americas Conference on Information Systems (AMCIS-13), Keystone, Colorado, 2007.
[3] H. Chen, P. Hsu, R. Orwig, L. Hoopes, and J. F. Nunamaker Jr., "Automatic concept classification of text from electronic meetings," Communications of the ACM, vol. 37, pp. 56-72, October 1994.
[4] G. K. Easton, J. F. George, J. F. Nunamaker Jr., and M. O. Pendergast, "Using two different electronic meeting system tools for the same task: An experimental comparison," Journal of Management Information Systems, vol. 7, pp. 85-101, 1990.
[5] G.-J. de Vreede, A. Fruhling, and A. Chakrapani, "A repeatable collaboration process for usability testing," in 38th Hawaii International Conference on System Sciences, 2005.
[6] R. O. Briggs, J. F. Nunamaker Jr., and R. H. Sprague Jr., "1001 unanswered research questions in GSS," Journal of Management Information Systems, vol. 14, pp. 3-21, December 1997.
[7] A. R. Dennis and J. S. Valacich, "Rethinking media richness: Towards a theory of media synchronicity," in 32nd Hawaii International Conference on System Sciences (HICSS-32), 1999, p. 1017.
[8] R. B. Gallupe, A. R. Dennis, W. H. Cooper, J. S. Valacich, L. M. Bastianutti, and J. F. Nunamaker, "Electronic brainstorming and group size," Academy of Management Journal, vol. 35, pp. 350-369, June 1992.
