
Thesis

Building a Metric for Website Evaluation; Theory and Practice

Flavius Chircu

1. Introduction
The Internet is a network of networks. From the users' standpoint, websites are the nodes of the Internet. The number of nodes (also called web domains) has grown exponentially over the past two years, reaching 43.2 million in January 1999. As Figure 1.1 shows, the largest number of nodes belongs to commercial sites, which now account for about 28 percent of all Internet domains.

Figure 1.1 The growth of the Internet (Source: http://www.thestandard.com/metrics/display/0,1283,848,00.html)

With so many sites for an Internet user to choose from, the success of a particular site is likely to be determined by users' perceptions of it. If a user has positive perceptions of a website, she is more likely to return to that website in the future.

Figure 1.2 Most Internet sites offer a variety of communication channels (Source: http://thestandard.net/metrics/display/0,1283,829,00.html)

Websites are largely hosts for communications among users and between a user and the entity owning the website (which can be, for example, a person or a company). A recent research study completed by Jupiter Communications points out that most websites offer their users various communication means such as email, frequently asked questions lists, chat and bulletin boards. These websites' orientation towards Internet communication channels is also illustrated by the very small percentage (less than 10%) of websites offering traditional communication options such as phone (see Figure 1.2). Therefore, the users' perception of websites is likely to be influenced by their perception of how websites support various means of communication.

This research identifies two types of website communication. On the one hand, passive communication originates with the websites' owners. On the other hand, reactive communication originates with the audience (i.e. users surfing the Internet). Without ignoring the synergy effects of the two communication types, it is reactive communication that distinguishes Internet communication from all its predecessors: radio, TV, and print. With the underlying assumption that passive communications on the Internet are similar to traditional forms of communication (such as TV, print, and radio), this thesis will look for patterns in the reactive component of Internet communication.i

This work is structured in three main parts: theory-driven instrument development, applications, and further research. In the first part, which encompasses chapters 2, 3 and 4, a measurement instrument for reactive communications on websites is developed. Chapter 2 of this work presents theoretical considerations about developing measurement instruments for abstract concepts. Chapter 3 reviews the theoretical foundations for generating the measurement instrument for reactive communications.

Paramount for the review is Fred Davis' research on user acceptance of information technology (Davis 1989). His technology acceptance model (TAM), developed in 1989 and since tested and validated by other researchers, proposes that perceived usefulness and ease of use are determinants of the user's acceptance of a certain information technology. Chapter 4 applies the TAM model to the evaluation of reactive website communication tools, which can be thought of as new technology tools. A TAM-based metric that enables the ranking of websites from the point of view of their reactive communication capabilities is also proposed.

The second part of this work, Chapter 5, presents an application for the instrument developed in Chapter 4 to the evaluation of websites used during the 1998 November U.S. elections. Conclusions and suggestions for further research are the subject of the third part of this work in Chapter 6. Finally, all the details related to data collection and analysis for the development of the instrument are included in an appendix.

2. Developing Measurement Instruments


Numerous researchers have investigated the theory and practice of measurement instruments (Churchill 1979; Cook and Campbell 1979; Davis 1989; Hendrickson, Massey and Cronan 1993; Peter 1981; Rosenthal and Rosnow 1991). This chapter presents a summary of their most important findings related to the subject.

2.1 Measurement Instruments

Measurement is the process of assigning numbers to observations, in an attempt to represent quantities of attributes. When dealing with abstract concepts, such as human perceptions, it is impossible to measure these concepts directly. However, the concepts can be measured using instruments, also called operational definitions. This process can be described by considering two abstraction levels: the theory level, where the concepts, or constructs, reside, and the data level, where the measurement takes place with the help of a measurement instrument (see Figure 3.1).

[Diagram: the construct, together with its nomological network (relationships to other constructs), resides at the THEORY LEVEL; the measurement instrument (operational definition) operates at the DATA LEVEL.]

Figure 3.1 Two levels of abstraction: the theoretical construct and its operationalization

Because of this distinction, it is important to ensure that the resulting instrument actually measures the construct it is intended to measure, in a consistent way. As will be described in the next section, this means that the instrument must exhibit construct validity and that the measure must be reliable.

2.2 Desirable Properties of Measurement Instruments

Two desirable properties of a measurement instrument are reliability and validity. The validity of a measurement instrument is determined by assessing construct validity, internal validity, statistical conclusion validity, and external validity. Since they are extremely important for this research, construct validity and reliability will be explained in detail in the following paragraphs.

Construct validity refers to the degree to which a measure assesses the construct it is intended to assess. Construct validity can be assessed by verifying face validity, content validity, convergent and discriminant validity, and nomological validity. A measure has face validity if the items composing the measure look like they measure the intended construct. While this might be considered a subjective assessment, part of the subjectivity can be eliminated if face validity is judged on the basis of expert opinions. Experts also play an important role in defining the domain of the construct from which the items composing the measure can then be selected. In this case, we say that the measure satisfies content validity as well.

The traditional method for assessing convergent and discriminant validity is by using the multi-trait, multi-method (MTMM) matrix proposed by Campbell and Fiske. This matrix assumes that at least three measures and three constructs are used. The matrix consists of the correlations between measures and constructs, and can be used to investigate if the measure of interest correlates with other measures with which it is supposed to correlate (convergent validity), and if it does not correlate with measures from which it should differ (discriminant validity).
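To make the MTMM logic concrete, the following sketch (illustrative only: the constructs, method names and data are hypothetical, not drawn from this thesis) builds the correlation matrix for four measures and reads off a convergent and a discriminant correlation:

```python
# Sketch of MTMM-style validity checks on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two hypothetical constructs, each measured by two methods.
construct_a = rng.normal(size=n)
construct_b = rng.normal(size=n)

measures = {
    "A_method1": construct_a + 0.3 * rng.normal(size=n),
    "A_method2": construct_a + 0.3 * rng.normal(size=n),
    "B_method1": construct_b + 0.3 * rng.normal(size=n),
    "B_method2": construct_b + 0.3 * rng.normal(size=n),
}

names = list(measures)
data = np.vstack([measures[k] for k in names])
corr = np.corrcoef(data)  # 4x4 matrix of inter-measure correlations

# Convergent validity: different methods, same construct, should correlate highly.
r_convergent = corr[names.index("A_method1"), names.index("A_method2")]
# Discriminant validity: measures of different constructs should correlate weakly.
r_discriminant = corr[names.index("A_method1"), names.index("B_method1")]
print(round(r_convergent, 2), round(r_discriminant, 2))
```

In a full MTMM analysis every same-construct correlation would be compared against every cross-construct correlation, not just one pair.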

Nomological validity, sometimes called criterion validity, is concerned with the degree to which the measure correlates with one or more outcome criteria, as specified by theory or previous research. This type of validity can be concurrent, for a criterion that can be measured in the present, or predictive, for a criterion to be measured in the future. Nomological validity might be difficult to assess if no previous valid results exist, or if it is not feasible to collect data about the outcome criteria.

Reliability is a necessary criterion for construct validity. Reliability measures the degree to which the measure is free of random error, and therefore yields consistent results. In other words, reliability is concerned with how repeatable a measure is. A natural way of measuring reliability is through test/retest or test/alternative-form methods. The test, either in the same or in an alternative form, is administered to the same subjects. The expectation is that, if the reliability of the instrument is high, the results of the second test will be highly correlated with the results of the first test. For these methods, the reliability coefficient is called stability, and represents the correlation coefficient between the item scores for the first and the second test. However, repeating the test might influence the subjects. A learning effect (subjects learning how to answer the same or similar items) might artificially inflate the reliability coefficient. The results also depend on when, how and by whom the replications are conducted.
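The stability coefficient described above is simply the correlation between the first- and second-administration scores. A minimal sketch (the scores below are made-up illustrations, not data from any cited study):

```python
# Test-retest stability: correlate each subject's first and second scores.
import numpy as np

# Hypothetical 1-5 scores for ten subjects on one item, test and retest.
test = np.array([4, 5, 3, 2, 4, 5, 1, 3, 4, 2], dtype=float)
retest = np.array([4, 4, 3, 2, 5, 5, 2, 3, 4, 2], dtype=float)

stability = np.corrcoef(test, retest)[0, 1]
print(round(stability, 2))
```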

Another way of measuring reliability is through an internal consistency coefficient. This method is preferable, since it does not require the test to be repeated. Traditional measures of internal consistency reliability are the Spearman-Brown reliability index and Cronbach's alpha coefficient. Probably the most popular reliability measure is the alpha coefficient, which is based on inter-item correlations. It is important to note that if a measure consists of items drawn from the same domain, intending to measure the same construct in several ways, then the items satisfy the criteria of face and content validity. Moreover, the items can be considered different methods of measuring the same construct, and their correlations should satisfy the MTMM conditions for convergent validity. This will also ensure a high Cronbach's alpha coefficient. One rule of thumb for measurement construction is, therefore, to choose several (usually 3 to 6) items intended to measure the same construct in different ways.
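Cronbach's alpha can be computed in its variance form, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch (the response matrix is hypothetical):

```python
# Cronbach's alpha for a k-item scale, variance form.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: subjects x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses from six subjects on three items.
scores = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [3, 3, 4],
    [2, 3, 2],
    [4, 5, 5],
    [1, 2, 2],
])
print(round(cronbach_alpha(scores), 2))
```

High inter-item correlations, as in this toy matrix, drive the item-variance term down relative to the total variance and push alpha toward 1.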

A measure cannot be valid if it is not reliable. However, reliability is not sufficient for validity. A measure can be reliable but measure a totally different construct, or measure a certain construct incorrectly. A classic example is a yardstick that is 35 inches long. Using this yardstick would yield reliable measures, but they would all be wrong.

There are several threats to the validity of measurement instruments. First, face and content validity can be threatened by inadequate domain construction. Second, the instrument might suffer from mono-operation bias, meaning that only one item is used to measure a construct. As mentioned before, multiple items should be used. Third, the subjects can guess what the experimenter is trying to measure and alter their responses. Fourth, the subjects can suffer from evaluation apprehension, which might influence their responses. Fifth, the experimenter might influence the subjects unknowingly because he or she has certain expectations about the results. Sixth, the constructs the instrument is trying to measure might be confusing.

If, after the data are collected, there is no evidence of construct validity, there are several possible explanations for which no specific tests can be conducted. One possible explanation is that the measure lacks construct validity. Another is that the theory the instrument is based on is incorrect. Yet another is that the operationalization of the construct (i.e. the way the measurement instrument is constructed) is incorrect. Finally, the other variables used to assess validity (in convergent and discriminant validity, and nomological validity tests) may lack construct validity themselves.

Internal validity specifies that there are causal relationships among constructs in the nomological network. External validity specifies that the causal relationships among variables can be generalized across different measures, persons, settings and times. Statistical conclusion validity specifies that statistical inferences based on covariance between variables hold with a certain statistical significance (also called risk). Since they will not be tested in this research, internal, statistical conclusion and external validity will not be addressed in more detail here.ii

2.3 Developing Measurement Instruments

Numerous research articles point out the importance of systematic instrument development. It is usually suggested that the development process consist of eight steps:

1. Specify the domain of the construct. This phase is usually achieved by conducting a thorough literature search intended to identify previous research in the same or a related area.
2. Generate a sample of items. This step can draw on the results of the literature search, the researcher's own experience, surveys of experts in the domain of interest, real-world examples, critical incidents, and focus groups. It is an iterative process in which the opinions of the experts and focus groups are used to refine the items.
3. Collect data. This step involves administering the sample items generated in the previous step to subjects.
4. Purify the measure. The validity of the model should be assessed; Cronbach's alpha coefficient and factor analysis can be used. If problems with the instrument are detected and cannot be corrected, it is necessary to return to step 2 or even step 1.
5. Collect data. This step provides data for the next phases of analysis. New data are needed if step 4 generated changes in the sample items.
6. Assess reliability. This step involves examining Cronbach's alpha coefficient and split-half reliability coefficients, repeating from step 2 as necessary.
7. Assess validity. This step involves building and analyzing an MTMM matrix, investigating criterion validity, or conducting confirmatory factor analysis.
8. Develop a norm. This step involves building a norm, or index, that can be used to summarize the results of the measurement instrument. The index is usually a weighted sum of factor scores, where the weights can be 1 or proportional to the percentage of variance explained by each factor.
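The norm in step 8 can be sketched as follows; the factor scores and variance-explained figures below are hypothetical placeholders, not results from this study:

```python
# A norm built as a weighted sum of factor scores, with weights
# proportional to the variance explained by each factor.
factor_scores = [3.8, 4.2]          # e.g. usefulness, ease of use (hypothetical)
variance_explained = [0.45, 0.30]   # proportion of variance per factor (hypothetical)

total = sum(variance_explained)
weights = [v / total for v in variance_explained]  # normalize to sum to 1
index = sum(w * s for w, s in zip(weights, factor_scores))
print(round(index, 2))
```

With equal weights of 1 the index reduces to a simple sum of factor scores.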

It is important to point out that the development of a measurement instrument should be based on theory. Especially when one attempts to measure latent constructs, theory is extremely important in this process, since it provides guidance regarding the domain of the construct and its relationships with other constructs in the nomological network.

3. Literature Review

The Technology Acceptance Model (TAM) was developed by Davis (1989). This model attempts to explain individual usage of technology as a function of one's perceptions regarding how useful and easy to use a certain technology is. Davis (1989) defines perceived usefulness as "the degree to which a person believes that using a particular system would enhance his or her job performance" whereas perceived ease of use is "the degree to which a person believes that using a particular system would be free of effort."

In considering perceived usefulness and perceived ease of use as determinants of user behavior, Davis starts from a series of theoretical results. Those results, the corresponding constructs in the original theories and in Davis' model, and their authors are shown in Table 3.1.

Based on the above theories, Davis considers perceived ease of use and perceived usefulness the fundamental and distinct constructs that influence decisions to use information technology. He proceeds to construct the instrument by identifying the number of items needed for high reliability, generating sample items, refining the items, collecting data, assessing reliability and validity, and investigating the relationship between the two constructs and actual usage of a technology.

Using the Spearman-Brown prophecy formula,iii Davis determines that 10 items are needed for each construct in order to obtain a reliability of at least 0.80. To allow for item elimination, the initial scales were constructed with 14 items each. The items were generated using the construct definitions and previous research. Content validity was improved through pretest interviews with 15 experienced computer users. These users were asked to prioritize the items for each construct, and to categorize all 28 items into groups containing statements similar in meaning to one another and dissimilar from other groups. By interpreting the groups, the representativeness/coverage of the items (i.e. content validity) was assessed.
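The Spearman-Brown prophecy formula gives the reliability of a k-item scale as k*r / (1 + (k-1)*r), where r is the reliability of a single item; solving for k yields the number of items needed to reach a target reliability. A sketch (the single-item reliability of 0.29 is an assumed illustration, not a figure reported by Davis):

```python
# Spearman-Brown prophecy formula, solved for the number of items k.
import math

def items_needed(target: float, r_single: float) -> int:
    """Smallest k with predicted scale reliability >= target."""
    k = target * (1 - r_single) / (r_single * (1 - target))
    return math.ceil(k)

# With an assumed single-item reliability of 0.29, ten items are needed
# to reach a scale reliability of 0.80.
print(items_needed(0.80, 0.29))
```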

Following instrument purification, two systems (a limited email messaging system for brief messages and a simple text editor) were evaluated by 120 users. Information about the self-reported usage of the two systems was also collected.


Table 3.1 summarizes the previously developed theories, their constructs and the TAM equivalents (where "=" is intended to mean "similar to"), and their authors:

1. Self-efficacy theory, about "judgments of how well one can execute courses of action required to deal with prospective situations" (Bandura 1982). Self-efficacy = perceived ease of use; outcome judgment = perceived usefulness.
2. Cost-benefit paradigm, as the "cognitive tradeoff between the effort required to employ the strategy and the quality of the resulting decision" (Jarvenpaa 1989; Kleinmuntz and Schkade 1988). Perceived benefit = perceived usefulness; perceived cost = perceived ease of use.
3. Adoption of innovations, influenced by the degree to which an innovation is perceived as relatively difficult to understand and use (Tornatzky and Klein 1982). Complexity = perceived ease of use; compatibility and relative advantage = perceived usefulness might be an interpretation, but the constructs are difficult to interpret.
4. Evaluation of information reports, determined by (1) perceived importance, "the quality that causes a particular information set to acquire relevance to a decision maker," and (2) perceived usableness, "the degree to which information format is unambiguous, clear or readable" (Larcker and Lessig 1980). Perceived importance = perceived usefulness; perceived usableness = perceived ease of use.
5. Channel disposition model, which hypothesizes that "potential users select and use information reports based on an implicit psychological tradeoff between the information quality and associated costs of access" (Swanson 1982, 1987). Attributed information quality (value) = perceived usefulness; attributed access quality (accessibility) = perceived ease of use.

Table 3.1 Previously developed theory as a foundation for TAM (Note: "=" is intended to mean "similar to")

Reliability was evaluated using Cronbach's alpha. The results, which were reported by construct for each individual tool and pooled (for both tools), were high, with reliabilities greater than 0.86. Convergent validity was established by high correlations between items measuring the same construct (an MTMM matrix was used).


Discriminant validity was established by low correlations between the scores of the same and different items applied to different systemsiv (compared with the correlations between an item and the other items measuring the same system). Davis (1989) points out that the presence of common method variance (i.e. an item measuring methodological artifacts unrelated to the construct, such as individual differences in response style) is usually suggested by a lack of discriminant validity. Discriminant validity was not perfect: 3% of the correlations were not lower, as expected. The exceptions involved negatively phrased items (reversed items), which, as Davis points out, is ironic given that such items are hypothesized to lower common method variance.

Construct dimensionality was investigated using principal components analysis with oblique rotation. The items load on two distinct, correlated factors, which supports the hypothesis of distinct, though correlated, constructs.

The scales were then refined by eliminating the negatively phrased items, rephrasing others, and retaining only six items per construct, yielding a reliability of at least 0.90. For usefulness, the items are Work More Quickly, Job Performance, Increase Productivity, Effectiveness, Makes Job Easier and Useful. For ease of use, the items are Easy to Learn, Controllable, Clear and Understandable, Flexible, Easy to Become Skillful and Easy to Use.


Criterion validity was assessed by investigating the relationship of perceived usefulness and perceived ease of use to self-reported use. Usage was more highly correlated with usefulness than with ease of use. Moreover, regression analysis shows that the effect of usefulness on usage is significant when controlling for ease of use, but that the effect of ease of use on usage is non-significant when controlling for usefulness. Given that ease of use and usefulness are also correlated, though not highly enough to create multicollinearity problems, Davis concludes that usefulness mediates the effect of ease of use on usage, i.e. ease of use influences usage through usefulness.
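This mediation logic can be sketched on synthetic data (not Davis' dataset; the coefficients and noise levels are invented): if ease of use affects usage only through usefulness, its regression coefficient shrinks toward zero once usefulness is controlled for.

```python
# Regressing usage on usefulness and ease of use simultaneously.
import numpy as np

rng = np.random.default_rng(1)
n = 120
ease = rng.normal(size=n)
usefulness = 0.7 * ease + rng.normal(scale=0.5, size=n)   # ease -> usefulness
usage = 0.8 * usefulness + rng.normal(scale=0.5, size=n)  # usefulness -> usage

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), usefulness, ease])
coef, *_ = np.linalg.lstsq(X, usage, rcond=None)
b_usefulness, b_ease = coef[1], coef[2]
print(round(b_usefulness, 2), round(b_ease, 2))
```

In this construction the coefficient on usefulness stays near 0.8 while the coefficient on ease of use hovers near zero, mirroring the pattern Davis reports.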

Davis (1989) also conducted a second study, with 40 paid MBA students asked to evaluate two graphics systems and report their intentions to use the systems. This study confirms the results of the previous one. However, the convergent validity analysis for one of the two graphics systems shows that the flexibility item does not correlate highly with the other ease of use items. This may be because flexibility can negatively influence ease of use, especially for novice users.

Because two different studies were conducted, evaluating four different applications in two research settings (a field setting with real users and a lab setting with MBA students), there is some evidence that the results of the study can be extended to other samples and other technologies, i.e. there is support for external validity.


Further research studies have confirmed the validity and reliability of the TAM measurement instrument. For example, a study by Hendrickson et al. (1993) establishes the stability (test-retest) reliability of the perceived ease of use and perceived usefulness scales using 51 and 72 undergraduate students, respectively, asked to evaluate a spreadsheet package and a database management package. Paired t-tests were used to assess whether there are differences between the test and retest results (i.e. whether the differences in test and retest means are significantly different from 0). Even though some of the individual item stabilities are low (0.58-0.78), the subscale stabilities are high (>= 0.77), supporting a high test-retest reliability of the TAM instrument.

Szajna (1994) evaluates the predictive validity of the TAM instrument using choice behavior as an outcome criterion. The author posits that choice behavior is a more powerful measure of intentions than self-reported intention (which was one of the measures used by Davis in his initial study). The subjects in this follow-up study were 47 MBA students enrolled in a required MIS course, who were asked to evaluate nine database management software packages and then choose one for future use in the course. The students were split into groups and each student was required to become familiar with pre-determined features of one software package and present them to his/her group members.


The predictive accuracy rate in this study is 70%, much higher than the rate obtained by chance (17-25%). The author also conducted discriminant analysis, in order to assess if the instrument can segregate students preferring one package from those preferring other packages. The results suggest that the instrument is a good discriminator.

Moreover, in accordance with previous results, the study showed some non-significant results for ease of use.

The author also conducted further tests in order to eliminate alternative explanations for the results. Testing for experiment design influences involved investigating whether a subject tended to choose the package s/he demonstrated, whether the subject's choice was influenced by demonstration sequence, or whether the subjects were influenced by others in their group. All these tests were performed using chi-square statistics and no influences were detected.

These studies show that it is possible to build a measurement instrument to assess the adoption of technology. They also provide useful tools for any project designed to build similar instruments. And last but not least, they illustrate the range of analyses that can be employed to ensure that the resulting instrument is valid and reliable. Therefore, the process of building the measurement instrument proposed in the present work is based on the results of these studies.


4. A Measurement Instrument For Evaluating Website Communication

As previously demonstrated by Davis (1989), Hendrickson et al. (1993) and Szajna (1994), TAM is a valid, reliable, low-cost and easy-to-administer instrument for evaluating the usefulness and ease of use of software packages, and for estimating actual and intended usage of these packages. This research proposes that TAM can also be used to evaluate website communication features such as email, hyperlinks, bulletin boards, feedback forms and discussion lists.

4.1. Web communication tools

This research considers the reactive communication technologies available on the Internet. Reactive communication technologies are defined as technologies that enable the user to receive messages and react to those messages through the same communication channel.

Before constructing the measurement instrument, a systematic analysis of 50 Internet websites was conducted in order to identify all the reactive technologies, or tools, as they are usually called, currently available. As a result, five communication tools were chosen for this study: Bulletin Boards, Discussion Lists, Email, Feedback Forms, and Hyperlinks.


4.1.1 Bulletin Boards

Bulletin boards enable one to access other people's comments and to post one's own comments by means of webpages. In a bulletin board, in order to communicate, one needs to perform only point and click operations in a web browser. Also, one's identity is only optionally revealed. An example of a bulletin board can be found at http://www.lingle.org/guestbook/guestbook.html. This is the online guest book of Linda Lingle, candidate for governorship in Hawaii. As shown in figures 4.1.1a and 4.1.1b, a bulletin board is a place on a website that gives people a place to exchange communication on given topics (i.e. a guest book for communication about or with the candidate).


Figure 4.1.1a Bulletin Board


Figure 4.1.1b Bulletin Board

4.1.2 Discussion Lists

Discussion lists are online environments built around online communities that make communication on given topics possible by email. In a discussion list, in order to communicate, one needs to have an email account and an email client.v Also, as one needs to subscribe to the discussion list, in other words to join the online community, one's identity is known. Once one subscribes to a discussion list, one receives every message that is sent to that list. An example of a discussion list can be found on the website of Ron Schmidt, a Republican nominee for the United States Senate from South Dakota, at http://www.e-orchard.com/schmidt/. At this website there is a link to http://www.freerepublic.com/, the discussion list of the Republican online forum Free Republic.

4.1.3 Email

Email is a computer application that makes information exchange on the Internet possible. In order to communicate by email, one needs an email client and the email address of a communication partner. The partners know each other's identity in email communication. An example of email can be found at http://www.leahy98.com/, the website of Patrick Joseph Leahy, Democratic incumbent for the United States Senate from Vermont and one of the leaders in using computer technologies, including email, on Capitol Hill. As shown in figure 4.1.2, email on a website allows one to communicate with the website's owner. In the example shown in figure 4.1.2, one could use email to send questions for the Super Sunday Candidate Debate, a show on a local TV station.


Figure 4.1.2 Example of email communication

4.1.4 Feedback forms

Feedback Forms enable one to communicate back to the owner of a website using point and click operations in a web browser. Also, one's identity is only optionally revealed.


An example of feedback form can be found at http://www.jaylucas.org/volunteer.shtml/. As shown in figure 4.1.3, a feedback form could be an online appeal for volunteer actions for a political figure (i.e. Jay Lucas, candidate for governorship in New Hampshire).

4.1.5 Hyperlinks

Hyperlinks are texts or images on a website that enable one's access to related webpages. In order to use hyperlinks, one needs to perform only point and click operations in a web browser, and one's identity is not revealed in the process.


Figure 4.1.3. Example of feedback form.


4.2 Instrument Development

The instrument development is based on the sample items proposed by Davis (1989) for his technology acceptance model. However, these items were phrased for people who used a technology in their workplace, not for users in general. Therefore, some of the questions were modified to reflect the evaluation of communication technologies in any context, not only work-related ones. Scholars in communications and skilled Internet users evaluated the new questions in order to ensure their content validity.vi

In the questionnaire, all items were devised on a 5-point Likert-type scale. A Likert-type item consists of a single statement, followed by a five- or seven-point choice with each option described in words. A Likert scale is an ordinal scale, meaning that the responses can be ranked according to the point grade assigned to them, but that the differences between two consecutive points on the scale are not necessarily equal. For this research, all the items in the questionnaire are categorical ordinal data, coded from 1 to 5. An item is coded 1 when the respondent rates the item very low. Depending on the question, this means that the respondent strongly disagrees with the statement, or that the answer to the question is "very seldom" or "very difficult". An item is coded 5 when the respondent strongly agrees with the statement, or the answer to the question is "very frequently" or "very easy". Also, to avoid "response set," some of the scale items are positively phrased, while others are negatively phrased.
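The coding scheme above can be sketched as follows (the helper function is a hypothetical illustration, not part of the actual questionnaire): responses are coded 1 to 5, and negatively phrased (reversed) items are recoded so that 5 is always the favorable end.

```python
# Coding 5-point Likert responses, with reverse scoring for negatively
# phrased items (1 <-> 5, 2 <-> 4, 3 unchanged).
def code_response(choice: int, reversed_item: bool = False) -> int:
    """choice: 1 (very low) .. 5 (very high) as circled by the respondent."""
    if not 1 <= choice <= 5:
        raise ValueError("Likert responses are coded 1 through 5")
    return 6 - choice if reversed_item else choice

print(code_response(5))                      # positively phrased item: stays 5
print(code_response(5, reversed_item=True))  # negatively phrased item: becomes 1
```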


In order to illustrate how the latent variables of perceived ease of use and usefulness were measured, an example of the questions used for the email technology is presented in Figure 4.2.1. The whole questionnaire is presented in the Appendix.

1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Email has made communications more convenient for me. (Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree)

2. How difficult or easy has your communication become due to the use of email? (Much harder / Harder / Neither harder nor easier / Easier / Much easier)

3. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Email has become a very useful communication tool for me. (Strongly disagree / Disagree / Neither disagree nor agree / Agree / Strongly agree)

4. How would you best describe your learning experience with email? (Very easy to learn / Easy to learn / Neither easy nor difficult to learn / Difficult to learn / Very difficult to learn)

5. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how email communication works. (Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree)

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with email. (Strongly agree / Agree / Neither agree nor disagree / Disagree / Strongly disagree)

7. How often do you choose email to communicate? (Very often / Often / Neither often nor seldom / Seldom / Very seldom)

Figure 4.2.1 Questionnaire example for evaluating the email communication tool

The questions presented in Figure 4.2.1 are intended to measure overall use of the communication tool, as well as the two TAM constructs, ease of use and usefulness. Table 4.2.1 shows how the questions and the constructs relate to each other.


Question (keyword)            Construct
1 (convenience)               Usefulness
2 (easier communication)      Usefulness
3 (useful)                    Usefulness
4 (easy to learn)             Ease of use
5 (clear and understandable)  Ease of use
6 (easy to become skillful)   Ease of use
7 (how often)                 Overall use (self-reported)

Table 4.2.1 Questionnaire items and model constructs

The questionnaire was administered, on a voluntary basis, to a convenience sample of 110 students. As shown in the previous chapter, student subjects have been used in prior research to develop the TAM constructs. Six of the questionnaires were incomplete and were not included in the study; therefore, the sample used for the development of the measurement instrument consists of 104 subjects. The first 50 subjects are undergraduate students enrolled in the Media and American Politics class at Georgetown University. The other 54 are graduate students in the Communication, Culture and Technology program at Georgetown University. The students in the first group have only general knowledge of Internet communications. The students in the second group have been exposed, as an academic requirement, to a variety of Internet communication tools. By considering these two groups, the measurement instrument takes into account both experienced and inexperienced users. Subjects were given definitions and examples of the communication technologies targeted in the questionnaire. The results of the survey are presented in the Appendix.


4.3 Data Analysis

Even though the questions were derived from a previously validated study, it was considered important to verify construct validity and reliability again. There are two reasons for this. First, some of the questions were rephrased so that they would reflect perceptions about usage in any context, not only in the workplace. Second, a variety of web communication tools were evaluated, and it was not clear whether TAM could be applied directly, without further tests, to these tools too.vii

Validity was investigated by using factor analysis. Factor analysis is a statistical method used to identify the dimensions underlying multiple items within a data set. It reduces the number of variables by examining the correlations among them, identifying key underlying (hypothetical) variables, called factors, that characterize the system under study. The correlation between a variable (item) and a factor (construct) is called the "loading" of that item on the factor.

The factor analysis method can provide evidence for construct validity in two ways. First, convergent validity is supported by items that measure the same construct having high loadings on one factor. Second, discriminant validity is supported by items that measure different constructs having high loadings on their corresponding factors only, and very low loadings on the other factors.


Reliability will be determined using Cronbach's alpha coefficient, while nomological validity will be assessed by examining the relationship between the ease of use and usefulness constructs and self-reported usage.

4.3.1 Sample differences

The questionnaire was administered to two groups of students in order to capture responses from both experienced and inexperienced users of communication technologies. Table 4.3.1 suggests that the responses to the self-reported usage question are different between the experienced and inexperienced samples.

Indeed, an independent-samples t-test for mean differences in self-reported usage for each of the communication tools reveals statistically significant differences between the two samples. Table 4.3.2 shows the results of this test.
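The thesis reports the SPSS output for this test; as a rough stand-in, the underlying t statistic can be computed by hand. The sketch below uses Welch's version (which does not assume equal variances), and the ratings are made-up illustrations, not the survey data:

```python
import math

def welch_t(a, b):
    # Welch's t statistic for two independent samples (unequal variances)
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))        # standard error of the difference
    return ma - mb, se, (ma - mb) / se

# Illustrative (made-up) 1-5 usage ratings for the two groups
inexperienced = [1, 2, 2, 3, 1, 2, 2, 1]
experienced = [2, 3, 3, 4, 2, 3, 3, 2]
diff, se, t = welch_t(inexperienced, experienced)
print(diff, round(se, 2), round(t, 2))
```

A |t| well above 2 signals a difference unlikely to be due to chance, which is the pattern Table 4.3.2 reports for most tools.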

How often do you use this tool? (1=very seldom, 5=very often)

Sample          Statistic   Bulletin Board  Discussion List  Email  Feedback Form  Hyperlink
Inexperienced   Mean        1.84            2.26             4.22   1.78           3.69
users           Std. Dev.   .91             1.05             .95    .95            1.14
                Minimum     1.00            1.00             2      1.00           1.00
                Maximum     4.00            4.00             5      4.00           5.00
                N           50              49               50     50             49
Experienced     Mean        2.18            2.98             4.69   2.61           4.37
users           Std. Dev.   .87             1.14             .54    .99            .89
                Minimum     1.00            1.00             3      1.00           1.00
                Maximum     4.00            5.00             5      5.00           5.00
                N           54              54               54     54             54
Pooled          Mean        2.01            2.64             4.46   2.21           4.04
sample          Std. Dev.   .90             1.15             .80    1.05           1.06
                Minimum     1.00            1.00             2      1.00           1.00
                Maximum     4.00            5.00             5      5.00           5.00
                N           104             103              104    104            103

Table 4.3.1. Descriptive statistics for self-reported usage, for the two samples and the pooled sample.

t-test for Equality of Means

Tool             Sig. (2-tailed)   Mean Difference   Std. Error Difference
Bulletin Board   .051              -.34              .17
Discussion List  .001              -.71              .21
Email            .003              -.47              .15
Feedback Form    .000              -.83              .19
Hyperlink        .001              -.67              .20

Table 4.3.2. Equality of means test for self-reported usage, experienced and inexperienced users.

It was decided that the analysis should be done on the pooled sample, since it reflects different experience and usage levels. The larger size of the combined sample is also better suited to the statistical analyses employed here, especially factor analysis. Since it is not within the scope of this study to revisit the TAM theory, but to establish a metric for website communication tools, separate analyses of the two samples were not conducted. Further research should investigate the effect of experience on perceived usefulness and ease of use.


4.3.2 Construct Validity

Factor analysis (principal components) was applied to the data for each communication tool to determine construct validity. Since previous studies have identified a correlation between ease of use and usefulness, an oblique rotation (OBLIMIN) of the initial factor analysis solution was used. This rotation allows the factors to be correlated.

The analysis revealed that the items load on only two components. Irrespective of the communication tool, questions 1, 2 and 3 loaded on one factor, while questions 4, 5 and 6 loaded on another. The loadings on the corresponding factor are very high (usually 0.7-0.9), which confirms convergent validity, while the loadings on the other factor are very low (0.3 or less), which confirms discriminant validity (see Tables 4.3.3 (a)-(e)).

Item                          Factor 1 (usefulness)   Factor 2 (ease of use)
(b) easy to learn                                     .829
(b) easy to become skillful                           .823
(b) clear and understandable                          .759
(b) convenient                .909
(b) useful                    .837
(b) easier communication      .799

Table 4.3.3 (a) Factor analysis for Bulletin Board items (loadings lower than 0.3 are not shown)

Item                          Factor 1 (usefulness)   Factor 2 (ease of use)
(f) useful                    .864
(f) easier communication      .855
(f) convenient                .764
(f) easy to become skillful                           .895
(f) clear and understandable                          .835
(f) easy to learn                                     .734

Table 4.3.3 (b) Factor analysis for Feedback Form items (loadings lower than 0.3 are not shown)

Item                          Factor 1 (usefulness)   Factor 2 (ease of use)
(d) convenient                .925
(d) easier communication      .891
(d) useful                    .856
(d) easy to become skillful                           .886
(d) easy to learn                                     .857
(d) clear and understandable                          .815

Table 4.3.3 (c) Factor analysis for Discussion List items (loadings lower than 0.3 are not shown)

Item                          Factor 1 (usefulness)   Factor 2 (ease of use)
(h) clear and understandable  .515                    .882
(h) easy to become skillful                           .706
(h) easy to learn                                     .525
(h) convenient                .858
(h) easier communication      .769
(h) useful                    .759

Table 4.3.3 (d) Factor analysis for Hyperlink items (loadings lower than 0.3 are not shown)

Item                          Factor 1 (usefulness)   Factor 2 (ease of use)
(e) useful                    .903
(e) convenient                .797
(e) easier communication      .760
(e) clear and understandable                          .849
(e) easy to become skillful                           .796
(e) easy to learn                                     .476

Table 4.3.3 (e) Factor analysis for Email items (loadings lower than 0.3 are not shown)
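The convergent/discriminant pattern described above can be illustrated on synthetic data. The sketch below extracts principal-component loadings from a correlation matrix with NumPy (unrotated, unlike the OBLIMIN solution used in the thesis); the two latent constructs and all item responses are simulated, not survey data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Two latent constructs, assumed uncorrelated here for simplicity
usefulness = rng.normal(size=n)
ease = rng.normal(size=n)

# Three items per construct; the second block is noisier so that the
# two principal components are well separated
items = np.column_stack(
    [usefulness + 0.3 * rng.normal(size=n) for _ in range(3)]
    + [ease + 0.6 * rng.normal(size=n) for _ in range(3)]
)

# Principal components of the correlation matrix
R = np.corrcoef(items, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)            # eigh returns ascending order
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Loadings = eigenvectors scaled by the square root of the eigenvalue,
# i.e. the correlation between each item and each component
loadings = eigvec[:, :2] * np.sqrt(eigval[:2])

for i in range(6):
    print(f"item {i + 1}: {loadings[i, 0]: .2f} {loadings[i, 1]: .2f}")
```

Items 1-3 load heavily on one component and items 4-6 on the other, with near-zero cross-loadings, which is exactly the pattern Tables 4.3.3 (a)-(e) report.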

After performing factor analysis on all items for each communication tool, it is useful to investigate whether each factor is unidimensional, and to analyze Cronbach's alpha coefficient for each factor in order to determine whether the measure is reliable. The analysis reveals that each factor is indeed unidimensional, since all exploratory factor analyses using principal components yield only one factor for questions 1, 2 and 3 (the usefulness items) and for questions 4, 5 and 6 (the ease of use items), respectively, for each communication tool. Furthermore, the reliability of each scale, usefulness and ease of use, for each communication tool is in general between 0.7 and 0.9, with only two reliabilities being lower (0.5-0.7). A summary of these reliabilities is presented in Table 4.3.4. The guidelines for exploratory studies such as this one recommend reliability values greater than 0.5-0.6, and therefore the reliabilities obtained here are adequate and support the previous construct validity findings.

Tool             Usefulness   Ease of use
Bulletin Board   0.7217       0.8009
Feedback Form    0.7555       0.7553
Discussion List  0.8735       0.8048
Email            0.7516       0.5483
Hyperlink        0.6737       0.7251

Table 4.3.4 Reliabilities for the usefulness and ease of use scales, for each communication tool
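Cronbach's alpha can be computed directly from its definition: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), where k is the number of items in the scale. A minimal sketch with made-up response data:

```python
def cronbach_alpha(rows):
    # rows: one tuple of item scores per respondent, one column per item
    k = len(rows[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[j] for r in rows]) for j in range(k)]
    total_var = var([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent responses give alpha = 1
print(cronbach_alpha([(1, 1, 1), (3, 3, 3), (5, 5, 5)]))
# Partially consistent responses give a lower alpha
print(round(cronbach_alpha([(2, 4), (4, 2), (3, 3), (5, 5)]), 3))
```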

These analyses confirm the construct validity of ease of use and usefulness, as operationalized here, and the hypothesis that the items measure different but related constructs.


4.3.3 Nomological validity

Nomological validity is confirmed by performing regression analyses with self-reported usage as the dependent variable, and perceived usefulness and perceived ease of use as independent variables. The aggregate scores for the usefulness and ease of use factors are obtained by averaging the scores of the corresponding items. The results show that ease of use and usefulness are good predictors of self-reported usage, explaining between 23% and 44% of the total variance of the dependent variable (see Table 4.3.5). These numbers are comparable with the R2 for electronic mail (0.31) reported by Davis (1989), which gives an additional level of confidence in the results obtained here.

                           Independent variables
Tool             N     Usefulness   Ease of use   R2
Bulletin Board   103   0.395***     0.235**       0.252
Feedback Form    103   0.456***     0.341***      0.436
Discussion List  102   0.348***     0.283**       0.282
Email            103   0.476***     0.180         0.230
Hyperlink        102   0.386***     0.377***      0.429

Table 4.3.5 Standardized coefficients and R2 for the regression analyses (the dependent variable is self-reported usage for each tool); *** p<0.001, ** p<0.01
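The standardized coefficients in a table like this come from ordinary least squares on z-scored variables. A sketch with illustrative data (not the survey responses), assuming NumPy is available:

```python
import numpy as np

def standardized_betas(y, X):
    # z-score everything, then OLS without intercept yields standardized
    # coefficients (beta weights); R^2 follows from the fitted values
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    zy, zX = z(np.asarray(y, float)), z(np.asarray(X, float))
    betas, *_ = np.linalg.lstsq(zX, zy, rcond=None)
    r2 = 1 - ((zy - zX @ betas) ** 2).sum() / (zy ** 2).sum()
    return betas, r2

# Made-up scores where usefulness drives usage twice as strongly as ease
usefulness = np.array([1, 2, 3, 4, 5, 3, 2, 4], float)
ease = np.array([2, 1, 4, 3, 5, 2, 4, 3], float)
usage = 2 * usefulness + ease
betas, r2 = standardized_betas(usage, np.column_stack([usefulness, ease]))
print(betas, round(r2, 3))
```

As in Table 4.3.5, the construct with the larger underlying effect receives the larger standardized beta.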

With the exception of email, all regression coefficients are significant for both usefulness and ease of use, showing that both factors are important in predicting self-reported usage. However, the standardized coefficients for usefulness are higher than those for ease of use, showing that usefulness is more important than ease of use in predicting self-reported usage. For email, the most familiar among the studied tools, the range of responses was smaller; email might simply be an exception because its variation is more limited.

4.3.4 Developing an index

The scope of this research is to develop an index that allows the classification of websites depending on the type of communication tools they offer to potential users. Therefore, following the guidelines of instrument development presented in section 2, an index was created for each tool. This index is the weighted sum of the ease of use and usefulness scores, where the weights are the percentages of total variance explained by the two factors and therefore reflect the importance of each factor for the overall score. The descriptive statistics of the indexes are presented in Table 4.3.6.

Tool             Index     Mean       Std. Dev.   Min.     Max.
Feedback Form    INDEX_F   228.0395   38.7096     141.36   354.34
Bulletin Board   INDEX_B   230.8506   34.0528     162.45   320.53
Email            INDEX_E   256.1504   34.0558     158.44   309.19
Discussion List  INDEX_D   262.5888   50.8088     157.34   385.73
Hyperlink        INDEX_H   272.5056   40.2180     166.39   331.84

Table 4.3.6 Descriptive statistics for the overall tool evaluation index (N=104)
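The index construction is a straightforward weighted sum of the two factor scores. The weights and scores in this sketch are illustrative placeholders, not the fitted values from the survey:

```python
def tool_index(usefulness_score, ease_score, w_usefulness, w_ease):
    # Weighted sum of the two factor scores; the weights are the
    # percentages of total variance explained by each factor
    return w_usefulness * usefulness_score + w_ease * ease_score

# Hypothetical weights (percent variance explained) and item-average scores
print(tool_index(4.0, 3.5, 40, 25))  # 40*4.0 + 25*3.5 = 247.5
```

With percentage weights and 1-5 item averages, index values land in the low hundreds, consistent with the range of means in Table 4.3.6.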

In order to determine whether there are any differences among the five communication tools investigated in this research, a general linear model with repeated measures was used, and difference contrasts were computed. As shown in Table 4.3.7, there are significant differences among all tools with the exception of Feedback Form and Bulletin Board, which cannot be distinguished from each other from the viewpoint of the overall evaluation index.

Tool comparison                     Significance
Bulletin Board vs. Feedback Form    .391
Email vs. Bulletin Board            .000
Discussion List vs. Email           .000
Hyperlink vs. Discussion List       .000

Table 4.3.7 Tool comparison using the overall evaluation index

Therefore, we can conclude that both Bulletin Board and Feedback Form will have low evaluation scores from potential users. Email will be evaluated higher, and will be considered significantly better than both Bulletin Board and Feedback Form from the point of view of ease of use and usefulness. Discussion List will be significantly better than Email, and therefore than Bulletin Board and Feedback Form. Finally, Hyperlink will have the highest evaluation, and will be significantly better than the other communication tools.

Why do we obtain such an ordering of the communication tools? One reason is that these tools offer different degrees of interactivity and controllability, as well as one-way vs. two-way communication. For example, it might seem strange that email is not evaluated very highly. However, email is not very controllable, since a user has no control over the other party. Moreover, an email might not get a response soon, or at all, which decreases its usefulness and ease of use. A recent study conducted by Jupiter Communications indicates that the majority of emails from website visitors receive late or no responses at all (see Figure 4.3.1).

Figure 4.3.1 Response time to e-mailed questions (Source: http://thestandard.net/metrics/display/0,1283,780,00.html)

Discussion lists, on the other hand, are probably perceived as more useful and easier to use, since website visitors can view all the ongoing discussions and decide whether they are interested in any of them and in which one to participate.

In order to verify whether the index is a useful measure of tool usage, further analyses were performed.

Tool             Correlation of overall evaluation index with self-reported usage
Feedback Form    0.659
Bulletin Board   0.455
Email            0.459
Discussion List  0.522
Hyperlink        0.627

Table 4.3.8 Correlation of overall evaluation index with self-reported usage, by tool


As Table 4.3.8 shows, the overall index is correlated with self-reported usage; we can therefore conclude that users see different value in different types of communication tools, and will use them accordingly.

4.3.5 Developing a metric

The overall evaluation index proposed here thus seems to be an appropriate basis for evaluating website communication. However, in order to use this ranking of website communication tools for evaluation purposes, a metric needs to be built. This metric assigns a unique number to a website and enables the comparison of two or more websites based on this number.

The metric proposed in this research is based on the weighted sum of the numbers of communication tools accessible on a specific website. The lowest-ranking tools, feedback form and bulletin board, each receive a weight of one, since it was established previously that they do not differ significantly in terms of usefulness and ease of use. Email receives a weight of 2, discussion list a weight of 3, and hyperlink, the highest-scoring communication tool, a weight of 4. The final formula for a website score is presented below:


Website_score = Σ wi * ni,  for i ∈ {BB, FF, E, DL, H}

In the above formula, i is the communication tool (where BB stands for bulletin board, FF for feedback form, E for email, DL for discussion list, and H for hyperlink), wi is the weight of communication tool i, and ni is the number of communication tools of type i found on the website.

This formula ensures that the website score is a positive number and that comparison among websites is possible. In fact, a website's score is not meaningful in itself; its value comes only from comparing it with the scores of other websites or of various versions of the same website. Such a metric can therefore be used as an instrument for both website comparison and website development.
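The scoring rule translates directly into code. The weights below are the ones established empirically above (bulletin board and feedback form 1, email 2, discussion list 3, hyperlink 4); the dictionary-based interface is an implementation choice for this sketch:

```python
# Tool weights from the empirical ranking
WEIGHTS = {"BB": 1, "FF": 1, "E": 2, "DL": 3, "H": 4}

def website_score(counts: dict) -> int:
    # counts maps a tool code to the number of such tools on the site
    return sum(WEIGHTS[tool] * n for tool, n in counts.items())

# 1 bulletin board, 1 email link and 3 hyperlinks
print(website_score({"BB": 1, "E": 1, "H": 3}))  # 1*1 + 2*1 + 4*3 = 15
```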

As an example, if one can access, from a specific website, 1 bulletin board, 1 email and 3 hyperlinks, the overall score will be:

Website_score = 1*1 + 2*1 + 4*3 = 15

A generalization of this model treats each communication tool in a website as a dimension in a 5-dimensional space. That is, a website would be uniquely characterized by a score vector with 5 elements instead of a single number. For example, for a website with 1 bulletin board, 1 email and 3 hyperlinks the score would be:

                 Bulletin Board   Discussion List   Email   Feedback Form   Hyperlink
Website_score    1*1              3*0               2*1     1*0             4*3

Alternatively, the score would be written: Website_score = [1, 0, 2, 0, 12]. When the scores are no longer numbers but vectors, a generalized distance function is needed to compare two websites' scores.viii When a vectorial representation and a generalized distance are chosen for handling the websites' scores, the instrument can perform more nuanced comparisons. For example, consider two sites having the following scores:

                   BB   DL   E   FF   H
Website_1_score     1    0   2    1   48
Website_2_score     0    6   2    0   12

Website_1 can then be considered a website tuned for persuasion, since it has a score of 48 for hyperlinks and low scores for the reactive components. Website_2, on the other hand, is rather reactive and community oriented, since it has a low score for hyperlinks and high combined scores for discussion lists and email.
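The vectorial score and a generalized distance can be sketched as follows. The thesis leaves the choice of distance function open; plain Euclidean distance is used here purely for illustration:

```python
import math

WEIGHTS = {"BB": 1, "DL": 3, "E": 2, "FF": 1, "H": 4}
ORDER = ["BB", "DL", "E", "FF", "H"]

def score_vector(counts: dict) -> list:
    # Weighted count per tool, as a 5-element vector in tool order
    return [WEIGHTS[t] * counts.get(t, 0) for t in ORDER]

def distance(v1, v2) -> float:
    # Euclidean distance, one possible generalized distance function
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

print(score_vector({"BB": 1, "E": 1, "H": 3}))   # [1, 0, 2, 0, 12]

site1 = [1, 0, 2, 1, 48]   # persuasion-oriented example above
site2 = [0, 6, 2, 0, 12]   # community-oriented example above
print(round(distance(site1, site2), 2))
```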


4.3.6 Observations about the metric

The formula that gives the websites' scores is an increasing linear function, with only one unit between the weights of consecutively ranked tools. This means that the more links a website has, the higher its score, and each added communication tool increases the score by the weight of that tool. For example, if a hyperlink is added to a website, the score of the website grows by 4 units. However, a model closer to reality would generate a score given by an increasing nonlinear function whose growth levels off after a threshold.ix Such a model could be described by the following formula:

Nonlinear_and_Bound_Score = 1 - e^(-Website_score)

The above formula starts at 0 for a Website_score of 0 (i.e. for a website with no communication tools) and asymptotically approaches 1 as the score of the website increases. This improved formula could not be applied in full because of limitations in the available computing precision: for scores above 36, working with 40-decimal scientific numbers in Excel 97, the formula returns exactly 1.x Since the mean of the surveyed websites' scores is 49.4 and the standard deviation is 30.67, fewer than half of the scores can be meaningfully transformed.
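The bounded score is a one-liner in any environment with floating-point arithmetic. Note that saturation to exactly 1 still occurs in IEEE double precision, just at slightly higher raw scores than the threshold observed in Excel 97:

```python
import math

def bounded_score(website_score: float) -> float:
    # 1 - e^(-score): 0 for a site with no tools, approaching 1
    # asymptotically as the raw score grows
    return 1.0 - math.exp(-website_score)

for s in (0, 5, 15, 40):
    print(s, bounded_score(s))
```

For a raw score of 40, e^(-40) is already smaller than the spacing of doubles near 1, so the result rounds to exactly 1.0, mirroring the saturation described above.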


For a vectorial representation of the websites' scores, the above formula can be applied to each component of the vector, i.e. to each communication tool considered. Further research will address the application of the nonlinear score formula to the 5-dimensional website score.

5. Application: Evaluating Political Websites

The reasons for deciding to evaluate political websites, as opposed to personal or commercial websites, are twofold. The first reason is timing: the November 1998 U.S. elections made an unprecedented number of politicians consider the Internet, besides traditional mass media, as support for their campaigns. The second reason is the gap between the elections' social value and the technical and financial means available to politicians. Indeed, a methodology for designing effective Internet communication tools can improve politicians' odds of success, especially for challengers who do not have ready access to their electorate.

5.1 Selection Methodology

From November 15, 1998 to December 1, 1998, a total of 50 websites were randomly selected for the website sample. The websites were selected from The Election Connection website at http://www.gspm.org/electcon/. The Election Connection is a list of campaign websites, organized by state, established by candidates for the U.S. House of Representatives, U.S. Senate and governorships in the 1998 elections, and provided by the Graduate School of Political Management at The George Washington University.

The individual websites were selected state by state. For each state, a random number was generated, less than or equal to the number of candidates' websites. The random number was generated with the RAND function in Microsoft Excel 97. RAND returns an evenly distributed random number greater than or equal to 0 and less than 1. As a number between 0 and 1 is not directly useful for selecting numbers between X and Y, with 1 ≤ X < Y, the result of the RAND function has to be scaled and then rounded up to the next higher integer. To generate a random number between X and Y, the following formula was used: RAND() * (Y - X) + X. The website whose order on the webpage corresponded to the resulting number was then included in the sample. For example, the State of California had 105 candidates' websites, so the formula was RAND() * (105 - 1) + 1. The result of the formula was 19.76, and thus the number indicating the website was 20. The 20th website belonged to Charles Ball, a Republican nominee for the United States House of Representatives from California's Tenth Congressional District.
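The Excel selection procedure translates directly into code. The sketch below mirrors RAND() * (Y - X) + X with X = 1, followed by rounding up; the seed and the 1000-draw check are arbitrary choices for this illustration:

```python
import math
import random

def pick_site(num_sites: int, rng: random.Random) -> int:
    # RAND() * (Y - X) + X with X = 1, Y = num_sites,
    # then rounded up to the next higher integer
    value = rng.random() * (num_sites - 1) + 1
    return math.ceil(value)

# The California example: a raw value of 19.76 selects website 20
print(math.ceil(19.76))

rng = random.Random(1998)
picks = [pick_site(105, rng) for _ in range(1000)]
print(min(picks), max(picks))  # every pick falls between 1 and 105
```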


5.2 Websites' Inventory

For each selected website, the number and type of communication tools included on, or accessible directly (i.e. in one click) from, the first web page of the site were recorded. The types of communication tools included in this research are Bulletin Board, Discussion List, Email, Feedback Form, and Hyperlink. With the exception of Hyperlink, the communication tools are usually accessible by following a link present on the web page. Therefore, whenever the type of a communication tool could not be determined just by examining the first page, the corresponding hyperlinks were followed. For example, a feedback form can be announced on the first web page by a hyperlink, and only after the hyperlink is clicked is its type revealed as a feedback form. A Hyperlink, in turn, leads the browser to another webpage. The counting of communication tools also included redundant links, i.e. links with the same destination web page. For example, there were four hyperlink and one email communication tools on the website of Charles Ball, the Republican nominee mentioned above. Figure 5.2.1 presents the first page of this candidate's website. The CONTACT US communication tool at the bottom of the page is of type Email, a fact revealed only after the link is followed.



Figure 5.2.1 Example of a candidate webpage (source: http://www.charlesball.org)

5.3 Results

The survey of the political candidates' websites is summarized in Figure 5.3.1. The rows include the state, the Internet address, and the number of each type of communication tool for each candidate's website, as well as the total website score based on the metric established in section 4.

Nr  State            Address                                Score
1   Alabama          http://www.donsiegelman.com/           34
2   Alaska           http://www.lindauer.org/               56
3   Arizona          http://www.kolbe4congress.com/         63
4   Arkansas         http://www.huckabee.org/               96
5   California       http://www.charlesball.org/            18
6   California       http://www.boxer98.org/1998.htm        40
7   California       http://www.dougose.com/                26
8   California       http://www.fong98.org/                 58
9   Colorado         http://www.norasco.net/kirkp98/        26
10  Connecticut      http://www.nielsen98.com/              33
11  Florida          http://www.friendsofbobgraham.org/     38
12  Florida          http://www.jeb.org/                    40
13  Georgia          http://www.newt.org/                   52
14  Hawaii           http://www.lingle.org/                 127
15  Idaho            http://www.helenchenoweth.org/         58
16  Illinois         http://www.georgeryan.org/             27
17  Indiana          http://www.helmke.org/                 30
18  Iowa             http://www.nussleforcongress.org/      47
19  Kansas           http://www.jimclark98.com/             70
20  Kentucky         http://www.baesler98.org/              22
21  Louisiana        http://www.cooksey.org/                22
22  Maine            http://www.tomallen.org/               41
23  Maryland         http://www.glothforsenate.org/         26
24  Massachusetts    http://www.tierney98.org/              30
25  Michigan         http://www.munsell98.org/              13
26  Minnesota        http://206.147.215.114/newinski/       48
27  Missouri         http://www.heckemeyer.com/             54
28  Montana          http://www.rickhill.org/index.shtml    58
29  Nebraska         http://www.leeterry.com/default1.htm   36
30  Nevada           http://www.neal98.org/                 175
31  New Hampshire    http://www.jaylucas.org/               25
32  New Jersey       http://www.larryschneider.com/         63
33  New Mexico       http://www.johnson98.com/index5.htm    84
34  New York         http://www.vallone98.com/              95
35  North Carolina   http://www.lfaircloth.com/             96
36  Ohio             http://www.taft98.org/                 82
37  Oklahoma         http://nickles98.com/                  46
38  Oregon           http://johnlim.com/                    30
39  Pennsylvania     http://www.hoeffel98.com/              37
40  Rhode Island     http://www.myrthyork.com/              16
41  South Carolina   http://www.beasley98.com/              78
42  South Dakota     http://www.e-orchard.com/schmidt/      40
43  Tennessee        http://www.wampcongress.org/           32
44  Texas            http://www.mauro.org/                  18
45  Utah             http://www.utahgop.com/hansen/         22
46  Utah             http://www.chriscannon.org/            24
47  Vermont          http://www.leahy98.com/                37
48  Virginia         http://www.erols.com/demarism/         76
49  Washington       http://www.pattymurray98.com/          55
50  Wisconsin        http://www.ryan98.org/                 50

TOTAL:   Bulletin Board 13, Discussion List 2, Email 54, Feedback Form 83, Hyperlink 565; total score 2470
AVERAGE: Bulletin Board 0.26, Discussion List 0.04, Email 1.08, Feedback Form 1.66, Hyperlink 11.3; average score 49.4

Figure 5.3.1 Websites' Inventory


As the TOTAL row in Figure 5.3.1 shows, across all 50 surveyed websites there are 13 links to bulletin boards, 2 links to discussion lists, 54 links to email, 83 links to feedback forms, and 565 hyperlinks. Figure 5.3.2 shows the percentage of each type of communication tool in the surveyed sample.

Figure 5.3.2 The percentage of each communications tool in the surveyed sample of websites.

The AVERAGE row in Figure 5.3.1 shows the characteristics of a generic website reflecting all 50 surveyed websites. All the values in this row are obtained by dividing the corresponding totals by 50. In other words, the website closest to all 50 sites would have 0.26 links to bulletin boards, 0.04 links to discussion lists, 1.08 links to email, 1.66 links to feedback forms and 11.3 hyperlinks.


The rightmost column in Figure 5.3.1 shows the scores of the individual websites as computed with the formula in section 4. A graphical representation of the scores, by website, is shown in Figure 5.3.3 below.

Figure 5.3.3. Website scores by website

5.4. Interpretation of Results

According to the empirical results presented in Table 4.3.6, the overall reactive communication on the political websites is not in concordance with users' expectations. This discordance is summarized in Table 5.4.1. The first row of the table reproduces the ranking of website communication tools suggested by the empirical results obtained in Chapter 4. The second row is obtained from the AVERAGE row of Figure 5.3.1; the numbers in parentheses are the average counts rounded to the nearest integer. For example, 0.26 links to bulletin boards became 0, whereas 1.66 links to feedback forms became 2.

Suggested Ranking     Hyperlinks       Discussion Lists    Email      Bulletin Boards      Feedback Forms
Observed Ranking (#)  Hyperlinks (11)  Feedback Forms (2)  Email (1)  Bulletin Board (0)   Discussion List (0)

Table 5.4.1 Rankings of reactive communication tools in websites (the higher rankings are to the left)

In Table 5.4.1, the suggested and observed rankings differ significantly. On the surveyed websites there are almost no discussion lists and relatively too many feedback forms. For a purely technical reason, feedback forms, as opposed to discussion lists, both protect the user's identity and are much easier to use, so it might be argued that the owners of the websites simply wanted to provide their audience with the most convenient means of communication. However, bulletin boards are almost identical to feedback forms in their perceived ease of use and usefulness, with the added advantage that a user can see what others have posted, whereas with feedback forms a user has no access to others' messages.

The lack of discussion lists may be attributed to the lack of time and staff on the politicians' side. Indeed, a discussion list dedicated to a politician would have to be a lively space for the debate of political issues. The potential debate in a discussion list might catch the politician off-guard or touch on sensitive issues the politician does not want to address. And even if the politician allocates the time to enter online debates, her time may not be well spent, since some participants may not belong to her constituency. In fact, of all 50 surveyed websites, only one has links (two) to discussion lists, and these are not even directly associated with the politician's campaign but with the Republican Party.xi

The presence of feedback forms at the expense of bulletin boards or discussion lists can have several explanations. As with discussion lists, it may be argued that a feedback form does not expose the politician to her constituency's critiques while it still provides a communication channel. Another reason politicians may prefer feedback forms to other website communication tools is their functionality: they offer an effective venue for volunteering. Indeed, as suggested by a study released by a Republican consulting firm, Campaign Solutions of Alexandria, Va., 55% of the 900 respondents who signed up online with political websites said they had never volunteered to help in a campaign before. Also, 91% of the Internet-surveyed volunteers said they had not been recruited directly by the campaigns.xii

The relatively high number of hyperlinks, and the parallelism between the suggested and observed rankings for this tool, reflect hyperlinks' popularity and the very nature of website communication. The average of one email link per site, and the parallelism between the suggested and observed rankings for this tool, indicate that website owners understand this communication tool. Overall, email is neither excessively used (two links would already be redundant) nor disproportionately represented among the other tools within the websites.

Table 5.4.2 and Figure 5.4.1 present descriptive statistics of the individual scores for the surveyed websites. The distribution of scores is skewed to the right, which means that there are more websites with scores below the mean of 49.4 than above it. This is further illustrated by the minimum score, which lies only 5.74 score units below (Mean - Std. Deviation), while the maximum score lies 94.94 score units above (Mean + Std. Deviation). The conclusion one can draw is that websites' owners populate their websites' individual webpages with a rather limited number of links. A possible reason for such economy lies in the functional grouping of links on separate webpages: not all links start from the same webpage when they point to different functions of the website.
Descriptive Statistics

                     N     Minimum   Maximum   Mean      Std. Deviation
Websites Scores      50    13.00     175.00    49.4000   30.6674
Valid N (listwise)   50

Table 5.4.2. Descriptive statistics for the websites scores.
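The shape of the distribution can be checked numerically. The following Python sketch computes statistics of the kind reported in Table 5.4.2 together with a moment-based skewness coefficient; the scores used here are hypothetical stand-ins, since the 50 observed scores are reported only in aggregate.

```python
import statistics

def describe(scores):
    """Descriptive statistics of the kind reported in Table 5.4.2."""
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)  # sample standard deviation, as in SPSS
    n = len(scores)
    # Moment-based skewness: positive when the right tail is longer.
    skew = sum((x - mean) ** 3 for x in scores) / (n * sd ** 3)
    return {"n": n, "min": min(scores), "max": max(scores),
            "mean": round(mean, 2), "sd": round(sd, 2), "skew": round(skew, 2)}

# Hypothetical scores mimicking the shape described in the text:
# most values below the mean, with one long right tail.
scores = [13, 20, 25, 30, 35, 40, 45, 50, 55, 60, 70, 175]
stats = describe(scores)
print(stats)
assert stats["skew"] > 0  # right-skewed: mass below the mean, long upper tail
```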


[Histogram: x-axis "Websites Scores" in bins of 10 from 10.0 to 180.0; y-axis "Frequency"; Std. Dev = 30.67, Mean = 49.4, N = 50.00.]

Figure 5.4.1. The histogram of the websites scores.

6. Conclusions

This work investigates the applicability of the technology acceptance model (TAM) developed by Davis (1989) to the evaluation of website reactive communication. The results indicate that TAM can be used for evaluating website communication tools. While some communication tools, such as bulletin boards and feedback forms, cannot be distinguished from the point of view of their usefulness and ease of use, the others (email, discussion lists and hyperlinks) have different perceived usefulness and ease of use scores, which enables their ranking. These results show that a TAM-based index for website communication can differentiate between tools, and is therefore a viable metric.
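As an illustration of how a TAM-based index can differentiate between websites, the Python sketch below scores a site as a weighted count of its communication-tool links. The weights are placeholders ordered according to the ranking established in this work, not the calibrated scores of the study itself.

```python
# Hypothetical per-tool weights following the ranking established above:
# email and hyperlinks rank highest, discussion lists next, while bulletin
# boards and feedback forms are indistinguishable and rank lowest.
# The exact values are placeholders, not this study's calibrated scores.
TOOL_WEIGHTS = {
    "email": 4.1, "hyperlink": 4.0, "discussion list": 3.5,
    "bulletin board": 3.0, "feedback form": 3.0,
}

def website_score(tool_counts: dict) -> float:
    """TAM-based index: weighted sum of communication-tool links on a site."""
    return sum(TOOL_WEIGHTS[tool] * n for tool, n in tool_counts.items())

# A site with one email link, ten hyperlinks and one feedback form:
site_a = {"email": 1, "hyperlink": 10, "feedback form": 1}
# A site relying only on a feedback form and a bulletin board:
site_b = {"feedback form": 1, "bulletin board": 1}

assert website_score(site_a) > website_score(site_b)
print(website_score(site_a), website_score(site_b))
```

Under these assumed weights, a site favoring email and hyperlinks scores higher than one offering only a feedback form and a bulletin board, which is the kind of differentiation the index is meant to capture.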

The results also suggest that electronic communication tools are not equal, and further research is therefore needed to understand why these differences exist. In this respect, a very useful research avenue is the validation of the communication tools' ranking by evaluating websites containing different combinations of such tools. It is expected that websites with few email links and discussion lists, but with bulletin boards and feedback forms, will not be perceived as very useful by users. Such a result would offer more support for the findings obtained here.

The implications of this research for practice are twofold. First, the metric proposed here can be used to evaluate existing websites. Second, the ranking of website communication tools can be used to design effective websites. As the application of the TAM-based metric to political websites shows, the website design often does not support the goal of the website's owner.xiii This research shows that the websites for the 1998 Campaign were not as complete a forum as they were supposed to be, and that many improvements need to be made for these websites to enable full communication with the electorate.


At the end of this work, it can be concluded that it is possible to have formal instruments for examining Internet communications. By underlining the inner features of Internet communications, formal instruments help one distinguish between art and science when considering tools for Internet communications. As the limitations of the present instrument were noted throughout this work, further efforts to refine it are not only desirable but also necessary. Such developments ought to proceed in at least two directions. First, at a general level, specific features that account for Internet communication technologies, as opposed to some form of work productivity, would have to be considered.xiv For example, besides perceived ease of use and perceived usefulness, perceived controllability of Internet communication tools might well emerge as an important criterion for assessing technology adoption. Second, specific communication environments may be considered when developing a more accurate instrument. In other words, online political communicators might deem it necessary to have a different instrument from online journalists'. Such distinctions would make it easier to incorporate domain-specific knowledge into a given instrument.


7. References
Churchill, G. A., Jr. A Paradigm for Developing Better Measures of Marketing Constructs, Journal of Marketing Research, 16, 1, 1979, pp. 64-73
Cook, T. D. and Campbell, D. T. Quasi-Experimentation: Design and Analysis Issues for Field Studies, Houghton Mifflin Company, 1979, pp. 37-94
Davis, F. D. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology, MIS Quarterly, 13, 3, 1989, pp. 319-340
Hendrickson, A. R., Massey, P. D. and Cronan, T. P. On the Test-Retest Reliability of Perceived Usefulness and Perceived Ease of Use Scales, MIS Quarterly, 17, 2, 1993, pp. 227-230
Peter, J. P. Construct Validity: A Review of Basic Issues and Marketing Practices, Journal of Marketing Research, 18, 1981, pp. 133-145
Rosenthal, R. and Rosnow, R. L. Essentials of Behavioral Research: Methods and Data Analysis, McGraw-Hill Series in Psychology, 1991, pp. 46-65
Szajna, B. Software Evaluation and Choice: Predictive Validation of the Technology Acceptance Instrument, MIS Quarterly, 18, 3, 1994, pp. 319-324


Appendix 1
1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Email has made communications more convenient for me.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

2. How often do you choose email to communicate?
Very often  Often  Neither often nor seldom  Seldom  Very seldom

3. How difficult or easy has your communication become due to the use of email?
Much harder  Harder  Neither harder nor easier  Easier  Much easier

4. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Email has become a very useful communication tool for me.
Strongly disagree  Disagree  Neither disagree nor agree  Agree  Strongly agree

5. How would you best describe your learning experience with email?
Very easy to learn  Easy to learn  Neither easy nor difficult to learn  Difficult to learn  Very difficult to learn

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how email communication works.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

7. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with email.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

A feedback form enables one to send feedback to the website without using any additional information or tools--email address or email program.

Please Enter:
Your name:
Your e-mail address:
Text of your message: (max. 1500 characters)
[Submit]  [Clear the form]
Please click the submit button only ONCE.

1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Feedback forms made communications more convenient for me.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

2. How often do you choose feedback forms to communicate?
Very often  Often  Neither often nor seldom  Seldom  Very seldom

3. How difficult or easy has your communication become due to the use of feedback forms?
Much harder  Harder  Neither harder nor easier  Easier  Much easier

4. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Feedback forms have become very useful communication tools for me.
Strongly disagree  Disagree  Neither disagree nor agree  Agree  Strongly agree

5. How would you best describe your learning experience with feedback forms?
Very easy to learn  Easy to learn  Neither easy nor difficult to learn  Difficult to learn  Very difficult to learn

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how feedback form communication works.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

7. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with feedback forms.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

A bulletin board enables one to access other people's comments and to post his or her own. For example, a bulletin board is set for the first year CCT students who take CCTP-505.

Have Republicans in Congress abandoned their tax-cut ideology because of politics? Should Congress use the budget surplus to cut taxes this year? If so, by how much, and who should the cuts benefit? Post your comments below. [SUBMIT RESPONSE]

10/1/98 Steve: Yes, another example of a Bill Clinton lie, a middle class tax cut. Oops, sorry - he wasn't under oath. Oh yea - that means nothing, according to his appologists, anyway. Didn't everyone get jumping ugly over George Bush saying "read my lips?".

10/1/98 DZ dzink@rocketmail.com: Mr. Moore: The one thing that is absolutely unacceptable is reducing taxes on investment income. Why are we subsidizing capital? 80% of all investment income is earned by people earning over $200,000 per year, a group that neither needs nor deserves any further subsidy.

10/1/98 Birdman: Economics 101. I tell you whats not in ECON 101. The unacceptable Death tax. Whats with this Death Tax. Your Mom dies, and theres the IRS looking for their handout. Now theres a sick thought.

1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Bulletin boards have made communicating more convenient for me.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

2. How often do you choose bulletin boards to communicate?
Very often  Often  Neither often nor seldom  Seldom  Very seldom

3. How difficult or easy has your communication become due to the use of bulletin boards?
Much harder  Harder  Neither harder nor easier  Easier  Much easier

4. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Bulletin boards have become very useful communication tools for me.
Strongly disagree  Disagree  Neither disagree nor agree  Agree  Strongly agree

5. How would you best describe your learning experience within bulletin boards?
Very easy to learn  Easy to learn  Neither easy nor difficult to learn  Difficult to learn  Very difficult to learn

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how bulletin boards work.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

7. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with bulletin boards.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

A discussion list is an environment that makes it possible to communicate on given topics by email. For example, cct-l@listproc.georgetown.edu is a discussion list where CCT students and faculty communicate.

1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Discussion lists have made communications more convenient for me.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

2. How often do you choose to communicate within discussion lists?
Very often  Often  Neither often nor seldom  Seldom  Very seldom

3. How difficult or easy has your communication become due to the use of discussion lists?
Much harder  Harder  Neither harder nor easier  Easier  Much easier

4. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Discussion lists have provided me very useful communication tools.
Strongly disagree  Disagree  Neither disagree nor agree  Agree  Strongly agree

5. How would you best describe your learning experience within a discussion list?
Very easy to learn  Easy to learn  Neither easy nor difficult to learn  Difficult to learn  Very difficult to learn

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how the communication within discussion lists works.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

7. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with communication within discussion lists.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

A hyperlink is an underlined text or an image that enables access to a related webpage.

1. State your opinion about the following statement on a scale from strongly agree to strongly disagree. Hyperlinks have made communications more convenient for me.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

2. How often do you use hyperlinks as an option?
Very often  Often  Neither often nor seldom  Seldom  Very seldom

3. How difficult or easy has your communication become due to the use of hyperlinks?
Much harder  Harder  Neither harder nor easier  Easier  Much easier

4. State your opinion about the following statement on a scale from strongly disagree to strongly agree. Hyperlinks have become very useful tools for me.
Strongly disagree  Disagree  Neither disagree nor agree  Agree  Strongly agree

5. How would you best describe your learning experience with hyperlinks?
Very easy to learn  Easy to learn  Neither easy nor difficult to learn  Difficult to learn  Very difficult to learn

6. State your opinion about the following statement on a scale from strongly agree to strongly disagree. I clearly understand how hyperlinks work.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree

7. State your opinion about the following statement on a scale from strongly agree to strongly disagree. It is easy for me to become skillful with hyperlinks.
Strongly agree  Agree  Neither agree nor disagree  Disagree  Strongly disagree


Appendix 2

Descriptive statistics for the questionnaire items, by communication tool:

Bulletin board (b)
Item                           N     Min.   Max.   Mean     Std. Dev.
(b) convenient                 103   1.00   5.00   2.9029   .7477
(b) easier communication       104   2.00   5.00   3.1635   .5234
(b) useful                     104   1.00   4.00   2.7885   .7330
(b) easy to learn              104   2.00   5.00   3.5962   .6900
(b) clear and understandable   104   1.00   5.00   3.5192   .9136
(b) easy to become skillful    103   2.00   5.00   3.5534   .6963

Feedback form (f)
Item                           N     Min.   Max.   Mean     Std. Dev.
(f) convenient                 102   1.00   5.00   3.0490   .8251
(f) easier communication       103   2.00   5.00   3.1456   .4932
(f) useful                     104   1.00   5.00   2.8558   .8524
(f) easy to learn              103   1.00   5.00   3.7282   .8766
(f) clear and understandable   104   1.00   5.00   3.4904   1.0703
(f) easy to become skillful    103   2.00   5.00   3.7087   .7089

Discussion list (d)
Item                           N     Min.   Max.   Mean     Std. Dev.
(d) convenient                 103   1.00   5.00   3.2233   .8394
(d) easier communication       103   1.00   5.00   3.3398   .7989
(d) useful                     103   1.00   5.00   3.3786   .9087
(d) easy to learn              103   2.00   5.00   3.6796   .7948
(d) clear and understandable   103   1.00   5.00   3.5825   .9953
(d) easy to become skillful    103   1.00   5.00   3.6311   .7408

Email (e)
Item                           N     Min.   Max.   Mean   Std. Dev.
(e) convenient                 104   1      5      4.04   1.07
(e) easier communication       104   2      5      4.13   .70
(e) useful                     104   2      5      4.35   .79
(e) easy to learn              104   2      5      4.26   .67
(e) clear and understandable   104   1      5      3.98   .89
(e) easy to become skillful    104   2      5      4.00   .74

Hyperlink (h)
Item                           N     Min.   Max.   Mean   Std. Dev.
(h) convenient                 104   1      5      3.77   .95
(h) easier communication       104   2      5      3.94   .72
(h) useful                     104   2      5      4.13   .70
(h) easy to learn              104   1      5      4.33   .81
(h) clear and understandable   104   1      5      4.04   .99
(h) easy to become skillful    103   2      5      4.12   .77

Factor analysis

Checking unidimensionality for each scale, for each tool:

Items                          Loadings     Items                          Loadings
(e) useful                     .898         (e) easy to become skillful    .834
(e) convenient                 .803         (e) clear and understandable   .751
(e) easier communication       .784         (e) easy to learn              .579


Appendix 3 xv

Email--ei for i=1 to 7 is the question number i in the email section of the questionnaire:
CRT 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 e1 3 5 4 5 4 5 4 5 4 2 4 4 4 2 2 2 4 4 3 5 5 5 5 5 4 4 4 4 2 5 2 3 2 4 4 4 4 3 4 2 4 5 4 3 e2 2 4 5 5 5 5 4 4 5 5 5 5 2 2 3 5 3 4 4 5 4 5 5 5 5 5 5 4 4 3 4 5 4 4 5 5 5 5 2 4 2 5 4 4 e3 3 5 3 5 4 4 3 4 4 5 4 5 5 3 3 4 3 4 4 5 5 5 5 5 5 5 5 4 4 4 3 4 4 2 4 5 4 4 4 3 3 4 4 4 e4 4 5 4 5 5 5 4 5 4 4 4 5 5 3 2 3 4 4 4 5 5 5 5 5 4 5 5 5 4 4 4 4 4 4 4 4 5 5 3 2 3 5 5 4 e5 4 5 4 5 3 4 4 4 4 5 4 4 5 4 2 5 4 3 4 4 4 4 5 5 5 5 5 4 4 3 5 5 4 5 4 5 4 3 5 4 4 4 4 5 e6 4 2 3 2 5 4 3 3 4 4 5 3 4 4 4 5 3 4 5 4 4 4 4 5 4 3 4 4 4 3 4 4 4 3 4 2 4 1 5 2 3 4 4 5 e7 4 2 3 3 4 4 3 4 4 4 4 4 5 4 4 4 4 2 3 5 4 4 5 5 5 3 4 4 4 4 4 4 4 3 4 4 4 2 5 3 4 4 4 5

67 CRT 45 46 47 48 49 50 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 e1 5 4 4 4 4 4 5 4 3 4 5 5 5 5 5 4 4 5 5 5 4 2 5 5 5 5 5 3 3 5 5 1 1 2 4 5 5 4 5 1 5 5 5 5 5 4 4 4 4 3 e2 5 4 4 5 4 4 5 4 5 5 5 4 5 5 4 4 5 5 5 5 4 3 5 5 5 5 5 4 3 5 5 4 5 5 4 5 5 4 5 5 5 5 5 5 5 4 5 5 4 4 e3 5 4 4 3 4 4 5 4 3 4 4 4 4 4 5 4 4 4 5 5 3 3 4 4 5 4 5 4 3 4 5 4 5 4 4 5 4 4 5 4 5 5 4 5 4 4 4 4 4 4 e4 5 4 4 4 4 3 5 5 4 4 5 4 5 4 5 4 4 5 5 5 4 2 5 5 5 5 5 4 2 5 5 4 5 2 4 5 4 4 5 4 5 5 4 5 5 4 5 4 4 4 e5 4 4 5 4 4 5 4 4 3 4 5 4 4 5 4 4 3 4 4 4 4 4 4 4 5 5 5 4 4 5 5 5 5 5 5 4 4 4 5 3 5 4 4 5 5 3 5 3 4 5 e6 2 3 4 2 4 5 5 4 3 5 5 4 4 3 4 2 4 5 5 4 5 4 4 5 4 5 4 4 4 5 5 4 4 5 4 3 4 4 4 4 5 5 5 5 5 3 4 5 3 4 e7 4 4 4 4 4 4 5 4 3 4 4 4 4 5 3 4 4 4 4 5 4 4 4 4 5 5 4 4 4 5 5 5 5 2 4 3 4 4 4 2 5 4 4 5 4 4 5 5 3 3

68 CRT 45 46 47 48 49 50 51 52 e1 5 4 4 5 4 4 5 5 e2 5 5 5 4 5 5 5 5 e3 3 5 4 5 3 4 4 5 e4 5 5 4 5 4 5 5 5 e5 4 4 5 4 3 5 4 5 e6 5 4 4 5 5 4 4 4 e7 5 4 3 4 4 4 4 5
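Tables in this format can be summarized mechanically. The Python sketch below computes per-item means from rows shaped like those above, skipping #NULL! entries; the three rows are illustrative, not the full dataset.

```python
def item_means(rows):
    """Per-item means for survey rows of the form [crt, e1, ..., e7],
    where the string '#NULL!' marks a missing response."""
    n_items = len(rows[0]) - 1
    sums = [0.0] * n_items
    counts = [0] * n_items
    for row in rows:
        for i, value in enumerate(row[1:]):
            if value != "#NULL!":  # skip missing responses
                sums[i] += int(value)
                counts[i] += 1
    return [round(s / c, 2) for s, c in zip(sums, counts)]

# Three illustrative rows (crt, then seven item responses):
rows = [
    [1, 3, 2, 3, 4, 4, 4, 4],
    [2, 5, 4, 5, 5, 5, 2, 2],
    [3, 4, "#NULL!", 3, 4, 4, 3, 3],
]
print(item_means(rows))  # means of e1..e7; missing values are skipped
```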

Feedback Form--fi for i=1 to 7 is the question number i in the feedback form section of the questionnaire:
CRT 1 2 3 4 5 6 7 8 9 f1 3 3 3 3 2 2 3 3 3 f2 2 2 1 2 1 2 3 1 3 1 2 3 1 1 1 1 1 3 3 1 1 3 1 3 1 1 3 1 3 3 1 2 1 1 f3 3 3 3 3 3 3 3 3 3 3 4 3 3 3 2 2 3 3 4 3 3 4 3 4 3 3 4 3 3 3 3 3 3 3 f4 3 3 3 3 2 3 3 1 3 f5 3 3 3 3 3 3 3 3 3 f6 f7 2 3 3 3 2 3 3 3 3 3 4 4 3 3 2 3 3 #NULL ! 3 3 1 3 4 3 4 4 1 3 3 4 4 3 2 2 2 5 3 5 3 1 4 4 4 3 4 3 5 1 4 4 4 3 3 3 3 4 3 3 3 3 4 4 3 3 4 4 5 3

10 3 11 3 12 3 13 3 14 #NULL ! 15 2 16 1 17 3 18 3 19 3 20 3 21 3 22 5 23 3 24 3 25 3 26 3 27 3 28 3 29 1 30 1 31 2 32 3 33 3 34 3

3 3 3 3 3 3 3 4 3 #NULL ! 1 1 2 2 1 4 3 3 4 4 3 3 3 3 4 5 3 3 4 5 3 3 3 3 4 5 2 3 2 3 3 3 2 4 3 3 1 5 1 3

69 CRT 35 36 37 38 39 40 f1 3 3 3 2 3 3 f2 f3 3 3 1 3 1 3 1 3 1 3 1 #NULL ! 2 3 1 3 2 3 2 3 1 3 1 3 4 4 1 3 3 4 3 2 3 3 2 3 1 2 1 2 2 4 3 3 3 1 3 3 4 3 3 2 2 3 5 2 1 2 2 1 3 3 4 4 3 3 3 4 3 3 3 3 3 3 3 4 3 4 3 3 3 3 4 2 3 3 3 3 5 3 3 3 3 3 3 3 f4 3 3 3 1 3 2 3 3 3 3 1 1 4 3 3 4 3 3 3 4 3 4 3 3 2 3 3 4 3 4 3 3 3 3 4 2 3 4 2 3 5 3 3 3 3 3 3 3 f5 3 4 3 3 5 3 4 3 3 4 3 3 4 3 4 5 5 4 3 3 4 4 3 5 2 4 3 5 4 5 4 4 4 4 4 3 3 4 4 5 5 5 3 5 4 3 4 4 f6 3 2 3 4 4 2 3 3 4 4 1 2 4 2 4 5 5 4 3 5 4 4 4 3 3 4 4 5 1 4 4 4 4 5 3 2 3 3 4 5 5 3 3 4 4 3 4 4 f7 3 3 3 4 4 3 4 4 4 4 3 3 4 2 4 4 3 5 4 4 4 4 3 5 3 4 4 5 4 5 4 4 4 4 4 4 3 4 2 5 5 4 3 4 4 3 4 4

41 3 42 3 43 2 44 3 45 2 46 2 47 4 48 #NULL ! 49 3 50 4 1 3 2 2 3 3 4 5 5 4 6 3 7 3 8 3 9 3 10 3 11 3 12 4 13 3 14 4 15 3 16 2 17 4 18 4 19 4 20 1 21 3 22 4 23 2 24 4 25 5 26 4 27 3 28 3 29 3 30 3 31 3 32 3

70 CRT 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 f1 5 3 2 3 2 2 4 3 3 4 4 4 4 2 2 4 3 4 4 3 f2 3 2 5 3 2 1 2 3 2 3 4 2 5 1 3 3 2 3 4 2 f3 4 3 3 3 3 3 2 3 3 3 4 4 3 3 3 4 3 3 4 3 f4 4 3 3 3 2 1 1 3 1 3 3 2 3 2 3 4 3 3 4 3 f5 4 3 5 5 3 5 5 4 5 4 5 5 5 3 4 4 3 3 4 5 f6 4 4 5 5 4 5 5 3 4 5 3 4 5 2 4 4 4 4 4 4 f7 4 3 5 5 4 5 4 3 5 4 4 4 5 3 3 4 5 3 4 4

Bulletin Board--bi for i=1 to 7 is the question number i in the bulletin board section of the questionnaire:
CRT 1 2 b1 3 3 b2 1 1 1 1 1 1 3 3 3 2 1 3 3 3 3 2 1 3 2 1 1 b3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 2 3 3 4 3 3 b4 3 3 3 1 3 3 3 3 3 3 1 3 3 3 3 2 3 3 2 2 2 b5 3 3 3 3 3 3 3 3 3 3 4 3 3 3 3 4 4 3 4 3 3 b6 b7 2 3 3 #NULL ! 3 3 2 2 3 3 3 3 3 3 3 3 3 3 5 4 3 4 3 3 4 4 3 4 2 3 5 2 3 4 3 3 4 4 3 4 3 3

3 3 4 3 5 3 6 3 7 3 8 3 9 #NULL ! 10 3 11 1 12 3 13 3 14 3 15 3 16 2 17 3 18 3 19 4 20 2 21 3

71 CRT 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 b1 4 3 2 4 3 3 3 3 3 2 3 4 3 3 3 3 2 3 2 3 3 3 3 3 2 3 2 2 3 3 2 2 3 3 2 3 3 5 2 2 4 2 4 3 3 4 3 3 1 3 b2 3 1 2 1 1 2 4 3 1 2 3 1 1 2 1 3 1 1 2 1 2 2 3 3 1 1 1 1 2 2 1 2 2 1 3 1 2 4 1 1 2 3 3 2 2 3 3 2 2 3 b3 3 3 4 3 3 4 4 3 3 3 3 4 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 4 3 3 3 3 3 3 4 5 3 3 3 2 4 3 3 4 3 4 2 3 b4 4 3 3 3 3 4 4 3 3 2 3 3 1 3 2 3 3 3 1 3 3 3 3 3 2 3 3 2 1 3 2 3 2 3 4 3 3 4 2 2 3 2 4 3 3 3 2 3 2 3 b5 4 2 5 3 3 4 4 3 3 4 3 3 3 3 3 3 3 4 3 3 4 3 3 3 4 3 3 4 5 5 4 3 3 3 4 3 5 4 4 3 4 5 4 4 4 4 4 3 4 3 b6 4 3 5 2 3 4 4 4 3 4 3 4 2 3 2 3 4 4 2 2 4 3 3 3 2 3 1 2 5 5 4 3 5 3 4 4 5 4 3 3 4 5 4 4 4 4 4 3 2 3 b7 4 3 5 3 3 4 4 3 3 4 3 4 3 4 3 3 3 4 3 3 4 3 3 3 3 3 3 4 5 4 4 3 4 3 3 3 4 4 4 3 5 4 4 3 4 4 4 3 5 3

72 CRT 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 b1 4 2 4 4 3 2 4 3 3 2 3 4 2 3 3 2 3 3 3 1 2 3 3 3 2 4 2 5 4 3 3 b2 2 1 3 3 2 1 3 2 1 4 2 3 2 1 2 2 1 2 2 1 1 3 2 3 2 3 3 4 2 2 2 b3 3 3 4 4 3 2 3 3 3 4 3 4 4 3 3 3 3 4 3 3 3 3 4 2 3 4 2 3 4 3 3 b4 3 2 4 4 2 2 4 3 3 4 3 4 3 3 4 3 3 2 3 1 3 2 3 2 2 3 2 3 3 3 3 b5 4 4 5 4 4 4 3 4 3 4 4 4 3 5 4 4 3 4 3 3 5 4 5 5 4 4 4 4 3 3 5 b6 4 4 5 4 4 4 3 4 3 4 4 4 3 5 4 4 5 4 3 2 4 4 4 5 2 4 4 5 4 2 4 b7 4 4 5 4 4 2 4 4 3 4 4 4 3 5 4 3 4 4 3 4 3 3 4 5 4 3 4 5 3 3 4

Discussion List--di for i=1 to 7 is the question number i in the discussion list section of the questionnaire:
CRT 1 2 3 4 5 6 7 8 9 10 11 12 13 d1 3 3 3 2 2 3 3 3 3 2 3 3 3 d2 1 1 1 1 1 3 3 3 3 2 3 3 3 d3 3 3 3 3 3 3 3 3 3 4 3 3 3 d4 3 3 3 1 3 3 3 3 3 4 3 3 3 d5 3 3 3 3 3 3 3 3 3 3 3 3 3 d6 2 3 3 3 3 3 3 3 3 4 3 3 4 d7 3 3 3 3 3 3 3 3 3 4 3 3 4

73 CRT d1 d2 d3 d4 d5 d6 d7 14 3 3 3 3 3 3 3 15 3 3 3 3 3 3 3 16 2 3 2 2 4 4 4 17 3 1 3 3 4 4 4 18 3 3 3 3 3 3 3 19 2 1 2 2 3 4 3 20 4 2 3 4 4 4 4 21 3 3 3 3 3 3 3 22 4 3 3 4 4 4 4 23 4 3 4 4 4 4 4 24 5 4 5 5 5 5 5 25 3 1 4 4 2 2 3 26 3 3 3 3 3 1 3 27 2 1 3 3 3 4 3 28 3 1 3 2 3 4 3 29 3 1 3 3 3 2 3 30 3 1 3 3 3 1 3 31 2 2 3 3 4 4 4 32 4 3 4 4 4 4 3 33 3 1 3 3 3 3 3 34 2 1 3 2 3 1 3 35 2 1 3 3 3 2 4 36 3 4 4 4 4 2 4 37 3 3 3 3 3 3 3 38 5 4 4 4 3 5 4 39 3 1 3 4 4 4 4 40 #NULL #NULL #NULL #NULL #NULL #NULL #NULL ! ! ! ! ! ! ! 41 3 2 3 3 3 3 4 42 4 4 4 4 4 4 4 43 3 2 3 3 3 3 3 44 4 4 5 5 5 5 5 45 3 3 3 3 3 3 3 46 2 1 3 2 3 2 3 47 3 2 3 3 3 3 3 48 4 3 2 4 4 3 3 49 3 3 3 3 3 3 3 50 3 2 4 4 5 5 5 1 4 5 4 1 4 5 5 2 2 1 3 3 4 4 4 3 3 2 3 3 3 3 3 4 2 2 3 4 4 5 4 5 4 4 3 4 4 4 3 6 4 5 4 4 5 2 5 7 4 4 5 5 4 4 4 8 3 1 2 2 3 2 3 9 4 4 5 4 4 4 4 10 4 2 4 4 4 4 3 11 3 3 3 3 4 4 4 12 4 3 4 5 5 5 5

74 CRT 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 d1 3 4 4 3 4 4 3 2 3 2 2 4 4 4 1 1 4 3 4 3 4 3 5 4 4 4 5 4 3 3 4 4 2 3 3 4 3 5 3 4 d2 4 2 2 4 3 3 3 2 3 2 1 5 3 3 1 5 3 1 4 3 3 5 2 4 3 4 3 3 4 2 4 2 3 3 3 4 1 4 2 3 d3 4 4 3 3 4 4 3 2 3 3 3 4 4 3 1 2 4 3 3 3 4 4 5 4 4 4 5 4 4 3 4 4 1 2 3 5 3 5 3 3 d4 4 3 3 4 4 4 4 2 3 4 2 4 4 3 1 2 4 3 4 3 5 2 5 4 4 5 5 4 4 3 3 4 2 3 4 5 3 5 3 4 d5 5 4 4 4 4 5 4 2 3 4 4 5 4 4 5 5 4 3 4 4 4 3 5 5 4 5 3 4 5 5 3 5 3 2 4 5 3 4 3 5 d6 5 4 4 4 4 5 4 5 3 4 4 5 3 4 5 2 4 3 4 4 4 4 5 5 4 5 2 3 5 4 3 4 5 2 4 4 3 5 3 4 d7 5 4 4 4 4 4 4 4 3 4 4 5 3 4 4 5 4 3 4 4 4 3 5 4 4 5 3 4 5 3 4 4 3 3 3 5 3 3 3 4

Hyperlink--hi for i=1 to 7 is the question number i in the hyperlink section of the questionnaire:

CRT 1 2 3 4

h1 3 3 4 3

h2 5 1 2 1

h3 5 3 4 3

h4 3 3 3 2

h5 4 3 3 3

h6 4 3 3 2

h7 4 3 4 3

75 CRT 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 1 2 h1 h2 4 4 4 3 4 3 4 2 4 4 5 5 4 4 4 3 4 4 3 1 5 5 3 4 4 4 4 4 4 4 4 4 5 4 5 4 4 5 5 5 4 4 3 4 5 4 5 5 4 4 3 4 4 5 4 3 3 5 4 2 4 4 3 5 4 4 1 5 3 1 4 #NULL ! 3 3 5 5 4 4 3 4 4 3 4 4 4 3 3 4 2 3 2 4 4 3 5 4 h3 4 4 4 4 4 5 2 3 4 3 4 4 4 4 4 5 5 4 5 5 4 4 4 5 4 4 5 4 4 4 5 4 4 4 3 4 3 5 3 4 3 4 4 3 4 4 3 3 h4 4 4 4 4 4 5 4 3 4 3 4 4 4 4 4 5 5 5 5 5 4 4 4 5 4 4 5 4 4 4 5 4 4 4 3 4 3 5 4 4 3 4 3 4 4 4 5 5 h5 3 4 4 4 4 5 5 3 5 3 4 5 4 4 5 5 5 4 5 5 4 5 5 5 4 5 5 4 5 4 5 5 4 4 3 5 3 5 4 5 3 4 4 5 4 5 5 4 h6 3 4 3 2 3 3 4 3 5 1 4 5 4 4 5 5 4 4 5 5 5 1 4 5 4 4 5 4 5 2 4 2 4 5 4 4 h7 4 4 4 4 4 5 3 3 5 3 4 5 4 3 5 5 5 4 5 5 4 4 4 5 4 4 5 3 4 4 5 4 4 4 4 4

3 3 4 5 4 3 5 5 3 3 2 4 4 #NULL ! 3 3 4 4 5 5 5 5 5 4

76 CRT 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 h1 3 5 4 5 5 4 4 4 3 5 4 3 3 4 4 4 5 3 3 3 2 3 3 4 5 3 5 5 3 3 5 4 5 4 5 5 1 4 4 3 4 4 3 4 5 5 5 1 4 4 h2 4 5 4 5 5 5 4 3 4 5 5 2 4 2 5 4 5 4 3 3 1 4 5 5 5 4 5 5 4 4 4 5 5 5 5 5 5 4 5 5 5 5 5 5 5 5 5 4 4 4 h3 3 5 4 5 5 3 4 4 4 4 4 3 4 4 4 3 5 3 4 3 3 3 4 4 4 3 5 4 3 3 4 4 4 3 5 4 5 4 3 5 4 3 3 4 5 5 5 5 3 4 h4 3 5 4 5 5 4 4 4 4 5 4 3 4 4 4 4 5 4 3 3 2 4 4 4 4 4 5 5 4 4 5 4 5 5 5 4 5 4 5 4 4 4 5 4 5 5 5 5 4 4 h5 3 4 5 1 5 5 4 4 4 5 5 5 4 4 4 5 5 4 3 3 4 3 5 4 5 5 5 5 4 4 4 5 5 5 5 5 5 4 5 5 5 5 5 4 5 5 5 4 2 5 h6 3 5 5 5 4 5 4 4 4 5 5 5 4 4 4 5 5 2 1 4 4 5 4 4 5 5 4 4 4 4 4 5 5 5 5 5 4 3 5 5 4 4 5 4 4 5 5 4 3 4 h7 3 5 5 5 4 5 3 4 4 5 5 4 4 3 4 5 5 5 3 3 4 4 3 4 4 5 4 3 4 4 4 4 5 5 5 5 5 4 5 5 3 2 5 4 3 5 5 3 3 4


NOTES:
i

Home shopping TV channels, some C-SPAN shows, and radio talk shows are examples of media with reactive components. However, unlike all these media, where one has to make a phone call, Internet communication is reactive on the same channel of communication, and most of the time requires that one be reactive--Internet reactivity is built in as an existential condition of Internet communications.
ii

Since in this work a model is not being built but only applied to build a measurement instrument, the only explicit concern is checking construct validity--that is, verifying that the instrument to be developed measures what it is intended to measure.
iii

This formula enables the researchers to estimate how many items are needed in order to obtain a pre-specified reliability coefficient. The actual reliability will depend on how well the items measure the construct.
iv

Discriminant validity actually implies that there is little or no correlation between constructs that should not correlate according to theory. Davis does not have such constructs in his model, and uses different tools instead. Therefore his discriminant validity check is not very powerful.
v

An email client, or email, is a computer application that makes information exchange on the Internet possible.
vi

The consulted parties were Master's and doctoral candidates in the Communication, Culture and Technology and Linguistics departments at Georgetown University, as well as professors Diana Owen and Colleen Cotter from the Government and Linguistics departments, respectively.
vii

As mentioned before, there is some evidence for the external validity of the TAM instrument, which enables its application to contexts other than those for which it was initially developed.
viii

As an example of a generalized distance, for a 2-dimensional space or only two communication tools in a website, one can use the formula:

    sqrt( sum_i score_difference_i^2 )

where i is each dimension or tool and score_difference_i is the difference of the scores on the same dimension or for the same tool.
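The generalized distance in note viii is the Euclidean distance between two websites' per-tool score vectors; a minimal Python sketch under the same assumptions (hypothetical per-tool scores for two websites, two tools as in the note):

```python
import math

def score_distance(site_x, site_y):
    """Generalized (Euclidean) distance between two websites' per-tool scores:
    the square root of the sum of squared per-tool score differences."""
    tools = site_x.keys()
    return math.sqrt(sum((site_x[t] - site_y[t]) ** 2 for t in tools))

# Hypothetical per-tool scores for two websites:
site_x = {"email": 4.0, "hyperlink": 3.0}
site_y = {"email": 1.0, "hyperlink": 7.0}

print(score_distance(site_x, site_y))  # → 5.0 (a 3-4-5 triangle)
```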

ix

The result of elections as a function of spending on political campaigns is an example of nonlinear and asymptotic growth.
x

The standard version of SPSS 9.0, which handles numbers with as many as 16 decimal places, is not useful in this case either.


xi

A suggestion for a better leverage of discussion lists in a political campaign would be either to include links to already existing partisan lists or to foster such lists well before the official campaign starts.
xii

Campaign Solutions is available at: http://www.CampaignSolutions.com.

xiii

This statement is true to the extent that the role of politicians' websites is to foster communication between political candidates and their constituencies in order to improve the quality of political campaigns and consequently the quality of the democratic systems. Two possible explanations emerge from here. Either the political candidates did not know how to structure their online forums, in which case this work proves to be useful once more; or the political candidates did not look for means to improve the quality of the political process in their online version of the 1998 Campaign.
xiv

This observation is necessary since the TAM model was developed to estimate the adoption of technology in a work-specific environment.
xv

In this survey data, NULL represents a missing element. Crt is the subject number; the two column blocks correspond to the first and the second surveyed groups of students, respectively.
