
**control condition**
The Book: Individuals in a control condition do not receive the experimental treatment. Instead, they either receive no treatment or they receive a neutral, placebo treatment. The purpose of a control condition is to provide a baseline for comparison with the experimental condition.
Our Take: the people who are not experimented on directly; the control group

**correlation method**
The Book: Two variables are observed to determine whether there is a relationship between them.
Our Take: seeing if there is a connection between two variables

**data**
The Book: (plural) The measurements or observations. A data set is a collection of measurements or observations. A datum (singular) is a single measurement or observation and is commonly called a score or raw score.
Our Take: the information you get from the study

**dependent variable**
The Book: The variable that is observed in order to assess the effect of the treatment.
Our Take: the variable you measure to see whether the treatment had an effect

**descriptive statistics**
The Book: Statistical procedures used to summarize, organize, and simplify data.
Our Take: see statistics below

**experimental condition**
The Book: Individuals in the experimental condition do receive the experimental treatment.
Our Take: the experimental "rats"

**frequency distribution**
The Book: An organized tabulation of the number of individuals located in each category on the scale of measurement.
Our Take: a listing of data values and how often each occurs

**independent variable**
The Book: The variable that is manipulated by the researcher. In behavioral research, the independent variable usually consists of the two (or more) treatment conditions to which subjects are exposed. The independent variable consists of the antecedent conditions that were manipulated prior to observing the dependent variable.
Our Take: the variable the researcher changes during the experiment (i.e., take a pill or not take a pill; red color versus green color)

**inferential statistics**
The Book: Techniques that allow us to study samples and then make generalizations about the populations from which they were selected.
Our Take: generalizing from a sample to the population

**Negative Skew**
The Book: A skewed distribution with the tail on the left-hand side (low end) of the X-axis.
Our Take: small end on the left

**parameter**
The Book: A value, usually a numerical value, that describes a population. A parameter may be obtained from a single measurement, or it may be derived from a set of measurements from the population.
Our Take: a population descriptor (i.e., the population average) [see statistic]

**population**
The Book: The set of all the individuals of interest in a particular study.
Our Take: the entire group of people you are trying to get information about

**Positive Skew**
The Book: A skewed distribution with the tail on the right-hand side (high end) of the X-axis.
Our Take: small end on the right

**sample**
The Book: A set of individuals selected from a population, usually intended to represent the population in a research study.
Our Take: the small group you actually study, as opposed to the large group you are trying to get information about

**sampling error**
The Book: The discrepancy, or amount of error, that exists between a sample statistic and the corresponding population parameter.
Our Take: the difference between the sample you tested and the actual population

**Skewed Distribution**
The Book: Description of the shape of a distribution when the scores tend to pile up toward one end of the scale and taper off gradually at the other end.
Our Take: a distribution that extends more to one side than the other

**statistic** (Note: do not confuse with statisticS)
The Book: A value, usually a numerical value, that describes a sample. A statistic may be obtained from a single measurement, or it may be derived from a set of measurements from the sample.
Our Take: a sample descriptor (the average score for a sample, which will probably be different from the average score for the population) [see parameter]

**statistics**
The Book: A set of mathematical procedures for organizing, summarizing, and interpreting information.
Our Take: statistics is a way to help you interpret data

**Symmetrical Distribution**
The Book: The shape of a graph occurring when it is possible to draw a vertical line through the middle and one side is a mirror image of the other.
Our Take: seeing the same thing on each side of a graph when divided down the middle

**Tail (of a distribution)**
The Book: The section where the scores taper off toward one end of a distribution.
Our Take: the small tapered ends of a distribution

**variable** (see independent and dependent variable also)
The Book: A characteristic or condition that changes or has different values for different individuals.
Our Take: something that changes (see dependent or independent variable)
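The statistic/parameter and sampling-error entries above lend themselves to a short demonstration. A minimal sketch in Python, with an invented population of scores:

```python
import random

# Hypothetical population of 1,000 scores (illustrative values only).
random.seed(42)
population = [random.gauss(100, 15) for _ in range(1000)]

# Parameter: a value that describes the population.
parameter_mean = sum(population) / len(population)

# Statistic: the same kind of value, computed on a sample.
sample = random.sample(population, 30)
statistic_mean = sum(sample) / len(sample)

# Sampling error: the discrepancy between the sample statistic
# and the corresponding population parameter.
sampling_error = statistic_mean - parameter_mean
```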

## Chapter 1

- **Constant**: A quantity whose value doesn't change. Pi (π) is an example; it has a value of 3.14159+ that never changes.
- **Data**: The measurements that are made on the subjects of an experiment.
- **Dependent variable**: The variable in an experiment that an investigator measures to determine the effect of the independent variable.
- **Descriptive statistics**: Techniques that are used to describe or characterize the obtained sample data.
- **Independent variable**: The variable in an experiment that is systematically manipulated by an investigator.
- **Inferential statistics**: Techniques that use the obtained sample data to infer to populations.
- **Method of authority**: Something is considered true because of tradition or because some person of distinction says it is true.
- **Method of intuition**: Sudden insight, or a clarifying idea that springs into consciousness all at once as a whole.
- **Method of rationalism**: Uses reason alone to arrive at knowledge. It assumes that if the premises are sound and the reasoning is carried out correctly according to the rules of logic, then the conclusions will yield truth.
- **Naturalistic observation research**: A type of observational study in which the subjects of interest are observed in their natural setting. A goal of this research is to obtain an accurate description of behaviors of interest occurring in the natural setting.
- **Observational studies**: A type of research in which no variables are actively manipulated. The researcher observes and records the data of interest.
- **Parameter**: A number calculated on population data that quantifies a characteristic of the population.
- **Parameter estimation research**: A type of observational study in which the goal is to determine a characteristic of a population. An example might be the mean age of all psychology majors at your university.

- **Population**: The complete set of individuals, objects, or scores that an investigator is interested in studying.
- **Rationalism**: Uses reason alone to arrive at knowledge. It assumes that if the premises are sound and the reasoning is carried out correctly according to the rules of logic, then the conclusions will yield truth.
- **Sample**: A subset of the population.
- **Scientific method**: The scientist has a hypothesis about some feature of reality that he or she wishes to test. An objective observational study or experiment is carried out. The data are analyzed statistically, and conclusions are drawn either supporting or rejecting the hypothesis.
- **Statistic**: A number calculated on sample data that quantifies a characteristic of the sample.
- **True experiment**: An experiment in which an independent variable is manipulated and its effect on some dependent variable is studied. Has the potential to determine causality.
- **Variable**: Any property or characteristic of some event, object, or person that may have different values at different times depending on the conditions.

## Chapter 2

- **Continuous variable**: A variable that theoretically can have an infinite number of values between adjacent units on the scale.
- **Discrete variable**: A variable for which no values are possible between adjacent units on the scale.
- **Interval scale**: A measuring scale that possesses the properties of magnitude and equal intervals between adjacent units on the scale, but doesn't have an absolute zero point. The Celsius scale of temperature measurement is a good example of an interval scale.
- **Nominal scale**: The scale is composed of categories, and the object is measured by determining to which category the object belongs. The categories comprise the units of the scale. An example would be brands of MP3 players; the units would be Apple, Microsoft, Sony, Creative Labs, etc.
- **Ordinal scale**: A rank-ordered scale in which the objects being measured are rank-ordered according to whether they possess more, less, or the same amount of the variable being measured. An example is ranking Division 1 NCAA college football teams according to which college or university football team is considered the best, the next best, and so on.
- **Ratio scale**: A measuring scale that possesses the properties of magnitude, equal intervals between adjacent units on the scale, and also an absolute zero point. The Kelvin scale of temperature measurement is an example of a ratio scale.
- **Real limits of a continuous variable**: Those values that are above and below the recorded value by one-half of the smallest measuring unit of the scale.
- **Summation**: An operation very often performed in statistics in which all or parts of a set (or sets) of scores are added.

## Chapter 3

- **Cumulative frequency distribution**: The number of scores that fall below the upper real limit of each interval.
- **Cumulative percentage distribution**: The percentage of scores that fall below the upper real limit of each interval.
- **Exploratory data analysis**: A recently developed technique that employs easily constructed diagrams that are useful in summarizing and describing sample data.
- **Frequency distribution**: A listing of score values and their frequency of occurrence.
- **Frequency polygon**: A graph that is used with interval or ratio data. Identical to a histogram, except that instead of using bars, the midpoints of each interval are plotted and joined together with straight lines, and the lines are extended to meet the horizontal axis at the midpoints of the intervals immediately beyond the lowest and highest intervals.
- **Histogram**: Similar to a bar graph, except that it is used with interval or ratio data. Class intervals are plotted on the horizontal axis, and a bar is drawn over each class interval such that each bar begins and ends at the real limits of the interval. The height of each bar corresponds to the frequency of the interval, and the vertical bars touch each other rather than being spaced apart as with the bar graph.
- **Negatively skewed curve**: A curve on which most of the scores occur at the higher values, and the curve tails off toward the lower end of the horizontal axis.
- **Percentile point**: The value on the measurement scale below which a specified percentage of the scores in the distribution fall.
- **Percentile rank (of a score)**: The percentage of scores with values lower than the score in question.
- **Positively skewed curve**: A curve on which most of the scores occur at the lower values, and the curve tails off toward the higher end of the horizontal axis.
- **Relative frequency distribution**: The proportion of the total number of scores that occur in each interval.
- **Skewed curve**: A curve whose two sides do not coincide if the curve is folded in half; that is, a curve that is not symmetrical.
- **Stem and leaf diagrams**: An alternative to the histogram, used in exploratory data analysis. A picture is shown of each score divided into a stem and a leaf, separated by a vertical line. The leaf for each score is usually the last digit, and the stem is the remaining digits; occasionally the leaf is the last two digits, depending on the range of the scores. The stem is placed to the left of the vertical line and the leaf to the right. Stems are placed vertically down the page, and leaves are placed in order horizontally across the page.
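Several of the Chapter 3 entries (frequency distribution, cumulative frequency distribution, percentile rank) can be illustrated with a short sketch; the scores here are invented:

```python
from collections import Counter

scores = [2, 3, 3, 4, 4, 4, 5, 5, 6]

# Frequency distribution: each score value and how often it occurs.
freq = Counter(scores)

# Cumulative frequency: number of scores at or below each value.
cum_freq = {}
running_total = 0
for value in sorted(freq):
    running_total += freq[value]
    cum_freq[value] = running_total

# Percentile rank of a score: percentage of scores below it.
def percentile_rank(x, data):
    return 100 * sum(s < x for s in data) / len(data)
```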

## Chapter 4

- **Arithmetic mean**: The sum of the scores divided by the number of scores.
- **Central tendency**: The average, middle, or most frequent value of a set of scores.
- **Deviation score**: The distance of the raw score from the mean of its distribution.
- **Dispersion**: The spread of a set of scores.
- **Median (Mdn)**: The scale value below which 50% of the scores fall.
- **Mode**: The most frequent score in the distribution.
- **Overall mean**: Sometimes called the weighted mean. The average value of several sets or groups of scores. It takes into account the number of scores in each group and, in effect, weights the mean of each group by the number of scores in the group.
- **Range**: The difference between the highest and lowest scores in the distribution.
- **Standard deviation**: A measure of variability that gives the average deviation of a set of scores about the mean.
- **Variability**: Refers to the spread of a set of scores.
- **Variance**: The standard deviation squared.
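The Chapter 4 measures can be computed with Python's `statistics` module (the scores are invented; `pvariance`/`pstdev` treat the scores as a whole population):

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(scores)            # arithmetic mean: sum / N
median = statistics.median(scores)        # value below which 50% of scores fall
mode = statistics.mode(scores)            # most frequent score
range_ = max(scores) - min(scores)        # highest minus lowest

# Deviation score: distance of each raw score from the mean.
deviations = [x - mean for x in scores]

variance = statistics.pvariance(scores)   # average squared deviation
std_dev = statistics.pstdev(scores)       # standard deviation = sqrt(variance)
```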

## Chapter 5

- **z score**: A transformed score that designates how many standard deviation units the corresponding raw score is above or below the mean.
- **Asymptotic**: Approaching a given value as a function extends to infinity. For the normal curve, it refers to how the Y value of the normal curve approaches 0 (the X axis) as X extends to +∞ and −∞; Y gets closer and closer to 0, but never quite reaches it.
- **Normal curve**: A symmetrical, bell-shaped curve with mean, median, and mode equal to each other, and specified kurtosis. Kurtosis refers to the sharpness or flatness of a curve as it reaches its peak.
- **Standard scores (z scores)**: A transformed score that designates how many standard deviation units the corresponding raw score is above or below the mean.
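The z-score transformation can be demonstrated directly; a set of transformed scores always has mean 0 and standard deviation 1 (the raw scores are invented):

```python
import statistics

scores = [60, 70, 80, 90, 100]
mu = statistics.mean(scores)        # mean of the raw scores
sigma = statistics.pstdev(scores)   # population standard deviation

# z score: how many standard deviation units each raw score lies
# above (+) or below (-) the mean.
z_scores = [(x - mu) / sigma for x in scores]
```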

## Chapter 6

- **Y intercept**: The Y value of a function where the function intersects the Y axis. For the linear relationship Y = bX + a, a is the Y intercept.
- **Biserial coefficient**: A correlation coefficient, symbolized by rb. It is used when one of the variables is at least of interval scaling and the other is dichotomous.
- **Coefficient of determination**: Symbolized by r². Tells us the proportion of the total variability that is accounted for by X.
- **Correlation**: The association or relationship between two variables. It focuses on the direction and degree of the relationship.
- **Correlation coefficient**: A quantitative expression of the magnitude and direction of a relationship.
- **Curvilinear relationship**: The relationship between two variables is curved, rather than linear. In this case, a curved line fits the data better than a straight line.
- **Direct relationship**: As X increases, Y increases; as X decreases, Y decreases. The slope of the relationship is positive. Higher values of X are associated with higher values of Y, and lower values of X with lower values of Y. Also called a positive relationship.
- **Imperfect relationship**: A positive or negative relationship for which all of the points do not fall on the line.
- **Inverse relationship**: As X increases, Y decreases; as X decreases, Y increases. The slope of the relationship is negative. Higher values of X are associated with lower values of Y, and lower values of X with higher values of Y. Also called a negative relationship.
- **Linear relationship**: A relationship between two variables that can be most accurately represented by a straight line.
- **Negative relationship**: An inverse relationship between two variables.
- **Pearson r**: A measure of the extent to which paired scores occupy the same or opposite positions within their own distributions.
- **Perfect relationship**: A positive or negative relationship for which all of the points fall on the line.
- **Phi coefficient**: A correlation coefficient, symbolized by φ. Used when each of the variables is dichotomous.
- **Positive relationship**: A direct relationship between two variables.
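Pearson r and the coefficient of determination can be sketched from deviation scores; the paired data are invented for illustration:

```python
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = statistics.mean(x), statistics.mean(y)

# Pearson r from deviation scores: the extent to which paired scores
# occupy the same (or opposite) positions in their own distributions.
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
r = sxy / (sxx * syy) ** 0.5

# Coefficient of determination r^2: proportion of the total
# variability in Y accounted for by X.
r_squared = r ** 2
```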

- **Scatter plot**: A graph of paired X and Y values.
- **Slope**: Rate of change. For a straight line, slope = ΔY/ΔX = (Y₂ − Y₁) / (X₂ − X₁).
- **Spearman rho**: A correlation coefficient, symbolized by rs. Used when one or both of the variables are of ordinal scaling.
- **Variability accounted for by X**: The change in Y that is explained by the change in X. Used in measuring the strength of a relationship.

## Chapter 7

- **Homoscedasticity**: Assumption used in conjunction with the standard error of estimate. The assumption is that the variability of Y remains constant for all values of X.
- **Least-squares regression line**: The prediction line that minimizes the total error of prediction according to the least-squares criterion of Σ(Y − Y′)².
- **Multiple coefficient of determination**: Symbolized by R². Gives the proportion of the total variance in Y accounted for by the multiple X variables. Also called squared multiple correlation.
- **Multiple regression**: Technique used for predicting Y from multiple associated X variables.
- **Regression**: A topic that considers using the relationship between two or more variables for prediction.
- **Regression constant**: The aY and bY terms in the equation Y′ = bY·X + aY.
- **Regression line**: A best-fitting line used for prediction.
- **Regression of X on Y**: Technique used to derive the regression line for predicting X given Y.
- **Regression of Y on X**: Technique used to derive the regression line for predicting Y given X.
- **Standard error of estimate**: Gives us a measure of the average deviation of prediction errors about the regression line.
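The regression of Y on X and the standard error of estimate can be sketched directly (the paired data are invented):

```python
import statistics

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = statistics.mean(x), statistics.mean(y)

# Least-squares regression of Y on X: Y' = bY*X + aY, with the
# constants chosen to minimize the total squared error, sum of (Y - Y')^2.
bY = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
    (xi - mx) ** 2 for xi in x
)
aY = my - bY * mx

predictions = [bY * xi + aY for xi in x]
errors = [yi - pi for yi, pi in zip(y, predictions)]

# Standard error of estimate: a measure of the average deviation of
# the prediction errors about the regression line (df = N - 2).
see = (sum(e ** 2 for e in errors) / (len(x) - 2)) ** 0.5
```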

## Chapter 8

- **A posteriori probability**: Probability determined after the fact, after some data have been collected. In equation form, p(A) = (number of times A has occurred) / (total number of occurrences).
- **A priori probability**: Probability determined without collecting any data; deduced from reason alone. In equation form, p(A) = (number of events classifiable as A) / (total number of possible events).
- **Addition rule**: Gives the probability of occurrence of one of several events. If there are only two events, A and B, the addition rule gives the probability of occurrence of A or B. In equation form, p(A or B) = p(A) + p(B) − p(A and B).
- **Exhaustive set of events**: A set that includes all of the possible events.
- **Independence of two events**: The occurrence of one event has no effect on the probability of occurrence of the other.
- **Multiplication rule**: Gives the probability of joint or successive occurrence of several events. If there are only two events, the multiplication rule gives the probability of occurrence of A and B. In equation form, p(A and B) = p(A)p(B|A).
- **Mutually exclusive events**: Two events that cannot occur together; that is, the occurrence of one precludes the occurrence of the other.
- **Probability**: Expressed as a fraction or decimal number, probability is fundamentally a proportion; it gives the chances that an event will or will not occur.
- **Probability of occurrence of A or B**: The probability of occurrence of A plus the probability of occurrence of B minus the probability of occurrence of both A and B.
- **Probability of occurrence of both A and B**: The probability of occurrence of A times the probability of occurrence of B given that A has occurred.
- **Random sample**: A sample selected from the population by a process that ensures that (1) each possible sample of a given size has an equal chance of being selected and (2) all the members of the population have an equal chance of being selected into the sample.
- **Sampling with replacement**: A method of sampling in which each member of the population selected for the sample is returned to the population before the next member is selected.
- **Sampling without replacement**: A method of sampling in which the members of the sample are not returned to the population before selecting subsequent members.
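The addition and multiplication rules can be checked with a priori card-deck probabilities (a standard 52-card deck; not an example from the text):

```python
# A priori probabilities for one draw from a standard 52-card deck.
p_king = 4 / 52
p_heart = 13 / 52
p_king_and_heart = 1 / 52      # the king of hearts

# Addition rule: p(A or B) = p(A) + p(B) - p(A and B).
p_king_or_heart = p_king + p_heart - p_king_and_heart

# Multiplication rule: p(A and B) = p(A) * p(B|A).
# Two kings in a row, sampling without replacement:
p_two_kings = (4 / 52) * (3 / 51)

# Mutually exclusive events (a card cannot be both a king and a
# queen): p(A and B) = 0, so the addition rule reduces to p(A) + p(B).
p_king_or_queen = 4 / 52 + 4 / 52
```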

## Chapter 9

- **Biased coins**: Coins for which p(head) ≠ p(tail) for any coin when flipped. Expressed in terms of P and Q: P ≠ Q ≠ 0.50.
- **Binomial distribution**: A probability distribution that results when five preconditions are met: (1) there is a series of N trials; (2) on each trial there are only two possible outcomes; (3) on each trial, the two possible outcomes are mutually exclusive; (4) there is independence between the outcomes of each trial; and (5) the probability of each possible outcome on any trial stays the same from trial to trial. The binomial distribution gives each possible outcome of the N trials and the probability of getting each of these outcomes.
- **Binomial expansion**: Mathematical expression used to generate the binomial distribution. The expression is given by (P + Q)^N.
- **Binomial table**: Table that contains binomial distribution probabilities for many values of N and P.
- **Fair coins**: Coins for which, when flipped, p(head) = p(tail) for any coin. Expressed in terms of P and Q: P = Q = 0.50.
- **Normal approximation**: Technique used to solve binomial problems when N > 20.
- **Number of P events**: A P event is one of the two possible outcomes of any trial. The number of P events is the number of such outcomes.
- **Number of Q events**: A Q event is one of the two possible outcomes of any trial. The number of Q events is the number of such outcomes.
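A sketch of the binomial distribution for a fair coin; each term matches one term of the expansion of (P + Q)^N, so the whole distribution sums to 1:

```python
from math import comb

def binomial_probability(n, k, p):
    """Probability of exactly k P events in n independent trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

# Fair coin (P = Q = 0.50): probability of exactly 7 heads in 10 flips.
p_seven_heads = binomial_probability(10, 7, 0.5)

# The full distribution over 0..10 heads sums to 1.
total = sum(binomial_probability(10, k, 0.5) for k in range(11))
```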

## Chapter 10

- **Alpha (α) level**: A probability level set by an investigator at the beginning of an experiment to limit the probability of making a Type I error.
- **Alternative hypothesis (H1)**: Symbolized by H1. The hypothesis that claims the difference in results between conditions is due to the independent variable.
- **Beta (β)**: The probability of making a Type II error.
- **Correct decision**: Rejecting H0 when H0 is false; retaining H0 when H0 is true.
- **Correlated groups design**: There are paired scores in the conditions, and the differences between paired scores are analyzed.
- **Directional hypothesis**: A hypothesis that specifies the direction of the effect of the independent variable on the dependent variable.
- **Fail to reject null hypothesis**: Conclusion when analyzing the data of an experiment that retains the null hypothesis as a reasonable explanation of the data.
- **Importance of an effect**: A real effect that, in addition to being statistically significant, is of practical or theoretical importance.
- **Nondirectional hypothesis**: A hypothesis that doesn't specify the direction of the effect of the independent variable on the dependent variable.
- **Null hypothesis (H0)**: Symbolized by H0. Logical counterpart to the alternative hypothesis. It either specifies that there is no effect, or that there is a real effect in the direction opposite to that specified by the alternative hypothesis.
- **Omega squared (ω²)**: Unbiased estimate of the size of the effect of the independent variable.
- **One-tailed probability**: Probability that results when all of the outcomes being evaluated are under one tail of the distribution.
- **Reject null hypothesis**: Conclusion when analyzing the data of an experiment that rejects the null hypothesis as a reasonable explanation of the data.
- **Repeated measures design**: Like the correlated groups design. There are paired scores in the conditions, and the differences between paired scores are analyzed.
- **Replicated measures design**: Same as the repeated measures design. There are paired scores in the conditions, and the differences between paired scores are analyzed.
- **Retain null hypothesis**: Same as fail to reject null hypothesis. Conclusion when analyzing the data of an experiment that fails to reject the null hypothesis as a reasonable explanation of the data.
- **Sign test**: Statistical inference test, appropriate for the repeated measures or correlated groups design involving only two groups, that ignores the magnitude of the difference scores and considers only their direction or sign.
- **Significant**: The result of an experiment that is statistically reliable.
- **Size of effect**: Magnitude of the real effect of the independent variable on the dependent variable.
- **State of reality**: Truth regarding H0 and H1.
- **Two-tailed probability**: Probability that results when the outcomes being evaluated are under both tails of the distribution.
- **Type I error**: A decision to reject the null hypothesis when the null hypothesis is true.
- **Type II error**: A decision to retain the null hypothesis when the null hypothesis is false.
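A sketch of the sign test using the binomial distribution with P = 0.50 under H0 (the before/after scores are invented; ties would be dropped):

```python
from math import comb

# Sign test sketch: repeated measures, before/after scores per subject.
before = [12, 15, 9, 14, 10, 13, 11, 16]
after = [14, 18, 10, 17, 9, 15, 14, 19]

# Keep only the direction (sign) of each difference; ignore magnitude.
diffs = [a - b for a, b in zip(after, before) if a != b]
n = len(diffs)
pluses = sum(d > 0 for d in diffs)

# Two-tailed probability under H0 (P = 0.50) of an outcome at least
# this extreme, computed from the binomial distribution.
extreme = max(pluses, n - pluses)
one_tailed = sum(comb(n, k) * 0.5 ** n for k in range(extreme, n + 1))
two_tailed = 2 * one_tailed
```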

## Chapter 11

- **Pnull**: The probability of getting a plus with any subject in the sample of the experiment when the independent variable has no effect (appropriate for the sign test).
- **Preal**: The probability of getting a plus with any subject in the sample of the experiment when the independent variable has a real effect; the proportion of pluses in the population if the experiment were done on the entire population and the independent variable has a real effect (appropriate for the sign test).
- **Power**: The probability that the results of an experiment will allow rejection of the null hypothesis if the independent variable has a real effect.
- **Real effect**: An effect of the independent variable that produces a change in the dependent variable.

## Chapter 12

- **μnull**: Mean of the null-hypothesis population.
- **μreal**: Mean of the population specified by the hypothesized real effect.
- **Critical region**: Short for critical region for rejection of the null hypothesis. The region that contains values of the statistic that allow rejection of the null hypothesis.
- **Critical value of z**: Symbolized by zcrit. The value of z that bounds the critical region.
- **Critical value of a statistic**: The value of the statistic that bounds the critical region.
- **Null-hypothesis population**: An actual or theoretical set of population scores that would result if the experiment were done on the entire population and the independent variable had no effect; it is used to test the validity of the null hypothesis.
- **Sampling distribution of a statistic**: A listing of (1) all the values that the statistic can take and (2) the probability of getting each value under the assumption that it results from chance alone, or if sampling is random from the null-hypothesis population.
- **Sampling distribution of the mean**: A listing of all the values the mean can take, along with the probability of getting each value if sampling is random from the null-hypothesis population.
- **Standard error of the mean**: The standard deviation of the sampling distribution of the mean.
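The sampling distribution of the mean can be approximated by drawing many random samples from an invented population; the standard deviation of the resulting sample means (the standard error) comes out close to σ/√N:

```python
import random
import statistics

# Invented null-hypothesis population (illustrative only).
random.seed(0)
population = [random.gauss(50, 10) for _ in range(10000)]
N = 25

# Approximate the sampling distribution of the mean by repeated sampling.
sample_means = [
    statistics.mean(random.sample(population, N)) for _ in range(2000)
]

# Standard error of the mean: the standard deviation of the sampling
# distribution of the mean; theoretically sigma / sqrt(N).
empirical_se = statistics.pstdev(sample_means)
theoretical_se = statistics.pstdev(population) / N ** 0.5
```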

## Chapter 13

- **Cohen's d**: Statistic, associated with J. Cohen, that is used to measure the size of effect.
- **Confidence interval**: A range of values that probably contains the population value.
- **Confidence limits**: The values that state the boundaries of the confidence interval.
- **Critical value of r**: Symbolized by rcrit. The value of r that bounds the critical region.
- **Critical value of t**: Symbolized by tcrit. The value of t that bounds the critical region.
- **Degrees of freedom (df)**: The number of scores that are free to vary in calculating a statistic.
- **Sampling distribution of t**: A probability distribution of the t values that would occur if all possible different samples of a fixed size N were drawn from the null-hypothesis population. It gives (1) all the possible different t values for samples of size N and (2) the probability of getting each value if sampling is random from the null-hypothesis population.
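A confidence interval for the population mean can be sketched as follows; the sample data are invented, and the critical t value is taken from a standard t table (df = 9, two-tailed α = .05):

```python
import statistics

# 95% confidence interval for the population mean (sketch).
sample = [23, 25, 21, 28, 24, 26, 22, 27, 25, 24]
N = len(sample)
mean = statistics.mean(sample)
s = statistics.stdev(sample)        # sample standard deviation (df = N - 1)
se = s / N ** 0.5                   # estimated standard error of the mean

t_crit = 2.262   # tabled critical value of t, df = 9, two-tailed alpha = .05

# Confidence limits: the boundaries of the confidence interval.
lower, upper = mean - t_crit * se, mean + t_crit * se
```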

## Chapter 14

- **t test for correlated groups**: Inference test using Student's t statistic. Employed with correlated groups, replicated measures, and repeated measures designs.
- **t test for independent groups**: Inference test using Student's t statistic. Employed with the independent groups design.
- **Confidence-interval approach**: Alternative to the null-hypothesis approach. Uses confidence intervals as a method that allows conclusions with regard both to whether there is a real effect and to the size of the effect.
- **Degrees of freedom (df)**: The number of scores that are free to vary in calculating a statistic.
- **Homogeneity of variance**: Assumption underlying the independent groups t test and ANOVA. If there are k groups, the assumption is that the variances of the populations from which the k samples are drawn are equal. In equation form, s₁² = s₂² = … = sₖ².
- **Independent groups design**: Involves experiments using two or more conditions. Each condition employs a different level of the independent variable. The most basic experiment has two conditions. Subjects are randomly selected from the subject population and then randomly assigned to the two conditions. Since subjects are randomly assigned to the conditions, there is no basis for pairing of scores between conditions. Rather, a statistic is computed for the scores of each group separately, and the two group statistics are compared to determine if chance alone is a reasonable explanation of the data.
- **Mean of the population of difference scores**: Symbolized by μD. Mean of a hypothetical population of difference scores from which the sample difference scores are assumed to have been drawn. If the independent variable has no effect, then μD = 0.
- **Null-hypothesis approach**: Main approach used in this textbook for analyzing data to determine if the independent variable has a real effect. In this approach, we assume that chance alone is responsible for the difference between the scores in each group, calculate the obtained probability, and determine if the obtained probability is low enough to rule out chance as a reasonable explanation of the score differences between groups.
- **Size of effect**: Magnitude of the real effect of the independent variable on the dependent variable.
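A sketch of the independent groups t test with a pooled variance estimate (invented data; equal n, homogeneity of variance assumed):

```python
import statistics

group1 = [10, 12, 9, 11, 13, 12]
group2 = [14, 13, 15, 16, 12, 14]

n1, n2 = len(group1), len(group2)
m1, m2 = statistics.mean(group1), statistics.mean(group2)

# Pooled variance: weighted combination of the two groups' sums of squares.
ss1 = sum((x - m1) ** 2 for x in group1)
ss2 = sum((x - m2) ** 2 for x in group2)
df = n1 + n2 - 2
pooled_var = (ss1 + ss2) / df

# Obtained t: mean difference divided by its estimated standard error.
t_obt = (m1 - m2) / (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
# Compare |t_obt| with the tabled critical value of t for df = n1 + n2 - 2.
```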

## Chapter 15

- **A posteriori comparisons**: Comparisons that are not planned before doing the experiment. They usually arise after the experimenter sees the data and chooses groups with mean values that are far apart, or else they arise from doing all the possible comparisons with no theoretical a priori basis.
- **A priori comparisons**: Comparisons that are planned in advance of the experiment. They often arise from predictions that are based on theory and prior research.
- **F test**: Inference test based on the ratio of two independent estimates of the same population variance, σ². Used in conjunction with the analysis of variance.
- **Qcrit**: The value of Q that bounds the critical region.
- **Qobt**: The obtained value of Q.
- **Within-groups sum of squares (SSW)**: Symbolized by SSW. Statistic computed in the one-way ANOVA. The total of the sum of squares for each group.
- **Within-groups variance estimate (sW²)**: Symbolized by sW². Statistic computed in the one-way ANOVA. Estimate of the null-hypothesis population variance that is based on the within-groups variability.
- **Analysis of variance**: Abbreviated ANOVA. Statistical technique used to analyze multigroup experiments. Uses the F test as the basis of the analysis.
- **Between-groups sum of squares (SSB)**: Symbolized by SSB. Statistic computed in the one-way ANOVA. The numerator of the equation for the between-groups variance estimate, sB².
- **Between-groups variance estimate (sB²)**: Symbolized by sB². Estimate of the null-hypothesis population variance that is based on the variability between the groups.
- **Comparison-wise error rate**: The probability of making a Type I error for any of the possible comparisons in an experiment.
- **Eta squared (η²)**: Biased estimate of the size of effect of the independent variable.
- **Experiment-wise error rate**: The probability of making one or more Type I errors for the full set of possible comparisons in an experiment.
- **Fcrit**: The value of F that bounds the critical region.
- **Grand mean**: Statistic computed in the analysis of variance. The overall mean of all the scores combined.
- **Newman-Keuls test**: Post hoc, multiple comparisons test that makes all possible pairwise comparisons among the sample means.
- **One-way analysis of variance (ANOVA), independent groups design**: Statistical technique used to analyze multigroup experiments in which the experimental design is an independent groups design and only one independent variable is studied.
- **Planned comparisons**: Comparisons that are planned in advance of the experiment. They often arise from predictions that are based on theory and prior research. Also see a priori comparisons.
- **Post hoc comparisons**: Comparisons that are not planned before doing the experiment. They usually arise after the experimenter sees the data and chooses groups with mean values that are far apart, or else they arise from doing all the possible comparisons with no theoretical a priori basis. Also see a posteriori comparisons.
- **Sampling distribution of F**: Gives all the possible F values along with the p(F) for each value, assuming sampling is random from the population.
- **Simple randomized-group design**: Statistical technique used to analyze multigroup experiments in which the experimental design is an independent groups design and only one independent variable is studied. Also see one-way ANOVA, independent groups design.
- **Single factor experiment, independent groups design**: Statistical technique used to analyze multigroup experiments in which the experimental design is an independent groups design and only one independent variable is studied. Also see one-way ANOVA, independent groups design.
- **Total variability (SST)**: Symbolized by SST. Statistic computed in the analysis of variance. The variability of all the scores about the grand mean.
- **Tukey's HSD test**: Post hoc, multiple comparisons test that makes all possible pairwise comparisons among the sample means.
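The one-way ANOVA quantities defined above (between- and within-groups sums of squares, F, and eta squared) can be computed directly; the three groups here are invented:

```python
import statistics

# One-way ANOVA sketch: three independent groups.
groups = [
    [4, 5, 6, 5],
    [7, 8, 6, 7],
    [9, 8, 10, 9],
]

all_scores = [x for g in groups for x in g]
grand_mean = statistics.mean(all_scores)
k = len(groups)
N = len(all_scores)

# Between-groups sum of squares: variability of the group means
# about the grand mean, weighted by group size.
ss_between = sum(
    len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups
)

# Within-groups sum of squares: total of the sum of squares in each group.
ss_within = sum(
    sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups
)

# F: ratio of the between-groups and within-groups variance estimates.
f_obt = (ss_between / (k - 1)) / (ss_within / (N - k))

# Eta squared: (biased) estimate of the size of effect.
eta_squared = ss_between / (ss_between + ss_within)
```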

## Chapter 16

- **Column degrees of freedom (dfC)**: Symbolized by dfC. Statistic computed in the two-way ANOVA. Degrees of freedom in forming the column variance estimate, sC².
- **Factorial experiment**: An experiment in which the effects of two or more factors are assessed and the treatments used are combinations of the levels of the factors.
- **Interaction effect**: The result observed when the effect of one factor is not the same at all levels of the other factor.
- **Main effect**: The effect of factor A (averaged over the levels of factor B) and the effect of factor B (averaged over the levels of factor A).
- **Row × column degrees of freedom (dfRC)**: Symbolized by dfRC. Statistic computed in the two-way ANOVA. Degrees of freedom in forming the row × column variance estimate, sRC².
- **Row degrees of freedom (dfR)**: Symbolized by dfR. Statistic computed in the two-way ANOVA. Degrees of freedom in forming the row variance estimate, sR².
- **Row × column sum of squares (SSRC)**: Symbolized by SSRC. Statistic computed in the two-way ANOVA. The numerator of the equation for computing the row × column variance estimate, sRC².
- **Row × column variance estimate (sRC²)**: Symbolized by sRC². Estimate of the null-hypothesis population variance that is based on the row × column variability.
- **Two-way analysis of variance**: Statistical technique for assessing the effects of two variables that are manipulated in one experiment.
- **Within-cells degrees of freedom (dfW)**: Symbolized by dfW. Statistic computed in the two-way ANOVA. Degrees of freedom in forming the within-cells variance estimate, sW².
- **Within-cells sum of squares (SSW)**: Symbolized by SSW. Statistic computed in the two-way ANOVA. The numerator of the equation for computing the within-cells variance estimate, sW².
- **Within-cells variance estimate (sW²)**: Symbolized by sW². Estimate of the null-hypothesis population variance that is based on the within-cells variability.