
CORRELATION RESEARCH DESIGNS

Correlation designs are of two basic kinds: 1) those that measure only the degree of association between two or more variables (Correlation Analysis) and 2) those that attempt to predict a dependent variable on the basis of one or more independent variables (Regression Analysis).

In addition, both can be bivariate--when only two variables are involved--or multivariate, when more variables are involved. Simple correlation studies do not have causal hypotheses; they state only that certain variables are related. Regression studies, in turn, do involve causal hypotheses. In other words, the independent variable is placed before the dependent variable, and it is assumed that any change in the independent variable will be reflected in the dependent variable.
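To make the distinction concrete, here is a minimal sketch in Python (using scipy.stats) that first measures the degree of association and then fits a simple regression to predict the dependent variable; the variable names and data are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical data: hours of training (independent) and task errors (dependent).
training_hours = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
task_errors    = np.array([9, 8, 6, 6, 4, 3, 3, 1], dtype=float)

# Correlation analysis: degree of association only, no causal claim.
r, p = stats.pearsonr(training_hours, task_errors)
print(f"r = {r:.2f}, p = {p:.3f}")

# Regression analysis: predict the dependent variable from the independent one.
fit = stats.linregress(training_hours, task_errors)
predicted = fit.intercept + fit.slope * 10   # predicted errors after 10 hours of training
print(f"predicted errors at 10 hours of training: {predicted:.1f}")
```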

When two variables are related, they show a distinct pattern. The relationship can be a positive linear one; for instance, flame height and heat are related this way. On the other hand, the relationship may be negative. An example is the relationship between training sessions and number of errors in a task: the more training, the fewer errors. Negative relationships are as interesting as positive ones for analyzing behavior. There is also a special kind of relationship called curvilinear, which is not easy to interpret. In this type, the variables follow a certain pattern up to certain values and then the pattern is reversed. Such is the case with the number of errors with respect to task difficulty: very difficult tasks generate fewer errors because the subjects become more careful and self-conscious. When a curvilinear relationship is present, it is necessary to transform one of the variables to measure the actual correlation, e.g., by using a logarithmic transformation.

It is important to notice, when measuring the correlation between variables, that the changes that caused the variations in those variables have already taken place. For instance, if we try to correlate the size of a city (in terms of number of inhabitants) with its rate of structure fires, we may find a positive relationship. But which is the cause and which the effect? What came first, the increase in size or the fire rate? Certainly, many variables were involved at different times. Therefore, the only thing we can say with certainty is that these two variables are related.
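The transformation step mentioned above can be sketched briefly. The example below (Python with NumPy/SciPy, invented data) uses a case where one variable grows roughly exponentially with the other, so a logarithmic transformation straightens the relationship; a U-shaped pattern like the errors-versus-difficulty example may call for a different remedy.

```python
import numpy as np
from scipy import stats

# Hypothetical data: y grows roughly exponentially with x, so the raw
# relationship is curved rather than linear.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 8.2, 15.8, 33.0, 63.5, 130.0, 255.0])

r_raw, _ = stats.pearsonr(x, y)          # understates the true association
r_log, _ = stats.pearsonr(x, np.log(y))  # close to 1 once the curve is straightened
print(f"raw r = {r_raw:.2f}, after log transform r = {r_log:.2f}")
```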

How Is Correlation Evaluated? The simplest method we can use to determine whether two variables are related is to represent them in a scatter plot or scatter diagram. Each variable is represented on one axis of the diagram and the points correspond to cases. By convention, the X (horizontal) axis is used to represent the independent variable and the Y axis is used for the dependent variable. If no such differentiation is established, either variable can be placed on either axis. A scatter diagram can even show whether two variables are related in a curvilinear form, which creates special problems for interpretation. However, graphic data are not sufficient. We must measure the amount of relationship, and this can be done only through the appropriate correlation coefficient. Let us look at the most frequently used ones.
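As a minimal sketch of the scatter-diagram step, assuming Python with matplotlib, the example below plots two invented variables with the independent variable on the X axis and the dependent variable on the Y axis.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented data: flame height (independent) and heat output (dependent).
flame_height = np.array([2, 4, 5, 7, 8, 10, 12, 14], dtype=float)
heat_output  = np.array([15, 28, 33, 45, 52, 64, 75, 88], dtype=float)

plt.scatter(flame_height, heat_output)   # one point per case
plt.xlabel("Flame height")               # independent variable on the X axis
plt.ylabel("Heat output")                # dependent variable on the Y axis
plt.title("Scatter diagram")
plt.show()
```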

Pearson's r, or product-moment coefficient, is the most demanding, because all variables must be measured at the interval or ratio level. The significance of this coefficient is usually tested through the F ratio and is reported by statistical programs. A non-significant r usually should not be reported in a study. Below, you will see some conventions for reporting correlation coefficients. If the variables are ordinal, you must have ranks instead of scores for the subjects. Usually the statistical program has a command that allows you to transform the original scores into ranks. Then you apply Spearman's rho (also written rs), which gives you a value for the degree of correlation between ranks; a brief sketch computing both coefficients appears after the interpretation guide below. How should correlations be interpreted? There is no common agreement among researchers. However, the following guidelines can be helpful:

1.00 = a perfect positive correlation; changes in one variable are accompanied by changes in the other variable in the same direction. This is very hard to find.
0.76 to 0.99 = very strong relationship; some physical variables relate this way, but behavioral variables rarely do.
0.5 to 0.75 = strong relationship; successful studies report these.
0.3 to 0.49 = moderate relationship; many studies report these.
0.1 to 0.29 = weak relationship; quite often found.
0 to 0.09 = no correlation, although this is not necessarily an indication that the variables are unrelated.

The same qualifications apply to negative correlation coefficients, except that the direction of the relationship is reversed; e.g., -1 is a perfect negative correlation, indicating that as one variable increases the other decreases in proportion.
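As noted above, here is a brief sketch computing both coefficients with scipy.stats; the scores are invented, and while spearmanr ranks the raw scores itself, rankdata is shown to make the conversion to ranks explicit.

```python
import numpy as np
from scipy import stats

# Invented interval-level scores for eight subjects on two measures.
scores_a = np.array([12, 15, 11, 19, 22, 17, 14, 20], dtype=float)
scores_b = np.array([30, 34, 29, 40, 45, 38, 33, 41], dtype=float)

# Pearson's r (product-moment): requires interval- or ratio-level measurement.
r, p_r = stats.pearsonr(scores_a, scores_b)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")

# Spearman's rho: correlation between ranks.
rho, p_rho = stats.spearmanr(scores_a, scores_b)
ranks_a, ranks_b = stats.rankdata(scores_a), stats.rankdata(scores_b)  # explicit ranks
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```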
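The interpretation guide can also be written as a small lookup function. The sketch below simply restates the bands listed above and, following the note on negative coefficients, applies them to the absolute value of r.

```python
def describe_correlation(r: float) -> str:
    """Label a correlation coefficient using the bands given above."""
    strength = abs(r)  # the same bands apply to negative coefficients
    if strength >= 1.0:
        label = "perfect"
    elif strength >= 0.76:
        label = "very strong"
    elif strength >= 0.5:
        label = "strong"
    elif strength >= 0.3:
        label = "moderate"
    elif strength >= 0.1:
        label = "weak"
    else:
        label = "negligible"
    direction = "negative" if r < 0 else "positive"
    return f"{label} {direction} relationship"

print(describe_correlation(-0.62))   # strong negative relationship
```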
