Correlation
Correlation analysis is used when you have measured two continuous variables and want to quantify how consistently they vary together. The stronger the correlation, the more accurately the value of one variable can be estimated from the other. The direction and magnitude of a correlation are quantified by Pearson's correlation coefficient, r.
Correlation
r = 0 means that knowing the value of one variable tells us nothing about the value of the other.
Correlation does not show cause and effect, but it may suggest such a relationship.
Correlation ≠ Causation
the number of churches and bars in a town
smoking and alcoholism (consider the relationship between smoking and lung cancer)
students who eat breakfast and school performance
marijuana usage and heroin addiction (vs. heroin addiction and marijuana usage)
Visualizing Correlation
Assignment of axes does not matter (there are no independent and dependent variables). The order in which data pairs are plotted does not matter. In strict usage, lines are not drawn through correlation scatterplots.
Correlations
[Figure: Scatterplots of a weak positive correlation (r = 0.266), a strong negative correlation (r = -0.9960), and no correlation (r = 0.00).]
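Pearson's r can be computed directly from its definition; a minimal Python sketch with made-up data:

```python
# Pearson's correlation coefficient from its definition:
# r = sum((x - mean_x)(y - mean_y)) / sqrt(sum((x - mean_x)^2) * sum((y - mean_y)^2))
from math import sqrt

def pearson_r(x, y):
    """Return Pearson's r for paired observations x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2, 4, 6, 8, 10]))   # ≈ 1 (perfect positive correlation)
print(pearson_r(x, [10, 8, 6, 4, 2]))   # ≈ -1 (perfect negative correlation)
```

Values between these extremes indicate how consistently the two variables vary together, without implying which (if either) causes the other.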
Linear Regression
Linear regression is used for prediction and assumes a cause-and-effect relationship: does one variable change in a consistent manner with another variable? x = independent variable (cause); y = dependent variable (effect).
If it is not clear which variable is the cause and which is the effect, linear regression is probably an inappropriate test
Linear Regression
The independent variable is under the control of the investigator (its exact value is set). The dependent variable is normally distributed. This differs from correlation, where both variables are normally distributed and are selected at random by the investigator.
Regression analysis with more than one independent variable is termed multiple (linear) regression
Linear Regression
The best-fit line is the line that minimizes the sum of the squares of the distances of the data points from the predicted values (on the line).
[Figure: Scatterplot of a dependent variable against an independent variable with the best-fit line y = 1.0092x + 8.6509, R² = 0.8863.]
Linear Regression
y = a + bx, where
a = y-intercept (the point where x = 0, i.e., where the line passes through the y-axis)
b = slope of the line ((y2 - y1)/(x2 - x1))
Positive: y increases as x increases
Negative: y decreases as x increases
0: no correlation
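The intercept a and slope b of the best-fit line follow from the standard least-squares formulas; a minimal sketch with hypothetical data:

```python
# Least-squares fit of y = a + b*x:
#   b = sum((x - mean_x)(y - mean_y)) / sum((x - mean_x)^2)
#   a = mean_y - b * mean_x
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Data lying exactly on y = 3 + 2x recover the intercept and slope exactly.
a, b = linear_fit([0, 1, 2, 3], [3, 5, 7, 9])
print(a, b)  # 3.0 2.0
```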
The correlation coefficient, symbolized by r, shows the strength of the linear relationship between two variables. The closer the data points are to the line, the closer r is to 1 or -1.
The coefficient of determination, r² (the square of the correlation coefficient), is used to estimate the extent to which the dependent variable (y) is under the influence of the independent variable (x). It varies from 0 to 1: r² = 1 means that the value of y is completely dependent on x (no error or other contributing factors), while r² < 1 indicates that the value of y is influenced by more than the value of x.
Coefficient of Determination
The remainder (1 - r²) is the variance of y that is not explained by x (i.e., error or other factors). For example, if r² = 0.84, there is a strong positive relationship between the variables, and the value of x predicts 84% of the variability of y (the remaining 16% is due to other factors).
Strictly, r² is not a measure of the variation of y explained by variation in x; it measures the variation in y that is associated with the variance of x (and vice versa).
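The split of the variability of y into an explained fraction (r²) and an unexplained remainder (1 - r²) can be sketched numerically; the data below are hypothetical:

```python
# r^2 = 1 - SS_res/SS_tot: the fraction of the variability of y accounted for
# by the fitted line; the remainder (1 - r^2) is error or other factors.
def fit_and_r2(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))  # unexplained variation
    ss_tot = sum((yi - my) ** 2 for yi in y)                        # total variation of y
    return 1 - ss_res / ss_tot

x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x, with some scatter
r2 = fit_and_r2(x, y)
print(r2, 1 - r2)   # explained vs. unexplained fraction of the variance of y
```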
The independent variable (x) is selected by the investigator (not at random) and has no associated variance.
For every value of x, the values of y have a normal distribution.
Observed values of y differ from the mean value of y by an amount called a residual. (Residuals are normally distributed.)
The variances of y for all values of x are equal (homoscedasticity).
Observations are independent. (Each individual in the sample is measured only once.)
Figure 2: Strong curvature suggests that linear regression may not be appropriate (an additional variable may be required)
Figure 4: Actually a regression line connecting only two points. If the rightmost point were different, the regression line would shift.
Residuals
A homoscedastic residual plot shows a random, even spread. A heteroscedastic plot is funnel-shaped (and may be bowed), suggesting that a transformation or the inclusion of additional variables may be warranted.
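A crude numeric version of this residual check can be sketched in Python; the fitted line (y = 1 + 2x) and the data are hypothetical:

```python
# Residuals are the observed y values minus the values predicted by the fitted
# line. An even spread across x suggests homoscedasticity; a spread that grows
# with x (a funnel shape) suggests heteroscedasticity.
def residuals(x, y, a, b):
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

x = [1, 2, 3, 4, 5]
y = [3.1, 4.8, 7.5, 8.2, 13.0]
res = residuals(x, y, 1.0, 2.0)
print(res)

# Compare the residual spread in the lower and upper halves of x:
lo, hi = res[: len(res) // 2], res[len(res) // 2 :]
print(max(map(abs, lo)), max(map(abs, hi)))  # the upper half spreads more here
```

In practice the residuals would be plotted against x (or against the fitted values) rather than summarized this coarsely.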
Helsel, D.R., and R.M. Hirsh. 2002. Statistical Methods in Water Resources. USGS (http://water.usgs.gov/pubs/twri/twri4a3/)
[Figure: Residual plots (residuals vs. X Variable 1) for Data Sets 1-4.]
Outliers
Values that appear very different from others in the data set
Rule of thumb: an outlier is more than three standard deviations from the mean.
Three causes:
Measurement or recording error
Observation from a different population
A rare event from within the population
Outliers may indicate an important phenomenon, e.g., the ozone hole data (outliers were removed automatically by the analysis program, delaying the observation by about 10 years).
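The three-standard-deviation rule of thumb is easy to sketch; the data set below is hypothetical:

```python
# Rule-of-thumb outlier screen: flag values more than three standard
# deviations from the mean.
from math import sqrt

def outliers(data, k=3):
    n = len(data)
    mean = sum(data) / n
    sd = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample standard deviation
    return [x for x in data if abs(x - mean) > k * sd]

data = [10, 11, 9, 10, 12, 11, 10, 9, 11, 10] * 2 + [45]
print(outliers(data))  # [45]
```

Note that an extreme value inflates the standard deviation it is being compared against, so in small samples this rule can fail to flag genuine outliers.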
Outliers
Helsel, D.R., and R.M. Hirsh. 2002. Statistical Methods in Water Resources. USGS (http://water.usgs.gov/pubs/twri/twri4a3/)
Data should be interval or ratio.
The dependent and independent variables should be identifiable.
The relationship between the variables should be linear (if not, a transformation might be appropriate).
Have you chosen the values of the independent variable?
Does the residual plot show a random spread (homoscedastic), and does the normal probability plot display a straight line (or does a histogram of the residuals show a normal distribution)?
Lineweaver-Burk Plot
The Michaelis-Menten equation describes enzyme activity:

v0 = Vmax[S] / (Km + [S])

and is linearized by taking its reciprocal:

1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax
[Figure: Lineweaver-Burk plot of 1/v (pennies/min)^-1 vs. 1/S (pennies/m^2)^-1, with the corresponding residual plot for v (pennies/min).]
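Recovering Vmax and Km from a Lineweaver-Burk fit can be sketched as follows; the substrate values and the kinetic constants (Vmax = 10, Km = 2) are chosen purely for illustration:

```python
# Lineweaver-Burk: fitting 1/v0 = (Km/Vmax)(1/[S]) + 1/Vmax as a straight line
# gives slope = Km/Vmax and intercept = 1/Vmax, so Vmax = 1/intercept and
# Km = slope * Vmax.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b  # intercept, slope

S = [0.5, 1.0, 2.0, 4.0, 8.0]
v = [10 * s / (2 + s) for s in S]   # velocities generated from v = Vmax*S/(Km + S)

intercept, slope = linear_fit([1 / s for s in S], [1 / vi for vi in v])
Vmax = 1 / intercept
Km = slope * Vmax
print(Vmax, Km)  # approximately 10 and 2
```

Because the reciprocal transformation magnifies the error in small velocities, real kinetic data fit this line less cleanly than the exact values used here.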