
The error ellipse

● The ellipse of constant chi-square for two parameters a, b is described by

  Delta chi^2 = (Delta a, Delta b) C^-1 (Delta a, Delta b)^T = const.

● To describe the tilt of the error ellipse, we need to consider the following: for two parameters a, b, there are new coordinates a', b' in which the error ellipse has no tilt. There, the covariance vanishes and the covariance matrix is diagonal (see the previous considerations about the general form of the covariance matrix).
● As C is symmetric and positive semi-definite, this is equivalent to: there is an orthogonal transformation Q such that C^-1 is diagonal in the new coordinates,

  Q^T C^-1 Q = D.

  – The eigenvectors build the columns of Q; D is the diagonal matrix with the eigenvalues as diagonal elements.
  – The eigenvectors point in the directions of the half-axes of the error ellipse; the tilt in (a, b) is calculated from this.
  – Largest eigenvalue: shortest half-axis. Smallest eigenvalue: longest half-axis.
  – Useful to find linear combinations of parameters that are most sensitive to the data → eliminate superfluous parameters!
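The diagonalization described above can be sketched numerically. A minimal example with numpy, assuming a hypothetical 2x2 covariance matrix for two correlated parameters:

```python
import numpy as np

# Hypothetical covariance matrix for two correlated parameters (a, b)
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# Eigendecomposition of C^-1; C is symmetric, so use eigh.
# The columns of Q are the eigenvectors, lam the eigenvalues (ascending).
Cinv = np.linalg.inv(C)
lam, Q = np.linalg.eigh(Cinv)

# Half-axes of the Delta(chi^2) = 1 ellipse: 1/sqrt(eigenvalue of C^-1),
# so the largest eigenvalue gives the shortest half-axis
half_axes = 1.0 / np.sqrt(lam)

# Tilt of the ellipse in the (a, b) plane from the first eigenvector
tilt_deg = np.degrees(np.arctan2(Q[1, 0], Q[0, 0]))

print(half_axes)  # lengths of the two half-axes
print(tilt_deg)   # tilt angle in degrees
```

The eigenvectors in Q give the directions of the half-axes; rotating into that basis is exactly the (a, b) → (a', b') transformation in which the covariance vanishes.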
PHYS 6710: Nuclear and Particle Physics II
Graphical representation

● Eigenvalues lambda_i of C^-1: the length of half-axis i is 1/sqrt(lambda_i).



Correlations – a general warning
● Anscombe's quartet:

Source: Wikipedia

All four data sets share: the mean and variance of x, the mean and variance of y, the correlation between x and y, and the linear regression line.

→ It is indispensable to make a graphical representation of your data


→ Be aware of outliers in your data, which may grossly change the analysis
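The outlier warning can be made concrete with a small synthetic sketch (hypothetical data, numpy only): a single corrupted point is enough to shift a least-squares slope substantially.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(10.0)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)  # true slope = 2

# Least-squares slope with and without a single gross outlier
slope_clean = np.polyfit(x, y, 1)[0]

y_out = y.copy()
y_out[-1] += 20.0  # one corrupted point at the largest x
slope_out = np.polyfit(x, y_out, 1)[0]

print(slope_clean, slope_out)  # the outlier shifts the slope by about 1
```

A plot of both data sets makes the problem obvious immediately, while the summary statistics alone would not.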



Correlated chi-square fits
● So far: we have calculated the covariance matrix to get access
to
– parameter uncertainties
– confidence regions
– linear combinations of most/least significant parameters
● The covariance matrix has another application: it is needed whenever data
points are not independent (so far, we have assumed independent
data points).
● In that case, the covariance matrix transports the full
information beyond the individual error bars on each data point



(continued)
● The correct chi-square function gets replaced according to

  chi^2 = sum_{i,j} (y_i - f(x_i)) (C^-1)_{ij} (y_j - f(x_j)),

where y is the vector of data points, f is the vector of values of f evaluated at the x_i, and C is the covariance matrix.
● In the case of no correlations, C_{ij} = delta_{ij} sigma_i^2, and the original
chi-square function chi^2 = sum_i (y_i - f(x_i))^2 / sigma_i^2 is recovered.
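The replacement can be written down directly. A minimal sketch with numpy and hypothetical numbers, including the cross-check that a diagonal C reproduces the ordinary chi-square:

```python
import numpy as np

def chi2_correlated(y, f, C):
    """Correlated chi-square: (y - f)^T C^-1 (y - f)."""
    r = y - f
    return r @ np.linalg.solve(C, r)

# Hypothetical example: two correlated data points vs. model values
y = np.array([1.2, 2.1])
f = np.array([1.0, 2.0])
C = np.array([[0.04, 0.01],
              [0.01, 0.09]])
print(chi2_correlated(y, f, C))

# Cross-check: with a diagonal C the original chi-square is recovered
C_diag = np.diag([0.04, 0.09])
chi2_plain = np.sum((y - f) ** 2 / np.diag(C_diag))
assert np.isclose(chi2_correlated(y, f, C_diag), chi2_plain)
```

Using `np.linalg.solve` instead of explicitly inverting C is numerically safer, which matters when the covariance matrix is nearly singular.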
● Example 1: Hadronic physics on the lattice. Researcher A has
produced two data points (energy levels) with uncertainties.
● Also, researcher A has determined the correlations between the
points, using, e.g., bootstrap.
● Researcher B would like to use these data points for further
analysis. He could fit the data using the normal chi-square, but would
lose the correlations.
● If researcher A provides the covariance matrix, researcher B can
perform a correlated chi-square fit without losing information.
A real world example

A. Alexandru, C. Pelissier, Phys. Rev. D87 (2013) 1, 014503: Resonance parameters of the rho-meson from asymmetrical lattices.

● Measurements of eigenvalues of the QCD Hamiltonian in an asymmetric box with periodic boundary conditions.
● The data points are uncorrelated for different x (different lattices), but correlated at the same x (same lattice). [Figure: eigenvalues vs. xL with error bars]
● The figure shows error bars, but behind it is a 6x6 covariance matrix with 3 blocks of size 2x2 → input to fit hadronic models.

More in the Nuclear Seminar, March 24, 2015: D. Guo, B. Hu, R. Molina.
Another example [M. Doring, U. G. Meissner, JHEP 01 (2009) 009]

● Fit lattice eigenvalues and observe how derived quantities (phase shift and
pole positions) behave: towards a determination of resonance properties
directly from QCD. Here, the resonance: [figure]
● Some physics is used to stabilize the fit: known properties from Effective
Field Theory in V_2 (see Griesshammer, this course), plus an unknown
piece V_4 expanded as a polynomial in energy.
● → Much in physics is about finding the best fit hypothesis:
include the Known explicitly and fit the Unknown!
(continued)

● Phase shifts and amplitudes can be continued to complex energies, and the pole
positions (resonance properties) can be determined.
● Such quantities are complicated, only numerically known functions of the
internal parameters (more in the second part of this lecture). Bootstrap can be
used to calculate confidence regions of the pole positions:
● The actual "pole position" is unknown.
● Add parameters and observe how the estimated pole position changes.
● Convergence is observed, but the quantitative answer is given by the F-test.
● Note the non-linear dependence of the pole position on the parameters
→ the confidence regions are not ellipses any more
→ bootstrap is used to determine the confidence regions.
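The F-test decision for adding parameters can be sketched as follows (scipy, with hypothetical chi-square values; the function and its numbers are illustrative, not from the slide):

```python
import numpy as np
from scipy.stats import f as f_dist

def f_test(chi2_small, p_small, chi2_big, p_big, n_data):
    """F-test for nested fits: does adding (p_big - p_small)
    parameters reduce chi-square significantly?"""
    num = (chi2_small - chi2_big) / (p_big - p_small)
    den = chi2_big / (n_data - p_big)
    F = num / den
    # p-value: probability of an F this large from noise alone
    p_value = f_dist.sf(F, p_big - p_small, n_data - p_big)
    return F, p_value

# Hypothetical numbers: one extra parameter drops chi^2 from 25.0 to 15.0
# with 20 data points
F, p = f_test(25.0, 3, 15.0, 4, 20)
print(F, p)  # small p-value -> the extra parameter is significant
```

A large p-value would instead say the chi-square improvement is consistent with fitting noise, i.e. the extra parameter is superfluous.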
Derived quantities – variable transformations

● In the previous example, the "pole position" is a complicated function of the
internal parameters, which parameterize a fit function that is fit to the
data points (the latter being correlated or uncorrelated):
  – (un)correlated data → internal parameters of the fit hypothesis:
    complicated, not explicitly known, nonlinear dependence
  – internal parameters → derived quantity of physical interest
    ("resonance pole position"): complicated but known nonlinear dependence
  – internal parameters → another derived quantity (phase shift delta):
    complicated but known nonlinear dependence
● How do the uncertainties from the data propagate?

Bootstrap solves the problem, with an almost automatic workflow:

● Generate synthetic data ensembles around the real data.
● Perform fits on all ensembles; save the fit parameters of converged fits.
● For every such parameter set, evaluate your quantity of physical interest.
● Perform a statistical analysis on the set of resulting points (mean, variance,
covariance matrix, confidence regions).
● Applicable to not necessarily linear problems (banana-shaped confidence
regions are not a problem).
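The workflow above can be sketched end-to-end for a toy problem: a hypothetical straight-line fit whose derived quantity is the x-intercept (a nonlinear function of the fit parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: straight line y = -1 + 0.8 x with Gaussian noise,
# so the true x-intercept is 1/0.8 = 1.25
x = np.linspace(0.0, 5.0, 8)
sigma = 0.3
y = -1.0 + 0.8 * x + rng.normal(0.0, sigma, size=x.size)

n_boot = 2000
intercepts = []
for _ in range(n_boot):
    # 1) synthetic data ensemble around the real data
    y_syn = y + rng.normal(0.0, sigma, size=y.size)
    # 2) fit the ensemble
    slope, offset = np.polyfit(x, y_syn, 1)
    # 3) evaluate the derived quantity (x-intercept = -offset/slope)
    intercepts.append(-offset / slope)

# 4) statistical analysis on the set of resulting points
intercepts = np.array(intercepts)
print(intercepts.mean(), intercepts.std())
```

The histogram of `intercepts` directly gives the confidence region of the derived quantity, with no linearization involved.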


Variable transformations explicitly
● In previous example, coordinates are implicitly transformed:
Internal parameterization (no physical meaning of parameters)
→ “Derived” quantities with physical meaning
● Bootstrap can be replaced by explicit coordinate transformation.
● How does the covariance matrix transform?
● We have already met one very special transformation: the
diagonalization of the covariance matrix, i.e., a rotation into the system in
which the parameters are uncorrelated.
● General case is of interest, but be aware that you may make a
non-linear problem out of your linear one!
● The resulting covariance matrix is then only an approximation
close to the minimum.



Transformation in multiple variables
● Consider the new variables as functions of the old ones: y = f(x).
● The dimension m of y is in general unequal to the dimension n of x.
● f is not necessarily linear.
● Show: the covariance matrix in y, C_y, is given by the
covariance matrix in x, C_x, through the transformation

  C_y = J C_x J^T, with the Jacobian J_{ij} = df_i/dx_j.

Proof: blackboard
● This allows one to determine the covariance matrix of derived quantities.
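This propagation can be sketched generically with a finite-difference Jacobian. The polar-to-Cartesian map and all numbers below are a hypothetical check case:

```python
import numpy as np

def propagate_cov(f, x, Cx, eps=1e-6):
    """Transform a covariance matrix: C_y = J C_x J^T,
    with the Jacobian J_ij = df_i/dx_j from forward differences."""
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J @ Cx @ J.T

# Hypothetical check: polar -> Cartesian, (r, phi) -> (r cos phi, r sin phi)
def polar_to_cart(p):
    r, phi = p
    return np.array([r * np.cos(phi), r * np.sin(phi)])

Cx = np.diag([0.01, 0.0004])        # uncorrelated r and phi
Cy = propagate_cov(polar_to_cart, [2.0, 0.5], Cx)
print(Cy)  # x and y come out correlated, although r and phi were not
```

Note that this is the linearized propagation: for strongly nonlinear f, the resulting covariance matrix is only an approximation close to the minimum, as stated above.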



Example
● Where do two linear regression lines intersect, and what is the
uncertainty in the intersection point? [Inspired by Prof. M.A.
Thomson, Univ. of Cambridge, UK]

[Figure: uncertainty in the intersection point (derived quantity)]

(Unrelated issue: note how the fluctuations are identical
→ don't forget to reset your random number generator!)
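This example can be sketched with the linear propagation formula. All numbers are hypothetical, and the two fits are assumed independent, so their parameter covariances combine block-diagonally:

```python
import numpy as np

# Hypothetical fit results: two lines y = m x + b, each with a
# 2x2 covariance matrix of (m, b) from its own fit
m1, b1 = 1.0, 0.0
m2, b2 = -0.5, 3.0
C1 = np.array([[0.010, -0.020], [-0.020, 0.050]])
C2 = np.array([[0.010,  0.020], [ 0.020, 0.050]])

# Intersection point (the derived quantity):
# x* = (b2 - b1)/(m1 - m2), y* = m1 x* + b1
d = m1 - m2
xs = (b2 - b1) / d
ys = m1 * xs + b1

# Jacobian of (x*, y*) with respect to (m1, b1, m2, b2)
J = np.array([
    [-xs / d,           -1.0 / d,     xs / d,      1.0 / d],
    [xs - m1 * xs / d,  1.0 - m1 / d, m1 * xs / d, m1 / d],
])

# C_y = J C_x J^T, with C_x block-diagonal for independent fits
Cx = np.block([[C1, np.zeros((2, 2))],
               [np.zeros((2, 2)), C2]])
Cxy = J @ Cx @ J.T
print((xs, ys))
print(Cxy)  # 2x2 covariance of the intersection point
```

Because x* is a ratio of fit parameters, the dependence is nonlinear; for large parameter uncertainties, a bootstrap over both fits would be the more reliable alternative.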
