
Exact DE is of the form

M(x, y)dx + N(x, y)dy = 0 where ∂M/∂y = ∂N/∂x

To solve, either group terms by inspection, or use the formula

F(x, y) = ∫M dx + ∫[N − (∂/∂y)∫M dx] dy = C

First-order linear DE is of the form

dy/dx + p(x)y = q(x), and is homogeneous if q(x) = 0

Solve by multiplying each term by the integrating factor e^(∫p(x)dx), which makes the left side the derivative of y·e^(∫p(x)dx).

A first order DE of the form dy/dx = f(x, y) is homogeneous of degree n if

f(tx, ty) = t^n f(x, y) for every real number t on some non-empty interval. Also, if M(x, y)dx + N(x, y)dy = 0 and M and N are both homogeneous functions of the same degree, the DE is homogeneous. The transformation y = vx; dy = v dx + x dv converts the homogeneous equation into a separable one. It may also be easier to use x = uy; dx = u dy + y du.
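The exactness test and the potential-function formula can be sketched with sympy; the M and N below are illustrative choices, not an example from these notes:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical exact equation: (2xy + 3) dx + (x**2 + 4y) dy = 0
M = 2*x*y + 3
N = x**2 + 4*y

# Exactness test: dM/dy must equal dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# F = ∫M dx + ∫[N - (d/dy)∫M dx] dy, as in the formula above
F = sp.integrate(M, x) + sp.integrate(N - sp.diff(sp.integrate(M, x), y), y)
# F = x**2*y + 3*x + 2*y**2, so the solution is x^2 y + 3x + 2y^2 = C
```

Checking that ∂F/∂x = M and ∂F/∂y = N confirms the recovered potential.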

For a non-exact DE of the form M(x, y)dx + N(x, y)dy = 0, there may be an integrating factor, μ, so that μM dx + μN dy = 0 is exact, where either

(∂M/∂y − ∂N/∂x)/N = f(x), a function of x alone, giving μ = e^(∫f(x)dx)

or

(∂N/∂x − ∂M/∂y)/M = g(y), a function of y alone, giving μ = e^(∫g(y)dy)

Multiplying each term by μ turns it into an exact equation.

Some common forms:

(x dy − y dx)/x^2 = d(y/x)
(x dy − y dx)/y^2 = d(−x/y)
(x dy − y dx)/(x^2 + y^2) = d(tan⁻¹(y/x))
(x dx + y dy)/(x^2 + y^2) = d((1/2)ln(x^2 + y^2))
(x dy − y dx)/(x^2 − y^2) = d((1/2)ln[(x + y)/(x − y)])
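A quick sympy sketch of the f(x)-type integrating factor; the equation below is an illustrative choice (not from these notes) where (M_y − N_x)/N depends on x alone:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Non-exact example: (3xy + y**2) dx + (x**2 + xy) dy = 0
M = 3*x*y + y**2
N = x**2 + x*y

# not exact: M_y - N_x = (3x + 2y) - (2x + y) = x + y != 0
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) != 0

# (M_y - N_x)/N simplifies to 1/x, a function of x alone
f = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
mu = sp.exp(sp.integrate(f, x))   # integrating factor, here mu = x

# multiplying through by mu makes the equation exact
assert sp.simplify(sp.diff(mu*M, y) - sp.diff(mu*N, x)) == 0
```

The same steps with (N_x − M_y)/M handle the g(y) case.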

Bernoulli Equations are of the form dy/dx + p(x)y = q(x)y^n for n ≠ 0, 1. Use the transformation v = y^(1−n), dv/dx = (1 − n)y^(−n) dy/dx, which results in the transformed DE

dv/dx + (1 − n)p(x)v = (1 − n)q(x)

Solve the linear equation, then transform back using v = y^(1−n).

Second order linear DE with constant coefficients

y'' + ay' + by = f(x)

For the homogeneous (characteristic) part, yh (yc), find the roots of the algebraic equation m^2 + am + b = 0.

Case 1: real, distinct roots: yh = c1 e^(m1 t) + c2 e^(m2 t)
Case 2: real, repeated roots: yh = c1 e^(m1 t) + c2 t e^(m1 t)
Case 3: complex roots m = σ ± iω: yh = e^(σt)(c1 cos ωt + c2 sin ωt)

Euler's Equation for DE of the type

bn x^n y^(n) + b(n−1) x^(n−1) y^(n−1) + ⋯ + b2 x^2 y'' + b1 x y' + b0 y = 0

Use the transformation x = e^z and z = ln x, so that

dy/dx = (1/x)(dy/dz), and in general d^n y/dx^n = (1/x^n) D(D − 1)(D − 2)⋯(D − n + 1)y, where D = d/dz

Substitute the derivatives back into the original equation and it will become a linear DE with constant coefficients.

Reduction of Order for

y'' + p(x)y' + q(x)y = f(x)

First, you must know one solution of the homogeneous equation, y1. Substitute y = y1 v; taking the 1st and 2nd derivatives and substituting into the DE gives

y1 v'' + (2y1' + p y1)v' = f

Substitute v' = w to get

y1 w' + (2y1' + p y1)w = f

which is a first order DE. Use an integrating factor to solve for w, integrate w to find v, then evaluate y = y1 v.
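The three characteristic-root cases can be checked numerically with numpy's root finder; the (a, b) pairs below are illustrative choices:

```python
import numpy as np

# Roots of the characteristic equation m^2 + a m + b = 0 for y'' + a y' + b y = 0
def char_roots(a, b):
    return np.roots([1, a, b])

r1 = char_roots(-5, 6)   # m^2 - 5m + 6 = 0 -> real distinct roots 3, 2  (Case 1)
r2 = char_roots(-4, 4)   # m^2 - 4m + 4 = 0 -> repeated root 2           (Case 2)
r3 = char_roots(-2, 5)   # m^2 - 2m + 5 = 0 -> complex roots 1 ± 2i      (Case 3)
```

Case 1 gives e^(3t) and e^(2t) terms, Case 2 gives e^(2t) and t e^(2t), and Case 3 gives e^t(c1 cos 2t + c2 sin 2t).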

Method of Undetermined Coefficients: For the non-homogeneous part, determine a DE that would have that solution. For example, 4t e^t sin 2t must be the result of a repeated complex root where σ = 1 and ω = 2, so m = 1 ± 2i and the DE must be [(D − 1)^2 + 4]^2 y = 0. Combine both differential operators, being careful to account for repeated roots. Differentiate the particular part of the solution and substitute back into the original equation to find the coefficients of the particular solution.

EX: (D^2 + 1)y = sin t

The roots of the LHS are m = ±i, and the roots of the operator annihilating sin t are also ±i (root is repeated, so the particular solution has a factor of t in each term):

yh = c1 cos t + c2 sin t
yp = At cos t + Bt sin t
yp' = A cos t − At sin t + B sin t + Bt cos t
yp'' = −At cos t − 2A sin t − Bt sin t + 2B cos t

Substituting into the original DE gives

−2A sin t + 2B cos t = sin t, so A = −1/2 and B = 0

y = c1 cos t + c2 sin t − (1/2)t cos t

Variation of Parameters for second order DE

y'' + p(x)y' + q(x)y = f(x), with yh = c1 y1 + c2 y2

Assume yp = A(x)y1 + B(x)y2, so that

yp' = A'y1 + B'y2 + A y1' + B y2'

Set A'y1 + B'y2 = 0; substituting into the original DE then gives A'y1' + B'y2' = f(x), yielding the system of DE:

A'y1 + B'y2 = 0
A'y1' + B'y2' = f(x)

Solve for A' and B', then integrate to find A and B. Also, disregard constants of integration when finding A and B, since they are intermediate solutions. This method extends to higher order equations, for example for a third order equation:

A'y1 + B'y2 + C'y3 = 0
A'y1' + B'y2' + C'y3' = 0
A'y1'' + B'y2'' + C'y3'' = f(x)

Power Series Solutions of DE with variable coefficients

y'' + p(x)y' + q(x)y = 0, where p(x) and q(x) are analytic functions (have power series representations)

Substitute

y = Σ (m=0 to ∞) am x^m = a0 + a1 x + a2 x^2 + ⋯
y' = Σ m am x^(m−1) = a1 + 2a2 x + 3a3 x^2 + ⋯
y'' = Σ m(m − 1) am x^(m−2) = 2a2 + 3·2 a3 x + 4·3 a4 x^2 + ⋯

Recurrence Relations (general solutions): Plug in just the series for y, y', and y'', shift indices so that all x's have the same power, combine summations, and equate coefficients of the same powers. If possible, identify the resulting series as the expansion of a known function.

Radius of convergence:

R = 1 / lim (m→∞) |am|^(1/m)  or  R = 1 / lim (m→∞) |a(m+1)/am|
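The undetermined-coefficients result for (D^2 + 1)y = sin t can be verified symbolically; this just checks the particular solution derived above:

```python
import sympy as sp

t = sp.symbols('t')

# Particular solution found above: yp = -(1/2) t cos t
yp = -sp.Rational(1, 2) * t * sp.cos(t)

# Residual of (D^2 + 1) yp - sin t should be identically zero
residual = sp.simplify(sp.diff(yp, t, 2) + yp - sp.sin(t))
```

`residual == 0` confirms A = −1/2 and B = 0.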

Initial Value Problem: For an IVP where the coefficients are analytic at x0, the solution can be expanded as the power series

y(x) = Σ (n=0 to ∞) [y^(n)(x0)/n!] (x − x0)^n

Steps: 1) Substitute x0 and y0 into the original equation to get y0''. 2) Differentiate the original equation and solve for y0'''. 3) Continue differentiating and substituting to keep getting more coefficients. (If you have a second order DE you have to have y0 and y0'.)

EX: y'' + x y' + e^x y = 4, with y0 = y(0) = 1 and y0' = y'(0) = 4

Substitute the initial conditions into the DE: y0'' + (0)(4) + e^0 (1) = 4, so y0'' = 3.

Differentiate the original DE: y''' + y' + x y'' + e^x y' + e^x y = 0.

Plug in the initial conditions: y0''' = −[4 + (0)(3) + (1)(4) + (1)(1)] = −9.

The expansion about 0 is

y(x) = y0 + y0' x + (y0''/2!) x^2 + (y0'''/3!) x^3 + ⋯ = 1 + 4x + (3/2)x^2 − (9/6)x^3 + ⋯

x0 is an ordinary point of the DE P(x)y'' + Q(x)y' + R(x)y = F(x) if P(x0) is non-zero and the ratios Q(x)/P(x), R(x)/P(x), and F(x)/P(x) are ALL analytic at x0. Otherwise x0 is called a singular point.

Can use power series method for ordinary points. For singular points must use Frobenius method.
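The Taylor-coefficient bootstrapping can be checked mechanically; the sketch below uses the IVP y'' + x y' + e^x y = 4 with y(0) = 1, y'(0) = 4, treating the derivative values at 0 as plain symbols:

```python
import sympy as sp

x, Y0, Y1, Y2 = sp.symbols('x y0 y1 y2')

# Step 1: solve the DE for y'' and substitute x0 = 0, y0 = 1, y0' = 4
y2 = (4 - x*Y1 - sp.exp(x)*Y0).subs({x: 0, Y0: 1, Y1: 4})          # y0'' = 3

# Step 2: differentiate the DE once: y''' = -y' - x y'' - e^x y' - e^x y
y3 = (-Y1 - x*Y2 - sp.exp(x)*Y1 - sp.exp(x)*Y0).subs(
    {x: 0, Y0: 1, Y1: 4, Y2: y2})                                   # y0''' = -9

# Step 3: assemble the Taylor polynomial about x0 = 0
taylor = 1 + 4*x + y2*x**2/sp.factorial(2) + y3*x**3/sp.factorial(3)
```

Continuing the differentiation produces as many coefficients as needed.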
A singular point is REGULAR if

(x − x0)Q(x)/P(x) and (x − x0)^2 R(x)/P(x) are analytic, and is IRREGULAR if not.

For a regular singular point substitute

y = Σ (n=0 to ∞) cn (x − x0)^(n+r), which for x0 = 0 is y = Σ cn x^(n+r)
y' = Σ (n + r) cn (x − x0)^(n+r−1)
y'' = Σ (n + r)(n + r − 1) cn (x − x0)^(n+r−2)

(the first term might not necessarily be 0 since r may be non-zero, so summations in the derivatives start at n = 0; ALSO must assume c0 is nonzero).

Frobenius Method: Substitute the summations into the DE, shift indices, and equate coefficients to obtain an indicial equation. If:
1. the r's are distinct and differ by a non-integer, then there are two Frobenius solutions based on the two r's.
2. r is repeated, there are two solutions, one Frobenius solution y1, and another of the form
   y2 = y1 ln x + Σ (n=0 to ∞) cn* x^(n+r)
   Substitute y2 and its derivatives into the original DE and obtain a recurrence relation for cn*. Use c0 = 1 for convenience when substituting for y1.
3. the r's differ by an integer: there may not be a second Frobenius solution. To find out, substitute the equation
   y2 = k y1 ln x + Σ (n=0 to ∞) cn* x^(n+r2)
   and its derivatives to obtain an equation for k and a recurrence relation for cn*. If k = 0 then the solution is a second Frobenius solution and the log term is not needed; if k ≠ 0 the log term is required.

Bessel Functions Type 1

Bessel's equation is x^2 y'' + x y' + (x^2 − ν^2)y = 0. The Bessel Functions are defined as

Jν(x) = Σ (m=0 to ∞) [(−1)^m / (m! Γ(m + ν + 1))] (x/2)^(2m+ν)

and

J−ν(x) = Σ (m=0 to ∞) [(−1)^m / (m! Γ(m − ν + 1))] (x/2)^(2m−ν)

(the gamma function extends factorials to any ν). General solution:

y(x) = c1 Jν(x) + c2 J−ν(x)

If ν and −ν are integers then the solutions obtained are LINEARLY DEPENDENT because

J−n(x) = (−1)^n Jn(x) for ν = n = any integer, where Jn(x) = Σ (m=0 to ∞) [(−1)^m / (m! (n + m)!)] (x/2)^(2m+n)

Bessel Functions Type 2

EX: x^2 y'' + x y' + x^2 y = 0, so ν = 0. One Frobenius solution is

y1 = c0 Σ (k=0 to ∞) [(−1)^k / (2^(2k) (k!)^2)] x^(2k)

(the series for J0(x)). Since the indicial root r = 0 is repeated, the second solution carries the log term:

y2 = y1 ln x + Σ (k=1 to ∞) [(−1)^(k+1) hk / (2^(2k) (k!)^2)] x^(2k), where hk = 1 + 1/2 + 1/3 + ⋯ + 1/k

Linear Algebra

A field is any set of two or more elements for which the following operations hold:
1. If a and b are in the set, then a + b = b + a (closed for addition).
2. ab = ba (closed for multiplication).
3. There exists a unique null element 0 such that a + 0 = a and 0a = 0.
4. For every a there exists a unique negative element −a such that a + (−a) = 0.
5. There exists a unique element 1 such that 1·a = a·1 = a.
6. For every a ≠ 0 there exists a unique reciprocal a⁻¹ such that a·a⁻¹ = 1.
7. The associative, commutative, and distributive properties are satisfied:
   Commutative: xy = yx for all x, y
   Associative: (xy)z = x(yz) for all x, y, z
   Distributive: If ∘ and · are operations on the set, then
   i. · is said to be LEFT distributive over ∘ if x·(y ∘ z) = (x·y) ∘ (x·z)
   ii. · is said to be RIGHT distributive over ∘ if (x ∘ y)·z = (x·z) ∘ (y·z)
   iii. · is said to be DISTRIBUTIVE over ∘ if it is both right and left distributive.
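The defining series for Jν(x) is easy to evaluate directly; this sketch sums its first terms (note that for negative integer ν the gamma factor hits poles, which is exactly why J−n is not independent of Jn):

```python
import math

def bessel_j(nu, x, terms=30):
    """Partial sum of J_nu(x) = sum_m (-1)^m / (m! * Gamma(m+nu+1)) * (x/2)^(2m+nu)."""
    total = 0.0
    for m in range(terms):
        total += ((-1)**m / (math.factorial(m) * math.gamma(m + nu + 1))
                  * (x / 2)**(2*m + nu))
    return total

# J_0(0) = 1, and the first zero of J_0 is near x = 2.404826
val_at_zero = bessel_j(0, 2.404826)
```

For moderate x the series converges rapidly, so 30 terms are far more than enough.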

Matrix equality: all elements are identical.

Addition/subtraction: element by element, must be same size; also A + B = B + A and A + (B + C) = (A + B) + C.

Matrix multiplication: must be conformable: A(m×n) B(p×q) = C(m×q) as long as n = p. AB ≠ BA in general; A(BC) = (AB)C; c(AB) = (cA)B = A(cB); A(B + C) = AB + AC; (B + C)A = BA + CA.

Null matrix: all elements are 0. Note that just because AB = 0 doesn't mean A or B is null, but A·0 = 0.

Identity matrix: diagonal elements all 1 (square matrix); AI = A and IA = A.

Powers of a matrix (must be a square matrix): A^m A^n = A^(m+n), A^(n+1) = A·A^n, A^1 = A.

Properties of the Transpose:
(A + B)^T = A^T + B^T; (cA)^T = cA^T; (A^T)^T = A; (AB)^T = B^T A^T, so (AB)^T ≠ A^T B^T in general; (ABC)^T = C^T B^T A^T.

Symmetric Matrices: A = A^T. A + A^T is always symmetric. If A = A^T and B = B^T then (AB)^T = B^T A^T = BA ≠ AB in general, so the product of two symmetric matrices is not necessarily symmetric. If A is a p×p symmetric matrix and B is any p×q matrix, then B^T A B is symmetric: (B^T A B)^T = B^T A^T (B^T)^T = B^T A B.

Diagonal matrices can be denoted as A = diag(a1, a2, …, an).

** eigenvalues of a symmetric matrix are real, and eigenvectors of distinct eigenvalues are orthogonal
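These transpose and symmetry facts are cheap to spot-check numerically on random matrices (the sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 4))

# (AB)^T = B^T A^T (note the reversed order)
assert np.allclose((A @ B).T, B.T @ A.T)

# A + A^T is always symmetric
S = A + A.T
assert np.allclose(S, S.T)

# B^T S B is symmetric whenever S is symmetric
assert np.allclose(B.T @ S @ B, (B.T @ S @ B).T)

# eigenvalues of a real symmetric matrix are real
w = np.linalg.eigvals(S)
```

A single random check is not a proof, but it catches transcription mistakes quickly.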

Special matrices:
Symmetric: A = A^T
Skew symmetric: A = −A^T
Conjugate matrix Ā: replace each element with its complex conjugate; all elements are real iff Ā = A.
Associate matrix: the transpose of the conjugate, Ā^T (written A*).
Hermitian: A = Ā^T
Skew Hermitian: A = −Ā^T
Orthogonal: A^T A = I
Unitary: Ā^T A = I, i.e. A⁻¹ = Ā^T
Positive definite: x^T A x > 0 for all x ≠ 0
Non-negative definite: x^T A x ≥ 0 for all x ≠ 0
Indefinite: x^T A x > 0 for some x and < 0 for others
Idempotent: A^2 = A
Involutory: A^2 = I
Nilpotent: A^k = 0 for some integer k, with A^(k−1) ≠ 0
Diagonally Dominant: |aii| ≥ Σ (j≠i) |aij| for every i; Strictly Diagonally Dominant if the inequality is strict.

Properties of the determinant (A must be square):
1. |A^T| = |A|.
2. If all elements in any row or in any column are zero, then |A| = 0.
3. Interchanging any two rows or columns only changes the sign of the determinant.
4. If any two rows of A are proportional, or if a row is a linear combination of other rows, then |A| = 0.
5. Multiplying all elements of any row/column by a scalar also multiplies the determinant by that scalar.
6. Any multiple of a row/column can be added to any other row/column without changing the value of the determinant.
7. If A and B are both n×n matrices, then |AB| = |A||B|.
8. det(cA) = c^n det(A).

The Rank of a matrix is the size of the largest non-zero determinant that can be formed from A. It is also the number of linearly independent vectors constituting the matrix: reduce to UT form and count the non-zero rows. A(n×n) has max rank n, and if so it is said to be full rank. If rank(A) = rA, rank(B) = rB, and rank(AB) = rank(C) = rC, then 0 ≤ rC ≤ min(rA, rB).

The Trace of a matrix is the sum of the diagonal elements, and is also the sum of the eigenvalues (the determinant is the product of the eigenvalues). Tr(A + B) = Tr(A) + Tr(B); Tr(AB) = Tr(BA); but Tr(AB) ≠ Tr(A)Tr(B) in general.
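A few of the determinant, rank, and trace facts above, checked on small illustrative matrices:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])   # row 3 = 2*row 2 - row 1, so rank 2 and det 0

rank_A = np.linalg.matrix_rank(A)      # 2: a row is a linear combination of others
det_A = np.linalg.det(A)               # ~0 for the same reason

# trace = sum of diagonal = sum of eigenvalues
trace_matches = np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)

# det(cA) = c^n det(A), checked on an invertible 2x2 matrix
B = np.array([[2., 1.], [1., 3.]])
scaling_ok = np.isclose(np.linalg.det(3*B), 3**2 * np.linalg.det(B))
```

The singular example also illustrates determinant property 4.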

Matrix Inversion: if A is n×n and B = A⁻¹, then BA = AB = I (BA = A⁻¹A = I and AB = AA⁻¹ = I).

A⁻¹ = C^T / |A|, where C is the matrix of cofactors of A, so C^T is the adjoint (see Laplace expansion). If |A| = 0, A is called a singular matrix and has no inverse.

Laplace expansion:
1. Minor: for element ajk, remove all elements of the jth row and kth column; the determinant of what remains is the minor of ajk.
2. Cofactors: multiply the minor of ajk by (−1)^(j+k); the result is called the cofactor and is denoted by Ajk.
3. The value of the determinant is the sum of the products of the elements in any row (or column) by their corresponding cofactors.

Matrix differentiation: element by element. Matrix integration: element by element.

Systems of Equations:
Cramer's rule: If m = n and A is a non-singular matrix (A⁻¹ exists), then

x1 = Δ1/det A, x2 = Δ2/det A, …

where Δk, k = 1, 2, …, n is the determinant obtained from A by removing the kth column and replacing it with R, the vector of right-hand sides.

The following cases can arise:
1. Δ ≠ 0, R ≠ 0: unique solution; not all solutions will be zero.
2. Δ ≠ 0, R = 0: the only solution is the trivial solution x1 = x2 = x3 = ⋯ = xn = 0.
3. Δ = 0, R = 0: infinitely many solutions other than the trivial solution.
4. Δ = 0, R ≠ 0: infinitely many solutions will exist IFF all the determinants Δk are zero. Otherwise there will be no solution.
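Cramer's rule as stated above translates directly into code; the 2×2 system below is an illustrative choice:

```python
import numpy as np

def cramer_solve(A, r):
    """Solve A x = r by Cramer's rule: x_k = det(A_k) / det(A)."""
    A = np.asarray(A, dtype=float)
    r = np.asarray(r, dtype=float)
    d = np.linalg.det(A)            # must be non-zero (A non-singular)
    x = np.empty(len(r))
    for k in range(len(r)):
        Ak = A.copy()
        Ak[:, k] = r                # replace the k-th column with the RHS vector
        x[k] = np.linalg.det(Ak) / d
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
sol = cramer_solve([[2., 1.], [1., 3.]], [5., 10.])
```

For large systems Gaussian elimination is far cheaper, but Cramer's rule is handy for small symbolic work like the Laplace-transform systems later in these notes.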

Pivotal Condensation to find the determinant: with pivot a11 ≠ 0, replace the n×n determinant by one of order n − 1 whose elements are the 2×2 minors formed with the pivot,

bij = | a11 a1j ; ai1 aij | = a11·aij − ai1·a1j, for i, j = 2, …, n

so that |A| = |B| / a11^(n−2). Repeat the condensation until a 2×2 or 1×1 determinant remains.
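A recursive sketch of pivotal condensation, assuming every leading pivot encountered is non-zero (rows can be swapped first if not); the 3×3 example is an illustrative choice:

```python
import numpy as np

def chio_det(A):
    """Determinant by pivotal (Chio) condensation about a11, assumed non-zero."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # matrix of 2x2 minors pivoting on a11: b_ij = a11*a_ij - a_i1*a_1j
    B = A[0, 0] * A[1:, 1:] - np.outer(A[1:, 0], A[0, 1:])
    return chio_det(B) / A[0, 0] ** (n - 2)

M = [[2., 3., 1.],
     [1., 4., 2.],
     [3., 1., 5.]]
d = chio_det(M)   # 28
```

Each condensation step shrinks the determinant by one order, mirroring the hand procedure.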

Eigenvalues and Eigenvectors: for solving Ax = λx, x ≠ 0.

Rewrite the equation as (A − λI)x = 0, which has a non-trivial solution as long as A − λI is singular. The characteristic polynomial is

C(λ) = det(A − λI) = ± det(λI − A)

Procedure:
1. Find C(λ) = det(A − λI).
2. Solve the polynomial for λ (set C(λ) = 0).
3. Plug each λ back into the original equation to get the x's (likely to be non-unique, so pick easy values).

Let A be an n×n matrix. The matrix P⁻¹AP is a diagonal matrix iff the columns of P are a set of n linearly independent eigenvectors of A.

Companion Matrix: Let p(x) = x^n + a1 x^(n−1) + a2 x^(n−2) + ⋯ + a(n−1) x + an. Then the n×n matrix with 1's on the superdiagonal, the coefficients −an, −a(n−1), …, −a1 across the last row, and 0's elsewhere,

[ 0    1    0   ⋯  0 ]
[ 0    0    1   ⋯  0 ]
[ ⋮              ⋱  ⋮ ]
[ 0    0    0   ⋯  1 ]
[ −an −a(n−1) ⋯ −a1 ]

is the companion matrix of the polynomial. The eigenvalues of this matrix are the roots of the monic polynomial p.

Cayley-Hamilton Theorem: Every matrix is a solution of its own characteristic equation. Let A be the n×n matrix whose characteristic polynomial is

C(λ) = det(λI − A) = λ^n + a(n−1) λ^(n−1) + ⋯ + a1 λ + a0

then

C(A) = A^n + a(n−1) A^(n−1) + ⋯ + a1 A + a0 I = 0
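Both the companion-matrix fact and Cayley-Hamilton can be verified on a small cubic; p(x) = x^3 − 6x^2 + 11x − 6 = (x − 1)(x − 2)(x − 3) is an illustrative choice:

```python
import numpy as np

# Companion matrix of p(x) = x^3 - 6x^2 + 11x - 6:
# ones on the superdiagonal, -a's across the last row
C = np.array([[0.,  1.,   0.],
              [0.,  0.,   1.],
              [6., -11.,  6.]])

# eigenvalues of the companion matrix are the roots of p
roots = np.sort(np.linalg.eigvals(C).real)   # 1, 2, 3

# Cayley-Hamilton: C satisfies its own characteristic polynomial, p(C) = 0
pC = C @ C @ C - 6*(C @ C) + 11*C - 6*np.eye(3)
```

`pC` coming out as the zero matrix is exactly the Cayley-Hamilton statement for this A.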

Orthogonal vectors: x and y are said to be orthogonal if x^T y = 0. The length of x is (x^T x)^(1/2). A set of vectors x1, x2, x3, … is said to form an orthogonal set of vectors if xi^T xj = 0 for j ≠ i. If each vector is normalized then xi^T xi = 1, and the vectors are said to be orthonormal.

Gram-Schmidt orthogonalization process:

u1 = v1
u2 = v2 − [(v2^T u1)/(u1^T u1)] u1
u3 = v3 − [(v3^T u1)/(u1^T u1)] u1 − [(v3^T u2)/(u2^T u2)] u2
⋮
um = vm − [(vm^T u1)/(u1^T u1)] u1 − [(vm^T u2)/(u2^T u2)] u2 − ⋯ − [(vm^T u(m−1))/(u(m−1)^T u(m−1))] u(m−1)
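A minimal Gram-Schmidt sketch (the modified form, which subtracts each projection from the running vector and is numerically better behaved); the three input vectors are illustrative:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize: u_m = v_m - sum_k (v_m . u_k / u_k . u_k) u_k."""
    us = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for uk in us:
            u -= (u @ uk) / (uk @ uk) * uk   # remove the component along uk
        us.append(u)
    return us

vs = [[1., 1., 0.], [1., 0., 1.], [0., 1., 1.]]
u1, u2, u3 = gram_schmidt(vs)
```

Normalizing each uk afterward yields an orthonormal set.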

Laplace Transforms

L{f(t)} = F(s) = ∫ (0 to ∞) e^(−st) f(t) dt, where f(t) = 0 when t < 0

L{f'(t)} = sL{f(t)} − f(0)
L{f^(n)(t)} = s^n L{f(t)} − s^(n−1) f(0) − s^(n−2) f'(0) − ⋯ − f^(n−1)(0)

For a periodic function f(t + τ) = f(t):

L{f(t)} = [1/(1 − e^(−τs))] ∫ (0 to τ) e^(−st) f(t) dt

Shifting the t-variable: L{f(t − a)u(t − a)} = e^(−as) F(s), and L⁻¹{e^(−bs) F(s)} = f(t − b)u(t − b).

Shifting the s-variable: L{e^(at) f(t)} = F(s − a).

L{t f(t)} = −F'(s); L{t^n f(t)} = (−1)^n F^(n)(s), n = positive integer; L{f(t)/t} = ∫ (s to ∞) F(z) dz.

Unit step function (Heaviside Function):

u(t − a) = 0 for 0 ≤ t < a, and 1 for t ≥ a

Unit impulse function:

δa(t − t0) = 1/(2a) for t0 − a < t < t0 + a, and 0 otherwise; ∫ δa(t − t0) dt = 1

Dirac delta fn: δ(t − t0) = lim (a→0) δa(t − t0); ∫ δ(t − t0) dt = 1; ∫ δ(t − t0) f(t) dt = f(t0); and

L{δ(t − t0)} = lim (a→0) L{δa(t − t0)} = e^(−st0)

Convolution Theorem: If L⁻¹{F(s)} = f(t) and L⁻¹{G(s)} = g(t), then

L⁻¹{F(s)G(s)} = ∫ (0 to t) f(u) g(t − u) du = f * g, and f * g = g * f

Transforming polynomials: ax^2 + bx + c can be transformed to a[(x + k)^2 + h^2], where k = b/(2a) and h^2 = c/a − b^2/(4a^2).

Systems of DE using Laplace Transform:

EX: dx/dt = 2x − 3y, dy/dt = y − 2x, with x(0) = 8, y(0) = 3. Let L{x} = X and L{y} = Y.

sX − 8 = 2X − 3Y → (s − 2)X + 3Y = 8
sY − 3 = Y − 2X → 2X + (s − 1)Y = 3

Use Cramer's Rule:

X = | 8, 3 ; 3, s−1 | / | s−2, 3 ; 2, s−1 | = (8s − 17)/(s^2 − 3s − 4) = (8s − 17)/[(s + 1)(s − 4)] = 5/(s + 1) + 3/(s − 4)

x = L⁻¹{X} = 5e^(−t) + 3e^(4t)

Y = | s−2, 8 ; 2, 3 | / (s^2 − 3s − 4) = (3s − 22)/[(s + 1)(s − 4)] = 5/(s + 1) − 2/(s − 4)

y = L⁻¹{Y} = 5e^(−t) − 2e^(4t)
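The solution pair x = 5e^(−t) + 3e^(4t), y = 5e^(−t) − 2e^(4t) for the system x' = 2x − 3y, y' = y − 2x, x(0) = 8, y(0) = 3 can be checked numerically:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)
x = 5*np.exp(-t) + 3*np.exp(4*t)
y = 5*np.exp(-t) - 2*np.exp(4*t)

# derivatives of the claimed solution, computed by hand
dx = -5*np.exp(-t) + 12*np.exp(4*t)
dy = -5*np.exp(-t) - 8*np.exp(4*t)
```

Both DEs and both initial conditions should hold at every sample point.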
