problems
Tomas Tinoco De Rubira
1. introduction
2. available methods
3. contributions and proposed methods
4. implementation
5. experiments
6. conclusions
1. Introduction
1.1. background
1.2. formulation
1.1. background
• essential for:
– expansion, planning and daily operation [5]
– real time security analysis (contingencies) [44]
1.2. formulation
network
typically: [5][6]
• bus i = s: slack
– voltage magnitude and angle given
• bus i ∈ R: regulated
– active power and voltage magnitude given (PV)
• bus i ∈ U: unregulated
– active and reactive power given (PQ)
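The bus conditions above reduce to power-balance mismatches at each bus. A minimal NumPy sketch of the mismatch evaluation; the 2-bus admittance matrix and zero injections are made-up illustration values, not from the slides:

```python
import numpy as np

def mismatches(V, Y, P_given, Q_given):
    """Per-unit power mismatches: complex injections are
    S = V * conj(Y @ V); mismatches are specified minus computed."""
    S = V * np.conj(Y @ V)
    return P_given - S.real, Q_given - S.imag

# made-up 2-bus network: one line with series admittance 1 - 5j (per-unit)
Y = np.array([[1.0 - 5.0j, -1.0 + 5.0j],
              [-1.0 + 5.0j, 1.0 - 5.0j]])
V = np.ones(2, dtype=complex)            # flat start: v = 1, angle = 0
dP, dQ = mismatches(V, Y, np.zeros(2), np.zeros(2))
```

With no shunts and zero injections the flat profile already balances, so both mismatches vanish; on a real network, ∆P and ∆Q at the starting point are what the solver drives to zero.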
2. Available methods
2.1. overview
2.2. Newton-Raphson
2.3. Newton-Krylov
2.1. overview
• improving NR
– speed: fast decoupled [5], DC power flow, simplified XB and BX [47]
– scalability: Newton-Krylov [13][14][15][20][26][32][35][45]
– robustness: integration [34] and optimal multiplier [9][11][27][41][42]
• other methods
– genetic, neural networks and fuzzy algorithms
∗ not competitive with NR-based methods [44]
– sequential conic programming [28][29]
– holomorphic embedding [43]
2.2. Newton-Raphson
• linearize at xk: fk + Jk(xk+1 − xk) = 0
properties: [36]
[Flowchart: NR iteration]
1. set x0, k = 0
2. evaluate Jk and fk
3. if ∥fk∥∞ < ϵ: done
4. else: solve Jk pk = −fk, update xk+1 = xk + pk, set k = k + 1, go to 2
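A direct NumPy transcription of this loop, run on a made-up two-equation toy system rather than an actual power flow:

```python
import numpy as np

def newton_raphson(f, J, x0, eps=1e-8, max_iter=50):
    """Plain NR: solve Jk pk = -fk and update xk+1 = xk + pk
    until the infinity norm of the mismatch drops below eps."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        fk = f(x)
        if np.linalg.norm(fk, np.inf) < eps:
            return x, k
        pk = np.linalg.solve(J(x), -fk)
        x = x + pk
    return x, max_iter

# toy system: x0^2 = 2 and x1 = x0
f = lambda x: np.array([x[0]**2 - 2.0, x[1] - x[0]])
J = lambda x: np.array([[2.0 * x[0], 0.0], [-1.0, 1.0]])
x, iters = newton_raphson(f, J, [1.0, 1.0])
```

From a good starting point the residual decreases quadratically, which is the behavior the slides exploit.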
for power flow: [5][6]
∆Pi = 0, i ∈ R ∪ U
∆Qi = 0, i ∈ U
• heuristics to enforce Qi^min ≤ Qi^g ≤ Qi^max, i ∈ R
– if violation
∗ Qi^g → fixed
∗ R → U
∗ vi → free
– more complex ones for R ← U [46]
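The one-way part of this heuristic is easy to sketch. The list-based bus representation below is illustrative only, and the more complex R ← U logic of [46] is omitted:

```python
def switch_pv_to_pq(bus_types, Qg, Qmin, Qmax):
    """One-way heuristic: if a regulated (PV) bus violates a
    reactive-power limit, clamp Qg at the violated bound, switch
    the bus to PQ (R -> U), and let its voltage magnitude vary."""
    switched = []
    for i, t in enumerate(bus_types):
        if t == 'PV' and not (Qmin[i] <= Qg[i] <= Qmax[i]):
            Qg[i] = min(max(Qg[i], Qmin[i]), Qmax[i])  # fix Qg at bound
            bus_types[i] = 'PQ'                        # R -> U
            switched.append(i)                         # v_i now free
    return switched

# made-up 4-bus example: bus 1 exceeds its Qmax of 1.0
bus_types = ['slack', 'PV', 'PV', 'PQ']
Qg = [0.0, 1.5, 0.2, 0.0]
switched = switch_pv_to_pq(bus_types, Qg,
                           Qmin=[-9.9, -1.0, -1.0, 0.0],
                           Qmax=[9.9, 1.0, 1.0, 0.0])
```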
2.3. Newton-Krylov
• BCGStab [20][32][35]
• CGS [20]
preconditioners
• polynomial [14][32]
• LU-based [20][26][35][45]
2.4. optimal multiplier
2.5. continuous Newton
Runge-Kutta [34]
• no speed comparison
2.6. sequential conic programming
2.7. holomorphic embedding
3. Contributions and proposed methods
3.1. line search
line search Newton-Raphson (LSNR)
[Figure: merit function h(x + αp) vs α; condition 2) requires the slope at the accepted step to be flatter than reference lines through α = 0]
[Flowchart: LSNR iteration, NR plus a step length αk]
1. set x0, k = 0
2. evaluate Jk and fk
3. if ∥fk∥∞ < ϵ: done
4. else: solve Jk pk = −fk, find αk, update xk+1 = xk + αk pk, set k = k + 1, go to 2
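A sketch of LSNR using Armijo backtracking on the merit function h(x) = ½∥f(x)∥²; the constants σ and β are assumptions, not values from the slides. The demo solves arctan(x) = 0 from x0 = 2, a classic case where undamped NR diverges:

```python
import numpy as np

def lsnr(f, J, x0, eps=1e-8, max_iter=100, beta=0.5, sigma=1e-4):
    """NR direction plus Armijo backtracking: accept the largest
    alpha in {1, beta, beta^2, ...} giving sufficient decrease of
    h(x) = 0.5 * ||f(x)||^2 along the Newton direction."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        fk = f(x)
        if np.linalg.norm(fk, np.inf) < eps:
            return x, k
        pk = np.linalg.solve(J(x), -fk)
        h0 = 0.5 * fk @ fk              # h'(0) = -2*h0 for Newton direction
        alpha = 1.0
        while 0.5 * np.sum(f(x + alpha * pk)**2) > (1.0 - 2.0 * sigma * alpha) * h0:
            alpha *= beta
            if alpha < 1e-10:           # safeguard against stalling
                break
        x = x + alpha * pk
    return x, max_iter

f = lambda x: np.arctan(x)              # scalar system where plain NR diverges
J = lambda x: np.diag(1.0 / (1.0 + x**2))
x, iters = lsnr(f, J, np.array([2.0]))
```

Near the solution the full step α = 1 is accepted, so quadratic convergence is retained; far away the damping prevents divergence.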
benefits
[Figure: two panels, mismatch on a log scale (1E-8 to 1E+6) vs iterations, comparing LSNR and NR]
3.2. starting point
NR needs good x0
flat network homotopy (FNH)
[Figure: homotopy path from starting point x0 through intermediate solutions xi to x*]
how?
minimize_x   Fi(x) = (ηi/2) Σ_{j∈U} (vj − 1)² + (1/2) ∥f(x, ti)∥²₂
• goes away: ηi ↓ 0 as i → ∞
starting points:
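The warm-started continuation structure (not the actual FNH subproblems) can be sketched on a scalar toy family g(x, t) = x³ − t, solving each instance from the previous solution:

```python
import numpy as np

def newton_scalar(g, dg, x, iters=50):
    """Inner solver: plain scalar Newton on g(x) = 0."""
    for _ in range(iters):
        x = x - g(x) / dg(x)
    return x

def continuation(x0, ts):
    """Warm-started continuation on a made-up family g(x, t) = x^3 - t:
    solve the easy instance (first t) first, then reuse each solution
    as the starting point for the next, harder instance."""
    x = x0
    for t in ts:
        x = newton_scalar(lambda z: z**3 - t, lambda z: 3.0 * z**2, x)
    return x

# trace the root from t = 1 (x = 1) to t = 9 (x = 9**(1/3))
x = continuation(1.0, np.linspace(1.0, 9.0, 5))
```

Each subproblem starts close to its own solution, which is exactly the property FNH relies on when NR alone lacks a good x0.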
3.3. formulation and bus type switching
square formulation
analogy ...
[Figure: animation contrasting an algorithm named NR, which hits Q violations and must SWITCH!, with an algorithm named PBPF, moving through power flow solutions toward good solutions]
vanishing regulation (VR)
minimize_x   ϕ(x) = (α/2) Σ_{j∈R} (vj − vj^t)² + (β/2) Σ_{j∈U} (vj − 1)²
subject to   f(x) = 0
minimize_x   Fi(x) = μi ϕ(x) + (1/2) ∥f(x)∥²₂
solving subproblems: apply LSNR to ∇Fi(x) = 0
• VR ignores Σj fj(xk) ∇²fj(xk) to get descent directions easily, but then ...
mismatch vs regulation trade-off
[Figure: four panels (eiSP2008, eiSP2012, aesoSP2014, ucteWP2002) plotting max mismatch (pu, log scale) and dvreg (volts, pu) against iterations]
penalty-based power flow (PBPF)
extend VR:
improve VR:
formulate as optimization, encode preferences & physical limits in objective
subject to f(x) = 0
penalties:
ϕ^u(x) = Σ_{j∈U} ϕ^u_j(vj)
ϕ^q(x) = Σ_{j∈R} ϕ^q_j(Q^g_j)
ϕ^r(x) = Σ_{j∈R} ϕ^r_j(vj)
for unregulated magnitudes (quartic)
ϕ^u_j(z) = ( (2z − v^max_j − v^min_j) / (v^max_j − v^min_j) )⁴
ϕ^r_j(z) = (1/2) log( e^{2ϕ^1_j(z)} + e^{2ϕ^2_j(z)} + b_j )
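The quartic penalty transcribes directly; it vanishes at the midpoint of the band and reaches 1 at either limit:

```python
def phi_u(z, vmin, vmax):
    """Quartic penalty for an unregulated voltage magnitude z:
    0 at the band midpoint, 1 at z = vmin or z = vmax,
    growing steeply outside the band."""
    return ((2.0 * z - vmax - vmin) / (vmax - vmin))**4
```

For example, with a band of [0.9, 1.1] pu, phi_u is 0 at 1.0, 1 at the limits, and 16 one band-half-width outside, which is the "soft limit" shape sketched on the next slide.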
[Figure: penalty shape vs z, y ticks 0 and 1, x ticks at v^min, Q^min; v^t; v^max, Q^max]
apply augmented Lagrangian method [8][10][22][23]
minimize_x   L_{μi}(x, λi) = μi ϕ(x) − μi λi^T f(x) + (1/2) ∥f(x)∥²₂
• xk is current iterate
we use modified Conjugate Gradient [12]
• exits with pk = −Φk dk, Φk ≻ 0, a sufficient descent direction for L_{μi}(·, λi)
preconditioner:
merge low-stretch spanning trees [4][21], find H̃k based on resulting tree
3.4. real networks
4. implementation
algorithms
• objects and basic numerical routines with NumPy [3] and SciPy [30]
parsers
• also in Python
5. experiments
method comparison (warm start)
[Table: per-case outcomes, marked converged / failed / unavailable]
method comparison (flat start)
[Table: per-case outcomes, marked converged / failed / unavailable]
method comparison (progress on large cases)
[Figure: two panels, max mismatch (pu, log scale) vs iterations; left compares LSNR, VR, PSSE, and PSSE Q-lim; right compares LSNR, VR, and PSSE]
method comparison (perturbing x0)
[Table: aesoSL2014, outcomes of NR, LSNR, and VR over 20 random seeds]
method comparison (perturbing x0)
[Table: ucteWL2002, outcomes of NR, LSNR, and VR over 20 random seeds]
contingency analysis (scaling net injections)
[Figure: two panels, cases solved (%) vs scaling factor (1.0–1.5 and 1.00–1.16), comparing VR, LSNR, and NR]
6. conclusions
line search:
• simple to implement
• not expensive
– only a few function evaluations
∗ easy to parallelize
• big value
flat network homotopy:
to do
formulation and bus type switching
to do
References
[4] Ittai Abraham, Yair Bartal, and Ofer Neiman. Nearly tight low stretch
spanning trees. In Proceedings of the 49th Annual Symposium on
Foundations of Computer Science, pages 781–790, 2008.
[6] Göran Andersson. Modelling and analysis of electric power systems. ETH Zurich, September 2008.
[7] George A. Baker and Peter Graves-Morris. Padé Approximants.
Cambridge University Press, 1996.
[9] P.R. Bijwe and S.M. Kelapure. Nondivergent fast power flow methods. Power Systems, IEEE Transactions on, 18(2):633–638, May 2003.
[11] L.M.C. Braz, C.A. Castro, and C.A.F. Murati. A critical evaluation of step size optimization based load flow methods. Power Systems, IEEE Transactions on, 15(1):202–207, February 2000.
[12] Laurent Chauvier, Antonio Fuduli, and Jean Charles Gilbert. A truncated SQP algorithm for solving nonconvex equality constrained optimization problems, 2001.
[13] Ying Chen and Chen Shen. A Jacobian-free Newton-GMRES(m) method with adaptive preconditioner and its application for power flow calculations. Power Systems, IEEE Transactions on, 21(3):1096–1103, August 2006.
[15] H. Dag and E.F. Yetkin. A spectral divide and conquer method based preconditioner design for power flow analysis. In Power System Technology (POWERCON), 2010 International Conference on, pages 1–6, October 2010.
pattern multifrontal method. ACM Trans. Math. Softw., 30(2):165–195, June 2004.
[21] Michael Elkin, Daniel A. Spielman, Yuval Emek, and Shang-Hua Teng. Lower-stretch spanning trees. In STOC '05: Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 494–503. ACM Press, 2005.
[22] R. Fletcher. Practical Methods of Optimization. A Wiley-Interscience
Publication. Wiley, 2000.
[23] P.E. Gill, W. Murray, and M.H. Wright. Practical optimization. Academic
Press, 1981.
[24] Working Group. Common format for exchange of solved load flow data. Power Apparatus and Systems, IEEE Transactions on, PAS-92(6):1916–1925, November 1973.
[25] Aric A. Hagberg, Daniel A. Schult, and Pieter J. Swart. Exploring network
structure, dynamics, and function using NetworkX. In Proceedings of the
7th Python in Science Conference (SciPy2008), pages 11–15, Pasadena,
CA USA, August 2008.
[26] Reijer Idema, Domenico Lahaye, Kees Vuik, and Lou van der Sluis. Fast Newton load flow. In Transmission and Distribution Conference and Exposition, 2010 IEEE PES, pages 1–7, April 2010.
[27] S. Iwamoto and Y. Tamura. A load flow calculation method for ill-conditioned power systems. Power Apparatus and Systems, IEEE Transactions on, PAS-100(4):1736–1743, April 1981.
[28] R.A. Jabr. Radial distribution load flow using conic programming. Power Systems, IEEE Transactions on, 21(3):1458–1459, August 2006.
[29] R.A. Jabr. A conic quadratic format for the load flow equations of meshed networks. Power Systems, IEEE Transactions on, 22(4):2285–2286, November 2007.
[30] Eric Jones, Travis Oliphant, Pearu Peterson, et al. SciPy: Open source scientific tools for Python, 2001–.
University of Toronto, 2003. https://tspace.library.utoronto.
ca/handle/1807/27321.
[33] Stefano Lucidi and Massimo Roma. Numerical experiences with new truncated Newton methods in large scale unconstrained optimization. In Almerico Murli and Gerardo Toraldo, editors, Computational Issues in High Performance Software for Nonlinear Optimization, pages 71–87. Springer US, 1997.
[34] F. Milano. Continuous Newton's method for power flow analysis. Power Systems, IEEE Transactions on, 24(1):50–57, February 2009.
[36] Walter Murray. Newton-Type Methods. John Wiley & Sons, Inc., 2010.
[38] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, August 2000.
[40] Yousef Saad. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, 2nd edition, April 2003.
[41] J. Scudder and F.L. Alvarado. Step Size Optimization in a Polar Newton Power Flow. Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Engineering Experiment Station, 1981.
[42] J.E. Tate and T.J. Overbye. A comparison of the optimal multiplier in polar and rectangular coordinates. Power Systems, IEEE Transactions on, 20(4):1667–1674, November 2005.
[43] Antonio Trias. The holomorphic embedding load flow method. IEEE PES general meeting, July 2012. http://www.gridquant.com/technology/.
[44] X.F. Wang, Y. Song, and M. Irving. Modern Power Systems Analysis. Power Electronics and Power Systems. Springer, 2008.
[46] Jinquan Zhao, Hsiao-Dong Chiang, Ping Ju, and Hua Li. On PV-PQ bus type switching logic in power flow computation. In Power Systems Computation Conference (PSCC), 2008.