
TWO CHARACTERIZATIONS OF INVERSE-POSITIVE MATRICES:

THE HAWKINS-SIMON CONDITION AND THE


LE CHATELIER-BRAUN PRINCIPLE ∗
TAKAO FUJIMOTO† AND RAVINDRA R. RANADE†

∗ Received by the editors on ... Accepted for publication on .... Handling Editor: ...
† Department of Economics, University of Kagawa, Takamatsu, Kagawa 760-8523, Japan (takao@ec.kagawa-u.ac.jp, ranade@ec.kagawa-u.ac.jp).

Dedicated to the late Professors David Hawkins and Hukukane Nikaido


Abstract. It is shown that the Hawkins-Simon condition is satisfied, after a suitable permutation of columns or rows, by any real square matrix which is inverse-positive. One more characterization of inverse-positive matrices is given, concerning the Le Chatelier-Braun principle.

Key words. Hawkins-Simon condition, Inverse-positivity, Le Chatelier-Braun principle.

AMS subject classifications. 15A15, 15A48.

1. Introduction. In economics as well as in other sciences, the inverse-positivity of real square matrices has been an important topic. The Hawkins-Simon condition [7], so called in economics, requires that every principal minor be positive, and they showed the condition to be necessary and sufficient for a Z-matrix (a matrix with nonpositive off-diagonal elements) to be inverse-positive. One decade earlier, this condition was used by Ostrowski [10] to define an M-matrix (an inverse-positive Z-matrix), and was shown to be equivalent to several other conditions. (See Berman and Plemmons [1, Ch. 6] for many equivalent conditions.) Georgescu-Roegen [6] argued that for a Z-matrix it is sufficient to have only the leading principal minors positive, which was also proved in Fiedler and Ptak [3]. Nikaido's two books, [8] and [9], contain a proof based on mathematical induction. Dasgupta [2] gave another proof using an economic interpretation of indirect input.
In this note, the condition requiring that all the leading principal minors be positive is called the weak Hawkins-Simon condition (WHS for short). We prove that any inverse-positive real square matrix satisfies the WHS condition after a suitable permutation of columns (or rows). The proof is simple and uses the elimination method. One more characterization of inverse-positive matrices is offered, which is related to the Le Chatelier-Braun principle in thermodynamics: each element of the inverse of the leading (n − 1) × (n − 1) submatrix is less than or equal to the corresponding element of the inverse of the original inverse-positive matrix.
Section 2 explains our notation, Section 3 presents our theorems and their proofs, and Section 4 gives some numerical examples and remarks.
2. Notation. The symbol Rn means the real Euclidean space of dimension n (n ≥ 2), and Rn+ the non-negative orthant of Rn. A given real n × n matrix A maps Rn into itself. Let aij represent the (i, j) entry of A, while x ∈ Rn stands for a column vector, and xi for the i-th element of x. The symbol (A)∗,j means the
j-th column of A, and (A)i,∗ means the i-th row. We also use the symbol x(i) , which
represents the column (n − 1) vector formed by deleting the i-th element from x.
Similarly, the symbol A(i,j) means the (n − 1) × (n − 1) matrix obtained by deleting the i-th row and the j-th column from A. Likewise, A(,j) shows the n × (n − 1) matrix
obtained by deleting the j-th column from A. Let (A)i,∗(n) be the row (n − 1) vector
formed by deleting the n-th element from (A)i,∗ , and (A)∗(n), j be the column (n − 1)
vector formed by deleting the n-th element from (A)∗, j . The symbol ei ∈ Rn+ means
a column vector whose i-th element is unity with all the remaining entries being zero.
The inequality signs for vector comparison are as follows:
x ≥ y iff x − y ∈ Rn+;
x > y iff x − y ∈ Rn+ − {0};
x ≫ y iff x − y ∈ int(Rn+),
where int(Rn+) means the interior of Rn+. These inequality signs are also used for matrices, with the analogous meaning.


3. Propositions. Let us first note that when a real square matrix A is inverse-positive, i.e., A−1 > 0, this is equivalent to the following property:
Property 1. For any b ∈ int(Rn+), the equation Ax = b has a solution x ∈ int(Rn+).
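As a quick numerical illustration of this equivalence (ours, not part of the original paper), the following sketch, assuming NumPy and using the second example of Section 4 below, checks both that A−1 ≥ 0 and that randomly chosen b ≫ 0 yield solutions x ≫ 0.

```python
# A minimal sketch (ours), assuming NumPy; the sample matrix is the second
# example of Section 4 below.
import numpy as np

A = np.array([[ 1.0, -1.0,  1.0],
              [ 1.0,  1.0, -1.0],
              [-1.0,  1.0,  1.0]])

# Inverse-positivity: A is nonsingular and A^{-1} >= 0 entrywise.
print("A^{-1} >= 0:", np.all(np.linalg.inv(A) >= 0))

# Property 1: every strictly positive b gives a strictly positive solution.
rng = np.random.default_rng(0)
for _ in range(5):
    b = rng.uniform(0.1, 10.0, size=3)     # b in int(R^n_+)
    x = np.linalg.solve(A, b)
    print("b >> 0  ->  x >> 0:", np.all(x > 0))
```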
Now we can prove the following theorem.
Theorem 3.1. Let A be inverse-positive. Then the WHS condition is satisfied
when a suitable permutation of columns (or rows) is made.
Proof. Because of Property 1 above, there should be at least one positive entry in the first row of A: for b ≫ 0 the solution x is strictly positive, so the first equation requires some a1j > 0. Such a column and the first column can be exchanged; we assume the two columns have been so permuted, thus

(3.1) a11 > 0.

Next, we divide the first equation of the system Ax = b by a11, multiply it by ai1, and subtract it from the i-th equation (i ≥ 2), to obtain
     
$$
\begin{pmatrix}
1 & a_{12}/a_{11} & \cdots & a_{1n}/a_{11} \\
0 & a_{22} - a_{12}a_{21}/a_{11} & \cdots & a_{2n} - a_{1n}a_{21}/a_{11} \\
\vdots & \vdots & \ddots & \vdots \\
0 & a_{n2} - a_{12}a_{n1}/a_{11} & \cdots & a_{nn} - a_{1n}a_{n1}/a_{11}
\end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1/a_{11} \\ b_2 - b_1 a_{21}/a_{11} \\ \vdots \\ b_n - b_1 a_{n1}/a_{11} \end{pmatrix}.
$$

In short, the (i, j)-entry of the modified matrix is, for i, j ≥ 2,

$$
\frac{1}{a_{11}}
\begin{vmatrix} a_{11} & a_{1j} \\ a_{i1} & a_{ij} \end{vmatrix}.
$$
Again, because of Property 1, the second row of the modified system has at least one positive coefficient among those of x2, . . . , xn, provided b2 is chosen large enough to make b2 − b1 a21/a11 > 0. We suppose the column containing this element and the second column have been permuted, making

$$
(3.2)\qquad
\frac{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}{a_{11}} > 0.
$$

The same method of elimination is then applied to the remaining (n−1) equations
to obtain the following:
$$
\begin{pmatrix}
1 & a_{12}/a_{11} & \alpha_{13} & \cdots & \alpha_{1n} \\
0 & 1 & \alpha_{23} & \cdots & \alpha_{2n} \\
0 & 0 & \alpha_{33} & \cdots & \alpha_{3n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & \alpha_{n3} & \cdots & \alpha_{nn}
\end{pmatrix}
\cdot
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1/a_{11} \\ \beta_2 \\ \beta_3 \\ \vdots \\ \beta_n \end{pmatrix},
$$

where

$$
\alpha_{33} = \frac{\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}},
\qquad\text{and}\qquad
\beta_3 = \frac{\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{31} & a_{32} & b_3 \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}},
$$

and, in general, for i, j ≥ 3,


$$
\alpha_{ij} = \frac{\begin{vmatrix} a_{11} & a_{12} & a_{1j} \\ a_{21} & a_{22} & a_{2j} \\ a_{i1} & a_{i2} & a_{ij} \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}},
\qquad\text{and}\qquad
\beta_i = \frac{\begin{vmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \\ a_{i1} & a_{i2} & b_i \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}}.
$$

This is because

$$
\alpha_{33} =
\left(\, \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \Big/ a_{11} \right)
\Bigg/
\left(\, \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \Big/ a_{11} \right),
$$

and the other elements can be derived in a similar way. When the bi (i ≥ 3) are large enough, the βi (i ≥ 3) are all positive because

$$
\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} > 0.
$$

Then thanks to Property 1, there exists at least one α3 j (j ≥ 3) which is positive, and
we assume the column having this entry and the third column have been exchanged.
Hence

$$
(3.3)\qquad
\alpha_{33} = \frac{\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}}
{\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}} > 0.
$$

We can proceed in a similar way, eliminating x3 , x4 , . . . , xn−1 , and finally we have

$$
(3.4)\qquad
\frac{|A|}{\left|A_{(n,n)}\right|}\, x_n
= \frac{\left|\, A_{(,n)} \;\; b \,\right|}{\left|A_{(n,n)}\right|}.
$$

From (3.1), (3.2), (3.3), and the subsequent inequalities, we know that all the leading principal minors up to |A(n,n)| are positive, and thus, when bn is large enough, the RHS of (3.4) becomes positive. Then, by Property 1, it follows that

|A| > 0.

Therefore, our theorem is proved for a permutation of columns. For rows, we can
transpose a given matrix and the same proof applies.
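The proof is constructive, and the elimination it describes is easy to run numerically. The sketch below is ours, not from the paper; it assumes NumPy, and the function name is our own. It pivots on a positive entry in each row, records the resulting column permutation, and then checks that the leading principal minors of the permuted matrix are all positive.

```python
# A sketch (ours, not from the paper) of the column-exchange elimination used in
# the proof of Theorem 3.1, assuming NumPy.  For an inverse-positive A it returns
# a column permutation under which all leading principal minors are positive.
import numpy as np

def whs_after_column_permutation(A, tol=1e-12):
    M = np.array(A, dtype=float)
    n = M.shape[0]
    perm = list(range(n))
    for k in range(n):
        # Theorem 3.1 guarantees a positive entry in row k of the reduced system
        # (a StopIteration here would mean A is not inverse-positive).
        j = next(j for j in range(k, n) if M[k, j] > tol)
        M[:, [k, j]] = M[:, [j, k]]
        perm[k], perm[j] = perm[j], perm[k]
        # One elimination step below the pivot, as in the proof.
        M[k+1:, k:] -= np.outer(M[k+1:, k] / M[k, k], M[k, k:])
    P = np.array(A, dtype=float)[:, perm]
    minors = [np.linalg.det(P[:k+1, :k+1]) for k in range(n)]
    return perm, minors

A = np.array([[-2.0, 1.0], [7.0, -3.0]])   # the first example of Section 4
print(whs_after_column_permutation(A))     # columns swapped; minors 1 and 1
```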
Corollary 3.2. When A is a Z-matrix, the WHS is necessary and sufficient
for A to be inverse-positive.
Proof. The necessity follows from Theorem 3.1: when A is a Z-matrix, the positive entry found at each step of the elimination must lie on the diagonal, because the off-diagonal coefficients remain non-positive, and hence no permutation of columns is needed. For the sufficiency, consider the elimination method used in the proof of Theorem 3.1, and assume that b ≫ 0. When A is a Z-matrix satisfying the WHS, it is easy to see that, as the elimination proceeds, a positive entry always appears at the upper left corner, the other coefficients of the top equation remain non-positive, and the RHS of each equation remains positive. So we finally reach an equation in the single variable xn whose coefficients on both sides are positive. Thus xn > 0. Now, moving backward through the triangular system, we find x ≫ 0. Since b ≫ 0 is arbitrary, this shows that the WHS is sufficient.
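For a concrete check of the sufficiency direction (again ours, assuming NumPy), one can take the Z-matrix obtained in the first example of Section 4 and verify that a positive right-hand side produces a strictly positive solution.

```python
# A small check (ours) of Corollary 3.2, assuming NumPy: a Z-matrix whose leading
# principal minors are positive maps every b >> 0 to a solution x >> 0.
import numpy as np

Z = np.array([[ 1.0, -2.0],
              [-3.0,  7.0]])              # Z-matrix; leading minors are 1 and 1
assert all(np.linalg.det(Z[:k, :k]) > 0 for k in range(1, 3))   # WHS holds
b = np.array([1.0, 1.0])                  # any b >> 0 would do
x = np.linalg.solve(Z, b)
print(x, np.all(x > 0))                   # x = (9, 4), strictly positive
```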
Next, we present a theorem which is related to the Le Chatelier-Braun principle.
(See Fujimoto [4].)
Theorem 3.3. Suppose that A is inverse-positive, and the WHS is satisfied.
Then each element of the inverse of A(n,n) is less than or equal to the corresponding
element of the inverse of A.
Proof. The first column of the inverse of A can be obtained as the solution vector x ∈ Rn+ of the system of equations Ax = e1, while the first column of the inverse of A(n,n) is obtained as the solution vector y ∈ Rn−1 of the system A(n,n) y = e1(n). Adding these two systems with some manipulations, we get the following system:
   
$$
(3.5)\qquad
A \cdot
\begin{pmatrix} x_1 + y_1 \\ \vdots \\ x_{n-1} + y_{n-1} \\ x_n \end{pmatrix}
= d \equiv
\begin{pmatrix} 2 \\ 0 \\ \vdots \\ 0 \\ (A)_{n,\ast(n)} \cdot y \end{pmatrix}.
$$

By Cramer's rule, it follows that

$$
x_n = \frac{\left|\, A_{(,n)} \;\; d \,\right|}{|A|}
= 2x_n + \frac{\left|A_{(n,n)}\right|}{|A|} \cdot (A)_{n,\ast(n)} \cdot y .
$$

Thus, if xn = (A−1)n1 > 0, then (A)n,∗(n) · y < 0, and if xn = 0, then (A)n,∗(n) · y = 0, because |A(n,n)|/|A| > 0 thanks to the WHS condition.
For the i-th (i < n) unknown in (3.5), Cramer's rule gives us
$$
x_i + y_i = 2x_i + \frac{(-1)^{\,n+i}\left|A_{(n,i)}\right|}{|A|} \cdot (A)_{n,\ast(n)} \cdot y .
$$
From this, we have

yi = xi + (A−1 )in · (A)n,∗(n) · y.

Therefore we can assert


$$
\begin{cases}
y_i < x_i & \text{when } (A^{-1})_{n1} > 0 \text{ and } (A^{-1})_{in} > 0, \\
y_i = x_i & \text{when } (A^{-1})_{n1} = 0 \text{ or } (A^{-1})_{in} = 0.
\end{cases}
$$

For other columns, we can proceed in a similar way by use of ei .
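A direct numerical check of the theorem (ours, assuming NumPy) can be run on the third example given in Section 4 below: the inverse of A(n,n) is compared entrywise with the leading block of A−1.

```python
# A numerical check (ours) of Theorem 3.3 on the third example of Section 4,
# assuming NumPy.  A(n,n) denotes A with its last row and last column deleted.
import numpy as np

A_inv = np.array([[ 3.,  5.,  8., 11.],
                  [ 1.,  2.,  4.,  7.],
                  [10., 13., 15., 16.],
                  [ 6.,  9., 12., 14.]])  # the inverse given in Section 4
A = np.linalg.inv(A_inv)                  # recover A; its leading minors are positive
B = np.linalg.inv(A[:-1, :-1])            # inverse of A(n,n)
# Theorem 3.3: every entry of (A(n,n))^{-1} is <= the corresponding entry of A^{-1}.
print(np.all(B <= A_inv[:-1, :-1] + 1e-9))   # True
```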


Corollary 3.4. Suppose that the inverse of A has its last column and the bottom
row non-negative, and (A−1 )nn > 0. Then each element of the inverse of A(n,n) is
less than or equal to the corresponding element of the inverse of A.
Proof. Exactly the same proof as that of Theorem 3.3 can be applied because |A(n,n)|/|A| = (A−1)nn, which is positive by assumption.
4. Numerical Examples and Remarks. The first example is a simple M-matrix with its columns exchanged, given as

$$
A \equiv \begin{pmatrix} -2 & 1 \\ 7 & -3 \end{pmatrix},
\qquad\text{and}\qquad
A^{-1} = \begin{pmatrix} 3 & 1 \\ 7 & 2 \end{pmatrix}.
$$

By exchanging two columns, we have


$$
\begin{pmatrix} 1 & -2 \\ -3 & 7 \end{pmatrix},
\qquad\text{and its inverse is}\qquad
\begin{pmatrix} 7 & 2 \\ 3 & 1 \end{pmatrix}.
$$

This satisfies the normal Hawkins-Simon condition. The inverse of the leading 1 × 1 submatrix (1) is (1) itself, and its entry 1 is smaller than the corresponding entry 7 of the inverse above, thus verifying Theorem 3.3.
The next one is not an M -matrix:
   1 1 
1 −1 1 2 2 0
A≡ 1 1 −1  , and A−1 =  0 12 12  .
1
−1 1 1 2 0 12

The inverse of A(3,3) is calculated as

$$
\begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}^{-1}
= \begin{pmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\[2pt] -\tfrac{1}{2} & \tfrac{1}{2} \end{pmatrix}.
$$

The elements (A−1)11, (A−1)12, and (A−1)22 are equal to the corresponding elements of the inverse of A(3,3), because (A−1)32 = 0 and (A−1)13 = 0. The entry in position (2, 1) of the inverse of A(3,3) is, however, −1/2, which is smaller than the corresponding entry (A−1)21 = 0, again confirming Theorem 3.3.
The third example is

$$
A \equiv \begin{pmatrix}
\tfrac{31}{6} & -\tfrac{17}{6} & \tfrac{4}{3} & -\tfrac{25}{6} \\[2pt]
-\tfrac{26}{3} & \tfrac{16}{3} & -\tfrac{4}{3} & \tfrac{17}{3} \\[2pt]
\tfrac{17}{3} & -\tfrac{13}{3} & \tfrac{1}{3} & -\tfrac{8}{3} \\[2pt]
-\tfrac{3}{2} & \tfrac{3}{2} & 0 & \tfrac{1}{2}
\end{pmatrix},
\qquad
A^{-1} = \begin{pmatrix}
3 & 5 & 8 & 11 \\
1 & 2 & 4 & 7 \\
10 & 13 & 15 & 16 \\
6 & 9 & 12 & 14
\end{pmatrix},
\qquad\text{and}\quad |A| = \tfrac{1}{6}.
$$

For this matrix,


¯ 31 ¯
¯ − 17 4 ¯
¯ ¯ ¯ ¯
¯ = 7 > 0, and
6 6 3
¯A(4,4) ¯ = ¯ − 26 16
− 43
¯ 3 3 ¯ 3
¯ 17
− 13 1 ¯
3 3 3

¯· 31 ¸¯
¯ ¯ ¯ − 17 ¯
¯(A(4,4) )(3,3) ¯ = ¯ 6 6 ¯ = 3 > 0.
¯ − 26 16 ¯
3 3

Together with a11 = 31/6 > 0 and |A| = 1/6 > 0, these verify Theorem 3.1. (Note that


$$
\left|\bigl(A_{(4,4)}\bigr)_{(1,1)}\right| =
\begin{vmatrix}
\tfrac{16}{3} & -\tfrac{4}{3} \\[2pt]
-\tfrac{13}{3} & \tfrac{1}{3}
\end{vmatrix}
= -4 < 0.)
$$

On the other hand,


 31 −1  
¡ ¢−1 6 − 17
6
4
3 − 12
7 − 29
14 − 10
7
A(4,4) =  − 26
3
16
3 − 43  =  −2 − 52 −2  ,
17
3 − 13
3
1
3
22
7
19
7
9
7

which confirms Theorem 3.3.


The final example is to verify Corollary 3.4:
 17 2 5
  
− 24 3
− 24 −1 −4 1
A ≡  61 − 13 1
6
 , and A−1 =  2 −3 2  .
23 2 11
24
−3 24
5 4 3

Since
$$
\begin{pmatrix}
-\tfrac{17}{24} & \tfrac{2}{3} \\[2pt]
\tfrac{1}{6} & -\tfrac{1}{3}
\end{pmatrix}^{-1}
=
\begin{pmatrix}
-\tfrac{8}{3} & -\tfrac{16}{3} \\[2pt]
-\tfrac{4}{3} & -\tfrac{17}{3}
\end{pmatrix},
$$

the example conforms to Corollary 3.4.
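The hypotheses and conclusion of Corollary 3.4 for this final example can also be checked numerically; the following sketch (ours, assuming NumPy) does so.

```python
# A quick check (ours) of the final example against Corollary 3.4, assuming NumPy.
import numpy as np

A = np.array([[-17/24,  2/3, -5/24],
              [  1/6,  -1/3,  1/6],
              [ 23/24, -2/3, 11/24]])
A_inv = np.array([[-1., -4., 1.],
                  [ 2., -3., 2.],
                  [ 5.,  4., 3.]])
# Hypotheses: the last column and the bottom row of A^{-1} are non-negative,
# and (A^{-1})_{nn} > 0 (note that A^{-1} itself is not non-negative here).
print(np.all(A_inv[:, -1] >= 0), np.all(A_inv[-1, :] >= 0), A_inv[-1, -1] > 0)
# Conclusion: (A(3,3))^{-1} <= A^{-1} entrywise on the leading 2 x 2 block.
B = np.linalg.inv(A[:-1, :-1])            # equals [[-8/3, -16/3], [-4/3, -17/3]]
print(np.all(B <= A_inv[:-1, :-1] + 1e-9))   # True
```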


Remark 4.1. The Le Chatelier-Braun principle in thermodynamics states that
when an equilibrium in a closed system is perturbed, directly or indirectly, the equi-
librium shifts in the direction which can attenuate the perturbation. As is explained
in Fujimoto [4], the system of equations Ax = b can be solved as an optimization
problem when A is an M -matrix. Thus, a solution x to the system can be viewed as
a sort of equilibrium. A similar argument can be made when A is inverse-positive. In
our case, a perturbation is a new constraint that the n-th variable xn should be kept
constant even after the vector b shifts, destroying the n-th equation. The changes in
other variables may become smaller when the increase of those variables requires xn
to be greater. This is obvious in the case of an M -matrix. What we have shown is
that it is also the case with an inverse-positive matrix or even with a matrix with
positively bordered inverse as can be seen from Corollary 3.4.
Remark 4.2. Much more can be said about sensitivity analysis for the case of
M -matrices. We can also deal with the effects of changes in the elements of A on the
solution vector x. See Fujimoto, Herrero, and Villar [5].

REFERENCES

[1] Abraham Berman and Robert J. Plemmons. Nonnegative Matrices in the Mathematical Sciences. Academic Press, New York, 1979.
[2] Dipankar Dasgupta. Using the correct economic interpretation to prove the Hawkins-Simon-Nikaido theorem: one more note. Journal of Macroeconomics, 14:755–761, 1992.
[3] Miroslav Fiedler and Vlastimil Ptak. On matrices with non-positive off-diagonal elements and positive principal minors. Czechoslovak Mathematical Journal, 12:382–400, 1962.
[4] Takao Fujimoto. Global strong Le Chatelier-Samuelson principle. Econometrica, 48:1667–1674, 1980.
[5] Takao Fujimoto, Carmen Herrero, and Antonio Villar. A sensitivity analysis for linear systems involving M-matrices and its application to the Leontief model. Linear Algebra and Its Applications, 64:85–91, 1985.
[6] Nicholas Georgescu-Roegen. Some properties of a generalized Leontief model. In Tjalling Koopmans (ed.), Activity Analysis of Production and Allocation, John Wiley & Sons, New York, 165–173, 1951.
[7] David Hawkins and Herbert A. Simon. Note: some conditions of macroeconomic stability. Econometrica, 17:245–248, 1949.
[8] Hukukane Nikaido. Convex Structures and Economic Theory. Academic Press, New York, 1963.
[9] Hukukane Nikaido. Introduction to Sets and Mappings in Modern Economics. Academic Press, New York, 1970.
[10] Alexander Ostrowski. Über die Determinanten mit überwiegender Hauptdiagonale. Commentarii Mathematici Helvetici, 10:69–96, 1937.
