https://doi.org/10.5802/crmath.343
Combinatorics / Combinatoire
22451-900, Brazil
b Department of Mathematics, Stockholm University, SE-106 91, Stockholm, Sweden
48824-1027, USA
d National Research University Higher School of Economics, Moscow, Russia
Abstract. The Grassmann convexity conjecture, formulated in [8], gives a conjectural formula for the maximal total number of real zeroes of the consecutive Wronskians of an arbitrary fundamental solution to a disconjugate linear ordinary differential equation with real time. The conjecture can be reformulated in terms of convex curves in the nilpotent lower triangular group. The formula has already been shown to be a correct lower bound and to give a correct upper bound in several low-dimensional cases. In this paper we obtain a general explicit upper bound.
Résumé. The Grassmann convexity conjecture, formulated in [8], suggests a formula for the maximal total number of real zeroes of the consecutive Wronskians of an arbitrary fundamental solution of a disconjugate system of linear ordinary differential equations with real time. The conjecture can be formulated in terms of convex curves in the nilpotent lower triangular group. It has already been proved that the formula gives a correct lower bound and that, in several low-dimensional cases, it gives the correct upper bound. In this article we obtain a general explicit upper bound.
∗ Corresponding author.
1. Introduction
A linear homogeneous differential equation of order n with real time has the form

y^{(n)}(t) + \cdots + c_k(t)\, y^{(k)}(t) + \cdots + c_0(t)\, y(t) = 0; \tag{1}

the real functions c_j : I → R (for 0 ≤ j < n) are called coefficients and are assumed to be continuous; the set I ⊆ R is assumed to be an interval. Such an equation defined on an open interval I is called disconjugate on I (see [2, 6]) if any of its nonzero solutions has at most n − 1 real zeroes on I, counting multiplicities. Any differential equation as above is disconjugate on a sufficiently small interval I ⊂ R.
The Grassmann convexity conjecture, formulated in [8], says that, for any positive integer k satisfying 0 < k < n, the number of real zeroes of the Wronskian of any k linearly independent solutions to a disconjugate differential equation of order n on I is bounded from above by k(n − k), which is the dimension of the Grassmannian G(k, n; R). This conjecture can be reformulated in terms of convex curves in the nilpotent lower triangular group (see Section 2 and also [3–5]). The number k(n − k) has already been shown to be a correct lower bound for all k and n. Moreover, it gives a correct upper bound in the case k = 2 (see [7]). In this paper we obtain a general explicit upper bound for the above maximal total number of real zeroes which, although weaker than the one provided by the Grassmann convexity conjecture, still gives interesting information.
We now formulate a discrete problem which is equivalent to the above Grassmann convexity
conjecture; the equivalence is discussed in Section 2. The statement and proof of our main
theorem are self-contained and elementary.
Given integers k and n with 0 < k < n, consider a collection (v_j)_{1 ≤ j ≤ n} of vectors in R^k or, equivalently, a matrix M ∈ C = R^{k×n}. Set v_j = M e_j, so that v_j is the j-th column of M:

M = \begin{pmatrix} | & & | & & | \\ v_1 & \cdots & v_j & \cdots & v_n \\ | & & | & & | \end{pmatrix}, \qquad v_1, \ldots, v_j, \ldots, v_n \in \mathbb{R}^k.
For two matrices M_0, M_1 ∈ C, we say that there exists a positive elementary move from M_0 to M_1 if there exist j < n and t ∈ (0, +∞) such that M_1 e_j = M_0 e_j + t M_0 e_{j+1} and M_1 e_{j'} = M_0 e_{j'} for j' ≠ j. In other words, the vector v_j moves towards v_{j+1} and the other vectors remain constant:

M_0 = \begin{pmatrix} | & & | & | & & | \\ v_1 & \cdots & v_j & v_{j+1} & \cdots & v_n \\ | & & | & | & & | \end{pmatrix}, \qquad M_1 = \begin{pmatrix} | & & | & | & & | \\ v_1 & \cdots & v_j + t v_{j+1} & v_{j+1} & \cdots & v_n \\ | & & | & | & & | \end{pmatrix}, \qquad t > 0.
We call a sequence (M_s)_{0 ≤ s ≤ ℓ} of k × n matrices convex if, for all s < ℓ, there exists a positive elementary move from M_s to M_{s+1}. The terminology is motivated by the relationship with convex curves in the lower triangular group Lo^1_n; see Section 2 and [7].
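A positive elementary move is a single column operation, so the discrete model is easy to experiment with. Below is a minimal Python sketch (the helper names `elementary_move` and `is_convex` are ours, not from the paper; columns are 0-based):

```python
import numpy as np

def elementary_move(M, j, t):
    """Apply a positive elementary move at column j (0-based):
    v_j <- v_j + t * v_{j+1}; all other columns stay unchanged."""
    assert t > 0 and 0 <= j < M.shape[1] - 1
    M1 = M.copy()
    M1[:, j] += t * M1[:, j + 1]
    return M1

def is_convex(seq, moves):
    """Check that each consecutive pair of `seq` differs by the move (j, t)."""
    return all(np.allclose(elementary_move(seq[s], j, t), seq[s + 1])
               for s, (j, t) in enumerate(moves))

# A convex sequence of 2x3 matrices built from two positive elementary moves.
M0 = np.array([[1.0, 0.0, 2.0],
               [0.0, 1.0, 1.0]])
M1 = elementary_move(M0, 0, 0.5)   # v_1 moves towards v_2
M2 = elementary_move(M1, 1, 2.0)   # v_2 moves towards v_3
print(is_convex([M0, M1, M2], [(0, 0.5), (1, 2.0)]))  # True
```
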
For J = {j_1 < ⋯ < j_k} ⊂ {1, …, n}, define the function m_J : C → R by

m_J(M) = \det\bigl(\operatorname{SubMatrix}(M, \{1, \ldots, k\}, J)\bigr) = \det\bigl(v_{j_1}, \ldots, v_{j_k}\bigr). \tag{2}

We are particularly interested in m_• = m_{\{1, \ldots, k\}}. Define an open dense subset of (k × n)-matrices

C^\ast = \bigcap_{J \subset \{1, \ldots, n\},\ |J| = k} m_J^{-1}\bigl[\mathbb{R} \smallsetminus \{0\}\bigr] \subset C.
(Notice that M is merely a sequence of matrices, not an actual path of matrices; equivalently, s
assumes integer values only.)
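The minors m_J and the membership test for C∗ translate directly into code; here is a hedged sketch (the names `m_J` and `in_C_star` are ours; column indices are 0-based):

```python
import itertools
import numpy as np

def m_J(M, J):
    """m_J(M): determinant of the k x k submatrix on the columns indexed by J."""
    return np.linalg.det(M[:, list(J)])

def in_C_star(M, tol=1e-12):
    """M lies in C* iff every maximal (k x k) minor m_J(M) is nonzero."""
    k, n = M.shape
    return all(abs(m_J(M, J)) > tol
               for J in itertools.combinations(range(n), k))

M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])
print(m_J(M, (0, 1)))   # m_bullet: determinant of the first two columns -> 1.0
print(in_C_star(M))     # all three 2x2 minors are nonzero -> True
```
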
Nicolau Saldanha, Boris Shapiro and Michael Shapiro 447
The Grassmann convexity conjecture, in the discrete model, states that for every convex sequence M we have

\operatorname{nsc}(M) \le k(n - k), \tag{3}

where nsc(M) denotes the number of sign changes in the sequence m_•(M_0), …, m_•(M_ℓ).
The equivalence between formulations is discussed in Section 2. The case k = 1 is easy; the case
k = 2 is settled in [7]; in the same paper we prove that for all k and n, there exists a convex
sequence M with nsc(M) = k(n − k). In this paper we prove the following estimate.
Theorem 1. For any 2 < k < n and any convex sequence of matrices M, we have

\operatorname{nsc}(M) < \frac{(n - k + 1)^{2k - 3}}{2^{k - 3}}. \tag{4}
Inequality (4) appears to be the first known explicit upper bound for nsc(M) in terms of k and n. This bound is a polynomial in n of degree 2k − 3. Furthermore, by a well-known (Grassmann) duality interchanging k ↔ (n − k), one can additionally obtain

\operatorname{nsc}(M) < \min\Bigl( 2^{3-k}(n - k + 1)^{2k - 3},\ 2^{3-n+k}(k + 1)^{2(n-k) - 3} \Bigr).
We know, however, that the bound in Theorem 1 is not sharp; see Remark 5. Our current best guess is that the Grassmann convexity conjecture indeed holds for all 0 < k < n; this is additionally supported by our recent computer-aided verification of the latter conjecture in the case k = 3, n = 6.
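For concreteness, the bound of Theorem 1, its Grassmann-dual refinement, and the conjectured value k(n − k) can be tabulated for small parameters. A short sketch (the function names are ours; both arguments must satisfy 2 < k and 2 < n − k for the dual form to apply):

```python
def bound(k, n):
    """Upper bound of Theorem 1: nsc(M) < (n-k+1)^(2k-3) / 2^(k-3)."""
    return (n - k + 1) ** (2 * k - 3) / 2 ** (k - 3)

def dual_bound(k, n):
    """The better of the bound and its Grassmann-dual with k <-> n-k."""
    return min(bound(k, n), bound(n - k, n))

# (k, n): explicit bound vs. the conjectured sharp value k(n-k)
for k, n in [(3, 6), (3, 7), (4, 8)]:
    print(k, n, dual_bound(k, n), k * (n - k))
```

For (k, n) = (3, 6) this gives 64 against the conjectured 9, illustrating how much room remains between Theorem 1 and the conjecture.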
In this section we present an alternative continuous version of the Grassmann convexity conjecture, already discussed in previous papers, and prove that it is equivalent to the discrete version described in the introduction. We follow the notation of [5] and [7].
Consider the nilpotent Lie group Lo^1_n of real lower triangular matrices with diagonal entries equal to 1. Its Lie algebra is lo^1_n, the space of strictly lower triangular matrices. For J ⊂ {1, …, n}, |J| = k, define m_J : Lo^1_n → R as in Equation (2); thus m_•(L) = m_{\{1, \ldots, k\}}(L) is the determinant of the lower-left k × k minor of L. For 1 ≤ j < n, let l_j = e_{j+1} e_j^⊤ ∈ lo^1_n be the matrix whose only nonzero entry is (l_j)_{j+1, j} = 1. Write λ_j(t) = exp(t l_j).
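Since l_j² = 0, the exponential series truncates and λ_j(t) = I + t l_j. A quick numerical sanity check (a sketch; the helper names are ours, and indices are 0-based):

```python
import numpy as np

def l(j, n):
    """l_j = e_{j+1} e_j^T: single nonzero entry (l_j)_{j+1,j} = 1 (0-based j)."""
    L = np.zeros((n, n))
    L[j + 1, j] = 1.0
    return L

def lam(j, t, n):
    """lambda_j(t) = exp(t l_j) = I + t l_j, since l_j @ l_j = 0."""
    return np.eye(n) + t * l(j, n)

n = 4
assert np.allclose(l(0, n) @ l(0, n), 0)   # l_j is 2-step nilpotent
# one-parameter subgroup law: lambda_j(s) lambda_j(t) = lambda_j(s + t)
assert np.allclose(lam(1, 2.0, n) @ lam(1, 3.0, n), lam(1, 5.0, n))
```
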
A smooth curve Γ : [a, b] → Lo^1_n is convex if there exist smooth positive functions β_j : [a, b] → (0, +∞) such that, for all t ∈ [a, b],

(\Gamma(t))^{-1}\, \Gamma'(t) = \sum_j \beta_j(t)\, l_j.
We prove in [5] that all zeroes of m_• ∘ Γ : [a, b] → R are isolated (and of finite multiplicity). Let nz(Γ) be the number of zeroes of m_• ∘ Γ, counted without multiplicity. The Grassmann convexity conjecture states that nz(Γ) ≤ k(n − k) for all convex Γ. (In fact, following the ideas presented in [7, proof of Theorem 2], one can show that the maximal numbers of zeroes of m_• ∘ Γ counted with and without multiplicity actually coincide.)
Let S_n be the symmetric group with the standard generators a_j = (j, j+1), 1 ≤ j < n. Let η ∈ S_n be the permutation with the longest reduced word; its length equals n(n − 1)/2. Using the notation of [5], we define for each permutation σ ∈ S_n a subset Pos_σ ⊂ Lo^1_n whose dimension equals the length of σ, as follows. Let σ = a_{i_1} ⋯ a_{i_l} be a reduced word: if L ∈ Pos_σ then there exist unique positive t_1, …, t_l such that L = λ_{i_1}(t_1) ⋯ λ_{i_l}(t_l). Conversely, if t_1, …, t_l > 0 then λ_{i_1}(t_1) ⋯ λ_{i_l}(t_l) ∈ Pos_σ. The set Pos_η ⊂ Lo^1_n is the semigroup of totally positive matrices, an open subset (see [1]). The boundary of Pos_η is stratified as

\partial\, \mathrm{Pos}_\eta = \bigsqcup_{\sigma \in S_n,\ \sigma \neq \eta} \mathrm{Pos}_\sigma

(the subsets Pos_σ ⊂ Lo^1_n and the above stratification are discussed in [5, Section 5]).
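For n = 3 the longest element is η = a_1 a_2 a_1 (length 3 = n(n − 1)/2), so a matrix of Pos_η is a product λ_1(t_1) λ_2(t_2) λ_1(t_3) with t_1, t_2, t_3 > 0. One can check the parametrization numerically; in the sketch below (helper names ours, 0-based j) we verify that the relevant entries and the 2×2 minor of the product are positive, as total positivity requires:

```python
import numpy as np

def lam(j, t, n=3):
    """lambda_j(t) = I + t e_{j+1} e_j^T (0-based j)."""
    L = np.eye(n)
    L[j + 1, j] = t
    return L

t1, t2, t3 = 1.0, 2.0, 0.5
L = lam(0, t1) @ lam(1, t2) @ lam(0, t3)
# symbolically: L = [[1, 0, 0], [t1+t3, 1, 0], [t2*t3, t2, 1]]
a, b, c = L[1, 0], L[2, 0], L[2, 1]
# positivity of the entries and of the 2x2 lower-left minor a*c - b = t1*t2
print(a > 0 and b > 0 and c > 0 and a * c - b > 0)  # True
```
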
3. Ranks
sequences of vectors, this means that multiplying each vector v_j by a positive real number d_j does not affect the rank. For k = 2, a regular prerank function pr : C^∗ → [0, 2(n − 2)] ∩ N was constructed in [7]. In the next section, particularly in the proof of Lemma 4, we construct prerank functions different from the actual rank function, which is currently unknown. Thus, prerank functions serve as a tool for proving upper bounds on the actual rank.
Proof. Assuming the first item, let us construct a regular prerank function. For M ∈ C^∗, set
Regularity follows from the observation that if D ∈ R^{n×n} is a positive diagonal matrix and (M_s) is a convex sequence, then so is (M_s D). The second item trivially implies the third. Assuming the third item, we obtain a prerank function for C^∗. To prove the fourth item, given a convex sequence, restrict the above prerank function to the image of M to obtain a prerank function for M.
Given a convex sequence M = (M_s)_{0 ≤ s ≤ ℓ} and a prerank function pr for M, we obtain that nsc(M) ≤ pr(M_0) − pr(M_ℓ). Thus, the fourth item implies the first, completing the proof. □
4. Step lemma
The following lemma provides the induction step necessary to settle Theorem 1.
ṽ_{s,j} = v_{s,j}/ω(v_{s,j}), so that ṽ_{s,j} ∈ H_1. For j < n, set w_{s,j} = ω(v_{s,j+1})(ṽ_{s,j+1} − ṽ_{s,j}) ∈ H_0. For a given s, the sequence of vectors (w_{s,j})_{1 ≤ j ≤ n−1} defines a matrix M̃_s ∈ C^- = R^{(k−1)×(n−1)}:

\tilde{M}_s = \begin{pmatrix} | & & | & & | \\ w_{s,1} & \cdots & w_{s,j} & \cdots & w_{s,n-1} \\ | & & | & & | \end{pmatrix}.

(The normalized vectors ṽ_{s,j}, the vectors w_{s,j} and the matrix M̃_s will be the main ingredients in the current proof.) We assume that m_•(M_s) ≠ 0, which implies m_•(M̃_s) ≠ 0.
Define the prerank as pr(M_s) = pr_I(M_s) + pr_{II}(M_s) · r_−, where

\operatorname{pr}_I(M_s) = \operatorname{pr}_-\bigl(\tilde{M}_s\bigr), \qquad \operatorname{pr}_{II}(M_s) = \sum_{1 \le j < n} \bigl[\, \omega(v_{s,j})\, \omega(v_{s,j+1}) < 0 \,\bigr] \cdot j.
Here we use Iverson notation, i.e., [ω(v_{s,j}) ω(v_{s,j+1}) < 0] equals 1 if ω(v_{s,j}) ω(v_{s,j+1}) < 0 and 0 otherwise. In particular, if j > n − k then [ω(v_{s,j}) ω(v_{s,j+1}) < 0] = 0. We therefore have

\operatorname{pr}_I(M_s) \in [0, r_-] \cap \mathbb{Z}, \qquad \operatorname{pr}_{II}(M_s) \in \left[0, \frac{(n-k+1)(n-k)}{2}\right] \cap \mathbb{Z},

and thus

\operatorname{pr}(M_s) \in [0, r_0] \cap \mathbb{Z}.
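The sign-change part pr_II is straightforward to compute. A sketch in Python, where ω is taken to be the linear functional given by the last coordinate (our choice, purely for illustration) and the weight j follows the 1-based indexing of the text:

```python
import numpy as np

def omega(v):
    """A linear functional on R^k; here the last coordinate (our choice)."""
    return v[-1]

def pr_II(M):
    """pr_II(M_s) = sum over 1 <= j < n of [omega(v_j)*omega(v_{j+1}) < 0] * j,
    where v_j denotes the j-th column of M (1-based, as in the text)."""
    n = M.shape[1]
    return sum(j for j in range(1, n)
               if omega(M[:, j - 1]) * omega(M[:, j]) < 0)

M = np.array([[1.0, 0.0, 2.0, 1.0],
              [1.0, -1.0, 3.0, -2.0]])   # omega values: 1, -1, 3, -2
print(pr_II(M))  # sign changes at j = 1, 2, 3, so pr_II = 1 + 2 + 3 = 6
```
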
Next we need to verify that pr(M_s) is a prerank function. Recall that there exists a positive elementary move from M_s to M_{s+1}, which we call the s-th move in M. There are two kinds of positive elementary moves: if sign(ω(v_{s,j})) = sign(ω(v_{s+1,j})) for all j, then we say the s-th move is of type I; otherwise it is of type II.
First consider s of type II. Let j be such that v s+1, j = v s, j + t v s, j +1 , t > 0. By taking H in general
position and introducing, if necessary, intermediate points we may assume that sign(m • (M s )) =
sign(m • (M s+1 )). For j ′ ̸= j , we have v s+1, j ′ = v s, j ′ implying that sign(ω(v s+1, j ′ )) = sign(ω(v s, j ′ )).
Therefore we have

\operatorname{sign}\bigl(\omega(v_{s+1,j})\bigr) = \operatorname{sign}\bigl(\omega(v_{s+1,j+1})\bigr) = \operatorname{sign}\bigl(\omega(v_{s,j+1})\bigr) = -\operatorname{sign}\bigl(\omega(v_{s,j})\bigr).

For j' = j − 1, we get

\bigl[\, \omega(v_{s,j'})\, \omega(v_{s,j'+1}) < 0 \,\bigr] = 1 - \bigl[\, \omega(v_{s+1,j'})\, \omega(v_{s+1,j'+1}) < 0 \,\bigr];
Thus, in all cases we obtain that prI I (M s+1 ) < prI I (M s ) which implies that pr(M s+1 ) ≤ pr(M s )
independently of the values of prI (M s ) and prI (M s+1 ).
Consider now s of type I, so that pr_{II}(M_s) = pr_{II}(M_{s+1}). Again, let j be such that v_{s+1,j} = v_{s,j} + t v_{s,j+1}, t > 0. For j' ≠ j, we obtain v_{s+1,j'} = v_{s,j'} and therefore ṽ_{s+1,j'} = ṽ_{s,j'}. Thus

\tilde{v}_{s+1,j} = \frac{\omega(v_{s,j})}{\omega(v_{s+1,j})}\, \tilde{v}_{s,j} + \frac{t\, \omega(v_{s,j+1})}{\omega(v_{s+1,j})}\, \tilde{v}_{s,j+1}, \qquad \frac{\omega(v_{s,j})}{\omega(v_{s+1,j})} > 0;
implying that ves+1, j is an affine combination of ves, j and ves, j +1 . For j ′ ∉ { j − 1, j }, we obtain
w s+1, j ′ = w s, j ′ . Finally, notice that w s+1, j is a positive multiple of w s, j and w s+1, j −1 is a positive
linear combination of w_{s,j−1} and w_{s,j}. More exactly,

w_{s+1,j} = \frac{\omega(v_{s,j})}{\omega(v_{s+1,j})}\, w_{s,j}, \qquad w_{s+1,j-1} = \frac{\omega(v_{s+1,j})}{\omega(v_{s,j})}\, w_{s,j-1} + t\, w_{s,j}.
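The two identities for w_{s+1,j} and w_{s+1,j−1} hold for any linear functional ω and are easy to verify numerically. A sketch with a random type I move, ω taken to be the last coordinate (our choice), and 0-based indices:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = lambda v: v[-1]

k, n, j, t = 3, 5, 2, 0.7
V = rng.uniform(1.0, 2.0, size=(k, n))   # all omega(v_j) > 0: the move is type I

def tilde(V):
    """Normalized columns v~_j = v_j / omega(v_j)."""
    return V / V[-1, :]

def w(V):
    """w_j = omega(v_{j+1}) * (v~_{j+1} - v~_j), for j = 0, ..., n-2."""
    Vt = tilde(V)
    return (Vt[:, 1:] - Vt[:, :-1]) * V[-1, 1:]

V1 = V.copy()
V1[:, j] += t * V[:, j + 1]              # positive elementary move at column j

r = omega(V[:, j]) / omega(V1[:, j])     # ratio omega(v_{s,j}) / omega(v_{s+1,j})
assert np.allclose(w(V1)[:, j], r * w(V)[:, j])
assert np.allclose(w(V1)[:, j - 1], w(V)[:, j - 1] / r + t * w(V)[:, j])
assert np.allclose(w(V1)[:, 0], w(V)[:, 0])   # columns away from j are untouched
```

Note that the identities use only the linearity of ω (so that ω(v_{s+1,j}) = ω(v_{s,j}) + t ω(v_{s,j+1})); any other linear functional would pass the same check.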
References
[1] A. Berenstein, S. Fomin, A. Zelevinsky, "Parametrizations of Canonical Bases and Totally Positive Matrices", Adv. Math. 122 (1996), no. 1, p. 49-149.
[2] W. A. Coppel, Disconjugacy, Lecture Notes in Mathematics, vol. 220, Springer, 1971.
[3] V. Goulart, N. Saldanha, "Combinatorialization of spaces of nondegenerate spherical curves", https://arxiv.org/abs/1810.08632, 2018.
[4] V. Goulart, N. Saldanha, "Stratification by itineraries of spaces of locally convex curves", https://arxiv.org/abs/1907.01659v1, 2019.
[5] V. Goulart, N. Saldanha, "Locally convex curves and the Bruhat stratification of the spin group", Isr. J. Math. 242 (2021), no. 2, p. 565-604.
[6] P. Hartman, "On disconjugate differential equations", Trans. Am. Math. Soc. 134 (1968), no. 1, p. 53-70.
[7] N. Saldanha, B. Shapiro, M. Shapiro, "Grassmann convexity and multiplicative Sturm theory, revisited", Mosc. Math. J. 21 (2021), no. 3, p. 613-637.
[8] B. Shapiro, M. Shapiro, "Projective convexity in P^3 implies Grassmann convexity", Int. J. Math. 11 (2000), no. 4, p. 579-588.