9.19 Markov Chains
Even if we were able to identify all the rules that characterize a particular musical style (and that's a big if),
there is still a great deal of difference between music that breaks no rules and music that shows taste.
Certainly a critical element of a composer's aural sensibility is a sensitivity to musical context, but none of the
methods discussed so far take the surrounding music into account to determine subsequent choices.
Markov chain techniques are sensitive to their immediately preceding context, so they can create contextually
appropriate outcomes. Markov chains use recently chosen states to influence the probability of subsequent
choices. Another advantage of Markov chains is that the rules driving the process can be readily discovered
from existing compositions. Thus, it is possible to use Markov chains to compose music that is like other
music. Harry Olson (1952) used them to construct musical examples that resembled the works of the
composer Stephen Foster, and Hiller and Isaacson (1959) used them to compose a movement of the Illiac
Suite. The technique is widely used.

Figure 9.35
Chorus from Oh Susanna by Stephen Foster.

9.19.1 Markov Chain Orders


Markov chains are ordered by how much recent history is taken into account when determining the next state.
Following Olson's lead, let's analyze a Stephen Foster song, Oh Susanna, using various orders of Markov
process. By focusing just on the chorus of the tune, we can keep the analysis from becoming too long-winded.
Figure 9.35 shows the chorus, which has 25 notes (not counting rests), labeled R0 to R24.
9.19.2 Zeroth-Order Markov Process
Since the weighted choice technique (see section 9.14.4) takes no account of any previous states, it is defined
as the zeroth-order Markov process, H0. Even simple weighted choice is useful for matching the static event
frequency of data drawn from the real world.
We create the probability density function for Oh Susanna by counting how many times each pitch is visited
as a ratio of the total number of notes:
Pitch:        C      D      E      F      G      A      B
Count:        4/25   5/25   5/25   2/25   5/25   4/25   0
The counts are expressed as a fraction of the total number of notes. A table like this of event occurrences is
called a histogram.
Feeding the Oh Susanna probability density function into the weighted choice technique would generate a
new melody with pitches in roughly the same proportions as Oh Susanna, but the new melody would
probably have little if any of the musical character of the original.
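To make the zeroth-order case concrete, here is a minimal Python sketch of weighted-choice synthesis driven by the histogram above; the pitch counts come from the analysis, while the function and variable names are our own.

import random

# Occurrence counts for each pitch in the 25-note chorus (the histogram above).
histogram = {"C": 4, "D": 5, "E": 5, "F": 2, "G": 5, "A": 4, "B": 0}

def zeroth_order_melody(length):
    pitches = list(histogram)
    weights = list(histogram.values())
    # random.choices draws each note independently with the given weights,
    # so no context is taken into account, as befits a zeroth-order process.
    return random.choices(pitches, weights=weights, k=length)

print(zeroth_order_melody(25))

Each run reproduces the pitch proportions of the original on average but, as noted, little of its character.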
9.19.3 First-Order Markov Process
Since music unfolds in time, the context of each note consists of the note or notes that precede it. If we want
to incorporate context into our analysis, we must study how notes succeed each other in the melody. For each
note, let's tabulate the note that follows it. We can distill from this information what the probability of the
next note will be, given the current note.
Markov Analysis We create a first-order Markov analysis by the following steps:
1. Catalog the note transitions. We pair each note in the melody with the note that follows it.
If we let the first note (F) be the current note, then the second note (also F) is the next note. So the first
transition in the melody is F→F. If we now make note 2 (F) the current note, then the next note is note 3
(A). So the second transition is F→A. The third transition is A→A, and so on. The transition table (table 9.8)
tabulates this information. Each cell stands for a transition from a particular current note to a particular next
note. The row indexes the current note, and the column indexes the next note. Thus, the first transition, F→F,
is indicated by a 1 in row F, column F. The second transition, F→A, is indicated by a 2 in row F, column A.
The third transition, A→A, is indicated by a 3 in row A, column A, and so forth.
2. Tally up the number of transitions in each cell (table 9.9). What we end up with is essentially a set of
zeroth-order Markov histograms in the rows. When we go to generate a melody based on this analysis, we
select a particular histogram row depending upon which note is the current note.
3. Normalize each row into a probability distribution. We want to adjust each histogram so that the sum of
its probabilities equals 1. (If any row sums to 0, we set all elements of that row to 0.) This is shown in table
9.10.
4. Transform each row into a cumulative distribution function by summing each cell with all cells in the
row to its right (table 9.11). The table is finally in a cumulative distribution format we can use to synthesize a
first-order Markov melody. It determines subsequent notes based on how probable the transition is in the
original melody. The method of traversing this function is the same as that described in section 9.14.6.
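These four steps are easy to sketch in code. The following is a minimal Python illustration under our own naming, not the author's implementation: it tallies the transitions (steps 1 and 2), normalizes each row (step 3), and accumulates each row into a cumulative distribution function (step 4).

from collections import defaultdict

def first_order_table(melody):
    # Steps 1-2: catalog and tally each current-note -> next-note transition.
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(melody, melody[1:]):
        counts[current][nxt] += 1
    # Step 3: normalize each row so its probabilities sum to 1.
    # Step 4: accumulate each row into a cumulative distribution function.
    table = {}
    for current, row in counts.items():
        total = sum(row.values())
        cdf, running = [], 0.0
        for nxt, n in row.items():
            running += n / total
            cdf.append((nxt, running))
        table[current] = cdf
    return table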
Markov Synthesis When using table 9.11 to generate a melody, we pick a starting note at random from the
sample space, {C, D, E, F, G, A} (pitch B is ignored because nothing transitions to or from it). Let's make F
the current note. Table 9.11 shows that there is a 50/50 chance that the next note will be F or A. (This may be
easier to follow by reference to table 9.10.) Suppose A is chosen; it is
now the current note. Then there is a 50/50 chance that the next note will be G or A. Suppose G is chosen; it is
now the current note. Now the next note is twice as likely to be E or G as to be A. We
proceed like this until we have enough notes. Figure 9.36 is an example generated automatically from this
data set with starting pitch F. Only the pitches were synthesized; the rhythms were copied from the original to
aid comparison. This method carries a hint of the musical character of the original into the synthesized
melody.
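The synthesis procedure just described can be sketched as a companion to first_order_table above (again our own illustration, not the book's code): draw a random number and walk the current note's cumulative row until it is covered.

import random

def synthesize(table, start, length):
    melody = [start]
    while len(melody) < length:
        current = melody[-1]
        if current not in table:
            break  # dead end: this note had no observed successors
        r = random.random()
        # Take the first entry whose cumulative probability covers r.
        for nxt, cum in table[current]:
            if r <= cum:
                melody.append(nxt)
                break
        else:
            melody.append(table[current][-1][0])  # guard against round-off
    return melody

# Example, using the opening notes quoted earlier (the run may stop early
# if it wanders to a pitch with no observed successors):
fragment = ["F", "F", "A", "A", "A", "G", "G", "E"]
print(synthesize(first_order_table(fragment), "F", 25))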
A first-order Markov process asks, "Given the immediately preceding state xₙ₋₁, what is the likelihood that
the current state R is xₙ?" This is written using conditional probability notation,

P(R = xₙ | xₙ₋₁) = q,

which is read as "Given the condition that xₙ₋₁ is the preceding state, let q be the probability that state R
equals xₙ."
Directed Graph Another way to represent the first-order Markov transition information we have developed is
to show it as a directed graph, which illustrates the flow of possibilities from state to state. States are
represented by circles, and transitions from state to state are represented as arcs (lines with arrows). The
directed graph of the chorus for Oh Susanna is shown in figure 9.37.

Figure 9.36
Oh Susanna chorus synthesized by first-order Markov process.

Figure 9.37
Directed graph of Oh Susanna, first-order Markov analysis.

The diatonic pitches of the scale are shown in circles. The arcs are labeled with their transition probabilities.
When synthesizing a melody, notice that once we leave pitch F, we can never return to it, because no pitch
besides F ever transitions to F. Pitch B is unreachable. Markov synthesis is free to cycle among the remaining
pitches. Because the graph contains cycles, it is a directed cyclic graph (DCG). If there were no cycles, it
would be a directed acyclic graph (DAG).
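Such reachability properties can be checked mechanically from the transition table. The helper below is our own sketch, reusing the first_order_table structure from earlier: it follows arcs outward from a starting pitch and collects every pitch that can be visited.

def reachable(table, start):
    # Follow arcs outward from start, collecting every pitch we can visit.
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for nxt, _ in table.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

For the chorus analysis, B never appears in any reachable set, and once a walk leaves F, the set reachable from any later pitch no longer contains F.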
9.19.4 Second-Order Markov Process
Second-order Markov analysis basically asks, "Given two events in sequence, what is the probability of the
next event?" We express the probability as

P(R = xₙ | xₙ₋₂, xₙ₋₁) = q,

which is read as "Let q be the probability that R equals xₙ, given that xₙ₋₂ and xₙ₋₁ precede it in sequence."
We could represent the first few second-order transitions for Oh Susanna like this:

F:F→A,  F:A→A,  A:A→A,  A:A→G,  A:G→G,  G:G→E, . . .
How many possible second-order transitions are there for the diatonic scale? First-order Markov analysis
involves two notes (current and next) and so has 7² = 49 orderings. Second-order Markov analysis involves
three notes (previous, current, and next), and by the rule of enumeration, there are 7³ = 343 possible orderings.
We still want to represent the transitions as a two-dimensional matrix so that, as before, each row represents a
zeroth-order Markov density function that determines the probability of the next note. We can manage this by
marking the rows as the pair of previous and current pitches, and the columns as the next pitch. For the
diatonic scale, this requires 49 rows and 7 columns, still a pretty big table, but to save room we can leave out
any rows that have no transitions.
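In code, the extension from first to second order amounts to keying each row on the (previous, current) pair instead of a single note; a minimal sketch in the spirit of the earlier examples, again with our own names:

from collections import defaultdict

def second_order_table(melody):
    # Rows are keyed by the (previous, current) pair; columns by the next note.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, cur, nxt in zip(melody, melody[1:], melody[2:]):
        counts[(prev, cur)][nxt] += 1
    # Normalize each row into a probability distribution, as in table 9.12.
    table = {}
    for pair, row in counts.items():
        total = sum(row.values())
        table[pair] = {nxt: n / total for nxt, n in row.items()}
    return table

The first transition quoted above would then appear as second_order_table(melody)[("F", "F")] == {"A": 1.0}.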
The analysis is shown in table 9.12. To conserve space, the transition event order and the normalized
probability distributions are shown in the same table. For example, the listing for the first transition, F:F→A,
reads 2 (1.00), which means the target pitch A is the second note in the melody (counting from 0), and the
probability of this transition is 1.00. Sometimes more than one note shares the same transition; for example,
one transition is shared by notes 9 and 19.
Figure 9.38 shows an example second-order melody synthesized from table 9.12. The melody length and
rhythms are the same as the original to facilitate comparison, although they could also be synthesized from a
Markov analysis. Note the direct quotation of the original in the first six notes. Because it takes more of the
preceding music into account when choosing the next note, melodies created from higher-order Markov
synthesis carry over more of the exact phrasing of the original melody.
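Second-order synthesis follows the same pattern as the first-order sketch, except that the lookup key is the pair of the two most recent notes; once more an illustration under assumed names rather than the author's code.

import random

def synthesize_second_order(table, first, second, length):
    melody = [first, second]
    while len(melody) < length:
        pair = (melody[-2], melody[-1])
        if pair not in table:
            break  # no observed continuation for this pair
        row = table[pair]
        # Weighted draw from the row's probability distribution.
        melody.append(random.choices(list(row), weights=list(row.values()))[0])
    return melody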

If we start the Markov synthesis on a transition other than F:F, we enter the analysis matrix at a different
position, and different patterns are synthesized. Table 9.13 shows a few example note sequences generated
by beginning at different initial transitions in table 9.12.
