
An MEI Score Alignment Client

Andrew Horwitz (ahwitz@gmail.com), Andrew Hankinson (andrew.hankinson@mail.mcgill.ca), Ichiro Fujinaga (ich@music.mcgill.ca)

Distributed Digital Music Archives and Libraries Lab, CIRMMT, Schulich School of Music, McGill University

This project presents a browser-based interface for the simultaneous
presentation of audio and visual representations of an MEI score. If
images or audio are linked in an MEI file, they are automatically
rendered through Diva.js[1] and a custom Web Audio API[2]-based
audio player; when linked images are not included, the score is rendered
using Verovio[3] and audio is automatically generated using MIDI and
linked. The client provides tools for manually linking timepoints in the
audio to objects in the visual representation, and can play back the
audio while highlighting the currently sounding region of the score.

There is currently no combination of open-source programs or libraries
that is able to automatically link a printed score to an audio file via
MEI (sheet music-to-audio alignment). This project provides an
open-source way to manually perform the symbolic-to-audio step of
sheet music-to-audio alignment. The representations supported by this
project are standard representations of aligned media in MEI, and
future automatic sheet music-to-MEI or MEI-to-audio aligners that do
not extend the MEI schema will likely output data in a format that can
be viewed in this client.

Figure 1: Playback of an MEI file with linked Diva.js score.

Playback of Linked Media

The client will automatically load all linked audio data represented by
<timeline> collections of <when> timepoint references with
corresponding <avRef> pointers to audio files.
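As an illustrative sketch, such a linkage might look like the following (element and attribute names follow the description above and common MEI conventions; the identifiers are hypothetical and the fragment has not been schema-validated):

```xml
<!-- Illustrative sketch only; not schema-validated MEI. -->
<timeline xml:id="tl1" avref="recording.mp3">
  <!-- Each <when> marks a timepoint in the referenced audio file;
       @data points at the score element sounding at that time. -->
  <when xml:id="when1" absolute="00:00:01.500" data="#m1"/>
  <when xml:id="when2" absolute="00:00:03.250" data="#m2"/>
</timeline>
```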
The client will automatically render <zone> regions contained within
<facsimile> representations on a Diva.js representation of the score.
Each <facsimile> must contain a <graphic> referencing the filename of
a page in the Diva.js document (Figure 1).
Each <zone> descendant of each <facsimile> must be linked to a <when>
descendant of a <timeline> via @data or @when.
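A corresponding facsimile fragment might be sketched as follows (the <surface> wrapper and the coordinate attributes follow common MEI practice; identifiers are hypothetical and the fragment is not schema-validated):

```xml
<!-- Illustrative sketch only; not schema-validated MEI. -->
<facsimile>
  <surface>
    <!-- @target names a page image of the Diva.js document. -->
    <graphic target="page-1.jpg"/>
    <!-- Here the zone points at a timepoint via @when; alternatively,
         the <when> element could reference the zone via @data. -->
    <zone xml:id="z1" ulx="100" uly="200" lrx="400" lry="260" when="#when1"/>
  </surface>
</facsimile>
```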
When the MEI links to multiple audio files (such as in Figure 2), each
one is loaded; clicking anywhere on any waveform representation
will immediately jump to that point in the score.
When the user hits play, the currently sounding region is highlighted
(Figures 1 and 2), and the highlighted region changes when the
current playback time matches a different <when> timepoint. Jumping
to any point in any audio track will also immediately jump to and
highlight the corresponding region in the visual representation.

Figure 2: Playback of an MEI file rendered with Verovio.
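The highlight update described above boils down to finding the latest <when> timepoint at or before the current playback time. A minimal sketch of that lookup (in Python for brevity; function and variable names are hypothetical, not taken from the client's JavaScript source):

```python
import bisect

def active_region(when_times, playback_time):
    """Index of the <when> whose timepoint is the latest one at or
    before playback_time, or None before the first timepoint.
    when_times must be sorted ascending, in seconds."""
    i = bisect.bisect_right(when_times, playback_time) - 1
    return i if i >= 0 else None

# Hypothetical timepoints parsed from <when>/@absolute values:
whens = [0.0, 1.5, 3.25, 5.0]
print(active_region(whens, 2.0))   # 1: region starting at 1.5 s
print(active_region(whens, 3.25))  # 2: exactly on a timepoint
```

On each playback time update, the client can compare this index with the previously highlighted one and re-highlight only when it changes.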
Generation of Linked Media

An included Python script automatically generates audio from MEI
(via TiMidity[4]) and links all musical elements to a <when>
representing the time at which they sound.
For CMN MEI files without a linked score, Verovio engraves the musical
elements as an SVG score (Figure 2).
The user can pause the currently playing recording, then click and
drag to denote the boundaries of a Diva.js <zone> or select a Verovio
<measure> to link part of the visual representation to the paused
playback time.
A new <when> element is automatically created at the current
playback time; the created Diva.js <zone> or selected Verovio
<measure> is then linked to that <when>.
Any updates to the MEI file can be saved to the user's computer.

References
[1] http://ddmal.github.io/diva.js
[2] https://webaudio.github.io/web-audio-api/
[3] http://www.verovio.org/index.xhtml
[4] http://timidity.sourceforge.net/

Acknowledgements
This research was supported by the Social Sciences and Humanities
Research Council of Canada.
Presented at the 2016 Music Encoding Conference, Montreal, Quebec, 17--20 May 2016