CSC 4103
November 24, 2013

The Operating System of a Multicore Computing Trumpet

As parallel computing improves and processor cores continue to get smaller, the possibilities and applications for computing are expanding into exciting and unique territories. One of these areas is music: not just music production, but acoustic instruments themselves. The purpose of building computing power into acoustic instruments is not to replace human performers, but to expand musical possibilities by providing more control and more options for expression through various interfaces.

In exploring what the operating system of a trumpet outfitted with hundreds or thousands of computing cores would be like, it is important to establish the functions of the whole system. The system should come with an accelerometer, an especially useful piece of hardware for marching bands. It could be used to track players' horn angles at any moment, and even to see whether those angles diminish over time with fatigue. Drum corps in particular use many horn flashes for visual effect, and these moves are usually fairly involved and complex. Horn movements should look consistent across the entire band, and the accelerometer data could be used to work toward that goal. It is also important that a player marches as smoothly as possible, a trait that could be examined via the collected data.

The system should include a microphone, to be used for anything from simple amplification to adding live effects. It should be able to convert notes into the MIDI protocol, and it should have a tuner, which could use either the microphone or a piezo sensor; instant feedback on tuning would be optimal in many cases. The system should also have an enhanced GPS to track position for use in cleaning marching band forms.
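The horn-angle tracking mentioned above can be sketched briefly. This is a minimal illustration, assuming a 3-axis accelerometer that reads only gravity while the horn is momentarily steady, with the x axis pointing along the bell; the axis conventions, units, and function name are all hypothetical.

```python
import math

def horn_angle_degrees(ax, ay, az):
    """Estimate the horn's elevation from a 3-axis accelerometer.

    Assumes the horn is held steady, so the sensor reads only
    gravity (in g), and that the x axis points along the bell.
    Real hardware would need filtering and its own conventions.
    """
    # Angle between the bell axis and the horizontal plane.
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

print(horn_angle_degrees(0.0, 0.0, 1.0))        # level horn: 0.0
print(horn_angle_degrees(0.7071, 0.0, 0.7071))  # bell tipped up ~45 degrees
```

Logging these angles over a rehearsal would reveal whether a player's horn drops as fatigue sets in.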
To be accurate enough, it would likely have to employ a method described by Hentschel, Wulf, and Wagner that uses 3D laser range data and a line feature map. The ability to play the trumpet acoustically is the last major feature the system should provide. This has been accomplished by Honda, but what could make the feature truly useful is allowing MIDI input: keyboards, software, and other exciting interfaces could then be used to control many trumpets in a unique manner.

This final feature presents an interesting computing problem. The synthetic lips that play the horn would need to be calibrated for each note on the horn, and the calibration could vary with the instrument and the conditions. In James Bilitski's method, both air pressure and lip pressure are varied to produce the notes properly; Bilitski uses 8 bits to denote air pressure and 8 bits to represent lip pressure. In this scenario, a worst-case brute-force method could search 65025 combinations, and preliminary experimentation showed that it takes about 4 seconds to test each note. Without an optimized search, it could therefore take over 4335 minutes to find a solution for a single pitch (Bilitski). His solution is a genetic algorithm, which begins with an initial population of solutions and reproduces from those parents, with some offspring mutated. The computing operations needed by this algorithm are random number generation, comparisons, calculation of percent error, and evaluation of a fitness function.

Let's now take a look at accommodating the note-to-MIDI converter. A MIDI message does not actually contain audio; rather, it contains instructions about how to play a note. MIDI is transmitted serially at 31.25 kilobits per second in the form of 10-bit words, even though the data transmission is actually asynchronous. As Yang states, a MIDI message consists of a status byte followed by 0, 1, or 2 data bytes.
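Before turning to the converter hardware, the genetic-algorithm calibration described above can be sketched in code. This is only an illustration of the general approach, not Bilitski's implementation: the pitch-response function below is a toy stand-in for actually driving the synthetic lips and measuring the result, and every parameter value here is invented.

```python
import random

random.seed(7)  # reproducible demo

# Toy stand-in for sounding the horn and measuring the pitch; the
# real system would drive the synthetic lips and read a sensor.
def measure_pitch(air, lip):
    return 0.02 * air + 0.015 * lip  # hypothetical linear response

def percent_error(measured, target):
    return abs(measured - target) / target * 100.0

def calibrate(target_hz, pop_size=20, generations=60, mutation_rate=0.1):
    """Genetic search over 8-bit (air, lip) pressure pairs in the
    spirit of Bilitski's approach: rank candidates by percent error,
    keep the fitter half as parents, breed children, mutate a few."""
    pop = [(random.randrange(256), random.randrange(256)) for _ in range(pop_size)]
    fitness = lambda p: percent_error(measure_pitch(*p), target_hz)
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [a[0], b[1]]                  # crossover: air from a, lip from b
            if random.random() < mutation_rate:   # occasional mutation of one gene
                child[random.randrange(2)] = random.randrange(256)
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=fitness)

air, lip = calibrate(target_hz=4.4)  # toy target within the toy response's range
```

Because the fitter half of each generation survives unchanged, the best candidate's error can only decrease from generation to generation.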
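As a concrete illustration of the converter's job, the sketch below maps a detected frequency to the nearest MIDI note number and wraps it in a three-byte note-on message, one status byte followed by two data bytes, as Yang describes. The helper names and the equal-tempered A4 = 440 Hz mapping are assumptions.

```python
import math

A4_HZ = 440.0   # tuning reference (assumed)
A4_MIDI = 69    # MIDI note number of A4

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a detected frequency."""
    return round(A4_MIDI + 12 * math.log2(freq_hz / A4_HZ))

def note_on(note, velocity=64, channel=0):
    """Three-byte note-on message: status byte, then two data bytes."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

print(freq_to_midi(27.5))                   # 21  (A0)
print(freq_to_midi(4186.01))                # 108 (C8)
print(note_on(freq_to_midi(440.0)).hex())   # '904540'
```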
In terms of hardware, there will be a 12-bit bus from the digital signal processor, wide enough to represent frequencies between 27.5 Hz and 4186.01 Hz. There must also be a handshaking signal between the frequency converter and the MIDI note converter, as well as a MIDI interface.

Hentschel, Wulf, and Wagner's laser-based localization begins by combining a robot's wheel odometry and inertial data with its GPS location (Hentschel 149). Then, using lasers, stationary landmarks are determined, and this data is integrated into the model. For the trumpet, the wheel odometry could be replaced with accelerometer and gyroscope data to determine the direction of movement, and a stationary landmark could be the director's viewing tower. The computation used to accomplish this data fusion involves simple arithmetic, calculation of standard deviations, and matrix operations including inverse, transpose, multiplication, and covariance.

Due to the variety of tasks to be performed by this operating system and the large number of processor cores available to it, I propose a clustering solution: a selected group of processors will be dedicated to only one or a few tasks. There will be a core cluster dedicated to controlling the system at a higher level, an arrangement comparable to an object-oriented approach. For example, suppose the core cluster determines that the system needs to build a map of a player's location over time. The core cluster will send a signal or an instruction to the cluster dedicated to the laser-enhanced GPS and await the coordinates of the next point to be plotted on the map, along with its time stamp. The laser/GPS cluster will handle all the I/O with the GPS, lasers, accelerometer, and gyroscope, and carry out the computations for the Kalman filter and the Monte Carlo Localization discussed by Hentschel. When the computations are complete, the result will be sent back to the core cluster.
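The division of labor just described (the core cluster sends a request; the device cluster does the I/O and computation and replies) can be sketched with ordinary queues standing in for the inter-cluster interconnect. Everything here, from the message shapes to the canned position fix, is hypothetical.

```python
import queue
import threading

# Queues stand in for whatever interconnect the hardware provides.
requests = queue.Queue()
replies = queue.Queue()

def gps_cluster():
    """Device cluster: owns the GPS/laser I/O and the localization
    math, and answers position requests from the core cluster."""
    while True:
        msg = requests.get()
        if msg == "shutdown":
            break
        # Real code would run the Kalman filter / Monte Carlo
        # localization here; we return a canned fix instead.
        replies.put({"x": 12.5, "y": 30.0, "timestamp": msg["t"]})

worker = threading.Thread(target=gps_cluster)
worker.start()

# Core cluster: ask for the next point on the player's map.
requests.put({"cmd": "locate", "t": 42.0})
point = replies.get()
requests.put("shutdown")
worker.join()
print(point)  # {'x': 12.5, 'y': 30.0, 'timestamp': 42.0}
```

The core cluster never touches the sensors directly; it only exchanges small messages, which is what makes the clusters independently programmable.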
I chose this approach because there are a wide variety of input and output devices to manage, along with a great deal of data and computation to perform before a meaningful result can be presented. Another reason is that this approach, due to its intuitive nature, gives the operating system programmers a chance to fully utilize the device's parallel computing abilities with greater ease. Examples of other clusters include a live-effects cluster capable of the Fast Fourier Transform, a synthetic-lip note calibration cluster, a tuning cluster, a frequency detection cluster, a MIDI input cluster, and a MIDI output cluster.

Let us now discuss the file system interface of this operating system. It will need to support some existing file types, using extensions such as .midi, .wav, .mp3, and .txt; but because of the unique nature of the data to be saved, many new file types will have to be established. The operating system should support both shared and exclusive locks. A variety of tasks can be performed on a single file, so a shared lock, which allows multiple processes to acquire read access concurrently, would be very useful; not offering one would sacrifice much of the machine's parallel computing potential.

One may argue that a single-level directory structure could be implemented for simplicity. However, as this trumpet would be a very expensive piece of equipment with all the additional hardware, it could well be loaned out to different users, so that each player could evaluate themselves, or be evaluated by the director, for only the cost of a single computing trumpet. Therefore, a two-level directory is in order, so that each user has his or her own user file directory. A special user directory containing only system files would also be beneficial; otherwise, copies of the system files would have to be made for every user folder, needlessly using up storage space. A few security groups will also be needed.
I suggest a group called "director," which has read and write access to all files; a group called "assistant," which has write access only to its own user folder and read access to a subset of file extensions (specifically, if an assistant is concerned only with, say, the tuning of the band, then they should have read access only to the file extensions related to tuning); and finally a group called "performer," which has read and write access only to its own user folder. Directories shall be implemented using a hash table to increase performance. To avoid the need for a keyboard, I propose that users be able to input a specific sequence of valve combinations on the trumpet as a password. This would call for more sensors, a processor cluster to service them, and a file type for storing sequences of valve combinations.

The computing trumpet will not have a hard drive for storage, as that would be too bulky. The system should instead use flash memory, which is much smaller and simplifies the allocation and scheduling algorithms. The system will implement indexed allocation so that direct access to files is supported; this would be helpful for jumping to the middle of a long file such as a recording, or for reading a single statistic from a performance. The disk scheduling algorithm can simply be first-come, first-served, because flash memory has no seek time.

This operating system is unique in its need for both parallel computing and numerous I/O devices: the degree of parallelism is far beyond that of an average desktop, and the number of I/O devices far beyond that of an average supercomputer. If implemented well, the proposed system could be extremely useful in many educational and professional scenarios. Imagine a piece of music originally composed for, say, piano, now played upon many trumpets, but by a piano player.
Pieces can always be played by instruments other than those intended, of course, but now the expressiveness of the piece can rest in the hands of a single person. You could also integrate real trumpet players into a performance by having a brass choir play the background synthetically while a human performer takes the spotlight, expressing the melody in a way that the machines can't. Imagine using multicore computing instruments in a marching band, where they could serve as an extra set of eyes and ears for the director. He or she would be able to evaluate players' individual performances to give feedback, and gain an even better bird's-eye view of the band's formations. If the hardware included wireless radios, amplification could be accomplished without a performer approaching a microphone; instruments that are not normally loud enough for solos, or brass instruments in a huge arena, could be amplified at the proper moment, and musical effects could be applied live.

But now, let's scale up, way up. Imagine the sound of a million trumpets. Can you? It is a sound that no one has heard before. With computing built into trumpets, it would be possible to bring the sound of many trumpets together from remote locations. On the receiving end could be more trumpets that reproduce what the players elsewhere are putting in, or the sound could simply be gathered and played through speakers. But the excitement of hearing a never-before-heard sound by playing the output acoustically through trumpets is hard to deny. At Louisiana State University, The Golden Band from Tigerland has about 70 trumpets in a 325-piece band that never fails to get a whole stadium on its feet, and yet the stadium could be filled far more with their sound. So scale the venue up to, say, the 2020 Olympics in Tokyo, scale up the number of instruments, and you would have an experience and a sound like no other.
While performing a segment of the opening or closing ceremony with remote participation might seem isolating at first, it could actually be a unifying force for the world as the audience watches and listens to their friends from around the globe participate in the music. Not only could it be unifying internationally, but it could also bring local communities together: local bands and musicians could gather to play on the night of the ceremony, creating a community-wide event. Musically, there is much that could be done. Not only would there be a plentiful volume of sound, but never before would a composer have the opportunity to create harmonies with so many parts, each voice supported by an army of thousands. The Olympic Fanfare and Theme composed by John Williams would be an excellent piece to include in the arrangement. The ceremony could also be an opportunity for various countries to showcase their most talented players or their local music, giving the world the chance to hear, for example, the music of Louis Armstrong, which many around the world may never have heard before.

Expanding new computing possibilities and applications into artistic territories will expand musical opportunity by providing new avenues for expression through new interfaces, and by providing a way to scale up music in a way never attempted before.

Works Cited

Bilitski, James. "A Machine Learning Approach for Automatic Performance of a Trumpet." Sixth International Conference on Computational Intelligence and Multimedia Applications, 16-18 Aug. 2005: 80-85. IEEE Xplore. Web. 24 Nov. 2013.

Hentschel, Matthias, Oliver Wulf, and Bernardo Wagner. "A GPS and Laser-based Localization for Urban and Non-Urban Outdoor Environments." IEEE/RSJ International Conference on Intelligent Robots and Systems, 22-26 Sept. 2008: 149-154. IEEE Xplore. Web. 24 Nov. 2013.

Yang, Runfeng. "Frequency to MIDI Converter for Musical Instrument Microphone System."
2012 2nd International Conference on Consumer Electronics, Communications and Networks (CECNet), 21-23 April 2012: 2597-2599. IEEE Xplore. Web. 24 Nov. 2013.

Special thanks to Dr. Brygg Ullmer and Dr. Stephen Beck for their input regarding the 2020 Olympics.