
Implementing audio feature extraction in live electronic music

Jamie Bullock. Birmingham Conservatoire, October 2008.

Abstract

Music with live electronics involves capturing an acoustic input, converting it to an electrical signal, processing it electronically and converting it back to an acoustic waveform through a loudspeaker. The electronic processing is usually controlled during performance through human interaction with potentiometers, switches, sensors and other tactile controllers. These tangible interfaces, when operated by a technical assistant or dedicated electronics performer, can be effective for controlling multiple processing parameters. However, when a composer wishes to delegate control over the electronics to an (acoustic) instrumental performer, physical interfaces can sometimes be problematic. Performers who are unfamiliar with electronics technology must learn to operate and interact effectively with the interfaces provided. The operation of the technology is sometimes unintuitive and fits uncomfortably with the performer’s learned approach to her instrument, creating uncertainty for both performer and audience. The presence of switches or sensors on and around the instrumental performer begs the questions: how should I interact with this, and is it working correctly?

In this thesis I propose an alternative to the physical control paradigm, whereby features derived from the sound produced by the acoustic instrument itself are used as a control source. This approach removes the potential for performer anxiety posed by tangible interfaces and allows the performer to focus on instrumental sound production and the effect this has on the electronic processing. A number of experiments will be conducted through a reciprocal process of composition, performance and software development in order to evaluate a range of methods for instrumental interaction with electronics through sonic change. The focus will be on the use of ‘low level’ audio features including, but not limited to, fundamental frequency, amplitude, brightness and noise content. To facilitate these experiments, a number of pieces of software for audio feature extraction and visualisation will be developed and tested, the final aim being that this software will be publicly released for download and usable in a range of audio feature extraction contexts.

In the conclusion, I will propose a new approach to working with audio feature extraction in the context of live electronic music. This approach will combine the audio feature extraction and visualisation techniques discussed and evaluated in previous chapters. A new piece of software will be presented in the form of a graphical user interface for performers to work interactively using sound as an expressive control source. Conclusions will also be drawn about the methodology employed during this research, with particular focus on the relationship between composition, ‘do-it-yourself’ live electronics and software development as research process.
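To make the four named features concrete, the following Python/NumPy sketch shows one common way of computing them per analysis frame. It is illustrative only and is not the software described in the thesis; the particular definitions chosen here, RMS for amplitude, spectral centroid for brightness, spectral flatness for noise content and a simple autocorrelation peak for fundamental frequency, are standard textbook choices assumed for the example.

import numpy as np

def extract_features(frame, sample_rate):
    """Return low-level features for one audio frame (1-D float array)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / sample_rate)

    # Amplitude: root-mean-square of the time-domain frame.
    rms = np.sqrt(np.mean(frame ** 2))

    # Brightness: spectral centroid, the magnitude-weighted mean frequency.
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)

    # Noise content: spectral flatness (geometric over arithmetic mean of
    # the magnitude spectrum); near 1 for noise, near 0 for tonal sounds.
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12))) / (np.mean(spectrum) + 1e-12)

    # Fundamental frequency: largest autocorrelation peak beyond lag 0.
    # A deliberately simple estimator; practical systems use more robust methods.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    min_lag = max(1, int(sample_rate / 2000.0))  # ignore candidates above ~2 kHz
    peak_lag = min_lag + int(np.argmax(ac[min_lag:]))
    f0 = sample_rate / peak_lag

    return {"amplitude_rms": rms, "brightness_hz": centroid,
            "noisiness": flatness, "f0_hz": f0}

# Example: a 440 Hz sine frame should yield f0 near 440 Hz and low noisiness.
sr = 44100
t = np.arange(2048) / sr
print(extract_features(np.sin(2 * np.pi * 440.0 * t), sr))

In a live-electronics setting, features like these would be computed continuously over overlapping frames of the instrument's signal and mapped onto processing parameters, which is the control paradigm the thesis investigates.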
