The passage below from Curtis Roads' recently published book, Composing Electronic Music: A New Aesthetic (Oxford, 2015), explains why "live performance and improvisation" aren't among the topics covered:
Live performance has a long tradition and is an important domain of electronic music. Recent texts by Borgo (2005), Barbosa (2008), Jordà (2007), Collins (2007), Dean (2009a), Perkis (2009), Tanaka (2009), Lewis (2009), Oliveros (2009), and Pellegrino (2010), among many others, explore the issues that surround live performance, including extensions into network-based interaction.
In the bad old days of computer music, there was no live performance. Algorithmic composition, sound synthesis, and sound processing could not be realized in real time. Today real-time interactive performance is common. I frequently perform with synthesizers and sound transformation tools, even if it is in the studio and not live onstage. Continued technical research in support of live performance is essential. This involves the design of new electronic instruments and modalities of performance interaction.
The risks associated with improvisation onstage can imbue a live performance with dramatic and emotional impact. A key to success in such performances is virtuosity, a combination of talent and rigorous practice. We hear this in Earl Howard’s Strasser 60 (2009), a tour de force of sonic textures played live on a sampling synthesizer. Behind such a piece are months of sound design and rehearsal to prepare the 20-minute performance.
Richard Devine’s Disturbances (2013), which he performed live on a modular synthesizer at UCSB, is another impressive demonstration of virtuosic control.
When I project my music in a hall, another kind of live performance takes place: sound projection or diffusion. This consists of varying the dynamics, equalization, and spatialization of music that is already composed in order to take advantage of a particular space and its sound system. Virtuosity drives such performances, but this is based as much on intimate knowledge of the music being projected as it is on physical dexterity. The key is knowing precisely when and how to change the projection, keeping in mind the resources of a given hall and its sound system. (For a discussion of the aesthetic significance of sound projection as a performance interpretation, see Hoffman 2013.)
The idea of combining acoustic instruments and electronic tape has a venerable tradition, dating back to the early concerts of the Groupe de Recherche de Musique Concrète, in which Pierre Schaeffer and Pierre Henry collaborated to make Orphée 51 for soprano and tape (Chion 1982). Extending this line, many composers, such as my colleague JoAnn Kuchera-Morin, write mixed pieces that combine a virtuoso instrumental score with electronic sound and interactive processing. Mixed pieces pose many aesthetic challenges, and I admire those who master that difficult medium. For more on live interactive electronic music with instruments, see, for example, Rowe (1993, 2001).
In contrast, my compositional practice is studio based. Playing an instrument in real time is central to my studio work, keeping in mind that “playing” and “instrument” go beyond traditional modalities to encompass interaction with software. I record these (sometimes improvised, sometimes planned) performances, and this is often how I generate the raw material for a composition. Due to the nature of my music, however, which is organized in detail on multiple timescales down to the microscale, it is impossible for me to generate it in real time onstage.
Studio practice affords the ultimate in flexibility and access to the entire field of time on multiple scales. The ability to zoom in and out from the micro to the macro and back, as well as to move forward and backward in time (e.g., composing the end before the beginning, or changing the beginning without modifying the rest of the piece), is a hallmark of studio practice. Sounds can be reversed, their time support freely modified with varispeed and pitch-time changing, or utterly scrambled by granulation. Once the macroform of a composition has been designed, I sometimes finish it by sprinkling it with a filigree of transients—like a dash of salt and pepper here and there in time.
These kinds of detailed studio practices take time. Indeed, a journalist emphasized the glacial timescale of my composition process, which to me is merely the natural pace of the work (Davis 2008). In order to construct an intricate sequence of sound events, I often listen at half speed or even slower. A passage of a few seconds may take a week to design. The process often begins as an improvisation. I try an experiment, listen to it, revise it, then perhaps backtrack and throw it away (deleting the past). I write notes and make a plan for the next improvisation. I reach a dead end and leave a piece for weeks in order to come back with a fresh perspective. My composition process takes place over months or years. Epicurus was composed over the period of 2000–2010. The original sound material in Always (2013) dates to 1999, and the piece was assembled over a period of three years.
Thus it makes no sense for me to pretend to have anything particularly interesting to say about onstage live performance of electronic music. I leave this for others.