Music in which electronic technology, now primarily computer-based, is used to access, generate, explore and configure sound materials, and in which loudspeakers are the prime medium of transmission (see also Computers and music, §II). There are two main genres. Acousmatic music is intended for loudspeaker listening and exists only in recorded form (tape, compact disc, computer storage). In live electronic music the technology is used to generate, transform or trigger sounds (or a combination of these) in the act of performance; this may include generating sound with voices and traditional instruments, electro-acoustic instruments, or other devices and controls linked to computer-based systems. Both genres depend on loudspeaker transmission, and an electro-acoustic work can combine acousmatic and live elements.
SIMON EMMERSON, DENIS SMALLEY
Electro-acoustic music is generally regarded as a body of art-music genres that evolved from compositional techniques and aesthetic approaches developed in Europe, Japan and the Americas in the 1950s. During this decade the growing availability of magnetic tape offered composers a high-quality recording medium which allowed greater experimentation in the manipulation of recorded sounds. This music sought to expand compositional resources beyond the sounds available from instruments and voices, to explore new sound shapes and timbres both by transforming recorded sources and by synthesizing new sounds, and to break the confines of fixed pitch and metrically based approaches to rhythm.
The invention of sound recording has made all sounds available for potential use as musical material: sounds that were previously ephemeral can be captured, and environmental phenomena can be imported into music. Moreover, close exploration of sounding bodies (including instruments) with microphones magnifies and reveals the internal detail of sounds, sometimes with surprising results. Sound recording is itself a transformation process, and recorded sounds may appear in a work without further alteration. Alternatively, recorded sounds can be subjected to transformations ranging from lightly enhanced colorations to alterations so extensive that the transformed sound is but a distant relative of the original. For example, a sound can be analysed into its constituent components, which can then be reconfigured, so that timbre and shape are transformed.
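This analysis–resynthesis principle can be suggested in code. The following is a minimal Python sketch (using NumPy; the function and its parameter values are invented for illustration and describe no historical system): a sound is analysed into its frequency components, all but the strongest are discarded, and the remainder is resynthesized, yielding a distant relative of the original.

```python
import numpy as np

def reconfigure(signal, keep=20):
    """Analyse a sound into frequency components, keep only the
    strongest, and resynthesize: timbre and shape are transformed."""
    spectrum = np.fft.rfft(signal)                 # analysis
    strongest = np.argsort(np.abs(spectrum))[-keep:]
    thinned = np.zeros_like(spectrum)
    thinned[strongest] = spectrum[strongest]       # reconfiguration
    return np.fft.irfft(thinned, n=len(signal))    # resynthesis

sr = 44100
t = np.arange(sr) / sr
source = np.random.randn(sr) * np.exp(-3 * t)  # stands in for a recorded sound
distant_relative = reconfigure(source)
```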
Creating a sound through synthesis requires the composer to design the constituents of a sound and their evolution according to a particular method – for example, building sounds based on waveforms, constructing sounds out of the briefest sound-grains, or specifying the parameters of models based on the behaviour of the voice, instruments and other sounding bodies. Given a viable method, the composer can both emulate existing sounds and design original sounds. However, no device or computer program is capable of realizing every composer’s designs with ease. Furthermore, technology is not neutral: all technological processes result in characteristic acoustic behaviours that influence the musical outcome.
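A minimal sketch of the waveform-based approach, in Python with NumPy (the partial frequencies, amplitudes and decay rates are invented for illustration): sinusoidal partials are summed, each with its own envelope, to design a tone that corresponds to no existing instrument.

```python
import numpy as np

SR = 44100

def additive_tone(freqs, amps, decays, dur=2.0):
    """Sum sinusoidal partials, each with its own exponential decay."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for f, a, d in zip(freqs, amps, decays):
        out += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return out / np.max(np.abs(out))

# An inharmonic, bell-like spectrum: the constituents are freely
# designed rather than taken from an existing instrument.
bell = additive_tone(freqs=[220, 563, 917, 1490, 2113],
                     amps=[1.0, 0.6, 0.4, 0.25, 0.1],
                     decays=[1.5, 2.5, 3.5, 5.0, 7.0])
```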
‘Electro-acoustic’ merely describes the technology used to provide the production tools; it does not describe the sound world or the distinctive idioms made possible by this technology. Although ‘electro-acoustic’ is adopted in this article as the most appropriate generic adjective, other terms have been used either as surrogates or to represent a particular approach to the medium.
In the 1950s elektronische Musik was the term given by a group of German composers, initially working in Cologne, to music on magnetic tape consisting of sounds generated electronically (by means of oscillators, for example) – that is, music whose materials are created synthetically. The composers aimed to use electronic resources to construct timbres, thereby extending control to the structure of sound itself, and they envisaged that a musical structure would be planned before realizing it electronically. These aims only became truly viable with the arrival of the computer.
Musique concrète was created in Paris in 1948 by Pierre Schaeffer (soon joined by Pierre Henry). It grew out of Schaeffer’s experience in radio, but was also inspired by film soundtracks. The word ‘concrète’ originally conveyed the idea that the composer was working directly (concretely) with the sound material, in contrast to the composer of instrumental or vocal music who works indirectly (abstractly) using a symbolic system of notation which represents the sounds to be made concrete by instruments and/or voices. In musique concrète sound materials could be taken from pre-existing recordings (including instrumental and vocal music) and recordings made specially, whether of the environment or with instruments and objects in front of a studio microphone. These source sounds might then be subjected to treatments before being combined in a structure; the compositional process proceeded by experiment. Schaeffer intended that sounds should be perceived and appreciated for their abstract properties rather than being attached to meanings or narratives associated with their sources and causes. The relationship between what sounds signify and their abstract sonic attributes lies at the heart of the subsequent development of the acousmatic music aesthetic. Musique concrète quickly became identified with ‘natural’, real-world sounds, even though concrète theory did not exclude the use of recorded electronic sounds.
In Paris towards the end of the 1950s ‘electro-acoustic music’ was promoted as a better term for representing the cohabitation of the concrète and electronic approaches to sounds. At this stage, however, ‘electro-acoustic’ referred only to music on tape. To confuse matters, as studios spread ‘electronic music’ lost its specialized German connotations and in many countries came to be synonymous with ‘electro-acoustic music’ as a collective term for all approaches to the medium. ‘Electro-acoustic’ gradually became the dominant term, although ‘electronic’ is still in use.
‘Tape music’ means simply that the music in its final form is recorded on magnetic tape. The term is closely associated with works composed in the USA in the early 1950s and has been widely used internationally ever since, although decreasingly now that tape (analogue or digital) is no longer the only final storage medium.
‘Computer music’ entered the vocabulary when the computer became a significant compositional tool; the first attempts at synthesis took place in 1957 at the Bell Telephone Laboratories in Murray Hill, New Jersey. The earliest computer music studios were distinct from (analogue) electronic music studios. Today all electro-acoustic music may be regarded as computer music, and although ‘computer’ may not fully represent the technological means employed, the term continues to be widely used.
Since the late 1980s ‘sonic art’ has been adopted to situate electro-acoustic music within a wider framework. Although electro-acoustic resources are not obligatory for creating sonic art, the term has the advantage of indicating an openness to all types of sound.
In traditional music the listener has visual access to the gestures of sound-making, an experience that is an essential aspect of the listener’s affinity with the human articulation of music. In acousmatic music, which exists in recorded form and is designed for loudspeaker listening, the listener perceives the music without seeing the sources or causes of the sounds. Acousmatic music thus ruptures traditional notions of music reception. In terms of content the genre, playing on its invisibility and liberty, is ideal for exploring the ambiguous and allusive play of causalities, metamorphoses, acoustic imagery and the behaviour of sounds in virtual spaces. The recorded format of acousmatic music allows the composer to combine sounds created at different times and on different systems, and offers the utmost flexibility for juxtaposing and superimposing sounds with attention to the finer details of sound quality. Two aesthetic tendencies have emerged. The more ‘abstract’ approach is concerned with developing discourses of sound types and timbres; the other favours recognizable ‘real-world’ sounds (including other music), a more radiophonic approach, which can border on the documentary, and is sometimes referred to as ‘anecdotal’ music. However, the two tendencies can merge and should not necessarily be regarded as polarized. The argument as to whether anecdotal music is inferior to more abstract music is a continuation of the debates concerning the merits of programme music.
The word ‘acousmatic’ refers to the akusmatikoi, pupils of Pythagoras who, so that they might better concentrate on his teachings, were required to sit in absolute silence while they listened to their master speak, hidden from view behind a screen. In a radio talk in 1955 the French writer Jérôme Peignot used the expression ‘bruit acousmatique’ to describe the separation of a sound from its origins as encountered in musique concrète. Schaeffer in his Traité des objets musicaux (1966) compared the role of the tape recorder to the screen of Pythagoras, emphasizing the concentrated listening facilitated when working in the studio with sound recorded on tape: repeated listening encouraged a better appreciation of the detailed abstract attributes of sounds. In 1974 the composer François Bayle, head of the Groupe de Recherches Musicales, suggested adopting the term as more suitable than ‘electro-acoustic music’ for representing the special conditions of listening to music on tape. Acousmatic music has focussed attention on how we listen to sounds and to music, and what we seek through listening. Consequently, music analysis and music psychology have expanded their fields of inquiry to encompass the wider sound world of electro-acoustic music.
The earliest electric instruments, such as the theremin and ondes martenot, influenced subsequent synthesis and interface designs, but did not assist in establishing new musical genres. John Cage pioneered the use of electronic devices on the concert platform: his Imaginary Landscape series (1939–52) includes the earliest use in live performance of electric sound devices and recordings, sometimes combined with amplified ‘small sounds’ (which would otherwise remain barely audible).
Two approaches to combining electronic resources with live performers emerged in the 15 years after the effective foundation of electro-acoustic music in 1948. ‘Mixed music’ involved combining live instrumental/vocal performer(s) and pre-recorded tape, as in Schaeffer and Henry’s Orphée 53 (1953) for soprano and tape, and Maderna’s Musica su due dimensioni I (1952) for flute, cymbal and tape. Mixed music embraced divergent aesthetics, ranging from works focussing on relationships between ‘extended’ or non-standard instrumental sounds and the sound world opened up by the acousmatic approach, to works that explored the pitch and rhythmic complexities of serialism, with taped electronic sounds acting as an accompaniment to the performer. Stockhausen’s Kontakte (1959–60) embodies elements of both approaches. Composers also surrounded the performer(s) with environmental sounds, sometimes to articulate social and political arguments, as in Nono’s La fabbrica illuminata (1964) for female voice and tape, or as part of more extensive sound environments and installations.
In ‘live electronic music’, sound produced by the performer was modified electronically at the time of production in a manner controlled by the instrumentalist or another performer (often at the mixing console). By the end of the 1960s performance groups typically used devices that changed the spectral characteristics (filtering, ring modulation, flanging and phasing), spatial positioning (panning) and sound envelope shapes, as well as echo and delay systems (based at that time on tape), which made possible the superposition and repetition of material. Many of these devices became more widely available after the introduction of voltage control in the mid-1960s.
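Two of these classic treatments can be sketched offline in Python (an illustration with invented parameter values, not a real-time implementation): ring modulation multiplies the input by a sine carrier, producing sum and difference frequencies, while a tape-style delay sums attenuated, delayed copies of the input onto the original.

```python
import numpy as np

SR = 44100

def ring_modulate(signal, carrier_hz):
    """Multiply the input by a sine carrier, producing sum and
    difference frequencies (the classic ring-modulator colour)."""
    t = np.arange(len(signal)) / SR
    return signal * np.sin(2 * np.pi * carrier_hz * t)

def tape_delay(signal, delay_s, feedback=0.5, repeats=4):
    """Crude echo: delayed, attenuated copies summed onto the
    original, as a tape loop with a feedback path would produce."""
    out = np.copy(signal)
    d = int(SR * delay_s)
    for n in range(1, repeats + 1):
        shifted = np.zeros_like(signal)
        shifted[n * d:] = signal[:len(signal) - n * d] * (feedback ** n)
        out += shifted
    return out

t = np.arange(SR) / SR
flute_like = np.sin(2 * np.pi * 440 * t) * np.exp(-2 * t)
processed = tape_delay(ring_modulate(flute_like, 317), delay_s=0.25)
```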
Both mixed and live electronic music posed questions of microphone type and placement, amplification and balance. Indeed, amplification could in many circumstances be considered a form of transformation, projecting otherwise barely perceptible sounds and altering the spectral balance of the original. In some cases electric and electronic sources replaced the live acoustic instrument and were fed directly to the processing devices.
The analogue processes available to performers and composers in the 1950s and 60s were replaced by digital equivalents as fast microprocessors became available in the 70s and 80s. This same revolution led to the widespread introduction of the personal computer from the early 80s. Until this time computers had been used for synthesis and processing, but only in what was called ‘deferred time’: the composer often waited a considerable period for a process to be completed. The ever-increasing speed of digital devices finally allowed composers to hear the sound as soon as the instruction to create or process it was given. This became known as working in ‘real time’, a term which has tended to replace ‘live’ (as in ‘live electronic music’) in an often confusing manner.
Digital technology has been applied to music in two ways: event processing and signal processing. In event processing (standardized by the adoption of the MIDI protocol after 1983), the music is represented digitally as streams or channels of ‘note events’ specified primarily by their pitch, duration (note on/note off) and dynamic level (velocity of attack). This enables composers to create and store note files to be triggered during performance, activating and controlling sound-production devices such as synthesizers and samplers.
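The note-event representation can be illustrated with a short, hypothetical Python sketch: events specified by pitch, attack velocity and duration are flattened into time-ordered note-on and note-off messages of the kind a MIDI sequencer transmits (0x90 and 0x80 are the standard MIDI status bytes for note on and note off).

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """A MIDI-style 'note event'."""
    pitch: int       # MIDI note number, 0-127 (60 = middle C)
    velocity: int    # attack velocity, 0-127
    start: float     # seconds from the beginning of the sequence
    duration: float  # seconds until the matching note-off

def to_midi_messages(events, channel=0):
    """Flatten note events into timed (time, status, data1, data2) messages."""
    messages = []
    for e in events:
        messages.append((e.start, 0x90 | channel, e.pitch, e.velocity))      # note on
        messages.append((e.start + e.duration, 0x80 | channel, e.pitch, 0))  # note off
    return sorted(messages)  # a sequencer plays these back in time order

phrase = [NoteEvent(60, 100, 0.0, 0.5), NoteEvent(64, 90, 0.5, 0.5),
          NoteEvent(67, 110, 1.0, 1.0)]
for msg in to_midi_messages(phrase):
    print(msg)
```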
The computer emerged as a ‘performer’ on stage in the mid-1980s, when it became relatively simple to describe note relationships in computer terms and to manipulate notes in real time. In effect the computer could assume the role of an improviser. This led directly to ‘interactive composition’, in which performer and computer were, for example, free to choose among possible responses or even to develop event material (most commonly pitches and rhythms) produced at the time of performance according to rules defined in advance by the composer. By the mid-1990s systems had been created that were capable of ‘learning’ and devising such rules of response during the performance itself.
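A toy illustration of such rule-governed interaction, in Python (the response rules themselves are invented): for each pitch heard from the performer, the program chooses freely among responses defined in advance, so that successive performances diverge.

```python
import random

# Rules defined in advance by the composer: for certain pitch classes,
# a set of possible responses among which the machine may choose.
RESPONSES = {
    0: [[0, 4, 7], [0, 3, 7]],       # on C: answer with a major or minor chord
    2: [[2, 9], [2, 5, 9]],
    7: [[7, 11, 14], [7, 10, 14]],
}

def respond(heard_pitch):
    """Choose freely among the pre-defined responses, transposed
    to the register of the heard note."""
    options = RESPONSES.get(heard_pitch % 12)
    if options is None:
        return [heard_pitch + random.choice([-12, 12])]  # echo at the octave
    base = heard_pitch - (heard_pitch % 12)
    return [base + interval for interval in random.choice(options)]

for note in [60, 62, 67, 61]:        # a short 'performance'
    print(note, '->', respond(note))
```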
Some computer systems can ‘track’ the live performer and adapt the electro-acoustic part accordingly. In the first generation of such systems the computer compared the real performance with a stored score, adjusting the accompanying material to fit (with respect to timing and, to a certain extent, accommodating performer errors). By the mid-1990s more flexible options had become available in which the performer could influence, in real time, the dynamic, timing and even timbral constitution of an electro-acoustic part.
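The logic of that first generation can be reduced to a deliberately simplified Python sketch (not any particular system): for each note played, the matcher searches a small window ahead in the stored score, advancing on a match and tolerating wrong or omitted notes.

```python
def follow(score, performed, window=3):
    """Track a performance against a stored score.  For each played
    note, search a small window ahead in the score; advance on a
    match, otherwise treat the note as an error and hold position."""
    position = 0
    cues = []                  # (score index, played note) pairs for
    for note in performed:     # the accompaniment system to act on
        for look in range(window):
            if position + look < len(score) and score[position + look] == note:
                position += look + 1          # omitted notes are forgiven
                cues.append((position - 1, note))
                break
        # no match found: performer error, stay in place
    return cues

score = [60, 62, 64, 65, 67, 69, 71, 72]
performed = [60, 62, 63, 65, 67, 72]          # one wrong note, one omission
print(follow(score, performed))
```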
Slower to develop, because more demanding still of computational speed, was digital signal processing in real time. This technology is concerned with transformations of spectral and temporal aspects of sound quality – the major constituents of what we loosely call timbre. Until the mid-1990s this field was dominated by stand-alone devices which could be controlled in real time by the performer (or by a separate computer). But the increasing speed of personal computers has allowed the implementation of many such processes in real time, making possible the integration of event and signal processing within a single control environment – a development that will influence both studio composition and performance practice.
Of course an ‘event’ cannot exist without a ‘signal’, and vice versa. Nonetheless there remains a clear distinction between traditions of electro-acoustic music-making that retain a pitched and rhythmic (event-dominated) approach and those that are more textural and timbral (signal-dominated) in their discourse. But a central ground has emerged where complex timbral events (more or less pitched) in rhythmic sequences interact with the live musical material.
In works that demand the strict synchronization of the live performers with a fixed electro-acoustic tape part, a click track may be required to enforce adherence to tempo and accurate entry cues. Many musicians object to this timing strait-jacket. However, the development of sound-recording systems based on computer hard disk storage allows ‘sound files’ (which previously would have been in a fixed disposition on tape) to be triggered and even mixed during performance, thus giving performers greater control over timing.
There is no agreement as to what constitutes ‘live’ electro-acoustic music. The presence of a live performer cannot always be detected from a recording; even at a concert there is often no apparent relationship between a visible human gesture and an acoustic result. The human performer may be influencing streams of computer data calculated in real time which, when heard, give no clear indication of human activity. Research in the psychology of sound and music perception may begin to explain what we perceive as ‘human presence’ through our ears alone. There remains a divide between the idealist view that computers may learn to become ‘independent’ performers (and composers) and the argument that computers should be used to extend essentially human performance creativity which may continue to be recognized as such through its sound alone.
The need for human/computer interfaces more appropriate to a truly musical relationship has led to two kinds of devices: those that follow and measure human physical action (‘controllers’), and those that analyse the acoustic result of a performance.
Most early applications of electricity to the creation of music were directed towards electronic versions of acoustic instruments. From 1945, and especially after the introduction of voltage control in the 1960s, ‘control’ devices (the performer interface) were increasingly separated from ‘production’ devices (those related to the synthesis and processing of sound). From this divide emerged instrument controllers which seek to follow human performance actions and translate this information into a form suitable to control quite separate sound-production equipment. The first generation of such interfaces, developed from the mid-1960s, was used to control analogue synthesizers and processors via voltage control.
Given an additional impetus by the introduction of MIDI, a new digital generation of controllers was developed in the 1980s. The most important were based on well-known instrumental types, for example guitar, wind, string and percussion controllers. These devices tracked and measured the physical action that causes sound production (finger position and pressure, breath pressure, strike velocity etc.), and usually had no acoustic sound output of their own. The designers often added the measurement of physical actions that were not significant in the original acoustic instrument – for example, finger pressure (‘aftertouch’) on the wind controller.
Another group comprised more general devices which analysed the sound result of instruments (using pitch-tracking, envelope-following and timbre-analysis techniques), translating the measurements into control information. These could be adapted for use with a variety of instrumental sources, often standard acoustic instruments with minor modifications and attachments. The more sophisticated and detailed such analysis became, the more apparent were the limits of the MIDI protocol in terms of speed and timing. Most devices produce information at rates far faster than MIDI can accurately transmit, and the compression of this information to work within the limits of MIDI leads to a loss of expressive performance detail. As a result there has been considerable pressure for a faster replacement for the MIDI standard.
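An envelope follower, the simplest of these analysis techniques, can be sketched in a few lines of Python with NumPy (the attack and release constants are illustrative). Even this single control curve must be drastically thinned from the audio sample rate before transmission as MIDI control data, which is the compression problem just described.

```python
import numpy as np

SR = 44100

def envelope_follow(signal, attack=0.005, release=0.1):
    """One-pole envelope follower: rectify the input, then smooth it
    with fast-attack / slow-release coefficients.  The output is a
    slowly varying control signal."""
    a = np.exp(-1.0 / (SR * attack))
    r = np.exp(-1.0 / (SR * release))
    env = np.zeros(len(signal))
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = a if x > level else r
        level = coeff * level + (1 - coeff) * x
        env[i] = level
    return env

t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t) * (t < 0.5)   # a note that stops halfway
control = envelope_follow(tone)[::441]           # ~100 control values per second
```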
Performance action controllers have come to dominate the marketplace, usually being more reliable, more universal (as MIDI devices) and cheaper, but they are considerably less sensitive to performance nuance (especially timbral variation). However, controllers based on signal analysis are set to emerge more strongly as faster, more reliable real-time analysis methods become available. A combination of both approaches (performance action and signal analysis) has been used in some devices.
More radical interface designs have been proposed. Some retain the physical feedback familiar to instrumental performers. Surfaces, webs (strings under tension), springs (for example in games-machine paddles and joysticks) and solid objects made of familiar or newly developed elastic substances may be deformed and ‘played’. The gestural energy of touch and pressure is transduced and transmitted (in the same way as with more standard controllers) to the sound-production apparatus. Other interfaces detect physical movement without elastic resistance. Devices have been built into gloves, pads (used under the floor or sometimes on the performer’s body) or installed in furniture or sculpture. There is sometimes not even direct physical contact with the device, as with ultrasonic proximity and movement sensors used extensively, for example, with dance and installations. Some interfaces combining these approaches have been developed for use by composers and performers with special needs.
Since the mid-1960s biophysical interfaces have been developed to control sound-production and modification devices. Originally taken over from medical systems, transducers for the detection of biological variables such as skin resistance and brain activity waves have been used to control sound sources (the biofeedback sound systems of David Rosenboom and works by Alvin Lucier are examples). Although such interfaces remain on the fringes of experimental music, they are rich in possibilities.
Electro-acoustic music is dependent on loudspeakers as the medium of transmission. Therefore the types and qualities of loudspeakers, their ability to project sound and their placement relative to the listener are important factors in the reception of electro-acoustic music: the perception of spatial images and textural detail changes in different listening conditions. This is particularly true of acousmatic music and acousmatic elements of performances, most notably where the composer has paid great attention to detail when working in a high-quality studio environment, which is quite different from that of a concert hall, public space or home. The diffusion of sound in public remains a fragile, variable and imperfect art which has developed for the most part empirically.
The first concerts of electro-acoustic music were French radio broadcasts of musique concrète, and the first public concert was of Schaeffer and Henry’s Symphonie pour un homme seul (played from disc turntables on stage) at the Ecole Normale de Musique in Paris in 1950. Schaeffer recognized the potential blandness of simple loudspeaker projection in a large space, and in 1951 he experimented with using four channels to create a play of perspectives and trajectories at the Théâtre de l’Empire in Paris. Other special systems designed for concert diffusion include the 425 loudspeakers of the Philips Pavilion at the Brussels Exposition in 1958 (Varèse’s Poème électronique and Xenakis’s Concret PH were conceived for this space), and the spherical auditorium with 50 loudspeakers at the Osaka World’s Fair in 1970, used for performances of Stockhausen’s works. The first permanent loudspeaker installation for the diffusion of acousmatic music in concert was the ‘Gmebaphone’ of the Groupe de Musique Expérimentale de Bourges (first concert in 1973), followed by the ‘Acousmonium’ of the Groupe de Recherches Musicales in Paris (1974).
The last two systems served as models for many sound-diffusion installations devoted to concert presentations of electro-acoustic works. Typically, loudspeakers (usually differing in type and frequency coloration) are placed at various distances from listeners in differing perspectives and orientations in order to project the music in a kind of topographical relief. A main solo pair of speakers usually projects a detailed frontal image, more widely spaced pairs permit a broadening of the image and less directional speakers create peripheral atmosphere by reflecting the sound off walls. Speakers can project the sound upwards in order to create ‘height’; small higher-frequency units can carry the sound above listeners, and the lower register can be extended with special bass speakers. The person diffusing the sound adjusts the level of each speaker (or stereo loudspeaker pair or grouping) during performance, combining speakers to expand, dramatize and ‘sonorize’ the environment, and to vary the acoustic image so that the listener is ‘in’ the music rather than ‘viewing’ it from a distance. Sound diffusion ultimately aims to encourage attentive listening and to engage listeners’ imaginations while enhancing the inherent spatial dimensions of the music.
The first electro-acoustic works were monophonic (one-track); some early works on tape were composed on more than one track, permitting concert presentation of the tracks on separate loudspeakers. (For example, a tape recorder with six spools allowed the simultaneous playback of three mono tapes for the first performance of Messiaen’s Timbres durées in 1952.) Stereo stabilized as the norm for acousmatic works in 1959–60, but many early ‘stereo’ works would better be described as two-track rather than possessing the stereo ‘image’ we recognize today. The quadraphonic (four-track) format emerged in the late 1950s and is still used. It requires loudspeakers to be placed in four locations around the listener both to create surround-sound environments and to realize trajectories such as rotating sounds, as in Stockhausen’s Kontakte for tape, piano and percussion. The multi-speaker installation described above is suitable for diffusing works composed in a stereo format, but works may exist in more than two channels, providing an opportunity for more textural separation, greater complexity of concurrent events and a more polyphonic approach to spatial play. In the late 1990s the eight-channel format gained popularity, encouraged by the availability of eight-channel digital tape.
Computer-assisted automated systems, some more suitable for spatializing live electronic music than for diffusing works in fixed recorded format, appeared in the 1980s. Notable were the 4X system developed at IRCAM, used for Boulez’s Répons (a project begun in 1980) to process and spatialize the sound of the six instrumental soloists; and the computer-assisted gestural control system developed by the GRAME studio in Lyons in 1986. Automation permits the pre-programming of spatial settings, trajectories and patterns, and the memorizing of the fader movements created by the person diffusing the sound; means of gestural control other than mixer faders are attractive for live electronic performance.
Computer programs and processors for spatialization have also been designed to be used in the composition process so that the result is encoded in the music itself. An early example was John Chowning’s program to create virtual spaces outside the four speakers of the quadraphonic square and detailed sound-paths around the auditorium, as in his Turenas (1972). In stereo, too, one can create the illusion of sound travelling in three-dimensional space outside the normal limits defined by the physical speaker enclosures, and even above and below the listener. Because such spatial effects depend on loudspeaker quality, a controlled acoustic and a stable listening position, they rarely survive concert diffusion in a public space, but they are likely to become incorporated in home sound systems used in conjunction with television, thereby opening up new possibilities for the electro-acoustic composer.
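The amplitude techniques underlying such illusions can be sketched simply in Python with NumPy (a rough illustration of equal-power panning combined with a crude distance cue, not Chowning’s actual algorithm, which also modelled reverberation and Doppler shift).

```python
import numpy as np

SR = 44100

def pan_path(signal, azimuth):
    """Equal-power stereo panning along a time-varying path.
    `azimuth` runs from -1 (hard left) to +1 (hard right), one value
    per sample; constant total power keeps loudness steady en route."""
    angle = (azimuth + 1) * np.pi / 4          # map [-1, 1] -> [0, pi/2]
    return np.stack([signal * np.cos(angle),   # left channel
                     signal * np.sin(angle)])  # right channel

def with_distance(stereo, distance):
    """Crude distance cue: amplitude falls off with distance,
    suggesting movement beyond the line between the loudspeakers."""
    return stereo / np.maximum(distance, 1.0)

t = np.arange(2 * SR) / (2 * SR)
source = np.sin(2 * np.pi * 330 * np.arange(2 * SR) / SR)
path = np.cos(2 * np.pi * t)                  # sweep right to left and back...
moving = with_distance(pan_path(source, path), 1 + 2 * t)  # ...while receding
```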
The public presentation of acousmatic music has been condemned both for the temporal fixity of musical structures and for the lack of visual interest. The art of diffusion has arisen partly in response to these complaints. Diffusion can radically affect (for better or worse) the impact and atmosphere of acousmatic works. But the use of conventional concert spaces for acousmatic music, with listeners facing forwards in fixed seating, raises traditional visual expectations which by definition cannot be satisfied. Hence there have been many experiments with less traditional settings, sometimes in collaboration with other media. The first open-air diffusion was of part of Schaeffer and Henry’s Symphonie pour un homme seul, with improvised choreography by Merce Cunningham, in Waltham, Massachusetts, in 1952. Max Neuhaus took electro-acoustic music underwater in 1971. A notable early installation was Henry’s Spatiodynamisme, which consisted of 12 tapes triggered automatically in an aleatory manner, contributing to the environment associated with Nicolas Schöffer’s ‘tour cybernétique’ at Saint-Cloud in 1955. Among earlier audiovisual events the light-and-sound installation for Xenakis’s first Polytope at the Montreal Exposition in 1967 was particularly innovatory.
With the arrival of the compact disc, and the consequent elimination of the background noise of the long-playing record, the listener could buy a copy of an acousmatic work which was identical in quality to the original. Thus by the late 1990s acousmatic music in particular was often conceived with private listening in mind. There has been a significant expansion in commercially available repertory, and composers are able to produce their own compact discs immediately on completion of a new work.
Schaeffer founded the first electro-acoustic studio in 1948 under the auspices of Radiodiffusion-Télévision Française, a model that was followed throughout Europe. Initially studios relied on (78 r.p.m.) disc technology. In addition to mixing, the most commonly used processes were speed change, repetition (‘closed groove’ – later, with tape, called ‘looping’) and cutting into the evolution of a sound (most often removing its attack). The introduction of tape machines in 1951 marked the establishment of what became known as the ‘classical tape’ studio. The design of these studios was broadly the same whether sources were recorded and manipulated, as in the French musique concrète tradition, or synthesized in an often laborious process of mixing from simple sources, as in the early years of the Studio für Elektronische Musik of Westdeutscher Rundfunk in Cologne.
Most European national radio networks had channels dedicated to cultural programming, and the establishment of studios under their auspices was an extension of this practice, as well as building on existing radiophonic, sound drama and Hörspiel traditions. These studios had a solid infrastructure of well-maintained recording equipment in a high-quality monitoring environment, to which were added such electronic devices as oscillators, filters and amplifiers. From the start the production of works for concert or broadcast was the studios’ primary mission.
In the USA, where such national or regional institutions did not exist, the earliest studios were assembled by composers for personal and sometimes commercial ventures, or for specific projects. Cage’s Williams Mix (1952) was realized in a temporary studio with assistance from Louis and Bebe Barron’s private studio in New York (operational since 1948); the San Francisco Tape Music Center was originally established by a composers’ collective (1959). The first institutional studios in the USA were set up in university music departments, and some developed strong links with engineering and, later, computer science and artificial intelligence departments. In several cases strong entrepreneurial relationships with industry were established; the studios’ emphasis was sometimes as much on research and technical innovation as on musical ends. These studios laid the foundation for America’s enormous contribution to computer music software.
The following are among the most important early classical tape studios (original names have been used, and the dates are those of the first recognized production of music).
Club d’Essai, Radiodiffusion-Télévision Française, Paris (1948) [now Groupe de Recherches Musicales, part of the Institut National de l’Audiovisuel]
Tape Music Studio, Columbia University, New York (1951) [now Columbia-Princeton Electronic Music Center]
Studio für Elektronische Musik, Westdeutscher Rundfunk, Cologne (1951)
Electronic Music Studio, NHK (Japanese Radio), Tokyo (1953)
Studio di Fonologia, Radio Audizioni Italiane, Milan (1953) [closed 1977]
Studio Eksperymentalne, Polskie Radio, Warsaw (1957)
Elektronmusikstudion (EMS), Sveriges Radio, Stockholm (1957) [now Elektroakustisk Musik i Sverige (EMS), a subsidiary of the Swedish Concert Institute]
Studio de Musique Electronique de Bruxelles (APELAC), Brussels (1958) [closed 1967]
Estudio de Fonología Musical, University of Buenos Aires (1958) [closed 1973]
Electronic Music Studio, University of Toronto (1959)
San Francisco Tape Music Center (1959) [now the Tape Music Center, Mills College, Oakland, California]
Studio voor Elektronische Muziek, University of Utrecht (1961) [now amalgamated with the Instituut voor Sonologie, Royal Conservatory, The Hague]
The classical tape studio relied heavily on manual control of the sound source and sound-processing devices. The advent from the mid-1960s of voltage-controlled devices such as oscillators, filters and amplifiers, which allowed electrical voltages to replace much of the painstaking manual operation, made an immediate impact on studios that concentrated on systematic sound synthesis and processing. At the same time the transistor revolution was leading to increasing miniaturization and the development of the synthesizer as we know it today. The synthesizer could be used both as a versatile generation device in the studio and, more significantly, as a live performance instrument, one that was rapidly developed in popular music and jazz performance.
This second wave of studios extended the diversity of the classical studio. Those orientated towards the French tradition treated the new versatility of sound generation as a potential source of rich and complex timbres over which it became possible to exert more control in terms of timbral evolution. Those with a greater interest in retaining rhythmic, harmonic and melodic approaches developed devices that stored a ‘sequence’ of voltages that could be triggered at a controllable rate or stepped through by the user, and looped if required; hence the term ‘sequencer’ which was later to become an important component of computer control.
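The principle is easily sketched (a Python toy, with arbitrary pitch values standing in for stored control voltages): a fixed row of values is stepped through at a controllable rate and looped.

```python
import itertools
import time

# Stored control values, set by hand as the voltages of an analogue
# sequencer would be (the pitches here are arbitrary).
sequence = [220.0, 261.6, 329.6, 392.0]

def run(steps, rate_hz=4.0, loops=2):
    """Step through the stored values at a controllable rate, looping."""
    for value in itertools.islice(itertools.cycle(steps), loops * len(steps)):
        print(f'control value -> oscillator: {value} Hz')
        time.sleep(1.0 / rate_hz)

run(sequence)
```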
The demand for more sophisticated analogue sequencers led to several relatively short-lived developments in computer applications in the 1970s. These ‘hybrid’ systems, in which a simple low-speed ‘digital-to-analogue’ converter allowed the computer to operate voltage-controlled synthesis and processing systems, were effectively overtaken by the introduction of MIDI systems from 1983. (For the evolution of digital synthesis and sound processing see Computers and music, §II.)
The early evolution of the computer music studio was largely separate from the developments outlined above. A small group of research centres in the 1960s grew, by the 70s, into ‘computer music centres’ with necessarily strong relationships to computer science interests. The personal computer revolution of the late 80s and 90s eventually brought the fruits of these developments to all studios. The integration of these originally distinct studio types parallels the increasing speed of computer systems. Production studios had always worked with immediate sound feedback, and many were willing to integrate the new, more powerful computer tools only as processing times fell to real time, first for control software (sequencers) and then for processing, recording and editing.
Each stage of this evolution has seen a steady shift away from tape and towards computer hard disks as the main storage and manipulation medium, although digital audio tape (DAT) and compact disc (CD) remain common media for storage of the final work. There has been a corresponding trend away from direct physical contact – manipulation of a bank of tape machines, the cutting of tape with a razor blade, the manual setting of values on the front panels of devices – towards a purely visual (on-screen) replication of these same functions, often using icons representing the original physical processes. The physical mixing console has diminished in importance and is increasingly replaced by its virtual representation. The monitoring environment is, however, as important as ever, with even greater demands to exclude unwanted noise and to use loudspeakers that are increasingly accurate over a wide frequency range. This parallels the greater demand for high-quality sound systems for entertainment venues, video games, film sound and television.
The growing use of electro-acoustic resources in education has led not only to the application of computer methods to traditional aspects of Western musical notation, composition and ear training, but also to the introduction of electro-acoustic music in all its varieties to composers, performers and listeners at a much earlier age. An education studio (often completely mobile) consisting of a computer controlling synthesis, sampling and processing devices, possibly with hard disk recording or a small stand-alone multi-track recording facility, is increasingly common in pre-university education.
The popular dance music phenomenon of the late 1980s and early 90s was facilitated by the expansion of home studios using the first generation of computer sequencers, samplers and synthesizers affordable on a personal budget. With respect to technical production standards the difference between ‘amateur’ and ‘professional’ studios has progressively eroded, especially as it became feasible to record and edit on hard disk. This has transformed institutional studios (whether in universities or research centres) from hardware service providers into centres of contact and exchange within larger networks. A new relationship is forming between such studios and composers’ personal facilities.
Finally, dissemination of music over the internet will have considerable consequences for the production and consumption of electro-acoustic (and indeed any kind of) music. The studio of the future may be linked directly to other studios, performance spaces, sound and music libraries, and home sound systems. Although the internet environment is likely to become increasingly ‘noisy’ and difficult to navigate, it may lead to the creation of the ‘virtual studio’ in which a composer can configure an ideal sound-processing and synthesis environment; this need not be located at any one place but may be accessed from anywhere the composer chooses.
The aesthetic approaches associated with electro-acoustic art music have often arisen quite independently in other genres of sonic art: sound effects and soundtracks for film and ‘sound design’ for the theatre, sound environments for site-specific art installations and museum exhibitions, sound sculpture and kinetic art, radio art and imaginative radio drama, sound poetry (text-sound composition) and vernacular music genres such as dance music. Electro-acoustic music may be considered variously as a distinct, autonomous genre; as a component – whether equal in status, dominant, supporting or decorative – in instrumental/vocal music and in multimedia or intermedia arts; and as a sonic practice absorbed, consciously or not, into another genre. Furthermore, it has become increasingly difficult to maintain a clear distinction between electro-acoustic ‘art’ music and vernacular musics that embrace electro-acoustic attitudes. This blurring of differentiation among genres, and sharing of practice across genres, is inevitable as common electro-acoustic means become cheaper and more readily available to individuals.
P. Schaeffer: A la recherche d’une musique concrète (Paris, 1952)
H. Eimert and K. Stockhausen, eds.: Die Reihe, i (1955); Eng. trans. Die Reihe, i (1958) [electronic music issue]
J. Cage: Silence (Middletown, CT, 1961)
K. Stockhausen: Texte zur Musik (Cologne, 1963–89)
P. Boulez: Relevés d’apprenti (Paris, 1966; Eng. trans., 1991)
P. Schaeffer: Traité des objets musicaux (Paris, 1966)
P. Schaeffer and G. Reibel: Solfège de l’objet sonore (Paris, 1966)
H. Davies: Répertoire international des musiques électroacoustiques/International Electronic Music Catalog (Cambridge, MA, 1968)
J.-C. Risset: An Introductory Catalog of Computer Synthesized Sounds (with Sound Examples) (Murray Hill, NJ, 1969); repr. in The Historical CD of Digital Sound Synthesis, Computer Music Currents, xiii (1995)
R. Kostelanetz: John Cage (New York, 1970)
S. Heikinheimo: The Electronic Music of Karlheinz Stockhausen (Helsinki, 1972)
A. Strange: Electronic Music: Systems, Techniques, and Controls (Dubuque, IA, 1972, 2/1983)
‘Les musiques électro-acoustiques’, Musique en jeu, viii (1972)
H. Eimert and H.U. Humpert: Das Lexikon der elektronischen Musik (Regensburg, 1973)
P. Schaeffer: La musique concrète (Paris, 1973)
E. Schwartz: Electronic Music: a Listener’s Guide (New York, 1973)
M. Nyman: Experimental Music (London, 1974)
S. Reich: Writings about Music, ed. K. Koenig (Halifax, NS, and New York, 1974)
J. Appleton and R. Perera, eds.: The Development and Practice of Electronic Music (Englewood Cliffs, NJ, 1975)
O. Revault d’Allonnes: Xenakis: Les Polytopes (Paris, 1975)
M. Chion and G. Reibel: Les musiques électroacoustiques (Aix-en-Provence, 1976)
D. Rosenboom: Biofeedback and the Arts (Vancouver, 1976)
D. Ernst: The Evolution of Electronic Music (New York, 1977)
P. Griffiths: A Guide to Electronic Music (London, 1979)
D. Keane: Tape Music Composition (London, 1980)
G. Mâche and A. Vande Gorne, eds.: Répertoire acousmatique 1948–1980 (Paris, 1980)
M. Chion: La musique électroacoustique (Paris, 1982)
P. Mion, J.-J. Nattiez and J.-C. Thomas: L’envers d’une oeuvre: ‘De natura sonorum’ de Bernard Parmegiani (Paris, 1982)
B. Schrader: Introduction to Electro-Acoustic Music (Englewood Cliffs, NJ, 1982)
M. Chion: Guide des objets sonores: Pierre Schaeffer et la recherche musicale (Paris, 1983)
T. Machover, ed.: ‘Musical Thought at IRCAM’, CMR, i (1984)
B. Truax: Acoustic Communication (Norwood, NJ, 1984)
D. Osmond-Smith, ed.: Luciano Berio: Two Interviews (London, 1985)
C. Roads and J. Strawn, eds.: Foundations of Computer Music (Cambridge, MA, 1985)
M. Chion and F. Delalande, eds.: ‘Recherche musicale au GRM’, ReM, nos.394–7 (1986)
S. Emmerson, ed.: The Language of Electroacoustic Music (London, 1986)
F. Dhomont, ed.: L’espace du son (Ohain, 1988)
R.J. Heifetz, ed.: On the Wires of our Nerves (Cranbury, NJ, 1989)
C. Roads, ed.: The Music Machine (Cambridge, MA, 1989)
‘Musiques électroniques’, Revue Contrechamps, xi (1990)
M. Chion: L’art des sons fixés, ou La musique concrètement (Fontaine, 1991)
F. Dhomont, ed.: L’espace du son, 2 (Ohain, 1991)
P. Nelson and S. Montague, eds.: ‘Live Electronics’, CMR, vi/1 (1991)
A. Vande Gorne, ed.: Vous avez dit acousmatique? (Ohain, 1991)
Analisi musicale II: Trent 1991
D. Kahn and G. Whitehead, eds.: Wireless Imagination: Sound, Radio, and the Avant-Garde (Cambridge, MA, 1992)
E. Ungeheuer: Wie die elektronische Musik ‘erfunden’ wurde… (Mainz, 1992)
I. Xenakis: Formalized Music (Stuyvesant, NY, 1992)
F. Bayle: Musique acousmatique: propositions … positions (Paris, 1993)
P. Manning: Electronic and Computer Music (Oxford, 1993)
S. Emmerson, ed.: ‘Timbre Composition in Electroacoustic Music’, CMR, x/2 (1994)
G. Born: Rationalizing Culture: IRCAM, Boulez, and the Institutionalization of the Musical Avant-Garde (Berkeley, 1995)
P. Griffiths: Modern Music and After (Oxford, 1995)
C. Harris, ed.: ‘Computer Music in Context’, CMR, xiii/2 (1996)
C. Roads and others: The Computer Music Tutorial (Cambridge, MA, 1996)
T. Wishart: On Sonic Art (London, 1996)
J. Chadabe: Electric Sound: the Past and Promise of Electronic Music (Upper Saddle River, NJ, 1997)
L. Camilleri and D. Smalley, eds.: ‘The Analysis of Electroacoustic Music’, Journal of New Music Research, xxvii/1–2 (1998) [whole issue]
K. Stockhausen: ‘Aktuelles’, Die Reihe, i (1955), 57–63; Eng. trans. in Die Reihe, i (1958), 45–51
L. Berio: ‘Poesia e musica – un’ esperienza’, Incontri musicali, iii (1959), 98–111
K. Stockhausen: ‘Zwei Vorträge’, Die Reihe, v (1959), 50–73; Eng. trans. in Die Reihe, v (1961), 59–82
K. Stockhausen: ‘Musik und Sprache’, Die Reihe, vi (1960), 36–58; Eng. trans. in Die Reihe, vi (1964), 40–64
M. Babbitt: ‘Twelve-Tone Rhythmic Structures and the Electronic Medium’, PNM, i/1 (1962–3), 49–79
K. Stockhausen: ‘The Concept of Unity in Musical Time’, Perspectives on Contemporary Music Theory, ed. B. Boretz and E. Cone (New York, 1972), 214–25
J. Harvey: ‘Mortuos plango, vivos voco: a Realisation at IRCAM’, Computer Music Journal, v/4 (1981), 22–4
M. McNabb: ‘Dreamsong: the Composition’, Computer Music Journal, v/4 (1981), 36–53
D. Morrill: ‘Loudspeakers and Performers: some Problems and Proposals’, Computer Music Journal, v/4 (1981), 25–9
J. Appleton: ‘Live and in Concert: Composer/Performer Views of Real-Time Performance Systems’, Computer Music Journal, viii/1 (1984), 48–51
J. Chadabe: ‘Interactive Composing: an Overview’, Computer Music Journal, viii/1 (1984), 22–7; repr. in The Music Machine, ed. C. Roads (Cambridge, MA, 1989), 143–8
B. Vercoe: ‘The Synthetic Performer in the Context of Live Performance’, International Computer Music Conference: San Francisco 1984, 199–200
G. Loy: ‘Musicians Make a Standard: the MIDI Phenomenon’, Computer Music Journal, ix/4 (1985), 8–26
B. Vercoe and M. Puckette: ‘Synthetic Rehearsal: Training the Synthetic Performer’, International Computer Music Conference: San Francisco 1985, 275–8
M. Waisvisz: ‘The Hands, a Set of Remote MIDI-Controllers’, International Computer Music Conference: San Francisco 1985, 313–18
F. Delalande: ‘En l’absence de partition, le cas singulier de l’analyse de la musique électroacoustique’, Analyse musicale, iii (1986), 54–8
J.-C. Risset: ‘Timbre et synthèse des sons’, Analyse musicale, iii (1986), 9–20
R. Dannenberg and H. Mukaino: ‘New Techniques for Enhanced Quality of Computer Accompaniment’, International Computer Music Conference: San Francisco 1988, 243–9
F.R. Moore: ‘The Dysfunctions of MIDI’, Computer Music Journal, xii/1 (1988), 19–28
T. Wishart: ‘The Composition of Vox-5’, Computer Music Journal, xii/4 (1988), 21–7
T. Machover and J. Chung: ‘Hyperinstruments: Musically Intelligent and Interactive Performance and Creativity Systems’, International Computer Music Conference: San Francisco 1989, 86–90
K. Stockhausen: ‘Four Criteria of Electronic Music’, Stockhausen on Music, ed. R. Maconie (London, 1989), 88–111
X. Chabot: ‘Gesture Interfaces and a Software Toolkit for Performance with Electronics’, Computer Music Journal, xiv/2 (1990), 15–27
R.B. Knapp and H. Lusted: ‘A Bioelectric Controller for Computer Music Applications’, Computer Music Journal, xiv/1 (1990), 42–7
J. Pressing: ‘Cybernetic Issues in Interactive Performance Systems’, Computer Music Journal, xiv/1 (1990), 12–25
R. Gehlhaar: ‘SOUND=SPACE: an Interactive Musical Environment’, CMR, vi/1 (1991), 59–72
M. Puckette: ‘Combining Event and Signal Processing in the MAX Graphical Programming Environment’, Computer Music Journal, xv/3 (1991), 68–77
J. Ryan: ‘Some Remarks on Musical Instrument Design at STEIM’, CMR, vi/1 (1991), 3–17
D. Wessel: ‘Instruments that Learn, Refined Controllers, and Source Model Loudspeakers’, Computer Music Journal, xv/4 (1991), 82–6
C. Ten Hoopen: ‘Abstract and Mimetic Qualities in Electroacoustic Music’, Avant garde, vii (1992), 119–32
C. Cadoz, A. Luciani and J.L. Florens: ‘CORDIS-ANIMA: a Modeling and Simulation System for Sound and Image Synthesis – the General Formalism’, Computer Music Journal, xvii/1 (1993), 19–29
L. Camilleri: ‘Metodologie e concetti analitici nello studio di musiche elettroacustiche’, RIM, xxviii (1993), 131–74
S. Emmerson: “‘Live” versus “Real-Time”’, CMR, x/2 (1994), 95–101
D.A. Jaffe and W.A. Schloss: ‘The Computer-Extended Ensemble’, Computer Music Journal, xviii/2 (1994), 78–86
K. McMillen and others: ‘The Zipi Music Interface Language’, Computer Music Journal, xviii/4 (1994), 47–96
M. Grabócz: ‘Narrativity and Electroacoustic Music’, Musical Signification, ed. E. Tarasti (New York, 1995), 535–40
A. McCartney: ‘Inventing Images: Constructing and Contesting Gender in Thinking about Electroacoustic Music’, Leonardo Music Journal, v (1995), 57–66
A. MacDonald: ‘Performance Practice in the Presentation of Electroacoustic Music’, Computer Music Journal, xix/4 (1995), 88–92
H. Davies: ‘A History of Sampling’, Organised Sound, i (1996), 3–11
C. Lippe: ‘Real-Time Interactive Digital Signal Processing: a View of Computer Music’, Computer Music Journal, xx/4 (1996), 21–4
C. Roads: ‘Early Electronic Music Instruments: Time Line 1899–1950’, Computer Music Journal, xx/3 (1996), 20–23
D. Smalley: ‘The Listening Imagination: Listening in the Electroacoustic Era’, CMR, xiii/2 (1996), 77–107
J.A. Paradiso and N. Gershenfeld: ‘Musical Applications of Electric Field Sensing’, Computer Music Journal, xxi/2 (1997), 69–89
D. Smalley: ‘Spectromorphology: Explaining Sound Shapes’, Organised Sound, ii (1997), 107–26